Zimbra sold to Telligent

So Zimbra has been sold to Telligent. The VAR Guy has a short analysis of the sale.

It’s clear that Zimbra has not been the success and fit VMware hoped for. Looking at Red Hat’s success in the marketplace, it would be cool if Zimbra were to Open Source everything and focus on installation, migration, integration and support. Obviously those areas also offer opportunities for partners. However, Telligent looks like a typical proprietary software company, with a modus operandi closer to Oracle or Microsoft than to Red Hat. So I would be very pleasantly surprised if they went the Red Hat route, but I’m not getting my hopes up.

So where does that leave us? As it stands, there are three thriving Open Source Exchange alternatives: Zarafa, OX and SOGo. There’s plenty of room in the marketplace for those three to peacefully coexist and grow. Any Zimbra partner or developer looking for greener pastures will find three alternatives to Zimbra that keep getting stronger and better.

Postfix Pre-queue content-filter connection overload

What causes a “Pre-queue content-filter connection overload”?

From the logwatch docs: “This sometimes occurs in reaction to a portscan or broken bots, or when postfix is overloaded, due to excessive header_checks/body_checks content filtering, or even too few smtpd processes to service the demand.”

So it can be caused by multiple things: maybe your Postfix server is too slow or lacks the resources to handle all the incoming connections (a tiny VM, for example), maybe some botnet is scanning, hammering or DoS’ing your server, or maybe your Postfix config is not as efficient as it could be and is eating up all the available resources.
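If you suspect a resource problem, a rough first check is to compare the configured process limit with the number of smtpd processes actually running. The two commands below are a sketch assuming a typical Linux install with the Postfix tools in your PATH:

postconf default_process_limit   # the per-service process limit Postfix uses by default
pgrep -c smtpd                   # how many smtpd processes are running right now

If the second number sits at or near the limit for long stretches, the server simply cannot keep up with the incoming connections.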

You can find these ghost connections by grepping your mail log.
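For example (a sketch: it assumes the log lives at /var/log/mail.log, which differs per distribution, and that the dropped connections show up as unknown[unknown]):

grep -F 'lost connection after CONNECT from unknown[unknown]' /var/log/mail.log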

The reason for “unknown” instead of an IP address is that the connection was dropped almost immediately after being established. That caused the entry in kernel space to be removed before Postfix was able to process the connection. So by the time Postfix got around to handling the connection, all information about it, such as the IP address, was already gone. The only thing Postfix could do was log a connection from an unknown source.

What to do about “Pre-queue content-filter connection overload”

If a logwatch report mentions “Pre-queue content-filter connection overload” then you may want to read:
http://www.postfix.org/STRESS_README.html
http://www.postfix.org/postconf.5.html#smtpd_timeout

And tweak your Postfix config with the suggestions from the STRESS_README document.

For example, in /etc/postfix/master.cf:
– specify a higher “maxproc” field for all pre-queue smtpd processes
– specify a zero “maxproc” field in all policy server entries

and in /etc/postfix/main.cf (both files are sketched below):
– specify smtpd_timeout = ${stress?10}${stress:60}s
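To make that concrete, here is a minimal sketch under stated assumptions: the filter address 127.0.0.1:10025, the “policyd” spawn entry and the maxproc values are all made-up examples, and column values such as the chroot field differ per distribution, so adapt them to your own configuration:

# /etc/postfix/master.cf
# service   type  private unpriv  chroot  wakeup  maxproc command + args
# raise maxproc only as far as your content filter can keep up
smtp        inet  n       -       n       -       300     smtpd
    -o smtpd_proxy_filter=127.0.0.1:10025
# hypothetical policy daemon entry; maxproc 0 removes the process limit
policyd     unix  -       n       n       -       0       spawn
    user=nobody argv=/usr/local/bin/policyd

# /etc/postfix/main.cf
# 10 second timeout under stress, 60 seconds otherwise; Postfix 3.0 and
# later also accept the newer ${stress?{10}:{60}}s form
smtpd_timeout = ${stress?10}${stress:60}s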

Now reload Postfix and keep an eye on your mail log and Logwatch reports; refine if necessary. If this does not solve the connection overload because your Postfix server is being hammered by thousands of bots, then you may want to look at postscreen, which is part of Postfix 2.8 and later.
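Reloading is a one-liner, and enabling postscreen is mostly a master.cf change. The snippet below follows the layout of the example in Postfix’s POSTSCREEN_README and is only a sketch; the chroot column and any existing smtp entry on your system may differ:

postfix reload

# /etc/postfix/master.cf: postscreen answers on port 25 and hands
# well-behaved clients to a pass-through smtpd
smtp      inet  n       -       n       -       1       postscreen
smtpd     pass  -       -       n       -       -       smtpd
dnsblog   unix  -       -       n       -       0       dnsblog
tlsproxy  unix  -       -       n       -       0       tlsproxy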

Disable this message in Logwatch

If you prefer that Logwatch not report this, you can disable it in the Logwatch configuration by setting the option “$postfix_ConnectionLostOverload” to 0 (that’s a zero):
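For example (the override path is an assumption; on many distributions local Logwatch overrides go in /etc/logwatch/conf/services/postfix.conf rather than in the default file mentioned below):

$postfix_ConnectionLostOverload = 0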

More info can be found in /usr/share/logwatch/default.conf/services/postfix.conf