We're looking at MailCleaner as a replacement for Maia, our current clustered solution, since Maia is long dead and no longer supported.
I was wondering whether MailCleaner can comfortably handle 60,000-70,000 domains in its database, and what implications (if any) that would have for performance. We have already added the 60,000 domains, but as you can imagine, loading the UI and rendering the domains page is incredibly slow (not that we would touch that section very often, but it's less than ideal at this stage).
We have no problems chucking $hardware++ at this, which would be the intention.
Is there a suggested limit on the number of domains a MailCleaner system can hold, and if so, what is it?
I can't come close to your numbers (about 1,000 domains here), but we found the biggest problem was disk IO.
Anyway, we have been using it for a few years now and we are very happy.
Not sure that is of much help.
Thank you for that. We have some large numbers to play with, and the post was definitely helpful with regard to disk IO.
I'm wondering whether the IO load could be spread across a clustered setup with round-robin DNS, for both redundancy and performance, much like our existing cluster does today.
I'd be interested to hear whether anybody else runs MailCleaner at the scale we intend, roughly half a million to a million emails per day. It's important to get this right in advance.
We run three nodes in a cluster with round-robin DNS. We did this to spread the IO load and it worked well; we then moved to SSDs and, as you can imagine, saw the biggest improvement of all.
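For anyone unfamiliar with the setup: round-robin DNS just means publishing several A records under one name, so resolvers rotate through them and inbound SMTP connections spread across the nodes. A minimal sketch in BIND zone-file syntax (the mx.example.com name and 192.0.2.x addresses are placeholders, not our real setup):

```
; Hypothetical zone fragment: three MailCleaner nodes behind one name.
; BIND serves the A records in rotating order, spreading inbound SMTP load.
mx    IN  A   192.0.2.10   ; mailcleaner-1
mx    IN  A   192.0.2.11   ; mailcleaner-2
mx    IN  A   192.0.2.12   ; mailcleaner-3
@     IN  MX  10 mx.example.com.
```

Note this only balances new connections; it doesn't give failover on its own, since dead nodes stay in the rotation until you pull their records.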
I have attached disk IO and memory graphs for one of our VMs; it handles around 60,000 messages a day.
I would be happy to scale it up if we needed to, it clusters really well.
One point on clustering: if you intend to use different physical locations, the DB communication between nodes needs to be fast. Otherwise you will see errors when you try to release messages, plus other niggles. Nothing that caused us a nightmare, but the admin overhead went up. We now run everything in one DC and replicate off-site in case of disaster.
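If I remember rightly, MailCleaner keeps its cluster state and quarantine in MySQL, so the cross-DC chatter above is essentially database replication. A minimal sketch of one-way asynchronous MySQL replication for an off-site disaster-recovery copy (server IDs and settings are illustrative, not MailCleaner's actual shipped config):

```ini
# Hypothetical my.cnf fragments for a primary -> DR replica pair.

# Primary (production DC): enable the binary log so changes can be shipped.
[mysqld]
server-id = 1
log_bin   = mysql-bin

# Replica (DR site): unique server-id, relay log, and read-only so nothing
# writes to the copy except the replication thread.
[mysqld]
server-id = 2
relay_log = relay-bin
read_only = ON
```

Because this is asynchronous, replica lag doesn't slow down message processing at the primary, which is why it tolerates a slow inter-site link far better than running a live cluster across two DCs.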
Attachments: count1.png (29.24 KiB), count.png (44.74 KiB)