Wednesday, August 30, 2006

Dave Neary (bolsh) posted an interesting blog entry about the RBL and blocked addresses. We are a site that blocks all of France, but not because of any "Freedom Fries" issues. :) The City of Largo has tasked us with providing a hostile-free environment, and part of that is very aggressive cleaning of email. People like to complain when they know they can, and we were getting flooded with email that was 'offending' people, which for some is one step away from a lawsuit.

The RBL has helped us block ISPs that are not actively going after spammers on their networks. The theory is that the user community will generate so many support calls about blocked email (like you!) that the ISP will hire enough people to manage the network better.

As for blocking entire countries, there is a reason for this. Spam appliances do a forward and reverse DNS check on the sending site as the first test. If that passes, filters are applied and the mail is released into our GroupWise server. Our observations showed that certain countries were sending out a great deal of spam that was getting through more than others, and France (.fr) unfortunately was one of the top senders. Perhaps only about 5 of our 750 users receive email from other countries, so we had to make the decision to block and work out exceptions as they come up. A sketch of that first DNS check appears below.

I took a live snapshot of our spam appliance running, so you can see the scope of our problems. I am sure many of you are dealing with the same thing.
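To make that first-pass test concrete, here is a minimal Python sketch of a forward-confirmed reverse DNS check. This is my illustration of the general technique, not the appliance's actual implementation, and the IP address is a documentation placeholder:

```python
import socket

def fcrdns_ok(ip: str) -> bool:
    """Forward-confirmed reverse DNS: look up the PTR name for the
    IP, then check that the name resolves back to the same IP."""
    try:
        hostname, _aliases, _addrs = socket.gethostbyaddr(ip)           # reverse lookup
        _name, _aliases, addresses = socket.gethostbyname_ex(hostname)  # forward lookup
    except (socket.herror, socket.gaierror):
        return False  # no PTR record, or the PTR name does not resolve
    return ip in addresses

# 203.0.113.7 is a reserved documentation address and will fail the check.
print(fcrdns_ok("203.0.113.7"))
```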
Tuesday, August 22, 2006
GNOME By The Numbers
Federico mentioned in one of his blog posts the high number of GNOME users on thin clients and multi-user systems.
As software developers, some of you may not have considered a few things when designing, debugging and testing on single-user systems:
- Most business environments are open around 250 days a year (52 x 5 - 10 holidays). We have approximately 750 users. Suppose a software package ships with a known issue that is considered minor because each user only hits it once a year. If that issue requires a call to our support staff, we instantly have 3 calls a day (750 / 250).
- If a condition is left in a software package that locks an NFS mount, or causes any other issue that requires a reboot, we have to schedule it and kick people off the server at a time that was not pre-scheduled. It's not just a matter of rebooting your local computer: there are multiple users on the servers 24 hours a day, and it's always a bad time for someone.
- If a software package is leaking memory, let's assume 50MB a day, your first thought might be that 50MB is no big deal: memory is cheap. But on our Evolution server we run about 210 concurrent users, so a 50MB-per-user daily leak is over 10GB of memory lost every day (see the sketch after this list).
- File handles can also become a problem if they leak. It really helps to think of hundreds of people running a software package rather than just one person. At a certain point you run into Linux file-handle limits and have to tune them upward, and often that tuning shouldn't be required at all; the real issue is handles not being closed and released.
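Here is the arithmetic from those bullets made explicit, as a small Python sketch. The constants mirror the figures in the text:

```python
# Back-of-the-envelope numbers from the list above.
USERS = 750
BUSINESS_DAYS = 52 * 5 - 10          # about 250 open days a year
CONCURRENT_USERS = 210               # Evolution server
LEAK_MB_PER_USER_PER_DAY = 50

# A "minor" once-a-year-per-user bug becomes a daily support load.
calls_per_day = USERS / BUSINESS_DAYS
print(f"support calls per day: {calls_per_day:.1f}")      # 3.0

# A small per-user leak multiplies across concurrent sessions.
leaked_gb = CONCURRENT_USERS * LEAK_MB_PER_USER_PER_DAY / 1024
print(f"memory leaked per day: {leaked_gb:.1f} GB")       # ~10.3
```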
Friday, August 18, 2006
Largo Application Server Design
I have had the pleasure of meeting Nat and Harish in person and giving them a quick tour of our network and design. Since I have contact with so many more of you, I thought I would explain our design a bit and how it relates to GNOME and software.
There are three basic designs for deploying software to users; Largo has deployed Application Servers.
Desktop Computer
In this design, a physical computer sits at the desk of each user, each loaded with an operating system and software. This is by far the most expensive and support-intensive way to deploy. Technology churn forces frequent upgrades and changes, and patches need to be applied to each machine; even when pushed from a central server, it's support intensive.
Departmental Servers
In this design, groups of users log in with thin clients. It's very similar to a desktop computer in that most of the software is located on one server, except that multiple people are logging into the same computer. The problem with this design is that the same packages still live on multiple departmental servers, and each copy requires upgrades. It's also sometimes difficult to host several software packages on one machine and upgrade one of them without library upgrades and patches breaking the others. And a user can still lock up a package or an NFS mount, forcing a reboot at the expense of kicking everyone off the server.
Application Servers
In this design, each major application is given its own server. This lets you select the best operating system for each package, and upgrades can be installed without worrying about anything else failing. Upgrades are very simple: you install the new version alongside the old one, then change the launch script to point at it. New sessions pick up the new version instantly, while sessions already running keep the old version until it is no longer in use.
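One common way to implement that instant cut-over is an atomic symlink flip: each release installs side by side and the launch script runs whatever "current" points at. This is a minimal sketch under assumed paths, not Largo's actual launch script:

```python
import os

APP_ROOT = "/opt/apps/evolution"   # hypothetical install root

def go_live(version: str) -> None:
    target = os.path.join(APP_ROOT, version)   # e.g. /opt/apps/evolution/2.8.0
    link = os.path.join(APP_ROOT, "current")
    tmp = link + ".new"
    if os.path.lexists(tmp):
        os.remove(tmp)             # clear a stale staging link
    os.symlink(target, tmp)        # stage the new link off to the side
    os.replace(tmp, link)          # atomic swap: no half-upgraded state

go_live("2.8.0")
```

Sessions that are already running keep their open copy of the old version, so nothing breaks mid-use.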
In our case, we have a big server running GNOME. All that runs on this server is the desktop itself, which lets users customize their environment; basics like fonts, wallpapers, colors and themes are configured on this machine. When a user requests an application like Evolution, a signal is sent to another server that runs only that program; it launches a session and remote-displays it back to the end user. From the user's perspective, they cannot tell that they are logged into multiple servers at once, as it's fully automated.
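As a rough illustration of that launch path, here is a minimal Python launcher that starts a program on its dedicated server and lets X display it back on the user's screen. The post doesn't say exactly how Largo's launcher is wired up (plain X over the LAN vs. ssh), and "evolution-server" is a placeholder hostname:

```python
import getpass
import subprocess

def launch_remote(host: str, command: str) -> None:
    """Run `command` on the application server; ssh -X forwards X11
    so the remote program draws on the display the user is at."""
    user = getpass.getuser()
    subprocess.Popen(["ssh", "-X", f"{user}@{host}", command])

# Clicking the Evolution icon could trigger something like this.
launch_remote("evolution-server", "evolution")
```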
Because the horsepower comes from the servers, the thin-client devices on users' desks have a long duty cycle. Our current devices were deployed in 1997 and are still working fine; they are being retired only because of features that have become standard in the 10 years since: they lack the RENDER extension, they only support 1024x768, and they have no scroll wheels. This fall we are installing new thin clients that will once again have a 10 year duty cycle, so a 500 dollar thin client works out to 50 dollars per user per year.
I have created a simplified image of our design. The user logs into the GNOME server with a thin client; from there, each icon they click forms a session on another server that is sent back to the user's device. This lets you scale easily to hundreds of users on the same servers, and support costs drop greatly. We have been running this design for 10 years and it's the most stable environment I have ever seen.