The case for client-side computing power

OK, tech blogs. Hype all you like about thin clients, web-based apps, and the end of heavy client-side computing power, but at the end of the day, there is still something to be said for powerful, well-provisioned end-user workstations.

One project I've been working on lately involves a lot of image registration work. Essentially, we're giving people a pair of 3D images (about 200 MB each) and having them find features that match up. The software for this is written in MATLAB, which adds about 300 MB of overhead on top of the 300 or so that Windows XP likes to take up for itself.

The origins of the problem start to become obvious: Windows, plus MATLAB, plus a couple of data sets adds up to around a gigabyte. That's not much by today's standards (it's pretty pitiful even by 2007 standards), but we're not doing this on machines that are up to today's standards. One gig of working data on a machine with one gig of RAM means something is going to get shoved into virtual memory, and just about every performance metric plummets when that happens. The result? Data loads that should take six or seven seconds take close to a minute, and control response becomes so laggy that it feels like you're on a remote desktop link to a high school computer in Siberia. What should be a two-hour job quickly swells to three, four, or more hours.
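If you like your back-of-envelope arithmetic in script form, here it is; the figures are the rough ones quoted above, not measurements from any particular machine.

```python
# Back-of-envelope memory arithmetic, using the rough figures from the text
# (not measurements from any particular machine).
windows_mb  = 300      # Windows XP's resident footprint, roughly
matlab_mb   = 300      # MATLAB's overhead, roughly
datasets_mb = 2 * 200  # a pair of ~200 MB 3D images

ram_mb = 1024          # 1 GB of physical RAM
working_set_mb = windows_mb + matlab_mb + datasets_mb

print(f"Working set: ~{working_set_mb} MB against {ram_mb} MB of RAM")
print(f"Headroom: ~{ram_mb - working_set_mb} MB, which disappears as soon as "
      "an email client, browser, or antivirus scanner is also running, "
      "and the OS starts paging.")
```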

And so we hit the part that really bugs me: cost/benefit metrics that don't take actual productivity into account. For about \$200 per workstation, these machines could be given memory and processor upgrades that would shave at least five minutes of waiting time per hour. Let's say your staff are worth \$30 an hour (a gross underestimate in this case): the upgrades would improve productivity by \$2.50 an hour and would pay for themselves in two weeks. For \$1000 per station, they could be replaced with cutting-edge quad-core machines, swimming in RAM, that would save ten or more minutes an hour and pay for themselves in about five weeks. But \$200 per workstation in an organization with 1200 or so computers is roughly a quarter of a million dollars in capital expenditure, something that is not easy for an IT department to justify to management. A million-plus to replace perfectly serviceable computers with newer, faster ones is an even tougher sell. After all, the existing fleet works just fine, right? I can check my email, I can read PDFs, I can surf the Net, and it doesn't feel that slow.
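Here's that payback arithmetic spelled out as a small script; the hourly rate, minutes saved, and fleet size are the figures assumed above, so plug in your own numbers.

```python
# Payback arithmetic for the two upgrade options discussed above.
# All inputs are the assumed figures from the text, not vendor quotes.
def payback_hours(upgrade_cost, minutes_saved_per_hour, hourly_rate=30.0):
    """Hours of work needed for the productivity gain to cover the upgrade."""
    savings_per_hour = hourly_rate * (minutes_saved_per_hour / 60.0)
    return upgrade_cost / savings_per_hour

for label, cost, saved in [("RAM/CPU upgrade", 200, 5),
                           ("Quad-core replacement", 1000, 10)]:
    hours = payback_hours(cost, saved)
    weeks = hours / 40.0  # assuming a 40-hour work week
    print(f"{label}: pays for itself in {hours:.0f} working hours (~{weeks:.0f} weeks)")

# The fleet-level price tags that make the accountants wince:
fleet = 1200
print(f"Upgrades for the whole fleet: ${200 * fleet:,}")
print(f"Full replacement: ${1000 * fleet:,}")
```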

But it is slow - slower than it could be, at least - and an extra second to open a folder here, an extra three seconds to open an email there, all add up to a remarkable amount of time spent waiting for the computer to respond. Shifting to web-based apps doesn't help matters: not only is there a network connection (and its associated lag) in the way, but the part of the app running client-side is saddled with the overhead of a browser, half a dozen abstraction layers, and limited access to the client machine's processing power.
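To make "it all adds up" concrete, here's a toy tally. The per-action delays are the ones mentioned above; the daily counts are purely illustrative guesses, not measurements.

```python
# Toy tally of daily waiting time. Delays are the ones mentioned above;
# the action counts are illustrative guesses, not measurements.
actions = [
    ("open a folder", 1.0, 60),  # (label, extra seconds, assumed times per day)
    ("open an email", 3.0, 80),
]

wasted_seconds = sum(delay * count for _, delay, count in actions)
print(f"Extra waiting per day: ~{wasted_seconds / 60:.0f} minutes")
print(f"Over a 250-day work year: ~{wasted_seconds * 250 / 3600:.0f} hours")
```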

Delays - even tiny ones - waste time, thus wasting productivity, thus wasting money. Perhaps at some point in the future, we'll have true gigabit links to all workstations, near-zero lag on web app and virtualized desktop control inputs, and enough processing power in the data centre that thin or zero client stations will be fast enough for practical use. For the moment, though, desktop processing power still counts for a lot, and small expenditures on speed-boosting upgrades still have dramatic effects on the amount of work that actually gets done.
