Friday, December 08, 2006

Thin Clients And Remote 3D

We contacted two of the big Linux operating system companies and requested some minor technical help in attempting to deploy our next generation thin client solution. Neither had the resources to help, so we are moving ahead and doing the engineering and design in-house. I invite anyone with access to Gartner to read study G00140085.

I believe the mistake these vendors are making is that they are attempting to install Linux on each personal computer, instead of delivering a Linux desktop from centralized servers. The Gartner study shows a 48% cost reduction on the Microsoft Windows platform when moving from an unmanaged PC environment to a centralized design with thin clients. Half the cost, and no change in functionality. Imagine, then, what the savings would be if companies had the option to move to thin clients *and* Linux at the same time. A major part of the cost in the white paper is licenses and software products. Imagine going into companies and telling them that they could save 60-70% on computing costs. Really, trying to shake Microsoft Windows off their personal computers just isn't enough to warrant a change for most people. It doesn't offer the major cost reductions that come with a complete and stable re-design. Centralized computing using thin clients really works. There shouldn't be so few of us implementing it and being its voice.
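To make the arithmetic concrete, here is a back-of-envelope sketch of where the 60-70% figure can come from. The dollar amount and the 30% license share are hypothetical placeholders for illustration only; the ~48% reduction is the only number taken from the Gartner study.

```shell
# Back-of-envelope cost model. The $5000 per-seat TCO and the 30% license
# share are hypothetical placeholders; only the ~48% reduction figure
# comes from the Gartner study cited above.
pc_tco=5000                            # assumed per-seat TCO, unmanaged PC
thin_tco=$(( pc_tco * 52 / 100 ))      # ~48% reduction from thin clients alone
linux_tco=$(( thin_tco * 70 / 100 ))   # assume licenses/software are ~30% of what remains
savings=$(( (pc_tco - linux_tco) * 100 / pc_tco ))
echo "thin clients alone save: $(( 100 - thin_tco * 100 / pc_tco ))%"   # 48%
echo "thin clients + Linux save: ${savings}%"                           # 63%
```

With those assumed inputs the combined savings land at 63%, inside the 60-70% range above; the exact figure obviously depends on each organization's real license spend.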

With that, I have been able to make significant headway this week toward our design goals for our scheduled early 2007 rollout of new thin clients. We are deploying HP 5725 devices, and a new 2GB flash device became available in the last week. I installed this 'disk drive' into the case, and the increased capacity made it a LOT easier to load Linux and GNOME. Many thanks for the email messages and blog responses with ideas. We are going to experiment with some other 3D video cards, and we are also testing AIGLX instead of XGL. I was able to get Fedora Core 6 to install, and with a kernel upgrade and a few packages it was working standalone in 3D.
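For anyone trying to reproduce the standalone 3D setup, the steps were roughly as follows. This is a sketch only: the package and repository names (the livna-style `kmod-nvidia` and a Beryl package) are assumptions, and the exact set depends on your video card and which third-party repository you use.

```shell
# Sketch of a Fedora Core 6 standalone 3D setup.
# Package names are illustrative; adjust for your repository and card.
yum -y update kernel          # the kernel upgrade mentioned above
yum -y install kmod-nvidia    # proprietary Nvidia driver, e.g. from livna
yum -y install beryl-gnome    # Beryl compositor with GNOME integration

# AIGLX ships with FC6's Xorg. For the Nvidia driver, options like these
# in /etc/X11/xorg.conf are commonly suggested for compositing:
#   Section "Device"
#       Option "AddARGBGLXVisuals" "True"
#       Option "TripleBuffer"      "True"
#   EndSection
```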

[ In the shot below you can see the opened case. The flash device is circled, and the Nvidia card is installed in the expansion slot. (Go to my blog if you don't see the images) ]




At this point, the device was functionally identical to a personal computer: everything was running on the thin client itself. But our design goal is to move the workload to the server and turn these into true thin clients. So we loaded Fedora Core 6 on a computer to simulate a server and then logged in remotely with XDMCP. The server recognized the video card, and performance with Beryl is mostly excellent, even over a network! We were really amazed at how well this works, even in prototype form. We cobbled together a quick new cubecap, and the shot below shows a prototype of what our users will have early next year. This increase in capability will come in at around 40 dollars extra per user for the video cards, with a projected duty cycle of 10 years -- and no support needed at their desks. The GNOME session and 3D elements are pushed down from one big server, which is then upgraded every 3-4 years for hundreds of users.
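A minimal sketch of the remote-login side, assuming GDM on the Fedora Core 6 "server": enable XDMCP there, then have the thin client's local X server query it. The hostname, file path, and display number are typical defaults used for illustration, not our exact configuration.

```shell
# --- on the server: enable XDMCP in GDM (FC6-era config file) ---
# Add to /etc/gdm/custom.conf:
#   [xdmcp]
#   Enable=true
# then restart GDM so it listens for XDMCP queries on UDP port 177.

# --- on the thin client: start a local X server that queries the server ---
X :0 -query server.example.com    # hostname is a placeholder
```

With this arrangement the GNOME session runs entirely on the server, while Beryl's OpenGL goes over the wire via GLX indirect rendering and is drawn by the client's local X server and video card, which is why the 3D card in the thin client still matters.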

[ In the shot below, Beryl cube running over the network via remote display, with Evolution, GIMP and GoogleEarth ]



Up next, testing of other video cards for performance and ease of installation.

Beryl guys: nice job on what is running on Fedora Core 6. Some issues that seem to need work over remote display: wobbly windows and initial startup time. I hope to work with you in the near future on testing over remote display.

27 comments:

Janne said...

Have you tried using FreeNX? NX seems like a very interesting project, and I would love to see it being used more.

Dave Richards said...

For thin clients that have enough network bandwidth, I have not warmed to the NX/Citrix/VNC solutions; I prefer to use native X. It 'feels' much more crisp and responsive to me. Compression-based server solutions seem to have repaint issues and never feel quite as nice as X does.

We do use Citrix at our non-fiberoptic remote sites, and it works great.

Anonymous said...

If your thin clients are so powerful, why not netboot and run apps locally? The management cost should be the same.

Dave Richards said...

Running the apps locally won't work with our goals. We want roaming profiles; each device needs to be generic and provide a login only. It also has other limitations: 1GHz might work OK for now, but it will begin to feel slow over time, and you will never get a 10-year duty cycle out of it. 1GHz is plenty for remote display, and as applications get bigger it's easy to increase the size of the servers. Also, you lose shared memory. An application that takes 200MB running locally might take only 1MB of additional RAM when run from the server, because multiple instances on the server share memory between users.
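The shared-memory point can be checked on any Linux box. As a rough sketch, `/proc/<pid>/smaps` breaks a process's resident memory into shared and private pages; for a second instance of an application on the server, most of the mapped libraries show up in the shared totals:

```shell
# Rough sketch: total shared vs. private resident memory for a process.
# Usage: sh memshare.sh <pid>   (defaults to the current shell)
pid=${1:-$$}
awk '/^Shared_(Clean|Dirty):/  { shared  += $2 }
     /^Private_(Clean|Dirty):/ { private += $2 }
     END { printf "shared: %d kB  private: %d kB\n", shared, private }' \
    "/proc/$pid/smaps"
```

Running this against two instances of the same large application shows how much of the second instance's footprint is pages already resident for the first, which is the saving being described above.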

Anonymous said...

Obviously you've never looked at FreeNX or even used it... It is actually faster than XDMCP on our (rather crowded) dev VLAN. The genius behind NX is that it is an "intelligent" compression algorithm for native X, and so it runs much faster.

With all of that aside: you are looking at thin clients, and I have another suggestion for you. The Edubuntu team came up with a new way of managing LTSP, and it was so much better that upstream quickly moved to it. Most of the LTSP development is on Ubuntu, and Edubuntu has the best thin-client support of them all. You should seriously consider taking a look at Edubuntu. Just remove the default theming and some of the applications.
https://help.ubuntu.com/community/ThinClientHowto

Anonymous said...

My only piece of advice is not to use NVidia. Going with a binary kernel module essentially throws any debuggability out the window. And you'll regret that one day when something goes wrong at the absolute worst possible time.

ingwa said...

You could also look at ThinLinc. It seems to work better than NX in practice. There are some real nice installations in schools using it, where all the students use Linux through thin clients.

Anonymous said...

"My only piece of advice is not to use NVidia. Going with a binary kernel module essentially throws any debuggability out the window. And you'll regret that one day when something goes wrong at the absolute worst possible time."

No shit.

Why support a company like Nvidia when you can be using Intel hardware?

An Intel 945G system with the GMA 950:
- runs cooler
- uses less energy
- open source drivers
- supports AIGLX
- much less expensive
- good performance for Linux 3d desktops.

Especially when going with thin clients it seems that electrical usage and quietness would be a high priority.

Core Duo mini-ITX or micro-ATX motherboard with 1 or 2 gigs of RAM, and onboard audio, video, and networking.

AOpen, for instance, has Mini-ITX boards (unfortunately quite expensive right now) that support Core Duo or Pentium M processors and use a total of 25 watts of electricity!

You can make a machine dead quiet. It's very healthy for a business environment to have a quiet workspace available.

Why fuck around with shoveling proprietary code into your kernel when you are better off with somebody else's product?

Anonymous said...

I've tried both the commercial NoMachine NX and FreeNX on a hobby basis myself, and both had showstoppers like missing redraws and irrecoverable hangs on certain operations. I think Dave knows what he's doing when going with plain X for high-bandwidth clients.

Anonymous said...
This comment has been removed by a blog administrator.
Anonymous said...

Anonymous: Is there an Intel GMA950 board with DVI for its onboard video yet?

I love pretty much everything about the Intel solution right now, but pushing an analog signal between a digital chip and a digital LCD in 2006 is kind of a dealbreaker.

I suppose I could put Linux on my Mac mini, but I've heard about trouble with Linux and EFI, and you can't really add multiple SATA disks to it.

- anonymous#2

Anonymous said...

This is for the Anonymous@1:25 AM post. How does the fact that those Intel chips rely on system memory for rendering play into this? Won't performance dip on a thin client with less memory available?

Anonymous said...

actually, that's brilliant. Thank you. I'm going to pass that on to a couple of people.

Anonymous said...

Does this mean the server has the 3D graphics card and all graphics are displayed over the network? Or does each thin client have a 3D graphics card and render locally?

Stuart Robinson said...

Check out this remote 3D graphics performance (GPU-rendered, with hardware PCoIP display compression) over long-latency, low-bandwidth links, including a live network demo from New York to London.

http://www.youtube.com/watch?v=UuEhGzoo0lQ

Stu
