
Welcome to the Zender Group Webserver

You may be lost, looking for a different page...

History of dust

The original Zender group server, dust, was born in September 1999 for about $10k. Dell provided a Precision 610 with dual 500 MHz Intel Pentium processors, 1 GB RAM, and 2×36 GB SCSI disks. One shot disk, one fried video card, and almost five years later, her 1920×1200 resolution monitor still weighs about ninety pounds and is as good as it gets. dust's disks whined like spoiled babies, yet they never failed until one week when I was really busy, but that's another story. She originally ran Apache 1.x on factory Red Hat Linux 6.0. Now she runs Apache 2.0.40 on Red Hat Linux 9. The physical machine dust was retired from active server duty 20041228, when it was replaced by sand, which was renamed dust. The original dust continued to run for many years under the alias dirt.

From 20041228 to 20090206 the group server was the physical machine sand. The hostname dust became a “virtual” name (via a designated IP pointer) on 20050105. Requests to dust are automatically routed to sand. Using the dust address for URLs and web services provides continuity with sites that link to dust and eliminates the need to edit web pages every time we change physical web servers.
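The pointer arrangement described above might look like the following in BIND zone-file syntax. This is only a sketch with hypothetical zone data and an example address; the page does not say how the alias is actually implemented:

```
; Hypothetical zone fragment: dust is an alias (CNAME) for the
; physical server sand, so requests to dust reach sand.
sand    IN  A      128.200.0.10   ; example address, not the real one
dust    IN  CNAME  sand           ; "virtual" name follows the hardware
```

With this scheme, moving the web service to new hardware only requires repointing the alias, not editing any pages that link to dust.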

sand is a 64-bit AMD GNU/Linux system: complete independence from the Wintel monopolies at last! Western Scientific sold me a dual AMD Opteron Processor 244 (1.8 GHz) system with 2 GB RAM and 2×250 GB IDE disks for about $4000. The disks are mirrored to provide robust data storage. Her original fans were noisy, like small jet engines under my desk. Western Scientific replaced those with quiet fans and now I'm happy. She originally ran Apache 1.x on factory SuSE Linux Professional version 9.0. Now she runs Apache 2.0.50 under Ubuntu's 64-bit Debian-based GNU/Linux distribution. Harry Mangalam installed a very fast eight-disk RAID5 with 2 TB capacity. In February 2009, sand returned to private life and pbs became the group server on the afternoon of Friday, February 6, 2009. By that evening she had crashed and all data was completely inaccessible for about a week. This after weeks and months of planning and crafting her as our first rack-mounted, uninterruptible, access-controlled server. Sometimes less planning is better.

pbs is a 64-bit AMD GNU/Linux system running Ubuntu 8.10, Intrepid Ibex. It is the head node of the former cluster named pbs, and sits in CalIT2's server room. pbs began life as the development cluster for the NCO/SDO project. Daniel Wang used pbs extensively while he wrote his thesis and developed SWAMP.

Western Scientific sold us the four-node pbs cluster for about $40000 in 2006. Each node had 16 GB RAM, InfiniBand fabric, and about 1 TB of disk. When we reconfigured the cluster into a server, we combined the nodes' disks into 2.5 TB of RAID level fxm for pbs. The other three nodes, pbs1, pbs2, and pbs3, are now used as standalone servers and backup destinations. A daily cron job uses rsync to back up /home and /data/www from pbs to pbs1.

On November 22, 2011, dust was pointed to another machine reconfigured as pbs. This is a pure Apache2 web server based on a 2U Broadcom retiree with dual Opterons and a 1.5 TB mdadm RAID5. It runs Ubuntu Lucid 10.04.3 LTS. /home, currently on a single OS disk, is mirrored nightly to the RAID5. The previous pbs became pbs4, a storage server for the BDUC cluster running gluster over RDMA/Infiniband.

Here are the usage statistics for dust, courtesy of the webalizer.

Address questions and comments about this website to
Charlie “my surname is zender” Zender
