
Major coffee equipment upgrade

I’ve been spending a fair amount of time on the East Coast of America, which is home to many great things, but in the Boston suburbs good coffee sadly isn’t one of them. Boston’s main coffee chain is Dunkin’ Donuts, a shop that likes to market itself as a coffee house but whose coffee is, in reality, of extremely poor quality.

For the last few years, at home, I’ve been using my Isomac Tea II and Rancilio Rocky coffee machine/grinder combo with average results. The Rocky is a decent grinder but it isn’t a very good espresso grinder – there is a lot of clumping, and with a naked portafilter you can see lots of channelling, even after a vigorous stir of the grinds with a kebab stick. Anyhow, here in Boston I decided to upgrade – big time.

For reference, an Isomac Tea II is an E61 heat exchanger machine with a vibration pump. What this means is that a single boiler produces steam while brew water is heated on the fly through a heat exchanger, so the machine can brew and steam at the same time. The group head is a Faema E61, or a replica of it. The E61 group head was the first to allow automatic pre-infusion. This is the stage prior to the actual extraction where the coffee grounds are wetted so they expand and fill the portafilter; after a certain amount of time, water pressure is increased and extraction begins. The E61 group head was nothing short of revolutionary and even today it is a truly outstanding bit of mechanical engineering, plus it looks superb. (There are also downsides to the E61, chiefly temperature stability and the need to do a cooling flush prior to the first extraction, but everything in engineering has its trade-offs.)


So what did I upgrade to? The grand-daddy of all the setups – a La Marzocco GS3/1 Mechanical Paddle coffee machine and a Compak K10 Fresh grinder. This is pretty much the dream setup for the home – some would say a Mazzer Robur E is better than the Compak but personally I preferred the electronic controls of the K10 Fresh and the fact that I could get it with a short hopper from Chris’ Coffee in New York.

I’m actually writing this post after using the setup for a month, so hopefully the honeymoon period is well and truly over. A couple of points to get out of the way. First, I don’t have buyer’s remorse, though spending so much money on a coffee setup obviously isn’t for everyone and I had been eyeing up (and saving for) this purchase for well over a year. Secondly, make sure you have space: conical burr grinders such as the K10 Fresh and Mazzer Robur are much bigger than you would think, especially in a home kitchen.

So how is it to live with such a coffee setup? Well, it is much easier to pull high quality espresso shots. I could make nice espresso on and off with my previous Rancilio Rocky/Isomac Tea II setup but my consistency is far better with the new gear. Secondly, good quality espresso grinders (not just the K10 Fresh) all but remove channelling. I wasn’t forced to make any changes to my technique, except that I no longer have to prod the grinds with a kebab skewer to “fluff them up”.

The Compak K10 Fresh still produces some clumping, though the clumps themselves are smaller and lighter. It should also be noted that conical burr grinders of this size need a fair amount of beans put through them before the burrs are seasoned. I effectively wasted around 4kg of beans before I started using the grinder for coffee I actually drank, though some people say you need a lot more than that.

As for the GS3/1, the machine is extremely well built and the mechanical paddle not only adds to the tactile feel of making coffee, it allows you to experiment in the future with altering the pressure during the pour – something known as “flavour profiling”. The steam wand is nothing short of sensational, taking less than five seconds to stretch a small pitcher of cold milk. And finally, the drip tray is large enough for a good number of shots to be pulled before you need to empty it. That said, taking the tray out isn’t quite as slick as it should be, and frankly is quite tedious.

I would definitely recommend that those who can, plumb in the GS3/1. As I am currently in rented accommodation this isn’t viable – though not impossible. One thing I would say is that not having to do the E61 cooling flush is very nice and saves a lot of time if you have to pull a few shots in a row.

As for my Rocky/Tea II setup, that stays in London for whenever I go back to spend a few weeks there.

Generic performance metrics mean less and less in servers

Computing has long promoted performance metrics as a means of visualising the performance of a system. Back in the day it was MIPS (millions of instructions per second); then for a while it was the processor’s clock frequency. AMD was the first major chip maker to move away from frequency by offering “performance ratings” on its processors, something that Intel adopted later on. Frequency and performance ratings are useful tools in the consumer market, where consumers generally have to trust the numbers for the relative performance of products within a chip vendor’s line. Of course, performance ratings do not translate between AMD and Intel (or, for that matter, any other chip vendor).

However in servers it is a different story. About a decade ago, if my memory serves me correctly, AMD again tried to shift the metric away from frequency and started talking about performance-per-watt. This mattered a lot for those folks buying racks of servers back then (I was one of those people), because by 2005 datacenter energy prices had started to go up considerably and it wasn’t a matter of how much processing power you had but whether enough amps were being supplied to your rack to fill it with kit. I remember seeing many racks left half-full because the companies had hit their rack’s power budget and the datacenter couldn’t cool any more servers, even if the company renting the rack would pay for extra power.
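The rack power-budget arithmetic can be made concrete with a back-of-the-envelope sketch – the 32A/230V feed and 350W server draw below are illustrative numbers I’ve picked, not figures from any real facility:

```python
# Toy rack power-budget calculation.  All figures are illustrative:
# a single-phase 32A feed at 230V, servers drawing ~350W under load.
feed_amps = 32
feed_volts = 230
server_watts = 350

# Watts available to the whole rack from its feed
rack_budget_watts = feed_amps * feed_volts
# How many servers fit before the power (not the space) runs out
servers_that_fit = rack_budget_watts // server_watts

print(rack_budget_watts)   # 7360
print(servers_that_fit)    # 21 -- a 42U rack of 1U boxes left half-empty
```

Run the numbers and a 42U rack of 1U servers ends up half-empty, which is exactly the sight I remember.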

These days companies such as Google, Microsoft, Amazon, Rackspace, Facebook and numerous other large cloud providers are the ones that put in the large orders for servers and really drive the server industry due to their collective buying power. Aside from buying power, some of these companies build their own datacenters or at the very least have significant input into how the infrastructure is presented to them, whether it be in the shape of a rack or the source of the electricity.

The point here is that these customers do enough deals with various stakeholders – local governments, utility companies, etcetera – that their bottom line is affected in different ways, with each input having a different weighting. For example, a $1 rise in power prices may hit one datacenter operator harder than it hits another. Then there’s the definition of performance.

Performance ratings are generated using a wide range of benchmarks, which is how it should be. Except a particular user may want to run just one workload on a particular server for its lifetime, say a relational database, and couldn’t care less how the chip performs transcoding video. Again this relates back to the point made in the previous paragraph regarding power, it’s all about personalising the metric for the particular user.

While metrics such as performance-per-watt are a very good way of glancing at the power efficiency of a chip, you can bet your bottom dollar that high volume server purchases are not based on that figure. Rather they are based on months of testing on very specific workloads and on how a part’s computation, power, space and maintenance characteristics – and even the chip vendor’s roadmap – fit into the buyer’s bottom line calculation.
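To see why a single headline figure can’t capture this, here’s a toy sketch – the chip names and every number in it are made up purely for illustration:

```python
# Two made-up chips scored under different buyer weightings.
# Nothing here is real data; it only shows that the "winner"
# depends on how a given buyer weights power draw.
chips = {
    "big-core": {"throughput": 100.0, "watts": 130.0, "rack_units": 2},
    "small-core": {"throughput": 60.0, "watts": 45.0, "rack_units": 1},
}

def score(chip, w_power):
    """Higher is better: reward throughput, penalise power draw and space."""
    return chip["throughput"] - w_power * chip["watts"] - chip["rack_units"]

# A buyer with cheap electricity barely weights power draw...
cheap_power = {name: score(c, 0.1) for name, c in chips.items()}
# ...while one paying a premium per amp weights it heavily.
dear_power = {name: score(c, 1.5) for name, c in chips.items()}

print(max(cheap_power, key=cheap_power.get))  # big-core
print(max(dear_power, key=dear_power.get))    # small-core
```

Same two chips, same benchmark numbers, opposite purchasing decisions – which is the whole point about personalised metrics.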

Ultimately it’s all about personalising performance, which is why you are starting to see a range of servers. Two socket servers with “big core” chips will continue to sell by the bucket load, but they will be joined by so-called small core-based servers that provide this personalised performance some companies are looking for. The question then becomes: how do you mix these big core and small core servers at the datacenter level? The answer to that is scalable fabrics, something I will touch on soon.

VMWare Workstation 9 on Ubuntu 13.10

For the past 18 months I’ve been able to consign Microsoft Windows to a virtual machine, with Linux-based operating systems on my workstation and laptop (CentOS 6.x and Ubuntu respectively). The hypervisor I use is VMWare Workstation 9 and all was fine until I recently updated the Ubuntu installation on my laptop to 13.10, at which point VMWare asked me to recompile some of its modules against the new Linux kernel.

Recompiling the VMWare modules itself isn’t particularly hard – the command is sudo vmware-modconfig --console --install-all. However doing this left me with at least one module that failed to compile, and started me on a search of the Internets. Apparently it is a fairly common problem and requires a few Heath Robinson measures.

A post on the VMWare forum highlights the problem and, while it was made prior to the official release of Ubuntu 13.10, the error posted in that thread is identical to mine. Fortunately, Rainmaker52’s reply did the trick – it involves downloading three patches, applying them to the VMWare module source code and then running the aforementioned vmware-modconfig command.

While the instructions in the reply are self-explanatory, if you are not running the “default” Ubuntu 13.10 release and are still on a Linux 3.10.* kernel, then you can skip the vmblock.3.11.patch. For my standard installation of Ubuntu 13.10, which ships with Linux 3.11, I needed the 3.11 patch.

As the three patches are hosted on personal websites, I have mirrored them – links are below.

Procfs patch
VMblock.3.11.patch (only needed if you are running Linux 3.11)

After applying the patches and running the vmware-modconfig command, I was able to start VMWare Workstation 9 without any problems. That you need to use such annoying third party patches is pretty shoddy on VMWare’s part – Ubuntu is hardly a niche Linux distribution and it is gaining credibility in the enterprise, especially with companies that run OpenStack clouds (which is perhaps ironic in many ways). Nevertheless, VMWare and Canonical should work together a bit more closely in the future to ensure an official patch appears.

The unspoken problem with technology journalism

Having written for technology publications for over a decade, I have had the displeasure of witnessing first hand the decline in the quality of technology journalism. Others, far more experienced and smarter than I, have talked about the possible reasons for this, but in my view the underlying factor is a willingness by editors and publishers to cater to the herd mentality.

Technology journalism has always been a pretty exciting job for young people who grew up with electronics such as games consoles, personal computers and the Internet. At least in the UK, technology journalists are well catered for in terms of free drinks and many technology companies pay for flights and hotels out to interesting press events, sometimes in exotic parts of the world.*

Journalism isn’t well paid. A former editor simply said of the writers “you pay peanuts, you get monkeys” – an example of the sad state of editorial attitudes in technology journalism. Many professions do not provide fair compensation, and one of the problems with journalism is that you really need to devote your life to the subject matter if you want to provide thoughtful analysis and, perhaps most importantly, nail a company when it is peddling marketing guff.

The reason some people cite pay as a problem is that low pay causes the best qualified journalists to leave the field, or not get into it at all. However, in my view pay isn’t the problem, and neither is the wider education system; no, the real problem is editors and publishers who simply want a quick buck rather than take the opportunity to build a brand.

News aggregators such as Google News, Techmeme and others are giving editors an easy way out of paying for proper journalism. Some editors and publishers have fallen into the trap of believing these websites are far more important and valuable than a regular readership that associates a title with a particular trait – just as traditional broadsheet newspapers are known for certain specialisms, such as the Washington Post for American political coverage, or the Wall Street Journal and the Financial Times for their financial coverage.

In theory there is nothing wrong with the service news aggregating websites provide – a single source of information that combines multiple outlets. Techmeme claims to use editors while Google News is automated, but the problem is that editors use these websites to see what others are writing, presuming that writing the same content will bring readers to their publication because the topic is popular. Fair enough – a popular topic is always worth investigating.

Investigating is the key word here. Editors and publishers have now gone into a mode where investigating or even pushing the story forward (that is, to find something new about an existing, reported topic) has gone out of the window. Instead the editor, perhaps pushed by the publisher, is now telling their writers to simply do a “me too” story in order to pick up straggling readers that visit news aggregator websites. I’ve seen this happen while working at one publication and it is devastating.

It is devastating for writers, some of whom want to do real work rather than churn out the same stuff everyone else is doing, and it is devastating for the publication, which loses its identity. What news aggregators do is anonymise and de-brand news. That’s good for the reader – a reader shouldn’t rely solely upon one publication for their news consumption – but for publications that follow this herd there’s little opportunity to build up a regular readership. It also shows that the editor is not doing their job and is devoid of any creativity or vision.

A regular readership is vital to any publication for two reasons. One is the basic business of publishing: a well defined regular readership is what helps the publisher ink the big advertising deals. Perhaps more importantly, though, the writers gain respect among readers and vendors for covering a particular field well. This not only helps the writer to develop their knowledge of their beat but increases the chances of news tips being sent in.

Editors and publishers need to realise that a good publication isn’t something that is built in six months by doing what everyone else is doing. Building up a respected publication takes time, guts and, above all, supporting your writers rather than seeing them as pieces of meat.

* I am aware that many US publications do not allow this for fear of bias, but if journalists are so easily persuaded then perhaps they shouldn’t be in a field where, over the course of their career, their impartiality will be tested with far more than an economy class seat and a three-star hotel room.

Software won’t be far behind those 64-bit ARM chips

I spent three years, until August 2013, writing about the semiconductor industry, primarily for The INQUIRER but for a couple of other places too. Despite outward appearances, the semiconductor industry is an extremely interesting one, with many bona fide characters. Since August 2013 I have been lucky enough to be a part of the industry in a small way, working at AMD.

All of that should serve as some context for what follows. One of the things I covered when writing professionally was 64-bit ARM (technically ARMv8 architecture) processors, chips that will be used in servers. 64-bit ARM processors are already here thanks to Apple’s A7; however, building a “server” chip is somewhat harder than one for consumer electronics – the memory controller, the branch prediction units and a whole heap of other stuff are considerably harder to develop and test, and that’s not even touching on the more rigorous validation that is required.

One of the tactics that Intel is using to combat these upcoming chips from AMD and a number of other vendors is to claim that x86 processors such as the Xeon, or even AMD’s traditional Opteron units, have a huge library of software. It’s a tactic that works well: software is absolutely vital to hardware adoption, and the lack of it can become a vicious cycle – just look at Intel’s own Itanium processor.

However, 64-bit ARM processors will not have to rely on traditional software vendors to provide the ecosystem needed to power their sales. It is expected that Linux distributions such as Ubuntu, Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) will be the operating systems of choice, not Microsoft’s Windows. Being open source, community driven distributions – RHEL and SLES draw significantly on Fedora and openSUSE respectively – they allow companies to contribute directly to the operating system, including the Linux kernel. Intel knows this, and that is why it is one of the biggest contributors to the Linux kernel.

Not only is the operating system an area where companies and individuals contribute to a greater good (or even their own self-serving needs), but projects such as Apache Hadoop and OpenStack are community driven and open source too, meaning interested parties can contribute – even in a purely selfish manner – if needed.

The point here is that 64-bit ARM won’t be like 64-bit x86 or any other emergent instruction set. It won’t need Microsoft or any other single company to bring the software to the hardware. Work on supporting the ARMv8 (64-bit) architecture has been going on for the best part of two years, and that’s not even taking into account that ARMv8 processors are backwards compatible with ARMv7 and ARMv6 code – something that Linux and other software vendors have supported for years. Linaro, the non-profit body that oversees a number of projects to bring support for new hardware and new features to Linux software, has been beating the drum for ARMv8 for some time.

It would be foolish to deny that 64-bit ARM is a big change, but if Intel is banking on software being the stumbling block to ARM adoption in the datacenter then it is in for a real shock next year. Far too many semiconductor companies have put their weight, faith and, most importantly, balance sheets behind ARMv8 to leave the software ecosystem to chance.

The question is just how much market share Intel will cede in the datacenter. It certainly won’t remain at the 90-or-so percent it enjoys today.

Why can’t other companies copy Lenovo’s keyboards?

I used to be a journalist and, while I left that life three months ago, I still spend a huge amount of time typing on a keyboard, especially on a laptop. For the past eight years I have been lucky enough to use IBM Model M keyboards on my desktop – I swear by them. IBM’s Model M isn’t the first “buckling spring” keyboard but it is easily available and cheap if you look on auction websites.

Why does the IBM Model M work so well? That buckling spring (an IBM patent) provides superb tactile feedback – it actually makes it easier to type. Since the 1990s, companies have taken the view that people don’t care about the quality of keyboards, bundling cheap and nasty units that not only feel terrible to type on but simply don’t last. I have an IBM Model M from 1985 that is still going strong; don’t expect the five quid keyboard bundled with a Dell or HP to last that long.

So where’s the catch? The Model M is a noisy keyboard and newer tactile keyboards that employ Cherry MX key switches provide much of the tactile feedback without the noise. But this is all an aside really as these days I’m rarely at a desktop, rather I spend my time working off laptops. And judging by the popularity and functionality of laptops and tablets, it doesn’t take a fortune teller to work out that standalone keyboards are going to play less of a role in our lives.

To cut a long story short, if you want a decent keyboard on a laptop you pretty much have one choice – Lenovo Thinkpads. It’s not surprising that the Thinkpads, an IBM product line, had good keyboards, but Lenovo was clever: it stuck to the formula when it bought IBM’s computer business. I’ve talked to executives at companies such as Dell and they still don’t seem to understand what makes a good keyboard; the formula is pretty simple really.

  • Provide a rigid chassis for the keys. Flex is bad.
  • Key travel. More is better but obviously height restriction limits this so look at scissor action keys.
  • If all else fails, take a Lenovo Thinkpad and find out how to replicate its keyboard.

Today consumer computer companies such as Apple and Samsung are spending hundreds of millions on developing display technology for touch screen devices. This is an obvious thing to do; after all, the screen now provides both vital input and output mechanisms. But the keyboard isn’t dead – after all, the so-called two-in-one tablets are being heavily pushed. So why not invest a fraction of the amount spent on touch screens and provide those folks that can’t get a Lenovo machine with a decent keyboard?

Quick and dirty Crontab tutorial

For those who have used Linux for any length of time the issue of automating tasks will have come up. The time honoured way of doing this is through Cron. Each user on a Linux system has a crontab – a list of commands that is run at a predetermined time.

I’m not going into massive detail on what Cron is and how it works – the man page (man 5 crontab) is a good start for that. However, in order to automate tasks you need to put them into your crontab, and here is how you do that.

1. Type ‘crontab -e’ into the shell prompt. This will let you edit your crontab file; on some distributions (CentOS, Fedora) the editor will be Vi (or Vim), while in the case of Debian and Ubuntu (and maybe others) it is nano.
2. Enter the timing and command – you must use absolute pathnames for everything.
The way crontab files are formatted is something like this:

*1  *2  *3  *4  *5 <command>

*1 = minute of the hour (0 – 59)
*2 = hour of the day (24 hour clock, 0 – 23)
*3 = day of the month (1 – 31)
*4 = month of the year (1 – 12 where January = 1)
*5 = day of the week (0 – 7 where Sunday = 0 or 7)
<command> = command using absolute paths

So for example if I wanted to run /home/crapple/ every Monday at 1am I would put the following in the crontab:

* 1 * * 1 /home/crapple/

Basically the * in the minute field means every minute during 1am on Monday, so the script will run 60 times. If you want it to run only once, pick a minute and change the first * to a number between 0 and 59.
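If you want to sanity-check a schedule, the matching rule is simple enough to sketch in a few lines of Python. This is a simplified illustration, not a full cron parser: it only handles ‘*’ and plain numbers (no ranges, steps or names), and unlike real cron it ANDs the day-of-month and day-of-week fields rather than ORing them when both are restricted.

```python
from datetime import datetime

def cron_matches(expr, dt):
    """Return True if datetime dt matches a five-field cron expression.

    Simplified sketch: each field is either '*' or a single number.
    """
    minute, hour, dom, month, dow = expr.split()
    # cron numbers Sunday as 0; Python's weekday() numbers Monday as 0
    values = [dt.minute, dt.hour, dt.day, dt.month, (dt.weekday() + 1) % 7]
    for field, value in zip([minute, hour, dom, month, dow], values):
        if field != "*" and int(field) != value:
            return False
    return True

# "* 1 * * 1" -- every minute during 1am on a Monday
print(cron_matches("* 1 * * 1", datetime(2013, 11, 4, 1, 30)))  # Monday 01:30 -> True
print(cron_matches("* 1 * * 1", datetime(2013, 11, 5, 1, 30)))  # Tuesday 01:30 -> False
```

Swap the leading * for, say, 30 and only 01:30 on Monday matches – one run a week instead of sixty.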

Once you have entered the command you wish to run, save the file and quit – in Vi/Vim, leave insert mode and type :wq as normal, or in nano press Ctrl+O then Ctrl+X.

And you’re done. Just remember that if your script writes to STDOUT, the output will be emailed to your user account when the job runs. This can be suppressed by putting > /dev/null at the end of your command, so using our previous example:

* 1 * * 1 /home/crapple/ > /dev/null

Now all output will be sent to /dev/null.

One final thing: if you want to comment your crontab file to make it understandable at a later date, single line comments start with a #. Strictly speaking, cron only treats whole lines beginning with # as comments, but a trailing # also works in practice because the command is run through the shell, which ignores everything after it. For example

* 1 * * 1 /home/crapple/ > /dev/null # this is a comment

Happy automation!

Cuisinart (not so) “professional” burr coffee grinder

Over the past three years I have been using the Cuisinart “professional” grinder to try and make espresso – the longevity of its use serves more to highlight my lack of research into proper espresso brewing. Simply put, if you want to drink espresso, do not use this machine to grind your coffee beans.

Professional in name only

Cuisinart label this product as a professional grinder simply on the basis that it uses a burr mechanism. Indeed, a burr grinder is what “professional” machines use (machines such as those made by Mazzer et al.), but the biggest difference with those 500 quid (or more) machines is the ability to adjust the fineness of the grind. The Cuisinart grinder simply does not grind coffee beans finely enough for espresso, resulting in under-extraction, which can be seen in a lack of crema (the thick “cream”/oil that should sit on top of an espresso).

Amazon, among many other retailers, sells the Cuisinart “professional” grinder at a price that suggests it is a bargain compared to other units that cost hundreds of pounds. I bought mine from Amazon and for three years it gave reliable, but sub-standard, service.

There are a few reviewers on Amazon that make the same point regarding grind fineness, but they are outweighed by the very many who seem to think it is perfectly fine. This could be for many reasons; like me, they may be ignorant of what good espresso should look and taste like, but more likely they are not using it for an espresso machine, rather a French press or, worse still, a filter coffee machine.

If you intend to use this machine to grind beans for a French press or a filter coffee machine then it is fine, the coarser grounds will suffice. However if you want espresso, it is simply not worth buying this machine.

In the last week I replaced both my Cuisinart grinder and my cheapo Krups espresso machine with a Rancilio Rocky (without doser) and an Isomac Tea II. Needless to say the espresso I drink now is significantly better.

How to scam on eBay

From a recent dealing with an individual on eBay, I figured out how easy it is for someone to scam another person on eBay without the usual tricks of cheque fraud, escrow services and other similarly well documented cons. No, forget all of that; all that is required is a tracking number, and eBay’s utterly useless chimps in customer service will side with the seller.

After purchasing a job lot of processors (for £29), the seller claimed to have sent the items to me, furnishing me with a DHL tracking number that doesn’t work on DHL’s website but only through Parcels2Go, where it clearly states the package wasn’t delivered to my address and was picked up by some bloke who doesn’t even exist on the street. So obviously I took this up with eBay, providing them with evidence clearly stating that the item was not delivered to the address I had given.

The response after appealing the first decision that went against me? eBay cannot overturn the verdict because a valid tracking number was provided.

Yes, you read that right: a tracking number that doesn’t even work on the courier’s website, only on a third party website, and one that shows, clearly, that the package has not been delivered to the right address.

So if you want to scam people on eBay – the ‘auction’ site simply hires chimps that obviously don’t know their arse from their elbow – just provide a valid tracking number. Hell, send the package to yourself; eBay won’t care.

I pray for the day a company with enough clout to challenge eBay and its payment service, Paypal, comes out and kicks this once useful website into the annals of web history.

Western Digital TV Live Ethernet

Perhaps a glaring sign of assumption leading to a mistake: I had thought Western Digital had put a gigabit Ethernet controller on its WD TV Live device. After all, why wouldn’t you? The device can clearly play back high definition content, and Blu-ray discs can stream at up to 40Mbps at present. To my shock, after upgrading my LAN to gigabit Ethernet and changing the patch panels to Category 6, I bothered to check WD’s specification only to find that the device supports 10/100 Ethernet. Absolutely shocking.

You have to ‘upgrade’ to the WD TV Live Hub if you want gigabit Ethernet. That’s a nice device with a 1TB hard drive which I have absolutely no need for. It also costs double that of the WD TV Live. Gigabit Ethernet chips are so cheap these days it’s shocking that WD stuck 10/100 Ethernet on the device. Not only does it cripple functionality it shows how cheap the company’s product designers are.

It’s a real shame WD did this as apart from this glaring omission the WD TV Live has performed faultlessly.

Easiest way to play MKV videos

Perhaps the most impressive bit of hardware I have purchased since my first solid state drive is Western Digital’s WD TV Live  box. It all but removes the need for having a computer under your telly.

The unit itself isn’t exactly new technology – it’s been around for a few years – but prices for the WD TV Live are now around the 75 quid mark including VAT, and since it can play MKV files without any trouble at all, it’s almost a no brainer.

As there’s no hard drive in the WD TV Live, it’s up to you to provide the storage. There’s a 10/100 Ethernet port and two USB ports. The device supports some USB WiFi adaptors (link to a full list of supported devices) but most useful of all is the ability to plug in both FAT32 and NTFS drives and play videos directly. If you have a 2.5-inch hard drive, chances are the USB bus can provide enough power, meaning you don’t even need a separate power supply.

Aside from playing H.264 video in MKV containers, it supports DTS and AC3 passthru. For lossless music playback, there’s FLAC support. I have even managed to transcode a Blu-ray image of Inception into a high-profile H.264 encode with DTS-HD audio passthru – a 45GB MKV file – which the WD TV Live had no problem playing.

The underlying OS is a custom Linux variant and you can even install Debian if you want, though I can’t be bothered: WD has done a great job in making the device do all I need while being easy to use.

I’m probably going to end up getting one upstairs so I can listen to FLAC audio without the need to run a desktop. Congratulations to Western Digital for making such a decent bit of kit, it certainly follows on from the firm’s hard drives.

Moronic father sues school after girl allows phone to be searched

Some Texan moron who felt violated that his daughter was asked to reveal text messages on her phone has sued the county’s school district for over $7 million.

Believing that his little princess’s constitutional rights had been violated, Mr John Beaird naturally thought it would be right to sue. Why not? After all, MacArthur High School or perhaps MacArthur Prison Camp as Mr Beaird would have you believe, only asked his little ankle biter, Madelyn, to fork over her mobile phone in order to eliminate her from an investigation.

The investigation was prompted after officials from MacArthur High got wind of minor fracas involving keyed cars and, naturally this being in Texas, a gun. Initially the school thought the doe-eyed Madelyn was involved, proceeding to question her and search the contents of her phone.

This was the final straw for daddy. Upon hearing of the flagrant disregard for his cherished daughter’s right to bear a phone, Mr Beaird declared all out war, slapping the Independent School District (ISD) with a lawsuit and claiming over $7 million in damages.

As to how the doting father reached the figure of $7 million, well, that’s a story best told by the hero of the hour himself. “I remember back when hot coffee was spilled in the McDonald’s law suit. They were awarded $4.5 million. I said you know, I guess a constitutional right is worth at least $4 million today.” There you have it, Mr Beaird putting a value on the very fabric of American life.

And what about the victim of the heinous crime? It seems the sight of four figures of authority caused Madelyn to shut her claptrap. “I knew they could not do it but I was kind of scared to ask for it back because you know I was like there were three principals and a police officer.” Of course you were dear, now skip along and play with your marbles.

The ISD told the local rag that there was “reasonable cause for the district to search the phone” and that the school had got permission from all students involved. It also rubbished Mr Beaird’s claims but did offer to reimburse his little treasure’s phone, as it had not been returned.

While Mr Beaird wonders what his next money-spinning move will be, the majority of other American parents would be grateful to know that a school would investigate a matter relating to guns in schools. Mr Beaird, on the other hand, would prefer to use his daughter as a pawn in furthering his own bank balance.

Jumpdomain finally bites the dust

Scott Ison has finally decided to call it quits, with his sham of a registrar at long last showing an “out of business” sign.

The Jumpdomain site now simply tells people to move away, which is what people should have done ages ago, if they could. No links to other parts of the site are available, but the rest of the site does still exist behind the useless index.html.

My original advice on how to move away from Jumpdomain still remains in place, and I was in contact with eNom in the past week, who were again very quick in handing over an EPP code.

Unsurprisingly the shyster, Ison, offers no help, and frankly I hope someone decides to take legal action against this guy to try and set a precedent that holds registrars responsible for this sort of behaviour in the future. I was lucky and didn’t lose any money, but there are stories of people who lost domains and, more directly, money thanks to this conman.

Don’t expect any flowers Ison.

New speakers

For the past three and a bit years I’ve been using a pair of Fujitsu Ten Eclipse TD 307 speakers which I managed to nab off eBay for a half decent price. The speakers were an excellent upgrade from the crappy Yamahas I had for around 10 years previously.

Fujitsu Ten Eclipse TD speakers, where the TD stands for “Time Domain”, are very interesting (and relatively expensive) speakers, so occasionally I look out for them on the auction site. They are pretty rare, although a few years back a pair of 512s with a bit of damage was knocking about (and was resold by the person who won them within a matter of days). At the time I missed out bidding for them and felt somewhat gutted – the 512s were the top of the range at that time.

So a couple of weeks ago, I noticed a seller with a pair of 510s and a pair of 508s. The former auction was to end a good four hours before the latter, and the 510s were the “beefed up” version of the older 508s. I put in what was my limit and was outbid, which is fine. I would have liked to win them, but they went for just under 500 quid, a little more than I wanted to pay for them.

The 508s came up and I placed my bid, winning them for what I thought was a bargain. The seller was very slow getting the speakers to me, but once they arrived I was amazed at the difference between the 307s and the 508s. The bass on the 508s (from the same Eclipse 307 amplifier) is simply outstanding for a speaker of its size. The guy who sold them had kept the speakers in almost perfect condition – I couldn’t find any marks on the body or the woofer cone.

Little 'n large

Unlike the smaller 307 there’s no grille although the woofer diameter is exactly the same. Sadly I couldn’t get them in silver but I’m happy nonetheless.

The 508s have been replaced by the updated 508 II but for the time being I’m perfectly happy with these. If you have a chance to audition Fujitsu Ten TD speakers, I thoroughly encourage you to do so, they are utterly superb.

Frode explodes

One of the machines in the lab decided to give up the ghost sometime in the past 10 days, and it certainly went with a bang.

This machine wasn’t attended to very often, so when I saw it was off, I turned it back on and was almost engulfed in grey smoke and the stench of burning silicon. What had happened is that sometime earlier there had been a coolant leak from the watercooling system, which had led to the machine originally being switched off. The cause of all this smoke, however, was that some coolant was still residing in the case (though a lot of it made its way onto the floor), and passing electricity through it meant death, very quickly, for some components.

The one which bore the brunt of the damage was an Adaptec 29360 SCSI card. The card itself is only a 20-30 quid replacement on eBay, but the PCI slot it was parked in was also full of coolant, so while the motherboard may be okay I would rather not take the risk. So we’ve got one useless motherboard and one very dead SCSI card. Check out the pictures.

This computer was mainly used for the odd bit of gaming (the CPU was a Core 2 Duo E8400 (3.0GHz), with 8GB RAM, a couple of SCSI drives and an NVIDIA 8800 GTX video card). I’m trying to salvage the video card by replacing the watercooling block with an Arctic Cooler thing, which I’ll post about when it arrives.

The smell of burning silicon is extremely painful.