My Rambling Thoughts

Quote:

Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.

Brian W. Kernighan

Quote:

If you ride a motorcycle often, you will be killed riding it. That much is as sure as night follows day. Your responsibility is to be vigilant and careful as to continue to push that eventuality so far forward that you die of old age first.

Unknown source

News:

Date: . Source: .

The XBox 360 has very impressive security

computer

Now to the XBox 360. It uses a 3-core 3.2 GHz PowerPC CPU, 512 MB RAM, a >20 GB HD (capacity varies) and a 12x DVD drive. Pretty good specs for a 2005 system.

A buffer overflow exploit in a game does not work anymore. First, the game runs in user mode, so we would need a separate privilege escalation exploit to reach kernel mode. But even worse, the main memory is encrypted! Even if we manage to put arbitrary data into memory and trick the processor into executing it, it won't run because it decrypts to garbage. Plus, user-mode memory is always marked W^X, meaning writable memory is never executable.

We cannot encrypt the code because we don't have the keys. The encryption is done in the ASIC (containing the CPU + encryption block) and the keys never leave it. Moreover, the key is generated by a Random Number Generator, so it is never the same across power-cycles.

The boot loader is now much more secure because it resides entirely in the ASIC. It has 32 kB of ROM and 64 kB of RAM, so it is able to use real cryptographic algorithms (a modified SHA-1 hash and RC4 encryption). Also, the ASIC has 768 bits of programmable write-once eFuses that allow, among other things, a unique per-machine master key that never leaves the ASIC. It is now much harder to replace the firmware.

The weakest link

There is one glaring weakness: the DVD drive. The drive is used to check for genuine game discs (by reading data that cannot exist on a DVD-R). The problem is that the system trusts the drive implicitly. If you hack the DVD drive firmware, you can run copied games.

The latest DVD drive firmware is now encrypted, but it is still somewhat weaker compared to the rest of the system. (People actually extracted the chips and probed them electrically.)

Note that this does not break the XBox 360 security, but it does achieve a common goal — to play pirated games.

It is widely suspected that Microsoft is able to detect this hack. Modding carries a huge penalty: Microsoft banned several hundred thousand modded XBox 360s from XBox Live in late 2009.

Tip: never buy a second hand XBox 360.

A few mistakes together can be fatal

What if we are interested in gaining root access? We have to look for weaknesses in the system itself.

First, a syscall was buggy in one of the firmware versions. It did not check its inputs properly and allowed arbitrary unencrypted code to run — in Hypervisor mode.

It turns out that some games do not sign their graphics shader code, so it can be overwritten at will. The graphics shader can be used to DMA arbitrary data into the main memory.

There are two things to overwrite: the thread data (registers) and the destination address.

The registers are overwritten to the correct syscall values with the NIP (Next Instruction Pointer) pointing to a known syscall instruction. This works because data is unencrypted.

The context restore is then triggered. The registers are loaded, syscall is executed and calls our unencrypted code at the destination address — in Hypervisor mode!

(Data being able to execute as code is a side-effect of the memory encryption page table implementation.)

This exploit was demonstrated in 2006, and Microsoft has long since closed the loophole — by checking the syscall inputs properly and encrypting the thread data. There have been no known Hypervisor hacks since.

My take

The XBox 360 has very impressive security. Microsoft can do a proper state-of-the-art security design if it wants to. :lol:

XBox weak points

computer

I just watched this interesting Google Tech Talk The Xbox 360 Security System and its Weaknesses.

(Note: there is an earlier Google Tech Talk Deconstructing The Xbox Security System that focuses on the XBox. It is very interesting — especially if you are familiar with the x86 architecture.)

Let's talk about the XBox first.

The XBox uses a modified Pentium III 733 MHz CPU, 64 MB RAM, 10 GB HD and a 5x DVD drive. It uses mainly off-the-shelf parts. The specs may seem lackluster, but remember, it was released in 2001.

Main weakness: games run in kernel mode. Games are always less secure than the OS, and once a buffer overflow exploit is found, arbitrary code can be run. If games did not run in kernel mode, an attacker would need a further privilege escalation exploit.

So the XBox can be hacked easily.

(Now, this isn't as straightforward as it seems. First, we can't just change the game code. They are always signed. Even the game data is signed, but the private key resides in the game, so once we have control of the system [via other exploits], we can find it.)

What if we want control of the system from the moment it is powered up? We'll need to replace the firmware. Obviously, it is encrypted/signed. But don't worry: the boot loader that checks the firmware is just 512 bytes and is loaded into the CPU via a high-speed bus that, despite Microsoft's assumption, can be sniffed. (Where there's a will, there's a way.)

Once the boot loader is disassembled, the game is up. We can now encrypt our own firmware.

The boot loader has some really clever ideas, but some of them are implemented wrongly, so it is much weaker than expected. Had it been written correctly, it would have been much more difficult to break.

My take

XBox is not without security, just that it has several significant weaknesses. Just one weakness is enough to break the system.

The XBox being hacked so easily was a major blow to Microsoft. The hardware was (heavily?) subsidized, so Microsoft wanted people to run only licensed games on it.

The first PS3 hack

computer

The first crack finally appeared in the PS3 armour 3 years after it was released. (Some people had claimed to hack the PS3 before, but none of them provided any evidence.)

The first few PS3 versions, until the PS3 Slim, could run Linux as the OtherOS, but it was still under the control of the Hypervisor that restricted its access to memory and the GPU.

Now, someone has used an electrical "glitching" attack to retain access to freed memory (the cache fails to write back to main memory), then kept creating page tables until one of them fell into the "freed" memory region. Once this was done, he was able to rewrite the page table to give himself full access to all the memory, including the Hypervisor's.

The PS3 is still far from defeated. For one, this must be done on every power up, as the Hypervisor ROM image is still not cracked. An article suggests it is decrypted by hardware keys.

Also, it appears the all-important root keys are kept by one of the Cell coprocessors, and the main processor has no access to them.

And most important of all, it only works if the PS3 allows OtherOS to be run. This means it won't work on the PS3 Slim!

(Even on existing units, the Hypervisor can be patched to check that the free operation actually succeeded.)

So far, the PS3 remains the only console that has not been hacked. It seems pretty secure, I must say. :thumbsup: Someone said it is because the PS3 allowed Linux to be run, so hackers were not interested. Well, now that it is removed, let's see.

There is actually another reason to hack the PS3: to play PS2 games. The first two PS3 versions could do it, but later ones could not. This is not a simple matter of enabling the PS2 emulation — the entire emulator would have to be written!

My PC, my rules

programming

We have pretty lax rules for our development machines. We can log into one another's machine and use them freely. Mine has a few special rules, though:

  • Time-consuming processes are automatically lowered in priority. This is to ensure good foreground response.
  • A cap of 200 simultaneous processes. This is to safeguard against runaway processes. (200 is good enough for 3 simultaneous builds — not something you should do anyway.)
  • 50 GB HD quota. If a user needs more, he can always use his own HD over the network. (This is a very generous limit. You will never hit it unless you never do housekeeping.)

Anyway, almost no one uses my PC anymore as most developers have their own equally fast PC.

Speeding up compilation

programming

Our development machines have 2 quad core CPUs each. I had never seen compilation go so fast before — it was 8x faster than my old PC! Suddenly, there was no more excuse to laze around. :-P

Even so, what if you need more speed? Someone had the bright idea of using 20 compilation processes. (The build script defaults to the number of cores.)

It does not help, because compilation is CPU-bound: all 8 cores are already at 100% usage until the linking stage. Linking, unfortunately, is a single process and takes up 10-15% of the build time.
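A back-of-the-envelope Amdahl's law calculation shows why extra compile processes buy so little (the 12% serial fraction below is my own assumption, taken from the linking figure above):

```javascript
// Amdahl's law: overall speedup with a serial stage that cannot be
// parallelized (here: linking, assumed to be ~12% of the build).
function speedup(serialFraction, workers) {
  return 1 / (serialFraction + (1 - serialFraction) / workers);
}

speedup(0.12, 8);   // ~4.3x with 8 parallel compile processes
speedup(0.12, 20);  // ~6.1x in theory, but with only 8 physical cores
                    // the extra 12 processes just time-slice
// Even with infinitely many cores, the speedup is capped at 1/0.12 ~ 8.3x.
```

The serial linking stage, not the process count, is the real ceiling.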

The key thing is to profile. I thought the PC would be IO-bound, but no, the SATA HD is fast enough and most of the apps are cached in memory.

Because of the 100% CPU usage, my PC has an obvious input lag during compilation. I had to write a script to monitor and lower the priority of time-consuming processes.

I don't know why no one else has this issue. My request to the build team to lower the build priority by default was rejected. (I can nice my build, but I can't expect every user to do it.)

Feedback on my website!

programming

A colleague who was suddenly very interested in Web programming poked around my website, but didn't like what he saw. :lol:

No end tags

I don't end <p> (its end tag is optional), and <img> and <input> are not allowed to have end tags anyway, just like <br>, <hr> and <col>. I end all the others. It is just more natural to me when I write the raw HTML using a text editor.

I chose to conform to strict HTML 4.01 in 2003. For a while, I thought I had chosen wrongly because it looked like XHTML was the way to go. It turned out XHTML was poorly supported and was very hard to get right. Anyway, HTML 5.0 is now the path forward.

HTML is the way to go for webpages, and everyone just has to use better HTML parsers.

JavaScript in HTML

My colleague mentioned I did this wrongly:

--> </script>

It should be like this:

// -->
</script>

I vaguely remember that browsers will treat --> as a comment for JavaScript embedded in HTML, but I guess he is right. I didn't change it because it still passes the strict HTML 4.01 validation.

(I try not to embed JavaScript in HTML, but it is sometimes convenient to do so.)

Minified code

My colleague also mentioned that my JavaScript code was hard to read. It turned out he was reading the minified code! I minify the code dynamically to remove comments and whitespace. Simple obfuscation is a side-effect.

I have this rule in my .htaccess:

RewriteCond %{REQUEST_URI} ^(.*\.(js|css))$ [NC]
RewriteRule .* /<scripts>/minify.php?f=%1 [QSA,L]

(Sidenote: if you use Apache, you must know .htaccess.)

minify.php is a thin wrapper around JSMin and CSSMin that caches the minified output and handles HTTP If-Modified-Since.
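The conditional-GET decision itself is tiny. A sketch in JavaScript (Node-flavoured, my own illustration rather than the actual minify.php logic):

```javascript
// Should we answer 304 Not Modified? Compare the client's
// If-Modified-Since header against the file's mtime. HTTP dates have
// one-second resolution, so compare at that granularity.
function notModified(ifModifiedSince, mtime) {
  if (!ifModifiedSince) return false;      // no header: send the body
  const since = new Date(ifModifiedSince); // RFC 1123 date from the client
  if (isNaN(since)) return false;          // unparsable: send the body
  return Math.floor(mtime.getTime() / 1000) <=
         Math.floor(since.getTime() / 1000);
}
```

When it returns true, reply 304 with no body; otherwise send the minified output with a Last-Modified header so the next request can take the 304 path.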

The next thing I want to do is to concatenate multiple JS files together. That will really help.

Ohhh, preferred platinum card

finance

When I started working, there were two common credit cards: normal (S$30k annual income) and gold (S$60k annual income).

Just about anyone could get a normal credit card if they earn S$2,000 — most fresh grads do — so it was no big deal. (They crossed the $30k mark with bonuses and employer's CPF contribution.) People aspired to get the gold card as it signaled their earning power.

The gold card got easier and easier to apply for — everyone wanted one, you see. A few years later, it eventually became the same as the normal card, which was renamed the classic card. The platinum card, which was really prestigious in the past (S$100k annual income?), became the new gold card.

Fast forward another few years. I just received a platinum card to replace my soon-to-expire classic card. I don't want a platinum card: I don't want to pay the annual fees, and I especially don't want to change my credit card number.

So I called up my bank to ask if I could continue with my classic card. No, it can't be done, the operator said, the platinum card is the classic card now.

(Except that it is free only for the first 3 years rather than for life.)

Okay.

I checked a few banks' websites and found that platinum really is the new normal! This must have happened a while ago. Can't blame me for not knowing as I live under a rock. :lol:

So what is considered prestigious these days? Black card, white card, signature card, preferred card; it depends on the bank. There are a few ways to tell: (i) high annual income requirement, (ii) high annual fees, (iii) high minimum required spending, (iv) by invitation only.

For me, I just need a card I can use to buy stuff online. Damn, I need to update all my accounts again. :rant:

My brief credit card history

  • Corporate gold card: I liked it for the easy-to-remember number and the company logo (well, it sets it apart).
  • Classic card: Some time later, I applied for a personal credit card, but I never used it, so I canceled it.
  • Classic card 2: The corporate card was discontinued, so I had to apply for my own card again.
  • Classic card 3: The old card was canceled when I reported a suspicious transaction. It turned out to be a mistake on my part. I was unable to keep the card — I was on the verge of memorizing the number.
  • Platinum card: My latest card.

The missing gold: the prequel

finance

News: What a Run on Gold Looks Like

Date: 19 October 2009. Source: NumisMaster.com.

Rob Kirby of Kirby Analytics in Toronto has reported details of a recent "run on the bank" in the London Bullion Market Association Gold Exchange.

The London Bullion Market is the world's largest gold exchange with daily turnover now running almost equal to a year's global gold mine output. Since this market theoretically is trading contracts for actual delivery of physical metal, gold sellers are supposed to be ready to deliver the real thing and not paper.

Kirby attributes his information to impeccably reliable sources that on Sept. 30, the last trading day for the LBMA September 2009 futures contracts, deep pockets buyers "bought" substantial tonnage worth of September 2009 gold contracts. The buyers then told the sellers that they wanted to take immediate delivery of the physical metal.

This article seems to be legit. I'm not surprised two big banks were caught with their pants down. I read that the big western banks are now mostly short gold — because they don't believe it is worth so much?

Well, they underestimated India and China, who are still very pro-gold, have the means to buy huge quantities and make sure they get physical deliveries, unlike the rest of the "small" gold funds that have to make do with paper deliveries.

Gold is golden, but is it really there?

finance

News: Fake gold bars in Bank of England and Fort Knox

Date: 11 January 2010. Source: Pakalert Press.

It's one thing to counterfeit a twenty or hundred dollar bill. The amount of financial damage is usually limited to a specific region and only affects dozens of people and thousands of dollars. Secret Service agents quickly notify the banks on how to recognize these phony bills and retail outlets usually have procedures in place (such as special pens to test the paper) to stop their proliferation.

But what about gold? This is the most sacred of all commodities because it is thought to be the most trusted, reliable and valuable means of saving wealth.

A recent discovery — in October of 2009 — has been suppressed by the main stream media but has been circulating among the "big money" brokers and financial kingpins and is just now being revealed to the public. It involves the gold in Fort Knox — the US Treasury gold — that is the equity of our national wealth. In short, millions (with an "m") of gold bars are fake!

Who did this? Apparently our own government.

Now, this is serious tin foil stuff. I find it extremely unbelievable. I'll take the blue pill this time, thank you very much.

(5,600 bars at US$400,000 each is US$2.24 billion. That's huge! On the other hand, Ethiopia lost "millions of dollars" in fake gold. How many bars do they have? Probably just 10 to 15, since it doesn't look like it crossed US$10 million.)

Mankind has always been obsessed with gold. Personally, I think gold makes sense as currency, because the total quantity is more-or-less fixed, unlike fiat money where you can print as much as you like. (I don't think gold has an intrinsic value, though.)

But a word of warning: unless you have the gold on hand, it is not really yours. And the Government can easily outlaw gold when it is desperate — the US did so in 1933.

Random musing: gold is soft, malleable and extremely dense. One of my wishes is to hold a gold bar to see for myself. :-D

America Rising — An Open Letter to Democrat Politicians

On November 4, 2008... We gave you power. In the House. In the Senate. And in the Oval.

We voted for hope. We voted for change. (Change We Can Believe In, Obama'08)

Balance would be restored. Our world would be safer. Our families would be stronger. Our future would be brighter.

We trusted you. So we elected you. We regret it.

This is amazing. The video was prepared by Democrats for Democrats. They are fed up.

This video was removed by YouTube many times (they'll do it if you report it), but it kept coming back — being reposted by many other people. It is a fight between the Obama loyalists and the "rebels".

There is a second video asking the Republican Party to join them, but the message isn't as strong.

Final Fantasy XIII walkthrough is out!

By accident, I found that people have uploaded Final Fantasy XIII walkthroughs on YouTube. In fact, they have started to do so since the first day the game was out! :-O

A note: Square Enix forced everyone to remove the first few cutscenes, but allowed later ones. The official explanation is to avoid spoilers — SE also blocked the ending.

FF XIII screenshot

The graphics are top-notch and the whole game has the same awesome graphical look — I can't tell where the movie cutscenes end. The dialogue cutscenes are rendered in real time and the lips move in sync. Movements are also pretty natural. This is an amazing achievement because it is almost good enough to pass off as a 3D movie. (They didn't use enough polygons for the fingers. That really stood out.)

Unfortunately, that is about the only good thing about the game.

The game is one long dungeon run. You just run in one straight line and fight enemies along the way (you can avoid many of them, but not all). It has no side-quests or NPC interaction. Previous FFs were also mostly dungeon runs, but they weren't so obvious about it. Plus, they had side-quests and NPC interaction.

Then there's the music... what music? FFs have always had some good music, but I've not heard any memorable pieces yet. Case in point: I recently watched the Crisis Core - FF VII walkthrough and boy, does it have some good music!

I can't really comment on the story since I don't understand a single word. :lol: But there has been very little exposition. Usually you'll expect bits and pieces of the story from time to time. So far, it is all running and fighting. (I've watched up to chapter 8 of 13.)

I look forward to March when the English version is out. I don't expect it to be as good — generally, the Japanese voice-actors sound the most natural — but at least I'll be able to understand it! (The best is Japanese voices with English subtitles.)

On the battle system

The battle system was criticised too. However, I think it is fine — it is not a pure slash-and-hack anymore. Some of the harder battles can be akin to puzzle-solving and require the right tactics to win.

On the downside, the familiar victory tune (since the first FF?) is absent.

On the playthroughs

The best quality videos are from gamers who use a pass-through capture card/device — they are almost 720p HD quality. The videos are not perfect because YouTube restricts the 720p videos to ~2 Mbps. One uploader told me he captured the videos at 25 Mbps! :-O

(For previous FFs, the best sources were from those who played using an emulator.)

Just as always, not all HD videos are equal. Some look good, but the motions are not smooth.

Some people like the battles to be included, so that the playthrough is really 100% complete, but I think it works better without them. A single battle takes several minutes, and there are random battles every minute or so. It means the story is not really advancing most of the time!

jQuery 1.4 is out

programming

Key milestones:

Date Version Size Zipped
Aug 2006 1.0 16.7 kB 8.8 kB
Jul 2007 1.1.3 20.9 kB 10.9 kB
May 2008 1.2.6 54.4 kB 16.6 kB
Feb 2009 1.3.2 55.9 kB 19.4 kB
Jan 2010 1.4 68.2 kB 23.3 kB

jQuery has grown very large over the years. Of course, it has also added much new functionality. However, I wonder if we will see jQuery Lite — a return to its 20+ kB roots, perhaps.

(A typical webpage may have just 3-10 kB of compressed JavaScript. Compare this to jQuery/jQuery UI's 50+ kB — the library is far bigger!)

What is so attractive about jQuery?

I'll have to say it is (i) easy to use, (ii) lite and (iii) easily extensible.

Easy to use: jQuery makes it so much easier to manipulate the DOM. Its small size used to make it a no-brainer to use, although that is no longer always true.

Lite: jQuery is a library, not a framework. You can use any one of its functions standalone without having to learn anything else.

Extensible: it is trivial to add your own functions to jQuery.
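To make the extensibility point concrete: a plugin is just a function attached to jQuery.fn, the object every jQuery selection inherits from. A minimal stand-in (not the real jQuery, and the plugin name is made up) shows the mechanism:

```javascript
// Minimal stand-in for jQuery, for illustration only: jQuery(...) returns
// an object whose prototype is jQuery.fn, so anything added to jQuery.fn
// becomes available on every selection.
const jQuery = function (elems) {
  const obj = Object.create(jQuery.fn);
  obj.elems = elems;
  return obj;
};
jQuery.fn = {};

// "Writing a plugin" is a single assignment (the name is hypothetical):
jQuery.fn.count = function () {
  return this.elems.length;
};

jQuery(['a', 'b', 'c']).count(); // 3
```

The real library works the same way, which is why third-party plugins need no registration step at all.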

jQuery is such a game-changer for me that I've decided to contribute to its cause — I've donated US$10 to them. :lol:

How Google was compromised

News: Google Attack Part of Widespread Spying Effort

Date: 13 January 2010. Source: PC World.

Google's decision Tuesday to risk walking away from the world's largest Internet market may have come as a shock, but security experts see it as the most public admission of a top IT problem for U.S. companies: ongoing corporate espionage originating from China.

It's a problem that the U.S. lawmakers have complained about loudly. In the corporate world, online attacks that appear to come from China have been an ongoing problem for years, but big companies haven't said much about this, eager to remain in the good graces of the world's powerhouse economy.

So that was how it was done: plain old attachments.

Stop the press — Google vs China!

News: A new approach to China

Date: 12 January 2010. Source: Google Blog.

Like many other well-known organizations, we face cyber attacks of varying degrees on a regular basis. In mid-December, we detected a highly sophisticated and targeted attack on our corporate infrastructure originating from China that resulted in the theft of intellectual property from Google. However, it soon became clear that what at first appeared to be solely a security incident — albeit a significant one — was something quite different.

First, this attack was not just on Google. As part of our investigation we have discovered that at least twenty other large companies from a wide range of businesses — including the Internet, finance, technology, media and chemical sectors — have been similarly targeted. We are currently in the process of notifying those companies, and we are also working with the relevant U.S. authorities.

Google has also announced that it would make Gmail use HTTPS by default. This will prevent third-parties from snooping the data easily.

The Gmail attacks are account-level hacks; that's small potatoes. The big news is the "highly sophisticated and targeted attack on [their] corporate infrastructure originating from China". It sounds like their US corporate network was hacked — successfully (they admitted it).

Interestingly, Google was able to find out other companies were also targeted. My guess: DNS cache poisoning.

As I said before, you can never be too paranoid on the Internet.

The hunt for dead pixels

A friend got a new plasma HDTV and asked me to help him spot dead pixels. Without a video diagnostics disc, the easiest way is to hook up a notebook to the TV and set the desktop wallpaper to a solid color.

In the past, I only checked black and white screens. Black is to spot stuck pixels, and white is to spot dead pixels.

Stuck pixels are very annoying, so there should not be any at all. Luckily, they are easy to spot. I've never seen stuck pixels on a new display, though.

In theory, checking the white screen for dead pixels is sufficient because as long as any of the RGB sub-pixels are not working, it won't show up as white. While this is true, it turns out that white is the hardest to check. I have never caught a dead pixel without knowing where it was.

Online wisdom is to check the R, G and B components separately. For completeness, I also check Cyan (G-B), Magenta (R-B) and Yellow (R-G).

Checking individual components is easier than white: it is either the color or black. Even so, it is not easy to spot a dead pixel. You have to scan the screen from a nose's length away. (But once you've spotted one, you can see it from 1-2 metres away.) A plasma screen makes it harder due to its inherent noise.
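Without a diagnostics disc, the solid test screens are easy to produce yourself: a blank page that cycles its background through the test colors on each click. A sketch:

```javascript
// Solid test colors: R, G, B, the two-component mixes
// (cyan, magenta, yellow), then white and black.
const testColors = ['#f00', '#0f0', '#00f', '#0ff', '#f0f', '#ff0', '#fff', '#000'];
let index = 0;

function nextColor() {
  const color = testColors[index];
  index = (index + 1) % testColors.length; // wrap around after black
  return color;
}

// In a browser, wire it up and go full-screen:
// document.onclick = () => { document.body.style.background = nextColor(); };
```

One click per color beats fiddling with desktop wallpapers between passes.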

HDTV dead pixel

After 45 minutes — 1920 x 1080 is a lot of pixels — I spotted one fully dead red pixel and two partially dead blue and red pixels. I thought the dead red pixel was a dead RGB pixel, but the macro photo showed that it was just the red pixel.

There are three levels of severity:

  • Fully dead RGB pixel: obvious (relatively speaking), but also very rare.
  • Single dead sub-pixel: not obvious, but once you have found it, you can spot it again easily (within 2 metres).
  • Partially dead sub-pixel: hard to find even when you know where it is.

The dead pixels are all at the periphery, so I recommended that my friend keep his unit. I also decided to do him a favour by not telling him where they are. :-D

(My own LCD TV has one fully dead pixel and several partially dead pixels. The dead pixel is an eyesore — if I look at it. The other dead pixels are not noticeable. I missed all of them until I saw the dead pixel by chance a few months after I got the TV. :-O)

Finally, a netbook I can live with

computer

Netbooks have always been too small and too slow for me, but no longer. Enter the Asus Eee PC 1201N: 1.6 GHz dual-core Atom, 2 GB RAM, ION, 12.1" (1366 x 768), 250GB HD at 1.45 kg and US$500.

It does not have a built-in optical drive and is a little on the heavy side, but hey, you can't expect everything to be perfect.

The Atom is good enough for office apps and surfing the net, but it is too slow to play 720p videos. That's where the ION comes in: dedicated graphics. With it, netbooks can play even 1080p videos without a hitch.

(The dual-core Atom N330 is a gimmick, though. It is actually meant for desktop use, so it has less power saving features — the 1201N can last just a little over 3 hours on battery. It is low by netbook standard, but it is good enough for me.)

Something I like about the 1201N: RAM. Unlike many other Atom netbooks, it can use more than 2 GB RAM. This is important as I need 3 GB RAM to run my apps comfortably.

The 1201N is almost perfect for me. However, it also almost triggered my instant-discard filter: an extra column of keys beside the backspace. I absolutely hated that. When I need to correct mistakes while touch-typing, I naturally use the edge of the notebook to "home".

Screen size

Most people feel that 10" is the perfect netbook size. However, I find it too small. 12" is the minimum for me — I'll be hard-pressed to even consider 11.6".

My ideal notebook

To me, a notebook is meant to be carried around, so I have always been willing to trade features/speed for weight. Specs:

  • Light: 1.3 kg
  • Small, but not too small: 12.1" (1280 x 800)
  • Run apps and store files comfortably: 1.6 GHz dual-core CPU, 3 GB RAM, 120 GB HD
  • 2.5 hours battery life
  • Optional optical drive
  • Able to play 720p video
  • Backspace/enter by the right edge
  • Must look sleek

There is one ultra-portable notebook that meets all my requirements: the HP EliteBook 2530p. Unfortunately, it is also very expensive. The Eee PC 1201N is the first netbook to meet most of them.

(Sidenote: the 1.3 kg 13.3" MSI X340 would have been the first if it had a dual-core CPU instead of a single-core Core 2 Solo. Its keyboard was very flimsy, and it had a short battery life due to its 4-cell battery.)

12" is the battlefield

12" is the ground that netbooks and ultra-portable notebooks will fight over for a while — and I believe netbooks will prevail. This is unfortunate because I want a notebook with a dual-core CULV CPU. No matter how you look at it, an Atom CPU is slow.

The truth is inconvenient, so we hide it

finance

News: Geithner's Fed Told AIG to Limit Swaps Disclosure

Date: 7 January 2010. Source: Bloomberg.

The Federal Reserve Bank of New York, then led by Timothy Geithner, told American International Group Inc. to withhold details from the public about the bailed-out insurer's payments to banks during the depths of the financial crisis, e-mails between the company and its regulator show.

This could be big.

Judging by the way the Congress, the President and the Federal Reserve favour Wall Street over Main Street, I wonder if it is time for another round of "no taxation without representation"!

Y2k all over again

computer

There have been reports that some systems/devices failed to work properly once the year turned 2010. Two main causes:

  • Using just the last digit
  • Parsing the digits wrongly (assuming decimal instead of BCD)
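The BCD mistake is easy to demonstrate. BCD stores one decimal digit per nibble, so a clock chip stores the year 2010 as the byte 0x10; firmware that reads that byte as plain binary gets 16, i.e. 2016. A sketch (the function name is mine):

```javascript
// Decode a BCD byte: high nibble is the tens digit, low nibble the ones.
function bcdToDecimal(b) {
  return ((b >> 4) & 0x0f) * 10 + (b & 0x0f);
}

const raw = 0x10;                  // how a BCD clock chip stores "10" (2010)
const naive = raw;                 // read as plain binary: 16 -> "2016"
const correct = bcdToDecimal(raw); // decoded properly: 10 -> "2010"
```

The reverse mistake (decimal data decoded as BCD) throws dates backwards instead, which is reportedly what bit some devices in 2010.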

Unlike the year 2000, no one expected 2010 to be a problem. Now we know better. Dates are never easy to handle. I won't be surprised to see more date problems in the future.

2038 is known to be problematic (32-bit Unix-style timestamp overflow), but it is still very far off. By then, we will be using 64-bit timestamps.

Why is year always a problem?

In the past, space was at a premium, so people either stored the year in 2 digits or as an n-bit offset from a reference year (say 1980).

The first gives us the Y2k problem. The second can hit us any time, but the effect is usually localized unless we are talking about OS / major apps. (Btw, a 5-bit offset from 1980 will roll over in 2012.)

The next problem is storage. Dates are often saved/transferred in human-readable form. There are so many problems with this: (i) is it d/m/y, m/d/y or y/m/d, (ii) is it stored as a number or a string, (iii) is it 0-based or 1-based? Parsing is a big headache.

A more sensible approach is to store in seconds and convert to human readable form for display/input. It simplifies date arithmetic too.

Storing in seconds is not a cure-all. We usually use the Unix-style timestamp, so dates can only start from 1970 (unless you are willing to use 64-bit timestamps). The time zone is also implicit. For sanity, always store GMT and add the time-zone offset for display — never store local time! However, this is a problem if the time zone is not available.
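A JavaScript illustration of the discipline (Date internally stores milliseconds since the epoch, in UTC):

```javascript
// Store the moment once, as seconds since the Unix epoch (always GMT/UTC).
const ts = 1262304000; // 2010-01-01 00:00:00 GMT

// Convert to human-readable form only at the edges, for display/input.
const d = new Date(ts * 1000);
d.toISOString(); // "2010-01-01T00:00:00.000Z" -- GMT, unambiguous
d.toString();    // local time: rendered per viewer, never stored

// Date arithmetic stays trivial: one day later is just +86400 seconds.
const tomorrow = ts + 86400;
```

The d/m/y-vs-m/d/y ambiguity, the 0-vs-1-based question and the time-zone guesswork all disappear, because parsing only ever happens at input time.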

Borneo Motors fire the first shot

transport

News: Big discount on Toyota cars causing a stir

Date: 6 January 2010. Source: ST.

Toyota distributor Borneo Motors has astonished competitors by offering whopping discounts of up to S$6,000 on its bestselling Corolla Altis and Vios models.

The cut amounts to a discount of around 9 per cent for the Corolla and 11 per cent for the Vios, and has raised eyebrows in the motor industry as the quanta are close to the car's profit margins.

Good for BM! An Altis 1.6 with OMV of $18k means a cost of $61.2k (COE $18.5k).

I don't see why BM is affected by the rising yen; its cars are manufactured in Thailand. Honda, on the other hand, must really have its hands tied. A higher OMV is one thing, but the 2.37x (tax) multiplier on top of it basically just screws them.

The local car market is very small and easily controlled (whether by LTA or the big car dealers), so I do not expect car prices to go down much.

One thing I see consistently is that everyone in the car industry always forecasts that the COE will go up.

Do I have 1 TB or not?

computer

A 1 TB HD has a capacity of 1,000,202,272,768 bytes, but it shows up as 931.5 GB. This is thanks to the use of binary units (1 kB = 1024 bytes) rather than the decimal SI units (1 k = 1000).

Binary prefixes have been defined — KiB, MiB, GiB and TiB — but they are still not in common use.

AFAIK, only memory and flash are still using binary units to indicate their capacity — a 1 GB SD card is really 1 GiB. All others, including bandwidth and throughput, are in SI units.
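To see where the 931.5 figure comes from, here is the arithmetic in Python (the byte count is the one quoted above):

```python
capacity = 1_000_202_272_768          # bytes, as reported for the "1 TB" drive

tb_decimal = capacity / 1000**4       # ~1.00 TB: what the label means
gib_binary = capacity / 1024**3       # ~931.5 GiB: what the OS displays as "GB"

print(round(tb_decimal, 3), round(gib_binary, 1))  # 1.0 931.5
```

The drive maker and the OS are both "right"; they just divide by different powers.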

Software

Software is a big issue. You'll never know whether it uses 1,000 or 1,024 unless you can see the actual number of bytes.

On my part, I'll try to do the right thing from now on and use k- for SI prefixes and Ki- for binary prefixes.

One terabyte of data

computer

I share a 1 TB HD with my brother. More accurately, he bought the HD and I leeched on him. :-D

The HD finally ran out of space after one year of dumping data into it indiscriminately, so I bought a new external 1 TB HD as a stop-gap measure. To my surprise, they cost >$130. I thought I saw them selling around $100 a few weeks ago.

So, the space issue is averted for now. I am still looking for a long-term solution. My requirements:

RAID 1 to achieve near-100% reliability. This is the minimum a storage solution should provide. No more HD failures.

(Also, it may sound obvious, but I must be able to mirror a new drive easily.)

User and directory level access control. The HD will be shared, so there must be access control to prevent problems.

I trust my family, but I don't trust their computers. I don't want a virus/worm to wipe out the HD. (For example, I only store data files in my brother's shared HD. I store exe files in my own smaller external HD.)

Accessible over the network. I don't want to move the storage around to access it on different computers.

Moving it around increases the chances of dropping it.

Prefer NTFS format. In case of disk errors, I want to be able to take the HD out and recover the data on a Windows machine. (You can tell I'm not a Linux person.)

This is not really needed for RAID-1, though.

Able to add capacity in the future. I may start with 1-2 TB, but I would like to upgrade/change the storage to 4-6 TB a few years down the road.

I don't really need all of these features, but since I'm looking for a permanent solution, I want to do it right — if the price allows.

A NAS (Network Attached Storage) can do all of the above, but it is not cheap. A typical NAS costs $400 — excluding the HD. At that price, I'm thinking of getting a cheap (perhaps 2nd hand) PC to be the server.

Speed

There's one downside to using NAS: speed. Current 7200 RPM HDs can transfer well over 60 MB/s. USB 2.0 allows 25-30 MB/s (theoretical 60 MB/s), but Fast Ethernet (theoretical 12.5 MB/s) and 802.11g (theoretical 6.75 MB/s) are even slower.

I'll need to upgrade my router and cables to get Gigabit Ethernet (125 MB/s).
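The MB/s figures above are just the link speeds in Mbps divided by 8 bits per byte; a quick sketch:

```python
# Theoretical throughput: divide the link rate (megabits/s) by 8 bits/byte.
links_mbps = {
    "USB 2.0": 480,
    "Fast Ethernet": 100,
    "802.11g": 54,
    "Gigabit Ethernet": 1000,
}

for name, mbps in links_mbps.items():
    print(f"{name}: {mbps / 8:.2f} MB/s theoretical")
```

Real-world throughput is lower still (the 25-30 MB/s seen on USB 2.0, for example), since protocol overhead eats into the theoretical number.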

Why RAID 1?

RAID 1 is the least efficient at 50% — the two disks mirror each other. RAID 5 is more efficient, but you need a couple more disks (3 disks = 67% efficiency; 5 disks = 80% efficiency). The NAS is also much more expensive.
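RAID 5 dedicates one disk's worth of space to parity, so usable capacity is (n-1)/n of the total. A quick check of the figures above:

```python
def raid5_efficiency(n_disks):
    # RAID 5 loses one disk's worth of space to parity across the array
    return (n_disks - 1) / n_disks

print("RAID 1 (2 disks): 50%")
print(f"RAID 5 (3 disks): {raid5_efficiency(3):.0%}")  # 67%
print(f"RAID 5 (5 disks): {raid5_efficiency(5):.0%}")  # 80%
```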

Really long term storage

Even 2 TB will soon be insufficient. I foresee that I'll have to back up the oldest / least-recently used files to DVD-Rs. DVD-R doesn't make sense for periodic backup, but it is suitable for archival files (files that will never be changed again).

I will make two copies to avoid data loss. I have had CD-Rs/DVD-Rs go bad on me before (even at slow burn speeds). However, I don't really want to do this because DVD-Rs are like the tapes of the past — they are offline and slow.

2009 Vehicle Expenses Report

finance
Category          YBR       CB400F     MX-5
LTA related         82.19     145.19   1,176.06
Insurance          149.58     155.28   1,356.28
Petrol             175.93     124.16     458.83
Season parking     199.92     200.26     882.00
Other parking       13.00       6.50       0.00
Cashcard top-up     10.00      10.00     147.16
Maintenance        213.50     383.50   1,750.00
Parts               26.00     105.00      18.70
Fine                 0.00      10.00      30.00
Total              870.12   1,139.89   5,819.03
Avg per month       72.51      94.99     484.92

I really under-utilized the MX-5. (You can tell from the petrol used.) I didn't drive it as often as I said I would. I should really drive more often this year.

I also didn't ride my CB400F sufficiently.

Insurance went up, as usual. :rant:

The MX-5's cashcard top-up is mostly for my office's parking. It works out to be ~$12/month, or an extra day (beyond the claim limit). It may increase this year as I drive more often to work.

All three vehicles required some servicing. I changed the battery and tyres for both bikes, and the gearbox for the MX-5. I don't foresee any servicing needed this year (except for oil change).

How much video can I fit on a DVD?

One day, my father wanted to burn some videos to give to a friend, so he asked me how much video a DVD could hold. I told him it can hold from 63 mins (4.7 GB / 9.8 Mbps) to 9.4 hours (8.5 GB / 2 Mbps).

(2 Mbps is not the minimum allowed MPEG-2 bitrate, but it is the lowest you want to use in real life. Also, I'm using just the two most common DVD formats.)

Single/dual layer was easy to explain (2x capacity with some overhead), but variable bitrate required a bit more explanation.

Since the DVD capacity is fixed (4.7 GB or 8.5 GB), shouldn't there be just one (or two) maximum run-time? This was where my father got lost.

Well, you can control the bitrate: higher bitrate means higher quality and lower run-time. (8.5 GB / 6 Mbps = 3.15 hours)
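All the run-times above come from the same formula: capacity in bits divided by bitrate. A Python sketch (using decimal GB and Mbps, as in the text):

```python
def runtime_hours(capacity_gb, bitrate_mbps):
    # capacity in decimal gigabytes, bitrate in megabits per second
    seconds = capacity_gb * 1e9 * 8 / (bitrate_mbps * 1e6)
    return seconds / 3600

print(f"{runtime_hours(4.7, 9.8):.2f} h")  # ~1.07 h, about 64 minutes
print(f"{runtime_hours(8.5, 2.0):.2f} h")  # ~9.44 h
print(f"{runtime_hours(8.5, 6.0):.2f} h")  # ~3.15 h
```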

My father was still lost. Why doesn't video use the same amount of space?

The answer is, video is compressed. Compression means using less space than the original. What's more, video is compressed lossily, meaning the least-important data (with respect to the target bitrate) is thrown away. The more you throw away, the less it looks like the original scene and hence the worse it looks.

My father still didn't quite understand, but I think he'll get it over time. :lol:

How much is video compressed?

It is easy to calculate. An uncompressed 720x480 24 fps 12-bit 4:2:0 YUV stream is ~12.4 MB/s (720 * 480 * 24 * 1.5 bytes/pixel; an 8-bit luma channel plus two quarter-resolution chroma channels works out to 12 bits per pixel). The max DVD bitrate is 9.8 Mbps (~1.2 MB/s), so the compression ratio is ~10x.

MPEG-2 requires around 6 Mbps to look good, so the typical compression ratio is ~17x.
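As a sanity check, here is the arithmetic in Python (my own back-of-envelope sketch, counting 1.5 bytes per 4:2:0 pixel; not taken from the DVD spec):

```python
width, height, fps = 720, 480, 24
bytes_per_pixel = 1.5   # 4:2:0 YUV: 8-bit luma + two quarter-res chroma = 12 bits/pixel

raw_bytes_per_sec = width * height * fps * bytes_per_pixel  # 12,441,600 ~= 12.4 MB/s

def compression_ratio(bitrate_mbps):
    # raw rate divided by the compressed rate (Mbps converted to bytes/s)
    return raw_bytes_per_sec / (bitrate_mbps * 1e6 / 8)

print(f"{compression_ratio(9.8):.1f}x")  # ~10.2x at the max DVD bitrate
print(f"{compression_ratio(6.0):.1f}x")  # ~16.6x at a typical bitrate
```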

Is this a new decade or what?

2010 marks the start of a new decade. No big deal, every year is the start of a new decade. :lol:

The millennium is confusing. 2001 marks the start of the third millennium, but most people celebrated it in year 2000. This is a rare instance where 0- or 1-based numbering trips up real life.

I don't really care because our calendar is a mess:

  • We didn't account for leap years properly early on, so seasons go out of sync.
  • Some countries lost 11 days in 1752 when they switched calendars.
  • Jesus Christ himself was born around 4 BC, not AD 1!