My Rambling Thoughts

Video encoding script

I use a couple of scripts to help me encode whole directories at a time. Each directory contains one season of videos and can have its own set of options. There are pre/post processing options and steps. It can process just a subset of the files so that it can be parallelized.
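
For flavour, here is roughly what the per-directory flow looks like. This is a minimal sketch, not the actual script; encode.opts, pre_process, post_process, FIRST and COUNT are made-up names:

cd "$SEASON_DIR"
. ./encode.opts                         # per-directory options: CRF, WIDTH, ...
files=(*.mkv)
for f in "${files[@]:FIRST:COUNT}"; do  # encode only a subset, for parallelization
    pre_process "$f"
    HandBrakeCLI -i "$f" -o "out/$f" -e x264 -q "$CRF" -w "$WIDTH"
    post_process "$f"
done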

It works pretty well, but it is not without its Achilles' heels.

Shortcomings

Usually, I just need to set one dimension of the output resolution. However, if it is an odd-ball resolution or "non-standard" aspect ratio (AR), I need to set both dimensions, effectively hardcoding it.

Denoise parameters, when needed, must be set explicitly.

There are no episode-specific settings.

It does not handle multi-volume, multi-disc hierarchies.

It cannot encode selected chapters only.

What I hope to change

I should be able to specify the aspect ratio and set just one dimension of the output resolution. This is complicated by "Scope" anamorphic DVDs, i.e. 2.35:1 films.

It should use generic denoise parameters. HandBrake 0.10 has a new denoiser, and I hope to switch to it transparently.

It should handle multi-volume and multi-disc shows via nested directories.

It should support encoding chapters into separate files in an automated fashion.

It should support a source type parameter so that it can vary the CRF. I find that DVDs take a slightly higher CRF (by 2) than Blu-rays.
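
A sketch of how the source type parameter might work; the variable names and the base CRF are made up:

BASE_CRF=20                            # example value only
case "$SOURCE_TYPE" in
    dvd)    CRF=$((BASE_CRF + 2)) ;;   # DVDs tolerate a higher CRF
    bluray) CRF=$BASE_CRF ;;
esac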

Checking x264 for 10-bit'ness

x264

x264 --help

Look for

Output bit depth: 10 (configured at compile time)

ffmpeg

ffmpeg -h encoder=libx264

8-bit output:

supported pixel formats: yuv420p yuvj420p yuv422p yuvj422p yuv444p yuvj444p nv12 nv16

10-bit output:

supported pixel formats: yuv420p10le yuv422p10le yuv444p10le nv20le
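
Both checks are easy to script; one-liners along these lines should do (the grep patterns are case-insensitive to be safe):

x264 --help | grep -i "output bit depth"
ffmpeg -h encoder=libx264 2>/dev/null | grep -i "pixel formats"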

10-bit or not

HandBrake not supporting 10-bit x264 is a bummer. My encoding workflow is centered around it.

I have to explore an ffmpeg-centric workflow, but I have two major concerns: I-frames at chapter stops and denoising.
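
In principle, both look solvable: ffmpeg can force I-frames at given timestamps and has denoise filters. A sketch, assuming chapter stops at the 10- and 20-minute marks (filenames and CRF are placeholders):

ffmpeg -i in.mkv -c:v libx264 -crf 20 -vf hqdn3d \
    -force_key_frames "0:10:00,0:20:00" -c:a copy out.mkv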

HandBrake on RHEL 6.6 and 10-bit issue

Compiling on RHEL 6.6

After using HandBrakeCLI 0.9.9 on RHEL 6.x for over two years, I finally decided to upgrade to 0.10.2. As usual, I followed the guide at CompileOnLinux.

I got an error quickly when I ran make, because /tmp was mounted as noexec. No problem, remount it as exec.

On re-running make, I got this:

ATTENTION! pax archive volume change required.
Ready for archive volume: 1
Input archive name or "." to quit pax.
Archive name >

It is sufficient to type '.' to continue. The reason is that fdkaac tries different methods to untar, and it looks like the pax syntax has changed, at least on RHEL 6.6. I realized this when I went back and found that I could not compile HandBrakeCLI 0.9.9 either!

After this small hiccup, I ran into compilation errors. HandBrake now uses system libraries for lame, OGG, Theora, Vorbis and x264, among others, so they have to be compiled and installed first. The list is in make/include/main.defs lines 43-53. The alternative is to move the list out of the if block.

After that, the build went well. It was able to encode videos (as expected).

Issue with 10-bit

Now that I was able to compile HandBrakeCLI, I decided to try out 10-bit encoding.

I compiled x264 with --bit-depth=10 and reinstalled the binary/libraries.

Then, I recompiled HandBrakeCLI.
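
For reference, the x264 part went something like this (a sketch; the default install prefix is assumed):

cd x264
./configure --bit-depth=10 --enable-shared
make
sudo make install && sudo ldconfig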

Unfortunately, it did not work. When I tried to encode a video, it showed:

x264 [info]: profile High 10, level 1.3, 4:2:0 10-bit
...
...
x264 [error]: This build of x264 requires high depth input. Rebuild to
support 8-bit input.
...

The output file could not be played.

I compiled ffmpeg on the same system and it was able to do 10-bit encoding using libx264. This confirmed that I had enabled 10-bit x264 correctly.
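
The ffmpeg test was essentially a one-liner: request a 10-bit pixel format from libx264 (filenames are placeholders):

ffmpeg -i test.mkv -c:v libx264 -pix_fmt yuv420p10le -crf 20 out-10bit.mkv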

Finally, I posted a question on HandBrake's forum — really a last resort — and got the reply that HandBrake is 8-bit only. :duh: I should have asked first.

What 0.10 brings to the table

  • x265
  • New denoise filter (nlmeans)
  • New AAC encoder

Currently, x265 encodes 15 - 20 times slower than x264. On my first test video, x264 encodes at 15 - 20 fps, but x265 encodes at a mere sub-1 fps! :-O

Assuming the quality is similar, the file size is 20% to 30% smaller.

On the other hand, I can't wait to try out the new denoise filter on my grainy videos. :lol:
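
Trying both new features from the CLI should be straightforward. The flag spellings below are from memory, so double-check them against HandBrakeCLI --help:

HandBrakeCLI -i in.mkv -o out-x265.mkv -e x265 -q 22
HandBrakeCLI -i in.mkv -o out-nlmeans.mkv -e x264 -q 20 --nlmeans=medium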

Potholes on the road to financial independence

finance

Arbitrage purchases

Local stuff is expensive, no thanks to the triple threat of high rental, wages and utilities. There are three places to buy the same stuff for cheaper: Amazon, Taobao (淘宝网) and Malaysia.

Amazon is almost hassle-free, provided the item qualifies for free shipping. Taobao requires you to be able to read Chinese. For forays into Malaysia, you probably need a car, although it is possible to use public transport.

We are talking about savings of 20 - 40%. This can bring down the effective cost of living.

Over-insured

It is easy to over-insure. Plus, most insurance policies have a savings or investment portion. Agents like these because they get more commission. The returns are only projections, and wildly optimistic ones at that.

If it were up to me, I would buy only term insurance, and only life and critical illness. Often, I think we can get by with just the company's medical coverage.

And of course, keep yourself healthy!

Enforced savings

Get the feeling that your savings never seem to grow? The answer is to set explicit goals.

The first goal is $10k/year, which translates to $833/month. If this is too easy, then set a goal of $15k/year ($1,250/month) or $24k/year ($2,000/month). It may sound easy to save more as you earn more, but expenses have a way of catching up with your income.

Note that I use dollars instead of percentages, because the latter can be a little abstract. But you should try to save up to 30% of your net income — it gets increasingly difficult after that.

On one hand, it can be difficult to save $800 - $1,200 per month. On the other hand, $10k - $15k does not seem significant. However, that is myopic. In three years, that $10k/year will grow to $30k — if you keep at it. And that is finally sufficient for some big ticket purchase or investment.

While it is tempting to save as much as possible, it should be for a specific goal and for a short period only, say six months. The reason is below.

Live in the present

It is possible to be so frugal, or so obsessed with achieving the FIRE (Financial Independence, Retire Early) dream, that we forget to live in the present. Don't waste it. It will never come back again.

Not focused on work

Sometimes, we spend so much time looking for ways and means to generate "passive" income that we forget that we need to work! We actually shortchange ourselves in two ways: by not improving our work-related skills, and by not working to further our careers.

Passive income streams, feasible?

finance

A recent blog article mentioned these four "truly passive" income streams that the average Singaporean should take advantage of:

  • rent out a room
  • buy dividend stocks
  • credit card rewards
  • bank interest

Are they really feasible?

Rent out a room

There are a few conditions: your flat must be desirable (e.g. location, cleanliness and quietness), you must have a spare room, and you must not mind the loss of privacy.

This can be a viable strategy. However, this should be considered before you buy a property. For example, you might want to buy a 4-bedroom flat near an MRT station.

It should be easy to rent out a common room for $500/month.

Buy dividend stocks

To a certain extent, low-risk dividend stocks can be treated as high-yield "bank interest". However, there is capital risk, which is often understated.

If you invest $30,000 at 5%, that is $125/month.

Credit card rewards

Credit cards can give up to 3% rebate, which means you "save" $24 for every $800. I put "save" in quotes because it is easy to overspend trying to get the rewards, when the simple alternative is not to spend at all.

I use $800 here because I think it is time to rethink your spending if it is exceeded.

Bank interest

If you put $30,000 in a bank that pays 2% annual interest, that is $50/month.

The takeaway

Not all streams are created equal. Some are fixed, while others scale. Rental and credit card rewards are more-or-less fixed. Dividend stocks and bank interest scale; the more you invest/save, the more the rewards.

Of these four streams, rental gives the best return out of the box and it takes a while — a long while — before dividend stocks and bank interest can match it.

Five places in our Solar System we need to go today

If I were dictator-for-life for planet Earth, I would send robotic probes to these places right away:

  • Titan, moon of Saturn
  • Europa, moon of Jupiter
  • Enceladus, moon of Saturn
  • Venus
  • Triton, moon of Neptune

Titan has a dense atmosphere and surface liquid! What are we waiting for?

Europa and Enceladus are frozen ice worlds. But beneath the surface is liquid water. What are we waiting for?

Venus is hell. We think it is due to a runaway greenhouse effect. All the more reason to learn about it to make sure we do not go that way!

Lastly, Triton, which is interesting because it is still geologically active. But it is so far away and so cold that it is lower in priority.

Close encounter with Pluto

The New Horizons space probe, despite zipping through space at 14 km/s (you read that right), still took over nine and a half years to reach Pluto.

But it was there at last, on 14th July 2015.

Pluto, courtesy of New Horizons

This is history. This is also a testimonial of what humanity can do, if it chooses to focus on science.

Here's something I always keep in mind: "Dinosaurs ain't here anymore cos they didn't have a space program." Humanity must not make that mistake.

1215n netbook born again

So, my "Atom server" is now running on life support on the EliteBook 2510p notebook. While it works, I kept thinking whether it was possible to use the Asus Eee PC 1215n netbook.

The reason it did not work was that it booted off the internal HD, and I was unable to enter the BIOS due to the faulty keyboard.

So, let's remove the internal HD?

It was quite simple — there is a pictorial step-by-step tutorial on the Internet. Once done, it did boot off USB. :thumbsup:

And I found that I was able to use an external USB keyboard to enter the BIOS. I am very certain it did not work the last time.

So, the netbook is still usable if I replace the keyboard — for US$23. While I'm at it, maybe I should swap out its 2x1 GB RAM and put in 2x2 GB RAM. And swap the HD for an SSD? :-O

That was when I started to pause, "wait a minute..."

And reminded myself: this machine is history. I do not want to use it anymore. While it is great that it still runs, it is at the end of its road. No upgrades or repairs.

An alternate history

Although Intel's doc says the Atom D525 can only address 4 GB of memory, it is apparently capable of 8 GB RAM. As a result, the 1215n can use 2x4 GB DDR3 SO-DIMM — at a slow-poke 800 MHz FSB.

Add in an SSD, and it would have been one usable 64-bit Windows 7 machine.

I never knew it could address so much memory. When the 1215n first came out, the limit was 2.74 GB even on a 64-bit OS, so it was not cost effective to upgrade to 4 GB RAM. Asus later updated the BIOS to allow the memory to be remapped — which people thought was impossible. But that is water under the bridge.

Second life

The 1215n netbook works perfectly as a headless file server, no upgrades or repairs needed. It runs the same "brain" off USB 2.0. It is workable, but the setup is fragile.

The most economical path forward is for me to clone the Ubuntu installation on the internal 2.5" 250 GB HD and run it off that.
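
The clone itself is a single dd, assuming the USB drive shows up as /dev/sdb and the internal HD as /dev/sda (double-check with lsblk first, and run it from a live/rescue environment with both filesystems unmounted):

lsblk                                 # confirm which disk is which!
sudo dd if=/dev/sdb of=/dev/sda bs=4M
sync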

EliteBook 2510p max memory

The 2510p has only 2 GB RAM, so it is very slow running Windows 7 as it swaps a lot.

To my surprise, I just found that it can take a 4 GB DDR2 SO-DIMM (it has only one slot).

But that is way too late now. No more upgrades for this slow-poke notebook from 2007 either. Not to mention DDR2 RAM is now even more expensive than DDR3 RAM.

How much RAM before ECC is required?

All storage media, from HDs, optical discs and flash to tape, have error correction (EC). In fact, EC is required. If you knew just how unreliable our media are, you'd go back to pen and paper!

All transports, such as Ethernet, WiFi, USB and SATA, also have error detection and/or correction.

All except one — RAM.

RAM suffers from soft errors: a cosmic ray strike flips a bit. This has no effect if the RAM is unused, and it may not even be obvious (data, disk cache), although the data is corrupted.

Data is hard to come by, especially for modern dense memory modules. Modern RAM is a smaller target (good), but denser (less charge per cell, bad) and lower voltage (easier to flip, bad). It is also supposedly designed to be more resilient (good).

Some report 1 bit per 4 GB every 3 days. That seems kind of high. Others claim 1 bit per 1 GB every month. That seems reasonable. Some even claim 1 bit every few years! That seems pretty optimistic.

To me, the error rate should be a function of surface area, density and layout/orientation. Eight 1 GB modules have 8x the error rate of one 8 GB module if they are spread out like a solar array.

So, we know that the error rate is pretty low, thus desktop PCs and notebooks all use non-ECC RAM. But if a computer is run 24/7, it will get hit eventually. Servers that run 24/7 use ECC RAM as standard.

The smallest RAM module that has ECC is 4 GB, but 8 GB is much more common. This does not necessarily mean 4 GB RAM does not need ECC, but that servers, where ECC is commonly used, need large amounts of RAM.

So, when do we need ECC?

IMO, we need ECC everywhere. Silent errors should not be tolerated. Currently, people blame software when computers crash. But is that always true?

Today, Intel enables ECC only on its low-end (Celeron, Pentium, i3) and high-end (Xeon) CPUs, and a C-series workstation motherboard is required. The cheapest ECC option with an Intel CPU:

Part  Model           Price
CPU   Celeron G1620   US$46
M/B   Asrock E3C204   US$145
RAM   4 GB            US$35

US$226. Not cheap, but not exactly unaffordable either.

The second thing is that we need more data. We should monitor the number of soft errors for computers with ECC RAM.

RAM combo, go!

The cheapest 2 GB RAM on Amazon is US$14 (+/-US$1). If you only want 2 GB, that is about the only choice.

There are a few combinations for 4 GB RAM:

Size    MHz   CL    Volts  Price
2 GB    1600  CL11  1.35V  US$14.74
4 GB    1600  CL11  1.5V   US$24.74
4 GB    1600  CL11  1.35V  US$24.99
4 GB    1600  CL9   1.35V  US$27.74
2x2 GB  1600  CL9   1.5V   US$30.99

Is it worth paying US$3 more for CL9? And then another US$3.25 for dual channel?

Dual channel, even at CL11, should outperform single channel at CL9. But is it worth US$1.74?

8 GB RAM:

Size    MHz   CL    Volts  Price
8 GB    1600  CL11  1.5V   US$44.74
8 GB    1600  CL11  1.35V  US$45.74
8 GB    1600  CL9   1.35V  US$49.74
4x2 GB  1600  CL9   1.35V  US$51.74

Price difference is US$7 for the fastest to slowest RAM.

I would get 1.35V over 1.5V, as it is only US$1 different. If I want CL9, I would pay US$2 more for dual channel.

The price difference between the slowest 2 GB RAM config and the fastest 8 GB RAM config is almost US$38.

Based on several reviews, there is a significant difference in only three kinds of workload: memory benchmarks (10+%), IGP (10%) and file compression (5+%). The rest? 1-3%.

Which is the most cost effective power supply of them all?

Power usage taking power efficiency (estimated) into account:

Power  Eff  Cost   Eff@10W  Actual  Eff@20W  Actual
300W   75%  $0     60%      16.7W   65%      30.8W
300W   80%  $30    70%      14.3W   75%      26.7W
250W   85%  $65    75%      13.3W   80%      25W
90W    86%  $130   80%      12.5W   86%      23.3W

My old Atom D510 draws 20W with one HDD (my guess; I have never measured it). The new board should draw just 10W.

Cost per year for 24/7 operation with electricity at 22.41 cents/kWh:

Power  10W     20W
300W   $32.79  $60.46
300W   $28.07  $52.42
250W   $26.11  $49.08
90W    $24.54  $45.74
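
The arithmetic behind the table, shown here for the first row (16.7W actual draw at 22.41 cents/kWh):

# yearly cost = watts/1000 * 24 h * 365 days * tariff
awk -v w=16.7 -v t=0.2241 'BEGIN { printf "$%.2f\n", w/1000 * 24 * 365 * t }'
# prints $32.78; the table rounds slightly differently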

Years to break-even:

Power  Savings@10W  Years  Savings@20W  Years
300W   $0           0      $0           0
300W   $4.72        6.36   $8.04        3.73
250W   $6.68        9.73   $11.38       5.71
90W    $8.25        15.76  $14.72       8.83

The answer is clear. An energy efficient power supply costs too much to make sense.

Electricity cost will increase, so the break-even point will shorten. We are moving up from the low of 20.87 cents/kWh in Apr 2015. It hit 28.78 cents/kWh in Apr 2012, about 30% more expensive than today's rate.

Cost per year with electricity at 26 cents/kWh:

Power  10W     Savings  Years
300W   $38.04  $0       0
300W   $32.57  $5.47    5.48
250W   $30.29  $7.75    8.39
90W    $28.47  $9.57    13.58

Still takes a mighty long time to break-even.

The choice is clear!

After weighing the pros and cons, I decided I would buy the N3150. The main reason is that it is newer and more future-proof:

  • 2 SATA-III ports
  • 2 USB 3.0 ports
  • Triple monitor support
  • 4K (@ 30 Hz)
  • H.265 decoding

The Asrock N3150-ITX has 4 SATA-III ports, 6 USB 3.0 ports and 6 USB 2.0 ports, and can use up to 16 GB RAM. Very interesting.

Now to see if I can find it!

My plan B is Asrock Q1900-ITX. It has 2 SATA-III ports and 4 USB 3.0 ports.

RAM

I need only 2 GB RAM, but I may get a pair for dual channel boost. I expect real-world performance difference to be 1% — I don't use the IGP.

But what kind of RAM?

The common RAM speeds are 1066 MHz at CL7 (6.567 ns), 1333 MHz at CL9 (6.752 ns) and 1600 MHz at CL11 (6.875 ns); the latency in ns is CL divided by the data rate in GT/s, e.g. 9 / 1.333 = 6.752 ns. Slower-clocked RAM seems to have the better latency. Hmm...

The "best" yet affordable I've managed to find is 1666 MHz at CL9 (5.402ns). The timing for Corsair's Vengeance is 9-9-9-24, edging Kingston's HyperX Impact at 9-9-9-27. However, these are 4 GB and above.

Again, I think real-world performance will only be different by 1%.

And if possible, I want DDR3L (1.35V) instead of DDR3 (1.5V).

Who knew there are so many things to look out for in RAM?

Power supply

I just found that mATX power supplies like mine are inefficient at low loads! :cry:

A typical power supply is 75% efficient at 20-80% load. An 80 Plus certified power supply is at least 80% efficient in that range.

Given that my power supply is 300W and the expected load is only 20-30W, that is just 10% load! Power efficiency could be just 60-70%!

Unfortunately, it is not a simple matter of using a more efficient power supply. First, it is almost impossible to find an ATX power supply under 400W now. Even one rated Titanium — which costs a bomb — is only rated at 90% efficiency at 10% load (40W).

Alternatively, I could use a notebook adapter (80-90+% efficient) with a picoPSU (96% efficient). But a picoPSU is not cheap.

I need to calculate how long it will take for a new efficient power supply to break-even. :lol:

Energy efficient low-end server candidates

Year     Tech   TDP   CPU            Speed            Cores/HT  L2    Mem   Price
2010 q1  45 nm  13 W  Atom D510      1.66 GHz         2/4       1 MB  4 GB  $63
2013 q4  22 nm  10 W  Celeron J1800  2.41 - 2.58 GHz  2/2       1 MB  8 GB  $72
                      Celeron J1900  2 - 2.42 GHz     4/4       2 MB        $82
                      Pentium J2900  2.41 - 2.66 GHz  4/4       2 MB        $94
2015 q1  14 nm  6 W   Celeron N3050  1.6 - 2.16 GHz   2/2       2 MB  8 GB  $107
                      Celeron N3150  1.6 - 2.08 GHz   4/4       2 MB        $107
                      Pentium N3700  1.6 - 2.4 GHz    4/4       2 MB        $161

Given that the N3150 is a souped-up N3050 for the same price, I'm not sure why anyone would buy the N3050.

The Celeron N3050, part of the Braswell family, is quite new. For example, Asus just unveiled their mini-ITX m/b a few days ago!

Surprisingly, the N3050 is about 10% slower than the J1800 in CPU performance. In exchange, it cuts power by 40%.

Finally, all N3050 m/b are fanless. J1800 still requires a fan, but it should be inaudible when enclosed.

Memory

CPU            Mem   Channels  Type   Speed
Atom D510      4 GB  1         DDR2   667/800
Celeron J1x00  8 GB  2         DDR3L  1333
Celeron N3x50  8 GB  2         DDR3L  1600

My requirements

I'm fine with either the J1800 or N3150. I would prefer the N3150, but given the base price (US$107 vs US$72), the m/b will be more expensive. Retailers should be trying to clear the old J1800 stock, so I suspect it can be had for a very good price.

The J1800 is twice as fast as the D510, so it should be plenty fast! :lol:

2 GB RAM is sufficient, but 4 GB seems to be the smallest available. I may use 2x 4 GB RAM modules for dual channel operation, which gives up to 5% performance boost at a cost of 1-2W.

I want 2 SATA ports (SATA-II will do for mechanical disks), 2 USB 3.0 ports and 1 Gigabit Ethernet port.

Video can be either VGA or HDMI.

I'm not interested in the graphics processor or 3D performance at all.

Am I forgetting something?

Something that all 24/7 servers should have: ECC RAM. The commonly quoted error rate is 1 bit per 4 GB every 3 days. That seems pretty high!

Unfortunately, for Intel, only Celeron/Pentium G series and Xeon CPUs support ECC, and a C series workstation-class motherboard is needed.

An Atom implodes

Like a star when it runs out of energy, the Atom server was living on borrowed time. The stop-gap measure lasted just one day. It died for good yesterday.

It might take a couple of days — or a few weeks — to look for its replacement, but I want it to limp along in the meantime.

My first choice was an unused Core PC from 2007. It turned out this PC was in an even worse state: it could not even boot up!

Next, I tried to clean the layer of dust off the Atom server's motherboard. It seemed to last a bit longer before it rebooted. So, this failed as well.

As a last-ditch attempt, I used the Core PC's power supply. Nope, the Atom server still rebooted spontaneously.

Then, I hit upon the bright idea of putting the HD in an external USB enclosure and booting it off a notebook! Really, the HD is the server. Who cares about the machine?

I did not have a spare external USB enclosure, so the only way was to borrow one from my 1 TB Seagate Desktop Expansion HD. This drive is already filled to the brim with backup data, so it is very rarely accessed.

It was difficult to pry the enclosure open. I marvelled at the mechanical ingenuity that enabled it to be held tight without using any screws. This is an innovation alright.

The next thing was to find a notebook. I called upon my retired Asus 1215n netbook. It still worked, but it booted the internal HD. I was unable to enter the BIOS Setup because the F2 key was spoilt. Using a USB keyboard did not work. There is a lesson here.

Last choice: my glacially slow but still-in-use EliteBook 2510p. It worked!

It takes much longer to boot up, though. Previously, it took only 10+ seconds. Now, it takes well over a minute. Is USB 2.0 really that slow?

I was half-expecting it not to mount the partitions, because they are now /dev/sdb. It is a good thing I used UUID to identify the partitions, so it works. :nod:
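
This is why I list partitions in /etc/fstab by UUID (taken from blkid) rather than by device name. An entry looks like this (the UUID and mount point here are placeholders):

# /etc/fstab: identify the partition by UUID, not by /dev/sdX name
UUID=3a1f...  /mnt/data  ext4  defaults  0  2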

Bad news: there is no network connection. :-O

Luckily, the reason is that it thinks this is a new network adapter and maps it as eth1. I simply added the eth1 settings to /etc/network/interfaces.
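
The stanza is just a few lines (shown here assuming DHCP; adjust if yours is static):

auto eth1
iface eth1 inet dhcp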

Reboot and voila, the "Atom" is open for business!

One boiling Atom server

Ever since I shifted my Atom server physically to another location, it has hung a couple of times.

A few days ago, it rebooted a number of times before it managed to boot successfully.

Yesterday, it finally failed. It would enter a reboot loop every 10-15s. Still, that was enough time for me to get into the BIOS and see that the CPU was running at 87-89 degrees Celsius.

That seemed rather high. It is only supposed to reach this temperature when the CPU is at 100% load, not when it is idle.

The motherboard comes with a CPU fan, but I detached it a couple of years ago because it was too noisy. It seemed to work, anyway.

Until now. I reconnected the CPU fan and checked the temperature again. It held at a steady 63 degrees Celsius. Wow, what a difference!

It seems to be working fine now. :lol:

This server has been running 24/7 for 5 years and is showing its age. It could be on its last legs. 63 degrees Celsius still seems very high for a low-power Atom CPU.

I would love to run a 5th gen Core-M CPU. They are both fast (compared to Atom) and energy efficient.

RHEL workstation disk allocation 2014

I converted my Linux workstation to use multiple partitions last year — not long after I got it, IIRC.

FS         Space   %Use
/          25 GB   40%
/var       4 GB    44%
/var/log   2 GB    58%
/var/tmp   2 GB    1%
/tmp       10 GB   1%
unalloc    -       -
swap       32 GB   -
/mnt/work  268 GB  60%
/mnt/data  586 GB  90%

/tmp is mounted as tmpfs. It is fine here because the workstation has 32 GB RAM. There is no performance difference.
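
For reference, the tmpfs mount is a one-line fstab entry (the size cap mirrors the 10 GB above):

tmpfs  /tmp  tmpfs  size=10g,mode=1777  0  0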

I keep a large swap (same as the RAM size) because I wanted to hibernate the workstation, though I could not get it to work reliably.

/mnt/data is for large and static files and /mnt/work is for smaller and frequently changed files.

Design intent for work:

  • improve throughput (placed near front of disk)
  • reduce seek time (small partition)
  • reduce file system fragmentation (separate partition)

The size is not picked randomly either:

  • a working set is around 10 GB
  • I can handle at most 10 working sets
  • Add a bit of buffer (multi-user)

A lot of thought goes into this. :lol:

RHEL server disk allocation 2015

Tweaked from my current workstation disk allocation.

FS         Space   %Use
/          25 GB   41%
/var       4 GB    66%
/var/log   4 GB    22%
/var/tmp   2 GB    1%
/tmp       8 GB    1%
unalloc    9 GB    -
swap       8 GB    -
/mnt/work  266 GB  20%
/mnt/data  592 GB  89%

The first five partitions, plus swap and the unallocated space, add up to 60 GB. I keep some free space around in case I need to expand some partitions in the future.

This time, I decided not to mount /tmp as tmpfs, because I can only set aside 6 GB for it — the server has only 16 GB RAM — and more importantly, it did not make any performance difference at all.

From one to many

It takes several steps to convert from one single partition to multiple partitions.

  • For safety and to simplify operations, clone the entire HD. This takes almost an hour even when copying at 172 MB/s.
  • Use gdisk and change to GPT partitioning.
  • Mount / and delete /mnt/work. Now only 13+ GB is used!
  • Unmount / and shrink it to 25 GB (see the sketch after this list). However, it can only be shrunk to 27+ GB, due to journal size. This takes over half an hour.
  • Shrink it again to 25 GB.
  • Create the rest of the partitions.
  • Mount the partitions and copy /var, /var/log, /var/tmp over. Delete the contents of the original dirs, but keep them around as mount points.
  • Mount the cloned / and copy the cloned /mnt/work to /mnt/work and /mnt/data. Create the mount points in / too.
  • Edit /etc/fstab to include the new partitions.
  • Reboot into RHEL rescue mode and reinstall grub.
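
The shrink step, sketched with e2fsck and resize2fs (the device name is assumed; the partition entry itself still has to be adjusted in gdisk afterwards):

e2fsck -f /dev/sda1      # a forced fsck is required before resizing
resize2fs /dev/sda1 25G  # stopped at 27+ GB the first time; a second run got to 25 GB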

Mission accomplished!

A weak spot for retro Monopoly game

The 80th anniversary edition

Especially when it costs just US$15.99 with free shipping. I wouldn't have bought it at the local retail price of S$49.90 ($39.92 after 20% off).

I very much prefer the plainness of the retro version. It is a throwback to a simpler era.

The game, however, is awful. :lol:

A game of Monopoly can be divided into four phases:

  • Acquisition
  • The big trade
  • Powering up
  • Deathmatch

In the first phase, the acquisition, players are just moving around the board slowly and buying up properties. This phase can take a while, but nothing interesting happens. Sometimes, a player may get lucky and get a complete set all by himself, after which he may start to build houses and get a headstart over the others. But his lead may be short-lived, because others may not be as willing to trade with him later.

Once players accumulate enough properties for a multi-way trade to form complete sets, it is time for the big trade! This may be one intensive negotiation session, or it may take place over several smaller sessions. This phase is over when 5-6 of the 9 sets (including railroads but excluding utilities) are completed.

Next, players "power up" their properties with houses. This will happen very quickly if players are flush with cash. There is definitely a first-mover advantage here, given the limited supply of houses.

There are three rules: one house is better than none, three houses is the sweet spot, and stop at four houses.

Once the houses are more-or-less gobbled up, there is nothing left to do but to see who is unlucky enough to land on the "traps". This phase has very high positive feedback. Once a player needs to sell houses, or worse, mortgage his properties, to pay the rent, it is basically game over for him.

Windows 8.1, out-of-box to up-to-date

You need to run Windows Update at least three times.

First, it will download a number of updates, and you need to restart your PC.

Then, it will download Windows 8.1 Update together with a few updates. You will need to restart your PC again.

Finally, it will download a huge bunch of updates that takes "forever" (two hours) to install. Strangely, the CPU is only 30% loaded and there is no disk or network activity. You will need to restart your PC.

Altogether, you need to download some 2.5 GiB worth of updates.

Installing Windows 8.1 and bringing it up-to-date is a huge time sink. I have done this several times since Nov 2014. If only it were easy to slipstream updates... like good old Windows XP.

One possible workaround is to install Windows 8.1 in a Virtual Machine, keep it unactivated and leave all settings untouched, and keep it up-to-date. When it is needed, just make a system image of it.

Windows partition scheme 2015

As a rule, I do not like one single partition.

I arrived at this new partition scheme after some trial and error:

Drive     Home    Work
C: App    60 GiB  80 GiB
D: Cache  15 GiB  20 GiB
E: Data   rest    rest

The App drive contains the swap and hibernation files, so it is effectively around 10 GiB smaller.

The main motive is to reduce file system fragmentation. The App drive should be mildly fragmented, the Cache drive terribly, and the Data drive almost not.

Fragmentation is a non-issue on an SSD, but it still helps to have multiple partitions. It is easier to clear the cache or reinstall the OS, and the data is segregated clearly.

Squeezing the last drop of speed

Workstations

Our z620 Linux workstations are supremely fast: two Xeon E5-2670 v2 CPUs @ 2.50 GHz for a total of 40 logical cores :-O, with 32 GB RAM.

But they are let down by the spinning 7200 RPM HD. It is fast, but it pales beside an SSD.

Until now. Some of us will have a 256 GB SSD. Surprisingly, it won't help with compilation, which is CPU-bound, but it will help with disk-intensive operations — especially ones that involve random access to thousands of files.

Notebook

I also found that my EliteBook 2560p notebook is only due for replacement in one year's time! IIRC, I got it in April 2012, so that means the replacement period is now 4 years!

I got an additional 4 GB RAM module and a 256 GB SSD. I would like an 8 GB RAM module, but it costs 4x the price! 4 GB is borderline for everyday use, but 8 GB is sufficient.

(Or I can switch to Windows 8.1, which is more memory efficient. Or I can do both. ;-))

I got the SSD because my HD is dying — it makes the occasional terrifying clicking sound. One day, it will be the click-of-death.

Why SSD? Because every notebook should have an SSD! :-P The notebook slows to a crawl whenever it needs to access the disk — especially random access. Programs take tens of minutes to install, throttled by the disk. The 2.50 GHz i5-2520M CPU, still pretty decent today, is sitting idle.

Work PC

My workhorse Windows PC, a xw8600 ex-Linux workstation, has been running well for over a year. However, I routinely hit its 4 GB RAM limit and it slows down once it starts to swap. I run many programs on it — it has three monitors and eight virtual desktops. It is almost never shut down or even rebooted, because it takes a while to get everything up and running again.

The three biggest memory hogs are Firefox (by far), Outlook and the ALM client. They inevitably leak memory over time, although the current versions are much better than their earlier incarnations.

So, I asked our local IT support if they have some spare DDR2 RAM modules. They do not. However, they have something better: z600 workstations!

So, I got one. :-D

               xw8600   z600
CPU            X5450    X5650
Speed          3.0 GHz  2.67 GHz
Logical cores  8        24
RAM            4 GB     12 GB

Installation is as simple as moving the HD over.

Objective

The objective of this little exercise is to reduce the lag that lowers the productivity of our day-to-day work.

New OCBC 365 strategy

I'm going to estimate how much of my monthly spending qualifies for the 3% rebate. I'm guessing it is S$150 to S$300.

I'm going to stop once I charge S$300 to S$450 of non-qualifying items to the card.

By doing this, I should get a rebate of 0.75% to 1.5% — or 0.3% if I miss S$600.

Trying to maximize rebate is good, but what I really need to do is to find out why my CC expenses are so crazily high. I used to be able to spend S$500 or less. :sweat:

My brush with OCBC 365

I switched to using the OCBC 365 credit card in the belief that most of my spending qualifies for the 3% rebate. I was mistaken.

Month   Expenses   Rebate  %age   Charges
Aug 14  $2,532.41  $51.24  2.02%
Sep 14  $1,071.56  $15.69  1.46%
Oct 14  $1,356.13  $23.20  1.71%
Nov 14  $3,205.21  $80.00  2.50%
Dec 14  $2,397.96  $25.40  1.06%  $159.90 + $60
Jan 15  $2,277.28  $45.72  2.01%  -$60
Feb 15  $1,454.72  $15.82  1.09%  $86.42 + $60
Mar 15  $582.26    $1.75   0.30%  -$86.42 + -$60
Apr 15  $1,275.18  $13.06  1.02%

What is worse is that I forgot to pay my credit card — twice! — and was slapped with a hefty late charge and interest.

OCBC will waive the late charge of S$60, but it will almost never waive the interest charge.

I was in Malaysia on both occasions (Dec 14 and Feb 15) and overlooked the due date. OCBC did not accept my reasons.

I finally got them to waive the interest charge by switching to GIRO to pay my credit card bill, thereby ensuring I will never be late again.

The single interest charge of S$159.90 basically wipes out most of my cash rebate. And that makes me very sour on the card — and the bank.

Going forward, I need to consider two things. First, am I able to meet the minimum S$500 spending to earn the 0.5% bonus interest on my 360 Account? That translates to S$5 every S$1,000 per annum.

Second, am I able to hit S$600 to get 3% rebate — with sufficient qualifying items? I need a better strategy.

So far, my CC expenses are frightening! :-O

Resizing disk partitions

On one of my machines, the 50 GiB OS partition is always on the brink of being full. It has just 3 to 5 GiB free.

Finally, I needed to install Visual Studio 2013 and there was just not enough space.

I had no choice but to resize it. There is a giant 415 GiB data partition adjacent to it, of which just 28 GiB is used.

Windows does not provide a way to move a partition, so I used GParted Live.

I shrank the data partition by 30 GiB and moved it "to the right" to make space. It took 5 hours.

I have to ask, why?

Why couldn't it stop once it moved the used data? It should be smart enough to skip the free space.

If I had first shrunk the data partition to 30 GiB, moved it to the right, then expanded it back, it would have taken maybe just 15 minutes.

Takeaway

50 GiB is not sufficient for a Windows 7 "development" machine.

Resizing partitions is a slow operation, but it can sometimes be optimized — manually.

The 1 MB/s barrier

A couple of months ago, I noticed that the network throughput of my 24/7 Atom server was limited to 1 MB/s.

It was strange. First, I attributed it to my 2.4 GHz WiFi. Later, when I switched to the 5 GHz WiFi, I attributed it to the mobile app or the router.

I finally knew something was wrong when high-motion scenes of a 720p video could not play smoothly from my server. That should not happen.

I checked the network tab of my notebook's Task Manager. Throughput was capped at 9.8 Mbps, despite being connected at 300 Mbps.

Suddenly, it struck me. The 9.8 Mbps rate is awfully close to 10BaseT. Surely I'm not running that slowly? :-O

I ran ethtool eth0 on my server and got

Speed: 10Mb/s

Oops. :sweat:

Restoring it to full speed:

ethtool -s eth0 speed 100 duplex full

I don't know if this will stick after power cycle. (Update: it does not.)
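
One way to make it persistent on Ubuntu is a post-up hook in /etc/network/interfaces (a sketch; match it to your actual stanza):

iface eth0 inet dhcp
    post-up ethtool -s eth0 speed 100 duplex full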

Now the network throughput reaches 20.x Mbps — still on the low side. The same high motion scenes now play better, but there is still occasional stuttering.

One mystery remains: when was the speed reduced and why? Was it due to Ubuntu 14.04, the router or cable?

Update: I used a new Cat 5e cable and it showed 100 Mb/s. I threw the old cable away.