My Rambling Thoughts

Time to worry about lease

News: Buyers who pay high prices for old flats face reality check

Date: 29 Mar 2017. Source: ST.

Minister's cautionary note that not all old flats undergo Sers may force buyers to weigh not just location and size

Last year, Ms Siah Yuet Whey bought a Housing Board flat that is older than she is.

She is 28 years old. It is 44.

This means that she will most likely outlive its lease - which runs out in 55 years. She will be 83 then.

That did not stop her and her husband, 31, from paying more than $700,000 for the three-room unit in Jalan Ma'mor, in Whampoa.

At 861 sq ft, it works out to $854 per sq ft - the third-highest amount paid last year for flats with less than 60 years of lease left. "It is a rare terraced unit in an area with a lot of character, and it does not feel like it is very old at all. We think it is a fair price," said Ms Siah.

Many people will blame the Government for this situation, but it is the people themselves who are ignoring reality.

The Government has been discouraging the sale of old flats via a "subtle" market mechanism: loan restrictions. But too many people are cash rich.

People think there will be value left after the lease expires, despite the black-letter agreement saying otherwise. Now the Government has to come out and say it.

Logistically, it is impossible to replace all the old flats. There are just too many of them. One-third of HDB flats are now more than 30 years old (pre-1987).

IMO, when a significant number of flats are 50 years old — in 20 years! — the Government will be forced to come out with some extension scheme.

It's going to be a tough inspection

It will be harder to pass inspection from next year onwards.

Cars:

Registered   Now       Apr '18
>= 1/2001    3.5% CO   1% CO, 300 ppm hydrocarbons
>= 4/2014    3.5% CO   0.3% CO, 200 ppm @ 2k RPM

Motorcycles:

Registered    4-stroke           2-stroke
>= 7/2003     2,000 ppm          7,800 ppm
>= 10/2014    1,000 ppm, 3% CO

This kills two birds with one stone. First, it ensures cleaner vehicles — Singapore has 956,430 vehicles! Second, this might deter people from renewing their car's COE.

If the car cannot pass inspection, its road tax cannot be renewed...

Scaling interfaces

Now that we know how to scale data access and operations, we quickly bump into the next bottleneck: the interface.

If the interface is designed to accept only one entry at a time, we cannot scale even if we want to. For example:

ip_addrs=($a $b $c $d $e)

for ip_addr in "${ip_addrs[@]}"; do
  curl "$svs_url?cmd=update&ip_addr=$(urlencode $ip_addr)"
done

This calls curl five times. And each time, the server can only process one entry.

The great thing about dynamic languages is that they allow dynamic types, so let's make use of that:

ip_addrs=($a $b $c $d $e)

ip_addr_qs=
for ip_addr in "${ip_addrs[@]}"; do
  # [] must be percent-encoded (%5B%5D), or curl treats it as a glob pattern
  ip_addr_qs="$ip_addr_qs&ip_addr%5B%5D=$(urlencode "$ip_addr")"
done

curl "$svs_url?cmd=update$ip_addr_qs"

It is a convention (started by PHP?) that the [] suffix means an array. Let's make the server accept both scalar and array:

$ip_addr_arr = $_GET["ip_addr"];
if(!is_array($ip_addr_arr))
  $ip_addr_arr = array($ip_addr_arr);

By doing this, we only call curl once and the server can process the entries as a batch.

As a rule of thumb, if an interface is called in a loop to process entries, it should allow multiple entries to be passed in with one call.

There are some "designers" who resist this. Sorry, they are wrong.

Note: in this example, I use the GET method to do processing. This is not good practice, because GET is supposed to be free of side effects and cacheable; updates should go through POST. I make this mistake all the time.

Scaling operations

Suppose we want to check the status of a bunch of devices on the network. Obviously we start with one:

$status = check_status($ip_addr);

Then we scale it up with a for loop:

foreach($dev_arr as &$dev)
  $dev["status"] = check_status($dev["ip_addr"]);

Um, no.

The reason is very simple. Network operations are very slow. A single check may take 50 to 100 ms. Just checking 10 devices will take 500ms to 1s!

We need to check the devices in parallel.

In traditional programming, a logical way is to make the program multi-threaded. In a naive implementation, we will spawn one thread per device. But this can overwhelm the machine temporarily if we are not careful. A smarter implementation will use a thread pool to check n devices in parallel.

If we are restricted to one thread (this is a common restriction among scripting languages), we have to use the select() pattern.

We check the device status with curl, and curl supports a mode of operation called multi curl: it allows multiple curl operations to run at the same time. It is hard enough to get right that it is best written once as a helper function and reused across projects.

In addition to handling the multi curl state machine, we need to make decisions such as:

  • Do we check all devices at once or n at a time?
  • If n, do we check one batch of n at a time or do we roll in new devices as old ones are done?
  • Do we return the status once it is available or do we wait for all to be available?

Our code now looks like this:

check_all_status($dev_arr);

All the work is hidden.
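Below is a minimal sketch of what such a helper might look like, using PHP's curl_multi API. The status URL, timeout and result handling are placeholder assumptions; it checks all devices at once and waits for all results (the simplest of the choices above):

function check_all_status(array &$dev_arr)
{
  $mh = curl_multi_init();
  $chs = array();

  foreach($dev_arr as $i => $dev) {
    // Placeholder URL; adapt to the real device API.
    $ch = curl_init("http://" . $dev["ip_addr"] . "/status");
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 5);
    curl_multi_add_handle($mh, $ch);
    $chs[$i] = $ch;
  }

  // Drive the transfers; curl_multi_select() blocks until there is activity.
  do {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh);
  } while($running > 0);

  foreach($chs as $i => $ch) {
    $dev_arr[$i]["status"] = curl_multi_getcontent($ch);
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
  }
  curl_multi_close($mh);
}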

Checking 10 devices now takes only as long as the slowest device. It will typically take 50 to 100 ms — the same as checking one device!

This really shows the importance of scaling.

It ain't over

What if we need to query the device again, depending on the result from the first check?

Previously, it was very clean:

foreach($dev_arr as &$dev) {
  $dev["status"] = check_status($dev["ip_addr"]);

  if($dev["status"] == something)
    check_detailed_status($dev["ip_addr"], url_1);
  else
    check_detailed_status($dev["ip_addr"], url_2);
  }

But this is the slowest code you can ever write.

After modifying to use multi curl:

check_all_status($dev_arr);

$next_dev_arr = array();

foreach($dev_arr as $dev) {
  if($dev["status"] == $something)
    $next_dev_arr[] = $url_1;
  else
    $next_dev_arr[] = $url_2;
}

check_all_detailed_status($dev_arr, $next_dev_arr);

Better, but not optimal. There is a gap between the first and second part — the first part must complete before the second part starts.

To be optimal, we need to be able to overlap the two parts. When a device finishes the first part, it will go on to the second part.

Needless to say, the code is now much more complicated.
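To make the overlap concrete, here is one possible shape of the code — again only a sketch. It uses curl_multi_info_read to start a device's detailed check as soon as its first check returns. The URLs, the $something test and the result keys are placeholders carried over from the code above:

function check_all_overlapped(array &$dev_arr, $something, $url_1, $url_2)
{
  $mh = curl_multi_init();
  $chs = array();      // current curl handle of each device
  $phase = array();    // 1 = status check, 2 = detailed check
  $pending = 0;

  foreach($dev_arr as $i => $dev) {
    $ch = curl_init("http://" . $dev["ip_addr"] . "/status");
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
    $chs[$i] = $ch;
    $phase[$i] = 1;
    $pending++;
  }

  while($pending > 0) {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh);

    while($info = curl_multi_info_read($mh)) {
      $ch = $info["handle"];
      $i = array_search($ch, $chs, true);
      $body = curl_multi_getcontent($ch);
      curl_multi_remove_handle($mh, $ch);
      curl_close($ch);
      $pending--;

      if($phase[$i] == 1) {
        // First check done: roll this device into its detailed check
        // immediately, without waiting for the other devices.
        $dev_arr[$i]["status"] = $body;
        $ch2 = curl_init($body == $something ? $url_1 : $url_2);
        curl_setopt($ch2, CURLOPT_RETURNTRANSFER, true);
        curl_multi_add_handle($mh, $ch2);
        $chs[$i] = $ch2;
        $phase[$i] = 2;
        $pending++;
      } else {
        $dev_arr[$i]["detailed_status"] = $body;
      }
    }
  }
  curl_multi_close($mh);
}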

And this is another lesson: the structures of the simple code and the optimal code are totally different. You cannot modify one into the other; it must be totally redesigned and rewritten.

Scaling data access

Adding a new row to SQLite is a basic operation. Can it go wrong?

INSERT INTO tbl(c1, c2, c3) VALUES (v1, v2, v3);

Scaling it naively to 5 rows:

INSERT INTO tbl(c1, c2, c3) VALUES (v11, v12, v13);
INSERT INTO tbl(c1, c2, c3) VALUES (v21, v22, v23);
INSERT INTO tbl(c1, c2, c3) VALUES (v31, v32, v33);
INSERT INTO tbl(c1, c2, c3) VALUES (v41, v42, v43);
INSERT INTO tbl(c1, c2, c3) VALUES (v51, v52, v53);

It works, but performance drops... drastically. It is not really noticeable at 5, but it is at 50, and 50 is not really a big number.

What went wrong?

By default, each operation is an implicit transaction. To execute multiple statements, we should use a bulk statement or wrap them in a transaction.

This works (from SQLite 3.7.11 onwards):

INSERT INTO tbl(c1, c2, c3) VALUES (v11, v12, v13),
  (v21, v22, v23),
  (v31, v32, v33),
  (v41, v42, v43),
  (v51, v52, v53);

Or this:

BEGIN TRANSACTION;
INSERT INTO tbl(c1, c2, c3) VALUES (v11, v12, v13);
INSERT INTO tbl(c1, c2, c3) VALUES (v21, v22, v23);
INSERT INTO tbl(c1, c2, c3) VALUES (v31, v32, v33);
INSERT INTO tbl(c1, c2, c3) VALUES (v41, v42, v43);
INSERT INTO tbl(c1, c2, c3) VALUES (v51, v52, v53);
END TRANSACTION;

An INSERT statement with implicit transaction may take 2ms, so 50 rows will take 100ms. A bulk statement or one transaction takes just 4 - 10 ms. The overhead is that significant.
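For completeness, the transaction version from PHP might look like this. This is a sketch using PDO; the database file name and $rows are made up for illustration:

$db = new PDO("sqlite:example.db");
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// One explicit transaction around all the inserts.
$db->beginTransaction();
$stmt = $db->prepare("INSERT INTO tbl(c1, c2, c3) VALUES (?, ?, ?)");
foreach($rows as $row)
  $stmt->execute($row);    // $row is array(v1, v2, v3)
$db->commit();

As a bonus, the prepared statement is parsed only once, no matter how many rows we insert.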

Takeaway

Database access is one of the most fundamental operations. Being correct is not good enough. We have to be optimal.

Distributed video encoding

The first version of vtec is finally done! :clap:

It is a distributed video encoder. It distributes videos to a farm of machines to encode. This is useful if there are many videos to encode.

There are three components:

  • video encoder
  • webserver that has the job details
  • distributed workers

The implementation is simplified by the fact that the machines can access one another via NFS over a Gigabit network.

Correction: only the engine part is done. There is a progress webpage — rendered purely in PHP, no JavaScript, a first for me :lol: — but there is no webpage or REST API to add jobs. I add them to the SQLite database directly. :-O

The video encoder is a frontend to HandBrakeCLI. It supports dir-level encoding options, pre- and post-processing, and naming the output files in a consistent manner. This is an entire solution in its own right and has been more-or-less field tested.

There are three parts to the distributed workers: the worker itself, a controller, and a progress monitor. They are all shell scripts.

The worker polls the job server for new jobs and calls the video encoder. There can be multiple workers per machine. Each uses a pre-defined set of CPU cores. Originally, they slept for short durations (no longer than 2 mins) in order to respond quickly to a new job. Now, they are put into a long sleep* and the per-machine controller wakes them up.

The per-machine progress monitor sends the job progress to the job server. The worker cannot do this because it calls the video encoder synchronously and is blocked while waiting for it to finish. The progress monitor goes into a long sleep when there are no active jobs. This functionality has been folded into the controller. It makes the controller a little more complex, but there is one less command to run.

* What is long sleep? We make the script block somehow (using 0% CPU) and then send it a signal to make it resume. It's like an interrupt. :lol:

Scaling

This architecture came about because the worker was developed first. Then with multiple workers, I found the short sleep to be inefficient. Hence the controller.

A better solution is to have just the controller and let it spawn workers as needed (up to the predefined limit for that machine).

An even better solution is to have just one controller per farm and let it create workers as needed remotely through ssh.

This illustrates the scaling problem. A solution that works may not scale optimally to a large data set. It is often necessary to redesign.

Why not just design a large-scale solution upfront? There are three reasons. First, it is a lot more work. Second, we do not know if we scale correctly (too little, useless; too much, overkill). Third, it may not be needed.

I still prefer to do it the old-fashioned way. Do a simple solution, see its bottlenecks and then decide how much to redesign.

For example, I was not sure what the I/O load would be on the file server — where the files reside (which can be different from the job server) — when there are 20+ concurrent encodings. But from preliminary results, it seems to be negligible.

Small touches

After encoding, a worker will wait (for a duration depending on its 5-min load) before getting a new job if there are idle workers. This is to let other, less loaded workers have first dibs.

If there is no job, the job server will return a sleep time to the controller, depending on whether other controllers are waking up soon. This allows the farm to respond to new jobs in a timely manner.

What's next

Each worker uses a fixed number of cores. It is not adaptive. If we have half as many videos as workers, we can speed up encoding by pausing half the workers and letting the other half use twice as many cores.

What if we only have one video to encode? Only one worker is doing the work. We can break up the video into chunks, let each worker encode a chunk, then merge the chunks back into one stream when all are done. This is especially helpful for HEVC, which is as slow as molasses.

Tiered motorcycle ARF from Budget 2017

Effective immediately (from the second Feb 2017 COE bidding), motorcycle ARF will be tiered:

OMV          ARF
First $5k    15%
Next $5k     50%
Above $10k   100%

As usual, LTA claims this is okay because the majority of buyers are not affected.

LTA does not have statistics by motorcycle OMV, so we will use the motorcycle CC as a proxy.

CC      2006      2011      2016
<=200   110,326   110,188   97,924
<=500   21,720    21,575    23,237
>500    9,832     13,917    21,278

Indeed, there is a growing trend towards big bikes.

10% of de-registered motorcycle COEs go into cat E. That has been blamed for fewer COEs. LTA will stop that now. But really, the elephant in the room is that more motorcycle COEs are being renewed:

#Years   2006      2011      2016
<=10     108,230   104,186   86,535
<=20     19,667    30,251    46,823
>20      13,984    11,243    9,081

It is a vicious cycle: a lower COE quota leads to higher COE prices, causing more renewals, resulting in an even lower quota.

How to reduce COE renewal?

For bikes, it is a no-brainer. There is no PARF rebate to get back, so why not pay the prevailing COE premium and keep your bike?

I have the ultimate killer suggestion for LTA: owners have to pay half the vehicle's OMV to renew its COE.

For cars, this means giving up the PARF and then paying half the OMV.

This will prevent people from renewing COEs in perpetuity (since there is no further penalty at the 20th year).

Locking in water price for the next 17 years

The price of water is broken down into four parts: tariff, water conservation tax (WCT), waterborne fee (WBF) and sanitary appliance fee!

                 7/2000   7/2017   7/2018
Tariff           $1.17    $1.19    $1.21
WCT              30%      35%      50%
Total price      $2.10    $2.39    $2.74
Tariff (>40m3)   $1.40    $1.46    $1.52
WCT              45%      50%      65%
Total price      $2.61    $3.21    $3.69

Still need to add GST to the total price. :-O

It is instructive to see the previous water tariffs. The last big increase was done over four years:

                 <7/97   7/97    7/98    7/99
Tariff           $0.56   $0.73   $0.87   $1.03
WCT              0%      10%     20%     25%
WBF              $0.10   $0.10   $0.20   $0.25
Tariff (>20m3)   $0.80   $0.90   $0.98   $1.06
WCT              15%     20%     25%     30%
WBF              $0.10   $0.15   $0.20   $0.25
Tariff (>40m3)   $1.17   $1.21   $1.24   $1.33
WCT              15%     25%     35%     40%
WBF              $0.10   $0.15   $0.20   $0.25

Water was so cheap before 1997? Wow, I don't remember.

Water tariff is tiered. PUB should create a new category to encourage people to conserve water. My proposal:

         Price
<5m3     $2.10
<40m3    $2.74
>=40m3   $3.69

But the Government prefers to give out (annual) U-Save Rebate:

1-, 2-room   $260 + $120
3-room       $240 + $100
4-room       $220 + $80
5-room       $200 + $60
EC           $180 + $40

It has two advantages: it targets only Singaporeans, and it makes the recipients beholden to the Government.

Just a quick note. GST has not increased for 10 years already. :lol:

Is the Earth typical?

The Earth is special in our Solar System:

  • it has a magnetic field*
  • it has surface water1
  • it has plate tectonics2

* I'm going to postulate that this determines if a planet is "alive" or not.

1 Because it is in the habitable zone. But 3 billion years ago, our Sun was 30% less bright and Earth should have been too cold for surface water. This is the faint young Sun paradox.

2 Probably possible due to the lubricating effect of water.

What happened to Venus?

Venus cannot be ignored. It is our sister planet — 95% of Earth's diameter, 80% of its surface area and 81% of its mass. Yet it is so different.

  • It has no magnetic field
  • Its surface temperature is 462°C
  • Its surface atmospheric pressure is 92 times Earth's
  • It has a retrograde rotation of 243 Earth days (vs its orbital period of 225 days!)
  • It was totally resurfaced by volcanic activity 500 million years ago

The last two points are very interesting to me. When did Venus start to have its slow retrograde rotation? (Basically, it was game over once this happened.) What caused it? Was it caused by an impact? (Likely.) But where was the impactor?

Was it recent or in the distant past? Could it have had life before that? Venus was in the optimal habitable zone 3 - 4 billion years ago.

What happened 500 million years ago?

Planetary scientists think a runaway greenhouse effect had occurred on Venus, causing its demise. This is basically positive feedback running amok. People are worried about global warming on Earth because they fear the same may happen here.

Personally, I'd rather investigate the mysteries of Venus than explore Mars. Mars is too small to hold onto semi-light gases such as oxygen.

Is our solar system typical?

A yellow star*. Four rocky inner planets and four giant gas/ice outer planets.

How typical is our solar system?

Consider these.

Our Sun is not a very big star. Nevertheless, it is in the 90th percentile by brightness in the Milky Way. Red dwarfs and giant stars are not hospitable to life.2

Something stabilized Jupiter's inward-spiraling orbit. See the Hot Jupiters in other systems.

Comets brought water from the outer solar system to Earth soon after its formation. No water, no life. Okay, this one may not be so rare.

Earth has a large moon that stabilizes its axial tilt, making its climate stable. This may or may not be an issue — life is tough if it can take root.

Earth has a magnetic field. This is very simple: no magnetic field, no life. The field shields the Earth from the solar wind's charged particles, which would otherwise strip away the ozone layer; the ozone layer absorbs UV radiation that is lethal to life.

* Our Sun is actually white. It is a G-type main sequence star that is sometimes nicknamed "yellow dwarf".

2 Never say never, but it would be extremely challenging, particularly for intelligent life to evolve.

x265 settings

x265 is still prone to smoothing. These options are recommended to retain details (some are new):

--tune grain
--aq-mode 3
--no-sao
--no-strong-intra-smoothing
--ctu 32
--max-tu-size 16
--tu-inter-depth 2
--tu-intra-depth 2

--tune grain. Retains details, but increases bitrate substantially.

--aq-mode 3 biases toward dark regions and reduces banding in 8-bit color depth.

--no-sao. SAO (Sample Adaptive Offset) is also known as smooth-all-objects. :lol:

--no-strong-intra-smoothing. With a name like this, of course you want to turn it off.

--ctu 32, --max-tu-size 16. For HD and lower encodes. The default CTU of 64 pixels is only suitable for UHD encodes. For 480p encodes, I'm considering using CTU of 16. (Smaller blocks = more details.)

--tu-inter-depth 2, --tu-intra-depth 2. Increase search depth to use smaller TU.

Results

Grainy blu-ray source. CRF 22, slow preset.

Intel Xeon CPU E5-2670 v2 @ 2.50 GHz. (3 real and 3 HT cores are used.)

Preset                    FPS        QP      kbps
slow                      1.767902   25.77   6,475.79
+fine                     1.609524   25.87   6,742.30
+aq-mode 3                1.535781   24.56   10,012.88
+aq-mode 3, fine          1.415599   24.66   10,454.77
+grain                    1.213808   24.17   16,183.83
+grain, aq-mode 3, fine   1.178303   24.17   16,069.47

The fine settings increase the bitrate a little. That is expected, as they retain more details. aq-mode 3 really blows up the bitrate, and grain takes the cake.

slower preset

Preset           FPS        QP      kbps
slower           0.536763   25.94   7,318.01
+tweaked         1.282377   25.83   6,614.38
+tweaked, fine   1.240994   25.98   6,731.26

Tweaked:

--bframes 6 (was 8)
--rc-lookahead 40 (was 30)
--lookahead-slices 2 (was 4)
--rd 4 (was 6)

The main effect comes from dropping --rd 6; it is really slow.

More options

To try. They won't help as much, though:

--preset veryslow
--rd 6
--amp
--no-rskip
--aq-motion (v2.2+ only)
--deblock -3:-1

I have not tested these.

x265 2.1 presets test

Grainy blu-ray source. CRF 22.

Ivy Bridge

Intel Xeon CPU E5-2670 v2 @ 2.50 GHz. (3 real and 3 HT cores are used.)

Preset      FPS         QP      kbps
ultrafast   16.943726   28.68   3,486.33
superfast   12.737048   28.44   4,007.72
veryfast    7.922411    25.92   5,659.28
faster      7.747071    25.92   5,660.07
fast        6.836653    25.80   5,699.99
medium      4.054692    25.91   6,593.67
slow        1.767902    25.77   6,475.79
slower      0.536763    25.94   7,318.01
veryslow    0.365613    25.85   7,187.14
placebo     0.145978    25.89   7,371.47

Haswell

Intel Xeon CPU E5-2660 v3 @ 2.60 GHz. (3 real and 3 HT cores are used.)

Preset      FPS         QP      kbps
ultrafast   23.907555   28.69   3,475.09
superfast   18.179565   28.45   4,006.14
veryfast    12.346167           5,659.26
faster      12.036365
fast        10.682315
medium      6.119694
slow        2.551558
slower      0.732159
veryslow    0.490835
placebo     0.189853

AVX2 is not bit-identical.

Takeaway

medium and slower presets are slower than in v1.9. Bit rate is now much higher.

There is a big decrease in speed from medium to slow and slow to slower.

I'll probably use a tweaked slow preset. It is the slowest I can bear.

Compiling HandBrake 1.0.2 on RHEL 6.x

HandBrakeCLI fails to compile on RHEL 6.x due to four missing components:

  • lame
  • opus
  • jansson
  • harfbuzz

For lame, opus and jansson: edit make/include/main.defs and move these lines out of the if block (line 44):

MODULES += contrib/jansson
MODULES += contrib/lame
MODULES += contrib/libopus
MODULES += contrib/x264

I was not able to use the included harfbuzz. I had to download it (1.4.2), then build and install it separately.

Note: this assumes the system has compiled HandBrakeCLI 0.10.x before. Otherwise, it requires more components.

There are official instructions to build HB on RHEL 6.x, but (i) I missed them and (ii) they are slightly more complex.

HandBrake 1.0.2 brings with it x264 r148 (was r146 in 0.10.x) and x265 v2.1 (was v1.9 in 0.10.5).

Everyone needs a wall

Date      #Attempts   root %   #IP addr
2015/9    904,990     96.6%    484
2015/10   426,787     95.8%    335
2016/9    345,780     86.5%    609
2016/10   425,678     92.4%    608
2016/11   26,560      90.1%    239
2016/12   9,320       61.9%    473
2017/1    14,591      62.7%    1,289

Count attempts:

sudo last -f /var/log/btmp.1 | head -n -2 | wc -l

Count root attempts:

sudo last -f /var/log/btmp.1 | head -n -2 | grep "^root " | wc -l

Count IP addresses:

sudo last -f /var/log/btmp.1 | head -n -2 | awk '{print $3}' | sort | uniq | wc -l

The SSH login attempts dropped like a stone after I implemented the defense mechanism. :-D

It went dead silent for a while. After some time, however, I saw a suspicious pattern. By right, I only allow 12 attempts every two minutes per IP address, but I can see more attempts than that. I suspect the attacker made a bunch of connections first, then proceeded with the attempts.

I need to come up with even more aggressive heuristics.

RHEL disk allocation 2017

Server

FS              Space    %Use
/               25 GB    45%
/var            4 GB     66%
/var/log        4 GB     62%
/var/tmp        2 GB     1%
/tmp            8 GB     1%
unalloc         9 GB
swap            8 GB
/mnt/work       266 GB   50%
/mnt/data       592 GB   56%
/mnt/speedy *   118 GB   38%

(No change.)

* speedy is an SSD.

Workstation

FS             Space    %Use
/              25 GB    44%
/var           4 GB     64%
/var/log       6 GB     3%
/var/tmp       2 GB     1%
/tmp *         10 GB    1%
unalloc        12 GB
swap           16 GB
/mnt/work      268 GB   55%
/mnt/data      586 GB   91%
/mnt/archive   1.8 TB   64%
/mnt/speedy    235 GB   11%

* /tmp is mounted as tmpfs.

I finally resized /var/log from 2 GB to 6 GB.

swap is reduced from 32 GB to 16 GB, since it is hardly ever used.

The one book all programmers should read


The Pragmatic Programmer (1999)

"This book will help you become a better programmer", so boldly claimed the authors in the preface. This book is about the pragmatic stuff, hence its title. No fancy architecture or buzzword-of-the-day.

I don't agree 100% with the authors, nor do I think it is necessary to do everything they say, but one can already be very effective using 60% of their advice.

If there is one book I think all programmers should read, this would be it. I like this book so much that I have bought it four times over the years. I lost it twice (lent and not returned) and gave one away.

This book will help you become a better programmer. Yes, seriously. Go read it today!

It's a matter of darkness

The current standard model of cosmology says that the total mass-energy of the universe contains 4.9% ordinary matter, 26.8% dark matter and 68.3% dark energy.

Dark matter and dark energy provide very simple and clean explanations for the phenomena we observe, but I think they are wrong.

The evidence for dark matter is that the outer rims of galaxies rotate faster than the visible mass can account for; dark matter is invoked to supply the missing mass. If I have to add 5x the known matter to make my model work, I think my model is wrong. I simply think there is another explanation for this, one that we do not understand yet.

The evidence for dark energy is even shakier. Galaxies are moving apart faster than expected, and dark energy is what provides the force. Again, we need a very large number to make it work. For this, I simply assume the way we measure distances (billions of years back in time) is wrong. :lol:

No more C++ for me


Effective Modern C++ (2014)

This is one book by Scott Meyers that I won't be buying.

I have his other books:

  • More Effective C++ (1996)
  • Effective C++, 2nd Ed (1998)
  • Effective STL (2001)
  • Effective C++, 3rd Ed (2005)

These are required reading. Otherwise, you are not even aware of C++'s many traps and pitfalls.

I have renounced C++. Since 2006 or so. The C++ I know is C++98 and later C++03. C++11 is almost a new language — I cannot read it. It is now C++14.

I used to like C++ — it was the only language I would use. But after several years, I realized I spent more of my time "fighting" the language than solving my problems! It gave the illusion of power, but it was actually very limiting.*

I decided to go back to the basics: C. In C, you only "pay for what you use". This was the motto of early C++, but it was never true.

For dynamic stuff, I use JavaScript — and later PHP for server-side processing. It is liberating to use dynamic languages. Strings and arrays are first-class objects. Associative arrays and regular expressions are available and are very useful. Loose typing works — learn to let go :-P. There is no need to worry about memory management. You just concentrate on solving your problem.

*At the intermediate level. Template metaprogramming is very powerful, but oh boy, the syntax and the error messages. And you will waste a lot of time fiddling with it instead of solving your problem.

Projects 17

These are some projects I have in mind. Some have been on the backburner for years. :-(

Miniature lighting. I've always wanted to light up my (yet to be constructed :-P) Lego city. But I don't just want a simple on/off switch. I want to control individual lights — street lights, house lights, etc. How to do that with minimal wiring? This is phase 1. Phase 2 is lighting up the vehicles. O_O

Miniature painting. I intend to paint some of my board game tokens to personalize them. This is a special case of the next project.

Pimp boardgames. There are third-party tokens, but they are very expensive. I'm now inclined to 3D-print the parts and paint them myself. (I like to do things the hard way.)

Update server. Change to 64-bit Ubuntu, get USB 3.0 ports working.

Backup file checksum. Use checksum to make sure files are copied correctly. I have encountered very rare cases where files were corrupted silently. Currently I'm running sha1sum manually.

Alerts. Alert me when things happen, e.g. when the Toto jackpot exceeds S$4 million. :lol:

Comic on-demand. To view my comic collection over the network without having to unzip the files manually.

Real-time info: bus, carpark.

Concurrent video encoder. Upgrade to HandBrake 1.0. Create Web frontend. Standardize encoding settings. Re-encode videos.

IP hammer. It does not auto-populate the block list on power cycle. To do.

Improve my programming toolkit. To enhance my library of code so that I can implement solutions faster.

HDB flats break the million barrier soundly

A record 19 HDB resale flats were sold for over S$1 million in 2016. 11 were at Pinnacle@Duxton. The others were at City View @ Boon Keng and Natura Loft in Bishan.

(Note: Resale Flat Prices at data.gov.sg shows only 3 flats above S$1 mil.)

Is 19 shocking? There were 12 in 2015.

Personally, the figure that gives me more pause is that most resale 5-room flats are above S$500k. Even worse, almost all 3-room flats and above are above S$300k. There is no cheap housing in Singapore.

Housing and private transport are the two biggest money drains in Singapore. These take years — perhaps even the entire working life — to pay off. Can't blame people for wanting stable jobs.

New Year Resolutions 2017

The resolutions are the same as 2016.

Don't squander time. Work on projects. Limit net surfing and YouTube time.

Exercise. This is more important than before, now that I have high blood pressure.

Keep track of tasks/schedules.

Housekeeping. Throw or give away unused stuff. Replace worn-out stuff. Optimize storage space.

Expenses visibility. Keep track of major expenses at least. Need to account for 80 - 90% of the expenses.

Incremental clothing replacement. Buy new t-shirts.

Home Improvement Project. Will give my IVAR shelves one last upgrade. Replace spoilt light bulb socket. Run fibre cable through false ceiling.

Specific issues

Deadlines. I have an issue with non-work deadlines. I procrastinate and do things at the last minute. I missed several critical deadlines as a result, including overstaying! :-O

Vouchers. I dislike vouchers. I tend to forget about them until they expire. In fact, some did, but luckily the shop still accepted them. Some I used on the very last day.

The last mile. It takes 20+ minutes to walk to the neighbourhood centre and back. A personal transporter will cut the time by half or more.

Dust. It is simply too dusty. I'm thinking of getting a robot sweeper to sweep the floor every day. In the meantime, I'm using the low-tech solution of closing the windows — it cuts the dust down by 80%!