Optimized Trapping of Undefined-like Values in JavaScript

While working on the FusionCharts JavaScript charts, there was a frequent need to test whether a variable was null, undefined, NaN or an empty string. The “frequent” need was so frequent that the probing function alone took up 15% of the chart’s execution time.

A very straightforward function checks a variable for the usual set of values that we (and generally many other JS libraries) treat as undefined: undefined, null, '' (empty string) and NaN, and returns true for all such values.
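
The original listing is not reproduced here, so the following is a sketch of what such a function looks like; the name isUndefined and the exact ordering of the checks are assumptions:

```javascript
// Nine effective operations plus a call to isNaN().
// The typeof guard keeps strings and other non-numbers away from
// isNaN(), which would otherwise coerce them to numbers first.
function isUndefined(x) {
    return (x === undefined || x === null || x === '' ||
        (typeof x === 'number' && isNaN(x)));
}
```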

The above works just fine for a couple of calls, but for a hundred consecutive calls, even a 1 ms overhead is a lot. So the goal was to recode this function, keeping ‘execution time’ as the fulcrum of the optimization process.

As a note, the above function effectively performs nine operations and a function call. That count gives us a rough yardstick while we try to optimize the code.

Optimizing It Objectively

At first glance, we can trim the operation count by replacing the two distinct tests for null and undefined with a single loose-equality test: x == undefined. In JavaScript, null == undefined is true, and no other value is loosely equal to undefined.
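
A quick console check shows why the single loose test is safe:

```javascript
// Only null is loosely equal to undefined; the other falsy
// values are not, so no valid value is accidentally trapped.
(null == undefined);  // true
('' == undefined);    // false
(0 == undefined);     // false
(NaN == undefined);   // false
(false == undefined); // false
```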

Thus we are left with '' (empty string) and NaN to be tested, and for both we have no option but to leave the rest of the expression exactly as it was. Not much of a help.
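
As a sketch, the intermediate version would look like this; it still weighs in at seven operations and the isNaN() call:

```javascript
// null and undefined folded into one loose-equality test; the
// empty-string and NaN checks are unchanged.
function isUndefined(x) {
    return (x == undefined || x === '' ||
        (typeof x === 'number' && isNaN(x)));
}
```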

Thinking the Straight Crooked Way

When true-negative testing resulted in negligible optimization, attention shifted to false-positive testing. The objective now was to test true for the rest of the values instead.

The various forms of possibly ‘troublesome’ values would be null, undefined, '' (empty string), NaN, 0, 1, -1, true, false and Infinity.

So we take a simple Boolean cast of the parameter and negate it: !x. This returns true for all the unwanted values, but it also returns true for a couple of usable values: 0 and false. Still, that is definitely a better situation: only two false positives are left to weed out.
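
To see this, here is how the negated cast behaves for each of the candidate values:

```javascript
// What the negated cast returns for each candidate value:
!undefined; // true
!null;      // true
!'';        // true
!NaN;       // true
!0;         // true  (false positive: 0 is a usable value)
!false;     // true  (false positive: false is a usable value)
!1;         // false
!-1;        // false
!true;      // false
!Infinity;  // false
```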

Finally we come up with the revised code !(x || x === false || x === 0). Then again, this expression has a flaw: the negation sits outside the parentheses, so it can be applied only after the whole OR chain inside has been dealt with.

On comes our trusted De Morgan’s theorem: we convert the OR operations to ANDs and push the negation inside. This causes evaluation to stop the moment any part of the expression evaluates to false.
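
Putting it together, the final function would read something like this (again a sketch, with the name assumed):

```javascript
// De Morgan's theorem: !(x || x === false || x === 0)
// is equivalent to (!x && x !== false && x !== 0).
// Five effective operations and no function call.
function isUndefined(x) {
    return (!x && x !== false && x !== 0);
}
```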

Thus, with only five effective operations and no function call, we achieve the same result as the initial code.

Profiling the Fruit of Effort

Though the above difference in the number of operations clearly suggests that our new function will perform better, a good programmer never takes chances. Profiling the two functions decisively settles the score.

So we prepare our set of possible input values (more than what was initially discussed), iterate over them 15,000 times to get the average time, and then compare the percentage of time consumed by each function.
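
A minimal harness along those lines might look like this; the exact input set and the timing code here are assumptions, not the original benchmark:

```javascript
// Hypothetical benchmark: times each predicate over a mixed set of
// inputs and reports the share of total time each one consumed.
var isUndefinedOriginal = function (x) {
        return (x === undefined || x === null || x === '' ||
            (typeof x === 'number' && isNaN(x)));
    },
    isUndefinedOptimized = function (x) {
        return (!x && x !== false && x !== 0);
    },
    inputs = [undefined, null, '', NaN, 0, 1, -1, true, false,
        Infinity, 'text', 3.14],
    ITERATIONS = 15000;

function timeIt(fn) {
    var start = new Date().getTime(),
        i, j;
    for (i = 0; i < ITERATIONS; i += 1) {
        for (j = 0; j < inputs.length; j += 1) {
            fn(inputs[j]);
        }
    }
    return new Date().getTime() - start;
}

var tOriginal = timeIt(isUndefinedOriginal),
    tOptimized = timeIt(isUndefinedOptimized),
    total = (tOriginal + tOptimized) || 1; // guard against a 0 ms total

console.log('Original : ' + Math.round(100 * tOriginal / total) + '%');
console.log('Optimized: ' + Math.round(100 * tOptimized / total) + '%');
```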

Percentage of time occupied under sequential execution:

Scenario     | Original Function | Optimized Function
------------ | ----------------- | ------------------
Best Case    | 84%               | 16%
Average Case | 77%               | 23%
Worst Case   | 57%               | 43%

In all the above cases, we see our new function fare much better than the original. And ah! That much thinking does go into coding the various aspects of FusionCharts.