V8, modern JavaScript, and beyond – Google I/O 2016


Welcome to this summary of Seth Thompson’s V8, modern JavaScript and beyond talk from Google I/O 2016.

This talk lasts approximately 37 minutes and covers five exciting topics:

  1. Real-world performance
  2. JS engine upgrades
  3. ES6 & ES7
  4. Debugging + Node.js
  5. WebAssembly

Real-world Performance

V8’s mission

“Speed up real-world performance for modern JavaScript, and enable developers to build a faster future web.”

Seth says the real-world aspect is really important and shows a chart spanning 2007 to 2016. Microbenchmarks have existed all along, and static test suites started to appear around 2012.

An example of a microbenchmark is SunSpider. It runs small pieces of code, such as bitmask operations or Regex string replacement, 10,000 times in a hot loop.

Seth says this is useful for finding regressions, but not for measuring real-world performance.
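To make the idea concrete, a microbenchmark in this style can be sketched in a few lines of JavaScript. This is a hypothetical example rather than SunSpider's actual code; it simply times a small regex string replacement repeated in a hot loop:

    // Hypothetical SunSpider-style microbenchmark: time a tiny regex
    // string replacement repeated 10,000 times in a hot loop.
    function benchRegexReplace() {
      const input = 'the quick brown fox jumps over the lazy dog';
      const start = Date.now();
      let result = '';
      for (let i = 0; i < 10000; i++) {
        result = input.replace(/o/g, '0');
      }
      console.log(`regex replace x10,000: ${Date.now() - start} ms`);
      return result;
    }

    benchRegexReplace();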

One example of a static test suite is Octane.

Seth says they’ve only made bug fixes to Octane since it launched in 2012, but the web has changed enormously since then.

We are at the dawn of the third era of benchmarking: measuring real web pages as users actually access them.

As web developers, every new feature we add and every optimization we make is driven by the notion that we can measure its performance.

The V8 team has compiled detailed performance statistics across the most popular sites this year.

They’ve used the results to optimize many commonly appearing builtins, and found that the most important one was Object.assign.
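For reference (not code from the talk), Object.assign copies the enumerable own properties of one or more source objects onto a target object, which is why it appears so often in code that merges options or clones small objects:

    // Object.assign copies own enumerable properties from the sources onto
    // the target; later sources win when keys collide.
    const defaults = { retries: 3, timeout: 1000 };
    const overrides = { timeout: 5000 };

    const options = Object.assign({}, defaults, overrides);
    console.log(options); // { retries: 3, timeout: 5000 }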

JS engine upgrades

JavaScript is a dynamic language and in the process of running it, we tend to create a lot of objects. There’s a trade-off between performance and memory consumption.

A sophisticated garbage collector can make a big difference. Seth says the median pause for garbage collection is now down to 4.4 milliseconds.

Orinoco is more intelligent about when, and when not, to schedule garbage collection.

They are also working on a new interpreter, designed to improve startup time, reduce compile time, and reduce memory usage.

The interpreter is slightly slower at executing JavaScript, but faster at parsing and compiling.

ES6 & ES7

Seth introduces these as specifications from ECMAScript’s TC39 committee.

As of Chrome 52, V8 supports both ES6 and ES7, except for modules, which require interaction with the DOM.

Tail call optimization is not implemented, as it is still under discussion at TC39.

There is also a prototype implementation of async / await and of String.prototype.padStart / padEnd (aka leftpad).
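As a quick illustration of both features (my own sketch, not code from the talk): padStart pads a string up to a target length, and await suspends an async function until a promise settles:

    // String.prototype.padStart pads the string up to the target length.
    console.log('5'.padStart(3, '0')); // "005"

    // async / await lets asynchronous code read like synchronous code.
    function delay(ms) {
      return new Promise(resolve => setTimeout(resolve, ms));
    }

    async function greet() {
      await delay(100);          // pauses here until the promise resolves
      return 'hello after 100 ms';
    }

    greet().then(console.log);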

Why do we care about browser support for language features?

  • Polyfills & transpilers can’t replicate all features
  • Less overhead (polyfilled or transpiled code means more code to ship to users and more parsing)
  • Easier debugging
  • Speed over time (browsers optimize native features over time)

TodoMVC

TodoMVC examples are available at todomvc.com

Seth takes the vanilla ES6 implementation, runs Rollup on it (because modules aren’t yet supported), and loads it in Chrome without any transpilation.
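A minimal Rollup configuration for that kind of bundling step might look like this (a sketch assuming a current Rollup release and an entry point at js/app.js; the exact file names in the demo may differ):

    // rollup.config.js: bundle the ES6 modules into a single script
    // that can be loaded directly in the browser.
    export default {
      input: 'js/app.js',        // hypothetical entry point
      output: {
        file: 'bundle.js',       // single bundled output file
        format: 'iife'           // self-executing script, no module loader needed
      }
    };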

Debugging + Node.js

Seth says it’s frustrating when you see a 500 Internal Server Error, because you need to switch mental context and often need to move to a separate codebase.

Why is it so hard to debug Node.js using DevTools? The V8 team have now added proper DevTools support to Node.js.

Seth demos this. It uses an inspect flag:

node --inspect app.js

We get a URL back and can paste it into any Chrome window. We see this is using a preview version of Node version 7.

Seth explains the bug is due to JavaScript timestamps being measured in milliseconds while Unix timestamps are measured in seconds.
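That mismatch is easy to reproduce: the Date constructor expects milliseconds since the epoch, so a Unix timestamp in seconds passed straight through lands in January 1970. A hypothetical illustration (the value below is not from the demo):

    // Unix timestamps count seconds since the epoch; JavaScript's Date expects milliseconds.
    const unixSeconds = 1464739200;            // 1 June 2016 00:00:00 UTC

    console.log(new Date(unixSeconds));        // wrong: read as ms, gives a date in January 1970
    console.log(new Date(unixSeconds * 1000)); // right: convert seconds to milliseconds first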

WebAssembly

WebAssembly is a new cross-browser, low-level language.

It’s designed to run code compiled from native languages such as C/C++, shipped in a compact binary format.

  • open standard
  • plugin-free
  • cross-browser
  • web platform APIs
  • no new permissions exposed
  • view-source enabled
  • familiar toolchain
  • interops with JS
  • same browser sandbox
  • asm.js compatible

Unlike asm.js, WebAssembly is a binary format which is smaller in size, and much faster for the machine to decode.
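From JavaScript, a compiled .wasm binary is typically fetched and instantiated through the WebAssembly JavaScript API. This generic sketch assumes a module named module.wasm that exports an add function, neither of which comes from the I/O demo:

    // Fetch a compiled WebAssembly binary and instantiate it from JavaScript.
    fetch('module.wasm')
      .then(response => response.arrayBuffer())
      .then(bytes => WebAssembly.instantiate(bytes))
      .then(({ instance }) => {
        // Call a function exported by the wasm module.
        console.log(instance.exports.add(2, 3));
      });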

Demo available at http://webassembly.github.io/demo/

Further Reading

For the latest news from the V8 team, see their blog.
