Google I/O 2017: An Insider’s Perspective

By Wilhelm Fitzpatrick, Technical Director

Once a year, Google summons 7,000 of its closest friends to Mountain View, CA for a gathering called I/O, named for the programming shorthand for input/output. It is a chance for developers who work daily with Google products and services to get a first look at what the tech giant has planned for the coming year, and for Google to hear from them what they are trying to accomplish and what frustrations they encounter. The keynote and many of the informational sessions presented at I/O are streamed online for all to see, but those lucky enough to get a ticket to I/O itself have access to much more. Tents full of demonstrations show off everything from a Volvo automobile with Android software fully integrated into its dashboard to a drink-mixing robot powered by Google Assistant. You can attend office-hours sessions with the Google developers who created the software and devices you are currently trying to work with, and ask those tricky technical questions you just can’t get answered on Stack Overflow. And you can rub shoulders with a whole lot of other really smart people who are building amazing things.

Given the size of Google as a company and the many product and service areas it operates in, not to mention the scale of the I/O conference itself, it is impossible for any one person to experience and learn everything on offer. My personal experience of I/O this year centered on three main themes:


Machine Learning

Google started telling their machine learning story several years ago with the open source release of the TensorFlow neural network libraries, revealing some of the deep learning techniques behind the image and speech recognition capabilities that had begun to appear in many of their products. This year’s announcements at I/O showed that things are only accelerating…

  • Inside Google: featured in the keynote were pictures of the next generation of Tensor Processing Units (TPUs), custom chips optimized for running the kind of neural network calculations that power TensorFlow. Google is applying giant stacks of this fancy new hardware to the ever more robust speech recognition and natural language processing that underlie the Google Assistant, as well as to more intelligent features in many of their other apps, such as Maps warning you that you need to leave earlier due to traffic congestion, or Photos not only organizing your photos for you, as demonstrated last year, but now suggesting who you might want to share those photos with. I noticed the frequent repetition of a phrase I am used to hearing from Apple, “and all of this happens on your device,” acknowledging end-user concerns that we are paying for all this magical convenience with a loss of control over where our personal data lives and what it is used for. To this end, we learned that the next version of Android will contain an API enabling developers to access accelerator hardware for increased neural network performance on mobile devices. This will begin by leveraging the graphics processing units (GPUs) that every device already has, but there was a suggestion that we may soon see custom co-processors. Perhaps the Google-designed Pixel 2017 phone will include a mobile TPU?
  • Outside Google: with the release of the TensorFlow libraries at the end of 2015, Google made it clear that they wanted to get the power of machine learning into as many hands as possible. One of the ways they added to that story at I/O 2017 was the announcement that those shiny new TPUs weren’t just for use by internal Google applications: Cloud TPUs would soon be available as part of their cloud platform offering. To emphasize the leverage developers can gain with access to these powerful tools, they highlighted Chicago high-school student Abu Qader, who taught himself TensorFlow and built a system to improve the diagnosis of mammogram images. Another machine learning success story was Japanese cucumber farmer Makoto Koike, who built his own automated conveyor belt to speed up the tedious task of sorting cucumbers by grade, combining an inexpensive Raspberry Pi computer with both locally run and cloud-powered machine learning algorithms.


Google Assistant and Conversational Interfaces

Google made it clear that they are fully committed to the talking cylinder wars, and in recognition of that, the developer keynote ended with a gift to each attendee of a Google Home device and a strong exhortation to go forth and create Actions, Google Assistant’s equivalent of Alexa skills. Such Actions aren’t limited to Google Home: they also find a place in the Google Assistant that is now accessible on nearly all reasonably modern Android phones, and even on the iPhone, since a Google Assistant app for that platform was released at I/O. To give developers a leg up in this brave new world of chatty devices, Google provided:

  • Api.ai, a toolkit for building conversational interfaces (chatbots) that can not only be plugged into the Actions framework, but also surfaced as an Alexa skill, a Slack integration, or any of the many other places we talk to computers (and to each other) these days. Api.ai not only leverages machine learning on the back-end to power its natural language processing, but also helps developers build out specifications for the kinds of conversations they’d like their bot to carry on, by analyzing sample sentences and extracting key details. The end result is that end users get comfortable, fluid conversations, while developers get API calls to their backend services full of nicely structured data that is easy to interpret (a minimal sketch of such a call follows this list). Api.ai is the product of a startup recently acquired by Google, and it stands alongside similar efforts from Facebook (Wit.ai, also a recent acquisition), Microsoft (LUIS), and Amazon (Amazon Lex).
  • I/O also featured a number of sessions by UX specialists helping developers understand the patterns behind everyday speech and how to leverage them to give users a more fluid experience (and hide your mistakes!). Linguist James Giangola gave an especially insightful talk on the ways that common “phone tree” style voice user interfaces flout the expectations of a human conversationalist, and how we can harness those expectations so that users treat us as an ally in achieving their goals, not an opponent that must be battled through by finding the magic phrase.
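To make that hand-off concrete, here is a minimal sketch of the kind of fulfillment code a backend might run after Api.ai has done its work. The data shapes and names here (ExtractedIntent, book.table, the parameter keys) are my own illustrative assumptions, not Api.ai’s actual wire format; the point is simply that by the time your code runs, the messy natural language has already been reduced to structured fields.

```kotlin
// Illustrative only: these shapes approximate what a fulfillment
// webhook receives; they are assumptions, not Api.ai's exact format.
data class ExtractedIntent(
    val action: String,                  // e.g. "book.table"
    val parameters: Map<String, String>  // e.g. guests=4, time=19:00
)

// The response the backend returns, which the agent speaks to the user.
data class FulfillmentResponse(val speech: String)

// No parsing and no regular expressions here: the natural language
// work has already happened by the time this function is called.
fun fulfill(intent: ExtractedIntent): FulfillmentResponse =
    when (intent.action) {
        "book.table" -> {
            val guests = intent.parameters["guests"] ?: "2"
            val time = intent.parameters["time"] ?: "any time"
            FulfillmentResponse("Booked a table for $guests at $time.")
        }
        else -> FulfillmentResponse("Sorry, I can't help with that yet.")
    }

fun main(args: Array<String>) {
    // Simulate what the agent might extract from
    // "Book me a table for four at seven tonight."
    val intent = ExtractedIntent(
        "book.table", mapOf("guests" to "4", "time" to "19:00"))
    println(fulfill(intent).speech)  // Booked a table for 4 at 19:00.
}
```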

I don’t think we’ve quite reached the point where talking is going to replace pointing, tapping, and clicking as our primary way of interacting with the digital world, but it is definitely a mode of interaction that brings benefits to a certain class of tasks, and users are going to start expecting to have that option. It is great that as developers we are going to have intelligent tools to help us build those experiences.


Android

The most significant announcement around Android actually came just before I/O: Project Treble, Google’s latest attempt to crack the nut of timely updates for the general population of Android phones. Unlike previous attempts, this time they have an actual engineering solution, and some teeth in the form of certification tests that new devices will have to pass. But it is a move that will only show benefits in the long term, and it could still be derailed by a lack of cooperation from chip manufacturers. So for the rank-and-file developers at I/O, most of the excitement focused on two things:

  • Official support for the Kotlin programming language as an alternative and companion to Java for developing Android apps. Kotlin embodies modern programming language concepts and bears many similarities to Apple’s Swift, but it is designed to interoperate smoothly with Java. Kotlin code can be compiled to run both on desktop JVMs (Java Virtual Machines) and on Android’s ART (Android Runtime), and in the future it will also be able to produce JavaScript for the browser and native code for greatest efficiency. Originally from JetBrains, creator of the IntelliJ IDE, it is developed as open source and has been rapidly gaining mindshare among Android developers as a way of streamlining their app development process: it eliminates some of the tedious verbosity often found in Java, and provides greater stability through a more robust type system that catches many common coding mistakes in advance (a short illustration follows this list). Many production Android apps have already made the move to Kotlin in whole or in part, but having Google’s seal of approval means that more developers are going to jump on board with the assurance they won’t be left stranded by unanticipated future changes to the Android OS and tools. Numerous sessions during I/O featured eager early adopters of Kotlin ready to show the way for the next wave of developers.
  • Android Architecture Components, still at an early stage but progressing rapidly, provide a set of libraries that tackle head-on a number of key pain points facing Android developers, especially those new to the platform. The libraries provide higher-level abstractions for structured data storage, application lifecycle management, and live updates in response to back-end data changes (see the second sketch after this list). This is combined with an effort by Google to give developers more detailed and opinionated guidance on best practices for app development.
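To give a flavor of that streamlining, here is a tiny example of my own (not code from an I/O session) showing two of Kotlin’s everyday wins: a data class replaces a page of Java getter/setter/equals boilerplate, and nullability is tracked by the type system rather than discovered as a NullPointerException at runtime.

```kotlin
// A full value type in one line: equals, hashCode, toString, and copy
// are all generated by the compiler. The equivalent Java class is
// typically several dozen lines of boilerplate.
data class Session(val title: String, val speaker: String?)

fun describe(session: Session): String {
    // The type system tracks nullability: speaker is declared as
    // possibly null, so the compiler forces us to handle that case
    // before we can use the value.
    val who = session.speaker ?: "speaker TBD"
    return "${session.title} by $who"
}

fun main(args: Array<String>) {
    println(describe(Session("Introduction to Kotlin", null)))
    // prints: Introduction to Kotlin by speaker TBD
}
```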
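And here is a minimal sketch of the lifecycle and live-update pieces, assuming the 2017-era android.arch.lifecycle packages; the class and property names (CounterViewModel, count) are my own illustration rather than sample code from Google.

```kotlin
import android.arch.lifecycle.LiveData
import android.arch.lifecycle.MutableLiveData
import android.arch.lifecycle.ViewModel

// A ViewModel survives configuration changes such as screen rotation,
// so the activity can be torn down and rebuilt without losing state
// or refetching data.
class CounterViewModel : ViewModel() {
    private val clicks = MutableLiveData<Int>().apply { value = 0 }

    // Exposed read-only: the UI observes changes instead of polling.
    val count: LiveData<Int> get() = clicks

    fun onButtonClick() {
        clicks.value = (clicks.value ?: 0) + 1
    }
}

// In a lifecycle-aware activity or fragment, the UI subscribes once
// and receives updates automatically, but only while it is on screen:
//
//   viewModel.count.observe(this, Observer { n ->
//       counterText.text = "Clicked $n times"
//   })
```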

Combining Kotlin and the Architecture Components – along with yet more bits and pieces to help build Material Design user interfaces – should result in more developers being able to bring their apps to Android in a quick, robust, and good-looking fashion.


All the Rest

As I mentioned at the top, it was not possible for just one person to take in everything on offer. Some things I barely dipped a toe in, like checking out the state of Tango, Google’s Augmented Reality (AR) platform that allows phones (and other devices) equipped with a special sensor suite to figure out their precise location in space, letting developers overlay digital experiences on top of real ones. Others I only heard talked about in passing, such as Progressive Web Apps, a potential standard for allowing browser-based applications to “climb out” of the browser and provide an experience closer to a desktop or mobile native app. Even Google’s mysterious Fuchsia operating system project had an impact via a spinoff of its Flutter UI toolkit, promoted as a solution for writing cross-platform mobile apps.

At the end of it, you come away with your brain full and buzzing. Google has always been known as a place that throws off exciting ideas and unusual innovations like sparks. Some catch fire and change whole industries, others are just lingering embers. I/O is the place to huddle close to the bonfire, watch the sparks fly up, and try to guess which ones will become stars.


Image source: https://events.google.com/io/

