Cutting edge technology

Introduction: On January 15th, Jeff Dean, Google Senior Fellow and head of Google AI, published a blog post reviewing the progress of Google's technology research in 2018.

Jeff Dean systematically reviewed Google AI's 2018 work across artificial intelligence, quantum computing, sensing technology, algorithm theory, AutoML, robotics, and TPUs.

Over the past year, Google weathered a number of crises; in particular, its principle of not doing evil was questioned both inside and outside the company. Fittingly, the first topic in Jeff Dean's post is Google's ethical principles and AI.

Ethical principles and artificial intelligence

This year we released the Google AI Principles, but because AI is developing so rapidly, principles such as "avoid creating or reinforcing unfair bias" and "be accountable to people" are continually being refined and improved.

Among them, new research in machine learning fairness and model interpretability is pushing our products to be more inclusive. For example, we reduced gender bias in Google Translate and worked toward more inclusive image datasets and models that let computer vision adapt to the diversity of cultures worldwide.

AI for Social Good

Jeff Dean gave examples of AI being applied to solve real public problems:

Flood forecasting. This research, done with many teams across Google, provides accurate and fine-grained information about the likelihood and extent of flooding, enabling people in flood-prone areas to better protect themselves and their property.

Earthquake aftershock prediction. Google showed that machine learning (ML) models can predict aftershock locations more accurately than traditional physics-based models.

In addition, many Google researchers and engineers are using open-source software such as TensorFlow to tackle a range of scientific and social problems, for example using convolutional neural networks to identify humpback whales, detect new exoplanets, and identify diseased cassava plants.

AI assistive technology

In order to enable ML and computer science to help users complete tasks faster and more efficiently, Google launched the intelligent voice technology Google Duplex.

This technology spans natural language research and dialogue understanding, as well as text-to-speech and speech recognition. At its core is a recurrent neural network built with the TensorFlow Extended (TFX) machine learning platform.
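
The post itself contains no code, but as a rough, hypothetical sketch of the kind of recurrent sequence model involved (not Duplex itself), here is a minimal Keras example that classifies tokenized utterances into intents; the vocabulary size, layer widths, and intent labels are placeholder assumptions.

```python
# Illustrative recurrent sequence classifier (not Google Duplex itself).
# Vocabulary size, sequence length, layer widths, and the intent labels
# are placeholder assumptions for this sketch.
import tensorflow as tf

VOCAB_SIZE = 10000   # assumed tokenizer vocabulary
MAX_LEN = 50         # assumed maximum utterance length in tokens
NUM_INTENTS = 4      # e.g. greet / request_time / confirm / other (hypothetical)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 128),     # token embeddings
    tf.keras.layers.LSTM(256),                      # recurrent core
    tf.keras.layers.Dense(NUM_INTENTS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.build(input_shape=(None, MAX_LEN))
model.summary()
```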

When Google Duplex makes a call, it sounds almost indistinguishable from a real person to the average listener. You can hear Google Duplex making a call to book a haircut appointment.

Other applications include Smart Compose, which uses predictive models to offer suggestions while composing emails, making email writing faster and easier.

One of the key points of our research is to enable products like Google Assistant to support more languages and to better understand semantic similarity.

Quantum Computation

In the past year, we produced many exciting new results in quantum computing, including Bristlecone, a new 72-qubit general-purpose quantum computing device that expands the range of problems quantum computers can tackle.

We also released Cirq, an open-source programming framework for quantum computers, and explored how quantum computers could be used as computational substrates for neural networks. Finally, we shared our experience with, and techniques for understanding, performance fluctuations in quantum processors.
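
For readers unfamiliar with Cirq, the following minimal example (our own illustration, not from the post) builds and simulates a two-qubit Bell-state circuit with the open-source library:

```python
# Minimal Cirq example: build and simulate a two-qubit Bell-state circuit.
# This is an illustrative sketch, not code from the Google post.
import cirq

q0, q1 = cirq.LineQubit.range(2)
circuit = cirq.Circuit([
    cirq.H(q0),                     # put q0 into superposition
    cirq.CNOT(q0, q1),              # entangle q0 and q1
    cirq.measure(q0, q1, key="m"),  # measure both qubits
])

result = cirq.Simulator().run(circuit, repetitions=100)
print(circuit)
print(result.histogram(key="m"))    # expect roughly 50/50 counts of 0 (00) and 3 (11)
```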

Natural Language Understanding

In 2018, Google's natural language research produced strong results in both basic research and product-focused collaborations. Building on the earlier Transformer model, we developed the Universal Transformer, a new parallel-in-time version of the model that shows strong performance on many natural language tasks, including translation and linguistic reasoning.

We also developed BERT, the first deeply bidirectional, unsupervised natural language processing model. Pre-trained on a plain-text corpus, it can be fine-tuned for a wide variety of natural language tasks using transfer learning.
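
As a rough illustration of what such fine-tuning looks like in practice, the sketch below uses the open-source Hugging Face transformers library (our choice for the example; the original BERT release was in TensorFlow) to load a pre-trained BERT classifier and run a single training step on a made-up sentence and label:

```python
# Illustrative BERT fine-tuning step using the Hugging Face `transformers`
# library (chosen for this sketch). The example text and its label are made up.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)        # 2 classes: negative / positive (assumed)

inputs = tokenizer("The new camera features are great.", return_tensors="pt")
labels = torch.tensor([1])                    # 1 = positive (assumed label scheme)

outputs = model(**inputs, labels=labels)      # forward pass with classification head
outputs.loss.backward()                       # gradients for one fine-tuning step
torch.optim.AdamW(model.parameters(), lr=2e-5).step()
print(float(outputs.loss))
```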

Perception

Our perception research tackles the problem of getting computers to understand images and sounds, and provides more powerful tools for image capture, compression, processing, creative expression, and augmented reality.

One key to the Google AI mission is enabling others to benefit from our technology, and this year we made great strides in improving the functionality and building blocks that are part of Google's APIs. Examples include improvements and new features for vision and video in the Cloud ML APIs, and face-related on-device building blocks via ML Kit.

MobileNetV2 is Google's next-generation mobile computer vision model, and our MobileNets are widely used in academia and industry. MorphNet proposes an efficient way to learn the structure of deep networks, improving performance on image and audio models when computational resources are limited.
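
As a quick illustration of how a pre-trained MobileNetV2 is typically used for image classification, here is a short sketch built on tf.keras.applications; the image path is a placeholder:

```python
# Classify one image with a pre-trained MobileNetV2 via tf.keras.applications.
# Illustrative sketch; "photo.jpg" is a placeholder path.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")

img = tf.keras.preprocessing.image.load_img("photo.jpg", target_size=(224, 224))
x = tf.keras.preprocessing.image.img_to_array(img)[np.newaxis, ...]
x = tf.keras.applications.mobilenet_v2.preprocess_input(x)   # scale pixels to [-1, 1]

preds = model.predict(x)
print(tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0])
```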

Computational photography

Improvements in phone camera quality are driven not only by better physical sensors but also by advances in computational photography.

Our computational photography researchers work closely with Google's Android and consumer hardware teams to bring this research to the latest Pixel and Android phones and other devices. In 2014 we introduced HDR+, which aligns a burst of frames in software and merges them computationally, giving images a higher dynamic range than a single exposure can capture. HDR+ is also the basis for Motion Photos on Pixel 2 and the augmented reality mode in Motion Stills.

This year, one of the main efforts in our computational photography research was Night Sight, a new feature that lets Pixel users take clear pictures in very dimly lit scenes, even without a flash.

Algorithms and Theory

In the past year, our research covered a wide range of areas, from theoretical foundations to applied algorithms, and from graph mining to privacy-preserving computation. Our work in optimization spanned continuous optimization for machine learning as well as distributed combinatorial optimization. In the former area, we studied the convergence of the stochastic optimization algorithms used to train neural networks (work that won an ICLR 2018 Best Paper Award), demonstrating problems with popular gradient-based optimization methods such as some variants of Adam and laying a solid foundation for new gradient-based optimization methods.
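
To make the optimization issue concrete, the sketch below (ours, not from the post) implements the standard Adam update alongside the AMSGrad correction studied in that ICLR 2018 best paper, which keeps a running maximum of the second-moment estimate so the effective step size can never grow back; the toy quadratic objective and hyperparameters are illustrative:

```python
# Side-by-side sketch of the Adam update and the AMSGrad variant analyzed in
# "On the Convergence of Adam and Beyond" (ICLR 2018). Hyperparameters are the
# common defaults; the quadratic objective is a toy example.
import numpy as np

def adam_like(grad_fn, theta, steps=1000, lr=1e-2,
              beta1=0.9, beta2=0.999, eps=1e-8, amsgrad=False):
    m = np.zeros_like(theta)        # first-moment estimate
    v = np.zeros_like(theta)        # second-moment estimate
    v_max = np.zeros_like(theta)    # running max of v_hat (AMSGrad only)
    for t in range(1, steps + 1):
        g = grad_fn(theta)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)            # bias correction
        v_hat = v / (1 - beta2 ** t)
        if amsgrad:
            v_max = np.maximum(v_max, v_hat)    # never let the denominator shrink
            v_hat = v_max
        theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta

grad = lambda x: 2 * (x - 3.0)                  # gradient of (x - 3)^2, minimum at 3
print(adam_like(grad, np.array([0.0])))                 # plain Adam
print(adam_like(grad, np.array([0.0]), amsgrad=True))   # AMSGrad variant
```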
