In the early morning of May 19th, the Google I/O 2021 developer conference (hereinafter "Google I/O") opened. After being cancelled last year due to the COVID-19 pandemic, the 2021 conference returned in a fully online format, free and open to everyone.
The keynote saw the release of the TPU v4 artificial intelligence chip. More importantly, Google announced a series of products guided by its own AI technology, while also updating Android 12, Wear OS, and other systems familiar to ordinary users.
AI guides all Google products
Google CEO Sundar Pichai opened, naturally, with the pandemic and people's safety. He noted that the pandemic has changed people's lives over the past year, and that Google has worked to help people meet this challenge with technology.
The first product shown was Google Maps, which has added 150,000 kilometers of bike lanes to its navigation, along with "greener routes" and "safer routes": the former for the environment, the latter for personal safety, such as avoiding dangerous roads.
Google CEO Pichai opens the keynote; Google Maps adds bike routes
The second topic was education. Pichai talked about how Chromebooks (Google's netbooks) helped education during the pandemic, then moved on to improvements for working from home. The first brand-new product of the day was Smart Canvas, a collaborative office feature that further integrates spreadsheets and documents for people scattered across locations; these functions will be integrated into Google Docs in the future.
Google's AI technology also shows up in the details, such as noise reduction in online meetings and optimization of online video.
Smart Canvas online collaboration
Google's core competitiveness still lies in AI and machine learning. When Pichai returned to the stage, he once again introduced the company's progress in image and speech recognition, using an example to illustrate the "complexity of language": saying "it's so cold I'm freezing to death" does not mean anyone actually froze to death. This flexibility of human language is an expression we are all familiar with, but it is deeply confusing for machine learning.
Getting machines to understand human language
Google's solution is called LaMDA, a language model for dialogue applications. It is still under development, but it will soon be available for third-party testing. Pichai's example was a conversation between a person and LaMDA about Pluto, which appeared remarkably close to a conversation between two humans. Context plays a vital role here: LaMDA keeps building understanding during the dialogue so that the conversation can continue, rather than having to re-learn the topic after every exchange.
A human and LaMDA talk about Pluto
LaMDA is currently trained on text, but Pichai said it will eventually be integrated into Google Assistant and other products, turning very vague requests into reality. For a query like "find a route with beautiful mountain views", the system would call up nearby location information and judge what counts as "beautiful". Google will also integrate it into its other products in the future.
Recognizing vague requests
Speaking of machine-learning training, Google also covered hardware: TPU v4, the fourth generation of its custom Tensor Processing Unit (TPU) artificial intelligence chip. Google claims it is twice as fast as the previous generation, and a single pod can deliver more than one exaflop of AI computing power.
These custom chips power many of Google's machine-learning services and will also be offered to developers as part of the Google Cloud platform.
In a video, Google also showed its quantum data center, which is still in the early stages; due to material and experimental constraints, quantum computing still has to run at extremely low temperatures.
Google shows off its quantum computer
After the rather advanced topic of quantum computers, Google returned to the more relatable subject of privacy and security.
The first item is again AI-related: Chrome will help users check and correct risky passwords. The second is Google's security protections; the company said that private data such as email contents and users' sexual orientation will not be used for user profiling. Google also mentioned the concept of "differential privacy", which maximizes the accuracy of data queries while minimizing the chance of identifying individual records, so that insights can be drawn from big data without sacrificing privacy.
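The classic way to get this trade-off is the Laplace mechanism: add noise, calibrated to the query's sensitivity and a privacy budget ε, to each aggregate answer. The sketch below is a minimal illustration of that general idea, not Google's actual implementation; the function names are hypothetical.

```python
import math
import random

def laplace_noise(scale):
    # Inverse-CDF sampling from a Laplace(0, scale) distribution.
    u = random.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def dp_count(true_count, epsilon, sensitivity=1.0):
    # Laplace mechanism: noise scaled to sensitivity / epsilon means any
    # single record shifts the answer's distribution only slightly, yet
    # large aggregates stay accurate on average.
    return true_count + laplace_noise(sensitivity / epsilon)
```

A smaller ε means more noise (stronger privacy); averaged over many queries of a large population, the noisy counts stay close to the truth, which is exactly the accuracy-versus-identifiability balance described above.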
AI's efforts on privacy and security
Another topic was search, which Google once again used to showcase its technology. For example, a user might say, "I've hiked Mount Adams and want to hike Mount Fuji next fall; what should I do to prepare?" This is simple for humans to understand, but fruitless on a search engine. Google's MUM is a preliminary product in a similar technical vein to LaMDA; Google has, for example, applied it to Google Lens for image recognition, where it can also benefit non-English speakers.
Search engines don't understand the mountain-climbing question
On the AR side, indoor map recognition and Street View benefit from the addition of AR and AI, with new capabilities in indoor navigation. Street View will also expand to 50 more cities and become more dynamic and personalized, so that each person's experience differs. There is also "area busyness", similar to traffic conditions but applied to a whole area; it will launch in the coming months and can offer users travel suggestions.
Maps can show how busy an area is
Google Shopping is, in effect, an aggregated display similar to Taobao's: when users search for a product, Google shows integrated results, which it claims can benefit more merchants.
Improvements to Google Photos
In Google Photos, machine learning lets users group their albums by a given rule, such as a particular trip, or automatically generate a video from photos. Given just two photos, the machine can automatically insert frames between them. Similar features already exist on iOS and in Google Photos itself; this round is best described as an upgrade, with AI further refining the earlier results.
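The simplest way to picture frame insertion is a linear cross-fade between the two photos. Google's feature presumably uses a learned motion-aware model, so the toy sketch below (hypothetical function name, frames as flat lists of pixel intensities) only illustrates the idea of synthesizing in-between frames.

```python
def interpolate_frames(frame_a, frame_b, n_mid):
    # Crude linear cross-fade: each intermediate frame is a weighted
    # blend of the two inputs. Real interpolation models also estimate
    # motion so objects move rather than ghost.
    frames = []
    for i in range(1, n_mid + 1):
        t = i / (n_mid + 1)
        frames.append([(1 - t) * a + t * b for a, b in zip(frame_a, frame_b)])
    return frames

# Two tiny 1x4 "frames": one intermediate frame lands halfway between them.
mid = interpolate_frames([0, 0, 0, 0], [10, 20, 30, 40], n_mid=1)
# mid == [[5.0, 10.0, 15.0, 20.0]]
```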
Android 12: a new UI and a privacy-protection image
Finally, content that ordinary users can relate to, starting with appearance. Google has overhauled its own Material Design language, renaming it Material You. Outwardly, the colors are richer, and users can decide the color scheme of parts of the UI themselves.
The Material You designer's outfit was conspicuous at the event
Beyond colors, detailed animations have also been improved, such as the wake-up animation and water-ripple effects on the wallpaper; even when the phone is picked up, the screen gradually lights up from the direction it is lifted.
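One way to think about a wallpaper-driven color scheme is extracting a dominant color from the image. The sketch below is a naive stand-in with a hypothetical function name; Android's real palette extraction is far more sophisticated (clustering and tonal palettes rather than a raw frequency count).

```python
from collections import Counter

def dominant_color(pixels):
    # Naive stand-in for wallpaper palette extraction: pick the most
    # frequent (r, g, b) tuple to seed a UI color scheme.
    return Counter(pixels).most_common(1)[0][0]

# A mostly-blue "wallpaper" of ten pixels yields a blue accent color.
wallpaper = [(30, 90, 200)] * 6 + [(250, 250, 250)] * 3 + [(10, 10, 10)]
accent = dominant_color(wallpaper)
# accent == (30, 90, 200)
```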
Of course, appearance is a fairly superficial improvement. Google also discussed more security options, such as the privacy dashboard, which is similar to Apple's privacy-permission list: users can see at a glance which private data is being accessed and can revoke access at any time. Private data is processed on-device to protect user privacy.
From another perspective, Google hopes to use the new system to build a privacy-conscious image, and to show users that these changes will not negatively affect its advertising business.
In addition to improved privacy controls, Android 12 shows indicators when an app accesses the phone's camera or microphone; iOS, in fact, implemented this a year ago.
Google has deliberately strengthened the ecosystem connections between its products, specifically introducing the linkage between phones and Chromebooks: photo transfer and sharing, using the phone as a remote control, wireless Android Auto, phone-as-car-key, and so on.
The first batch of phone makers to upgrade to Android 12
The biggest upgrade in Wear OS history
Wear OS, the smartwatch system, is being upgraded in three directions: the development platform, an all-new user experience, and health services. On the ecosystem side, Google used Samsung as an example of the watch system's improvements: 30% faster performance, better battery life, health monitoring, and more.
Samsung's involvement gives Wear OS the backing of a major manufacturer and its hardware expertise
With this Wear OS upgrade, Samsung has become the forerunner. There were earlier rumors that Samsung might abandon its in-house Tizen system on the next-generation Galaxy Watch and return to the embrace of Wear OS. Although there was no explicit statement today that Tizen will be abandoned, Samsung's return to Google's side is clearly a major boon for both Samsung itself and Wear OS.
Summary: AI remains the protagonist of a bland keynote
As in previous years, this was a purely technical Google I/O without hardware. Google spent a great deal of energy using its current and future product lines to discuss its efforts in AI, even showing off a quantum computer for the purpose.
AI once again ran through the entire conference: every product, every function, every corner.
The machine-learning concert that opened the show
At the same time, it must be said that the presentation of these great products and goals was relatively boring and not well suited to general viewers (though Google I/O is, of course, a developer conference). The quantum-computer segment in particular was too obscure for 90% of the audience, and the other new products and features lacked memorable presentation moments, making them hard for many viewers to understand and remember.
This has been the overall direction of Google I/O in recent years: its keynotes drift farther and farther from the general audience. One reason is that AI is not as simple as consumer products, and improvements in AI are not as intuitive as CPU benchmark scores. Moreover, although Google announced many new features, most are products or data models still in development, and their benefits are not yet obvious to users.