It’s been a busy three days here in Mountain View, as more than 7,000 developers joined us at Shoreline Amphitheatre for this year’s Google I/O. From AI to VR, and everything in between, here’s an exhaustive—we mean that—recap of everything we announced.
1. The Google Assistant is already available on more than 100 million devices!
2. Soon, with Google Lens—a new way for computers to “see”—you’ll be able to learn more about and take action on the things around you, while you’re in a conversation with your Assistant.
3. We’ve brought your Google Assistant to iPhones.
4. Call me maybe? With new hands-free calling on Google Home, you’ll be able to make calls with the Assistant to landlines and mobile numbers in the U.S. and Canada for free.
5. You can now type to your Google Assistant on eligible Android phones and iPhones.
6. Bonjour. Later this year people in Australia, Canada, France, Germany and Japan will be able to give the Assistant on Google Home a try.
7. And Hallo. Soon the Assistant will roll out to eligible Android phones in Brazilian Portuguese, French, German and Japanese. By the end of the year the Assistant will support Italian, Korean and Spanish.
8. We’re also adding transactions and payments to your Assistant on phones—soon you can order and pay for food and more, with your Assistant.
9. With 70+ home automation partners, you can water your lawn and check the status of your smoke alarm with the Assistant on Google Home and phones.
10. Soon you’ll get proactive notifications for reminders, flight delays and traffic alerts with the Assistant on Google Home and phones. With multi-user support, you can control the type of notifications to fit your daily life.
11. Listen to all your favorite tunes. We’ve added Deezer and SoundCloud as partners, and Spotify’s free music offering is coming soon.
12. Bluetooth support is coming to Google Home, so you can play any audio from your iOS or Android device.
13. Don’t know the name of a song, but remember a few of the lyrics? Now you can just ask the Assistant to “play that song that goes like…” and list some of the lyrics.
14. Use your voice to play your favorite shows and more from 20+ new partners, like HBO NOW, CBS All Access and HGTV, straight to your TV.
15. With visual responses from your Assistant on TVs with Chromecast, you’ll be able to see Assistant answers on the biggest screen in your house.
16. With Google Home, you can use your voice to stream to 50 million Cast and Cast-enabled devices.
17. For developers, we’re bringing Actions on Google to the Assistant on phones—on both Android and iOS. Soon you’ll find conversation apps for the Assistant that help you do things like shop for clothes or order food from a lengthy menu.
18. Also for developers, we’re adding ways for you to get data on your app’s usage and performance, with a new console.
19. We’re rolling out an app directory, so people can find apps from developers directly in the Google Assistant.
20. People can now also create shortcuts for apps in the Google Assistant, so instead of saying "Ok Google, ask Forecaster Joe what’s the surf report for the Outer Banks," someone can just say their personal shortcut, like "Ok Google, is the surf up?"
21. Last month we previewed the Google Assistant SDK, and now we’re updating it with hotword support, so developers can build devices that are triggered by a simple "Ok Google."
22. We’re also adding to the SDK the ability to set both timers and alarms.
23. And finally, we’re launching our first developer competition for Actions on Google.
24. With the addition of Smart Reply to Gmail on Android and iOS, we’re using machine learning to make responding to emails easier for more than a billion Gmail users.
25. New Cloud TPUs—the second generation of our custom hardware built specifically for machine learning—are optimized for training ML models as well as running them, and will be available in the Google Compute Engine.
26. And to speed up the pace of open machine-learning research, we’re introducing the TensorFlow Research Cloud, a cluster of 1,000 Cloud TPUs available for free to top researchers.
27. Google for Jobs is our initiative to use our products to help people find work, using machine learning. Through Google Search and the Cloud Jobs API, we’re committed to helping companies connect with potential employees, and job seekers with available opportunities.
28. The Google Cloud Jobs API is helping customers like Johnson & Johnson recruit the best candidates. Only months after launch, Johnson & Johnson has found that job seekers are 18 percent more likely to apply for a job on its careers page now that it uses the Cloud Jobs API.
29. With Google.ai, we’re pulling all our AI initiatives together to put more powerful computing tools and research in the hands of researchers, developers and companies. We’ve already seen promising research in the fields of pathology and DNA research.
30. We must go deeper. AutoML uses neural nets to design neural nets, potentially cutting down the time-intensive process of setting up an AI system, and helping non-experts build AI for their particular needs.
31. We’ve partnered with world-class medical researchers to explore how machine learning could help improve care for patients, avoid costly incidents and save lives.
32. We introduced a new Google Cloud Platform service called Google Cloud IoT Core, which makes it easy for Google Cloud customers to gain business insights through secure device connections to our rich data and analytics tools.
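To make item 32 concrete, here’s a minimal, hypothetical sketch of a device publishing telemetry to IoT Core’s MQTT bridge, written in Kotlin with the Eclipse Paho client. The project, registry and device IDs are placeholders, and the JWT signing that IoT Core uses for device authentication is assumed to happen elsewhere.

```kotlin
import org.eclipse.paho.client.mqttv3.MqttClient
import org.eclipse.paho.client.mqttv3.MqttConnectOptions
import org.eclipse.paho.client.mqttv3.MqttMessage
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence

// Placeholder IDs -- substitute your own project, region, registry and device.
const val PROJECT = "my-gcp-project"
const val REGION = "us-central1"
const val REGISTRY = "my-registry"
const val DEVICE = "my-device"

fun publishState(jwt: String) {
    // IoT Core identifies a device by this long-form MQTT client ID.
    val clientId =
        "projects/$PROJECT/locations/$REGION/registries/$REGISTRY/devices/$DEVICE"
    val client = MqttClient("ssl://mqtt.googleapis.com:8883", clientId, MemoryPersistence())

    val options = MqttConnectOptions().apply {
        userName = "unused"           // IoT Core ignores the username...
        password = jwt.toCharArray()  // ...and authenticates with a signed JWT.
    }
    client.connect(options)

    // Telemetry is published to a per-device events topic.
    client.publish("/devices/$DEVICE/events", MqttMessage("temp=21.5".toByteArray()))
    client.disconnect()
}
```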
33. We first launched Google Photos two years ago, and now it has more than 500 million monthly users.
34. Every day more than 1.2 billion photos and videos are uploaded to Google Photos.
35. Soon Google Photos will give you sharing suggestions by selecting the right photos and suggesting who to send them to, based on who’s in them.
36. Shared libraries will let you effortlessly share photos with a specific person. You can share your full photo library, or photos of certain people or from a certain date forward.
37. With photo books, once you select the photos, Google Photos can curate an album for you with all the best shots, which you can then print for $9.99 (20-page softcover) or $19.99 (20-page hardcover), in the U.S. for now.
38. Google Lens is coming to Photos later this year, so you’ll be able to look back on your photos to learn more or take action—like find more information about a painting from a photo you took in a museum.
39. We reached 2 billion monthly active devices on Android!
40. Android O, coming later this year, is getting improvements to “vitals” like battery life and performance, and bringing more fluid experiences to your smaller screen, from improved notifications to autofill.
41. With picture-in-picture in Android O, you can do two tasks simultaneously, like checking your calendar while on a Duo video call.
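For developers, opting into picture-in-picture takes very little code. Here’s a minimal Kotlin sketch, assuming an Activity that declares `android:supportsPictureInPicture="true"` in its manifest; the class and method names are ours:

```kotlin
import android.app.Activity
import android.app.PictureInPictureParams
import android.util.Rational

class VideoCallActivity : Activity() {
    // Call this when the user taps a "minimize" button (or from onUserLeaveHint()):
    // the activity shrinks to a floating window so another task can take over.
    fun minimizeToPip() {
        val params = PictureInPictureParams.Builder()
            .setAspectRatio(Rational(16, 9))  // keep the video's shape
            .build()
        enterPictureInPictureMode(params)
    }
}
```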
42. Smart text selection in Android O improves copy and paste to recognize entities on the screen—like a complete address—so you can easily select text with a double tap, and even bring up an app like Maps to help navigate you there.
43. Our emoji are going through a major design refresh in Android O.
44. For developers, the first beta release of Android O is now available.
45. We introduced Google Play Protect—a set of security protections for Android that’s always on and automatically takes action to keep your data and device safe, so you don’t have to lift a finger.
46. The new Find My Device app helps you locate, ring, lock and erase your lost Android devices—phones, tablets, and even watches.
47. We previewed a new initiative aimed at getting computing into the hands of more people on entry-level Android devices. Internally called Android Go, it’s designed to be relevant for people who have limited data connectivity and speak multiple languages.
48. Android Auto is now supported by 300 car models, and Android Auto users have grown 10x since last year.
49. With partners in 70+ countries, we’re seeing 1 million new Android TV device activations every two months, doubling the number of users since last year.
50. We’ve refreshed the look and feel of the Android TV homescreen, making it easy for people to find, preview and watch content provided by apps.
51. With new partners like Emporio Armani, Movado and New Balance, Android Wear now powers almost 50 different watches.
52. We shared an early look at TensorFlow Lite, which is designed to help developers take advantage of machine learning to improve the user experience on Android.
53. As part of TensorFlow Lite, we’re working on a Neural Network API that TensorFlow can take advantage of to accelerate computation.
54. An incredible 82 billion apps were downloaded from Google Play in the last year.
55. We honored 12 Google Play Awards winners—apps and games that give their fans particularly delightful and memorable experiences.
56. We’re now previewing Android Studio 3.0, focused on speed and Android platform support.
57. We’re making Kotlin an officially supported programming language in Android, with the goal of making Android development faster and more fun.
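For a taste of why Kotlin has Android developers excited, here’s a small illustrative snippet; the class and values are invented for the example:

```kotlin
// A one-line data class replaces a Java class full of getters, setters,
// equals(), hashCode() and toString().
data class Session(val title: String, val room: String?)

fun main() {
    val session = Session("What's new in Android", room = null)
    // Null safety: the compiler forces a fallback for the nullable room.
    println("${session.title} in ${session.room ?: "TBD"}")
}
```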
58. And we’ll be collaborating with JetBrains, the creators of Kotlin, to move Kotlin into a nonprofit foundation.
59. Android Instant Apps are now open to all developers, so anyone can build and publish apps that can be run without requiring installation.
60. Thousands of developers from 60+ countries are now using Android Things to create connected devices that have easy access to services like the Google Assistant, TensorFlow and more.
61. Android Things will be fully released later this year.
62. Over the last year, the number of Google Play developers with more than 1 million installs grew 35 percent.
63. The number of people buying on Google Play grew by almost 30 percent this past year.
64. We’re updating the Google Play Console with new features to help you improve your app’s performance and quality, and grow your business on Google Play.
65. We’re also adding a new subscriptions dashboard in the Play Console, bringing together data like new subscribers and churn so you can make better business decisions.
66. To make it easier and more fun for developers to write robust apps, we announced a guide to Android app architecture along with a preview of Architecture Components.
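As a flavor of the pattern, here’s a minimal sketch using the preview `android.arch.lifecycle` classes; the counter itself is our example, not from the guide:

```kotlin
import android.arch.lifecycle.LiveData
import android.arch.lifecycle.MutableLiveData
import android.arch.lifecycle.ViewModel

// A ViewModel survives configuration changes, so the counter isn't lost
// when the device rotates.
class CounterViewModel : ViewModel() {
    private val count = MutableLiveData<Int>().apply { value = 0 }

    // Expose read-only LiveData; the UI observes it and re-renders on change.
    fun getCount(): LiveData<Int> = count

    fun increment() {
        count.value = (count.value ?: 0) + 1
    }
}
```

An Activity retrieves the same instance across rotations with `ViewModelProviders.of(this).get(CounterViewModel::class.java)` and observes `getCount()` to keep the UI in sync.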
67. We’re adding four new tools to the Complications API for Android Wear, to help give users more informative watch faces.
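For context on what the Complications API does today, here’s a hedged sketch of the provider side using the existing Wear support library: a service that pushes a short text value for a watch face to render. The service name and step count are invented for the example:

```kotlin
import android.support.wearable.complications.ComplicationData
import android.support.wearable.complications.ComplicationManager
import android.support.wearable.complications.ComplicationProviderService
import android.support.wearable.complications.ComplicationText

// A provider service supplies small pieces of data (here, a step count)
// that watch faces render as complications.
class StepsProviderService : ComplicationProviderService() {
    override fun onComplicationUpdate(
        complicationId: Int, type: Int, manager: ComplicationManager
    ) {
        val data = ComplicationData.Builder(ComplicationData.TYPE_SHORT_TEXT)
            .setShortText(ComplicationText.plainText("8,432"))
            .build()
        manager.updateComplicationData(complicationId, data)
    }
}
```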
68. Also for Android Wear, we’re open sourcing some components in the Android Support Library.
69. More Daydream-ready phones are coming soon, including the Samsung Galaxy S8 and S8+, LG’s next flagship phone, and devices from Motorola and ASUS.
70. Today there are 150+ applications available for Daydream.
71. More than 2 million students have gone on virtual reality Expeditions using Google Cardboard, with more than 600 tours available.
72. We’re expanding Daydream to support standalone VR headsets, which don’t require a phone or PC. HTC VIVE and Lenovo are both working on devices, based on a Qualcomm reference design.
73. Standalone Daydream headsets will include WorldSense, a new technology based on Tango which enables the headset to track your precise movements in space, without any extra sensors.
74. The next smartphone with Tango technology will be the ASUS ZenFone AR, available this summer.
75. We worked with the Google Maps team to create a new Visual Positioning Service (VPS) for developers, which helps devices quickly and accurately understand their location indoors.
76. We’re bringing AR to the classroom with Expeditions AR, launching with a Pioneer Program this fall.
77. We previewed Euphrates, the latest release of Daydream, which will let you capture what you’re seeing and cast your virtual world right onto the screen in your living room, coming later this year.
78. A new tool for VR developers, Instant Preview, lets developers make changes on a computer and see them reflected on a headset in seconds, not minutes.
79. Seurat is a new technology that makes it possible to render high-fidelity scenes on mobile VR headsets in real time. Somebody warn Cameron Frye.
80. We’re releasing an experimental build of Chromium with an augmented reality API, to help bring AR to the web.
81. Soon you’ll be able to watch and control 360-degree YouTube videos and live streams on your TV, and use your game controller or remote to pan around an immersive experience.
82. Super Chat lets fans interact directly with YouTube creators during live streams by purchasing highlighted chat messages that stay pinned to the top of the chat window. We previewed a developer integration that showed how the Super Chat API can be used to trigger actions in the real world—such as turning the lights on and off in a creator’s apartment.
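As a rough idea of how such an integration might be wired up, here’s a hypothetical Kotlin sketch that polls the YouTube Data API’s `superChatEvents` endpoint and reacts to new events. The lighting call is a stand-in, OAuth token acquisition is assumed to happen elsewhere, and the JSON check is deliberately naive:

```kotlin
import java.net.HttpURLConnection
import java.net.URL

// Hypothetical stand-in for whatever smart-home call a creator wires up.
fun toggleApartmentLights() { /* e.g. call a lighting API */ }

// Rough polling sketch against the Super Chat events list endpoint.
// `accessToken` is an OAuth token for the creator's channel, obtained elsewhere.
fun pollSuperChats(accessToken: String) {
    val url = URL("https://www.googleapis.com/youtube/v3/superChatEvents?part=snippet")
    val conn = url.openConnection() as HttpURLConnection
    conn.setRequestProperty("Authorization", "Bearer $accessToken")

    val json = conn.inputStream.bufferedReader().readText()
    // Naive trigger: a Super Chat snippet always carries an amount field.
    if ("\"amountMicros\"" in json) toggleApartmentLights()
}
```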
83. A new feature in the YouTube VR app will soon let people watch and discuss videos together.
84. We announced that we will make Fabric’s Crashlytics the primary crash reporting product in Firebase.
85. We’re bringing phone number authentication to Firebase, working closely with the Fabric Digits team, so your users can sign in to your apps with their phone numbers.
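Here’s a minimal Kotlin sketch of the phone sign-in flow with the Firebase Auth client; the phone number and error handling are placeholders:

```kotlin
import android.app.Activity
import com.google.firebase.FirebaseException
import com.google.firebase.auth.FirebaseAuth
import com.google.firebase.auth.PhoneAuthCredential
import com.google.firebase.auth.PhoneAuthProvider
import java.util.concurrent.TimeUnit

// Firebase texts a code to the user, then we exchange the verified
// credential for a signed-in user.
fun signInWithPhone(activity: Activity, phoneNumber: String) {
    val callbacks = object : PhoneAuthProvider.OnVerificationStateChangedCallbacks() {
        override fun onVerificationCompleted(credential: PhoneAuthCredential) {
            // Auto-verification (e.g. instant SMS retrieval) succeeded.
            FirebaseAuth.getInstance().signInWithCredential(credential)
        }
        override fun onVerificationFailed(e: FirebaseException) {
            // Invalid number, quota exceeded, etc.
        }
    }
    PhoneAuthProvider.getInstance().verifyPhoneNumber(
        phoneNumber,            // e.g. "+16505551234"
        60, TimeUnit.SECONDS,   // SMS code timeout
        activity,               // hosts the verification UI and callbacks
        callbacks
    )
}
```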
86. New Firebase Performance Monitoring will help diagnose issues resulting from poorly performing code or challenging network conditions.
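Custom traces are the core primitive in Performance Monitoring. A minimal Kotlin sketch (the trace name and workload are invented):

```kotlin
import com.google.firebase.perf.FirebasePerformance

// A custom trace measures how long a specific operation takes in the wild;
// the results appear in the Firebase console alongside network metrics.
fun loadCatalog() {
    val trace = FirebasePerformance.getInstance().newTrace("load_catalog")
    trace.start()
    try {
        // ... expensive work: network fetch, parsing, rendering ...
    } finally {
        trace.stop()  // duration is reported automatically
    }
}
```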
87. We’ve improved Firebase Cloud Messaging.
88. For game developers, we’ve built Game Loop support & FPS monitoring into Test Lab for Android, allowing you to evaluate your game’s frame rate before you deploy.
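Game loops are driven by a special launch intent. Here’s a hedged Kotlin sketch of an Activity recognizing a Test Lab game-loop run; the manifest also needs a matching intent filter, and the demo logic is game-specific:

```kotlin
import android.app.Activity
import android.os.Bundle

class GameActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Test Lab launches the game with this intent action to request
        // an automated "demo" run instead of an interactive session.
        if (intent.action == "com.google.intent.action.TEST_LOOP") {
            val scenario = intent.getIntExtra("scenario", 0)
            runAutomatedLoop(scenario)  // play a scripted level, then finish()
        }
    }

    private fun runAutomatedLoop(scenario: Int) { /* game-specific demo mode */ }
}
```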
89. We’ve taken some big steps to open source many of our Firebase SDKs on GitHub.
90. We’re expanding Firebase Hosting to integrate with Cloud Functions, letting you do things like send a notification when a user signs up or automatically create thumbnails when an image is uploaded to Cloud Storage.
91. Developers interested in testing the cutting edge of our products can now sign up for a Firebase Alpha program.
92. We’re adding two new certifications for web developers, in addition to the Associate Android Developer Certification announced last year.
93. We opened an Early Access Program for Chatbase, a new analytics tool in API.ai that helps developers monitor the activity in their chatbots.
94. We’ve completely redesigned AdMob, which helps developers promote, measure and monetize mobile apps, with a new user flow and publisher controls.
95. AdMob is also now integrated with Google Analytics for Firebase, giving developers a complete picture of ads revenue, mediation revenue and in-app purchase revenue in one place.
96. With a new Google Payment API, developers can enable easy in-app or online payments for customers who already have credit and debit cards stored on Google properties.
97. We’re introducing new ways for merchants to engage and reward customers, including the new Card Linked Offers API.
98. We’re introducing new options for ads placement through Universal App Campaigns to help users discover your apps in the Google Play Store.
99. An update to Smart Bidding strategies in Universal App Campaigns helps you gain high-value users of your apps—like players who level-up in your game or the loyal travelers who book several flights a month.
100. A new program, App Attribution Partners, integrates data into AdWords from seven third-party measurement providers so you can more easily find and take action on insights about how users engage with your app.
101. Firebase has partnered with Google Cloud to offer up to 10 gigabytes of free storage in BigQuery, so you can quickly, easily and affordably run queries on your data (see the sketch below).
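Here’s a minimal Kotlin sketch using the google-cloud-bigquery Java client; the project, dataset and table names are placeholders for wherever your exported events land:

```kotlin
import com.google.cloud.bigquery.BigQueryOptions
import com.google.cloud.bigquery.QueryJobConfiguration

fun countEvents() {
    // Uses application default credentials for the active project.
    val bigquery = BigQueryOptions.getDefaultInstance().service

    // Placeholder table name -- point this at your own exported dataset.
    val query = QueryJobConfiguration.newBuilder(
        "SELECT platform, COUNT(*) AS n " +
        "FROM `my-project.firebase_export.events` " +
        "GROUP BY platform ORDER BY n DESC"
    ).build()

    for (row in bigquery.query(query).iterateAll()) {
        println("${row.get("platform").stringValue}: ${row.get("n").longValue}")
    }
}
```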
That’s all, folks! Thanks to everyone who joined us at I/O this year, whether in person, at an I/O Extended event or via the live stream. See you in 2018.