For over two decades, we've dedicated ourselves to our mission: to organize the world’s information and make it universally accessible and useful. We started with text search, but over time, we've continued to create more natural and intuitive ways to find information — you can now search what you see with your camera, or ask a question aloud with your voice.
At Search On today, we showed how advancements in artificial intelligence are enabling us to transform our information products yet again. We're going far beyond the search box to create search experiences that work more like our minds, and that are as multidimensional as we are as people.
We envision a world in which you’ll be able to find exactly what you’re looking for by combining images, sounds, text and speech, just like people do naturally. You’ll be able to ask questions, with fewer words — or even none at all — and we’ll still understand exactly what you mean. And you’ll be able to explore information organized in a way that makes sense to you.
We call this making search more natural and intuitive, and we’re on a long-term path to bring this vision to life for people everywhere. To give you an idea of how we’re evolving the future of our information products, here are three highlights from what we showed today at Search On.
Making visual search work more naturally
Cameras have been around for hundreds of years, and they’re usually thought of as a way to preserve memories, or these days, create content. But a camera is also a powerful way to access information and understand the world around you — so much so that your camera is your next keyboard. That’s why in 2017 we introduced Lens, so you can search what you see using your camera or an image. Now, the age of visual search is here — in fact, people use Lens to answer 8 billion questions every month.
We’re making visual search even more natural with multisearch, a completely new way to search using images and text simultaneously, similar to how you might point at something and ask a friend a question about it. We introduced multisearch earlier this year as a beta in the U.S., and at Search On, we announced we’re expanding it to more than 70 languages in the coming months. We’re taking this capability even further with “multisearch near me,” enabling you to take a picture of an unfamiliar item, such as a dish or plant, then find it at a local place nearby, like a restaurant or gardening shop. We will start rolling “multisearch near me” out in English in the U.S. this fall.
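To make the underlying idea more concrete, here is a minimal, hypothetical sketch of how an image and a text refinement can be fused into a single query and matched against candidate results, using the open-source sentence-transformers library with a CLIP model. This is purely illustrative and is not Google's multisearch implementation; the file name, refinement text, and candidate captions are all made up.

```python
# Illustrative sketch only: a toy image+text query in the spirit of multisearch,
# built on an open-source CLIP model via sentence-transformers.
# NOT Google's implementation; all inputs below are hypothetical.
from PIL import Image
from sentence_transformers import SentenceTransformer, util

# CLIP maps images and text into the same embedding space.
model = SentenceTransformer("clip-ViT-B-32")

# The visual part of the query: a photo of an unfamiliar dish (hypothetical file).
image_embedding = model.encode(Image.open("unknown_dish.jpg"), convert_to_tensor=True)

# The textual refinement the user adds alongside the photo.
text_embedding = model.encode("vegetarian version", convert_to_tensor=True)

# Naive fusion: average the two embeddings into one multimodal query vector.
query_embedding = (image_embedding + text_embedding) / 2

# Hypothetical candidate results, e.g. nearby restaurant menu items.
candidates = [
    "spicy tofu noodle soup at a nearby Thai restaurant",
    "beef pho at a downtown Vietnamese place",
    "vegetarian pad see ew from a local street-food stall",
]
candidate_embeddings = model.encode(candidates, convert_to_tensor=True)

# Rank candidates by cosine similarity to the fused image+text query.
scores = util.cos_sim(query_embedding, candidate_embeddings)[0]
for caption, score in sorted(zip(candidates, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {caption}")
```

A production system would of course go far beyond averaging two embeddings, but the sketch shows why pairing a picture with a few words can express a query that neither the image nor the text captures alone.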
Multisearch enables a completely new way to search using images and text simultaneously.
With the new Lens translation update, you’ll now see translated text realistically overlaid onto the pictures underneath.
These announcements, along with many others introduced at Search On, are just the start of how we’re transforming our products to help you go beyond the traditional search box. We’re steadfast in our pursuit to create technology that adapts to you and your life — to help you make sense of information in ways that are most natural to you.
Prabhakar Raghavan
Senior Vice President