Teaching a machine the difference between acceptable and policy-violating content requires a delicate balance. For example, sometimes the word “gay” is used as a derogatory term, and that’s not something we tolerate in Google reviews. But if we teach our machine learning models that it’s only used in hate speech, we might erroneously remove reviews that promote a gay business owner or an LGBTQ+ safe space. Our human operators regularly run quality tests and complete additional training to remove bias from the machine learning models. By thoroughly training our models on all the ways certain words or phrases are used, we improve our ability to catch policy-violating content and reduce the chance of inadvertently blocking legitimate reviews from going live.
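To see why context matters, here is a deliberately tiny sketch of a text classifier. It is not our production system; the toy reviews, labels, and model choice are purely illustrative. The point is that a filter keyed to a single word would block both test phrases below, while a model trained on word pairs can weigh the surrounding context.

```python
# Illustrative only: a toy context-aware classifier, not Google's pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training examples: the same word in acceptable and
# policy-violating contexts. 0 = keep, 1 = violates policy.
reviews = [
    "proud to support this gay-owned coffee shop",
    "a welcoming LGBTQ+ safe space, highly recommend",
    "the staff was rude and the food was cold",
    "that's so gay, avoid this dump",
    "the review was full of hateful slurs about the owner",
]
labels = [0, 0, 0, 1, 1]

# Word bigrams let the model learn from surrounding context
# ("gay owned" vs. "so gay") instead of firing on one trigger word.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reviews, labels)

# A keyword filter would block both of these; the model can score them
# differently because the neighboring words differ.
for text in ["a lovely gay-owned bakery", "ugh, so gay, stay away"]:
    print(text, "->", model.predict_proba([text])[0][1])  # P(violating)
```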
If our systems detect no policy violations, the review can be posted within a matter of seconds. But our job doesn’t stop once a review goes live. Our systems continue to analyze the contributed content and watch for questionable patterns. These patterns can be anything from a group of people leaving reviews on the same cluster of Business Profiles to a business or place receiving an unusually high number of 1- or 5-star reviews over a short period of time.
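As a simplified illustration of that second pattern, here is a sketch of a sliding-window burst detector. The data shape, the 24-hour window, and the threshold are assumptions made for the example, not our real signals or values.

```python
# Illustrative sketch: flag places whose 1- or 5-star volume spikes.
from collections import defaultdict, deque
from datetime import timedelta

WINDOW = timedelta(hours=24)   # assumed look-back window
BURST_THRESHOLD = 20           # assumed suspicious count within WINDOW

def flag_rating_bursts(review_stream):
    """review_stream: (business_id, rating, timestamp) tuples in time order.
    Yields business_ids whose extreme-rating volume exceeds the threshold;
    the same id may be yielded repeatedly, so dedupe downstream."""
    recent = defaultdict(deque)  # business_id -> timestamps of 1/5-star reviews
    for business_id, rating, ts in review_stream:
        if rating not in (1, 5):
            continue
        bucket = recent[business_id]
        bucket.append(ts)
        # Evict timestamps that have fallen out of the sliding window.
        while ts - bucket[0] > WINDOW:
            bucket.popleft()
        if len(bucket) >= BURST_THRESHOLD:
            yield business_id  # candidate for closer (often human) review
```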
Keeping reviews authentic and reliable
Like any platform that welcomes contributions from users, we also have to stay vigilant in our efforts to prevent fraud and abuse from appearing on Maps. Part of that is making it easy for people using Google Maps to flag policy-violating reviews: if you think you see one on Google, we encourage you to report it to our team. Businesses can report reviews on their profiles here, and consumers can report them here.
Google Maps users and businesses can easily report reviews that they feel violate one of our policies.
Our team of human operators works around the clock to review flagged content. When we find reviews that violate our policies, we remove them from Google and, in some cases, suspend the user account or even pursue litigation.
In addition to reviewing flagged content, our team proactively works to identify potential abuse risks, which reduces the likelihood of successful abuse attacks. For instance, when there’s an upcoming event with a significant following, such as an election, we implement elevated protections for the places associated with the event and other nearby businesses that people might look up on Maps. In keeping with our mission of publishing only authentic and reliable reviews, we continue to monitor these places and businesses until the risk of abuse has subsided. Our investment in analyzing and understanding how contributed content can be abused has been critical in keeping us one step ahead of bad actors.
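One way to picture this kind of time-bounded protection is a watchlist with an expiry, sketched below. The class name, lookup method, and seven-day window are invented for illustration and are not how our systems actually work.

```python
# Illustrative sketch of elevated protection for places tied to a
# high-risk event. All names and values here are hypothetical.
from datetime import datetime, timedelta, timezone

class ProtectionWatchlist:
    def __init__(self):
        self._protected = {}  # place_id -> time the elevated protection expires

    def protect_places(self, place_ids, until):
        """Mark places for stricter review checks until `until`."""
        for pid in place_ids:
            # Keep the later expiry if the place is already protected.
            self._protected[pid] = max(self._protected.get(pid, until), until)

    def needs_extra_scrutiny(self, place_id, now=None):
        now = now or datetime.now(timezone.utc)
        expiry = self._protected.get(place_id)
        return expiry is not None and now < expiry

# Hypothetical usage around an election:
watchlist = ProtectionWatchlist()
watchlist.protect_places(
    ["polling_place_123", "nearby_cafe_456"],
    until=datetime.now(timezone.utc) + timedelta(days=7),
)
print(watchlist.needs_extra_scrutiny("polling_place_123"))  # True for 7 days
```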
With more than 1 billion people turning to Google Maps every month to navigate and explore, we want to make sure the information they see, especially reviews, is reliable for everyone. Our work is never done; we’re constantly improving our systems and working hard to keep abuse, including fake reviews, off the map.