Today the Trustworthy Accountability Group (TAG) announced a new pilot blacklist to protect advertisers across the industry. This blacklist comprises data-center IP addresses associated with non-human ad requests. We're happy to support this effort along with other industry leaders—Dstillery, Facebook, MediaMath, Quantcast, Rubicon Project, TubeMogul and Yahoo—and contribute our own data-center blacklist. As mentioned to Ad Age and in our recent call to action, we believe that if we work together we can raise the fraud-fighting bar for the whole industry.
Data-center traffic is one of many types of non-human or illegitimate ad traffic. The newly shared blacklist identifies web robots or “bots” that are being run in data-centers but that avoid detection by the IAB/ABC International Spiders & Bots List. Well-behaved bots announce that they're bots as they surf the web by including a bot identifier in their declared User-Agent strings. The bots filtered by this new blacklist are different. They masquerade as human visitors by using User-Agent strings that are indistinguishable from those of typical web browsers.
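To make the distinction concrete, here is a minimal sketch of how an ad platform might combine a data-center IP list with a User-Agent check. The IP ranges and bot tokens below are illustrative placeholders, not entries from the actual TAG Data-center IP blacklist or the IAB/ABC International Spiders & Bots List.

```python
import ipaddress

# Hypothetical sample ranges (IETF documentation prefixes); the real TAG list
# is distributed to participating companies, not hard-coded like this.
DATA_CENTER_RANGES = [
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

# Substrings that well-behaved crawlers include in their declared User-Agent,
# loosely mirroring the role of the IAB/ABC International Spiders & Bots List.
DECLARED_BOT_TOKENS = ("bot", "crawler", "spider")


def is_declared_bot(user_agent: str) -> bool:
    """True if the User-Agent openly identifies itself as automated traffic."""
    ua = user_agent.lower()
    return any(token in ua for token in DECLARED_BOT_TOKENS)


def is_data_center_ip(ip: str) -> bool:
    """True if the request IP falls inside a known data-center range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in network for network in DATA_CENTER_RANGES)


def should_filter(ip: str, user_agent: str) -> bool:
    """Flag traffic from a data-center IP whose User-Agent looks like a normal browser."""
    return is_data_center_ip(ip) and not is_declared_bot(user_agent)


# A browser-like User-Agent coming from a data-center address is flagged;
# a self-declared bot from the same address is handled by the existing bots list.
print(should_filter("192.0.2.10", "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))  # True
print(should_filter("192.0.2.10", "ExampleBot/1.0 (+https://example.com/bot)"))  # False
```

In practice such checks run alongside many other signals; the point of the shared blacklist is simply that the IP-range piece no longer has to be rebuilt by every company on its own.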
Google has long invested in preventing this and other types of invalid traffic from entering our ad platforms. By contributing our data-center blacklist to TAG, we hope to help others in the industry protect themselves.
We’re excited by the collaborative spirit we’ve seen working with other industry leaders on this initiative. This is an important early step toward tackling fraudulent and illegitimate inventory across the industry, and we look forward to sharing more in the future. By pooling our collective efforts and working with industry bodies, we can create strong defenses against those looking to take advantage of our ecosystem. We look forward to working with the TAG Anti-fraud working group to turn this pilot program into an industry-wide tool.
For more details about the kinds of data-center traffic filtered by the TAG Data-center IP blacklist, read our detailed post on the Google Online Security Blog.
Posted by Vegard Johnsen, Product Manager, Google Ad Traffic Quality