Content Adversarial Red Team Analyst, Trust and Safety

Google
Full-time
Remote
United States
$110,000 - $157,000 USD yearly
Trust & Safety

Trust & Safety team members are tasked with identifying and taking on the biggest problems that challenge the safety and integrity of our products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products like Search, Maps, Gmail, and Google Ads. On this team, you're a big-picture thinker and strategic team player with a passion for doing what's right. You work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud cases at Google speed, with urgency. And you take pride in knowing that every day you are working hard to promote trust in Google and to ensure the highest levels of user safety.

As a Content Adversarial Red Team Analyst, you will be a key contributor in identifying and mitigating emerging content safety risks within Google's GenAI products. You will lead the charge in uncovering unknown generative AI issues, threats, and vulnerabilities that are not captured by traditional testing methods. Your understanding of AI safety and your ability to think strategically will be instrumental in shaping the future of AI development, ensuring that Google's AI products are safe, fair, and unbiased.