Minimum qualifications:
- Bachelor's degree or equivalent practical experience.
- 5 years of experience in data analysis, including identifying trends, generating summary statistics, and drawing insights from quantitative and qualitative data.
- 5 years of experience managing projects and defining project scope, goals, and deliverables.
Preferred qualifications:
- Master’s or Ph.D. degree in a quantitative discipline (e.g., Computer Science, Statistics, Mathematics, Operations Research).
- 5 years of experience in a large-scale data analysis or data science setting, including abuse and fraud disciplines, with a focus on web security, harmful content moderation, and threat analysis.
- Experience in scripting languages (e.g., C/C++), with proficiency in programming languages (e.g., Python, Julia) and database languages such as SQL.
- Experience with prompt engineering and fine-tuning LLMs.
- Proficiency in applying machine learning techniques to large datasets (see the sketch after this list).
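As a rough illustration of the kind of work the last two qualifications point to, here is a minimal Python sketch of training a text classifier for abusive content with scikit-learn. The examples, labels, and feature choices are hypothetical and purely illustrative, not a description of Google's actual systems.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = abusive, 0 = benign.
texts = [
    "win a free prize, click this link now",
    "thanks for the helpful answer",
    "send your password to claim your reward",
    "great photo from the trip",
]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a linear classifier, chained in one pipeline.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

# Score a new message; predict_proba returns [P(benign), P(abusive)].
new_message = ["claim your free reward by clicking here"]
print(model.predict_proba(new_message)[0][1])
```

In practice the same pattern scales to large datasets by swapping in distributed feature extraction and fine-tuned LLM scorers, but the fit/predict workflow shown here is the core loop.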
About the job
Trust & Safety team members are tasked with identifying and taking on the biggest problems that challenge the safety and integrity of our products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products like Search, Maps, Gmail, and Google Ads. On this team, you're a big-picture thinker and strategic team player with a passion for doing what’s right. You work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud at Google speed, with urgency. And you take pride in knowing that every day you are working hard to promote trust in Google and ensure the highest levels of user safety.
The AI Safety Protections team within Trust & Safety develops and implements cutting-edge AI/LLM-powered solutions to ensure the safety of generative AI across Google's products. This includes safeguards for consumer products, enterprise offerings (Vertex AI), and on-device applications, as well as foundational models (Gemini, Juno, Veo) in collaboration with Google DeepMind. We are a team of passionate data scientists and machine learning experts dedicated to mitigating risks associated with generative AI and to addressing real-world safety threats (e.g., imminent threats, child safety) with LLM/AI technology.
As a member of our team, you will have the opportunity to apply the latest advancements in AI/LLM, work with teams developing cutting-edge AI technologies, and help protect the world from real-world harm.
This role works with sensitive content or situations and may be exposed to graphic, controversial, or upsetting topics or content.
Responsibilities
- Develop scalable safety solutions for AI products across Google by leveraging advanced machine learning and AI techniques.
- Apply statistical and data science methods to examine Google's protection measures, uncover potential shortcomings, and develop actionable insights for continuous security enhancement.
- Drive business outcomes by crafting compelling data stories for a variety of stakeholders, including executive leadership.
- Develop automated data pipelines and self-service dashboards to provide timely insights at scale (see the sketch after this list).
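As a hedged sketch of the last responsibility, the following Python snippet aggregates raw moderation events into the kind of daily summary a self-service dashboard could consume. The event log, column names, and flag-rate metric are hypothetical; real pipelines would read from a database (e.g., via SQL) rather than building data inline.

```python
import pandas as pd

# Hypothetical moderation event log; illustrative only.
events = pd.DataFrame({
    "day": ["2024-05-01", "2024-05-01", "2024-05-02", "2024-05-02"],
    "product": ["Search", "Gmail", "Search", "Gmail"],
    "reviewed": [1200, 800, 1500, 900],
    "flagged": [36, 12, 60, 18],
})

# Aggregate per day and product, then derive the flag rate a
# dashboard would chart over time.
summary = (
    events.groupby(["day", "product"], as_index=False)[["reviewed", "flagged"]]
    .sum()
)
summary["flag_rate"] = summary["flagged"] / summary["reviewed"]
print(summary)
```

Scheduling this aggregation to run automatically and publishing the output table is what turns a one-off analysis into the timely, at-scale insight the role calls for.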