Nia Castelly, Co-Founder and Head of Legal at Checks (Google’s AI compliance platform), joins Navigating Abroad to discuss how businesses can build safer applications and deploy generative AI responsibly. She breaks down the challenges developers face in an evolving regulatory landscape and how Checks helps solve them with tools like automated adversarial testing and continuous monitoring.
Nia shares how existing legal frameworks—copyright, contracts, and privacy laws—can still offer useful guardrails, even as legislation like the EU AI Act takes shape. She also explores the importance of transparency and attribution in model training, the gaps in current compensation structures for creators, and how innovation in GenAI can itself be used to make GenAI safer.