The AI Responsibility Lab

Leaders in Responsible AI
Summit 2023

March 16th, 2023.
Jesus College, Cambridge, UK.

The Intellectual Forum at Jesus College, Cambridge and The AI Responsibility Lab Public Benefit Corporation are hosting a private 1-day Summit on the state and future of Responsible AI.

We invite you to join other senior private and public sector policy leaders for a day of intimate discussion about our responsible and productive future with Artificial Intelligence.

Leaders in Responsible AI Summit. March 16th, 2023.
Frankopan Hall. Jesus College, Cambridge, UK.

Hosted by The Intellectual Forum, Jesus College, Cambridge and The AI Responsibility Lab.

March 16th, 2023. 09:00 - 18:00. Jesus College, Cambridge, UK.

Attendance by invitation and application only.

All discussions conducted according to the Chatham House Rule.

Tea, lunch, and cocktail reception included.

No cost to attend. Overnight accommodation available.

AI is one of the most impactful technologies of our time.

Orienting its development with productivity, sustainability, grace, and dignity requires impactful conversations.

Agenda

Our day is divided into morning and afternoon sessions, punctuated by a catered lunch.

0900: Welcome, coffee and tea
1000: Opening remarks
1030: Morning session A
1115: Morning session B
1200: Lunch
1330: Afternoon session C
1430: Coffee and tea
1500: Afternoon session D
1600: Closing remarks
1630: Cocktail hour

We conclude with a cocktail hour: a special chance to strengthen old relationships and foster new ones.

Topics

Join us as we advance the dialogue on:

  • How AI will shape the near future of the UK and global labor market

  • How Responsible AI animates our shared ESG goals

  • How policymakers should develop informed and effective responses to the growth of AI

  • Strategies teams can employ today to better operationalize Responsible AI

Attendance

You will be joined by other senior leaders in data, analytics, AI, and Governance, convened under the Chatham House Rule.

Your fellow attendees will arrive from the UK, EU, and around the world. They represent organizations in the private sector, public sector, and not-for-profit charities.

Many of them (yourself included) may be asked to lead or contribute to the dialogues and conversations that animate our day.

Accommodation

Complimentary overnight accommodation for the evenings of March 15th and March 16th, at Jesus College, Cambridge, may be available to you.

Inquire with Ramsay Brown or Dr Julian Huppert for availability.

Other local accommodation may be recommended in the surrounding Cambridge area.

We seek to address the fast-approaching fork in our collective path forward with AI.

One path leads toward unprecedented languishing.

Today, AI exacerbates existing structural inequality.

Systems that control critical decisions in our lives behave inexplicably.

Ethnic cleansings have been exacerbated by algorithmic optimization.

2024 may see a jobless economic recovery, as jobs are automated away through 2022–2023 in response to global macroeconomic pressure.

There's an oft-hidden ecological impact from the raw materials and energy that go into building and training AI systems.

We're losing fundamental rights to privacy, the sanctity of our personal data, and autonomy itself. All of these harms are real, today.

They hurt the companies building the AI systems that create these outcomes.

They hurt our communities. They hurt us.

One path leads toward unprecedented flourishing.

In this second path, AI technologies are designed, tested, and deployed with fairness as an accountable success criterion, not an afterthought.

Critical systems can provide clear, acceptable explanations for their decisions and predictions.

Our algorithms, small and large, are beneficent and well aligned with our notions of human wants and human rights.

The benefits of labor displacement are weighed critically against the tangible harms.

Our data and analytics development is harmonious with our actions to reverse the climate catastrophe.

Our fundamental rights to privacy, the sanctity of our personal data, and human autonomy itself are not only protected, but enhanced by the presence of advanced analytics in our lives.

We get to choose which path we go down.
Join us as we choose flourishing.

Registration is currently closed, as the event is at capacity.

We're grateful for your interest.

We invite you to follow the work of The Lab in our magazine, Accelerate.
