Connect with community members, organizations, and practitioners through a monthly newsletter, local events, and an organization directory.

Newsletter

February 2026

Organizations

  • AI Safety Reading Group (Mila)
    Biweekly research seminar series at Mila inviting authors to present their own AI safety papers.
  • AI Alignment McGill (AIAM)
    Student club at McGill focused on AI alignment and safety; organizes reading groups, hackathons, and related activities.
  • Canadian AI Safety Institute (CAISI)
    Canada's federal AI safety institute. Funds research through CIFAR and NRC programs and develops safety tools and guidance. Member of the International Network of AI Safety Institutes.
  • CEIMIA
    Independent organization managing the Montréal hub of the GPAI Network of Centres. Implements high-impact applied projects for responsible AI grounded in ethics and human rights.
  • CIFAR — AI & Society
    Program under the Pan-Canadian AI Strategy that convenes interdisciplinary meetings and publishes reports on AI's societal impacts for policymakers and the public.
  • McGill Centre for Media, Technology & Democracy (CMTD)
    Research centre at McGill's Max Bell School focused on AI governance, transparency and democratic oversight. Publishes analyses and convenes workshops informing Canadian and international policy.
  • Encode Canada
    Youth-led group that builds AI literacy and early-career capacity through student fellowships, hands-on workshops, public events, and creative programs.
  • Goodheart AI
    Goodheart builds AI systems to safely accelerate R&D of defensive technologies, aiming for a world robust to powerful AI.
  • Horizon Omega (HΩ)
    Montréal-based nonprofit hub supporting the local AI safety, ethics, and governance community through meetups, coworking, workshops, and collaborations.
  • IVADO — R³AI / R10: AI Safety & Alignment
    Multi-year research program on AI safety spanning three axes: evaluating harmful behaviors, understanding AI decision-making, and algorithmic approaches to safe AI.
  • Krueger AI Safety Lab (KASL)
    Technical AI safety research group at Mila led by David Krueger. Research areas include misgeneralization, mechanistic interpretability, and reward specification.
  • LawZero
    Nonprofit AI safety research organization launched by Yoshua Bengio. Focuses on a non-agentic 'Scientist AI' architecture as an alternative to frontier-lab approaches.
  • Montréal AI Ethics Institute (MAIEI)
    International non-profit founded in 2018 that equips citizens concerned about AI and its societal impacts to take action. Produces the AI Ethics Brief and the State of AI Ethics reports.
  • Montréal AI Governance, Ethics & Safety Meetup
    Public meetup community hosting talks, workshops, and discussions on AI governance, ethics, and safety. Co-organized with Horizon Omega.
  • Mila – Québec AI Institute
    Academic deep learning research center with 140+ affiliated professors. Works on technical alignment, interpretability, and responsible AI development.
  • PauseAI Montréal
    Volunteer community advocating for mitigating AI risks and pausing the development of superhuman AI until it can be built safely.
  • OBVIA
    Inter-university observatory on the societal impacts of AI. A network of researchers from Québec institutions publishing research across seven thematic hubs.