Responsible use of artificial intelligence (AI)

Exploring the future of responsible AI in government

Artificial intelligence (AI) technologies offer promise for improving how the Government of Canada serves Canadians. As we explore the use of AI in government programs and services, we are ensuring it is governed by clear values, ethics, and laws.

Information and services

Our guiding principles

See the principles that guide the effective and ethical use of AI in government.

Our timeline

Follow the evolution of our work to support the responsible use of AI in the Government of Canada.

Directive on Automated Decision-Making

See how we ensure that the government's automated decision-making systems are used responsibly.

Algorithmic Impact Assessment (AIA)

See how the AIA helps designers understand and manage the impacts of their AI solutions from an ethical perspective.

Guide on the use of Generative AI

Explore our guide for federal institutions on the responsible use of generative AI.

Guideline on Service and Digital

Section 4.5 provides additional guidance on the responsible and ethical use of automated decision systems.

List of qualified AI suppliers

Consult the current list of businesses qualified to provide AI solutions to the Government of Canada.

Our guiding principles

The government is committed to ensuring the effective and ethical use of AI. The following actions are aligned with the Digital Nations Shared Approach to AI, and reflect shared core values and principles:

  1. Promoting openness about how, why, and when AI is used;
  2. Prioritizing the needs of individuals and communities, including Indigenous peoples, and considering the institutional and public benefits of AI;
  3. Assessing and mitigating the risks of AI to legal rights and democratic norms early in the lifecycle of AI systems and following their launch;
  4. Ensuring training or other input data used by AI systems is lawfully collected, used, and disclosed, taking account of applicable privacy and intellectual property rights;
  5. Evaluating the outputs of AI systems, including generative tools, to minimize biases and inaccuracies, and enabling users to distinguish between AI and human outputs;
  6. Publishing legal or ethical impact assessments, source code, training data, independent audits or reviews, or other relevant documentation about AI systems, while protecting privacy, government and national security, and intellectual property;
  7. Explaining automated decisions to people impacted by them and providing them with opportunities to contest decisions and seek remedies, which could involve human review, where applicable;
  8. Encouraging the creation of controlled test environments to foster responsible research and innovation;
  9. Establishing oversight mechanisms for AI systems to ensure accountability and foster effective monitoring and governance throughout the lifecycle;
  10. Assessing and mitigating the environmental impacts of the training and use of AI systems, and where appropriate opting for zero-emissions systems;
  11. Providing training to civil servants developing or using AI so that they understand legal, ethical, and operational issues, including privacy and security, and are equipped to adopt AI systems responsibly; and
  12. Creating processes for inclusive and meaningful public engagement on AI policies or projects with a view to raising awareness, building trust, and addressing digital divides.

AI procurement for a digital world

AI procurement for a digital world - Transcript

The Government of Canada is starting to use artificial intelligence to inform decision-making, be more efficient, and provide better services to Canadians.

While AI is a powerful tool, it must be used responsibly. We have to eliminate bias, be open about how AI is informing decisions, and ensure potential benefits are weighed against unintended results. That’s why we build responsible use into everything we do, including our first AI procurement process.

Here’s how the process works:

  1. First, interested suppliers must apply and demonstrate that they can deliver AI solutions in a responsible manner. 
  2. The Government will then present them with challenges.
  3. Interested bidders will need to specify which challenges they’d like to work on.
  4. From this group, the Government will pick three suppliers and randomly select another seven. These suppliers will be eligible to submit proposals.
  5. Finally, the Government will evaluate bids and award contracts.
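The selection step above (three suppliers picked, another seven chosen at random) can be sketched as follows. This is an illustrative simulation only: the supplier names and the ordering criterion are hypothetical, since the transcript does not say how the three direct picks are made.

```python
import random

def select_suppliers(ranked_bidders, n_picked=3, n_random=7, seed=None):
    """Simulate the eligibility step of the AI procurement process.

    ranked_bidders: supplier names ordered by evaluation rank (the
    ranking criterion is an assumption; the source only says the
    Government "picks" three and randomly selects another seven).
    """
    rng = random.Random(seed)
    picked = ranked_bidders[:n_picked]           # three chosen directly
    remaining = ranked_bidders[n_picked:]
    n_random = min(n_random, len(remaining))     # guard for small pools
    randomly_selected = rng.sample(remaining, n_random)
    return picked + randomly_selected

# Example: 15 hypothetical bidders yield 10 eligible suppliers
bidders = [f"supplier_{i}" for i in range(15)]
eligible = select_suppliers(bidders, seed=42)
print(len(eligible))  # 10
```

The random component is the notable design choice: it widens access for small and medium-sized enterprises beyond the top-ranked applicants.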

This simpler, faster process will not only facilitate collaboration between Government and small and medium-sized enterprises, it will also ensure that we build ethics and responsibility into projects from start to finish.

Agile, transparent, collaborative: that’s procurement for a digital world. Find out more at ca-ciconline.com.

Algorithmic Impact Assessment

Algorithmic Impact Assessment - Transcript

Artificial intelligence can help us do great things, like preserving Indigenous languages or helping Canadians do their taxes and access benefits. However, as with any new disruptive technology, we need to ensure it is used correctly, with the best interests of Canadians in mind.

That’s why rooting out bias and inequality in AI design has become a top priority. We need to shape how AI is built, monitored and governed from the get-go. The Government of Canada’s Algorithmic Impact Assessment (AIA) aims to do just that.

The AIA gives designers a tool to evaluate AI solutions from an ethical and human perspective, so that they are built in a responsible and transparent way. For example, the AIA can ensure economic interests are balanced against environmental sustainability.

The AIA also includes ways to measure potential impacts to the public, and outlines appropriate courses of action, like behavioral monitoring and algorithm assessments.
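In practice, the AIA is a questionnaire that produces an impact score, offset by a mitigation score, and maps the result to one of four impact levels. The sketch below illustrates that shape of computation only; the thresholds and the 15% mitigation deduction are assumptions for illustration, not the official instrument.

```python
def impact_level(raw_score, max_raw, mitigation_score, max_mitigation):
    """Illustrative AIA-style scoring (all thresholds are assumptions,
    not the official Algorithmic Impact Assessment values).

    A strong mitigation score discounts the raw impact score, and the
    resulting percentage maps to one of four impact levels.
    """
    score = raw_score
    if max_mitigation and mitigation_score / max_mitigation >= 0.80:
        score *= 0.85                      # assumed 15% deduction
    pct = 100 * score / max_raw
    if pct <= 25:
        return "Level I"
    elif pct <= 50:
        return "Level II"
    elif pct <= 75:
        return "Level III"
    return "Level IV"

print(impact_level(60, 100, 9, 10))  # strong mitigation: Level III
```

Higher impact levels trigger stricter requirements under the Directive on Automated Decision-Making, such as peer review and human intervention in decisions.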

Visit ca-ciconline.com/GCdigital to find out how Canada is leading the way in responsible and ethical use of AI.

Our timeline

  • Endorsement of the updated Digital Nations Shared Approach to the Responsible Use of Artificial Intelligence in Government (November 13, 2023)

    • The Shared Approach was initially developed in 2018 by member countries including Canada
    • The updated Shared Approach re-confirms the collective commitment to develop and implement approaches to AI governance in the public sector that reflect the core principles of transparency, accountability, and procedural fairness
  • Release of the Guide on the use of generative artificial intelligence (September 6, 2023)

    • Provides guidance to federal institutions in their use of generative AI
    • Includes an overview of generative AI, identifies limitations and concerns about its use, puts forward “FASTER” principles for its responsible use, and includes policy considerations and best practices
  • Updates to the Directive on Automated Decision-Making (April 25, 2023)

    • The Directive was amended following the third review of the instrument
    • Key changes include an expanded scope and new measures for explanation, bias testing, data governance, GBA+, and peer review
    • The Algorithmic Impact Assessment was updated to support changes to the directive. This includes new questions concerning the reasons for automation and impacts on persons with disabilities
  • Stakeholder engagement on the third review of the Directive on Automated Decision-Making (April – November 2022)

    • Engagement with over 30 stakeholder groups, including in federal institutions, universities, civil society organizations, governments in other jurisdictions, and international organizations
    • Engagement included roundtables with the GC Advisory Council on AI, Canadian Human Rights Commission, Digital Governance Council, bargaining agents, networks for equity-seeking federal employees, and representatives from relevant GC functional communities
  • Updates to the Directive on Automated Decision-Making (April 1, 2021)

    • The Directive was amended based on feedback received from stakeholders
  • Compliance with the Directive on Automated Decision-Making (April 1, 2020)

    • All new automated decision systems must now comply with the Directive
  • Launch of the Directive on Automated Decision-Making (March 4, 2019)

    • Official launch of the Directive during the Second AI Day
  • Lunch and Learn with GC Entrepreneurs group (October 12, 2018)

  • Consultations in Toronto and Montreal on the Directive and Algorithmic Impact Assessment

    • External stakeholders included UQAM, CIFAR, Osgoode Law, and AI Impact Alliance (AiiA)
  • Consultation with the Office of the Privacy Commissioner of Canada (September 18, 2018)

  • Justice AI taskforce session (June 12, 2018)

    • Justice AI taskforce created to provide input and direction on legal issues
    • 25 representatives including from human rights, IP, commercial, IRCC, ESDC, and TBS
  • AI Day (May 28, 2018)

    • 120 participants from industry, academia, and government
  • AI policy working group kick-off (February 16, 2018)

    • Hosted by GAC to develop departmental policies on AI
  • Policy Horizons Directive Design Session (February 13, 2018)

    • Interdepartmental workshop to talk about the development of the Directive
    • Participants included TBS, IRCC, ISED, and ESDC
  • Kick-off session with Departments (January 22, 2018)

    • Organized workshop with over 100 participants
    • Participants included TBS, IRCC, DFO, AAFC, CBSA, Funding Councils, GAC, ESDC, NRC, PCH, HC, NRCAN, Canada Council for the Arts, CRA, ISED, Policy Horizons, and SSC
  • Drafting of the Directive (October 2017 – March 2019)

    • TBS binding policy focused on the automation of decisions
  • Drafting of the AI whitepaper (October 2016 – October 2017)

    • Developed in the open with several academic, civil society, and government subject matter experts
