The Risks of Relying on AI: Lessons from Air Canada’s Chatbot Debacle

In the era of artificial intelligence (AI), companies are increasingly relying on automated systems to streamline operations and enhance customer service. However, a recent incident involving Air Canada’s AI-powered chatbot serves as a stark reminder of the risks associated with relying solely on AI technology, particularly when it comes to customer interactions and policy enforcement.

The Incident: A Broken Promise and Legal Battle

A Canadian customer recently found themselves in a frustrating predicament when they sought clarification on Air Canada’s bereavement rates following the death of a family member. The customer consulted the company’s AI-powered chatbot for guidance and was advised that they could book a full-fare ticket immediately and submit it for a reduced bereavement rate within 90 days of issuance.

Relying on the chatbot’s advice, the customer booked a ticket and later requested the promised partial refund, only to discover that Air Canada’s actual policy did not allow bereavement rates to be claimed retroactively. Air Canada refused to honor the chatbot’s promise, offering the customer a $200 credit for future travel but declining to issue a refund.

Unsatisfied with the outcome, the customer took the matter to small claims court, arguing that Air Canada should be held accountable for the chatbot’s misleading advice. In an unprecedented move, Air Canada contended that the chatbot was a separate legal entity responsible for its own actions, marking a notable attempt to evade liability for AI-generated interactions.

Legal Outcome and Implications

Following a legal battle, the court ruled in favor of the customer, compelling Air Canada to pay $650.88 in damages, reflecting the difference between the fare paid and the promised bereavement fare, and to cover the customer’s court fees. This landmark decision underscores the accountability of companies for the actions of their AI systems, setting a precedent for future cases involving AI-driven customer interactions.

Mitigating AI Risks

The Air Canada chatbot debacle highlights several key lessons for companies leveraging AI technology:

Transparency and Accuracy: Companies must ensure that AI-powered systems provide accurate and transparent information to customers. Misleading or erroneous guidance can lead to legal disputes and reputational damage.

Policy Alignment: AI systems should align with company policies and procedures to avoid discrepancies between automated responses and official guidelines. Regular audits and updates are essential to maintain consistency and compliance.

Legal Liability: Companies cannot absolve themselves of responsibility for AI-generated interactions. Legal frameworks must evolve to address the accountability of companies for the actions of their AI systems, clarifying liability and mitigating legal risks.

Continuous Improvement: The Air Canada case underscores the importance of ongoing monitoring and improvement of AI systems. Companies should invest in training data, algorithmic refinement, and quality assurance measures to enhance the accuracy and reliability of AI-driven interactions.
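To make the policy-alignment and quality-assurance lessons concrete, here is a minimal sketch of an automated audit that replays known policy questions against a chatbot and flags answers that drift from approved policy. The `get_chatbot_reply` stub and the phrase-based test cases are hypothetical illustrations, not any airline’s actual system; a production audit would likely use richer semantic comparison.

```python
# Minimal sketch: regression-test a support chatbot against approved policy.
# Phrase matching is deliberately naive; real audits would use semantic checks.

POLICY_CASES = [
    # (customer question, phrase the approved policy answer must contain)
    ("Can I claim a bereavement fare after I book?",
     "cannot be claimed retroactively"),
    ("When must I request a bereavement refund?",
     "before travel"),
]

def get_chatbot_reply(question: str) -> str:
    # Placeholder: replace with a call to your own chatbot endpoint.
    return ("Bereavement fares cannot be claimed retroactively; "
            "please request them before travel.")

def audit_policy_alignment() -> list[str]:
    """Return the questions whose answers contradict approved policy."""
    failures = []
    for question, required_phrase in POLICY_CASES:
        reply = get_chatbot_reply(question)
        if required_phrase.lower() not in reply.lower():
            failures.append(question)
    return failures

if __name__ == "__main__":
    for q in audit_policy_alignment():
        print(f"Policy mismatch on: {q}")
```

Run on a schedule, a check like this surfaces chatbot drift from official policy before a customer, or a tribunal, does.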

The Air Canada chatbot incident serves as a cautionary tale for companies navigating the complexities of AI integration in public-facing systems. While AI-powered chatbots offer tremendous potential for efficiency and innovation, companies must approach AI implementation with caution and accountability. By prioritizing transparency, policy alignment, and legal compliance, companies can mitigate the risks associated with AI-driven interactions and uphold their commitment to customer satisfaction and integrity.

If you are beginning your journey of AI governance, Truyo is here to help. Visit https://truyo.ai to learn more about our AI governance platform that will help you identify AI risk, or click here to learn about our 5 Steps to Defensible AI Governance workshop, where Truyo AI experts will walk you through designing an AI governance strategy. 

US States Dive into AI Regulation with a Frenzy of Legislative Action

The landscape of artificial intelligence and automated decision-making regulation has been nothing short of dynamic, with 15 states and 24 bills already making significant strides in shaping the future regulation of this rapidly evolving technology. The most important lesson to extract from all this legislative attention is that states care about governing AI. Everyone knows AI has massive potential for productivity and will drive global innovation, but the regulatory attention shows there are strong concerns about how to keep it safe. 

We read through current and proposed legislation by US states, so you don’t have to! All regulations require, to varying degrees, the assessment of AI usage for potential risk and disclosure of how your company is using AI. Below are the common fundamental elements in each state’s current or draft legislation that may affect your company. 

Common Requirements in Current & Proposed AI Legislation

Commonalities in All Draft Law Requirements:

  • Impact a private entity’s use of artificial intelligence (AI).
  • Aim to address potential harms resulting from AI, such as algorithmic discrimination and lack of transparency.
  • Require some form of disclosure or assessment of the AI system used.
  • Respect existing privacy and security laws and controls.

Many Draft Laws Also:

  • Define certain terms, such as “automated decision tool” (ADT) and “high-risk AI system.”
  • Require impact assessments of AI systems before deployment.
  • Prohibit AI systems that result in algorithmic discrimination.
  • Provide individuals with rights related to the use of AI systems, such as the right to know when an AI system is being used and the right to opt out of its use.
  • Establish licensing or regulatory frameworks for AI systems.
  • Impose restrictions on the use of specific AI technologies, such as facial recognition.
  • Create new government bodies to oversee the development and use of AI.

Current and Proposed AI Legislation by State

California:

  • In addition to the CPPA’s draft ADMT regulations, which have the most impact (covered later in this issue), California has several other bills pending. 
  • AB331: Requires impact assessments, disclosures, governance programs, and opt-out rights for ADTs.
  • SB1047: Requires safety determinations and certifications for foundation/frontier models.

Connecticut:

  • SB 2: Aims to protect the public from harmful unintended consequences of AI (details not yet available).

Florida:

  • HB 1459: Creates transparency obligations for AI content and technology developers.

Hawaii:

  • HB1607/SB2524: Prohibits discriminatory ADTs, requires notice and explanations, and mandates annual audits.
  • HB2176/SB2572: Establishes temporary working group and permanent office for AI regulation.

Illinois:

  • HB5116: Requires impact assessments, disclosures, governance programs, opt-out rights, and public policies for ADTs.
  • HB3773: Prohibits employers that use predictive data analytics from considering race, or zip code as a proxy for race, in employment decisions.

Maryland:

  • HB1202: Prohibits employers from using certain facial recognition services during job interviews without consent.

Massachusetts:

  • SB2539: Creates automated decision-making control board and imposes various requirements on ADTs.

New Jersey:

  • S1588: Regulates use of automated employment decision tools (AEDTs) in hiring decisions, requiring bias audits and disclosures.

New York:

  • A8129/S8209: Enacts an “artificial intelligence bill of rights.”
  • A8195: Establishes licensing regime and other requirements for “high-risk advanced artificial intelligence systems.”
  • A7859, S5641A, S7623A: Address ADTs in employment decisions, with requirements like bias audits and impact assessments.

Oklahoma:

  • HB3835: Prohibits discriminatory ADTs, requires impact assessments and developer policies, and provides a private right of action.
  • HB3453: Grants individuals rights related to AI interaction and data use.
  • HB3573: Requires insurers to disclose and certify AI use in utilization review processes.

Rhode Island:

  • HB7521: Similar to Oklahoma’s HB3835, with additional requirements like user notification and opt-out options.

Utah:

  • SB149: Establishes liability for generative AI, creates AI policy office, and implements a “regulatory mitigation” licensing scheme.

Vermont:

  • H.710: Applies to developers and deployers of “high-risk” AI systems, with various disclosure, assessment, and rights requirements.
  • H.711: Applies to developers and deployers of “inherently dangerous” AI systems, with stricter requirements and testing/evaluation obligations.

Virginia:

  • HB 747: Imposes disclosure and operating standards for developers and deployers of “high-risk” AI systems, with many exemptions.

Washington:

  • HB 1951: Prohibits discriminatory ADTs, requires impact assessments, and imposes documentation and policy requirements on developers.

Click here to subscribe to our AI Governance Blog so you don’t miss the latest in the world of AI regulation and compliance.

CPPA AI Rules Cast Wide Net for Automated Decisionmaking Regulation

At the end of 2023, the California Privacy Protection Agency (CPPA) unveiled draft regulations aimed at automated decision-making technology (ADMT), including artificial intelligence (AI), to bolster consumer protections in the state. This step underscores California’s commitment to individual privacy rights and represents a critical development in the ever-evolving landscape of data governance and AI ethics. 

At the most recent CPPA meeting, the Agency, in what can be described as a spirited discussion, considered the ramifications of the draft ADMT regulations, including some publicly contentious elements in the draft rules. Seemingly most contentious is the requirement to obtain consent before using personal information to train automated decisionmaking technology, a provision that may be overturned in the long run. The bulk of the rules are, as expected, consumer-centric, keeping notice and opt-out rights at the forefront. Let’s take a look at the key elements of the draft rules as they stand today. 

Defining the Scope

The draft regulations are intentionally general, encompassing any system, software, or process that processes personal information and uses computation to make or facilitate decisions. This covers various technologies, including those derived from machine learning, statistics, or other data-processing methods. Notably broad is the inclusion of profiling, defined as any form of automated processing used to evaluate personal aspects of individuals, such as their behaviors, whereabouts, economic situation, or health.

Requirements in the Original CPPA Draft Rules

Pre-Use Notice: Businesses employing ADMT must provide consumers with advance notice detailing the technology’s purpose, decision-making process, and their rights to opt out and access information about its use. This notice must be presented in plain language and include comprehensive information on the logic, parameters, and testing of the ADMT for validity, reliability, and fairness.

Right to Opt Out: Consumers have the right to opt out of decisions made by ADMT that produce legal or similarly significant effects, such as employment opportunities or compensation. Businesses must offer multiple opt-out methods tailored to their consumer interactions, ensuring accessibility and ease of use.

Updates to the CPPA Draft Rules

Since November, the CPPA has adjusted the draft rules, as is the usual trajectory for regulations in formation. We anticipate further modifications, as all draft rules are subject to change pending public feedback and the formal rulemaking process. Here is the current state of the notice provision and opt-out requirement. 

Requires providing specialized notice, plus opt-out and access rights, for uses of ADMT that: 

  1. make decisions that produce legal or similarly significant effects 
  2. profile workers, job applicants, and students 
  3. profile individuals “in a publicly accessible place” 
  4. profile consumers in connection with “behavioral advertising”
  5. profile anyone under 16 years old for any purpose 
  6. use personal information to train ADMT 

The pre-use notice must inform consumers of the right to opt out and provide a “plain language” explanation of the ADMT’s logic, its key parameters, and whether it has been tested (with results) for “validity, reliability, and fairness.”

Rights:

  1. Opt-out: requires at least two methodologies (not just cookies); there is disagreement over whether this extends to employees
  2. Access: information on ADMT use for a specific consumer, with a “plain language” explanation of its purpose, the consumer-specific output, and any impacted decisions (pre- or post-use), including the full range of potential outputs
  3. All additional CPRA rights 

Adverse decisions or actions require further explanation and a post-use notice, a requirement unique to the CPPA draft rules.
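For teams trying to operationalize this scope, here is a hypothetical sketch that encodes the six notice/opt-out triggers above as a simple screening check for an AI inventory. The `AdmtUseCase` structure and its field names are our own illustration; the draft rules define the authoritative scope.

```python
# Illustrative screening check for the CPPA draft rules' six ADMT triggers.
from dataclasses import dataclass, fields

@dataclass
class AdmtUseCase:
    legal_or_significant_effect: bool = False   # trigger 1
    profiles_workers_or_students: bool = False  # trigger 2
    profiles_in_public_places: bool = False     # trigger 3
    behavioral_advertising: bool = False        # trigger 4
    subject_under_16: bool = False              # trigger 5
    trains_admt_on_personal_info: bool = False  # trigger 6

def requires_notice_and_optout(use_case: AdmtUseCase) -> bool:
    """True if any of the six draft-rule triggers applies."""
    return any(getattr(use_case, f.name) for f in fields(use_case))

# Example: a resume-screening tool that profiles job applicants
print(requires_notice_and_optout(AdmtUseCase(profiles_workers_or_students=True)))  # True
```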

Implications for Employers

Employers in California should pay close attention to these draft regulations, as they significantly impact employment practices and policies. Job applicants and employees must be informed if employment decisions are based on ADMT, and they retain the rights to access information and opt out of profiling activities. Additionally, businesses must conduct risk assessments to mitigate privacy risks associated with ADMT usage.

Looking Ahead

While these draft regulations signal California’s proactive approach to regulating ADMT and protecting consumer privacy, we may see an evolution as stakeholders engage in the ongoing dialogue to shape the final regulations. We will keep you apprised of additional information about the CPPA ADMT draft rules as it becomes available. 

EU AI Act Introduces Unique Tiered System for Risks

With the full text of the EU AI Act made public, Truyo President Dan Clarke read through the Act in its entirety to identify key elements that will be crucial to compliance for organizations in scope. The Act includes the conventional components of transparency, privacy, education, security, non-discrimination, and risk assessment. 

Where it differs from current and proposed AI legislation, according to Clarke, is in the tiered system and the different obligations for each level based on relative risk. “This comprehensive act applies to all companies utilizing or offering systems based on AI within the EU, regardless of origination or size. It is remarkably consistent with the White House executive order and subsequent blueprint for an AI bill of rights, including emphasis on safety and protection against discrimination/bias.”

Clarke posits, “From a commercial perspective, we expect the most common high-risk AI systems will be centered around education, security (facial recognition), and the employment/recruiting function, especially for multinationals based outside the EU. Unacceptable risk is centered around discrimination and bias, especially via subliminal or similar techniques applied to vulnerable or disadvantaged groups.”

Introducing a Tiered System for Unacceptable and High AI Risk

The tiered system includes unacceptable and high risk. The unacceptable risk tier effectively bans social scoring and systems employing subliminal techniques beyond an individual’s consciousness to distort behavior, causing potential physical or psychological harm. The law also forbids the use of AI systems exploiting susceptibilities associated with age or physical or mental disability, leading to harm for individuals within those specific groups.

The next tier the Act defines is high risk. For companies engaged with high-risk systems, it prescribes the following requirements (a minimal tracking sketch follows the list):

  • Implementation of a risk management system
  • Data quality analysis and data governance program
  • Technical documentation
  • Record-keeping
  • Transparency and provisions of information to users
  • Human oversight
  • Accuracy, robustness, and cybersecurity

For high-risk AI systems, companies must provide users with comprehensive information about the system’s ownership, contact details, characteristics, limitations, performance metrics, and potential risks. This includes specifications for input data, changes to the system, human oversight measures, and expected lifetime with maintenance details. The development of such AI systems, especially those using model training, demands strict adherence to guidelines for quality datasets, considering design choices, biases, and specific user characteristics. 

This demand for greater transparency and human oversight aims to enable users to understand and utilize outputs appropriately, with technical solutions required to address exposures like data poisoning and adversarial examples. “This regulation is a significant step, and I think most importantly launches terms like ‘responsible AI’ and ‘trustworthy AI’ to the front of our discussion. This is the true beginning of regulated AI governance,” says Clarke.

Ethical Principles Outlined in the EU AI Act

The EU AI Act emphasizes several ethical principles that align with its objectives and regulations. These principles are crucial for ensuring the responsible development, deployment, and use of AI systems. The key ethical principles compatible with the EU AI Act include:

  • Respect for Human Autonomy: Ensuring AI systems support human decision-making without undermining human agency and the ability to make choices freely and independently.
  • Prevention of Harm: Prioritizing the safety and security of AI systems to prevent physical, psychological, and financial harm to individuals and society.
  • Fairness and Non-Discrimination: Designing and operating AI systems in a way that prevents bias and discrimination, ensuring equitable treatment and outcomes for all users.
  • Transparency and Explainability: AI systems should be transparent, with decisions made by these systems being understandable and explainable to users and affected parties.
  • Privacy and Data Governance: Upholding high standards of data protection and privacy, ensuring the confidentiality and integrity of personal data processed by AI systems.
  • Societal and Environmental Well-being: Ensuring the development and use of AI contributes positively to societal progress and environmental sustainability.
  • Accountability: Establishing clear responsibilities for AI system developers, deployers, and operators to ensure they can be held accountable for the functioning and impacts of these systems.

These core principles reflect the EU AI Act’s commitment to fostering an AI ecosystem that is safe, trustworthy, and respects the fundamental rights and values of consumers. In next week’s blog we’ll outline best practices for conducting AI risk assessments in compliance with the EU AI Act. Click here to subscribe to Truyo’s AI Newsletter to get the latest on AI governance recommendations and regulatory updates. 

Cantwell Proposes AI Legislation to Create a Blueprint for Innovation and Security

In 2024, a surge of global AI legislation is imminent, with the United States poised to follow the European Union’s lead by implementing comprehensive nationwide rules and guidelines. Senate Commerce Committee Chair Maria Cantwell is gearing up to unleash a wave of groundbreaking AI legislation, marking the first comprehensive initiative in Congress to address the multifaceted challenges posed by the currently unregulated technology.

While the details of the proposed legislation aren’t available, we can rely on current legislation to give us an outline of what’s coming. The key elements found in AI regulation are universal to some extent, always including components around bias, transparency, training, and respecting privacy – all issues you should be concerned with even absent legislation. Cantwell’s goal is to put up guardrails around AI usage that reduce risk without stifling innovation or putting the US behind the adoption pace of other countries.

The legislative push by Cantwell has received the support of Senate Majority Leader Chuck Schumer, who has assigned multiple committee chairs to lead on introducing and debating major AI legislation. The bills will be rolled out in a staggered fashion over the next few months, reflecting the urgency to build foundational AI guidelines that foster innovation and competition in the global arena. It’s an important next step after the White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence that set the stage for US legislation around AI but lacks enforcement elements.

Crucial Focus of AI Legislation

Cantwell’s comprehensive AI legislation, expected to be introduced in the coming weeks, will encompass various aspects of AI policy, including regulation of generative AI tools, innovation, and issues such as deepfakes, algorithmic bias, digital privacy, national security, and AI competitiveness. Additional areas of concern include AI research and development, consumer fraud, and the impact of AI on displaced workers.

Cantwell’s approach aims to build upon existing legislative efforts and collaborations, aligning with the senator’s commitment to making the United States a leader in AI policy with a budget to match. It is estimated that the legislation could include a spending package of $8 billion to $10 billion dedicated to AI policymaking.

Can the US Reach a Bipartisan Agreement?

Bipartisan efforts in privacy have hit major roadblocks given the ideological differences between Democrats and Republicans. Can an agreement be reached when it comes to AI? Overcoming these challenges will be crucial for the success of the comprehensive AI bills. Experts involved in Schumer’s bipartisan AI Insight Forums emphasize the importance of maintaining a narrow focus on key issues with existing bipartisan agreements. This approach, they argue, increases the likelihood of bipartisan support and passage. Suggestions include prioritizing national security, innovation leadership, and U.S. competitiveness while avoiding excessive government spending requests.

Key industry players, including the Software & Information Industry Association, representing major tech companies like Adobe, Apple, and Google, stress the need for legislative efforts to align with the bipartisan spirit of Schumer’s forums. They emphasize the importance of promoting safe and trustworthy AI, mitigating potential harms, and establishing a nationwide standard for AI through comprehensive federal privacy legislation.

As the legislative landscape unfolds, the introduction of Cantwell’s comprehensive AI bills is anticipated to play a pivotal role in shaping the future of AI policy for the United States. The series reflects a concerted effort to address the complexities of AI regulation, innovation, and societal impact, setting the stage for a new era of AI governance that will inform US business practices.

Click here to learn about the first AI Governance Platform by Truyo helping organizations navigate AI risk identification and remediation or email hello@truyo.com for more information.

Navigating AI Governance: CCPA Regulations and Biden’s Executive Order – Impact, Rights, and Compliance Unveiled

We knew it was coming, and now we find ourselves dissecting two AI directives released less than 30 days apart! With the release of the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence and the California Privacy Protection Agency (CPPA) draft Automated Decisionmaking Technology Regulations, the onslaught of AI-related governance and compliance rules is upon us.

The CPPA’s release is much more prescriptive than the Executive Order, which called upon agencies and government entities to develop parameters for ethical AI use. The draft regulations by the CPPA are broken down into three critical categories. Here’s how they compare to the content of Biden’s Executive Order.

Right to Opt-Out and Pre-Use Notice

As outlined in the CCPA draft regulations, a company using automated decision-making technology must give customers a “Pre-use Notice.” This notice should explain how the business uses AI technology and communicate to consumers that they have the right to choose not to participate or to get information about how the technology is used. Following that notice, consumers must be given the option to opt out of all automated decision-making practices. It’s a much more comprehensive opt-out than in past privacy legislation.

The Executive Order on AI doesn’t spell out an opt-out requirement or notice obligations; however, it repeatedly states that committees will be formed to address these issues, and it calls upon Congress to develop bipartisan data privacy legislation to protect American citizens’ data.

Get Information On Truyo’s AI Governance Platform – Identify AI Footprints & Train Employees on Ethical and Compliant Usage

What does this mean for your organization? Truyo President Dan Clarke weighs in on what is arguably the most important, yet most complicated, section of this and any future legislation governing the use of AI. Clarke says, “In the privacy community, opt-in and opt-out signals are old hat, but the CPPA’s pre-use notice is significant and poses potential complications for organizations. How do you get this notice to your consumers before automated decision-making begins?”

Clarke goes on to say, “First and foremost, you need to understand where and how AI is being used. It may sound self-serving since that’s exactly what Truyo’s AI Governance Platform does, but identification of AI usage is vital. To offer a notice and an opt-out to your consumers you have to know what exactly they’re opting out of to meet requirements compliantly. Without that knowledge, you are flying blind into a governance nightmare. Regarding opt-out, you absolutely can and absolutely should go through the identity verification of someone performing a request to access – similar to a right to know or right to delete request under CPRA.”

Consumer Rights

Consumer rights are top of mind for the CPPA, as indicated by the previously released operating rules under CPRA, and those same considerations continue to be apparent in the Automated Decisionmaking Technology draft regulations. The proposed regulations outline the right for consumers to request access to information about the business’s use of automated decision-making. Much like a Data Subject Access Request, organizations will have to accept incoming requests and act on them in a timely manner, which has yet to be defined.

Again, the Executive Order does not spell out what rights consumers will have, but it is evident that the upcoming legislation Biden called for would include rights afforded to American citizens to foster the transparent and ethical use of automated decision-making technology. From the Executive Order: “To better protect Americans’ privacy, including from the risks posed by AI, the President calls on Congress to pass bipartisan data privacy legislation to protect all Americans, especially kids, and directs the following actions…” Without recommending specific and detailed compliance requirements, Biden made a statement that those would be forthcoming in the form of federal legislation and actions by committees created by this Order.

Protecting American Minors

Article 6 of the CCPA draft regulations outlines special parameters for consumers under 16 years of age, who must opt in to profiling rather than merely being offered an opt-out. For those under 13, businesses using profiling for behavioral advertising must establish a reasonable method for a parent or guardian to opt in on behalf of their minor. Even if there’s existing parental consent as per COPPA, this additional agreement for profiling is required.
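A minimal sketch of this age-based consent logic, using the thresholds described above (under-16 opt-in, under-13 parental opt-in); the function and its return labels are our own illustration, not regulatory text.

```python
# Sketch: consent posture for behavioral-advertising profiling by age.
def required_consent(age: int) -> str:
    if age < 13:
        return "opt-in by parent or guardian (in addition to COPPA consent)"
    if age < 16:
        return "opt-in by the consumer"
    return "opt-out offered to the consumer"

for age in (10, 14, 30):
    print(age, "->", required_consent(age))
```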

Biden’s intentional mention of kids, or minors as typically designated in privacy legislation, in the Executive Order foreshadows specific callouts in future federal legislation to maintain high-level regulation and security of the data of younger citizens.

Exemptions

Exemptions are less significant than they appear. At first glance there seem to be several, such as those allowing organizations to skip offering the opt-out mechanism, but the available exceptions are few and far between and don’t give most businesses a use case for avoiding the opt-out. Even government agencies will likely be within the scope of the CPPA rules.

What’s Next for the CPPA’s Proposed AI Regulations?

We expect further clarification from the CPPA on what qualifies as an acceptable opt-out mechanism. Truyo President Dan Clarke says, “I think iterations of these regulations from the CPPA are likely to add strength to the argument that this opt-out must be explicit to automated decision-making and you can’t simply employ an umbrella opt-out. A consumer complaint mechanism may also be released, giving consumers an option to submit grievances to the CPPA for consideration.”

There is a mention of a 15-day timeline assigned to the opt-out, but we may see this revised for more clarity, potentially processing the opt-out immediately upon first interaction. Clarke says, “Technically speaking, it makes sense for the opt-out to be processed immediately. You can’t retain data once someone opts out, but how do you operationalize that? It’s a huge burden on companies looking to comply with these proposed regulations.”

In analyzing what this means for American companies, Dan Clarke had this to say, “Maybe the biggest takeaway for a business owner is you may have to require your entire organization to carefully disclose the purpose, usage, and methodology with which they’re leveraging AI, especially in automated decision-making. This is no easy task, but under these proposed regulations and other current laws, there’s no way around it. You might, and probably should, create a ROPA-like document trail demonstrating what you’ve done to identify automated decision-making and how you’ve required your organization to document its usage and subsequent compliance efforts.”

AI-Based Legislation Could Spur Movement on the Privacy Front

We’ve seen Congress come to a standstill on federal privacy legislation in the past, leaving it up to the states to fill the gap. Dan Clarke is hopeful, saying, “I think we’re going to see movement in privacy protection as a result of Biden’s willingness to publicly speak on the privacy implications of AI. This has reinvigorated the federal privacy discussion, and I anticipate, either through Mag-Moss rulemaking or comprehensive legislation, movement on that front sooner rather than later. I think one of the quickest things we’re going to see is a response from NIST and a cybersecurity program for AI compliance. I was told by a Biden administration insider that there are deadlines for new elements outlined in the Executive Order, some as short as 45 days, so we should see material AI-related compliance components come out in the next 90 days.”

Diving Into President Biden’s AI Executive Order

On October 30th, President Joe Biden unveiled the federal government’s most comprehensive initiative regarding artificial intelligence (AI) to date. The Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence places a significant emphasis on establishing standards for AI privacy, security, and safety, as indicated in a preliminary fact sheet that preceded the official order. It also aims to combat discrimination within AI systems while safeguarding the rights of workers when AI technology is deployed in the workplace.

Bruce Reed, the White House Deputy Chief of Staff, expressed that Biden’s actions represent the most substantial measures any government worldwide has taken to ensure AI’s safety, security, and trust. He underscored that these steps are part of an aggressive strategy to harness the benefits of AI while mitigating potential risks.

Learn How Truyo’s AI Governance Toolset Scans to Locate AI Usage in Your Organization and Populates a Categorized and Ranked Risk Dashboard

President Biden had pledged to address AI-related issues in August, leaving privacy and cybersecurity professionals wondering what was coming down the pike. In the lead-up to this executive order, Biden secured voluntary commitments from several companies to adhere to safety metrics while awaiting more formal regulations from the U.S. Congress. Additionally, the White House had previously unveiled a Blueprint for an AI Bill of Rights.

Prior to signing the executive order, President Biden spoke about the significant responsibility the federal government faces in establishing guidelines for a technology that can take various forms and is rapidly evolving. He described this moment in history as an “inflection point” where current decisions will have far-reaching consequences, which could be both perilous and transformative, including the potential to combat diseases like cancer and address climate change.

About the Order

The executive order’s primary focus is the integration of privacy and AI, highlighting the essential need for safeguards. The order explicitly calls on Congress to advance comprehensive privacy legislation, echoing the sentiments of federal lawmakers who have recently explored the connection between privacy legislation and AI regulation.

Under the order, federal agencies are directed to develop techniques to protect individuals’ privacy, assess the effectiveness of existing privacy protections, and strike a balance between data accessibility for training purposes and data security and anonymity.

The order also tasks the National Science Foundation with collaborating on stronger cryptography protections to safeguard privacy and national security, particularly in the face of advancing quantum computing. Federal agencies are instructed to assess their data collection and usage practices, including data obtained from data brokers, with the goal of ensuring data security.

The Future of Privacy Forum supports the call for bipartisan privacy legislation and views the executive order as highly comprehensive, affecting both the government and private sector. The Center for Democracy and Technology also applauds the order for its comprehensive approach to responsible AI development and governance.

To ensure safety, the order utilizes the Defense Production Act to compel developers to share safety test results with the government before releasing AI technologies to the public. It also establishes standards for these tests and instructs agencies like the Departments of Energy and Homeland Security to evaluate AI’s potential risks to critical infrastructure. The order further charges the National Security Council with developing a national memorandum on AI use in the military and intelligence community.

The order emphasizes the need for accountability among companies as scrutiny increases regarding how generative AI algorithms create content. To maintain transparency, it calls for independent testing of AI safety and the establishment of new standards for collaboration between the government and private sector.

The executive order builds upon Biden’s Blueprint for an AI Bill of Rights by requiring agencies to issue guidance on the use of AI in sectors like housing, federal benefits programs, and contracting to reduce discrimination. This move follows studies demonstrating that algorithms used in areas like banking can harm low-income borrowers.

The Department of Justice is tasked with training civil rights offices on prosecuting civil rights violations related to AI and setting standards for AI usage in the criminal justice system, ensuring equitable practices.

The order also prioritizes protecting workers’ ability to collectively bargain, addressing job displacement due to AI, and upholding labor standards. It encourages companies to distinguish AI-generated content from human-created content by adding watermarks.

Furthermore, the order underscores the importance of promoting U.S. competitiveness in AI. It calls for the establishment of a National AI Research Resource to enhance research tools’ accessibility and provide technical assistance to small businesses. Streamlining the visa process for AI-skilled individuals to study and work in the U.S. is another key feature.

While President Biden’s executive order represents a significant step forward, it also acknowledges that there are limits to what the White House can accomplish independently. With mounting pressure and rapid global developments in AI, Congress faces the challenge of keeping pace with the evolving landscape.

Artificial Intelligence: The Latest Topic to Make Privacy Professionals Sweat

Omar N. Bradley said, “If we continue to develop our technology without wisdom or prudence, our servant may prove to be our executioner.” When he said that, he couldn’t foresee artificial intelligence one day writing pages of code in mere seconds and making our wildest imaginations come true through AI-designed art. All jokes aside, in the last 12 months we’ve seen an onslaught of AI tools flood the market, from ChatGPT to Bard and many more. While AI has enhanced and streamlined many business operations, from hiring employees and developing marketing collateral to designing websites, it has also brought a slew of privacy concerns that you may be weighing if your organization utilizes this technology.

The hot topic amongst organizations is weighing the benefits of using AI against potential pitfalls. Apprehension around job automation, weaponization, and biases within the models are cause for concern – the latter of which leads directly into privacy territory. With new technology, it can be difficult to determine where to start with analyzing your practices and potential risks. New York and Connecticut have already blazed the trail for AI regulation, and many other states have legislation in the works. Navigating this state and local regulatory minefield will be a complex challenge for organizations. You could play it safe by avoiding the technology altogether until definitive regulatory parameters have been set in your jurisdiction, but who wants to fall behind in using such a revolutionary tool that could benefit your company?

Don’t forget to subscribe to our newsletter so you will receive our weekly blogs & privacy updates!

Prepare for AI Legislation

To aid you in preparing for what is likely to be a wave of AI-based regulations across the globe, Truyo President Dan Clarke and Trustible Co-Founder Gerald Kierce put together some actionable steps you can take now to begin the work of using AI in an ethical and soon-to-be compliant manner.

Step 1: Discovery

The biggest difficulty organizations face is finding out exactly where and how AI is being used. Performing discovery helps you ascertain the areas in which your company uses AI, enabling you to create a record of where it’s being used and how. There are different methods for this, but two main components to take into account are scanning and surveying. A scan of your systems, website, databases, etc. can give you a good idea of where AI is being used on the back end. However, you should also survey employees to see how they are using AI for day-to-day operations and efficiency. Emerging AI laws aren’t regulating the models so much as they’re regulating the use cases of AI, where the inherent risk is present and highly focused on front-end utilization. As such, an AI inventory is a critical step in documenting your use cases of AI.
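Here is a minimal sketch of the scanning half of discovery: walk a code tree and flag imports of well-known AI/ML libraries. The library list and the Python-only file scope are illustrative assumptions; a real inventory would also cover SaaS tools, vendor APIs, and the employee survey described above.

```python
# Sketch: flag source files that import common AI/ML libraries.
import re
from pathlib import Path

AI_LIBRARIES = {"openai", "anthropic", "transformers", "torch",
                "sklearn", "tensorflow", "langchain"}
IMPORT_RE = re.compile(r"^\s*(?:import|from)\s+([A-Za-z_][A-Za-z0-9_]*)",
                       re.MULTILINE)

def scan_for_ai_usage(root: str) -> dict[str, set[str]]:
    """Map each Python file under `root` to the AI libraries it imports."""
    findings: dict[str, set[str]] = {}
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        hits = {m for m in IMPORT_RE.findall(text) if m in AI_LIBRARIES}
        if hits:
            findings[str(path)] = hits
    return findings

if __name__ == "__main__":
    for file, libs in scan_for_ai_usage(".").items():
        print(file, "->", ", ".join(sorted(libs)))
```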

Step 2: Risk Assessment

As a privacy or cybersecurity professional, you’re no stranger to risk assessment, and though it is not yet required, performing a risk assessment of your organization’s AI usage is imperative. Assessing the relative risk of AI usage, especially when it’s used to make hiring decisions or any other activity involving PII, will help you identify where you could be at risk of prejudicial practices. AI usage for behind-the-scenes activities, such as data mapping, is inherently not prejudicial and shouldn’t put your organization at risk of enforcement. The European Union’s AI Act classifies specific use cases (not ML models) into one of four risk categories: unacceptable (prohibited), high, limited, and minimal risk. Organizations should start to gain an understanding of the regulatory requirements for each risk category.
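As a starting point, below is an illustrative sketch of a risk register keyed to those four tiers, sorted so the riskiest use cases get assessed first. The example tier assignments are our own rough reading for demonstration purposes, not legal advice.

```python
# Sketch: rank an AI use-case register by EU AI Act risk tier.
RISK_TIERS = ("unacceptable", "high", "limited", "minimal")

EXAMPLE_REGISTER = {
    "social scoring of citizens": "unacceptable",  # prohibited outright
    "resume screening for hiring": "high",
    "customer-facing chatbot": "limited",          # transparency duties
    "spam filtering": "minimal",
}

def review_order(register: dict[str, str]) -> list[tuple[str, str]]:
    """Sort use cases so the riskiest are assessed first."""
    rank = {tier: i for i, tier in enumerate(RISK_TIERS)}
    return sorted(register.items(), key=lambda item: rank[item[1]])

for use_case, tier in review_order(EXAMPLE_REGISTER):
    print(f"{tier:>12}: {use_case}")
```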

Step 3: Evaluating Usability

As quickly as AI is evolving, there can still be gaps that leave your organization open to risk. For example, consider an AI tool that builds health projections based on pre-2020 data. It wouldn’t be able to account for health trends that emerged from the Covid-19 pandemic, a huge pitfall in that sector. Timeframes for data usage and updates can and should be disclosed without giving away your trade secrets, and doing so helps reduce the potential for legal action.
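A minimal sketch of that kind of usability gate, assuming hypothetical model metadata: flag any model whose training-data cutoff predates the period it is expected to reflect.

```python
# Sketch: flag models whose training data ends before the period they must cover.
from datetime import date

def is_stale(training_cutoff: date, must_cover_through: date) -> bool:
    return training_cutoff < must_cover_through

health_model_cutoff = date(2019, 12, 31)   # pre-2020 data, as in the example
pandemic_era_start = date(2020, 3, 1)
print(is_stale(health_model_cutoff, pandemic_era_start))  # True: retrain or disclose
```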

Step 4: Disclosures

While it isn’t necessary to share the specific data that went into a particular AI model, it is generally innocuous to reveal any potential limits with high-level indicators. When in doubt, disclose! The key to staying off the enforcement radar aligns with privacy practices you’re already employing: disclose, minimize, and provide notice where necessary.

Step 5: Monitor Legislation

This isn’t an action item so much as a recommendation. As we all know, privacy legislation moves quickly and regulation around AI will be no different. One thing that we know will be true for all regulations is the increased need for documentation around your AI systems. Keeping in touch with what lawmakers are proposing in your jurisdiction will help you prepare for any laws coming your way.

As always, we will keep you apprised of the trajectory of AI regulations and how they will affect your organization as AI becomes a hot topic in privacy. If you have any questions, please reach out to hello@truyo.com.