Navigating AI Governance: CCPA Regulations and Biden’s Executive Order – Impact, Rights, and Compliance Unveiled

We knew it was coming, and now we’re dissecting two AI directives released less than 30 days apart! With the release of the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence and the California Privacy Protection Agency (CPPA) draft Automated Decisionmaking Technology Regulations, the onslaught of AI-related governance and compliance rules is upon us.

The CPPA’s release is much more prescriptive than the Executive Order, which called upon agencies and government entities to develop parameters for ethical AI use. The draft regulations by the CPPA are broken down into three critical categories. Here’s how they compare to the content of Biden’s Executive Order.

Right to Opt-Out and Pre-Use Notice

As outlined in the CPPA draft regulations, a company using automated decision-making technology must give consumers a “Pre-use Notice.” This notice should explain how the business uses AI technology and communicate to consumers that they have the right to choose not to participate or to get information about how the technology is used. Following that notice, consumers must be given the option to opt out of all automated decision-making practices. It’s a much more comprehensive opt-out than in past privacy legislation.

The Executive Order on AI doesn’t spell out an opt-out requirement or notice obligations; however, throughout the Executive Order, it is stated that committees will be formed to address these issues and Congress is called upon to develop bipartisan data privacy legislation to ensure the privacy of American citizens’ data is of utmost importance.

Get Information On Truyo’s AI Governance Platform – Identify AI Footprints & Train Employees on Ethical and Compliant Usage

What does this mean for your organization? Truyo President Dan Clarke weighs in on what is arguably the most important, yet most complicated, section of this and any future legislation governing the use of AI. Clarke says, “In the privacy community, opt-in and opt-out signals are old hat, but the CPPA’s pre-use notice is significant and poses potential complications for organizations. How do you get this notice to your consumers before automated decision-making begins?”

Clarke goes on to say, “First and foremost, you need to understand where and how AI is being used. It may sound self-serving since that’s exactly what Truyo’s AI Governance Platform does, but identification of AI usage is vital. To offer a notice and an opt-out to your consumers you have to know what exactly they’re opting out of to meet requirements compliantly. Without that knowledge, you are flying blind into a governance nightmare. Regarding opt-out, you absolutely can and absolutely should go through the identity verification of someone performing a request to access – similar to a right to know or right to delete request under CPRA.”
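Clarke’s point about pairing the opt-out with identity verification can be sketched in code. The following is a minimal illustration, not a reference implementation: the class and method names are hypothetical, and a real system would add audit logging, notice delivery, and verification evidence handling.

```python
from dataclasses import dataclass


@dataclass
class OptOutRequest:
    consumer_id: str
    email: str
    verified: bool = False


class OptOutRegistry:
    """Hypothetical registry for ADMT opt-out requests."""

    def __init__(self):
        self._opted_out: set = set()
        self._pending: dict = {}

    def submit(self, request: OptOutRequest) -> str:
        # Hold the request until identity verification completes,
        # mirroring a right-to-know / right-to-delete workflow under CPRA.
        self._pending[request.consumer_id] = request
        return "pending_verification"

    def verify(self, consumer_id: str, evidence_ok: bool) -> str:
        request = self._pending.pop(consumer_id, None)
        if request is None:
            return "no_pending_request"
        if not evidence_ok:
            return "verification_failed"
        request.verified = True
        self._opted_out.add(consumer_id)
        return "opted_out"

    def may_run_admt(self, consumer_id: str) -> bool:
        # Automated decision-making must be suppressed for opted-out consumers.
        return consumer_id not in self._opted_out
```

The key design choice mirrors Clarke’s advice: no opt-out takes effect until the requester’s identity is verified, but once it does, every automated decision-making path must check the registry before running.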

Consumer Rights

Consumer rights are top of mind for the CPPA, as indicated by the previously released operating rules under CPRA, and those same considerations continue to be apparent in the Automated Decisionmaking Technology draft regulations. The proposed regulations outline the right for consumers to request access to information about the business’s use of automated decision-making. Much like a Data Subject Access Request, organizations will have to accept incoming requests and act on them in a timely manner, which has yet to be defined.

Again, the Executive Order does not spell out what rights consumers will have, but it is evident that the upcoming legislation Biden called for would include rights afforded to American citizens to foster the transparent and ethical use of automated decision-making technology. From the Executive Order: “To better protect Americans’ privacy, including from the risks posed by AI, the President calls on Congress to pass bipartisan data privacy legislation to protect all Americans, especially kids, and directs the following actions…” Without recommending specific and detailed compliance requirements, Biden made a statement that those would be forthcoming in the form of federal legislation and actions by committees created by this Order.

Protecting American Minors

Article 6 of the CPPA draft regulations outlines special parameters for consumers under 16 years of age, requiring them to opt in rather than opt out. For those under 13, businesses using profiling for behavioral advertising must establish a reasonable method for a parent or guardian to opt in on behalf of their minor. Even where parental consent already exists under COPPA, this additional agreement for profiling is required.
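The age-tiered consent rules above lend themselves to a simple decision table. The sketch below is illustrative only (the function name and return values are our own, and this is not legal advice), but it captures the three regimes the draft describes:

```python
def required_consent(age: int, has_coppa_consent: bool = False) -> str:
    """Map a consumer's age to the consent regime for profiling used in
    behavioral advertising, per the draft rules (illustrative only)."""
    if age < 13:
        # A parent or guardian must opt in on the minor's behalf. Existing
        # COPPA consent does not cover profiling, so it is not sufficient.
        return "parent_or_guardian_opt_in"
    if age < 16:
        # Consumers aged 13-15 must opt in themselves.
        return "consumer_opt_in"
    # Adults and 16/17-year-olds fall under the standard opt-out model.
    return "opt_out_model"
```

Note that `has_coppa_consent` deliberately never changes the result for under-13s: that is the point of the draft’s requirement for a separate profiling agreement.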

Biden’s intentional mention of kids, or minors as typically designated in privacy legislation, in the Executive Order foreshadows specific callouts in future federal legislation to maintain high-level regulation and security of the data of younger citizens.


Exemptions

It may appear at first glance that there are several exemptions included, such as those allowing organizations to forgo the opt-out mechanism, but the available exceptions are few and far between and don’t allow most businesses to make a use-case argument to avoid the opt-out. Even government agencies will likely fall within the scope of the CPPA rules.

What’s Next for the CPPA’s Proposed AI Regulations?

We expect further clarification from the CPPA on what qualifies as an acceptable opt-out mechanism. Truyo President Dan Clarke says, “I think iterations of these regulations from the CPPA are likely to add strength to the argument that this opt-out must be explicit to automated decision-making and you can’t simply employ an umbrella opt-out. A consumer complaint mechanism may also be released, giving consumers an option to submit grievances to the CPPA for consideration.”

There is a mention of a 15-day timeline assigned to the opt-out, but we may see this revised for more clarity, potentially processing the opt-out immediately upon first interaction. Clarke says, “Technically speaking, it makes sense for the opt-out to be processed immediately. You can’t retain data once someone opts out, but how do you operationalize that? It’s a huge burden on companies looking to comply with these proposed regulations.”

In analyzing what this means for American companies, Dan Clarke had this to say, “Maybe the biggest takeaway for a business owner is you may have to require your entire organization to carefully disclose the purpose, usage, and methodology with which they’re leveraging AI, especially in automated decision-making. This is no easy task, but under these proposed regulations and other current laws, there’s no way around it. You might, and probably should, create a ROPA-like document trail demonstrating what you’ve done to identify automated decision-making and how you’ve required your organization to document its usage and subsequent compliance efforts.”
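Clarke’s suggestion of a ROPA-like document trail can be as simple as a structured inventory that is serialized and versioned over time. The sketch below is one possible shape, assuming nothing beyond the Python standard library; the field names and sample entry are hypothetical.

```python
import json
from dataclasses import dataclass, asdict


@dataclass
class ADMTRecord:
    """One row of a ROPA-style inventory of automated decision-making."""
    system: str
    purpose: str
    methodology: str
    data_categories: list
    owner: str
    opt_out_offered: bool


# Hypothetical entry illustrating the level of detail worth capturing.
records = [
    ADMTRecord(
        system="resume-screener",
        purpose="candidate triage",
        methodology="ML ranking model",
        data_categories=["employment history", "education"],
        owner="HR",
        opt_out_offered=True,
    ),
]


def export_trail(records) -> str:
    # Serialize the inventory so compliance efforts are documented,
    # diffable, and auditable over time.
    return json.dumps([asdict(r) for r in records], indent=2)
```

Committing the exported JSON to version control gives you exactly the kind of dated evidence trail Clarke describes: what you identified, when, and what you required of each system owner.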

AI-Based Legislation Could Spur Movement on the Privacy Front

We’ve seen Congress come to a standstill on federal privacy legislation in the past, leaving it up to the states to fill the gap. Dan Clarke is hopeful, saying, “I think we’re going to see movement in privacy protection as a result of Biden’s willingness to publicly speak on the privacy implications of AI. This has reinvigorated the federal privacy discussion and I anticipate, either through Mag-Moss rulemaking or comprehensive legislation, movement on that front sooner than later. I think one of the quickest things we’re going to see is a response from NIST and a cybersecurity program for AI compliance. I was told by a Biden administration insider that there are deadlines for new elements outlined in the Executive Order, some as short as 45 days, so we should see material AI-related compliance components come out in the next 90 days.”

Diving Into President Biden’s AI Executive Order

On October 30th, President Joe Biden unveiled the federal government’s most comprehensive initiative regarding artificial intelligence (AI) to date. The Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence places a significant emphasis on establishing standards for AI privacy, security, and safety, as indicated in a preliminary fact sheet that preceded the official order. It also aims to combat discrimination within AI systems while safeguarding the rights of workers when AI technology is deployed in the workplace.

Bruce Reed, the White House Deputy Chief of Staff, expressed that Biden’s actions represent the most substantial measures any government worldwide has taken to ensure AI’s safety, security, and trust. He underscored that these steps are part of an aggressive strategy to harness the benefits of AI while mitigating potential risks.

Learn How Truyo’s AI Governance Toolset Scans to Locate AI Usage in Your Organization and Populates a Categorized and Ranked Risk Dashboard

President Biden had pledged to address AI-related issues in August, leaving privacy and cybersecurity professionals wondering what was coming down the pike. In the lead-up to this executive order, Biden secured voluntary commitments from several companies to adhere to safety metrics while awaiting more formal regulations from the U.S. Congress. Additionally, earlier in the year, the White House had unveiled a Blueprint for an AI Bill of Rights.

Prior to signing the executive order, President Biden spoke about the significant responsibility the federal government faces in establishing guidelines for a technology that can take various forms and is rapidly evolving. He described this moment in history as an “inflection point” where current decisions will have far-reaching consequences, which could be both perilous and transformative, including the potential to combat diseases like cancer and address climate change.

About the Order

The executive order’s primary focus is the integration of privacy and AI, highlighting the essential need for safeguards. The order explicitly calls on Congress to advance comprehensive privacy legislation, echoing the sentiments of federal lawmakers who have recently explored the connection between privacy legislation and AI regulation.

Under the order, federal agencies are directed to develop techniques to protect individuals’ privacy, assess the effectiveness of existing privacy protections, and strike a balance between data accessibility for training purposes and data security and anonymity.

The order also tasks the National Science Foundation with collaborating on stronger cryptography protections to safeguard privacy and national security, particularly in the face of advancing quantum computing. Federal agencies are instructed to assess their data collection and usage practices, including data obtained from data brokers, with the goal of ensuring data security.

The Future of Privacy Forum supports the call for bipartisan privacy legislation and views the executive order as highly comprehensive, affecting both the government and private sector. The Center for Democracy and Technology also applauds the order for its comprehensive approach to responsible AI development and governance.

To ensure safety, the order utilizes the Defense Production Act to compel developers to share safety test results with the government before releasing AI technologies to the public. It also establishes standards for these tests and instructs agencies like the Departments of Energy and Homeland Security to evaluate AI’s potential risks to critical infrastructure. The order further charges the National Security Council with developing a national memorandum on AI use in the military and intelligence community.

The order emphasizes the need for accountability among companies as scrutiny increases regarding how generative AI algorithms create content. To maintain transparency, it calls for independent testing of AI safety and the establishment of new standards for collaboration between the government and private sector.

The executive order builds upon Biden’s Blueprint for an AI Bill of Rights by requiring agencies to issue guidance on the use of AI in sectors like housing, federal benefits programs, and contracting to reduce discrimination. This move follows studies demonstrating that algorithms used in areas like banking can harm low-income borrowers.

The Department of Justice is tasked with training civil rights offices on prosecuting civil rights violations related to AI and setting standards for AI usage in the criminal justice system, ensuring equitable practices.

The order also prioritizes protecting workers’ ability to collectively bargain, addressing job displacement due to AI, and upholding labor standards. It encourages companies to distinguish AI-generated content from human-created content by adding watermarks.

Furthermore, the order underscores the importance of promoting U.S. competitiveness in AI. It calls for the establishment of a National AI Research Resource to enhance research tools’ accessibility and provide technical assistance to small businesses. Streamlining the visa process for AI-skilled individuals to study and work in the U.S. is another key feature.

While President Biden’s executive order represents a significant step forward, it also acknowledges that there are limits to what the White House can accomplish independently. With mounting pressure and rapid global developments in AI, Congress faces the challenge of keeping pace with the evolving landscape.

Artificial Intelligence: The Latest Topic to Make Privacy Professionals Sweat

Omar N. Bradley said, “If we continue to develop our technology without wisdom or prudence, our servant may prove to be our executioner.” When he said that, he couldn’t foresee artificial intelligence one day writing pages of code in mere seconds and making our wildest imaginations come true through AI-designed art. All jokes aside, in the last 12 months we’ve seen an onslaught of AI tools flood the market, from ChatGPT to Bard and many more. While AI has enhanced and streamlined many business operations, from hiring employees and developing marketing collateral to designing websites, it has also brought a slew of privacy concerns that you may be weighing if your organization utilizes this technology.

The hot topic amongst organizations is weighing the benefits of using AI against its potential pitfalls. Apprehension around job automation, weaponization, and bias within the models is cause for concern – the last of which leads directly into privacy territory. With new technology, it can be difficult to determine where to start with analyzing your practices and potential risks. New York and Connecticut have already blazed the trail for AI regulation, and most other states have legislation in the works. Navigating this state and local regulatory minefield will be a complex challenge for organizations. You can play it safe by avoiding the technology altogether until definitive regulatory parameters in your jurisdiction have been set, but who wants to fall behind in using such a revolutionary tool that could benefit your company?

Don’t forget to subscribe to our newsletter so you will receive our weekly blogs & privacy updates!

Prepare for AI Legislation

To aid you in preparing for what is likely to be a wave of AI-based regulations across the globe, Truyo President Dan Clarke and Trustible Co-Founder Gerald Kierce put together some actionable steps you can take now to begin the work of using AI in an ethical and soon-to-be compliant manner.

Step 1: Discovery

The biggest difficulty organizations face is finding out exactly where and how AI is being used. Performing discovery helps you ascertain the areas in which your company uses AI, enabling you to create a record of where it’s being used and how. There are different methods for this, but there are two main components you should take into account: scanning and surveying. A scan of your systems, website, databases, etc. can give you a good idea of where AI is being used on the back end. However, you should also survey employees to see how they are using AI for day-to-day operations and efficiency. Emerging AI laws aren’t regulating the models so much as they’re regulating the use cases of AI, where the inherent risk is present and highly focused on front-end utilization. As such, an AI inventory is a critical step in documenting your use cases of AI.

Step 2: Risk Assessment

As a privacy or cybersecurity professional, you’re no stranger to risk assessment, and though it is not yet required, performing a risk assessment of your organization’s AI usage is imperative. Assessing the relative risk of AI usage, especially when it’s used to make hiring decisions or any other activity involving PII, will help you identify where you could be at risk of prejudicial practices. AI usage for what we call behind-the-scenes activities, such as data mapping, is inherently not prejudicial and shouldn’t put your organization at risk of enforcement. The European Union’s AI Act classifies specific use cases (not ML models) into one of four risk categories: unacceptable (prohibited), high, limited, and minimal risk. Organizations should start to gain an understanding of the regulatory requirements for each risk category.
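A first-pass risk assessment can be expressed as a lookup from use case to tier. The mapping below is illustrative: the example classifications are our assumptions for discussion, not legal findings, and your counsel should assign the real tiers.

```python
# The EU AI Act's four use-case risk tiers, from most to least restricted.
RISK_TIERS = ("unacceptable", "high", "limited", "minimal")

# Hypothetical classification of example use cases for illustration only.
USE_CASE_RISK = {
    "social_scoring": "unacceptable",    # prohibited outright
    "hiring_decisions": "high",          # PII plus consequential outcomes
    "customer_chatbot": "limited",       # transparency obligations
    "internal_data_mapping": "minimal",  # behind-the-scenes, low exposure
}


def risk_tier(use_case: str) -> str:
    # Default unknown use cases to "high" so they get human review
    # rather than silently slipping through the assessment.
    return USE_CASE_RISK.get(use_case, "high")
```

The fail-closed default is the important design choice here: an unclassified use case should trigger review, not pass by omission.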

Step 3: Evaluating Usability

As quickly as AI is evolving, there can still be gaps that leave your organization open to risk. For example, consider an AI tool that builds health projections based on pre-2020 data. It wouldn’t be able to account for health trends that emerged from the Covid-19 pandemic, a huge pitfall in that sector. Timeframes for data usage and updates can and should be disclosed; doing so doesn’t give away your trade secrets and helps reduce the potential for legal action.
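Turning a training-data cutoff into a plain-language disclosure can be automated. This is a hypothetical helper (the function name and the two-year staleness policy are our own assumptions) showing how a model’s data timeframe might be surfaced without revealing anything proprietary:

```python
from datetime import date


def data_freshness_disclosure(training_cutoff: date,
                              max_age_days: int = 730) -> str:
    """Produce a consumer-facing note on how current a model's training
    data is, flagging it as stale past a configurable age threshold."""
    age_days = (date.today() - training_cutoff).days
    status = "stale" if age_days > max_age_days else "current"
    return (f"This model was trained on data through "
            f"{training_cutoff.isoformat()} and is considered {status} "
            f"under our {max_age_days}-day freshness policy.")
```

A pre-2020 health-projection model like the one above would come back flagged as stale, prompting exactly the kind of disclosure, or retraining decision, this step is about.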

Step 4: Disclosures

While it isn’t necessary to share the specific data that went into a particular AI model, it is generally innocuous to reveal any potential limits with high-level indicators. When in doubt, disclose! The key to staying off the enforcement radar is aligning with privacy practices you’re already employing: disclose, minimize, and provide notice where necessary.

Step 5: Monitor Legislation

This isn’t an action item so much as a recommendation. As we all know, privacy legislation moves quickly and regulation around AI will be no different. One thing that we know will be true for all regulations is the increased need for documentation around your AI systems. Keeping in touch with what lawmakers are proposing in your jurisdiction will help you prepare for any laws coming your way.

As always, we will keep you apprised of the trajectory of AI regulations and how they will affect your organization as AI becomes a hot topic in privacy. If you have any questions, please reach out to