AI Chatbots and Canadian News: Analyzing the Legal Implications

The Canadian government is no stranger to regulating tech companies, but critics point out that AI chatbots have so far escaped regulation even though their models are trained on Canadian news content.

When questioned about this discrepancy, the government has repeatedly sidestepped the issue of whether AI companies should compensate Canadian news publishers for using their content. The Online News Act, passed by the Liberal government in 2023, requires certain tech companies to negotiate licensing agreements with news publishers. However, whether the law applies to AI services such as OpenAI's ChatGPT, Google's Gemini, and Meta AI remains ambiguous.

The Online News Act was enacted to ensure that news publishers receive fair compensation when tech companies use their content, but its application to AI services has yet to be clarified. The office of Heritage Minister Pascale St-Onge indicated that determining whether AI services are covered by the law falls within the jurisdiction of Canada's broadcasting regulator. The office's statement emphasized, "We are closely monitoring developments in artificial intelligence and their implications for the news media sector."

This ambiguity invites extensive legal interpretation. Should AI chatbots be deemed to use news content in a manner warranting compensation, it could open new legal avenues for news publishers to seek remuneration.

AI Companies and News Content Usage

Several AI models, including ChatGPT, openly acknowledge using Canadian news sources to train their algorithms or provide information. ChatGPT, for instance, accesses publicly available data from various news sites, while Google's Gemini incorporates news articles as part of its training dataset. Meta AI similarly acknowledges employing news sites to assist in responding to user queries.

These admissions raise significant legal questions regarding the obligations of AI companies. If deemed subject to the same standards as traditional tech companies under the Online News Act, AI services could face substantial financial implications, necessitating the negotiation of licensing agreements and compensation for the content they utilize.

One is left wondering whether AI companies might respond the way Meta did to the fallout of this law last year, when it simply barred Canadian users from sharing Canadian news content across its platforms. If the law were extended, would Canadian users then lose access to Meta's AI products as well?

Prime Minister Justin Trudeau, when asked on a technology podcast about extending the Online News Act to cover AI, declined to give a direct answer. Instead, he underscored the ethical responsibility of platforms to self-regulate without excessive government intervention.

Justin Trudeau on the New York Times podcast "Hard Fork"

He remarked, "What I want is not for government to legislate what platforms should do or not do, because that's a recipe for disaster. We all know how slow governments end up working."

This position suggests a preference for self-regulation among AI companies, yet it leaves unresolved many legal questions about what framework is needed to protect Canadian news publishers, a goal the government has already pursued with earlier legislation.

Thus far, Google has negotiated an exemption from the Online News Act by agreeing to pay $100 million annually to Canadian news publishers. As mentioned previously, Meta took a different route, complying with the law by blocking news for Canadian users altogether.

AI technology is here to stay, but how governments will ultimately respond to it remains an open question.

Future legislation could profoundly shape the relationship between AI companies and content creators such as news organizations, which the Canadian government has already worked hard to protect in other ways. There are, however, characteristic differences between how machine learning models are trained on content and how social media platforms "share" existing content, and AI companies are likely to underscore those differences in their dealings with regulators, as the sketch below illustrates.
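
To make that distinction concrete, here is a minimal Python sketch (a toy illustration, not any company's actual system; the sample article and the `train_step` function are invented for this example). A platform that shares an article stores only a link back to the publisher, while a model that trains on the article absorbs the text's statistical patterns into its parameters.

```python
# Toy illustration of "sharing" an article versus "training" on it.
# Nothing here reflects any real company's pipeline.

articles = [
    {"url": "https://example-news.ca/story-1",
     "text": "Ottawa weighs new rules for artificial intelligence"},
]

# Sharing: a social platform stores a pointer to the article.
# The full text stays on the publisher's site.
shared_feed = [a["url"] for a in articles]

# Training: a model consumes the text itself. This bag-of-words
# counter stands in for real gradient updates on model parameters.
def train_step(params, text):
    for token in text.lower().split():
        params[token] = params.get(token, 0) + 1
    return params

params = {}
for a in articles:
    params = train_step(params, a["text"])

print(shared_feed)  # the platform holds only a link
print(params)       # the model holds patterns derived from the text
```

Sharing is the activity Parliament targeted with the Online News Act; whether training constitutes a comparable use of news content is exactly the question regulators have so far left open.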