How AI Technology Is Changing Brand Protection and Advertising Laws

Brands are the foundation of commerce, built on trust, reputation, and legal protection. But the rise of generative AI is reshaping that landscape, introducing new concerns around the use of copyrighted content in AI training, imitation of real people and protected trademarks, and the growing use of autonomous systems with uncertain legal accountability.

For brand owners in Ghana and across Africa, these issues are no longer theoretical. They present both immediate risks and significant opportunities.

As AI becomes more deeply embedded in branding and advertising, it is creating fresh legal questions at the intersection of intellectual property, regulation, and liability. This article explores the legal implications of AI systems trained on datasets containing copyrighted works and registered trademarks, particularly when those systems are used in advertising and marketing campaigns.

It examines ownership rights, infringement risks, and accountability for AI-generated content and advertising materials. The discussion also looks at emerging court decisions that deny copyright protection to fully AI-generated works, while still recognising liability where AI outputs infringe on existing copyrights or misuse trademarks.

The analysis further considers who should bear responsibility when AI-generated content becomes defamatory, misleading, or deceptive — whether it is the AI developer, the advertiser, the platform, or even the consumer.

The article also reviews evolving regulatory approaches focused on transparency and accountability for developers and brands. In addition, it proposes a possible legislative direction for Ghanaian policymakers and stakeholders. It concludes with practical compliance strategies, including AI audit tools, disclosure requirements, and intellectual property checks for datasets used to train AI systems integrated into branding and digital platforms.

Intellectual Property

Copyright at the Input Stage: Training AI on Protected Works

Generative AI systems learn from vast amounts of data, much of it sourced online without express permission. This has triggered a major legal debate: does using copyrighted material to train AI systems amount to fair use, or is it copyright infringement?

In 2025, a US court delivered its first major ruling on AI training datasets in Bartz v. Anthropic. The court held that legally obtained materials could be used for AI training under fair use principles, but pirated books amounted to copyright infringement. The resulting $1.5 billion settlement — the largest AI copyright settlement so far — required Anthropic to destroy pirated datasets. Another US court also suggested that Meta’s use of copyrighted material could be considered transformative, while stressing that copyright owners may still deserve compensation where such use is commercially essential.

In the UK case Getty Images v Stability AI, Getty dropped its key copyright claims during trial and failed on most of those that remained, but succeeded on a narrower trademark infringement argument.

The decision offered temporary legal breathing room for AI developers regarding training data, while making it clear that reproducing proprietary watermarks in AI-generated outputs could still trigger liability.

In Nigeria, the Advertising Regulatory Council (ARCON) has already raised concerns about AI-generated fraudulent advertisements spreading across social media, especially deceptive health-related promotions that pose risks to public safety.

What this means for brands

Your copyrighted content may already be part of datasets used to train AI systems without your consent. Brand owners should regularly monitor potential misuse of their intellectual property and issue takedown requests where necessary.

Copyright at the Output Stage: Who Owns AI-Generated Brand Content?

On March 2, 2026, the US Supreme Court declined to review Thaler v. Perlmutter, effectively confirming that works created entirely by AI without meaningful human creativity cannot qualify for copyright protection.

That means logos, ad copy, slogans, or product descriptions generated solely by AI may not enjoy copyright protection.

At the same time, courts are increasingly open to “output-based infringement” claims, where AI-generated content reproduces or closely imitates copyrighted works belonging to others. In other words, while creators may not own purely AI-generated works, they could still face legal action if their AI systems infringe on existing rights.

Practical guidance for Ghanaian brands

Brands should carefully document human creative contributions in AI-assisted projects. Relying entirely on AI for key brand assets carries significant legal risk.

Trademarks: From Training Data to AI Outputs

Trademark law focuses heavily on commercial use. In the AI era, trademark disputes may arise when protected marks are included in training datasets, reproduced in generated outputs, or used as names for AI products.

One notable example is Cameo v. OpenAI. Cameo, the celebrity video platform, sued OpenAI after it introduced a “cameo” feature in its Sora video application that allowed users to generate AI videos featuring celebrity likenesses.

The court granted a temporary injunction stopping OpenAI from using the “cameo” name, citing a strong likelihood of consumer confusion. The case highlights the legal dangers of adopting names already associated with established brands.

Similarly, in the Getty Images trademark dispute, the UK court found Stability AI liable for reproducing watermark-like elements resembling Getty’s trademarks in AI-generated images. The court ruled that Stability AI exercised enough control over training data, prompts, and outputs to meet the legal threshold for trademark use “in the course of trade.”

Importantly, the court held that the technical autonomy of AI systems does not shield providers from responsibility.

Practical guidance for brands

Watermarking digital assets is no longer just about branding aesthetics. It can serve as critical evidence in infringement disputes and strengthen enforcement actions.
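Beyond visible watermarks, a brand can also keep a tamper-evident record tying each published asset to itself. The sketch below is one illustrative approach, not a legal standard: it fingerprints an asset's exact bytes with SHA-256 and logs the brand and timestamp, which could later help show what was published and when. The function and brand names are hypothetical.

```python
import hashlib
from datetime import datetime, timezone

def fingerprint_asset(data: bytes, brand: str) -> dict:
    """Build a provenance record tying an asset's exact bytes to a brand."""
    digest = hashlib.sha256(data).hexdigest()
    return {
        "brand": brand,
        "sha256": digest,  # changes if even one byte of the asset changes
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

# Hypothetical example: fingerprint a draft campaign image before release
asset_bytes = b"...example image bytes..."
record = fingerprint_asset(asset_bytes, "ExampleBrand")
print(record["sha256"])
```

Storing such records alongside published campaigns gives a brand a simple, verifiable trail it can point to if the same asset later surfaces in an AI training dataset or a disputed output.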

Liability

Who Is Responsible When AI Violates Intellectual Property Rights?

Liability for AI-generated content may rest with different parties depending on the circumstances — including the AI developer, the advertiser or brand using the AI, or the platform distributing the content.

AI Developer Liability

In the Getty Images trademark case, the court outlined factors that could make an AI developer liable for infringement. These included:

  • Control over training data
  • The ability to block prohibited prompts
  • Capacity to filter outputs
  • The inability of users to influence the AI’s core behaviour
  • Consumer perception that the platform itself is the source of the content

Where these factors point to significant developer control, courts are more likely to hold AI providers accountable.

Brand and Advertiser Liability

For startups and emerging businesses, the greater danger may not come from lawsuits by giant corporations, but from failing to ensure their own AI-generated advertising complies with existing laws.

Brands should create simple compliance checklists for AI-generated marketing materials. AI content that imitates real people, celebrities, endorsements, or testimonials could easily breach Ghana’s advertising standards and consumer protection laws.
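A checklist like this can be as simple as a structured list that blocks publication until every item passes. The sketch below is a minimal illustration under stated assumptions: the specific questions and the `AdCheck`/`review` names are invented for this example and are not drawn from any Ghanaian regulation.

```python
from dataclasses import dataclass

@dataclass
class AdCheck:
    """One item on a pre-publication checklist for AI-assisted ads."""
    question: str
    passed: bool

def review(checks: list[AdCheck]) -> list[str]:
    """Return the questions that still need attention before publishing."""
    return [c.question for c in checks if not c.passed]

# Hypothetical review of a draft AI-generated campaign
checks = [
    AdCheck("Is AI-generated content clearly disclosed?", True),
    AdCheck("Is the ad free of real-person likenesses used without consent?", False),
    AdCheck("Have implied endorsements or testimonials been verified?", True),
]
print(review(checks))  # any unresolved items should block publication
```

Even a lightweight gate like this forces a deliberate sign-off on the points most likely to breach advertising standards or consumer protection laws.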

Platform Liability and Safe Harbour Rules

Within the European Union, the AI Act imposes transparency obligations on both AI providers and deployers. Article 50 requires clear disclosure whenever content — including advertisements and synthetic media — is AI-generated.

Violations can attract penalties of up to €15 million or 3% of global annual turnover.

One reported example involved a Guess advertisement published in Vogue magazine in August 2025. The campaign featured a model generated using AI, but the disclosure appeared only as a small disclaimer in the corner of the advertisement.

Critics argued that the disclosure may not satisfy EU transparency standards. If distributed within Europe, the campaign could expose Guess to regulatory action, even if the ad was created outside the EU.

Regulation

Ghana and Africa’s Response to AI

Ghana currently has no dedicated AI law, but the country is positioning itself as a regional leader in responsible AI governance through emerging policy frameworks.

In September 2025, the Ghana AI Practitioners’ Guide (GAIPG) was introduced as a national framework for ethical AI development tailored to Ghana’s realities. It promotes globally recognised principles such as fairness, accountability, and transparency.

At the same time, consultations are underway on the proposed Emerging Technologies Bill, which is expected to establish an Emerging Technologies Agency with a specialised AI division to oversee AI systems in Ghana.

A revised Data Protection Bill is also being developed to address AI systems, automated decision-making, deepfakes, and cross-border data transfers.

The proposed legislation would replace the current Data Protection Act and amend parts of the Electronic Transactions Act, both of which are seen as inadequate for managing AI-related challenges.

The new bill aims to strengthen enforcement, increase penalties for non-compliance, and respond to growing concerns about data harvesting by multinational technology companies and the absence of strong domestic data infrastructure.

At present, it remains unclear how Ghanaian courts would handle disputes involving AI-generated advertisements that infringe copyrights or trademarks. In the absence of local precedent, courts may rely on persuasive foreign decisions for guidance.

The Broader African Context

Nigeria’s ARCON is already confronting the rise of AI-generated deceptive advertisements on digital platforms. The country’s advertising industry is also forming specialised AI committees to develop ethical standards.

Kenya, South Africa, and Rwanda are pursuing similar AI initiatives.

Meanwhile, the African Continental Free Trade Area (AfCFTA) is expected to increase pressure for harmonised AI regulation, data protection standards, and cross-border digital commerce rules across the continent.

Ghana’s early engagement with AI governance positions it as a potentially influential player in these discussions.

The EU AI Act

The EU AI Act — widely regarded as the world’s first comprehensive AI law — began phased enforcement in 2025.

Among its key provisions are mandatory transparency requirements for AI-generated content and restrictions on manipulative AI advertising practices, including the exploitation of vulnerable individuals.

Importantly, the law also applies to non-EU businesses if their AI-generated outputs are accessible within Europe.

For Ghanaian brands hoping to expand into European markets, compliance will be essential.

Practical Recommendations for Brands

The risks associated with AI-generated advertising can be reduced if developers, brands, platforms, and advertisers adopt responsible practices, including:

  • Documenting human creative input in AI-assisted works
  • Avoiding complete reliance on AI for core brand assets
  • Monitoring whether copyrighted materials are used in AI training datasets
  • Exploring licensing arrangements or opt-out mechanisms where appropriate
  • Conducting trademark clearance checks for AI-generated names, slogans, and logos
  • Clearly disclosing AI-generated advertising content to consumers
  • Updating contracts with agencies and technology vendors to allocate liability appropriately
  • Monitoring developments surrounding Ghana’s Emerging Technologies Bill and Data Protection Bill
  • Avoiding AI-generated content that could mislead consumers about authenticity, endorsements, or product performance

Conclusion: Brands, AI and the Road Ahead

The rise of artificial intelligence does not erase existing legal and ethical obligations. Ghanaian brands remain protected — and regulated — by longstanding laws governing copyright, trademarks, contracts, and consumer protection.

AI does not eliminate infringement risks; it simply creates new ways for those risks to emerge.

In Ghana, the key question is no longer whether AI should be regulated, but how regulation should be implemented. The GAIPG, the proposed Emerging Technologies Bill, and the revised Data Protection Bill all point toward a regulatory approach that seeks to balance innovation with accountability.

Brands that build compliance into their AI strategies from the beginning — by documenting human contributions, reviewing AI outputs for intellectual property risks, and providing transparent disclosures — will be better positioned to reduce legal exposure and maintain consumer trust in an increasingly AI-driven marketplace.

As courts around the world continue to emphasise, technology may evolve rapidly, but the principles of fairness, authorship, and accountability remain constant.
