Artificial Intelligence (AI) is making waves across industries, and the legal field is no exception. From research assistance to contract drafting, AI has transformed the way legal professionals work. But let’s get one thing straight: AI is a tool, not a lawyer. It can assist, enhance and even accelerate legal processes, but it cannot – and should not – replace human expertise.

AI can’t read the room

One of the fundamental aspects of legal practice is emotional intelligence. No matter how advanced AI becomes, it lacks the ability to assess the nuances of a negotiation or the mood of a courtroom. Legal strategy is often about timing, persuasion and understanding the human element – something AI simply cannot replicate.

I’ve seen firsthand how courtroom dynamics shift in an instant. A judge’s expression, an opposing counsel’s hesitation – these are cues that seasoned legal professionals pick up on and use to their advantage. AI, on the other hand, processes data, not emotions. It doesn’t know when to press harder or when to pivot. This is why, while AI is a powerful research tool, it will never replace the strategic instincts of an experienced lawyer.

The need for AI policies in business

As AI continues to be integrated into workplaces, companies need to take a hard look at their policies. In South Africa, we’re still operating in a legal grey area when it comes to AI regulation. There is no specific legislation governing AI use in business, which means companies must take it upon themselves to set clear guidelines.

AI-generated content can be impressive, but it’s not always legally sound. Businesses that blindly rely on AI for contracts, compliance reports or HR policies risk serious legal exposure. This is why I advise every business to establish clear AI usage policies. These should cover aspects like:

  • Data security and privacy: Who owns AI-generated content? How is confidential client information protected?
  • Decision-making and liability: If AI makes an error, who is responsible?
  • Ethical AI use: Are there safeguards against biased or misleading AI outputs?

Companies that fail to implement such policies are leaving themselves open to unnecessary risks.

Cybersecurity and AI manipulation: A growing threat

Cybersecurity threats have escalated alongside AI advancements. The reality is that AI isn’t just being used for good – it’s also being weaponised. Deepfake technology and AI-driven scams are becoming increasingly sophisticated.

Imagine receiving a video call from your CEO instructing you to authorise a payment, only to discover later that it wasn’t them at all – it was AI-generated fraud.

A recent conversation with the CEO of a top tech company highlighted how AI is being used to impersonate key business figures in emails, WhatsApp messages and even live calls. The financial and reputational consequences of such attacks can be catastrophic.

This is why businesses must go beyond traditional cybersecurity measures and incorporate AI-specific safeguards. AI policies should clearly define:

  • How AI tools interact with sensitive business data
  • Protocols for verifying communications, even if they appear authentic
  • Legal recourse in case of AI-driven fraud or cyberattacks

Responsible AI integration: Finding the balance

AI is here to stay, and businesses that integrate it responsibly will gain a competitive edge. However, the key is balance. AI should be viewed as a support system rather than a decision-maker. In the legal world, this means using AI to enhance research and efficiency while ensuring that critical thinking and legal judgment remain in human hands.

I strongly encourage businesses to stay ahead of the curve by proactively implementing AI policies. As regulatory frameworks evolve, having these structures in place will put businesses in a stronger position when legislation inevitably catches up.

The bottom line? AI is a powerful ally, but it’s not a substitute for expertise. In law, as in business, the human element remains irreplaceable.

[Author: PJ Veldhuizen]