I have had the pleasure of presenting on this topic twice so far this year: first to the Intellectual Property Society of Australia and New Zealand (IPSANZ) in May, and again in October to the Victorian Society for Computers and the Law.

My presentations have been a combination of three things: a roadmap of the available tools; practical advice on using those tools effectively; and a survey of current litigation involving generative AI companies.

The latter not only highlights where future disputes are likely to arise, it also sheds light on the way these systems work. A careful reading of the complaint in the GitHub Copilot litigation shows how the Copilot system (a cousin of ChatGPT) combines snippets of its training data in the responses it gives: the complaint illustrates distinctive features of the source material that appear in Copilot-generated code.

The Sarah Silverman litigation shows how authors, facing the black box of ChatGPT, argue that the system must have been trained on their works because of the level of detail with which it can recount their books. But a question arises: is that the only reasonable inference? Is it possible that a system like ChatGPT could assemble details about a book, like pieces of a jigsaw puzzle, from the snippets of information about it available all over the internet?

Deepfakes like the Fake Drake song ask us to consider whether moral rights, a wholly underutilised regime within copyright law, might offer protection to artists concerned that their work is being imitated.

One thing which has become clear from preparing these presentations is the pace of change in this area. Things which in May were obvious weaknesses have improved greatly in five months. For example, when I presented in May, ChatGPT 3.5 and 4 both failed to produce accurate citations for cases, and just Made Them Up™. But in October, the citations I tested on ChatGPT were accurate. Bard got the citations I tried wrong, but it hadn't been released in Australia back in May, so it's fair to infer that it is just a little behind, and will improve in time.

The second big change is the availability of integrations: ChatGPT Plugins, Bard Extensions, Microsoft Copilot. These affect both the scope of material the genAI has access to and how readily these tools come to hand. For example, once Microsoft Copilot launches in November, ChatGPT is going to be sitting right there, looking at you as you type that letter to a client, offering to help.

This has implications not only for the quality of the material produced by genAI chatbots, but also for the risk analysis. To put it bluntly, lawyers are going to be using genAI to help them in their work. If you aren't, you are missing out. But if you are, then you need to do the risk analysis. That risk analysis involves asking yourself questions like:

  1. Does my client (or employer) know that I am using genAI to assist me in my work?
  2. How reliable is the answer the genAI just gave me?
  3. Could I have asked the question better?
  4. Do I need to ask further questions to clarify that answer?
  5. Is the data I am providing to this system being kept confidential?
  6. Is the chat history the service provider stores, which may or may not include confidential client information, held sufficiently securely?

There is much fear, amongst some in the community and some in the legal profession, about the likely impact of genAI. Writers and actors in Hollywood were sufficiently concerned about a "dehumanisation of the workforce" that they sought standard contract conditions to the effect that a writer, or an actor, "must be a human".

But the potential positives of mastering this new toolkit are huge. Lawyers assisted by generative AI can produce better analyses for less money. Not only might they save their clients money, and become more competitive in the marketplace, but they might also extend the availability of legal services to those who could not otherwise afford them.

Because (some) lawyers are not technology wizards, there is a tendency to think that the only way forward is to hire in tech talent, or to leave this world to others. But the truth is that the skill at the heart of prompt engineering is not new. The ability to ask clear, specific questions, and to assess the credibility, reliability and accuracy of the responses, is, and has long been, the domain of lawyers. I suggest that lawyers are very well qualified to take on the challenge of questioning these systems, in a way that many others are not.
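To make that concrete, here is a minimal sketch of what "asking a better question" looks like in practice. It assumes the OpenAI Python client (openai v1 or later) with an API key in the environment; the clause and both prompts are hypothetical illustrations, not a recommended form of words.

```python
# A minimal sketch of prompt engineering as question-framing.
# Assumes: pip install openai (v1+), OPENAI_API_KEY set in the environment.
# The clause and both prompts below are hypothetical illustrations only.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

# A vague question invites a vague answer.
vague_prompt = "Is this contract clause okay?"

# A specific question frames the role, the task, the output, and the
# standard of candour required - the same discipline as a good brief.
specific_prompt = (
    "You are assisting an Australian commercial lawyer. Review the "
    "limitation-of-liability clause below. Identify each ambiguity, "
    "explain in one sentence why it matters, and suggest a redraft. "
    "If you are uncertain on any point, say so rather than guessing.\n\n"
    "Clause: 'The Supplier's liability is limited to fees paid.'"
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": specific_prompt}],
)
print(response.choices[0].message.content)
```

The point is not the code: it is that the second prompt, like a well-drafted question in cross-examination, constrains the witness and makes the answer testable.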

I suggest that instead of letting the debate about these systems be driven by fear of the unknown, a better approach is to unleash your curiosity. Find out by experimentation how you might improve your work product. These systems stand to eliminate some of the least rewarding, most tedious, and most costly aspects of legal practice, and to allow us, as lawyers, the opportunity to spend our time on higher-level analysis - which is where we truly add value.

As with any discipline, the only way to truly get better is to try, and try again. So start looking for opportunities to save yourself time, in ways which don't create confidentiality or ethical issues, and start cross-examining your AI expert witness until you are satisfied with the results. A brave new world awaits.