We’ve been talking about intellectual property in one way or another for at least the last five hundred years, and each new wave of technology or creativity leads to new kinds of arguments. We invented performance rights for composers, we decided that photography – ‘mechanical reproduction’ – could be protected as art, and in the 20th century we had to decide what to think about everything from recorded music to VHS to sampling. Generative AI poses some of those questions in new ways (or even in old ways), but it also poses some new kinds of puzzles – always the best kind.

At the simplest level, we will very soon have smartphone apps that let you say “play me this song, but in Taylor Swift’s voice”. That’s a new possibility, but we understand the intellectual property ideas pretty well – there’ll be a lot of shouting over who gets paid what, but we know what we think the moral rights are. Record companies are already having conversations with Google about this.

But what happens if I say “make me a song in the style of Taylor Swift” or, even more puzzling, “make me a song in the style of the top pop hits of the last decade”? A person can’t mimic another voice perfectly (impressionists don’t have to pay licence fees), but they can listen to a thousand hours of music and make something in that style – a ‘pastiche’, as we sometimes call it. If a person did that, they wouldn’t have to pay a fee to all those artists – so if we use a computer for that, do we need to pay them? I don’t think we know what we think about that. We might know what the law says today, but we might want to change that.
Full commentary: A look at the ethical and legal issues around generative AI, which makes things that were previously only possible on a small scale practical at a massive scale.