On unbalanced AI criticism
I find the state of AI criticism, especially about LLMs, to be woefully unbalanced toward the purely negative. And I say that as someone deeply sympathetic to the critical viewpoint.
But the technology is not going away, and this is why I enjoyed what Simon Willison wrote in his recent Things we learned about LLMs in 2024:
LLMs absolutely warrant criticism. We need to be talking through these problems, finding ways to mitigate them and helping people learn how to use these tools responsibly in ways where the positive applications outweigh the negative … I think telling people that this whole field is environmentally catastrophic plagiarism machines that constantly make things up is doing those people a disservice, no matter how much truth that represents. There is genuine value to be had here, but getting to that value is unintuitive and needs guidance.
L.M. Sacasas echoes this (very briefly) in his recent The Cat in the Tree: Why AI Content Leaves us Cold:
Among those who are not AI boosters and techno-optimists, there can be a tendency to reflexively downplay the sophistication of the technology in question or the impressive pace at which it has progressed. But uncritical cynicism can blind us to reality just as easily as uncritical optimism. There’s no use in it.
How can we use criticism to steer things toward use cases where there is value to be had while harm is eliminated or substantially reduced? Should LLMs be used for search or fact-finding? Absolutely not, nor do I believe this is a solvable problem given the inherent nature of the technology. But, for instance, I’ve personally found LLMs to be quite valuable in the context of programming, where the harm derives primarily from: 1) unmaintainable code, 2) possible license infringement, and 3) energy cost. All potentially solvable.