A first look at technology
Jason Kottke wrote today about writing on his website for 25 years (plus several years prior to that at earlier sites, so basically back to the dawn of the web).
I’ve been running pile.org for that long, too, though obviously I haven’t been posting anywhere near as much. The earliest forms of the site are lost to the ether, which is probably for the best.1 But that got me thinking back to my first glimpse of the Web, in 1993 or 1994.
I was in one of the computer labs at school, and there was a new application running on some of the graphical terminals: in hindsight, it was NCSA Mosaic, one of the first web browsers. I looked over somebody’s shoulder, and probably poked around at one of the handful of World Wide Web pages. And I kind of shrugged and thought to myself something to the effect of “hm, that’s kind of cool, I guess, but it doesn’t seem much better than gopher”.
On a completely unrelated note, I don’t know how ChatGPT-style chatbots can reliably produce truth. At least today, they produce plausibility, which is a very different thing. And even if you train one on nothing but 100% pure fact, it seems like it can still generate mistakes as a feature of how it works, right? How can it be anything but a bullshit generator?
Jon and I are going to compare notes about this in a year. He’s more optimistic than I am.
Added the next morning, 3/15:
I want to be clear what I am and am not skeptical about here. I don’t think that chatbots, generally, are useless or doomed to failure or anything like that.
I do think that ChatGPT-style text producers are probably a dead-end, in terms of producing truth, which is critical for many of the uses that have been claimed for them (e.g. search engines). I don’t understand how “truth” can be added to this approach, unless it’s something clunky like having an entirely separate piece of code that evaluates the output for truth, rejects it, and tells the model to try again if it’s untrue. That thing would be interesting, and I think it would be a major advance in what we can do, but it would be bolted on to GPT to override fundamental behavior, not integrated into the technology.2
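To make the “clunky” shape of that idea concrete, here’s a toy sketch of a generate-then-verify loop. Everything in it is a hypothetical stand-in — `generate`, `verify`, and the canned answers are placeholders for a real language model and a real fact-checker, neither of which this post claims exists:

```python
# A toy generate-then-verify loop: a text producer emits plausible
# answers, and a completely separate checker accepts or rejects them.
# Both functions are hypothetical stand-ins, not real APIs.

def generate(prompt: str, attempt: int) -> str:
    # Stand-in for a ChatGPT-style generator: returns a plausible
    # sentence, which may or may not be true.
    answers = [
        "The moon is made of green cheese.",
        "The moon is mostly rock.",
    ]
    return answers[min(attempt, len(answers) - 1)]

def verify(claim: str) -> bool:
    # Stand-in for the bolted-on truth checker: an entirely separate
    # component that knows some facts and rejects everything else.
    known_facts = {"The moon is mostly rock."}
    return claim in known_facts

def answer(prompt: str, max_attempts: int = 3) -> str:
    # Reject untrue output and tell the generator to try again.
    for attempt in range(max_attempts):
        candidate = generate(prompt, attempt)
        if verify(candidate):
            return candidate
    return "I don't know."

print(answer("What is the moon made of?"))
# → The moon is mostly rock.
```

The awkwardness shows even in the toy: the verifier needs its own source of truth, which is exactly the hard part the generator was supposed to solve.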
I think that the things ChatGPT can output are fascinating. I think that when companies saw that people were paying attention to it, they raced to catch a bit of the buzz, just like so many companies did with VR and NFTs and cryptocurrency, even if most of them didn’t have a compelling use-case or even vision for how the technology could help people. I think that, maybe, ChatGPT could be part of a bigger stack in the future — it’s clearly quite good at producing sentences as good as (or, let’s be honest, better than) those of most humans. But I don’t think that, by itself, it can produce anything other than bullshit.
I think it’s useful, and can improve at what it does, but I think that what it does is ultimately a dead-end.