My take on ChatGPT and its ilk

LLMs hooked up to a chat interface, as with ChatGPT, are to properly executed AIs as Windows in 1985 was to properly executed operating-system designs as exemplified by the Xerox Star. In the 1980s it somehow became possible to get away with shipping prototypes while claiming they were finished products, and the latest benchmark of this trend was sprung on an unprepared world by openai.org.

ChatGPT is deeply flawed, and exactly like an unprepared computer salesperson: it doesn’t know when it is spewing BS. But unlike the salespeople, ChatGPT lacks any mechanism to tap into another reference to detect its BS. This is because the training data was not properly vetted, and weights were not assigned and incorporated to reflect how trustworthy the data is known to be. Instead ChatGPT was rushed out as the ultimate shiny new toy to sucker vast numbers of people who are unable to appreciate how risky a tool it is to use. Shame on openai, shame on microsoft, and shame on google for rushing headlong into creating tools that cannot be trusted but are believed to work by uninformed millions.