Yes, your point is taken. Perhaps natural-language AI will always produce bullshit responses until metacognitive functions are developed. In the abstract, metacognitive functions don't seem like they should be difficult to develop, but I'm not an AI programmer, or even a programmer.

One of the things that will always be an issue here is preconceptions in = preconceptions out. If you build and train an AI in accordance with your preconceptions, the result will always accord with your preconceptions — you likely wouldn't publish otherwise. Since our preconceptions of what "intelligence" is are currently constrained by our lack of knowledge of what human (or even animal) intelligence is, the result is something like building a computer model of magnetism out of Descartes's little screws — a good model of Descartes's little screws, but not a good model of magnetism.

(Sorry for the delay; I haven't quite got the hang of notifications on Substack yet.)
