I don’t think this is a clever take. Why shouldn’t we be able to tackle it? Sure, it can’t go 100% away. But we might be able to improve substantially from where we are now. Maybe to the point where it’s 95% correct, and that’s good enough for us? I don’t see a reason why it has to stay the way it is now, where ChatGPT “invents” facts or misses the point on the majority of tasks I hand to it.
And I think a few claims in the article are plain wrong. We know, for example, that LLMs can memorize things. And they can generalize and form some “concepts” behind the words. That’s why we do machine learning the way we do now, and don’t use Markov chains for chatbots any more.
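To make the contrast concrete, here’s a toy sketch of the kind of Markov-chain chatbot we moved away from (the corpus and function names are just illustrative). A bigram chain can only ever emit word transitions it has literally seen in its training text, so there’s no generalization at all:

```python
import random
from collections import defaultdict

def train(text):
    # Record, for each word, the list of words observed to follow it.
    words = text.split()
    table = defaultdict(list)
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def generate(table, start, length=10):
    # Walk the chain: each next word must be a transition seen in training.
    out = [start]
    for _ in range(length - 1):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat the dog sat on the rug"
table = train(corpus)
print(generate(table, "the"))
```

Every output is stitched together from verbatim fragments of the corpus, which is exactly the limitation that word embeddings and transformers were meant to get past.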