That’s different from what I said, though, which is that you can’t disagree with me without reading Gramsci. And that is also typically how these authors’ names are invoked in arguments that are not about the authors themselves.
You are correct that there is a misunderstanding here. But it is your misunderstanding of neural networks, not mine of memory.
An LLM is a mathematical model. It does not know any information about Paris, not in the way humans do or even the way Wikipedia does. It knows which words appear in response to questions about Paris. That is not the same thing as knowing anything about Paris. It does not know what Paris is.
I agree with you that the word “Paris” exists in it. But I disagree that that information is relevant in any human sense.
You have apparently been misled into believing a word-generation tool contains any information at all other than word weights. Every word it contains is exactly as meaningless to it as every other word.
Brains do not store data in this way. Firstly, neural networks are mathematical approximations of neurons. But they are not neurons and do not have the same properties as neurons, even in aggregate. Secondly, brains contain thoughts, memories, and consciousness. Even if those are representable in a vector space similar to an LLM’s (a debatable conjecture), the contents of that vector space are as different as newts are from the color purple.
I encourage you to do some more research on this before continuing to discuss it. Ask ChatGPT itself if its neural networks are like human brains; it will tell you categorically no. Just remember it also doesn’t know what it’s talking about. It is reporting word weights from its corpus and is no substitute for actual thought and research.
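To make the “nothing but word weights” claim concrete, here is a minimal toy sketch (entirely hypothetical code, not any real model’s architecture): the whole “model” is one matrix of floats, and producing next-word scores is just arithmetic on that matrix.

```python
import numpy as np

# A toy "language model": its entire contents are this matrix of floats.
rng = np.random.default_rng(0)
vocab = ["paris", "rome", "newt", "purple"]
W = rng.normal(size=(4, 4))  # made-up weights, standing in for trained ones

def next_word_scores(word: str) -> np.ndarray:
    # One-hot encode the input word, multiply by the weights: that's all.
    one_hot = np.eye(len(vocab))[vocab.index(word)]
    return one_hot @ W

print(W)                          # just numbers; no facts, no memories
print(next_word_scores("paris"))  # scores for each possible next word
```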
Have you even read Gramsci? You really can’t disagree with anything I say until you’ve read Gramsci. Sorry, I don’t make the rules!
This is why my instance is defederated with them though. It’s just bad faith nonsense all the way down.
Yeah, it’s just bad faith. They want access to every space, so they accuse anyone who shuts their doors on them of being an echo chamber.
Lol right? And if you even try to engage it’s constant sealioning, memeing, and dunking.
Most defederation isn’t because people are disagreeing though. It’s because the people they’re defederating from are assholes.
Can we please stop with the drama
Makes another post discussing the drama
Everyone here is busy describing the difference between memories and databases to me as if I don’t know what it is.
Our memories are not a database. But they are like a database in that both contain information. Our consciousness is informed by and can consult our memories.
LLMs are not like memories, or a database. They don’t contain information. It’s literally a mathematical formula; if you put words in one end, words come out the other. The only difference between a statement like “always return the word Paris in response to any query” and what LLMs do is complexity, not kind. Whereas I think we can agree humans are something else entirely, right?
The fact that they use neural networks does not make them similar to human cognition, consciousness, or memory. (Separately, neural networks, while inspired by biological neural networks, are categorically different from them, and there are no “emergent properties” in such a network that make it anything other than a sophisticated way of doing math.)
So… yeah, LLMs are nothing like us, unless you believe humans are deterministic machines with no inner thought processes and no consciousness.
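As a hedged illustration of that “complexity, not kind” point (toy code with made-up weights, not a description of any real system): both functions below map input text to output text; one is trivial and one is probabilistic, but neither contains anything you could call knowledge.

```python
import random

def constant_model(prompt: str) -> str:
    # Always answers "Paris", regardless of the input.
    return "Paris"

# Hypothetical next-word weights, standing in for a trained corpus.
WEIGHTS = {
    "tower": {"Paris": 0.9, "Rome": 0.1},
    "capital": {"Paris": 0.8, "London": 0.2},
}

def weighted_model(prompt: str) -> str:
    # Look at the last word of the prompt and sample a likely continuation.
    last = prompt.rstrip("?").split()[-1].lower()
    options = WEIGHTS.get(last, {"Paris": 1.0})
    words, probs = zip(*options.items())
    return random.choices(words, weights=probs)[0]

print(constant_model("Where is the Eiffel tower?"))  # Paris
print(weighted_model("Where is the Eiffel tower?"))  # usually Paris
```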
Based dad!
Edit: daughter bad!!
Yeah his female characters were never great. It’s disappointing to hear it just got worse.
We know things more the way a database knows things than the way LLMs do, and LLMs do not “know” anything in any sense. Databases contain data; our heads contain memories. We can consult them and access them. LLMs do not do that. They have no memories and no thoughts.
They are not knowledge-based; they contain only words. Given a word and its context, they create textual responses. But an LLM does not “know” what it is talking about. It is a mathematical model that responds with likely responses sourced from the corpus it was trained on. It generates phrases from source material and randomness, nothing more.
If a fact is repeated in its training corpus multiple times, it is very likely to repeat that fact. (For example, that the Eiffel Tower is in Paris.) But if its corpus had different data, it would respond differently. (That, say, the Eiffel Tower is in Rome.) It does not “know” where the Eiffel Tower is. It only knows that, when you ask where the Eiffel Tower is, “Rome” is a very likely response to that sequence of words. It has no thoughts or memories of Paris and has no idea what Rome is, any more than it knows what a duck is. But given certain inputs, it will return the word “Paris.”
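To see how directly the output tracks the training text, here is a toy next-word counter over two made-up corpora (real models learn smoothed weights rather than raw counts, but the corpus-dependence is the same):

```python
from collections import Counter

# Two hypothetical training corpora. The "answer" depends entirely on
# which text the model saw, not on where the Eiffel Tower actually is.
corpus_a = "the eiffel tower is in paris . the eiffel tower is in paris ."
corpus_b = "the eiffel tower is in rome ."

def next_word_counts(corpus: str, word: str) -> Counter:
    tokens = corpus.split()
    return Counter(b for a, b in zip(tokens, tokens[1:]) if a == word)

print(next_word_counts(corpus_a, "in"))  # Counter({'paris': 2})
print(next_word_counts(corpus_b, "in"))  # Counter({'rome': 1})
```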
You can’t erase facts once the model has been created, since the model is basically a black box. Weights in a neural network do not correspond to individual words, and editing the network directly is infeasible. But you can remove data from its training set and retrain it.
Human memories are totally different, and are obviously not really editable by the humans in whose brains they reside.
Theirs are composed of word weights. Ours are composed of thoughts. It’s entirely dissimilar.
The beginning is a good place to start! The main books are all ordered chronologically. So pick up Storm Front and get started.
Is it actually truly the year of the Linux desktop?
I guess what’s the alternative?
I don’t think Ukraine can take or hold Moscow or any other targets inside Russia itself, so Russia capitulating is very unlikely. It seems like Russia also won’t be able to conquer Ukraine. And the war won’t last forever. So… some kind of negotiated settlement is the only remaining solution.
No, they embed word weights in metric spaces. Human thought is more like semantic concepts in a metric space (though I don’t think even that is settled; human thought is not well understood). Even if the spaces are similar, what’s in them definitely is not.
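A minimal sketch of what “embedding words in a metric space” means, with made-up three-dimensional vectors (real models learn vectors with hundreds or thousands of dimensions):

```python
import math

# Hypothetical embeddings, invented for illustration.
embeddings = {
    "paris": [0.9, 0.1, 0.3],
    "rome":  [0.8, 0.2, 0.4],
    "newt":  [0.1, 0.9, 0.2],
}

def cosine(u, v):
    # Cosine similarity: the standard nearness measure in embedding spaces.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Nearness encodes co-occurrence statistics, not understanding:
# "paris" sits near "rome" because the words show up in similar contexts.
print(cosine(embeddings["paris"], embeddings["rome"]))  # high (~0.98)
print(cosine(embeddings["paris"], embeddings["newt"]))  # low (~0.27)
```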
That might be good too though. Like if a bunch of unreconstructed Trumpers vote Trump in the next election, and he’s either jailed or not the candidate… that paves the way for a pretty easy Democrat victory.
I’ve seen straight performances more sexualized than anything I’ve seen at any drag performance that had children in attendance. I mean, children can go into any Hooters, whose entire point is sexualizing women.
The problem has always been LGBTQ+ people, not sexual conduct in front of children.
I’m so glad he’s running again. Please, please split that R vote.
You asked a question and answered it next sentence!