Not sure if this is the best community to post in; please let me know if there’s a more appropriate one. AFAIK Aii@programming.dev is meant for news and articles only.
deleted by creator
Even LLMs are useful for coding, if you keep them in their autocomplete lane instead of expecting them to think for you.
Just don’t pay a capitalist for it; a tiny, power-efficient model that runs on your own PC is more than enough.
Instead of an LLM, an SLM! (small language model)
Yes, technology can be useful, but that doesn’t make it “intelligent.”
Seriously, why are people still promoting autocomplete as “AI” at this point? It’s laughable.
LLMs are part of AI, so I think you’re maybe confused. You can say anything is just fancy anything, that doesn’t really hold any weight. You are familiar with autocomplete, so you try to contextualize LLMs in your narrow understanding of this tech. That’s fine, but you should actually read up because the whole field is really neat.
Literally, LLMs are extensions of the techniques developed for autocomplete in phones. There’s a direct lineage. Same fundamental mathematics under the hood, but given a humongous scope.
That’s not true.
How is this untrue? Generative pre-training is literally training the model to predict what might come next in a given text.
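The objective being described here (predicting what comes next in a text) can be sketched with a toy example. This is a hypothetical bigram model for illustration only, not how an LLM is actually implemented: a real model uses a neural network over tokens rather than a word-count table, but the training signal, "given what came before, predict the next item," is the same idea.

```python
from collections import Counter, defaultdict

def train_bigram(text: str) -> dict:
    """Count, for each word, which words follow it in the training text."""
    words = text.lower().split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(model: dict, word: str):
    """Return the most frequent continuation seen in training, or None."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

# Tiny toy corpus; phone autocomplete works on the same principle,
# just with far more data and a smarter model.
model = train_bigram("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # "cat" — it follows "the" most often here
```

Scaling this up (a neural network instead of a count table, subword tokens instead of words, a long context instead of one word) is, very roughly, the lineage from autocomplete to generative pre-training.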
That’s not what an LLM is. That’s part of how it works, but it’s not the whole process.
They never claimed that it was the whole thing. Only that it was part of it.
FTFY.
“Intelligence” is not a very narrow term! ImageNet classifiers are definitely intelligent in some way.
You might keep hearing people say this, but that doesn’t make it true (and it isn’t true).