you're posing an unfalsifiable statement as a question
"prove to me that you don't have an invisible purple unicorn friend that's only visible to you"
yes, as I said, it's an EVOLUTION of Markov chains, but the idea is the same. As you pointed out, one major difference is that instead of accounting for only the last 1-5 words, it accounts for a much larger context window. The LSTM is just a parlor trick. Read the paper on the original transformer model: https://browse.arxiv.org/pdf/1706.03762.pdf
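For anyone who doesn't want to wade through the whole paper, the core mechanism it introduces is scaled dot-product attention. Here's a minimal numpy sketch of just that piece; the token count, dimensions, and random inputs are toy values I made up for illustration:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # how strongly each token attends to every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the whole context window
    return weights @ V

# 4 tokens, model dimension 8: every token can look at every other token,
# which is the "larger context window" difference from a fixed-order Markov chain.
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(4, 8))
print(attention(Q, K, V).shape)  # (4, 8)
```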
it's not about feeling intellectually superior; words matter. I'll grant you one thing, it's definitely "artificial", but it's not intelligence!
LLMs are an evolution of Markov chains. We've known how to build something similar to LLMs for decades, getting close to a century; we just lacked the raw horsepower and the literal hundreds of terabytes of data needed to get there. Anyone who knows how Markov chains work can figure out how an LLM works.
I'm not downplaying the engineering needed to get an LLM up and running; yes, it's harder than just implementing the algorithm for a Markov chain. But the real evolution is how much computing power we can shove into a small amount of space now.
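To make the comparison concrete, here's a word-level Markov chain text generator in a few lines of Python; the toy corpus and the order-2 state are arbitrary choices for illustration. An LLM is, very loosely, this idea with the fixed 2-word state replaced by a learned function of the entire context window:

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each tuple of `order` consecutive words to the words seen after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        state = tuple(words[i:i + order])
        chain[state].append(words[i + order])
    return chain

def generate(chain, length=10):
    """Random-walk the chain, emitting one word at a time."""
    state = random.choice(list(chain))
    out = list(state)
    for _ in range(length):
        candidates = chain.get(state)
        if not candidates:          # dead end: no word ever followed this state
            break
        out.append(random.choice(candidates))
        state = tuple(out[-len(state):])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat saw the dog on the mat"
print(generate(build_chain(corpus)))
```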
Calling LLMs AI would be the same as calling a web crawler AI, or a moderation bot, or many similar things.
I recommend reading about the Chinese Room thought experiment
I'm always astonished by the amount of information that people give away freely without securing it properly.
As for yet another billion-dollar company's data being stolen... well... that's just a normal Friday. I'm not one for government intervention, especially considering how our governments act nowadays, but I seriously think our privacy laws should be a lot more useful and a lot more severe.
I don't even know what this company was thinking. What goes through someone's brain that they can't stop for 20 seconds and realize that storing this information unencrypted, behind a simple login screen, is a bad idea? Isn't it blatantly obvious that they should've used end-to-end encryption? Require people to generate a key before they send their sample? Or, if you want to make it moron-proof, was it really impossible to print a unique seed phrase on each box and require users to type it in to see their PRIVATE GENETIC INFORMATION?
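For the record, the seed-phrase idea is not exotic; it's a few lines of code. A minimal sketch using Python's cryptography package, where the example phrase, salt handling, and iteration count are illustrative assumptions, not anyone's actual scheme:

```python
import base64, os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_seed_phrase(phrase: str, salt: bytes) -> bytes:
    # Stretch the human-typable phrase into a 32-byte key (Fernet expects it base64-encoded)
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(phrase.encode()))

salt = os.urandom(16)  # stored next to the ciphertext; doesn't need to be secret
key = key_from_seed_phrase("correct horse battery staple", salt)  # the phrase printed in the box
ciphertext = Fernet(key).encrypt(b"...genetic data...")
assert Fernet(key).decrypt(ciphertext) == b"...genetic data..."
```

Without the phrase from the box, a leaked database is just ciphertext; a simple login screen protects nothing once the database itself walks out the door.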
I'm not anti-capitalism, but the audacity of certain companies, especially in the US, is a sight to behold
I keep telling people that, but for some, what amounts to essentially a simulacrum really can pass as human, and no matter how hard you try to convince them otherwise, they won't listen
adobe creative I get; I know plenty of people who are forced to use their products because of the stubbornness of other people they work with.
sharepoint I kinda get, I assume that your company is a windows-only shop?
but OneDrive? why would anyone use OneDrive?
Anyone who uses Excel in a business capacity can't switch
interesting, what features of Excel are you missing in LibreOffice, OnlyOffice, or CryptPad?
does solidworks not work under wine?
sigh 'member when computers were there to serve you and not the other way around? pepperidge farm 'members
at this point, is there even a reason to use windows? I genuinely want to know from windows users: why are you still on this operating system?
I switched to using only linux (and at times macos) many years ago, back around windows 8.1, and I have never regretted my decision. what keeps you on this hellish platform?
it's not about the frequency, it's about the protocol. both 2.4 GHz and 5 GHz networks are vulnerable if they use WPA2 (or, worse, WEP). WPA3 is not vulnerable to this particular attack
I can disprove what you're saying with four words: "The Chinese Room Experiment".
Imagine a room where someone who doesn't understand Chinese receives questions in Chinese and consults a rule book to send back answers in Chinese. To an outside observer, it looks like the room understands Chinese, but it doesn't; it's just following rules.
Similarly, advanced language models can answer complex questions or write code, but that doesn't mean they truly understand or possess rationality. They're essentially high-level "rule-followers" lacking the conscious awareness that humans have. So even if these models perform tasks well enough to fool humans into believing they're intelligent, that's not a valid indicator of genuine intelligence.
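If the analogy still feels abstract, here's a toy "room" in Python. The two-entry rule book is entirely made up; the point is only that the program produces sensible-looking Chinese replies by pure symbol lookup, with zero understanding of what the symbols mean:

```python
# A toy Chinese Room: input symbols map to output symbols via a rule book.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我没有名字。",    # "What's your name?" -> "I have no name."
}

def room(question: str) -> str:
    # Follow the rules; if no rule matches, return a canned deflection.
    return RULE_BOOK.get(question, "请再说一遍。")  # "Please say that again."

print(room("你好吗？"))  # looks fluent, understands nothing
```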