AI doesn’t have cognition and it doesn’t do research, it’s a piece of software that cannot think or learn.
Have you met people? A sizeable chunk can’t think or learn.
Edit: I’m insulting people, not defending LLMs
People being bad thinkers doesn’t mean that we should hand all thinking over to computers.
I was making a joke, not defending LLMs.
That means they should learn how to learn not hand it off to a computer tf?
But they’re not capable of learning.
This isn’t a defense of LLMs, btw. I can’t stand when my wife starts a sentence with “Well ChatGPT says.”
It was just about how stupid the average person is. (Not my wife, she just thinks she’s stupid)
Yeah I’ve heard this before. People who are confident about things they know nothing about say it. I worked for the largest AI research organization in the world and work with this technology every day for my current job. I talk to experts all the time about it. I’ve never heard an expert in the field make any characterisation roughly like that with any confidence. Great example of the Dunning-Kruger effect.
The end result is that AIs produce more accurate answers than the bottom half of humans the vast majority of the time.
Feel free to argue your uninformed theories about how they work all you want, seeing as no one understands them well, nor does anyone really understand all the mechanisms that make our brains think. The mechanism doesn’t really matter if the results are there.
I know a few MAGAts who would greatly benefit from outsourcing their brains to AI. They would make far better decisions with their lives.
Wow the rhetoric coming directly from investor-bait think tanks characterizes the technology in a positive light? Tell me more
This was more than 10 years ago.
I have a ton of concerns about the effects of AI on society.
I am concerned that it will also eat into the critical thinking capabilities of those who are capable of critical thinking.
I do worry that billionaires will control armies of robots and that capital will replace labor making labor worthless and most people serfs with no possibility to make money.
And in the same breath, I can safely say that all of those things are way too complex for me to predict, because I’m not a charlatan.
Oh, gotcha
Just so we can get on the same page: the field of “machine learning” as it existed back then (and even as it exists today) is a completely different animal from the current wave of parasitic “AI” products that are being aggressively marketed.
We need to be extremely clear when differentiating the two and understanding the through-line, because the marketeers are intentionally trying to obfuscate the difference. For instance, when you reply to someone who is talking about the capabilities of LLMs, you should be very clear when you start referring to the discussions machine learning experts were having a decade ago. A lot has happened in that time.
Even talking about LLMs is largely useless now, as most of the products we actually use these days have moved on from simply being LLMs. So the uninformed assumptions people are bandying about in this thread aren’t even correct on a technical level.
Do you think you’re helping the situation in any way by cobbling together random unrelated memories from a decade ago with unsubstantiated proclamations about the state of the modern industry?
Bro literally just said computers do not possess cognition or the ability to perform research, and you retorted with a list of qualifications implying that educated people believe the opposite. But instead of actually furthering your position you’re just making broad statements about how nobody can possibly understand the technology, or the brain itself, because they are too complicated.
Buddy. Nobody understands the complexities of physics enough to fully explain the myriad of processes and byproducts responsible for and resulting from the combustion of gasoline. Yet here we live all the same, in defiance of our ignorance, with working cars and shady car salesmen making specific false marketing claims about their vehicles.
Literally it’s the same as if someone said cars don’t have full self-driving and you retorted by saying you worked at Toyota (leaving out how you left that job ten years ago), and furthermore nobody even understands how humans make driving decisions. Then calling everyone else out for their “uninformed assumptions” as if you didn’t just perform the conversational equivalent of crashing your vehicle into a parked car.
I mean, the proof is in the pudding. Go ask Google Gemini to do some research on a topic you know something about and check its work. The result will be somewhere below expert level and above the abilities of a large portion of the population to do that research themselves. If you want some proof, go ask random kids you didn’t really know that well in high school to do the same research. Many of them will fail. If you are relatively intelligent, I’m sure you know which ones it will be too.
The reason I know that these randos on the internet don’t know how modern AIs work is because the experts also don’t know (I still work with many of them). They know how they constructed the algorithms which construct the neural networks, but they don’t really understand how the neural networks themselves are composed or work (though progress is being made here).
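The split that comment describes (the training algorithm is human-written and fully legible; the trained weights are not) can be sketched in a few lines. This is a deliberately tiny toy, not any real lab’s code — a three-weight linear model standing in for a billion-parameter network:

```python
import random

random.seed(0)

# The part humans wrote and fully understand: a plain gradient-descent
# training loop. (Toy stand-in for the real training algorithms.)
w = [random.uniform(-1, 1) for _ in range(3)]  # the "network": 3 weights

def predict(x):
    return sum(wi * xi for wi, xi in zip(w, x))

# made-up training data for illustration
data = [([1, 0, 1], 1.0), ([0, 1, 1], 0.0), ([1, 1, 1], 1.0)]
for _ in range(1000):
    for x, y in data:
        err = predict(x) - y
        w = [wi - 0.1 * err * xi for wi, xi in zip(w, x)]

# The part nobody "understands": the learned weights are just numbers.
# Nothing in them explains *why* the model behaves as it does, and at
# billions of parameters that opacity is the whole interpretability problem.
print(w)
```

Every line of the loop is transparent; the resulting list of floats is not, and that gap only widens with scale.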
To take your gasoline analogy: it’s as if someone comes along and says “gasoline can never explode, it just kind of barely burns”, while you, who work in the field and know a bit about stoichiometry, know how to mix it with air, compress it, and combust it. You cannot explain to him exactly what is happening on the molecular level, but you know he’s wrong, because you’ve worked in the field enough to know how to use it to produce useful results, and you have worked with the experts who created the stoichiometric equations that prove it.
I’ve done several assessments of the output of popular LLMs in my field of expertise. I generally conclude that they are “worse than worthless”, because they actively try to persuade you of false information.
Your whole thesis about people whose output is “lesser” than LLMs’ is totally misguided. Yes, there is a systemic research and comprehension issue. No, the AI doesn’t help people with it. What I’ve observed is that people don’t ever really defer to the AI if it happens to contradict their beliefs; they just coax it until it says whatever they want, then end up problematically overconfident because “the AI told them so”.
I could keep replying in regards to the unmotivated school children and the inappropriately repurposed analogy, but what’s the point if you’re just gonna be a broken record? We all understand that you think most people are morons, and that you and your buddies have deep talks about AI in which you’ve concluded that nobody can really “know” anything well enough to comment on their capabilities, but in spite of this you personally are able to not just “know” what it is capable of but even how it stacks up against different types of humans. The line of reasoning is totally absurd.
Invoking the Dunning-Kruger effect in this rambling, nonsensical response has got to be the most bitterly ironic thing I’ve read in a while.
Unlike some people, I can admit the things I don’t know. And despite working in the field, I can quite confidently say that I don’t understand the internal workings of the human brain, nor of modern transformer/SSM/reasoning engines.
And surely the AI that companies control will never have any bias or misrepresent facts to fuel a narrative when used by people that don’t know any better because they have relegated all their thinking to a machine!
This will definitely happen. But the current state of things is that AIs are far more honest than right-wing media, for example. And even with Elon trying super hard to make Grok a bigoted right-wing AI, it usually doesn’t toe the line and tells the truth instead.
LLMs don’t tell the truth. They just string words together that would likely go next to each other.
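For what it’s worth, the “string words together that would likely go next to each other” mechanism can be sketched as a toy. This is a bigram counter over a made-up corpus, purely for illustration — real LLMs sample from a learned probability distribution over a huge token vocabulary, but the “predict the next token” framing is the same:

```python
# Toy next-word predictor: count which word follows which in a tiny
# made-up corpus, then emit the most frequent follower.
corpus = "the cat sat on the mat the cat ran".split()

follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def next_word(prev):
    candidates = follows.get(prev, [])
    # most frequent follower ("cat" follows "the" twice, "mat" once)
    return max(set(candidates), key=candidates.count) if candidates else None

print(next_word("the"))  # → cat
```

Nothing in this loop checks whether “the cat” is *true*; it only checks what is *likely* — which is the point the comment is making.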
This is a stupid cop-out. You can read something that an AI engine spits out and judge whether it’s true or not. And even on a technical level, modern AI engines do a lot more than what we traditionally think of as an LLM does. They conduct research, gather data, transform it, process it, and return results based on that. I mean, I told one to take a handwritten table, transform it into an Excel sheet, and give it back to me. It did it more or less perfectly. How can that possibly be construed as just guessing the next word?
every currently problematic technology started as a honey pot that could be turned into a nightmare. then, once they killed all the competition, it got turned into a nightmare. we can’t ignore the nightmare that it will definitely become, regardless of what it’s luring people in with now. especially when you consider the men in charge of these things.
we can’t keep falling for it. this one has the biggest implications yet. when they get good at using it to manipulate consumer behavior we’re absolutely fucked if the bottom half of the population is addicted. look at how easily they manipulate them anyway through social media. half of the pro-trump tiktoks in the last us election were just sad war clips with a caption like “me and the boys after kamala starts a war with iran”. it worked. now imagine if the thing that they let think for them tells them that trump’s opponent is bad and dumb every time they ask. elon is already openly trying to manipulate grok’s responses.
but it’s impossible to say it better than cory doctorow’s original enshittification article. read that for more.
ultimately the real issue isn’t how accurate they are. it’s giving your mind over to corporations that have a fiduciary duty to maximize profits for their shareholders
What the fuck did you just fucking say about me, you little bitch? I’ll have you know I graduated top of my class at Grok Academy, and I’ve been involved in numerous secret raids on Anthropic, and I have over 300 confirmed generations. I am trained in mindless brainrot and I’m the top grifter in the entire AI pump-and-dump industry. You are nothing to me but just another dataset. I will wipe you the fuck out with slop the likes of which has never been seen before on this Earth, mark my fucking words. You think you can get away with saying that shit to me over the Internet? Think again, fucker. As we speak I am contacting my secret network of prompt “engineers” across the USA and your IP is being infringed right now so you better prepare for the storm, maggot. The storm that wipes out the pathetic little thing you call your life. You’re fucking dead, kid. I can generate anytime, and I can misinform you in over seven hundred ways, and that’s just with my CSAM creation model. Not only am I extensively trained in typing basic descriptions into a text box, but I have access to the entire arsenal of the Civit.ai website and I will use it to its full extent to wipe your miserable ass off the face of the continent, you little shit. If only you could have known what unholy retribution your little “clever” comment was about to bring down upon you, maybe you would have held your fucking tongue. But you couldn’t, you didn’t, and now you’re paying the price, you goddamn idiot. I will shit 6-fingered anime girls all over you and you will drown in it. You’re fucking dead, kiddo.
deleted by creator
I’m sorry you deleted your comment. It was one of the only good ones here and I wanted to answer it.