

This is true. I’d actually really like to observe someone who lacked critical thinking using one. I have had such people quote AI answers to me, and they were less off base than usual, so anecdotally they do seem to help.


No, I’m a firm believer in education and teaching people critical thinking skills. But many, for whatever reason, never get them. If you haven’t been taught to think by your mid-20s, you are just a rube to be taken advantage of by whatever unscrupulous people you come across. To me that’s the worst outcome. Having the second opinion of an AI, I think, can only help in this case.


I mean, the proof is in the pudding. Go ask Google Gemini to do some research on a topic you know something about and check its work. The result will be somewhere below expert level and above the abilities of a large portion of the population to do that research themselves. If you want some proof, ask random kids you didn’t really know that well in high school to do the same research. Many of them will fail. If you are relatively intelligent, I’m sure you know which ones they will be, too.
The reason I know that these randos on the internet don’t know how modern AIs work is that the experts don’t know either (I still work with many of them). They know how they constructed the algorithms that construct the neural networks, but they don’t really understand how the neural networks themselves are composed or work (though progress is being made here).
To take your gasoline analogy, it’s as if someone comes along and says “gasoline can never explode, it just kind of barely burns.” Meanwhile you, who work in the field and know a bit about stoichiometry, know how to mix it with air, compress it, and combust it. You cannot explain to him exactly what is happening at the molecular level, but you know he’s wrong because you’ve worked in the field long enough to use it to produce useful results, and you have worked with the experts who created the stoichiometric equations that prove it.


I’m sorry you deleted your comment. It was one of the only good ones here and I wanted to answer it.


I started on Kermit, BBSes, and Usenet. You’ll never get rid of me.


Yes, this is more concerning. I wonder how it will play out. It’s a bit like using a calculator: it’s going to atrophy certain abilities like programming and research for a lot of people, but I suppose people will find other things to think about. Not many people are concerned about the fact that most people are lousy at working things out with paper and pencil. The people who need to be able to do it still can. The people who are best at it will probably still be called on to check the AIs, do the things they can’t, and guide strategy.


I mean, basically, if you haven’t mastered critical thinking and literacy by your 20s, it’s probably not going to happen. There are many walking examples of this fact.


it is also about the fact that these computer systems are being conditioned to reflect the views of the organizations that created them
And people aren’t? Have you spoken with a Trump supporter recently? They are far more programmed than any modern AI engine. I’d take any modern AI programming them over whoever is currently doing it.
I do agree with you that this will probably be a problem in the future, but for the time being, for those people at least, I do think it’s a net positive.


This is a stupid cop-out. You can read something that an AI engine spits out and judge whether it’s true or not. And even on a technical level, modern AI engines do a lot more than what we traditionally think of as an LLM does. They conduct research, gather data, transform it, process it, and return results based on that. I mean, I told one to take a handwritten table, transform it into an Excel sheet, and give it back to me. It did it more or less perfectly. How can that possibly be construed as just guessing the next word?


Even talking about LLMs is largely useless now, as most of the products we actually use these days have moved on from simply being LLMs. So the uninformed assumptions people are bandying about in this thread aren’t even correct on a technical level.


There’s a lot of people who can’t reason to begin with. For them I think it’s a net win.


At the moment, LLMs will give them considerably better answers than conservative media, so I’d count it a win. Maybe it won’t always be that way, but it is now.


I also worry about this. But people who give up their thinking to AI probably weren’t capable of critical thought to begin with. They are likely already fully programmed by the lies of those with agendas.


This was more than 10 years ago.
I have a ton of concerns about the effects of AI on society.
I am concerned that it will also eat into the critical thinking capabilities of those who are capable of critical thinking.
I do worry that billionaires will control armies of robots and that capital will replace labor making labor worthless and most people serfs with no possibility to make money.
And in the same breath, I can safely say that all of those things are way too complex for me to predict, because I’m not a charlatan.


Unlike some people, I can admit the things I don’t know. And despite working in the field, I can quite confidently say that I don’t understand the internal workings of the human brain, nor of modern transformer/SSM/reasoning engines.


I know humans who are far bigger Nazi machines, who listen to far worse than anything the AIs say and take it as fact.


This will definitely happen. But the current state of things is that AIs are far more honest than, for example, right-wing media. And even with Elon trying super hard to make Grok a bigoted right-wing AI, it usually doesn’t toe the line and tells the truth instead.


Yeah, I’ve heard this before. People who are confident about things they know nothing about say it. I worked for the largest AI researcher in the world and work with this technology every day at my current job. I talk to experts about it all the time. I’ve never heard an expert in the field make any characterisation roughly like that with any confidence. Great example of the Dunning-Kruger effect.
The end result is that AIs produce more accurate answers than the bottom half of humans the vast majority of the time.
Feel free to argue your uninformed theories about how they work all you want, seeing as no one knows it well, nor does anyone really understand all the mechanisms that make our brains think. The mechanism doesn’t really matter if the results are there.
I know a few MAGAts who would greatly benefit from outsourcing their brains to AI. They would make far better decisions with their lives.


To be honest, for a lot of people this is a good decision. AI is already better at cognition and research than a fairly good chunk of the population. If they start thinking at a reasonable level and believing fewer obvious lies with the help of AI, we will all benefit. Obviously Grok isn’t the best choice, but it’s still probably better than what a lot of them would come up with on their own.
These are compelling points and you are swaying my belief with them. I’d like to do a similar study and see the results.
I certainly do not believe anything I said applies to schoolchildren. Honestly, I think they should be kept entirely away from any form of conversational AI until they have a fully developed frontal cortex and have been taught how to conduct research and think critically for themselves.