• fubbernuckin@lemmy.dbzer0.com
    9 days ago

    I’m going to assume you’re saying this in good faith. The problem with handing thinking over to a computer is not just that computers are worse thinkers; it’s also that these systems are being conditioned to reflect the views of the organizations that created them. That creates a concentration-of-power problem: it’s another avenue for influencing how people think, and a pretty strong one if people are literally handing over their thinking. It’s likely to get worse over time, since selling this influence the way much of the internet sells ad space will probably be quite profitable. We’re probably not seeing much of that yet because AI companies are still trying to get their LLMs integrated into society so that people become dependent on them.

    • realitista@lemmus.org
      9 days ago

      > it is also about the fact that these computer systems are being conditioned to reflect the views of the organizations that created them

      And people aren’t? Have you spoken with a Trump supporter recently? They are far more programmed than any modern AI engine is. I’d take any modern AI programming them over whoever’s currently doing it.

      I do agree with you that this will probably be a problem in the future, but for the time being, for those people at least, I do think it’s a net positive.

    • Ravel@sh.itjust.works
      9 days ago

      Targeted LLM lobotomization turns out to be very difficult. You can still get Grok to shit on Musk.