• emptiestplace@lemmy.ml · 11 months ago

    I appreciate what you are saying, and I don’t really disagree, but… as you have identified, these are technical challenges: how many extra checks? As many as are needed. Consider the absolutely absurd amount of computation involved in generating a single token - what’s a little more?

    > Oh no. You’ve just created a mind.

    My point was that this might be closer than LLM naysayers think: as the critical limitations of current models are resolved and we discover sustainable strategies for context persistence and feedback, the emergence of new capabilities is inevitable. Are there limitations inherent to our current approach? Almost certainly, but we already know that the possible risks involved in overcoming them won’t slow us down.

    • Excrubulent@slrpnk.net · 11 months ago

      I’m not really talking about technical limitations, so I don’t know that there is a disagreement here at all. The solution could be 5 years away, or 50, who knows.

      I’m more pointing out that regardless of the exact techniques used, context is key to creating things that make sense, rather than things that are just shallow mimicry. I think that barrier cannot be breached without creating an actual intelligence, because we are fundamentally talking about meaning.

      And I agree these ethical considerations won’t slow people down. That’s what I’m concerned about. People will be so focussed on making better tools that they will be very keen to overlook the fact that they’re creating personalities purely to enslave them.

        • emptiestplace@lemmy.ml · 11 months ago

        > I’m not really talking about technical limitations

        Even in the case of ostensibly fundamental obstacles, the moment we can effortlessly brute-force our way around them, they become effectively irrelevant. In other words, if consciousness does emerge from complexity, then it is perfectly sensible to view the shortcomings you mention as technical in nature.

          • Excrubulent@slrpnk.net · 11 months ago (edited)

          I am only talking about those limitations inasmuch as they interact with philosophy and ethics.

          I don’t know what your point is. ML models can become conscious given enough complexity? Sure. That’s the premise of what I’m saying. I’m talking about what that means.

            • emptiestplace@lemmy.ml · 11 months ago

            Solid edit. If I found myself confused about the context of the discussion, I wouldn’t try to resolve it with “I don’t know what your point is”.