Star Trek: Voyager S2E23 “The Thaw”

  • samus12345@sh.itjust.works · 1 upvote · 3 days ago

    Strong disagree. There’s no reason why sufficiently advanced AI couldn’t replace brain function. Note this is ACTUAL AI, not LLMs, which are not intelligence in any way, shape, or form.

  • sik0fewl@piefed.ca · 14 upvotes · 5 days ago

    I appreciate the original memes, but I could really do without the bouncing text 🙂

    • SatyrSack@quokk.au (OP) · 4 upvotes · 4 days ago

      Hmm, I have actually had people compliment my bouncing text. There are two different types of bounces here. Which do you dislike?

      1. When each line of text enters the frame, it sort of bounces into place by overshooting its final position, then overcorrecting, until it finally settles.
      2. As each line of text overshoots its final position, it bumps into the line above it, causing the line above to bounce a little bit.

      Or are both unwanted?
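
      For reference, the first kind of bounce is essentially a damped overshoot: the line rings past its target, corrects back, and settles. Here is a minimal sketch of that kind of easing, assuming a simple damped-spring curve; the function names and constants are just illustrative, not taken from whatever tool actually made the GIF.

      ```python
      import math

      def bounce_ease(t, damping=6.0, freq=1.7):
          """Map progress t in [0, 1] to a factor that starts at 0, overshoots 1,
          overcorrects, and settles at 1 (the line's final position)."""
          if t >= 1.0:
              return 1.0
          # Damped oscillation around the target: exp(-damping*t) shrinks the
          # wobble while cos(freq*pi*t) provides the overshoot/overcorrection.
          return 1.0 - math.exp(-damping * t) * math.cos(freq * math.pi * t)

      def line_y(t, y_start, y_final):
          """Vertical position of a line sliding from y_start into y_final."""
          return y_start + (y_final - y_start) * bounce_ease(t)

      # Example: a line entering from below the frame (y=300) toward y=100.
      for step in range(11):
          t = step / 10
          print(f"t={t:.1f}  y={line_y(t, 300, 100):6.1f}")
      ```

      The second kind of bounce would just be applying a small, briefly decaying offset like this to the line above whenever the incoming line's overshoot peaks.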

      • sik0fewl@piefed.ca · 3 upvotes · edited · 4 days ago

        You don’t need to change it on my account! But in my opinion, #2 is the bigger annoyance. #1 might be ok without #2.

        • SatyrSack@quokk.au (OP) · 7 upvotes · 4 days ago

          You don’t need to change it on my account!

          It wouldn’t just be your account, seeing that your comment has a lot of upvotes. If enough people definitely dislike it, I’ll avoid doing it with future posts. But I don’t plan on fixing this one unless people are really that disgusted by it lol

          #1 might be ok without #1.

          I assume you mean “#1 might be ok without #2”. While I pretty much always do both, here is a previous GIF that I made without doing that for some reason. Better?

          • sik0fewl@piefed.ca · 2 upvotes · 4 days ago

            I assume you mean “#1 might be ok without #2”.

            Oops, yes. Fixed!

            I’m still not a fan, personally. I think just sliding in or maybe reducing the bounce would be better. That being said, I think it’s much better without all of the text bouncing.

      • Steve@communick.news · 2 upvotes · edited · 5 days ago

        Complexity isn’t relevant to my analogy.
        The lessons learned from the failures and eventual success of machine sewing are.

        Unless you’re being sarcastic.
        Sewing really is surprisingly complex.

        • Sundray@lemmus.org · 1 upvote · 4 days ago

          I was being a little sarcastic 😆 . But I admit I don’t understand the analogy; what does human thought have to do with human sewing?

          • Steve@communick.news · 3 upvotes · edited · 4 days ago

            Sewing machines don’t make stitches the way people do. People tried for decades and failed to build machines that sewed like humans. They work by making their stitches in ways humans never would, or really could. They had to invent a whole new way to get the job done, not remotely the way a person would do it.

            AI will very likely be the same. Expecting machine minds to do things the same way a human mind would, to mimic human thought, strikes me as some kind of human-centric bias.

            • Sundray@lemmus.org · 2 upvotes · 4 days ago

              Ah, in that case we agree! I also believe that if a genuine AI ever comes about it will be quite alien.

              • Digit@lemmy.wtf · 1 upvote · edited · 4 days ago

                That’s like saying there’s no way a machine can replicate hand sewing.

                Gets me thinking there’s no way I could do sewing consistently. My ADHD novelty-seeking creative side (overpowering my autism side) would be switching stitching types constantly before I gave up in the tedium of it. Could a machine do that?

                • Buddahriffic@lemmy.world · 1 upvote · 2 days ago

                  There are sewing machines that offer different stitching modes. In fact, different use cases have different optimal stitches. Like a decorative stitch can be whatever, and a hem doesn’t need to handle the same kind of forces as a join, which itself might require different strengths (like a dress shirt sleeve vs. a jeans pocket).

  • marcos@lemmy.world · 3 upvotes, 1 downvote · 5 days ago

    1 - Well, it can simulate them, and we will probably never use simulation when we want intelligence. (We will for understanding the brain, though)

    2 - It doesn’t matter at all, intelligence doesn’t need to think like us.

    3 - We are nowhere close to any general one, and the more investors bet all their money and markets sell all their hardware to the same few companies that will burn out at their local maximum, the further away we will be.

    • Sundray@lemmus.org · 2 upvotes · 5 days ago

      2 - It doesn’t matter at all, intelligence doesn’t need to think like us.

      Agreed, but look at the history of how humans have thought about the presumed intelligence (or lack of it) in animals; we seem to be bad at recognizing intelligence that doesn’t mirror our own.

        • Sundray@lemmus.org · 2 upvotes · 4 days ago

          Those are two separate questions, I think.

          1. “You think we won’t be able to use AI” – If there is some day actual artificial intelligence, I have no idea if humans can “use” it.
          2. “we can’t recognize intelligence?” – I think you can make the case that historically we haven’t been great about recognizing non-human intelligence.

          What I am saying is that if we ever invent an actual AGI, unless it thinks and, more importantly, speaks in a way we recognize, we won’t even realize what we invented.

          • marcos@lemmy.world · 0 upvotes, 1 downvote · 4 days ago

            Recognizing the intelligence is something you pushed into the discussion; I just want to know why you think it’s important.

            • Sundray@lemmus.org · 2 upvotes · 4 days ago

              Hm? I was agreeing with your 2nd point. I was merely adding to that by pointing out that we’ve only recently begun to recognize non-human intelligence in species like crows (tool use), cetaceans (language), and higher primates (tool use, language, and social organization), which leaves me concerned that, if an AI were to “emerge” that was very different from human intelligence, we’d likely fail to notice it, potentially cutting off an otherwise promising development path.

              • marcos@lemmy.world · 1 upvote · 4 days ago

                Oh ok, you have a completely new concern.

                I don’t think we will fail to spot intelligence in AIs, since they have advocates, something animals never had. But we have a problem in that “intelligence” seems to be a multidimensional continuum, so until we solve many of its different forms, there will be things that fit some form of it but don’t really deserve the unqualified name.