• borari@lemmy.dbzer0.com · 4 days ago

    It’s true that one is based on continuous floats and the other is dynamic peaks

    Can you please explain what you’re trying to say here?

    • CanadaPlus@lemmy.sdf.org · 2 days ago

      Both have neurons with synapses linking them to other neurons. In the artificial case, a synapse’s activation can be any floating-point number, and outgoing activations are calculated from incoming ones all at once; there’s no notion of time, so it’s not dynamic. Biological neurons are binary: they either fire or they don’t, and during a firing cycle they ramp up to a peak potential and then drop back down in a predictable fashion. But the process is dynamic; a neuron can peak at any time, and downstream neurons can begin to fire “early”.
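
      To make the contrast concrete, here’s a minimal sketch in Python (the leaky integrate-and-fire model is a standard textbook simplification of a biological neuron, and all the parameter values are just illustrative):

      ```python
      import numpy as np

      # Artificial neuron: all inputs arrive at once, the output is a float.
      def artificial_neuron(inputs, weights, bias):
          return max(0.0, np.dot(weights, inputs) + bias)  # ReLU activation

      # Leaky integrate-and-fire neuron, a common simplification of the
      # biological case: potential builds up over time and the neuron emits
      # a binary spike whenever it crosses a threshold.
      def lif_neuron(input_current, leak=0.95, threshold=1.0):
          potential, spike_times = 0.0, []
          for t, current in enumerate(input_current):
              potential = leak * potential + current  # ramp toward a peak
              if potential >= threshold:
                  spike_times.append(t)  # binary event: it fired
                  potential = 0.0        # drop back down after the spike
          return spike_times  # the output is *when* it fired, not a float
      ```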

      They do seem to be equivalent in some way, although AFAIK it’s still unclear exactly how, and the exact activation function of each brain neuron is a bit mysterious.

      • borari@lemmy.dbzer0.com · 2 days ago

        Ok, thanks for that clarification. I guess I’m a bit confused as to why a comparison is being drawn between neurons in a neural network and neurons in a biological brain though.

        In a neural network, a neuron receives inputs, applies a mathematical function to them, and returns an output, right?
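
        Something like this, if I understand it correctly (a minimal sketch; ReLU is just one common choice of activation function):

        ```python
        import numpy as np

        # One artificial "neuron": a weighted sum pushed through an activation.
        def neuron(inputs, weights, bias):
            z = np.dot(weights, inputs) + bias  # weighted sum of the inputs
            return max(0.0, z)                  # ReLU: the activation function
        ```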

        Like you said, we have no understanding of what exactly a neuron in the brain is actually doing when it fires, and that’s before even considering the chemical component of the brain.

        I understand why the terminology was reused when experts were designing an architecture meant to mimic the structure of the brain. Unfortunately, I feel like that reuse of terminology is making it harder for laypeople to understand what a neural network is and what it is not, now that those networks are part of the zeitgeist thanks to the explosion of LLMs and stuff.

        • CanadaPlus@lemmy.sdf.org · 24 hours ago (edited)

          Agreed. They started out trying to make artificial nerves, but then made something totally different. The fact that we see the same biases and failure mechanisms emerging in them, now that we’re measuring them at scale, is actually a huge surprise. It probably says something deep and fundamental about the geometry of randomly chosen high-dimensional function spaces, regardless of how they’re implemented.

          Like you said, we have no understanding of what exactly a neuron in the brain is actually doing when it fires, and that’s before even considering the chemical component of the brain.

          I wouldn’t say none. What the axons, dendrites and synapses are doing is very well understood down to the molecular level - so that’s the input and output part. I’m aware that knowledge of the biological equivalents of the other pieces (the ReLU-style activation function and backpropagation) is incomplete. I do assume some things are clear even there, although you’d have to ask a neurologist for details.
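
          For reference, here’s what those two artificial-side pieces look like in isolation (a minimal sketch for a single neuron under a squared-error loss; the function names and numbers are just illustrative):

          ```python
          import numpy as np

          def relu(z):
              return np.maximum(0.0, z)  # activation: negatives clipped to zero

          def relu_grad(z):
              return float(z > 0)        # its derivative, used by backprop

          # One backpropagation step for a single neuron, squared-error loss.
          def backprop_step(w, b, x, target, lr=0.1):
              z = np.dot(w, x) + b                   # weighted sum (forward pass)
              y = relu(z)
              dz = 2 * (y - target) * relu_grad(z)   # chain rule: loss, then ReLU
              return w - lr * dz * x, b - lr * dz    # gradient-descent update
          ```

          Calling backprop_step(np.array([0.5, 0.3]), 0.1, np.array([1.0, 2.0]), target=1.0) nudges the weights and bias toward reproducing the target - that update rule is the artificial stand-in for whatever learning mechanism the brain actually uses.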