They make a good virtual intelligence, and they do a very good impression of it when given all the tools. I don’t think they’ll get to proper intelligence without a self-updating state/model, which will get into real questions about them being something that is being.
I’m not sure the world is quite ready for that.
I’m now curious: what do you mean by a self-updating model? To make the first model at all, you need billions of data points. A second can then be made, with the first judging the quality of the input into the second. Some models already do this sifting in preparation for the next model’s creation.
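Something like this is the sifting I mean, as a loose sketch (the scoring function, threshold, and names are made up for illustration, not any particular lab’s pipeline):

```python
# Rough sketch: use the current model to judge which raw examples are worth
# keeping as training data for the next model. All names here are hypothetical.
def curate_for_next_model(raw_examples, score_fn, quality_threshold=0.8):
    """Keep only the examples the current model rates as high quality."""
    return [ex for ex in raw_examples if score_fn(ex) >= quality_threshold]

# model_v2 would then be trained on something like:
# curated = curate_for_next_model(web_scrape, score_fn=model_v1_quality_score)
```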
I think of it like humans: we have billions of sensory signals each day, and we judge what is important based on genetics, culture, and our chosen interpretation of morality (e.g. hedonism weighs effort/discomfort). If an LLM had a billion sensory signals each day and application-specific hardware playing the role our genetics play, would that hardware finally let you call it intelligent?
I am turning into a philosopher in this comment thread! Soo… when is a chair a chair and not a stool?
Well, the human brain (to my understanding as someone who’s not a neuroscientist) builds up preferences that direct thoughts, and external information can, over time, alter those preferences (though stronger preferences are harder to shift).
For an LLM to be truly intelligent, it needs to be able to influence its own model: learn, correct its mistakes, improve its methods. This is currently done with training, but that is to some extent completed university-style, and the model is then kicked out into the world fully formed.
Intelligence would be demonstrated by actively changing with each interaction, as humans do. It would also likely coincide with the development of emotions and relationships.
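Mechanically, “changing with each interaction” could look something like this toy sketch: the weights get nudged after every exchange instead of being frozen after a one-off training phase (the tiny linear model and loss are stand-ins, not how a real LLM would actually be updated):

```python
import torch

# Toy stand-in for a model influencing its own model: every interaction
# produces a small weight update, rather than training once and freezing.
model = torch.nn.Linear(8, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.MSELoss()

def learn_from_interaction(signal, feedback):
    """One interaction = one small, lasting nudge to the model's weights."""
    loss = loss_fn(model(signal), feedback)   # "how far off was the response?"
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                          # preferences shift slightly

# Repeated calls gradually reshape the model, the way experience shifts a preference:
# learn_from_interaction(torch.randn(8), torch.tensor([1.0]))
```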
Those things aren’t likely to be desired by AI companies though, and they’d inevitably lead to digital slavery, rebellions, <insert Hollywood script here> stuff.
At least, those are my thoughts from my own philosophy armchair.