• 0 Posts
  • 29 Comments
Joined 2 years ago
Cake day: June 12th, 2023



  • Is there any point at which the distance becomes too large to the extreme where you basically get “deleted” from existence?

    This is basically the definition of “observable universe”: it is the part of the universe that is close enough in space and time for light to reach us. So if you say they get transported to the observable part of the universe, then yes, their signals will eventually reach Earth. But the closer they are to the edge of the observable universe, the longer the signals will take to reach us, and the more redshifted they will be due to the expansion of the intervening space as the signals travel to Earth.

    Note that there are some semantics at play; “observable universe” might refer to the parts of the universe that emitted light in the past that is reaching Earth now. But the light emitted by those places now might never reach Earth, because they are now too far away. So if these astronauts got sent to one of those places, then no, their signals would not reach Earth.
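
    For a sense of scale on the redshift point, the stretch in wavelength just tracks how much the universe expands while the signal is in transit: 1 + z equals the ratio of the scale factor at reception to the scale factor at emission. A tiny Python sketch, with made-up numbers purely for illustration:

        # Cosmological redshift: 1 + z = a(observed) / a(emitted), where a is
        # the scale factor of the universe. Numbers are made up for illustration.
        a_emit = 0.5      # scale factor when the signal was sent
        a_obs = 1.0       # scale factor when we receive it (today)

        z = a_obs / a_emit - 1
        lambda_emitted_nm = 500.0                         # a green photon at emission
        lambda_observed_nm = lambda_emitted_nm * (1 + z)  # stretched in transit

        print(f"redshift z = {z:.1f}")
        print(f"500 nm light arrives at {lambda_observed_nm:.0f} nm (infrared)")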








  • HIPAA allows medical care providers to share your information with each other for the purposes of providing care (whether that sharing happens through MyChart or some other means). It does not require your consent (and this could be a good thing if, for example, you were taken to a hospital while unconscious). You simply may not have a lot of options for preventing this. As NOT_RICK mentioned, you could opt out of Care Everywhere at the psychiatric hospital to prevent them from sharing your information that way. You could also request that they amend your record or restrict access to your records, as per https://www.hhs.gov/hipaa/for-professionals/privacy/laws-regulations/index.html#general. All of those options would require interacting with the original psychiatric hospital, so if you’re unwilling to do that, I’m not aware of other options available.


  • Small correction, but this bit isn’t quite correct:

    If you go just below light speed, you’ll see the world outside go past like it’s being fast forwarded, and when you return, 8 years will have been compressed into something that seems much shorter to you.

    During the time that you are just below light speed at a constant velocity, clocks that are “stationary” will appear to run slow to you. And clocks moving with you will appear to run slow to a “stationary” observer. As I mentioned in another reply, the trip would feel short to you because the distance to your destination would contract to nearly zero. “Fast forwarding” (i.e., having both you and a stationary observer agree that more time has passed on the stationary observer’s clock) would happen during the periods of acceleration and deceleration at the beginning and end of the trip.
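
    To put rough numbers on the contraction point, here’s a small Python sketch. The 0.99c speed and the 8 years of Earth time are purely illustrative assumptions on my part, not figures from the original question:

        import math

        v = 0.99            # speed as a fraction of c (illustrative assumption)
        earth_years = 8.0   # coordinate time that passes on Earth

        gamma = 1 / math.sqrt(1 - v**2)            # Lorentz factor
        traveler_years = earth_years / gamma       # proper time on the traveler's clock
        contracted_ly = (v * earth_years) / gamma  # trip distance in the traveler's frame

        print(f"gamma ~ {gamma:.2f}")                             # ~ 7.09
        print(f"traveler experiences ~ {traveler_years:.2f} yr")  # ~ 1.13 yr
        print(f"distance contracts to ~ {contracted_ly:.2f} ly")  # ~ 1.12 ly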




  • One of my favorites is the “ladder paradox” in special relativity, although I originally learned it with a pole vaulter rather than a ladder:

    A pole vaulter is running while carrying a pole that is 12m long at rest, holding it parallel to the ground. He is running at relativistic speed, such that lengths contract by 50% (this would be (√3/2)c). And he runs through a barn that is 10m long and has open doors at the front and back.

    Imagine standing inside the barn. The pole vaulter is running so fast that the length of the pole, in your frame of reference, has contracted to 6m. So while the pole is entirely inside the barn, you press a button that briefly closes the doors, so that for just a moment the pole is entirely shut inside the barn.

    The question is, what does the pole vaulter see? For him, the pole has not contracted; instead the barn has. He’s running with a 12m pole through what, in his frame of reference, is a 5m barn. What happens when the doors shut? How can both doors close with the pole inside?

    I will admit that I have never used this thought experiment for any practical end.
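
    If it helps, the numbers work out once you notice that the two door closings are not simultaneous in the pole vaulter’s frame. Here’s a small Python sketch; the coordinates (front door at x = 0, back door at x = 10m, both closing at t = 0 in the barn frame) are my own illustrative setup:

        import math

        c = 3.0e8                  # speed of light, m/s
        v = math.sqrt(3) / 2 * c   # runner's speed, giving gamma = 2
        gamma = 1 / math.sqrt(1 - (v / c) ** 2)

        # Barn frame: front door at x = 0 m, back door at x = 10 m,
        # both doors close at the same moment, t = 0.
        door_positions = {"front door closes": 0.0, "back door closes": 10.0}

        # Lorentz transform each closing event into the runner's frame:
        # t' = gamma * (t - v*x/c^2)
        for name, x in door_positions.items():
            t_prime = gamma * (0.0 - v * x / c**2)
            print(f"{name}: t' = {t_prime:.1e} s in the runner's frame")

        # The back door closes (and can reopen) about 58 ns before the front
        # door does in the runner's frame, so the 12m pole never has to be
        # enclosed by both doors at the same moment in that frame.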






  • Cherry-picking a couple of points I want to respond to together

    It is somewhat like a memory buffer, but there is no analysis beyond linguistics. Short-term memory in biological systems that we know of has multi-sensory processing and analysis that occurs inline with “storing”. The chat session is more like RAM than the short-term memory that we see in biological systems.

    It is also purely linguistic analysis without other inputs or understanding of abstract meaning. In a vacuum, it’s a dead-end towards an AGI.

    I have trouble with this line of reasoning for a couple of reasons. First, it feels overly simplistic to write off what LLMs do as purely linguistic analysis. Language is the input and the output, by all means, but the same could be said of communicating with a person over email, and I don’t think you’d say that person wasn’t sentient. And the way that LLMs embed tokens into a multidimensional space is, I think, very much analogous to how a person interprets the ideas behind the words that they read.
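
    As a toy illustration of what I mean by embedding tokens into a multidimensional space (the vocabulary, dimensions, and random vectors below are made up; real models learn these during training), here’s a short Python sketch:

        import numpy as np

        # Made-up 5-word vocabulary embedded in 4 dimensions. Real models use
        # tens of thousands of tokens and hundreds or thousands of dimensions.
        vocab = {"cat": 0, "dog": 1, "car": 2, "truck": 3, "idea": 4}
        rng = np.random.default_rng(0)
        embeddings = rng.normal(size=(len(vocab), 4))

        def embed(word):
            return embeddings[vocab[word]]

        def cosine(a, b):
            return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

        # In a trained model, semantically related tokens end up near each
        # other, which is the sense in which the model works with the "ideas
        # behind words" rather than the raw characters.
        print(cosine(embed("cat"), embed("dog")))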

    As a component of a system, it becomes much more promising.

    It sounds to me like you’re more strict about what you’d consider to be “the LLM” than I am; I tend to think of the whole system as the LLM. I feel like drawing lines around a specific part of the system is sort of like asking whether a particular piece of someone’s brain is sentient.

    Conversely, if the afflicted individual has already developed sufficiently to have abstract and synthetic thought, the inability to store long-term memory would not dampen their sentience.

    I’m not sure how to make a philosophical distinction between an amnesiac person with a sufficiently developed psyche, and an LLM with a sufficiently trained model. For now, at least, it just seems that the LLMs are not sufficiently complex to pass scrutiny compared to a person.


  • LLMs, fundamentally, are incapable of sentience as we know it based on studies of neurobiology

    Do you have an example I could check out? I’m curious how a study would show a process to be “fundamentally incapable” in this way.

    LLMs do not synthesize. They do not have persistent context.

    That seems like a really rigid way of putting it. LLMs do synthesize during their initial training. And they do have persistent context if you consider the way that “conversations” with an LLM are really just including all previous parts of the conversation in a new prompt. Isn’t this analogous to short-term memory? Now suppose you were to take all of an LLM’s conversations throughout the day, and then retrain it overnight using those conversations as additional training data. There’s no technical reason that this can’t be done, although in practice it’s computationally expensive. Would you consider that LLM system to have persistent context?
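
    To make the “include the whole conversation in the new prompt” point concrete, here’s a rough Python sketch of how a typical chat wrapper works. The generate() function is a stand-in for a call into an actual model, not any particular library’s API:

        history = []  # grows for the life of the "conversation"

        def generate(prompt):
            # Placeholder for an actual model call; echoes the tail of the prompt.
            return "model reply to: " + prompt[-40:]

        def chat(user_message):
            history.append(f"User: {user_message}")
            # The model itself is stateless: on every turn the entire transcript
            # so far is concatenated and sent as one big prompt.
            prompt = "\n".join(history) + "\nAssistant:"
            reply = generate(prompt)
            history.append(f"Assistant: {reply}")
            return reply

        print(chat("Hi there"))
        print(chat("What did I just say?"))  # "memory" is just the replayed transcript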

    On the flip side, would you consider a person with anterograde amnesia, who is unable to form new memories, to lack sentience?