

To oversimplify with another example from the theory, suppose planet Earth were in a superposition of two position states with a non-zero separation. Semi-classical gravity says the gravitational field would be sourced evenly from both locations, but observing such a state is impossible, as it must decohere into 100% of the mass being at one location or the other. Applying quantum maths to gravitationally significant objects simply doesn’t make sense, because gravity/spacetime isn’t a quantum field.
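For reference, the equation behind that picture is the semiclassical (Møller-Rosenfeld) Einstein equation, in which classical curvature is sourced by the quantum expectation value of the stress-energy operator. The two-branch state below is just the idealized example from the paragraph above, not a realistic model:

\[
G_{\mu\nu} = \frac{8\pi G}{c^4}\,\langle \Psi |\, \hat{T}_{\mu\nu} \,| \Psi \rangle ,
\qquad
|\Psi\rangle = \tfrac{1}{\sqrt{2}}\bigl(|\text{Earth at } x_1\rangle + |\text{Earth at } x_2\rangle\bigr)
\;\Rightarrow\;
\langle \hat{T}_{\mu\nu} \rangle \approx \tfrac{1}{2} T^{(x_1)}_{\mu\nu} + \tfrac{1}{2} T^{(x_2)}_{\mu\nu} .
\]

In other words, the field equation literally sees half the planet at each location, which is exactly the configuration quantum mechanics says we can never observe.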
So yes, the predictions made by semi-classical gravity diverge from reality when faced with extreme masses, but that theory was only ever intended to be an approximation. It is useful and consistent with reality within certain ranges of conditions, but we shouldn’t jump to the conclusion that physics breaks from all known fundamentals in the presence of large masses when the simpler answer is that this is a case where the approximation fails. A more complete theory will accurately describe physics across a wider range of conditions without requiring the untestable assumption that there are places where the rules don’t apply. And we have good reason to believe the rules of physics don’t change: everywhere we have looked they appear to have always been the same, and every prior divergence from our models was eventually explained by a better model.
The problem in physics is that we have two models that describe reality with absurd mathematical precision at different scales but which seem to be fundamentally irreconcilable. Yet we know they must be reconcilable, because reality has to be consistent with itself.
On the contrary, this breaks semi-classical gravity’s use of quantum mechanics. The predictions the approximation makes are not compatible with our observations of how quantum mechanics works, and scientists are working on an experiment that could falsify the hypothesis. ( https://doi.org/10.1103/PhysRevLett.133.180201 )
I’m afraid you’ve got that precisely backwards. Falsifiability is the core of science; it is the method by which factually deficient hypotheses are discarded. If there is no contradiction between theory and experiment, then either all false theories have already been discarded or we have overlooked an experiment that could prove otherwise.
That’s distinctly false. The Higgs boson was only proposed in 1964 and wasn’t experimentally confirmed until 2012, just 13 years ago.
Because we still have falsifiable hypotheses to test.
We have, actually. The list of unsolved problems in physics on Wikipedia is like 15 pages long and we’re developing new experiments to address those questions constantly.
Likewise, there’s no reason to assume the universe isn’t behaving according to consistent rules except where observable evidence contradicts our models. If the laws of physics can “break down” then they aren’t “laws”, merely approximations that are only accurate under a limited range of conditions. The fact that the universe continues to exist despite the flaws in our theories proves that there must be a set of rules which apply in all cases.
And if the rules can change, then our theories will have to be updated to describe those changes and the conditions where they occur.