• bitfucker@programming.dev
    2 days ago

    Yes, medicine works through diagnosis… which the AI did… We prefer false positives, so the doctor may or may not perform further inspection, but it was diagnosed/flagged nonetheless. The doctor gets a second opinion, just from a computer instead of from talking with his peers, who may be busy. And I did not say that the doctor will trust the output blindly, did I? That’s why no layman should operate the AI, as I said.

    this is the most brain-numbing take. AI can generate 15 billion compounds with medical implications. out of those, only 200 are viable. out of those, 15 aren’t toxic to humans. problem is, it’s going to take 50 years to find those 200 and another 25 years for the 15. in the meantime all medical research has been dedicated to finding those 15 medications for 75 years and has completely ignored research into specific medicines to treat problems now. the biggest joke about those 15 medicines? they’re all “boner” pills, because the model was trained on Pfizer data.

    Well, then that is not the fault of the AI. Why did humans act irrationally, as you said? The AI is just trained that way. Maybe train another AI on other data, then? The concept clearly works, because in 75 years we get 15 real medicines out of 15 billion candidates, rather than maybe a thousand potential candidates from a handful of manual research efforts, which still also need to be tested.

    what’s your point? of course you need specialists to train the models; that’s beside the point I made.

    Your point does not make sense, because if AI cannot do any of that, then every early cancer diagnosis made by a computer is not worth checking, those 15 compounds are BS, and the astronomy results may be wrong. As you clearly stated yourself, AI is damn good at detecting patterns that a human may miss. If that does not mean an AI is capable of something, then I don’t know what does.