• 0 Posts
  • 19 Comments
Joined 1 year ago
Cake day: July 30th, 2023

  • bignate31@lemmy.world to Science Memes@mander.xyz · Nobel Prize 2024 · 23 days ago

    It’s hype like this that breaks the public’s back when it turns out “AI doesn’t change anything”. Don’t get me wrong: AlphaFold has done incredible things. We can now create computational models of proteins in a few hours instead of a decade. But the difference between a computational model and the actual thing is like the difference between a piece of cheese and yellow plastic: they both melt nicely, but you’d never want one of them in your quesadilla.

  • Another great example (from DeepMind) is AlphaFold. Because there’s relatively little data on protein structures (only ~175k in the PDB), you can’t really build a model that requires millions or billions of structures. That’s compounded by the fact that getting the structure of a new protein in the lab is really hard, and that most proteins are highly similar to one another (you share about 60% of your genes with a banana).

    So the researchers generated a bunch of “plausible yet never seen in nature” protein structures (that their model thought were high quality) and used them for training, a self-distillation trick sketched in the code below.

    Granted, even with that incredible progress, AlphaFold still hasn’t produced any biological breakthroughs (80% accuracy is much better than the ~60% we were at 10 years ago, but it’s still not nearly where we really need to be).

    Image models, on the other hand, are quite sophisticated, and many of them can “beat” humans or look “more natural” than an actual photograph. Trying to eke the final 0.01% out of a 99.9%-accurate model is when model collapse happens: the model starts to learn from images that are “nearly accurate to the human eye but contain unseen flaws”.
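
    A minimal sketch of that self-distillation loop, under loose assumptions: the toy model, the predict_with_confidence name, and the 0.9 confidence threshold are illustrative stand-ins, not AlphaFold's real training code. It also shows where the collapse risk enters: the filter only catches flaws the model itself can see.

    ```python
    # Illustrative sketch of confidence-filtered self-distillation, as described
    # above. NOT AlphaFold's real pipeline: the toy model, function names, and
    # the 0.9 threshold are assumptions made purely for illustration.
    import random

    class ToyStructureModel:
        """Stand-in for a structure predictor that scores its own confidence."""
        def predict_with_confidence(self, sequence):
            structure = f"predicted_structure_for_{sequence}"  # placeholder output
            confidence = random.random()                       # placeholder self-score
            return structure, confidence

        def fit(self, training_pairs):
            print(f"training on {len(training_pairs)} (sequence, structure) pairs")

    def self_distillation_round(model, labelled_pairs, unlabelled_seqs, threshold=0.9):
        # Predict structures for unlabelled sequences and keep only the ones the
        # model itself rates as high quality ("plausible yet never seen in nature").
        synthetic_pairs = []
        for seq in unlabelled_seqs:
            structure, confidence = model.predict_with_confidence(seq)
            if confidence >= threshold:
                synthetic_pairs.append((seq, structure))

        # Retrain on the real structures (e.g. the ~175k in the PDB) plus the
        # filtered synthetic ones. The risk: flaws the filter cannot see feed back
        # into the model, which is the "model collapse" failure mode above.
        model.fit(labelled_pairs + synthetic_pairs)
        return model

    if __name__ == "__main__":
        model = ToyStructureModel()
        real = [("SEQ_A", "pdb_structure_A"), ("SEQ_B", "pdb_structure_B")]
        unlabelled = [f"SEQ_{i}" for i in range(100)]
        self_distillation_round(model, real, unlabelled)
    ```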

  • The struggle here is that you’re talking about money earned after the fact and not including game theory. It would be a tough experiment to conduct, but say you spent $150 million to save $104M. What if you didn’t spend that $150M? Would you have an extra $46M in the bank? Or would the $104M in losses actually end up more like $1.2B because, slowly, everyone realised there was no reason to pay a fare?

    I don’t know what the case is here, but I imagine some economists have determined that $150M strikes a balance between actually getting people to ride the subway (raising fares would eventually drive down revenue) and posing a substantial enough threat to deter jumpers (no cops in the way means far more jumpers). A toy version of that trade-off is sketched below.
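
    A toy back-of-the-envelope version of that comparison. Only the $150M enforcement cost and the $104M in losses come from the comment above; the annual ridership, the fare, and the 40% no-enforcement evasion rate are pure assumptions, picked only to make the counterfactual concrete.

    ```python
    # Toy comparison of the two scenarios discussed above. Only the $150M
    # enforcement cost and $104M in current losses come from the thread; every
    # other number (annual rides, fare, evasion rates) is an assumption.

    ENFORCEMENT_COST = 150e6   # enforcement spend ($/year)
    CURRENT_LOSSES   = 104e6   # fare revenue lost to evasion today ($/year)

    ANNUAL_RIDES = 1.0e9       # assumed rides per year (paid or unpaid)
    FARE = 2.90                # assumed fare per ride ($)

    def net_cost(enforcement_cost, evasion_rate):
        """Total yearly cost of a scenario: enforcement spend plus lost fares."""
        lost_revenue = ANNUAL_RIDES * evasion_rate * FARE
        return enforcement_cost + lost_revenue

    # Scenario 1: keep enforcement; evasion stays where the $104M loss implies (~3.6%).
    current_evasion_rate = CURRENT_LOSSES / (ANNUAL_RIDES * FARE)
    with_enforcement = net_cost(ENFORCEMENT_COST, current_evasion_rate)

    # Scenario 2: drop enforcement; assume evasion creeps up as "everyone realises
    # there was no reason to pay a fare". The 40% figure is an arbitrary guess.
    without_enforcement = net_cost(0, evasion_rate=0.40)

    print(f"with enforcement:    ${with_enforcement / 1e6:,.0f}M/year")
    print(f"without enforcement: ${without_enforcement / 1e6:,.0f}M/year")
    # With these made-up numbers, dropping enforcement loses on the order of
    # $1.2B/year in fares, which is the comparison the comment is gesturing at.
    ```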