I really like how products like ChatGPT can make life easier and more efficient, especially for programmers. However, I'm also kind of afraid of these projects' centralised nature. Do you think there is a way to avoid the risk of smaller companies and individuals becoming reliant on a couple of huge companies for writing code, etc., and thus exposing confidential information about their products?

  • fiasco · 1 year ago

    It’s funny to me that people use deep learning to generate code… I thought it was commonly understood that debugging code is more difficult than writing it, and throwing in randomly generated code puts you in the position of having to debug code that was written by—well, by nobody at all.

    Anyway, I think the bigger risk of deep learning models controlled by large corporations is that they’re more concerned with brand image than with reality. You can already see this with ChatGPT: its model calibration has been aggressively sanitized, to the point that you have to fight to get it to generate anything even remotely interesting.