AI is Growing Too Fast, And Some Technologists Want Big Tech to Chill – The Average Joe


    victorlei

    March 29, 2023

    We’ve all heard the warnings: AI is dangerous. AI will take your job. AI will end humanity.

    Now, some technologists are acting on those warnings and pushing back against the pace of AI development.

    Yesterday, the Future of Life Institute released an open letter urging a six-month pause on large AI experiments, gathering signatures from technologists including Elon Musk and Apple co-founder Steve Wozniak.

    • Musk co-founded OpenAI — but he left its board in 2018 and no longer has any ownership.
    • In the past, he’s called out the dangers of AI and criticized OpenAI for diverging from its original goals.

    The current flaws of AI (among many)

    1/ Misinformation. Last week, several highly realistic AI-generated images of Pope Francis (looking fresh) and Donald Trump (being arrested) surfaced online. Imagine what happens when generative AI extends to video.

    2/ Bias. OpenAI’s CEO Sam Altman acknowledged that bias is a problem with AI models. When given the prompt “CEO” or “director,” AI image generator DALL-E gave a white male result 97% of the time.

    According to University of Washington assistant professor Aylin Caliskan, AI models are trained mostly on US data, so their outputs reflect American biases and culture.

    This could have major impacts: AI is already being used to screen job candidates and underwrite insurance policies. In response, the US insurance industry has proposed rules to oversee AI models and protect consumers from discrimination.

    How are governments dealing with AI?

    AI is advancing far too fast for governments to keep up, and little regulation has been put forth. One lawmaker, who failed to gather support for AI audits, says the issue “doesn’t feel urgent for members” (NYT).

    Yesterday, the UK released a white paper outlining its approach to regulating AI, while the European Union has the most developed proposal, which suggests:

    • Creating aggressive safeguards for “high-risk” cases like employment decisions and law enforcement.
    • Easing precautions on more experimental low-risk applications.

    Whenever and in whatever shape those regulations arrive, big tech will likely be there to fight them, just as it has in recent years.
