• Café Life is the Colony's main hangout, watering hole and meeting point.

    This is a place where you'll meet and make writing friends, and indulge in stratospherically elevated wit or barometrically low humour.

    Some Colonists pop in religiously every day before or after work. Others we see here less regularly, but all are equally welcome. Two important ground rules…

    • Don't give offence
    • Don't take offence

    We now allow political discussion, but strongly suggest it takes place in the Steam Room, which is a private sub-forum within Café Life. It’s only accessible to Full Members.


News

Hmmm...

(This article is geo-blocked outside the USA, so the text follows.)

OpenAI whistleblower found dead in San Francisco apartment

By Jakob Rodgers | Bay Area News Group
UPDATED: December 13, 2024 at 3:21 PM CST

SAN FRANCISCO — A former OpenAI researcher known for blowing the whistle on the blockbuster artificial intelligence company, which is facing a swell of lawsuits over its business model, has died, authorities confirmed this week.

Suchir Balaji, 26, was found dead inside his Buchanan Street apartment on Nov. 26, San Francisco police and the Office of the Chief Medical Examiner said. Police had been called to the Lower Haight residence at about 1 p.m. that day, after receiving a call asking officers to check on his well-being, a police spokesperson said.

The medical examiner’s office has not released his cause of death, but police officials this week said there is “currently, no evidence of foul play.”

Information he held was expected to play a key part in lawsuits against the San Francisco-based company.

Balaji’s death comes three months after he publicly accused OpenAI of violating U.S. copyright law while developing ChatGPT, a generative artificial intelligence program that has become a moneymaking sensation used by hundreds of millions of people across the world.

Its public release in late 2022 spurred a torrent of lawsuits against OpenAI from authors, computer programmers and journalists, who say the company illegally stole their copyrighted material to train its program and elevate its value past $150 billion.

The Mercury News and seven sister news outlets are among several newspapers, including the New York Times, to sue OpenAI in the past year.

In an interview with the New York Times published Oct. 23, Balaji argued OpenAI was harming businesses and entrepreneurs whose data were used to train ChatGPT.

“If you believe what I believe, you have to just leave the company,” he told the outlet, adding that “this is not a sustainable model for the internet ecosystem as a whole.”

Balaji grew up in Cupertino before attending UC Berkeley to study computer science. It was then he became a believer in the potential benefits that artificial intelligence could offer society, including its ability to cure diseases and stop aging, the Times reported. “I thought we could invent some kind of scientist that could help solve them,” he told the newspaper.

But his outlook began to sour in 2022, two years after joining OpenAI as a researcher. He grew particularly concerned about his assignment to gather data from the internet for the company’s GPT-4 program, which analyzed text from nearly the entire internet to train its artificial intelligence model, the news outlet reported.

The practice, he told the Times, ran afoul of the country’s “fair use” laws governing how people can use previously published work. In late October, he posted an analysis on his personal website arguing that point.

No known factors “seem to weigh in favor of ChatGPT being a fair use of its training data,” Balaji wrote. “That being said, none of the arguments here are fundamentally specific to ChatGPT either, and similar arguments could be made for many generative AI products in a wide variety of domains.”

Reached by this news agency, Balaji’s mother requested privacy while grieving the death of her son.

In a Nov. 18 letter filed in federal court, attorneys for The New York Times named Balaji as someone who had “unique and relevant documents” that would support their case against OpenAI. He was among at least 12 people — many of them past or present OpenAI employees — the newspaper had named in court filings as having material helpful to their case, ahead of depositions.

Generative artificial intelligence programs work by analyzing an immense amount of data from the internet and using it to answer prompts submitted by users, or to create text, images or videos.

When OpenAI released its ChatGPT program in late 2022, it turbocharged an industry of companies seeking to write essays, make art and create computer code. Many of the most valuable companies in the world now work in the field of artificial intelligence, or manufacture the computer chips needed to run those programs. OpenAI’s own value nearly doubled in the past year.
 
There’s clearly more to this story than we are being told at the moment.
In an interview with the New York Times published Oct. 23, Balaji argued OpenAI was harming businesses and entrepreneurs whose data were used to train ChatGPT.
Couldn't agree more. This data should be subject to reasonable licensing terms. Not stealing.
 
Someone believes the masses can't read between the lines. A key witness dying BEFORE the trial?
It defies belief, especially as his final tweet (the account is still up on Twitter/X) only a month before his death hardly reads as a suicide note. And somehow this story is flying under the radar of the major news media?
 
What gives me pause for thought here is the god-like status a very few of our wealthiest planetary inhabitants clearly crave. The world, quite literally, is not enough… hence so much interest in colonizing Mars.

My NY lawyer said to me years ago, “being a millionaire is so commonplace now, you’re nothing unless you’re at least a centimillionaire.” I hadn’t come across that word before. But now, of course – inflation being what it is – if you’re not at least a billionaire, how can you look yourself in the mirror in the mornings? I mean, where is your self-respect?

AI may well be the path to trillionairehood, or at least, that’s how it seems to quite a few folk at the moment. A technology so utterly disruptive that it will remake human society in your own image. Again, that’s the current hype. I don’t swallow it, but that doesn’t matter. What counts is the faith, belief and ambition of its proponents.

So if you’re one of these people who covet the immortality that being the world’s first trillionaire will confer, are you going to let a wayward 26-year-old thwart your vaunting ambition? Or are you going to… remove… them?

I know what I think.

And btw, a trillion dollars equals 1,000 billion dollars. As someone once said – let that sink in.
 
What gives me pause for thought here is the god-like status a very few of our wealthiest planetary inhabitants clearly crave.
This is nothing new though, is it? Ever since the advent of farming allowed for non-food-producing specialists, there have been high levels of inequality in civil society. And history does seem to suggest that extremely unequal societies are less stable. One might argue that the story of civil society is the story of the masses trying to keep the power-hungry in check. Lately, in some of our most significant democracies, we seem to be failing at that. So yes, I share your disquiet.

AI [... a] technology so utterly disruptive
Is it though? Or is it simply us being disruptive with our new tools, those power-hungry types all the more so? (The opening scenes of 2001: A Space Odyssey spring to mind.)

So if you’re one of these people who covet the immortality that being the world’s first trillionaire will confer, are you going to let a wayward 26-year old thwart your vaunting ambition? Or are you going to… remove... them?
Or is the stress of being at the centre of a whistleblowing storm going to cause a vulnerable young man to take his own life? Or something else? Someone or some group may well be causally responsible for his death. That cause may be more or less prosecutable in a court of law. I agree the optics are not good.

I don't know. I can't help but feel that any rush to judgment, in any case, is potentially unproductive, possibly ghoulish. That's why, in the liberal democracies, we enshrine the rule of law, right? A law that does seem to be failing left, right and centre, to be sure. Perhaps it's never been more than an abstract ideal.

It's hard, isn't it? In "free" societies, if you see a conspiracy, you're a nutjob; but if you don't, you're irredeemably naive. What a world we live in!
 
Well said @Rich!
 