• Café Life is the Colony's main hangout, watering hole and meeting point.

    This is a place where you'll meet and make writing friends, and indulge in stratospherically elevated wit or barometrically low humour.

    Some Colonists pop in religiously every day before or after work. Others we see here less regularly, but all are equally welcome. Two important ground rules…

    • Don't give offence
    • Don't take offence

    We now allow political discussion, but strongly suggest it takes place in the Steam Room, which is a private sub-forum within Café Life. It’s only accessible to Full Members.


News Eh, Eye!


AgentPete

Capo Famiglia
While I’m on the subject of AI, this tech news is actually pretty important.

There is now a way to functionally remove your copyrighted material from an LLM that has scanned and ingested it without permission. The AI companies can no longer claim that it's impossible, or too expensive…

Maybe a watershed moment.

 
At the bottom of the above excellent article, there is a link to a rather worrying study:
I'm not actually concerned about applying for jobs, but this rang ghastly alarm bells about something I saw recently in a US publishing-industry publication. (I'm sorry, I've forgotten which one.) It was talking about agents, and possibly also publishers, reducing their slush pile by "weeding it out" using AI.

If, as the study (see link) suggests, the bots recognise and favour their own (i.e. AI) work, will that lead to AI manuscript-sorting technology favouring AI-generated submissions and queries over purely human-created ones? Any agent using it should be alerted to this possible bias.
 
I think that’s more than likely - and there are plenty of examples of “AI self-bias” already.

The endpoint (at least as far as “Big AI” is concerned) is the total elimination of the human workforce. That, really, is their ultimate monetization strategy.

OpenAI is already trying to create (and, of course, monetize) a new market for employers to recruit "AI specialists" whose job will be to replace corporate humans with AI agents. Really, though, this is just a stepping stone to the full replacement of humans by AI.

Is this actually going to happen? Not on the evidence of ChatGPT's most recent model, GPT-5. Contrary to what OpenAI's boss says, it really is very, very far indeed from giving you a "PhD-level" assistant. Will it ever get there? With present tech, I doubt it.
 
