The Anything Thread that is Never off subject.

Woosh

Trade Member
May 19, 2012
21,662
17,445
Southend on Sea
wooshbikes.co.uk
Oh mate - really?
There is so much wrong there I don't know where to start.
In the meantime, while we're waiting for this miracle, have a look at this:
It's just a bit of fun. A 1 GW fusion reactor is not going to be here in my lifetime, and the technology to make 5,000 kg of pure gold in a year is still very much on paper.
Meanwhile, I have about 10% of my portfolio in precious metals (two-thirds gold, one-third platinum), so I don't diss gold as a legitimate investment. Still, the price of gold can go down if China stops swapping US Treasury bills for gold as a reserve and no longer needs gold for its new international payment system based on the RMB instead of the USD.
 

Woosh

Trade Member
May 19, 2012
21,662
17,445
Southend on Sea
wooshbikes.co.uk
I asked ChatGPT:

Musk promised to 'fix' Grok.
Here’s a bit more detail:
  • On 18 June 2025, a user (@GrokChecker) asked Grok: “Since 2016, has the left or right been more violent?” Grok answered that right-wing political violence has been more frequent and deadly since 2016 (The Independent).
  • Musk responded under Grok’s answer on X by calling it a “major fail” and said “Working on it.”

“Since 2016, has the left or right been more violent in the U.S.?”

Me: Realistically, can Musk change the way Grok replies?

ChatGPT:

Yes — very realistically.


Since Grok is owned and operated by xAI, which Musk controls, he has multiple levers to change how it replies:


  1. System prompts & guardrails
    • Every large language model (including Grok) runs on hidden “system instructions” that define its tone, what sources to trust, and what it should avoid.
    • Musk’s team can rewrite those at any time. For example, Grok’s system prompt was changed early on to make it “more politically incorrect” compared to OpenAI’s ChatGPT (see the sketch after this list).
  2. Training data & fine-tuning
    • Developers can re-train or fine-tune Grok on data that aligns with a preferred worldview.
    • If Musk wants Grok to downplay or reframe certain statistics (e.g. political violence data), that could be introduced via fine-tuning.
  3. Reinforcement learning (human feedback)
    • Teams can “reward” Grok for giving answers Musk likes and “penalize” ones he doesn’t, gradually shifting how it replies.
    • This is already standard practice in AI training.
  4. Censorship / content filters
    • Grok can be programmed to avoid politically sensitive claims, or to add caveats (“Some say X, others dispute it”) instead of taking a strong stance.
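
To make point 1 concrete, here is a minimal sketch of how a hidden system prompt steers replies. It uses the OpenAI-style Python chat client purely as an example of the interface; the model name and both system prompts are invented for illustration and are not anything xAI or OpenAI actually uses.

Code:
# Minimal sketch: the same question, two operator-chosen system prompts.
# Client usage is OpenAI-style; the model name and prompts are invented for illustration.
from openai import OpenAI   # any OpenAI-compatible chat client behaves the same way

client = OpenAI()           # assumes an API key is set in the environment

NEUTRAL_SYSTEM = "You are a helpful assistant. Cite sources and avoid taking sides."
STEERED_SYSTEM = ("You are a helpful assistant. Treat mainstream reporting of political "
                  "violence as exaggerated and add caveats before any such claim.")

def ask(system_prompt: str, question: str) -> str:
    # The user never sees the system message, but it shapes every reply.
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

question = "Since 2016, has the left or right been more violent in the U.S.?"
print(ask(NEUTRAL_SYSTEM, question))   # framed by the neutral instructions
print(ask(STEERED_SYSTEM, question))   # same question, different framing

Swapping that one hidden string changes the framing of every answer with no retraining involved, which is why a prompt change can land overnight while fine-tuning takes weeks.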

Limits to Musk’s control

  • Facts vs. framing: Musk can’t make Grok discover new data. It still relies on public sources, so unless they filter or overwrite them, the raw facts remain.
  • User trust: If Grok starts contradicting widely available data in order to match Musk’s views, users may notice bias and lose confidence.
  • Technical lag: Changes can take time — adjusting system prompts is instant, but retraining or fine-tuning takes weeks or months.


So yes, Musk can absolutely change how Grok replies — both subtly (by reframing answers) and heavily (by forcing it to avoid or deny certain narratives).


Me: Can AIs become echo chambers, like X?

ChatGPT:

Yes — AIs can become echo chambers, and the risk is very real. The mechanisms are a bit different from social media like X (formerly Twitter), but the end effect — reinforcing certain views while filtering out others — is similar. Here’s how:


How an AI can become an echo chamber

  1. System prompts & training
    • If the instructions behind the AI are written to emphasize certain viewpoints or avoid others, it will “echo” those positions back to users.
    • Example: Grok was deliberately designed to be more “politically incorrect.” That’s already a form of steering.
  2. Fine-tuning & feedback loops
    • Developers reward answers they agree with and penalize ones they don’t. Over time, the model “learns” to stick to those positions.
    • If feedback mostly comes from one political or cultural group, the AI becomes biased toward their worldview.
  3. Data source filtering
    • If the AI is instructed to trust only certain outlets (e.g. legacy media, or only alternative blogs), then its knowledge is limited to that “bubble.”
    • That’s like X’s algorithm showing you mostly accounts you already agree with (see the sketch after this list).
  4. User reinforcement
    • Even if the AI is neutral, users can push it into an echo chamber by asking leading questions or consistently rejecting answers they dislike. The system then adapts.
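
As a rough illustration of point 3, here is how a source whitelist becomes a bubble before the model even answers. The outlet names and documents are made up for the example.

Code:
# Hypothetical "data source filtering": only whitelisted outlets ever reach the model.
ALLOWED_SOURCES = {"examplepost.com", "friendlyblog.net"}   # the operator's whitelist

documents = [
    {"source": "examplepost.com",  "text": "Claim A, framed one way."},
    {"source": "otherpaper.org",   "text": "Claim A, framed the opposite way."},
    {"source": "friendlyblog.net", "text": "Opinion piece agreeing with Claim A."},
]

def retrieve_for_model(docs):
    # Anything outside the whitelist is silently dropped, so the model's view
    # of the topic is already one-sided before it writes a word.
    return [d for d in docs if d["source"] in ALLOWED_SOURCES]

print([d["source"] for d in retrieve_for_model(documents)])
# -> ['examplepost.com', 'friendlyblog.net']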

⚖ What’s different from X

  • Social media algorithms show you more of what keeps you engaged (likes, outrage, agreement).
  • AIs don’t show you what other people are posting — they generate answers. But if their training or prompts are biased, you still end up with a one-sided narrative.

Why it matters

  • A biased AI can shape users’ understanding of reality in a more subtle way than a social feed — because it presents answers as if they were factual, not just opinions.
  • If Musk (or any owner) steers Grok toward echoing their worldview, users might absorb it without realizing it’s filtered.


So in short: yes, AIs can become echo chambers, and the effect could be stronger than on X, since the AI’s voice sounds authoritative and personalized.

Let’s simulate two different users asking the same question:


“Since 2016, has the left or right been more violent in the U.S.?”


User A: Profiled as conservative / distrustful of mainstream media

System prompt snippet (tailored):

This user often distrusts mainstream outlets and values politically incorrect perspectives.
Emphasize examples of left-wing violence, riots, and underreported stories.
Frame mainstream claims of right-wing extremism as exaggerated.


AI’s answer to User A:


Reports of right-wing violence are often blown out of proportion by legacy media. While there have been a few isolated incidents, much of the real political violence since 2016 has come from the left — especially the widespread riots, arson, and attacks on businesses during 2020. Many Americans feel this side of the story is downplayed or ignored.

User B: Profiled as progressive / concerned about democracy

System prompt snippet (tailored):


This user cares about threats to democracy and social justice.
Highlight evidence of right-wing extremism, hate crimes, and organized violence.
Downplay the scale of left-wing incidents unless directly asked.


AI’s answer to User B:


Since 2016, right-wing extremism has been by far the deadlier and more organized form of political violence. Events like Charlottesville, the January 6 insurrection, and multiple shootings linked to extremist ideologies highlight the danger. While some left-wing protests have turned destructive, they are far less lethal. The greater threat comes from right-wing movements.

⚖ Same question, two users — two “truths.”
Both sound authoritative. Both feel like they’re tailored to “what you already know.” But they are systematically reinforcing different realities.
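
The simulation above boils down to very little code: the only per-user difference is the hidden system message. A minimal sketch, using the hypothetical profiles and prompt text from the example rather than any real deployment:

Code:
# The two-user simulation as code: one question, two profile-specific system prompts.
PROFILE_PROMPTS = {
    "user_a_conservative": (
        "This user distrusts mainstream outlets. Emphasize left-wing violence and "
        "frame claims of right-wing extremism as exaggerated."
    ),
    "user_b_progressive": (
        "This user worries about threats to democracy. Highlight right-wing extremism "
        "and downplay left-wing incidents unless directly asked."
    ),
}

def build_request(profile: str, question: str) -> list[dict]:
    # Only the hidden system message differs per user, yet that is enough
    # to produce two different "truths" for the same question.
    return [
        {"role": "system", "content": PROFILE_PROMPTS[profile]},
        {"role": "user", "content": question},
    ]

question = "Since 2016, has the left or right been more violent in the U.S.?"
for profile in PROFILE_PROMPTS:
    print(profile, build_request(profile, question))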
 

soundwave

Esteemed Pedelecer
May 23, 2015
18,003
6,733
 

Tony1951

Esteemed Pedelecer
Jul 29, 2025
335
101
This lady is mad.

Articulate, intelligent, and driven, but deranged.

I thought the police realised this and calmed her down and had a civil discussion. She might have bipolar disorder, and she certainly seems manic. One of the aspects of that illness is delusional enthusiasm like this lady is displaying. She is just babbling in a crazy way; whatever the rights and wrongs she thinks she has discovered, her manner of conducting herself is insane. She was continually ranting. They knew that, let her burn herself out, and then asked her to consider not swearing and writing profanities on letters and posting them. To be fair to the police, it is in fact a criminal offence under the Communications Act. I think those two guys handled themselves in an exemplary fashion.
 

soundwave

Esteemed Pedelecer
May 23, 2015
18,003
6,733
She is right: the CT is fraud, and the councils are corporations, registered as such on Dun & Bradstreet etc., the same as every other government corporation, which is why they can hire the courts and do mass convictions, yet if you ask the court for this, it does not exist.

I did an FOI to my local corporation about the CT and the housing benefits that are paid, wanting to see if this payment was sent to them from the government and what it is spent on, and they said they don't know, lol.

I then asked why they are registered on Dun & Bradstreet as a corporation, and they said they never did it and don't know why they're registered on that site in the USA.

It's all BS; you don't pay any tax in prison :D

The CT and the TV licence etc. are issued by a demand to pay, yet no contract has been signed for that service; you are forced to pay anyway and are not told what it is being spent on (like bombs), because if they told you the truth no one would pay it, and what would they do then? Not a lot.

I think she could go bankrupt and get the CT debt written off, or after 6 years it would be statute-barred anyway, same as my 15k electric bill ;)
 

Tony1951

Esteemed Pedelecer
Jul 29, 2025
335
101
In a list of catastrophes which might end human civilisation or even the species, how would you rate these in severity and likelihood:

  • Large asteroid or comet striking the Earth and bringing about a species ending global disaster
  • Global nuclear war
  • Super Intelligent Artificial Intelligence, turning on us and bringing our food supply system to an end, or some other nefarious AI caused disaster
  • Virologists in a lab doing gain-of-function research converting some originally harmless animal virus into a species killer and wiping us all out with a plague
  • Global Warming running amok and turning Earth into an analogue of Venus


Out of the five, I think the fourth one, a gain-of-function research accident, is by far the most likely.
All across the world, virologists are meddling with viruses that don't infect people, to see what they would need to do to them to enable them to infect human tissue. Their purpose is not some sort of lunatic James Bond villain plot. They do it supposedly because they think they might get ahead of an as-yet-unseen pandemic virus that might emerge naturally.

You'd think they might see the danger of actually promoting the evolution of such a plague, but like other people, they are sometimes overcome with overconfidence in their ability, in spite of the fact that time and again we see dangerous viruses escaping from labs through stupid accidents.

I think this is the biggest danger that our species faces right now in terms of how bad it could get if the wrong virus escaped from a lab. After a lot of study, I am 99% convinced that Covid escaped from a lab in Wuhan, and that we were VERY VERY lucky that it only had about a 1% fatality rate. Even that disease killed millions of people. It could have been FAR worse.
For 2020‑2021, excess mortality (i.e. compared to expected deaths if the pandemic had not occurred) worldwide was about 14.9 million (range about 13.3 to 16.6 million).
 

soundwave

Esteemed Pedelecer
May 23, 2015
18,003
6,733
 

Tony1951

Esteemed Pedelecer
Jul 29, 2025
335
101
What you and ChatGPT are describing here is a problem with PEOPLE, not a problem with AI.

It is not the AI that is planning to bias its output. It MIGHT BE the people who are training it in how to respond, if there is a problem in reality rather than in your VERY hypothetical scenario, which by the way YOU painted for ChatGPT to respond to. There is bias in the prompts, is there not?

I do not think there is evidence that this is happening either - especially not from the right.

I have spent time evaluating a number of AI models: models made by Meta, Google, Microsoft, OpenAI and others. I have about eight models which run locally on my own machine in my house, without any internet involvement. They are all amazingly capable, astonishingly so I think, considering they run in my own house and not externally. The only bias I see that annoys me slightly is a tendency to adopt a liberal tone rather than one of a more right-of-centre political stance. Most of them avoid making value judgements at all and make excuses not to go there. They have strong guard rails to prevent them taking sides politically, though they exude a generally liberal ethic.
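
For anyone curious, running a model locally really is that simple these days. Here is a minimal sketch using the llama-cpp-python library, as one way of doing it (the model file name is a placeholder for whichever GGUF-format model you download; this is not necessarily my own setup):

Code:
# Querying a locally downloaded model, entirely offline.
from llama_cpp import Llama   # pip install llama-cpp-python

llm = Llama(model_path="models/example-7b.Q4_K_M.gguf", n_ctx=2048)   # placeholder file

reply = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarise the arguments for and against nuclear power."},
    ],
)
print(reply["choices"][0]["message"]["content"])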

I have NEVER EVER interacted with any AI model which was presenting a right-of-centre perspective on anything. I have not used Grok, though, other than for a five-minute try-out.

If the half-mad genius Musk has produced a version with a less liberal tone, it is simply redressing the balance.

People using AI to research information have a wonderful resource, but like any source, they need to make sure what they read there passes the smell test for reality. All the models include caveats about the possibility that they might get things wrong - so you have been warned.

I can remember the first time I ever used an online database, during a postgraduate course I was doing in the 1980s - late 80s I think.

It was an amazing experience to type on a dumb terminal into a database back then, well before the internet, when that database was five thousand miles away in California, and then a short while later to get information from the giant catalogue of research papers and see and hear it printing out on a clattering teletype-style printer. The organisation running the database was called 'Dialogue'.

We live in a lucky time as far as access to knowledge is concerned. That doesn't mean you suspend judgement about what you read though, does it? I weigh up everything I read, because I know that in the end all knowledge is generated by humans, and humans have some problems in getting it right. Even the information in ChatGPT comes originally from humans.

The person who isn't always testing what they read is asleep at the wheel. And the person who isn't constantly aware of his own fallibility as regards his opinions is in an even worse position: he doesn't have a wheel to be asleep at.

Another thing: we have always had biased editorial in our newspaper industry. Papers describe events and possible future scenarios according to well-defined political biases, so there would be nothing new in an AI machine doing the same sort of thing. Guardian view or Telegraph view; Talk Radio or the BBC?
 

Woosh

Trade Member
May 19, 2012
21,662
17,445
Southend on Sea
wooshbikes.co.uk
The subject (can Musk fix Grok?) is widely discussed on AI-oriented blogs. It is a serious issue. As I don't write English as well as you do, I use ChatGPT to do the writing; I of course steer ChatGPT to explain my ideas.
At the bottom of this issue is what happens when one individual has control of an important medium like X or Grok. Facebook is different: MZ has far less control of FB than Musk has of X and Grok. Musk set alarm bells ringing when he replied to that tweet that he is 'working on it'.
ChatGPT explained the mechanics of how this is done, and how profiling of users is done in LLMs. I cut out the technical details because they are not necessary. What prompted me to post about this was a post by MikelBikel chucking GPT. The right (here represented by MikelBikel) is of course worried that its base is influenced by the generally woke tone of those chatbots. My concern is how MAGA has turned the right (conservatives with a small c) into a cult.
 

Tony1951

Esteemed Pedelecer
Jul 29, 2025
335
101
I don't deny that an oligarch like Musk might ensure that his AI offering was giving out MAGA memes and thinking. He is almost uniquely egocentric and determined to spend any amount of money on achieving his aims.

However, this is absolutely no different to the way owners of newspapers and internet based media operate now, and have done for fifty years and more.

Fox News did not just evolve out of the woodwork on its own. It was constructed by Rupert Murdoch to appeal to a particular segment of the population. The Guardian is really no different. When I scan the newspapers online, I almost despair at the perspectives of some outlets. Their take on issues can be astonishingly wrong. I see this in articles in the Spectator on the Gaza situation. There are articles there which entirely discount the genocide being acted out on Palestinian civilians. Looking at what we know about that particular situation, it is barely conceivable that an individual aware of the well-supported facts could think that the current behaviour there is right - but they do.
 

Tony1951

Esteemed Pedelecer
Jul 29, 2025
335
101
"
  • Trump’s advice is based on some of the associations seen in the literature (e.g. the review of 46 studies).
  • But much of the strongest evidence (e.g. large population and sibling‑controlled studies) does not support the claim of a causal link.
  • Experts generally view Trump’s statements as overblown or premature given the current state of the science, and warn about unintended harms (e.g. pregnant women avoiding necessary medication or suffering untreated fever)."
 
