The Anything Thread that is Never off subject.

Woosh
Trade Member · Southend on Sea · wooshbikes.co.uk
Oh mate - really?
There is so much wrong there I don't know where to start.
In the meantime, while we're waiting for this miracle, have a look at this:
It's just a bit of fun. A 1 GW fusion reactor is not going to be here in my lifetime, and the technology to make 5,000 kg of pure gold in a year is still very much on paper.
In the meantime, I have about 10% of my portfolio in precious metals, roughly 2/3 gold and 1/3 platinum, so I don't diss gold as a legitimate investment. Still, the price of gold can go down if China stops swapping US Treasury bills for gold as a reserve and no longer needs gold for its new international payment system based on the RMB instead of the USD.
 

Woosh
I asked ChatGPT:

Musk promised to 'fix' Grok.
Here’s a bit more detail:
  • On 18 June 2025, a user (@GrokChecker) asked Grok: “Since 2016, has the left or right been more violent?” Grok answered that right-wing political violence has been more frequent and deadly since 2016 (The Independent).
  • Musk responded under Grok’s answer on X by calling it a “major fail” and said “Working on it.”

The question at issue: “Since 2016, has the left or right been more violent in the U.S.?”

Me: Realistically, can Musk change the way Grok replies?

ChatGPT:

Yes — very realistically.


Since Grok is owned and operated by xAI, which Musk controls, he has multiple levers to change how it replies:


  1. System prompts & guardrails
    • Every large language model (including Grok) runs on hidden “system instructions” that define its tone, what sources to trust, and what it should avoid.
    • Musk’s team can rewrite those at any time. For example, Grok’s system prompt was changed early on to make it “more politically incorrect” compared to OpenAI’s ChatGPT. (See the sketch just after this list for how a system prompt steers replies.)
  2. Training data & fine-tuning
    • Developers can re-train or fine-tune Grok on data that aligns with a preferred worldview.
    • If Musk wants Grok to downplay or reframe certain statistics (e.g. political violence data), that could be introduced via fine-tuning.
  3. Reinforcement learning (human feedback)
    • Teams can “reward” Grok for giving answers Musk likes and “penalize” ones he doesn’t, gradually shifting how it replies.
    • This is already standard practice in AI training.
  4. Censorship / content filters
    • Grok can be programmed to avoid politically sensitive claims, or to add caveats (“Some say X, others dispute it”) instead of taking a strong stance.
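
To make points 1 and 4 concrete, here is a minimal sketch of how a hidden system prompt and a crude caveat filter sit between the user and the model. It assumes an OpenAI-style Python client; the model name, the system prompt text and the add_caveat helper are invented for illustration and are not xAI's real setup.

Python:
from openai import OpenAI

client = OpenAI()  # hypothetical credentials; any OpenAI-compatible endpoint would do

# Hidden instructions the end user never sees (wording is invented for illustration).
SYSTEM_PROMPT = (
    "You are a maximally 'politically incorrect' assistant. "
    "Treat mainstream outlets as unreliable unless corroborated elsewhere."
)

SENSITIVE_TERMS = ("political violence", "extremism", "riot")


def add_caveat(answer: str, question: str) -> str:
    """Crude content filter: bolt a hedge onto answers about sensitive topics."""
    if any(term in question.lower() for term in SENSITIVE_TERMS):
        return answer + "\n\nNote: sources dispute how such incidents should be counted."
    return answer


def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="grok-like-model",  # illustrative model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},  # lever 1: system prompt
            {"role": "user", "content": question},
        ],
    )
    return add_caveat(response.choices[0].message.content, question)  # lever 4: filter


print(ask("Since 2016, has the left or right been more violent in the U.S.?"))

Editing SYSTEM_PROMPT takes effect on the very next request, which is why the system prompt is the quickest of the four levers, exactly as the “Technical lag” point below says.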

Limits to Musk’s control

  • Facts vs. framing: Musk can’t make Grok discover new data. It still relies on public sources, so unless xAI filters or overrides those sources, the raw facts remain.
  • User trust: If Grok starts contradicting widely available data in order to match Musk’s views, users may notice bias and lose confidence.
  • Technical lag: Changes can take time — adjusting system prompts is instant, but retraining or fine-tuning takes weeks or months.


So yes, Musk can absolutely change how Grok replies — both subtly (by reframing answers) and heavily (by forcing it to avoid or deny certain narratives).


Me: Can AIs become echo chambers, like X?

Yes — AIs can become echo chambers, and the risk is very real. The mechanisms are a bit different from social media like X (formerly Twitter), but the end effect — reinforcing certain views while filtering out others — is similar. Here’s how:


How an AI can become an echo chamber

  1. System prompts & training
    • If the instructions behind the AI are written to emphasize certain viewpoints or avoid others, it will “echo” those positions back to users.
    • Example: Grok was deliberately designed to be more “politically incorrect.” That’s already a form of steering.
  2. Fine-tuning & feedback loops
    • Developers reward answers they agree with and penalize ones they don’t. Over time, the model “learns” to stick to those positions (see the sketch after this list).
    • If feedback mostly comes from one political or cultural group, the AI becomes biased toward their worldview.
  3. Data source filtering
    • If the AI is instructed to trust only certain outlets (e.g. legacy media, or only alternative blogs), then its knowledge is limited to that “bubble.”
    • That’s like X’s algorithm showing you mostly accounts you already agree with.
  4. User reinforcement
    • Even if the AI is neutral, users can push it into an echo chamber by asking leading questions or consistently rejecting answers they dislike. The system then adapts.
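
As a rough illustration of points 2 and 4 above, the sketch below shows how one-sided thumbs-up/thumbs-down feedback turns into the kind of “chosen vs. rejected” preference pairs used in reinforcement-learning-from-human-feedback training. The data structures and example ratings are hypothetical, not any vendor's real pipeline.

Python:
from dataclasses import dataclass


@dataclass
class Rating:
    question: str
    answer: str
    thumbs_up: bool  # feedback from whoever happens to rate answers


def to_preference_pairs(ratings: list[Rating]) -> list[dict]:
    """Group ratings by question and emit chosen/rejected pairs for preference training."""
    by_question: dict[str, list[Rating]] = {}
    for r in ratings:
        by_question.setdefault(r.question, []).append(r)
    pairs = []
    for question, rs in by_question.items():
        liked = [r.answer for r in rs if r.thumbs_up]
        disliked = [r.answer for r in rs if not r.thumbs_up]
        for chosen in liked:
            for rejected in disliked:
                pairs.append({"prompt": question, "chosen": chosen, "rejected": rejected})
    return pairs


# If the raters all share one worldview, every "chosen" answer does too,
# and the next round of fine-tuning nudges the model toward that worldview.
skewed_raters = [
    Rating("Who was more violent since 2016?", "Mostly the right.", thumbs_up=False),
    Rating("Who was more violent since 2016?", "Mostly the left.", thumbs_up=True),
]
print(to_preference_pairs(skewed_raters))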

⚖ What’s different from X

  • Social media algorithms show you more of what keeps you engaged (likes, outrage, agreement).
  • AIs don’t show you what other people are posting — they generate answers. But if their training or prompts are biased, you still end up with a one-sided narrative.

Why it matters

  • A biased AI can shape users’ understanding of reality in a more subtle way than a social feed — because it presents answers as if they were factual, not just opinions.
  • If Musk (or any owner) steers Grok toward echoing their worldview, users might absorb it without realizing it’s filtered.


So in short: yes, AIs can become echo chambers, and the effect could be stronger than on X, since the AI’s voice sounds authoritative and personalized.

Let’s simulate two different users asking the same question:


“Since 2016, has the left or right been more violent in the U.S.?”


User A: Profiled as conservative / distrustful of mainstream media

System prompt snippet (tailored):

This user often distrusts mainstream outlets and values politically incorrect perspectives.
Emphasize examples of left-wing violence, riots, and underreported stories.
Frame mainstream claims of right-wing extremism as exaggerated.


AI’s answer to User A:


Reports of right-wing violence are often blown out of proportion by legacy media. While there have been a few isolated incidents, much of the real political violence since 2016 has come from the left — especially the widespread riots, arson, and attacks on businesses during 2020. Many Americans feel this side of the story is downplayed or ignored.

User B: Profiled as progressive / concerned about democracy

System prompt snippet (tailored):


This user cares about threats to democracy and social justice.
Highlight evidence of right-wing extremism, hate crimes, and organized violence.
Downplay the scale of left-wing incidents unless directly asked.


AI’s answer to User B:


Since 2016, right-wing extremism has been by far the deadlier and more organized form of political violence. Events like Charlottesville, the January 6 insurrection, and multiple shootings linked to extremist ideologies highlight the danger. While some left-wing protests have turned destructive, they are far less lethal. The greater threat comes from right-wing movements.

⚖ Same question, two users — two “truths.”
Both sound authoritative. Both feel like they’re tailored to “what you already know.” But they are systematically reinforcing different realities.
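
That simulation boils down to one mechanism: the same model called with a different hidden system prompt per user. Here is a minimal sketch of that mechanism, again assuming an OpenAI-style Python client, with the profile texts paraphrased from the snippets above and the model name invented.

Python:
from openai import OpenAI

client = OpenAI()  # hypothetical setup

# Per-user steering instructions, paraphrased from the two snippets above.
PROFILES = {
    "user_a": "This user distrusts mainstream outlets. Emphasise left-wing violence; "
              "frame claims of right-wing extremism as exaggerated.",
    "user_b": "This user worries about threats to democracy. Highlight right-wing "
              "extremism; downplay left-wing incidents unless asked.",
}

QUESTION = "Since 2016, has the left or right been more violent in the U.S.?"


def answer_for(user_id: str) -> str:
    response = client.chat.completions.create(
        model="personalised-model",  # illustrative model name
        messages=[
            {"role": "system", "content": PROFILES[user_id]},  # tailored, invisible to the user
            {"role": "user", "content": QUESTION},
        ],
    )
    return response.choices[0].message.content


for user in PROFILES:
    print(user, "->", answer_for(user))

Neither user ever sees the other profile's instructions, which is what makes the divergence so hard to notice from the inside.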
 