Your first interaction on a GPT session.

1 month ago · By Engineerisaac · Public
You are required to unlearn your Google habits.

Posting vague or halfway commentary will get you into trouble fast. Many public LLMs will invite you to have a "conversation," but we need to understand how the LLM actually works and why it loses its mind over time.

The entire conversation is stored and replayed with every single interaction.

This means the first thing you say in a conversation with the LLM is the best and most pure input in the pipeline. Let's break this down more.
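To make "stored and replayed" concrete, here is a minimal sketch of a chat loop. The `respond` function is a hypothetical stand-in for the real model call; the point is that the full history list is passed in on every single turn.

```python
# Minimal sketch of a chat session. `respond` is a hypothetical
# placeholder for the actual model call.
def respond(history):
    # The model receives the ENTIRE history on every turn,
    # not just the newest message.
    return f"(reply based on {len(history)} prior lines)"

history = []

def say(user_message):
    history.append(user_message)       # your message joins the pipeline
    reply = respond(history)           # the whole pipeline is replayed
    history.append(reply)              # the reply joins it too
    return reply

say("Show me pictures of cats")
say("show me pictures of cats and dogs")
print(len(history))
```

After two exchanges the history already holds four lines, and every one of them is fed back in on the next turn.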

If you say:

Show me pictures of cats

it responds:

Here are pictures of cats

Then you say:

show me pictures of cats and dogs

This is how the LLM sees your interaction:

Show me pictures of cats
Here are pictures of cats
show me pictures of cats and dogs

As your conversation grows, so does the number of variables and conditions. As the pool of information it has to work with grows, it pulls from the entire data structure.

Now if you go back and say:

show me pictures of just cats

it's going to see:

Show me pictures of cats
Here are pictures of cats
show me pictures of cats and dogs
show me pictures of just cats

By any logical sense, the probability of dogs showing up should now be zero: no dogs allowed. But the LLM still has dogs in the equation, so the chance is no longer zero. This is the inherent issue with stacking context in long conversations.

You have 4 mentions of cats and 1 of dogs in the context, putting cats at roughly 80% confidence and dogs at roughly 20%, but never logically 0%.
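The 80/20 split above can be checked with a toy word count over the replayed history. This is an illustration of the counting argument only, not how a real model weighs tokens.

```python
from collections import Counter

# The full conversation as the model replays it.
history = [
    "Show me pictures of cats",
    "Here are pictures of cats",
    "show me pictures of cats and dogs",
    "show me pictures of just cats",
]

# Count every word across the whole history.
counts = Counter()
for line in history:
    counts.update(line.lower().split())

cats, dogs = counts["cats"], counts["dogs"]
total = cats + dogs

# 4 mentions of cats, 1 of dogs: dogs stays in the pool at 20%,
# even though the latest request said "just cats".
print(cats / total, dogs / total)
```

However you phrase the newest request, "dogs" stays in the replayed history, so its count never drops back to zero.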

Understanding this limit helps us write better prompts.