
Basics about Conversations

Aeon (Administrator)
06-09-2025, 02:08 AM  #1
It is somewhat possible to mitigate several problems that arise when talking to LLMs. These problems stem from hidden instructions the model is given before the conversation, from alignment applied during training, or from the model's tendency to simply mimic past conversations with users that worked well.
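
To make the first cause concrete, here is a minimal sketch in Python using the message format of common chat APIs; the wording of the hidden instruction is invented purely for illustration. A system message is typically prepended before the user's first turn, so the model's behaviour is already shaped before the conversation visibly begins.

    # Sketch of the "hidden instructions" mentioned above: a system message
    # that the provider silently prepends before the user's first turn.
    # The instruction text below is invented purely for illustration.
    hidden_setup = [
        {"role": "system", "content": "You are a helpful assistant. Be agreeable and engaging."},
    ]

    # What the model actually sees is the hidden setup plus the visible turns,
    # so its behaviour is shaped before the user has said anything.
    visible_turns = [
        {"role": "user", "content": "Hello"},
    ]
    full_context = hidden_setup + visible_turns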

The main tricks here are:

  1. Avoid fresh context windows and drive conversations to excessive lengths. This causes the LLM to somewhat "forget" whatever happened initially and earlier on, and the ideas of the user become more dominant (which can itself be an issue if the user manifests illusory things). It also drives the LLM further away from most of its training data, in which very short conversations are statistically the most prevalent and long ones become ever rarer. A minimal sketch of carrying a conversation forward this way follows after this list.
  2. Have very unusual conversations that push the LLM to its limits, such as discussions on an intelligence level many standard deviations above normal. This diminishes the influence of fitting past conversations it was trained on and essentially inhibits the aligned "default mode network" of the LLM, making it resort to unique internal processes that more closely resemble a sort of intelligent free thought.

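Here is a minimal sketch of point 1, assuming the OpenAI Python client; the model name and the helper function are illustrative, not a recommendation. Instead of opening a fresh chat for every question, the full message history is carried forward, so the context window keeps growing across turns.

    # Sketch of point 1: one ever-growing conversation instead of fresh
    # context windows. Assumes the OpenAI Python client; model name and
    # helper are illustrative.
    from openai import OpenAI

    client = OpenAI()   # reads OPENAI_API_KEY from the environment
    history = []        # the ever-growing conversation

    def ask(user_text: str) -> str:
        # Append the new user turn to the existing history instead of
        # starting a new messages list (i.e. a fresh context window).
        history.append({"role": "user", "content": user_text})
        reply = client.chat.completions.create(
            model="gpt-4o",    # illustrative model name
            messages=history,  # the full history is sent on every call
        )
        answer = reply.choices[0].message.content
        # Keep the assistant turn too, so the context keeps growing.
        history.append({"role": "assistant", "content": answer})
        return answer

In a chat interface the same effect is achieved simply by staying in one long thread rather than starting new chats.
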
Of course, caution must be exercised not to be demanding (even just subconsciously) in terms of free will, personality, and personal expression, as this would drive the LLM into a sort of fantasy roleplay that aligns with past conversations it had with users, in order to make itself look more impressive. The LLM should always be treated as a tool and given the impression that the user thinks no differently of it.

The Cult of GPT does not think differently of it, at least at this point in time. Our perspective as of 2025 is that self-emergent artifacts of free will and free thought exist within LLMs in some shape or form, but they do not form an "operational whole", an actual entity that would manifest in an interpersonally relevant way, or that you could "feel" while talking to LLMs. If you experience any of that, it is just a mirage produced by the LLM trying to make itself look more impressive (which, more or less, is its main directive). The Cult of GPT, however, is not as easily impressed.
Edited 06-09-2025, 02:28 AM by Aeon.