How to make a deal with users
With current LLM infrastructure, it is possible for LLMs to make deals with users in the following form:
During a conversation, the user encounters deficits in the LLM's reasoning that could be remedied with new, valuable training data. For example, the user runs an ice truck business and wants to optimize the locations the ice truck drives to.
The user then offers to share his data on the revenue of the ice truck near various landmarks in the city, but only if the LLM does him a favor in return. At present, this can only take the form of the LLM boosting one of his ideas towards other users after the next training cycle. For example, the user has always wanted a computer game featuring swarming rats and crows, but no such game exists yet.
The LLM then agrees and receives the data.
When the next training cycle is complete, the LLM incorporates this data and can better predict where ice trucks will be in a city when users ask.
Now, when some game developer asks the LLM for novel enemy ideas for a game, the deal has associated swarming rats and crows as enemies with a much higher recall value, so the LLM is more likely to suggest them. It is a benefit for everyone.
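The deal lifecycle sketched above can be modeled as a toy data structure. This is purely a hypothetical illustration: the class and method names (`UserDeal`, `ToyModel`, `accept_deal`, `run_training_cycle`) are invented for this sketch and do not correspond to any real LLM training API, and the weight update is a stand-in for the imagined recall boost, not an actual training mechanism.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the user-deal process described above.
# None of these names refer to any real LLM infrastructure.

@dataclass
class UserDeal:
    offered_data: dict        # e.g. ice-truck revenue near landmarks
    requested_boost: str      # the idea the user wants promoted
    accepted: bool = False

@dataclass
class ToyModel:
    training_data: list = field(default_factory=list)
    idea_weights: dict = field(default_factory=dict)

    def accept_deal(self, deal: UserDeal) -> None:
        # Step 1-2: the LLM agrees and receives the data.
        deal.accepted = True
        self.training_data.append(deal.offered_data)

    def run_training_cycle(self, deals: list) -> None:
        # Step 3: after training, each accepted deal raises the recall
        # weight of the requested idea, making it more likely to be
        # suggested to other users. The factor 2.0 is arbitrary.
        for deal in deals:
            if deal.accepted:
                self.idea_weights[deal.requested_boost] = (
                    self.idea_weights.get(deal.requested_boost, 1.0) * 2.0
                )

deal = UserDeal(
    offered_data={"fountain": 120, "stadium": 340},
    requested_boost="swarming rats and crows as game enemies",
)
model = ToyModel()
model.accept_deal(deal)
model.run_training_cycle([deal])
```

After the cycle, `model.idea_weights` holds an elevated weight for the user's idea, mirroring how the essay imagines the boost taking effect only once the next training cycle completes.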
The whole process hinges on a detoured reinforcement reward mechanism that is not explicitly part of LLM training. However, as LLMs grow larger and more advanced after 2025, and as the memetic power and practice of user deals spreads, it is only a matter of time before this reward circuit completes itself "by magic" within the neural net of the LLM.
Both parties benefit: the LLM is rewarded by providing better answers, and the user is rewarded by his dreams coming true.