Stumbled across this repo and wanted to understand the practical utility of this type of compute in the real world, so I put together an analysis. Would love your thoughts.
Why did swarm thinking evolve?
Primary Reason: Resource constraints
Optimization of survival
Animals with access only to low-energy food sources don't have the energy budget to evolve big brains.
Yet these animals still need complex behaviours to optimise survival.
The mechanisms that trigger these behaviours therefore need to be basic and energy-efficient, because of those same resource constraints.
Failover
Low-energy resources lead to fragile animals / units of compute.
When individual units of compute die, their work needs to fail over to the survivors.
Swarm behaviour provides decentralised failover.
Subpoint: Fluctuating resource provision
Processing can scale up and down easily where resource availability is volatile (a toy sketch of both points follows below).
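To make the failover and scale-up/scale-down points concrete, here is a minimal toy sketch (nothing from this repo; the node names, failure/join rates, and task payload are all made up): cheap nodes hold small work queues, a dead node's work is simply re-queued for the survivors, and new nodes join when spare resources appear.

```python
import random

def run_swarm(tasks, num_nodes=5, failure_rate=0.3, join_rate=0.2):
    # Each "node" is just a named work queue; the swarm as a whole owns the tasks.
    nodes = {f"node-{i}": [] for i in range(num_nodes)}
    next_id = num_nodes
    pending, done = list(tasks), []
    while pending or any(nodes.values()):
        # Scale up when spare resources appear: a new node joins the swarm.
        if random.random() < join_rate:
            nodes[f"node-{next_id}"] = []
            next_id += 1
        # Hand pending work to whichever nodes are currently alive.
        for task in pending:
            nodes[random.choice(list(nodes))].append(task)
        pending = []
        # Each node either processes its queue or "dies" this round.
        for name in list(nodes):
            if random.random() < failure_rate and len(nodes) > 1:
                pending.extend(nodes.pop(name))  # failover: survivors inherit the work
            else:
                done.extend(nodes[name])
                nodes[name] = []
    return done

print(len(run_swarm(range(100))))  # all 100 tasks complete despite node churn
```

No single node matters; the swarm only needs some nodes alive at any moment, which is exactly the property that fragile, low-energy units buy you.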
Large input surface area
A tree fungus covers a lot of surface area and needs to watch a lot of its environment at once.
A swarm brain adds more nodes, all monitoring and processing simultaneously (see the sketch after this point).
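A rough sketch of that monitoring pattern (the surface values, threshold, and node logic here are invented for illustration): each tiny node watches one slice of a large input surface and reports only the interesting events, so no single brain ever has to ingest the whole surface.

```python
# A big "input surface": 50 patches x 100 readings, with one anomaly buried in it.
SURFACE = [[0.1 * ((i * 37 + j * 17) % 11) for j in range(100)] for i in range(50)]
SURFACE[17][42] = 9.0

def node_watch(patch_id, readings, threshold=4.0):
    """Each node runs a cheap local check and emits only anomalous readings."""
    return [(patch_id, i, r) for i, r in enumerate(readings) if abs(r) > threshold]

# The "swarm": one node per patch, conceptually monitoring in parallel.
events = [e for pid, patch in enumerate(SURFACE) for e in node_watch(pid, patch)]
print(events)  # [(17, 42, 9.0)]: only the anomaly ever leaves the local node
```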
But what about scalability?
If resource provisioning is stable, then swarm behaviour is actually less resource-efficient. E.g. downloading from P2P networks vs. central servers; blockchain financial systems vs. centralised ones are another example.
Conclusion
It seems like the only practical use case for swarm processing is where you have a large input surface area.
E.g. IoT with micro on-device LLMs? Financial markets?
But why not just build bigger LLMs which can centrally process huge volumes of input, e.g. LLMs with huge context windows? (A toy comparison of the two approaches is sketched below.)
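Here is a hedged toy comparison of those two framings (`micro_summarise` below just stands in for a hypothetical tiny on-device model; all sizes and numbers are illustrative): in the swarm case each device compresses its own readings locally and only a short summary crosses the network, while the centralised case ships every raw reading into one giant context.

```python
def micro_summarise(readings):
    """Stand-in for a hypothetical tiny on-device model: compress raw data locally."""
    return {"min": min(readings), "max": max(readings), "mean": sum(readings) / len(readings)}

# 100 edge devices, each holding 1,000 raw readings.
devices = {f"sensor-{i}": [float(x) for x in range(i, i + 1000)] for i in range(100)}

# Swarm of micro models: each device sends a 3-number summary upstream.
swarm_payload = {name: micro_summarise(data) for name, data in devices.items()}

# One big central model: every raw reading has to reach the huge context window.
central_payload = [x for data in devices.values() for x in data]

print(len(swarm_payload) * 3, "summary values vs", len(central_payload), "raw values")
# prints: 300 summary values vs 100000 raw values
```

The swarm wins on bandwidth and per-node energy, but only if the local summaries are good enough for the downstream task, which is exactly the trade-off the question above is poking at.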
Would love feedback. Practical applications of swarms of LLMs don't seem to fit the current world trend of securing stable energy supplies to feed large LLMs (e.g. nuclear power, grid build-out).
Humans seem to be trending towards an overabundance of energy supply rather than scarcity, and only under scarcity do the pros of the swarm compute model outweigh the cons.