Ideological POVs on Swarm Processing #105

Open
HelloAlexPan opened this issue Feb 12, 2025 · 2 comments

Comments


HelloAlexPan commented Feb 12, 2025

Stumbled across this repo. Wanted to understand the practical utility of this type of compute in the real world. Made an analysis. Would love your thoughts.

Why did swarm thinking evolve?

Primary Reason: Resource constraints

  1. Optimization of survival
  • Animals with access only to low-energy food sources lack the energy budget to evolve big brains
  • Those animals still need complex behaviours to optimize survival
  • The mechanisms that trigger these behaviours must be simple and energy-efficient because of resource constraints
  2. Failover
  • Low-energy resources lead to fragile animals / units of compute
  • When individual units of compute die, the work needs to fail over to others
  • Swarm behaviour provides decentralised failover (see the sketch after this list)
  • Subpoint: fluctuating resource provision
    • Processing can easily scale up and down where resource availability is volatile
  3. Large input surface area
  • A tree fungus covers a lot of surface area and needs to watch lots of things at once
  • A swarm brain adds more nodes that monitor and process simultaneously
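To make the failover point concrete, here is a minimal Python sketch of decentralised failover, assuming a toy swarm of fragile workers. Every name and number here (`Worker`, `run_swarm`, the failure rate) is hypothetical and only illustrative, not the API of this repo: tasks are handed to whichever nodes are still alive, and when a node dies mid-task the task is simply requeued for another node.

```python
# Minimal sketch of decentralised failover in a swarm of fragile workers.
# All names and numbers here are illustrative, not the API of this repo.
import random
from collections import deque


class Worker:
    """A fragile unit of compute that may die while handling a task."""

    def __init__(self, name: str, failure_rate: float = 0.15):
        self.name = name
        self.failure_rate = failure_rate
        self.alive = True

    def process(self, task: str) -> str:
        if random.random() < self.failure_rate:
            self.alive = False
            raise RuntimeError(f"{self.name} died while handling {task!r}")
        return f"{self.name} completed {task!r}"


def run_swarm(workers: list[Worker], tasks: list[str]) -> list[str]:
    """Dispatch tasks to any live worker; requeue a task if its worker dies."""
    queue = deque(tasks)
    results = []
    while queue:
        live = [w for w in workers if w.alive]
        if not live:
            raise RuntimeError("all workers are dead; nothing left to fail over to")
        task = queue.popleft()
        worker = random.choice(live)
        try:
            results.append(worker.process(task))
        except RuntimeError:
            queue.append(task)  # decentralised failover: another node will retry
    return results


if __name__ == "__main__":
    swarm = [Worker(f"agent-{i}") for i in range(6)]
    for line in run_swarm(swarm, [f"task-{i}" for i in range(8)]):
        print(line)
```

The same toy also shows the caveat: once enough nodes have died there is nothing left to fail over to, which is exactly the resource-constraint trade-off above.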

But what about scalability?

  • If there is stable resource provisioning, then swarm behaviour is actually less resource-efficient, e.g. downloading from P2P networks vs. central servers. Blockchain financial systems are another example.

Conclusion

  • It seems like the only practical use case for swarm processing is where you have a large input surface area
    • E.g. IoT with micro on-device LLMs? Financial markets? (A back-of-envelope comparison follows below.)
    • But why not just build bigger LLMs that can centrally process huge numbers of inputs, e.g. LLMs with huge context windows?
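On the large-input-surface-area point vs. simply building bigger context windows, here is a toy back-of-envelope sketch of what a single central model would have to ingest versus what reaches the centre when small on-device models pre-filter locally. All numbers are made-up assumptions for illustration only.

```python
# Toy back-of-envelope comparison: one huge-context central LLM vs. a swarm of
# small on-device models that pre-filter locally. All numbers are assumptions.
N_SENSORS = 10_000          # IoT devices, each producing an input stream
TOKENS_PER_SENSOR = 200     # tokens emitted per sensor per time window
ESCALATION_RATE = 0.02      # fraction of streams a local model flags as relevant

# Design A: every stream goes straight into one central context window.
central_only_tokens = N_SENSORS * TOKENS_PER_SENSOR

# Design B: local models absorb the surface area; the centre only sees escalations.
edge_tokens = N_SENSORS * TOKENS_PER_SENSOR
escalated_tokens = int(N_SENSORS * ESCALATION_RATE) * TOKENS_PER_SENSOR

print(f"Design A, central context needed : {central_only_tokens:,} tokens")
print(f"Design B, tokens at the centre   : {escalated_tokens:,} tokens")
print(f"Design B, tokens at the edge     : {edge_tokens:,} tokens (spread over {N_SENSORS:,} nodes)")
```

Whether Design B is actually cheaper then comes down to whether the edge compute is effectively free (as it is for ants and fungi), which is the energy-scarcity question below.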

Would love feedback. Practical applications of swarms of LLMs do not seem to fit current world trends of securing a stable energy supply to feed large LLMs (e.g. nuclear power, energy grids).

Humanity seems to be trending towards an overabundance of energy supply rather than scarcity, and only under scarcity do the pros of the swarm compute model outweigh the cons.


Hello there, thank you for opening an Issue! 🙏🏻 The team was notified and they will get back to you asap.

@evelynmitchell
Contributor

Thanks for your comments. We'll consider this.

This can be closed.
