This page offers pointers for anyone who wishes to contribute a lot of storage space to Swarm.
We advise running multiple bee nodes, each with the default database size. To understand why, consider how picky a node is when deciding whether to store a chunk. A node with a large capacity is much less picky than a node with a small capacity, so it stores chunks that pickier nodes would have rejected. If your node's capacity is much larger than the average node in the network, it will likely store chunks belonging to content whose other chunks have already been deleted by other nodes. Since your node gets paid by serving chunks upon request, and chunks of content that has mostly disappeared from Swarm are rarely requested, it is best to run nodes whose pickiness matches the average pickiness of the network.
If you just want to run a handful of bee nodes, you can do so by creating a separate configuration file for each. Create your first configuration file by running
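One way to produce that first configuration file is a sketch like the following, assuming the bee binary is installed and on your PATH (bee's printconfig subcommand prints the default configuration to stdout):

```shell
# Dump bee's default configuration into a file that will serve
# as the template for node 1.
bee printconfig > bee-config-1.yaml
```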
Make as many copies of bee-config-1.yaml as you want to run bee nodes, incrementing the number in the name (bee-config-2.yaml, bee-config-3.yaml, and so on) for each new configuration file.
Configure your nodes as desired, but ensure that values such as data-dir, api-addr, p2p-addr and clef-signer-endpoint are unique for each configuration, so the nodes do not collide on the same ports or data directories.
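The copy-and-edit step can be sketched in shell as follows. The config keys, paths and port numbers below (api-addr on 1633, p2p-addr on 1634) are illustrative stand-ins for your actual settings, and the heredoc merely simulates the file created in the previous step; the sed -i flag assumes GNU sed:

```shell
# Stand-in for bee-config-1.yaml from the previous step (hypothetical values):
cat > bee-config-1.yaml <<'EOF'
api-addr: :1633
p2p-addr: :1634
data-dir: /home/user/.bee1
password-file: /home/user/.bee1/password
EOF

# Copy the config for nodes 2 and 3, shifting the ports and data
# directory so each node gets unique values:
for i in 2 3; do
  cp bee-config-1.yaml "bee-config-$i.yaml"
  sed -i \
    -e "s/\.bee1/.bee$i/g" \
    -e "s/:1633/:$((1633 + (i - 1) * 100))/" \
    -e "s/:1634/:$((1634 + (i - 1) * 100))/" \
    "bee-config-$i.yaml"
done
```

After this, bee-config-2.yaml uses ports 1733/1734 and data-dir /home/user/.bee2, and bee-config-3.yaml uses 1833/1834 and /home/user/.bee3.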
Start each bee node in a separate terminal by running:
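For example, assuming configuration files named as above (bee's start subcommand accepts a --config flag pointing at a YAML file):

```shell
# In terminal 1:
bee start --config bee-config-1.yaml

# In terminal 2:
bee start --config bee-config-2.yaml
```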
Running many bee nodes becomes easier with docker-compose. Please have a look at the docker-compose section of the bee README.
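Once you have a compose file set up (the file name below assumes the default docker-compose.yml in your working directory), all defined node containers can be started in the background with:

```shell
# Start every service defined in docker-compose.yml, detached:
docker-compose up -d
```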
If you want to run a large number of bee nodes and you have experience using Kubernetes with Helm, you can have a look at how we manage our cluster under Ethersphere/helm.