Each Bee node is configured to reserve a certain amount of space on your computer's hard drive to store and serve chunks within its neighbourhood of responsibility for other nodes in the Swarm network. Once this allotted space has been filled, each Bee node deletes older chunks to make way for newer ones as they are uploaded by the network.
Each time a chunk is accessed, it is moved back to the end of the deletion queue, so that regularly accessed content stays alive in the network and is not deleted by a node's garbage collection routine.
In order to upload your data to the swarm, you must agree to burn some of your gBZZ to signify to storer and forwarder nodes that the content is important. Before you progress to the next step, you must buy stamps! See this guide on how to purchase an appropriate batch of stamps.
This, however, presents a problem for content which is important but seldom accessed. In order to keep this content alive, Bee nodes provide a facility to pin important content so that it is not deleted.
There are two flavours of pinning, local and global.
If a node operator wants to keep content so that it can be accessed only by local users of that node, via the APIs or Gateway, chunks can be pinned either during upload, or retrospectively using the Swarm reference.
Files pinned using local pinning will still not necessarily be available to the rest of the network. Read global pinning to find out how to keep your files available to the whole of the swarm.
To store content so that it will persist even when Bee's garbage collection routine is deleting old chunks, we simply pass the Swarm-Pin header set to true when uploading.
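For example, assuming a Bee node serving its API on the default localhost:1633 and a valid postage batch ID (both are assumptions; adjust for your own setup and Bee version), an upload with pinning enabled might look like:

```shell
# Upload a file and pin it locally in the same request.
# <your-batch-id> is a placeholder - substitute your own postage batch ID.
curl -X POST \
  -H "Swarm-Pin: true" \
  -H "Swarm-Postage-Batch-Id: <your-batch-id>" \
  -H "Content-Type: application/octet-stream" \
  --data-binary @my-file.txt \
  http://localhost:1633/bzz
```

The response contains the Swarm reference for the uploaded content, which we will need for the pinning operations below.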
To check what content is currently pinned on your node, query the pins endpoint of your Bee API.
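Again assuming the default API address of localhost:1633, this might look like:

```shell
# List the references of all content pinned on the local node.
curl http://localhost:1633/pins
```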
Or, to check for a specific reference, query the pins endpoint with that reference; a 404 response indicates the content is not pinned on this node.
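A sketch of such a query, with a placeholder reference you would replace with your own:

```shell
# Check whether a specific reference is pinned.
# A 404 status code means it is not pinned on this node.
curl http://localhost:1633/pins/<swarm-reference>
```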
If we later decide our content is no longer worth keeping, we can simply unpin it by sending a DELETE request to the pinning endpoint using the same reference.
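For instance, using the same placeholder reference as above:

```shell
# Unpin the content; its chunks become eligible for garbage collection again.
curl -X DELETE http://localhost:1633/pins/<swarm-reference>
```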
Now, when we check again, we will get a 404 error, as the content is no longer pinned.
Pinning and unpinning are possible for files (as in the example) and also for content uploaded via the chunks, directories, and bytes endpoints. See the API documentation for more details.
The previous example showed how we can pin content upon upload. It is also possible to pin content that is already uploaded and present in the swarm.
To do so, we can send a POST request including the Swarm reference to the files pinning endpoint.
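Assuming the same default local API address as before, this could look like:

```shell
# Pin content that has already been uploaded, identified by its reference.
# If the chunks are not present locally, the node will try to fetch them
# from the network.
curl -X POST http://localhost:1633/pins/<swarm-reference>
```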
The pin operation will attempt to fetch the content from the network if it is not available on the local node.
Now, if we query our files pinning endpoint again, the pin counter will once again have been incremented.
While the pin operation will attempt to fetch content from the network if it is not available locally, we advise you to ensure that the content is available locally before calling the pin operation. If the content, for whatever reason, is only fetched partially from the network, the pin operation only partly succeeds and leaves the internal administration of pinning in an inconsistent state.
Local pinning ensures that your own node does not delete uploaded files. But other nodes that store your chunks (because they fall within their neighbourhood of responsibility) may have deleted content that has not been accessed recently to make room for new chunks.
For more info on how chunks are distributed, persisted, and stored within the network, read The Book of Swarm.
To keep this content alive, your Bee node can be configured to refresh this content when it is requested by other nodes in the network, using global pinning.
First, we must start up our node with the global-pinning-enable flag set.
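On the command line, using the flag name given above, this could be done as follows (flag availability varies between Bee releases, so check `bee start --help` for your version):

```shell
# Start the node with global pinning enabled.
bee start --global-pinning-enable
```

The same option can usually also be set in the node's configuration file or via an environment variable, following Bee's standard configuration conventions.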
Next, we pin our file locally, as shown above.
Now, when we distribute links to our files, we must also specify the first two bytes of our overlay address as the target. If a chunk that has already been garbage collected by its storer nodes is requested, the storer node will send a message using PSS to the Swarm neighbourhood defined by this prefix, of which our node is a member.
Let's use the addresses API endpoint to find out our target prefix.
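Assuming the debug API is enabled and listening on its default port of 1635 (an assumption; adjust if you have configured it differently), we can query:

```shell
# Fetch the node's addresses; the overlay address appears in the
# "overlay" field of the JSON response.
curl http://localhost:1635/addresses
```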
Finally, we take the first two bytes of our overlay address, 320e, and include this as the target when referencing our chunk.
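As a sketch, a download request carrying the target prefix might pass it as a targets query parameter (treat the exact parameter name as an assumption for your Bee version):

```shell
# Download content, supplying the 320e neighbourhood prefix as the
# recovery target so storer nodes know where to send repair requests.
curl "http://localhost:1633/bzz/<swarm-reference>?targets=320e"
```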
Now, even if our chunks are deleted, they will be repaired in the network by our local Bee node and will always be available to the whole swarm!