docs: slight restructure

rphmeier committed Jan 1, 2024
1 parent 60f87f4 commit 27f3aab
Showing 4 changed files with 36 additions and 22 deletions.
14 changes: 8 additions & 6 deletions docs-site/docs/intro.md
@@ -1,20 +1,22 @@
---
sidebar_position: 1
---
title: Introduction

# Getting Started
---

## Blobs

The Blobs project exposes the **data availability** capabilities of [Polkadot](https://polkadot.network) and [Kusama](https://kusama.network) for general use. Use-cases include rollups and inscriptions.
The Blobs project exposes the **sequencing** and **data availability** capabilities of [Polkadot](https://polkadot.network) and [Kusama](https://kusama.network) for general use. Use-cases include rollups and inscriptions.

The Blobs codebase is located at https://github.com/thrumdev/blobs. There is a live parachain on Kusama with Parachain ID 3338 running the Blobs runtime.

Blobs enables users to submit arbitrary data to the chain and receive guarantees about the availability of that data. Namely:
In this documentation site, we'll often use the term Polkadot to refer to the Polkadot Relay Chain - the hub chain which provides security for everything running on Polkadot. Kusama runs on the same technology as Polkadot, so the Kusama version of Blobs works identically to the Polkadot version, just on a different network. You can mentally substitute "Kusama" wherever "Polkadot" appears when thinking about the Kusama version of Blobs.

Blobs enables users to submit arbitrary data to the chain and receive guarantees about the availability of that data, as well as proofs of the order in which data were submitted. Namely:
1. The data can be fetched from the Polkadot/Kusama validator set for up to 24 hours after submission and cannot be withheld.
2. A commitment to the data's availability is stored within the blobchain and used as a proof of guarantee (1) for computer programs, such as smart contracts or Zero-Knowledge circuits.

Data Availability is a key component of Layer-2 scaling approaches, and is already part of Polkadot and Kusama for use in securing Parachains. Blobs will bring this capability to much broader markets.

Blobs makes a **relay-chain token utility commitment** now and forever. Submitting blobs will always make use of the DOT token on Polkadot and the KSM token on Kusama, as this is the approach with the least user friction.

@@ -24,4 +26,4 @@ Blobs supports a variety of rollup SDKs out of the box.
- [x] Rollkit
- [x] Sovereign SDK
- [ ] OP Stack
- [ ] Polygon ZK-EVM
@@ -1,6 +1,5 @@
---
sidebar_position: 2
title: Node Operators
title: Getting Started
---

## Releases
@@ -19,15 +18,6 @@ You can pass arguments to each one of these underlying nodes with the following
```bash
./sugondat-node --arg-for-blobs --arg2-for-blobs -- --arg-for-relay --arg2-for-relay
```
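
For example, flags before the `--` separator go to the blobs node, and flags after it go to the embedded relay chain node. The sketch below (an illustrative combination of flags documented elsewhere on this page) selects the Kusama chain spec for the blobs node while enabling pruning on the relay chain node:

```bash
# Flags before `--` configure the blobs node; flags after `--` configure
# the embedded relay chain node. This particular combination is illustrative.
./sugondat-node --chain sugondat-kusama -- --blocks-pruning 1000
```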

## Blobs and Storage Usage

Blobs can potentially use enormous amounts of disk space under heavy usage. This is because all historical blobs are stored within the blobchain's history. While Polkadot and Kusama expunge ancient blobs after 24 hours, any full node of the blobchain will have all the blobs going back to genesis, as well as all of the other block data.

To avoid this issue, run with `--blocks-pruning <number>`, where `number` is some relatively small value such as `1000` to avoid keeping all historical blobs.

However, there is still a need for archival nodes. The main reason is that many rollup SDKs do not have any form of p2p and their nodes synchronize by downloading ancient blobs from the data availability layer's p2p network. Without someone keeping all ancient blocks, those SDKs would unfortunately stop working.

## Hardware Requirements

For full nodes, we currently recommend:
@@ -36,8 +26,11 @@ For full nodes, we currently recommend:
- 512G Free Disk Space (1TB preferred)
- Stable and fast internet connection. For collators, we recommend 500 Mbps.

The disk space requirements can be reduced by
1. Running with `--blocks-pruning` on both
The disk space requirements can be reduced by either
1. Running with `--blocks-pruning 1000` in both the parachain and relay chain arguments
2. Running with `--blocks-pruning 1000 --relay-chain-light-client` in the parachain arguments.

Option (2) uses a relay chain light client rather than a full node. This may lead to data arriving at the node slightly later, as it will not follow the head of the relay chain as aggressively as a full node would. This option is not recommended for collators or RPC nodes.
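
Put concretely, the two options might look like this (a sketch, assuming the flags combine as described above):

```bash
# Option 1: prune ancient blocks on both the parachain and the relay chain.
./sugondat-node --blocks-pruning 1000 -- --blocks-pruning 1000

# Option 2: prune the parachain database and follow the relay chain with a
# light client. Not recommended for collators or RPC nodes.
./sugondat-node --blocks-pruning 1000 --relay-chain-light-client
```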

## Building From Source

@@ -51,8 +44,7 @@ Building from source requires a few packages to be installed on your system. On
apt install libssl-dev protobuf-compiler cmake make pkg-config llvm clang
```

On other linux distros or Mac, use your package manager's equivalents.

On other Linux distributions or macOS, use your package manager's equivalents.

Building the node:
```bash
cargo build --release
```
@@ -65,3 +57,12 @@ Running the node:
```bash
target/release/sugondat-node --chain sugondat-kusama
```

## Blobs and Storage Usage

Blobs can potentially use enormous amounts of disk space under heavy usage. This is because all historical blobs are stored within the blobchain's history. While Polkadot and Kusama expunge ancient blobs after 24 hours, any full node of the blobchain will have all the blobs going back to genesis, as well as all of the other block data.

To avoid this issue, run with `--blocks-pruning <number>`, where `number` is some relatively small value such as `1000` to avoid keeping all historical blobs.
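
For instance, combining this flag with the run command shown under Building From Source, a pruned full node on Kusama might be started like so (a sketch; `1000` is just the example value used above):

```bash
# Keep only the most recent ~1000 blocks instead of the full history.
target/release/sugondat-node --chain sugondat-kusama --blocks-pruning 1000
```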

However, there is still a need for archival nodes. The main reason is that many rollup SDKs do not have any form of p2p networking, and nodes built with those platforms synchronize by downloading ancient blobs from the data availability layer's p2p network. Without someone keeping all ancient blocks, those SDKs would unfortunately stop working. This is beyond the expectations of a data availability layer, as it is intractable to scale to 1Gbps while requiring data availability nodes to keep every historical blob. When all major rollup SDKs have introduced p2p synchronization, the potential storage burden on data availability full nodes will be reduced.
3 changes: 3 additions & 0 deletions docs-site/docs/protocol/data-availability.md
@@ -0,0 +1,3 @@
---
title: Data Availability
---
10 changes: 9 additions & 1 deletion docs-site/sidebars.js
@@ -14,7 +14,15 @@
/** @type {import('@docusaurus/plugin-content-docs').SidebarsConfig} */
const sidebars = {
// By default, Docusaurus generates a sidebar from the docs folder structure
docsSidebar: [{type: 'autogenerated', dirName: '.'}],
docsSidebar: [
'intro',
{type: 'category', label: "Protocol", items: [
{type: 'autogenerated', dirName: 'protocol'}
]},
{type: 'category', label: "Node Operators", items: [
{type: 'autogenerated', dirName: 'node-operators'}
]}
],
};

export default sidebars;
