Celestia Improvement Proposal (CIP) process
Read CIP-1 for information on the CIP process.
Meetings
| № | Date | Agenda | Notes | Recording |
| --- | --- | --- | --- | --- |
| 1 | November 29, 2023 | Agenda | Tweet Thread | Recording |
| 2 | December 13, 2023 | Agenda | Tweet Thread | Recording |
| 3 | January 24, 2024 | Agenda | Tweet Thread | Recording |
| 4 | February 6, 2024 | Agenda | Tweet Thread | Recording |
| 5 | February 20, 2024 | Agenda | Tweet Thread | Recording |
| 6 | March 6, 2024 | Agenda | Tweet Thread | Recording |
| 7 | March 20, 2024 | Agenda | Tweet Thread | Recording |
| 8 | April 3, 2024 | Agenda | Tweet Thread | Recording |
| 9 | April 17, 2024 | Agenda | Tweet Thread | Recording |
| 10 | April 30, 2024 | Agenda | Tweet Thread | Recording |
| 11 | May 22, 2024 | Agenda | Tweet Thread | Recording |
| 12 | June 5, 2024 | Agenda | Tweet Thread | Recording |
| 13 | July 3, 2024 | Agenda | Tweet Thread | Recording |
| 14 | July 24, 2024 | Agenda | Tweet Thread, Notes | Recording |
| 15 | August 7, 2024 | Agenda | Notes | Recording |
| 16 | September 4, 2024 | Agenda | TBD | Recording |
| 17 | October 2, 2024 | Agenda | Tweet Thread | Recording |
| 18 | October 16, 2024 | Agenda | TBD | Recording |
| 19 | November 6, 2024 | Agenda | TBD | Set reminder |
Celestia Improvement Proposals (CIPs)
| № | Title | Author(s) |
| --- | --- | --- |
| 1 | Celestia Improvement Proposal Process and Guidelines | Yaz Khoury [email protected] |
| 2 | CIP Editor Handbook | Yaz Khoury (@YazzyYaz) |
| 3 | Process for Approving External Resources | Yaz Khoury (@YazzyYaz) |
| 4 | Standardize data expiry time for pruned nodes | Mustafa Al-Bassam (@musalbas), Rene Lubov (@renaynay), Ramin Keene (@ramin) |
| 5 | Rename data availability to data publication | msfew (@fewwwww), Kartin, Xiaohang Yu (@xhyumiracle) |
| 6 | Enforce payment of the gas for a transaction based on a global minimum price | Callum Waters (@cmwaters) |
| 7 | Managing Working Groups in the Celestia Improvement Proposal Process | Yaz Khoury [email protected] |
| 8 | Roles and Responsibilities of Working Group Chairs in the CIP Process | Yaz Khoury [email protected] |
| 9 | Packet Forward Middleware | Alex Cheng (@akc2267) |
| 10 | Coordinated network upgrades | Callum Waters (@cmwaters) |
| 11 | Refund unspent gas | Rootul Patel (@rootulp) |
| 12 | ICS-29 Relayer Incentivisation Middleware | Susannah Evans [email protected] (@womensrights), Aditya Sripal [email protected] (@AdityaSripal) |
| 13 | On-chain Governance Parameters for Celestia Network | Yaz Khoury [email protected], Evan Forbes [email protected] |
| 14 | ICS-27 Interchain Accounts | Susannah Evans [email protected] (@womensrights), Aidan Salzmann [email protected] (@asalzmann), Sam Pochyly [email protected] (@sampocs) |
| 15 | Discourage memo usage | Rootul Patel (@rootulp), NashQueue (@nashqueue) |
| 16 | Make Security Related Governance Parameters Immutable | Mingpei CAO (@caomingpei) |
| 17 | Lemongrass Network Upgrade | Evan Forbes (@evan-forbes) |
| 18 | Standardised Gas and Pricing Estimation Interface | Callum Waters (@cmwaters) |
| 19 | Shwap Protocol | Hlib Kanunnikov (@Wondertan) |
| 20 | Disable Blobstream module | Rootul Patel (@rootulp) |
| 21 | Introduce blob type with verified signer | Callum Waters (@cmwaters) |
| 22 | Removing the IndexWrapper | NashQueue (@Nashqueue) |
| 23 | Coordinated prevote times | Callum Waters (@cmwaters) |
| 24 | Versioned Gas Scheduler Variable | Nina Barbakadze (@ninabarbakadze) |
| 25 | Ginger Network Upgrade | Josh Stein (@jcstein), Nina Barbakadze (@ninabarbakadze) |
| 26 | Versioned timeouts | Josh Stein (@jcstein), Rootul Patel (@rootulp), Sanaz Taheri (@staheri14) |
| 27 | Block limits for number of PFBs and non-PFBs | Josh Stein (@jcstein), Nina Barbakadze (@ninabarbakadze), rach-id (@rach-id), Rootul Patel (@rootulp) |
| 28 | Transaction size limit | Josh Stein (@jcstein), Nina Barbakadze (@ninabarbakadze), Rootul Patel (@rootulp) |
Contributing
Files in this repo must conform to markdownlint. Install markdownlint (for example via `npm install -g markdownlint-cli`, assuming a Node.js toolchain) and then run:

```sh
markdownlint --config .markdownlint.yaml '**/*.md'
```
Running the site locally
Prerequisites: mdBook installed (for example via `cargo install mdbook`, assuming a Rust toolchain). Then run:

```sh
mdbook serve -o
```
| cip | 1 |
| --- | --- |
| title | Celestia Improvement Proposal Process and Guidelines |
| author | Yaz Khoury [email protected] |
| status | Living |
| type | Meta |
| created | 2023-04-13 |
Table of Contents
- What is a CIP?
- CIP Rationale
- CIP Types
- CIP Work Flow
- Shepherding a CIP
- Core CIPs
- CIP Process
- What belongs in a successful CIP?
- CIP Formats and Templates
- CIP Header Preamble
- author header
- discussions-to header
- type header
- category header
- created header
- requires header
- Linking to External Resources
- Data Availability Specifications
- Consensus Layer Specifications
- Networking Specifications
- Digital Object Identifier System
- Linking to other CIPs
- Auxiliary Files
- Transferring CIP Ownership
- CIP Editors
- CIP Editor Responsibilities
- Style Guide
- Titles
- Descriptions
- CIP numbers
- RFC 2119 and RFC 8174
- History
- Copyright
What is a CIP?
CIP stands for Celestia Improvement Proposal. A CIP is a design document providing information to the Celestia community, or describing a new feature for Celestia or its processes or environment. The CIP should provide a concise technical specification of the feature and a rationale for the feature. The CIP author is responsible for building consensus within the community and documenting dissenting opinions.
CIP Rationale
We intend CIPs to be the primary mechanisms for proposing new features, for collecting community technical input on an issue, and for documenting the design decisions that have gone into Celestia. Because the CIPs are maintained as text files in a versioned repository, their revision history is the historical record of the feature proposal.
For Celestia software clients and core devs, CIPs are a convenient way to track the progress of their implementation. Ideally, each implementation maintainer would list the CIPs that they have implemented. This will give end users a convenient way to know the current status of a given implementation or library.
CIP Types
There are three types of CIP:
- Standards Track CIP describes any change that affects
most or all Celestia implementations, such as a change to
the network protocol, a change in block or transaction
validity rules, proposed standards/conventions, or any
change or addition that affects the interoperability of
execution environments and rollups using Celestia. Standards
Track CIPs consist of three parts: a design document,
an implementation, and (if warranted) an update to the
formal specification. Furthermore, Standards Track CIPs
can be broken down into the following categories:
- Core: improvements requiring a consensus fork, as well as changes that are not necessarily consensus critical but may be relevant to “core dev” discussions (for example, validator/node strategy changes).
- Data Availability: improvements to the Data Availability layer that while not consensus breaking, would be relevant for nodes to upgrade to after.
- Networking: includes improvements around libp2p and the p2p layer in general.
- Interface: includes improvements around consensus and data availability client API/RPC specifications and standards, and also certain language-level standards like method names. The label “interface” aligns with the client repository and discussion should primarily occur in that repository before a CIP is submitted to the CIPs repository.
- CRC: Rollup standards and conventions, including standards for rollups such as token standards, name registries, URI schemes, library/package formats, and wallet formats that rely on the data availability layer for transaction submission to the Celestia Network.
- Meta CIP describes a process surrounding Celestia or proposes a change to (or an event in) a process. Meta CIPs are like Standards Track CIPs but apply to areas other than the Celestia protocol itself. They may propose an implementation, but not to Celestia’s codebase; they often require community consensus; unlike Informational CIPs, they are more than recommendations, and users are typically not free to ignore them. Examples include procedures, guidelines, changes to the decision-making process, and changes to the tools or environment used in Celestia development.
- Informational CIP describes a Celestia design issue, or provides general guidelines or information to the Celestia community, but does not propose a new feature. Informational CIPs do not necessarily represent Celestia community consensus or a recommendation, so users and implementers are free to ignore Informational CIPs or follow their advice.
It is highly recommended that a single CIP contain a single key proposal or new idea. The more focused the CIP, the more successful it tends to be. A change to one client doesn’t require a CIP; a change that affects multiple clients, or defines a standard for multiple apps to use, does.
A CIP must meet certain minimum criteria. It must be a clear and complete description of the proposed enhancement. The enhancement must represent a net improvement. The proposed implementation, if applicable, must be solid and must not complicate the protocol unduly.
Celestia Improvement Proposal (CIP) Workflow
Shepherding a CIP
Parties involved in the process are you, the champion or CIP author, the CIP editors, and the Celestia Core Developers.
Before diving into writing a formal CIP, make sure your idea stands out. Consult the Celestia community to ensure your idea is original, saving precious time by avoiding duplication. We highly recommend opening a discussion thread on the Celestia forum for this purpose.
Once your idea passes the vetting process, your next responsibility is to present the idea via a CIP to reviewers and all interested parties. Invite editors, developers, and the community to give their valuable feedback through the relevant channels. Assess whether the interest in your CIP matches the work involved in implementing it and the number of parties required to adopt it. For instance, implementing a Core CIP demands considerably more effort than a CRC, necessitating adequate interest from Celestia client teams. Be aware that negative community feedback may hinder your CIP’s progression beyond the Draft stage.
Core CIPs
For Core CIPs, you’ll need to either provide a client implementation or persuade clients to implement your CIP, given that client implementations are mandatory for Core CIPs to reach the Final stage (see “CIP Process” below).
To effectively present your CIP to client implementers, request a Celestia CoreDevsCall (CDC) call by posting a comment linking your CIP on a CoreDevsCall agenda GitHub Issue.
The CoreDevsCall allows client implementers to:
- Discuss the technical merits of CIPs
- Gauge which CIPs other clients will be implementing
- Coordinate CIP implementation for network upgrades
These calls generally lead to a “rough consensus” on which CIPs should be implemented. Rough consensus is based on the IETF’s RFC 7282, which is a helpful document for understanding how decisions are made in Celestia CoreDevCalls. This consensus assumes that CIPs are not contentious enough to cause a network split and are technically sound. One important excerpt from the document, drawing on Dave Clark’s 1992 presentation, is the following:
We reject: kings, presidents and voting. We believe in: rough consensus and running code.
:warning: The burden falls on client implementers to estimate community sentiment, obstructing the technical coordination function of CIPs and AllCoreDevs calls. As a CIP shepherd, you can facilitate building community consensus by ensuring the Celestia forum thread for your CIP encompasses as much of the community discussion as possible and represents various stakeholders.
In a nutshell, your role as a champion involves writing the CIP using the style and format described below, guiding discussions in appropriate forums, and fostering community consensus around the idea.
CIP Process
The standardization process for all CIPs in all tracks follows the statuses below:
- Idea: A pre-draft idea not tracked within the CIP Repository.
- Draft: The first formally tracked stage of a CIP in development.
A CIP is merged by a CIP Editor into the CIP repository when properly
formatted.
- ➡️ Draft: If agreeable, the CIP editor will assign the CIP a number (generally the issue or PR number related to the CIP) and merge your pull request. The CIP editor will not unreasonably deny a CIP.
- ❌ Draft: Reasons for denying Draft status include being too unfocused, too broad, duplication of effort, being technically unsound, not providing proper motivation or addressing backwards compatibility, or not in keeping with the Celestia values and code of conduct.
- Review: A CIP Author marks a CIP as ready for and requesting Peer Review.
- Last Call: The final review window for a CIP before moving to
Final. A CIP editor assigns Last Call status and sets a review end
date (last-call-deadline), typically 14 days later.
- ❌ Review: A Last Call which results in material changes or substantial unaddressed technical complaints will cause the CIP to revert to Review.
- ✅ Final: A successful Last Call without material changes or unaddressed technical complaints will become Final.
- Final: This CIP represents the final standard. A Final CIP exists in a state of finality and should only be updated to correct errata and add non-normative clarifications. A PR moving a CIP from Last Call to Final should contain no changes other than the status update. Any content or editorial proposed change should be separate from this status-updating PR and committed prior to it.
Other Statuses
- Stagnant: Any CIP in Draft, Review, or Last Call that remains inactive for 6 months or more is moved to Stagnant. Authors or CIP Editors can resurrect a proposal from this state by moving it back to Draft or its earlier status. If not resurrected, a proposal may stay forever in this status.
- Withdrawn: The CIP Author(s) have withdrawn the proposed CIP. This state has finality and can no longer be resurrected using this CIP number. If the idea is pursued at a later date, it is considered a new proposal.
- Living: A special status for CIPs designed to be continually updated and not reach a state of finality. This status caters to dynamic CIPs that require ongoing updates.
As you embark on this exciting journey of shaping Celestia’s future with your valuable ideas, remember that your contributions matter. Your technical knowledge, creativity, and ability to bring people together will ensure that the CIP process remains engaging, efficient, and successful in fostering a thriving ecosystem for Celestia.
What belongs in a successful CIP?
A successful Celestia Improvement Proposal (CIP) should consist of the following parts:
- Preamble: RFC 822 style headers containing metadata about the CIP, including the CIP number, a short descriptive title (limited to a maximum of 44 characters), a description (limited to a maximum of 140 characters), and the author details. Regardless of the category, the title and description should not include the CIP number. See below for details.
- Abstract: A multi-sentence (short paragraph) technical summary that provides a terse and human-readable version of the specification section. By reading the abstract alone, someone should be able to grasp the essence of what the proposal entails.
- Motivation (optional): A motivation section is crucial for CIPs that seek to change the Celestia protocol. It should clearly explain why the existing protocol specification is insufficient for addressing the problem the CIP solves. If the motivation is evident, this section can be omitted.
- Specification: The technical specification should describe the syntax and semantics of any new feature. The specification should be detailed enough to enable competing, interoperable implementations for any of the current Celestia platforms.
- Parameters: Summary of any parameters introduced by or changed by the CIP.
- Rationale: The rationale elaborates on the specification by explaining the reasoning behind the design and the choices made during the design process. It should discuss alternative designs that were considered and any related work. The rationale should address important objections or concerns raised during discussions around the CIP.
- Backwards Compatibility (optional): For CIPs introducing backwards incompatibilities, this section must describe these incompatibilities and their consequences. The CIP must explain how the author proposes to handle these incompatibilities. If the proposal does not introduce any backwards incompatibilities, this section can be omitted.
- Test Cases (optional): Test cases are mandatory for CIPs affecting consensus changes. They should either be inlined in the CIP as data (such as input/expected output pairs) or included in `../assets/cip-###/<filename>`. This section can be omitted for non-Core proposals.
- Reference Implementation (optional): This optional section contains a reference/example implementation that people can use to better understand or implement the specification. This section can be omitted for all CIPs (mandatory for Core CIPs to reach the Final stage).
- Security Considerations: All CIPs must include a section discussing relevant security implications and considerations. This section should provide information critical for security discussions, expose risks, and be used throughout the proposal’s life-cycle. Examples include security-relevant design decisions, concerns, significant discussions, implementation-specific guidance, pitfalls, an outline of threats and risks, and how they are addressed. CIP submissions lacking a “Security Considerations” section will be rejected. A CIP cannot reach “Final” status without a Security Considerations discussion deemed sufficient by the reviewers.
- Copyright Waiver: All CIPs must be in the public domain. The copyright waiver MUST link to the license file and use the following wording: Copyright and related rights waived via CC0.
CIP Formats and Templates
CIPs should be written in markdown format. There is a CIP template to follow.
CIP Header Preamble
Each CIP must begin with an RFC 822 style header preamble in a markdown table. In order to display on the CIP site, the frontmatter must be formatted in a markdown table. The headers must appear in the following order:
`cip`
: CIP number (this is determined by the CIP editor)

`title`
: The CIP title is a few words, not a complete sentence

`description`
: Description is one full (short) sentence

`author`
: The list of the author’s or authors’ name(s) and/or username(s), or name(s) and email(s). Details are below.

`discussions-to`
: The url pointing to the official discussion thread

`status`
: Draft, Review, Last Call, Final, Stagnant, Withdrawn, Living

`last-call-deadline`
: The date last call period ends on (Optional field, only needed when status is Last Call)

`type`
: One of Standards Track, Meta, or Informational

`category`
: One of Core, Data Availability, Networking, Interface, or CRC (Optional field, only needed for Standards Track CIPs)

`created`
: Date the CIP was created on

`requires`
: CIP number(s) (Optional field)

`withdrawal-reason`
: A sentence explaining why the CIP was withdrawn. (Optional field, only needed when status is Withdrawn)
Headers that permit lists must separate elements with commas.
Headers requiring dates will always do so in the format of ISO 8601 (yyyy-mm-dd).
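For illustration, the preamble of a hypothetical Standards Track draft might look like the following markdown table (all values are placeholders, not a real CIP):

```markdown
| cip | NNN |
| --- | --- |
| title | Example proposal title |
| description | A one-sentence description of the proposed change |
| author | Random J. User (@username) |
| discussions-to | https://forum.celestia.org/t/example-thread |
| status | Draft |
| type | Standards Track |
| category | Core |
| created | 2024-01-01 |
```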
`author` header

The `author` header lists the names, email addresses or usernames of the authors/owners of the CIP. Those who prefer anonymity may use a username only, or a first name and a username. The format of the `author` header value must be:

Random J. User <[email protected]>

or

Random J. User (@username)

or

Random J. User (@username) <[email protected]>

if the email address and/or GitHub username is included, and

Random J. User

if neither the email address nor the GitHub username are given.

At least one author must use a GitHub username, in order to get notified on change requests and have the capability to approve or reject them.
`discussions-to` header

While a CIP is a draft, a `discussions-to` header will indicate the URL where the CIP is being discussed.

The preferred discussion URL is a topic on Celestia Forums. The URL cannot point to GitHub pull requests, any URL which is ephemeral, or any URL which can get locked over time (e.g. Reddit topics).
`type` header

The `type` header specifies the type of CIP: Standards Track, Meta, or Informational. If the track is Standards please include the subcategory (core, data availability, networking, interface, or CRC).
`category` header

The `category` header specifies the CIP’s category. This is required for standards-track CIPs only.
`created` header

The `created` header records the date that the CIP was assigned a number. It should be in yyyy-mm-dd format, e.g. 2001-08-14.
`requires` header

CIPs may have a `requires` header, indicating the CIP numbers that this CIP depends on. If such a dependency exists, this field is required.

A `requires` dependency is created when the current CIP cannot be understood or implemented without a concept or technical element from another CIP. Merely mentioning another CIP does not necessarily create such a dependency.
Linking to External Resources
Other than the specific exceptions listed below, links to external resources SHOULD NOT be included. External resources may disappear, move, or change unexpectedly.
The process governing permitted external resources is described in CIP-3.
Data Availability Client Specifications
Links to the Celestia Data Availability Client Specifications may be included using normal markdown syntax, such as:
[Celestia Data Availability Client Specifications](https://github.com/celestiaorg/celestia-specs)
Which renders to:
Celestia Data Availability Client Specifications
Consensus Layer Specifications
Links to specific commits of files within the Celestia Consensus Layer Specifications may be included using normal markdown syntax, such as:
[Celestia Consensus Layer Client Specifications](https://github.com/celestiaorg/celestia-specs)
Which renders to:
Celestia Consensus Layer Client Specifications
Networking Specifications
Links to specific commits of files within the Celestia Networking Specifications may be included using normal markdown syntax, such as:
[Celestia P2P Layer Specifications](https://github.com/celestiaorg/celestia-specs)
Which renders as:
Celestia P2P Layer Specifications
Digital Object Identifier System
Links qualified with a Digital Object Identifier (DOI) may be included using the following syntax:
This is a sentence with a footnote.[^1]
[^1]:
```csl-json
{
"type": "article",
"id": 1,
"author": [
{
"family": "Khoury",
"given": "Yaz"
}
],
"DOI": "00.0000/a00000-000-0000-y",
"title": "An Awesome Article",
"original-date": {
"date-parts": [
[2022, 12, 31]
]
},
"URL": "https://sly-hub.invalid/00.0000/a00000-000-0000-y",
"custom": {
"additional-urls": [
"https://example.com/an-interesting-article.pdf"
]
}
}
```
Which renders to:
This is a sentence with a footnote.1
See the Citation Style Language Schema for the supported fields. In addition to passing validation against that schema, references must include a DOI and at least one URL.
The top-level URL field must resolve to a copy of the referenced document which can be viewed at zero cost. Values under `additional-urls` must also resolve to a copy of the referenced document, but may charge a fee.
Linking to other CIPs
References to other CIPs should follow the format `CIP-N` where `N` is the CIP number you are referring to. Each CIP that is referenced in a CIP MUST be accompanied by a relative markdown link the first time it is referenced, and MAY be accompanied by a link on subsequent references. The link MUST always be done via relative paths so that the links work in this GitHub repository, forks of this repository, the main CIPs site, mirrors of the main CIP site, etc. For example, you would link to this CIP as `./cip-1.md`.
Auxiliary Files
Images, diagrams and auxiliary files should be included in a subdirectory of the `assets` folder for that CIP as follows: `assets/cip-N` (where N is to be replaced with the CIP number). When linking to an image in the CIP, use relative links such as `../assets/cip-1/image.png`.
Transferring CIP Ownership
It occasionally becomes necessary to transfer ownership of CIPs to a new champion. In general, we’d like to retain the original author as a co-author of the transferred CIP, but that’s really up to the original author. A good reason to transfer ownership is because the original author no longer has the time or interest in updating it or following through with the CIP process, or has fallen off the face of the ’net (i.e. is unreachable or isn’t responding to email). A bad reason to transfer ownership is because you don’t agree with the direction of the CIP. We try to build consensus around a CIP, but if that’s not possible, you can always submit a competing CIP.
If you are interested in assuming ownership of a CIP, send a message asking to take over, addressed to both the original author and the CIP editor. If the original author doesn’t respond to the email in a timely manner, the CIP editor will make a unilateral decision (it’s not like such decisions can’t be reversed :)).
CIP Editors
The current CIP editors are
Emeritus CIP editors are
- Yaz Khoury (@YazzyYaz)
If you would like to become a CIP editor, please check CIP-2.
CIP Editor Responsibilities
For each new CIP that comes in, an editor does the following:
- Read the CIP to check if it is ready: sound and complete. The ideas must make technical sense, even if they don’t seem likely to get to final status.
- The title should accurately describe the content.
- Check the CIP for language (spelling, grammar, sentence structure, etc.), markup (GitHub flavored Markdown), and code style.
If the CIP isn’t ready, the editor will send it back to the author for revision, with specific instructions.
Once the CIP is ready for the repository, the CIP editor will:
- Assign a CIP number (generally the next unused CIP number, but the decision is with the editors)
- Merge the corresponding pull request
- Send a message back to the CIP author with the next step.
Many CIPs are written and maintained by developers with write access to the Celestia codebase. The CIP editors monitor CIP changes, and correct any structure, grammar, spelling, or markup mistakes we see.
The editors don’t pass judgment on CIPs. We merely do the administrative & editorial part.
Style Guide
Titles
The `title` field in the preamble:

- Should not include the word “standard” or any variation thereof; and
- Should not include the CIP’s number.

Descriptions

The `description` field in the preamble:

- Should not include the word “standard” or any variation thereof; and
- Should not include the CIP’s number.

CIP numbers

When referring to a CIP with a `category` of `CRC`, it must be written in the hyphenated form `CRC-X` where `X` is that CIP’s assigned number. When referring to CIPs with any other `category`, it must be written in the hyphenated form `CIP-X` where `X` is that CIP’s assigned number.
RFC 2119 and RFC 8174
CIPs are encouraged to follow RFC 2119 and RFC 8174 for terminology and to insert the following at the beginning of the Specification section:
The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “NOT RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in RFC 2119 and RFC 8174.
History
This document was derived heavily from Ethereum’s EIP Process written by Hudson Jameson which is derived from Bitcoin’s BIP-0001 written by Amir Taaki which in turn was derived from Python’s PEP-0001. In many places text was simply copied and modified. Although the PEP-0001 text was written by Barry Warsaw, Jeremy Hylton, and David Goodger, they are not responsible for its use in the Celestia Improvement Process, and should not be bothered with technical questions specific to Celestia or the CIP. Please direct all comments to the CIP editors.
Copyright
Copyright and related rights waived via CC0.
| cip | 2 |
| --- | --- |
| title | CIP Editor Handbook |
| description | Handy reference for CIP editors and those who want to become one |
| author | Yaz Khoury (@YazzyYaz) |
| discussions-to | https://forum.celestia.org |
| status | Draft |
| type | Informational |
| created | 2023-04-13 |
| requires | CIP-1 |
Abstract
CIP stands for Celestia Improvement Proposal. A CIP is a design document providing information to the Celestia community, or describing a new feature for Celestia or its processes or environment. The CIP should provide a concise technical specification of the feature and a rationale for the feature. The CIP author is responsible for building consensus within the community and documenting dissenting opinions.
This CIP describes the recommended process for becoming a CIP editor.
Specification
Application and Onboarding Process
Anyone having a good understanding of the CIP standardization and network upgrade process, intermediate level experience on the core side of the Celestia blockchain, and willingness to contribute to the process management may apply to become a CIP editor. Potential CIP editors should have the following skills:
- Good communication skills
- Ability to handle contentious discourse
- 1-5 spare hours per week
- Ability to understand “rough consensus”
The best available resource to understand the CIP process is CIP-1. Anyone desirous of becoming a CIP editor MUST understand this document. Afterwards, participating in the CIP process by commenting on and suggesting improvements to PRs and issues will familiarize them with the procedure, and is recommended. The contributions of newer editors should be monitored by other CIP editors.
Anyone meeting the above requirements may make a pull request adding themselves as a CIP editor and adding themselves to the editor list in CIP-1. If every existing CIP editor approves, the author becomes a full CIP editor. Becoming an editor means they will be notified of relevant new proposals submitted in the CIPs repository, and they should review and merge those pull requests.
Copyright
Copyright and related rights waived via CC0.
| cip | 3 |
| --- | --- |
| title | Process for Approving External Resources |
| description | Requirements and process for allowing new origins of external resources |
| author | Yaz Khoury (@YazzyYaz) |
| discussions-to | https://forum.celestia.org |
| status | Draft |
| type | Meta |
| created | 2023-04-13 |
| requires | CIP-1 |
Abstract
Celestia improvement proposals (CIPs) occasionally link to resources external to this repository. This document sets out the requirements for origins that may be linked to, and the process for approving a new origin.
Specification
The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in RFC 2119.
Definitions
- Link: Any method of referring to a resource, including: markdown links, anchor tags (`<a>`), images, citations of books/journals, and any other method of referencing content not in the current resource.
- Resource: A web page, document, article, file, book, or other media that contains content.
- Origin: A publisher/chronicler of resources, like a standards body (eg. w3c) or a system of referring to documents (eg. Digital Object Identifier System).
Requirements for Origins
Permissible origins MUST provide a method of uniquely identifying a particular revision of a resource. Examples of such methods may include git commit hashes, version numbers, or publication dates.
Permissible origins MUST have a proven history of availability. An origin existing for at least ten years and reliably serving resources would be sufficient—but not necessary—to satisfy this requirement.
Permissible origins MUST NOT charge a fee for accessing resources.
Origin Removal
Any approved origin that ceases to satisfy the above requirements MUST be removed from CIP-1. If a removed origin later satisfies the requirements again, it MAY be re-approved by following the process described in Origin Approval.
Finalized CIPs (eg. those in the `Final` or `Withdrawn` statuses) SHOULD NOT be updated to remove links to these origins.

Non-Finalized CIPs MUST remove links to these origins before changing statuses.
Origin Approval
Should the editors determine that an origin meets the requirements above, CIP-1 MUST be updated to include:
- The name of the allowed origin;
- The permitted markup and formatting required when referring to resources from the origin; and
- A fully rendered example of what a link should look like.
Rationale
Unique Identifiers
If it is impossible to uniquely identify a version of a resource, it becomes impractical to track changes, which makes it difficult to ensure immutability.
Availability
If it is possible to implement a standard without a linked resource, then the linked resource is unnecessary. If it is impossible to implement a standard without a linked resource, then that resource must be available for implementers.
Free Access
The Celestia ecosystem is built on openness and free access, and the CIP process should follow those principles.
Copyright
Copyright and related rights waived via CC0.
| cip | 4 |
| --- | --- |
| title | Standardize data expiry time for pruned nodes |
| description | Standardize default data expiry time for pruned nodes to 30 days + 1 hour worth of seconds (2595600 seconds). |
| author | Mustafa Al-Bassam (@musalbas), Rene Lubov (@renaynay), Ramin Keene (@ramin) |
| discussions-to | https://forum.celestia.org/t/cip-standardize-data-expiry-time-for-pruned-nodes/1326 |
| status | Final |
| type | Standards Track |
| category | Data Availability |
| created | 2023-11-23 |
Abstract
This CIP standardizes the default expiry time of historical blocks for pruned (non-archival) nodes to 30 days + 1 hour worth of seconds (2595600 seconds).
Motivation
The purpose of data availability layers such as Celestia is to ensure that block data is provably published to the Internet, so that applications and rollups can know what the state of their chain is, and store that data. Once the data is published, data availability layers do not inherently guarantee that historical data will be permanently stored and remain retrievable. This task is left to block archival nodes on the network, which may be run by professional service providers.
Block archival nodes are nodes that store a full copy of the historical chain, whereas pruned nodes store only the latest blocks. Consensus nodes running Tendermint are able to prune blocks by specifying a `min-retain-blocks` parameter in their configuration. Data availability nodes running celestia-node will also soon have the ability to prune blocks.
It is useful to standardize a default expiry time for blocks for pruned nodes, so that:
- Rollups and applications have an expectation of how long data will be retrievable from pruned nodes before it can only be retrieved from block archival nodes.
- Light nodes that want to query data in namespaces can discover pruned nodes over the peer-to-peer network and know which blocks they likely have, versus non-pruned nodes.
Specification
The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “NOT RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in RFC 2119 and RFC 8174.
Nodes that prune block data SHOULD store and distribute data in blocks that were created in the last 30 days + 1 hour worth of seconds (2595600 seconds). The additional 1 hour acts as a buffer to account for clock drift.
On the Celestia data availability network, both pruned and non-pruned nodes MAY advertise themselves under the existing `full` peer discovery tag, in which case the nodes MUST store and distribute data in blocks that were created in the last 30 days + 1 hour worth of seconds (2595600 seconds).

Non-pruned nodes MAY advertise themselves under a new `archival` tag, in which case the nodes MUST store and distribute data in all blocks.
Data availability sampling light nodes SHOULD sample blocks created in the last 30 days worth of seconds (the sampling window of 2592000 seconds).
Definitions
Sampling Window - the period within which light nodes should sample blocks, specified at 30 days worth of seconds.
Pruning Window - the period within which both pruned and non-pruned full storage nodes must store and distribute data in blocks, specified at 30 days + 1 hour worth of seconds.
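To make the two windows concrete, they can be expressed as Go durations; this is an illustrative sketch (the constant names are not taken from celestia-node):

```go
package main

import (
	"fmt"
	"time"
)

const (
	day = 24 * time.Hour

	// Sampling Window: 30 days worth of seconds (2592000 seconds).
	samplingWindow = 30 * day

	// Pruning Window: the sampling window plus a 1 hour buffer to
	// account for clock drift (2595600 seconds).
	pruningWindow = samplingWindow + time.Hour
)

func main() {
	fmt.Println(samplingWindow.Seconds()) // 2.592e+06
	fmt.Println(pruningWindow.Seconds())  // 2.5956e+06
}
```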
Rationale
30 days worth of seconds (2592000 seconds) is chosen for the following reasons:
- Data availability sampling light nodes need to at least sample data within the Tendermint weak subjectivity period of 21 days in order to independently verify the data availability of the chain, and so they need to be able to sample data up to at least 21 days old.
- 30 days worth of seconds (2592000 seconds) ought to be a reasonable amount of time for data to be downloaded from the chain by any application that needs it.
Backwards Compatibility
The implementation of pruned nodes will break backwards compatibility in a few ways:
- Light nodes running on older software (without the sampling window) will not be able to sample historical data (blocks older than 30 days) as nodes advertising on the `full` tag will no longer be expected to provide historical blocks.
- Similarly, full nodes running on older software will not be able to sync historical blocks without discovering non-pruned nodes on the `archival` tag.
- Requesting blobs from historical blocks via a light node or full node will not be possible without discovering non-pruned nodes on the `archival` tag.
Reference Implementation
Data Availability Sampling Window (light nodes)
Implementation for light nodes can be quite simple, where a satisfactory implementation merely behaves in that the choice to sample headers should not occur for headers whose timestamp is outside the given sampling window.
Given a hypothetical “sample” function that performs data availability sampling of incoming extended headers from the network, the decision to sample or not should be taken by inspecting the header’s timestamp, and ignoring it in any sampling operation if the duration between the header’s timestamp and the current time exceeds the duration of the sampling window. For example:
```go
const windowSize = time.Second * 86400 * 30 // 30 days worth of seconds (2592000 seconds)

func sample(header Header) error {
	if time.Since(header.Time()) > windowSize {
		return nil // do not perform any sampling
	}
	// continue with rest of sampling operation
	return nil
}
```
Example implementation by celestia node
Storage Pruning
Pruning of data outside the availability window will be highly implementation specific and dependent on how data storage is engineered.
A satisfactory implementation would be where any node implementing storage pruning may, if NOT advertising oneself to peers as an archival node on the ‘full’ topic, discard stored data outside the 30 day + 1 hour worth of seconds (2595600 seconds) availability window. A variety of options exist for how any implementation might schedule pruning of data, and there are no requirements around how this is implemented. The only requirement is merely that the time guarantees around data within the availability window are properly respected, and that data availability nodes correctly advertise themselves to peers.
An example implementation of storage pruning (WIP at time of writing) in celestia node
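For illustration only (this is not the celestia-node implementation), a pruning pass could be scheduled as a simple ticker loop; the `Store` interface below is hypothetical and stands in for whatever storage layer an implementation uses:

```go
import (
	"context"
	"time"
)

// pruningWindow is 30 days + 1 hour worth of seconds (2595600 seconds).
const pruningWindow = 30*24*time.Hour + time.Hour

// Store is a hypothetical storage layer that can discard block data
// older than a cutoff time.
type Store interface {
	DeleteBefore(cutoff time.Time) error
}

// pruneLoop periodically discards stored data that has fallen outside
// the availability window. Error handling is elided for brevity.
func pruneLoop(ctx context.Context, store Store) {
	ticker := time.NewTicker(time.Hour)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			cutoff := time.Now().Add(-pruningWindow)
			_ = store.DeleteBefore(cutoff)
		}
	}
}
```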
Security Considerations
As discussed in Rationale, data availability sampling light nodes need to at least sample data within the Tendermint weak subjectivity period of 21 days in order to independently verify the data availability of the chain. 30 days of seconds (2592000 seconds) exceeds this.
Copyright
Copyright and related rights waived via CC0.
| cip | 5 |
| --- | --- |
| title | Rename data availability to data publication |
| description | Renaming data availability to data publication to better reflect the message |
| author | msfew (@fewwwww) [email protected], Kartin [email protected], Xiaohang Yu (@xhyumiracle) |
| discussions-to | https://forum.celestia.org/t/informational-cip-rename-data-availability-to-data-publication/1287 |
| status | Review |
| type | Informational |
| created | 2023-11-06 |
Abstract
The term `data availability` isn’t as straightforward as it should be and could lead to misunderstandings within the community. To address this, this CIP proposes replacing `data availability` with `data publication`.
Motivation
The term `data availability` has caused confusion within the community due to its lack of intuitive clarity. For instance, in Celestia’s Glossary, there isn’t a clear definition of `data availability`; instead, it states that `data availability` addresses the question of whether this data has been published. Additionally, numerous community members have misinterpreted `data availability` as meaning `data storage`.
Specification
The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “NOT RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in RFC 2119 and RFC 8174.
The term `data availability` is RECOMMENDED to be renamed to `data publication`.

`Data availability` in existing works, such as research papers and docs, and cohesive terms, such as `data availability sampling`, MAY retain the existing wording.
Rationale
Motivations:

- `Data publication` is the act of making data publicly accessible. In Celestia’s context, it means the block data was actually published to the network and ready to be accessed, downloaded, and verified. This aligns more precisely with the intended meaning, which revolves around whether data has been published.
- The community already favors and commonly uses the term `data publication`.
- `Data publication` maintains a similar structure to `data availability`, making it easier for those familiar with the latter term to transition.

Alternative designs:

- `Proof of publication`: While intuitive, it differs in structure from `data availability` and may be too closely associated with terms like `proof of work`, potentially causing confusion within consensus-related mechanisms.
- `Data availability proof`: While logically coherent, it may create issues when used in conjunction with other terms, as the emphasis falls on “proof”. For instance, “verify a rollup’s data availability” and “verify a rollup’s data availability proof” might not refer to the same concept.
- `Data caching`: While indicative of the intended time frame of the existence of proof of publication, the term “caching” is not widely adopted within the context of blockchain networks.
Copyright
Copyright and related rights waived via CC0.
| cip | 6 |
| --- | --- |
| title | Minimum gas price enforcement |
| description | Enforce payment of the gas for a transaction based on a governance modifiable global minimum gas price |
| author | Callum Waters (@cmwaters) |
| discussions-to | https://forum.celestia.org/t/cip-006-price-enforcement/1351 |
| status | Final |
| type | Standards Track |
| category | Core |
| created | 2023-11-30 |
Abstract
Implement a global, consensus-enforced minimum gas price on all transactions. Ensure that all transactions can be decoded and have a valid signer with sufficient balance to cover the cost of the gas allocated in the transaction. The minimum gas price can be modified via on-chain governance.
| Parameter | Default | Summary | Changeable via Governance |
| --- | --- | --- | --- |
| minfee.MinimumGasPrice | 0.000001 utia | Globally set minimum price per unit of gas | True |
Motivation
The Celestia network was launched with the focus on having all the necessary protocols in place to provide secure data availability first and to focus on building an effective fee market system that correctly captures that value afterwards.
This is not to say that no fee market system exists. Celestia inherited the default system provided by the Cosmos SDK. However, the present system has several inadequacies that need to be addressed in order to achieve better pricing certainty and transaction guarantees for its users and to find a “fair” price for both the sellers (validators) and buyers (rollups) of data availability.
This proposal should be viewed as a foundational component of a broader effort and thus its scope is strictly focused towards the enforcement of some minimum fee: ensuring that the value captured goes to those that provided that value. It does not pertain to actual pricing mechanisms, tipping, refunds, futures and other possible future works. Dynamic systems like EIP-1559 and uniform price auctions can and should be prioritised only once the network starts to experience congestion over block space.
Specification
The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “NOT RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in RFC 2119 and RFC 8174.
All transactions MUST be decodable by the network. They MUST have a recognised signer, a signature that authenticates the signer, a fee and a gas. This does not imply that they need be completely valid, they just need a minimum degree of validity that allows the network to charge the signer for the allocation of gas for performing the computation, networking and storage.
We define the gas price as the fee divided by the gas. In other words, this is the amount of utia paid per unit of gas. All transactions MUST have a gas price that is greater than or equal to a minimum network-wide agreed upon gas price.
Both of these rules are block validity rules. Correct validators will vote `nil` or against the proposal if the proposed block contains any transaction that violates these two rules.
The global minimum gas price can be modified via on-chain governance.
Note that validators may in addition set their own constraints as to what they deem acceptable in a proposed block. For example, they may only accept transactions with a gas price that is higher than their locally configured minimum.
This minimum gas price SHOULD be queryable by client implementations.
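As a minimal sketch of the two checks described above (the `Tx` struct and function names here are illustrative, not celestia-app types):

```go
// Tx captures only the fields relevant to the price check.
type Tx struct {
	Fee uint64 // fee in utia
	Gas uint64 // gas allocated; assumed non-zero
}

// gasPrice is the fee divided by the gas: utia paid per unit of gas.
func gasPrice(tx Tx) float64 {
	return float64(tx.Fee) / float64(tx.Gas)
}

// meetsGlobalMin reports whether the transaction's gas price is at
// least the network-wide minimum agreed via governance.
func meetsGlobalMin(tx Tx, globalMinGasPrice float64) bool {
	return gasPrice(tx) >= globalMinGasPrice
}
```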
Rationale
The primary rationale for this decision is to prevent the payment system for data availability from migrating off-chain and manifesting in secondary markets. As a concrete example, currently validators would earn more revenue if they convinced users to pay them out of band and set the transaction fee to 0 such that all revenue went to the proposer and none to the rest of the validators/delegators. This is known as off-chain agreements (OCA)
There are two other reasons:
- Better UX as clients or wallets can query the on-chain state for the global min gas price whereas currently each node might have a separate min gas price and given the proposer is anonymous it’s difficult to know whether the user is paying a sufficient fee.
- Easier to coordinate: a governance proposal that is passed automatically updates the state machine whereas to manually change requires telling all nodes to halt, modify their config, and restart their node
Lastly, this change removes the possible incongruity that would form when it comes to gossiping transactions when consensus nodes use different minimum gas prices.
The minimum gas price defaults to a negligible value (`0.000001`) because this is the minimum gas price that would result in a tx priority >= 1 given the current tx prioritization implementation:

$gasPrice = fee / gas$

$priority = fee * priorityScalingFactor / gas$

$priority = gasPrice * priorityScalingFactor$

Note that `priorityScalingFactor` is currently `1,000,000`, so the default gas price of 0.000001 utia yields a priority of exactly 1.
Backwards Compatibility
This requires a modification to the block validity rules and thus breaks the state machine. It will need to be introduced in a major release.
Wallets and other transaction submitting clients will need to monitor the minimum gas price and adjust accordingly.
Test Cases
The target for testing will be to remove the ability for block proposers to offer block space to users in a way that circumvents the fee system currently in place.
Reference Implementation
In order to ensure transaction validity with respect to having a minimum balance to cover the gas allocated, the `celestia-app` go implementation requires a small change to `ProcessProposal`, namely:

```diff
sdkTx, err := app.txConfig.TxDecoder()(tx)
if err != nil {
-	// we don't reject the block here because it is not a block validity
-	// rule that all transactions included in the block data are
-	// decodable
-	continue
+	return reject()
}
```
This will now reject undecodable transactions. Decodable transactions will still have to pass the `AnteHandlers` before being accepted in a block, so no further change is required.
A mechanism to enforce a minimum fee is already in place in the `DeductFeeDecorator`. Currently, the decorator uses a min gas price sourced from the validator’s local config. To introduce a network-wide constraint on min gas price, we introduce a new `param.Subspace=minfee`, which contains the global min gas price. If the param is unpopulated, it defaults to `0.002utia` (which matches the current local min gas price).
The `DeductFeeDecorator` antehandler will receive a new `ante.TxFeeChecker` function called `ValidateTxFee` which will have access to the same `param.Subspace`. For `CheckTx`, it will use the max of either the global min gas price or the local min gas price. For `PrepareProposal`, `ProcessProposal` and `DeliverTx` it will only check using the global min gas price and ignore the locally set min gas price.
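As a sketch of the mode-dependent selection described above (the function is hypothetical, not the actual `ValidateTxFee` implementation):

```go
import "math"

// enforcedMinGasPrice returns the minimum gas price a transaction must
// meet: CheckTx also respects the node's locally configured minimum,
// while PrepareProposal, ProcessProposal and DeliverTx use only the
// global governance-set minimum.
func enforcedMinGasPrice(isCheckTx bool, localMin, globalMin float64) float64 {
	if isCheckTx {
		return math.Max(localMin, globalMin)
	}
	return globalMin
}
```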
The minimum gas price can already be queried through the gRPC client as can any other parameter.
Security Considerations
Any modification to the block validity rules (through `PrepareProposal` and `ProcessProposal`) introduces implementation risk that may cause the chain to halt.
Given a voting period of one week, it will take at least one week for the network to update the minimum gas price. This could potentially be too slow given large swings in the underlying price of TIA.
Copyright
Copyright and related rights waived via CC0.
| cip | 7 |
| --- | --- |
| title | Managing Working Groups in the Celestia Improvement Proposal Process |
| description | A guide to effectively managing working groups within the Celestia Improvement Proposal process. |
| author | Yaz Khoury [email protected] |
| discussions-to | https://forum.celestia.org/t/cip-for-working-group-best-practices/1343 |
| status | Draft |
| type | Informational |
| created | 2023-11-29 |
Abstract
This document provides a detailed guide for managing working groups within the Celestia Improvement Proposal (CIP) process. It draws from best practices in organizations like the IETF, focusing on the formation, management, and closure of working groups, ensuring their alignment with the overarching goals of the Celestia network.
Motivation
The successful implementation of the CIP process requires a structured approach to managing working groups. These groups are pivotal in addressing various aspects of Celestia’s core protocol and ecosystem. Effective management ensures collaborative progress and helps align group outputs with Celestia’s broader objectives.
Specification
1. Formation of Working Groups
- Identify key areas needing attention.
- Announce formation and invite participation.
- Appoint chairs or leaders.
2. Defining Scope and Goals
- Draft a charter for each group.
- Set realistic and measurable goals.
3. Establishing Procedures
- Decide on a consensus process (e.g. rough consensus)
- Schedule regular meetings.
- Implement documentation and reporting systems for each meeting.
4. Collaboration and Communication
- Utilize tools like GitHub, Slack, Telegram and video conferencing.
- Foster an environment of open communication.
5. Conflict Resolution
- Establish a conflict resolution mechanism.
- Define the role of chairs in conflict management.
6. Review and Adaptation
- Regularly review progress.
- Adapt scope or processes as needed.
7. Integration with the Larger Process
- Ensure alignment with the overall CIP process.
- Create a feedback loop with the community.
8. Closure
- Define criteria for completion.
- Document and share outcomes.
- Conduct a retrospective.
Rationale
The rationale for this approach is based on established practices in standardization bodies. By applying these methods, Celestia can ensure that its working groups are productive, inclusive, and effectively contribute to the network’s development.
Security Considerations
The management of working groups primarily involves process and communication security. Ensuring transparent and secure communication channels and documenting management practices is essential.
Copyright
Copyright and related rights waived via CC0.
| cip | 8 |
| --- | --- |
| title | Roles and Responsibilities of Working Group Chairs in the CIP Process |
| description | Outlining the key roles and responsibilities of working group chairs within the Celestia Improvement Proposal process. |
| author | Yaz Khoury [email protected] |
| discussions-to | https://forum.celestia.org/t/cip-for-wg-chair-responsibilities/1344 |
| status | Draft |
| type | Informational |
| created | 2023-11-29 |
Abstract
This document details the roles and responsibilities of working group chairs in the Celestia Improvement Proposal (CIP) process. Inspired by best practices in standardization processes like Ethereum’s EIP and the IETF, it provides a comprehensive guide on chair duties ranging from facilitation of discussions to conflict resolution.
Motivation
Effective leadership within working groups is crucial for the success of the CIP process. Chairs play a pivotal role in guiding discussions, driving consensus, and ensuring that the groups’ efforts align with Celestia’s broader goals. This document aims to establish clear guidelines for these roles to enhance the efficiency and productivity of the working groups.
Specification
Roles of Working Group Chairs
- Facilitate Discussions: Chairs should ensure productive, focused, and inclusive meetings.
- Drive Consensus: Guide the group towards consensus and make decisions when necessary.
- Administrative Responsibilities: Oversee meeting scheduling, agenda setting, and record-keeping.
- Communication Liaison: Act as a bridge between the working group and the wider community, ensuring transparency and effective communication.
- Guide Work Forward: Monitor progress and address challenges, keeping the group on track with its goals.
- Ensure Adherence to Process: Uphold the established CIP process and guide members in following it.
- Conflict Resolution: Actively manage and resolve conflicts within the group.
- Reporting and Accountability: Provide regular reports on progress and be accountable for the group’s achievements.
Rationale
Drawing inspiration from established practices in bodies like the IETF and Ethereum’s EIP, this proposal aims to create a structured and effective approach to managing working groups in Celestia. Clear definition of chair roles will facilitate smoother operation, better decision-making, and enhanced collaboration within the groups.
Security Considerations
The primary security considerations involve maintaining the confidentiality and integrity of discussions and decisions made within the working groups. Chairs should ensure secure communication channels and safeguard sensitive information related to the group’s activities.
Copyright
Copyright and related rights waived via CC0.
| cip | 9 |
| --- | --- |
| title | Packet Forward Middleware |
| description | Adopt Packet Forward Middleware for multi-hop IBC and path unwinding |
| author | Alex Cheng (@akc2267) |
| discussions-to | https://forum.celestia.org/t/cip-packet-forward-middleware/1359 |
| status | Final |
| type | Standards Track |
| category | Core |
| created | 2023-12-01 |
Abstract
This CIP integrates Packet Forward Middleware, the IBC middleware that enables multi-hop IBC and path unwinding to preserve fungibility for IBC-transferred tokens.
Specification
The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “NOT RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in RFC 2119 and RFC 8174.
The packet-forward-middleware is an IBC middleware module built for Cosmos blockchains utilizing the IBC protocol. A chain which incorporates the packet-forward-middleware is able to route incoming IBC packets from a source chain to a destination chain.
- Celestia MUST import and integrate Packet Forward Middleware.
- This integration SHOULD use defaults for the following configs: Retries On Timeout, Timeout Period, Refund Timeout, and Fee Percentage.
  - Retries On Timeout: how many times a forward will be re-attempted in the case of a timeout.
  - Timeout Period: how long a forward can be in progress before giving up.
  - Refund Timeout: how long a forward can be in progress before issuing a refund back to the original source chain.
  - Fee Percentage: percentage of the forwarded packet amount which will be subtracted and distributed to the community pool.
- Celestia MAY choose different values for these configs if the community would rather have auto-retries, different timeout periods, and/or collect fees from forwarded packets.
Rationale
The defaults set in Packet Forward Middleware ensure sensible timeouts so user funds are returned in a timely manner after incomplete transfers. The Timeout Period follows IBC defaults and the Refund Timeout is 28 days, ensuring funds don't remain stuck in the packet forward module. Retries On Timeout defaults to 0, as app developers or CLI users may want to control retries themselves. Fee Percentage defaults to 0 for superior user experience; however, the Celestia community may decide to collect fees as a revenue source.
Backwards Compatibility
No backward compatibility issues found.
Reference Implementation
The integration steps include the following:
- Import the PFM, initialize the PFM Module & Keeper, initialize the store keys and module params, and initialize the Begin/End Block logic and InitGenesis order.
- Configure the IBC application stack (including the transfer module).
- Configuration of additional options such as timeout period, number of retries on timeout, refund timeout period, and fee percentage.
Integration of the PFM should take approximately 20 minutes.
Example integration of the Packet Forward Middleware
// app.go
// Import the packet forward middleware
import (
"github.com/cosmos/ibc-apps/middleware/packet-forward-middleware/v7/packetforward"
packetforwardkeeper "github.com/cosmos/ibc-apps/middleware/packet-forward-middleware/v7/packetforward/keeper"
packetforwardtypes "github.com/cosmos/ibc-apps/middleware/packet-forward-middleware/v7/packetforward/types"
)
...
// Register the AppModule for the packet forward middleware module
ModuleBasics = module.NewBasicManager(
...
packetforward.AppModuleBasic{},
...
)
...
// Add packet forward middleware Keeper
type App struct {
...
PacketForwardKeeper *packetforwardkeeper.Keeper
...
}
...
// Create store keys
keys := sdk.NewKVStoreKeys(
...
packetforwardtypes.StoreKey,
...
)
...
// Initialize the packet forward middleware Keeper
// It's important to note that the PFM Keeper must be initialized before the Transfer Keeper
app.PacketForwardKeeper = packetforwardkeeper.NewKeeper(
appCodec,
keys[packetforwardtypes.StoreKey],
app.GetSubspace(packetforwardtypes.ModuleName),
app.TransferKeeper, // will be zero-value here, reference is set later on with SetTransferKeeper.
app.IBCKeeper.ChannelKeeper,
app.DistrKeeper,
app.BankKeeper,
app.IBCKeeper.ChannelKeeper,
)
// Initialize the transfer module Keeper
app.TransferKeeper = ibctransferkeeper.NewKeeper(
appCodec,
keys[ibctransfertypes.StoreKey],
app.GetSubspace(ibctransfertypes.ModuleName),
app.PacketForwardKeeper,
app.IBCKeeper.ChannelKeeper,
&app.IBCKeeper.PortKeeper,
app.AccountKeeper,
app.BankKeeper,
scopedTransferKeeper,
)
app.PacketForwardKeeper.SetTransferKeeper(app.TransferKeeper)
// See the section below for configuring an application stack with the packet forward middleware
...
// Register packet forward middleware AppModule
app.moduleManager = module.NewManager(
...
packetforward.NewAppModule(app.PacketForwardKeeper),
)
...
// Add packet forward middleware to begin blocker logic
app.moduleManager.SetOrderBeginBlockers(
...
packetforwardtypes.ModuleName,
...
)
// Add packet forward middleware to end blocker logic
app.moduleManager.SetOrderEndBlockers(
...
packetforwardtypes.ModuleName,
...
)
// Add packet forward middleware to init genesis logic
app.moduleManager.SetOrderInitGenesis(
...
packetforwardtypes.ModuleName,
...
)
// Add packet forward middleware to init params keeper
func initParamsKeeper(appCodec codec.BinaryCodec, legacyAmino *codec.LegacyAmino, key, tkey storetypes.StoreKey) paramskeeper.Keeper {
...
paramsKeeper.Subspace(packetforwardtypes.ModuleName).WithKeyTable(packetforwardtypes.ParamKeyTable())
...
}
Configuring the transfer application stack with Packet Forward Middleware
Here is an example of how to create an application stack using transfer and packet-forward-middleware. The following transferStack is configured in app/app.go and added to the IBC Router.
The in-line comments describe the execution flow of packets between the application stack and IBC core.
For more information on configuring an IBC application stack, see the ibc-go docs.
// Create Transfer Stack
// SendPacket, since it is originating from the application to core IBC:
// transferKeeper.SendPacket -> packetforward.SendPacket -> channel.SendPacket
// RecvPacket, message that originates from core IBC and goes down to app, the flow is the other way
// channel.RecvPacket -> packetforward.OnRecvPacket -> transfer.OnRecvPacket
// transfer stack contains (from top to bottom):
// - Packet Forward Middleware
// - Transfer
var transferStack ibcporttypes.IBCModule
transferStack = transfer.NewIBCModule(app.TransferKeeper)
transferStack = packetforward.NewIBCMiddleware(
transferStack,
app.PacketForwardKeeper,
0, // retries on timeout
packetforwardkeeper.DefaultForwardTransferPacketTimeoutTimestamp, // forward timeout
packetforwardkeeper.DefaultRefundTransferPacketTimeoutTimestamp, // refund timeout
)
// Add transfer stack to IBC Router
ibcRouter.AddRoute(ibctransfertypes.ModuleName, transferStack)
Configurable options in the Packet Forward Middleware
The Packet Forward Middleware has several configurable options available when initializing the IBC application stack. You can see these passed in as arguments to packetforward.NewIBCMiddleware; they include the number of retries that will be performed on a forward timeout, the timeout period that will be used for a forward, and the timeout period that will be used for performing refunds in the case that a forward is taking too long.
Additionally, there is a fee percentage parameter that can be set in InitGenesis. This optional parameter can be used to take a fee from each forwarded packet, which will then be distributed to the community pool. In the OnRecvPacket callback, ForwardTransferPacket is invoked, which will attempt to subtract a fee from the forwarded packet amount if the fee percentage is non-zero.
- Retries On Timeout: how many times a forward will be re-attempted in the case of a timeout.
- Timeout Period: how long a forward can be in progress before giving up.
- Refund Timeout: how long a forward can be in progress before issuing a refund back to the original source chain.
- Fee Percentage: percentage of the forwarded packet amount which will be subtracted and distributed to the community pool.
Test Cases
The targets for testing will be:
- Successful path unwinding from gaia-testnet-1 to celestia-testnet to gaia-testnet-2
- Proper refunding in a multi-hop IBC flow if any step returns a recv_packet error
- Ensure the Retries On Timeout config works, with the intended number of retry attempts upon hitting the Timeout Period
- Ensure Refund Timeout issues a refund when a forward is in progress for too long
- If Fee Percentage is not set to 0, ensure the proper token amount is claimed from packets and sent to the Community Pool
Security Considerations
The origin sender (sender on the first chain) is retained in case of a failure to receive the packet (max-timeouts or ack error) on any chain in the sequence, so funds will be refunded to the right sender in the case of an error.
Intermediate receivers, however, are no longer used. PFM will receive the funds into a hashed account (the hash of the sender from the previous chain + the channel received on the current chain). This gives a deterministic account for the origin sender to see events on intermediate chains. With PFM's atomic acks, there is no possibility of funds getting stuck on an intermediate chain: they will either make it to the final destination successfully, or be refunded back to the origin sender.
We recommend that users set the intermediate receivers to a string such as “pfm” (since PFM does not care what the intermediate receiver is), so that in case users accidentally send a packet intended for PFM to a chain that does not have PFM, they will get an ack error and refunded instead of funds landing in the intermediate receiver account. This results in a PFM detection mechanism with a graceful error.
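For illustration, forwarding instructions are carried in the ICS-20 memo field. The sketch below shows the general shape of a PFM forward memo paired with a dummy intermediate receiver; the address and channel identifier are placeholders, and the exact set of supported keys depends on the PFM version deployed:
{
  "forward": {
    "receiver": "celestia1exampledestinationaddress",
    "port": "transfer",
    "channel": "channel-2",
    "timeout": "10m",
    "retries": 0
  }
}
Here the top-level transfer's receiver field would be set to a non-address string such as "pfm", while the memo tells PFM on the intermediate chain where to forward the funds.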
Copyright
Copyright and related rights waived via CC0.
cip | 10 |
---|---|
title | Coordinated network upgrades |
description | Protocol for coordinating major network upgrades |
author | Callum Waters (@cmwaters) |
discussions-to | https://forum.celestia.org/t/cip-coordinated-network-upgrades/1367 |
status | Final |
type | Standards Track |
category | Core |
created | 2023-12-07 |
Abstract
Use a pre-programmed height for the next major upgrade. Subsequent major upgrades will use an in-protocol signalling mechanism: validators will submit messages to signal their ability and preference to use the next version. Once a quorum of 5/6ths has signalled the same version, the network will migrate to that version.
Motivation
The Celestia network needs to be able to upgrade across different state machines so that new features and protocol-breaking bug fixes can be supported. Versions of the Celestia consensus node are required to support all prior state machines, so nodes are able to upgrade at any time ahead of the coordinated upgrade height and very little downtime is experienced in the transition. The problem addressed in this CIP is to define a protocol for coordinating that upgrade height.
Specification
The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “NOT RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in RFC 2119 and RFC 8174.
The next upgrade will be coordinated using a hardcoded height that is passed as a flag when the node starts. This is an exception necessary to introduce the upgrading protocol, which will come into effect for the following upgrade.
The network introduces two new message types, MsgSignalVersion and MsgTryUpgrade:
message MsgSignalVersion {
string validator_address = 1;
uint64 version = 2;
}
message MsgTryUpgrade { string signer = 1; }
Only validators can submit MsgSignalVersion. The Celestia state machine tracks which version each validator has signalled for. The signalled version MUST be either the current version or the next one; there is no support for skipping versions or downgrading.
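A minimal Go sketch of that signalling rule (a hypothetical helper, not the signal module's actual code):
import "fmt"

// validateSignalledVersion enforces the rule above: a validator may
// signal only the current app version (to cancel a prior upgrade
// signal) or the next one; skips and downgrades are rejected.
func validateSignalledVersion(current, signalled uint64) error {
    if signalled != current && signalled != current+1 {
        return fmt.Errorf("invalid signalled version %d: must be %d or %d", signalled, current, current+1)
    }
    return nil
}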
Clients may query the tally for each version as follows:
message QueryVersionTallyRequest { uint64 version = 1; }
message QueryVersionTallyResponse {
uint64 voting_power = 1;
uint64 threshold_power = 2;
uint64 total_voting_power = 3;
}
When voting_power is greater than or equal to threshold_power, the network MAY upgrade. This is done through a “crank” transaction, MsgTryUpgrade, which can be submitted by any account. The account covers the gas required for the tally calculation. If the quorum is met, the chain will update the AppVersion in the ConsensusParams returned in EndBlock. Celestia will reset the tally and perform all necessary migrations at the end of processing that block in Commit. The proposer of the following height will include the new version in the block. As is currently the case, nodes will only vote for blocks that match their own network version.
If the network agrees to move to a version that is not supported by the node, the node will gracefully shut down.
The threshold_power is calculated as 5/6ths of the total voting power. The rationale is provided below.
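To illustrate the arithmetic, here is a sketch assuming integer voting power and rounding up (the conservative reading of “5/6ths”; the signal module's exact rounding may differ):
// thresholdPower returns the smallest voting power satisfying the
// 5/6ths quorum, i.e. the ceiling of 5*total/6.
func thresholdPower(totalVotingPower uint64) uint64 {
    return (5*totalVotingPower + 5) / 6
}

// canUpgrade mirrors the QueryVersionTallyResponse check: the network
// MAY upgrade once the signalled power reaches the threshold.
func canUpgrade(votingPower, totalVotingPower uint64) bool {
    return votingPower >= thresholdPower(totalVotingPower)
}
For example, with a total voting power of 100, thresholdPower returns 84.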
Rationale
When attempting a major upgrade, there is an increased risk that the network halts. At least 2/3 of the voting power is needed to migrate and agree upon the next block, but this does not account for the actions of byzantine validators. As an example, say there is 100 total voting power, of which 66 have signalled v2 and 33 are still signalling v1. It takes only 1 byzantine voting power to pass a 2/3 threshold, convince the network to propose v2 blocks, and then withhold its vote, leaving the network unable to reach consensus until one of the 33 upgrades. At the other end of the spectrum, raising the threshold means less voting power is required to veto the upgrade. The middle point is a quorum of 5/6ths, which provides 1/6 byzantine fault tolerance to liveness while requiring at least 1/6th of the network to veto the upgrade.
Validators are permitted to signal for the current network version as a means of cancelling their prior decision to upgrade. This is important in the case that new information arises that convinces the network that the next version is not ready.
An on-chain tallying system was decided over an off-chain as it canonicalises the information which is important for coordination and adds accountability to validators. As a result of this decision, validators will need to pay fees to upgrade which is necessary to avoid spamming the chain.
Backwards Compatibility
This feature modifies the functionality of the state machine in a breaking way, as the state machine can now dictate version changes. It will require a major upgrade to implement (thus the protocol won't come into effect until the following major upgrade).
As the API is additive, there is no need to consider backwards compatibility for clients.
Test Cases
All implementations are advised to test the following scenarios:
- A version x node can run on a version y network where x >= y.
- A MsgTryUpgrade should not modify the app version if less than 5/6ths of the voting power has signalled, and should set the new app version once that threshold has been reached.
- A version x node should gracefully shut down and not continue to validate blocks on a version y network when y > x.
- MsgSignalVersion should correctly tally the account's voting power. Signalling multiple times by the same validator should not increase the tally. A validator should be able to re-signal a different version at any time.
Reference Implementation
The golang implementation of the signal module can be found here.
Security Considerations
See the section on rationale for understanding the network halting risk.
Copyright
Copyright and related rights waived via CC0.
cip | 11 |
---|---|
title | Refund unspent gas |
description | Refund allocated but unspent gas to the transaction fee payer. |
author | Rootul Patel (@rootulp) |
discussions-to | https://forum.celestia.org/t/cip-refund-unspent-gas/1374 |
status | Withdrawn |
withdrawal-reason | The mitigation strategies for the security considerations were deemed too complex. |
type | Standards Track |
category | Core |
created | 2023-12-07 |
Abstract
Refund allocated but unspent gas to the transaction fee payer.
Motivation
When a user submits a transaction to Celestia, they MUST specify a gas limit. Regardless of how much gas is consumed in the process of executing the transaction, the user is always charged a fee based on their transaction’s gas limit. This behavior is not ideal because it forces users to accurately estimate the amount of gas their transaction will consume. If the user underestimates the gas limit, their transaction will fail to execute. If the user overestimates the gas limit, they will be charged more than necessary.
Specification
The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “NOT RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in RFC 2119 and RFC 8174.
Consider adding a posthandler that:
- Disables the gas meter so that the following operations do not consume gas or cause an out of gas error.
- Calculates the amount of coins to refund:
  - Calculate the transaction's gas price: gasPrice = fees / gasLimit.
  - Calculate the transaction's fee based on gas consumption: feeBasedOnGasConsumption = gasPrice * gasConsumed.
  - Calculate the amount to refund: amountToRefund = fees - feeBasedOnGasConsumption.
- Determines the refund recipient:
  - If the transaction had a fee granter, refund to the fee granter.
  - If the transaction did not have a fee granter, refund to the fee payer.
- Refunds coins to the refund recipient. Note: the refund is sourced from the fee collector module account (the account that collects fees from transactions via the DeductFeeDecorator antehandler).
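To make this flow concrete, here is a minimal Go sketch of such a posthandler as a Cosmos SDK post-decorator. It is illustrative only: it assumes the standard FeeTx interface and a bank keeper that can pay out of the fee collector module account, it ignores edge cases such as a zero gas limit, and import paths vary across SDK versions. The reference implementation linked below differs in its details.
import (
    storetypes "cosmossdk.io/store/types"
    sdk "github.com/cosmos/cosmos-sdk/types"
    authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
)

// BankKeeper is the minimal interface this sketch needs.
type BankKeeper interface {
    SendCoinsFromModuleToAccount(ctx sdk.Context, senderModule string, recipient sdk.AccAddress, amt sdk.Coins) error
}

// RefundGasRemainingDecorator is a hypothetical posthandler decorator.
type RefundGasRemainingDecorator struct {
    bank BankKeeper
}

func (d RefundGasRemainingDecorator) PostHandle(ctx sdk.Context, tx sdk.Tx, simulate, success bool, next sdk.PostHandler) (sdk.Context, error) {
    feeTx, ok := tx.(sdk.FeeTx)
    if !ok {
        return next(ctx, tx, simulate, success)
    }
    // Read consumption first, then disable metering so the refund logic
    // itself cannot consume gas or trigger an out-of-gas error.
    gasConsumed := ctx.GasMeter().GasConsumed()
    gasLimit := feeTx.GetGas()
    ctx = ctx.WithGasMeter(storetypes.NewInfiniteGasMeter())

    // amountToRefund = fees - gasPrice*gasConsumed, computed per fee coin.
    refund := sdk.NewCoins()
    for _, fee := range feeTx.GetFee() {
        consumed := fee.Amount.MulRaw(int64(gasConsumed)).QuoRaw(int64(gasLimit))
        refund = refund.Add(sdk.NewCoin(fee.Denom, fee.Amount.Sub(consumed)))
    }

    // Refund the fee granter if one was used, otherwise the fee payer.
    recipient := feeTx.FeePayer()
    if granter := feeTx.FeeGranter(); granter != nil {
        recipient = granter
    }
    if err := d.bank.SendCoinsFromModuleToAccount(ctx, authtypes.FeeCollectorName, recipient, refund); err != nil {
        return ctx, err
    }
    return next(ctx, tx, simulate, success)
}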
Rationale
The entire fee specified by a transaction is deducted via an antehandler (DeductFeeDecorator) prior to execution. Since the transaction hasn't been executed yet, the antehandler does not know how much gas the transaction will consume and therefore can't accurately calculate a fee based on gas consumption. To avoid underestimating the transaction's gas consumption, the antehandler overcharges the fee payer by deducting the entire fee.
This proposal suggests adding a posthandler that refunds the portion of the fee for unspent gas back to the fee payer. At the time of posthandler execution, the gas meter reflects the true amount of gas consumed during execution. As a result, it is possible to accurately calculate the fee that the transaction would be charged based on gas consumption.
The net result of the fee deduction in the antehandler and the unspent gas refund in the posthandler is that users will observe a fee based on gas consumption (gasPrice * gasConsumed) rather than on gas limit (gasPrice * gasLimit).
Backwards Compatibility
This proposal is backwards-incompatible because it is state-machine breaking. Put another way, state machines with this proposal will process transactions differently than state machines without this proposal. Therefore, this proposal cannot be introduced without an app version bump.
Test Cases
TBA
Reference Implementation
https://github.com/celestiaorg/celestia-app/pull/2887
Security Considerations
DoS attack
This proposal has implications for how many transactions can be processed in a block. Currently consensus/max_gas = -1, which means there is no upper bound on the amount of gas consumed in a block. However, if this proposal is implemented AND the consensus/max_gas parameter is modified, then a single transaction could prevent other transactions from being included in the block by specifying a large gas limit while actually consuming a small amount of gas. Since the unspent gas would be refunded to the fee payer, an attacker could perform this attack at low cost. Notably, this attack vector arises because the block gas meter is deducted by BaseApp.runTx after transaction processing.
Proposed mitigation strategies:
- Consider bounding the amount of gas that can be refunded to the fee payer. For example, bound the refund to 50% of the gas consumed (see the sketch after this list).
- Make the block gas limit aware of gas refunds. This approach requires celestia-app to adopt some flavor of immediate or optimistic execution.
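A sketch of the bounding rule from the first strategy (names hypothetical):
// boundedRefundGas caps refundable gas at 50% of the gas actually
// consumed, so a transaction cannot cheaply reserve a huge gas limit.
func boundedRefundGas(gasConsumed, unspentGas uint64) uint64 {
    maxRefund := gasConsumed / 2
    if unspentGas > maxRefund {
        return maxRefund
    }
    return unspentGas
}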
Gas metering during unspent gas refund
This proposal includes adding a posthandler that explicitly disables gas metering during the execution of the unspent gas refund. This is a risky change because an attacker may be able to craft a transaction that performs a large amount of computation while executing the unspent gas refund posthandler.
Proposed mitigation strategy:
- Analyze the amount of gas consumed by the refund posthandler for a variety of transactions.
- Define a new constant MaxRefundGasCost that is the maximum amount of gas that can be consumed by the refund posthandler.
- If the transaction reaches the posthandler with less remaining gas than MaxRefundGasCost, skip the refund.
- If the transaction reaches the posthandler with more remaining gas than MaxRefundGasCost, refund gasMeter.GasRemaining() - MaxRefundGasCost (a sketch follows this list).
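The skip/deduct rule above as a sketch (MaxRefundGasCost is the hypothetical constant just described):
// refundableGas returns the gas eligible for a refund and whether the
// refund should run at all, given the gas remaining at the posthandler.
func refundableGas(gasRemaining, maxRefundGasCost uint64) (uint64, bool) {
    if gasRemaining < maxRefundGasCost {
        return 0, false // not enough gas left to pay for the refund itself
    }
    return gasRemaining - maxRefundGasCost, true
}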
Implementation risk
TBA
Copyright
Copyright and related rights waived via CC0.
cip | 12 |
---|---|
title | ICS-29 Relayer Incentivisation Middleware |
description | Adding ics-29 to Celestia to move towards sustainable relayer funding for IBC |
author | Susannah Evans [email protected] (@womensrights), Aditya Sripal [email protected] (@AdityaSripal) |
discussions-to | https://forum.celestia.org/t/cip-relayer-incentivisation-middleware/1383 |
status | Review |
type | Standards Track |
category | Core |
created | 2023-12-12 |
Abstract
ICS-29 defines a specification for incentivising relayers to deliver IBC packets as fast as possible by rewarding a relayer with a fee upon proving that they relayed a given packet. The specification is implemented as a middleware that is added to both ends of an IBC channel.
Motivation
In the past 30 days (as of 11 Dec 2023, at time of writing), Celestia has had an IBC volume of $90M across 72k transfers, inbound and outbound combined. Using rough estimates of fees for each transfer, with a lower bound of 0.002 TIA and an upper bound of 0.02 TIA, we can estimate that between 144 and 1440 TIA has been spent by relayers on IBC fees.
In general, relayers are either subsidised through off-chain arrangements such as delegations or service agreements, or are acting on a pure loss basis. This is not a sustainable, long-term solution for ensuring IBC packets are reliably and promptly delivered.
Specification
The specification has been copied directly from the ICS-29 specification approved in the ibc protocol repository.
Definitions
- forward relayer: the relayer that submits the recvPacket message for a given packet
- reverse relayer: the relayer that submits the acknowledgePacket message for a given packet
- timeout relayer: the relayer that submits the timeoutPacket or timeoutOnClose message for a given packet
- receive fee: the fee paid for submitting the recvPacket message for a given packet
- ack fee: the fee paid for submitting the acknowledgePacket message for a given packet
- timeout fee: the fee paid for submitting the timeoutPacket or timeoutOnClose message for a given packet
- source address: the payee address selected by a relayer on the chain that sent the packet
- destination address: the address of a relayer on the chain that receives the packet
General Design
In order to avoid extra fee packets on the order of the number of application packets, as well as provide an opt-in approach, we store all fee payment info only on the source chain. The source chain is the one location where the sender can provide tokens to incentivize the packet. The fee distribution may be implementation specific and thus does not need to be in the IBC spec (just high-level requirements are needed in this doc).
We require that the relayer address is exposed to application modules for all packet-related messages, so the modules are able to incentivize the packet relayer. The acknowledgePacket, timeoutPacket, and timeoutOnClose messages will therefore carry the relayer address and be capable of sending escrowed tokens to that address. However, we need a way to reliably get the address of the relayer that submitted recvPacket on the destination chain to the source chain. In fact, we need a source address for this relayer to pay out to, not the destination address that signed the packet.
The fee payment mechanism will be implemented as IBC Middleware (see ICS-30) in order to provide maximum flexibility for application developers and blockchains.
Given this, the flow would be:
1. Relayer registers their destination address to source address mapping on the destination chain's fee middleware.
2. User/module submits a send packet on the source chain, along with a message to the fee middleware module with some tokens and fee information on how to distribute them. The fee tokens are all escrowed by the fee module.
3. RelayerA submits RecvPacket on the destination chain.
4. Destination fee middleware will retrieve the source address for the given relayer's destination address (this mapping is already registered) and include it in the acknowledgement.
5. RelayerB submits AcknowledgePacket, which provides the reverse relayer address on the source chain in the message sender, along with the source address of the forward relayer embedded in the acknowledgement.
6. Source fee middleware can distribute the tokens escrowed in step 2 to both the forward and the reverse relayers and refund remainder tokens to the original fee payer(s).
Alternate flow:
1. User/module submits a send packet on the source chain, along with some tokens and fee information on how to distribute them.
2. Relayer submits OnTimeout, which provides its address on the source chain.
3. Source application can distribute the tokens escrowed in step 1 to this relayer, and potentially return remainder tokens to the original fee payer(s).
Fee details
For an example implementation in the Cosmos SDK, we consider 3 potential fee payments, which may be defined. Each one may be paid out in a different token. Imagine a connection between IrisNet and the Cosmos Hub. To incentivize a packet from IrisNet to the Cosmos Hub, they may define:
- ReceiveFee: 0.003 channel-7/ATOM vouchers (ATOMs already on IrisNet via ICS20)
- AckFee: 0.001 IRIS
- TimeoutFee: 0.002 IRIS
Ideally the fees can easily be redeemed in native tokens on both sides, but relayers may select others. In this example, the relayer collects a fair bit of IRIS, covering its costs there and more. It also collects channel-7/ATOM vouchers from many packets. After relaying a few thousand packets, the account on the Cosmos Hub is running low, so the relayer will send those channel-7/ATOM vouchers back over channel-7 to its account on the Hub to replenish the supply there.
The sender chain will escrow 0.003 channel-7/ATOM and 0.002 IRIS from the fee payer's account. In the case that a forward relayer submits the recvPacket and a reverse relayer submits the ackPacket, the forward relayer is rewarded 0.003 channel-7/ATOM and the reverse relayer is rewarded 0.001 IRIS, while 0.002 IRIS is refunded to the original fee payer. In the case where the packet times out, the timeout relayer receives 0.002 IRIS and 0.003 channel-7/ATOM is refunded to the original fee payer.
The logic involved in collecting fees from users and then paying it out to the relevant relayers is encapsulated by a separate fee module and may vary between implementations. However, all fee modules must implement a uniform interface such that the ICS-4 handlers can correctly pay out fees to the right relayers, and so that relayers themselves can easily determine the fees they can expect for relaying a packet.
Data Structures
The incentivized acknowledgment written on the destination chain includes:
- raw bytes of the acknowledgement from the underlying application,
- the source address of the forward relayer,
- and a boolean indicative of receive operation success on the underlying application.
interface Acknowledgement {
appAcknowledgement: []byte
forwardRelayerAddress: string
underlyingAppSuccess: boolean
}
Store Paths
Relayer Address for Async Ack Path
The forward relayer addresses are stored under a store path prefix unique to a combination of port identifier, channel identifier and sequence. This may be stored in the private store.
function relayerAddressForAsyncAckPath(packet: Packet): Path {
return "forwardRelayer/{packet.destinationPort}/{packet.destinationChannel}/{packet.sequence}"
}
Fee Middleware Contract
While the details may vary between fee modules, all fee modules must ensure they do the following:
- It must allow relayers to register their counterparty payee address (i.e. source address).
- It must have in escrow the maximum fees that all outstanding packets may pay out (or it must have the ability to mint the required amount of tokens).
- It must pay the receive fee for a packet to the forward relayer specified in the PayFee callback (if unspecified, it must refund the forward fee to the original fee payer(s)).
- It must pay the ack fee for a packet to the reverse relayer specified in the PayFee callback.
- It must pay the timeout fee for a packet to the timeout relayer specified in the PayTimeoutFee callback.
- It must refund any remainder fees in escrow to the original fee payer(s), if applicable.
// RegisterCounterpartyPayee is called by the relayer on each channelEnd and
// allows them to specify their counterparty payee address before relaying.
// This ensures they will be properly compensated for forward relaying since
// destination chain must send back relayer's source address (counterparty
// payee address) in acknowledgement.
// This function may be called more than once by relayer, in which case, latest
// counterparty payee address is always used.
function RegisterCounterpartyPayee(relayer: string, counterPartyAddress: string) {
// set mapping between relayer address and counterparty payee address
}
// EscrowPacketFee is an open callback that may be called by any module/user
// that wishes to escrow funds in order to incentivize the relaying of the
// given packet.
// NOTE: These fees are escrowed in addition to any previously escrowed amount
// for the packet. In the case where the previous amount is zero, the provided
// fees are the initial escrow amount.
// They may set a separate receiveFee, ackFee, and timeoutFee to be paid
// for each step in the packet flow. The caller must send max(receiveFee+ackFee, timeoutFee)
// to the fee module to be locked in escrow to provide payout for any potential
// packet flow.
// The caller may optionally specify an array of relayer addresses. This MAY be
// used by the fee module to modify fee payment logic based on ultimate relayer
// address. For example, fee module may choose to only pay out relayer if the
// relayer address was specified in the `EscrowPacketFee`.
function EscrowPacketFee(packet: Packet, receiveFee: Fee, ackFee: Fee, timeoutFee: Fee, relayers: []string) {
// escrow max(receiveFee+ackFee, timeoutFee) for this packet
// do custom logic with provided relayer addresses if necessary
}
// PayFee is a callback implemented by fee module called by the ICS-4 AcknowledgePacket handler.
function PayFee(packet: Packet, forward_relayer: string, reverse_relayer: string) {
// pay the forward fee to the forward relayer address
// pay the reverse fee to the reverse relayer address
// refund extra tokens to original fee payer(s)
// NOTE: if forward relayer address is empty, then refund the forward fee to original fee payer(s).
}
// PayTimeoutFee is a callback implemented by the fee module, called by the ICS-4 TimeoutPacket handler.
function PayTimeoutFee(packet: Packet, timeout_relayer: string) {
// pay the timeout fee to the timeout relayer address
// refund extra tokens to original fee payer(s)
}
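As a small illustration of the escrow rule described in EscrowPacketFee above, assuming a single-denom fee (real implementations track this per denomination):
// escrowAmount returns the amount locked for a packet:
// max(receiveFee+ackFee, timeoutFee). Either the receive+ack path or
// the timeout path will be paid out, never both, so this upper bound
// always covers the eventual payout.
func escrowAmount(receiveFee, ackFee, timeoutFee uint64) uint64 {
    if receiveFee+ackFee > timeoutFee {
        return receiveFee + ackFee
    }
    return timeoutFee
}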
The fee module should also expose the following queries so that relayers may query their expected fee:
// Gets the fee expected for submitting RecvPacket msg for the given packet
// Caller should provide the intended relayer address in case the fee is dependent on specific relayer(s).
function GetReceiveFee(portID, channelID, sequence, relayer) Fee
// Gets the fee expected for submitting AcknowledgePacket msg for the given packet
// Caller should provide the intended relayer address in case the fee is dependent on specific relayer(s).
function GetAckFee(portID, channelID, sequence, relayer) Fee
// Gets the fee expected for submitting TimeoutPacket msg for the given packet
// Caller should provide the intended relayer address in case the fee is dependent on specific relayer(s).
function GetTimeoutFee(portID, channelID, sequence, relayer) Fee
Since different chains may have different representations for fungible tokens, and this information is not sent to other chains, this ICS does not specify a particular representation for the Fee. Each chain may choose its own representation; it is incumbent on relayers to interpret the Fee correctly.
A default representation will have the following structure:
interface Fee {
denom: string,
amount: uint256,
}
IBC Module Wrapper
The fee middleware will implement its own ICS-26 callbacks that wrap the application-specific module callbacks as well as the ICS-4 handler functions called by the underlying application. This fee middleware will ensure that the counterparty module supports incentivization and will implement all fee-specific logic. It will then pass on the request to the embedded application module for further callback processing.
In this way, custom fee-handling logic can be hooked up to the IBC packet flow logic without placing the code in the ICS-4 handlers or the application code. This is valuable since the ICS-4 handlers should only be concerned with correctness of core IBC (transport, authentication, and ordering), and the application handlers should not be handling fee logic that is universal amongst all other incentivized applications. In fact, a given application module should be able to be hooked up to any fee module with no further changes to the application itself.
Fee Protocol Negotiation
The fee middleware will negotiate its fee protocol version with the counterparty module by including its own version next to the application version. The channel version will be a string of a JSON struct containing the fee middleware version and the application version. The application version may itself be a JSON-encoded string, possibly including further middleware and app versions, if the application stack consists of multiple middlewares wrapping a base application.
Channel Version:
{"fee_version":"<fee_protocol_version>","app_version":"<application_version>"}
Ex:
{"fee_version":"ics29-1","app_version":"ics20-1"}
The fee middleware’s handshake callbacks ensure that both modules agree on compatible fee protocol version(s), and then pass the application-specific version string to the embedded application’s handshake callbacks.
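A Go sketch of this encoding, mirroring the constructVersion helper used in the callbacks below (the struct is illustrative; ibc-go defines its own Metadata type for the fee middleware):
import "encoding/json"

// Metadata mirrors the JSON channel version used for fee negotiation.
type Metadata struct {
    FeeVersion string `json:"fee_version"`
    AppVersion string `json:"app_version"`
}

// constructVersion wraps a fee version and app version into the JSON
// channel version string.
func constructVersion(feeVersion, appVersion string) (string, error) {
    bz, err := json.Marshal(Metadata{FeeVersion: feeVersion, AppVersion: appVersion})
    return string(bz), err
}

// parseVersion attempts to decode a channel version; failure indicates
// a non-fee-enabled counterparty, so the raw version should be passed
// straight to the underlying application.
func parseVersion(version string) (Metadata, error) {
    var m Metadata
    err := json.Unmarshal([]byte(version), &m)
    return m, err
}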
Handshake Callbacks
function onChanOpenInit(
order: ChannelOrder,
connectionHops: [Identifier],
portIdentifier: Identifier,
channelIdentifier: Identifier,
counterpartyPortIdentifier: Identifier,
counterpartyChannelIdentifier: Identifier,
version: string): (version: string, err: Error) {
if version != "" {
// try to unmarshal JSON-encoded version string and pass
// the app-specific version to app callback.
// otherwise, pass version directly to app callback.
metadata, err = UnmarshalJSON(version)
if err != nil {
// call the underlying application's OnChanOpenInit callback
return app.onChanOpenInit(
order,
connectionHops,
portIdentifier,
channelIdentifier,
counterpartyPortIdentifier,
counterpartyChannelIdentifier,
version,
)
}
// check that feeVersion is supported
if !isSupported(metadata.feeVersion) {
return "", error
}
} else {
// enable fees by default if relayer does not specify otherwise
metadata = {
feeVersion: "ics29-1",
appVersion: "",
}
}
// call the underlying application's OnChanOpenInit callback.
// if the version string is empty, OnChanOpenInit is expected to return
// a default version string representing the version(s) it supports
appVersion, err = app.onChanOpenInit(
order,
connectionHops,
portIdentifier,
channelIdentifier,
counterpartyPortIdentifier,
counterpartyChannelIdentifier,
metadata.appVersion,
)
if err != nil {
return "", err
}
// a new version string is constructed with the app version returned
// by the underlying application, in case it is different than the
// one passed by the caller
version = constructVersion(metadata.feeVersion, appVersion)
return version, nil
}
function onChanOpenTry(
order: ChannelOrder,
connectionHops: [Identifier],
portIdentifier: Identifier,
channelIdentifier: Identifier,
counterpartyPortIdentifier: Identifier,
counterpartyChannelIdentifier: Identifier,
counterpartyVersion: string): (version: string, err: Error) {
// try to unmarshal JSON-encoded version string and pass
// the app-specific version to app callback.
// otherwise, pass version directly to app callback.
cpMetadata, err = UnmarshalJSON(counterpartyVersion)
if err != nil {
// call the underlying application's OnChanOpenTry callback
return app.onChanOpenTry(
order,
connectionHops,
portIdentifier,
channelIdentifier,
counterpartyPortIdentifier,
counterpartyChannelIdentifier,
counterpartyVersion,
)
}
// select mutually compatible fee version
if !isCompatible(cpMetadata.feeVersion) {
return "", error
}
feeVersion = selectFeeVersion(cpMetadata.feeVersion)
// call the underlying application's OnChanOpenTry callback
appVersion, err = app.onChanOpenTry(
order,
connectionHops,
portIdentifier,
channelIdentifier,
counterpartyPortIdentifier,
counterpartyChannelIdentifier,
cpMetadata.appVersion,
)
if err != nil {
return "", err
}
// a new version string is constructed with the final fee version
// that is selected and the app version returned by the underlying
// application (which may be different than the one passed by the caller)
version = constructVersion(feeVersion, appVersion)
return version, nil
}
function onChanOpenAck(
portIdentifier: Identifier,
channelIdentifier: Identifier,
counterpartyChannelIdentifier: Identifier,
counterpartyVersion: string) {
cpMetadata, err = UnmarshalJSON(counterpartyVersion)
if err != nil {
// call the underlying application's OnChanOpenAck callback
return app.onChanOpenAck(
portIdentifier,
channelIdentifier,
counterpartyChannelIdentifier,
counterpartyVersion,
)
}
if !isSupported(cpMetadata.feeVersion) {
return error
}
// call the underlying application's OnChanOpenAck callback
return app.onChanOpenAck(
portIdentifier,
channelIdentifier,
counterpartyChannelIdentifier,
cpMetadata.appVersion,
)
}
function onChanOpenConfirm(
portIdentifier: Identifier,
channelIdentifier: Identifier) {
// fee middleware performs no-op on ChanOpenConfirm,
// just call underlying callback
return app.onChanOpenConfirm(portIdentifier, channelIdentifier)
}
Packet Callbacks
function onRecvPacket(packet: Packet, relayer: string): bytes {
app_acknowledgement = app.onRecvPacket(packet, relayer)
// in case of asynchronous acknowledgement, we must store the relayer
// address. It will be retrieved later and used to get the source
// address that will be written in the acknowledgement.
if app_acknowledgement == nil {
privateStore.set(relayerAddressForAsyncAckPath(packet), relayer)
}
// get source address by retrieving counterparty payee address of
// this relayer stored in fee middleware.
// NOTE: source address may be empty or invalid, counterparty
// must refund fee in these cases
sourceAddress = getCounterpartyPayeeAddress(relayer)
// wrap the acknowledgement with forward relayer and return marshalled bytes
// constructIncentivizedAck takes:
// - the app-specific acknowledgement,
// - the receive-packet relayer (forward relayer)
// - and a boolean indicative of receive operation success,
// and constructs the incentivized acknowledgement struct with
// the forward relayer and app-specific acknowledgement embedded.
ack = constructIncentivizedAck(app_acknowledgement, sourceAddress, app_acknowledgement.success)
return marshal(ack)
}
function onAcknowledgePacket(packet: Packet, acknowledgement: bytes, relayer: string) {
// the acknowledgement is a marshalled struct containing:
// - the forward relayer address as a string (called forward_relayer)
// - and the raw acknowledgement bytes returned by the counterparty application module (called app_ack).
// get the forward relayer from the (incentivized) acknowledgement
// and pay fees to forward and reverse relayers.
// reverse_relayer is submitter of acknowledgement message
// provided in function arguments
// NOTE: Fee may be zero
ack = unmarshal(acknowledgement)
forward_relayer = getForwardRelayer(ack)
PayFee(packet, forward_relayer, relayer)
// unwrap the raw acknowledgement bytes sent by counterparty application
// and pass it to the application callback.
app_ack = getAppAcknowledgement(acknowledgement)
app.OnAcknowledgePacket(packet, app_ack, relayer)
}
function onTimeoutPacket(packet: Packet, relayer: string) {
// get the timeout relayer from function arguments
// and pay timeout fee.
// NOTE: Fee may be zero
PayTimeoutFee(packet, relayer)
app.OnTimeoutPacket(packet, relayer)
}
function onTimeoutPacketClose(packet: Packet, relayer: string) {
// get the timeout relayer from function arguments
// and pay timeout fee.
// NOTE: Fee may be zero
PayTimeoutFee(packet, relayer)
app.onTimeoutPacketClose(packet, relayer)
}
function constructIncentivizedAck(
app_ack: bytes,
forward_relayer: string,
success: boolean): Acknowledgement {
return Acknowledgement{
appAcknowledgement: app_ack,
forwardRelayerAddress: forward_relayer,
underlyingAppSuccess: success,
}
}
function getForwardRelayer(ack: Acknowledgement): string {
    return ack.forwardRelayerAddress
}
function getAppAcknowledgement(ack: Acknowledgement): bytes {
    return ack.appAcknowledgement
}
Embedded applications calling into ICS-4
Note that if the embedded application uses asynchronous acks, then the WriteAcknowledgement call in the application must call the fee middleware's WriteAcknowledgement rather than calling the ICS-4 handler's WriteAcknowledgement function directly.
// Fee Middleware writeAcknowledgement function
function writeAcknowledgement(
packet: Packet,
acknowledgement: bytes) {
// retrieve the relayer that was stored in `onRecvPacket`
relayer = privateStore.get(relayerAddressForAsyncAckPath(packet))
// get source address by retrieving counterparty payee address
// of this relayer stored in fee middleware.
sourceAddress = getCounterpartyPayeeAddress(relayer)
ack = constructIncentivizedAck(acknowledgement, sourceAddress, acknowledgement.success)
ack_bytes = marshal(ack)
// ics4Wrapper may be core IBC or higher-level middleware
return ics4Wrapper.writeAcknowledgement(packet, ack_bytes)
}
// Fee Middleware sendPacket function just forwards data to ics-4 handler
function sendPacket(
capability: CapabilityKey,
sourcePort: Identifier,
sourceChannel: Identifier,
timeoutHeight: Height,
timeoutTimestamp: uint64,
data: bytes): uint64 {
// ics4Wrapper may be core IBC or higher-level middleware
return ics4Wrapper.sendPacket(
capability,
sourcePort,
sourceChannel,
timeoutHeight,
timeoutTimestamp,
data)
}
User Interaction with Fee Middleware
User sending Packets
A user may specify a fee to incentivize relaying during packet submission, by submitting a fee payment message atomically with the application-specific “send packet” message (e.g. the ICS-20 MsgTransfer). The fee middleware will escrow the fee for the packet that is created atomically with the escrow. The fee payment message itself is not specified in this document as it may vary greatly across implementations. In some middleware, there may be no fee payment message at all if the fees are being paid out from an altruistic pool.
Since the fee middleware does not need to modify the outgoing packet, the fee payment message may be placed before or after the send packet message. However in order to maintain consistency with other middleware messages, it is recommended that fee middleware require their messages to be placed before the send packet message and escrow fees for the next sequence on the given channel. This way when the messages are atomically committed, the next sequence on the channel is the send packet message sent by the user, and the user escrows their fee for the created packet.
In case a user wants to pay fees on a packet after it has already been created, the fee middleware SHOULD provide a message that allows users to pay fees on a packet with the specified sequence, channel and port identifiers. This allows the user to uniquely identify a packet that has already been created, so that the fee middleware can escrow fees for that packet after the fact.
Relayers sending RecvPacket
Before a relayer starts relaying on a channel, they should register their counterparty payee address using the standardized message:
interface RegisterCounterpartyPayeeMsg {
portID: string
channelID: string
relayer: string // destination address of the forward relayer
counterpartyPayee: string // source address of the forward relayer
}
It is the responsibility of the receiving chain to authenticate that the message was received from the owner of relayer. The receiving chain must store the mapping relayer -> counterpartyPayee for the given channel. Then, onRecvPacket of the destination fee middleware can query for the counterparty payee address of the recvPacket message sender in order to get the source address of the forward relayer. This source address is what will get embedded in the acknowledgement.
If the relayer does not register their counterparty payee address (or registers an invalid address), then the acknowledgment will still be received and processed, but the forward fee will be refunded to the original fee payer(s).
Reasoning
This proposal satisfies the desired properties. All parts of the packet flow (receive/acknowledge/timeout) can be properly incentivized and rewarded. The protocol does not specify the relayer beforehand, thus the incentivization can be permissionless or permissioned. The escrowing and distribution of funds is completely handled on source chain, thus there is no need for additional IBC packets or the use of ICS-20 in the fee protocol. The fee protocol only assumes existence of fungible tokens on the source chain. By creating application stacks for the same base application (one with fee middleware, one without), we can get backwards compatibility.
Rationale
To create a pathway for more sustainable funding of IBC transactions, ICS-29 has been designed with the following desired properties:
- Incentivize timely delivery of the packet (recvPacket called)
- Incentivize relaying acks for these packets (acknowledgePacket called)
- Incentivize relaying timeouts for these packets when the timeout has expired before the packet is delivered (for example, because the receive fee was too low) (timeoutPacket called)
- Produces no extra IBC packets
- One direction works, even when the destination chain does not support the concept of fungible tokens
- Opt-in for each chain implementing this. e.g. ICS27 with fee support on chain A could connect to ICS27 without fee support on chain B.
- Standardized interface for each chain implementing this extension
- Support custom fee-handling logic within the same framework
- Relayer addresses should not be forgeable
- Enable permissionless or permissioned relaying
Backwards Compatibility
Maintaining backwards compatibility with an unincentivized chain directly in the fee module would require the top-level fee module to negotiate versions that do not contain a fee version and to communicate with both incentivized and unincentivized modules. This pattern causes unnecessary complexity as the layers of nested applications increase.
Instead, the fee module will only connect to a counterparty fee module. This simplifies the fee module logic, and doesn’t require it to mimic the underlying nested application(s).
In order for an incentivized chain to maintain backwards compatibility with an unincentivized chain for a given application (e.g. ICS-20), the incentivized chain should host both a top-level ICS-20 module and a top-level fee module that nests an ICS-20 application each of which should bind to unique ports.
Test Cases
The targets for testing will be:
- Channel handshake with a fee-enabled counterparty will setup a fee-enabled channel.
- Channel handshake with a non-fee-enabled counterparty will automatically downgrade to a non-fee-enabled channel.
- Packet sent on fee-enabled channel without a fee set can still be relayed.
- RecvFee set by the packet sender will be paid out to the relayer who sent RecvPacket upon packet lifecycle completion.
- AckFee set by the packet sender will be paid out to the relayer who sent AckPacket upon packet lifecycle completion.
- TimeoutFee set by the packet sender will be paid out to the relayer who sent TimeoutPacket upon packet lifecycle completion.
- Any additional funds escrowed by the sender that aren't sent to relayers will be refunded to the original escrower(s).
- Additional fees may be escrowed after initial fee payment before packet lifecycle completes.
All of the above have been tested in end-to-end tests on the ibc-go repository. See e2e tests.
Reference Implementation
The implementation of this specification can be found in the ibc-go repository.
Security Considerations
Correctness
The fee module is responsible for correctly escrowing and distributing funds to the provided relayers. The ack and timeout relayers are trivially retrievable since they are the senders of the acknowledgement and timeout messages. The forward relayer is responsible for registering their source address before sending recvPacket messages, so that the destination fee middleware can embed this address in the acknowledgement. The fee middleware on the source chain will then use the address in the acknowledgement to pay the forward relayer on the source chain.
The source chain will use a “best efforts” approach with regard to the forward relayer address. Since it is not verified directly by the counterparty and is instead just treated as a string to be passed back in the acknowledgement, the registered forward relayer source address may not be a valid source chain address. In this case, the invalid address is discarded, the receive fee is refunded, and the acknowledgement processing continues. It is incumbent on relayers to register their source addresses to the counterparty chain correctly. In the event that the counterparty chain itself incorrectly sends the forward relayer address, relayers will not collect fees on the source chain for relaying packets. Incentive-driven relayers will stop relaying for the chain until the acknowledgement logic is fixed; however, the channel remains functional.
We cannot return an error on an invalid source address as this would permanently prevent the source chain from processing the acknowledgment of a packet that was otherwise correctly received, processed and acknowledged on the counterparty chain. The IBC protocol requires that incorrect or malicious relayers may at best affect the liveness of a user’s packets. Preventing successful acknowledgement in this case would leave the packet flow at a permanently incomplete state, which may be very consequential for certain IBC applications like ICS-20.
Thus, the forward relayer reward is contingent on it providing the correct payOnSender address when it sends the receive_packet message. The packet flow will continue processing successfully even if the fee payment is unsuccessful.
With the forward relayer correctly embedded in the acknowledgement, and the reverse and timeout relayers available directly in the message, the fee middleware will accurately escrow and distribute fee payments to the relevant relayers.
Copyright
Copyright and related rights waived via CC0.
cip | 13 |
---|---|
title | On-chain Governance Parameters for Celestia Network |
description | Specification of Mainnet governance parameters in the Celestia network |
author | Yaz Khoury [email protected], Evan Forbes [email protected] |
discussions-to | https://forum.celestia.org/t/cip-13-mainnet-on-chain-governance-parameters/1390 |
status | Draft |
type | Standards Track |
category | Core |
created | 2023-12-08 |
Abstract
This CIP outlines the on-chain governance parameters for the Celestia Mainnet. It details both global and module-specific parameters, including their default settings, summaries, and whether they are changeable via on-chain governance. This CIP serves as a reference to making on-chain governance parameters in their current status on Celestia Mainnet transparent for all contributors.
Motivation
Given the Celestia community and core developers are adopting the CIP process, it helps to have a CIP that references all active on-chain governance parameters as well as their current values.
Because of that, the recommendation is for this CIP to eventually move to Living status, as on-chain governance parameters will change over time.
Furthermore, the motivation for adding on-chain governance parameters as a reference CIP in the CIP process is to ensure discussions about on-chain parameters can still happen off-chain and in the Core Devs Calls and working groups given those are steered by the core developers of the Celestia community. This does not necessarily need to apply to parameters that are not part of the Core track of the CIP process.
Specification
These are the parameters that are active on Celestia Mainnet. Note that not all of these parameters are changeable via on-chain governance. This list also includes parameters that require a breaking network upgrade to change, either because they are hardcoded in the application or because they are blocked by the x/paramfilter module. The Celestia Mainnet on-chain governance parameters are as follows:
Global parameters
Parameter | Default | Summary | Changeable via Governance |
---|---|---|---|
MaxBlockBytes | 100MiB | Hardcoded value in CometBFT for the protobuf encoded block. | False |
MaxSquareSize | 128 | Hardcoded maximum square size, measured in shares per row or column of the original data square (not yet extended). | False |
Module parameters
Module.Parameter | Default | Summary | Changeable via Governance |
---|---|---|---|
auth.MaxMemoCharacters | 256 | Largest allowed size for a memo in bytes. | True |
auth.SigVerifyCostED25519 | 590 | Gas used to verify Ed25519 signature. | True |
auth.SigVerifyCostSecp256k1 | 1000 | Gas used to verify secp256k1 signature. | True |
auth.TxSigLimit | 7 | Max number of signatures allowed in a multisig transaction. | True |
auth.TxSizeCostPerByte | 10 | Gas used per transaction byte. | True |
bank.SendEnabled | true | Allow transfers. | False |
blob.GasPerBlobByte | 8 | Gas used per blob byte. | True |
blob.GovMaxSquareSize | 64 | Governance parameter for the maximum square size, measured in shares per row or column of the original data square (not yet extended). If larger than MaxSquareSize, MaxSquareSize is used. | True |
blobstream.DataCommitmentWindow | 400 | Number of blocks that are included in a signed batch (DataCommitment). | True |
consensus.block.MaxBytes | 1.88MiB | Governance parameter for the maximum size of the protobuf encoded block. | True |
consensus.block.MaxGas | -1 | Maximum gas allowed per block (-1 is infinite). | True |
consensus.block.TimeIotaMs | 1000 | Minimum time added to the time in the header each block. | False |
consensus.evidence.MaxAgeDuration | 1814400000000000 (21 days) | The maximum age of evidence before it is considered invalid in nanoseconds. This value should be identical to the unbonding period. | True |
consensus.evidence.MaxAgeNumBlocks | 120960 | The maximum number of blocks before evidence is considered invalid. This value will stop CometBFT from pruning block data. | True |
consensus.evidence.MaxBytes | 1MiB | Maximum size in bytes used by evidence in a given block. | True |
consensus.validator.PubKeyTypes | Ed25519 | The type of public key used by validators. | False |
consensus.Version.AppVersion | 1 | Determines protocol rules used for a given height. Incremented by the application upon an upgrade. | True |
distribution.BaseProposerReward | 0 | Reward in the mint denomination for proposing a block. | True |
distribution.BonusProposerReward | 0 | Extra reward in the mint denomination for proposers based on the voting power included in the commit. | True |
distribution.CommunityTax | 0.02 (2%) | Percentage of the inflation sent to the community pool. | True |
distribution.WithdrawAddrEnabled | true | Enables delegators to withdraw funds to a different address. | True |
gov.DepositParams.MaxDepositPeriod | 604800000000000 (1 week) | Maximum period for token holders to deposit on a proposal in nanoseconds. | True |
gov.DepositParams.MinDeposit | 10_000_000_000 utia (10,000 TIA) | Minimum deposit for a proposal to enter voting period. | True |
gov.TallyParams.Quorum | 0.334 (33.4%) | Minimum percentage of total stake needed to vote for a result to be considered valid. | True |
gov.TallyParams.Threshold | 0.50 (50%) | Minimum proportion of Yes votes for proposal to pass. | True |
gov.TallyParams.VetoThreshold | 0.334 (33.4%) | Minimum value of Veto votes to Total votes ratio for proposal to be vetoed. | True |
gov.VotingParams.VotingPeriod | 604800000000000 (1 week) | Duration of the voting period in nanoseconds. | True |
ibc.ClientGenesis.AllowedClients | []string{"06-solomachine", "07-tendermint"} | List of allowed IBC light clients. | True |
ibc.ConnectionGenesis.MaxExpectedTimePerBlock | 7500000000000 (75 seconds) | Maximum expected time per block in nanoseconds under normal operation. | True |
ibc.Transfer.ReceiveEnabled | true | Enable receiving tokens via IBC. | True |
ibc.Transfer.SendEnabled | true | Enable sending tokens via IBC. | True |
mint.BondDenom | utia | Denomination that is inflated and sent to the distribution module account. | False |
mint.DisinflationRate | 0.10 (10%) | The rate at which the inflation rate decreases each year. | False |
mint.InitialInflationRate | 0.08 (8%) | The inflation rate the network starts at. | False |
mint.TargetInflationRate | 0.015 (1.5%) | The inflation rate that the network aims to stabilize at. | False |
slashing.DowntimeJailDuration | 1 min | Duration of time a validator must stay jailed. | True |
slashing.MinSignedPerWindow | 0.75 (75%) | The percentage of SignedBlocksWindow that must be signed not to get jailed. | True |
slashing.SignedBlocksWindow | 5000 | The range of blocks used to count for downtime. | True |
slashing.SlashFractionDoubleSign | 0.02 (2%) | Percentage slashed after a validator is jailed for double signing. | True |
slashing.SlashFractionDowntime | 0.00 (0%) | Percentage slashed after a validator is jailed for downtime. | True |
staking.BondDenom | utia | Bondable coin denomination. | False |
staking.HistoricalEntries | 10000 | Number of historical entries to persist in store. | True |
staking.MaxEntries | 7 | Maximum number of entries in the redelegation queue. | True |
staking.MaxValidators | 100 | Maximum number of validators. | True |
staking.MinCommissionRate | 0.05 (5%) | Minimum commission rate used by all validators. | True |
staking.UnbondingTime | 1814400 (21 days) | Duration of time for unbonding in seconds. | False |
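For parameters marked "Changeable via Governance: True", changes are proposed on-chain as a Cosmos SDK x/params parameter-change proposal. The following Go sketch is illustrative only: it constructs such a proposal using the x/params proposal types, but the subspace, key, and value merely restate a default from the table above and are not a recommendation.
// Illustrative sketch: constructing an x/params parameter-change proposal.
package main

import (
	"fmt"

	proposal "github.com/cosmos/cosmos-sdk/x/params/types/proposal"
)

func main() {
	// Target a "Changeable via Governance: True" parameter from the table
	// above; the value simply restates the current default.
	change := proposal.NewParamChange("blob", "GasPerBlobByte", "8")

	p := proposal.NewParameterChangeProposal(
		"Example parameter change",
		"Illustrative proposal restating a default value",
		[]proposal.ParamChange{change},
	)
	fmt.Println(p.String())
}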
Rationale
This section covers the rationale for creating a CIP that tracks Mainnet on-chain governance parameters, so that the CIP process can serve as the primary specification reference for those parameters over time. Parameters change over time; some may be added in future CIPs, and authors can update this CIP so that it continues to reflect the values active on Celestia Mainnet.
This is the primary reason for recommending this document become a Living document. Furthermore, as mentioned in the Motivation section, it helps ensure that changes to on-chain governance parameters are coordinated off-chain: client teams on the Core Devs Call reach rough consensus before proposing changes to consensus-critical on-chain governance parameters and activations that require a network upgrade.
Backwards Compatibility
The proposed parameters are intended for the Mainnet and some of the parameters do require a breaking network upgrade, which introduces backward incompatibility after a network upgrade if one of those values needs to be changed.
Security Considerations
Each parameter within the governance model must be scrutinized for its security implications. This would primarily happen in the Celestia Core Devs calls. Special attention should be paid to parameters that affect network consensus, as improper settings could lead to vulnerabilities.
Copyright
Copyright and related rights waived via CC0.
cip | 14 |
---|---|
title | ICS-27 Interchain Accounts |
description | Adding ICS-27 Interchain Accounts to Celestia to enable cross-chain account management |
author | Susannah Evans [email protected] (@womensrights), Aidan Salzmann [email protected] (@asalzmann), Sam Pochyly [email protected] (@sampocs) |
discussions-to | https://forum.celestia.org/t/moving-toward-safer-and-more-aligned-tia-liquid-staking/1422 |
status | Final |
type | Standards Track |
category | Core |
created | 2023-01-04 |
Abstract
This proposal outlines the integration of the Interchain Accounts (ICA) host implementation into Celestia, as defined by ICS-27. ICS-27 specifies a cross-chain account management system built on IBC. The ICS-27 implementation consists of a module at both ends of an IBC channel, enabling one chain to act as the account controller, and the other chain to act as the account manager and message recipient. Messages are sent from the controller chain and executed by the host chain. Most of the largest IBC-enabled blockchains have had ICA enabled for more than a year (Cosmos Hub, Osmosis, Neutron, Stride). The integration of the host ICA module into Celestia would enhance interoperability with external chains, specifically in the context of liquid staking and other DeFi applications.
Motivation
ICS-27 enabled chains can programmatically create ICAs (interchain accounts) on other ICS-27 enabled chains and control ICAs via IBC transactions (instead of signing with a private key). ICAs can retain all of the capabilities of a normal account (i.e. stake, send, vote) but instead are managed by a separate chain via IBC such that the owner account on the controller chain retains full control over any interchain account(s) it registers on host chain(s). The host chain (Celestia) can restrict which messages ICAs have access to call in an “allow list”.
ICA is secure, minimal, and battle-tested. Secure: ICA is one of a few core apps implemented by the ibc-go team, and the ICA implementation has been audited by Informal Systems. Minimal: adding ICA to a chain is around 100 LoC, and the host module itself is lightweight. Battle-tested: ICA modules have been used in production on most large IBC-enabled chains for more than a year, and ICAs currently hold hundreds of millions of dollars.
While ICAs are flexible and enable numerous use cases, they’ve mainly been used for liquid staking. Liquid staking has high product-market fit in crypto, evidenced by Lido on Ethereum, which holds the highest TVL of any DeFi protocol. Its popularity stems from key advantages over native staking for many users: it gives users liquidity on their stake while still accumulating staking rewards, and it decreases the DeFi hurdle rate (e.g. you can lend your stake to earn additional yield). However, these benefits come with trade-offs, such as the added protocol risk of an additional application layer or, in the case of multisig liquid staking, the need to trust a third party.
Liquid staking providers (LSPs) can accumulate a large share of stake and impact the decentralization of the network, depending on how validators are selected. Given the high market demand for liquid staked TIA and the impact LSPs can have on network decentralization, Celestia’s LSPs should align with Celestia’s core values: decentralization, trust minimization, and community governance. ICA minimizes trust by enabling the use of a Tendermint-based chain for liquid staking instead of a multisig.
By enabling ICA on an accelerated timeline, Celestia can enable battle-tested protocols like Stride to provide more decentralized, trust-minimized liquid staking services that are credibly governed by the Celestia community.
Specification
For context, both the host and controller module specification are described below; however, this proposal is to integrate only the host module. For the full technical specification, see the ICS-27 spec in the ibc protocol repository.
Adoption of the host module implies the addition of new parameters:
Module.Parameter | Proposed value | Description | Changeable via Governance |
---|---|---|---|
icahost.HostEnabled | true | Controls a chain’s ability to service ICS-27 host specific logic. | True |
icahost.AllowMessages | ["/ibc.applications.transfer.v1.MsgTransfer", "/cosmos.bank.v1beta1.MsgSend", "/cosmos.staking.v1beta1.MsgDelegate", "/cosmos.staking.v1beta1.MsgBeginRedelegate", "/cosmos.staking.v1beta1.MsgUndelegate", "/cosmos.staking.v1beta1.MsgCancelUnbondingDelegation", "/cosmos.distribution.v1beta1.MsgSetWithdrawAddress", "/cosmos.distribution.v1beta1.MsgWithdrawDelegatorReward", "/cosmos.distribution.v1beta1.MsgFundCommunityPool", "/cosmos.gov.v1.MsgVote", "/cosmos.feegrant.v1beta1.MsgGrantAllowance", "/cosmos.feegrant.v1beta1.MsgRevokeAllowance"] | Provides the ability for a chain to limit the types of messages or transactions that hosted interchain accounts are authorized to execute by defining an allowlist using the Protobuf message type URL format. | True |
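As a sketch of how these parameters could be set when the host module is introduced, the snippet below uses the ibc-go host keeper; the function name is illustrative, and the import paths depend on the ibc-go major version in use.
// Hypothetical helper seeding the icahost parameters proposed above.
import (
	sdk "github.com/cosmos/cosmos-sdk/types"
	icahostkeeper "github.com/cosmos/ibc-go/v6/modules/apps/27-interchain-accounts/host/keeper"
	icahosttypes "github.com/cosmos/ibc-go/v6/modules/apps/27-interchain-accounts/host/types"
)

func setICAHostParams(ctx sdk.Context, k icahostkeeper.Keeper) {
	allowMessages := []string{
		"/ibc.applications.transfer.v1.MsgTransfer",
		"/cosmos.bank.v1beta1.MsgSend",
		"/cosmos.staking.v1beta1.MsgDelegate",
		// ... the remaining message type URLs from the table above
	}
	// NewParams(hostEnabled, allowMessages) mirrors the HostEnabled and
	// AllowMessages parameters in the table.
	k.SetParams(ctx, icahosttypes.NewParams(true, allowMessages))
}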
Definitions
Host Chain: The chain where the interchain account is registered. The host chain listens for IBC packets from a controller chain which contain instructions (e.g. Cosmos SDK messages) that the interchain account will execute.
Controller Chain: The chain registering and controlling an account on a host chain. The controller chain sends IBC packets to the host chain to control the account.
Interchain Account: An account on a host chain. An interchain account has all the capabilities of a normal account. However, rather than signing transactions with a private key, a controller chain will send IBC packets to the host chain which signal what transactions the interchain account must execute.
Interchain Account Owner: An account on the controller chain. Every interchain account on a host chain has a respective owner account on the controller chain.
The IBC handler interface & IBC relayer module interface are as defined in ICS-25 and ICS-26, respectively.
General design
A chain can utilize one or both parts of the interchain accounts protocol (controlling and hosting). A controller chain that registers accounts on other host chains (that support interchain accounts) does not necessarily have to allow other controller chains to register accounts on its chain, and vice versa.
This specification defines the general way to register an interchain account and send tx bytes to be executed on behalf of the owner account. The host chain is responsible for deserializing and executing the tx bytes and the controller chain must know how the host chain will handle the tx bytes in advance of sending a packet, thus this must be negotiated during channel creation.
High level flow
Registering an Account
- The controller chain binds a new IBC port for a given interchain account owner
- The controller chain creates a new IBC channel for the associated account owner. Only the owner is authorized to send packets over the channel.
- During the channel handshake, the host chain registers an interchain account address that is mapped to the account owner on the controller chain
Submitting Transactions
- The controller chain serializes messages and sends them along the channel associated with the account
- The host chain receives the IBC packet and deserializes the message
- The host chain authenticates the transaction by retrieving the relevant address from the controller’s portID, and confirming it matches the signer of the message
- The host chain checks that each message type is whitelisted and executes the transaction
Integration of host module
The interchain accounts module should be registered as an AppModule in the same way all SDK modules are registered on a chain, as well as an IBCModule.
// app.go
// Register the AppModule for the Interchain Accounts module
ModuleBasics = module.NewBasicManager(
...
ica.AppModuleBasic{},
...
)
...
// Add module account permissions for the Interchain Accounts module
// Only necessary for host chain functionality
// Each Interchain Account created on the host chain is derived from the module account created
maccPerms = map[string][]string{
...
icatypes.ModuleName: nil,
}
...
// Add Interchain Accounts Keepers for each submodule used and the authentication module
// If a submodule is being statically disabled, the associated Keeper does not need to be added.
type App struct {
...
ICAHostKeeper icahostkeeper.Keeper
...
}
...
// Create store keys for each submodule Keeper and the authentication module
keys := sdk.NewKVStoreKeys(
...
icahosttypes.StoreKey,
...
)
...
// Create the scoped keepers for the host submodule
scopedICAHostKeeper := app.CapabilityKeeper.ScopeToModule(icahosttypes.SubModuleName)
...
// Create the Keeper for the host submodule
app.ICAHostKeeper = icahostkeeper.NewKeeper(
appCodec, keys[icahosttypes.StoreKey], app.GetSubspace(icahosttypes.SubModuleName),
app.IBCKeeper.ChannelKeeper, // may be replaced with middleware such as ics29 fee
app.IBCKeeper.ChannelKeeper, &app.IBCKeeper.PortKeeper,
app.AccountKeeper, scopedICAHostKeeper, app.MsgServiceRouter(),
)
// Create Interchain Accounts AppModule
// Since only the host module is registered, nil is passed as the controller keeper
icaModule := ica.NewAppModule(nil, &app.ICAHostKeeper)
// Create a host IBC module
icaHostIBCModule := icahost.NewIBCModule(app.ICAHostKeeper)
// Register host route
ibcRouter.
AddRoute(icahosttypes.SubModuleName, icaHostIBCModule)
...
// Register Interchain Accounts AppModule
app.moduleManager = module.NewManager(
...
icaModule,
)
...
// Add Interchain Accounts to begin blocker logic
app.moduleManager.SetOrderBeginBlockers(
...
icatypes.ModuleName,
...
)
// Add Interchain Accounts to end blocker logic
app.moduleManager.SetOrderEndBlockers(
...
icatypes.ModuleName,
...
)
// Add Interchain Accounts module InitGenesis logic
app.moduleManager.SetOrderInitGenesis(
...
icatypes.ModuleName,
...
)
// initParamsKeeper init params keeper and its subspaces
func initParamsKeeper(appCodec codec.BinaryCodec, legacyAmino *codec.LegacyAmino, key, tkey sdk.StoreKey) paramskeeper.Keeper {
...
paramsKeeper.Subspace(icahosttypes.SubModuleName)
...
}
Host module parameters
Key | Type | Default Value |
---|---|---|
HostEnabled | bool | true |
AllowMessages | []string | ["*"] |
HostEnabled
The HostEnabled parameter controls a chain’s ability to service ICS-27 host specific logic.
AllowMessages
The AllowMessages parameter provides the ability for a chain to limit the types of messages or transactions that hosted interchain accounts are authorized to execute by defining an allowlist using the Protobuf message type URL format.
For example, a Cosmos SDK-based chain that elects to provide hosted interchain accounts with the ability to stake and unstake will define its parameters as follows:
"params": {
"host_enabled": true,
"allow_messages": ["/cosmos.staking.v1beta1.MsgDelegate", "/cosmos.staking.v1beta1.MsgUndelegate"]
}
There is also a special wildcard "*" value which allows any type of message to be executed by the interchain account. This must be the only value in the allow_messages array.
"params": {
"host_enabled": true,
"allow_messages": ["*"]
}
Rationale
- Permissionless: An interchain account may be created by any actor without the approval of a third party (e.g. chain governance). Note: individual implementations may implement their own permissioning scheme; however, the protocol must not require permissioning from a trusted party to be secure.
- Fault isolation: A controller chain must not be able to control accounts registered by other controller chains. For example, in the case of a fork attack on a controller chain, only the interchain accounts registered by the forked chain will be vulnerable.
- The ordering of transactions sent to an interchain account on a host chain must be maintained. Transactions must be executed by an interchain account in the order in which they are sent by the controller chain.
- If a channel closes, the controller chain must be able to regain access to registered interchain accounts by simply opening a new channel.
- Each interchain account is owned by a single account on the controller chain. Only the owner account on the controller chain is authorized to control the interchain account. The controller chain is responsible for enforcing this logic.
- The controller chain must store the account address of any owned interchain accounts registered on host chains.
- A host chain must have the ability to limit interchain account functionality on its chain as necessary (e.g. a host chain can decide that interchain accounts registered on the host chain cannot take part in staking).
Backwards Compatibility
This proposal is backwards-incompatible because it is state-machine breaking. The feature must be introduced in a new major version.
Test Cases
The following test cases are available in the ibc-go e2e repository.
- Registration of an interchain account - test link
- [OPTIONAL] Transfer funds from interchain account to a different account on the same chain using an unordered channel - test link. Note: requires ibc-go >= v8.1.0.
- Transfer funds from interchain account to a different account on the same chain using an ordered channel - test link
- A failed transfer of funds from interchain account to a different account on the same chain due to insufficient funds in the interchain account balance - test link
- Transfer funds from interchain account to a different account on the same chain after an ordered channel closes and a new channel is reopened to connect to the existing interchain account - test link
- A transfer of funds from an interchain account to a different account on the same chain using an x/gov sdk module based controller (on the controlling chain) - test link
- A transfer of funds from an interchain account to a different account on the same chain using a x/group sdk module based controller (on the controlling chain) - test link
- [OPTIONAL] A transfer of funds from an interchain account to a different account on the same chain using an incentivised IBC packet - test link. Note: requires relayer incentivization middleware.
- Query if host functionality is enabled - test link
- [OPTIONAL] Transfer funds from interchain account to a different account after upgrading the channel from ordered to unordered - test link. Note: requires ibc-go >= v8.1.0.
Reference Implementation
The implementation of this specification can be found in the ibc-go repository.
Security Considerations
SDK Security Model
SDK modules on a chain are assumed to be trustworthy. For example, there are no checks to prevent an untrustworthy module from accessing the bank keeper.
The implementation of ICS-27 in ibc-go uses this assumption in its security considerations.
The implementation assumes other IBC application modules will not bind to ports within the ICS-27 namespace.
Copyright
Copyright and related rights waived via CC0.
cip | 15 |
---|---|
title | Discourage memo usage |
description | Discourage memo usage by modifying two auth params. |
author | Rootul Patel (@rootulp), NashQueue (@nashqueue) |
discussions-to | https://forum.celestia.org/t/cip-discourage-memo-usage/1508 |
status | Draft |
type | Standards Track |
category | Core |
created | 2024-01-21 |
Abstract
This proposal aims to discourage the use of transaction memos in Celestia by modifying two parameters:
- Decrease auth.MaxMemoCharacters from 256 to 16.
- Increase auth.TxSizeCostPerByte from 10 to 16.
Additionally, this CIP converts these two parameters from governance modifiable to hard-coded values that are unmodifiable by governance.
Motivation
Transactions on Celestia may optionally include a user-specified memo. The Cosmos SDK describes the memo as “a note or comment to send with the transaction”. The memo field is often used by centralized exchanges to uniquely identify the user depositing into an exchange. The memo field has also been used by IBC relayers to tag IBC transactions with the relayer’s name and software version.
Recently, a number of inscription projects on Cosmos chains have built entire protocols based on the information transmitted via the memo field. One such project that launched on Celestia is CIAS. For example, see this tx, which has a memo field that base64 decodes to
data:,{"op":"cancel","amt":3230000,"tick":"cias","p":"cia-20"}
Based on the CIAS docs, this memo cancels an inscription listing. There are similar memos to deploy, mint, transfer, and list inscriptions.
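To make the encoding concrete, such a memo can be decoded with a standard base64 decoder. A minimal Go sketch, using one of the common mint memos from the table further below:
// Decode a CIAS inscription memo (standard library only).
package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// A common CIAS "mint" memo observed on-chain (see the table below).
	memo := "ZGF0YToseyJvcCI6Im1pbnQiLCJhbXQiOjEwMDAwLCJ0aWNrIjoiY2lhcyIsInAiOiJjaWEtMjAifQ=="
	decoded, err := base64.StdEncoding.DecodeString(memo)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(decoded))
	// Output: data:,{"op":"mint","amt":10000,"tick":"cias","p":"cia-20"}
}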
On one hand, it is exciting to see protocols launch on top of Celestia. On the other hand, this usage poses risks to the Celestia network.
Celestia was designed to support the publication of arbitrary data, but the memo field is not the ideal mechanism to publish such data. One of the design goals stated in the original LazyLedger paper is:
Application message retrieval partitioning. Client nodes must be able to download all of the messages relevant to the applications they use from storage nodes, without needing to download any messages for other applications.
For this reason, blob data is scoped to an application-specific namespace. However, the transaction memo field is not scoped to an application-specific namespace. Instead, it pollutes the reserved TRANSACTION_NAMESPACE. This is undesirable because partial nodes that want to verify the current Celestia state must download and execute all transactions in the TRANSACTION_NAMESPACE (including memos).
As of January 17, 2024, 0.51 GiB of data has been published to the Celestia network via memos and 1.46 GiB of data has been published via blob data.
This proposal seeks to realign incentives so that protocol builders are encouraged to favor application-specific namespace blob data over memo fields.
Specification
Param | Current | Proposed |
---|---|---|
auth.MaxMemoCharacters | 256 | 16 |
auth.TxSizeCostPerByte | 10 | 16 |
Rationale
auth.MaxMemoCharacters
auth.MaxMemoCharacters places an upper bound on the number of characters in the memo field. Note that not all uses of the memo field are nefarious:
- Crypto exchanges use memos to uniquely identify the user depositing into an exchange.
Exchange | Memo characters |
---|---|
Binance | 13 |
Bithumb | 10 |
Coinbase | 10 |
Gemini | 13 |
KuCoin | 10 |
- Some IBC relayers include the Hermes version in their memo. For example: mzonder | hermes 1.7.4+ab73266 (https://hermes.informal.systems), which is 64 characters.
Given this context, what is the distribution of memo lengths in practice? How often are they used vs. empty?
Observe that the distribution of memo lengths is spiky at 80 and 59 characters. The spike at 0 is expected (txs by default don’t contain a memo). To learn why the other spikes exist, we have to inspect the most common memos:
Tx count | Memo length | Memo | Base64 decoded |
---|---|---|---|
4296795 | 80 | ZGF0YToseyJvcCI6Im1pbnQiLCJhbXQiOjEwMDAwLCJ0aWNrIjoiY2lhcyIsInAiOiJjaWEtMjAifQ== | data:,{"op":"mint","amt":10000,"tick":"cias","p":"cia-20"} |
1874034 | 59 | data:,{"op":"mint","amt":100000,"tick":"TIMS","p":"tia-20"} | N/A |
210265 | 80 | ZGF0YToseyJvcCI6Im1pbnQiLCJhbXQiOjEwMDAwMCwidGljayI6IlRJTVMiLCJwIjoidGlhLTIwIn0= | data:,{"op":"mint","amt":100000,"tick":"TIMS","p":"tia-20"} |
78409 | 77 | Yours truly, ValiDAO \| hermes 1.7.1+0658526 (https://hermes.informal.systems) | N/A |
66181 | 80 | ZGF0YToseyJwIjoiY2lhLTIwIiwib3AiOiJtaW50IiwidGljayI6ImNpYXMiLCJhbXQiOiIxMDAwMCJ9 | data:,{"p":"cia-20","op":"mint","tick":"cias","amt":"10000"} |
65931 | 80 | ZGF0YToseyJwIjoic2VpLTIwIiwib3AiOiJtaW50IiwidGljayI6InNlaXMiLCJhbXQiOiIxMDAwIn0= | data:,{"p":"sei-20","op":"mint","tick":"seis","amt":"1000"} |
53313 | 80 | ZGF0YToseyJvcCI6Im1pbnQiLCJhbXQiOjEwMDAwLCJ0aWNrIjoiQ0lBUyIsInAiOiJjcmMtMjAifQ== | data:,{"op":"mint","amt":10000,"tick":"CIAS","p":"crc-20"} |
51378 | 80 | ZGF0YToseyJvcCI6Im1pbnQiLCJhbXQiOjEwMDAwLCJ0aWNrIjoiY2lhcyIsInAiOiJjcmMtMjAifQ== | data:,{"op":"mint","amt":10000,"tick":"cias","p":"crc-20"} |
40568 | 17 | Delegate(rewards) | N/A |
31932 | 91 | relayed by CryptoCrew Validators \| hermes 1.6.0+4b5b34ea2 (https://hermes.informal.systems) | N/A |
31233 | 76 | Relayed by Stakin \| hermes 1.7.3+e529d2559 (https://hermes.informal.systems) | N/A |
Observe that seven of these most common memos are inscription-related data, most of it base64 encoded. Three are relayer memos. The remaining one, “Delegate(rewards)”, appears to be the default memo applied via the Keplr wallet for a delegate tx.
auth.TxSizeCostPerByte
auth.TxSizeCostPerByte is the gas cost per byte of a transaction. The current value of 10 is a Cosmos SDK default, and it is comparable to the current blob.GasPerBlobByte value of 8. In order to discourage the usage of the memo field and encourage the use of blob data, we propose increasing auth.TxSizeCostPerByte to 16 so that each memo byte costs roughly twice as much as a blob byte. It is worth noting that auth.TxSizeCostPerByte is important outside the context of transaction memos because this parameter is used for all transaction bytes. Non-memo transaction contents may similarly bloat the TRANSACTION_NAMESPACE.
How expensive are transactions after an auth.TxSizeCostPerByte increase?
auth.TxSizeCostPerByte | MsgSend without memo | MsgSend with 256 character memo | MsgPFB with 256 byte blob |
---|---|---|---|
10 | 77004 gas | 79594 gas | 67765 gas |
16 | 78906 gas | 83050 gas | 69763 gas |
100 | 105534 gas | 131434 gas | 97735 gas |
1000 | 390834 gas | 649834 gas | 397435 gas |
Assuming minimum-gas-prices = "0.002utia"
auth.TxSizeCostPerByte | MsgSend without memo | MsgSend with 256 character memo | MsgPFB with 256 byte blob |
---|---|---|---|
10 | 154 utia | 159 utia | 135 utia |
16 | 157 utia (+2%) | 166 utia (+4%) | 139 utia (+3%) |
100 | 211 utia (+37%) | 262 utia (+65%) | 195 utia (+44%) |
1000 | 781 utia (+407%) | 1299 utia (+716%) | 794 utia (+488%) |
Therefore, increasing from 10 to 16 is a conservative increase.
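The fee figures above are simply gas multiplied by the gas price. A small Go sketch of the arithmetic, with values taken from the tables:
// Fee = gas used * gas price, as in the tables above.
package main

import "fmt"

func main() {
	const minGasPrice = 0.002 // utia per gas, from minimum-gas-prices = "0.002utia"
	gasUsed := 83050.0        // MsgSend with a 256-character memo at TxSizeCostPerByte = 16
	fmt.Printf("fee ≈ %.0f utia\n", gasUsed*minGasPrice) // prints: fee ≈ 166 utia
}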
FAQ
What do other blockchains use for these params?
Param | Celestia | Cosmos Hub | Osmosis |
---|---|---|---|
auth.MaxMemoCharacters | 256 | 512 | 256 |
auth.TxSizeCostPerByte | 10 | 10 | 10 |
How does this proposal affect ICS-20 memos?
The ICS-20 memo is distinct from the transaction memo, so auth.MaxMemoCharacters does not constrain the ICS-20 memo field. The ICS-20 memo field counts towards a transaction’s bytes, so transactions with large ICS-20 memo fields will be more expensive if auth.TxSizeCostPerByte is increased. This is relevant because we can expect the usage and size of the ICS-20 memo field to increase if Packet Forward Middleware is adopted (see CIP-9).
Why convert these params from governance modifiable to hard-coded values?
The CIP process defined in CIP-1 is distinct from on-chain governance, which relies on token voting. The authors of this CIP would rather use the CIP process to reach “rough consensus” than implement the desired param changes via an on-chain governance proposal. Since the CIP process cannot enforce the outcome of an on-chain governance vote, this CIP suggests removing the ability for on-chain governance to modify these parameters in favor of explicitly setting them via hard-coded values. Put another way, this CIP moves the authority to modify these parameters from on-chain governance to the CIP process. This is directionally consistent with the rationale for CIPs:
We intend CIPs to be the primary mechanisms for proposing new features, for collecting community technical input on an issue, and for documenting the design decisions that have gone into Celestia.
One downside of moving these parameters from governance modifiable to CIP modifiable is that they will only be modifiable via subsequent hard-forks which are expected to be less frequent than the current on-chain governance voting period of one week.
Backwards Compatibility
This proposal is backwards compatible. However, clients that hard-coded gas estimation based on the previous auth.TxSizeCostPerByte value will need to be updated.
Test Cases
TBA
Reference Implementation
Rough steps:
- Add two parameters to the paramfilter block list.
- Define two new versioned constants in the v2 directory for the two proposed parameter values.
- Explicitly set the two parameters to the versioned constants defined in step 2.
Security Considerations
Since this CIP restricts usage of the memo field, projects that can no longer viably use memos will need to migrate to alternative mechanisms. Ideally, projects migrate to blob data, but as Data Insertion in Bitcoin’s Blockchain points out, there are other mechanisms to store data in a blockchain. The two most applicable alternatives in Celestia’s case are:
- Users could send 1utia to a fake address. Addresses are 20 bytes so a user could theoretically include 20 bytes of data per msg send.
- The block proposer can include arbitrary data via their moniker.
This list is not exhaustive and there are likely other mechanisms to store data in Celestia.
Future Work
Currently, the gas cost for a PayForBlob transaction accounts for the number of shares occupied by its blobs. Since shares are 512 bytes, in rare cases, it may be cheaper to publish small data via memos rather than blobs. We may consider future protocol changes to guarantee that blobs are always cheaper than memos. One possible solution is to charge a flat fee for transactions that include memos where the flat fee is greater than the cost of a blob that occupies one share.
Copyright
Copyright and related rights waived via CC0.
cip | 16 |
---|---|
title | Make Security Related Governance Parameters Immutable |
description | Consensus-related parameters should not be modified via on-chain governance in the Celestia network. |
author | @caomingpei |
discussions-to | https://forum.celestia.org/t/cip-make-security-related-governance-parameters-immutable/1566 |
status | Draft |
type | Standards Track |
category | Core |
created | 2024-02-07 |
requires | CIP-13 |
Abstract
This CIP suggests that the security-critical governance parameters consensus.evidence.MaxAgeDuration and consensus.evidence.MaxAgeNumBlocks should be immutable to on-chain governance proposals. In light of the present Celestia specifications and the details provided in CIP-13, maintaining the mutability of those parameters could open the door to future on-chain proposals that alter their values, potentially leading to an inconsistency between the protocol and implementation. This CIP also briefly analyzes the potential security risks that may arise from the parameter modifications.
Motivation
Consensus protocols play an important role in Celestia. As a Data Availability solution, the robustness of Celestia is crucial to the performance and reliability of higher-level applications, such as rollups. CIP-13 introduces a framework for the on-chain governance parameters. Considering that governance proposals have the potential to alter parameter values, it is essential to designate certain parameters as immutable for the on-chain governance to prevent inconsistencies and mitigate security risks.
Specification
Param \ Changeable via Governance | Current | Proposed |
---|---|---|
consensus.evidence.MaxAgeDuration | True | False |
consensus.evidence.MaxAgeNumBlocks | True | False |
Note: Parameters listed in this table are identical to the module parameters specified in CIP-13. The purpose of this CIP is to modify the Changeable via Governance attribute of those two parameters from True to False.
Rationale
Adopting an on-chain governance method comes with inherent risks of governance attacks, particularly concerning parameters related to consensus.evidence.
As outlined in the decentralized governance documentation, all holders of the native token TIA can propose and vote on on-chain governance proposals. It is unrealistic to expect the majority of holders to thoroughly understand the technical details of the protocol and implementation. Consequently, on-chain governance participants (a.k.a. TIA holders) may be incentivized (or, in some cases, bribed) to vote for or against a proposal without understanding the potential impact. Any changeable parameter within the Governance Parameters for Celestia could be targeted for changes through on-chain governance proposals. Therefore, making security-related parameters unchangeable via on-chain governance proposals could serve as an effective way to mitigate the introduced risks.
Inconsistency
Module.Parameter | Default | Summary | Changeable via Governance |
---|---|---|---|
consensus.evidence.MaxAgeDuration | 1814400000000000 (21 days) | The maximum age of evidence before it is considered invalid in nanoseconds. This value should be identical to the unbonding period. | True |
consensus.evidence.MaxAgeNumBlocks | 120960 | The maximum number of blocks before evidence is considered invalid. This value will stop CometBFT from pruning block data. | True |
…… | …… | …… | …… |
staking.UnbondingTime | 1814400 (21 days) | Duration of time for unbonding in seconds. | False |
This is a part of the table introduced in CIP-13. The summary of the parameter consensus.evidence.MaxAgeDuration states “…This value should be identical to the unbonding period”. Meanwhile, the parameter staking.UnbondingTime is NOT changeable, since its Changeable via Governance attribute is set to False. Suppose an on-chain governance proposal tries to modify the default value of consensus.evidence.MaxAgeDuration from 1814400000000000 (21 days) to a different value. It would create an inconsistency between the description and implementation, because the modified value would no longer be identical to the unbonding period.
Security Risk
func (evpool *Pool) verify(evidence types.Evidence) error {
...
// check that the evidence hasn't expired
if ageDuration > evidenceParams.MaxAgeDuration && ageNumBlocks > evidenceParams.MaxAgeNumBlocks {
...
}
...
}
Those two parameters are used in the verify function of verify.go, which is responsible for verifying whether evidence has expired or not. According to the if statement, even without modifying consensus.evidence.MaxAgeDuration, it is still possible to prolong the expiration time of evidence by increasing consensus.evidence.MaxAgeNumBlocks, which means that older evidence will be considered valid. An extended expiration time of evidence introduces the potential risk of Resource Consumption, and a detailed discussion can be found in the Security Considerations.
Additionally, suppose that an on-chain governance proposal sets evidence.MaxAgeDuration and evidence.MaxAgeNumBlocks to extremely low values, meaning that evidence expires quickly. If a malicious validator were to engage in a Duplicate Vote or Light Client Attack, it would lead to consensus instability. Given that Celestia is a solution for data availability, this consensus instability would introduce security risk to the upper-layer applications (e.g. rollups). A detailed discussion can be found in the Security Considerations.
Summary
In summary, configuring these two parameters as immutable values that can NOT be changed via on-chain governance can mitigate the risks of inconsistency and security issues introduced by unintentional (or malicious) governance activities. Moreover, in the face of a security incident that concerns these parameters, reliance on on-chain governance may be inadequate. Implementing modifications through a hard fork represents a more resilient approach.
Backwards Compatibility
This CIP recommends freezing on-chain governance for two security related parameters, which introduces backward incompatibility. This incompatibility is due to any future modifications to these parameters requiring at least a breaking network upgrade.
Reference Implementation
func (*App) BlockedParams() [][2]string {
return [][2]string{
...
// consensus.validator.PubKeyTypes
{baseapp.Paramspace, string(baseapp.ParamStoreKeyValidatorParams)},
+ // consensus.evidence.MaxAgeDuration and .MaxAgeNumBlocks
+ {baseapp.Paramspace, string(baseapp.ParamStoreKeyEvidenceParams)},
}
}
The above example serves as a conceptual illustration; MaxBytes in ParamStoreKeyEvidenceParams should still remain changeable.
Besides, relevant documents, such as the Celestia App Specifications, should be updated accordingly.
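For context, the blocked list returned by BlockedParams is consumed by the x/paramfilter governance handler. A short sketch following the existing celestia-app pattern (exact wiring may differ across versions):
// Sketch: wiring the blocked-parameter list into the gov proposal router.
paramBlockList := paramfilter.NewParamBlockList(app.BlockedParams()...)

govRouter := govv1beta1.NewRouter()
govRouter.AddRoute(paramproposal.RouterKey, paramBlockList.GovHandler(app.ParamsKeeper))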
Security Considerations
This CIP recommends setting those two parameters as immutable constants which are NOT allowed to change via on-chain governance proposals. Adopting this CIP means future changes to those two parameters require community coordination and breaking network upgrades. Although such upgrades carry the risk of community division, it is worth noting that many blockchain communities have successfully navigated multiple breaking network upgrades. The risks of division and disagreement can be minimized by thoroughly discussing and working towards widespread agreement before moving forward with a breaking network upgrade. Consequently, the risk is manageable and should not be a significant concern.
If the proposed two parameters have not been changed since genesis, the security impact of making them NOT changeable via on-chain governance is not significant. Besides, if modifications to those two parameters via on-chain governance were still allowed, this could not only result in an inconsistency between the protocol and implementation but also introduce the following potential security risks:
Consensus Instability
+---------------------+
·—— | Block Header (H1) |
| +---------------------+ ......
| | PayForBlobs |
| +---------------------+
+---------------------+ |
| Block Header (H0) | |
+---------------------+ <--|
| Block Body |
+---------------------+ <--|
|
|
| +---------------------+
·—— | Block Header (H1') |
+---------------------+ ......
| PayForBlobs' |
+---------------------+
Assuming that evidence.MaxAgeDuration and evidence.MaxAgeNumBlocks are configured with extremely low values, such a configuration implies that evidence would expire rapidly. Under these conditions, malicious validators might exploit the system by double voting on two blocks simultaneously, potentially leading to temporary network partitions. In the context of a Data Availability solution like Celestia, any instability in consensus could compromise the blob transactions. This situation risks the security and reliability of upper-layer rollups dependent on this infrastructure.
Resource Consumption
Resource consumption attacks targeting the CometBFT consensus algorithm aim to exploit the resource constraints of nodes within the network. The summary of consensus.evidence.MaxAgeNumBlocks states “…This value will stop CometBFT from pruning block data”. If this window is set too large, nodes must store all the relevant data for the extended period, which can inundate their storage and memory capacity.
Copyright
Copyright and related rights waived via CC0.
cip | 17 |
---|---|
title | Lemongrass Network Upgrade |
description | Reference specifications included in the Lemongrass Network Upgrade |
author | @evan-forbes |
discussions-to | https://forum.celestia.org/t/lemongrass-hardfork/1589 |
status | Final |
type | Meta |
created | 2024-02-16 |
requires | CIP-6, CIP-9, CIP-10, CIP-14, CIP-20 |
Abstract
This Meta CIP lists the CIPs included in the Lemongrass network upgrade.
Specification
Included CIPs
- CIP-6: Price Enforcement
- CIP-9: Packet Forward Middleware
- CIP-10: Coordinated Upgrades
- CIP-14: Interchain Accounts
- CIP-20: Disable Blobstream module
All of the above CIPs are state breaking, and thus require a breaking network upgrade. The activation of this network upgrade will be different from future network upgrades, as described in CIP-10.
Rationale
This CIP provides a complete list of breaking changes for the Lemongrass upgrade, along with links to those specs.
Security Considerations
This CIP does not have additional security concerns beyond what is already discussed in each of the listed specs.
Copyright
Copyright and related rights waived via CC0.
cip | 18 |
---|---|
title | Standardised Gas and Pricing Estimation Interface |
description | A standardised interface for estimating gas usage and gas pricing for transactions |
author | Callum Waters (@cmwaters) |
discussions-to | https://forum.celestia.org/t/cip-standardised-gas-and-pricing-estimation-interface/1621 |
status | Review |
type | Standards Track |
category | Interface |
created | 2024-03-12 |
Abstract
Introduce a standardised querying interface for clients to use to estimate gas usage and gas pricing for transactions. This is designed to promote the entry of third party providers specialising in this service and competing to offer the most reliable and accurate estimations.
Motivation
The general motivation is improving user experience around transaction submission.
Currently, all clients wishing to submit transactions to the network need to obtain two things:
- An estimation of the amount of gas required to execute the transaction
- An estimation of the gas-price that will be sufficient to be included in a block.
All other aspects of signing and submission can remain local, i.e. chainID, account number, or sequence number.
Currently both these things can be provided if you have access to a trusted full node (which can simulate the transaction and return the global min gas price - which is sufficient in non-congested periods). This could be improved on in a few dimensions:
- Estimating the gas by simulating the execution of a transaction is a heavy procedure
- The minimum gas price is insufficient when there is congestion
- Not all nodes expose these endpoints and it may not be so simple to find endpoints that are trusted
In addition, Keplr provides some form of gas estimation but no gas price estimation. Users have to pick from “low”, “medium”, or “high”.
Specification
The following API is proposed for the standardised gas estimation service.
service GasEstimator {
rpc EstimateGasUsage(EstimateGasUsageRequest) returns (EstimateGasUsageResponse) {}
rpc EstimateGasPrice(EstimateGasPriceRequest) returns (EstimateGasPriceResponse) {}
}
message EstimateGasUsageRequest {
cosmos.tx.Tx tx = 1;
}
message EstimateGasUsageResponse {
uint64 estimated_gas_used = 1;
}
message EstimateGasPriceRequest {}
message EstimateGasPriceResponse {
double estimated_gas_price = 1;
}
Given its wide usage both in the Cosmos SDK and more broadly, the service would be implemented using gRPC. RESTful endpoints may optionally be supported in the future using something like grpc-gateway.
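For illustration, a client call against this service could look like the sketch below, assuming Go stubs generated from the proto above into a hypothetical gasestimation package; the endpoint address is likewise illustrative.
// Hypothetical client for the GasEstimator service defined above.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	// Assumed package generated from the GasEstimator proto above.
	gasestimation "example.com/gen/gasestimation"
)

func main() {
	conn, err := grpc.Dial("localhost:9090", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := gasestimation.NewGasEstimatorClient(conn)
	resp, err := client.EstimateGasPrice(context.Background(), &gasestimation.EstimateGasPriceRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("estimated gas price:", resp.EstimatedGasPrice)
}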
Given the expected reliance on clients, all future changes must be done in a strictly non-breaking way.
The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “NOT RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in RFC 2119 and RFC 8174.
Rationale
Most of the rationale behind the necessity of a standard interface and its definition is captured in the Motivation section.
The notion of urgency, i.e. within how many blocks the user wants their transaction to be submitted, may be a useful parameter in the future but is currently left out of the original interface.
Backwards Compatibility
As this is a new interface, no consideration for backwards compatibility is necessary.
Reference Implementation
The effectiveness of a “standardised” interface depends on the willingness of current and future clients to adopt it, as well as the willingness of teams to provide those services. To set a sufficient precedent, the Node API within celestia-node and the consensus node within celestia-app will implement the client and server sides respectively, creating an interface over the existing line of communication. That way, by default, light nodes will use that API with the trusted provider they are already using for transaction submission.
The consensus node will use the SimulateTx method to estimate the gas used, and use the min_gas_price parameter within state as the estimated_gas_price.
The Node API will optionally allow a user to pass a URL in the constructor. If this is not provided, the default will be to use the gRPC server of the consensus node. Users will still be able to manually set the gas used and gas price, which will override any automated estimation.
Security Considerations
It must be noted that this current service is trust-based. The service operates on a best-effort basis as it tries to most accurately predict the gas used and the gas price such that the transaction is included and the user has not needed to overpay. However, there is nothing preventing a “bad” service provider from providing estimates multitudes greater than is necessary. Guardrails on the client side should be in place to prevent significant waste of funds.
Copyright
Copyright and related rights waived via CC0.
cip | 19 |
---|---|
title | Shwap Protocol |
description | Shwap - a new messaging framework for DA and sampling |
author | Hlib Kanunnikov (@Wondertan) |
discussions-to | https://forum.celestia.org/t/cip-shwap-protocol/1551 |
status | Review |
type | Standards Track |
category | Data Availability, Networking |
created | 2024-02-02 |
Abstract
This document specifies Shwap (a portmanteau of share and swap) - the simple, expressive, and extensible messaging framework aiming to solve critical inefficiencies and standardize messaging of Celestia’s Data Availability p2p network.
Shwap defines a messaging framework to be exchanged around the DA p2p network in a trust-minimized way without enforcing transport (QUIC/TCP or IP) or application-layer protocol semantics (e.g., HTTP/x). Using this framework, Shwap declares the most common messages and provides options for stacking them with lower-level protocols. Shwap can be stacked together with application protocols like HTTP/x, KadDHT, Bitswap, or any custom protocol.
Motivation
The current Data Availability Sampling (DAS) network protocol is inefficient. A single sample operation takes log₂(k) network roundtrips (where k is the extended square size). This is not practical and does not scale for the theoretically unlimited data square that the Celestia network enables. The main motive here is a protocol with O(1) roundtrip for multiple samples, preserving the assumption of having 1/N honest peers connected possessing the data.
Initially, Bitswap and IPLD were adopted as the basis for the DA network protocols, including DAS, block synchronization (BS), and blob/namespace data retrieval (ND). They provided battle-tested protocols and tooling with pluggability to rapidly scaffold Celestia’s DA network. However, this came at the price of scalability limits and extra roundtrips, resulting in BS slower than block production. Before the network launch, we transitioned to the optimized ShrEx protocol for BS and integrated CAR- and DAGStore-based storage, optimizing BS and ND. However, DAS was left untouched, preserving its weak scalability and roundtrip inefficiency.
Shwap messaging stacked together with Bitswap protocol directly addresses described inefficiency and provides a foundation for efficient communication for BS, ND, and beyond.
Rationale
The atomic primitive of Celestia’s DA network is the share. Shwap standardizes messaging and serialization for shares. Shares are grouped together, forming more complex data types (Rows, Blobs, etc.). These data types are encapsulated in containers. For example, a row container groups the shares of a particular row. Containers can be identified with the share identifiers in order to request, advertise or index the containers. The combination of containers and identifiers provides an extensible and expressive messaging framework for groups of shares and enables efficient single roundtrip request-response communication.
Many share groups or containers are known in the Celestia network, and systemizing this is the main reason behind setting up this simple messaging framework. A single place with all the possible Celestia DA messages must be defined, which node software and protocol researchers can rely on and coordinate around. Besides, this framework is designed to sustain changes in the core protocol’s data structures and proving system as long as shares stay the de facto atomic data type.
Besides, there needs to be systematization and a joint knowledge base with all the edge cases for possible protocol compositions of Shwap with lower-level protocols such as Bitswap, KadDHT, or ShrEx, which Shwap aims to describe.
Specification
Terms and Definitions
The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “NOT RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.
Commonly used terms in this document are described below.
Shwap: The protocol described by this document. Shwap is a portmanteau name of words share and swap.
Share: The core data structure of DataSquare “swapped” between peers.
DataSquare: The DA square format used by Celestia DA network.
DAH: The Data Availability Header with Row and Column commitments.
Namespace: The namespace grouping sets of shares.
Peer: An entity that can participate in a Shwap protocol. There are three types of peers: client, server, and node.
Client: The Peer that requests content by share identifiers over Shwap.
Server: The Peer that responds with shares over Shwap.
Node: The peer that is both the client and the server.
Proof: A Merkle inclusion proof of the data in the DataSquare.
Message Framework
This section defines Shwap’s messaging framework. Every group of shares that needs to be exchanged over the network MUST define its share identifier and share container and follow their described rules. Every identifier and container MUST define its serialization format, which MAY NOT be consistent with other identifiers and containers.
Share Containers
Share containers encapsulate a set of data shares with proof. Share containers are identified by share identifiers.
Containers SHOULD contain shares within a single DataSquare and MAY NOT be adjacent. Containers MUST have a DAH inclusion proof field defined.
Serialization for Share Containers
Share containers are RECOMMENDED to use protobuf (proto3) encoding, and other formats MAY be used for serialization. A container MAY define multiple serialization formats.
Share Identifiers
Share identifiers identify share containers. Identifiers are not collision-resistant and there MAY be different identifiers referencing the same container. There SHOULD NOT be multiple containers identified by the same identifier.
Identifiers MAY embed each other to narrow down the scope of needed shares. For example, SampleID embeds RowID, as every sample lies on a particular row.
Serialization for Share Identifiers
Share identifiers MUST be serialized by concatenating big-endian representations of fields in the order defined by their respective formatting section. Serialized identifiers SHOULD have constant and predetermined lengths in bytes.
Versioning
If a defined share container or identifier requires an incompatible change, the new message type MAY be introduced suffixed with a new major version starting from v1. E.g., if the Row message needs a revision, RowV1 is created.
Messages
This section defines all the supported Shwap messages, including share identifiers and containers. All the new future messages should be described in this section.
EdsID
EdsID identifies the DataSquare.
EdsID identifiers are formatted as shown below:
EdsID {
Height: u64;
}
The fields with validity rules that form EdsID are:
Height: A uint64 representing the chain height with the data square. It MUST be greater than zero.
Serialized EdsID MUST have a length of 8 bytes.
Eds Container
Eds containers encapsulate the DataSquare. Internally, they only keep the original data (1st quadrant) of the EDS with redundant data (2nd, 3rd and 4th quadrants) computable from the original data.
Eds containers MUST be formatted by serializing the ODS (Original Data Square) left-to-right, share-by-share, in row-major order.
Due to the ever-growing nature of the DataSquare, Eds containers SHOULD be streamed over reliable links using the share-by-share formatting above.
RowID
RowID identifies the Row shares container in a DataSquare.
RowID identifiers are formatted as shown below:
RowID {
EdsID;
RowIndex: u16;
}
The fields with validity rules that form RowID are:
EdsID: A EdsID of the Row Container. It MUST follow EdsID formatting and field validity rules.
RowIndex: A uint16 representing the row index, pointing to a particular row. The 16-bit limit fits data squares of up to 2TB. It MUST NOT exceed the number of DAH Row roots reduced by one.
Serialized RowID MUST have a length of 10 bytes.
Row Container
Row containers encapsulate the rows of the DataSquare. Internally, they only keep the left (original) half of the row with right (redundant) half recomputable from the left half.
Row containers are protobuf formatted using the following proto3 schema:
syntax = "proto3";
message Row {
repeated Share shares_half = 1;
HalfSide half_side = 2;
enum HalfSide {
LEFT = 0;
RIGHT = 1;
}
}
The fields with validity rules that form Row containers are:
SharesHalf: A variable size Share array representing either left or right half of a row. Which half side is defined by HalfSide field. Its length MUST be equal to the number of Column roots in DAH divided by two. The opposite half is computed using Leopard Reed-Solomon erasure-coding. The Leopard algorithm must operate over 8-bit Galois Fields for rows of total size less than or equal to 256 shares or 16-bit GF otherwise. Afterward, the NMT is built over both halves, and the computed NMT root MUST be equal to the respective Row root in DAH.
HalfSide: An enum defining which side of the row SharesHalf field contains. It MUST be either LEFT or RIGHT.
SampleID
SampleID identifies a Sample container of a single share in a DataSquare.
SampleID identifiers are formatted as shown below:
SampleID {
RowID;
ColumnIndex: u16;
}
The fields with validity rules that form SampleID are:
RowID: A RowID of the sample. It MUST follow RowID formatting and field validity rules.
ColumnIndex: A uint16 representing the column index of the sampled share; in other words, the share index within the row. The 16-bit limit fits data squares of up to 2TB. It MUST NOT exceed the number of DAH Column roots reduced by one.
Serialized SampleID MUST have a length of 12 bytes.
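A minimal Go sketch of these serialization rules; the struct and method names are illustrative, while the field order, big-endian encoding, and 12-byte length follow the spec above:
// Serialize a SampleID: Height (8 bytes) ++ RowIndex (2) ++ ColumnIndex (2).
package main

import (
	"encoding/binary"
	"fmt"
)

type SampleID struct {
	Height      uint64 // EdsID field
	RowIndex    uint16 // RowID field
	ColumnIndex uint16
}

func (id SampleID) MarshalBinary() []byte {
	buf := make([]byte, 12)
	binary.BigEndian.PutUint64(buf[0:8], id.Height)
	binary.BigEndian.PutUint16(buf[8:10], id.RowIndex)
	binary.BigEndian.PutUint16(buf[10:12], id.ColumnIndex)
	return buf
}

func main() {
	id := SampleID{Height: 42, RowIndex: 3, ColumnIndex: 7}
	b := id.MarshalBinary()
	fmt.Printf("%x (%d bytes)\n", b, len(b)) // 12 bytes, as required
}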
Sample Container
Sample containers encapsulate single shares of the DataSquare.
Sample containers are protobuf formatted using the following proto3 schema:
syntax = "proto3";
message Sample {
Share share = 1;
Proof proof = 2;
AxisType proof_type = 3;
}
The fields with validity rules that form Sample containers are:
Share: A Share of a sample.
Proof: A protobuf formatted NMT proof of share inclusion. It MUST follow NMT proof verification and be verified against the respective root from the Row or Column axis in DAH. The axis is defined by the ProofType field.
AxisType: An enum defining which axis root the Proof field is coming from. It MUST be either ROW or COL.
RowNamespaceDataID
RowNamespaceDataID identifies namespace Data container of shares within a single Row. That is, namespace shares spanning over multiple Rows are identified with multiple identifiers.
RowNamespaceDataID identifiers are formatted as shown below:
RowNamespaceDataID {
RowID;
Namespace;
}
The fields with validity rules that form RowNamespaceDataID are:
RowID: A RowID of the namespace data. It MUST follow RowID formatting and field validity rules.
Namespace: A fixed-size 29 bytes array representing the Namespace of interest. It MUST follow Namespace formatting and its validity rules.
Serialized RowNamespaceDataID MUST have a length of 39 bytes.
RowNamespaceData Container
RowNamespaceData containers encapsulate user-submitted data under namespaces within a single DataSquare row.
RowNamespaceData containers are protobuf formatted using the following proto3 schema:
syntax = "proto3";
message RowNamespaceData {
repeated Share shares = 1;
Proof proof = 2;
}
The fields with validity rules that form RowNamespaceData containers are:
Shares: A variable size Share array representing data of a namespace in a row.
Proof: A protobuf formatted NMT proof of share inclusion. It MUST follow NMT proof verification and be verified against the respective root from the Row axis in DAH.
Namespace data may span multiple rows, in which case all the data is encapsulated in multiple RowNamespaceData containers. This enables parallelization of namespace data retrieval, and certain compositions may take advantage of that by requesting containers of a single namespace from multiple servers simultaneously.
Core Structures
This section covers messages that do not fit into the Identifier or Container categories but have to be strictly specified for use across the categories and beyond.
AxisType
The data square consists of rows and columns of shares that are committed in NMT merkle trees. Subsequently, we have two commitments over any share in the square. AxisType helps to point to either of those in different contexts.
syntax = "proto3";
enum AxisType {
ROW = 0;
COL = 1;
}
Share
Share defines the atomic primitive of the data square.
syntax = "proto3";
message Share {
bytes data = 1;
}
The fields with validity rules that form the Share message are:
Data: A variable size byte array representing a share. It MUST follow share formatting and validity rules.
Protocol Compositions
This section specifies compositions of Shwap with other protocols. While Shwap is transport agnostic, there are rough edges on the protocol integration, which every composition specification has to describe.
Bitswap
Bitswap is an application-level protocol for sharing verifiable data across peer-to-peer networks. Bitswap operates as a dynamic want-list exchange among peers in a network. Peers continuously update and share their want lists of desired data in real time. Data is promptly fetched as soon as at least one connected peer has it. This ongoing exchange ensures that as soon as any peer acquires the sought-after data, it can instantly share it with those in need.
Shwap is designed to be synergetic with Bitswap, as that is the primary composition to be deployed in Celestia’s DA network. Bitswap provides the 1/N peers guarantee and can parallelize fetching across multiple peers. Both of these properties significantly contribute to Celestia’s efficient DAS protocol.
Bitswap runs over the libp2p stack, which provides QUIC transport integration. Subsequently, Shwap will benefit from features libp2p provides together with transport protocol advancements introduced in QUIC.
Multihashes and CIDs
Bitswap is tightly coupled with Multihash and CID notions, establishing the content addressability property. Bitswap operates over Blocks of data that are addressed and verified by CIDs. Based on that, Shwap integrates into Bitswap by complying with both of these interfaces. The Share Containers are Blocks that are identified via Share Identifiers.
Even though Shwap takes inspiration from content addressability, it breaks free from the hash-based model to optimize message sizes and data request patterns. In a way, it hacks the multihash abstraction to make it contain data that is not, in fact, a hash. Furthermore, the protocol does not include hash digests in the multihashes. The authentication of the messages happens using an externally provided data commitment.
However, breaking free from hashes creates issues that must be solved at the implementation level, particularly in the reference Golang implementation, if forking and substantially diverging from the upstream is not an option. CIDs are required to have fixed and deterministic sizes. Making share identifiers compliant with CID prevents protobuf usage due to its reliance on varints and dynamic byte-array serialization.
The naive question would be: “Why not verify the content after Bitswap provides it back over its API?” Intuitively, this would simplify much and would not require “hacking” the CID. However, this has an important downside: Bitswap would, in such a case, consider the request finalized and the content fetched and valid, sending a DONT_WANT message to its peers, even though the message might still be invalid according to the verification rules.
Bitswap still requires multihashes and CID codecs to be registered. Therefore, we provide a table for the required share identifiers with their respective multihash and CID codec codes. This table should be extended whenever any new share identifier or new version of an existing identifier is added.
Name | Multihash | Codec |
---|---|---|
EdsID* | N/A | N/A |
RowID | 0x7801 | 0x7800 |
SampleID | 0x7811 | 0x7810 |
RowNamespaceDataID | 0x7821 | 0x7820 |
*EdsID and container are excluded from Bitswap composition. Bitswap is limited to messages of size 256kb, while EDSes are expected to be bigger. Also, it is more efficient to parallelize EDS requesting by rows.
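For illustration, the following Go sketch wraps a serialized SampleID into a CIDv1 using the codes registered in the table above. The varint-based multihash framing follows the multiformats spec; the helper libraries shown are assumptions about the implementation environment.

```go
package shwap

import (
	"github.com/ipfs/go-cid"
	mh "github.com/multiformats/go-multihash"
	"github.com/multiformats/go-varint"
)

const (
	sampleMultihashCode = 0x7811 // from the registration table above
	sampleCodec         = 0x7810
)

// sampleCID wraps a 12-byte serialized SampleID into a CIDv1. Note that the
// multihash "digest" is the identifier itself, not a hash, as described above.
func sampleCID(serializedID []byte) cid.Cid {
	// multihash framing: varint(code) || varint(length) || digest
	buf := varint.ToUvarint(sampleMultihashCode)
	buf = append(buf, varint.ToUvarint(uint64(len(serializedID)))...)
	buf = append(buf, serializedID...)
	return cid.NewCidV1(sampleCodec, mh.Multihash(buf))
}
```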
Blocks
Bitswap operates over IPFS blocks (not to be confused with Celestia or other blockchain blocks). An IPFS block is a blob of arbitrary bytes addressed and identified by a CID. An IPFS block must have a CID encoded into it, such that the CID can either be computed by hashing the block or extracted out of the block data itself.
In order for the composition to work, Shwap has to comply with the block format, and for this we introduce a general adapter block type. As Shwap container identifiers are not hash-based and cannot be computed from the container data, we have to encode CIDs into the block adapter for the containers.
The block adapter is protobuf encoded with the following schema:
syntax = "proto3";
message Block {
bytes cid = 1;
bytes container = 2;
}
The fields with validity rules that form the Block are:
CID: A variable size byte array representing a CID. It MUST follow CIDv1 specification. The encoded multihash and codec codes inside of the CID MUST be from one of the registered IDs defined in the table.
Container: A variable size byte array representing a protobuf serialized Shwap Container. It MUST be of a type defined by multihash and codec in the CID field. It MUST be validated according to validity rules of the respective Shwap Container.
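As a sketch of how a client might consume the adapter, the Go snippet below unmarshals a received block and checks its embedded CID against the requested one before container-level validation proceeds. `Block` stands for the protobuf-generated type of the schema above, and the helper libraries are assumptions.

```go
package shwap

import (
	"errors"

	"github.com/ipfs/go-cid"
	"google.golang.org/protobuf/proto"
)

// validateBlock checks that a received adapter block carries the requested
// CID; container-level validation then proceeds per the container's rules.
func validateBlock(requested cid.Cid, raw []byte) error {
	blk := &Block{} // protobuf-generated type for the adapter schema above
	if err := proto.Unmarshal(raw, blk); err != nil {
		return err
	}
	got, err := cid.Cast(blk.Cid)
	if err != nil {
		return err
	}
	if !got.Equals(requested) {
		return errors.New("shwap: block CID does not match requested CID")
	}
	return nil
}
```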
Backwards Compatibility
Shwap is incompatible with the old sampling protocol.
After rigorous investigation, the celestia-node team decided against implementing backward compatibility with the old protocol into the node client due to the immense complications it brings. Instead, the simple and time-efficient strategy is transiently deploying infrastructure for old and new versions, allowing network participants to migrate gradually to the latest version. We will first deprecate the old version, and once the majority has migrated, we will terminate the old infrastructure.
Considerations
Security
Shwap does not change the security model of Celestia’s Data Availability network; it only changes the underlying protocol for data retrieval.
Essentially, the network and its codebase get simplified and require less code and infrastructure to operate. This in turn decreases the amount of implementation vulnerabilities, DOS vectors, message amplification, and resource exhaustion attacks. However, new bugs may be introduced, as with any new protocol.
Protobuf Serialization
Protobuf is the recommended serialization format for share containers. It is widely adopted and is already used within Celestia’s protocols. This was an obvious choice for consistency reasons, even though we could have chosen other, more efficient and advanced formats like Cap’n Proto.
Constant-size Identifier Serialization
Share identifiers should be of a constant size according to the spec. This is needed to support the Bitswap composition, which has an implementation-level limitation that enforces constant-size identifiers. Ideally, this constraint should be avoided, as Shwap aims to be protocol agnostic, and future iterations of Shwap may introduce dynamically sized identifiers if constant sizes ever become problematic.
Sampling and Reconstruction
Shwap deliberately avoids specifying sampling and reconstruction logic. Sampling concerns such as randomness selection and sample picking are out of Shwap’s scope and a matter for subsequent CIPs. Shwap only provides the messaging for sampling (via SampleID and the Sample container).
Reference Implementation
Copyright
Copyright and related rights waived via CC0.
cip | 20 |
---|---|
title | Disable Blobstream module |
description | Disable the Blobstream state machine module |
author | Rootul Patel (@rootulp) |
discussions-to | https://forum.celestia.org/t/cip-disable-blobstream-module/1693 |
status | Final |
type | Standards Track |
category | Core |
created | 2024-04-16 |
Abstract
The purpose of this proposal is to disable the Blobstream module in celestia-app.
Motivation
The Blobstream module is a celestia-app specific state machine module. The Blobstream module was designed to serve as a single component in the original Blobstream architecture. The original Blobstream architecture has been deprecated in favor of Blobstream X so the Blobstream module is no longer needed and thus can be disabled.
Specification
The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “NOT RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in RFC 2119 and RFC 8174.
If this CIP is adopted:
- The state machine MUST NOT accept new transactions that include Blobstream messages (e.g. `NewMsgRegisterEVMAddress`).
- The state machine MUST NOT respond to queries for the Blobstream module (e.g. `AttestationRequestByNonce`, `LatestAttestationNonce`, `EarliestAttestationNonce`).
Parameters
If this CIP is adopted, the following parameter can be removed:
Parameter | Value | Description | Changeable via Governance |
---|---|---|---|
blobstream.DataCommitmentWindow | 400 | Number of blocks that are included in a signed batch (DataCommitment). | True |
Rationale
Disabling the Blobstream module reduces the functionality of the celestia-app state machine. Disabling the Blobstream module also reduces the amount of state that needs to be stored and maintained in the celestia-app state machine.
Backwards Compatibility
This proposal is backwards-incompatible because it is state-machine breaking. Therefore, this proposal cannot be introduced without an app version bump.
Test Cases
[!NOTE] Blobstream was previously named Quantum Gravity Bridge (QGB) and the codebase never fully adopted the name change, so you may interpret instances of `qgb` as `blobstream`.
- Ensure that celestia-app no longer accepts transactions for the Blobstream module. Example: `celestia-app tx qgb <command>` should return an error message.
- Ensure that celestia-app no longer accepts gRPC, RPC, or CLI queries for the Blobstream module. Example: `celestia-app query qgb <command>` should return an error message.
Reference Implementation
Celestia-app uses a versioned module manager and configurator that enables the removal of modules during app version upgrades. Concretely, the Blobstream module can be disabled via this diff in `app.go`:
{
Module: blobstream.NewAppModule(appCodec, app.BlobstreamKeeper),
FromVersion: v1,
- ToVersion: v2,
+ ToVersion: v1,
},
The blobstream store key MUST be removed from the versioned store keys map in `app/modules.go` for app version 2:
func versionedStoreKeys() map[uint64][]string {
	return map[uint64][]string{
		1: {
			// ...
		},
		2: {
			// ...
-			blobstreamtypes.StoreKey,
		},
	}
}
The Blobstream module’s tx commands and query CLI commands MAY be removed from the CLI in `x/blobstream/module.go`:
// GetTxCmd returns no command because the blobstream module was disabled in app
// version 2.
func (a AppModuleBasic) GetTxCmd() *cobra.Command {
- return bscmd.GetTxCmd()
+ return nil
}
// GetQueryCmd returns no command because the blobstream module was disabled in
// app version 2.
func (AppModuleBasic) GetQueryCmd() *cobra.Command {
- return bscmd.GetQueryCmd()
+ return nil
}
Lastly, the x/blobstream module registers hooks in the staking module. Since these hooks are not version-aware, they MUST be made no-ops for app versions >= 2.
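A minimal sketch of that gating follows, assuming a hypothetical wrapper around the module's hooks. Only one method is shown, since the remaining methods follow the same pattern; the signature assumes the Cosmos SDK v0.46 `StakingHooks` interface.

```go
import (
	sdk "github.com/cosmos/cosmos-sdk/types"
	stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types"
)

// versionedHooks wraps the Blobstream staking hooks and silences them from
// app version 2 onwards. The wrapper type and version getter are illustrative.
type versionedHooks struct {
	inner      stakingtypes.StakingHooks
	appVersion func(ctx sdk.Context) uint64
}

func (h versionedHooks) AfterValidatorCreated(ctx sdk.Context, valAddr sdk.ValAddress) error {
	if h.appVersion(ctx) >= 2 {
		return nil // no-op: Blobstream is disabled from app version 2
	}
	return h.inner.AfterValidatorCreated(ctx, valAddr)
}

// ...the remaining StakingHooks methods are wrapped in the same way.
```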
Security Considerations
This CIP generally reduces the surface area of celestia-app because it removes functionality that is no longer deemed necessary. Therefore, it could be argued that this CIP reduces the surface area for bugs or security vulnerabilities.
However, there is a slim chance that this CIP uncovers a bug, because the Celestia state machine hasn’t disabled a module before. Executing this CIP involves using new components (i.e. a versioned module manager and configurator) which may uncover bugs in software outside the scope of this CIP.
Copyright
Copyright and related rights waived via CC0.
cip | 21 |
---|---|
title | Introduce blob type with verified signer |
description | Introduce a new blob type that can be submitted whereby the signer address is included and verified. |
author | Callum Waters (@cmwaters) |
discussions-to | https://forum.celestia.org/t/cip-blobs-with-verified-author |
status | Final |
type | Standards Track |
category | Core |
created | 2024-05-22 |
Abstract
Introduce a new blob type (v1 share format) that can be submitted with the author of the blob. Validators verify that the author is correct, simplifying the loop for rollups that adopt a fork-choice rule that whitelists one or more sequencers (blob publishers).
Motivation
A common fork-choice rule for rollups is to enshrine the sequencer. In this situation, full rollup nodes pull all the blobs on one or more namespaces and verify their authenticity through the address (i.e. `celestia1fhalcne7syg....`) that paid for those blobs: the `signer`. Currently in Celestia, the `signer` field is located in the `MsgPayForBlobs` (PFB), which is separated from the blobs itself. Thus, the current flow is as follows:
- Retrieve all the PFBs in the `PFBNamespace`. Verify inclusion, then loop through them and find the PFBs that correspond to the namespaces the rollup is subscribed to.
- For the PFBs of the namespaces the rollup is subscribed to, verify that the `signer` matches the sequencer.
- Use the share indexes of the `IndexWrapper` of the PFB to retrieve the blobs that match the PFB. Verify the blobs’ inclusion and finally process the blobs.
For rollups using ZK, such as Sovereign, the flow is as follows:
- Query all the blobs from the rollup namespace via RPC
- For each blob, reconstruct the blob commitment.
- Fetch the PFB namespace
- Parse the PFB namespace and create a mapping from blob commitment -> PFB
- (In circuit) Accept the list of all rollup blobs and the list of relevant PFBs as inputs
- (In circuit) Verify that the claimed list of blobs matches the block header using the namespaced merkle tree
- (In circuit) For each blob, find the PFB with the matching commitment and check that the sender is correct.
- (In circuit) For each relevant PFB, check that the bytes provided match the namespaced merkle tree
This flow is needlessly complicated and computationally heavy when constructing proofs. This CIP proposes an easier path for rollups that opt for this fork-choice rule.
Specification
This CIP introduces a new blob type (known henceforth as an authored blob):
message Blob {
bytes namespace_id = 1;
bytes data = 2;
uint32 share_version = 3;
uint32 namespace_version = 4;
// new field
string signer = 5;
}
Given proto’s backwards compatibility, users could still submit the old blob type (in the `BlobTx` format), and the signer would be processed as an empty string.
The new block validity rule (in `PrepareProposal` and `ProcessProposal`) would thus be that if the signer is not empty, it must match that of the PFB that paid for it. When validating the `BlobTx`, validators would check the equivalence of the PFB’s `signer` and the Blob’s `signer` (as well as verifying the signature itself).
Although no version changes are required for protobuf-encoded blobs, the share encoding would change. Blobs containing a non-empty signer string would be encoded using the new v1 share format (the first share format version is 0):
Note that in this diagram it is the `Info Byte` that contains the share version, not to be confused with the namespace version.
Blobs with an empty `signer` string would remain encoded using the v0 format. A transaction specifying a share version of 1 and an empty signer field would be rejected. Equally, specifying a share version of 0 and a non-empty signer field would be rejected.
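A minimal Go sketch of that consistency rule (names assumed, not from celestia-app):

```go
package blob

import "errors"

// validateShareVersion enforces the pairing rules described above:
// v1 shares require a signer, v0 shares must not carry one.
func validateShareVersion(shareVersion uint8, signer []byte) error {
	switch {
	case shareVersion == 1 && len(signer) == 0:
		return errors.New("share version 1 requires a non-empty signer")
	case shareVersion == 0 && len(signer) != 0:
		return errors.New("share version 0 must not include a signer")
	}
	return nil
}
```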
The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “NOT RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in RFC 2119 and RFC 8174.
Rationale
Given the current specification change, the new loop is simplified:
- Retrieve all blobs in the subscribed namespace.
- Verify that the blobs from the previous step belong to the subscribed namespace and that the list is complete (i.e. there are no blobs present in the subscribed namespace that are absent from the list of retrieved blobs).
- Verify that the `signer` in each blob matches that of an allowed sequencer.
As a small digression, it may be feasible to additionally introduce a new namespace version with the enforcement that all blobs in that namespace use the v1 format i.e. have a signer. However, this does not mean that the signer matches that of the sequencer (which Celestia validators would not be aware of). This would mean that full nodes would need to get and verify all blobs in the namespace anyway.
Backwards Compatibility
This change requires a hard fork network upgrade as older nodes will not be able to verify the new blob format. The old blob format will still be supported allowing rollups to opt into the change as they please.
Test Cases
Test cases will need to ensure that a user may not forge an incorrect signer, nor misuse the versioning. Test cases should also ensure that the backwards compatibility properties mentioned earlier are met.
Reference Implementation
The Go implementation makes the following modifications to the `Blob` proto:
message BlobProto {
bytes namespace_id = 1;
bytes data = 2;
uint32 share_version = 3;
uint32 namespace_version = 4;
+ // Signer is sdk.AccAddress that paid for this blob. This field is optional
+ // and can only be used when share_version is set to 1.
+ bytes signer = 5;
}
The signer is validated in `CheckTx` and `ProcessProposal` as follows:
signer, err := sdk.AccAddressFromBech32(msgPFB.Signer)
if err != nil {
return err
}
for _, blob := range bTx.Blobs {
// If share version is 1, assert that the signer in the blob
// matches the signer in the msgPFB.
if blob.ShareVersion() == share.ShareVersionOne {
if appVersion < v3.Version {
return ErrUnsupportedShareVersion.Wrapf("share version %d is not supported in %d. Supported from v3 onwards", blob.ShareVersion(), appVersion)
}
if !bytes.Equal(blob.Signer(), signer) {
return ErrInvalidBlobSigner.Wrapf("blob signer %s does not match msgPFB signer %s", sdk.AccAddress(blob.Signer()).String(), msgPFB.Signer)
}
}
}
Security Considerations
Rollups using this pattern for verifying the enshrined sequencer assume that at least 1/3 of the network’s voting power is “correct”. Note that this is a more secure assumption than forking, which may require up to 2/3+ of the voting power to be “correct”. Rollups may still decide to retrieve the PFBs and validate the signatures themselves if they wish to avoid this assumption.
Copyright
Copyright and related rights waived via CC0.
cip | 22 |
---|---|
title | Removing the IndexWrapper |
author | NashQueue (@Nashqueue) |
discussions-to | https://forum.celestia.org/t/achieving-trust-minimized-light-clients-through-zk-proofs-instead-of-fraud-proofs/1759 |
status | Review |
type | Standards Track |
category | Core |
created | 2024-06-26 |
Abstract
A reserved namespace exists to store all PayForBlobs (PFB) transactions. These transactions are populated with metadata, including the start index of the blobs that the PFB transaction references. These indices can only be populated after the blobs are in the data square, making the creation of a deterministic square layout more complicated since protobuf uses variable-length encoding. The indices were needed to create compact fraud proofs for blob inclusion proofs in the future. By removing the indices from the metadata, we can simplify the square layout creation and make it more efficient, but we have to ZK prove the blob inclusion rules instead.
Specification
Remove the `share_indexes` from the `IndexWrapper`. With the removal of the `share_indexes`, the `IndexWrapper` will no longer be needed and should be removed as a data structure.
Rationale
The index where the blob starts had two initial purposes:
- It was needed to create compact fraud proofs for blob inclusion proofs.
- It was needed to create the square out of the transactions alone in the absence of any constraints on blob placement.
We are solving the first by committing to proving the correctness of the square layout in the future using a ZK proof, thereby removing the need for the index for fraud proofs.
The second initial purpose was removed when we moved to a deterministic square construction. With that, the index was no longer needed for blob placement, as it can be deterministically derived from the list of transactions. Each blob has to follow the blob share commitment rules and cannot be placed at an index that does not respect the rules.
Backwards Compatibility
Removing the index is a breaking consensus change but should not affect anybody upstream of this change.
celestia-node does not consume this index; it has its own way of deriving the same information, so this proposal does not affect it.
None of the rollup teams are affected by this change except Sovereign SDK. The circuit parsing the PFB reserved namespace would break and must be adapted, although the circuit does not use the information from the index. If this change is accepted, the live rollup teams will have to upgrade their circuits when Celestia upgrades to the new version. Currently, no rollups using Sovereign SDK are live on mainnet using Celestia, so a breaking change would not affect anyone directly.
Security Considerations
No Celestia light nodes rely on the index to verify the square’s correctness. No fraud proofs rely on the index, so removing it does not affect the network’s security. Without the index, we won’t be able to create compact fraud proofs anymore. This means that accepting this proposal is also a commitment to ZK prove the validity of the square layout.
Copyright
Copyright and related rights waived via CC0.
cip | 23 |
---|---|
title | Coordinated prevote times |
description | Scheduled prevote times for consistent blocks |
author | Callum Waters (@cmwaters) |
discussions-to | https://forum.celestia.org/t/coordinated-start-time-intervals/1768 |
status | Draft |
type | Standards Track |
category | Core |
created | 2024-07-12 |
Abstract
Block propagation speed is dependent on block size, thus larger blocks have later block times. It is desirable that, regardless of block size, blocks are finalized with a consistent cadence. This CIP proposes a solution to this problem: correct processes that receive a valid proposal withhold their prevote until `TimeoutPropose` has elapsed.
Motivation
The current network has a block size cap of 2MB. We have observed in testnets that larger blocks (~8MB) shift the block time by up to 7 seconds (18 seconds in total with the default 11-second `TimeoutCommit`). This range of 11-18 seconds for blocks 0-8MB in size is too inconsistent for users. We want to provide block finality with a consistent cadence.
Specification
The current system relies on a fixed timeout (known as `TimeoutCommit`) after finality to dictate the interval between block times. The proposed modification targets the proposal timeout. Currently, if a correct process does not receive a proposal within a set time `TimeoutPropose` after the round’s start time, the process will prevote nil. This mechanism ensures liveness in the event that the proposer fails. This CIP makes the following modifications:
- Upon receiving a correct proposal, a process will not immediately PREVOTE but will wait until `TimeoutPropose` has elapsed before sending its vote (see the sketch after this list).
- Upon receiving an invalid proposal, the process will immediately PREVOTE nil.
- If a process is locked on a block, it will send its PREVOTE immediately (this situation applies after the first round).
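The following rough Go sketch illustrates these three rules; the consensus-state type and its helpers are hypothetical stand-ins for celestia-core's internals, not the actual implementation.

```go
package consensus

import "time"

// state is a hypothetical slice of consensus state, for illustration only.
type state struct {
	roundStart     time.Time
	timeoutPropose time.Duration
	lockedBlock    []byte // hash of a locked block, if any
	prevote        func(blockHash []byte)
	isValid        func(proposalHash []byte) bool
}

// onProposal schedules the prevote according to the coordinated-prevote rules.
func (s *state) onProposal(proposalHash []byte) {
	if !s.isValid(proposalHash) {
		s.prevote(nil) // invalid proposal: prevote nil immediately
		return
	}
	if s.lockedBlock != nil {
		s.prevote(s.lockedBlock) // locked (after the first round): vote immediately
		return
	}
	// Valid proposal: withhold the prevote until TimeoutPropose has elapsed
	// since the round started, so correct processes vote on a shared schedule.
	delay := time.Until(s.roundStart.Add(s.timeoutPropose))
	if delay < 0 {
		delay = 0
	}
	time.AfterFunc(delay, func() { s.prevote(proposalHash) })
}
```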
This mechanism can be switched on and off and is controlled by the application via the `ConsensusParams` in the `EndBlock` of the ABCI interface.
In addition, `TimeoutCommit` will also move from a local variable to one controlled by the application, as outlined in ADR115.
It is expected that enabling this mechanism would work alongside reducing the `TimeoutCommit`.
Parameters
The proposal adds the following variables, which can be controlled by the application but will not be exposed to governance:
- `EnableCoordinatedPrevotes`
- `TimeoutCommit` (if 0, a node’s local config value will be used, for backwards compatibility)
- `TimeoutPropose` (if 0, a node’s local config value will be used, for backwards compatibility)
Given a target block rate of 12 seconds, enabling this mechanism would coincide with changes to the following timeouts:
- `TimeoutPropose` remains at 10 seconds
- `TimeoutCommit` goes from 11 seconds to 1 second
- `EnableCoordinatedPrevotes` is set to `true`
NOTE: These numbers are subject to benchmarking
Rationale
The variables `TimeoutCommit` and `TimeoutPropose` were previously part of a node’s local configuration. Switching these variables to be coordinated by consensus itself is critical for achieving consistent block times.
Backwards Compatibility
Enabling this mechanism must be coordinated amongst all consensus nodes in the network. It should tie in with a major upgrade. The changes themselves can be made in a backwards-compatible manner in `celestia-core` by having them be disabled by default.
Test Cases
Testing must be designed around validating that this approach does achieve more consistent block times (i.e. a 1 second standard deviation). As this modifies the voting logic, testing should also verify that correct nodes always vote.
Reference Implementation
TBC
Security Considerations
This modification is small in scope and logically shouldn’t impact the properties of consensus; however, it still modifies the consensus algorithm, and thus there is implementation risk.
Copyright
Copyright and related rights waived via CC0.
cip | 24 |
---|---|
title | Versioned Gas Scheduler Variables |
description | Transition to hard fork-only updates for gas scheduler variables |
author | Nina Barbakadze (@ninabarbakadze) |
discussions-to | https://forum.celestia.org/t/cip-versioned-gas-scheduler-variables/1785 |
status | Final |
type | Standards Track |
category | Core |
created | 2024-07-24 |
Abstract
Gas scheduler parameters `blob.GasPerBlobByte` and `auth.TxSizeCostPerByte` will no longer be modifiable by governance but may only change via a hard fork upgrade.
Motivation
Versioning the on-chain, governance-modifiable parameters `blob.GasPerBlobByte` and `auth.TxSizeCostPerByte` aims to stabilize gas estimation by removing block-to-block variability. This allows these values to be hardcoded into estimators, simplifying the gas estimation process and making transaction costs more predictable without the need for pre-transaction queries.
Specification
Currently, `GasPerBlobByte` and `TxSizeCostPerByte` are module parameters within the `blob` and `auth` modules, allowing their modification via `ParameterChangeProposal`. The proposed modification changes these parameters to hardcoded constants within the application, accessible via version-specific getters (sketched after the parameter list below).
Parameters
The proposal makes these two variables modifiable through hard fork upgrades:
Previously:
- `blob.GasPerBlobByte`
- `auth.TxSizeCostPerByte`
Now:
- `appconsts.GasPerBlobByte`
- `appconsts.TxSizeCostPerByte`
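A minimal sketch of such version-specific getters follows. The concrete values shown are assumptions (the long-standing defaults), and the real constants live in celestia-app's `appconsts` package.

```go
package appconsts

// Illustrative values only; the authoritative constants are defined in
// celestia-app and change only with a new app version.
const (
	v3GasPerBlobByte    uint32 = 8
	v3TxSizeCostPerByte uint64 = 10
)

// GasPerBlobByte returns the gas cost per blob byte for a given app version.
func GasPerBlobByte(_ uint64) uint32 {
	// From v3 onwards a single constant applies; a future app version would
	// branch on the version argument here.
	return v3GasPerBlobByte
}

// TxSizeCostPerByte returns the per-byte tx size cost for a given app version.
func TxSizeCostPerByte(_ uint64) uint64 {
	// Same pattern as above: versioned, not governance-modifiable.
	return v3TxSizeCostPerByte
}
```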
Backwards Compatibility
Enabling this feature requires a hard fork network upgrade.
Test Cases
Test cases should verify that gas scheduler variables are exclusively updated via hard forks, effectively preventing updates through governance mechanisms and that the gas meter uses those constants.
Reference Implementation
Starting from v3, we updated the `PayForBlobs()` function in `x/blob/keeper.go` to use the versioned `GasPerBlobByte` parameter when calculating gas based on the size of the blobs, while maintaining compatibility with previous versions.
- gasToConsume := types.GasToConsume(msg.BlobSizes, k.GasPerBlobByte(ctx))
+ // GasPerBlobByte is a versioned param from version 3 onwards.
+ var gasToConsume uint64
+ if ctx.BlockHeader().Version.App <= v2.Version {
+ gasToConsume = types.GasToConsume(msg.BlobSizes, k.GasPerBlobByte(ctx))
+ } else {
+ gasToConsume = types.GasToConsume(msg.BlobSizes, appconsts.GasPerBlobByte(ctx.BlockHeader().Version.App))
+ }
+
Additionally, we modified the PFB gas estimation logic to use `appconsts.DefaultTxSizeCostPerByte`.
-// DefaultEstimateGas runs EstimateGas with the system defaults. The network may change these values
-// through governance, thus this function should predominantly be used in testing.
+// DefaultEstimateGas runs EstimateGas with the system defaults.
func DefaultEstimateGas(blobSizes []uint32) uint64 {
- return EstimateGas(blobSizes, appconsts.DefaultGasPerBlobByte, auth.DefaultTxSizeCostPerByte)
+ return EstimateGas(blobSizes, appconsts.DefaultGasPerBlobByte, appconsts.DefaultTxSizeCostPerByte)
}
We also needed to update the gas consumption logic related to transaction size in the ante handler. The `AnteHandle` function within `NewConsumeGasForTxSizeDecorator` has been modified to retrieve the `TxSizeCostPerByte` value from app constants for versions v3 and later. The logic for earlier versions remains unchanged.
+// consumeGasForTxSize consumes gas based on the size of the transaction.
+// It uses different parameters depending on the app version.
+func consumeGasForTxSize(ctx sdk.Context, txBytes uint64, params auth.Params) {
+ // For app v2 and below we should get txSizeCostPerByte from auth module
+ if ctx.BlockHeader().Version.App <= v2.Version {
+ ctx.GasMeter().ConsumeGas(params.TxSizeCostPerByte*txBytes, "txSize")
+ } else {
+ // From v3 onwards, we should get txSizeCostPerByte from appconsts
+ txSizeCostPerByte := appconsts.TxSizeCostPerByte(ctx.BlockHeader().Version.App)
+ ctx.GasMeter().ConsumeGas(txSizeCostPerByte*txBytes, "txSize")
+ }
+}
Security Considerations
This change prioritizes network stability and predictability but requires heightened vigilance against potential misconfigurations.
Copyright
Copyright and related rights waived via CC0.
cip | 25 |
---|---|
title | Ginger Network Upgrade |
description | Reference CIPs included in the Ginger Network Upgrade |
author | Josh Stein (@jcstein), Nina Barbakadze (@ninabarbakadze) |
discussions-to | https://forum.celestia.org/t/cip-v3-peppermint-network-upgrade/1826 |
status | Final |
type | Meta |
created | 2024-10-01 |
requires | CIP-21, CIP-24, CIP-26, CIP-27, CIP-28 |
Abstract
This Meta CIP lists the CIPs included in the Ginger network upgrade.
Specification
Included CIPs
- CIP-21: Introduce blob type with verified signer
- CIP-24: Versioned Gas Scheduler Variables
- CIP-26: Versioned timeouts
- CIP-27: Block limits for number of PFBs and non-PFBs
- CIP-28: Transaction size limit
All of the above CIPs are state breaking, and thus require a breaking network upgrade. The activation of this network upgrade will be different from previous network upgrades, as described in CIP-10.
Rationale
This CIP provides a complete list of breaking changes for the Ginger upgrade, along with links to those CIPs.
Security Considerations
This CIP does not have additional security concerns beyond what is already discussed in each of the listed CIPs.
Copyright
Copyright and related rights waived via CC0.
cip | 26 |
---|---|
title | Versioned timeouts |
description | Timeouts are now controlled by the application version. |
author | Josh Stein (@jcstein), Rootul Patel (@rootulp), Sanaz Taheri (@staheri14) |
discussions-to | https://forum.celestia.org/t/cip-decrease-block-time-to-6-seconds/1836 |
status | Final |
type | Standards Track |
category | Core |
created | 2024-10-09 |
Abstract
This CIP proposes making timeouts application-version dependent. Starting from v3, timeouts will be controlled by the application version.
This change enables automated block time adjustments, eliminating the need for validators to modify configurations manually, as the adjustments (if any) will occur automatically with each celestia-app version upgrade.
Updating the timeouts will naturally impact block time, block rate, and network throughput. For v3 of celestia-app, the timeout values are set to reduce the block time from 12 seconds to 6 seconds. This means the block time is cut in half, which will consequently nearly double the block rate and throughput (considering other network factors).
Additionally, this CIP proposes increasing the `ttl-num-blocks` parameter in the mempool configuration from 5 to 12 to align with the reduced block time.
Motivation
The motivation for this CIP stems from a discussion in Core Devs Call 17, where it was proposed to reduce the block time to 6 seconds from 12 seconds.
Specification
- The block time in celestia-app SHOULD be reduced from 12 seconds to 6 seconds. Concretely, this implies decreasing `TimeoutCommit` to 4.2 seconds and `TimeoutPropose` to 3.5 seconds.
  - The `TimeoutCommit` and `TimeoutPropose` parameters were moved from local config parameters into versioned parameters controlled by the state machine. The timeouts will be managed by the application and communicated to celestia-core through the following ABCI interfaces: `InitChain`, `EndBlock`, and `Info`, now extended with `TimeoutsInfo`, which encapsulates `TimeoutPropose` and `TimeoutCommit`. The timeouts obtained through these interfaces are utilized by the celestia-core side as needed.
  - Celestia consensus nodes SHOULD update their software to accommodate this change prior to the agreed-upon block height.
  - Client applications interacting with the Celestia network SHOULD be updated to account for the faster block time, particularly in areas related to transaction confirmation and block finality.
- The default `ttl-num-blocks` parameter in the mempool configuration SHALL be increased from 5 to 12. This change is necessary to maintain consistency with the new block time and ensure that transactions remain in the mempool for a similar duration as before (see the example config snippet after this list).
  - Current default: `ttl-num-blocks = 5`
  - New default: `ttl-num-blocks = 12`
  - This change SHALL NOT be implemented alongside the block time reduction. The default increase from 5 to 12 will occur when users upgrade to celestia-app v3.0.0 and regenerate their config files. The block time reduction will happen one week later when the v2 to v3 activation height occurs. This approach ensures consistent behavior of the mempool across the network upgrade.
  - All validator nodes SHOULD update their configuration files to reflect this new `ttl-num-blocks` value before the agreed-upon implementation block height.
- Documentation and APIs related to block time and block production MUST be updated to reflect these changes.
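For illustration, the resulting mempool setting in a node's `config.toml` would look like the snippet below; the section name is an assumption based on the standard celestia-core configuration layout.

```toml
[mempool]
# 12 blocks at ~6-second block times keeps transactions in the mempool for
# roughly 72 seconds, close to the previous 5 x 12 = 60 seconds.
ttl-num-blocks = 12
```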
Rationale
The rationale for this change is to increase the throughput of the Celestia network by doubling the number of blocks produced per unit of time. This will reduce the time it takes for transactions to be finalized and improve the overall user experience on the network.
The increase in `ttl-num-blocks` from 5 to 12 is necessary to maintain consistent mempool behavior with the new block time. This change ensures that transactions remain in the mempool for approximately 72 seconds (12 blocks times 6 seconds), which closely matches the previous behavior of about 60 seconds (5 blocks times 12 seconds).
Backwards Compatibility
This upgrade requires all participants to update their software to v3 to accommodate the new block time and `ttl-num-blocks`. Nodes running older versions may not function correctly with the new network parameters. All validators and node operators should update to v3 before the agreed-upon implementation block height to ensure network consistency and optimal performance.
Test Cases
This will be tested on Arabica devnet and Mocha testnet before going live on Celestia Mainnet Beta.
Security Considerations
While the reduction in block time itself does not introduce significant new security risks to the network, there are important considerations:
- Participants should ensure that their systems are capable of handling the increased throughput from faster block times.
- The increase of `ttl-num-blocks` from 5 to 12 is crucial for maintaining the security and efficiency of the mempool:
  - It prevents premature removal of valid transactions, reducing the risk of unintended exclusion from blocks.
  - Without this adjustment, transactions would be pruned from the mempool after only 30 seconds, potentially leading to increased transaction failures and a poor user experience.
- Validators and node operators should update their configurations to reflect the new `ttl-num-blocks` value to maintain network consistency and security.
These changes require careful implementation and testing to ensure network stability during and after the transition.
Copyright
Copyright and related rights waived via CC0.
cip | 27 |
---|---|
title | Block limits for number of PFBs and non-PFBs |
description | Set limits for number of PFBs and non-PFBs per block |
author | Josh Stein (@jcstein), Nina Barbakadze (@ninabarbakadze), rach-id (@rach-id), Rootul Patel (@rootulp) |
discussions-to | https://forum.celestia.org/t/cip-limit-number-of-pfbs-and-non-pfbs-per-block-increase-transaction-size-limit/1843 |
status | Final |
type | Standards Track |
category | Core |
created | 2024-10-16 |
Abstract
This CIP proposes to set limits for the number of PayForBlobs (PFBs) messages and non-PFBs messages per block. The proposal is to set the limits to 600 PFBs messages and 200 non-PFB messages per block. Setting PFB and non-PFBs limits is not consensus-breaking.
Specification
- The number of PFBs per block is limited to 600 by setting `MaxPFBMessages` to 600.
- The number of non-PFB messages per block is limited to 200 by setting `MaxNonPFBMessages` to 200.
- It’s important to note that these limits are not strictly enforced. While they are defined by the `celestia-app` implementation, a validator could potentially modify the `PrepareProposal` logic, run a custom binary, and produce blocks that exceed the specified limits for PFB or non-PFB transactions (a sketch of the limiting logic follows this list).
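A simplified Go sketch of such a prepare-proposal soft limiter follows; the transaction interface is a stand-in, and celestia-app's actual implementation differs in detail.

```go
package app

// Limits from this CIP.
const (
	MaxPFBMessages    = 600
	MaxNonPFBMessages = 200
)

// tx is a stand-in for a decoded transaction.
type tx interface{ IsPFB() bool }

// limitTxs drops transactions beyond the per-type caps while preserving
// order; blocks exceeding the caps can still be valid if they reach consensus.
func limitTxs(txs []tx) []tx {
	var pfbs, nonPFBs int
	kept := make([]tx, 0, len(txs))
	for _, t := range txs {
		if t.IsPFB() {
			if pfbs == MaxPFBMessages {
				continue
			}
			pfbs++
		} else {
			if nonPFBs == MaxNonPFBMessages {
				continue
			}
			nonPFBs++
		}
		kept = append(kept, t)
	}
	return kept
}
```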
Rationale
The rationale for this proposal is to prevent long block times on the network by limiting the number of PFBs and non-PFB messages per block. This is not consensus-breaking but it has a meaningful effect on users and should be formalized in a CIP.
- The limits for PFBs (Pay for Blob transactions) and non-PFBs per block were established using the following process:
- Benchmarks were conducted in PR 3904 on celestia-app to measure ABCI method processing times for different transaction types.
- A target processing time of ~0.25 seconds was set to prevent long block times.
- Based on these benchmarks run on the recommended validator configuration (4 CPU, 16GB RAM), a soft limiter was implemented in the prepare proposal stage.
- This limiter sets specific caps on the number of PFB and non-PFB messages allowed in a default block to meet the processing time target.
- While default blocks adhere to these limits, blocks exceeding them can still be included if they reach consensus, ensuring flexibility.
- This approach balances network efficiency with block processing speed, directly informing the PFB and non-PFB limits now in place.
Backwards Compatibility
This proposal is meant to be included with v3 and the Ginger Network Upgrade. It is backwards compatible with v2.
Security Considerations
This proposal does not introduce any new security risks. However, it does impact network behavior and user experience, which should be carefully considered during implementation.
Copyright
Copyright and related rights waived via CC0.
cip | 28 |
---|---|
title | Transaction size limit |
description | Set limit for transaction size |
author | Josh Stein (@jcstein), Nina Barbakadze (@ninabarbakadze), Rootul Patel (@rootulp) |
discussions-to | https://forum.celestia.org/t/cip-limit-number-of-pfbs-and-non-pfbs-per-block-increase-transaction-size-limit/1843 |
status | Final |
type | Standards Track |
category | Core |
created | 2024-10-16 |
Abstract
This CIP proposes to set the limit for transaction size. The proposal is to set the transaction size limit to 2MiB. Setting the transaction size limit is consensus-breaking.
Specification
- Transaction size is limited to 2MiB by setting the versioned parameter `MaxTxSize` to 2097152, which is 2MiB in bytes. From version v3 onwards, in `CheckTx`, `PrepareProposal`, and `ProcessProposal`, each transaction’s size is checked against the `appconsts.MaxTxSize` threshold. This ensures that transactions over the limit are rejected or excluded at all stages, from initial submission to execution (see the sketch after this list).
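A minimal Go sketch of the size check applied at each stage; the names mirror the CIP's description and are assumptions about the implementation.

```go
package app

import "fmt"

// MaxTxSize mirrors the versioned constant from this CIP: 2 MiB in bytes.
const MaxTxSize = 2_097_152

// checkTxSize rejects over-sized transactions from app version 3 onwards.
// The same check is applied in CheckTx, PrepareProposal, and ProcessProposal.
func checkTxSize(txBytes []byte, appVersion uint64) error {
	if appVersion >= 3 && len(txBytes) > MaxTxSize {
		return fmt.Errorf("tx size %d exceeds limit of %d bytes", len(txBytes), MaxTxSize)
	}
	return nil
}
```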
Rationale
This proposal aims to set a transaction size limit of 2 MiB, even with blocks of 8 MiB or larger, primarily as a preventative measure. Gossiping transactions approaching 8 MiB without chunking could potentially be detrimental to network performance and stability.
The 2 MiB limit serves to:
- Maintain network stability
- Provide clear expectations for users and developers
- Safeguard against potential issues as network usage grows
This approach prioritizes network-wide consistency and stability while allowing for future scalability considerations.
Backwards Compatibility
This proposal is meant to be included with v3 and the Ginger Network Upgrade. It is a consensus-breaking change.
Security Considerations
Any changes to the block validity rules (via `PrepareProposal` and `ProcessProposal`) introduce implementation risks that could potentially lead to a chain halt.
Copyright
Copyright and related rights waived via CC0.
Celestia Core Devs Call notes directory
Celestia Core Devs Call 14 notes
Overview
- Lemongrass hard fork (CIP-17) final release expected by end of week, to be deployed on Arabica testnet in early August
- CIP-21 (blob with verified signers) in Last Call stage, integration with celestia-app in progress
- CIP-22 (remove `IndexWrapper`) moved to review status, discussion on potential impacts
- CIP-23 (coordinated prevote time) introduced to solve inconsistent block times, prototype expected in 1-2 weeks
- Celestia state machine v3 to include author blobs and delayed voting features, with potential upgrade system refactor
Lemongrass hard fork update
- Final release of celestia-app v2.0.0 expected by end of week
- To be incorporated into celestia-node v0.16.0 (tentative)
- Deployment timeline (tentative):
- Arabica testnet: Early August
- Mocha testnet: Mid August
- Mainnet: Late August (assuming no bugs)
- Services running on Celestia urged to deploy to Arabica and Mocha testnets for compatibility testing
Blob with verified signers progress
- First release candidate of go-square cut
- Integration with celestia-app started, expected to complete in 1-2 weeks
- QA testing to follow, likely to undergo audits
- CIP-21 to be moved to Last Call stage, pending preliminary QA before moving to Final
Removing index wrapper proposal
- CIP-22 moved to review status with updated specification
- Benefits:
- Makes PFB transaction deterministic
- Simplifies square construction
- Prerequisite for future features like hash blocks of squares
- Potential impacts on SDK parsing logic discussed
- Evan suggested exploring implications for square layout and fraud proofs
Coordinated prevote time introduction
- CIP-23 introduced to address inconsistent block times
- Proposes delaying pre-vote until end of timeout propose
- Aims to provide finality on a regular cadence
- Prototype expected in 1-2 weeks
- Informal Systems to be involved in auditing
Celestia state machine v3 features
- Confirmed features:
- Potential inclusion:
- Refactor of upgrade system for Cosmos SDK compatibility
- Meta CIP for v3 features to be written and discussed in next call
- v3 upgrade planned to follow shortly after Lemongrass hard fork
Celestia Core Devs Call 15 notes
Date: Wednesday, August 7th
Duration: 9 minutes 45 seconds
Action Items
Overview
- Celestia Core Dev’s Call 15 focused on working group updates and upcoming upgrades
- Lemongrass hard fork (CIP-17) and Celestia App v2 upgrade timelines slightly pushed back:
- Arabica (mid-August)
- Mocha (end of August)
- Mainnet (early September)
- CIP-24 introduced to simplify gas estimation for clients by making cost variables version-based instead of governance-modifiable
Working group updates
- Interface working group: Draft CIP in progress, to be completed by end of August
- ZK working group: Started work on exploring paths leading to ZK accounts, discussion planned for next week
- ZK working group sync meetings now biweekly, alternating with core dev calls
Lemongrass hard fork and Celestia App v2 upgrades
- Celestia Node to release update supporting Celestia App v2 soon
- Arabica devnet upgrade planned for August 14th
- Node operators must upgrade within 1 week timeframe
- Consensus node operators need to use a specific CLI flag (see documentation)
- Application owners on Arabica advised to monitor before and after upgrade
- Upgrade process: Arabica first, then Mocha, finally Mainnet
CIP 24: Constant gas costs for blob transactions
- CIP-24 aims to simplify gas estimation for clients submitting transactions and blobs
- Changes two governance-modifiable variables to version-based constants:
  - `blob.GasPerBlobByte`
  - `auth.TxSizeCostPerByte`
- Allows clients to rely on consistent values for gas calculations within major versions
- Draft PR with reference implementation nearly ready
- CIP to remain in draft status until implementation is complete
cip | XX (assigned by Editors) |
---|---|
title | The CIP title is a few words, not a complete sentence |
description | Description is one full (short) sentence |
author | a comma separated list of the author’s or authors’ name + GitHub username (in parenthesis), or name and email (in angle brackets). Example, FirstName LastName (@GitHubUsername), FirstName LastName [email protected], FirstName (@GitHubUsername) and GitHubUsername (@GitHubUsername) |
discussions-to | URL |
status | Draft |
type | Standards Track, Meta, or Informational |
category | Core, Data Availability, Networking, Interface, or CRC. Only required for Standards Track. Otherwise, remove this field. |
created | Date created on, in ISO 8601 (yyyy-mm-dd) format |
requires | CIP number(s). Only required when you reference a CIP in the Specification section. Otherwise, remove this field. |
Note: READ CIP-1 BEFORE USING THIS TEMPLATE! This is the suggested template for new CIPs. After you have filled in the requisite fields, please delete these comments. Note that a CIP number will be assigned by an editor. When opening a pull request to submit your CIP, please use an abbreviated title in the filename, `cip-draft_title_abbrev.md`. The title should be 44 characters or less. It should not repeat the CIP number in the title, irrespective of the category.
TODO: Remove the note before submitting
Abstract
The Abstract is a multi-sentence (short paragraph) technical summary. This should be a very terse and human-readable version of the specification section. Someone should be able to read only the abstract to get the gist of what this specification does.
TODO: Remove the previous comments before submitting
Motivation
This section is optional.
The motivation section should include a description of any nontrivial problems the CIP solves. It should not describe how the CIP solves those problems, unless it is not immediately obvious. It should not describe why the CIP should be made into a standard, unless it is not immediately obvious.
With a few exceptions, external links are not allowed. If you feel that a particular resource would demonstrate a compelling case for your CIP, then save it as a printer-friendly PDF, put it in the assets folder, and link to that copy.
TODO: Remove the previous comments before submitting
Specification
The Specification section should describe the syntax and semantics of any new feature. The specification should be detailed enough to allow competing, interoperable implementations for any of the current Celestia clients (celestia-node, celestia-core, celestia-app).
It is recommended to follow RFC 2119 and RFC 8170. Do not remove the key word definitions if RFC 2119 and RFC 8170 are followed.
TODO: Remove the previous comments before submitting
The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “NOT RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in RFC 2119 and RFC 8174.
Parameters
The parameters section should summarize any changes to global or module parameters, including any new parameters, introduced by the CIP. All mainnet parameters are tracked in CIP-13. Once a CIP is accepted and deployed to mainnet, CIP-13 MUST be updated with these parameter changes. If there are no parameter changes in the CIP, this section can be omitted.
TODO: Remove the previous comments and update the following table before submitting
Parameter | Proposed value | Description | Changeable via Governance |
---|---|---|---|
module1.Name1 | ProposedValue1 | Description1 | bool |
module2.Name2 | ProposedValue2 | Description2 | bool |
For changes to existing parameters, use the following table:
Parameter | Current value | Proposed value | Description | Changeable via Governance |
---|---|---|---|---|
module1.Name1 | CurrentValue1 | ProposedValue1 | Description1 | bool |
module2.Name1 | CurrentValue2 | ProposedValue2 | Description2 | bool |
For new parameters the Current value column can be omitted.
Rationale
The rationale fleshes out the specification by describing what motivated the design and why particular design decisions were made. It should describe alternate designs that were considered and related work, e.g. how the feature is supported in other languages.
The current placeholder is acceptable for a draft.
TODO: Remove the previous comments before submitting
Backwards Compatibility
This section is optional.
All CIPs that introduce backwards incompatibilities must include a section describing these incompatibilities and their severity. The CIP must explain how the author proposes to deal with these incompatibilities. CIP submissions without a sufficient backwards compatibility treatise may be rejected outright.
The current placeholder is acceptable for a draft: “No backward compatibility issues found.”
TODO: Remove the previous comments before submitting
Test Cases
This section is optional.
The Test Cases section should include expected input/output pairs, but may include a succinct set of executable tests. It should not include project build files. No new requirements may be introduced here (meaning an implementation following only the Specification section should pass all tests here).
If the test suite is too large to reasonably be included inline, then consider adding it as one or more files in `../assets/cip-####/`. External links will not be allowed.
TODO: Remove the previous comments before submitting
Reference Implementation
This section is optional.
The Reference Implementation section should include a minimal implementation that assists in understanding or implementing this specification. It should not include project build files. The reference implementation is not a replacement for the Specification section, and the proposal should still be understandable without it.
If the reference implementation is too large to reasonably be included inline, then consider adding it as one or more files in `../assets/cip-####/`. External links will not be allowed.
TODO: Remove the previous comments before submitting
Security Considerations
All CIPs must contain a section that discusses the security implications/considerations relevant to the proposed change. Include information that might be important for security discussions, surface risks, and can be used throughout the life cycle of the proposal. For example, include security-relevant design decisions, concerns, important discussions, implementation-specific guidance and pitfalls, an outline of threats and risks, and how they are being addressed. CIP submissions missing the “Security Considerations” section will be rejected. A CIP cannot proceed to status “Final” without a Security Considerations discussion deemed sufficient by the reviewers.
The current placeholder is acceptable for a draft.
TODO: Remove the previous comments before submitting
Copyright
Copyright and related rights waived via CC0.
Working Groups Directory
Here you will find all working groups and their meeting notes and recordings, if available.
Data Availability (DA) Working Group
Meetings
№ | Date | Agenda | Notes | Recording |
---|---|---|---|---|
1 | 6 December 2023 | Agenda is in Meeting Notes | Meeting Notes | N/A |
2 | 22 January 2024 | Agenda is in Meeting Notes | Meeting Notes | N/A |
Interface Working Group
Meetings
№ | Date | Agenda | Notes | Recording |
---|---|---|---|---|
1 | Date | Agenda Link | Notes Link | Recording Link |
Zero Knowledge (ZK) Working Group
Meetings
№ | Date | Agenda | Notes | Recording |
---|---|---|---|---|
1 | Jan 24, 2024 | Agenda Link | Notes Link | Recording Link |
2 | Feb 7, 2024 | Agenda Link | Notes Link | Recording Link |
3 | Feb 21, 2024 | Agenda Link | Notes Link | Recording Link |
4 | March 6, 2024 | Agenda Link | Notes Link | Recording Link |
5 | March 21, 2024 | Agenda Link | Notes Link | Recording Link |
6 | April 4, 2024 | Agenda Link | Notes Link | Recording Link |
7 | May 1, 2024 | Agenda Link | Notes Link | Recording Link |
8 | May 22, 2024 | Agenda Link | Notes Link | Recording Link |
9 | May 29, 2024 | Agenda Link | Notes Link | Recording Link |
10 | June 19, 2024 | Agenda Link | Notes Link | Recording Link |
11 | July 3, 2024 | Agenda Link | Notes Link | Recording Link |
12 | July 17, 2024 | Agenda Link | Notes Link | Recording Link |
Resources
- Overview
- Shumo/Nebra’s notes on GNARK
- Uma/Succinct’s post on Snark accounts
- John Adler’s research day talk
Questions
- Do we want to support forced withdrawals or forced state transitions?
- How do we serialize deposits into rollups?
- Do we need Acks when transferring tokens between Celestia and the rollup?
- Are Acks isomorphic to Option 4 of the spec proposal?
- Do we need an on-chain light client of Celestia on the SNARK account if we want to support SNARK account <-> SNARK account bridging?
- Can SNARK accounts upgrade, and if yes what kind of changes do we have to make?
- Are there any other requirements of the rollup client on Celestia that we have to take into account?
- Do we have to support transfers other than TIA?
Overview of ZK Accounts
Overview
In order to achieve “functional escape velocity” (i.e. in order to support non-trivial L2s), a blockchain must be sufficiently expressive. While it was previously assumed that a blockchain would have to provide general execution to meet the bar for sufficient expressivity, ZK proofs—also known as validity proofs—loosen this requirement. Using such systems, a blockchain only needs to provide verification of ZK proofs.
Background
Popularized with Ethereum’s account model, transactions on blockchains with general execution have traditionally required only the following for spending from an account (equivalently, transaction validity):
- Correct account nonce (or other replay protection mechanism), and
- Sufficient account balance to pay fees and send funds, and
- Valid digital signature based on the account’s public key.
This is as opposed to Bitcoin, which allows limited scripting in the form of stateless predicates to control spending UTXOs. The lack of flexibility in the traditional account model significantly restricts users’ ability to define conditions under which their accounts can be used; infamously, native multisigs are not possible on Ethereum. Some account-based blockchains, such as those based on the Cosmos SDK, can natively support additional functionality such as multisig accounts or vesting accounts; however, each such feature needs to be enshrined individually.
ZK proofs can provide the best of both worlds: expressive account control without individual enshrinement into the protocol.
Protocol Sketch
A sketch of a ZK account protocol is actually surprisingly simple. A ZK account is a special account type in the Celestia state machine that is associated with a verification key, which uniquely represents a program whose execution over inputs can be verified. The program is entirely determined by the user and does not require specific enshrinement in the Celestia state machine.
Spending from the ZK account (equivalently, advancing the state of the ZK account) is done through a transaction that provides a proof against the current ZK account state. If the proof is correctly verified, the funds of the account are unlocked and spendable as defined in the transaction. Inputs to the proof verifier depend on the specific application of the ZK account (detailed in the following section), which can be defined at account creation time or at transaction sending time. In the simplest form, inputs could be a public key and a nonce—sufficiency of TIA balance would have to be enforced by the Celestia state machine.
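To make the sketch concrete, here is an illustrative and entirely hypothetical Go rendering of a ZK account record and its spend path; the verifier callback abstracts over whichever proof system the user's program targets, and the public-input layout is a simplification.

```go
package zkaccount

import "errors"

// Verifier checks a proof against a verification key and public inputs.
type Verifier func(vk, proof, publicInputs []byte) bool

// ZKAccount is a special account type holding a user-chosen program's
// verification key instead of a public key.
type ZKAccount struct {
	VerificationKey []byte // uniquely identifies the controlling program
	StateRoot       []byte // current ZK account state
	Balance         uint64 // TIA balance, enforced by the state machine
}

// Spend advances the account state and unlocks funds if the proof verifies.
func (a *ZKAccount) Spend(proof, newStateRoot []byte, amount uint64, verify Verifier) error {
	if amount > a.Balance {
		return errors.New("insufficient balance") // enforced by the state machine
	}
	// Public inputs here are just the state transition; a real design would
	// also bind a nonce and recipient to prevent replay.
	publicInputs := append(append([]byte{}, a.StateRoot...), newStateRoot...)
	if !verify(a.VerificationKey, proof, publicInputs) {
		return errors.New("proof verification failed")
	}
	a.StateRoot = newStateRoot
	a.Balance -= amount
	return nil
}
```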
Applications of ZK Accounts
The protocol sketch in the previous section allows for conditional simple transfers, but not much more on its own. Significant additional functionality can be enabled by enshrining a small amount of additional logic, described non-exhaustively in this section.
Account Abstraction
While the protocol sketch is a form of account abstraction in that conditions for spending from a ZK account can be set by the user, this abstraction is only of limited use if the ZK account cannot interact with any other account. As an addition to the protocol sketch, we can allow messages from other accounts as inputs to the verifier. This would enable ZK accounts to delegate restricted or unrestricted control over spending to another account.
Restricted control could be useful in the case of two ZK rollups bridging atomically through the Celestia state machine in a single Celestia transaction. The first rollup could withdraw tokens from its ZK account, which then get sent via a message to the second rollup, and ingested into the second rollup’s ZK account. Rollup bridging is described in more detail in Lazybridging.
Unrestricted control could be useful to delegate control of an account to another account. This has applications with keystore rollups.
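A sketch of this message-input extension, continuing the hypothetical package from the Protocol Sketch above; the `Msg` type and `SpendWithMsgs` function are illustrative names, not a real API:

```go
// Msg is a message from another account, fed to the verifier as an
// additional public input.
type Msg struct {
	Sender []byte // the account exercising delegated control
	Data   []byte // application-defined payload
}

// SpendWithMsgs authorizes a spend conditioned on messages from other
// accounts: the ZK account's program decides which senders and
// payloads it accepts.
func SpendWithMsgs(v Verifier, acc *ZKAccount, tx SpendTx, msgs []Msg) error {
	inputs := append([]byte{}, acc.State...)
	for _, m := range msgs {
		inputs = append(inputs, m.Sender...)
		inputs = append(inputs, m.Data...)
	}
	inputs = append(inputs, tx.PublicInputs...)
	if !v.Verify(acc.VerificationKey, tx.Proof, inputs) {
		return errors.New("proof verification failed")
	}
	acc.State = tx.NewState
	return nil
}
```

Restricted versus unrestricted control is then a property of the account’s program: the circuit can accept messages only from specific senders for specific actions, or delegate full spending authority to a designated controller.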
Upgrades
The verification key associated with a ZK account does not need to be fixed at account creation time. Borrowing a playbook from account abstraction, each ZK account can instead store its verification key as mutable. A specific execution path of the verifier can trigger an upgrade, which can either be from posting a valid proof to the ZK account itself, or from another account entirely. The upgrade would change the verification key, potentially arbitrarily, essentially changing the program controlling spending of the ZK account. In the context of a ZK rollup, this would mean upgrading the execution logic of the rollup.
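Continuing the same hypothetical sketch, an upgrade could be a dedicated transaction type whose proof is verified against the current key before the key is replaced:

```go
// UpgradeTx replaces the account's verification key via a dedicated
// execution path of the verifier (hypothetical type, same package).
type UpgradeTx struct {
	Proof              []byte
	PublicInputs       []byte
	NewVerificationKey []byte
}

// Upgrade swaps the verification key if a proof against the *current*
// key authorizes the change. For a ZK rollup, this corresponds to
// upgrading the rollup's execution logic.
func Upgrade(v Verifier, acc *ZKAccount, tx UpgradeTx) error {
	inputs := append(append([]byte{}, acc.State...), tx.PublicInputs...)
	inputs = append(inputs, tx.NewVerificationKey...)
	if !v.Verify(acc.VerificationKey, tx.Proof, inputs) {
		return errors.New("upgrade not authorized")
	}
	acc.VerificationKey = tx.NewVerificationKey
	return nil
}
```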
Lazybridging
> [!TIP]
> “Lazybridging is a lifestyle.”
Lazybridging is an extension to base ZK account functionality that allows for trust-minimized two-way bridges between the Celestia state machine and rollups (for both TIA and, potentially, arbitrary tokens), and incoming bridging of assets from non-IBC external chains. The extension is another execution path of the verifier that can trigger the unlocking of specific account funds to be spent from the ZK account, rather than the entirety of the account funds as with the protocol sketch.
For rollups (both ZK and optimistic), lazybridging is implemented as two components working in unison: one in the Celestia state machine as described above, and one in the rollup’s state transition function. For example, the rollup can have an enshrined transaction type or an enshrined smart contract that can burn assets on the rollup, triggering the withdrawal of those assets from the associated ZK account on Celestia, as sketched below.
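A sketch of the state-machine half of this flow, continuing the hypothetical package above (and additionally importing `encoding/binary`); the rollup-side burn logic is out of scope here:

```go
// WithdrawalTx unlocks a specific amount rather than the whole account
// balance (hypothetical type, same package as above).
type WithdrawalTx struct {
	Proof        []byte
	BurnedAmount uint64 // amount burned on the rollup
	Recipient    []byte // Celestia address to credit
	NewState     []byte // rollup state commitment after the burn
}

// Withdraw releases exactly the burned amount if the proof attests
// that the rollup's state transition burned it for this recipient.
func Withdraw(v Verifier, acc *ZKAccount, tx WithdrawalTx) error {
	if acc.Balance < tx.BurnedAmount {
		return errors.New("insufficient bridged balance")
	}
	amount := make([]byte, 8)
	binary.BigEndian.PutUint64(amount, tx.BurnedAmount)
	inputs := append([]byte{}, acc.State...)
	inputs = append(inputs, tx.Recipient...)
	inputs = append(inputs, amount...)
	if !v.Verify(acc.VerificationKey, tx.Proof, inputs) {
		return errors.New("withdrawal proof invalid")
	}
	acc.State = tx.NewState
	acc.Balance -= tx.BurnedAmount
	// Crediting tx.Recipient is left to the bank module.
	return nil
}
```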
Another form of lazybridging is relaying assets from non-IBC chains, such as Ethereum. In this scheme, the ZK account would verify the correctness of the remote chain’s consensus and some subset of its state transition (e.g. just the logs for an EVM chain). In other words, it does not require the remote chain to opt in to using Celestia for data availability.
Keystore Rollups
Finally, a keystore rollup is a rollup that abstracts mutable user public keys behind unique, immutable identifiers. The identifiers, rather than the public keys, then control the assets. When combined with account abstraction, this allows keystore rollup accounts to control both other accounts in the Celestia state machine and other rollups that use ZK accounts.
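A minimal sketch of the keystore mapping, with illustrative types only; a real design would also prove authorization for each key rotation inside the rollup’s state transition function:

```go
// Identifier is the immutable handle that assets are bound to.
type Identifier [32]byte

// KeystoreState maps each identifier to its current public key.
type KeystoreState struct {
	Keys map[Identifier][]byte
}

// RotateKey updates the key behind an identifier without moving any
// assets. Authorization (e.g. a signature from the old key, or a ZK
// proof) is assumed to have been checked by the rollup already.
func (s *KeystoreState) RotateKey(id Identifier, newKey []byte) {
	s.Keys[id] = newKey
}

// ResolveKey is what other accounts and rollups query when verifying
// ownership, so control follows the stable identifier.
func (s *KeystoreState) ResolveKey(id Identifier) ([]byte, bool) {
	k, ok := s.Keys[id]
	return k, ok
}
```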
Follow the Development of ZK Accounts
Working Group and CIP
A working group for designing and implementing ZK accounts meets every two weeks: Zero Knowledge in the Celestia Baselayer. A work-in-progress CIP is also available, with call recordings and additional resources.
Content
- Initial proposal: Achieving base layer functionality escape velocity without on-chain smart contracts, using sovereign ZK rollups
- Proposed designs: Celestia Snark Accounts Design Spec
- Discussion on inputs: Public Inputs for SNARK Accounts
- Talk: ZK Accounts on Celestia
- Podcast: How ZK Accounts Expand dApp Limits