Stacks 2.0

Reference implementation of the Stacks blockchain in Rust.

Stacks 2.0 is a layer-1 blockchain that connects to Bitcoin for security and enables decentralized apps and predictable smart contracts. Stacks 2.0 implements Proof of Transfer (PoX) mining that anchors to Bitcoin security. Leader election happens at the Bitcoin blockchain and Stacks (STX) miners write new blocks on the separate Stacks blockchain. With PoX there is no need to modify Bitcoin to enable smart contracts and apps around it. See this page for more details and resources.

Repository

Blockstack Topic/Tech        Where to learn more
Stacks 2.0                   master branch
Stacks 1.0                   legacy branch
Use the package              our core docs
Develop a Blockstack App     our developer docs
Use a Blockstack App         our browser docs
Blockstack PBC, the company  our website

Release Schedule and Hotfixes

Normal releases in this repository that add features such as improved RPC endpoints, improved boot-up time, new event observer fields or event types, etc., are released on a monthly schedule. The currently staged changes for such releases are in the develop branch. It is generally safe to run a stacks-node from that branch, though it has received less rigorous testing than release tags. If bugs are found in the develop branch, please do report them as issues on this repository.

For fixes that impact the correct functioning or liveness of the network, hotfixes may be issued. These are patches to the main branch which are backported to the develop branch after merging. These hotfixes are categorized by priority according to the following rubric:

  • High Priority. Any fix for an issue that could deny service to the network as a whole, e.g., an issue where a particular kind of invalid transaction would cause nodes to stop processing requests or shut down unintentionally. Any fix for an issue that could cause honest miners to produce invalid blocks.
  • Medium Priority. Any fix for an issue that could cause miners to waste funds.
  • Low Priority. Any fix for an issue that could deny service to individual nodes.

Versioning

This repository uses a five-part version number.

For example, a node operator running version 2.0.10.0.0 would not need to wipe and refresh their chainstate to upgrade to 2.0.10.1.0 or 2.0.10.0.1. However, upgrading to 2.0.11.0.0 would require a new chainstate.
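The example above can be restated as a toy shell check. This is derived only from that example (wipe needed exactly when one of the first three components changes); the full versioning rules may have additional nuances.

```shell
# Toy check derived solely from the example above: a chainstate wipe
# appears to be needed when any of the first three of the five version
# components changes.
needs_wipe() {
  if [ "$(echo "$1" | cut -d. -f1-3)" != "$(echo "$2" | cut -d. -f1-3)" ]; then
    echo yes
  else
    echo no
  fi
}

needs_wipe 2.0.10.0.0 2.0.10.1.0   # prints "no"
needs_wipe 2.0.10.0.0 2.0.11.0.0   # prints "yes"
```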

Roadmap

  • SIP 001: Burn Election
  • SIP 002: Clarity, a language for predictable smart contracts
  • SIP 003: Peer Network
  • SIP 004: Cryptographic Commitment to Materialized Views
  • SIP 005: Blocks, Transactions, and Accounts
  • SIP 006: Clarity Execution Cost Assessment
  • SIP 007: Stacking Consensus
  • SIP 008: Clarity Parsing and Analysis Cost Assessment

Stacks improvement proposals (SIPs) are aimed at describing the implementation of the Stacks blockchain, as well as proposing improvements. They should contain concise technical specifications of features or standards and the rationale behind them. SIPs are intended to be the primary medium for proposing new features, for collecting community input on a system-wide issue, and for documenting design decisions.

See SIP 000 for more details.

The SIPs are now located in the stacksgov/sips repository as part of the Stacks Community Governance organization.

Testnet versions

  • Krypton is a Stacks 2 testnet with a fixed, two-minute block time, called regtest. Regtest is generally unstable for regular use, and is reset often. See the regtest documentation for more information on using regtest.

  • Xenon is the Stacks 2 public testnet, which runs PoX against the Bitcoin testnet. It is the full implementation of the Stacks 2 blockchain, and should be considered a stable testnet for developing Clarity smart contracts. See the testnet documentation for more information on the public testnet.

  • Mainnet is the fully functional Stacks 2 blockchain; see the Stacks overview for information on running a Stacks node, mining, stacking, and writing Clarity smart contracts.

Getting started

Download and build stacks-blockchain

The first step is to ensure that you have Rust and the support software installed.

For building on Windows, follow the rustup installer instructions at https://rustup.rs/

From there, you can clone this repository:

Then build the project:

Run the tests:
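Concretely, the clone/build/test steps are standard git and cargo invocations (the repository URL below is the project's GitHub location):

```shell
# Clone the repository
git clone https://github.com/stacks-network/stacks-blockchain.git
cd stacks-blockchain

# Build the project (add --release for an optimized build)
cargo build

# Run the (non-ignored) test suite
cargo test
```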

Encode and sign transactions

Here, we have generated a keypair that will be used for signing the upcoming transactions:

This keypair is already registered in the testnet-follower-conf.toml file, so it can be used as presented here.
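One way to produce such a keypair is with the CLI bundled in this repository. The subcommand name here is recalled from the blockstack-cli of this era; verify it against `blockstack-cli --help` before relying on it:

```shell
# Generate a new testnet keypair with the bundled CLI
# (subcommand name may vary by version; check `blockstack-cli --help`)
cargo run --bin blockstack-cli generate-sk --testnet
```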

We will interact with the following simple contract kv-store. In our examples, we will assume this contract is saved to ./kv-store.clar:
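A minimal version of such a key-value contract can be written to ./kv-store.clar with a heredoc. The contract body below is a plausible sketch, not necessarily the tutorial's exact contract:

```shell
# Write a minimal key-value store contract to ./kv-store.clar.
# The contract body is a sketch; the tutorial's actual kv-store.clar may differ.
cat > ./kv-store.clar <<'EOF'
(define-map store { key: (buff 32) } { value: (buff 32) })

(define-public (get-value (key (buff 32)))
  (match (map-get? store { key: key })
    entry (ok (get value entry))
    (err 0)))

(define-public (set-value (key (buff 32)) (value (buff 32)))
  (begin
    (map-set store { key: key } { value: value })
    (ok true)))
EOF
```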

We want to publish this contract on chain, then issue some transactions that interact with it by setting some keys and getting some values, so we can observe reads and writes.

Our first step is to generate and sign, using your private key, the transaction that will publish the contract kv-store. To do that, we will use the subcommand:

With the following arguments:

The 515 is the transaction fee, denominated in microSTX. Right now, the testnet requires a minimum of one microSTX per byte, and this transaction should be less than 515 bytes. The third argument, 0, is a nonce, which must increase monotonically with each new transaction.

This command will output the binary format of the transaction. In our case, we want to pipe this output and dump it to a file that will be used later in this tutorial.
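Assembled into one command, this looks roughly as follows. The fee, nonce, contract name, and output file are the tutorial's values; the secret key is a placeholder, and the exact argument shape should be verified with `blockstack-cli --help`:

```shell
# Generate and sign the contract-publish transaction and save the binary
# payload to tx1.bin for later. Sketched argument order:
# <secret-key> <fee> <nonce> <contract-name> <path-to-clarity-file>
cargo run --bin blockstack-cli publish <your-secret-key> 515 0 kv-store ./kv-store.clar --testnet > tx1.bin
```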

Run the testnet

You can observe the state machine in action locally by running:
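A typical invocation looks like the following; the configuration file path is a guess at this repository's layout and should be adjusted to wherever testnet-follower-conf.toml lives in your checkout:

```shell
# Start a local testnet follower using the bundled configuration file
# (path is an assumption; locate testnet-follower-conf.toml in your checkout)
cargo run --bin stacks-node -- start --config ./testnet/stacks-node/conf/testnet-follower-conf.toml
```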

testnet-follower-conf.toml is a configuration file that you can use to set genesis balances or configure event observers. You can grant an address an initial account balance by adding the following entries:
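For example, such an entry looks like this. The table name [[ustx_balance]] is recalled from node configurations of this era; confirm it against the sample file, and substitute your own address:

```toml
[[ustx_balance]]
address = "<your testnet STX address>"
amount = 10000000
```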

The address field is the Stacks testnet address, and the amount field is the number of microSTX to grant to it in the genesis block. The addresses of the private keys used in the tutorial below are already added.

Publish your contract

Assuming that the testnet is running, we can publish our kv-store contract.

In another terminal (or file explorer), you can move the tx1.bin generated earlier, to the mempool:
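For example, assuming the node watches a filesystem mempool directory (the path below is a placeholder; use the mempool path from your node's configuration):

```shell
# Move the signed transaction into the directory the node polls for
# new transactions (placeholder path -- check your node's configuration)
mv ./tx1.bin ./path/to/mempool/
```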

In the terminal window running the testnet, you can observe the state machine's reactions.

Reading from / Writing to the contract

Now that our contract has been published on chain, let's try to submit some read / write transactions. We will start by trying to read the value associated with the key foo.

To do that, we will use the subcommand:

With the following arguments:

contract-call generates and signs a contract-call transaction.
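Putting this together, a read of the key foo looks roughly like the following. The argument shape is a sketch (verify with `blockstack-cli --help`), the contract address is a placeholder, and the nonce is incremented to 1 per the earlier publish transaction:

```shell
# Generate and sign a contract-call transaction invoking get-value.
# Sketched arguments: <secret-key> <fee> <nonce> <contract-address>
# <contract-name> <function-name> plus encoded function arguments.
cargo run --bin blockstack-cli contract-call <your-secret-key> 515 1 <contract-address> kv-store get-value -e '"foo"' --testnet > tx2.bin
```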

We can submit the transaction by moving it to the mempool path:

Similarly, we can generate a transaction that sets the key foo to the value bar:
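Sketched with the same placeholder key and contract address as before, and the nonce incremented again (exact CLI syntax should be verified with `blockstack-cli --help`):

```shell
# Generate and sign a contract-call transaction invoking set-value
# with the arguments "foo" and "bar" (nonce incremented to 2)
cargo run --bin blockstack-cli contract-call <your-secret-key> 515 2 <contract-address> kv-store set-value -e '"foo"' -e '"bar"' --testnet > tx3.bin
```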

And submit it by moving it to the mempool path:

Finally, we can issue a third transaction, reading the key foo again, to ensure that the previous transaction successfully updated the state machine:

And submit this last transaction by moving it to the mempool path:

Congratulations, you can now write your own smart contracts with Clarity.

Platform support

Officially supported platforms: Linux 64-bit, macOS 64-bit, Windows 64-bit.

Platforms with second-tier status (builds are provided but not tested): macOS Apple Silicon (ARM64), Linux ARMv7, Linux ARM64.

For help cross-compiling on memory-constrained devices, please see the community supported documentation here: Cross Compiling.

Community

Beyond this GitHub project, Blockstack maintains a public forum and an open Discord channel. In addition, the project maintains a mailing list which sends out community announcements.

The greater Blockstack community regularly hosts in-person meetups. The project's YouTube channel includes videos from some of these meetups, as well as video tutorials to help new users get started and help developers wrap their heads around the system's design.

Further Reading

You can learn more by visiting the Blockstack Website and checking out the documentation:

  • Blockstack docs

You can also read the technical papers:

  • "PoX: Proof of Transfer Mining with Bitcoin", May 2020
  • "Stacks 2.0: Apps and Smart Contracts for Bitcoin", Dec 2020

If you have high-level questions about Blockstack, try searching our forum and start a new question if your question is not answered there.

Contributing

Tests and Coverage

PRs must include test coverage. However, if your PR includes large tests or tests which cannot run in parallel (parallel execution is the default behavior of the cargo test command), these tests should be decorated with #[ignore]. If you add #[ignore] tests, you should add your branch to the filters for the all_tests job in our circle.yml (or, if you are working on net code or MARF code, name your branch so that it matches the existing filters there).

A test should be marked #[ignore] if:

  1. It does not always pass cargo test in a vanilla environment (i.e., it needs to run with --test-threads 1).
  2. Or, it runs for over a minute via a normal cargo test execution (the cargo test command will warn if this is not the case).

Formatting

This repository uses the default rustfmt formatting style. PRs will be checked against rustfmt and will fail if not properly formatted.

You can check the formatting locally via:

You can automatically reformat your commit via:
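Both operations use the standard cargo fmt wrapper:

```shell
# Check formatting without modifying any files (fails if unformatted)
cargo fmt --all -- --check

# Reformat the whole workspace in place
cargo fmt --all
```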

Mining

Stacks tokens (STX) are mined by transferring BTC via PoX. To run as a miner, you should make sure to add the following config fields to your config file:
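A sketch of the relevant fields is shown below. The key names are recalled from node configurations of this era and should be confirmed against the sample configuration files in this repository:

```toml
[node]
# Run as a miner rather than a follower
miner = true
# Hex-encoded private key used for mining; the corresponding
# Bitcoin address must be funded with BTC for block-commits
seed = "<hex-encoded private key>"
```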

You can verify that your node is operating as a miner by checking its log output to verify that it was able to find its Bitcoin UTXOs:

Configuring Cost and Fee Estimation

Fee and cost estimators can be configured via the [fee_estimation] config section:
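For example, a section like the following. The field names match those discussed below; the estimator and metric value strings are recalled defaults and should be confirmed against the node's documentation:

```toml
[fee_estimation]
enabled = true
log_error = true
# Estimator/metric selections; the quoted values are recalled defaults
cost_estimator = "naive_pessimistic"
fee_estimator = "fuzzed_weighted_median_fee_rate"
cost_metric = "proportion_dot_product"
# Parameters for the fuzzed weighted median fee estimator
fee_rate_fuzzer_fraction = 0.1
fee_rate_window_size = 5
```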

Fee and cost estimators observe transactions on the network and use their observed costs to build estimates of viable fee rates and expected execution costs for new transactions. Estimators and metrics can be selected using the configuration fields above, though the default values are currently the only options. log_error controls whether the INFO logger reports information about cost estimator accuracy as new costs are observed. Setting enabled = false turns the cost estimators off. Cost estimators are not consensus-critical components; rather, miners can use them to rank transactions in the mempool, and clients can use them to determine appropriate fee rates before broadcasting transactions.

The fuzzed_weighted_median_fee_rate uses a median estimate from a window of the fees paid in the last fee_rate_window_size blocks. Estimates are then randomly "fuzzed" using uniform random fuzz of size up to fee_rate_fuzzer_fraction of the base estimate.
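The windowed-median part of this can be illustrated with a toy shell pipeline (this is not the node's actual code; the fuzzing step then perturbs the median by up to fee_rate_fuzzer_fraction):

```shell
# Toy illustration: take the median of fee rates observed over a
# 5-block window; the node then applies a uniform random fuzz to it.
fees="30 10 20 50 40"
median=$(printf '%s\n' $fees | sort -n | awk '{ a[NR] = $1 } END { print a[int((NR + 1) / 2)] }')
echo "median fee rate: $median"
```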

Non-Consensus Breaking Release Process

For non-consensus breaking releases, this project uses the following release process:

  1. The release must be timed so that it does not interfere with a prepare phase. The timing of the next Stacking cycle can be found here. A release to mainnet should happen at least 24 hours before the start of a new cycle, to avoid interfering with the prepare phase. So, start by being aware of when the release can happen.

  2. Before creating the release, the release manager must determine the version number for this release. The factors that determine the version number are discussed in Versioning. We assume, in this section, that the change is not consensus-breaking. So, the release manager must first determine whether there are any "non-consensus-breaking changes that require a fresh chainstate". This means, in other words, that the database schema has changed, but an automatic migration was not implemented. Then, the release manager should determine whether this is a feature release, as opposed to a hot fix or a patch. Given the answers to these questions, the version number can be computed.

  3. The release manager enumerates the PRs or issues that would block the release, and applies a 2.0.x.y.z-blocker label to each such issue/PR. The release manager should ping the owners of these issues/PRs for updates on whether they have any blockers or are waiting on feedback.

  4. The release manager should open a develop -> master PR. This can be done before all the blocker PRs have merged, as it is helpful for the manager and others to see the staged changes.

  5. The release manager must update the CHANGELOG.md file with summaries of what was Added, Changed, and Fixed. The pull requests merged into develop can be found here. Note, however, that GitHub does not allow sorting by merge time, so when sorting by some proxy criterion, take care to identify which PRs were merged after the last develop -> master release PR. This CHANGELOG.md content should also be used as the description of the develop -> master PR, so that it serves as release notes when the branch is tagged.

  6. Once the blocker PRs have merged, the release manager will create a new tag by manually triggering the stacks-blockchain Github Actions workflow against the develop branch, inputting the release candidate tag, 2.0.x.y.z-rc0, in the Action's input textbox.

  7. Once the release candidate has been built, and docker images, etc. are available, the release manager will notify various ecosystem participants to test the release candidate on various staging infrastructure:

    1. Stacks Foundation staging environments.
    2. Hiro PBC testnet network.
    3. Hiro PBC mainnet mock miner.

    The release candidate should be announced in the #stacks-core-devs channel in the Stacks Discord. For coordinating rollouts on specific infrastructure, the release manager should contact the above participants directly either through e-mail or Discord DM. The release manager should also confirm that the built release on the Github releases page is marked as Pre-Release.

  8. The release manager will test that the release candidate successfully syncs with the current chain from genesis both in testnet and mainnet. This requires starting the release candidate with an empty chainstate and confirming that it synchronizes with the current chain tip.

  9. If bugs or issues emerge from the rollout on staging infrastructure, the release will be delayed until those regressions are resolved. As regressions are resolved, additional release candidates should be tagged. The release manager is responsible for updating the develop -> master PR with information about the discovered issues, even if other community members and developers may be addressing the discovered issues.

  10. Once the final release candidate has rolled out successfully without issue on the above staging infrastructure, the release manager tags two additional stacks-blockchain team members to review the develop -> master PR. If there is a merge conflict in this PR, the protocol is: open a branch off of develop, merge master into that branch, resolve the merge conflicts there, and then open a PR from this side branch to develop.

  11. Once reviewed and approved, the release manager merges the PR and tags the release via the stacks-blockchain GitHub action by clicking "Run workflow" and providing the release version as the tag (e.g., 2.0.11.1.0). This creates the release and release images. Once the release has been created, the release manager should update the GitHub release text with the CHANGELOG.md "top-matter" for the release.

Copyright and License

The code and documentation are copyright blockstack.org, 2020.

This code is released under the GPL v3 license, and the docs are released under the Creative Commons license.

Issues

Collection of the latest Issues

njordhov


Panic in Clarity contracts results in a useless and uninformative (err none) reported to the caller when the contract aborts, whether triggered by unwrap-panic or a failure such as an uint subtraction underflow.

Instead, a panic should provide a useful error message so the caller can understand why the contract call was aborted.

The lack of helpful panic messages leads to confused developers, ad-hoc workarounds, and avoidance of unwrap-panic, an anti-pattern. The clarity-lang book explicitly advises against using unwrap-panic due to the lack of a useful response:

You should ideally not use the -panic variants unless you absolutely have to, because they confer no meaningful information when they fail. A transaction will revert with a vague "runtime error" and users as well as developers are left to figure out exactly what went wrong.

See also:

obycode


Copied from hirosystems/clarinet#418. I opened a PR to fix this in clarinet, but a similar change should likely be made here for 2.1, so that this type of problem causes an error when the contract is deployed instead of only a runtime error when it is called.

Original issue:

Consider the following contract where I use try! as a variable name:

(define-public (call-this (fail bool))
  (let ((try! (try-me fail)))
    (ok true)))

(define-public (try-me (fail bool))
  (begin
    (asserts! fail (err u999))
    (ok true)))

This passes clarinet check, but when executed in the clarinet console it fails with a runtime error:

Runtime Error: Runtime error while interpreting ST1PQHQKV0RJXZFY1DGX8MNSNYVE3VGZJSRTPGZGM.contract-7: Unchecked(NameAlreadyUsed("try!"))

If you use an asserts! instead, it will not pass clarinet check, which flags a syntax error. If I write (define-data-var try! uint u0), then clarinet check returns a NameAlreadyUsed runtime error.

Conclusion

I think running clarinet check should be able to detect this kind of error before trying to run the contract code on clarinet console/test.

314159265359879


Describe the bug I start a node with the default command: stacks-node mainnet. After syncing the bitcoin headers (to 100%), the process halts and prints: PS C:\stacks2.05.0.2.2> and then nothing else happens.

Steps To Reproduce Download the package, unpack it, and run with 'stacks-node mainnet'.

Expected behavior I expect the node to start syncing the Stacks blockchain to the chain tip after syncing the bitcoin headers.

Environment (please complete the following information):

  • OS: Windows 10
  • Rust version: latest
  • Stacks node 2.05.0.2.0, 2.05.0.2.1 and 2.05.0.2.2 (not a problem with 2.05.0.1.0 or older)

Additional context

kantai

question

For the most part, any issues experienced in the Stacks testnet these days are caused by Bitcoin testnet block storms. During these events, Stacks block production grinds to a halt. I'm starting this issue as a place to discuss feasible ideas for addressing this. I suspect that there's not much that can be done, because testnet block storms typically have a huge number of empty blocks (which won't include a Stacks block commit in any event), but it may be possible to get occasional block commits included during the storm.

CharlieC3


Describe the bug When updating the local_peer_seed config property on a node with a pre-existing chainstate, the local_peer table in the peer database does not get updated. Instead, the node continues using the private key of the original local_peer_seed used when the node first started with an empty working directory.

Steps To Reproduce

  1. Start a stacks-node with an empty working directory
  2. Observe the entry in the local_peer table in the peer database
  3. Update the local_peer_seed config property and restart the stacks-node
  4. Observe the entry in the local_peer table in the peer database again. It's supposed to change with the updated local_peer_seed, but it doesn't

Expected behavior The entry in the local_peer table in the peer database should be updated on bootup if the local_peer_seed in the config TOML has changed since the last boot.

Stacks-node version: 2.05.0.2.0

obycode


The parser reports an error when there is no whitespace between two expressions, e.g.:

This results in:

(This was noticed by @fariedt in hirosystems/clarity-repl#158)

The new parser (#3150) currently allows this pattern. Any objections to allowing this after the 2.1 epoch, along with the other parser consensus-breaking changes?

friedger


Describe the bug A contract is deployed successfully, even though the top-level code in the contract contains errors.

Steps To Reproduce

  1. Deploy a contract with a function that can fail
  2. Deploy a contract that calls the first contract's function as top-level code so that the first contract fails
  3. Deploy another contract that calls the first contract's function as top-level code so that the first contract fails but with the contract call wrapped in try!
  4. Deploy another contract that calls the first contract's function as top-level code so that the first contract fails but with the contract call wrapped in unwrap-panic

Successful contract deployments with contract calls that fail and succeed and with wrapping in try! and unwrap-panic:

Expected behavior A contract deploy transaction should succeed if and only if the code is correct, all top-level statements execute without errors, and none of them return err responses.

Additional context This was discovered during prize distribution of the 90stx raffle, e.g. https://explorer.stacks.co/txid/0x5421e0963f29b4409718b2f864262aee559f00ccc8edc302bf274d9cea98c320?chain=mainnet

friedger

documentation

Describe the bug The documentation says the following for nft-burn? (https://docs.stacks.co/write-smart-contracts/language-functions#nft-burn)

If an asset identified by asset-identifier doesn't exist, this function will return an error with the following error code: (err u1)

However, calling nft-burn? with tx-sender who does not own the nft returns (err u1) even if the asset exists.

Steps To Reproduce Clone the repo and run clarinet test

burn nft by owner https://github.com/Light-Labs/ryder-nft/blob/1708febcc0339955752951b809e35d9dfdf6573f/backend-stacks/tests/ryder-nft_test.ts#L97 ==> succeeds

burn nft by user that does not own the existing id https://github.com/Light-Labs/ryder-nft/blob/1708febcc0339955752951b809e35d9dfdf6573f/backend-stacks/tests/ryder-nft_test.ts#L114 ==> burn fails (test successful)

Expected behavior Any tx-sender can call nft-burn? if the contract supports it

njordhov


The slice function in development for Stacks 2.1 does not adequately infer the length of the resulting sub-sequence, but instead assigns it to have the same max length as the input sequence. This will make cost estimates unnecessarily high.

The current implementation "return a sub-sequence of that starts at left-position (inclusive), and ends at right-position (non-inclusive)". This makes it infeasible (given the runtime limitations) to infer the length of the result sequence unless both bounds are integer constants, even in cases when the length of the result is known, such as:

Quality cost estimates are a key affordance of Clarity and should be a factor when extending the language. There is an opportunity now to specify the slice function in a way that improves the ability to estimate execution cost: make the function take the length of the slice as an argument:

When the length is an integer constant, it is obviously trivial to determine the length of the output sequence, while the type inference can fall back on using the max length of the input sequence otherwise.

Additional context

CharlieC3


Describe the bug On the previous release, I noticed the Docker image digest for the 2.05.0.2.0 release changed shortly after it was first created. The first image digest was for an image built for the amd64 arch, and the new one which overwrote the docker tag was for the arm64 arch.

After investigating, it seems the docker-platform.yml workflow is overwriting the amd64 Docker image pushed up in the ci workflow with the arm64 image it builds. The intention is to use the same image tag for both image archs, but this doesn't appear to be happening.

I was able to fix this because luckily I had previously pulled the amd64 image digest down from the docker hub before it was overwritten, so I was able to push it back up. But if this issue isn't corrected before the next time any repo tag is created via running the ci workflow, this is going to happen again.

Steps To Reproduce Run the ci workflow with a tag passed in as a parameter. Observe the resulting docker arch available for the tag created in docker hub.

Expected behavior A docker tag should be created which should reference two different architecture builds. Running docker pull should automatically pull down the appropriate image for the host machine. See here for an example docker tag which allows for multiple archs.

tyGavinZJU


Is your feature request related to a problem? Please describe. Mining Pool aims to solve the following problems of current mining pool and individual miner https://github.com/stacks-network/stacks-blockchain/issues/1969:

  1. Users do not need to trust the Mining Pool. If the user subsidizes the Mining Pool to participate in the mining of block N, the income will be automatically distributed in the stacks network without the permission of the Mining Pool.
  2. Users can go online or offline at will
  3. More flexible mining pool logic can be implemented through the clarity smart contract

This issue will discuss in detail the process of subsequent bitcoin transactions and the specific implementation of the contract

Describe the solution you'd like From the previous conversation with @jcnelson in monthly mining conference. The implementation of the mining pool scheme is through:

  1. Use bitcoin multi-signature script to fund block commit
  2. stacks 2.1 upgrade will support taproot / segwit PoX outputs, and support stacks coinbase payments to smart contracts.

The above two points can confirm the sponsor of the block commit in the contract, so the mining pool rewards can be issued without permission.

The inputs to the block commit would be as follows:

  1. The mining pool operator's spent TXO
  2. The scriptsig that actually funds the block commit

The outputs would be as follows:

  1. The OP_RETURN encoding the block commit
  2. The first PoX output
  3. The second PoX output
  4. The pool operator's dust UTXO
  5. The pool's new taproot UTXO
CharlieC3


The stacks-node is susceptible to losing all outbound peers when broadcasting events to a stacks-blockchain-api, causing it to fall behind the chain tip, potentially for hours, until the stacks-node is restarted. During this time, the stacks-node prints plenty of PermanentlyDrained logs, such as: http:id=234,request=false,peer=<redacted>: failed to recv: PermanentlyDrained

Expected behavior If a stacks-node loses all outbound peers, it should be able to acquire new ones or re-validate old ones relatively quickly.

Debug Information If this happens again, I'll get more debug information like:

  • running 'select * from frontier;' in the peers.db before/after restarting the stacks-node.
  • checking open network connections before/after restarting the stacks-node

stacks-node version: 2.05.0.2.0

wileyj

documentation

Seems like this would be the best repo to address, since the atlas source code is here: https://github.com/stacks-network/stacks-blockchain/tree/master/src/net/atlas

Ideally, what I'd like to see is a markdown file or similar added here: https://github.com/stacks-network/stacks-blockchain/tree/master/docs explaining what Atlas is and how it's designed in v2 (which can then be added to docs.stacks.co, etc.).

Reference: https://github.com/stacks-network/stacks-blockchain/pull/2013

The endpoints added by the PR should also be documented in https://github.com/stacks-network/stacks-blockchain/blob/master/docs/rpc-endpoints.md

jcnelson

bug

When working on #3043, I noticed that there's a bug in the microblock fee distribution calculation. When we generate the MinerReward struct for the parent that captures the streamed transaction fees that the parent produced, we accidentally use the fees that the parent confirmed. This means that if A mines a microblock stream, B confirms it, and C mines a block off of B's block, the following will happen:

  • A won't get any transaction fees from the stream
  • B will get 60% of the stream's fees when B's block reward matures
  • B will get 40% of the stream's fees when C's block reward matures

What should happen is:

  • A gets 40% of the stream's fees when B's block reward matures
  • B gets 60% of the stream's fees when B's block reward matures

Fortunately, it's easy to fix -- we can fold it into Stacks 2.1. Also, the impact isn't too severe yet, since transaction fees are only a small portion of the overall block reward.

jcnelson

consensus-critical

(NOTE: this is not show-stopping)

Consider this code block here, in LeaderBlockCommitOp::check_pox():

https://github.com/stacks-network/stacks-blockchain/blob/26bfd5fcdc1f25106288f18ced05290b569f6abb/src/chainstate/burn/operations/leader_block_commit.rs#L597-L610

The variable parent_block_height is set from self.parent_block_ptr. If the block-commit is attempting to build the first block off of the genesis state, this value (and self.parent_block_vtxindex) will be 0. But, the method SortitionHandleTx::descends_from() returns an error if the block height argument is 0, which in turn causes the block-commit to be invalid. However, this only happens if PoX is active.

The fix is simple: check for this special case, and if so, skip the call to tx.descends_from() and instead verify that the block-commit only contains burn outputs. We'd gate this to 2.1 and later.

obycode


It was discovered via a discussion on Discord that SIP 005 specifies that a contract name can be 128 characters

The contract principal variant of the principal post-condition field is encoded as follows:

  • A 1-byte type prefix of 0x03
  • The 1-byte version of the standard principal that issued the contract
  • The 20-byte Hash160 of the standard principal that issued the contract
  • A 1-byte length of the contract name, up to 128
  • The contract name

However, the parser limits the contract name to 40 characters:

For Clarity 2, do we want to keep the 40 character limit or go back to the 128 character limit specified in the SIP?

jcnelson

consensus-critical

Now that there are multiple versions of Clarity, with different keywords and different interpretations of existing keywords, we need to determine the rules for how (at-block), contract-call? and smart contract deployment should work. I suggest the following:

  • The Clarity version used in a transaction is immutable over the transaction's lifetime. Moreover, the Clarity version used is determined early -- i.e. at or before the instantiation of the transaction's GlobalContext.

  • A ContractCall transaction runs with the Clarity version of the contract it calls, not the current epoch. Similarly, a SmartContract transaction runs with the Clarity version it indicates in its pragma (#2719), not the current epoch (the current epoch is only used to infer the Clarity version if the SmartContract payload indicates to do so). If the contract is deployed with Clarity1 rules, then all top-level contract calls into it use Clarity1. If a contract is deployed with Clarity1 rules, and it calls other contracts as part of its code-body evaluation, then these contract-calls follow Clarity1 rules as well. Same goes for the use of Clarity2.

  • The closure in (at-block) runs with the Clarity version of the caller, not the target block. So, if a Clarity2 contract calls (at-block), its code body runs with Clarity2 rules even if the block was mined before Clarity2 was allowed. Similarly, if a Clarity1 contract calls (at-block), its code body runs with Clarity1 rules even if the block was mined when Clarity2 was allowed. If the closure fails for any reason -- even via CheckErrors resulting from the Clarity version's rules rendering the closure's code invalid -- the error is always (re-)interpreted as a runtime error (#3107) so the transaction can still be mined.

  • The closure in (at-block) can only call other contracts that have been successfully instantiated. This precludes calling a contract that uses Clarity2 keywords but was deployed in epoch 2.05 or earlier via (at-block) -- the call should fail even if the contract could have worked in Clarity2, because the contract itself should not have passed the analysis check when its transaction was mined.
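The rules above can be summarized in a small version-resolution sketch. `ClarityVersion`, `TransactionPayload`, and the epoch-default fallback are hypothetical stand-ins for the node's real types; the point is that the version is decided once, before the `GlobalContext` exists, and never changes afterwards.

```rust
// Illustrative sketch of the proposed version-resolution rules.
#[derive(Clone, Copy, Debug, PartialEq)]
enum ClarityVersion { Clarity1, Clarity2 }

enum TransactionPayload<'a> {
    // calls into an already-deployed contract
    ContractCall { callee_version: ClarityVersion },
    // deploys a new contract, optionally pinning a version via pragma (#2719)
    SmartContract { pragma: Option<ClarityVersion>, _code: &'a str },
}

/// Decide the version once, at or before GlobalContext instantiation;
/// it stays fixed for the transaction's whole lifetime.
fn resolve_version(payload: &TransactionPayload, epoch_default: ClarityVersion) -> ClarityVersion {
    match payload {
        // a contract-call runs with the callee's deployed version...
        TransactionPayload::ContractCall { callee_version } => *callee_version,
        // ...and a deploy uses its pragma, falling back to the epoch default
        TransactionPayload::SmartContract { pragma, .. } => pragma.unwrap_or(epoch_default),
    }
}

fn main() {
    let call = TransactionPayload::ContractCall { callee_version: ClarityVersion::Clarity1 };
    assert_eq!(resolve_version(&call, ClarityVersion::Clarity2), ClarityVersion::Clarity1);
    let deploy = TransactionPayload::SmartContract { pragma: None, _code: "(ok true)" };
    assert_eq!(resolve_version(&deploy, ClarityVersion::Clarity2), ClarityVersion::Clarity2);
    println!("version resolution ok");
}
```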

gregorycoppola

Describe the bug: It seems possible for transactions to not have a "transaction outcome" logged for them.

Steps To Reproduce: Look for 30f9970f4fdb7bb837cdd4a69a0da95688979e0f5bddce85db3ccc4fa353b728 in the mock miner logs from May 5 (assuming you have access to such logs).

Expected Result: There should be a line like "Tx processing failed with error" or "Tx successfully processed.", but there is no entry of either type for this transaction.

Additional context: Some transaction outcome cases aren't covered in the code base.

https://github.com/stacks-network/stacks-blockchain/blob/26bfd5fcdc1f25106288f18ced05290b569f6abb/src/chainstate/stacks/miner.rs#L1997
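One way to close the gap the report describes is to route every outcome through an exhaustive match, so the compiler rejects any variant that lacks a log line. The `TxOutcome` type and the log strings here are illustrative stand-ins, not the miner's real types.

```rust
// Minimal sketch: an exhaustive match guarantees exactly one log line
// per processed transaction, preventing silent drops.
#[derive(Debug)]
enum TxOutcome {
    Success,
    ProcessingError(String),
    Skipped(String),
}

fn log_outcome(txid: &str, outcome: &TxOutcome) -> String {
    // Adding a new TxOutcome variant without a log line becomes a
    // compile error rather than a missing log entry.
    match outcome {
        TxOutcome::Success => format!("Tx successfully processed. txid={}", txid),
        TxOutcome::ProcessingError(e) => {
            format!("Tx processing failed with error. txid={} error={}", txid, e)
        }
        TxOutcome::Skipped(why) => format!("Tx skipped. txid={} reason={}", txid, why),
    }
}

fn main() {
    let line = log_outcome("30f9970f", &TxOutcome::Skipped("block budget exceeded".into()));
    assert!(line.starts_with("Tx skipped."));
    println!("{}", line);
}
```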

jcnelson

The code path that determines the set of confirmed microblocks to request takes up to a whole second (!!), even if there are no requests to make, and even if the same range of microblock streams are re-considered over and over. This is unacceptably slow, and some performance engineering will be needed to cut this down to something more like 10ms.
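One simple piece of that performance engineering would be to stop re-considering ranges that have already been examined. The sketch below is a hypothetical cache, not the downloader's real state machine; the types and field names are assumptions.

```rust
// Sketch: remember which microblock stream ranges were already considered,
// so the expensive confirmed-microblock lookup runs at most once per range
// instead of on every poll.
use std::collections::HashSet;

#[derive(Default)]
struct MicroblockRequester {
    already_considered: HashSet<(u64, u64)>, // (start_seq, end_seq) ranges
}

impl MicroblockRequester {
    /// Returns true only the first time a given stream range is seen.
    fn should_consider(&mut self, range: (u64, u64)) -> bool {
        self.already_considered.insert(range)
    }
}

fn main() {
    let mut req = MicroblockRequester::default();
    assert!(req.should_consider((0, 5)));
    assert!(!req.should_consider((0, 5))); // cached: skipped on re-poll
    assert!(req.should_consider((6, 9)));
    println!("cache ok");
}
```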

obycode

consensus-critical

Describe the bug: Parsing a commented tuple results in the tuple still showing up in the AST. This was originally reported in hirosystems/clarinet#339 when a difference was detected between the v1 and v2 parsers. I was able to identify this as the core problem.

Steps To Reproduce

Running clarinet check will show an error indicating that the v2 parser has a different result than the canonical parser, and upon debugging this, you can see that the commented tuple shows up in the AST.

Expected behavior

The commented code should not be in the AST.

Additional context: This may be another reason to bring the new parser from the REPL into the blockchain for 2.1 (see also #3123).
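The expected behavior can be illustrated with a toy comment-stripping pre-pass: everything after a Clarity `;;` line comment is discarded before expressions are built, so a commented-out tuple never reaches the AST. Note this naive sketch would also strip `;;` inside string literals, which a real parser must not do; it is not the actual v1 or v2 parser.

```rust
// Toy pre-pass: drop Clarity ";;" line comments before parsing.
fn strip_line_comments(src: &str) -> String {
    src.lines()
        .map(|line| match line.find(";;") {
            Some(idx) => &line[..idx], // drop from ";;" to end of line
            None => line,
        })
        .collect::<Vec<_>>()
        .join("\n")
}

fn main() {
    let src = "(define-constant x u1)\n;; { commented: tuple }\n(print x)";
    let cleaned = strip_line_comments(src);
    assert!(!cleaned.contains("commented")); // tuple never reaches the AST
    assert!(cleaned.contains("(print x)"));
    println!("{}", cleaned);
}
```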

Versions


2.05.0.2.2 - Jun 12, 2022

2.05.0.2.2-rc1 - Jun 12, 2022

2.05.0.2.1 - Jun 03, 2022

Fixed

  • Fixed a security bug in the SPV client whereby the chain work was not being considered at all when determining the canonical Bitcoin fork. The SPV client now only accepts a new Bitcoin fork if it has a higher chain work than any other previously-seen chain (#3152).
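The essence of the fix is that fork acceptance must compare cumulative chain work, not chain length. The sketch below is illustrative only; `SpvClient` here is a stand-in, and representing work as a `u128` is an assumption of this sketch.

```rust
// Illustrative chain-work comparison: accept a Bitcoin fork only if its
// cumulative work strictly exceeds the best chain seen so far.
struct SpvClient {
    best_chain_work: u128,
}

impl SpvClient {
    /// Returns true (and updates state) only for a strictly heavier fork.
    fn accept_fork(&mut self, fork_work: u128) -> bool {
        if fork_work > self.best_chain_work {
            self.best_chain_work = fork_work;
            true
        } else {
            false // longer-but-lighter forks are rejected
        }
    }
}

fn main() {
    let mut spv = SpvClient { best_chain_work: 1_000 };
    assert!(!spv.accept_fork(900));   // less work: rejected
    assert!(spv.accept_fork(1_500));  // more work: accepted
    assert!(!spv.accept_fork(1_500)); // equal work: rejected
    println!("best work = {}", spv.best_chain_work);
}
```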

2.05.0.2.1-rc1 - Jun 01, 2022


2.05.0.2.0 - May 03, 2022

IMPORTANT! READ THIS FIRST

Please read the following WARNINGs in their entirety before upgrading.

WARNING: Please be aware that using this node on chainstate prior to this release will cause the node to spend up to 30 minutes migrating the data to a new schema. Depending on the storage medium, this may take even longer.

WARNING: This migration process cannot be interrupted. If it is, the chainstate will be irrecoverably corrupted and require a sync from genesis.

WARNING: You will need at least 2x the disk space for the migration to work. This is because a copy of the chainstate will be made in the same directory in order to apply the new schema.

It is highly recommended that you back up your chainstate before running this version of the software on it.

Changed

  • The MARF implementation will now defer calculating the root hash of a new trie until the moment the trie is committed to disk. This avoids gratuitous hash calculations, and yields a performance improvement of anywhere between 10x and 200x (#3041).
  • The MARF implementation will now store tries to an external file for instances where the tries are expected to exceed the SQLite page size (namely, the Clarity database). This improves read performance by a factor of 10x to 14x (#3059).
  • The MARF implementation may now cache trie nodes in RAM if directed to do so by an environment variable (#3042).
  • Sortition processing performance has been improved by about an order of magnitude, by avoiding a slew of expensive database reads (#3045).
  • Updated chains coordinator so that before a Stacks block or a burn block is processed, an event is sent through the event dispatcher. This fixes #3015.
  • Expose a node's public key and public key hash160 (i.e. what appears in /v2/neighbors) via the /v2/info API endpoint (#3046)
  • Reduced the default subsequent block attempt timeout from 180 seconds to 30 seconds, based on benchmarking the new MARF performance data during a period of network congestion (#3098)
  • The blockstack-core binary has been renamed to stacks-inspect. This binary provides CLI tools for chain and mempool inspection.

2.05.0.2.0-rc3 - Apr 29, 2022

2.05.0.2.0-rc2 - Apr 25, 2022


2.05.0.2.0-rc1 - Apr 20, 2022


2.05.0.1.0 - Feb 07, 2022

This software update is a point-release with a large set of changes. This release's chainstate directory is compatible with chainstate directories from 2.05.0.0.0.

Added

  • A new fee estimator intended to produce fewer over-estimates, by having less sensitivity to outliers. Its characteristic features are: 1) use a window to forget past estimates instead of exponential averaging, 2) use weighted percentiles, so that bigger transactions influence the estimates more, 3) assess empty space in blocks as having paid the "minimum fee", so that empty space is accounted for, 4) use random "fuzz" so that in busy times the fees can change dynamically. (#2972)
  • Implements anti-entropy protocol for querying transactions from other nodes' mempools. Before, nodes wouldn't sync mempool contents with one another. (#2884)
  • Structured logging in the mining code paths. This will shine light on what happens to transactions (successfully added, skipped or errored) that the miner considers while building blocks. (#2975)
  • Added the mined microblock event, which includes information on transaction events that occurred in the course of mining (will provide insight on whether a transaction was successfully added to the block, skipped, or had a processing error). (#2975)
  • For v2 endpoints, can now specify the tip parameter to latest. If tip=latest, the node will try to run the query off of the latest tip. (#2778)
  • Adds the /v2/headers endpoint, which returns a sequence of SIP-003-encoded block headers and consensus hashes (see the ExtendedStacksHeader struct that this PR adds to represent this data). (#2862)
  • Adds the /v2/data_var endpoint, which returns a contract's data variable value and a MARF proof of its existence. (#2862)
  • Fixed a bug in the unconfirmed state processing logic that could lead to a denial of service (node crash) for nodes that mine microblocks (#2970)
  • Added prometheus metric that tracks block fullness by logging the percentage of each cost dimension that is consumed in a given block (#3025).
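The weighted-percentile idea from the new fee estimator (#2972) can be sketched briefly: each observed fee rate is weighted by transaction size, so larger transactions pull the estimate more. The function shape below is illustrative, not the estimator's real API.

```rust
// Sketch of a size-weighted percentile over (fee_rate, weight) samples.
fn weighted_percentile(samples: &mut Vec<(f64, f64)>, p: f64) -> f64 {
    // p in [0, 1]; weight is typically the transaction's size
    samples.sort_by(|a, b| a.0.partial_cmp(&b.0).unwrap());
    let total: f64 = samples.iter().map(|s| s.1).sum();
    let mut acc = 0.0;
    for &(rate, w) in samples.iter() {
        acc += w;
        if acc >= p * total {
            return rate; // first rate at which cumulative weight crosses p
        }
    }
    samples.last().map(|s| s.0).unwrap_or(0.0)
}

fn main() {
    // one big transaction at rate 10 outweighs two small ones at rate 1
    let mut obs = vec![(1.0, 10.0), (1.0, 10.0), (10.0, 500.0)];
    let median = weighted_percentile(&mut obs, 0.5);
    assert_eq!(median, 10.0);
    println!("weighted median = {}", median);
}
```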

Changed

  • Updated the mined block event. It now includes information on transaction events that occurred in the course of mining (will provide insight on whether a transaction was successfully added to the block, skipped, or had a processing error). (#2975)
  • Updated some of the logic in the block assembly for the miner and the follower to consolidate similar logic. Added functions setup_block and finish_block. (#2946)
  • Makes the p2p state machine more reactive to newly-arrived BlocksAvailable and MicroblocksAvailable messages for block and microblock streams that this node does not have. If such messages arrive during an inventory sync, the p2p state machine will immediately transition from the inventory sync work state to the block downloader work state, and immediately proceed to fetch the available block or microblock stream. (#2862)
  • Nodes will push recently-obtained blocks and microblock streams to outbound neighbors if their cached inventories indicate that they do not yet have them (#2986).
  • Nodes will no longer perform full inventory scans on their peers, except during boot-up, in a bid to minimize block-download stalls (#2986).
  • Nodes will process sortitions in parallel to downloading the Stacks blocks for a reward cycle, instead of doing these tasks sequentially (#2986).
  • The node's runloop will coalesce and expire stale requests to mine blocks on top of parent blocks that are no longer the chain tip (#2969).
  • Several database indexes have been updated to avoid table scans, which significantly improves most RPC endpoint speed and cuts node spin-up time in half (#2989, #3005).
  • Fixed a rare denial-of-service bug whereby a node that processes a very deep burnchain reorg can get stuck, and be rendered unable to process further sortitions. This has never happened in production, but it can be replicated in tests (#2989).
  • Updated what indices are created, and ensured that indices are created even after the database is initialized (#3029).

Fixed

  • Updates the lookup key for contracts in the pessimistic cost estimator. Before, contracts published by different principals with the same name would have had the same key in the cost estimator. (#2984)
  • Fixed a few prometheus metrics to be more accurate compared to /v2 endpoints when polling data (#2987)

2.05.0.1.0-rc4 - Feb 02, 2022

2.05.0.1.0-rc1 - Jan 27, 2022

2.05.0.1.0-rc2 - Jan 18, 2022

2.05.0.1.0-A - Jan 13, 2022

2.05.0.1.0-rc0 - Jan 13, 2022

2.05.0.0.0 - Nov 24, 2021

This software update is a consensus changing release and the implementation of the proposed cost changes in SIP-012. This release's chainstate directory is compatible with chainstate directories from 2.0.11.4.0. However, this release is only compatible with chainstate directories before the 2.05 consensus changes activate (Bitcoin height 713,000). If you run a 2.00 stacks-node beyond this point, and wish to run a 2.05 node afterwards, you must start from a new chainstate directory.

Added

  • At height 713,000 a new costs-2 contract will be launched by the Stacks boot address.

Changed

  • Stacks blocks whose parents are mined >= 713,000 will use default costs from the new costs-2 contract.
  • Stacks blocks whose parents are mined >= 713,000 will use the real serialized length of Clarity values as the cost inputs to several methods that previously used the maximum possible size for the associated types.
  • Stacks blocks whose parents are mined >= 713,000 will use the new block limit defined in SIP-012.

Fixed

  • Miners are now more aggressive in calculating their block limits when confirming microblocks (#2916)

2.05.0.0.0-rc1 - Nov 22, 2021

next-costs-2 - Nov 16, 2021

next-costs-1 - Nov 15, 2021

serde-toml-1 - Nov 12, 2021

print-limits-1 - Nov 11, 2021

2.0.11.4.0 - Nov 09, 2021

This software update is a point-release to change the transaction selection logic in the default miner to prioritize by an estimated fee rate instead of raw fee. This release's chainstate directory is compatible with chainstate directories from 2.0.11.3.0.

Added

  • FeeEstimator and CostEstimator interfaces. These can be controlled via node configuration options. See the README.md for more information on the configuration.
  • New fee rate estimation endpoint /v2/fees/transaction (#2872). See docs/rpc/openapi.yaml for more information.

Changed

  • Prioritize transaction inclusion in blocks by estimated fee rates (#2859).
  • MARF sqlite connections will now use mmap'ed connections with up to 256MB space (#2869).

print-limits-3 - Nov 11, 2021

2.0.11.4.0-rc1 - Oct 20, 2021

2.0.11.3.0 - Sep 09, 2021

This software update is a point-release to change the transaction selection logic in the default miner to prioritize by fee instead of nonce sequence. This release's chainstate directory is compatible with chainstate directories from 2.0.11.2.0.

Added

  • The node will enforce a soft deadline for mining a block, so that a node operator can control how frequently their node attempts to mine a block regardless of how congested the mempool is. The timeout parameters are controlled in the [miner] section of the node's config file (#2823).
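A minimal [miner] config sketch for the soft deadline might look like the following. The option names shown are assumptions from memory of the node's configuration, not verified against this release; consult the node's README for the authoritative names.

```toml
# Hypothetical [miner] snippet illustrating the soft block-mining deadlines.
[miner]
# how long the first block attempt on a new tip may run
first_attempt_time_ms = 5_000
# deadline for subsequent attempts on the same tip
subsequent_attempt_time_ms = 30_000
```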

Changed

  • Prioritize transaction inclusion in the mempool by transaction fee (#2823).

2.0.11.3.0-rc2 - Sep 03, 2021

2.0.11.3.0-rc1 - Aug 30, 2021

2.0.11.2.0 - Aug 02, 2021

NOTE: This change resets the testnet. Users running a testnet node will need to reset their chain states.

Added

  • clarity-cli will now also print a serialized version of the resulting output from eval and execute commands. This serialization is in hexadecimal string format and supports integration with other tools. (#2684)
  • The creation of a Bitcoin wallet with BTC version > 0.19 is now supported on a private testnet. (#2647)
  • lcov-compatible coverage reporting has been added to clarity-cli for Clarity contract testing. (#2592)
  • The README.md file has new documentation about the release process. (#2726)

Changed

  • This change resets the testnet. (#2742)
  • Caching has been added to speed up /v2/info responses. (#2746)

Fixed

  • PoX syncing will only look back to the reward cycle prior to divergence, instead of looking back over all history. This will speed up running a follower node. (#2746)
  • The UTXO staleness check is re-ordered so that it occurs before the RBF-limit check. This way, if stale UTXOs reached the "RBF limit" a miner will recover by resetting the UTXO cache. (#2694)
  • Microblock events were being sent to the event observer when microblock data was received by a peer, but were not emitted if the node mined the microblocks itself. This made something like the private-testnet setup incapable of emitting microblock events. Microblock events are now sent even when self-mined. (#2653)
  • A bug is fixed in the mocknet/helium miner that would lead to a panic if a burn block occurred without a sortition in it. (#2711)
  • Two bugs that caused problems syncing with the bitcoin chain during a bitcoin reorg have been fixed (#2771, #2780).
  • Documentation is fixed in cases where string and buffer types are allowed but not covered in the documentation. (#2676)

2.0.11.2.0-rc2 - Jul 21, 2021

2.0.11.2.0-rc1 - Jul 06, 2021


2.0.11.1.0 - Jun 02, 2021

This software update is our monthly release. It introduces fixes and features for both developers and miners. This release's chainstate directory is compatible with chainstate directories from 2.0.11.0.0.

Added

  • /new_microblock endpoint to notify event observers when a valid microblock has been received (#2571).
  • Added new features to clarity-cli (#2597)
  • Exposing new mining-related metrics in prometheus (#2664):
      • Miner's computed relative miner score as a percentage
      • Miner's computed commitment, the min of their previous commitment and their median commitment
      • Miner's current median commitment
  • Add key-for-seed command to the stacks-node binary - outputs the associated secret key hex string and WIF formatted secret key for a given "seed" value (#2658).

Changed

  • Improved mempool walk order (#2514).
  • Renamed database tx_tracking.db to tx_tracking.sqlite (#2666).

Fixed

  • Alter the miner to prioritize spending the most recent UTXO when building a transaction, instead of the largest UTXO. In the event of a tie, it uses the smallest UTXO first (#2661).
  • Fix trait rpc lookups for implicitly implemented traits (#2602).
  • Fix v2/pox endpoint, broken on Mocknet (#2634).
  • Align cost limits on mocknet, testnet and mainnet (#2660).
  • Log peer addresses in the HTTP server (#2667)
  • Mine microblocks if there are no recent unprocessed Stacks blocks.
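The UTXO ordering described in #2661 (most recent first, smallest amount on a tie) can be sketched as a sort comparator. The `Utxo` struct here is an illustrative stand-in, not the miner's real type.

```rust
// Sketch of the spend ordering: newest UTXO first; on a height tie,
// smallest amount first.
#[derive(Debug, PartialEq)]
struct Utxo {
    confirmation_height: u64,
    amount_sats: u64,
}

fn sort_for_spending(utxos: &mut Vec<Utxo>) {
    utxos.sort_by(|a, b| {
        b.confirmation_height
            .cmp(&a.confirmation_height)             // newest first
            .then(a.amount_sats.cmp(&b.amount_sats)) // tie: smallest first
    });
}

fn main() {
    let mut utxos = vec![
        Utxo { confirmation_height: 100, amount_sats: 50_000 },
        Utxo { confirmation_height: 120, amount_sats: 80_000 },
        Utxo { confirmation_height: 120, amount_sats: 10_000 },
    ];
    sort_for_spending(&mut utxos);
    assert_eq!(utxos[0].amount_sats, 10_000); // newest, smaller wins the tie
    assert_eq!(utxos[2].confirmation_height, 100);
    println!("{:?}", utxos);
}
```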

Information - Updated Jun 22, 2022

Stars: 2.7K
Forks: 527
Issues: 197
