Preamble

CAP: 0005
Title: Throttling and transaction pricing improvements 
Author: Nicolas Barry
Status: Final
Created: 2018-Oct-05
Discussion: https://github.com/stellar/stellar-protocol/issues/75
https://github.com/stellar/stellar-protocol/issues/133
Protocol version: 11

Simple Summary

A combined set of changes that rationalizes how we throttle the network and makes it easier for clients to craft transactions that will make it into a ledger even when network fees are changing rapidly.

Abstract

The goal of the updated design is to maximize the number of operations included in a candidate transaction set, while preserving fairness.

Note that there are two mechanisms at play here: one used by validators to construct candidate values, which happens outside of consensus, and another used to pick between two candidate sets during consensus.

Fairness in how transactions are included in a candidate set is equivalent to designing a sorting scheme for the list of transactions before trimming (typically all accumulated transactions on the validator) such that:

  • accounts have the same odds of having their transactions included, regardless of the distribution of transactions per account and regardless of how many transactions are picked from the sorted set.
  • transactions with a higher per operation fee have a strict advantage over lower fee transactions.

Fairness is also achieved by attempting to reduce the fee charged to accounts.

This is achieved by associating a value baseFee with each transaction set (greater than or equal to ledgerHeader.baseFee), and using that base fee to compute the actual fee charged. With this change, tx.fee becomes an upper bound instead of being the actual fee deducted from the source account.

Motivation

Right now the network configuration that governs the maximum throughput of the network is ledgerHeader.maxTxSetSize, which controls the maximum number of transactions that can be included in a transaction set; yet a good approximation of work (reflected in how fees are computed) is the total number of operations included in a transaction set. As any transaction can contain up to 100 operations, this makes the setting overly conservative (it has to assume the worst case, where every transaction contains 100 operations).

Another motivation for changing how fees are computed is that right now the fee charged for a given transaction is equal to the amount specified when the transaction was crafted.

When surge pricing occurs, or simply if ledgerHeader.baseFee is raised, this creates problems:

  • for the typical user, this leads to transactions rejected by validators if the specified fee is too small.
  • for pre-signed transactions (smart contracts), the fee has to be specified with a value that exceeds the future fee required to be included in a transaction set.
  • in both cases the model can lead to excessive fees charged to users if the specified fee is bigger than needed.

Specification

In the following specification, we'll use StellarValue v with its associated TransactionSet txSet and LedgerHeader ledgerHeader.

XDR representation

No changes in XDR

Construction of the pair (txSetBaseFee, txSet)

Repurposing ledgerHeader.maxTxSetSize

For ledgers with a protocolVersion supporting CAP0005, this setting controls the maximum number of operations allowed in a transaction set (currently it controls the number of transactions in the transaction set).

Updated "surge pricing" algorithm

  1. pick a random number S
  2. for each account, construct a queue of transactions for that account, sorted by sequence number in ascending order
    • implementation note: there might be an opportunity to consolidate with the transaction holding tank.
  3. construct a heap of those queues, sorted by:
    • the fee rate (descending) of the first transaction in each queue (so that the queue whose first transaction has the highest fee rate is on top of the heap)
    • first transaction hash XORed with S (ascending)
  4. nb_operations_to_add = maxTxSetSize
  5. while (nb_operations_to_add > 0 && !heap.empty())
    • queue = heap.pop()
    • if nb_operations(first transaction "tx" from queue) <= nb_operations_to_add
      • push tx into txSet
      • nb_operations_to_add = nb_operations_to_add - nb_operations(tx)
      • pop tx from queue
      • if queue non empty, push it into the heap
  6. if nb_operations(txSet) > max(0, ledgerHeader.maxTxSetSize-100)
    • surge pricing in effect

NB: if ledgerHeader.ledgerVersion < CAP_0005.protocolVersion,

  • step 4 becomes nb_operations_to_add = maxTxSetSize*100
  • step 5 defaults nb_operations(tx) to 100
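
The sketch below is a minimal Python model of the construction above, not stellar-core code: the transaction representation (dicts with account, seq, fee, ops and hash fields) and the helper functions are assumptions made for the example.

```python
# Minimal model of the candidate transaction set construction described above.
# Transactions are plain dicts {"account", "seq", "fee", "ops", "hash"}; this
# representation and the helpers are assumptions, not stellar-core APIs.
import heapq
import random
from collections import defaultdict

def fee_rate(tx):
    # fee bid per operation, rounded up (same as baseFeeForTx below)
    return -(-tx["fee"] // len(tx["ops"]))

def build_tx_set(pending_txs, max_ops):
    S = random.getrandbits(256)                      # step 1: random salt

    # step 2: one queue per account, ordered by sequence number
    queues = defaultdict(list)
    for tx in pending_txs:
        queues[tx["account"]].append(tx)
    for q in queues.values():
        q.sort(key=lambda tx: tx["seq"])

    # step 3: heap keyed by (fee rate descending, first-tx hash XOR S ascending);
    # keys never tie because transaction hashes are distinct
    def heap_key(q):
        return (-fee_rate(q[0]), q[0]["hash"] ^ S)

    heap = [(heap_key(q), q) for q in queues.values()]
    heapq.heapify(heap)

    # steps 4-5: fill the set until the operation budget is exhausted
    tx_set, ops_left = [], max_ops
    while ops_left > 0 and heap:
        _, q = heapq.heappop(heap)
        tx = q[0]
        if len(tx["ops"]) <= ops_left:
            tx_set.append(tx)
            ops_left -= len(tx["ops"])
            q.pop(0)
            if q:                                    # re-insert the account's queue
                heapq.heappush(heap, (heap_key(q), q))

    # step 6: surge pricing is in effect when the set is close to full
    surge = sum(len(tx["ops"]) for tx in tx_set) > max(0, max_ops - 100)
    return tx_set, surge
```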

Validity and usage of the effective base fee

Computation of the associated effective base fee

The effective base fee for protocol versions supporting CAP_0005 is defined as follows:

  1. if nb_operations(txSet) <= max(0, ledgerHeader.maxTxSetSize-100)
    • surge pricing not in effect
      • effectiveBaseFee = ledgerHeader.baseFee
  2. else, we use the smallest possible base fee for that transaction set
    • effectiveBaseFee = min(baseFeeForTx(tx) for tx in txSet)

Where baseFeeForTx is defined as tx.fee / tx.ops.length() rounded up.

NB: Surge pricing is triggered as soon as we can no longer rule out that a transaction was excluded from the transaction set because of the limit ledgerHeader.maxTxSetSize; since a transaction can contain up to 100 operations, the threshold comes to ledgerHeader.maxTxSetSize-100.
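
A minimal sketch of this computation, under the same assumed transaction representation as the construction sketch above:

```python
# Sketch of the effective base fee rule above; tx["fee"] and tx["ops"] follow
# the assumed transaction representation used in the previous example.
def base_fee_for_tx(tx):
    # tx.fee / tx.ops.length(), rounded up
    return -(-tx["fee"] // len(tx["ops"]))

def effective_base_fee(tx_set, ledger_base_fee, max_tx_set_size):
    nb_ops = sum(len(tx["ops"]) for tx in tx_set)
    if nb_ops <= max(0, max_tx_set_size - 100):
        return ledger_base_fee                     # surge pricing not in effect
    return min(base_fee_for_tx(tx) for tx in tx_set)
```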

Computation of the fee charged per transaction

The computation of fee = computedFee(tx, effectiveBaseFee) is done as:

  1. if ledgerHeader.ledgerVersion < CAP_0005.protocolVersion
    • fee = tx.fee
  2. else
    • fee = min(tx.fee, effectiveBaseFee*tx.ops.length())
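
A short sketch with a worked example, under the same assumptions as above:

```python
# Sketch of the fee actually deducted under CAP-0005: tx.fee acts as a cap.
def computed_fee(tx, effective_base_fee):
    return min(tx["fee"], effective_base_fee * len(tx["ops"]))

# e.g. a 2-operation transaction bidding tx.fee = 1000 stroops with an
# effective base fee of 150 stroops is charged min(1000, 2 * 150) = 300.
```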

Validity of a txSet

From a fee point of view, the only requirement is that each source account has a balance sufficient to cover the combined computed fees of that account's transactions included in txSet.

If ledgerHeader.ledgerVersion < CAP_0005.protocolVersion, we enforce that txSet.length() <= maxTxSetSize.

Otherwise, we enforce that nb_operations(txSet) <= maxTxSetSize.
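
A sketch of the fee-related validity check, reusing the computed fee from above; the balance lookup callback is a hypothetical stand-in, not a stellar-core API:

```python
from collections import defaultdict

def tx_set_fees_valid(tx_set, effective_base_fee, balance_of):
    # each source account must cover the combined computed fees of its
    # transactions in txSet; balance_of is a hypothetical lookup callback
    owed = defaultdict(int)
    for tx in tx_set:
        owed[tx["account"]] += min(tx["fee"],
                                   effective_base_fee * len(tx["ops"]))
    return all(balance_of(acct) >= fee for acct, fee in owed.items())
```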

Ballot protocol changes

The ballot protocol requires a function to combine candidate values produced by the nomination protocol.

For older versions of the protocol, we do not change this function, i.e. we keep the transaction set with the highest number of transactions, and break ties by comparing the hashes of the transaction sets (XORed to pseudo-randomize).

For newer versions of the protocol, we follow a similar scheme, picking the candidate that ranks highest when sorted by:

  • highest number of operations
  • highest total fee (sum of fees charged)
  • highest txset hash (XORed with the hash of all values, as before)
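
A sketch of the new comparison, treating a candidate as a (txSet, effectiveBaseFee) pair; the integer hash helper and the values_hash parameter (modeling the combined hash of all nominated values) are assumptions made for the example:

```python
import hashlib

def tx_set_hash(tx_set):
    # illustrative stand-in for the transaction set hash, as an integer
    h = hashlib.sha256()
    for tx in tx_set:
        h.update(tx["hash"].to_bytes(32, "big"))
    return int.from_bytes(h.digest(), "big")

def candidate_key(candidate, values_hash):
    tx_set, effective_base_fee = candidate
    nb_ops = sum(len(tx["ops"]) for tx in tx_set)
    fees_charged = sum(min(tx["fee"], effective_base_fee * len(tx["ops"]))
                       for tx in tx_set)
    return (nb_ops, fees_charged, tx_set_hash(tx_set) ^ values_hash)

def combine_candidates(candidates, values_hash):
    # pick the candidate ranking highest by (operations, total fee, hash XOR)
    return max(candidates, key=lambda c: candidate_key(c, values_hash))
```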

Rationale

Implementing this proposal should greatly increase the chances of transactions being processed by the network even when fees vary between ledgers.

More improvements can be made, in particular for very long-lived pre-signed transactions (where there is no rational way of putting an upper bound on fees), or for more complex client-side fee control strategies.

This proposal assumes that validators have no inherent incentive to profit from rising fees, which probably remains true for the foreseeable future.

Backwards Compatibility

Transaction subsystem

The change is backward compatible with existing transactions and will be transparent to clients.

The only breaking change is that accounts may end up with a higher balance than expected if the specified fee is higher than the fee actually charged.

Smart contracts typically don't rely on exact balances, and use mergeAccountOp as part of their cleanup, which doesn't depend on having a predetermined balance.

Surge pricing implementation

The exact value generated by surge pricing doesn't need to match across validators, even when the network is still operating on an older version of the protocol.

As such, there is no need to try to preserve the existing behavior.

Network

The behavior is entirely driven by the value of the protocol version; the change is therefore backward compatible.

After the network upgrades to the new protocol version, the value of maxTxSetSize will be smaller than desired (as the raw value will represent operations instead of transactions); a subsequent vote by validators will need to be coordinated in order to adjust it to the new desired value.

Practically speaking the disruption should be minimal:

  • Most transactions are single-operation transactions and won't be affected.
  • Disruption will be mostly for transactions with more operations than maxTxSetSize. As of this writing, this network parameter is set to 50 on the public network, which means that transactions with more than 50 operations will not be accepted by the network between the two upgrades.
  • Validators can coordinate a "double upgrade", where both the protocol version and the maxTxSetSize field gets updated.

Test Cases

TBD

Implementation

TBD