
Let's imagine I have an AlwaysSucceeds Plutus validator script, which is:

mkValidator :: Params -> Datum -> Redeemer -> ScriptContext -> Bool
mkValidator _ _ _ _ = True

I locked many different utxos at the script address on the testnet. Let's imagine 15 in total. Now I want to spend all of those utxos at once from the script using cardano-cli. I build the transaction as usual, providing those utxos as --tx-in:

--tx-in "abcdef#1" \
--tx-in-script-file alwaysSucceeds.plutus \
--tx-in-datum-value 420 \
--tx-in-redeemer-file redeemer.json \
--tx-in "abcdef#2" \
--tx-in-script-file alwaysSucceeds.plutus \
--tx-in-datum-value 420 \
--tx-in-redeemer-file redeemer.json \
...
...
--tx-in "abcdef#15" \
--tx-in-script-file alwaysSucceeds.plutus \
--tx-in-datum-value 420 \
--tx-in-redeemer-file redeemer.json \
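For reference, the full build/sign/submit sequence looks roughly like the sketch below, with the four --tx-in* flags repeated for every script utxo being spent. The testnet magic, the collateral input, and the file names (payment.addr, payment.skey, tx.body, tx.signed) are placeholders, exact flags can differ slightly between cardano-cli versions, and transaction build needs CARDANO_NODE_SOCKET_PATH pointing at a running node:

# build: the node estimates the execution units for each script input
cardano-cli transaction build \
  --alonzo-era \
  --testnet-magic 1097911063 \
  --tx-in "abcdef#1" \
  --tx-in-script-file alwaysSucceeds.plutus \
  --tx-in-datum-value 420 \
  --tx-in-redeemer-file redeemer.json \
  --tx-in-collateral "<key-controlled-utxo>#0" \
  --change-address "$(cat payment.addr)" \
  --out-file tx.body

# sign with the key that owns the collateral / pays the fee
cardano-cli transaction sign \
  --tx-body-file tx.body \
  --signing-key-file payment.skey \
  --testnet-magic 1097911063 \
  --out-file tx.signed

cardano-cli transaction submit \
  --tx-file tx.signed \
  --testnet-magic 1097911063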

I am pretty sure that the script is included only once in the transaction (is it?). When submitting the transaction to the testnet, I get the following error:

Command failed: transaction submit  Error: Error while submitting tx: ShelleyTxValidationError ShelleyBasedEraAlonzo (ApplyTxError [UtxowFailure (WrappedShelleyEraFailure (UtxoFailure (ExUnitsTooBigUTxO (ExUnits {exUnitsMem = 12500000, exUnitsSteps = 10000000000}) (ExUnits {exUnitsMem = 31256460, exUnitsSteps = 14103266205}))))])

This basically means that I exceeded the memory limit by roughly 2.5x (31256460 / 12500000) and the CPU limit by roughly 1.4x (14103266205 / 10000000000).

When lowering the utxo count to 14, 13, ..., it scales down pretty much linearly (see the increment check after these numbers):

15 utxos: {exUnitsMem = 31256460, exUnitsSteps = 14103266205}
14: {exUnitsMem = 28008316, exUnitsSteps = 12629952076}
13: {exUnitsMem = 24926512, exUnitsSteps = 11232794573}
12: {exUnitsMem = 22011048, exUnitsSteps = 9911793696}
11: {exUnitsMem = 19261924, exUnitsSteps = 8666949445}
10: {exUnitsMem = 16679140, exUnitsSteps = 7498261820}
9: {exUnitsMem = 14262696, exUnitsSteps = 6405730821}
8: Transaction successfully submitted.
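A quick increment check on the figures above (a back-of-the-envelope sketch; the numbers are just copied from this post) shows that each additional utxo actually costs slightly more memory than the previous one, so the growth is a bit steeper than strictly linear. My assumption is that this happens because every one of the n validator executions has to decode a ScriptContext that itself grows with the number of inputs:

# exUnitsMem for 9..15 script inputs, copied from the list above
printf '%s\n' 14262696 16679140 19261924 22011048 24926512 28008316 31256460 \
  | awk 'NR > 1 { printf "+%d mem for one more input\n", $1 - prev } { prev = $1 }'
# prints +2416444, +2582784, +2749124, +2915464, +3081804, +3248144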

The validator script itself has no logic, so it should not inflate memory and CPU usage (even though, unfortunately, it will be executed as many times as there are utxos).

My questions:

  1. Does this really mean that somewhere around 10 inputs is the maximum possible to spend from the script?
  2. Is there any way to overcome this, and allow way more inputs from the script to be spent at once?
  3. Are the exUnitsMem and exUnitsSteps limits the same on mainnet? (See the parameter query sketch after this list.)
  4. Maybe there is a way to increase those parameters to be able to submit the transaction?
  5. Is the script really included only once when submitting / am I building the transaction correctly?
  6. As I am validating multiple utxos at once, is it possible to run validator only once and not for each utxo?
  7. If this is really the limit, how are we supposed to build something useful? When and by how much are those units planned to increase?
  8. Are there any plans regarding running the validator only once for the whole utxo set, instead of executing it for each utxo?
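
Regarding questions 3 and 4: the per-transaction budget is an ordinary protocol parameter, so the current values for any network can simply be queried (a sketch; the exact JSON field name may differ between node versions, look for something like maxTxExecutionUnits):

# dump the current protocol parameters (use --mainnet instead for mainnet)
cardano-cli query protocol-parameters \
  --testnet-magic 1097911063 \
  --out-file protocol.json

# the transaction-level execution-unit budget
jq '.maxTxExecutionUnits' protocol.json

Changing those limits requires a protocol parameter update signed by the genesis keys, so an individual transaction cannot simply pay more to get a bigger budget.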

EDIT:

After updating the node to version 1.32.1, the error is thrown at the build step as well when the utxo count is increased even further. The result is pretty much the same, only the error message is different:

Command failed: transaction build  Error: The following scripts have execution failures:
the script for transaction input 0 (in the order of the TxIds) failed with:
The Plutus script evaluation failed: An error has occurred:  User error:
The budget was overspent. Final negative state: ({ cpu: 5931879314
| mem: -1199
})
  • You could deserialise the transaction and check how many times the script is included and how large it is. This would be interesting. I read that SundaeSwap struggled with the size of its scripts and had to do some low level optimisations. But unfortunately there are no details on this to be found.
    – Jey
    Commented Dec 28, 2021 at 10:39
  • I am pretty sure that it is included only once, as the signed transaction size is basically the same with either 1 or 15 utxos, and the raw transaction size increases dramatically with each utxo. I can't think of any possible optimizations, as this seems to be limited by the design/architecture itself.
    – serx
    Commented Dec 28, 2021 at 10:56
  • I understand now. It is not the transaction size constraint but the memory constraint that is hit here. I read in an article on iohk.io that the needed execution memory increases with every script UTxO. I wondered on Twitter if the memory could not be cleared after each UTxO, as they are independent of each other. I got no answer though, and I don't know if this is a yet-unused optimisation or if something else is preventing it.
    – Jey
    Commented Dec 28, 2021 at 11:17
  • Out of curiosity, how large is the transaction CBOR, though?
    – Jey
    Commented Dec 28, 2021 at 11:19
  • A transaction with my script (not AlwaysSucceeds), containing 10 utxo inputs (and some other stuff): {exUnitsMem = 19785600, exUnitsSteps = 8451556350}; built transaction - 104553 bytes, signed - 12619 bytes (see the size-check sketch after these comments).
    – serx
    Commented Dec 28, 2021 at 11:36
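
For reference on the size question: the files written by cardano-cli are JSON text envelopes whose cborHex field holds the serialized transaction, so the CBOR size in bytes is half the number of hex characters. A sketch, assuming jq is available and the signed transaction is in tx.signed (cardano-cli transaction view, where available, also prints a human-readable breakdown):

# serialized transaction size in bytes (two hex characters per byte)
echo $(( $(jq -r '.cborHex' tx.signed | tr -d '\n' | wc -c) / 2 ))

# human-readable dump of the transaction contents
cardano-cli transaction view --tx-file tx.signed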

1 Answer


This is a known issue: https://github.com/input-output-hk/cardano-node/issues/3360

Out of curiosity, how many NFTs do you have per --tx-in? I've seen this increase memory usage dramatically for users on jpg.store who store a lot of NFTs in their wallet.

I have a few suggestions, knowing practically nothing about how this stuff works :P I'll share these in the Github issue as well.

  1. Can we remove these limits entirely? Is there any reason why a user can't just pay more to execute a tx with higher mem/CPU requirements?

  2. If #1 isn't possible, what is the reason for keeping the limits as low as they are currently?

  3. Are there any efforts to optimize the Plutus compiler here? Surely adding a few more --tx-in's shouldn't blow up memory like this?

  • Thanks for the answer! Well, these issues are only loosely related. In the link you posted, I believe it is a simple transaction, while this one is a transaction involving a validator script. Also, it is not NFTs; it's utxos containing ADA or other CNTs, the main difference being the script inclusion in the transaction.
    – serx
    Commented Dec 28, 2021 at 17:58
  • If the network had no computational bounds and was run on nuclear energy, sure, but a lot of the nodes are run by people struggling to keep up with the current, ever-increasing minimum requirements, and a lot of that energy is spinning turbines with steam from burning stuff. I don't understand why the contracts can't be iterated through... The network parameters are always known. If you can move them out piecemeal, then while it's a bug, it's not a black hole. Commented Jan 9, 2022 at 3:05
