This replaces `ver_rct_non_semantics_simple_cached()` with an API that offloads
the responsibility of tracking input verification successes to the caller. The
main caller of this function in the codebase, `cryptonote::Blockchain`, instead
keeps track of verification results for transactions in the mempool by storing
a "verification ID" in the mempool metadata table (inside `txpool_tx_meta_t`);
see the sketch after the list below.
This has several benefits, including:
* When the mempool is large (>8192 txs), we no longer experience cache misses and unnecessarily re-verify ring signatures. This greatly improves block propagation time for FCMP++ blocks under load
* For the same reason, reorg handling can be sped up by storing verification IDs of transactions popped from the chain
* Speeds up re-validating every mempool transaction on a fork change (monerod revalidates the whole tx-pool on hard forks, see #10142)
* Caches results for every single type of Monero transaction, not just the latest RCT type
* Cache persists over a node restart
* Uses 512 KiB less RAM (8192 × 2 × 32 B)
* No additional storage or DB migration required since `txpool_tx_meta_t` already had padding allocated
* Moves more verification logic out of `cryptonote::Blockchain`
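As a sketch of the idea (not the actual Monero API; the field and helper names
below are hypothetical), the caller consults a verification ID stored alongside
the mempool metadata before redoing the expensive input proof checks:

```cpp
// Hypothetical sketch only: `verification_id` and the helper below are
// illustrative names, not the real fields in txpool_tx_meta_t.
#include <cstdint>

struct txpool_tx_meta_sketch
{
  // ... existing metadata fields ...
  std::uint64_t verification_id; // assumed to live in previously unused padding
};

// One plausible reading of the scheme: an ID identifies the context a
// transaction's input proofs were last verified under (bumped on events that
// invalidate prior results, e.g. a hard fork). A matching ID lets the caller
// skip the CPU-intensive cryptographic verification.
inline bool input_proofs_already_verified(const txpool_tx_meta_sketch &meta,
                                          std::uint64_t current_verification_id)
{
  return meta.verification_id == current_verification_id;
}
```

Because the ID would live in the database-backed metadata record rather than a
fixed-size in-memory cache, it is never evicted and survives restarts, which is
where the cache-miss and persistence benefits above come from.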
Furthermore, this opens the door to future multi-threaded block verification
speed-ups. Right now, input proof verification is limited to one transaction at
a time. However, one can imagine a scenario with verification IDs where input
proofs are optimistically verified on multiple threads in advance of block
processing. Then, even though ring member fetching and verification is
single-threaded inside `cryptonote::Blockchain::check_tx_inputs()`, the single
thread can skip the CPU-intensive cryptographic code whenever the verification
ID allows it.
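A minimal sketch of that scenario, reusing the hypothetical `verification_id`
idea from above (none of this is current monerod code):

```cpp
#include <atomic>
#include <cstdint>
#include <thread>
#include <vector>

struct pending_tx_sketch
{
  std::atomic<std::uint64_t> verification_id{0}; // 0 = not verified yet
};

// Stand-in for the CPU-intensive proof verification (ring signatures,
// FCMP++ membership proofs, ...); always "passes" in this sketch.
static bool verify_input_proofs(const pending_tx_sketch &) { return true; }

// Optimistically verify all of a block's transactions in parallel before the
// block is processed, recording the current verification ID on success.
void preverify_block_txs(std::vector<pending_tx_sketch> &txs,
                         std::uint64_t current_verification_id)
{
  std::vector<std::thread> workers;
  for (auto &tx : txs)
    workers.emplace_back([&tx, current_verification_id] {
      if (verify_input_proofs(tx))
        tx.verification_id.store(current_verification_id, std::memory_order_release);
    });
  for (auto &w : workers)
    w.join();
  // The later single-threaded check_tx_inputs() pass would then compare
  // tx.verification_id against current_verification_id and skip the
  // cryptographic checks when they match.
}
```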
Also changes the default log category in `tx_verification_utils.cpp` from "blockchain" to "verify".
Use a fixed 240s deadline for the daemons to reach agreement and poll at a
suitable frequency (0.25s), rather than polling up to 100 times at roughly
10s intervals.
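The pattern, sketched here in C++ purely for illustration (the functional
tests themselves are not C++, and `poll_until` is a made-up helper):

```cpp
#include <chrono>
#include <functional>
#include <thread>

// Poll a condition at a short fixed interval until it holds or a hard
// deadline expires; the previous approach of N long sleeps is replaced by
// one deadline plus frequent, cheap checks.
bool poll_until(const std::function<bool()> &condition,
                std::chrono::milliseconds deadline = std::chrono::seconds(240),
                std::chrono::milliseconds interval = std::chrono::milliseconds(250))
{
  const auto stop = std::chrono::steady_clock::now() + deadline;
  while (std::chrono::steady_clock::now() < stop)
  {
    if (condition())
      return true; // daemons reached agreement
    std::this_thread::sleep_for(interval);
  }
  return false; // deadline exceeded, fail the test
}
```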
Reduce the likelihood of false-positive failures in the p2p transaction
propagation functional test by waiting up to a maximum timeout for a
transaction to propagate, rather than using a fixed timeout, to reflect the
random delay of Dandelion++ transaction propagation. This strategy also speeds
up test execution in cases where propagation occurs faster than the previously
expected fixed delay.
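The same deadline-polling helper sketched above expresses this change;
`tx_in_peer_pool` is a hypothetical predicate standing in for the test's
actual RPC check:

```cpp
// Hypothetical stub: would query a peer daemon's transaction pool over RPC.
static bool tx_in_peer_pool() { return true; }

// Returns as soon as the transaction shows up rather than sleeping for a
// fixed delay, tolerating Dandelion++'s random embargo while finishing
// early when propagation is fast.
bool wait_for_propagation(std::chrono::seconds max_timeout)
{
  return poll_until(tx_in_peer_pool, max_timeout);
}
```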