Bot war: Failing txs

Intro: When is a contract workable?
While other jobs can have different variables, for the UniswapV2SlidingOracle job the ‘workable’ function seems to return ‘true’ once a block has been mined whose timestamp is at least 1800 seconds (30 minutes) after the last successfully worked job.

You could gamble and send out your tx exactly 1800 seconds after the last work, or even before, and hope the block your transaction ends up in is timestamped at least 1800 seconds after the last work. However, miners are free to timestamp a block they mine with any time; the only requirement is that it increases on the previous block’s timestamp.

That said, it’s possible to send a transaction 1800 seconds after the last work and still end up in a block timestamped only 1799 seconds after the last work, causing your transaction/work to fail.
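The timing risk above can be sketched in a few lines of Python. The names (`COOLDOWN`, `workable`) and the timestamps are illustrative, not the actual contract’s:

```python
COOLDOWN = 1800  # seconds between successful works (the 30-minute window)

def workable(block_timestamp: int, last_work: int) -> bool:
    """Mirror of the on-chain check: true only once the mined block's
    timestamp is at least COOLDOWN past the last successful work."""
    return block_timestamp - last_work >= COOLDOWN

last_work = 1_600_000_000
# You send your tx exactly at the 30-minute mark...
send_time = last_work + 1800
# ...but the miner timestamps the block one second "early".
block_timestamp = last_work + 1799

print(workable(block_timestamp, last_work))   # False -> your work call reverts
print(workable(last_work + 1800, last_work))  # True  -> a later block would pass
```

So even a tx sent "on time" can land in a block where the job is not yet workable, which is exactly the failed-work case described above.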

The bottom line is that the safest way is to wait until the variable returns true, but by that point other bots may have gambled already, or are certainly sending their transactions at this same time.

Current situation
While gas wars are prevented by the cap on rewarded KP3R, the work is still dominated by a few bots, mostly resulting in a loss of funds if you directly compare the tx fee to the reward in KP3R.

So it’s very logical that different people/bots send out transactions in the very same block once ‘workable’ turns ‘true’. At that point the only thing deciding who wins the job is the amount of gas paid, resulting in almost no profit or even a loss for the winner, plus losses for everyone who sees their tx fail. Fortunately a failed transaction doesn’t cost as much gas as working a job, since it reverts early and saves on gas usage, but after a few fails it’s no fun anymore.

While the current situation makes sure jobs are done as timely as possible, it already results in many failing (duplicate) work transactions and the same bots doing the work.

Possible solution: to be fine-tuned…
A possible solution could be giving addresses that recently performed a job successfully an extra timeout: a fixed 1-minute timeout, or something relative to how long ago the last successful work was done and/or how many active keepers are on the job. This creates a small delay for recent job performers, so other bots/people get a chance to perform the job right after 1800 seconds.

One thing to keep in mind with this ‘solution’ is that the ‘workable’ function would return different responses for different addresses, so an address would have to be passed in to check whether the job is workable for you. Also, there must be a way to prevent bots from creating many new addresses to perform jobs; maybe the timeout could take the amount of bonded KP3R into account. However, I’m also not a fan of high bonding requirements, because we are not all whales.
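As a rough sketch of what a per-address ‘workable’ could look like: everything here (the names, the 60-second penalty, the job/worker bookkeeping) is made up for illustration and is not the actual contract logic.

```python
COOLDOWN = 1800        # base job cooldown in seconds
RECENT_PENALTY = 60    # extra wait for the address that last worked the job

last_work = {"job": 1_600_000_000}    # job -> timestamp of last successful work
last_worker = {"job": "0xBot"}        # job -> address that performed it

def workable_for(job: str, keeper: str, now: int) -> bool:
    """workable() now depends on who is asking: the address that
    performed the last work waits an extra RECENT_PENALTY seconds,
    giving every other keeper a head start."""
    wait = COOLDOWN
    if last_worker.get(job) == keeper:
        wait += RECENT_PENALTY
    return now - last_work[job] >= wait

now = 1_600_001_800  # exactly 1800s after the last work
print(workable_for("job", "0xNewKeeper", now))  # True: a fresh address can work
print(workable_for("job", "0xBot", now))        # False: recent worker waits 60s more
```

The penalty could just as well scale with bonded KP3R or keeper count instead of being a flat 60 seconds; the point is only that the check takes the caller’s address as input.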

Conclusion
I’m not sure if this is seen as a real problem, but I think that already at this point, in the first weeks, there are quite a few people working on creating bots or trying to do jobs manually, only to face expert bots dominating the jobs. I thought I should just share these thoughts and a possible part of the solution, to start the conversation about this and see what you guys think. Is this really a problem we should solve and, if yes, what are the ideas for this?

Agree that this is a problem, based on my recent experience competing for jobs. At this point I’ve stopped trying. We can certainly set our bots to retry every second and/or try to game the timing as explained above. Maybe we will get lucky.

To ensure the health of the ecosystem and to make it fair for participants I propose an exploration of the following:

Proposal:
Establish a framework that distributes jobs randomly (similar to ETH2 validator attestations) among eligible Keep3rs.

Pros:

  • Gives a fair chance for all Keep3rs to win work and earn KP3R.
  • Ensures the viability of the ecosystem as it grows - fewer cartels or monopolies.
  • Mitigates the gas wars and other advantage-generating mechanisms.
  • Creates an opportunity to allow Keep3rs to register for specific job types (e.g. give me all workable jobs where I call work() with gas fees likely less than $50, or give me the complex jobs where I call work(x,y,z), which require min. 200 KP3R bonded and likely gas fees > $300). This would avoid situations where Keep3rs call jobs they aren’t able to complete.
  • Allows for ecosystem metrics for financial budgeting (e.g. 200 jobs in 24 hrs div 50 Keep3rs = ~4 jobs per Keep3r per day).

Cons:

  • Technically I’m not even sure if it’s a viable solution. May require additional off-chain infrastructure.
  • Saturation as there are relatively few jobs. This will change as the ecosystem grows.
  • It’s still possible to game the system by creating multiple Keep3rs. Not sure how to solve this.
  • Requires a ‘timeout’ and ‘redistribute’ to address situations where Keep3rs sit on jobs, and potentially a strike system for inactivity (slashing).
  • A bunch of other things that I haven’t thought of that would make this a really bad idea.
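For what it’s worth, a toy sketch of what deterministic random assignment could look like. Everything here is hypothetical (the hashing scheme, the `round_` counter for redistribution); a real version would need verifiable on-chain entropy, which block hashes only approximate.

```python
import hashlib

def assign_job(job_id: str, entropy: str, keepers: list[str], round_: int = 0) -> str:
    """Pick one keeper per job from shared entropy (e.g. a recent block
    hash), so every participant can verify the assignment themselves.
    Bumping `round_` reassigns the job if the assignee sits on it past a
    grace period -- the 'timeout'/'redistribute' mechanism from the cons
    list above."""
    seed = hashlib.sha256(f"{job_id}:{entropy}:{round_}".encode()).digest()
    return keepers[int.from_bytes(seed, "big") % len(keepers)]

keepers = ["0xA", "0xB", "0xC", "0xD"]
first = assign_job("UniswapV2SlidingOracle", "0xblockhash", keepers)
# If `first` doesn't work the job within the grace period, move to round 1:
backup = assign_job("UniswapV2SlidingOracle", "0xblockhash", keepers, round_=1)
```

Because the selection is a pure function of public inputs, it needs no trusted coordinator, though the Sybil problem (one operator registering many keeper addresses) remains unsolved here too.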

Open to discussion/feedback.

I’m not too knowledgeable about keep3r so I don’t know if this solution will work.

What if there were an auction for the job during that 1800s window when no one can perform it? This might solve the double-work problem, since it establishes who has the rights to a job while the system is effectively on cooldown (and raises engagement as well).

The auction could be for the rights to a job for a certain time frame or range of block numbers (say, 1800s to 1860s after the last work?). After that, we could either hold another auction or just revert to the current free-for-all method of claiming jobs.
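A minimal sketch of how settling such an auction could look: a sealed-bid, highest-bidder-wins toy where the bid amounts and the tie-breaking rule are my own assumptions, not part of any existing mechanism.

```python
def settle_auction(bids: dict[str, int]) -> tuple[str, int]:
    """Return (winner, amount): the highest bidder wins the exclusive
    right to work the job during the post-cooldown window. Ties break
    toward the lexicographically smallest address, purely so the result
    is deterministic."""
    winner = max(sorted(bids), key=lambda addr: bids[addr])
    return winner, bids[winner]

bids = {"0xA": 5, "0xB": 12, "0xC": 7}
print(settle_auction(bids))  # ('0xB', 12)
```

The winning bid would presumably be burned or paid into the protocol, so winning the rights window has a real cost; that cost is also exactly what the griefing attack below exploits.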

The problem with this approach is that if someone is dedicated to attacking this protocol and willing to lose the fees on each job, they still can.