Compare commits


315 Commits

Author SHA1 Message Date
Gui Heise
d8f896bda3 Add Eth and Weth prices 2022-01-18 16:33:32 -05:00
Gui Heise
c7de7cf808 Fix Black pre-commit 2022-01-18 16:04:31 -05:00
Gui Heise
a3d83e625c Add cETH and cWBTC 2022-01-18 16:01:33 -05:00
Gui Heise
fed3497afc Remove @coro from cli 2022-01-17 22:09:27 -05:00
Gui Heise
3072e4a826 Specify coingecko id's and remove async keyword from cli 2022-01-14 13:17:37 -05:00
Gui Heise
7af515d1ac Change price to float 2022-01-13 11:17:48 -05:00
Gui Heise
2a1da33752 Remove leftover coinbase file 2022-01-13 10:54:00 -05:00
Gui Heise
2e22103713 Add coingecko api 2022-01-13 01:26:53 -05:00
Luke Van Seters
a93161eabc Merge pull request #229 from flashbots/chainlink-fix
Fix chainlink price
2022-01-11 15:52:39 -05:00
Luke Van Seters
d9bca45a50 Fix chainlink price 2022-01-11 15:50:19 -05:00
Luke Van Seters
de03b953a0 Merge pull request #223 from flashbots/aave-zero-bug
Support aave self-liquidations
2022-01-11 09:49:50 -05:00
Luke Van Seters
403e84fa29 should be zero if we dont know 2022-01-11 09:48:41 -05:00
Luke Van Seters
a40e250464 Merge pull request #226 from tmikulin/improve-k8-security
Enforce security in k8 files
2022-01-11 08:20:01 -05:00
Tomislav Mikulin
2703b008de Enforce security in k8 files 2022-01-10 20:52:45 +01:00
Gui Heise
c28f7c6174 Remove unused Optional 2022-01-10 14:21:28 -05:00
Gui Heise
2bb760874d Remove exceptions 2022-01-10 14:18:37 -05:00
Luke Van Seters
a29b12bf0a Merge pull request #224 from flashbots/support-all-tokens-coinbase-knows
Support all tokens we have supported for coinbase
2022-01-10 12:54:14 -05:00
Luke Van Seters
5b1efd5e6d Support all tokens we have supported for coinbase 2022-01-10 12:27:08 -05:00
Luke Van Seters
89fcf388e4 Merge pull request #222 from flashbots/include-sandwiches-close-in-arb
Include sandwiches that close in arbs
2022-01-10 11:24:34 -05:00
Gui Heise
63087fc0e8 Support aave self-liquidations 2022-01-10 10:14:58 -05:00
Luke Van Seters
a6e76bfd10 Merge pull request #218 from flashbots/remove-backfill
Remove old backfill code
2022-01-10 10:05:14 -05:00
Luke Van Seters
50ff7dadcd The sandwicher should be where the swap value accumulates 2022-01-08 16:15:39 -05:00
Luke Van Seters
4930065045 Include sandwiches that close in arbs 2022-01-08 13:44:58 -05:00
Luke Van Seters
4a4992a0f9 Merge pull request #221 from flashbots/fix-listener-small-image
Fix listener to work with more secure image
2022-01-08 07:04:44 -05:00
Luke Van Seters
81be06ad7d Fix listener to work with more secure image 2022-01-07 16:18:51 -05:00
Luke Van Seters
e13f895593 Merge pull request #219 from flashbots/require-small-difference-arbs
Require token amounts in arbitrage swaps to be close to each other
2022-01-07 14:09:52 -05:00
Luke Van Seters
660dfe7b2f Update tests to use a true reverting arb 2022-01-07 13:18:51 -05:00
Luke Van Seters
11aebe078a Require price difference to be less than 1% between swaps 2022-01-07 13:06:41 -05:00
Luke Van Seters
69cad7537e Break swap outs / ins check into a function 2022-01-07 12:22:45 -05:00
Gui Heise
9894450e0c Merge pull request #217 from flashbots/aave-liquidations-v3
Restructure AAVE classifier debt logic
2022-01-07 11:30:53 -05:00
Gui Heise
977a72839e Remove instance checks 2022-01-07 11:25:33 -05:00
Luke Van Seters
dcdb4e421d Merge pull request #210 from tmikulin/improve_dockerfile
Improve dockerfile
2022-01-07 11:05:39 -05:00
Tomislav Mikulin
02fb01dfb8 Merge branch 'main' into improve_dockerfile 2022-01-07 09:30:35 +01:00
Tomislav Mikulin
9ab1e6e5b1 add the missing emojis 2022-01-07 09:25:30 +01:00
Luke Van Seters
b33eb49dd2 Remove old backfill code 2022-01-06 17:10:52 -05:00
Gui Heise
327695c56c Remove AAVE address list 2022-01-06 16:38:48 -05:00
Gui Heise
818a9b0b65 Raise exceptions 2022-01-06 16:35:51 -05:00
Gui Heise
75748abb43 Actually fix eth transfers test 2022-01-06 16:17:10 -05:00
Gui Heise
92904d7298 Fix eth transfer liquidations 2022-01-06 16:14:35 -05:00
Gui Heise
73a29a667b Fix text 2022-01-06 15:08:44 -05:00
Luke Van Seters
8bb92aa87e Merge pull request #215 from flashbots/flip-token-in-out-amounts
Switch token amounts for taker and maker on 0x
2022-01-05 20:30:40 -05:00
Luke Van Seters
722ee8c6ec Fix tests 2022-01-05 18:02:49 -05:00
Luke Van Seters
bee620fd98 Switch token amounts for taker and maker on 0x 2022-01-05 17:55:49 -05:00
Luke Van Seters
2d8db7f506 Merge pull request #213 from flashbots/static-redis-password
Set the password in Redis statically locally
2022-01-05 15:21:31 -05:00
Luke Van Seters
09e1d48ae8 Set the password in redis statically locally 2022-01-04 19:00:10 -05:00
Luke Van Seters
379bd82f0e Merge pull request #211 from flashbots/faster-writes
Use COPY to speed up database writes for blocks and traces
2022-01-04 13:17:24 -05:00
Luke Van Seters
8ba0f86569 Merge pull request #206 from flashbots/fix-pricing
Only import the worker where needed
2022-01-04 12:21:29 -05:00
Luke Van Seters
807e6e482a Merge pull request #212 from flashbots/only-search-shortest
Cut out early from arbitrages if we've already found a shorter path
2022-01-04 11:38:31 -05:00
Luke Van Seters
17823b5aae comment => variable 2022-01-04 11:25:27 -05:00
Luke Van Seters
eff77dd482 goodbye 2022-01-04 11:24:33 -05:00
Luke Van Seters
2af2f86069 Merge pull request #207 from flashbots/gimme-a-break
Be more lenient on liveness timeouts for deployments
2022-01-04 11:05:31 -05:00
Luke Van Seters
28b37c723c Put it back 2022-01-04 10:19:39 -05:00
Luke Van Seters
02a0adc8e2 Break it to prove tests work 2022-01-04 10:16:50 -05:00
Luke Van Seters
f84b9d45d3 Add placeholder file to detect which code is running 2022-01-04 10:05:53 -05:00
Luke Van Seters
24a6ba670e Bring back the array for diff checks 2022-01-04 09:50:44 -05:00
Luke Van Seters
bb94eba02a Change to max_route_length to make the logic clearer 2022-01-03 16:09:34 -05:00
Luke Van Seters
4e9ff10988 Cut out early from arbitrages if we've already found a shorter path 2022-01-03 15:59:56 -05:00
Luke Van Seters
0ed4f5456e Move list util to db shared 2022-01-03 15:20:00 -05:00
Luke Van Seters
9b8cac5c5d Credit 2022-01-03 15:14:28 -05:00
Luke Van Seters
ada540c1d4 Write using an iterator 2022-01-03 14:50:27 -05:00
Luke Van Seters
6b1c469a10 Move classified_traces to csv write 2022-01-03 14:27:36 -05:00
Luke Van Seters
bab2043575 Abstract out csv writing 2022-01-03 13:38:34 -05:00
Luke Van Seters
93bdb7c129 Write blocks as proof of concept 2022-01-03 13:15:30 -05:00
Luke Van Seters
99d291da8e Be more lenient on liveness timeouts 2022-01-03 12:43:54 -05:00
Luke Van Seters
7bb3275c04 Only import worker where needed 2022-01-03 12:16:33 -05:00
Tomislav Mikulin
1557673eda Merge branch 'main' into improve_dockerfile 2022-01-03 17:56:13 +01:00
Luke Van Seters
5a26bde3de Get RPC only where its needed 2022-01-03 11:50:38 -05:00
Luke Van Seters
e462a16b8f Merge pull request #202 from flashbots/redis-queue
Queue backfills with Redis
2022-01-03 11:42:07 -05:00
Tomislav Mikulin
6f624ecb7b optimize the dockerfile with security and shrinking the resulting docker image 2022-01-02 16:32:52 +01:00
Luke Van Seters
0860f4f7f5 More detail in the README 2021-12-31 18:08:04 -05:00
Luke Van Seters
5cad2fef43 Break redis into a function. Add reference to README for now 2021-12-31 18:00:32 -05:00
Luke Van Seters
139e45333b Clean up redis pods 2021-12-31 16:44:22 -05:00
Luke Van Seters
f296de5a20 Update README to reflect new backfill 2021-12-31 16:37:27 -05:00
Luke Van Seters
0516fffa9c Add some logging 2021-12-31 16:18:17 -05:00
Luke Van Seters
01bb566478 Drop worker count to 1 locally 2021-12-31 16:18:05 -05:00
Luke Van Seters
cbec5b7613 Only build inspector once 2021-12-31 16:12:36 -05:00
Luke Van Seters
cff148e21f Log when writing 2021-12-31 16:11:18 -05:00
Luke Van Seters
815af26f28 Enqueue messages to redis with backfill command 2021-12-31 15:55:33 -05:00
Luke Van Seters
b862bddfe9 Add worker deployment 2021-12-31 15:55:33 -05:00
Luke Van Seters
476db25003 Add redis 2021-12-31 15:55:33 -05:00
Luke Van Seters
4662a1ecbc Pass DB sessions into inspector 2021-12-31 15:50:07 -05:00
Luke Van Seters
1ff9e9aa1c Merge pull request #199 from flashbots/fix-cycle-sandwiches
Support sandwiches including multiple pools
2021-12-31 15:22:39 -05:00
Luke Van Seters
bec0d03cae Merge pull request #201 from flashbots/fix-typo
Fix typo in gathering blocks
2021-12-31 14:49:33 -05:00
Luke Van Seters
602e32de36 Merge pull request #200 from flashbots/mev-use-poetry
Use poetry for backfill script
2021-12-31 08:19:33 -05:00
Luke Van Seters
943715c812 Fix typo in gathering blocks 2021-12-30 22:05:23 -05:00
Luke Van Seters
60b0b933b4 Use poetry for backfill script 2021-12-30 10:46:29 -05:00
Luke Van Seters
9235020999 Merge pull request #195 from flashbots/consistent-middleware
Use middleware for all RPC calls
2021-12-30 10:11:33 -05:00
Luke Van Seters
a683cc66e0 Fix sandwiches including multiple pools 2021-12-29 17:59:21 -05:00
Luke Van Seters
b487ab08a0 Merge pull request #197 from flashbots/break-out-early-find
Break out of finding block on first missing attribute
2021-12-29 11:26:56 -05:00
Luke Van Seters
880e588f5f Merge pull request #196 from flashbots/zero-ex-two-transfers
ZeroX requires at least 2 child transfers
2021-12-29 11:26:39 -05:00
Luke Van Seters
f9ccd8dca2 Merge pull request #194 from flashbots/bug-all
Inspect block should write all
2021-12-29 11:26:08 -05:00
Luke Van Seters
846f7376d4 Break out of finding block on first missing attribute 2021-12-29 09:50:40 -05:00
Luke Van Seters
52be448fb8 ZeroX requires at least 2 child transfers 2021-12-29 09:14:15 -05:00
Luke Van Seters
b70f55c9cc Keep asyncio sleep 2021-12-25 17:29:40 -05:00
Luke Van Seters
7707b818f0 Include new methods in retry-able methods 2021-12-25 17:23:21 -05:00
Luke Van Seters
6b8d66b976 Merge pull request #173 from sketsdever/opensea
Opensea NFT Trade classifier
2021-12-25 16:56:29 -05:00
Luke Van Seters
b611be4e68 Inspect block should write all 2021-12-25 16:54:47 -05:00
Shea Ketsdever
5990838603 Last nits 2021-12-25 15:53:13 -06:00
Luke Van Seters
fcc453391f Use middleware for trace and receipt methods 2021-12-23 22:21:18 -05:00
Shea Ketsdever
edc40a3106 Merge 2021-12-23 19:56:24 -06:00
Shea Ketsdever
ce7585e0b3 Fix getting addr 2021-12-23 19:41:26 -06:00
Shea Ketsdever
1f84f95fff Require exchange_wallet_address and rename payment_token -> payment_token_address 2021-12-23 18:57:11 -06:00
Luke Van Seters
2982ff700f Merge pull request #192 from flashbots/liquidations-error-crud
Pass error through to liquidation
2021-12-23 14:41:37 -05:00
Luke Van Seters
21826dd308 Pass error through from trace to liquidation 2021-12-23 10:09:32 -05:00
Luke Van Seters
115167096e Add error column to liquidations 2021-12-23 09:56:15 -05:00
Luke Van Seters
7b44046926 Merge pull request #183 from flashbots/fix-infinite-arbs
Only use each swap in a single arbitrage
2021-12-22 22:53:55 -05:00
Luke Van Seters
2768428eac Merge pull request #189 from flashbots/overflow-error
Ignore overflow errors on trace decode
2021-12-22 22:49:40 -05:00
Luke Van Seters
b588e115ce Fix reverting arbitrage tests 2021-12-22 22:42:26 -05:00
Luke Van Seters
bd99188f6e Rename rest 2021-12-22 22:41:10 -05:00
Luke Van Seters
fa5be12e81 Fix docstring 2021-12-22 22:41:10 -05:00
Luke Van Seters
ca921f896d route => shortest_route in tests 2021-12-22 22:41:10 -05:00
Luke Van Seters
22769c9529 Remove TODO - not needed for now 2021-12-22 22:41:10 -05:00
Luke Van Seters
17c9b835ac Simplify smallest logic. Fix tests 2021-12-22 22:41:10 -05:00
Luke Van Seters
46b768c147 Break out shortest logic into a function 2021-12-22 22:41:10 -05:00
Luke Van Seters
46f7786c4f Only keep the shortest route instead 2021-12-22 22:41:10 -05:00
Luke Van Seters
154d356621 Only keep the longest arb 2021-12-22 22:41:10 -05:00
Luke Van Seters
f4fb7717dd Ignore overflow errors on trace decode 2021-12-22 22:39:06 -05:00
Gui Heise
45c74a19ec Merge pull request #188 from flashbots/compound-tokens
Add compound tokens
2021-12-22 15:27:33 -05:00
Gui Heise
1916c81293 Fix USDC const 2021-12-22 14:59:34 -05:00
Gui Heise
e237f8d17f Add token addresses 2021-12-22 14:45:12 -05:00
Taarush Vemulapalli
4cb3383d1a New error column for arbitrages (#180) 2021-12-22 08:00:54 -08:00
Luke Van Seters
ea40a3905f Merge pull request #179 from flashbots/copy-data
Inspect many writing 10 blocks at a time - 40s => 30s locally
2021-12-21 17:57:01 -05:00
Luke Van Seters
bb0420fd78 Merge pull request #175 from flashbots/random-postgres-client
Append a random number to postgres client
2021-12-21 15:46:21 -05:00
Luke Van Seters
3c958cdc76 Merge pull request #178 from flashbots/copy-data
Bulk delete and write data
2021-12-21 15:37:26 -05:00
Luke Van Seters
cec6341bdf Inspect many writing 10 blocks at a time - 40s => 30s locally 2021-12-21 15:05:12 -05:00
Luke Van Seters
fcfb40c864 Add inspect many blocks - use for single inspect too 2021-12-21 14:58:39 -05:00
Gui Heise
a463ff7ebf Merge pull request #177 from flashbots/token-decimals
Create tokens table
2021-12-21 14:52:29 -05:00
Gui Heise
c68e7216d9 Remove pass 2021-12-21 14:44:58 -05:00
Gui Heise
ba45200d66 Create tokens table 2021-12-21 14:18:46 -05:00
Luke Van Seters
35074c098e Append a random number to postgres client 2021-12-21 10:28:13 -05:00
Shea Ketsdever
66e1e64675 Actually fix lint issues 2021-12-20 11:05:05 -08:00
Luke Van Seters
82c167d842 Merge pull request #174 from flashbots/listener-lag-fix
Fix listener first startup
2021-12-20 12:54:32 -05:00
Luke Van Seters
a2f8b5c08e Remove PIDFILE after stop 2021-12-20 12:43:27 -05:00
Luke Van Seters
6e8d898cb0 Start listener from block lag 2021-12-20 12:37:20 -05:00
Shea Ketsdever
bf85025b84 Fix lint issue 2021-12-20 09:05:21 -08:00
Shea Ketsdever
97e6c156ab Add nft_trades table to db 2021-12-19 15:13:01 -08:00
Shea Ketsdever
b75ee98018 Create nft trade from transfers 2021-12-19 14:31:49 -08:00
Shea Ketsdever
f92737b00c Classify opensea nft trades 2021-12-19 12:16:49 -08:00
Luke Van Seters
cfa3443f88 Merge pull request #170 from flashbots/no-sandwiches
If no sandwiched swaps, not a sandwich
2021-12-17 12:15:05 -05:00
Luke Van Seters
088c32f52f If no sandwiched swaps, not a sandwich 2021-12-17 11:02:03 -05:00
Luke Van Seters
1943d73021 Merge pull request #169 from flashbots/lower-prices
Make token addresses for prices lowercase
2021-12-16 18:38:17 -05:00
Luke Van Seters
633007be64 Make token addresses for prices lowercase 2021-12-16 17:28:20 -05:00
Taarush Vemulapalli
d7bb160d85 Add received_token_address for Compound/CREAM (#168) 2021-12-16 14:33:10 -05:00
Luke Van Seters
8a8090e20f Merge pull request #163 from flashbots/add-sandwiches-crud
Add sandwiches
2021-12-16 14:32:03 -05:00
Gui Heise
408ff02de3 Merge pull request #164 from flashbots/0x-bug 2021-12-16 13:41:10 -05:00
Gui Heise
c93e216647 Fix length check for child transfers 2021-12-15 14:35:29 -05:00
Gui Heise
af01b4e8b5 Value to Runtime error 2021-12-15 14:03:51 -05:00
Gui Heise
42b82be386 Add exception to transfers not found 2021-12-15 13:54:51 -05:00
Luke Van Seters
566dada5d4 Add back crud for sandwiches 2021-12-15 13:47:29 -05:00
Luke Van Seters
f0c29e2b2f Add logic and writing for sandwiches. Add tests too 2021-12-15 13:45:55 -05:00
Gui Heise
c090624f4c move none check 2021-12-15 11:06:22 -05:00
Luke Van Seters
5fa7c6b567 Merge pull request #167 from flashbots/isort-again
Fix whitespace for isort
2021-12-14 13:31:50 -05:00
Luke Van Seters
b9544eb18b Fix whitespace for isort 2021-12-14 13:14:13 -05:00
Luke Van Seters
c23b9a1651 Merge pull request #158 from flashbots/add-isort
Add isort pack to pre-commit
2021-12-14 13:11:39 -05:00
Luke Van Seters
94a05d8845 Run isort for alembic 2021-12-14 13:09:28 -05:00
Luke Van Seters
8b6bf7d76d Make alembic a known third part for isort 2021-12-14 13:09:02 -05:00
Luke Van Seters
2c251fb72e Make alembic a known third party 2021-12-14 13:08:26 -05:00
Luke Van Seters
bda96b04ce Try local rev 2021-12-14 13:03:24 -05:00
Luke Van Seters
bd73820123 Rename isort back 2021-12-14 12:59:14 -05:00
Luke Van Seters
7bc820fb33 Merge pull request #162 from flashbots/add-sandwiches-db
Add sandwiches tables
2021-12-14 12:48:03 -05:00
Luke Van Seters
4b909ad88e Add tables for sandwiches 2021-12-14 12:47:49 -05:00
Luke Van Seters
2ec2bf44ba Merge pull request #160 from flashbots/add-transaction-position-crud
Write transaction position for swaps and traces
2021-12-14 12:47:16 -05:00
Luke Van Seters
ccd409e9cf Merge pull request #161 from flashbots/add-transaction-position
Add nullable transaction position field
2021-12-14 12:47:07 -05:00
Luke Van Seters
138b1a0eef No comments 2021-12-14 12:46:00 -05:00
Luke Van Seters
d62d547da1 Merge pull request #159 from flashbots/faster-tests
Speed up the tests by sharing trace_classifier
2021-12-14 12:37:02 -05:00
Gui Heise
23635892a6 Add check for reverted orders 2021-12-13 21:07:24 -05:00
Luke Van Seters
9ffe9fe131 Add back 2021-12-13 20:24:29 -05:00
Luke Van Seters
c6eba733a0 Fix env.py 2021-12-13 20:20:12 -05:00
Luke Van Seters
c853cee43e Write transaction position for swaps and traces 2021-12-13 20:05:07 -05:00
Luke Van Seters
e84d946ebb Add nullable transaction position field 2021-12-13 20:03:17 -05:00
Luke Van Seters
2046cd2e51 Add support for profiling 2021-12-13 19:46:58 -05:00
Luke Van Seters
bb23fce13c Share trace classifier in tests 2021-12-13 19:34:47 -05:00
Luke Van Seters
5d7d84aa02 Add back isort in precommit 2021-12-13 18:50:43 -05:00
Luke Van Seters
767cf2df8f Specify python version 2021-12-13 18:48:50 -05:00
Luke Van Seters
d5f73b5e3a Run isort on all files 2021-12-13 18:46:39 -05:00
Luke Van Seters
bc46c2929b Fix isort settings so mev_inspect is considered this project 2021-12-13 18:45:21 -05:00
Luke Van Seters
f07c497b33 Merge pull request #157 from flashbots/fix-head-punks
Change migrations head for punks
2021-12-13 14:34:23 -05:00
Luke Van Seters
1534fb6165 Change migrations head for punks 2021-12-13 13:19:11 -05:00
Gui Heise
88adfd8625 Merge pull request #154 from flashbots/add-liquidation-addresses
Add liquidation addresses
2021-12-13 11:00:26 -05:00
Gui Heise
d736b38845 Add coinbase names for addresses 2021-12-08 15:09:06 -05:00
Gui Heise
00c73b228d Add supported token addresses 2021-12-07 15:53:45 -05:00
Gui Heise
5341c904ec Add top received liquidation addresses to prices 2021-12-07 15:13:08 -05:00
Robert Miller
9ffa9d2df9 Merge pull request #149 from flashbots/punk_accept_bids_database
feat: punk accept bids database
2021-12-06 16:41:47 -05:00
Robert Miller
4e91e52a92 style: formatting 2021-12-06 16:36:05 -05:00
Robert Miller
0ad3906989 style: formatting 2021-12-06 16:33:35 -05:00
Robert Miller
27f43ea29c Merge branch 'main' into punk_accept_bids_database 2021-12-06 16:31:24 -05:00
Robert Miller
8d48cea315 Merge pull request #147 from flashbots/punk-database-work
feat: punk_snipe database entry
2021-12-06 16:16:31 -05:00
Robert Miller
01c4024017 style: formatting 2021-12-06 16:13:47 -05:00
Robert Miller
044a233141 Merge branch 'main' into punk-database-work 2021-12-06 16:07:13 -05:00
Robert Miller
34dc54ee6f Merge pull request #148 from flashbots/punk_bid_database
feat: add punk bid database
2021-12-06 16:05:47 -05:00
Gui Heise
d938182833 Merge pull request #153 from flashbots/double-arb-bug
Fix arbitrage swap double entry bug
2021-12-06 15:21:43 -05:00
Gui Heise
d2a1814774 skip start swap 2021-12-06 15:18:32 -05:00
Gui Heise
be19c42275 add start and end route check 2021-12-06 11:52:41 -05:00
Robert Miller
8ee803d229 Merge branch 'punk-database-work' of https://github.com/flashbots/mev-inspect-py into punk-database-work 2021-12-04 20:33:13 -05:00
Robert Miller
478f9bafa5 style: formatting 2021-12-04 20:32:54 -05:00
Robert Miller
9f08275698 style: formatting 2021-12-04 20:32:29 -05:00
Robert Miller
622cf9319e style: formatting 2021-12-04 20:31:46 -05:00
Luke Van Seters
11744deaa9 Merge pull request #151 from sketsdever/bancor
Bancor classifier
2021-12-03 11:05:44 -05:00
Shea Ketsdever
37e6900f46 Rename create_swap functions 2021-12-02 21:08:45 -05:00
Taarush Vemulapalli
1fb65bacc1 Compound backfilling/removed network calls (#125)
* Removes `collateral_token_address` from both aave/comp for consistency
2021-12-02 11:19:32 -08:00
Shea Ketsdever
4fdd628ce3 Merge 2021-12-01 18:10:05 -05:00
Luke Van Seters
912239fc2e Merge pull request #150 from flashbots/fix-timestamp-writing
Fix timestamp writing in blocks
2021-11-30 12:57:20 -05:00
Luke Van Seters
ed94e71715 Fix timestamp writing in blocks 2021-11-30 12:54:07 -05:00
Luke Van Seters
f7e4bdaed2 Merge pull request #142 from flashbots/prices-kube
Add cron job to fetch prices
2021-11-29 12:09:29 -05:00
Shea Ketsdever
7d7f78bfb1 Fix int<>timestamp bug 2021-11-28 16:02:41 -08:00
Shea Ketsdever
cd01298ba6 Bancor classifier 2021-11-28 14:51:24 -08:00
Robert Miller
c1ba63ef81 style: formatting 2021-11-26 21:34:16 -05:00
Robert Miller
e1e678bbc2 style: formatting 2021-11-26 21:33:47 -05:00
Robert Miller
c619c20878 bug: add a missing parentheses 2021-11-26 21:29:53 -05:00
Robert Miller
3088055606 bug: add a missing parentheses 2021-11-26 21:29:20 -05:00
Luke Van Seters
018fb8c73b Run hourly 2021-11-26 21:07:06 -05:00
Luke Van Seters
9a076a6b4c Don't run prices by default 2021-11-26 21:07:06 -05:00
Luke Van Seters
391314b9d6 Limit successful history instead of ttl 2021-11-26 21:07:06 -05:00
Luke Van Seters
c83577b04c Remove restart 2021-11-26 21:07:06 -05:00
Luke Van Seters
34aca861cc Use poetry directly instead of entrypoint script 2021-11-26 21:07:06 -05:00
Luke Van Seters
a8c1728e35 Save progress 2021-11-26 21:07:06 -05:00
Luke Van Seters
26caaa04e1 Merge pull request #134 from flashbots/prices
Add support for fetching prices from coinbase and storing
2021-11-26 21:06:48 -05:00
Luke Van Seters
4f34316afb COINBASE_TOKEN_NAMES => COINBASE_TOKEN_NAME_BY_ADDRESS 2021-11-26 21:03:57 -05:00
Robert Miller
868094696a Merge branch 'main' into punk_accept_bids_database 2021-11-26 19:07:42 -05:00
Robert Miller
90f822a15f Merge branch 'main' into punk_bid_database 2021-11-26 19:07:16 -05:00
Robert Miller
56f0bbb855 Merge branch 'main' into punk-database-work 2021-11-26 19:02:10 -05:00
Gui Heise
4304776af6 Merge pull request #143 from flashbots/0x-v2
Add support for 0x orderbook
2021-11-26 18:14:41 -05:00
Robert Miller
07aa6e3089 feat: add punk_bid_acceptances database 2021-11-26 15:42:36 -05:00
Robert Miller
71c549b6f3 feat: add punk_bids database 2021-11-26 15:33:07 -05:00
Robert Miller
7bfe77a18f bug: fix punk_snipe alembic file 2021-11-26 15:22:45 -05:00
Robert Miller
947e5921c7 feat: add alembic for punk snipes 2021-11-26 15:10:37 -05:00
Robert Miller
8144d406b3 Merge pull request #138 from flashbots/cryptopunks-classifer 2021-11-26 12:00:35 -05:00
Luke Van Seters
2dc2c89b0b Merge pull request #146 from flashbots/block-timestamp-timestamp
Convert block_timestmap from numeric to timestamp
2021-11-26 11:18:30 -05:00
Luke Van Seters
051ef74eb7 Convert block_timestmap from numeric to timestamp 2021-11-26 11:02:02 -05:00
Robert Miller
2cc7ac4a20 feat: initial files for punk database 2021-11-25 21:05:42 -05:00
Robert Miller
b4097baa68 feat: remove unused punk_snipe import 2021-11-25 19:35:22 -05:00
Robert Miller
7638c97e88 =feat: change punk snipe to only check against the highest bid per punk 2021-11-25 19:32:30 -05:00
Robert Miller
bb3ace07a1 =move punk classifiers out of classifer.py 2021-11-25 16:48:48 -05:00
Robert Miller
976ac9ea77 style: change punk_bid.amount to price 2021-11-25 16:04:52 -05:00
Robert Miller
3314056c88 revert change to mev 2021-11-25 12:23:46 -05:00
Gui Heise
44e357344e Remove test assertion 2021-11-24 13:54:39 -05:00
Gui Heise
9f860c118e Remove validation step 2021-11-24 12:23:32 -05:00
Gui Heise
8a555ea442 Move helpers into 0x file 2021-11-24 12:14:40 -05:00
Gui Heise
7656c0d76c Remove children swaps 2021-11-23 14:34:26 -05:00
Gui Heise
c334441e95 Add assertion and move constants up 2021-11-23 11:28:15 -05:00
Gui Heise
d7872db45c Restructure classifier 2021-11-23 11:15:03 -05:00
Gui Heise
d75e9b76ab Add constants and exceptions 2021-11-23 10:38:02 -05:00
Gui Heise
4c643a2d9f Add tests for 0x swaps 2021-11-23 09:32:18 -05:00
Gui Heise
2d62ca25d6 Add function signatures 2021-11-22 19:06:58 -05:00
Gui Heise
e29c4fad72 Add support for any taker 2021-11-22 15:09:20 -05:00
Gui Heise
2f1a9bc751 Add helper for token_in_amount 2021-11-22 12:35:23 -05:00
Gui Heise
f650d3e87f Make protocol zero_ex 2021-11-22 12:23:14 -05:00
Gui Heise
32aa3246bf Remove debugger 2021-11-22 12:23:14 -05:00
Gui Heise
dbe40249b5 Add Rfq/Limit distinction 2021-11-22 12:23:14 -05:00
Gui Heise
cf71272c10 Add 0x swap classifier 2021-11-22 12:23:14 -05:00
Gui Heise
8428dd9908 Merge pull request #141 from flashbots/classifier-helpers
Add classifier helpers
2021-11-22 12:22:38 -05:00
Gui Heise
89c2ed3a84 Remove func 2021-11-22 12:16:39 -05:00
Gui Heise
784922fa07 Rename to helpers, add func 2021-11-22 12:07:30 -05:00
Gui Heise
9bf7a2675c Merge pull request #140 from flashbots/swapmodel
Add contract_address to SwapModel
2021-11-22 11:28:26 -05:00
Gui Heise
dc02564862 Add contract_address 2021-11-22 10:55:00 -05:00
Gui Heise
4f2c65e535 Merge pull request #137 from flashbots/swap-contract-address
Swap contract address
2021-11-21 22:14:17 -05:00
Gui Heise
94269cad33 Merge pull request #139 from flashbots/mev-bash
Change shell directory
2021-11-20 10:24:23 -05:00
Gui Heise
d2e1c588c4 Change shell directory 2021-11-19 19:21:46 -05:00
Robert Miller
377137d9c8 feat: add support for punk snipes 2021-11-19 17:18:29 -06:00
Robert Miller
f31430da30 bug: update uint to uin256 2021-11-19 17:17:34 -06:00
Gui Heise
12a82e918b Add contract_address in arbs 2021-11-19 11:03:06 -05:00
Gui Heise
45c9980a79 Add contract_address to tests 2021-11-19 11:00:14 -05:00
Gui Heise
8c699ed7cc Alter schema 2021-11-19 10:59:08 -05:00
Gui Heise
a9859a0b12 Add database migration 2021-11-19 10:58:35 -05:00
Gui Heise
07e1680301 Merge pull request #130 from flashbots/swaps-classifiers
Implement swap classifiers
2021-11-19 09:58:39 -05:00
Luke Van Seters
bf4570c8a3 Merge pull request #136 from flashbots/transfers-trace-address-array
Change transfers trace_address to ARRAY
2021-11-19 08:43:59 -05:00
Luke Van Seters
5f9bd3a274 Change transfers trace address to ARRAY 2021-11-19 08:42:08 -05:00
Luke Van Seters
ec860c7357 Merge pull request #135 from flashbots/prices-table-2
Add prices table
2021-11-18 17:33:27 -05:00
Luke Van Seters
f5233a17fd Rename to prices table 2021-11-18 13:56:07 -05:00
Luke Van Seters
7d50d3d674 Rename to prices table 2021-11-18 13:55:38 -05:00
Luke Van Seters
023205c25b Print => logger 2021-11-18 13:47:59 -05:00
Luke Van Seters
d499983f32 Remove fetch-latest for now 2021-11-18 13:45:25 -05:00
Luke Van Seters
5b59427d4f Write prices. Ignore duplicates 2021-11-18 13:43:21 -05:00
Gui Heise
386eccaeb7 Remove abstract method 2021-11-18 12:58:45 -05:00
Gui Heise
ca0014533a Add getter method for Uni recipient address 2021-11-18 12:52:48 -05:00
Gui Heise
c5621e0676 space 2021-11-18 12:23:09 -05:00
Gui Heise
1e1241cbf5 Remove Uni none checks and bash change 2021-11-18 12:22:13 -05:00
Luke Van Seters
bed8520bc8 Write prices on fetch-all 2021-11-18 11:55:42 -05:00
Luke Van Seters
5a3dbca425 Create usd_prices table 2021-11-18 11:55:03 -05:00
Luke Van Seters
2dc14218bf Add support for fetching all supported prices 2021-11-18 11:43:59 -05:00
Luke Van Seters
053c29cf20 Add placeholder for price commands 2021-11-18 11:43:59 -05:00
Gui Heise
6e25031623 Rename utils.py to swaps.py 2021-11-18 11:38:09 -05:00
Luke Van Seters
5756cb15a5 Merge pull request #128 from elopio/typo/clasifier
Fix typo: clasifier
2021-11-18 11:34:25 -05:00
Luke Van Seters
36101c36db Merge pull request #132 from flashbots/timestamp-support
Add support for writing timestamps in mev-inspect
2021-11-18 10:39:15 -05:00
Luke Van Seters
d7238c0e83 Merge pull request #131 from flashbots/add-block-timestamps-table
Add block timestamps table
2021-11-18 10:39:10 -05:00
Luke Van Seters
0d4cbc76b6 Merge pull request #129 from flashbots/fix-logging-base
Only set base logging from entry points
2021-11-18 10:39:05 -05:00
Robert Miller
1de1570939 feat: change to "punk bid acceptance" and get punk bid acceptances 2021-11-17 21:51:56 -05:00
Luke Van Seters
d2437055d9 Fix tests 2021-11-17 15:19:48 -05:00
Luke Van Seters
5aa8776b0d Don't attempt to create block if timestamp is null 2021-11-17 15:14:24 -05:00
Luke Van Seters
a2dc8908df Save block during inspection 2021-11-17 15:11:26 -05:00
Luke Van Seters
ad45abbe9c Add crud for blocks 2021-11-17 15:07:04 -05:00
Luke Van Seters
460f449127 Add block timestamps table 2021-11-17 14:37:57 -05:00
Luke Van Seters
caf645e923 Fetch timestamp when creating blocks 2021-11-17 13:28:48 -05:00
Gui Heise
ff9337eb4b Fix UniV3 Classifier 2021-11-17 10:19:10 -05:00
Gui Heise
94c5691f01 Move swap logic into classifiers 2021-11-17 07:37:25 -05:00
Robert Miller
96d2171daa style: improve schema naming bcuz imagine complained 2021-11-16 19:57:58 -05:00
Robert Miller
0d6215f82e wip feat: getting punk bids / accepts 2021-11-15 21:08:28 -05:00
Robert Miller
5766abb9fe feat: add punk classifiers 2021-11-15 21:08:07 -05:00
Robert Miller
c5ab2be4e3 add punk classifications 2021-11-15 21:07:38 -05:00
Luke Van Seters
f705a85b5c Only set base logging from entrypoints 2021-11-15 16:00:18 -05:00
Gui Heise
f43df8ffa4 Fix circular imports 2021-11-15 13:28:34 -05:00
Gui Heise
29cd82cd0b Parse swap logic inside uniswap classifier 2021-11-15 11:00:39 -05:00
Luke Van Seters
dec628b7a9 Merge pull request #124 from flashbots/listener-healthcheck
Ping healthcheck URL on each inspect in listener
2021-11-12 19:02:42 -05:00
Luke Van Seters
ec49c03484 Merge pull request #123 from flashbots/listener-async
Support asyncio in listener
2021-11-12 19:02:33 -05:00
Luke Van Seters
d34356bffb Merge pull request #118 from flashbots/classified-traces-block-index
Change classified_traces and miner_payments primary keys to begin with block number
2021-11-12 19:02:24 -05:00
Luke Van Seters
e144e377fd Merge pull request #117 from flashbots/swap-block-index
Reindex swaps by block number
2021-11-12 14:58:43 -05:00
Leo Arias
cfeaaae046 Fix typo: clasifier 2021-11-11 17:55:12 +00:00
Gui Heise
5d03c1fbfa Add classifier specs to init 2021-11-11 10:39:24 -05:00
Robert Miller
af2aab4940 add cryptopunks trace classifier 2021-11-10 20:14:42 -05:00
Luke Van Seters
63e81b22e6 Ping healthcheck url on each successful block inspect 2021-11-09 18:21:51 -05:00
Luke Van Seters
7b60488f76 Support async for listener 2021-11-09 11:51:43 -05:00
Luke Van Seters
e0d6919039 Pass DB session into the inspector 2021-11-09 10:49:08 -05:00
Luke Van Seters
45a536cd15 Change miner payments and transfers tables to begin with block number 2021-11-03 12:47:56 -04:00
Luke Van Seters
674565f789 Change classified traces primary key to include block number 2021-11-02 18:40:43 -04:00
Luke Van Seters
c38d77504e Reindex swaps by block number 2021-11-02 17:29:45 -04:00
153 changed files with 4052 additions and 939 deletions

View File

@@ -2,8 +2,15 @@ repos:
 - repo: https://github.com/ambv/black
   rev: 20.8b1
   hooks:
-  - id: black
-    language_version: python3.9
+  - id: black
+    language_version: python3.9
+- repo: local
+  hooks:
+  - id: isort
+    name: isort
+    entry: poetry run isort .
+    language: system
+    types: [python]
 - repo: local
   hooks:
   - id: pylint

View File

@@ -433,7 +433,7 @@ int-import-graph=
 known-standard-library=
 
 # Force import order to recognize a module as part of a third party library.
-known-third-party=enchant
+known-third-party=alembic
 
 # Couples of modules and preferred modules, separated by a comma.
 preferred-modules=

View File

@@ -1,21 +1,29 @@
-FROM python:3.9
+FROM python:3.9-slim-buster
 
-RUN pip install -U pip \
+ENV POETRY_VERSION=1.1.12
+
+RUN useradd --create-home flashbot \
   && apt-get update \
-  && curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python -
+  && apt-get install -y --no-install-recommends build-essential libffi-dev libpq-dev gcc procps \
+  && pip install poetry==$POETRY_VERSION \
+  && apt-get clean \
+  && rm -rf /var/lib/apt/lists/*
 
-ENV PATH="${PATH}:/root/.poetry/bin"
+ENV PATH="${PATH}:/home/flashbot/.local/bin"
 
-COPY ./pyproject.toml /app/pyproject.toml
-COPY ./poetry.lock /app/poetry.lock
+COPY --chown=flashbot ./pyproject.toml /app/pyproject.toml
+COPY --chown=flashbot ./poetry.lock /app/poetry.lock
 
 WORKDIR /app/
 
-RUN poetry config virtualenvs.create false && \
-    poetry install
+USER flashbot
 
-COPY . /app
+RUN poetry config virtualenvs.create false \
+  && poetry install
+
+COPY --chown=flashbot . /app
+
+# easter eggs 😝
+RUN echo "PS1='🕵️:\[\033[1;36m\]\h \[\033[1;34m\]\W\[\033[0;35m\]\[\033[1;36m\]$ \[\033[0m\]'" >> ~/.bashrc
 
-ENTRYPOINT [ "/app/entrypoint.sh"]
+ENTRYPOINT [ "poetry" ]
+CMD [ "run", "python", "loop.py" ]

View File

@@ -103,11 +103,24 @@ And stop the listener with:
 ### Backfilling
 
-For larger backfills, you can inspect many blocks in parallel using kubernetes
+For larger backfills, you can inspect many blocks in parallel
 
-To inspect blocks 12914944 to 12915044 divided across 10 worker pods:
+To inspect blocks 12914944 to 12915044, run
 ```
-./mev backfill 12914944 12915044 10
+./mev backfill 12914944 12915044
 ```
 
+This queues the blocks in Redis to be pulled off by the mev-inspect-worker service
+
+To increase or decrease parallelism, update the replicaCount value for the mev-inspect-workers helm chart
+
+Locally, this can be done by editing Tiltfile and changing "replicaCount=1" to your desired parallelism:
+
+```
+k8s_yaml(helm(
+    './k8s/mev-inspect-workers',
+    name='mev-inspect-workers',
+    set=["replicaCount=1"],
+))
+```
+
 You can see worker pods spin up then complete by watching the status of all pods
 
@@ -115,12 +128,35 @@ You can see worker pods spin up then complete by watching the status of all pods
 ```
 watch kubectl get pods
 ```
 
-To watch the logs for a given pod, take its pod name using the above, then run:
+To see progress and failed batches, connect to Redis with
 
 ```
-kubectl logs -f pod/mev-inspect-backfill-abcdefg
+./mev redis
 ```
 
-(where `mev-inspect-backfill-abcdefg` is your actual pod name)
+For total messages, query:
+
+```
+HLEN dramatiq:default.msgs
+```
+
+For messages failed and waiting to retry in the delay queue (DQ), query:
+
+```
+HGETALL dramatiq:default.DQ.msgs
+```
+
+For messages permanently failed in the dead letter queue (XQ), query:
+
+```
+HGETALL dramatiq:default.XQ.msgs
+```
+
+For more information on queues, see the [spec shared by dramatiq](https://github.com/Bogdanp/dramatiq/blob/24cbc0dc551797783f41b08ea461e1b5d23a4058/dramatiq/brokers/redis/dispatch.lua#L24-L43)
+
+To watch the logs for a given worker pod, take its pod name using the above, then run:
+
+```
+kubectl logs -f pod/mev-inspect-worker-abcdefg
+```
+
+(where `mev-inspect-worker-abcdefg` is your actual pod name)
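
The three queue checks above can also be scripted. A minimal sketch in Python, assuming the redis-py package, the default local password from the Tiltfile, and a Redis reachable on localhost (e.g. via a port-forward); host and credentials will differ in a real cluster:

```python
# Sketch: report dramatiq backfill queue depths from Python.
# The key names mirror the HLEN/HGETALL queries documented above.
import redis

client = redis.Redis(host="localhost", port=6379, password="password")

pending = client.hlen("dramatiq:default.msgs")      # total enqueued messages
retrying = client.hlen("dramatiq:default.DQ.msgs")  # delay queue: failed, will retry
dead = client.hlen("dramatiq:default.XQ.msgs")      # dead letter queue: permanently failed

print(f"pending={pending} retrying={retrying} dead={dead}")
```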
### Exploring

View File

@@ -1,5 +1,4 @@
 load("ext://helm_remote", "helm_remote")
-load("ext://restart_process", "docker_build_with_restart")
 load("ext://secret", "secret_from_dict")
 load("ext://configmap", "configmap_from_dict")
 
@@ -9,10 +8,20 @@ helm_remote("postgresql",
     set=["postgresqlPassword=password", "postgresqlDatabase=mev_inspect"],
 )
 
+helm_remote("redis",
+    repo_name="bitnami",
+    repo_url="https://charts.bitnami.com/bitnami",
+    set=["global.redis.password=password"],
+)
+
 k8s_yaml(configmap_from_dict("mev-inspect-rpc", inputs = {
     "url" : os.environ["RPC_URL"],
 }))
 
+k8s_yaml(configmap_from_dict("mev-inspect-listener-healthcheck", inputs = {
+    "url" : os.getenv("LISTENER_HEALTHCHECK_URL", default=""),
+}))
+
 k8s_yaml(secret_from_dict("mev-inspect-db-credentials", inputs = {
     "username" : "postgres",
     "password": "password",
 
@@ -26,8 +35,7 @@ k8s_yaml(secret_from_dict("mev-inspect-db-credentials", inputs = {
 #     "host": "trace-db-postgresql",
 # }))
 
-docker_build_with_restart("mev-inspect-py", ".",
-    entrypoint="/app/entrypoint.sh",
+docker_build("mev-inspect-py", ".",
     live_update=[
         sync(".", "/app"),
         run("cd /app && poetry install",
 
@@ -35,7 +43,24 @@ docker_build_with_restart("mev-inspect-py", ".",
     ],
 )
 
 k8s_yaml(helm('./k8s/mev-inspect', name='mev-inspect'))
-k8s_resource(workload="mev-inspect", resource_deps=["postgresql-postgresql"])
+k8s_resource(
+    workload="mev-inspect",
+    resource_deps=["postgresql-postgresql", "redis-master"],
+)
+
+k8s_yaml(helm(
+    './k8s/mev-inspect-workers',
+    name='mev-inspect-workers',
+    set=["replicaCount=1"],
+))
+
+k8s_resource(
+    workload="mev-inspect-workers",
+    resource_deps=["postgresql-postgresql", "redis-master"],
+)
+
+# uncomment to enable price monitor
+# k8s_yaml(helm('./k8s/mev-inspect-prices', name='mev-inspect-prices'))
+# k8s_resource(workload="mev-inspect-prices", resource_deps=["postgresql-postgresql"])
 
 local_resource(
     'pg-port-forward',

View File

@@ -1,9 +1,7 @@
 from logging.config import fileConfig
 
-from sqlalchemy import engine_from_config
-from sqlalchemy import pool
 from alembic import context
+from sqlalchemy import engine_from_config, pool
 
 from mev_inspect.db import get_inspect_database_uri

View File

@@ -0,0 +1,54 @@
"""Change miner payments and transfers primary keys to include block number
Revision ID: 04a3bb3740c3
Revises: a10d68643476
Create Date: 2021-11-02 22:42:01.702538
"""
from alembic import op
# revision identifiers, used by Alembic.
revision = "04a3bb3740c3"
down_revision = "a10d68643476"
branch_labels = None
depends_on = None
def upgrade():
# transfers
op.execute("ALTER TABLE transfers DROP CONSTRAINT transfers_pkey")
op.create_primary_key(
"transfers_pkey",
"transfers",
["block_number", "transaction_hash", "trace_address"],
)
op.drop_index("ix_transfers_block_number")
# miner_payments
op.execute("ALTER TABLE miner_payments DROP CONSTRAINT miner_payments_pkey")
op.create_primary_key(
"miner_payments_pkey",
"miner_payments",
["block_number", "transaction_hash"],
)
op.drop_index("ix_block_number")
def downgrade():
# transfers
op.execute("ALTER TABLE transfers DROP CONSTRAINT transfers_pkey")
op.create_index("ix_transfers_block_number", "transfers", ["block_number"])
op.create_primary_key(
"transfers_pkey",
"transfers",
["transaction_hash", "trace_address"],
)
# miner_payments
op.execute("ALTER TABLE miner_payments DROP CONSTRAINT miner_payments_pkey")
op.create_index("ix_block_number", "miner_payments", ["block_number"])
op.create_primary_key(
"miner_payments_pkey",
"miner_payments",
["transaction_hash"],
)

View File

@@ -0,0 +1,35 @@
"""Change blocks.timestamp to timestamp
Revision ID: 04b76ab1d2af
Revises: 2c90b2b8a80b
Create Date: 2021-11-26 15:31:21.111693
"""
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "04b76ab1d2af"
down_revision = "0cef835f7b36"
branch_labels = None
depends_on = None
def upgrade():
op.alter_column(
"blocks",
"block_timestamp",
type_=sa.TIMESTAMP,
nullable=False,
postgresql_using="TO_TIMESTAMP(block_timestamp)",
)
def downgrade():
op.alter_column(
"blocks",
"block_timestamp",
type_=sa.Numeric,
nullable=False,
postgresql_using="extract(epoch FROM block_timestamp)",
)

View File

@@ -0,0 +1,34 @@
"""empty message
Revision ID: 070819d86587
Revises: d498bdb0a641
Create Date: 2021-11-26 18:25:13.402822
"""
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "d498bdb0a641"
down_revision = "b9fa1ecc9929"
branch_labels = None
depends_on = None
def upgrade():
op.create_table(
"punk_snipes",
sa.Column("created_at", sa.TIMESTAMP, server_default=sa.func.now()),
sa.Column("block_number", sa.Numeric, nullable=False),
sa.Column("transaction_hash", sa.String(66), nullable=False),
sa.Column("trace_address", sa.String(256), nullable=False),
sa.Column("from_address", sa.String(256), nullable=False),
sa.Column("punk_index", sa.Numeric, nullable=False),
sa.Column("min_acceptance_price", sa.Numeric, nullable=False),
sa.Column("acceptance_price", sa.Numeric, nullable=False),
sa.PrimaryKeyConstraint("block_number", "transaction_hash", "trace_address"),
)
def downgrade():
op.drop_table("punk_snipes")

View File

@@ -8,7 +8,6 @@ Create Date: 2021-08-30 17:42:25.548130
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "083978d6e455"
down_revision = "92f28a2b4f52"

View File

@@ -0,0 +1,26 @@
"""Rename pool_address to contract_address
Revision ID: 0cef835f7b36
Revises: 5427d62a2cc0
Create Date: 2021-11-19 15:36:15.152622
"""
from alembic import op
# revision identifiers, used by Alembic.
revision = "0cef835f7b36"
down_revision = "5427d62a2cc0"
branch_labels = None
depends_on = None
def upgrade():
op.alter_column(
"swaps", "pool_address", nullable=False, new_column_name="contract_address"
)
def downgrade():
op.alter_column(
"swaps", "contract_address", nullable=False, new_column_name="pool_address"
)

View File

@@ -0,0 +1,28 @@
"""Add nullable transaction_position field to swaps and traces
Revision ID: 15ba9c27ee8a
Revises: 04b76ab1d2af
Create Date: 2021-12-02 18:24:18.218880
"""
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "15ba9c27ee8a"
down_revision = "ead7eb8283b9"
branch_labels = None
depends_on = None
def upgrade():
op.add_column(
"classified_traces",
sa.Column("transaction_position", sa.Numeric, nullable=True),
)
op.add_column("swaps", sa.Column("transaction_position", sa.Numeric, nullable=True))
def downgrade():
op.drop_column("classified_traces", "transaction_position")
op.drop_column("swaps", "transaction_position")

View File

@@ -8,7 +8,6 @@ Create Date: 2021-10-04 19:52:40.017084
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "205ce02374b3"
down_revision = "c8363617aa07"

View File

@@ -0,0 +1,28 @@
"""Add blocks table
Revision ID: 2c90b2b8a80b
Revises: 04a3bb3740c3
Create Date: 2021-11-17 18:29:13.065944
"""
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "2c90b2b8a80b"
down_revision = "04a3bb3740c3"
branch_labels = None
depends_on = None
def upgrade():
op.create_table(
"blocks",
sa.Column("block_number", sa.Numeric, nullable=False),
sa.Column("block_timestamp", sa.Numeric, nullable=False),
sa.PrimaryKeyConstraint("block_number"),
)
def downgrade():
op.drop_table("blocks")

View File

@@ -7,7 +7,6 @@ Create Date: 2021-09-14 11:11:41.559137
"""
from alembic import op
# revision identifiers, used by Alembic.
revision = "320e56b0a99f"
down_revision = "a02f3f2c469f"

View File

@@ -0,0 +1,45 @@
"""Cahnge swap primary key to include block number
Revision ID: 3417f49d97b3
Revises: 205ce02374b3
Create Date: 2021-11-02 20:50:32.854996
"""
from alembic import op
# revision identifiers, used by Alembic.
revision = "3417f49d97b3"
down_revision = "205ce02374b3"
branch_labels = None
depends_on = None
def upgrade():
op.execute("ALTER TABLE swaps DROP CONSTRAINT swaps_pkey CASCADE")
op.create_primary_key(
"swaps_pkey",
"swaps",
["block_number", "transaction_hash", "trace_address"],
)
op.create_index(
"arbitrage_swaps_swaps_idx",
"arbitrage_swaps",
["swap_transaction_hash", "swap_trace_address"],
)
def downgrade():
op.drop_index("arbitrage_swaps_swaps_idx")
op.execute("ALTER TABLE swaps DROP CONSTRAINT swaps_pkey CASCADE")
op.create_primary_key(
"swaps_pkey",
"swaps",
["transaction_hash", "trace_address"],
)
op.create_foreign_key(
"arbitrage_swaps_swaps_fkey",
"arbitrage_swaps",
"swaps",
["swap_transaction_hash", "swap_trace_address"],
["transaction_hash", "trace_address"],
)

View File

@@ -0,0 +1,40 @@
"""Create NFT Trades table
Revision ID: 3c54832385e3
Revises: 4b9d289f2d74
Create Date: 2021-12-19 22:50:28.936516
"""
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "3c54832385e3"
down_revision = "4b9d289f2d74"
branch_labels = None
depends_on = None
def upgrade():
op.create_table(
"nft_trades",
sa.Column("created_at", sa.TIMESTAMP, server_default=sa.func.now()),
sa.Column("abi_name", sa.String(1024), nullable=False),
sa.Column("transaction_hash", sa.String(66), nullable=False),
sa.Column("transaction_position", sa.Numeric, nullable=False),
sa.Column("block_number", sa.Numeric, nullable=False),
sa.Column("trace_address", sa.String(256), nullable=False),
sa.Column("protocol", sa.String(256), nullable=False),
sa.Column("error", sa.String(256), nullable=True),
sa.Column("seller_address", sa.String(256), nullable=False),
sa.Column("buyer_address", sa.String(256), nullable=False),
sa.Column("payment_token_address", sa.String(256), nullable=False),
sa.Column("payment_amount", sa.Numeric, nullable=False),
sa.Column("collection_address", sa.String(256), nullable=False),
sa.Column("token_id", sa.Numeric, nullable=False),
sa.PrimaryKeyConstraint("transaction_hash", "trace_address"),
)
def downgrade():
op.drop_table("nft_trades")

View File

@@ -0,0 +1,23 @@
"""Add error column to liquidations
Revision ID: 4b9d289f2d74
Revises: 99d376cb93cc
Create Date: 2021-12-23 14:54:28.406159
"""
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "4b9d289f2d74"
down_revision = "99d376cb93cc"
branch_labels = None
depends_on = None
def upgrade():
op.add_column("liquidations", sa.Column("error", sa.String(256), nullable=True))
def downgrade():
op.drop_column("liquidations", "error")

View File

@@ -0,0 +1,33 @@
"""empty message
Revision ID: 52d75a7e0533
Revises: 7cf0eeb41da0
Create Date: 2021-11-26 20:35:58.954138
"""
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "52d75a7e0533"
down_revision = "7cf0eeb41da0"
branch_labels = None
depends_on = None
def upgrade():
op.create_table(
"punk_bid_acceptances",
sa.Column("created_at", sa.TIMESTAMP, server_default=sa.func.now()),
sa.Column("block_number", sa.Numeric, nullable=False),
sa.Column("transaction_hash", sa.String(66), nullable=False),
sa.Column("trace_address", sa.String(256), nullable=False),
sa.Column("from_address", sa.String(256), nullable=False),
sa.Column("punk_index", sa.Numeric, nullable=False),
sa.Column("min_price", sa.Numeric, nullable=False),
sa.PrimaryKeyConstraint("block_number", "transaction_hash", "trace_address"),
)
def downgrade():
op.drop_table("punk_bid_acceptances")

View File

@@ -0,0 +1,46 @@
"""Change transfers trace address to ARRAY
Revision ID: 5427d62a2cc0
Revises: d540242ae368
Create Date: 2021-11-19 13:25:11.252774
"""
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "5427d62a2cc0"
down_revision = "d540242ae368"
branch_labels = None
depends_on = None
def upgrade():
op.drop_constraint("transfers_pkey", "transfers")
op.alter_column(
"transfers",
"trace_address",
type_=sa.ARRAY(sa.Integer),
nullable=False,
postgresql_using="trace_address::int[]",
)
op.create_primary_key(
"transfers_pkey",
"transfers",
["block_number", "transaction_hash", "trace_address"],
)
def downgrade():
op.drop_constraint("transfers_pkey", "transfers")
op.alter_column(
"transfers",
"trace_address",
type_=sa.String(256),
nullable=False,
)
op.create_primary_key(
"transfers_pkey",
"transfers",
["block_number", "transaction_hash", "trace_address"],
)

View File

@@ -0,0 +1,33 @@
"""empty message
Revision ID: 7cf0eeb41da0
Revises: d498bdb0a641
Create Date: 2021-11-26 20:27:28.936516
"""
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "7cf0eeb41da0"
down_revision = "d498bdb0a641"
branch_labels = None
depends_on = None
def upgrade():
op.create_table(
"punk_bids",
sa.Column("created_at", sa.TIMESTAMP, server_default=sa.func.now()),
sa.Column("block_number", sa.Numeric, nullable=False),
sa.Column("transaction_hash", sa.String(66), nullable=False),
sa.Column("trace_address", sa.String(256), nullable=False),
sa.Column("from_address", sa.String(256), nullable=False),
sa.Column("punk_index", sa.Numeric, nullable=False),
sa.Column("price", sa.Numeric, nullable=False),
sa.PrimaryKeyConstraint("block_number", "transaction_hash", "trace_address"),
)
def downgrade():
op.drop_table("punk_bids")

View File

@@ -8,7 +8,6 @@ Create Date: 2021-08-06 15:58:04.556762
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "7eec417a4f3e"
down_revision = "9d8c69b3dccb"

View File

@@ -8,7 +8,6 @@ Create Date: 2021-08-17 03:46:21.498821
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "92f28a2b4f52"
down_revision = "9b8ae51c5d56"

View File

@@ -0,0 +1,23 @@
"""error column
Revision ID: 99d376cb93cc
Revises: c4a7620a2d33
Create Date: 2021-12-21 21:26:12.142484
"""
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "99d376cb93cc"
down_revision = "c4a7620a2d33"
branch_labels = None
depends_on = None
def upgrade():
op.add_column("arbitrages", sa.Column("error", sa.String(256), nullable=True))
def downgrade():
op.drop_column("arbitrages", "error")

View File

@@ -8,7 +8,6 @@ Create Date: 2021-08-06 17:06:55.364516
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "9b8ae51c5d56"
down_revision = "7eec417a4f3e"

View File

@@ -8,7 +8,6 @@ Create Date: 2021-08-05 21:46:35.209199
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "9d8c69b3dccb"
down_revision = "2116e2f36a19"

View File

@@ -8,7 +8,6 @@ Create Date: 2021-09-13 21:32:27.181344
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "a02f3f2c469f"
down_revision = "d70c08b4db6f"

View File

@@ -0,0 +1,34 @@
"""Change classified traces primary key to include block number
Revision ID: a10d68643476
Revises: 3417f49d97b3
Create Date: 2021-11-02 22:03:26.312317
"""
from alembic import op
# revision identifiers, used by Alembic.
revision = "a10d68643476"
down_revision = "3417f49d97b3"
branch_labels = None
depends_on = None
def upgrade():
op.execute("ALTER TABLE classified_traces DROP CONSTRAINT classified_traces_pkey")
op.create_primary_key(
"classified_traces_pkey",
"classified_traces",
["block_number", "transaction_hash", "trace_address"],
)
op.drop_index("i_block_number")
def downgrade():
op.execute("ALTER TABLE classified_traces DROP CONSTRAINT classified_traces_pkey")
op.create_index("i_block_number", "classified_traces", ["block_number"])
op.create_primary_key(
"classified_traces_pkey",
"classified_traces",
["transaction_hash", "trace_address"],
)

View File

@@ -0,0 +1,26 @@
"""Remove collateral_token_address column
Revision ID: b9fa1ecc9929
Revises: 04b76ab1d2af
Create Date: 2021-12-01 23:32:40.574108
"""
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "b9fa1ecc9929"
down_revision = "04b76ab1d2af"
branch_labels = None
depends_on = None
def upgrade():
op.drop_column("liquidations", "collateral_token_address")
def downgrade():
op.add_column(
"liquidations",
sa.Column("collateral_token_address", sa.String(256), nullable=False),
)

View File

@@ -0,0 +1,28 @@
"""Create tokens table
Revision ID: c4a7620a2d33
Revises: 15ba9c27ee8a
Create Date: 2021-12-21 19:12:33.940117
"""
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "c4a7620a2d33"
down_revision = "15ba9c27ee8a"
branch_labels = None
depends_on = None
def upgrade():
op.create_table(
"tokens",
sa.Column("token_address", sa.String(256), nullable=False),
sa.Column("decimals", sa.Numeric, nullable=False),
sa.PrimaryKeyConstraint("token_address"),
)
def downgrade():
op.drop_table("tokens")

View File

@@ -7,7 +7,6 @@ Create Date: 2021-07-30 17:37:27.335475
"""
from alembic import op
# revision identifiers, used by Alembic.
revision = "c5da44eb072c"
down_revision = "0660432b9840"

View File

@@ -8,7 +8,6 @@ Create Date: 2021-09-29 14:00:06.857103
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "c8363617aa07"
down_revision = "cd96af55108e"

View File

@@ -8,7 +8,6 @@ Create Date: 2021-09-17 12:44:45.245137
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "cd96af55108e"
down_revision = "320e56b0a99f"

View File

@@ -0,0 +1,29 @@
"""Create usd_prices table
Revision ID: d540242ae368
Revises: 2c90b2b8a80b
Create Date: 2021-11-18 04:30:06.802857
"""
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "d540242ae368"
down_revision = "2c90b2b8a80b"
branch_labels = None
depends_on = None
def upgrade():
op.create_table(
"prices",
sa.Column("timestamp", sa.TIMESTAMP),
sa.Column("usd_price", sa.Numeric, nullable=False),
sa.Column("token_address", sa.String(256), nullable=False),
sa.PrimaryKeyConstraint("token_address", "timestamp"),
)
def downgrade():
op.drop_table("prices")

View File

@@ -8,7 +8,6 @@ Create Date: 2021-08-30 22:10:04.186251
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "d70c08b4db6f"
down_revision = "083978d6e455"

View File

@@ -0,0 +1,69 @@
"""Create sandwiches and sandwiched swaps tables
Revision ID: ead7eb8283b9
Revises: a5d80460f0e6
Create Date: 2021-12-03 16:37:28.077158
"""
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "ead7eb8283b9"
down_revision = "52d75a7e0533"
branch_labels = None
depends_on = None
def upgrade():
op.create_table(
"sandwiches",
sa.Column("id", sa.String(256), primary_key=True),
sa.Column("created_at", sa.TIMESTAMP, server_default=sa.func.now()),
sa.Column("block_number", sa.Numeric, nullable=False),
sa.Column("sandwicher_address", sa.String(256), nullable=False),
sa.Column("frontrun_swap_transaction_hash", sa.String(256), nullable=False),
sa.Column("frontrun_swap_trace_address", sa.ARRAY(sa.Integer), nullable=False),
sa.Column("backrun_swap_transaction_hash", sa.String(256), nullable=False),
sa.Column("backrun_swap_trace_address", sa.ARRAY(sa.Integer), nullable=False),
)
op.create_index(
"ik_sandwiches_frontrun",
"sandwiches",
[
"block_number",
"frontrun_swap_transaction_hash",
"frontrun_swap_trace_address",
],
)
op.create_index(
"ik_sandwiches_backrun",
"sandwiches",
["block_number", "backrun_swap_transaction_hash", "backrun_swap_trace_address"],
)
op.create_table(
"sandwiched_swaps",
sa.Column("created_at", sa.TIMESTAMP, server_default=sa.func.now()),
sa.Column("sandwich_id", sa.String(1024), primary_key=True),
sa.Column("block_number", sa.Numeric, primary_key=True),
sa.Column("transaction_hash", sa.String(66), primary_key=True),
sa.Column("trace_address", sa.ARRAY(sa.Integer), primary_key=True),
sa.ForeignKeyConstraint(["sandwich_id"], ["sandwiches.id"], ondelete="CASCADE"),
)
op.create_index(
"ik_sandwiched_swaps_secondary",
"sandwiched_swaps",
["block_number", "transaction_hash", "trace_address"],
)
def downgrade():
op.drop_index("ik_sandwiched_swaps_secondary")
op.drop_table("sandwiched_swaps")
op.drop_index("ik_sandwiches_frontrun")
op.drop_index("ik_sandwiches_backrun")
op.drop_table("sandwiches")

View File

@@ -1,57 +0,0 @@
import subprocess
import sys
from typing import Iterator, Tuple
def get_block_after_before_chunks(
after_block: int,
before_block: int,
n_workers: int,
) -> Iterator[Tuple[int, int]]:
n_blocks = before_block - after_block
remainder = n_blocks % n_workers
floor_chunk_size = n_blocks // n_workers
last_before_block = None
for worker_index in range(n_workers):
chunk_size = floor_chunk_size
if worker_index < remainder:
chunk_size += 1
batch_after_block = (
last_before_block if last_before_block is not None else after_block
)
batch_before_block = batch_after_block + chunk_size
yield batch_after_block, batch_before_block
last_before_block = batch_before_block
def backfill(after_block: int, before_block: int, n_workers: int):
if n_workers <= 0:
raise ValueError("Need at least one worker")
for batch_after_block, batch_before_block in get_block_after_before_chunks(
after_block,
before_block,
n_workers,
):
print(f"Backfilling {batch_after_block} to {batch_before_block}")
backfill_command = f"sh backfill.sh {batch_after_block} {batch_before_block}"
process = subprocess.Popen(backfill_command.split(), stdout=subprocess.PIPE)
output, _ = process.communicate()
print(output)
def main():
after_block = int(sys.argv[1])
before_block = int(sys.argv[2])
n_workers = int(sys.argv[3])
backfill(after_block, before_block, n_workers)
if __name__ == "__main__":
main()
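
For reference, the chunking this removed script performed is easy to reproduce in isolation. A condensed standalone restatement of the same logic (the `chunks` name is hypothetical, introduced here for illustration):

```python
from typing import Iterator, Tuple

def chunks(after_block: int, before_block: int, n_workers: int) -> Iterator[Tuple[int, int]]:
    # Same arithmetic as the removed get_block_after_before_chunks:
    # split the range evenly, giving the remainder to the earliest workers.
    n_blocks = before_block - after_block
    remainder = n_blocks % n_workers
    floor_chunk_size = n_blocks // n_workers
    start = after_block
    for worker_index in range(n_workers):
        end = start + floor_chunk_size + (1 if worker_index < remainder else 0)
        yield start, end
        start = end

# 100 blocks across 10 workers -> ten equal (after, before) ranges
print(list(chunks(12914944, 12915044, 10))[:2])
# [(12914944, 12914954), (12914954, 12914964)]
```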

View File

@@ -1,6 +0,0 @@
current_image=$(kubectl get deployment mev-inspect -o=jsonpath='{$.spec.template.spec.containers[:1].image}')
helm template mev-inspect-backfill ./k8s/mev-inspect-backfill \
--set image.repository=$current_image \
--set command.startBlockNumber=$1 \
--set command.endBlockNumber=$2 | kubectl apply -f -

cli.py
View File

@@ -1,46 +1,41 @@
import asyncio
import logging
import os
import signal
from functools import wraps
import sys
import click
from mev_inspect.concurrency import coro
from mev_inspect.crud.prices import write_prices
from mev_inspect.db import get_inspect_session, get_trace_session
from mev_inspect.inspector import MEVInspector
from mev_inspect.prices import fetch_prices
RPC_URL_ENV = "RPC_URL"
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logger = logging.getLogger(__name__)
@click.group()
def cli():
pass
def coro(f):
@wraps(f)
def wrapper(*args, **kwargs):
loop = asyncio.get_event_loop()
def cancel_task_callback():
for task in asyncio.all_tasks():
task.cancel()
for sig in (signal.SIGINT, signal.SIGTERM):
loop.add_signal_handler(sig, cancel_task_callback)
try:
loop.run_until_complete(f(*args, **kwargs))
finally:
loop.run_until_complete(loop.shutdown_asyncgens())
return wrapper
@cli.command()
@click.argument("block_number", type=int)
@click.option("--rpc", default=lambda: os.environ.get(RPC_URL_ENV, ""))
@coro
async def inspect_block_command(block_number: int, rpc: str):
inspector = MEVInspector(rpc=rpc)
await inspector.inspect_single_block(block=block_number)
inspect_db_session = get_inspect_session()
trace_db_session = get_trace_session()
inspector = MEVInspector(rpc)
await inspector.inspect_single_block(
inspect_db_session=inspect_db_session,
trace_db_session=trace_db_session,
block=block_number,
)
@cli.command()
@@ -48,8 +43,14 @@ async def inspect_block_command(block_number: int, rpc: str):
@click.option("--rpc", default=lambda: os.environ.get(RPC_URL_ENV, ""))
@coro
async def fetch_block_command(block_number: int, rpc: str):
inspector = MEVInspector(rpc=rpc)
block = await inspector.create_from_block(block_number=block_number)
trace_db_session = get_trace_session()
inspector = MEVInspector(rpc)
block = await inspector.create_from_block(
block_number=block_number,
trace_db_session=trace_db_session,
)
print(block.json())
@@ -74,16 +75,48 @@ async def inspect_many_blocks_command(
max_concurrency: int,
request_timeout: int,
):
inspect_db_session = get_inspect_session()
trace_db_session = get_trace_session()
inspector = MEVInspector(
rpc=rpc,
rpc,
max_concurrency=max_concurrency,
request_timeout=request_timeout,
)
await inspector.inspect_many_blocks(
after_block=after_block, before_block=before_block
inspect_db_session=inspect_db_session,
trace_db_session=trace_db_session,
after_block=after_block,
before_block=before_block,
)
@cli.command()
@click.argument("after_block", type=int)
@click.argument("before_block", type=int)
@click.argument("batch_size", type=int, default=10)
def enqueue_many_blocks_command(after_block: int, before_block: int, batch_size: int):
from worker import ( # pylint: disable=import-outside-toplevel
inspect_many_blocks_task,
)
for batch_after_block in range(after_block, before_block, batch_size):
batch_before_block = min(batch_after_block + batch_size, before_block)
logger.info(f"Sending {batch_after_block} to {batch_before_block}")
inspect_many_blocks_task.send(batch_after_block, batch_before_block)
@cli.command()
def fetch_all_prices():
inspect_db_session = get_inspect_session()
logger.info("Fetching prices")
prices = fetch_prices()
logger.info("Writing prices")
write_prices(inspect_db_session, prices)
def get_rpc_url() -> str:
return os.environ["RPC_URL"]
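enqueue-many-blocks walks the half-open block range in fixed-size batches, clamping the final batch at before_block. A standalone check of the batching arithmetic (hypothetical numbers):

# Blocks 100 (inclusive) to 125 (exclusive), batch size 10.
batches = [
    (batch_after, min(batch_after + 10, 125))
    for batch_after in range(100, 125, 10)
]
assert batches == [(100, 110), (110, 120), (120, 125)]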


@@ -1,3 +0,0 @@
#!/bin/bash
python loop.py


@@ -1,5 +1,5 @@
apiVersion: v2
name: mev-inspect-backfill
name: mev-inspect-prices
description: A Helm chart for Kubernetes
# A chart can be either an 'application' or a 'library' chart.


@@ -1,7 +1,7 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "mev-inspect-backfill.name" -}}
{{- define "mev-inspect-prices.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
@@ -10,7 +10,7 @@ Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "mev-inspect-backfill.fullname" -}}
{{- define "mev-inspect-prices.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
@@ -26,16 +26,16 @@ If release name contains chart name it will be used as a full name.
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "mev-inspect-backfill.chart" -}}
{{- define "mev-inspect-prices.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "mev-inspect-backfill.labels" -}}
helm.sh/chart: {{ include "mev-inspect-backfill.chart" . }}
{{ include "mev-inspect-backfill.selectorLabels" . }}
{{- define "mev-inspect-prices.labels" -}}
helm.sh/chart: {{ include "mev-inspect-prices.chart" . }}
{{ include "mev-inspect-prices.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
@@ -45,17 +45,17 @@ app.kubernetes.io/managed-by: {{ .Release.Service }}
{{/*
Selector labels
*/}}
{{- define "mev-inspect-backfill.selectorLabels" -}}
app.kubernetes.io/name: {{ include "mev-inspect-backfill.name" . }}
{{- define "mev-inspect-prices.selectorLabels" -}}
app.kubernetes.io/name: {{ include "mev-inspect-prices.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "mev-inspect-backfill.serviceAccountName" -}}
{{- define "mev-inspect-prices.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "mev-inspect-backfill.fullname" .) .Values.serviceAccount.name }}
{{- default (include "mev-inspect-prices.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}


@@ -0,0 +1,35 @@
apiVersion: batch/v1
kind: CronJob
metadata:
name: {{ include "mev-inspect-prices.fullname" . }}
spec:
schedule: "0 */1 * * *"
successfulJobsHistoryLimit: 0
jobTemplate:
spec:
template:
spec:
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
args:
- run
- fetch-all-prices
env:
- name: POSTGRES_HOST
valueFrom:
secretKeyRef:
name: mev-inspect-db-credentials
key: host
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: mev-inspect-db-credentials
key: username
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: mev-inspect-db-credentials
key: password
restartPolicy: Never


@@ -0,0 +1,7 @@
image:
repository: mev-inspect-py
pullPolicy: IfNotPresent
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""


@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/


@@ -0,0 +1,24 @@
apiVersion: v2
name: mev-inspect-workers
description: A Helm chart for Kubernetes
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"


@@ -0,0 +1,62 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "mev-inspect-worker.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "mev-inspect-worker.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "mev-inspect-worker.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "mev-inspect-worker.labels" -}}
helm.sh/chart: {{ include "mev-inspect-worker.chart" . }}
{{ include "mev-inspect-worker.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "mev-inspect-worker.selectorLabels" -}}
app.kubernetes.io/name: {{ include "mev-inspect-worker.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "mev-inspect-worker.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "mev-inspect-worker.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}


@@ -1,32 +1,46 @@
apiVersion: batch/v1
kind: Job
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "mev-inspect-backfill.fullname" . }}-{{ randAlphaNum 5 | lower }}
name: {{ include "mev-inspect-worker.fullname" . }}
labels:
{{- include "mev-inspect-backfill.labels" . | nindent 4 }}
{{- include "mev-inspect-worker.labels" . | nindent 4 }}
spec:
completions: 1
parallelism: 1
ttlSecondsAfterFinished: 5
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
{{- include "mev-inspect-worker.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "mev-inspect-worker.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
command:
- poetry
- run
- inspect-many-blocks
- {{ .Values.command.startBlockNumber | quote }}
- {{ .Values.command.endBlockNumber | quote }}
args: ["run", "dramatiq", "worker", "--threads=1", "--processes=1"]
livenessProbe:
exec:
command:
- ls
- /
initialDelaySeconds: 20
periodSeconds: 10
timeoutSeconds: 5
resources:
{{- toYaml .Values.resources | nindent 12 }}
env:
- name: POSTGRES_HOST
valueFrom:
@@ -43,6 +57,11 @@ spec:
secretKeyRef:
name: mev-inspect-db-credentials
key: password
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: redis
key: redis-password
- name: TRACE_DB_HOST
valueFrom:
secretKeyRef:
@@ -66,4 +85,21 @@ spec:
configMapKeyRef:
name: mev-inspect-rpc
key: url
restartPolicy: OnFailure
- name: LISTENER_HEALTHCHECK_URL
valueFrom:
configMapKeyRef:
name: mev-inspect-listener-healthcheck
key: url
optional: true
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}


@@ -1,9 +1,11 @@
# Default values for mev-inspect.
# Default values for mev-inspect-workers
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
repository: mev-inspect-py
repository: mev-inspect-py:latest
pullPolicy: IfNotPresent
imagePullSecrets: []
@@ -15,13 +17,14 @@ podAnnotations: {}
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
# capabilities:
# drop:
# - ALL
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
runAsNonRoot: true
runAsUser: 1000
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious


@@ -30,13 +30,15 @@ spec:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
args: ["run", "python", "loop.py"]
livenessProbe:
exec:
command:
- ls
- /
initialDelaySeconds: 20
periodSeconds: 5
periodSeconds: 10
timeoutSeconds: 5
resources:
{{- toYaml .Values.resources | nindent 12 }}
env:
@@ -55,6 +57,11 @@ spec:
secretKeyRef:
name: mev-inspect-db-credentials
key: password
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: redis
key: redis-password
- name: TRACE_DB_HOST
valueFrom:
secretKeyRef:
@@ -78,6 +85,12 @@ spec:
configMapKeyRef:
name: mev-inspect-rpc
key: url
- name: LISTENER_HEALTHCHECK_URL
valueFrom:
configMapKeyRef:
name: mev-inspect-listener-healthcheck
key: url
optional: true
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}


@@ -17,13 +17,15 @@ podAnnotations: {}
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- all
#readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious


@@ -3,9 +3,9 @@
set -e
NAME=listener
PIDFILE=/var/run/$NAME.pid
DAEMON=/root/.poetry/bin/poetry
DAEMON_OPTS="run python listener.py"
PIDFILE=/home/flashbot/$NAME.pid
DAEMON=/bin/bash
DAEMON_OPTS='-c "poetry run python listener.py"'
case "$1" in
start)
@@ -13,16 +13,18 @@ case "$1" in
start-stop-daemon \
--background \
--chdir /app \
--chuid flashbot \
--start \
--quiet \
--pidfile $PIDFILE \
--make-pidfile \
--startas $DAEMON -- $DAEMON_OPTS
--startas /bin/bash -- -c "poetry run python listener.py"
echo "."
;;
stop)
echo -n "Stopping daemon: "$NAME
start-stop-daemon --stop --quiet --oknodo --pidfile $PIDFILE
rm $PIDFILE
echo "."
;;
tail)
@@ -31,14 +33,16 @@ case "$1" in
restart)
echo -n "Restarting daemon: "$NAME
start-stop-daemon --stop --quiet --oknodo --retry 30 --pidfile $PIDFILE
rm $PIDFILE
start-stop-daemon \
--background \
--chdir /app \
--chuid flashbot \
--start \
--quiet \
--pidfile $PIDFILE \
--make-pidfile \
--startas $DAEMON -- $DAEMON_OPTS
--startas /bin/bash -- -c "poetry run python listener.py"
echo "."
;;


@@ -1,78 +1,98 @@
import asyncio
import logging
import os
import time
from web3 import Web3
import aiohttp
from mev_inspect.block import get_latest_block_number
from mev_inspect.concurrency import coro
from mev_inspect.crud.latest_block_update import (
find_latest_block_update,
update_latest_block,
)
from mev_inspect.classifiers.trace import TraceClassifier
from mev_inspect.db import get_inspect_session, get_trace_session
from mev_inspect.inspect_block import inspect_block
from mev_inspect.inspector import MEVInspector
from mev_inspect.provider import get_base_provider
from mev_inspect.signal_handler import GracefulKiller
logging.basicConfig(filename="listener.log", level=logging.INFO)
logging.basicConfig(filename="listener.log", filemode="a", level=logging.INFO)
logger = logging.getLogger(__name__)
# lag to make sure the blocks we see are settled
BLOCK_NUMBER_LAG = 5
def run():
@coro
async def run():
rpc = os.getenv("RPC_URL")
if rpc is None:
raise RuntimeError("Missing environment variable RPC_URL")
healthcheck_url = os.getenv("LISTENER_HEALTHCHECK_URL")
logger.info("Starting...")
killer = GracefulKiller()
inspect_db_session = get_inspect_session()
trace_db_session = get_trace_session()
trace_classifier = TraceClassifier()
inspector = MEVInspector(rpc)
base_provider = get_base_provider(rpc)
w3 = Web3(base_provider)
latest_block_number = get_latest_block_number(w3)
while not killer.kill_now:
last_written_block = find_latest_block_update(inspect_db_session)
logger.info(f"Latest block: {latest_block_number}")
logger.info(f"Last written block: {last_written_block}")
if (last_written_block is None) or (
last_written_block < (latest_block_number - BLOCK_NUMBER_LAG)
):
block_number = (
latest_block_number
if last_written_block is None
else last_written_block + 1
)
logger.info(f"Writing block: {block_number}")
inspect_block(
inspect_db_session,
base_provider,
w3,
trace_classifier,
block_number,
trace_db_session=trace_db_session,
)
update_latest_block(inspect_db_session, block_number)
else:
time.sleep(5)
latest_block_number = get_latest_block_number(w3)
await inspect_next_block(
inspector,
inspect_db_session,
trace_db_session,
base_provider,
healthcheck_url,
)
logger.info("Stopping...")
async def inspect_next_block(
inspector: MEVInspector,
inspect_db_session,
trace_db_session,
base_provider,
healthcheck_url,
):
latest_block_number = await get_latest_block_number(base_provider)
last_written_block = find_latest_block_update(inspect_db_session)
logger.info(f"Latest block: {latest_block_number}")
logger.info(f"Last written block: {last_written_block}")
if last_written_block is None:
# maintain lag if no blocks written yet
last_written_block = latest_block_number - BLOCK_NUMBER_LAG - 1
if last_written_block < (latest_block_number - BLOCK_NUMBER_LAG):
block_number = last_written_block + 1
logger.info(f"Writing block: {block_number}")
await inspector.inspect_single_block(
inspect_db_session=inspect_db_session,
trace_db_session=trace_db_session,
block=block_number,
)
update_latest_block(inspect_db_session, block_number)
if healthcheck_url:
await ping_healthcheck_url(healthcheck_url)
else:
await asyncio.sleep(5)
async def ping_healthcheck_url(url):
async with aiohttp.ClientSession() as session:
async with session.get(url):
pass
if __name__ == "__main__":
try:
run()
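With BLOCK_NUMBER_LAG = 5, a fresh database seeds last_written_block so the first write lands exactly BLOCK_NUMBER_LAG blocks behind the head. A standalone sketch of that branch (hypothetical helper mirroring inspect_next_block):

BLOCK_NUMBER_LAG = 5

def next_block_to_write(latest_block_number, last_written_block):
    # Mirrors the listener: seed the lag when nothing has been written yet.
    if last_written_block is None:
        last_written_block = latest_block_number - BLOCK_NUMBER_LAG - 1
    if last_written_block < latest_block_number - BLOCK_NUMBER_LAG:
        return last_written_block + 1
    return None  # caller sleeps for 5 seconds and retries

assert next_block_to_write(1000, None) == 995
assert next_block_to_write(1000, 995) is None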


@@ -3,7 +3,6 @@ import time
from mev_inspect.signal_handler import GracefulKiller
logging.basicConfig(filename="loop.log", level=logging.INFO)
logger = logging.getLogger(__name__)

mev

@@ -1,39 +1,56 @@
#!/bin/sh
#!/usr/bin/env bash
set -e
DB_NAME=mev_inspect
function get_kube_secret(){
kubectl get secrets $1 -o jsonpath="{.data.$2}" | base64 --decode
}
function get_kube_db_secret(){
kubectl get secrets mev-inspect-db-credentials -o jsonpath="{.data.$1}" | base64 --decode
}
function db(){
host=$(get_kube_db_secret "host")
username=$(get_kube_db_secret "username")
password=$(get_kube_db_secret "password")
host=$(get_kube_secret "mev-inspect-db-credentials" "host")
username=$(get_kube_secret "mev-inspect-db-credentials" "username")
password=$(get_kube_secret "mev-inspect-db-credentials" "password")
kubectl run -i --rm --tty postgres-client \
kubectl run -i --rm --tty postgres-client-$RANDOM \
--env="PGPASSWORD=$password" \
--image=jbergknoff/postgresql-client \
-- $DB_NAME --host=$host --user=$username
}
function redis(){
echo "To continue, enter 'shift + r'"
redis_password=$(get_kube_secret "redis" "redis-password")
kubectl run -i --rm --tty \
--namespace default redis-client-$RANDOM \
--env REDIS_PASSWORD=$redis_password \
--image docker.io/bitnami/redis:6.2.6-debian-10-r0 \
--command -- redis-cli -h redis-master -a $redis_password
}
case "$1" in
db)
echo "Connecting to $DB_NAME"
db
;;
redis)
echo "Connecting to redis"
redis
;;
listener)
kubectl exec -ti deploy/mev-inspect -- ./listener $2
;;
backfill)
start_block_number=$2
end_block_number=$3
n_workers=$4
echo "Backfilling from $start_block_number to $end_block_number with $n_workers workers"
python backfill.py $start_block_number $end_block_number $n_workers
echo "Backfilling from $start_block_number to $end_block_number"
kubectl exec -ti deploy/mev-inspect -- poetry run enqueue-many-blocks $start_block_number $end_block_number
;;
inspect)
block_number=$2
@@ -48,14 +65,28 @@ case "$1" in
poetry run inspect-many-blocks $start_block_number $end_block_number
;;
test)
shift
echo "Running tests"
kubectl exec -ti deploy/mev-inspect -- poetry run pytest tests
kubectl exec -ti deploy/mev-inspect -- poetry run pytest tests $@
;;
fetch)
block_number=$2
echo "Fetching block $block_number"
kubectl exec -ti deploy/mev-inspect -- poetry run fetch-block $block_number
;;
prices)
shift
case "$1" in
fetch-all)
echo "Running price fetch-all"
kubectl exec -ti deploy/mev-inspect -- \
poetry run fetch-all-prices
;;
*)
echo "prices usage: "$1" {fetch-all}"
exit 1
esac
;;
exec)
shift
kubectl exec -ti deploy/mev-inspect -- $@


@@ -1,37 +1,15 @@
from typing import List, Tuple, Optional
from typing import List, Optional, Tuple
from mev_inspect.traces import (
get_child_traces,
is_child_of_any_address,
)
from mev_inspect.schemas.liquidations import Liquidation
from mev_inspect.schemas.traces import (
ClassifiedTrace,
CallTrace,
DecodedCallTrace,
Classification,
ClassifiedTrace,
DecodedCallTrace,
Protocol,
)
from mev_inspect.transfers import get_transfer
from mev_inspect.schemas.transfers import Transfer
from mev_inspect.schemas.liquidations import Liquidation
AAVE_CONTRACT_ADDRESSES: List[str] = [
# AAVE Proxy
"0x398ec7346dcd622edc5ae82352f02be94c62d119",
# AAVE V2
"0x7d2768de32b0b80b7a3454c06bdac94a69ddc7a9",
# AAVE V1
"0x3dfd23a6c5e8bbcfc9581d2e864a68feb6a076d3",
# AAVE V2 WETH
"0x030ba81f1c18d280636f32af80b9aad02cf0854e",
# AAVE AMM Market DAI
"0x79be75ffc64dd58e66787e4eae470c8a1fd08ba4",
# AAVE i
"0x030ba81f1c18d280636f32af80b9aad02cf0854e",
"0xbcca60bb61934080951369a648fb03df4f96263c",
]
from mev_inspect.traces import get_child_traces, is_child_of_any_address
from mev_inspect.transfers import get_transfer
def get_aave_liquidations(
@@ -57,50 +35,73 @@ def get_aave_liquidations(
child_traces = get_child_traces(
trace.transaction_hash, trace.trace_address, traces
)
(debt_token_address, debt_purchase_amount) = _get_debt_data(
trace, child_traces, liquidator
)
(
received_token_address,
received_amount,
) = _get_payback_token_and_amount(trace, child_traces, liquidator)
if debt_purchase_amount == 0:
continue
(received_token_address, received_amount) = _get_received_data(
trace, child_traces, liquidator
)
if received_amount == 0:
continue
liquidations.append(
Liquidation(
liquidated_user=trace.inputs["_user"],
collateral_token_address=trace.inputs["_collateral"],
debt_token_address=trace.inputs["_reserve"],
debt_token_address=debt_token_address,
liquidator_user=liquidator,
debt_purchase_amount=trace.inputs["_purchaseAmount"],
debt_purchase_amount=debt_purchase_amount,
protocol=Protocol.aave,
received_amount=received_amount,
received_token_address=received_token_address,
transaction_hash=trace.transaction_hash,
trace_address=trace.trace_address,
block_number=trace.block_number,
error=trace.error,
)
)
return liquidations
def _get_payback_token_and_amount(
liquidation: DecodedCallTrace, child_traces: List[ClassifiedTrace], liquidator: str
def _get_received_data(
liquidation_trace: DecodedCallTrace,
child_traces: List[ClassifiedTrace],
liquidator: str,
) -> Tuple[str, int]:
"""Look for and return liquidator payback from liquidation"""
for child in child_traces:
child_transfer: Optional[Transfer] = get_transfer(child)
if child_transfer is not None and child_transfer.to_address == liquidator:
return child_transfer.token_address, child_transfer.amount
return liquidation_trace.inputs["_collateral"], 0
def _get_debt_data(
liquidation_trace: DecodedCallTrace,
child_traces: List[ClassifiedTrace],
liquidator: str,
) -> Tuple[str, int]:
"""Get transfer from liquidator to AAVE"""
for child in child_traces:
if isinstance(child, CallTrace):
child_transfer: Optional[Transfer] = get_transfer(child)
child_transfer: Optional[Transfer] = get_transfer(child)
if child_transfer is not None:
if child_transfer is not None:
if child_transfer.from_address == liquidator:
return child_transfer.token_address, child_transfer.amount
if (
child_transfer.to_address == liquidator
and child.from_address in AAVE_CONTRACT_ADDRESSES
):
return child_transfer.token_address, child_transfer.amount
return liquidation.inputs["_collateral"], 0
return (
liquidation_trace.inputs["_reserve"],
0,
)
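The two helpers split one liquidation into its legs: _get_debt_data looks for the transfer the liquidator sends in, _get_received_data for the transfer the liquidator gets back. An illustrative transfer for the debt leg (hypothetical addresses and amounts, using the same Transfer schema that get_transfer returns):

from mev_inspect.schemas.transfers import Transfer

LIQUIDATOR = "0x" + "ab" * 20

debt_payment = Transfer(
    block_number=1,
    transaction_hash="0x" + "00" * 32,
    trace_address=[0, 1],
    amount=10_000,
    from_address=LIQUIDATOR,  # the liquidator pays the debt in...
    to_address="0x7d2768de32b0b80b7a3454c06bdac94a69ddc7a9",  # ...to the AAVE V2 pool
    token_address="0x" + "11" * 20,
)
# _get_debt_data matches this transfer via from_address == liquidator;
# _get_received_data instead matches a transfer whose to_address == liquidator.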


@@ -7,7 +7,6 @@ from pydantic import parse_obj_as
from mev_inspect.schemas.abi import ABI
from mev_inspect.schemas.traces import Protocol
THIS_FILE_DIRECTORY = Path(__file__).parents[0]
ABI_DIRECTORY_PATH = THIS_FILE_DIRECTORY / "abis"

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -1,9 +1,11 @@
from itertools import groupby
from typing import List, Tuple
from typing import List, Optional, Tuple
from mev_inspect.schemas.arbitrages import Arbitrage
from mev_inspect.schemas.swaps import Swap
MAX_TOKEN_AMOUNT_PERCENT_DIFFERENCE = 0.01
def get_arbitrages(swaps: List[Swap]) -> List[Arbitrage]:
get_transaction_hash = lambda swap: swap.transaction_hash
@@ -45,17 +47,23 @@ def _get_arbitrages_from_swaps(swaps: List[Swap]) -> List[Arbitrage]:
if len(start_ends) == 0:
return []
# for (start, end) in filtered_start_ends:
for (start, end) in start_ends:
potential_intermediate_swaps = [
swap for swap in swaps if swap is not start and swap is not end
]
routes = _get_all_routes(start, end, potential_intermediate_swaps)
used_swaps: List[Swap] = []
for route in routes:
for (start, ends) in start_ends:
if start in used_swaps:
continue
unused_ends = [end for end in ends if end not in used_swaps]
route = _get_shortest_route(start, unused_ends, swaps)
if route is not None:
start_amount = route[0].token_in_amount
end_amount = route[-1].token_out_amount
profit_amount = end_amount - start_amount
error = None
for swap in route:
if swap.error is not None:
error = swap.error
arb = Arbitrage(
swaps=route,
@@ -66,8 +74,12 @@ def _get_arbitrages_from_swaps(swaps: List[Swap]) -> List[Arbitrage]:
start_amount=start_amount,
end_amount=end_amount,
profit_amount=profit_amount,
error=error,
)
all_arbitrages.append(arb)
used_swaps.extend(route)
if len(all_arbitrages) == 1:
return all_arbitrages
else:
@@ -78,56 +90,101 @@ def _get_arbitrages_from_swaps(swaps: List[Swap]) -> List[Arbitrage]:
]
def _get_all_start_end_swaps(swaps: List[Swap]) -> List[Tuple[Swap, Swap]]:
def _get_shortest_route(
start_swap: Swap,
end_swaps: List[Swap],
all_swaps: List[Swap],
max_route_length: Optional[int] = None,
) -> Optional[List[Swap]]:
if len(end_swaps) == 0:
return None
if max_route_length is not None and max_route_length < 2:
return None
for end_swap in end_swaps:
if _swap_outs_match_swap_ins(start_swap, end_swap):
return [start_swap, end_swap]
if max_route_length is not None and max_route_length == 2:
return None
other_swaps = [
swap for swap in all_swaps if (swap is not start_swap and swap not in end_swaps)
]
if len(other_swaps) == 0:
return None
shortest_remaining_route = None
max_remaining_route_length = (
None if max_route_length is None else max_route_length - 1
)
for next_swap in other_swaps:
if _swap_outs_match_swap_ins(start_swap, next_swap):
shortest_from_next = _get_shortest_route(
next_swap,
end_swaps,
other_swaps,
max_route_length=max_remaining_route_length,
)
if shortest_from_next is not None and (
shortest_remaining_route is None
or len(shortest_from_next) < len(shortest_remaining_route)
):
shortest_remaining_route = shortest_from_next
max_remaining_route_length = len(shortest_from_next) - 1
if shortest_remaining_route is None:
return None
else:
return [start_swap] + shortest_remaining_route
def _get_all_start_end_swaps(swaps: List[Swap]) -> List[Tuple[Swap, List[Swap]]]:
"""
Gets the set of all possible opening and closing swap pairs in an arbitrage via
Gets the set of all possible openings and corresponding closing swaps for an arbitrage via
- swap[start].token_in == swap[end].token_out
- swap[start].from_address == swap[end].to_address
- not swap[start].from_address in all_pool_addresses
- not swap[end].to_address in all_pool_addresses
"""
pool_addrs = [swap.pool_address for swap in swaps]
valid_start_ends: List[Tuple[Swap, Swap]] = []
for potential_start_swap in swaps:
for potential_end_swap in swaps:
pool_addrs = [swap.contract_address for swap in swaps]
valid_start_ends: List[Tuple[Swap, List[Swap]]] = []
for index, potential_start_swap in enumerate(swaps):
ends_for_start: List[Swap] = []
remaining_swaps = swaps[:index] + swaps[index + 1 :]
for potential_end_swap in remaining_swaps:
if (
potential_start_swap.token_in_address
== potential_end_swap.token_out_address
and potential_start_swap.from_address == potential_end_swap.to_address
and not potential_start_swap.from_address in pool_addrs
):
valid_start_ends.append((potential_start_swap, potential_end_swap))
ends_for_start.append(potential_end_swap)
if len(ends_for_start) > 0:
valid_start_ends.append((potential_start_swap, ends_for_start))
return valid_start_ends
def _get_all_routes(
start_swap: Swap, end_swap: Swap, other_swaps: List[Swap]
) -> List[List[Swap]]:
"""
Returns all routes (List[Swap]) from start to finish between a start_swap and an end_swap only accounting for token_address_in and token_address_out.
"""
# If the path is complete, return
if start_swap.token_out_address == end_swap.token_in_address:
return [[start_swap, end_swap]]
elif len(other_swaps) == 0:
return []
def _swap_outs_match_swap_ins(swap_out, swap_in) -> bool:
if swap_out.token_out_address == swap_in.token_in_address and (
swap_out.contract_address == swap_in.from_address
or swap_out.to_address == swap_in.contract_address
or swap_out.to_address == swap_in.from_address
):
amount_percent_difference = abs(
(float(swap_out.token_out_amount) / swap_in.token_in_amount) - 1.0
)
# Collect all potential next steps, check if valid, recursively find routes from next_step to end_swap
routes: List[List[Swap]] = []
for potential_next_swap in other_swaps:
if start_swap.token_out_address == potential_next_swap.token_in_address and (
start_swap.pool_address == potential_next_swap.from_address
or start_swap.to_address == potential_next_swap.pool_address
or start_swap.to_address == potential_next_swap.from_address
):
remaining_swaps = [
swap for swap in other_swaps if swap != potential_next_swap
]
next_swap_routes = _get_all_routes(
potential_next_swap, end_swap, remaining_swaps
)
if len(next_swap_routes) > 0:
for next_swap_route in next_swap_routes:
next_swap_route.insert(0, start_swap)
routes.append(next_swap_route)
return routes
if amount_percent_difference < MAX_TOKEN_AMOUNT_PERCENT_DIFFERENCE:
return True
return False
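_swap_outs_match_swap_ins only links two swaps into a route when the connecting token amounts differ by under 1%. The ratio test in isolation (standalone sketch, address checks omitted):

MAX_TOKEN_AMOUNT_PERCENT_DIFFERENCE = 0.01

def amounts_close(token_out_amount: int, token_in_amount: int) -> bool:
    # Same ratio test as _swap_outs_match_swap_ins, without the address checks.
    difference = abs((float(token_out_amount) / token_in_amount) - 1.0)
    return difference < MAX_TOKEN_AMOUNT_PERCENT_DIFFERENCE

assert amounts_close(1000, 995)       # 0.5% apart: plausibly the same leg
assert not amounts_close(1000, 900)   # ~11% apart: rejected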


@@ -1,7 +1,5 @@
import asyncio
import logging
import sys
from pathlib import Path
from typing import List, Optional
from sqlalchemy import orm
@@ -11,19 +9,21 @@ from mev_inspect.fees import fetch_base_fee_per_gas
from mev_inspect.schemas.blocks import Block
from mev_inspect.schemas.receipts import Receipt
from mev_inspect.schemas.traces import Trace, TraceType
from mev_inspect.utils import hex_to_int
cache_directory = "./cache"
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logger = logging.getLogger(__name__)
def get_latest_block_number(w3: Web3) -> int:
return int(w3.eth.get_block("latest")["number"])
async def get_latest_block_number(base_provider) -> int:
latest_block = await base_provider.make_request(
"eth_getBlockByNumber",
["latest", False],
)
return hex_to_int(latest_block["result"]["number"])
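get_latest_block_number now issues the raw JSON-RPC call and parses the hex block number itself. The parsing step in isolation (illustrative payload, not a live request):

# Shape of an eth_getBlockByNumber response; hex_to_int does the base-16 parse.
latest_block = {"result": {"number": "0xd59f80"}}
assert int(latest_block["result"]["number"], 16) == 14_000_000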
async def create_from_block_number(
base_provider,
w3: Web3,
block_number: int,
trace_db_session: Optional[orm.Session],
@@ -34,37 +34,26 @@ async def create_from_block_number(
block = _find_block(trace_db_session, block_number)
if block is None:
block = await _fetch_block(w3, base_provider, block_number)
block = await _fetch_block(w3, block_number)
return block
else:
return block
async def _fetch_block(w3, base_provider, block_number: int, retries: int = 0) -> Block:
async def _fetch_block(w3, block_number: int) -> Block:
block_json, receipts_json, traces_json, base_fee_per_gas = await asyncio.gather(
w3.eth.get_block(block_number),
base_provider.make_request("eth_getBlockReceipts", [block_number]),
base_provider.make_request("trace_block", [block_number]),
w3.eth.get_block_receipts(block_number),
w3.eth.trace_block(block_number),
fetch_base_fee_per_gas(w3, block_number),
)
try:
receipts: List[Receipt] = [
Receipt(**receipt) for receipt in receipts_json["result"]
]
traces = [Trace(**trace_json) for trace_json in traces_json["result"]]
except KeyError as e:
logger.warning(
f"Failed to create objects from block: {block_number}: {e}, retrying: {retries + 1} / 3"
)
if retries < 3:
await asyncio.sleep(5)
return await _fetch_block(w3, base_provider, block_number, retries)
else:
raise
receipts: List[Receipt] = [Receipt(**receipt) for receipt in receipts_json]
traces = [Trace(**trace_json) for trace_json in traces_json]
return Block(
block_number=block_number,
block_timestamp=block_json["timestamp"],
miner=block_json["miner"],
base_fee_per_gas=base_fee_per_gas,
traces=traces,
@@ -76,20 +65,29 @@ def _find_block(
trace_db_session: orm.Session,
block_number: int,
) -> Optional[Block]:
traces = _find_traces(trace_db_session, block_number)
receipts = _find_receipts(trace_db_session, block_number)
base_fee_per_gas = _find_base_fee(trace_db_session, block_number)
block_timestamp = _find_block_timestamp(trace_db_session, block_number)
if block_timestamp is None:
return None
if traces is None or receipts is None or base_fee_per_gas is None:
base_fee_per_gas = _find_base_fee(trace_db_session, block_number)
if base_fee_per_gas is None:
return None
traces = _find_traces(trace_db_session, block_number)
if traces is None:
return None
receipts = _find_receipts(trace_db_session, block_number)
if receipts is None:
return None
miner_address = _get_miner_address_from_traces(traces)
if miner_address is None:
return None
return Block(
block_number=block_number,
block_timestamp=block_timestamp,
miner=miner_address,
base_fee_per_gas=base_fee_per_gas,
traces=traces,
@@ -97,6 +95,22 @@ def _find_block(
)
def _find_block_timestamp(
trace_db_session: orm.Session,
block_number: int,
) -> Optional[int]:
result = trace_db_session.execute(
"SELECT block_timestamp FROM block_timestamps WHERE block_number = :block_number",
params={"block_number": block_number},
).one_or_none()
if result is None:
return None
else:
(block_timestamp,) = result
return block_timestamp
def _find_traces(
trace_db_session: orm.Session,
block_number: int,
@@ -165,17 +179,3 @@ def get_transaction_hashes(calls: List[Trace]) -> List[str]:
result.append(call.transaction_hash)
return result
def cache_block(cache_path: Path, block: Block):
write_mode = "w" if cache_path.is_file() else "x"
cache_path.parent.mkdir(parents=True, exist_ok=True)
with open(cache_path, mode=write_mode) as cache_file:
cache_file.write(block.json())
def _get_cache_path(block_number: int) -> Path:
cache_directory_path = Path(cache_directory)
return cache_directory_path / f"{block_number}.json"


@@ -0,0 +1,180 @@
from typing import List, Optional, Sequence
from mev_inspect.schemas.nft_trades import NftTrade
from mev_inspect.schemas.swaps import Swap
from mev_inspect.schemas.traces import ClassifiedTrace, DecodedCallTrace
from mev_inspect.schemas.transfers import ETH_TOKEN_ADDRESS, Transfer
def create_nft_trade_from_transfers(
trace: DecodedCallTrace,
child_transfers: List[Transfer],
collection_address: str,
seller_address: str,
buyer_address: str,
exchange_wallet_address: str,
) -> Optional[NftTrade]:
transfers_to_buyer = _filter_transfers(child_transfers, to_address=buyer_address)
transfers_to_seller = _filter_transfers(child_transfers, to_address=seller_address)
if len(transfers_to_buyer) != 1 or len(transfers_to_seller) != 1:
return None
if transfers_to_buyer[0].token_address != collection_address:
return None
payment_token_address = transfers_to_seller[0].token_address
payment_amount = transfers_to_seller[0].amount
token_id = transfers_to_buyer[0].amount
transfers_from_seller_to_exchange = _filter_transfers(
child_transfers,
from_address=seller_address,
to_address=exchange_wallet_address,
)
transfers_from_buyer_to_exchange = _filter_transfers(
child_transfers,
from_address=buyer_address,
to_address=exchange_wallet_address,
)
for fee in [
*transfers_from_seller_to_exchange,
*transfers_from_buyer_to_exchange,
]:
# Assumes that exchange fees are paid with the same token as the sale
payment_amount -= fee.amount
return NftTrade(
abi_name=trace.abi_name,
transaction_hash=trace.transaction_hash,
transaction_position=trace.transaction_position,
block_number=trace.block_number,
trace_address=trace.trace_address,
protocol=trace.protocol,
error=trace.error,
seller_address=seller_address,
buyer_address=buyer_address,
payment_token_address=payment_token_address,
payment_amount=payment_amount,
collection_address=collection_address,
token_id=token_id,
)
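Exchange fees are netted out of the seller's proceeds, on the stated assumption that fees are paid in the sale token. Worked numbers (hypothetical sale):

payment_amount = 100           # transfer to the seller
exchange_fee_amounts = [2, 3]  # transfers from seller/buyer to the exchange wallet
for fee_amount in exchange_fee_amounts:
    payment_amount -= fee_amount
assert payment_amount == 95    # the NftTrade's reported payment_amount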
def create_swap_from_pool_transfers(
trace: DecodedCallTrace,
recipient_address: str,
prior_transfers: List[Transfer],
child_transfers: List[Transfer],
) -> Optional[Swap]:
pool_address = trace.to_address
transfers_to_pool = []
if trace.value is not None and trace.value > 0:
transfers_to_pool = [_build_eth_transfer(trace)]
if len(transfers_to_pool) == 0:
transfers_to_pool = _filter_transfers(prior_transfers, to_address=pool_address)
if len(transfers_to_pool) == 0:
transfers_to_pool = _filter_transfers(child_transfers, to_address=pool_address)
if len(transfers_to_pool) == 0:
return None
transfers_from_pool_to_recipient = _filter_transfers(
child_transfers, to_address=recipient_address, from_address=pool_address
)
if len(transfers_from_pool_to_recipient) != 1:
return None
transfer_in = transfers_to_pool[-1]
transfer_out = transfers_from_pool_to_recipient[0]
return Swap(
abi_name=trace.abi_name,
transaction_hash=trace.transaction_hash,
transaction_position=trace.transaction_position,
block_number=trace.block_number,
trace_address=trace.trace_address,
contract_address=pool_address,
protocol=trace.protocol,
from_address=transfer_in.from_address,
to_address=transfer_out.to_address,
token_in_address=transfer_in.token_address,
token_in_amount=transfer_in.amount,
token_out_address=transfer_out.token_address,
token_out_amount=transfer_out.amount,
error=trace.error,
)
def create_swap_from_recipient_transfers(
trace: DecodedCallTrace,
pool_address: str,
recipient_address: str,
prior_transfers: List[Transfer],
child_transfers: List[Transfer],
) -> Optional[Swap]:
transfers_from_recipient = _filter_transfers(
[*prior_transfers, *child_transfers], from_address=recipient_address
)
transfers_to_recipient = _filter_transfers(
child_transfers, to_address=recipient_address
)
if len(transfers_from_recipient) != 1 or len(transfers_to_recipient) != 1:
return None
transfer_in = transfers_from_recipient[0]
transfer_out = transfers_to_recipient[0]
return Swap(
abi_name=trace.abi_name,
transaction_hash=trace.transaction_hash,
transaction_position=trace.transaction_position,
block_number=trace.block_number,
trace_address=trace.trace_address,
contract_address=pool_address,
protocol=trace.protocol,
from_address=transfer_in.from_address,
to_address=transfer_out.to_address,
token_in_address=transfer_in.token_address,
token_in_amount=transfer_in.amount,
token_out_address=transfer_out.token_address,
token_out_amount=transfer_out.amount,
error=trace.error,
)
def _build_eth_transfer(trace: ClassifiedTrace) -> Transfer:
return Transfer(
block_number=trace.block_number,
transaction_hash=trace.transaction_hash,
trace_address=trace.trace_address,
amount=trace.value,
to_address=trace.to_address,
from_address=trace.from_address,
token_address=ETH_TOKEN_ADDRESS,
)
def _filter_transfers(
transfers: Sequence[Transfer],
to_address: Optional[str] = None,
from_address: Optional[str] = None,
) -> List[Transfer]:
filtered_transfers = []
for transfer in transfers:
if to_address is not None and transfer.to_address != to_address:
continue
if from_address is not None and transfer.from_address != from_address:
continue
filtered_transfers.append(transfer)
return filtered_transfers
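_filter_transfers treats each address argument as optional: None means the field is ignored, and every argument that is given must match. A quick check (hypothetical addresses, same Transfer schema as above):

from mev_inspect.schemas.transfers import ETH_TOKEN_ADDRESS, Transfer

def make_transfer(from_address: str, to_address: str, amount: int) -> Transfer:
    # Hypothetical boilerplate-cutter; only the addresses and amounts matter here.
    return Transfer(
        block_number=1,
        transaction_hash="0x" + "00" * 32,
        trace_address=[0],
        amount=amount,
        from_address=from_address,
        to_address=to_address,
        token_address=ETH_TOKEN_ADDRESS,
    )

transfers = [make_transfer("0xaaa", "0xbbb", 1), make_transfer("0xbbb", "0xaaa", 2)]
assert [t.amount for t in _filter_transfers(transfers, to_address="0xaaa")] == [2]
assert _filter_transfers(transfers, to_address="0xaaa", from_address="0xaaa") == []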


@@ -1,17 +1,19 @@
from typing import Dict, Optional, Tuple, Type
from mev_inspect.schemas.classifiers import Classifier, ClassifierSpec
from mev_inspect.schemas.traces import DecodedCallTrace, Protocol
from mev_inspect.schemas.classifiers import ClassifierSpec, Classifier
from .aave import AAVE_CLASSIFIER_SPECS
from .balancer import BALANCER_CLASSIFIER_SPECS
from .bancor import BANCOR_CLASSIFIER_SPECS
from .compound import COMPOUND_CLASSIFIER_SPECS
from .cryptopunks import CRYPTOPUNKS_CLASSIFIER_SPECS
from .curve import CURVE_CLASSIFIER_SPECS
from .erc20 import ERC20_CLASSIFIER_SPECS
from .uniswap import UNISWAP_CLASSIFIER_SPECS
from .weth import WETH_CLASSIFIER_SPECS, WETH_ADDRESS
from .zero_ex import ZEROX_CLASSIFIER_SPECS
from .balancer import BALANCER_CLASSIFIER_SPECS
from .compound import COMPOUND_CLASSIFIER_SPECS
from .opensea import OPENSEA_CLASSIFIER_SPECS
from .uniswap import UNISWAP_CLASSIFIER_SPECS
from .weth import WETH_ADDRESS, WETH_CLASSIFIER_SPECS
from .zero_ex import ZEROX_CLASSIFIER_SPECS
ALL_CLASSIFIER_SPECS = (
ERC20_CLASSIFIER_SPECS
@@ -22,7 +24,9 @@ ALL_CLASSIFIER_SPECS = (
+ ZEROX_CLASSIFIER_SPECS
+ BALANCER_CLASSIFIER_SPECS
+ COMPOUND_CLASSIFIER_SPECS
+ CRYPTOPUNKS_CLASSIFIER_SPECS
+ OPENSEA_CLASSIFIER_SPECS
+ BANCOR_CLASSIFIER_SPECS
)
_SPECS_BY_ABI_NAME_AND_PROTOCOL: Dict[


@@ -1,8 +1,8 @@
from mev_inspect.schemas.classifiers import (
ClassifierSpec,
DecodedCallTrace,
TransferClassifier,
LiquidationClassifier,
TransferClassifier,
)
from mev_inspect.schemas.traces import Protocol
from mev_inspect.schemas.transfers import Transfer


@@ -1,20 +1,28 @@
from mev_inspect.schemas.traces import (
DecodedCallTrace,
Protocol,
)
from mev_inspect.schemas.classifiers import (
ClassifierSpec,
SwapClassifier,
)
from typing import List, Optional
from mev_inspect.classifiers.helpers import create_swap_from_pool_transfers
from mev_inspect.schemas.classifiers import ClassifierSpec, SwapClassifier
from mev_inspect.schemas.swaps import Swap
from mev_inspect.schemas.traces import DecodedCallTrace, Protocol
from mev_inspect.schemas.transfers import Transfer
BALANCER_V1_POOL_ABI_NAME = "BPool"
class BalancerSwapClassifier(SwapClassifier):
@staticmethod
def get_swap_recipient(trace: DecodedCallTrace) -> str:
return trace.from_address
def parse_swap(
trace: DecodedCallTrace,
prior_transfers: List[Transfer],
child_transfers: List[Transfer],
) -> Optional[Swap]:
recipient_address = trace.from_address
swap = create_swap_from_pool_transfers(
trace, recipient_address, prior_transfers, child_transfers
)
return swap
BALANCER_V1_SPECS = [


@@ -0,0 +1,41 @@
from typing import List, Optional
from mev_inspect.classifiers.helpers import create_swap_from_recipient_transfers
from mev_inspect.schemas.classifiers import ClassifierSpec, SwapClassifier
from mev_inspect.schemas.swaps import Swap
from mev_inspect.schemas.traces import DecodedCallTrace, Protocol
from mev_inspect.schemas.transfers import Transfer
BANCOR_NETWORK_ABI_NAME = "BancorNetwork"
BANCOR_NETWORK_CONTRACT_ADDRESS = "0x2F9EC37d6CcFFf1caB21733BdaDEdE11c823cCB0"
class BancorSwapClassifier(SwapClassifier):
@staticmethod
def parse_swap(
trace: DecodedCallTrace,
prior_transfers: List[Transfer],
child_transfers: List[Transfer],
) -> Optional[Swap]:
recipient_address = trace.from_address
swap = create_swap_from_recipient_transfers(
trace,
BANCOR_NETWORK_CONTRACT_ADDRESS,
recipient_address,
prior_transfers,
child_transfers,
)
return swap
BANCOR_NETWORK_SPEC = ClassifierSpec(
abi_name=BANCOR_NETWORK_ABI_NAME,
protocol=Protocol.bancor,
classifiers={
"convertByPath(address[],uint256,uint256,address,address,uint256)": BancorSwapClassifier,
},
valid_contract_addresses=[BANCOR_NETWORK_CONTRACT_ADDRESS],
)
BANCOR_CLASSIFIER_SPECS = [BANCOR_NETWORK_SPEC]


@@ -1,11 +1,9 @@
from mev_inspect.schemas.traces import (
Protocol,
)
from mev_inspect.schemas.classifiers import (
ClassifierSpec,
LiquidationClassifier,
SeizeClassifier,
)
from mev_inspect.schemas.traces import Protocol
COMPOUND_V2_CETH_SPEC = ClassifierSpec(
abi_name="CEther",


@@ -0,0 +1,27 @@
from mev_inspect.schemas.classifiers import Classifier, ClassifierSpec
from mev_inspect.schemas.traces import Classification, Protocol
class PunkBidAcceptanceClassifier(Classifier):
@staticmethod
def get_classification() -> Classification:
return Classification.punk_accept_bid
class PunkBidClassifier(Classifier):
@staticmethod
def get_classification() -> Classification:
return Classification.punk_bid
CRYPTO_PUNKS_SPEC = ClassifierSpec(
abi_name="cryptopunks",
protocol=Protocol.cryptopunks,
valid_contract_addresses=["0xb47e3cd837dDF8e4c57F05d70Ab865de6e193BBB"],
classifiers={
"enterBidForPunk(uint256)": PunkBidClassifier,
"acceptBidForPunk(uint256,uint256)": PunkBidAcceptanceClassifier,
},
)
CRYPTOPUNKS_CLASSIFIER_SPECS = [CRYPTO_PUNKS_SPEC]


@@ -1,18 +1,26 @@
from mev_inspect.schemas.traces import (
Protocol,
)
from typing import List, Optional
from mev_inspect.schemas.classifiers import (
ClassifierSpec,
DecodedCallTrace,
SwapClassifier,
)
from mev_inspect.classifiers.helpers import create_swap_from_pool_transfers
from mev_inspect.schemas.classifiers import ClassifierSpec, SwapClassifier
from mev_inspect.schemas.swaps import Swap
from mev_inspect.schemas.traces import DecodedCallTrace, Protocol
from mev_inspect.schemas.transfers import Transfer
class CurveSwapClassifier(SwapClassifier):
@staticmethod
def get_swap_recipient(trace: DecodedCallTrace) -> str:
return trace.from_address
def parse_swap(
trace: DecodedCallTrace,
prior_transfers: List[Transfer],
child_transfers: List[Transfer],
) -> Optional[Swap]:
recipient_address = trace.from_address
swap = create_swap_from_pool_transfers(
trace, recipient_address, prior_transfers, child_transfers
)
return swap
CURVE_BASE_POOLS = [


@@ -1,8 +1,5 @@
from mev_inspect.schemas.classifiers import ClassifierSpec, TransferClassifier
from mev_inspect.schemas.traces import DecodedCallTrace
from mev_inspect.schemas.classifiers import (
ClassifierSpec,
TransferClassifier,
)
from mev_inspect.schemas.transfers import Transfer


@@ -1,17 +1,42 @@
from mev_inspect.schemas.traces import (
Protocol,
)
from mev_inspect.schemas.classifiers import (
ClassifierSpec,
from typing import List, Optional
from mev_inspect.classifiers.helpers import create_nft_trade_from_transfers
from mev_inspect.schemas.classifiers import ClassifierSpec, NftTradeClassifier
from mev_inspect.schemas.nft_trades import NftTrade
from mev_inspect.schemas.traces import DecodedCallTrace, Protocol
from mev_inspect.schemas.transfers import Transfer
OPENSEA_WALLET_ADDRESS = "0x5b3256965e7c3cf26e11fcaf296dfc8807c01073"
class OpenseaClassifier(NftTradeClassifier):
@staticmethod
def parse_trade(
trace: DecodedCallTrace,
child_transfers: List[Transfer],
) -> Optional[NftTrade]:
addresses = trace.inputs["addrs"]
buy_maker = addresses[1]
sell_maker = addresses[8]
target = addresses[4]
return create_nft_trade_from_transfers(
trace,
child_transfers,
collection_address=target,
seller_address=sell_maker,
buyer_address=buy_maker,
exchange_wallet_address=OPENSEA_WALLET_ADDRESS,
)
OPENSEA_SPEC = ClassifierSpec(
abi_name="WyvernExchange",
protocol=Protocol.opensea,
valid_contract_addresses=["0x7be8076f4ea4a4ad08075c2508e481d6c946d12b"],
classifiers={
"atomicMatch_(address[14],uint256[18],uint8[8],bytes,bytes,bytes,bytes,bytes,bytes,uint8[2],bytes32[5])": OpenseaClassifier,
},
)
OPEN_SEA_SPEC = [
ClassifierSpec(
abi_name="OpenSea",
protocol=Protocol.opensea,
valid_contract_addresses=["0x7be8076f4ea4a4ad08075c2508e481d6c946d12b"],
)
]
OPENSEA_CLASSIFIER_SPECS = [OPEN_SEA_SPEC]
OPENSEA_CLASSIFIER_SPECS = [OPENSEA_SPEC]


@@ -1,12 +1,10 @@
from mev_inspect.schemas.traces import (
DecodedCallTrace,
Protocol,
)
from mev_inspect.schemas.classifiers import (
ClassifierSpec,
SwapClassifier,
)
from typing import List, Optional
from mev_inspect.classifiers.helpers import create_swap_from_pool_transfers
from mev_inspect.schemas.classifiers import ClassifierSpec, SwapClassifier
from mev_inspect.schemas.swaps import Swap
from mev_inspect.schemas.traces import DecodedCallTrace, Protocol
from mev_inspect.schemas.transfers import Transfer
UNISWAP_V2_PAIR_ABI_NAME = "UniswapV2Pair"
UNISWAP_V3_POOL_ABI_NAME = "UniswapV3Pool"
@@ -14,20 +12,34 @@ UNISWAP_V3_POOL_ABI_NAME = "UniswapV3Pool"
class UniswapV3SwapClassifier(SwapClassifier):
@staticmethod
def get_swap_recipient(trace: DecodedCallTrace) -> str:
if trace.inputs is not None and "recipient" in trace.inputs:
return trace.inputs["recipient"]
else:
return trace.from_address
def parse_swap(
trace: DecodedCallTrace,
prior_transfers: List[Transfer],
child_transfers: List[Transfer],
) -> Optional[Swap]:
recipient_address = trace.inputs.get("recipient", trace.from_address)
swap = create_swap_from_pool_transfers(
trace, recipient_address, prior_transfers, child_transfers
)
return swap
class UniswapV2SwapClassifier(SwapClassifier):
@staticmethod
def get_swap_recipient(trace: DecodedCallTrace) -> str:
if trace.inputs is not None and "to" in trace.inputs:
return trace.inputs["to"]
else:
return trace.from_address
def parse_swap(
trace: DecodedCallTrace,
prior_transfers: List[Transfer],
child_transfers: List[Transfer],
) -> Optional[Swap]:
recipient_address = trace.inputs.get("to", trace.from_address)
swap = create_swap_from_pool_transfers(
trace, recipient_address, prior_transfers, child_transfers
)
return swap
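Both Uniswap classifiers now resolve the recipient with a dict fallback instead of the old get_swap_recipient branching. The fallback in isolation (illustrative inputs):

# V3 reads the "recipient" input, V2 reads "to"; both default to the caller.
inputs = {"recipient": "0x" + "aa" * 20}
assert inputs.get("recipient", "0xcaller") == "0x" + "aa" * 20
assert {}.get("recipient", "0xcaller") == "0xcaller"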
UNISWAP_V3_CONTRACT_SPECS = [
@@ -127,7 +139,7 @@ UNISWAPPY_V2_PAIR_SPEC = ClassifierSpec(
},
)
UNISWAP_CLASSIFIER_SPECS = [
UNISWAP_CLASSIFIER_SPECS: List = [
*UNISWAP_V3_CONTRACT_SPECS,
*UNISWAPPY_V2_CONTRACT_SPECS,
*UNISWAP_V3_GENERAL_SPECS,


@@ -1,11 +1,9 @@
from mev_inspect.schemas.traces import (
Protocol,
)
from mev_inspect.schemas.classifiers import (
ClassifierSpec,
DecodedCallTrace,
TransferClassifier,
)
from mev_inspect.schemas.traces import Protocol
from mev_inspect.schemas.transfers import Transfer


@@ -1,9 +1,55 @@
from mev_inspect.schemas.traces import (
Protocol,
)
from mev_inspect.schemas.classifiers import (
ClassifierSpec,
)
from typing import List, Optional, Tuple
from mev_inspect.schemas.classifiers import ClassifierSpec, SwapClassifier
from mev_inspect.schemas.swaps import Swap
from mev_inspect.schemas.traces import DecodedCallTrace, Protocol
from mev_inspect.schemas.transfers import Transfer
ANY_TAKER_ADDRESS = "0x0000000000000000000000000000000000000000"
RFQ_SIGNATURES = [
"fillRfqOrder((address,address,uint128,uint128,address,address,address,bytes32,uint64,uint256),(uint8,uint8,bytes32,bytes32),uint128)",
"_fillRfqOrder((address,address,uint128,uint128,address,address,address,bytes32,uint64,uint256),(uint8,uint8,bytes32,bytes32),uint128,address,bool,address)",
]
LIMIT_SIGNATURES = [
"fillOrKillLimitOrder((address,address,uint128,uint128,uint128,address,address,address,address,bytes32,uint64,uint256),(uint8,uint8,bytes32,bytes32),uint128)",
"fillLimitOrder((address,address,uint128,uint128,uint128,address,address,address,address,bytes32,uint64,uint256),(uint8,uint8,bytes32,bytes32),uint128)",
"_fillLimitOrder((address,address,uint128,uint128,uint128,address,address,address,address,bytes32,uint64,uint256),(uint8,uint8,bytes32,bytes32),uint128,address,address)",
]
class ZeroExSwapClassifier(SwapClassifier):
@staticmethod
def parse_swap(
trace: DecodedCallTrace,
prior_transfers: List[Transfer],
child_transfers: List[Transfer],
) -> Optional[Swap]:
if len(child_transfers) < 2:
return None
token_out_address, token_out_amount = _get_0x_token_out_data(
trace, child_transfers
)
token_in_address, token_in_amount = _get_0x_token_in_data(trace)
return Swap(
abi_name=trace.abi_name,
transaction_hash=trace.transaction_hash,
transaction_position=trace.transaction_position,
block_number=trace.block_number,
trace_address=trace.trace_address,
contract_address=trace.to_address,
protocol=Protocol.zero_ex,
from_address=trace.from_address,
to_address=trace.to_address,
token_in_address=token_in_address,
token_in_amount=token_in_amount,
token_out_address=token_out_address,
token_out_amount=token_out_amount,
error=trace.error,
)
ZEROX_CONTRACT_SPECS = [
@@ -122,6 +168,14 @@ ZEROX_GENERIC_SPECS = [
ClassifierSpec(
abi_name="INativeOrdersFeature",
protocol=Protocol.zero_ex,
valid_contract_addresses=["0xdef1c0ded9bec7f1a1670819833240f027b25eff"],
classifiers={
"fillOrKillLimitOrder((address,address,uint128,uint128,uint128,address,address,address,address,bytes32,uint64,uint256),(uint8,uint8,bytes32,bytes32),uint128)": ZeroExSwapClassifier,
"fillRfqOrder((address,address,uint128,uint128,address,address,address,bytes32,uint64,uint256),(uint8,uint8,bytes32,bytes32),uint128)": ZeroExSwapClassifier,
"fillLimitOrder((address,address,uint128,uint128,uint128,address,address,address,address,bytes32,uint64,uint256),(uint8,uint8,bytes32,bytes32),uint128)": ZeroExSwapClassifier,
"_fillRfqOrder((address,address,uint128,uint128,address,address,address,bytes32,uint64,uint256),(uint8,uint8,bytes32,bytes32),uint128,address,bool,address)": ZeroExSwapClassifier,
"_fillLimitOrder((address,address,uint128,uint128,uint128,address,address,address,address,bytes32,uint64,uint256),(uint8,uint8,bytes32,bytes32),uint128,address,address)": ZeroExSwapClassifier,
},
),
ClassifierSpec(
abi_name="IOtcOrdersFeature",
@@ -166,3 +220,62 @@ ZEROX_GENERIC_SPECS = [
]
ZEROX_CLASSIFIER_SPECS = ZEROX_CONTRACT_SPECS + ZEROX_GENERIC_SPECS
def _get_taker_token_transfer_amount(
trace: DecodedCallTrace,
taker_address: str,
token_address: str,
child_transfers: List[Transfer],
) -> int:
if trace.error is not None:
return 0
if len(child_transfers) < 2:
raise ValueError(
f"A settled order should consist of 2 child transfers, not {len(child_transfers)}."
)
if taker_address == ANY_TAKER_ADDRESS:
for transfer in child_transfers:
if transfer.token_address == token_address:
return transfer.amount
else:
for transfer in child_transfers:
if transfer.to_address == taker_address:
return transfer.amount
raise RuntimeError("Unable to find transfers matching 0x order.")
def _get_0x_token_out_data(
trace: DecodedCallTrace, child_transfers: List[Transfer]
) -> Tuple[str, int]:
order: List = trace.inputs["order"]
token_out_address = order[0]
if trace.function_signature in RFQ_SIGNATURES:
taker_address = order[5]
elif trace.function_signature in LIMIT_SIGNATURES:
taker_address = order[6]
else:
raise RuntimeError(
f"0x orderbook function {trace.function_signature} is not supported"
)
token_out_amount = _get_taker_token_transfer_amount(
trace, taker_address, token_out_address, child_transfers
)
return token_out_address, token_out_amount
def _get_0x_token_in_data(trace: DecodedCallTrace) -> Tuple[str, int]:
order: List = trace.inputs["order"]
token_in_address = order[1]
token_in_amount = trace.inputs["takerTokenFillAmount"]
return token_in_address, token_in_amount
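The 0x order tuple is positional: maker token at index 0, taker token at index 1, and the taker address at index 5 for RFQ orders or index 6 for limit orders, matching the signatures above. Illustrative indexing (hypothetical values; the field names are an assumption based on those signatures):

# Assumed RFQ order layout: (makerToken, takerToken, makerAmount, takerAmount,
#                            maker, taker, txOrigin, pool, expiry, salt)
rfq_order = [
    "0x" + "aa" * 20,  # [0] -> token_out_address
    "0x" + "bb" * 20,  # [1] -> token_in_address
    100, 90,
    "0x" + "cc" * 20,
    "0x" + "dd" * 20,  # [5] -> taker_address for RFQ signatures
    "0x" + "ee" * 20, "0x" + "00" * 32, 1_700_000_000, 0,
]
assert rfq_order[0] == "0x" + "aa" * 20
assert rfq_order[5] == "0x" + "dd" * 20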


@@ -4,12 +4,13 @@ from mev_inspect.abi import get_abi
from mev_inspect.decode import ABIDecoder
from mev_inspect.schemas.blocks import CallAction, CallResult
from mev_inspect.schemas.traces import (
CallTrace,
Classification,
ClassifiedTrace,
CallTrace,
DecodedCallTrace,
Trace,
TraceType,
)
from mev_inspect.schemas.traces import Trace, TraceType
from .specs import ALL_CLASSIFIER_SPECS


@@ -1,52 +1,17 @@
from typing import Dict, List, Optional
from web3 import Web3
from mev_inspect.traces import get_child_traces
from mev_inspect.schemas.traces import (
ClassifiedTrace,
Classification,
Protocol,
)
from typing import List, Optional
from mev_inspect.schemas.liquidations import Liquidation
from mev_inspect.abi import get_raw_abi
from mev_inspect.transfers import ETH_TOKEN_ADDRESS
from mev_inspect.schemas.traces import Classification, ClassifiedTrace, Protocol
from mev_inspect.traces import get_child_traces
V2_COMPTROLLER_ADDRESS = "0x3d9819210A31b4961b30EF54bE2aeD79B9c9Cd3B"
V2_C_ETHER = "0x4Ddc2D193948926D02f9B1fE9e1daa0718270ED5"
CREAM_COMPTROLLER_ADDRESS = "0x3d5BC3c8d13dcB8bF317092d84783c2697AE9258"
CREAM_CR_ETHER = "0xD06527D5e56A3495252A528C4987003b712860eE"
# helper, only queried once in the beginning (inspect_block)
def fetch_all_underlying_markets(w3: Web3, protocol: Protocol) -> Dict[str, str]:
if protocol == Protocol.compound_v2:
c_ether = V2_C_ETHER
address = V2_COMPTROLLER_ADDRESS
elif protocol == Protocol.cream:
c_ether = CREAM_CR_ETHER
address = CREAM_COMPTROLLER_ADDRESS
else:
raise ValueError(f"No Comptroller found for {protocol}")
token_mapping = {}
comptroller_abi = get_raw_abi("Comptroller", Protocol.compound_v2)
comptroller_instance = w3.eth.contract(address=address, abi=comptroller_abi)
markets = comptroller_instance.functions.getAllMarkets().call()
token_abi = get_raw_abi("CToken", Protocol.compound_v2)
for token in markets:
# make an exception for cETH (as it has no .underlying())
if token != c_ether:
token_instance = w3.eth.contract(address=token, abi=token_abi)
underlying_token = token_instance.functions.underlying().call()
token_mapping[
token.lower()
] = underlying_token.lower() # make k:v lowercase for consistency
return token_mapping
def get_compound_liquidations(
traces: List[ClassifiedTrace],
collateral_by_c_token_address: Dict[str, str],
collateral_by_cr_token_address: Dict[str, str],
) -> List[Liquidation]:
"""Inspect list of classified traces and identify liquidation"""
@@ -67,51 +32,41 @@ def get_compound_liquidations(
trace.transaction_hash, trace.trace_address, traces
)
seize_trace = _get_seize_call(child_traces)
underlying_markets = {}
if trace.protocol == Protocol.compound_v2:
underlying_markets = collateral_by_c_token_address
elif trace.protocol == Protocol.cream:
underlying_markets = collateral_by_cr_token_address
if (
seize_trace is not None
and seize_trace.inputs is not None
and len(underlying_markets) != 0
):
if seize_trace is not None and seize_trace.inputs is not None:
c_token_collateral = trace.inputs["cTokenCollateral"]
if trace.abi_name == "CEther":
liquidations.append(
Liquidation(
liquidated_user=trace.inputs["borrower"],
collateral_token_address=ETH_TOKEN_ADDRESS, # WETH since all cEther liquidations provide Ether
debt_token_address=c_token_collateral,
liquidator_user=seize_trace.inputs["liquidator"],
debt_purchase_amount=trace.value,
protocol=trace.protocol,
received_amount=seize_trace.inputs["seizeTokens"],
received_token_address=trace.to_address,
transaction_hash=trace.transaction_hash,
trace_address=trace.trace_address,
block_number=trace.block_number,
error=trace.error,
)
)
elif (
trace.abi_name == "CToken"
): # cToken liquidations where liquidator pays back via token transfer
c_token_address = trace.to_address
liquidations.append(
Liquidation(
liquidated_user=trace.inputs["borrower"],
collateral_token_address=underlying_markets[
c_token_address
],
debt_token_address=c_token_collateral,
liquidator_user=seize_trace.inputs["liquidator"],
debt_purchase_amount=trace.inputs["repayAmount"],
protocol=trace.protocol,
received_amount=seize_trace.inputs["seizeTokens"],
received_token_address=trace.to_address,
transaction_hash=trace.transaction_hash,
trace_address=trace.trace_address,
block_number=trace.block_number,
error=trace.error,
)
)
return liquidations


@@ -0,0 +1,22 @@
import asyncio
import signal
from functools import wraps
def coro(f):
@wraps(f)
def wrapper(*args, **kwargs):
loop = asyncio.get_event_loop()
def cancel_task_callback():
for task in asyncio.all_tasks():
task.cancel()
for sig in (signal.SIGINT, signal.SIGTERM):
loop.add_signal_handler(sig, cancel_task_callback)
try:
loop.run_until_complete(f(*args, **kwargs))
finally:
loop.run_until_complete(loop.shutdown_asyncgens())
return wrapper
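
This decorator lets an async entrypoint be invoked from synchronous CLI code while cancelling in-flight tasks on SIGINT/SIGTERM (note loop.add_signal_handler is Unix-only). A usage sketch with a hypothetical command:

import asyncio

@coro
async def inspect_range(after_block: int, before_block: int) -> None:
    # stand-in for the real async inspection work
    for block_number in range(after_block, before_block):
        await asyncio.sleep(0)
        print(f"inspected block {block_number}")

if __name__ == "__main__":
    # called like a plain synchronous function, e.g. from a click command
    inspect_range(100, 103)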


@@ -4,17 +4,20 @@ from uuid import uuid4
from mev_inspect.models.arbitrages import ArbitrageModel
from mev_inspect.schemas.arbitrages import Arbitrage
from .shared import delete_by_block_range
def delete_arbitrages_for_block(
def delete_arbitrages_for_blocks(
db_session,
block_number: int,
after_block_number: int,
before_block_number: int,
) -> None:
(
db_session.query(ArbitrageModel)
.filter(ArbitrageModel.block_number == block_number)
.delete()
delete_by_block_range(
db_session,
ArbitrageModel,
after_block_number,
before_block_number,
)
db_session.commit()
@@ -37,6 +40,7 @@ def write_arbitrages(
start_amount=arbitrage.start_amount,
end_amount=arbitrage.end_amount,
profit_amount=arbitrage.profit_amount,
error=arbitrage.error,
)
)


@@ -0,0 +1,39 @@
from datetime import datetime
from typing import List
from mev_inspect.db import write_as_csv
from mev_inspect.schemas.blocks import Block
def delete_blocks(
db_session,
after_block_number: int,
before_block_number: int,
) -> None:
db_session.execute(
"""
DELETE FROM blocks
WHERE
block_number >= :after_block_number AND
block_number < :before_block_number
""",
params={
"after_block_number": after_block_number,
"before_block_number": before_block_number,
},
)
db_session.commit()
def write_blocks(
db_session,
blocks: List[Block],
) -> None:
items_generator = (
(
block.block_number,
datetime.fromtimestamp(block.block_timestamp),
)
for block in blocks
)
write_as_csv(db_session, "blocks", items_generator)


@@ -4,17 +4,20 @@ from typing import List
from mev_inspect.models.liquidations import LiquidationModel
from mev_inspect.schemas.liquidations import Liquidation
from .shared import delete_by_block_range
def delete_liquidations_for_block(
def delete_liquidations_for_blocks(
db_session,
block_number: int,
after_block_number: int,
before_block_number: int,
) -> None:
(
db_session.query(LiquidationModel)
.filter(LiquidationModel.block_number == block_number)
.delete()
delete_by_block_range(
db_session,
LiquidationModel,
after_block_number,
before_block_number,
)
db_session.commit()


@@ -4,17 +4,20 @@ from typing import List
from mev_inspect.models.miner_payments import MinerPaymentModel
from mev_inspect.schemas.miner_payments import MinerPayment
from .shared import delete_by_block_range
def delete_miner_payments_for_block(
def delete_miner_payments_for_blocks(
db_session,
block_number: int,
after_block_number: int,
before_block_number: int,
) -> None:
(
db_session.query(MinerPaymentModel)
.filter(MinerPaymentModel.block_number == block_number)
.delete()
delete_by_block_range(
db_session,
MinerPaymentModel,
after_block_number,
before_block_number,
)
db_session.commit()


@@ -0,0 +1,30 @@
import json
from typing import List
from mev_inspect.crud.shared import delete_by_block_range
from mev_inspect.models.nft_trades import NftTradeModel
from mev_inspect.schemas.nft_trades import NftTrade
def delete_nft_trades_for_blocks(
db_session,
after_block_number: int,
before_block_number: int,
) -> None:
delete_by_block_range(
db_session,
NftTradeModel,
after_block_number,
before_block_number,
)
db_session.commit()
def write_nft_trades(
db_session,
nft_trades: List[NftTrade],
) -> None:
models = [NftTradeModel(**json.loads(nft_trade.json())) for nft_trade in nft_trades]
db_session.bulk_save_objects(models)
db_session.commit()
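
The Model(**json.loads(obj.json())) pattern above round-trips a pydantic object through JSON so nested types land as database-friendly primitives. A minimal sketch with a hypothetical schema (the real NftTrade has more fields):

import json
from pydantic import BaseModel

class Item(BaseModel):  # hypothetical stand-in for an NftTrade-style schema
    block_number: int
    token_id: int

payload = json.loads(Item(block_number=1, token_id=42).json())
assert payload == {"block_number": 1, "token_id": 42}
# payload is a plain dict, safe to splat into a SQLAlchemy model:
# ItemModel(**payload)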


@@ -0,0 +1,17 @@
from typing import List
from sqlalchemy.dialects.postgresql import insert
from mev_inspect.models.prices import PriceModel
from mev_inspect.schemas.prices import Price
def write_prices(db_session, prices: List[Price]) -> None:
insert_statement = (
insert(PriceModel.__table__)
.values([price.dict() for price in prices])
.on_conflict_do_nothing()
)
db_session.execute(insert_statement)
db_session.commit()
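
Because of on_conflict_do_nothing, re-running a price fetch is idempotent: rows that already exist under the table's unique constraint are skipped rather than raising. A hedged sketch, assuming the column names (they may differ from the real PriceModel):

from sqlalchemy.dialects.postgresql import insert
from mev_inspect.models.prices import PriceModel

rows = [
    # column names here are assumptions for illustration
    {"token_address": "0x" + "ee" * 20, "timestamp": "2022-01-01T00:00:00", "usd_price": 1.0},
]
statement = insert(PriceModel.__table__).values(rows).on_conflict_do_nothing()
# db_session.execute(statement)
# db_session.commit()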

mev_inspect/crud/punks.py

@@ -0,0 +1,90 @@
import json
from typing import List
from mev_inspect.models.punks import (
PunkBidAcceptanceModel,
PunkBidModel,
PunkSnipeModel,
)
from mev_inspect.schemas.punk_accept_bid import PunkBidAcceptance
from mev_inspect.schemas.punk_bid import PunkBid
from mev_inspect.schemas.punk_snipe import PunkSnipe
from .shared import delete_by_block_range
def delete_punk_bid_acceptances_for_blocks(
db_session,
after_block_number: int,
before_block_number: int,
) -> None:
delete_by_block_range(
db_session,
PunkBidAcceptanceModel,
after_block_number,
before_block_number,
)
db_session.commit()
def write_punk_bid_acceptances(
db_session,
punk_bid_acceptances: List[PunkBidAcceptance],
) -> None:
models = [
PunkBidAcceptanceModel(**json.loads(punk_bid_acceptance.json()))
for punk_bid_acceptance in punk_bid_acceptances
]
db_session.bulk_save_objects(models)
db_session.commit()
def delete_punk_bids_for_blocks(
db_session,
after_block_number: int,
before_block_number: int,
) -> None:
delete_by_block_range(
db_session,
PunkBidModel,
after_block_number,
before_block_number,
)
db_session.commit()
def write_punk_bids(
db_session,
punk_bids: List[PunkBid],
) -> None:
models = [PunkBidModel(**json.loads(punk_bid.json())) for punk_bid in punk_bids]
db_session.bulk_save_objects(models)
db_session.commit()
def delete_punk_snipes_for_blocks(
db_session,
after_block_number: int,
before_block_number: int,
) -> None:
delete_by_block_range(
db_session,
PunkSnipeModel,
after_block_number,
before_block_number,
)
db_session.commit()
def write_punk_snipes(
db_session,
punk_snipes: List[PunkSnipe],
) -> None:
models = [
PunkSnipeModel(**json.loads(punk_snipe.json())) for punk_snipe in punk_snipes
]
db_session.bulk_save_objects(models)
db_session.commit()


@@ -0,0 +1,67 @@
from typing import List
from uuid import uuid4
from mev_inspect.models.sandwiches import SandwichModel
from mev_inspect.schemas.sandwiches import Sandwich
from .shared import delete_by_block_range
def delete_sandwiches_for_blocks(
db_session,
after_block_number: int,
before_block_number: int,
) -> None:
delete_by_block_range(
db_session,
SandwichModel,
after_block_number,
before_block_number,
)
db_session.commit()
def write_sandwiches(
db_session,
sandwiches: List[Sandwich],
) -> None:
sandwich_models = []
sandwiched_swaps = []
for sandwich in sandwiches:
sandwich_id = str(uuid4())
sandwich_models.append(
SandwichModel(
id=sandwich_id,
block_number=sandwich.block_number,
sandwicher_address=sandwich.sandwicher_address,
frontrun_swap_transaction_hash=sandwich.frontrun_swap.transaction_hash,
frontrun_swap_trace_address=sandwich.frontrun_swap.trace_address,
backrun_swap_transaction_hash=sandwich.backrun_swap.transaction_hash,
backrun_swap_trace_address=sandwich.backrun_swap.trace_address,
)
)
for swap in sandwich.sandwiched_swaps:
sandwiched_swaps.append(
{
"sandwich_id": sandwich_id,
"block_number": swap.block_number,
"transaction_hash": swap.transaction_hash,
"trace_address": swap.trace_address,
}
)
if len(sandwich_models) > 0:
db_session.bulk_save_objects(sandwich_models)
db_session.execute(
"""
INSERT INTO sandwiched_swaps
(sandwich_id, block_number, transaction_hash, trace_address)
VALUES
(:sandwich_id, :block_number, :transaction_hash, :trace_address)
""",
params=sandwiched_swaps,
)
db_session.commit()


@@ -0,0 +1,20 @@
from typing import Type
from mev_inspect.models.base import Base
def delete_by_block_range(
db_session,
model_class: Type[Base],
after_block_number,
before_block_number,
) -> None:
(
db_session.query(model_class)
.filter(model_class.block_number >= after_block_number)
.filter(model_class.block_number < before_block_number)
.delete()
)
db_session.commit()
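
The range is half-open: rows with block_number >= after_block_number and < before_block_number are removed. So deleting swaps for blocks 13,000,000 through 13,000,009 inclusive looks like this (usage sketch, assuming an open SQLAlchemy session):

from mev_inspect.crud.shared import delete_by_block_range
from mev_inspect.models.swaps import SwapModel

def clear_swaps(db_session) -> None:
    # before_block_number (13_000_010) itself is excluded
    delete_by_block_range(db_session, SwapModel, 13_000_000, 13_000_010)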


@@ -4,17 +4,20 @@ from typing import List
from mev_inspect.models.swaps import SwapModel
from mev_inspect.schemas.swaps import Swap
from .shared import delete_by_block_range
def delete_swaps_for_block(
def delete_swaps_for_blocks(
db_session,
block_number: int,
after_block_number: int,
before_block_number: int,
) -> None:
(
db_session.query(SwapModel)
.filter(SwapModel.block_number == block_number)
.delete()
delete_by_block_range(
db_session,
SwapModel,
after_block_number,
before_block_number,
)
db_session.commit()


@@ -1,18 +1,24 @@
import json
from datetime import datetime, timezone
from typing import List
from mev_inspect.db import to_postgres_list, write_as_csv
from mev_inspect.models.traces import ClassifiedTraceModel
from mev_inspect.schemas.traces import ClassifiedTrace
from .shared import delete_by_block_range
def delete_classified_traces_for_block(
def delete_classified_traces_for_blocks(
db_session,
block_number: int,
after_block_number: int,
before_block_number: int,
) -> None:
(
db_session.query(ClassifiedTraceModel)
.filter(ClassifiedTraceModel.block_number == block_number)
.delete()
delete_by_block_range(
db_session,
ClassifiedTraceModel,
after_block_number,
before_block_number,
)
db_session.commit()
@@ -22,29 +28,35 @@ def write_classified_traces(
db_session,
classified_traces: List[ClassifiedTrace],
) -> None:
models = []
for trace in classified_traces:
inputs_json = (json.loads(trace.json(include={"inputs"}))["inputs"],)
models.append(
ClassifiedTraceModel(
transaction_hash=trace.transaction_hash,
block_number=trace.block_number,
classification=trace.classification.value,
trace_type=trace.type.value,
trace_address=trace.trace_address,
protocol=str(trace.protocol),
abi_name=trace.abi_name,
function_name=trace.function_name,
function_signature=trace.function_signature,
inputs=inputs_json,
from_address=trace.from_address,
to_address=trace.to_address,
gas=trace.gas,
value=trace.value,
gas_used=trace.gas_used,
error=trace.error,
)
classified_at = datetime.now(timezone.utc)
items = (
(
classified_at,
trace.transaction_hash,
trace.block_number,
trace.classification.value,
trace.type.value,
str(trace.protocol),
trace.abi_name,
trace.function_name,
trace.function_signature,
_inputs_as_json(trace),
trace.from_address,
trace.to_address,
trace.gas,
trace.value,
trace.gas_used,
trace.error,
to_postgres_list(trace.trace_address),
trace.transaction_position,
)
for trace in classified_traces
)
db_session.bulk_save_objects(models)
db_session.commit()
write_as_csv(db_session, "classified_traces", items)
def _inputs_as_json(trace) -> str:
inputs = json.dumps(json.loads(trace.json(include={"inputs"}))["inputs"])
inputs_with_array = f"[{inputs}]"
return inputs_with_array
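
The wrapping step turns the serialized inputs dict into a one-element JSON array string, which appears to match the array shape the inputs column expects once the value travels through the text-based COPY path. A pure-Python illustration:

import json

inputs = {"borrower": "0xabc", "repayAmount": 10}
wrapped = f"[{json.dumps(inputs)}]"
assert json.loads(wrapped) == [inputs]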


@@ -4,15 +4,19 @@ from typing import List
from mev_inspect.models.transfers import TransferModel
from mev_inspect.schemas.transfers import Transfer
from .shared import delete_by_block_range
def delete_transfers_for_block(
def delete_transfers_for_blocks(
db_session,
block_number: int,
after_block_number: int,
before_block_number: int,
) -> None:
(
db_session.query(TransferModel)
.filter(TransferModel.block_number == block_number)
.delete()
delete_by_block_range(
db_session,
TransferModel,
after_block_number,
before_block_number,
)
db_session.commit()


@@ -1,9 +1,11 @@
import os
from typing import Optional
from typing import Any, Iterable, List, Optional
from sqlalchemy import create_engine, orm
from sqlalchemy.orm import sessionmaker
from mev_inspect.string_io import StringIteratorIO
def get_trace_database_uri() -> Optional[str]:
username = os.getenv("TRACE_DB_USER")
@@ -12,7 +14,7 @@ def get_trace_database_uri() -> Optional[str]:
db_name = "trace_db"
if all(field is not None for field in [username, password, host]):
return f"postgresql://{username}:{password}@{host}/{db_name}"
return f"postgresql+psycopg2://{username}:{password}@{host}/{db_name}"
return None
@@ -22,27 +24,70 @@ def get_inspect_database_uri():
password = os.getenv("POSTGRES_PASSWORD")
host = os.getenv("POSTGRES_HOST")
db_name = "mev_inspect"
return f"postgresql://{username}:{password}@{host}/{db_name}"
return f"postgresql+psycopg2://{username}:{password}@{host}/{db_name}"
def _get_engine(uri: str):
return create_engine(uri)
return create_engine(
uri,
executemany_mode="values",
executemany_values_page_size=10000,
executemany_batch_page_size=500,
)
def _get_session(uri: str):
Session = sessionmaker(bind=_get_engine(uri))
return Session()
def _get_sessionmaker(uri: str):
return sessionmaker(bind=_get_engine(uri))
def get_inspect_session() -> orm.Session:
def get_inspect_sessionmaker():
uri = get_inspect_database_uri()
return _get_session(uri)
return _get_sessionmaker(uri)
def get_trace_session() -> Optional[orm.Session]:
def get_trace_sessionmaker():
uri = get_trace_database_uri()
if uri is not None:
return _get_session(uri)
return _get_sessionmaker(uri)
return None
def get_inspect_session() -> orm.Session:
Session = get_inspect_sessionmaker()
return Session()
def get_trace_session() -> Optional[orm.Session]:
Session = get_trace_sessionmaker()
if Session is not None:
return Session()
return None
def write_as_csv(
db_session,
table_name: str,
items: Iterable[Iterable[Any]],
) -> None:
csv_iterator = StringIteratorIO(
("|".join(map(_clean_csv_value, item)) + "\n" for item in items)
)
with db_session.connection().connection.cursor() as cursor:
cursor.copy_from(csv_iterator, table_name, sep="|")
def _clean_csv_value(value: Optional[Any]) -> str:
if value is None:
return r"\N"
return str(value).replace("\n", "\\n")
def to_postgres_list(values: List[Any]) -> str:
if len(values) == 0:
return "{}"
return "{" + ",".join(map(str, values)) + "}"


@@ -1,7 +1,6 @@
from typing import Dict, Optional
import eth_utils.abi
from eth_abi import decode_abi
from eth_abi.exceptions import InsufficientDataBytes, NonEmptyPaddingBytes
from hexbytes._utils import hexstr_to_bytes
@@ -9,7 +8,6 @@ from hexbytes._utils import hexstr_to_bytes
from mev_inspect.schemas.abi import ABI, ABIFunctionDescription
from mev_inspect.schemas.call_data import CallData
# 0x + 8 characters
SELECTOR_LENGTH = 10
@@ -40,7 +38,7 @@ class ABIDecoder:
try:
decoded = decode_abi(types, hexstr_to_bytes(params))
except (InsufficientDataBytes, NonEmptyPaddingBytes):
except (InsufficientDataBytes, NonEmptyPaddingBytes, OverflowError):
return None
return CallData(


@@ -1,5 +1,5 @@
import logging
from typing import Optional
from typing import List, Optional
from sqlalchemy import orm
from web3 import Web3
@@ -7,93 +7,222 @@ from web3 import Web3
from mev_inspect.arbitrages import get_arbitrages
from mev_inspect.block import create_from_block_number
from mev_inspect.classifiers.trace import TraceClassifier
from mev_inspect.crud.arbitrages import (
delete_arbitrages_for_block,
write_arbitrages,
)
from mev_inspect.crud.traces import (
delete_classified_traces_for_block,
write_classified_traces,
)
from mev_inspect.crud.miner_payments import (
delete_miner_payments_for_block,
write_miner_payments,
)
from mev_inspect.crud.swaps import delete_swaps_for_block, write_swaps
from mev_inspect.crud.transfers import delete_transfers_for_block, write_transfers
from mev_inspect.crud.arbitrages import delete_arbitrages_for_blocks, write_arbitrages
from mev_inspect.crud.blocks import delete_blocks, write_blocks
from mev_inspect.crud.liquidations import (
delete_liquidations_for_block,
delete_liquidations_for_blocks,
write_liquidations,
)
from mev_inspect.crud.miner_payments import (
delete_miner_payments_for_blocks,
write_miner_payments,
)
from mev_inspect.crud.nft_trades import delete_nft_trades_for_blocks, write_nft_trades
from mev_inspect.crud.punks import (
delete_punk_bid_acceptances_for_blocks,
delete_punk_bids_for_blocks,
delete_punk_snipes_for_blocks,
write_punk_bid_acceptances,
write_punk_bids,
write_punk_snipes,
)
from mev_inspect.crud.sandwiches import delete_sandwiches_for_blocks, write_sandwiches
from mev_inspect.crud.swaps import delete_swaps_for_blocks, write_swaps
from mev_inspect.crud.traces import (
delete_classified_traces_for_blocks,
write_classified_traces,
)
from mev_inspect.crud.transfers import delete_transfers_for_blocks, write_transfers
from mev_inspect.liquidations import get_liquidations
from mev_inspect.miner_payments import get_miner_payments
from mev_inspect.nft_trades import get_nft_trades
from mev_inspect.punks import get_punk_bid_acceptances, get_punk_bids, get_punk_snipes
from mev_inspect.sandwiches import get_sandwiches
from mev_inspect.schemas.arbitrages import Arbitrage
from mev_inspect.schemas.blocks import Block
from mev_inspect.schemas.liquidations import Liquidation
from mev_inspect.schemas.miner_payments import MinerPayment
from mev_inspect.schemas.nft_trades import NftTrade
from mev_inspect.schemas.punk_accept_bid import PunkBidAcceptance
from mev_inspect.schemas.punk_bid import PunkBid
from mev_inspect.schemas.punk_snipe import PunkSnipe
from mev_inspect.schemas.sandwiches import Sandwich
from mev_inspect.schemas.swaps import Swap
from mev_inspect.schemas.traces import ClassifiedTrace
from mev_inspect.schemas.transfers import Transfer
from mev_inspect.swaps import get_swaps
from mev_inspect.transfers import get_transfers
from mev_inspect.liquidations import get_liquidations
logger = logging.getLogger(__name__)
async def inspect_block(
inspect_db_session: orm.Session,
base_provider,
w3: Web3,
trace_clasifier: TraceClassifier,
trace_classifier: TraceClassifier,
block_number: int,
trace_db_session: Optional[orm.Session],
should_write_classified_traces: bool = True,
):
block = await create_from_block_number(
base_provider,
await inspect_many_blocks(
inspect_db_session,
w3,
trace_classifier,
block_number,
block_number + 1,
trace_db_session,
should_write_classified_traces,
)
logger.info(f"Block: {block_number} -- Total traces: {len(block.traces)}")
total_transactions = len(
set(t.transaction_hash for t in block.traces if t.transaction_hash is not None)
)
logger.info(f"Block: {block_number} -- Total transactions: {total_transactions}")
async def inspect_many_blocks(
inspect_db_session: orm.Session,
w3: Web3,
trace_classifier: TraceClassifier,
after_block_number: int,
before_block_number: int,
trace_db_session: Optional[orm.Session],
should_write_classified_traces: bool = True,
):
all_blocks: List[Block] = []
all_classified_traces: List[ClassifiedTrace] = []
all_transfers: List[Transfer] = []
all_swaps: List[Swap] = []
all_arbitrages: List[Arbitrage] = []
all_liquidations: List[Liquidation] = []
all_sandwiches: List[Sandwich] = []
classified_traces = trace_clasifier.classify(block.traces)
logger.info(
f"Block: {block_number} -- Returned {len(classified_traces)} classified traces"
)
all_punk_bids: List[PunkBid] = []
all_punk_bid_acceptances: List[PunkBidAcceptance] = []
all_punk_snipes: List[PunkSnipe] = []
all_miner_payments: List[MinerPayment] = []
all_nft_trades: List[NftTrade] = []
for block_number in range(after_block_number, before_block_number):
block = await create_from_block_number(
w3,
block_number,
trace_db_session,
)
logger.info(f"Block: {block_number} -- Total traces: {len(block.traces)}")
total_transactions = len(
set(
t.transaction_hash
for t in block.traces
if t.transaction_hash is not None
)
)
logger.info(
f"Block: {block_number} -- Total transactions: {total_transactions}"
)
classified_traces = trace_classifier.classify(block.traces)
logger.info(
f"Block: {block_number} -- Returned {len(classified_traces)} classified traces"
)
transfers = get_transfers(classified_traces)
logger.info(f"Block: {block_number} -- Found {len(transfers)} transfers")
swaps = get_swaps(classified_traces)
logger.info(f"Block: {block_number} -- Found {len(swaps)} swaps")
arbitrages = get_arbitrages(swaps)
logger.info(f"Block: {block_number} -- Found {len(arbitrages)} arbitrages")
liquidations = get_liquidations(classified_traces)
logger.info(f"Block: {block_number} -- Found {len(liquidations)} liquidations")
sandwiches = get_sandwiches(swaps)
logger.info(f"Block: {block_number} -- Found {len(sandwiches)} sandwiches")
punk_bids = get_punk_bids(classified_traces)
punk_bid_acceptances = get_punk_bid_acceptances(classified_traces)
punk_snipes = get_punk_snipes(punk_bids, punk_bid_acceptances)
logger.info(f"Block: {block_number} -- Found {len(punk_snipes)} punk snipes")
nft_trades = get_nft_trades(classified_traces)
logger.info(f"Block: {block_number} -- Found {len(nft_trades)} nft trades")
miner_payments = get_miner_payments(
block.miner, block.base_fee_per_gas, classified_traces, block.receipts
)
all_blocks.append(block)
all_classified_traces.extend(classified_traces)
all_transfers.extend(transfers)
all_swaps.extend(swaps)
all_arbitrages.extend(arbitrages)
all_liquidations.extend(liquidations)
all_sandwiches.extend(sandwiches)
all_punk_bids.extend(punk_bids)
all_punk_bid_acceptances.extend(punk_bid_acceptances)
all_punk_snipes.extend(punk_snipes)
all_nft_trades.extend(nft_trades)
all_miner_payments.extend(miner_payments)
logger.info("Writing data")
delete_blocks(inspect_db_session, after_block_number, before_block_number)
write_blocks(inspect_db_session, all_blocks)
if should_write_classified_traces:
delete_classified_traces_for_block(inspect_db_session, block_number)
write_classified_traces(inspect_db_session, classified_traces)
delete_classified_traces_for_blocks(
inspect_db_session, after_block_number, before_block_number
)
write_classified_traces(inspect_db_session, all_classified_traces)
transfers = get_transfers(classified_traces)
logger.info(f"Block: {block_number} -- Found {len(transfers)} transfers")
delete_transfers_for_block(inspect_db_session, block_number)
write_transfers(inspect_db_session, transfers)
swaps = get_swaps(classified_traces)
logger.info(f"Block: {block_number} -- Found {len(swaps)} swaps")
delete_swaps_for_block(inspect_db_session, block_number)
write_swaps(inspect_db_session, swaps)
arbitrages = get_arbitrages(swaps)
logger.info(f"Block: {block_number} -- Found {len(arbitrages)} arbitrages")
delete_arbitrages_for_block(inspect_db_session, block_number)
write_arbitrages(inspect_db_session, arbitrages)
liquidations = get_liquidations(classified_traces)
logger.info(f"Block: {block_number} -- Found {len(liquidations)} liquidations")
delete_liquidations_for_block(inspect_db_session, block_number)
write_liquidations(inspect_db_session, liquidations)
miner_payments = get_miner_payments(
block.miner, block.base_fee_per_gas, classified_traces, block.receipts
delete_transfers_for_blocks(
inspect_db_session, after_block_number, before_block_number
)
write_transfers(inspect_db_session, all_transfers)
delete_miner_payments_for_block(inspect_db_session, block_number)
write_miner_payments(inspect_db_session, miner_payments)
delete_swaps_for_blocks(inspect_db_session, after_block_number, before_block_number)
write_swaps(inspect_db_session, all_swaps)
delete_arbitrages_for_blocks(
inspect_db_session, after_block_number, before_block_number
)
write_arbitrages(inspect_db_session, all_arbitrages)
delete_liquidations_for_blocks(
inspect_db_session, after_block_number, before_block_number
)
write_liquidations(inspect_db_session, all_liquidations)
delete_sandwiches_for_blocks(
inspect_db_session, after_block_number, before_block_number
)
write_sandwiches(inspect_db_session, all_sandwiches)
delete_punk_bids_for_blocks(
inspect_db_session, after_block_number, before_block_number
)
write_punk_bids(inspect_db_session, all_punk_bids)
delete_punk_bid_acceptances_for_blocks(
inspect_db_session, after_block_number, before_block_number
)
write_punk_bid_acceptances(inspect_db_session, all_punk_bid_acceptances)
delete_punk_snipes_for_blocks(
inspect_db_session, after_block_number, before_block_number
)
write_punk_snipes(inspect_db_session, all_punk_snipes)
delete_nft_trades_for_blocks(
inspect_db_session, after_block_number, before_block_number
)
write_nft_trades(inspect_db_session, all_nft_trades)
delete_miner_payments_for_blocks(
inspect_db_session, after_block_number, before_block_number
)
write_miner_payments(inspect_db_session, all_miner_payments)
logger.info("Done writing")


@@ -1,22 +1,28 @@
import asyncio
import logging
import sys
import traceback
from asyncio import CancelledError
from typing import Optional
from sqlalchemy import orm
from web3 import Web3
from web3.eth import AsyncEth
from mev_inspect.block import create_from_block_number
from mev_inspect.classifiers.trace import TraceClassifier
from mev_inspect.db import get_inspect_session, get_trace_session
from mev_inspect.inspect_block import inspect_block
from mev_inspect.inspect_block import inspect_block, inspect_many_blocks
from mev_inspect.methods import get_block_receipts, trace_block
from mev_inspect.provider import get_base_provider
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logger = logging.getLogger(__name__)
# add missing parity methods
# this is a bit gross
AsyncEth.trace_block = trace_block
AsyncEth.get_block_receipts = get_block_receipts
class MEVInspector:
def __init__(
self,
@@ -24,55 +30,83 @@ class MEVInspector:
max_concurrency: int = 1,
request_timeout: int = 300,
):
self.inspect_db_session = get_inspect_session()
self.trace_db_session = get_trace_session()
self.base_provider = get_base_provider(rpc, request_timeout=request_timeout)
self.w3 = Web3(self.base_provider, modules={"eth": (AsyncEth,)}, middlewares=[])
base_provider = get_base_provider(rpc, request_timeout=request_timeout)
self.w3 = Web3(base_provider, modules={"eth": (AsyncEth,)}, middlewares=[])
self.trace_classifier = TraceClassifier()
self.max_concurrency = asyncio.Semaphore(max_concurrency)
async def create_from_block(self, block_number: int):
async def create_from_block(
self,
trace_db_session: Optional[orm.Session],
block_number: int,
):
return await create_from_block_number(
base_provider=self.base_provider,
w3=self.w3,
block_number=block_number,
trace_db_session=self.trace_db_session,
trace_db_session=trace_db_session,
)
async def inspect_single_block(self, block: int):
async def inspect_single_block(
self,
inspect_db_session: orm.Session,
block: int,
trace_db_session: Optional[orm.Session],
):
return await inspect_block(
self.inspect_db_session,
self.base_provider,
inspect_db_session,
self.w3,
self.trace_classifier,
block,
trace_db_session=self.trace_db_session,
trace_db_session=trace_db_session,
)
async def inspect_many_blocks(self, after_block: int, before_block: int):
async def inspect_many_blocks(
self,
inspect_db_session: orm.Session,
trace_db_session: Optional[orm.Session],
after_block: int,
before_block: int,
block_batch_size: int = 10,
):
tasks = []
for block_number in range(after_block, before_block):
for block_number in range(after_block, before_block, block_batch_size):
batch_after_block = block_number
batch_before_block = min(block_number + block_batch_size, before_block)
tasks.append(
asyncio.ensure_future(
self.safe_inspect_block(block_number=block_number)
self.safe_inspect_many_blocks(
inspect_db_session,
trace_db_session,
after_block_number=batch_after_block,
before_block_number=batch_before_block,
)
)
)
logger.info(f"Gathered {len(tasks)} blocks to inspect")
logger.info(f"Gathered {before_block-after_block} blocks to inspect")
try:
await asyncio.gather(*tasks)
except CancelledError:
logger.info("Requested to exit, cleaning up...")
except Exception as e:
logger.error(f"Existed due to {type(e)}")
logger.error(f"Exited due to {type(e)}")
traceback.print_exc()
raise
async def safe_inspect_block(self, block_number: int):
async def safe_inspect_many_blocks(
self,
inspect_db_session: orm.Session,
trace_db_session: Optional[orm.Session],
after_block_number: int,
before_block_number: int,
):
async with self.max_concurrency:
return await inspect_block(
self.inspect_db_session,
self.base_provider,
return await inspect_many_blocks(
inspect_db_session,
self.w3,
self.trace_classifier,
block_number,
trace_db_session=self.trace_db_session,
after_block_number,
before_block_number,
trace_db_session=trace_db_session,
)
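
The batching arithmetic above splits the half-open block range into block_batch_size chunks, clamping the last chunk to before_block. A small check with hypothetical values:

after_block, before_block, block_batch_size = 100, 125, 10
batches = [
    (start, min(start + block_batch_size, before_block))
    for start in range(after_block, before_block, block_batch_size)
]
assert batches == [(100, 110), (110, 120), (120, 125)]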


@@ -1,11 +1,9 @@
from typing import List
from mev_inspect.aave_liquidations import get_aave_liquidations
from mev_inspect.schemas.traces import (
ClassifiedTrace,
Classification,
)
from mev_inspect.compound_liquidations import get_compound_liquidations
from mev_inspect.schemas.liquidations import Liquidation
from mev_inspect.schemas.traces import Classification, ClassifiedTrace
def has_liquidations(classified_traces: List[ClassifiedTrace]) -> bool:
@@ -20,4 +18,5 @@ def get_liquidations(
classified_traces: List[ClassifiedTrace],
) -> List[Liquidation]:
aave_liquidations = get_aave_liquidations(classified_traces)
return aave_liquidations
comp_liquidations = get_compound_liquidations(classified_traces)
return aave_liquidations + comp_liquidations

mev_inspect/methods.py

@@ -0,0 +1,16 @@
from typing import Callable, List
from web3._utils.rpc_abi import RPC
from web3.method import Method, default_root_munger
from web3.types import BlockIdentifier, ParityBlockTrace, RPCEndpoint
trace_block: Method[Callable[[BlockIdentifier], List[ParityBlockTrace]]] = Method(
RPC.trace_block,
mungers=[default_root_munger],
)
get_block_receipts: Method[Callable[[BlockIdentifier], List[dict]]] = Method(
RPCEndpoint("eth_getBlockReceipts"),
mungers=[default_root_munger],
)
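
A hedged usage sketch: attach these methods to AsyncEth (as done earlier in this diff) and call them against a node that serves trace_block and eth_getBlockReceipts; the provider URL is a placeholder and Web3.AsyncHTTPProvider stands in for the project's get_base_provider:

import asyncio
from web3 import Web3
from web3.eth import AsyncEth
from mev_inspect.methods import get_block_receipts, trace_block

AsyncEth.trace_block = trace_block
AsyncEth.get_block_receipts = get_block_receipts

async def main() -> None:
    w3 = Web3(
        Web3.AsyncHTTPProvider("http://localhost:8545"),  # placeholder URL
        modules={"eth": (AsyncEth,)},
        middlewares=[],
    )
    traces = await w3.eth.trace_block(13_000_000)
    receipts = await w3.eth.get_block_receipts(13_000_000)
    print(len(traces), len(receipts))

# asyncio.run(main())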


@@ -1,13 +1,10 @@
from typing import List
from mev_inspect.schemas.traces import ClassifiedTrace
from mev_inspect.schemas.miner_payments import MinerPayment
from mev_inspect.schemas.receipts import Receipt
from mev_inspect.schemas.traces import ClassifiedTrace
from mev_inspect.traces import get_traces_by_transaction_hash
from mev_inspect.transfers import (
filter_transfers,
get_eth_transfers,
)
from mev_inspect.transfers import filter_transfers, get_eth_transfers
def get_miner_payments(


@@ -14,3 +14,4 @@ class ArbitrageModel(Base):
start_amount = Column(Numeric, nullable=False)
end_amount = Column(Numeric, nullable=False)
profit_amount = Column(Numeric, nullable=False)
error = Column(String, nullable=True)


@@ -1,4 +1,4 @@
from sqlalchemy import Column, Numeric, String, ARRAY, Integer
from sqlalchemy import ARRAY, Column, Integer, Numeric, String
from .base import Base
@@ -8,7 +8,6 @@ class LiquidationModel(Base):
liquidated_user = Column(String, nullable=False)
liquidator_user = Column(String, nullable=False)
collateral_token_address = Column(String, nullable=False)
debt_token_address = Column(String, nullable=False)
debt_purchase_amount = Column(Numeric, nullable=False)
received_amount = Column(Numeric, nullable=False)
@@ -17,3 +16,4 @@ class LiquidationModel(Base):
transaction_hash = Column(String, primary_key=True)
trace_address = Column(ARRAY(Integer), primary_key=True)
block_number = Column(Numeric, nullable=False)
error = Column(String, nullable=True)

Some files were not shown because too many files have changed in this diff.