Compare commits


2 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Yaz Khoury | ed01811987 | fix: Remove unneeded code from template | 2021-10-14 14:24:02 +00:00 |
| Yaz Khoury | 3663d075d9 | feat: Add initial atomicMatch logic for OpenSea | 2021-10-14 13:28:36 +00:00 |
157 changed files with 1026 additions and 4978 deletions

View File

@@ -2,15 +2,8 @@ repos:
- repo: https://github.com/ambv/black
rev: 20.8b1
hooks:
- id: black
language_version: python3.9
- repo: local
hooks:
- id: isort
name: isort
entry: poetry run isort .
language: system
types: [python]
- id: black
language_version: python3.9
- repo: local
hooks:
- id: pylint

View File

@@ -433,7 +433,7 @@ int-import-graph=
known-standard-library=
# Force import order to recognize a module as part of a third party library.
known-third-party=alembic
known-third-party=enchant
# Couples of modules and preferred modules, separated by a comma.
preferred-modules=

View File

@@ -1,36 +0,0 @@
# Contributing guide
Welcome to the Flashbots collective! We just ask you to be nice when you play with us.
## Pre-commit
We use pre-commit to maintain a consistent style, prevent errors, and ensure test coverage.
To set up, install dependencies through `poetry`:
```
poetry install
```
Then install pre-commit hooks with:
```
poetry run pre-commit install
```
## Tests
Run tests with:
```
kubectl exec deploy/mev-inspect-deployment -- poetry run pytest --cov=mev_inspect tests
```
## Send a pull request
- Your proposed changes should first be described and discussed in an issue.
- Open the branch in a personal fork, not in the team repository.
- Every pull request should be small and represent a single change. If the problem is complicated, split it into multiple issues and pull requests.
- Every pull request should be covered by unit tests.
We appreciate you, friend <3.

View File

@@ -18,5 +18,4 @@ COPY . /app
# easter eggs 😝
RUN echo "PS1='🕵️:\[\033[1;36m\]\h \[\033[1;34m\]\W\[\033[0;35m\]\[\033[1;36m\]$ \[\033[0m\]'" >> ~/.bashrc
ENTRYPOINT [ "poetry" ]
CMD [ "run", "python", "loop.py" ]
ENTRYPOINT [ "/app/entrypoint.sh"]

193
README.md
View File

@@ -1,9 +1,7 @@
# mev-inspect-py
> illuminating the dark forest 🌲💡
[![standard-readme compliant](https://img.shields.io/badge/readme%20style-standard-brightgreen.svg?style=flat-square)](https://github.com/RichardLitt/standard-readme)
[![Discord](https://img.shields.io/discord/755466764501909692)](https://discord.gg/7hvTycdNcK)
[Maximal extractable value](https://ethereum.org/en/developers/docs/mev/) inspector for Ethereum, to illuminate the [dark forest](https://www.paradigm.xyz/2020/08/ethereum-is-a-dark-forest/) 🌲💡
**mev-inspect-py** is an MEV inspector for Ethereum
Given a block, mev-inspect finds:
- miner payments (gas + coinbase)
@@ -11,144 +9,106 @@ Given a block, mev-inspect finds:
- swaps and [arbitrages](https://twitter.com/bertcmiller/status/1427632028263059462)
- ...and more
Data is stored in Postgres for analysis.
Data is stored in Postgres for analysis
## Install
## Running locally
mev-inspect-py is built to run on kubernetes locally and in production
mev-inspect-py is built to run on kubernetes locally and in production.
### Install dependencies
### Dependencies
- [docker](https://www.docker.com/products/docker-desktop)
- [kind](https://kind.sigs.k8s.io/docs/user/quick-start), or a similar tool for running local Kubernetes clusters
- [kubectl](https://kubernetes.io/docs/tasks/tools/)
- [helm](https://helm.sh/docs/intro/install/)
- [tilt](https://docs.tilt.dev/install.html)
### Set up
Create a new cluster with:
First, set up a local kubernetes deployment - we use [Docker](https://www.docker.com/products/docker-desktop) and [kind](https://kind.sigs.k8s.io/docs/user/quick-start)
If using kind, create a new cluster with:
```
kind create cluster
```
Set an environment variable `RPC_URL` to an RPC for fetching blocks.
Next, install the kubernetes CLI [`kubectl`](https://kubernetes.io/docs/tasks/tools/)
mev-inspect-py currently requires a node with support for Erigon traces and receipts (not geth yet 😔).
Then, install [helm](https://helm.sh/docs/intro/install/) - helm is a package manager for kubernetes
[pokt.network](https://pokt.network)'s "Ethereum Mainnet Archival with trace calls" is a good hosted option.
Lastly, set up [Tilt](https://docs.tilt.dev/install.html), which manages running and updating kubernetes resources locally
### Start up
Set an environment variable `RPC_URL` to an RPC for fetching blocks
Example:
```
export RPC_URL="http://111.111.111.111:8546"
```
**Note: mev-inspect-py currently requires an RPC with support for OpenEthereum / Erigon traces (not geth 😔)**
Next, start all services with:
```
tilt up
```
Press "space" to see a browser of the services starting up.
On first startup, you'll need to apply database migrations with:
Press "space" to see a browser of the services starting up
On first startup, you'll need to apply database migrations. Apply with:
```
./mev exec alembic upgrade head
kubectl exec deploy/mev-inspect-deployment -- alembic upgrade head
```
## Usage
## Inspecting
### Inspect a single block
Inspecting block [12914944](https://twitter.com/mevalphaleak/status/1420416437575901185):
Inspecting block [12914944](https://twitter.com/mevalphaleak/status/1420416437575901185)
```
./mev inspect 12914944
kubectl exec deploy/mev-inspect-deployment -- poetry run inspect-block 12914944
```
### Inspect many blocks
Inspecting blocks 12914944 to 12914954:
Inspecting blocks 12914944 to 12914954
```
./mev inspect-many 12914944 12914954
kubectl exec deploy/mev-inspect-deployment -- poetry run inspect-many-blocks 12914944 12914954
```
### Inspect all incoming blocks
Start a block listener with:
Start a block listener with
```
./mev listener start
kubectl exec deploy/mev-inspect-deployment -- /app/listener start
```
By default, it will pick up wherever you left off.
If running for the first time, listener starts at the latest block.
Tail logs for the listener with:
If running for the first time, listener starts at the latest block
See logs for the listener with
```
./mev listener tail
kubectl exec deploy/mev-inspect-deployment -- tail -f listener.log
```
And stop the listener with:
And stop the listener with
```
./mev listener stop
kubectl exec deploy/mev-inspect-deployment -- /app/listener stop
```
### Backfilling
For larger backfills, you can inspect many blocks in parallel using kubernetes
To inspect blocks 12914944 to 12915044 divided across 10 worker pods:
```
./mev backfill 12914944 12915044 10
```
You can see worker pods spin up then complete by watching the status of all pods
```
watch kubectl get pods
```
To watch the logs for a given pod, take its pod name using the above, then run:
```
kubectl logs -f pod/mev-inspect-backfill-abcdefg
```
(where `mev-inspect-backfill-abcdefg` is your actual pod name)
### Exploring
## Exploring
All inspect output data is stored in Postgres.
To connect to the local Postgres database for querying, launch a client container with:
```
./mev db
kubectl run -i --rm --tty postgres-client --env="PGPASSWORD=password" --image=jbergknoff/postgresql-client -- mev_inspect --host=postgresql --user=postgres
```
When you see the prompt:
When you see the prompt
```
mev_inspect=#
```
You're ready to query!
Try finding the total number of swaps decoded with UniswapV3Pool:
Try finding the total number of swaps decoded with UniswapV3Pool
```
SELECT COUNT(*) FROM swaps WHERE abi_name='UniswapV3Pool';
```
or top 10 arbs by gross profit that took profit in WETH:
or top 10 arbs by gross profit that took profit in WETH
```
SELECT *
FROM arbitrages
@@ -157,83 +117,78 @@ ORDER BY profit_amount DESC
LIMIT 10;
```
Postgres tip: Enter `\x` to toggle "Expanded display" mode, which looks nicer for results with many columns.
Postgres tip: Enter `\x` to toggle "Expanded display" mode, which looks nicer for results with many columns
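For example, toggling it at the prompt:
```
mev_inspect=# \x
Expanded display is on.
```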
## Contributing
### Guide
✨ Coming soon
### Pre-commit
We use pre-commit to maintain a consistent style, prevent errors, and ensure test coverage.
To set up, install dependencies through poetry
```
poetry install
```
Then install pre-commit hooks with
```
poetry run pre-commit install
```
### Tests
Run tests with
```
kubectl exec deploy/mev-inspect-deployment -- poetry run pytest --cov=mev_inspect tests
```
## FAQ
### How do I delete / reset my local postgres data?
Stop the system if running:
Stop the system if running
```
tilt down
```
Delete it with:
Delete it with
```
kubectl delete pvc data-postgresql-postgresql-0
```
Start back up again:
Start back up again
```
tilt up
```
And rerun migrations to create the tables again:
And rerun migrations to create the tables again
```
./mev exec alembic upgrade head
kubectl exec deploy/mev-inspect-deployment -- alembic upgrade head
```
### I was using the docker-compose setup and want to switch to kube, now what?
Re-add the old `docker-compose.yml` file to your mev-inspect-py directory.
Re-add the old `docker-compose.yml` file to your mev-inspect-py directory
A copy can be found [here](https://github.com/flashbots/mev-inspect-py/blob/ef60c097719629a7d2dc56c6e6c9a100fb706f76/docker-compose.yml)
Tear down docker-compose resources:
Tear down docker-compose resources
```
docker compose down
```
Then go through the steps in the current README for kube setup.
Then go through the steps in the current README for kube setup
### Error from server (AlreadyExists): pods "postgres-client" already exists
This means the postgres client container didn't shut down correctly
This means the postgres client container didn't shut down correctly.
Delete this one with:
Delete this one with
```
kubectl delete pod/postgres-client
```
Then start it back up again.
## Maintainers
- [@lukevs](https://github.com/lukevs)
- [@gheise](https://github.com/gheise)
- [@bertmiller](https://github.com/bertmiller)
## Contributing
[Flashbots](https://flashbots.net) is a research and development collective working on mitigating the negative externalities of decentralized economies. We contribute, together with the larger free software community, to illuminating the dark forest.
You are welcome here <3.
- If you want to join us, come and say hi in our [Discord chat](https://discord.gg/7hvTycdNcK).
- If you have a question, feedback or a bug report for this project, please [open a new Issue](https://github.com/flashbots/mev-inspect-py/issues).
- If you would like to contribute with code, check the [CONTRIBUTING file](CONTRIBUTING.md).
- We just ask you to be nice.
## Security
If you find a security vulnerability in this project or any other initiative related to Flashbots, please let us know by sending an email to security@flashbots.net.
---
Made with ❤️ by the ⚡🤖 collective.
Then start it back up again

View File

@@ -1,4 +1,5 @@
load("ext://helm_remote", "helm_remote")
load("ext://restart_process", "docker_build_with_restart")
load("ext://secret", "secret_from_dict")
load("ext://configmap", "configmap_from_dict")
@@ -12,39 +13,19 @@ k8s_yaml(configmap_from_dict("mev-inspect-rpc", inputs = {
"url" : os.environ["RPC_URL"],
}))
k8s_yaml(configmap_from_dict("mev-inspect-listener-healthcheck", inputs = {
"url" : os.getenv("LISTENER_HEALTHCHECK_URL", default=""),
}))
k8s_yaml(secret_from_dict("mev-inspect-db-credentials", inputs = {
"username" : "postgres",
"password": "password",
"host": "postgresql",
}))
# if using https://github.com/taarushv/trace-db
# k8s_yaml(secret_from_dict("trace-db-credentials", inputs = {
# "username" : "username",
# "password": "password",
# "host": "trace-db-postgresql",
# }))
docker_build("mev-inspect-py", ".",
docker_build_with_restart("mev-inspect-py", ".",
entrypoint="/app/entrypoint.sh",
live_update=[
sync(".", "/app"),
run("cd /app && poetry install",
trigger="./pyproject.toml"),
],
)
k8s_yaml(helm('./k8s/mev-inspect', name='mev-inspect'))
k8s_resource(workload="mev-inspect", resource_deps=["postgresql-postgresql"])
# uncomment to enable price monitor
# k8s_yaml(helm('./k8s/mev-inspect-prices', name='mev-inspect-prices'))
# k8s_resource(workload="mev-inspect-prices", resource_deps=["postgresql-postgresql"])
local_resource(
'pg-port-forward',
serve_cmd='kubectl port-forward --namespace default svc/postgresql 5432:5432',
resource_deps=["postgresql-postgresql"]
)
k8s_yaml("k8s/app.yaml")
k8s_resource(workload="mev-inspect-deployment", resource_deps=["postgresql-postgresql"])

View File

@@ -1,14 +1,16 @@
from logging.config import fileConfig
from alembic import context
from sqlalchemy import engine_from_config, pool
from sqlalchemy import engine_from_config
from sqlalchemy import pool
from mev_inspect.db import get_inspect_database_uri
from alembic import context
from mev_inspect.db import get_sqlalchemy_database_uri
# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config
config.set_main_option("sqlalchemy.url", get_inspect_database_uri())
config.set_main_option("sqlalchemy.url", get_sqlalchemy_database_uri())
# Interpret the config file for Python logging.
# This line sets up loggers basically.

View File

@@ -1,54 +0,0 @@
"""Change miner payments and transfers primary keys to include block number
Revision ID: 04a3bb3740c3
Revises: a10d68643476
Create Date: 2021-11-02 22:42:01.702538
"""
from alembic import op
# revision identifiers, used by Alembic.
revision = "04a3bb3740c3"
down_revision = "a10d68643476"
branch_labels = None
depends_on = None
def upgrade():
# transfers
op.execute("ALTER TABLE transfers DROP CONSTRAINT transfers_pkey")
op.create_primary_key(
"transfers_pkey",
"transfers",
["block_number", "transaction_hash", "trace_address"],
)
op.drop_index("ix_transfers_block_number")
# miner_payments
op.execute("ALTER TABLE miner_payments DROP CONSTRAINT miner_payments_pkey")
op.create_primary_key(
"miner_payments_pkey",
"miner_payments",
["block_number", "transaction_hash"],
)
op.drop_index("ix_block_number")
def downgrade():
# transfers
op.execute("ALTER TABLE transfers DROP CONSTRAINT transfers_pkey")
op.create_index("ix_transfers_block_number", "transfers", ["block_number"])
op.create_primary_key(
"transfers_pkey",
"transfers",
["transaction_hash", "trace_address"],
)
# miner_payments
op.execute("ALTER TABLE miner_payments DROP CONSTRAINT miner_payments_pkey")
op.create_index("ix_block_number", "miner_payments", ["block_number"])
op.create_primary_key(
"miner_payments_pkey",
"miner_payments",
["transaction_hash"],
)

View File

@@ -1,35 +0,0 @@
"""Change blocks.timestamp to timestamp
Revision ID: 04b76ab1d2af
Revises: 2c90b2b8a80b
Create Date: 2021-11-26 15:31:21.111693
"""
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "04b76ab1d2af"
down_revision = "0cef835f7b36"
branch_labels = None
depends_on = None
def upgrade():
op.alter_column(
"blocks",
"block_timestamp",
type_=sa.TIMESTAMP,
nullable=False,
postgresql_using="TO_TIMESTAMP(block_timestamp)",
)
def downgrade():
op.alter_column(
"blocks",
"block_timestamp",
type_=sa.Numeric,
nullable=False,
postgresql_using="extract(epoch FROM block_timestamp)",
)

View File

@@ -1,34 +0,0 @@
"""empty message
Revision ID: 070819d86587
Revises: d498bdb0a641
Create Date: 2021-11-26 18:25:13.402822
"""
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "d498bdb0a641"
down_revision = "b9fa1ecc9929"
branch_labels = None
depends_on = None
def upgrade():
op.create_table(
"punk_snipes",
sa.Column("created_at", sa.TIMESTAMP, server_default=sa.func.now()),
sa.Column("block_number", sa.Numeric, nullable=False),
sa.Column("transaction_hash", sa.String(66), nullable=False),
sa.Column("trace_address", sa.String(256), nullable=False),
sa.Column("from_address", sa.String(256), nullable=False),
sa.Column("punk_index", sa.Numeric, nullable=False),
sa.Column("min_acceptance_price", sa.Numeric, nullable=False),
sa.Column("acceptance_price", sa.Numeric, nullable=False),
sa.PrimaryKeyConstraint("block_number", "transaction_hash", "trace_address"),
)
def downgrade():
op.drop_table("punk_snipes")

View File

@@ -8,6 +8,7 @@ Create Date: 2021-08-30 17:42:25.548130
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "083978d6e455"
down_revision = "92f28a2b4f52"

View File

@@ -1,26 +0,0 @@
"""Rename pool_address to contract_address
Revision ID: 0cef835f7b36
Revises: 5427d62a2cc0
Create Date: 2021-11-19 15:36:15.152622
"""
from alembic import op
# revision identifiers, used by Alembic.
revision = "0cef835f7b36"
down_revision = "5427d62a2cc0"
branch_labels = None
depends_on = None
def upgrade():
op.alter_column(
"swaps", "pool_address", nullable=False, new_column_name="contract_address"
)
def downgrade():
op.alter_column(
"swaps", "contract_address", nullable=False, new_column_name="pool_address"
)

View File

@@ -1,28 +0,0 @@
"""Add nullable transaction_position field to swaps and traces
Revision ID: 15ba9c27ee8a
Revises: 04b76ab1d2af
Create Date: 2021-12-02 18:24:18.218880
"""
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "15ba9c27ee8a"
down_revision = "ead7eb8283b9"
branch_labels = None
depends_on = None
def upgrade():
op.add_column(
"classified_traces",
sa.Column("transaction_position", sa.Numeric, nullable=True),
)
op.add_column("swaps", sa.Column("transaction_position", sa.Numeric, nullable=True))
def downgrade():
op.drop_column("classified_traces", "transaction_position")
op.drop_column("swaps", "transaction_position")

View File

@@ -1,26 +0,0 @@
"""Add received_collateral_address to liquidations
Revision ID: 205ce02374b3
Revises: c8363617aa07
Create Date: 2021-10-04 19:52:40.017084
"""
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "205ce02374b3"
down_revision = "c8363617aa07"
branch_labels = None
depends_on = None
def upgrade():
op.add_column(
"liquidations",
sa.Column("received_token_address", sa.String(256), nullable=True),
)
def downgrade():
op.drop_column("liquidations", "received_token_address")

View File

@@ -1,28 +0,0 @@
"""Add blocks table
Revision ID: 2c90b2b8a80b
Revises: 04a3bb3740c3
Create Date: 2021-11-17 18:29:13.065944
"""
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "2c90b2b8a80b"
down_revision = "04a3bb3740c3"
branch_labels = None
depends_on = None
def upgrade():
op.create_table(
"blocks",
sa.Column("block_number", sa.Numeric, nullable=False),
sa.Column("block_timestamp", sa.Numeric, nullable=False),
sa.PrimaryKeyConstraint("block_number"),
)
def downgrade():
op.drop_table("blocks")

View File

@@ -7,6 +7,7 @@ Create Date: 2021-09-14 11:11:41.559137
"""
from alembic import op
# revision identifiers, used by Alembic.
revision = "320e56b0a99f"
down_revision = "a02f3f2c469f"

View File

@@ -1,45 +0,0 @@
"""Cahnge swap primary key to include block number
Revision ID: 3417f49d97b3
Revises: 205ce02374b3
Create Date: 2021-11-02 20:50:32.854996
"""
from alembic import op
# revision identifiers, used by Alembic.
revision = "3417f49d97b3"
down_revision = "205ce02374b3"
branch_labels = None
depends_on = None
def upgrade():
op.execute("ALTER TABLE swaps DROP CONSTRAINT swaps_pkey CASCADE")
op.create_primary_key(
"swaps_pkey",
"swaps",
["block_number", "transaction_hash", "trace_address"],
)
op.create_index(
"arbitrage_swaps_swaps_idx",
"arbitrage_swaps",
["swap_transaction_hash", "swap_trace_address"],
)
def downgrade():
op.drop_index("arbitrage_swaps_swaps_idx")
op.execute("ALTER TABLE swaps DROP CONSTRAINT swaps_pkey CASCADE")
op.create_primary_key(
"swaps_pkey",
"swaps",
["transaction_hash", "trace_address"],
)
op.create_foreign_key(
"arbitrage_swaps_swaps_fkey",
"arbitrage_swaps",
"swaps",
["swap_transaction_hash", "swap_trace_address"],
["transaction_hash", "trace_address"],
)

View File

@@ -1,33 +0,0 @@
"""empty message
Revision ID: 52d75a7e0533
Revises: 7cf0eeb41da0
Create Date: 2021-11-26 20:35:58.954138
"""
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "52d75a7e0533"
down_revision = "7cf0eeb41da0"
branch_labels = None
depends_on = None
def upgrade():
op.create_table(
"punk_bid_acceptances",
sa.Column("created_at", sa.TIMESTAMP, server_default=sa.func.now()),
sa.Column("block_number", sa.Numeric, nullable=False),
sa.Column("transaction_hash", sa.String(66), nullable=False),
sa.Column("trace_address", sa.String(256), nullable=False),
sa.Column("from_address", sa.String(256), nullable=False),
sa.Column("punk_index", sa.Numeric, nullable=False),
sa.Column("min_price", sa.Numeric, nullable=False),
sa.PrimaryKeyConstraint("block_number", "transaction_hash", "trace_address"),
)
def downgrade():
op.drop_table("punk_bid_acceptances")

View File

@@ -1,46 +0,0 @@
"""Change transfers trace address to ARRAY
Revision ID: 5427d62a2cc0
Revises: d540242ae368
Create Date: 2021-11-19 13:25:11.252774
"""
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "5427d62a2cc0"
down_revision = "d540242ae368"
branch_labels = None
depends_on = None
def upgrade():
op.drop_constraint("transfers_pkey", "transfers")
op.alter_column(
"transfers",
"trace_address",
type_=sa.ARRAY(sa.Integer),
nullable=False,
postgresql_using="trace_address::int[]",
)
op.create_primary_key(
"transfers_pkey",
"transfers",
["block_number", "transaction_hash", "trace_address"],
)
def downgrade():
op.drop_constraint("transfers_pkey", "transfers")
op.alter_column(
"transfers",
"trace_address",
type_=sa.String(256),
nullable=False,
)
op.create_primary_key(
"transfers_pkey",
"transfers",
["block_number", "transaction_hash", "trace_address"],
)

View File

@@ -1,33 +0,0 @@
"""empty message
Revision ID: 7cf0eeb41da0
Revises: d498bdb0a641
Create Date: 2021-11-26 20:27:28.936516
"""
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "7cf0eeb41da0"
down_revision = "d498bdb0a641"
branch_labels = None
depends_on = None
def upgrade():
op.create_table(
"punk_bids",
sa.Column("created_at", sa.TIMESTAMP, server_default=sa.func.now()),
sa.Column("block_number", sa.Numeric, nullable=False),
sa.Column("transaction_hash", sa.String(66), nullable=False),
sa.Column("trace_address", sa.String(256), nullable=False),
sa.Column("from_address", sa.String(256), nullable=False),
sa.Column("punk_index", sa.Numeric, nullable=False),
sa.Column("price", sa.Numeric, nullable=False),
sa.PrimaryKeyConstraint("block_number", "transaction_hash", "trace_address"),
)
def downgrade():
op.drop_table("punk_bids")

View File

@@ -8,6 +8,7 @@ Create Date: 2021-08-06 15:58:04.556762
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "7eec417a4f3e"
down_revision = "9d8c69b3dccb"

View File

@@ -8,6 +8,7 @@ Create Date: 2021-08-17 03:46:21.498821
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "92f28a2b4f52"
down_revision = "9b8ae51c5d56"

View File

@@ -8,6 +8,7 @@ Create Date: 2021-08-06 17:06:55.364516
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "9b8ae51c5d56"
down_revision = "7eec417a4f3e"

View File

@@ -8,6 +8,7 @@ Create Date: 2021-08-05 21:46:35.209199
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "9d8c69b3dccb"
down_revision = "2116e2f36a19"

View File

@@ -8,6 +8,7 @@ Create Date: 2021-09-13 21:32:27.181344
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "a02f3f2c469f"
down_revision = "d70c08b4db6f"

View File

@@ -1,34 +0,0 @@
"""Change classified traces primary key to include block number
Revision ID: a10d68643476
Revises: 3417f49d97b3
Create Date: 2021-11-02 22:03:26.312317
"""
from alembic import op
# revision identifiers, used by Alembic.
revision = "a10d68643476"
down_revision = "3417f49d97b3"
branch_labels = None
depends_on = None
def upgrade():
op.execute("ALTER TABLE classified_traces DROP CONSTRAINT classified_traces_pkey")
op.create_primary_key(
"classified_traces_pkey",
"classified_traces",
["block_number", "transaction_hash", "trace_address"],
)
op.drop_index("i_block_number")
def downgrade():
op.execute("ALTER TABLE classified_traces DROP CONSTRAINT classified_traces_pkey")
op.create_index("i_block_number", "classified_traces", ["block_number"])
op.create_primary_key(
"classified_traces_pkey",
"classified_traces",
["transaction_hash", "trace_address"],
)

View File

@@ -1,26 +0,0 @@
"""Remove collateral_token_address column
Revision ID: b9fa1ecc9929
Revises: 04b76ab1d2af
Create Date: 2021-12-01 23:32:40.574108
"""
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "b9fa1ecc9929"
down_revision = "04b76ab1d2af"
branch_labels = None
depends_on = None
def upgrade():
op.drop_column("liquidations", "collateral_token_address")
def downgrade():
op.add_column(
"liquidations",
sa.Column("collateral_token_address", sa.String(256), nullable=False),
)

View File

@@ -7,6 +7,7 @@ Create Date: 2021-07-30 17:37:27.335475
"""
from alembic import op
# revision identifiers, used by Alembic.
revision = "c5da44eb072c"
down_revision = "0660432b9840"

View File

@@ -8,6 +8,7 @@ Create Date: 2021-09-29 14:00:06.857103
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "c8363617aa07"
down_revision = "cd96af55108e"

View File

@@ -8,6 +8,7 @@ Create Date: 2021-09-17 12:44:45.245137
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "cd96af55108e"
down_revision = "320e56b0a99f"

View File

@@ -1,29 +0,0 @@
"""Create usd_prices table
Revision ID: d540242ae368
Revises: 2c90b2b8a80b
Create Date: 2021-11-18 04:30:06.802857
"""
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "d540242ae368"
down_revision = "2c90b2b8a80b"
branch_labels = None
depends_on = None
def upgrade():
op.create_table(
"prices",
sa.Column("timestamp", sa.TIMESTAMP),
sa.Column("usd_price", sa.Numeric, nullable=False),
sa.Column("token_address", sa.String(256), nullable=False),
sa.PrimaryKeyConstraint("token_address", "timestamp"),
)
def downgrade():
op.drop_table("prices")

View File

@@ -8,6 +8,7 @@ Create Date: 2021-08-30 22:10:04.186251
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "d70c08b4db6f"
down_revision = "083978d6e455"

View File

@@ -1,69 +0,0 @@
"""Create sandwiches and sandwiched swaps tables
Revision ID: ead7eb8283b9
Revises: a5d80460f0e6
Create Date: 2021-12-03 16:37:28.077158
"""
import sqlalchemy as sa
from alembic import op
# revision identifiers, used by Alembic.
revision = "ead7eb8283b9"
down_revision = "52d75a7e0533"
branch_labels = None
depends_on = None
def upgrade():
op.create_table(
"sandwiches",
sa.Column("id", sa.String(256), primary_key=True),
sa.Column("created_at", sa.TIMESTAMP, server_default=sa.func.now()),
sa.Column("block_number", sa.Numeric, nullable=False),
sa.Column("sandwicher_address", sa.String(256), nullable=False),
sa.Column("frontrun_swap_transaction_hash", sa.String(256), nullable=False),
sa.Column("frontrun_swap_trace_address", sa.ARRAY(sa.Integer), nullable=False),
sa.Column("backrun_swap_transaction_hash", sa.String(256), nullable=False),
sa.Column("backrun_swap_trace_address", sa.ARRAY(sa.Integer), nullable=False),
)
op.create_index(
"ik_sandwiches_frontrun",
"sandwiches",
[
"block_number",
"frontrun_swap_transaction_hash",
"frontrun_swap_trace_address",
],
)
op.create_index(
"ik_sandwiches_backrun",
"sandwiches",
["block_number", "backrun_swap_transaction_hash", "backrun_swap_trace_address"],
)
op.create_table(
"sandwiched_swaps",
sa.Column("created_at", sa.TIMESTAMP, server_default=sa.func.now()),
sa.Column("sandwich_id", sa.String(1024), primary_key=True),
sa.Column("block_number", sa.Numeric, primary_key=True),
sa.Column("transaction_hash", sa.String(66), primary_key=True),
sa.Column("trace_address", sa.ARRAY(sa.Integer), primary_key=True),
sa.ForeignKeyConstraint(["sandwich_id"], ["sandwiches.id"], ondelete="CASCADE"),
)
op.create_index(
"ik_sandwiched_swaps_secondary",
"sandwiched_swaps",
["block_number", "transaction_hash", "trace_address"],
)
def downgrade():
op.drop_index("ik_sandwiched_swaps_secondary")
op.drop_table("sandwiched_swaps")
op.drop_index("ik_sandwiches_frontrun")
op.drop_index("ik_sandwiches_backrun")
op.drop_table("sandwiches")

View File

@@ -1,57 +0,0 @@
import subprocess
import sys
from typing import Iterator, Tuple
def get_block_after_before_chunks(
after_block: int,
before_block: int,
n_workers: int,
) -> Iterator[Tuple[int, int]]:
n_blocks = before_block - after_block
remainder = n_blocks % n_workers
floor_chunk_size = n_blocks // n_workers
last_before_block = None
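# hand the first `remainder` workers one extra block each so the chunks cover the whole range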
for worker_index in range(n_workers):
chunk_size = floor_chunk_size
if worker_index < remainder:
chunk_size += 1
batch_after_block = (
last_before_block if last_before_block is not None else after_block
)
batch_before_block = batch_after_block + chunk_size
yield batch_after_block, batch_before_block
last_before_block = batch_before_block
def backfill(after_block: int, before_block: int, n_workers: int):
if n_workers <= 0:
raise ValueError("Need at least one worker")
for batch_after_block, batch_before_block in get_block_after_before_chunks(
after_block,
before_block,
n_workers,
):
print(f"Backfilling {batch_after_block} to {batch_before_block}")
backfill_command = f"sh backfill.sh {batch_after_block} {batch_before_block}"
process = subprocess.Popen(backfill_command.split(), stdout=subprocess.PIPE)
output, _ = process.communicate()
print(output)
def main():
after_block = int(sys.argv[1])
before_block = int(sys.argv[2])
n_workers = int(sys.argv[3])
backfill(after_block, before_block, n_workers)
if __name__ == "__main__":
main()
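A quick worked example of the chunking above (hypothetical numbers, not from the original file): splitting blocks 0 through 101 across 10 workers gives the first worker one extra block.
```
>>> list(get_block_after_before_chunks(0, 101, 10))
[(0, 11), (11, 21), (21, 31), (31, 41), (41, 51), (51, 61), (61, 71), (71, 81), (81, 91), (91, 101)]
```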

View File

@@ -1,6 +0,0 @@
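# reuse the image of the running mev-inspect deployment so each backfill job runs the same build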
current_image=$(kubectl get deployment mev-inspect -o=jsonpath='{$.spec.template.spec.containers[:1].image}')
helm template mev-inspect-backfill ./k8s/mev-inspect-backfill \
--set image.repository=$current_image \
--set command.startBlockNumber=$1 \
--set command.endBlockNumber=$2 | kubectl apply -f -

99
cli.py
View File

@@ -1,14 +1,14 @@
import logging
import os
import logging
import sys
import click
from web3 import Web3
from mev_inspect.db import get_session
from mev_inspect.inspect_block import inspect_block
from mev_inspect.provider import get_base_provider
from mev_inspect.concurrency import coro
from mev_inspect.crud.prices import write_prices
from mev_inspect.db import get_inspect_session, get_trace_session
from mev_inspect.inspector import MEVInspector
from mev_inspect.prices import fetch_all_supported_prices
RPC_URL_ENV = "RPC_URL"
@@ -24,74 +24,51 @@ def cli():
@cli.command()
@click.argument("block_number", type=int)
@click.option("--rpc", default=lambda: os.environ.get(RPC_URL_ENV, ""))
@coro
async def inspect_block_command(block_number: int, rpc: str):
inspect_db_session = get_inspect_session()
trace_db_session = get_trace_session()
@click.option("--cache/--no-cache", default=True)
def inspect_block_command(block_number: int, rpc: str, cache: bool):
db_session = get_session()
base_provider = get_base_provider(rpc)
w3 = Web3(base_provider)
inspector = MEVInspector(rpc, inspect_db_session, trace_db_session)
await inspector.inspect_single_block(block=block_number)
if not cache:
logger.info("Skipping cache")
@cli.command()
@click.argument("block_number", type=int)
@click.option("--rpc", default=lambda: os.environ.get(RPC_URL_ENV, ""))
@coro
async def fetch_block_command(block_number: int, rpc: str):
inspect_db_session = get_inspect_session()
trace_db_session = get_trace_session()
inspector = MEVInspector(rpc, inspect_db_session, trace_db_session)
block = await inspector.create_from_block(block_number=block_number)
print(block.json())
inspect_block(db_session, base_provider, w3, block_number, should_cache=cache)
@cli.command()
@click.argument("after_block", type=int)
@click.argument("before_block", type=int)
@click.option("--rpc", default=lambda: os.environ.get(RPC_URL_ENV, ""))
@click.option(
"--max-concurrency",
type=int,
help="maximum number of concurrent connections",
default=5,
)
@click.option(
"--request-timeout", type=int, help="timeout for requests to nodes", default=500
)
@coro
async def inspect_many_blocks_command(
after_block: int,
before_block: int,
rpc: str,
max_concurrency: int,
request_timeout: int,
@click.option("--cache/--no-cache", default=True)
def inspect_many_blocks_command(
after_block: int, before_block: int, rpc: str, cache: bool
):
inspect_db_session = get_inspect_session()
trace_db_session = get_trace_session()
inspector = MEVInspector(
rpc,
inspect_db_session,
trace_db_session,
max_concurrency=max_concurrency,
request_timeout=request_timeout,
)
await inspector.inspect_many_blocks(
after_block=after_block, before_block=before_block
)
db_session = get_session()
base_provider = get_base_provider(rpc)
w3 = Web3(base_provider)
if not cache:
logger.info("Skipping cache")
@cli.command()
@coro
async def fetch_all_prices():
inspect_db_session = get_inspect_session()
for i, block_number in enumerate(range(after_block, before_block)):
block_message = (
f"Running for {block_number} ({i+1}/{before_block - after_block})"
)
dashes = "-" * len(block_message)
logger.info(dashes)
logger.info(block_message)
logger.info(dashes)
logger.info("Fetching prices")
prices = await fetch_all_supported_prices()
logger.info("Writing prices")
write_prices(inspect_db_session, prices)
inspect_block(
db_session,
base_provider,
w3,
block_number,
should_write_classified_traces=False,
should_cache=cache,
)
def get_rpc_url() -> str:

3
entrypoint.sh Executable file
View File

@@ -0,0 +1,3 @@
#!/bin/bash
python loop.py

48
k8s/app.yaml Normal file
View File

@@ -0,0 +1,48 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: mev-inspect-deployment
labels:
app: mev-inspect
spec:
replicas: 1
selector:
matchLabels:
app: mev-inspect
template:
metadata:
labels:
app: mev-inspect
spec:
containers:
- name: mev-inspect
image: mev-inspect-py
command: [ "/app/entrypoint.sh" ]
env:
- name: POSTGRES_HOST
valueFrom:
secretKeyRef:
name: mev-inspect-db-credentials
key: host
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: mev-inspect-db-credentials
key: username
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: mev-inspect-db-credentials
key: password
- name: RPC_URL
valueFrom:
configMapKeyRef:
name: mev-inspect-rpc
key: url
livenessProbe:
exec:
command:
- ls
- /
initialDelaySeconds: 20
periodSeconds: 5

View File

@@ -1,23 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View File

@@ -1,24 +0,0 @@
apiVersion: v2
name: mev-inspect-backfill
description: A Helm chart for Kubernetes
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"

View File

@@ -1,62 +0,0 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "mev-inspect-backfill.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "mev-inspect-backfill.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "mev-inspect-backfill.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "mev-inspect-backfill.labels" -}}
helm.sh/chart: {{ include "mev-inspect-backfill.chart" . }}
{{ include "mev-inspect-backfill.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "mev-inspect-backfill.selectorLabels" -}}
app.kubernetes.io/name: {{ include "mev-inspect-backfill.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "mev-inspect-backfill.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "mev-inspect-backfill.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}

View File

@@ -1,68 +0,0 @@
apiVersion: batch/v1
kind: Job
metadata:
name: {{ include "mev-inspect-backfill.fullname" . }}-{{ randAlphaNum 5 | lower }}
labels:
{{- include "mev-inspect-backfill.labels" . | nindent 4 }}
spec:
completions: 1
parallelism: 1
ttlSecondsAfterFinished: 5
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
args:
- run
- inspect-many-blocks
- {{ .Values.command.startBlockNumber | quote }}
- {{ .Values.command.endBlockNumber | quote }}
env:
- name: POSTGRES_HOST
valueFrom:
secretKeyRef:
name: mev-inspect-db-credentials
key: host
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: mev-inspect-db-credentials
key: username
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: mev-inspect-db-credentials
key: password
- name: TRACE_DB_HOST
valueFrom:
secretKeyRef:
name: trace-db-credentials
key: host
optional: true
- name: TRACE_DB_USER
valueFrom:
secretKeyRef:
name: trace-db-credentials
key: username
optional: true
- name: TRACE_DB_PASSWORD
valueFrom:
secretKeyRef:
name: trace-db-credentials
key: password
optional: true
- name: RPC_URL
valueFrom:
configMapKeyRef:
name: mev-inspect-rpc
key: url
restartPolicy: OnFailure

View File

@@ -1,42 +0,0 @@
# Default values for mev-inspect.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
image:
repository: mev-inspect-py
pullPolicy: IfNotPresent
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
podAnnotations: {}
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}

View File

@@ -1,23 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View File

@@ -1,24 +0,0 @@
apiVersion: v2
name: mev-inspect-prices
description: A Helm chart for Kubernetes
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"

View File

@@ -1,62 +0,0 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "mev-inspect-prices.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "mev-inspect-prices.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "mev-inspect-prices.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "mev-inspect-prices.labels" -}}
helm.sh/chart: {{ include "mev-inspect-prices.chart" . }}
{{ include "mev-inspect-prices.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "mev-inspect-prices.selectorLabels" -}}
app.kubernetes.io/name: {{ include "mev-inspect-prices.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "mev-inspect-prices.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "mev-inspect-prices.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}

View File

@@ -1,35 +0,0 @@
apiVersion: batch/v1
kind: CronJob
metadata:
name: {{ include "mev-inspect-prices.fullname" . }}
spec:
schedule: "0 */1 * * *"
successfulJobsHistoryLimit: 0
jobTemplate:
spec:
template:
spec:
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
args:
- run
- fetch-all-prices
env:
- name: POSTGRES_HOST
valueFrom:
secretKeyRef:
name: mev-inspect-db-credentials
key: host
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: mev-inspect-db-credentials
key: username
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: mev-inspect-db-credentials
key: password
restartPolicy: Never

View File

@@ -1,7 +0,0 @@
image:
repository: mev-inspect-py
pullPolicy: IfNotPresent
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

View File

@@ -1,23 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View File

@@ -1,24 +0,0 @@
apiVersion: v2
name: mev-inspect
description: A Helm chart for Kubernetes
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"

View File

@@ -1,62 +0,0 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "mev-inspect.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "mev-inspect.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "mev-inspect.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "mev-inspect.labels" -}}
helm.sh/chart: {{ include "mev-inspect.chart" . }}
{{ include "mev-inspect.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "mev-inspect.selectorLabels" -}}
app.kubernetes.io/name: {{ include "mev-inspect.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "mev-inspect.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "mev-inspect.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}

View File

@@ -1,99 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "mev-inspect.fullname" . }}
labels:
{{- include "mev-inspect.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
{{- include "mev-inspect.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "mev-inspect.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
args: ["run", "python", "loop.py"]
livenessProbe:
exec:
command:
- ls
- /
initialDelaySeconds: 20
periodSeconds: 5
resources:
{{- toYaml .Values.resources | nindent 12 }}
env:
- name: POSTGRES_HOST
valueFrom:
secretKeyRef:
name: mev-inspect-db-credentials
key: host
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: mev-inspect-db-credentials
key: username
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: mev-inspect-db-credentials
key: password
- name: TRACE_DB_HOST
valueFrom:
secretKeyRef:
name: trace-db-credentials
key: host
optional: true
- name: TRACE_DB_USER
valueFrom:
secretKeyRef:
name: trace-db-credentials
key: username
optional: true
- name: TRACE_DB_PASSWORD
valueFrom:
secretKeyRef:
name: trace-db-credentials
key: password
optional: true
- name: RPC_URL
valueFrom:
configMapKeyRef:
name: mev-inspect-rpc
key: url
- name: LISTENER_HEALTHCHECK_URL
valueFrom:
configMapKeyRef:
name: mev-inspect-listener-healthcheck
key: url
optional: true
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}

View File

@@ -1,44 +0,0 @@
# Default values for mev-inspect.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
repository: mev-inspect-py:latest
pullPolicy: IfNotPresent
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
podAnnotations: {}
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}

View File

@@ -25,9 +25,6 @@ case "$1" in
start-stop-daemon --stop --quiet --oknodo --pidfile $PIDFILE
echo "."
;;
tail)
tail -f listener.log
;;
restart)
echo -n "Restarting daemon: "$NAME
start-stop-daemon --stop --quiet --oknodo --retry 30 --pidfile $PIDFILE
@@ -43,7 +40,7 @@ case "$1" in
;;
*)
echo "Usage: "$1" {start|stop|restart|tail}"
echo "Usage: "$1" {start|stop|restart}"
exit 1
esac

View File

@@ -1,98 +1,74 @@
import asyncio
import logging
import os
import time
import aiohttp
from web3 import Web3
from mev_inspect.block import get_latest_block_number
from mev_inspect.concurrency import coro
from mev_inspect.crud.latest_block_update import (
find_latest_block_update,
update_latest_block,
)
from mev_inspect.db import get_inspect_session, get_trace_session
from mev_inspect.inspector import MEVInspector
from mev_inspect.db import get_session
from mev_inspect.inspect_block import inspect_block
from mev_inspect.provider import get_base_provider
from mev_inspect.signal_handler import GracefulKiller
logging.basicConfig(filename="listener.log", filemode="a", level=logging.INFO)
logging.basicConfig(filename="listener.log", level=logging.INFO)
logger = logging.getLogger(__name__)
# lag to make sure the blocks we see are settled
BLOCK_NUMBER_LAG = 5
@coro
async def run():
def run():
rpc = os.getenv("RPC_URL")
if rpc is None:
raise RuntimeError("Missing environment variable RPC_URL")
healthcheck_url = os.getenv("LISTENER_HEALTHCHECK_URL")
logger.info("Starting...")
killer = GracefulKiller()
inspect_db_session = get_inspect_session()
trace_db_session = get_trace_session()
inspector = MEVInspector(rpc, inspect_db_session, trace_db_session)
db_session = get_session()
base_provider = get_base_provider(rpc)
w3 = Web3(base_provider)
latest_block_number = get_latest_block_number(w3)
while not killer.kill_now:
await inspect_next_block(
inspector,
inspect_db_session,
base_provider,
healthcheck_url,
)
last_written_block = find_latest_block_update(db_session)
logger.info(f"Latest block: {latest_block_number}")
logger.info(f"Last written block: {last_written_block}")
if (last_written_block is None) or (
last_written_block < (latest_block_number - BLOCK_NUMBER_LAG)
):
block_number = (
latest_block_number
if last_written_block is None
else last_written_block + 1
)
logger.info(f"Writing block: {block_number}")
inspect_block(
db_session,
base_provider,
w3,
block_number,
should_write_classified_traces=False,
should_cache=False,
)
update_latest_block(db_session, block_number)
else:
time.sleep(5)
latest_block_number = get_latest_block_number(w3)
logger.info("Stopping...")
async def inspect_next_block(
inspector: MEVInspector,
inspect_db_session,
base_provider,
healthcheck_url,
):
latest_block_number = await get_latest_block_number(base_provider)
last_written_block = find_latest_block_update(inspect_db_session)
logger.info(f"Latest block: {latest_block_number}")
logger.info(f"Last written block: {last_written_block}")
if last_written_block is None:
# maintain lag if no blocks written yet
last_written_block = latest_block_number - 1
if last_written_block < (latest_block_number - BLOCK_NUMBER_LAG):
block_number = (
latest_block_number
if last_written_block is None
else last_written_block + 1
)
logger.info(f"Writing block: {block_number}")
await inspector.inspect_single_block(block=block_number)
update_latest_block(inspect_db_session, block_number)
if healthcheck_url:
await ping_healthcheck_url(healthcheck_url)
else:
await asyncio.sleep(5)
async def ping_healthcheck_url(url):
async with aiohttp.ClientSession() as session:
async with session.get(url):
pass
if __name__ == "__main__":
try:
run()
except Exception as e:
logger.error(e)
run()

View File

@@ -3,6 +3,7 @@ import time
from mev_inspect.signal_handler import GracefulKiller
logging.basicConfig(filename="loop.log", level=logging.INFO)
logger = logging.getLogger(__name__)

49
mev
View File

@@ -1,4 +1,4 @@
#!/usr/bin/env bash
#!/bin/sh
set -e
@@ -24,58 +24,17 @@ case "$1" in
echo "Connecting to $DB_NAME"
db
;;
listener)
kubectl exec -ti deploy/mev-inspect -- ./listener $2
;;
backfill)
start_block_number=$2
end_block_number=$3
n_workers=$4
echo "Backfilling from $start_block_number to $end_block_number with $n_workers workers"
python backfill.py $start_block_number $end_block_number $n_workers
;;
inspect)
block_number=$2
echo "Inspecting block $block_number"
kubectl exec -ti deploy/mev-inspect -- poetry run inspect-block $block_number
;;
inspect-many)
start_block_number=$2
end_block_number=$3
echo "Inspecting from block $start_block_number to $end_block_number"
kubectl exec -ti deploy/mev-inspect -- \
poetry run inspect-many-blocks $start_block_number $end_block_number
kubectl exec -ti deploy/mev-inspect-deployment -- poetry run inspect-block $block_number
;;
test)
shift
echo "Running tests"
kubectl exec -ti deploy/mev-inspect -- poetry run pytest tests $@
kubectl exec -ti deploy/mev-inspect-deployment -- poetry run pytest tests
;;
fetch)
block_number=$2
echo "Fetching block $block_number"
kubectl exec -ti deploy/mev-inspect -- poetry run fetch-block $block_number
;;
prices)
shift
case "$1" in
fetch-all)
echo "Running price fetch-all"
kubectl exec -ti deploy/mev-inspect -- \
poetry run fetch-all-prices
;;
*)
echo "prices usage: "$1" {fetch-all}"
exit 1
esac
;;
exec)
shift
kubectl exec -ti deploy/mev-inspect -- $@
;;
*)
echo "Usage: "$1" {db|backfill|inspect|test}"
echo "Usage: "$1" {inspect|test}"
exit 1
esac

View File

@@ -1,15 +1,17 @@
from typing import List, Optional, Tuple
from typing import List
from mev_inspect.schemas.liquidations import Liquidation
from mev_inspect.schemas.traces import (
CallTrace,
Classification,
from mev_inspect.traces import (
get_child_traces,
is_child_of_any_address,
)
from mev_inspect.schemas.classified_traces import (
ClassifiedTrace,
DecodedCallTrace,
Classification,
Protocol,
)
from mev_inspect.schemas.transfers import Transfer
from mev_inspect.traces import get_child_traces, is_child_of_any_address
from mev_inspect.schemas.liquidations import Liquidation
from mev_inspect.transfers import get_transfer
AAVE_CONTRACT_ADDRESSES: List[str] = [
@@ -22,10 +24,7 @@ AAVE_CONTRACT_ADDRESSES: List[str] = [
# AAVE V2 WETH
"0x030ba81f1c18d280636f32af80b9aad02cf0854e",
# AAVE AMM Market DAI
"0x79be75ffc64dd58e66787e4eae470c8a1fd08ba4",
# AAVE i
"0x030ba81f1c18d280636f32af80b9aad02cf0854e",
"0xbcca60bb61934080951369a648fb03df4f96263c",
"0x79bE75FFC64DD58e66787E4Eae470c8a1FD08ba4",
]
@@ -53,48 +52,38 @@ def get_aave_liquidations(
trace.transaction_hash, trace.trace_address, traces
)
(
received_token_address,
received_amount,
) = _get_payback_token_and_amount(trace, child_traces, liquidator)
received_amount = _get_liquidator_payback(child_traces, liquidator)
liquidations.append(
Liquidation(
liquidated_user=trace.inputs["_user"],
collateral_token_address=trace.inputs["_collateral"],
debt_token_address=trace.inputs["_reserve"],
liquidator_user=liquidator,
debt_purchase_amount=trace.inputs["_purchaseAmount"],
protocol=Protocol.aave,
received_amount=received_amount,
received_token_address=received_token_address,
transaction_hash=trace.transaction_hash,
trace_address=trace.trace_address,
block_number=trace.block_number,
)
)
return liquidations
def _get_payback_token_and_amount(
liquidation: DecodedCallTrace, child_traces: List[ClassifiedTrace], liquidator: str
) -> Tuple[str, int]:
"""Look for and return liquidator payback from liquidation"""
def _get_liquidator_payback(
child_traces: List[ClassifiedTrace], liquidator: str
) -> int:
for child in child_traces:
if child.classification == Classification.transfer:
if isinstance(child, CallTrace):
child_transfer = get_transfer(child)
child_transfer: Optional[Transfer] = get_transfer(child)
if (
child_transfer is not None
and child_transfer.to_address == liquidator
and child.from_address in AAVE_CONTRACT_ADDRESSES
):
return child_transfer.amount
if child_transfer is not None:
if (
child_transfer.to_address == liquidator
and child.from_address in AAVE_CONTRACT_ADDRESSES
):
return child_transfer.token_address, child_transfer.amount
return liquidation.inputs["_collateral"], 0
return 0
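A small sketch of the payback rule in `_get_liquidator_payback`, using stand-in records instead of mev_inspect's trace and transfer schemas (and collapsing the trace/transfer distinction): the first transfer from an Aave contract to the liquidator is the payback, otherwise the payback is 0.

```
# Illustration only: FakeTransfer stands in for the Transfer schema,
# and the addresses are placeholders.
from typing import List, NamedTuple

class FakeTransfer(NamedTuple):
    from_address: str
    to_address: str
    amount: int

AAVE_CONTRACT_ADDRESSES = ["0xaavepool"]  # placeholder
LIQUIDATOR = "0xliquidator"

def payback_amount(transfers: List[FakeTransfer], liquidator: str) -> int:
    for transfer in transfers:
        if (
            transfer.to_address == liquidator
            and transfer.from_address in AAVE_CONTRACT_ADDRESSES
        ):
            return transfer.amount
    return 0

transfers = [
    FakeTransfer("0xother", LIQUIDATOR, 1),      # not from an Aave contract
    FakeTransfer("0xaavepool", LIQUIDATOR, 42),  # the payback transfer
]
assert payback_amount(transfers, LIQUIDATOR) == 42
```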

View File

@@ -4,8 +4,9 @@ from typing import Optional
from pydantic import parse_obj_as
from mev_inspect.schemas.abi import ABI
from mev_inspect.schemas.traces import Protocol
from mev_inspect.schemas import ABI
from mev_inspect.schemas.classified_traces import Protocol
THIS_FILE_DIRECTORY = Path(__file__).parents[0]
ABI_DIRECTORY_PATH = THIS_FILE_DIRECTORY / "abis"

View File

@@ -1,615 +0,0 @@
[
{
"anonymous": false,
"inputs": [
{
"indexed": true,
"internalType": "address",
"name": "owner",
"type": "address"
},
{
"indexed": true,
"internalType": "address",
"name": "spender",
"type": "address"
},
{
"indexed": false,
"internalType": "uint256",
"name": "value",
"type": "uint256"
}
],
"name": "Approval",
"type": "event"
},
{
"anonymous": false,
"inputs": [
{
"indexed": true,
"internalType": "address",
"name": "from",
"type": "address"
},
{
"indexed": true,
"internalType": "address",
"name": "to",
"type": "address"
},
{
"indexed": false,
"internalType": "uint256",
"name": "value",
"type": "uint256"
},
{
"indexed": false,
"internalType": "uint256",
"name": "index",
"type": "uint256"
}
],
"name": "BalanceTransfer",
"type": "event"
},
{
"anonymous": false,
"inputs": [
{
"indexed": true,
"internalType": "address",
"name": "from",
"type": "address"
},
{
"indexed": true,
"internalType": "address",
"name": "target",
"type": "address"
},
{
"indexed": false,
"internalType": "uint256",
"name": "value",
"type": "uint256"
},
{
"indexed": false,
"internalType": "uint256",
"name": "index",
"type": "uint256"
}
],
"name": "Burn",
"type": "event"
},
{
"anonymous": false,
"inputs": [
{
"indexed": true,
"internalType": "address",
"name": "underlyingAsset",
"type": "address"
},
{
"indexed": true,
"internalType": "address",
"name": "pool",
"type": "address"
},
{
"indexed": false,
"internalType": "address",
"name": "treasury",
"type": "address"
},
{
"indexed": false,
"internalType": "address",
"name": "incentivesController",
"type": "address"
},
{
"indexed": false,
"internalType": "uint8",
"name": "aTokenDecimals",
"type": "uint8"
},
{
"indexed": false,
"internalType": "string",
"name": "aTokenName",
"type": "string"
},
{
"indexed": false,
"internalType": "string",
"name": "aTokenSymbol",
"type": "string"
},
{
"indexed": false,
"internalType": "bytes",
"name": "params",
"type": "bytes"
}
],
"name": "Initialized",
"type": "event"
},
{
"anonymous": false,
"inputs": [
{
"indexed": true,
"internalType": "address",
"name": "from",
"type": "address"
},
{
"indexed": false,
"internalType": "uint256",
"name": "value",
"type": "uint256"
},
{
"indexed": false,
"internalType": "uint256",
"name": "index",
"type": "uint256"
}
],
"name": "Mint",
"type": "event"
},
{
"anonymous": false,
"inputs": [
{
"indexed": true,
"internalType": "address",
"name": "from",
"type": "address"
},
{
"indexed": true,
"internalType": "address",
"name": "to",
"type": "address"
},
{
"indexed": false,
"internalType": "uint256",
"name": "value",
"type": "uint256"
}
],
"name": "Transfer",
"type": "event"
},
{
"inputs": [
],
"name": "UNDERLYING_ASSET_ADDRESS",
"outputs": [
{
"internalType": "address",
"name": "",
"type": "address"
}
],
"stateMutability": "view",
"type": "function"
},
{
"inputs": [
{
"internalType": "address",
"name": "owner",
"type": "address"
},
{
"internalType": "address",
"name": "spender",
"type": "address"
}
],
"name": "allowance",
"outputs": [
{
"internalType": "uint256",
"name": "",
"type": "uint256"
}
],
"stateMutability": "view",
"type": "function"
},
{
"inputs": [
{
"internalType": "address",
"name": "spender",
"type": "address"
},
{
"internalType": "uint256",
"name": "amount",
"type": "uint256"
}
],
"name": "approve",
"outputs": [
{
"internalType": "bool",
"name": "",
"type": "bool"
}
],
"stateMutability": "nonpayable",
"type": "function"
},
{
"inputs": [
{
"internalType": "address",
"name": "account",
"type": "address"
}
],
"name": "balanceOf",
"outputs": [
{
"internalType": "uint256",
"name": "",
"type": "uint256"
}
],
"stateMutability": "view",
"type": "function"
},
{
"inputs": [
{
"internalType": "address",
"name": "user",
"type": "address"
},
{
"internalType": "address",
"name": "receiverOfUnderlying",
"type": "address"
},
{
"internalType": "uint256",
"name": "amount",
"type": "uint256"
},
{
"internalType": "uint256",
"name": "index",
"type": "uint256"
}
],
"name": "burn",
"outputs": [
],
"stateMutability": "nonpayable",
"type": "function"
},
{
"inputs": [
],
"name": "getIncentivesController",
"outputs": [
{
"internalType": "contract IAaveIncentivesController",
"name": "",
"type": "address"
}
],
"stateMutability": "view",
"type": "function"
},
{
"inputs": [
{
"internalType": "address",
"name": "user",
"type": "address"
}
],
"name": "getScaledUserBalanceAndSupply",
"outputs": [
{
"internalType": "uint256",
"name": "",
"type": "uint256"
},
{
"internalType": "uint256",
"name": "",
"type": "uint256"
}
],
"stateMutability": "view",
"type": "function"
},
{
"inputs": [
{
"internalType": "address",
"name": "user",
"type": "address"
},
{
"internalType": "uint256",
"name": "amount",
"type": "uint256"
}
],
"name": "handleRepayment",
"outputs": [
],
"stateMutability": "nonpayable",
"type": "function"
},
{
"inputs": [
{
"internalType": "contract ILendingPool",
"name": "pool",
"type": "address"
},
{
"internalType": "address",
"name": "treasury",
"type": "address"
},
{
"internalType": "address",
"name": "underlyingAsset",
"type": "address"
},
{
"internalType": "contract IAaveIncentivesController",
"name": "incentivesController",
"type": "address"
},
{
"internalType": "uint8",
"name": "aTokenDecimals",
"type": "uint8"
},
{
"internalType": "string",
"name": "aTokenName",
"type": "string"
},
{
"internalType": "string",
"name": "aTokenSymbol",
"type": "string"
},
{
"internalType": "bytes",
"name": "params",
"type": "bytes"
}
],
"name": "initialize",
"outputs": [
],
"stateMutability": "nonpayable",
"type": "function"
},
{
"inputs": [
{
"internalType": "address",
"name": "user",
"type": "address"
},
{
"internalType": "uint256",
"name": "amount",
"type": "uint256"
},
{
"internalType": "uint256",
"name": "index",
"type": "uint256"
}
],
"name": "mint",
"outputs": [
{
"internalType": "bool",
"name": "",
"type": "bool"
}
],
"stateMutability": "nonpayable",
"type": "function"
},
{
"inputs": [
{
"internalType": "uint256",
"name": "amount",
"type": "uint256"
},
{
"internalType": "uint256",
"name": "index",
"type": "uint256"
}
],
"name": "mintToTreasury",
"outputs": [
],
"stateMutability": "nonpayable",
"type": "function"
},
{
"inputs": [
{
"internalType": "address",
"name": "user",
"type": "address"
}
],
"name": "scaledBalanceOf",
"outputs": [
{
"internalType": "uint256",
"name": "",
"type": "uint256"
}
],
"stateMutability": "view",
"type": "function"
},
{
"inputs": [
],
"name": "scaledTotalSupply",
"outputs": [
{
"internalType": "uint256",
"name": "",
"type": "uint256"
}
],
"stateMutability": "view",
"type": "function"
},
{
"inputs": [
],
"name": "totalSupply",
"outputs": [
{
"internalType": "uint256",
"name": "",
"type": "uint256"
}
],
"stateMutability": "view",
"type": "function"
},
{
"inputs": [
{
"internalType": "address",
"name": "recipient",
"type": "address"
},
{
"internalType": "uint256",
"name": "amount",
"type": "uint256"
}
],
"name": "transfer",
"outputs": [
{
"internalType": "bool",
"name": "",
"type": "bool"
}
],
"stateMutability": "nonpayable",
"type": "function"
},
{
"inputs": [
{
"internalType": "address",
"name": "sender",
"type": "address"
},
{
"internalType": "address",
"name": "recipient",
"type": "address"
},
{
"internalType": "uint256",
"name": "amount",
"type": "uint256"
}
],
"name": "transferFrom",
"outputs": [
{
"internalType": "bool",
"name": "",
"type": "bool"
}
],
"stateMutability": "nonpayable",
"type": "function"
},
{
"inputs": [
{
"internalType": "address",
"name": "from",
"type": "address"
},
{
"internalType": "address",
"name": "to",
"type": "address"
},
{
"internalType": "uint256",
"name": "value",
"type": "uint256"
}
],
"name": "transferOnLiquidation",
"outputs": [
],
"stateMutability": "nonpayable",
"type": "function"
},
{
"inputs": [
{
"internalType": "address",
"name": "user",
"type": "address"
},
{
"internalType": "uint256",
"name": "amount",
"type": "uint256"
}
],
"name": "transferUnderlyingTo",
"outputs": [
{
"internalType": "uint256",
"name": "",
"type": "uint256"
}
],
"stateMutability": "nonpayable",
"type": "function"
}
]

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -1,5 +1,5 @@
from itertools import groupby
from typing import List, Tuple
from typing import List, Optional
from mev_inspect.schemas.arbitrages import Arbitrage
from mev_inspect.schemas.swaps import Swap
@@ -23,112 +23,70 @@ def get_arbitrages(swaps: List[Swap]) -> List[Arbitrage]:
def _get_arbitrages_from_swaps(swaps: List[Swap]) -> List[Arbitrage]:
"""
An arbitrage is defined as multiple swaps in a series that result in the initial token being returned
to the initial sender address.
There are 2 types of swaps that are most common (99%+).
Case I (fully routed):
BOT -> A/B -> B/C -> C/A -> BOT
Case II (always return to bot):
BOT -> A/B -> BOT -> B/C -> BOT -> A/C -> BOT
There is only 1 correct way to route Case I, but for Case II the following valid routes could be found:
A->B->C->A / B->C->A->B / C->A->B->C. Thus when multiple valid routes are found we filter to the set that
happen in valid order.
"""
pool_addresses = {swap.pool_address for swap in swaps}
all_arbitrages = []
start_ends = _get_all_start_end_swaps(swaps)
if len(start_ends) == 0:
return []
for index, first_swap in enumerate(swaps):
other_swaps = swaps[:index] + swaps[index + 1 :]
# for (start, end) in filtered_start_ends:
for (start, end) in start_ends:
potential_intermediate_swaps = [
swap for swap in swaps if swap is not start and swap is not end
]
routes = _get_all_routes(start, end, potential_intermediate_swaps)
if first_swap.from_address not in pool_addresses:
arbitrage = _get_arbitrage_starting_with_swap(first_swap, other_swaps)
for route in routes:
start_amount = route[0].token_in_amount
end_amount = route[-1].token_out_amount
if arbitrage is not None:
all_arbitrages.append(arbitrage)
return all_arbitrages
def _get_arbitrage_starting_with_swap(
start_swap: Swap,
other_swaps: List[Swap],
) -> Optional[Arbitrage]:
swap_path = [start_swap]
current_swap: Swap = start_swap
while True:
next_swap = _get_swap_from_address(
current_swap.to_address,
current_swap.token_out_address,
other_swaps,
)
if next_swap is None:
return None
swap_path.append(next_swap)
current_swap = next_swap
if (
current_swap.to_address == start_swap.from_address
and current_swap.token_out_address == start_swap.token_in_address
):
start_amount = start_swap.token_in_amount
end_amount = current_swap.token_out_amount
profit_amount = end_amount - start_amount
arb = Arbitrage(
swaps=route,
block_number=route[0].block_number,
transaction_hash=route[0].transaction_hash,
account_address=route[0].from_address,
profit_token_address=route[0].token_in_address,
return Arbitrage(
swaps=swap_path,
block_number=start_swap.block_number,
transaction_hash=start_swap.transaction_hash,
account_address=start_swap.from_address,
profit_token_address=start_swap.token_in_address,
start_amount=start_amount,
end_amount=end_amount,
profit_amount=profit_amount,
)
all_arbitrages.append(arb)
if len(all_arbitrages) == 1:
return all_arbitrages
else:
return [
arb
for arb in all_arbitrages
if (arb.swaps[0].trace_address < arb.swaps[-1].trace_address)
]
return None
def _get_all_start_end_swaps(swaps: List[Swap]) -> List[Tuple[Swap, Swap]]:
"""
Gets the set of all possible opening and closing swap pairs in an arbitrage via
- swap[start].token_in == swap[end].token_out
- swap[start].from_address == swap[end].to_address
- swap[start].from_address not in all_pool_addresses
- swap[end].to_address not in all_pool_addresses
"""
pool_addrs = [swap.contract_address for swap in swaps]
valid_start_ends: List[Tuple[Swap, Swap]] = []
for index, potential_start_swap in enumerate(swaps):
remaining_swaps = swaps[:index] + swaps[index + 1 :]
for potential_end_swap in remaining_swaps:
if (
potential_start_swap.token_in_address
== potential_end_swap.token_out_address
and potential_start_swap.from_address == potential_end_swap.to_address
and potential_start_swap.from_address not in pool_addrs
):
valid_start_ends.append((potential_start_swap, potential_end_swap))
return valid_start_ends
def _get_swap_from_address(
address: str, token_address: str, swaps: List[Swap]
) -> Optional[Swap]:
for swap in swaps:
if swap.pool_address == address and swap.token_in_address == token_address:
return swap
def _get_all_routes(
start_swap: Swap, end_swap: Swap, other_swaps: List[Swap]
) -> List[List[Swap]]:
"""
Returns all routes (each a List[Swap]) from start_swap to end_swap,
matching only on token_in_address and token_out_address.
"""
# If the path is complete, return
if start_swap.token_out_address == end_swap.token_in_address:
return [[start_swap, end_swap]]
elif len(other_swaps) == 0:
return []
# Collect all potential next steps, check if valid, recursively find routes from next_step to end_swap
routes: List[List[Swap]] = []
for potential_next_swap in other_swaps:
if start_swap.token_out_address == potential_next_swap.token_in_address and (
start_swap.contract_address == potential_next_swap.from_address
or start_swap.to_address == potential_next_swap.contract_address
or start_swap.to_address == potential_next_swap.from_address
):
remaining_swaps = [
swap for swap in other_swaps if swap != potential_next_swap
]
next_swap_routes = _get_all_routes(
potential_next_swap, end_swap, remaining_swaps
)
if len(next_swap_routes) > 0:
for next_swap_route in next_swap_routes:
next_swap_route.insert(0, start_swap)
routes.append(next_swap_route)
return routes
return None
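A worked example of the Case II routing described in the docstring above, with swaps reduced to (token_in, token_out) pairs; `routes` is a simplified `_get_all_routes` that chains swaps on token_out == token_in only, ignoring the address checks.

```
# Simplified sketch: swaps are (token_in, token_out) pairs.
def routes(start, end, rest):
    # a route is complete when start's output feeds end's input
    if start[1] == end[0]:
        return [[start, end]]
    found = []
    for nxt in rest:
        if start[1] == nxt[0]:
            remaining = [s for s in rest if s is not nxt]
            for tail in routes(nxt, end, remaining):
                found.append([start] + tail)
    return found

# Case II: the bot swaps A->B, B->C, C->A, returning to itself between hops.
# With start ("A", "B") and end ("C", "A"), only one valid route exists.
assert routes(("A", "B"), ("C", "A"), [("B", "C")]) == [
    [("A", "B"), ("B", "C"), ("C", "A")]
]
```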

View File

@@ -1,72 +1,55 @@
import asyncio
import logging
from typing import List, Optional
from pathlib import Path
from typing import List
from sqlalchemy import orm
from web3 import Web3
from mev_inspect.fees import fetch_base_fee_per_gas
from mev_inspect.schemas.blocks import Block
from mev_inspect.schemas import Block, Trace, TraceType
from mev_inspect.schemas.receipts import Receipt
from mev_inspect.schemas.traces import Trace, TraceType
from mev_inspect.utils import hex_to_int
logger = logging.getLogger(__name__)
async def get_latest_block_number(base_provider) -> int:
latest_block = await base_provider.make_request(
"eth_getBlockByNumber",
["latest", False],
)
return hex_to_int(latest_block["result"]["number"])
cache_directory = "./cache"
async def create_from_block_number(
base_provider,
w3: Web3,
block_number: int,
trace_db_session: Optional[orm.Session],
def get_latest_block_number(w3: Web3) -> int:
return int(w3.eth.get_block("latest")["number"])
def create_from_block_number(
base_provider, w3: Web3, block_number: int, should_cache: bool
) -> Block:
block: Optional[Block] = None
if not should_cache:
return fetch_block(w3, base_provider, block_number)
if trace_db_session is not None:
block = _find_block(trace_db_session, block_number)
cache_path = _get_cache_path(block_number)
if block is None:
block = await _fetch_block(w3, base_provider, block_number)
return block
if cache_path.is_file():
print(f"Cache for block {block_number} exists, " "loading data from cache")
return Block.parse_file(cache_path)
else:
print(f"Cache for block {block_number} did not exist, getting data")
block = fetch_block(w3, base_provider, block_number)
cache_block(cache_path, block)
return block
async def _fetch_block(w3, base_provider, block_number: int, retries: int = 0) -> Block:
block_json, receipts_json, traces_json, base_fee_per_gas = await asyncio.gather(
w3.eth.get_block(block_number),
base_provider.make_request("eth_getBlockReceipts", [block_number]),
base_provider.make_request("trace_block", [block_number]),
fetch_base_fee_per_gas(w3, block_number),
)
def fetch_block(w3, base_provider, block_number: int) -> Block:
block_json = w3.eth.get_block(block_number)
receipts_json = base_provider.make_request("eth_getBlockReceipts", [block_number])
traces_json = w3.parity.trace_block(block_number)
try:
receipts: List[Receipt] = [
Receipt(**receipt) for receipt in receipts_json["result"]
]
traces = [Trace(**trace_json) for trace_json in traces_json["result"]]
except KeyError as e:
logger.warning(
f"Failed to create objects from block: {block_number}: {e}, retrying: {retries + 1} / 3"
)
if retries < 3:
await asyncio.sleep(5)
return await _fetch_block(w3, base_provider, block_number, retries + 1)
else:
raise
receipts: List[Receipt] = [
Receipt(**receipt) for receipt in receipts_json["result"]
]
traces = [Trace(**trace_json) for trace_json in traces_json]
base_fee_per_gas = fetch_base_fee_per_gas(w3, block_number)
return Block(
block_number=block_number,
block_timestamp=block_json["timestamp"],
miner=block_json["miner"],
base_fee_per_gas=base_fee_per_gas,
traces=traces,
@@ -74,110 +57,6 @@ async def _fetch_block(w3, base_provider, block_number: int, retries: int = 0) -
)
def _find_block(
trace_db_session: orm.Session,
block_number: int,
) -> Optional[Block]:
block_timestamp = _find_block_timestamp(trace_db_session, block_number)
traces = _find_traces(trace_db_session, block_number)
receipts = _find_receipts(trace_db_session, block_number)
base_fee_per_gas = _find_base_fee(trace_db_session, block_number)
if (
block_timestamp is None
or traces is None
or receipts is None
or base_fee_per_gas is None
):
return None
miner_address = _get_miner_address_from_traces(traces)
if miner_address is None:
return None
return Block(
block_number=block_number,
block_timestamp=block_timestamp,
miner=miner_address,
base_fee_per_gas=base_fee_per_gas,
traces=traces,
receipts=receipts,
)
def _find_block_timestamp(
trace_db_session: orm.Session,
block_number: int,
) -> Optional[int]:
result = trace_db_session.execute(
"SELECT block_timestamp FROM block_timestamps WHERE block_number = :block_number",
params={"block_number": block_number},
).one_or_none()
if result is None:
return None
else:
(block_timestamp,) = result
return block_timestamp
def _find_traces(
trace_db_session: orm.Session,
block_number: int,
) -> Optional[List[Trace]]:
result = trace_db_session.execute(
"SELECT raw_traces FROM block_traces WHERE block_number = :block_number",
params={"block_number": block_number},
).one_or_none()
if result is None:
return None
else:
(traces_json,) = result
return [Trace(**trace_json) for trace_json in traces_json]
def _find_receipts(
trace_db_session: orm.Session,
block_number: int,
) -> Optional[List[Receipt]]:
result = trace_db_session.execute(
"SELECT raw_receipts FROM block_receipts WHERE block_number = :block_number",
params={"block_number": block_number},
).one_or_none()
if result is None:
return None
else:
(receipts_json,) = result
return [Receipt(**receipt) for receipt in receipts_json]
def _find_base_fee(
trace_db_session: orm.Session,
block_number: int,
) -> Optional[int]:
result = trace_db_session.execute(
"SELECT base_fee_in_wei FROM base_fee WHERE block_number = :block_number",
params={"block_number": block_number},
).one_or_none()
if result is None:
return None
else:
(base_fee,) = result
return base_fee
def _get_miner_address_from_traces(traces: List[Trace]) -> Optional[str]:
for trace in traces:
if trace.type == TraceType.reward:
return trace.action["author"]
return None
def get_transaction_hashes(calls: List[Trace]) -> List[str]:
result = []
@@ -190,3 +69,17 @@ def get_transaction_hashes(calls: List[Trace]) -> List[str]:
result.append(call.transaction_hash)
return result
def cache_block(cache_path: Path, block: Block):
write_mode = "w" if cache_path.is_file() else "x"
cache_path.parent.mkdir(parents=True, exist_ok=True)
with open(cache_path, mode=write_mode) as cache_file:
cache_file.write(block.json())
def _get_cache_path(block_number: int) -> Path:
cache_directory_path = Path(cache_directory)
return cache_directory_path / f"{block_number}.json"
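A runnable sketch of the cache layout used by `_get_cache_path` and `cache_block` above, with a plain dict standing in for the pydantic `Block` model: each block is serialized to `./cache/<block_number>.json`.

```
# Minimal sketch; a dict stands in for the Block model.
import json
from pathlib import Path

cache_directory = "./cache"

def get_cache_path(block_number: int) -> Path:
    return Path(cache_directory) / f"{block_number}.json"

def cache_block(block_number: int, block: dict) -> None:
    path = get_cache_path(block_number)
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(block))

def load_cached_block(block_number: int) -> dict:
    return json.loads(get_cache_path(block_number).read_text())

cache_block(13_000_000, {"block_number": 13_000_000, "miner": "0xabc"})
assert load_cached_block(13_000_000)["miner"] == "0xabc"
```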

View File

@@ -1,124 +0,0 @@
from typing import List, Optional, Sequence
from mev_inspect.schemas.swaps import Swap
from mev_inspect.schemas.traces import ClassifiedTrace, DecodedCallTrace
from mev_inspect.schemas.transfers import ETH_TOKEN_ADDRESS, Transfer
def create_swap_from_pool_transfers(
trace: DecodedCallTrace,
recipient_address: str,
prior_transfers: List[Transfer],
child_transfers: List[Transfer],
) -> Optional[Swap]:
pool_address = trace.to_address
transfers_to_pool = []
if trace.value is not None and trace.value > 0:
transfers_to_pool = [_build_eth_transfer(trace)]
if len(transfers_to_pool) == 0:
transfers_to_pool = _filter_transfers(prior_transfers, to_address=pool_address)
if len(transfers_to_pool) == 0:
transfers_to_pool = _filter_transfers(child_transfers, to_address=pool_address)
if len(transfers_to_pool) == 0:
return None
transfers_from_pool_to_recipient = _filter_transfers(
child_transfers, to_address=recipient_address, from_address=pool_address
)
if len(transfers_from_pool_to_recipient) != 1:
return None
transfer_in = transfers_to_pool[-1]
transfer_out = transfers_from_pool_to_recipient[0]
return Swap(
abi_name=trace.abi_name,
transaction_hash=trace.transaction_hash,
transaction_position=trace.transaction_position,
block_number=trace.block_number,
trace_address=trace.trace_address,
contract_address=pool_address,
protocol=trace.protocol,
from_address=transfer_in.from_address,
to_address=transfer_out.to_address,
token_in_address=transfer_in.token_address,
token_in_amount=transfer_in.amount,
token_out_address=transfer_out.token_address,
token_out_amount=transfer_out.amount,
error=trace.error,
)
def create_swap_from_recipient_transfers(
trace: DecodedCallTrace,
pool_address: str,
recipient_address: str,
prior_transfers: List[Transfer],
child_transfers: List[Transfer],
) -> Optional[Swap]:
transfers_from_recipient = _filter_transfers(
[*prior_transfers, *child_transfers], from_address=recipient_address
)
transfers_to_recipient = _filter_transfers(
child_transfers, to_address=recipient_address
)
if len(transfers_from_recipient) != 1 or len(transfers_to_recipient) != 1:
return None
transfer_in = transfers_from_recipient[0]
transfer_out = transfers_to_recipient[0]
return Swap(
abi_name=trace.abi_name,
transaction_hash=trace.transaction_hash,
transaction_position=trace.transaction_position,
block_number=trace.block_number,
trace_address=trace.trace_address,
contract_address=pool_address,
protocol=trace.protocol,
from_address=transfer_in.from_address,
to_address=transfer_out.to_address,
token_in_address=transfer_in.token_address,
token_in_amount=transfer_in.amount,
token_out_address=transfer_out.token_address,
token_out_amount=transfer_out.amount,
error=trace.error,
)
def _build_eth_transfer(trace: ClassifiedTrace) -> Transfer:
return Transfer(
block_number=trace.block_number,
transaction_hash=trace.transaction_hash,
trace_address=trace.trace_address,
amount=trace.value,
to_address=trace.to_address,
from_address=trace.from_address,
token_address=ETH_TOKEN_ADDRESS,
)
def _filter_transfers(
transfers: Sequence[Transfer],
to_address: Optional[str] = None,
from_address: Optional[str] = None,
) -> List[Transfer]:
filtered_transfers = []
for transfer in transfers:
if to_address is not None and transfer.to_address != to_address:
continue
if from_address is not None and transfer.from_address != from_address:
continue
filtered_transfers.append(transfer)
return filtered_transfers
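A quick sketch of `_filter_transfers`' matching semantics with stand-in records: a `None` filter matches everything, and both filters must hold when given together.

```
# Stand-in records; only the fields the filter touches are modeled.
from typing import List, NamedTuple, Optional

class T(NamedTuple):
    from_address: str
    to_address: str

def filter_transfers(
    transfers: List[T],
    to_address: Optional[str] = None,
    from_address: Optional[str] = None,
) -> List[T]:
    return [
        t for t in transfers
        if (to_address is None or t.to_address == to_address)
        and (from_address is None or t.from_address == from_address)
    ]

transfers = [T("bot", "pool"), T("pool", "bot"), T("pool", "other")]
assert filter_transfers(transfers, to_address="pool") == [T("bot", "pool")]
assert filter_transfers(transfers, to_address="bot", from_address="pool") == [
    T("pool", "bot")
]
```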

View File

@@ -1,18 +1,16 @@
from typing import Dict, Optional, Tuple, Type
from mev_inspect.schemas.classifiers import Classifier, ClassifierSpec
from mev_inspect.schemas.traces import DecodedCallTrace, Protocol
from mev_inspect.schemas.classified_traces import DecodedCallTrace, Protocol
from mev_inspect.schemas.classifiers import ClassifierSpec, Classifier
from .aave import AAVE_CLASSIFIER_SPECS
from .balancer import BALANCER_CLASSIFIER_SPECS
from .bancor import BANCOR_CLASSIFIER_SPECS
from .compound import COMPOUND_CLASSIFIER_SPECS
from .cryptopunks import CRYPTOPUNKS_CLASSIFIER_SPECS
from .curve import CURVE_CLASSIFIER_SPECS
from .erc20 import ERC20_CLASSIFIER_SPECS
from .uniswap import UNISWAP_CLASSIFIER_SPECS
from .weth import WETH_ADDRESS, WETH_CLASSIFIER_SPECS
from .weth import WETH_CLASSIFIER_SPECS, WETH_ADDRESS
from .zero_ex import ZEROX_CLASSIFIER_SPECS
from .balancer import BALANCER_CLASSIFIER_SPECS
from .compound import COMPOUND_CLASSIFIER_SPECS
ALL_CLASSIFIER_SPECS = (
ERC20_CLASSIFIER_SPECS
@@ -23,8 +21,6 @@ ALL_CLASSIFIER_SPECS = (
+ ZEROX_CLASSIFIER_SPECS
+ BALANCER_CLASSIFIER_SPECS
+ COMPOUND_CLASSIFIER_SPECS
+ CRYPTOPUNKS_CLASSIFIER_SPECS
+ BANCOR_CLASSIFIER_SPECS
)
_SPECS_BY_ABI_NAME_AND_PROTOCOL: Dict[

View File

@@ -1,25 +1,10 @@
from mev_inspect.schemas.classified_traces import (
Protocol,
)
from mev_inspect.schemas.classifiers import (
ClassifierSpec,
DecodedCallTrace,
LiquidationClassifier,
TransferClassifier,
)
from mev_inspect.schemas.traces import Protocol
from mev_inspect.schemas.transfers import Transfer
class AaveTransferClassifier(TransferClassifier):
@staticmethod
def get_transfer(trace: DecodedCallTrace) -> Transfer:
return Transfer(
block_number=trace.block_number,
transaction_hash=trace.transaction_hash,
trace_address=trace.trace_address,
amount=trace.inputs["value"],
to_address=trace.inputs["to"],
from_address=trace.inputs["from"],
token_address=trace.to_address,
)
AAVE_SPEC = ClassifierSpec(
@@ -30,13 +15,4 @@ AAVE_SPEC = ClassifierSpec(
},
)
ATOKENS_SPEC = ClassifierSpec(
abi_name="aTokens",
protocol=Protocol.aave,
classifiers={
"transferOnLiquidation(address,address,uint256)": AaveTransferClassifier,
"transferFrom(address,address,uint256)": AaveTransferClassifier,
},
)
AAVE_CLASSIFIER_SPECS = [AAVE_SPEC, ATOKENS_SPEC]
AAVE_CLASSIFIER_SPECS = [AAVE_SPEC]

View File

@@ -1,28 +1,20 @@
from typing import List, Optional
from mev_inspect.schemas.classified_traces import (
DecodedCallTrace,
Protocol,
)
from mev_inspect.schemas.classifiers import (
ClassifierSpec,
SwapClassifier,
)
from mev_inspect.classifiers.helpers import create_swap_from_pool_transfers
from mev_inspect.schemas.classifiers import ClassifierSpec, SwapClassifier
from mev_inspect.schemas.swaps import Swap
from mev_inspect.schemas.traces import DecodedCallTrace, Protocol
from mev_inspect.schemas.transfers import Transfer
BALANCER_V1_POOL_ABI_NAME = "BPool"
class BalancerSwapClassifier(SwapClassifier):
@staticmethod
def parse_swap(
trace: DecodedCallTrace,
prior_transfers: List[Transfer],
child_transfers: List[Transfer],
) -> Optional[Swap]:
recipient_address = trace.from_address
swap = create_swap_from_pool_transfers(
trace, recipient_address, prior_transfers, child_transfers
)
return swap
def get_swap_recipient(trace: DecodedCallTrace) -> str:
return trace.from_address
BALANCER_V1_SPECS = [

View File

@@ -1,41 +0,0 @@
from typing import List, Optional
from mev_inspect.classifiers.helpers import create_swap_from_recipient_transfers
from mev_inspect.schemas.classifiers import ClassifierSpec, SwapClassifier
from mev_inspect.schemas.swaps import Swap
from mev_inspect.schemas.traces import DecodedCallTrace, Protocol
from mev_inspect.schemas.transfers import Transfer
BANCOR_NETWORK_ABI_NAME = "BancorNetwork"
BANCOR_NETWORK_CONTRACT_ADDRESS = "0x2F9EC37d6CcFFf1caB21733BdaDEdE11c823cCB0"
class BancorSwapClassifier(SwapClassifier):
@staticmethod
def parse_swap(
trace: DecodedCallTrace,
prior_transfers: List[Transfer],
child_transfers: List[Transfer],
) -> Optional[Swap]:
recipient_address = trace.from_address
swap = create_swap_from_recipient_transfers(
trace,
BANCOR_NETWORK_CONTRACT_ADDRESS,
recipient_address,
prior_transfers,
child_transfers,
)
return swap
BANCOR_NETWORK_SPEC = ClassifierSpec(
abi_name=BANCOR_NETWORK_ABI_NAME,
protocol=Protocol.bancor,
classifiers={
"convertByPath(address[],uint256,uint256,address,address,uint256)": BancorSwapClassifier,
},
valid_contract_addresses=[BANCOR_NETWORK_CONTRACT_ADDRESS],
)
BANCOR_CLASSIFIER_SPECS = [BANCOR_NETWORK_SPEC]

View File

@@ -1,24 +1,15 @@
from mev_inspect.schemas.classified_traces import (
Protocol,
)
from mev_inspect.schemas.classifiers import (
ClassifierSpec,
LiquidationClassifier,
SeizeClassifier,
)
from mev_inspect.schemas.traces import Protocol
COMPOUND_V2_CETH_SPEC = ClassifierSpec(
abi_name="CEther",
protocol=Protocol.compound_v2,
valid_contract_addresses=["0x4ddc2d193948926d02f9b1fe9e1daa0718270ed5"],
classifiers={
"liquidateBorrow(address,address)": LiquidationClassifier,
"seize(address,address,uint256)": SeizeClassifier,
},
)
CREAM_CETH_SPEC = ClassifierSpec(
abi_name="CEther",
protocol=Protocol.cream,
valid_contract_addresses=["0xD06527D5e56A3495252A528C4987003b712860eE"],
classifiers={
"liquidateBorrow(address,address)": LiquidationClassifier,
"seize(address,address,uint256)": SeizeClassifier,
@@ -28,136 +19,10 @@ CREAM_CETH_SPEC = ClassifierSpec(
COMPOUND_V2_CTOKEN_SPEC = ClassifierSpec(
abi_name="CToken",
protocol=Protocol.compound_v2,
valid_contract_addresses=[
"0x6c8c6b02e7b2be14d4fa6022dfd6d75921d90e4e",
"0x5d3a536e4d6dbd6114cc1ead35777bab948e3643",
"0x158079ee67fce2f58472a96584a73c7ab9ac95c1",
"0x39aa39c021dfbae8fac545936693ac917d5e7563",
"0xf650c3d88d12db855b8bf7d11be6c55a4e07dcc9",
"0xc11b1268c1a384e55c48c2391d8d480264a3a7f4",
"0xb3319f5d18bc0d84dd1b4825dcde5d5f7266d407",
"0xf5dce57282a584d2746faf1593d3121fcac444dc",
"0x35a18000230da775cac24873d00ff85bccded550",
"0x70e36f6bf80a52b3b46b3af8e106cc0ed743e8e4",
"0xccf4429db6322d5c611ee964527d42e5d685dd6a",
"0x12392f67bdf24fae0af363c24ac620a2f67dad86",
"0xface851a4921ce59e912d19329929ce6da6eb0c7",
"0x95b4ef2869ebd94beb4eee400a99824bf5dc325b",
"0x4b0181102a0112a2ef11abee5563bb4a3176c9d7",
"0xe65cdb6479bac1e22340e4e755fae7e509ecd06c",
"0x80a2ae356fc9ef4305676f7a3e2ed04e12c33946",
],
classifiers={
"liquidateBorrow(address,uint256,address)": LiquidationClassifier,
"seize(address,address,uint256)": SeizeClassifier,
},
)
CREAM_CTOKEN_SPEC = ClassifierSpec(
abi_name="CToken",
protocol=Protocol.cream,
valid_contract_addresses=[
"0xd06527d5e56a3495252a528c4987003b712860ee",
"0x51f48b638f82e8765f7a26373a2cb4ccb10c07af",
"0x44fbebd2f576670a6c33f6fc0b00aa8c5753b322",
"0xcbae0a83f4f9926997c8339545fb8ee32edc6b76",
"0xce4fe9b4b8ff61949dcfeb7e03bc9faca59d2eb3",
"0x19d1666f543d42ef17f66e376944a22aea1a8e46",
"0x9baf8a5236d44ac410c0186fe39178d5aad0bb87",
"0x797aab1ce7c01eb727ab980762ba88e7133d2157",
"0x892b14321a4fcba80669ae30bd0cd99a7ecf6ac0",
"0x697256caa3ccafd62bb6d3aa1c7c5671786a5fd9",
"0x8b86e0598616a8d4f1fdae8b59e55fb5bc33d0d6",
"0xc7fd8dcee4697ceef5a2fd4608a7bd6a94c77480",
"0x17107f40d70f4470d20cb3f138a052cae8ebd4be",
"0x1ff8cdb51219a8838b52e9cac09b71e591bc998e",
"0x3623387773010d9214b10c551d6e7fc375d31f58",
"0x4ee15f44c6f0d8d1136c83efd2e8e4ac768954c6",
"0x338286c0bc081891a4bda39c7667ae150bf5d206",
"0x10fdbd1e48ee2fd9336a482d746138ae19e649db",
"0x01da76dea59703578040012357b81ffe62015c2d",
"0xef58b2d5a1b8d3cde67b8ab054dc5c831e9bc025",
"0xe89a6d0509faf730bd707bf868d9a2a744a363c7",
"0xeff039c3c1d668f408d09dd7b63008622a77532c",
"0x22b243b96495c547598d9042b6f94b01c22b2e9e",
"0x8b3ff1ed4f36c2c2be675afb13cc3aa5d73685a5",
"0x2a537fa9ffaea8c1a41d3c2b68a9cb791529366d",
"0x7ea9c63e216d5565c3940a2b3d150e59c2907db3",
"0x3225e3c669b39c7c8b3e204a8614bb218c5e31bc",
"0xf55bbe0255f7f4e70f63837ff72a577fbddbe924",
"0x903560b1cce601794c584f58898da8a8b789fc5d",
"0x054b7ed3f45714d3091e82aad64a1588dc4096ed",
"0xd5103afcd0b3fa865997ef2984c66742c51b2a8b",
"0xfd609a03b393f1a1cfcacedabf068cad09a924e2",
"0xd692ac3245bb82319a31068d6b8412796ee85d2c",
"0x92b767185fb3b04f881e3ac8e5b0662a027a1d9f",
"0x10a3da2bb0fae4d591476fd97d6636fd172923a8",
"0x3c6c553a95910f9fc81c98784736bd628636d296",
"0x21011bc93d9e515b9511a817a1ed1d6d468f49fc",
"0x85759961b116f1d36fd697855c57a6ae40793d9b",
"0x7c3297cfb4c4bbd5f44b450c0872e0ada5203112",
"0x7aaa323d7e398be4128c7042d197a2545f0f1fea",
"0x011a014d5e8eb4771e575bb1000318d509230afa",
"0xe6c3120f38f56deb38b69b65cc7dcaf916373963",
"0x4fe11bc316b6d7a345493127fbe298b95adaad85",
"0xcd22c4110c12ac41acefa0091c432ef44efaafa0",
"0x228619cca194fbe3ebeb2f835ec1ea5080dafbb2",
"0x73f6cba38922960b7092175c0add22ab8d0e81fc",
"0x38f27c03d6609a86ff7716ad03038881320be4ad",
"0x5ecad8a75216cea7dff978525b2d523a251eea92",
"0x5c291bc83d15f71fb37805878161718ea4b6aee9",
"0x6ba0c66c48641e220cf78177c144323b3838d375",
"0xd532944df6dfd5dd629e8772f03d4fc861873abf",
"0x197070723ce0d3810a0e47f06e935c30a480d4fc",
"0xc25eae724f189ba9030b2556a1533e7c8a732e14",
"0x25555933a8246ab67cbf907ce3d1949884e82b55",
"0xc68251421edda00a10815e273fa4b1191fac651b",
"0x65883978ada0e707c3b2be2a6825b1c4bdf76a90",
"0x8b950f43fcac4931d408f1fcda55c6cb6cbf3096",
"0x59089279987dd76fc65bf94cb40e186b96e03cb3",
"0x2db6c82ce72c8d7d770ba1b5f5ed0b6e075066d6",
"0xb092b4601850e23903a42eacbc9d8a0eec26a4d5",
"0x081fe64df6dc6fc70043aedf3713a3ce6f190a21",
"0x1d0986fb43985c88ffa9ad959cc24e6a087c7e35",
"0xc36080892c64821fa8e396bc1bd8678fa3b82b17",
"0x8379baa817c5c5ab929b03ee8e3c48e45018ae41",
"0x299e254a8a165bbeb76d9d69305013329eea3a3b",
"0xf8445c529d363ce114148662387eba5e62016e20",
"0x28526bb33d7230e65e735db64296413731c5402e",
"0x45406ba53bb84cd32a58e7098a2d4d1b11b107f6",
"0x6d1b9e01af17dd08d6dec08e210dfd5984ff1c20",
"0x1f9b4756b008106c806c7e64322d7ed3b72cb284",
"0xab10586c918612ba440482db77549d26b7abf8f7",
"0xdfff11dfe6436e42a17b86e7f419ac8292990393",
"0xdbb5e3081def4b6cdd8864ac2aeda4cbf778fecf",
"0x71cefcd324b732d4e058afacba040d908c441847",
"0x1a122348b73b58ea39f822a89e6ec67950c2bbd0",
"0x523effc8bfefc2948211a05a905f761cba5e8e9e",
"0x4202d97e00b9189936edf37f8d01cff88bdd81d4",
"0x4baa77013ccd6705ab0522853cb0e9d453579dd4",
"0x98e329eb5aae2125af273102f3440de19094b77c",
"0x8c3b7a4320ba70f8239f83770c4015b5bc4e6f91",
"0xe585c76573d7593abf21537b607091f76c996e73",
"0x81e346729723c4d15d0fb1c5679b9f2926ff13c6",
"0x766175eac1a99c969ddd1ebdbe7e270d508d8fff",
"0xd7394428536f63d5659cc869ef69d10f9e66314b",
"0x1241b10e7ea55b22f5b2d007e8fecdf73dcff999",
"0x2a867fd776b83e1bd4e13c6611afd2f6af07ea6d",
"0x250fb308199fe8c5220509c1bf83d21d60b7f74a",
"0x4112a717edd051f77d834a6703a1ef5e3d73387f",
"0xf04ce2e71d32d789a259428ddcd02d3c9f97fb4e",
"0x89e42987c39f72e2ead95a8a5bc92114323d5828",
"0x58da9c9fc3eb30abbcbbab5ddabb1e6e2ef3d2ef",
],
classifiers={
"liquidateBorrow(address,uint256,address)": LiquidationClassifier,
"seize(address,address,uint256)": SeizeClassifier,
},
)
COMPOUND_CLASSIFIER_SPECS = [
COMPOUND_V2_CETH_SPEC,
COMPOUND_V2_CTOKEN_SPEC,
CREAM_CETH_SPEC,
CREAM_CTOKEN_SPEC,
]
COMPOUND_CLASSIFIER_SPECS = [COMPOUND_V2_CETH_SPEC, COMPOUND_V2_CTOKEN_SPEC]

View File

@@ -1,27 +0,0 @@
from mev_inspect.schemas.classifiers import Classifier, ClassifierSpec
from mev_inspect.schemas.traces import Classification, Protocol
class PunkBidAcceptanceClassifier(Classifier):
@staticmethod
def get_classification() -> Classification:
return Classification.punk_accept_bid
class PunkBidClassifier(Classifier):
@staticmethod
def get_classification() -> Classification:
return Classification.punk_bid
CRYPTO_PUNKS_SPEC = ClassifierSpec(
abi_name="cryptopunks",
protocol=Protocol.cryptopunks,
valid_contract_addresses=["0xb47e3cd837dDF8e4c57F05d70Ab865de6e193BBB"],
classifiers={
"enterBidForPunk(uint256)": PunkBidClassifier,
"acceptBidForPunk(uint256,uint256)": PunkBidAcceptanceClassifier,
},
)
CRYPTOPUNKS_CLASSIFIER_SPECS = [CRYPTO_PUNKS_SPEC]

View File

@@ -1,26 +1,18 @@
from typing import List, Optional
from mev_inspect.schemas.classified_traces import (
Protocol,
)
from mev_inspect.classifiers.helpers import create_swap_from_pool_transfers
from mev_inspect.schemas.classifiers import ClassifierSpec, SwapClassifier
from mev_inspect.schemas.swaps import Swap
from mev_inspect.schemas.traces import DecodedCallTrace, Protocol
from mev_inspect.schemas.transfers import Transfer
from mev_inspect.schemas.classifiers import (
ClassifierSpec,
DecodedCallTrace,
SwapClassifier,
)
class CurveSwapClassifier(SwapClassifier):
@staticmethod
def parse_swap(
trace: DecodedCallTrace,
prior_transfers: List[Transfer],
child_transfers: List[Transfer],
) -> Optional[Swap]:
recipient_address = trace.from_address
swap = create_swap_from_pool_transfers(
trace, recipient_address, prior_transfers, child_transfers
)
return swap
def get_swap_recipient(trace: DecodedCallTrace) -> str:
return trace.from_address
CURVE_BASE_POOLS = [

View File

@@ -1,5 +1,8 @@
from mev_inspect.schemas.classifiers import ClassifierSpec, TransferClassifier
from mev_inspect.schemas.traces import DecodedCallTrace
from mev_inspect.schemas.classified_traces import DecodedCallTrace
from mev_inspect.schemas.classifiers import (
ClassifierSpec,
TransferClassifier,
)
from mev_inspect.schemas.transfers import Transfer

View File

@@ -0,0 +1,18 @@
from mev_inspect.schemas.classified_traces import (
DecodedCallTrace,
Protocol,
)
from mev_inspect.schemas.classifiers import (
ClassifierSpec,
AtomicMatchClassifier,
)
OPENSEA_ATOMIC_MATCH_ABI_NAME = "atomicMatch_"
OPENSEA_SPEC = [
ClassifierSpec(
abi_name=OPENSEA_ATOMIC_MATCH_ABI_NAME,
protocol=Protocol.opensea,
valid_contract_addresses=["0x7be8076f4ea4a4ad08075c2508e481d6c946d12b"],
),
]
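A hedged sketch of how a spec list like `OPENSEA_SPEC` gets used: specs are indexed by (abi_name, protocol), mirroring the `_SPECS_BY_ABI_NAME_AND_PROTOCOL` mapping that appears later in this diff; `Spec` here is a stand-in for `ClassifierSpec`.

```
from typing import Dict, NamedTuple, Optional, Tuple

class Spec(NamedTuple):  # stand-in for ClassifierSpec
    abi_name: str
    protocol: str
    valid_contract_addresses: Tuple[str, ...]

specs = [
    Spec(
        "atomicMatch_",
        "opensea",
        ("0x7be8076f4ea4a4ad08075c2508e481d6c946d12b",),
    )
]

specs_by_key: Dict[Tuple[str, str], Spec] = {
    (spec.abi_name, spec.protocol): spec for spec in specs
}

def lookup(abi_name: str, protocol: str) -> Optional[Spec]:
    return specs_by_key.get((abi_name, protocol))

assert lookup("atomicMatch_", "opensea") is not None
```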

View File

@@ -1,10 +1,12 @@
from typing import List, Optional
from mev_inspect.schemas.classified_traces import (
DecodedCallTrace,
Protocol,
)
from mev_inspect.schemas.classifiers import (
ClassifierSpec,
SwapClassifier,
)
from mev_inspect.classifiers.helpers import create_swap_from_pool_transfers
from mev_inspect.schemas.classifiers import ClassifierSpec, SwapClassifier
from mev_inspect.schemas.swaps import Swap
from mev_inspect.schemas.traces import DecodedCallTrace, Protocol
from mev_inspect.schemas.transfers import Transfer
UNISWAP_V2_PAIR_ABI_NAME = "UniswapV2Pair"
UNISWAP_V3_POOL_ABI_NAME = "UniswapV3Pool"
@@ -12,34 +14,20 @@ UNISWAP_V3_POOL_ABI_NAME = "UniswapV3Pool"
class UniswapV3SwapClassifier(SwapClassifier):
@staticmethod
def parse_swap(
trace: DecodedCallTrace,
prior_transfers: List[Transfer],
child_transfers: List[Transfer],
) -> Optional[Swap]:
recipient_address = trace.inputs.get("recipient", trace.from_address)
swap = create_swap_from_pool_transfers(
trace, recipient_address, prior_transfers, child_transfers
)
return swap
def get_swap_recipient(trace: DecodedCallTrace) -> str:
if trace.inputs is not None and "recipient" in trace.inputs:
return trace.inputs["recipient"]
else:
return trace.from_address
class UniswapV2SwapClassifier(SwapClassifier):
@staticmethod
def parse_swap(
trace: DecodedCallTrace,
prior_transfers: List[Transfer],
child_transfers: List[Transfer],
) -> Optional[Swap]:
recipient_address = trace.inputs.get("to", trace.from_address)
swap = create_swap_from_pool_transfers(
trace, recipient_address, prior_transfers, child_transfers
)
return swap
def get_swap_recipient(trace: DecodedCallTrace) -> str:
if trace.inputs is not None and "to" in trace.inputs:
return trace.inputs["to"]
else:
return trace.from_address
UNISWAP_V3_CONTRACT_SPECS = [
@@ -139,7 +127,7 @@ UNISWAPPY_V2_PAIR_SPEC = ClassifierSpec(
},
)
UNISWAP_CLASSIFIER_SPECS: List = [
UNISWAP_CLASSIFIER_SPECS = [
*UNISWAP_V3_CONTRACT_SPECS,
*UNISWAPPY_V2_CONTRACT_SPECS,
*UNISWAP_V3_GENERAL_SPECS,

View File

@@ -1,9 +1,11 @@
from mev_inspect.schemas.classified_traces import (
Protocol,
)
from mev_inspect.schemas.classifiers import (
ClassifierSpec,
DecodedCallTrace,
TransferClassifier,
)
from mev_inspect.schemas.traces import Protocol
from mev_inspect.schemas.transfers import Transfer

View File

@@ -1,53 +1,9 @@
from typing import List, Optional, Tuple
from mev_inspect.schemas.classifiers import ClassifierSpec, SwapClassifier
from mev_inspect.schemas.swaps import Swap
from mev_inspect.schemas.traces import DecodedCallTrace, Protocol
from mev_inspect.schemas.transfers import Transfer
ANY_TAKER_ADDRESS = "0x0000000000000000000000000000000000000000"
RFQ_SIGNATURES = [
"fillRfqOrder((address,address,uint128,uint128,address,address,address,bytes32,uint64,uint256),(uint8,uint8,bytes32,bytes32),uint128)",
"_fillRfqOrder((address,address,uint128,uint128,address,address,address,bytes32,uint64,uint256),(uint8,uint8,bytes32,bytes32),uint128,address,bool,address)",
]
LIMIT_SIGNATURES = [
"fillOrKillLimitOrder((address,address,uint128,uint128,uint128,address,address,address,address,bytes32,uint64,uint256),(uint8,uint8,bytes32,bytes32),uint128)",
"fillLimitOrder((address,address,uint128,uint128,uint128,address,address,address,address,bytes32,uint64,uint256),(uint8,uint8,bytes32,bytes32),uint128)",
"_fillLimitOrder((address,address,uint128,uint128,uint128,address,address,address,address,bytes32,uint64,uint256),(uint8,uint8,bytes32,bytes32),uint128,address,address)",
]
class ZeroExSwapClassifier(SwapClassifier):
@staticmethod
def parse_swap(
trace: DecodedCallTrace,
prior_transfers: List[Transfer],
child_transfers: List[Transfer],
) -> Optional[Swap]:
token_in_address, token_in_amount = _get_0x_token_in_data(
trace, child_transfers
)
token_out_address, token_out_amount = _get_0x_token_out_data(trace)
return Swap(
abi_name=trace.abi_name,
transaction_hash=trace.transaction_hash,
transaction_position=trace.transaction_position,
block_number=trace.block_number,
trace_address=trace.trace_address,
contract_address=trace.to_address,
protocol=Protocol.zero_ex,
from_address=trace.from_address,
to_address=trace.to_address,
token_in_address=token_in_address,
token_in_amount=token_in_amount,
token_out_address=token_out_address,
token_out_amount=token_out_amount,
error=trace.error,
)
from mev_inspect.schemas.classified_traces import (
Protocol,
)
from mev_inspect.schemas.classifiers import (
ClassifierSpec,
)
ZEROX_CONTRACT_SPECS = [
@@ -166,14 +122,6 @@ ZEROX_GENERIC_SPECS = [
ClassifierSpec(
abi_name="INativeOrdersFeature",
protocol=Protocol.zero_ex,
valid_contract_addresses=["0xdef1c0ded9bec7f1a1670819833240f027b25eff"],
classifiers={
"fillOrKillLimitOrder((address,address,uint128,uint128,uint128,address,address,address,address,bytes32,uint64,uint256),(uint8,uint8,bytes32,bytes32),uint128)": ZeroExSwapClassifier,
"fillRfqOrder((address,address,uint128,uint128,address,address,address,bytes32,uint64,uint256),(uint8,uint8,bytes32,bytes32),uint128)": ZeroExSwapClassifier,
"fillLimitOrder((address,address,uint128,uint128,uint128,address,address,address,address,bytes32,uint64,uint256),(uint8,uint8,bytes32,bytes32),uint128)": ZeroExSwapClassifier,
"_fillRfqOrder((address,address,uint128,uint128,address,address,address,bytes32,uint64,uint256),(uint8,uint8,bytes32,bytes32),uint128,address,bool,address)": ZeroExSwapClassifier,
"_fillLimitOrder((address,address,uint128,uint128,uint128,address,address,address,address,bytes32,uint64,uint256),(uint8,uint8,bytes32,bytes32),uint128,address,address)": ZeroExSwapClassifier,
},
),
ClassifierSpec(
abi_name="IOtcOrdersFeature",
@@ -218,57 +166,3 @@ ZEROX_GENERIC_SPECS = [
]
ZEROX_CLASSIFIER_SPECS = ZEROX_CONTRACT_SPECS + ZEROX_GENERIC_SPECS
def _get_taker_token_in_amount(
taker_address: str, token_in_address: str, child_transfers: List[Transfer]
) -> int:
if len(child_transfers) != 2:
raise ValueError(
f"A settled order should consist of 2 child transfers, not {len(child_transfers)}."
)
if taker_address == ANY_TAKER_ADDRESS:
for transfer in child_transfers:
if transfer.token_address == token_in_address:
return transfer.amount
else:
for transfer in child_transfers:
if transfer.to_address == taker_address:
return transfer.amount
return 0
def _get_0x_token_in_data(
trace: DecodedCallTrace, child_transfers: List[Transfer]
) -> Tuple[str, int]:
order: List = trace.inputs["order"]
token_in_address = order[0]
if trace.function_signature in RFQ_SIGNATURES:
taker_address = order[5]
elif trace.function_signature in LIMIT_SIGNATURES:
taker_address = order[6]
else:
raise RuntimeError(
f"0x orderbook function {trace.function_signature} is not supported"
)
token_in_amount = _get_taker_token_in_amount(
taker_address, token_in_address, child_transfers
)
return token_in_address, token_in_amount
def _get_0x_token_out_data(trace: DecodedCallTrace) -> Tuple[str, int]:
order: List = trace.inputs["order"]
token_out_address = order[1]
token_out_amount = trace.inputs["takerTokenFillAmount"]
return token_out_address, token_out_amount
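For reference, the order-tuple indexing used by `_get_0x_token_in_data` above, illustrated with a fake RFQ order whose field values are placeholders; the index comments follow the tuple layout in the RFQ signatures listed above (limit orders carry an extra uint128 fee field, pushing the taker to index 6).

```
# Placeholder values; indexes match the RFQ order tuple in the signatures.
rfq_order = [
    "0xMAKER_TOKEN",  # [0] makerToken -> token_in_address (taker receives)
    "0xTAKER_TOKEN",  # [1] takerToken -> token_out_address (taker pays)
    100,              # [2] makerAmount
    90,               # [3] takerAmount
    "0xMAKER",        # [4] maker
    "0xTAKER",        # [5] taker (RFQ signatures read it here)
    "0xTX_ORIGIN",    # [6] txOrigin
]

token_in_address = rfq_order[0]
token_out_address = rfq_order[1]
taker_address = rfq_order[5]  # for limit signatures: order[6]
```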

View File

@@ -2,14 +2,12 @@ from typing import Dict, List, Optional
from mev_inspect.abi import get_abi
from mev_inspect.decode import ABIDecoder
from mev_inspect.schemas.blocks import CallAction, CallResult
from mev_inspect.schemas.traces import (
CallTrace,
from mev_inspect.schemas.blocks import CallAction, CallResult, Trace, TraceType
from mev_inspect.schemas.classified_traces import (
Classification,
ClassifiedTrace,
CallTrace,
DecodedCallTrace,
Trace,
TraceType,
)
from .specs import ALL_CLASSIFIER_SPECS

View File

@@ -1,40 +0,0 @@
import aiohttp
from mev_inspect.classifiers.specs.weth import WETH_ADDRESS
from mev_inspect.schemas.coinbase import CoinbasePrices, CoinbasePricesResponse
from mev_inspect.schemas.prices import (
AAVE_TOKEN_ADDRESS,
LINK_TOKEN_ADDRESS,
REN_TOKEN_ADDRESS,
UNI_TOKEN_ADDRESS,
USDC_TOKEN_ADDRESS_ADDRESS,
WBTC_TOKEN_ADDRESS,
YEARN_TOKEN_ADDRESS,
)
from mev_inspect.schemas.transfers import ETH_TOKEN_ADDRESS
COINBASE_API_BASE = "https://www.coinbase.com/api/v2"
COINBASE_TOKEN_NAME_BY_ADDRESS = {
WETH_ADDRESS: "weth",
ETH_TOKEN_ADDRESS: "ethereum",
WBTC_TOKEN_ADDRESS: "wrapped-bitcoin",
LINK_TOKEN_ADDRESS: "link",
YEARN_TOKEN_ADDRESS: "yearn-finance",
AAVE_TOKEN_ADDRESS: "aave",
UNI_TOKEN_ADDRESS: "uniswap",
USDC_TOKEN_ADDRESS_ADDRESS: "usdc",
REN_TOKEN_ADDRESS: "ren",
}
async def fetch_coinbase_prices(token_address: str) -> CoinbasePrices:
if token_address not in COINBASE_TOKEN_NAME_BY_ADDRESS:
raise ValueError(f"Unsupported token_address {token_address}")
coinbase_token_name = COINBASE_TOKEN_NAME_BY_ADDRESS[token_address]
url = f"{COINBASE_API_BASE}/assets/prices/{coinbase_token_name}"
async with aiohttp.ClientSession() as session:
async with session.get(url, params={"base": "USD"}) as response:
json_data = await response.json()
return CoinbasePricesResponse(**json_data).data.prices

View File

@@ -1,17 +1,42 @@
from typing import List, Optional
from typing import Dict, List, Optional
from web3 import Web3
from mev_inspect.traces import get_child_traces
from mev_inspect.schemas.classified_traces import (
ClassifiedTrace,
Classification,
Protocol,
)
from mev_inspect.schemas.liquidations import Liquidation
from mev_inspect.schemas.traces import Classification, ClassifiedTrace, Protocol
from mev_inspect.traces import get_child_traces
from mev_inspect.classifiers.specs import WETH_ADDRESS
from mev_inspect.abi import get_raw_abi
V2_COMPTROLLER_ADDRESS = "0x3d9819210A31b4961b30EF54bE2aeD79B9c9Cd3B"
V2_C_ETHER = "0x4Ddc2D193948926D02f9B1fE9e1daa0718270ED5"
CREAM_COMPTROLLER_ADDRESS = "0x3d5BC3c8d13dcB8bF317092d84783c2697AE9258"
CREAM_CR_ETHER = "0xD06527D5e56A3495252A528C4987003b712860eE"
# helper, queried only once at startup (from inspect_block)
def fetch_all_comp_markets(w3: Web3) -> Dict[str, str]:
c_token_mapping = {}
comp_v2_comptroller_abi = get_raw_abi("Comptroller", Protocol.compound_v2)
comptroller_instance = w3.eth.contract(
address=V2_COMPTROLLER_ADDRESS, abi=comp_v2_comptroller_abi
)
markets = comptroller_instance.functions.getAllMarkets().call()
comp_v2_ctoken_abi = get_raw_abi("CToken", Protocol.compound_v2)
for c_token in markets:
# make an exception for cETH (as it has no .underlying())
if c_token != V2_C_ETHER:
ctoken_instance = w3.eth.contract(address=c_token, abi=comp_v2_ctoken_abi)
underlying_token = ctoken_instance.functions.underlying().call()
c_token_mapping[
c_token.lower()
] = underlying_token.lower() # make keys and values lowercase for consistency
return c_token_mapping
def get_compound_liquidations(
traces: List[ClassifiedTrace],
traces: List[ClassifiedTrace], collateral_by_c_token_address: Dict[str, str]
) -> List[Liquidation]:
"""Inspect list of classified traces and identify liquidation"""
@@ -20,10 +45,7 @@ def get_compound_liquidations(
for trace in traces:
if (
trace.classification == Classification.liquidate
and (
trace.protocol == Protocol.compound_v2
or trace.protocol == Protocol.cream
)
and trace.protocol == Protocol.compound_v2
and trace.inputs is not None
and trace.to_address is not None
):
@@ -32,17 +54,17 @@ def get_compound_liquidations(
trace.transaction_hash, trace.trace_address, traces
)
seize_trace = _get_seize_call(child_traces)
if seize_trace is not None and seize_trace.inputs is not None:
c_token_collateral = trace.inputs["cTokenCollateral"]
if trace.abi_name == "CEther":
liquidations.append(
Liquidation(
liquidated_user=trace.inputs["borrower"],
collateral_token_address=WETH_ADDRESS, # WETH since all cEther liquidations provide Ether
debt_token_address=c_token_collateral,
liquidator_user=seize_trace.inputs["liquidator"],
debt_purchase_amount=trace.value,
protocol=trace.protocol,
protocol=Protocol.compound_v2,
received_amount=seize_trace.inputs["seizeTokens"],
transaction_hash=trace.transaction_hash,
trace_address=trace.trace_address,
@@ -52,13 +74,17 @@ def get_compound_liquidations(
elif (
trace.abi_name == "CToken"
): # cToken liquidations where liquidator pays back via token transfer
c_token_address = trace.to_address
liquidations.append(
Liquidation(
liquidated_user=trace.inputs["borrower"],
collateral_token_address=collateral_by_c_token_address[
c_token_address
],
debt_token_address=c_token_collateral,
liquidator_user=seize_trace.inputs["liquidator"],
debt_purchase_amount=trace.inputs["repayAmount"],
protocol=trace.protocol,
protocol=Protocol.compound_v2,
received_amount=seize_trace.inputs["seizeTokens"],
transaction_hash=trace.transaction_hash,
trace_address=trace.trace_address,

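A usage sketch tying the two functions above together (the module path and RPC URL are assumptions, and `classified_traces` comes from the classifier stage, not shown here): fetch the cToken-to-underlying mapping once, then reuse it for every block.

```
from typing import List

from web3 import Web3

# assumed module path for the helpers above
from mev_inspect.compound_liquidations import (
    fetch_all_comp_markets,
    get_compound_liquidations,
)

def find_compound_liquidations(rpc_url: str, classified_traces: List):
    w3 = Web3(Web3.HTTPProvider(rpc_url))  # placeholder endpoint
    collateral_by_c_token_address = fetch_all_comp_markets(w3)  # one-time call
    return get_compound_liquidations(
        classified_traces, collateral_by_c_token_address
    )
```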
View File

@@ -1,22 +0,0 @@
import asyncio
import signal
from functools import wraps
def coro(f):
@wraps(f)
def wrapper(*args, **kwargs):
loop = asyncio.get_event_loop()
def cancel_task_callback():
for task in asyncio.all_tasks():
task.cancel()
for sig in (signal.SIGINT, signal.SIGTERM):
loop.add_signal_handler(sig, cancel_task_callback)
try:
loop.run_until_complete(f(*args, **kwargs))
finally:
loop.run_until_complete(loop.shutdown_asyncgens())
return wrapper
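A usage sketch for the decorator above (the module path is assumed, and `loop.add_signal_handler` is Unix-only): `@coro` lets a synchronous CLI entrypoint drive an async function while cancelling its tasks cleanly on SIGINT/SIGTERM.

```
import asyncio

from mev_inspect.concurrency import coro  # assumed module path

@coro
async def main():
    while True:
        print("inspecting...")
        await asyncio.sleep(5)  # cancelled cleanly on SIGINT/SIGTERM

if __name__ == "__main__":
    try:
        main()  # blocks; the wrapper runs the event loop
    except asyncio.CancelledError:
        pass  # normal shutdown path after a signal
```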

View File

@@ -0,0 +1,26 @@
import json
from typing import List
from mev_inspect.models.atomicmatch import AtomicMatchModel
from mev_inspect.schemas.atomicmatch import AtomicMatch
def delete_atomicmatch_for_block(
db_session,
block_number: int,
) -> None:
(
db_session.query(AtomicMatchModel)
.filter(AtomicMatchModel.block_number == block_number)
.delete()
)
db_session.commit()
def write_atomicmatch(
db_session,
atomicmatches: List[AtomicMatch],
) -> None:
models = [
AtomicMatchModel(**json.loads(atomicmatch.json()))
for atomicmatch in atomicmatches
]
db_session.bulk_save_objects(models)
db_session.commit()
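A usage sketch for the CRUD helpers above (the module path is an assumption): deleting before writing keeps re-inspection of a block idempotent, matching the pattern the other CRUD modules in this diff use.

```
from typing import List

# assumed module path for the helpers above
from mev_inspect.crud.atomicmatch import (
    delete_atomicmatch_for_block,
    write_atomicmatch,
)

def rewrite_block_atomic_matches(db_session, block_number: int, matches: List):
    # delete-then-write so repeated runs don't duplicate rows
    delete_atomicmatch_for_block(db_session, block_number)
    write_atomicmatch(db_session, matches)
```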

View File

@@ -1,28 +0,0 @@
from datetime import datetime
from mev_inspect.schemas.blocks import Block
def delete_block(
db_session,
block_number: int,
) -> None:
db_session.execute(
"DELETE FROM blocks WHERE block_number = :block_number",
params={"block_number": block_number},
)
db_session.commit()
def write_block(
db_session,
block: Block,
) -> None:
db_session.execute(
"INSERT INTO blocks (block_number, block_timestamp) VALUES (:block_number, :block_timestamp)",
params={
"block_number": block.block_number,
"block_timestamp": datetime.fromtimestamp(block.block_timestamp),
},
)
db_session.commit()

View File

@@ -1,8 +1,8 @@
import json
from typing import List
from mev_inspect.models.traces import ClassifiedTraceModel
from mev_inspect.schemas.traces import ClassifiedTrace
from mev_inspect.models.classified_traces import ClassifiedTraceModel
from mev_inspect.schemas.classified_traces import ClassifiedTrace
def delete_classified_traces_for_block(
@@ -28,7 +28,6 @@ def write_classified_traces(
models.append(
ClassifiedTraceModel(
transaction_hash=trace.transaction_hash,
transaction_position=trace.transaction_position,
block_number=trace.block_number,
classification=trace.classification.value,
trace_type=trace.type.value,

View File

@@ -1,17 +0,0 @@
from typing import List
from sqlalchemy.dialects.postgresql import insert
from mev_inspect.models.prices import PriceModel
from mev_inspect.schemas.prices import Price
def write_prices(db_session, prices: List[Price]) -> None:
insert_statement = (
insert(PriceModel.__table__)
.values([price.dict() for price in prices])
.on_conflict_do_nothing()
)
db_session.execute(insert_statement)
db_session.commit()

View File

@@ -1,85 +0,0 @@
import json
from typing import List
from mev_inspect.models.punks import (
PunkBidAcceptanceModel,
PunkBidModel,
PunkSnipeModel,
)
from mev_inspect.schemas.punk_accept_bid import PunkBidAcceptance
from mev_inspect.schemas.punk_bid import PunkBid
from mev_inspect.schemas.punk_snipe import PunkSnipe
def delete_punk_bid_acceptances_for_block(
db_session,
block_number: int,
) -> None:
(
db_session.query(PunkBidAcceptanceModel)
.filter(PunkBidAcceptanceModel.block_number == block_number)
.delete()
)
db_session.commit()
def write_punk_bid_acceptances(
db_session,
punk_bid_acceptances: List[PunkBidAcceptance],
) -> None:
models = [
PunkBidAcceptanceModel(**json.loads(punk_bid_acceptance.json()))
for punk_bid_acceptance in punk_bid_acceptances
]
db_session.bulk_save_objects(models)
db_session.commit()
def delete_punk_bids_for_block(
db_session,
block_number: int,
) -> None:
(
db_session.query(PunkBidModel)
.filter(PunkBidModel.block_number == block_number)
.delete()
)
db_session.commit()
def write_punk_bids(
db_session,
punk_bids: List[PunkBid],
) -> None:
models = [PunkBidModel(**json.loads(punk_bid.json())) for punk_bid in punk_bids]
db_session.bulk_save_objects(models)
db_session.commit()
def delete_punk_snipes_for_block(
db_session,
block_number: int,
) -> None:
(
db_session.query(PunkSnipeModel)
.filter(PunkSnipeModel.block_number == block_number)
.delete()
)
db_session.commit()
def write_punk_snipes(
db_session,
punk_snipes: List[PunkSnipe],
) -> None:
models = [
PunkSnipeModel(**json.loads(punk_snipe.json())) for punk_snipe in punk_snipes
]
db_session.bulk_save_objects(models)
db_session.commit()

View File

@@ -1,64 +0,0 @@
from typing import List
from uuid import uuid4
from mev_inspect.models.sandwiches import SandwichModel
from mev_inspect.schemas.sandwiches import Sandwich
def delete_sandwiches_for_block(
db_session,
block_number: int,
) -> None:
(
db_session.query(SandwichModel)
.filter(SandwichModel.block_number == block_number)
.delete()
)
db_session.commit()
def write_sandwiches(
db_session,
sandwiches: List[Sandwich],
) -> None:
sandwich_models = []
sandwiched_swaps = []
for sandwich in sandwiches:
sandwich_id = str(uuid4())
sandwich_models.append(
SandwichModel(
id=sandwich_id,
block_number=sandwich.block_number,
sandwicher_address=sandwich.sandwicher_address,
frontrun_swap_transaction_hash=sandwich.frontrun_swap.transaction_hash,
frontrun_swap_trace_address=sandwich.frontrun_swap.trace_address,
backrun_swap_transaction_hash=sandwich.backrun_swap.transaction_hash,
backrun_swap_trace_address=sandwich.backrun_swap.trace_address,
)
)
for swap in sandwich.sandwiched_swaps:
sandwiched_swaps.append(
{
"sandwich_id": sandwich_id,
"block_number": swap.block_number,
"transaction_hash": swap.transaction_hash,
"trace_address": swap.trace_address,
}
)
if len(sandwich_models) > 0:
db_session.bulk_save_objects(sandwich_models)
db_session.execute(
"""
INSERT INTO sandwiched_swaps
(sandwich_id, block_number, transaction_hash, trace_address)
VALUES
(:sandwich_id, :block_number, :transaction_hash, :trace_address)
""",
params=sandwiched_swaps,
)
db_session.commit()
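
Note the two-step write above: `bulk_save_objects` for the parent rows, then a single textual INSERT whose `params` is a list of dicts, which SQLAlchemy executes once per element (executemany). A minimal sketch of that second step, assuming `get_session` from `mev_inspect.db` and an existing `sandwiched_swaps` table; all row values are illustrative:

```
from mev_inspect.db import get_session

rows = [
    {"sandwich_id": "id-1", "block_number": 1, "transaction_hash": "0xaa", "trace_address": [0]},
    {"sandwich_id": "id-2", "block_number": 1, "transaction_hash": "0xbb", "trace_address": [1]},
]

db_session = get_session()
db_session.execute(
    """
    INSERT INTO sandwiched_swaps
    (sandwich_id, block_number, transaction_hash, trace_address)
    VALUES
    (:sandwich_id, :block_number, :transaction_hash, :trace_address)
    """,
    params=rows,  # a list of dicts triggers executemany
)
db_session.commit()
```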

View File

@@ -1,23 +1,10 @@
import os
from typing import Optional
from sqlalchemy import create_engine, orm
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
def get_trace_database_uri() -> Optional[str]:
username = os.getenv("TRACE_DB_USER")
password = os.getenv("TRACE_DB_PASSWORD")
host = os.getenv("TRACE_DB_HOST")
db_name = "trace_db"
if all(field is not None for field in [username, password, host]):
return f"postgresql://{username}:{password}@{host}/{db_name}"
return None
def get_inspect_database_uri():
def get_sqlalchemy_database_uri():
username = os.getenv("POSTGRES_USER")
password = os.getenv("POSTGRES_PASSWORD")
host = os.getenv("POSTGRES_HOST")
@@ -25,24 +12,10 @@ def get_inspect_database_uri():
return f"postgresql://{username}:{password}@{host}/{db_name}"
def _get_engine(uri: str):
return create_engine(uri)
def get_engine():
return create_engine(get_sqlalchemy_database_uri())
def _get_session(uri: str):
Session = sessionmaker(bind=_get_engine(uri))
def get_session():
Session = sessionmaker(bind=get_engine())
return Session()
def get_inspect_session() -> orm.Session:
uri = get_inspect_database_uri()
return _get_session(uri)
def get_trace_session() -> Optional[orm.Session]:
uri = get_trace_database_uri()
if uri is not None:
return _get_session(uri)
return None
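
A usage sketch for the simplified module, assuming the three `POSTGRES_*` environment variables are set; the values below are illustrative:

```
import os

# illustrative values; real deployments provide these via the environment
os.environ.setdefault("POSTGRES_USER", "postgres")
os.environ.setdefault("POSTGRES_PASSWORD", "password")
os.environ.setdefault("POSTGRES_HOST", "localhost")

db_session = get_session()
try:
    print(db_session.execute("SELECT 1").scalar())
finally:
    db_session.close()
```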

View File

@@ -1,16 +1,14 @@
from typing import Dict, Optional
import eth_utils.abi
from hexbytes import HexBytes
from eth_abi import decode_abi
from eth_abi.exceptions import InsufficientDataBytes, NonEmptyPaddingBytes
from hexbytes._utils import hexstr_to_bytes
from mev_inspect.schemas.abi import ABI, ABIFunctionDescription
from mev_inspect.schemas.call_data import CallData
# 0x + 8 characters
SELECTOR_LENGTH = 10
class ABIDecoder:
def __init__(self, abi: ABI):
@@ -21,7 +19,8 @@ class ABIDecoder:
}
def decode(self, data: str) -> Optional[CallData]:
selector, params = data[:SELECTOR_LENGTH], data[SELECTOR_LENGTH:]
hex_data = HexBytes(data)
selector, params = hex_data[:4], hex_data[4:]
func = self._functions_by_selector.get(selector)
@@ -37,7 +36,7 @@ class ABIDecoder:
]
try:
decoded = decode_abi(types, hexstr_to_bytes(params))
decoded = decode_abi(types, params)
except (InsufficientDataBytes, NonEmptyPaddingBytes):
return None
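
The rewritten `decode` splits on raw bytes rather than hex characters. A self-contained sketch of that selector math, using the well-known ERC-20 `transfer(address,uint256)` selector:

```
from hexbytes import HexBytes

data = HexBytes(
    "0xa9059cbb"  # transfer(address,uint256) selector
    "000000000000000000000000deadbeefdeadbeefdeadbeefdeadbeefdeadbeef"
    "0000000000000000000000000000000000000000000000000000000000000001"
)

selector, params = data[:4], data[4:]
assert selector == HexBytes("0xa9059cbb")
assert len(params) == 64  # two 32-byte ABI words
```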

View File

@@ -1,10 +1,9 @@
from web3 import Web3
async def fetch_base_fee_per_gas(w3: Web3, block_number: int) -> int:
base_fees = await w3.eth.fee_history(1, block_number)
base_fees_per_gas = base_fees["baseFeePerGas"]
if len(base_fees_per_gas) == 0:
def fetch_base_fee_per_gas(w3: Web3, block_number: int) -> int:
base_fees = w3.eth.fee_history(1, block_number)["baseFeePerGas"]
if len(base_fees) == 0:
raise RuntimeError("Unexpected error - no fees returned")
return base_fees_per_gas[0]
return base_fees[0]
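
A usage sketch for the now-synchronous helper, assuming a node that supports `eth_feeHistory` (EIP-1559) and an illustrative `RPC_URL` environment variable:

```
import os

from web3 import Web3

w3 = Web3(Web3.HTTPProvider(os.environ["RPC_URL"]))  # RPC_URL is illustrative
print(fetch_base_fee_per_gas(w3, w3.eth.block_number))
```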

View File

@@ -1,129 +1,101 @@
import logging
from typing import Optional
from sqlalchemy import orm
from web3 import Web3
from mev_inspect.arbitrages import get_arbitrages
from mev_inspect.block import create_from_block_number
from mev_inspect.classifiers.trace import TraceClassifier
from mev_inspect.crud.arbitrages import delete_arbitrages_for_block, write_arbitrages
from mev_inspect.crud.blocks import delete_block, write_block
from mev_inspect.crud.liquidations import (
delete_liquidations_for_block,
write_liquidations,
from mev_inspect.crud.arbitrages import (
delete_arbitrages_for_block,
write_arbitrages,
)
from mev_inspect.crud.classified_traces import (
delete_classified_traces_for_block,
write_classified_traces,
)
from mev_inspect.crud.miner_payments import (
delete_miner_payments_for_block,
write_miner_payments,
)
from mev_inspect.crud.punks import (
delete_punk_bid_acceptances_for_block,
delete_punk_bids_for_block,
delete_punk_snipes_for_block,
write_punk_bid_acceptances,
write_punk_bids,
write_punk_snipes,
)
from mev_inspect.crud.sandwiches import delete_sandwiches_for_block, write_sandwiches
from mev_inspect.crud.swaps import delete_swaps_for_block, write_swaps
from mev_inspect.crud.traces import (
delete_classified_traces_for_block,
write_classified_traces,
)
from mev_inspect.crud.transfers import delete_transfers_for_block, write_transfers
from mev_inspect.liquidations import get_liquidations
from mev_inspect.crud.liquidations import (
delete_liquidations_for_block,
write_liquidations,
)
from mev_inspect.crud.atomicmatch import (
delete_atomicmatch_for_block,
write_atomicmatch,
)
from mev_inspect.miner_payments import get_miner_payments
from mev_inspect.punks import get_punk_bid_acceptances, get_punk_bids, get_punk_snipes
from mev_inspect.sandwiches import get_sandwiches
from mev_inspect.swaps import get_swaps
from mev_inspect.transfers import get_transfers
from mev_inspect.liquidations import get_liquidations
logger = logging.getLogger(__name__)
async def inspect_block(
inspect_db_session: orm.Session,
def inspect_block(
db_session,
base_provider,
w3: Web3,
trace_classifier: TraceClassifier,
block_number: int,
trace_db_session: Optional[orm.Session],
should_cache: bool,
should_write_classified_traces: bool = True,
should_write_swaps: bool = True,
should_write_transfers: bool = True,
should_write_arbitrages: bool = True,
should_write_liquidations: bool = True,
should_write_miner_payments: bool = True,
):
block = await create_from_block_number(
base_provider,
w3,
block_number,
trace_db_session,
)
block = create_from_block_number(base_provider, w3, block_number, should_cache)
logger.info(f"Block: {block_number} -- Total traces: {len(block.traces)}")
delete_block(inspect_db_session, block_number)
write_block(inspect_db_session, block)
logger.info(f"Total traces: {len(block.traces)}")
total_transactions = len(
set(t.transaction_hash for t in block.traces if t.transaction_hash is not None)
)
logger.info(f"Block: {block_number} -- Total transactions: {total_transactions}")
logger.info(f"Total transactions: {total_transactions}")
classified_traces = trace_classifier.classify(block.traces)
logger.info(
f"Block: {block_number} -- Returned {len(classified_traces)} classified traces"
)
trace_classifier = TraceClassifier()
classified_traces = trace_classifier.classify(block.traces)
logger.info(f"Returned {len(classified_traces)} classified traces")
if should_write_classified_traces:
delete_classified_traces_for_block(inspect_db_session, block_number)
write_classified_traces(inspect_db_session, classified_traces)
delete_classified_traces_for_block(db_session, block_number)
write_classified_traces(db_session, classified_traces)
transfers = get_transfers(classified_traces)
logger.info(f"Block: {block_number} -- Found {len(transfers)} transfers")
delete_transfers_for_block(inspect_db_session, block_number)
write_transfers(inspect_db_session, transfers)
if should_write_transfers:
delete_transfers_for_block(db_session, block_number)
write_transfers(db_session, transfers)
swaps = get_swaps(classified_traces)
logger.info(f"Block: {block_number} -- Found {len(swaps)} swaps")
logger.info(f"Found {len(swaps)} swaps")
delete_swaps_for_block(inspect_db_session, block_number)
write_swaps(inspect_db_session, swaps)
if should_write_swaps:
delete_swaps_for_block(db_session, block_number)
write_swaps(db_session, swaps)
arbitrages = get_arbitrages(swaps)
logger.info(f"Block: {block_number} -- Found {len(arbitrages)} arbitrages")
logger.info(f"Found {len(arbitrages)} arbitrages")
delete_arbitrages_for_block(inspect_db_session, block_number)
write_arbitrages(inspect_db_session, arbitrages)
if should_write_arbitrages:
delete_arbitrages_for_block(db_session, block_number)
write_arbitrages(db_session, arbitrages)
liquidations = get_liquidations(classified_traces)
logger.info(f"Block: {block_number} -- Found {len(liquidations)} liquidations")
liquidations = get_liquidations(classified_traces, w3)
logger.info(f"Found {len(liquidations)} liquidations")
delete_liquidations_for_block(inspect_db_session, block_number)
write_liquidations(inspect_db_session, liquidations)
sandwiches = get_sandwiches(swaps)
logger.info(f"Block: {block_number} -- Found {len(sandwiches)} sandwiches")
delete_sandwiches_for_block(inspect_db_session, block_number)
write_sandwiches(inspect_db_session, sandwiches)
punk_bids = get_punk_bids(classified_traces)
delete_punk_bids_for_block(inspect_db_session, block_number)
write_punk_bids(inspect_db_session, punk_bids)
punk_bid_acceptances = get_punk_bid_acceptances(classified_traces)
delete_punk_bid_acceptances_for_block(inspect_db_session, block_number)
write_punk_bid_acceptances(inspect_db_session, punk_bid_acceptances)
punk_snipes = get_punk_snipes(punk_bids, punk_bid_acceptances)
logger.info(f"Block: {block_number} -- Found {len(punk_snipes)} punk snipes")
delete_punk_snipes_for_block(inspect_db_session, block_number)
write_punk_snipes(inspect_db_session, punk_snipes)
if should_write_liquidations:
delete_liquidations_for_block(db_session, block_number)
write_liquidations(db_session, liquidations)
miner_payments = get_miner_payments(
block.miner, block.base_fee_per_gas, classified_traces, block.receipts
)
delete_miner_payments_for_block(inspect_db_session, block_number)
write_miner_payments(inspect_db_session, miner_payments)
if should_write_miner_payments:
delete_miner_payments_for_block(db_session, block_number)
write_miner_payments(db_session, miner_payments)
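
A sketch of driving the synchronous `inspect_block`, assuming `get_base_provider` accepts a bare RPC URL as in the removed `MEVInspector` below, and `get_session` from `mev_inspect.db`; the RPC endpoint and block number are illustrative:

```
from web3 import Web3

from mev_inspect.db import get_session
from mev_inspect.provider import get_base_provider

base_provider = get_base_provider("http://localhost:8545")  # illustrative RPC
w3 = Web3(base_provider)
db_session = get_session()

inspect_block(
    db_session,
    base_provider,
    w3,
    block_number=13_416_000,  # illustrative
    should_cache=False,
    should_write_transfers=False,  # the should_write_* flags default to True
)
```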

View File

@@ -1,79 +0,0 @@
import asyncio
import logging
import traceback
from asyncio import CancelledError
from typing import Optional
from sqlalchemy import orm
from web3 import Web3
from web3.eth import AsyncEth
from mev_inspect.block import create_from_block_number
from mev_inspect.classifiers.trace import TraceClassifier
from mev_inspect.inspect_block import inspect_block
from mev_inspect.provider import get_base_provider
logger = logging.getLogger(__name__)
class MEVInspector:
def __init__(
self,
rpc: str,
inspect_db_session: orm.Session,
trace_db_session: Optional[orm.Session],
max_concurrency: int = 1,
request_timeout: int = 300,
):
self.inspect_db_session = inspect_db_session
self.trace_db_session = trace_db_session
self.base_provider = get_base_provider(rpc, request_timeout=request_timeout)
self.w3 = Web3(self.base_provider, modules={"eth": (AsyncEth,)}, middlewares=[])
self.trace_classifier = TraceClassifier()
self.max_concurrency = asyncio.Semaphore(max_concurrency)
async def create_from_block(self, block_number: int):
return await create_from_block_number(
base_provider=self.base_provider,
w3=self.w3,
block_number=block_number,
trace_db_session=self.trace_db_session,
)
async def inspect_single_block(self, block: int):
return await inspect_block(
self.inspect_db_session,
self.base_provider,
self.w3,
self.trace_classifier,
block,
trace_db_session=self.trace_db_session,
)
async def inspect_many_blocks(self, after_block: int, before_block: int):
tasks = []
for block_number in range(after_block, before_block):
tasks.append(
asyncio.ensure_future(
self.safe_inspect_block(block_number=block_number)
)
)
logger.info(f"Gathered {len(tasks)} blocks to inspect")
try:
await asyncio.gather(*tasks)
except CancelledError:
logger.info("Requested to exit, cleaning up...")
except Exception as e:
logger.error(f"Existed due to {type(e)}")
traceback.print_exc()
async def safe_inspect_block(self, block_number: int):
async with self.max_concurrency:
return await inspect_block(
self.inspect_db_session,
self.base_provider,
self.w3,
self.trace_classifier,
block_number,
trace_db_session=self.trace_db_session,
)

View File

@@ -1,22 +1,19 @@
from typing import List
from web3 import Web3
from mev_inspect.aave_liquidations import get_aave_liquidations
from mev_inspect.compound_liquidations import get_compound_liquidations
from mev_inspect.compound_liquidations import (
get_compound_liquidations,
fetch_all_comp_markets,
)
from mev_inspect.schemas.classified_traces import ClassifiedTrace
from mev_inspect.schemas.liquidations import Liquidation
from mev_inspect.schemas.traces import Classification, ClassifiedTrace
def has_liquidations(classified_traces: List[ClassifiedTrace]) -> bool:
liquidations_exist = False
for classified_trace in classified_traces:
if classified_trace.classification == Classification.liquidate:
liquidations_exist = True
return liquidations_exist
def get_liquidations(
classified_traces: List[ClassifiedTrace],
classified_traces: List[ClassifiedTrace], w3: Web3
) -> List[Liquidation]:
aave_liquidations = get_aave_liquidations(classified_traces)
comp_liquidations = get_compound_liquidations(classified_traces)
return aave_liquidations + comp_liquidations
comp_markets = fetch_all_comp_markets(w3)
compound_liquidations = get_compound_liquidations(classified_traces, comp_markets)
return aave_liquidations + compound_liquidations
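
`get_liquidations` now needs a live `Web3` handle because `fetch_all_comp_markets` reads Compound's markets on-chain. A hedged sketch, with an empty trace list standing in for real classifier output and an illustrative RPC endpoint:

```
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))  # illustrative RPC
classified_traces = []  # would come from TraceClassifier().classify(block.traces)
liquidations = get_liquidations(classified_traces, w3)
for liquidation in liquidations:
    print(liquidation.protocol, liquidation.transaction_hash)
```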

View File

@@ -1,10 +1,13 @@
from typing import List
from mev_inspect.schemas.classified_traces import ClassifiedTrace
from mev_inspect.schemas.miner_payments import MinerPayment
from mev_inspect.schemas.receipts import Receipt
from mev_inspect.schemas.traces import ClassifiedTrace
from mev_inspect.traces import get_traces_by_transaction_hash
from mev_inspect.transfers import filter_transfers, get_eth_transfers
from mev_inspect.transfers import (
filter_transfers,
get_eth_transfers,
)
def get_miner_payments(

View File

@@ -0,0 +1,17 @@
from sqlalchemy import Column, Numeric, String, Integer, ARRAY
from .base import Base
class AtomicMatchModel(Base):
__tablename__ = "atomic_match"
block_number = Column(Numeric, nullable=False)
transaction_hash = Column(String, primary_key=True)
protocol = Column(String, nullable=True)
from_address = Column(String, nullable=False)
to_address = Column(String, nullable=False)
token_address = Column(String, nullable=False)
amount = Column(Numeric, nullable=False)
token_metadata = Column("metadata", ARRAY(String), nullable=False)  # "metadata" is reserved on declarative models; map the column name explicitly
error = Column(String, nullable=True)
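
A sketch of persisting one row of the new model, assuming `get_session` from `mev_inspect.db`; every field value here is illustrative. Note that `token_metadata` maps to the `metadata` column, since that attribute name is reserved on declarative models:

```
from mev_inspect.db import get_session

db_session = get_session()
row = AtomicMatchModel(
    block_number=13_416_000,   # illustrative
    transaction_hash="0xabc",  # illustrative
    protocol="opensea",
    from_address="0xseller",   # illustrative
    to_address="0xbuyer",      # illustrative
    token_address="0xtoken",   # illustrative
    amount=1,
    token_metadata=["..."],    # attribute for the "metadata" column
    error=None,
)
db_session.add(row)
db_session.commit()
```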

View File

@@ -1,4 +1,4 @@
from sqlalchemy import ARRAY, JSON, Column, Integer, Numeric, String
from sqlalchemy import Column, JSON, Numeric, String, ARRAY, Integer
from .base import Base
@@ -7,7 +7,6 @@ class ClassifiedTraceModel(Base):
__tablename__ = "classified_traces"
transaction_hash = Column(String, primary_key=True)
transaction_position = Column(Numeric, nullable=True)
block_number = Column(Numeric, nullable=False)
classification = Column(String, nullable=False)
trace_type = Column(String, nullable=False)

View File

@@ -1,4 +1,4 @@
from sqlalchemy import ARRAY, Column, Integer, Numeric, String
from sqlalchemy import Column, Numeric, String, ARRAY, Integer
from .base import Base
@@ -8,10 +8,10 @@ class LiquidationModel(Base):
liquidated_user = Column(String, nullable=False)
liquidator_user = Column(String, nullable=False)
collateral_token_address = Column(String, nullable=False)
debt_token_address = Column(String, nullable=False)
debt_purchase_amount = Column(Numeric, nullable=False)
received_amount = Column(Numeric, nullable=False)
received_token_address = Column(String, nullable=False)
protocol = Column(String, nullable=True)
transaction_hash = Column(String, primary_key=True)
trace_address = Column(ARRAY(Integer), primary_key=True)

Some files were not shown because too many files have changed in this diff.