Available datasets
Name | Tag | Entry count (k) | Size (MBytes) | Format | Last updated | Links |
---|---|---|---|---|---|---|
Decentralised exchanges | exchange_universe | 5 | 3 | JSON | | Documentation, Download |
Trading pairs | pair_universe | 194 | 24 | Parquet | | Documentation, Download |
OHLCV candles, 1 minute | candles_1m | 520,875 | 29,779 | Parquet | | Documentation, Download |
OHLCV candles, 5 minutes | candles_5m | 311,157 | 17,908 | Parquet | | Documentation, Download |
OHLCV candles, 15 minutes | candles_15m | 197,820 | 11,498 | Parquet | | Documentation, Download |
OHLCV candles, 1 hour | candles_1h | 99,876 | 5,868 | Parquet | | Documentation, Download |
OHLCV candles, 4 hours | candles_4h | 44,792 | 2,662 | Parquet | | Documentation, Download |
OHLCV candles, daily | candles_1d | 13,648 | 836 | Parquet | | Documentation, Download |
OHLCV candles, weekly | candles_7d | 3,266 | 209 | Parquet | | Documentation, Download |
OHLCV candles, monthly | candles_30d | 1,142 | 76 | Parquet | | Documentation, Download |
XY Liquidity, 1 minute | liquidity_1m | 536,314 | 21,077 | Parquet | | Documentation, Download |
XY Liquidity, 5 minutes | liquidity_5m | 317,531 | 12,441 | Parquet | | Documentation, Download |
XY Liquidity, 15 minutes | liquidity_15m | 201,196 | 7,938 | Parquet | | Documentation, Download |
XY Liquidity, 1 hour | liquidity_1h | 101,644 | 4,040 | Parquet | | Documentation, Download |
XY Liquidity, 4 hours | liquidity_4h | 45,967 | 1,849 | Parquet | | Documentation, Download |
XY Liquidity, daily | liquidity_1d | 14,319 | 603 | Parquet | | Documentation, Download |
XY Liquidity, weekly | liquidity_7d | 3,528 | 162 | Parquet | | Documentation, Download |
XY Liquidity, monthly | liquidity_30d | 1,244 | 62 | Parquet | | Documentation, Download |
Top momentum, daily | top_movers_24h | 0.371 | 0.466 | JSON | | Documentation, Download |
AAVE v3 supply and borrow rates | aave_v3 | 4,773 | 425 | Parquet | | Documentation, Download |
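
Each dataset is identified by its tag, which is what you reference when requesting a download. The snippet below is only a sketch of a download-and-cache step: the endpoint URL, query parameter, and API key environment variable are assumptions for illustration, not the documented API. Use the dataset's Download link for the real URL and authentication scheme.

```python
import os
import requests

# Hypothetical endpoint and credentials -- replace with the values
# from the dataset's "Download" link and your own API key handling.
DOWNLOAD_URL = "https://example.com/datasets/download"
API_KEY = os.environ["TRADING_DATA_API_KEY"]  # assumed environment variable name

def download_dataset(tag: str, cache_dir: str) -> str:
    """Download a dataset by tag and cache it on local disk."""
    os.makedirs(cache_dir, exist_ok=True)
    path = os.path.join(cache_dir, f"{tag}.parquet")
    if os.path.exists(path):
        return path  # already cached, skip the download

    resp = requests.get(
        DOWNLOAD_URL,
        params={"tag": tag},  # assumed query parameter name
        headers={"Authorization": f"Bearer {API_KEY}"},
        stream=True,
        timeout=300,
    )
    resp.raise_for_status()

    # Stream the response to disk in 1 MB chunks to avoid holding
    # a multi-gigabyte file in memory during the download.
    with open(path, "wb") as f:
        for chunk in resp.iter_content(chunk_size=1 << 20):
            f.write(chunk)
    return path

candles_path = download_dataset("candles_1d", "/tmp/dataset-cache")
```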
Data logistics
Datasets are distributed primarily in the Parquet file format, which is designed for data research. Parquet is a columnar data format for high-performance in-memory analytics from the Apache Arrow project.
Datasets are large. They are compressed using Parquet's built-in Snappy compression and may be considerably larger when expanded into RAM. We expect you to download the dataset, cache the resulting file on a local disk, and perform your own strategy-specific trading pair filtering before using the data. Uncompressed one-minute candle data takes several gigabytes of memory.
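
Because the uncompressed data can run to several gigabytes, one way to keep memory use under control is to filter the cached Parquet file down to the trading pairs your strategy actually needs before converting it to a pandas DataFrame. The sketch below uses pyarrow; the `pair_id` column name and the example pair ids are assumptions about the dataset schema, so check the column names in the dataset documentation first.

```python
import pyarrow.parquet as pq

# Pair ids your strategy trades -- hypothetical values for illustration.
wanted_pair_ids = {1, 2, 3}

# Read only the rows matching the wanted pairs instead of loading the
# whole multi-gigabyte dataset. The "pair_id" column name is an
# assumption about the schema.
table = pq.read_table(
    "/tmp/dataset-cache/candles_1d.parquet",
    filters=[("pair_id", "in", list(wanted_pair_ids))],
)

df = table.to_pandas()
print(f"Loaded {len(df):,} candle rows for {len(wanted_pair_ids)} pairs")
```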