Memecoin Equal Weighted Index (Base) beta
Equally-weighted index for memecoins
Source code
The source code of the Memecoin Equal Weighted Index (Base) strategy
"""Base equally-weightd memecoin index, full variant for Lagoon..
Visit https://tradingstrategy.ai/community to ask any questions
"""
import datetime
import numpy as np
import pandas as pd
from tradeexecutor.state.trade import TradeExecution
from tradeexecutor.state.types import USDollarAmount
from tradeexecutor.strategy.alpha_model import AlphaModel
from tradeexecutor.strategy.cycle import CycleDuration
from tradeexecutor.strategy.default_routing_options import TradeRouting
from tradeexecutor.strategy.execution_context import ExecutionContext
from tradeexecutor.strategy.pandas_trader.indicator import IndicatorDependencyResolver
from tradeexecutor.strategy.pandas_trader.indicator import IndicatorSource
from tradeexecutor.strategy.pandas_trader.indicator_decorator import IndicatorRegistry
from tradeexecutor.strategy.pandas_trader.strategy_input import StrategyInput
from tradeexecutor.strategy.parameters import StrategyParameters
from tradeexecutor.strategy.tag import StrategyTag
from tradeexecutor.strategy.trading_strategy_universe import TradingStrategyUniverse
from tradeexecutor.strategy.trading_strategy_universe import (
load_partial_data)
from tradeexecutor.strategy.tvl_size_risk import USDTVLSizeRiskModel
from tradeexecutor.strategy.universe_model import UniverseOptions
from tradeexecutor.strategy.weighting import weight_equal
from tradingstrategy.alternative_data.coingecko import CoingeckoUniverse, categorise_pairs
from tradingstrategy.chain import ChainId
from tradingstrategy.client import Client
from tradingstrategy.pair import PandasPairUniverse
from tradingstrategy.timebucket import TimeBucket
from tradingstrategy.transport.cache import OHLCVCandleType
from tradingstrategy.utils.forward_fill import forward_fill
from tradingstrategy.utils.token_extra_data import filter_scams
from tradingstrategy.utils.token_filter import deduplicate_pairs_by_volume
from tradeexecutor.state.identifier import TradingPairIdentifier
from tradeexecutor.utils.dedent import dedent_any
#
# Strategy parameters
#
trading_strategy_engine_version = "0.5"
class Parameters:
# We trade on 1h candles
candle_time_bucket = TimeBucket.h1
cycle_duration = CycleDuration.cycle_4h
# Coingecko categories to include
#
# See list here: TODO
#
chain_id = ChainId.base
categories = {"Meme"}
exchanges = {"uniswap-v2", "uniswap-v3"}
#
# Basket construction and rebalance parameters
#
min_asset_universe = 10 # How many assets we need in the asset universe to start running the index
max_assets_in_portfolio = 99 # How many assets our basket can hold at once
allocation = 0.90 # Allocate 90% of total equity to the index positions
min_volatility_threshold = 0.02 # Minimum volatility threshold for index inclusion
per_position_cap_of_pool = 0.01 # Never own more than this share of the lit liquidity of the trading pool
max_concentration = 0.20 # How large % one asset can be in the portfolio at once
min_portfolio_weight = 0.0050 # Close position / do not open if weight is less than 50 BPS
#
# Rebalance thresholds
#
portfolio_rebalance_threshold_usd = 75.0
individual_rebalance_min_threshold_usd = 75.0 # Don't make buys less than this amount
sell_rebalance_min_threshold_usd = 25.0
#
# Inclusion criteria parameters:
# - We set the length of various indicators used in the inclusion criteria
# - We set minimum thresholds needed to be included in the index to filter out illiquid pairs
#
# For the length of trailing sharpe used in inclusion criteria
trailing_sharpe_bars = pd.Timedelta("14d") // candle_time_bucket.to_timedelta() # How many bars to use in trailing sharpe indicator
rebalance_volalitity_bars = pd.Timedelta("14d") // candle_time_bucket.to_timedelta() # How many bars to use in volatility indicator
rolling_volume_bars = pd.Timedelta("7d") // candle_time_bucket.to_timedelta()
rolling_liquidity_bars = pd.Timedelta("7d") // candle_time_bucket.to_timedelta()
ewm_span = 200 # How many bars to use in exponential moving average for trailing sharpe smoothing
tvl_ewm_span = 200 # How many bars to use in EWM smoothing of TVLs
min_volume = 50_000 # USD
min_liquidity = 200_000 # USD
min_tvl = 25_000 # USD
min_token_sniffer_score = 30 # Scam filter
#
# Backtesting only
#
backtest_start = datetime.datetime(2024, 3, 1)
backtest_end = datetime.datetime(2025, 1, 3)
initial_cash = 10_000
#
# Live only
#
routing = TradeRouting.default
required_history_period = datetime.timedelta(days=2 * 14 + 1)
slippage_tolerance = 0.0060 # 0.6%
assummed_liquidity_when_data_missings = 10_000
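# Illustrative arithmetic (a sketch, not used by the strategy): the window
# parameters above convert calendar windows to bar counts with integer division.
# With 1h candles:
#
#   pd.Timedelta("14d") // TimeBucket.h1.to_timedelta()  # == 336 bars
#   pd.Timedelta("7d") // TimeBucket.h1.to_timedelta()   # == 168 bars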
#: Assets used in routing and buy-and-hold benchmark values for our strategy, but not traded by this strategy.
SUPPORTING_PAIRS = [
(ChainId.base, "uniswap-v2", "WETH", "USDC", 0.0030),
(ChainId.base, "uniswap-v3", "WETH", "USDC", 0.0005),
(ChainId.base, "uniswap-v3", "cbBTC", "WETH", 0.0030), # Only trading since October
]
# Will be converted to cbBTC/ETH->USDC
VOL_PAIR = (ChainId.base, "uniswap-v2", "WETH", "USDC", 0.0030)
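# A minimal sketch (not used by the strategy) of how these human-readable
# (chain, dex, base, quote, fee) tuples resolve to concrete pairs, assuming a
# constructed PandasPairUniverse as in create_trading_universe() below:
def _example_resolve_pair_ids(pair_universe: PandasPairUniverse) -> list[int]:
    """Illustrative helper: resolve human pair descriptions to internal pair ids."""
    return [
        pair_universe.get_pair_by_human_description(desc).pair_id
        for desc in SUPPORTING_PAIRS
    ]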
def create_trading_universe(
timestamp: datetime.datetime,
client: Client,
execution_context: ExecutionContext,
universe_options: UniverseOptions,
) -> TradingStrategyUniverse:
"""Create the trading universe.
- Load Trading Strategy full pairs dataset
- Load built-in Coingecko top 1000 dataset
- Get all DEX tokens for a certain Coingecko category
- Load OHLCV data for these pairs
- Also load BTC and ETH price data to be used as benchmarks
"""
chain_id = Parameters.chain_id
categories = Parameters.categories
coingecko_universe = CoingeckoUniverse.load()
print("Coingecko universe is", coingecko_universe)
exchange_universe = client.fetch_exchange_universe()
pairs_df = client.fetch_pair_universe().to_pandas()
# Drop other chains to make the dataset smaller to work with
chain_mask = pairs_df["chain_id"] == Parameters.chain_id.value
pairs_df = pairs_df[chain_mask]
# Pull out our benchmark pairs ids.
# We need to construct pair universe object for the symbolic lookup.
pair_universe = PandasPairUniverse(pairs_df, exchange_universe=exchange_universe)
benchmark_pair_ids = [pair_universe.get_pair_by_human_description(desc).pair_id for desc in SUPPORTING_PAIRS]
# Assign categories to all pairs
category_df = categorise_pairs(coingecko_universe, pairs_df)
# Get all trading pairs that are memecoin, across all coingecko data
mask = category_df["category"].isin(categories)
category_pair_ids = category_df[mask]["pair_id"]
our_pair_ids = list(category_pair_ids) + benchmark_pair_ids
# From these pair ids, see what trading pairs we have on Base
pairs_df = pairs_df[pairs_df["pair_id"].isin(our_pair_ids)]
# Limit by DEX
pairs_df = pairs_df[pairs_df["exchange_slug"].isin(Parameters.exchanges)]
# Never deduplicate supporting pairs
supporting_pairs_df = pairs_df[pairs_df["pair_id"].isin(benchmark_pair_ids)]
# Deduplicate trading pairs - Choose the best pair with the best volume
deduplicated_df = deduplicate_pairs_by_volume(pairs_df)
pairs_df = pd.concat([deduplicated_df, supporting_pairs_df]).drop_duplicates(subset='pair_id', keep='first')
print(
f"Total {len(pairs_df)} pairs to trade on {chain_id.name} for categories {categories}",
)
# Scam filter using TokenSniffer
pairs_df = filter_scams(pairs_df, client, min_token_sniffer_score=Parameters.min_token_sniffer_score)
pairs_df = pairs_df.sort_values("volume", ascending=False)
uni_v2 = pairs_df.loc[pairs_df["exchange_slug"] == "uniswap-v2"]
uni_v3 = pairs_df.loc[pairs_df["exchange_slug"] == "uniswap-v3"]
print(f"Pairs on Uniswap v2: {len(uni_v2)}, Uniswap v3: {len(uni_v3)}")
dataset = load_partial_data(
client=client,
time_bucket=Parameters.candle_time_bucket,
pairs=pairs_df,
execution_context=execution_context,
universe_options=universe_options,
liquidity=True,
liquidity_time_bucket=TimeBucket.d1,
liquidity_query_type=OHLCVCandleType.tvl_v2,
)
strategy_universe = TradingStrategyUniverse.create_from_dataset(
dataset,
reserve_asset="0x833589fCD6eDb6E08f4c7C32D4f71b54bdA02913", # USDC on Base
forward_fill=True, # We get very gappy data from low-liquidity DEX coins
forward_fill_until=timestamp,
)
# Tag benchmark/routing pairs tokens so they can be separated from the rest of the tokens
# for the index construction.
strategy_universe.warm_up_data()
for pair_id in benchmark_pair_ids:
pair = strategy_universe.get_pair_by_id(pair_id)
pair.other_data["benchmark"] = True
print("Universe is (including benchmark pairs):")
for idx, pair in enumerate(strategy_universe.iterate_pairs()):
benchmark = pair.other_data.get("benchmark")
print(f" {idx + 1}. pair #{pair.internal_id}: {pair.base.token_symbol} - {pair.quote.token_symbol} ({pair.exchange_name}), {'benchmark/routed token' if benchmark else 'traded token'}")
return strategy_universe
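# Illustrative usage sketch (assumes a research notebook; the exact execution
# context object depends on your environment and is an assumption here, not
# part of this strategy):
#
#   client = Client.create_jupyter_client()
#   universe = create_trading_universe(
#       datetime.datetime.utcnow(),
#       client,
#       execution_context,  # e.g. a notebook ExecutionContext
#       UniverseOptions(history_period=Parameters.required_history_period),
#   )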
#
# Strategy logic
#
def decide_trades(
input: StrategyInput
) -> list[TradeExecution]:
"""For each strategy tick, generate the list of trades."""
parameters = input.parameters
position_manager = input.get_position_manager()
state = input.state
timestamp = input.timestamp
indicators = input.indicators
strategy_universe = input.strategy_universe
# Build signals for each pair
alpha_model = AlphaModel(
timestamp,
close_position_weight_epsilon=parameters.min_portfolio_weight, # 50 BPS is our min portfolio weight
)
# Prepare diagnostics variables
max_vol = (0, None)
signal_count = 0
vol_pair = strategy_universe.get_pair_by_human_description(VOL_PAIR)
volume_included_pair_count = indicators.get_indicator_value(
"volume_included_pair_count",
)
volatility_included_pair_count = indicators.get_indicator_value(
"volatility_included_pair_count",
)
tvl_included_pair_count = indicators.get_indicator_value(
"tvl_included_pair_count",
)
# Get pairs included in this rebalance cycle.
# This includes pairs that have been pre-cleared in inclusion_criteria()
# with volume, volatility and TVL filters
included_pairs = indicators.get_indicator_value(
"inclusion_criteria",
na_conversion=False,
)
if included_pairs is None:
included_pairs = []
# Set signal for each pair
for pair_id in included_pairs:
pair = strategy_universe.get_pair_by_id(pair_id)
weight = 1
alpha_model.set_signal(
pair,
weight,
)
# Diagnostics reporting
signal_count += 1
# Calculate how much dollar value we want each individual position to be on this strategy cycle,
# based on our total available equity
portfolio = position_manager.get_current_portfolio()
portfolio_target_value = portfolio.get_total_equity() * parameters.allocation - position_manager.get_pending_redemptions()
# Select max_assets_in_portfolio assets in which we are going to invest
# Calculate a weight for each asset in the portfolio using the 1/N method based on the raw signal
alpha_model.select_top_signals(count=parameters.max_assets_in_portfolio)
alpha_model.assign_weights(method=weight_equal)
# alpha_model.assign_weights(method=weight_by_1_slash_n)
#
# Normalise weights and cap the positions
#
size_risk_model = USDTVLSizeRiskModel(
pricing_model=input.pricing_model,
per_position_cap=parameters.per_position_cap_of_pool, # Max share of a pool's TVL we can allocate to a single position
missing_tvl_placeholder_usd=parameters.assummed_liquidity_when_data_missings, # Placeholder for missing TVL data until we get the data from the chain
)
alpha_model.normalise_weights(
investable_equity=portfolio_target_value,
size_risk_model=size_risk_model,
max_weight=parameters.max_concentration,
)
# Load in old weight for each trading pair signal,
# so we can calculate the adjustment trade size
alpha_model.update_old_weights(
state.portfolio,
ignore_credit=True,
)
alpha_model.calculate_target_positions(position_manager)
# Generate trades to rebalance the portfolio to the target shape
trades = alpha_model.generate_rebalance_trades_and_triggers(
position_manager,
min_trade_threshold=parameters.portfolio_rebalance_threshold_usd, # Don't bother with rebalances below this USD threshold
invidiual_rebalance_min_threshold=parameters.individual_rebalance_min_threshold_usd,
sell_rebalance_min_threshold=parameters.sell_rebalance_min_threshold_usd,
execution_context=input.execution_context,
)
# Add verbal report about decision made/not made,
# so it is much easier to diagnose live trade execution.
# This will be readable in Discord/Telegram logging.
if input.is_visualisation_enabled():
try:
top_signal = next(iter(alpha_model.get_signals_sorted_by_weight()))
if top_signal.normalised_weight == 0:
top_signal = None
except StopIteration:
top_signal = None
max_vol_pair = max_vol[1]
if max_vol_pair:
max_vol_signal = alpha_model.get_signal_by_pair(max_vol_pair)
else:
max_vol_signal = None
vol_pair_vol = indicators.get_indicator_value("volatility_ewm", pair=vol_pair)
rebalance_volume = sum(t.get_value() for t in trades)
report = dedent_any(f"""
Cycle: #{input.cycle} ({input.timestamp})
Rebalanced: {'👍' if alpha_model.is_rebalance_triggered() else '👎'}
Open/about to open positions: {len(state.portfolio.open_positions)}
Max position value change: {alpha_model.max_position_adjust_usd:,.2f} USD
Rebalance threshold: {alpha_model.position_adjust_threshold_usd:,.2f} USD
Trades decided: {len(trades)}
Pairs total: {strategy_universe.data_universe.pairs.get_count()}
Pairs meeting inclusion criteria: {len(included_pairs)}
Pairs meeting volume inclusion criteria: {volume_included_pair_count}
Pairs meeting volatility inclusion criteria: {volatility_included_pair_count}
Pairs meeting TVL inclusion criteria: {tvl_included_pair_count}
Signals created: {signal_count}
Total equity: {portfolio.get_total_equity():,.2f} USD
Cash: {position_manager.get_current_cash():,.2f} USD
Redemption queue: {position_manager.get_pending_redemptions():,.2f} USD
Investable equity: {alpha_model.investable_equity:,.2f} USD
Accepted investable equity: {alpha_model.accepted_investable_equity:,.2f} USD
Allocated to signals: {alpha_model.get_allocated_value():,.2f} USD
Discarded allocation because of lack of lit liquidity: {alpha_model.size_risk_discarded_value:,.2f} USD
Rebalance volume: {rebalance_volume:,.2f} USD
{vol_pair.base.token_symbol} volatility: {vol_pair_vol}
Most volatile pair: {max_vol_pair.get_ticker() if max_vol_pair else '-'}
Most volatile pair vol: {max_vol[0]}
Most volatile pair signal value: {max_vol_signal.signal if max_vol_signal else '-'}
Most volatile pair signal weight: {max_vol_signal.raw_weight if max_vol_signal else '-'}
""")
# Most volatility pair signal weight (normalised): {max_vol_signal.normalised_weight * 100 if max_vol_signal else '-'} % (got {max_vol_signal.position_size_risk.get_relative_capped_amount() * 100 if max_vol_signal else '-'} % of asked size)
if top_signal:
top_signal_vol = indicators.get_indicator_value("volatility_ewm", pair=top_signal.pair)
assert top_signal.position_size_risk
report += dedent_any(f"""
Top signal pair: {top_signal.pair.get_ticker()}
Top signal volatility: {top_signal_vol}
Top signal value: {top_signal.signal}
Top signal weight: {top_signal.raw_weight}
Top signal weight (normalised): {top_signal.normalised_weight * 100:.2f} % (got {top_signal.position_size_risk.get_relative_capped_amount() * 100:.2f} % of asked size)
""")
for flag, count in alpha_model.get_flag_diagnostics_data().items():
report += f"Signals with flag {flag.name}: {count}\n"
state.visualisation.add_message(
timestamp,
report,
)
state.visualisation.set_discardable_data("alpha_model", alpha_model)
return trades # Return the list of trades we made in this cycle
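# Illustrative arithmetic (a sketch, not executed): with 3 included pairs the
# equal weight is 1/3, but max_concentration = 0.20 caps each position at 20%.
# With allocation = 0.90 and 10,000 USD equity, each position then targets
# 10_000 * 0.90 * 0.20 = 1,800 USD and the remainder stays in cash.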
#
# Indicators
#
indicators = IndicatorRegistry()
@indicators.define()
def trailing_sharpe(
close: pd.Series,
trailing_sharpe_bars: int
) -> pd.Series:
"""Calculate trailing 30d or so returns / standard deviation.
:param length:
Trailing period.
:return:
Rolling cumulative returns / rolling standard deviation
Note that this trailing sharpe is not annualised.
"""
ann_factor = pd.Timedelta(days=365) / Parameters.candle_time_bucket.to_pandas_timedelta()
returns = close.pct_change()
mean_returns = returns.rolling(window=trailing_sharpe_bars).mean()
vol = returns.rolling(window=trailing_sharpe_bars).std()
return mean_returns / vol * np.sqrt(ann_factor)
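# Illustrative arithmetic: for 1h candles ann_factor == 365 * 24 == 8760,
# so the rolling Sharpe proxy above is scaled by sqrt(8760) ≈ 93.6.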
@indicators.define(dependencies=(trailing_sharpe,), source=IndicatorSource.dependencies_only_per_pair)
def trailing_sharpe_ewm(
trailing_sharpe_bars: int,
ewm_span: float,
pair: TradingPairIdentifier,
dependency_resolver: IndicatorDependencyResolver,
) -> pd.Series:
"""Expontentially weighted moving average for Sharpe.
:param ewm_span:
How many bars to consider in the EVM
"""
trailing_sharpe = dependency_resolver.get_indicator_data(
"trailing_sharpe",
pair=pair,
parameters={"trailing_sharpe_bars": trailing_sharpe_bars},
)
ewm = trailing_sharpe.ewm(span=ewm_span)
return ewm.mean()
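# Illustrative arithmetic: with ewm_span = 200 the weight of the newest
# observation is alpha = 2 / (200 + 1) ≈ 1.0%, decaying exponentially for
# older observations, so roughly the last few hundred bars dominate.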
@indicators.define()
def volatility(close: pd.Series, rebalance_volalitity_bars: int) -> pd.Series:
"""Calculate the rolling volatility for rebalancing the index for each decision cycle."""
price_diff = close.pct_change()
rolling_std = price_diff.rolling(window=rebalance_volalitity_bars).std()
return rolling_std
@indicators.define()
def volatility_ewm(close: pd.Series, rebalance_volalitity_bars: int) -> pd.Series:
"""Calculate the rolling volatility for rebalancing the index for each decision cycle."""
# Rolling std over the volatility window, then EWM-smoothed with span 14 * 8 = 112 bars (1h candles)
price_diff = close.pct_change()
rolling_std = price_diff.rolling(window=rebalance_volalitity_bars).std()
ewm = rolling_std.ewm(span=14 * 8)
return ewm.mean()
@indicators.define()
def mean_returns(close: pd.Series, rebalance_volalitity_bars: int) -> pd.Series:
"""Rolling mean of returns over the volatility window."""
returns = close.pct_change()
mean_returns = returns.rolling(window=rebalance_volalitity_bars).mean()
return mean_returns
@indicators.define()
def rolling_cumulative_volume(volume: pd.Series, rolling_volume_bars: int) -> pd.Series:
"""Calculate rolling volume of the pair.
- Used in inclusion criteria
"""
rolling_volume = volume.rolling(window=rolling_volume_bars).sum()
return rolling_volume
@indicators.define()
def rolling_liquidity_avg(close: pd.Series, rolling_volume_bars: int) -> pd.Series:
"""Calculate rolling liquidity average
- This is either TVL or XY liquidity (one sided) depending on the trading pair DEX type
- Used in inclusion criteria
"""
rolling_liquidity_close = close.rolling(window=rolling_volume_bars).mean()
return rolling_liquidity_close
@indicators.define(dependencies=(rolling_cumulative_volume,), source=IndicatorSource.strategy_universe)
def volume_inclusion_criteria(
strategy_universe: TradingStrategyUniverse,
min_volume: USDollarAmount,
rolling_volume_bars: int,
dependency_resolver: IndicatorDependencyResolver,
) -> pd.Series:
"""Calculate pair volume inclusion criteria.
- Avoid including illiquid / broken pairs in the set: Pair is included when it has enough volume
TODO: Add liquidity check later
:return:
Series where each timestamp is a list of pair ids meeting the criteria at that timestamp
"""
series = dependency_resolver.get_indicator_data_pairs_combined(
rolling_cumulative_volume,
parameters={"rolling_volume_bars": rolling_volume_bars},
)
# Get a mask for the bars when the rolling volume meets our criteria
mask = series >= min_volume
mask_true_values_only = mask[mask]
# Turn to a series of lists
series = mask_true_values_only.groupby(level='timestamp').apply(lambda x: x.index.get_level_values('pair_id').tolist())
return series
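# A minimal, self-contained sketch (hypothetical data) of the groupby trick
# above: collapse a (pair_id, timestamp)-indexed boolean mask into a
# per-timestamp list of passing pair ids.
def _example_pair_ids_per_timestamp() -> pd.Series:
    index = pd.MultiIndex.from_tuples(
        [
            (1, pd.Timestamp("2024-01-01")),
            (2, pd.Timestamp("2024-01-01")),
            (2, pd.Timestamp("2024-01-02")),
        ],
        names=["pair_id", "timestamp"],
    )
    mask = pd.Series([True, False, True], index=index)
    passing = mask[mask]
    # Expected result: 2024-01-01 -> [1], 2024-01-02 -> [2]
    return passing.groupby(level="timestamp").apply(
        lambda x: x.index.get_level_values("pair_id").tolist()
    )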
@indicators.define(dependencies=(volatility_ewm,), source=IndicatorSource.strategy_universe)
def volatility_inclusion_criteria(
strategy_universe: TradingStrategyUniverse,
rebalance_volalitity_bars: int,
dependency_resolver: IndicatorDependencyResolver,
) -> pd.Series:
"""Calculate volatility inclusion criteria.
- Include pairs that are above our threshold signal
:return:
Series where each timestamp is a list of pair ids meeting the criteria at that timestamp
"""
series = dependency_resolver.get_indicator_data_pairs_combined(
volatility_ewm,
parameters={"rebalance_volalitity_bars": rebalance_volalitity_bars},
)
threshold_pair = strategy_universe.get_pair_by_human_description(VOL_PAIR)
assert threshold_pair
threshold_signal = dependency_resolver.get_indicator_data(
volatility_ewm,
pair=threshold_pair,
parameters={"rebalance_volalitity_bars": rebalance_volalitity_bars},
)
assert threshold_signal is not None, f"No threshold volatility signal for: {threshold_pair}"
# Get a mask for the rows where the pair's volatility
# meets or exceeds the reference pair's volatility
# mask = filtered_series >= threshold_signal
df = series.reset_index()
df2 = df.merge(threshold_signal, on=["timestamp"], suffixes=('_pair', '_reference'))
# pair_id timestamp value_pair value_reference
# 0 4569519 2024-02-13 16:00:00 0.097836 NaN
# 1 4569519 2024-02-13 17:00:00 0.097773 NaN
# TODO: No idea what causes this to break
if "value_pair" in df2.columns:
high_volatility_rows = df2[df2["value_pair"] >= df2["value_reference"]]
elif "close_pair" in df2.columns:
high_volatility_rows = df2[df2["close_pair"] >= df2["close_reference"]]
else:
raise NotImplementedError(f"Confused with pandas: {df2.columns}")
def _get_pair_ids_as_list(rows):
return rows["pair_id"].tolist()
# Turn to a series of lists
series = high_volatility_rows.groupby(by=['timestamp']).apply(_get_pair_ids_as_list)
assert isinstance(series, pd.Series)
return series
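# A minimal sketch (hypothetical data) of the merge above: join each pair's
# volatility with the reference pair's volatility on timestamp, so the shared
# "value" column splits into value_pair / value_reference.
def _example_merge_with_reference() -> pd.DataFrame:
    pairs = pd.DataFrame({
        "pair_id": [1, 2],
        "timestamp": pd.to_datetime(["2024-01-01", "2024-01-01"]),
        "value": [0.09, 0.02],
    })
    reference = pd.DataFrame({
        "timestamp": pd.to_datetime(["2024-01-01"]),
        "value": [0.05],
    })
    merged = pairs.merge(reference, on=["timestamp"], suffixes=("_pair", "_reference"))
    # Keep rows at least as volatile as the reference pair (pair 1 here)
    return merged[merged["value_pair"] >= merged["value_reference"]]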
@indicators.define(source=IndicatorSource.tvl)
def tvl(
close: pd.Series,
timestamp: pd.Timestamp,
execution_context: ExecutionContext,
) -> pd.Series:
"""Get TVL series for a pair.
- Because TVL data is 1d and we use 1h everywhere else, we need to forward fill
- Use previous hourly close as the value
"""
if execution_context.live_trading:
# TVL is daily data.
# We need to forward fill until the current hour.
# Use our special ff function.
assert isinstance(timestamp, pd.Timestamp), f"Live trading needs forward-fill end time, we got {timestamp}"
df = pd.DataFrame({"close": close})
df_ff = forward_fill(
df,
Parameters.candle_time_bucket.to_frequency(),
columns=("close",),
forward_fill_until=timestamp,
)
series = df_ff["close"]
return series
else:
return close.resample("1h").ffill()
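# A minimal sketch (hypothetical data) of the backtesting branch above:
# upsample daily TVL closes to hourly values by forward filling.
def _example_daily_tvl_to_hourly() -> pd.Series:
    daily = pd.Series(
        [100_000.0, 120_000.0],
        index=pd.to_datetime(["2024-01-01", "2024-01-02"]),
    )
    # Produces 25 hourly rows from 2024-01-01 00:00 to 2024-01-02 00:00;
    # the first 24 carry 100,000 forward, the last is 120,000
    return daily.resample("1h").ffill()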
@indicators.define(dependencies=(tvl,), source=IndicatorSource.dependencies_only_per_pair)
def tvl_ewm(
pair: TradingPairIdentifier,
tvl_ewm_span: float,
dependency_resolver: IndicatorDependencyResolver,
) -> pd.Series:
"""Get smoothed TVL series for a pair.
- Interpretation: with span=5, the EWM weight of the most recent observation is about 33.3% (alpha = 2/(5+1) ≈ 0.333), and the weight decays exponentially for older observations.
- We forward fill gaps, so there is no missing data in decide_trades()
- Currently unused in the strategy itself
"""
tvl_ff = dependency_resolver.get_indicator_data(
tvl,
pair=pair,
)
return tvl_ff.ewm(span=tvl_ewm_span).mean()
@indicators.define(dependencies=(tvl,), source=IndicatorSource.dependencies_only_universe)
def tvl_inclusion_criteria(
min_tvl: USDollarAmount,
dependency_resolver: IndicatorDependencyResolver,
) -> pd.Series:
"""The pair must have min XX,XXX USD one-sided TVL to be included.
- If the Uniswap pool does not have enough ETH or USDC deposited, skip the pair as a scam
:return:
Series where each timestamp is a list of pair ids meeting the criteria at that timestamp
"""
series = dependency_resolver.get_indicator_data_pairs_combined(tvl)
mask = series >= min_tvl
# Turn to a series of lists
mask_true_values_only = mask[mask]
pairs_per_timestamp = mask_true_values_only.groupby(level='timestamp').apply(lambda x: x.index.get_level_values('pair_id').tolist())
return pairs_per_timestamp
@indicators.define(
source=IndicatorSource.strategy_universe
)
def trading_availability_criteria(
strategy_universe: TradingStrategyUniverse,
) -> pd.Series:
"""Is pair tradeable at each hour.
- The pair has a price candle at that
- Mitigates very corner case issues that TVL/liquidity data is per-day whileas price data is natively per 1h
and the strategy inclusion criteria may include pair too early hour based on TVL only,
leading to a failed attempt to rebalance in a backtest
- Only relevant for backtesting issues if we make an unlucky trade on the starting date
of trading pair listing
:return:
Series with with index (timestamp) and values (list of pair ids trading at that hour)
"""
# A trading pair is considered available if there is an open candle in the index for it.
# Because candle data is forward filled, we should not have any gaps in the index.
candle_series = strategy_universe.data_universe.candles.df["open"]
pairs_per_timestamp = candle_series.groupby(level='timestamp').apply(lambda x: x.index.get_level_values('pair_id').tolist())
return pairs_per_timestamp
@indicators.define(
dependencies=[
volume_inclusion_criteria,
volatility_inclusion_criteria,
tvl_inclusion_criteria,
trading_availability_criteria
],
source=IndicatorSource.strategy_universe
)
def inclusion_criteria(
strategy_universe: TradingStrategyUniverse,
min_volume: USDollarAmount,
rolling_volume_bars: int,
rebalance_volalitity_bars: int,
min_tvl: USDollarAmount,
dependency_resolver: IndicatorDependencyResolver
) -> pd.Series:
"""Pairs meeting all of our inclusion criteria.
- Give the tradeable pair set for each timestamp
:return:
Series where index is timestamp and each cell is a list of pair ids matching our inclusion criteria at that moment
"""
# Filter out benchmark pairs like WETH in the tradeable pair set
benchmark_pair_ids = set(strategy_universe.get_pair_by_human_description(desc).internal_id for desc in SUPPORTING_PAIRS)
volatility_series = dependency_resolver.get_indicator_data(
volatility_inclusion_criteria,
parameters={"rebalance_volalitity_bars": rebalance_volalitity_bars},
)
volume_series = dependency_resolver.get_indicator_data(
volume_inclusion_criteria,
parameters={
"min_volume": min_volume,
"rolling_volume_bars": rolling_volume_bars,
},
)
tvl_series = dependency_resolver.get_indicator_data(
tvl_inclusion_criteria,
parameters={
"min_tvl": min_tvl,
},
)
trading_availability_series = dependency_resolver.get_indicator_data(trading_availability_criteria)
#
# Process all pair ids as sets; the final inclusion
# criteria is the intersection of all sub-criteria
#
df = pd.DataFrame({
"tvl_pair_ids": tvl_series,
"volume_pair_ids": volume_series,
"volatility_pair_ids": volatility_series,
"trading_availability_pair_ids": trading_availability_series,
})
# https://stackoverflow.com/questions/33199193/how-to-fill-dataframe-nan-values-with-empty-list-in-pandas
# Missing cells become "" and set("") below evaluates to an empty set
df = df.fillna("").apply(list)
# Volatility criteria removed so we get a truly equally weighted index:
# def _combine_criteria(row):
#     final_set = set(row["volume_pair_ids"]) & set(row["volatility_pair_ids"]) & set(row["tvl_pair_ids"])
#     return final_set - benchmark_pair_ids
def _combine_criteria(row):
final_set = set(row["volume_pair_ids"]) & \
set(row["tvl_pair_ids"]) & \
set(row["trading_availability_pair_ids"])
return final_set - benchmark_pair_ids
union_criteria = df.apply(_combine_criteria, axis=1)
# Inclusion criteria data can be spotty at the beginning when only 0 or 1 pairs are trading,
# so we need to fill the gaps with empty lists
full_index = pd.date_range(
start=union_criteria.index.min(),
end=union_criteria.index.max(),
freq=Parameters.candle_time_bucket.to_frequency(),
)
# Reindex and fill NaN with zeros
reindexed = union_criteria.reindex(full_index, fill_value=[])
return reindexed
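# Illustrative arithmetic: for a single timestamp with volume pairs {1, 2, 3},
# TVL pairs {2, 3}, trading availability {2, 3, 4} and benchmark pair {3},
# the tradeable set is ({1, 2, 3} & {2, 3} & {2, 3, 4}) - {3} == {2}.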
@indicators.define(dependencies=(volume_inclusion_criteria,), source=IndicatorSource.dependencies_only_universe)
def volume_included_pair_count(
min_volume: USDollarAmount,
rolling_volume_bars: int,
dependency_resolver: IndicatorDependencyResolver
) -> pd.Series:
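"""Calculate the number of pairs meeting the volume inclusion criteria at each timestamp."""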
series = dependency_resolver.get_indicator_data(
volume_inclusion_criteria,
parameters={"min_volume": min_volume, "rolling_volume_bars": rolling_volume_bars},
)
return series.apply(len)
@indicators.define(dependencies=(volatility_inclusion_criteria,), source=IndicatorSource.dependencies_only_universe)
def volatility_included_pair_count(
rebalance_volalitity_bars: int,
dependency_resolver: IndicatorDependencyResolver
) -> pd.Series:
"""Calculate number of pairs in meeting volatility criteria on each timestamp"""
series = dependency_resolver.get_indicator_data(
volatility_inclusion_criteria,
parameters={"rebalance_volalitity_bars": rebalance_volalitity_bars},
)
return series.apply(len)
@indicators.define(dependencies=(tvl_inclusion_criteria,), source=IndicatorSource.dependencies_only_universe)
def tvl_included_pair_count(
min_tvl: USDollarAmount,
dependency_resolver: IndicatorDependencyResolver
) -> pd.Series:
"""Calculate number of pairs in meeting volatility criteria on each timestamp"""
series = dependency_resolver.get_indicator_data(
tvl_inclusion_criteria,
parameters={"min_tvl": min_tvl},
)
series = series.apply(len)
# TVL data can be spotty at the beginning when only 0 or 1 pairs are trading,
# so we need to fill gaps to 0
full_index = pd.date_range(
start=series.index.min(),
end=series.index.max(),
freq=Parameters.candle_time_bucket.to_frequency(),
)
# Reindex and fill NaN with zeros
reindexed = series.reindex(full_index, fill_value=0)
return reindexed
@indicators.define(dependencies=(inclusion_criteria,), source=IndicatorSource.dependencies_only_universe)
def all_criteria_included_pair_count(
min_volume: USDollarAmount,
min_tvl: USDollarAmount,
rolling_volume_bars: int,
rebalance_volalitity_bars: int,
dependency_resolver: IndicatorDependencyResolver
) -> pd.Series:
"""Series where each timestamp is the list of pairs meeting all inclusion criteria.
:return:
Series with pair count for each timestamp
"""
series = dependency_resolver.get_indicator_data(
"inclusion_criteria",
parameters={
"min_volume": min_volume,
"min_tvl": min_tvl,
"rolling_volume_bars": rolling_volume_bars,
"rebalance_volalitity_bars": rebalance_volalitity_bars,
},
)
return series.apply(len)
@indicators.define(source=IndicatorSource.strategy_universe)
def trading_pair_count(
strategy_universe: TradingStrategyUniverse,
) -> pd.Series:
"""Get number of pairs that trade at each timestamp.
- Pair must have had at least one candle before the timestamp to be included
- Exclude benchmark pairs we do not trade
:return:
Series with pair count for each timestamp
"""
benchmark_pair_ids = {strategy_universe.get_pair_by_human_description(desc).internal_id for desc in SUPPORTING_PAIRS}
# Get pair_id, timestamp -> timestamp, pair_id index
series = strategy_universe.data_universe.candles.df["open"]
swap_index = series.index.swaplevel(0, 1)
seen_pairs = set()
seen_data = {}
for timestamp, pair_id in swap_index:
if pair_id in benchmark_pair_ids:
continue
seen_pairs.add(pair_id)
seen_data[timestamp] = len(seen_pairs)
series = pd.Series(seen_data.values(), index=list(seen_data.keys()))
return series
def create_indicators(
timestamp: datetime.datetime,
parameters: StrategyParameters,
strategy_universe: TradingStrategyUniverse,
execution_context: ExecutionContext
):
return indicators.create_indicators(
timestamp=timestamp,
parameters=parameters,
strategy_universe=strategy_universe,
execution_context=execution_context,
)
#
# Strategy metadata/UI data.
#
tags = {StrategyTag.beta, StrategyTag.live}
name = "Memecoin Equal Weighted Index (Base)"
short_description = "Equally-weighted index for memecoins"
icon = ""
long_description = """
# Strategy description
This automated trading strategy buys and holds all quality memecoins on the Base blockchain.
The picked memecoins satisfy minimum volume and TVL criteria.
## Benefits
- Get directional exposure to the exciting small-cap memecoin market
- No need to pick memecoins yourself; the strategy will buy all of them
- Small-cap memecoins are often uncorrelated with the rest of the cryptocurrency market
## Trading rules summary
- Rebalance every four hours
- Equally weight between opened positions
- Trade on Uniswap v2 and v3 on Base
- Use Coingecko as category labelling source for memecoins
- Use TokenSniffer as token quality source to filter scams
- Have minimum TVL and volume criteria to be included in the index
- Adjust position size based on lit Uniswap liquidity (TVL), to prevent taking oversized positions in tokens with limited liquidity
Due to the low number of memecoins, the strategy does not yet have a market cap criterion.
The strategy runs as an open-ended index with publicly available source code; for further details, see the strategy source code.
## Expected performance
The strategy is a memecoin index and reflects the performance of memecoin markets.
See backtesting page for historical estimated performance.
Past performance is no guarantee of future results.
Because memecoin markets are a very recent phenomenon, there is no backtesting data covering memecoin market downturns.
## Risks
**This strategy is ultra-high-risk**. Only allocate small amounts of capital you may lose.
Before investing large amounts of capital, speak with the strategy developers.
This strategy, paired with onchain execution, is the first of its kind in the world. Because of the novelty, multiple risks are involved. Consider it beta-quality software.
**Trading risk**: The small-cap tokens traded are highly volatile,
and may include rug pulls and other questionable activities.
Despite the strategy's automated token filtering, some of the tokens may go to zero. The whole memecoin market can go to zero, and is likely to do so at some point.
There is little liquidity for these tokens, and trading costs are high and may change abruptly.
**Technical risk**: The strategy relies on the ERC-7540 (Asynchronous ERC-4626 Tokenized Vaults) standard.
This standard is new and not yet time-proven. The onchain trade execution contains unaudited smart contracts. There are centralised price oracles. The redemption process relies on a centralised valuation committee.
The strategy includes risk mitigations like:
- Use third-party security services to check the reputation of traded tokens
- The strategy limits position sizes based on the available lit liquidity
- The strategy limits position sizes based on an overall maximum % of the portfolio
- A DAO and/or security committee holding multisignature keys can take manual action in the case of unexpected issues:
the strategy can be halted and manually wound down
The technical risks will be mitigated in the future by working with other protocols,
and by increasing the quality of onchain trading venues, data and decentralisation.
## Further information
- Any questions are welcome in [the Discord community chat](https://tradingstrategy.ai/community)
"""