Build and Backtest Custom Crypto Trading Signals with the Thrive Workbench
Every profitable trading system rests on a signal: something measurable that tells you when to enter, when to exit, and when to do nothing. The signal might be a funding rate extreme, a moving average crossover, a whale accumulation pattern, or some combination of factors you have identified through research. But having a signal idea and having a validated trading signal are two very different things.
The gap between "I think this works" and "I know this works" is called backtesting. And in crypto, where survivorship bias is rampant and market conditions shift quarterly, a proper backtest is not optional. It is the only way to separate genuine edge from wishful thinking.
Most traders skip this step for a legitimate reason: it is hard. Building a backtest engine requires coding ability, clean historical data, proper statistical methodology, and enough understanding of quantitative finance to avoid the dozen pitfalls that produce misleading results. Overfitting. Lookahead bias. Survivorship bias. Regime-dependent results. Transaction cost assumptions. The list goes on.
The Thrive Data Workbench solves this by giving you a no-code Strategy Builder that translates your trading rules into executable strategies, paired with a Backtest engine that includes Monte Carlo simulation, walk-forward analysis, and regime-segmented performance attribution. You define your signal, the system proves or disproves it against history, and you get a clear statistical answer before risking a single dollar.
This article walks through the entire process: from defining your first signal to interpreting backtest results and knowing whether you have something worth trading.
Key Takeaways:
- Custom signals built from your own research outperform generic indicators because they capture edges specific to your market understanding
- The no-code Strategy Builder supports entry/exit conditions, four position sizing models, risk management rules, and regime filters
- Monte Carlo simulation runs 500+ randomized iterations to test whether your results are robust or lucky
- Walk-forward analysis validates that your strategy adapts to changing market conditions rather than overfitting to one period
- Regime-segmented performance shows you exactly when to deploy your strategy and when to sit out
- A signal that passes both Monte Carlo and walk-forward validation has cleared institutional-grade quality bars
Why Custom Signals Beat Generic Indicators
Every trading indicator available on TradingView is used by millions of traders. RSI, MACD, Bollinger Bands, moving average crossovers, Fibonacci retracements—these are all public knowledge, freely available, and widely applied. When millions of people use the same signals, the edge erodes. The market learns to exploit crowded trades, and what once worked becomes a trap.
Custom signals are different. When you build a signal from your own research—combining on-chain metrics with derivatives data, layering funding rates with whale movement patterns, or incorporating sentiment extremes as a filter on technical setups—you create something that nobody else has. The edge exists precisely because it is yours.
This is not theoretical. Consider the difference between trading RSI oversold (millions of traders watching the same level) versus trading the combination of RSI oversold plus negative funding rate plus whale accumulation detected in the last 48 hours. The second signal has far fewer people watching it, far more informational content, and far better odds of working because it requires multiple independent data sources to align.
The Workbench makes this possible by giving you access to all these data sources in one environment. You do not need separate subscriptions to Glassnode for on-chain data, Coinglass for derivatives, and LunarCrush for sentiment. It is all queryable from a single SQL editor, and the Strategy Builder lets you combine conditions from any data source into one executable strategy.
The Signal Building Process
Building a custom signal follows a structured process. Skip any step and you risk building a strategy that looks good on paper but fails in live markets.
- Research: Identify a market behavior or pattern worth testing
- Data Exploration: Query the data to verify the behavior exists and is measurable
- Entry Conditions: Define precise, measurable criteria for opening a position
- Exit Conditions: Define criteria for closing a position (both profitable and unprofitable)
- Position Sizing: Choose a sizing model that manages risk appropriately
- Risk Management: Set stop losses, take profits, and maximum exposure limits
- Filters: Add regime, volatility, or time-based filters to eliminate false signals
- Backtest: Run the complete strategy against historical data
- Monte Carlo: Validate results are robust to randomization
- Walk-Forward: Confirm the strategy adapts to changing conditions
The Workbench supports each step with purpose-built tools. Let us work through them one by one with a real example: building a funding rate mean reversion strategy.
Step 1: Research and Hypothesis
Every custom signal starts with a hypothesis. You observe something in the market and wonder: "Can I trade this?"
For our example, the hypothesis is: **When perpetual swap funding rates become extremely negative (shorts paying longs), it indicates excessive bearish positioning that often precedes short squeezes. Entering long when funding hits extreme negative territory and exiting when funding normalizes should be profitable.**
This hypothesis is grounded in market mechanics. When funding rates are deeply negative, short sellers are paying a premium to maintain their positions. This creates two bullish catalysts: the cost of maintaining shorts pressures weak hands to close (buying pressure), and any price increase triggers cascading liquidations of levered short positions (forced buying).
The Workbench's AI Chat can help generate hypotheses too. Ask "What market conditions have historically preceded positive BTC returns over the last 12 months?" and the AI runs a Feature Discovery scan to identify promising patterns.
Step 2: Data Exploration with SQL
Before building anything, verify your hypothesis with data. The Workbench SQL editor lets you explore the relationship between funding rates and forward returns directly.
Demo: Funding Rate Distribution Analysis
```sql
WITH funding_buckets AS (
  SELECT
    f.symbol,
    f.timestamp,
    f.funding_rate,
    CASE
      WHEN f.funding_rate < -0.03 THEN 'EXTREME NEGATIVE'
      WHEN f.funding_rate < -0.01 THEN 'NEGATIVE'
      WHEN f.funding_rate BETWEEN -0.01 AND 0.01 THEN 'NEUTRAL'
      WHEN f.funding_rate > 0.03 THEN 'EXTREME POSITIVE'
      ELSE 'POSITIVE'
    END AS funding_bucket,
    c.close AS price_at_signal,
    c_fwd.close AS price_24h_later,
    ROUND(((c_fwd.close - c.close) / c.close) * 100, 2) AS fwd_return_24h
  FROM funding_rate_history f
  JOIN workbench_candles c
    ON f.symbol = c.symbol AND c.timeframe = '1h' AND c.timestamp = f.timestamp
  JOIN workbench_candles c_fwd
    ON f.symbol = c_fwd.symbol AND c_fwd.timeframe = '1h'
   AND c_fwd.timestamp = f.timestamp + INTERVAL '24 hours'
  WHERE f.symbol = 'BTC/USDT'
    AND f.timestamp >= NOW() - INTERVAL '365 days'
)
SELECT
  funding_bucket,
  COUNT(*) AS occurrences,
  ROUND(AVG(fwd_return_24h), 3) AS avg_fwd_return,
  ROUND((PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY fwd_return_24h))::NUMERIC, 3) AS median_fwd_return,
  ROUND(STDDEV(fwd_return_24h), 3) AS return_volatility,
  ROUND(COUNT(*) FILTER (WHERE fwd_return_24h > 0)::DECIMAL / COUNT(*) * 100, 1) AS win_rate
FROM funding_buckets
GROUP BY funding_bucket
ORDER BY avg_fwd_return DESC;
```
| funding_bucket | occurrences | avg_fwd_return | median_fwd_return | return_volatility | win_rate |
|---|---|---|---|---|---|
| EXTREME NEGATIVE | 47 | 1.82 | 1.45 | 3.21 | 68.1 |
| NEGATIVE | 312 | 0.54 | 0.32 | 2.45 | 58.3 |
| NEUTRAL | 1,847 | 0.08 | 0.02 | 1.89 | 51.2 |
| POSITIVE | 489 | -0.21 | -0.14 | 2.12 | 47.4 |
| EXTREME POSITIVE | 62 | -1.14 | -0.88 | 3.67 | 37.1 |
The data confirms the hypothesis. Extreme negative funding produces a 68.1% win rate with average forward returns of +1.82% over 24 hours. Extreme positive funding produces a 37.1% win rate with average returns of -1.14%. The asymmetry is clear and statistically meaningful across 47 extreme negative occurrences over the past year.
This is exactly the kind of data-driven validation that separates hypothesis from tradeable edge. The SQL query runs in seconds, and the visualization cell renders the distribution instantly.
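As a sanity check on a result like this, it helps to confirm that 32 wins out of 47 signals (about 68.1%) could not plausibly be coin-flip luck. A minimal pure-Python sketch using the normal approximation to the binomial distribution (an illustration, not a Workbench feature):

```python
import math

def win_rate_z_score(wins: int, n: int, p_null: float = 0.5) -> float:
    """Z-score of an observed win rate against a null win probability,
    using the normal approximation to the binomial distribution."""
    p_hat = wins / n
    se = math.sqrt(p_null * (1 - p_null) / n)
    return (p_hat - p_null) / se

# 32 wins out of 47 extreme-negative-funding signals (~68.1% win rate)
z = win_rate_z_score(32, 47)
print(round(z, 2))  # 2.48 — above the 1.96 cutoff for 5% significance
```

A z-score near 2.5 means fewer than about 1 in 75 random coin-flip sequences would show a win rate this extreme, which supports treating the bucket result as signal rather than noise.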
Step 3: Defining Entry Conditions
With the hypothesis validated, it is time to define precise entry rules in the Strategy Builder. The no-code interface lets you specify conditions using dropdown menus.
Entry Condition Setup
For our funding rate mean reversion strategy:
Entry Conditions (AND logic)
- Funding Rate (8h EMA) crosses below -0.02 (extreme negative territory)
- Volume (24h) is above $100M (sufficient liquidity)
- Price is above the 200-period SMA (only trade in structural uptrends)
- Direction: Long only
Each condition is selected from a dropdown menu. You choose the indicator, the comparison operator (above, below, crosses above, crosses below), and the threshold value. AND logic means all conditions must be true simultaneously. OR logic means any condition triggers.
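Under the hood, AND/OR combination logic amounts to evaluating each condition against the current market snapshot. A minimal sketch of the idea (field names here are illustrative, not the Workbench's actual schema):

```python
from typing import Callable, Dict, List

# Hypothetical market snapshot; keys are illustrative, not Workbench fields.
Snapshot = Dict[str, float]

def entry_signal(conditions: List[Callable[[Snapshot], bool]],
                 snapshot: Snapshot, logic: str = "AND") -> bool:
    """AND logic: every condition must hold. OR logic: any one suffices."""
    results = [c(snapshot) for c in conditions]
    return all(results) if logic == "AND" else any(results)

conditions = [
    lambda s: s["funding_ema_8h"] < -0.02,   # extreme negative funding
    lambda s: s["volume_24h_usd"] > 100e6,   # liquidity floor
    lambda s: s["close"] > s["sma_200"],     # structural uptrend only
]

snap = {"funding_ema_8h": -0.025, "volume_24h_usd": 250e6,
        "close": 61_500.0, "sma_200": 58_200.0}
print(entry_signal(conditions, snap))  # True: all three conditions align
```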
Why Multiple Entry Conditions Matter
A single condition signal fires too often and produces too many false positives. The funding rate alone might produce 47 signals in a year, but many of those occur during structural downtrends where mean reversion is less reliable. Adding the 200-period SMA filter eliminates bearish-structure signals. Adding the volume filter eliminates signals in illiquid conditions where execution slippage would eat your edge.
The goal is not to maximize the number of trades. It is to maximize the quality of each trade. Three conditions that all agree make for a much stronger signal than one condition that fires indiscriminately.
Step 4: Defining Exit Conditions
Entries get the glory, but exits determine your profitability. The Strategy Builder supports multiple exit mechanisms that can be combined.
Exit Condition Setup
Exit Conditions (OR logic)
- Funding Rate (8h EMA) crosses above 0.01 (funding has normalized, mean reversion complete)
- RSI(14) crosses above 70 (momentum exhaustion)
- Position held for more than 72 hours (time stop to prevent dead money)
OR logic means the position closes when any single exit condition is met. The first condition captures the primary edge expiration (funding normalizes). The second captures momentum exhaustion that might precede a reversal. The third prevents capital from being tied up in positions that are not moving.
The Importance of Time Stops
Many traders overlook time stops, but they are essential for capital efficiency. If your signal has a typical holding period of 24-48 hours (as validated by signal decay analysis), then a position still open after 72 hours is outside the signal's effective window. Closing it frees capital for the next opportunity.
Step 5: Position Sizing Selection
The Strategy Builder offers four position sizing models, each suited to different risk profiles and trading styles.
Fixed Percentage
Allocate a constant percentage of capital per trade. Simple and predictable. If you set 2%, every trade risks 2% of your current equity. Your position sizes grow as your account grows and shrink as it draws down. This is the safest starting point for most traders.
Kelly Criterion
Mathematically optimal sizing based on your edge (win rate × average win / average loss). Kelly maximizes long-term geometric growth but produces aggressive sizing that most traders find uncomfortable. Half-Kelly (using half the calculated size) is a common practical modification.
For our funding rate strategy with a 68% win rate and a 1.82:1.14 win/loss ratio (a payoff ratio of roughly 1.6), full Kelly suggests sizing at approximately 48% of capital. Half-Kelly would be about 24%. This is aggressive but reflects the strength of the statistical edge.
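The arithmetic is easy to verify yourself. A short sketch of the standard Kelly formula f* = p − (1 − p)/b, where p is the win rate and b is the payoff ratio (average win divided by average loss):

```python
def kelly_fraction(win_rate: float, payoff_ratio: float) -> float:
    """Full-Kelly fraction f* = p - (1 - p) / b, where b = avg win / avg loss.
    A negative result means no edge: do not bet."""
    return win_rate - (1 - win_rate) / payoff_ratio

# Stats from the funding-rate study: 68% win rate, 1.82 avg win vs 1.14 avg loss
full_kelly = kelly_fraction(0.68, 1.82 / 1.14)
half_kelly = full_kelly / 2  # common practical de-risking
print(round(full_kelly, 2), round(half_kelly, 2))  # 0.48 0.24
```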
Volatility-Scaled
Position size adjusts inversely to recent volatility. In calm markets, you take larger positions. In volatile markets, you take smaller ones. This keeps your dollar risk approximately constant regardless of market conditions.
For crypto, where volatility can shift 3-5x within a week, volatility-scaled sizing prevents the common mistake of maintaining the same position size into a volatility expansion and getting stopped out on noise.
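The core idea can be sketched in a few lines: position notional shrinks as realized volatility rises. The reference volatility and 3% stop distance below are illustrative assumptions, not Workbench defaults:

```python
def vol_scaled_size(equity: float, target_risk_pct: float,
                    realized_vol_ann: float, ref_vol_ann: float = 0.60) -> float:
    """Scale position notional inversely with realized volatility so that
    expected dollar risk stays roughly constant. ref_vol_ann is the volatility
    at which the position equals base size (an illustrative assumption)."""
    base_notional = equity * target_risk_pct / 0.03  # assumes a 3% stop distance
    return base_notional * min(ref_vol_ann / realized_vol_ann, 2.0)  # cap the scale-up

# $100k account, 2% target risk: calm (40% ann. vol) vs turbulent (120%) markets
print(round(vol_scaled_size(100_000, 0.02, 0.40)))  # larger position in calm tape
print(round(vol_scaled_size(100_000, 0.02, 1.20)))  # one third the size in a storm
```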
Risk Parity
Equalizes risk contribution across positions when running multiple strategies simultaneously. If you have three strategies with different volatility profiles, risk parity ensures each strategy contributes equally to your portfolio's total risk. This is how professional portfolio managers allocate across strategies.
For our single-strategy example, we will use volatility-scaled sizing with a 2% base risk. This means the target risk per trade is 2% of equity, with position size adjusting based on the asset's recent realized volatility.
Step 6: Risk Management Rules
Risk management is not optional. It is the difference between a strategy that compounds wealth and one that blows up on a single bad trade.
Stop Loss
Set at 3% below entry price. This limits the maximum loss per trade to approximately 3% of the position's notional. With volatility-scaled sizing targeting 2% risk, the actual portfolio impact of a full stop-out is roughly 2%. At that rate it would take on the order of 50 consecutive losing trades to exhaust the account (and with percentage-of-equity sizing, dollar losses shrink as equity shrinks), which provides substantial risk-of-ruin protection.
Take Profit
Set at 6% above entry price. Combined with the 3% stop, this creates a 2:1 reward-to-risk ratio. Even at a 40% win rate, a 2:1 R:R strategy is profitable before costs (expected value of 0.4 × 2R − 0.6 × 1R = +0.2R per trade). With our hypothesized 68% win rate, it is highly profitable.
Trailing Stop
A 2% trailing stop from peak price. Once the trade moves in your favor, the stop follows price upward, locking in gains. If the trade runs to +5% before pulling back 2%, you exit at +3% profit rather than waiting for the full take profit to hit.
Trailing stops are particularly effective for momentum-driven trades because they let winners run during strong moves while protecting gains during pullbacks. The 2% trailing distance is wide enough to avoid noise stops in crypto's typical volatility.
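The mechanics are simple to sketch: track the running peak and exit when price falls a fixed percentage below it. A simplified closes-only illustration (no fees or intrabar fills assumed):

```python
def trailing_stop_exit(prices, entry: float, trail_pct: float = 0.02):
    """Walk a price path; exit when price falls trail_pct below its running peak.
    Returns (exit_price, return_pct). Simplified: closes only, no fees."""
    peak = entry
    for p in prices:
        peak = max(peak, p)
        if p <= peak * (1 - trail_pct):  # price has pulled back trail_pct off the peak
            return p, (p / entry - 1) * 100
    return prices[-1], (prices[-1] / entry - 1) * 100  # still open: mark at last close

# Trade runs to +5%, then pulls back ~2% off the peak
path = [100.5, 102.0, 104.0, 105.0, 103.2, 102.8]
exit_price, ret = trailing_stop_exit(path, entry=100.0)
print(round(exit_price, 1), round(ret, 1))  # 102.8 2.8
```

Note how the exit lands near +2.8% rather than the full +5% peak: the trailing stop trades some open profit for protection against a full round trip.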
Step 7: Adding Filters
Filters prevent your strategy from trading in unfavorable conditions. They do not define entries or exits. They define when the strategy should be active at all.
Regime Filter
Using the Market Regime Detection tool, we add a filter that activates the strategy only in trending or volatile market regimes. As our data exploration showed, funding rate mean reversion is most effective when markets have directional energy. In ranging markets, the signal accuracy drops below 55%, which is not enough edge to justify the transaction costs.
Setting this in the Strategy Builder: Market Regime filter → Include: Trending Up, Trending Down, Volatile → Exclude: Ranging.
Volatility Filter
Minimum 14-day realized volatility of 40% annualized. Below this threshold, price movements are too small for the signal to generate meaningful returns after fees. This filter automatically deactivates the strategy during ultra-low-volatility consolidation periods.
Time Filter
Exclude the first and last hour of the UTC day. These periods have the highest frequency of funding rate settlements and associated price manipulation. Entering during settlement periods introduces noise into our signal.
The Impact of Filters
Without filters, our raw signal produces 47 trades with a 68% win rate. With regime, volatility, and time filters applied, the trade count drops to 31 but the win rate increases to 74%. You trade less frequently, but each trade has higher conviction and better expected value. This is the hallmark of a well-constructed systematic strategy: precision over frequency.
Step 8: Running Your Backtest
With the complete strategy defined (entries, exits, sizing, risk management, and filters), the Backtest cell runs it against historical data.
Backtest Configuration
- Asset: BTC/USDT
- Timeframe: 1-hour candles
- Period: January 2025 through January 2026 (12 months)
- Starting Capital: $100,000
- Commission: 0.06% per trade (round trip: 0.12%)
- Slippage: 0.05% per trade
These are realistic execution parameters. Many backtests use zero slippage and zero commissions, which inflates results and leads to strategies that work in theory but fail in practice. The Workbench forces you to specify these costs so your results reflect reality.
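It is worth knowing your breakeven hurdle before running anything. With the parameters above, a complete trade pays commission and slippage on both the entry and the exit:

```python
def round_trip_cost_pct(commission_per_side: float, slippage_per_side: float) -> float:
    """Total friction on a complete trade, in percent: commission and slippage
    are each paid twice, once on entry and once on exit."""
    return 2 * (commission_per_side + slippage_per_side)

# 0.06% commission and 0.05% slippage per side, per the backtest config
cost = round_trip_cost_pct(0.06, 0.05)
print(round(cost, 2))  # 0.22 — every trade must clear a 0.22% move to break even
```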
Backtest Output
| Metric | Value |
|---|---|
| Total Trades | 31 |
| Win Rate | 74.2% |
| Profit Factor | 2.41 |
| Sharpe Ratio | 2.38 |
| Sortino Ratio | 3.51 |
| Max Drawdown | -8.7% |
| Calmar Ratio | 3.22 |
| Expectancy | 0.68R |
| Average Win | +3.12% |
| Average Loss | -1.89% |
| Largest Win | +5.84% |
| Largest Loss | -2.98% |
| Total Return | +41.2% |
| Annual Volatility | 17.3% |
These are strong numbers. A 2.38 Sharpe means the strategy generates excellent risk-adjusted returns. A max drawdown of only 8.7% means the worst peak-to-trough decline was manageable. And 74.2% win rate with a 1.65:1 win/loss ratio produces a healthy expectancy of 0.68R per trade.
But a single backtest is just one path through history. The next two steps determine whether these results are robust or fragile.
Step 9: Monte Carlo Simulation
Enable Monte Carlo in the Backtest cell, and the system runs 500+ iterations by randomizing trade order and applying statistical variation to outcomes. This transforms a single result into a probability distribution.
Why Monte Carlo Matters
Your actual sequence of trades will not match the historical sequence. You might hit three losses in a row at the start instead of the middle. You might get your biggest winner early (inflating compound returns) or late (reducing them). Monte Carlo shows you the full range of possible outcomes given the same set of trades in different orders.
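Because terminal equity under fixed-fraction compounding is unchanged by pure reordering, Monte Carlo engines typically also resample the trade set itself. A minimal bootstrap sketch (the trade list is illustrative, not the actual backtest trades, and this is not the Workbench's exact method):

```python
import random
import statistics

def monte_carlo_returns(trade_returns, iterations=500, seed=42):
    """Bootstrap: sample trades with replacement many times, compound each
    sample, and summarize the distribution of terminal returns (in percent)."""
    random.seed(seed)
    finals = []
    for _ in range(iterations):
        sample = random.choices(trade_returns, k=len(trade_returns))
        equity = 1.0
        for r in sample:
            equity *= 1 + r  # compound each resampled trade
        finals.append((equity - 1) * 100)
    finals.sort()
    return {"p5": finals[int(0.05 * iterations)],
            "median": statistics.median(finals),
            "p95": finals[int(0.95 * iterations)]}

# Illustrative trade list: 23 winners near +3.1%, 8 losers near -1.9%
trades = [0.031] * 23 + [-0.019] * 8
dist = monte_carlo_returns(trades)
print(dist["p5"] < dist["median"] < dist["p95"])  # True
```

A strategy worth trading should keep its 5th-percentile outcome comfortably positive, exactly as the Workbench table below this section illustrates.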
Monte Carlo Output
| Metric | 5th Percentile | 25th Percentile | Median | 75th Percentile | 95th Percentile |
|---|---|---|---|---|---|
| Total Return | +21.4% | +31.8% | +39.7% | +48.2% | +62.1% |
| Max Drawdown | -16.4% | -11.8% | -9.1% | -6.8% | -4.2% |
| Sharpe Ratio | 1.42 | 1.89 | 2.31 | 2.74 | 3.18 |
| Win Rate | 64.5% | 70.9% | 74.2% | 77.4% | 83.8% |
The 5th percentile (worst 5% of outcomes) still shows a +21.4% return with a Sharpe of 1.42. This is a profitable strategy even in bad-luck scenarios. The max drawdown in the worst case is -16.4%, which is uncomfortable but not catastrophic for a 2% risk-per-trade strategy.
The median result closely matches the single backtest (39.7% vs 41.2%), which indicates the single backtest was not an outlier but representative of typical expected performance.
If the 5th percentile showed negative returns, it would mean the strategy only works with favorable trade sequencing—a red flag for live trading. But a strategy that is profitable across the entire Monte Carlo distribution has passed the most rigorous statistical validation available.
Step 10: Walk-Forward Analysis
Walk-forward analysis is the final and most demanding validation step. It tests whether your strategy maintains its edge across different market environments without re-optimization.
How Walk-Forward Works
The system divides your historical data into segments. It optimizes your strategy on the first segment (in-sample), then tests the optimized parameters on the next segment (out-of-sample). Then it rolls forward, optimizing on a new in-sample window and testing on the next out-of-sample window. This process repeats through the entire dataset.
For our 12-month dataset:
- Window 1: Optimize Jan-Mar 2025, test Apr 2025
- Window 2: Optimize Feb-Apr 2025, test May 2025
- Window 3: Optimize Mar-May 2025, test Jun 2025
- Continue through December 2025
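Generating the rolling windows is mechanical. A short sketch of the scheme above (3-month training window, 1-month test window, rolling forward one month at a time):

```python
def walk_forward_windows(months, train_len=3, test_len=1):
    """Yield (train, test) month slices, rolling forward one test period at a time."""
    windows = []
    i = 0
    while i + train_len + test_len <= len(months):
        train = months[i:i + train_len]
        test = months[i + train_len:i + train_len + test_len]
        windows.append((train, test))
        i += test_len
    return windows

months = [f"2025-{m:02d}" for m in range(1, 13)]
for train, test in walk_forward_windows(months)[:2]:
    print(train, "->", test)
# ['2025-01', '2025-02', '2025-03'] -> ['2025-04']
# ['2025-02', '2025-03', '2025-04'] -> ['2025-05']
```

Twelve months with this scheme yields nine out-of-sample test periods, April through December, matching the results table below.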
Walk-Forward Results
| Period | In-Sample Sharpe | Out-of-Sample Sharpe | Profitable? |
|---|---|---|---|
| Apr 2025 | 2.51 | 1.89 | Yes |
| May 2025 | 2.34 | 2.12 | Yes |
| Jun 2025 | 2.18 | 0.74 | Marginal |
| Jul 2025 | 1.92 | 1.45 | Yes |
| Aug 2025 | 2.45 | 2.31 | Yes |
| Sep 2025 | 2.61 | 1.67 | Yes |
| Oct 2025 | 2.38 | -0.21 | No |
| Nov 2025 | 2.12 | 2.84 | Yes |
| Dec 2025 | 2.55 | 1.92 | Yes |
Eight of the nine out-of-sample periods were profitable (June only marginally so). October 2025 was the exception, which corresponded to a highly unusual market regime transition. The out-of-sample Sharpe ratios are consistently lower than in-sample (expected) but still mostly above 1.0 (strong).
A strategy that maintains profitability across 8 of 9 walk-forward windows has demonstrated genuine adaptability to changing market conditions. This is dramatically more convincing than a single backtest on the full period, because it proves the strategy works in conditions it was not specifically optimized for.
Interpreting Backtest Results
Numbers mean nothing without context. Here is how to interpret the key metrics from your backtest.
Sharpe Ratio
- Below 0.5: Not worth trading. The return does not compensate for the risk and volatility.
- 0.5-1.0: Decent. Comparable to buy-and-hold crypto but with more controlled drawdowns.
- 1.0-2.0: Good. Institutional-quality risk-adjusted returns.
- Above 2.0: Excellent. This is the target range for active strategies.
Our strategy's Sharpe of 2.38 is in the excellent range, and it maintains above 1.4 even in the 5th percentile Monte Carlo scenario.
Maximum Drawdown
This is the worst peak-to-trough decline. It answers: "How much do I need to stomach before the strategy recovers?" Our 8.7% max drawdown (16.4% in worst Monte Carlo case) is highly manageable. Most traders can psychologically handle drawdowns up to 20%. Beyond 30%, most people abandon their strategy—which is often the worst time to stop.
Profit Factor
Total gross profit divided by total gross loss. A profit factor of 1.0 means breakeven. Our 2.41 means the strategy generates $2.41 in profit for every $1.00 in losses. Any profit factor above 1.5 is strong. Above 2.0 is exceptional.
Expectancy
Expected value per trade in R-multiples (risk units). Our 0.68R means each trade is expected to return 0.68 times the amount risked. With 2% risk per trade, each trade adds an expected 1.36% to the portfolio. Over 31 trades per year, that compounds to significant returns.
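Both metrics fall out of the trade list directly. A minimal sketch using a hypothetical set of R-multiples (not the backtest's actual trades):

```python
def trade_metrics(returns_r):
    """Profit factor (gross gains / gross losses) and expectancy (mean R-multiple)
    from a list of per-trade returns expressed in R units."""
    gains = sum(r for r in returns_r if r > 0)
    losses = -sum(r for r in returns_r if r < 0)
    profit_factor = gains / losses if losses else float("inf")
    expectancy = sum(returns_r) / len(returns_r)
    return profit_factor, expectancy

# Hypothetical R-multiples for five trades
trades = [2.0, 1.5, -1.0, 0.5, -0.5]
pf, exp_r = trade_metrics(trades)
print(round(pf, 2), round(exp_r, 2))  # 2.67 0.5
```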
Sample Size Warning
31 trades is a relatively small sample size. Statistical confidence increases with more trades. For a strategy producing ~30 trades annually, you need 2-3 years of data to reach a statistically confident sample of 60-90 trades. Monitor results carefully in the first year and be prepared to re-evaluate if early live results deviate significantly from backtest expectations.
Common Backtesting Mistakes to Avoid
The Workbench handles many technical pitfalls automatically, but understanding these common errors helps you build better strategies.
Overfitting
Adding too many conditions to your entry criteria until the backtest produces perfect results. If your strategy has 10 conditions with specific thresholds, you have almost certainly overfit to the historical data. Our strategy uses three entry conditions and three filters, which is a reasonable level of complexity for 31 trades.
The test for overfitting is straightforward: change any single parameter by a small amount and re-run the backtest. If results collapse, the strategy depends on that exact parameter value, not the underlying market behavior. Robust strategies produce similar results across a range of parameter values. If your RSI threshold works at 28, 30, and 32, you are capturing a genuine behavior. If it only works at exactly 30, you are fitting noise.
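A parameter sweep like this is easy to automate. The sketch below uses a toy backtest stub whose "edge" degrades smoothly away from a threshold; with a real engine you would substitute the actual backtest call (both function names here are hypothetical):

```python
def sensitivity_sweep(backtest_fn, base_params, param, values):
    """Re-run a backtest varying one parameter at a time. Robust strategies show
    a smooth performance curve; overfit ones spike at a single exact value."""
    return {v: backtest_fn({**base_params, param: v}) for v in values}

def toy_backtest(params):
    """Stub standing in for a real backtest: Sharpe peaks gently near -0.02."""
    return round(2.4 - 40 * abs(params["funding_threshold"] + 0.02), 2)

results = sensitivity_sweep(toy_backtest, {"funding_threshold": -0.02},
                            "funding_threshold",
                            [-0.03, -0.025, -0.02, -0.015, -0.01])
print(results)  # performance declines gradually on either side of -0.02
```

If the sweep instead showed strong results at exactly one threshold and a collapse at its neighbors, that would be the overfitting signature described above.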
Survivorship Bias
Only testing your strategy on assets that still exist. Assets that went to zero are not in the data, which inflates results. The Workbench mitigates this by including historical data for delisted assets, but be aware of the bias when testing on altcoins.
This matters more than most traders realize. If you backtest a momentum strategy on "the top 20 coins" using today's top 20, you are selecting assets that have already succeeded. A more honest test includes assets that were in the top 20 at the time of each historical trade, including those that have since fallen out.
Transaction Cost Assumptions
Using zero commissions and slippage in your backtest. The Workbench forces you to specify these parameters. Always use realistic values based on your actual execution venue. A strategy that works at 0% fees might break at 0.12% round-trip costs, especially for high-frequency strategies.
Slippage is the hidden killer of backtests. A strategy that trades 200 times per year with 0.05% average slippage per trade surrenders roughly ten percentage points of annual return to execution friction alone (200 × 0.05%). The Workbench lets you model slippage explicitly so your backtest returns reflect what you would actually keep after execution costs.
Forward-Looking Bias
Using information that would not have been available at the time of the trade. The Workbench's data alignment prevents most forms of this error, but be careful with features that publish with a delay. If an on-chain metric has a 2-hour publication lag, your backtest should not use it until 2 hours after the event.
Regime Blindness
Testing only during favorable conditions. If your data starts in a bull market and ends in a bull market, your strategy may simply be "buy the dip in a bull market." Walk-forward analysis across different regimes catches this. Always ensure your test period includes at least one complete market cycle.
Ignoring Drawdown Psychology
A strategy with 80% annual return and 40% max drawdown looks great on paper. In practice, most traders abandon a strategy after a 25% drawdown. If your backtest shows a deeper drawdown than you can psychologically handle, reduce position size until the drawdown is within your tolerance. A strategy you stick with through the rough patches beats a "better" strategy you abandon at the worst possible time.
From Backtest to Live Trading
Passing Monte Carlo and walk-forward validation is not the end of the process. Before committing full capital, graduated deployment protects against the remaining unknowns.
Paper Trading Phase
Run the strategy on live data without real capital for 2-4 weeks. This verifies that your execution matches the backtest's assumptions—entry timing, fill prices, and slippage. If paper results deviate more than 20% from backtest expectations, investigate before going live.
Quarter-Size Phase
Trade the strategy with 25% of your intended position size for 1-3 months. This is real money with real psychological pressure, but limited downside. Collect enough trades to compare live results against backtest metrics. If the live Sharpe ratio is within 0.5 of the backtest Sharpe, the strategy is performing as expected.
Full Deployment
Once live results confirm backtest expectations across at least 15-20 trades, scale to full size. Continue monitoring with the Alpha Leak Detection tool to catch degradation early.
This graduated approach costs some upside (you miss gains while trading small) but protects against the catastrophic scenario: deploying full capital on a strategy that does not work in live conditions. Professional firms use this exact process. Discipline in deployment protects everything you built during research.
FAQs
Do I need coding experience to build custom signals in the Workbench?
No. The Strategy Builder uses dropdown menus for selecting indicators, comparison operators, and threshold values. Entry conditions, exit conditions, position sizing, risk management, and filters are all configured through the visual interface. SQL knowledge helps with data exploration, but the AI Chat can handle that step through natural language.
How many trades do I need for a statistically valid backtest?
A minimum of 30 trades provides preliminary statistical validity. For higher confidence, aim for 60-100+ trades. If your strategy produces fewer than 30 trades per year, test across multiple years of data. The Monte Carlo simulation helps compensate for small sample sizes by generating many possible outcomes from your trade set.
What is the difference between Monte Carlo simulation and walk-forward analysis?
Monte Carlo tests robustness to trade sequencing by randomizing the order of your trades across 500+ simulations. It answers "Are my results dependent on the specific order trades occurred?" Walk-forward tests robustness to regime change by optimizing and testing across different time periods. It answers "Does my strategy work in market conditions it was not trained on?" Both tests are necessary for thorough validation.
How much does backtesting cost in credits?
Simple backtests (single asset, under one year, no Monte Carlo) cost 30 credits. Complex backtests with Monte Carlo simulation, multiple assets, or periods exceeding one year cost 50+ credits. The additional cost reflects the computational resources required for running 500+ simulations. Visit the pricing page for credit allocations per subscription tier.
Can I backtest strategies on multiple assets simultaneously?
Yes. The Backtest cell supports multi-asset backtesting where the same strategy rules are applied across a portfolio of assets. This is particularly useful for strategies based on cross-asset signals like relative value or momentum rotation.
What data is available for backtesting?
The Workbench provides historical candlestick data across multiple timeframes, funding rate history, on-chain metrics, sentiment data, and liquidation events. The exact historical depth varies by data type, with price data typically available for 2+ years for major assets.
How do I know if my strategy is overfit?
Signs of overfitting include: many specific conditions (more than 5-6), very high in-sample performance that drops dramatically in walk-forward testing, and sensitivity to small parameter changes. If changing your RSI threshold from 30 to 32 dramatically changes results, the strategy is overfit to the specific threshold rather than capturing a genuine market behavior.
Can I share my backtested strategies with others?
Yes. Notebooks containing your strategy definitions and backtest results can be shared publicly with slug-based URLs. Other Thrive users can fork your notebooks to test variations or build on your research. This creates a collaborative research environment where the community benefits from shared insights.
What position sizing model should beginners use?
Start with Fixed Percentage at 1-2% risk per trade. This is the simplest model and provides clear risk control. As you gain confidence in your strategy's edge (validated through Monte Carlo and walk-forward), you can graduate to volatility-scaled or Kelly-based sizing for potentially higher returns.
How often should I re-optimize my strategy?
Quarterly re-optimization strikes a good balance between adapting to changing conditions and maintaining strategy stability. Monthly re-optimization risks chasing noise. Annual re-optimization risks missing genuine regime shifts. The Alpha Leak Detection tool helps you identify when re-optimization is needed by monitoring your signal's information coefficient over time.
Summary
Building and backtesting custom trading signals is the most important skill a crypto trader can develop. It transforms you from a price-chart watcher into a quantitative trader with a validated, measurable edge.
The Thrive Data Workbench gives you every tool in this pipeline without requiring a single line of code. SQL exploration validates your hypothesis. The no-code Strategy Builder translates your rules into executable strategies. The Backtest engine runs your strategy against history with realistic cost assumptions. Monte Carlo simulation proves your results are not dependent on luck. Walk-forward analysis proves your strategy adapts to changing markets.
This is the same validation process used by institutional quant desks, packed into a browser tab. The tools that used to require a team of PhD quants and millions in infrastructure are now available to any trader willing to do the work.
Build. Test. Validate. Trade. In that order.
→ Start Building Your Strategy
