- Problem
- The ensemble combines five brains — technical, fundamental, sentiment, smart-money and quantitative — using per-brain weights that adapt nightly from the feedback loop. For about two weeks the weights looked healthy on paper but the signals were worse than a uniform-weight baseline.
- How I found it
- I shipped a public scorecard that showed per-brain accuracy next to the ensemble's. The ensemble was underperforming the best single brain by 8 percentage points, which should be impossible if the weights are pulling their weight.
- What I tried first
- I assumed the accuracy calc was wrong and spent a day rewriting it. The numbers didn't change.
- The fix
- The brain-accuracy cache guarded its refresh with a non-blocking lock. On a cache miss during concurrent scans it returned an empty dict, which the ensemble silently treated as "all weights equal 0.2" — a uniform mixture that ignored two weeks of learned calibration. Switching to a blocking lock so the first caller refreshes and others wait was a two-line change.
- Impact
- +7.3% win rate on the next 500 signals once real weights were flowing again.
- What I learned
- A silent fallback is worse than an exception. If the cache can't serve, the ensemble should refuse to score, not quietly return a nonsense answer. Fail-loud over fail-open for anything that touches capital.
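The fix described above can be sketched in a few lines. This is a minimal illustration of the pattern (blocking lock plus fail-loud guard), with hypothetical names — `BrainAccuracyCache` and `loader` are my inventions, not the project's actual code:

```python
import threading

class BrainAccuracyCache:
    """Sketch: the first caller refreshes under a blocking lock, concurrent
    callers wait, and an empty result raises instead of silently falling
    back to a uniform mixture. All names are hypothetical."""

    def __init__(self, loader):
        self._loader = loader          # fetches fresh per-brain weights
        self._lock = threading.Lock()  # acquired blocking, not try-acquire
        self._weights = None

    def get_weights(self):
        if self._weights is None:
            # Blocking acquire: concurrent scanners queue here instead of
            # racing past a failed try-acquire and returning an empty dict.
            with self._lock:
                if self._weights is None:  # re-check after waiting
                    self._weights = self._loader()
        if not self._weights:
            # Fail loud: refuse to score rather than quietly degrade to
            # "all weights equal 0.2".
            raise RuntimeError("brain-accuracy cache empty; refusing to score")
        return self._weights
```

The double-checked pattern means only the first caller pays the refresh cost; everyone queued behind it reads the result the first caller produced.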
TradeMaster AI · Build log
An algorithmic trading platform for Indian equities, built solo to learn ML product development in public.
Currently in paper-trade validation. Target: 58% win rate. Here’s the build log, the bugs, and the fixes.
What it does
TradeMaster AI scans 614 Indian equities after market close, scores them with a five-model ML ensemble, and narrows the list to a 40–60-stock execution watchlist for the next trading session. During market hours, an intraday strategy engine watches that watchlist for ORB and VWAP-pullback setups and logs simulated paper trades with stop-loss and target orders. Every signal and outcome is written to a feedback loop that retrains the per-model weights nightly, so the system learns from its own misses.
- 01 Data Sources: Zerodha Kite + yfinance
- 02 Post-market Scanner: 614 stocks
- 03 5-Model ML Ensemble
- 04 Execution Watchlist: 40–60 stocks
- 05 Intraday Strategy: ORB + VWAP Pullback
- 06 Signal Log & Paper Trade Executor
- 07 Feedback Loop: nightly retrain
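The nightly re-weighting step (07) can be sketched. This is a hypothetical illustration of the idea — weight each brain by its recent hit rate, with a floor so no brain is silenced outright — not the project's actual update rule:

```python
def adapt_weights(hit_rates, floor=0.05):
    """Hypothetical nightly re-weighting sketch: each brain's weight is
    proportional to its recent hit rate, floored so a cold streak can't
    zero a brain out entirely. `hit_rates` maps brain name -> fraction
    of recent signals that hit their target."""
    floored = {brain: max(rate, floor) for brain, rate in hit_rates.items()}
    total = sum(floored.values())
    # Normalise so the five weights always sum to 1.
    return {brain: value / total for brain, value in floored.items()}
```

With equal hit rates this collapses to the uniform 0.2 mixture mentioned in the cache bug above; the whole point of the feedback loop is that it shouldn't stay there.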
Live performance dashboard
Paper-traded. Read-only aggregate from the trade log. Updated whenever you refresh.
Paper-traded returns, net of simulated transaction costs (brokerage ₹20/order, STT 0.025%, slippage 0.05%). Not financial advice.
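The simulated cost model uses the three rates quoted above. Here is a sketch of how a round-trip long trade nets out under those rates; how each charge is applied per leg (STT on the sell side, slippage on both legs) is my assumption, not confirmed by the project:

```python
def net_pnl(qty, entry, exit_price,
            brokerage_per_order=20.0,  # flat Rs.20 per order, two orders per round trip
            stt_rate=0.00025,          # 0.025%, applied to the sell leg here
            slippage_rate=0.0005):     # 0.05% of traded value, both legs
    """Sketch of the simulated-cost model for a long round trip, using the
    rates quoted in the dashboard note. Per-leg application is an assumption."""
    gross = (exit_price - entry) * qty
    brokerage = 2 * brokerage_per_order
    stt = stt_rate * exit_price * qty
    slippage = slippage_rate * (entry + exit_price) * qty
    return gross - brokerage - stt - slippage
```

On a 100-share trade entered at ₹500 and exited at ₹510, roughly ₹103 of the ₹1,000 gross move goes to simulated costs, which is why modelling them before going live matters.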
| Date | Symbol | Strategy | Entry | Exit | P&L (₹) | Outcome |
|---|---|---|---|---|---|---|
| No trades available yet. | | | | | | |
ML build log
Four bugs that mattered. What broke, how I found it, what I tried, what fixed it, and what it taught me.
Design tradeoffs
- Two-tier scanner instead of real-time. Problem: A real-time scanner on 614 stocks at 1-minute resolution costs ~40x the compute and ~20x the API quota vs a post-market scan plus a small execution watchlist. Decision: Score the universe nightly, pick 40–60 stocks, and only stream intraday quotes for that subset. Why: I would rather spend the compute budget on scoring quality than on watching 550 stocks that won't get traded.
- One primary broker with a free fallback. Problem: Zerodha Kite is the best free source for real-time Indian market data, but a single broker outage or a stale token can poison a whole trading day — and yfinance rate-limits aggressively on retry loops. Decision: Zerodha Kite's WebSocket streams ticks for the execution watchlist; yfinance fills OHLCV gaps and fundamentals with an 8-second timeout and a Redis-backed stale-cache guard. Both sources flow through a single DataFusion layer, so callers never need to know which responded. Why: I'd rather have a designed degradation path than a five-minute outage every time a token expires. The fusion layer is also where I surface stale-data flags to the scoring brains so they can discount their own signals.
- ORB and VWAP Pullback, not deep learning. Problem: A deep-learning intraday model is harder to explain when it loses money and harder to debug when the feature pipeline drifts. Decision: Start with two rules I can read in plain English (Opening Range Breakout and VWAP Pullback) and let the ML ensemble rank which stocks they should run on. Why: Every lost trade has to be attributable to a specific rule. Deep learning optimises for performance; readable rules optimise for trust.
- FastAPI over Django. Problem: The trading engine needs async WebSocket consumers, low latency on single endpoints, and easy deployment behind nginx on a small VPS. Decision: FastAPI for the backend, Pydantic models for validation, PM2 + uvicorn workers for process management. Why: Django's ORM and templating would carry weight the project never uses; FastAPI keeps the hot path small.
- Paper trading before live capital. Problem: Live deployment with real capital is a one-way door; bugs in production don't just break a test, they take money. Decision: Every strategy graduates through paper trading first, and live deployment requires a documented 90-day paper-traded track record and a hand-reviewed trade log. Why: Discipline beats conviction. The cost of delaying a live launch by 90 days is low; the cost of a bad first quarter is not.
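The "rules I can read in plain English" claim is easy to demonstrate. Here is a sketch of an ORB long entry; the 15-minute opening range and the bar shape are my assumptions, not the engine's actual parameters:

```python
def orb_long_signal(bars, range_minutes=15):
    """Opening Range Breakout, long side, as a readable rule (a sketch;
    the engine's real parameters are not public). Take the high of the
    first `range_minutes` one-minute bars; signal long the first time a
    later bar closes above it. Each bar is (high, low, close)."""
    opening = bars[:range_minutes]
    range_high = max(high for high, _, _ in opening)
    for _, _, close in bars[range_minutes:]:
        if close > range_high:
            return True  # breakout: close above the opening range high
    return False         # no breakout this session
```

When a trade built on this rule loses money, the post-mortem question is concrete: was the opening range too narrow, or did the breakout fail? That attributability is the whole argument against a black-box model here.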
Roadmap
Shipped
- Post-market scanner for 614 stocks
- 5-model ML ensemble with adaptive per-model weights
- ORB and VWAP Pullback execution engine
- Paper trade logging with simulated costs
- Fixed inverted weights and look-ahead bias
In progress
- Stop-loss infrastructure (bracket orders everywhere)
- Transaction-cost modeling in live signals, not just backtest
- Public performance dashboard (this page)
Cut
- LSTM-based signal generation
- Options strategies
- Mobile app
About the builder
I’m Anustup, a branding, communications, and marketing professional transitioning into AI Product Management. I spent 10+ years building brands, shaping communication strategy, and running marketing for enterprise clients at a Bengaluru agency. TradeMaster AI is my attempt to learn ML product development the way I think PMs should — by building, breaking, and documenting something end-to-end rather than reading about it.
I’m exploring AI PM, Growth PM, and Growth Intelligence roles at Indian tech and fintech companies. If you’re hiring, or want to talk about the build: mukherjeeanustup@gmail.com · LinkedIn.