quant-runtime / release-console
# strategy.py
from sklearn.ensemble import RandomForestClassifier

def train(data, config):
    features = build_features(data)
    model = RandomForestClassifier(
        n_estimators=250,
        max_depth=8,
        random_state=42,
    )
    model.fit(features.X, features.y)
    metrics = {
        "sharpe": 1.82,
        "profit_factor": 1.41,
        "max_drawdown": 0.084,
    }
    return model, metrics

def predict(model, market_data, config):
    point = latest_feature_vector(market_data)
    prob = model.predict_proba([point])[0][1]
    if prob > 0.64:
        return {"signal": "UP", "confidence": prob}
    if prob < 0.36:
        return {"signal": "DOWN", "confidence": 1 - prob}
    return {"signal": "HOLD", "confidence": 0.18}

Independent quant traders
Turn notebook research into a live, versioned signal product.
Small prop and execution teams
Centralize validation, releases, deployment, and signal delivery.
Systematic signal researchers
Run repeatable training, PPE validation, and artifact-based deployment.
Model-first infra buyers
Keep Python for training while PyP handles runtime and monetization.
The problem
Training is the easy part. Getting live signals from a trained model to your broker in real time is weeks of infrastructure work.
P&L curves on in-sample data mean nothing. You need out-of-sample validation on real tick data with realistic execution assumptions.
Even if your strategy works, turning it into recurring revenue requires a marketplace, payment processing, and subscriber management you'd have to build yourself.
Pipeline
The page should scan like an execution pipeline, not a feature dump. Training, validation, deployment, and monetization all connect to the same release system.
Use plain Python with train() and predict() entrypoints. Bring sklearn, XGBoost, LightGBM, statsmodels, numpy, pandas, or export heavier models to ONNX.
GitHub Actions runs the training job against 5+ years of market data across forex and crypto timeframes. Output artifacts are stored, versioned, and attached to jobs.
PyP replays your strategy candle by candle. PPE-PP then measures drawdown, Sharpe, MAE/MFE, session edge, and regime-specific behavior.
Deploy inference to Cloudflare edge or container runtime. Signals fire on candle close and route to MT4, MT5, Telegram, Discord, WhatsApp, and the dashboard.
Publish verified releases to the marketplace, price your subscription, and let PyP manage billing, subscribers, and signal delivery.
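The train()/predict() contract that drives the pipeline above can be sketched end to end with nothing beyond the standard library. This is a minimal sketch of a lightweight deterministic signal, assuming a `data["close"]` list and `fast`/`slow` config keys; those names are illustrative, not PyP's exact schema.

```python
def train(data, config):
    # "Training" a lightweight deterministic signal: pick the
    # moving-average windows and record basic run metadata.
    closes = data["close"]
    model = {"fast": config.get("fast", 10), "slow": config.get("slow", 50)}
    metrics = {"n_candles": len(closes)}
    return model, metrics

def predict(model, market_data, config):
    # Compare fast vs slow moving averages over the latest candles.
    closes = market_data["close"]
    fast = sum(closes[-model["fast"]:]) / model["fast"]
    slow = sum(closes[-model["slow"]:]) / model["slow"]
    if fast > slow:
        return {"signal": "UP", "confidence": min(fast / slow - 1.0, 1.0)}
    if fast < slow:
        return {"signal": "DOWN", "confidence": min(slow / fast - 1.0, 1.0)}
    return {"signal": "HOLD", "confidence": 0.0}
```

Anything with this two-function shape — whether it wraps sklearn, XGBoost, or plain arithmetic — fits the same release pipeline.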
Model support
A quant buyer needs a deployment decision matrix, not a vague compatibility claim. This makes runtime choice, artifact shape, and preservation model obvious.
Custom feature engineering and lightweight numpy-driven signals
Classical tabular models, pipelines, and probability-driven directional systems
Tree ensembles and factor-heavy directional strategies
Low-latency gradient boosting with compact inference footprints
Econometric and statistical forecasting workflows
Neural networks and heavier exact-model deployment paths
Researchers who want to keep training in PyTorch but deploy a portable artifact
TensorFlow stacks routed through ONNX for production delivery
All models are trained on GitHub Actions with full Python. Edge-friendly families run on Cloudflare's global edge network. Heavier exact-model families use PyP Container Runtime.
Artifacts
PyP does not force every strategy through one opaque blob. Artifact choice follows how your model should be trained, preserved, and executed.
Custom signal logic and small deterministic models
Cloudflare Edge Workers
Maximum flexibility with the lightest deployment footprint.
sklearn, XGBoost, LightGBM, and statsmodels-style artifacts
Cloudflare Edge Workers
Framework-agnostic extracted weights for fast CPU inference.
Exact exported neural or heavy model files
PyP Container Runtime
Keeps the exact ONNX model intact for heavier inference families.
Validation
PyP's Proof-of-Performance Engine does not draw a flattering backtest curve and call it done. It replays your strategy against historical candles in sequence, using the same model logic that powers deployment.
PPE-PP then analyzes adverse excursion, favorable excursion, regime behavior, session edge, and replay-aware SL/TP geometry. That output becomes part of your release evidence.
When you list on the marketplace, those verification results travel with the strategy. Subscribers see verified operating behavior rather than unverifiable claims.
PPE-PP / release verification
London session edge remains strongest after replay normalization.
Counterfactual path math suggests tighter TP geometry on regime shift days.
Adverse excursion profile remains contained under 9% max drawdown.
Release is eligible for marketplace listing because PPE has completed.
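The adverse and favorable excursion metrics in the panel above have a simple per-trade definition, which can be sketched as follows. This is an illustration of MAE/MFE on candle highs and lows, not PPE-PP's exact implementation.

```python
def excursions(entry, highs, lows, direction="long"):
    """Illustrative MAE/MFE for one trade (not PyP's exact math).

    MAE: worst move against the position while it was open.
    MFE: best move in the position's favor while it was open.
    """
    if direction == "long":
        mae = entry - min(lows)   # deepest dip below entry
        mfe = max(highs) - entry  # highest peak above entry
    else:
        mae = max(highs) - entry  # worst rally against a short
        mfe = entry - min(lows)   # deepest drop in the short's favor
    return max(mae, 0.0), max(mfe, 0.0)
```

Aggregated over every replayed trade, these excursions are what make SL/TP geometry auditable rather than anecdotal.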
Delivery
Instant signal messages to any channel or group. Pair, direction, confidence, SL, TP, and release context arrive formatted and ready.
Webhook delivery to your server with custom formatting, role pings, and channel routing.
EA polls PyP API on candle close and executes with your SL/TP automatically. Paste the token once and leave the delivery loop to PyP.
Watch live decisions, release history, signal audit trails, and equity curve behavior from a single control plane.
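A delivered signal carries pair, direction, confidence, SL, TP, and release context. The payload below is a hypothetical shape assumed for illustration — the field names are not PyP's published schema — with a formatter of the kind a Telegram or Discord route might apply.

```python
# Hypothetical signal payload; field names are assumptions, not PyP's schema.
signal = {
    "pair": "EURUSD",
    "signal": "UP",
    "confidence": 0.71,
    "sl": 1.0815,
    "tp": 1.0892,
    "release": "v1.4.2",
}

def format_message(s):
    # One plain-text line suitable for Telegram/Discord/WhatsApp delivery.
    direction = "BUY" if s["signal"] == "UP" else "SELL"
    return (
        f"{s['pair']} {direction} "
        f"(conf {s['confidence']:.0%}) "
        f"SL {s['sl']} TP {s['tp']} "
        f"[release {s['release']}]"
    )
```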
Marketplace
List your strategy on the PyP Marketplace.
Set a monthly subscription price.
Subscribers receive verified live signals automatically.
50 subscribers × $49/month. PyP handles recurring billing, signal delivery, subscriber management, and listing infrastructure.
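The worked math behind that example, using the commission rates listed on this page. The subscriber count and price are illustrative figures, not guarantees, and the plan labels here are placeholders.

```python
# 50 subscribers at $49/month, netted against each plan's commission.
subs, price = 50, 49
gross = subs * price  # $2,450 gross per month

# Plan labels are placeholders; the rates are the ones listed on this page.
for plan, commission in [("20% plan", 0.20), ("15% plan", 0.15), ("10% plan", 0.10)]:
    net = gross * (1 - commission)
    print(f"{plan}: ${net:,.2f}/month to you")
```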
See Marketplace
Pricing
This section should read like capacity planning. Quant buyers care about how much research, validation, signal throughput, and marketplace economics each plan supports.
Best for solo research and first marketplace listings.
Training runs
60 / month
Simulation candle limit
15,000 / simulation
Signal allocation
5,000 / month
Marketplace commission
20%
Best for active validation, deployment, and subscriber growth.
Training runs
120 / month
Simulation candle limit
15,000 / simulation
Signal allocation
15,000 / month
Marketplace commission
15%
Best for desks, prop-style throughput, and high-volume signal operations.
Training runs
999 / month
Simulation candle limit
15,000 / simulation
Signal allocation
2,500,000 / month
Marketplace commission
10%
No. Quant Mode uses plain Python. Write train() and predict() functions. Use any supported library.
5+ years of OHLCV data across major forex pairs such as EURUSD, GBPUSD, XAUUSD, and USDJPY, and major crypto pairs such as BTCUSDT and ETHUSDT, at 1m, 5m, 15m, 1h, 4h, and 1d timeframes.
Custom dataset upload is on the roadmap. Currently, PyP provides the market data inside the quant workflow.
Your trained artifact is stored on Cloudflare R2. On each candle close, PyP loads the artifact, runs the correct runtime, normalizes the result, and dispatches to your configured channels.
For heavier model families such as ONNX, PyP uses container runtime infrastructure to load the exact model file while keeping the same release, simulation, and delivery workflow.
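The candle-close flow described in these two answers can be sketched as a single function. This is an illustrative loop assuming a pickled artifact and caller-supplied transports; the storage, runtime-selection, and dispatch calls are placeholders, not PyP's API.

```python
import pickle

def on_candle_close(artifact_bytes, market_data, config, predict_fn, channels):
    # Restore the stored artifact (here assumed to be a pickle; ONNX
    # families would route to a container runtime instead).
    model = pickle.loads(artifact_bytes)
    # Run the user-supplied predict() entrypoint.
    raw = predict_fn(model, market_data, config)
    # Normalize the result to one shape before delivery.
    signal = {
        "signal": raw.get("signal", "HOLD"),
        "confidence": float(raw.get("confidence", 0.0)),
    }
    # Fan out to every configured destination (MT4/MT5, Telegram, etc.).
    for send in channels:
        send(signal)
    return signal
```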
Yes. Any published release with a completed PPE simulation can be listed. You set the price, and PyP takes a commission based on plan.
Yes. Your source code and artifacts are not exposed to subscribers. Subscribers receive the signal product, not your code or model internals.
Move from research notebooks and disconnected scripts into a quant workflow that actually reaches validation, live deployment, and recurring revenue.