
Methodology

Last updated September 5, 2025

This page explains how Finaur creates educational trading frameworks, how we run backtests and simulations, how the Discipline Score is computed, and how we publish results with integrity and limits. Read this together with our Disclaimer, Terms, and Privacy Policy.

Entity

Finaur Labs, Spain. Contact: hi@finaur.com. Educational purpose only; not investment advice.

Document

Version v1.0

Reviewed September 5, 2025

Transparency

  • Publish both favorable and unfavorable periods
  • Integrity hash on reports
  • Reproducible parameter records

Contact

Questions about methodology

Email hi@finaur.com

#Scope and position

  • Finaur provides education and analytics, not investment advice and not portfolio management
  • Frameworks are models and scenarios for learning and research
  • Users make their own decisions; there is no order routing, no custody, and no execution

#Data and sources

  • Primary market data comes from vendors covering equities, crypto, and indexes; the vendors are listed on each strategy page
  • Prices are adjusted for corporate actions where applicable; splits and dividends are included in equities backtests
  • Timestamps are recorded in UTC and converted in the interface for readability
  • Missing data points are handled with clear rules, either exclusion or forward fill, documented per strategy (see the sketch below)
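As a concrete illustration, the sketch below shows the two documented gap rules in Python with pandas. The function name handle_gaps and the three-bar fill limit are assumptions for the example, not the production engine.

```python
import pandas as pd

def handle_gaps(prices: pd.Series, rule: str = "ffill", max_gap: int = 3) -> pd.Series:
    """Apply a per-strategy gap rule: forward fill short gaps or exclude missing bars."""
    if rule == "ffill":
        # Carry the last observed price forward, but never across more than
        # max_gap consecutive missing bars (a placeholder threshold).
        return prices.ffill(limit=max_gap)
    if rule == "exclude":
        # Drop missing bars entirely; downstream statistics see only observed data.
        return prices.dropna()
    raise ValueError(f"unknown gap rule: {rule}")
```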

#Universes and calendars

  • Equities universes target survivorship-bias-free sets where available; selection rules appear in strategy details
  • Crypto uses continuous calendars; exchanges and symbols are listed per strategy
  • Trading calendars respect official market holidays for equities and continuous time for crypto (see the sketch below)
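A minimal calendar sketch, assuming a placeholder holiday set; the real engine follows the official market calendars referenced above.

```python
from datetime import date, timedelta

HOLIDAYS = {date(2025, 1, 1), date(2025, 12, 25)}  # placeholder entries only

def trading_days(start: date, end: date, market: str):
    """Yield tradable dates: every day for crypto, business days minus holidays for equities."""
    d = start
    while d <= end:
        if market == "crypto" or (d.weekday() < 5 and d not in HOLIDAYS):
            yield d
        d += timedelta(days=1)
```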

#Costs and slippage

  • Backtests apply transaction costs and slippage, with default values stated on each strategy page (illustrated in the sketch below)
  • Sensitivity analysis appears where relevant to show the effect of higher or lower costs
  • All cost figures are educational estimates; real trading conditions vary across brokers and venues
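A hedged sketch of how a per-fill cost model can be applied; the basis-point defaults here are placeholders, not the published defaults on strategy pages.

```python
def net_fill_price(mid_price: float, side: str,
                   slippage_bps: float = 5.0,
                   commission_bps: float = 2.0) -> float:
    """Worsen the fill price by slippage plus commission, direction-aware."""
    drag = (slippage_bps + commission_bps) / 10_000
    return mid_price * (1 + drag) if side == "buy" else mid_price * (1 - drag)
```

Rerunning the same backtest with the drag halved and doubled is one simple way to produce the sensitivity analysis mentioned above.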

#Backtesting method

  • Walk-forward evaluation with train and test splits; no look-ahead usage (see the sketch below)
  • Signals use only information available at decision time; entry and exit timing rules are documented
  • Order of operations is consistent, for example universe filter, then ranking, then position sizing
  • Randomness is controlled via a seed where random selection is used; seed values appear in result metadata
  • All parameter sets are recorded; the exact set used for the public chart is visible on the page
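The splitter below is a minimal sketch of walk-forward evaluation: fit on a trailing window, test on the next block, then roll forward. The 252-bar train and 63-bar test window sizes are illustrative assumptions.

```python
def walk_forward_splits(n_bars: int, train: int = 252, test: int = 63):
    """Yield (train_range, test_range) index pairs with no look-ahead overlap."""
    start = 0
    while start + train + test <= n_bars:
        yield (range(start, start + train),
               range(start + train, start + train + test))
        start += test  # roll forward by exactly one test block

for train_idx, test_idx in walk_forward_splits(n_bars=1000):
    pass  # fit parameters on train_idx, record out-of-sample results on test_idx
```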

#Metrics and definitions

Metric | Definition
CAGR | Compound annual growth rate over the backtest period
Max drawdown | Maximum peak-to-trough decline on the equity curve
Sharpe | Annualized excess return divided by annualized volatility, with the risk-free rate set to the stated assumption
Win rate | Percentage of closed trades with a positive result, before or after costs as labeled
Trades | Count of executed positions in the period; excludes partial fills in the simplified engine
Time in market | Share of the period in which the strategy maintains exposure greater than zero
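The definitions above map directly to code. A minimal sketch with NumPy, assuming daily bars (252 periods per year) and an equity curve that starts at a positive value:

```python
import numpy as np

def cagr(equity: np.ndarray, periods_per_year: int = 252) -> float:
    """Compound annual growth rate over the backtest period."""
    years = (len(equity) - 1) / periods_per_year
    return (equity[-1] / equity[0]) ** (1 / years) - 1

def max_drawdown(equity: np.ndarray) -> float:
    """Most negative peak-to-trough decline on the equity curve."""
    peaks = np.maximum.accumulate(equity)
    return float(np.min(equity / peaks - 1))

def sharpe(returns: np.ndarray, rf_annual: float = 0.0,
           periods_per_year: int = 252) -> float:
    """Annualized excess return divided by annualized volatility."""
    excess = returns - rf_annual / periods_per_year
    return float(np.mean(excess) / np.std(excess) * np.sqrt(periods_per_year))
```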

#Strategy page standard

  • Hero chart first, equity curve versus benchmark with time range controls
  • Stats panel with CAGR, max drawdown, Sharpe, win rate, trades, time in market
  • Method and rules section in clear plain language, with a parameter table
  • Risk notes and known failure modes, with typical adverse conditions listed in plain terms
  • Assumptions block with costs, slippage, universe, calendar, and data vendors
  • Download links where available, report files carry integrity hashes

#Simulator behavior

  • Paper simulation mirrors the backtest engine rules; fills use the same price formation logic stated on the page
  • No live order routing and no execution; the simulator is for education and practice
  • Parameter changes are recorded; each run shows the parameter set and timestamp

#Discipline Score

  • Weekly score from zero to one hundred that reflects plan adherence, stop integrity, sizing variance, overtrading index, and hygiene completion
  • Weights and guards appear on the Discipline Score page and in release notes, versioned in the user record (a sketch with placeholder weights follows this list)
  • Top drivers and one practice suggestion are stored with each weekly score for learning and coaching
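A sketch of the weighted scoring shape, assuming each component is already normalized to [0, 1]. The weights below are placeholders; the published weights and guards live on the Discipline Score page and its release notes.

```python
# Placeholder weights: the real values are published separately and versioned.
WEIGHTS = {
    "plan_adherence": 0.30,
    "stop_integrity": 0.25,
    "sizing_variance": 0.15,
    "overtrading_index": 0.15,
    "hygiene_completion": 0.15,
}

def discipline_score(components: dict[str, float]) -> float:
    """Weighted sum of [0, 1] components, clamped and scaled to 0-100."""
    raw = sum(WEIGHTS[name] * components[name] for name in WEIGHTS)
    return round(100 * min(max(raw, 0.0), 1.0), 1)
```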

#Bias detection

  • Heuristics track revenge trading, fear of missing out, hesitation, premature exit, size creep, and stop dragging (one heuristic is sketched below)
  • Signals derive from the gap between plan and execution, timing clusters, frequent stop edits, and exits without triggers
  • Bias events are recorded with timestamps and meta fields, visible in the journal and reports
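To make one heuristic concrete, here is an illustrative detector for stop dragging: repeated edits that move a stop away from price. The StopEdit record and the edit threshold are assumptions for the sketch, not the production rules.

```python
from dataclasses import dataclass

@dataclass
class StopEdit:
    timestamp: str  # UTC, as recorded in the journal
    old_stop: float
    new_stop: float
    side: str  # "long" or "short"

def flags_stop_dragging(edits: list[StopEdit], max_loosening_edits: int = 2) -> bool:
    """Flag a trade when more than max_loosening_edits edits widen the stop."""
    loosened = sum(
        1 for e in edits
        if (e.side == "long" and e.new_stop < e.old_stop)
        or (e.side == "short" and e.new_stop > e.old_stop)
    )
    return loosened > max_loosening_edits
```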

#Publication and integrity

  • Each report lists the author, reviewer, publish time, methodology version, and an integrity hash (see the sketch below)
  • The archive keeps all reports; nothing is deleted for result reasons, and corrections are logged with new hashes
  • Scoreboard aggregates outcomes by month and compares with public benchmarks where applicable
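A minimal sketch of an integrity hash, assuming SHA-256 over the exact bytes of the published file; any later change to the file yields a different digest.

```python
import hashlib

def report_hash(path: str) -> str:
    """SHA-256 hex digest of a report file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

Anyone holding the file can recompute the digest and compare it with the one printed on the report.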

#Assumptions and limits

  • Backtests are historical simulations; real trading conditions differ because of liquidity, slippage, fees, taxes, and human factors
  • Data quality and survivorship handling vary by market and vendor; we document known gaps on each strategy page
  • Results are educational, not predictive; past performance is not a reliable indicator of future outcomes

#Versioning and change log

  • Strategies carry a name, a semantic version, and a parameter snapshot
  • Discipline Score has a version label stored with each user week
  • This Methodology page uses the version at the top of the document and lists material changes in the change log below

Change log

  • September 5, 2025: initial public version and alignment across strategy pages and the scoreboard

#Reproducibility

  • Backtest runs store parameters, universe definition, calendar, cost model, data snapshot identifiers, and the random seed when used (a run-record sketch appears below)
  • Public artifacts include report integrity hashes; the same inputs recreate the same outputs under the same engine version
  • Where we correct a dataset, we publish a new run and note the change in the archive
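A sketch of what such a run record can look like; every field name and value below is illustrative. Hashing the canonical JSON gives a stable identifier for the run.

```python
import hashlib
import json

run_record = {
    "strategy": "example-strategy",  # illustrative name
    "engine_version": "1.0.0",
    "parameters": {"lookback": 90, "top_n": 10},
    "universe": "example-universe",
    "calendar": "equities-official-holidays",
    "cost_model": {"slippage_bps": 5.0, "commission_bps": 2.0},
    "data_snapshot": "vendor-snapshot-id",
    "seed": 42,
}

# Sorted keys make the JSON canonical, so the same inputs hash identically.
run_id = hashlib.sha256(
    json.dumps(run_record, sort_keys=True).encode()
).hexdigest()
```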

#Contact

Finaur Labs

Spain

Email hi@finaur.com