Data and methodology

Where does the underlying data come from?

The oracle network collects listing data from 8-10 active darknet marketplaces. Each of the 10 independent oracle nodes operates its own scraping infrastructure, so coverage does not depend on any single source: if one marketplace goes offline or one node fails, the remaining marketplaces and nodes keep the system operating.

Raw data passes through several filtering stages, adapted from published academic methodologies, that remove fake listings, shill pricing, and statistical outliers before submission to the consensus mechanism.
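As a rough illustration of the statistical-outlier stage only (the actual filters are not specified here), one common approach from the academic literature is a median-absolute-deviation filter. The function name, threshold, and sample prices below are hypothetical:

```python
from statistics import median

def filter_outliers(prices, threshold=3.5):
    """Drop listings whose modified z-score exceeds `threshold` (a sketch,
    not the network's actual filter)."""
    med = median(prices)
    mad = median(abs(p - med) for p in prices)
    if mad == 0:
        # Degenerate case: most listings share one price; keep only those.
        return [p for p in prices if p == med]
    # 0.6745 rescales the MAD so the score is comparable to a z-score.
    return [p for p in prices if abs(0.6745 * (p - med) / mad) <= threshold]

# A shill-priced listing at 400 among listings clustered near 100 is removed.
clean = filter_outliers([100, 102, 98, 105, 99, 400])
```

A robust statistic like the MAD is preferable to a mean/standard-deviation cutoff here because a handful of extreme shill prices would otherwise inflate the cutoff itself.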

How often is data updated?

Oracle nodes take 6 snapshots per day across all monitored marketplaces. The 10 binary bracket markets settle once every 24 hours, and new markets open immediately after settlement. The implied price estimate updates continuously throughout the day as participants trade.

What data do institutional subscribers receive?

Subscribers don't need to interact with the prediction markets directly. The data product extracts the implied price estimate (the p=0.5 crossover point), confidence intervals (derived from the probability distribution across the 10 brackets), and historical time series from the trading data. This is delivered via API, dashboards, and downloadable datasets.

How reliable is the data?

No data source for these markets is perfectly reliable, and we don't claim otherwise. What the system provides is a structured, incentive-aligned estimate that improves as participation grows. The oracle network's multi-source design and statistical filtering reduce the impact of any single bad data point. The prediction market layer adds a second level of error correction by rewarding participants who identify and trade against data inaccuracies.

We publish quality metrics for every settlement, including inter-node agreement scores, filtered observation counts, and confidence intervals. Academic partners can audit the full methodology.
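One simple form an inter-node agreement score could take, purely as a hypothetical sketch: the fraction of the 10 nodes whose submitted value falls within a tolerance band around the consensus median. The tolerance and node submissions below are made up for illustration:

```python
from statistics import median

def agreement_score(submissions, tolerance=0.05):
    """Fraction of node submissions within `tolerance` (relative) of the
    consensus median. A sketch, not the published metric's definition."""
    consensus = median(submissions)
    within = sum(1 for s in submissions
                 if abs(s - consensus) <= tolerance * consensus)
    return within / len(submissions)

# One node reporting 140 disagrees with the other nine clustered near 100.
score = agreement_score([101, 99, 100, 103, 98, 100, 102, 97, 100, 140])
```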