Benchmarks

Latest find-on-screen benchmark results, scenario strategy, and published artifacts rendered into the guide shell.

Latest Run

Generated: `seed:20260304`
Platform: `linux/amd64` on `AMD EPYC 7763 64-Core Processor`
Target: `BenchmarkFindOnScreenE2E` with benchtime `200ms` and count `1`
Scenario Corpus: `10` scenario types across `4` resolution groups
Key Findings: Most accurate: `kaze` at `50.8%` success. Fastest: `orb` at `224.177` ms/op (though with `0.0%` success). Lowest false-positive rate: `template` at `0.0%`.
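The `seed:20260304` stamp implies the run is deterministic. A minimal sketch of how a seeded generator keeps a scenario corpus reproducible in Go; the seed value comes from the report above, but the helper and the scenario names are hypothetical illustrations, not the actual harness:

```go
package main

import (
	"fmt"
	"math/rand"
)

// pickScenarios draws n scenario names with a fixed seed, so the same
// seed always yields the same corpus order. The pool names are
// illustrative, not the real scenario types.
func pickScenarios(seed int64, n int) []string {
	pool := []string{"icon", "button", "text-label", "dialog", "toolbar",
		"menu-item", "checkbox", "slider", "tab", "tooltip"}
	r := rand.New(rand.NewSource(seed))
	r.Shuffle(len(pool), func(i, j int) { pool[i], pool[j] = pool[j], pool[i] })
	return pool[:n]
}

func main() {
	a := pickScenarios(20260304, 4)
	b := pickScenarios(20260304, 4)
	// The same seed reproduces the same selection on every run.
	fmt.Println(a)
	fmt.Println(b)
}
```

With a fixed seed, two builds of the corpus agree element for element, which is what makes a stamped benchmark run comparable across re-executions.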

Engine Snapshot

| Engine | Cases | Avg ms/op | Median ms/op | Success % | False Positive % | No Match % |
|---|---|---|---|---|---|---|
| akaze | 120 | 503.831 | 419.450 | 31.7 | 3.3 | 65.0 |
| brisk | 66 | 1152.434 | 1072.305 | 48.5 | 16.7 | 34.8 |
| hybrid | 96 | 461.403 | 413.654 | 45.8 | 7.3 | 46.9 |
| kaze | 120 | 2595.424 | 2122.136 | 50.8 | 7.5 | 41.7 |
| orb | 9 | 224.177 | 222.929 | 0.0 | 22.2 | 77.8 |
| sift | 120 | 676.725 | 492.477 | 45.0 | 9.2 | 45.8 |
| template | 99 | 451.781 | 399.972 | 43.4 | 0.0 | 56.6 |

Charts

Performance chart

Accuracy chart

Resolution time chart

Resolution matches chart

Artifact Map

| Artifact | Purpose | Link |
|---|---|---|
| Overview | Current benchmark summary at the section root. | Open |
| Reports Hub | Artifact map for the current benchmark run. | Open |
| Detailed E2E | Full engine, resolution, and scenario breakdown. | Open |
| Benchmark JSON | Machine-readable benchmark summary. | Open |
| Benchmark Text | Raw `go test` benchmark output. | Open |
| Scenario Strategy | Scenario corpus and engine-selection rationale. | Open |
| Strategy JSON | Machine-readable strategy summary. | Open |
| Visual Gallery | Generated benchmark screenshots and summaries. | Open |
| Scenario Intent | What each scenario is intended to prove. | Open |
| Scenario Schema | Manifest schema and region workflow. | Open |

Scenario Corpus

| Document | Purpose | Link |
|---|---|---|
| Intent | Why each scenario exists and what it should stress. | Open |
| Schema | Manifest structure, region-selection flow, and validation inputs. | Open |