
# Benchmarks

Latest find-on-screen benchmark results, scenario strategy, and published artifacts rendered into the guide shell.

## Latest Run

- Generated: `2026-03-07T23:32:15.506029+00:00`
- Platform: `darwin/arm64` on `Apple M4 Pro`
- Target: `BenchmarkFindOnScreenE2E` with benchtime `200ms` and count `1`
- Scenario corpus: `10` scenario types across `4` resolution groups
- Key findings: most accurate engine is `hybrid` at `57.5%` success; fastest is `orb` at `56.443` ms/op; lowest false-positive rate is `template` at `0.0%`

## Engine Snapshot

| Engine | Cases | Avg ms/op | Median ms/op | Success % | False Positive % | No Match % |
| --- | ---: | ---: | ---: | ---: | ---: | ---: |
| `akaze` | 120 | 172.121 | 147.695 | 32.5 | 2.5 | 65.0 |
| `brisk` | 120 | 388.483 | 123.118 | 39.2 | 8.3 | 52.5 |
| `hybrid` | 120 | 171.017 | 134.411 | 57.5 | 5.0 | 37.5 |
| `kaze` | 120 | 824.898 | 640.512 | 52.5 | 5.8 | 41.7 |
| `orb` | 120 | 56.443 | 44.794 | 10.8 | 9.2 | 80.0 |
| `sift` | 120 | 256.756 | 198.264 | 46.7 | 7.5 | 45.8 |
| `template` | 120 | 154.257 | 114.466 | 53.3 | 0.0 | 46.7 |

## Charts

- Performance chart
- Accuracy chart
- Resolution time chart
- Resolution matches chart

## Artifact Map

| Artifact | Purpose | Link |
| --- | --- | --- |
| Overview | Current benchmark summary at the section root. | Open |
| Reports Hub | Artifact map for the current benchmark run. | Open |
| Detailed E2E | Full engine, resolution, and scenario breakdown. | Open |
| Benchmark JSON | Machine-readable benchmark summary. | Open |
| Benchmark Text | Raw `go test` benchmark output. | Open |
| Scenario Strategy | Scenario corpus and engine-selection rationale. | Open |
| Strategy JSON | Machine-readable strategy summary. | Open |
| Visual Gallery | Generated benchmark screenshots and summaries. | Open |
| Scenario Intent | What each scenario is intended to prove. | Open |
| Scenario Schema | Manifest schema and region workflow. | Open |

## Scenario Corpus

| Document | Purpose | Link |
| --- | --- | --- |
| Intent | Why each scenario exists and what it should stress. | Open |
| Schema | Manifest structure, region-selection flow, and validation inputs. | Open |