Build From Source

Start with Downloads, Getting Started: Installation, Golang API: Installation, or Contribution: Development Setup if you want the task-first entry points. This page is the deeper reference for building sikuli-go from source and publishing its artifacts.

Prerequisites

Install Workspace Dependencies

cd /path/to/sikuli-go
yarn install
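Before starting, it can help to confirm the core build tools are on PATH. A minimal preflight sketch; the tool list (go, yarn, make) is inferred from the commands in this guide, not an official requirement list:

```shell
# Preflight sketch: check that each assumed build tool is on PATH.
# The tool list is inferred from the commands in this guide.
check_tools() {
  missing=0
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || { echo "missing: $tool"; missing=1; }
  done
  return "$missing"
}
check_tools go yarn make && echo "all build tools found" || echo "install the missing tools first"
```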

Build Go API Binaries

Build sikuli-go:

cd packages/api
go build -tags "gosseract opencv gocv_specific_modules gocv_features2d gocv_calib3d" -trimpath -ldflags="-s -w" -o ../../sikuli-go ./cmd/sikuli-go

Build sikuli-go-monitor:

cd packages/api
go build -tags "gosseract opencv gocv_specific_modules gocv_features2d gocv_calib3d" -trimpath -ldflags="-s -w" -o ../../sikuli-go-monitor ./cmd/sikuli-go-monitor
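The two invocations above differ only in the target name, so a wrapper can keep the tag set in one place. A dry-run sketch that prints both build commands (flags and tags are copied verbatim from the steps above; remove the leading echo to actually build):

```shell
# Dry-run sketch: print both build commands with one shared tag set.
# Flags and tags match the two go build invocations above.
TAGS="gosseract opencv gocv_specific_modules gocv_features2d gocv_calib3d"
for bin in sikuli-go sikuli-go-monitor; do
  echo go build -tags "$TAGS" -trimpath -ldflags="-s -w" -o "../../$bin" "./cmd/$bin"
done
```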

Build Protocol Artifacts

./scripts/generate-grpc-stubs.sh
./scripts/clients/generate-node-stubs.sh
./scripts/clients/generate-python-stubs.sh
./scripts/clients/generate-lua-descriptor.sh
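The four generators can be driven from one loop. A dry-run sketch (script paths are taken from the list above; replace the echo with `"$script"` to execute them for real):

```shell
# Dry-run sketch: walk the four generator scripts in order and print
# what would run. Paths come from the list above.
stub_scripts="./scripts/generate-grpc-stubs.sh \
./scripts/clients/generate-node-stubs.sh \
./scripts/clients/generate-python-stubs.sh \
./scripts/clients/generate-lua-descriptor.sh"
for script in $stub_scripts; do
  echo "would run: $script"
done
```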

Build Node Client

yarn workspace @sikuligo/sikuli-go build

Build Python Distributions

./scripts/clients/release-python-client.sh

Skip installer steps:

SKIP_INSTALL=1 ./scripts/clients/release-python-client.sh
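A hypothetical sketch of how a SKIP_INSTALL guard of this kind typically works inside such a script; the actual logic in release-python-client.sh may differ:

```shell
# Hypothetical sketch of a SKIP_INSTALL guard; the real
# release-python-client.sh may implement this differently.
SKIP_INSTALL="${SKIP_INSTALL:-0}"
if [ "$SKIP_INSTALL" = "1" ]; then
  echo "SKIP_INSTALL=1: skipping installer steps"
else
  echo "running installer steps"
fi
```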

Build Everything (Convenience)

make

Publish GitHub Docs

Build docs locally and publish the generated site to the gh-pages branch:

make gh-publish


Local End-to-End Verification

Run the local verifier:

make test-publish

Optional: verify using the packed-tarball install flow (the closest match to installing the published package):

VERIFY_PACKED_INSTALL=1 make test-publish

Run the full local integration suite:

make test-integration

test-integration runs everything test-publish covers, plus the full local integration checks.

Run optional real-desktop E2E directly:

REAL_DESKTOP_E2E=1 make test-e2e

Select a specific monitor/display for capture (useful on multi-monitor setups):

REAL_DESKTOP_E2E=1 REAL_DESKTOP_E2E_DISPLAY=2 make test-e2e

Find Benchmark E2E

Benchmark FindOnScreen across matcher implementations (template, orb, hybrid) and multiple fixture scenarios:

Run:

make benchmark

Useful options:

FIND_BENCH_TIME=500ms FIND_BENCH_COUNT=3 make benchmark
FIND_BENCH_TAGS="opencv gocv_specific_modules gocv_features2d gocv_calib3d" make benchmark
FIND_BENCH_TAGS="" make benchmark
FIND_BENCH_REPORT_DIR=.test-results/custom make benchmark
FIND_BENCH_VISUAL=1 FIND_BENCH_VISUAL_MAX_ATTEMPTS=2 make benchmark
FIND_BENCH_VISUAL_DIR=.test-results/bench/custom-visuals FIND_BENCH_VISUAL_TIMEOUT=8s make benchmark
FIND_BENCH_PATCH_READMES=1 FIND_BENCH_README_PATHS="$PWD/README.md,$PWD/packages/client-node/README.md,$PWD/packages/client-python/README.md" make benchmark
FIND_BENCH_PATCH_READMES=1 FIND_BENCH_README_INLINE_IMAGES=4 FIND_BENCH_README_SECTION_TITLE="Latest Benchmark Evidence" make benchmark
FIND_BENCH_PATCH_READMES=0 make benchmark
FIND_BENCH_TEST_TIMEOUT=120m make benchmark
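The options above combine freely in a single invocation. A dry-run sketch assembling one combined environment (variable names come from the option list above; the values are illustrative):

```shell
# Dry-run sketch: combine several benchmark options into one invocation.
# Variable names are from the option list above; values are illustrative.
bench_env="FIND_BENCH_TIME=500ms FIND_BENCH_COUNT=3 FIND_BENCH_REPORT_DIR=.test-results/custom FIND_BENCH_VISUAL=1"
echo "env $bench_env make benchmark"   # drop the echo to actually run it
```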

By default, the benchmark runs with OpenCV-related tags enabled so orb and hybrid implementations can be compared.

README patching is enabled by default. On each benchmark run, the script appends or updates an autogenerated section at the bottom of the root README and the Node.js and Python client READMEs with the latest benchmark results.

This behavior is controlled by the FIND_BENCH_PATCH_READMES, FIND_BENCH_README_PATHS, FIND_BENCH_README_INLINE_IMAGES, and FIND_BENCH_README_SECTION_TITLE variables shown above.

Optional OCR-Tagged Tests

cd packages/api
go test -tags gosseract ./...
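To toggle the OCR tests from one entry point, a small wrapper can pick the test command. A sketch; the gosseract tag is from the command above, while the WITH_OCR switch itself is a hypothetical name, not part of the repo:

```shell
# Sketch: choose the go test command via a hypothetical WITH_OCR switch.
# Run the resulting command from packages/api.
WITH_OCR="${WITH_OCR:-1}"
if [ "$WITH_OCR" = "1" ]; then
  test_cmd="go test -tags gosseract ./..."
else
  test_cmd="go test ./..."
fi
echo "$test_cmd"
```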