Developer Details

This page is for contributors who want to modify algorithms, add engines, or extend the project.

# create/sync venv with dev + exts
uv sync --python 3.13 --dev --extra exts

# install editable local packages into the active venv
uv pip install -e ./py_ballisticcalc.exts
uv pip install -e .

# activate & test
source .venv/bin/activate   # Linux/macOS
# .\.venv\Scripts\activate  # Windows
python -m pytest tests --engine="rk4_engine"

Notes:

  • The repo includes a sitecustomize.py that disables user site-packages and warns if you are not using the local .venv, to prevent stale/external packages from shadowing your build.
  • If you prefer pip, running python -m pip install -e ./py_ballisticcalc.exts (then python -m pip install -e .) works fine once the venv is activated.
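The venv guard mentioned above can be sketched roughly as follows. This is a hypothetical simplification for illustration; the repository's actual sitecustomize.py is the authoritative version, and the function name here is made up:

```python
import os
import sys

def warn_if_not_local_venv(repo_root: str) -> bool:
    """Print a warning and return True if the active interpreter
    is not the repo-local .venv (sketch; names are illustrative)."""
    expected = os.path.realpath(os.path.join(repo_root, ".venv"))
    if os.path.realpath(sys.prefix) != expected:
        print(f"warning: expected venv at {expected}; "
              "stale external packages may shadow your build",
              file=sys.stderr)
        return True
    return False

# Disabling user site-packages keeps ~/.local installs from
# shadowing the editable build.
os.environ.setdefault("PYTHONNOUSERSITE", "1")
```

The key idea is that sys.prefix points at the active environment's root, so comparing it against the expected .venv path detects when a different interpreter is in use.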

CI and uv.lock

Development dependencies and reproducible developer/CI installs are pinned in uv.lock.

  • This lockfile is for maintainers and CI reproducibility; it is not used by library consumers who install via pip/pyproject.
  • If you use uv for environment management, run uv sync --dev (optionally with --extra exts to install the Cython subproject) to produce the locked environment used by CI.

Code locations & responsibilities

  • py_ballisticcalc/ — core Python package.
    • engines/ — Python engine implementations and TrajectoryDataFilter.
    • trajectory_data.py — BaseTrajData, TrajectoryData, HitResult, TrajFlag, interpolation helpers.
    • conditions.py, munition.py — shot and environment objects.
    • drag_model.py, drag_tables.py — drag lookup and interpolation.
  • py_ballisticcalc.exts/ — Cython subproject.
    • py_ballisticcalc_exts/base_engine.pyx — Cython wrapper that orchestrates C-layer stepping and defers event logic to Python.
    • py_ballisticcalc_exts/rk4_engine.pyx, euler_engine.pyx — Cython engine implementations.
    • py_ballisticcalc_exts/cy_bindings.pyx/.pxd — helper functions for bridging C structs.

How engines are wired

Public call flow (simplified):

  1. Calculator.fire() calls engine.integrate().
  2. BaseIntegrationEngine.integrate() converts units and calls the engine's _integrate(), which feeds each step through a TrajectoryDataFilter.
  3. _integrate() returns a HitResult consisting of TrajectoryData rows together with post-processing functions.
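The call flow above can be sketched structurally in Python. The class and method names (Calculator.fire, integrate, _integrate, TrajectoryDataFilter, HitResult) come from the project, but the bodies here are stand-ins; the real signatures differ in detail:

```python
from dataclasses import dataclass, field

@dataclass
class TrajectoryDataFilter:
    """Collects the rows emitted by the stepping loop (stand-in)."""
    rows: list = field(default_factory=list)

    def record(self, step):
        self.rows.append(step)

@dataclass
class HitResult:
    rows: list

class BaseIntegrationEngine:
    def integrate(self, shot):
        data_filter = TrajectoryDataFilter()
        # 1. convert units (omitted), 2. run the concrete stepping loop
        self._integrate(shot, data_filter)
        # 3. package the filtered rows for post-processing
        return HitResult(data_filter.rows)

    def _integrate(self, shot, data_filter):
        raise NotImplementedError

class RK4Engine(BaseIntegrationEngine):
    def _integrate(self, shot, data_filter):
        for t in range(3):  # placeholder for the real RK4 loop
            data_filter.record(t)

class Calculator:
    def __init__(self, engine):
        self.engine = engine

    def fire(self, shot):
        return self.engine.integrate(shot)
```

The point of the split is that subclasses only implement the stepping loop (_integrate), while unit conversion and result packaging stay in the shared base class.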

Testing & parity

  • The project runs many parity tests that assert identical results between the Python and Cython engines. When adding features, run the whole test suite against each engine using the --engine="engine_name" argument.
  • Focus tests on:
    • Event parity (ZERO_UP/ZERO_DOWN/MACH/APEX) and interpolation accuracy.
    • Search functions (find_zero_angle, find_max_range, find_apex).
    • Dense output correctness (HitResult.base_data) and shape.
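The shape of a parity check can be sketched as below. This is illustrative only: the stand-in "engines" here are trivial functions, and the real suite selects the engine via pytest's --engine argument rather than a dictionary:

```python
import math

def integrate_python(x):
    """Stand-in for the Python reference engine."""
    return math.sin(x)

def integrate_cython(x):
    """Stand-in for the Cython engine under test."""
    return math.sin(x)

ENGINES = {"python": integrate_python, "cython": integrate_cython}

def check_engine_parity(x=0.5, rel_tol=1e-12):
    """Assert every engine matches the Python reference within tolerance."""
    reference = integrate_python(x)
    for name, engine in ENGINES.items():
        assert math.isclose(engine(x), reference, rel_tol=rel_tol), name
    return True
```

The tight relative tolerance is the important part: parity tests should catch any drift in event interpolation or stepping between the two implementations, not just gross errors.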

Benchmarking

scripts/benchmark.py checks execution speed on two standardized scenarios named Trajectory and Zero.

Note

If you are contemplating work that could affect performance, run benchmark.py before modifying any code to establish a baseline, then re-run it afterwards to confirm whether your changes affected performance.

# Run benchmarks on all engines:
uv run python scripts/benchmark.py --all

# Run benchmarks on specific engine:
uv run python scripts/benchmark.py --engine="rk4_engine"

Understanding benchmark results

The benchmark numbers are only meaningful for comparing different versions of the project run on the same computer under the same operating conditions (i.e., the same processor and memory availability).

Each benchmark run will be logged to ./benchmarks/benchmarks.csv, which will contain a row for each engine and scenario, with the following columns:

  • timestamp — when the run occurred.
  • version — project version (as listed in pyproject.toml).
  • branch — branch name reported by git (if any).
  • git_hash — short commit hash reported by git.
  • case — which scenario was run (Trajectory or Zero).
  • engine — which engine was run.
  • repeats — how many iterations of the case were run to determine runtime statistics.
  • mean_ms — average runtime (in milliseconds) for the case.
  • stdev_ms — standard deviation of the observed runtimes.
  • min_ms — fastest runtime observed.
  • max_ms — slowest runtime observed.

The key statistic to look at is mean_ms. The other three statistics are useful for validating that figure and detecting benchmarking problems. Ideally:

  • stdev_ms should be very small relative to mean_ms. If it is not, check for other processes that could be consuming compute during the benchmarks and disable them. Alternatively, increase the number of iterations by passing a larger --repeats argument. (More samples reduce the standard error of the mean.)
  • min_ms and max_ms should be close to mean_ms. If max_ms is much larger than mean_ms, other processes may be competing for compute during the benchmark run, or you may need a longer warmup, which you can set with the --warmup argument.
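Comparing two commits can be done straight from the CSV. A minimal sketch, assuming only the column layout documented above (the function names and file path here are illustrative, not part of the project):

```python
import csv
import statistics

def load_rows(path):
    """Read benchmarks.csv into a list of dicts, one per row."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def mean_ms(rows, case, engine, git_hash):
    """Average mean_ms across all runs matching a case/engine/commit."""
    matches = [float(r["mean_ms"]) for r in rows
               if r["case"] == case and r["engine"] == engine
               and r["git_hash"] == git_hash]
    return statistics.mean(matches)

def speedup(rows, case, engine, baseline_hash, candidate_hash):
    """Ratio > 1.0 means the candidate commit is faster than the baseline."""
    return (mean_ms(rows, case, engine, baseline_hash)
            / mean_ms(rows, case, engine, candidate_hash))
```

For example, a baseline mean of 10 ms against a candidate mean of 5 ms yields a speedup of 2.0.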

Cython notes & common pitfalls

  • Cython is used only for performance-critical numeric loops. Keep higher-level semantics in Python to avoid code duplication and subtle parity issues.
  • Common Cython pitfalls observed in this codebase:
    • Indentation and cdef scoping errors — ensure cdef declarations live at the top of a C function or appropriate scope.
    • Avoid using Python booleans when declaring typed C variables (use bint and 0/1 assignment in the C context).
    • Keep initialisation of C structs and memory allocation clear; release resources in _free_trajectory.
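The first two pitfalls can be illustrated with a toy .pyx fragment. This is not from the codebase; it only shows the declaration style that avoids those errors:

```cython
# Toy fragment (not from the codebase) illustrating the first two pitfalls.
cdef double accumulate(double[::1] values) nogil:
    # cdef declarations belong at the top of the function scope,
    # not inside loops or conditionals.
    cdef Py_ssize_t i
    cdef double total = 0.0
    cdef bint saw_negative = 0   # bint assigned 0/1, not Python True/False

    for i in range(values.shape[0]):
        if values[i] < 0:
            saw_negative = 1
        total += values[i]
    return total
```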

Build / test commands

# optional: install editable C extensions and main package
py -m pip install -e ./py_ballisticcalc.exts
py -m pip install -e .

# run a single test file
py -m pytest tests/test_exts_basic.py

# run full tests
py -m pytest

Where to ask questions

Open an issue on the repository with a minimal reproduction and a note about the engine(s) involved.