Adds get_extra_dns_servers(), add_extra_dns_server(), and the
high-level ensure_extra_dns_server(), which registers a node and
enforces dns=yes + domain_check=yes in a single call. Also adds
a generic post() helper. 10 new tests, 141 total.
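The ensure-style call is a check-then-set sequence. The sketch below is a hypothetical stand-in: only the three method names come from the changelog; the Client class, its in-memory storage, and the call signatures are invented for illustration.

```python
class Client:
    """Toy stand-in for the real client; storage is a plain dict."""

    def __init__(self):
        self._servers: dict[str, dict[str, str]] = {}

    def get_extra_dns_servers(self) -> dict[str, dict[str, str]]:
        return self._servers

    def add_extra_dns_server(self, host: str, **settings: str) -> None:
        self._servers[host] = dict(settings)

    def ensure_extra_dns_server(self, host: str) -> None:
        """Idempotent: register `host` if unknown, then force the two
        flags named in the changelog (dns=yes, domain_check=yes)."""
        if host not in self.get_extra_dns_servers():
            self.add_extra_dns_server(host)
        self._servers[host].update(dns="yes", domain_check="yes")
```

Calling ensure_extra_dns_server() twice with the same host leaves exactly one entry, which is what makes it safe to run on every startup.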
- Separate peer_sync.auth_username/password from the DA-facing credentials
so /internal/* uses its own basic auth; a compromised peer cannot push
zones or access the admin API
- Per-peer health tracking: consecutive failure count, degraded/recovered
log events at FAILURE_THRESHOLD (3) and on first successful contact after
degradation
- Gossip-lite mesh discovery: each sync pass calls /internal/peers on every
known peer and adds newly discovered node URLs automatically; a linear
chain of initial connections is sufficient to form a full mesh
- /internal/peers endpoint returns the node's live peer URL list
- Support DADNS_PEER_SYNC_PEER_N_URL/USERNAME/PASSWORD numbered env vars
for multi-peer env-var-only deployments (up to 9); original single-peer
DADNS_PEER_SYNC_PEER_URL retained for backward compatibility
- Fix dnspython silently relativizing in-zone FQDN targets to '@' by
calling rdata.to_text(origin=origin, relativize=False); CoreDNS MySQL
requires absolute FQDNs in RDATA and was serving '.' for any CNAME/MX
pointing to the zone apex
- Reorder write_zone to delete stale records before inserting new ones
so a brief NXDOMAIN is preferred over briefly serving duplicate records
- Rework save-queue batch loop: keep batch open until queue is empty
rather than closing after a fixed timeout, so sequential DA zone pushes
accumulate into a single batch
- Add managed_by='directadmin' to _ensure_zone_exists for new and
legacy NULL rows
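The gossip-lite discovery above can be simulated in a few lines. `discovery_pass` and `converge` are illustrative names, and `fetch_peers` stands in for the HTTP call to /internal/peers:

```python
def discovery_pass(my_url, known, fetch_peers):
    """One sync pass: merge every known peer's /internal/peers list."""
    discovered = set(known)
    for peer in known:
        discovered |= set(fetch_peers(peer))
    discovered.discard(my_url)  # never add ourselves
    return discovered


def converge(mesh, rounds=3):
    """Run discovery on every node for a few rounds (simulation only)."""
    for _ in range(rounds):
        snapshot = {node: set(peers) for node, peers in mesh.items()}
        mesh = {
            node: discovery_pass(node, peers, lambda p: snapshot[p])
            for node, peers in snapshot.items()
        }
    return mesh
```

With chain links configured pairwise (a-b, b-c, c-d), every node learns the full peer set within a couple of passes, which is why a linear chain of initial connections is enough to form a full mesh.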
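The numbered env-var scheme with the single-peer fallback and URL dedup might look like this; the function name and return shape are assumptions, only the variable names come from the changelog:

```python
import os


def load_peer_env(environ=os.environ):
    """Collect peers from DADNS_PEER_SYNC_PEER_N_URL/_USERNAME/_PASSWORD
    (N = 1..9), plus the original single-peer variables, deduplicating
    by URL so a peer never appears twice."""
    peers = []
    seen = set()

    def add(prefix):
        url = environ.get(f"{prefix}_URL")
        if url and url not in seen:
            seen.add(url)
            peers.append({
                "url": url,
                "username": environ.get(f"{prefix}_USERNAME", ""),
                "password": environ.get(f"{prefix}_PASSWORD", ""),
            })

    add("DADNS_PEER_SYNC_PEER")        # legacy single-peer form
    for n in range(1, 10):             # numbered peers 1..9
        add(f"DADNS_PEER_SYNC_PEER_{n}")
    return peers
```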
CoreDNS MySQL (cybercinch fork) expects '@' for zone-apex references in
record RDATA. Storing the full FQDN (e.g. 'ithome.net.nz.') caused CoreDNS
to strip the zone suffix and serve 'MX 0 .' / 'CNAME .' instead of the
correct apex target.
- Add _relativize_name(): converts the zone FQDN → '@', in-zone
  subdomains → a relative label, and leaves external FQDNs unchanged.
  Handles both already-relativized output from dnspython ($ORIGIN
  present) and absolute FQDNs when $ORIGIN is absent from the zone file.
- Replace _normalize_cname_data() with _relativize_name(); add
_normalize_mx_data(), _normalize_ns_data(), _normalize_srv_data() using
the same helper.
- _parse_zone_to_record_set() now normalizes MX, NS, SRV alongside CNAME.
- _ensure_zone_exists() sets managed_by='directadmin' on create and
back-fills NULL rows from pre-migration installs.
- Zone.managed_by changed to nullable=True to match ALTER TABLE migration
where existing rows have no value.
- schema/coredns_mysql.sql updated to reflect actual two-table schema with
managed_by column and migration comment.
- 11 new tests (130 total, all passing).
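A minimal string-level version of the relativization rule described above; the real _relativize_name() may differ in details:

```python
def relativize_name(name: str, origin: str) -> str:
    """Map the zone apex to '@', in-zone FQDNs to a relative label,
    and leave external FQDNs absolute (illustrative sketch)."""
    origin = origin.rstrip(".") + "."
    if not name.endswith("."):
        return name                        # already relative ($ORIGIN was present)
    if name == origin:
        return "@"                         # zone apex
    if name.endswith("." + origin):
        return name[: -len("." + origin)]  # in-zone: strip the zone suffix
    return name                            # external FQDN: keep absolute
```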
- directdnsonly/__main__.py: inserts the package dir into sys.path
  before importing main.py (which uses short-form relative imports),
  then calls main(); works for both `python -m directdnsonly` and the
  dadns script
- pyproject.toml: wire up `dadns` console script entry point
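The core of such a shim is a two-line sys.path fix before the import. This sketch assumes the layout described above; the run() wrapper is illustrative, not the actual code:

```python
import os
import sys

# main.py uses short-form imports (e.g. `import config` rather than
# `from . import config`), so the package directory itself must be on
# sys.path before main.py is loaded.
PKG_DIR = os.path.dirname(os.path.abspath(__file__))
if PKG_DIR not in sys.path:
    sys.path.insert(0, PKG_DIR)


def run():
    from main import main  # resolves against PKG_DIR, not the CWD
    main()
```

Because the path fix runs at module import time, it works the same whether the module is reached via `python -m` or a console-script entry point.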
The cybercinch fork's resilience features (cache fallback, health monitoring,
zero downtime, connection pooling) make it the best DNS backend regardless of
zone count — not just at 300+ zones. Update summary recommendation and
topology comparison "Best for" row to reflect this.
- PeerSyncWorker reads DADNS_PEER_SYNC_PEER_URL / _USERNAME / _PASSWORD env
vars to populate a single peer without a config file; deduped against any
config-file peers so the URL never appears twice
- 2 new tests (119 total, all passing)
- README: peer sync single-peer env var table; Topology C compose example
updated to use env vars only (no config file needed for two-node setup)
- README: document cybercinch/coredns_mysql_extend built-in file caching —
serves from cache during MySQL outages, eliminates per-query round-trips
- New NSDBackend: zone files + nsd-control reload, zone registration via
nsd.conf.d include file; mirrors BIND backend interface exactly
- BackendRegistry now supports type "nsd"; config defaults for nsd.zones_dir
and nsd.nsd_conf
- Dockerfile installs both NSD and BIND9 — entrypoint detects configured
backend type(s) and starts only the required daemon; CoreDNS MySQL
deployments start neither
- docker/nsd.conf: minimal NSD base config with remote-control and
zones.conf include
- entrypoint.sh: reads config file + env vars to determine which daemon
to start; runs nsd-control-setup on first boot
- 20 new NSD backend tests (117 total, all passing)
- README: Topology C (multi-instance + peer sync) documented as most robust
HA option; NSD config reference; updated topology comparison table;
NSD env-var-only compose examples; version 2.5.0
Migrate remaining session.query() calls in coredns_mysql.py to
select()/session.execute() style; update bulk delete to delete()
construct and count to func.count(); drop sessionmaker(bind=).
Update test fixtures and assertions to match.
Zero session.query() calls remaining across the entire codebase.
Configurable startup delay before the first reconciliation pass so that
multiple receivers behind a load balancer can be offset without relying
on container start order (which is lost on reboot). Set to half the
interval on the secondary receiver — e.g. interval 60m → delay 30m.
Default is 0 (no change to existing behaviour). Stop event is respected
during the delay so the worker shuts down cleanly even mid-wait.
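The delay-with-clean-shutdown behaviour hinges on threading.Event.wait() returning True when the event fires mid-wait. A sketch with invented class and attribute names; only the delay/stop semantics come from the text:

```python
import threading


class ReceiverWorker:
    """Illustrative delayed-start reconciliation loop."""

    def __init__(self, interval: float, startup_delay: float = 0.0):
        self.interval = interval
        self.startup_delay = startup_delay  # 0 = existing behaviour
        self.passes = 0
        self._stop = threading.Event()

    def run(self) -> None:
        # Event.wait() returns True if stop fired during the delay,
        # so the worker shuts down cleanly even mid-wait.
        if self.startup_delay and self._stop.wait(self.startup_delay):
            return
        while not self._stop.is_set():
            self.passes += 1  # one reconciliation pass
            if self._stop.wait(self.interval):
                return

    def stop(self) -> None:
        self._stop.set()
```

Offsetting two receivers is then just configuration: the secondary gets startup_delay = interval / 2, and neither depends on container start order.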
Cover memory profiles, zone-count thresholds, reload behaviour, and
throughput characteristics for BIND9, CoreDNS MySQL, NSD, and Knot DNS.
Call out NSD as the recommended lighter bundled alternative to BIND9
(~5-10 MB base, near-identical zone file format, same reload semantics)
and note the ~300-zone crossover where CoreDNS MySQL starts to win.
Expand both topology diagrams to show the retry queue and healing pass in
the flow. Add per-topology failure-behaviour tables covering transient backend
failure, prolonged outage, container-down-during-push, and cross-node drift.
Rewrite the comparison table to call out the key architectural difference:
Topology A has no auto-recovery from prolonged BIND failure (needs next DA push);
Topology B's reconciler healing pass re-syncs missing backends from stored
zone_data without any DA involvement.
- worker.py: third persistent retry queue with exponential backoff (30s→30m,
max 5 attempts); failed backends tracked per-item so retries target only the
failing nodes; zone_data stored in DB after every successful write
- Domain model: zone_data TEXT + zone_updated_at DATETIME columns; additive
migration applied on startup so existing deployments upgrade in place
- ReconciliationWorker: Option C healing pass — checks every configured backend
for zone presence after each reconciliation cycle and re-queues any zone
missing from a backend using stored zone_data, enabling automatic recovery
from prolonged backend outages without waiting for DirectAdmin to re-push
- 82 tests, all passing
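One plausible shape for the backoff schedule; plain doubling is an assumption here (with it, attempt 5 waits 480 s, so the 30-minute figure acts as a cap rather than a delay the five-attempt schedule actually reaches; the real multiplier may differ):

```python
# Retry schedule parameters as described: 30 s base, 30 min cap, 5 attempts.
BASE_DELAY = 30        # seconds
MAX_DELAY = 30 * 60    # 30 minutes
MAX_ATTEMPTS = 5


def backoff_delay(attempt: int) -> int:
    """Seconds to wait before retry number `attempt` (1-based)."""
    return min(BASE_DELAY * 2 ** (attempt - 1), MAX_DELAY)
```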
Dead code from v1 planning — never implemented, superseded by the
CoreDNS MySQL backend. Also carried a stale import that would have
caused an ImportError on load.
Move all outbound DirectAdmin HTTP logic out of ReconciliationWorker and
into a dedicated, independently testable DirectAdminClient class:
- directdnsonly/app/da/client.py: list_domains (paginated JSON + legacy
fallback), get (authenticated GET to any CMD_* endpoint), _login
(DA Evo session-cookie fallback), _parse_legacy_domain_list
- directdnsonly/app/da/__init__.py: public re-export of DirectAdminClient
- reconciler.py: now purely reconciliation logic; instantiates a client
per configured server — no HTTP code remaining
- tests/test_da_client.py: 16 dedicated tests for DirectAdminClient
- tests/test_reconciler.py: mocks at the DirectAdminClient class boundary
instead of the internal _fetch_da_domains method
Bumped to 2.2.0 — DirectAdminClient is now a first-class public API.
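The legacy fallback parser is essentially a parse_qs() call, assuming DirectAdmin's legacy list responses are url-encoded `list[]=` pairs; the function name mirrors the changelog, the body is illustrative:

```python
from urllib.parse import parse_qs


def parse_legacy_domain_list(body: str) -> list[str]:
    """Parse a legacy url-encoded list response such as
    'list[]=a.test&list[]=b.test' into a list of domain names."""
    return parse_qs(body).get("list[]", [])
```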
- entrypoint: only start named when a bind backend is configured and
enabled in app.yml; CoreDNS-only deployments skip named entirely
- config: user-supplied paths (/etc/directdnsonly, ./config) now
searched before the bundled app.yml so mounted configs take effect
- docs: deployment topology reference — Topology A (dual BIND HA) and
Topology B (single instance, multi-DC CoreDNS MySQL)
- chore: bump version to 2.1.0
- justfile: add build-docker recipe
Root config/ directory was removed when the duplicate config/app.yml was
deleted — the canonical config is now bundled inside directdnsonly/config/
and is already covered by the existing COPY directdnsonly step.
- poetry.lock: added `certifi` 2026.1.4, `charset-normalizer` 3.4.4,
  and `idna` 3.11; updated `requests` to 2.32.5 and `urllib3` to 2.6.3;
  recorded extras for `requests` and `urllib3`
- pyproject.toml: added `requests` as a direct dependency
- Migrated from setuptools to Poetry; added pyproject.toml, poetry.lock,
poetry.toml and .python-version (Python 3.11.12)
- Built out full directdnsonly Python package with BIND and CoreDNS MySQL
backends, CherryPy REST API, persist-queue worker, and vyper-based config
- Auth credentials now read from config/env (app.auth_username/password)
rather than hardcoded; override via DADNS_APP_AUTH_PASSWORD env var
- Added Dockerfile.deepseek: Python 3.11 slim + BIND9 + Poetry install
- Rewrote docker-compose.yml for local dev stack (MySQL + dadns services)
- Added SQL schema, docker/ BIND configs, justfile, tests, and README
- Expanded .gitignore for Poetry/Python project artifacts