feat: add NSD backend and Topology C (multi-instance with peer sync) 🏗️

- New NSDBackend: zone files + nsd-control reload, zone registration via nsd.conf.d include file; mirrors the BIND backend interface exactly
- BackendRegistry now supports type "nsd"; config defaults for nsd.zones_dir and nsd.nsd_conf
- Dockerfile installs both NSD and BIND9 — entrypoint detects configured backend type(s) and starts only the required daemon; CoreDNS MySQL deployments start neither
- docker/nsd.conf: minimal NSD base config with remote-control and zones.conf include
- entrypoint.sh: reads config file + env vars to determine which daemon to start; runs nsd-control-setup on first boot
- 20 new NSD backend tests (117 total, all passing)
- README: Topology C (multi-instance + peer sync) documented as most robust HA option; NSD config reference; updated topology comparison table; NSD env-var-only compose examples; version 2.5.0
Dockerfile (39 lines changed)
@@ -1,16 +1,22 @@
 FROM python:3.11.12-slim
 
-# Install system dependencies
-RUN apt-get update && apt-get install -y \
+# Install system dependencies.
+# Both NSD and BIND are installed so the image works with any DNS backend type.
+# The entrypoint detects which one is configured and starts only that daemon.
+# CoreDNS MySQL users: neither daemon is started — the image is still usable.
+RUN apt-get update && apt-get install -y --no-install-recommends \
     bind9 \
     bind9utils \
+    nsd \
     dnsutils \
     gcc \
     python3-dev \
     default-libmysqlclient-dev \
     && rm -rf /var/lib/apt/lists/*
 
-# Configure BIND
+# ---------------------------------------------------------------------------
+# BIND setup
+# ---------------------------------------------------------------------------
 RUN mkdir -p /etc/named/zones && \
     chown -R bind:bind /etc/named && \
     chmod 755 /etc/named/zones
@@ -19,31 +25,34 @@ COPY docker/named.conf.local /etc/bind/
 COPY docker/named.conf.options /etc/bind/
 RUN chown root:bind /etc/bind/named.conf.*
 
-# Install Python dependencies
+# ---------------------------------------------------------------------------
+# NSD setup
+# ---------------------------------------------------------------------------
+RUN mkdir -p /etc/nsd/zones /etc/nsd/nsd.conf.d && \
+    chown -R nsd:nsd /etc/nsd && \
+    chmod 755 /etc/nsd/zones
+
+COPY docker/nsd.conf /etc/nsd/nsd.conf
+RUN chown nsd:nsd /etc/nsd/nsd.conf
+
+# ---------------------------------------------------------------------------
+# Application
+# ---------------------------------------------------------------------------
 WORKDIR /app
 COPY pyproject.toml poetry.lock README.md ./
 
-# Install specific Poetry version that matches your lock file
-RUN pip install "poetry==2.1.2"  # Adjust version to match your lock file
+RUN pip install "poetry==2.1.2"
 
 # Copy application files
 COPY directdnsonly ./directdnsonly
 COPY schema ./schema
 
 RUN poetry config virtualenvs.create false && \
     poetry install
 
-
 # Create data directories
-RUN mkdir -p /app/data/queues && \
-    mkdir -p /app/data/zones && \
-    mkdir -p /app/logs && \
+RUN mkdir -p /app/data/queues /app/data/zones /app/logs && \
     chmod -R 755 /app/data
 
-# Configure BIND zone directory to match app config
-#RUN ln -s /app/data/zones /etc/named/zones/dadns
-
 # Start script
 COPY docker/entrypoint.sh /entrypoint.sh
 RUN chmod +x /entrypoint.sh
README.md (387 lines changed)
@@ -2,13 +2,13 @@
 
 ## Deployment Topologies
 
-Two reference topologies are documented below. Choose the one that matches your infrastructure.
+Three reference topologies are documented below. Choose the one that matches your infrastructure.
 
 ---
 
-### Topology A — Dual BIND Instances (High-Availability / Multi-Server)
+### Topology A — Dual NSD/BIND Instances (High-Availability / Multi-Server)
 
-Two independent DirectDNSOnly containers, each running a bundled BIND9 instance. Both are registered as Extra DNS servers in the same DirectAdmin Multi-Server environment, so DA pushes every zone change to both simultaneously.
+Two independent DirectDNSOnly containers, each running a bundled DNS daemon (NSD by default, or BIND9). Both are registered as Extra DNS servers in the same DirectAdmin Multi-Server environment, so DA pushes every zone change to both simultaneously.
 
 ```
 DirectAdmin Multi-Server
@@ -65,7 +65,7 @@ dns:
 ```yaml
 services:
   directdnsonly-1:
-    image: guisea/directdnsonly:2.3.0
+    image: guisea/directdnsonly:2.5.0
     ports:
       - "2222:2222"   # DA pushes here
       - "53:53/udp"   # authoritative DNS
@@ -154,23 +154,103 @@ Adding a third data centre is a single stanza in the config — no code changes
 ---
 
+### Topology C — Multi-Instance with Peer Sync (Most Robust)
+
+Multiple independent DirectDNSOnly containers, each with a single local DNS backend (NSD or CoreDNS MySQL), registered as separate Extra DNS servers in DirectAdmin Multi-Server. Peer sync provides eventual consistency — if one instance misses a DA push while it is offline, it recovers the missing zone data from a peer on the next sync interval.
+
+```
+DirectAdmin Multi-Server
+   │
+   ├─ POST /CMD_API_DNS_ADMIN ──▶ directdnsonly-syd (NSD or CoreDNS MySQL)
+   │                                   │
+   │                      Persistent Queue + zone_data store
+   │                                   ├─ writes zone file / MySQL
+   │                                   ├─ reloads daemon
+   │                                   └─ retry on failure
+   │                                   │
+   │                          ◀──── peer sync ────▶
+   │                                   │
+   └─ POST /CMD_API_DNS_ADMIN ──▶ directdnsonly-mlb (NSD or CoreDNS MySQL)
+                                       │
+                          Persistent Queue + zone_data store
+                                       ├─ writes zone file / MySQL
+                                       ├─ reloads daemon
+                                       └─ retry on failure
+```
+
+**Why this is the most robust topology:**
+
+- DA pushes to each instance independently — no single point of failure
+- No load balancer in the write path — a dead LB cannot silence both instances
+- Each instance serves DNS immediately from its own daemon
+- If SYD misses a push while offline, it pulls the newer zone from MLB on the next peer sync (default 15 minutes)
+- Peer sync is best-effort eventual consistency — deliberately simple, no consensus protocol
+
+#### Failure behaviour
+
+| Scenario | What happens |
+|---|---|
+| One instance down during DA push | Other instance(s) receive and serve the update. When the downed instance recovers, peer sync detects the stale/missing `zone_updated_at` and pulls the newer zone data from a peer. |
+| Both instances down during DA push | Both miss the push. When they recover, they sync from each other — the most recently updated peer wins per zone. No DA re-push needed. |
+| Peer offline | Peer sync silently skips unreachable peers. Syncs resume automatically when the peer recovers. |
+| Zone deleted from DA | Reconciliation poller detects the orphan and queues the delete on each instance independently. |
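The per-zone recovery rule in the table above ("the most recently updated peer wins") amounts to a last-writer-wins merge keyed on `zone_updated_at`. The sketch below is a hypothetical illustration of those semantics only — not the actual peer-sync code; the tuple layout and the `sync_from_peer` name are invented for the example.

```python
from typing import Dict, Tuple

# Hypothetical per-zone store: {zone_name: (zone_updated_at, zone_data)}.
ZoneStore = Dict[str, Tuple[int, str]]

def sync_from_peer(local: ZoneStore, peer: ZoneStore) -> ZoneStore:
    """Adopt any zone the peer has a newer copy of (or that we lack entirely)."""
    merged = dict(local)
    for zone, (peer_ts, peer_data) in peer.items():
        local_ts = merged.get(zone, (0, ""))[0]
        if peer_ts > local_ts:
            # Stale or missing locally — take the peer's newer version.
            merged[zone] = (peer_ts, peer_data)
    return merged

# Instance "syd" missed a push that "mlb" received at t=200.
syd = {"example.com": (100, "old zone file")}
mlb = {"example.com": (200, "new zone file"), "other.net": (150, "zone")}

syd = sync_from_peer(syd, mlb)
print(syd["example.com"])  # → (200, 'new zone file')
```

Because the merge is commutative per zone, both peers converge on the same winner regardless of which one polls first — which is exactly why no consensus protocol is needed.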
+
+#### `config/app.yml` — instance syd
+
+```yaml
+app:
+  auth_username: directdnsonly
+  auth_password: your-secret
+
+dns:
+  default_backend: nsd
+  backends:
+    nsd:
+      type: nsd
+      enabled: true
+      zones_dir: /etc/nsd/zones
+      nsd_conf: /etc/nsd/nsd.conf.d/zones.conf
+
+peer_sync:
+  enabled: true
+  interval_minutes: 15
+  peers:
+    - url: http://directdnsonly-mlb:2222
+      username: directdnsonly
+      password: your-secret
+
+reconciliation:
+  enabled: true
+  interval_minutes: 60
+  directadmin_servers:
+    - hostname: da.syd.example.com
+      port: 2222
+      username: admin
+      password: da-secret
+      ssl: true
+```
+
+Register each container as a separate Extra DNS server entry in DA → DNS Administration → Extra DNS Servers with the same credentials.
+
+---
+
 ### Topology Comparison
 
-| | Topology A — Dual BIND | Topology B — CoreDNS MySQL |
-|---|---|---|
-| **DNS server** | BIND9 (bundled in container) | CoreDNS (separate, reads MySQL) |
-| **Write path** | DA → each instance independently | DA → single instance → all backends |
-| **Zone storage** | Zone files on container disk | MySQL database rows |
-| **DA registration** | Two Extra DNS server entries | One Extra DNS server entry |
-| **Redundancy model** | Independent app+DNS units | One app, N database backends |
-| **Transient backend failure** | Retry queue (exp. backoff, 5 attempts) | Retry queue (exp. backoff, 5 attempts) |
-| **Prolonged backend outage** | No auto-recovery — waits for next DA push to that zone | Reconciler healing pass re-pushes all missing zones using stored `zone_data` (no DA involvement) |
-| **Container down during push** | Zone missed entirely — no retry possible at DA level | Zone missed at DA level — same limitation |
-| **Cross-node consistency** | No sync between instances — drift until next DA push | All backends share same write path — reconciler enforces consistency |
-| **Orphan detection** | Yes — reconciler removes zones deleted from DA | Yes — reconciler removes zones deleted from DA |
-| **External DB required** | No | Yes (MySQL per CoreDNS node) |
-| **Horizontal scaling** | Add DA Extra DNS entries + deploy new containers | Add backend stanzas in `config/app.yml` |
-| **Best for** | Simple HA, no external DB | Multi-DC, stronger consistency guarantees |
+| | Topology A — Dual NSD/BIND | Topology B — CoreDNS MySQL | Topology C — Multi-Instance + Peer Sync |
+|---|---|---|---|
+| **DNS server** | NSD or BIND9 (bundled) | CoreDNS (separate, reads MySQL) | NSD or CoreDNS MySQL (per instance) |
+| **Write path** | DA → each instance independently | DA → single instance → all backends | DA → each instance independently |
+| **Zone storage** | Zone files on container disk | MySQL database rows | Zone files or MySQL + SQLite zone_data store |
+| **DA registration** | Two Extra DNS server entries | One Extra DNS server entry | One entry per instance |
+| **Redundancy model** | Independent app+DNS units | One app, N database backends | Independent instances + peer sync |
+| **Transient backend failure** | Retry queue (exp. backoff, 5 attempts) | Retry queue (exp. backoff, 5 attempts) | Retry queue (exp. backoff, 5 attempts) |
+| **Prolonged backend outage** | No auto-recovery — waits for next DA push | Reconciler healing pass re-pushes all missing zones | Peer sync pulls missed zones from a healthy peer |
+| **Container down during push** | Zone missed entirely | Zone missed at DA level | Zone missed at DA level; recovered via peer sync |
+| **Cross-node consistency** | No sync between instances | All backends share same write path | Peer sync provides eventual consistency |
+| **Orphan detection** | Yes — reconciler | Yes — reconciler | Yes — reconciler (per instance) |
+| **External DB required** | No | Yes (MySQL per CoreDNS node) | No (NSD) or Yes (CoreDNS MySQL) |
+| **Horizontal scaling** | Add DA Extra DNS entries + containers | Add backend stanzas in config | Add DA Extra DNS entries + containers + peer list |
+| **Best for** | Simple HA, no external DB | Multi-DC, stronger consistency | Most robust HA — survives extended outages without DA re-push |
 
 ---
@@ -194,9 +274,11 @@ Adding a third data centre is a single stanza in the config — no code changes
 
 ---
 
-### Is there a lighter alternative to bundle instead of BIND9?
+### Bundled DNS daemons — NSD and BIND9
 
-Yes. **NSD (Name Server Daemon)** from NLnet Labs is the strongest candidate for a drop-in replacement:
+The container image ships with **both NSD and BIND9** installed. The entrypoint reads your config and starts only the daemon that matches the configured backend type. CoreDNS MySQL deployments start neither.
+
+**NSD (Name Server Daemon)** from NLnet Labs is the default recommendation:
 
 | | BIND9 | NSD | Knot DNS |
 |---|---|---|---|
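The entrypoint behaviour described above — start only the daemon that matches the configured backend, start nothing for CoreDNS MySQL — could be selected with a simple case switch. This is a hedged sketch of the idea, not the shipped `entrypoint.sh` (which also reads the config file and runs `nsd-control-setup` on first boot); the `pick_daemon` helper is invented for illustration.

```shell
#!/bin/sh
# Hypothetical sketch of backend-based daemon selection.
pick_daemon() {
  # Map the configured backend type to the daemon that should run.
  case "$1" in
    nsd)            echo "nsd" ;;
    bind)           echo "named" ;;
    coredns_mysql)  echo "none" ;;   # CoreDNS runs outside this container
    *)              echo "none" ;;
  esac
}

# Falls back to nsd, the documented default recommendation.
pick_daemon "${DADNS_DNS_DEFAULT_BACKEND:-nsd}"
```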
@@ -216,13 +298,11 @@ Yes. **NSD (Name Server Daemon)** from NLnet Labs is the strongest candidate for
 
 **Summary recommendation:**
 
-- **Today, ~100–300 zones, no external DB:** NSD is a better bundled choice than BIND9 — lighter, faster, simpler config for authoritative-only use.
+- **Up to ~300 zones, no external DB:** Use the NSD backend (bundled) — lighter, faster, authoritative-only, same zone file format as BIND.
 - **300–1 000+ zones:** CoreDNS MySQL wins — zone data in MySQL means no daemon reload at all.
 - **Need zero-interruption zone swaps:** Knot DNS.
 - **Need an HTTP API for zone management (no file I/O):** PowerDNS Authoritative with its native HTTP API and file/SQLite backend.
 
-> NSD backend support is a planned future addition. A pull request is welcome — the implementation is straightforward since zone file format and reload semantics are nearly identical to the existing BIND backend.
-
 ---
 
 ## CoreDNS MySQL Backend — Required Fork
@@ -250,10 +330,11 @@ build required.
 ---
 
 ## Features
-- Multi-backend DNS management (BIND, CoreDNS MySQL)
+- Multi-backend DNS management (NSD, BIND, CoreDNS MySQL)
 - Parallel backend dispatch — all enabled backends updated simultaneously
 - Persistent queue — zone updates survive restarts
 - Automatic record-count verification and drift reconciliation
+- Peer sync — eventual consistency between directdnsonly instances
 - Thread-safe operations
 - Loguru-based logging
 
@@ -340,33 +421,249 @@ dns:
 
 ## Configuration
 
-Edit `config/app.yml` for backend settings. Credentials can be overridden via
-environment variables using the `DADNS_` prefix (e.g.
-`DADNS_APP_AUTH_PASSWORD`).
+DirectDNSOnly uses [Vyper](https://github.com/sn3d/vyper-py) for configuration. Settings are resolved in this priority order (highest wins):
+
+1. **Environment variables** — `DADNS_` prefix, dots replaced with underscores (e.g. `DADNS_APP_AUTH_PASSWORD`)
+2. **Config file** — `app.yml` searched in `/etc/directdnsonly`, `.`, `./config`, then the bundled default
+3. **Built-in defaults** (shown in the table below)
+
+**A config file is entirely optional.** Every scalar setting can be provided through environment variables alone.
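The three-level precedence can be illustrated with a tiny resolver — a simplified sketch of the behaviour described above, not Vyper's implementation; `env_name` and `resolve` are invented helper names.

```python
import os

def env_name(key: str) -> str:
    """Map a dotted config key to its DADNS_ environment variable name."""
    return "DADNS_" + key.upper().replace(".", "_")

def resolve(key: str, file_cfg: dict, defaults: dict):
    """Env var > config file > built-in default (highest wins)."""
    env = os.environ.get(env_name(key))
    if env is not None:
        return env
    if key in file_cfg:
        return file_cfg[key]
    return defaults.get(key)

defaults = {"app.auth_password": "changeme"}
file_cfg = {"app.auth_password": "from-file"}

print(env_name("app.auth_password"))                     # → DADNS_APP_AUTH_PASSWORD
print(resolve("app.auth_password", file_cfg, defaults))  # file value wins over default
os.environ["DADNS_APP_AUTH_PASSWORD"] = "from-env"
print(resolve("app.auth_password", file_cfg, defaults))  # env var wins over file
```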
+---
+
+### Configuration Reference
+
+#### Core
+
+| Config key | Environment variable | Default | Description |
+|---|---|---|---|
+| `log_level` | `DADNS_LOG_LEVEL` | `info` | Log verbosity: `debug`, `info`, `warning`, `error` |
+| `timezone` | `DADNS_TIMEZONE` | `Pacific/Auckland` | Timezone for log timestamps |
+| `queue_location` | `DADNS_QUEUE_LOCATION` | `./data/queues` | Path for the persistent zone-update queue |
+
+#### App (HTTP server)
+
+| Config key | Environment variable | Default | Description |
+|---|---|---|---|
+| `app.auth_username` | `DADNS_APP_AUTH_USERNAME` | `directdnsonly` | Basic auth username for all API routes (including `/internal`) |
+| `app.auth_password` | `DADNS_APP_AUTH_PASSWORD` | `changeme` | Basic auth password — **always override in production** |
+| `app.listen_port` | `DADNS_APP_LISTEN_PORT` | `2222` | TCP port the HTTP server binds to |
+| `app.ssl_enable` | `DADNS_APP_SSL_ENABLE` | `false` | Enable TLS on the HTTP server |
+| `app.proxy_support` | `DADNS_APP_PROXY_SUPPORT` | `true` | Trust `X-Forwarded-For` from a reverse proxy |
+| `app.proxy_support_base` | `DADNS_APP_PROXY_SUPPORT_BASE` | `http://127.0.0.1` | Trusted proxy base address |
+
+#### Datastore (internal SQLite)
+
+| Config key | Environment variable | Default | Description |
+|---|---|---|---|
+| `datastore.type` | `DADNS_DATASTORE_TYPE` | `sqlite` | Internal datastore type (only `sqlite` supported) |
+| `datastore.db_location` | `DADNS_DATASTORE_DB_LOCATION` | `data/directdns.db` | Path to the SQLite database file |
+
+#### DNS backends — BIND
+
+| Config key | Environment variable | Default | Description |
+|---|---|---|---|
+| `dns.default_backend` | `DADNS_DNS_DEFAULT_BACKEND` | _(none)_ | Name of the primary backend (used for status/health reporting) |
+| `dns.backends.bind.enabled` | `DADNS_DNS_BACKENDS_BIND_ENABLED` | `false` | Enable the bundled BIND9 backend |
+| `dns.backends.bind.zones_dir` | `DADNS_DNS_BACKENDS_BIND_ZONES_DIR` | `/etc/named/zones` | Directory where zone files are written |
+| `dns.backends.bind.named_conf` | `DADNS_DNS_BACKENDS_BIND_NAMED_CONF` | `/etc/named.conf.local` | `named.conf` include file managed by directdnsonly |
+
+#### DNS backends — NSD
+
+| Config key | Environment variable | Default | Description |
+|---|---|---|---|
+| `dns.backends.nsd.enabled` | `DADNS_DNS_BACKENDS_NSD_ENABLED` | `false` | Enable the NSD backend |
+| `dns.backends.nsd.zones_dir` | `DADNS_DNS_BACKENDS_NSD_ZONES_DIR` | `/etc/nsd/zones` | Directory where zone files are written |
+| `dns.backends.nsd.nsd_conf` | `DADNS_DNS_BACKENDS_NSD_NSD_CONF` | `/etc/nsd/nsd.conf.d/zones.conf` | NSD zone include file managed by directdnsonly |
+
+#### DNS backends — CoreDNS MySQL
+
+The built-in env var mapping targets the backend named `coredns_mysql`. For multiple named CoreDNS backends (e.g. `coredns_dc1`, `coredns_dc2`) you must use a config file — see [Multi-backend via config file](#multi-backend-via-config-file) below.
+
+| Config key | Environment variable | Default | Description |
+|---|---|---|---|
+| `dns.backends.coredns_mysql.enabled` | `DADNS_DNS_BACKENDS_COREDNS_MYSQL_ENABLED` | `false` | Enable the CoreDNS MySQL backend |
+| `dns.backends.coredns_mysql.host` | `DADNS_DNS_BACKENDS_COREDNS_MYSQL_HOST` | `localhost` | MySQL host |
+| `dns.backends.coredns_mysql.port` | `DADNS_DNS_BACKENDS_COREDNS_MYSQL_PORT` | `3306` | MySQL port |
+| `dns.backends.coredns_mysql.database` | `DADNS_DNS_BACKENDS_COREDNS_MYSQL_DATABASE` | `coredns` | MySQL database name |
+| `dns.backends.coredns_mysql.username` | `DADNS_DNS_BACKENDS_COREDNS_MYSQL_USERNAME` | `coredns` | MySQL username |
+| `dns.backends.coredns_mysql.password` | `DADNS_DNS_BACKENDS_COREDNS_MYSQL_PASSWORD` | _(empty)_ | MySQL password |
+
+#### Reconciliation poller
+
+| Config key | Environment variable | Default | Description |
+|---|---|---|---|
+| `reconciliation.enabled` | `DADNS_RECONCILIATION_ENABLED` | `false` | Enable the background reconciliation poller |
+| `reconciliation.dry_run` | `DADNS_RECONCILIATION_DRY_RUN` | `false` | Log orphans but do not queue deletes (safe first-run mode) |
+| `reconciliation.interval_minutes` | `DADNS_RECONCILIATION_INTERVAL_MINUTES` | `60` | How often the poller runs |
+| `reconciliation.verify_ssl` | `DADNS_RECONCILIATION_VERIFY_SSL` | `true` | Verify TLS certificates when querying DirectAdmin |
+
+> The `reconciliation.directadmin_servers` list (DA hostnames, credentials) requires a config file — it cannot be expressed as simple env vars.
+
+#### Peer sync
+
+| Config key | Environment variable | Default | Description |
+|---|---|---|---|
+| `peer_sync.enabled` | `DADNS_PEER_SYNC_ENABLED` | `false` | Enable background peer-to-peer zone sync |
+| `peer_sync.interval_minutes` | `DADNS_PEER_SYNC_INTERVAL_MINUTES` | `15` | How often each peer is polled |
+
+> The `peer_sync.peers` list (peer URLs, credentials) requires a config file — it cannot be expressed as simple env vars.
+
+---
+
+### Environment-variable-only setup
+
+No config file is needed for single-backend deployments. Pass all settings as container environment variables.
+
+#### Topology A/C — NSD backend (env vars only, recommended)
+
+```bash
+DADNS_APP_AUTH_PASSWORD=my-strong-secret
+DADNS_DNS_DEFAULT_BACKEND=nsd
+DADNS_DNS_BACKENDS_NSD_ENABLED=true
+DADNS_DNS_BACKENDS_NSD_ZONES_DIR=/etc/nsd/zones
+DADNS_DNS_BACKENDS_NSD_NSD_CONF=/etc/nsd/nsd.conf.d/zones.conf
+DADNS_QUEUE_LOCATION=/app/data/queues
+DADNS_DATASTORE_DB_LOCATION=/app/data/directdns.db
+```
+
+`docker-compose.yml` snippet (Topology C — two instances with peer sync via config file):
+
-### Config Files
-#### `config/app.yml`
 ```yaml
-timezone: Pacific/Auckland
-log_level: INFO
-queue_location: ./data/queues
+services:
+  directdnsonly-syd:
+    image: guisea/directdnsonly:2.5.0
+    ports:
+      - "2222:2222"
+      - "53:53/udp"
+    environment:
+      DADNS_APP_AUTH_PASSWORD: my-strong-secret
+      DADNS_DNS_DEFAULT_BACKEND: nsd
+      DADNS_DNS_BACKENDS_NSD_ENABLED: "true"
+    volumes:
+      - ./config/syd:/app/config   # contains peer_sync.peers list
+      - syd-data:/app/data
+
+  directdnsonly-mlb:
+    image: guisea/directdnsonly:2.5.0
+    ports:
+      - "2223:2222"
+      - "54:53/udp"
+    environment:
+      DADNS_APP_AUTH_PASSWORD: my-strong-secret
+      DADNS_DNS_DEFAULT_BACKEND: nsd
+      DADNS_DNS_BACKENDS_NSD_ENABLED: "true"
+    volumes:
+      - ./config/mlb:/app/config   # contains peer_sync.peers list
+      - mlb-data:/app/data
+
+volumes:
+  syd-data:
+  mlb-data:
+```
+
+#### Topology A — BIND backend (env vars only)
+
+```bash
+# docker run / docker-compose environment:
+DADNS_APP_AUTH_USERNAME=directdnsonly
+DADNS_APP_AUTH_PASSWORD=my-strong-secret
+DADNS_DNS_DEFAULT_BACKEND=bind
+DADNS_DNS_BACKENDS_BIND_ENABLED=true
+DADNS_DNS_BACKENDS_BIND_ZONES_DIR=/etc/named/zones
+DADNS_DNS_BACKENDS_BIND_NAMED_CONF=/etc/named/named.conf.local
+DADNS_QUEUE_LOCATION=/app/data/queues
+DADNS_DATASTORE_DB_LOCATION=/app/data/directdns.db
+```
+
+`docker-compose.yml` snippet:
+
+```yaml
+services:
+  directdnsonly:
+    image: guisea/directdnsonly:2.5.0
+    ports:
+      - "2222:2222"
+      - "53:53/udp"
+    environment:
+      DADNS_APP_AUTH_PASSWORD: my-strong-secret
+      DADNS_DNS_DEFAULT_BACKEND: bind
+      DADNS_DNS_BACKENDS_BIND_ENABLED: "true"
+      DADNS_DNS_BACKENDS_BIND_ZONES_DIR: /etc/named/zones
+      DADNS_DNS_BACKENDS_BIND_NAMED_CONF: /etc/named/named.conf.local
+    volumes:
+      - ddo-data:/app/data
+
+volumes:
+  ddo-data:
+```
+
+#### Topology B — single CoreDNS MySQL backend (env vars only)
+
+```bash
+DADNS_APP_AUTH_PASSWORD=my-strong-secret
+DADNS_DNS_DEFAULT_BACKEND=coredns_mysql
+DADNS_DNS_BACKENDS_COREDNS_MYSQL_ENABLED=true
+DADNS_DNS_BACKENDS_COREDNS_MYSQL_HOST=mysql.dc1.internal
+DADNS_DNS_BACKENDS_COREDNS_MYSQL_PORT=3306
+DADNS_DNS_BACKENDS_COREDNS_MYSQL_DATABASE=coredns
+DADNS_DNS_BACKENDS_COREDNS_MYSQL_USERNAME=coredns
+DADNS_DNS_BACKENDS_COREDNS_MYSQL_PASSWORD=db-secret
+DADNS_QUEUE_LOCATION=/app/data/queues
+DADNS_DATASTORE_DB_LOCATION=/app/data/directdns.db
+```
+
+---
+
+### Multi-backend via config file
+
+When you need **multiple named backends** (e.g. two CoreDNS MySQL instances in different data centres), **peer sync**, or **reconciliation with DA servers**, use a config file mounted at `/app/config/app.yml` (or `/etc/directdnsonly/app.yml`):
+
 ```yaml
 app:
   auth_username: directdnsonly
-  auth_password: changeme  # override with DADNS_APP_AUTH_PASSWORD
+  auth_password: my-strong-secret   # or use DADNS_APP_AUTH_PASSWORD
 
 dns:
-  default_backend: bind
+  default_backend: coredns_dc1
   backends:
-    bind:
+    coredns_dc1:
+      type: coredns_mysql
       enabled: true
-      zones_dir: ./data/zones
-      named_conf: ./data/named.conf.include
-
-    coredns_mysql:
-      enabled: true
-      host: "127.0.0.1"
+      host: 10.0.0.80
       port: 3306
-      database: "coredns"
-      username: "coredns"
-      password: "password"
+      database: coredns
+      username: coredns
+      password: db-secret-dc1
 
+    coredns_dc2:
+      type: coredns_mysql
+      enabled: true
+      host: 10.0.1.29
+      port: 3306
+      database: coredns
+      username: coredns
+      password: db-secret-dc2
+
+reconciliation:
+  enabled: true
+  dry_run: false
+  interval_minutes: 60
+  verify_ssl: true
+  directadmin_servers:
+    - hostname: da1.example.com
+      port: 2222
+      username: admin
+      password: da-secret
+      ssl: true
+
+peer_sync:
+  enabled: true
+  interval_minutes: 15
+  peers:
+    - url: http://ddo-2:2222
+      username: directdnsonly
+      password: my-strong-secret
+```
+
+Credentials in the config file can still be overridden by env vars — for example, `DADNS_APP_AUTH_PASSWORD` overrides `app.auth_password` regardless of what the file says.
@@ -2,6 +2,7 @@ from typing import Dict, Type, Optional
 from .base import DNSBackend
 from .bind import BINDBackend
 from .coredns_mysql import CoreDNSMySQLBackend
+from .nsd import NSDBackend
 from directdnsonly.config import config
 from loguru import logger
 
@@ -11,6 +12,7 @@ class BackendRegistry:
         self._backend_types = {
             "bind": BINDBackend,
             "coredns_mysql": CoreDNSMySQLBackend,
+            "nsd": NSDBackend,
         }
         self._backend_instances: Dict[str, DNSBackend] = {}
         self._initialized = False
directdnsonly/app/backends/nsd.py (new file, 179 lines)
@@ -0,0 +1,179 @@
|
||||
import os
|
||||
import re
|
||||
import subprocess
|
||||
from loguru import logger
|
||||
from pathlib import Path
|
||||
from typing import Dict, List, Optional
|
||||
from .base import DNSBackend
|
||||
|
||||
|
||||
class NSDBackend(DNSBackend):
|
||||
"""DNS backend for NSD (Name Server Daemon) by NLnet Labs.
|
||||
|
||||
Zone files use the same RFC 1035 format as BIND. NSD is reloaded via
|
||||
``nsd-control reload`` after each write. Zone registration is managed in a
|
||||
dedicated include file so the main ``nsd.conf`` is never modified by the
|
||||
application.
|
||||
"""
|
||||
|
||||
@classmethod
|
||||
def get_name(cls) -> str:
|
||||
return "nsd"
|
||||
|
||||
@classmethod
|
||||
def is_available(cls) -> bool:
|
||||
try:
|
||||
result = subprocess.run(
|
||||
["nsd-control", "status"],
|
||||
capture_output=True,
|
||||
text=True,
|
||||
)
|
||||
# nsd-control exits 0 when NSD is running, non-zero otherwise.
|
||||
# Either way, a non-FileNotFoundError means the binary is present.
|
||||
logger.info("NSD available (nsd-control found)")
|
||||
return True
|
||||
except FileNotFoundError:
|
||||
logger.warning("NSD not found in PATH — nsd-control missing")
|
||||
return False
|
||||
|
||||
def __init__(self, config: Dict):
|
||||
super().__init__(config)
|
||||
self.zones_dir = Path(config.get("zones_dir", "/etc/nsd/zones"))
|
||||
self.nsd_conf = Path(
|
||||
config.get("nsd_conf", "/etc/nsd/nsd.conf.d/zones.conf")
|
||||
)
|
||||
|
||||
# Ensure zones directory exists
|
||||
try:
|
||||
if self.zones_dir.is_symlink():
|
||||
logger.debug(f"{self.zones_dir} is already a symlink")
|
||||
elif not self.zones_dir.exists():
|
||||
self.zones_dir.mkdir(parents=True, mode=0o755)
|
||||
logger.debug(f"Created zones directory: {self.zones_dir}")
|
||||
os.chmod(self.zones_dir, 0o755)
|
||||
except FileExistsError:
|
||||
pass
|
||||
except Exception as e:
|
||||
logger.error(f"Failed to setup zones directory: {e}")
|
||||
raise
|
||||
|
||||
# Ensure the conf include directory and file exist
|
||||
self.nsd_conf.parent.mkdir(parents=True, exist_ok=True)
|
||||
if not self.nsd_conf.exists():
|
||||
self.nsd_conf.touch()
|
||||
logger.info(f"Created empty NSD zone conf: {self.nsd_conf}")
|
||||
|
||||
logger.success(
|
||||
f"NSD backend initialized — zones: {self.zones_dir}, "
|
||||
f"conf: {self.nsd_conf}"
|
||||
)
|
||||
|
||||
# ------------------------------------------------------------------
|
||||
# Core backend interface
|
||||
# ------------------------------------------------------------------
|
||||
|
||||
def write_zone(self, zone_name: str, zone_data: str) -> bool:
|
||||
zone_file = self.zones_dir / f"{zone_name}.db"
|
||||
try:
|
||||
zone_file.write_text(zone_data)
|
||||
logger.debug(f"Wrote zone file: {zone_file}")
|
||||
self._ensure_zone_in_conf(zone_name)
|
||||
return True
|
||||
except IOError as e:
|
||||
logger.error(f"Failed to write zone file {zone_file}: {e}")
|
||||
return False
|
||||
|
||||
def delete_zone(self, zone_name: str) -> bool:
|
||||
zone_file = self.zones_dir / f"{zone_name}.db"
|
||||
        try:
            if zone_file.exists():
                zone_file.unlink()
                logger.debug(f"Deleted zone file: {zone_file}")
            else:
                logger.warning(f"Zone file not found: {zone_file}")
                return False
            self._remove_zone_from_conf(zone_name)
            return True
        except IOError as e:
            logger.error(f"Failed to delete zone {zone_name}: {e}")
            return False

    def reload_zone(self, zone_name: Optional[str] = None) -> bool:
        try:
            if zone_name:
                cmd = ["nsd-control", "reload", zone_name]
                logger.debug(f"Reloading single zone: {zone_name}")
            else:
                cmd = ["nsd-control", "reload"]
                logger.debug("Reloading all zones")

            result = subprocess.run(
                cmd,
                check=True,
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
                text=True,
            )
            logger.debug(f"NSD reload successful: {result.stdout.strip()}")
            return True
        except subprocess.CalledProcessError as e:
            logger.error(f"NSD reload failed: {e.stderr.strip()}")
            return False
        except Exception as e:
            logger.error(f"Unexpected error during NSD reload: {e}")
            return False

    def zone_exists(self, zone_name: str) -> bool:
        exists = (self.zones_dir / f"{zone_name}.db").exists()
        logger.debug(f"Zone existence check for {zone_name}: {exists}")
        return exists

    # ------------------------------------------------------------------
    # NSD conf file management
    # ------------------------------------------------------------------

    def update_nsd_conf(self, zones: List[str]) -> bool:
        """Rewrite the NSD zones include file with exactly the given zone list.

        Equivalent to BINDBackend.update_named_conf — full replacement from a
        known-good source list.
        """
        try:
            lines = []
            for zone in zones:
                zone_file = self.zones_dir / f"{zone}.db"
                lines.append(
                    f'\nzone:\n name: "{zone}"\n zonefile: "{zone_file}"\n'
                )
            self.nsd_conf.write_text("".join(lines))
            logger.debug(f"Rewrote NSD zone conf: {self.nsd_conf}")
            return True
        except IOError as e:
            logger.error(f"Failed to update NSD zone conf: {e}")
            return False

    def _ensure_zone_in_conf(self, zone_name: str) -> None:
        """Append a zone stanza to the NSD conf file if it is not already present."""
        zone_file = self.zones_dir / f"{zone_name}.db"
        stanza = f'\nzone:\n name: "{zone_name}"\n zonefile: "{zone_file}"\n'

        content = self.nsd_conf.read_text() if self.nsd_conf.exists() else ""
        if f'name: "{zone_name}"' not in content:
            with open(self.nsd_conf, "a") as f:
                f.write(stanza)
            logger.debug(f"Added zone {zone_name} to NSD conf")

    def _remove_zone_from_conf(self, zone_name: str) -> None:
        """Remove a zone stanza from the NSD conf file."""
        if not self.nsd_conf.exists():
            return
        content = self.nsd_conf.read_text()
        pattern = (
            r'\nzone:\n name: "'
            + re.escape(zone_name)
            + r'"\n zonefile: "[^"]+"\n'
        )
        new_content = re.sub(pattern, "", content)
        if new_content != content:
            self.nsd_conf.write_text(new_content)
            logger.debug(f"Removed zone {zone_name} from NSD conf")
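The append and remove paths above must stay in lockstep: the regex in _remove_zone_from_conf has to match exactly the stanza text that _ensure_zone_in_conf writes. A standalone sketch of that round-trip (zone name and path are illustrative, not from the codebase):

```python
import re

# Illustrative round-trip: build a stanza the way _ensure_zone_in_conf does,
# then remove it with the same pattern _remove_zone_from_conf uses.
zone = "example.com"
zone_file = "/etc/nsd/zones/example.com.db"
stanza = f'\nzone:\n name: "{zone}"\n zonefile: "{zone_file}"\n'

pattern = r'\nzone:\n name: "' + re.escape(zone) + r'"\n zonefile: "[^"]+"\n'
print(re.sub(pattern, "", stanza) == "")  # True: remove undoes append exactly
```

If the stanza template and the pattern ever drift apart (say, a whitespace change in one but not the other), deletions silently stop cleaning the conf file, which is why keeping them adjacent in the class matters.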
@@ -43,6 +43,10 @@ def load_config() -> Vyper:
    v.set_default("dns.backends.bind.zones_dir", "/etc/named/zones")
    v.set_default("dns.backends.bind.named_conf", "/etc/named.conf.local")

    v.set_default("dns.backends.nsd.enabled", False)
    v.set_default("dns.backends.nsd.zones_dir", "/etc/nsd/zones")
    v.set_default("dns.backends.nsd.nsd_conf", "/etc/nsd/nsd.conf.d/zones.conf")

    v.set_default("dns.backends.coredns_mysql.enabled", False)
    v.set_default("dns.backends.coredns_mysql.host", "localhost")
    v.set_default("dns.backends.coredns_mysql.port", 3306)
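These dotted config keys can also be supplied as environment variables in env-var-only deployments. Judging from DADNS_DNS_BACKENDS_NSD_ENABLED in the entrypoint changes of this commit, the naming appears to be the upper-cased key with dots replaced by underscores under a DADNS_ prefix; the helper below is purely illustrative and not part of the codebase:

```python
# Illustrative only: derive the DADNS_ env var name for a dotted config key,
# matching the names seen in this commit (e.g. DADNS_DNS_BACKENDS_NSD_ENABLED).
def env_var_for(key: str) -> str:
    return "DADNS_" + key.upper().replace(".", "_")

print(env_var_for("dns.backends.nsd.enabled"))   # DADNS_DNS_BACKENDS_NSD_ENABLED
print(env_var_for("dns.backends.bind.enabled"))  # DADNS_DNS_BACKENDS_BIND_ENABLED
```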
@@ -2,10 +2,11 @@
set -e

# ---------------------------------------------------------------------------
# Detect whether any bind backend is configured and enabled.
# Detect which DNS backend type(s) are configured and enabled.
# Uses the same config search order as the application itself.
# ---------------------------------------------------------------------------
BIND_ENABLED=$(python3 - <<'EOF'
detect_backend_types() {
    python3 - <<'EOF'
import yaml, sys, os

config_paths = [
@@ -17,6 +18,10 @@ config_paths = [
    "/app/config/app.yaml",
]

# Also honour env-var-only deployments (no config file)
bind_env = os.environ.get("DADNS_DNS_BACKENDS_BIND_ENABLED", "").lower() == "true"
nsd_env = os.environ.get("DADNS_DNS_BACKENDS_NSD_ENABLED", "").lower() == "true"

config = {}
for path in config_paths:
    if os.path.exists(path):
@@ -25,20 +30,62 @@
        break

backends = config.get("dns", {}).get("backends", {})
has_bind = bind_env
has_nsd = nsd_env
for cfg in backends.values():
    if isinstance(cfg, dict) and cfg.get("type") == "bind" and cfg.get("enabled", False):
        print("true")
        sys.exit(0)
print("false")
EOF
)
    if not isinstance(cfg, dict) or not cfg.get("enabled", False):
        continue
    btype = cfg.get("type", "")
    if btype == "bind":
        has_bind = True
    elif btype == "nsd":
        has_nsd = True

if [ "$BIND_ENABLED" = "true" ]; then
    echo "[entrypoint] BIND backend enabled — starting named"
types = []
if has_bind:
    types.append("bind")
if has_nsd:
    types.append("nsd")
print(" ".join(types) if types else "none")
EOF
}

BACKEND_TYPES=$(detect_backend_types)
echo "[entrypoint] Detected DNS backend type(s): ${BACKEND_TYPES:-none}"

# ---------------------------------------------------------------------------
# Start BIND if a bind backend is configured
# ---------------------------------------------------------------------------
if echo "$BACKEND_TYPES" | grep -qw "bind"; then
    if command -v named >/dev/null 2>&1; then
        echo "[entrypoint] Starting BIND (named)"
        /usr/sbin/named -u bind -f &
    else
        echo "[entrypoint] No BIND backend configured — skipping named"
        echo "[entrypoint] WARNING: bind backend configured but 'named' not found — skipping"
    fi
fi

# Start the application
# ---------------------------------------------------------------------------
# Start NSD if an nsd backend is configured
# ---------------------------------------------------------------------------
if echo "$BACKEND_TYPES" | grep -qw "nsd"; then
    if command -v nsd >/dev/null 2>&1; then
        echo "[entrypoint] Starting NSD"
        # Ensure nsd-control keys exist (generated on first run)
        if [ ! -f /etc/nsd/nsd_server.key ]; then
            nsd-control-setup 2>/dev/null || true
        fi
        /usr/sbin/nsd -d -c /etc/nsd/nsd.conf &
    else
        echo "[entrypoint] WARNING: nsd backend configured but 'nsd' not found — skipping"
    fi
fi

if [ "$BACKEND_TYPES" = "none" ] || [ -z "$BACKEND_TYPES" ]; then
    echo "[entrypoint] No local DNS daemon required (CoreDNS MySQL or similar)"
fi

# ---------------------------------------------------------------------------
# Start the directdnsonly application
# ---------------------------------------------------------------------------
exec python -m directdnsonly
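Stripped of the config-file parsing, the detection performed by detect_backend_types reduces to a small pure function over the backends mapping; a minimal sketch (the function name and shape are illustrative, not from the repo):

```python
# Illustrative re-statement of detect_backend_types: collect enabled backend
# types from the config mapping, seeded by the env-var overrides.
def backend_types(backends: dict, bind_env: bool = False, nsd_env: bool = False) -> str:
    has_bind, has_nsd = bind_env, nsd_env
    for cfg in backends.values():
        # Skip malformed or disabled entries, as the entrypoint does.
        if not isinstance(cfg, dict) or not cfg.get("enabled", False):
            continue
        btype = cfg.get("type", "")
        if btype == "bind":
            has_bind = True
        elif btype == "nsd":
            has_nsd = True
    types = [t for t, on in (("bind", has_bind), ("nsd", has_nsd)) if on]
    return " ".join(types) or "none"

print(backend_types({"primary": {"type": "nsd", "enabled": True}}))  # nsd
print(backend_types({}))  # none
```

The "none" sentinel is what lets the entrypoint distinguish "no daemon needed" (CoreDNS MySQL) from a detection failure.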
20
docker/nsd.conf
Normal file
@@ -0,0 +1,20 @@
# NSD base configuration for directdnsonly containers.
# Zone stanzas are written to /etc/nsd/nsd.conf.d/zones.conf by the NSD
# backend and auto-included via the glob below.

server:
    server-count: 1
    ip-address: 0.0.0.0
    port: 53
    username: nsd
    zonesdir: /etc/nsd/zones
    verbosity: 1
    # Log to stderr so Docker captures it
    logfile: ""

remote-control:
    control-enable: yes
    control-interface: 127.0.0.1
    control-port: 8952

include: /etc/nsd/nsd.conf.d/*.conf
@@ -1,6 +1,6 @@
[project]
name = "directdnsonly"
version = "2.4.0"
version = "2.5.0"
description = "DNS Management System - DirectAdmin to multiple backends"
authors = [
    {name = "Aaron Guise",email = "aaron@guise.net.nz"}
227
tests/test_nsd.py
Normal file
@@ -0,0 +1,227 @@
"""Tests for directdnsonly.app.backends.nsd — NSDBackend."""

import subprocess
from pathlib import Path
from unittest.mock import patch, MagicMock

import pytest

from directdnsonly.app.backends.nsd import NSDBackend


# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------

ZONE_DATA = """\
$ORIGIN example.com.
$TTL 300
@ 300 IN SOA ns1.example.com. hostmaster.example.com. (2024010101 3600 900 604800 300)
@ 300 IN NS ns1.example.com.
@ 300 IN A 192.0.2.1
"""


def _make_backend(tmp_path) -> NSDBackend:
    """Return an NSDBackend pointing at tmp_path directories.

    is_available() is patched so the tests do not require a real nsd install.
    """
    zones_dir = tmp_path / "zones"
    nsd_conf = tmp_path / "nsd.conf.d" / "zones.conf"
    config = {
        "instance_name": "test_nsd",
        "zones_dir": str(zones_dir),
        "nsd_conf": str(nsd_conf),
    }
    with patch.object(NSDBackend, "is_available", return_value=True):
        return NSDBackend(config)


# ---------------------------------------------------------------------------
# Availability check
# ---------------------------------------------------------------------------


def test_is_available_true(monkeypatch):
    monkeypatch.setattr(
        "directdnsonly.app.backends.nsd.subprocess.run",
        lambda *a, **kw: MagicMock(returncode=0),
    )
    assert NSDBackend.is_available()


def test_is_available_false_when_not_installed(monkeypatch):
    def raise_fnf(*args, **kwargs):
        raise FileNotFoundError

    monkeypatch.setattr("directdnsonly.app.backends.nsd.subprocess.run", raise_fnf)
    assert not NSDBackend.is_available()


# ---------------------------------------------------------------------------
# Initialisation
# ---------------------------------------------------------------------------


def test_init_creates_zones_dir(tmp_path):
    backend = _make_backend(tmp_path)
    assert backend.zones_dir.exists()


def test_init_creates_nsd_conf(tmp_path):
    backend = _make_backend(tmp_path)
    assert backend.nsd_conf.exists()


def test_get_name():
    assert NSDBackend.get_name() == "nsd"


# ---------------------------------------------------------------------------
# write_zone
# ---------------------------------------------------------------------------


def test_write_zone_creates_zone_file(tmp_path):
    backend = _make_backend(tmp_path)
    assert backend.write_zone("example.com", ZONE_DATA)
    assert (backend.zones_dir / "example.com.db").exists()


def test_write_zone_content_matches(tmp_path):
    backend = _make_backend(tmp_path)
    backend.write_zone("example.com", ZONE_DATA)
    content = (backend.zones_dir / "example.com.db").read_text()
    assert content == ZONE_DATA


def test_write_zone_adds_to_conf(tmp_path):
    backend = _make_backend(tmp_path)
    backend.write_zone("example.com", ZONE_DATA)
    conf = backend.nsd_conf.read_text()
    assert 'name: "example.com"' in conf
    assert "example.com.db" in conf


def test_write_zone_idempotent_conf_entry(tmp_path):
    backend = _make_backend(tmp_path)
    backend.write_zone("example.com", ZONE_DATA)
    backend.write_zone("example.com", ZONE_DATA)
    conf = backend.nsd_conf.read_text()
    # Should appear exactly once
    assert conf.count('name: "example.com"') == 1


def test_write_zone_multiple_zones(tmp_path):
    backend = _make_backend(tmp_path)
    backend.write_zone("example.com", ZONE_DATA)
    backend.write_zone("other.com", ZONE_DATA)
    conf = backend.nsd_conf.read_text()
    assert 'name: "example.com"' in conf
    assert 'name: "other.com"' in conf


# ---------------------------------------------------------------------------
# zone_exists
# ---------------------------------------------------------------------------


def test_zone_exists_after_write(tmp_path):
    backend = _make_backend(tmp_path)
    backend.write_zone("example.com", ZONE_DATA)
    assert backend.zone_exists("example.com")


def test_zone_not_exists_before_write(tmp_path):
    backend = _make_backend(tmp_path)
    assert not backend.zone_exists("missing.com")


# ---------------------------------------------------------------------------
# delete_zone
# ---------------------------------------------------------------------------


def test_delete_zone_removes_file(tmp_path):
    backend = _make_backend(tmp_path)
    backend.write_zone("example.com", ZONE_DATA)
    assert backend.delete_zone("example.com")
    assert not (backend.zones_dir / "example.com.db").exists()


def test_delete_zone_removes_conf_entry(tmp_path):
    backend = _make_backend(tmp_path)
    backend.write_zone("example.com", ZONE_DATA)
    backend.delete_zone("example.com")
    conf = backend.nsd_conf.read_text()
    assert 'name: "example.com"' not in conf


def test_delete_zone_returns_false_when_missing(tmp_path):
    backend = _make_backend(tmp_path)
    assert not backend.delete_zone("ghost.com")


def test_delete_zone_leaves_other_zones(tmp_path):
    backend = _make_backend(tmp_path)
    backend.write_zone("example.com", ZONE_DATA)
    backend.write_zone("other.com", ZONE_DATA)
    backend.delete_zone("example.com")
    assert 'name: "other.com"' in backend.nsd_conf.read_text()


# ---------------------------------------------------------------------------
# reload_zone — subprocess interactions
# ---------------------------------------------------------------------------


def test_reload_zone_calls_nsd_control_reload(tmp_path, monkeypatch):
    backend = _make_backend(tmp_path)
    calls = []

    def fake_run(cmd, **kwargs):
        calls.append(cmd)
        return MagicMock(returncode=0, stdout="ok", stderr="")

    monkeypatch.setattr("directdnsonly.app.backends.nsd.subprocess.run", fake_run)
    assert backend.reload_zone()
    assert calls[0] == ["nsd-control", "reload"]


def test_reload_single_zone_passes_zone_name(tmp_path, monkeypatch):
    backend = _make_backend(tmp_path)
    calls = []

    def fake_run(cmd, **kwargs):
        calls.append(cmd)
        return MagicMock(returncode=0, stdout="ok", stderr="")

    monkeypatch.setattr("directdnsonly.app.backends.nsd.subprocess.run", fake_run)
    assert backend.reload_zone("example.com")
    assert calls[0] == ["nsd-control", "reload", "example.com"]


def test_reload_zone_returns_false_on_failure(tmp_path, monkeypatch):
    backend = _make_backend(tmp_path)

    def fake_run(cmd, **kwargs):
        raise subprocess.CalledProcessError(1, cmd, stderr="nsd-control: error")

    monkeypatch.setattr("directdnsonly.app.backends.nsd.subprocess.run", fake_run)
    assert not backend.reload_zone()


# ---------------------------------------------------------------------------
# update_nsd_conf — full rewrite
# ---------------------------------------------------------------------------


def test_update_nsd_conf_replaces_all_zones(tmp_path):
    backend = _make_backend(tmp_path)
    backend.write_zone("old.com", ZONE_DATA)
    backend.update_nsd_conf(["new1.com", "new2.com"])
    conf = backend.nsd_conf.read_text()
    assert 'name: "old.com"' not in conf
    assert 'name: "new1.com"' in conf
    assert 'name: "new2.com"' in conf