
Modular Storage System

The OpenAccounting server now supports multiple storage backends for file attachments. This allows you to choose between local filesystem storage for simple deployments or cloud storage for production/multi-user environments.

Supported Storage Backends

1. Local Filesystem Storage

Perfect for self-hosted deployments or development environments.

Configuration:

{
  "storage": {
    "backend": "local",
    "local": {
      "root_dir": "./uploads",
      "base_url": "https://yourapp.com/files",
      "signing_key": "your-secret-jwt-signing-key"
    }
  }
}

Environment Variables:

OA_STORAGE_BACKEND=local
OA_STORAGE_LOCAL_ROOT_DIR=./uploads
OA_STORAGE_LOCAL_BASE_URL=https://yourapp.com/files
OA_STORAGE_LOCAL_SIGNINGKEY=your-secret-jwt-signing-key

Security Features:

  • JWT Token Access: Files are served through secure JWT tokens with 1-hour expiry
  • Secure File Permissions: Files created with 0600 permissions (owner read/write only)
  • Time-Limited URLs: All file access URLs expire automatically
  • Path Traversal Protection: Comprehensive validation prevents directory traversal attacks
  • No Direct File Access: Files cannot be accessed without valid authentication tokens

2. Amazon S3 Storage

Reliable cloud storage for production deployments.

Configuration:

{
  "storage": {
    "backend": "s3",
    "s3": {
      "region": "us-east-1",
      "bucket": "my-openaccounting-attachments",
      "prefix": "attachments",
      "access_key_id": "AKIA...",
      "secret_access_key": "...",
      "endpoint": "",
      "path_style": false
    }
  }
}

Environment Variables:

OA_STORAGE_BACKEND=s3
OA_STORAGE_S3_REGION=us-east-1
OA_STORAGE_S3_BUCKET=my-openaccounting-attachments
OA_STORAGE_S3_PREFIX=attachments
OA_STORAGE_S3_ACCESS_KEY_ID=AKIA...
OA_STORAGE_S3_SECRET_ACCESS_KEY=...

Features:

  • Automatic presigned URL generation (see the sketch below)
  • Configurable expiry times
  • Support for S3-compatible services (MinIO, DigitalOcean Spaces)
  • IAM role support (leave credentials empty to use IAM)
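
As a rough illustration of presigned URL generation (a sketch, not the server's exact code), the AWS SDK for Go v2 can presign a GET request as follows; the bucket, object key, and expiry are placeholders:

package main

import (
    "context"
    "fmt"
    "log"
    "time"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
    // Load credentials from the environment, shared config, or an IAM role.
    cfg, err := config.LoadDefaultConfig(context.TODO(), config.WithRegion("us-east-1"))
    if err != nil {
        log.Fatal(err)
    }

    client := s3.NewFromConfig(cfg)
    presigner := s3.NewPresignClient(client)

    // Presign a GET for one attachment; the key layout here is illustrative.
    req, err := presigner.PresignGetObject(context.TODO(), &s3.GetObjectInput{
        Bucket: aws.String("my-openaccounting-attachments"),
        Key:    aws.String("attachments/2025/01/15/uuid1.pdf"),
    }, s3.WithPresignExpires(15*time.Minute))
    if err != nil {
        log.Fatal(err)
    }

    fmt.Println(req.URL) // time-limited download URL
}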

3. Backblaze B2 Storage

Cost-effective cloud storage alternative to S3.

Configuration:

{
  "storage": {
    "backend": "b2",
    "b2": {
      "account_id": "your-b2-account-id",
      "application_key": "your-b2-application-key",
      "bucket": "my-openaccounting-attachments", 
      "prefix": "attachments"
    }
  }
}

Environment Variables:

OA_STORAGE_BACKEND=b2
OA_STORAGE_B2_ACCOUNT_ID=your-b2-account-id
OA_STORAGE_B2_APPLICATION_KEY=your-b2-application-key
OA_STORAGE_B2_BUCKET=my-openaccounting-attachments
OA_STORAGE_B2_PREFIX=attachments

API Endpoints

The storage system provides both legacy and new endpoints:

New Storage-Agnostic Endpoints

Upload Attachment:

POST /api/v1/attachments
Content-Type: multipart/form-data

transactionId: uuid
description: string (optional)
file: binary data
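
For example, a client could upload a receipt with a multipart request along these lines (a sketch; the host, UUID, file name, and authentication details are placeholders that depend on your deployment):

package main

import (
    "bytes"
    "fmt"
    "io"
    "log"
    "mime/multipart"
    "net/http"
    "os"
)

func main() {
    var body bytes.Buffer
    w := multipart.NewWriter(&body)

    // Form fields from the endpoint specification above.
    w.WriteField("transactionId", "3f1c2e9a-0000-0000-0000-000000000000") // placeholder UUID
    w.WriteField("description", "Office supplies receipt")

    // Attach the file as the "file" part.
    f, err := os.Open("receipt.pdf")
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()
    part, _ := w.CreateFormFile("file", "receipt.pdf")
    io.Copy(part, f)
    w.Close()

    req, _ := http.NewRequest("POST", "https://yourapp.com/api/v1/attachments", &body)
    req.Header.Set("Content-Type", w.FormDataContentType())
    // Add authentication headers here as required by your deployment.

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    fmt.Println("status:", resp.Status)
}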

Get Attachment Metadata:

GET /api/v1/attachments/{id}

Get Download URL:

GET /api/v1/attachments/{id}/url

Download File:

GET /api/v1/attachments/{id}?download=true
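
Downloading is then a plain GET against that endpoint, for example (the attachment ID and output path are placeholders):

package main

import (
    "io"
    "log"
    "net/http"
    "os"
)

func main() {
    // ?download=true streams the file itself rather than its metadata.
    url := "https://yourapp.com/api/v1/attachments/3f1c2e9a-0000-0000-0000-000000000000?download=true"

    req, err := http.NewRequest("GET", url, nil)
    if err != nil {
        log.Fatal(err)
    }
    // Add authentication headers here as required by your deployment.

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    out, err := os.Create("receipt.pdf")
    if err != nil {
        log.Fatal(err)
    }
    defer out.Close()

    if _, err := io.Copy(out, resp.Body); err != nil {
        log.Fatal(err)
    }
}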

Delete Attachment:

DELETE /api/v1/attachments/{id}

Legacy Endpoints (Still Supported)

The original transaction-scoped endpoints remain available for backward compatibility:

  • GET/POST /api/v1/orgs/{orgId}/transactions/{transactionId}/attachments

Security Features

  • File type validation - Only allowed MIME types are accepted
  • File size limits - Configurable maximum file size (default 10MB)
  • Path traversal protection - Prevents directory traversal attacks (see the sketch below)
  • Access control - Files are linked to users and organizations
  • Time-limited access - JWT tokens for local storage, presigned URLs for cloud storage
  • Secure file permissions - Local files created with restricted permissions (0600)
  • Cryptographic security - HMAC-SHA256 signed JWT tokens prevent tampering
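
As an illustration of the path traversal check (a sketch, not the server's actual implementation), a handler can resolve each requested path against the upload root and reject anything that escapes it:

package main

import (
    "errors"
    "fmt"
    "path/filepath"
    "strings"
)

// resolveUnderRoot is a hypothetical helper: it joins a requested relative
// path onto the storage root and rejects any result that escapes the root
// (e.g. "../../etc/passwd").
func resolveUnderRoot(root, requested string) (string, error) {
    rootAbs, err := filepath.Abs(root)
    if err != nil {
        return "", err
    }
    full := filepath.Clean(filepath.Join(rootAbs, requested))
    if full != rootAbs && !strings.HasPrefix(full, rootAbs+string(filepath.Separator)) {
        return "", errors.New("path escapes storage root")
    }
    return full, nil
}

func main() {
    if _, err := resolveUnderRoot("./uploads", "../../etc/passwd"); err != nil {
        fmt.Println("rejected:", err) // traversal attempt is refused
    }
}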

File Organization

Files are automatically organized by date:

uploads/
├── 2025/
│   ├── 01/
│   │   ├── 15/
│   │   │   ├── uuid1.pdf
│   │   │   └── uuid2.png
│   │   └── 16/
│   │       └── uuid3.jpg
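
A minimal sketch of how such a path can be built, assuming the stored name is the attachment UUID plus the original file extension (the server's exact naming may differ):

package main

import (
    "fmt"
    "path/filepath"
    "time"
)

// storagePath builds a date-organized path like "2025/01/15/<uuid>.pdf".
// The naming convention shown here is an assumption for illustration.
func storagePath(uploadedAt time.Time, attachmentID, ext string) string {
    return filepath.Join(
        uploadedAt.Format("2006"),
        uploadedAt.Format("01"),
        uploadedAt.Format("02"),
        attachmentID+ext,
    )
}

func main() {
    fmt.Println(storagePath(time.Date(2025, 1, 15, 0, 0, 0, 0, time.UTC), "uuid1", ".pdf"))
    // Output: 2025/01/15/uuid1.pdf
}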

Configuration Examples

Development (Local Storage)

{
  "storage": {
    "backend": "local",
    "local": {
      "root_dir": "./dev-uploads",
      "signing_key": "dev-secret-key-change-in-production"
    }
  }
}

Production (S3 with IAM)

{
  "storage": {
    "backend": "s3", 
    "s3": {
      "region": "us-west-2",
      "bucket": "prod-openaccounting-files",
      "prefix": "attachments"
    }
  }
}

Cost-Optimized (Backblaze B2)

{
  "storage": {
    "backend": "b2",
    "b2": {
      "account_id": "${B2_ACCOUNT_ID}",
      "application_key": "${B2_APP_KEY}",
      "bucket": "openaccounting-prod"
    }
  }
}

Local Storage Security (JWT Tokens)

Local storage now implements JWT-based access control, matching the security model of S3 presigned URLs:

How It Works

  1. File Upload: Files are stored with secure permissions (0600) in date-organized directories
  2. URL Generation: When requesting file access, the server generates a JWT token containing:
    • File path
    • User ID and Organization ID (for audit trails)
    • Expiry time (default 1 hour)
    • Cryptographic signature (HMAC-SHA256)
  3. File Access: Files are served through /secure-files?token=... endpoint with token validation
  4. Security: Tokens expire automatically and cannot be tampered with

JWT Token Example

/secure-files?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
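
A minimal sketch of this token flow, assuming the github.com/golang-jwt/jwt/v5 library and illustrative claim and helper names (the server's actual claim layout may differ):

package main

import (
    "fmt"
    "log"
    "time"

    "github.com/golang-jwt/jwt/v5"
)

var signingKey = []byte("your-secret-jwt-signing-key")

// issueFileToken signs a short-lived token carrying the file path plus
// user/org context for auditing. Claim names here are illustrative.
func issueFileToken(path, userID, orgID string) (string, error) {
    claims := jwt.MapClaims{
        "path":   path,
        "userId": userID,
        "orgId":  orgID,
        "exp":    time.Now().Add(time.Hour).Unix(), // 1-hour expiry
    }
    return jwt.NewWithClaims(jwt.SigningMethodHS256, claims).SignedString(signingKey)
}

// validateFileToken verifies the HMAC-SHA256 signature and expiry,
// returning the file path the token grants access to.
func validateFileToken(tokenString string) (string, error) {
    tok, err := jwt.Parse(tokenString, func(t *jwt.Token) (interface{}, error) {
        return signingKey, nil
    }, jwt.WithValidMethods([]string{"HS256"}))
    if err != nil {
        return "", err
    }
    claims := tok.Claims.(jwt.MapClaims)
    path, _ := claims["path"].(string)
    return path, nil
}

func main() {
    tok, err := issueFileToken("2025/01/15/uuid1.pdf", "user-123", "org-456")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("/secure-files?token=" + tok)

    path, err := validateFileToken(tok)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("grants access to:", path)
}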

Security Benefits

  • Time-Limited Access: URLs expire after 1 hour by default
  • Tamper-Proof: HMAC-SHA256 signatures prevent token modification
  • Audit Trail: Tokens include user and organization context
  • No Direct Access: Files cannot be accessed without valid tokens
  • Secure Permissions: Files created with 0600 permissions (owner only)

Signing Key Configuration

The signing key should be:

  • Unique per deployment to prevent cross-deployment token reuse
  • Kept secret and not committed to version control
  • At least 32 characters for security (recommended)
  • Set via environment variable OA_STORAGE_LOCAL_SIGNINGKEY

If no signing key is provided, the server will auto-generate one (but tokens won't persist across restarts).
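
For example, a suitable key can be generated once and exported as an environment variable; any cryptographically random value of at least 32 bytes works:

package main

import (
    "crypto/rand"
    "encoding/hex"
    "fmt"
    "log"
)

func main() {
    // 32 random bytes -> 64 hex characters, comfortably above the
    // recommended minimum key length.
    key := make([]byte, 32)
    if _, err := rand.Read(key); err != nil {
        log.Fatal(err)
    }
    fmt.Println("OA_STORAGE_LOCAL_SIGNINGKEY=" + hex.EncodeToString(key))
}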

Migration Between Storage Backends

When changing storage backends, existing attachments remain in the old storage location. Each database record stores the file's storage path, so existing files stay accessible until they are migrated.

To migrate:

  1. Update configuration to new backend
  2. Restart server
  3. New uploads will use the new backend
  4. Optional: Run a migration script to move existing files (see the sketch below)
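
No official migration tool is documented here; a hypothetical script could iterate the stored attachment paths and copy each file between backends along these lines (the Storage interface shown is an assumption for illustration only):

package migrate

import (
    "context"
    "io"
    "log"
)

// Storage is a hypothetical minimal view of a storage backend, used only
// for this illustration; the server's real interface may differ.
type Storage interface {
    Open(ctx context.Context, path string) (io.ReadCloser, error)
    Save(ctx context.Context, path string, r io.Reader) error
}

// Migrate copies every known attachment path from the old backend to the new one.
func Migrate(ctx context.Context, oldStore, newStore Storage, paths []string) error {
    for _, p := range paths {
        src, err := oldStore.Open(ctx, p)
        if err != nil {
            return err
        }
        if err := newStore.Save(ctx, p, src); err != nil {
            src.Close()
            return err
        }
        src.Close()
        log.Printf("migrated %s", p)
    }
    return nil
}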

Environment-Specific Considerations

Self-Hosted

  • Use local storage for simplicity
  • Ensure backup strategy includes upload directory
  • Consider disk space management

Cloud Deployment

  • Use S3 or B2 for reliability and scalability
  • Configure proper IAM policies
  • Enable versioning and lifecycle policies

Multi-Region

  • Use cloud storage with appropriate region selection
  • Consider CDN integration for better performance

Troubleshooting

Storage backend not initialized:

  • Check configuration syntax
  • Verify credentials for cloud backends
  • Ensure storage directories/buckets exist

Permission denied:

  • Check file system permissions for local storage
  • Verify IAM policies for S3
  • Confirm B2 application key permissions

Large file uploads failing:

  • Check MaxFileSize configuration
  • Verify network timeouts
  • Consider multipart upload for large files