Introduction

Konarr is a blazing-fast, lightweight web interface for monitoring the supply chains of your servers, clusters, and containers, tracking their dependencies and vulnerabilities. Written in Rust 🦀, it keeps resource usage minimal while delivering real-time insight into your software bill of materials (SBOM) and security posture.

Key Features

  • Simple, easy-to-use web interface with both light and dark modes
  • Blazing fast performance with minimal resource usage (written in Rust 🦀)
  • Real-time container monitoring using industry-standard scanners (Syft, Grype, Trivy)
  • Orchestration support for Docker, Docker Compose, and Kubernetes
  • Software Bill of Materials (SBOM) generation and management for containers
  • Supply chain attack monitoring (in development 🚧)
  • CycloneDX support (v1.5 and v1.6) for SBOM formats

Architecture

Konarr follows a simple server + agent architecture:

  • Server: Built with Rust and the Rocket framework (source)

  • Agent / CLI: Rust-based CLI (konarr-cli) that:

    • Runs in monitoring mode (watches Docker socket for container events)
    • Generates SBOMs using configurable tools (Syft, Grype, Trivy)
    • Uploads snapshots and vulnerability data to the server
    • Supports auto-creation of projects
    • Can auto-install and update scanning tools
  • Extensible tooling:

    • Tool discovery and management system
    • Support for multiple package managers
    • Standardized SBOM and vulnerability report uploading

Technologies Used

Konarr is built with modern, high-performance technologies:

Backend:

  • Rust with the Rocket web framework
  • Figment for layered configuration management

Frontend:

  • Separate web client (konarr-client repository) built with Node.js and npm
  • Light and dark mode support

Database:

  • SQLite for lightweight, embedded data storage
  • GeekORM for type-safe database operations
  • Automatic migrations and schema management

Security & Standards:

  • CycloneDX (v1.5 and v1.6) SBOM formats
  • Bearer-token authentication for agents

Container & Deployment:

  • Official container images on GitHub Container Registry
  • Docker, Podman, Docker Compose, and Kubernetes deployments

Getting Started

  1. Install the Server - See Server Installation
  2. Configure Authentication - Retrieve the agent token from the server
  3. Deploy Agents - See Agent Installation to monitor your containers
  4. Monitor Projects - View SBOMs and vulnerabilities in the web interface

For a quick start using Docker, see our installation guide.


Project Repository: https://github.com/42ByteLabs/konarr
Frontend Repository: https://github.com/42ByteLabs/konarr-client
Container Images: Available on GitHub Container Registry
License: Apache 2.0

Installation and Setup

This section provides multiple installation methods for Konarr. Choose the method that best fits your environment:

  • Quick Start: One-line installer script
  • Docker Compose: Full stack with server and agent
  • Individual Components: Install server and agent separately
  • From Source: Build from the GitHub repository

The fastest way to get Konarr running is with Docker:

# Run Konarr server
docker run -d \
  --name konarr \
  -p 9000:9000 \
  -v ./data:/data \
  -v ./config:/config \
  ghcr.io/42bytelabs/konarr:latest

This will start the Konarr server with:

  • Web interface accessible at http://localhost:9000
  • Data persistence in ./data directory
  • Optional configuration in ./config/konarr.yml

Docker Compose Setup

For a complete development environment with both server and frontend:

# Clone repository
git clone https://github.com/42ByteLabs/konarr.git && cd konarr
git submodule update --init --recursive

# Start services
docker-compose up -d

This provides:

  • Konarr server on port 9000
  • Development setup with both server and frontend
  • Persistent data volumes
  • Automatic service management

Component Installation

For detailed setup of individual components:

Prerequisites

For Container Deployment:

  • Docker (v20.10+) or Podman (v3.0+)
  • Docker Compose (optional, for multi-container setup)

For Source Installation:

  • Rust and Cargo (latest stable)
  • Node.js and npm (for frontend build)
  • Git (for cloning repository)

System Requirements:

  • Minimum: 256MB RAM, 1GB disk space
  • Recommended: 512MB+ RAM, 5GB+ disk space (for SBOM storage)

Quick Workflow

  1. Start the server (port 9000 by default)
  2. Access the web UI at http://localhost:9000
  3. Retrieve the agent token from server settings or database
  4. Deploy agents on hosts you want to monitor
  5. Create projects to organize your container monitoring
  6. View SBOMs and vulnerabilities in the web interface
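The workflow above can be condensed into a short session, using commands that appear elsewhere in this guide (URLs and paths are the defaults; adjust for your environment):

```shell
# 1-2. Start the server and confirm the API responds
docker run -d --name konarr -p 9000:9000 -v ./data:/data \
  ghcr.io/42bytelabs/konarr:latest
curl -f http://localhost:9000/api/health

# 3. Retrieve the agent token from the SQLite database
AGENT_TOKEN=$(sqlite3 ./data/konarr.db \
  "SELECT value FROM server_settings WHERE name='agent.key';")

# 4-5. Deploy an agent on a host to monitor; auto-create projects
docker run -d --name konarr-agent \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -e KONARR_INSTANCE="http://localhost:9000" \
  -e KONARR_AGENT_TOKEN="$AGENT_TOKEN" \
  -e KONARR_AGENT_MONITORING=true \
  -e KONARR_AGENT_AUTO_CREATE=true \
  ghcr.io/42bytelabs/konarr-agent:latest
```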

Default Ports and Paths

  • Server Port: 9000 (HTTP)
  • Data Directory: ./data (contains SQLite database)
  • Config Directory: ./config (contains konarr.yml)
  • Database: ./data/konarr.db (SQLite)
  • Agent Token: Stored in server settings as agent.key

Verifying and Troubleshooting

  1. Start the server and open the UI: http://localhost:9000 (or configured host).
  2. Start the agent with the correct instance URL, token and a project id or auto-create enabled.
  3. Confirm snapshots appear in the project view and the server shows the agent as authenticated.

Common troubleshooting

  • Agent authentication failures: double-check KONARR_AGENT_TOKEN value and ensure the server agent.key matches.
  • Missing scanner binaries: either enable agent.tool_auto_install or install syft/grype/trivy on the host/container and make sure they are on PATH or in /usr/local/toolcache.
  • Frontend not served when running server from source: build frontend (client/) and point server frontend config to the dist directory.
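To diagnose the missing-scanner case, a quick check of which tools the agent can find (a sketch; the toolcache path matches the default mentioned above):

```shell
# Check whether the supported scanners are reachable on PATH
for tool in syft grype trivy; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: $("$tool" --version 2>/dev/null | head -n1)"
  else
    echo "$tool: not found on PATH"
  fi
done

# The agent's tool cache is another place binaries may live
ls -la /usr/local/toolcache/ 2>/dev/null || echo "no toolcache directory"
```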

Next Steps: Continue with the Konarr Server section below to set up the central server, then deploy agents.

Konarr Server

The Konarr server is the central component providing the REST API, web interface, and data storage. It's built with Rust using the Rocket framework and stores data in SQLite by default.

Server Implementation: The server is implemented in server/src/main.rs with API routes in server/src/api and data models in src/models.

Installation Methods

Single Container:

docker run -d \
  --name konarr \
  -p 9000:9000 \
  -v ./data:/data \
  -v ./config:/config \
  ghcr.io/42bytelabs/konarr:latest

Key Points:

  • Server listens on port 9000
  • Data persisted in ./data (SQLite database)
  • Configuration in ./config (optional konarr.yml)
  • Automatic database migrations on startup

Docker Compose

For production deployments, see our Docker Compose guide which includes:

  • Service definitions
  • Volume management
  • Health checks
  • Upgrade procedures

Cargo Installation

Install the server binary directly:

# Install from crates.io
cargo install konarr-server

# Run with default configuration
konarr-server

# Run with custom config
konarr-server -c ./konarr.yml

Note: Cargo installation is not recommended for production use.

From Source (Development)

Requirements:

  • Rust and Cargo (latest stable)
  • Node.js and npm (for frontend)
  • Git

Clone and Build:

# Clone repository with frontend submodule
git clone https://github.com/42ByteLabs/konarr.git && cd konarr
git submodule update --init --recursive

# Build frontend
cd frontend && npm install && npm run build && cd ..

# Run server (development mode)
cargo run -p konarr-server

# Or build and run release
cargo run -p konarr-server --release -- -c ./konarr.yml

Development with Live Reload:

# Watch mode for server changes
cargo watch -q -c -- cargo run -p konarr-server

# Frontend development (separate terminal)
cd frontend && npm run dev

This creates:

  • Default config: config/konarr.yml
  • SQLite database: data/konarr.db
  • Server on port 9000 (default for all modes)

Configuration

Environment Variables

The server uses Figment for configuration, supporting environment variables with KONARR_ prefix:

# Server settings
export KONARR_SERVER__PORT=9000
export KONARR_DATA_PATH=/data
export KONARR_FRONTEND__URL=https://konarr.example.com

# Database settings  
export KONARR_DATABASE__PATH=/data/konarr.db

# Security
export KONARR_SECRET=your-secret-key

Configuration File

Create konarr.yml for persistent settings:

server:
  host: "0.0.0.0"
  port: 9000
  data_path: "/data"
  frontend:
    url: "https://konarr.example.com"
  secret: "your-secret-key"

database:
  path: "/data/konarr.db"
  
agent:
  key: "your-agent-key"  # Optional: will be generated if not provided

Agent Token Management

The server automatically generates an agent authentication key on first startup, stored as agent.key in ServerSettings.

Retrieving the Agent Token

Method 1: Database Query

sqlite3 ./data/konarr.db "SELECT value FROM server_settings WHERE name='agent.key';"

Method 2: Configuration File

If you set the agent key in konarr.yml, use that value.

Method 3: Web UI

Access server settings through the admin interface (requires authentication).

⚠️ Security: Treat the agent token as a secret. Do not commit to version control or share publicly.

Production Deployment

Reverse Proxy Setup

See Reverse Proxy Setup for detailed configuration examples.

Security Recommendations

  • Use HTTPS: Configure TLS termination at the reverse proxy
  • Set frontend URL: Update server.frontend.url to match external URL
  • Secure volumes: Protect ./data and ./config with appropriate file permissions
  • Stable secrets: Set server.secret to a strong, persistent value
  • Regular backups: Back up the SQLite database before upgrades
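For the backup recommendation, SQLite's online .backup command produces a consistent copy even while the server is running. A sketch using the default paths from this guide:

```shell
# Create a dated, consistent snapshot of the live database
STAMP=$(date +%F)
sqlite3 ./data/konarr.db ".backup './data/konarr-backup-$STAMP.db'"

# Verify the snapshot is a valid SQLite database
sqlite3 "./data/konarr-backup-$STAMP.db" "PRAGMA integrity_check;"
```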

Resource Requirements

  • Minimum: 256MB RAM, 1GB disk
  • Recommended: 512MB+ RAM, 5GB+ disk (for SBOM storage)
  • CPU: Scales with number of concurrent users and agent uploads

Monitoring

Monitor server health:

# Health check endpoint
curl http://localhost:9000/api/health

# Container logs
docker logs -f konarr

# Database size
du -h ./data/konarr.db

Next Steps: Configure and deploy agents to start monitoring containers.

Server Docker Compose

This page provides a ready-to-use Docker Compose example and notes for deploying the Konarr Server in a multi-container environment (e.g., the web service plus persistent data and config volumes). The example focuses on the official Konarr image and mounting persistent volumes for data and config.

docker-compose example

Save the following as docker-compose.yml in your deployment directory and adjust paths and environment variables as needed:

services:
  konarr:
    image: ghcr.io/42bytelabs/konarr:latest
    container_name: konarr
    restart: unless-stopped
    ports:
      - "9000:9000"
    volumes:
      - ./data:/data
      - ./config:/config
    environment:
      # Use KONARR_ prefixed env vars to configure the server if you prefer env-based config
      - KONARR_DATA_PATH=/data
      - KONARR_CONFIG_PATH=/config/konarr.yml
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:9000/api/health || exit 1"]
      interval: 30s
      timeout: 5s
      retries: 3

Deploy

# start in detached mode
docker compose up -d

Monitor logs

# show logs
docker compose logs -f konarr

Volumes and persistent data

  • ./data stores the SQLite database and other runtime state — back this up regularly.
  • ./config stores konarr.yml (optional). If you want immutable configuration, mount a read-only config volume and supply environment variables for secrets.

Backups and migrations

  • Backup the data/konarr.db file before performing upgrades.
  • On first run the server will run migrations; ensure your backup is taken before major version upgrades.

Upgrading the image

  1. Pull the new image: docker compose pull konarr
  2. Restart the service: docker compose up -d --no-deps konarr
  3. Monitor logs for migrations: docker compose logs -f konarr
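Putting the upgrade steps together with a pre-upgrade database backup, as a sketch:

```shell
#!/bin/sh
set -e

# 0. Back up the database before upgrading
cp ./data/konarr.db "./data/konarr-$(date +%F).db.bak"

# 1. Pull the new image
docker compose pull konarr

# 2. Restart just the konarr service
docker compose up -d --no-deps konarr

# 3. Watch the logs for migrations
docker compose logs -f konarr
```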

Notes

  • The server listens on port 9000 by default.
  • Use a reverse proxy or load balancer in front of the service for TLS termination in production.
  • For security, protect the config and data directories and do not expose the database file to untrusted users.

See Also: Kubernetes Deployment and Reverse Proxy Setup.

Kubernetes Deployment

This guide covers deploying Konarr server and agents on Kubernetes clusters, including configuration, security considerations, and operational best practices.

Overview

Konarr can be deployed on Kubernetes using standard manifests or Helm charts. The deployment typically includes:

  • Konarr Server: Web interface, API, and database
  • Konarr Agents: Container monitoring and SBOM generation (optional)
  • Supporting Resources: ConfigMaps, Secrets, Services, and storage

Prerequisites

  • Kubernetes cluster (v1.20+)
  • kubectl configured to access your cluster
  • Persistent storage support (for database persistence)
  • LoadBalancer or Ingress controller (for external access)

Quick Start

Minimal Deployment

# konarr-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: konarr
---
# konarr-server.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: konarr-server
  namespace: konarr
  labels:
    app: konarr-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: konarr-server
  template:
    metadata:
      labels:
        app: konarr-server
    spec:
      containers:
      - name: konarr-server
        image: ghcr.io/42bytelabs/konarr:latest
        ports:
        - containerPort: 9000
        env:
        - name: KONARR_DATA_PATH
          value: "/data"
        volumeMounts:
        - name: data
          mountPath: /data
        resources:
          requests:
            memory: "256Mi"
            cpu: "100m"
          limits:
            memory: "512Mi"
            cpu: "500m"
      volumes:
      - name: data
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: konarr-server
  namespace: konarr
spec:
  selector:
    app: konarr-server
  ports:
  - port: 9000
    targetPort: 9000
  type: ClusterIP

Deploy the minimal setup:

kubectl apply -f konarr-namespace.yaml
kubectl apply -f konarr-server.yaml
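To verify the minimal deployment without an Ingress, you can port-forward the Service and hit the health endpoint (a sketch):

```shell
# Wait for the deployment to become available
kubectl wait --for=condition=available --timeout=120s \
  deployment/konarr-server -n konarr

# Forward local port 9000 to the Service and check health
kubectl port-forward -n konarr svc/konarr-server 9000:9000 &
sleep 2
curl -f http://localhost:9000/api/health
kill %1
```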

Troubleshooting

Common Issues

  1. Agent Permission Issues:
# Check agent logs
kubectl logs -n konarr -l app=konarr-agent

# Verify RBAC permissions
kubectl auth can-i get pods --as=system:serviceaccount:konarr:konarr-agent
  2. Storage Issues:
# Check PVC status
kubectl get pvc -n konarr

# Check storage class
kubectl get storageclass
  3. Network Connectivity:
# Test internal service connectivity
kubectl exec -n konarr deployment/konarr-agent -- curl http://konarr-server:9000/api/health

# Check ingress status
kubectl get ingress -n konarr

Debug Commands

# Get all Konarr resources
kubectl get all -n konarr

# Check events
kubectl get events -n konarr --sort-by='.lastTimestamp'

# Debug pod issues
kubectl describe pod -n konarr -l app=konarr-server

# Check logs
kubectl logs -n konarr deployment/konarr-server --follow

Deployment Scripts

Complete Deployment Script

#!/bin/bash
# deploy-konarr.sh

set -e

NAMESPACE="konarr"
DOMAIN="konarr.example.com"

echo "Creating namespace..."
kubectl create namespace ${NAMESPACE} --dry-run=client -o yaml | kubectl apply -f -

echo "Generating secrets..."
AGENT_TOKEN=$(openssl rand -base64 32)
SERVER_SECRET=$(openssl rand -base64 32)

kubectl create secret generic konarr-secrets \
  --namespace=${NAMESPACE} \
  --from-literal=agent-token=${AGENT_TOKEN} \
  --from-literal=server-secret=${SERVER_SECRET} \
  --dry-run=client -o yaml | kubectl apply -f -

echo "Deploying Konarr server..."
envsubst < konarr-server.yaml | kubectl apply -f -

echo "Deploying Konarr agents..."
kubectl apply -f konarr-agent-daemonset.yaml

echo "Configuring ingress..."
envsubst < konarr-ingress.yaml | kubectl apply -f -

echo "Waiting for deployment..."
kubectl wait --for=condition=available --timeout=300s deployment/konarr-server -n ${NAMESPACE}

echo "Konarr deployed successfully!"
echo "Access at: https://${DOMAIN}"
echo "Agent token: ${AGENT_TOKEN}"

Migration from Docker

Data Migration

# Copy data from Docker volume to Kubernetes PV
kubectl cp /var/lib/docker/volumes/konarr_data/_data/konarr.db \
  konarr/konarr-server-pod:/data/konarr.db
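Because SQLite is a single file, stop the old server before copying so the file is quiescent. A fuller sketch (the label selector app=konarr-server matches the minimal manifest above):

```shell
# Stop the old Docker-based server so the database file stops changing
docker stop konarr

# Find the running Konarr server pod
POD=$(kubectl get pod -n konarr -l app=konarr-server \
  -o jsonpath='{.items[0].metadata.name}')

# Copy the database into the pod's data volume
kubectl cp ./data/konarr.db "konarr/$POD:/data/konarr.db"

# Recreate the pod so the server reopens the copied database
kubectl delete pod -n konarr "$POD"
```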

Best Practices

Resource Management

  • Use resource requests and limits
  • Configure appropriate storage classes
  • Implement monitoring and alerting
  • Use horizontal pod autoscaling for high-traffic deployments

Security

  • Run as non-root user
  • Use read-only root filesystems where possible
  • Implement network policies
  • Regular security updates and scanning

Operations

  • Implement proper backup strategies
  • Monitor resource usage and performance
  • Use GitOps for configuration management
  • Regular testing of disaster recovery procedures
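One way to implement the backup practice on Kubernetes is to take SQLite's online backup inside the server pod and copy it out (a sketch; assumes sqlite3 is available in the image):

```shell
# Locate the server pod
POD=$(kubectl get pod -n konarr -l app=konarr-server \
  -o jsonpath='{.items[0].metadata.name}')

# Create a consistent snapshot inside the pod
kubectl exec -n konarr "$POD" -- \
  sqlite3 /data/konarr.db ".backup '/data/konarr-backup.db'"

# Copy the snapshot to the local machine
kubectl cp "konarr/$POD:/data/konarr-backup.db" ./konarr-backup.db
```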


Reverse Proxy Setup

Running Konarr behind a reverse proxy is recommended for production deployments to provide TLS termination, load balancing, and additional security features.

Nginx

Basic Configuration

server {
    listen 80;
    server_name konarr.example.com;
    
    # Redirect HTTP to HTTPS
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name konarr.example.com;
    
    # SSL configuration
    ssl_certificate /path/to/certificate.crt;
    ssl_certificate_key /path/to/private.key;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;
    
    # Security headers
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    
    # Proxy configuration
    location / {
        proxy_pass http://127.0.0.1:9000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        
        # WebSocket support (if needed for future features)
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        
        # Timeouts
        proxy_connect_timeout 30s;
        proxy_send_timeout 30s;
        proxy_read_timeout 30s;
        
        # Buffer settings
        proxy_buffering on;
        proxy_buffer_size 4k;
        proxy_buffers 8 4k;
    }
    
    # API endpoints with longer timeouts for large SBOM uploads
    location ~ ^/api/(snapshots|upload) {
        proxy_pass http://127.0.0.1:9000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        
        # Extended timeouts for large uploads
        proxy_connect_timeout 60s;
        proxy_send_timeout 300s;
        proxy_read_timeout 300s;
        
        # Increase client max body size for SBOM uploads
        client_max_body_size 50M;
    }
    
    # Health check endpoint
    location /api/health {
        proxy_pass http://127.0.0.1:9000;
        access_log off;
    }
}

Let's Encrypt with Certbot

Automatically obtain and renew SSL certificates:

# Install certbot
sudo apt update
sudo apt install certbot python3-certbot-nginx

# Obtain certificate
sudo certbot --nginx -d konarr.example.com

# Test automatic renewal
sudo certbot renew --dry-run

Traefik

Docker Compose Configuration

version: '3.8'

services:
  traefik:
    image: traefik:v3.0
    container_name: traefik
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./traefik.yml:/traefik.yml:ro
      - ./acme.json:/acme.json
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.dashboard.rule=Host(`traefik.example.com`)"
      - "traefik.http.routers.dashboard.tls.certresolver=letsencrypt"

  konarr:
    image: ghcr.io/42bytelabs/konarr:latest
    container_name: konarr
    restart: unless-stopped
    volumes:
      - ./data:/data
      - ./config:/config
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.konarr.rule=Host(`konarr.example.com`)"
      - "traefik.http.routers.konarr.tls.certresolver=letsencrypt"
      - "traefik.http.services.konarr.loadbalancer.server.port=9000"
      # Health check
      - "traefik.http.services.konarr.loadbalancer.healthcheck.path=/api/health"
      - "traefik.http.services.konarr.loadbalancer.healthcheck.interval=30s"
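Traefik requires the acme.json certificate store referenced above to exist with owner-only permissions before it will start issuing certificates:

```shell
# ACME storage must exist and be private before Traefik will use it
touch acme.json
chmod 600 acme.json

# Then start the stack: docker compose up -d
```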

Traefik Configuration (traefik.yml)

api:
  dashboard: true

entryPoints:
  web:
    address: ":80"
    http:
      redirections:
        entrypoint:
          to: websecure
          scheme: https
  websecure:
    address: ":443"

providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    exposedByDefault: false

certificatesResolvers:
  letsencrypt:
    acme:
      email: admin@example.com
      storage: acme.json
      httpChallenge:
        entryPoint: web

Caddy

Caddyfile Configuration

konarr.example.com {
    reverse_proxy 127.0.0.1:9000 {
        # Caddy sets X-Forwarded-For and X-Forwarded-Proto automatically
        header_up X-Real-IP {remote_host}
        
        # Health check
        health_uri /api/health
        health_interval 30s
        health_timeout 10s
    }
    
    # Security headers
    header {
        X-Frame-Options DENY
        X-Content-Type-Options nosniff
        X-XSS-Protection "1; mode=block"
        Strict-Transport-Security "max-age=31536000; includeSubDomains"
    }
    
    # Longer timeouts for API uploads
    @api_uploads path /api/snapshots* /api/upload*
    reverse_proxy @api_uploads 127.0.0.1:9000 {
        transport http {
            response_header_timeout 300s
        }
    }
}

Apache HTTP Server

Virtual Host Configuration

<VirtualHost *:80>
    ServerName konarr.example.com
    Redirect permanent / https://konarr.example.com/
</VirtualHost>

<VirtualHost *:443>
    ServerName konarr.example.com
    
    # SSL Configuration
    SSLEngine on
    SSLCertificateFile /path/to/certificate.crt
    SSLCertificateKeyFile /path/to/private.key
    SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1
    SSLCipherSuite ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384
    
    # Security Headers
    Header always set X-Frame-Options DENY
    Header always set X-Content-Type-Options nosniff
    Header always set X-XSS-Protection "1; mode=block"
    Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"
    
    # Proxy Configuration
    ProxyPreserveHost On
    ProxyPass / http://127.0.0.1:9000/
    ProxyPassReverse / http://127.0.0.1:9000/
    
    # Forward protocol information to the backend
    RequestHeader set X-Forwarded-Proto "https"
    RequestHeader set X-Forwarded-Port "443"
    
    SetEnvIf X-Forwarded-Proto https HTTPS=on
</VirtualHost>

HAProxy

Load Balancing Configuration

global
    daemon
    maxconn 4096

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend konarr_frontend
    bind *:80
    bind *:443 ssl crt /path/to/certificate.pem
    
    # Redirect HTTP to HTTPS
    redirect scheme https if !{ ssl_fc }
    
    # Security headers
    http-response set-header X-Frame-Options DENY
    http-response set-header X-Content-Type-Options nosniff
    http-response set-header Strict-Transport-Security "max-age=31536000; includeSubDomains"
    
    default_backend konarr_backend

backend konarr_backend
    balance roundrobin
    
    # Health check
    option httpchk GET /api/health
    
    # Backend servers
    server konarr1 127.0.0.1:9000 check
    # server konarr2 127.0.0.1:9001 check  # Additional instances

Security Considerations

Rate Limiting

Configure rate limiting at the reverse proxy level:

Nginx:

# Rate limiting configuration
limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=upload:10m rate=1r/s;

location /api/ {
    limit_req zone=api burst=20 nodelay;
}

location ~ ^/api/(snapshots|upload) {
    limit_req zone=upload burst=5 nodelay;
}

Traefik:

# Add to service labels
- "traefik.http.middlewares.ratelimit.ratelimit.burst=20"
- "traefik.http.middlewares.ratelimit.ratelimit.average=10"
- "traefik.http.routers.konarr.middlewares=ratelimit"

IP Whitelisting

Restrict access to specific IP ranges:

Nginx:

# Allow specific networks
allow 10.0.0.0/8;
allow 192.168.0.0/16;
allow 172.16.0.0/12;
deny all;

Authentication Middleware

Add basic authentication at the proxy level:

Nginx:

location / {
    auth_basic "Konarr Access";
    auth_basic_user_file /etc/nginx/.htpasswd;
    proxy_pass http://127.0.0.1:9000;
}
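The auth_basic_user_file referenced above can be created with htpasswd (from apache2-utils) or, if that tool is unavailable, with openssl (the username admin and password here are placeholders):

```shell
# Non-interactive: -b takes the password as an argument, -B uses bcrypt
sudo htpasswd -cbB /etc/nginx/.htpasswd admin 'change-this-password'

# Alternative without htpasswd, using openssl's apr1 hash
printf 'admin:%s\n' "$(openssl passwd -apr1 'change-this-password')" \
  | sudo tee /etc/nginx/.htpasswd >/dev/null

# Reload nginx to apply
sudo nginx -s reload
```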

Monitoring and Logging

Access Logs

Configure detailed logging for monitoring:

Nginx:

log_format konarr_format '$remote_addr - $remote_user [$time_local] '
                         '"$request" $status $body_bytes_sent '
                         '"$http_referer" "$http_user_agent" '
                         '$request_time $upstream_response_time';

access_log /var/log/nginx/konarr.access.log konarr_format;

Health Checks

Set up monitoring for the reverse proxy and backend:

#!/bin/bash
# Simple health check script
curl -f -s https://konarr.example.com/api/health > /dev/null
if [ $? -eq 0 ]; then
    echo "Konarr is healthy"
    exit 0
else
    echo "Konarr health check failed"
    exit 1
fi

Configuration Notes

Backend URL Configuration

Update Konarr server configuration to use the external URL:

# konarr.yml
server:
  frontend:
    url: "https://konarr.example.com"

CORS Configuration

If needed, configure CORS headers:

# Add CORS headers if required
add_header Access-Control-Allow-Origin "https://trusted-domain.com";
add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS";
add_header Access-Control-Allow-Headers "Authorization, Content-Type";

Konarr Agent

The Konarr Agent (konarr-cli) is a Rust-based command-line tool that monitors containers, generates SBOMs, and uploads security data to the Konarr server. It can run as a one-shot scanner or in continuous monitoring mode.

Installation Methods

Basic Agent Run:

docker run -it --rm \
  -e KONARR_INSTANCE="http://your-server:9000" \
  -e KONARR_AGENT_TOKEN="<AGENT_TOKEN>" \
  -e KONARR_PROJECT_ID="<PROJECT_ID>" \
  ghcr.io/42bytelabs/konarr-agent:latest

Container Monitoring Mode:

docker run -d \
  --name konarr-agent \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -e KONARR_INSTANCE="http://your-server:9000" \
  -e KONARR_AGENT_TOKEN="<AGENT_TOKEN>" \
  -e KONARR_AGENT_MONITORING=true \
  -e KONARR_AGENT_AUTO_CREATE=true \
  ghcr.io/42bytelabs/konarr-agent:latest

🔐 Security Warning: Mounting the Docker socket (/var/run/docker.sock) grants the container significant control over the host system. This includes the ability to:

  • Create privileged containers
  • Access host filesystem through volume mounts
  • Escalate privileges
  • Inspect all running containers

Security Mitigations:

  • Only run on trusted hosts with trusted images
  • Use read-only mounts when possible (:ro)
  • Consider using a dedicated host agent instead of containerized agent
  • Limit agent runtime permissions
  • Monitor agent activity closely
  • Consider using container runtimes with safer introspection APIs

Cargo Installation

Install the CLI binary directly:

# Install from crates.io
cargo install konarr-cli

# Run agent
konarr-cli --instance http://your-server:9000 \
  --agent-token <AGENT_TOKEN> \
  agent --docker-socket /var/run/docker.sock

# One-shot scan
konarr-cli --instance http://your-server:9000 \
  --agent-token <AGENT_TOKEN> \
  scan --image alpine:latest

Specialized Agent Images

Syft-only Agent:

FROM ghcr.io/42bytelabs/konarr-cli:latest
RUN curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin

Configuration

Quick Configuration

The agent requires minimal configuration to get started:

Environment Variables:

# Required settings
export KONARR_INSTANCE="http://konarr.example.com:9000"
export KONARR_AGENT_TOKEN="your-agent-token"

# Optional - enable monitoring mode
export KONARR_AGENT_MONITORING=true
export KONARR_AGENT_AUTO_CREATE=true

Configuration File (konarr.yml):

agent:
  project_id: "my-project"
  create: true           # Auto-create projects
  monitoring: true       # Watch Docker events
  tool: "syft"          # Primary SBOM tool
  tool_auto_install: true

For comprehensive configuration options, security settings, and production deployment examples, see Agent Configuration Details.

CLI Commands

Agent Mode (Continuous Monitoring):

# Monitor with config file
konarr-cli --config ./konarr.yml agent --docker-socket /var/run/docker.sock

# Monitor with environment variables set
konarr-cli agent --docker-socket /var/run/docker.sock

# Monitor specific Docker socket
konarr-cli agent --docker-socket /custom/docker.sock

Scan Mode (One-time Scan):

# Scan specific image
konarr-cli scan --image alpine:latest

# Scan with output to file
konarr-cli --config ./konarr.yml scan --image alpine:latest --output scan-results.json

Tool Management:

# List available tools
konarr-cli scan --list

# Install specific tool
konarr-cli tools install syft

# Check tool versions
konarr-cli tools list

Scanning Tools

The agent uses external scanning tools for SBOM generation and vulnerability detection. Three tools are supported:

  • Syft - Primary SBOM generation tool
  • Grype - Vulnerability scanning
  • Trivy - Comprehensive security scanning

The agent can automatically install these tools when needed:

# Enable auto-install (default in container images)
export KONARR_AGENT_TOOL_AUTO_INSTALL=true

# Or manually install specific tools
konarr-cli tools install syft

For detailed information about each tool, installation options, and configuration, see Scanning Tools.

Project Management

Project Creation

The agent can automatically create projects or upload to existing ones:

# Auto-create project (default behavior)
export KONARR_AGENT_AUTO_CREATE=true

# Use existing project ID
export KONARR_AGENT_PROJECT_ID="existing-project-123"

Project Naming Convention:

  • Docker Compose: {prefix}/{container_name}
  • Labeled containers: {prefix}/{image-title}
  • Default: Container name or image name

Container Filtering

The agent automatically monitors containers but can be configured to filter:

# Example filtering (implementation-dependent)
agent:
  monitoring: true
  filters:
    exclude_labels:
      - "konarr.ignore=true"
    include_only:
      - "environment=production"

Docker Compose Integration

For container monitoring via Docker Compose, see our Agent Docker Compose guide.

Example docker-compose.yml service:

services:
  konarr-agent:
    image: ghcr.io/42bytelabs/konarr-agent:latest
    container_name: konarr-agent
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      - KONARR_INSTANCE=http://konarr-server:9000
      - KONARR_AGENT_TOKEN=${KONARR_AGENT_TOKEN}
      - KONARR_AGENT_MONITORING=true
      - KONARR_AGENT_AUTO_CREATE=true
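The ${KONARR_AGENT_TOKEN} reference above is resolved by Docker Compose from the shell environment or from an .env file beside the compose file; keeping the secret in a permission-restricted .env avoids hard-coding it:

```shell
# Store the token next to the compose file, readable only by you
printf 'KONARR_AGENT_TOKEN=%s\n' '<AGENT_TOKEN>' > .env
chmod 600 .env

# Then start the agent: docker compose up -d
```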

Troubleshooting

Common Issues

Agent Authentication Failed:

# Verify token
echo $KONARR_AGENT_TOKEN

# Test server connection
curl -H "Authorization: Bearer $KONARR_AGENT_TOKEN" \
  http://your-server:9000/api/health

Tool Installation Issues:

# Check tool availability
konarr-cli tools list

# Manual tool install
konarr-cli tools install syft

# Check tool cache
ls -la /usr/local/toolcache/

Docker Socket Issues:

# Verify Docker socket access
docker ps

# Check socket permissions
ls -la /var/run/docker.sock

Monitoring Agent Status

# Container logs
docker logs -f konarr-agent

# Agent health (if running as daemon)
konarr-cli agent status

# Server-side agent status
curl http://your-server:9000/api/agents

Next Steps: Configure monitoring and view results in the Konarr web interface.

Docker Compose for Agent

This document shows how to run the Konarr Agent via Docker Compose — useful for running a long-lived agent that monitors a host and uploads snapshots.

docker-compose example (monitoring host Docker)

Save as docker-compose-agent.yml and run from the host you want to monitor.

version: '3.8'
services:
  konarr-agent:
    image: ghcr.io/42bytelabs/konarr-agent:latest
    container_name: konarr-agent
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      - KONARR_INSTANCE=http://your-server:9000
      - KONARR_AGENT_TOKEN=<AGENT_TOKEN>
      - KONARR_AGENT_MONITORING=true
      - KONARR_AGENT_TOOL_AUTO_INSTALL=true

Notes and security

  • The compose example mounts the Docker socket as read-only. Even a read-only mount still exposes the Docker API, which is a sensitive control surface; follow the security guidance in 02-agent.md before using this in production.
  • Use a secrets manager (or Docker secrets) to provide the agent token in production rather than hard-coding it in the compose file.
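One way to avoid hard-coding the token is Docker Compose secrets. This is a sketch that assumes the agent honors the KONARR_AGENT_TOKEN_FILE variable shown in the production compose example later in this guide:

```yaml
services:
  konarr-agent:
    image: ghcr.io/42bytelabs/konarr-agent:latest
    environment:
      # Read the token from a mounted secret instead of an inline value
      - KONARR_AGENT_TOKEN_FILE=/run/secrets/konarr_agent_token
    secrets:
      - konarr_agent_token

secrets:
  konarr_agent_token:
    file: ./agent.token   # keep this file out of version control
```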

Run

docker compose -f docker-compose-agent.yml up -d

Upgrading

docker compose -f docker-compose-agent.yml pull konarr-agent
docker compose -f docker-compose-agent.yml up -d --no-deps konarr-agent

See Also:

Configuration & Usage

This page provides an overview of Konarr configuration and common usage workflows for the Server, Web UI, and Agent (CLI).

Configuration Sources and Precedence

Konarr uses a configuration merging strategy (Figment in the server code):

  1. konarr.yml configuration file (if present)
  2. Environment variables
  3. Command-line flags (where present)

Environment variables are supported and commonly used for container deployments. The server and agent use prefixed environment variables to avoid collisions:

  • Server-wide env vars: prefix with KONARR_ (e.g., KONARR_DATA_PATH, KONARR_DATABASE_URL)
  • Agent-specific env vars: prefix with KONARR_AGENT_ (e.g., KONARR_AGENT_TOKEN, KONARR_AGENT_MONITORING)
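For example, the prefixed variables can be inspected before launch to confirm exactly what the process will inherit (the values here are placeholders):

```shell
# Placeholder values; any KONARR_-prefixed variable set here
# overrides the matching key from konarr.yml at startup.
export KONARR_DATA_PATH=/data
export KONARR_AGENT_MONITORING=true

# Show every Konarr-related variable the process will inherit
env | grep '^KONARR_' | sort
```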

Container Defaults

Packaged defaults (container images):

  • Data path: /data (exposed as KONARR_DATA_PATH=/data)
  • Config file path: /config/konarr.yml (mount /config to provide konarr.yml)
  • HTTP port: 9000
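These defaults map directly onto container mounts. A minimal Compose sketch (paths and port match the packaged defaults above):

```yaml
services:
  konarr-server:
    image: ghcr.io/42bytelabs/konarr:latest
    ports:
      - "9000:9000"          # default HTTP port
    volumes:
      - ./config:/config     # provide /config/konarr.yml here
      - ./data:/data         # SQLite database persists under /data
    environment:
      - KONARR_DATA_PATH=/data
```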

Configuration Overview

Konarr configuration is organized into several main sections:

Server Configuration

The server configuration controls the web interface, API, database, and security settings.

Key areas:

  • Network settings (domain, port, scheme)
  • Security settings (secrets, authentication)
  • Database configuration
  • Frontend configuration
  • Session management

For detailed server configuration, see: Server Configuration

Agent Configuration

The agent configuration controls how agents connect to the server, which projects they target, and how they scan containers.

Key areas:

  • Server connectivity and authentication
  • Project targeting and auto-creation
  • Docker monitoring and scanning
  • Security tool management
  • Resource limits and filtering

For detailed agent configuration, see: Agent Configuration Overview

Sample Complete Configuration

# Basic konarr.yml example
server:
  domain: "konarr.example.com"
  port: 9000
  scheme: "https"
  secret: "your-secret-key"

data_path: "/var/lib/konarr"

database:
  path: "/var/lib/konarr/konarr.db"

agent:
  token: "your-agent-token"
  project_id: "123"
  monitoring: true
  tool_auto_install: true

sessions:
  admins:
    expires: 8
  users:
    expires: 24

CLI Usage (konarr-cli)

Global Flags

| Argument | Description |
| --- | --- |
| --config <path> | Path to a konarr.yml configuration file |
| --instance <url> | Konarr server URL (example: http://your-server:9000) |
| --agent-token <token> | Agent token for authentication (or use KONARR_AGENT_TOKEN env var) |
| --debug | Enable debug logging |
| --project-id <id> | Project ID for operations |

Common Subcommands

| Subcommand | Description |
| --- | --- |
| agent | Run the agent in monitoring mode with optional --docker-socket |
| scan | Scan container images with --image, --list, --output |
| upload-sbom | Upload SBOM file with --input, --snapshot-id |
| database | Database operations (create, user, cleanup) |
| tasks | Run maintenance tasks |

Agent Example

# Run agent with Docker socket monitoring
konarr-cli --instance http://your-server:9000 --agent-token <AGENT_TOKEN> agent --docker-socket /var/run/docker.sock

Scan Example

# Scan a container image
konarr-cli --instance http://your-server:9000 --agent-token <AGENT_TOKEN> scan --image alpine:latest

# List available tools
konarr-cli scan --list

Enable debug logging for troubleshooting with the --debug flag.


Configuration Validation

Test Configuration

Before deploying to production, validate your configuration:

# Test server configuration (development)
cargo run -p konarr-server -- --config konarr.yml

# Test agent with debug logging
konarr-cli --config konarr.yml --debug agent

# Check configuration loading
konarr-cli --config konarr.yml --debug

Environment Variable Check

# List all Konarr environment variables
env | grep KONARR_ | sort

# Run with debug to see configuration loading
konarr-cli --debug

Additional Resources

For detailed configuration options and examples:

For additional help, see the troubleshooting guide or visit the Konarr GitHub repository.

Server Configuration

This section covers the basic configuration and setup of the Konarr server. The server is the central component that provides the REST API, web interface, and data storage capabilities.

Quick Start

Basic Configuration File

Create a konarr.yml configuration file:

# Basic server configuration
server:
  domain: "localhost"
  port: 9000
  scheme: "http"
  secret: "your-secure-secret-key-here"

# Data storage location
data_path: "./data"

# Agent authentication
agent:
  key: "your-agent-key-here"

Running the Server

Start the server with your configuration:

# Using Docker (recommended)
docker run -d \
  --name konarr-server \
  -p 9000:9000 \
  -v $(pwd)/konarr.yml:/app/konarr.yml \
  -v $(pwd)/data:/data \
  ghcr.io/42bytelabs/konarr:latest

# Using binary
konarr-server --config konarr.yml

# Using cargo (development)
cargo run --bin konarr-server -- --config konarr.yml

Essential Configuration

Network Settings

| Setting | Description | Default |
| --- | --- | --- |
| server.domain | Server hostname | localhost |
| server.port | HTTP port | 9000 |
| server.scheme | Protocol (http/https) | http |

Security Settings

| Setting | Description | Required |
| --- | --- | --- |
| server.secret | Application secret for sessions | Yes |
| agent.key | Agent authentication token | Optional |

Storage Settings

| Setting | Description | Default |
| --- | --- | --- |
| data_path | Database and data directory | ./data |

Verification

Health Check

Verify your server is running correctly:

# Test server health
curl http://localhost:9000/api

# Expected response includes server version and status

Web Interface

Access the web interface at: http://localhost:9000

Next Steps

Common Issues

Database Initialization

The server automatically creates the SQLite database on first run. Ensure the data_path directory is writable.
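A quick pre-flight check covers this (assuming data_path is ./data, as in the quick-start example):

```shell
# Create the data directory if missing and confirm it is writable;
# the server creates the SQLite database inside it on first run.
DATA_PATH=./data
mkdir -p "$DATA_PATH"
if [ -w "$DATA_PATH" ]; then
  echo "data path is writable"
else
  echo "data path is NOT writable" >&2
fi
```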

Port Conflicts

If port 9000 is in use, change server.port in your configuration file or use Docker port mapping.
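Alternatively, keep the container's internal port at 9000 and remap it on the host. A sketch using Compose port mapping:

```yaml
services:
  konarr-server:
    image: ghcr.io/42bytelabs/konarr:latest
    ports:
      - "8080:9000"   # host port 8080 -> container port 9000
```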

Launching the Server

This page covers starting the Konarr server and verifying it's running correctly. For comprehensive web interface usage, see the Web Interface Guide.

Starting the Server

# Using Docker
docker run -d \
  --name konarr-server \
  -p 9000:9000 \
  -v $(pwd)/data:/data \
  ghcr.io/42bytelabs/konarr:latest

# Using Docker Compose
docker-compose up -d konarr-server

Using Pre-built Binary

# Download and extract binary
curl -L https://github.com/42ByteLabs/konarr/releases/latest/download/konarr-server-linux-x86_64.tar.gz | tar xz

# Run server
./konarr-server

From Source

# Build and run from source
git clone https://github.com/42ByteLabs/konarr.git
cd konarr
cargo run --bin konarr-server

Verifying Server Status

Health Check

Test that the server is running and accessible:

# Basic health check
curl -v http://localhost:9000/api/health

# Expected response:
# HTTP/1.1 200 OK
# {"status":"healthy","version":"x.x.x"}

Server Logs

Monitor server startup and operation:

# Docker logs
docker logs -f konarr-server

# Binary logs (with RUST_LOG=info)
RUST_LOG=info ./konarr-server

Initial Access

Web Interface

Open the server URL in your browser (default port 9000):

http://localhost:9000

First-Time Setup

  1. Web Interface: Navigate to the web interface to verify it loads correctly
  2. Admin Account: Create or configure admin access if required
  3. Agent Token: Retrieve the auto-generated agent token for agent setup

For detailed web interface usage, navigation, and features, see the Web Interface Guide.

Configuration

Basic Configuration

Create a konarr.yml file for persistent settings:

server:
  domain: "localhost"
  port: 9000
  scheme: "http"

data_path: "/data"

Environment Variables

Override configuration with environment variables:

export KONARR_SERVER_PORT=8080
export KONARR_DATA_PATH=/custom/data/path
./konarr-server

For complete configuration options, see:

Next Steps

After launching the server:

  1. Web Interface - Learn to use the web interface
  2. Agent Setup - Configure agents to monitor containers
  3. Security Setup - Implement production security practices
  4. Reverse Proxy - Set up HTTPS and production deployment

Server Configuration

This page documents comprehensive server-specific configuration options, environment variable mappings, and production deployment examples.

Configuration Implementation: Server configuration is managed through Figment and defined in src/utils/config/server.rs and src/utils/config/config.rs.


Core Server Settings

Network Configuration

| Configuration | Description | Default |
| --- | --- | --- |
| server.domain | Server domain/hostname | localhost |
| server.port | HTTP port | 9000 |
| server.scheme | URL scheme, http or https | http |
| server.cors | Enable CORS for API access | true |
| server.api | API endpoint prefix | /api |

Security Settings

| Configuration | Description |
| --- | --- |
| server.secret | Application secret for sessions and JWT tokens. Required for production |
| agent.key | Agent authentication token. Auto-generated if not provided |

Data and Storage

| Configuration | Description | Default |
| --- | --- | --- |
| data_path | Directory for SQLite database and application data | /data |
| server.frontend | Path to frontend static files | frontend/build |

URL Configuration

| Configuration | Description |
| --- | --- |
| server.frontend.url | Externally accessible URL for generating links in emails and redirects |

Complete Configuration Example

# Complete server configuration
server:
  # Network settings
  domain: "konarr.example.com"
  port: 9000
  scheme: "https"
  cors: true
  api: "/api"
  
  # Security settings
  secret: "your-very-strong-secret-key-here"
  
  # Frontend configuration
  frontend: "/app/dist"
  
# Data storage
data_path: "/var/lib/konarr"

# Database configuration
database:
  path: "/var/lib/konarr/konarr.db"
  token: null  # For remote databases

# Session configuration
sessions:
  admins:
    expires: 1    # hours
  users:
    expires: 24   # hours
  agents:
    expires: 360  # hours

# Agent authentication
agent:
  key: "your-agent-token"  # Auto-generated if not provided

Advanced Server Settings

Cleanup Configuration

# Automatic cleanup settings
cleanup:
  enabled: true
  timer: 90  # days to keep old snapshots

Security Features

# Security scanning and vulnerability management
security:
  enabled: true
  rescan: true
  advisories_pull: true

Registration Settings

# User registration control
registration:
  enabled: false  # Disable public registration

Environment Variables

All server settings can be overridden with environment variables using the KONARR_SERVER_ prefix:

# Network configuration
export KONARR_SERVER_DOMAIN=konarr.example.com
export KONARR_SERVER_PORT=9000
export KONARR_SERVER_SCHEME=https
export KONARR_SERVER_CORS=true

# Security settings
export KONARR_SERVER_SECRET="your-production-secret"

# Data paths
export KONARR_DATA_PATH=/var/lib/konarr
export KONARR_DB_PATH=/var/lib/konarr/konarr.db

# Frontend configuration
export KONARR_SERVER_FRONTEND=/app/dist
export KONARR_CLIENT_PATH=/app/dist

Database Configuration

SQLite (Default)

database:
  path: "/var/lib/konarr/konarr.db"

Remote Database (LibSQL/Turso)

database:
  path: "libsql://your-database-url"
  token: "your-database-token"

Environment variables:

export KONARR_DB_PATH="libsql://your-database-url"
export KONARR_DB_TOKEN="your-database-token"

Production Deployment Settings

Minimal Production Configuration

server:
  domain: "konarr.yourdomain.com"
  port: 9000
  scheme: "https"
  secret: "change-me"  # generate a value with: openssl rand -base64 32

data_path: "/var/lib/konarr"

database:
  path: "/var/lib/konarr/konarr.db"

sessions:
  admins:
    expires: 8   # 8 hours for admin sessions
  users:
    expires: 24  # 24 hours for user sessions

cleanup:
  enabled: true
  timer: 30    # Keep snapshots for 30 days

registration:
  enabled: false  # Disable public registration

security:
  enabled: true

Container-Specific Settings

When running in containers, these environment variables are commonly used:

# Rocket framework settings
export ROCKET_ADDRESS=0.0.0.0
export ROCKET_PORT=9000

# Konarr-specific paths
export KONARR_DATA_PATH=/data
export KONARR_DB_PATH=/data/konarr.db
export KONARR_SERVER_FRONTEND=/app/dist

# Security
export KONARR_SERVER_SECRET="$(openssl rand -base64 32)"

For more information, see:

export KONARR_DATA_PATH=/data
export KONARR_FRONTEND__URL=https://konarr.example.com

The project's config merging uses Figment, which supports nesting via separators (commonly __ in environment names). If an env mapping does not take effect, prefer using konarr.yml or CLI flags.

Persistence and backups

  • Mount a host directory under /data in container deployments to persist the SQLite DB (data/konarr.db).
  • Regularly back up the DB file before upgrades: cp data/konarr.db data/konarr.db.bak.
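The backup step can be scripted with a timestamped copy. This sketch creates a placeholder DB so it is self-contained; point DB at your real data/konarr.db and stop the server first so the copy is consistent:

```shell
# Timestamped backup of the SQLite database before an upgrade.
DB=data/konarr.db
mkdir -p data
touch "$DB"                      # placeholder; your real DB already exists
BACKUP="$DB.bak.$(date +%Y%m%d)"
cp "$DB" "$BACKUP"
echo "backup written to $BACKUP"
```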

Additional Resources:

Agent Configuration Overview

The Konarr Agent (konarr-cli) is a powerful Rust-based command-line tool that monitors containers, generates Software Bill of Materials (SBOMs), and uploads security data to the Konarr server. This section provides comprehensive guidance for configuring and deploying agents in various environments.

Implementation: The agent CLI is implemented in cli/src with agent-specific functionality in cli/src/cli/agent.rs. Configuration is managed through src/utils/config/client.rs.


Agent Overview

The Konarr Agent serves as the data collection component of the Konarr ecosystem, responsible for:

  • Container Monitoring: Continuous monitoring of Docker containers and their states
  • SBOM Generation: Creating comprehensive Software Bill of Materials using industry-standard tools
  • Vulnerability Scanning: Integration with security scanners like Syft, Grype, and Trivy
  • Project Management: Automatic creation and organization of projects based on container metadata
  • Real-time Updates: Live detection of container changes and automated snapshot creation

Key Features

  • Multi-tool Support: Works with Syft, Grype, Trivy, and other security scanning tools
  • Auto-discovery: Automatically detects and monitors running containers
  • Flexible Deployment: Runs as Docker container, standalone binary, or system service
  • Smart Snapshots: Creates new snapshots only when changes are detected
  • Metadata Enrichment: Automatically adds container and system metadata to snapshots

Core Capabilities

Container Discovery and Monitoring

The agent automatically discovers running containers and organizes them into projects:

  • Docker Integration: Direct integration with Docker daemon via socket
  • Container Metadata: Extracts labels, environment variables, and runtime information
  • Project Hierarchy: Supports parent-child project relationships for complex deployments
  • State Tracking: Monitors container lifecycle events and state changes

SBOM Generation and Management

  • Multiple Formats: Supports CycloneDX, SPDX, and other SBOM standards
  • Tool Integration: Seamlessly integrates with popular scanning tools
  • Dependency Analysis: Comprehensive dependency tracking and version management
  • Incremental Updates: Only generates new SBOMs when container contents change

Security and Vulnerability Management

  • Real-time Scanning: Continuous vulnerability assessment of monitored containers
  • Multi-source Data: Aggregates vulnerability data from multiple security databases
  • Risk Assessment: Provides severity analysis and impact evaluation
  • Alert Integration: Automatically creates security alerts for discovered vulnerabilities

Operation Modes

One-shot Scanning

Execute a single scan operation and exit:

# Scan specific container image
konarr-cli scan --image nginx:latest

# Upload existing SBOM
konarr-cli upload-sbom --input sbom.json --snapshot-id 123

Monitoring Mode

Continuous monitoring with Docker socket access:

# Monitor containers with Docker socket
konarr-cli agent --docker-socket /var/run/docker.sock

# Monitor with project ID specified
konarr-cli --config konarr.yml --project-id 456 agent --docker-socket /var/run/docker.sock

Agent as Service

Background service operation:

# Run agent with configuration file
konarr-cli --config /etc/konarr/konarr.yml agent --docker-socket /var/run/docker.sock

# Docker container with volume persistence
docker run -d --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v /etc/konarr:/config \
  ghcr.io/42bytelabs/konarr-agent:latest

Configuration Approaches

Environment Variables

Quick setup using environment variables:

export KONARR_INSTANCE="https://konarr.example.com"
export KONARR_AGENT_TOKEN="kagent_..."
export KONARR_AGENT_MONITORING=true
export KONARR_AGENT_AUTO_CREATE=true
export KONARR_AGENT_HOST="production-server-01"

Configuration File

Structured configuration using YAML:

# konarr.yml
instance: "https://konarr.example.com"

agent:
  token: "kagent_..."
  monitoring: true
  create: true
  host: "production-server-01"
  project_id: 123
  
  # Tool configuration
  tool_auto_install: true
  tool_auto_update: true
  toolcache_path: "/usr/local/toolcache"

Command Line Arguments

Direct configuration via CLI arguments:

konarr-cli \
  --instance https://konarr.example.com \
  --agent-token kagent_... \
  agent \
  --monitoring \
  --auto-create

Quick Start Examples

Basic Container Monitoring

# Docker container with minimal configuration
docker run -d \
  --name konarr-agent \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -e KONARR_INSTANCE="http://your-server:9000" \
  -e KONARR_AGENT_TOKEN="<AGENT_TOKEN>" \
  -e KONARR_AGENT_MONITORING=true \
  -e KONARR_AGENT_AUTO_CREATE=true \
  ghcr.io/42bytelabs/konarr-agent:latest

Production Deployment

# docker-compose.yml
version: '3.8'
services:
  konarr-agent:
    image: ghcr.io/42bytelabs/konarr-agent:latest
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./config:/config
    environment:
      - KONARR_INSTANCE=https://konarr.example.com
      - KONARR_AGENT_TOKEN_FILE=/config/agent.token
      - KONARR_AGENT_MONITORING=true
      - KONARR_AGENT_HOST=production-cluster-01
    networks:
      - konarr-network

Binary Installation

# Install via Cargo
cargo install konarr-cli

# Configure and run
konarr-cli --config /etc/konarr/konarr.yml agent --docker-socket /var/run/docker.sock

Agent Management

Project Organization

  • Auto-creation: Agents can automatically create projects based on container metadata
  • Hierarchical Structure: Support for parent-child project relationships
  • Naming Conventions: Configurable project naming based on container labels or composition
  • Metadata Inheritance: Child projects inherit metadata from parent projects

Tool Management

  • Auto-installation: Automatic download and installation of required scanning tools
  • Version Management: Automatic updates to latest tool versions when configured
  • Custom Tools: Support for custom scanning tools and configurations
  • Tool Caching: Shared tool cache to reduce storage requirements

Security Considerations

  • Token Management: Secure handling of authentication tokens
  • Network Security: TLS/SSL support for secure communication with server
  • Container Security: Minimal container footprint with security best practices
  • Access Control: Granular permissions for different agent operations

Documentation Structure

This agent configuration section is organized into focused guides for different aspects:

CLI Usage Guide

Comprehensive command-line interface documentation covering:

  • All available commands and options
  • Common workflows and use cases
  • Debugging and troubleshooting commands
  • Integration with CI/CD pipelines

Agent Configuration Details

Complete configuration reference including:

  • All configuration options and their effects
  • Environment variable mappings
  • Production deployment configurations
  • Security and authentication settings

Next Steps

Choose the appropriate guide based on your needs:

  1. CLI Usage - Learn command-line operations and workflows
  2. Agent Configuration - Configure agents for your environment
  3. Server Configuration - Set up the Konarr server to work with agents
  4. Web Interface - Monitor and manage agents through the web interface

For installation instructions, see the Agent Installation Guide.

For troubleshooting, see the Troubleshooting Guide.

CLI Usage

This page documents common konarr-cli workflows and command-line operations.

CLI Implementation: The CLI is implemented in cli/src/main.rs with command handlers in cli/src/cli. Agent operations are in cli/src/cli/agent.rs.

Global Options

Configuration

| Argument | Description |
| --- | --- |
| --config <path> | Path to konarr.yml configuration file |
| --instance <url> | Konarr server URL (e.g., http://your-server:9000) |
| --token <agent-token> | Agent authentication token (or use KONARR_AGENT_TOKEN env var) |

Output Control

| Argument | Description |
| --- | --- |
| --verbose / -v | Enable verbose logging for debugging |
| --quiet / -q | Suppress non-essential output |
| --output <format> | Output format: json, yaml, or table (default) |

Core Commands

Command Reference

Agent Subcommand

| Argument | Description |
| --- | --- |
| --docker-socket <path> | Path to Docker socket for container monitoring (default: /var/run/docker.sock) |
| --monitoring | Enable container monitoring mode |
| --project <id> | Target project ID for snapshots |

Scan Subcommand

| Argument | Description |
| --- | --- |
| --image <name> | Container image to scan (e.g., alpine:latest) |
| --path <directory> | Local directory or file to scan |
| --output <file> | Output results to file |
| --list | List available security tools |
| --tool <name> | Specify scanner tool to use |

Upload SBOM Subcommand

| Argument | Description |
| --- | --- |
| --input <file> | Path to SBOM file to upload |
| --snapshot-id <id> | Target snapshot ID for upload |

Tools Subcommand

| Argument | Description |
| --- | --- |
| --tool <name> | Specific tool to install/test (e.g., syft, grype) |
| --all | Apply operation to all tools |
| --path <directory> | Installation path for tools |

Agent Operations

Monitor Containers

Run the agent in monitoring mode to continuously watch Docker containers:

konarr-cli agent \
  --instance http://your-server:9000 \
  --token <AGENT_TOKEN> \
  --docker-socket /var/run/docker.sock

This will:

  • Monitor Docker socket for container events
  • Auto-create projects when agent.create is enabled
  • Generate SBOMs when containers start or change
  • Upload snapshots to the server

Scan Specific Images

Scan and analyze a specific container image:

# Remote image
konarr-cli --project-id <PROJECT_ID> scan --image nginx:1.21

# Local image with a specific scanner tool
konarr-cli --project-id <PROJECT_ID> scan \
  --image local/my-app:latest \
  --tool syft

File System Analysis

Analyze local directories or files:

# Scan a local directory
konarr-cli scan --path ./my-app

# Save output to a file
konarr-cli scan --path ./my-app --output sbom.json

Tool Management

List Available Tools

Show which security scanning tools are available:

konarr-cli scan --list

This will display installed tools and their versions. The agent can automatically install missing tools when agent.tool_auto_install is enabled (see Scanning Tools).

User Management

Create or Reset User Password

The database user command allows you to create new users or reset passwords for existing users. This is an interactive command that prompts for user information:

konarr-cli database user

The command will prompt you for:

  1. Username: The username for the user account
  2. Password: The new password (hidden input)
  3. Role: User role - either Admin or User

Behavior:

  • If the username already exists, the command will update the user's password and role
  • If the username doesn't exist, a new user account will be created
  • This command is useful for password recovery when users forget their credentials

Example session:

$ konarr-cli database user
Username: admin
Password: ********
Role: 
> Admin
  User
User updated successfully

Non-interactive usage:

For automated setups or scripts, you can provide the database path:

konarr-cli --database-url /data/konarr.db database user

Common use cases:

  • Initial admin account creation: Set up the first admin user after installation
  • Password reset: Reset a forgotten user password
  • Role update: Change a user's role from User to Admin or vice versa
  • Emergency access: Regain access when locked out of the web interface

Advanced Usage

Configuration File

Create /etc/konarr/konarr.yml:

instance: https://konarr.company.com
agent:
  token: your-secure-token
  project_id: 123
  monitoring: true
  tool_auto_install: true
  toolcache_path: /usr/local/toolcache
  host: production-server-01
  
tools:
  syft:
    version: "v0.96.0"
    path: /usr/local/bin/syft
  grype:
    version: "v0.74.0"
    path: /usr/local/bin/grype

Run with configuration:

konarr-cli --config /etc/konarr/konarr.yml agent --docker-socket /var/run/docker.sock

Environment Variables

Set defaults via environment:

export KONARR_INSTANCE=https://konarr.company.com
export KONARR_AGENT_TOKEN=your-secure-token
export KONARR_AGENT_MONITORING=true

# Run with environment config
konarr-cli agent --docker-socket /var/run/docker.sock

CI/CD Integration

Use in continuous integration pipelines to scan container images:

# Scan container image in CI pipeline
konarr-cli scan \
  --image $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA \
  --output security-report.json

# Upload SBOM to Konarr server
konarr-cli upload-sbom \
  --input security-report.json

Troubleshooting

Debug Mode

Enable debug logging for troubleshooting:

konarr-cli --debug agent --docker-socket /var/run/docker.sock

Tool Verification

Check which scanner tools are available:

konarr-cli scan --list

Log Analysis

Check agent logs for issues:

# Container logs
docker logs -f konarr-agent

Getting Help

For complete CLI reference, use the built-in help:

konarr-cli --help
konarr-cli agent --help
konarr-cli scan --help
konarr-cli upload-sbom --help
konarr-cli database --help

Agent Configuration

This page documents comprehensive agent-specific configuration options, environment variables, deployment scenarios, and security considerations.

Configuration Implementation: Agent configuration is defined in src/utils/config/client.rs and processed using Figment for flexible configuration management.

Core Agent Settings

Project Management

| Configuration | Description | Default |
| --- | --- | --- |
| agent.project_id | Target project ID for snapshots. Leave empty to auto-create projects | - |
| agent.create | Allow agent to automatically create projects | true |
| agent.host | Friendly hostname identifier for this agent instance | - |

Monitoring and Scanning

| Configuration | Description | Default |
| --- | --- | --- |
| agent.monitoring | Enable Docker container monitoring mode | false |
| agent.tool_auto_install | Automatically install missing security tools | true |
| agent.toolcache_path | Directory for scanner tool binaries | /usr/local/toolcache |

Connectivity

| Configuration | Description |
| --- | --- |
| instance | Konarr server URL (e.g., https://konarr.example.com) |
| agent.token | Authentication token for API access |

Complete Configuration Example

# Server connection
instance: "https://konarr.example.com"

# Agent configuration
agent:
  # Authentication
  token: "your-agent-token-from-server"
  
  # Project settings
  project_id: "123"  # Specific project, or "" to auto-create
  create: true       # Allow project auto-creation
  host: "production-server-01"
  
  # Monitoring settings
  monitoring: true
  scan_interval: 300  # seconds between scans
  
  # Tool management
  tool_auto_install: true
  toolcache_path: "/usr/local/toolcache"
  
  # Scanning configuration
  scan_on_start: true
  scan_on_change: true
  
  # Security tool preferences
  preferred_sbom_tool: "syft"      # syft, trivy
  preferred_vuln_tool: "grype"     # grype, trivy
  
  # Container filtering
  include_patterns:
    - "production/*"
    - "staging/*"
  exclude_patterns:
    - "*/test-*"
    - "*/temp-*"
    
  # Resource limits
  max_concurrent_scans: 3
  scan_timeout: 600  # seconds

Tool Configuration

Security Scanner Tools

# Tool-specific configuration
tools:
  syft:
    version: "v0.96.0"
    path: "/usr/local/bin/syft"
    config:
      exclude_paths:
        - "/tmp"
        - "/var/cache"
      cataloger_scope: "all-layers"
  
  grype:
    version: "v0.74.0"
    path: "/usr/local/bin/grype"
    config:
      fail_on_severity: "high"
      ignore_fixed: false
  
  trivy:
    version: "v0.48.0"
    path: "/usr/local/bin/trivy"
    config:
      skip_db_update: false
      timeout: "10m"

Environment Variables

Agent settings can be configured via environment variables with the KONARR_AGENT_ prefix:

# Server connection
export KONARR_INSTANCE="https://konarr.example.com"
export KONARR_AGENT_TOKEN="your-agent-token"

# Project configuration
export KONARR_AGENT_PROJECT_ID="123"
export KONARR_AGENT_CREATE=true
export KONARR_AGENT_HOST="production-server-01"

# Monitoring settings
export KONARR_AGENT_MONITORING=true
export KONARR_AGENT_SCAN_INTERVAL=300

# Tool management
export KONARR_AGENT_TOOL_AUTO_INSTALL=true
export KONARR_AGENT_TOOLCACHE_PATH="/usr/local/toolcache"

# Resource settings
export KONARR_AGENT_MAX_CONCURRENT_SCANS=3
export KONARR_AGENT_SCAN_TIMEOUT=600

Container Agent Configuration

Docker Socket Access

When running the agent in a container with Docker monitoring enabled:

# Security warning: Docker socket access grants significant privileges
agent:
  monitoring: true
  docker_socket: "/var/run/docker.sock"
  
  # Security controls
  docker_security:
    require_readonly: true
    filter_by_labels: true
    allowed_networks:
      - "production"
      - "staging"

Environment Variables for Containers

# Core settings
export KONARR_INSTANCE="https://konarr.example.com"
export KONARR_AGENT_TOKEN="your-token"
export KONARR_AGENT_MONITORING=true

# Container-specific paths
export KONARR_AGENT_TOOLCACHE_PATH="/usr/local/toolcache"

# Security settings
export KONARR_AGENT_DOCKER_SOCKET="/var/run/docker.sock"
export KONARR_AGENT_SECURITY_READONLY=true

Production Agent Deployment

High-Security Environment

# Air-gapped or high-security configuration
agent:
  tool_auto_install: false  # Disable auto tool installation
  toolcache_path: "/opt/security-tools"
  
  # Pre-approved tool versions
  tools:
    syft:
      path: "/opt/security-tools/syft"
      version: "v0.96.0"
      checksum: "sha256:abc123..."
    grype:
      path: "/opt/security-tools/grype"
      version: "v0.74.0"
      checksum: "sha256:def456..."
  
  # Strict scanning policies
  scan_config:
    fail_on_error: true
    require_signature_verification: true
    max_scan_size: "1GB"
    timeout: 300

Multi-Environment Agent

# Development/staging/production agent
agent:
  host: "${ENVIRONMENT}-server-${HOSTNAME}"
  project_id: "${KONARR_PROJECT_ID}"
  
  # Environment-specific settings
  monitoring: true
  scan_interval: 600  # 10 minutes
  
  # Conditional scanning based on environment
  scan_filters:
    development:
      scan_on_change: true
      include_test_images: true
    production:
      scan_on_change: false
      scan_schedule: "0 2 * * *"  # Daily at 2 AM
      exclude_test_images: true

Agent Authentication and Security

Token Management

# Retrieve agent token from server
export AGENT_TOKEN=$(curl -s -X GET \
  -H "Authorization: Bearer ${ADMIN_TOKEN}" \
  https://konarr.example.com/api/admin/settings | \
  jq -r '.settings.agentKey')

export KONARR_AGENT_TOKEN="${AGENT_TOKEN}"

Security Best Practices

  1. Rotate tokens regularly: Generate new agent tokens periodically
  2. Limit permissions: Use dedicated service accounts for agents
  3. Network security: Restrict agent network access to Konarr server only
  4. Audit logging: Enable detailed logging for agent activities
  5. Resource limits: Set appropriate CPU/memory limits for agent containers
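
Points 3 and 5 can be enforced at the container level. A minimal docker-compose sketch, with illustrative limit values you should tune to your environment:

```yaml
# Sketch only: resource limits and a read-only Docker socket for the agent.
services:
  konarr-agent:
    image: ghcr.io/42bytelabs/konarr-agent:latest
    environment:
      KONARR_INSTANCE: "https://konarr.example.com"
      KONARR_AGENT_TOKEN: "${KONARR_AGENT_TOKEN}"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    mem_limit: 512m   # illustrative cap
    cpus: 0.5         # illustrative cap
```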

export KONARR_AGENT_URL="http://konarr.example.com:9000"
export KONARR_AGENT_TOKEN="your-token-here"

Tooling and installation

  • The agent will look for syft, grype, or trivy on PATH and in agent.toolcache_path.
  • For secure environments, pre-install approved tool versions into agent.toolcache_path and set agent.tool_auto_install to false.
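
The lookup order above can be sketched in shell — PATH first, then the toolcache directory (the fallback path is the container default):

```shell
# Sketch of the agent's tool lookup order: PATH first, then the toolcache.
# Adjust KONARR_AGENT_TOOLCACHE_PATH for custom layouts.
find_tool() {
  name="$1"
  toolcache="${KONARR_AGENT_TOOLCACHE_PATH:-/usr/local/toolcache}"
  if command -v "$name" >/dev/null 2>&1; then
    command -v "$name"          # found on PATH
  elif [ -x "$toolcache/$name" ]; then
    echo "$toolcache/$name"     # found in the toolcache
  else
    return 1                    # not installed anywhere we looked
  fi
}

find_tool syft || echo "syft not found; pre-install it or enable auto-install"
```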


Scanning Tools

Konarr uses industry-standard security scanning tools to generate Software Bill of Materials (SBOM) and detect vulnerabilities in container images. The agent supports multiple tools, each with specific capabilities and features.

Implementation: Tool integration is implemented in src/tools with specific implementations for Syft, Grype, and Trivy. The tool catalogue system is in src/utils/catalogue.

Supported Tools

Konarr supports three primary scanning tools:

Syft

Syft is an open-source SBOM generation tool from Anchore that catalogs software packages and dependencies across various formats.

Features:

  • SBOM Generation: Creates comprehensive Software Bill of Materials in multiple formats (SPDX, CycloneDX)
  • Multi-Language Support: Detects packages from NPM, Cargo, Deb, RPM, PyPI, Maven, Go, and more
  • Container Layer Analysis: Scans all layers of container images
  • File System Cataloging: Analyzes installed packages, language-specific packages, and binaries
  • Fast Performance: Optimized for quick scanning of large images

Konarr Implementation:

  • Primary tool for SBOM generation
  • Auto-install supported in agent
  • Cataloger scope can be configured (all-layers, squashed)
  • Path exclusion support for temporary directories
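
When tuning Syft directly, the same options map onto its own config file. A hedged `.syft.yaml` sketch — verify key names against your Syft version:

```yaml
# Sketch: pin the cataloger scope and exclude noisy paths.
scope: all-layers        # or "squashed"
exclude:
  - "/tmp/**"
  - "/var/cache/**"
```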

Links: https://github.com/anchore/syft

Grype

Grype is a vulnerability scanner from Anchore that matches packages against known CVE databases.

Features:

  • Vulnerability Detection: Scans software packages for known vulnerabilities
  • Multiple Database Sources: Uses multiple vulnerability databases including NVD, Alpine SecDB, RHEL Security Data
  • SBOM Analysis: Can scan SBOMs generated by Syft or other tools
  • Severity Filtering: Filter results by vulnerability severity (critical, high, medium, low)
  • Format Support: Works with all Syft-supported package formats
  • Regular Updates: The vulnerability database updates automatically

Konarr Implementation:

  • Used for vulnerability scanning after SBOM generation
  • Auto-install supported in agent
  • Configurable severity thresholds
  • Option to ignore fixed vulnerabilities
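
Grype's own config file can express similar policies. A hedged `.grype.yaml` sketch — verify key names against your Grype version:

```yaml
# Sketch: fail scans at a severity threshold and document accepted risks.
fail-on-severity: high
ignore:
  - vulnerability: CVE-2023-12345   # hypothetical ID; accepted risk, tracked elsewhere
```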

Links: https://github.com/anchore/grype

Trivy

Trivy is a comprehensive security scanner from Aqua Security that detects vulnerabilities, misconfigurations, and secrets.

Features:

  • Multi-Format Scanning: Detects vulnerabilities in OS packages, language dependencies, and application dependencies
  • Container Image Scanning: Comprehensive container image analysis
  • IaC Scanning: Scans Infrastructure as Code files (Terraform, CloudFormation, Kubernetes)
  • Secret Detection: Finds exposed secrets and credentials
  • Misconfiguration Detection: Identifies security misconfigurations
  • SBOM Support: Can generate and consume SBOMs in multiple formats

Konarr Implementation:

  • Alternative security scanning tool with broader detection capabilities
  • Auto-install supported in agent
  • Configurable database update behavior
  • Timeout settings for long-running scans
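
Trivy reads equivalent settings from a `trivy.yaml` file. A hedged sketch — verify key names against your Trivy version:

```yaml
# Sketch: cap scan duration and control database refreshes.
timeout: 10m
db:
  skip-update: false   # set true only if you refresh the DB out of band
```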

Links: https://github.com/aquasecurity/trivy

Tool Selection

Selecting a Tool

You can configure which tool the agent uses through environment variables or configuration files:

Environment Variable:

# Select the primary scanning tool
export KONARR_AGENT_TOOL="syft"  # or "grype", "trivy"

Configuration File (konarr.yml):

agent:
  tool: "syft"  # Primary tool for scanning
  tool_auto_install: true
  tool_auto_update: false

Tool Installation

The agent can automatically install missing tools:

# Enable auto-install (default in container images)
export KONARR_AGENT_TOOL_AUTO_INSTALL=true

# Manual tool installation
konarr-cli tools install --tool syft
konarr-cli tools install --tool grype
konarr-cli tools install --tool trivy

Checking Installed Tools

List installed tools and their versions:

konarr-cli tools list

Output example:

Tool     Version    Status      Path
syft     v0.96.0    Installed   /usr/local/bin/syft
grype    v0.74.0    Installed   /usr/local/bin/grype
trivy    v0.48.0    Missing     -

Viewing Tool Usage

In the Web Interface

When viewing a snapshot in the Konarr web interface, you can see which tool was used to generate the SBOM and scan for vulnerabilities:

  1. Navigate to a project
  2. Click on a specific snapshot
  3. The snapshot details will show the tool used for scanning

Via API

Query the snapshot details through the API to see tool information:

curl -H "Authorization: Bearer $KONARR_AGENT_TOKEN" \
  http://your-server:9000/api/snapshots/{snapshot_id}

The response includes metadata about the scanning tool used.
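
To script against this, extract the field with jq. The payload below is hypothetical — the exact field names are an assumption, so inspect a real response from your server first:

```shell
# Hypothetical snapshot response shape; field names are assumptions.
response='{"id": 42, "metadata": {"tool": "syft", "tool_version": "v0.96.0"}}'
tool=$(echo "$response" | jq -r '.metadata.tool')
echo "Snapshot scanned with: $tool"
```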

In Agent Logs

The agent logs show which tool is being used for each scan:

# View agent logs in container mode
docker logs konarr-agent

# Look for lines indicating tool usage
# Example: "Using syft for SBOM generation"

Tool Configuration

Storage Locations

Tools are stored in the following locations:

Environment    Path
Container      /usr/local/toolcache/
Host install   ~/.local/bin/ or /usr/local/bin/
Custom         Set via KONARR_AGENT_TOOLCACHE_PATH

Advanced Configuration

Configure agent tool settings in konarr.yml:

agent:
  tool: "syft"
  tool_auto_install: true
  tool_auto_update: false

Tool Comparison

Feature                  Syft                                    Grype              Trivy
SBOM Generation          ✅ Primary                              ❌                 ✅
Vulnerability Scanning   ❌                                      ✅ Primary         ✅
Package Managers         NPM, Cargo, Deb, RPM, PyPI, Maven, Go   All Syft formats   Multi-format
Secret Detection         ❌                                      ❌                 ✅
IaC Scanning             ❌                                      ❌                 ✅
Auto-Install             ✅                                      ✅                 ✅
Speed                    Fast                                    Fast               Moderate

Troubleshooting

Tool Installation Issues

If tools fail to install automatically:

# Check tool availability
konarr-cli tools list

# Manual tool install
konarr-cli tools install --tool syft

# Check tool cache
ls -la /usr/local/toolcache/

Tool Version Conflicts

Verify tool versions and compatibility:

konarr-cli tools version

Disabling Auto-Install

For secure environments, disable auto-install and pre-install approved versions:

agent:
  tool_auto_install: false  # Disable automatic installation
  toolcache_path: "/usr/local/toolcache"  # Pre-installed tool location
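
With auto-install off, you can gate scans on the pinned checksums from your configuration. A minimal sketch (paths and digests are illustrative):

```shell
# Sketch: verify a pre-installed scanner against a pinned checksum before
# trusting it, matching the checksum fields in the high-security config above.
verify_tool() {
  tool_path="$1"     # e.g. /usr/local/toolcache/syft
  expected="$2"      # pinned sha256 (hex digest only, no "sha256:" prefix)
  actual="$(sha256sum "$tool_path" | awk '{print $1}')"
  if [ "$actual" = "$expected" ]; then
    echo "ok: $tool_path"
  else
    echo "checksum mismatch: $tool_path" >&2
    return 1
  fi
}
```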


Web Interface

This guide covers how to use the Konarr web interface to monitor your containers, view SBOMs, manage projects, and track security vulnerabilities.

Frontend Implementation: The web interface is built with Vue.js 3 and TypeScript. Views are located in src/views and components in src/components.

Accessing the Web Interface

Basic Access

Open the server URL in your browser (default port 9000):

http://<konarr-host>:9000

Examples:

  • Local development: http://localhost:9000
  • Network deployment: http://your-server-ip:9000
  • Custom domain: https://konarr.example.com

Behind Reverse Proxy

If the server is behind a reverse proxy or load balancer, use the external HTTPS URL configured in server.frontend.url.

For reverse proxy setup, see the Reverse Proxy Setup Guide.
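
For orientation, a minimal nginx sketch of that setup (TLS and hardening omitted; hostnames are illustrative):

```nginx
# Sketch only: proxy external traffic to the Konarr server on port 9000.
server {
    listen 80;
    server_name konarr.example.com;

    location / {
        proxy_pass http://127.0.0.1:9000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```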

Authentication

The web interface uses session-based authentication:

  1. Session Authentication - Login through the web interface to obtain session cookies
  2. Admin Access - Required for server settings, user management, and advanced features
  3. User Access - Standard access for viewing projects, snapshots, and alerts

Main Interface Areas

The Konarr web interface is organized into several main sections:

📁 Projects

  • Purpose: Logical groups representing hosts, applications, or container clusters
  • Contents: Each project contains snapshots, SBOMs, and security data
  • Features:
    • Project hierarchy (parent/child relationships)
    • Project types: Containers, Groups, Applications, Servers
    • Status indicators (online/offline)
    • Search and filtering capabilities

See the Projects documentation for detailed information on project management and the project view.

📸 Snapshots

  • Purpose: Captured states of containers or systems at specific points in time
  • Contents: SBOM data, dependency information, vulnerability scan results
  • Features:
    • Click snapshots to view detailed SBOM and vulnerability summaries
    • Comparison between different snapshot versions
    • Metadata including scan tools used, timestamps, and container information

See the Dependencies and SBOMs documentation for detailed information on snapshots and dependency tracking.

🚨 Alerts

  • Purpose: Security vulnerability alerts generated from scans
  • Contents: Vulnerability details, severity levels, affected components
  • Features:
    • Severity filtering (Critical, High, Medium, Low)
    • Alert state management (Vulnerable, Acknowledged, Secure)
    • Search and filtering by CVE, component, or description
    • Bulk operations for alert management

See the Security Alerts documentation for comprehensive alert management information.

👤 User Profile

  • Purpose: Personal account management and settings
  • Contents:
    • View account details (username, role, status)
    • Password management with strength validation
    • Active session management
    • Account creation date and last login information
  • Access: Available to all authenticated users

⚙️ Settings / Admin

  • Purpose: Server-level configuration and administration (admin-only)
  • Contents:
    • User and token management with enhanced UI
    • Agent authentication settings
    • Server configuration
    • System health and statistics
  • Access: Requires admin privileges

User Profile Management

Accessing Your Profile

Navigate to your profile page from the navigation menu:

  • Location: User menu in the top navigation bar
  • Access: Available to all authenticated users
  • URL: /profile

Profile Information

View and manage your account details:

  • Username: Your unique account identifier
  • Role: Your assigned role (Admin or User)
  • Status: Account state (Active, Inactive, Suspended)
  • Created At: Account creation timestamp
  • Last Login: Most recent login time
  • Avatar: Profile picture (if configured)

Password Management

Change your password securely through the profile page:

  1. Enter Current Password: Authenticate the change request
  2. Set New Password: Must be at least 8 characters
  3. Confirm Password: Verify new password entry
  4. Password Strength: Real-time validation shows password strength
  5. Submit: Update your password

Security Notes:

  • Passwords must be at least 8 characters long
  • Strong passwords are recommended (mix of letters, numbers, symbols)
  • Changing your password will not immediately log out active sessions

Session Management

View and monitor your active sessions:

  • Active Sessions: List of currently authenticated sessions
  • Session Details: Login time, device/browser information
  • Session Security: Review unusual or unexpected sessions

For session management and security, see the Security Guide.


Projects Management

For detailed information on creating, organizing, and managing projects, see the Projects documentation.

Key features include:

  • Manual and automatic project creation
  • Project types (Container, Group, Application, Server)
  • Project hierarchy and organization
  • Setup workflows for agents and SBOM uploads

Dependencies and Snapshots

For detailed information on viewing and managing SBOMs and dependencies, see the Dependencies and SBOMs documentation.

Key features include:

  • Understanding snapshots and SBOM data
  • Dependency navigation and search
  • SBOM standards (CycloneDX, SPDX)
  • Manual SBOM upload
  • Integration with scanning tools

Security Alerts

For detailed information on managing security alerts and vulnerabilities, see the Security Alerts documentation.

Key features include:

  • Alert overview and lifecycle
  • Filtering and bulk operations
  • Alert details and remediation guidance
  • Integration with vulnerability scanners

Settings and Administration

User Management

Admin users can manage system access with an enhanced interface:

  • User Accounts: Create and manage user accounts with improved UI
  • Role Assignment: Assign admin or standard user privileges
  • Status Management: Activate, deactivate, or suspend user accounts
  • User Search: Find users quickly with search and filtering
  • Pagination: Navigate through large user lists efficiently
  • Session Management: Monitor active sessions and access logs
  • Bulk Operations: Manage multiple users efficiently

The updated admin interface provides better visibility and control over user accounts, with real-time statistics showing total, active, and inactive user counts.

Agent Token Management

Configure agent authentication:

  • Token Generation: Server auto-generates agent tokens on first startup
  • Token Retrieval: Access current agent token through admin interface
  • Token Security: Rotate tokens for enhanced security

Server Configuration

Access server-level settings:

  • Network Configuration: Domain, port, and proxy settings
  • Security Settings: Authentication, secrets, and access controls
  • Feature Toggles: Enable/disable specific Konarr features
  • Performance Settings: Database cleanup, retention policies

Typical Workflow

Initial Setup

  1. Start Server: Launch Konarr server and access web interface
  2. Admin Login: Log in with admin credentials
  3. Configure Settings: Set up agent tokens and server configuration
  4. Setup Profile: Optionally configure your user profile and password
  5. Agent Setup: Configure and deploy agents to monitor containers or upload SBOMs manually

Daily Operations

  1. Monitor Projects: Review project status and recent snapshots
  2. Browse Dependencies: Navigate through dependency lists with pagination
  3. Review Alerts: Triage new security vulnerabilities
  4. Investigate Issues: Drill down into specific snapshots and dependencies
  5. Take Action: Update containers, acknowledge alerts, or escalate issues

Ongoing Management

  1. Trend Analysis: Monitor security trends across projects
  2. Compliance Reporting: Export SBOMs for compliance requirements
  3. System Maintenance: Review server health and performance metrics
  4. User Management: Manage access and permissions as team grows (admin only)
  5. Profile Updates: Keep passwords current and review active sessions

Search and Filtering

  • Global Search: Use the search box on Projects and Snapshots pages
  • Filter Options: Filter by project type, status, severity, or date ranges
  • Quick Access: Bookmark frequently accessed projects for easy navigation
  • URL Parameters: Pagination states are preserved in URLs for sharing

Keyboard Shortcuts

  • Navigation: Use browser back/forward for quick page navigation
  • Refresh: F5 or Ctrl+R to refresh data views
  • Search: Click search boxes or use Tab navigation

Performance Optimization

  • Pagination: Large datasets are automatically paginated for performance
  • URL Sync: Page numbers persist in URLs for seamless navigation
  • Lazy Loading: Detailed data loads on-demand when viewing specific items
  • Caching: Web interface caches frequently accessed data

Export and Automation

Manual Export

Export data directly from the web interface:

  • SBOM Export: Download complete SBOM data from snapshot detail pages
  • Vulnerability Reports: Export security scan results
  • Project Data: Export project summaries and statistics

API Integration

For automation and integration:

  • REST API: Complete API access for all web interface functionality
  • Authentication: Use session cookies for web-based API access
  • Documentation: See API Documentation for complete endpoint reference

Reporting

Generate reports for compliance and management:

  • Security Summaries: Aggregate vulnerability data across projects
  • Compliance Reports: SBOM data for regulatory requirements
  • Trend Analysis: Historical data for security and dependency trends

Troubleshooting

Common Issues

Web Interface Not Loading:

  1. Check server is running: curl http://localhost:9000/api/health
  2. Verify frontend configuration in server settings
  3. Clear browser cache and cookies
  4. Check network connectivity and firewall settings

Authentication Problems:

  1. Verify admin user account exists
  2. Check session timeout settings
  3. Clear browser cookies and re-login
  4. Verify server authentication configuration

Performance Issues:

  1. Check server resource usage (CPU, memory, disk)
  2. Review database performance and size
  3. Consider implementing reverse proxy caching
  4. Monitor network latency and bandwidth

Additional Help



Projects

Projects are logical groups representing hosts, applications, or container clusters for monitoring and analysis.

Implementation: Projects are managed through the Projects API (backend) and rendered in the Projects Views (frontend). The Project model defines the database schema and business logic.

Managing Projects

Creating Projects

  • Manual Creation: Through web interface (admin required)
  • Auto-Creation: Agents create projects automatically with agent.create: true

Project Types

  • Container: Individual containers
  • Group: Related project collections (e.g., microservices)
  • Application: Application-specific projects
  • Server: Host-level projects

Key Features

  • Hierarchy: Parent-child relationships for organization
  • Search: Find projects by name, tag, or hostname
  • Status: Health indicators and last update time
  • Statistics: Snapshot counts, vulnerabilities, scan times

Project View

Summary

Project overview with:

  • Name, type, description, status
  • Vulnerability counts by severity
  • Snapshot and dependency statistics
  • Parent/child project links

Sub Projects

View child projects in the hierarchy. Useful for:

  • Grouping microservices under parent applications
  • Organizing multi-container deployments
  • Structuring infrastructure by region or purpose

Alerts

Security vulnerabilities for this project:

  • Filter by severity (Critical, High, Medium, Low)
  • Alert states: Vulnerable, Acknowledged, Secure
  • CVE details and affected components

See Security Alerts for alert management.

Dependencies

All packages and components from the latest snapshot:

  • Search and filter dependencies
  • View package details, versions, licenses
  • Identify vulnerable dependencies
  • Export for compliance reporting

See Dependencies for SBOM details.

Setup

Deploy agents to monitor containers and submit SBOMs. The Setup tab generates pre-configured commands with authentication tokens and project IDs.

Docker Deployment

Run the agent as a container:

docker run \
  -e "KONARR_INSTANCE=<your-instance-url>" \
  -e "KONARR_AGENT_TOKEN=<auto-generated-token>" \
  -e "KONARR_PROJECT_ID=<project-id>" \
  -v "/var/run/docker.sock:/var/run/docker.sock:ro" \
  ghcr.io/42bytelabs/konarr-agent:latest

Environment Variables:

Variable             Description
KONARR_INSTANCE      Konarr server URL
KONARR_AGENT_TOKEN   Authentication token
KONARR_PROJECT_ID    Target project ID

Kubernetes Deployment

Deploy the agent in Kubernetes:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: konarr-agent
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: konarr-agent
          image: ghcr.io/42bytelabs/konarr-agent:latest
          env:
            - name: KONARR_INSTANCE
              value: "<your-instance-url>"
            - name: KONARR_AGENT_TOKEN
              value: "<auto-generated-token>"
            - name: KONARR_PROJECT_ID
              value: "<project-id>"

Deploy with: kubectl apply -f konarr-agent.yaml

Production: Use Kubernetes Secrets for KONARR_AGENT_TOKEN and configure RBAC permissions as needed.
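
A sketch of the Secret-based approach (resource names are illustrative):

```yaml
# Sketch: store the agent token in a Secret instead of a literal env value.
apiVersion: v1
kind: Secret
metadata:
  name: konarr-agent-token
type: Opaque
stringData:
  token: "<auto-generated-token>"
---
# In the Deployment's container spec, reference the Secret:
#   env:
#     - name: KONARR_AGENT_TOKEN
#       valueFrom:
#         secretKeyRef:
#           name: konarr-agent-token
#           key: token
```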

Manual SBOM Upload

Upload pre-generated SBOM files for CI/CD integration, offline scanning, or non-container workloads.

Supported Formats: CycloneDX and SPDX (JSON/XML)

Generate SBOMs:

# Syft
syft <image-or-directory> -o cyclonedx-json > sbom.json

# Trivy
trivy image --format cyclonedx <image-name> > sbom.json

Upload via the project Setup tab. Files are validated and processed immediately for vulnerabilities.


Security Alerts

Manage security alerts and vulnerabilities detected in your projects.

Implementation: Security alerts are managed through the Security API (backend) and rendered in Security Views (frontend). The Security model defines alert and advisory data structures.

Alert Overview

Alerts are automatically generated from vulnerability scans (Grype, Trivy) when agents scan containers or SBOMs are uploaded.

Severity Levels: Critical, High, Medium, Low, Unknown

Alert States:

  • Vulnerable: Active security issue requiring attention
  • Acknowledged: Known issue being investigated
  • Secure: Resolved, mitigated, or not applicable

Alert Management

Viewing and Filtering

Access alerts from:

  • Global view: All alerts across projects
  • Project view: Project-specific alerts
  • Snapshot view: Alerts from a specific scan

Filter by severity, state, CVE identifier, or project.

Actions

Individual:

  • View details, acknowledge, mark secure, or export

Bulk:

  • Acknowledge or mark secure multiple alerts simultaneously

Triage Workflow

  1. Review new alerts (focus on Critical/High severity)
  2. Investigate CVE details and affected components
  3. Acknowledge while working on fixes
  4. Remediate (update dependencies, apply patches)
  5. Re-scan to verify resolution
  6. Mark secure when resolved

Alert Details

Each alert includes:

  • CVE identifier, description, severity
  • Affected package, current/fixed versions
  • CVSS score and attack complexity
  • Remediation guidance and upgrade path
  • List of impacted projects

Reporting

Export alert data:

  • CSV/JSON formats for analysis
  • Filter by severity, project, or date range
  • Track trends over time

Scanning Tools

Konarr integrates with:

  • Grype (Anchore): Container and filesystem scanning
  • Trivy (Aqua Security): Multi-purpose vulnerability scanner
  • Custom scanners: Any tool producing CycloneDX or SPDX SBOMs

See Scanning Tools for configuration.


Best Practices

  • Schedule regular scans (daily/weekly)
  • Scan after deployments
  • Review Critical/High alerts first
  • Document remediation decisions
  • Test updates before production deployment
  • Export reports for compliance

Troubleshooting

Missing alerts: Verify agents are scanning, tool configuration is correct, and vulnerability databases are updated

False positives: Verify package versions, review CVE applicability, mark as secure with documentation

See also: Projects | Dependencies | Scanning Tools

Dependencies and SBOMs

View and manage Software Bill of Materials (SBOM) data and dependency information.

Implementation: Dependencies are managed through the Dependencies API (backend) and Snapshots API. Frontend views are in Dependencies Views. SBOM processing is handled by snapshots/sboms.rs.

Understanding Snapshots

Snapshots capture the state of a container or system at a specific time, including SBOM data, dependencies, vulnerabilities, and scan metadata.

Snapshot Creation

Automatic:

  • Agent scans (scheduled or event-triggered)
  • Container updates or deployments

Manual:

  • API triggers
  • SBOM file uploads via web interface
  • CI/CD pipeline integration

Versioning

Multiple snapshots per project enable:

  • Historical dependency tracking
  • Snapshot comparison
  • Vulnerability trend analysis

Viewing SBOM Details

Snapshot Overview

Each snapshot shows:

  • Dependency count and vulnerability summary
  • Container/host metadata and scan tool info
  • Export options

Dependencies List

View all components with:

  • Package name, version, type, license
  • Search, filter, and pagination
  • URL-based page numbers for bookmarking

Dependency Details

Click dependencies to view:

  • Version info and available updates
  • License details
  • Vulnerability status with CVE links
  • Package relationships (dependencies/dependents)

Comparison

Compare snapshots to identify:

  • Added, removed, or updated dependencies
  • Dependency drift over time
  • Security improvements or regressions

SBOM Standards

Konarr supports industry-standard SBOM formats:

  • CycloneDX (primary): v1.5/1.6, JSON/XML
  • SPDX (alternative): JSON/XML

Both formats support vulnerability data, dependency relationships, and license information.
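
For reference, a minimal CycloneDX 1.5 JSON document has this shape (component values are illustrative):

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "openssl",
      "version": "3.0.13",
      "purl": "pkg:generic/openssl@3.0.13"
    }
  ]
}
```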


Tool Integration

Syft

Generate SBOMs from containers and filesystems:

syft <image-or-directory> -o cyclonedx-json > sbom.json

Grype

Scan for vulnerabilities:

grype <image-name>
syft <image> -o cyclonedx-json | grype --add-cpes-if-none

Trivy

Multi-purpose scanner with SBOM generation:

trivy image --format cyclonedx <image-name> > sbom.json

See Scanning Tools for configuration details.


Uploading SBOMs

Upload SBOMs via web interface or API for CI/CD integration, testing, or offline scanning.

Web Interface: Project Setup → Upload SBOM

API:

curl -X POST https://konarr.example.com/api/projects/123/sboms \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d @sbom.json

Supported: CycloneDX and SPDX (JSON/XML)

See API Documentation for details.


Export and Reporting

Export SBOMs in JSON, XML, or CSV formats for:

  • Compliance documentation
  • License verification
  • Vulnerability assessment
  • Dependency tracking

Best Practices

  • Schedule regular scans
  • Keep historical snapshots for comparison
  • Review dependencies periodically
  • Monitor license compliance
  • Export data for compliance

Troubleshooting

Upload errors: Verify SBOM format and file size limits

Missing dependencies: Check scanning tool configuration and SBOM completeness

See also: Projects | Security Alerts | Scanning Tools

API Documentation

This document provides comprehensive details of the Konarr REST API. The API serves both the web UI and external integrations, with separate authentication for users and agents.

API Implementation: The REST API is implemented using the Rocket framework. All API routes are defined in the server/src/api directory.

Base URL and Versioning

Base URL: http://<konarr-host>:9000/api

Current Version: The API is currently unversioned. Future versions will include version paths.

Content Type: All endpoints expect and return application/json unless otherwise specified.

Authentication

Konarr supports two authentication methods: Agent tokens for automated tools and Session-based authentication for web users.

Agent Authentication

Agents authenticate using a Bearer token in the Authorization header:

curl -H "Authorization: Bearer <AGENT_TOKEN>" \
  http://localhost:9000/api/projects

The agent token is generated by the server and stored as agent.key in the ServerSettings table. You can retrieve it from the admin panel (Settings page) or by querying the database directly.

Key Points:

  • Agent tokens grant access to specific endpoints for uploading snapshots and creating projects
  • Token validation checks a cached value first, then falls back to database lookup (implemented in authentication guards)
  • Agents are identified as the konarr-agent user internally

Session Authentication

Web UI users authenticate via session cookies. Sessions are managed automatically by the frontend and use HTTP-only cookies (x-konarr-token) for security.

Authentication Endpoints (implementation):

  • Login: POST /api/auth/login
  • Logout: POST /api/auth/logout
  • Registration: POST /api/auth/register

Sessions are validated on each request and cached in memory for performance. Users can be assigned Admin or User roles (user model).

Admin Authentication

Admin-only endpoints require both:

  1. Valid session authentication
  2. User role must be Admin

The first registered user automatically receives the Admin role.

Quick Reference

Endpoint Summary

Base & Health Endpoints

Endpoint      Method   Authentication   Description
/api/         GET      Optional         Server status and statistics
/api/health   GET      None             Health check

Authentication Endpoints

Endpoint             Method   Authentication   Description
/api/auth/login      POST     None             User login
/api/auth/logout     POST     Session          User logout
/api/auth/register   POST     None             User registration

Project Endpoints

(implementation)

Endpoint             Method   Authentication   Description
/api/projects        GET      Session/Agent    List projects
/api/projects        POST     Session/Agent    Create project
/api/projects/{id}   GET      Session          Get project details
/api/projects/{id}   PATCH    Session          Update project
/api/projects/{id}   DELETE   Admin            Archive project

Snapshot Endpoints

(implementation)

Endpoint                           Method   Authentication   Description
/api/snapshots                     GET      Session          List snapshots
/api/snapshots                     POST     Session          Create snapshot
/api/snapshots/{id}                GET      Session          Get snapshot details
/api/snapshots/{id}/bom            POST     Session          Upload SBOM
/api/snapshots/{id}/metadata       PATCH    Session          Update metadata
/api/snapshots/{id}/dependencies   GET      Session          List snapshot dependencies
/api/snapshots/{id}/alerts         GET      Session          List snapshot alerts

Dependency Endpoints

(implementation)

Endpoint                 Method   Authentication   Description
/api/dependencies        GET      Session          List dependencies
/api/dependencies/{id}   GET      Session          Get dependency details

Security Alert Endpoints

(implementation)

Endpoint             Method   Authentication   Description
/api/security        GET      Session          List security alerts
/api/security/{id}   GET      Session          Get alert details

User Management Endpoints

(implementation)

Endpoint                  Method   Authentication   Description
/api/user/whoami          GET      Session          Get current user
/api/user/password        PATCH    Session          Update password
/api/user/sessions        GET      Session          List user sessions
/api/user/sessions/{id}   DELETE   Session          Revoke session

Admin Endpoints

(implementation)

Endpoint                Method   Authentication   Description
/api/admin              GET      Admin            Get admin settings
/api/admin              PATCH    Admin            Update settings
/api/admin/users        GET      Admin            List all users
/api/admin/users/{id}   PATCH    Admin            Update user

WebSocket Endpoint

(implementation)

Endpoint        Method            Authentication   Description
/api/ws?agent   GET (WebSocket)   Session          Agent status monitoring

Core Endpoints

Base Information

GET /api/ (implementation)

  • Description: Server status, configuration, and summary statistics
  • Authentication: Optional (returns more data when authenticated)
  • Response: Server version, commit, configuration, and statistics

Response Fields:

Field                 Type      Description
version               string    Konarr version number
commit                string    Git commit SHA of the build
config.initialised    boolean   Whether the server has been initialized
config.registration   boolean   Whether new user registration is enabled
user                  object    Current user details (authenticated only)
projects              object    Project statistics summary (authenticated only)
dependencies          object    Dependency statistics summary (authenticated only)
security              object    Security alerts summary (authenticated only)
agent                 object    Agent configuration (agent authentication only)

# Unauthenticated request
curl http://localhost:9000/api/

# Authenticated request with session
curl -H "Cookie: x-konarr-token=<token>" http://localhost:9000/api/

Health Check

GET /api/health (implementation)

  • Description: Simple health check endpoint
  • Authentication: None required
  • Response: Basic status message and database connectivity check

curl http://localhost:9000/api/health

Response:

{
  "status": "ok",
  "message": "Konarr is running"
}

Projects

GET /api/projects

  • Description: List all active (non-archived) projects
  • Authentication: Session or Agent
  • Query Parameters:
Parameter   Type      Description                                       Default
page        integer   Page number for pagination (min 1)                0
limit       integer   Results per page (min 1, max 100)                 25
search      string    Search projects by title                          -
type        string    Filter by project type (or "all" for all types)   -
top         boolean   Fetch only top-level projects (no parent)         false
parents     boolean   Fetch only parent projects                        false
  • Response: Array of project objects with metadata
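
Paging through these results can be wrapped in a small helper; a sketch in Python, where `fetch_page` is a hypothetical stand-in for the HTTP call to `GET /api/projects?page=N&limit=M`:

```python
def iter_projects(fetch_page, limit=25):
    """Yield projects page by page until a page comes back short.

    `fetch_page(page, limit)` stands in for the HTTP request and must
    return a list of project objects for that page.
    """
    page = 0  # the API documents 0 as the default starting page
    while True:
        items = fetch_page(page, limit)
        yield from items
        if len(items) < limit:  # a short (or empty) page means we are done
            break
        page += 1

# Usage with a fake fetcher standing in for the API
data = [{"id": i} for i in range(7)]
fake_fetch = lambda page, limit: data[page * limit:(page + 1) * limit]
print(len(list(iter_projects(fake_fetch, limit=3))))  # 7
```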

POST /api/projects

  • Description: Create a new project
  • Authentication: Session or Agent
  • Request Body:
Field | Type | Required | Description
name | string | Yes | Project name (used as identifier)
type | string | Yes | Project type (container, server, etc.)
description | string | No | Optional project description
parent | integer | No | Parent project ID (for nested projects)

GET /api/projects/{id}

  • Description: Get specific project details with latest snapshot and children
  • Authentication: Session
  • Response: Project details including children, latest snapshot, and security summary

PATCH /api/projects/{id}

  • Description: Update project details
  • Authentication: Session
  • Request Body (all fields optional):
Field | Type | Description
title | string | Display title for the project
type | string | Project type
description | string | Project description (empty string to clear)
parent | integer | Parent project ID

DELETE /api/projects/{id}

  • Description: Archive a project (soft delete)
  • Authentication: Admin session required
  • Response: Archived project details

Snapshots

GET /api/snapshots

  • Description: List all snapshots with pagination
  • Authentication: Session
  • Query Parameters:
Parameter | Type | Description | Default
page | integer | Page number for pagination | 0
limit | integer | Results per page | 25

GET /api/snapshots/{id}

  • Description: Get specific snapshot details including metadata
  • Authentication: Session
  • Response: Snapshot metadata, dependencies count, and security summary

POST /api/snapshots

  • Description: Create a new empty snapshot
  • Authentication: Session
  • Request Body:
Field | Type | Required | Description
project_id | integer | Yes | Project ID to associate snapshot with

POST /api/snapshots/{id}/bom

  • Description: Upload SBOM data to an existing snapshot
  • Authentication: Session
  • Content-Type: application/json (CycloneDX format)
  • Max Size: 10 MB
  • Response: Updated snapshot details
  • Note: Automatically triggers SBOM processing task

PATCH /api/snapshots/{id}/metadata

  • Description: Update snapshot metadata
  • Authentication: Session
  • Request Body: JSON object with key-value pairs of metadata to update
  • Note: Empty values are ignored; valid keys must match the SnapshotMetadataKey enum
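
Since empty values are ignored server-side, a client can drop them before sending; a tiny illustrative helper (the function name is not part of Konarr):

```python
def metadata_update(**pairs):
    """Build a metadata PATCH body, dropping empty values the server
    would ignore anyway."""
    return {key: value for key, value in pairs.items() if value}

# Usage: only non-empty pairs survive
print(metadata_update(image="nginx:1.21", version="1.21.0", status=""))
# {'image': 'nginx:1.21', 'version': '1.21.0'}
```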

GET /api/snapshots/{id}/dependencies

  • Description: List dependencies for a specific snapshot
  • Authentication: Session
  • Query Parameters:
Parameter | Type | Description
search | string | Search term for component names
page | integer | Page number for pagination
limit | integer | Results per page

GET /api/snapshots/{id}/alerts

  • Description: List security alerts for a specific snapshot
  • Authentication: Session
  • Query Parameters:
Parameter | Type | Description
search | string | Search term for alert names or CVE IDs
severity | string | Filter by severity (critical, high, medium, low, etc.)
page | integer | Page number for pagination
limit | integer | Results per page

Dependencies

GET /api/dependencies

  • Description: Search and list all dependencies (components)
  • Authentication: Session
  • Query Parameters:
Parameter | Type | Description
search | string | Search term for component names
deptype | string | Filter by component type (library, application, framework, etc.)
top | boolean | Get top dependencies by usage
page | integer | Page number for pagination
limit | integer | Results per page

GET /api/dependencies/{id}

  • Description: Get specific dependency details
  • Authentication: Session
  • Query Parameters:
Parameter | Type | Description
snapshot | integer | Get dependency details for a specific snapshot context
  • Response: Component details, versions across projects, and associated projects (when no snapshot specified)

Security Alerts

GET /api/security

  • Description: List security alerts and vulnerabilities
  • Authentication: Session
  • Query Parameters:
Parameter | Type | Description
page | integer | Page number for pagination
limit | integer | Number of results per page
search | string | Search term for alert names or CVE IDs
state | string | Filter by alert state (defaults to state check)
severity | string | Filter by severity level (critical, high, medium, low, informational, malware, unmaintained, unknown)
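
The documented severity levels can be validated client-side when building the request URL; a minimal sketch (the helper name is illustrative, not part of Konarr):

```python
from urllib.parse import urlencode

# Severity levels documented for the `severity` query parameter
SEVERITIES = {"critical", "high", "medium", "low", "informational",
              "malware", "unmaintained", "unknown"}

def security_query(page=0, limit=25, search=None, severity=None):
    """Build the query string for GET /api/security, rejecting severity
    values the server does not document."""
    if severity is not None:
        severity = severity.lower()
        if severity not in SEVERITIES:
            raise ValueError(f"unsupported severity: {severity!r}")
    params = {"page": page, "limit": limit}
    if search:
        params["search"] = search
    if severity:
        params["severity"] = severity
    return "/api/security?" + urlencode(params)

print(security_query(search="CVE-2024", severity="Critical"))
# /api/security?page=0&limit=25&search=CVE-2024&severity=critical
```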

GET /api/security/{id}

  • Description: Get specific security alert details
  • Authentication: Session
  • Response: Alert details including advisory information, affected dependency with full details, and associated snapshot

User Profile

GET /api/user/whoami

  • Description: Get current authenticated user's profile
  • Authentication: Session required
  • Response: User details including username, role, state, creation date, and last login

PATCH /api/user/password

  • Description: Update current user's password
  • Authentication: Session required
  • Request Body:
Field | Type | Required | Description
current_password | string | Yes | Current password for verification
new_password | string | Yes | New password (min 12 characters)
new_password_confirm | string | Yes | Confirmation of new password
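
These constraints can be checked client-side before calling the endpoint; an illustrative sketch covering only the rules documented above:

```python
def validate_password_change(current_password, new_password, new_password_confirm):
    """Return a list of problems with a password-change request,
    mirroring the documented constraints (empty list means OK)."""
    errors = []
    if len(new_password) < 12:
        errors.append("new password must be at least 12 characters")
    if new_password != new_password_confirm:
        errors.append("password confirmation does not match")
    return errors

# Usage: a short, mismatched password fails both checks
print(validate_password_change("old-secret-123", "short", "other"))
```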

GET /api/user/sessions

  • Description: List active sessions for the current user
  • Authentication: Session required
  • Response: Array of session summaries with ID, creation time, last accessed time, and state

DELETE /api/user/sessions/{id}

  • Description: Revoke a specific session
  • Authentication: Session required
  • Note: Can only revoke sessions belonging to the current user

Administration

GET /api/admin

  • Description: Administrative settings, statistics, and user list
  • Authentication: Admin session required
  • Response: Server settings, project statistics, user statistics, and list of all users

Response Fields:

Field | Type | Description
settings | object | Key-value pairs of all server settings
project_stats | object | Total, inactive, and archived project counts
user_stats | object | Total, active, and inactive user counts
users | array | List of all users with summary information

PATCH /api/admin

  • Description: Update server settings
  • Authentication: Admin session required
  • Request Body: JSON object with setting names as keys and new values as strings
  • Note: Only Toggle, Regenerate, and SetString setting types can be updated
  • Special Actions: Some settings trigger background tasks (e.g., security.rescan, security.advisories.pull)

GET /api/admin/users

  • Description: List all users
  • Authentication: Admin session required
  • Response: Array of user summaries (ID, username, role, state, creation date)

PATCH /api/admin/users/{id}

  • Description: Update user role or state
  • Authentication: Admin session required
  • Request Body (all fields optional):
Field | Type | Description
role | string | User role (admin or user)
state | string | User state (active or disabled)
  • Note: Cannot modify user ID 1 (default admin); disabling a user logs them out

WebSocket

GET /api/ws?agent

  • Description: WebSocket connection for agent status monitoring
  • Authentication: Session required
  • Protocol: WebSocket
  • Purpose: Allows agents to report online/offline status for projects

Message Format:

{
  "project": 123
}

When a client connects and sends a project ID, the snapshot for that project is marked as "online". When the connection closes, it's marked as "offline".

Authentication API Details

Login

POST /api/auth/login

  • Description: Authenticate a user and create a session
  • Authentication: None required
  • Rate Limit: Yes (using RateLimit guard)
  • Request Body:
Field | Type | Required | Description
username | string | Yes | Username
password | string | Yes | Password
  • Response: Login status and reason (if failed)
  • Side Effect: Sets x-konarr-token private cookie on success

Response:

{
  "status": "success"
}

Or on failure:

{
  "status": "failed",
  "reason": "Invalid credentials"
}

Logout

POST /api/auth/logout

  • Description: Terminate the current session
  • Authentication: Session required
  • Response: Logout status
  • Side Effect: Removes x-konarr-token cookie and invalidates session

Register

POST /api/auth/register

  • Description: Register a new user account
  • Authentication: None required (registration must be enabled)
  • Rate Limit: Yes (using RateLimit guard)
  • Request Body:
Field | Type | Required | Description
username | string | Yes | Desired username
password | string | Yes | Password
password_confirm | string | Yes | Password confirmation (must match)
  • Response: Registration status and reason (if failed)
  • Note: First registered user becomes Admin; subsequent users are regular Users

Data Models

Project Object

{
  "id": 1,
  "name": "my-application",
  "title": "my-application",
  "type": "container",
  "description": "Production web application",
  "status": true,
  "parent": null,
  "createdAt": "2024-01-01T00:00:00Z",
  "snapshot": {
    "id": 15,
    "status": "completed",
    "createdAt": "2024-01-15T12:00:00Z",
    "dependencies": 245,
    "security": {
      "total": 12,
      "critical": 2,
      "high": 5,
      "medium": 3,
      "low": 2
    }
  },
  "snapshots": 15,
  "security": {
    "total": 12,
    "critical": 2,
    "high": 5,
    "medium": 3,
    "low": 2
  },
  "children": []
}

Fields:

Field | Type | Description
id | integer | Unique project identifier
name | string | Project name (identifier)
title | string | Display title for project
type | string | Project type (container, server, etc.)
description | string | Optional project description
status | boolean | Online/offline status from latest snapshot metadata
parent | integer | Parent project ID (null for top-level)
createdAt | string | ISO 8601 timestamp of creation
snapshot | object | Latest snapshot summary
snapshots | integer | Total snapshot count
security | object | Security summary from latest snapshot
children | array | Child projects (nested structure)

Snapshot Object

{
  "id": 1,
  "status": "completed",
  "error": null,
  "createdAt": "2024-01-01T00:00:00Z",
  "updatedAt": "2024-01-01T00:05:00Z",
  "dependencies": 245,
  "security": {
    "total": 12,
    "critical": 2,
    "high": 5,
    "medium": 3,
    "low": 2,
    "informational": 0,
    "unmaintained": 0,
    "malware": 0,
    "unknown": 0
  },
  "metadata": {
    "image": "nginx:1.21",
    "version": "1.21.0",
    "status": "online"
  }
}

Fields:

Field | Type | Description
id | integer | Unique snapshot identifier
status | string | Processing status (pending, processing, completed, failed)
error | string | Error message if processing failed
createdAt | string | ISO 8601 timestamp of creation
updatedAt | string | ISO 8601 timestamp of last update
dependencies | integer | Count of dependencies in snapshot
security | object | Security summary with severity breakdown
metadata | object | Key-value pairs of snapshot metadata

Dependency Object

{
  "id": 1,
  "type": "library",
  "manager": "npm",
  "name": "@types/node",
  "namespace": "@types",
  "version": "18.15.0",
  "purl": "pkg:npm/%40types/node@18.15.0",
  "versions": ["18.15.0", "18.14.0"],
  "projects": [
    {
      "id": 1,
      "name": "my-application",
      "title": "my-application"
    }
  ]
}

Fields:

Field | Type | Description
id | integer | Unique component identifier
type | string | Component type (library, application, framework, etc.)
manager | string | Package manager (npm, cargo, deb, etc.)
name | string | Component name
namespace | string | Optional namespace (e.g., @types for npm)
version | string | Specific version (context-dependent)
purl | string | Package URL (PURL) identifier
versions | array | All versions found across projects
projects | array | Associated projects (omitted in snapshot context)
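
The `purl` field follows the Package URL pattern `pkg:<type>/<namespace>/<name>@<version>`, with segments percent-encoded (e.g. the `@` in npm namespaces becomes `%40`). A simplified sketch of that assembly (the full PURL specification has additional encoding rules; the helper name is illustrative):

```python
from urllib.parse import quote

def build_purl(manager, name, version, namespace=None):
    """Assemble a Package URL like the `purl` field above, simplified:
    percent-encode the namespace and name segments only."""
    parts = [quote(namespace, safe="")] if namespace else []
    parts.append(quote(name, safe=""))
    return f"pkg:{manager}/" + "/".join(parts) + f"@{version}"

print(build_purl("npm", "node", "18.15.0", namespace="@types"))
# pkg:npm/%40types/node@18.15.0
```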

Alert Object

{
  "id": 1,
  "name": "CVE-2024-1234",
  "severity": "high",
  "description": "Buffer overflow vulnerability...",
  "url": "https://nvd.nist.gov/vuln/detail/CVE-2024-1234",
  "dependency": {
    "id": 42,
    "type": "library",
    "manager": "npm",
    "name": "vulnerable-package",
    "version": "1.0.0"
  }
}

Fields:

Field | Type | Description
id | integer | Unique alert identifier
name | string | Alert name (often a CVE ID)
severity | string | Severity level (critical, high, medium, low, informational, unmaintained, malware, unknown)
description | string | Detailed description of the vulnerability
url | string | Reference URL for more information
dependency | object | Affected dependency details

SBOM Upload Format

Konarr accepts SBOMs in CycloneDX format (version 1.5 or 1.6). Upload SBOMs to the POST /api/snapshots/{id}/bom endpoint.

Example CycloneDX SBOM:

{
  "bomFormat": "CycloneDX",
  "specVersion": "1.6",
  "version": 1,
  "metadata": {
    "timestamp": "2024-01-01T00:00:00Z",
    "tools": [
      {
        "vendor": "anchore",
        "name": "syft",
        "version": "v0.96.0"
      }
    ],
    "component": {
      "type": "container",
      "name": "nginx",
      "version": "1.21"
    }
  },
  "components": [
    {
      "type": "library",
      "name": "openssl",
      "version": "1.1.1",
      "purl": "pkg:deb/debian/openssl@1.1.1"
    }
  ]
}

Supported Fields:

  • bomFormat: Must be "CycloneDX"
  • specVersion: "1.5" or "1.6"
  • metadata.component: Main component (container image, application, etc.)
  • components: Array of dependency components with PURL identifiers

After uploading, Konarr automatically processes the SBOM to extract dependencies and match them against security advisories.
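
The supported fields can be assembled programmatically before upload; a simplified sketch (the helper name is illustrative, and only the fields listed above are populated):

```python
import json

def minimal_sbom(image, version, components, spec_version="1.6"):
    """Build a minimal CycloneDX document using the fields Konarr reads.

    `components` is a list of (name, version, purl) tuples.
    """
    if spec_version not in ("1.5", "1.6"):
        raise ValueError("Konarr accepts CycloneDX 1.5 or 1.6")
    return {
        "bomFormat": "CycloneDX",
        "specVersion": spec_version,
        "version": 1,
        "metadata": {
            "component": {"type": "container", "name": image, "version": version}
        },
        "components": [
            {"type": "library", "name": name, "version": ver, "purl": purl}
            for name, ver, purl in components
        ],
    }

# Usage: serialize and POST the result to /api/snapshots/{id}/bom
sbom = minimal_sbom("nginx", "1.21",
                    [("openssl", "1.1.1", "pkg:deb/debian/openssl@1.1.1")])
print(json.dumps(sbom)[:40])
```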

Error Responses

API errors return consistent JSON responses with HTTP status codes:

{
  "message": "Project with ID 999 not found",
  "details": "Additional context about the error",
  "status": 404
}

HTTP Status Codes:

Status Code | Description
401 Unauthorized | Missing or invalid authentication token/session
404 Not Found | Requested resource doesn't exist
429 Too Many Requests | Rate limit exceeded
500 Internal Server Error | Server-side error occurred

Common Error Scenarios:

  • Authentication Errors: Returned when Authorization header or session cookie is missing/invalid
  • Not Found: Returned when accessing non-existent projects, snapshots, or other resources
  • Unauthorized (Admin): Returned when non-admin users attempt admin-only operations
  • Validation Errors: Returned via KonarrError when request data is invalid
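
A client can turn this error shape into exceptions; a sketch assuming only the response fields shown above (the class and function names are illustrative):

```python
class KonarrApiError(Exception):
    """Raised for the error response shape shown above."""
    def __init__(self, status, message, details=None):
        super().__init__(f"{status}: {message}")
        self.status = status
        self.message = message
        self.details = details

def check_response(status, body):
    """Return the body on success, raise KonarrApiError on HTTP errors."""
    if status >= 400:
        raise KonarrApiError(status, body.get("message", "unknown error"),
                             body.get("details"))
    return body

# Usage
try:
    check_response(404, {"message": "Project with ID 999 not found",
                         "status": 404})
except KonarrApiError as err:
    print(err)  # 404: Project with ID 999 not found
```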

Rate Limiting

Rate limiting is applied to authentication endpoints to prevent abuse:

  • Login endpoint: Rate limited (specific limit configured via RocketGovernor)
  • Registration endpoint: Rate limited (specific limit configured via RocketGovernor)
  • Other endpoints: No explicit rate limiting (protected by authentication)

Rate limiting uses the rocket-governor crate with a custom RateLimit guard.

Integration Examples

Using the API with curl

Creating a snapshot and uploading an SBOM:

# 1. Login and get session cookie
curl -c cookies.txt -X POST http://localhost:9000/api/auth/login \
  -H "Content-Type: application/json" \
  -d '{"username": "admin", "password": "yourpassword"}'

# 2. Create a snapshot for a project
curl -b cookies.txt -X POST http://localhost:9000/api/snapshots \
  -H "Content-Type: application/json" \
  -d '{"project_id": 1}'

# 3. Upload SBOM to the snapshot
curl -b cookies.txt -X POST http://localhost:9000/api/snapshots/123/bom \
  -H "Content-Type: application/json" \
  -d @sbom.json

Agent authentication:

# Get agent token from admin panel, then use it
export AGENT_TOKEN="your-agent-token-here"

# List projects as agent
curl -H "Authorization: Bearer $AGENT_TOKEN" \
  http://localhost:9000/api/projects

# Upload SBOM as agent
curl -H "Authorization: Bearer $AGENT_TOKEN" \
  -X POST http://localhost:9000/api/snapshots/123/bom \
  -H "Content-Type: application/json" \
  -d @sbom.json

Python Example

import requests

class KonarrClient:
    def __init__(self, base_url, agent_token=None):
        self.base_url = base_url
        self.session = requests.Session()
        if agent_token:
            self.session.headers['Authorization'] = f'Bearer {agent_token}'
    
    def login(self, username, password):
        response = self.session.post(
            f'{self.base_url}/api/auth/login',
            json={'username': username, 'password': password}
        )
        return response.json()
    
    def get_projects(self):
        response = self.session.get(f'{self.base_url}/api/projects')
        return response.json()
    
    def upload_sbom(self, snapshot_id, sbom_data):
        response = self.session.post(
            f'{self.base_url}/api/snapshots/{snapshot_id}/bom',
            json=sbom_data
        )
        return response.json()

# Usage
client = KonarrClient('http://localhost:9000')
client.login('admin', 'password')
projects = client.get_projects()

For CLI-based interaction with Konarr, see the Agent Configuration documentation.

Security

This page describes Konarr's security model, threat considerations, and comprehensive recommendations for secure production deployments.

Security Implementation: Authentication is implemented in server/src/guards with user authentication in models/auth. Session management uses HTTP-only cookies for security.

Overview

Konarr handles sensitive supply chain data including:

  • Software Bill of Materials (SBOMs) for container images
  • Vulnerability scan results and security alerts
  • Container metadata and deployment information
  • Agent authentication credentials

A comprehensive security approach is essential for protecting this data and maintaining system integrity.

Authentication and Authorization

Agent Token Management

Konarr uses a simple but effective token-based authentication model for agents:

  • Token Generation: The server automatically generates a secure agent token (agent.key) on first startup, stored in ServerSettings
  • Token Usage: Agents authenticate using this token as a Bearer token in the Authorization header
  • Token Validation: The server validates agent requests using a guard system with performance caching and database fallback
  • Single Token Model: Currently, all agents share a single token for simplicity

Best Practices for Agent Tokens

  • Treat as Secret: Never commit tokens to version control or expose in logs
  • Secure Storage: Store tokens in secure credential management systems
  • Limited Exposure: Only provide tokens to authorized agent deployments
  • Regular Rotation: Implement a token rotation schedule (recommended: quarterly)
  • Environment Variables: Use environment variables for token distribution, not configuration files

Token Rotation Procedure

# 1. Generate new token (requires server restart or admin API when available)
# Currently requires database update - this will be improved in future versions

# 2. Update all agent deployments with new token
# For Docker environments:
docker service update --env-add KONARR_AGENT_TOKEN="new-token-here" konarr-agent

# 3. Verify all agents are connecting successfully
# Check server logs for authentication failures

# 4. Remove old token references from configuration systems

Web UI Authentication

  • Session-Based: Web interface uses session-based authentication
  • Admin Access: Server settings and sensitive operations require admin privileges
  • Session Security: Sessions are secured with appropriate timeout settings
  • Password Management: Users can update their passwords with strength validation through the profile page
  • Session Monitoring: Users can view active sessions in their profile

User Password Security

Users should follow password best practices:

  • Minimum Length: Passwords must be at least 12 characters (enforced by the password update endpoint)
  • Complexity: Use a mix of uppercase, lowercase, numbers, and symbols
  • Uniqueness: Don't reuse passwords from other services
  • Regular Updates: Change passwords periodically
  • Password Strength: The profile page provides real-time password strength feedback

To change your password, visit the User Profile page in the web interface.

Transport Security

TLS Configuration

Always use HTTPS in production - Konarr transmits sensitive vulnerability and SBOM data that must be encrypted in transit.

Frontend URL Configuration

Configure the server's frontend URL to ensure secure redirects and callbacks:

# konarr.yml
server:
  frontend:
    url: "https://konarr.example.com"

Certificate Management

  • Automated Renewal: Use Let's Encrypt with automated renewal (certbot, acme.sh)
  • Certificate Monitoring: Monitor certificate expiration dates
  • Backup Certificates: Maintain secure backups of certificates and keys

Runtime Security

Container Security

Docker Socket Access Risks

⚠️ Critical Security Consideration: Mounting the Docker socket (/var/run/docker.sock) grants significant privileges:

  • Container Creation: Ability to create privileged containers
  • Host Access: Access to host filesystem through volume mounts
  • Privilege Escalation: Potential for privilege escalation attacks
  • Container Inspection: Access to all running containers and their metadata

Security Mitigations

  1. Trusted Hosts Only: Only run agents on trusted, dedicated hosts
  2. Read-Only Mounts: Use :ro flag when possible: /var/run/docker.sock:/var/run/docker.sock:ro
  3. Dedicated Agent Hosts: Consider dedicated hosts for agent containers
  4. Network Segmentation: Isolate agent hosts in secure network segments
  5. Host Monitoring: Monitor host systems for unusual container activity
  6. Alternative Runtimes: Consider container runtimes with safer introspection APIs
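
Mitigation 2 (the read-only socket mount) expressed as a Docker Compose service; a sketch, with the service name and environment variable following the examples used elsewhere in these docs:

```yaml
services:
  konarr-agent:
    image: ghcr.io/42bytelabs/konarr-agent:latest
    environment:
      - KONARR_AGENT_TOKEN=${KONARR_AGENT_TOKEN}
    volumes:
      # :ro limits (but does not remove) the risk of socket access
      - /var/run/docker.sock:/var/run/docker.sock:ro
```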

Container Image Security

# Use minimal base images
FROM alpine:3.19

# Run as non-root user
RUN adduser -D -s /bin/sh konarr
USER konarr

# Minimal filesystem
COPY --from=builder /app/konarr-cli /usr/local/bin/

Tool Installation Security

The agent can automatically install security scanning tools (Syft, Grype, Trivy):

Supply Chain Security

  • Tool Verification: Verify tool signatures and checksums when available
  • Controlled Environments: For strict environments, pre-install approved tool versions
  • Disable Auto-Install: Set agent.tool_auto_install: false and manage tools manually
  • Tool Isolation: Consider running tools in isolated environments
# Secure agent configuration
agent:
  tool_auto_install: false  # Disable automatic tool installation
  toolcache_path: "/usr/local/toolcache"  # Pre-installed tool location

Data Security

SBOM and Vulnerability Data Protection

SBOM and vulnerability data contains sensitive information about your infrastructure:

Access Control

  • API Authentication: All API endpoints require proper authentication
  • Project Isolation: Implement project-based access controls
  • Data Classification: Classify SBOM data according to organizational policies

Data Retention

# Example retention policy configuration (implementation-dependent)
data:
  retention:
    snapshots: "90d"      # Keep snapshots for 90 days
    vulnerabilities: "1y"  # Keep vulnerability data for 1 year
    logs: "30d"           # Keep logs for 30 days

Data Encryption

  • At Rest: Consider encrypting the SQLite database file
  • In Transit: Always use HTTPS for API communications
  • Backups: Encrypt database backups

Database Security

File Permissions

# Secure database file permissions
chmod 600 /data/konarr.db
chown konarr:konarr /data/konarr.db

# Secure data directory
chmod 700 /data
chown konarr:konarr /data

Backup Security

# Encrypted backup example
sqlite3 /data/konarr.db ".backup /tmp/konarr-backup.db"
gpg --cipher-algo AES256 --compress-algo 1 --symmetric --output konarr-backup.db.gpg /tmp/konarr-backup.db
rm /tmp/konarr-backup.db

Network Security

Firewall Configuration

# Allow only necessary ports
# Server (typically internal)
ufw allow from 10.0.0.0/8 to any port 9000

# Reverse proxy (public)
ufw allow 80
ufw allow 443

# Agent communication (if direct)
ufw allow from <agent-networks> to any port 9000

Network Segmentation

  • DMZ Deployment: Deploy web-facing components in DMZ
  • Internal Networks: Keep agents and database on internal networks
  • VPN Access: Use VPN for administrative access

Secrets Management

Configuration Security

  • Environment Variables: Use environment variables for secrets, not config files
  • Secrets Managers: Integrate with HashiCorp Vault, AWS Secrets Manager, etc.
  • File Permissions: Secure configuration files with appropriate permissions
# Example environment variable configuration
export KONARR_AGENT_TOKEN="$(vault kv get -field=token secret/konarr/agent)"
export KONARR_DATABASE_ENCRYPTION_KEY="$(vault kv get -field=key secret/konarr/database)"

Kubernetes Secrets

apiVersion: v1
kind: Secret
metadata:
  name: konarr-agent-token
type: Opaque
data:
  token: <base64-encoded-agent-token>
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: konarr-agent
spec:
  template:
    spec:
      containers:
      - name: agent
        image: ghcr.io/42bytelabs/konarr-agent:latest
        env:
        - name: KONARR_AGENT_TOKEN
          valueFrom:
            secretKeyRef:
              name: konarr-agent-token
              key: token

Monitoring and Auditing

Security Monitoring

Log Collection

# Example logging configuration
logging:
  level: "info"
  audit: true
  destinations:
    - type: "file"
      path: "/var/log/konarr/audit.log"
    - type: "syslog"
      facility: "auth"

Metrics to Monitor

  • Failed authentication attempts
  • Unusual agent activity patterns
  • Large data uploads or downloads
  • Administrative actions
  • System resource usage anomalies

Alerting

# Example alert conditions
# - More than 10 failed authentications in 5 minutes
# - Agent uploading unusually large SBOMs
# - New agents connecting from unknown IP addresses
# - Database size growing rapidly

Compliance and Auditing

Audit Trail

  • Authentication Events: Log all authentication attempts and results
  • Data Access: Log access to sensitive SBOM and vulnerability data
  • Configuration Changes: Log all server configuration modifications
  • Agent Activity: Monitor agent connection patterns and data uploads

Compliance Considerations

  • Data Residency: Consider where SBOM data is stored and processed
  • Access Logging: Maintain detailed access logs for compliance audits
  • Data Retention: Implement compliant data retention policies
  • Privacy: Consider privacy implications of container metadata collection

Incident Response

Security Incident Procedures

  1. Detection: Monitor for security events and anomalies
  2. Containment: Isolate affected systems and revoke compromised tokens
  3. Investigation: Analyze logs and determine scope of compromise
  4. Recovery: Restore systems and implement additional protections
  5. Lessons Learned: Update security procedures based on incidents

Token Compromise Response

# If agent token is compromised:
# 1. Immediately rotate the agent token
# 2. Update all legitimate agents
# 3. Monitor for unauthorized access attempts
# 4. Review recent agent activity for suspicious patterns

Security Checklist

Deployment Security

  • HTTPS/TLS configured with modern ciphers
  • Security headers implemented (HSTS, CSP, etc.)
  • Agent tokens stored securely (not in code/configs)
  • Database file permissions secured (600)
  • Firewall rules configured for minimal access
  • Regular security updates applied
  • Monitoring and alerting configured
  • Backup encryption implemented
  • Agent hosts properly secured
  • Tool installation policies defined

Operational Security

  • Regular agent token rotation
  • Security monitoring in place
  • Incident response procedures defined
  • Access controls documented and reviewed
  • Compliance requirements mapped and addressed
  • Security training for operators
  • Regular security assessments conducted

Troubleshooting

This guide helps resolve common issues with Konarr server and agent deployments.

Server Issues

Server Won't Start

Problem: Server fails to start or exits immediately.

Solutions:

  1. Check database permissions:

    # Ensure data directory is writable
    chmod 755 /data
    ls -la /data/konarr.db
    
  2. Verify configuration:

    # Test configuration file syntax
    cargo run -p konarr-server -- --config konarr.yml --debug
    
  3. Check port availability:

    # Verify port 9000 is available
    netstat -tulpn | grep :9000
    
  4. Review server logs:

    # Docker container logs
    docker logs konarr-server
    
    # Systemd service logs
    journalctl -u konarr-server -f
    

Database Issues

Problem: Database corruption or migration failures.

Solutions:

  1. Backup and recover database:

    # Backup current database
    cp /data/konarr.db /data/konarr.db.backup
    
    # Check database integrity
    sqlite3 /data/konarr.db "PRAGMA integrity_check;"
    
  2. Reset database (data loss):

    # Stop server, remove database, restart
    rm /data/konarr.db
    # Server will recreate database on next start
    

Web UI Not Loading

Problem: UI shows blank page or loading errors.

Solutions:

  1. Check frontend configuration:

    # konarr.yml
    server:
      frontend:
        url: "https://konarr.example.com"
    
  2. Verify reverse proxy (if used):

    # nginx example
    location / {
      proxy_pass http://localhost:9000;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
    }
    
  3. Clear browser cache and cookies

Agent Issues

Authentication Failures

Problem: Agent cannot authenticate with server.

Error: Authentication failed or Invalid token

Solutions:

  1. Verify agent token:

    # Confirm the server is reachable and responding
    curl -s http://localhost:9000/api/health
    
    # Verify agent token matches server
    echo $KONARR_AGENT_TOKEN
    
  2. Generate new token:

    # Access server admin UI
    # Navigate to Settings > Agent Token
    # Generate new token and update agents
    
  3. Check token format:

    # Token should be base64-encoded string
    # Verify no extra whitespace or newlines
    echo -n "$KONARR_AGENT_TOKEN" | wc -c
    

Web UI Login Issues

Problem: Cannot log in to the web interface or forgot password.

Solutions:

  1. Reset user password using the CLI:

    # Interactive password reset
    konarr-cli database user
    
    # Follow the prompts:
    # - Enter the username
    # - Enter the new password
    # - Select the role (Admin/User)
    
  2. Create a new admin user if locked out:

    # Create emergency admin account
    konarr-cli --database-url /data/konarr.db database user
    
    # When prompted:
    # Username: emergency-admin
    # Password: [enter secure password]
    # Role: Admin
    
  3. For container deployments:

    # Access container and reset password
    docker exec -it konarr-server konarr-cli database user
    
    # Or with volume-mounted database
    konarr-cli --database-url /path/to/konarr.db database user
    

Note: The database user command can both create new users and reset passwords for existing users. See the CLI Usage Guide for more details.

Tool Installation Problems

Problem: Agent cannot install or find security tools.

Solutions:

  1. Manual tool installation:

    # Install Syft
    curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin
    
    # Install Grype
    curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin
    
    # Install Trivy
    curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b /usr/local/bin
    
  2. Check tool paths:

    # Verify tools are accessible
    which syft
    which grype
    which trivy
    
    # Test tool execution
    syft --version
    grype --version
    trivy --version
    
  3. Configure toolcache path:

    # konarr.yml
    agent:
      toolcache_path: "/usr/local/toolcache"
      tool_auto_install: true
    

Docker Socket Access

Problem: Agent cannot access Docker socket.

Error: Cannot connect to Docker daemon

Solutions:

  1. Check Docker socket permissions:

    # Verify socket exists and is accessible
    ls -la /var/run/docker.sock
    
    # Add user to docker group
    sudo usermod -aG docker $USER
    # Logout and login again
    
  2. Container socket mounting:

    # Ensure socket is properly mounted
    docker run -v /var/run/docker.sock:/var/run/docker.sock \
      ghcr.io/42bytelabs/konarr-agent:latest
    
  3. Docker daemon status:

    # Check Docker daemon is running
    systemctl status docker
    sudo systemctl start docker
    

Network Connectivity

Problem: Agent cannot reach Konarr server.

Solutions:

  1. Test connectivity:

    # Test server reachability
    curl -v http://konarr-server:9000/api/health
    
    # Check DNS resolution
    nslookup konarr-server
    
  2. Firewall configuration:

    # Check firewall rules
    sudo ufw status
    sudo firewall-cmd --list-all
    
    # Allow port 9000
    sudo ufw allow 9000
    
  3. Network configuration:

    # Check network interfaces
    ip addr show
    
    # Test port connectivity
    telnet konarr-server 9000
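
When `telnet` and `nc` are not installed, Bash's built-in `/dev/tcp` redirection can probe the port instead. This is a sketch (`port_open` is our helper name; the `konarr-server` host and port 9000 follow the examples above):

```shell
#!/usr/bin/env sh
# TCP port probe using bash's /dev/tcp pseudo-device (no telnet/nc needed).
# The probe runs inside `bash -c`, so the outer shell can be any POSIX sh.
port_open() {
    timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

if port_open konarr-server 9000; then
    echo "port 9000 reachable"
else
    echo "port 9000 unreachable"
fi
```
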
    

Container Issues

Image Pull Failures

Problem: Cannot pull Konarr container images.

Solutions:

  1. Check image availability:

    # List available tags (GitHub packages API; requires a token with read:packages)
    curl -s -H "Authorization: Bearer $GITHUB_TOKEN" \
      https://api.github.com/orgs/42ByteLabs/packages/container/konarr/versions
    
    # Pull specific version
    docker pull ghcr.io/42bytelabs/konarr:v0.4.4
    
  2. Authentication for private registries:

    # Login to GitHub Container Registry
    echo $GITHUB_TOKEN | docker login ghcr.io -u USERNAME --password-stdin
    

Container Startup Issues

Problem: Containers exit immediately or crash.

Solutions:

  1. Check container logs:

    # View container logs
    docker logs konarr-server
    docker logs konarr-agent
    
    # Follow logs in real-time
    docker logs -f konarr-server
    
  2. Verify volume mounts:

    # Check mount points exist and are writable
    ls -la /host/data
    ls -la /host/config
    
    # Fix permissions if needed
    sudo chown -R 1000:1000 /host/data
    
  3. Resource constraints:

    # Check available resources
    docker stats
    free -h
    df -h
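
The volume-mount checks in step 2 can be scripted so every mount point is validated the same way before starting the container. A sketch, assuming the `/host/data` and `/host/config` paths used above (`check_mount` is our helper name):

```shell
#!/usr/bin/env sh
# Report whether each host directory backing a volume mount exists
# and is writable by the current user.
check_mount() {
    dir="$1"
    if [ ! -d "$dir" ]; then
        echo "$dir: missing"
        return 1
    elif [ ! -w "$dir" ]; then
        echo "$dir: not writable"
        return 1
    fi
    echo "$dir: ok"
}

for dir in /host/data /host/config; do
    check_mount "$dir"
done
echo "mount check done"
```
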
    

Performance Issues

High Memory Usage

Problem: Server or agent consuming excessive memory.

Solutions:

  1. Monitor memory usage:

    # Check process memory
    ps aux | grep konarr
    
    # Monitor container resources
    docker stats konarr-server
    
  2. Configure resource limits:

    # docker-compose.yml
    services:
      konarr:
        deploy:
          resources:
            limits:
              memory: 512M
            reservations:
              memory: 256M
    
  3. Database optimization:

    # Vacuum SQLite database
    sqlite3 /data/konarr.db "VACUUM;"
    
    # Check database size
    du -h /data/konarr.db
    

Slow SBOM Generation

Problem: Agent takes too long to generate SBOMs.

Solutions:

  1. Check scanner performance:

    # Test individual tools
    time syft nginx:latest
    time grype nginx:latest
    
  2. Optimize container caching:

    # Pre-pull base images
    docker pull alpine:latest
    docker pull ubuntu:latest
    
    # Use local registry for faster access
    
  3. Adjust scanning scope:

    # konarr.yml - reduce scan scope
    agent:
      scan_config:
        exclude_paths:
          - "/tmp"
          - "/var/cache"
    

Debugging

Enable Debug Logging

Server Debug Mode:

# Environment variable
export RUST_LOG=debug

# Configuration file
echo "log_level: debug" >> konarr.yml

Agent Debug Mode:

# Debug Agent
konarr-cli --debug agent --docker-socket /var/run/docker.sock

# Debug environment
export KONARR_LOG_LEVEL=debug

API Debugging

Test API Endpoints:

# Health check
curl -v http://localhost:9000/api/health

# Authentication test
curl -H "Authorization: Bearer $AGENT_TOKEN" \
     http://localhost:9000/api/projects

# Raw SBOM upload test
curl -X POST \
     -H "Authorization: Bearer $AGENT_TOKEN" \
     -H "Content-Type: application/json" \
     -d @sbom.json \
     http://localhost:9000/api/snapshots

Database Debugging

Query Database Directly:

# Connect to SQLite database
sqlite3 /data/konarr.db

# Common debugging queries
.tables
SELECT COUNT(*) FROM projects;
SELECT COUNT(*) FROM snapshots;
SELECT COUNT(*) FROM components;

# Check recent activity
SELECT * FROM snapshots ORDER BY created_at DESC LIMIT 10;

Configuration Validation and Debugging

Initial Setup Verification

1. Server Health Check

# Test server is running and accessible
curl -v http://localhost:9000/api/health

# Expected response:
# HTTP/1.1 200 OK
# {"status":"healthy","version":"x.x.x"}

2. Database Verification

# Check database file exists and is accessible
ls -la /data/konarr.db

# Verify database structure
sqlite3 /data/konarr.db ".tables"

# Check server settings
sqlite3 /data/konarr.db "SELECT name, value FROM server_settings WHERE name LIKE 'agent%';"

3. Agent Authentication Test

# Test agent token authentication
curl -H "Authorization: Bearer ${KONARR_AGENT_TOKEN}" \
     http://localhost:9000/api/projects

# Successful authentication returns project list
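
The authentication test can be made self-diagnosing by capturing only the HTTP status code and mapping it to a likely cause. This is a sketch (`diagnose_auth` is our helper name; the endpoint and token variable match the curl example above):

```shell
#!/usr/bin/env sh
# Map the HTTP status from the authentication test to a diagnosis.
diagnose_auth() {
    case "$1" in
        200) echo "token accepted" ;;
        401|403) echo "token rejected - check agent.key on the server" ;;
        000) echo "no response - check server URL and network" ;;
        *) echo "unexpected status $1" ;;
    esac
}

# curl -o /dev/null -s -w '%{http_code}' prints only the status code;
# on connection failure it prints 000.
if command -v curl >/dev/null 2>&1; then
    status=$(curl -o /dev/null -s -w '%{http_code}' \
        -H "Authorization: Bearer ${KONARR_AGENT_TOKEN}" \
        http://localhost:9000/api/projects)
else
    status=000
fi
diagnose_auth "$status"
```
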

Advanced Configuration Troubleshooting

Server Startup Problems

Issue: Server fails to start or exits immediately

Solutions:

  1. Check configuration file syntax:

    # Validate YAML syntax
    python -c "import yaml; yaml.safe_load(open('konarr.yml'))"
    
  2. Verify data directory permissions:

    # Ensure data directory is writable
    mkdir -p /data
    chmod 755 /data
    chown konarr:konarr /data  # If running as specific user
    
  3. Check port availability:

    # Verify port 9000 is not in use
    ss -tulpn | grep :9000    # or: netstat -tulpn | grep :9000
    lsof -i :9000
    

Issue: Frontend not served properly

Solutions:

  1. Check frontend path configuration:

    server:
      frontend: "/app/dist"  # Ensure path exists and contains built frontend
    
  2. Verify frontend files exist:

    ls -la /app/dist/
    # Should contain: index.html, static/, assets/
    

Agent Configuration Problems

Issue: Agent cannot connect to server

Solutions:

  1. Verify server URL configuration:

    # Test connectivity
    curl -v http://konarr.example.com:9000/api/health
    
  2. Check agent token:

    # Retrieve current agent token from server
    sqlite3 /data/konarr.db "SELECT value FROM server_settings WHERE name='agent.key';"
    
  3. Network troubleshooting:

    # Test DNS resolution
    nslookup konarr.example.com
    
    # Test port connectivity
    telnet konarr.example.com 9000
    

Issue: Agent tools not found or installation fails

Solutions:

  1. Manual tool installation:

    # Install Syft
    curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | \
      sh -s -- -b /usr/local/bin
    
    # Install Grype
    curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | \
      sh -s -- -b /usr/local/bin
    
    # Install Trivy
    curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | \
      sh -s -- -b /usr/local/bin
    
  2. Verify tool paths:

    # Check tools are accessible
    which syft grype trivy
    /usr/local/bin/syft version
    /usr/local/bin/grype version
    /usr/local/bin/trivy version
    
  3. Configure custom tool paths:

    agent:
      toolcache_path: "/opt/security-tools"
      tool_auto_install: false
    

Performance Optimization

Database Performance

# Analyze database size and performance
sqlite3 /data/konarr.db "PRAGMA integrity_check;"
sqlite3 /data/konarr.db "VACUUM;"
sqlite3 /data/konarr.db "ANALYZE;"

# Check database file size
du -h /data/konarr.db

Memory and Resource Usage

# Monitor server resource usage
ps aux | grep konarr-server
htop -p "$(pgrep -d, konarr-server)"

# Container resource monitoring
docker stats konarr-server konarr-agent

Security Verification

SSL/TLS Configuration

# Test SSL certificate and configuration
openssl s_client -connect konarr.example.com:443 -servername konarr.example.com

# Check certificate expiration
curl -vI https://konarr.example.com 2>&1 | grep -E "(expire|until)"

Token Security

# Verify agent token length and prefix (printf avoids counting a trailing newline)
printf '%s' "$KONARR_AGENT_TOKEN" | wc -c     # Should be 43+ characters
printf '%s' "$KONARR_AGENT_TOKEN" | head -c 7  # Should print "kagent_"
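
Both checks can be wrapped in a helper that validates the token without echoing the secret itself. A sketch in POSIX sh (`check_token` is our name; the `kagent_` prefix and 43-character minimum follow the checks above):

```shell
#!/usr/bin/env sh
# Validate agent token format without printing the secret.
check_token() {
    token="$1"
    case "$token" in
        kagent_*) : ;;  # prefix is correct, fall through to length check
        *) echo "invalid: missing kagent_ prefix"; return 1 ;;
    esac
    if [ "${#token}" -lt 43 ]; then
        echo "invalid: token shorter than 43 characters"
        return 1
    fi
    echo "valid"
}

check_token "${KONARR_AGENT_TOKEN:-}" || true
```
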

Logging and Debugging

Enable Server Debug Logging

Server debug mode:

# Environment variable
export RUST_LOG=debug

# Or configuration file
echo "log_level: debug" >> konarr.yml

Agent debug mode:

# CLI flag
konarr-cli --debug agent monitor

# Environment variable
export KONARR_LOG_LEVEL=debug

Log Analysis

# Server logs (Docker)
docker logs -f konarr-server

# Agent logs (Docker)
docker logs -f konarr-agent

# System service logs
journalctl -u konarr-server -f
journalctl -u konarr-agent -f

Configuration Testing and Validation

Complete Configuration Test

# Test complete configuration (development)
cargo run -p konarr-server -- --config konarr.yml --debug

# Test agent configuration
konarr-cli --config konarr.yml --debug

Environment Variable Precedence

# Check configuration with debug output
konarr-cli --config konarr.yml --debug

# List all environment variables
env | grep KONARR_ | sort

Getting Help

Log Collection

When seeking support, collect these logs:

# Server logs
docker logs konarr-server > server.log 2>&1

# Agent logs
docker logs konarr-agent > agent.log 2>&1

# System information
docker info > docker-info.txt
uname -a > system-info.txt
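
The commands above can be combined into a single script that produces one archive to attach to a support request. A sketch (the bundle naming is ours; container names match the examples above):

```shell
#!/usr/bin/env sh
# Collect logs and system info into one timestamped support bundle.
bundle="konarr-support-$(date +%Y%m%d-%H%M%S)"
mkdir -p "$bundle"

# System information (always available)
uname -a > "$bundle/system-info.txt"

# Container logs and runtime info, only if docker is installed
if command -v docker >/dev/null 2>&1; then
    docker logs konarr-server > "$bundle/server.log" 2>&1 || true
    docker logs konarr-agent  > "$bundle/agent.log"  2>&1 || true
    docker info > "$bundle/docker-info.txt" 2>&1 || true
fi

tar czf "$bundle.tar.gz" "$bundle"
echo "collected into $bundle.tar.gz"
```

Remember to scrub tokens and other secrets from the bundle before sharing it.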

Support Channels

Reporting Bugs

Include in bug reports:

  1. Konarr version (konarr-server --version)
  2. Operating system and version
  3. Docker/container runtime version
  4. Complete error messages and stack traces
  5. Steps to reproduce the issue
  6. Configuration files (remove sensitive data)