Introduction
Konarr is a blazing fast, lightweight web interface for monitoring your servers, clusters, and containers' supply chains for dependencies and vulnerabilities. Written in Rust 🦀, it provides minimal resource usage while delivering real-time insights into your software bill of materials (SBOM) and security posture.
Key Features
- Simple, easy-to-use web interface with both light and dark modes
- Blazing fast performance with minimal resource usage (written in Rust 🦀)
- Real-time container monitoring using industry-standard scanners (Syft, Grype, Trivy)
- Orchestration support for:
- Docker / Podman
- Docker Compose / Docker Swarm
- Kubernetes support (planned 🚧)
- Software Bill of Materials (SBOM) generation and management for containers
- Supply chain attack monitoring (in development 🚧)
- CycloneDX support (v1.5 and v1.6) for SBOM formats
Architecture
Konarr follows a simple server + agent architecture:
- Server: Built with Rust and the Rocket framework
  - Provides REST API and web UI
  - Uses SQLite for lightweight data storage (GeekORM for database operations)
  - Stores server settings including agent authentication keys
  - Serves frontend built with Vue.js and TypeScript
  - Default port: 9000
- Agent / CLI: Rust-based CLI (`konarr-cli`) that provides:
  - Extensible tooling:
    - Tool discovery and management system
    - Support for multiple package managers
  - Standardized SBOM and vulnerability report uploading
Technologies Used
Konarr is built with modern, high-performance technologies:
Backend:
- Rust using Rocket framework for the web server
- GeekORM for database operations and SQLite integration
- Figment for configuration management
- Tokio for asynchronous runtime
Frontend:
- Vue.js 3 with TypeScript for reactive UI
- Tailwind CSS for responsive styling
- Vite for fast development and building
- Material Design Icons (MDI) and Heroicons for UI icons
- HeadlessUI for accessible UI components
Database:
- SQLite for lightweight, embedded data storage
- GeekORM for type-safe database operations
- Automatic migrations and schema management
Security & Standards:
- CycloneDX (v1.5 and v1.6) for SBOM format compliance
- Session-based authentication for web UI
- Bearer token authentication for agents
- CORS support for API access
Container & Deployment:
- Docker and Podman support
- Docker Compose configurations
- Kubernetes support (planned)
- Multi-architecture container images (x86_64, ARM64)
Quick Links
- Installation & Setup
- Server Setup
- Agent Setup
- Configuration & Usage
- API Documentation
- Security
Getting Started
- Install the Server - See Server Installation
- Configure Authentication - Retrieve the agent token from the server
- Deploy Agents - See Agent Installation to monitor your containers
- Monitor Projects - View SBOMs and vulnerabilities in the web interface
For a quick start using Docker, see our installation guide.
Project Repository: https://github.com/42ByteLabs/konarr
Frontend Repository: https://github.com/42ByteLabs/konarr-client
Container Images: Available on GitHub Container Registry
License: Apache 2.0
Installation and Setup
This section provides multiple installation methods for Konarr. Choose the method that best fits your environment:
- Quick Start: One-line installer script
- Docker Compose: Full stack with server and agent
- Individual Components: Install server and agent separately
- From Source: Build from the GitHub repository
Quick Start (Recommended)
The fastest way to get Konarr running is with Docker:
# Run Konarr server
docker run -d \
--name konarr \
-p 9000:9000 \
-v ./data:/data \
-v ./config:/config \
ghcr.io/42bytelabs/konarr:latest
This will start the Konarr server with:
- Web interface accessible at `http://localhost:9000`
- Data persistence in the `./data` directory
- Optional configuration in `./config/konarr.yml`
Docker Compose Setup
For a complete development environment with both server and frontend:
# Clone repository
git clone https://github.com/42ByteLabs/konarr.git && cd konarr
git submodule update --init --recursive
# Start services
docker-compose up -d
This provides:
- Konarr server on port 9000
- Development setup with both server and frontend
- Persistent data volumes
- Automatic service management
Component Installation
For detailed setup of individual components:
- Server: Server Installation Guide - API, web UI, and data storage
- Agent: Agent Installation Guide - Container monitoring and SBOM generation
Prerequisites
For Container Deployment:
- Docker (v20.10+) or Podman (v3.0+)
- Docker Compose (optional, for multi-container setup)
For Source Installation:
- Rust and Cargo (latest stable)
- Node.js and npm (for frontend build)
- Git (for cloning repository)
System Requirements:
- Minimum: 256MB RAM, 1GB disk space
- Recommended: 512MB+ RAM, 5GB+ disk space (for SBOM storage)
Quick Workflow
- Start the server (port 9000 by default)
- Access the web UI at `http://localhost:9000`
- Retrieve the agent token from server settings or database
- Deploy agents on hosts you want to monitor
- Create projects to organize your container monitoring
- View SBOMs and vulnerabilities in the web interface
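Putting these steps together, here is a minimal end-to-end sketch on a single Docker host. It assumes the default paths from the Quick Start, and the token query mirrors the database method described under Agent Token Management below:

```bash
# 1. Start the server
docker run -d --name konarr -p 9000:9000 \
  -v ./data:/data -v ./config:/config \
  ghcr.io/42bytelabs/konarr:latest

# 2. Retrieve the auto-generated agent token from the SQLite database
AGENT_TOKEN=$(sqlite3 ./data/konarr.db \
  "SELECT value FROM server_settings WHERE name='agent.key';")

# 3. Deploy a monitoring agent (use an address reachable from inside
#    the agent container, not localhost)
docker run -d --name konarr-agent \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -e KONARR_INSTANCE="http://your-server:9000" \
  -e KONARR_AGENT_TOKEN="$AGENT_TOKEN" \
  -e KONARR_AGENT_MONITORING=true \
  -e KONARR_AGENT_AUTO_CREATE=true \
  ghcr.io/42bytelabs/konarr-agent:latest
```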
Default Ports and Paths
- Server Port: 9000 (HTTP)
- Data Directory: `./data` (contains the SQLite database)
- Config Directory: `./config` (contains `konarr.yml`)
- Database: `./data/konarr.db` (SQLite)
- Agent Token: Stored in server settings as `agent.key`
Verifying and Troubleshooting
- Start the server and open the UI: http://localhost:9000 (or configured host).
- Start the agent with the correct instance URL, token, and a project ID (or with auto-create enabled).
- Confirm snapshots appear in the project view and the server shows the agent as authenticated.
Common troubleshooting
- Agent authentication failures: double-check the `KONARR_AGENT_TOKEN` value and ensure the server's `agent.key` matches.
- Missing scanner binaries: either enable `agent.tool_auto_install` or install syft/grype/trivy on the host/container and make sure they are on `PATH` or in `/usr/local/toolcache`.
- Frontend not served when running the server from source: build the frontend (`client/`) and point the server's `frontend` config to the `dist` directory.
Konarr Server
The Konarr server is the central component providing the REST API, web interface, and data storage. It's built with Rust using the Rocket framework and stores data in SQLite by default.
Installation Methods
Docker (Recommended)
Single Container:
docker run -d \
--name konarr \
-p 9000:9000 \
-v ./data:/data \
-v ./config:/config \
ghcr.io/42bytelabs/konarr:latest
Key Points:
- Server listens on port 9000
- Data persisted in `./data` (SQLite database)
- Configuration in `./config` (optional `konarr.yml`)
- Automatic database migrations on startup
Docker Compose
For production deployments, see our Docker Compose guide which includes:
- Service definitions
- Volume management
- Health checks
- Upgrade procedures
Cargo Installation
Install the server binary directly:
# Install from crates.io
cargo install konarr-server
# Run with default configuration
konarr-server
# Run with custom config
konarr-server -c ./konarr.yml
Note: Cargo installation is not recommended for production use.
From Source (Development)
Requirements:
- Rust and Cargo (latest stable)
- Node.js and npm (for frontend)
- Git
Clone and Build:
# Clone repository with frontend submodule
git clone https://github.com/42ByteLabs/konarr.git && cd konarr
git submodule update --init --recursive
# Build frontend
cd frontend && npm install && npm run build && cd ..
# Run server (development mode)
cargo run -p konarr-server
# Or build and run release
cargo run -p konarr-server --release -- -c ./konarr.yml
Development with Live Reload:
# Watch mode for server changes
cargo watch -q -c -- cargo run -p konarr-server
# Frontend development (separate terminal)
cd frontend && npm run dev
This creates:
- Default config: `config/konarr.yml`
- SQLite database: `data/konarr.db`
- Server on port 8000 (development) or 9000 (production/release)
Configuration
Environment Variables
The server uses Figment for configuration, supporting environment variables with the `KONARR_` prefix:
# Server settings
export KONARR_SERVER__PORT=9000
export KONARR_DATA_PATH=/data
export KONARR_FRONTEND__URL=https://konarr.example.com
# Database settings
export KONARR_DATABASE__PATH=/data/konarr.db
# Security
export KONARR_SECRET=your-secret-key
Configuration File
Create `konarr.yml` for persistent settings:
server:
host: "0.0.0.0"
port: 9000
data_path: "/data"
frontend:
url: "https://konarr.example.com"
secret: "your-secret-key"
database:
path: "/data/konarr.db"
agent:
key: "your-agent-key" # Optional: will be generated if not provided
Agent Token Management
The server automatically generates an agent authentication key on first startup, stored as `agent.key` in ServerSettings.
Retrieving the Agent Token
Method 1: Database Query
sqlite3 ./data/konarr.db "SELECT value FROM server_settings WHERE name='agent.key';"
Method 2: Configuration File
If you set the agent key in `konarr.yml`, use that value.
Method 3: Web UI
Access server settings through the admin interface (requires authentication).
⚠️ Security: Treat the agent token as a secret. Do not commit to version control or share publicly.
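To sanity-check a retrieved token, send it as a Bearer header the same way the agent does. This mirrors the connection test in the agent troubleshooting section; note that the health endpoint may also answer unauthenticated requests, so an authentication failure elsewhere is the clearer signal:

```bash
# Expect HTTP 200 when the server accepts the token
curl -s -o /dev/null -w "%{http_code}\n" \
  -H "Authorization: Bearer $AGENT_TOKEN" \
  http://localhost:9000/api/health
```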
Production Deployment
Reverse Proxy Setup
See Reverse Proxy Setup for detailed configuration examples.
Security Recommendations
- Use HTTPS: Configure TLS termination at the reverse proxy
- Set frontend URL: Update `server.frontend.url` to match the external URL
- Secure volumes: Protect `./data` and `./config` with appropriate file permissions (see the sketch below)
- Stable secrets: Set `server.secret` to a strong, persistent value
- Regular backups: Back up the SQLite database before upgrades
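A minimal sketch for the volume and secret recommendations above, assuming the container runs as root (the default for the official image; adjust ownership if you run it as another user):

```bash
# Restrict the data and config directories to the owning user
chmod 700 ./data ./config
chmod 600 ./data/konarr.db ./config/konarr.yml

# Generate a strong secret once and keep it stable across restarts
export KONARR_SECRET="$(openssl rand -base64 32)"
```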
Resource Requirements
- Minimum: 256MB RAM, 1GB disk
- Recommended: 512MB+ RAM, 5GB+ disk (for SBOM storage)
- CPU: Scales with number of concurrent users and agent uploads
Monitoring
Monitor server health:
# Health check endpoint
curl http://localhost:9000/api/health
# Container logs
docker logs -f konarr
# Database size
du -h ./data/konarr.db
Next Steps: Configure and deploy agents to start monitoring containers.
Server Docker Compose
This page provides a ready-to-use Docker Compose example and notes for deploying the Konarr Server in a multi-container environment (eg. web + DB volumes). The example focuses on the official Konarr image and mounting persistent volumes for data and config.
docker-compose example
Save the following as `docker-compose.yml` in your deployment directory and adjust paths and environment variables as needed:
services:
konarr:
image: ghcr.io/42bytelabs/konarr:latest
container_name: konarr
restart: unless-stopped
ports:
- "9000:9000"
volumes:
- ./data:/data
- ./config:/config
environment:
# Use KONARR_ prefixed env vars to configure the server if you prefer env-based config
- KONARR_DATA_PATH=/data
- KONARR_CONFIG_PATH=/config/konarr.yml
healthcheck:
test: ["CMD-SHELL", "curl -f http://localhost:9000/api/health || exit 1"]
interval: 30s
timeout: 5s
retries: 3
Deploy
# start in detached mode
docker compose up -d
Monitor logs
# show logs
docker compose logs -f konarr
Volumes and persistent data
- `./data` stores the SQLite database and other runtime state; back this up regularly.
- `./config` stores `konarr.yml` (optional). If you want immutable configuration, mount a read-only config volume and supply environment variables for secrets.
Backups and migrations
- Back up the `data/konarr.db` file before performing upgrades.
- On first run the server will run migrations; ensure your backup is taken before major version upgrades.
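A minimal backup sketch; SQLite is a single file, so a plain copy while the service is stopped is sufficient:

```bash
# Stop the service so the database file is quiescent
docker compose stop konarr

# Keep a timestamped copy of the database
cp ./data/konarr.db "./data/konarr.db.$(date +%Y%m%d-%H%M%S).bak"

# Bring the service back up (or continue with the upgrade steps below)
docker compose up -d konarr
```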
Upgrading the image
- Pull the new image: `docker compose pull konarr`
- Restart the service: `docker compose up -d --no-deps konarr`
- Monitor logs for migrations: `docker compose logs -f konarr`
Notes
- The server listens on port 9000 by default.
- Use a reverse proxy or load balancer in front of the service for TLS termination in production.
- For security, protect the `config` and `data` directories and do not expose the database file to untrusted users.
Kubernetes Deployment
This guide covers deploying Konarr server and agents on Kubernetes clusters, including configuration, security considerations, and operational best practices.
Overview
Konarr can be deployed on Kubernetes using standard manifests or Helm charts. The deployment typically includes:
- Konarr Server: Web interface, API, and database
- Konarr Agents: Container monitoring and SBOM generation (optional)
- Supporting Resources: ConfigMaps, Secrets, Services, and storage
Prerequisites
- Kubernetes cluster (v1.20+)
- `kubectl` configured to access your cluster
- Persistent storage support (for database persistence)
- LoadBalancer or Ingress controller (for external access)
Quick Start
Minimal Deployment
# konarr-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
name: konarr
---
# konarr-server.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: konarr-server
namespace: konarr
labels:
app: konarr-server
spec:
replicas: 1
selector:
matchLabels:
app: konarr-server
template:
metadata:
labels:
app: konarr-server
spec:
containers:
- name: konarr-server
image: ghcr.io/42bytelabs/konarr:latest
ports:
- containerPort: 9000
env:
- name: KONARR_DATA_PATH
value: "/data"
volumeMounts:
- name: data
mountPath: /data
resources:
requests:
memory: "256Mi"
cpu: "100m"
limits:
memory: "512Mi"
cpu: "500m"
volumes:
- name: data
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
name: konarr-server
namespace: konarr
spec:
selector:
app: konarr-server
ports:
- port: 9000
targetPort: 9000
type: ClusterIP
Deploy the minimal setup:
kubectl apply -f konarr-namespace.yaml
kubectl apply -f konarr-server.yaml
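Note that the minimal manifest uses an `emptyDir` volume, so the database is lost whenever the pod is rescheduled. For real deployments, back the `/data` mount with a PersistentVolumeClaim; a hedged sketch follows (your cluster's default storage class is assumed):

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: konarr-data
  namespace: konarr
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi   # matches the recommended disk sizing above
EOF
```

Then replace the `emptyDir: {}` volume in the Deployment with `persistentVolumeClaim: { claimName: konarr-data }`.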
Troubleshooting
Common Issues
- Agent Permission Issues:
# Check agent logs
kubectl logs -n konarr -l app=konarr-agent
# Verify RBAC permissions
kubectl auth can-i get pods --as=system:serviceaccount:konarr:konarr-agent
- Storage Issues:
# Check PVC status
kubectl get pvc -n konarr
# Check storage class
kubectl get storageclass
- Network Connectivity:
# Test internal service connectivity
kubectl exec -n konarr deployment/konarr-agent -- curl http://konarr-server:9000/api/health
# Check ingress status
kubectl get ingress -n konarr
Debug Commands
# Get all Konarr resources
kubectl get all -n konarr
# Check events
kubectl get events -n konarr --sort-by='.lastTimestamp'
# Debug pod issues
kubectl describe pod -n konarr -l app=konarr-server
# Check logs
kubectl logs -n konarr deployment/konarr-server --follow
Deployment Scripts
Complete Deployment Script
#!/bin/bash
# deploy-konarr.sh
set -e
export NAMESPACE="konarr"
export DOMAIN="konarr.example.com"  # exported so envsubst can substitute them
echo "Creating namespace..."
kubectl create namespace ${NAMESPACE} --dry-run=client -o yaml | kubectl apply -f -
echo "Generating secrets..."
AGENT_TOKEN=$(openssl rand -base64 32)
SERVER_SECRET=$(openssl rand -base64 32)
kubectl create secret generic konarr-secrets \
--namespace=${NAMESPACE} \
--from-literal=agent-token=${AGENT_TOKEN} \
--from-literal=server-secret=${SERVER_SECRET} \
--dry-run=client -o yaml | kubectl apply -f -
echo "Deploying Konarr server..."
envsubst < konarr-server.yaml | kubectl apply -f -
echo "Deploying Konarr agents..."
kubectl apply -f konarr-agent-daemonset.yaml
echo "Configuring ingress..."
envsubst < konarr-ingress.yaml | kubectl apply -f -
echo "Waiting for deployment..."
kubectl wait --for=condition=available --timeout=300s deployment/konarr-server -n ${NAMESPACE}
echo "Konarr deployed successfully!"
echo "Access at: https://${DOMAIN}"
echo "Agent token: ${AGENT_TOKEN}"
Migration from Docker
Data Migration
# Copy data from Docker volume to Kubernetes PV
kubectl cp /var/lib/docker/volumes/konarr_data/_data/konarr.db \
konarr/konarr-server-pod:/data/konarr.db
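After copying the database, restart the server so it reopens the migrated file (names follow the manifests above):

```bash
# Restart the deployment and confirm the database is in place
kubectl rollout restart deployment/konarr-server -n konarr
kubectl rollout status deployment/konarr-server -n konarr
kubectl exec -n konarr deployment/konarr-server -- ls -l /data/konarr.db
```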
Best Practices
Resource Management
- Use resource requests and limits
- Configure appropriate storage classes
- Implement monitoring and alerting
- Use horizontal pod autoscaling for high-traffic deployments
Security
- Run as non-root user
- Use read-only root filesystems where possible
- Implement network policies
- Regular security updates and scanning
Operations
- Implement proper backup strategies
- Monitor resource usage and performance
- Use GitOps for configuration management
- Regular testing of disaster recovery procedures
Additional Resources
- Server Configuration - Basic server setup
- Agent Configuration - Agent deployment options
- Security Guide - Security best practices
- Troubleshooting - Common issues and solutions
Reverse Proxy Setup
Running Konarr behind a reverse proxy is recommended for production deployments to provide TLS termination, load balancing, and additional security features.
Nginx
Basic Configuration
server {
listen 80;
server_name konarr.example.com;
# Redirect HTTP to HTTPS
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl http2;
server_name konarr.example.com;
# SSL configuration
ssl_certificate /path/to/certificate.crt;
ssl_certificate_key /path/to/private.key;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384;
ssl_prefer_server_ciphers off;
# Security headers
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
# Proxy configuration
location / {
proxy_pass http://127.0.0.1:9000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Port $server_port;
# WebSocket support (if needed for future features)
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
# Timeouts
proxy_connect_timeout 30s;
proxy_send_timeout 30s;
proxy_read_timeout 30s;
# Buffer settings
proxy_buffering on;
proxy_buffer_size 4k;
proxy_buffers 8 4k;
}
# API endpoints with longer timeouts for large SBOM uploads
location ~ ^/api/(snapshots|upload) {
proxy_pass http://127.0.0.1:9000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Extended timeouts for large uploads
proxy_connect_timeout 60s;
proxy_send_timeout 300s;
proxy_read_timeout 300s;
# Increase client max body size for SBOM uploads
client_max_body_size 50M;
}
# Health check endpoint
location /api/health {
proxy_pass http://127.0.0.1:9000;
access_log off;
}
}
Let's Encrypt with Certbot
Automatically obtain and renew SSL certificates:
# Install certbot
sudo apt update
sudo apt install certbot python3-certbot-nginx
# Obtain certificate
sudo certbot --nginx -d konarr.example.com
# Test automatic renewal
sudo certbot renew --dry-run
Traefik
Docker Compose Configuration
version: '3.8'
services:
traefik:
image: traefik:v3.0
container_name: traefik
restart: unless-stopped
ports:
- "80:80"
- "443:443"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./traefik.yml:/traefik.yml:ro
- ./acme.json:/acme.json
labels:
- "traefik.enable=true"
- "traefik.http.routers.dashboard.rule=Host(`traefik.example.com`)"
- "traefik.http.routers.dashboard.tls.certresolver=letsencrypt"
konarr:
image: ghcr.io/42bytelabs/konarr:latest
container_name: konarr
restart: unless-stopped
volumes:
- ./data:/data
- ./config:/config
labels:
- "traefik.enable=true"
- "traefik.http.routers.konarr.rule=Host(`konarr.example.com`)"
- "traefik.http.routers.konarr.tls.certresolver=letsencrypt"
- "traefik.http.services.konarr.loadbalancer.server.port=9000"
# Health check
- "traefik.http.services.konarr.loadbalancer.healthcheck.path=/api/health"
- "traefik.http.services.konarr.loadbalancer.healthcheck.interval=30s"
Traefik Configuration (`traefik.yml`)
api:
dashboard: true
entryPoints:
web:
address: ":80"
http:
redirections:
entrypoint:
to: websecure
scheme: https
websecure:
address: ":443"
providers:
docker:
endpoint: "unix:///var/run/docker.sock"
exposedByDefault: false
certificatesResolvers:
letsencrypt:
acme:
email: admin@example.com
storage: acme.json
httpChallenge:
entryPoint: web
Caddy
Caddyfile Configuration
konarr.example.com {
reverse_proxy 127.0.0.1:9000 {
header_up X-Real-IP {remote_addr}
header_up X-Forwarded-For {remote_addr}
header_up X-Forwarded-Proto {scheme}
# Health check
health_uri /api/health
health_interval 30s
health_timeout 10s
}
# Security headers
header {
X-Frame-Options DENY
X-Content-Type-Options nosniff
X-XSS-Protection "1; mode=block"
Strict-Transport-Security "max-age=31536000; includeSubDomains"
}
# Longer timeouts for API uploads
@api_uploads path /api/snapshots* /api/upload*
reverse_proxy @api_uploads 127.0.0.1:9000 {
timeout 300s
}
}
Apache HTTP Server
Virtual Host Configuration
<VirtualHost *:80>
ServerName konarr.example.com
Redirect permanent / https://konarr.example.com/
</VirtualHost>
<VirtualHost *:443>
ServerName konarr.example.com
# SSL Configuration
SSLEngine on
SSLCertificateFile /path/to/certificate.crt
SSLCertificateKeyFile /path/to/private.key
SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384
# Security Headers
Header always set X-Frame-Options DENY
Header always set X-Content-Type-Options nosniff
Header always set X-XSS-Protection "1; mode=block"
Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"
# Proxy Configuration
ProxyPreserveHost On
ProxyPass / http://127.0.0.1:9000/
ProxyPassReverse / http://127.0.0.1:9000/
# Set headers for backend
RequestHeader set X-Forwarded-Proto "https"
SetEnvIf X-Forwarded-Proto https HTTPS=on
</VirtualHost>
HAProxy
Load Balancing Configuration
global
daemon
maxconn 4096
defaults
mode http
timeout connect 5000ms
timeout client 50000ms
timeout server 50000ms
frontend konarr_frontend
bind *:80
bind *:443 ssl crt /path/to/certificate.pem
# Redirect HTTP to HTTPS
redirect scheme https if !{ ssl_fc }
# Security headers
http-response set-header X-Frame-Options DENY
http-response set-header X-Content-Type-Options nosniff
http-response set-header Strict-Transport-Security "max-age=31536000; includeSubDomains"
default_backend konarr_backend
backend konarr_backend
balance roundrobin
# Health check
option httpchk GET /api/health
# Backend servers
server konarr1 127.0.0.1:9000 check
# server konarr2 127.0.0.1:9001 check # Additional instances
Security Considerations
Rate Limiting
Configure rate limiting at the reverse proxy level:
Nginx:
# Rate limiting configuration
limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=upload:10m rate=1r/s;
location /api/ {
limit_req zone=api burst=20 nodelay;
}
location ~ ^/api/(snapshots|upload) {
limit_req zone=upload burst=5 nodelay;
}
Traefik:
# Add to service labels
- "traefik.http.middlewares.ratelimit.ratelimit.burst=20"
- "traefik.http.middlewares.ratelimit.ratelimit.average=10"
- "traefik.http.routers.konarr.middlewares=ratelimit"
IP Whitelisting
Restrict access to specific IP ranges:
Nginx:
# Allow specific networks
allow 10.0.0.0/8;
allow 192.168.0.0/16;
allow 172.16.0.0/12;
deny all;
Authentication Middleware
Add basic authentication at the proxy level:
Nginx:
location / {
auth_basic "Konarr Access";
auth_basic_user_file /etc/nginx/.htpasswd;
proxy_pass http://127.0.0.1:9000;
}
Monitoring and Logging
Access Logs
Configure detailed logging for monitoring:
Nginx:
log_format konarr_format '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" '
'$request_time $upstream_response_time';
access_log /var/log/nginx/konarr.access.log konarr_format;
Health Checks
Set up monitoring for the reverse proxy and backend:
#!/bin/bash
# Simple health check script
curl -f -s https://konarr.example.com/api/health > /dev/null
if [ $? -eq 0 ]; then
echo "Konarr is healthy"
exit 0
else
echo "Konarr health check failed"
exit 1
fi
Configuration Notes
Backend URL Configuration
Update Konarr server configuration to use the external URL:
# konarr.yml
server:
frontend:
url: "https://konarr.example.com"
CORS Configuration
If needed, configure CORS headers:
# Add CORS headers if required
add_header Access-Control-Allow-Origin "https://trusted-domain.com";
add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS";
add_header Access-Control-Allow-Headers "Authorization, Content-Type";
Konarr Agent
The Konarr Agent (`konarr-cli`) is a Rust-based command-line tool that monitors containers, generates SBOMs, and uploads security data to the Konarr server. It can run as a one-shot scanner or in continuous monitoring mode.
Installation Methods
Docker (Recommended)
Basic Agent Run:
docker run -it --rm \
-e KONARR_INSTANCE="http://your-server:9000" \
-e KONARR_AGENT_TOKEN="<AGENT_TOKEN>" \
-e KONARR_AGENT_PROJECT_ID="<PROJECT_ID>" \
ghcr.io/42bytelabs/konarr-agent:latest
Container Monitoring Mode:
docker run -d \
--name konarr-agent \
-v /var/run/docker.sock:/var/run/docker.sock:ro \
-e KONARR_INSTANCE="http://your-server:9000" \
-e KONARR_AGENT_TOKEN="<AGENT_TOKEN>" \
-e KONARR_AGENT_MONITORING=true \
-e KONARR_AGENT_AUTO_CREATE=true \
ghcr.io/42bytelabs/konarr-agent:latest
🔐 Security Warning: Mounting the Docker socket (`/var/run/docker.sock`) grants the container significant control over the host system. This includes the ability to:
- Create privileged containers
- Access host filesystem through volume mounts
- Escalate privileges
- Inspect all running containers
Security Mitigations:
- Only run on trusted hosts with trusted images
- Use read-only mounts when possible (`:ro`)
- Consider using a dedicated host agent instead of a containerized agent
- Limit agent runtime permissions
- Monitor agent activity closely
- Consider using container runtimes with safer introspection APIs
Cargo Installation
Install the CLI binary directly:
# Install from crates.io
cargo install konarr-cli
# Run agent
konarr-cli --instance http://your-server:9000 \
--agent-token <AGENT_TOKEN> \
agent --docker-socket /var/run/docker.sock
# One-shot scan
konarr-cli --instance http://your-server:9000 \
--agent-token <AGENT_TOKEN> \
scan --image alpine:latest
Specialized Agent Images
Syft-only Agent:
FROM ghcr.io/42bytelabs/konarr-cli:latest
RUN curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin
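Build and run it like any other image (the tag name is illustrative):

```bash
# Build the Syft-only agent image from the Dockerfile above
docker build -t konarr-agent-syft .

# Run it with the usual agent environment
docker run -d --name konarr-agent-syft \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -e KONARR_INSTANCE="http://your-server:9000" \
  -e KONARR_AGENT_TOKEN="<AGENT_TOKEN>" \
  -e KONARR_AGENT_MONITORING=true \
  konarr-agent-syft
```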
Configuration
Environment Variables
The agent supports configuration via environment variables with the `KONARR_AGENT_` prefix:
# Core settings
export KONARR_INSTANCE="http://konarr.example.com:9000"
export KONARR_AGENT_TOKEN="your-agent-token"
export KONARR_AGENT_PROJECT_ID="project-123"
# Monitoring settings
export KONARR_AGENT_MONITORING=true
export KONARR_AGENT_AUTO_CREATE=true
export KONARR_AGENT_HOST="production-server-01"
# Tool management
export KONARR_AGENT_TOOL="syft" # or "grype", "trivy"
export KONARR_AGENT_TOOL_AUTO_INSTALL=true
export KONARR_AGENT_TOOL_AUTO_UPDATE=true
# Docker settings
export KONARR_AGENT_DOCKER_SOCKET="/var/run/docker.sock"
Configuration File
Create `konarr.yml` for persistent settings:
agent:
project_id: "my-project"
create: true # Auto-create projects
monitoring: true # Watch Docker events
host: "server-01" # Friendly host name
# Tool configuration
tool: "syft" # Primary SBOM tool
tool_auto_install: true
tool_auto_update: true
docker_socket: "/var/run/docker.sock"
CLI Commands
Agent Mode (Continuous Monitoring):
# Monitor with config file
konarr-cli --config ./konarr.yml agent --docker-socket /var/run/docker.sock
# Monitor with environment variables set
konarr-cli agent --docker-socket /var/run/docker.sock
# Monitor specific Docker socket
konarr-cli agent --docker-socket /custom/docker.sock
Scan Mode (One-time Scan):
# Scan specific image
konarr-cli scan --image alpine:latest
# Scan with output to file
konarr-cli --config ./konarr.yml scan --image alpine:latest --output scan-results.json
Tool Management:
# List available tools
konarr-cli scan --list
# Install specific tool
konarr-cli tools install syft
# Check tool versions
konarr-cli tools list
Scanning Tools
The agent uses external tools for SBOM generation and vulnerability scanning:
Supported Tools
Tool | Purpose | Auto-Install | Package Managers |
---|---|---|---|
Syft | SBOM Generation | ✅ | NPM, Cargo, Deb, RPM, PyPI, Maven, Go |
Grype | Vulnerability Scanning | ✅ | All Syft-supported formats |
Trivy | Security Scanning | ✅ | Multi-format vulnerability detection |
Tool Installation
The agent can automatically install tools:
# Enable auto-install (default in container images)
export KONARR_AGENT_TOOL_AUTO_INSTALL=true
# Manual tool installation
konarr-cli tools install syft
konarr-cli tools install grype
konarr-cli tools install trivy
Tool Storage Locations:
- Container: `/usr/local/toolcache/`
- Host install: `~/.local/bin/` or `/usr/local/bin/`
- Custom: Set via `KONARR_AGENT_TOOLCACHE_PATH`
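For example, to point the agent at a custom shared cache (the path here is illustrative):

```bash
# Install tools into a custom cache and verify they are discovered
export KONARR_AGENT_TOOLCACHE_PATH=/opt/konarr/toolcache
mkdir -p "$KONARR_AGENT_TOOLCACHE_PATH"
konarr-cli tools install syft
konarr-cli tools list
```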
Tool Configuration
# Custom tool settings
agent:
tool: "syft" # Primary tool
tool_auto_install: true # Auto-install missing tools
tool_auto_update: false # Auto-update tools
toolcache_path: "/usr/local/toolcache" # Tool storage location
Project Management
Project Creation
The agent can automatically create projects or upload to existing ones:
# Auto-create project (default behavior)
export KONARR_AGENT_AUTO_CREATE=true
# Use existing project ID
export KONARR_AGENT_PROJECT_ID="existing-project-123"
Project Naming Convention:
- Docker Compose: `{prefix}/{container_name}`
- Labeled containers: `{prefix}/{image-title}`
- Default: Container name or image name
Container Filtering
The agent automatically monitors containers but can be configured to filter:
# Example filtering (implementation-dependent)
agent:
monitoring: true
filters:
exclude_labels:
- "konarr.ignore=true"
include_only:
- "environment=production"
Docker Compose Integration
For container monitoring via Docker Compose, see our Agent Docker Compose guide.
Example `docker-compose.yml` service:
services:
konarr-agent:
image: ghcr.io/42bytelabs/konarr-agent:latest
container_name: konarr-agent
restart: unless-stopped
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
environment:
- KONARR_INSTANCE=http://konarr-server:9000
- KONARR_AGENT_TOKEN=${KONARR_AGENT_TOKEN}
- KONARR_AGENT_MONITORING=true
- KONARR_AGENT_AUTO_CREATE=true
Troubleshooting
Common Issues
Agent Authentication Failed:
# Verify token
echo $KONARR_AGENT_TOKEN
# Test server connection
curl -H "Authorization: Bearer $KONARR_AGENT_TOKEN" \
http://your-server:9000/api/health
Tool Installation Issues:
# Check tool availability
konarr-cli tools list
# Manual tool install
konarr-cli tools install syft
# Check tool cache
ls -la /usr/local/toolcache/
Docker Socket Issues:
# Verify Docker socket access
docker ps
# Check socket permissions
ls -la /var/run/docker.sock
Monitoring Agent Status
# Container logs
docker logs -f konarr-agent
# Agent health (if running as daemon)
konarr-cli agent status
# Server-side agent status
curl http://your-server:9000/api/agents
Next Steps: Configure monitoring and view results in the Konarr web interface.
Docker Compose for Agent
This document shows how to run the Konarr Agent via Docker Compose — useful for running a long-lived agent that monitors a host and uploads snapshots.
docker-compose example (monitoring host Docker)
Save as `docker-compose-agent.yml` and run from the host you want to monitor.
version: '3.8'
services:
konarr-agent:
image: ghcr.io/42bytelabs/konarr-agent:latest
container_name: konarr-agent
restart: unless-stopped
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
environment:
- KONARR_INSTANCE=http://your-server:9000
- KONARR_AGENT_TOKEN=<AGENT_TOKEN>
- KONARR_AGENT_MONITORING=true
- KONARR_AGENT_TOOL_AUTO_INSTALL=true
Notes and security
- The compose example mounts the Docker socket as read-only. Even read-only mounts may expose sensitive control; follow the security guidance in `02-agent.md` before using this in production.
- Use a secrets manager (or Docker secrets) to provide the agent token in production rather than hard-coding it in the compose file (see the sketch below).
Run
docker compose -f docker-compose-agent.yml up -d
Upgrading
docker compose -f docker-compose-agent.yml pull konarr-agent
docker compose -f docker-compose-agent.yml up -d --no-deps konarr-agent
Configuration & Usage
This page provides an overview of Konarr configuration and common usage workflows for the Server, Web UI, and Agent (CLI).
Configuration Sources and Precedence
Konarr uses a configuration merging strategy (Figment in the server code):
1. `konarr.yml` configuration file (if present)
2. Environment variables
3. Command-line flags (where present)
Environment variables are supported and commonly used for container deployments. The server and agent use prefixed environment variables to avoid collisions:
- Server-wide env vars: prefix with `KONARR_` (e.g., `KONARR_DATA_PATH`, `KONARR_DATABASE_URL`)
- Agent-specific env vars: prefix with `KONARR_AGENT_` (e.g., `KONARR_AGENT_TOKEN`, `KONARR_AGENT_MONITORING`)
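As a quick sketch of how these layers combine in practice (values are illustrative; later sources override earlier ones):

```bash
# konarr.yml sets port 9000; this environment variable overrides it
export KONARR_SERVER__PORT=9001
konarr-server -c ./konarr.yml

# Agent-specific values use the KONARR_AGENT_ prefix
export KONARR_AGENT_TOKEN="your-agent-token"
export KONARR_AGENT_MONITORING=true
konarr-cli agent --docker-socket /var/run/docker.sock
```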
Container Defaults
Packaged defaults (container images):
- Data path: `/data` (exposed as `KONARR_DATA_PATH=/data`)
- Config file path: `/config/konarr.yml` (mount `/config` to provide `konarr.yml`)
- HTTP port: 9000
Configuration Overview
Konarr configuration is organized into several main sections:
Server Configuration
The server configuration controls the web interface, API, database, and security settings.
Key areas:
- Network settings (domain, port, scheme)
- Security settings (secrets, authentication)
- Database configuration
- Frontend configuration
- Session management
For detailed server configuration, see: Server Configuration
Agent Configuration
The agent configuration controls how agents connect to the server, which projects they target, and how they scan containers.
Key areas:
- Server connectivity and authentication
- Project targeting and auto-creation
- Docker monitoring and scanning
- Security tool management
- Resource limits and filtering
For detailed agent configuration, see: Agent Configuration Overview
Sample Complete Configuration
# Basic konarr.yml example
server:
domain: "konarr.example.com"
port: 9000
scheme: "https"
secret: "your-secret-key"
data_path: "/var/lib/konarr"
database:
path: "/var/lib/konarr/konarr.db"
agent:
token: "your-agent-token"
project_id: "123"
monitoring: true
tool_auto_install: true
sessions:
admins:
expires: 8
users:
expires: 24
CLI Usage (konarr-cli)
Global Flags
Argument | Description |
---|---|
--config <path> | Path to a konarr.yml configuration file |
--instance <url> | Konarr server URL (example: http://your-server:9000 ) |
--agent-token <token> | Agent token for authentication (or use KONARR_AGENT_TOKEN env var) |
--debug | Enable debug logging |
--project-id <id> | Project ID for operations |
Common Subcommands
Subcommand | Description |
---|---|
agent | Run the agent in monitoring mode with optional --docker-socket |
scan | Scan container images with --image , --list , --output |
upload-sbom | Upload SBOM file with --input , --snapshot-id |
database | Database operations (create, user, cleanup) |
tasks | Run maintenance tasks |
Agent Example
# Run agent with Docker socket monitoring
konarr-cli --instance http://your-server:9000 --agent-token <AGENT_TOKEN> agent --docker-socket /var/run/docker.sock
Scan Example
# Scan a container image
konarr-cli --instance http://your-server:9000 --agent-token <AGENT_TOKEN> scan --image alpine:latest
# List available tools
konarr-cli scan --list
Enable debug logging for troubleshooting with the `--debug` flag.
Configuration Validation
Test Configuration
Before deploying to production, validate your configuration:
# Test server configuration (development)
cargo run -p konarr-server -- --config konarr.yml
# Test agent with debug logging
konarr-cli --config konarr.yml --debug agent
# Check configuration loading
konarr-cli --config konarr.yml --debug
Environment Variable Check
# List all Konarr environment variables
env | grep KONARR_ | sort
# Run with debug to see configuration loading
konarr-cli --debug
Additional Resources
For detailed configuration options and examples:
- Server Configuration — Complete server settings, production deployment, and environment variables
- Agent Configuration — Agent settings, tool configuration, and deployment scenarios
- Web Interface Guide — Complete web interface usage and navigation guide
- CLI Usage Examples — Practical usage examples and workflows
- Security Setup — Authentication, tokens, and security best practices
- Troubleshooting Guide — Common issues, debugging, and performance optimization
For additional help, see the troubleshooting guide or visit the Konarr GitHub repository.
Server Configuration
This section covers the basic configuration and setup of the Konarr server. The server is the central component that provides the REST API, web interface, and data storage capabilities.
Quick Start
Basic Configuration File
Create a `konarr.yml` configuration file:
# Basic server configuration
server:
domain: "localhost"
port: 9000
scheme: "http"
secret: "your-secure-secret-key-here"
# Data storage location
data_path: "./data"
# Agent authentication
agent:
key: "your-agent-key-here"
Running the Server
Start the server with your configuration:
# Using Docker (recommended)
docker run -d \
--name konarr-server \
-p 9000:9000 \
-v $(pwd)/konarr.yml:/app/konarr.yml \
-v $(pwd)/data:/data \
ghcr.io/42bytelabs/konarr:latest
# Using binary
konarr-server --config konarr.yml
# Using cargo (development)
cargo run --bin konarr-server -- --config konarr.yml
Essential Configuration
Network Settings
Setting | Description | Default |
---|---|---|
server.domain | Server hostname | localhost |
server.port | HTTP port | 9000 |
server.scheme | Protocol (http/https) | http |
Security Settings
Setting | Description | Required |
---|---|---|
server.secret | Application secret for sessions | Yes |
agent.key | Agent authentication token | Optional |
Storage Settings
Setting | Description | Default |
---|---|---|
data_path | Database and data directory | ./data |
Verification
Health Check
Verify your server is running correctly:
# Test server health
curl http://localhost:9000/api
# Expected response includes server version and status
Web Interface
Access the web interface at: http://localhost:9000
Next Steps
- Launching the Server - Detailed startup procedures and verification
- Server Configuration Details - Complete configuration reference
- Web Interface - Using the web dashboard
Common Issues
Database Initialization
The server automatically creates the SQLite database on first run. Ensure the `data_path` directory is writable.
Port Conflicts
If port 9000 is in use, change `server.port` in your configuration file or use Docker port mapping.
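With Docker, you can keep the container listening on 9000 and remap only the host side:

```bash
# Serve Konarr on host port 8080 while the container stays on 9000
docker run -d --name konarr -p 8080:9000 \
  -v ./data:/data -v ./config:/config \
  ghcr.io/42bytelabs/konarr:latest

curl http://localhost:8080/api
```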
Launching the Server
This page covers starting the Konarr server and verifying it's running correctly. For comprehensive web interface usage, see the Web Interface Guide.
Starting the Server
Using Docker (Recommended)
# Using Docker
docker run -d \
--name konarr-server \
-p 9000:9000 \
-v $(pwd)/data:/data \
ghcr.io/42bytelabs/konarr:latest
# Using Docker Compose
docker-compose up -d konarr-server
Using Pre-built Binary
# Download and extract binary
curl -L https://github.com/42ByteLabs/konarr/releases/latest/download/konarr-server-linux-x86_64.tar.gz | tar xz
# Run server
./konarr-server
From Source
# Build and run from source
git clone https://github.com/42ByteLabs/konarr.git
cd konarr
cargo run --bin konarr-server
Verifying Server Status
Health Check
Test that the server is running and accessible:
# Basic health check
curl -v http://localhost:9000/api/health
# Expected response:
# HTTP/1.1 200 OK
# {"status":"healthy","version":"x.x.x"}
Server Logs
Monitor server startup and operation:
# Docker logs
docker logs -f konarr-server
# Binary logs (with RUST_LOG=info)
RUST_LOG=info ./konarr-server
Initial Access
Web Interface
Open the server URL in your browser (default port 9000):
http://localhost:9000
First-Time Setup
- Web Interface: Navigate to the web interface to verify it loads correctly
- Admin Account: Create or configure admin access if required
- Agent Token: Retrieve the auto-generated agent token for agent setup
For detailed web interface usage, navigation, and features, see the Web Interface Guide.
Configuration
Basic Configuration
Create a `konarr.yml` file for persistent settings:
server:
domain: "localhost"
port: 9000
scheme: "http"
data_path: "/data"
Environment Variables
Override configuration with environment variables:
export KONARR_SERVER_PORT=8080
export KONARR_DATA_PATH=/custom/data/path
./konarr-server
For complete configuration options, see:
Next Steps
After launching the server:
- Web Interface - Learn to use the web interface
- Agent Setup - Configure agents to monitor containers
- Security Setup - Implement production security practices
- Reverse Proxy - Set up HTTPS and production deployment
Server Configuration
This page documents comprehensive server-specific configuration options, environment variable mappings, and production deployment examples.
Core Server Settings
Network Configuration
Configuration | Description | Default |
---|---|---|
server.domain | Server domain/hostname | localhost |
server.port | HTTP port | 9000 |
server.scheme | URL scheme, http or https | http |
server.cors | Enable CORS for API access | true |
server.api | API endpoint prefix | /api |
Security Settings
Configuration | Description |
---|---|
server.secret | Application secret for sessions and JWT tokens. Required for production |
agent.key | Agent authentication token. Auto-generated if not provided |
Data and Storage
Configuration | Description | Default |
---|---|---|
data_path | Directory for SQLite database and application data | /data |
server.frontend | Path to frontend static files | frontend/build |
URL Configuration
Configuration | Description |
---|---|
server.frontend.url | Externally accessible URL for generating links in emails and redirects |
Complete Configuration Example
# Complete server configuration
server:
# Network settings
domain: "konarr.example.com"
port: 9000
scheme: "https"
cors: true
api: "/api"
# Security settings
secret: "your-very-strong-secret-key-here"
# Frontend configuration
frontend: "/app/dist"
# Data storage
data_path: "/var/lib/konarr"
# Database configuration
database:
path: "/var/lib/konarr/konarr.db"
token: null # For remote databases
# Session configuration
sessions:
admins:
expires: 1 # hours
users:
expires: 24 # hours
agents:
expires: 360 # hours
# Agent authentication
agent:
key: "your-agent-token" # Auto-generated if not provided
Advanced Server Settings
Cleanup Configuration
# Automatic cleanup settings
cleanup:
enabled: true
timer: 90 # days to keep old snapshots
Security Features
# Security scanning and vulnerability management
security:
enabled: true
rescan: true
advisories_pull: true
Registration Settings
# User registration control
registration:
enabled: false # Disable public registration
Environment Variables
All server settings can be overridden with environment variables using the `KONARR_SERVER_` prefix:
# Network configuration
export KONARR_SERVER_DOMAIN=konarr.example.com
export KONARR_SERVER_PORT=9000
export KONARR_SERVER_SCHEME=https
export KONARR_SERVER_CORS=true
# Security settings
export KONARR_SERVER_SECRET="your-production-secret"
# Data paths
export KONARR_DATA_PATH=/var/lib/konarr
export KONARR_DB_PATH=/var/lib/konarr/konarr.db
# Frontend configuration
export KONARR_SERVER_FRONTEND=/app/dist
export KONARR_CLIENT_PATH=/app/dist
Database Configuration
SQLite (Default)
database:
path: "/var/lib/konarr/konarr.db"
Remote Database (LibSQL/Turso)
database:
path: "libsql://your-database-url"
token: "your-database-token"
Environment variables:
export KONARR_DB_PATH="libsql://your-database-url"
export KONARR_DB_TOKEN="your-database-token"
Production Deployment Settings
Minimal Production Configuration
server:
domain: "konarr.yourdomain.com"
port: 9000
scheme: "https"
secret: "$(openssl rand -base64 32)"
data_path: "/var/lib/konarr"
database:
path: "/var/lib/konarr/konarr.db"
sessions:
admins:
expires: 8 # 8 hours for admin sessions
users:
expires: 24 # 24 hours for user sessions
cleanup:
enabled: true
timer: 30 # Keep snapshots for 30 days
registration:
enabled: false # Disable public registration
security:
enabled: true
Container-Specific Settings
When running in containers, these environment variables are commonly used:
# Rocket framework settings
export ROCKET_ADDRESS=0.0.0.0
export ROCKET_PORT=9000
# Konarr-specific paths
export KONARR_DATA_PATH=/data
export KONARR_DB_PATH=/data/konarr.db
export KONARR_SERVER_FRONTEND=/app/dist
# Security
export KONARR_SERVER_SECRET="$(openssl rand -base64 32)"
The project's config merging uses Figment, which supports nesting via separators (commonly `__` in environment names), for example:
export KONARR_DATA_PATH=/data
export KONARR_FRONTEND__URL=https://konarr.example.com
If an env mapping does not take effect, prefer using `konarr.yml` or CLI flags.
Persistence and backups
- Mount a host directory under `/data` in container deployments to persist the SQLite DB (`data/konarr.db`).
- Regularly back up the DB file before upgrades: `cp data/konarr.db data/konarr.db.bak`.
Agent Configuration Overview
The Konarr Agent (`konarr-cli`) is a powerful Rust-based command-line tool that monitors containers, generates Software Bill of Materials (SBOMs), and uploads security data to the Konarr server. This section provides comprehensive guidance for configuring and deploying agents in various environments.
Agent Overview
The Konarr Agent serves as the data collection component of the Konarr ecosystem, responsible for:
- Container Monitoring: Continuous monitoring of Docker containers and their states
- SBOM Generation: Creating comprehensive Software Bill of Materials using industry-standard tools
- Vulnerability Scanning: Integration with security scanners like Syft, Grype, and Trivy
- Project Management: Automatic creation and organization of projects based on container metadata
- Real-time Updates: Live detection of container changes and automated snapshot creation
Key Features
- Multi-tool Support: Works with Syft, Grype, Trivy, and other security scanning tools
- Auto-discovery: Automatically detects and monitors running containers
- Flexible Deployment: Runs as Docker container, standalone binary, or system service
- Smart Snapshots: Creates new snapshots only when changes are detected
- Metadata Enrichment: Automatically adds container and system metadata to snapshots
Core Capabilities
Container Discovery and Monitoring
The agent automatically discovers running containers and organizes them into projects:
- Docker Integration: Direct integration with Docker daemon via socket
- Container Metadata: Extracts labels, environment variables, and runtime information
- Project Hierarchy: Supports parent-child project relationships for complex deployments
- State Tracking: Monitors container lifecycle events and state changes
SBOM Generation and Management
- Multiple Formats: Supports CycloneDX, SPDX, and other SBOM standards
- Tool Integration: Seamlessly integrates with popular scanning tools
- Dependency Analysis: Comprehensive dependency tracking and version management
- Incremental Updates: Only generates new SBOMs when container contents change
Security and Vulnerability Management
- Real-time Scanning: Continuous vulnerability assessment of monitored containers
- Multi-source Data: Aggregates vulnerability data from multiple security databases
- Risk Assessment: Provides severity analysis and impact evaluation
- Alert Integration: Automatically creates security alerts for discovered vulnerabilities
Operation Modes
One-shot Scanning
Execute a single scan operation and exit:
# Scan specific container image
konarr-cli scan --image nginx:latest
# Upload existing SBOM
konarr-cli upload-sbom --input sbom.json --snapshot-id 123
Monitoring Mode
Continuous monitoring with Docker socket access:
# Monitor containers with Docker socket
konarr-cli agent --docker-socket /var/run/docker.sock
# Monitor with project ID specified
konarr-cli --config konarr.yml --project-id 456 agent --docker-socket /var/run/docker.sock
Agent as Service
Background service operation:
# Run agent with configuration file
konarr-cli --config /etc/konarr/konarr.yml agent --docker-socket /var/run/docker.sock
# Docker container with volume persistence
docker run -d --restart=always \
-v /var/run/docker.sock:/var/run/docker.sock:ro \
-v /etc/konarr:/config \
ghcr.io/42bytelabs/konarr-agent:latest
Configuration Approaches
Environment Variables
Quick setup using environment variables:
export KONARR_INSTANCE="https://konarr.example.com"
export KONARR_AGENT_TOKEN="kagent_..."
export KONARR_AGENT_MONITORING=true
export KONARR_AGENT_AUTO_CREATE=true
export KONARR_AGENT_HOST="production-server-01"
Configuration File
Structured configuration using YAML:
# konarr.yml
instance: "https://konarr.example.com"
agent:
token: "kagent_..."
monitoring: true
create: true
host: "production-server-01"
project_id: 123
# Tool configuration
tool_auto_install: true
tool_auto_update: true
toolcache_path: "/usr/local/toolcache"
Command Line Arguments
Direct configuration via CLI arguments:
konarr-cli \
--instance https://konarr.example.com \
--agent-token kagent_... \
--monitoring \
--auto-create \
agent
Quick Start Examples
Basic Container Monitoring
# Docker container with minimal configuration
docker run -d \
--name konarr-agent \
-v /var/run/docker.sock:/var/run/docker.sock:ro \
-e KONARR_INSTANCE="http://your-server:9000" \
-e KONARR_AGENT_TOKEN="<AGENT_TOKEN>" \
-e KONARR_AGENT_MONITORING=true \
-e KONARR_AGENT_AUTO_CREATE=true \
ghcr.io/42bytelabs/konarr-agent:latest
Production Deployment
# docker-compose.yml
version: '3.8'
services:
konarr-agent:
image: ghcr.io/42bytelabs/konarr-agent:latest
restart: always
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./config:/config
environment:
- KONARR_INSTANCE=https://konarr.example.com
- KONARR_AGENT_TOKEN_FILE=/config/agent.token
- KONARR_AGENT_MONITORING=true
- KONARR_AGENT_HOST=production-cluster-01
networks:
- konarr-network
Binary Installation
# Install via Cargo
cargo install konarr-cli
# Configure and run
konarr-cli --config /etc/konarr/konarr.yml agent --docker-socket /var/run/docker.sock
Agent Management
Project Organization
- Auto-creation: Agents can automatically create projects based on container metadata
- Hierarchical Structure: Support for parent-child project relationships
- Naming Conventions: Configurable project naming based on container labels or composition
- Metadata Inheritance: Child projects inherit metadata from parent projects
Tool Management
- Auto-installation: Automatic download and installation of required scanning tools
- Version Management: Automatic updates to latest tool versions when configured
- Custom Tools: Support for custom scanning tools and configurations
- Tool Caching: Shared tool cache to reduce storage requirements
Security Considerations
- Token Management: Secure handling of authentication tokens
- Network Security: TLS/SSL support for secure communication with server
- Container Security: Minimal container footprint with security best practices
- Access Control: Granular permissions for different agent operations
Documentation Structure
This agent configuration section is organized into focused guides for different aspects:
CLI Usage Guide
Comprehensive command-line interface documentation covering:
- All available commands and options
- Common workflows and use cases
- Debugging and troubleshooting commands
- Integration with CI/CD pipelines
Agent Configuration Details
Complete configuration reference including:
- All configuration options and their effects
- Environment variable mappings
- Production deployment configurations
- Security and authentication settings
Next Steps
Choose the appropriate guide based on your needs:
- CLI Usage - Learn command-line operations and workflows
- Agent Configuration - Configure agents for your environment
- Server Configuration - Set up the Konarr server to work with agents
- Web Interface - Monitor and manage agents through the web interface
For installation instructions, see the Agent Installation Guide.
For troubleshooting, see the Troubleshooting Guide.
CLI Usage
This page documents common `konarr-cli` workflows and command-line operations.
Global Options
Configuration
Argument | Description |
---|---|
--config <path> | Path to konarr.yml configuration file |
--instance <url> | Konarr server URL (e.g., http://your-server:9000 ) |
--token <agent-token> | Agent authentication token (or use KONARR_AGENT_TOKEN env var) |
Output Control
Argument | Description |
---|---|
--verbose / -v | Enable verbose logging for debugging |
--quiet / -q | Suppress non-essential output |
--output <format> | Output format: json , yaml , or table (default) |
Core Commands
Command Reference
Agent Subcommand
Argument | Description |
---|---|
--docker-socket <path> | Path to Docker socket for container monitoring (default: /var/run/docker.sock ) |
--monitoring | Enable container monitoring mode |
--project <id> | Target project ID for snapshots |
Scan Subcommand
Argument | Description |
---|---|
--image <name> | Container image to scan (e.g., alpine:latest ) |
--path <directory> | Local directory or file to scan |
--output <file> | Output results to file |
--list | List available security tools |
--tool <name> | Specify scanner tool to use |
Upload SBOM Subcommand
Argument | Description |
---|---|
--input <file> | Path to SBOM file to upload |
--snapshot-id <id> | Target snapshot ID for upload |
Tools Subcommand
Argument | Description |
---|---|
--tool <name> | Specific tool to install/test (e.g., syft , grype ) |
--all | Apply operation to all tools |
--path <directory> | Installation path for tools |
Agent Operations
Monitor Mode
Continuously monitor Docker containers for changes:
konarr-cli agent monitor \
--instance http://your-server:9000 \
--token <AGENT_TOKEN> \
--project <PROJECT_ID>
Daemon Mode
Run agent as a background service:
konarr-cli agent daemon \
--config /etc/konarr/konarr.yml \
--log-file /var/log/konarr-agent.log
Snapshot Management
Create Snapshot
Generate and upload a single SBOM snapshot:
konarr-cli snapshot create \
--instance http://your-server:9000 \
--token <AGENT_TOKEN> \
--project <PROJECT_ID>
Container Image Analysis
Analyze specific container images:
# Remote image
konarr-cli snapshot create \
--image nginx:1.21 \
--project <PROJECT_ID>
# Local image with custom scanner
konarr-cli snapshot create \
--image local/my-app:latest \
--scanner syft \
--project <PROJECT_ID>
File System Analysis
Analyze local directories or files:
# Analyze current directory
konarr-cli snapshot create \
--path . \
--project <PROJECT_ID>
# Analyze specific directory
konarr-cli snapshot create \
--path /opt/application \
--project <PROJECT_ID>
Tool Management
List Available Tools
Show installed security scanning tools:
konarr-cli tools list
Output example:
Tool Version Status Path
syft v0.96.0 Installed /usr/local/bin/syft
grype v0.74.0 Installed /usr/local/bin/grype
trivy v0.48.0 Missing -
Install Tools
Install missing security tools:
# Install specific tool
konarr-cli tools install --tool syft
# Install all missing tools
konarr-cli tools install --all
# Install to custom path
konarr-cli tools install --tool grype --path /usr/local/toolcache
Check Tool Versions
Verify tool versions and compatibility:
konarr-cli tools version
Project Management
List Projects
Display available projects:
konarr-cli projects list \
--instance http://your-server:9000 \
--token <AGENT_TOKEN>
Create Project
Create a new project:
konarr-cli projects create \
--name "my-application" \
--type container \
--description "Production web application"
User Management
Create or Reset User Password
The `database user` command allows you to create new users or reset passwords for existing users. This is an interactive command that prompts for user information:
konarr-cli database user
The command will prompt you for:
- Username: The username for the user account
- Password: The new password (hidden input)
- Role: User role, either `Admin` or `User`
Behavior:
- If the username already exists, the command will update the user's password and role
- If the username doesn't exist, a new user account will be created
- This command is useful for password recovery when users forget their credentials
Example session:
$ konarr-cli database user
Username: admin
Password: ********
Role:
> Admin
User
User updated successfully
Non-interactive usage:
For automated setups or scripts, you can provide the database path:
konarr-cli --database-url /data/konarr.db database user
Common use cases:
- Initial admin account creation: Set up the first admin user after installation
- Password reset: Reset a forgotten user password
- Role update: Change a user's role from User to Admin or vice versa
- Emergency access: Regain access when locked out of the web interface
Advanced Usage
Configuration File
Create `/etc/konarr/konarr.yml`:
instance: https://konarr.company.com
agent:
token: your-secure-token
project_id: 123
monitoring: true
tool_auto_install: true
toolcache_path: /usr/local/toolcache
host: production-server-01
tools:
syft:
version: "v0.96.0"
path: /usr/local/bin/syft
grype:
version: "v0.74.0"
path: /usr/local/bin/grype
Run with configuration:
konarr-cli --config /etc/konarr/konarr.yml agent monitor
Environment Variables
Set defaults via environment:
export KONARR_INSTANCE=https://konarr.company.com
export KONARR_AGENT_TOKEN=your-secure-token
export KONARR_AGENT_PROJECT_ID=123
export KONARR_AGENT_MONITORING=true
export KONARR_VERBOSE=true
# Run with environment config
konarr-cli agent monitor
CI/CD Integration
Use in continuous integration pipelines:
# Build-time analysis
konarr-cli snapshot create \
--image $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA \
--project $KONARR_PROJECT_ID \
--fail-on-critical \
--output json > security-report.json
# Check exit code
if [ $? -ne 0 ]; then
echo "Critical vulnerabilities found, failing build"
exit 1
fi
Security Scanning Options
Configure vulnerability scanning behavior:
# Fail on high/critical vulnerabilities
konarr-cli snapshot create \
--project <PROJECT_ID> \
--fail-on-critical \
--fail-on-high
# Custom severity threshold
konarr-cli snapshot create \
--project <PROJECT_ID> \
--max-severity medium
# Skip vulnerability scanning
konarr-cli snapshot create \
--project <PROJECT_ID> \
--skip-vulnerability-scan
Troubleshooting
Debug Mode
Enable verbose logging for troubleshooting:
konarr-cli --verbose agent monitor
Connection Testing
Test server connectivity:
konarr-cli health \
--instance http://your-server:9000
Tool Verification
Verify scanner tools are working:
konarr-cli tools test --tool syft
konarr-cli tools test --all
Log Analysis
Check agent logs for issues:
# View recent logs
journalctl -u konarr-agent -f
# Container logs
docker logs -f konarr-agent
The agent watches container lifecycle events (when configured) and uploads snapshots automatically. Use `--config` to provide persistent configuration.
Tooling and debugging
- To list available scanner tools and their paths: `konarr-cli tools list`
- Enable verbose logging for troubleshooting with the `--verbose` / `-v` flag (see `konarr-cli --help`).
Agent Configuration
This page documents comprehensive agent-specific configuration options, environment variables, deployment scenarios, and security considerations.
Core Agent Settings
Project Management
Configuration | Description | Default |
---|---|---|
agent.project_id | Target project ID for snapshots. Leave empty to auto-create projects | - |
agent.create | Allow agent to automatically create projects | true |
agent.host | Friendly hostname identifier for this agent instance | - |
Monitoring and Scanning
Configuration | Description | Default |
---|---|---|
agent.monitoring | Enable Docker container monitoring mode | false |
agent.tool_auto_install | Automatically install missing security tools | true |
agent.toolcache_path | Directory for scanner tool binaries | /usr/local/toolcache |
Connectivity
Configuration | Description |
---|---|
instance | Konarr server URL (e.g., https://konarr.example.com ) |
agent.token | Authentication token for API access |
Complete Configuration Example
# Server connection
instance: "https://konarr.example.com"
# Agent configuration
agent:
# Authentication
token: "your-agent-token-from-server"
# Project settings
project_id: "123" # Specific project, or "" to auto-create
create: true # Allow project auto-creation
host: "production-server-01"
# Monitoring settings
monitoring: true
scan_interval: 300 # seconds between scans
# Tool management
tool_auto_install: true
toolcache_path: "/usr/local/toolcache"
# Scanning configuration
scan_on_start: true
scan_on_change: true
# Security tool preferences
preferred_sbom_tool: "syft" # syft, trivy
preferred_vuln_tool: "grype" # grype, trivy
# Container filtering
include_patterns:
- "production/*"
- "staging/*"
exclude_patterns:
- "*/test-*"
- "*/temp-*"
# Resource limits
max_concurrent_scans: 3
scan_timeout: 600 # seconds
Tool Configuration
Security Scanner Tools
# Tool-specific configuration
tools:
syft:
version: "v0.96.0"
path: "/usr/local/bin/syft"
config:
exclude_paths:
- "/tmp"
- "/var/cache"
cataloger_scope: "all-layers"
grype:
version: "v0.74.0"
path: "/usr/local/bin/grype"
config:
fail_on_severity: "high"
ignore_fixed: false
trivy:
version: "v0.48.0"
path: "/usr/local/bin/trivy"
config:
skip_db_update: false
timeout: "10m"
Environment Variables
Agent settings can be configured via environment variables with the `KONARR_AGENT_` prefix:
# Server connection
export KONARR_INSTANCE="https://konarr.example.com"
export KONARR_AGENT_TOKEN="your-agent-token"
# Project configuration
export KONARR_AGENT_PROJECT_ID="123"
export KONARR_AGENT_CREATE=true
export KONARR_AGENT_HOST="production-server-01"
# Monitoring settings
export KONARR_AGENT_MONITORING=true
export KONARR_AGENT_SCAN_INTERVAL=300
# Tool management
export KONARR_AGENT_TOOL_AUTO_INSTALL=true
export KONARR_AGENT_TOOLCACHE_PATH="/usr/local/toolcache"
# Resource settings
export KONARR_AGENT_MAX_CONCURRENT_SCANS=3
export KONARR_AGENT_SCAN_TIMEOUT=600
Container Agent Configuration
Docker Socket Access
When running agent in a container with Docker monitoring:
# Security warning: Docker socket access grants significant privileges
agent:
monitoring: true
docker_socket: "/var/run/docker.sock"
# Security controls
docker_security:
require_readonly: true
filter_by_labels: true
allowed_networks:
- "production"
- "staging"
Environment Variables for Containers
# Core settings
export KONARR_INSTANCE="https://konarr.example.com"
export KONARR_AGENT_TOKEN="your-token"
export KONARR_AGENT_MONITORING=true
# Container-specific paths
export KONARR_AGENT_TOOLCACHE_PATH="/usr/local/toolcache"
# Security settings
export KONARR_AGENT_DOCKER_SOCKET="/var/run/docker.sock"
export KONARR_AGENT_SECURITY_READONLY=true
Production Agent Deployment
High-Security Environment
# Air-gapped or high-security configuration
agent:
tool_auto_install: false # Disable auto tool installation
toolcache_path: "/opt/security-tools"
# Pre-approved tool versions
tools:
syft:
path: "/opt/security-tools/syft"
version: "v0.96.0"
checksum: "sha256:abc123..."
grype:
path: "/opt/security-tools/grype"
version: "v0.74.0"
checksum: "sha256:def456..."
# Strict scanning policies
scan_config:
fail_on_error: true
require_signature_verification: true
max_scan_size: "1GB"
timeout: 300
Multi-Environment Agent
# Development/staging/production agent
agent:
host: "${ENVIRONMENT}-server-${HOSTNAME}"
project_id: "${KONARR_PROJECT_ID}"
# Environment-specific settings
monitoring: true
scan_interval: 600 # 10 minutes
# Conditional scanning based on environment
scan_filters:
development:
scan_on_change: true
include_test_images: true
production:
scan_on_change: false
scan_schedule: "0 2 * * *" # Daily at 2 AM
exclude_test_images: true
Agent Authentication and Security
Token Management
# Retrieve agent token from server
export AGENT_TOKEN=$(curl -s -X GET \
-H "Authorization: Bearer ${ADMIN_TOKEN}" \
https://konarr.example.com/api/admin/settings | \
jq -r '.settings.agentKey')
export KONARR_AGENT_TOKEN="${AGENT_TOKEN}"
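After exporting the token, a quick sanity check (sketch) confirms it authenticates before rolling it out to agents:
# Expect a project list on success, an auth error otherwise
curl -fsS -H "Authorization: Bearer ${KONARR_AGENT_TOKEN}" \
  https://konarr.example.com/api/projects >/dev/null \
  && echo "agent token OK" || echo "agent token rejected"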
Security Best Practices
- Rotate tokens regularly: Generate new agent tokens periodically
- Limit permissions: Use dedicated service accounts for agents
- Network security: Restrict agent network access to Konarr server only
- Audit logging: Enable detailed logging for agent activities
- Resource limits: Set appropriate CPU/memory limits for agent containers
Minimal environment setup:
export KONARR_INSTANCE=http://konarr.example.com:9000
export KONARR_AGENT_TOKEN=your-token-here
Tooling and installation
- The agent looks for `syft`, `grype`, or `trivy` on `PATH` and in `agent.toolcache_path`.
- For secure environments, pre-install approved tool versions into `agent.toolcache_path` and set `agent.tool_auto_install` to `false`.
Scanning Tools
Konarr uses industry-standard security scanning tools to generate Software Bill of Materials (SBOM) and detect vulnerabilities in container images. The agent supports multiple tools, each with specific capabilities and features.
Supported Tools
Konarr supports three primary scanning tools:
Syft
Syft is an open-source SBOM generation tool from Anchore that catalogs software packages and dependencies across various formats.
Features:
- SBOM Generation: Creates comprehensive Software Bill of Materials in multiple formats (SPDX, CycloneDX)
- Multi-Language Support: Detects packages from NPM, Cargo, Deb, RPM, PyPI, Maven, Go, and more
- Container Layer Analysis: Scans all layers of container images
- File System Cataloging: Analyzes installed packages, language-specific packages, and binaries
- Fast Performance: Optimized for quick scanning of large images
Konarr Implementation:
- Primary tool for SBOM generation
- Auto-install supported in agent
- Cataloger scope can be configured (all-layers, squashed)
- Path exclusion support for temporary directories
Links:
- GitHub: https://github.com/anchore/syft
- Documentation: https://github.com/anchore/syft#readme
Grype
Grype is a vulnerability scanner from Anchore that matches packages against known CVE databases.
Features:
- Vulnerability Detection: Scans software packages for known vulnerabilities
- Multiple Database Sources: Uses multiple vulnerability databases including NVD, Alpine SecDB, RHEL Security Data
- SBOM Analysis: Can scan SBOMs generated by Syft or other tools
- Severity Filtering: Filter results by vulnerability severity (critical, high, medium, low)
- Format Support: Works with all Syft-supported package formats
- Regular Updates: Vulnerability database updates automatically
Konarr Implementation:
- Used for vulnerability scanning after SBOM generation
- Auto-install supported in agent
- Configurable severity thresholds
- Option to ignore fixed vulnerabilities
Links:
- GitHub: https://github.com/anchore/grype
- Documentation: https://github.com/anchore/grype#readme
Trivy
Trivy is a comprehensive security scanner from Aqua Security that detects vulnerabilities, misconfigurations, and secrets.
Features:
- Multi-Format Scanning: Detects vulnerabilities in OS packages, language dependencies, and application dependencies
- Container Image Scanning: Comprehensive container image analysis
- IaC Scanning: Scans Infrastructure as Code files (Terraform, CloudFormation, Kubernetes)
- Secret Detection: Finds exposed secrets and credentials
- Misconfiguration Detection: Identifies security misconfigurations
- SBOM Support: Can generate and consume SBOMs in multiple formats
Konarr Implementation:
- Alternative security scanning tool with broader detection capabilities
- Auto-install supported in agent
- Configurable database update behavior
- Timeout settings for long-running scans
Links:
- GitHub: https://github.com/aquasecurity/trivy
- Documentation: https://aquasecurity.github.io/trivy/
Tool Selection
Selecting a Tool
You can configure which tool the agent uses through environment variables or configuration files:
Environment Variable:
# Select the primary scanning tool
export KONARR_AGENT_TOOL="syft" # or "grype", "trivy"
Configuration File (`konarr.yml`):
agent:
tool: "syft" # Primary tool for scanning
tool_auto_install: true
tool_auto_update: false
Tool Installation
The agent can automatically install missing tools:
# Enable auto-install (default in container images)
export KONARR_AGENT_TOOL_AUTO_INSTALL=true
# Manual tool installation
konarr-cli tools install --tool syft
konarr-cli tools install --tool grype
konarr-cli tools install --tool trivy
Checking Installed Tools
List installed tools and their versions:
konarr-cli tools list
Output example:
Tool Version Status Path
syft v0.96.0 Installed /usr/local/bin/syft
grype v0.74.0 Installed /usr/local/bin/grype
trivy v0.48.0 Missing -
Viewing Tool Usage
In the Web Interface
When viewing a snapshot in the Konarr web interface, you can see which tool was used to generate the SBOM and scan for vulnerabilities:
- Navigate to a project
- Click on a specific snapshot
- The snapshot details will show the tool used for scanning
Via API
Query the snapshot details through the API to see tool information:
curl -H "Authorization: Bearer $KONARR_AGENT_TOKEN" \
http://your-server:9000/api/snapshots/{snapshot_id}
The response includes metadata about the scanning tool used.
In Agent Logs
The agent logs show which tool is being used for each scan:
# View agent logs in container mode
docker logs konarr-agent
# Look for lines indicating tool usage
# Example: "Using syft for SBOM generation"
Tool Configuration
Storage Locations
Tools are stored in the following locations:
Environment | Path |
---|---|
Container | /usr/local/toolcache/ |
Host install | ~/.local/bin/ or /usr/local/bin/ |
Custom | Set via KONARR_AGENT_TOOLCACHE_PATH |
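For example, to point the agent at a custom location and pre-populate it, a sketch using the flags documented in the CLI guide:
export KONARR_AGENT_TOOLCACHE_PATH=/opt/security-tools
konarr-cli tools install --all --path /opt/security-tools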
Advanced Configuration
Configure agent tool settings in `konarr.yml`:
agent:
tool: "syft"
tool_auto_install: true
tool_auto_update: false
Tool Comparison
Feature | Syft | Grype | Trivy |
---|---|---|---|
SBOM Generation | ✅ Primary | ❌ | ✅ |
Vulnerability Scanning | ❌ | ✅ Primary | ✅ |
Package Managers | NPM, Cargo, Deb, RPM, PyPI, Maven, Go | All Syft formats | Multi-format |
Secret Detection | ❌ | ❌ | ✅ |
IaC Scanning | ❌ | ❌ | ✅ |
Auto-Install | ✅ | ✅ | ✅ |
Speed | Fast | Fast | Moderate |
Troubleshooting
Tool Installation Issues
If tools fail to install automatically:
# Check tool availability
konarr-cli tools list
# Manual tool install
konarr-cli tools install --tool syft
# Check tool cache
ls -la /usr/local/toolcache/
Tool Version Conflicts
Verify tool versions and compatibility:
konarr-cli tools version
Disabling Auto-Install
For secure environments, disable auto-install and pre-install approved versions:
agent:
tool_auto_install: false # Disable automatic installation
toolcache_path: "/usr/local/toolcache" # Pre-installed tool location
Additional Resources
- Agent Configuration - Detailed agent configuration options
- CLI Usage - Command-line tool management
- Security - Tool installation security considerations
- Troubleshooting - Common tool-related issues
Web Interface
This guide covers how to use the Konarr web interface to monitor your containers, view SBOMs, manage projects, and track security vulnerabilities.
Accessing the Web Interface
Basic Access
Open the server URL in your browser (default port 9000):
http://<konarr-host>:9000
Examples:
- Local development: `http://localhost:9000`
- Network deployment: `http://your-server-ip:9000`
- Custom domain: `https://konarr.example.com`
Behind Reverse Proxy
If the server is behind a reverse proxy or load balancer, use the external HTTPS URL configured in `server.frontend.url`.
For reverse proxy setup, see the Reverse Proxy Setup Guide.
Authentication
The web interface uses session-based authentication:
- Session Authentication - Login through the web interface to obtain session cookies
- Admin Access - Required for server settings, user management, and advanced features
- User Access - Standard access for viewing projects, snapshots, and alerts
Main Interface Areas
The Konarr web interface is organized into several main sections:
📁 Projects
- Purpose: Logical groups representing hosts, applications, or container clusters
- Contents: Each project contains snapshots, SBOMs, and security data
- Features:
- Project hierarchy (parent/child relationships)
- Project types: Containers, Groups, Applications, Servers
- Status indicators (online/offline)
- Search and filtering capabilities
📸 Snapshots
- Purpose: Captured states of containers or systems at specific points in time
- Contents: SBOM data, dependency information, vulnerability scan results
- Features:
- Click snapshots to view detailed SBOM and vulnerability summaries
- Comparison between different snapshot versions
- Metadata including scan tools used, timestamps, and container information
🚨 Alerts
- Purpose: Security vulnerability alerts generated from scans
- Contents: Vulnerability details, severity levels, affected components
- Features:
- Severity filtering (Critical, High, Medium, Low)
- Alert state management (Vulnerable, Acknowledged, Secure)
- Search and filtering by CVE, component, or description
- Bulk operations for alert management
⚙️ Settings / Admin
- Purpose: Server-level configuration and administration (admin-only)
- Contents:
- User and token management
- Agent authentication settings
- Server configuration
- System health and statistics
- Access: Requires admin privileges
Projects Management
Creating Projects
Projects can be created in two ways:
- Manual Creation: Through the web interface (admin required)
- Auto-Creation: Agents can automatically create projects when configured with `agent.create: true`
Project Types
- Container: Individual container projects
- Group: Collections of related projects (e.g., microservices)
- Application: Application-specific projects
- Server: Host-level projects
Project Features
- Hierarchy: Projects can have parent-child relationships for organization
- Search: Use the search/filter box to find specific projects by name, tag, or hostname
- Status: Visual indicators show project health and last update status
- Statistics: View snapshot counts, vulnerability summaries, and last scan times
Snapshots and SBOMs
Understanding Snapshots
Snapshots represent the state of a container or system at a specific time:
- Automatic Creation: Generated when agents scan containers
- Manual Triggers: Can be triggered through the API or agent commands
- Versioning: Multiple snapshots per project show changes over time
Viewing SBOM Details
Click on any snapshot to access detailed information:
- Dependencies: Complete list of packages, libraries, and components
- Vulnerability Data: Security scan results and risk assessments
- Metadata: Scan tool information, timestamps, and container details
- Export Options: Download SBOM data in various formats (JSON, XML)
SBOM Standards
Konarr uses industry-standard SBOM formats:
- CycloneDX: Primary format for SBOM generation and storage
- SPDX: Alternative format support
- Tool Integration: Works with Syft, Grype, Trivy, and other scanning tools
Security Alerts
Alert Overview
Security alerts are automatically generated from vulnerability scans:
- Source: Generated from tools like Grype and Trivy
- Severity Levels: Critical, High, Medium, Low, Unknown
- Real-time Updates: Alerts update as new snapshots are created
Alert Management
- Filtering: Filter by severity, state, search terms, or CVE IDs
- State Management: Mark alerts as Acknowledged or Secure
- Bulk Operations: Handle multiple alerts simultaneously
- Triage Workflow: Use alerts to prioritize security remediation
Alert Details
Each alert provides comprehensive information:
- Vulnerability Description: Detailed CVE information and impact
- Affected Components: Which packages/dependencies are vulnerable
- Severity Assessment: Risk level and CVSS scores where available
- Remediation: Version upgrade recommendations and fix information
Settings and Administration
User Management
Admin users can manage system access:
- User Accounts: Create and manage user accounts
- Role Assignment: Assign admin or standard user privileges
- Session Management: Monitor active sessions and access logs
Agent Token Management
Configure agent authentication:
- Token Generation: Server auto-generates agent tokens on first startup
- Token Retrieval: Access current agent token through admin interface
- Token Security: Rotate tokens for enhanced security
Server Configuration
Access server-level settings:
- Network Configuration: Domain, port, and proxy settings
- Security Settings: Authentication, secrets, and access controls
- Feature Toggles: Enable/disable specific Konarr features
- Performance Settings: Database cleanup, retention policies
Typical Workflow
Initial Setup
- Start Server: Launch Konarr server and access web interface
- Admin Login: Log in with admin credentials
- Configure Settings: Set up agent tokens and server configuration
- Agent Setup: Configure and deploy agents to monitor containers
Daily Operations
- Monitor Projects: Review project status and recent snapshots
- Review Alerts: Triage new security vulnerabilities
- Investigate Issues: Drill down into specific snapshots and dependencies
- Take Action: Update containers, acknowledge alerts, or escalate issues
Ongoing Management
- Trend Analysis: Monitor security trends across projects
- Compliance Reporting: Export SBOMs for compliance requirements
- System Maintenance: Review server health and performance metrics
- User Management: Manage access and permissions as team grows
Navigation Tips
Search and Filtering
- Global Search: Use the search box on Projects and Snapshots pages
- Filter Options: Filter by project type, status, severity, or date ranges
- Quick Access: Bookmark frequently accessed projects for easy navigation
Keyboard Shortcuts
- Navigation: Use browser back/forward for quick page navigation
- Refresh: F5 or Ctrl+R to refresh data views
- Search: Click search boxes or use Tab navigation
Performance Optimization
- Pagination: Large datasets are automatically paginated for performance
- Lazy Loading: Detailed data loads on-demand when viewing specific items
- Caching: Web interface caches frequently accessed data
Export and Automation
Manual Export
Export data directly from the web interface:
- SBOM Export: Download complete SBOM data from snapshot detail pages
- Vulnerability Reports: Export security scan results
- Project Data: Export project summaries and statistics
API Integration
For automation and integration:
- REST API: Complete API access for all web interface functionality
- Authentication: Use session cookies for web-based API access
- Documentation: See API Documentation for complete endpoint reference
Reporting
Generate reports for compliance and management:
- Security Summaries: Aggregate vulnerability data across projects
- Compliance Reports: SBOM data for regulatory requirements
- Trend Analysis: Historical data for security and dependency trends
Troubleshooting
Common Issues
Web Interface Not Loading:
- Check the server is running: `curl http://localhost:9000/api/health`
- Verify frontend configuration in server settings
- Clear browser cache and cookies
- Check network connectivity and firewall settings
Authentication Problems:
- Verify admin user account exists
- Check session timeout settings
- Clear browser cookies and re-login
- Verify server authentication configuration
Performance Issues:
- Check server resource usage (CPU, memory, disk)
- Review database performance and size
- Consider implementing reverse proxy caching
- Monitor network latency and bandwidth
Additional Help
For more troubleshooting information:
- Troubleshooting Guide - Comprehensive troubleshooting procedures
- Configuration Guide - Server and web interface configuration
- Security Setup - Authentication and security configuration
Next Steps
After familiarizing yourself with the web interface:
- CLI Usage - Learn about command-line operations
- API Documentation - Integrate with external systems
- Security Guide - Implement production security practices
API Documentation
This document provides an overview of the Konarr REST API. The API serves both the web UI and external integrations, with separate authentication for users and agents.
Base URL and Versioning
Base URL: http://<konarr-host>:9000/api
Current Version: The API is currently unversioned. Future versions will include version paths.
Content Type: All endpoints expect and return `application/json` unless otherwise specified.
Authentication
Agent Authentication
Agents authenticate using a Bearer token in the `Authorization` header:
curl -H "Authorization: Bearer <AGENT_TOKEN>" \
http://localhost:9000/api/snapshots
The agent token is generated by the server and stored as `agent.key` in ServerSettings. You can retrieve it from the admin panel or by querying the database.
Session Authentication
Web UI users authenticate via session cookies. Sessions are managed automatically by the frontend and use HTTP-only cookies for security.
Login endpoint: POST /api/auth/login
Logout endpoint: POST /api/auth/logout
Registration endpoint: POST /api/auth/register
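A login request might look like the following sketch; the JSON field names (`username`, `password`) are assumptions, so check the frontend or server source for the exact schema:
# Log in and store the session cookie (field names assumed)
curl -c cookies.txt -X POST \
  -H "Content-Type: application/json" \
  -d '{"username": "admin", "password": "your-password"}' \
  http://localhost:9000/api/auth/login
# Reuse the session cookie for authenticated requests
curl -b cookies.txt http://localhost:9000/api/projects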
Core Endpoints
Health Check
GET /api
- Description: Server health, status, and configuration
- Authentication: None required
- Response: Server version, commit, configuration, and basic metrics
curl http://localhost:9000/api
Projects
GET /api/projects
- Description: List all projects
- Authentication: Session or Agent
- Response: Array of project objects with metadata
POST /api/projects
- Description: Create a new project
- Authentication: Session or Agent (if auto-create enabled)
- Body: Project name, type, and optional metadata
GET /api/projects/{id}
- Description: Get specific project details
- Authentication: Session or Agent
- Response: Project details with recent snapshots
Snapshots
GET /api/projects/{id}/snapshots
- Description: List snapshots for a project
- Authentication: Session or Agent
- Query Parameters:
Parameter | Description |
---|---|
limit | Maximum number of results |
offset | Pagination offset |
since | Filter by date (ISO 8601) |
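For example, fetching the ten most recent snapshots for project 1 (sketch):
curl -H "Authorization: Bearer $KONARR_AGENT_TOKEN" \
  "http://localhost:9000/api/projects/1/snapshots?limit=10&offset=0"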
POST /api/snapshots
- Description: Upload new snapshot data (agent endpoint)
- Authentication: Agent token required
- Body: Snapshot metadata and SBOM data
- Content-Type: `application/json`
GET /api/snapshots/{id}
- Description: Get specific snapshot details
- Authentication: Session
- Response: Snapshot metadata, dependencies count, and security summary
Dependencies
GET /api/dependencies
- Description: Search and list dependencies
- Authentication: Session
- Query Parameters:
Parameter | Description |
---|---|
search | Search term for component names |
manager | Filter by package manager (npm, cargo, deb, etc.) |
type | Filter by component type |
project | Filter by project ID |
GET /api/dependencies/{id}
- Description: Get specific dependency details
- Authentication: Session
- Response: Component details, versions, and associated projects
Security Alerts
GET /api/security
- Description: List security alerts and vulnerabilities
- Authentication: Session
- Query Parameters:
Parameter | Description |
---|---|
page | Page number for pagination |
limit | Number of results per page |
search | Search term for alert names or CVE IDs |
state | Filter by alert state (open, closed, etc.) |
severity | Filter by severity level (critical, high, medium, low) |
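For example, the first page of open critical alerts, using the session cookie from the login example above (sketch):
curl -b cookies.txt \
  "http://localhost:9000/api/security?severity=critical&state=open&page=1&limit=25"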
GET /api/security/{id}
- Description: Get specific security alert details
- Authentication: Session
- Response: Alert details including affected dependencies and projects
Administration
GET /api/admin
- Description: Administrative settings and server statistics
- Authentication: Admin session
- Response: Server configuration, user statistics, and system information
GET /api/admin/users
- Description: User management (admin only)
- Authentication: Admin session
- Response: List of users with roles and status
Data Models
Project Object
{
"id": 1,
"name": "my-application",
"type": "container",
"description": "Production web application",
"created_at": "2024-01-01T00:00:00Z",
"updated_at": "2024-01-01T00:00:00Z",
"snapshot_count": 15,
"latest_snapshot": "2024-01-01T12:00:00Z"
}
Snapshot Object
{
"id": 1,
"project_id": 1,
"sha": "sha256:abc123...",
"container_image": "nginx:1.21",
"container_version": "1.21.0",
"created_at": "2024-01-01T00:00:00Z",
"component_count": 245,
"vulnerability_count": 12,
"has_sbom": true
}
Dependency Object
{
"id": 1,
"type": "library",
"manager": "npm",
"name": "@types/node",
"namespace": "@types",
"version": "18.15.0",
"purl": "pkg:npm/%40types/node@18.15.0",
"projects": [
{"id": 1, "name": "my-application"}
]
}
SBOM Upload Format
Agents upload SBOMs in CycloneDX format:
{
"bomFormat": "CycloneDX",
"specVersion": "1.6",
"version": 1,
"metadata": {
"timestamp": "2024-01-01T00:00:00Z",
"tools": [
{
"vendor": "anchore",
"name": "syft",
"version": "v0.96.0"
}
],
"component": {
"type": "container",
"name": "nginx",
"version": "1.21"
}
},
"components": [
{
"type": "library",
"name": "openssl",
"version": "1.1.1",
"purl": "pkg:deb/debian/openssl@1.1.1"
}
]
}
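Uploading such a document with curl uses the snapshot endpoint described above:
curl -X POST \
  -H "Authorization: Bearer $KONARR_AGENT_TOKEN" \
  -H "Content-Type: application/json" \
  -d @sbom.json \
  http://localhost:9000/api/snapshots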
Error Responses
All API errors follow a consistent format:
{
"error": {
"code": "RESOURCE_NOT_FOUND",
"message": "Project with ID 999 not found",
"details": {}
}
}
Common Error Codes:
Error Code | Description |
---|---|
AUTHENTICATION_REQUIRED | Missing or invalid authentication |
AUTHORIZATION_FAILED | Insufficient permissions |
RESOURCE_NOT_FOUND | Requested resource doesn't exist |
VALIDATION_ERROR | Invalid request data |
RATE_LIMIT_EXCEEDED | Too many requests |
Rate Limiting
The API implements rate limiting to prevent abuse:
- Agents: 100 requests per minute per token
- Web sessions: 1000 requests per minute per session
- Health endpoint: No rate limiting
Rate limit headers are included in responses:
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 95
X-RateLimit-Reset: 1640995200
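To inspect these headers on a live response (sketch):
# Print only the rate-limit headers, discarding the body
curl -s -D - -o /dev/null \
  -H "Authorization: Bearer $KONARR_AGENT_TOKEN" \
  http://localhost:9000/api/projects | grep -i '^x-ratelimit'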
Client Libraries
Rust Client
The `konarr` crate includes a client library:
use konarr::client::KonarrClient;

let client = KonarrClient::new("http://localhost:9000")
    .with_token("your-agent-token");

let projects = client.projects().list().await?;
CLI Integration
The `konarr-cli` tool provides API access for automation:
# Upload an SBOM file
konarr-cli upload-sbom --input sbom.json --snapshot-id 123
# Run agent in monitoring mode
konarr-cli agent --docker-socket /var/run/docker.sock
# Scan a container image
konarr-cli scan --image alpine:latest --output results.json
# List available tools
konarr-cli scan --list
For advanced API usage and integration examples, see the Configuration & Usage guide.
Security
This page describes Konarr's security model, threat considerations, and comprehensive recommendations for secure production deployments.
Overview
Konarr handles sensitive supply chain data including:
- Software Bill of Materials (SBOMs) for container images
- Vulnerability scan results and security alerts
- Container metadata and deployment information
- Agent authentication credentials
A comprehensive security approach is essential for protecting this data and maintaining system integrity.
Authentication and Authorization
Agent Token Management
Konarr uses a simple but effective token-based authentication model for agents:
- Token Generation: The server automatically generates a secure agent token (`agent.key`) on first startup, stored in ServerSettings
- Token Usage: Agents authenticate using this token as a Bearer token in the `Authorization` header
- Token Validation: The server validates agent requests using a guard system with performance caching and database fallback
- Single Token Model: Currently, all agents share a single token for simplicity
Best Practices for Agent Tokens
- Treat as Secret: Never commit tokens to version control or expose in logs
- Secure Storage: Store tokens in secure credential management systems
- Limited Exposure: Only provide tokens to authorized agent deployments
- Regular Rotation: Implement a token rotation schedule (recommended: quarterly)
- Environment Variables: Use environment variables for token distribution, not configuration files
Token Rotation Procedure
# 1. Generate new token (requires server restart or admin API when available)
# Currently requires database update - this will be improved in future versions
# 2. Update all agent deployments with new token
# For Docker environments:
docker service update --env-add KONARR_AGENT_TOKEN="new-token-here" konarr-agent
# 3. Verify all agents are connecting successfully
# Check server logs for authentication failures
# 4. Remove old token references from configuration systems
Web UI Authentication
- Session-Based: Web interface uses session-based authentication
- Admin Access: Server settings and sensitive operations require admin privileges
- Session Security: Sessions are secured with appropriate timeout settings
Transport Security
TLS Configuration
Always use HTTPS in production - Konarr transmits sensitive vulnerability and SBOM data that must be encrypted in transit.
Frontend URL Configuration
Configure the server's frontend URL to ensure secure redirects and callbacks:
# konarr.yml
server:
frontend:
url: "https://konarr.example.com"
Certificate Management
- Automated Renewal: Use Let's Encrypt with automated renewal (certbot, acme.sh)
- Certificate Monitoring: Monitor certificate expiration dates
- Backup Certificates: Maintain secure backups of certificates and keys
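With nginx terminating TLS in front of Konarr, issuance and renewal can be as simple as the following sketch (assumes certbot and its nginx plugin are installed):
# Obtain a certificate and configure nginx; certbot installs a renewal timer
sudo certbot --nginx -d konarr.example.com
# Dry-run the renewal to verify automation
sudo certbot renew --dry-run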
Runtime Security
Container Security
Docker Socket Access Risks
⚠️ Critical Security Consideration: Mounting the Docker socket (`/var/run/docker.sock`) grants significant privileges:
- Container Creation: Ability to create privileged containers
- Host Access: Access to host filesystem through volume mounts
- Privilege Escalation: Potential for privilege escalation attacks
- Container Inspection: Access to all running containers and their metadata
Security Mitigations
- Trusted Hosts Only: Only run agents on trusted, dedicated hosts
- Read-Only Mounts: Use the `:ro` flag when possible: `/var/run/docker.sock:/var/run/docker.sock:ro`
- Dedicated Agent Hosts: Consider dedicated hosts for agent containers
- Network Segmentation: Isolate agent hosts in secure network segments
- Host Monitoring: Monitor host systems for unusual container activity
- Alternative Runtimes: Consider container runtimes with safer introspection APIs
Container Image Security
# Use minimal base images
FROM alpine:3.19
# Run as non-root user
RUN adduser -D -s /bin/sh konarr
USER konarr
# Minimal filesystem
COPY --from=builder /app/konarr-cli /usr/local/bin/
Tool Installation Security
The agent can automatically install security scanning tools (Syft, Grype, Trivy).
Supply Chain Security
- Tool Verification: Verify tool signatures and checksums when available
- Controlled Environments: For strict environments, pre-install approved tool versions
- Disable Auto-Install: Set `agent.tool_auto_install: false` and manage tools manually
- Tool Isolation: Consider running tools in isolated environments
# Secure agent configuration
agent:
tool_auto_install: false # Disable automatic tool installation
toolcache_path: "/usr/local/toolcache" # Pre-installed tool location
Data Security
SBOM and Vulnerability Data Protection
SBOM and vulnerability data contains sensitive information about your infrastructure:
Access Control
- API Authentication: All API endpoints require proper authentication
- Project Isolation: Implement project-based access controls
- Data Classification: Classify SBOM data according to organizational policies
Data Retention
# Example retention policy configuration (implementation-dependent)
data:
retention:
snapshots: "90d" # Keep snapshots for 90 days
vulnerabilities: "1y" # Keep vulnerability data for 1 year
logs: "30d" # Keep logs for 30 days
Data Encryption
- At Rest: Consider encrypting the SQLite database file
- In Transit: Always use HTTPS for API communications
- Backups: Encrypt database backups
Database Security
File Permissions
# Secure database file permissions
chmod 600 /data/konarr.db
chown konarr:konarr /data/konarr.db
# Secure data directory
chmod 700 /data
chown konarr:konarr /data
Backup Security
# Encrypted backup example
sqlite3 /data/konarr.db ".backup /tmp/konarr-backup.db"
gpg --cipher-algo AES256 --compress-algo 1 --symmetric --output konarr-backup.db.gpg /tmp/konarr-backup.db
rm /tmp/konarr-backup.db
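Restoring from such a backup is the reverse (sketch; stop the server before swapping the database file):
gpg --output /tmp/konarr-restore.db --decrypt konarr-backup.db.gpg
cp /tmp/konarr-restore.db /data/konarr.db
chmod 600 /data/konarr.db && chown konarr:konarr /data/konarr.db
rm /tmp/konarr-restore.db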
Network Security
Firewall Configuration
# Allow only necessary ports
# Server (typically internal)
ufw allow from 10.0.0.0/8 to any port 9000
# Reverse proxy (public)
ufw allow 80
ufw allow 443
# Agent communication (if direct)
ufw allow from <agent-networks> to any port 9000
Network Segmentation
- DMZ Deployment: Deploy web-facing components in DMZ
- Internal Networks: Keep agents and database on internal networks
- VPN Access: Use VPN for administrative access
Secrets Management
Configuration Security
- Environment Variables: Use environment variables for secrets, not config files
- Secrets Managers: Integrate with HashiCorp Vault, AWS Secrets Manager, etc.
- File Permissions: Secure configuration files with appropriate permissions
# Example environment variable configuration
export KONARR_AGENT_TOKEN="$(vault kv get -field=token secret/konarr/agent)"
export KONARR_DATABASE_ENCRYPTION_KEY="$(vault kv get -field=key secret/konarr/database)"
Kubernetes Secrets
apiVersion: v1
kind: Secret
metadata:
name: konarr-agent-token
type: Opaque
data:
token: <base64-encoded-agent-token>
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: konarr-agent
spec:
template:
spec:
containers:
- name: agent
image: ghcr.io/42bytelabs/konarr-agent:latest
env:
- name: KONARR_AGENT_TOKEN
valueFrom:
secretKeyRef:
name: konarr-agent-token
key: token
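Rather than hand-encoding base64, the same Secret can be created imperatively (sketch):
# Create the agent-token Secret from a literal value
kubectl create secret generic konarr-agent-token \
  --from-literal=token="your-agent-token"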
Monitoring and Auditing
Security Monitoring
Log Collection
# Example logging configuration
logging:
level: "info"
audit: true
destinations:
- type: "file"
path: "/var/log/konarr/audit.log"
- type: "syslog"
facility: "auth"
Metrics to Monitor
- Failed authentication attempts
- Unusual agent activity patterns
- Large data uploads or downloads
- Administrative actions
- System resource usage anomalies
Alerting
# Example alert conditions
# - More than 10 failed authentications in 5 minutes
# - Agent uploading unusually large SBOMs
# - New agents connecting from unknown IP addresses
# - Database size growing rapidly
Compliance and Auditing
Audit Trail
- Authentication Events: Log all authentication attempts and results
- Data Access: Log access to sensitive SBOM and vulnerability data
- Configuration Changes: Log all server configuration modifications
- Agent Activity: Monitor agent connection patterns and data uploads
Compliance Considerations
- Data Residency: Consider where SBOM data is stored and processed
- Access Logging: Maintain detailed access logs for compliance audits
- Data Retention: Implement compliant data retention policies
- Privacy: Consider privacy implications of container metadata collection
Incident Response
Security Incident Procedures
- Detection: Monitor for security events and anomalies
- Containment: Isolate affected systems and revoke compromised tokens
- Investigation: Analyze logs and determine scope of compromise
- Recovery: Restore systems and implement additional protections
- Lessons Learned: Update security procedures based on incidents
Token Compromise Response
# If agent token is compromised:
# 1. Immediately rotate the agent token
# 2. Update all legitimate agents
# 3. Monitor for unauthorized access attempts
# 4. Review recent agent activity for suspicious patterns
Security Checklist
Deployment Security
- HTTPS/TLS configured with modern ciphers
- Security headers implemented (HSTS, CSP, etc.)
- Agent tokens stored securely (not in code/configs)
- Database file permissions secured (600)
- Firewall rules configured for minimal access
- Regular security updates applied
- Monitoring and alerting configured
- Backup encryption implemented
- Agent hosts properly secured
- Tool installation policies defined
Operational Security
- Regular agent token rotation
- Security monitoring in place
- Incident response procedures defined
- Access controls documented and reviewed
- Compliance requirements mapped and addressed
- Security training for operators
- Regular security assessments conducted
Additional Resources
- Reverse Proxy Setup Guide - Detailed TLS configuration
- Agent Configuration - Secure agent deployment
- API Documentation - Authentication and authorization details
- Troubleshooting - Security-related troubleshooting
Troubleshooting
This guide helps resolve common issues with Konarr server and agent deployments.
Server Issues
Server Won't Start
Problem: Server fails to start or exits immediately.
Solutions:
- Check database permissions:
# Ensure data directory is writable
chmod 755 /data
ls -la /data/konarr.db
- Verify configuration:
# Test configuration file syntax
cargo run -p konarr-server -- --config konarr.yml --debug
- Check port availability:
# Verify port 9000 is available
netstat -tulpn | grep :9000
- Review server logs:
# Docker container logs
docker logs konarr-server
# Systemd service logs
journalctl -u konarr-server -f
Database Issues
Problem: Database corruption or migration failures.
Solutions:
- Backup and recover database:
# Backup current database
cp /data/konarr.db /data/konarr.db.backup
# Check database integrity
sqlite3 /data/konarr.db "PRAGMA integrity_check;"
- Reset database (data loss):
# Stop server, remove database, restart
rm /data/konarr.db
# Server will recreate database on next start
Web UI Not Loading
Problem: UI shows blank page or loading errors.
Solutions:
- Check frontend configuration:
# konarr.yml
server:
  frontend:
    url: "https://konarr.example.com"
- Verify reverse proxy (if used):
# nginx example
location / {
    proxy_pass http://localhost:9000;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}
- Clear browser cache and cookies
Agent Issues
Authentication Failures
Problem: Agent cannot authenticate with server.
Error: `Authentication failed` or `Invalid token`
Solutions:
- Verify agent token:
# Check server for current token
curl -s http://localhost:9000/api/health
# Verify agent token matches server
echo $KONARR_AGENT_TOKEN
- Generate new token:
# Access server admin UI
# Navigate to Settings > Agent Token
# Generate new token and update agents
- Check token format:
# Token should be a base64-encoded string
# Verify no extra whitespace or newlines
echo -n "$KONARR_AGENT_TOKEN" | wc -c
Web UI Login Issues
Problem: Cannot log in to the web interface or forgot password.
Solutions:
- Reset user password using the CLI:
# Interactive password reset
konarr-cli database user
# Follow the prompts:
# - Enter the username
# - Enter the new password
# - Select the role (Admin/User)
- Create a new admin user if locked out:
# Create emergency admin account
konarr-cli --database-url /data/konarr.db database user
# When prompted:
# Username: emergency-admin
# Password: [enter secure password]
# Role: Admin
- For container deployments:
# Access container and reset password
docker exec -it konarr-server konarr-cli database user
# Or with volume-mounted database
konarr-cli --database-url /path/to/konarr.db database user
Note: The `database user` command can both create new users and reset passwords for existing users. See the CLI Usage Guide for more details.
Tool Installation Problems
Problem: Agent cannot install or find security tools.
Solutions:
- Manual tool installation:
# Install Syft
curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin
# Install Grype
curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin
# Install Trivy
curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b /usr/local/bin
- Check tool paths:
# Verify tools are accessible
which syft
which grype
which trivy
# Test tool execution
syft --version
grype --version
trivy --version
- Configure toolcache path:
# konarr.yml
agent:
  toolcache_path: "/usr/local/toolcache"
  tool_auto_install: true
Docker Socket Access
Problem: Agent cannot access Docker socket.
Error: Cannot connect to Docker daemon
Solutions:
- Check Docker socket permissions:
# Verify socket exists and is accessible
ls -la /var/run/docker.sock
# Add user to docker group
sudo usermod -aG docker $USER
# Log out and log in again
- Container socket mounting:
# Ensure socket is properly mounted
docker run -v /var/run/docker.sock:/var/run/docker.sock \
  ghcr.io/42bytelabs/konarr-agent:latest
- Docker daemon status:
# Check Docker daemon is running
systemctl status docker
sudo systemctl start docker
Network Connectivity
Problem: Agent cannot reach Konarr server.
Solutions:
- Test connectivity:
# Test server reachability
curl -v http://konarr-server:9000/api/health
# Check DNS resolution
nslookup konarr-server
- Firewall configuration:
# Check firewall rules
sudo ufw status
sudo firewall-cmd --list-all
# Allow port 9000
sudo ufw allow 9000
- Network configuration:
# Check network interfaces
ip addr show
# Test port connectivity
telnet konarr-server 9000
Container Issues
Image Pull Failures
Problem: Cannot pull Konarr container images.
Solutions:
- Check image availability:
# List available tags
curl -s https://api.github.com/repos/42ByteLabs/konarr/packages/container/konarr/versions
# Pull specific version
docker pull ghcr.io/42bytelabs/konarr:v0.4.4
- Authentication for private registries:
# Login to GitHub Container Registry
echo $GITHUB_TOKEN | docker login ghcr.io -u USERNAME --password-stdin
Container Startup Issues
Problem: Containers exit immediately or crash.
Solutions:
- Check container logs:
# View container logs
docker logs konarr-server
docker logs konarr-agent
# Follow logs in real-time
docker logs -f konarr-server
- Verify volume mounts:
# Check mount points exist and are writable
ls -la /host/data
ls -la /host/config
# Fix permissions if needed
sudo chown -R 1000:1000 /host/data
- Resource constraints:
# Check available resources
docker stats
free -h
df -h
Performance Issues
High Memory Usage
Problem: Server or agent consuming excessive memory.
Solutions:
- Monitor memory usage:
# Check process memory
ps aux | grep konarr
# Monitor container resources
docker stats konarr-server
- Configure resource limits:
# docker-compose.yml
services:
  konarr:
    deploy:
      resources:
        limits:
          memory: 512M
        reservations:
          memory: 256M
- Database optimization:
# Vacuum SQLite database
sqlite3 /data/konarr.db "VACUUM;"
# Check database size
du -h /data/konarr.db
Slow SBOM Generation
Problem: Agent takes too long to generate SBOMs.
Solutions:
- Check scanner performance:
# Test individual tools
time syft nginx:latest
time grype nginx:latest
- Optimize container caching:
# Pre-pull base images
docker pull alpine:latest
docker pull ubuntu:latest
# Use a local registry for faster access
- Adjust scanning scope:
# konarr.yml - reduce scan scope
agent:
  scan_config:
    exclude_paths:
      - "/tmp"
      - "/var/cache"
Debugging
Enable Debug Logging
Server Debug Mode:
# Environment variable
export RUST_LOG=debug
# Configuration file (YAML)
echo "log_level: debug" >> konarr.yml
Agent Debug Mode:
# Debug Agent
konarr-cli --debug agent --docker-socket /var/run/docker.sock
# Debug environment
export KONARR_LOG_LEVEL=debug
API Debugging
Test API Endpoints:
# Health check
curl -v http://localhost:9000/api/health
# Authentication test
curl -H "Authorization: Bearer $AGENT_TOKEN" \
http://localhost:9000/api/projects
# Raw SBOM upload test
curl -X POST \
-H "Authorization: Bearer $AGENT_TOKEN" \
-H "Content-Type: application/json" \
-d @sbom.json \
http://localhost:9000/api/snapshots
Database Debugging
Query Database Directly:
# Connect to SQLite database
sqlite3 /data/konarr.db
# Common debugging queries
.tables
SELECT COUNT(*) FROM projects;
SELECT COUNT(*) FROM snapshots;
SELECT COUNT(*) FROM components;
# Check recent activity
SELECT * FROM snapshots ORDER BY created_at DESC LIMIT 10;
Configuration Validation and Debugging
Initial Setup Verification
1. Server Health Check
# Test server is running and accessible
curl -v http://localhost:9000/api/health
# Expected response:
# HTTP/1.1 200 OK
# {"status":"healthy","version":"x.x.x"}
2. Database Verification
# Check database file exists and is accessible
ls -la /data/konarr.db
# Verify database structure
sqlite3 /data/konarr.db ".tables"
# Check server settings
sqlite3 /data/konarr.db "SELECT name, value FROM server_settings WHERE name LIKE 'agent%';"
3. Agent Authentication Test
# Test agent token authentication
curl -H "Authorization: Bearer ${KONARR_AGENT_TOKEN}" \
http://localhost:9000/api/projects
# Successful authentication returns project list
Advanced Configuration Troubleshooting
Server Startup Problems
Issue: Server fails to start or exits immediately
Solutions:
- Check configuration file syntax:
# Validate YAML syntax
python -c "import yaml; yaml.safe_load(open('konarr.yml'))"
- Verify data directory permissions:
# Ensure data directory is writable
mkdir -p /data
chmod 755 /data
chown konarr:konarr /data  # If running as a specific user
- Check port availability:
# Verify port 9000 is not in use
netstat -tulpn | grep :9000
lsof -i :9000
Issue: Frontend not served properly
Solutions:
- Check frontend path configuration:
server:
  frontend: "/app/dist"  # Ensure path exists and contains built frontend
- Verify frontend files exist:
ls -la /app/dist/
# Should contain: index.html, static/, assets/
Agent Configuration Problems
Issue: Agent cannot connect to server
Solutions:
- Verify server URL configuration:
# Test connectivity
curl -v http://konarr.example.com:9000/api/health
- Check agent token:
# Retrieve current agent token from server
sqlite3 /data/konarr.db "SELECT value FROM server_settings WHERE name='agent.key';"
- Network troubleshooting:
# Test DNS resolution
nslookup konarr.example.com
# Test port connectivity
telnet konarr.example.com 9000
Issue: Agent tools not found or installation fails
Solutions:
- Manual tool installation:
# Install Syft
curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | \
  sh -s -- -b /usr/local/bin
# Install Grype
curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | \
  sh -s -- -b /usr/local/bin
# Install Trivy
curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | \
  sh -s -- -b /usr/local/bin
- Verify tool paths:
# Check tools are accessible
which syft grype trivy
/usr/local/bin/syft version
/usr/local/bin/grype version
/usr/local/bin/trivy version
- Configure custom tool paths:
agent:
  toolcache_path: "/opt/security-tools"
  tool_auto_install: false
Performance Optimization
Database Performance
# Analyze database size and performance
sqlite3 /data/konarr.db "PRAGMA integrity_check;"
sqlite3 /data/konarr.db "VACUUM;"
sqlite3 /data/konarr.db "ANALYZE;"
# Check database file size
du -h /data/konarr.db
Memory and Resource Usage
# Monitor server resource usage
ps aux | grep konarr-server
htop -p $(pgrep konarr-server)
# Container resource monitoring
docker stats konarr-server konarr-agent
Security Verification
SSL/TLS Configuration
# Test SSL certificate and configuration
openssl s_client -connect konarr.example.com:443 -servername konarr.example.com
# Check certificate expiration
curl -vI https://konarr.example.com 2>&1 | grep -E "(expire|until)"
Token Security
# Verify agent token entropy and length
echo $KONARR_AGENT_TOKEN | wc -c # Should be 43+ characters
echo $KONARR_AGENT_TOKEN | head -c 10 # Should start with "kagent_"
Logging and Debugging
Enable Server Debug Logging
Server debug mode:
# Environment variable
export RUST_LOG=debug
# Or configuration file
echo "log_level: debug" >> konarr.yml
Agent debug mode:
# CLI flag
konarr-cli --debug agent monitor
# Environment variable
export KONARR_LOG_LEVEL=debug
Log Analysis
# Server logs (Docker)
docker logs -f konarr-server
# Agent logs (Docker)
docker logs -f konarr-agent
# System service logs
journalctl -u konarr-server -f
journalctl -u konarr-agent -f
Configuration Testing and Validation
Complete Configuration Test
# Test complete configuration (development)
cargo run -p konarr-server -- --config konarr.yml --debug
# Test agent configuration
konarr-cli --config konarr.yml --debug
Environment Variable Precedence
# Check configuration with debug output
konarr-cli --config konarr.yml --debug
# List all environment variables
env | grep KONARR_ | sort
Getting Help
Log Collection
When seeking support, collect these logs:
# Server logs
docker logs konarr-server > server.log 2>&1
# Agent logs
docker logs konarr-agent > agent.log 2>&1
# System information
docker info > docker-info.txt
uname -a > system-info.txt
Support Channels
- GitHub Issues: https://github.com/42ByteLabs/konarr/issues
- Documentation: https://42bytelabs.github.io/konarr-docs/
- Community: GitHub Discussions
Reporting Bugs
Include in bug reports:
- Konarr version (`konarr-server --version`)
- Operating system and version
- Docker/container runtime version
- Complete error messages and stack traces
- Steps to reproduce the issue
- Configuration files (remove sensitive data)