Basic Setup
The basic setup of the Data Streamhouse consists of a single Portal instance and a single Argus instance. Machina instances can optionally be added to enable stream processing and analytics capabilities. This configuration is recommended for:
Proof-of-Concepts (PoCs)
Small environments or projects
Isolated network segments
Rapid evaluation and onboarding
This guide describes the complete deployment process for this setup. Once completed, you will be able to:
Log in to the Portal
Monitor your clusters and applications via Argus
Explore your streaming data interactively
Deployment Overview
To bring up a fully functional environment, complete the following steps in order:
Preparation
Access & Credentials
Configuration
Deployment
Testing
1. Preparation
Before deployment, ensure your infrastructure and access are correctly configured.
Portal Preparation
Container Setup
Portal is distributed as a Docker image and can be deployed as:
A standalone container
A Kubernetes Pod
A Helm-based deployment
Ingress configuration is often required. Portal communicates over HTTP + WebSocket (WS) or HTTPS + Secure WebSocket (WSS). If an ingress controller is used, ensure that the appropriate headers (e.g., Connection: Upgrade) are forwarded to support WebSocket upgrades.
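As an illustration, a plain NGINX reverse proxy in front of Portal would need directives along these lines to forward WebSocket upgrades. The hostname, port, and timeout are placeholders; Kubernetes ingress controllers typically expose equivalent settings via annotations.

```nginx
# Illustrative NGINX reverse-proxy excerpt for Portal (host, port, and timeout are placeholders).
location / {
    proxy_pass         http://portal.internal:8080;
    proxy_http_version 1.1;
    proxy_set_header   Upgrade $http_upgrade;   # forward the WebSocket upgrade request
    proxy_set_header   Connection "upgrade";    # required for the protocol switch
    proxy_set_header   Host $host;
    proxy_read_timeout 3600s;                   # keep long-lived WS connections open
}
```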
Database Configuration
Portal requires access to an external database (Postgres or H2) with full read and write access to a dedicated schema (e.g., dshportal). The default schema is PUBLIC, but a custom schema can be used in secure environments. We recommend using a dedicated database for Portal.
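If a custom schema is used, it can be selected directly in the JDBC connection string, for example via the PostgreSQL driver's standard currentSchema parameter. The host, database, and schema names below are placeholders and follow the example used in the configuration table further down:

```
jdbc:postgresql://dshportal.acme.com:5432/dshportal?currentSchema=dshportal
```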
SSL/TLS Certificate
Deploying Portal with SSL/TLS enabled is strongly recommended. Certificates must be embedded in a Java Keystore (JKS) and mounted into the container at runtime.
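A minimal sketch of creating such a keystore with the JDK's keytool, assuming a self-signed certificate for evaluation purposes. The alias, passwords, hostname, and file name are placeholders and must match the keystore settings configured later; for production, import your organization's certificate instead (e.g., via keytool -importkeystore).

```bash
# Generate a self-signed certificate in a JKS keystore (for evaluation only; values are placeholders).
keytool -genkeypair \
  -alias portal \
  -keyalg RSA -keysize 2048 -validity 365 \
  -dname "CN=portal.acme.com" \
  -keystore selfsigned.jks -storetype JKS \
  -storepass changeit -keypass changeit
```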
2. Access & Credentials
Before deploying the Data Streamhouse, you must obtain the necessary credentials from Xeotek. These credentials are required to:
Pull container images from the private registry
Configure system secrets and initialize the platform
Container Images
Container images for the Data Streamhouse are hosted in a private repository on Docker Hub.
The Data Streamhouse operates in high-security environments such as large enterprises and government agencies. For security reasons, access to the images is restricted; they are not publicly available.
System Credentials
In addition to image access, you will receive your Data Streamhouse team and secret. These are required to activate and operate your Data Streamhouse installation.
Both the container registry credentials and the team and secret are provided by Xeotek during onboarding.
Important:
Store all credentials securely and restrict access.
The team and secret must be configured before system startup.
Follow your organization's policies for handling confidential deployment artifacts.
3. Configuration
For a full list of configuration options for the Data Streamhouse, please visit the Configuration Table page.
Portal Configuration
| Parameter | Type / Example | Description |
| --- | --- | --- |
| dsh_portal_team | String | Your team's ID |
| dsh_portal_secret | String | Your team's secret |
| dsh_portal_port | 8080 (or 8443) | The port through which the web user interface is accessible. Note: the container user has no access to system-level ports. |
| dsh_portal_loglevel | debug | Start with "debug" and change it to "warn" later. |
| dsh_portal_db_url | jdbc:postgresql://dshportal.acme.com:5432/dshportal | JDBC connection string to your Postgres or H2 database. |
| dsh_portal_db_username | String | Username of the database user with read/write access to the PUBLIC schema, unless specified otherwise. |
| dsh_portal_db_password | String | Password of the database user. |
| dsh_portal_keystore_path | /etc/selfsigned.jks | Path to the mounted JKS keystore. |
| dsh_portal_keystore_pass | String | Password to access the certificate. |
| dsh_portal_keystore_alias | String | Alias of the certificate. |
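These options are supplied to the container as environment variables. As an illustration, a standalone Docker run for a PoC could look like the sketch below; the image name xeotek/dsh-portal, the tag, and all values are placeholders, so use the image reference and credentials provided by Xeotek.

```bash
# Minimal standalone Portal container (image name, tag, and all values are placeholders).
docker run -d --name dsh-portal \
  -p 8443:8443 \
  -e dsh_portal_team=acme \
  -e dsh_portal_secret=<your-secret> \
  -e dsh_portal_port=8443 \
  -e dsh_portal_loglevel=debug \
  -e dsh_portal_db_url="jdbc:postgresql://dshportal.acme.com:5432/dshportal" \
  -e dsh_portal_db_username=dshportal \
  -e dsh_portal_db_password=<db-password> \
  -e dsh_portal_keystore_path=/etc/selfsigned.jks \
  -e dsh_portal_keystore_pass=<keystore-password> \
  -e dsh_portal_keystore_alias=portal \
  -v "$(pwd)/selfsigned.jks:/etc/selfsigned.jks:ro" \
  xeotek/dsh-portal:<tag>
```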
4. Deployment
Portal and Argus can be deployed using a Helm chart, as a standalone container, or as a Kubernetes workload (including OpenShift). This section outlines recommended practices for each option with an emphasis on production readiness and operational maintainability.
Deployment Targets
Helm (preferred for Kubernetes-based environments)
Pod/Deployment YAMLs (for manual control or OpenShift customization)
Docker run (for local development or PoC-only usage)
Helm Deployment
All Data Streamhouse components are available as Helm charts for streamlined deployment in Kubernetes-based environments.
Adding the Helm Repository
Add the Data Streamhouse Helm repository:
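For example, using the Helm CLI. The repository name and URL below are placeholders; use the repository URL provided by Xeotek.

```bash
# Add the Data Streamhouse chart repository (URL is a placeholder) and refresh the index.
helm repo add datastreamhouse https://charts.example.com/datastreamhouse
helm repo update
```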
Image Pull Secrets
Because Data Streamhouse container images are hosted in a private registry, an image pull secret must be configured.
Create the secret using the credentials provided by Xeotek:
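A sketch using kubectl, assuming the example namespace datastreamhouse-system used later in this guide. The secret name is a placeholder, and the username and password are the Docker Hub credentials provided by Xeotek.

```bash
# Create an image pull secret for the private Docker Hub repository (values are placeholders).
kubectl create secret docker-registry xeotek-registry \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<registry-username> \
  --docker-password=<registry-password> \
  --namespace datastreamhouse-system
```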
Reference the secret in your custom Helm values:
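The exact key depends on the chart; a common Kubernetes convention is an imagePullSecrets list in values.yaml. The secret name must match the one created above.

```yaml
# values.yaml excerpt (key name is chart-dependent; shown here as the common convention)
imagePullSecrets:
  - name: xeotek-registry
```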
Best practice: Always scope the secret to the same namespace where Data Streamhouse components are deployed.
Keystore Mounting (TLS)
When TLS is enabled, a Java Keystore (JKS) must be mounted into the Portal container at runtime. There are multiple strategies to manage this securely:
Option 1: Volume Mount
Mount a pre-generated .jks file into the container:
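One possible approach, sketched below, is to store the keystore in a Kubernetes Secret; the secret, namespace, and container names are placeholders.

```bash
# Store the keystore file in a Kubernetes Secret (names are placeholders).
kubectl create secret generic portal-keystore \
  --from-file=selfsigned.jks=./selfsigned.jks \
  --namespace datastreamhouse-system
```

The secret can then be mounted read-only so the file appears at the path configured via dsh_portal_keystore_path:

```yaml
# Pod spec excerpt: mount the secret as /etc/selfsigned.jks (names are placeholders).
volumes:
  - name: keystore
    secret:
      secretName: portal-keystore
containers:
  - name: portal
    volumeMounts:
      - name: keystore
        mountPath: /etc/selfsigned.jks
        subPath: selfsigned.jks
        readOnly: true
```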
Option 2: External Secret Manager
Use a cloud-native secret manager (e.g., AWS Secrets Manager, HashiCorp Vault) with a sidecar injector or an operator to mount the keystore dynamically.
Best practice: Avoid hardcoding passwords and TLS paths in environment variables or values files. Use Kubernetes secrets or external secret managers.
Namespace & Isolation
Use a dedicated namespace for the Data Streamhouse components (e.g., datastreamhouse-system) to isolate the environment and simplify resource control and RBAC management.
Best practice: Apply namespace-specific resource quotas and network policies to ensure isolation and enforce limits.
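For example, the namespace and a simple ResourceQuota could be created as follows; the namespace name follows the example above and the quota values are placeholders to adapt to your sizing.

```bash
# Create the dedicated namespace (name follows the example above).
kubectl create namespace datastreamhouse-system

# Apply a simple ResourceQuota to the namespace (values are placeholders).
kubectl apply -n datastreamhouse-system -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dsh-quota
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
EOF
```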
Health Endpoints
Portal exposes the following endpoints for orchestration and monitoring:
| Endpoint | Purpose | Typical use |
| --- | --- | --- |
| /health | Overall health status | System dashboards, alerts |
| /live | Liveness probe | Kubernetes liveness checks |
| /ready | Readiness probe | Kubernetes readiness checks |
When deploying via Helm or custom manifests, ensure probes are configured as:
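A sketch of the corresponding probe definitions, assuming Portal serves these endpoints on the port configured via dsh_portal_port (8443 with TLS in this example); the probe timings are placeholders.

```yaml
# Liveness and readiness probes against Portal's /live and /ready endpoints (values are placeholders).
livenessProbe:
  httpGet:
    path: /live
    port: 8443
    scheme: HTTPS
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready
    port: 8443
    scheme: HTTPS
  initialDelaySeconds: 15
  periodSeconds: 10
```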
Additional Recommendations
Use readiness gates to delay service exposure until the database and keystore are available.
Enable resource requests/limits to ensure predictable performance and avoid node contention.
Configure anti-affinity rules if deploying Portal in an HA setup to spread replicas across availability zones or nodes (see the sketch after this list).
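A minimal sketch of resource requests/limits and a pod anti-affinity rule for Portal replicas; all values, label keys, and the topology key are placeholders to adjust to your sizing and cluster topology.

```yaml
# Deployment spec excerpt (values and labels are placeholders).
resources:
  requests:
    cpu: "1"
    memory: 2Gi
  limits:
    cpu: "2"
    memory: 4Gi
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: dsh-portal
        topologyKey: topology.kubernetes.io/zone
```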
5. Testing
After deploying Portal and Argus, validate that the system is operational before connecting additional components or exposing it to end users.
Browser Access
Open a browser and navigate to the Portal endpoint:
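For example (hostname and port are placeholders; use https if TLS is enabled):

```
https://portal.acme.com:8443
```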
Log in using the default credentials.
Note: Change the default password immediately after login in production environments.
Log Inspection
Check the Portal logs to confirm successful startup. No stack traces or repeated warnings should appear during startup, and all components (Portal, Argus) should reach a ready state.
Logs are written to a log file on the container's local file system and to the console; console output and log file contents are identical.
Health Checks
Verify that the health endpoints return the expected HTTP 200 status codes. They can also be queried manually or from scripts, for example from the command line.
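A sketch using curl; the hostname and port are placeholders, and -k skips certificate verification for self-signed certificates.

```bash
# Query each health endpoint and print the HTTP status code (host and port are placeholders).
for path in /health /live /ready; do
  curl -sk -o /dev/null -w "${path}: %{http_code}\n" "https://portal.acme.com:8443${path}"
done
```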
Troubleshooting
If you encounter problems during deployment or runtime, detailed logs are critical for diagnosis.
Configuring Log Levels
Control log verbosity by setting environment variables at startup.
Main application log level: dsh_portal_loglevel
Additional component log levels:
dsh_portal_loglevel_kafka — Apache Kafka client libraries
dsh_portal_loglevel_netty — Netty networking
Available log levels:
ERROR — Critical failures only
WARN — Warnings and potential issues (default)
INFO — General operational information
DEBUG — Detailed internal state information (for troubleshooting only)
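For example, verbosity can be raised temporarily on a running Kubernetes deployment; the deployment name and namespace below are placeholders.

```bash
# Raise Portal's log levels for troubleshooting (deployment name and namespace are placeholders).
kubectl set env deployment/dsh-portal \
  dsh_portal_loglevel=DEBUG \
  dsh_portal_loglevel_kafka=DEBUG \
  --namespace datastreamhouse-system
```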