
Basic Setup

The basic setup of the Data Streamhouse consists of a single Portal instance and a single Argus instance. Optional Machina instances may be added to enable stream processing and analytics capabilities. This configuration is recommended for:

  • Proof-of-Concepts (PoCs)

  • Small environments or projects

  • Isolated network segments

  • Rapid evaluation and onboarding

This guide describes the complete deployment process for this setup. Once completed, you will be able to:

  • Log in to the Portal

  • Monitor your clusters and applications via Argus

  • Explore your streaming data interactively


Deployment Overview

To bring up a fully functional environment, complete the following steps in order:

  1. Preparation

  2. Access & Credentials

  3. Configuration

  4. Deployment

  5. Testing


1. Preparation

Before deployment, ensure your infrastructure and access are correctly configured.

Portal Preparation

Container Setup

Portal is distributed as a Docker image and can be deployed as:

  • A standalone container

  • A Kubernetes Pod

  • A Helm-based deployment

Ingress configuration is often required. Portal communicates over HTTP + WebSocket (WS) or HTTPS + Secure WebSocket (WSS). If an ingress controller is used, ensure that the appropriate headers (e.g., Connection: Upgrade) are forwarded to support WebSocket upgrades.
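As an illustration, a minimal Ingress for the NGINX ingress controller might look like the following (host and service names are placeholders; NGINX forwards the WebSocket upgrade headers by default, but the proxy timeouts are commonly raised so long-lived WS connections are not cut off):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: portal
  annotations:
    # Raise proxy timeouts so idle WebSocket connections are not closed early
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  rules:
    - host: portal.example.com        # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: portal          # placeholder service name
                port:
                  number: 8080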

For required ports and network policies, refer to the Interoperability & Network section.

Database Configuration

Portal requires access to an external database (Postgres or H2) with full read and write access to a dedicated schema (e.g., dshportal). The default schema used is PUBLIC, but a custom schema can be used for secure environments. We recommend using a dedicated database for Portal.
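If you opt for a dedicated schema instead of PUBLIC, it can be prepared with plain SQL, and the standard PostgreSQL JDBC driver selects it via the currentSchema URL parameter. A minimal sketch (schema, user, and password below are illustrative placeholders):

-- Run once as a privileged user; names and password are examples
CREATE USER dshportal_user WITH PASSWORD 'change-me';
CREATE SCHEMA dshportal AUTHORIZATION dshportal_user;

-- Corresponding JDBC URL selecting the schema via currentSchema:
-- jdbc:postgresql://dshportal.acme.com:5432/dshportal?currentSchema=dshportal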

SSL/TLS Certificate

Deploying Portal with SSL/TLS enabled is strongly recommended. Certificates must be embedded in a Java Keystore (JKS) and mounted into the container at runtime.

Instructions for generating a custom certificate are provided in the FAQs section.
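As a quick sketch, a self-signed certificate can be generated directly into a JKS with the JDK's keytool (alias, password, and hostname are placeholders; see the FAQs for the full procedure and CA-signed certificates):

keytool -genkeypair \
  -alias selfsigned \
  -keyalg RSA -keysize 2048 \
  -validity 365 \
  -dname "CN=portal.example.com" \
  -keystore selfsigned.jks \
  -storepass change-me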


2. Access & Credentials

Before deploying the Data Streamhouse, you must obtain the necessary credentials from Xeotek. These credentials are required to:

  • Pull container images from the private registry

  • Configure system secrets and initialize the platform

Container Images

Container images for the Data Streamhouse are hosted in a private repository on Docker Hub.

The Data Streamhouse operates in high-security environments such as large enterprises and government agencies. For security reasons, direct access to the images is restricted and not publicly available.

Please contact your representative at Xeotek to obtain access to the Data Streamhouse images.

System Credentials

In addition to image access, you will receive your Data Streamhouse team and secret. These are required to activate and operate your Data Streamhouse installation.

Both the container registry credentials and the team and secret are provided by Xeotek during onboarding.

Important:

  • Store all credentials securely and restrict access.

  • The team and secret must be configured before system startup.

  • Follow your organization's policies for handling confidential deployment artifacts.


3. Configuration

For a full list of configuration options for the Data Streamhouse, please visit the Configuration List page.

Portal Configuration

| Key | Values | Description |
|---|---|---|
| dsh_portal_team | String | Your team's ID |
| dsh_portal_secret | String | Your team's secret |
| dsh_portal_port | 8080 (or 8443) | The port through which the web user interface is accessible. Note: the container user has no access to system-level ports. |
| dsh_portal_loglevel | debug | Start with "debug" and change it to "warn" later. |
| dsh_portal_db_url | jdbc:postgresql://dshportal.acme.com:5432/dshportal | JDBC connection string to your Postgres or H2 database. |
| dsh_portal_db_username | String | Username of the database user with read/write access to the PUBLIC schema, unless a custom schema is configured. |
| dsh_portal_db_password | String | Password of the database user. |
| dsh_portal_keystore_path | /etc/selfsigned.jks | Path to the mounted JKS |
| dsh_portal_keystore_pass | String | Password to access the keystore |
| dsh_portal_keystore_alias | String | Alias of the certificate |
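
For illustration, a standalone docker run invocation for a PoC could supply these keys as environment variables (the image reference and all bracketed values are placeholders; use the image and tag you were granted access to):

docker run -d --name dsh-portal \
  -p 8080:8080 \
  -e dsh_portal_team=<your-team-id> \
  -e dsh_portal_secret=<your-team-secret> \
  -e dsh_portal_port=8080 \
  -e dsh_portal_loglevel=debug \
  -e dsh_portal_db_url="jdbc:postgresql://dshportal.acme.com:5432/dshportal" \
  -e dsh_portal_db_username=<db-user> \
  -e dsh_portal_db_password=<db-password> \
  <portal-image>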

4. Deployment

Portal and Argus can be deployed using a Helm chart, as a standalone container, or as a Kubernetes workload (including OpenShift). This section outlines recommended practices for each option with an emphasis on production readiness and operational maintainability.

Deployment Targets

  • Helm (preferred for Kubernetes-based environments)

  • Pod/Deployment YAMLs (for manual control or OpenShift customization)

  • Docker run (for local development or PoC-only usage)


Helm Deployment

All Data Streamhouse components are available as Helm charts for streamlined deployment in Kubernetes-based environments.

Adding the Helm Repository

Add the Data Streamhouse Helm repository:

helm repo add datastreamhouse https://dl.cloudsmith.io/public/xeotek/datastreamhouse/helm/charts/
helm repo update
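
Once the repository is added, you can list the available charts and install one into your namespace. The chart name portal below is an assumption; check the actual chart names with helm search repo:

helm search repo datastreamhouse

# Chart name and values file are placeholders
helm install portal datastreamhouse/portal \
  --namespace datastreamhouse-system \
  --values my-values.yaml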

Image Pull Secrets

Because Data Streamhouse container images are hosted in a private registry, an image pull secret must be configured.

Create the secret using the credentials provided by Xeotek:

kubectl create secret docker-registry dsh-registry-secret \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<your-xeotek-username> \
  --docker-password=<your-xeotek-password> \
  --namespace=<datastreamhouse-system-namespace>

Reference the secret in your custom Helm values:

image:
  imagePullSecrets:
    - name: dsh-registry-secret

Best practice: Always scope the secret to the same namespace where Data Streamhouse components are deployed.


Keystore Mounting (TLS)

When TLS is enabled, a Java Keystore (JKS) must be mounted into the Portal container at runtime. There are multiple strategies to manage this securely:

Option 1: Volume Mount

Mount a pre-generated .jks file into the container:

volumeMounts:
  - name: tls-keystore
    mountPath: /opt/portal/keystore
    readOnly: true

volumes:
  - name: tls-keystore
    secret:
      secretName: portal-tls
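
The portal-tls secret referenced above can be created from a pre-generated keystore file, for example:

kubectl create secret generic portal-tls \
  --from-file=selfsigned.jks \
  --namespace=datastreamhouse-system

With this layout, the keystore ends up at /opt/portal/keystore/selfsigned.jks inside the container, which is the value dsh_portal_keystore_path should point to.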

Option 2: External Secret Manager

Use a cloud-native secret manager (e.g., AWS Secrets Manager, HashiCorp Vault) with a sidecar injector or an operator to mount the keystore dynamically.

Best practice: Avoid hardcoding passwords and TLS paths in environment variables or values files. Use Kubernetes secrets or external secret managers.
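
As one possible sketch with HashiCorp Vault's agent injector, pod annotations can template the keystore into the filesystem at startup (the role and secret path below are assumptions specific to your Vault setup; binary keystores are typically stored base64-encoded and decoded in the inject template):

metadata:
  annotations:
    vault.hashicorp.com/agent-inject: "true"
    vault.hashicorp.com/role: "portal"   # example Vault role
    vault.hashicorp.com/agent-inject-secret-selfsigned.jks: "secret/data/portal/keystore"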


Namespace & Isolation

Use a dedicated namespace for the Data Streamhouse components (e.g., datastreamhouse-system) to isolate the environment and simplify resource control and RBAC management.

kubectl create namespace datastreamhouse-system

Best practice: Apply namespace-specific resource quotas and network policies to ensure isolation and enforce limits.
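
For example, a basic ResourceQuota for the namespace might look like this (the limits are placeholders to be sized for your environment):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: datastreamhouse-quota
  namespace: datastreamhouse-system
spec:
  hard:
    requests.cpu: "4"        # placeholder sizing
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi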


Health Endpoints

Portal exposes the following endpoints for orchestration and monitoring:

| Endpoint | Description | Usage |
|---|---|---|
| /health | Overall health status | System dashboards, alerts |
| /live | Liveness probe | Kubernetes liveness checks |
| /ready | Readiness probe | Kubernetes readiness checks |
When deploying via Helm or custom manifests, ensure probes are configured as:

livenessProbe:
  httpGet:
    path: /live
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 10

readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
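
Note: If Portal is served over TLS (dsh_portal_port set to 8443 with a mounted keystore), adjust both probes accordingly by setting port: 8443 and adding scheme: HTTPS to the httpGet blocks.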

Additional Recommendations

  • Use readiness gates to delay service exposure until the database and keystore are available.

  • Enable resource requests/limits to ensure predictable performance and avoid node contention (a placeholder snippet follows this list).

  • Use pod anti-affinity rules when deploying Portal in an HA setup to spread replicas across availability zones or nodes.
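
As a starting point, resource requests and limits can be set on the Portal container like this (the values are placeholders, not sizing guidance; see "How do I configure the memory for Portal?" in the FAQs):

resources:
  requests:
    cpu: 500m          # placeholder values
    memory: 1Gi
  limits:
    cpu: "2"
    memory: 4Gi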

5. Testing

After deploying Portal and Argus, validate that the system is operational before connecting additional components or exposing it to end users.

Browser Access

Open a browser and navigate to the Portal endpoint:

http(s)://<your-ingress-or-service-endpoint>

Log in using the default credentials:

Username: admin  
Password: admin

Note: Change the default password immediately after login in production environments.


Log Inspection

Check the Portal logs to confirm successful startup. You should see output similar to:

INFO Server started at: http://0.0.0.0:8080

No stack traces or repeated warnings should appear during startup. All components (Portal, Argus) should reach a ready state.

Logs are written to the container’s local file system and output to the console.

The default log file location is:

/root/.dsh_portal_log

The console output and the log file contents are identical.


Health Checks

Verify that health endpoints return expected HTTP 200 status codes:

GET /live   → 200 OK  
GET /ready  → 200 OK  
GET /health → 200 OK

You can also query these endpoints manually, for example:

curl http://<pod-ip>:8080/live

Troubleshooting

If you encounter problems during deployment or runtime, detailed logs are critical for diagnosis.

Configuring Log Levels

Control log verbosity by setting environment variables at startup.

Main application log level:

  • dsh_portal_loglevel

Additional component log levels:

  • dsh_portal_loglevel_kafka — Apache Kafka client libraries

  • dsh_portal_loglevel_netty — Netty networking

Available log levels:

  • ERROR — Critical failures only

  • WARN — Warnings and potential issues (default)

  • INFO — General operational information

  • DEBUG — Detailed internal state information (for troubleshooting only)

Recommendation: Use DEBUG level only temporarily during troubleshooting. Use WARN or INFO for regular operations.
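
One way to set these in a Kubernetes manifest is via the env section of the Portal container (a minimal sketch; revert the values after troubleshooting):

env:
  - name: dsh_portal_loglevel
    value: "debug"     # temporary, switch back to "warn" afterwards
  - name: dsh_portal_loglevel_kafka
    value: "warn"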
