🚀 Deploy Flowise + Kubernetes + Helm on Ubuntu VPS (10-Step Guide)

Here's a step-by-step guide to deploying FlowiseAI (commonly just Flowise) on an Ubuntu VPS using Helm and Kubernetes. I assume you have an Ubuntu server (or VPS) and are comfortable with shell commands and basic Kubernetes.

I'll walk you through setting up Kubernetes (if you haven't already), installing Helm, and deploying the Flowise Helm chart, with production-ready tweaks.

What is Flowise?

Flowise is a low-code/no-code open-source platform for building AI agents and workflows visually.

Here are the key facts.

✅ What it is

  • Provides modular building blocks (nodes) that you can wire together in a drag-and-drop visual UI to create conversational agents, workflow automations, knowledge-retrieval bots, and more.
  • Supports multiple use cases: chat assistants, multi-agent systems, human-in-the-loop workflows, and observability/metrics for production use.
  • Open source: the codebase is on GitHub with roughly 46k stars at the time of writing, and you can self-host it.

🎯 Key Features

  • Visual builder + workflow orchestration for single-agent or multi-agent systems.
  • Integration support: wide range of LLMs (large language models), embedding/vector DBs, and various data sources.
  • Human-in-the-loop: you can insert checkpoints where a human reviews/approves the agent's output.
  • Observability/metrics: execution traces, monitoring, and fine-tuning of workflows in production.
  • Deployment flexibility: self-hosted or cloud, which gives you control (important if you manage your own infrastructure).

What is Kubernetes?

Kubernetes (often abbreviated K8s) is an open-source system for automating the deployment, scaling, and management of containerized applications.

🧩 Overview

Kubernetes was originally developed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF). It's designed to help you run applications reliably across clusters of physical or virtual machines by managing containers (like those from Docker, containerd, etc.) automatically.

โš™๏ธ Core Functions

  • Container orchestration: Automatically schedules and runs containers across nodes in a cluster.
  • Scaling: Increases or decreases application instances based on demand (manual or automatic).
  • Load balancing: Distributes traffic evenly across multiple instances of a service.
  • Self-healing: Restarts failed containers, replaces unresponsive nodes, and ensures desired state is maintained.
  • Rolling updates: Deploys new versions of apps without downtime.
  • Secret and configuration management: Securely stores passwords, keys, and config values.

🧠 Key Components

  • Pod: Smallest deployable unit; one or more containers that share resources.
  • Node: A worker machine (VM or physical) where pods run.
  • Cluster: A set of nodes managed by Kubernetes.
  • Deployment: Defines how many replicas (copies) of an app should run and how to update them.
  • Service: Provides a stable network endpoint (IP/DNS) for accessing pods.
  • Ingress: Manages external access to services (usually HTTP/HTTPS).
  • Namespace: Logical partitioning for multi-tenant or environment isolation.
  • ConfigMap & Secret: Handle environment variables and sensitive data.
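
To make these components concrete, here is a minimal, hypothetical example: a Deployment that runs three replicas of an nginx container, plus a Service that gives them a stable endpoint. The names and namespace are illustrative only:

# Hypothetical example: a Deployment with three replicas exposed by a Service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: demo
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80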

🚀 Why It's Popular

  • Works across any environment: on-premises, private cloud, or public cloud (AWS, Azure, GCP, etc.).
  • Enables microservices architecture, allowing teams to develop and deploy components independently.
  • Supports GitOps, DevOps pipelines, and CI/CD automation.
  • Backed by a large ecosystem (Helm, Istio, Prometheus, etc.).

What is Helm?

Helm is the package manager for Kubernetes, often described as "apt" or "yum" for Kubernetes.

It simplifies the deployment, management, and versioning of complex Kubernetes applications by packaging them into reusable units called Helm charts.

🧩 Overview

Helm was created by Deis (later acquired by Microsoft) and is now a CNCF (Cloud Native Computing Foundation) project, just like Kubernetes itself.
It provides a standardized way to define, install, and upgrade even the most complex Kubernetes workloads with just a few commands.

📦 What is a Helm Chart?

A Helm chart is a collection of files that describe a set of Kubernetes resources.
Each chart contains:

  • Chart.yaml: metadata about the chart (name, version, description)
  • values.yaml: default configuration values
  • templates/: YAML templates that define Kubernetes manifests (Deployments, Services, Ingress, etc.)
  • charts/: optional subcharts (dependencies)

Charts let you define reusable and configurable deployments.

Example:
Instead of manually applying 10 YAML files to deploy a web app, you can use:

helm install myapp ./mychart

Helm will generate all necessary Kubernetes objects (pods, services, ingress, etc.) from your chart.
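
To illustrate the templating idea, here is a simplified, hypothetical snippet of a templates/deployment.yaml; Helm substitutes the {{ .Values.* }} placeholders with entries from values.yaml (or your -f overrides) at install time:

# templates/deployment.yaml (simplified, hypothetical chart)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-web
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-web
    spec:
      containers:
        - name: web
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - containerPort: {{ .Values.service.port }}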

โš™๏ธ Key Features

  • Packaging: Bundle all Kubernetes manifests into one deployable "chart."
  • Templating: Use variables and conditions for flexible deployments.
  • Versioning & rollback: Keep release history; roll back instantly if needed.
  • Dependency management: Automatically install required subcharts.
  • Repositories: Store and share charts via public or private registries (like Docker Hub, but for Helm).
  • Upgrade management: Apply changes safely with helm upgrade.

🚀 Why Use Helm

  • Simplifies deploying complex, multi-service applications (e.g., WordPress, FlowiseAI, Prometheus, Grafana).
  • Supports DevOps automation and CI/CD pipelines.
  • Enables consistent deployments across multiple environments (dev, staging, production).
  • Makes it easy to share your application as a chart to other users or customers.

Prerequisites

Make sure:

  • You have an Ubuntu 20.04 or 22.04 VPS with sudo access.
  • Enough resources: for production, the Flowise docs recommend two main servers with roughly 4 vCPU and 8 GB RAM each, plus similarly sized workers.
  • You have (or will set up) a Kubernetes cluster; a single node is fine for testing, multi-node for production.
  • kubectl installed and configured to talk to your cluster.
  • Helm v3 installed. The Flowise Helm chart lists Helm >= 3.9 and Kubernetes >= 1.24 as prerequisites.
  • A domain or at least ability to expose a service (Ingress/LoadBalancer) if you want external access.
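
Before proceeding, you can quickly confirm the tooling prerequisites from the list above:

kubectl version
kubectl get nodes
helm version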


How to Deploy Flowise + Kubernetes + Helm on Ubuntu VPS

To deploy Flowise + Kubernetes + Helm on an Ubuntu VPS, follow the steps outlined below:

  1. Set up Kubernetes on Ubuntu VPS

    If you don't already have a K8s cluster, you can set up a simple one-node cluster for testing. Example with kubeadm (adjust for your needs):

    # Add the Kubernetes apt repository (pkgs.k8s.io) and its signing key
    sudo apt update && sudo apt install -y apt-transport-https ca-certificates curl gpg
    sudo mkdir -p /etc/apt/keyrings
    curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
    echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
    sudo apt update

    # Disable swap (required by kubelet) and open SSH plus the Kubernetes API server port
    sudo swapoff -a
    sudo ufw allow 22,6443/tcp
    sudo ufw enable

    # Install a container runtime (Docker Engine ships with containerd)
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
    echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list
    sudo apt update
    sudo apt install -y docker-ce
    sudo systemctl enable --now docker

    # Enable containerd's CRI plugin (disabled by default in Docker's containerd package)
    sudo sed -i 's/^disabled_plugins = \["cri"\]/disabled_plugins = []/' /etc/containerd/config.toml
    sudo systemctl restart containerd

    # Install kubeadm, kubelet, and kubectl, then initialize the control plane
    sudo apt install -y kubelet kubeadm kubectl
    sudo apt-mark hold kubelet kubeadm kubectl
    sudo kubeadm init --pod-network-cidr=10.244.0.0/16

    Then set up your user's kubeconfig:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    Install a pod network (e.g., Flannel):

    kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

    (For production, you may choose a more robust CNI and a multi-node cluster.) Once the cluster is up, ensure kubectl get nodes shows the node as Ready. On a single-node cluster you also need to allow workloads on the control-plane node, as shown below.
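
    By default, kubeadm taints the control-plane node so ordinary pods are not scheduled on it. On a one-node test cluster, remove that taint so Flowise can run on the same machine:

    kubectl taint nodes --all node-role.kubernetes.io/control-plane-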

  2. Install Helm

    On your Ubuntu VPS (or whichever machine you'll manage Helm from):

    curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
    helm version

    You should see Helm v3.x. We'll add the necessary Helm repository in the next step.

  3. Add the Flowise Helm chart repository

    According to the Helm chart metadata, the chart is available under the cowboysysop repo (among others).
    Run:

    helm repo add cowboysysop https://cowboysysop.github.io/charts/
    helm repo update

    You can verify by:

    helm search repo cowboysysop/flowise

    You should see the chart details (available versions, etc.).
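
    You can also dump the chart's default values to see exactly which settings are available to override:

    helm show values cowboysysop/flowise > flowise-defaults.yaml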

  4. Create a namespace & override values for production

    It's good practice to isolate Flowise in its own namespace.

    kubectl create namespace flowise
    

    Then prepare a values.yaml to override the defaults for your deployment. Create a file flowise-values.yaml with contents such as:

    replicaCount: 1

    image:
      repository: flowiseai/flowise
      tag: latest

    # Database settings: for production you should use PostgreSQL (not SQLite)
    database:
      type: postgres
      host: ""            # your PostgreSQL host
      port: 5432
      username: flowise
      password: ""        # your PostgreSQL password
      database: flowisedb

    persistence:
      enabled: true
      storageClass: ""    # your storage class
      accessModes:
        - ReadWriteOnce
      size: 10Gi

    ingress:
      enabled: true
      hosts:
        - host: flowise.example.com
          paths: ["/"]
      tls:
        - secretName: flowise-tls
          hosts:
            - flowise.example.com

    # Optional resources
    resources:
      requests:
        cpu: "500m"
        memory: "1Gi"
      limits:
        cpu: "1"
        memory: "2Gi"

    Notes:

    • The official docs recommend using PostgreSQL instead of SQLite for production.
    • Enable persistence so your flows survive pod restarts.
    • Configure an Ingress (or LoadBalancer) if you want external access.
    • Adjust resource requests/limits based on your VPS.
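
    Before installing, you can render the chart with your overrides as a dry run to catch configuration mistakes early (nothing is deployed):

    helm install flowise cowboysysop/flowise \
      --namespace flowise \
      -f flowise-values.yaml \
      --dry-run --debug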
  5. Install the Helm chart

    With your namespace and values in place, run:

    helm install flowise cowboysysop/flowise \
      --namespace flowise \
      -f flowise-values.yaml

    This will deploy Flowise into your cluster. You can check status:

    kubectl get pods -n flowise
    kubectl get svc -n flowise

    Wait until the pods are in the Running state and the service is available.

  6. Verify external access

    If you set up Ingress with a domain (e.g., flowise.example.com), ensure DNS points to your LoadBalancer/Ingress IP, then visit https://flowise.example.com (or http://, depending on your setup) in your browser. You should see the Flowise UI. If you didn't set up Ingress and only need internal access, you can port-forward:

    kubectl port-forward svc/flowise -n flowise 3000:3000

    Then open http://localhost:3000.
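
    To find the external address your DNS record should point to, inspect the Ingress and Service resources:

    kubectl get ingress -n flowise
    kubectl get svc -n flowise -o wide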

  7. Post-deployment configuration & production tweaks

    • Set the admin username/password in the Flowise UI, via environment variables, or via Helm values (depending on the chart).
    • Enable persistent storage for files (logs, uploads). See Flowise docs on Storage.
    • For high-traffic / production: consider multiple replicas, worker nodes, autoscaling.
    • Configure backups for your PostgreSQL database (a minimal CronJob sketch follows this list).
    • Secure your cluster: enable RBAC, network policies, TLS.
    • Monitor logs via kubectl logs or integrate with a logging stack.
    • Set up proper resource limits, liveness/readiness probes (many Helm charts include these).
    • Pin image tags (rather than "latest") to avoid unexpected changes.
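
    As a sketch of the database-backup idea, a Kubernetes CronJob can run pg_dump nightly. The PostgreSQL service name flowise-postgresql and the claim flowise-backup-pvc below are assumptions; point them at your actual database service and a PVC you create for backups:

    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: flowise-db-backup
      namespace: flowise
    spec:
      schedule: "0 3 * * *"            # every day at 03:00
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: OnFailure
              containers:
                - name: pg-dump
                  image: postgres:15
                  env:
                    - name: PGPASSWORD
                      valueFrom:
                        secretKeyRef:
                          name: flowise-db-secret
                          key: postgres-password
                  command: ["/bin/sh", "-c"]
                  args:
                    - pg_dump -h flowise-postgresql -U flowise flowisedb > /backup/flowisedb-$(date +%F).sql
                  volumeMounts:
                    - name: backup
                      mountPath: /backup
              volumes:
                - name: backup
                  persistentVolumeClaim:
                    claimName: flowise-backup-pvc   # assumed pre-created backup PVC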
  8. Upgrade / maintain the deployment

    When a new version of the chart is released:

    helm repo update
    helm upgrade flowise cowboysysop/flowise \
      --namespace flowise -f flowise-values.yaml

    Follow the chart's CHANGELOG and ensure cluster compatibility (your Kubernetes version must meet the chart's minimum).
    Also test on staging before rolling out to production.
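
    Because Helm keeps the release history, you can roll back quickly if an upgrade misbehaves:

    helm history flowise -n flowise
    helm rollback flowise 1 -n flowise   # roll back to revision 1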

  9. (Optional) Scaling / Workers / Queue mode

    Flowise supports a "Queue mode" for production (separate queue workers, etc.).
    You may configure additional worker pods in your values.yaml, e.g.:

    worker:
      enabled: true
      replicaCount: 2
      resources:
        requests:
          cpu: "1"
          memory: "2Gi"
        limits:
          cpu: "2"
          memory: "4Gi"

    Check the chart documentation for the exact values supported. Also make sure you have the required message-queue infrastructure (Flowise's queue mode is backed by Redis) and configure it accordingly.

  10. Clean-up (if needed)

    If you ever want to remove the deployment:

    helm uninstall flowise --namespace flowise
    kubectl delete namespace flowise

    Also delete any persistent volumes, storage classes, and databases you created for this setup.
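
    Depending on the reclaim policy of your StorageClass, PersistentVolumes may outlive the namespace. List what remains and remove anything you no longer need:

    kubectl get pv                      # look for volumes left over from the flowise namespace
    # kubectl delete pv <volume-name>   # placeholder: delete leftovers explicitly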

Summary

You now have a deployment plan:

  • Set up Kubernetes → Install Helm → Add the Flowise chart repo → Create a namespace + values override → Install the chart → Configure and access → Maintain & scale.
    This setup gives you a self-hosted Flowise on your Ubuntu VPS + K8s cluster, managed via Helm.

Here's a production-ready values.yaml template for deploying FlowiseAI on Kubernetes via Helm, tuned for a typical Ubuntu VPS or small cluster.

It includes configuration for PostgreSQL, persistent storage, Ingress with TLS, environment variables, autoscaling, and worker queue mode.

# values.yaml for FlowiseAI (Helm)
# =====================================
# Customize values before installing with:
#   helm install flowise cowboysysop/flowise -n flowise -f values.yaml

replicaCount: 1

image:
  repository: flowiseai/flowise
  tag: "latest"      # use a fixed version for production (e.g. "2.0.1")
  pullPolicy: IfNotPresent

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  create: true
  annotations: {}
  name: ""

podAnnotations: {}
podLabels: {}
podSecurityContext: {}
securityContext: {}

service:
  type: ClusterIP
  port: 3000

# --------------------------------
# Persistent Storage Configuration
# --------------------------------
persistence:
  enabled: true
  existingClaim: ""
  storageClass: "standard"   # use your storage class name
  accessModes:
    - ReadWriteOnce
  size: 10Gi

# --------------------------------
# Environment Variables / App Config
# --------------------------------
env:
  - name: FLOWISE_USERNAME
    value: "admin"
  - name: FLOWISE_PASSWORD
    value: "ChangeMe123!"
  - name: FLOWISE_PORT
    value: "3000"
  - name: FLOWISE_FILE_SIZE_LIMIT
    value: "20mb"
  # Database (PostgreSQL recommended for production)
  - name: DATABASE_TYPE
    value: "postgres"
  - name: DATABASE_HOST
    value: "postgresql.default.svc.cluster.local"
  - name: DATABASE_PORT
    value: "5432"
  - name: DATABASE_USER
    valueFrom:
      secretKeyRef:
        name: flowise-db-secret
        key: postgres-username
  - name: DATABASE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: flowise-db-secret
        key: postgres-password
  - name: DATABASE_NAME
    value: "flowisedb"
  # Optional base URL (helpful when running behind a reverse proxy)
  - name: FLOWISE_BASE_URL
    value: "https://flowise.example.com"

# --------------------------------
# PostgreSQL External / Subchart Configuration
# --------------------------------
postgresql:
  enabled: true
  image:
    tag: "15"
  auth:
    username: flowise
    password: ChangeMePG!
    database: flowisedb
  primary:
    persistence:
      enabled: true
      size: 5Gi

# --------------------------------
# Ingress Configuration
# --------------------------------
ingress:
  enabled: true
  className: "nginx"
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
  hosts:
    - host: flowise.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: flowise-tls
      hosts:
        - flowise.example.com

# --------------------------------
# Resources / Limits
# --------------------------------
resources:
  requests:
    cpu: "500m"
    memory: "1Gi"
  limits:
    cpu: "1"
    memory: "2Gi"

# --------------------------------
# Autoscaling (HorizontalPodAutoscaler)
# --------------------------------
autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 3
  targetCPUUtilizationPercentage: 80
  targetMemoryUtilizationPercentage: 80

# --------------------------------
# Worker Queue Mode (Optional)
# --------------------------------
worker:
  enabled: false
  replicaCount: 2
  resources:
    requests:
      cpu: "1"
      memory: "2Gi"
    limits:
      cpu: "2"
      memory: "4Gi"

# --------------------------------
# Node / Scheduling Preferences
# --------------------------------
nodeSelector: {}
tolerations: []
affinity: {}

๐Ÿ” Create a Secret for PostgreSQL credentials

Before installing, create the secret referenced in the values.yaml (create the flowise namespace first if it does not exist yet):

kubectl create secret generic flowise-db-secret \
--namespace flowise \
--from-literal=postgres-username=flowise \
--from-literal=postgres-password='ChangeMePG!'
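
You can confirm the secret exists (and decode a key to double-check the stored value) with:

kubectl get secret flowise-db-secret -n flowise
kubectl get secret flowise-db-secret -n flowise -o jsonpath='{.data.postgres-username}' | base64 -d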

🚀 Deploy Flowise with Helm

helm repo add cowboysysop https://cowboysysop.github.io/charts/
helm repo update
kubectl create namespace flowise
helm install flowise cowboysysop/flowise -n flowise -f values.yaml

Check progress:

kubectl get pods -n flowise
kubectl get svc -n flowise

Once pods are running and Ingress is configured, access via:

https://flowise.example.com

Single-Node Cluster

Below is a ready-to-deploy values-k3s.yaml designed for MicroK8s or k3s single-node Kubernetes environments on an Ubuntu VPS.

It uses the cluster's built-in local-path storage (microk8s-hostpath on MicroK8s), NodePort networking, and SQLite persistence for a smooth, self-contained FlowiseAI deployment.

🧩 values-k3s.yaml

# FlowiseAI Helm configuration for single-node k3s or MicroK8s clusters
# Tested on Ubuntu 22.04 LTS + k3s v1.30 / microk8s v1.30

replicaCount: 1

image:
  repository: flowiseai/flowise
  tag: "latest"
  pullPolicy: IfNotPresent

serviceAccount:
  create: true

service:
  type: NodePort
  port: 3000
  nodePort: 31000   # accessible at http://<your-server-ip>:31000

# --------------------------------
# Storage
# --------------------------------
# k3s provides "local-path" as the default storageClass
# (on MicroK8s, use "microk8s-hostpath" instead)
persistence:
  enabled: true
  storageClass: "local-path"
  accessModes:
    - ReadWriteOnce
  size: 5Gi
  annotations: {}
  finalizers:
    - kubernetes.io/pvc-protection

# --------------------------------
# Environment Variables
# --------------------------------
# SQLite for simplicity on single-node deployments
env:
  - name: FLOWISE_USERNAME
    value: "admin"
  - name: FLOWISE_PASSWORD
    value: "ChangeMe123!"
  - name: FLOWISE_PORT
    value: "3000"
  - name: FLOWISE_FILE_SIZE_LIMIT
    value: "20mb"
  - name: DATABASE_TYPE
    value: "sqlite"
  - name: DATABASE_PATH
    value: "/root/.flowise/database.sqlite"
  - name: FLOWISE_BASE_URL
    value: "http://<your-server-ip>:31000"  # optional but recommended for API consistency

# --------------------------------
# Resource Allocation
# --------------------------------
resources:
  requests:
    cpu: "500m"
    memory: "512Mi"
  limits:
    cpu: "1"
    memory: "1Gi"

# --------------------------------
# Simplify for single-node
# --------------------------------
autoscaling:
  enabled: false
worker:
  enabled: false
ingress:
  enabled: false
nodeSelector: {}
tolerations: []
affinity: {}

🚀 Deployment Instructions

  1. Install k3s or MicroK8s

    Option A: k3s (lightweight Kubernetes):

    curl -sfL https://get.k3s.io | sh -
    sudo k3s kubectl get nodes

    Option B: MicroK8s (Canonical):

    sudo snap install microk8s --classic
    sudo microk8s status --wait-ready
    sudo microk8s enable dns hostpath-storage
    alias kubectl='microk8s kubectl'
  2. Install Helm

    curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
    helm version
  3. Deploy Flowise

    kubectl create namespace flowise
    helm repo add cowboysysop https://cowboysysop.github.io/charts/
    helm repo update
    helm install flowise cowboysysop/flowise -n flowise -f values-k3s.yaml

    Wait until the pod is running:

    kubectl get pods -n flowise
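
    Alternatively, you can block until the pod reports Ready. The label selector below assumes the standard Helm-generated app.kubernetes.io/instance label; adjust it if your chart labels differ:

    kubectl wait pod -n flowise \
      -l app.kubernetes.io/instance=flowise \
      --for=condition=Ready --timeout=300s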
  4. Access Flowise

    Once ready, access:

    http://<your-server-ip>:31000

    Login with:

    Username: admin
    Password: ChangeMe123!
  5. 🧰 (Optional) Enable HTTPS Reverse Proxy with Caddy

    sudo apt install caddy -y
    sudo nano /etc/caddy/Caddyfile

    Add:

    flowise.example.com {
        reverse_proxy 127.0.0.1:31000
    }

    Then:

    sudo systemctl restart caddy

    Your FlowiseAI instance will be available securely at:

    https://flowise.example.com
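
    You can verify that Caddy obtained a certificate and is proxying correctly:

    curl -I https://flowise.example.com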


Conclusion

You now know how to deploy Flowise + Kubernetes + Helm on an Ubuntu VPS.


Editorial Staff

Rad Web Hosting is a leading provider of web hosting, Cloud VPS, and Dedicated Servers in Dallas, TX.