Raspberry Pi 5 Ubuntu Server

Raspberry Pi Hardware Build & Costs

  • Raspberry Pi 5 8GB
  • 27W USB-C Official Power Supply
  • Pineberry HatDrive Bottom
  • Active Cooler
  • Rogueware 512GB PCIe 3.0 NVMe
  1. Install cooler

  2. Insert the “General RPI” 16GB SD card, just to get it booted

  3. Flash Ubuntu 24.04 to the SSD using the USB-C adapter, following https://ubuntu.com/download/raspberry-pi with Raspberry Pi Imager

    1. Use a username (4-5 letter cosmic words)

    🚨 Tried to install without the preconfigured Wi-Fi network, but could not get LAN to work
    Screenshot_2024-08-16_at_22.16.43.png

<details>
<summary>Configuration settings</summary>

Screenshot_2024-08-16_at_22.06.46.png

Screenshot_2024-08-16_at_18.20.47.png

Screenshot_2024-08-16_at_18.20.55.png

</details>

  1. Connect SSD to PI with the HatDrive
  2. Power it on.
  3. Connect the Mini HDMI Cable
  4. Connect a keyboard
  5. If it magically boots from the NVMe:
    1. Login with your username and password set in Configuration Settings
    2. Read https://www.jeffgeerling.com/blog/2023/nvme-ssd-boot-raspberry-pi-5
    3. run lsblk to check if the nvme is connected
    4. Enable the external PCI Express port
      1. cd all the way up (out of home) and run sudo nano /boot/firmware/config.txt
    5. Set NVMe early in the boot order (see the sketch after this list)
  6. Plug in the LAN cable for internet and run sudo ip link set eth0 up
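
A sketch of both changes, based on the Jeff Geerling post linked above (treat the exact values as assumptions to verify against your firmware version):

# add to /boot/firmware/config.txt to enable the external PCIe connector
dtparam=pciex1
# optional and unofficial: force PCIe gen 3 for faster NVMe speeds
dtparam=pciex1_gen=3

# put NVMe early in the boot order via the bootloader EEPROM
sudo rpi-eeprom-config --edit
# then set, for example: BOOT_ORDER=0xf416 (NVMe first, then SD, then USB)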

Local IP: 192.168.30.74

Hostname through the Cloudflare Tunnel:

Config:

Host rpi-home-tunnel
  HostName rpi.webeng.co
  User webeng
  ProxyCommand cloudflared access ssh --hostname %h

Host rpi-lunar-tunnel
  HostName rpi-lunar.webeng.co
  User webeng-lunar
  ProxyCommand cloudflared access ssh --hostname %h

Host koos-tunnel
  HostName koos-ubuntu.webeng.co
  User webeng-koos
  ProxyCommand cloudflared access ssh --hostname %h
  • Raspberry Pi at Northoaks

    ssh rpi-home-tunnel
    
  • Koos at Northoaks

    ssh koos-tunnel
    
sudo apt update
sudo apt upgrade

sudo apt install docker.io

sudo systemctl start docker
sudo systemctl enable docker
sudo usermod -aG docker $USER
sudo apt install docker-compose
sudo reboot
sudo apt install net-tools
sudo apt install wireless-tools

# install zsh and set as default shell
sudo apt install zsh
sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"



sudo apt install isc-dhcp-client
sudo apt install speedtest-cli

Customizing Oh My Zsh

  • Edit ~/.zshrc file to:

    • Change themes
    • Enable plugins
    • Set aliases
    • Customize prompt
    # add below "source $ZSH/oh-my-zsh.sh":
    
    PROMPT='%{$fg[green]%}➜%{$reset_color%} %n@%M %{$fg[cyan]%}%c%{$reset_color%} $(git_prompt_info) '
    
    #then run:
    source ~/.zshrc
    
    

    Screenshot_2024-08-04_at_17.20.04.png
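
    For example, a minimal set of ~/.zshrc tweaks (robbyrussell and the git/docker plugins ship with Oh My Zsh; the alias is just an illustration):

    ZSH_THEME="robbyrussell"
    plugins=(git docker)
    alias ll='ls -alh'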

Key Features of Oh My Zsh

  • Numerous built-in plugins (git, docker, npm, etc.)
  • Auto-completion
  • Syntax highlighting (with additional plugin)
  • Git status information in prompt

Networking: Prioritizing Ethernet over Wi-Fi on Ubuntu Server for Raspberry Pi

Overview

This guide outlines the process of configuring Ubuntu Server on a Raspberry Pi to prioritize Ethernet (LAN) connection over Wi-Fi.

Steps

  1. **Disable cloud-init’s network configuration:** Open the file below, add the single line shown, then save and exit the editor.

    sudo nano /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
    
    network: {config: disabled}
    
  2. **Check current network interfaces:** Note the names of your Ethernet (eth0) and Wi-Fi (wlan0) interfaces.

    ip a
    
  3. Create a new Netplan configuration file:

    sudo nano /etc/netplan/01-netcfg.yaml
    
  4. **Add the following configuration:** Replace "Your_WiFi_SSID" and "Your_WiFi_Password" with your actual Wi-Fi details.

    network:
      version: 2
      renderer: networkd
      ethernets:
        eth0:
          dhcp4: true
          optional: true
          dhcp4-overrides:
            route-metric: 100
      wifis:
        wlan0:
          dhcp4: true
          optional: true
          dhcp4-overrides:
            route-metric: 600
          access-points:
            "Your_WiFi_SSID":
              password: "Your_WiFi_Password"
    
    
  5. Set correct file permissions:

    sudo chmod 600 /etc/netplan/01-netcfg.yaml
    
  6. Apply changes:

    sudo netplan apply
    
  7. Reboot the Raspberry Pi:

    sudo reboot
    

Verification

After rebooting, use these commands to verify the changes:

  • Check active connections: ip route
  • Check IP addresses: ip addr

The Ethernet connection (eth0) should be listed first in the routing table when connected.
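
An illustrative `ip route` result after the change (gateway and addresses are assumptions; the route with the lower metric wins):

default via 192.168.30.1 dev eth0 proto dhcp src 192.168.30.74 metric 100
default via 192.168.30.1 dev wlan0 proto dhcp metric 600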

Disable Wi-Fi and enable Ethernet

Here are the detailed steps to achieve this:

  1. Bring Up the Ethernet Interface: First, bring up the eth0 interface and configure it to use DHCP to obtain an IP address.

    sudo ifconfig eth0 up
    sudo dhclient eth0
    
  2. Check the Status of the Ethernet Interface: Verify that the eth0 interface is now up and has an IP address.

    ip a
    

    You should see eth0 with a state of UP and an assigned IP address.

  3. Disable the WiFi Interface: To ensure the system uses Ethernet for network connectivity, disable the WiFi interface wlan0.

    sudo ifconfig wlan0 down
    
  4. Verify the Network Configuration: Check the routing table to ensure that the default route is now via eth0.

    ip route show
    

    The default via line should indicate that eth0 is being used.

    Testing speeds from Synology to RPI

    ssh [email protected]
    cd /volume1/Family/Downloads
    scp Taylor.Swift.The.Eras.Tour.2023.Extended.2160p.AMZN.WEB-DL.DDP5.1.Atmos.H.265-FLUX[TGx].mkv [email protected]:/home/webeng
    

    Screenshot_2024-08-04_at_19.25.45.png

    Also: 20 GB at 48.8 MB/s, 06:56

Swapfile

To configure your Raspberry Pi’s swap file to use the NVMe storage in Ubuntu Server 24.04, follow these steps:

  1. Create a Swap File on NVMe: Since the root filesystem lives on the NVMe partition /dev/nvme0n1p2 (465.3G), we’ll create a swap file on that filesystem.

    First, decide how much swap space you want to allocate. For example, to create a 4GB swap file:

    Open a terminal and follow these commands:

    sudo fallocate -l 4G /mnt/nvme_swap
    
  2. Set the Correct Permissions: Make sure the swap file has the correct permissions:

    sudo chmod 600 /mnt/nvme_swap
    
  3. Make the Swap File: Use the mkswap command to designate the file as swap:

    sudo mkswap /mnt/nvme_swap
    
  4. Enable the Swap: Activate the swap file:

    sudo swapon /mnt/nvme_swap
    
  5. Make Swap Permanent: To ensure the swap file is used after every reboot, add it to the /etc/fstab file.

    Open /etc/fstab in a text editor:

    sudo nano /etc/fstab
    

    Add the following line to the end of the file:

    /mnt/nvme_swap none swap sw 0 0
    
  6. Verify the Swap: After setting up the swap, check to see if the swap is active:

    sudo swapon --show
    

    You should see /mnt/nvme_swap in the output with the size you allocated.

  7. Adjust Swappiness (Optional): By default, vm.swappiness is 60, which makes the kernel fairly willing to swap. You can adjust how aggressively the system uses swap by modifying the swappiness parameter.

    Open /etc/sysctl.conf:

    sudo nano /etc/sysctl.conf
    

    Add or modify the following line to set the swappiness (where a lower value means the system will use swap less):

    vm.swappiness=10
    

    Apply the changes:

    sudo sysctl -p
    

After these steps, your Raspberry Pi will start using the NVMe storage as swap when it runs out of RAM.
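
A quick sanity check after a reboot (both are standard commands):

free -h
cat /proc/sys/vm/swappiness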

Docker Compose

If you have issues with installing docker compose, try this:

The required Docker packages may not be available in the default Ubuntu repository. To resolve this, add Docker’s official GPG key and the Docker repository. Here are the steps to install Docker on your Raspberry Pi:

sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

Add Docker’s official GPG key:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

Set up the Docker stable repository:

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

Install Docker Engine and Docker Compose:

sudo apt-get install docker-ce docker-ce-cli containerd.io

Download and install Docker Compose:

sudo curl -L "https://github.com/docker/compose/releases/download/$(curl -s https://api.github.com/repos/docker/compose/releases/latest | grep -Po '"tag_name": "\K.*?(?=")')/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
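
To verify the installation (standard version checks):

docker --version
docker-compose --version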

added Hugo’s ssh key to authorized_keys

webeng@raspberrypi:~$ sudo nano .ssh/authorized_keys

Github

Generated ssh key and added it to my personal profile

➜ webeng@raspberrypi docuseal git:(ctle-branding)  ssh-keygen -t rsa -b 4096 -C "rpi.webeng.co"
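
Then print the public key to paste into the GitHub profile (default key path assumed):

cat ~/.ssh/id_rsa.pub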

Backups

Automation with lifecycle rules is an important aspect of a robust backup strategy. The script below backs up the Ubuntu Raspberry Pi to both the NAS and Google Drive, with automated lifecycle management.


```shell
#!/bin/bash

# Configuration
SOURCE_DEVICE="/dev/mmcblk0"  # Raspberry Pi SD card
BACKUP_NAME="ubuntu_rpi_backup"
NAS_MOUNT="/mnt/nas_backup"
NAS_BACKUP_DIR="${NAS_MOUNT}/rpi_backups"
GOOGLE_DRIVE_REMOTE="gdrive:"
GOOGLE_DRIVE_BACKUP_DIR="rpi_backups"
MAX_BACKUPS=5  # Number of backups to keep

# Create full disk image
create_disk_image() {
    sudo dd if=$SOURCE_DEVICE of="${NAS_BACKUP_DIR}/${BACKUP_NAME}_$(date +%Y%m%d).img" bs=4M status=progress
}

# Sync to NAS using rsync
sync_to_nas() {
    rsync -avz --delete /home/ ${NAS_BACKUP_DIR}/home_backup/
    rsync -avz --delete /etc/ ${NAS_BACKUP_DIR}/etc_backup/
}

# Sync NAS to Google Drive using rclone
sync_to_gdrive() {
    rclone sync ${NAS_BACKUP_DIR} ${GOOGLE_DRIVE_REMOTE}${GOOGLE_DRIVE_BACKUP_DIR} --progress
}

# Apply lifecycle rules
apply_lifecycle_rules() {
    # Remove old backups from NAS
    cd ${NAS_BACKUP_DIR}
    ls -t ${BACKUP_NAME}*.img | tail -n +$((MAX_BACKUPS + 1)) | xargs rm -f

    # Remove old backups from Google Drive
    rclone delete ${GOOGLE_DRIVE_REMOTE}${GOOGLE_DRIVE_BACKUP_DIR} --min-age 30d
}

# Main execution
main() {
    create_disk_image
    sync_to_nas
    sync_to_gdrive
    apply_lifecycle_rules
}

# Run the main function
main

# Add to crontab to run weekly:
# 0 2 * * 0 /path/to/this/script.sh

```


This script provides an automated solution for backing up your Ubuntu Raspberry Pi installation to both a NAS and Google Drive, incorporating lifecycle rules. Here's a breakdown of the script:

1. Configuration: Set up variables for devices, directories, and backup settings.
2. Full disk image: Uses `dd` to create a complete image of the SD card.
3. Incremental backup: Uses `rsync` to sync important directories to the NAS.
4. Google Drive sync: Uses `rclone` to sync the NAS backup to Google Drive.
5. Lifecycle management:
	- Keeps only the last 5 full disk images on the NAS.
	- Deletes backups older than 30 days from Google Drive.
6. Automation: Includes a comment for adding the script to crontab for weekly execution.

To use this script:

1. Save it to a file (e.g., `rpi_backup.sh`) and make it executable:

	```text
	chmod +x rpi_backup.sh
	
	```

2. Install necessary tools:

	```text
	sudo apt-get install rsync rclone
	
	```

3. Configure rclone for Google Drive access:

	```text
	rclone config
	
	```

4. Adjust the configuration variables in the script to match your setup.
5. Test the script manually first:

	```text
	sudo ./rpi_backup.sh
	
	```

6. Once confirmed working, add it to crontab for automatic weekly execution:

	```text
	crontab -e
	
	```


	Add the line:


	```text
	0 2 * * 0 /path/to/rpi_backup.sh
	
	```


This setup will create a full disk image and incremental backups of important directories, sync them to your NAS and Google Drive, and manage the lifecycle of your backups automatically.



Test:

A /hello-world directory containing an HTML file and a Dockerfile

Dockerfile.txt

index.html

docker build -t hello-world .
docker run -d --restart=always -p 1234:80 --name hello-world-container hello-world
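
A quick check that the container responds on the mapped port:

curl -I http://localhost:1234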

Run cloudflared with the correct token using this command:
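
The command used later in these notes (the tunnel token is a placeholder here):

docker run -d --restart=always --network="host" --name cloudflared cloudflare/cloudflared:latest tunnel --no-autoupdate run --token <TUNNEL_TOKEN>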

Access through the Cloudflare Tunnel

You need to have cloudflared installed on your local machine to be able to access.
On Mac:

brew install cloudflared

On Windows: Use WSL

wget -q https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb
sudo dpkg -i cloudflared-linux-amd64.deb

For both Mac and WSL

Then add the following to your ~/.ssh/config file to route the host through cloudflared:

Host rpi.webeng.co
  ProxyCommand cloudflared access ssh --hostname %h

Using Remote Explorer to log in through the tunnel:

Host rpi-home
  HostName rpi.webeng.co
  User webeng
  ProxyCommand cloudflared access ssh --hostname %h

Penpot

For testing, run a docker container

Super simple and easy

Followed https://help.penpot.app/technical-guide/getting-started/#install-with-docker

Asked why export isn’t documented: https://github.com/penpot/penpot/issues/4978

Kimai


For testing, run a docker container

Same quick Docker approach as Penpot above.

For production:

ssh [email protected]
mkdir kimai && cd kimai
touch docker-compose.yml && touch .env

# configure compose & .env...

docker compose up -d
docker compose down

<details>
<summary>Docker-compose</summary>

version: '3.5'
services:

  sqldb:
    image: mysql:8.3
    volumes:
      - mysql:/var/lib/mysql
    environment:
      - MYSQL_DATABASE=${MYSQL_DATABASE}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
    command: --default-storage-engine innodb
    restart: unless-stopped
    healthcheck:
      test: mysqladmin -p$$MYSQL_ROOT_PASSWORD ping -h localhost
      interval: 20s
      start_period: 10s
      timeout: 10s
      retries: 3

  kimai:
    image: kimai/kimai2:apache
    volumes:
      - data:/opt/kimai/var/data
    ports:
      - 8001:8001
    environment:
      - ADMINMAIL=${ADMINMAIL}
      - ADMINPASS=${ADMINPASS}
      - DATABASE_URL=${DATABASE_URL}
      - TRUSTED_HOSTS=${TRUSTED_HOSTS}
    restart: unless-stopped
  
  mailer:
    image: schickling/mailcatcher
    ports:
      - "${MAILER_SMTP_PORT:-1025}:1025"
      - "${MAILER_ADMIN_PORT:-1080}:1080"

volumes:
  data:
  mysql:

</details>
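
A sample .env with illustrative placeholder values (every value below is an assumption; check the kimai/kimai2 image docs for the exact DATABASE_URL format):

MYSQL_DATABASE=kimai
MYSQL_USER=kimaiuser
MYSQL_PASSWORD=changeme
MYSQL_ROOT_PASSWORD=changeme-root
[email protected]
ADMINPASS=changeme-admin
DATABASE_URL=mysql://kimaiuser:changeme@sqldb/kimai?charset=utf8mb4
TRUSTED_HOSTS=localhost,kimai.webeng.co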

Note: when using Cloudflare, be sure to set a cache bypass rule at
https://dash.cloudflare.com/24e078d0c9dd906e7674716e52348d71/webeng.co/caching/cache-rules/3629950f48c6454ca17dfef2def171ef

Emails:
https://www.kimai.org/documentation/emails.html

# in .env, the SES_SMTP_PASSWORD must be URL encoded
MAILER_URL=smtps://${SES_SMTP_USERNAME}:${SES_SMTP_PASSWORD}@email-smtp.${AWS_REGION}.amazonaws.com:465
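
The SES password can be URL encoded with a quick one-liner (the password shown is a placeholder):

python3 -c 'import urllib.parse,sys; print(urllib.parse.quote(sys.argv[1], safe=""))' 'p@ss/word'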

See logs with docker-compose logs kimai

Mattermost

mkdir mattermost

https://docs.mattermost.com/install/install-docker.html

in .env, we changed:

  • DOMAIN=mm.webeng.co

! mattermost The requested image’s platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested 0.0s

Alternatively, I used this repo to install on arm64 architecture: https://github.com/remiheens/mattermost-docker-arm

Had to update the docker-compose file according to the pull request https://github.com/remiheens/mattermost-docker-arm/pull/49

TODO: Set up all other environment configs

version: "3"

services:
  db:
    image: rheens/mattermost-db:v9.8.1
    build:
      context: db
    restart: unless-stopped
    volumes:
      - ./volumes/db/var/lib/postgresql/data:/var/lib/postgresql/data
      - /etc/localtime:/etc/localtime:ro
    env_file:
      - run.env
  app:
    image: rheens/mattermost-app:v9.8.1
    ports:
      - 8065:8000
    build:
      context: app
      args:
        - MM_VERSION=v9.8.1
        - GOOS=linux
        - GOARCH=arm64
    restart: unless-stopped
    volumes:
      - ./volumes/app/mattermost/config:/mattermost/config:rw
      - ./volumes/app/mattermost/data:/mattermost/data:rw
      - ./volumes/app/mattermost/logs:/mattermost/logs:rw
      - ./volumes/app/mattermost/plugins:/mattermost/plugins:rw
      - ./volumes/app/mattermost/client-plugins:/mattermost/client/plugins:rw
      - /etc/localtime:/etc/localtime:ro
    env_file:
      - run.env

in the run.env file, we’ve added:

MM_SERVICESETTINGS_SITEURL=https://mm.webeng.co

MM_USERNAME=admin
MM_PASSWORD=admin
MM_DBNAME=mattermost
##### Optionals ####
# DB_HOST=db
# DB_PORT_NUMBER=5432
# DB_USE_SSL=disable

MM_EMAILSETTINGS_ENABLEEMAIL=true
MM_EMAILSETTINGS_SENDEMAILNOTIFICATIONS=true
MM_EMAILSETTINGS_REQUIREEMAILVERIFICATION=true
MM_EMAILSETTINGS_FEEDBACKNAME="Web.Eng MM"
[email protected]
MM_EMAILSETTINGS_SMTPUSERNAME=redacted
MM_EMAILSETTINGS_SMTPPASSWORD="redacted"
MM_EMAILSETTINGS_SMTPSERVER=email-smtp.eu-west-1.amazonaws.com
MM_EMAILSETTINGS_SMTPPORT=465
MM_EMAILSETTINGS_CONNECTIONSECURITY=TLS
MM_EMAILSETTINGS_SMTPSERVERTIMEOUT=10
MM_EMAILSETTINGS_ENABLESMTPAUTH=true
MM_EMAILSETTINGS_SKIPSERVERCERTIFICATEVERIFICATION=false


MM_GOOGLESETTINGS_ENABLE=true
MM_GOOGLESETTINGS_ID=your_google_client_id
MM_GOOGLESETTINGS_SECRET=your_google_client_secret
MM_GOOGLESETTINGS_SCOPE=profile email
MM_GOOGLESETTINGS_AUTHENDPOINT=https://accounts.google.com/o/oauth2/auth
MM_GOOGLESETTINGS_TOKENENDPOINT=https://oauth2.googleapis.com/token
MM_GOOGLESETTINGS_USERAPIENDPOINT=https://www.googleapis.com/oauth2/v3/userinfo
MM_GOOGLESETTINGS_TEAMNAME=
MM_GOOGLESETTINGS_ALLOWSIGNUP=true

Posthog


! clickhouse The requested image’s platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested

! zookeeper The requested image’s platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested

<details>
<summary>.env</summary>

POSTGRES_DB=posthog
POSTGRES_USER=posthog
POSTGRES_PASSWORD=posthog
SECRET_KEY=your-secret-key
MINIO_ACCESS_KEY=exampleAccessKey123
MINIO_SECRET_KEY=exampleSecretKey456789

</details>

<details>
<summary>docker-compose</summary>

version: '3.8'

services:
  db:
    image: postgres:12-alpine
    restart: always
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres-data:/var/lib/postgresql/data

  redis:
    image: redis:6
    restart: always
    volumes:
      - redis-data:/data

  clickhouse:
    image: yandex/clickhouse-server:latest
    restart: on-failure
    depends_on:
      - kafka
      - zookeeper
    volumes:
      - clickhouse-data:/var/lib/clickhouse

  zookeeper:
    image: wurstmeister/zookeeper:latest
    restart: always
    volumes:
      - zookeeper-data:/data
      - zookeeper-datalog:/datalog

  kafka:
    image: wurstmeister/kafka:latest
    restart: always
    depends_on:
      - zookeeper
    environment:
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LOG_RETENTION_MS: 3600000
      KAFKA_LOG_RETENTION_CHECK_INTERVAL_MS: 300000
      KAFKA_LOG_RETENTION_HOURS: 1
    volumes:
      - kafka-data:/kafka

  web:
    image: posthog/posthog:latest
    restart: always
    command: /bin/bash -c "bin/docker/entrypoint.sh start-web"
    volumes:
      - ./posthog:/app
    environment:
      DATABASE_URL: postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db:5432/${POSTGRES_DB}
      REDIS_URL: redis://redis:6379
      CLICKHOUSE_HOST: clickhouse
      CLICKHOUSE_DATABASE: default
      CLICKHOUSE_USER: default
      CLICKHOUSE_PASSWORD: ''
      KAFKA_HOSTS: kafka:9092
      SECRET_KEY: ${SECRET_KEY}
    ports:
      - "2841:8000"
    depends_on:
      - db
      - redis
      - clickhouse
      - kafka

  worker:
    image: posthog/posthog:latest
    restart: always
    command: /bin/bash -c "bin/docker/entrypoint.sh start-worker"
    volumes:
      - ./posthog:/app
    environment:
      DATABASE_URL: postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db:5432/${POSTGRES_DB}
      REDIS_URL: redis://redis:6379
      CLICKHOUSE_HOST: clickhouse
      CLICKHOUSE_DATABASE: default
      CLICKHOUSE_USER: default
      CLICKHOUSE_PASSWORD: ''
      KAFKA_HOSTS: kafka:9092
      SECRET_KEY: ${SECRET_KEY}
    depends_on:
      - db
      - redis
      - clickhouse
      - kafka

  plugins:
    image: posthog/posthog:latest
    restart: always
    command: /bin/bash -c "bin/docker/entrypoint.sh start-plugins"
    volumes:
      - ./posthog:/app
    environment:
      DATABASE_URL: postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db:5432/${POSTGRES_DB}
      REDIS_URL: redis://redis:6379
      CLICKHOUSE_HOST: clickhouse
      CLICKHOUSE_DATABASE: default
      CLICKHOUSE_USER: default
      CLICKHOUSE_PASSWORD: ''
      KAFKA_HOSTS: kafka:9092
      SECRET_KEY: ${SECRET_KEY}
    depends_on:
      - db
      - redis
      - clickhouse
      - kafka

  objectstorage:
    image: minio/minio:latest
    restart: on-failure
    volumes:
      - objectstorage:/data
    environment:
      MINIO_ACCESS_KEY: ${MINIO_ACCESS_KEY}
      MINIO_SECRET_KEY: ${MINIO_SECRET_KEY}
    command: server /data
    ports:
      - "19000:9000"

volumes:
  zookeeper-data:
  zookeeper-datalog:
  kafka-data:
  postgres-data:
  clickhouse-data:
  redis-data:
  objectstorage:

</details>

They are making it extremely difficult to test locally, so I’m going to try Matomo instead.

I should try the FOSS version on GitHub at https://github.com/PostHog/posthog-foss

Docs at https://posthog.com/docs/self-host

What worked:

Given the SSL requirements, I tried running the install script on vultr-apps. I pointed an A record at the server and it worked.

I then realised that Untitled and Untitled can’t be on the same server, probably due to port/service conflicts

Got it to run on localhost, with the following challenges:

docker-compose.yaml:

💡 Updating https://posthog.com/docs/self-host#upgrading
We are using Cloudflare Tunnel, so we need to remove Caddy and the requirement for a static IP.

Be careful when running the update script, as it overwrites the docker-compose file.

When updates are run, carefully compare the docker-compose-current.yaml and docker-compose.yaml files for changes.

Use diffchecker.com to check which changes have been made. Add the new .yaml file on the left, and the -current.yaml version of the file on the right.

Then merge only the latest changes from left to right, keeping the standard localhost and env vars the same as below.

Once done, use docker compose up -d to run everything again.

Screenshot_2024-10-09_at_08.04.36.png

#
# `docker-compose` file used ONLY for hobby deployments.
#
# Please take a look at https://posthog.com/docs/self-host/deploy/hobby
# for more info.
#
# PostHog has sunset support for self-hosted K8s deployments.
# See: https://posthog.com/blog/sunsetting-helm-support-posthog
#

services:
    db:
        extends:
            file: docker-compose.base.yml
            service: db
        # Pin to postgres 12 until we have a process for pg_upgrade to postgres 15 for existing installations
        image: ${DOCKER_REGISTRY_PREFIX:-}postgres:12-alpine
        volumes:
            - postgres-data:/var/lib/postgresql/data

    redis:
        extends:
            file: docker-compose.base.yml
            service: redis
        volumes:
            - redis-data:/data

    redis7:
        extends:
            file: docker-compose.base.yml
            service: redis7
        volumes:
            - redis7-data:/data

    clickhouse:
        #
        # Note: please keep the default version in sync across
        #       `posthog` and the `charts-clickhouse` repos
        #
        extends:
            file: docker-compose.base.yml
            service: clickhouse
        restart: on-failure
        depends_on:
            - kafka
            - zookeeper
        volumes:
            - ./posthog/posthog/idl:/idl
            - ./posthog/docker/clickhouse/docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d
            - ./posthog/docker/clickhouse/config.xml:/etc/clickhouse-server/config.xml
            - ./posthog/docker/clickhouse/users.xml:/etc/clickhouse-server/users.xml
            - clickhouse-data:/var/lib/clickhouse
    zookeeper:
        extends:
            file: docker-compose.base.yml
            service: zookeeper
        volumes:
            - zookeeper-datalog:/datalog
            - zookeeper-data:/data
            - zookeeper-logs:/logs
    kafka:
        extends:
            file: docker-compose.base.yml
            service: kafka
        depends_on:
            - zookeeper
        environment:
            KAFKA_LOG_RETENTION_MS: 3600000
            KAFKA_LOG_RETENTION_CHECK_INTERVAL_MS: 300000
            KAFKA_LOG_RETENTION_HOURS: 1
            KAFKA_CFG_MESSAGE_MAX_BYTES: 67108864         # 64 MB
            KAFKA_CFG_REPLICA_FETCH_MAX_BYTES: 67108864   # 64 MB
            KAFKA_CFG_MAX_PARTITION_FETCH_BYTES: 67108864 # 64 MB
            KAFKA_CFG_MAX_REQUEST_SIZE: 67108864          # 64 MB                 
        volumes:
            - kafka-data:/bitnami/kafka

    worker:
        extends:
            file: docker-compose.base.yml
            service: worker
        environment:
            SENTRY_DSN: 'https://[email protected]/1'
            SITE_URL: http://localhost:8341
            SECRET_KEY: 183625bfb7de41e44ffaa10d4e5e856f59728da45f574dfa43d30d1a
            OBJECT_STORAGE_ACCESS_KEY_ID: 'object_storage_root_user'
            OBJECT_STORAGE_SECRET_ACCESS_KEY: 'object_storage_root_password'
            OBJECT_STORAGE_ENDPOINT: http://objectstorage:19000
            OBJECT_STORAGE_ENABLED: true
        image: posthog/posthog:latest

    web:
        extends:
            file: docker-compose.base.yml
            service: web
        command: /compose/start
        volumes:
            - ./compose:/compose
        image: posthog/posthog:latest
        environment:
            SENTRY_DSN: 'https://[email protected]/1'
            SITE_URL: http://localhost:8341
            SECRET_KEY: 183625bfb7de41e44ffaa10d4e5e856f59728da45f574dfa43d30d1a
            OBJECT_STORAGE_ACCESS_KEY_ID: 'object_storage_root_user'
            OBJECT_STORAGE_SECRET_ACCESS_KEY: 'object_storage_root_password'
            OBJECT_STORAGE_ENDPOINT: http://objectstorage:19000
            OBJECT_STORAGE_ENABLED: true
        depends_on:
            - db
            - redis
            - clickhouse
            - kafka
            - objectstorage

    plugins:
        extends:
            file: docker-compose.base.yml
            service: plugins
        image: posthog/posthog:latest
        environment:
            SENTRY_DSN: 'https://[email protected]/1'
            SITE_URL: http://localhost:8341
            SECRET_KEY: 183625bfb7de41e44ffaa10d4e5e856f59728da45f574dfa43d30d1a
            OBJECT_STORAGE_ACCESS_KEY_ID: 'object_storage_root_user'
            OBJECT_STORAGE_SECRET_ACCESS_KEY: 'object_storage_root_password'
            OBJECT_STORAGE_ENDPOINT: http://objectstorage:19000
            OBJECT_STORAGE_ENABLED: true
            CDP_REDIS_HOST: redis7
            CDP_REDIS_PORT: 6379
        depends_on:
            - db
            - redis
            - redis7
            - clickhouse
            - kafka
            - objectstorage
 #  caddy:
 #      image: caddy:2.6.1
 #      restart: unless-stopped
 #      ports:
 #           - '80:80'
 #           - '443:443'
 #      volumes:
 #           - ./Caddyfile:/etc/caddy/Caddyfile
 #           - caddy-data:/data
 #           - caddy-config:/config
 #       depends_on:
 #           - web
    objectstorage:
        extends:
            file: docker-compose.base.yml
            service: objectstorage
        restart: on-failure
        volumes:
            - objectstorage:/data
        ports:
            - '19000:19000'
            - '19001:19001'

    asyncmigrationscheck:
        extends:
            file: docker-compose.base.yml
            service: asyncmigrationscheck
        image: posthog/posthog:latest
        environment:
            SENTRY_DSN: 'https://[email protected]/1'
            SITE_URL: http://localhost:8341
            SECRET_KEY: 183625bfb7de41e44ffaa10d4e5e856f59728da45f574dfa43d30d1a
            SKIP_ASYNC_MIGRATIONS_SETUP: 0

    # Temporal containers
    temporal:
        extends:
            file: docker-compose.base.yml
            service: temporal
        environment:
            - ENABLE_ES=false
        ports:
            - 7233:7233
        volumes:
            - ./posthog/docker/temporal/dynamicconfig:/etc/temporal/config/dynamicconfig
    elasticsearch:
        extends:
            file: docker-compose.base.yml
            service: elasticsearch
    temporal-admin-tools:
        extends:
            file: docker-compose.base.yml
            service: temporal-admin-tools
        depends_on:
            - temporal
    temporal-ui:
        extends:
            file: docker-compose.base.yml
            service: temporal-ui
        ports:
            - 8081:8080
        depends_on:
            temporal:
                condition: service_started
            db:
                condition: service_healthy
    temporal-django-worker:
        command: /compose/temporal-django-worker
        extends:
            file: docker-compose.base.yml
            service: temporal-django-worker
        volumes:
            - ./compose:/compose
        image: posthog/posthog:latest
        environment:
            SENTRY_DSN: 'https://[email protected]/1'
            SITE_URL: http://localhost:8341
            SECRET_KEY: 183625bfb7de41e44ffaa10d4e5e856f59728da45f574dfa43d30d1a
        depends_on:
            - db
            - redis
            - clickhouse
            - kafka
            - objectstorage
            - temporal

volumes:
    zookeeper-data:
    zookeeper-datalog:
    zookeeper-logs:
    objectstorage:
    postgres-data:
    clickhouse-data:
    #caddy-data:
    #caddy-config:
    redis-data:
    redis7-data:
    kafka-data:

Screenshot_2024-09-08_at_21.00.22.png

    web:
        extends:
            file: docker-compose.base.yml
            service: web
        command: /compose/start
        volumes:
            - ./compose:/compose
        image: posthog/posthog:latest
        ports:
            - "8341:8000"

Screenshot_2024-09-08_at_21.04.47.png

Posthog Plugin Server not working:

https://github.com/PostHog/posthog/issues/24417

I added those fields as thedoctor suggested in that issue.

Posthog is not fully working after upgrade

After running the update script, I found that the docker compose files all get overwritten, so I’ve created .current copies of the docker-compose.yaml and docker-compose.base.yaml files. Furthermore, I installed Portainer on koos to be able to check the logs of each docker container and inspect them if necessary.

I found that sessions were no longer recording, and I struggled to figure out why until I saw this: https://posthog-koos.webeng.co/instance/status

Plugin server alive was No

So I found that the encryption salt keys added in the new docker compose file weren’t present. I also saw that the salt keys are hardcoded, so I moved them to the .env file.

We have to be careful when running updates, as this overwrites the compose files. It’s also a good idea to add version control for our own docker compose files.

Message too large Error


This was solved with:

services:
    ...
    kafka:
        extends:
            file: docker-compose.base.yml
            service: kafka
        depends_on:
            - zookeeper
        environment:
            KAFKA_LOG_RETENTION_MS: 3600000
            KAFKA_LOG_RETENTION_CHECK_INTERVAL_MS: 300000
            KAFKA_LOG_RETENTION_HOURS: 1
            KAFKA_CFG_MESSAGE_MAX_BYTES: 67108864         # Added 64MB to avoid "Message too large" error
            KAFKA_CFG_REPLICA_FETCH_MAX_BYTES: 67108864   # Added 64MB to avoid "Message too large" error
            KAFKA_CFG_MAX_PARTITION_FETCH_BYTES: 67108864 # Added 64MB to avoid "Message too large" error
            KAFKA_CFG_MAX_REQUEST_SIZE: 67108864          # Added 64MB to avoid "Message too large" error
        volumes:
            - kafka-data:/bitnami/kafka

    ...
    web:
        extends:
            file: docker-compose.base.yml
            service: web
        command: /compose/start
        volumes:
            - ./compose:/compose
        image: posthog/posthog:latest
        environment:
            ...
            SESSION_RECORDING_KAFKA_MAX_REQUEST_SIZE_BYTES: 67108864 # Added 64MB to avoid "Message too large" error
        depends_on:
            - db
            - redis
            - clickhouse
            - kafka
            - objectstorage
    ...

For more info on the history of this error, go to Posthog Errors

RPI ran out of storage

Posthog used more than 400GB of storage until the RPI was full.

After lots of du digging, it was found that the biggest directory is posthog-foss_objectstorage

sudo -i
cd /var/lib/docker/volumes/posthog-foss_objectstorage/_data/posthog/session_recordings/team_id/1/session_id/
sudo find . -depth -type d -mtime +30 -exec rm -rf {} + 2>/dev/null

This is where the session recordings are saved. It seems like some of this could be uploaded to S3 using MinIO.

After some more digging, Eugene found:

115G	/var/lib/docker/volumes/posthog-foss_clickhouse-data
121G	/var/lib/docker/volumes/posthog-foss_zookeeper-datalog
144G	/var/lib/docker/volumes/posthog-foss_zookeeper-data
find . -type f -name "log.*" -mtime +30 -print
  • find . → searches the current directory.
  • -type f → matches regular files.
  • -name "log.*" → matches names starting with log.
  • -mtime +30 → files older than 30 days.
  • -print → prints the matches.

Then I deleted log.* older than 30 days.
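
The deletion can reuse the same match criteria with find’s -delete flag (run from the volume directory inspected above):

find . -type f -name "log.*" -mtime +30 -delete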

$ du -h
17G	.

Now I could at least start the containers

See https://chatgpt.com/share/676561b4-d858-8012-9936-0839d45e51f3

OLD
objectstorage:
    image: minio/minio:RELEASE.2022-06-25T15-50-16Z
    restart: on-failure
    environment:
        MINIO_ROOT_USER: object_storage_root_user
        MINIO_ROOT_PASSWORD: object_storage_root_password
    entrypoint: sh
    command: -c 'mkdir -p /data/posthog && minio server --address ":19000" --console-address ":19001" /data' # create the 'posthog' bucket before starting the service

NEW
objectstorage:
    image: minio/minio:RELEASE.2022-06-25T15-50-16Z
    restart: on-failure
    environment:
      MINIO_ACCESS_KEY: object_storage_root_user  # Your AWS_ACCESS_KEY_ID
      MINIO_SECRET_KEY: object_storage_root_password  # Your AWS_SECRET_ACCESS_KEY
    command: gateway s3
    ports:
      - "19000:19000"  # MinIO API
      - "19001:19001"  # MinIO Console (optional)

Docuseal for CTLE

  1. Go to your existing IDE project

  2. Update forked main branch from upstream remote

    Screenshot_2024-07-28_at_06.57.38.png

  3. Create a new branch

  4. Update the webeng-branding branch with the latest changes using git pull from master. There will be merge conflicts and a bunch of changes; these are our local changes. Go through each file to check that the changes are only cosmetic, and resolve any conflicts.

git fetch origin
git checkout webeng-branding
git merge origin/master
  1. Before committing, build the image locally and check that it all works. Use the commands below to do that (and see the build sketch after the errors block).
    <details>
    <summary>Errors</summary>

After editing and working with the directory in Mac Finder, we might end up with .DS_Store issues.

Run this command to get rid of all of these files. I added this to the Dockerfile.

find . -name '.DS_Store' -type f -delete

Screenshot_2024-07-28_at_07.22.54.png

</details>
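
A minimal local build-and-run sketch (image tag and port mapping are assumptions; DocuSeal listens on port 3000 by default):

docker build -t docuseal:local .
docker run -p 3000:3000 docuseal:local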

Signoz

Untitled arm64 config

[Dep] Build our own server RPI

eugene@raspberrypi:~ $ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
NAME="Debian GNU/Linux"
VERSION_ID="12"
VERSION="12 (bookworm)"
VERSION_CODENAME=bookworm
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

Step-by-Step Guide to Install Docker on Debian 12 (Bookworm)

  1. Update the package list:

    sudo apt update
    
  2. Install the required packages:

    sudo apt install apt-transport-https ca-certificates curl software-properties-common
    
  3. Add Docker’s official GPG key:

    curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
    
  4. Set up the stable repository:

    echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    
  5. Update the package list again:

    sudo apt update
    
  6. Check if Docker CE is available:

    apt-cache policy docker-ce

    This command should list the available versions of docker-ce. If it doesn’t list any versions, there might be an issue with the repository setup.

  7. Install Docker CE:

    sudo apt install docker-ce docker-ce-cli containerd.io
    
  8. Add your user to the Docker group (so you can run Docker commands without sudo):

sudo usermod -aG docker $USER


Log out and back in so that the group membership is re-evaluated.

Step 4: Run Cloudflare Tunnel on Docker


docker run -d --restart=always --network="host" --name cloudflared cloudflare/cloudflared:latest tunnel --no-autoupdate run --token eyJhIj

Zammad

Followed the Deploy with Docker Compose steps, and all worked well; super easy to get up and running:

In order to set the tickets_updated_since value as per https://github.com/zammad/zammad/commit/8bff308c3cbfb77b160e073621f54eae22a8c508 for the Freshdesk import, I had to log in to the Rails console with

docker compose run --rm zammad-railsserver rails c

I then ran this command

# Inside Rails console:
subdomain = '{your_freshdesk_subdomain}.freshdesk.com'
token = '{your_freshdesk_api_token}'
Setting.set('import_freshdesk_endpoint', "https://#{subdomain}/api/v2")
Setting.set('import_freshdesk_endpoint_key', token)
Setting.set('import_backend', 'freshdesk')
Setting.set('import_mode', true)

job = ImportJob.create(
  name: 'Import::Freshdesk',
  payload: {
    tickets_updated_since: '2024-01-01'
  }
)

AsyncImportJob.perform_later(job)

pp ImportJob.find_by(name: 'Import::Freshdesk')

Check the status

pp ImportJob.find_by(name: 'Import::Freshdesk')

When done, I turned off import mode:

# After migration:
Setting.set('import_mode', false)
Setting.set('system_init_done', true)
Rails.cache.clear

More info here: https://docs.zammad.org/en/latest/migration/freshdesk.html

Couldn’t get emails to send and the reply buttons were hidden, so I tried this: https://community.zammad.org/t/email-configuration/7935/3

Remove or delete an email or ticket

You can do these tasks (deletion of a ticket) via scheduler or console:

https://docs.zammad.org/en/latest/console/dangerzone-for-experts.html
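
For example, a single ticket can be removed from the same Rails console, following the pattern in that doc (ticket ID 123 is a placeholder):

Ticket.find(123).destroy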

Can’t login because of CSRF token errors? https://docs.zammad.org/en/latest/install/docker-compose/environment.html#nginx

Screenshot_2024-09-29_at_12.31.16.png

Monit

https://chatgpt.com/share/1fa63098-4a0c-4281-94ae-465aa11c0df3