I use my Raspberry Pi 4 as a home media server running Plex on Ubuntu Desktop. I use a large flash drive for the content, which means I need to log in every time the machine reboots. I started running into a consistent issue where every login would be met with heavy screen distortion.
I had a difficult time searching for just what this could be, and I took a few different approaches with mixed results:
- Installed a different window manager, like xfce4 or kde-plasma.
- Created a new user; `jeremy` was the only one on the system.
- Tried the approaches outlined in https://devicetests.com/fixing-xrdp-black-screen-issue-ubuntu, including the installation script.
- Reinstalled `xrdp` on a whim.

The solution I ended up going with was outlined in https://forums.raspberrypi.com/viewtopic.php?t=358088, and it works flawlessly for my current user, just as things were before whatever recent update broke them.
1. Run `sudo nano /etc/X11/xrdp/xorg.conf`.
2. Find the `Option "DRMDevice"` line and change it to `Option "DRMDevice" ""`.
3. Reboot the Raspberry Pi.
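If you'd rather not open nano, the same edit can be scripted; a rough sketch (untested, assuming the stock config path from the forum post):

```bash
# Back up the config, blank out the DRMDevice option as described above,
# then reboot.
sudo cp /etc/X11/xrdp/xorg.conf /etc/X11/xrdp/xorg.conf.bak
sudo sed -i 's|Option "DRMDevice".*|Option "DRMDevice" ""|' /etc/X11/xrdp/xorg.conf
sudo reboot
```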
This is one of the reasons I am not a fan of Linux on the desktop, even in 2024, nor do I think it passes the "grandma test." It was damn near impossible to search for this reliably, and it took me multiple days of searching over a few weeks. Grandma may have that kind of time because she's retired, but I'm not; I need the shit to just work(tm). The Linux desktop experience has come a really long way from when I started working with it, though.
I returned to a somewhat older Phoenix v1.6 application I had written that utilized `direnv` to load environment variables from a `.env` file. Coming from Laravel, I'm very used to this workflow, and I found that using something like `direnv` to inject the variables was better than the hacks I had been using at the time.
I started using `asdf` as a version-manager-of-all-trades before moving over to `rtx`. I liked its ergonomics, and I had yet to run into an issue where I thought `asdf` was the better choice until now.
The issue I was having was that, for whatever reason, `direnv` wasn't executing upon entering the directory as I had been used to. Running `direnv status` showed output similar to the following:
direnv exec path /Users/jbrayton/.local/share/rtx/installs/direnv/2.32.2/bin/direnv
DIRENV_CONFIG /Users/jbrayton/.config/direnv
bash_path /usr/local/bin/bash
disable_stdin false
warn_timeout 5s
whitelist.prefix []
whitelist.exact map[]
No .envrc or .env loaded
There was more included, but the key line to focus on was `No .envrc or .env loaded`.
I ran through a couple of steps to try to figure out what was going on. I had found on direnv's website that for the `fish` shell I likely needed to wire `direnv hook fish | source` into my generic `~/.config/fish/config.fish` file.
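For reference, the hook itself is the standard one from direnv's setup docs:

```fish
# ~/.config/fish/config.fish
# Evaluate direnv's fish hook so .envrc files load automatically on cd.
direnv hook fish | source
```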
To do that, I installed `direnv` via Homebrew, because previously using it with `rtx` meant that even my global usage wasn't global, or I was holding it wrong(tm).
What I was unaware of at the time was that `rtx` had been renamed: when I went to update it, I saw that Homebrew had changed the name to `mise`, but the `mise` command wasn't found. After running `brew install mise`, I was able to see the following migration output:
migrating /Users/jbrayton/.local/share/rtx/installs/elixir to /Users/jbrayton/.local/share/mise/installs
migrated rtx directories to mise
see https://mise.jdx.dev/rtx.html
migrating /Users/jbrayton/.config/rtx to /Users/jbrayton/.config/mise
migrated rtx directories to mise
see https://mise.jdx.dev/rtx.html
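One follow-up the migration output doesn't cover, and this is an assumption on my part based on mise's docs: if your shell config still activates rtx, that line needs to change too.

```fish
# ~/.config/fish/config.fish
# Previously: rtx activate fish | source
mise activate fish | source
```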
I'm making this post primarily for my own benefit, though I seriously doubt I would ever run into this again on this machine or another.
It's possible someone else may see similar weirdness with one of the `rtx` plugins or something similar. As far as I could tell, `rtx` was working flawlessly in all my other projects except this one instance, but it turned out that `direnv` was broken for my entire system; the other projects that used it weren't working either. If you see some weirdness with `rtx` and you haven't migrated, performing the migration may help you move forward like it did for me.
It's also worth noting that the migration doesn't copy your installs; I have 10GB of data in my old installs directory that I'll need to prune.
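Something like this should reclaim the space once you've confirmed everything works under mise (a sketch; the paths come from the migration output above):

```bash
# Check what's still there, then remove the old rtx directories.
du -sh ~/.local/share/rtx ~/.config/rtx
rm -rf ~/.local/share/rtx ~/.config/rtx
```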
Upon testing some new functionality locally, I noticed that I was getting my favorite error, `502 Bad Gateway`. I had a look at the error logs in `~/.config/valet/Log/nginx-error.log` and found this grouping of errors:
2023/11/17 13:42:09 [error] 89485#0: *1 connect() to unix:/Users/jbrayton/.config/valet/valet.sock failed (61: Connection refused) while connecting to upstream, client: 127.0.0.1, server: stripe-sync.test, request: "GET / HTTP/2.0", upstream: "fastcgi://unix:/Users/jbrayton/.config/valet/valet.sock:", host: "stripe-sync.test"
2023/11/17 13:43:21 [error] 89483#0: *12 connect() to unix:/Users/jbrayton/.config/valet/valet.sock failed (61: Connection refused) while connecting to upstream, client: 127.0.0.1, server: stripe-sync.test, request: "GET / HTTP/2.0", upstream: "fastcgi://unix:/Users/jbrayton/.config/valet/valet.sock:", host: "stripe-sync.test"
2023/11/17 13:45:54 [error] 89476#0: *44 connect() to unix:/Users/jbrayton/.config/valet/valet.sock failed (61: Connection refused) while connecting to upstream, client: 127.0.0.1, server: stripe-sync.test, request: "GET / HTTP/2.0", upstream: "fastcgi://unix:/Users/jbrayton/.config/valet/valet.sock:", host: "stripe-sync.test"
2023/11/17 13:51:39 [error] 89471#0: *49 connect() to unix:/Users/jbrayton/.config/valet/valet.sock failed (61: Connection refused) while connecting to upstream, client: 127.0.0.1, server: larajobs-menubar.test, request: "GET / HTTP/2.0", upstream: "fastcgi://unix:/Users/jbrayton/.config/valet/valet.sock:", host: "larajobs-menubar.test"
I performed a generic Google search for `nginx "*12 connect()" to unix`. Nothing hit me directly, but it did highlight Datadog's excellent primer, NGINX 502 Bad Gateway: PHP-FPM.
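Before the fix below, a couple of generic first checks for this class of error (my own habit, not from the article): confirm the socket exists and that any PHP-FPM process is actually running to accept connections on it.

```bash
# nginx says "connection refused" on this socket, so see if it exists and
# whether a php-fpm master/worker is alive behind it.
ls -l ~/.config/valet/valet.sock
pgrep -fl php-fpm
```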
I ran `valet links` to get a list of other sites to try and see whether this was isolated to one PHP version, and it was: PHP 8.2 was the only one having problems.
I had tried `valet restart` and `valet restart php`, but that didn't help.
I wanted to check the status of the services to make sure they were running, and we can do that via `valet status`:
Checking status...
Valet status: Error
+--------------------------------------+----------+
| Check | Success? |
+--------------------------------------+----------+
| Is Valet fully installed? | Yes |
| Is Valet config valid? | Yes |
| Is Homebrew installed? | Yes |
| Is DnsMasq installed? | Yes |
| Is Dnsmasq running? | Yes |
| Is Dnsmasq running as root? | Yes |
| Is Nginx installed? | Yes |
| Is Nginx running? | Yes |
| Is Nginx running as root? | Yes |
| Is PHP installed? | Yes |
| Is linked PHP (php) running? | No |
| Is linked PHP (php) running as root? | No |
| Is valet.sock present? | Yes |
+--------------------------------------+----------+
Debug suggestions:
Run `valet restart`.
Uninstall PHP with Brew and run `valet use php@8.2`
I ended up using `valet use php@8.2 --force`:
⋊> brew uninstall php
Error: Refusing to uninstall /usr/local/Cellar/php/8.2.11
because it is required by composer, php-cs-fixer and wp-cli, which are currently installed.
You can override this and force removal with:
brew uninstall --ignore-dependencies php
⋊> valet use php@8.2
Valet is already using version: php@8.2. To re-link and re-configure use the --force parameter.
⋊> valet use php@8.2 --force
Unlinking current version: php
Linking new version: php@8.2
Stopping phpfpm...
Stopping php@7.2...
Stopping php@8.1...
Installing and configuring phpfpm...
Updating PHP configuration for php@8.2...
Restarting php@8.1...
Restarting php@7.1...
Restarting php@7.4...
Restarting php@7.2...
Restarting php@7.3...
Restarting php...
Restarting nginx...
Valet is now using php@8.2.
Note that you might need to run composer global update if your PHP version change affects the dependencies of global packages required by Composer.
The stop messages not aligning with the restarts is particularly interesting, and likely part of what I was dealing with; I hadn't tried the sites tied to each version independently.
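If I hit this again, I'd lean on Valet's site isolation commands to test each PHP version independently (assuming Valet 3 here):

```bash
# List every linked site, then the sites isolated to a specific PHP version,
# so you can hit one site per version and narrow down which FPM is broken.
valet links
valet isolated
```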
I've learned a few things using Livebook that took a minute to uncover, so I wanted a place to keep it all together. I'll list them as a table of contents and explain further in each section, with links to my stumbling through issues and PRs.
- With `asdf` or `rtx`, the Livebook version will be bound to your global or local instance, depending on the directory.
- The Docker image opens in `./data`, so if you use relative pathing in your notebooks or apps for storing files, you either have to infer this or change the `WORKDIR` in Docker.

I spent maybe 20-30 minutes installing Livebook 0.9.3 and running it for my collection of notebooks.
I would execute the command `livebook server index.livemd` in the directory, which ran Elixir `1.14.4` at the time of discovery. When I upgraded Livebook, I was outside this directory, so the upgrade went to the global version, `1.14.5`.
Because `rtx` or `asdf` completely isolates versions, I wasn't aware it was executing the 0.9.2 build with Elixir `1.14.4`.
Changing `.tool-versions` by using `rtx use elixir@1.14.5-otp-25` brought my local environment up to the global version, and the Livebook version was what I expected.
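A quick way to catch this mismatch before launching (a sketch using rtx's own commands; mise behaves the same):

```bash
# Show which Elixir the current directory resolves to; an unexpected version
# here means an unexpected Livebook escript as well.
rtx current elixir
elixir --version
```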
I suspect this may bite me and others consistently if we're not careful. I'm sure I've had similar issues in the past that I did not equate to this at the time.
A ton has changed since January 2022, but I opened this issue, which spurred a subsequent pull request. This work has slowly morphed into what is being called "ElixirKit" and is in the elixirkit/ directory in the repository. ElixirKit has a different surface area than Elixir Desktop, which is somewhat of a wrapper or orchestrator for wxWidgets in Elixir.
In the past, when this was a Dock icon only, it was less intuitive to reopen the browser window after closing it; I could do so by clicking the Dock icon (see this issue for history). Now the app stays resident in the menu bar, and we can reopen windows or view logs to see the hidden console window. The view-logs option is perfect for getting the auth token again. The Dock version also had issues fully closing the runtime, but that was ironed out <3.
I use the free version of the Vanilla app. If Livebook is on the always-visible right side, the icon will display whenever I close and reopen the desktop app. If it's on the default left side, I must quit Vanilla for the icon to appear. The long-term fix of keeping it visible is enough for me, but it may bite other people who use similar menu bar applications.
This issue spells out what I saw and may be helpful. I kept trying to run the app again, not realizing it was already there in the menu bar where I just couldn't see it.
I had issues understanding how to work with notebooks or public apps on Huggingface. When I used something like `data = "./data/file-sorted.csv"` where the notebook-relative `data/` directory existed, I would get problems like `** (File.Error) could not open "./data/file-sorted.csv": no such file or directory`.
My fix at the time was for apps to use something like `data = System.get_env("PWD")` to get the current working directory. This change posed a problem locally because my relative `data/` directory is in `.gitignore`. Using `PWD` would save the data in the current directory, which would not fall under the `.gitignore` rules.
The long-term fix, which you can see here, was to include `WORKDIR "/notebooks"` at the end of the file. `WORKDIR` tells Docker the working directory, so `/data` is now the place for only Livebook configuration, and any relative paths will work as expected.
There are other parts to this `Dockerfile`, such as pulling down my notebooks and the ash_tutorial so that I could work on Huggingface with things like Whisper or other ML models. Adding git or FFmpeg as dependencies is trivial. I explicitly copy the `public-apps` directory from my repository so that the Huggingface repo stays pure and only cares about setting up the environment.
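Pieced together from the description above, the tail of the Dockerfile looks roughly like this (a sketch; the copy destination is my assumption):

```dockerfile
# Copy only the public apps from my repository, then make /notebooks the
# working directory so relative paths resolve there instead of /data.
COPY public-apps /notebooks/public-apps
WORKDIR "/notebooks"
```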
One of my goals is to flesh out a system where I can work on notebooks remotely and periodically synchronize changes up to the repository. That's why I've included `git-sync`, but I haven't worked out how to leverage it yet.
A public or private app could leverage `Kino.start_child/1` to start a GenServer that watches the filesystem for changes and presents a UI to commit and push them. I believe something like egit could handle the git side; I would need to create the UI for it. I'd certainly take that approach over shelling out to the git CLI, though that's not extraordinarily difficult either.
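A minimal sketch of the idea, with module and names of my own invention; it polls git rather than truly watching the filesystem, and prints instead of rendering a real Kino UI:

```elixir
# Hypothetical watcher started from a notebook cell via Kino.start_child/1.
defmodule NotebookSync.Watcher do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts)

  @impl true
  def init(opts) do
    repo = Keyword.fetch!(opts, :repo_path)
    schedule_poll()
    {:ok, %{repo: repo}}
  end

  @impl true
  def handle_info(:poll, state) do
    # Shelling out to git for the sketch; egit could replace this call.
    {out, 0} = System.cmd("git", ["status", "--porcelain"], cd: state.repo)
    if out != "", do: IO.puts("Uncommitted changes in #{state.repo}")
    schedule_poll()
    {:noreply, state}
  end

  defp schedule_poll, do: Process.send_after(self(), :poll, 5_000)
end

# Runs under the notebook's supervision tree, so it dies with the session.
Kino.start_child({NotebookSync.Watcher, repo_path: "."})
```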
I was recently tasked with evaluating Laravel-based and external SSO workflows and stumbled on Keycloak in several places. There is a Socialite provider as well as Supabase auth, so I needed a way to make a quick evaluation. I chose to use Docker, as I didn't want to dive into the world of Java installations. There is extensive documentation, but as with some OSS projects, it can be a firehose at times.
Fortunately, Google came to the rescue with many resources I've included at the bottom of this post for reference.
First, a Docker Compose file to spin up the Keycloak container and Postgres to store its data.
The Docker image needs environment variables set, and the best way I know to do that is through `direnv`, specifically the `asdf` version manager plugin for it.
I lifted this template from this StackOverflow post and surgically altered it for my purposes. I commented out the parts that weren't relevant and stripped away the `backend` and `frontend` services since I no longer needed them.
---
version: "3.8"
services:
  database:
    image: postgres:14
    container_name: database
    environment:
      # add multiple schemas
      # POSTGRES_MULTIPLE_DATABASES: ${DB_DATABASE},${KEYCLOAK_DATABASE}
      POSTGRES_DB: ${DB_DATABASE}
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      # POSTGRES_KEYCLOAK_USER: ${KEYCLOAK_USER}
      # POSTGRES_KEYCLOAK_PASSWORD: ${KEYCLOAK_PASSWORD}
      # POSTGRES_DB2: ${KEYCLOAK_DATABASE}
    hostname: local
    restart: always
    volumes:
      - ./db-data:/var/lib/postgresql/data/
      - ./sql:/docker-entrypoint-initdb.d/:ro
      # - ./sql/access_attempt.sql:/docker-entrypoint-initdb.d/A.sql
      # - ./sql/bceid.sql:/docker-entrypoint-initdb.d/B.sql
      # - ./sql/lookup_activitytype.sql:/docker-entrypoint-initdb.d/C.sql
      # - ./sql/lookup_gender_pronoun.sql:/docker-entrypoint-initdb.d/D.sql
      # - ./sql/client.sql:/docker-entrypoint-initdb.d/E.sql
    ports:
      - "5439:5432"
    networks:
      - db-keycloak
  keycloak:
    image: quay.io/keycloak/keycloak:21.0.1
    command: ["start-dev"]
    container_name: keycloak
    environment:
      DB_VENDOR: ${DB_VENDOR}
      DB_ADDR: database
      DB_PORT: 5432
      DB_SCHEMA: public
      DB_DATABASE: ${DB_DATABASE}
      DB_USER: ${DB_USER}
      DB_PASSWORD: ${DB_PASSWORD}
      KEYCLOAK_USER: ${KEYCLOAK_USER}
      KEYCLOAK_PASSWORD: ${KEYCLOAK_PASSWORD}
      KEYCLOAK_ADMIN: ${KEYCLOAK_ADMIN}
      KEYCLOAK_ADMIN_PASSWORD: ${KEYCLOAK_ADMIN_PASSWORD}
      KC_PROXY_MODE: edge
      # compose requires string values in the environment map, so quote these
      KC_METRICS_ENABLED: "true"
      KC_HTTP_ENABLED: "true"
    ports:
      - "8089:8080"
      - "8443:8443"
    depends_on:
      - database
    restart: always
    links:
      - database
    networks:
      - db-keycloak
networks:
  db-keycloak:
    driver: bridge
This `.envrc` sets the environment variables used by both Postgres and Keycloak:
APP_DOMAIN="localhost"
DB_VENDOR="postgres"
DB_DATABASE="keycloak"
DB_USER="keycloak"
DB_PASSWORD="keycloak"
KEYCLOAK_USER="developer"
KEYCLOAK_PASSWORD="developer"
KEYCLOAK_ADMIN="admin"
KEYCLOAK_ADMIN_PASSWORD="admin"
KC_DB="postgres"
KC_DB_URL="jdbc:postgresql://database/keycloak"
# KC_HOSTNAME_FRONTEND_URL=""
# KC_HOSTNAME_ADMIN_URL=""
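Note that direnv refuses to load a new `.envrc` until you approve it:

```bash
# From the project directory, approve the .envrc so direnv exports it on cd.
direnv allow .
```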
- Run `docker compose up` to run the containers in interactive mode. The `db-data` directory should fill up with files and directories.
- Run `docker compose up -d` to run your containers in the background.
- Run `docker-compose down --rmi all` to completely clean up all containers.
- Change `command: ["start-dev"]` to start Keycloak in the other modes. This is necessary, as the entrypoint isn't specific enough.

It may be useful to create an optimized Keycloak image, but that wasn't necessary for my purposes.
To use a custom image, modify the image line from `image: quay.io/keycloak/keycloak:21.0.1` to `image: keycloak-custom:latest`, then build it with `docker build -t keycloak-custom:latest .`.
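I didn't get as far as writing one, but per Keycloak's container guide a custom optimized image is roughly this (a sketch; a production `start` would also need hostname/TLS options):

```dockerfile
# Sketch of keycloak-custom, following Keycloak's optimized-image docs.
FROM quay.io/keycloak/keycloak:21.0.1
# Bake build-time options (like the database vendor) into the image.
ENV KC_DB=postgres
RUN /opt/keycloak/bin/kc.sh build
ENTRYPOINT ["/opt/keycloak/bin/kc.sh"]
CMD ["start", "--optimized"]
```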