I returned to a somewhat older Phoenix v1.6 application I had written that utilized direnv
to load environment variables from a .env
file.
Coming from Laravel, I'm very accustomed to this workflow, and I found that using something like direnv to inject the variables was better than the hacks I had been using at the time.
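For anyone who hasn't used direnv before, the .envrc at the root of the project can be as small as a single call to direnv's dotenv helper, which loads the adjacent .env file:

# .envrc: have direnv load the adjacent .env file
dotenv

With the shell hook wired up (more on that below), direnv evaluates the .envrc and exports those variables every time you cd into the directory.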
I started using asdf as a version-manager-of-all-trades before moving over to rtx. I liked rtx's ergonomics, and until now I had yet to run into an issue where I thought asdf was the better choice.
The issue I was having was that for whatever reason direnv
wasn't executing upon entering the directory as I had been used to.
Running direnv status
showed output similar to the following:
direnv exec path /Users/jbrayton/.local/share/rtx/installs/direnv/2.32.2/bin/direnv
DIRENV_CONFIG /Users/jbrayton/.config/direnv
bash_path /usr/local/bin/bash
disable_stdin false
warn_timeout 5s
whitelist.prefix []
whitelist.exact map[]
No .envrc or .env loaded
There was more included but the key to focus on was No .envrc or .env loaded.
I ran through a couple of steps to try to figure out what was going on. I had found on the direnv website that for the fish shell I likely needed to wire up direnv hook fish | source in my generic ~/.config/fish/config.fish file.
To do that, I installed direnv via Homebrew, because previously using it through rtx meant even my global usage wasn't global, or I was holding it wrong(tm).
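For reference, the wiring itself is just one line at the bottom of the fish config, per the direnv docs:

# ~/.config/fish/config.fish
direnv hook fish | source

Once that's sourced in a new shell, direnv should start evaluating .envrc files again.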
What I was unaware of at the time was that rtx had been renamed. When I went to update rtx, I saw that Homebrew had changed the name to mise, but the mise command wasn't found.
After running brew install mise, I was able to see the following migration output:
migrating /Users/jbrayton/.local/share/rtx/installs/elixir to /Users/jbrayton/.local/share/mise/installs
migrated rtx directories to mise
see https://mise.jdx.dev/rtx.html
migrating /Users/jbrayton/.config/rtx to /Users/jbrayton/.config/mise
migrated rtx directories to mise
see https://mise.jdx.dev/rtx.html
I'm making this post primarily for my own benefit, though I seriously doubt I'll run into this again on this machine or another.
It's possible someone else may see similar weirdness with one of the rtx plugins or something adjacent.
As far as I could tell, rtx was working flawlessly across all my other projects, and this seemed like an isolated instance, but it turned out that direnv was broken for my entire system; the other projects that used it weren't working either.
If you see some weirdness with rtx
and you haven't migrated, performing the migration may help you move forward like it did for me.
It's also worth noting that the migration doesn't copy your installs and I have 10GB of data in my old installs directory that I'll need to prune.
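If you want that space back, checking and eventually clearing the old directory is straightforward; double-check the path before removing anything:

# see how much the old installs are taking up
du -sh ~/.local/share/rtx/installs
# once you're confident mise has everything it needs
rm -rf ~/.local/share/rtx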
Upon testing some new functionality locally, I noticed that I was getting my favorite error, 502 Bad Gateway.
I had a look at the error logs in ~/.config/valet/Log/nginx-error.log
and found this grouping of errors:
2023/11/17 13:42:09 [error] 89485#0: *1 connect() to unix:/Users/jbrayton/.config/valet/valet.sock failed (61: Connection refused) while connecting to upstream, client: 127.0.0.1, server: stripe-sync.test, request: "GET / HTTP/2.0", upstream: "fastcgi://unix:/Users/jbrayton/.config/valet/valet.sock:", host: "stripe-sync.test"
2023/11/17 13:43:21 [error] 89483#0: *12 connect() to unix:/Users/jbrayton/.config/valet/valet.sock failed (61: Connection refused) while connecting to upstream, client: 127.0.0.1, server: stripe-sync.test, request: "GET / HTTP/2.0", upstream: "fastcgi://unix:/Users/jbrayton/.config/valet/valet.sock:", host: "stripe-sync.test"
2023/11/17 13:45:54 [error] 89476#0: *44 connect() to unix:/Users/jbrayton/.config/valet/valet.sock failed (61: Connection refused) while connecting to upstream, client: 127.0.0.1, server: stripe-sync.test, request: "GET / HTTP/2.0", upstream: "fastcgi://unix:/Users/jbrayton/.config/valet/valet.sock:", host: "stripe-sync.test"
2023/11/17 13:51:39 [error] 89471#0: *49 connect() to unix:/Users/jbrayton/.config/valet/valet.sock failed (61: Connection refused) while connecting to upstream, client: 127.0.0.1, server: larajobs-menubar.test, request: "GET / HTTP/2.0", upstream: "fastcgi://unix:/Users/jbrayton/.config/valet/valet.sock:", host: "larajobs-menubar.test"
I performed a generic Google search for nginx "*12 connect()" to unix.
Nothing hit me directly, but it did highlight Datadog's excellent primer, NGINX 502 Bad Gateway: PHP-FPM.
I ran valet links to get a list of other sites to try, to see if this was isolated to a PHP version, and it was: PHP 8.2 was the only one having problems.
I had tried valet restart
and valet restart php
but that didn't help.
I wanted to check the status of the services to make sure they were running, which we can do via valet status:
Checking status...
Valet status: Error
+--------------------------------------+----------+
| Check | Success? |
+--------------------------------------+----------+
| Is Valet fully installed? | Yes |
| Is Valet config valid? | Yes |
| Is Homebrew installed? | Yes |
| Is DnsMasq installed? | Yes |
| Is Dnsmasq running? | Yes |
| Is Dnsmasq running as root? | Yes |
| Is Nginx installed? | Yes |
| Is Nginx running? | Yes |
| Is Nginx running as root? | Yes |
| Is PHP installed? | Yes |
| Is linked PHP (php) running? | No |
| Is linked PHP (php) running as root? | No |
| Is valet.sock present? | Yes |
+--------------------------------------+----------+
Debug suggestions:
Run `valet restart`.
Uninstall PHP with Brew and run `valet use php@8.2`
In the end, I used valet use php@8.2 --force:
⋊> brew uninstall php
Error: Refusing to uninstall /usr/local/Cellar/php/8.2.11
because it is required by composer, php-cs-fixer and wp-cli, which are currently installed.
You can override this and force removal with:
brew uninstall --ignore-dependencies php
⋊> valet use php@8.2
Valet is already using version: php@8.2. To re-link and re-configure use the --force parameter.
⋊> valet use php@8.2 --force
Unlinking current version: php
Linking new version: php@8.2
Stopping phpfpm...
Stopping php@7.2...
Stopping php@8.1...
Installing and configuring phpfpm...
Updating PHP configuration for php@8.2...
Restarting php@8.1...
Restarting php@7.1...
Restarting php@7.4...
Restarting php@7.2...
Restarting php@7.3...
Restarting php...
Restarting nginx...
Valet is now using php@8.2.
Note that you might need to run composer global update if your PHP version change affects the dependencies of global packages required by Composer.
The stop messages not aligning with the restarts is particularly interesting and likely part of what I was dealing with. I hadn't tried sites tied to each version independently.
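If this crops up again, the first thing I'd probably check is what Homebrew thinks is running for PHP, since valet status is ultimately reporting on those services; I didn't capture this at the time, but it's a one-liner:

brew services list | grep php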
I've learned a few things using Livebook that took a minute to uncover, so I wanted a place to keep it all together. I'll list them as a table of contents and explain further in each section, with links to my stumbling through issues and PRs.
- Under asdf or rtx, the Livebook version will be bound to your global or local instance, depending on the directory.
- Opening the Livebook Desktop window again is now done from the menu bar rather than the Dock.
- On Huggingface, the container's default working directory is ./data, so if you use relative pathing in your notebooks or apps for storing files, you either have to infer this or change the WORKDIR in Docker.

I spent maybe 20-30 minutes installing Livebook 0.9.3 and running it for my collection of notebooks.
I would execute the command livebook server index.livemd in that directory, which was running Elixir 1.14.4 at the time of discovery.
When I upgraded Livebook, I was outside this directory, so it picked up the global version, 1.14.5.
Because rtx or asdf completely isolates versions, I wasn't aware it was executing the 0.9.2 build with Elixir 1.14.4.
Changing .tool-versions with rtx use elixir@1.14.5-otp-25 brought my local environment in line with the global one, and the Livebook version was what I expected.
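For completeness, the resulting line in .tool-versions is just the pinned version (any other tools the project uses would sit alongside it):

# .tool-versions
elixir 1.14.5-otp-25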
I suspect this may bite me and others consistently if we're not careful. I'm sure I've had similar issues in the past that I did not equate to this at the time.
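One quick sanity check before starting Livebook is to confirm what a directory will actually resolve to; with rtx that's something like the following (asdf has an equivalent current command):

# show which Elixir version applies in this directory
rtx current elixir
# confirm the runtime that livebook will boot with
elixir --version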
A ton has changed since January 2022, but I opened this issue, which spurred a subsequent pull request. This work has slowly morphed into what is being called "ElixirKit" and is in the elixirkit/ directory in the repository. ElixirKit has a different surface area than Elixir Desktop, which is somewhat of a wrapper or orchestrator for wxWidgets in Elixir.
In the past, when this was a Dock icon only, if you closed the browser window, it was less intuitive for me to open it again. I could reopen the window by clicking the Dock icon (see this issue for history). Now the app stays resident in the Menu Bar, and we can reopen windows or view logs to see the hidden console window. The view logs option is perfect for getting the auth token again. The Dock version also had issues fully closing the runtime, but that was ironed out <3.
I use the free version of the Vanilla app. If Livebook is on the always-visible right side, the icon will display whenever I close and reopen the desktop app. If it's on the default left side, I must quit Vanilla for the icon to appear. The long-term fix of keeping it visible is enough for me, but it may bite other people that use similar menu bar applications.
This issue spells out what I saw and may be helpful. I kept trying to run the app again, not knowing it was already there in the menu bar; I just couldn't see it.
I had issues understanding how to work with notebooks or public apps on Huggingface.
When I used something like data = "./data/file-sorted.csv"
where the notebook-relative data/
directory existed, I would get problems like ** (File.Error) could not open "./data/file-sorted.csv": no such file or directory
.
My fix at the time was for apps to use something like data = System.get_env("PWD")
to get the current working directory.
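In notebook code, that workaround looked something like this (the file name is just the example from above):

# resolve against the process working directory instead of a notebook-relative path
base = System.get_env("PWD") || File.cwd!()
data = Path.join(base, "data/file-sorted.csv")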
This change posed a problem locally because my relative data/
directory is in .gitignore
.
Using PWD would save the data in the current directory, which would not fall under .gitignore rules.
The long-term fix, which you can see here, was to include WORKDIR "/notebooks" at the end of the file.
WORKDIR tells Docker which directory to treat as the working directory, so /data is now only the place for Livebook configuration, and any relative paths will work as expected.
There are other parts to this Dockerfile
, such as pulling down my notebooks and the ash_tutorial so that I could work on Huggingface for things like Whisper or other ML models.
Adding git or FFmpeg as dependencies is trivial. I explicitly copy the public-apps
directory from my repository so that the Huggingface repo stays pure and only cares about setting up the environment.
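Putting those pieces together, the overall shape of the Dockerfile is roughly the following; the base image tag and package list here are illustrative rather than a copy of my exact file:

FROM ghcr.io/livebook-dev/livebook:0.9.3

# extra system dependencies for the notebooks (git for syncing, ffmpeg for audio work)
RUN apt-get update && \
    apt-get install -y --no-install-recommends git ffmpeg && \
    rm -rf /var/lib/apt/lists/*

# copy only the public apps so the Huggingface repo stays environment-only
COPY public-apps /notebooks

# make relative paths resolve against the notebooks instead of /data
WORKDIR "/notebooks"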
One of my goals is to flesh out a system where I can work on notebooks remotely and periodically synchronize changes up to the repository.
That's why I've included git-sync
but I haven't worked out how to leverage it.
A public or private app could leverage Kino.start_child/1 to start a GenServer that watches the filesystem for changes and presents a UI to commit and push changes.
I believe something like egit could do this, though I would need to create the UI for it. I'd certainly take this approach over shelling out to the git CLI, though that's not extraordinarily difficult either.
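A rough sketch of that idea, assuming the file_system package is available in the runtime (for example via Mix.install) and leaving the commit-and-push UI out, might look like this:

# Untested sketch: a supervised watcher that records changed notebook files,
# which a Kino-based UI could later offer to commit and push.
defmodule NotebookWatcher do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  # list the files that have changed since the watcher started
  def changed_files, do: GenServer.call(__MODULE__, :changed_files)

  @impl true
  def init(opts) do
    dir = Keyword.fetch!(opts, :dir)
    {:ok, watcher} = FileSystem.start_link(dirs: [dir])
    FileSystem.subscribe(watcher)
    {:ok, MapSet.new()}
  end

  @impl true
  def handle_info({:file_event, _watcher, {path, _events}}, changed) do
    {:noreply, MapSet.put(changed, path)}
  end

  def handle_info({:file_event, _watcher, :stop}, changed), do: {:noreply, changed}

  @impl true
  def handle_call(:changed_files, _from, changed) do
    {:reply, MapSet.to_list(changed), changed}
  end
end

# Start it under the notebook runtime's supervision tree.
Kino.start_child({NotebookWatcher, dir: "/notebooks"})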
I was recently tasked with evaluating Laravel-based and external SSO workflows and stumbled on Keycloak in several places. There is a Socialite provider, as well as Supabase auth, so I needed a way to make a quick evaluation. I chose to use Docker as I didn't want to dive into the world of Java installations. There is extensive documentation, but like some OSS projects, it can be a firehose at times.
Fortunately, Google came to the rescue with many resources I've included at the bottom of this post for reference.
A Docker compose file to spin up the Keycloak container and Postgres to store its data.
The Docker image needs environment variables set, and the best way I know to do that is through direnv, specifically the asdf version-manager plugin for it.
I lifted this template from this StackOverflow post and surgically altered it for my purposes.
I commented out the parts that weren't relevant and stripped away the backend
and frontend
services since I no longer needed them.
---
version: "3.8"
services:
  database:
    image: postgres:14
    container_name: database
    environment:
      # add multiple schemas
      # POSTGRES_MULTIPLE_DATABASES: ${DB_DATABASE},${KEYCLOAK_DATABASE}
      POSTGRES_DB: ${DB_DATABASE}
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      # POSTGRES_KEYCLOAK_USER: ${KEYCLOAK_USER}
      # POSTGRES_KEYCLOAK_PASSWORD: ${KEYCLOAK_PASSWORD}
      # POSTGRES_DB2: ${KEYCLOAK_DATABASE}
    hostname: local
    restart: always
    volumes:
      - ./db-data:/var/lib/postgresql/data/
      - ./sql:/docker-entrypoint-initdb.d/:ro
      # - ./sql/access_attempt.sql:/docker-entrypoint-initdb.d/A.sql
      # - ./sql/bceid.sql:/docker-entrypoint-initdb.d/B.sql
      # - ./sql/lookup_activitytype.sql:/docker-entrypoint-initdb.d/C.sql
      # - ./sql/lookup_gender_pronoun.sql:/docker-entrypoint-initdb.d/D.sql
      # - ./sql/client.sql:/docker-entrypoint-initdb.d/E.sql
    ports:
      - "5439:5432"
    networks:
      - db-keycloak
  keycloak:
    image: quay.io/keycloak/keycloak:21.0.1
    command: ["start-dev"]
    container_name: keycloak
    environment:
      DB_VENDOR: ${DB_VENDOR}
      DB_ADDR: database
      DB_PORT: 5432
      DB_SCHEMA: public
      DB_DATABASE: ${DB_DATABASE}
      DB_USER: ${DB_USER}
      DB_PASSWORD: ${DB_PASSWORD}
      KEYCLOAK_USER: ${KEYCLOAK_USER}
      KEYCLOAK_PASSWORD: ${KEYCLOAK_PASSWORD}
      KEYCLOAK_ADMIN: ${KEYCLOAK_ADMIN}
      KEYCLOAK_ADMIN_PASSWORD: ${KEYCLOAK_ADMIN_PASSWORD}
      KC_PROXY_MODE: edge
      KC_METRICS_ENABLED: true
      KC_HTTP_ENABLED: true
    ports:
      - "8089:8080"
      - "8443:8443"
    depends_on:
      - database
    restart: always
    links:
      - database
    networks:
      - db-keycloak
networks:
  db-keycloak:
    driver: bridge
This sets the environment variables used by both Postgres and Keycloak.
APP_DOMAIN="localhost"
DB_VENDOR="postgres"
DB_DATABASE="keycloak"
DB_USER="keycloak"
DB_PASSWORD="keycloak"
KEYCLOAK_USER="developer"
KEYCLOAK_PASSWORD="developer"
KEYCLOAK_ADMIN="admin"
KEYCLOAK_ADMIN_PASSWORD="admin"
KC_DB="postgres"
KC_DB_URL="jdbc:postgresql://database/keycloak"
# KC_HOSTNAME_FRONTEND_URL=""
# KC_HOSTNAME_ADMIN_URL=""
Run docker compose up to run the containers in interactive mode.
- The db-data directory should fill up with files and directories.
- Use docker compose up -d to run your containers in the background.
- Use docker-compose down --rmi all to completely clean up all containers.
- Modify command: ["start-dev"] to start Keycloak in the other modes. This is necessary as the entrypoint isn't specific enough.

It may be useful to create an optimized Keycloak image (see the sketch below), but that wasn't necessary for my purposes.
- Modify the image section image: quay.io/keycloak/keycloak:21.0.1 to keycloak-custom:latest to use the custom image.
- Build it with docker build -t keycloak-custom:latest .
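If you do want the optimized image mentioned above, the two-stage pattern from the Keycloak container guide is the place to start; a minimal sketch (the baked-in build options are examples, not a vetted production config) looks like:

FROM quay.io/keycloak/keycloak:21.0.1 AS builder
# bake build-time options into the image
ENV KC_DB=postgres
ENV KC_METRICS_ENABLED=true
RUN /opt/keycloak/bin/kc.sh build

FROM quay.io/keycloak/keycloak:21.0.1
COPY --from=builder /opt/keycloak/ /opt/keycloak/
ENTRYPOINT ["/opt/keycloak/bin/kc.sh", "start", "--optimized"]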
I had the privilege of hanging around Brooklin Myers before he joined DockYard as an instructor in early 2022. A unique Elixir community slowly coalesced with the first video of the beta cohort starting on September 21st, 2022. I wouldn't join the group until October 15th as I wasn't sure what to make of it at first. I figured I would audit the class like I was some college kid.
The academy skews toward junior developers or other Elixir newbies without previous formal instruction. Despite that, the curriculum and the commitment of 2 hours per day were an exceptional resource regardless of experience level.
The curriculum is not as lightweight as Elixir koans, and it is not as self-paced as Exercism's Elixir track. I hadn't been a part of the Exercism Elixir cohort on Discord, but I suspect it may have been similar.
What sets the curriculum apart is that it starts in Livebook, a low barrier to entry for learning Elixir.
Eventually, it moves to bare mix new
projects, graduating to full-on mix phx.new
Phoenix applications.
The beta curriculum experience was different than the first cohort, and there are upcoming changes for the second cohort.
It's helpful to know the curriculum changes when pain points surface.
There is no sleight of hand or abandonware as the official repository is what is taught from start to finish.
As someone that can have analysis paralysis at times when it comes to what and how to learn, having the path chosen for me was extremely helpful. Exercism gates the syllabus, but that can be daunting to decipher when you're starting. I also rushed through the concepts I was interested in rather than taking the time to enjoy the journey. I firmly believe the curriculum and Exercism complement each other very well.
The curriculum culminated in a capstone project, a chance to bundle all the skills we learned to produce our applications. The capstone sets it apart from other learning materials.
The beta cohort was a mix of Elixir newbies, seasoned Elixir developers and mentors, and people that hadn't touched a programming language. We experimented with teaching styles and nailed down a cadence that "locked in" at the last minute. Everyone I paired with showed remarkable improvement between October and the demo day on January 20th. That level of improvement is a testament to Brooklin's teaching style. Fundamentals became second nature very quickly. I would be lucky to work with anybody I met in the cohort or Discord server, as everyone grew into a developer. Elixir has a way of binding cohesive communities, but Brooklin truly has a superpower with the people around him. As much as I love DockYard, this felt like "The Brooklin Show", *sponsored by DockYard(tm).
I was one of the few resident developers to present on Demo Day, and that almost didn't happen. My capstone project, Beatseek, was hastily thrown together with duct tape.
I had a working prototype at least a month before the deadline, but I had only given myself ten days from mix phx.new
to what I presented.
I thought it went well without a script, working through some prior presentations, but it was unpolished.
I used sleight of hand as I do on some demos, but as a magician, I wanted to show all the tricks.
I didn't cut a public release until two months after demo day because I wasn't happy with what I produced. I had to retrofit tests, which exposed several shortcomings. If I had to do it again, I would choose anything other than id3 tags because the edge cases are absurdly complex.
I had a few issues working through the curriculum or with other cohort members. Tracking progress was difficult, but I used an Obsidian daily standup journal template to check off the table of contents manually. The standup journal became a good way of tracking changes over time, though there were few. The ramp-up to Phoenix for people with no web development or API exposure was pretty steep for the beta cohort, but I don't know if this is still true. Web development fundamentals span a breadth of knowledge, but the curriculum helps cement these concepts. People new to web development may wish to spend more time going through the same sections a few times until the concepts of things like MVC are less foreign. It'll make the later parts much easier to push through.
I am 100% glad I had access to an instructor and mentor, even in a limited capacity. Everyone on the Discord server is excellent and a joy to be around. I would do this again in a heartbeat, but 2 hours was a sweet spot for someone like me with a full-time position to juggle. I can see how much more beneficial the 6-hour full day could be with more immersion, but that is a lot of material to cram. We had some luxury in drawing the material out and taking some time to keep everyone on the same pace.