I had a difficult time searching for just what this could be and took a few different approaches that worked with mixed results: jeremy was the only user on the system, and I even poked at the xrdp configuration on a whim. The solution I ended up going with was outlined in https://forums.raspberrypi.com/viewtopic.php?t=358088, as it works flawlessly for my current user, just as it was before whatever recent update broke it.
sudo nano /etc/X11/xrdp/xorg.conf
Find the Option "DRMDevice" line and change it to
Option "DRMDevice" ""
Reboot the Raspberry Pi.
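The manual edit above can also be scripted. This is a hedged sketch: it runs against a temporary sample file with a made-up existing value (/dev/dri/renderD128), not the live config; point CONF at /etc/X11/xrdp/xorg.conf (with sudo) to apply it for real.

```shell
# Sample stand-in for /etc/X11/xrdp/xorg.conf; the existing value is hypothetical
CONF=$(mktemp)
printf 'Option "DRMDevice" "/dev/dri/renderD128"\n' > "$CONF"

# Blank out whatever value DRMDevice currently has
sed -i 's|Option "DRMDevice".*|Option "DRMDevice" ""|' "$CONF"

cat "$CONF"   # Option "DRMDevice" ""
```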
This is one of the reasons I am not a fan of Linux on the desktop, even in 2024, nor do I think it passes the "grandma test." It was damn near impossible to search for this reliably, and it took me multiple days of searching over a few weeks. Grandma may have that kind of time because she's retired, but I'm not; I need the shit to just work(tm). The Linux desktop experience has come a really long way from when I started working with it, though.
I use direnv to load environment variables from a .env file. Coming from Laravel, I'm highly used to this workflow, and I found that using something like direnv to inject the variables was better than the hacks I had been using at the time.
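For context, the workflow amounts to two small files: a .env holding the variables and a .envrc telling direnv to load it via its built-in dotenv directive. This sketch just creates the files in a temp directory (the variable names are hypothetical); with direnv installed, you'd run direnv allow afterward.

```shell
# Create a throwaway project directory
PROJECT=$(mktemp -d)
cd "$PROJECT"

# Hypothetical Laravel-style variables
printf 'APP_ENV=local\nAPP_DEBUG=true\n' > .env

# direnv's stdlib "dotenv" directive loads .env on cd
printf 'dotenv\n' > .envrc

cat .envrc   # dotenv
```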
I started using asdf
as a version-manager-of-all-trades before moving over to rtx
. I liked its ergonomics and I had yet to run into an issue where I thought asdf
was a better choice until now.
The issue I was having was that for whatever reason direnv
wasn't executing upon entering the directory as I had been used to.
Running direnv status
showed output similar to the following:
direnv exec path /Users/jbrayton/.local/share/rtx/installs/direnv/2.32.2/bin/direnv
DIRENV_CONFIG /Users/jbrayton/.config/direnv
bash_path /usr/local/bin/bash
disable_stdin false
warn_timeout 5s
whitelist.prefix []
whitelist.exact map[]
No .envrc or .env loaded
There was more included but the key to focus on was No .envrc or .env loaded.
I ran through a couple of steps to try to figure out what was going on. I found on the direnv website that, for the fish shell, I likely needed to wire direnv hook fish | source into my generic ~/.config/fish/config.fish file.
To do that, I installed direnv via Homebrew, because previously, using it with rtx meant even my global usage wasn't global (or I was holding it wrong(tm)).
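Wiring the hook is a one-line append; the sketch below does it idempotently so re-running your dotfiles setup doesn't duplicate the line. It targets a temp file here; FISH_CONFIG would normally be ~/.config/fish/config.fish.

```shell
# Stand-in for ~/.config/fish/config.fish
FISH_CONFIG=$(mktemp)
HOOK='direnv hook fish | source'

# Append only if the exact line isn't already present
grep -qxF "$HOOK" "$FISH_CONFIG" || echo "$HOOK" >> "$FISH_CONFIG"
grep -qxF "$HOOK" "$FISH_CONFIG" || echo "$HOOK" >> "$FISH_CONFIG"  # second run is a no-op

cat "$FISH_CONFIG"   # direnv hook fish | source
```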
What I was unaware of at the time was that rtx had been renamed: when I went to update rtx, I saw that Homebrew had changed the name to mise, but the mise command wasn't found.
After running brew install mise
I was able to see the following migration output:
migrating /Users/jbrayton/.local/share/rtx/installs/elixir to /Users/jbrayton/.local/share/mise/installs
migrated rtx directories to mise
see https://mise.jdx.dev/rtx.html
migrating /Users/jbrayton/.config/rtx to /Users/jbrayton/.config/mise
migrated rtx directories to mise
see https://mise.jdx.dev/rtx.html
I'm making this post primarily for my own benefit though I seriously doubt I would ever run into this again on this machine or another.
It's possible someone else may see similar weirdness with either one of the rtx
plugins or something similar.
From my understanding, rtx was working flawlessly in all my other projects and this was an isolated instance, but it turned out that direnv was broken for my entire system. The other projects that used it weren't working either.
If you see some weirdness with rtx
and you haven't migrated, performing the migration may help you move forward like it did for me.
It's also worth noting that the migration doesn't copy your installs and I have 10GB of data in my old installs directory that I'll need to prune.
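Pruning those leftovers is a size-check followed by a delete. A hedged sketch, run here against a fabricated stand-in directory; OLD would normally be ~/.local/share/rtx/installs, and you should only remove it once you're sure mise has everything it needs.

```shell
# Fabricated stand-in for ~/.local/share/rtx/installs
OLD=$(mktemp -d)
mkdir -p "$OLD/elixir/1.14.5"
echo fake > "$OLD/elixir/1.14.5/bin"

# See how much space the leftovers take, then prune
du -sh "$OLD"
rm -rf "$OLD"
```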
My local Valet sites started returning 502 Bad Gateway. I had a look at the error logs in ~/.config/valet/Log/nginx-error.log and found this grouping of errors:
2023/11/17 13:42:09 [error] 89485#0: *1 connect() to unix:/Users/jbrayton/.config/valet/valet.sock failed (61: Connection refused) while connecting to upstream, client: 127.0.0.1, server: stripe-sync.test, request: "GET / HTTP/2.0", upstream: "fastcgi://unix:/Users/jbrayton/.config/valet/valet.sock:", host: "stripe-sync.test"
2023/11/17 13:43:21 [error] 89483#0: *12 connect() to unix:/Users/jbrayton/.config/valet/valet.sock failed (61: Connection refused) while connecting to upstream, client: 127.0.0.1, server: stripe-sync.test, request: "GET / HTTP/2.0", upstream: "fastcgi://unix:/Users/jbrayton/.config/valet/valet.sock:", host: "stripe-sync.test"
2023/11/17 13:45:54 [error] 89476#0: *44 connect() to unix:/Users/jbrayton/.config/valet/valet.sock failed (61: Connection refused) while connecting to upstream, client: 127.0.0.1, server: stripe-sync.test, request: "GET / HTTP/2.0", upstream: "fastcgi://unix:/Users/jbrayton/.config/valet/valet.sock:", host: "stripe-sync.test"
2023/11/17 13:51:39 [error] 89471#0: *49 connect() to unix:/Users/jbrayton/.config/valet/valet.sock failed (61: Connection refused) while connecting to upstream, client: 127.0.0.1, server: larajobs-menubar.test, request: "GET / HTTP/2.0", upstream: "fastcgi://unix:/Users/jbrayton/.config/valet/valet.sock:", host: "larajobs-menubar.test"
I performed a generic Google search for nginx "*12 connect()" to unix. Nothing hit me directly, but it did highlight the excellent Datadog primer NGINX 502 Bad Gateway: PHP-FPM.
I ran valet links to get a list of other sites to see if this was isolated to a PHP version, and it was: PHP 8.2 was the only one having problems.
I had tried valet restart
and valet restart php
but that didn't help.
I wanted to check the status of services to make sure they were running and we can do that via valet status
:
Checking status...
Valet status: Error
+--------------------------------------+----------+
| Check | Success? |
+--------------------------------------+----------+
| Is Valet fully installed? | Yes |
| Is Valet config valid? | Yes |
| Is Homebrew installed? | Yes |
| Is DnsMasq installed? | Yes |
| Is Dnsmasq running? | Yes |
| Is Dnsmasq running as root? | Yes |
| Is Nginx installed? | Yes |
| Is Nginx running? | Yes |
| Is Nginx running as root? | Yes |
| Is PHP installed? | Yes |
| Is linked PHP (php) running? | No |
| Is linked PHP (php) running as root? | No |
| Is valet.sock present? | Yes |
+--------------------------------------+----------+
Debug suggestions:
Run `valet restart`.
Uninstall PHP with Brew and run `valet use php@8.2`
I ended up using valet use php@8.2 --force:
⋊> brew uninstall php
Error: Refusing to uninstall /usr/local/Cellar/php/8.2.11
because it is required by composer, php-cs-fixer and wp-cli, which are currently installed.
You can override this and force removal with:
brew uninstall --ignore-dependencies php
⋊> valet use php@8.2
Valet is already using version: php@8.2. To re-link and re-configure use the --force parameter.
⋊> valet use php@8.2 --force
Unlinking current version: php
Linking new version: php@8.2
Stopping phpfpm...
Stopping php@7.2...
Stopping php@8.1...
Installing and configuring phpfpm...
Updating PHP configuration for php@8.2...
Restarting php@8.1...
Restarting php@7.1...
Restarting php@7.4...
Restarting php@7.2...
Restarting php@7.3...
Restarting php...
Restarting nginx...
Valet is now using php@8.2.
Note that you might need to run composer global update if your PHP version change affects the dependencies of global packages required by Composer.
The stop messages not aligning with the restarts is particularly interesting and likely part of what I was dealing with. I hadn't tried sites tied to each version independently.
If you install Livebook with asdf or rtx, the Livebook version will be bound to your global or local instance, depending on the directory. On Huggingface, apps open with ./data as the working directory, so if you use relative pathing in your notebooks or apps for storing files, you either have to infer this or change the WORKDIR in Docker.
I spent maybe 20-30 minutes installing Livebook 0.9.3 and running it for my collection of notebooks.
I would execute the command livebook server index.livemd in the directory, which ran version 1.14.4 at the time of discovery. When I upgraded Livebook, I was outside this directory, which picked up the global version 1.14.5. Because rtx or asdf completely isolates versions, I wasn't aware it was executing the 0.9.2 build with 1.14.4.
Changing .tool-versions
by using rtx use elixir@1.14.5-otp-25
brought my local environment up to global, and the Livebook version was what I expected.
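The rtx use command effectively pins the version in .tool-versions, so writing the file by hand has the same effect. A minimal sketch against a temp directory:

```shell
# Stand-in for the notebook project directory
DIR=$(mktemp -d)

# Equivalent of `rtx use elixir@1.14.5-otp-25` for this directory
echo 'elixir 1.14.5-otp-25' > "$DIR/.tool-versions"

cat "$DIR/.tool-versions"   # elixir 1.14.5-otp-25
```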
I suspect this may bite me and others consistently if we're not careful. I'm sure I've had similar issues in the past that I did not equate to this at the time.
A ton has changed since January 2022, but I opened this issue, which spurred a subsequent pull request. This work has slowly morphed into what is being called "ElixirKit" and is in the elixirkit/ directory in the repository. ElixirKit has a different surface area than Elixir Desktop, which is somewhat of a wrapper or orchestrator for wxWidgets in Elixir.
In the past, when this was a Dock icon only, if you closed the browser window, it was less intuitive for me to open it again. I could reopen the window by clicking the Dock icon (see this issue for history). Now the app stays resident in the Menu Bar, and we can reopen windows or view logs to see the hidden console window. The view logs option is perfect for getting the auth token again. The Dock version also had issues fully closing the runtime, but that was ironed out <3.
I use the free version of the Vanilla app. If Livebook is on the always-visible right side, the icon will display whenever I close and reopen the desktop app. If it's on the default left side, I must quit Vanilla for the icon to appear. The long-term fix of keeping it visible is enough for me, but it may bite other people that use similar menu bar applications.
This issue spells out what I saw and may be helpful. I kept trying to run the app again, not knowing that it was there in the menu bar but I just couldn't see it.
I had issues understanding how to work with notebooks or public apps on Huggingface.
When I used something like data = "./data/file-sorted.csv"
where the notebook-relative data/
directory existed, I would get problems like ** (File.Error) could not open "./data/file-sorted.csv": no such file or directory
.
My fix at the time was for apps to use something like data = System.get_env("PWD")
to get the current working directory.
This change posed a problem locally because my relative data/
directory is in .gitignore
.
Using PWD would save the data in the current directory, which would not fall under .gitignore rules.
The long-term fix, which you can see here, was to include WORKDIR "/notebooks"
at the end of the file.
WORKDIR
tells Docker the working directory so that /data
is now the place for only Livebook configuration, and any relative paths will work as expected.
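The fix itself is a single appended line. This sketch writes a sample Dockerfile in a temp file to show where WORKDIR lands; the base image tag and COPY line are assumptions for illustration, not my exact Dockerfile.

```shell
# Stand-in Dockerfile; the FROM/COPY lines are hypothetical
DOCKERFILE=$(mktemp)
printf 'FROM ghcr.io/livebook-dev/livebook:0.9.3\nCOPY public-apps /notebooks\n' > "$DOCKERFILE"

# The long-term fix: set the working directory at the end of the file
echo 'WORKDIR "/notebooks"' >> "$DOCKERFILE"

tail -n 1 "$DOCKERFILE"   # WORKDIR "/notebooks"
```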
There are other parts to this Dockerfile
, such as pulling down my notebooks and the ash_tutorial so that I could work on Huggingface for things like Whisper or other ML models.
Adding git or FFmpeg as dependencies is trivial. I explicitly copy the public-apps
directory from my repository so that the Huggingface repo stays pure and only cares about setting up the environment.
One of my goals is to flesh out a system where I can work on notebooks remotely and periodically synchronize changes up to the repository.
That's why I've included git-sync
but I haven't worked out how to leverage it.
A public or private app could leverage Kino.start_child/1 to start a GenServer that watches the filesystem for changes and presents a UI to commit and push changes.
I believe something like egit could do this, although I would need to create the UI for it. I'd certainly take this approach over shelling out to the git CLI, though that's not extraordinarily difficult either.
Fortunately, Google came to the rescue with many resources I've included at the bottom of this post for reference.
A Docker compose file to spin up the Keycloak container and Postgres to store its data.
The Docker image needs environment variables set, and the best way I know to do that is through direnv, specifically via the asdf version manager plugin for it.
I lifted this template from this StackOverflow post and surgically altered it for my purposes.
I commented out the parts that weren't relevant and stripped away the backend
and frontend
services since I no longer needed them.
---
version: "3.8"
services:
  database:
    image: postgres:14
    container_name: database
    environment:
      # add multiple schemas
      # POSTGRES_MULTIPLE_DATABASES: ${DB_DATABASE},${KEYCLOAK_DATABASE}
      POSTGRES_DB: ${DB_DATABASE}
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      # POSTGRES_KEYCLOAK_USER: ${KEYCLOAK_USER}
      # POSTGRES_KEYCLOAK_PASSWORD: ${KEYCLOAK_PASSWORD}
      # POSTGRES_DB2: ${KEYCLOAK_DATABASE}
    hostname: local
    restart: always
    volumes:
      - ./db-data:/var/lib/postgresql/data/
      - ./sql:/docker-entrypoint-initdb.d/:ro
      # - ./sql/access_attempt.sql:/docker-entrypoint-initdb.d/A.sql
      # - ./sql/bceid.sql:/docker-entrypoint-initdb.d/B.sql
      # - ./sql/lookup_activitytype.sql:/docker-entrypoint-initdb.d/C.sql
      # - ./sql/lookup_gender_pronoun.sql:/docker-entrypoint-initdb.d/D.sql
      # - ./sql/client.sql:/docker-entrypoint-initdb.d/E.sql
    ports:
      - "5439:5432"
    networks:
      - db-keycloak
  keycloak:
    image: quay.io/keycloak/keycloak:21.0.1
    command: ["start-dev"]
    container_name: keycloak
    environment:
      DB_VENDOR: ${DB_VENDOR}
      DB_ADDR: database
      DB_PORT: 5432
      DB_SCHEMA: public
      DB_DATABASE: ${DB_DATABASE}
      DB_USER: ${DB_USER}
      DB_PASSWORD: ${DB_PASSWORD}
      KEYCLOAK_USER: ${KEYCLOAK_USER}
      KEYCLOAK_PASSWORD: ${KEYCLOAK_PASSWORD}
      KEYCLOAK_ADMIN: ${KEYCLOAK_ADMIN}
      KEYCLOAK_ADMIN_PASSWORD: ${KEYCLOAK_ADMIN_PASSWORD}
      KC_PROXY_MODE: edge
      KC_METRICS_ENABLED: true
      KC_HTTP_ENABLED: true
    ports:
      - "8089:8080"
      - "8443:8443"
    depends_on:
      - database
    restart: always
    links:
      - database
    networks:
      - db-keycloak
networks:
  db-keycloak:
    driver: bridge
This sets the environment variables used by both Postgres and Keycloak.
APP_DOMAIN="localhost"
DB_VENDOR="postgres"
DB_DATABASE="keycloak"
DB_USER="keycloak"
DB_PASSWORD="keycloak"
KEYCLOAK_USER="developer"
KEYCLOAK_PASSWORD="developer"
KEYCLOAK_ADMIN="admin"
KEYCLOAK_ADMIN_PASSWORD="admin"
KC_DB="postgres"
KC_DB_URL="jdbc:postgresql://database/keycloak"
# KC_HOSTNAME_FRONTEND_URL=""
# KC_HOSTNAME_ADMIN_URL=""
- Run docker compose up to run the containers in interactive mode. The db-data directory should fill up with files and directories.
- Run docker compose up -d to run your containers in the background.
- Run docker-compose down --rmi all to completely clean up all containers.
- Change command: ["start-dev"] to start Keycloak in the other modes. This is necessary as the entrypoint isn't specific enough.

It may be useful to create an optimized Keycloak image, but that wasn't necessary for my purposes.
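If you do go the optimized-image route, Keycloak's documented pattern is a two-stage build where kc.sh build bakes build-time options into the image. This is a hedged sketch, not something my setup required; the ENV values mirror the compose file above, and the docker build line is left commented since it needs a Docker daemon.

```shell
# Write a sample Dockerfile for a custom Keycloak image into a temp dir
BUILD=$(mktemp -d)
cat > "$BUILD/Dockerfile" <<'EOF'
FROM quay.io/keycloak/keycloak:21.0.1 AS builder
ENV KC_DB=postgres
ENV KC_METRICS_ENABLED=true
# Bake build-time options into an optimized server image
RUN /opt/keycloak/bin/kc.sh build

FROM quay.io/keycloak/keycloak:21.0.1
COPY --from=builder /opt/keycloak/ /opt/keycloak/
ENTRYPOINT ["/opt/keycloak/bin/kc.sh", "start"]
EOF

# docker build -t keycloak-custom:latest "$BUILD"
```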
- Change image: quay.io/keycloak/keycloak:21.0.1 to keycloak-custom:latest to use the custom image.
- Build the custom image with docker build -t keycloak-custom:latest .

The academy skews toward junior developers or other Elixir newbies without previous formal instruction. Despite that, the curriculum and the commitment of 2 hours per day made for an exceptional resource regardless of experience level.
The curriculum is not as lightweight as Elixir koans, and it is not as self-paced as Exercism's Elixir track. I hadn't been a part of the Exercism Elixir cohort on Discord, but I suspect it may have been similar.
What sets the curriculum apart is that it starts in Livebook, a low barrier to entry for learning Elixir.
Eventually, it moves to bare mix new
projects, graduating to full-on mix phx.new
Phoenix applications.
The beta curriculum experience was different than the first cohort, and there are upcoming changes for the second cohort.
It's helpful to know the curriculum changes when pain points surface.
There is no sleight of hand or abandonware as the official repository is what is taught from start to finish.
As someone that can have analysis paralysis at times when it comes to what and how to learn, having the path chosen for me was extremely helpful. Exercism gates the syllabus, but that can be daunting to decipher when you're starting. I also rushed through the concepts I was interested in rather than taking the time to enjoy the journey. I firmly believe the curriculum and Exercism complement each other very well.
The curriculum culminated in a capstone project, a chance to bundle all the skills we learned to produce our applications. The capstone sets it apart from other learning materials.
The beta cohort was a mix of Elixir newbies, seasoned Elixir developers and mentors, and people that hadn't touched a programming language. We experimented with teaching styles and nailed a cadence that "locked in" at the last minute. Everyone I paired with showed remarkable improvement between October and the demo day on January 20th. That level of improvement is a testament to Brooklin's teaching style. Fundamentals became second nature very quickly. I would be lucky to work with anybody I met in the cohort or Discord server, as everyone grew as a developer. Elixir has a way of binding cohesive communities, but Brooklin truly has a superpower with the people around him. As much as I love DockYard, this felt like "The Brooklin Show", sponsored by DockYard(tm).
I was one of the few resident developers to present on Demo Day, and that almost didn't happen. My capstone project, Beatseek, was hastily held together with duct tape.
I had a working prototype at least a month before the deadline, but I had only given myself ten days from mix phx.new
to what I presented.
I thought it went well without a script, working through some prior presentations, but it was unpolished.
I used sleight of hand as I do on some demos, but as a magician, I wanted to show all the tricks.
I didn't cut a public release until two months after demo day because I wasn't happy with what I produced. I had to retrofit tests, which exposed several shortcomings. If I had to do it again, I would choose anything other than id3 tags because the edge cases are absurdly complex.
I had a few issues working through the curriculum or with other cohort members. Tracking progress was difficult, but I used an Obsidian daily standup journal template to check off the table of contents manually. The standup journal became a good way of tracking changes over time, though there were few. The ramp-up to Phoenix for people with no web development or API exposure was pretty steep for the beta cohort, but I don't know if this is still true. Web development fundamentals span a breadth of knowledge, but the curriculum helps cement these concepts. People new to web development may wish to spend more time going through the same sections a few times until the concepts of things like MVC are less foreign. It'll make the later parts much easier to push through.
I am 100% glad I had access to an instructor and mentor, even in a limited capacity. Everyone on the Discord server is excellent and a joy to be around. I would do this again in a heartbeat, but 2 hours was a sweet spot for someone like me with a full-time position to juggle. I can see how much more beneficial the 6-hour full day could be with more immersion, but that is a lot of material to cram. We had some luxury in drawing the material out and taking some time to keep everyone on the same pace.
If you're packaging these archives in an IDE plugin, make sure to build using the minimum supported OTP version for the best backward compatibility. If you're like me, though, you may not care to support older versions of Elixir. How do we configure the plugin to run the latest version?
The output I see in VSCode's Output tab (Shift-Command-U on macOS) for the ElixirLS extension:
[Info - 4:33:53 PM] Started ElixirLS v0.13.0
[Info - 4:33:53 PM] ElixirLS built with elixir "1.12.3" on OTP "22"
[Info - 4:33:53 PM] Running on elixir "1.14.2 (compiled with Erlang/OTP 25)" on OTP "25"
[Info - 4:33:53 PM] Elixir sources not found (checking in /home/build/elixir). Code navigation to Elixir modules disabled.
[Info - 4:33:54 PM] Loaded DETS databases in 32ms
[Info - 4:33:54 PM] Starting build with MIX_ENV: test MIX_TARGET: host
[Info - 4:33:55 PM] Compile took 854 milliseconds
There are numerous articles on building from source. What if we'd prefer to build the extension instead?
Let's unpack that Docker command to perform each step:
1. Clone the repository: git clone --recursive --branch v0.13.0 https://github.com/elixir-lsp/vscode-elixir-ls.git /tmp/vscode-elixir-ls.
2. Enter the directory: cd /tmp/vscode-elixir-ls.
3. Install the Node dependencies: npm install.
4. Enter the elixir-ls directory: cd elixir-ls.
5. Fetch the Elixir dependencies: mix deps.get.
6. Return to the project root: cd ..
7. Package the extension: npx vsce package.
8. Create an extensions directory in $HOME: mkdir -p $HOME/extensions.
9. Copy the package: cp /tmp/vscode-elixir-ls/elixir-ls-0.13.0.vsix $HOME/extensions.
10. Clean up: rm -rf /tmp/vscode-elixir-ls.

It is crucial to install Elixir v1.14.x and Erlang 25.1.x using your favorite method prior to packaging the new extension. I'm using asdf global to do this, but you could create a local .tool-versions inside the tmp folder if you wish.
The extension should now live at /tmp/vscode-elixir-ls/elixir-ls-0.13.0.vsix
.
The remaining steps copy the package to a directory the Docker container knows, and it's okay to stop here.
Because the prepublish.bash
file that executes at step #7 runs mix deps.get
, we can eliminate steps 4, 5, and 6.
These commands also compile the extension using MIX_ENV=dev
, which we may not want.
To change this, we can edit the last line in prepublish.bash
to MIX_ENV=prod mix elixir_ls.release -o ../elixir-ls-release
to compile for production.
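That edit is a one-line substitution, so it can be scripted with sed. A hedged sketch against a fabricated sample of prepublish.bash (the sample's contents are an assumption; only the release line matters here):

```shell
# Fabricated stand-in for prepublish.bash; real contents will differ
SCRIPT=$(mktemp)
printf 'cd elixir-ls\nmix elixir_ls.release -o ../elixir-ls-release\n' > "$SCRIPT"

# Prefix the release invocation with MIX_ENV=prod
sed -i 's|^mix elixir_ls.release|MIX_ENV=prod mix elixir_ls.release|' "$SCRIPT"

tail -n 1 "$SCRIPT"   # MIX_ENV=prod mix elixir_ls.release -o ../elixir-ls-release
```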
Putting all of the (now reduced) commands together:
git clone --recursive --branch v0.13.0 https://github.com/elixir-lsp/vscode-elixir-ls.git /tmp/vscode-elixir-ls
cd /tmp/vscode-elixir-ls
npm install
npx vsce package
mkdir -p $HOME/extensions
cp /tmp/vscode-elixir-ls/elixir-ls-0.13.0.vsix $HOME/extensions
rm -rf /tmp/vscode-elixir-ls
We can install the extension from the VSIX file using the UI or the command code --install-extension $HOME/extensions/elixir-ls-0.13.0.vsix
.
To take advantage of the new extension in our projects, we need to rm -rf .elixir_ls
and navigate to an Elixir file.
ElixirLS won't start compiling until an Elixir file is open in the editor, and it'll usually take a few minutes to rebuild everything.
With the new extension installed we should see the change in VSCode's Output tab:
[Info - 4:35:42 PM] Started ElixirLS v0.13.0
[Info - 4:35:43 PM] ElixirLS built with elixir "1.14.2" on OTP "25"
[Info - 4:35:43 PM] Running on elixir "1.14.2 (compiled with Erlang/OTP 25)" on OTP "25"
[Info - 4:35:43 PM] Elixir sources not found (checking in /home/build/elixir). Code navigation to Elixir modules disabled.
[Info - 4:35:48 PM] Loaded DETS databases in 414ms
[Info - 4:35:48 PM] Starting build with MIX_ENV: test MIX_TARGET: host
[Info - 4:35:49 PM] Compile took 1811 milliseconds
I upgraded the asdf version manager using Homebrew and ran into a snag when trying to perform mix commands. I encountered the error /Users/jbrayton/.asdf/shims/mix: line 13: /usr/local/Cellar/asdf/0.10.2/libexec/bin/asdf: No such file or directory.
The key to notice here is the path /usr/local/Cellar/asdf/0.10.2/
when the newest version is 0.11.0
, as there is clearly a mismatch.
I restarted my terminal and shell, but the problem persisted. I noticed all the files in ~/.asdf/shims
had the line exec /usr/local/Cellar/asdf/0.10.2/libexec/bin/asdf exec "odbcserver" "$@" # asdf_allow: ' asdf '
.
This line is not what we wanted and indicates the problem.
After looking at the pinned https://github.com/asdf-vm/asdf/issues/785 and then following that to https://github.com/asdf-vm/asdf/issues/1393, the solution rm -rf ~/.asdf/shims; asdf reshim fixed my problem.
Now, whenever I examine one of the shim files, I see the line exec /usr/local/opt/asdf/libexec/bin/asdf exec "mix" "$@" # asdf_allow: ' asdf '
as expected.
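A quick way to confirm whether you're in this state is to grep the shims for a versioned Cellar path before reaching for the reshim. The sketch below demonstrates against a fabricated shim directory; SHIMS would normally be ~/.asdf/shims.

```shell
# Fabricated stand-in for ~/.asdf/shims with one stale shim
SHIMS=$(mktemp -d)
printf 'exec /usr/local/Cellar/asdf/0.10.2/libexec/bin/asdf exec "mix" "$@"\n' > "$SHIMS/mix"

# Any hit here means the shims point at a pinned Cellar path and need a reshim
grep -rl '/usr/local/Cellar/asdf/' "$SHIMS" && echo "stale shims found; reshim needed"
```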
The directory /usr/local/opt
is what I see when I run the command brew --prefix asdf
as the prefix is no longer /usr/local/Cellar/asdf/0.10.2/
or the Cellar location.
This corrective measure should be a more permanent solution, as the prefix /usr/local/opt should no longer change in the future.
This issue was also somewhat of a perfect storm as Phoenix 1.7 rc.1 dropped two days ago and I had just upgraded a bunch of homebrew packages, including asdf
.
In my case, I want to proxy the domain scdn-app.thinkorange.com
through my local version of the Laravel application.
1. Open ~/.config/valet/config.json on macOS and change the tld parameter from test to com.
2. Run valet link scdn-app.thinkorange to set up our Valet configuration to point the domain to this directory.
3. Run valet secure scdn-app.thinkorange to set up the SSL certificate.
4. Run cd ~/.config/valet/dnsmasq.d.
5. Run cp tld-test.conf tld-com.conf.
6. Edit tld-com.conf to contain address=/.com/127.0.0.1 and save the file.
7. Run valet isolate --site scdn-app.thinkorange php@8.1.
8. Change your /etc/hosts file to redirect the domain to 127.0.0.1 for IPv4 and ::1 for IPv6. I use the excellent Gas Mask to make this step easier.

Now we should have a functional production proxy through our local machine. This configuration creates a few problems around keeping the com TLD. Fortunately, only a few extra steps are necessary for us to switch back to .test while also keeping this site functional.
1. Open ~/.config/valet/config.json again and change the tld parameter from com back to test. This change will immediately break our site.
2. Run cd ~/.config/valet/Sites.
3. If we run ls -al to list the directory, we'll see our site scdn-app.thinkorange. Let's change that.
4. Run mv scdn-app.thinkorange scdn-app.thinkorange.com.

Our site should now be working again. We are also able to continue serving our previous local test domains.
Because we can create a permanently functional system using these steps, I believe it should be possible to create a pull request to reduce the number of hoops we have to jump through.
I'd love to be able to run valet link scdn-app.thinkorange.com.
with a period at the end to denote I'm including the full domain with TLD.
That would eliminate the temporary step of editing the config.json
file, and the Sites
directory would just work(TM) as it would include the .com
directory name.
I don't believe we even need the dnsmasq changes as I'm able to navigate to a functional site without them.
I believe Gas Mask is doing the work, but it's better to be safe than sorry.
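For reference, the hosts entries that Gas Mask (or hand-editing /etc/hosts) ends up providing look like the sketch below. It writes to a temp file here rather than the real /etc/hosts, which needs sudo.

```shell
# Stand-in for /etc/hosts
HOSTS=$(mktemp)
cat >> "$HOSTS" <<'EOF'
127.0.0.1 scdn-app.thinkorange.com
::1       scdn-app.thinkorange.com
EOF

grep 'scdn-app.thinkorange.com' "$HOSTS"
```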
If you'd prefer a YouTube video where I stumble through recreating these steps from scratch:
There is a better way to handle this scenario. Livebook has had autosaves since 0.4:
https://twitter.com/livebookdev/status/1467576154941009920
The feature was added in this PR according to the changelog:
https://github.com/livebook-dev/livebook/pull/736
To find your autosave files:
- Desktop app: ~/Library/Application Support/livebook/autosaved/, which for me is /Users/jbrayton/Library/Application Support/livebook/autosaved/.
- Development: in config/dev.exs, this is set as config :livebook, :data_path, Path.expand("tmp/livebook_data/dev"), so notebooks land in /Users/Shared/repositories/personal/elixir/livebook/tmp/livebook_data/dev/autosaved/.
- Test: in config/test.exs, this is set as Path.expand("tmp/livebook_data/test"), so notebooks land in /Users/Shared/repositories/personal/elixir/livebook/tmp/livebook_data/test/autosaved/.
.Notebooks are saved by day in the autosave directory and the date corresponds to when they were created (when you immediately click the New notebook button).
To view or change your autosave directory in the CLI, visit Settings under the Home and Learn links. For the Desktop application, the port will be randomized, but you can either change the URL to tack /settings on after the port or click around to the settings page as described earlier.
If you are curious as to how this setting gets configured, we can start by looking at Livebook.Settings.default_autosave_path()
in https://github.com/livebook-dev/livebook/blob/main/lib/livebook/settings.ex#L32-L34.
We follow Livebook.Config.data_path()
to https://github.com/livebook-dev/livebook/blob/main/lib/livebook/config.ex#L76-L78 then the Erlang function :filename.basedir(:user_data, "livebook")
.
Running this in Livebook we get the output "/Users/jbrayton/Library/Application Support/livebook"
, precisely where the desktop app stores its files.
What led me to this discovery, after vaguely remembering autosave was a thing, was looking for files on my computer.
I purposefully install and use the locate
command because I find it far easier to use than remembering the find -name
syntax.
Here's the output for checking that the word autosave
is in any directory or file name:
⋊> ~ locate autosaved/
/Users/Shared/repositories/personal/elixir/livebook/tmp/livebook_data/dev/autosaved/2022_10_31/18_25_03_mapset_drills_hedh.livemd
/Users/Shared/repositories/personal/elixir/livebook/tmp/livebook_data/dev/autosaved/2022_11_03/18_12_21_teller_bank_challenge_pv4e.livemd
/Users/Shared/repositories/personal/elixir/livebook/tmp/livebook_data/dev/autosaved/2022_11_03/18_13_39_untitled_notebook_pidb.livemd
/Users/Shared/repositories/personal/elixir/livebook/tmp/livebook_data/dev/autosaved/2022_11_03/19_31_57_dockyard_academy_amas_p75r.livemd
/Users/Shared/repositories/personal/elixir/livebook/tmp/livebook_data/dev/autosaved/2022_11_03/20_02_17_intro_to_timescale_jm7r.livemd
/Users/Shared/repositories/personal/elixir/livebook/tmp/livebook_data/dev/autosaved/2022_11_08/11_10_21_untitled_notebook_ervg.livemd
/Users/Shared/repositories/personal/elixir/livebook/tmp/livebook_data/dev/autosaved/2022_11_22/19_15_12_untitled_notebook_p75e.livemd
What I found interesting was that my files in ~/Library/Application Support/livebook/autosaved/
did not show up.
Had I not realized there could be different locations, I may have overlooked the notebook I was looking for all along.
I have no clue why locate doesn't scour the directories in ~/Library that it should have access to, but that's a problem for another day.
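In the meantime, find works fine for this even if the syntax is harder to remember. The sketch below runs against a fabricated temp tree standing in for ~/Library so it's self-contained:

```shell
# Fabricated stand-in for ~/Library
ROOT=$(mktemp -d)
mkdir -p "$ROOT/Application Support/livebook/autosaved/2022_11_03"
touch "$ROOT/Application Support/livebook/autosaved/2022_11_03/notebook.livemd"

# find equivalent of `locate autosaved/`
find "$ROOT" -type d -name autosaved
```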
https://twitter.com/bcardarella/status/1474126383123247104?lang=en
Over the course of the past year, I've created a sample project a total of 3 times to get a better understanding of how it operates. I haven't seen a ton of content on Beacon beyond announcement tweets, the mention in the ElixirConf 2022 keynote, and https://beaconcms.org/. This post covers the complete instructions in the readme with some notes on where to go from here. I ran into a few snags at first, but a lot of those initial pain points have been hammered out so far. While a basic "Hello World" sample project is great, I plan on expanding the sample with deeper dives into how Beacon serves up content. Beacon takes a few novel approaches I haven't seen before: it can run as a CMS alongside your application, or it can be centralized with multi-tenancy. One CMS can service all of your ancillary marketing sites, blogs, or wherever you need the content.
The following instructions are also listed on the sample application readme so you're welcome to skip them if you want to look at the code.
Create a top-level directory to keep our application pair. This is temporary as the project matures.
mkdir beacon_sample
Clone the BeaconCMS/beacon repository to ./beacon:
git clone git@github.com:BeaconCMS/beacon.git
Start with our first step from the Readme
mix phx.new --umbrella --install beacon_sample
Go to the umbrella project directory
cd beacon_sample/
Initialize git
git init
Commit the freshly initialized project; "Initial commit of Phoenix v1.6.15" works, as that was the version as of the time of this writing.
Add :beacon as a dependency to both apps in your umbrella project:
# Local:
{:beacon, path: "../../../beacon"},
# Or from GitHub:
{:beacon, github: "beaconCMS/beacon"},
Add this to both apps/beacon_sample/mix.exs and apps/beacon_sample_web/mix.exs under the defp deps do section.
Run mix deps.get to install the dependencies.
Commit the changes; "Add :beacon as a dependency to both apps in your umbrella project" seems like a good enough commit message.
Configure Beacon Repo
Add the Beacon.Repo under the ecto_repos: section in config/config.exs.
Configure the database in dev.exs. We'll do production later.
# Configure beacon database
config :beacon, Beacon.Repo,
username: "postgres",
password: "postgres",
database: "beacon_sample_beacon",
hostname: "localhost",
show_sensitive_data_on_connection_error: true,
pool_size: 10
Commit the changes, using "Configure Beacon Repo" as the subject and "Configure the beacon repository in our dev only environment for now." as the body.
Create a BeaconDataSource module that implements Beacon.DataSource.Behaviour at apps/beacon_sample/lib/beacon_sample/datasource.ex:
defmodule BeaconSample.BeaconDataSource do
@behaviour Beacon.DataSource.Behaviour
def live_data("my_site", ["home"], _params), do: %{vals: ["first", "second", "third"]}
def live_data("my_site", ["blog", blog_slug], _params), do: %{blog_slug_uppercase: String.upcase(blog_slug)}
def live_data(_, _, _), do: %{}
end
Add that DataSource to your config/config.exs
config :beacon,
data_source: BeaconSample.BeaconDataSource
Commit the changes with the message "Configure BeaconDataSource".
Make router (apps/beacon_sample_web/lib/beacon_sample_web/router.ex) changes to cover Beacon pages.
Add a :beacon pipeline. I typically do this towards the pipeline sections at the top, starting at line 17.
pipeline :beacon do
plug BeaconWeb.Plug
end
Add a BeaconWeb scope.
scope "/", BeaconWeb do
pipe_through :browser
pipe_through :beacon
live_session :beacon, session: %{"beacon_site" => "my_site"} do
live "/beacon/*path", PageLive, :path
end
end
Comment out existing scope.
# scope "/", BeaconSampleWeb do
# pipe_through :browser
# get "/", PageController, :index
# end
Commit the changes with the message "Add routing changes".
Add some components to your apps/beacon_sample/priv/repo/seeds.exs.
alias Beacon.Components
alias Beacon.Pages
alias Beacon.Layouts
alias Beacon.Stylesheets
Stylesheets.create_stylesheet!(%{
site: "my_site",
name: "sample_stylesheet",
content: "body {cursor: zoom-in;}"
})
Components.create_component!(%{
site: "my_site",
name: "sample_component",
body: """
<li>
<%= @val %>
</li>
"""
})
%{id: layout_id} =
Layouts.create_layout!(%{
site: "my_site",
title: "Sample Home Page",
meta_tags: %{"foo" => "bar"},
stylesheet_urls: [],
body: """
<header>
Header
</header>
<%= @inner_content %>
<footer>
Page Footer
</footer>
"""
})
%{id: page_id} =
Pages.create_page!(%{
path: "home",
site: "my_site",
layout_id: layout_id,
template: """
<main>
<h2>Some Values:</h2>
<ul>
<%= for val <- @beacon_live_data[:vals] do %>
<%= my_component("sample_component", val: val) %>
<% end %>
</ul>
<.form let={f} for={:greeting} phx-submit="hello">
Name: <%= text_input f, :name %> <%= submit "Hello" %>
</.form>
<%= if assigns[:message], do: assigns.message %>
</main>
"""
})
Pages.create_page!(%{
path: "blog/:blog_slug",
site: "my_site",
layout_id: layout_id,
template: """
<main>
<h2>A blog</h2>
<ul>
<li>Path Params Blog Slug: <%= @beacon_path_params.blog_slug %></li>
<li>Live Data blog_slug_uppercase: <%= @beacon_live_data.blog_slug_uppercase %></li>
</ul>
</main>
"""
})
Pages.create_page_event!(%{
page_id: page_id,
event_name: "hello",
code: """
{:noreply, Phoenix.LiveView.assign(socket, :message, "Hello \#{event_params["greeting"]["name"]}!")}
"""
})
Run ecto.reset to create and seed our database(s):
- cd apps/beacon_sample
- mix ecto.setup (as our repos haven't been created yet)
- mix ecto.reset thereafter
We can skip to Step 22 now that the SafeCode package works as expected.
This is typically where we run into issues with safe_code
on the inner content of the layout seed, specifically:
** (RuntimeError) invalid_node:
assigns . :inner_content
If you remove the line <%= @inner_content %>, seeding seems to complete.
Running mix phx.server throws another error:
** (RuntimeError) invalid_node:
assigns . :val
It looks like safe_code is problematic and needs to be surgically removed from Beacon for now.
In Beacon's repository, remove the SafeCode.Validator.validate_heex! function calls from the loaders:
- lib/beacon/loader/layout_module_loader.ex
- lib/beacon/loader/page_module_loader.ex
- lib/beacon/loader/component_module_loader.ex
Fix the seeder to work without SafeCode: in apps/beacon_sample/priv/repo/seeds.exs under Pages.create_page!, change <%= for val <- live_data[:vals] do %> to <%= for val <- live_data.vals do %>.
Commit the seeder changes with the message "Add component seeds".
Enable Page Management and the Page Management API in the router (apps/beacon_sample_web/lib/beacon_sample_web/router.ex).
require BeaconWeb.PageManagement
require BeaconWeb.PageManagementApi
scope "/page_management", BeaconWeb.PageManagement do
pipe_through :browser
BeaconWeb.PageManagement.routes()
end
scope "/page_management_api", BeaconWeb.PageManagementApi do
pipe_through :api
BeaconWeb.PageManagementApi.routes()
end
Commit the Page Management router changes with the message "Add Page Management routes".
Navigate to http://localhost:4000/beacon/home to view the main CMS page. You should see Header, Some Values, and Page Footer, with a zoom-in cursor over the page.
Navigate to http://localhost:4000/beacon/blog/beacon_is_awesome to view the blog post. You should see Header, A blog, and Page Footer, with a zoom-in cursor over the page.
Navigate to http://localhost:4000/page_management/pages to view the Page Management section. You should see Listing Pages, Reload Modules, a list of pages, and New Page.
We should put the page management through its paces to determine weak points.
- <main>
- <body> section
- stylesheet_urls?
- 0.17.7
- phx gen auth
- safe_code was a problem during my first two attempts.
- Adding the BeaconWeb scope as BeaconSampleWeb instead raises UndefinedFunctionError: function BeaconSampleWeb.PageLive.__live__/0 is undefined (module BeaconSampleWeb.PageLive is not available).
- Styles land in <head> as inline <style> tags.
- <body><div data-phx-main="true">
- The dev server (mix phx.server) immediately boots our Beacon components before it shows the url.

One change that happened at the end of 2020 is that I started the journal section to try to capture bite-sized rough ideas. I had started a journal at work with notes in files like Phoenix Developer Diary.txt
and I looked for a solution to merge my different diaries. The excellent Claire Codes keeps an extremely consistent diary at clairecodes, which served as my main source of inspiration.
I've gone all-in learning Elixir by participating in my first Advent of Code in 2020. I tapered off pretty quickly as I had serious problems working through loops and control flow. Seeing other examples on Elixir Forum helped immensely as I had slowly gotten better at reading the code. Later on in the year, I decided to take a TodoMVC sample through to a LiveView version with a little help from other resources on the internet. I had also started a diary where I wanted to capture the approaches I took each day I worked on the example. I have a plan to try to tackle my version from scratch but I'm also looking at other application ideas.
While the Advent of Code and TodoMVC were good for getting my feet wet, I learned far more by pushing through Exercism exercises. If you're on Exercism and curious, my solutions can be found here. I highly recommend using Exercism to learn any language it covers as the recently released version 3 makes for a great experience. Exercises feel a bit more "real world" and less like brain teasers that happen to use programming concepts. Even if I happened to look at the HINTS.md
file, it never felt like cheating as it would only guide us toward a solution, not implement it.
After attending the excellent ElixirConf 2021 virtually, I've started working with Livebook in a few examples. I wanted to highlight the 3 notebooks that use the excellent spider_man
package to crawl 3 websites: Elixir Jobs, Elixir Radar Jobs, and Elixir Companies. Parsing the DOM of each required slowly stretching far outside my comfort zone. It's also worth mentioning that in the Elixir Jobs
example, I left a problem I found under the Sorting the Results
section. Due to the zero-width space, the section throws the message ** (SyntaxError) nofile:5:1: unexpected token: "" (column 1, code point U+200B)
.
Coming to the end of 2021, I'm looking forward to immersing myself deeper in the Elixir ecosystem. Livebook is also a great way to get your feet wet with Elixir concepts, like a powerful language scratchpad. There have been other life changes since January 2020 but those deserve separate posts when I can get to them. Fortunately, the pandemic hasn't been harsh on my family or extended family at all, which I consider an extreme blessing. I can't say we weren't impacted by the last 2 years but things could've been much worse.
]]>One particular concept I had a problem with right out of the gate was how to use markdown files from multiple directories. I started with the post type to handle /year/month/day/title routes but I wanted to move to an equivalent of the generic page type from Hexo. In researching the search terms I could've used months ago, I stumbled on multiple issues that pointed out how to do it.
In the file gridsome.config.js
, I use the following snippet in the plugins section:
{
use: '@gridsome/source-filesystem',
options: {
path: 'blog/articles/**/*.md',
typeName: 'Article',
refs: {
authors: {
typeName: 'Author',
create: true
},
}
}
},
{
use: '@gridsome/source-filesystem',
options: {
path: 'blog/posts/**/*.md',
typeName: 'Post',
refs: {
authors: {
typeName: 'Author',
create: true
},
categories: {
typeName: 'Category',
create: true
},
tags: {
typeName: 'Tag',
create: true
},
}
}
},
Since Gridsome has a concept of pages already, I chose the word article to represent them instead. As an example, the portfolio page is an article type while this page represents a post type. While hindsight makes this seem intuitive now, I somehow had the impression that you were only allowed one plugin type for safety reasons.
To point out something else, the portfolio page highlights a technique I didn't think was possible at the time. The parent portfolio page is an article type but all the subsequent child pages are markdown files in a separate portfolio directory as a portfolio type. In the plugins section of gridsome.config.js
:
{
use: '@gridsome/source-filesystem',
options: {
path: 'blog/portfolio/**/*.md',
typeName: 'Portfolio',
refs: {
authors: {
typeName: 'Author',
create: true
},
}
}
},
Coming from Hexo, I opt for placing content in markdown files and having unique layouts defined in the various pages
and templates
files. As much as Gridsome is a generic website framework, I find that it can be extremely flexible to whatever workflow you wish to create. There are some parts of Hexo I miss like scaffolding new page types or steering me into blog concepts but the transition to Gridsome has been rather smooth. While Gridsome may not be for everyone, I can definitely see how JAMstack has gained traction recently. Barring very few gotchas, working on this site is fun again even in the I-can-see-every-blemish state it's currently in.
Static site generators like Hugo and Gatsby have picked up steam and the feature set of Gatsby, particularly the GraphQL component stood out. I wanted to stick to Vue for as many of my personal projects as possible, so I searched for any static site generator using Vue I could find. Fortunately Gridsome has come along as a nice clone of Gatsby using Vue rather than React and even though it's at v0.7.12 at the time of this post, I've run into very few hurdles.
I don't have the best understanding of JAMstack after working with a sample size of one, but learning GraphQL by only dealing with queries has made this one of the best ways to get my feet wet. I'm by no means an expert but this light interaction compels me to use it more often, as it's mostly been a pleasure to work with. Frameworks like Gridsome, and I suspect Gatsby, let you focus almost entirely on the frontend. Even though the A in JAMstack stands for APIs, as a backend developer I haven't had to write a single REST or GraphQL endpoint, or anything else I'd typically associate with an API like I would with Laravel, Phoenix Framework, or Express.
One thing I miss about Hexo is that it had scaffolding to generate new files. Gridsome is a framework for generic sites, not just blogs, so scaffolding doesn't seem to be included. Coming from Hexo I wanted to keep as much of the existing markdown as possible and I think some of the approaches I've taken may be useful to others. A small example I had a problem understanding is that you can use a @gridsome/source-filesystem
plugin multiple times, one for each directory or type. It makes sense in hindsight but none of the starters used the technique nor did the docs seem to suggest it was possible. I'm tempted to create a starter based on my usage patterns but worst case, I plan on writing a post outlining some of these approaches in the near future.
One last thing is a small humblebrag. While the theme for this site draws a few cues from the older version, I wanted to flex my design abilities by focusing on techniques I've learned reading Refactoring UI. By the time this post is published it likely won't be perfect but I think it's a decent first pass that should only get better over time.
]]>I've been using this Swaggervel package with almost all my recent Laravel projects. A few instances were lightly customized to work against different authentication schemes and I only briefly touched on using Laravel Passport.
I wanted to highlight a few areas while also offering up an example project as a lightly opinionated jumping off point. Just the highlights cover quite a bit of information but the example should have ample information in commit messages and in the finished product.
First we run laravel new <project_name>, then git init, and commit immediately to mark our base Laravel installation.
I've always preferred this immediate commit over making customizations first as it's far easier to track your customizations versus the base install.
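As a sketch, the sequence looks like this (the project name is a placeholder, and laravel new is stubbed out since it needs the installer and network access):

```shell
#!/usr/bin/env sh
# laravel new passport-swaggervel    # generates the skeleton; stubbed below
mkdir passport-swaggervel && cd passport-swaggervel
echo "Laravel skeleton" > README.md  # stand-in for the generated files
git init
git add -A
# -c flags keep this runnable on machines without a global git identity
git -c user.name=demo -c user.email=demo@example.com \
    commit -m "Initial commit of Laravel"
```

Everything committed after this point shows up plainly as your own customization in git log.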
Next, we run through the Laravel Passport docs with the following caveats:
- php artisan vendor:publish --tag=passport-migrations doesn't copy the migrations as expected. We do this manually.
- php artisan migrate --step creates a migration batch for each migration file individually. This lets us roll back individual steps and is primarily personal preference.
- app/Providers/AuthServiceProvider contains the following:
Passport::routes(function (RouteRegistrar $routeRegistrar) {
$routeRegistrar->all();
});
Passport::tokensCan([
]);
Passport::enableImplicitGrant();
Passport::tokensExpireIn(Carbon::now()->addDays(15));
Passport::refreshTokensExpireIn(Carbon::now()->addDays(30));
- Run artisan make:auth to utilize the app layout and create a home view that is protected by the login prompt.
- Create a WelcomeController with a matching view that utilizes the same app layout.
- This allows artisan route:cache in the future, as route closures aren't supported.
Now that the basics are complete, we bring in Swaggervel via composer require appointer/swaggervel --dev.
We can ignore the line in the documentation that mentions adding Appointer\Swaggervel\SwaggervelServiceProvider::class
as that's only for Laravel versions earlier than 5.5 without package discovery.
It's necessary to run artisan vendor:publish
to publish the content as we're using this package as a dev dependency and the assets won't show up otherwise.
Now that Swaggervel is in place we can bring it all together.
To start, we create the file app/Http/Controllers/Api/v1/Controller.php
as our generic API base controller.
This controller houses our root-level @SWG\Info
definition in a convenient location.
This also sets us up for future work where API controllers are versioned, though this is personal preference.
The secret sauce is the @SWG\SecurityScheme
annotation:
/**
* @SWG\SecurityScheme(
* securityDefinition="passport-swaggervel_auth",
* description="OAuth2 grant provided by Laravel Passport",
* type="oauth2",
* authorizationUrl="/oauth/authorize",
* tokenUrl="/oauth/token",
* flow="accessCode",
* scopes={
* *
* }
* ),
*/
The securityDefinition
property is arbitrary but needs to be included in every protected route definition.
You can specify multiple security schemes to cover things like a generic API key or likely multiple OAuth flows, though I haven't tried working out the latter.
These are the supported flows and it's important to note that Swaggervel is currently on the OpenAPI 2.0
specification, though this may change in the future.
The scopes specified include everything (*) but we could define any scopes explicitly.
It should be noted that we also need to set up the route definitions in our resource Controller classes but due to the verbosity they are too much to include in this post.
A small snippet that is unique to working with this setup is the following:
* security={
* {
* "passport-swaggervel_auth": {"*"}
* }
* },
This tells a specific endpoint to use the securityDefinition
created earlier and it's important that these match.
The example project has rudimentary UserController
, User
model, and UserRequest
definitions that should be a decent starting point, though I can't vouch for them being very comprehensive.
First we need to create an OAuth client specifically for Swaggervel connections.
Go to the /home endpoint and, under OAuth Clients, click Create New Client.
Under Name, specify Laravel Passport Swaggervel or just Swaggervel.
Under Redirect URL we're unable to specify /vendor/swaggervel/oauth2-redirect.html directly, so use a placeholder like https://passport-swaggervel.test/vendor/swaggervel/oauth2-redirect.html instead.
Using your SQL toolbox of choice, navigate to the table oauth_clients and look for the row with the name specified in the previous step, in our case Laravel Passport Swaggervel. Manually update the redirect column to /vendor/swaggervel/oauth2-redirect.html.
Now that our OAuth client in Passport should be set up correctly, we focus our attention on the config/swaggervel.php settings. The client-id should be set to what Passport shows in the UI as the Client ID field. This is also the id of the row in the oauth_clients table.
The client-secret should be set to what Passport shows in the UI as the Secret field.
We also set both secure-protocol and init-o-auth to true, the latter of which fills in the UI with our secrets; otherwise we'd have to put them in manually.
For the OAuth2 redirect to function properly, we need to modify the Swagger UI configuration in resources/views/vendor/swaggervel/index.blade.php. Under const ui = SwaggerUIBundle({, right below the url parameter, add oauth2RedirectUrl: '/vendor/swaggervel/oauth2-redirect.html',
This reinforcement is necessary as the Swagger UI doesn't 'catch' the tokens properly without this.
Other notable additions that make the UI slightly easier to work with:
tagsSorter: 'alpha',
operationsSorter: 'alpha',
docExpansion: 'list',
filter: true
First we go to the api/docs endpoint to display the Swagger UI.
Click the Authorize button with the unlocked padlock icon.
Verify the client_id and client_secret sections are filled in.
Click Authorize and the Laravel Passport screen labelled Authorization Request should display with the Authorize and Cancel buttons.
Click Authorize again and you should be redirected back to Swagger with the client_id and client_secret now showing as ****** and a Logout button instead of Authorize.
We should now be able to click on the GET /users route, click the Try it out button, click on the blue Execute button, and be greeted with our expected response as a list of users.
We've hopefully highlighted the basic touch points of the process with the example code going into much further detail. The project is lightly opinionated to facilitate practices that have served me well so far. It is by no means a complete reference but it should be a good jumping off point when it's somewhat harder to see the big picture without a comprehensive example.
In case you need the link to the project again.
]]>/metrics
with the output being statistics in Prometheus' format.
The real power of Prometheus comes when you expose your own /metrics
endpoint and have Prometheus consume the statistics you generate.
This post is also a very good introduction with the section Building your own exporter
being extremely valuable in describing just some of the possibilities.
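To make that concrete, here is roughly what a scrape target returns; the metric below is made up for illustration (real exporters generate this with a client library rather than by hand):

```shell
#!/usr/bin/env sh
# Emit a minimal, valid payload in the Prometheus text exposition format:
# a HELP line, a TYPE line, and one sample. Serve this at /metrics over
# HTTP and Prometheus can scrape it.
cat <<'EOF'
# HELP demo_jobs_processed_total Jobs processed since the exporter started.
# TYPE demo_jobs_processed_total counter
demo_jobs_processed_total 42
EOF
```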
After getting my bearings I started with a prototype with a simple premise "Why look at the usage graphs in Digital Ocean for each server independently? Why not have it in one location?" How To Install Prometheus on Ubuntu 16.04 is a very good primer to get everything up and running quickly.
I've made a few modifications since working through the article:
- Created the user prometheus:prometheus for ownership of core Prometheus processes like prometheus or alertmanager: sudo useradd --no-create-home --shell /bin/false prometheus
- Created the user prometheus-exporter:prometheus-exporter for ownership of exporters. Exporters should possibly be more isolated but I feel it may be a case of YAGNI: sudo useradd --no-create-home --shell /bin/false prometheus-exporter
- Set scrape_interval: 1m
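That interval lives in the global section of prometheus.yml; a minimal sketch with a single node_exporter job (the job name and target are illustrative, 9100 is node_exporter's default port):

```yaml
global:
  scrape_interval: 1m

scrape_configs:
  - job_name: node
    static_configs:
      - targets: ['localhost:9100']
```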
At $dayJob we've moved to provisioning servers using Laravel Forge, which has the possibility of utilizing exporters for mysqld, mariadb, postgres, memcached, redis, beanstalkd, nginx, php-fpm, and sendmail.
I've opted to use node_exporter, mysqld, nginx-vts-exporter, php-fpm, and redis respectively.
To put the original premise into perspective, replicating the newer monitoring agent graphs in Digital Ocean only requires node_exporter.
A few of the exporters require very little setup, only setting a few configuration variables in systemd service definitions. Other exporters like nginx-vts-exporter
require building nginx from source.
I plan to introduce a series of posts that should aid in getting a very rudimentary implementation running. There is an abundant usage of Kubernetes in the Prometheus ecosystem, to the point that it almost seems required but fortunately it also just works(tm) in a traditional virtual machine without any real fuss.
]]>It wasn't until June 1st that I finally understood the full breadth of the transition and stumbled upon the integration faqs. The important bit of information is this snippet:
Will I be able to access my Code School invoices or course history?
No. Your invoices and course history will not carry over or be accessible as of 6/1.
Code School customers were instructed to generate a PDF of their profile before the migration. Due to finding the integration FAQs after June 1st, sadly I wasn't able to do that in time.
What particularly impacts me the most is a belief that pointing potential employers to a reputable website as a source of truth carries far more weight than a PDF that can be altered. As a web developer in an industry where employers seem to assume a resume is partially or wholly embellished, this seems like a step backwards.
In spite of the transition pains, I do find Pluralsight's Skill IQ
to be a fresh way to measure competency with multiple choice questions that cover broad aspects of a given topic.
You're shown what is marked wrong so you can learn from your mistakes, and I believe the equivalent of the old Code School subscription allows unlimited retests.
The integration with Stack Overflow's developer story is compelling enough to use it and I did gain quite a sense of accomplishment when I scored in the very low expert level range.
As I finished typing this up I noticed Pluralsight seems to have a fair number of the Code School courses by searching for the keyword "Code School".
There are newer interactive courses like the one titled HTML 5 and CSS 3: Overview of Tag, Attribute and Selector Additions
but the introductory video still carries the Front End Formations title it had on Code School.
It appears that some of the content is migrating over but things aren't 1:1 so we may never get credit for courses we've essentially completed.
I plan on going through the course shortly, as I hope at least the challenges have been updated, but it would be a terrible experience to go through all of it only to realize I'd completed it recently.
I don't quite know how I feel about the transition a month in and now after noticing at least some of the content was moved over. It's hard to lose the accomplishments but the outcome would've been no different if Code School closed completely. It does have me pause to make sure the course accomplishments I share are worth the investment and that's likely an important thing to remember whenever similar services catch my attention.
]]>I've been bitten by this issue so many times that I have a form of amnesia where I forget that it happened all over again. This github issue highlights the problem but I'm more of a visual learner.
The problem can be traced back to configuring the redirect_uri parameter incorrectly. OAuth2 strictly requires that the callbacks are identical between the server and consumer(s). For consumers that are external to the app, this is almost never a problem. For first-party consumers like Swagger(vel), this is extremely easy to configure incorrectly.
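The failure mode is easy to demonstrate: the comparison is an exact string match, so even a stray trailing slash makes a different redirect_uri (the URLs below are placeholders):

```shell
#!/usr/bin/env sh
# The redirect stored on the OAuth2 server and the redirect_uri the consumer
# sends must be byte-for-byte identical; a trailing slash breaks the flow.
server="https://app.test/vendor/swaggervel/oauth2-redirect.html"
client="https://app.test/vendor/swaggervel/oauth2-redirect.html/"
if [ "$server" = "$client" ]; then
  echo "match: callback accepted"
else
  echo "mismatch: OAuth2 rejects the callback"
fi
```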
]]>We modify scripts/create-mysql.sh with the following snippet:
#!/usr/bin/env bash
cat > /etc/mysql/conf.d/password_expiration.cnf << EOF
[mysqld]
default_password_lifetime = 0
EOF
service mysql restart
DB=$1;
mysql -e "CREATE DATABASE IF NOT EXISTS \`$DB\` DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_unicode_ci";
This change pipes the default_password_lifetime setting into the file /etc/mysql/conf.d/password_expiration.cnf and restarts the mysql service. The provisioning process can then proceed as normal.
This approach requires no updated vagrant virtualbox image or other similar adjustments and allows us to keep using version 0.3.3 indefinitely.
I'm likely going to abandon my settler and homestead forks as I couldn't adequately maintain them moving forward. I'll work to push this upstream as I feel it should be implemented there.
]]>PDOException in Connector.php line 55:
SQLSTATE[HY000] [1862] Your password has expired. To log in you must change it using a client that supports expired passwords.
Firing up a different vagrant machine, I was greeted with the same problem. This seemed to affect all of the vagrant boxes using version laravel/homestead (virtualbox, 0.3.3).
On the machine, the MySQL version displayed by mysql --version is:
mysql Ver 14.14 Distrib 5.7.9, for Linux (x86_64) using EditLine wrapper
MySQL 5.7's password expiration policy seemed to point to the culprit.
From MySQL 5.7.4 to 5.7.10, the default default_password_lifetime value is 360 (passwords must be changed approximately once per year). For those versions, be aware that, if you make no changes to the default_password_lifetime variable or to individual user accounts, all user passwords will expire after 360 days, and all user accounts will start running in restricted mode when this happens.
Looking at the list of users with the relevant columns shows that the password for the user homestead was set on 2015-11-13 03:50:18.
mysql> select host, user, authentication_string, password_expired, password_last_changed, password_lifetime from mysql.user;
+-----------+-----------+-------------------------------------------+------------------+-----------------------+-------------------+
| host | user | authentication_string | password_expired | password_last_changed | password_lifetime |
+-----------+-----------+-------------------------------------------+------------------+-----------------------+-------------------+
| localhost | root | *14E65567ABDB5135D0CFD9A70B3032C179A49EE7 | N | 2016-11-08 22:28:11 | NULL |
| localhost | mysql.sys | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | N | 2015-11-13 03:50:10 | NULL |
| 0.0.0.0 | root | *14E65567ABDB5135D0CFD9A70B3032C179A49EE7 | N | 2015-11-13 03:50:15 | NULL |
| 0.0.0.0 | homestead | *14E65567ABDB5135D0CFD9A70B3032C179A49EE7 | N | 2015-11-13 03:50:18 | NULL |
| % | homestead | *14E65567ABDB5135D0CFD9A70B3032C179A49EE7 | N | 2015-11-13 03:50:18 | NULL |
+-----------+-----------+-------------------------------------------+------------------+-----------------------+-------------------+
5 rows in set (0.00 sec)
Date manipulation in PHP showed that 360 days from 2015-11-13 03:50:18 is 2016-11-07 03:50:18, about the time this started occurring.
It was later that I discovered this pull request didn't make it into the revert-56-master branch used to build the 0.3.3 box. It succinctly described the problem at hand.
I saw 4 possible choices for a permanent solution:
- Set default_password_lifetime=0 explicitly in /etc/mysql/my.cnf.
- Use PASSWORD EXPIRE NEVER to disable password expiration for that user.
to disable password expiration for that user.In looking to correct upstream, the pull request was denied with very good reason. It was a ton of work to seemingly get the 5.6 branch up to master and I have absolutely no guarantee that something wasn't broken in the process.
Not being content with abandoning that work, I pushed a vagrant virtualbox image that should continue the 5.6 branch forward for the foreseeable future. There is one major caveat, it requires a patch to Homestead v2 to accommodate the changes introduced.
Steps required to use the image:
- Run vagrant box add w0rd-driven/homestead.
- Set box: "w0rd-driven/homestead" in Homestead.yaml to specify a different vagrant box than the default of laravel/homestead.
- Add "laravel/homestead": "2.0.x-dev" to the require-dev section of composer.json.
- Add the following to the repositories section of composer.json:
{
"type": "git",
"url": "https://github.com/w0rd-driven/homestead.git"
}
],
- Run composer update to change to the new composer package.
- Run vagrant destroy -f, then vagrant up.
I've enabled issues on both forks of settler and homestead. Unfortunately, I don't have VMWare Fusion to build the vmware provider image. If anyone has the capabilities, I would gladly grant access to push the image.
]]>To make this easier, I thought I would repurpose the steps as I can't seem to find an independent or direct link:
I yearned for one interchangeable format that allowed me to generate HTML, Word and PDF at the very least. JSON Resume combined with resume-linkedin seemed like a great fit. Unfortunately, due to recent LinkedIn API changes resume-linkedin was all but useless. My first contribution was born out of the realization that if you could get the LinkedIn data through the API console, the process still worked, albeit extremely cumbersome.
As I worked on migrating this site to my personal fork of a theme in Hexo, I thought a custom JSON Resume theme would also be a good fit. These changes to my resume can be found here or here.
In that time span I've:
That's really only scratching the surface. It would've been helpful to have blog posts as I moved along but as with most things, life got in the way.
My main goal for the early part of 2016 is to revamp this site and make it the playground I was looking for in 2013. Octopress is really nice but if I upgrade to v3 it's not much more work to migrating away to something like Hexo, Metalsmith, or DocPad.
]]>In my last post, I had proposed an attempt to tackle the FizzBuzz problem. PowerShell was done, PHP was barely started but I never pointed to it in a subsequent post or finished ...
]]>In my last post, I had proposed an attempt to tackle the FizzBuzz problem. PowerShell was done, PHP was barely started, but I never pointed to it in a subsequent post or finished what I wanted. The project URL has completed and checked solutions for PHP and Node.js. I had mentioned F#, Objective-C, CoffeeScript, C/C++, Go, Dart, and Haskell as the planned languages I've mostly touched in passing or know about, as well as C#, Pascal, and Ruby, but I may never get to them.
Shortly after that last post, I switched jobs from .NET to web development focusing on PHP with HTML, CSS, and JavaScript. That one action shifted much of my focus away from most of the languages in that list. With ES6 coming and having recently finished a CodeSchool course in CoffeeScript, the JavaScript landscape is looking pretty awesome. Elixir and the Phoenix Framework have recently stood out as upcoming contenders for my mindshare as well.
My last post taught me that while I may know of a language, it doesn't mean I'll have a genuine desire to pursue it. It can also easily become difficult to want to pursue development outside of your day job. Staying current, however, is always worth pursuing. Tooling and efficiency around web development seems to have come a very long way.
To keep this post brief, I plan on making more updates as I feel a lot has changed for me in the past 2 years that I'd still love to share.
]]>My comment could ...
]]>My comment could likely be seen as dismissive or arrogant. I get that. My biggest problem is that because people still fail, this is the interview equivalent of patty cake: awkward, childish, and unrewarding (unless you're a 2-year-old).
To be honest, I don't quite understand my disdain for the problem. It's simple enough that it can be solved a number of ways quickly, and it gets you to express at least the fundamentals of development in a particular language.
This exercise is an excellent opportunity for a number of things:
Note: I'm using https://rosettacode.org/wiki/FizzBuzz as a language guide only. If you see me follow a specific example, punch me in the nuts.
The best description of the problem can be found here, specifically (altered for this example):
Write a program that prints the numbers from 1 to 100. But for multiples of three print "Jazz" instead of the number and for the multiples of five print "Hands". For numbers which are multiples of both three and five print "JazzHands".
This brings up some excellent points. I'm definitely not above FizzBuzz or live coding but I still can't pinpoint why I have beef with this particular problem.
I honestly can't remember the last time I've actually tackled this problem so the potential to look really foolish, at least at the beginning, is pretty high.
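For reference, the altered problem above might come out something like this in JavaScript (a quick sketch of one common approach, not cribbed from any Rosetta Code example):

```javascript
// "JazzHands" variant of FizzBuzz: 1 to limit, multiples of 3 print "Jazz",
// multiples of 5 print "Hands", multiples of both print "JazzHands".
// The multiples-of-both check has to come first, or 15 would print "Jazz".
function jazzHands(limit = 100) {
  const lines = [];
  for (let i = 1; i <= limit; i++) {
    if (i % 15 === 0) {
      lines.push("JazzHands");
    } else if (i % 3 === 0) {
      lines.push("Jazz");
    } else if (i % 5 === 0) {
      lines.push("Hands");
    } else {
      lines.push(String(i));
    }
  }
  return lines;
}

console.log(jazzHands().join("\n"));
```

The only real trap is the ordering of the checks; everything else is the fundamentals the exercise is meant to surface.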
]]>