Development

Laravel Valet for Production Domains

January 8, 2023 · 2 min read

Recently, after a brief outage at work, I wondered if it would be possible to replicate the problem locally using Laravel Valet. My Google search landed on this StackOverflow post, where the answers shot down the idea. Not to be dissuaded by something I read on the internet, I started investigating whether it was possible and stumbled upon what I think is a viable solution. There aren't many hoops to jump through or major quirks, so I believe it's not only possible but could be supported out of the box.

In my case, I want to proxy the domain scdn-app.thinkorange.com through my local version of the Laravel application.

  1. Edit ~/.config/valet/config.json on macOS and change the tld parameter from test to com.
  2. Change to the directory of your application.
  3. Run the command valet link scdn-app.thinkorange to set up our valet configuration to point the domain to this directory.
  4. Run the command valet secure scdn-app.thinkorange to set up the SSL certificate.
  5. Change to the dnsmasq configuration directory: cd ~/.config/valet/dnsmasq.d.
  6. Copy the existing TLD config to cover the .com domain with the command cp tld-test.conf tld-com.conf.
  7. Edit the new file to change the first address line to address=/.com/127.0.0.1 and save the file.
  8. (Optionally) Isolate the site to PHP 8.1 with the command valet isolate --site scdn-app.thinkorange php@8.1.
  9. Change your /etc/hosts file to point the domain at 127.0.0.1 for IPv4 and ::1 for IPv6 (example entries below). I use the excellent Gas Mask to make this step easier.
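
For reference, here's roughly what the two hand-edited files end up containing with my example domain; substitute whatever production domain you're proxying:

    # ~/.config/valet/dnsmasq.d/tld-com.conf (first address line changed, rest of the copied file untouched)
    address=/.com/127.0.0.1

    # /etc/hosts
    127.0.0.1  scdn-app.thinkorange.com
    ::1        scdn-app.thinkorange.com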

Now we should have a functional production proxy through our local machine. This configuration creates a few problems around keeping the com TLD, but fortunately only a few extra steps are needed to switch back to .test while keeping this site functional.

  1. Edit ~/.config/valet/config.json again and change the tld parameter from com back to test. This change will immediately break our site.
  2. Change to the Sites directory cd ~/.config/valet/Sites.
  3. If we use ls -al to list the directory, we'll see our site scdn-app.thinkorange. Let's change that.
  4. Run the command mv scdn-app.thinkorange scdn-app.thinkorange.com.

Our site should now be working again. We are also able to continue serving our previous local test domains.

Because we can create a permanently functional system using these steps, I believe it should be possible to create a pull request to reduce the number of hoops we have to jump through. I'd love to be able to run valet link scdn-app.thinkorange.com. with a period at the end to denote I'm including the full domain with TLD. That would eliminate the temporary step of editing the config.json file, and the Sites directory would just work(TM) as it would include the .com directory name. I don't believe we even need the dnsmasq changes as I'm able to navigate to a functional site without them. I believe Gas Mask is doing the work, but it's better to be safe than sorry.

If you'd prefer a video instead, there's also a YouTube video where I stumble through recreating these steps from scratch.

Livebook Autosaves

December 14, 2022 · 3 min read

Tell me if you've done this before. You write up a nice little prototype of an idea in Livebook. You then get distracted by life situations like eating, writing an email, or taking a nap. You feel the need to close Livebook or prune the multiple sessions you've had running for weeks now. Because you have a million tabs open (with a session manager) and too many in Livebook to individually check, you restart your computer and let it crash(TM). When you open up Livebook again, "Oh. Shiiiiit" you exclaim. Where the hell did that notebook go? I'm 100% sure I clicked the disk icon, what the hell? If you're like me, you may have created this forked Livebook from memory, possibly taking a better approach.

There is a better way to handle this scenario. Livebook has had autosaves since 0.4; according to the changelog, the feature was added in this PR:

https://github.com/livebook-dev/livebook/pull/736

To find your autosave files:

  • For the Desktop application and CLI in production: ~/Library/Application Support/livebook/autosaved/.

    • On my machine this expands to the absolute path /Users/jbrayton/Library/Application Support/livebook/autosaved/.
  • For the dev environment: in config/dev.exs, this is set as config :livebook, :data_path, Path.expand("tmp/livebook_data/dev").

    • On my machine this expands to the absolute path /Users/Shared/repositories/personal/elixir/livebook/tmp/livebook_data/dev/autosaved/.
  • For the test environment: in config/test.exs this is set as Path.expand("tmp/livebook_data/test").

    • On my machine this expands to the absolute path /Users/Shared/repositories/personal/elixir/livebook/tmp/livebook_data/test/autosaved/.

Notebooks are grouped by day in the autosave directory, and the date corresponds to when they were created (the moment you click the New notebook button).

To view or change your autosave directory in the CLI:

  • Go to http://localhost:8080/settings
  • Or, if you're already in a notebook, click the Livebook icon in the top left and click Settings under the Home and Learn links.

Livebook CLI settings page

For the Desktop application, the port will be randomized but you can either change the URL to tack on /settings after the port or click around to the settings page as described earlier.

Livebook Desktop application settings page

Tracing the Default Setting

If you are curious how this setting gets configured, we can start by looking at Livebook.Settings.default_autosave_path() in https://github.com/livebook-dev/livebook/blob/main/lib/livebook/settings.ex#L32-L34. We follow Livebook.Config.data_path() to https://github.com/livebook-dev/livebook/blob/main/lib/livebook/config.ex#L76-L78 and then to the Erlang function :filename.basedir(:user_data, "livebook").

Running this in Livebook, we get the output "/Users/jbrayton/Library/Application Support/livebook", precisely where the desktop app stores its files.
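
As a sanity check, evaluating that same Erlang call in any Livebook code cell prints the platform's data path (the output shown is from my machine):

    :filename.basedir(:user_data, "livebook")
    #=> "/Users/jbrayton/Library/Application Support/livebook"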

Finding Files

What led me to this discovery, after vaguely remembering autosave was a thing, was looking for files on my computer. I purposefully install and use the locate command because I find it far easier than remembering the find -name syntax.

Here's the output for checking that the word autosave is in any directory or file name:

> ~ locate autosaved/ 
/Users/Shared/repositories/personal/elixir/livebook/tmp/livebook_data/dev/autosaved/2022_10_31/18_25_03_mapset_drills_hedh.livemd
/Users/Shared/repositories/personal/elixir/livebook/tmp/livebook_data/dev/autosaved/2022_11_03/18_12_21_teller_bank_challenge_pv4e.livemd
/Users/Shared/repositories/personal/elixir/livebook/tmp/livebook_data/dev/autosaved/2022_11_03/18_13_39_untitled_notebook_pidb.livemd
/Users/Shared/repositories/personal/elixir/livebook/tmp/livebook_data/dev/autosaved/2022_11_03/19_31_57_dockyard_academy_amas_p75r.livemd
/Users/Shared/repositories/personal/elixir/livebook/tmp/livebook_data/dev/autosaved/2022_11_03/20_02_17_intro_to_timescale_jm7r.livemd
/Users/Shared/repositories/personal/elixir/livebook/tmp/livebook_data/dev/autosaved/2022_11_08/11_10_21_untitled_notebook_ervg.livemd
/Users/Shared/repositories/personal/elixir/livebook/tmp/livebook_data/dev/autosaved/2022_11_22/19_15_12_untitled_notebook_p75e.livemd

What I found interesting was that my files in ~/Library/Application Support/livebook/autosaved/ did not show up. Had I not realized there could be different locations, I might have overlooked the notebook I was looking for all along. I have no clue why locate doesn't scour the directories in ~/Library that it should have access to, but that's a problem for another day.
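
For comparison, the find syntax I can never remember looks roughly like this; unlike locate, it walks the filesystem directly rather than querying a prebuilt database, so it isn't affected by whatever locate is skipping:

    find ~ -type d -name autosaved 2>/dev/null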

Introduction to DockYard Beacon CMS

December 1, 2022 · 7 min read

In December of 2021, Brian Cardarella introduced DockYard Beacon CMS in this series of tweets.

Over the course of the past year, I've created a sample project a total of three times to get a better understanding of how it operates. I haven't seen a ton of content on Beacon beyond announcement tweets, the mention in the ElixirConf 2022 keynote, and https://beaconcms.org/. This post covers the complete instructions in the readme with some notes on where to go from here. I ran into a few snags at first, but a lot of those initial pain points have since been hammered out. While a basic "Hello World" sample project is great, I plan on expanding the sample with deeper dives into how Beacon serves up content. It takes a few novel approaches I haven't seen before: the CMS can either run alongside your application or be centralized with multi-tenancy. One CMS can serve all of your ancillary marketing sites, blogs, or anywhere else you need content.

The following instructions are also listed on the sample application readme so you're welcome to skip them if you want to look at the code.

Installation

Steps

  1. Create a top-level directory to keep our application pair. This layout is temporary while the project matures.

    1. mkdir beacon_sample
  2. Clone the BeaconCMS/beacon repository to ./beacon.

    1. git clone git@github.com:BeaconCMS/beacon.git
  3. Start with our first step from the Readme

    1. Create an umbrella phoenix app
    2. mix phx.new --umbrella --install beacon_sample
  4. Go to the umbrella project directory

    1. cd beacon_sample/
  5. Initialize git

    1. git init
  6. Commit the freshly initialized project

    1. Initial commit of Phoenix v1.6.15 as of the time of this writing.
    2. I prefer to capture the version and everything scaffolded as-is. This allows us to revert back to the pristine state if we ever need to.
  7. Add :beacon as a dependency to both apps in your umbrella project

    # Local:
    {:beacon, path: "../../../beacon"},
    # Or from GitHub:
    {:beacon, github: "beaconCMS/beacon"},
    1. Add to apps/beacon_sample/mix.exs and apps/beacon_sample_web/mix.exs under the section defp deps do.
    2. We choose the local version so we can override commits as needed. When the project solidifies, the GitHub dependency will be far more practical.
    3. I'll want to research the git dependency, as I believe we can specify commits; there may be no need for a local checkout at all.
  8. Run mix deps.get to install the dependencies.
  9. Commit the changes.

    1. Add :beacon as a dependency to both apps in your umbrella project seems like a good enough commit message.
  10. Configure Beacon Repo

    1. Add the Beacon.Repo under the ecto_repos: section in config/config.exs.
    2. Configure the database in dev.exs. We'll do production later.

      # Configure beacon database
      config :beacon, Beacon.Repo,
        username: "postgres",
        password: "postgres",
        database: "beacon_sample_beacon",
        hostname: "localhost",
        show_sensitive_data_on_connection_error: true,
        pool_size: 10
  11. Commit the changes.

    1. Use Configure Beacon Repo as the commit subject with Configure the beacon repository in our dev-only environment for now. as the body.
  12. Create a BeaconDataSource module that implements Beacon.DataSource.Behaviour

    1. Create apps/beacon_sample/lib/beacon_sample/datasource.ex

      defmodule BeaconSample.BeaconDataSource do
        @behaviour Beacon.DataSource.Behaviour
      
        def live_data("my_site", ["home"], _params), do: %{vals: ["first", "second", "third"]}
        def live_data("my_site", ["blog", blog_slug], _params), do: %{blog_slug_uppercase: String.upcase(blog_slug)}
        def live_data(_, _, _), do: %{}
      end
    2. Add that DataSource to your config/config.exs

      config :beacon,
        data_source: BeaconSample.BeaconDataSource
  13. Commit the changes.

    1. Configure BeaconDataSource
  14. Make router (apps/beacon_sample_web/lib/beacon_sample_web/router.ex) changes to cover Beacon pages.

    1. Add a :beacon pipeline. I typically add this near the other pipeline sections at the top, starting around line 17.

      pipeline :beacon do
        plug BeaconWeb.Plug
      end
    2. Add a BeaconWeb scope.

      scope "/", BeaconWeb do
        pipe_through :browser
        pipe_through :beacon
      
        live_session :beacon, session: %{"beacon_site" => "my_site"} do
          live "/beacon/*path", PageLive, :path
        end
      end
    3. Comment out existing scope.

      # scope "/", BeaconSampleWeb do
      #   pipe_through :browser
      
      #   get "/", PageController, :index
      # end
  15. Commit the changes.

    1. Add routing changes
  16. Add some components to your apps/beacon_sample/priv/repo/seeds.exs.

    alias Beacon.Components
    alias Beacon.Pages
    alias Beacon.Layouts
    alias Beacon.Stylesheets
    
    Stylesheets.create_stylesheet!(%{
      site: "my_site",
      name: "sample_stylesheet",
      content: "body {cursor: zoom-in;}"
    })
    
    Components.create_component!(%{
      site: "my_site",
      name: "sample_component",
      body: """
      <li>
        <%= @val %>
      </li>
      """
    })
    
    %{id: layout_id} =
      Layouts.create_layout!(%{
        site: "my_site",
        title: "Sample Home Page",
        meta_tags: %{"foo" => "bar"},
        stylesheet_urls: [],
        body: """
        <header>
          Header
        </header>
        <%= @inner_content %>
    
        <footer>
          Page Footer
        </footer>
        """
      })
    
    %{id: page_id} =
      Pages.create_page!(%{
        path: "home",
        site: "my_site",
        layout_id: layout_id,
        template: """
        <main>
          <h2>Some Values:</h2>
          <ul>
            <%= for val <- @beacon_live_data[:vals] do %>
              <%= my_component("sample_component", val: val) %>
            <% end %>
          </ul>
          <.form let={f} for={:greeting} phx-submit="hello">
            Name: <%= text_input f, :name %> <%= submit "Hello" %>
          </.form>
          <%= if assigns[:message], do: assigns.message %>
        </main>
        """
      })
    
    Pages.create_page!(%{
      path: "blog/:blog_slug",
      site: "my_site",
      layout_id: layout_id,
      template: """
      <main>
        <h2>A blog</h2>
        <ul>
          <li>Path Params Blog Slug: <%= @beacon_path_params.blog_slug %></li>
          <li>Live Data blog_slug_uppercase: <%= @beacon_live_data.blog_slug_uppercase %></li>
        </ul>
      </main>
      """
    })
    
    Pages.create_page_event!(%{
      page_id: page_id,
      event_name: "hello",
      code: """
        {:noreply, Phoenix.LiveView.assign(socket, :message, "Hello \#{event_params["greeting"]["name"]}!")}
      """
    })
  17. Run ecto.reset to create and seed our database(s).

    1. cd apps/beacon_sample.
    2. mix ecto.setup (as our repos haven't been created yet).
    3. mix ecto.reset thereafter.
  18. We can skip to Step 22 now that the SafeCode package works as expected.
  19. This is typically where we run into issues with safe_code on the inner content of the layout seed, specifically:

    ** (RuntimeError) invalid_node:
    
    assigns . :inner_content
    1. If you remove the line <%= @inner_content %>, seeding seems to complete.
    2. Running mix phx.server throws another error:

      ** (RuntimeError) invalid_node:
      
      assigns . :val
    3. It looks like safe_code is problematic and needs to be surgically removed from Beacon for now.
  20. In Beacon's repository, remove SafeCode.Validator.validate_heex! function calls from the loaders

    1. lib/beacon/loader/layout_module_loader.ex
    2. lib/beacon/loader/page_module_loader.ex
    3. lib/beacon/loader/component_module_loader.ex
  21. Fix the seeder to work without SafeCode.

    1. Change line 49 in apps/beacon_sample/priv/repo/seeds.exs under Pages.create_page! from <%= for val <- live_data[:vals] do %> to <%= for val <- live_data.vals do %>.
  22. Commit the seeder changes.

    1. Add component seeds
  23. Enable Page Management and the Page Management API in router (apps/beacon_sample_web/lib/beacon_sample_web/router.ex).

    require BeaconWeb.PageManagement
    require BeaconWeb.PageManagementApi
    
    scope "/page_management", BeaconWeb.PageManagement do
      pipe_through :browser
    
      BeaconWeb.PageManagement.routes()
    end
    
    scope "/page_management_api", BeaconWeb.PageManagementApi do
      pipe_through :api
    
      BeaconWeb.PageManagementApi.routes()
    end
  24. Commit the Page Management router changes.

    1. Add Page Management routes
  25. Navigate to http://localhost:4000/beacon/home to view the main CMS page.

    1. You should see Header, Some Values, and Page Footer with a zoom-in cursor over the page.
  26. Navigate to http://localhost:4000/beacon/blog/beacon_is_awesome to view the blog post.

    1. You should see Header, A blog, and Page Footer with a zoom-in cursor over the page.
  27. Navigate to http://localhost:4000/page_management/pages to view the Page Management section.

    1. You should see Listing Pages, Reload Modules, a list of pages, and New Page.

Playground

We should put the page management through its paces to determine weak points.

  1. Add another more robust layout.

    1. Can we bring in JS frameworks like Vue? My guess is no; the layout looks to start under a <main>.
    2. Inject JavaScript at the bottom; this should load at the bottom of our <body> section.
    3. Try CDN URLs first, then localhost.
  2. Add another stylesheet. How do we use stylesheet_urls?
  3. Add another more robust component.

    1. Can we use LiveView slots here? We're on 0.17.7.
  4. A replica of the Laravel Nova panel of pages. Welcome and Home are Laravel defaults. Users would be useful as we could integrate with phx.gen.auth.

    1. What migrations are possibly included by Phoenix? Only users?
    2. Add a user profile page.

Notes

  • The dependency safe_code was a problem during my first two attempts.

    • The third attempt on 11/6/2022 has no issues so far.
  • I ran into issues by failing to add a BeaconWeb scope and adding it as BeaconSampleWeb instead.

    • Navigating to http://localhost:4000/page/home throws an UndefinedFunctionError as function BeaconSampleWeb.PageLive.__live__/0 is undefined (module BeaconSampleWeb.PageLive is not available).
  • The sample isn't as "pristine" as I'd like due to the bug fix but it really shouldn't be a showstopper.

    • Fixed this as I generated a new repository. There really aren't a ton of steps.
  • As of 3/16 page management only covers the page. The layout, component, and stylesheet models are not covered yet.
  • Stylesheets are injected into the <head> as inline <style> tags.
  • Layout sits under <body><div data-phx-main="true">.
  • Running the server (mix phx.server) immediately boots our Beacon components before it shows the URL.

Laravel Passport usage with Swaggervel v2.3

July 10, 2018 · 5 min read

Overview

I've been using this Swaggervel package with almost all my recent Laravel projects. A few instances were lightly customized to work against different authentication schemes and I only briefly touched on using Laravel Passport.

I wanted to highlight a few areas while also offering up an example project as a lightly opinionated jumping-off point. Even just the highlights cover quite a bit, but the example should have ample detail in its commit messages and in the finished product.

Setting up Laravel and Laravel Passport

First we run laravel new <project_name>, git init and commit immediately to mark our base Laravel installation. I've always preferred this immediate commit over making customizations first as it's far easier to track your customizations versus the base install. Next, we run through the Laravel Passport docs with the following caveats:

  • php artisan vendor:publish --tag=passport-migrations doesn't copy the migrations as expected. We manually do this.
  • php artisan migrate --step creates a migration batch for each migration file individually. This lets us rollback to individual steps and is primarily personal preference.
  • app/Providers/AuthServiceProvider contains the following:
Passport::routes(function (RouteRegistrar $routeRegistrar) {
    $routeRegistrar->all();
});
Passport::tokensCan([
]);
Passport::enableImplicitGrant();
Passport::tokensExpireIn(Carbon::now()->addDays(15));
Passport::refreshTokensExpireIn(Carbon::now()->addDays(30));
  • Run artisan make:auth to utilize the app layout and create a home view that is protected by the Login prompt.

    • The Passport Vue components could be displayed on the welcome page but we're attempting to set future users up for better practices.
  • Create a proper WelcomeController with a matching view that utilizes the same app layout.

    • This isn't strictly necessary, but this one step makes it possible to properly utilize artisan route:cache in the future, as route closures aren't supported.

Setting up Swaggervel

Now that the basics are complete, we bring in Swaggervel via composer require appointer/swaggervel --dev. We can ignore the line in the documentation that mentions adding Appointer\Swaggervel\SwaggervelServiceProvider::class as that's only for Laravel versions earlier than 5.5 without package discovery. It's necessary to run artisan vendor:publish to publish the content as we're using this package as a dev dependency and the assets won't show up otherwise. Now that Swaggervel is in place we can bring it all together.

To start, we create the file app/Http/Controllers/Api/v1/Controller.php as our generic API base controller. This controller houses our root-level @SWG\Info definition in a convenient location. This also sets us up for future work where API controllers are versioned, though this is personal preference. The secret sauce is the @SWG\SecurityScheme annotation:

/**
 *   @SWG\SecurityScheme(
 *     securityDefinition="passport-swaggervel_auth",
 *     description="OAuth2 grant provided by Laravel Passport",
 *     type="oauth2",
 *     authorizationUrl="/oauth/authorize",
 *     tokenUrl="/oauth/token",
 *     flow="accessCode",
 *     scopes={
 *       "*": "Access to everything"
 *     }
 *   ),
 */

The securityDefinition property is arbitrary but needs to be included in every protected route definition. You can specify multiple security schemes to cover things like a generic API key or multiple OAuth flows, though I haven't tried working out the latter. These are the supported flows, and it's important to note that Swaggervel is currently on the OpenAPI 2.0 specification, though this may change in the future. The scopes specified include everything (*), but we could define any scopes explicitly. It should be noted that we also need to set up the route definitions in our resource Controller classes, but due to the verbosity they are too much to include in this post. A small snippet that is unique to working with this setup is the following:

*   security={
*     {
*        "passport-swaggervel_auth": {"*"}
*     }
*   },

This tells a specific endpoint to use the securityDefinition created earlier and it's important that these match. The example project has rudimentary UserController, User model, and UserRequest definitions that should be a decent starting point, though I can't vouch for them being very comprehensive.
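
For a sense of what those verbose route definitions look like, here's a stripped-down sketch of a single endpoint annotation with the security block in place. The path, tag, and response description are placeholders for illustration, not copied from the example project:

    /**
     *   @SWG\Get(
     *     path="/api/v1/users",
     *     summary="List users",
     *     tags={"Users"},
     *     security={
     *       {
     *         "passport-swaggervel_auth": {"*"}
     *       }
     *     },
     *     @SWG\Response(response=200, description="A list of users")
     *   )
     */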

Configuring our OAuth Client

First we need to create an OAuth client specifically for Swaggervel connections. Go to the /home endpoint and under OAuth Clients click Create New Client. Under Name specify Laravel Passport Swaggervel or just Swaggervel. Under Redirect URL we're unable to specify /vendor/swaggervel/oauth2-redirect.html directly, so use a placeholder like https://passport-swaggervel.test/vendor/swaggervel/oauth2-redirect.html instead. Using your SQL toolbox of choice, navigate to the table oauth_clients and look for the row with the name specified in the previous step, in our case Laravel Passport Swaggervel. Manually update the redirect column to /vendor/swaggervel/oauth2-redirect.html.

Now that our OAuth client in Passport should be set up correctly, we focus our attention on the config/swaggervel.php settings. The client-id should be set to what Passport shows in the UI as the Client ID field. This is also the id of the row in the oauth_clients table. The client-secret should be set to what Passport shows in the UI as the Secret field. We also set both secure-protocol and init-o-auth to true; the latter fills in the UI with our secrets, otherwise we'd have to enter them manually.
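
As a sketch, the relevant keys in config/swaggervel.php end up looking something like this; the values below are placeholders, so pull your own Client ID and Secret from the Passport UI (or an .env entry):

    // config/swaggervel.php (excerpt)
    'client-id' => '3',                           // Client ID shown by Passport
    'client-secret' => 'your-client-secret-here', // Secret shown by Passport
    'secure-protocol' => true,
    'init-o-auth' => true,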

Correcting Swagger UI to Capture Tokens

For the OAuth2 redirect to function properly we need to modify the Swagger UI configuration in resources/views/vendor/swaggervel/index.blade.php. Under const ui = SwaggerUIBundle({, right below the url parameter, add oauth2RedirectUrl: '/vendor/swaggervel/oauth2-redirect.html'. This is necessary because the Swagger UI doesn't 'catch' the tokens properly without it. Other notable additions that make the UI slightly easier to work with:

tagsSorter: 'alpha',
operationsSorter: 'alpha',
docExpansion: 'list',
filter: true
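
Putting the redirect URL and these options together, the relevant portion of index.blade.php ends up looking something like the following; existingUrl stands in for whatever url value the published view already sets:

    const ui = SwaggerUIBundle({
        url: existingUrl, // keep the url the published view already provides
        oauth2RedirectUrl: '/vendor/swaggervel/oauth2-redirect.html',
        tagsSorter: 'alpha',
        operationsSorter: 'alpha',
        docExpansion: 'list',
        filter: true,
        // ...remaining options from the published view stay unchanged
    });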

Testing Authentication via the Swagger UI

First we go to the api/docs endpoint to display the Swagger UI. Click the Authorize button with the unlocked padlock icon. Verify the client_id and client_secret sections are filled in. Click Authorize and the Laravel Passport screen labelled Authorization Request should display with the Authorize and Cancel buttons. Click Authorize again and you should be redirected back to Swagger with the client_id and client_secret now showing as ****** with a Logout button instead of Authorize. We should now be able to click on the GET /users route, click the Try it out button, click on the blue Execute button and be greeted with our expected response as a list of users.

Conclusion

We've hopefully highlighted the basic touch points of the process, with the example code going into much further detail. The project is lightly opinionated to facilitate practices that have served me well so far. It is by no means a complete reference, but it should be a good jumping-off point, since it's hard to see the big picture without a comprehensive example.

In case you need the link to the project again.

Scratching an Itch with Prometheus

July 5, 2018 · 2 min read

Not too long ago I became obsessed with Prometheus. I'd heard about it for a while, knew it was powerful, and couldn't quite understand how everything fit together. The documentation is extremely verbose for good reason, but it took playing with it for a while for everything to click. This post is a concise yet extensive overview that goes a long way in expressing the basic concepts to my developer brain. In their simplest form, exporters expose an HTTP /metrics endpoint whose output is statistics in Prometheus' text format. The real power of Prometheus comes when you expose your own /metrics endpoint and have Prometheus consume the statistics you generate. This post is also a very good introduction, with the section Building your own exporter being extremely valuable in describing just some of the possibilities.
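
To make "Prometheus' format" concrete, here's a hypothetical counter as an exporter would expose it at /metrics; the metric name and labels are invented for illustration:

    # HELP app_http_requests_total Total HTTP requests handled by the app.
    # TYPE app_http_requests_total counter
    app_http_requests_total{method="get",status="200"} 1027
    app_http_requests_total{method="post",status="500"} 3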

After getting my bearings, I started on a prototype with a simple premise: "Why look at the usage graphs in Digital Ocean for each server independently? Why not have it all in one location?" How To Install Prometheus on Ubuntu 16.04 is a very good primer to get everything up and running quickly.

I've made a few modifications since working through the article:

  • Prometheus version 2.3.1

    • There have been massive perf improvements in v2.3.x.
  • node_exporter version 0.16.0

    • There are significant changes to the metrics naming conventions.
    • This exporter typically has the most coupling with Grafana dashboards and often requires altering them to work correctly.
  • Use prometheus:prometheus for ownership of core prometheus processes like prometheus or alertmanager.

    • sudo useradd --no-create-home --shell /bin/false prometheus
  • Use prometheus-exporter:prometheus-exporter for ownership of exporters. Exporters should possibly be more isolated but I feel it may be a case of YAGNI.

    • sudo useradd --no-create-home --shell /bin/false prometheus-exporter
  • Set scrape_interval to 1 minute: scrape_interval: 1m (see the excerpt after this list).

    • 15 seconds is still doable but I'm currently not concerned with very granular detail.
    • This reduces the load from 4 scrapes per minute to just 1, cutting some of the overhead required of Prometheus and every exporter.
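
The scrape interval change is a one-liner in the global section of prometheus.yml (path as used in the guide above):

    # /etc/prometheus/prometheus.yml (excerpt)
    global:
      scrape_interval: 1m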

At $dayJob we've moved to provisioning servers using Laravel Forge, which opens up the possibility of using exporters for mysqld, mariadb, postgres, memcached, redis, beanstalkd, nginx, php-fpm, and sendmail. I've opted to use node_exporter, mysqld, nginx-vts-exporter, php-fpm, and redis. To put the original premise into perspective, replicating the newer monitoring agent graphs in Digital Ocean only requires node_exporter. A few of the exporters require very little setup, only setting a few configuration variables in their systemd service definitions. Other exporters like nginx-vts-exporter require building nginx from source.
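
As an example of how little setup some exporters need, here is a minimal systemd unit along the lines of the ones in the guide above; the binary path and flags are illustrative rather than copied from a real config:

    # /etc/systemd/system/node_exporter.service
    [Unit]
    Description=Prometheus Node Exporter
    Wants=network-online.target
    After=network-online.target

    [Service]
    User=prometheus-exporter
    Group=prometheus-exporter
    Type=simple
    ExecStart=/usr/local/bin/node_exporter

    [Install]
    WantedBy=multi-user.target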

I plan to introduce a series of posts that should aid in getting a very rudimentary implementation running. There is abundant usage of Kubernetes in the Prometheus ecosystem, to the point that it almost seems required, but fortunately Prometheus also just works(tm) on a traditional virtual machine without any real fuss.