Monitoring pfSense with Grafana Alloy

Joe Alford
Jun 4, 2024 · 9 min read


Having recently started a new role and been tasked with configuring monitoring for a new pfSense deployment, I set out to get logs and performance metrics from this device into our Grafana Cloud deployment. I found a few guides that got most of the way there, but nothing that fully covered our setup. This page summarises my findings, and hopefully provides a useful template for other people.

Before we begin, though, I must credit Brendon Matheson for their great guide on generating the MIBs for pfSense, which I relied on heavily for this aspect of the project.

· Objective
· Design
· Package installation
snmp_exporter
Grafana Alloy
syslog_proxy
· SNMP configuration
Alloy (snmp)
· syslog configuration
pfSense (syslog)
Alloy (syslog)
· Summary

Objective

To get SNMP messages and syslog logs out of pfSense, and into Grafana Cloud.

If you’re only interested in one of these and not the other, this page is still relevant — just be selective in the configuration deployed.

Design

Our pfSense deployment is running as an EC2 instance within AWS, and all of our Grafana infrastructure is running in Grafana Cloud — we don’t have local Loki deployments etc., so I was keen to try to find a solution that came with the minimal number of extra moving parts. To that end, Grafana Alloy has been invaluable.

Let’s break down below how our end solution looked. Hopefully at this point, you’ll know if this page carries any relevance for your estate.

  • pfSense running on an AWS EC2 instance
  • Grafana Cloud deployment
  • An AWS EC2 instance running:
    - Grafana Alloy
    - snmp_exporter
(Diagram: network design, high-level overview)
  • pfSense will send its syslog entries to Alloy; these are then sent to our remote Loki instance.
  • Alloy will poll pfSense for SNMP metrics; these are then sent to our remote Prometheus instance.

Alloy uses the prometheus.remote_write and loki.write components to push our data into Grafana Cloud, all from one application, meaning a minimal additional footprint.

This new EC2 instance is created from an AMI, which is itself built by HashiCorp's Packer. You can find the code for this on my GitHub account.

Now we know what we’re trying to achieve, let’s set about configuring everything. My GitHub has a link to a packer image which could be useful here, but if not, I will include inline code blocks for each step.

Package installation

Before we look to configure any of our software, let’s make sure that we’ve installed all of the prerequisite packages/scripts. These steps are agnostic of the underlying platform (VMware, AWS, Azure etc.), and as such, you will have to make sure that the appropriate networking is in place.

See the diagram above for the ports needed.
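In case the diagram isn't to hand, here is a summary of the ports used by this design (the syslog ports are the example values used later in this guide, and are adjustable):

```
pfSense      -> syslog_proxy   UDP 15140  (syslog; one port per pfSense instance)
syslog_proxy -> Alloy          UDP 5140   (loki.source.syslog listener)
Alloy        -> pfSense        UDP 161    (SNMP polling)
Alloy        -> snmp_exporter  TCP 9116   (metrics scrape)
browser      -> Alloy UI       TCP 12345  (debug/status page)
```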

snmp_exporter

Firstly, we have to get the SNMP stats into a format that Prometheus can understand — this is where snmp_exporter comes in.

We can install this easily, with the following commands:

snmp_exporter_location="/usr/bin"
snmp_exporter_version="0.26.0"

wget -q https://github.com/prometheus/snmp_exporter/releases/download/v$snmp_exporter_version/snmp_exporter-$snmp_exporter_version.linux-amd64.tar.gz -O /tmp/snmp_exporter-$snmp_exporter_version.linux-amd64.tar.gz
tar xvfz /tmp/snmp_exporter-$snmp_exporter_version.linux-amd64.tar.gz -C /tmp
cd /tmp/
sudo mv snmp_exporter-$snmp_exporter_version.linux-amd64/snmp_exporter $snmp_exporter_location
sudo chmod +x $snmp_exporter_location/snmp_exporter
sudo useradd snmp_exporter

sudo cat > /tmp/snmp_exporter.service << EOF
[Unit]
Description=SNMP Exporter
After=network-online.target

# This assumes you are running snmp_exporter under the user "snmp_exporter"

[Service]
User=snmp_exporter
Restart=on-failure
ExecStart=/usr/bin/snmp_exporter --config.file=/etc/snmp_exporter/snmp.yaml

[Install]
WantedBy=multi-user.target
EOF

sudo mv /tmp/snmp_exporter.service /etc/systemd/system/snmp_exporter.service
echo "Enabling systemd service…"
sudo systemctl daemon-reload
sudo systemctl enable snmp_exporter.service

Once this is installed, we need to provide it with the pfSense SNMP MIBs — otherwise it won’t know how to handle the data from pfSense. We can either use the snmp_exporter generator to create them (as outlined in Brendon Matheson’s guide), or, for ease, I have included a copy of snmp.yaml ready to go in GitHub. Regardless of how you obtain them, they need to be placed into /etc/snmp_exporter/snmp.yaml.
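Once the MIB file is in place and the service is started, you can sanity-check the exporter by hand: it exposes a /snmp endpoint that takes target and module query parameters. A small helper to build the URL (the host, port, and module name below are this guide's defaults, not universal values):

```python
# Build the snmp_exporter scrape URL for a manual check with curl or a browser.
# 9116 is snmp_exporter's default port; "pfsense" is the module name used in this guide.
from urllib.parse import urlencode

params = {"module": "pfsense", "target": "10.0.0.1"}  # substitute your pfSense IP
url = "http://localhost:9116/snmp?" + urlencode(params)
print(url)
```

Fetching that URL should return Prometheus-format metrics if the exporter can reach pfSense over SNMP.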

Grafana Alloy

Next up, we need to install Grafana Alloy. This block of commands is a little longer than it needs to be, but I ran into some oddness while automating this with packer, so there’s some commands there to handle those edge cases.

echo "## This will output a load of non-ASCII to the screen now, as it tries to print a GPG key. Don't panic!"
sudo mkdir -p /etc/apt/keyrings
wget -q -O - https://apt.grafana.com/gpg.key | gpg --dearmor | sudo tee /etc/apt/keyrings/grafana.gpg
echo "## And output should be back to normal now…"
echo "deb [signed-by=/etc/apt/keyrings/grafana.gpg] https://apt.grafana.com stable main" | sudo tee /etc/apt/sources.list.d/grafana.list
sudo apt-get update 1>/dev/null

echo "Adding user/group for Alloy"
sudo useradd alloy

echo "Installing Alloy…"
DEBIAN_FRONTEND=noninteractive sudo apt-get install alloy -y --fix-missing -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" 1>/dev/null

sudo systemctl enable alloy.service
sudo chown root:root /etc/alloy
sudo chown alloy:alloy /etc/alloy/config.alloy
sudo chmod 775 /etc/alloy

sudo cat > /tmp/alloy << EOF
## Path:
## Description: Grafana Alloy settings
## Type: string
## Default: ""
## ServiceRestart: alloy
#
# Command line options for Alloy.
#
# The configuration file holding the Alloy config.
CONFIG_FILE="/etc/alloy/config.alloy"

# User-defined arguments to pass to the run command.
CUSTOM_ARGS="--server.http.listen-addr=0.0.0.0:12345"
# exposes the debug URL to all clients. See 'expose the UI...' - https://grafana.com/docs/alloy/latest/tasks/configure/configure-linux/

# Restart on system upgrade. Defaults to true.
RESTART_ON_UPGRADE=true
EOF

sudo mv /tmp/alloy /etc/default/alloy
sudo mkdir -p /var/lib/alloy #it seems the install doesn't always create this?
sudo chown alloy:alloy /var/lib/alloy

syslog_proxy

This is a simple script to work around a bug in Alloy whereby pfSense syslog entries aren't handled correctly. To fix this, we run a small Python script that receives all of the syslog entries, appends the missing newline character (\n), and then passes them on to the loki.source.syslog listener of our Alloy instance. The script runs as a systemd service.

This script was written by ChatGPT (as I'm not familiar with Python), so there might be a more elegant way to write it. Follow the steps below to configure it:

syslog_proxy_user="syslog_proxy"
syslog_proxy_script="/usr/local/bin/syslog_proxy.py"
sudo useradd $syslog_proxy_user

sudo cat > /tmp/syslog_proxy.py << 'EOF'
# This simple script is used to receive our syslog entries from pfSense, add `\n` to the end, and
# then send them on to Loki (running on Alloy). This is because of this bug: https://github.com/grafana/alloy/issues/560

# As a result, for every pfSense instance we wish to manage, we will need one of these proxies running, with a unique listening port.
# In the Alloy config, we will then have a unique `loki.source.syslog` listener to which we forward these modified logs.

# See the files at /etc/systemd/system/syslog_proxy_****.service for an example of how to manage systemd services for them.

import socket
import argparse

def start_syslog_server(listen_port, forward_host, forward_port):
    # Create a UDP socket for receiving
    recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv_server_address = ('0.0.0.0', listen_port)
    recv_sock.bind(recv_server_address)
    print(f'Started syslog server on port {listen_port}')

    # Create a UDP socket for forwarding
    send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    forward_address = (forward_host, forward_port)

    while True:
        data, address = recv_sock.recvfrom(4096)
        if data:
            log_entry = data.decode('utf-8').strip() + '\n'
            print(f'Received log entry from {address}: {log_entry}')

            # Forward the log entry to the specified destination
            send_sock.sendto(log_entry.encode('utf-8'), forward_address)
            print(f'Forwarded log entry to {forward_address}')

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Syslog server that appends a newline to log entries and forwards them.')
    parser.add_argument('--listen-port', type=int, required=True, help='The port to listen on for syslog messages')
    parser.add_argument('--forward-host', type=str, required=True, help='The host to forward the log entries to')
    parser.add_argument('--forward-port', type=int, required=True, help='The port on the forward host to send the log entries to')
    args = parser.parse_args()

    start_syslog_server(args.listen_port, args.forward_host, args.forward_port)
EOF

sudo mv /tmp/syslog_proxy.py $syslog_proxy_script
sudo chmod ug+x $syslog_proxy_script
sudo chown $syslog_proxy_user:$syslog_proxy_user $syslog_proxy_script

sudo cat > /tmp/syslog_proxy_15140.service << EOF
[Unit]
Description=Syslog Proxy Port 15140
After=network-online.target

[Service]
User=syslog_proxy
Restart=on-failure
ExecStart=/usr/bin/python3 /usr/local/bin/syslog_proxy.py --listen-port 15140 --forward-host 127.0.0.1 --forward-port 5140

[Install]
WantedBy=multi-user.target
EOF

sudo mv /tmp/syslog_proxy_15140.service /etc/systemd/system/syslog_proxy_15140.service
sudo systemctl daemon-reload
sudo systemctl enable syslog_proxy_15140.service && sudo systemctl start syslog_proxy_15140.service

SNMP configuration

pfSense (SNMP)

To configure SNMP, first enable it on the pfSense installation:

Services -> SNMP

  • Enable [x]
  • Set the Read Community String
  • SNMP modules
    - MibII
    - Netgraph
    - PF
    - Host Resources
    - UCD
  • Select the relevant interface
  • Save

Firewall -> Rules

Add a new rule to Pass UDP/161 on the relevant interface.

snmp_exporter

Once snmp_exporter and Grafana Alloy are installed, we can edit the configuration to allow it to scrape SNMP from pfSense. Firstly edit /etc/snmp_exporter/snmp.yaml and make sure that the community string in public_v2 matches the value set in pfSense. If you changed this file, restart snmp_exporter (sudo systemctl restart snmp_exporter).
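For reference, in recent snmp_exporter releases (0.23 and later) the community string lives in the auths block of snmp.yaml; your generated file may differ slightly, but it will look roughly like this:

```yaml
# fragment of /etc/snmp_exporter/snmp.yaml
auths:
  public_v2:
    community: your_community_string_here  # must match pfSense's Read Community String
    version: 2
```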

Alloy (snmp)

Now we need to update the Alloy configuration in /etc/alloy/config.alloy. Assuming just one target, you will end up with something like the below. For more targets, just duplicate the target block.

logging {
  level  = "info"
  format = "logfmt"
}

//
// SNMP
//

// https://grafana.com/docs/alloy/latest/reference/components/prometheus.exporter.snmp/#target-block
prometheus.exporter.snmp "pfsense" {
  config_file = "/etc/snmp_exporter/snmp.yaml"

  target "pfsense_az_a" {
    address = "10.0.0.1" // your pfSense's IP (or DNS name) here
    module  = "pfsense"  // the name of the module in /etc/snmp_exporter/snmp.yaml
  }
}

// Collect SNMP metrics from the exporter above, and forward them to the remote_write component.
prometheus.scrape "pfsense" {
  targets    = prometheus.exporter.snmp.pfsense.targets
  forward_to = [prometheus.remote_write.pfsense.receiver]
}

// https://grafana.com/docs/alloy/latest/reference/components/prometheus.remote_write/#authorization-block
prometheus.remote_write "pfsense" {
  endpoint {
    // https://grafana.com/orgs/<your_org_name_here>/ -> choose your stack -> Prometheus -> Send metrics
    url = "https://some_region.grafana.net/api/prom/push"

    basic_auth {
      username = "your_username_here" // also from the Prometheus -> Send metrics page
      password = "your_password_here"
    }
  }
}

Now restart Alloy (sudo systemctl restart alloy.service), and use systemctl status alloy or journalctl -u alloy to check the logs/status. If it is running, you should see it start sending metrics into Prometheus after a short while.
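As a quick sanity check in Grafana's Explore view, you can query one of the interface counters. The exact metric names depend on which MIB modules your snmp.yaml was generated with; with the standard IF-MIB, a throughput query looks something like:

```promql
# inbound throughput per interface, in bits per second (IF-MIB counters assumed)
rate(ifHCInOctets[5m]) * 8
```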

If it is not sending metrics, the following URLs will give you further diagnostic pointers:

  • Alloy status page: http://<alloy-machine-IP>:12345
  • snmp_exporter test page: http://<alloy-machine-IP>:9116

syslog configuration

Firstly, I’ll explain a design choice I made, which might impact the next steps for you. Alloy needs one syslog listener per sending client, and each listener needs a unique network socket. As we add Loki labels per client, I therefore run one syslog_proxy service per client. For example:

  • pfsense_1 sends syslogs to syslog_proxy at <ip>:15140. syslog_proxy fixes the log and forwards it to Alloy at <ip>:5140. Alloy labels the log as pfsense_1 and ships it to Grafana Cloud.
  • pfsense_2 sends syslogs to syslog_proxy at <ip>:15141. syslog_proxy fixes the log and forwards it to Alloy at <ip>:5141. Alloy labels the log as pfsense_2 and ships it to Grafana Cloud.

As a result, if you have more than one pfSense deployment, you will need to be mindful of the port number in the remote log servers section, and create/start a new syslog_proxy_xxxxx.service for each required port.
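For example, a second pfSense instance could use a proxy listening on UDP/15141 and forwarding to a second Alloy listener on UDP/5141; the port numbers here are illustrative and must match your Alloy config:

```ini
# /etc/systemd/system/syslog_proxy_15141.service
[Unit]
Description=Syslog Proxy Port 15141
After=network-online.target

[Service]
User=syslog_proxy
Restart=on-failure
ExecStart=/usr/bin/python3 /usr/local/bin/syslog_proxy.py --listen-port 15141 --forward-host 127.0.0.1 --forward-port 5141

[Install]
WantedBy=multi-user.target
```

After creating the file, run sudo systemctl daemon-reload && sudo systemctl enable --now syslog_proxy_15141.service.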

pfSense (syslog)

Again, let’s start by configuring pfSense:

Status -> Syslog Logs -> Settings

  • Log Message Format: syslog/RFC 5424
  • Remote Logging Options
  • Send log messages to remote syslog server
  • IP Protocol: IPv4
  • Remote log servers: <grafana_alloy_ip>:<syslog_proxy_port> (keep in mind the different syslog_proxy port for each client)
  • Remote Syslog Contents: Everything
  • Save

Alloy (syslog)

We’ll need to edit /etc/alloy/config.alloy again, and this time add the following:

//
// Syslog
//

loki.source.syslog "pfsense" {
  listener {
    address               = "127.0.0.1:5140"
    protocol              = "udp"
    label_structured_data = true
    // Note: these are example labels and will need changing!
    labels = {
      account           = "production",
      availability_zone = "az-a",
      region            = "eu-west-2",
      service_name      = "pfsense",
      customer          = "some_customer",
    }
  }

  listener {
    address               = "127.0.0.1:5141"
    protocol              = "udp"
    label_structured_data = true
    // Note: these are example labels and will need changing!
    labels = {
      account           = "production",
      availability_zone = "az-b",
      region            = "eu-west-2",
      service_name      = "pfsense",
      customer          = "some_other_customer",
    }
  }

  forward_to = [loki.write.pfsense.receiver]
}

loki.write "pfsense" {
  endpoint {
    // https://grafana.com/orgs/<your_org_name_here>/ -> choose your stack -> Loki -> Send logs
    url = "https://some_region.grafana.net/loki/api/v1/push"

    basic_auth {
      username = "your_username_here" // also from the Loki -> Send logs page
      password = "your_password_here"
    }
  }
}

Once again, restart Alloy (sudo systemctl restart alloy), and you should start to see your logs appear in Loki/Grafana very soon. If you watch the logs for your syslog_proxy service (journalctl -u syslog_proxy_15140), you will see each log entry. If there are problems with Alloy handling the logs, they will show in journalctl -u alloy.
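Once logs are flowing, you can query them in Grafana's Explore view using the labels set in the Alloy config. For example, using the service_name label from the listeners above (filterlog is pfSense's firewall log process):

```logql
{service_name="pfsense"} |= "filterlog"
```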

Summary

And that’s it! This will hopefully have allowed you to configure pfSense to send SNMP and syslog messages into Grafana Cloud with as few moving parts as possible.

Written by Joe Alford, Senior Systems Engineer at Rugged Networks (https://giraffecctv.com)