
Use Prometheus Node Exporter with SigLens and Vector.dev

983 words · 5 mins
Author: rOger Eisenecher
> 12 years leading and building a SOC for MSSP • > 20 years working in security • > 40 years working with IT • 100% tech nerd.
SigLens - This article is part of a series.
Part 2: This Article

A few days ago I discovered SigLens for the first time and wrote a blog post about feeding logs into it with the help of Vector.dev. SigLens does not only provide fast log management; it also supports metrics. In this article I show you how to set up the ingestion of metrics from Prometheus Node Exporters.

Introduction

A few days ago I took my first steps with SigLens - see also my first post regarding SigLens. I was already quite impressed by the performance and simplicity of this solution. But log ingestion is not the only functionality SigLens provides - you can also ingest metric data. Since I already have a Grafana/Prometheus environment, I wanted to try fetching that metric data with my new setup based on Vector.dev and SigLens.

The benefit is obvious: everything in the same app, and preferably with the same performance you get when querying logs. Unfortunately, due to the early stage of development (this article is based on the latest build, version 0.2.4, at the time of writing), the documentation was weak and I had to dig around until I got a working environment. Hopefully this article helps you to ingest metric data too.

This article depends heavily on the things described in the first article and documents only the changes needed to add metrics to the existing environment. So if you landed directly on this post, I advise you to check the first article too.

Overview

Basically, SigLens together with Vector.dev could be a drop-in replacement for your existing Prometheus environment. But to be clear: SigLens is in its early development phase - the first priority was the log part, so on the metrics side many features are missing or incomplete. For a parallel setup, however, it is perfectly suited.

It is an advantage if you know the basic architecture of Prometheus and the corresponding Node Exporters. If you are not familiar with it, here it is in short: every node you want to monitor gets a Node Exporter installed. This Node Exporter is polled by a central instance (the so-called scraper) to fetch metrics from the node(s). In a normal setup this is done by Prometheus. For accessing and working with the collected data, Grafana is usually used, which connects to the underlying Prometheus instance.
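What a Node Exporter actually serves is plain text in the Prometheus exposition format. To make the scraping step less abstract, here is a simplified sketch of parsing such a payload - the sample lines are invented for illustration, and the parser ignores escaping edge cases a real client library would handle:

```python
def parse_exposition(text):
    """Parse simple Prometheus text-format lines into (name, labels, value) tuples.
    Comment lines (# HELP / # TYPE) are skipped; label escaping is not handled."""
    samples = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name_part, value = line.rsplit(" ", 1)
        if "{" in name_part:
            name, raw = name_part.split("{", 1)
            raw = raw.rstrip("}")
            labels = dict(
                (k, v.strip('"'))
                for k, v in (pair.split("=", 1) for pair in raw.split(","))
            )
        else:
            name, labels = name_part, {}
        samples.append((name, labels, float(value)))
    return samples

# Made-up excerpt of what a Node Exporter might return on /metrics
payload = """\
# HELP node_load1 1m load average.
# TYPE node_load1 gauge
node_load1 0.42
node_cpu_seconds_total{cpu="0",mode="idle"} 13245.7
"""
print(parse_exposition(payload))
```

This is essentially what the scraper (Prometheus, or Vector.dev in our case) does on every scrape interval before forwarding the samples downstream.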

In our scenario we just rely on the existing Node Exporters. Instead of Prometheus, Vector.dev scrapes the information and ingests the metrics data into SigLens. For accessing and working with the collected data, SigLens is used as well.

The following diagram shows the generic architecture:

graph LR;
  A[Node A]
  B[Node B]
  subgraph Vector.dev
    subgraph sources
      D[source_prometheus_scrape]
    end
    subgraph transforms
      F["`*not used*`"]
    end
    subgraph sinks
      E[siglens_metrics]
    end
  end
  subgraph SigLens
    C[SigLens Instance]
  end
  D -- scrapes --> A
  D -- scrapes --> B
  D --> E
  E --> C

  • Vector.dev scrapes the nodes (systems) in the source block source_prometheus_scrape
  • Vector.dev passes the data without any transform to the sink block siglens_metrics (which defines the SigLens instance as target)

As you can see, the workhorse is again Vector.dev, which fetches the metrics data and sends it to SigLens.

Vector.dev

As we saw above, the important part of the setup is Vector.dev. We have to define the corresponding sources and sinks.

Configuration

With the configuration file below we will achieve the things mentioned before. As an overview, here is a schematic diagram of the building blocks of the log processing and metrics fetching pipeline (please note that the block diagram also shows the blocks from the first article):

graph LR;
  subgraph sources
    A[source_syslog_udp]
    G[source_prometheus_scrape]
  end
  subgraph transforms
    B[filterlog]
    C[syslog_catch_all]
  end
  subgraph sinks
    D[siglens_firewall]
    E[siglens_syslog]
    H[siglens_metrics]
  end
  A-->B
  A-->C
  B-->D
  C-->E
  D-->F[SigLens ElasticSearch API]
  E-->F
  G-->H
  H-->I[SigLens metrics API]

Most important components of the configuration:

  • There is no transforms section because we do not have to transform the data
  • source_prometheus_scrape: defines how the component receives metrics (sources). As usual, you have to define the URL of the node to scrape the data from. Of course you could specify more than one endpoint - just extend the list. In this example the Node Exporter is installed on the Docker host, so the Node Exporter URL is http://host.docker.internal:9100/metrics.
  • siglens_metrics: defines where to send processed metrics data (sinks). The important thing here is to use the proper API URL. Unfortunately, at the time of writing this article it was not documented. But luckily SigLens is open source, so I was able to check the source code. The URL must be: http://host.docker.internal:8081/promql/api/v1/write.

And finally, here are the parts which have to be added to the existing configuration file vector.yaml from the first article:

sources:
  source_prometheus_scrape:
    type: prometheus_scrape
    endpoints:
      - http://host.docker.internal:9100/metrics  
    scrape_interval_secs: 60

sinks:
  siglens_metrics:
    type: prometheus_remote_write
    inputs:
      - source_prometheus_scrape
    endpoint: http://host.docker.internal:8081/promql/api/v1/write
    healthcheck:
      enabled: false
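Before wiring the new blocks in, it can help to check that a scrape target really serves the Prometheus exposition format (and not, say, an HTML error page). A small heuristic sketch - the function and the check are my own illustration, not part of Vector.dev:

```python
import re
import urllib.request

def looks_like_exposition(text: str) -> bool:
    """Heuristic: true if the body contains a HELP/TYPE comment
    or at least one line that looks like a metric sample."""
    sample = re.compile(r"^[a-zA-Z_:][a-zA-Z0-9_:]*(\{.*\})?\s+[-+0-9.eEnNaIf]+")
    for line in text.splitlines():
        if line.startswith("# HELP") or line.startswith("# TYPE"):
            return True
        if sample.match(line):
            return True
    return False

# Against a live Node Exporter (assumes it listens on localhost:9100):
# body = urllib.request.urlopen("http://localhost:9100/metrics").read().decode()
# print(looks_like_exposition(body))
```

If this returns False for your endpoint, Vector.dev's source_prometheus_scrape will not be able to produce useful events from it either.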

SigLens

Configuration

Good news here: you do not have to modify anything in your existing SigLens configuration; it simply accepts metric data as well.

GUI Access

Now open your favorite web browser and point it to your host on port 5122, e.g. according to our example http://10.40.1.42:5122/

Then go to Metrics and query for metrics data (important: only PromQL is supported!). Here is a simple example query to fetch metrics: sum(go_gc_duration_seconds) by (quantile):

SigLens Metrics Query
Here is an example query for metrics data.

Even though SigLens uses PromQL as its query language, you should know that only a subset of the commands is supported.
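To see what the example query sum(go_gc_duration_seconds) by (quantile) does conceptually, here is the same aggregation over a handful of made-up samples (the label sets and values are invented purely for illustration):

```python
from collections import defaultdict

# Invented samples as a scraper might collect them: (labels, value)
samples = [
    ({"quantile": "0",   "instance": "node-a"}, 0.00001),
    ({"quantile": "0",   "instance": "node-b"}, 0.00002),
    ({"quantile": "0.5", "instance": "node-a"}, 0.00015),
    ({"quantile": "0.5", "instance": "node-b"}, 0.00025),
]

# sum(...) by (quantile): group by the "quantile" label and sum the values,
# dropping every other label (here: "instance")
totals = defaultdict(float)
for labels, value in samples:
    totals[labels["quantile"]] += value

for quantile, total in sorted(totals.items()):
    print(quantile, total)
```

The result has one series per quantile value, which is exactly the shape of the chart SigLens renders for this query.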

Summary

It is quite impressive that SigLens can also be used to ingest metrics data. Of course it cannot replace Grafana at this time, which is due to the early development stage of the tool. But I'm pretty sure that the basics are done right, and with each release we will get more features which make SigLens even more powerful!

Further Reading

Here are some links:
