CISA built LME (Logging Made Easy) as a no-cost SIEM starter kit for small-to-medium organizations: one Ansible playbook stands up Elasticsearch, Kibana, Wazuh, ElastAlert2, and Elastic Fleet in Podman containers, with pre-wired dashboards and alerting rules.
Why I starred it
Most orgs that need centralized logging don't have the budget for Splunk or a dedicated SOC engineer to maintain it. The open-source ELK stack solves the cost problem but hands you a blank canvas — you still need to wire up Wazuh for host-based detection, configure Fleet for agent management, write ElastAlert rules, and tune Kibana dashboards. LME does all of that wiring upfront. What caught my eye is the combination: government-backed, actively maintained (v2.2.0 shipped recently), and opinionated enough to actually be deployable.
How it works
The entry point is install.sh, which checks for Ansible, detects the Linux distro, and delegates immediately to ansible/site.yml. That playbook runs a sequence of roles — base, nix, podman, elasticsearch, kibana, dashboards, wazuh, fleet, and cleanup — each namespaced with tags so you can re-run subsets selectively.
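The tag layout can be sketched roughly like this (a hypothetical excerpt — the real site.yml may differ in hosts, ordering, and role options):

```yaml
# Hypothetical site.yml excerpt: each role carries a matching tag,
# so `--tags "wazuh,fleet"` re-runs just those roles
- hosts: localhost
  roles:
    - { role: base,          tags: ["base"] }
    - { role: nix,           tags: ["nix"] }
    - { role: podman,        tags: ["podman"] }
    - { role: elasticsearch, tags: ["elasticsearch"] }
    - { role: wazuh,         tags: ["wazuh"] }
    - { role: fleet,         tags: ["fleet"] }
```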
The nix role is a notable architectural choice. Rather than relying on the system's package manager for Podman, LME installs Nix and uses it to manage the container runtime. The ansible/roles/nix/tasks/main.yml detects distro and version at runtime via with_first_found variable loading, falling back through a chain of OS-specific files down to vars/default.yml. This is what makes the RHEL/Ubuntu/offline support work without a forest of when: conditions scattered everywhere.
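The with_first_found pattern is standard Ansible; a minimal sketch of the fallback chain (file names illustrative, not copied from the role):

```yaml
# Hypothetical tasks fragment: load the most specific vars file that
# exists, falling back to vars/default.yml
- name: Load distro-specific variables
  include_vars: "{{ item }}"
  with_first_found:
    - "vars/{{ ansible_distribution }}-{{ ansible_distribution_major_version }}.yml"
    - "vars/{{ ansible_distribution }}.yml"
    - "vars/{{ ansible_os_family }}.yml"
    - "vars/default.yml"
```

On Ubuntu 22.04 this would try vars/Ubuntu-22.yml, then vars/Ubuntu.yml, then vars/Debian.yml, then the default — no when: conditions needed.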
Secrets are handled through Ansible Vault + Podman secrets — not environment files. The scripts/extract_secrets.sh script shows the pattern clearly:
# Pulls each secret from ansible-vault and exports it into the shell
# (the id/var_name parsing is simplified here for illustration)
output=$(sudo "$PODMAN_PATH" secret ls)
while IFS= read -r line; do
    id=$(awk '{print $1}' <<< "$line")        # Podman secret ID
    var_name=$(awk '{print $2}' <<< "$line")  # secret name doubles as the shell var name
    secret_value=$(sudo -i ansible-vault view "/etc/lme/vault/$id")
    export_commands+="export $var_name='$secret_value'; "
done <<< "$output"
eval "$export_commands"
Vault-encrypted secrets live at /etc/lme/vault/<id>, and Podman secrets reference them. The example.env file has the full variable map, with comments pointing to which vault key each password comes from — readable without exposing values.
Alerting runs through ElastAlert2. The config/elastalert2/rules/kibana_alerts.yml rule shows the baseline approach:
name: Rollup Kibana Security Alerts
type: any
index: .alerts-security.alerts-*
filter:
- range:
    "@timestamp":
      gte: "now-5m"
- query_string:
    query: "kibana.alert.rule.name:*"
realert:
  minutes: 20
aggregation:
  minutes: 15
The rule matches anything landing in Kibana's security alerts index within the last five minutes, suppresses duplicate alerts for 20 minutes (realert), and batches notifications into 15-minute windows (aggregation). The config/elastalert2/rules/ directory has stub imports for Slack, email, Teams, and Twilio — uncomment the one you want, fill in credentials, done.
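A filled-in notifier would look something like this (webhook URL and channel are placeholders; check the actual stub files for the exact option names LME uses):

```yaml
# Hypothetical Slack notifier config attached to the rule above,
# using ElastAlert2's standard Slack alerter options
alert:
  - slack
slack_webhook_url: "https://hooks.slack.com/services/XXX/YYY/ZZZ"
slack_channel_override: "#soc-alerts"
```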
The container stack is pinned in config/containers.txt:
docker.elastic.co/elasticsearch/elasticsearch:8.18.8
docker.elastic.co/beats/elastic-agent:8.18.8
docker.elastic.co/kibana/kibana:8.18.8
docker.io/wazuh/wazuh-manager:4.9.1
docker.io/jertel/elastalert2:2.20.0
docker.elastic.co/package-registry/distribution:lite-8.18.8
Six containers, all pinned to explicit versions. The package-registry/distribution:lite container is the detail worth noting — it bundles Elastic's integration catalog locally so Fleet doesn't need to phone home to download agent integrations. This is what makes the offline mode viable for air-gapped environments.
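Pointing Fleet at a self-hosted registry is a standard Kibana setting; the wiring presumably amounts to something like this (hostname and port are illustrative, not taken from LME's config):

```yaml
# kibana.yml fragment: resolve integrations from the local
# package-registry container instead of Elastic's hosted EPR
xpack.fleet.registryUrl: "http://package-registry:8080"
```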
The Wazuh RBAC setup in ansible/roles/wazuh/tasks/main.yml uses Ansible's expect module to drive interactive scripts with no_log: true when debug is off — not pretty, but it works around Wazuh's lack of a proper API for initial password setup.
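The pattern looks roughly like this (command path, prompt string, and variable names are illustrative, not copied from the role):

```yaml
# Hypothetical expect task: answer an interactive password prompt,
# suppressing log output unless debugging is enabled
- name: Set initial Wazuh API password
  ansible.builtin.expect:
    command: /usr/share/wazuh/change-password.sh  # illustrative path
    responses:
      "New password:": "{{ wazuh_api_password }}"
    timeout: 240
  no_log: "{{ not (debug_mode | default(false)) }}"
```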
Using it
# Clone and run
git clone https://github.com/cisagov/LME.git
cd LME
./install.sh
# Non-interactive deployment (CI/GitOps)
NON_INTERACTIVE=true AUTO_CREATE_ENV=true AUTO_IP=192.168.1.10 ./install.sh
# Point the installer at a specific playbook
./install.sh -p ansible/site.yml
# Re-run only a subset of roles via their tags
ansible-playbook ansible/site.yml --tags "wazuh,fleet"
# Extract secrets into shell
source scripts/extract_secrets.sh -q
echo $elastic # now available
Offline mode is also supported — ./install.sh -o skips internet-dependent tasks and uses the pre-bundled package registry. scripts/prepare_offline.sh handles pre-downloading what's needed.
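Pre-staging images for an air-gapped host boils down to iterating the pinned list; a minimal sketch, with the image list inlined here for illustration rather than read from config/containers.txt:

```shell
# Sketch: emit a pull command per pinned image; in practice the list
# would come from config/containers.txt, and the output would feed an
# online staging host followed by `podman save`/`podman load`
images='docker.elastic.co/kibana/kibana:8.18.8
docker.io/wazuh/wazuh-manager:4.9.1'
while IFS= read -r image; do
    [ -n "$image" ] && echo "podman pull $image"
done <<< "$images"
```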
Rough edges
The RHEL support exists but diverges in places — offline RHEL uses the system Podman rather than the Nix-installed one, which means different code paths and slightly different behavior. The nix role's when: not (offline_mode and ansible_os_family == "RedHat") guards are scattered across multiple task files rather than gated once at the role level.
The Wazuh expect module usage is fragile — it drives an interactive shell script with regex prompt matches and a 240-second timeout. If Wazuh's prompt text changes between versions, the regex never matches and the task just waits out the full timeout before failing. There's no retry logic.
Detection rule enablement (LinuxDR=0, WindowsDR=0, MacOSDR=0 in the env file) defaults everything off. You'll need to read the docs to understand what you're turning on and why — there's no guided wizard for rule selection. The testing/ directory has Selenium and API tests, but coverage is thin on the Ansible side.
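Enabling a platform's rules is a one-line change in the env file (a hypothetical fragment; see example.env for the real variable map):

```ini
# Enable Linux detection rules; leave Windows/macOS off
LinuxDR=1
WindowsDR=0
MacOSDR=0
```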
Documentation lives at a separate GitHub Pages repo (cisagov.github.io/lme-docs), not in the main repo. This means docs can lag the code, and there's no guaranteed correlation between a commit and its docs update.
Bottom line
If you're running a small agency or organization on a zero SIEM budget, LME gets you from bare Linux to a functional Elasticsearch + Wazuh + alerting stack in a single command. It's not a replacement for Splunk Enterprise Security or a dedicated SOC platform, but for the target audience — IT generalists who need something before the next incident — it's the most deployment-complete open-source option CISA has shipped.
