FAQs

STINGAR Installation Problems

Note on terminology used

The DOCKER_PASSWORD is the password provided in the private boxnote shared with you (this gives you access to the registry of docker containers).

The API_KEY is automatically generated when you run the quickstart script. The script creates a 'random' password string and stores it both in the local stingar.env file and in the stingarapi container database when you start everything up. However, if you have run the quickstart script more than once, the database may hold a different value from the one now in the env file.

1) A common problem: when you first open the StingarUI in your web browser at https://localhost:8080 and try to create your STINGAR admin account via Create Admin, you see an error that looks like:

Problem retrieving Admin user! 401: Unauthorized. Verify API_KEY is set correctly.

This may be due to repeated runs of the installation script: the script has re-generated an API_KEY in your local stingar.env file that no longer matches the API_KEY stored in the stingarapi container.

To confirm this is the problem, take a look at the log file in the stingarapi container:

% docker-compose logs stingarapi

Look near the top of the logs for the API_KEY printed there. Then check the API_KEY string in the stingar.env file:

% cat stingar.env 
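If the output is long, a quick way to compare just the key values (assuming the key is printed with the label API_KEY in both places):

% docker-compose logs stingarapi | grep API_KEY
% grep API_KEY stingar.env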

If the value in stingar.env doesn't match the one in the container logs, edit stingar.env to use the container's value.

% vim stingar.env

Modify & re-save the file, then stop & restart the current environment:

% docker-compose down
...wait a few seconds for the process to complete
% docker-compose up -d

& retry the Create Admin steps at https://localhost:8080

STINGAR Runtime problems

Out of memory errors

Reported in STINGAR's main Kibana console (aka Dashboard)

If you start seeing "memory errors" on the STINGAR dashboard after running for a few weeks (possibly sooner with large numbers of honeypots or attacks), this is usually due to an "out of RAM" problem in the underlying ElasticSearch database. To resolve this problem you can either:

1) Remove older data from the database

Using the Kibana Console

Use with caution! The Kibana console allows editing of all the honeypot data collected by STINGAR - if you delete data by accident, STINGAR has no automatic recovery mechanism.

As administrator, you have direct access to the ElasticSearch database via STINGAR's Kibana console. You can view the console via the url https://localhost/kibana/app/kibana#/dev_tools/console. You can use the following query syntax to remove data older than 90 days:

POST stingar-*/_delete_by_query
{
  "query": {
    "range": {
      "@timestamp": {
        "lte": "now-90d"
      }
    }
  }
}
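You can also check how many documents and how much disk space each index currently holds, before or after deleting - a quick check from the same Kibana console using the standard _cat API against the stingar-* index pattern shown above:

GET _cat/indices/stingar-*?v&h=index,docs.count,store.size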

or

2) Extend the amount of memory available to the database.

Note on default installation values

The default STINGAR installation allocates a modest 256Mbytes of RAM to the Elasticsearch database, to allow the software to run on a broad range of hardware/virtual machines. However, STINGAR can collect a lot of data from its honeypots and can quickly build a large database of historical data within a few weeks of continuous operation.

ElasticSearch recommends allocating no more than half the total system RAM of the host it runs on (with a limit of 32Gbytes, even on extra large VMs).

The amount of RAM allocated to ElasticSearch can be adjusted from the default setting by changing the JAVA environment setting ES_JAVA_OPTS in the main docker-compose.yml file (see the example below).

Risk of loss of data

It is possible to lose data from the database when changing the operating memory of ElasticSearch, so take care when modifying the docker-compose.yml settings. As long as you are increasing the existing RAM setting by a sensible multiple (i.e. 2x or 4x), and you do not remove, rename or modify the core volumes (elastic_data and node_module_cache) within the container, historic honeypot data should remain available once you re-run the container.

elasticsearch:
    image: stingarregistry.azurecr.io/stingar/elasticsearch:latest
    volumes:
      - es_data:/usr/share/elasticsearch/data:z
    ports:
      - "127.0.0.1:9200:9200"
      - "127.0.0.1:9300:9300"
    environment:
      discovery.type: "single-node"
      # comment out original 256Mbyte setting 
      # ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      # and replace with 1Gbyte (for example) :
      ES_JAVA_OPTS: "-Xmx1g -Xms1g"

then restart the containers:

% docker-compose down
...wait a few seconds for command to complete, then 
% docker-compose up -d
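To confirm Elasticsearch picked up the new heap size after the restart (a quick check, assuming the default 127.0.0.1:9200 port mapping shown above):

% curl -s "http://127.0.0.1:9200/_cat/nodes?h=name,heap.max"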

ARM support for honeypots

STINGAR's honeypot images are built to support both x86 and ARM (i.e. amd64 & arm64) processor architectures, opening up the possibility of deploying our honeypots on more host types (virtual machines & embedded devices) within networks.
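To check which architecture variant of an image Docker pulls on a given host (for example the cowrie image used later in this guide), you can inspect it after pulling:

% docker pull 4warned/cowrie
% docker image inspect 4warned/cowrie --format '{{.Os}}/{{.Architecture}}'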

Example honeypot installation on a Raspberry Pi

Note

This guide assumes the Raspberry Pi hardware (Pi 3 or Pi 4) is running a 64-bit version of Debian v11 (Bullseye) or later, but it may also work on other 64-bit Linux OS versions. To learn more about 64-bit OS support for Raspberry Pi models see https://www.raspberrypi.com/software/operating-systems/
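To confirm your Raspberry Pi is actually running a 64-bit OS (a 32-bit armhf install will not run the arm64 images), check the architecture it reports - it should be aarch64/arm64:

% uname -m
% dpkg --print-architecture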

The basic honeypot deployment mechanism for Raspberry Pis is the same as for any other virtual host, i.e. STINGAR manages the host IP, creates ssh keys and deploys the selected honeypot on the host. However, there are some unique challenges to consider when deploying to a Raspberry Pi:

1) Is the Raspberry Pi visible & accessible from the STINGAR server?
2) Can the STINGAR server "see" (i.e. resolve the IP address of) the honeypot host, or is the Raspberry Pi running on a private (or home) network hidden behind a firewall or home router/gateway?
If STINGAR cannot resolve a public IP address for the Raspberry Pi device it cannot automatically deploy the honeypot. However, it is still possible to manually deploy a honeypot and send all attack data back to the STINGAR server.

For Cowrie deployments, a new user account is required

STINGAR's automatic deployment mechanism, Langstroth, uses a Dockerfile which tries to create a new user/group (with ID 1000:1000) on the honeypot host. A default Debian Bullseye installation already assigns ID 1000:1000 to the first user account it creates, so create a user/group with ID 1001:1001 instead.
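For example, to create such an account before deployment (a minimal sketch - the account name cowrie is just an assumption, any name will do as long as the IDs are 1001:1001):

% sudo groupadd -g 1001 cowrie
% sudo useradd -u 1001 -g 1001 -m cowrie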

Manually deploying a honeypot

STINGARv2 supports manual deployment of honeypots on remote hosts. This requires some manual configuration on the remote host to connect the honeypot back to the STINGAR server. The simplest mechanism is to download the install script (which contains 2 configuration files: docker-compose.yml & stingar-hp.env) from the STINGAR [Deploy Honeypot] page, copy the files to the remote host, and run % docker-compose up -d to start the honeypot.

(Screenshot: the Download Install Script button on the Deploy Honeypot page)

Step by step instructions for Raspberry Pi (running Debian)

First, install the docker & docker-compose tools on the honeypot host (i.e. the Raspberry Pi) by running the following commands:

% sudo su -- run as root on the Raspberry Pi
% curl -sSL https://get.docker.com | sh
% pip install docker-compose
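You can verify that both tools installed correctly before continuing:

% docker --version
% docker-compose --version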

Next, copy the 2 configuration files, docker-compose.yml and stingar-hp.env, from the STINGAR server to the honeypot host. These 2 files are available to download from the STINGAR [Deploy Honeypot] page (see above), or you can create new copies from the templates below (with some additional edits):

1) Here is an example of the default docker-compose.yml file for a cowrie honeypot:

services:
  cowrie:
    depends_on:
    - fluentbit
    env_file: stingar-hp.env
    image: 4warned/cowrie
    links:
    - fluentbit:fluentbit
    ports:
    - 2222:2222
    - 2223:2223
  fluentbit:
    env_file: stingar-hp.env
    image: 4warned/fluentbit
    ports:
    - 127.0.0.1:24284:24284
    - 127.0.0.1:24284:24284/udp
version: '3.3'


2) Here is an example stingar-hp.env settings file; the {{{ SOME VALUE }}} placeholders need to be filled in manually:

FLUENTD_HOST={{{ YOUR STINGAR SERVER IP ADDRESS or FQDN (e.g. 1.2.3.4 or example-server.duke.edu) }}}
FLUENTD_PORT=24224
FLUENTD_KEY={{{ YOUR FLUENTD_KEY value FROM stingar.env file on STINGAR Server (e.g. ASDFhjasdjhasfjh1234jrAewkJr0KITv) }}}
FLUENTD_APP=stingar
FLUENTBIT_HOST=fluentbit
FLUENTBIT_PORT=24284
FLUENTBIT_APP=stingar
FLUENTBIT_HOSTNAME=flb.local
HONEYPOT_IDENT={{{ unique hexadecimal identifier for this honeypot (e.g. bc826d844d6645d79a0dd95f5f5f04b9) }}}
HONEYPOT_IP={{{ an IP address that uniquely identifies this honeypot on server logs (e.g. 192.168.0.114) }}}
HONEYPOT_HOST={{{ an IP address that uniquely identifies this honeypot on server logs (e.g. 192.168.0.114) }}}
HONEYPOT_ASN=
TAGS=
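One way to generate a suitable unique hexadecimal identifier for HONEYPOT_IDENT (just a suggestion, assuming python3 is available on the host; any unique hex string should work):

% python3 -c "import uuid; print(uuid.uuid4().hex)"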

Once these 2 files have been created on the honeypot host, simply run the following docker-compose command from the honeypot host (i.e. Raspberry Pi) command line to start the honeypot:

% docker-compose up -d

Test that your honeypot is running correctly by running an "attack" from another remote machine, e.g. with cowrie, simulate an ssh attack with root credentials on the default port (2222):

% ssh root@192.168.0.112 -p2222 -- your local honeypot IP address will be different 

or test locally on the honeypot host (e.g. using another terminal window on your Raspberry Pi) using:

% ssh root@localhost -p2222 

Lastly, navigate in your browser to the STINGAR dashboard and ensure the attack was received by the STINGAR server - you will see the record of the attack on the "Attack analysis" page.

(Screenshot: the Attack analysis page)

Troubleshooting honeypot installations

Sometimes, honeypot deployments just don't work as expected. Here are some additional tips to help debug what's going wrong with the environment.

First, check your firewall settings to ensure the honeypot can reach the server

Check your firewall settings to ensure you can contact the STINGAR server and port from the honeypot host.

Use the netcat ('nc') utility to try to make a connection:

% nc -v {{{STINGAR_SERVER_IP_ADDR}}} 24224 

Port 24224 is the default port used by fluentbit/fluentd.
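To test the UDP path as well (assuming your netcat build supports the -u flag):

% nc -vu {{{STINGAR_SERVER_IP_ADDR}}} 24224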

Once you confirm you can make a UDP or TCP connection, you should see an 'attack' register on the STINGAR server "Attack Analysis" page (see above).

However, if the real honeypot attack data is still not getting through:

Second, examine the packet traffic on the honeypot

1) install tcpdump to watch the IP traffic on your honeypot host.

% sudo apt-get install tcpdump

2) Check the container IP network connections to determine which ones to monitor/watch. First, get a list of all available IP network interfaces:

% ip addr

or, if you are running on a Mac:

% ifconfig -a

3) Then run tcpdump on your internal container network (the best choice is probably the honeypot-to-fluentbit container connection) to examine the messages between containers and see if you can determine what's happening:

% tcpdump -Xvvi <br-12341132> -- NOTE: the name of your local network interface will be different

To view traffic on a specific port you can use the following:

% tcpdump -Xvvi eth0 'port 5514' -- NOTE: your network interface name may differ from eth0
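For this setup the most useful port filter is usually the fluentd forward port (24224) between the honeypot host and the STINGAR server, for example (again, your interface name and server address will differ):

% tcpdump -Xvvi eth0 'host {{{STINGAR_SERVER_IP_ADDR}}} and port 24224'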

If tcpdump doesn't provide enough clues as to the problem, try looking at the container logs:

Third, read the honeypot container log files

Reading files inside the containers requires 2 steps: first you have to "log into" the container, and then you need to know where to find the appropriate log files. Each honeypot type manages its local log files differently. However, since all communication from the honeypot gets sent to fluentbit for transmission back to the STINGAR server, that is a good place to start.

Fluentbit is a lightweight container when used in production and doesn't allow shell access. So, to read fluentbit logs you will need to install a debug version of the fluentbit image and re-run the container.
Steps
1) modify the docker-compose.yml file to use the debug image

services:
  fluentbit:
    image: fluent/fluent-bit:1.9.6-debug
    volumes:
      - ./fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf
    env_file:
      - stingar-hp.env
    ports:
      - "127.0.0.1:24284:24284"
      - "127.0.0.1:24284:24284/udp"

2) Create a fluent-bit.conf file

Note

Create a local copy of the fluent-bit.conf file template below, edited with your FLUENTD_HOST (your STINGAR server IP address or FQDN), FLUENTD_KEY (from the stingar-hp.env file) and FLUENTBIT_HOSTNAME (the honeypot IP address):

[SERVICE]
    Flush        1
    Daemon       Off
    Log_Level    info
    Log_File     /fluent-bit/log/fluent-bit.log
    Parsers_File parsers.conf
    Parsers_File parsers_java.conf

[INPUT]
    Name Forward
    Port 24284

[OUTPUT]
    Name          forward
    Match         *
    Host          ${FLUENTD_HOST}
    Port          24224
    Shared_Key    ${FLUENTD_KEY}
    Self_Hostname ${FLUENTBIT_HOSTNAME}
    tls           on
    tls.verify    off

3) re-run docker-compose, which replaces the fluentbit image with the debug version

% docker-compose down 
% docker-compose build fluentbit 
% docker-compose up -d 

4) now you can run a shell on the fluentbit container

% docker-compose exec fluentbit /bin/sh
-- now you are running the shell inside the fluentbit container
% cd /fluent-bit/log
% cat fluent-bit.log  -- view what the fluentbit container has been logging (the filename matches the Log_File setting above)
% exit -- return to the honeypot host shell
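You can also tail the container's stdout without opening a shell (this works with the production fluentbit image too):

% docker-compose logs -f fluentbit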

If you still can't figure out the problem, try the STINGAR slack channel or contact info@forewarned.io for help.

If you figured it out, don't forget to replace the fluentbit container image in production.

Edit docker-compose.yml to reference the production fluentbit image:
image: 4warned/fluentbit

and rebuild the container:

% docker-compose down
% docker-compose build fluentbit

and re-run the honeypot containers :

% docker-compose up -d