
It’s been a long time since I last posted about TIP. The good news is that TIP is growing really fast, mainly thanks to its modular design, which allows different kinds of tracking modules to be plugged in with minimal effort. In this post I’ll provide a brief overview of the newly integrated features and the upcoming ones.

First of all, a new TIP Collector module named Malware was integrated; it currently handles data coming from GLSandbox, a sandbox for automated malware analysis written by Guido Landi. Beyond analyzing the behavior of malware samples, the idea is to collect the additional data such analysis produces. One example is C&C identification, which can be handed off automatically to a botnet monitoring tool for further analysis; another is information about domains, which could lead to the identification of new fast-flux domains. The GLSandbox code is currently not public, but there are plans to release it in the near future.

A search engine was integrated into TIP in the last version! The idea is to index the database in order to search it with great efficiency and performance. It is implemented with Haystack, and the first tests, done using Apache Solr (deployed as an Apache Tomcat application) as backend, confirm it works like a charm!

A new REST API was designed and implemented in order to more easily search and share data with other users and/or applications. The API was built with Django-Piston and supports OAuth authentication.

Moreover, the latest version of TIP supports Django 1.2 and drops support for previous versions (due to some incompatible changes between versions 1.1 and 1.2), and it introduces schema migrations through South in order to make changes to the database schema easier while developing.
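Just to give an idea of how little glue code the Haystack-based search described above requires, a Haystack (1.x style) search index for a hypothetical TIP Domain model might look roughly like the sketch below; the model and field names are only illustrative assumptions, not the actual TIP code.

# A minimal sketch of a Haystack (1.x style) search index for a hypothetical
# TIP "Domain" model; model and field names are assumptions, not TIP code.
from haystack import indexes
from haystack import site

from tip.models import Domain  # hypothetical model


class DomainIndex(indexes.SearchIndex):
    # The main document field, rendered from a template as usual in Haystack.
    text = indexes.CharField(document=True, use_template=True)
    name = indexes.CharField(model_attr='name')
    first_seen = indexes.DateTimeField(model_attr='first_seen')

    def index_queryset(self):
        """Rows to (re)index when the whole index is rebuilt."""
        return Domain.objects.all()


site.register(Domain, DomainIndex)

Once such an index is in place, queries simply go through Haystack’s SearchQuerySet, e.g. SearchQuerySet().filter(content='fast-flux').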

A lot of new cool features, a lot of upcoming cool ones! Stay tuned!

Challenge 4 of the Honeynet Project Forensic Challenge, titled “VoIP”, is now live. This challenge, provided by Ben Reardon from the Australian Chapter and Sjur Eivind Usken from the Norwegian Chapter, takes you into the realm of voice communications on the Internet. VoIP with SIP is becoming the de facto standard. As this technology becomes more common, malicious parties have more opportunities and stronger motives to take control of these systems to conduct nefarious activities. This challenge is designed to examine and explore some of the attributes of the SIP and RTP protocols.

Note that our Chinese speaking chapters (Julia Cheng from the Taiwanese Chapter, Jianwei Zhuge from the Chinese Chapter and Roland Cheung from the Hong Kong Chapter) have taken great initiative and translated the challenge into Chinese, which is available from the simplified Chinese and traditional Chinese pages (will be posted by EOD today).

With this challenge, we are getting on a firm 2 month cycle. You will have one month to submit (deadline is June 30th 2010) and results will be released approximately 3 weeks later. Small prizes will be awarded to the top three submissions.

Enjoy the challenge!

Honeynet Project Challenge 2010/3, “Banking Troubles”, has just been posted; the task is to investigate a memory image of an infected virtual machine. The challenge has been provided by Josh Smith and Matt Cote from the Rochester Institute of Technology Chapter, Angelo Dell’Aera from the Italian Chapter and Nicolas Collery from the Singapore Chapter.

Submit your solution at http://www.honeynet.org/challenge2010/ by 17:00 EST, Sunday, April 18th 2010. Results will be released on Wednesday, May 5th 2010. Small prizes will be awarded to the top three submissions.

Skill Level: Difficult

UPDATE: Submission deadline extended to Monday, 26th of April 2010

Challenge 2 of the Honeynet Project Forensic Challenge has just been posted. The challenge, titled “browsers under attack”, has been provided by Nicolas Collery from the Singapore Chapter and Guillaume Arcas from the French Chapter.

Submission deadline is March 1st and results will be released on Monday, March 15th 2010. Small prizes will be awarded to the top three submissions.

Have fun!

UPDATE: Submission deadline extended to Monday, 8th of March 2010

About two months ago I started contributing to PhoneyC, a pure Python honeyclient implementation originally developed by Jose Nazario. My feeling is that our development efforts are moving in the right direction. The code can be downloaded here. If you’re interested, take a look at the different development branches and give us your feedback. Moreover, if you’re interested in the technical details of PhoneyC, please read this paper by Jose Nazario.

After several years without any Honeynet Project Challenges, there will finally be new Forensic Challenges starting next Monday (January 18th, 2010). Here is the official announcement.

I am very happy to announce the Honeynet Project Forensic Challenge 2010. The purpose of the Forensic Challenges is to take learning one step further. Instead of having the Honeynet Project analyze attacks and share its findings, Forensic Challenges give the security community the opportunity to analyze attacks and share their findings. In the end, individuals and organizations not only learn about threats, but also learn how to analyze them. Even better, individuals can access the write-ups from other participants and learn about new tools and techniques for analyzing attacks. Best of all, the attacks of the Forensic Challenge are attacks encountered in the wild, real hacks, provided by our members.
It has been several years since we provided Forensic Challenges, and with the Forensic Challenge 2010 we will provide desperately needed upgrades. The Forensic Challenge 2010 will include a mixture of server-side attacks on the latest operating systems and services, client-side attacks that have emerged in the past few years, attacks on VoIP systems, web applications, etc. At the end of each challenge, we will provide a sample solution created by our members using state-of-the-art tools that are publicly available, such as libemu and dionaea.
The first challenge (of several for 2010) will be posted on our Forensic Challenges web site on Monday, January 18th 2010. We will be open to submissions for about two weeks and announce the winners by February 15th 2010. This year, we will also award the top three submissions with prizes! Please check the web site on Monday, January 18th 2010 for further details…

Christian Seifert

A new series of papers is available from the Honeynet Project: “Know Your Tools” deals with specific types of honeypots and explains how to use them. The first paper in this series deals with Picviz, a tool to visualize data based on parallel coordinates plots. Picviz enables easy scripting from various inputs (tcpdump, syslog, iptables logs, Apache logs, etc.) to visualize data and quickly discover interesting aspects of it, uncovering previously hidden information that is difficult to identify with traditional analysis methods. The paper is available at http://www.honeynet.org/node/499.

Abstract

This document explains how Picviz can be used to spot attacks. We will use three examples in this paper: analysis of ssh connection logs, a demonstration of the graphical interface on network data generated by a port scanner, and the use of the Picviz command line to discover attacks against an Apache web server. Picviz can handle large amounts of data, as illustrated by the last example, in which two years of raw Apache access logs are analyzed. We will show how attacks that were previously hidden can be discovered in a very short time! We hope Picviz will make you more efficient in analyzing any kind of log files, including network traffic, and able to spot abnormalities even in large datasets.

It’s been a long time since I last wrote about TIP and its evolution. A lot of things have changed during these last months in order to make TIP more efficient and scalable, so it’s worth talking about it! First of all, TIP now exploits the Twisted Plugin System as much as it can. As shown below, the Tracking Intelligence Project services are now twistd commands implemented through the plugin system.

buffer@alnitak ~/tipproject/tip/core $ twistd --help
Usage: twistd [options]
Options:
      --savestats        save the Stats object rather than the text output of the profiler.
  -o, --no_save          do not save state on shutdown
  -e, --encrypted        The specified tap/aos/xml file is encrypted.
      --nothotshot       DEPRECATED. Don't use the hotshot profiler even if it's available.
  -n, --nodaemon         don't daemonize, don't use default umask of 0077
  -q, --quiet            No-op for backwards compatibility.
      --originalname     Don't try to change the process name
      --syslog           Log to syslog, not to file
      --euid             Set only effective user-id rather than real user-id.
  -l, --logfile=         log to a specified file, - for stdout
  -p, --profile=         Run in profile mode, dumping results to specified file
      --profiler=        Name of the profiler to use (profile, cprofile, hotshot). [default: hotshot]
  -f, --file=            read the given .tap file [default: twistd.tap]
  -y, --python=          read an application from within a Python file (implies -o)
  -x, --xml=             Read an application from a .tax file (Marmalade format).
  -s, --source=          Read an application from a .tas file (AOT format).
  -d, --rundir=          Change to a supplied directory before running [default: .]
      --report-profile=  DEPRECATED. Manage --report-profile option, which does nothing currently.
      --prefix=          use the given prefix when syslogging [default: twisted]
      --pidfile=         Name of the pidfile [default: twistd.pid]
      --chroot=          Chroot to a supplied directory before running
  -u, --uid=             The uid to run as.
  -g, --gid=             The gid to run as.
      --umask=           The (octal) file creation mask to apply.
      --help-reactors    Display a list of possibly available reactor names.
      --version          Print version information and exit.
      --spew             Print an insanely verbose log of everything that happens. Useful when debugging freezes or locks in complex code.
  -b, --debug            run the application in the Python Debugger (implies nodaemon), sending SIGUSR2 will drop into debugger
  -r, --reactor=         Which reactor to use (see --help-reactors for a list of possibilities)
      --help             Display this help and exit.
Commands:
    tip-fastflux     Tracking Intelligence Project Fast-Flux Tracking service.
    tip-collector    Tracking Intelligence Project Collector service.
    ftp              An FTP server.
    telnet           A simple, telnet-based remote debugging service.
    socks            A SOCKSv4 proxy service.
    manhole-old      An interactive remote debugger service.
    portforward      A simple port-forwarder.
    web              A general-purpose web server which can serve from a filesystem or application resource.
    inetd            An inetd(8) replacement.
    xmpp-router      An XMPP Router server
    words            A modern words server
    toc              An AIM TOC service.
    dns              A domain name server.

This is really useful since it allows running just the needed modules and fine-tuning their behaviour, as shown below.

buffer@alnitak ~/tipproject/tip/core $ twistd tip-collector --help
Usage: twistd [options] tip-collector [options]
Options:
  -o, --one-shot                 Run the collector just one time
  -c, --concurrency-level=       Set maximum concurrency level [default: 1]
  -s, --reschedule-interval=     Set collector restart interval [default: 21600]
      --version
      --help                     Display this help and exit.

buffer@alnitak ~/tipproject/tip/core $ twistd tip-fastflux --help
Usage: twistd [options] tip-fastflux [options]
Options:
  -w, --whitelist-force-refresh  Force white-list domain refreshing at every commit
  -s, --hot-restart=             Set hot tracking process restart interval [default: 14400]
  -t, --cold-restart=            Set cold tracking process restart interval [default: 7200]
  -m, --hot-schedule=            Set hot tracking scheduling interdomain delay [default: 0.1]
  -n, --cold-schedule=           Set cold tracking scheduling interdomain delay [default: 0.2]
  -k, --cold-delay=              Set cold tracking first-start delay [default: 300]
      --version
      --help                     Display this help and exit.
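For reference, exposing a twistd sub-command through the Twisted plugin system boils down to dropping a module into a twisted/plugins/ directory that provides an object implementing IServiceMaker and IPlugin. The sketch below is only illustrative: the options mirror the tip-collector help output above, but the service construction and the run_collectors entry point are hypothetical, not the actual TIP code.

# A minimal, illustrative twistd plugin in the classic (Python 2 era) style;
# it is NOT the actual TIP code, just the general pattern.
# File: twisted/plugins/tip_collector_plugin.py
from zope.interface import implements

from twisted.python import usage
from twisted.plugin import IPlugin
from twisted.application import internet
from twisted.application.service import IServiceMaker


class Options(usage.Options):
    optFlags = [
        ["one-shot", "o", "Run the collector just one time"],
    ]
    optParameters = [
        ["concurrency-level", "c", 1, "Set maximum concurrency level"],
        ["reschedule-interval", "s", 21600, "Set collector restart interval"],
    ]


class CollectorServiceMaker(object):
    implements(IServiceMaker, IPlugin)
    tapname = "tip-collector"
    description = "Tracking Intelligence Project Collector service."
    options = Options

    def makeService(self, options):
        # Build and return the collector service. TimerService is just a
        # placeholder showing how a periodic collector could be scheduled;
        # run_collectors is a hypothetical entry point.
        from tip.collector import run_collectors
        return internet.TimerService(
            int(options["reschedule-interval"]), run_collectors)


# twistd discovers this object through the plugin system.
serviceMaker = CollectorServiceMaker()

With such a file on the Python path, twistd lists the new sub-command automatically and hands the parsed options to makeService().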

Moreover, I’m definitely satisfied with the design of the Fast-Flux Tracking module, which is explained in the commit log reported below.

commit 9ebf0d1b8ac73997f35d70435bdd3da52da6cd5d
Author: Angelo Dell’Aera <buffer@antifork.org>
Date:   Tue Aug 4 10:04:52 2009 +0200

Fast-Flux Tracking Module Domain Queues

Fast-Flux Tracking Module was modified in order to allow two concurrent domain queues. The first queue is used only for domains already known to be fluxy. This is the most I/O-intensive queue, since it requires the most frequent database operations for storing the collected data. These blocking operations are performed through a thread pool, and tests on the previous version of the module showed they have a detrimental impact on the domain scheduling process, slowing it down too much. So a second queue was added for domains not yet classified as fluxy. The idea is to minimize blocking operations: if a domain is not fluxy there are no blocking operations at all; if a domain turns out to be fluxy, the collected data are saved and the tracking path ends, so that when the first queue restarts it will take care of this new domain. It's worth noting that this approach allows really frequent restarts of both queues, with no destructive interference between them and with really low memory consumption.
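To give an idea of how the two queues fit together, here is a minimal sketch using Twisted's LoopingCall for the periodic restarts and deferToThread for the blocking database writes; all class and function names are illustrative assumptions, not the actual TIP implementation.

# Illustrative sketch of the two-queue approach; not the actual TIP code.
from twisted.internet import defer, task
from twisted.internet.threads import deferToThread


def check_domain(domain):
    """Placeholder for the asynchronous fast-flux checks (DNS lookups, etc.)."""
    return defer.succeed({"fluxy": False})


def save_domain_data(domain, data):
    """Placeholder for the blocking database write (runs in the thread pool)."""
    pass


class FluxTracker(object):
    def __init__(self, hot_interval=14400, cold_interval=7200):
        self.hot_domains = set()     # domains already known to be fluxy
        self.cold_domains = set()    # domains not yet classified as fluxy
        self.hot_loop = task.LoopingCall(self.run_hot_queue)
        self.cold_loop = task.LoopingCall(self.run_cold_queue)
        self.hot_interval = hot_interval
        self.cold_interval = cold_interval

    def start(self):
        self.hot_loop.start(self.hot_interval)
        self.cold_loop.start(self.cold_interval)

    @defer.inlineCallbacks
    def run_hot_queue(self):
        # Hot queue: every pass stores the collected data, so each domain
        # implies a blocking write, delegated to the reactor thread pool.
        for domain in list(self.hot_domains):
            data = yield check_domain(domain)
            yield deferToThread(save_domain_data, domain, data)

    @defer.inlineCallbacks
    def run_cold_queue(self):
        # Cold queue: no blocking work at all unless a domain turns out to be
        # fluxy; it is then saved once and handed over to the hot queue,
        # which will pick it up at its next restart.
        for domain in list(self.cold_domains):
            data = yield check_domain(domain)
            if data.get("fluxy"):
                yield deferToThread(save_domain_data, domain, data)
                self.hot_domains.add(domain)
                self.cold_domains.discard(domain)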

A prerelease is coming. Stay tuned!

A new spamtrap submodule is currently under development. Its targets are spamtraps located on mailservers which I administer. A few of these mailservers generate huge amounts of spam mail, which leads to serious performance trouble if you try to download the messages via POP3/IMAP and then parse them. A different approach was devised for situations like these. I developed a small agent which runs on the mailserver host. This agent loops over the spam files in the maildir and parses them without any network-based data transfer. When it is done, it saves the interesting data in serialized form on the filesystem (through the Python cPickle module) and assigns a version number to it. This allows a remote agent to ask for the latest version and download just the missing versions. The submodule was developed using Twisted Perspective Broker, serializing the saved data directly on the wire, and it currently defines a basic authentication mechanism too. While developing this submodule I realized it could also be useful for sharing data coming from multiple spamtraps between researchers. Suggestions are welcome!
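As a rough illustration of the idea (the class name, the port and the directory layout are made up for the example, not the actual agent code), the mailserver-side agent could expose the versioned data through Perspective Broker more or less like this:

# Illustrative sketch of the mailserver-side agent; not the actual TIP code.
import cPickle
import glob
import os

from twisted.internet import reactor
from twisted.spread import pb


class SpamtrapAgent(pb.Root):
    """Serves the locally parsed spamtrap data, one pickle file per version."""

    def __init__(self, datadir):
        self.datadir = datadir

    def remote_last_version(self):
        versions = [int(os.path.splitext(os.path.basename(f))[0])
                    for f in glob.glob(os.path.join(self.datadir, "*.pickle"))]
        return max(versions) if versions else 0

    def remote_get_version(self, version):
        # The pickled payload is assumed to be plain dicts/lists, so that
        # Perspective Broker can serialize it directly on the wire.
        path = os.path.join(self.datadir, "%d.pickle" % version)
        with open(path, "rb") as f:
            return cPickle.load(f)


if __name__ == "__main__":
    reactor.listenTCP(8789, pb.PBServerFactory(SpamtrapAgent("/var/spamtrap/data")))
    reactor.run()

The remote collector would first ask for last_version via callRemote and then fetch only the versions it has not seen yet.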

A few days ago I started thinking about the scalability limits of the TIP Fast-Flux Tracking module and realized its design was really awful. The approach was based on the idea of assigning a monitoring thread to each fluxy domain. This works well if the number of threads is quite small, but not at the scale I was reaching. First of all, when the number of threads grows, performance decreases because of the Python Global Interpreter Lock, which limits the concurrency of a single interpreter process with multiple threads (and running the process on a multiprocessor system brings no improvement). Moreover, it's really hard to guarantee each thread enough stack space to run without raising segmentation faults.

For these reasons I decided to rewrite the module from scratch, and I'm currently testing it. The new design is really simple, effective and scalable, and I have to thank Jose Nazario, Marcello Barnaba and Orlando Bassotto for the really interesting talks we had about this matter. There is just one process and no monitoring threads. The code is written in such a way as to avoid blocking calls, resulting in a truly asynchronous module. However, when a domain starts being monitored, the backend database has to be accessed, which requires blocking calls. When this happens, the blocking calls are delegated to the Twisted thread pool with a cloned copy of the collected data, so as not to compromise scalability with unnecessary locks. Moreover, the module is now becoming a Twisted Application of its own, and the first tests done with the Twisted epoll reactor are absolutely encouraging. Stay tuned!
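The delegation of the blocking calls can be sketched as follows; this is a toy example assuming Twisted's deferToThread, and the function names are made up.

# Toy sketch of delegating blocking database writes to the thread pool while
# the tracking loop stays asynchronous; function names are made up.
import copy

from twisted.internet.threads import deferToThread


def store_results(snapshot):
    """Blocking database calls; executed inside the reactor's thread pool."""
    pass


def on_domain_tracked(collected_data):
    # Clone the collected data before handing it to the thread pool, so the
    # single-threaded tracking code can keep working on its own copy without
    # any locking.
    snapshot = copy.deepcopy(collected_data)
    return deferToThread(store_results, snapshot)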
