Tag Archive: Honeynet Project


Honeynet Project Challenge 2010/3 – “Banking Troubles” has just been posted: the task is to investigate a memory image of an infected virtual machine. The challenge has been provided by Josh Smith and Matt Cote from The Rochester Institute of Technology Chapter, Angelo Dell’Aera from the Italian Chapter and Nicolas Collery from the Singapore Chapter.

Submit your solution at http://www.honeynet.org/challenge2010/ by 17:00 EST, Sunday, April 18th 2010. Results will be released on Wednesday, May 5th 2010. Small prizes will be awarded to the top three submissions.

Skill Level: Difficult

UPDATE: Submission deadline extended to Monday, 26th of April 2010

Challenge 2 of the Honeynet Project Forensic Challenge has just been posted. The challenge, titled “browsers under attack”, has been provided by Nicolas Collery from the Singapore Chapter and Guillaume Arcas from the French Chapter.

Submission deadline is March 1st and results will be released on Monday, March 15th 2010. Small prizes will be awarded to the top three submissions.

Have fun!

UPDATE: Submission deadline extended to Monday, 8th of March 2010

About two months ago I started contributing to PhoneyC, a pure Python honeyclient implementation originally developed by Jose Nazario. My feeling is that our development efforts are moving in the right direction. The code can be downloaded here; if you’re interested, take a look at the different development branches and give us your feedback. Moreover, if you’re interested in the technical details of PhoneyC, please read this paper by Jose Nazario.

After several years without any Honeynet Project Challenges, there will finally be new Forensic Challenges starting next Monday (January 18th, 2010). Here is the official announcement.

I am very happy to announce the Honeynet Project Forensic Challenge 2010. The purpose of the Forensic Challenges is to take learning one step further. Instead of having the Honeynet Project analyze attacks and share its findings, the Forensic Challenges give the security community the opportunity to analyze attacks and share their findings. In the end, individuals and organizations not only learn about threats, but also learn how to analyze them. Even better, individuals can access the write-ups from other participants and learn about new tools and techniques for analyzing attacks. Best of all, the attacks of the Forensic Challenge are real attacks encountered in the wild, provided by our members.
It has been several years since we provided Forensic Challenges, and with the Forensic Challenge 2010 we will provide desperately needed upgrades. The Forensic Challenge 2010 will include a mixture of server-side attacks on the latest operating systems and services, client-side attacks that emerged in the past few years, attacks on VoIP systems, web applications, etc. At the end of each challenge, we will provide a sample solution created by our members using state-of-the-art, publicly available tools such as libemu and dionaea.
The first challenge (of several for 2010) will be posted on our Forensic Challenges web site on Monday, January 18th 2010. We will be open to submissions for about two weeks and announce the winners by February 15th 2010. This year, we will also award the top three submissions with prizes! Please check the web site on Monday, January 18th 2010 for further details…

Christian Seifert

A new series of papers is available from the Honeynet Project: “Know Your Tools” deals with specific tools and explains how to use them. The first paper in this series covers Picviz, a tool to visualize data based on parallel coordinates plots. Picviz is a parallel coordinates plotter which enables easy scripting from various inputs (tcpdump, syslog, iptables logs, Apache logs, etc.) to visualize data and quickly discover interesting aspects of it. Picviz uncovers previously hidden data that is difficult to identify with traditional analysis methods. The paper is available at http://www.honeynet.org/node/499.

Abstract

This document explains how Picviz can be used to spot attacks. We use three examples in this paper: analysis of SSH connection logs, a demonstration of the graphical interface on network data generated by a port scanner, and the use of the Picviz command line to discover attacks against an Apache web server. Picviz can handle large amounts of data, as illustrated by the last example, in which two years of raw Apache access logs are analyzed. We show how attacks that were previously hidden can be discovered in a very short time. We hope Picviz will make you more efficient in analyzing any kind of log file, including network traffic, and able to spot abnormalities even in large datasets.
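
Just to give a feeling for what a parallel coordinates plot looks like, here is a tiny, purely illustrative sketch (this is not Picviz, and the SSH data are made up) that draws a handful of log records across parallel axes with pandas and matplotlib:

import matplotlib.pyplot as plt
import pandas as pd
from pandas.plotting import parallel_coordinates

# Hypothetical, pre-parsed SSH log records: one row per login attempt.
records = pd.DataFrame({
    "hour":     [2, 2, 3, 14, 15],
    "port":     [22, 22, 22, 22, 22],
    "attempts": [120, 98, 150, 1, 2],
    "result":   ["fail", "fail", "fail", "ok", "ok"],
})

# Each record becomes one line crossing the hour/port/attempts axes; the
# brute-force attempts cluster together and stand out from the normal logins.
parallel_coordinates(records, class_column="result",
                     cols=["hour", "port", "attempts"])
plt.title("Parallel coordinates view of SSH login attempts (illustrative)")
plt.show()

Picviz itself takes care of parsing the logs and drawing the plot; the snippet above is only meant to convey the visual idea behind the paper.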

It has been a long time since I last wrote about TIP and its evolution. A lot of things have changed during these last months to make TIP more efficient and scalable, so maybe it’s worth talking about them! First of all, TIP now exploits the Twisted Plugin System as much as possible: as shown below, the Tracking Intelligence Project services are now twistd subcommands implemented through the plugin system.

buffer@alnitak ~/tipproject/tip/core $ twistd --help
Usage: twistd [options]
Options:
      --savestats        save the Stats object rather than the text output of the profiler.
  -o, --no_save          do not save state on shutdown
  -e, --encrypted        The specified tap/aos/xml file is encrypted.
      --nothotshot       DEPRECATED. Don't use the hotshot profiler even if it's available.
  -n, --nodaemon         don't daemonize, don't use default umask of 0077
  -q, --quiet            No-op for backwards compatibility.
      --originalname     Don't try to change the process name
      --syslog           Log to syslog, not to file
      --euid             Set only effective user-id rather than real user-id.
  -l, --logfile=         log to a specified file, - for stdout
  -p, --profile=         Run in profile mode, dumping results to specified file
      --profiler=        Name of the profiler to use (profile, cprofile, hotshot). [default: hotshot]
  -f, --file=            read the given .tap file [default: twistd.tap]
  -y, --python=          read an application from within a Python file (implies -o)
  -x, --xml=             Read an application from a .tax file (Marmalade format).
  -s, --source=          Read an application from a .tas file (AOT format).
  -d, --rundir=          Change to a supplied directory before running [default: .]
      --report-profile=  DEPRECATED. Manage --report-profile option, which does nothing currently.
      --prefix=          use the given prefix when syslogging [default: twisted]
      --pidfile=         Name of the pidfile [default: twistd.pid]
      --chroot=          Chroot to a supplied directory before running
  -u, --uid=             The uid to run as.
  -g, --gid=             The gid to run as.
      --umask=           The (octal) file creation mask to apply.
      --help-reactors    Display a list of possibly available reactor names.
      --version          Print version information and exit.
      --spew             Print an insanely verbose log of everything that happens. Useful when debugging freezes or locks in complex code.
  -b, --debug            run the application in the Python Debugger (implies nodaemon), sending SIGUSR2 will drop into debugger
  -r, --reactor=         Which reactor to use (see --help-reactors for a list of possibilities)
      --help             Display this help and exit.
Commands:
    tip-fastflux     Tracking Intelligence Project Fast-Flux Tracking service.
    tip-collector    Tracking Intelligence Project Collector service.
    ftp              An FTP server.
    telnet           A simple, telnet-based remote debugging service.
    socks            A SOCKSv4 proxy service.
    manhole-old      An interactive remote debugger service.
    portforward      A simple port-forwarder.
    web              A general-purpose web server which can serve from a filesystem or application resource.
    inetd            An inetd(8) replacement.
    xmpp-router      An XMPP Router server
    words            A modern words server
    toc              An AIM TOC service.
    dns              A domain name server.
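
For the sake of clarity, here is a minimal sketch of how a service can be exposed as a twistd subcommand through the plugin system. The class names and the service body are just illustrative (they are not the actual TIP code), and the option names mirror those of the tip-collector service shown below; such a module has to live in a twisted/plugins/ directory on the Python path.

from zope.interface import implementer

from twisted.application import service
from twisted.plugin import IPlugin
from twisted.python import usage


class Options(usage.Options):
    # Options surfaced by 'twistd tip-collector --help'.
    optFlags = [["one-shot", "o", "Run the collector just one time"]]
    optParameters = [
        ["concurrency-level", "c", 1, "Set maximum concurrency level"],
        ["reschedule-interval", "s", 21600, "Set collector restart interval"],
    ]


class CollectorService(service.Service):
    # Placeholder service: the real collection logic is not shown here.
    def __init__(self, concurrency, interval, one_shot):
        self.concurrency = concurrency
        self.interval = interval
        self.one_shot = one_shot

    def startService(self):
        service.Service.startService(self)
        # ... schedule the collection work here ...


@implementer(service.IServiceMaker, IPlugin)
class CollectorServiceMaker(object):
    tapname = "tip-collector"   # the name listed under 'Commands:' above
    description = "Tracking Intelligence Project Collector service"
    options = Options

    def makeService(self, options):
        return CollectorService(int(options["concurrency-level"]),
                                int(options["reschedule-interval"]),
                                bool(options["one-shot"]))


# Instantiating the service maker at module level is what lets twistd
# discover and list the new subcommand.
serviceMaker = CollectorServiceMaker()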

This approach is really useful since it allows running just the needed modules and fine-tuning their behaviour, as shown below.

buffer@alnitak ~/tipproject/tip/core $ twistd tip-collector --help
Usage: twistd [options] tip-collector [options]
Options:
  -o, --one-shot                 Run the collector just one time
  -c, --concurrency-level=       Set maximum concurrency level [default: 1]
  -s, --reschedule-interval=     Set collector restart interval [default: 21600]
      --version
      --help                     Display this help and exit.

buffer@alnitak ~/tipproject/tip/core $ twistd tip-fastflux --help
Usage: twistd [options] tip-fastflux [options]
Options:
  -w, --whitelist-force-refresh  Force white-list domain refreshing at every commit
  -s, --hot-restart=             Set hot tracking process restart interval [default: 14400]
  -t, --cold-restart=            Set cold tracking process restart interval [default: 7200]
  -m, --hot-schedule=            Set hot tracking scheduling interdomain delay [default: 0.1]
  -n, --cold-schedule=           Set cold tracking scheduling interdomain delay [default: 0.2]
  -k, --cold-delay=              Set cold tracking first-start delay [default: 300]
      --version
      --help                     Display this help and exit.
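
Assuming the services are registered as sketched above, a single module can then be started on its own with its behaviour tuned from the command line, for example (the semantics here are simply those suggested by the option names):

buffer@alnitak ~/tipproject/tip/core $ twistd -n tip-collector -c 4 -s 3600

This runs only the collector in the foreground (-n), with a concurrency level of 4 and a reschedule interval of one hour.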

Moreover, I’m definitely satisfied with the design of the Fast-Flux Tracking module, which is explained in the commit log reported below.

commit 9ebf0d1b8ac73997f35d70435bdd3da52da6cd5d
Author: Angelo Dell’Aera <buffer@antifork.org>
Date:   Tue Aug 4 10:04:52 2009 +0200

Fast-Flux Tracking Module Domain Queues

The Fast-Flux Tracking Module was modified in order to allow two concurrent domain queues. The first queue is used just for domains which are already known to be fluxy. This is the most I/O-intensive queue, since it requires frequent database operations for storing the collected data. These blocking operations are realized through a thread pool, and the tests done on the previous version of the module showed that they have a detrimental impact on the domain scheduling process, slowing it down too much. So a second queue was added, used for domains not yet classified as fluxy. The idea is to minimize blocking operations: if a domain is not fluxy there are no blocking operations at all. If a domain is fluxy, the collected data are saved and the tracking path then ends, in such a way that when the first queue restarts it will take care of this new domain. It’s worth noting that this approach allows really frequent restarts of both queues, with no destructive interference between them and with really low memory consumption.
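
To make the idea a bit more concrete, here is a rough sketch of how the two queues could coexist in a single Twisted process. This is not the actual TIP code: every helper function is a placeholder and the names are made up, but the structure follows the commit message, with the hot queue pushing its blocking database writes to the thread pool and the cold queue doing no blocking work at all.

from twisted.internet import defer, reactor, task, threads


def resolve_domain(domain):
    # Placeholder for the real DNS lookup; returns a Deferred with fake data.
    return defer.succeed({"domain": domain, "ips": []})


def looks_fluxy(data):
    # Placeholder heuristic; the real classification logic is not shown.
    return len(data["ips"]) > 5


def store_result(domain, data):
    # Placeholder for the blocking database write (runs in the thread pool).
    pass


class FastFluxTracker(object):
    def __init__(self, hot_delay=0.1, cold_delay=0.2):
        self.hot_domains = set()      # domains already classified as fluxy
        self.cold_domains = set()     # domains not yet classified
        self.hot_delay = hot_delay    # interdomain delay of the hot queue
        self.cold_delay = cold_delay  # interdomain delay of the cold queue

    def track_hot(self):
        # Hot queue: one lookup every hot_delay seconds; the blocking database
        # write is handed to the thread pool so the scheduler never blocks.
        for i, domain in enumerate(sorted(self.hot_domains)):
            reactor.callLater(i * self.hot_delay, self._check_hot, domain)

    def _check_hot(self, domain):
        d = resolve_domain(domain)
        d.addCallback(lambda data: threads.deferToThread(store_result, domain, data))

    def track_cold(self):
        # Cold queue: no blocking operations at all; a domain found to be fluxy
        # is simply promoted, and the hot queue picks it up at its next restart.
        for i, domain in enumerate(sorted(self.cold_domains)):
            reactor.callLater(i * self.cold_delay, self._check_cold, domain)

    def _check_cold(self, domain):
        d = resolve_domain(domain)
        d.addCallback(lambda data: self._classify(domain, data))

    def _classify(self, domain, data):
        if looks_fluxy(data):
            self.cold_domains.discard(domain)
            self.hot_domains.add(domain)


if __name__ == "__main__":
    tracker = FastFluxTracker()
    # Both queues restart frequently and independently of each other.
    task.LoopingCall(tracker.track_hot).start(14400)   # hot restart interval
    task.LoopingCall(tracker.track_cold).start(7200)   # cold restart interval
    reactor.run()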

A prerelease is coming. Stay tuned!

A few days ago I started a really exciting new experience by joining the Honeynet Project. This really short post is just to say thank you, for the umpteenth time, to Lance Spitzner for the opportunity he offered me. I hope to be able to contribute as best as I can!