Thursday, December 31, 2009

Underhanded C Contest - 2008 Winners



The Underhanded C Contest is about writing simple, clear and concise code to solve a problem, but written in such a way that it also includes a malicious behaviour. The 2009 competition has just opened and the problem is about maliciously misrouting luggage when a comment field meets certain criteria. Last year's problem was about redacting images in a leaky way. The winners from 2008 have been posted and they're a fun read. Go have a look at them.

If you're too lazy to click through, here's a brief summary.

Third place (Linus Akesson): Relies on the input and output buffers being adjacent on the stack, so that the output redacted image has a copy of the original buffer appended. The code that achieves this looks like code supporting pixel colour depths greater than 24 bits, but it's really just camouflage that lets the change of a '>' to a '<' double the output write length.

Second place (Avinash Baliga): Uses a buffer overflow in an error checking/message macro to overwrite the mask used to redact pixels with 0x0a, which allows some data reconstruction. This entry got bonus points for using an error handler to be evil but lost points for masking out the data to be redacted rather than overwriting it.

First place (John Meacham): Handles the ASCII PPM image format and relies on the discrepancy between how numbers are represented as ints versus as character strings. Each redacted pixel's colour value is replaced with 0 to indicate no intensity, but since the file stores values as ASCII text the replacement is done character by character: every digit is overwritten with a '0', so a value keeps its original number of digits. Low intensity values (one digit) become '0', while higher intensity values become '00' or '000'. So RGB "255 32 0" (0xFF2000) becomes "000 00 0" rather than "0 0 0". When these strings are parsed they all evaluate to 0, but as text they leak enough information to recover the redacted pixels' rough intensities.
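
To make the leak concrete, here's a rough sketch in Python (not Meacham's actual C entry; the "intensity band" recovery is just my illustration of what the digit counts give away):

#!/usr/bin/python3
# Toy illustration of the ASCII PPM redaction leak described above.
# Overwriting each digit character with '0' keeps the file valid (every field
# still parses as 0), but the number of '0' characters leaks how many digits
# the original value had, i.e. its rough magnitude.

def leaky_redact(ascii_pixels):
    # Replace each colour value character-by-character with '0'.
    return ' '.join('0' * len(value) for value in ascii_pixels.split())

def recover_band(redacted):
    # Map each leaked digit count back to an intensity band.
    bands = {1: '0-9', 2: '10-99', 3: '100-255'}
    return [bands[len(value)] for value in redacted.split()]

original = '255 32 0'            # a bright red pixel
redacted = leaky_redact(original)
print(redacted)                  # '000 00 0' -- parses as 0 0 0, but...
print(recover_band(redacted))    # ['100-255', '10-99', '0-9']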

I really recommend looking at the code snippets that go with these entries; they're fascinating.
Maybe I'll have a shot at maliciously routing baggage.

P.S. Happy New Year.

Wednesday, December 30, 2009

The Best of the Web Filter



There are a lot of bad websites out there and visiting one can do bad things to your network. There are plenty of technologies that try to detect bad pages and block or filter them, but the problem is that they're imperfect. There are other solutions like whitetrash that only allow you to visit sites listed as good (whitetrash is awesome, go play with it). The problem is creating the list of what sites are good. whitetrash lets users and administrators decide (and I think there's a training mode where it tries to figure it out for you), but I was thinking about other lists you could use. At about the same time I was writing a paper about Wikipedia, 4chan and Twitter (I may post it here in the future) and was looking for statistics about their popularity. That's when I remembered alexa.com and their handy ratings, and while poking around there I discovered that they'll provide you with a CSV file of the top 1 million domains. I decided to see what the web would be like if you could only visit the top n sites.

I decided to implement it as a squid redirector in Python 3. Redirectors are dead simple: read a request off standard in, write the new destination to standard out. I chose Python 3 because I keep meaning to spend more time getting used to the language changes (it's trivial to switch it back to 2.6).

Anyway here's the script (and a download link):


#!/usr/bin/python3
import sys
import urllib.parse


# Only whitelist the top `maxsites` entries of the Alexa top-1m CSV.
maxsites = 5000
# Where to send requests for non-whitelisted domains.
failurl = 'http://myfaildomain/'
pathtofile = '/path/to/top-1m.csv'
# The CSV is "rank,domain" per line; keep the domains ranked <= maxsites.
sites = {l.split(',')[1].strip() for l in open(pathtofile) if int(l.split(',')[0]) <= maxsites}
# Make sure the fail page itself is always reachable.
sites.add(urllib.parse.urlparse(failurl).netloc)


# Squid hands us one request per line; the URL is the first field.
for l in sys.stdin:
    try:
        fqdn = urllib.parse.urlparse(l.split()[0]).netloc
        # Allow the domain itself, or retry without its least significant
        # subdomain (so www.example.com matches a listed example.com).
        if fqdn not in sites and \
            ".".join(fqdn.split('.')[1:]) not in sites:
            sys.stdout.write('%s?fqdn=%s\n' % (failurl, fqdn))
        else:
            # A blank line tells squid to leave the request unchanged.
            sys.stdout.write('\n')
        sys.stdout.flush()
    except Exception:
        # Fail open on anything unexpected so squid isn't left waiting.
        sys.stdout.write('\n')
        sys.stdout.flush()

You may notice that this is horribly inefficient: squid runs multiple instances of the script (5 by default) to load balance, and each one holds a big in-memory data structure containing all the allowed sites. That's part of the reason there's a parameter to limit the number of sites added to the whitelist.

The list actually creates a pretty decent browsing experience, but it isn't perfect. First, some sites won't have all their content available because parts of it are served off a non-whitelisted domain (some of that is mitigated by trying the domain a second time without its least significant subdomain), but this is also a feature, since you miss injected iframes and nasty tracking scripts too. Second, some sites you visit just won't be as popular as you think; in a real implementation I think it'd be worth scraping the top X sites for your country and the top Y sites from each category and adding them to the list as well. Finally, the list is only of popular sites: plenty of sites on it could be unsafe, including several porn sites and other sites of dubious security posture, so there are still risks and this isn't a good policy enforcement list.

I don't see much in the way of production use for this type of list, but maybe it would help people pre-screen URLs. For example, if you were doing blacklisting based on a malware database (like Google's Safe Browsing API) but lookups were expensive, you could filter out any domains on the popular list to speed things up (with some risk). The same goes if you have a high-interaction honey-client: screening out URLs could save you a lot of time.
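
A pre-screen might look something like this rough sketch; expensive_lookup() and /path/to/url-list.txt are just placeholders for whatever check and URL feed you'd actually use:

#!/usr/bin/python3
# Sketch: use the popular-domains list to skip expensive reputation checks.
# expensive_lookup() and the url-list path are placeholders, not real APIs.
import urllib.parse

maxsites = 5000
pathtofile = '/path/to/top-1m.csv'
# Same "rank,domain" CSV as the redirector above.
popular = {l.split(',')[1].strip() for l in open(pathtofile)
           if int(l.split(',')[0]) <= maxsites}


def expensive_lookup(url):
    # Placeholder for a real check, e.g. a malware database query or handing
    # the URL off to a high-interaction honey-client.
    print('would check:', url)


def screen(urls):
    # Yield only the URLs whose domain (or parent domain) isn't popular.
    for url in urls:
        url = url.strip()
        fqdn = urllib.parse.urlparse(url).netloc
        parent = '.'.join(fqdn.split('.')[1:])
        if fqdn in popular or parent in popular:
            continue    # popular, so skip the expensive check (some risk here)
        yield url


for url in screen(open('/path/to/url-list.txt')):
    expensive_lookup(url)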

As an aside, I wrote a similar script to the one above to compare URLs against the Google Safe Browsing API database, but I seem to have lost the code.

Tuesday, December 29, 2009

whitetrash


whitetrash is a great web security companion for squid, and I thought I'd throw in a plug for it here.

Not only does it restrict people to visiting approved sites, it also lets users (or administrators) add sites to the approved list via a web form that you get automatically redirected to. As of 1.0 it also checks sites against the Google Safe Browsing API so that sites Google thinks are bad don't accidentally get added to the whitelist. It's got fancy authentication options, captcha support and a Firefox plugin. No steak knives though.

So why do you need something like this? Mostly it's to stop unintended web content from being accessed and/or rendered: for example, injected iframes on compromised sites, or spyware trying to post your sensitive data to the bad guys. Basically it tries to cut down on the web traffic that happens without a specific user requesting it. All of what you want but nothing else.

Go look at it.

Monday, December 28, 2009

Nozzle: Protecting Browsers Against Heap Spraying Attacks

A slightly more technical entry today, to prove that it's not all net-centric cyberwar theorising or online gaming here at Meme Overload (or "Meme Over" as I've started thinking of it, as in "That's it man, meme over man, meme over!").

Nozzle: Protecting Browsers Against Heap Spraying Attacks is a project from Microsoft Research that proposes a technique for detecting heap spraying attacks. If you need a heap spraying refresher, or aren't quite sure what I'm on about yet, have a look at the Wikipedia entries on Heap Spraying and Buffer Overflow (there's even a reference to the Nozzle paper on the heap spraying page!). Basically heap spraying is a technique that makes heap buffer overflows reliably exploitable by increasing the chance that your overwritten pointer points at injected code. In practice heap spraying is usually some injected JavaScript that loops around allocating strings filled with your shellcode.

Nozzle proposes to detect attempted heap spraying attacks by watching a process' allocated heap blocks and trying to determine which blocks contain executable code (basically by trying to disassemble them). Nozzle keeps some statistics about the proportion of the heap that has shellcode-like data in it and will alert if that number gets too high.

From Zorn, Livshits and Ratanaworabhan's experiments applying the technique to Firefox, it seems to have worked really well. My understanding is that they modified Firefox with binary detour patches to add worker threads that check the memory regions, and modified the memory allocation routines to queue newly allocated blocks for scanning. There's a slight timing problem here: at the time of allocation a block doesn't contain real data yet, so a premature scan might yield false negatives, but since new memory is added to the end of the work queue it will almost always have matured before being scanned. Only allocations larger than 32 bytes are examined, because anything smaller is really too small for a significant payload (especially once a NOP sled is taken into account), and to minimize overhead only a random subset of blocks is scanned.
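
To make the idea concrete, here's a toy sketch in Python rather than anything resembling Nozzle's real implementation (which disassembles native heap blocks and computes an attack-surface metric): it fakes a heap as a list of byte blocks, "sprays" it the way the injected JavaScript would, then samples the larger blocks, flags ones dominated by a NOP-sled-like byte run and alerts when the suspicious proportion gets too high.

#!/usr/bin/python3
# A toy model of the Nozzle idea, NOT the real thing: Nozzle disassembles heap
# blocks and computes an attack-surface style metric; this sketch just flags
# blocks dominated by a NOP-sled-like byte run and alerts on the proportion.
import random

NOP = 0x90              # classic x86 NOP used in sleds
MIN_BLOCK = 32          # ignore tiny allocations, as Nozzle does
SAMPLE_RATE = 0.25      # only scan a random subset of blocks
THRESHOLD = 0.5         # alert if this fraction of sampled bytes looks sprayed

def benign_block(size):
    return bytes(random.randrange(256) for _ in range(size))

def sprayed_block(size, payload=b'\xcc' * 64):
    # Roughly what the injected JavaScript builds: a huge NOP sled followed
    # by the shellcode, repeated across many allocations.
    return bytes([NOP]) * (size - len(payload)) + payload

def looks_like_shellcode(block):
    # Crude stand-in for Nozzle's disassembly: a block dominated by sled bytes.
    return block.count(NOP) > 0.8 * len(block)

def scan_heap(heap):
    sampled = suspicious = 0
    for block in heap:
        if len(block) < MIN_BLOCK or random.random() > SAMPLE_RATE:
            continue
        sampled += len(block)
        if looks_like_shellcode(block):
            suspicious += len(block)
    return suspicious / sampled if sampled else 0.0

heap = [benign_block(random.randrange(16, 4096)) for _ in range(200)]
print('suspicious proportion before spray: %.2f' % scan_heap(heap))
heap += [sprayed_block(64 * 1024) for _ in range(200)]      # the "spray"
ratio = scan_heap(heap)
print('suspicious proportion after spray:  %.2f%s'
      % (ratio, '  -> ALERT' if ratio > THRESHOLD else ''))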

It's a very impressive piece of work. I'm not sure why they chose detour patches when they had the Firefox source code to modify to prove their concept, or, if they were going for compatibility with closed-source software, why they didn't use IAT hooks via an injected DLL. Still, I'm a big fan of this work. I'd love to see Microsoft Research release some code for this so that we mere mortals can have a play. I'm also going to keep an eye on things in the hope that a more general solution is implemented.

Sunday, December 27, 2009

Twitter as a Vector for Disinformation


Earlier this year I wrote a paper about disinformation on Twitter: basically the potential for people to spread lies and shape opinion with it. Like the EVE paper, I've started polishing it before submitting it to a journal and thought I'd share my draft with you all. There's stuff about swine flu, the US economic stimulus package, plane crashes and terrorism! How can you not love it?


Twitter as a Vector for Disinformation
Abstract
Twitter is a social network that represents a powerful information channel with the potential to be a useful vector for disinformation. This paper examines the structure of the Twitter social network and how this structure has facilitated the passing of disinformation both accidental and deliberate. Examples of the use of Twitter as an information channel are examined from recent events. The possible effects of Twitter disinformation on the information sphere are explored as well as the defensive responses users are developing to protect against tainted information.

Update: This paper was published in Volume 9, Issue 1 of the Journal of Information Warfare. The full citation is:
Chamberlain, P. R. (2010). Twitter as a Vector for Disinformation. Journal of Information Warfare 9:1, 11-17.

Friday, December 25, 2009

A Bonus Christmas Morning Post

On the topic of EVE Online, The Mittani (of the GoonFleet Intelligence Agency, GIA) has a column over at tentonhammer where he discusses EVE Online, and in particular human intelligence and information operations in EVE.

Have a look at:
Sins of a Solar Spymaster #4 - The Necessity of Espionage
Sins of a Solar Spymaster #22 - The Seven Types of Spy
Sins of a Solar Spymaster #32 - The Most Dangerous Agent

Also, since I wrote my EVE paper there have been a few changes in the strategic landscape; for example, BoB disbanded, largely due to the defection of a BoB Director.

See The Mittani's forum post:
EVE Forums: there is no bob~~~

Hear The Mittani tell the story:

Thursday, December 24, 2009

Information Operations in and around EVE Online



So last year I wrote a paper about EVE Online and the interesting things corporations do to each other. Recently I've been polishing it before submitting it to a journal, and as a special blog-warming post I thought I'd share my current draft. If you read it, I'd be interested to know what you think.


Abstract
Massively Multiplayer Online Role Playing Games (MMOs) provide a persistent virtual world for players to explore and interact with. CCP’s EVE Online is a science fiction MMO that explicitly encourages conflict between players. Information operation strategies are employed by groups of EVE Online players inside and outside the virtual world to seek tactical and strategic advantage. Players use propaganda, deception, and computer hacking techniques to complement their virtual military and economic operations. This paper examines how information operations are being conducted in and around EVE Online and the effect MMO information operations can have on the greater information environment.

First Post!

This is a blog I'll use to publish my research-related activity. So I'll throw up links to my papers (possibly even yet-to-be-released work), links to stuff of interest, as well as random stuff that I wanted to put somewhere. My research interests are all over the place, but you're likely to find computer security, information warfare and Internet culture stuff here over time.

This blog will start off low volume (I'm imagining only 2 more posts in the next month) but may ramp up as I get a feel for what content I want to place up here. I highly recommend subscribing to the blog's RSS feed so that you don't miss the (very sporadic) action!