FTP and Stateful Firewalls

Recently I had to try and explain why an FTPS configuration was not working over an otherwise open private WAN. The issue was the two stateful firewalls, one at each end. Since writing this post I have shown it / e-mailed it to three other people to try and help them understand their own encrypted FTP issues.

So because it seems helpful I’ll add it here (IP gurus will get annoyed by the very simplistic language used, but it could save you time if you get asked in future!)

FTP connections require more than one channel of information: there is the control channel on TCP port 21, and then one or more data transfer channels for PUT/GET/DIR commands (sending, receiving and listing data etc.). The TCP ports that the data transfer channels use are negotiated between the server and client via the control channel once the user has logged in successfully.
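As a sketch of what that negotiation looks like on the wire: both the client’s PORT command (active mode) and the server’s PASV reply (passive mode) encode an address and port as six comma-separated numbers, with the port split into a high and low byte. This little helper is purely illustrative, not part of the original setup:

```python
import re

def extract_data_port(line):
    """Pull the negotiated data port out of an FTP control-channel line.

    Handles both "PORT h1,h2,h3,h4,p1,p2" (sent by the client in active
    mode) and "227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)" (sent by
    the server in passive mode). The port is encoded as p1 * 256 + p2.
    """
    m = re.search(r"(\d+),(\d+),(\d+),(\d+),(\d+),(\d+)", line)
    if m is None:
        return None  # not a port negotiation line
    p1, p2 = int(m.group(5)), int(m.group(6))
    return p1 * 256 + p2
```

For example, `extract_data_port("227 Entering Passive Mode (10,0,0,1,4,1)")` gives 1025.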

A modern stateful firewall works by looking at outbound connections from an internal network and ‘tracking’ them; that is, it keeps a record of internal host A trying to contact internet webserver W1 on port 21, and then only allows traffic in from the internet if it is from webserver W1 on port 21 sending data to internal host A. In this way, any communication channel that did not originate inside the customer’s network will not be allowed into the network from the internet (or <NETWORK NAME REMOVED> in this case). Clearly this posed a problem for the FTP protocol: the original FTP specification mandated that the FTP SERVER would decide on the data transfer channel ports and try connecting BACK to the client on these new ports. Of course that did not work, as the firewall at the client end has no record of the client connecting outbound on these data transfer channel ports, and so it drops the connection.

To get around this issue (when security became important on the internet and people started deploying firewalls), a new ‘PASSIVE’ mode was added to the FTP standard. This mode has the data transfer channels created FROM the client TO the server, allowing the firewall to see the outbound connection and therefore allow the return data traffic from the server. This works fine, unless BOTH server AND client are behind firewalls; at that point, neither ACTIVE nor PASSIVE mode solves the problem, as there will always be one firewall that drops the connection because it hasn’t seen the computer behind it initiate the connection first.
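From the client side, passive mode is usually just a flag. A minimal sketch with Python’s ftplib (the hostname and credentials are placeholders, not anything from the setup described here):

```python
from ftplib import FTP

def list_remote_dir(host, user, password):
    """List a remote directory over FTP in passive mode (illustrative sketch)."""
    ftp = FTP(host)            # control channel: client -> server, port 21
    ftp.login(user, password)
    ftp.set_pasv(True)         # data channels also opened client -> server
    names = ftp.nlst()         # opens a data channel; fails if a firewall blocks it
    ftp.quit()
    return names
```

If both ends are firewalled, it is exactly the `nlst()` call above that hangs or dies, because login only needs the control channel.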

To solve this, most firewalls (including ours here and the one at <SITE NAME REMOVED>) have ‘FTP helpers’ built in. These pieces of code inspect the data passed between server and client over the FTP control channel (port 21), and therefore see the negotiation between the systems over which data channels to use. Because they can see which ports the systems are getting ready to use for FTP data channels, the firewalls can dynamically open the needed ports, expectantly wait for the connection, and then close the ports again when the control channel disconnects (because if there is no control channel, there is no user).

This works perfectly. However, if you need ENCRYPTION on your FTP transfer, due to the nature of the data you are transferring, then both control and data channels are encrypted from client to server and back again with TLS or SSL. The firewall becomes blind to the data it needs to ‘help’ the FTP connection: the control channel appears to the firewall as nothing but encrypted gibberish, so the FTP helper in the firewall cannot work out which ports are being negotiated.

This is why you can log in successfully, but anything that requires listing, sending or retrieving data fails: the data channel cannot be set up because the firewalls are not expecting the connections. The only resolution to this is FTP Clear Control Channel (CCC) mode, which (as the name suggests) only uses encryption for the transfer channels and leaves the control channel in plain text, so that firewalls along the path can deal with the connection correctly.
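Python’s ftplib happens to expose this as `FTP_TLS.ccc()`, which makes the idea easy to show. A minimal sketch (hostname and credentials are placeholders, and the server must actually support the CCC command):

```python
from ftplib import FTP_TLS

def list_with_ccc(host, user, password):
    """FTPS listing with a Clear Control Channel (illustrative sketch)."""
    ftps = FTP_TLS(host)
    ftps.login(user, password)   # control channel is TLS-protected for login
    ftps.prot_p()                # keep the data channels encrypted
    ftps.ccc()                   # drop the control channel back to plain text
    names = ftps.nlst()          # firewall helpers can now see the PASV reply
    ftps.quit()
    return names
```

The sensitive file contents still travel encrypted; only the port negotiation is visible to the firewalls in the path.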

It is support for FTP Clear Control Channel mode that I wanted to log onto the server and check for, but after some reading into FileZilla Server, it appears this is not supported.

It is for this reason that both our site AND <SITE NAME REMOVED> are to blame, purely because they both operate firewalls.

This is not an issue that can be resolved without doing one of the following:

– Running an FTP server that supports CCC

– Removing one of the Firewalls

– Removing encryption

– Permanently opening up a range of ports from/to both machines and then configuring both server and client to always use these ports for data channels. This would also mean that only the pair of systems specifically configured in this way could successfully use FTP in this manner.
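For that last option, the server-side half typically means pinning the passive port range in the FTP server’s config. As a sketch, vsftpd is shown here because its options are well known (the range itself is an example; FileZilla Server has an equivalent setting in its admin interface):

```ini
# /etc/vsftpd.conf - pin passive data channels to a fixed range (example range)
pasv_enable=YES
pasv_min_port=50000
pasv_max_port=50050
```

The matching firewall rules would then permanently allow TCP 50000–50050 between the two machines, with no helper needed.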

Busy Busy!

A month since my last post!
Poor I know, but I have been more snowed under with exams, coursework and final year project work than… well, London 🙂

Now into the second semester of my final year, timetable changes have meant my work hours/days have changed; however, this may not be such a bad thing. Last semester I worked Monday and Tuesday part time for an IT company, then did uni and project work Wednesday to Friday. Now I have lectures Tuesday morning and at some point every other day except Monday, so I’m working Monday, then Wednesday afternoon and Thursday morning.

Have done this for one week now and I actually like it. I feel that, due to the break between the days, I am going to get much more done on the long-running projects that work has assigned me, as that break between Wednesday and Thursday allows your brain to reflect and carry on pondering, allowing touch-ups and changes the next day.
It also breaks the work up, never a bad thing.

Had exams throughout January for last semester’s modules; some pretty deep questions on low-level Bluetooth, UWB (ultra-wideband) and WiMAX (IEEE 802.16d/e) came up in our advanced network technologies exam, but I think I answered them OK.

Also pondering next year quite a lot at the moment; with friends rushing round applying for jobs left, right and centre, I really need to consider exactly what I plan to do and where I want to be come next September.
Obviously somewhere in network security / advanced network tech, just where would best suit my knowledge and allow me to keep learning what I enjoy… Answers on a postcard 🙂

Anyway, enough of a catch-up. I have stuff to ramble about, like more iptables stuff, OpenSolaris and my uni project, but I’ll save those for another post.


Root CA spoofing successful

A proof-of-concept attack has been presented at 25C3 (http://events.ccc.de/congress/2008/) showing that it is possible to use the well-known MD5 hash collision weakness to create your own ‘Certificate Authority’ (CA) signing certificate which is already accepted as trusted by most of the major browsers.

This allows an attacker with this root CA key to sign any other certificate he wishes, and all of these will be trusted by client browsers.

Scenario: in the past, if you went to your online banking website and a certificate error appeared, you would suspect something was up; possibly you were being man-in-the-middled and proxied through a malicious machine, or alternatively your DNS had been poisoned and the site you were looking at was not the real bank’s site. You knew this BECAUSE of the certificate error, and the attacker could do nothing about it, because he was not able to get the private key of the bank’s certificate or have his own bank certificate pair signed by a signing authority. The attacker just had to hope the user clicked ‘Continue anyway’ etc.

Now, however, the attacker essentially has the public and private key for a root CA certificate installed in your browser; he can sign any certificate pair he wants, and it will be trusted. How do you differentiate now, when both the real bank site and the attacker’s site come up with a rosy green SSL bill of health?

Creation of such a certificate only works against certificate authorities that still use MD5 (RapidSSL was used in this particular exploit), and with the release of this information, I should hope that the number of CAs still using it == 0 in a very short while 🙂

This has been a very crude and technically lacking explanation, so I suggest you read the following link for a much more in-depth, step-by-step account of how this was carried out:



Windows local user password reset

Hi all, just a quick update.

I’m sure we are all farmilliar with the Windows NT Password offline editor? (if not http://home.eunet.no/pnordahl/ntpasswd/ )
It provides a bootable environment based on chntpw to change or blank any 2000/XP/2003/Vista local users password, very useful for lost accounts.
However, while playing around I was wondering how easy it would be to get a copy of the users original hash first, so it can be put back in place after you have reset the password, allowing you to cover your tracks (Not having to hastle users to set a new password is always a good thing!)..

It turns out Windows performs no checks on the file properties of the ‘SAM’ account manager registry hive, so:

  • Boot into some form of Linux with NTFS-3G (NTFS read/write support) and copy SYSDRIVE:/Windows/System32/Config/SAM to SAM.Bak.
  • Go ahead with your chntpw-based password reset. (You may as well use the raw chntpw tool since you are already in Linux; however, there is nothing wrong with shutting down and booting into the NTPWRS bootable CD, as the SAM.Bak file was saved on the actual drive.)
  • Reset the users password of your choice and do whatever needs to be done…
  • When finished, boot back into Linux with NTFS R/W support and move SAM.Bak back to SAM, overwriting the current ‘SAM’ file.

That’s it, passwords for all users back to what they were.

This isn’t anything new, or actually that exciting, but it’s something not really mentioned around the NTPWRS/chntpw pages and I thaught it could come in useful to know it works 🙂

Right, onto the real point of my messing around: I want to be able to do the same for Active Directory.
So far it looks like I have hit a dead end trying to access the AD database itself while the system is live; user passwords are stored in the ‘unicodePwd’ attribute inside the user’s object, which is a write-only field. I have a few more ideas on how to get this, and then putting the hash back whenever required is very easy indeed 🙂

More later.


DSSS in 802.11b/g networks

The other day I cleared up something that has been confusing my brain for ages! (whether anyone else cares is another matter but anyway :P)

I could not understand why WiFi sniffing tools such as Kismet were able to collect all data from clients on a given channel when the underlying multiplexing technology was direct sequence spread spectrum. DSSS (basic overview: http://en.wikipedia.org/wiki/Direct-sequence_spread_spectrum) allows multiple clients to transmit simultaneously on the same frequency by multiplying the data by a pseudorandom ‘chipping code’ of 1s and -1s before transmission. The receiver can then use that client’s code to pick the client’s data out of the other noise in that frequency range. The data can even be received if the client’s signal is at a lower power than the noise floor.
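Here’s a toy numeric illustration of that spreading and despreading (the 8-chip code below is made up for the example; real 802.11b DSSS uses an 11-chip Barker sequence):

```python
# Toy DSSS: data bits and chips are both represented as +1 / -1.
CODE = [1, -1, 1, 1, -1, 1, -1, -1]  # example chipping code, not a real one

def spread(bits, code):
    """Multiply each data bit by every chip of the code before 'transmission'."""
    return [b * c for b in bits for c in code]

def despread(chips, code):
    """Correlate received chips against the code to recover the data bits."""
    n = len(code)
    bits = []
    for i in range(0, len(chips), n):
        corr = sum(ch * c for ch, c in zip(chips[i:i + n], code))
        bits.append(1 if corr > 0 else -1)
    return bits
```

A receiver using the same code recovers the bits exactly, while one correlating with a different code just sees noise-like correlation values — which is why per-client codes would have hidden laptop A’s traffic from laptop B.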

It is this technology that is used in 3G UMTS systems to allow multiple mobile phones within the same cell area to all upload data (and download, because downloads still require ACKs) at much faster speeds than GSM. (GSM uses traditional frequency- and time-division multiplexing techniques to ‘slice’ up the available bandwidth and hand it out to clients, since without DSSS only one client can transmit on a given frequency at a given time.)

So that’s the overview, this was my puzzle, if DSSS is being used on a wireless network, each client has a chipping code in line with how DSSS works. This would mean that traffic from laptop A would be sent to the access point multiplied by a pseudo-random number that only itself and the access point knew. Making it impossible for me, laptop B to sniff laptop A’s data, as I do not have the same chipping code and would therefore not decode laptop A’s transmission properly, therefore, DSSS would provide some rudimentary encryption just because of how it operates.

However, from sniffing wireless LANs with Kismet, I KNOW this not to be the case; I can recover another wireless client’s data very easily, and from the collected data I can reassemble full TCP streams, so I am definitely receiving all the traffic to/from that client.

The reason?
The IEEE’s use of DSSS for 802.11b/g is not how DSSS is ‘usually’ used. They have used DSSS for some of it’s other properties and not for it’s simultaneous client transmit ability (probably due to power/cost issues in full on DSSS decoding requirements and that broadcast traffic would have to be encoded with each clients chipping code).
Therefore, the 802.11b standard (I believe, I am trying to find it) actually specifies the chipping code to be used by all 802.11b compatible kit. This standard means that WIFI is still a ‘One person transmitting at a time’ medium (as everyone is using the same code so it offers no way to differentiate between simultaneous transmissions) and because of this CSMA/CA (carrier sense multiple access with collision avoidance) is used along with RTS/CTS (request to send/clear to send) management frames to ensure that only one client is transmitting at a time.
This single hardcoded chipping code also explains why kismet is able to sniff all traffic on a WIFI network, even though DSSS is in use!

Hope this helps someone else’s brain take a few hours off too 🙂 or at least gets someone interested in low-level network tech 🙂


Acer Aspire One with Ubuntu 8.10

Hi all,

Very long time without a post, but hopefully that will now change as I have much technical bodgery planned.
Anyway, without going into current life goings-on too much: I am currently in the middle of the first semester of the final year of my computer networks BSc, my final year project submission has been approved (more on that kettle of fish in another post) and I am working two days a week for a local outsourced IT firm.

Onto the main reason for the post… toys! I am now the proud owner of an Acer Aspire One ‘netbook’, a tiny little laptop powered by an Intel Atom processor. It ships with its own small Linux distribution; however, this did not even get a chance before it got ripped off and replaced with Ubuntu 8.10 (first boot 🙂 )

Acer Aspire One: http://www.acer.com/aspireone/
Intel Atom: http://www.intel.com/technology/atom/index.htm
Ubuntu 8.10: http://www.ubuntu.com/
Performance Improvement for ubuntu on Aspire One: https://help.ubuntu.com/community/AspireOne

Anyway, this combination is awesome! The ease of use of Ubuntu for day-to-day use, combined with the fact it’s still Linux for advanced hacking when needed, all in a tiny laptop package which makes it portable enough to throw in your bag and have with you anywhere.

The default 802.11a/b/g card is an internal Atheros mini-PCI card as well, which means that using the madwifi drivers, the laptop supports wireless packet injection out of the box 🙂

There is however one issue I have had: WiFi was broken after a suspend/resume (suspend/resume is damn fast). After a little playing around I have managed to fix this; my solution is below.

The solution is to unload the madwifi drivers on suspend, and reload them on resume.
You can do this manually after resuming by running the following commands as root:

/usr/local/bin/madwifi-unload
/sbin/modprobe ath_pci

This unloads all madwifi kernel modules, and then loads them again. The wifi should spring back into life.
However, this is nasty, so the following script will run these commands for you on suspend/resume:

Create a new file in /usr/lib/pm-utils/sleep.d called ’06acerwifi’.
Chmod this to 755, and place the following into it:



#!/bin/sh
# Unload the madwifi modules before suspend, reload them on resume
case "$1" in
	hibernate|suspend)
		/usr/local/bin/madwifi-unload > /dev/null
		;;
	thaw|resume)
		/sbin/modprobe ath_pci > /dev/null
		;;
	*) exit $NA
		;;
esac

Hope this helps someone!

Zero to ‘Giving IT guys a bad name’ in one commercial

*Rant Warning*

Since moving into my new house last week, I have been spending a fair amount of time in the lounge, on the questionably comfy sofas in front of the TV with my laptop, and today, between the countless (but enjoyable nevertheless) reruns of Top Gear on Dave, something irritated me.

The same thing then irritated me on another channel, same irritation, different company. Then again, on the same channel, but again a different company!

And this irritation is:
‘Come train to be an IT/Networking expert and make lots of money, you don’t need any previous experience and you could be trained and working as a system admin/network admin/IT consultant within months’

If you have seen these adverts, you will probably know what I mean, but if not allow me to explain my annoyance:
It’s not really the fact they are herding people by the masses into an already crowded industry (although that is quite annoying)…

It’s the people they are bringing in ‘No previous experience required’, ‘Qualified within a few months’. Some of the Jobs they are suggesting these people move into would make me question weather I knew enough to fulfill the role and yet, I have had a real interest in IT since as long as I can remember, have spend countless days researching and tinkering with technologies just because I wanted to understand them and better my knowledge in my chosen field;

Yet these people are expected to gain knowledge amounting to years of reading and practice, countless weeks of late nights trying to get something to work, not to mention years spent honing Linux and networking skills, and be unleashed onto some poor company as their computing saviour within months??

No wonder the ‘IT Department’ has such a globally bad and unappreciated reputation.
So thanks, over-advertised ‘we’ll take anyone’ IT training companies, thanks for lowering the worth and reputation of IT roles across the industry… Tossers.


Life updates and blog directions

Evening all,

Over a month since my last post, and a lot has happened; however, there is a simple reason why I have not posted for so long.

This was always meant to be a technical blog, covering cool stuff I have been working with or pondering, or a place to write down reminders of how to configure something; a random splattering of technical know-how more than anything. I don’t want it to become an oh-so-common blurb of personal life crap (where I am, why I am there, what I ate for breakfast) with little or no juicy tech.

I don’t have time to be keeping that kind of blog and so, If I have nothing particularly blog worthy, I won’t just be posting for the hell of it to keep my post count up.
There will always be a couple of life related posts that slip through, and so I will be tagging all past and future posts with (at least) ‘tech’ or ‘life’ to allow readers to filter out what they do not want 🙂

So now, to a very short update to bring us into the present.

Well, I have left Sun Microsystems; my placement ended and I moved out of our amazing house and back to the more lively north 🙂 Once again a big thanks to everyone at Sun, truly amazing people to work with, and I have gained so much hands-on experience with more cool equipment than you could shake a reasonably sized datacentre at.

I spent two weeks back living at home with my parents; catching up was nice (so was not cooking my own meals every night or doing all my washing 🙂 ). Saw a few friends from home and enjoyed a few visits to the pub.

Then another move, this time to my new house share in Salford, for the final year of my degree. So after two house moves in two weeks, I have now been in my new uni house for a week, finally got everything unpacked and set up, and the house is looking pretty good. I’m living with Martin, Dave and Sam from uni, so it will be good to chill out with those guys again over a good few beers.

This year promises much more technical bloggery I feel, you should see the house networking config already!

Anyway, so right now I’m in my new house and currently trying to find a new job for the year; if anyone knows of an ISP that needs someone to play with BGP4+ for a year (for a decent wage ;P) let me know 😉

Thanks for still reading after such a drought of input ;P


RHCE Certification


Hello hello,
Long time since my last real post, but things have been busy.
Coming to my last few weeks at Sun now and hoping that I can continue my employment with them while at university next year through the campus ambassador scheme.

In more current news, for my final training course with Sun I was allowed to take the fast-track Red Hat Certified Engineer course (RH300), which was taking place on King William St. in London (between Monument and Bank tube stations).

The week was very fast-paced, offering more of a quick overview of things we should already know and a brief recap of the more advanced topics, rather than an in-depth course. This was understandable, as we had to cram everything from Red Hat network installs and user management to advanced services and security into four days, and on the fifth we sat the five-and-a-half-hour practical RHCE exam.

Commuting was tiring; I was up at 6.00am to get the 7 o’clock train into Waterloo, then the tube to Bank (or to Westminster and then the District line to Monument). Strange as it seems, I actually enjoyed getting up early and heading into London; it’s just an excellent atmosphere to work in, with much more going on than where I usually am.

Met some cool people on the course, all from businesses around the city, so it was good to find out what kinds of jobs are available and which skill sets are actually in use / in demand (it seems RHCE is right up there).

On the Wednesday night a few of us (including our instructor Joe) went for a pint or six after the course, which was a nice break from the constant routine of the rest of the week. A really good night seemed to be had by all, and some damn good London Pride was consumed (we actually drank them out of the stuff, and moved on to Deuchars IPA, which was quite nice).

The exam was intense to say the least: five and a half hours in total. I can’t say much about the exam itself, as we had to sign an NDA before taking it. I can now see why it is such a highly regarded cert.

The good news, I passed! Got my results e-mailed to me last night at 12.30AM (Marked over in America) and so I am now a verified RHCE:


Overall, an excellent week. I have learnt a lot, had fun, met some cool people and really enjoyed working in the centre of London.

Good luck to all those who are going to retake the exam, and well done to those who passed 🙂