Today I am posting something a bit different: an opinion article on something I feel to be true. It is based on a video of the L0pht's 1998 testimony and a look at how little has changed since then.
Recently the Washington Post ran an article about the hacker group L0pht and their plea to the government that private companies need to be held responsible for the software they put online. Their core message was that if you want a more secure system, don't put it online; not that offline systems are impervious to attack either. The testimony is worth its one-hour run time, and I recommend you watch it on YouTube. It is especially important if your business is accountable for holding data records, login credentials, or customer information. This is not related to my previous article; it applies to all kinds of software I see day in and day out.
I just wanted to touch on a few items from the video that I believe are still prevalent in today's online culture and corporate security mentality.
“Can the systems be secured? In most cases they can be … they can be remedied by incorporating relatively trivial and inexpensive cryptographically secure authentication.”
Often the insecure systems I come across have no security at all: they store data in plain text in the database, or skip common, readily available technologies like HTTPS/TLS when transmitting over public networks. Having something is always better than having nothing.
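To make the plain-text point concrete, here is a minimal sketch of storing a salted, slow hash of a password instead of the password itself, using only the Python standard library. The function names are my own; a real system would more likely reach for bcrypt or argon2, so treat this as illustration rather than a drop-in implementation.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) using PBKDF2-HMAC-SHA256.

    A random per-user salt plus a high iteration count is what makes
    a stolen database table expensive to brute-force.
    """
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

Even this handful of lines beats a `password` column in clear text, which is exactly the "having something is better than nothing" argument above.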
“Insecure software is cheaper and easier to sell as there is no liability tied to the manufacturers” … “encourage companies to include this [security] in their products and hold them liable when their products fail.”
Selling software is easy; ensuring it has perfect security is impossible. No product will ever be truly secure: a breach is not a matter of if, but when.
“I don’t think it is possible to create a fool proof system, but I don’t think that should be the goal. The goal should be very difficult to get in.”
Putting hurdles in the way of would-be exploiters slows them down and keeps away the script kiddies. Combined with monitoring for incursion events, this keeps organisations aware. Security needs to roll forward with the times; it is not something you can deploy once and hope will work for the lifetime of the product.
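"Monitoring incursion events" can be as simple as counting failed logins per source address. Below is a small sketch of that idea; the sshd-style log line format and the threshold are assumptions you would adapt to whatever your own systems actually write.

```python
import re
from collections import Counter

# Hypothetical sshd-style failure lines; adjust the pattern to your own logs.
FAILED = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

def flag_brute_force(lines, threshold=5):
    """Return the set of source IPs with `threshold` or more failed logins."""
    hits = Counter()
    for line in lines:
        match = FAILED.search(line)
        if match:
            hits[match.group(1)] += 1
    return {ip for ip, count in hits.items() if count >= threshold}
```

Run against yesterday's auth log, this is enough to notice the kind of noisy scanning that hurdles alone will not stop.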
“If you have sensitive information then you should not share it with networks that are less secure or less trusted”
As straightforward as this sounds, it could be as simple as allowing VPN users in from outside the office or, more commonly, BYOD enrollment in the office.
So that leaves us with what can be done about it.
For starters, listen and stay aware of what is going on both in the industry and with your own systems. I am not saying go out now, update your WatchGuard and IronPort devices, and patch every device on the network. I simply mean read up on what is happening: is there a new TLS downgrade exploit that could affect my S3 instance? Are my offsite backups stored encrypted? Is there documentation on how well that encryption stands up to brute-force techniques? Have I looked at the FTP logs for unusual activity? Maybe I should not have an FTP account that could expose the internal file server. All of these questions lead to new avenues of learning and awareness.
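On the TLS downgrade question above: one practical defence on the client side is simply refusing to negotiate old protocol versions. A minimal sketch with Python's standard `ssl` module, assuming TLS 1.2 as your floor:

```python
import ssl

# Build a client context that refuses anything below TLS 1.2, so a
# downgrade to SSLv3 / TLS 1.0 / TLS 1.1 fails the handshake outright
# instead of silently succeeding on a weak protocol.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2
```

Any socket wrapped with this context inherits the floor, so a man-in-the-middle stripping the connection down to an older protocol gets a handshake error rather than a session.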
Also, listen to users who are trying to help. It's much easier and cheaper to ignore a problem, but when an internal or external user lets you know there is an issue with the current implementation, getting upset will only make them think twice about letting you know in the future. I see this as one of the biggest roadblocks to reporting issues. It is far easier to sell an exploit online and actually make money than it is to report it and then face pressure from the company. Take a recent example with Starbucks; imagine if this exploit had been sold instead.
“The unpleasant part is a guy from Starbucks calling me with nothing like “thanks” but mentioning “fraud” and “malicious actions” instead.”
The selling of zero-days and exploits also hurts a company far more than if it had fixed the issue after disclosure. It comes at a higher cost to both the organization and the clients who put their faith, and more importantly their data, into it.
There is a cool free piece of software out there called Logstalgia that lets you review web server logs and see, visually, how clients use your site. Mostly it just looks cool; there are benefits to reviewing logs with it, but really this is an "I was bored" project with no real business use, unless your CIO only responds to crazy colours and classic Pong. I saw a number of users asking for IIS support on both the wiki and on GitHub, so I decided to show how you can use it with IIS, not just Apache or nginx.
Problem: I want to use Logstalgia with IIS or Windows Web Services.
Solution: Change the format IIS logs in so that logging software can easily interpret the information written by the server.
To be clear, you can also convert the logs to an Apache-like format, but that would not allow for real-time monitoring. The steps below instead fix the issue permanently.
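If you do want the conversion route mentioned above, here is a rough sketch of turning IIS W3C extended log lines into NCSA common log lines. The field names match a typical IIS W3C selection, but your server's `#Fields:` header is the source of truth, so treat the mapping as an assumption to adjust:

```python
def w3c_to_ncsa(lines):
    """Convert IIS W3C extended log lines to NCSA common log lines.

    The W3C format is self-describing: the '#Fields:' header names the
    columns, so we zip each data row against it. The timezone offset
    below is a placeholder, since W3C logs record UTC without one.
    """
    fields = []
    for line in lines:
        if line.startswith("#Fields:"):
            fields = line.split()[1:]   # remember the column layout
            continue
        if line.startswith("#") or not line.strip():
            continue                    # skip other comments / blanks
        row = dict(zip(fields, line.split()))
        uri = row.get("cs-uri-stem", "-")
        if row.get("cs-uri-query", "-") != "-":
            uri += "?" + row["cs-uri-query"]
        yield (f'{row.get("c-ip", "-")} - {row.get("cs-username", "-")} '
               f'[{row.get("date", "-")}:{row.get("time", "-")} +0000] '
               f'"{row.get("cs-method", "-")} {uri} HTTP/1.1" '
               f'{row.get("sc-status", "-")} {row.get("sc-bytes", "-")}')
```

This is batch conversion only; for live visualisation, changing the IIS log format itself (as below) is the cleaner fix.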
First, go into IIS and click on the site you want to view logging for.
Once you have opened the logging settings, change the log format to NCSA and be sure to pick an easy-to-reach location for the logs.
If you want updates faster than IIS's flush interval (it only seems to write to the file every few minutes), look into setting up real-time logging.
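For the follow-the-file part, a tiny `tail -f`-style reader is enough to stream new log lines into Logstalgia's stdin. This is a sketch; the `max_idle` knob is my own addition so the loop can terminate in scripts, and is not needed for normal use:

```python
import time

def follow(path, poll=0.1, max_idle=None):
    """Yield lines from `path` as they appear, like `tail -f`.

    Replays any existing lines first, then polls for appended ones.
    `max_idle` = number of consecutive empty polls before giving up;
    leave it as None for true tail-forever behaviour.
    """
    idle = 0
    with open(path, "r") as fh:
        while True:
            line = fh.readline()
            if line:
                idle = 0
                yield line.rstrip("\n")
            else:
                idle += 1
                if max_idle is not None and idle >= max_idle:
                    return
                time.sleep(poll)

# Pipe the stream into Logstalgia, which reads "-" as stdin:
#   python follow.py C:\logs\u_ncsa1.log | logstalgia -
```

Wire the generator's output to stdout and you have a live feed from the NCSA log you configured above.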
Enjoy using Logstalgia to view web traffic!
First of all, this is not a blog; it may look like one, but this is in fact the first blog post. It is an amalgamation of things I found poorly documented or wanted to make a note of. It's posted publicly so other people can access it and get help when they can't figure things out themselves. The articles I post are mostly unique and reflect my best understanding of the topic. I try to keep articles as brief and to the point as possible.
However, I will be branching into a new sub-category of postings from time to time, loosely based around NetSec. These posts will be longer and less to the point than the Server Fixes or Workstation Fixes, as I am doing less of that in my employment now. I feel that network security starts with:
A) Having free time (Something I have more of now after relocating)
B) Seeing something and asking if it can be broken (Something IT people ask on a daily basis)
C) Having the underlying hardware and knowledge to take on tasks in an efficient and proper manner. This is a biggie: there are times I would like to write about something but have not, because I don't want to mislead people or provide inaccurate information. I may be able to build it, see it work, and pull it off, but doing a write-up on something I am not confident in does not feel right. This is why I did not write about my hodgepodge ESX setup, which I redid countless times until it became a well-oiled machine.
NetSec has interested me for a while now and I have been getting into it more and more recently. It is a constantly evolving field, a giant game of chess where you have to out-think and be more creative than your target, and that is truly interesting to me. Once the idea is laid out, the research into the required tools and methods is done, and internal testing of exploits is complete, I find myself looking for ways to automate these tasks at larger scale.
The new posts (not all of them; I'll still show you how to deal with Citrix XenApp logon times next week) will be related to network security. Most of the information will be censored to protect the identities of the internal testing URLs, but I will be documenting the exploits more thoroughly. I will also be covering some popular tools that I'm sure have been documented on thousands of websites, but those write-ups will be for my own reference only.
For now, courage.
We have a number of these (roughly 12) at my work, so this is just for personal reference, but feel free to comment if it helps you.
Problem: A Brother HL-2720DW series printer shows an amber toner light that will not clear, even with new toner.
Solution: First do everything a normal person would: reboot, and clean the head as instructed on every toner change. If that does not resolve it, follow the steps below:
1. Open the front door; the light will change colour.
2. Turn off the printer with the switch on the side.
3. While holding down the green GO button, turn the printer back on using the switch on the side.
4. When all four LEDs light up, release the GO button. All LEDs will turn off.
5. Press the GO button 2 times. The 3 LEDs (toner, drum, paper) will light up solid.
6. Press the GO button 5 times.
7. The paper light will be blinking.
8. At this point the toner end-of-life condition has been reset. Close the front door.
This post is completely off topic, but I want to make something available that I feel people would use. I recently jumped on the Usenet train for fun and found it great that everything is available through web UIs. The downside is typing in the IPs/hostnames all the time to get to them. Sure, you can bookmark them, but why do that when you can have this free website I made for you to host on IIS 8!
Just ensure you put a password on any of your services before publicly posting them on the net. Setting up HTTPS wouldn’t hurt either.
You can snag the files here; be sure to use your own IPs/subdomains so you can get to the services. Click the link below to get the files.
SabSite. I do not condone the use of Sab or torrents, unless you're downloading BackTrack.
Here you go: AeroFS is now open to the public; finally, a replacement for Windows Live Mesh.
Go get it here: https://www.aerofs.com/