Blocking Shodan | Keeping shodan.io in the dark from scanning

In the last few days of writing this post there has also been a massive number of MongoDB installs that have been hacked. For more info on preparing for data breaches, see my previous post on the 3-2-1-0 rule for backups. While Shodan is not responsible for this, generating a target list via their service is trivial for whatever service you have an exploit for. So it may not be a bad idea to try and keep away from the all-seeing eye that Shodan is. While there are arguments on both sides, that Shodan helps identify issues as well as identify targets, I think it's best if we had the option to opt out. Thus,

The Definitive Guide to Blocking Shodan from scanning.

First we need to identify the list of IPs that Shodan sends scans from. These are commonly their census servers, but scans can come from other hosts they control as well. Below is a list of the domains and IP addresses I have collected online and observed scanning my equipment.

census1.shodan.io 198.20.69.72 - 198.20.69.79 US
census2.shodan.io 198.20.69.96 - 198.20.69.103 US
census3.shodan.io 198.20.70.111 - 198.20.70.119 US
census4.shodan.io 198.20.99.128 - 198.20.99.135 NL
census5.shodan.io 93.120.27.62 RO
census6.shodan.io 66.240.236.119 US
census7.shodan.io 71.6.135.131 US
census8.shodan.io 66.240.192.138 US
census9.shodan.io 71.6.167.142 US
census10.shodan.io 82.221.105.6 IS
census11.shodan.io 82.221.105.7 IS
census12.shodan.io 71.6.165.200 US
atlantic.census.shodan.io 188.138.9.50 DE
pacific.census.shodan.io 85.25.103.50 DE
rim.census.shodan.io 85.25.43.94 DE
pirate.census.shodan.io 71.6.146.185 US
inspire.census.shodan.io 71.6.146.186 US
ninja.census.shodan.io 71.6.158.166 US
border.census.shodan.io 198.20.87.96 - 198.20.87.103 US
burger.census.shodan.io 66.240.219.146 US
atlantic.dns.shodan.io 209.126.110.38 US
blog.shodan.io 104.236.198.48 US *
hello.data.shodan.io 104.131.0.69 US
www.shodan.io 162.159.244.38 US **
private.shodan.io, ny.private.shodan.io 159.203.176.62
atlantic249.serverprofi24.com 188.138.1.119 ***
sky.census.shodan.io 80.82.77.33
dojo.census.shodan.io 80.82.77.139
ubtuntu16146130.aspadmin.com 71.6.146.130

Community submitted IP addresses:

battery.census.shodan.io 93.174.95.106
house.census.shodan.io 89.248.172.16
goldfish.census.shodan.io 185.163.109.66
mason.census.shodan.io 89.248.167.131
flower.census.shodan.io 94.102.49.190
cloud.census.shodan.io 94.102.49.193
turtle.census.shodan.io 185.181.102.18

Last updated: 2017-09-26

*Probably not a scanner
**Their main website; don't block this prior to running the tests below, or at all unless needed
***Consistently appeared when forcing a scan on my own host; details below

Now how can you trust that these are IP addresses owned by shodan.io and not just randomly selected via reverse DNS? Easy!
Shodan does not want you to know where its scanners are located on the internet, and this makes sense since their business model revolves around it. To help hide the servers it scans from, Shodan automatically censors its own IP addresses in results. Here is a random example of what the returned data looks like:

They replace their own IPs with xxx.xxx.xxx.xxx, and this is done before we ever get the data. Even if you have raw firehose access to the scan results, they are still censored prior to being handed to the customer.


(example from firehose demo on their blog)

Due to this, we can simply search in Shodan for any IP or domain name we think is operated by a Shodan scanner! They will appear as censusN.xxx.xxx.xxx.xxx; see the example below.

That's great, but now how do I check and make sure that Shodan cannot reach my host?
First, block the IPs listed. I would recommend you check them first to ensure they are up to date, but as of 2017-01-12 this is the most complete and accurate list available compared to older postings I have found.
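As a sketch of that first step, the list above can be turned into firewall rules with a small script. This only prints the rules rather than applying them; review the output and then run it as root. Only a few sample entries are shown, so extend the two lists with the full table above.

```shell
# Generate (not apply) iptables DROP rules for the Shodan scanner addresses.
# Single IPs use -s; the N.N.N.N-N.N.N.N ranges use iptables' iprange match.
SINGLE_IPS="93.120.27.62 66.240.236.119 71.6.146.185 82.221.105.6"
RANGES="198.20.69.72-198.20.69.79 198.20.69.96-198.20.69.103 198.20.99.128-198.20.99.135"

for ip in $SINGLE_IPS; do
  echo "iptables -A INPUT -s $ip -j DROP"
done
for range in $RANGES; do
  echo "iptables -A INPUT -m iprange --src-range $range -j DROP"
done
```

Printing first and piping to sh afterwards makes it easy to sanity-check the rules before they touch a production firewall.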

Then you have two options: you can sign up for a paid shodan.io account and force a scan on your host, or you can simply wait and check your IP periodically from the web interface for free at https://www.shodan.io/host/ [ip here], under the Last Update field.

Since I am already a paid Shodan member I can test my block list right away. This is done by installing the Shodan command-line interface; instructions can be found here.

Once installed you want to initiate an on demand scan of your IP. A working example can be found below:
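As a rough sketch (assuming the official Shodan CLI from pip and a paid API key; MY_API_KEY and 203.0.113.10 are placeholders for your own key and host), the on-demand scan looks like this:

```shell
# Hedged sketch: force an on-demand scan with the official Shodan CLI.
shodan init MY_API_KEY           # store your API key locally (one-time setup)
shodan scan submit 203.0.113.10  # request an on-demand scan of your host
shodan host 203.0.113.10         # afterwards, check the "Last Update" field
```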

But if you have successfully blocked Shodan, you will see the following alert when attempting the scan; the left is my terminal, the right is the firewall dropping the connection.

Testing over multiple days, I always got the same result. To ensure it was not just that I had scanned too close together, I tested another one of my hosts that had not been blocked, and its Last Update was close to real time. You can also check when your host was last scanned using the following command:

You can see that since putting my IP block in place, I have not been manually scanned on either of the two previous attempts. The dates when you were last successfully scanned are also listed. You can also see when Shodan first picked up your MongoDB, or whatever else you run on that IP.

Shodan is definitely a useful tool, and it will help admins who don't realize what is exposed to the internet find their weak points. It is also very useful for vulnerability assessments and for gathering metrics about services across the internet as a whole. But like all good things, it is also used by people who want to exploit the data within for personal gain or entertainment.

There are literally hundreds of thousands of interesting and exploitable items on Shodan; just don't be one of them.

3-2-1-0 Rule for Backups | A new take on 3-2-1 Backups

 

I would like to take a look at the 3-2-1 rule for backups that is commonly taught and ingrained in memory in Networking 101 and Computing 101 classes.

The basic rules of 3-2-1 still seem relevant today and have saved numerous companies millions of dollars (see Pixar needing to go to an employee's home PC in order to save the film Toy Story 2: https://www.youtube.com/watch?v=8dhp_20j0Ys). I want to talk about the new, darker rule: 3-2-1-0. But in order to do that, we need to know what 3-2-1 stands for.

TrendMicro, Rackspace, and Veeam define the 3-2-1 rule as:

3 – Have at least three copies of your data. 
2 – Store the copies on two different media.
1 – Keep one backup copy offsite.

However, in today's world we need to consider the new (as in fresh off the press) 3-2-1-0 rule. This new version even comes with this nifty image:


3 – Have at least three copies of your data.
2 – Store the copies on two different media.
1 – Keep one backup copy offsite.
0 – 0day release: assume someone else has already illegally obtained a copy of your data, or will obtain one in the future.

Rule 0 takes into account the fluid nature of how data is stored online today and what we need to do in order to prepare for the eventual disclosure of this data. It could be a user table with passwords from your database, a rogue developer cashing in on a backdoor left in the system, or an unlikely but possible scenario where someone loses an offsite unencrypted backup disk or laptop. Every day a handful of leaks are added to the public domain, some new, some old. But this is the world we live in.
This rule would call for a plan to be in place that would cover the following topics:

Response: What are the first actions a company would take after confirming or assuming their data has been compromised.
– Will services continue to operate during the Validation, Next Steps, and Review process? What are the risks of leaving the system live?
– Who are the groups that need to be alerted? (Company stakeholders, Users, Partner Orgs, etc)
– Acquiring and validating the data dump itself. Will the company purchase the data from a darkweb vendor or pay for access to a forum if necessary to confirm the data is from their own system, or is it readily available online?
– Were we notified by a 3rd party asking about a bug bounty? Have there been recent Twitter threats that now need to be considered as having truth to them?

Validation: Checking the data that you have acquired.
– Does the data align with the current data you have or does it appear to be a fake? (Same type of hashing method, same users, same tables)
– Does the data contain any unique information to confirm that the data was stolen from you such as unique system accounts or passwords.
– Was the data taken recently? (Compare the number of users, compare the password policy, timestamps of logins)
– If the data was not taken recently, how long could it have been traded online prior to going public?
– Do any of the passwords not match the password policy set out by the company? (May indicate the passwords are from another source.)
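A few of the hashing-method checks above can be roughed out in a small triage script. This is only a sketch: dump.txt is a hypothetical leak in user:hash format, and the sample lines stand in for real data.

```shell
# Create a hypothetical leaked dump, one "user:hash" per line (sample data).
cat > dump.txt <<'EOF'
alice:5f4dcc3b5aa765d61d8327deb882cf99
bob:$2b$12$abcdefghijklmnopqrstuv
EOF

# If your system stores bcrypt but the dump is full of 32-char hex strings
# (unsalted MD5), the data likely did not come from your current system.
md5_count=$(grep -cE ':[0-9a-f]{32}$' dump.txt)
bcrypt_count=$(grep -cE ':\$2[aby]\$' dump.txt)
echo "md5=$md5_count bcrypt=$bcrypt_count"
# prints: md5=1 bcrypt=1
```

The same pattern extends to the other validation checks, e.g. grepping the dump for unique system accounts only your database would contain.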

Next Steps: What to do now that you have validated the data.
– Roll out password resets.

– How was the data obtained? (SQLi, Account Stuffing, 3rd party websites)
– Prepare a statement for the media and users. The statement should be written by someone in IT not marketing and contain accurate information regarding the breach, not generic information on password hygiene.
– Comparing and/or restoring the data to ensure that nothing was left behind or tampered with.
– What information can be harvested from steps 3, 2, and 1 that would assist in identifying the type of attack? This aids in the event that logs have been cleared.
– Issuing takedown requests on existing dumps and looking into vendor reputations services to automate the rest. Set up google alerts if you do not already have a social monitoring service.
– Do I need to blacklist any of my backups where data may have been tampered with, or where security holes have been left unpatched?

Review phase: take a breath.
– Can we attribute (lol) this attack to anyone? Competitors, script kiddies, China?
– How were we identified as a target? (example: Checking to see if you were listed on pastebin with a number of other vulnerable hosts of similar exploits)
– What type of encryption was used? Was it sufficient? How difficult would it be to implement a higher level of security in the event the data is taken again in the future?

To date the 3-2-1 rule has been about protecting the data you have onsite, ensuring the reliability of those backups against data loss, and giving guidelines on media types to store it on.
But I hope the 3-2-1-0 rule will bring to light some subjects that some companies may not have thought about regarding someone else having a 'backup' of their data.


There may just come a day when you will be buying user data back from a nefarious party just so you can validate that you were not hacked and the information is false; in my opinion, this just comes down to brand reputation.

 

Introducing WhoIsByIP.com and Lazarus.


For the last few months I have been working on a small side project that interested me between checking up on my hashtopus stack. Feeling that it's a little more polished and stable, I would like to present WhoIsByIP.com, a site that allows users to reverse IP addresses and domains. Knowing there are other services that allow you to reverse domains and IP addresses, I figured this would be a good opportunity to learn some more PHP and actually create something that may be used by the public. But that's not it!

I have also added the functionality for you to reverse email addresses using the obscured formats ( m***@f*******.com ) that sites like Steam and Facebook put out. It will give you a real result of the domain only, based on the usage of the domain. The system currently has over two hundred and eight million records, and over nine million domains. Currently we are calling the system Lazarus.


I have various improvements coming out in the next few months, including more real-time site snapshotting, Tor and VPN auto-detection, and PDF reporting on the WHOIS side. As for the email resolver, we will be adding some error correction to allow for easier identification of false positives.

The service will remain ad free, please feel free to share it and give feedback. You can also reach the site at whoisbyipaddress.com in case you are inept at remembering things and enjoy typing.

Update: Lazarus now has color coding to help those who don't know which common domains are re-mailers, user errors on forms, and common providers.

WhoIsByIP now also detects over 1,500 unique VPN servers across the top 10 VPN providers. Tor nodes have also been updated.

Taking PayPal's $0 invoice one step further.

Recently I saw an article on Bruce Schneier's page regarding a spam vector identified by Troy Hunt where a user can send you a $0 invoice. While this may seem like an annoyance and not a very big issue, I see it as a spear phishing vector when used in conjunction with infected PCs.

Imagine your PC has been infected with a RAT or trojan, or someone has a vendetta against you and decided to send you a malicious URL containing one of the many Flash or Java drive-by exploits around the net today.

Sure, they have access to your PC and can see what you can see; they can also tell when you're active. But that does not give them full access to your banking. Until they send you a $0 bill. The infected user then goes to PayPal's site to inspect the payment, and the attacker captures their login credentials as they sign in. They have basically been set up for failure.

I have yet to find a record of this happening, but I did find an example on Twitter of someone being sent $20 and subsequently being 'hacked' the same day.


While the attacker could have gained credentials from a leak or paste, why would they send the user $20? This serves no other purpose and leaves a PayPal-ish money trail, when now they could simply send a $0 invoice.

vtech db dump and the accountability of parents

The vtech hack has been the under-discussed story of the week, until it was revealed today that the hacker had access to hundreds of thousands of files that could contain images of children. Suddenly it exploded; news agencies that would not cover this story were suddenly all over it.

Broken by Motherboard and Troy Hunt is the fact that vtech (the manufacturer of choice for cheap landline phones and a line of children's toys) had been hacked. While the information currently (2015-12-01) has yet to be sold or traded at the level I have seen elsewhere, it has really started to garner the wrong attention for the wrong reason.

Before I go any further: yes, children should be protected, and yes, vtech messed up. But how this happened should be considered. As stated in previous articles, "assume everything that is encrypted will be decrypted, expect everything that is secret will be known." And when dealing with kids, there are no exceptions to this rule. The main goal is to ensure those items (photos, chat logs of children) never exist in the first place.

So let’s step back for a second.

Imagine this: vtech asks their IT dept. to set up a DB and some kind of web UI that allows kids to play games and interact with their toys. They use simplistic MD5 hashing because they figure, hey, who will want to hack this? Kids don't have this kind of knowhow. Then months later, marketing sees how well received the online games are and asks the IT team to set up the system the hardware engineers need for shipping a product that allows kids to send photos to their parents, communicate, and so on. Code is reused from the original product without thought to the fact that the content being protected now carries a higher weight in privacy than before.

Is this their fault? Yes. But not just theirs.

Parents were an important part of this process. The sign-up process requires parents to take part, something Troy Hunt covers in his well-written article. The amount of trust they put into vtech was unwarranted and unfair to them. However, it bears the heavy burden of a good lesson: don't trust a private company with private information about your child. If we can't keep our affairs on Ashley Madison secret, then how can we expect more for a child? Some parents don't want to give phones or unmonitored internet access to kids 4 to 9 years of age (the recommended age for this product on amazon.com). So why give them access to products that allow malicious hackers to view photos of your kids?

I neglected to write an article about this for a number of days because it was just yet another data leak. But the fact that images of innocent kids have been included in the leak, I feel, crosses a line. No one likes public data leaks, even more so when they are in them. But some companies fail to heed the warning given to them by the exploiter, even when it is given in good faith; thus the exploiter feels they must leak the data in order to make a point and keep more malicious users away. I hope for the sake of the kids this leak does not get more public than it already has.

So what’s the solution?

vtech should have built in a higher level of cryptography and privacy (i.e. obscuring the children's information in their DB) before the system was rolled out, using something more secure than MD5: that algorithm has been around since 1991, with its first flaw found in 1996. The crypto should have been stronger. It's sad to think that the protection built into the forum you use to buy parts for your 1992 Honda Civic is held to a higher standard than the system that allows you to talk to and see your children.
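To make the MD5 point concrete: with no salt, every user who picked the same password gets the same digest, and the common ones show up instantly in public lookup tables.

```shell
# The MD5 of the password "password" is famously well known; anyone with a
# lookup table reverses it in milliseconds. Salted, slow hashes avoid this.
printf '%s' 'password' | md5sum
# prints: 5f4dcc3b5aa765d61d8327deb882cf99  -
```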

As for the parents, this is a tough one, as it requires absolute vigilance on the parents' end, and how can you trust the thousands upon thousands of vendors out there? The fact of the matter is you can't, and you don't have to. Just make judgement calls on products, such as: does my 4 year old really understand the implications of their toy being on wifi all the time? No? Then maybe I should look into something else.

It's hard to be a parent, but in the season of giving to the ones we love, we should not avoid items that flash or are from the future, or are even from vtech. We should avoid placing in our kids' hands the items that allow people of a malicious nature to take them over.

In ending, this is not a call for you to put your children in tinfoil hats, or to walk to vtech and burn down their offices, but rather a word of warning. The internet holds a lot of information that kids, adults, and even computers can learn from. We should not limit it, nor should we fear it. We just need to be aware of the weight of putting what we don't want shared into it, knowing someday it might just come back out.

 

Political doxing and corporate accountability.

Doxing (Wikipedia)

Doxing (from dox, abbreviation of documents), or doxxing, is the Internet-based practice of researching and broadcasting personally identifiable information about an individual.

The methods employed to acquire this information include searching publicly available databases and social media websites (like Facebook), hacking, and social engineering. It is closely related to internet vigilantism and hacktivism.

Doxing may be carried out for various reasons, including to aid law enforcement, business analysis, extortion, coercion, harassment, online shaming and vigilante justice.

Both Bruce Schneier and Brian Krebs have written excellent articles this week that I feel need to cross paths. If you have not read them yet, it's ok, I'll wait.

We all know Lizard Squad happened last year, but I feel that the COX fines mentioned in Brian's article are a precursor to a similar action that will eventually be filed against AOL regarding the dox of CIA Director John Brennan.

In short, Lizard Squad was a group of internet antagonists (DDoS) that used social engineering in order to gain access to accounts belonging to 60 COX cable members. These were used for doxing and impersonation. Some see social engineering as simply a method for getting personal data, but it is often used for privilege escalation to gain access to more accounts, from celebrities to disliked bosses. A gateway hack, if you will.

What is interesting is that COX is actually being held accountable for this issue. Mostly due to the fact they had access to private information that they improperly gave the Lizard Squad members access to. This is important in two ways.

-It shows that social engineering works well enough that your front line personnel need to be aware, even Janet in the call center. 

-It should scare the shit out of IT admins who do not keep up to date with patching and security practices, because a company can now be liable for how data is stored and who has access to it. These types of decisions would previously have been held by the CTO or CSO, but generally systems are set up, tested, and put into production with security as an afterthought. But that's a conversation for another time.

If COX can be fined $595,000 for being tricked into giving a member of Lizard Squad access to their customers' data, I have a feeling AOL has one of these coming too after the more recent CIA Director John Brennan incident. The COX fine is just the beginning; organizations need to wake up and handle their customers' and employees' data properly, or this is not going away any time soon.

 

Let's Encrypt | How the future of SSL has come to the penniless.

A great product called Let's Encrypt will be coming out in the near future. One of the best things about this service is how easy it will be to manage the SSL certificates. Oh, and it's free! That's right, web monkeys and hobbyists: stop paying GoDaddy for your SSL certs every year and spend your hard-earned money on beer!

Problem: My certificate says it's invalid, or I'm too poor / lazy to buy my own certificate for $100 a year from my current domain provider.

Solution: Use Let's Encrypt in the week of November 16, 2015!

A lot of people may say, who cares? Well, they are wrong. Let's Encrypt will allow people to spend more time developing their product and less time learning the difference between UCC and wildcard certificates, let alone how to make the CSR. In fact, the whole renewal process will be automated as well, assuming you're using a compatible OS.
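As a rough sketch of what that automation looks like (the client is now distributed as certbot; at launch it shipped as letsencrypt-auto, and example.com here is a placeholder domain):

```shell
# Obtain a certificate by proving control of the domain's webroot:
certbot certonly --webroot -w /var/www/example -d example.com
# Renewal is a single command, suitable for a cron job:
certbot renew --quiet
```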

Let’s Encrypt is supposed to be so simple to use in fact that even people who were marketed Drobo’s will be able to use it. 

The reason I waited so long to post an article about this was the burning question: will it work, and am I required to install an intermediate certificate? (This is sometimes the case with BlowDaddy, just to have the client see the certificate as valid.)

Well you can see for yourself on their live test page located here.

I’m looking forward to this forward thinking method of creating a more secure web and will be lined up on the 16th of November to start applying for certificates.

Notes: I do think that learning how SSL certificates work is a great idea, but for those of you who already know, Let's Encrypt is a great way to quickly get your web service online at zero cost.

Tracking shady hosting providers by Google Analytics UID’s

Often there are times that you come across a site and are unsure if it is the same or under the same umbrella as another site. This can be common with multiple scam or spam sites that are set up as quickly as possible and have a similar appearance.

Sometimes you just want to see if the site is owned by the same person but the WHOIS info is set to private. This solution is geared only to sites that use the highly popular Google Analytics engine.

For those of you who don't know, Google Analytics is a free solution that allows you to track users coming and going from your site; it will log city, country, referral, and a number of other metrics. What users don't know is that when they deploy the code across multiple sites, the UID is the same but there is a single digit appended to the end. How can this be useful? Let me show you!

Here is an example of a normal Google Analytics code snippet that should be on every page of your website. For this example I have replaced my own UA- code (the unique code Google assigns to you) with UA-123456789. See the code below.

<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');

ga('create', 'UA-123456789-6', 'auto');
ga('send', 'pageview');
</script>

Pretend this code is in the source of examplepage.com. Looking here, we can see Analytics user UA-123456789 is currently tracking you on their site. If you were to go to scamsite.com, view source on that site, and happen to come across the same UA-123456789 ID in the analytics section, it would be fair game to assume that it is the same user tracking stats, unless they share a Google account, but that would be weird.

So how can we use these numbers to attempt to find out how many properties the owner has? Simple! At the end of the code you will see a number appended to the UID. Refer to the example above.

ga('create', 'UA-123456789-6', 'auto');

This shows us that the user has registered up to 6 sites under that UA- UID for Analytics. This does not prove that they still have 6 sites around, but rather that at one point they either messed up and made a new tracking code or had it running across 6 sites.

Ok cool, I can guess how many sites my competitor / the user has. But how can I find the other sites? Simply put the UA- number, without the appended -digit, into Google and you can get basic results leading to other properties owned by the user.
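The view-source check can also be scripted. A minimal sketch: page.html stands in for saved page source (in practice you would fetch it first with curl, e.g. curl -s http://scamsite.com/ > page.html), and here it is faked with the snippet from above so the example is self-contained.

```shell
# Fake a saved page containing the tracking snippet (placeholder data):
cat > page.html <<'EOF'
ga('create', 'UA-123456789-6', 'auto');
EOF

# Extract every UA- ID present in the page source:
grep -oE 'UA-[0-9]+-[0-9]+' page.html | sort -u
# prints: UA-123456789-6
```

Running this against two sites and comparing the UA- prefixes automates the "same owner?" check described above.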

Ok ok, but why is this important.
The world of web indexing is getting smarter. Indexers not only crawl sites but the very raw HTML content they contain. Services like cuestat.com are already linking owners by Analytics UIDs, and it won't be long before more do too. And you don't want to be the guy who is caught hosting yourname.com as a vanity site and freemovies-for-download.com with the same UA-123456789-x ID when the MPAA comes calling.

*Note: all sites are fictitious in this article; do not attempt to visit yourname.com or freemovies-for-download.com unless you want to. I know I have not.

Update 2015-10-27: Looks like Vice's blog Motherboard used similar methodology in their post today when tracking down scammers who were using their face. They used a utility to do the discovery automatically, but the thought process is the same.

To encrypt or not to encrypt. Is there an easy solution?

Here is an interesting article from Joseph Cox on encryption in the home and how well it stands up. Despite no sources being listed, it gives what I consider a comprehensive look at the problem at hand of layering your encryption.

“This way, if you are stopped and forced to decrypt your hard-drive, your adversary is going to only have access to what you have deliberately stored on that computer. If your PGP secret key is stashed at home, or the leak you were provided with is on a computer elsewhere, your adversary isn’t going to get hold of them. But if you didn’t take this precaution, they are likely to gain access to everything: articles in progress, notes, interview transcripts, the lot.”

 

Exploits and large companies | How nothing has changed since 1998


I am posting something a bit different today: an opinion article on something I feel to be true. The basis is a video of L0pht's 1998 testimony and a comparison of how little things have changed since then.

Recently in the Washington Post there was an article about the hacker group called L0pht and their plea to the government on how private companies need to be responsible for the software they put online. They were trying to bring to light that if you want a more secure system, then don't put it online. This does not mean that offline systems are impervious to attacks either. The testimony is worth the 1-hour run time, and I recommend you listen to it on YouTube. It is very important if your business is accountable for holding data records, login info, and customer info. This is not related to my previous article, but rather to all kinds of software I see day in and day out.

I just wanted to touch on a few items in the video that I believe are still prevalent in today's online culture and mentality of corporate security.

“Can the systems be secured? In most cases they can be … they can be remedied by incorporating relatively trivial and inexpensive cryptographically secure authentication.”
Often some of the insecure items I come across are due to no security at all, whether they end up using plain text to store data in the database or don’t use common and readily available technologies like HTTPS or TLS in order to transmit over public forms of communication. Having something is always better than having nothing.

“Insecure software is cheaper and easier to sell as there is no liability tied to the manufactures” … “encourage companies to include this [security] in their products and hold them liable when their products fail.” 
Selling software is easy; ensuring it has perfect security is impossible. No product will ever be truly secure; it is not a matter of if but rather when.

“I don’t think it is possible to create a fool proof system, but I don’t think that should be the goal. The goal should be very difficult to get in.”
Putting hurdles in the way of would be exploiters slows them down and keeps away the script kiddies. This in combination with monitoring incursion events would keep organisations aware. Security needs to roll forward with the times, it is not something you can deploy and hope it will work for the lifetime of the product.

“If you have sensitive information then you should not share it with networks that are less secure or less trusted”
As straightforward as this sounds, it could be as simple as allowing VPN users in from outside of the office, or more commonly, BYOD enrollment in the office.

So that leaves us with what can be done about it.
For starters, listen and be aware of what is going on, both in the industry and with your own systems. I am not saying go out now and update your WatchGuard and IronPort devices and patch every device on the network. I am simply referring to reading up on what is going on: is there a new exploit for TLS downgrading that could affect my S3 instance? Are my offsite backups stored in an encrypted manner? Is there documentation on how well this manner stands up to bruteforce techniques? Have I looked at the FTP logs for unusual activity? Maybe I should not have an FTP account that could expose the internal file server. All of these questions lead to new avenues of learning and awareness.

Also, listen to users who are trying to help. It's much easier and cheaper to ignore a problem; however, when an internal or external user lets you know there is an issue with the current implementation, getting upset will only make the user think twice about letting you know in the future. I see this as one of the biggest roadblocks to reporting issues. It is far easier to sell an exploit online and actually make money than it is to report it and then face pressure from the company, as in a more recent example with Starbucks. Imagine if this exploit had been sold.
“The unpleasant part is a guy from Starbucks calling me with nothing like “thanks” but mentioning “fraud” and “malicious actions” instead.”
The selling of zero days and exploits also hurts the company far more than if they were to fix the issue after it was disclosed to them. This comes at a higher cost to both the organization and the clients that had put their faith, and more importantly their data, into the organization.