Blocking Shodan Part 2 | Automating the process for beyond 2017

Previously, in my post Blocking Shodan, I wrote about how to bait Shodan into scanning your infrastructure to help identify their IP addresses and map out their scanning network. While this was useful, it could not be automated, there was no central block list, and it required me to update the site constantly to keep it current. This is not ideal.

Seeing this as an interesting challenge, I have created two tools for dealing with Shodan; they are at the bottom of the article, so skip down there for the code. For some background, I wanted to include the reason for the release.

I was on the fence about posting a tool that allows for the detection and automated blocking of Shodan servers, as it lets admins stick their heads in the sand when it comes to securing their infrastructure. However, while attending Black Hat 2017, I saw the following slide, which changed my opinion.

While this was not my favorite slide from Black Hat/DEF CON, I feel it may represent the average use of Shodan today. I am not trying to paint their user base in a negative light. So there came the problem: how do I block the abusive bottom half while leaving the top half usable?

To understand how we can block just the bad users, let's look at how the Shodan scan process works and how their scanners are divided up.

From what I have seen coming from IP addresses I was able to associate with Shodan, there are three different types of scanners.

Shodan Crawlers – These crawl the web constantly and are usually identified by their censusN.shodan.io DNS addresses. They are not used when a user initiates a scan; they simply feed a rolling index. How often you are hit by them varies.
On-Demand Scanners – These are used when a user initiates an on-demand scan through the CLI Shodan provides. These IPs seem to be the number one complaint from organizations that get scanned daily, and they are often harder to identify due to their DNS names.
Project Servers – To the best of my understanding, these are servers that look for specific ports or products such as webcams and ICS, and may even be contracted out. The DNS names are often the project name: SMB, Battery, RIM, Malware, etc.

So it occurred to me that you could simply block the on-demand servers, and that would not be too hard to do. So I came up with SIBL (located at the bottom of the article), and this is how I bait Shodan into scanning my VPS hosts to expose their scanners:

Using this method I am able to capture the IP addresses their scanners use, log them, and add them to a block list. What is interesting is that if you block Shodan this way, the next time a scan is initiated it will fail. On the third scan things get interesting: Shodan will provide a new server, since you have blocked the old one. Repeating this method helps build an up-to-date list.
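SIBL's actual code is on GitHub, but the core of the bait-and-capture loop can be sketched roughly like this (function names, paths, and the capture commands below are illustrative, not the real SIBL internals): capture traffic only while the on-demand scan runs, then pull the unique source IPs out of the capture text.

```shell
#!/bin/sh
# Rough sketch of the bait-and-capture idea (illustrative, not the real SIBL code).

# extract_scanners: read tcpdump-style text lines on stdin and print the
# unique source IPs that probed us during the capture window.
extract_scanners() {
    # tcpdump lines look like: "12:00:01.000 IP 203.0.113.9.41234 > 10.0.0.5.22: ..."
    grep ' IP ' | awk '{print $3}' | sed 's/\.[0-9]*$//' | sort -u
}

# The capture itself would look something like this (requires root, a paid
# shodan.io key, and your VPS IP in MY_VPS_IP):
#   shodan scan submit "$MY_VPS_IP" &                      # bait the scanners
#   timeout 60 tcpdump -l -n -i eth0 "dst host $MY_VPS_IP" \
#       | extract_scanners >> shodan-blocklist.txt
```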

You can only initiate a scan on a host once every few hours, so I recommend setting a cron job to run every four hours plus a random delay, or as often as your needs and shodan.io credits allow. When you run this across an array of VPS instances, you can enumerate a list of all of their IP addresses in a relatively short period of time.
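As a concrete example, a crontab entry along these lines would run the job every four hours with up to 30 minutes of random jitter (the script path is a placeholder for wherever you install SIBL):

```shell
# Run every 4 hours with up to 30 minutes of random delay.
# m h dom mon dow  command
0 */4 * * * sleep $((RANDOM % 1800)); /usr/local/bin/sibl.sh
```

Note that `$RANDOM` is a bashism; either set `SHELL=/bin/bash` at the top of the crontab or generate the delay inside the script itself.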

Using SIBL will allow you to block the bottom half of that netsec triangle and create a list of known scanning servers. To keep the test accurate, tcpdump only runs while the Shodan scan takes place. While this does not guarantee a Chinese bot will not scan you within the 60 seconds the job is running, I consider that an acceptable loss, since we can verify the Shodan servers by DNS and by using Shodan itself.

You can download SIBL from GitHub – Shodan IP Block List.
Additional information on how it works can be found on GitHub.

I am also announcing my own rolling block list in a text format that people can pull to their VPS using curl or wget. This way, if you don't want to run SIBL, you can just get a raw text copy of the latest servers.

The block list will be located here in the next few days:
Please do not configure your jobs to pull the list every minute of every day; it is only updated every 12 hours.
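If you go the raw-list route, the pull can be as simple as the sketch below (the URL is a placeholder until the list location is published; the filter just drops comments and anything that is not a plain IPv4 address before you feed it to your firewall):

```shell
#!/bin/sh
# valid_ips: keep only well-formed IPv4 lines, dropping comments and junk
# that might sneak into a downloaded block list.
valid_ips() {
    grep -E '^([0-9]{1,3}\.){3}[0-9]{1,3}$'
}

# Twice-daily pull (placeholder URL), loaded into an ipset (requires root):
#   curl -fsS https://example.com/shodan-blocklist.txt | valid_ips > /tmp/shodan.txt
#   ipset create shodan hash:ip -exist
#   while read -r ip; do ipset add shodan "$ip" -exist; done < /tmp/shodan.txt
#   iptables -I INPUT -m set --match-set shodan src -j DROP
```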

I will be automating the updating of this list using my own scanners; it will not include community contributions, only servers I have verified myself.

TL-WN823N Raspberry Pi 3 – Wireless Fix

Recently I have been playing around with using the Pi for different projects: monitoring plant moisture levels, checking power usage, even a car PC. I found most everything well documented online, and headed over to Amazon to purchase an external WiFi adapter, since the Pi seems to have issues connecting to a P2P WiFi network or hotspot with my model of phone.


So I purchased the TL-WN823N, available on Amazon, seeing as it supported Linux. It did not work out of the box, which is not super surprising but not super helpful either. Reading around online, I only managed to find a single solution that worked for the newest version of Raspbian, and figured I would post it for anyone else having issues recently, as most of the old guides were written in 2014-2016.

How to add the TL-WN823N wireless adapter to the Pi 3 using Raspbian Stretch or Jessie.

You will notice that when you plug in the USB adapter it will light up and then the green light will turn off; this is a good indication the drivers are not installed. While there are multiple ways to do this, here is the skinny: the TP-Link driver for the TL-WN823N is only compatible up to kernel version 3.x, so attempting it on anything newer will cause the installer from TP-Link to fail. Sure, we could force make to do its job, but I wanted to ensure it worked.

Open a new terminal window, or if you are already SSH'd into the box, enter:
sudo wget http://www.fars-robotics.net/install-wifi -O /usr/bin/install-wifi

Once downloaded make the file executable with:
sudo chmod +x /usr/bin/install-wifi

Then run the following:
sudo install-wifi

Bingo: it will detect the USB wireless adapter and everything will work as it should. Now you can connect to P2P or hotspot APs without any issue.

Here are the items I used in this project, clicking these links help support the upkeep of the site.


Blocking Shodan | Keeping shodan.io in the dark from scanning

Update 2017-10-21: If you would like to know how I detect the IPs and want to run your own aggregator, see my new article. If you just want the block list, continue with this article.
2017-12-07 – Please also see the reply from SANS ISC in the replies section; I have added their IP addresses to the list.

In the last few days of writing this post, there has also been a massive number of MongoDB installs hacked. For more info on preparing for data breaches, see my previous post on the 3-2-1-0day rule for backups. While Shodan is not responsible for this, generating a target list via their service is trivial for whatever service you have an exploit for. So it may not be a bad idea to try to keep away from the all-seeing eye that Shodan is. While there are arguments on both sides, that Shodan helps identify issues as well as identify targets, I think it is best if we had the option to opt out. Thus:

The Definitive Guide to Blocking Shodan from scanning.

First we need to identify the list of IPs that Shodan sends scans from; these are commonly their census servers, but scans can come from other hosts they control as well. Below is a list of the domains and IP addresses I have collected online and observed scanning my equipment.

census1.shodan.io - US
census2.shodan.io - US
census3.shodan.io - US
census4.shodan.io - NL
census5.shodan.io - RO
census6.shodan.io - US
census7.shodan.io - US
census8.shodan.io - US
census9.shodan.io - US
census10.shodan.io - IS
census11.shodan.io - IS
census12.shodan.io - US
atlantic.census.shodan.io - DE
pacific.census.shodan.io - DE
rim.census.shodan.io - DE
pirate.census.shodan.io - US
inspire.census.shodan.io - US
ninja.census.shodan.io - US
border.census.shodan.io - US
burger.census.shodan.io - US
atlantic.dns.shodan.io - US
blog.shodan.io - US *
hello.data.shodan.io - US
www.shodan.io - US **
private.shodan.io, ny.private.shodan.io
atlantic249.serverprofi24.com ***

Community submitted IP addresses:


Last updated: 2017-12-07

*Probably not a scanner
**Their main website; don’t block it before running the tests below, or at all if you need it
***Consistently appeared when forcing a scan on my own host; details below

Now, how can you trust that these are the IP addresses owned by shodan.io and not randomly selected by just reversing DNS? Easy!
Shodan does not want you to know where its scanners are located on the internet, which makes sense since their business model revolves around it. To help hide the IPs they scan from, Shodan automatically censors its own IP addresses in results. Here is a random example of what the returned data looks like:

They replace their own IPs with xxx.xxx.xxx.xxx, and this is done before we ever get the data. Even if you have raw Firehose access to the scan results, they are censored before being given to the customer.

(example from firehose demo on their blog)

Because of this, we can simply search any IP or domain name we think is operated by a Shodan scanner in Shodan itself! They will appear as censusN.xxx.xxx.xxx.xxx; see the example below.
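For example, with the Shodan CLI installed and initialized with an API key, a search along these lines should surface the self-censored census hosts (the exact query syntax here is from memory, so treat it as a starting point rather than gospel):

```shell
# Look for hosts whose reverse DNS points at the census scanners;
# their IPs come back censored as censusN.xxx.xxx.xxx.xxx.
shodan search hostname:census.shodan.io
```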

That’s great. Now how do I check and make sure that Shodan cannot reach my host?
First, block the IPs listed. I would recommend you check them first to ensure they are up to date, but as of 2017-01-12 this is the most complete and accurate list available compared to the older postings I have found.
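A hedged sketch of the blocking step: resolve each hostname from the list above and add a DROP rule for its current addresses (requires root; `dig` ships in the dnsutils/bind-utils package, and the two hostnames shown are just a sample of the full list):

```shell
#!/bin/sh
# Resolve each Shodan hostname and drop traffic from its current IPs.
# Extend the list below with the rest of the hosts from the table above.
for h in census1.shodan.io census2.shodan.io; do
    for ip in $(dig +short "$h" | grep -E '^[0-9.]+$'); do
        iptables -I INPUT -s "$ip" -j DROP
    done
done
```

Since the IPs rotate, re-resolving on a schedule (or pulling the rolling list) beats hard-coding addresses.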

Then you have two options: you can sign up for a paid shodan.io account and force a scan on your host, or you can simply wait and check your IP periodically from the web interface for free at https://www.shodan.io/host/ [ip here], under the Last Update field.

Since I am already a paid Shodan member, I can test my block list right away. This is done by installing the Shodan CLI; instructions can be found here.

Once installed, you want to initiate an on-demand scan of your IP. A working example can be found below:
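The on-demand scan from the CLI looks like this (the IP below is a documentation placeholder; substitute your own host, and note that `shodan scan submit` consumes scan credits):

```shell
# Ask Shodan to scan your host on demand (placeholder IP).
shodan scan submit 198.51.100.7
```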

But if you have successfully blocked Shodan, you will see the following alert when attempting the scan; on the left is my terminal, on the right the firewall dropping the connection.

Testing over multiple days, I always got the same result. To ensure it was not just that I had scanned too close together, I tested another of my hosts that had not been blocked, and its Last Update was close to real time. You can also check when your host was last scanned using the following command:
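The last-scanned check from the CLI (again with a placeholder IP); the last-update timestamp in the output is the field to watch:

```shell
# Show Shodan's view of a host, including when it was last updated
# and which services (MongoDB, etc.) were seen on it (placeholder IP).
shodan host 198.51.100.7
```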

You can see that since putting my IP block in place, neither of the two previous scan attempts has reached me. The dates when you were last successfully scanned are also listed, and you can see when Shodan first picked up your MongoDB or whatever else you run on that IP.

Shodan is definitely a useful tool, and it will help admins who don't realize what is exposed to the internet find their weak points. It is also very useful for vulnerability assessments and for getting metrics about services on the internet as a whole. But, like all good things, it is also used by people who want to exploit the data within for personal gain or entertainment.

There are literally hundreds of thousands of interesting and exploitable items on Shodan; just don't be one of them.

3-2-1-0 Rule for Backups | A new take on 3-2-1 Backups


I would like to take a look at the 3-2-1 rule for backups that is commonly taught and ingrained into memory in Networking 101 and Computers 101 classes.

While the basic rules of 3-2-1 still seem relevant today and have saved numerous companies millions of dollars (see Pixar needing to go to an employee’s home PC in order to save the film Toy Story: https://www.youtube.com/watch?v=8dhp_20j0Ys), I want to talk about the new, darker rule: 3-2-1-0. But in order to do that, we need to know what 3-2-1 stands for.

TrendMicro, Rackspace, and Veeam define the 3-2-1 rule as:

3 – Have at least three copies of your data. 
2 – Store the copies on two different media.
1 – Keep one backup copy offsite.

However, in today's world we need to consider the new (as in fresh off the press) 3-2-1-0 rule. This new version even comes with this nifty image:


3 – Have at least three copies of your data.
2 – Store the copies on two different media.
1 – Keep one backup copy offsite.
0 – 0day: assume someone else has illegally obtained a copy of your data, or will obtain one in the future.

Rule 0 takes into account the fluid nature of how data is stored online today and what we need to do to prepare for the eventual disclosure of that data. It could be a user table with passwords from your database, a rogue developer cashing in on a backdoor left in the system, or an unlikely but possible scenario where someone loses an unencrypted offsite backup disk or laptop. Every day a handful of leaks, some new, some old, are added to the public domain. This is the world we live in.
This rule would call for a plan to be in place that would cover the following topics:

Response: What are the first actions a company should take after confirming or assuming its data has been compromised?
– Will services continue to operate during the Validation, Next Steps, and Review process? What are the risks of leaving the system live?
– Who are the groups that need to be alerted? (Company stakeholders, users, partner orgs, etc.)
– Acquiring and validating the data dump itself. Will the company purchase the data from a darkweb vendor or pay for access to a forum if necessary to confirm the data is from its own system, or is it readily available online?
– Were we notified by a third party asking about a bug bounty? Have there been recent Twitter threats that now need to be considered as having truth to them?

Validation: Checking the data you have acquired.
– Does the data align with the data you currently hold, or does it appear to be a fake? (Same hashing method, same users, same tables.)
– Does the data contain any unique information, such as unique system accounts or passwords, confirming it was stolen from you?
– Was the data taken recently? (Compare the number of users, the password policy, and the timestamps of logins.)
– If the data was not taken recently, how long could it have been traded online before going public?
– Do any of the passwords not match the password policy set by the company? (This may indicate the passwords are from another source.)
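As a quick, hedged example of the hashing-method check above: unsalted MD5 digests are exactly 32 hex characters, so counting how many lines of a dump's hash column match that shape is a fast first-pass sanity check (the file name and colon-delimited layout are assumptions about your particular dump):

```shell
#!/bin/sh
# md5_like: count stdin lines that look like bare, unsalted MD5 digests
# (exactly 32 hexadecimal characters, nothing else on the line).
md5_like() {
    grep -cE '^[0-9a-fA-F]{32}$'
}

# Example: if your own DB uses bcrypt but the dump is all MD5-shaped,
# the dump probably is not your data (assumed user:hash layout):
#   cut -d: -f2 dump.txt | md5_like
```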

Next Steps: What to do now that you have validated the data.
– Roll out password resets.

– How was the data obtained? (SQLi, credential stuffing, third-party websites)
– Prepare a statement for the media and users. The statement should be written by someone in IT, not marketing, and contain accurate information about the breach, not generic advice on password hygiene.
– Compare and/or restore the data to ensure that nothing was left behind or tampered with.
– What information can be harvested from steps 3, 2, and 1 that would help identify the type of attack? This is useful in the event the logs have been cleared.
– Issue takedown requests on existing dumps, and look into vendor reputation services to automate the rest. Set up Google Alerts if you do not already have a social-monitoring service.
– Do I need to blacklist any of my backups where data may have been tampered with or where security holes have been left unpatched?

Review phase: take a breath.
– Can we attribute (lol) this attack to anyone: competitors, script kiddies, China?
– How were we identified as a target? (For example, checking whether you were listed on Pastebin with a number of other hosts vulnerable to similar exploits.)
– What type of encryption was used? Was it sufficient? How difficult would it be to implement a higher level of security in case the data is taken again in the future?

To date the 3-2-1 rule has been for protecting data you have onsite, ensuring reliability of those backups from data loss, and guidelines on media types to store it.
But I hope the 3-2-1-0 rule will bring to light some subjects that some companies may not have thought about regarding someone else having a ‘backup’ of their data.

There may just come a day when you will be buying the user data back from a nefarious party just so you can validate that you were not hacked and the information is false; this just comes down to brand reputation, in my opinion.


Introducing WhoIsByIP.com and Lazarus.


For the last few months I have been working on a small side project that interested me between check-ins on my hashtopus stack. Feeling that it is a little more polished and stable, I would like to present WhoIsByIP.com, a site that allows users to reverse IP addresses and domains. Knowing there are other services that allow you to reverse domains and IP addresses, I figured this would be a good opportunity to learn some more PHP and actually create something that may be used by the public. But that's not all!

I have also added the functionality to reverse email addresses in the obscured format (m***@f*******.com) that sites like Steam and Facebook put out. It will return a real result for the domain, based only on that domain's usage. The system currently has over 208 million records and over nine million domains. We are currently calling the system Lazarus.
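The obscured-address matching can be sketched as a simple pattern rewrite, assuming each `*` hides exactly one character (an assumption; some sites use a fixed number of asterisks regardless of length). This is an illustration of the idea, not Lazarus's actual implementation:

```shell
#!/bin/sh
# obscured_to_regex: turn a masked address like m***@f*******.com into a
# grep -E pattern. Escape the real dots first, then let each * match
# exactly one character.
obscured_to_regex() {
    printf '%s\n' "$1" | sed -e 's/\./\\./g' -e 's/\*/./g'
}

# Example use against a list of known addresses:
#   grep -E "^$(obscured_to_regex 'm***@f*******.com')$" known-emails.txt
```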


I have various improvements coming out in the next few months, including real-time site snapshotting, Tor and VPN auto-detection, and PDF reporting on the WhoIs side. As for the email resolver, we will be adding error correction to make false positives easier to identify.

The service will remain ad free, please feel free to share it and give feedback. You can also reach the site at whoisbyipaddress.com in case you are inept at remembering things and enjoy typing.

Update: Lazarus now has color coding to help those who don’t know which common domains are remailers, user-error entries on forms, or simply common.

WhoIsByIP now also detects over 1,500 unique VPN servers across the top 10 VPN providers. Tor nodes have also been updated.

Taking PayPal’s $0 invoice one step further.

Recently I saw an article on Bruce Schneier’s page regarding a spam vector identified by Troy Hunt, where a user can send you a $0 invoice. While this may seem like an annoyance and not a very big issue, I see it as a spear-phishing vector when used in conjunction with infected PCs.

Imagine your PC has been infected with a RAT or trojan, or someone with a vendetta against you has sent you a malicious URL containing one of the many Flash or Java drive-by exploits around the net today.

Sure, they have access to your PC and can see what you see; they can also tell when you're active, but that does not give them full access to your banking. Until they send you a $0 bill. The infected user then goes to PayPal's site to inspect the payment, and the attacker captures their login credentials as they sign in. You have basically set them up for failure.

I have yet to find a record of this happening, but I did find an example on Twitter of someone being sent $20 and subsequently being ‘hacked’ the same day.


While the attacker could have gained credentials from a leak or a paste, why would they send the user $20? This would serve no other purpose and would leave a PayPal-ish money trail, when they could now simply send a $0 invoice.

2015 – Year of the dumps | With big data, comes big leaks.

Year of the Dumps – 2015 | It has been an interesting year for monitoring data dumps, the biggest change being that the news has followed them more closely as well. The largest story was Ashley Madison; it will be included in what I feel is the closest thing this site will ever have to a threat assessment, covering over 100 dumps from various sources around the web. I don’t want to focus on how these were pulled off or call out a few grey-market startups specifically; rather, I want to give an overall idea of the status of the dump industry, its targets, and the direction it may be heading.

Without getting into the paper too much, here are a few items it covers:
- 100 dumps from various sites
- Breakdown of industries targeted, languages, and encryption used
- Developments and strategies used by individuals with the dumped data for economic gain
While this serves to give an idea, a snapshot, of what kinds of industries are vulnerable, it does not scratch the surface of what would be possible if all the dumps from 2015 could be captured. Thus only 100 were chosen (don't worry, it's still about 537,879 users, not counting hand-picked ones).
The paper also covers

See below for my paper titled : Year of the Dumps – 2015


Bitlocker adds support for XTS-AES 256-bit in Windows 10!

Good news: since my last article, Windows has added support for drive encryption up to XTS-AES 256-bit, provided you are using a Windows 10 machine updated to version 1511 or later.


Currently there is no way to change the encryption level for drives that are already encrypted, so you will need to disable BitLocker, set the GPO as described in my previous article, and then re-encrypt the disk.


This version is available from Windows Update or from the newest MSDN image.

Just be aware of a few things: removable drives encrypted this way will not work on previous versions of Windows 10, and there is currently an issue with SEDs (self-encrypting drives) and BitLocker, so steer clear if you are using them.

vtech db dump and the accountability of parents

The vtech hack was the under-discussed story of the week, until it was revealed today that the hacker had access to hundreds of thousands of files that could contain images of children. Suddenly it exploded: you saw news agencies that previously would not cover this story all over it.

Broken by Motherboard and Troy Hunt is the fact that vtech (the manufacturer of choice for cheap landline phones and a line of children’s toys) had been hacked. While the information currently (2015-12-01) has yet to be sold or traded at the level I have seen elsewhere, it has really started to garner attention for the wrong reasons.

Before I go any further: yes, children should be protected, and yes, vtech messed up. But how this happened should be considered. As stated in previous articles, “assume everything that is encrypted will be decrypted, expect everything that is secret will be known.” And when dealing with kids, there are no exceptions to this rule. The main goal is to ensure those items (photos, chat logs of children) never exist in the first place.

So let’s step back for a second.

Imagine this: vtech asks their IT dept. to set up a DB and some kind of web UI that allows kids to play games and interact with their toys. They use simple MD5 hashing because they figure, hey, who will want to hack this? Kids don’t have that kind of know-how. Then, months later, marketing sees how well received the online games are and asks the IT team to set up the system the hardware engineers need for shipping a product that lets kids send photos to their parents, communicate, and so on. Code is reused from the original product without a thought for the fact that the content they are protecting now carries a higher weight in privacy than before.

Is this their fault? Yes. But not just theirs.

Parents… were an important part of this process. The sign-up process requires parents to be a part of it, which Troy Hunt covers in his well-written article. The amount of trust they put into vtech was unwarranted and unfair to them, but it carries a good lesson: don’t trust a private company with private information about your child. If we can’t keep our affairs on Ashley Madison secret, then how can we expect more for a child? Some parents don’t want to give phones or unmonitored internet access to kids 4 to 9 years of age (the recommended age for this product on amazon.com). So why give them access to products that allow malicious hackers to view photos of your kids?

I neglected to write an article about this for a number of days because it was just yet another data leak. But the fact that images of innocent kids were included in the leak, I feel, crossed a line. No one likes public data leaks, even less so when they are in them. But some companies fail to yield to the warning given to them by the exploiter, even when it is given in good faith, and so the exploiter feels they must leak the data to make a point and keep more malicious users away. I hope, for the sake of the kids, this leak does not get more public than it already has.

So what’s the solution?

vtech should have built in a higher level of cryptography and privacy (i.e., obscuring the children’s information in their DB) before this was rolled out, with something more secure than MD5; that algorithm has been around since 1991, with its first flaw found in 1996. The crypto should have been stronger. It’s sad to think that the protection built into the forum you use to buy car parts for your 1992 Honda Civic is held higher than the one that allows you to talk to and see your children.
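For contrast, even stock command-line tools can produce salted, iterated password hashes; here is sha512-crypt via OpenSSL as a stand-in (bcrypt or scrypt would be better still; the `-6` flag requires OpenSSL 1.1.1 or newer):

```shell
# A salted, iterated password hash: each run generates a fresh salt, so
# identical passwords no longer produce identical hashes (unlike plain MD5).
openssl passwd -6 'correct horse battery staple'
```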

The parents: this is a tough one, as it requires absolute vigilance on the parents’ end, and how can you trust the thousands upon thousands of vendors out there? The fact of the matter is you can’t, and you don’t have to. Just make judgement calls on products, such as: does my 4-year-old really understand the complications of their toy being on WiFi all the time? No? Then maybe I should look into something else.

It’s hard to be a parent, but in the season for giving to the ones we love, we should not avoid items that flash, or are from the future, or are even from vtech. We should avoid placing in our kids’ hands the items that allow people of a malicious nature to take over.

In ending, this is not a call for you to put your children in tinfoil hats, or to walk to vtech and burn down their offices, but rather a word of warning. The internet holds a lot of information that kids, adults, and even computers can learn from. We should not limit it, nor should we fear it. We just need to be aware of the weight of what we put into it, knowing that someday it might just come back out.