Wednesday, July 29, 2009

Checking accessibility with online DNS tools

It appears that yesterday SoftLayer suffered a DDoS attack on one of their DNS servers, though nothing official has been confirmed yet.

SoftLayer is our hosting provider, and a DDoS attack is a distributed denial of service action meant to make a resource unavailable, in this case a DNS server. Roughly explained, a DNS server is in charge of translating a domain name into an IP address, making it possible for a user to type a name like www.teamness.com in the browser address bar and get the proper web page back. Pingdom has a blog post called A visual explanation of how DNS lookups work, with a nice picture depicting the process.
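The translation step is easy to see from code as well; a minimal sketch using Python's standard socket module (it looks up localhost here, so it works even while a remote DNS server is down):

```python
import socket

def lookup(hostname):
    """Ask the system resolver, and ultimately a DNS server,
    to translate a domain name into an IPv4 address."""
    return socket.gethostbyname(hostname)

# lookup("www.teamness.com") would return the web server's IP address,
# the same address you could type directly into the browser.
print(lookup("localhost"))
```

When DNS is broken, gethostbyname raises socket.gaierror, which is exactly the failure visitors hit by name while the raw IP address still works.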

All the above means that www.teamness.com wasn't accessible in some areas of the world, at least by its domain name. I tried a few websites that let you run lookups through their DNS servers, to get an idea of where the website wasn't available. Here is the list of these services (I also included IP addresses, since if you have a DNS problem, you might not be able to reach them by name):

Website by name: http://www.all-nettools.com/toolbox
Website by IP: http://216.92.207.177/toolbox
Location: United States - Pennsylvania - Pittsburgh

Website by name: http://www.dnsstuff.com/
Website by IP: http://66.36.247.82/
Location: United States - Texas - Dallas

Website by name: http://network-tools.com
Website by IP: http://67.222.132.196/
Location: United States - New Jersey - Ocean City

Website by name: http://ping.eu
Location: Germany - Berlin

Website by name: http://www.demon.net/external
Website by IP: http://194.159.246.194/external
Location: United Kingdom - Scotland - Aberdeen

Website by name: http://www.knossos.net.nz/checkdomain.cgi
Location: New Zealand - Auckland

I used http://whois.domaintools.com to get the locations.

The interesting thing is that www.teamness.com was accessible through all the above, but not from my machine in Sweden. I asked a friend from Cyprus who was up at that time to check it with nslookup and it didn't work from his machine either.

And people went on a re-tweeting frenzy about the incident.


Wednesday, June 10, 2009

Built with what?

I was curious about the server side platforms of some websites I visited recently:

LinkedIn - J2EE
Flickr - PHP
NikonUSA - J2EE
Ebay - Java
Blogspot - Google Front End
Youtube - PHP
Wikipedia - PHP
Stackoverflow - ASP.NET
StumbleUpon - PHP
Twitter - Ruby On Rails
Digg - PHP
TheFreeDictionary - ASP.NET
Wordpress.com - PHP

A big help came from BuiltWith, a technology information profiler tool. It doesn't limit itself to the framework (which it sometimes fails to detect), but also covers the tools used for analytics and tracking, JavaScript libraries, CDN solutions and more.

One can also see a list of websites using a certain technology. For instance, here is some information related to ASP.NET, including a chart that shows the technology's penetration over time across the set of websites queried by BuiltWith.

Friday, June 5, 2009

The WWW prefix

There comes a time at the start of any web business when you have to pick a domain name. This is a laborious task in itself, since pretty much all the cool domain names that resonate with your business are already taken, either by the existing competition or by domain pirates.

That was the case with Teamness, and after the domain was chosen, just as we were about to wipe the sweat off our foreheads, we ran into the unruly question of which URL we should promote: www.teamness.com or teamness.com?

There are pros and cons for each of them. Some believe it's better with the www prefix, others think it's nicer without.

The most important thing is to pick one in the beginning and stick to it. We don't want the links pointing to us around the web to be split between www.teamness.com and teamness.com.

Another important thing is that no matter what the visitors are typing, www.teamness.com or teamness.com, they must reach Teamness nevertheless. We think it's incomprehensibly rude to punish someone who typed teamness.com by sending them to the error page. So a redirect from one form to the other is mandatory.
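The normalization behind such a redirect is simple; here is a sketch of the canonical-URL rule, assuming the www form is the one being standardized on (the web server would answer with a 301 permanent redirect to this URL):

```python
def canonical(host, path="/"):
    """Map either form of the host to the single promoted URL."""
    if not host.startswith("www."):
        host = "www." + host
    return "http://" + host + path

# canonical("teamness.com")                -> "http://www.teamness.com/"
# canonical("www.teamness.com", "/about")  -> "http://www.teamness.com/about"
```

A 301 (rather than a 302) tells search engines and bookmarking tools that the www form is the permanent address, so link equity consolidates on one URL.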

We chose www.teamness.com over teamness.com and one reason had to do with subdomains: we post ramblings to blog.teamness.com, the private stuff is located at my.teamness.com and probably we'll use more subdomains in the future, so the www form acts as a disambiguator.

There is, however, a technical issue with the www-less form: cookies set for it apply to the whole domain. Each cookie has a domain and a path, and the browser sends the cookie along with requests that match that domain. If the cookie domain is www.teamness.com, the cookie will not be sent to blog.teamness.com; but if the domain is teamness.com, the cookie will be sent to all subdomains, like blog.teamness.com, my.teamness.com and so on.
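The browser's behavior follows a domain-matching rule, roughly sketched below (a simplification of the real cookie rules, for illustration only):

```python
def cookie_sent(cookie_domain, request_host):
    """A cookie scoped to teamness.com reaches the domain and every
    subdomain; one scoped to www.teamness.com stays on that host."""
    domain = cookie_domain.lstrip(".")
    return request_host == domain or request_host.endswith("." + domain)

# cookie_sent("teamness.com", "blog.teamness.com")     -> True
# cookie_sent("www.teamness.com", "blog.teamness.com") -> False
```

So a session cookie set on the bare domain leaks into every subdomain, which matters once blog.teamness.com or my.teamness.com are served by different software.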

Few people use the www prefix in verbal communication, as it has become implied when referring to a website. Also, when you type a name in the address bar and hit Ctrl+Enter, every browser will add the www prefix and append .com.

Friday, May 15, 2009

Learning from mistakes and test cases

Pawel was questioning the usefulness of test cases in one of his blog posts. I dislike useless documentation as much as any sane developer, but I see a strong reason for having test cases, besides outsourcing the tests, as one of the readers pointed out in the comments. My reason is regression testing.

Every time you test a new build of the software, you find bugs. Some are easy to reproduce and appear in daily usage scenarios. You want them fixed, otherwise they will get noticed by the users and most probably you don't want that.

In future builds, you would also like to stay free from the previously discovered bugs. Therefore you make notes about the context and the steps that led to the bug's appearance. Surely, not every problem you find is worth keeping track of in a test case. A crash when accessing a web page with valid parameters doesn't usually need a test case, as you'll probably notice it anyway if it comes back.

At a certain point the build becomes stable and free of the obvious problems, the ones that are easily reachable and reproducible. You find fewer bugs, but subtler ones. The contexts in which they appear are not straightforward. Who would've thought to hit the Back and Forward buttons in a wizard 4 times in a row? Still, it's a malfunction and, unless it happens only with Konqueror running on SuSE 6, you would probably want to fix it.

You make a note to check this bug in the next build as well. Over time, the list of notes grows. You need to detail the context and the steps as much as needed, not necessarily as much as possible, because recalling all the details a few months later doesn't really work.

Without noticing it, or without wanting to acknowledge it from the beginning, you now have a list of test cases.

In the future, if you think some of the test cases are no longer worth checking, because they've become obsolete or such, just delete them. Or better yet, mark them as obsolete, so you can revive them later with some adjustments.

There is, of course, automation: having the computer run a suite of regression tests instead of going through a list of test cases yourself. But this shows the same thing once again. These automated tests are, in fact, test cases in a machine-readable form.
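As an illustration, the wizard bug from above could be pinned down as an automated regression test; a sketch using Python's unittest, where the Wizard class is just a hypothetical stand-in for the real UI:

```python
import unittest

class Wizard:
    """A minimal stand-in for a multi-page wizard."""
    def __init__(self, pages):
        self.pages = pages
        self.index = 0

    def forward(self):
        self.index = min(self.index + 1, len(self.pages) - 1)

    def back(self):
        self.index = max(self.index - 1, 0)

    @property
    def current(self):
        return self.pages[self.index]

class BackForwardRegression(unittest.TestCase):
    def test_back_and_forward_four_times(self):
        # The once-discovered bug: repeated Back/Forward lost the state.
        wizard = Wizard(["intro", "details", "summary"])
        wizard.forward()
        for _ in range(4):
            wizard.back()
            wizard.forward()
        self.assertEqual(wizard.current, "details")
```

Running the suite on every build (python -m unittest) replays the whole list of notes automatically, instead of you walking through it by hand.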

Wednesday, April 8, 2009

The Catch-22 of functional specifications

Running away from writing functional specifications is a common thing. We prefer to talk about them. When someone joins the team, people from various areas of the project are asked to have a chat with the newcomer and provide her some insights.

However, these discussions tend to repeat themselves over time and to drift a little with each retelling, leading to chaos.

Writing functional specifications helps. Joel Spolsky shows how in a 4-part series, which may look daunting due to its length, but it's basically a 30-minute read, and a fun and useful one.

Why don't people write specifications, even for a small tool they built, so that the tool can be reused and what it does clearly stated? The invoked reason is always the same: no time!

This is the Catch-22 of functional specifications: people don't have time to write and maintain specifications, because they have to spend that time explaining to their colleagues how things work. And they have to do this because there are no up-to-date specifications.

Thursday, April 2, 2009

IE6 - to support or not to support

Yesterday I was reading this post from the Pingdom blog. I forgot it was the 1st of April, so I thought they went mad.

I realized it was a joke when I saw the first testimonial on the SaveIE6 website mentioned in the post, written by a certain Steve B. Bangal, inventor of spaghetti code.

Image by hashmil

I first read about this call to action by Robert Nyman here. Robert is trying to convince web developers to stop writing special code for Internet Explorer 6. And when you want your web pages to look decent in IE6, you do need to write special code. People call these hacks.

We haven't discussed yet whether we're going to stop supporting Internet Explorer 6 in Teamness. It's very tempting, given all the frustrations we've been through.

So far, 18% of the visits on the public website and 6.5% on the private one have come from IE6. The numbers are small, so I believe we can start the countdown for casting out the hacks.

Sunday, March 29, 2009

Recommended reading with Twitter

"Twitter is a waste of time", some of my friends are saying. "Why do I need to know what others are doing?"

I guess most of the reluctant folks take Twitter's prompt, "What are you doing?", too literally. Surely, that is not interesting in most cases. I couldn't care less when others wake up, whether their coffee tastes good or if it's raining in London. Unless I'm going to London, of course. But in that case there are a hundred websites to check the weather on.

Twitter is a flexible platform. There are probably a ton of articles out there explaining the benefits of Twitter and the way one may use it. I guess one of Twitter's strong points resides in its mass usage. You can see what's hot. For instance, just search for "Earth hour" updates to get an idea.

A few days ago I was reading an article, probably this one, and I realized that among the things I read on the web, there are some I would like to keep track of, as I do with books. And I also want to share them.

I could use Delicious for this, as I do with all my bookmarks, but I wanted something like a stream of articles, broadcast to more than just my network of people on Delicious.

Twitter to the rescue.

However, I needed something more than just "Hey, I found this nice piece of work - link"; more specifically, I wanted to be able to tell the updates referring to articles apart from the other updates.

Therefore I prefixed each such update with the string #RR, an acronym for Recommended Reading. The hash sign "#" is there to mark a sort of label for the update.
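Picking the marked updates out of a list is then a one-liner; a sketch (the sample update texts are invented):

```python
def recommended_reading(updates):
    """Keep only the updates tagged #RR, with the marker stripped."""
    tag = "#RR"
    return [u[len(tag):].strip() for u in updates if u.startswith(tag)]

# recommended_reading(["#RR A nice piece of work - link", "good morning!"])
# keeps only the first entry, without the "#RR" prefix.
```

This is essentially what the pipe described below does, except on the server side and over the Twitter feed instead of a Python list.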

Ok, now I have the procedure in place, but how do I make these updates stand out from the crowd?

Chris Heilmann had a neat idea on how to dig through his series of Twitter updates for certain ones. Here he describes how he used Yahoo Pipes to drill for tweets ending in "§", the character he appends to each update he considers useful. He then processed the last 5 matching tweets with JavaScript to display them in a panel on his blog.

I shamelessly cloned Chris' pipe and changed it a bit to match my needs. I wanted a feed with the recommended reading updates, which is easy to get by changing a parameter in the pipe URL that tells the pipe what to render as the result.

Here is the RSS feed with the recommended reading from Teamness. Please feel free to subscribe to it.
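The pipe output is ordinary RSS, so consuming it takes only a few lines; a sketch with Python's xml.etree, run on a hardcoded sample rather than the live pipe URL (the item titles are invented):

```python
import xml.etree.ElementTree as ET

# A trimmed-down example of the kind of document the pipe renders.
SAMPLE_RSS = """<rss version="2.0"><channel>
<item><title>#RR A visual explanation of DNS lookups</title></item>
<item><title>#RR The WWW prefix</title></item>
</channel></rss>"""

def feed_titles(rss_text):
    """Extract the item titles from an RSS 2.0 document."""
    root = ET.fromstring(rss_text)
    return [item.findtext("title") for item in root.iter("item")]

# feed_titles(SAMPLE_RSS)
# -> ['#RR A visual explanation of DNS lookups', '#RR The WWW prefix']
```

Any feed reader does the same parsing, which is why subscribing to the pipe's RSS output just works.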

I also kept the Twitter id in the pipe configurable, so if you mark some of your updates in the same way, prefixed with #RR, you may use the same pipe by changing only the id below:

http://pipes.yahoo.com/pipes/pipe.run?_id=839250028c8c36570145554b0bcd190c&_render=rss&id=15095949