Simple ways to speed up your website

Having a fast website is very important. As I mentioned in my Black Friday post, nobody likes a slow website, and if your site takes more than a few seconds to load, the chances are you are losing visitors because of that lag.

This article contains a few easy-to-implement tips to help you reduce your website’s load time.

Keep Your Code Tidy

Unless something goes wrong, or someone chooses to view your source code, most of the people who visit your website will never see any of the code that is stuffed away behind the scenes. That doesn’t mean it isn’t important, however. After all, the code at the back end is what creates the website at the front end.

Minify Your Code

Minifying your HTML, CSS and JavaScript is a very easy way to reduce the size of your website. If there is less to load, then your website will load faster. If you use a CMS like WordPress, there are many plugins which can minify your code for you. If you self-code, there are websites which will shrink your code for you, or you can go through it yourself, removing unnecessary spaces, line breaks and comments.
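As a quick, made-up illustration (the markup here is just a dummy navigation list, not code from this site), a minifier would take something like this:

<!-- main navigation -->
<ul class="nav">
    <li> <a href="/">Home</a> </li>
    <li> <a href="/about">About</a> </li>
</ul>

and turn it into this – the same markup, with the comment and the unnecessary whitespace stripped out:

<ul class="nav"><li><a href="/">Home</a></li><li><a href="/about">About</a></li></ul>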

Reduce Files Fetched

It is good practice to fetch as few files as possible when loading your website. Many sites use separate style sheets for different parts of the website – one for text, one for images and another for general layout. Every file that your page calls upon increases its overall load time, so fetching one big CSS document will usually be faster than fetching three smaller ones.
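To use a made-up example (the file names are just placeholders), rather than fetching three separate stylesheets like this:

<link rel="stylesheet" href="text.css">
<link rel="stylesheet" href="images.css">
<link rel="stylesheet" href="layout.css">

you could paste their contents into one file and fetch a single sheet instead:

<link rel="stylesheet" href="style.css">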

Also consider how many external resources you load – adding a Facebook like button, for example, requires the user’s browser to visit Facebook’s servers to pull the code across whilst loading your page. A plain link, or a delayed load on things like social sharing buttons, can give you a big speed boost.
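One rough way to do a delayed load – a sketch only, with a placeholder script URL rather than any real social network’s code – is to wait until your own page has finished loading before asking the browser to fetch the third-party script:

<script>
window.addEventListener('load', function () {
    // Fetch the sharing widget only once everything else has loaded
    var script = document.createElement('script');
    script.src = 'https://example.com/social-buttons.js'; // placeholder URL
    script.async = true;
    document.body.appendChild(script);
});
</script>

Your own content appears first, and the slower third-party request happens afterwards, so visitors aren’t kept waiting for someone else’s server.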

Optimise Your Images

Images make your content more exciting; however, if you don’t optimise them they can really slow down your page load time. There are various ways you can reduce the file size of your images without compromising on quality.

Resize Pictures

When you take a picture, it is often much bigger than you really need it to be. By resizing photos before you upload them, you can massively reduce the file size of your images. If you leave the file big but resize it using HTML or CSS – by setting a smaller height and width – then the end user still has to load the big image, and their browser then has to squash it down to fit your new image dimensions.
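For example (the file names and dimensions here are made up), this forces every visitor to download the full-size photo, only for their browser to scale it down:

<img src="holiday-4000px.jpg" width="600" height="400" alt="Holiday photo">

whereas resizing the photo to 600 pixels wide before uploading it, and pointing the page at the smaller file, sends far less data for exactly the same result on screen:

<img src="holiday-600px.jpg" width="600" height="400" alt="Holiday photo">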

Choose The Right File Type

The most commonly used image formats are .jpg, .gif and .png. Different images lend themselves to different formats. Reducing the number of colours available to a GIF or a PNG-8 image will reduce the file size, whilst reducing the image quality will lower the size of a JPEG file.

Use An Image Compressor

Image compressors are another way to shrink images. Technology Bloggers currently uses a WordPress plugin called WP Smush.it, which uses the Yahoo! Smush.it tool to reduce image file sizes.

Example

Here is a picture that I took several years ago whilst in South Africa.

Elephants in South Africa
The full-sized image was 3.44 megabytes. Resizing it in Photoshop helped me reduce that to 1.61 megabytes. Because there are lots of colours and the image was quite big, choosing GIF or PNG-8 format made it look too pixelated, so it was between PNG-24 and JPEG. PNG-24 squashed the image down to 831 kilobytes, whilst JPEG compressed it to a tidy 450 kilobytes. Although that is a lot smaller than the original file, it would still take a long time to load on a slow connection, so by taking a very small hit on image quality I managed to get the file size down to 164 kilobytes. Finally, running the image through Smush.it took it down to 157 kilobytes. Some images see a big reduction; most (like this one) see a smaller reduction of just a few percent.

Use A Content Delivery Network

Content delivery networks, or CDNs, can help to improve a website’s speed and make it more reliable. Put very simply, when someone tries to access your site without a CDN, they are directed to your hosting provider, who then serves them your website and all its files from their server. This means that if your host goes down because of a fault or a sudden surge in traffic, you lose your site; it also means that if your host’s server is a long way from a user, requests and responses take longer to travel between the two.

With a CDN, users can fetch your site faster, because it is served from multiple locations around the world. Additionally, many CDNs can cache a copy of your site, so if your host goes offline, they can provide a static version of your site to users until it comes back up.

For example, Technology Bloggers is currently hosted in Gloucester in the UK. If you access us from Australia, CloudFlare (the CDN we use) will send you to its closest data centre – which could well be in Australia – and that data centre will deliver the files you need to see our site. It is faster because your requests don’t have to travel all the way to the UK, and neither does the data being sent back to you.

Control Your Cache

Server Side

If you use a CMS, then the chances are your content is dynamically generated upon request. Basically, when the user requests a page, your site creates it and then sends it back. By using some form of caching you can create a static copy of your pages, so your site doesn’t have to build the content each time a user visits. There are various plugins you can use to help with this. Technology Bloggers uses CloudFlare’s caching system, as I have found it seems to work better than the other WordPress plugins I have tried. Also, using too many plugins slows your site down, which is why I let the CDN manage it.

User Side

A user’s browser also saves files for later, in case they visit your site again. It is possible to control which files are saved and for how long they are kept; you can change these settings by adding caching headers to your .htaccess file.
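As a rough sketch – assuming your site runs on Apache with the mod_expires module enabled, and with lifetimes that are only example values rather than recommendations – those headers might look something like this:

<IfModule mod_expires.c>
    ExpiresActive On
    # Images rarely change, so let browsers keep them for a month
    ExpiresByType image/jpeg "access plus 1 month"
    ExpiresByType image/png "access plus 1 month"
    # Stylesheets and scripts tend to change more often
    ExpiresByType text/css "access plus 1 week"
    ExpiresByType application/javascript "access plus 1 week"
</IfModule>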

How To Test If Your Site Is Faster

Refreshing your page and timing it with a stopwatch is one way to gauge how quickly your site loads. This probably isn’t the best way to do it though!

There are various websites which rate your site’s speed. I tend to measure Technology Bloggers using four main speed analysis sites.

Google PageSpeed

Google are keen for the web to be faster and offer a very useful tool which gives your site a score for mobile load time and desktop load time. It also suggests what it believes is slowing your site down. Google’s tool also gives an image of your fully loaded site – all the content above the fold. Unfortunately, their test doesn’t actually state how fast your site loads, just how well optimised it is.

WebPageTest

Probably the most thorough site I use is WebPageTest, which presents loads of different information, including first view load time, repeat view load time (which should be quicker if you have user side caching), a waterfall view of all the files loading, a visual representation of how your site loads, suggestions as to where performance issues lie and loads more.

An analysis of TechnologyBloggers.org using the WebPageTest tool

Pingdom

Pingdom is another useful tool. It gives a handy speed score and also tells you how fast your site is compared to other sites it has tested. It also saves your speed results, so you can view historic test results on a graph and see how your site’s speed has changed.

GTmetrix

GTmetrix is another useful site. It also gives lots of detail and helps you to see what is slowing your site down. GTmetrix also lets you compare one site to another, which I’m not really sure is that useful, but it is interesting to see how a competitor’s site compares to your own.

An analysis of TechnologyBloggers.org using the GTmetrix tool

Happy Browsing

Remember to enjoy your new, faster site! Hopefully your visitors will too. 🙂

Blog Action Day 2013

Blogs all across the world are talking about human rights today. For the fourth year in a row I am taking part in Blog Action Day.

Blog Action Day’s logo

This year the topic is human rights.

I am going to share with you my thoughts on the relationship between the Internet and human rights.

Imagine what it would be like if, every day, a cloaked figure followed you around, observing your every action and taking notes. It would be a bit creepy, wouldn’t it? Not to mention the privacy issues.

Back in 2011, I wrote a post asking whether everyone should be entitled to use the Internet and whether in fact it should be a human right. Founder of Facebook Mark Zuckerberg believes that it should be; make your own decision as to whether this is only because he wants more business for his site.

So, imagine what it would be like with Mark Zuckerberg following you around all day, taking notes on what you do, invading your privacy… hold on, if you are on Facebook, he kind of does.

See how I linked that. 😉

I am no stranger to complaining about Facebook, but it isn’t the only culprit; Google is also a huge threat to online privacy. It stores all the information it collects about you for at least 18 months. Why? In the words of Hungry Beast, because “Google wants to know who you are, where you are and what you like, so it can target ads at you.” Check out Hungry Beast’s video to scare yourself.

So to get to the point, I don’t believe access to the Internet needs to be a human right (not yet anyway), however I do believe that the right to privacy online should be.

The United Nations logo

Article 12 of the United Nations Universal Declaration of Human Rights states:

“No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.”

Why does this not cover our online lives too? Should Google, Amazon, Facebook, Yahoo and Apple (and others) be allowed to monitor us so much?

I shall keep this short and sweet and leave you with those thoughts.

Fighting spam and recapturing books with reCAPTCHA

A CAPTCHA is an anti-spam test used to work out whether a request has been made by a human, or a spambot. CAPTCHAs no longer seem to be as popular as they once were, as other spam identification techniques have emerged, however a considerable number of websites still use them.

CAPTCHA pictures

Some common examples of CAPTCHAs.

CAPTCHAs can be really annoying, hence their downfall in recent years. Take a look at the different CAPTCHAs in the image above. If you had spent 30 seconds filling in a feedback form, would you be willing to try and decipher one of them, or would you just abandon the feedback?

The top left image could be ZYPEB, however it could just as easily be 2tPF8. If you get it wrong, usually you will be forced to do another, which could be just as difficult.

The BBC recently reported that the National Federation of the Blind has criticised CAPTCHAs, due to their restrictive nature for the visually impaired. Many CAPTCHAs do offer an auditory version, however if you check out the BBC article (which has an example of an auditory CAPTCHA), you will see that they are near impossible to understand.

reCAPTCHA

Luis von Ahn is a computer scientist who was instrumental in developing the CAPTCHA back in the late ’90s and early 2000s. According to an article in the Canadian magazine The Walrus, when CAPTCHAs started to become popular, Luis von Ahn “realized that he had unwittingly created a system that was frittering away, in ten-second increments, millions of hours of a most precious resource: human brain cycles.”

Anti-spam reCAPTCHA

An example of a reCAPTCHA CAPTCHA.

In order to try and ensure that this time was not wasted, von Ahn set about developing a way to better utilise this time; it was at this point that reCAPTCHA was born.

reCAPTCHA is different to most CAPTCHAs because it uses two words. One word is generated by a computer, whilst the other is taken from an old book, journal, or newspaper article.

Recapturing Literature

As I mentioned, reCAPTCHA shows you two words. One word is there to prevent spam and confirm the accuracy of your reading; you must get this one right, or you will be presented with another. The other word is designed to help piece together text from old literature, so that books, newspapers and journals can be digitised.

reCAPTCHA presents the same word to a variety of users and then uses the consensus response to work out what the word actually says – this helps to stop abuse. In a 2007 quality test using standard optical character recognition (OCR) software, 83.5% of words were identified correctly – a reasonably high proportion – however the accuracy of human interpretation via reCAPTCHA was an astonishing 99.1%!

According to an entry in the journal Science, in 2007 reCAPTCHA was present on over 40,000 websites, and users had interpreted over 440 million words! Google claim that around 200 million CAPTCHAs are now solved each day.

If each of those 440 million words took 10 seconds to decipher, that would be around 4.4 billion seconds – roughly 139 years – of brain time; I am starting to see what Mr von Ahn meant! To put the 440 million words into perspective, the complete works of Shakespeare is around 900,000 words – or 0.9 million.
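For anyone who wants to check my sums, the rough arithmetic is:

\[
4.4 \times 10^{8}\ \text{words} \times 10\ \text{seconds} = 4.4 \times 10^{9}\ \text{seconds} \approx 139\ \text{years}
\]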

Whilst the progress of reCAPTCHA seems pretty impressive, it is a tiny step on the path to total digitisation. According to this BBC article, von Ahn was quoted at the time as saying:

“There’s still about 100 million books to be digitised, which at the current rate will take us about 400 years to complete”

Google

In 2009, Google acquired reCAPTCHA. The search giant claimed that it wanted to "teach computers to read", hence the acquisition.

Many speculate that Google‘s ultimate aim is to index the world, and reCAPTCHA will help it to accelerate this process. That said, if that is its goal, it is still a very long way off.

We won’t be implementing a CAPTCHA on Technology Bloggers any time soon, however next time you have to fill one in, do spare a thought for the [free] work you might be doing for literature, for history and for Google.

The size of the Internet – and the human brain

How many human brains would it take to store the Internet?

Last September I asked: if the human brain were a hard drive, how much data could it hold?

The human hard drive: the brain

I concluded that approximately 300 exabytes (or 300 million terabytes) of data can be stored in the memory of the average person. Interesting stuff, right?

Now that I know how much computer data the human brain can potentially hold, I want to know how many people’s brains would be needed to store the Internet.

To do this I need to know how big the Internet is. That can’t be too hard to find out, right?

It sounds like a simple question, but it’s almost like asking how big is the Universe!

Eric Schmidt

In 2005, Eric Schmidt, now Google’s executive chairman, famously wrote regarding the size of the Internet:

“A study that was done last year indicated roughly five million terabytes. How much is indexable, searchable today? Current estimate: about 170 terabytes.”

So in 2004, the Internet was estimated to be 5 exabytes (or 5,120,000,000,000,000,000 bytes).

The Journal Science

In early 2011, the journal Science calculated that the amount of data in the world in 2007 was equivalent to around 300 exabytes. That’s a lot of data, and most of it would have been stored in such a way that it was accessible via the Internet – whether publicly accessible or not.

So in 2007, the average memory capacity of just one person could have stored all the virtual data in the world. Technology has some catching up to do. Mother Nature is walking all over it!

The Impossible Question

In 2013, the size of the Internet is unknown. Without mass global collaboration, I don’t think we will ever know how big it is. The problem is defining what is part of the Internet and what isn’t. Is a business’s intranet which is accessible from external locations (so really an extranet) part of the Internet? Arguably yes, it is.

A graph of the internet

A map of the known and indexed Internet, developed by Ruslan Enikeev using Alexa rank

I could try to work out how many sites there are, and then multiply this by the average site size. However, what is the average size of a website? YouTube is petabytes in size, whilst my personal website is just kilobytes. How do you average that out?

Part of the graph of the internet

See the red circle? That is pointing at Technology Bloggers! Yes we are on the Internet map.

The Internet is now too big to try and quantify, so I can’t determine its size. My best chance is a rough estimate.

How Big Is The Internet?

What is the size of the Internet in 2013? Or to put it another way, how many bytes is the Internet? Well, if in 2004 Google had indexed around 170 terabytes of an estimated 500 million terabyte net, then it had indexed around 0.00000034% of the web at that time.

On Google’s how search works feature, the company boasts that its index is well over 100,000,000 gigabytes. That’s 100,000 terabytes, or 100 petabytes. Assuming that Google is getting slightly better at finding and indexing things, and has therefore now indexed around 0.000001% of the web (meaning it has indexed three times more of the web, as a percentage, than it had in 2004), then that 0.000001% of the web would be 100 petabytes.

If 0.000001% of the net is 100 petabytes, then 1% of the net is 100 petabytes times 1,000,000, which is 100 zettabytes. Multiply 100 zettabytes by 100 and you get 10 yottabytes, which is (by my calculations) equivalent to the size of the web.
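Laid out as a single back-of-the-envelope calculation – taking my 0.000001% guess at face value – that chain is:

\[
\frac{100\ \text{PB}}{0.000001\%} = 100\ \text{PB} \times 10^{8} = 10^{10}\ \text{PB} = 10{,}000\ \text{ZB} = 10\ \text{YB}
\]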

So the Internet is 10 yottabytes! Or 10,000,000,000,000 (ten thousand billion) terabytes.

How Many People Would It Take To Memorise The Internet?

If the web is equivalent to 10 yottabytes (or 10,000,000,000,000,000,000,000,000 bytes) and the memory capacity of a person is 0.0003 yottabytes (0.3 zettabytes), then currently, in 2013, it would take around 33,333 people to store the Internet – in their heads.
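The division behind that figure:

\[
\frac{10\ \text{yottabytes}}{0.0003\ \text{yottabytes per person}} \approx 33{,}333\ \text{people}
\]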

A Human Internet

The population of Earth is currently 7.09 billion. So if there were a human Internet, whereby everyone on Earth was connected, how much data could we all hold?

The calculation: 0.0003 yottabytes x 7,090,000,000 = 2,127,000 yottabytes.

A yottabyte is currently the biggest officially recognised unit of data; the next step up (which isn’t currently recognised) is a brontobyte. So if mankind were to max out its memory, we could store 2,127 brontobytes of data.

I estimated that the Internet would take up a tiny 0.00047% of humanity’s memory capacity.

The conclusion of my post on how much data the human brain can hold was that we won’t ever be able to technically match the amazing feats that nature has achieved. Have I changed my mind? Not really, no.