
This Blog Will Be on an Indefinite Hiatus

I have worked on this blog for more than two years, and during that time I have posted new material at least once every few weeks. I have enjoyed writing about topics that I find interesting. However, the time has come for me to focus on other projects, which means it will be quite a while before any new entries are posted here after this one.

An opportunity recently came about that will leave me with little time to keep this blog updated, and I also expect to be unable to spend much time updating the software that I have previously written. I am not sure what the future holds for this blog or for that software. I am quite sure, however, that I will miss working on these projects, and I look forward to returning to them.

DNS Prefetching and the SPDY Protocol: Attempts by Google to Make Web Browsing More Efficient

To follow up on the previous entry here, I considered writing about DNS prefetching in the Google Chrome web browser. After writing about Chrome’s JavaScript performance, it seemed appropriate to mention that web pages may appear more quickly in Chrome when DNS prefetching is enabled. It was claimed in this blog post that when domains are first visited in Chrome, DNS prefetching saves 250 ms on average. DNS prefetching seemed worth covering here, even though it had already been discussed elsewhere when Chrome was first released over a year ago. However, Google recently announced that it is going beyond browser work to make web pages appear in browsers more quickly. In this entry, I will also write about SPDY, the protocol Google has created to address HTTP’s inefficiencies.

When considering ways to make web pages appear more quickly in browsers, one needs to consider everything that occurs when a user visits a web page. One promising idea is to resolve, in advance, the IP addresses of the domains to which users may navigate next; when this is done, the time it takes for those pages to appear should decrease. One can see how much faster pages appear when DNS prefetching is used in Chrome by entering “about:dns” into the address bar, which shows which hostnames benefited from this prefetching. I found this information quite interesting, so I used the packet sniffer Wireshark to view network traffic with DNS prefetching off and then with it on. There are many link-heavy pages one can use for such a comparison. I personally experimented with viewing Slashdot, and found it interesting to see how much more DNS traffic there was when I refreshed the page with prefetching enabled.
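
To illustrate the idea (this is not Chrome’s implementation, just a sketch of the concept), the following Python snippet resolves the hostnames found in a page’s links ahead of time, so the lookups are already done by the time a link is clicked. The hostname list is hypothetical.

```python
# A toy illustration of DNS prefetching: resolve the hostnames of links
# on a page before the user clicks them. This is not Chrome's code; it is
# a sketch of the general idea using only the Python standard library.
import socket
from concurrent.futures import ThreadPoolExecutor

# Hypothetical hostnames harvested from the links on a page.
link_hostnames = ["example.com", "www.wikipedia.org", "slashdot.org"]

def prefetch(hostname):
    try:
        # getaddrinfo performs the DNS lookup; where the system resolver
        # caches results, a later connection to the same host can skip
        # the lookup delay entirely.
        socket.getaddrinfo(hostname, 80)
        return hostname, "resolved"
    except socket.gaierror:
        return hostname, "failed"

with ThreadPoolExecutor(max_workers=8) as pool:
    for host, status in pool.map(prefetch, link_hostnames):
        print(f"{host}: {status}")
```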

In theory, and according to tests run by Google, DNS prefetching leads to more efficient web browsing. However, when viewing DNS traffic with a sniffer such as Wireshark, one can see how it could have the opposite effect. When a page contains many links, most of them may never be clicked, yet their hostnames are resolved anyway. In fact, it is said in this blog entry that DNS prefetching can lead to the browser spending a great deal of time resolving hostnames, and that incorrect implementation of DNS caching or prefetching, or other related errors, can lead to web pages not appearing at all. It is suggested there that DNS prefetching be disabled, although I would note that I have never encountered such issues in the time that I have used Chrome. It should be mentioned that disabling DNS prefetching is a common suggestion when Chrome seems to be running slowly; a search on the Chrome help forum turns up that advice frequently. Although some have found that browser speed is worse with prefetching disabled, Google’s goal of reduced perceived latency may not always be attained.

Of course, ideas that seem good in theory are not always good in practice, and DNS prefetching can be considered an example of that. Still, one must appreciate the efforts that Google is making to improve the web browsing experience for users. There most certainly is room for improvement in the way the web works, and Google has recognized this; some of that room for improvement came about because the nature of the web has changed, and Google is trying to keep up with changing times. This is what led them to try to improve on HTTP.

When HTTP was first designed, it was made for a web of simple pages, in which far fewer files were sent from a server to a client for each page. Now that loading a typical page involves many requests, that design no longer suffices. HTTP 1.1 supports sending several requests before the replies arrive, in what is known as pipelining. However, pipelining may not be very advantageous, as HTTP 1.1 requires that replies be returned in the order the requests were made, so one slow response holds up every response queued behind it, and the server must do extra work to keep its responses in order. Some might also see how sending several requests at once and requiring the server to process them can lead to denial-of-service attacks. This is all explained well in this blog post.
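
As a small illustration of pipelining (a sketch, not a recommendation, since many servers and proxies handle pipelining poorly), here is a Python example that writes two GET requests to one socket before reading anything back; HTTP 1.1 guarantees the responses come back in the same order they were requested.

```python
# Minimal illustration of HTTP/1.1 pipelining: two requests are written on
# one TCP connection before any response is read. The responses must come
# back in request order, which is exactly the head-of-line blocking problem
# SPDY was designed to avoid. The host and paths are just examples.
import socket

HOST = "example.com"
requests = [
    f"GET / HTTP/1.1\r\nHost: {HOST}\r\n\r\n",
    f"GET /index.html HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n",
]

with socket.create_connection((HOST, 80)) as sock:
    # Send both requests back to back without waiting for replies.
    sock.sendall("".join(requests).encode("ascii"))

    # Read until the server closes the connection; the two responses
    # arrive one after the other, in the order the requests were sent.
    response_bytes = b""
    while chunk := sock.recv(4096):
        response_bytes += chunk

print(response_bytes.decode("latin-1")[:200])
```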

SPDY addresses HTTP’s issues in a number of ways. Several requests can be multiplexed over one TCP connection, and those requests are prioritized, so the client can ask that the resources that matter most arrive first. Headers are also compressed, which is appropriate given how much redundant header data tends to be sent back and forth with HTTP. It is interesting to note that all SPDY traffic is encrypted. It was claimed that page load times were reduced by up to 64% when SPDY was tested. However, SPDY may have its drawbacks. As it says in this article on SPDY, the additional processing required for compression and encryption could mean that servers have to be upgraded. This article describes the changes that would need to be made on the server side in greater detail.
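
The following is a rough sketch of the ideas just described, not SPDY’s actual wire format: streams identified by an ID and a priority are multiplexed over one connection, and their header blocks are compressed with zlib, which is what SPDY used for header compression.

```python
# A rough sketch of the ideas behind SPDY, not its actual wire format:
# multiple prioritized streams share one connection, and header blocks
# are zlib-compressed to cut down on redundant header bytes.
import zlib
from dataclasses import dataclass, field

@dataclass(order=True)
class Stream:
    priority: int                 # lower value = more important resource
    stream_id: int = field(compare=False)
    headers: dict = field(compare=False, default_factory=dict)

    def compressed_headers(self) -> bytes:
        raw = "\r\n".join(f"{k}: {v}" for k, v in self.headers.items())
        return zlib.compress(raw.encode("ascii"))

# Hypothetical requests for one page: the HTML matters more than an image.
streams = [
    Stream(priority=3, stream_id=3, headers={"method": "GET", "path": "/logo.png"}),
    Stream(priority=0, stream_id=1, headers={"method": "GET", "path": "/"}),
]

# The client sends the most important streams first over the single connection.
for s in sorted(streams):
    block = s.compressed_headers()
    print(f"stream {s.stream_id} (priority {s.priority}): "
          f"{len(block)} compressed header bytes")
```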

Google also recently announced the launch of a public DNS resolver called “Google Public DNS” in another attempt to improve the web browsing experience for users. Google’s attempts at making web browsing more efficient may not work as well in practice as they do in theory. Nevertheless, one has to appreciate their efforts in updating the way in which browsers and protocols work to improve the efficiency of web browsing.

Benchmarking of JavaScript Performance: Firefox 3.5 vs. Google Chrome 3.0

Many web pages and web applications rely on JavaScript, which is why browser vendors tend to treat JavaScript performance as a high priority. There are other factors that affect browser performance, as browsers are not simply JavaScript engines. This was mentioned by a senior product manager on the Internet Explorer development team, as you can see in this Computerworld article, which lists the results of JavaScript benchmark tests on different browsers. However, the same article notes that Internet Explorer 8 Release Candidate 1 completed those benchmark tests four times faster than IE8 Beta 2 did, which suggests that those working on Internet Explorer do consider JavaScript performance important. And since browser vendors consider JavaScript performance important, doing well on these benchmark tests matters to them.

There are three major JavaScript performance test suites, each released by a different browser vendor: WebKit released SunSpider, Mozilla released Dromaeo, and Google released the V8 benchmark. As is often the case with benchmarks, they may not accurately reflect performance in the real situations they try to simulate, and these suites have their flaws, some of which are still being addressed. Despite those flaws, they can give an indication of how some JavaScript engines are better than others. In the Computerworld article previously mentioned here, the SunSpider suite was used to test the JavaScript engines of different browsers.

After writing about Google Chrome in the last entry here, I wanted to determine more precisely how much better its JavaScript performance is than that of other browsers. I considered running a few benchmark tests on a few different browsers, but that has already been done many times by others; one can view the results of such tests in the Computerworld article to which I previously posted a link, and here. However, those tests were run before Firefox 3.5 was released, and JavaScript performance in Firefox 3.0 is not as good as it is in 3.5. In fact, according to this page on Firefox performance, SunSpider indicates that JavaScript performance in Firefox 3.5 is twice as good as in Firefox 3.0. I also wanted to run these tests myself, so I ran the SunSpider benchmarks using Firefox and Chrome, and found that they ran much faster in Chrome.

The SunSpider test suite is a comprehensive one that simulates situations users encounter when they browse the web, such as the generation of tag clouds from JSON input. In running these tests, I ensured that I was using the latest versions of both browsers, that no other browser tabs were open, and that Firefox was running under a newly created profile, so that the results would be as accurate as possible.

I found it interesting to see the mean times it took for the tests in the SunSpider suite to be completed, along with the 95% confidence intervals, which, loosely speaking, are ranges that should contain the true mean 95% of the time. I viewed the source code on the SunSpider website to see how those intervals are calculated using a Student’s t-distribution, and was reminded of concepts from a university statistics course. After running the tests three times with Google Chrome, the average time to complete the suite was 822.2 ms. I then ran the tests three times with Firefox, where the average was 1756.8 ms, so Firefox took more than twice as long as Chrome to complete the suite. One can verify this by viewing the results of the first, second, and third runs with Chrome, and the first, second, and third runs with Firefox. One can also paste the URL of one browser’s results into a text box on the other browser’s results page to compare them directly.
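
For those curious how such an interval is computed, here is a small sketch of the calculation that SunSpider-style harnesses perform. The three timings below are invented purely for illustration; with three runs there are two degrees of freedom, for which the two-sided 95% t critical value is about 4.303.

```python
# Compute a mean and a 95% confidence interval from repeated benchmark runs
# using Student's t-distribution, as SunSpider-style harnesses do.
# The three timings below are invented purely for illustration.
from statistics import mean, stdev

run_times_ms = [801.4, 818.2, 842.9]   # hypothetical totals from three runs

n = len(run_times_ms)
m = mean(run_times_ms)
s = stdev(run_times_ms)                # sample standard deviation

# Two-sided 95% critical value of the t-distribution with n-1 = 2 degrees
# of freedom, taken from a t-table. (scipy.stats.t.ppf(0.975, n - 1) would
# compute it directly.)
T_CRITICAL = 4.303

half_width = T_CRITICAL * s / n ** 0.5
print(f"mean: {m:.1f} ms, 95% CI: +/- {half_width:.1f} ms "
      f"({m - half_width:.1f} .. {m + half_width:.1f})")
```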

As you may have noticed if you have viewed JavaScript benchmark results posted elsewhere on the web, they tend to list only mean completion times; confidence intervals are not mentioned in the Computerworld article I referenced earlier. When one wants to see precisely how much faster one browser’s JavaScript engine is than another’s, the confidence intervals are worth taking into account. In the tests that I ran, the confidence intervals amounted to only a small percentage of the means, and the means and intervals were consistent across the three runs on each browser. This gives a strong indication that Chrome’s JavaScript performance is significantly better than that of Firefox.

Google Chrome may outperform other browsers in JavaScript performance, but Internet Explorer’s JavaScript performance matches Chrome’s when Google Chrome Frame is used with it. According to this other Computerworld article, when Internet Explorer is used with Chrome’s JavaScript engine, its JavaScript performance is, understandably, much better; once again, Computerworld used the SunSpider suite to measure the difference. Chrome’s JavaScript engine may perform better than other JavaScript engines, and it appears that it will continue to do so: in this post on the Google Chrome blog, it was said that JavaScript performance in the latest beta had increased by 30% over the last stable release of Chrome.

Explanations of What Makes Google Chrome Fast and Efficient

Ever since I started using Google Chrome, I have been very impressed with its speed, and I wondered what makes this browser as fast and efficient as it is. I did some research on what makes it faster than the other browsers I have used, and I have read several explanations of the design decisions that make it stand out, some better than others. After my positive review of A Stick Figure Guide to the Advanced Encryption Standard (AES), some may not be surprised to find that I consider the best explanation of what makes Google Chrome fast and efficient to be in the form of a comic book.

After reading about what makes Google Chrome faster than other browsers, I found that a few points tend to be mentioned consistently. It is often said that its use of the WebKit open source layout engine and the open source JavaScript engine known as V8 sets it apart. What it is about WebKit and V8 that makes Chrome faster and more efficient is explained in a number of different places and forms, such as this video in which Lars Bak, the head programmer of the V8 project, explains the concepts behind V8.

Some might consider the explanation in that video useful, while others, who are less familiar with the topics that Bak covers, would prefer an explanation that they are more likely to understand. I personally wanted a thorough explanation of what makes Google Chrome as fast and efficient as it is. What I had not realized was that I needed to look no further than the page on Google that lists Chrome’s features for a link to the information I wanted. This information may have been there for a long time, and it still seems to be the best introductory guide to the technology that makes Google Chrome different.

It seemed to be appropriate that the best guide to Google Chrome’s features came from those employed at Google. The differences in its user interface, security handling, and performance are covered in it. In this post, I will only focus on Google Chrome’s speed and efficiency. I may cover the user interface and security design decisions in a different post about Google Chrome. If I do, I may once again refer to this comic.

There is a section of that comic on WebKit and V8 in which the important concepts behind V8 are explained well. WebKit, which is also used by Safari, is said to manage memory well. It is said that Google Chrome is designed for the world wide web of today rather than the one that existed when browsers were first written, which is why an efficient JavaScript engine was considered so important, given how heavily web applications use JavaScript. The concept of hidden classes is covered well, and in enough detail for the intended audience to understand the basics of it; those who would like a more detailed explanation of the concept can see it explained further here. It is also mentioned that V8 compiles JavaScript into machine code. As those familiar with the advantages of compiled code over interpreted code will know, this can lead to a significant performance increase, and because the generated machine code can refer to the hidden classes, performance improves further. In addition, V8’s method of garbage collection is very efficient, and the comic explains how memory is managed much more efficiently than in other JavaScript engines.
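
To make the hidden-class idea more concrete, here is a small Python sketch, a toy model rather than V8’s implementation, of how an engine can map objects that gain the same properties in the same order onto a shared “shape” that records the offset of each property, so that property access becomes a fixed-offset lookup instead of a dictionary search.

```python
# A toy model of V8-style hidden classes ("shapes"): objects that acquire
# the same properties in the same order share one shape, which maps each
# property name to a fixed slot offset. This illustrates the concept; it is
# not how V8 is actually written.
class Shape:
    def __init__(self, parent=None, prop=None):
        self.offsets = dict(parent.offsets) if parent else {}
        if prop is not None:
            self.offsets[prop] = len(self.offsets)
        self.transitions = {}   # property name -> next Shape

    def add_property(self, prop):
        # Reuse an existing transition so identical construction orders
        # end up on the identical Shape object.
        if prop not in self.transitions:
            self.transitions[prop] = Shape(self, prop)
        return self.transitions[prop]

EMPTY_SHAPE = Shape()

class JSObject:
    def __init__(self):
        self.shape = EMPTY_SHAPE
        self.slots = []

    def set(self, prop, value):
        if prop not in self.shape.offsets:
            self.shape = self.shape.add_property(prop)
            self.slots.append(value)
        else:
            self.slots[self.shape.offsets[prop]] = value

    def get(self, prop):
        return self.slots[self.shape.offsets[prop]]

a, b = JSObject(), JSObject()
for obj in (a, b):
    obj.set("x", 1)
    obj.set("y", 2)

# Same property order, so both objects share one hidden class; generated
# machine code could therefore load "y" from a known offset directly.
print(a.shape is b.shape, a.get("y"))   # True 2
```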

The explanations in this comic seem to be aimed at an audience that has some knowledge of the concepts explained in it. I thought that those who want to know more about this technology would find it understandable and entertaining. Some might also understand jokes in it, such as the “speed limit” depicted on this panel as being 10^100, also known as a googol. Some might like this comic, some might not. For those who prefer to view a video that covers some of the same material, but in less depth, there is this video.

Some may find my recent praise for comics interesting. Some might also find it interesting that I am praising Google Chrome after writing much here about Firefox. I still use Firefox quite often, mostly because of its extensions. However, when I want to use a browser that is simple, fast, and efficient, I use Google Chrome.

Can Stick Figures Make AES More Understandable?

I sometimes come across informative explanations of concepts from other bloggers, and sometimes those explanations are more entertaining than any I could write. I write explanations of concepts here myself, trying to present them in a way that keeps readers interested, and I occasionally critique explanations that others have written. I recently came across “A Stick Figure Guide to the Advanced Encryption Standard (AES),” which explains AES in a way that I found both amusing and informative.

This guide to AES, written by Jeff Moser, is presented in a unique way: it is likely the only guide to AES ever written as a series of hand-drawn illustrations featuring stick figures. The guide consists of four sections, referred to as the four acts of the story of AES. The first two acts appear to be intended for those who only want to understand what AES is and why it matters, while the third and fourth acts cover how AES actually works.

The first act gives a short overview of what led up to the development of AES. While it is intended for those who are not familiar with AES, or with cryptography in general, those with some knowledge of cryptography will catch the inside jokes. The panel in which ROT13 is mentioned contains a message encrypted with ROT13, followed by a response saying that “double ROT13 is better”, a joke, since applying ROT13 twice simply returns the original text. Some will also find it appropriate that when DES is mentioned, it is pointed out that its key length was shortened, and that the Distributed.net attack on DES is depicted as one that came from many individuals. Triple DES, and the performance issues that led to the need for a new data encryption standard, are mentioned next, along with the algorithms that competed to become that new standard. The next act consists of a short overview of cryptography.
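
For anyone who wants to see that joke concretely, here is a tiny demonstration that a second round of ROT13 just undoes the first; the message text is my own example.

```python
# ROT13 is its own inverse, so "double ROT13" is no encryption at all.
import codecs

message = "Meet me at noon"                      # example plaintext
once = codecs.encode(message, "rot_13")          # "Zrrg zr ng abba"
twice = codecs.encode(once, "rot_13")            # back to the original
print(once, "|", twice, "|", twice == message)   # ... | Meet me at noon | True
```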

In the third act, the details of how AES works are presented. It describes the key expansion and the steps commonly referred to as SubBytes, ShiftRows, MixColumns, and AddRoundKey. There is not as much humorous material in this section, although some will like how ShiftRows is described in a way that is unlikely to appear in any textbook that covers AES. Some details are left out, such as what the S-boxes do and what exactly happens in the MixColumns step; those details, however, are covered in the next act.
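
As a small illustration of one of those steps, here is ShiftRows applied to a 4x4 AES state matrix: row r is rotated left by r positions. The state values below are arbitrary placeholders, and this is a generic textbook rendering rather than code from the guide.

```python
# ShiftRows, one of the four AES round steps: row r of the 4x4 state is
# rotated left by r byte positions. The state bytes here are placeholders.
def shift_rows(state):
    """state is a list of 4 rows, each a list of 4 byte values."""
    return [row[r:] + row[:r] for r, row in enumerate(state)]

state = [
    [0x00, 0x01, 0x02, 0x03],   # row 0: unchanged
    [0x10, 0x11, 0x12, 0x13],   # row 1: rotated left by 1
    [0x20, 0x21, 0x22, 0x23],   # row 2: rotated left by 2
    [0x30, 0x31, 0x32, 0x33],   # row 3: rotated left by 3
]

for row in shift_rows(state):
    print([hex(b) for b in row])
# Row 1 becomes 0x11 0x12 0x13 0x10, row 2 becomes 0x22 0x23 0x20 0x21, ...
```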

The next and final act covers the mathematics involved in AES in depth. A short review of polynomial equations is given, but shortly after that, some readers may suddenly find the material quite foreign. When I read the section on polynomials in finite fields, I needed to refresh my memory of concepts I learned in a university course on rings and fields years ago. The reason this material is covered arrives later in the act than some readers might like, but how it applies to the mathematics behind the S-boxes and MixColumns is covered well. Some details are not explained in depth, such as why certain polynomials are used in certain calculations. Nevertheless, the material is made understandable to those who are willing to take the time to “grok” it, as the author says.
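
For readers who want a concrete handle on that finite-field arithmetic, here is the standard shift-and-XOR multiplication in GF(2^8) with the AES reduction polynomial x^8 + x^4 + x^3 + x + 1, the operation on which MixColumns is built. This is a generic textbook routine, not code from the guide.

```python
# Multiplication in GF(2^8) with the AES reduction polynomial
# x^8 + x^4 + x^3 + x + 1 (0x11B). MixColumns multiplies state bytes by
# small constants (2, 3, ...) using exactly this arithmetic.
def gmul(a: int, b: int) -> int:
    product = 0
    for _ in range(8):
        if b & 1:
            product ^= a          # "add" (XOR) a when the low bit of b is set
        carry = a & 0x80
        a = (a << 1) & 0xFF       # multiply a by x
        if carry:
            a ^= 0x1B             # reduce modulo the AES polynomial
        b >>= 1
    return product

# The worked example from the AES specification: 0x57 * 0x83 = 0xC1.
print(hex(gmul(0x57, 0x83)))      # 0xc1
```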

Some might be left wondering why certain steps are taken in the AES algorithm. The guide does not explain how AES improves on DES, or why it might be more resistant to attacks; this absence is turned into a joke suggesting that readers would not want to read any further. I was reminded of Fermat’s famous note that he had a proof of his last theorem but that the margin was too narrow to contain it. However, more information on AES is available to those who want to look for it, and this is by far the most amusing explanation of AES that many will ever come across.

While this guide to AES may be considered entertaining, can it also be considered the best guide to AES overall? Some might think that it is, some might not. I personally found that it helped to see the material presented in this form. Had I wanted to learn the details of how AES works before finding this guide, I would have consulted Wikipedia; but while the Wikipedia article on AES and related articles there are informative, I might have been more likely to have my eyes glaze over while reading them. As with any explanation of AES, one might need to re-read sections of it to understand it, but this guide makes going over the material more enjoyable. And while there were previously some inaccuracies in its calculations, one of which I pointed out, it gives an accurate explanation of how AES works. Links at the end of the guide are provided for those who would prefer more formal explanations. AES is complicated, and I consider making explanations of it more entertaining a good idea. For many, it may take time and comic relief to fully understand anything described the way AES is described at the end of this guide.



A Review of the Fourth Chapter of the Second Edition of “Hacking: The Art of Exploitation”

Those who have read my reviews of parts of the second edition of “Hacking: The Art of Exploitation” by Jon Erickson may not be surprised to see that I am continuing this series of reviews. At the end of my review of the third chapter, I mentioned that I looked forward to reading and reviewing the fourth chapter, which covers networking. Networking is an interesting topic, and an important one, as Erickson notes at the beginning of the chapter: it has given computers many more capabilities than they would otherwise have had, but with those capabilities have come more vulnerabilities. In this chapter of the book, the basics of networking are explained in detail, leading into explanations of vulnerabilities and how they can be exploited.

The chapter appropriately begins with an introduction to the layers of the OSI model. Although I have seen better introductions to the OSI model, this section serves its purpose as a basic introduction to networking. The basics of programming with sockets are then covered, and covered well, leading to a description of how a very basic web server can be written. After this is a segue back into the OSI model, with more detailed descriptions of the protocol layers, described using interesting and appropriate analogies: the data link layer is compared to interoffice mail, the physical layer to the carts used to deliver that mail, and the network layer above them to a worldwide postal system. It was also interesting to see the details of how TCP/IP connections are started, and why they are started using the “three-way handshake” method.
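
This is not the book’s code, which is written in C, but as a rough idea of what such a minimal socket-based web server looks like, here is a Python sketch that accepts a connection, reads the request line, and sends back a fixed page.

```python
# A minimal socket-level web server, in the spirit of the simple server the
# chapter builds up (the book uses C; this is a Python sketch of the idea).
import socket

HOST, PORT = "127.0.0.1", 8080
body = b"<html><body>Hello from a tiny socket server</body></html>"
response = (
    b"HTTP/1.0 200 OK\r\n"
    b"Content-Type: text/html\r\n"
    b"Content-Length: " + str(len(body)).encode() + b"\r\n\r\n" + body
)

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((HOST, PORT))
    server.listen(5)
    print(f"Listening on http://{HOST}:{PORT}/ (Ctrl-C to stop)")
    while True:
        conn, addr = server.accept()
        with conn:
            request = conn.recv(1024)                # e.g. b"GET / HTTP/1.1..."
            print(addr, request.split(b"\r\n")[0])   # log the request line
            conn.sendall(response)
```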

Next, sniffing of network traffic is explained. The libpcap library is described in detail, and code examples show how a sniffer can be written to display network traffic at three different levels of detail. The examples of what gets sniffed may make readers want to conduct experiments of their own, perhaps to see more of the data that gets transmitted, such as the SYN and ACK values in TCP/IP connections. It is later mentioned that sniffing cannot be done easily on switched networks, and then it is described how the design of ARP easily allows ARP replies to be spoofed and ARP caches to be poisoned, and how that can be used to sniff traffic on switched networks. As they read through it, some readers might consider how to defend against this flaw inherent in ARP. Some might also be entertained by the author’s cleverness: the MAC address of the attacker in the ARP spoofing example is, appropriately enough, 00:00:00:FA:CA:DE. The libnet library is then described, as are Nemesis and arpspoof, tools that use it. The author appropriately encourages readers to view the source code of software that uses these libraries in order to learn more about them. Many code examples are given, and the author understandably assumes that readers will work through them; he often says that the code examples should make sense to readers.
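
The book’s sniffer examples use libpcap in C; as a rough stand-in for readers who want to experiment, here is a small raw-socket sniffer in Python that prints the Ethernet header of each captured frame. It works only on Linux, requires root, and is far less capable than libpcap.

```python
# A bare-bones packet sniffer: print the Ethernet header of every frame seen
# on the wire. The book's examples use libpcap in C; this Python sketch uses
# a Linux AF_PACKET raw socket instead, and must be run as root on Linux.
import socket
import struct

ETH_P_ALL = 0x0003   # capture every protocol

def mac(addr_bytes):
    return ":".join(f"{b:02x}" for b in addr_bytes)

sniffer = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))

for _ in range(10):                          # capture ten frames, then stop
    frame, _info = sniffer.recvfrom(65535)
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    print(f"{mac(src)} -> {mac(dst)}  ethertype 0x{ethertype:04x}  "
          f"{len(frame)} bytes")
```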

Denial-of-service attacks are covered next. The author explains how the implementation of protocols can be exploited, particularly in the description of SYN floods. It is then explained that such attacks are unlikely to succeed now, as operating systems have been updated to prevent them, but also why these historical examples still matter. The author mentions that while oversized ICMP packets will not crash computers anymore, some Bluetooth implementations are vulnerable to oversized ping packets. As the author says, it is often the case that “the same mistakes made in the past are repeated by early implementations of new products.”

The next section of the chapter underscores the importance of being able to sniff network traffic, demonstrating that sniffing is what makes it possible to hijack TCP/IP connections. Before explaining how this hijacking is done, the author appropriately notes that it can be done even when a one-time password is used to connect to a host. Next, port scanning and its different methods are covered, and the author then explains how discovery of open ports can be hindered by creating the illusion that all ports are open, which is done by responding to any packets sent during a scan. As with previous sections of the chapter, example source code illustrates how this can be implemented.
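
The book’s scanning discussion revolves around raw packets and stealthier techniques; as a much simpler illustration of the basic idea, here is a plain TCP connect scan in Python that reports which of a few ports accept connections. The target and port list are placeholders, and one should only scan hosts one is permitted to test.

```python
# The simplest form of port scanning: attempt a full TCP connection to each
# port and see which ones accept. The book discusses stealthier raw-packet
# techniques; this is only the basic idea. Scan only hosts you own or have
# permission to test. Target and ports below are placeholders.
import socket

target = "127.0.0.1"
ports = [22, 25, 80, 443, 8080]

for port in ports:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 on success instead of raising an exception.
        status = "open" if s.connect_ex((target, port)) == 0 else "closed/filtered"
        print(f"port {port}: {status}")
```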

What I consider the best part of the chapter comes near the end, where concepts from the previous chapter are combined with the ones explained here. The reader is first given a second chance to find a buffer overflow vulnerability in a code example given earlier in the networking chapter. It is then explained how this vulnerability can lead to shell-spawning code being run, and, to make the attack more useful, how port-binding shellcode can open a port to which the attacker can connect to gain root access on a remote system. Readers may well have been waiting for this combination as they read through the book; bringing these concepts together makes them more interesting than they were individually.

After reading this chapter, readers should understand that the implementation of network software can be, and historically has been, flawed and vulnerable to attack. The author encourages readers to write software that implements the concepts explained, such as the “ping of death” attack, and readers of this book are understandably expected to understand the material well enough to do so. Readers should also come away understanding how concepts can be combined; the author does not say this explicitly, but those who get the most out of the book will see how it can be done. The chapter ends with explanations of how shellcode can be made more advanced and how countermeasures against attacks can be implemented, which leaves readers wanting to keep reading: the next two chapters cover shellcode and countermeasures, and I plan to read and review them once I find the time.

This chapter of the book is quite informative. The explanations of concepts are clear, and are sometimes even entertaining. The clever sense of humour that the author has is quite evident in it. Those who try to fully understand the material and pay attention to detail will enjoy this chapter. Reviews of the next chapters of the book may appear here before long, as I continue to enjoy reading this book.

Using Gmail’s Filters and Labels to Organize Data

When users of Gmail are asked why they prefer it, they may give many different answers. Some might say they like the large amount of storage space it offers; some use it for its spam filtering; some like how easily they can search, filter, and label their e-mail. I set up a Gmail account for all of these reasons. I previously had not found much need to search, filter, or label the e-mail in my inbox, but I recently found that I had more use for filters and labels than I thought.

E-mail is often thought of as communication between two individuals, but it has long been a one-to-many medium as well. Electronic mailing lists have existed for a long time, so e-mail has been sent in bulk without being considered spam for just as long. Frequently updated websites also tend to offer updates via e-mail, and some simply send alerts to account holders by default. This type of e-mail gets sent out so often that the term “Bacn” was coined for it. Users can limit the amount of Bacn they receive by avoiding e-mail lists and subscribing to RSS feeds instead whenever possible, but e-mail updates are sometimes necessary.

E-mails from mailing lists and websites tend to come from the same sender and to have similar text in their subject lines, so users can easily set up filters in Gmail that apply appropriate labels to messages matching those characteristics. For example, I use a WordPress plugin named “WP-DB-Backup” to e-mail myself backup copies of the database that this blog uses. The e-mails sent by this plugin all list “WordPress” as the sender and have “Jake Kasprzak Online Database Backup” in their subject lines, so I was able to set up a filter that applies the label “blog backup” to every message matching those characteristics. Any time I want to display only the backup copies of this blog that I have received by e-mail, I can select that label. There are also times when I do not want to see my inbox cluttered with Bacn such as this; since I label the Bacn that I receive, I can filter it out by displaying only unlabeled messages. There are Greasemonkey user scripts for displaying only unlabeled messages in Gmail, and I use them: one titled “Gmail Unlabelled” works with the older version of Gmail, and the version of the script for the newer version of Gmail can be found here.
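
As a toy model of what such a filter does, and only an illustration of the idea rather than Gmail’s implementation or API, the rule described above amounts to something like this:

```python
# A toy model of a mail filter: if a message matches a rule's sender and
# subject text, the rule's label is applied. This only illustrates the idea
# behind Gmail filters; it is not Gmail's implementation or API.
from dataclasses import dataclass

@dataclass
class FilterRule:
    from_contains: str
    subject_contains: str
    label: str

    def matches(self, message: dict) -> bool:
        return (self.from_contains.lower() in message["from"].lower()
                and self.subject_contains.lower() in message["subject"].lower())

rules = [
    FilterRule("WordPress", "Jake Kasprzak Online Database Backup", "blog backup"),
]

# A hypothetical incoming message.
message = {
    "from": "WordPress <wordpress@example.com>",
    "subject": "Jake Kasprzak Online Database Backup 2009-12-01",
    "labels": [],
}

for rule in rules:
    if rule.matches(message):
        message["labels"].append(rule.label)

print(message["labels"])   # ['blog backup']
```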

There are other reasons to use labels, because e-mail inboxes are used for more than storing messages from other people. When e-mail services offer large amounts of storage, users store large files in them, and Gmail’s storage capacity is one of the reasons many of its users, myself included, use it to store files. In fact, GmailFS was created for storing files on Gmail accounts. However, Google’s terms of use prohibit accessing its services by automated means or by any means other than the interface Google provides, so GmailFS violates those terms. Nevertheless, Gmail can be useful for storing files through non-automated means. I sometimes store files there and search for them through Gmail’s interface, and I create filters that apply labels in order to, in effect, save the searches I perform most often.

In the time that I have spent blogging, I have often needed to send notes to myself. I sometimes e-mail links to myself, annotated with information about them, and because I am often at different locations when I find interesting information on the web, having that information stored in and accessible from my Gmail account is convenient. As you may have surmised, sending myself many e-mails leads to clutter in my inbox. I have found that when I e-mail such information to myself, I tend to use similar subject lines; I often include the word “notes” in them. Therefore, I can create a filter that looks for what I tend to put in those subject lines and applies an appropriate label, and I can also have filters search for text within the messages that I may want to bring up later.
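
Searches like these can also be run outside the web interface. For instance, here is a small sketch using Python’s imaplib to find messages whose subject contains “notes” over IMAP, assuming IMAP access is enabled on the account; the credentials below are placeholders.

```python
# Search a Gmail inbox over IMAP for messages whose subject contains "notes".
# Assumes IMAP access is enabled in the account's settings; the credentials
# below are placeholders.
import imaplib

USER = "someone@gmail.com"
PASSWORD = "app-specific-password"

mail = imaplib.IMAP4_SSL("imap.gmail.com")
mail.login(USER, PASSWORD)
mail.select("INBOX", readonly=True)

# Standard IMAP SEARCH by subject text.
status, data = mail.search(None, '(SUBJECT "notes")')
message_ids = data[0].split()
print(f"{status}: {len(message_ids)} matching messages")

mail.logout()
```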

What I have mentioned here may not be considered novel or innovative by those who already try to get the most out of Gmail, and I do not plan on being one of those who submit a video on how they organize their Gmail inbox. However, there may be many who are not getting the most out of Gmail and who may find that it is even more useful than they thought. This entry is intended for those who have yet to see how useful it can be. E-mail accounts are often used for storing personal information, and Gmail seems to have been designed with that in mind; it truly is a more capable e-mail service than many previously imagined an e-mail service could be.

Why I Set Up a Twitter Account

Previously, I had not found that I needed to post anything up-to-the-minute to this blog, nor anything that was 140 characters in length or less. For these reasons, I never set up an account on Twitter. I could have used a Twitter account to post information on when this blog is updated, but I did not think there would be anything I would “tweet” other than blog updates. That was until a few recent incidents.

A few weeks ago, I found that I needed to update the Greasemonkey user script that I wrote titled “Do Not Remember Me.” There was a very minor adjustment to make after a change to the Google Accounts login form, and I chose not to write a blog post about such a minor update. If I had had a Twitter account at the time, however, information on the update could have been posted there. Also, the Firefox extension that I wrote titled “Bookmark Current Tab Set” is now considered public and is no longer considered experimental; that change of status is another piece of information better suited to a Twitter post than to a post on this blog.

You may find me on Twitter here. I am not sure how often I will be posting to my Twitter account. However, I plan on posting to it, as I plan on continuing to post to this blog, and I plan on “tweeting” about updates to this blog.

Bookmark Current Tab Set 0.2.2 Released

I was quite busy shortly before version 3.5 of the Mozilla Firefox web browser was released, and I was unable to take the time needed to ensure that Bookmark Current Tab Set, the Firefox extension that I wrote, was compatible with it by the time it was released. I understand that some people are not using Firefox 3.5 because some extensions are still not compatible with it, so I have tried to get this extension updated as soon as possible. A new version of Bookmark Current Tab Set is now available, and it is compatible with Firefox 3.5.

The most significant difference in this version of the extension is that it is compatible with Firefox 3.5. However, folders to which this extension adds bookmarks can no longer be placed within other bookmark folders, and new folders can no longer be created from the dialog box from which tabs are bookmarked. That functionality depended heavily on code in Firefox 3.0 that has since been modified. The ability to put the folders this extension creates within other folders may have been a seldom used feature, and it needed to be removed in order to release the extension within a reasonable amount of time. I apologize to those who made use of these features; if there is demand for placing the extension’s folders within other folders, that feature may reappear in a future version.

The extension can be downloaded and installed from here or from here, and I would appreciate receiving feedback on it. It has also been modified so that it is less likely to conflict with other Firefox add-ons. For this reason, it is more likely to be considered non-experimental soon, which means more users may suggest changes to it, and the more suggestions there are, the more likely it is that they will be implemented.

URL Shortening Services and Their Security Implications

URL shortening services such as TinyURL.com have been in existence for years, and these services for creating shorter versions of long URLs have long been considered useful. Now that micro-blogging services such as Twitter are widely used, and because some of them enforce a limit of 140 characters per entry, URL shortening services are considered more useful than ever. Users of micro-blogging services often need to make the links they post as short as possible, so even URLs that are not very long tend to be shortened in micro-blog posts. But while URL shortening services have always been useful, there are security risks associated with them. When URLs are converted into ones that reveal nothing about the content of the pages to which they direct users, the probability of users clicking on malicious links increases.

Reflected XSS attacks tend to be carried out by directing users to malicious URLs. When I look through the URLs on the list of reflected XSS vulnerabilities on XSSed.com, I find that many of them are quite unwieldy and contain text that may appear suspicious. One way to make such URLs appear innocuous would be to pass them through a URL shortening service. Twitter has also been used to spread an XSS worm before, so URL shortening services could be used to launch XSS attacks via Twitter once again.

I am not the first to write about the security implications of these services. There is a very good blog entry on this topic that can be viewed here, and there are articles on how these services can be used in phishing attacks here and here. Each of these articles mentions how the longer versions of shortened URLs can be revealed. However, many users will not take the time to verify where a shortened URL leads before following it. Some will not visit a website such as LongURL to view the longer URL to which they would be redirected, and some will not use tools such as the Firefox extension for revealing URLs shortened by bit.ly. Many users simply prefer not to spend the time needed to avoid being sent somewhere on the web they would not want to go.
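
Checking where a shortened URL leads does not take much. Here is a small sketch that asks the shortening service for the target of a link and prints the Location header of the redirect without following it; the short URL below is a made-up example.

```python
# Reveal where a shortened URL points without actually visiting the target:
# request it from the shortening service and read the Location header of the
# redirect instead of following it. The short URL below is a made-up example.
import http.client
from urllib.parse import urlparse

short_url = "https://tinyurl.com/example-link"   # hypothetical shortened URL

parts = urlparse(short_url)
conn = (http.client.HTTPSConnection(parts.netloc)
        if parts.scheme == "https" else http.client.HTTPConnection(parts.netloc))

conn.request("HEAD", parts.path or "/")
response = conn.getresponse()

if 300 <= response.status < 400:
    print("redirects to:", response.getheader("Location"))
else:
    print("no redirect; status", response.status)

conn.close()
```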

It seems that when users are given a choice between security and convenience, they tend to choose convenience, so URL shortening services need to be both secure and convenient. Until there are better methods for determining whether shortened URLs are being used for malicious purposes, these services will continue to be used in attacks such as phishing and XSS attacks.