
How Google Had Me Mistaken for a Bot

As trusted as Google is, and as infallible as it may sometimes seem, its services do make mistakes. A rather amusing error appeared in Google News recently: when Russian troops entered the former Soviet republic of Georgia, a story about it on Google News included a map placing these Russian troops in the American state of Georgia, just west of Savannah and southeast of Atlanta. The error may since have been corrected, but not before it was noticed by at least a few individuals, as you can see if you view this story about the error. At times such as these, it is apparent that Google’s services need a better ability to determine the context of the data they analyze. And recently, when I was using Google’s search service, I found that it made the mistake of indicating that I was not a human.

In last fortnight’s entry here, I wrote about performing and preventing SQL injection attacks. After writing that, I considered following it up by covering these attacks in greater depth, writing more about everything that leads up to a SQL injection attack rather than simply the injection itself. So I decided to gain some practical experience with what may be involved in performing such attacks.

A step often performed when hacking is looking for potential targets that might have some sort of vulnerability. This activity tends to have the word “war” prefixed to it, as phrases such as “wardialing”, “wardriving” and even “warflying” describe variations of it. And Google can be used to look for websites that have login forms, through which SQL injection attacks can be attempted. To list such websites, one can enter a search query such as “inurl:login.php”, which works because pages with login forms often have “login.php” or “login.asp” in their URLs. When I used this method, many results were returned. However, many of the sites in those results may not have had SQL injection vulnerabilities, so I then thought of a way of making it more likely that the results would contain pages that do.

My idea for increasing the probability of finding sites with such vulnerabilities was based on this assumption: websites with these vulnerabilities are unlikely to appear in the first pages of Google’s search results, since sites that rank highly are more likely to have implemented the measures needed to prevent SQL injection attacks. Therefore, after displaying the first few pages of results, I jumped to the tenth page of results, and then to the nineteenth. At that point I received an error saying that my search query looked “similar to automated requests from a computer virus or spyware application.” Google thought that I was a bot. And so if you click here to perform a Google search similar to the one that I performed, you can convince Google that you are a bot as well.

I recall hearing about this happening to others, but I had never experienced anything like it myself. I was not trying to pass or fail a reverse Turing test, and Google gave me a result that was unexpected and, of course, inaccurate. Some may find it interesting that I am writing something that may seem critical of Google near Google’s tenth anniversary. However, I did use Google in trying to find websites with a certain characteristic, and that is one of many tasks for which I would always use Google.

How Not to Prevent SQL Injection Attacks

SQL injection vulnerabilities have existed and been exploited for years. However, as is often the case with a class of security vulnerabilities, they continue to exist long after methods of preventing them become well-known. One might think that certain commonly-used SQL injection attack methods would stop succeeding once they became well-known, yet some websites remain vulnerable to attacks that have been known about for years. In the following video, an individual demonstrates this while performing a security audit on the Northwestern Health Sciences University’s website.

As one can see in the video, an initial attempt at injecting SQL statements into a login form on the page revealed that characters commonly used in SQL injection attacks were being filtered using JavaScript. It was then demonstrated that such JavaScript-based filters can be removed by simply editing the page’s source code and saving the page to the local hard drive. Other modifications were made to the source as well: the form’s relative submission URL was converted to an absolute one, which was necessary because the modified page was to be viewed offline; the maximum length of the password field was increased so that the SQL injection string could be entered; and the password field was changed to a regular text field so that the submitted text could be seen.

Measures may have been taken to prevent SQL injection attacks through this page, but as the video demonstrates, measures taken in client-side code can be circumvented very easily. In fact, some of the tasks the auditor performed could have been automated, further decreasing the effort needed for the attack to succeed. Making password field text visible and increasing the field’s maximum length can be done automatically via Greasemonkey user scripts, and Firefox extensions such as the Web Developer extension can also reveal what is typed in password fields. JavaScript could simply be disabled to circumvent JavaScript-based filters, and if the site requires JavaScript, the page could be modified manually to remove only the filtering code. In any case, the important lesson of this video is that measures for preventing SQL injection should not live in client-side code.
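To illustrate why this kind of filtering offers no real protection, here is a minimal JavaScript sketch. The filter function is hypothetical (the video does not show the site's actual code), but it represents the general pattern: stripping suspicious characters in the browser before submission. The attacker simply never runs that code.

```javascript
// Hypothetical client-side "filter" of the kind shown in the video:
// it strips characters commonly used in SQL injection before the form submits.
function naiveClientFilter(input) {
  return input.replace(/['";=-]/g, "");
}

// What a well-behaved browser would send after filtering:
const payload = "' OR '1'='1";
console.log(naiveClientFilter(payload)); // quotes and '=' stripped out

// But an attacker never has to run that function. By saving and editing
// the page locally, or by constructing the request directly, the server
// receives the raw payload untouched:
function directRequestBody(username, password) {
  // builds the POST body with no client-side filtering at all
  return "user=" + encodeURIComponent(username) +
         "&pass=" + encodeURIComponent(password);
}
console.log(directRequestBody("admin", payload));
```

The only defense that matters is the one on the server, where parameterized queries or proper escaping happen regardless of what the client did.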

The video also demonstrates the possible implications of SQL injection vulnerabilities. After the attack succeeded, it became clear why security measures on that site were so important: much information on the individual whose account was accessed became visible, as one can see near the latter part of the video, including that individual’s social security number. When SSNs can be accessed through a site, one should certainly keep that fact in mind when trying to prevent unauthorized access to the data in the databases the site uses.

This video has been viewed over 500,000 times as of the time I publish this entry, although I am not sure how many times it was viewed before the page was corrected. It demonstrates how not to prevent SQL injections, but it should not be considered a tutorial on how to perform them: a very common SQL injection string is used once the easily-circumvented measures against submitting it are bypassed, and the methods shown will not work on many sites. However, one has to wonder how many websites use such a flawed method of preventing SQL injection attacks. Even if 99% of websites use better measures, that still leaves many websites vulnerable to this type of attack. And how many of those websites have information as sensitive as SSNs stored in the databases that they use?

This video may not teach viewers much about how to prevent or perform SQL injection attacks; its lesson seems to be the importance of preventing them. When sensitive data can be accessed through a website by a user with the proper credentials, that website should certainly not fall for some of the oldest tricks in the book.

My Vacation is Over

It has been more than two weeks since I last posted anything here, one of the longest gaps between posts to this blog. I was on a two-week vacation, and I took a vacation from posting here as well. During that time, I thought about things I do not usually take time to think about, including why I have set this unwritten rule of having blog posts no more than a fortnight apart. It may be best for me to post here only when I have something to write about. And as I like to keep this blog updated, I need to keep myself busy in order to have material to write about; however, I took this vacation precisely to avoid being busy. So about that, all I have to write is this explanation for the lack of material posted here, for the few who read what I write.

However, during this vacation, I did take time to make one change to this blog. As some might have noticed, a CAPTCHA was installed here for a short period of time. After discovering how little the CAPTCHA was doing to prevent spam from being posted here, I installed Akismet, and was finally once again free from manually marking comments as spam. I was also able to disable the CAPTCHA, as it became redundant after Akismet was installed. I apologize to those who would have preferred not to solve CAPTCHAs to post comments, although installing one seemed like a good idea at the time. I have also been working on my own WordPress plugin for bloggers who want information about who is trying to post spam on their blogs. I am not sure how much more work on WordPress plugins I will do in the future, but I can tell you that this plugin will be released before long, now that I have returned from this vacation.

An Update I Would Not Be Expected to Perform on a Script That I May Not Keep Updated

As my visits to Digg become less frequent, the probability of me writing Greasemonkey scripts that work with Digg decreases as well. Plans I had for such scripts have dropped lower among my priorities, and their completion and release will occur later than I originally expected. Likewise, maintenance of the scripts I wrote for Digg is less likely to occur now that I do not use them as often. However, discovering that one of my scripts does not work properly immediately makes work on that script a higher priority than anything else I plan on doing. And not long ago, I discovered that the most recent redesign of Digg required that one of my Digg scripts be updated: while its functionality was intact, its appearance was rendered unacceptably unprofessional. So work on modifying this script began within minutes of me noticing the need for these updates.

I first considered simply adding a notice on the page on which the script can be downloaded, saying that updates to the script needed to be made. However, posting an actual update was preferable to posting a notice promising one soon. I tried to determine whether I could correct the script so that what it adds to the page would appear as it previously did, and fortunately, only minor updates were needed, so a new version of the script was made available shortly after I determined that updates were necessary. Still, I am not sure exactly how long the script’s design failed to match how it appeared in this screen shot. If this were a script I used as often as I previously did, these updates would have been made in a more timely manner.

It may have been this need to update the script that made me consider making further improvements to its design. I have previously mentioned that I am more interested in what happens after data is submitted through a user interface than in the design of the interface itself. However, I understand the importance of good user interface design, and sometimes I find that I need to focus on a more diverse group of topics. Wanting the design to look as professional as possible, I decided to remove the rounded corners on the header above the search form the script adds, and after successfully removing them, the added search form elements now look more appropriate. So I released another new version of the script with further changes to the appearance of page elements on Digg. A screen shot of how Digg appears when using the new version can be viewed by clicking the thumbnail below.

[Screen shot thumbnail]

I am not sure how much time I will spend maintaining this script in the future. However, I will be sure to take time to determine whether it works properly, and I will respond to any feedback from end users about it.

Link Verifier: A WordPress Plugin for Checking Links in Blog Entries

If you are a blogger, you likely understand the importance of ensuring that what you post to your blog is as good as it can be before you publish it. You look for better ways of expressing the ideas you wish to convey. You take out anything superfluous that would only add clutter to your entry. You check for spelling errors, as it is well-known that spell-checking software cannot find all of them. You want what you post to be free of any errors that would make it appear unprofessional. And one kind of error you want to avoid is an invalid link: you do not want to send readers to a page with a 404 error when the link is supposed to lead to information you consider useful to them. You could click each link in your post to determine whether it is valid. However, with everything else you need to do, would you not like to have these links checked automatically?

I have had some interest in writing a plugin for WordPress, although I was not sure what kind of plugin to write. I had not experienced much difficulty with WordPress in my time using it, and whenever there was functionality I wanted it to have, I found that a plugin providing it was already available. However, after a certain amount of manual link checking, I wanted a plugin that automatically checks the links in blog entries before those entries are posted. Reading the section on link-verification bots in Michael Schrenk’s book “Webbots, Spiders, and Screen Scrapers” may also have inspired me to write this plugin. And after failing to find an existing plugin for automatically verifying links, I had every reason I needed to write this one.

I decided to use the cURL library to check the HTTP code returned by each link in the post-editing window, as cURL is well suited to retrieving this kind of information; for this reason, the cURL module needs to be installed for the plugin to work. Only the HTTP codes need to be retrieved, as these codes are all that is needed to determine whether a link is “broken.” Any time a page returns an error code, the link is considered broken, and the plugin sets up a filter that modifies the contents of the editor window to include the error code in the link. Perhaps there are better ways to alert the person posting the link that it is broken than adding text to the link; this is one issue I will consider for future versions of the plugin, along with other ideas I have.
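The plugin itself is PHP built on cURL, but its core logic can be sketched in a few lines of JavaScript: pull the URLs out of a post, treat any status code of 400 or above as broken, and append the code to the offending link's text. The function names and the exact annotation format here are illustrative assumptions, not the plugin's actual code.

```javascript
// Extract the href of every <a> tag in a block of post HTML.
function extractLinks(html) {
  const links = [];
  const re = /<a\s[^>]*href="([^"]+)"/gi;
  let m;
  while ((m = re.exec(html)) !== null) links.push(m[1]);
  return links;
}

// In the real plugin, cURL supplies the status code for each URL;
// here it is simply a number. 4xx and 5xx codes mean "broken."
function isBroken(statusCode) {
  return statusCode >= 400;
}

// Annotate a broken link the way the plugin does: append the error code
// to the link text so the author notices it in the editor window.
// (In real code the URL should be regex-escaped before being embedded.)
function annotateLink(html, url, statusCode) {
  if (!isBroken(statusCode)) return html;
  const re = new RegExp('(<a\\s[^>]*href="' + url + '"[^>]*>)([^<]*)(</a>)');
  return html.replace(re, "$1$2 [" + statusCode + "]$3");
}
```

A draft containing `<a href="http://example.com/">site</a>` whose target returns a 404 would thus show `site [404]` in the editor before publishing.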

This plugin may be useful for finding broken links, but it cannot determine which links point to incorrect pages. Sometimes an incorrectly entered URL leads to a page that is not the intended one, yet does not return an error code such as 404. For that reason, I have considered adding functionality for checking valid links for information such as their titles, and returning that information. In addition, the plugin works only with absolute links, not relative ones. I personally always use absolute links in my posts, although in the future I might add support for checking relative links, depending on how much demand there is for it.
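The absolute-versus-relative distinction mentioned above comes down to a simple test on the URL itself. A sketch of the kind of check involved (the plugin's actual test may differ):

```javascript
// A link is treated as absolute only if it begins with an http or https
// scheme; relative paths and bare filenames are skipped by the checker.
function isAbsoluteUrl(url) {
  return /^https?:\/\//i.test(url);
}
```

Checking relative links would additionally require knowing the blog's base URL so they could be resolved into absolute ones first.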

The plugin can be downloaded if you click here. Feedback on what I write is always welcome, and I would be interested in hearing opinions on this plugin. It is the first WordPress plugin I have ever written, and I decided to release it now to meet a deadline I set for myself. It has worked correctly in the tests I have run, and I would like to know of any bugs it has so that I can correct them. The plugin may not be considered very useful now, although future versions of it may be more so. I know that I would prefer to have one less concern when I write posts to this blog, and hopefully, I will not need to be concerned about this plugin not working properly.

Documentation of and Experimentation with the New Version of Greasemonkey

In the previous entry here, the release of the newest version of Firefox was mentioned, along with the release of a new version of Greasemonkey that coincided with it. I mentioned that what may have been most important about this new version of Greasemonkey was its compatibility with the new version of Firefox, and the most discussed topic regarding this new version seems to be the new Greasemonkey icon. However, there is more to this version than compatibility with Firefox 3 and a new icon, and this post is about actual improvements made to Greasemonkey, rather than cosmetic changes or updates made simply for compatibility with the new version of Firefox.

The new version of Greasemonkey has a number of new features that are actual improvements upon the previous version. There is now the @resource directive, which can be used with the new GM_getResourceURL and GM_getResourceText functions in user scripts. The @resource directive stores data from a specified URL; GM_getResourceURL makes that data available as a base64-encoded URL, while GM_getResourceText loads it as a string of text. In addition, the @require directive includes JavaScript code from other source files, a feature that those who write user scripts for Greasemonkey have wanted for a long time.
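To make this concrete, here is a hedged sketch of a user script metadata block using these directives, together with the calls that consume them. All of the URLs and resource names (example.com, "logo", "tmpl") are hypothetical, and this reflects my reading of the new directives rather than tested code.

```javascript
// ==UserScript==
// @name        Resource Demo
// @namespace   http://example.com/scripts
// @include     http://example.com/*
// @require     http://example.com/lib/helper.js
// @resource    logo  http://example.com/images/logo.png
// @resource    tmpl  http://example.com/templates/box.html
// ==/UserScript==

// Code from the @require'd file is available here as if it were pasted in.

// GM_getResourceURL returns a URL for the stored resource (base64-encoded
// data), so an image can be displayed without another network request:
var img = document.createElement("img");
img.src = GM_getResourceURL("logo");
document.body.appendChild(img);

// GM_getResourceText returns the stored resource as a string:
var boxHtml = GM_getResourceText("tmpl");
```

Because the resources are fetched when the script is installed rather than each time it runs, scripts that bundle images or templates this way should also start faster on every page load.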

Interestingly, not much has been said about these new features by those who would use them; in fact, there seems to be far more discussion of the new Greasemonkey icon. And in addition to the lack of discussion in online forums, there is a lack of information about these features on the GreaseSpot wiki. Although basic examples there show how the new directives and functions can be used, there is little detail about when and why one would use them. Working code examples could show script writers why these features matter and how they can make Greasemonkey scripts better, and explanations of how they make Greasemonkey more useful and powerful could attract more people to use it and write scripts for it. The GreaseSpot wiki does mention that these features allow inclusion of data from other sources, but some might consider it important for more details to be included. And little is said about another new feature in this version: the finalURL property now included in GM_xmlhttpRequest. The advantages of this property would be good to know about.

Some might wonder why I have not modified the GreaseSpot wiki myself to improve the documentation of these features. Well, I personally would need to work with these features before documenting them, and a number of factors have conspired to keep me from performing those experiments with features that could make Greasemonkey even better than it previously was. What is written on the GreaseSpot wiki may be written by those who write Greasemonkey scripts, but it could serve script writers better. Many of those who use Greasemonkey may prefer writing scripts to writing documentation, and I would need time to write scripts that use these features before I could document them as well as they can be documented. We who write these scripts have yet to write documentation that is as good as it can and should be.

Firefox 3: Not the Only Software Release for Which Firefox Users Have Been Waiting

Many have been awaiting the release of Mozilla Firefox version 3.0, as this version of the Firefox web browser has many improvements over previous versions. I was not quite able to wait until the expected release date of June 17th to start using Firefox 3, so I took the time to back up my Firefox profile data, download Release Candidate 3 of Firefox, which can be found here, and install it. However, I also wanted to be able to use the Firefox extensions that had become so much a part of my web browsing experience, and not all of the extensions I use had been upgraded to work with Firefox 3. So perhaps it was not only a lack of time that kept me from installing earlier release candidates. I consider Firefox’s extensions to be what truly sets it apart from other browsers; their being upgraded to be compatible with Firefox 3 mattered just as much to me as Firefox 3’s release itself.

Within the last few days before the June 17th scheduled release of Firefox 3, a new version of Greasemonkey was released. Although this new version has a number of useful new features, its compatibility with Firefox 3 may be more important than any of them, and that compatibility may also be the reason it is being released now: it is what many Greasemonkey users have requested, and it is not likely a coincidence that this version appeared only a few days before Firefox 3’s scheduled release date. There are still a few bugs that need to be corrected, and perhaps if Firefox 3 were scheduled for a later date, time might have been taken to correct them before this new version of Greasemonkey was released. Still, it is good to have this extension ready in time for the release of Firefox 3, as many of those who download and install Firefox 3 will not want to be without Greasemonkey.

Perhaps it was this news of a Greasemonkey version compatible with Firefox 3 that made me want to download and install the release candidate. I installed it before extensions I find very useful, such as Firebug and Tab Mix Plus, were available for this new version of Firefox. Although I missed those extensions, the overall web browsing experience with this release candidate was a good one. A list of new features in this version can be found here, although I would recommend upgrading for the performance improvements, which are noted here, alone. Using Firefox without being concerned about it running slowly and consuming much memory is something I have wanted for a long time. A new world record for most software downloads within twenty-four hours could be set with this release, as you can see here, and as someone who has enjoyed using this new version of Firefox, I can tell you it would be an appropriate record to set. This is a version many wanted as soon as it was made available.

Copying Data About Browser Tab Content, One Tab at a Time

I have recently considered writing another entry here listing Greasemonkey user scripts that are useful for a certain purpose. When I search for scripts to write about, I make sure I can provide links to the pages on which these scripts can be found, so when I find them, I copy and paste their URLs into a text file where I keep this information. However, the URLs assigned to each script’s page do not contain information about the script located there, so I need to manually enter information about each URL that I copy and paste. Just as the permalinks of blog posts do not always contain the titles of entries, but instead the order in which they were posted, one cannot tell which script is located at a URL by looking at the URL. If there were a way to quickly copy the title of a page along with its URL, it would certainly be useful when I remind myself why I copied these URLs in the first place.

It was about ten blog posts ago that I mentioned the Firefox extension titled Copy All Urls here. This extension is quite useful for copying the URL in each browser tab to the clipboard along with information associated with it, such as the page title, which is good to have in the situation I just described. However, there are times when I would prefer to copy information about only the currently selected tab. I thought that if the extension could be modified to offer an option to copy only the current tab’s information to the clipboard, it would be even more useful. So I checked whether I could make such a modification myself, and I was quite pleased to find that with only a few changes to the extension’s code, I was able to add this feature.
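The change itself is conceptually small, and can be sketched like this in JavaScript. The `$title`/`$url` template placeholders and the field names are assumptions for illustration, not the extension's actual internals: the idea is simply to apply the per-tab formatting step to one selected tab instead of looping over all of them.

```javascript
// Copy All Urls formats each copied tab using a user-defined template.
// This sketch applies such a template to a single tab's data.
function formatTab(tab, template) {
  return template.replace(/\$title/g, tab.title).replace(/\$url/g, tab.url);
}

// Instead of mapping over every open tab and joining the results,
// format only the currently selected tab before copying to the clipboard.
function copyCurrentTab(tabs, selectedIndex, template) {
  return formatTab(tabs[selectedIndex], template);
}
```

With a template such as `$title | $url`, copying the current tab would yield one line containing the page title followed by its address, which is exactly the information missing from the bare URLs I paste into my notes.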

Although I have modified the code so that this feature is available to me, I am not sure whether a Firefox extension with this feature will be available soon. I have contacted Jürgen Plasser, the author of the extension, about including this functionality. He recently mentioned here that a new version of it will be available soon, and according to this page about the extension, he is open to feedback on features that could be added. Admittedly, adding this feature would make the extension’s name something of a misnomer; still, I consider it useful, and he might be willing to release it under a new name. In any case, I will try to ensure that the ability to copy data about only the selected tab becomes available to all who might want it. No such extension exists now, as I found when I looked for extensions that copy tab data to the clipboard. While looking for extensions similar to Copy All Urls, I did find some that I considered interesting, so I copied and pasted their URLs into an e-mail to myself, to take time to install them later. However, since the URLs of those extension pages do not contain the extensions’ names, I was once again reminded why an extension with this feature should be made available.

Searching Less and Finding More on Digg

Quite a while ago, I mentioned that I was writing a Greasemonkey user script that would allow users to perform advanced searches on Digg without having to search for anything on Digg first. I had this idea not long after writing the script that automatically sorts Digg search results so that the stories with the most “diggs” are listed first by default. And not long after beginning work on this new script, I got sidetracked by other work that I considered a higher priority at the time. The few of you who have been reading what I post here may know some of the reasons this script, which would make my previous Digg script obsolete, was not released shortly after I began work on it. However, I have finally found time to complete it, and now that I consider it ready to be released, it is the topic of this blog post.

Many times, when I want to find something on Digg that I previously found interesting, it is a story that received many diggs; this is why I wrote the script that sorts search results so that the most-dugg stories are listed first. However, there are also many times I would like to do more to narrow down search results and find what I am looking for more quickly and efficiently. For example, if I remember the title of a story that appeared on Digg, I would like to be able to search only the titles of stories. With this script’s advanced search options, I can now perform such searches from any page on Digg from which searches can be done. As a matter of fact, I had been able to do this for some time, as the script had been working properly for a while; I only needed to make a few adjustments to improve its design and functionality before releasing it.
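At its core, a script like this just turns the options in the form it adds into a search URL. The sketch below shows the idea; the parameter names ("area", "sort") are illustrative assumptions, not necessarily the ones Digg's search actually uses.

```javascript
// Build an advanced-search URL from the query and the options chosen
// in the form the script adds to the page.
function buildSearchUrl(query, options) {
  var params = ["s=" + encodeURIComponent(query)];
  if (options.titleOnly) params.push("area=title");   // search titles only
  if (options.sortByDiggs) params.push("sort=digg");  // most-dugg first
  return "http://digg.com/search?" + params.join("&");
}
```

Navigating the browser to the resulting URL then performs, in one step, a search that would otherwise require submitting a query first and refining it on the results page.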

However, there are a few other adjustments and improvements that could be made to this script. Well-designed Greasemonkey scripts fit in with the designs of the pages to which they are added, and I have tried to make what this script adds blend in with the design of Digg. However, I am not a CSS guru, much less a web design guru, and this might be apparent to some after seeing what the script does to pages on Digg. For example, the corners of the added section are not rounded, and the place where the script adds its elements may not be considered the best place to add them. I was primarily interested in ensuring that the script’s functionality was correct, and I found the design satisfactory. Below is a link to a screen shot of Digg with the script running on it, so you can decide whether the design is satisfactory.

[Screen shot thumbnail]

In addition, improvements could be made to the script’s functionality. For example, I included no option to include “buried” stories in search results, as I personally do not often want to find stories that are buried. Also, the sections for images, videos, or podcasts can only be searched when one of those sections is already selected, and some might want to search one of these sections of Digg without having to go there first. Perhaps a future version of this script can include more options for fine-tuning Digg searches.

If you have Greasemonkey installed, then you can install this script by clicking here. Improvements to it could certainly be made, and I am always interested in hearing suggestions on how to improve what I write. For now, I am relieved that this script that I said I was working on did not turn out to be considered vapourware.

Cookie Revealer: One Reason Greasemonkey Should Allow Its Scripts Access to Cookies

Log off. That cookie s— makes me nervous.
–Tony Soprano

The malicious activity that can result from Greasemonkey scripts being able to access cookies has been a topic of discussion among Greasemonkey aficionados. Cookie-related Greasemonkey issues, and possible solutions to them, were mentioned in the recent trilogy of entries on this blog about past and present security concerns with Greasemonkey. One solution mentioned in the third part of that series would completely eliminate the possibility of Greasemonkey scripts performing malicious cookie-related activity: a future version of Greasemonkey could simply deny its user scripts access to cookies. This solution may seem drastic, although it is one that has been given some consideration.

This simple solution's drawbacks are about as obvious as its benefits. If it were implemented, those drawbacks would be very similar to the ones that appeared a few years ago when Greasemonkey 0.3.5 was released. The issues that Greasemonkey had with its API functions at the time necessitated that those functions be disabled in version 0.3.5, and that change caused many scripts to stop working. Likewise, denying Greasemonkey scripts access to cookies would break hundreds of scripts, as one can see by performing this Google search for scripts that use the document.cookie property. It should also be noted that the security issues addressed when Greasemonkey's API functions were disabled were considered much more serious than the issue of cookie-stealing scripts. In this case, the benefit of precluding a security risk does not outweigh the disadvantage of breaking that many scripts.

Many Greasemonkey scripts depend upon access to cookie data via the document.cookie property, and some of these scripts could not exist at all without access to cookies. The scripts that would cease to be useful are precisely the ones that do what many users would like to have done with cookies, and those scripts are the topic of this entry. However, as the title of this entry suggests, much of the focus will be on one script in particular whose function is to work with cookies.
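To illustrate the kind of dependence these scripts have on document.cookie, here is a minimal sketch (the function name is my own, not taken from any of the scripts discussed) of the sort of helper a cookie-reading user script might contain: it parses the raw document.cookie string into an object.

```javascript
// Parse a document.cookie-style string ("a=1; b=2") into an object.
// A user script would call this as parseCookies(document.cookie).
function parseCookies(cookieStr) {
  var jar = {};
  var pairs = cookieStr ? cookieStr.split("; ") : [];
  for (var i = 0; i < pairs.length; i++) {
    var eq = pairs[i].indexOf("=");
    if (eq === -1) continue; // skip malformed entries
    jar[decodeURIComponent(pairs[i].slice(0, eq))] =
      decodeURIComponent(pairs[i].slice(eq + 1));
  }
  return jar;
}
```

If Greasemonkey denied scripts access to cookies, every script built around a helper like this one would stop working at once.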

While Greasemonkey scripts may be able to perform malicious cookie-related activity, there are a few scripts that do what many users actually want done with cookies. I considered writing a post here listing some of these scripts, and I discovered a few that require access to cookies in order to do what they are intended to do. One of these scripts is titled "Google Search Cookie Cleaner." Much has been said about Google's cookie policy, and about how Google having its cookies expire after two years matters little to some of those concerned about privacy. The Google Search Cookie Cleaner script removes much of the data in Google's cookies that could possibly be used to track users. Another script, titled "Google Anonymizer," takes this prevention of tracking a few steps further. In addition to deleting more of the data in cookies from Google, it can disable JavaScript functions that appear to be used by Google to track users. And whereas the Google Search Cookie Cleaner script takes away the user's ability to store preferences for Google searches, the Google Anonymizer script preserves these preferences by saving them in Firefox's preferences.
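As an illustration of what this sort of cookie cleaning amounts to, here is a hedged sketch (the function name is my own, and the colon-delimited "ID=...:TM=...:LM=..." layout shown is only my reading of Google's old PREF cookie format, not code from either script): it strips one named field from such a cookie value.

```javascript
// Remove one field (e.g. the unique "ID" field) from a colon-delimited
// cookie value such as "ID=abc123:TM=111:LM=222".
function stripField(value, field) {
  var parts = value.split(":");
  var kept = [];
  for (var i = 0; i < parts.length; i++) {
    // keep every field except the one being stripped
    if (parts[i].indexOf(field + "=") !== 0) kept.push(parts[i]);
  }
  return kept.join(":");
}

// stripField("ID=abc123:TM=111:LM=222", "ID") → "TM=111:LM=222"
```

A cleaning script would then write the stripped value back through document.cookie, which is exactly why such scripts cannot exist without cookie access.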

There was one more script I found that was entirely dependent upon its ability to access cookies, as working with cookies was its primary function. This script, known as "Cookie Monster," did not work at all when I tried to use it. That was quite unfortunate, as a quick way to reveal the cookie data set by the page being viewed would be considered useful by some. A few modifications were needed to make it work, so I made them, and I added a few personal touches of my own. This modified version of the script, which I refer to as "Cookie Revealer," is the primary topic of this blog post, as I explain how I went beyond simply making the script on which it was based work properly.

There are a number of ways to view the cookie data set by the page being viewed when Firefox is used. However, neither Firefox's built-in option for displaying cookies nor the various Firefox extensions made for working with cookies offer an easy way to get a quick overview of what is happening on a page cookie-wise. One might want information on the cookies set by a page to be accessible through the page itself. This script does exactly that: it adds elements to web pages, and to frames within pages, through which the cookie data set by the page and its frames can be accessed. After I made adjustments to the script so that it worked as intended, I found it useful; in fact, I used it to determine what data the two scripts previously mentioned removed from cookies set by Google. However, there were a few more adjustments that I thought should be made to it.

Previously, the script displayed cookie data only while the mouse cursor was left over the elements it added. Some may not prefer this, as the cursor would often be in the way of the cookie data being displayed. In addition, highlighting the data to copy and paste it would be difficult, since the data would disappear whenever the cursor moved even slightly outside the area displaying it. Therefore, I modified the script so that whether or not cookie data is displayed is toggled by double-clicking the elements the script adds to pages. I also modified the CSS properties of the added elements so that scroll bars are added when necessary, allowing all cookie data to be viewed when there is much of it to display. In addition, cookies can change while the pages that set them are displayed, so I modified the script so that such changes appear when the cookie data is redisplayed. Finally, I removed the functionality for completely removing the elements that display cookie data, as one can simply disable the script and refresh the page when those elements are not wanted. Below are links to screen shots of a page on which this script is running: the first shows the element the script adds in the lower left corner of the page, and the second shows cookie data displayed after that element is double-clicked.

[Screenshot: the element added in the lower left corner of the page] [Screenshot: cookie data displayed after double-clicking the element]
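The toggle behaviour described above can be sketched roughly as follows. This is only a minimal sketch in the spirit of the script, not its actual code; the names and styling are my own, and taking the document as a parameter is just a choice that keeps the helper testable outside a browser.

```javascript
// Add a small fixed box that shows or hides the page's cookie data
// when it is double-clicked.
function addCookieBox(doc) {
  var box = doc.createElement("div");
  box.textContent = "cookies";
  // overflow:auto adds scroll bars when there is much data to show
  box.style.cssText =
    "position:fixed;bottom:0;left:0;max-height:10em;max-width:30em;" +
    "overflow:auto;background:#eee;border:1px solid #999;padding:2px;";
  var shown = false;
  box.addEventListener("dblclick", function () {
    shown = !shown;
    // re-read doc.cookie on every toggle so later changes show up
    box.textContent = shown ? (doc.cookie || "(no cookies)") : "cookies";
  }, false);
  doc.body.appendChild(box);
  return box;
}
```

In a Greasemonkey script this would simply be called as addCookieBox(document); rereading doc.cookie inside the event handler is what makes changes to cookies appear when the data is redisplayed.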

If you have Greasemonkey installed, then you can click here to install this script. I already have some ideas in mind for improving it; I was primarily interested in simply making it work correctly, and some improvements could make cookie data more easily visible in certain cases. This is only version 0.1.0 of this script, and new versions of it will almost certainly be released in the future. Implementing requested improvements will be a priority for me, so suggestions are welcome. The interest there may be in this script, and in improvements to it, is why it could be considered a reason Greasemonkey should not disallow its scripts access to cookies.

However, some of my ideas for improving this script are beyond Greasemonkey's scope. Therefore, what could evolve from this script is another useful Firefox extension for handling cookies. I may need to take some time to see whether any existing Firefox extensions do anything similar to what I am thinking of writing, as I prefer not to waste my time writing redundant extensions. In any case, I will write software that could make some people less nervous about cookies.