Friday, November 27, 2020

Release 1.3.0 - Partial Fix for Firefox 83 Slowness!

This is an emergency release in response to Firefox 83.

TL;DR - Firefox 83 broke things for some users and made browsing unbearably slow. Until the underlying issue is properly fixed, this release makes things faster again, though not quite as fast as the plugin was in Firefox 82.

The long version:
This plugin leverages another excellent library, Tensorflow.js, that runs the AI models created for this plugin. Tensorflow.js gives many different ways to run the AI models, called backends. They all give the same prediction, but some backends are much faster than others. The fast backend (WebGL) started failing in Firefox 83 for some users, which caused the default slow backend (CPU) to be used instead. For at least two users, this made the browsing experience so slow as to be unusable.
Fortunately, Tensorflow.js recently added support for another relatively fast backend (WASM), which in my testing does not fail to load in Firefox 83. I am adding support for that backend as a fallback. It is not quite as fast as WebGL, but it makes browsing usable once again.
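In case it helps to picture the fallback idea, here is a minimal sketch. The `pickBackend` function and its `trySetBackend` callback are hypothetical stand-ins for illustration, not the plugin's actual code; in Tensorflow.js itself, `tf.setBackend(name)` similarly resolves to a boolean indicating success.

```javascript
// Sketch of a backend-fallback strategy: try each backend in order of
// preference and settle on the first one that initializes successfully.
async function pickBackend(preferred, trySetBackend) {
  for (const name of preferred) {
    try {
      if (await trySetBackend(name)) {
        return name; // e.g. 'webgl' when WebGL works, 'wasm' otherwise
      }
    } catch (e) {
      // Initialization threw (as WebGL can in Firefox 83); try the next one.
    }
  }
  return null; // nothing worked; the caller decides what to do
}
```

With Tensorflow.js this might be invoked roughly as `pickBackend(['webgl', 'wasm', 'cpu'], (name) => tf.setBackend(name))`, so the plugin prefers WebGL but degrades gracefully rather than silently landing on the slow CPU backend.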

If you are experiencing issues, please disable the plugin and let me know over at GitHub - thanks!

For the technically curious, the Tensorflow.js team has a great writeup on the introduction of fast SIMD in the WASM backend over at their blog.

One final note - this version also fixes an issue that sometimes caused downloads to show up as gibberish rather than prompting for download.

Saturday, November 21, 2020

Firefox 83 Problem!

(Update: This problem has been partially worked around, see the later post on 1.3.0)

(Update 2: I have traced this back to the specific change in Firefox 83 that caused the issue and have posted an issue on Mozilla's bug tracker. Please be aware that given the nature of the commit that caused the issue, it's possible that fixing the issue experienced by Wingman Jr. may cause other things to break - so fixing this may not be as easy as it seems.)

(Update 3: I have found a technical workaround and have a full fix in progress - here's the post explaining the changes.)


Today my browser updated itself to Firefox 83, and it promptly made the addon unusable! The underlying issue is related to how the graphics card, Firefox 83, and possibly Tensorflow.js interact. Note that this may not affect all users, but if performance suddenly became unusable after Firefox updated itself, this is why.

Workaround: Revert to Firefox 82; otherwise, performance may be poor enough that you have to disable the plugin until this can be resolved.

Things I thought might help, but did not:

  • Updating graphics driver
  • Reverting to an old version of Tensorflow.js. This also means older versions of the addon are unlikely to work either.

Technical details can be found with the bug I am tracking for this.

Sorry for the inconvenience!

Wednesday, November 4, 2020

Release 1.2.1 - The Case of the Distorted Symbols

International users - this release is a bug fix release for you!
One of you kindly reported seeing special characters such as "ä", "ö", "ü", "ß" and "€" showing up incorrectly as the replacement character "�". This release should fix most instances of that, but please comment at https://github.com/wingman-jr-addon/wingman_jr/issues/70 if you are still seeing problems. Thanks!

For the technically curious (or perhaps those who are having trouble falling asleep at night and need something boring to read), here's what was happening. In order to scan images that have been encoded as Base64 data URIs, I fully scan all documents with Content-Type text/html and search and replace as necessary. However, the document arrives as bytes, so I need to handle the decoding from bytes into text myself. The examples out there all just use UTF-8 for the TextDecoder, but alas, real life is a bit more complex: this issue was caused by incorrectly decoding non-UTF-8 documents as UTF-8. So now I do rudimentary encoding detection based on the "charset" in Content-Type. An interesting follow-up is that when I turn the text back into bytes, I use TextEncoder, which at present only supports UTF-8, so I need to make sure the Content-Type gets set appropriately to match.
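The round trip described above can be sketched roughly as follows. This is a simplified illustration, not the plugin's actual code: `charsetFromContentType` and `rewriteHtmlBytes` are hypothetical helpers, and the charset sniffing here is deliberately naive compared to what browsers really do.

```javascript
// Pull "charset=..." out of a Content-Type header, falling back to UTF-8.
function charsetFromContentType(contentType) {
  const match = /charset=["']?([^;"']+)/i.exec(contentType || '');
  return match ? match[1].trim().toLowerCase() : 'utf-8';
}

// Decode bytes using the declared charset, apply a text rewrite, and
// re-encode. TextEncoder only emits UTF-8, so the outgoing Content-Type
// must declare charset=utf-8 regardless of the original encoding.
function rewriteHtmlBytes(bytes, contentType, rewrite) {
  const charset = charsetFromContentType(contentType);
  const text = new TextDecoder(charset).decode(bytes); // bytes -> string
  const edited = rewrite(text);
  return {
    bytes: new TextEncoder().encode(edited), // string -> UTF-8 bytes
    contentType: 'text/html; charset=utf-8'
  };
}
```

The key design point is the asymmetry: decoding must honor whatever charset the server declared, but encoding is always UTF-8, so the two sides only stay consistent if the Content-Type header is rewritten along with the body.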

Note that using only Content-Type for character encoding detection is considerably simpler than the mechanism browsers use, but it still covers the vast majority of use cases, even if it is not fully accurate. You can see how it fares against a selection of standardized tests from the W3C. Character encoding detection is exceedingly sophisticated - if I still haven't bored you with the details, I recommend checking out the spec for those facing truly persistent insomnia.