Tuesday, August 6, 2024

Release 3.4.0 Better Non-English Webpage Support

This release is especially for those whose native language is not English, or who frequently browse international websites. If you've been keeping up with the releases, you will have noticed that international support has featured in quite a few of them, with notes about character encoding being fixed for a handful of characters here and there. So you'd think that by now the main problems would all be fixed, right? How could this release help if so much work has already been done?

Well, it has to do with how web pages get served up. Handling international text is actually quite a difficult problem with many nuances. From a historical perspective, it wasn't solved well right away, and the early approach was often to use "code pages". How do these work?

Most languages - though not all, with East Asian languages being the notable exception - could generally be written with a relatively limited number of characters. In particular, the vast majority of languages could use 256 or fewer characters, which meant that since one byte can represent the numbers 0-255, you could use one byte to represent one character. Simple! This mapping is called a "code page".
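To make the one-byte-per-character idea concrete, here is a small JavaScript illustration (not taken from the addon's code) using the Windows-1251 Cyrillic code page, which browsers can already decode via TextDecoder:

    // In a single-byte code page, every byte value 0-255 maps to one character.
    // Windows-1251 maps 0xCF to "П", 0xF0 to "р", and so on.
    const bytes = new Uint8Array([0xCF, 0xF0, 0xE8, 0xE2, 0xE5, 0xF2]);
    const text = new TextDecoder("windows-1251").decode(bytes);
    console.log(text); // "Привет" - six bytes in, six characters out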

So what's the catch? Well, it didn't work as well for East Asian languages for one thing - but if you used two bytes you could make it work. It also meant that you couldn't use just one code page - you had to use one for each language. This creates new difficulties - for example, what if you want to show two different languages on the same page?

As a result, a new system was born - Unicode. The most popular encoding for Unicode is now UTF-8. It uses multiple bytes in a fancy scheme to represent an arbitrary number of "code points" - basically, numbers that represent characters. (There is certainly more nuance here, but this is roughly the idea.) This system has become ubiquitous, though it should be noted that web pages can take up a bit more room this way, since characters outside the basic Latin range need more than one byte each.
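You can see the growing byte counts directly with TextEncoder (which always produces UTF-8):

    const encoder = new TextEncoder(); // always UTF-8
    encoder.encode("A");  // Uint8Array [ 65 ]            - 1 byte
    encoder.encode("д");  // Uint8Array [ 208, 180 ]      - 2 bytes
    encoder.encode("語"); // Uint8Array [ 232, 170, 158 ] - 3 bytes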

Prior to release 3.4.0, only international pages using UTF-8 were really supported well, with a bit of extra support for the most common code page. But many web pages - particularly those with a focus on serving a single country - would use the older code page approach. Unfortunately this ran into a slightly tricky problem. To scan web pages containing embedded Base64 images, the web page itself is decoded and re-encoded. This means that the addon has to turn bytes into characters, check things, then turn the characters back into bytes. Decoding the bytes into characters is easy - just use TextDecoder. So you might be thinking that there would be a TextEncoder, too ... and you'd be right, but there's a catch. The built-in TextDecoder can decode basically anything, but the TextEncoder can only turn characters back into bytes using UTF-8. So, that means that out of the box, there is literally no way to support re-encoding text back into the served up code page. Ideally this would just exist in the main API.
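Here's a minimal illustration of that asymmetry (again, not the addon's actual code):

    // Decoding a legacy charset is easy - TextDecoder accepts many labels...
    const bytes = new Uint8Array([0xCF, 0xF0, 0xE8, 0xE2, 0xE5, 0xF2]);
    const text = new TextDecoder("windows-1251").decode(bytes); // "Привет"

    // ...but TextEncoder takes no charset label at all and always emits UTF-8,
    // so the round trip produces 12 UTF-8 bytes instead of the original 6.
    const reEncoded = new TextEncoder().encode(text);
    console.log(reEncoded.length); // 12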

But, to handle this I've now created a program that helps! Basically it creates a map containing all this code page information so that it can be used in place of TextEncoder. Is it perfect or complete? No. But there is a good chance your native language may now be supported on the websites that serve your country - so I hope this release works well for you, and happy browsing!
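For the technically curious, here is a rough sketch of the idea - this is a simplified illustration, not the addon's actual generated code, and it only handles single-byte code pages. A character-to-byte map can be built by probing TextDecoder for each byte value and then used in place of TextEncoder:

    // Build a character-to-byte map for a single-byte charset by decoding
    // every possible byte value once.
    function buildEncoderMap(charset) {
        const decoder = new TextDecoder(charset);
        const map = new Map();
        for (let byte = 0; byte < 256; byte++) {
            map.set(decoder.decode(new Uint8Array([byte])), byte);
        }
        return map;
    }

    // Use the map in place of TextEncoder to get back bytes in that charset.
    function encodeWithMap(text, map, fallbackByte = 0x3F /* "?" */) {
        const out = new Uint8Array(text.length);
        for (let i = 0; i < text.length; i++) {
            out[i] = map.has(text[i]) ? map.get(text[i]) : fallbackByte;
        }
        return out;
    }

    const cp1251 = buildEncoderMap("windows-1251");
    encodeWithMap("Привет", cp1251); // the original six Windows-1251 bytes again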

Special thanks to Ayaya and Dragodraki for continuing to provide feedback on international language support!


Tuesday, January 30, 2024

Mailbag (Later 2023-Jan 2024)

I've received a decent bit of user feedback over the last couple of months, so I've been sifting through it.

From Dragodraki

Dragodraki continues to be a star bug reporter! I've been able to fix many issues with international support due to their reports. Recent release 3.3.6 features another fix related to proper charset handling of Windows-1252.

From Ayaya - Cyrillic

Ayaya sent in feedback that Cyrillic was still not working fully correctly. See issue https://github.com/wingman-jr-addon/wingman_jr/issues/201

From SplinterCell

I got some constructive criticism from user SplinterCell mixed in with some other positive feedback (slightly edited for clarity):

  1. It is unclear to me what the numbers mean on the images
  2. Why is there no option to blur images - this way users can recognize false-positives more easily
  3. The UI is ugly and it is not as self-explanatory as you think -> Do the buttons work per site or per browsing session? What do the buttons do? It's wholly unclear.

Good questions all, so I'll take a bit of time on each.

First, the numbers relate to the score that the image filter model returns. Basically, the higher the number, the more likely it is to be an NSFW image. This isn't quite the same as saying that a higher number means a more NSFW image, but there is often a correlation. For the technically-minded: it takes the model's confidence score, maps that onto the ROC curve, and returns 1.0 - TPR at that point; not the most theoretically well-founded approach, but it works as a confidence indicator.
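Roughly speaking, and with completely made-up numbers, the idea looks something like this:

    // Sketch only: a precomputed ROC table of (score threshold, true positive rate).
    const ROC_POINTS = [
        { threshold: 0.1, tpr: 0.99 },
        { threshold: 0.5, tpr: 0.90 },
        { threshold: 0.9, tpr: 0.60 }
    ];

    // Map a raw model score onto the ROC curve and report 1.0 - TPR there.
    function displayScore(modelScore) {
        let best = ROC_POINTS[0];
        for (const point of ROC_POINTS) {
            if (point.threshold <= modelScore) best = point;
        }
        return 1.0 - best.tpr;
    }

    displayScore(0.95); // 0.4 - higher model score, higher displayed number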

Second, regarding blurring - there are a couple reasons. The first is that I try to think through the psychology of the addon a bit as well, and while blurring images allows false-positives to be picked out more easily, it also allows true-positives to be observed a bit better as well. Could it be an option for some? Yes, but probably not a default. The second reason is that blur effects are fairly computationally expensive, and I've tried to avoid incurring that cost so that pages with large numbers of images will still be speedy. In practice, it could be that this wouldn't be an issue. So - head on over to GitHub and file an issue if you feel strongly about it.

Third - yes, I agree the UI is ugly and a bit clunky. As the main developer on the project, I have to choose where to put my time and this just hasn't been a focus. Here are a few notes:

  • Image filtering currently works at a global level, so the buttons are not per domain or per web page. However, this is something I've pondered changing.  (See related: https://github.com/wingman-jr-addon/wingman_jr/issues/184 and https://github.com/wingman-jr-addon/wingman_jr/issues/168)
  • The basic way it works is that there are different zones based on how sensitive the model is configured to be: use Trusted for sites without much chance of bad content, Neutral on sites where there's a chance some questionable content will pop up but rarely, and Untrusted on sketchy sites. You can switch between the zones to kick it into manual mode; otherwise, when Automatic is selected, it'll try to flip back and forth between them automatically. (There's a rough sketch of the idea just below this list.)
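To give a flavor of what "sensitivity" means here - the zone names are real, but the numbers below are invented purely for illustration and are not the addon's actual settings - each zone can be thought of as a different blocking threshold for the model's score:

    // Hypothetical illustration only; the real thresholds are not these values.
    const ZONE_THRESHOLDS = {
        trusted:   0.9, // demand high confidence before blocking
        neutral:   0.7,
        untrusted: 0.5  // block more aggressively on sketchy sites
    };

    function shouldBlock(modelScore, zone) {
        return modelScore >= ZONE_THRESHOLDS[zone];
    }

    shouldBlock(0.8, "trusted");   // false
    shouldBlock(0.8, "untrusted"); // true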

Whitelisting

A couple users (Opensourcerer and happydev) wrote in regarding a whitelisting feature. This is being tracked but hasn't seen action in a while - see https://github.com/wingman-jr-addon/wingman_jr/issues/184 from above.

Mobile - Future?

I got an unexpected PR from one ArthurMelton that helps support use on mobile. The addon doesn't officially support that, but this pushes it closer! Thanks Arthur!

Conclusion and Next Steps

Thanks for the feedback! Lately I've been working on reviewing machine learning research over the past 2 or so years to check for possible advancements to improve the base model, particularly those related to the explosion of growth brought about by the cross-pollination of transformers to image classification. You can see some of the experiments here: https://github.com/wingman-jr-addon/model/issues/7