Tuesday, March 25, 2025

A Progress Update

Hello!

I've been chugging away on some things in the background, but the progress isn't in a release yet so I thought I'd talk a bit about some of the recent work.

Here are three areas I've been working on:

  • Transformation #1 - Switch to a site-based focus for scanning (like uBO)
  • Bug - Try to improve the hidden tab situation. I have one idea for this already in motion.
  • Research #1 - new AI models, maybe new infrastructure for training

Site-based Focus

I've been working over at https://github.com/wingman-jr-addon/wingman_jr/issues/211 on getting the site-oriented focus working. I've gotten a bit of the core foundational work in - not very exciting - and I've started to experiment with what kinds of new opportunities open up once scanning is site-based.

One key opportunity is that sites can now be filtered using per-site statistics, which allows for some new ideas. For example, one thing I'm playing around with is an adaptive mode for mixed-mode sites: think anything from Amazon to YouTube. These types of sites are historically quite difficult to get right - you can't just set one threshold and have it work particularly well. One aspect of this relates to human psychology as well: you may be generally browsing for good content when a relatively objectionable image pops up, and while it may not be that bad, it stands out relative to its peers and has a somewhat similar effect to a truly bad one.
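To make that a little more concrete, here is a purely hypothetical sketch of what an adaptive, per-site threshold could look like - the names and numbers here are mine for illustration only, not the actual design being worked out in issue 211:

    // Purely hypothetical sketch: tighten a site's threshold as its recent block rate climbs.
    interface SiteStats { scanned: number; blocked: number; }
    const statsBySite = new Map<string, SiteStats>();

    function recordResult(host: string, wasBlocked: boolean): void {
      const stats = statsBySite.get(host) ?? { scanned: 0, blocked: 0 };
      stats.scanned += 1;
      if (wasBlocked) {
        stats.blocked += 1;
      }
      statsBySite.set(host, stats);
    }

    function adaptiveThreshold(host: string): number {
      const stats = statsBySite.get(host);
      const blockRate = stats && stats.scanned > 0 ? stats.blocked / stats.scanned : 0;
      // A mostly-clean site stays lenient; a site that keeps tripping the filter gets stricter.
      if (blockRate < 0.02) return 0.9;
      if (blockRate < 0.1) return 0.7;
      return 0.5;
    }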

Note that the site-oriented arc of work is large overall, so don't expect a release with this feature soon. But the adventurous can see the experimentation going on if they like!

Hidden Tab Mitigation

I have another option for folks struggling with the hidden tab to try! Basically, you can pick a different backend option that doesn't use the hidden tab, going back to how the old addon version worked. It does a pre-check to ensure the original performance issue doesn't pop up, and falls back to the hidden tab if needed. Most users will see a decrease in speed, but it's far better than the tab regenerating. This work was merged in https://github.com/wingman-jr-addon/wingman_jr/issues/209.
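As a rough sketch of that flow - the function names and the 500 ms cutoff are illustrative, not the addon's actual code - the pre-check and fallback could look something like this:

    // Hypothetical sketch of the pre-check/fallback flow described above.
    async function runWarmupInference(): Promise<void> {
      // Stand-in for a real warm-up scan of a small test image on the non-hidden-tab backend.
      await new Promise((resolve) => setTimeout(resolve, 10));
    }

    async function chooseBackend(): Promise<'direct' | 'hidden-tab'> {
      try {
        const start = performance.now();
        await runWarmupInference();
        const elapsedMs = performance.now() - start;
        // Pre-check: if the old performance problem shows up, don't use this path.
        if (elapsedMs < 500) {
          return 'direct'; // slower than the hidden tab, but nothing to regenerate
        }
      } catch {
        // Initialization failed outright; fall through to the hidden tab.
      }
      return 'hidden-tab';
    }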

New AI Model

I've done a smaller amount of work here, with SQRXR 145 being a potential candidate. This is an iterative improvement rather than an entirely new model.

Friday, March 14, 2025

Model Candidate SQRXR 145

I've been playing around with a new model variant, SQRXR 145. I've generally found it to be a bit better, but I'm still trying to confirm how well it performs with respect to false positives in trusted mode. For neutral and untrusted modes I think it does a somewhat better job all around. The adventurous can clone the repo at https://github.com/wingman-jr-addon/wingman_jr/tree/sqrxr-145 and give it a try.

Sunday, February 2, 2025

What Next in 2025?

Ideas for 2025

Here are the likely areas of focus for 2025:

  • Transformation #1 - Switch to a site-based focus for scanning (like uBO)
  • Transformation #2 - More long-term, but features or new addon variant to help those struggling with pornography
  • Bug - Try to improve the hidden tab situation. I have one idea for this already in motion.
  • Feature - Custom image replacement
  • Feature - Make balanced video scanning solution for better out-of-the-box experience
  • Research #1 - new AI models, maybe new infrastructure for training
  • Research #2 - could WebGPU + ONNX replace Tensorflow.js?
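On that last research question, the gist would be running the classifier through ONNX Runtime Web with its WebGPU execution provider instead of Tensorflow.js. Here's a minimal sketch - assuming a hypothetical ONNX export of the model named sqrxr.onnx, with the input name and shape made up for illustration:

    // Depending on the onnxruntime-web version, the WebGPU build may need to be
    // imported from 'onnxruntime-web/webgpu' instead.
    import * as ort from 'onnxruntime-web';

    async function scoreImage(pixels: Float32Array): Promise<Float32Array> {
      const session = await ort.InferenceSession.create('sqrxr.onnx', {
        executionProviders: ['webgpu', 'wasm'], // prefer WebGPU, fall back to WASM
      });
      // 'input' and the 1x224x224x3 shape are assumptions, not the real model's signature.
      const feeds = { input: new ort.Tensor('float32', pixels, [1, 224, 224, 3]) };
      const results = await session.run(feeds);
      return results[session.outputNames[0]].data as Float32Array;
    }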

I do not expect these all to complete or even be successful, but I think most of my effort will go into these areas. Read below for a further breakdown!

Where to Focus?

This can be a difficult question to answer; fortunately, people have responded to both the feedback survey and the exit survey, which helps. So let's break it down a bit.

First, of the listed reasons on the exit survey, the top three reasons people leave are:
  1. The AI model isn't good enough, with responses split about 2:1 that it blocked too many safe images vs. letting bad ones through.
  2. The speed of image scanning is too slow.
  3. The hidden tab causes frustration. I didn't initially have an option for this but got enough write-ins that I added it and it has grown quickly, so there's a good chance this is actually reason #2.

Second, looking through feedback as well as the exit survey, some themes pop out:

  1. Many adults use this for themselves because they struggle with pornography, which is different from the original use case of primarily having it as a safeguard for kids. More on this in a bit.
  2. Lots of requests for custom image replacement for silent mode.
  3. Some folks don't like the UI, or find it confusing.
  4. Video scanning should be better.
  5. Some requests for whitelisting/blacklisting by site.

Third, let's look at developments from other angles.
  1. The main AI base library, Tensorflow.js, has stagnated heavily, and I no longer think we will see any performance improvements coming unless Google chooses to reinvest.
  2. AI has continued to advance rapidly, but it's not yet clear how applicable those advancements are to the model for this addon. However, there are significant steps that could be taken to potentially push things forward.
  3. NVIDIA DIGITS as an AI training server may be interesting.
 

Transformative Ideas

While most of the ideas above are smaller changes, there are two that are transformative.
 
The first is that based on several pieces of feedback tying together, the addon would benefit greatly by changing to a site-oriented approach to blocking. If you've used uBlock Origin, it would be a bit closer to how that works. This would clear up UI confusion and would naturally tie in with whitelisting/blacklisting sites, etc.
 
The second is that some changes would benefit those struggling with pornography. While I definitely believe that you can't solve human problems with technology alone, the best technologies help humans help other humans. I'm still mulling this over given the limitations of what an addon can do, but I think the key areas to look at would be 1) ways to establish accountability with others in their life and 2) reminders or features to help break out of the loop when they get stuck. This is more of a long-term transformation, and may even lead to a different variant of the addon depending on the features that make sense. If a true new addon variant were created, that would ideally occur after many of the other changes here are implemented so that it would have a better foundation to build on.

Summary

There's certainly enough here to keep busy! If I can get a few of these knocked out, I think the addon will improve in some tangible ways.
 
As always, you can watch the issues over at https://github.com/wingman-jr-addon/wingman_jr/issues
I'm also playing around a bit with the GitHub Projects feature; we'll see if it goes anywhere but I'm checking it out at https://github.com/users/wingman-jr-addon/projects/1
Please continue to give feedback via the different surveys; I don't always respond but I do read them and try to consider how to incorporate them. Thanks!

Wednesday, January 29, 2025

Up Next 2025?

Just wanted to let folks know that I've been reviewing all the feedback and am working out what features to plan for 2025! So stay tuned - I'll plan to comment on the upcoming roadmap in, say, the next month or so.

Tuesday, August 6, 2024

Release 3.4.0 Better Non-English Webpage Support

This release is especially for those whose native language is not English, or who frequently browse international websites. If you've been keeping up with the releases, you will have noticed that international support has featured in quite a few of them, with notes about character encoding and the like being fixed for a few characters. So you'd think that by now some of the main problems would all be fixed, right? How could this release help if so much work has already been done?

Well, it has to do with how web pages get served up. Handling international text is actually quite a difficult problem with many nuances. From a historical perspective, it wasn't solved well right away, and the early approach was often to use "code pages". How do these work?

Most languages - but not all, with East Asian languages being the notable exception - could generally get by with a relatively limited number of characters. In particular, the vast majority of languages could use 256 or fewer characters, which meant that since one byte can represent the numbers 0-255, you could use one byte to represent one character. Simple! This mapping is called a "code page".
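For the technically-minded, you can see this directly in the browser console: the same byte value means completely different characters depending on which code page you decode it with.

    // The same byte, decoded with two different code pages:
    const byte = Uint8Array.of(0xe9);
    console.log(new TextDecoder('windows-1252').decode(byte)); // "é" (Western European)
    console.log(new TextDecoder('iso-8859-5').decode(byte));   // "щ" (Cyrillic)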

So what's the catch? Well, for one thing it didn't work as well for East Asian languages - though if you used two bytes you could make it work. But it also meant that you couldn't use just one code page - you had to use a different one for each language. This creates new difficulties - for example, what if you want to show two different languages on the same page?

As a result, a new system was born - Unicode. The most popular encoding for Unicode is now UTF-8. This can use multiple bytes in a fancy system to represent an arbitrary number of "code points" - basically numbers to represent characters. (There is certainly more nuance here but this is roughly the idea.) This system has become ubiquitous, but it should be noted that web pages can take up a bit more room to support multiple languages.
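Again for the technically-minded, you can watch the variable-length encoding happen right in the console:

    // UTF-8 spends more bytes on characters further from plain ASCII.
    const utf8 = new TextEncoder();
    console.log(utf8.encode('e'));  // [ 101 ]           - 1 byte (ASCII)
    console.log(utf8.encode('é'));  // [ 195, 169 ]      - 2 bytes
    console.log(utf8.encode('漢')); // [ 230, 188, 162 ] - 3 bytes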

Prior to release 3.4.0, only international pages using UTF-8 were really supported well, with a bit of extra support for the most common code page. But many web pages - particularly those with a focus on serving a single country - would use the older code page approach. Unfortunately this ran into a slightly tricky problem. To scan web pages containing embedded Base64 images, the web page itself is decoded and re-encoded. This means that the addon has to turn bytes into characters, check things, then turn the characters back into bytes. Decoding the bytes into characters is easy - just use TextDecoder. So you might be thinking that there would be a TextEncoder, too ... and you'd be right, but there's a catch. The built-in TextDecoder can decode basically anything, but the TextEncoder can only turn characters back into bytes using UTF-8. So, that means that out of the box, there is literally no way to support re-encoding text back into the served up code page. Ideally this would just exist in the main API.
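To see the asymmetry concretely, here it is in miniature:

    // Decoding from a legacy code page is easy...
    const served = Uint8Array.of(0xe9); // "é" as served up in windows-1252
    const text = new TextDecoder('windows-1252').decode(served);

    // ...but TextEncoder takes no encoding argument - it always produces UTF-8 -
    // so the round trip no longer matches the bytes the server actually sent.
    console.log(new TextEncoder().encode(text)); // [ 195, 169 ], not the original [ 233 ]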

But, to handle this I've now created a program that helps! Basically it creates a map containing all this code page information so that it can be used in place of TextEncoder. Is it perfect or complete? No. But there is a good chance your home language may now be supported for national websites - so, I hope this release works well for you and happy browsing!
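For the curious, the core trick behind that program is small enough to sketch - this is just the idea, not the actual code, which handles more details:

    // Build a character -> byte lookup for a single-byte code page by decoding
    // every possible byte once, then encode by reversing that lookup.
    function buildCodePageEncoder(label: string): (text: string) => Uint8Array {
      const decoder = new TextDecoder(label);
      const charToByte = new Map<string, number>();
      for (let b = 0; b < 256; b++) {
        charToByte.set(decoder.decode(Uint8Array.of(b)), b);
      }
      return (text: string) => {
        const out = new Uint8Array(text.length);
        for (let i = 0; i < text.length; i++) {
          out[i] = charToByte.get(text[i]) ?? 0x3f; // '?' for characters the code page can't represent
        }
        return out;
      };
    }

    const encodeWin1252 = buildCodePageEncoder('windows-1252');
    console.log(encodeWin1252('é')); // [ 233 ] - back to the original code page byte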

Special thanks to Ayaya and Dragodraki for continuing to provide feedback on international language support!


Tuesday, January 30, 2024

Mailbag (Later 2023-Jan 2024)

I was sifting through user feedback from the last couple of months, and there has been a decent bit of it.

From Dragodraki

Dragodraki continues to be a star bug reporter! I've been able to fix many issues with international support due to their reports. Recent release 3.3.6 features another fix related to proper charset handling of Windows-1252.

From Ayaya - Cyrillic

Ayaya sent in feedback that Cyrillic was still not working fully correctly. See issue https://github.com/wingman-jr-addon/wingman_jr/issues/201

From SplinterCell

I got some constructive criticism from user SplinterCell mixed in with some other positive feedback (slightly edited for clarity):

  1.  It is unclear to me what the numbers mean on the images
  2. Why is there no option to blur images - this way users can recognize false-positives more easily
  3. The UI is ugly and it is not as self-explanatory as you think -> Do the buttons work per site, per browsing session, what do the buttons do, etc.? It's wholly unclear.

Good questions all, so I'll take a bit of time on each.

First, the numbers relate to the score that the image filter model returns. Basically, the higher the number, the more likely it is to be an NSFW image. This isn't quite the same as saying that a higher number means a more NSFW image, but there is often a correlation. For the technically-minded: it takes the model's confidence score, maps it onto the ROC curve, and returns 1.0 - TPR at that point; not the most well-founded approach, but it works as a rough confidence indicator.
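Here's a small sketch of that mapping - illustrative only, with made-up ROC data rather than the table the addon actually ships:

    // rocThresholds is sorted from highest to lowest; rocTpr[i] is the true-positive
    // rate when everything scoring >= rocThresholds[i] is treated as NSFW.
    function displayScore(confidence: number, rocThresholds: number[], rocTpr: number[]): number {
      let tpr = 1.0;
      for (let i = 0; i < rocThresholds.length; i++) {
        if (confidence >= rocThresholds[i]) {
          tpr = rocTpr[i];
          break;
        }
      }
      return 1.0 - tpr; // higher result = the model is more confident the image is NSFW
    }

    // Example with made-up numbers:
    console.log(displayScore(0.97, [0.95, 0.8, 0.5], [0.40, 0.75, 0.95])); // 0.6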

Second, regarding blurring - there are a couple of reasons. The first is that I try to think through the psychology of the addon a bit as well, and while blurring images allows false positives to be picked out more easily, it also allows true positives to be observed a bit better. Could it be an option for some? Yes, but probably not a default. The second reason is that blur effects are fairly computationally expensive, and I've tried to avoid incurring that cost so that pages with large numbers of images will still be speedy. In practice, it could be that this wouldn't be an issue. So - head on over to GitHub and enter an issue if you feel strongly about it.

Third - yes, I agree the UI is ugly and a bit clunky. As the main developer on the project, I have to choose where to put my time and this just hasn't been a focus. Here are a few notes:

  • Image filtering currently works at a global level, so the buttons are not per domain or per web page. However, this is something I've pondered changing.  (See related: https://github.com/wingman-jr-addon/wingman_jr/issues/184 and https://github.com/wingman-jr-addon/wingman_jr/issues/168)
  • The basic way it works is that there are different zones based on how sensitive the model is configured to be: use Trusted for sites without much chance of bad content, Neutral on sites where there's a chance some questionable content will pop up (but rarely), and Untrusted on sketchy sites. You can switch between the zones to kick it into manual mode; otherwise, when Automatic is selected, it will try to flip back and forth between zones on its own.
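To make the zone idea a little more concrete, here's a purely illustrative sketch - the real thresholds and automatic-switching logic in the addon are different:

    // Illustrative numbers only.
    type Zone = 'trusted' | 'neutral' | 'untrusted';

    const zoneThreshold: Record<Zone, number> = {
      trusted: 0.9,   // block only when the model is very confident
      neutral: 0.7,
      untrusted: 0.5, // block aggressively on sketchy sites
    };

    function shouldBlock(score: number, zone: Zone): boolean {
      return score >= zoneThreshold[zone];
    }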

Whitelisting

A couple users (Opensourcerer and happydev) wrote in regarding a whitelisting feature. This is being tracked but hasn't seen action in a while - see https://github.com/wingman-jr-addon/wingman_jr/issues/184 from above.

Mobile - Future?

I got an unexpected PR from one ArthurMelton that helps support use on mobile. The addon doesn't officially support that, but this pushes it closer! Thanks Arthur!

Conclusion and Next Steps

Thanks for the feedback! Lately I've been working on reviewing machine learning research from the past two or so years to check for possible advancements to improve the base model, particularly those related to the explosion of growth brought about by the cross-pollination of transformers into image classification. You can see some of the experiments here: https://github.com/wingman-jr-addon/model/issues/7

Thursday, May 18, 2023

Release 3.3.4 - Revenge of the Character Encoding

Previously, in The Case of the Distorted Symbols, I talked a bit about some improvements being made to better handle character encoding detection - this is the followup. If you're a non-technical reader, just know that some sites should hopefully soon work better at displaying accented characters and the like as they ought to appear rather than as garbled symbols. If, however, you're a technical reader, read on for some interesting notes about handling character encoding on the web.

I've had at least one dedicated international user helping report bugs. On that note, I'd like to thank Drago for the helpful feedback in reviews. Recently, Drago reported that a specific site wasn't working, which gave me an opportunity to debug further and nail down the specific problems.

In the first round, I took a naive approach to detecting character encoding and was able to pass most of the test suites found here: https://www.w3.org/2006/11/mwbp-tests/index.xhtml 

However, I had some interesting problems:

  • My original approach would read in bytes and output them through the TextEncoder as utf-8. This is problematic because the input bytes could actually have been in iso-8859-1.
  • True character set detection is quite difficult: the headers are not enough to definitively determine the character set, so you have to actually sniff the request contents.
     

By default, the new implementation starts in iso-8859-1 and then "upgrades" to utf-8 if any of a variety of conditions are encountered:

  1. Headers: Content-Type has a charset
  2. Content sniffing: starts with BOM
  3. Content sniffing: XML encoding indicates utf-8
  4. Content sniffing: meta http-equiv Content-Type indicates utf-8

Content sniffing currently uses the first 512 bytes, and the specific upgrade checks have quite narrow search patterns - e.g. a different attribute ordering in the http-equiv meta tag could cause non-detection.
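In code, the upgrade logic is roughly along these lines - a simplified sketch of the checks described above rather than the addon's actual implementation, whose header handling and search patterns differ:

    // Default to iso-8859-1, upgrade to utf-8 only when one of the checks hits.
    function detectCharset(contentTypeHeader: string | undefined, firstBytes: Uint8Array): string {
      const head = new TextDecoder('iso-8859-1')
        .decode(firstBytes.subarray(0, Math.min(firstBytes.length, 512)))
        .toLowerCase();

      // 1. Headers: Content-Type declares a utf-8 charset (simplified from "has a charset").
      if (contentTypeHeader && /charset\s*=\s*["']?utf-8/i.test(contentTypeHeader)) {
        return 'utf-8';
      }
      // 2. Content sniffing: UTF-8 byte order mark (EF BB BF).
      if (firstBytes[0] === 0xef && firstBytes[1] === 0xbb && firstBytes[2] === 0xbf) {
        return 'utf-8';
      }
      // 3. Content sniffing: XML declaration such as <?xml version="1.0" encoding="utf-8"?>
      if (head.includes('<?xml') && head.includes('encoding="utf-8"')) {
        return 'utf-8';
      }
      // 4. Content sniffing: <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
      if (head.includes('http-equiv') && head.includes('charset=utf-8')) {
        return 'utf-8';
      }
      return 'iso-8859-1';
    }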

All of the tests in the W3C test suite now pass!

With the improved approach, I'm hopeful that >90% of international pages will be handled correctly now, but we'll see what folks like you run into - let me know if you encounter any bugs via the feedback link in the addon or via https://github.com/wingman-jr-addon/wingman_jr!