
Online ID / age verification: the death of online search, and non-browser web access?

A (public) draft of a blog post on an unintended, perhaps unnoticed, effect of the current push for online ID / age verification.

Feedback welcome.

decoded.legal/blog/2021/07/onl

@neil
> There might be an approach based on allow-listing known IPs of spiders, but it's hardly optimal.

It will also block any chance for new independent small search engines to pop up, effectively locking the mainstream search engine space down for the oligopolists.

@neil good piece though! Thank you for writing it and sharing it.

@rysiek Can I add that point, please? Credit to you and your mastodon account?

@neil haha, sure thing, that's why I commented! no credit necessary, I feel it's a reasonably trivial point, but of course it's up to you!

@neil I know that at least some of the tools (curl, I think) can use cookies exported from a browser

@penguin42 with a shared cookie jar, perhaps, but that requires long term cookies, and browser access in the first place to get the cookie!

@neil Yep, it doesn't help things like spidering; it does help your own backups potentially.
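As a sketch of that shared-cookie-jar idea: curl's cookie files use the Netscape format, which Python's standard library can also read, so a browser-exported cookies.txt can drive non-browser requests. Everything below (the file path, domain, and cookie values) is fabricated for illustration.

```python
# Sketch: reusing a browser/curl-style Netscape cookie file outside a browser.
# The cookie file contents here are made up for the example.
import http.cookiejar
import urllib.request

COOKIE_FILE = "cookies.txt"  # hypothetical exported cookie file

# Create a minimal Netscape-format cookie file so the sketch is self-contained.
with open(COOKIE_FILE, "w") as f:
    f.write("# Netscape HTTP Cookie File\n")
    f.write("example.com\tFALSE\t/\tFALSE\t2147483647\tsession\tabc123\n")

jar = http.cookiejar.MozillaCookieJar(COOKIE_FILE)
jar.load(ignore_discard=True, ignore_expires=True)

# Requests made through this opener will send the stored cookies.
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))

print([c.name for c in jar])  # → ['session']
```

The catch, as noted above, is that the cookie has to come from a browser session in the first place, and has to live long enough to be worth reusing.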

@neil Fascinating, thank you!

We're truly trying to over-engineer a non-issue here. As you say, they're working on the principle that offline = safe / online = dangerous - which is so not the case.

The problem is that many parents (not all) have little to no interest in what their children do online. There are already numerous solutions for restricting access via parental controls in almost everything these days. However, they don't wish to learn how to make use of them.

@neil It's far easier to just pass the blame on to another entity and let them handle it.

Another part of the problem is that the people coming up with these "ideas" are also completely oblivious to how technology works (most of the time). They shouldn't be allowed / encouraged to propose these things without first understanding the underlying platforms.

@neil In addition to noninteractive tools:

There are console web clients which would be largely unable to interact with such systems, including lynx, links, elinks2, w3m, and others.
Numerous Web proxy tools, including, ironically, "parental controls" tools such as DansGuardian, but also potentially Squid and others, could well be affected.

@dredmorbius Thanks! I mentioned lynx, but haven’t thought about web proxies. That’s an interesting point.

@dredmorbius Indeed. I don’t think it will be AV which kills off proxies, but ever-improving encryption.

@neil OTOH, much of the Web is already proxied through commercial services (Cloudflare, Limelight, Amazon's stuff, Akamai are still breathing, etc.). Those have to deal with TLS termination / rehosting as well.

So the problem's not entirely w/o solution (or major risks/compromises are being taken).

I see "user agent" as getting split/diversified in the not-too-distant future for numerous reasons. These will effectively be proxies.

Tangential to your concerns there though, I think.

@neil mitmproxy is one possible solution to the TLS/SSL case:

mitmproxy.org/posts/releases/m

I'm working on a longer response which may or may not be relevant...

@dredmorbius I look forward to reading it.

I’m particularly interested in proxied traffic (TLS or otherwise) in school environments - although it may be that these are the environments which would suffer the least from being unable to access certain resources.

@neil I'm not sure I've got much to say to that, other than the relatively obvious "it's technologically infeasible to restrict access based on reliable assertions of age".

@neil The more I think about this, the more the UK's focus seems misguided.

Not that there isn't a problem with access to harmful content. But that seeking individual validation in all instances is the wrong approach.

A better gatekeeper might be at the ISP level, and with access granted or denied based on volitional content classification (for which there are extant schemes) and account-level permit or deny policies.

Where multiple policies might be appropriate for a given customer (a home with adults and children, a business, a school), specific filtered or unfiltered access gateways, say as VPNs or virtual accounts, might be provided. The subscriber is responsible for maintaining the integrity of those gateways.

Content providers would self-certify content as, e.g., "all audiences" (no restrictions), or filterable on one or more specific criteria (e.g., sexual content).

There is no need for identifiers on access, in this case. Internet accounts with unfiltered access would be able to use technical tools (e.g., curl, wget, lynx), proxies, etc., as would those with filtered access, though in the latter case, some content would be blocked (an appropriate error code should be generated, e.g., HTTP status 401, "Unauthorized").

This should be compatible with both wired and wireless service providers. Workarounds via VPNs or other means might also be possible. An Orwellian "papers, please" state is avoided.
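To make the account-level permit/deny idea above concrete, here is a minimal illustrative sketch. The rating strings and function name are invented; a real scheme would use an established classification vocabulary.

```python
# Sketch (illustrative only) of the gateway's permit/deny decision:
# content self-certifies a rating; the subscriber's account carries a
# deny-list of categories. Both vocabularies here are hypothetical.

def gateway_status(content_rating: str, denied_categories: set) -> int:
    """Return the HTTP status the filtered gateway should send."""
    if content_rating in denied_categories:
        return 401  # "Unauthorized", as suggested above
    return 200  # pass the request through unchanged

# Unfiltered account: empty deny-list, everything passes.
print(gateway_status("sexual-content", set()))               # 200
# Filtered account denying that category.
print(gateway_status("sexual-content", {"sexual-content"}))  # 401
```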

@dredmorbius Personally, my preference would be to do filtering with a box on each customer’s network, over which they have full control.

@neil This ... is close to that. The equipment is ISP-side, which may be preferable.

Mind that this puts ISPs into the inspection business, which is ... problematic, and they'll no doubt protest loudly.

The indicator might be present at the DNS level. Given the option of classifying domains as a whole as SFW vs. not, that's probably more tractable. Note that it also puts almost all of social media off-limits, which would be ... interesting.

And again, this would be an opt-in / opt-out at the subscriber level. The identification / assertion question basically gets kicked out to that level.

@neil NB: Nothing would prevent someone from, say, signing up for an unrestricted / unfiltered service and supplying their own filtering capabilities. Though you might want to suggest this as a specific aspect of the proposal.

But for the vast majority of the public, a provisioned and maintained capability would be preferable and really the only feasible option. The same hooks that would make an ISP / carrier capable of filtering would apply to an independent technical user.

@dredmorbius The cost of doing network level filtering is so great, and I’m not a fan of IAPs doing more than just routing packets, so, personally, I’m really loath to suggest it as a mechanism.

@neil Note that this is DNS-based rather than packet-based filtering. A lookup per domain, cachable.
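A minimal sketch of what "a lookup per domain, cachable" could look like; the deny-list here is a hypothetical stand-in for a real classification feed.

```python
# Sketch of the DNS-level check: one classification lookup per domain,
# cached thereafter. DENIED_DOMAINS is hypothetical example data; real
# code would query a rating service instead of an in-memory set.
from functools import lru_cache

DENIED_DOMAINS = {"adult.example"}  # hypothetical classification data

@lru_cache(maxsize=65536)
def domain_allowed(domain: str) -> bool:
    """Cachable per-domain decision: allow unless classified as denied."""
    return domain not in DENIED_DOMAINS

print(domain_allowed("news.example"))   # True
print(domain_allowed("adult.example"))  # False
print(domain_allowed.cache_info().currsize)  # 2: both answers now cached
```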

I'm not sure whether an extant rating system exists. I recall some mention with regard to web filtering (content-inspection) software such as DansGuardian.

ICRA disbanded in 2010: en.wikipedia.org/wiki/Internet

It had a content-description system, based on self-rating by sites. ICRA was a member organisation of ISPs (similar to what I'm suggesting): AOL, BT, Microsoft, T-Online, Verizon. The successor seems to be FOSI (Family Online Safety Institute):
fosi.org/

There are also ASACP / RTA
PICS
Internet Watch Foundation

(I'm well aware of and sympathetic to concerns over the many pitfalls, failure modes, and concerns with Internet filtering generally. That's not what I'm focusing on here.)

PICS is a W3C standard:
en.wikipedia.org/wiki/Platform

A key question I'm looking at is whether the negotiation should be in the browser or at the network level. I'm leaning strongly to network, though at which end (customer or ISP) of the peering point I'm open to argument.

@neil Also POWDER and VCR

w3.org/TR/powder-primer/
seomastering.com/wiki/Voluntar

Both seem to be currently active as specifications, though I have no idea how extensively they're utilised.

(I'm not endorsing or recommending any of these at the moment; I'm largely live-blogging my research / Wikipedia reading.)

@dredmorbius Browser-based filtering has its place, but there’s more to the Internet than browser-based access. And not all browsers can have add-ons. So I favour network solutions too. If it had to be a third party service, I’d probably go for ISP-agnostic filtered DNS.
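For illustration only: ISP-agnostic filtered DNS already exists as a consumer service, and switching to one is just a resolver-config change. The addresses below are OpenDNS FamilyShield's published filtered resolvers, one real-world example of such a service:

```
# /etc/resolv.conf — route all lookups through a third-party filtered resolver
nameserver 208.67.222.123
nameserver 208.67.220.123
```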

@neil Hrm.... Third-party might take some of the concentration-of-power element away from ISPs.

Mind, if ISPs did offer the filtering, they'd quite possibly be contracting or bidding out the process as well. Third parties which subscribers could choose from might be an improvement, though there's still the educating-the-public problem.

Food for thought.

@dredmorbius In the U.K., most ISPs run their own filters, but it’s bought-in kit with managed service provision for the classification lists.

mastodon.neilzone.co.uk