Online ID / age verification: the death of online search, and non-browser web access?

A (public) draft of a blog post on an unintended, perhaps unnoticed, effect of the current push for online ID / age verification.

Feedback welcome.

@neil In addition to noninteractive tools:

There are console web clients which would be largely unable to interact with such systems, including lynx, links, elinks, w3m, and others.
Numerous Web proxy tools, including, ironically, "parental controls" tools such as DansGuardian, but also potentially Squid and others, could well be affected.

@dredmorbius Thanks! I mentioned lynx, but haven’t thought about web proxies. That’s an interesting point.

@dredmorbius Indeed. I don’t think it will be AV which kills off proxies, but ever-improving encryption.

@neil mitmproxy is one possible solution to the TLS/SSL case:

I'm working on a longer response which may or may not be relevant...

@dredmorbius I look forward to reading it.

I’m particularly interested in proxied traffic (TLS or otherwise) in school environments - although it may be that these are the environments which would suffer the least from being unable to access certain resources.

@neil The more I think about this, the more the UK's focus seems misguided.

Not that there isn't a problem with access to harmful content. But that seeking individual validation in all instances is the wrong approach.

A better gatekeeper might be at the ISP level, and with access granted or denied based on volitional content classification (for which there are extant schemes) and account-level permit or deny policies.

Where multiple policies might be appropriate for a given customer (a home with adults and children, a business, a school), specific filtered or unfiltered access gateways, say as VPNs or virtual accounts, might be provided. The subscriber is responsible for maintaining integrity over those gateways.

Content providers would self-certify content as, e.g., "all audiences" (no restrictions), or filterable on one or more specific criteria (e.g., sexual content).
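In rough sketch terms, self-certification could be as simple as a label the provider attaches to its own responses, which a filter reads without inspecting the content itself. (The header name and label values below are purely illustrative, not an existing standard.)

```python
# Hypothetical self-certification scheme: a site declares its own
# classification in an HTTP response header; a filter compares that
# declared label against the account's deny criteria.
RESTRICTED_LABELS = {"sexual-content", "violence"}

def classify(headers: dict) -> str:
    """Return the site's self-declared label, defaulting to unrestricted."""
    return headers.get("Content-Rating", "all-audiences")

def is_blocked(headers: dict, policy: set) -> bool:
    """Block when the self-declared label matches a filter criterion."""
    return classify(headers) in policy
```

A site rating itself "all audiences" simply omits the header (or declares it explicitly); anything it flags as, say, sexual content is then blockable by accounts opting in to that criterion.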

There is no need for identifiers on access, in this case. Internet accounts with unfiltered access would be able to use technical tools (e.g., curl, wget, lynx), proxies, etc., as would those with filtered access, though in the latter case some content would be blocked (an appropriate error code should be generated, e.g., HTTP status 401, "Unauthorized").
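The gateway decision itself is trivial: no identity check, just the account's policy against the domain's (self-certified) classification. A minimal sketch, with a hypothetical classification table standing in for whatever the ISP actually consults:

```python
# Sketch of a filtered gateway's per-request decision. Unfiltered
# accounts pass everything; filtered accounts get an HTTP error
# status for domains whose label matches an account deny criterion.
CLASSIFICATION = {                      # illustrative, self-certified labels
    "example.com": "all-audiences",
    "adult.example": "sexual-content",
}

def gateway_status(domain: str, account_filters: set) -> int:
    label = CLASSIFICATION.get(domain, "all-audiences")
    if label in account_filters:
        return 401                      # "Unauthorized", as suggested above
    return 200
```

Note that nothing here knows *who* is making the request, only which subscriber account (and hence which policy) the traffic arrives on.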

This should be compatible with both wired and wireless service providers. Workarounds via VPNs or other means might also be possible. An Orwellian "papers, please" state is avoided.

@dredmorbius Personally, my preference would be to do filtering with a box on each customer’s network, over which they have full control.

@neil This ... is close to that. The equipment is ISP-side, which may be preferable.

Mind that this puts ISPs into the inspection business, which is ... problematic, and they'll no doubt protest loudly.

The indicator might be present at the DNS level. Given the options of classifying domains as a whole SFW vs. not, that's probably more tractable. Note that it also puts almost all of social media off-limits, which would be ... interesting.

And again, this would be an opt-in / opt-out at the subscriber level. The identification / assertion question basically gets kicked out to that level.

@neil NB: Nothing would prevent someone from, say, signing up for an unrestricted / unfiltered service and supplying their own filtering capabilities. Though you might want to suggest this as a specific aspect of the proposal.

But for the vast majority of the public, a provisioned and maintained capability would be preferable and really the only feasible option. The same hooks that would make an ISP / carrier capable of filtering would apply to an independent technical user.

@dredmorbius The cost of doing network-level filtering is so great, and I’m not a fan of IAPs doing more than just routing packets, so, personally, I’m really loath to suggest it as a mechanism.

@neil Note that this is DNS-based rather than packet-based filtering. A lookup per domain, cacheable.
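The "one lookup per domain, cacheable" point can be sketched as follows — here a local table stands in for wherever the per-domain classification would actually be published (e.g., a DNS TXT record), and `lru_cache` stands in for the resolver cache:

```python
from functools import lru_cache

LABELS = {"adult.example": "sexual-content"}   # hypothetical classification data

@lru_cache(maxsize=4096)
def lookup_label(domain: str) -> str:
    # A real deployment would issue one DNS query per domain here;
    # caching means repeat visits to the same domain cost nothing.
    return LABELS.get(domain, "all-audiences")

def allowed(domain: str, filters: frozenset) -> bool:
    return lookup_label(domain) not in filters
```

This is why the cost argument against packet inspection doesn't really apply: the filter never touches traffic, only domain names.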

I'm not sure whether an extant rating system exists. I recall some mention with regard to web filtering (content-inspection) software such as DansGuardian.

ICRA disbanded in 2010:

That has a content-description system, based on self-rating by sites. It was a member organisation of ISPs (similar to what I'm suggesting): AOL, BT, Microsoft, T-Online, Verizon. The successor seems to be FOSI (Family Online Safety Institute):

There are also ASACP / RTA
Internet Watch Foundation

(I'm well aware of and sympathetic to concerns over the many pitfalls, failure modes, and concerns with Internet filtering generally. That's not what I'm focusing on here.)

PICS is a (since-superseded) W3C standard:

A key question I'm looking at is whether the negotiation should be in the browser or at the network level. I'm leaning strongly to network, though at which end (customer or ISP) of the peering point I'm open to argument.


@dredmorbius Browser-based filtering has its place, but there’s more to the Internet than browser-based access. And not all browsers can have add-ons. So I favour network solutions too. If it had to be a third party service, I’d probably go for ISP-agnostic filtered DNS.

@neil Hrm.... Third-party might take some of the concentration-of-power element away from ISPs.

Mind, if ISPs did offer the filtering, they'd quite possibly be contracting or bidding out the process as well. Third parties which subscribers could choose from might be an improvement, though there's still the educating-the-public problem.

Food for thought.

@dredmorbius In the U.K., most ISPs run their own filters, but it’s bought-in kit with managed service provision for the classification lists.
