The proposal explicitly argues against “more fingerprinting”, which is maybe the one area where it’s honest. So I do think it’s not about more data collection, at least not directly. The token is generated locally on the user’s machine, and it’s supposedly the only thing that needs to be shared. So the website’s operator does potentially get some information (in effect: that you passed the test used to verify your client), but I don’t think that’s the main point.
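To make the flow concrete, here’s a rough sketch (in TypeScript) of what verification might look like on the website’s side. To be clear: the names and shapes here (AttestationToken, verifyClient, etc.) are my own illustration of the proposal’s rough shape, not the actual Web Environment Integrity API.

    // Hypothetical server-side check, loosely modeled on the proposal's flow.
    // The browser obtained `token` from a platform attester and forwarded it here.
    import { createVerify } from "node:crypto";

    interface AttestationToken {
      payload: string;    // base64 claims, e.g. "this client passed integrity checks"
      signature: string;  // attester's signature over the payload
      attesterId: string; // which attester signed it
    }

    function verifyClient(
      token: AttestationToken,
      attesterKeys: Map<string, string>, // attesterId -> trusted public key (PEM)
    ): boolean {
      const publicKey = attesterKeys.get(token.attesterId);
      if (!publicKey) return false; // unknown attester: treat the client as unverified

      const verifier = createVerify("SHA256");
      verifier.update(token.payload);
      return verifier.verify(publicKey, token.signature, "base64");
    }

Note what the site actually learns: only that some attester it trusts vouched for your client. Which is why I think the interesting question is who controls the attester list, not what data flows.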
What you’re describing is the status quo today. Websites try to run invasive scripts to get as much info about you as they can, and if you try to derail that, they deem that you aren’t human, and they throw you a captcha.
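For concreteness, here’s the kind of thing those scripts read, in miniature; every call below is a standard browser API:

    // A miniature fingerprinting script: collect stable, high-entropy values.
    const fingerprint = {
      ua: navigator.userAgent,
      languages: navigator.languages.join(","),
      screen: `${screen.width}x${screen.height}x${screen.colorDepth}`,
      timezone: Intl.DateTimeFormat().resolvedOptions().timeZone,
      cores: navigator.hardwareConcurrency,
    };
    // Hash this object and ship it home, and you have a reasonably stable ID
    // that survives cookie clearing.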
Right now though, you can absolutely configure your browser to lie at every step about who you are.
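Concretely, in Firefox both of the prefs below are real (set them via about:config or a user.js file); the user-agent string is just an example value:

    // user.js -- Firefox reads this at startup.
    // Report a generic Chrome-on-Windows user agent instead of the real one:
    user_pref("general.useragent.override", "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36");
    // Blur screen size, timezone, etc. toward common values:
    user_pref("privacy.resistFingerprinting", true);

Remote attestation is aimed at exactly this: a verifiable client makes that kind of lying detectable.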
I think the proposal has much less to do with direct data collection (there are better ways to do that) than with control over the content-delivery chain.
If Google gets its way, it would effectively shift control over how you access the web from you to them. That enables all the stuff people have been talking about in the comments: the end of edge-case browsers and operating systems, the prevention of ad blocking (and with it, indeed, an extension of data collection), the consolidation of Chrome’s dominant position, etc.
This is one of the worst cases of “tech dude tries to solve social science with math” I’ve ever read. The paper isn’t just bad as a whole; it deliberately disregards 200 years of research across at least 3 different academic fields and quotes Borat instead.
And then it goes on to gleefully describe how the authors built a giant machine that reproduces their own (dangerous) biases about the universality of emotion-voicing, using just ChatGPT and a zero-shot classifier. Would you look at that? Yay, science, I guess?