I’m still weirded out that Google Lens can do animal and plant identification reliably these days. I remember using the first few versions of Google Lens, when you’d be lucky if it could even identify a flower as a flower. It was basically only useful for architecture and OCR.
although I suppose this xkcd did come out in 2014; it’s been more than five years since then, and Google has some pretty sizeable research teams.
Me, trying to watch a video without having to expend effort on digging out headphones and processing auditory input: “oh, it’s only got auto-captions, those things are practically useless, maybe I should just come back lat–holy shit, YouTube auto-captions actually *work* now! when did *that* happen?!”
Tags:
#and how long will it be until I can download a 2020-Google-level auto-transcriber from a Canonical repository #and start running my own stuff through it without having to offer everything up to the Cloud #reply via reblog #proud citizen of the Future