The Kindle Cloud Reader used to display book text in the DOM. This was very helpful for people with disabilities who use browser plugins (like mine) to read their Kindle books in an accessible way. A couple of years ago, Amazon "upgraded" the Kindle Cloud Reader so that it now displays images of text instead of the text itself. This is, of course, a huge step backward in terms of accessibility, and we heard from users who were upset that they could no longer access their Kindle books in ways that were easy for them to read. For some people with disabilities, this move essentially "bricked" their Kindle library.

My guess is that Amazon did this for anti-piracy reasons. But how effective can that be? It's not as if people can't run OCR over the new image-based reader. Hell, you can even point your iPhone at a computer screen and capture all the text. And once one person has pirated a book, it's game over. I don't understand how they think they'll "win" this game of cat-and-mouse. I understand both sides of the argument from a theoretical perspective, but I don't see how it's supposed to work on a practical level. Are they willing to sacrifice accessibility for this Sisyphean attempt to quash digital piracy? Are they just doing this to appease publishers?

There's also a bunch of interest in applying this approach much more widely across the web: removing all (or almost all) HTML/DOM, rendering with WebGL or WebGPU, and having the page load a big WASM rendering engine that handles all the interactivity.
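To make the accessibility point concrete, here's a minimal sketch of the kind of thing a browser plugin could do back when the Cloud Reader put real text in the DOM. The selectors are hypothetical, not the actual Cloud Reader markup:

```ts
// Content-script sketch (hypothetical selectors): pull the book text out of the DOM
// so it can be restyled, enlarged, or handed to a screen reader.
const page = document.querySelector('.book-page');        // assumed text container
const bookText = page?.textContent?.trim() ?? '';
console.log(bookText);                                     // the actual words

// After the "upgrade", the words only exist as pixels, so there is nothing to read:
const rendered = document.querySelector('canvas, img');   // image-based renderer
console.log(rendered?.textContent);                        // "" — no text nodes at all
```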
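And on the OCR point: getting text back out of a page image is a largely solved problem. A rough sketch using the open-source tesseract.js library (the file name is a placeholder for however the page image was captured; a real pipeline would add cleanup and proofreading):

```ts
import Tesseract from 'tesseract.js';

// Feed a screenshot of a rendered page to an OCR engine and get the text back.
Tesseract.recognize('page-0001.png', 'eng')
  .then(({ data }) => {
    console.log(data.text); // the page text, minus whatever the OCR engine misreads
  });
```

Which is exactly why rendering text as images punishes legitimate readers far more than it inconveniences anyone determined to copy a book.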