Friday, October 14, 2016

Maybe IDPF and W3C should *compete* in eBook Standards

A controversy has been brewing in the world of eBook standards. The International Digital Publishing Forum (IDPF) and the World Wide Web Consortium (W3C) have proposed to combine. At first glance, this seems a sensible thing to do; IDPF's EPUB work leans heavily on W3C's HTML5 standard, and IDPF has been over-achieving with limited infrastructure and resources.

Not everyone I've talked to thinks the combination is a good idea. In the publishing world, there is fear that the giants of the internet who dominate the W3C will not be responsive to the idiosyncratic needs of more traditional publishing businesses. On the other side, there is fear that the work of IDPF and Readium on "Lightweight Content Protection" (a.k.a. Digital Rights Management) will be another step towards "locking down the web" (see the controversy over "Encrypted Media Extensions").

What's more, a peek into the HTML5 development process reveals a complicated history. The HTML5 that we have today derives from a group of developers (the WHATWG) who got sick of the W3C's processes and dependencies and broke away from W3C. Politics above my pay grade occurred and the breakaway effort was folded back into W3C as a "Community Group". So now we have two slightly different versions of HTML, the "standard" HTML5 and WHATWG's HTML "Living Standard". That's also why HTML5 omitted much of W3C's Semantic Web development work such as RDFa.

Amazon (not a member of either IDPF or W3C) is the elephant in the room. They take advantage of IDPF's work in a backhanded way. Instead of supporting the EPUB standard in their Kindle devices, they use proprietary formats under their exclusive control. But they accept EPUB files in their content ingest process and thus extract huge benefit from EPUB standardization. This puts the advancement of EPUB in a difficult position. New features added to EPUB have no effect on the majority of ebook users because Amazon just converts everything to a proprietary format.

Last month, the W3C published its vision for eBook standards, in the form of an innocuously titled "Portable Web Publications Use Cases and Requirements". For whatever reason, this got rather limited notice or comment, considering that it could be the basis for the entire digital book industry. Incredibly, the word "ebook" appears not once in the entire document. "EPUB" appears just once, in the phrase "This document is also available in this non-normative format: ePub". But read the document, and it's clear that "Portable Web Publication" is intended to be the new standard for ebooks. For example, the PWP (can we just pronounce that "puup"?) "must provide the possibility to switch to a paginated view". The PWP (say it, "puup") needs a "default reading order", i.e. a table of contents. And of course the PWP has to support digital rights management: "A PWP should allow for access control and write protections of the resource." Under the oblique requirement that "The distribution of PWPs should conform to the standard processes and expectations of commercial publishing channels", we discover that this means "Alice acquires a PWP through a subscription service and downloads it. When, later on, she decides to unsubscribe from the service, this PWP becomes unavailable to her." So make no mistake, PWP is meant to be EPUB 4 (or maybe ePub4, to use the non-normative capitalization).

There's a lot of unalloyed good stuff there, too. The issues of making web publications work well offline (an essential ingredient for archiving them) are technical, difficult and subtle, and W3C's document does a good job of flushing them out. There's a good start (albeit limited) on archiving issues for web publications. But nowhere in the statement of "use cases and requirements" is there a use case for low cost PWP production or for efficient conversion from other formats, despite the statement that PWPs "should be able to make use of all facilities offered by the [Open Web Platform]".

The proposed merger of IDPF and W3C raises the question: who gets to decide what "the ebook" will become? It's an important question, and the answer eventually has to be open rather than proprietary. If a combined IDPF and W3C can get the support of Amazon in open standards development, then everyone will benefit. But if not, a divergence is inevitable. The publishing industry needs to sustain its business; for that, it needs an open standard for content optimized to feed supply chains like Amazon's. I'm not sure that's quite what W3C has in mind.

I think ebooks are more important than just the commercial book publishing industry. The world needs ways to deliver portable content that don't run through the Amazon tollgates. For that we need innovation that's as unconstrained and disruptive as the rest of the internet. The proposed combination of IDPF and W3C needs to be examined for its effects on innovation and competition.

Philip K. Dick's Mr. Robot is one of the stories in Imagination: Stories of Science and Fantasy, January 1953. It is available as an ebook from Project Gutenberg and from GITenberg.
My guess is that Amazon is not going to participate in open ebook standards development. That means that two different standards development efforts are needed. Publishers need a content markup format that plays well with whatever Amazon comes up with. But there also needs to be a way for the industry to innovate and compete with Amazon on ebook UI and features. That's a very different development project, and it needs a group more like WHATWG to nurture it. Maybe the W3C can fold that sort of innovation into its unruly stable of standards efforts.

I worry that by combining with IDPF, the W3C work on portable content will be chained to the supply-chain needs of today's publishing industry, and no one will take up the banner of open innovation for ebooks. But it's also possible that the combined resources of IDPF and W3C will catalyze the development of open alternatives for the ebook of tomorrow.

Is that too much to hope?

Wednesday, September 7, 2016

Start Saying Goodbye to eBook Pagination

Book pages may be the most unfortunate things ever invented by the reading-industrial complex. No one knows the who or how of their invention. The Egyptians and the Chinese didn't need pages because they sensibly wrote in vertical lines. It must have been the Greeks who invented and refined the page.
Egyptian scroll held in the UNE Antiquities Museum. CC BY, unephotos.
In my imagination, some scribes invented the page in a dark and damp scriptorium after arguing about landscape versus portrait on their medieval iScrolls. They didn't worry about user experience. The debate must have been ergonomics versus cognitive load. Opening the scroll side-to-side allowed the monk to rest comfortably when the scribing got boring, and the additional brainwork of figuring out how to start a new column was probably a great relief from monotony. That and drop-caps. The codex probably came about when the scribes ran out of white-out.
Scroll of the Book of Esther, Seville, Spain
Technical debt from this bad decision lingers. Consider the horrors engendered by pagination:
  • We have to break pages in the middle of sentences?!?!?!? Those exasperating friends of yours who stop their sentences in mid-air due to lack of interest or memory probably work as professional paginators. 
  • When our pagination leaves just one line of a paragraph at the top of a page, you have what's known as a widow. Pagination is sexist as well as exasperating.
  • Don't you hate it when a wide table is printed sideways? Any engineer can see this is a kludgy result of choosing the wrong paper size.
To be fair, having pages is sometimes advantageous.
  • You can put numbers on the pages. This allows an entire class of students to turn to the same page. It also allows textbook companies to force students to buy the most recent edition of their exorbitantly priced textbooks. The ease of shifting page numbers spares the textbook company the huge expense of making actual revisions to the text.
  • Pages have corners, convenient for folding.
  • You can tell often-read pages in a book by looking for finger-grease accumulation on the page-edges. I really hope that stuff is just finger-grease.
  • You can rip out important pages. Because you can't keep a library book forever.
  • Without pages in books, how would you press flowers?
  • With some careful bending, you can make a cute heart shape.
Definition of love by Billy Rowlinson, on Flickr; CC-BY 

While putting toes in the water of our ebook future, we still cling to pages like Linus and his blankie. At first, this was useful. Users who had never seen an ebook could guess how they worked. Early e-ink based e-reading devices had great contrast and readability but slow refresh rates. "Turning" a page was a giant hack that turned a technical liability of slow refresh into a whizzy dissolve feature. Apple's iBooks app for the iPad appeared at the zenith of skeuomorphic UI design fashion and its too-cute page-turn animation is probably why the DOJ took Apple to court. (Anti-trust?? give me a break!)

But seriously, automated pagination is hard. I remember my first adventures with TeX in the late '80s: half my time was spent wrestling with unfortunate, inexplicable pagination and equation bounding boxes. (The other half I spent mesmerized at seeing my own words in typeset glory.)

The thing that started me on this rant is the recent publication of the draft EPUB 3.1 specification, which has nothing wrong with it but makes me sad anyway. It's sad because the vast majority of ebook lovers will never be able to take advantage of all the good things in it. And it's not just because of Amazon and its proprietary Kindle formats. It's the tug of war between the page-oriented past of books and the web-oriented future of ebooks. EPUB's role is to leverage web standards while preserving the publishing industry's investment in print-compatible text. Mission accomplished, as much as you can expect.

What has not been part of EPUB's mission is to leverage the web's amazingly rapid innovation in user interfaces. EPUB is essentially a website packaged into a compressed archive. But over the last eight years, innovations in "responsive" web reading UI, driven by the need for websites to work on both desktop and mobile devices, have been magical. Tap, scroll and swipe are now universally understood, and websites that don't work that way seem buggy or weird. Websites adjust to your screen size and orientation. They're pretty easy to implement, because of javascript/css frameworks such as Bootstrap and Foundation. They're perfect for ebooks, except... the affordances provided by these responsive design frameworks conflict with the built-in affordances of ebook readers (such as pagination). The result has been that, from the UI point of view, EPUBs are zipped-up turn-of-the-century websites with added pagination.
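To make "a website packaged into a compressed archive" concrete, here's a minimal sketch in Python of the EPUB container layout: a ZIP whose first entry is an uncompressed mimetype file, plus a META-INF/container.xml pointing at the package document. The file names and chapter content are invented for illustration, and a real EPUB also needs the package document and navigation files this sketch omits.

```python
import zipfile

# A sketch of the EPUB container (OCF) layout: an EPUB is a ZIP archive
# whose first entry is an uncompressed "mimetype" file, plus a
# META-INF/container.xml pointing at the package document.  The chapter
# content and file names below are illustrative only; this is not a
# complete, valid EPUB.

CONTAINER_XML = """<?xml version="1.0" encoding="UTF-8"?>
<container version="1.0"
           xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="content.opf"
              media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>"""

CHAPTER_XHTML = """<html xmlns="http://www.w3.org/1999/xhtml">
  <body><h1>Chapter 1</h1><p>Just a web page in a zip file.</p></body>
</html>"""

with zipfile.ZipFile("example.epub", "w") as epub:
    # The mimetype entry must come first and must be stored uncompressed.
    epub.writestr("mimetype", "application/epub+zip",
                  compress_type=zipfile.ZIP_STORED)
    epub.writestr("META-INF/container.xml", CONTAINER_XML,
                  compress_type=zipfile.ZIP_DEFLATED)
    epub.writestr("chapter1.xhtml", CHAPTER_XHTML,
                  compress_type=zipfile.ZIP_DEFLATED)
```

All the packaging, in other words, and none of the modern web UI.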

Which is what makes me sad. Responsive, touch-based web designs, not container-paginated EPUBs, are the future of ebooks. The first step (which Apple took two years ago) is to stop resisting the scroll, and start saying goodbye to pagination.

Sunday, July 31, 2016

Entitled: The art of naming without further elaboration or qualification.

As I begin data herding for our project Mapping the Free Ebook Supply Chain, I've been thinking a lot about titles and subtitles, and it got me wondering: what effect do subtitles have on usage, and should a book's open-access status affect its naming strategy? We are awash in click-bait titles for articles on the web; should ebook titles be clickbaity, too? To what extent should ebook titles be search-engine optimized, and which search engines should they be optimized for?

Here are some examples of titles that I've looked at recently, along with my non-specialist's reactions:
Title: Bigger than You: Big Data and Obesity
Subtitle: An Inquiry toward Decelerationist Aesthetics
The title is really excellent; it gives a flavor of what the book's about and piques my interest because I'm curious what obesity and big data might have to do with each other. The subtitle is a huge turn-off. It screams "you will hate this book unless you already know about decelerationist aesthetics" (and I don't).
(from Punctum Books)



Title: Web Writing
Subtitle: Why and How for Liberal Arts Teaching and Learning
The title is blah and I'm not sure whether the book consists of web writing or is something about how to write for or about the web. The subtitle at least clues me in to the genre, but fails to excite me. It also suggests to me that the people who came up with the name might not be experts in writing coherent, informative and effective titles for the web.
From University of Michigan Press



Title: DOOM
Subtitle: SCARYDARKFAST

If I saw the title alone I would probably mistake it for something it's not. An apocalyptic novel, perhaps. And why is it all caps? The subtitle is very cool though, I'd click to see what it means.
From Digital Culture Books




It's important to understand how title metadata gets used in the real world. Because the title and subtitle get transported in different metadata fields, using a subtitle cedes some control over title presentation to the websites that display it. For example, Unglue.it's data model has a single title field, so if we get both title and subtitle in a metadata feed, we squash them together in the title field. Unless we don't. Because some of our incoming feeds don't include the subtitle. Different websites do different things. Amazon uses the full title but some sites omit the subtitle until you get to the detail page. So you should have a good reason to use a subtitle as opposed to just putting the words from the subtitle in the title field. DOOM: SCARYDARKFAST is a much better title than DOOM. (The DOOM in the book turns out to be the game DOOM, which I would have guessed from the all-caps if I had ever played DOOM.) And you can't depend on sites preserving your capitalization; Amazon presents several versions of DOOM: SCARYDARKFAST.
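For illustration, here's the kind of title-squashing a site with a single title field ends up doing. The field names and the guard against already-combined titles are assumptions about a hypothetical feed, not any particular site's actual schema.

```python
def display_title(record):
    """Squash separate title/subtitle metadata fields into one display
    title, the way a site with a single title field has to.  The field
    names ('title', 'subtitle') are hypothetical, not any real feed's
    schema."""
    title = (record.get("title") or "").strip()
    subtitle = (record.get("subtitle") or "").strip()
    if subtitle and not title.lower().endswith(subtitle.lower()):
        # Some feeds already ship "Title: Subtitle"; don't double it up.
        return f"{title}: {subtitle}"
    return title

print(display_title({"title": "DOOM", "subtitle": "SCARYDARKFAST"}))
# DOOM: SCARYDARKFAST
print(display_title({"title": "DOOM: SCARYDARKFAST"}))
# DOOM: SCARYDARKFAST
```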

Another thing to think about is the "marketing funnel". This is the idea that in order to make a sale or to have an impact, your product has to pass through a sequence of hurdles, each step yielding a market that's a fraction of the previous step's. So for ebooks, you have to first get them selected into channels, each of which might be a website. Then a fraction of users searching those websites might see your ebook's title (or cover), for example in a search result. Then a fraction of those users might decide to click on the title to see a detail page, at which point there had better be an abstract or the potential reader becomes a non-reader.

Having reached a detail page, some fraction of potential readers (or purchase agents) will be enticed to buy or download the ebook. Any "friction" in this process is to be avoided. If you're just trying to sell the ebook, you're done. But if you're interested in impact, you're still not done, because even if a potential reader has downloaded the ebook, there's no impact until the ebook gets used. The title and cover continue to be important because the user is often saving the ebook for later use. If the ebook doesn't open to something interesting and useful, a free ebook will often be discarded or put aside.
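The funnel arithmetic is just multiplication of fractions, but it's worth seeing how quickly it shrinks. Every number in this sketch is made up purely for illustration:

```python
# Back-of-the-envelope funnel arithmetic.  The fractions are invented
# for illustration; the point is that small per-step rates multiply
# into a very small overall rate.
funnel = [
    ("selected into a channel", 0.5),
    ("title seen in search results", 0.05),
    ("title clicked to detail page", 0.10),
    ("downloaded or purchased", 0.20),
    ("actually opened and read", 0.30),
]

reach = 1.0
for step, fraction in funnel:
    reach *= fraction
    print(f"{step:35s} {reach:.5f}")

# With these illustrative numbers, only 0.015% of the starting audience
# ends up reading the book -- which is why every step matters.
```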

Bigger than You's strong title should get it the clicks, but the subtitle doesn't help much at any step of the marketing funnel. "Aesthetics" might help it in searches; it's possible that even the book's author has never ever entered "Decelerationist" as a search term. The book's abstract, not the subtitle, needs to do the heavy lifting of driving purchases or downloads.

The first sentence of "Web Writing" suggests to me that a better title might have been:
"Rebooting how we think about the Internet in higher education"
But check back in a couple of months. Once we start looking at the data on usage, we might find that what I've written here is completely wrong, and that Web Writing was the best title of them all!

Notes:
1. The title of this blog post is the creation of Adrian Short, who seems to have left Twitter.

Wednesday, June 22, 2016

Wiley's Fake Journal of Constructive Metaphysics and the War on Automated Downloading


Suppose you were a publisher and you wanted to get the goods on a pirate who was downloading your subscription content and giving it away for free. One thing you could try is to trick the pirate into downloading fake content loaded with spy software and encoded information about how the downloading was being done. Then you could verify when the fake content was turning up on the pirate's website.

This is not an original idea. Back in the glory days of Napster, record companies would try to flood the network with bad files, which somehow became infested with malware. Peer-to-peer networks evolved trust mechanisms to foil bad-file strategies.

I had hoped that the emergence of Sci-Hub as an efficient, though unlawful, distributor of scientific articles would not provoke scientific publishers to do things that could tarnish the reputation of their journals. I had hoped that publishers would get their acts together and implement secure websites so that they could be sure that articles were getting to their real subscribers. Silly me.

In a series of tweets, Rik Smith-Unna noted with dismay that the Wiley Online Library was using "fake DOIs" as "trap URLs", URLs in links invisible to human users. A poorly written web spider or crawler would try to follow the link, triggering a revocation of the user's access privileges. (For not obeying the website's terms of service, I suppose.)

Gabriel J. Gardner of Cal State Long Beach has reported his library's receipt of a scary email from Wiley stating:
Wiley has been investigating activity that uses compromised user credentials from institutions to access proxy servers like EZProxy (or, in some cases, other types of proxy) to then access IP-authenticated content from the Wiley Online Library (and other material). We have identified a compromised proxy at your institution as evidenced by the log file below. 
We will need to restrict your institution’s proxy access to Wiley Online Library if we do not receive confirmation that this has been remedied within the next 24 hours.  

I've been seeing these trap URLs in scholarly journals for almost 20 years now. Two years ago they reappeared in ACS journals. They're rarely well thought out, and from talking with publishers who have tried them, they don't work as intended. The Wiley trap URLs exhibit several mistakes in implementation.
  1. Spider trap URLs are useful for detecting bots that ignore robot exclusions. But Wiley's robots.txt document doesn't exclude the trap URLs, so "well-behaved" spiders, such as Googlebot, are also caught (see the sketch after this list). As a result, the fake Wiley page is indexed in Google, and because of the way Google aggregates the weight of links, it's actually a rather highly ranked page.
  2. The download urls for the fake article don't download anything, but instead return a 500 error code whenever an invalid pseudo-DOI is presented to the site. This is a site misconfiguration that can cause problems with link checking or link verification software.
  3. Because the fake URLs look like Wiley DOIs, they could cause confusion if circulated. Crossref discourages this.
  4. The trap URLs as implemented by Wiley can be used for malicious attacks. With a list of trap URLs, it's trivial to craft an email or a web page that causes a victim's browser to request every URL on the list. Since the trap URLs trigger service suspensions, that means you can get a target's access cut off just by sending them an email.
  5. Apparently, Wiley used a special cookie to block the downloading. Have they not heard of sessions?
  6. The blocks affected both subscription and open-access content. Umm, do I need to explain the concept of "Open Access"?
  7. It's just not a smart idea (even on April Fools!) for a reputable publisher to create fake article pages for "Constructive Metaphysics in Theories of Continental Drift". (Warning: until Wiley realizes their ineptness, this link may trigger unexpected behavior. Use Tor.) It's an insult to both geophysicists and philosophers. And how does the University of Bradford feel about hosting a fictitious Department of Geophysics???
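On point 1: a well-behaved crawler consults robots.txt before it fetches anything, so a trap URL that isn't excluded there catches polite spiders right along with the bad actors. Here's a minimal sketch using Python's standard library; the robots.txt rules and article paths are invented, not Wiley's actual ones.

```python
from urllib import robotparser

# A polite crawler checks robots.txt before fetching.  If the trap URLs
# aren't listed under Disallow, the crawler has no way to know it
# shouldn't follow them.  The rules and paths below are invented for
# illustration.
rp = robotparser.RobotFileParser()
rp.parse("""
User-agent: *
Disallow: /admin/
""".splitlines())

candidate_links = [
    "https://publisher.example.com/doi/10.1111/real-article",
    "https://publisher.example.com/doi/10.1111/hidden-trap-link",
]

for url in candidate_links:
    if rp.can_fetch("MyPoliteBot/1.0", url):
        # Nothing in robots.txt excludes the trap link, so a polite
        # crawler fetches it anyway and trips the ban.
        print("allowed, will crawl:", url)
    else:
        print("disallowed, skipping:", url)
```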


Instead of trap URLs, online businesses that need to detect automated activity have developed elaborate and effective mechanisms to do so. Automated downloads are a billion dollar problem for the advertising industry in particular. So advertisers, advertising networks, and market research companies use coded, downloaded JavaScript and Flash scripts to track and monitor both users and bots. I've written about how these practices are inappropriate in library contexts. In comparison, the trap URLs being deployed by Wiley are sophomoric and a technical embarrassment.

If you visit the Wiley fake article page now, you won't get an article. You get a full dose of monitoring software. Wiley uses a service called Qualtrics Site Intercept to send you "Creatives" if you meet targeting criteria. But you'll get that too if you access the Wiley Online Library's real articles, along with sophisticated trackers from Krux Digital, Grapeshot, Jivox, Omniture, Tradedesk, Videology and Neustar.

Here's the letter I'd like libraries to start sending publishers:
[Library] has been investigating activity that causes spyware from advertising networks to compromise the privacy of IP-authenticated users of the [Publisher] Online Library, a service for which we have been billed [$XXX,XXX]. We have identified numerous third party tracking beacons and monitoring scripts infesting your service as evidenced by the log file below. 
We will need to restrict [Publisher]'s access to our payment processes if we do not receive confirmation that this has been remedied within the next 24 hours.  
Notes:
  1. Here's another example of Wiley cutting off access because of fake URL clicking. The implication that Wiley has stopped using trap URLs seems to be false.
  2. Some people have suggested that the "fake DOIs" are damaging the DOI system. Don't worry, they're not real DOIs and have not been registered. The DOI system is robust against this sort of thing, though it's still disrespectful.
Update June 23:
  1. Tom Griffin, a spokesman for Wiley, has posted a denial to LIBLICENCE which has a tenuous grip on reality.
  2. Smith-Unna has posted a point-by-point response to Griffin's denial in the form of a gist.

Monday, May 23, 2016

97% of Research Library Searches Leak Privacy... and Other Disappointing Statistics.


...But first, some good news. Among the 123 members of the Association of Research Libraries, there are four libraries with almost-secure search services that don't send clickstream data to Amazon, Google, or any advertising network. Let's now sing the praises of the libraries at Southern Illinois University, University of Louisville, University of Maryland, and University of New Mexico for their commendable attention to the privacy of their users. And it's no fault of their own that they're not fully secure. SIU fails to earn a green lock badge because of mixed-content issues in the CARLI service, while Louisville, Maryland and New Mexico miss out on green locks because of the weak cipher suite used by OCLC on their WorldCat Local installations. These are relatively minor issues that are likely to get addressed without much drama.

Over the weekend, I decided to try to quantify the extent of privacy leakage in public-facing library services by studying the search services of the 123 ARL libraries. These are the best funded and most prestigious libraries in North America, and we should expect them to positively represent libraries. I went to each library's on-line search facility and did a search for a book whose title might suggest to an advertiser that I might be pregnant. (I'm not!) I checked to see whether the default search linked to by the library's home page (as listed on the ARL website) was delivered over a secure connection (HTTPS). I checked for privacy leakage of referer headers from cover images by using Chrome developer tools (the sources tab). I used Ghostery to see if the library's online search used Google Analytics or not. I also noted whether advertising network "web beacons" were placed by the search session.
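For anyone who wants to repeat this at their own library, here's a rough sketch of the kind of check I did by hand. The tracker-domain list is a small illustrative sample, and a real audit needs a browser with developer tools, since many beacons are injected by JavaScript after the page loads.

```python
import requests

# Rough sketch of the survey check: is the library's search page served
# over HTTPS, and does its HTML reference known third-party tracker
# hosts?  The domain list is a small, illustrative sample only, and a
# substring scan of the raw HTML misses script-injected beacons.
TRACKER_HINTS = [
    "google-analytics.com", "googletagmanager.com", "doubleclick.net",
    "facebook.net", "addthis.com", "sharethis.com",
    "images-amazon.com", "ssl-images-amazon.com",
]

def audit(search_url):
    response = requests.get(search_url, timeout=10)
    print("served over HTTPS:", response.url.startswith("https://"))
    html = response.text.lower()
    for host in TRACKER_HINTS:
        if host in html:
            print("references third-party host:", host)

# Placeholder URL -- substitute a real catalog search page.
audit("https://library.example.edu/search?q=what+to+expect")
```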

72% of the ARL libraries let Google look over the shoulder of every click by every user, by virtue of the pervasive use of Google Analytics. Given the commitment to reader privacy embodied by the American Library Association's code of ethics, I'm surprised this is not more controversial. ALA even sponsors workshops on "Getting Started with Google Analytics". To paraphrase privacy advocate and educator Dorothea Salo, the code of ethics does not say:
We protect each library user's right to privacy and confidentiality with respect to information sought or received and resources consulted, borrowed, acquired or transmitted, except for Google Analytics.
While it's true that Google has a huge stake in maintaining the trust of users in their handling of personal information, and people seem to trust Google with their most intimate secrets, it's also true that Google's privacy policy puts almost no constraints on what Google (itself) can do with the information they collect. They offer strong commitments not to share personally identifiable information with other entities, but they are free to keep and use personally identifiable information. Google can associate Analytics-tracked library searches with personally identifiable information for any user that has a Google account; libraries cannot be under the illusion that they are uninvolved with this data collection if they benefit from Google Analytics. (Full disclosure: many of the web sites I administer also use Google Analytics.)

80% of the ARL libraries provide their default discovery tools to users without the benefit of a secure connection. This means that any network provider in the path between the library and the user can read and alter the query, and the results returned to the user. It also means that when a user accesses the library over public wifi, such as in a coffee shop, the user's clicks are available for everyone else in the coffee shop to look at, and potentially to tamper with. (The Digital Library Privacy Pledge is not having the effect we had hoped for, at least not yet.)

28% of ARL libraries enrich their catalog displays with cover images sourced from Amazon.com. Because of privacy leakage in referer headers, this means that a user's searches for library books are available for use by Amazon when Amazon wants to sell that user something. It's not clear that libraries realize this is happening, or whether they just don't realize that their catalog enrichment service uses cover images sourced by Amazon.

13% of ARL libraries help advertisers (other than Google) target their ads by allowing web beacons to be placed on their catalog web pages. Whether the beacons are from Facebook, DoubleClick, AddThis or Sharethis, advertisers track individual users, often in a personally identifiable way. Searches on these library catalogs are available to the ad networks to maximize the value of advertising placed throughout their networks.

Much of the privacy leakage I found in my survey occurs beyond the control of librarians. There are IT departments, vendor-provided services, and incumbent bureaucracies involved. Important library services appear to be unavailable in secure versions. But specific, serious privacy leakage problems that I've discussed with product managers and CTOs of library automation vendors have gone unfixed for more than a year. I'm getting tired of it.

The results of my quick survey for each of the 123 ARL libraries are available as a Google Sheet. There are bound to be a few errors, and I'd love to be able to make changes as privacy leaks get plugged and websites become secure, so feel free to leave a comment.