Friday, January 13, 2017

Google's "Crypto-Cookies" are tracking Chrome users

Ordinary HTTP cookies are used in many ways to make the internet work. Cookies help websites remember their users. A common use of cookies is for authentication: when you log into a website, you stay logged in because the site sets a cookie containing your authentication info. Every request you make to the website includes this cookie; the website then knows to grant you access.

But there's a problem: someone might steal your cookies and hijack your login. This is particularly easy for thieves if your communication with the website isn't encrypted with HTTPS. To address the risk of cookie theft, the security engineers of the internet have been working on ways to protect these cookies with strong encryption. In this article, I'll call these "crypto-cookies", a term not used by the folks developing them. The Chrome user interface calls them Channel IDs.


Development of secure "crypto-cookies" has not been a straight path. A first approach, called "Origin Bound Certificates", has been abandoned. A second approach, "TLS Channel IDs", was implemented, then superseded by a third approach, "TLS Token Binding" (nicknamed "TokBind"). If you use the Chrome web browser, your connections to Google web services take advantage of TokBind for most, if not all, Google services.
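To make this concrete, here's a minimal sketch, in Python, of the idea behind token binding. It is not the actual TokBind wire format (the real protocol signs the TLS exported keying material and travels in a request header); the per-connection value below is a stand-in, purely to show why a stolen cookie becomes useless without the browser's private key.

    # Conceptual sketch of token binding; NOT the TokBind wire format.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # The browser generates and stores one long-lived key pair per site...
    browser_key = ec.generate_private_key(ec.SECP256R1())

    # ...and on every connection signs a value bound to that TLS session.
    # (Real token binding signs the TLS "exported keying material";
    # this stand-in value is an assumption for illustration.)
    tls_unique = b"per-connection value from the TLS layer"
    proof = browser_key.sign(tls_unique, ec.ECDSA(hashes.SHA256()))

    # The server associates the cookie with the public key, so a thief
    # holding the cookie alone can't produce a valid proof on a new
    # connection. verify() raises InvalidSignature on a forgery.
    browser_key.public_key().verify(proof, tls_unique,
                                    ec.ECDSA(hashes.SHA256()))
    print("proof verified: the cookie is bound to the browser's key")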

This is excellent for security, but might not be so good for privacy; 3rd-party content is the culprit. It turns out that Google has not limited crypto-cookie deployment to services like Gmail and YouTube that have log-ins. Google hosts many popular utilities that have never tracked users with conventional cookies: font libraries such as Google Fonts, javascript libraries such as jQuery, and app frameworks such as Angular are all hosted on Google servers, and many websites load these resources from Google for convenience and fast load times. In addition, Google utility scripts such as Analytics and Tag Manager are delivered from separate domains so that users are only tracked across websites if so configured. But with Google Chrome (and Microsoft's Edge browser), every user who visits any website using Google Analytics, Google Tag Manager, Google Fonts, jQuery, Angular, etc. is subject to tracking across websites by Google. According to Princeton's OpenWPM project, more than half of all websites embed content hosted on Google servers.
Top 3rd-party content hosts, from Princeton's OpenWPM project. Note that most of the hosts labeled "Non-Tracking Content" are at this time subject to "crypto-cookie" tracking.


While using 3rd party content hosted by Google was always problematic for privacy-sensitive sites, the impact on privacy was blunted by two factors – caching and statelessness. If a website loads fonts from fonts.gstatic.com, or style files from fonts.googleapis.com, the files are cached by the browser and only loaded once per day. Before the rollout of crypto-cookies, Google had no way to connect one request for a font file with the next – the request was stateless; the domains never set cookies. In fact, Google says:
Use of Google Fonts is unauthenticated. No cookies are sent by website visitors to the Google Fonts API. Requests to the Google Fonts API are made to resource-specific domains, such as fonts.googleapis.com or fonts.gstatic.com, so that your requests for fonts are separate from and do not contain any credentials you send to google.com while using other Google services that are authenticated, such as Gmail. 
But if you use Chrome, your requests for these font files are no longer stateless. Google can follow you from one website to the next, without using conventional tracking cookies.

It gets worse. Crypto-cookies aren't yet recognized by privacy plugins like Privacy Badger, so you can be tracked even though you're trying not to be. The TokBind RFC also includes a feature called "Referred Token Binding", which is meant to allow federated authentication (so you can sign into one site and be recognized by another). In the hands of the advertising industry, this will get used to share crypto-cookies across domains.

To be fair, there's nothing in the crypto-cookie technology itself that makes the privacy situation any different from the status quo. But as the tracking mechanism moves into the web security layer, control of tracking moves away from the application layer. It's entirely possible that the parts of Google running services like gstatic.com and googleapis.com have not realized that their infrastructure has started tracking users. If so, we'll eventually see the tracking turned off. It's also possible that this is all part of Google's evil master plan for better advertising, but I'm guessing it's just a deployment mistake.

So far, not many companies have deployed crypto-cookie technology on the server side. In addition to Google and Microsoft, I find a few advertising companies that are using it. Chrome and Edge are the only client-side implementations I know of.

For now, web developers who are concerned about user privacy can no longer ignore the risks of embedding third party content. Web users concerned about being tracked might want to use Firefox for a while.

Notes:

  1. This blog is hosted on a Google service, so assume you're being watched. Hi Google!
  2. OS X Chrome saves the crypto-cookies in an SQLite file at "~/Library/Application Support/Google/Chrome/Default/Origin Bound Certs"; a sketch for peeking inside follows these notes.
  3. I've filed bug reports/issues for Google Fonts, Google Chrome, and Privacy Badger. 
  4. Dirk Balfanz, one of the engineers behind TokBind, has a really good website that explains the ins and outs of what I call crypto-cookies.
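For the curious, here's a quick Python sketch for peeking inside the file mentioned in note 2. As far as I know the schema is undocumented, so it lists whatever tables the file contains rather than assuming their names.

    # List the tables in Chrome's "Origin Bound Certs" store (OS X path).
    # Copy the file first if Chrome has it locked.
    import os
    import sqlite3

    path = os.path.expanduser(
        "~/Library/Application Support/Google/Chrome/Default/"
        "Origin Bound Certs")
    con = sqlite3.connect(path)
    for (name,) in con.execute(
            "SELECT name FROM sqlite_master WHERE type='table'"):
        count = con.execute(f'SELECT COUNT(*) FROM "{name}"').fetchone()[0]
        print(f"{name}: {count} rows")
    con.close()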


Thursday, December 22, 2016

How to check if your library is leaking catalog searches to Amazon

I've been writing about privacy in libraries for a while now, and I get a bit down sometimes because progress is so slow. I've come to realize that part of the problem is that the issues are sometimes really complex and technical; people just don't believe that the web works the way it does, violating user privacy at every opportunity.

Content embedded in websites is a huge source of privacy leakage in library services. Cover images can be particularly problematic. I've written before that, without meaning to, many libraries send data to Amazon about the books a user is searching for; cover images are almost always the culprit. I've been reporting this issue to the library automation companies that enable this, but a year and a half later, nothing has changed. (I understand that "discovery" services such as Primo/Summon even include config checkboxes that make this easy to do; the companies say this is what their customers want.)

Two indications that a third-party cover image is a privacy problem are:
  1. the provider sets tracking cookies on the hostname serving the content.
  2. the provider collects personal information, for example as part of commerce. 
For example, covers served by Amazon send a bonanza of actionable intelligence to Amazon.
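If you want to check indication #1 for yourself, here's a minimal sketch. The ISBN-based URL pattern is my assumption based on commonly cited Amazon cover URLs (and the ISBN is a placeholder); substitute whatever cover URL your catalog actually embeds.

    # Does the host serving a cover image try to set cookies?
    import requests

    # Assumed/illustrative URL pattern; use a real cover URL from your
    # catalog's HTML instead.
    cover_url = "http://images.amazon.com/images/P/0123456789.01.MZZZZZZZ.jpg"
    resp = requests.get(cover_url)
    print("status:", resp.status_code)
    print("Set-Cookie:", resp.headers.get("Set-Cookie", "(none)"))
    # The other half of the leak: any cookies you already hold for this
    # domain ride along automatically when your browser makes the request.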

Here's how to tell if your library is sending Amazon your library search data.

Setup

You'll need a web browser equipped with developer tools; I use Chrome. Firefox should work, too.

Log into Amazon.com. They will give you a tracking cookie that identifies you. If you buy something, they'll have your credit card number, your physical and electronic addresses, records about the stuff you buy, and a big chunk of your web browsing history on websites that offer affiliate linking. These cookies are used to optimize the advertisements you're shown around the web.

To see your Amazon cookies, go to Preferences > Settings. Click "Show advanced settings..." (It's hiding at the bottom.)

Click the "Content settings..." button.

Now click the "All cookies and site data" button.

In the "Search cookies" box, type "amazon". Chances are, you'll see something like this.

I've got 65 cookies for "amazon.com"!

If you remove all the cookies and then go back to Amazon, you'll get 15 fresh cookies, most of them set to last for 20 years. Amazon knows who I am even if I delete all the cookies except "x-main".
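If you'd rather skip the clicking, here's a sketch that takes the same census straight from Chrome's cookie store on OS X. Fair warning: the cookies table and its host_key column are my assumptions about Chrome's undocumented schema, which changes between versions; quit Chrome or copy the file first if it's locked.

    # Count Amazon cookies in Chrome's cookie database (OS X path).
    import os
    import sqlite3

    path = os.path.expanduser(
        "~/Library/Application Support/Google/Chrome/Default/Cookies")
    con = sqlite3.connect(path)
    rows = con.execute(
        "SELECT host_key, name FROM cookies "       # assumed schema
        "WHERE host_key LIKE '%amazon%' ORDER BY host_key").fetchall()
    for host, name in rows:
        print(host, name)
    print(len(rows), "amazon cookies")
    con.close()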

Test the Library

Now it's time to find a library search box. For demonstration purposes, I'll use Harvard's "Hollis" catalog. I would get similar results at 36 different ARL libraries, but Harvard has lots of books and returns plenty of results. In the past, I've used What to expect as my search string, but just to make a point, I'll use Killing Trump, a book that Bill O'Reilly hasn't written yet.

Once you've executed your search, choose View > Developer > Developer Tools

Click on the "Sources" tab and to see the requests made of "images.amazon.com". Amazon has returned 1x1 clear pixels for three requested covers. The covers are requested by ISBN. But that's not all the information contained in the cover request.

To see the cover request, click on the "Network" tab and hit reload. You can see that the cover images were requested by a javascript called "primo_library_web". (Hollis is an instance of Ex Libris' Primo discovery service.)

Now click on the request you're interested in. Look at the request headers.


There are two of interest: the "Cookie" and the "Referer".

The "Cookie" sent to Amazon is this:
x-main="oO@WgrX2LoaTFJeRfVIWNu1Hx?a1Mt0s";
skin=noskin;
session-token="bcgYhb7dksVolyQIRy4abz1kCvlXoYGNUM5gZe9z4pV75B53o/4Bs6cv1Plr4INdSFTkEPBV1pm74vGkGGd0HHLb9cMvu9bp3qekVLaboQtTr+gtC90lOFvJwXDM4Fpqi6bEbmv3lCqYC5FDhDKZQp1v8DlYr8ZdJJBP5lwEu2a+OSXbJhfVFnb3860I1i3DWntYyU1ip0s=";
x-wl-uid=1OgIBsslBlOoArUsYcVdZ0IESKFUYR0iZ3fLcjTXQ1PyTMaFdjy6gB9uaILvMGaN9I+mRtJmbSFwNKfMRJWX7jg==;
ubid-main=156-1472903-4100903;
session-id-time=2082787201l;
session-id=161-0692439-8899146
Note that Amazon can tell who I am from the x-main cookie alone. In the privacy biz, this is known as "PII" or personally identifiable information.

The "Referer" sent to Amazon is this:
http://hollis.harvard.edu/primo_library/libweb/action/search.do?fn=search&ct=search&initialSearch=true&mode=Basic&tab=everything&indx=1&dum=true&srt=rank&vid=HVD&frbg=&tb=t&vl%28freeText0%29=killing+trump&scp.scps=scope%3A%28HVD_FGDC%29%2Cscope%3A%28HVD%29%2Cscope%3A%28HVD_VIA%29%2Cprimo_central_multiple_fe&vl%28394521272UI1%29=all_items&vl%281UI0%29=contains&vl%2851615747UI0%29=any&vl%2851615747UI0%29=title&vl%2851615747UI0%29=any
To put this plainly, my entire search session, including my search string killing trump, is sent to Amazon, alongside my personal information, whether I like it or not. I don't know what Amazon does with this information. I assume if a government actor wants my search history, they will get it from Amazon without much fuss.
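To see how little work the recipient has to do, here's a sketch that pulls the search string back out of a Referer like the one above (abridged here to its interesting parameters):

    # Recover the search term from the leaked Referer URL.
    from urllib.parse import urlparse, parse_qs

    referer = ("http://hollis.harvard.edu/primo_library/libweb/action/"
               "search.do?fn=search&ct=search&mode=Basic&vid=HVD"
               "&vl%28freeText0%29=killing+trump")
    params = parse_qs(urlparse(referer).query)
    print(params["vl(freeText0)"])  # -> ['killing trump']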

I don't like it.

Rant

[I wrote a rant, but I decided to save it for a future post if needed.] Anyone want a Cookie?

Notes 12/23/2016:


  1. As Keith Jenkins noted, users can configure Chrome and Safari to block 3rd-party cookies. Firefox won't block Amazon cookies, however. And some libraries advise users not to block 3rd-party cookies because doing so can cause problems with proxy authentication.
  2. If Chrome's network panel tells you "Provisional headers are shown" this means it doesn't know what request headers were really sent because another plugin is modifying headers. So if you have HTTPS Everywhere, Ghostery, Adblock, or Privacy Badger installed, you may not be able to use Chrome developer tools to see request headers. Thanks to Scott Carlson for the heads up.
  3. Cover images from Google leak similar data, as does use of Google Analytics. As do Facebook Like buttons. Et cetera.
  4. Thanks to Sarah Houghton for suggesting that I write this up.

Friday, October 14, 2016

Maybe IDPF and W3C should *compete* in eBook Standards

A controversy has been brewing in the world of eBook standards. The International Digital Publishing Forum (IDPF) and the World Wide Web Consortium (W3C) have proposed to combine. At first glance, this seems a sensible thing to do; IDPF's EPUB work leans heavily on W3C's HTML5 standard, and IDPF has been over-achieving with limited infrastructure and resources.

Not everyone I've talked to thinks the combination is a good idea. In the publishing world, there is fear that the giants of the internet who dominate the W3C will not be responsive to the idiosyncratic needs of more traditional publishing businesses. On the other side, there is fear that the work of IDPF and Readium on "Lightweight Content Protection" (a.k.a. Digital Rights Management) will be another step towards "locking down the web" (see the controversy about "Encrypted Media Extensions").

What's more, a peek into the HTML5 development process reveals a complicated history. The HTML5 that we have today derives from a group of developers (the WHATWG) who got sick of the W3C's processes and dependencies and broke away from W3C. Politics above my pay grade occurred and the breakaway effort was folded back into W3C as a "Community Group". So now we have two, slightly different versions of HTML: the "standard" HTML5 and WHATWG's HTML "Living Standard". That's also why HTML5 omitted much of W3C's Semantic Web development work, such as RDFa.

Amazon (not a member of either IDPF or W3C) is the elephant in the room. They take advantage of IDPF's work in a backhanded way. Instead of supporting the EPUB standard in their Kindle devices, they use proprietary formats under their exclusive control. But they accept EPUB files in their content ingest process and thus extract huge benefit from EPUB standardization. This puts the advancement of EPUB in a difficult position. New features added to EPUB have no effect on the majority of ebook users because Amazon just converts everything to a proprietary format.

Last month, the W3C published its vision for eBook standards, in the form of an innocuously titled "Portable Web Publications Use Cases and Requirements". For whatever reason, this got rather limited notice or comment, considering that it could be the basis for the entire digital book industry. Incredibly, the word "ebook" appears not once in the entire document. "EPUB" appears just once, in the phrase "This document is also available in this non-normative format: ePub". But read the document, and it's clear that "Portable Web Publication" is intended to be the new standard for ebooks. For example, the PWP (can we just pronounce that "puup"?) "must provide the possibility to switch to a paginated view". The PWP (say it, "puup") needs a "default reading order", i.e. a table of contents. And of course the PWP has to support digital rights management: "A PWP should allow for access control and write protections of the resource." Under the oblique requirement that "The distribution of PWPs should conform to the standard processes and expectations of commercial publishing channels," we discover that this means "Alice acquires a PWP through a subscription service and downloads it. When, later on, she decides to unsubscribe from the service, this PWP becomes unavailable to her." So make no mistake, PWP is meant to be EPUB 4 (or maybe ePub4, to use the non-normative capitalization).

There's a lot of unalloyed good stuff there, too. The issues of making web publications work well offline (an essential ingredient for archiving them) are technical, difficult and subtle, and W3C's document does a good job of flushing them out. There's a good start (albeit limited) on archiving issues for web publications. But nowhere in the statement of "use cases and requirements" is there a use case for low cost PWP production or for efficient conversion from other formats, despite the statement that PWPs "should be able to make use of all facilities offered by the [Open Web Platform]".

The proposed merger of IDPF and W3C raises the question: who gets to decide what "the ebook" will become? It's an important question, and the answer eventually has to be open rather than proprietary. If a combined IDPF and W3C can get the support of Amazon in open standards development, then everyone will benefit. But if not, a divergence is inevitable. The publishing industry needs to sustain their business; for that, they need an open standard for content optimized to feed supply chains like Amazon's. I'm not sure that's quite what W3C has in mind.

I think ebooks are more important than just the commercial book publishing industry. The world needs ways to deliver portable content that don't run through the Amazon tollgates. For that we need innovation that's as unconstrained and disruptive as the rest of the internet. The proposed combination of IDPF and W3C needs to be examined for its effects on innovation and competition.

Philip K. Dick's Mr. Robot is one of the stories in Imagination: Stories of Science and Fantasy, January 1953. It is available as an ebook from Project Gutenberg and from GITenberg.
My guess is that Amazon is not going to participate in open ebook standards development. That means that two different standards development efforts are needed. Publishers need a content markup format that plays well with whatever Amazon comes up with. But there also needs to be a way for the industry to innovate and compete with Amazon on ebook UI and features. That's a very different development project, and it needs a group more like WHATWG to nurture it. Maybe the W3C can fold that sort of innovation into its unruly stable of standards efforts.

I worry that by combining with IDPF, the W3C work on portable content will be chained to the supply-chain needs of today's publishing industry, and no one will take up the banner of open innovation for ebooks. But it's also possible that the combined resources of IDPF and W3C will catalyze the development of open alternatives for the ebook of tomorrow.

Is that too much to hope?

Wednesday, September 7, 2016

Start Saying Goodbye to eBook Pagination

Book pages may be the most unfortunate things ever invented by the reading-industrial complex. No one knows the who or how of their invention. The Egyptians and the Chinese didn't need pages because they sensibly wrote in vertical lines. It must have been the Greeks who invented and refined the page.
Egyptian scroll held in the UNE Antiquities Museum. CC BY unephotos.
In my imagination, some scribes invented the page in a dark and damp scriptorium after arguing about landscape versus portrait on their medieval iScrolls. They didn't worry about user experience. The debate must have been ergonomics versus cognitive load. Opening the scroll side-to-side allowed the monk to rest comfortably when the scribing got boring, and the additional brainwork of figuring out how to start a new column was probably a great relief from monotony. That and drop-caps. The codex probably came about when the scribes ran out of white-out.
Scroll of the Book of Esther, Seville, Spain.
Technical debt from this bad decision lingers. Consider the horrors engendered by pagination:
  • We have to break pages in the middle of sentences?!?!?!? Those exasperating friends of yours who stop their sentences in mid-air due to lack of interest or memory probably work as professional paginators. 
  • When our pagination leaves just one line of a paragraph at the top of a page, you have what's known as a widow. Pagination is sexist as well as exasperating.
  • Don't you hate it when a wide table is printed sideways? Any engineer can see this is a kludgy result of choosing the wrong paper size.
To be fair, having pages is sometimes advantageous.
  • You can put numbers on the pages. This allows an entire class of students to turn to the same page. It also allows textbook companies to force students to buy the most recent edition of their exorbitantly priced textbooks. The ease of shifting page numbers spares the textbook company the huge expense of making actual revisions to the text.
  • Pages have corners, convenient for folding.
  • You can tell often-read pages in a book by looking for finger-grease accumulation on the page-edges. I really hope that stuff is just finger-grease.
  • You can rip out important pages. Because you can't keep a library book forever.
  • Without pages in books, how would you press flowers?
  • With some careful bending, you can make a cute heart shape.
Definition of love by Billy Rowlinson, on Flickr; CC-BY 

While putting toes in the water of our ebook future, we still cling to pages like Linus and his blankie. At first, this was useful. Users who had never seen an ebook could guess how they worked. Early e-ink based e-reading devices had great contrast and readability but slow refresh rates. "Turning" a page was a giant hack that turned a technical liability of slow refresh into a whizzy dissolve feature. Apple's iBooks app for the iPad appeared at the zenith of skeuomorphic UI design fashion and its too-cute page-turn animation is probably why the DOJ took Apple to court. (Anti-trust?? Give me a break!)

But seriously, automated pagination is hard. I remember my first adventures with TeX in the late '80s: half my time was spent wrestling with unfortunate, inexplicable pagination and equation bounding boxes. (The other half was spent being mesmerized at seeing my own words in typeset glory.)

The thing that started me on this rant is the recent publication of the draft EPUB 3.1 specification, which has nothing wrong with it but makes me sad anyway. It's sad because the vast majority of ebook lovers will never be able to take advantage of all the good things in it. And it's not just because of Amazon and its proprietary Kindle formats. It's the tug of war between the page-oriented past of books and the web-oriented future of ebooks. EPUB's role is to leverage web standards while preserving the publishing industry's investment in print-compatible text. Mission accomplished, as much as you can expect.

What has not been part of EPUB's mission is to leverage the web's amazingly rapid innovation in user interfaces. EPUB is essentially a website packaged into a compressed archive. But over the last eight years, innovations in "responsive" web reading UI, driven by the need of websites to work on both desktop and mobile devices, have been magical. Tap, scroll and swipe are now universally understood, and websites that don't work that way seem buggy or weird. Websites adjust to your screen size and orientation. They're pretty easy to implement, because of javascript/css frameworks such as Bootstrap and Foundation. They're perfect for ebooks, except... the affordances provided by these responsive design frameworks conflict with the built-in affordances of ebook readers (such as pagination). The result has been that, from the UI point of view, EPUBs are zipped-up turn-of-the-century websites with added pagination.
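(If you've never looked inside one, the website-in-an-archive claim is easy to verify; here's a sketch, with book.epub standing in for any EPUB you have on hand.)

    # An EPUB is a zip: list its contents to see the website inside.
    import zipfile

    with zipfile.ZipFile("book.epub") as z:
        for name in z.namelist():
            print(name)  # mimetype, META-INF/container.xml, XHTML, CSS...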

That result is what makes me sad. Responsive, touch-based web designs, not container-paginated EPUBs, are the future of ebooks. The first step (which Apple took two years ago) is to stop resisting the scroll, and start saying goodbye to pagination.

Sunday, July 31, 2016

Entitled: The art of naming without further elaboration or qualification.

As I begin data herding for our project Mapping the Free Ebook Supply Chain, I've been thinking a lot about titles and subtitles, and it got me wondering: what effect do subtitles have on usage, and should a book's open-access status affect its naming strategy? We are awash in click-bait titles for articles on the web; should ebook titles be clickbaity, too? To what extent should ebook titles be search-engine optimized, and which search engines should they be optimized for?

Here are some examples of titles that I've looked at recently, along with my non-specialist's reactions:
Title: Bigger than You: Big Data and Obesity
Subtitle: An Inquiry toward Decelerationist Aesthetics
The title is really excellent; it gives a flavor of what the book's about and piques my interest because I'm curious what obesity and big data might have to do with each other. The subtitle is a huge turn-off. It screams "you will hate this book unless you already know about decelerationist aesthetics" (and I don't).
(from Punctum Books)



Title: Web Writing
Subtitle: Why and How for Liberal Arts Teaching and Learning
The title is blah, and I'm not sure whether the book consists of web writing or is something about how to write for or about the web. The subtitle at least clues me in to the genre, but fails to excite me. It also suggests to me that the people who came up with the name might not be experts in writing coherent, informative and effective titles for the web.
From University of Michigan Press



Title: DOOM
Subtitle: SCARYDARKFAST

If I saw the title alone I would probably mistake it for something it's not. An apocalyptic novel, perhaps. And why is it all caps? The subtitle is very cool though, I'd click to see what it means.
From Digital Culture Books




It's important to understand how title metadata gets used in the real world. Because the title and subtitle get transported in different metadata fields, using a subtitle cedes some control over title presentation to the websites that display it. For example, Unglue.it's data model has a single title field, so if we get both title and subtitle in a metadata feed, we squash them together in the title field. Unless we don't, because some of our incoming feeds don't include the subtitle. Different websites do different things: Amazon uses the full title, but some sites omit the subtitle until you get to the detail page. So you should have a good reason to use a subtitle as opposed to just putting the words from the subtitle in the title field. DOOM: SCARYDARKFAST is a much better title than DOOM. (The DOOM in the book turns out to be the game DOOM, which I would have guessed from the all-caps if I had ever played DOOM.) And you can't depend on sites preserving your capitalization; Amazon presents several versions of DOOM: SCARYDARKFAST.
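Here's an illustration of that squash (my sketch, not Unglue.it's actual code):

    # Flatten title + subtitle into a single title field.
    def display_title(title, subtitle=None):
        """Combine the two fields the way a one-field data model must."""
        return f"{title}: {subtitle}" if subtitle else title

    print(display_title("DOOM", "SCARYDARKFAST"))  # DOOM: SCARYDARKFAST
    print(display_title("DOOM"))  # what you get if the feed dropped the subtitle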

Another thing to think about is the "marketing funnel". This is the idea that in order to make a sale or to have an impact, your product has to pass through a sequence of hurdles, each step yielding a market that's a fraction of the previous step's. So for ebooks, you have to first get them selected into channels, each of which might be a website. Then a fraction of users searching those websites might see your ebook's title (or cover), for example in a search result. Then a fraction of those users might decide to click on the title, to see a detail page, at which point there had better be an abstract or the potential reader becomes a non-reader.

Having reached a detail page, some fraction of potential readers (or purchase agents) will be enticed to buy or download the ebook. Any "friction" in this process is to be avoided. If you're just trying to sell the ebook, you're done. But if you're interested in impact, you're still not done, because even if a potential reader has downloaded the ebook, there's no impact until the ebook gets used. The title and cover continue to be important because the user is often saving the ebook for later use. If the ebook doesn't open to something interesting and useful, a free ebook will often be discarded or put aside.
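The compounding is easy to underestimate, so here's the funnel as arithmetic. Every rate below is hypothetical, purely to show the multiplicative effect:

    # Each stage keeps a fraction of the previous one; losses compound.
    stages = {
        "selected into channel": 0.5,   # hypothetical rates throughout
        "title seen in search":  0.1,
        "detail page clicked":   0.2,
        "downloaded":            0.5,
        "actually opened":       0.4,
    }
    reach = 1.0
    for stage, rate in stages.items():
        reach *= rate
        print(f"{stage:>22}: {reach:.4f} of the original audience")
    # These five stages leave just 0.2% of the starting audience.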

Bigger than You's strong title should get it the clicks, but the subtitle doesn't help much at any step of the marketing funnel. "Aesthetics" might help it in searches; it's possible that even the book's author has never ever entered "Decelerationist" as a search term. The book's abstract, not the subtitle, needs to do the heavy lifting of driving purchases or downloads.

The first sentence of "Web Writing" suggests to me that a better title might have been:
"Rebooting how we think about the Internet in higher education."
But check back in a couple of months. Once we start looking at the data on usage, we might find that what I've written here is completely wrong, and that Web Writing was the best title of them all!

Notes:
1. The title of this blog post is the creation of Adrian Short, who seems to have left Twitter.