
Some notes about IIIF annotation rendering and crossLinks. #6

aeschylus opened this issue Oct 21, 2017 · 0 comments

Rendering IIIF Annotations

What does this have to do with trails and the memex? With enough layers of abstraction built on top of them, IIIF Annotations can serve as a recognised, interoperable representation of the connective tissue between documents and objects in a memex-like web. The Web Annotation specifications have enough semantic flexibility and expressive power to describe most of the navigational subtleties of "trails" that frame particular segments or aspects of web resources, and to build a rich navigational experience "through" a visible "web" of these resources, all while maintaining legibility and avoiding the first resort of drawing a node-link diagram.

Single Annotation

Retrieving an image of the canvas

This is currently not supported by the existing standard without dereferencing and processing the entire manifest. For this reason, it would be preferable either to require canvas URIs (@ids) to be dereferenceable, or for annotation JSON to be presented with the expanded canvas JSON embedded within it, eliminating that network traversal.
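For illustration, a sketch of what that embedding might look like, using the Open Annotation / Presentation 2.x vocabulary this issue assumes; the URIs and the exact placement of the embedded canvas are hypothetical:

```typescript
// Hypothetical annotation with its target canvas expanded inline, so a
// renderer can learn the canvas dimensions without fetching the manifest.
const annotation = {
  "@context": "http://iiif.io/api/presentation/2/context.json",
  "@id": "https://example.org/anno/1",
  "@type": "oa:Annotation",
  motivation: "oa:commenting",
  resource: { "@type": "cnt:ContentAsText", chars: "A note on this region" },
  on: {
    "@type": "oa:SpecificResource",
    // The proposal: the canvas's own JSON travels with the annotation,
    // so its height/width are available without fetching the manifest.
    full: {
      "@id": "https://example.org/canvas/p1",
      "@type": "sc:Canvas",
      height: 2400,
      width: 1800,
    },
    selector: {
      "@type": "oa:FragmentSelector",
      value: "xywh=400,300,800,600",
    },
  },
};
```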

By fudging the URI scheme of the canvas IDs into a known URL pattern for "page" images on a particular system, it is possible to write magic URLs that happen to return page images, but this is not a sustainable solution: it is not standards-compliant, and any code written against it will be depressingly system-specific.
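For example, the kind of system-specific hack being warned against might look like this (the URL pattern is invented):

```typescript
// Anti-pattern sketch: assume canvas @ids on one particular system follow a
// known pattern, and rewrite them into "magic" page-image URLs. Any such
// code breaks as soon as it meets a canvas from a different system.
function canvasIdToPageImage(canvasId: string): string {
  // e.g. "https://example.org/iiif/book1/canvas/p1"
  //   -> "https://example.org/images/book1/p1/full/300,/0/default.jpg"
  const match = canvasId.match(/\/iiif\/([^/]+)\/canvas\/([^/#]+)/);
  if (!match) throw new Error("unrecognised canvas URI scheme");
  const [, book, page] = match;
  return `https://example.org/images/${book}/${page}/full/300,/0/default.jpg`;
}
```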

Retrieving an image of the annotated region

As with retrieving the canvas image above, it is not currently possible to draw the annotated region onto a thumbnail without dereferencing the entire manifest, since we cannot know the canvas's original height and width, and therefore cannot scale the annotation's fragment dimensions in accordance with the canvas's original size.

Example:
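To make the dependency concrete, a minimal sketch of the scaling arithmetic, assuming the annotation targets an `xywh` fragment and that the canvas's full width is somehow known (all names are hypothetical):

```typescript
interface Region { x: number; y: number; w: number; h: number; }

// Pull the xywh fragment out of an annotation target URI,
// e.g. "https://example.org/canvas/p1#xywh=400,300,800,600".
function parseXywh(target: string): Region {
  const match = target.match(/#xywh=(\d+),(\d+),(\d+),(\d+)/);
  if (!match) throw new Error("target has no xywh fragment");
  const [x, y, w, h] = match.slice(1).map(Number);
  return { x, y, w, h };
}

// Scale a region from canvas coordinates into thumbnail pixels. Without
// canvasWidth (which lives only in the manifest) the ratio is unknowable,
// which is exactly the problem described above.
function scaleToThumbnail(region: Region, canvasWidth: number, thumbWidth: number): Region {
  const ratio = thumbWidth / canvasWidth;
  return { x: region.x * ratio, y: region.y * ratio, w: region.w * ratio, h: region.h * ratio };
}
```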

No Rendering Possible Without "Within"

All of the above processes require dereferencing the entire manifest, and it is not possible to "outwardly discover" this manifest unless it is included in the within property of the annotation itself. If canvas URIs were dereferenceable, it might also be possible to retrieve the manifest from the canvas's own within field, if provided. Without it, there can be no outward navigation from a dereferenced annotation to the original context it augments.
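As a sketch, the kind of within chain that would make outward discovery possible (URIs hypothetical):

```typescript
// An annotation that carries a `within` pointer back to its manifest,
// enabling navigation from a dereferenced annotation to its original context.
const discoverable = {
  "@id": "https://example.org/anno/1",
  "@type": "oa:Annotation",
  on: "https://example.org/canvas/p1#xywh=400,300,800,600",
  within: {
    "@id": "https://example.org/manifest.json",
    "@type": "sc:Manifest",
  },
};
```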

Thumbnails representing multi-layered canvases and regions

The annotation region does not technically contain any imagery. Rather, it identifies a sub-region of the abstract coordinate space of the canvas. To represent the annotation, however, we really want to know more about what the user was seeing at that time. At the very least, we need a way to reconstruct a thumbnail from the default detail images attached to the canvas, and at best, we would want some rich description of which "layers" were being viewed when a given comment or transcription was made on the canvas. Along these lines, it would be useful to have a service or scripting tool to construct these thumbnails, and a way to identify them.
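One plausible building block for such a tool: when a canvas's default image is backed by a IIIF Image API service, a cropped thumbnail can be requested directly through the standard URI template, assuming the canvas coordinate space matches the image's pixel dimensions (the service URL below is hypothetical):

```typescript
// Build a IIIF Image API URL for a region thumbnail:
//   {service}/{region}/{size}/{rotation}/{quality}.{format}
function regionThumbnailUrl(
  service: string,                                        // e.g. "https://example.org/iiif/page1"
  region: { x: number; y: number; w: number; h: number }, // in image pixels
  width: number,                                          // desired thumbnail width in pixels
): string {
  const { x, y, w, h } = region;
  return `${service}/${x},${y},${w},${h}/${width},/0/default.jpg`;
}
```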

In general, the lack of thumbnails on the annotation serialisation itself creates brittle work for the programmer, and the browser, to traverse the graph and either locate or reconstruct them. Perhaps it would be a useful piece of advice to add thumbnails to the JSON serialisation itself, both for the canvas and for the region, with the appropriate visual adjustments and layer compositing already complete. This is not hard to achieve technically on a one-off basis, but standardising it requires authoring-time image generation and then subsequent storage and hosting by some server-side process. Of course, in a distributed context (ipfs), this point is moot: both client-composed thumbnails and their linked annotation serialisations are stored in the mesh as peer-hosted linked data.
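A sketch of what pre-baked thumbnails on the serialisation might look like; the nested thumbnail shape is an invention for illustration, not part of any current spec:

```typescript
// Hypothetical: an annotation serialisation carrying pre-composited
// thumbnails for both the whole canvas and the annotated region.
const annotationWithThumbnails = {
  "@id": "https://example.org/anno/1",
  "@type": "oa:Annotation",
  on: "https://example.org/canvas/p1#xywh=400,300,800,600",
  thumbnail: {
    // Full canvas with the layers the user was viewing already composited.
    canvas: "https://example.org/thumbs/anno1-canvas.jpg",
    // The same composite, cropped to the xywh region.
    region: "https://example.org/thumbs/anno1-region.jpg",
  },
};
```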

Drawing a Region over the source Thumbnail

In many rendering scenarios, we want to show both the extracted canvas region as well as the source context with the region and connection visually drawn as a box and a line.

Example:

As mentioned above, doing this accurately requires dereferencing the entire manifest (hence "within"), and/or processing the canvas dimensions that apply to the source thumbnail.
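A minimal browser-side sketch of that rendering, reusing the scaled region from the earlier snippet (names hypothetical):

```typescript
// Draw the source thumbnail with the annotated region outlined as a box.
// `region` is assumed to be already scaled into thumbnail pixels.
function drawRegionOverlay(
  ctx: CanvasRenderingContext2D,
  thumb: HTMLImageElement,
  region: { x: number; y: number; w: number; h: number },
): void {
  ctx.drawImage(thumb, 0, 0);
  ctx.strokeStyle = "red";
  ctx.lineWidth = 2;
  ctx.strokeRect(region.x, region.y, region.w, region.h);
}
```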

Representing Crosslinks

The CrossLink Idea

Proof of Concept Data Model

Toward a More Standards-based Representation

One of the core difficulties with the above proof-of-concept approach is the need to double-enter the annotations: we need to be able to discover that an annotation points to a given annotation, as well as to point at that annotation. This leads to a situation in which both bodies point to each other in opposite directions, which works for a serialisation that only needs to act on one side of the link at a time, but could cause problems if a client modifies one side without modifying the referenced resource. A rendering of a large collection of annotations might display the linked annotations twice, and rendering them as a single annotation would require de-duplicating across an entire dataset.

An alternative is to create one overarching annotation that points at the two existing annotations or targets. This would require a compound "on" field.
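A sketch of that alternative: a single linking annotation whose on is an array of both targets. This is an extension for illustration, since the 2.x serialisations use a single-valued on; the URIs are hypothetical:

```typescript
// Hypothetical "crossLink" annotation: one resource pointing at both ends
// of the link, instead of two annotations pointing at each other.
const crossLink = {
  "@id": "https://example.org/anno/link-1",
  "@type": "oa:Annotation",
  motivation: "oa:linking",
  on: [
    "https://example.org/manifestA/canvas/p3#xywh=100,100,500,400",
    "https://example.org/manifestB/canvas/p9#xywh=250,80,300,300",
  ],
};
```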

Outward Discovery: Data Requirements

Embedding and Display Contexts

Viewing a Single Annotation at a URL

Viewing Annotations as a Stream

Clustering Annotations

Displaying Annotations in a Blog Post

Expanding Annotations in Chat Applications (Slack Expansion POC)

Expanding Annotations in a Newsfeed (Facebook, Twitter)

Enabling Ambient Search Over Annotation Collections for Composition

Layout Considerations

Interactivity, "Navigation", and Context Re-invocation

Useful Data Sources and Demonstrations
