Almost all of these documents are Jupyter notebooks, a programming environment that runs in a web browser. A notebook can contain both text in Markdown format and Python code that can be run directly as part of the workflow.
If this is your first time using Jupyter notebooks, here are a couple of great online tutorials to help you install and set up the software:
- https://reproducible-science-curriculum.github.io/workshop-RR-Jupyter/setup/
- https://programminghistorian.org/en/lessons/jupyter-notebooks
Also, remember that the applications listed are only a few examples intended as a starting point! The possibilities are far wider. We're excited to see what you do with the collections.
These tutorials are designed to help people understand the API and make use of it. The repository is intended to inspire creative uses of the Library of Congress collections, so it isn't comprehensive; instead, it highlights the most relevant aspects. If you have ideas about what other topics should be covered, or anything else that would make the API even more useful, let us know by emailing [email protected]!
An overview of how to retrieve information in a JSON format from the Library of Congress API. This tutorial sets a baseline for doing powerful data retrieval and visualization projects.
Background knowledge:
- Understand URLs for loc.gov API requests and how to modify them
Applications:
- Get and visualize data
- Show images from collections
- Make cool projects, like this clock built from collection item names and this political cartoon visualizer - for more inspiration, see the experiments that LC Labs has been working on!
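As a quick illustration, here is a minimal sketch of a loc.gov JSON request using Python's requests library; the search term is just an example, and `fo=json` is the parameter that asks the API for JSON:

```python
import requests

# Ask the loc.gov API for JSON by adding fo=json to a search URL.
# The search term "baseball" is just an example.
url = "https://www.loc.gov/search/"
params = {"q": "baseball", "fo": "json"}

data = requests.get(url, params=params).json()

# Each result is a dictionary of item metadata; print a few titles.
for item in data["results"][:5]:
    print(item.get("title"))
```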
Brief guide to searching loc.gov directly from a web browser.
Background knowledge: None
Applications: Searching the website
Describes Sitemaps and how to get information about the frequency of page updates.
Background knowledge: None, though a basic understanding of Sitemaps and their formatting is recommended
Applications:
- Determining how often collections or parts of the website are updated
- Finding the number of items in a collection or sub-items on a page of the site
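As a rough sketch of this kind of check, the snippet below fetches and parses a sitemap with Python's standard XML tools; the sitemap URL is an assumption, so substitute the one for the collection or section of the site you care about:

```python
import requests
import xml.etree.ElementTree as ET

# The sitemap URL below is illustrative; substitute the sitemap for the
# collection or section of the site you are interested in.
sitemap_url = "https://www.loc.gov/collections/sitemap.xml"

ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
root = ET.fromstring(requests.get(sitemap_url).content)

# Count the listed pages and report how often each says it is updated.
pages = root.findall("sm:url", ns)
print(f"{len(pages)} pages listed")
for page in pages[:5]:
    print(page.findtext("sm:loc", namespaces=ns),
          page.findtext("sm:changefreq", namespaces=ns))
```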
How to access, display, and download images in bulk. Also provides information about what metadata is available and how to get particular details.
This tutorial is for accessing images directly via the API, so the images are generally smaller (150 px on one side) and low resolution. For accessing larger images that can be manipulated (size, rotation, crop, etc.), see the next tutorial, on IIIF.
Background knowledge:
- Understand URLs for loc.gov API requests and how to modify them
Applications:
- Find URLs for images
- Download images in bulk
- Get information about the images, such as copyright and usage details, dates, locations, etc.
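A minimal sketch of this workflow, assuming the JSON results carry an `image_url` field of thumbnail URLs (as loc.gov responses generally do, though you should confirm against your own results), might look like:

```python
import requests

# Request a collection as JSON; the collection name is illustrative.
url = "https://www.loc.gov/collections/baseball-cards/"
data = requests.get(url, params={"fo": "json"}).json()

# Results generally include an "image_url" list of thumbnail URLs
# (check the JSON for your own results to confirm the field name).
for i, item in enumerate(data["results"][:5]):
    image_urls = item.get("image_url", [])
    if not image_urls:
        continue
    thumb = image_urls[0]
    if thumb.startswith("//"):  # some URLs are protocol-relative
        thumb = "https:" + thumb
    with open(f"image_{i}.jpg", "wb") as f:
        f.write(requests.get(thumb).content)
```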
How to scale, rotate, reflect, crop, and otherwise manipulate images using the IIIF API.
IIIF stands for the International Image Interoperability Framework. It is a standardized way of delivering images that is used by many libraries, museums, and digital archives.
Background knowledge:
- Where to find the IIIF URLs (can be found in image metadata accessed via the JSON API)
- (optional) Details of IIIF URL structure
Applications:
- Get higher resolution images and manipulate them
- Display images - individually or in galleries
- Can also be done in bulk
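Because IIIF manipulations are expressed entirely in the URL, a sketch only needs string formatting; the base URL below is a placeholder standing in for a real IIIF service URL found in the image metadata:

```python
# An IIIF Image API request is just a URL with the pattern:
#   {base}/{region}/{size}/{rotation}/{quality}.{format}
# The base URL below is a placeholder; real IIIF base URLs appear in the
# image metadata returned by the JSON API.
base = "https://example.loc.gov/iiif/some-image-id"

full     = f"{base}/full/full/0/default.jpg"         # whole image, full size
half     = f"{base}/full/pct:50/0/default.jpg"       # scaled to 50%
rotated  = f"{base}/full/full/90/default.jpg"        # rotated 90 degrees
mirrored = f"{base}/full/full/!0/default.jpg"        # mirrored (reflected)
cropped  = f"{base}/0,0,500,500/full/0/default.jpg"  # 500x500 crop from top-left
```

Each of these URLs can be fetched directly in a browser or with requests, which is what makes IIIF convenient for bulk work.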
How to find and analyze the colors in an image. This tutorial uses k-means clustering to analyze and group the pixel values into 6 colors per image, but you can adjust that as needed.
Background knowledge:
- Using links in HTML
- How to create an SVG rectangle in HTML
Applications:
- Visualize colors in Library of Congress collections images
- Categorize or search images by color
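A minimal sketch of this approach using scikit-learn's KMeans (one common implementation; the tutorial's own code may differ) could look like:

```python
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

# Load an image (the filename is a placeholder) and flatten its pixels
# into an (n_pixels, 3) array of RGB values.
pixels = np.array(Image.open("example.jpg").convert("RGB")).reshape(-1, 3)

# Group the pixels into 6 clusters, as in the tutorial; adjust
# n_clusters as needed.
kmeans = KMeans(n_clusters=6, n_init=10).fit(pixels)

# The cluster centers are the image's 6 representative colors.
for r, g, b in kmeans.cluster_centers_.astype(int):
    print(f"#{r:02x}{g:02x}{b:02x}")
```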
Similar to the Accessing Images for Data Analysis notebook; provides code for accessing and downloading images specifically from the Lessing J. Rosenwald Collection.
See the background knowledge and applications for the Accessing Images notebook.
Demonstrates how to retrieve geographic data (latitude and longitude) and plot it onto a map. This tutorial focuses on items in the Historic American Engineering Record (HAER). Geographic data is stored differently across collections, so some collections may require more data cleaning and manipulation before geographic visualization.
Background knowledge:
- Understand URLs for loc.gov API requests and how to modify them
- (optional) Mapping with the Python folium library
- (optional) Working with pandas DataFrames
Applications:
- Map item locations
- Analyze geographic data and connect it to other information, such as date
- Compare geographies across collections
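For example, a minimal folium sketch with hypothetical coordinates (real ones would come from item metadata via the JSON API) looks like:

```python
import folium

# Hypothetical (latitude, longitude, label) records; in practice these
# would be pulled from item metadata via the JSON API.
points = [
    (39.29, -76.61, "Baltimore, MD"),
    (38.91, -77.04, "Washington, DC"),
]

# Center the map near the points and drop a marker for each item.
m = folium.Map(location=[39.0, -77.0], zoom_start=7)
for lat, lon, label in points:
    folium.Marker([lat, lon], popup=label).add_to(m)

m.save("map.html")  # open in a browser to view
```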
How to query and download cartographic material. Includes:
- performing bulk downloads of cartographic materials using the loc.gov API and Python
- crafting advanced API queries for map content
- performing post-query filtering
Background knowledge: None, though it may be useful to have some familiarity with Python
Applications:
- Download and display images from the collections
- Create sets of images that can be used in a number of other applications
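A hedged sketch of such a query, using the loc.gov `c` (results per page) and `sp` (page number) parameters with an illustrative search term, might be:

```python
import requests

# Query the loc.gov maps format endpoint; the search term is
# illustrative. "c" sets results per page and "sp" selects the page.
url = "https://www.loc.gov/maps/"
params = {"q": "sanborn", "fo": "json", "c": 100, "sp": 1}
results = requests.get(url, params=params).json()["results"]

# Post-query filtering: keep only items whose date falls in the 1880s.
keep = [r for r in results if str(r.get("date", "")).startswith("188")]
print(f"{len(keep)} of {len(results)} results match")
```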
How to find, analyze, and visualize cartographic metadata. This tutorial focuses on metadata associated with the files in the Maps Downloading and Querying tutorial as well as items in the Sanborn Maps collection.
Background knowledge:
- How to install a Python package using pip or another package manager
Applications:
- Search for items within a collection/dataset that have particular locations, dates, etc.
- Analyze the different parts of the metadata (longest/shortest/average item length, most common dates or locations)
- Create charts with the data that compare all of the items
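As a rough illustration of this kind of analysis, here is a pandas sketch over a small hypothetical metadata table; real records would come from the downloads in the previous tutorial:

```python
import pandas as pd

# A hypothetical metadata table; in practice you would build the
# DataFrame from the JSON records downloaded in the previous tutorial.
df = pd.DataFrame({
    "title": ["Map of Boston", "Sanborn map, Chicago", "Harbor chart"],
    "date": [1888, 1905, 1888],
    "location": ["Boston", "Chicago", "Boston"],
})

# Simple summaries: most common dates and locations, and title lengths.
print(df["date"].value_counts())
print(df["location"].value_counts())
print(df["title"].str.len().agg(["min", "max", "mean"]))

# A quick chart comparing item counts across dates.
df["date"].value_counts().sort_index().plot(kind="bar")
```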
This tutorial provides an introduction to what information is available via the API. One notable difference: because this collection consists of newspaper pages, you can also retrieve the text of each page (captured using OCR) via the API. The tutorial also covers searching for keywords and analyzing data from bulk records.
Background knowledge:
- Understand URLs for loc.gov API requests and how to modify them
Applications:
- Visualize search results on a map
- Find certain quotes through time
- Do historical research
- See these projects for more!
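For instance, a minimal sketch of a full-text search against the Chronicling America endpoint (assuming that is the newspaper API in question) might look like:

```python
import requests

# Full-text search of digitized newspaper pages via the Chronicling
# America endpoint; "proxtext" searches the OCR text, and the search
# term is illustrative.
url = "https://chroniclingamerica.loc.gov/search/pages/results/"
params = {"proxtext": "suffrage", "format": "json", "rows": 5}
items = requests.get(url, params=params).json()["items"]

for item in items:
    print(item.get("title"), item.get("date"))
    # "ocr_eng" holds the page's OCR text in these responses.
    print(item.get("ocr_eng", "")[:200])
```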
Introduction to CSV data analysis in Python using the Memegenerator dataset from this page. Shows how to:
- find column headers to see what data is available
- count occurrences within the dataset
- visualize data in a bar graph
- retrieve and display images
Background knowledge: None, though it may be useful to have some familiarity with Python
Applications:
- Exploring memes
- Finding top 10s and other statistics in a dataset
- Data analysis and visualization for any dataset
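A minimal pandas sketch of these steps, with a placeholder filename and an illustrative column name, could look like:

```python
import pandas as pd

# The filename is a placeholder for the Memegenerator CSV downloaded
# from the dataset page.
df = pd.read_csv("memegenerator.csv")

# List the column headers to see what data is available.
print(df.columns.tolist())

# Count occurrences of values in one column and chart the top 10;
# the column name here is illustrative, so use one printed above.
top10 = df["base_meme_name"].value_counts().head(10)
print(top10)
top10.plot(kind="bar")
```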
Slightly more advanced CSV data analysis in Python using the GIPHY.com dataset taken from this page. Shows how to:
- Get dates for the GIFs
- Group and visualize the dates
- Find the GIF file sizes
- Search the titles in the dataset
- Download all of the GIFs
Background knowledge: Some familiarity with Python and/or data analysis - the Memegenerator tutorial is a good place to start
Applications:
- Exploring GIFs
- Searching through datasets
- Data analysis and visualization for any dataset
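A rough sketch of the grouping and downloading steps, with placeholder file and column names, might be:

```python
import pandas as pd
import requests

# The filename and column names are placeholders; check the CSV's
# actual headers before running.
df = pd.read_csv("giphy.csv", parse_dates=["date"])

# Group the GIFs by year and visualize the counts.
df["date"].dt.year.value_counts().sort_index().plot(kind="bar")

# Download the first few GIFs, assuming a column of direct GIF URLs.
for i, gif_url in enumerate(df["url"].head(3)):
    with open(f"gif_{i}.gif", "wb") as f:
        f.write(requests.get(gif_url).content)
```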
Provides code for selecting random audio segments and combining them. Assumes that users have already downloaded the audio files into the same folder as this notebook. The dataset includes 1,000 randomly selected audio clips. For more information on how this dataset was generated, see the README.
Background knowledge:
- Python:
- Importing packages
- Defining functions
- Using the pydub package
- (optional) Using the glob, re, and random packages
Applications:
- Create remixed audio
- Manipulate existing audio files
- Build interactive audio sampling tools, like this
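As an illustration, here is a hedged pydub sketch that splices random one-second clips together; the file extension and clip length are assumptions:

```python
import glob
import random
from pydub import AudioSegment

# Load every MP3 in the notebook's folder (the tutorial assumes the
# clips were already downloaded there); the extension is an assumption.
files = glob.glob("*.mp3")

# Take a random one-second slice from each of five random files and
# chain them together; pydub slices audio in milliseconds.
remix = AudioSegment.empty()
for path in random.sample(files, 5):
    clip = AudioSegment.from_file(path)
    start = random.randint(0, max(0, len(clip) - 1000))
    remix += clip[start:start + 1000]

remix.export("remix.mp3", format="mp3")
```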