+```
+
+## Using Pandoc to Create a Website
+
+Now that we have cloned the repository, we can generate the HTML locally using Pandoc.
+
+Pandoc is a universal document converter. It reads and writes many different file
+formats, including many flavours of Markdown, HTML, LaTeX, Word, RTF, reStructuredText and more. We use
+it here to generate a static website from Markdown.
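+
+If you happen to have Pandoc installed locally, a minimal conversion looks like the following
+(shown only for illustration; in this lesson we will instead run Pandoc through a Docker
+container, so no local installation is needed):
+
+```source
+pandoc README.md --output index.html
+```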
+
+First, let's download a container image with Pandoc installed and run it to check which
+Pandoc version it provides.
+
+```source
+docker container run pandoc/core --version
+```
+
+```output
+Unable to find image 'pandoc/core:latest' locally
+latest: Pulling from pandoc/core
+f84cab65f19f: Pull complete
+f95e84a31132: Pull complete
+5d5ebbd90555: Pull complete
+d084fb969d20: Pull complete
+Digest: sha256:af1d118e3280ffaf6181af5a9f87ef0c010af9b5877053b750be33d0c47cc6ce
+Status: Downloaded newer image for pandoc/core:latest
+pandoc 2.12
+Compiled with pandoc-types 1.22, texmath 0.12.1.1, skylighting 0.10.4,
+citeproc 0.3.0.8, ipynb 0.1.0.1
+User data directory: /root/.local/share/pandoc
+Copyright (C) 2006-2021 John MacFarlane. Web: https://pandoc.org
+This is free software; see the source for copying conditions. There is no
+warranty, not even for merchantability or fitness for a particular purpose.
+```
+
+Now we can run Pandoc on our `README.md` file by mounting our current directory into the
+container and passing `README.md` as an argument to the `docker container run` command:
+
+```source
+docker container run --mount type=bind,source=${PWD},target=/tmp pandoc/core /tmp/README.md
+```
+
+```output
+<h1 id="readme-pages">readme-pages</h1>
+<p>Example for generating Github.io pages from Readme with Pandoc.</p>
+```
+
+Here, the `--mount type=bind,source=${PWD},target=/tmp` option tells Docker to take the current directory (`${PWD}`) and make it available inside the
+container as `/tmp`. Then `pandoc` can read the source file (`README.md`) and convert it to HTML. While this HTML
+is valid, it is only a fragment, not a complete standalone HTML document. For that we need to
+add the `--standalone` argument to the pandoc command. We can also use the `--output` option to write the result to an HTML file in the
+`build` directory.
+
+```source
+mkdir -p build
+docker container run --mount type=bind,source=${PWD},target=/tmp pandoc/core /tmp/README.md --standalone --output=/tmp/build/index.html
+```
+
+```output
+[WARNING] This document format requires a nonempty <title> element.
+ Defaulting to 'README' as the title.
+ To specify a title, use 'title' in metadata or --metadata title="...".
+```
+
+To suppress the warning message we may add the following lines at the top of the `README.md` file:
+
+```
+---
+title: Hello, Pandoc
+---
+```
+
+Alternatively, we can pass the `--metadata title="..."` option on the command line, as suggested in the warning.
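+
+For example, the command from above with the title supplied as metadata would look like this:
+
+```source
+docker container run --mount type=bind,source=${PWD},target=/tmp pandoc/core /tmp/README.md --standalone --metadata title="Hello, Pandoc" --output=/tmp/build/index.html
+```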
+
+Once we've made these changes and produced the output we want, we can
+inspect it using this command:
+
+```source
+cat build/index.html
+```
+
+```output
+<!DOCTYPE html>
+<html xmlns="http://www.w3.org/1999/xhtml" lang="" xml:lang="">
+<head>
+  <meta charset="utf-8" />
+  <meta name="generator" content="pandoc" />
+... etc
+```
+
+We have now tested our website deployment workflow: given the source files from
+GitHub, we can use a Docker container and a single command to generate our website. Next we
+want to automate this process via GitHub Actions.
+
+## Automating Deployment on GitHub Actions
+
+GitHub Actions is a cloud service for automating continuous integration and deployment. This means
+we can have GitHub build our website and publish it on `github.io` automatically at every commit.
+
+Go to the GitHub project page you created earlier and click on "Actions". Because
+we have no active workflows yet, we
+are taken immediately to a menu for creating a new one. We will skip the templates and click on
+"set up a workflow yourself". The configuration format is YAML.
+
+The first entry is the **name** of the workflow:
+
+```source, yaml
+name: Deploy pages
+```
+
+Next we specify **when** this workflow is run. In this case, it runs every time content is pushed to the
+`main` branch:
+
+```source, yaml
+on:
+ push:
+ branches:
+ - main
+```
+
+Now we tell GitHub **what** to do:
+
+```source, yaml
+jobs:
+ deploy: # a free machine-readable name for this job
+ runs-on: ubuntu-latest # specify the base operating system
+ steps:
+ - name: Checkout repo content # fetch the contents of the repository
+ uses: actions/checkout@v2
+ - name: Prepare build environment
+ run: | # multiple Bash commands follow
+ mkdir -p build
+ touch build/.nojekyll
+```
+
+Now for the Docker bit:
+
+```source, yaml
+ - name: Run pandoc
+ uses: docker://pandoc/core:2.12 # Always specify a version!
+ with:
+ args: >- # multi-line argument
+ --standalone
+ --output=build/index.html
+ README.md
+ - name: Deploy on github pages # Use a third-party plugin to upload the content
+ uses: JamesIves/github-pages-deploy-action@4.1.0
+ with:
+ branch: gh-pages
+ folder: build
+```
+
+You may recognize the Pandoc command line from our earlier test. Notice that we don't need to specify the
+`--mount` option: GitHub Actions arranges the Docker environment such that the repository files are in the correct
+location. The last step uploads the contents of the `build` directory to the `gh-pages` branch.
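+
+For reference, the assembled workflow file (saved, for example, as `.github/workflows/deploy.yml`; the
+file name itself is up to you) looks roughly like this:
+
+```source, yaml
+name: Deploy pages
+
+on:
+  push:
+    branches:
+      - main
+
+jobs:
+  deploy:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Checkout repo content
+        uses: actions/checkout@v2
+      - name: Prepare build environment
+        run: |
+          mkdir -p build
+          touch build/.nojekyll
+      - name: Run pandoc
+        uses: docker://pandoc/core:2.12
+        with:
+          args: >-
+            --standalone
+            --output=build/index.html
+            README.md
+      - name: Deploy on github pages
+        uses: JamesIves/github-pages-deploy-action@4.1.0
+        with:
+          branch: gh-pages
+          folder: build
+```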
+
+Now we should enable GitHub Pages on this repository: go to the "Settings" tab and scroll down to
+"GitHub Pages". There we select the root folder of the `gh-pages` branch as the source. After a few tens of
+seconds the page should be up.
+
+# Reference material
+
+- [Pandoc the universal document converter](https://pandoc.org)
+- [Documentation on GitHub Actions](https://docs.github.com/en/actions)
+- [GitHub Pages deploy action](https://github.com/marketplace/actions/deploy-to-github-pages)
+- [Pandoc action example](https://github.com/pandoc/pandoc-action-example)
+
+
+
+
+
+
diff --git a/fig/.gitkeep b/fig/.gitkeep
new file mode 100644
index 000000000..e69de29bb
diff --git a/fig/containers-cookie-cutter.png b/fig/containers-cookie-cutter.png
new file mode 100644
index 000000000..80016e910
Binary files /dev/null and b/fig/containers-cookie-cutter.png differ
diff --git a/fig/github-gh-pages-branch.png b/fig/github-gh-pages-branch.png
new file mode 100644
index 000000000..3730a3c9d
Binary files /dev/null and b/fig/github-gh-pages-branch.png differ
diff --git a/fig/github-io-pages.png b/fig/github-io-pages.png
new file mode 100644
index 000000000..92ae33752
Binary files /dev/null and b/fig/github-io-pages.png differ
diff --git a/fig/github-main-branch.png b/fig/github-main-branch.png
new file mode 100644
index 000000000..45ec5c2f1
Binary files /dev/null and b/fig/github-main-branch.png differ
diff --git a/files/.gitkeep b/files/.gitkeep
new file mode 100644
index 000000000..e69de29bb
diff --git a/files/docker-intro.zip b/files/docker-intro.zip
new file mode 100644
index 000000000..92c95d3c3
Binary files /dev/null and b/files/docker-intro.zip differ
diff --git a/index.md b/index.md
new file mode 100644
index 000000000..b11b92eb3
--- /dev/null
+++ b/index.md
@@ -0,0 +1,95 @@
+---
+permalink: index.html
+site: sandpaper::sandpaper_site
+---
+
+This session aims to introduce the use of Docker containers with the goal of using them to effect reproducible computational environments. Such environments are useful for ensuring reproducible research outputs, for example.
+
+:::::::::::::::::::::::::::::::::::::: objectives
+
+## After completing this session you should:
+
+- Have an understanding of what Docker containers are, why they are useful
+ and the common terminology used
+- Have a working Docker installation on your local system to allow you to
+ use containers
+- Understand how to use existing Docker containers for common tasks
+- Be able to build your own Docker containers by understanding both the role
+ of a `Dockerfile` in building containers, and the syntax used in `Dockerfile`s
+- Understand how to manage Docker containers on your local system
+- Appreciate issues around reproducibility in software, understand how
+ containers can address some of these issues and what the limits to
+ reproducibility using containers are
+
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
+
+The practical work in this lesson is primarily aimed at using Docker on your own laptop. Beyond your laptop, software container technologies such as Docker can also be used in the cloud and on high performance computing (HPC) systems. Some of the material in this lesson will be applicable to those environments too.
+
+::::::::::::::::::::::::::::::::::::::::: callout
+
+## Containers on HPC systems
+
+On HPC systems it is more likely that *Singularity* rather than Docker will be the available container technology.
+If you are looking for a lesson on using Singularity containers (instead of Docker), see this lesson:
+
+- [Reproducible Computational Environments Using Containers: Introduction to Singularity](https://carpentries-incubator.github.io/singularity-introduction/)
+
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
+
+:::::::::::::::::::::::::::::::::::::::::: prereq
+
+## Prerequisites
+
+- You should have basic familiarity with using a command shell, and the lesson text will at times request that you "open a shell window", with an assumption that you know what this means.
+ - Under Linux or macOS it is assumed that you will access a `bash` shell (usually the default), using your Terminal application.
+  - Under Windows, PowerShell and Git Bash should allow you to use the Unix instructions. We will also try to give command variants for Windows `cmd.exe`.
+- The lessons will sometimes request that you use a text editor to create or edit files in particular directories. It is assumed that you either have an editor that you know how to use that runs within the working directory of your shell window (e.g. `nano`), or that if you use a graphical editor, that you can use it to read and write files into the working directory of your shell.
+
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
+
+::::::::::::::::::::::::::::::::::::::::: callout
+
+## Target audience
+
+This lesson on the use of Docker is intended to be relevant to a wide range of
+researchers, as well as existing and prospective technical professionals. It is
+intended as a beginner level course that is suitable for people who have no
+experience of containers.
+
+We are aiming to help people who want to develop their knowledge of container
+tooling to help improve reproducibility and support their research work, or
+that of individuals or teams they are working with.
+
+We provide more detail on specific roles that might benefit from this course on
+the [Learner Profiles](/profiles.html) page.
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
+
+::::::::::::::::::::::::::::::::::::::::: callout
+
+## A note about Docker
+
+Docker is a mature, robust and very widely used application. Nonetheless,
+it is still under extensive development. New versions are released regularly,
+often containing a range of updates and new features.
+
+While we do our best to ensure that this lesson remains up to date and the
+descriptions and outputs shown match what you will see on your own computer,
+inconsistencies can occur.
+
+If you spot inconsistencies or encounter any problems, please do report them
+by [opening an issue][open a lesson issue] in the [GitHub repository][docker-introduction repository]
+for this lesson.
+
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
+
+
+
+
+
+
diff --git a/instructor-notes.md b/instructor-notes.md
new file mode 100644
index 000000000..5c2ebf70b
--- /dev/null
+++ b/instructor-notes.md
@@ -0,0 +1,225 @@
+---
+title: Instructor Notes
+---
+
+## Before Teaching This Lesson
+
+[Docker][Docker] and its associated ecosystem are developing rapidly.
+While many core features will be stable, the overall environment
+changes regularly with version updates and new tools for interacting with
+Docker and running containers on different platforms.
+
+In particular, there can be differences between macOS, Windows and Linux
+platforms. Updates and changes introduced in Docker releases are highlighted
+in the [Docker release notes][Docker release notes].
+
+*You are strongly advised to run through the lesson content prior to teaching
+the lesson to ensure that everything works as expected.*
+
+If you experience any issues, please [open an issue][open a lesson issue] in the lesson
+repository describing the problem and platform(s) affected. The lesson maintainers will
+aim to resolve the issue as soon as possible but we also welcome the opening
+of pull requests (linked to issues) that resolve anything that doesn't work as
+expected with the lesson content.
+
+## Miscellaneous Tips
+
+- **Timing**: With all the lesson episodes taken together, there's way more than three hours of material in this lesson.
+ Focusing on the earlier episodes (Introduction through the first half
+ of Creating Container Images) will take just about three hours if you
+ also include a brief general introduction and time to check your learners'
+ software installations.
+- **Install Issues**: From the feedback we have received about past lessons, computers running
+ Microsoft Windows have encountered the largest number of challenges setting up Docker.
+ Consider having people check their install in advance at a separate time or come early.
+ In online workshops, consider using your video conferencing software's "breakout room" functionality
+ to form smaller groups within which participants can troubleshoot their installations.
+  Note that you should use a more complex command than `docker --version` to test the installation, as the
+  simplest `docker` commands do not connect to the Docker backend (see the example after this list).
+- **Virtualization Illustration**: When going through the intro to containers,
+ consider demonstrating what this might look like by having two shells (or shell tabs)
+ open, one on your host computer and one into a container you started before the
+ workshop. Then you can demonstrate in a simple way that from the same (host) computer,
+ you can access two different types of environments -- one via the shell on your
+ host computer and one via the shell into a running container. Sample commands could include:
+ - `whoami`
+ - `pwd` and `ls`
+    - something that shows the OS. On macOS this could be `sw_vers`; on Linux, `cat /etc/os-release`
+- **Reflection Exercise**: At the beginning and end of the workshop, give participants time to
+ reflect on what they want to get out of the workshop (at the beginning) and what they
+ can apply to their work (at the end). Using the shared notes doc is a great way to
+ do this and a good way to make sure that you've addressed specific concerns or goals
+ of the participants.
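+
+For example, a quick install check that does exercise the Docker backend (and is used early in the
+lesson anyway) is:
+
+```bash
+$ docker container ls
+```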
+
+## Learner Profiles and Pathways
+
+In this section we provide some details of example learner profiles and
+suggest some possible different pathways or technical focuses to consider
+when teaching or planning a lesson based around this Docker material. As such,
+the information in this section is not designed to define fixed approaches and
+structures for teaching this material. It is instead aimed to provide ideas
+and inspiration and to encourage you to think about your audience when
+preparing to teach this material. The information here is based on both
+discussions about the intended audiences for this material and on direct
+experiences of instructors who have taught it at workshops following different
+technical pathways.
+
+### Learner profiles
+
+We begin by providing some example learner profiles to highlight the potential
+target audience and the types of different research and technical backgrounds
+that you may find among learners engaging with this material. With these
+profiles, we aim to encourage you to think about the learners attending your
+workshop(s) and which episodes it may be most useful to teach.
+
+***Nelson is a graduate student in microbiology.*** They have experience in running Unix shell
+commands and using libraries in R for the bioinformatics workflows they have developed.
+They are expanding their analysis to run on 3000 genomes in 200 samples and they have
+started to use the local cluster to run their workflows. The local research computing
+facilitator has advised them that Docker could be useful for running their workflows on
+the cluster. They'd like to make use of existing containers that other bioinformaticians
+have made so they want to learn how to use Docker. They would also be interested in
+creating their own Docker images for other lab members and collaborators to re-use their
+workflows.
+
+***Caitlin is a second year undergraduate in computer science examining Docker for the first
+time.*** She has heard about Docker but does not really know what it achieves or why it is
+useful. She is reasonably confident in using the Unix shell, having used it briefly in
+her first year modules. She is keen to find jump-off points to learn more about technical
+details and alternative technologies that are also popular, having heard that container
+technologies are widely used within industry.
+
+***Xu, a materials science researcher, wants to package her software for release with
+a paper to help ensure reproducibility.*** She has written some code that makes use of a
+series of Python libraries to undertake analysis of a compound. She wants to (or is
+required to) make her software available as part of the paper submission. She
+understands why Docker is important in helping to ensure reproducibility but not the
+process and low-level detail of preparing a container and archiving it to obtain a DOI
+for inclusion with the paper submission.
+
+***Bronwyn is a PhD student running Python/R scripts on her local laptop/workstation.***
+She is having difficulty getting all the tools she needs to work because of conflicting
+dependencies and little experience with package managers. She is also keen to reduce
+the overhead of managing software so she can get on with her thesis research. She has
+heard that Docker might be able to help out but is not confident to start exploring
+this on her own and does not have access to any expertise in this within her local
+research group. She currently wants to know how to use preexisting Docker containers
+but may need to create her own containers in the future.
+
+***Virat is a grad student who is running an obscure bioinformatics tool (from a GitHub
+repo) that depends on a number of other tools that need to be pre-installed.*** He wants to be able to
+run it on multiple resources and have his undergrad assistant use the same tools. Virat
+has command line experience and has struggled his way through complex installations
+but he has no formal CS background - he only knows to use containers because a departmental
+IT person suggested it. He is usually working from a Windows computer. He needs to
+understand how to create his own container, use it locally, and train his student
+to use it as well.
+
+Considering things from a higher level, we also highlight three core groups of
+learners, based on job roles, who you may find attending lessons covering this
+material. While recognising that there are likely to be many learners who
+don't fit into one of the following groups, or who span more than one of them,
+we hope that highlighting these groups helps to provide an example of the
+different types of skills and expertise that learners engaging with this
+material may have:
+
+- **Researchers:** For researchers, even those based in non-computational domains, software
+ is an increasingly important element of their day-to-day work. Whether they are writing
+ code or installing, configuring and/or running software to support their research, they
+ will eventually need to deal with the complexities of running software on different
+ platforms, handling complex software dependencies and potentially submitting their code and data to
+ repositories to support the reproduction of research outputs by other researchers, or to
+ meet the requirements of publishers or funders. Software container technologies are valuable
+ to help researchers address these challenges.
+
+- **RSEs:** RSEs -- Research Software Engineers -- provide software development, training
+ and technical guidance to support the development of reliable, maintainable, sustainable
+ research software. They will generally have extensive technical skills but they may not
+ have experience of working with or managing software containers. In addition to working with
+ researchers to help build and package software, they are likely to be interested in how
+ containers can help to support best practices for the development of research software
+ and aspects such as software deployment.
+
+- **Systems professionals:** Systems professionals represent the more technical end of
+ our spectrum of learners. They may be based within a central IT services environment
+ within a research institution or within individual departments or research groups.
+ Their work is likely to encompass supporting researchers with effective use of
+ infrastructure and they are likely to need to know about managing and orchestrating
+ multiple containers in more complex environments. For example, they may need to provide
+ database servers, web application servers and other services that can be deployed
+ in containerized environments to support more straightforward management, maintenance
+ and upgradeability.
+
+### Learner Pathways
+
+We now come to look at some ideas around learner pathways for learners
+interested in Docker, and container technologies more generally.
+
+Containers involve a variety of different technologies, and teaching material
+about them can therefore encompass significant volumes of technical
+information. Depending on the domain they work in, and their motivation for
+taking a course covering this material, learners are likely to have various
+different reasons for wanting to learn about Docker, that may not necessarily
+all overlap. The material in this lesson covers a set of core concepts,
+introducing containers and then looking at the key features of Docker and how
+to use them.
+
+Moving beyond the core features there are a number of topics that are likely
+to only be of interest to different sub-groups of learners. To support these
+different groups of learners we have developed a set of "*learner pathways*"
+that provide suggested routes through the material based on different use
+cases or areas of interest.
+
+You are, of course, welcome to mix and match lesson content to offer a course
+that best suits your target audience but we are listing some different
+pathways or themes for covering this material to offer you some guidance and
+examples of the different routes through the material that you might want to
+consider. Each pathway will have a slightly different emphasis on specific
+sets of topics. We highlight different learner profiles that we believe map
+well to specific pathways.
+
+*Note that the material in this lesson continues to develop and experience
+of teaching the material is increasing. In due course we intend to offer more
+detailed pathway information including specific episode schedules that we
+think are most suited to the pathways highlighted.*
+
+**Core content:**
+
+The Docker lesson contains a set of core content that we expect to be relevant
+for all learner pathways. This includes:
+
+- Introducing container concepts and the Docker software
+- Running through the basic use of Docker including:
+ - Core commands for listing and managing images and containers
+ - Obtaining container images from Docker Hub
+ - Running containers from container images
+ - Building container images
+
+Beyond this, different pathways offer scope to bring in additional episodes
+with lesson content that supports particular target audiences or
+areas of interest.
+
+Some suggested pathways include:
+
+- **Reproducible research**
+
+ - *Common learner profiles:* Researcher; RSE
+
+- **Cloud computing**
+
+  - *Common learner profiles:* Systems professional; RSE
+
+- **High performance computing**
+
+ - *Common learner profiles:* Researcher; RSE; Systems professional
+
+## Common Points of Confusion
+
+- difference between a container and container image
+- what it means for a container to be stopped (but not removed)
+- differences in container behaviour between hosts that are running Linux compared to hosts running macOS or Microsoft Windows
+ - on Linux hosts there is usually only one OS kernel shared between the host and the containers, so less separation than is typical when using macOS or Windows hosts. This can lead to effects such as volume mounts behaving differently, e.g., regarding filesystem permissions, user and group mappings between the host and the container.
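+
+One way to make the shared-kernel point concrete (a sketch, assuming the small `alpine` image is
+available or can be pulled) is to compare the kernel version reported on the host with the one
+reported inside a container:
+
+```bash
+uname -r                              # kernel version on the host
+docker container run alpine uname -r  # kernel version seen inside a container
+```
+
+On a Linux host the two match, because containers share the host's kernel; on macOS or Windows the
+container instead reports the kernel of the Linux virtual machine that Docker runs behind the scenes.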
+
+
+
+
diff --git a/introduction.md b/introduction.md
new file mode 100644
index 000000000..01c975991
--- /dev/null
+++ b/introduction.md
@@ -0,0 +1,210 @@
+---
+title: Introducing Containers
+teaching: 20
+exercises: 5
+---
+
+::::::::::::::::::::::::::::::::::::::: objectives
+
+- Show how software depending on other software leads to configuration management problems.
+- Identify the problems that software installation can pose for research.
+- Explain the advantages of containerization.
+- Explain how using containers can solve software configuration problems.
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
+
+:::::::::::::::::::::::::::::::::::::::: questions
+
+- What are containers, and why might they be useful to me?
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
+
+::::::::::::::::::::::::::::::::::::::::: callout
+
+## Learning about Docker Containers
+
+The Australian Research Data Commons has produced a short introductory video
+about Docker containers that covers many of the points below. Watch it before
+or after you go through this section to reinforce your understanding!
+
+[How can software containers help your research?](https://www.youtube.com/watch?v=HelrQnm3v4g)
+
+Australian Research Data Commons, 2021. *How can software containers help your research?*. [video] Available at: [https://www.youtube.com/watch?v=HelrQnm3v4g](https://www.youtube.com/watch?v=HelrQnm3v4g) DOI: [http://doi.org/10.5281/zenodo.5091260](https://doi.org/10.5281/zenodo.5091260)
+
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
+
+## Scientific Software Challenges
+
+::::::::::::::::::::::::::::::::::::::: challenge
+
+## What's Your Experience?
+
+Take a minute to think about challenges that you have experienced in using
+scientific software (or software in general!) for your research. Then,
+share with your neighbors and try to come up with a list of common gripes or
+challenges.
+
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
+
+You may have come up with some of the following:
+
+- you want to use software that doesn't exist for the operating system (Mac, Windows, Linux) you'd prefer.
+- you struggle with installing a software tool because you have to install a number of other dependencies first. Those dependencies, in turn, require *other* things, and so on (i.e. a combinatorial explosion).
+- the software you're setting up involves many dependencies and only a subset of all possible versions of those dependencies actually works as desired.
+- you're not actually sure what version of the software you're using because the install process was so circuitous.
+- you and a colleague are using the same software but get different results because you have installed different versions and/or are using different operating systems.
+- you installed everything correctly on your computer but now need to install it on a colleague's computer/campus computing cluster/etc.
+- you've written a package for other people to use but a lot of your users frequently have trouble with installation.
+- you need to reproduce a research project from a former colleague and the software used was on a system you no longer have access to.
+
+A lot of these characteristics boil down to one fact: the main program you want
+to use likely depends on many, many, different other programs (including the
+operating system!), creating a very complex and often fragile system. One change
+or missing piece may stop the whole thing from working or break something that was
+already running. It's no surprise that this situation is sometimes
+informally termed **dependency hell**.
+
+::::::::::::::::::::::::::::::::::::::: challenge
+
+## Software and Science
+
+Again, take a minute to think about how the software challenges we've discussed
+could impact (or have impacted!) the quality of your work.
+Share your thoughts with your neighbors. What can go wrong if our software
+doesn't work?
+
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
+
+Unsurprisingly, software installation and configuration challenges can have
+negative consequences for research:
+
+- you can't use a specific tool at all, because it's not available or installable.
+- you can't reproduce your results because you're not sure what tools you're actually using.
+- you can't access extra/newer resources because you're not able to replicate your software set up.
+- others cannot validate and/or build upon your work because they cannot recreate your system's unique configuration.
+
+Thankfully there are ways to get underneath (a lot of) this mess: containers
+to the rescue! Containers provide a way to package up software dependencies
+and access to resources such as files and communications networks in a uniform manner.
+
+## What is a Container? What is Docker?
+
+[Docker][Docker] is a tool that allows you to build what are called **containers**. It's
+not the only tool that can create containers, but is the one we've chosen for
+this workshop. But what *is* a container?
+
+To understand containers, let's first talk briefly about your computer.
+
+Your computer has some standard pieces that allow it to work -- often what's
+called the hardware. One of these pieces is the CPU or processor; another is
+the amount of memory or RAM that your computer can use to store information
+temporarily while running programs; another is the hard drive, which can store
+information over the long-term. All these pieces work together to do the
+computing of a computer, but we don't see them because they're hidden from view (usually).
+
+Instead, what we see is our desktop, program windows, different folders, and
+files. These all live in what's called the filesystem. Everything on your computer -- programs,
+pictures, documents, the operating system itself -- lives somewhere in the filesystem.
+
+NOW, imagine you want to install some new software but don't want to take the chance
+of making a mess of your existing system by installing a bunch of additional stuff
+(libraries/dependencies/etc.).
+You don't want to buy a whole new computer because it's too expensive.
+What if, instead, you could have another independent filesystem and running operating system that you could access from your main computer, and that is actually stored within this existing computer?
+
+Or, imagine you have two tools you want to use in your groundbreaking research on cat memes: `PurrLOLing`, a tool that does AMAZINGLY well at predicting the best text for a meme based on the cat species and `WhiskerSpot`, the only tool available for identifying cat species from images. You want to send cat pictures to `WhiskerSpot`, and then send the species output to `PurrLOLing`. But there's a problem: `PurrLOLing` only works on Ubuntu and `WhiskerSpot` is only supported for OpenSUSE so you can't have them on the same system! Again, we really want another filesystem (or two) on our computer that we could use to chain together `WhiskerSpot` and `PurrLOLing` in a **computational pipeline**...
+
+Container systems, like Docker, are special programs on your computer that make this possible!
+The term container can be usefully considered with reference to shipping
+containers. Before shipping containers were developed, packing and unpacking
+cargo ships was time consuming and error prone, with high potential for
+different clients' goods to become mixed up. Just like shipping containers keep things
+together that should stay together, software containers standardize the description and
+creation of a complete software system: you can drop a container into any computer with
+the container software installed (the 'container host'), and it should *just work*.
+
+::::::::::::::::::::::::::::::::::::::::: callout
+
+## Virtualization
+
+Containers are an example of what's called **virtualization** -- having a
+second virtual computer running and accessible from a main or **host**
+computer. Another example of virtualization is the **virtual machine**, or
+VM. A virtual machine typically contains a whole copy of an operating system in
+addition to its own filesystem and has to get booted up in the same way
+a computer would.
+A container is considered a lightweight version of a virtual machine;
+underneath, the container is (usually) using the Linux kernel and simply has some
+flavour of Linux + the filesystem inside.
+
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
+
+One final term: while the **container** is an alternative filesystem layer that you
+can access and run from your computer, the **container image** is the 'recipe' or template
+for a container. The container image has all the required information to start
+up a running copy of the container. A running container tends to be transient:
+it can be started and shut down. The container image is longer-lived, serving as the definition from which containers are created.
+You could think of the container image like a cookie cutter -- it
+can be used to create multiple copies of the same shape (or container)
+and is relatively unchanging, whereas cookies come and go. If you want a
+different type of container (cookie) you need a different container image (cookie cutter).
+
+![](fig/containers-cookie-cutter.png){alt='An image comparing using a cookie cutter to the container workflow'}
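+
+As a concrete preview of commands we will meet later in this lesson, running the same container
+image twice creates two separate containers from that one image (a sketch, using the `hello-world`
+image that appears later in the lesson):
+
+```bash
+docker container run hello-world    # bake one cookie: create and run a container
+docker container run hello-world    # bake a second cookie from the same cutter
+docker container ls --all           # both containers are listed, created from the one image
+```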
+
+## Putting the Pieces Together
+
+Think back to some of the challenges we described at the beginning. The many layers
+of scientific software installations make it hard to install and re-install
+scientific software -- which ultimately, hinders reliability and reproducibility.
+
+But now, think about what a container is -- a self-contained, complete, separate
+computer filesystem. What advantages are there if you put your scientific software
+tools into containers?
+
+This solves several of our problems:
+
+- documentation -- there is a clear record of what software and software dependencies were used, from bottom to top.
+- portability -- the container can be used on any computer that has Docker installed -- it doesn't matter whether the computer is Mac, Windows or Linux-based.
+- reproducibility -- you can use the exact same software and environment on your computer and on other resources (like a large-scale computing cluster).
+- configurability -- containers can be sized to take advantage of more resources (memory, CPU, etc.) on large systems (clusters) or less, depending on the circumstances.
+
+The rest of this workshop will show you how to download and run containers from pre-existing
+container images on your own computer, and how to create and share your own container images.
+
+## Use cases for containers
+
+Now that we have discussed a little bit about containers -- what they do and the
+issues they attempt to address -- you may be able to think of a few potential use
+cases in your area of work. Some examples of common use cases for containers in
+a research context include:
+
+- Using containers solely on your own computer to use a specific software tool
+ or to test out a tool (possibly to avoid a difficult and complex installation
+ process, to save your time or to avoid dependency hell).
+- Creating a `Dockerfile` that generates a container image with software that you
+ specify installed, then sharing a container image generated using this Dockerfile with
+ your collaborators for use on their computers or a remote computing resource
+ (e.g. cloud-based or HPC system).
+- Archiving the container images so you can repeat analysis/modelling using the
+ same software and configuration in the future -- capturing your workflow.
+
+
+
+
+
+:::::::::::::::::::::::::::::::::::::::: keypoints
+
+- Almost all software depends on other software components to function, but these components have independent evolutionary paths.
+- Small environments that contain only the software that is needed for a given task are easier to replicate and maintain.
+- Critical systems that cannot be upgraded, due to cost, difficulty, etc. need to be reproduced on newer systems in a maintainable and self-documented way.
+- Virtualization allows multiple environments to run on a single computer.
+- Containerization improves upon the virtualization of whole computers by allowing efficient management of the host computer's memory and storage resources.
+- Containers are built from 'recipes' that define the required set of software components and the instructions necessary to build/install them within a container image.
+- Docker is just one software platform that can create containers and the resources they use.
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
diff --git a/learner-profiles.md b/learner-profiles.md
new file mode 100644
index 000000000..201aaac9f
--- /dev/null
+++ b/learner-profiles.md
@@ -0,0 +1,99 @@
+---
+title: Learner profiles
+---
+
+Here we provide some example profiles of people who represent the target
+audience for this lesson. These example scenarios are designed to give you an
+idea of the different reasons people might want to learn Docker and the types
+of roles that they might hold.
+
+The profiles provided here and the individuals described are fictional but they
+represent the lesson developers' experiences of teaching members of the
+research community about Docker and other container technologies over a period
+of several years. They also incorporate feedback from instructors involved in
+pilot runs of this course.
+
+Note that containers are applicable across a wide range of use cases within the
+research and High Performance Computing communities. These profiles are not
+intended to cover all areas but rather to offer some examples of the types of
+roles people learning this material might hold and their reasons for learning
+about containers.
+
+## Individual learner profiles
+
+***Nelson is a graduate student in microbiology.*** They have experience in running Unix shell
+commands and using libraries in R for the bioinformatics workflows they have developed.
+They are expanding their analysis to run on 3000 genomes in 200 samples and they have
+started to use the local cluster to run their workflows. The local research computing
+facilitator has advised them that Docker could be useful for running their workflows on
+the cluster. They'd like to make use of existing containers that other bioinformaticians
+have made so they want to learn how to use Docker. They would also be interested in
+creating their own Docker images for other lab members and collaborators to re-use their
+workflows.
+
+***Caitlin is a second year undergraduate in computer science examining Docker for the first
+time.*** She has heard about Docker but does not really know what it achieves or why it is
+useful. She is reasonably confident in using the Unix shell, having used it briefly in
+her first year modules. She is keen to find jump-off points to learn more about technical
+details and alternative technologies that are also popular, having heard that container
+technologies are widely used within industry.
+
+***Xu, a materials science researcher, wants to package her software for release with
+a paper to help ensure reproducibility.*** She has written some code that makes use of a
+series of Python libraries to undertake analysis of a compound. She wants to (or is
+required to) make her software available as part of the paper submission. She
+understands why Docker is important in helping to ensure reproducibility but not the
+process and low-level detail of preparing a container and archiving it to obtain a DOI
+for inclusion with the paper submission.
+
+***Bronwyn is a PhD student running Python/R scripts on her local laptop/workstation.***
+She is having difficulty getting all the tools she needs to work because of conflicting
+dependencies and little experience with package managers. She is also keen to reduce
+the overhead of managing software so she can get on with her thesis research. She has
+heard that Docker might be able to help out but is not confident to start exploring
+this on her own and does not have access to any expertise in this within her local
+research group. She currently wants to know how to use preexisting Docker containers
+but may need to create her own containers in the future.
+
+***Virat is a grad student who is running an obscure bioinformatics tool (from a GitHub
+repo) that depends on a number of other tools that need to be pre-installed.*** He wants to be able to
+run it on multiple resources and have his undergrad assistant use the same tools. Virat
+has command line experience and has struggled his way through complex installations
+but he has no formal CS background - he only knows to use containers because a departmental
+IT person suggested it. He is usually working from a Windows computer. He needs to
+understand how to create his own container, use it locally, and train his student
+to use it as well.
+
+## Group profiles
+
+In addition to our individual learner profiles above, we also look at three
+more general groups who may want to learn about containers. This is intended to
+help you get a perspective of the different types of skills and expertise that
+learners engaging with this material may have:
+
+- **Researchers:** For researchers, even those based in non-computational domains, software
+ is an increasingly important element of their day-to-day work. Whether they are writing
+ code or installing, configuring and/or running software to support their research, they
+ will eventually need to deal with the complexities of running software on different
+ platforms, handling complex software dependencies and potentially submitting their code and data to
+ repositories to support the reproduction of research outputs by other researchers, or to
+ meet the requirements of publishers or funders. Software container technologies are valuable
+ to help researchers address these challenges.
+
+- **RSEs:** RSEs -- Research Software Engineers -- provide software development, training
+ and technical guidance to support the development of reliable, maintainable, sustainable
+ research software. They will generally have extensive technical skills but they may not
+ have experience of working with or managing software containers. In addition to working with
+ researchers to help build and package software, they are likely to be interested in how
+ containers can help to support best practices for the development of research software
+ and aspects such as software deployment.
+
+- **Systems professionals:** Systems professionals represent the more technical end of
+ our spectrum of learners. They may be based within a central IT services environment
+ within a research institution or within individual departments or research groups.
+ Their work is likely to encompass supporting researchers with effective use of
+ infrastructure and they are likely to need to know about managing and orchestrating
+ multiple containers in more complex environments. For example, they may need to provide
+ database servers, web application servers and other services that can be deployed
+ in containerized environments to support more straightforward management, maintenance
+ and upgradeability.
diff --git a/links.md b/links.md
new file mode 100644
index 000000000..d56b5334e
--- /dev/null
+++ b/links.md
@@ -0,0 +1,4 @@
+[Docker]: https://www.docker.com/
+[Docker release notes]: https://docs.docker.com/release-notes/
+[docker-introduction repository]: https://github.com/carpentries-incubator/docker-introduction
+[open a lesson issue]: https://github.com/carpentries-incubator/docker-introduction/issues/new
\ No newline at end of file
diff --git a/managing-containers.md b/managing-containers.md
new file mode 100644
index 000000000..e1c350e6e
--- /dev/null
+++ b/managing-containers.md
@@ -0,0 +1,178 @@
+---
+title: Cleaning Up Containers
+teaching: 10
+exercises: 0
+---
+
+::::::::::::::::::::::::::::::::::::::: objectives
+
+- Explain how to list running and completed containers.
+- Know how to list and remove container images.
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
+
+:::::::::::::::::::::::::::::::::::::::: questions
+
+- How do I interact with a Docker container on my computer?
+- How do I manage my containers and container images?
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
+
+## Removing images
+
+The container images and their corresponding containers can start to take up a lot of disk space if you don't clean them up occasionally, so it's a good idea to periodically remove containers and container images that you won't be using anymore.
+
+In order to remove a specific container image, you need to find out details about the container image,
+specifically, the "Image ID". For example, say my laptop contained the following container image:
+
+```bash
+$ docker image ls
+```
+
+```output
+REPOSITORY TAG IMAGE ID CREATED SIZE
+hello-world latest fce289e99eb9 15 months ago 1.84kB
+```
+
+You can remove the container image with a `docker image rm` command that includes the *Image ID*, such as:
+
+```bash
+$ docker image rm fce289e99eb9
+```
+
+or use the container image name, like so:
+
+```bash
+$ docker image rm hello-world
+```
+
+However, you may see this output:
+
+```output
+Error response from daemon: conflict: unable to remove repository reference "hello-world" (must force) - container e7d3b76b00f4 is using its referenced image fce289e99eb9
+```
+
+This happens when Docker hasn't cleaned up some of the previously running containers
+based on this container image. So, before removing the container image, we need to be able
+to see what containers are currently running, or have been run recently, and how
+to remove these.
+
+## What containers are running?
+
+To work with containers, we are going to shift back to the `docker container` command. Similar to `docker image`, we can list running containers by typing:
+
+```bash
+$ docker container ls
+```
+
+```output
+CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
+```
+
+Notice that this command didn't return any containers because our containers all exited and thus stopped running after they completed their work.
+
+::::::::::::::::::::::::::::::::::::::::: callout
+
+## `docker ps`
+
+The command `docker ps` serves the same purpose as `docker container ls`, and comes
+from the Unix shell command `ps`, which lists running processes.
+
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
+
+## What containers have run recently?
+
+There is also a way to list both running containers and those that have completed recently: add the `--all`/`-a` flag to the `docker container ls` command as shown below.
+
+```bash
+$ docker container ls --all
+```
+
+```output
+CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
+9c698655416a hello-world "/hello" 2 minutes ago Exited (0) 2 minutes ago zen_dubinsky
+6dd822cf6ca9 hello-world "/hello" 3 minutes ago Exited (0) 3 minutes ago eager_engelbart
+```
+
+::::::::::::::::::::::::::::::::::::::::: callout
+
+## Keeping it clean
+
+You might be surprised at the number of containers Docker is still keeping track of.
+One way to prevent this from happening is to add the `--rm` flag to `docker container run`. This
+removes the container's record completely when it exits. If you need
+a reference to the container for any reason, **don't** use this flag.
+
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
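+
+To illustrate the `--rm` flag mentioned in the callout above, a run that leaves no stopped container
+behind (using the `hello-world` image from earlier) would look like this:
+
+```bash
+$ docker container run --rm hello-world
+```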
+
+## How do I remove an exited container?
+
+To delete an exited container you can run the following command, inserting the `CONTAINER ID` for the container you wish to remove.
+It will repeat the `CONTAINER ID` back to you, if successful.
+
+```bash
+$ docker container rm 9c698655416a
+```
+
+```output
+9c698655416a
+```
+
+An alternative option for deleting exited containers is the `docker container prune` command. Note that this command doesn't accept a container ID as an
+option because it deletes ALL exited containers!
+**Be careful** with this command as deleting the container is **forever**.
+**Once a container is deleted you cannot get it back.**
+If you have containers you may want to reconnect to, you should **not** use this command.
+It will ask you to confirm that you want to remove these containers; see the output below.
+If successful it will print the full `CONTAINER ID` back to you for each container it has
+removed.
+
+```bash
+$ docker container prune
+```
+
+```output
+WARNING! This will remove all stopped containers.
+Are you sure you want to continue? [y/N] y
+Deleted Containers:
+9c698655416a848278d16bb1352b97e72b7ea85884bff8f106877afe0210acfc
+6dd822cf6ca92f3040eaecbd26ad2af63595f30bb7e7a20eacf4554f6ccc9b2b
+```
+
+## Removing images, for real this time
+
+Now that we've removed any potentially running or stopped containers, we can try again to
+delete the `hello-world` **container image**.
+
+```bash
+$ docker image rm hello-world
+```
+
+```output
+Untagged: hello-world:latest
+Untagged: hello-world@sha256:5f179596a7335398b805f036f7e8561b6f0e32cd30a32f5e19d17a3cda6cc33d
+Deleted: sha256:fce289e99eb9bca977dae136fbe2a82b6b7d4c372474c9235adc1741675f587e
+Deleted: sha256:af0b15c8625bb1938f1d7b17081031f649fd14e6b233688eea3c5483994a66a3
+```
+
+The reason there are several lines of output is that a given container image may have been formed by merging multiple underlying layers.
+Any layers that are used by multiple Docker container images will only be stored once.
+Now the result of `docker image ls` should no longer include the `hello-world` container image.
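+
+You can confirm this by listing the container images again. Assuming `hello-world` was the only
+container image on your system, the list will now be empty:
+
+```bash
+$ docker image ls
+```
+
+```output
+REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
+```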
+
+
+
+
+
+
+
+:::::::::::::::::::::::::::::::::::::::: keypoints
+
+- `docker container` has subcommands used to interact with and manage containers.
+- `docker image` has subcommands used to interact with and manage container images.
+- `docker container ls` or `docker ps` can provide information on currently running containers.
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
diff --git a/md5sum.txt b/md5sum.txt
new file mode 100644
index 000000000..bdf6de740
--- /dev/null
+++ b/md5sum.txt
@@ -0,0 +1,25 @@
+"file" "checksum" "built" "date"
+"CODE_OF_CONDUCT.md" "c93c83c630db2fe2462240bf72552548" "site/built/CODE_OF_CONDUCT.md" "2024-06-27"
+"LICENSE.md" "b24ebbb41b14ca25cf6b8216dda83e5f" "site/built/LICENSE.md" "2024-06-27"
+"aio.md" "bbb0f59db3ef6dccf60fb4a7a86d3020" "site/built/aio.md" "2024-06-27"
+"config.yaml" "54be1fabc599404a592c83552a49916f" "site/built/config.yaml" "2024-07-26"
+"index.md" "16a0cc69e6e31090b65bec6484cdf513" "site/built/index.md" "2024-08-16"
+"links.md" "00995287cb95631827a4f30cbe5a7722" "site/built/links.md" "2024-08-16"
+"episodes/introduction.md" "fbd6c719d897bfa342d976928b942d56" "site/built/introduction.md" "2024-08-01"
+"episodes/meet-docker.md" "36a6daa2e4727a8ce88db8a4a1a0fa88" "site/built/meet-docker.md" "2024-08-01"
+"episodes/running-containers.md" "4bd40434e9fee516256b848e2a423f5a" "site/built/running-containers.md" "2024-06-27"
+"episodes/managing-containers.md" "cd974b695f6fa04b3042765a827df552" "site/built/managing-containers.md" "2024-06-27"
+"episodes/docker-hub.md" "430220bbc73531857a09eddfc6247b4c" "site/built/docker-hub.md" "2024-06-27"
+"episodes/creating-container-images.md" "1c4f5343cd4e6e32f49c7105b879cd46" "site/built/creating-container-images.md" "2024-08-16"
+"episodes/advanced-containers.md" "a7bce20bf3222a7ac60363800526990d" "site/built/advanced-containers.md" "2024-08-16"
+"episodes/docker-image-examples.md" "caddfa3f2785fee60367ae05d100920a" "site/built/docker-image-examples.md" "2024-08-16"
+"episodes/reproduciblity.md" "55087b4f3997a95e2a5c5d6f9fd8cb7a" "site/built/reproduciblity.md" "2024-08-16"
+"instructors/06-containers-on-the-cloud.md" "6838e441f1869570ec5313bc72e85eb4" "site/built/06-containers-on-the-cloud.md" "2024-06-27"
+"instructors/08-orchestration.md" "6f69af23a2cd48c8382e2573ec2907ad" "site/built/08-orchestration.md" "2024-06-27"
+"instructors/about.md" "1df29c85850c4e3a718d5fc3a361e846" "site/built/about.md" "2024-06-27"
+"instructors/e01-github-actions.md" "ae95c2390c400410b5708a9e5f4c29c1" "site/built/e01-github-actions.md" "2024-06-27"
+"instructors/instructor-notes.md" "6ccb557863cff40a02727a9b8729add7" "site/built/instructor-notes.md" "2024-06-27"
+"learners/discuss.md" "2758e2e5abd231d82d25c6453d8abbc6" "site/built/discuss.md" "2024-06-27"
+"learners/reference.md" "bbb68ff9187bcebed81d18156df503cc" "site/built/reference.md" "2024-08-01"
+"learners/setup.md" "fd74bc2dd9538bf486391304cb6f6f7f" "site/built/setup.md" "2024-06-27"
+"profiles/learner-profiles.md" "6fcb80ab2baf4f2762193ae4a6f1294a" "site/built/learner-profiles.md" "2024-08-16"
diff --git a/meet-docker.md b/meet-docker.md
new file mode 100644
index 000000000..a63829b78
--- /dev/null
+++ b/meet-docker.md
@@ -0,0 +1,358 @@
+---
+title: Introducing the Docker Command Line
+teaching: 10
+exercises: 5
+---
+
+::::::::::::::::::::::::::::::::::::::: objectives
+
+- Explain how to check that Docker is installed and is ready to use.
+- Demonstrate some initial Docker command line interactions.
+- Use the built-in help for Docker commands.
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
+
+:::::::::::::::::::::::::::::::::::::::: questions
+
+- How do I know Docker is installed and running?
+- How do I interact with Docker?
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
+
+## Docker command line
+
+Start the Docker application that you installed while working through the setup instructions for this session. Note that this might not be necessary if your laptop is running Linux or if the installation added the Docker application to your startup process.
+
+::::::::::::::::::::::::::::::::::::::::: callout
+
+## You may need to login to Docker Hub
+
+The Docker application will usually provide a way for you to log in to the Docker Hub using the application's menu (macOS) or systray
+icon (Windows) and it is usually convenient to do this when the application starts. This will require you to use your Docker Hub
+username and your password. We will not actually require access to the Docker Hub until later in the course but if you can log in now,
+you should do so.
+
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
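+
+If you prefer the command line, you can also log in to the Docker Hub from a shell with the
+`docker login` command, which will prompt for your Docker Hub username and password:
+
+```bash
+$ docker login
+```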
+
+::::::::::::::::::::::::::::::::::::::::: callout
+
+## Determining your Docker Hub username
+
+If you no longer recall your Docker Hub username, e.g., because you have been logging into the Docker Hub using your email address,
+you can find out what it is by following these steps:
+
+- Open [https://hub.docker.com/](https://hub.docker.com/) in a web browser window
+- Sign in using your email and password (don't tell us what it is)
+- In the top-right of the screen you will see your username
+
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
+
+Once your Docker application is running, open a shell (terminal) window and run the following command to check that Docker is installed and the command line tools are working correctly. The output shown below is from a Mac, but the specific version is unlikely to matter much: yours does not have to precisely match it.
+
+```bash
+$ docker --version
+```
+
+```output
+Docker version 20.10.5, build 55c4c88
+```
+
+The above command has not actually relied on the part of Docker that runs containers; it only confirms that Docker
+is installed and that you can access it correctly from the command line.
+
+A command that checks that Docker is working correctly is the `docker container ls` command (we cover this command in more detail later in the course).
+
+Without explaining the details, output on a newly installed system would likely be:
+
+```bash
+$ docker container ls
+```
+
+```output
+CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
+```
+
+(The command `docker system info` could also be used to verify that Docker is correctly installed and operational but it produces a larger amount of output.)
+
+However, if you instead get a message similar to the following
+
+```output
+Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
+```
+
+then you need to check that you have started Docker Desktop, Docker Engine, or whichever Docker setup you installed when working through the setup instructions.
+
+## Getting help
+
+Often when working with a new command line tool, we need to get help. These tools usually have some
+sort of subcommand or flag (commonly `help`, `-h`, or `--help`) that displays a message describing how to use the
+tool. Docker is no different. If we run `docker --help`, we see the following output (running `docker` on its own also produces the help message):
+
+```output
+
+Usage: docker [OPTIONS] COMMAND
+
+A self-sufficient runtime for containers
+
+Options:
+ --config string Location of client config files (default "/Users/vini/.docker")
+ -c, --context string Name of the context to use to connect to the daemon (overrides DOCKER_HOST env var and default context set with "docker context use")
+ -D, --debug Enable debug mode
+ -H, --host list Daemon socket(s) to connect to
+ -l, --log-level string Set the logging level ("debug"|"info"|"warn"|"error"|"fatal") (default "info")
+ --tls Use TLS; implied by --tlsverify
+ --tlscacert string Trust certs signed only by this CA (default "/Users/vini/.docker/ca.pem")
+ --tlscert string Path to TLS certificate file (default "/Users/vini/.docker/cert.pem")
+ --tlskey string Path to TLS key file (default "/Users/vini/.docker/key.pem")
+ --tlsverify Use TLS and verify the remote
+ -v, --version Print version information and quit
+
+Management Commands:
+ app* Docker App (Docker Inc., v0.9.1-beta3)
+ builder Manage builds
+ buildx* Build with BuildKit (Docker Inc., v0.5.1-docker)
+ config Manage Docker configs
+ container Manage containers
+ context Manage contexts
+ image Manage images
+ manifest Manage Docker image manifests and manifest lists
+ network Manage networks
+ node Manage Swarm nodes
+ plugin Manage plugins
+ scan* Docker Scan (Docker Inc., v0.6.0)
+ secret Manage Docker secrets
+ service Manage services
+ stack Manage Docker stacks
+ swarm Manage Swarm
+ system Manage Docker
+ trust Manage trust on Docker images
+ volume Manage volumes
+
+Commands:
+ attach Attach local standard input, output, and error streams to a running container
+ build Build an image from a Dockerfile
+ commit Create a new image from a container's changes
+ cp Copy files/folders between a container and the local filesystem
+ create Create a new container
+ diff Inspect changes to files or directories on a container's filesystem
+ events Get real time events from the server
+ exec Run a command in a running container
+ export Export a container's filesystem as a tar archive
+ history Show the history of an image
+ images List images
+ import Import the contents from a tarball to create a filesystem image
+ info Display system-wide information
+ inspect Return low-level information on Docker objects
+ kill Kill one or more running containers
+ load Load an image from a tar archive or STDIN
+ login Log in to a Docker registry
+ logout Log out from a Docker registry
+ logs Fetch the logs of a container
+ pause Pause all processes within one or more containers
+ port List port mappings or a specific mapping for the container
+ ps List containers
+ pull Pull an image or a repository from a registry
+ push Push an image or a repository to a registry
+ rename Rename a container
+ restart Restart one or more containers
+ rm Remove one or more containers
+ rmi Remove one or more images
+ run Run a command in a new container
+ save Save one or more images to a tar archive (streamed to STDOUT by default)
+ search Search the Docker Hub for images
+ start Start one or more stopped containers
+ stats Display a live stream of container(s) resource usage statistics
+ stop Stop one or more running containers
+ tag Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
+ top Display the running processes of a container
+ unpause Unpause all processes within one or more containers
+ update Update configuration of one or more containers
+ version Show the Docker version information
+ wait Block until one or more containers stop, then print their exit codes
+
+Run 'docker COMMAND --help' for more information on a command.
+```
+
+The help message lists the available commands and ends with: `Run 'docker COMMAND --help' for more information on a command.` For example, take the `docker container ls` command that we ran previously. We can see from the Docker help message
+that `container` is a Docker command, so to get help for that command, we run:
+
+```bash
+docker container --help # or instead 'docker container'
+```
+
+```output
+
+Usage: docker container COMMAND
+
+Manage containers
+
+Commands:
+ attach Attach local standard input, output, and error streams to a running container
+ commit Create a new image from a container's changes
+ cp Copy files/folders between a container and the local filesystem
+ create Create a new container
+ diff Inspect changes to files or directories on a container's filesystem
+ exec Run a command in a running container
+ export Export a container's filesystem as a tar archive
+ inspect Display detailed information on one or more containers
+ kill Kill one or more running containers
+ logs Fetch the logs of a container
+ ls List containers
+ pause Pause all processes within one or more containers
+ port List port mappings or a specific mapping for the container
+ prune Remove all stopped containers
+ rename Rename a container
+ restart Restart one or more containers
+ rm Remove one or more containers
+ run Run a command in a new container
+ start Start one or more stopped containers
+ stats Display a live stream of container(s) resource usage statistics
+ stop Stop one or more running containers
+ top Display the running processes of a container
+ unpause Unpause all processes within one or more containers
+ update Update configuration of one or more containers
+ wait Block until one or more containers stop, then print their exit codes
+
+Run 'docker container COMMAND --help' for more information on a command.
+```
+
+There's also help for the `container ls` command:
+
+```bash
+docker container ls --help # this one actually requires the '--help' flag
+```
+
+```output
+Usage: docker container ls [OPTIONS]
+
+List containers
+
+Aliases:
+ ls, ps, list
+
+Options:
+ -a, --all Show all containers (default shows just running)
+ -f, --filter filter Filter output based on conditions provided
+ --format string Pretty-print containers using a Go template
+ -n, --last int Show n last created containers (includes all states) (default -1)
+ -l, --latest Show the latest created container (includes all states)
+ --no-trunc Don't truncate output
+ -q, --quiet Only display container IDs
+ -s, --size Display total file sizes
+```
+
+You may notice that there are many commands that stem from the `docker` command. Instead of trying to remember
+all possible commands and options, it's better to learn how to effectively get help from the command line. Although
+we can always search the web, getting the built-in help from our tool is often much faster and may provide the answer
+right away. This applies not only to Docker, but also to most command line-based tools.
+
+::::::::::::::::::::::::::::::::::::::::: callout
+
+## Docker Command Line Interface (CLI) syntax
+
+In this lesson we use the newest Docker CLI syntax
+[introduced with the Docker Engine version 1.13](https://www.docker.com/blog/whats-new-in-docker-1-13/).
+This newer syntax groups commands around the objects you will most often
+want to interact with. In the help example above you can see the `image` and `container`
+management commands, which are used to interact with your images and
+containers respectively. With this syntax you issue commands using the following
+pattern: `docker [command] [subcommand] [additional options]`.
+
+Comparing the output of the two help commands above, you can
+see that the same thing can be achieved in multiple ways. For example, to start a
+Docker container using the old syntax you would use `docker run`. To achieve the
+same with the new syntax, you use `docker container run` instead. Even though the old
+approach is shorter and still officially supported, the new syntax is more descriptive and less
+error-prone, and is therefore recommended.
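+
+For example, the following two commands are equivalent ways of starting a container from the `hello-world` image; this lesson uses the longer, management-command form:
+
+```bash
+$ docker run hello-world            # old syntax
+$ docker container run hello-world  # new, recommended syntax
+```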
+
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
+
+::::::::::::::::::::::::::::::::::::::: challenge
+
+## Exploring a command
+
+Run `docker --help` and pick a command from the list.
+Explore the help output for that command. Try to guess how the command would work by looking at the `Usage: `
+section of the output.
+
+::::::::::::::: solution
+
+## Solution
+
+Suppose we pick the `docker image build` command:
+
+```bash
+docker image build --help
+```
+
+```output
+Usage: docker image build [OPTIONS] PATH | URL | -
+
+Build an image from a Dockerfile
+
+Options:
+ --add-host list Add a custom host-to-IP mapping (host:ip)
+ --build-arg list Set build-time variables
+ --cache-from strings Images to consider as cache sources
+ --cgroup-parent string Optional parent cgroup for the container
+ --compress Compress the build context using gzip
+ --cpu-period int Limit the CPU CFS (Completely Fair Scheduler) period
+ --cpu-quota int Limit the CPU CFS (Completely Fair Scheduler) quota
+ -c, --cpu-shares int CPU shares (relative weight)
+ --cpuset-cpus string CPUs in which to allow execution (0-3, 0,1)
+ --cpuset-mems string MEMs in which to allow execution (0-3, 0,1)
+ --disable-content-trust Skip image verification (default true)
+ -f, --file string Name of the Dockerfile (Default is 'PATH/Dockerfile')
+ --force-rm Always remove intermediate containers
+ --iidfile string Write the image ID to the file
+ --isolation string Container isolation technology
+ --label list Set metadata for an image
+ -m, --memory bytes Memory limit
+ --memory-swap bytes Swap limit equal to memory plus swap: '-1' to enable unlimited swap
+ --network string Set the networking mode for the RUN instructions during build (default "default")
+ --no-cache Do not use cache when building the image
+ --pull Always attempt to pull a newer version of the image
+ -q, --quiet Suppress the build output and print image ID on success
+ --rm Remove intermediate containers after a successful build (default true)
+ --security-opt strings Security options
+ --shm-size bytes Size of /dev/shm
+ -t, --tag list Name and optionally a tag in the 'name:tag' format
+ --target string Set the target build stage to build.
+ --ulimit ulimit Ulimit options (default [])
+```
+
+We could try to guess that the command could be run like this:
+
+```bash
+docker image build .
+```
+
+or
+
+```bash
+docker image build https://github.com/docker/rootfs.git
+```
+
+where `https://github.com/docker/rootfs.git` could be any URL that points to a build context Docker can use, for example a Git repository containing a Dockerfile.
+
+
+
+:::::::::::::::::::::::::
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
+
+
+
+
+
+:::::::::::::::::::::::::::::::::::::::: keypoints
+
+- A toolbar icon indicates that Docker is ready to use (on Windows and macOS).
+- You will typically interact with Docker using the command line.
+- To learn how to run a certain Docker command, we can type the command followed by the `--help` flag.
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
diff --git a/reference.md b/reference.md
new file mode 100644
index 000000000..34f383f12
--- /dev/null
+++ b/reference.md
@@ -0,0 +1,64 @@
+---
+title: 'Glossary'
+---
+
+## Glossary
+
+
+ - Command-line argument/option
+ - See the Carpentries Glossario entry
+ - Command-line interface (CLI)
+ - See the Carpentries Glossario entry
+ - Computational pipeline
+ - A combination of different software tools in a particular order that is used to perform a defined set of repeatable operations on different input data.
+ - Container
+ - A particular instance of a lightweight virtual machine derived from a container image. Containers are typically transient, unlike container images which persist.
+ - Container image
+ - The persistent binary artefact that encapsulates the set of files and configuration for running an instance of a container. Sometimes shortened to just image
+ - CPU/processor
+ - See the Carpentries Glossario entry
+ - Dependency
+ - See the Carpentries Glossario entry
+ - Dependency hell
+ - A colloquial term for the frustration of some software users who run into issues with software packages which have dependencies on specific versions of other software packages. The dependency issue arises when several packages have dependencies on the same shared packages or libraries, but they depend on different and incompatible versions of the shared packages. If the shared package or library can only be installed in a single version, the user may need to address the problem by obtaining newer or older versions of the dependent packages. This, in turn, may break other dependencies and push the problem to another set of packages. Extract from Wikipedia
+ - Digital object identifier (DOI)
+ - See the Carpentries Glossario entry
+ - Docker
+ - A software framework for creating, running and managing containers.
+ - Docker build context
+ - The docker build command builds Docker images from a Dockerfile and a "context". A build's context is the set of files located in the specified PATH or URL.
+ - Docker Hub
+ - An online library of Docker container images.
+ - Docker Hub repository
+ - A collection of related Docker container images hosted on Docker Hub.
+ - Docker tag
+ - The specific version identifier associated with a Docker container image.
+ - Dockerfile
+ - The file containing the commands to build a Docker container image along with the Docker context.
+ - Filesystem
+ - See the Carpentries Glossario entry
+ - Filesystem layer
+ - Each container image is made up of multiple read-only filesystem layers that represent the file system differences from the layers below them in the image.
+ - Hardware
+ - See the Carpentries Glossario entry
+ - Hard drive
+ - The hardware in a computer that hosts the filesystem (or, sometimes, other storage types).
+ - Host computer
+ - The computer system which is running the container.
+ - Memory/RAM
+ - Random Access Memory (RAM) is where data the CPU is working with is temporarily stored.
+ - Operating system (OS)
+ - See the Carpentries Glossario entry
+ - Reproducible research
+ - See the Carpentries Glossario entry
+ - Software library
+ - See the Carpentries Glossario entry
+ - Tar archive
+ - A file archive format commonly used in Unix-like operating systems that combines multiple files into a single file. tar archive files are used as the export format of Docker images.
+ - Virtualization
+ - Containers are an example of virtualization – having a second "virtual" computer running and accessible from a host computer.
+
+
+
+
+
diff --git a/reproduciblity.md b/reproduciblity.md
new file mode 100644
index 000000000..8a4a90afe
--- /dev/null
+++ b/reproduciblity.md
@@ -0,0 +1,198 @@
+---
+title: 'Containers in Research Workflows: Reproducibility and Granularity'
+teaching: 20
+exercises: 5
+---
+
+::::::::::::::::::::::::::::::::::::::: objectives
+
+- Understand how container images can help make research more reproducible.
+- Understand what practical steps I can take to improve the reproducibility of my research using containers.
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
+
+:::::::::::::::::::::::::::::::::::::::: questions
+
+- How can I use container images to make my research more reproducible?
+- How do I incorporate containers into my research workflow?
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
+
+Although this workshop is titled "Reproducible computational environments using containers",
+so far we have mostly covered the mechanics of using Docker with only passing reference to
+the reproducibility aspects. In this section, we discuss these aspects in more detail.
+
+::::::::::::::::::::::::::::::::::::::::: callout
+
+## Work in progress...
+
+Note that the reproducibility aspects of software and containers are an active area of research, discussion and development, so they are subject to change. We will present some ideas and approaches here but best practices will likely evolve in the near future.
+
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
+
+## Reproducibility
+
+By *reproducibility* here we mean the ability of someone else (or your future self) to reproduce
+what you did computationally at a particular time (be this in research, analysis or something else)
+as closely as possible, even if they do not have access to exactly the same hardware resources
+that you had when you did the original work.
+
+What makes this especially important? With research being increasingly digital
+in nature, more and more of our research outputs are a result of the use of
+software and data processing or analysis. With complex software stacks or
+groups of dependencies often being required to run research software, we need
+approaches to ensure that we can make it as easy as possible to recreate an
+environment in which a given research process was undertaken. There are many
+reasons why this matters, one example being someone wanting to reproduce
+the results of a publication in order to verify them and then build on that
+research.
+
+Some examples of why containers are an attractive technology to help with reproducibility include:
+
+- The same computational work can be run seamlessly on different operating systems (e.g. Windows, macOS, Linux).
+- You can save the exact process that you used for your computational work (rather than relying on potentially incomplete notes).
+- You can save the exact versions of software and their dependencies in the container image.
+- You can provide access to legacy versions of software and underlying dependencies which may not be generally available any more.
+- Depending on their size, you can also potentially store a copy of key data within the container image.
+- You can archive and share a container image as well as associating a persistent identifier with it, to allow other researchers to reproduce and build on your work.
+
+## Sharing images
+
+As we have already seen, the Docker Hub provides a platform for sharing container images publicly. Once you have uploaded a container image, you can point people to its public location and they can download and build upon it.
+
+This is fine for working collaboratively with container images on a day-to-day basis but the Docker Hub is not a good option for long-term archiving of container images in support of research and publications as:
+
+- free accounts have a limit on how long a container image will be hosted if it is not updated
+- it does not support adding persistent identifiers to container images
+- it is easy to overwrite tagged container images with newer versions by mistake.
+
+## Archiving and persistently identifying container images using Zenodo
+
+When you publish your work or make it publicly available in some way it is good practice to make container images that you used for computational work available in an immutable, persistent way and to have an identifier that allows people to cite and give you credit for the work you have done. [Zenodo](https://zenodo.org/) is one service that provides this functionality.
+
+Zenodo supports the upload of *tar* archives and we can capture our Docker container images as tar archives using the `docker image save` command. For example, to export the container image we created earlier in this lesson:
+
+```bash
+docker image save alice/alpine-python:v1 -o alpine-python.tar
+```
+
+These tar archives can become quite large and Zenodo supports uploads of up to 50GB, so you may need to compress your archive with a tool such as gzip (or zip) to make it fit on Zenodo:
+
+```bash
+gzip alpine-python.tar
+```
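+
+For reference, anyone who later downloads the archive (including your future self) can restore the container image with `docker image load`, which also accepts gzip-compressed tar archives:
+
+```bash
+docker image load --input alpine-python.tar.gz
+```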
+
+Once you have your archive, you can [deposit it on Zenodo](https://zenodo.org/deposit/) and this will:
+
+- Create a long-term archive snapshot of your Docker container image which people (including your future self) can download to reuse or reproduce your work.
+- Create a persistent DOI (*Digital Object Identifier*) that you can cite in any publications or outputs to enable reproducibility and recognition of your work.
+
+In addition to the archive file itself, the deposit process will ask you to provide some basic metadata to classify the container image and the associated work.
+
+Note that Zenodo is not the only option for archiving and generating persistent DOIs for container images. There are other services out there -- for example, some organizations may provide their own, equivalent, service.
+
+## Reproducibility good practice
+
+- Make use of container images to capture the computational environment required for your work.
+- Decide on the appropriate granularity for the container images you will use for your computational work -- this will be different for each project/area. Take note of accepted practice from contemporary work in the same area. What are the right building blocks for individual container images in your work?
+- Document what you have done and why -- this can be put in comments in the `Dockerfile` and the use of the container image described in associated documentation and/or publications. Make sure that references are made in both directions so that the container image and the documentation are appropriately linked.
+- When you publish work (in whatever way) use an archiving and DOI service such
+ as Zenodo to make sure your container image is captured as it was used for
+ the work and that it is assigned a persistent DOI to allow it to be cited and
+ referenced properly.
+- Make use of tags when naming your container images. This ensures that, if you
+  update the image in the future, previous versions can be retained within a
+  container repository and easily accessed if required.
+- A built and archived container image can ensure a persistently bundled set of
+  software and dependencies. However, a `Dockerfile` provides a lightweight
+  means of storing a container definition that can be used to re-create a
+  container image at a later time. If you're taking this approach, ensure that
+  you specify software package and dependency versions within your `Dockerfile`
+  rather than just specifying package names, which will generally install the
+  most up-to-date version of a package and may be incompatible with other
+  elements of your software stack (a minimal sketch of this version pinning
+  follows this list). Also note that storing only a `Dockerfile`
+  presents reproducibility challenges because required versions of packages may
+  not be available indefinitely, potentially meaning that you're unable to
+  reproduce the required environment and, hence, the research results.
+
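+As a minimal sketch of the version pinning mentioned above (the base image tag and package versions here are hypothetical examples, not recommendations), a `Dockerfile` might look like:
+
+```
+FROM python:3.9.2-slim
+
+# Pin exact dependency versions (hypothetical) so that rebuilding the image
+# later produces the same environment
+RUN pip install numpy==1.20.1 pandas==1.2.3
+```
+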
+## Container Granularity
+
+As mentioned above, one of the decisions you may need to make when containerising your research workflows
+is what level of *granularity* you wish to employ. The two extremes of this decision could be characterized
+as:
+
+- Create a single container image with all the tools you require for your research or analysis workflow
+- Create many container images each running a single command (or step) of the workflow and use them together
+
+Of course, many real applications will sit somewhere between these two extremes.
+
+::::::::::::::::::::::::::::::::::::::: challenge
+
+## Positives and negatives
+
+What are the advantages and disadvantages of the two approaches to container granularity for research
+workflows described above? Think about this
+and write a few bullet points for advantages and disadvantages for each approach in the course Etherpad.
+
+::::::::::::::: solution
+
+## Solution
+
+This is not an exhaustive list but some of the advantages and disadvantages could be:
+
+### Single large container image
+
+- Advantages:
+ - Simpler to document
+ - Full set of requirements packaged in one place
+ - Potentially easier to maintain (though could be opposite if working with large, distributed group)
+- Disadvantages:
+ - Could get very large in size, making it more difficult to distribute
+ - Could use [Docker multi-stage build](https://docs.docker.com/develop/develop-images/multistage-build) to reduce size
+ - May end up with same dependency issues within the container image from different software requirements
+ - Potentially more complex to test
+ - Less re-useable for different, but related, work
+
+### Multiple smaller container images
+
+- Advantages:
+ - Individual components can be re-used for different, but related, work
+ - Individual parts are smaller in size making them easier to distribute
+ - Avoid dependency issues between different pieces of software
+ - Easier to test
+- Disadvantages:
+ - More difficult to document
+ - Potentially more difficult to maintain (though could be easier if working with large, distributed group)
+ - May end up with dependency issues between component container images if they get out of sync
+
+
+
+:::::::::::::::::::::::::
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
+
+::::::::::::::::::::::::::::::::::::::: challenge
+
+## Next steps with containers
+
+Now that we're at the end of the lesson material, take a moment to reflect on
+what you've learned, how it applies to you, and what to do next.
+
+1. In your own notes, write down or diagram your understanding of Docker containers and container images:
+ concepts, commands, and how they work.
+2. In the workshop's shared notes document, write down how you think you might
+ use containers in your daily work. If there's something you want to try doing with
+ containers right away, what is a next step after this workshop to make that happen?
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
+
+:::::::::::::::::::::::::::::::::::::::: keypoints
+
+- Container images allow us to encapsulate the computation (and data) we have used in our research.
+- Using a service such as Docker Hub allows us to easily share computational work we have done.
+- Using container images along with a DOI service such as Zenodo allows us to capture our work and enables reproducibility.
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
+
+
diff --git a/running-containers.md b/running-containers.md
new file mode 100644
index 000000000..4db2d815e
--- /dev/null
+++ b/running-containers.md
@@ -0,0 +1,366 @@
+---
+title: Exploring and Running Containers
+teaching: 20
+exercises: 10
+---
+
+::::::::::::::::::::::::::::::::::::::: objectives
+
+- Use the correct command to see which Docker container images are on your computer.
+- Be able to download new Docker container images.
+- Demonstrate how to start an instance of a container from a container image.
+- Describe at least two ways to execute commands inside a running Docker container.
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
+
+:::::::::::::::::::::::::::::::::::::::: questions
+
+- How do I interact with Docker containers and container images on my computer?
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
+
+::::::::::::::::::::::::::::::::::::::::: callout
+
+## Reminder of terminology: container images and containers
+
+Recall that a *container image* is the template from which particular instances of *containers* will be created.
+
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
+
+Let's explore our first Docker container. The Docker team provides a simple container
+image online called `hello-world`. We'll start with that one.
+
+## Downloading Docker images
+
+The `docker image` command is used to interact with Docker container images.
+You can find out what container images you have on your computer by using the following command ("ls" is short for "list"):
+
+```bash
+$ docker image ls
+```
+
+If you've just
+installed Docker, you won't see any container images listed.
+
+To get a copy of the `hello-world` Docker container image from the internet, run this command:
+
+```bash
+$ docker image pull hello-world
+```
+
+You should see output like this:
+
+```output
+Using default tag: latest
+latest: Pulling from library/hello-world
+1b930d010525: Pull complete
+Digest: sha256:f9dfddf63636d84ef479d645ab5885156ae030f611a56f3a7ac7f2fdd86d7e4e
+Status: Downloaded newer image for hello-world:latest
+docker.io/library/hello-world:latest
+```
+
+::::::::::::::::::::::::::::::::::::::::: callout
+
+## Docker Hub
+
+Where did the `hello-world` container image come from? It came from the Docker Hub
+website, which is a place to share Docker container images with other people. More on that
+in a later episode.
+
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
+
+::::::::::::::::::::::::::::::::::::::: challenge
+
+## Exercise: Check on Your Images
+
+What command would you use to see if the `hello-world` Docker container image had downloaded
+successfully and was on your computer?
+Give it a try before checking the solution.
+
+::::::::::::::: solution
+
+## Solution
+
+To see if the `hello-world` container image is now on your computer, run:
+
+```bash
+$ docker image ls
+```
+
+:::::::::::::::::::::::::
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
+
+Note that the downloaded `hello-world` container image is not saved in your terminal's current working directory! (Run
+`ls` by itself to check.) The container image is not a file like our normal programs and documents;
+Docker stores it in a specific location that isn't commonly accessed, so it's necessary
+to use the special `docker image` command to see what Docker container images you have on your
+computer.
+
+## Running the `hello-world` container
+
+To create and run containers from named Docker container images you use the `docker container run` command. Try the following `docker container run` invocation. Note that it does not matter what your current working directory is.
+
+```bash
+$ docker container run hello-world
+```
+
+```output
+Hello from Docker!
+This message shows that your installation appears to be working correctly.
+
+To generate this message, Docker took the following steps:
+ 1. The Docker client contacted the Docker daemon.
+ 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
+ (amd64)
+ 3. The Docker daemon created a new container from that image which runs the
+ executable that produces the output you are currently reading.
+ 4. The Docker daemon streamed that output to the Docker client, which sent it
+ to your terminal.
+
+To try something more ambitious, you can run an Ubuntu container with:
+ $ docker run -it ubuntu bash
+
+Share images, automate workflows, and more with a free Docker ID:
+ https://hub.docker.com/
+
+For more examples and ideas, visit:
+ https://docs.docker.com/get-started/
+```
+
+What just happened? When we use the `docker container run` command, Docker does three things:
+
+| 1\. Starts a Running Container | 2\. Performs Default Action | 3\. Shuts Down the Container |
+| --------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Starts a running container, based on the container image. Think of this as the "alive" or "inflated" version of the container -- it's actually doing something. | If the container has a default action set, it will perform that default action. This could be as simple as printing a message (as above) or running a whole analysis pipeline! | Once the default action is complete, the container stops running (or exits). The container image is still there, but nothing is actively running. |
+
+The `hello-world` container is set up to run an action by default --
+namely to print this message.
+
+::::::::::::::::::::::::::::::::::::::::: callout
+
+## Using `docker container run` to get the image
+
+We could have skipped the `docker image pull` step; if you use the `docker container run`
+command and you don't already have a copy of the Docker container image, Docker will
+automatically pull the container image first and then run it.
+
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
+
+## Running a container with a chosen command
+
+But what if we wanted to do something different with the container? The output
+just gave us a suggestion of what to do -- let's use a different Docker container image
+to explore what else we can do with the `docker container run` command. The suggestion above
+is to use `ubuntu`, but we're going to run a different Linux distribution, `alpine`,
+instead because it's quicker to download.
+
+::::::::::::::::::::::::::::::::::::::: challenge
+
+## Run the Alpine Docker container
+
+Try downloading the `alpine` container image and using it to run a container. You can do it in
+two steps, or one. What are they?
+
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
+
+What happened when you ran the Alpine Docker container?
+
+```bash
+$ docker container run alpine
+```
+
+If you have never used the `alpine` Docker container image on your computer, Docker probably printed a
+message that it couldn't find the container image and had to download it.
+If you have used the `alpine` container image before, the command will probably show no output. That's
+because this particular container is designed for you to provide commands yourself. Try running
+this instead:
+
+```bash
+$ docker container run alpine cat /etc/os-release
+```
+
+You should see the output of the `cat /etc/os-release` command, which prints out
+the version of Alpine Linux that this container is using and a few additional bits of information.
+
+::::::::::::::::::::::::::::::::::::::: challenge
+
+## Hello World, Part 2
+
+Can you run a copy of the `alpine` container and make it print a "hello world" message?
+
+Give it a try before checking the solution.
+
+::::::::::::::: solution
+
+## Solution
+
+Use the same command as above, but with the `echo` command to print a message.
+
+```bash
+$ docker container run alpine echo 'Hello World'
+```
+
+:::::::::::::::::::::::::
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
+
+So here, we see another option -- we can provide commands at the end of the `docker container run`
+command and they will execute inside the running container.
+
+## Running containers interactively
+
+In all the examples above, Docker has started the container, run a command, and then
+immediately stopped the container. But what if we wanted to keep the container
+running so we could log into it and test drive more commands? The way to
+do this is by adding the interactive flags `-i` and `-t` (usually combined as `-it`)
+to the `docker container run` command and provide a shell (`bash`, `sh`, etc.)
+as our command. The `alpine` Docker container image doesn't include `bash` so we need
+to use `sh`.
+
+```bash
+$ docker container run -it alpine sh
+```
+
+::::::::::::::::::::::::::::::::::::::::: callout
+
+## Technically...
+
+Technically, the interactive flag is just `-i` -- the extra `-t` (combined
+as `-it` above) is the "pseudo-TTY" option, a fancy term that means a text interface.
+This allows you to connect to a shell, like `sh`, using a command line. Since you usually
+want to have a command line when running interactively, it makes sense to use the two together.
+
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
+
+Your prompt should change significantly to look like this:
+
+```bash
+/ #
+```
+
+That's because you're now inside the running container! Try these commands:
+
+- `pwd`
+- `ls`
+- `whoami`
+- `echo $PATH`
+- `cat /etc/os-release`
+
+All of these are being run from inside the running container, so you'll get information
+about the container itself, instead of your computer. To finish using the container,
+type `exit`.
+
+```bash
+/ # exit
+```
+
+::::::::::::::::::::::::::::::::::::::: challenge
+
+## Practice Makes Perfect
+
+Can you find out the version of Ubuntu installed on the `ubuntu` container image?
+(Hint: You can use the same command as you used to find the version of alpine.)
+
+Can you also find the `apt-get` program? What does it do? (Hint: passing `--help`
+to almost any command will give you more information.)
+
+::::::::::::::: solution
+
+## Solution 1 -- Interactive
+
+Run an interactive Ubuntu container -- you can use `docker image pull` first, or just
+run it with this command:
+
+```bash
+$ docker container run -it ubuntu sh
+```
+
+Or you can get the bash shell instead:
+
+```bash
+$ docker container run -it ubuntu bash
+```
+
+Then try running these commands:
+
+```bash
+/# cat /etc/os-release
+/# apt-get --help
+```
+
+Exit when you're done.
+
+```bash
+/# exit
+```
+
+:::::::::::::::::::::::::
+
+::::::::::::::: solution
+
+## Solution 2 -- Run commands
+
+Run an Ubuntu container, first with a command to read out the Linux version:
+
+```bash
+$ docker container run ubuntu cat /etc/os-release
+```
+
+Then run a container with a command to print out the apt-get help:
+
+```bash
+$ docker container run ubuntu apt-get --help
+```
+
+:::::::::::::::::::::::::
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
+
+::::::::::::::::::::::::::::::::::::::::: callout
+
+## Even More Options
+
+There are many more options, besides `-it`, that can be used with the `docker container run`
+command! A few of them will be covered in [later episodes](/advanced-containers),
+and we'll share two more common ones here:
+
+- `--rm`: this option guarantees that any running container is completely
+ removed from your computer after the container is stopped. Without this option,
+ Docker actually keeps the "stopped" container around, which you'll see in a later
+ episode. Note that this option doesn't impact the *container images* that you've pulled,
+ just running instances of containers.
+
+- `--name=`: By default, Docker assigns a random name and ID number to each container
+ instance that you run on your computer. If you want to be able to more easily refer
+ to a specific running container, you can assign it a name using this option.
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
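+
+As a quick, hypothetical illustration of these two options used together, the following runs a named Alpine container that is removed automatically when it exits:
+
+```bash
+$ docker container run --rm --name=my-alpine alpine echo 'Hello World'
+```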
+
+## Conclusion
+
+So far, we've seen how to download Docker container images, use them to run commands inside
+running containers, and even how to explore a running container from the inside.
+Next, we'll take a closer look at all the different kinds of Docker container images that are out there.
+
+
+
+
+
+
+
+:::::::::::::::::::::::::::::::::::::::: keypoints
+
+- The `docker image pull` command downloads Docker container images from the internet.
+- The `docker image ls` command lists Docker container images that are (now) on your computer.
+- The `docker container run` command creates running containers from container images and can run commands inside them.
+- When using the `docker container run` command, a container can run a default action (if it has one), a user specified action, or a shell to be used interactively.
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
diff --git a/setup.md b/setup.md
new file mode 100644
index 000000000..18fa2c3dc
--- /dev/null
+++ b/setup.md
@@ -0,0 +1,156 @@
+---
+title: Setup
+---
+
+### Website accounts to create
+
+Please seek help at the start of the lesson if you have not been able to establish a website account on:
+
+- The [Docker Hub](https://hub.docker.com). We will use the Docker Hub to download pre-built container images, and for you to upload and download container images that you create, as explained in the relevant lesson episodes.
+
+### Files to download
+
+Download the [`docker-intro.zip`](files/docker-intro.zip) file. *This file can alternatively be downloaded from the `files` directory in the [docker-introduction GitHub repository](https://github.com/carpentries-incubator/docker-introduction/blob/gh-pages/files/docker-intro.zip)*.
+
+Move the downloaded file to your Desktop and unzip it. It should unzip to a folder called `docker-intro`.
+
+### Software to install
+
+Docker's installation experience has steadily improved; however, situations will arise in which installing Docker on your computer may not be straightforward unless you have a large amount of technical experience.
+Workshops try to have helpers on hand who have worked their way through the install process, but do be prepared for some troubleshooting.
+
+In most cases, you will need to have administrator rights on the computer in order to install the Docker software. If you are using a computer managed by your organisation and do not have administrator rights, you *may* be able to get your organisation's IT staff to install Docker for you. Alternatively your IT support staff *may* be able to give you remote access to a server that can run Docker commands.
+
+Please try to install the appropriate software from the list below depending on the operating system that your computer is running. Do let the workshop organisers know as early as possible if you are unable to install Docker using these instructions, as there may be other options available.
+
+#### Microsoft Windows
+
+**You must have admin rights to run Docker!** Some parts of the lesson will work without running as admin but if you are unable to `Run as administrator` on your machine some elements of this workshop might not work as described.
+
+Ideally, you will be able to install the Docker Desktop software, following the [Docker website's documentation](https://docs.docker.com/docker-for-windows/install/). Note that the instructions for installing Docker Desktop on Windows 10 Home Edition are different from other versions of Windows 10.
+
+Note that the above installation instructions highlight a minimum version or "build" that is required to be able to install Docker on your Windows 10 system. See [Which version of Windows operating system am I running?](https://support.microsoft.com/en-us/windows/which-version-of-windows-operating-system-am-i-running-628bec99-476a-2c13-5296-9dd081cdd808) for details of how to find out which version/build of Windows 10 you have.
+
+If you are unable to follow the above instructions to install Docker Desktop on your Windows system, the final release of the deprecated Docker Toolbox version of Docker for Windows can be downloaded from the [releases page of the Docker Toolbox GitHub repository](https://github.com/docker/toolbox/releases). (Download the `.exe` file for the Windows installer). *Please note that this final release of Docker Toolbox includes an old version of Docker and you are strongly advised not to attempt to use this for any production use. It will, however, enable you to follow along with the lesson material.*
+
+::::::::::::::::::::::::::::::::::::::::: callout
+
+## Warning: Git Bash
+
+If you are using Git Bash as your terminal on Windows then you should be aware that you may run
+into issues running some of the commands in this lesson, as Git Bash automatically rewrites
+any paths you specify at the command line into Windows versions of the paths, and this will confuse
+the Docker container you are trying to use. For example, if you enter the command:
+
+```
+docker run alpine cat /etc/os-release
+```
+
+Git Bash will change the `/etc/os-release` path to `C:\etc\os-release\` before passing the command
+to the Docker container, and the container will report an error. If you want to use Git Bash, you
+can prevent this path translation by adding an extra `/` to the start of the
+path, i.e. the command would become:
+
+```
+docker run alpine cat //etc/os-release
+```
+
+This should suppress the path translation functionality in Git Bash.
+
+
+::::::::::::::::::::::::::::::::::::::::::::::::::
+
+#### Apple macOS
+
+Ideally, you will be able to install the Docker Desktop software, following the
+[Docker website's documentation](https://docs.docker.com/docker-for-mac/install/).
+The current version of the Docker Desktop software requires macOS version 10.14 (Mojave) or later.
+
+If you already use Homebrew or MacPorts to manage your software, and would prefer to use those
+tools rather than Docker's installer, you can do so. For Homebrew, you can run the command
+`brew install --cask docker`. Note that you still need to run the Docker graphical user interface
+once to complete the initial setup, after which time the command line functionality of Docker will
+become available. The Homebrew install of Docker also requires a minimum macOS version of 10.14.
+The MacPorts Docker port should support older, as well as the most recent, operating system
+versions (see the [port details](https://ports.macports.org/port/docker/details/)), but note that
+we have not recently tested the Docker installation process via MacPorts.
+
+#### Linux
+
+There are too many varieties of Linux to give precise instructions here, but hopefully you can locate documentation for getting Docker installed on your Linux distribution. It may already be installed. If it is not already installed on your system, the [Install Docker Engine](https://docs.docker.com/engine/install/) page provides an overview of supported Linux distributions and pointers to relevant installation information. Alternatively, see:
+
+- [Docker Engine on CentOS](https://docs.docker.com/install/linux/docker-ce/centos/)
+- [Docker Engine on Debian](https://docs.docker.com/install/linux/docker-ce/debian/)
+- [Docker Engine on Fedora](https://docs.docker.com/install/linux/docker-ce/fedora/)
+- [Docker Engine on Ubuntu](https://docs.docker.com/install/linux/docker-ce/ubuntu/)
+
+### Verify Installation
+
+To quickly check that the Docker client and server are working, run the following command in a new terminal or ssh session:
+
+```bash
+$ docker version
+```
+
+```output
+Client:
+ Version: 20.10.2
+ API version: 1.41
+ Go version: go1.13.8
+ Git commit: 20.10.2-0ubuntu2
+ Built: Tue Mar 2 05:52:27 2021
+ OS/Arch: linux/arm64
+ Context: default
+ Experimental: true
+
+Server:
+ Engine:
+ Version: 20.10.2
+ API version: 1.41 (minimum version 1.12)
+ Go version: go1.13.8
+ Git commit: 20.10.2-0ubuntu2
+ Built: Tue Mar 2 05:45:16 2021
+ OS/Arch: linux/arm64
+ Experimental: false
+ containerd:
+ Version: 1.4.4-0ubuntu1
+ GitCommit:
+ runc:
+ Version: 1.0.0~rc95-0ubuntu1~21.04.1
+ GitCommit:
+ docker-init:
+ Version: 0.19.0
+ GitCommit:
+```
+
+The above output shows a successful installation and will vary based on your system. The important part is that the "Client" and the "Server" sections are both present and return information. It is beyond the scope of this document to debug installation problems, but common errors include the user not belonging to the `docker` group and forgetting to start a new terminal or ssh session.
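+
+For the `docker` group issue on Linux, the usual fix (described in Docker's post-installation steps for Linux, and requiring administrator rights) is to add your user to the `docker` group and then start a new login session:
+
+```bash
+# Add the current user to the 'docker' group, then log out and back in
+$ sudo usermod -aG docker $USER
+```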
+
+### A quick tutorial on copy/pasting file contents from episodes of the lesson
+
+Let's say you want to copy text off the lesson website and paste it into a file named `myfile` in the current working directory of a shell window. This can be achieved in many ways, depending on your computer's operating system, but here are some routes I have found work for me:
+
+- macOS and Linux: you are likely to have the `nano` editor installed, which provides you with a very straightforward way to create such a file, just run `nano myfile`, then paste text into the shell window, and press control\+x to exit: you will be prompted whether you want to save changes to the file, and you can type y to say "yes".
+- Microsoft Windows running `cmd.exe` shells:
+ - `del myfile` to remove `myfile` if it already existed;
+ - `copy con myfile` to mean what's typed in your shell window is copied into `myfile`;
+ - paste the text you want within `myfile` into the shell window;
+ - type control\+z and then press enter to finish copying content into `myfile` and return to your shell;
+ - you can run the command `type myfile` to check the content of that file, as a double-check.
+- Microsoft Windows running PowerShell:
+ - The `cmd.exe` method probably works, but another is to paste your file contents into a so-called "here-string" between `@'` and `'@` as in this example that follows (the ">" is the prompt indicator):
+
+ ```
+ > @'
+ Some hypothetical
+ file content that is
+
+ split over many
+
+ lines.
+ '@ | Set-Content myfile -encoding ascii
+ ```
+
+
+
+