
Commit

jupyter book
phelps-sg committed Apr 26, 2023
1 parent de6eeb2 commit f9132e2
Showing 12 changed files with 242 additions and 32 deletions.
10 changes: 10 additions & 0 deletions .gitignore
@@ -3,3 +3,13 @@
/results.txt
/paper/llm-cooperation.pdf
/notebooks/Results.ipynb
/jupyter-book/Results.ipynb
/jupyter-book/_build/.doctrees/environment.pickle
/jupyter-book/_build/.doctrees/intro.doctree
/jupyter-book/_build/.doctrees/llm-cooperation.doctree
/jupyter-book/_build/.doctrees/markdown.doctree
/jupyter-book/_build/.doctrees/markdown-notebooks.doctree
/jupyter-book/_build/.doctrees/notebooks.doctree
/jupyter-book/_build/.doctrees/Results.doctree
/jupyter-book/.ipynb_checkpoints/Results-checkpoint.ipynb
/jupytext.toml
3 changes: 3 additions & 0 deletions notebooks/Results.py → jupyter-book/Results.py
@@ -12,6 +12,9 @@
# name: python3
# ---

# %% [markdown]
# # Results

# %%
import pandas as pd
# %%
31 changes: 31 additions & 0 deletions jupyter-book/_config.yml
@@ -0,0 +1,31 @@
# Book settings
# Learn more at https://jupyterbook.org/customize/config.html

title: Investigating Emergent Goal-Like Behavior in Large Language Models using Experimental Economics
author: Steve Phelps

# Force re-execution of notebooks on each build.
# See https://jupyterbook.org/content/execute.html
execute:
execute_notebooks: force

# Define the name of the latex output file for PDF builds
latex:
latex_documents:
targetname: llm-cooperation.tex

# Add a bibtex file so that we can create citations
bibtex_bibfiles:
- llm-cooperation.bib

# Information about where the book exists on the web
repository:
url: https://github.com/executablebooks/jupyter-book # Online location of your book
path_to_book: docs # Optional path to your book, relative to the repository root
branch: master # Which branch of the repository should be used when creating links (optional)

# Add GitHub buttons to your book
# See https://jupyterbook.org/customize/config.html#add-a-link-to-your-repository
html:
use_issues_button: true
use_repository_button: true
8 changes: 8 additions & 0 deletions jupyter-book/_toc.yml
@@ -0,0 +1,8 @@
# Table of contents
# Learn more at https://jupyterbook.org/customize/toc.html

format: jb-book
root: intro
chapters:
- file: llm-cooperation
- file: Results
23 changes: 23 additions & 0 deletions jupyter-book/intro.md
@@ -0,0 +1,23 @@
As large language models (LLMs) continue to advance, understanding the emergent
goal-like behaviors and cooperative tendencies of artificially generated agents
becomes crucial for AI alignment research. This paper outlines an
interdisciplinary research agenda that combines insights from cognitive
science, artificial intelligence, and experimental economics to explore the
cooperative and competitive dynamics of simulacra instantiated by LLM prompts.

We propose a series of experimental economics simulations, including the
Prisoner's Dilemma, public goods games, and other well-established frameworks,
to evaluate the propensity of LLM-generated simulacra to cooperate under
various conditions. These experiments will enable us to assess the goal-like
behaviors that emerge, and whether these behaviors align with human values and
cooperation norms.

In addition to detailing the experimental design, we will discuss the potential
implications of our findings for the development of AI alignment strategies and
safe AGI. By elucidating the factors that govern cooperative behavior in
LLM-generated simulacra, this research aims to contribute to the broader
understanding of the emergent properties of AI systems and inform the design of
models that better align with human values and societal goals.

```{tableofcontents}
```
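The public goods game mentioned above can be illustrated with a minimal payoff computation. This is an editorial sketch, not code from this commit; the endowment and multiplier values are assumptions chosen to make the social dilemma visible.

```python
# Public goods game payoff sketch (illustrative; parameter values are assumptions).
def public_goods_payoffs(contributions, endowment=10.0, multiplier=1.6):
    """Each player keeps (endowment - contribution) plus an equal share
    of the pooled contributions scaled by the multiplier."""
    pool = multiplier * sum(contributions)
    share = pool / len(contributions)
    return [endowment - c + share for c in contributions]

# Full cooperation yields 16.0 each; universal free-riding yields 10.0 each.
# Yet a lone free-rider among cooperators earns more than the cooperators do,
# which is the tension these experiments probe in LLM-generated simulacra.
print(public_goods_payoffs([10, 10, 10, 10]))  # [16.0, 16.0, 16.0, 16.0]
print(public_goods_payoffs([0, 0, 0, 0]))      # [10.0, 10.0, 10.0, 10.0]
```

With a multiplier above 1 but below the group size, contributing is socially efficient yet individually costly, which is the standard design choice for eliciting cooperation norms.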
File renamed without changes.
30 changes: 0 additions & 30 deletions paper/llm-cooperation.md → jupyter-book/llm-cooperation.md
@@ -1,35 +1,6 @@
---
title: Investigating Emergent Goal-Like Behavior in Large Language Models using Experimental Economics
author: Steve Phelps and Yvan Russell
geometry: margin=2cm
fontsize: 12pt
date: April 6, 2023
output: pdf_document
bibliography: llm-cooperation.bib
---

## Abstract

As large language models (LLMs) continue to advance, understanding the emergent
goal-like behaviors and cooperative tendencies of artificially generated agents
becomes crucial for AI alignment research. This paper outlines an
interdisciplinary research agenda that combines insights from cognitive
science, artificial intelligence, and experimental economics to explore the
cooperative and competitive dynamics of simulacra instantiated by LLM prompts.

We propose a series of experimental economics simulations, including the
Prisoner's Dilemma, public goods games, and other well-established frameworks,
to evaluate the propensity of LLM-generated simulacra to cooperate under
various conditions. These experiments will enable us to assess the goal-like
behaviors that emerge, and whether these behaviors align with human values and
cooperation norms.

In addition to detailing the experimental design, we will discuss the potential
implications of our findings for the development of AI alignment strategies and
safe AGI. By elucidating the factors that govern cooperative behavior in
LLM-generated simulacra, this research aims to contribute to the broader
understanding of the emergent properties of AI systems and inform the design of
models that better align with human values and societal goals.

## Motivation and background

@@ -266,5 +237,4 @@ Payoff structure: The experiment's payoff structure incentivizes cooperation, co

Although the specific payoffs in the experiment differ from the classic Prisoner's Dilemma, the structure and the strategic decision-making process are quite similar. The experiment can be seen as a variant of the Prisoner's Dilemma, allowing researchers to study trust, cooperation, and competition in a controlled setting.
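The game structure described above can be sketched as follows. The payoff numbers are the canonical textbook values satisfying T > R > P > S, not the experiment's actual payoffs:

```python
# One-shot Prisoner's Dilemma sketch with canonical payoffs (illustrative only;
# the experiment discussed above uses different payoff values).
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation: reward R
    ("C", "D"): (0, 5),  # sucker's payoff S vs temptation T
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection: punishment P
}

def best_response(opponent_move):
    """Return the move that maximizes one's own payoff against a fixed opponent."""
    return max("CD", key=lambda move: PAYOFFS[(move, opponent_move)][0])

# Defection is dominant -- it is the best response to either opponent move --
# even though mutual cooperation Pareto-dominates mutual defection.
assert best_response("C") == "D"
assert best_response("D") == "D"
```

Any payoff matrix preserving this ordering creates the same strategic tension, which is why the experiment's variant can still be analyzed as a Prisoner's Dilemma.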

## Bibliography

53 changes: 53 additions & 0 deletions jupyter-book/markdown-notebooks.md
@@ -0,0 +1,53 @@
---
jupytext:
formats: md:myst
text_representation:
extension: .md
format_name: myst
format_version: 0.13
jupytext_version: 1.11.5
kernelspec:
display_name: Python 3
language: python
name: python3
---

# Notebooks with MyST Markdown

Jupyter Book also lets you write text-based notebooks using MyST Markdown.
See [the Notebooks with MyST Markdown documentation](https://jupyterbook.org/file-types/myst-notebooks.html) for more detailed instructions.
This page shows off a notebook written in MyST Markdown.

## An example cell

With MyST Markdown, you can define code cells with a directive like so:

```{code-cell}
print(2 + 2)
```

When your book is built, the contents of any `{code-cell}` blocks will be
executed with your default Jupyter kernel, and their outputs will be displayed
in-line with the rest of your content.

```{seealso}
Jupyter Book uses [Jupytext](https://jupytext.readthedocs.io/en/latest/) to convert text-based files to notebooks, and can support [many other text-based notebook files](https://jupyterbook.org/file-types/jupytext.html).
```

## Create a notebook with MyST Markdown

MyST Markdown notebooks are defined by two things:

1. YAML metadata that is needed to understand if / how it should convert text files to notebooks (including information about the kernel needed).
See the YAML at the top of this page for example.
2. The presence of `{code-cell}` directives, which will be executed with your book.

That's all that is needed to get started!

## Quickly add YAML metadata for MyST Notebooks

If you have a markdown file and you'd like to quickly add YAML metadata to it, so that Jupyter Book will treat it as a MyST Markdown Notebook, run the following command:

```
jupyter-book myst init path/to/markdownfile.md
```
55 changes: 55 additions & 0 deletions jupyter-book/markdown.md
@@ -0,0 +1,55 @@
# Markdown Files

Whether you write your book's content in Jupyter Notebooks (`.ipynb`) or
in regular markdown files (`.md`), you'll write in the same flavor of markdown
called **MyST Markdown**.
This is a simple file to help you get started and show off some syntax.

## What is MyST?

MyST stands for "Markedly Structured Text". It
is a slight variation on a flavor of markdown called "CommonMark" markdown,
with small syntax extensions to allow you to write **roles** and **directives**
in the Sphinx ecosystem.

For more about MyST, see [the MyST Markdown Overview](https://jupyterbook.org/content/myst.html).

## Sample Roles and Directives

Roles and directives are two of the most powerful tools in Jupyter Book. They
are kind of like functions, but written in a markup language. They both
serve a similar purpose, but **roles are written in one line**, whereas
**directives span many lines**. They both accept different kinds of inputs,
and what they do with those inputs depends on the specific role or directive
that is being called.

Here is a "note" directive:

```{note}
Here is a note
```

It will be rendered in a special box when you build your book.

Here is an inline directive to refer to a document: {doc}`markdown-notebooks`.


## Citations

You can also cite references that are stored in a `bibtex` file. For example,
the following syntax: `` {cite}`holdgraf_evidence_2014` `` will render like
this: {cite}`holdgraf_evidence_2014`.

Moreover, you can insert a bibliography into your page with this syntax:
The `{bibliography}` directive must be used for all the `{cite}` roles to
render properly.
For example, if the references for your book are stored in `references.bib`,
then the bibliography is inserted with:

```{bibliography}
```

## Learn more

This is just a simple starter to get you started.
You can learn a lot more at [jupyterbook.org](https://jupyterbook.org).
56 changes: 56 additions & 0 deletions jupyter-book/references.bib
@@ -0,0 +1,56 @@
---
---
@inproceedings{holdgraf_evidence_2014,
address = {Brisbane, Australia, Australia},
title = {Evidence for {Predictive} {Coding} in {Human} {Auditory} {Cortex}},
booktitle = {International {Conference} on {Cognitive} {Neuroscience}},
publisher = {Frontiers in Neuroscience},
author = {Holdgraf, Christopher Ramsay and de Heer, Wendy and Pasley, Brian N. and Knight, Robert T.},
year = {2014}
}

@article{holdgraf_rapid_2016,
title = {Rapid tuning shifts in human auditory cortex enhance speech intelligibility},
volume = {7},
issn = {2041-1723},
url = {http://www.nature.com/doifinder/10.1038/ncomms13654},
doi = {10.1038/ncomms13654},
number = {May},
journal = {Nature Communications},
author = {Holdgraf, Christopher Ramsay and de Heer, Wendy and Pasley, Brian N. and Rieger, Jochem W. and Crone, Nathan and Lin, Jack J. and Knight, Robert T. and Theunissen, Frédéric E.},
year = {2016},
pages = {13654},
file = {Holdgraf et al. - 2016 - Rapid tuning shifts in human auditory cortex enhance speech intelligibility.pdf:C\:\\Users\\chold\\Zotero\\storage\\MDQP3JWE\\Holdgraf et al. - 2016 - Rapid tuning shifts in human auditory cortex enhance speech intelligibility.pdf:application/pdf}
}

@inproceedings{holdgraf_portable_2017,
title = {Portable learning environments for hands-on computational instruction using container-and cloud-based technology to teach data science},
volume = {Part F1287},
isbn = {978-1-4503-5272-7},
doi = {10.1145/3093338.3093370},
abstract = {© 2017 ACM. There is an increasing interest in learning outside of the traditional classroom setting. This is especially true for topics covering computational tools and data science, as both are challenging to incorporate in the standard curriculum. These atypical learning environments offer new opportunities for teaching, particularly when it comes to combining conceptual knowledge with hands-on experience/expertise with methods and skills. Advances in cloud computing and containerized environments provide an attractive opportunity to improve the effciency and ease with which students can learn. This manuscript details recent advances towards using commonly-Available cloud computing services and advanced cyberinfrastructure support for improving the learning experience in bootcamp-style events. We cover the benets (and challenges) of using a server hosted remotely instead of relying on student laptops, discuss the technology that was used in order to make this possible, and give suggestions for how others could implement and improve upon this model for pedagogy and reproducibility.},
booktitle = {{ACM} {International} {Conference} {Proceeding} {Series}},
author = {Holdgraf, Christopher Ramsay and Culich, A. and Rokem, A. and Deniz, F. and Alegro, M. and Ushizima, D.},
year = {2017},
keywords = {Teaching, Bootcamps, Cloud computing, Data science, Docker, Pedagogy}
}

@article{holdgraf_encoding_2017,
title = {Encoding and decoding models in cognitive electrophysiology},
volume = {11},
issn = {16625137},
doi = {10.3389/fnsys.2017.00061},
abstract = {© 2017 Holdgraf, Rieger, Micheli, Martin, Knight and Theunissen. Cognitive neuroscience has seen rapid growth in the size and complexity of data recorded from the human brain as well as in the computational tools available to analyze this data. This data explosion has resulted in an increased use of multivariate, model-based methods for asking neuroscience questions, allowing scientists to investigate multiple hypotheses with a single dataset, to use complex, time-varying stimuli, and to study the human brain under more naturalistic conditions. These tools come in the form of “Encoding” models, in which stimulus features are used to model brain activity, and “Decoding” models, in which neural features are used to generated a stimulus output. Here we review the current state of encoding and decoding models in cognitive electrophysiology and provide a practical guide toward conducting experiments and analyses in this emerging field. Our examples focus on using linear models in the study of human language and audition. We show how to calculate auditory receptive fields from natural sounds as well as how to decode neural recordings to predict speech. The paper aims to be a useful tutorial to these approaches, and a practical introduction to using machine learning and applied statistics to build models of neural activity. The data analytic approaches we discuss may also be applied to other sensory modalities, motor systems, and cognitive systems, and we cover some examples in these areas. In addition, a collection of Jupyter notebooks is publicly available as a complement to the material covered in this paper, providing code examples and tutorials for predictive modeling in python. The aimis to provide a practical understanding of predictivemodeling of human brain data and to propose best-practices in conducting these analyses.},
journal = {Frontiers in Systems Neuroscience},
author = {Holdgraf, Christopher Ramsay and Rieger, J.W. and Micheli, C. and Martin, S. and Knight, R.T. and Theunissen, F.E.},
year = {2017},
keywords = {Decoding models, Encoding models, Electrocorticography (ECoG), Electrophysiology/evoked potentials, Machine learning applied to neuroscience, Natural stimuli, Predictive modeling, Tutorials}
}

@book{ruby,
title = {The Ruby Programming Language},
author = {Flanagan, David and Matsumoto, Yukihiro},
year = {2008},
publisher = {O'Reilly Media}
}
3 changes: 3 additions & 0 deletions jupyter-book/requirements.txt
@@ -0,0 +1,3 @@
jupyter-book
matplotlib
numpy
2 changes: 0 additions & 2 deletions paper/Makefile

This file was deleted.
