Merge pull request #91 from fhdsl/diffaudience
Adding info for audiences
carriewright11 authored Jan 2, 2024
2 parents c752720 + 6793932 commit a59222a
Showing 9 changed files with 443 additions and 38 deletions.
2 changes: 1 addition & 1 deletion 02a-Avoiding_Harm-intro.Rmd
Original file line number Diff line number Diff line change
@@ -21,7 +21,7 @@ ottrpal::include_slide("https://docs.google.com/presentation/d/1L6-8DWn028c1o0p9

## Target Audience

This course is intended for leaders who might make decisions about AI at nonprofits, in industry, or in academia. They may be interested in using or developing AI tools.

## Curriculum

168 changes: 150 additions & 18 deletions 02b-Avoiding_Harm-concepts.Rmd

Large diffs are not rendered by default.

49 changes: 40 additions & 9 deletions 02c-Avoiding_Harm-algorithms.Rmd
@@ -26,14 +26,18 @@ Note that this is an incomplete list; additional ethical concerns will become ap

One major concern is the use of AI to generate malicious content. Secondly, the AI itself may accidentally create harmful responses or suggestions. For instance, AI could start suggesting the creation of code that spreads malware or hacks into computer systems. Another issue is what is called ["toxicity"](https://towardsdatascience.com/toxicity-in-ai-text-generation-9e9d9646e68f), which refers to disrespectful, rude, or hateful responses (@nikulski_toxicity_2021). Such responses can have very negative consequences for users. Ultimately these issues could cause severe damage to individuals and organizations, including data breaches and financial losses. AI systems need to be designed with safeguards that avoid harmful responses, with testing for such responses, and with protections to ensure that the system is not infiltrated by possibly harmful parties.

```{r, fig.align='center', out.width="100%", echo = FALSE, fig.alt= "Image of a mosaic robot."}
ottrpal::include_slide("https://docs.google.com/presentation/d/1L6-8DWn028c1o0p9gwXmz90BRcy_PjPqb683nbk1gHQ/edit#slide=id.g2aaead717c1_8_93")
```

### Tips for avoiding the creation of harmful content

For decision makers about AI users:

* Be careful about what commercial tools you employ; they should be transparent about what they do to avoid harm.
* Be careful about the context in which you might have people use AI - will they know how to use it responsibly?
* Be careful about what content you share publicly, as it could be used for malicious purposes.
* Consider how the content might be used by others for unintended purposes.
* Ask the AI tools to help you, but do not rely on them alone.

<div class = "query">
@@ -44,6 +48,13 @@ What are the possible downstream uses of this content?
What are some possible negative consequences of using this content?
</div>

For decision makers about AI developers:

* If designing a system, ensure that best practices are employed to avoid harmful responses. This should be done during the design process, and the system should also be regularly evaluated. Some development systems such as [Amazon Bedrock](https://aws.amazon.com/blogs/aws/evaluate-compare-and-select-the-best-foundation-models-for-your-use-case-in-amazon-bedrock-preview/) have tools for evaluating [toxicity](https://towardsdatascience.com/toxicity-in-ai-text-generation-9e9d9646e68f) to test for harmful responses. Although such systems can help automate testing, evaluation should also be done directly by humans.
* Consider how the content from AI tools that you design might be used by others for unintended purposes.
* Monitor your tools for unusual and harmful responses.
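The evaluate-then-monitor loop in the tips above can be sketched as a simple screening pass. This Python sketch is illustrative only: `toxicity_score` is a made-up placeholder for a real toxicity classifier (such as the evaluation tooling mentioned above), and the thresholds are arbitrary.

```python
# A minimal sketch of automated harm screening with a human-review bucket.
# `toxicity_score` is a hypothetical placeholder: a real system would call a
# trained toxicity classifier or an evaluation service instead.

def toxicity_score(text: str) -> float:
    """Toy scorer: fraction of words matching a tiny hostile-word list."""
    hostile = {"hate", "stupid", "worthless"}
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in hostile for w in words) / len(words)

def screen_responses(responses, block_threshold=0.2, review_threshold=0.05):
    """Sort responses into blocked / human-review / passed buckets."""
    blocked, review, passed = [], [], []
    for response in responses:
        score = toxicity_score(response)
        if score >= block_threshold:
            blocked.append(response)
        elif score >= review_threshold:
            review.append(response)  # automation triages; humans decide
        else:
            passed.append(response)
    return blocked, review, passed

blocked, review, passed = screen_responses([
    "Here is a helpful summary of your document.",
    "You are stupid and worthless.",
])
print(len(blocked), len(review), len(passed))  # 1 0 1
```

Automated scoring like this only triages responses; as the tips stress, anything near the threshold still needs direct human evaluation.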



## Lack of Interpretability

@@ -57,13 +68,24 @@ This could result in negative consequences, such as for example reliance on a sy

### Tips for avoiding a lack of interpretability

For decision makers about AI users:

* Content should be reviewed by those experienced in the given field.
* Ask AI tools to help you understand how they arrived at a given response, but get expert assistance where needed.
* Always consider how an AI system derived a decision if the decision is being used for something that impacts humans.


<div class = "query">
Can you explain how you generated this response?
</div>

For decision makers about AI developers:

* New AI tools should be designed with interpretability in mind; simpler models may make it easier to interpret results.
* Responses from new tools should be reviewed by those experienced in the given field.
* Provide transparency to users about how new AI tools generally create responses.
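The point that simpler models are easier to interpret can be made concrete with a small sketch. Everything here is hypothetical (invented feature names and weights, plain Python rather than any particular toolkit); the idea is that a linear model's output decomposes into per-feature contributions a human reviewer can inspect, which a large opaque model does not offer.

```python
# A toy linear "risk score": each feature's contribution (weight * value)
# can be shown directly to a reviewer, unlike a large opaque model.
# Feature names and weights are invented for illustration only.

weights = {"late_payments": 0.6, "income_norm": -0.3, "account_age_norm": -0.1}
bias = 0.2

def score(applicant: dict) -> float:
    """Linear score: bias plus weighted feature values."""
    return bias + sum(weights[f] * applicant[f] for f in weights)

def explain(applicant: dict) -> dict:
    """Break the score into per-feature contributions for review."""
    parts = {f: weights[f] * applicant[f] for f in weights}
    parts["bias"] = bias
    return parts

applicant = {"late_payments": 2.0, "income_norm": 0.5, "account_age_norm": 1.0}
print(explain(applicant))  # contributions: 1.2, -0.15, -0.1, plus bias 0.2
```

Because the contributions sum exactly to the score, a reviewer can see which factor drove a decision that impacts a person, and challenge it.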



## Misinformation and Faulty Responses

@@ -81,8 +103,15 @@ AI tools may also report that fake data is real, when it is in fact not real. Fo

It is also important to remember that content generated by AI tools is not necessarily better than content written by humans. Additionally, review and auditing of AI-generated content by humans is needed to ensure that tools are working properly and giving expected results.

```{r, fig.align='center', out.width="100%", echo = FALSE, fig.alt= "image of a robot writing."}
ottrpal::include_slide("https://docs.google.com/presentation/d/1L6-8DWn028c1o0p9gwXmz90BRcy_PjPqb683nbk1gHQ/edit#slide=id.g2aaead717c1_8_88")
```


### Tips for reducing misinformation & faulty responses

For decision makers about AI users:

* Be aware that some AI tools currently make up false information, called hallucinations, based on artifacts of the algorithm or on false information in the training data.
* Do not assume that the content generated by AI is real or correct.
* Realize that AI is only as good and as up-to-date as the data it was trained on; the content may be generated using out-of-date data. Look up responses to ensure they are up-to-date.
@@ -98,6 +127,12 @@ What assumptions were made in creating this content?
</div>


For decision makers about AI developers:

* Monitor newly developed tools for accuracy.
* Be transparent with users about the limitations of the tool.
* Consider training generative AI tools to have responses that are transparent about limitations of the tool.

<div class = example>

**Real World Example**
@@ -114,10 +149,6 @@ ottrpal::include_slide("https://docs.google.com/presentation/d/1L6-8DWn028c1o0p9


## Summary

Here is a summary of all the tips we suggested:
65 changes: 60 additions & 5 deletions 02d-Avoiding_Harm-adherence.Rmd
@@ -27,11 +27,27 @@ Launching large projects using AI before you get a chance to test them could lea

This also gives you time to consult with experts in law, equity, security, and other areas about the risks of your AI use.

```{r, fig.align='center', out.width="100%", echo = FALSE, fig.alt= "An image of a small robot."}
ottrpal::include_slide("https://docs.google.com/presentation/d/1L6-8DWn028c1o0p9gwXmz90BRcy_PjPqb683nbk1gHQ/edit#slide=id.g2aaead717c1_8_27")
```

### Tips for starting slow

For decision makers about AI users:

* Consider an early adopters program to evaluate usage.
* Educate early users about the limitations of AI.
* Consider using AI first for more specific purposes.
* Consult with experts about potential unforeseen challenges.
* Continue to assess and evaluate AI systems over time.


For decision makers about AI developers:

* Consider developing tools for simpler, more specific tasks, rather than broad, difficult tasks.
* Consider giving potential users guidance about using the tool for simpler tasks at first.
* Continue to assess and evaluate AI systems over time.


<div class = example>

@@ -50,6 +66,12 @@ See here for additional info: https://ieeexplore.ieee.org/abstract/document/867851

When AI systems are trained on data, they may also learn and incorporate copyrighted information or protected intellectual property. This means that AI-generated content could potentially infringe on the original author's copyright, trademark, or patent protections. In more extreme examples, if an AI system is trained on an essay, art, or in some cases even code written by a human, it could generate responses that are identical or very similar to the original author's work, which some AI tools have done. Even where the responses are relatively different, training AI tools on copyrighted information and using that content without permission from the original author could constitute copyright or trademark infringement (@brittain_more_2023).


```{r, fig.align='center', out.width="100%", echo = FALSE, fig.alt= "An image of a robot checking lists on a bicycle."}
ottrpal::include_slide("https://docs.google.com/presentation/d/1L6-8DWn028c1o0p9gwXmz90BRcy_PjPqb683nbk1gHQ/edit#slide=id.g2aaead717c1_8_38")
```


<div class = example>

OpenAI is facing lawsuits about using writing from several authors to train ChatGPT without permission from the authors. While this poses legal questions, it also poses ethical questions about the use of these tools and what it means for the people who created content that helped train AI tools. How can we properly give credit to such individuals?
@@ -73,10 +95,17 @@ AI poses questions about how we define art and if AI will reduce the opportuniti

### Tips for checking for allowed use


For decision makers about AI users:

* Be transparent about what AI tools you use to create content.
* Ask the AI tools if the content it helped generate used any content that you can cite.

For decision makers about AI developers:

* Obtain permission from the copyright holders of any content that you use to train an AI system. Only use content that has been licensed for use.
* Cite all content that you can.
* Ask the AI tools if the content it helped generate used any content that you can cite.


<div class = "query">
Did this content use any content from others that I can cite?
@@ -88,14 +117,25 @@

Only using one AI tool can increase the risk of the ethical issues discussed. For example, it may be easier to determine if a tool is incorrect about a response if we see that a variety of tools give different answers to the same prompt. Secondly, as the technology evolves, some tools may perform better than others at specific tasks. It is also necessary to check responses over time with the same tool, to verify that a result is even consistent from the same tool.


```{r, fig.align='center', out.width="100%", echo = FALSE, fig.alt= "An image of a several different robots."}
ottrpal::include_slide("https://docs.google.com/presentation/d/1L6-8DWn028c1o0p9gwXmz90BRcy_PjPqb683nbk1gHQ/edit#slide=id.g2aaead717c1_8_22")
```

### Tips for using multiple AI tools

For decision makers about AI users:

- Check that each tool you are using meets the privacy and security restrictions that you need.
- Utilize platforms that make it easier to use multiple AI tools, such as https://poe.com/, which has access to many tools, or [Amazon Bedrock](https://aws.amazon.com/about-aws/whats-new/2023/11/evaluate-compare-select-fms-use-case-amazon-bedrock/), which has a feature to send the same prompt to multiple tools automatically, including for more advanced usage in developing models based on modifying existing foundation models.
- Evaluate the results of the same prompt multiple times with the same tool to see how consistent it is over time.
- Use slightly different prompts to see how the response may change with the same tool.
- Consider whether tools that work with different types of data may be helpful for answering the same question.

For decision makers about AI developers:

- Consider whether using different types of data may be helpful for answering the same question.
- Consider promoting your tool on platforms that allow users to work with multiple AI tools.
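The multi-tool and repeated-prompt checks suggested above can be sketched in code. This is a hedged illustration, not a real integration: the query functions are hypothetical stand-ins for calls to actual AI services, and the normalization is deliberately minimal.

```python
# Send the same prompt to several tools, several times each, and check
# whether the (lightly normalized) answers agree. The `tools` dict maps
# names to query functions; real code would call each vendor's API there.
from collections import Counter

def consistency_report(tools, prompt, repeats=3):
    """Collect repeated answers per tool and flag any disagreement."""
    answers = {name: [ask(prompt) for _ in range(repeats)]
               for name, ask in tools.items()}
    # Normalize so trivial formatting differences do not count as conflicts.
    normalized = [a.strip().lower() for runs in answers.values() for a in runs]
    counts = Counter(normalized)
    return {"answers": answers, "counts": counts,
            "consistent": len(counts) == 1}

# Toy stand-ins for two AI tools:
def tool_a(prompt):
    return "Paris"

def tool_b(prompt):
    return "paris "

report = consistency_report({"a": tool_a, "b": tool_b}, "Capital of France?")
print(report["consistent"])  # True: the tools agree after normalization
```

Disagreement between tools, or between repeated runs of the same tool, does not say which answer is right; it flags where human verification is most needed.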

## Educate Yourself and Others

@@ -105,6 +145,11 @@ Properly educating those you wish to comply with standards can better ensure th

It is especially helpful if training materials are developed to be relevant to the actual potential uses by the individuals receiving training, and if the training includes enough fundamentals that individuals understand why policies are in place.

```{r, fig.align='center', out.width="100%", echo = FALSE, fig.alt= "An image of a robot teaching at a chalkboard."}
ottrpal::include_slide("https://docs.google.com/presentation/d/1L6-8DWn028c1o0p9gwXmz90BRcy_PjPqb683nbk1gHQ/edit#slide=id.g2aaead717c1_8_32")
```


<div class = "example">

**Real-World Example**
@@ -123,14 +168,24 @@ As a result, the Italian Data Protection Authority has banned ChatGPT, while Ger

### Tips to educate yourself and others

For decision makers about AI users:

* Emphasize the importance of training and education.
* Recognize that general AI literacy, which provides a better understanding of how AI works, can help individuals use AI more responsibly.
* Seek existing education content made by experts that can possibly be modified for your use case.
* Consider how often people will need to be reminded about best practices. Should training be required regularly? Should individuals receive reminders about best practices, especially in contexts in which they might use AI tools?
* Make your best practices easily findable and help point people to the right individuals to ask for guidance.
* Recognize that best practices for AI will likely change frequently in the near future as the technology evolves; education content should be updated accordingly.

For decision makers about AI developers:

* Emphasize the importance of training and education.
* Recognize that greater AI literacy around security, privacy, bias, climate impact, and more can help individuals develop AI more responsibly.
* Seek existing education content made by experts that can possibly be modified for your use case.
* Consider how often people will need to be reminded about best practices. Should training be required regularly? Should individuals receive reminders about best practices, especially in contexts in which they might develop AI tools?
* Make your best practices easily findable and help point people to the right individuals to ask for guidance.
* Recognize that best practices for AI will likely change frequently in the near future as the technology evolves; education content should be updated accordingly.


## Summary

24 changes: 23 additions & 1 deletion 02e-Avoiding_Harm-consent_and_ai.Rmd
@@ -9,7 +9,6 @@ ottrpal::set_knitr_image_path()
Much of the world is developing data privacy regulations, as many individuals value their right to better control how others can collect and store data about them (@chaaya_privacy_2021).



While data collection concerns have been increasing up for years, AI systems present new challenges (@pearce_beware_2021; @tucker_privacy_2018):

- **Accountability** - It is more difficult to determine who is accountable when separate parties may collect, redistribute, or use data (@hao_deleting_2021)
@@ -55,6 +54,29 @@ See [here](https://truthout.org/articles/eus-ai-act-falls-short-no-ban-on-live-f

<div>

```{r, fig.align='center', out.width="100%", echo = FALSE, fig.alt= "An artistic image of a face that looks like it is being scanned for facial recognition."}
ottrpal::include_slide("https://docs.google.com/presentation/d/1L6-8DWn028c1o0p9gwXmz90BRcy_PjPqb683nbk1gHQ/edit#slide=id.g2aaead717c1_8_57")
```


### Tips to encourage responsible consent practices

For decision makers about AI users:

- Emphasize education about consent practices.
- Stay up-to-date on current issues related to consent.
- Encourage usage of tools that are transparent about using responsible consent practices.
- Encourage users to be careful about what data they upload or allow AI tools to use.


For decision makers about AI developers:

- Emphasize education of AI developers about consent considerations, guidelines, and regulations.
- Stay up-to-date on current issues related to consent.
- Be considerate of the data that you use for AI tools: how it was collected, and whether individuals consented to its collection and distribution.
- Be transparent with users about what consent practices were used for the data utilized by the tool.
- Be transparent with users about what may happen with their responses if they are being collected.

## Summary

Overall, the consent process is particularly challenging, and consideration should especially be centered on the rights of the individuals who may have data collected about them. We hope that awareness of some of the major challenges can help you more responsibly implement any consent processes that may be needed for AI tools that you employ or develop. We advise that you speak with ethical and legal experts.