Commit

…into main
jhudsl-robot committed Jan 3, 2024
2 parents c0b4b19 + 5c7ffb4 commit 1b1d4d2
Showing 80 changed files with 397 additions and 1,324 deletions.
2 changes: 1 addition & 1 deletion docs/no_toc/02a-Avoiding_Harm-intro.md
@@ -21,7 +21,7 @@ This course is intended for leaders who might make decisions about AI at nonprof

## Curriculum

This course provides a brief introduction about ethical concepts to be aware of when making decisions about AI, as well as **real-world examples** of situations that involved ethical challenges.
This course provides a brief introduction about ethical concepts to be aware of when making decisions about AI, as well as **real-world examples** of situations that involved ethical challenges. The course is largely focused on **generative AI considerations**, although some of the content will also be applicable to other types of AI applications.

The course will cover:

79 changes: 64 additions & 15 deletions docs/no_toc/02b-Avoiding_Harm-concepts.md
@@ -66,18 +66,23 @@ Therefore it is critical that we be considerate of the downstream consequences o

### Tips for avoiding inadvertent harm

<div class = foruse>
**For decision makers about AI use:**

* Consider how the content or decisions generated by an AI tool might be used by others.
* Continually audit how the AI tools that you are using are performing.
* Do not implement changes to systems or make important decisions using AI tools without human oversight.

</div>

<div class = fordev>
**For decision makers about AI development:**

* Consider how newly developed AI tools might be used by others.
* Continually audit AI tools to look for unexpected and potentially harmful or biased behavior.
* Be transparent with users about the limitations of the tool and the data used to train the tool.
* Caution potential users about any potential negative consequences of use.
</div>

## Replacing Humans

@@ -117,17 +122,26 @@ Computer science is a field that has historically lacked diversity. It is also c

### Tips for supporting human contributions

<div class = foruse>
**For decision makers about AI use:**

* Avoid thinking that content created by AI tools must be better than that created by humans, as this is not true (@sinz_engineering_2019).
* Recall that humans wrote the code to create these AI tools and that the data used to train these AI tools also came from humans. Many of the large commercial AI tools were trained on websites and other content from the internet.
* Be transparent where possible about **when you do or do not use AI tools**, and give credit to the humans involved as much as possible.
* Make decisions about using AI tools based on ethical [frameworks](https://journals.sagepub.com/doi/full/10.1177/09637214221091823) in terms of considering the impact on human workers.

</div>

<br>

<div class = fordev>
**For decision makers about AI development:**

* Be transparent about the data used to generate tools as much as possible and provide information about what humans may have been involved in the creation of the data.
* Make decisions about creating AI tools based on ethical [frameworks](https://journals.sagepub.com/doi/full/10.1177/09637214221091823) in terms of considering the impact on human workers.
</div>

<br>

<div class = ethics>
A new term in the medical field called [AI paternalism](https://www.technologyreview.com/2023/04/21/1071921/ai-is-infiltrating-health-care-we-shouldnt-let-it-make-decisions/) describes the concept that doctors (and others) may trust AI over their own judgment or the experiences of the patients they treat. This has already been shown to be a problem with earlier AI systems intended to help distinguish patient groups. Not all humans will necessarily fit the expectations of the AI model if it is not very good at predicting edge cases [@AI_paternalism]. Therefore, in all fields it is important for us to not forget our value as humans in our understanding of the world.
@@ -169,7 +183,8 @@ Read more about this in this [article](https://www.technologyreview.com/2022/12/

### Tips for avoiding inappropriate uses and lack of oversight

<div class = foruse>
**For decision makers about AI use:**

* Stay up-to-date on current laws, practices, and standards for your field, especially for high-risk uses.
* Stay up-to-date on news about how others have experienced their use of AI.
@@ -179,8 +194,10 @@ For decision makers about AI users:
* Seek outside expert opinion whenever you are unsure about your AI use plans.
* Consider AI alternatives if something doesn't feel right.

</div>

<div class = fordev>
**For decision makers about AI development:**

* Be transparent with users about the potential risks that usage may cause.
* Stay up-to-date on current laws, practices, and standards for your field, especially for high-risk uses.
@@ -192,6 +209,8 @@ For decision makers about AI developers:
* Design tools with safeguards to stop users from requesting harmful or irresponsible uses.
* Design tools with responses that may ask users to be more considerate in the usage of the tool.

</div>

## Bias Perpetuation and Disparities

One of the biggest concerns is the potential for AI to further perpetuate bias. AI systems are trained on data created by humans. If the data used to train the system are biased (and this includes existing code that may be written in a biased manner), the resulting content from the AI tools could also be biased. This could lead to discrimination, abuse, or neglect for certain groups of people, such as those with certain ethnic or cultural backgrounds, genders, ages, sexualities, capabilities, religions, or other group affiliations.
@@ -206,14 +225,20 @@ In the flip side, AI has the potential if used wisely, to reduce health inequiti

### Tips for avoiding bias

<div class = foruse>
**For decision makers about AI use:**

* Be aware of the biases in the data that is used to train AI systems.
* Check what data was used to train the AI tools that you use where possible. Tools that are more transparent are likely more ethically developed.
* Check if the developers of the AI tools you are using were/are considerate of bias issues in their development where possible. Tools that are more transparent are likely more ethically developed.
* Consider the possible outcomes of the use of content created by AI tools. Consider if the content could possibly be used in a manner that will result in discrimination.

</div>

<br>

<div class = fordev>
**For decision makers about AI development:**

* Check for possible biases within data used to train new AI tools.
- Are there harmful data values? Examples could include discriminatory and false associations.
@@ -223,6 +248,7 @@ For decision makers about AI developers:
* Continually audit the code for potentially biased responses. Potentially seek expert help.
* Be transparent with users about potential bias risks.
* Consider the possible outcomes of the use of content created by newly developed AI tools. Consider if the content could possibly be used in a manner that will result in discrimination.
</div>
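The first tip above, checking for possible biases within training data, can be made concrete with a small audit script that flags underrepresented groups. This is only an illustrative sketch: the function name, data layout, and 50%-of-even-share threshold are our own assumptions, not a standard, and a real bias audit requires domain expertise well beyond group counts.

```python
from collections import Counter

def audit_group_balance(records, group_key, warn_ratio=0.5):
    """Flag groups whose share of the training data falls below
    warn_ratio times an even split across all observed groups."""
    counts = Counter(record[group_key] for record in records)
    total = sum(counts.values())
    even_share = 1 / len(counts)
    flagged = {}
    for group, count in counts.items():
        share = count / total
        if share < warn_ratio * even_share:
            flagged[group] = round(share, 3)
    return flagged

# Hypothetical dataset where group "d" holds only 2% of the records.
data = (
    [{"group": "a"}] * 40
    + [{"group": "b"}] * 33
    + [{"group": "c"}] * 25
    + [{"group": "d"}] * 2
)
print(audit_group_balance(data, "group"))  # {'d': 0.02}
```

Counts alone will not catch harmful associations or proxy variables, so a screen like this is a starting point for further review, not a clearance.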

See @belenguer_ai_2022 for more guidance. We also encourage you to check out the following video for a classic example of bias in AI:

@@ -269,24 +295,35 @@ It is important to follow legal and ethical guidance around the collection of da

### Tips for reducing security and privacy issues

<div class = foruse>
**For decision makers about AI use:**

* Check that no sensitive data, such as Personally Identifiable Information (PII) or proprietary information, becomes public through prompts to consumer AI systems or systems not designed or set up with the right legal agreements in place for sensitive data.
* Consider purchasing a license for a private AI system, or creating your own, if you wish to work with sensitive data (seek expert guidance to determine whether the AI systems are secure enough).
* Ask AI tools for help with security when using consumer tools, but do not rely on them alone. Some consumer AI tools provide little guidance about who developed the tool, what data it was trained on, or what happens to your prompts and whether they are collected and maintained securely.
* Promote regulation of AI tools by voting for standards where possible.

<div class = "query">
**Possible Generative AI Prompt:**
Are there any methods that could be implemented to make this code more secure?
</div>
</div>
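The first tip above, keeping PII out of prompts, can be partially automated by redacting obvious identifiers before a prompt ever leaves your systems. This is a minimal sketch under our own assumptions: the patterns and names below are hypothetical, catch only a few well-formatted identifiers, and are no substitute for expert and legal review of real PII handling.

```python
import re

# Illustrative patterns only; they miss many real-world PII formats
# (names, addresses, free-text identifiers, and more).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_prompt(prompt):
    """Replace obvious identifiers with placeholders before the
    prompt is sent to a consumer AI system."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact_prompt("Contact jane.doe@example.org or 410-555-1234 about record 123-45-6789."))
# Contact [EMAIL] or [PHONE] about record [SSN].
```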

<br>

<div class = fordev>
**For decision makers about AI development:**

* Consult with an expert about data security if you want to design or use an AI tool that will regularly use private or proprietary data.
* Be clear with users about the limitations and security risks associated with tools that you develop.
* Promote regulation of AI tools by voting for standards where possible.


<div class = "query">
**Possible Generative AI Prompt:**
Are there any possible data security or privacy issues associated with the plan you proposed?
</div>

</div>

## Climate Impact

@@ -309,12 +346,17 @@ However, AI also poses a number of climate risks (@bender_dangers_2021; @hulick_

## Tips for reducing climate impact

<div class = foruse>
**For decision makers about AI use:**

- Where possible, use tools that are transparent about resource usage and that identify how they have attempted to improve efficiency.

</div>

<br>

<div class = fordev>
**For decision makers about AI development:**

- Modify existing models as opposed to unnecessarily creating new models from scratch where possible.
- Avoid using models with datasets that are unnecessarily large (@bender_dangers_2021).
@@ -323,6 +365,8 @@ For decision makers about AI developers:
- Be transparent about resources used to train models (@castano_fernandez_greenability_2023).
- Utilize data storage and computing options that are designed to be more environmentally conscious options, such as solar or wind power generated electricity.

</div>
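One way to be transparent about resources, as the tips above suggest, is to report even a rough estimate of a training run's energy and carbon footprint. The sketch below uses a common back-of-the-envelope approach (hardware power draw × run time × a data-center overhead factor × grid carbon intensity); every number here is hypothetical, and real figures vary widely by hardware and region.

```python
def training_footprint(gpu_count, gpu_power_kw, hours, pue=1.5, kg_co2_per_kwh=0.4):
    """Rough energy (kWh) and carbon (kg CO2) estimate for a training run.

    pue: Power Usage Effectiveness, the data-center overhead multiplier.
    kg_co2_per_kwh: grid carbon intensity; much lower for facilities
    powered by solar or wind.
    """
    energy_kwh = gpu_count * gpu_power_kw * hours * pue
    return energy_kwh, energy_kwh * kg_co2_per_kwh

# Hypothetical run: 8 GPUs drawing 0.3 kW each for 100 hours.
energy, co2 = training_footprint(8, 0.3, 100)
print(f"{energy:.0f} kWh, {co2:.0f} kg CO2")  # 360 kWh, 144 kg CO2
```

Publishing even rough numbers like these alongside a model makes it easier for users to compare options and choose more efficient ones.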

## Transparency

The United States Blueprint for an AI Bill of Rights states:
@@ -338,16 +382,21 @@ It also better helps us to understand what AI systems may need to be fixed or ad

### Tips for being transparent

<div class = foruse>
**For decision makers about AI use:**

- Where possible, include the AI tool and version that you are using, and why, so that people can trace back where decisions or content came from.
- Where possible, use tools that are transparent about what data was used.
</div>

<br>

<div class = fordev>
**For decision makers about AI development:**

- Providing information about what training data or methods were used to develop new AI models can help people better understand why a tool works in a particular way.

</div>

## Summary

41 changes: 32 additions & 9 deletions docs/no_toc/02c-Avoiding_Harm-algorithms.md
@@ -28,7 +28,8 @@ One major concern is the use of AI to generate malicious content. Secondly the A

### Tips for avoiding the creation of harmful content

<div class = foruse>
**For decision makers about AI use:**

* Be careful about which commercial tools you employ; they should be transparent about what they do to avoid harm.
* Be careful about the context in which you might have people use AI - will they know how to use it responsibly?
@@ -37,19 +38,26 @@ For decision makers about AI users:
* Ask the AI tools to help you, but do not rely on them alone.

<div class = "query">
**Possible Generative AI Prompt:**
What are the possible downstream uses of this content?
</div>

<div class = "query">
**Possible Generative AI Prompt:**
What are some possible negative consequences of using this content?
</div>

</div>

<br>

<div class = fordev>
**For decision makers about AI development:**

* If designing a system, ensure that best practices are employed to avoid harmful responses. This should be done during the design process, and the system should also be regularly evaluated. Some development systems such as [Amazon Bedrock](https://aws.amazon.com/blogs/aws/evaluate-compare-and-select-the-best-foundation-models-for-your-use-case-in-amazon-bedrock-preview/) have tools for evaluating [toxicity](https://towardsdatascience.com/toxicity-in-ai-text-generation-9e9d9646e68f) to test for harmful responses. Although such systems can be helpful for automatic testing, evaluation should also be done directly by humans.
* Consider how the content from AI tools that you design might be used by others for unintended purposes.
* Monitor your tools for unusual and harmful responses.

</div>
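The monitoring tip above can be sketched as a screening gate that checks generated responses before they reach users. The keyword approach below is illustrative only (the blocklist and function names are made up); production systems should use trained toxicity classifiers, such as the evaluation tooling mentioned above, together with human review.

```python
# Hypothetical blocklist for illustration; keyword matching alone is
# far too crude for production use.
BLOCKED_PHRASES = {"build a weapon", "steal credentials"}

def screen_response(text, blocked=BLOCKED_PHRASES):
    """Return (is_safe, matched_phrases) for a generated response."""
    lowered = text.lower()
    matches = sorted(phrase for phrase in blocked if phrase in lowered)
    return not matches, matches

print(screen_response("Here is how to steal credentials from a server."))
# (False, ['steal credentials'])
```

A gate like this can also log the flagged responses, giving developers the unusual-behavior audit trail the tips above call for.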


## Lack of Interpretability
@@ -64,23 +72,27 @@ This could result in negative consequences, such as for example reliance on a sy

### Tips for avoiding a lack of interpretability

<div class = foruse>
**For decision makers about AI use:**

* Content should be reviewed by those experienced in the given field.
* Ask AI tools to help you understand how they arrived at a given response, but get expert assistance where needed.
* Always consider how an AI system derived a decision if the decision is being used for something that impacts humans.


<div class = query>
**Possible Generative AI Prompt:**
Can you explain how you generated this response?
</div>

</div>
<br>
<div class = fordev>
**For decision makers about AI development:**

* New AI tools should be designed with interpretability in mind; simpler models may make it easier to interpret results.
* Responses from new tools should be reviewed by those experienced in the given field.
* Provide transparency to users about how new AI tools generally create responses.

</div>
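As a small illustration of why simpler models can be easier to interpret, a linear model's decision can be decomposed into per-feature contributions that a human reviewer can inspect directly, something that is much harder with a large neural network. The weights, feature names, and loan-style framing below are invented for illustration.

```python
def explain_linear_decision(weights, features, threshold=0.0):
    """Score a linear model and report each feature's contribution,
    ordered by how strongly it pushed the decision."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    score = sum(contributions.values())
    decision = "approve" if score >= threshold else "deny"
    ranked = dict(sorted(contributions.items(), key=lambda item: -abs(item[1])))
    return decision, ranked

# Hypothetical loan-style example with made-up weights.
weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.2}
applicant = {"income": 3.0, "debt": 2.5, "years_employed": 4.0}
print(explain_linear_decision(weights, applicant))
# ('approve', {'debt': -2.0, 'income': 1.5, 'years_employed': 0.8})
```

Here a reviewer can see that debt pushed hardest against approval, the kind of visibility that supports review by those experienced in the field.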


## Misinformation and Faulty Responses
@@ -104,7 +116,8 @@ It is also important to remember that content generated by AI tools is not neces

### Tips for reducing misinformation & faulty responses

<div class = foruse>
**For decision makers about AI use:**

* Be aware that some AI tools currently make up false information, called hallucinations, that arises from artifacts of their algorithms or from false information in the training data.
* Do not assume that the content generated by AI is real or correct.
@@ -113,19 +126,29 @@ For decision makers about AI users:
* Ask the AI tools for extra information about any potential limitations or weaknesses in their responses, but keep in mind that the tool may not be aware of issues, so human review is required. The information provided by the tool can, however, be a helpful starting point.

<div class = query>
**Possible Generative AI Prompt:**
Are there any limitations associated with this response?
</div>

<div class = query>
**Possible Generative AI Prompt:**
What assumptions were made in creating this content?
</div>

</div>

<br>

<div class = fordev>

**For decision makers about AI development:**

* Monitor newly developed tools for accuracy.
* Be transparent with users about the limitations of the tool.
* Consider training generative AI tools to have responses that are transparent about limitations of the tool.
</div>

<br>

<div class = example>
