Merge pull request #95 from fhdsl/diffaudience
adding query info and making tips diff colors
carriewright11 authored Jan 3, 2024
2 parents f3f1790 + 78acca5 commit f4962fc
Showing 8 changed files with 242 additions and 128 deletions.
2 changes: 1 addition & 1 deletion 02a-Avoiding_Harm-intro.Rmd
@@ -25,7 +25,7 @@ This course is intended for leaders who might make decisions about AI at nonprof

## Curriculum

This course provides a brief introduction about ethical concepts to be aware of when making decisions about AI, as well as **real-world examples** of situations that involved ethical challenges.
This course provides a brief introduction about ethical concepts to be aware of when making decisions about AI, as well as **real-world examples** of situations that involved ethical challenges. The course is largely focused on **generative AI considerations**, although some of the content will also be applicable to other types of AI applications.

The course will cover:

79 changes: 64 additions & 15 deletions 02b-Avoiding_Harm-concepts.Rmd
@@ -72,18 +72,23 @@ ottrpal::include_slide("https://docs.google.com/presentation/d/1L6-8DWn028c1o0p

### Tips for avoiding inadvertent harm

For decision makers about AI users:
<div class = foruse>
**For decision makers about AI use:**

* Consider how the content or decisions generated by an AI tool might be used by others.
* Continually audit how the AI tools that you are using are performing.
* Do not implement changes to systems or make important decisions with AI tools without human oversight.

For decision makers about AI developers:
</div>

<div class = fordev>
**For decision makers about AI development:**

* Consider how newly developed AI tools might be used by others.
* Continually audit AI tools to look for unexpected and potentially harmful or biased behavior.
* Be transparent with users about the limitations of the tool and the data used to train the tool.
* Caution users about any potential negative consequences of use.
</div>
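The "continually audit" tips above can be sketched as a simple recurring check. This is a minimal illustration, assuming you can collect a human-reviewed sample of the tool's outputs each period; the data and function names are hypothetical:

```python
def audit_accuracy(predictions, outcomes):
    """Fraction of AI outputs that matched the human-verified outcome."""
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    return correct / len(predictions)

# Hypothetical monthly audit samples reviewed by a human
january = audit_accuracy(["approve", "deny", "approve"], ["approve", "deny", "approve"])
february = audit_accuracy(["approve", "approve", "deny"], ["deny", "approve", "deny"])

# A drop in performance is a signal to investigate before relying on the tool
if february < january:
    print("Performance dropped; investigate before acting on this tool's output.")
```

Even a small recurring sample like this can surface drift in a tool's behavior before it causes harm.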

## Replacing Humans

@@ -125,17 +130,26 @@ Computer science is a field that has historically lacked diversity. It is also c

### Tips for supporting human contributions

For decision makers about AI users:
<div class = foruse>
**For decision makers about AI use:**

* Avoid thinking that content created by AI tools must be better than that created by humans, as this is not true (@sinz_engineering_2019).
* Recall that humans wrote the code to create these AI tools and that the data used to train these AI tools also came from humans. Many of the large commercial AI tools were trained on websites and other content from the internet.
* Be transparent where possible about **when you do or do not use AI tools**, and give credit to the humans involved as much as possible.
* Make decisions about using AI tools based on ethical [frameworks](https://journals.sagepub.com/doi/full/10.1177/09637214221091823) that consider the impact on human workers.

For decision makers about AI developers:
</div>

<br>

<div class = fordev>
**For decision makers about AI development:**

* Be as transparent as possible about the data used to build tools, and provide information about which humans were involved in creating that data.
* Make decisions about creating AI tools based on ethical [frameworks](https://journals.sagepub.com/doi/full/10.1177/09637214221091823) that consider the impact on human workers.
</div>

<br>

<div class = ethics>
A new term in the medical field called [AI paternalism](https://www.technologyreview.com/2023/04/21/1071921/ai-is-infiltrating-health-care-we-shouldnt-let-it-make-decisions/) describes the concept that doctors (and others) may trust AI over their own judgment or the experiences of the patients they treat. This has already been shown to be a problem with earlier AI systems intended to help distinguish patient groups. Not all humans will necessarily fit the expectations of the AI model if it is not very good at predicting edge cases [@AI_paternalism]. Therefore, in all fields it is important for us to not forget our value as humans in our understanding of the world.
@@ -177,7 +191,8 @@ Read more about this in this [article](https://www.technologyreview.com/2022/12/

### Tips for avoiding inappropriate uses and lack of oversight

For decision makers about AI users:
<div class = foruse>
**For decision makers about AI use:**

* Stay up-to-date on current laws, practices, and standards for your field, especially for high-risk uses.
* Stay up-to-date on the news for how others have experienced their use of AI.
@@ -187,8 +202,10 @@ For decision makers about AI users:
* Seek outside expert opinion whenever you are unsure about your AI use plans.
* Consider AI alternatives if something doesn't feel right.

</div>

For decision makers about AI developers:
<div class = fordev>
**For decision makers about AI development:**

* Be transparent with users about the potential risks that usage may cause.
* Stay up-to-date on current laws, practices, and standards for your field, especially for high-risk uses.
@@ -200,6 +217,8 @@ For decision makers about AI developers:
* Design tools with safeguards to stop users from requesting harmful or irresponsible uses.
* Design tools with responses that encourage users to be more considerate in their use of the tool.

</div>

## Bias Perpetuation and Disparities

One of the biggest concerns is the potential for AI to further perpetuate bias. AI systems are trained on data created by humans. If this data used to train the system is biased (and this includes existing code that may be written in a biased manner), the resulting content from the AI tools could also be biased. This could lead to discrimination, abuse, or neglect for certain groups of people, such as those with certain ethnic or cultural backgrounds, genders, ages, sexuality, capabilities, religions or other group affiliations.
@@ -214,14 +233,20 @@ On the flip side, AI has the potential, if used wisely, to reduce health inequiti

### Tips for avoiding bias

For decision makers about AI users:
<div class = foruse>
**For decision makers about AI use:**

* Be aware of biases in the data used to train AI systems.
* Where possible, check what data was used to train the AI tools that you use. Tools that are more transparent are likely more ethically developed.
* Where possible, check whether the developers of the AI tools you are using were considerate of bias issues during development.
* Consider the possible outcomes of using content created by AI tools, including whether the content could be used in a manner that results in discrimination.

For decision makers about AI developers:
</div>

<br>

<div class = fordev>
**For decision makers about AI development:**

* Check for possible biases within data used to train new AI tools.
- Are there harmful data values? Examples could include discriminatory and false associations.
@@ -231,6 +256,7 @@ For decision makers about AI developers:
* Continually audit the code for potentially biased responses. Potentially seek expert help.
* Be transparent with users about potential bias risks.
* Consider the possible outcomes of the use of content created by newly developed AI tools, including whether the content could be used in a manner that results in discrimination.
</div>
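The bias-audit tips above can be sketched as a check of outcome rates across demographic groups. This is a minimal illustration on hypothetical data; the group labels, threshold, and function names are assumptions, and a real audit would use established fairness metrics and expert review:

```python
from collections import defaultdict

def approval_rate_by_group(records):
    """Share of favorable outcomes per group.

    records: list of (group, outcome) pairs, where outcome True = favorable.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical audit sample of an AI tool's decisions
sample = [("group_a", True), ("group_a", True), ("group_a", False),
          ("group_b", True), ("group_b", False), ("group_b", False)]
rates = approval_rate_by_group(sample)

# A large gap between groups warrants a closer look at the training data
gap = max(rates.values()) - min(rates.values())
if gap > 0.2:
    print(f"Disparity of {gap:.0%} across groups; audit the training data.")
```

The point is not the specific threshold but the habit: compare outcomes across groups on a recurring basis rather than assuming the tool is neutral.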

See @belenguer_ai_2022 for more guidance. We also encourage you to check out the following video for a classic example of bias in AI:

@@ -281,24 +307,35 @@ It is important to follow legal and ethical guidance around the collection of da

### Tips for reducing security and privacy issues

For decision makers about AI users:
<div class = foruse>
**For decision makers about AI use:**

* Check that no sensitive data, such as Personally Identifiable Information (PII) or proprietary information, becomes public through prompts to consumer AI systems or to systems not designed or set up with the right legal agreements in place for sensitive data.
* Consider purchasing a license for a private AI system if needed or create your own if you wish to work with sensitive data (seek expert guidance to determine if the AI systems are secure enough).
* Ask AI tools for help with security when using consumer tools, but do not rely on them alone. In some cases, consumer AI tools provide little guidance about who developed the tool, what data it was trained on, or what happens to your prompts and whether they are collected and maintained in a secure way.
* Promote regulation of AI tools by voting for standards where possible.

For decision makers about AI developers:
<div class = "query">
**Possible Generative AI Prompt:**
Are there any methods that could be implemented to make this code more secure?
</div>
</div>
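One way to act on the tip about keeping PII out of prompts is a screening step before any prompt leaves your system. This is a rough sketch only: the regular expressions below are illustrative and far from exhaustive, and a real deployment would need much broader patterns and expert review:

```python
import re

# Illustrative patterns only -- not a complete PII screen
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def find_pii(prompt):
    """Return the names of any PII patterns found in a prompt."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

prompt = "Summarize the case for jane.doe@example.com, SSN 123-45-6789."
hits = find_pii(prompt)
if hits:
    print(f"Blocked: prompt contains possible PII ({', '.join(hits)}).")
```

A check like this catches only obvious patterns; it supplements, rather than replaces, legal agreements and secure system design.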

<br>

<div class = fordev>
**For decision makers about AI development:**

* Consult with an expert about data security if you want to design or use an AI tool that will regularly use private or proprietary data.
* Be clear with users about the limitations and security risks associated with tools that you develop.
* Promote regulation of AI tools by voting for standards where possible.


<div class = "query">
**Possible Generative AI Prompt:**
Are there any possible data security or privacy issues associated with the plan you proposed?
</div>

</div>

## Climate Impact

@@ -323,12 +360,17 @@ ottrpal::include_slide("https://docs.google.com/presentation/d/1L6-8DWn028c1o0p9

### Tips for reducing climate impact

For decision makers about AI users:
<div class = foruse>
**For decision makers about AI use:**

- Where possible, use tools that are transparent about resource usage and that identify how they have attempted to improve efficiency.

</div>

<br>

For decision makers about AI developers:
<div class = fordev>
**For decision makers about AI development:**

- Where possible, modify existing models rather than unnecessarily creating new models from scratch.
- Avoid using models with datasets that are unnecessarily large (@bender_dangers_2021).
@@ -337,6 +379,8 @@ For decision makers about AI developers:
- Be transparent about resources used to train models (@castano_fernandez_greenability_2023).
- Utilize data storage and computing options that are designed to be more environmentally conscious, such as those powered by solar- or wind-generated electricity.

</div>
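The resource-transparency tips above can be made concrete with a back-of-envelope estimate of a training run's footprint: energy drawn by the hardware, scaled up for datacenter overhead, times the grid's carbon intensity. All numbers here are illustrative placeholders, not measurements:

```python
def training_footprint_kg(gpu_count, hours, gpu_kw, pue, grid_kg_per_kwh):
    """Rough CO2 estimate (kg) for a training run.

    Energy (kWh) = GPUs x hours x per-GPU draw (kW) x PUE,
    where PUE accounts for datacenter cooling/overhead.
    """
    energy_kwh = gpu_count * hours * gpu_kw * pue
    return energy_kwh * grid_kg_per_kwh

# Illustrative values only: 8 GPUs at 0.3 kW for 24 h, PUE 1.5, 0.4 kg CO2/kWh
print(round(training_footprint_kg(8, 24, 0.3, 1.5, 0.4), 1))  # -> 34.6
```

Reporting even a rough estimate like this alongside a model is one way to follow the transparency guidance above (@castano_fernandez_greenability_2023).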

## Transparency

The United States Blueprint for an AI Bill of Rights states:
@@ -354,16 +398,21 @@ ottrpal::include_slide("https://docs.google.com/presentation/d/1L6-8DWn028c1o0p9

### Tips for being transparent

For decision makers about AI users:
<div class = foruse>
**For decision makers about AI use:**

- Where possible, note which AI tool and version you used and why, so people can trace where decisions or content came from.
- Where possible, use tools that are transparent about what data was used.
</div>
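The disclosure tip above can be sketched as a small provenance record attached to AI-assisted content. The tool name and record fields here are hypothetical, a minimal illustration of what such a disclosure might capture:

```python
import json
from datetime import date

def provenance_record(tool, version, purpose, prompt_summary):
    """A small disclosure record to attach to AI-assisted content."""
    return {
        "tool": tool,
        "version": version,
        "date": date.today().isoformat(),
        "purpose": purpose,
        "prompt_summary": prompt_summary,
    }

# Hypothetical example disclosure
record = provenance_record("ExampleLLM", "2024-01", "drafting a summary",
                           "asked for a plain-language summary of the report")
print(json.dumps(record, indent=2))
```

Keeping records like this makes it possible to trace decisions or content back to the tool and version that produced them.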

<br>

For decision makers about AI developers:
<div class = fordev>
**For decision makers about AI development:**

- Provide information about the training data and methods used to develop new AI models, to help people better understand why a model behaves in a particular way.

</div>

## Summary

88 changes: 0 additions & 88 deletions 02ba-Effective-use-training-testing.Rmd

This file was deleted.

