…into main
jhudsl-robot committed Dec 22, 2023
2 parents 8c54530 + 62a1c1d commit 4c92e7b
Showing 127 changed files with 492 additions and 405 deletions.
16 changes: 10 additions & 6 deletions docs/02b-Avoiding_Harm-concepts.md
Original file line number Diff line number Diff line change
@@ -14,13 +14,16 @@ There is the potential for AI to dramatically influence society. It is our respo
</div>


## Ethics Codes
## Guidelines for Responsible Development and Use of AI.

There are a few current major codes of ethics for AI:
There are a few current major codes for the responsible use and development of AI:

- United States [Blueprint for an AI Bill of Rights](https://www.whitehouse.gov/ostp/ai-bill-of-rights/)
- United States [Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence](https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/)
- [United States National Institute of Standards and Technology (NIST): AI Risk Management Framework](https://www.nist.gov/itl/ai-risk-management-framework)
- European Commission [Ethics Guidelines for trustworthy AI](https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai)
- [European Union AI Act](https://artificialintelligenceact.eu/the-act/)
- [United Kingdom National AI Strategy](https://www.gov.uk/government/publications/national-ai-strategy)
- The Institute of Electrical and Electronics Engineers (IEEE) [Ethically Aligned Design Version 2](https://standards.ieee.org/wp-content/uploads/import/documents/other/ead_v2.pdf)


@@ -32,7 +35,7 @@ In this chapter we will discuss some of the major ethical considerations in

1) **Intentional and Inadvertent Harm** - Data and technology intended to serve one purpose may be reused by others for unintended purposes. How do we prevent intentional harm?
1) **Replacing Humans and Human autonomy** - AI tools can help humans, but they are not a replacement. Humans are still much better at generalizing their knowledge to other contexts and human oversight is required (@sinz_engineering_2019).
1) **Inappropriate Use** - There are situations in which using AI might not be appropriate now or in the future.
1) **Inappropriate Use and Lack of Oversight** - There are situations in which using AI might not be appropriate now or in the future. A lack of human monitoring and oversight can result in harm.
1) **Bias Perpetuation and Disparities** - AI models are built on data and code that were created by biased humans, thus bias can be further perpetuated by using AI tools. In some cases bias can even be exaggerated. This combined with differences in access may exacerbate disparities.
1) **Security and Privacy Issues** - Data for AI systems should be collected in an ethical manner that is mindful of the rights of the individuals the data comes from. Data around usage of those tools should also be collected in an ethical manner. Commercial tool usage with proprietary or private data, code, text, images or other files may result in leaked data not only to the developers of the commercial tool, but potentially also to other users.
1) **Climate Impact** - As we continue to use more and more data and computing power, we need to be ever more mindful of how we generate the electricity to store and perform our computations.
@@ -78,7 +81,7 @@ Computer science is a field that has historically lacked diversity. It is critic
A new term in the medical field called [AI paternalism](https://www.technologyreview.com/2023/04/21/1071921/ai-is-infiltrating-health-care-we-shouldnt-let-it-make-decisions/) describes the concept that doctors (and others) may trust AI over their own judgment or the experiences of the patients they treat. This has already been shown to be a problem with earlier AI systems intended to help distinguish patient groups. Not all humans will necessarily fit the expectations of the AI model if it is not very good at predicting edge cases [@AI_paternalism]. Therefore, in all fields it is important for us to not forget our value as humans in our understanding of the world.
</div>

## Inappropriate Use
## Inappropriate Use and Lack of Oversight

There are situations in which we may, as a society, not want an automated response. There may even be situations in which we do not want to bias our own human judgment by that of an AI system. There may be other situations where the efficiency of AI may also be considered inappropriate. While many of these topics are still under debate and AI technology continues to improve, we challenge the readers to consider such cases given what is currently possible and what may be possible in the future.

@@ -112,9 +115,10 @@ Read more about this in this [article](https://www.technologyreview.com/2022/12/

</div>

### Tips for avoiding inappropriate uses
### Tips for avoiding inappropriate uses and lack of oversight

* Stay up-to-date on current practices and standards for your field, as well as up-to-date on the news for how others have experienced their use of AI.
* Stay up-to-date on current laws, practices, and standards for your field, especially for high-risk uses.
* Stay up-to-date on the news for how others have experienced their use of AI.
* Stay involved in discussions about appropriate uses for AI, particularly for policy.
* Begin using AI slowly and iteratively to allow time to determine the appropriateness of the use. Some issues will only be discovered after some experience.
* Involve a diverse group of individuals in discussions of intended uses to better account for a variety of perspectives.
22 changes: 18 additions & 4 deletions docs/02d-Avoiding_Harm-adherence.md
Original file line number Diff line number Diff line change
@@ -46,12 +46,26 @@ See here for additional info: https://ieeexplore.ieee.org/abstract/document/867851

## Check for Allowed Use

When AI systems are trained on data, they may also learn and incorporate copyrighted information. This means that AI-generated content could potentially infringe on the copyright of the original author. For example, if an AI system is trained on a code written by a human programmer, the AI system could generate code that is identical to or similar to the code from that author. If the AI system then uses this code without permission from the original author, this could constitute copyright infringement.
When AI systems are trained on data, they may also learn and incorporate copyrighted information or protected intellectual property. This means that AI-generated content could potentially infringe on the copyright, trademarks, or patents of the original author. In extreme cases, an AI system trained on an essay, artwork, or even code written by a human could generate responses that are identical or very similar to the original work, as some AI tools have done. Even when the responses differ substantially from the copyrighted training material, using that content without permission from the original author could still constitute copyright or trademark infringement [@brittain_more_2023].

Similarly, AI systems could potentially infringe on intellectual property rights by using code that is protected by trademarks or patents. For example, if an AI system is trained on a training manual that contains code that is protected by a trademark, the AI system could generate code that is identical to or similar to the code in the training manual. If the AI system then uses this code without permission from the trademark owner, this could constitute trademark infringement.
<div class = example>

OpenAI is facing lawsuits over using the writing of several authors to train ChatGPT without their permission. While this poses legal questions, it also poses ethical questions about the use of these tools and what it means for the people who created the content that helped train them. How can we properly give credit to such individuals?

The [lawsuits](https://www.reuters.com/technology/more-writers-sue-openai-copyright-infringement-over-ai-training-2023-09-11/) are summarized by @brittain_more_2023:

> The lawsuit is at least the third proposed copyright-infringement class action filed by authors against Microsoft-backed OpenAI. Companies, including Microsoft (MSFT.O), Meta Platforms (META.O) and Stability AI, have also been sued by copyright owners over the use of their work in AI training
> The new San Francisco lawsuit said that works like books, plays and articles are particularly valuable for ChatGPT's training as the "best examples of high-quality, long form writing."
> OpenAI and other companies have argued that AI training makes fair use of copyrighted material scraped from the internet.
> The lawsuit requested an unspecified amount of money damages and an order blocking OpenAI's "unlawful and unfair business practices."
</div>

<div class =reflection>
The same is true for music, art, poetry etc. AI poses questions about how we define art and if AI will reduce the opportunities for employment for human artists. See [here](https://www.wired.com/story/picture-limitless-creativity-ai-image-generators/) for an interesting discussion, in which it is argued that AI may enhance our capacity to create art. This will be an important topic for society to consider.
AI poses questions about how we define art and if AI will reduce the opportunities for employment for human artists. See [here](https://www.wired.com/story/picture-limitless-creativity-ai-image-generators/) for an interesting discussion, in which it is argued that AI may enhance our capacity to create art. This will be an important topic for society to consider.

</div>

@@ -124,7 +138,7 @@ Here is a summary of all the tips we suggested:

* Disclose when you use AI tools to create content.
* Be aware that AI systems may behave in unexpected ways. Implement new AI solutions slowly to account for the unexpected. Test those systems and try to better understand how they work in different contexts.
* Adhere to copyright restrictions for use of data and content created by AI systems.
* Adhere to restrictions for use of data and content created by AI systems where possible. Citing the AI system itself, and learning how the tool obtained permission for the content it was trained on, can help reduce risk.
* Cross-check content from AI tools by using multiple AI tools and checking for consistent results over time. Check that each tool meets the privacy and security restrictions that you need.
* Emphasize training and education about AI and recognize that best practices will evolve as the technology evolves.

6 changes: 3 additions & 3 deletions docs/404.html
Original file line number Diff line number Diff line change
@@ -229,7 +229,7 @@
</ul></li>
<li class="chapter" data-level="" data-path="societal-impact.html"><a href="societal-impact.html"><i class="fa fa-check"></i>Societal Impact</a>
<ul>
<li class="chapter" data-level="" data-path="societal-impact.html"><a href="societal-impact.html#ethics-codes"><i class="fa fa-check"></i>Ethics Codes</a></li>
<li class="chapter" data-level="" data-path="societal-impact.html"><a href="societal-impact.html#guidelines-for-responsible-development-and-use-of-ai."><i class="fa fa-check"></i>Guidelines for Responsible Development and Use of AI.</a></li>
<li class="chapter" data-level="" data-path="societal-impact.html"><a href="societal-impact.html#major-ethical-considerations"><i class="fa fa-check"></i>Major Ethical Considerations</a></li>
<li class="chapter" data-level="" data-path="societal-impact.html"><a href="societal-impact.html#intentional-and-inadvertent-harm"><i class="fa fa-check"></i>Intentional and Inadvertent Harm</a>
<ul>
@@ -239,9 +239,9 @@
<ul>
<li class="chapter" data-level="" data-path="societal-impact.html"><a href="societal-impact.html#tips-for-supporting-human-contributions"><i class="fa fa-check"></i>Tips for supporting human contributions</a></li>
</ul></li>
<li class="chapter" data-level="" data-path="societal-impact.html"><a href="societal-impact.html#inappropriate-use"><i class="fa fa-check"></i>Inappropriate Use</a>
<li class="chapter" data-level="" data-path="societal-impact.html"><a href="societal-impact.html#inappropriate-use-and-lack-of-oversight"><i class="fa fa-check"></i>Inappropriate Use and Lack of Oversight</a>
<ul>
<li class="chapter" data-level="" data-path="societal-impact.html"><a href="societal-impact.html#tips-for-avoiding-inappropriate-uses"><i class="fa fa-check"></i>Tips for avoiding inappropriate uses</a></li>
<li class="chapter" data-level="" data-path="societal-impact.html"><a href="societal-impact.html#tips-for-avoiding-inappropriate-uses-and-lack-of-oversight"><i class="fa fa-check"></i>Tips for avoiding inappropriate uses and lack of oversight</a></li>
</ul></li>
<li class="chapter" data-level="" data-path="societal-impact.html"><a href="societal-impact.html#bias-perpetuation-and-disparities"><i class="fa fa-check"></i>Bias Perpetuation and Disparities</a>
<ul>
Binary file modified docs/AI-for-Decision-Makers.docx
Binary file not shown.
8 changes: 4 additions & 4 deletions docs/About.md
Original file line number Diff line number Diff line change
@@ -12,9 +12,9 @@ These credits are based on our [course contributors table guidelines](https://ww
|Lead Content Instructor(s)|[Ava Hoffman] - Course 1: <br> Exploring AI Possibilities <br> [Carrie Wright] - Course 2: <br> Avoiding AI Harm <br> [Candace Savonen] - Course 3: <br> Determining AI Needs <br> [Elizabeth Humphries] - Course 4:<br> Developing AI Policy <br>|
|Project Management| [Elizabeth Humphries], [Shasta Nicholson]|
|Content Author| [Christopher Lo] - [Avoiding AI Harm - Effective Use of Training and Testing Data](https://hutchdatascience.org/AI_for_Decision_Makers/effective-use-of-training-and-testing-data.html), [Developing AI Policy - Education case study](https://hutchdatascience.org/AI_for_Decision_Makers/ai-acts-orders-and-regulations.html#education) <br> [Monica Gerber] - [Developing AI Policy - Healthcare case study](https://hutchdatascience.org/AI_for_Decision_Makers/ai-acts-orders-and-regulations.html#healthcare) |
|Content Editor(s)/Reviewer(s) | [Sitapriya Moorthi], [Jeff Leek], [Amy Paguirigan], [Jenny Weddle], [Christopher Lo]|
|Content Editor(s)/Reviewer(s) | [Sitapriya Moorthi], [Jeffrey Leek], [Amy Paguirigan], [Jenny Weddle], [Christopher Lo]|
|Content Director(s) |[Jeff Leek] , [Elizabeth Humphries] |
|Content Consultants | [Robert McDermott], [Jenny Weddle], [Adina Mueller]|
|Content Consultants | [Robert McDermott], [Jennifer Weddle], [Adina Mueller]|
|**Production**||
|Content Publisher(s)| [Shasta Nicholson]|
|Content Publishing Reviewer(s)| [Ava Hoffman], [Carrie Wright], [Candace Savonen],[Elizabeth Humphries] |
@@ -97,14 +97,14 @@ These credits are based on our [course contributors table guidelines](https://ww
[Candace Savonen]: https://www.cansavvy.com/
[Carrie Wright]: https://carriewright11.github.io/
[Ava Hoffman]: https://www.avahoffman.com/
[Jeff Leek]: https://jtleek.com/
[Jeffrey Leek]: https://jtleek.com/
[Christopher Lo]: https://www.linkedin.com/in/christopher-lo-23316221b
[Shasta Nicholson]: https://www.linkedin.com/in/shastanicholson
[Sandy Ombrek]: https://www.linkedin.com/in/sandy-ormbrek-1410b113
[Elizabeth Humphries]: https://www.linkedin.com/in/elizabeth-humphries-61202a103/
[Christopher Lo]: https://www.linkedin.com/in/christopher-lo-23316221b/
[Sitapriya Moorthi]: https://www.linkedin.com/in/sitapriyamoorthi/
[Jenny Weddle]: https://hutchdatascience.org/ourteam/
[Jennifer Weddle]: https://hutchdatascience.org/ourteam/
[Robert McDermott]: https://www.linkedin.com/in/robert-mcdermott-a77b9011/
[Adina Mueller]: https://www.linkedin.com/in/adina-mueller-575aaa/
[Maleah O'Conner]: https://hutchdatascience.org/ourteam/
12 changes: 6 additions & 6 deletions docs/about-the-authors.html
Original file line number Diff line number Diff line change
@@ -229,7 +229,7 @@
</ul></li>
<li class="chapter" data-level="" data-path="societal-impact.html"><a href="societal-impact.html"><i class="fa fa-check"></i>Societal Impact</a>
<ul>
<li class="chapter" data-level="" data-path="societal-impact.html"><a href="societal-impact.html#ethics-codes"><i class="fa fa-check"></i>Ethics Codes</a></li>
<li class="chapter" data-level="" data-path="societal-impact.html"><a href="societal-impact.html#guidelines-for-responsible-development-and-use-of-ai."><i class="fa fa-check"></i>Guidelines for Responsible Development and Use of AI.</a></li>
<li class="chapter" data-level="" data-path="societal-impact.html"><a href="societal-impact.html#major-ethical-considerations"><i class="fa fa-check"></i>Major Ethical Considerations</a></li>
<li class="chapter" data-level="" data-path="societal-impact.html"><a href="societal-impact.html#intentional-and-inadvertent-harm"><i class="fa fa-check"></i>Intentional and Inadvertent Harm</a>
<ul>
@@ -239,9 +239,9 @@
<ul>
<li class="chapter" data-level="" data-path="societal-impact.html"><a href="societal-impact.html#tips-for-supporting-human-contributions"><i class="fa fa-check"></i>Tips for supporting human contributions</a></li>
</ul></li>
<li class="chapter" data-level="" data-path="societal-impact.html"><a href="societal-impact.html#inappropriate-use"><i class="fa fa-check"></i>Inappropriate Use</a>
<li class="chapter" data-level="" data-path="societal-impact.html"><a href="societal-impact.html#inappropriate-use-and-lack-of-oversight"><i class="fa fa-check"></i>Inappropriate Use and Lack of Oversight</a>
<ul>
<li class="chapter" data-level="" data-path="societal-impact.html"><a href="societal-impact.html#tips-for-avoiding-inappropriate-uses"><i class="fa fa-check"></i>Tips for avoiding inappropriate uses</a></li>
<li class="chapter" data-level="" data-path="societal-impact.html"><a href="societal-impact.html#tips-for-avoiding-inappropriate-uses-and-lack-of-oversight"><i class="fa fa-check"></i>Tips for avoiding inappropriate uses and lack of oversight</a></li>
</ul></li>
<li class="chapter" data-level="" data-path="societal-impact.html"><a href="societal-impact.html#bias-perpetuation-and-disparities"><i class="fa fa-check"></i>Bias Perpetuation and Disparities</a>
<ul>
@@ -551,15 +551,15 @@ <h1>About the Authors</h1>
</tr>
<tr class="odd">
<td>Content Editor(s)/Reviewer(s)</td>
<td><a href="https://www.linkedin.com/in/sitapriyamoorthi/">Sitapriya Moorthi</a>, <a href="https://jtleek.com/">Jeff Leek</a>, <a href="https://amypag.com/">Amy Paguirigan</a>, <a href="https://hutchdatascience.org/ourteam/">Jenny Weddle</a>, <a href="https://www.linkedin.com/in/christopher-lo-23316221b/">Christopher Lo</a></td>
<td><a href="https://www.linkedin.com/in/sitapriyamoorthi/">Sitapriya Moorthi</a>, <a href="https://jtleek.com/">Jeffrey Leek</a>, <a href="https://amypag.com/">Amy Paguirigan</a>, <a href="https://hutchdatascience.org/ourteam/">Jennifer Weddle</a>, <a href="https://www.linkedin.com/in/christopher-lo-23316221b/">Christopher Lo</a></td>
</tr>
<tr class="even">
<td>Content Director(s)</td>
<td><a href="https://jtleek.com/">Jeff Leek</a> , <a href="https://www.linkedin.com/in/elizabeth-humphries-61202a103/">Elizabeth Humphries</a></td>
<td><a href="https://jtleek.com/">Jeffrey Leek</a>, <a href="https://www.linkedin.com/in/elizabeth-humphries-61202a103/">Elizabeth Humphries</a></td>
</tr>
<tr class="odd">
<td>Content Consultants</td>
<td><a href="https://www.linkedin.com/in/robert-mcdermott-a77b9011/">Robert McDermott</a>, <a href="https://hutchdatascience.org/ourteam/">Jenny Weddle</a>, <a href="https://www.linkedin.com/in/adina-mueller-575aaa/">Adina Mueller</a></td>
<td><a href="https://www.linkedin.com/in/robert-mcdermott-a77b9011/">Robert McDermott</a>, <a href="https://hutchdatascience.org/ourteam/">Jennifer Weddle</a>, <a href="https://www.linkedin.com/in/adina-mueller-575aaa/">Adina Mueller</a></td>
</tr>
<tr class="even">
<td><strong>Production</strong></td>
