diff --git a/01e-AI_Possibilities-possibilities.Rmd b/01e-AI_Possibilities-possibilities.Rmd
index caaf7974..29e67e1f 100644
--- a/01e-AI_Possibilities-possibilities.Rmd
+++ b/01e-AI_Possibilities-possibilities.Rmd
@@ -1,3 +1,6 @@
+---
+always_allow_html: yes
+---
 ```{r, include = FALSE}
 ottrpal::set_knitr_image_path()
 ```
@@ -315,14 +318,14 @@ Here is the response:
 Here is a toy time series dataset tracking individuals, time points, and coffee consumption:
 
 ```{r echo = FALSE, warning = FALSE, message = FALSE}
-install.packages("kableExtra")
-library(kableExtra)
+install.packages("kableExtra", repos = "https://cloud.r-project.org")
+library(magrittr)
 data.frame(
   ID = c(rep(1:3, each = 3)),
   Time_point = c(rep(1:3, times = 3)),
   Coffee_cups = c(2,3,1,4,2,3,1,0,2)
 ) %>%
-  kbl()
+  kableExtra::kbl()
 ```
 
 This tracks 3 individuals over 3 time points (days) and their daily coffee consumption in cups. Individual 1 drank 2 cups on day 1, 3 cups on day 2, and 1 cup on day 3. Individual 2 drank 4 cups on day 1, 2 cups on day 2, and 3 cups on day 3. Individual 3 drank 1 cup on day 1, 0 cups on day 2, and 2 cups on day 3.
diff --git a/02b-Avoiding_Harm-concepts.Rmd b/02b-Avoiding_Harm-concepts.Rmd
index 665c81b5..05b37cd8 100644
--- a/02b-Avoiding_Harm-concepts.Rmd
+++ b/02b-Avoiding_Harm-concepts.Rmd
@@ -1,4 +1,6 @@
-
+---
+always_allow_html: yes
+---
 
 
 ```{r, include = FALSE}
@@ -258,11 +260,7 @@ In the flip side, AI has the potential if used wisely, to reduce health inequiti
 * Consider the possible outcomes of the use of content created by newly developed AI tools. Consider if the content could possibly be used in a manner that will result in discrimination.
 
-See @belenguer_ai_2022 for more guidance. We also encourage you to check out the following video for a classic example of bias in AI:
-
-```{r, fig.align="center", fig.alt = "video", echo=FALSE, out.width="90%"}
-knitr::include_url("https://www.youtube.com/embed/TWWsW1w-BVo?si=YLGbpVKrUz5b56vM")
-```
+See @belenguer_ai_2022 for more guidance. We also encourage you to check out [this video for a classic example of bias in AI](https://www.youtube.com/embed/TWWsW1w-BVo?si=YLGbpVKrUz5b56vM).
 
 For further details check out this [course](https://www.coursera.org/learn/algorithmic-fairness) on Coursera about building fair algorithms. We will also describe more in the next section.
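The `repos =` addition in the first file matters because `install.packages()` has no default CRAN mirror during a non-interactive `rmarkdown::render()`, so the chunk would otherwise error in automated builds. A minimal sketch of the same pattern, with an added guard that is not part of this diff (an assumption, shown only to avoid reinstalling on every render):

```r
# Sketch only, not part of this diff: install kableExtra just once,
# pinning a CRAN mirror so non-interactive renders do not fail.
if (!requireNamespace("kableExtra", quietly = TRUE)) {
  install.packages("kableExtra", repos = "https://cloud.r-project.org")
}

# magrittr supplies the %>% pipe used in the chunk; kbl() is then called
# via its namespace (kableExtra::kbl()), so kableExtra itself never needs
# to be attached with library().
library(magrittr)
```

This mirrors the diff's switch from `library(kableExtra)` to `library(magrittr)` plus the namespaced `kableExtra::kbl()` call: only the pipe is attached into the search path, and the table function is resolved explicitly.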