diff --git a/site/conf.py b/site/conf.py
index 682744c..cf5a9fc 100644
--- a/site/conf.py
+++ b/site/conf.py
@@ -53,7 +53,7 @@ html_theme_options = {
     'github_url': 'https://github.com/very-good-science/data-hazards',
-    'twitter_url': 'https://twitter.com/hashtag/DataEthicsClub',
+    'twitter_url': 'https://twitter.com/hashtag/DataHazards',
     'search_bar_text': 'Search this site...',
     'show_prev_next': False,
     "footer_items": ["license-footer", "sphinx-version"],
diff --git a/site/contents/materials/workshop/data-hazards.md b/site/contents/materials/workshop/data-hazards.md
index d34d1f0..3c45e17 100644
--- a/site/contents/materials/workshop/data-hazards.md
+++ b/site/contents/materials/workshop/data-hazards.md
@@ -146,15 +146,15 @@ __Safety Precautions:__
 __Hazard: Difficult to understand__

 There is a danger that the technology is difficult to understand.
-This could be because of the technology itself is hard to interpret (e.g. neural nets), or it's implementation (i.e. code is hidden and we are not allowed to see exactly what it is doing).
+This could be because the technology itself is hard to interpret (e.g. neural nets), or because of problems with its implementation (i.e. code is not provided, or not documented).
 Depending on the circumstances of its use, this could mean that incorrect results are hard to identify, or that the technology is inaccessible to people (difficult to implement or use).

 ^^^

-__Example 1:__ Google does not make code available for many projects, from it's DeepMind AlphaFold [protein-folding research](https://deepmind.com/blog/article/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology) to its' [Search Engine algorithms](https://www.searchenginejournal.com/google-algorithm-history/).
+__Example 1:__ Deep learning is used to perform [credit-scoring](https://www.moodysanalytics.com/risk-perspectives-magazine/managing-disruption/spotlight/machine-learning-challenges-lessons-and-opportunities-in-credit-risk-modeling) (i.e. could deny people credit), but it is difficult to understand (and therefore check) what these decisions are based on.

-__Example 2:__ Deep learning is used to perform [credit-scoring](https://www.moodysanalytics.com/risk-perspectives-magazine/managing-disruption/spotlight/machine-learning-challenges-lessons-and-opportunities-in-credit-risk-modeling) (i.e. could deny people credit), but it is difficult to understand (and therefore check) what these decisions are based on.
+__Example 2:__ Even when journals have a code and data availability policy, published researchers can be unaware of what they agreed to and may resist sharing it, as [this](https://www.pnas.org/content/115/11/2584) paper surveying Science publications shows.

 +++

 __Safety Precautions:__
@@ -165,7 +165,7 @@ __Safety Precautions:__
 :img-top: /images/hazards/direct-harm.png

 __Hazard: May cause direct harm__

-The application area of this technology means that it is capable of causing direct physical harm to someone if it malfunctions, even if used correctly e.g. healthcare, driverless vehicles.
+The application area of this technology means that it is capable of causing direct physical or psychological harm to someone even if used correctly, e.g. healthcare and driverless vehicles may be expected to directly harm someone unless they have 100% accuracy.

 ^^^