---
hide:
- navigation
---

![Responsible AI, Law, Ethics & Society Logo](assets/logo.png){: style="width:300px;"}

# Operationalizing Responsible AI <br> Congressional Workshop

Time: __August 7-8, 2023__ <br>
Place: __Russell 385__

## Overview

As AI becomes prevalent in many aspects of our lives, it is crucial to ensure that AI-driven systems align with our societal values. A key factor in achieving this is how AI systems are governed and regulated. Effective AI governance and regulation is challenging: open-ended legal and ethical concepts such as "fairness" or "privacy" must be transposed into concrete design specifications. This task requires "Responsible AI literacy" - an understanding of how AI systems are developed and deployed, and how design choices throughout the system life cycle affect legal, ethical and normative outcomes.

During the workshop, participants will enhance their Responsible AI literacy and develop their skills to conduct an effective and critical dialogue with relevant experts.

The workshop consists of three interactive, hands-on sessions in which participants work in teams, designing and analyzing AI systems and discussing real-world scenarios, to develop a comprehensive understanding of the interaction between AI design choices and their ethical and legal implications.

This training methodology, further described [here](https://go.responsibly.ai/paper/open), was successfully implemented in multiple [settings](https://teach.responsibly.ai): academia:material-information-outline:{ title="Cornell Tech, Boston University, Princeton University, Georgetown University, Warwick University, Bocconi University, Technion, Tel Aviv University, University of Haifa. Next year also at University of California, Berkeley." }, industry:material-information-outline:{ title="At the NYC-based Runway accelerator to tech entrepreneurs." } and the government:material-information-outline:{ title="At OECD and the Israeli Government Ministry of Innovation, Science and Technology, leading the National AI Initiative." }.

Participants are strongly encouraged to attend all three sessions. This is a hands-on workshop, so please remember to bring your laptop. A technical background in AI is not necessary; the workshop is designed specifically for policymakers.

## Agenda

### Monday, August 7th 2023

| Time | Session |
|-----------------|----------------------------------------------------------------------------------|
| 9:30 am - 10 am | Registration and coffee |
| 10 am - 12 pm | First session: [Balancing trade-offs in the design of AI systems](#1-balancing-trade-offs-in-the-design-of-ai-systems) |
| 12 pm - 1 pm | Lunch |
| 1 pm - 3 pm | Second session: [Deploying AI applications with foundation models & generative AI](#2-deploying-ai-applications-with-foundation-models-generative-ai) |
| 3 pm - 3:30 pm | Coffee break |
| 3:30 pm - 5 pm  | Second session (continued) |

### Tuesday, August 8th 2023

| Time | Session |
|--------------------|-------------------------------------------------------------------------|
| 8:30 am - 9:00 am | Coffee |
| 9:00 am - 10:30 am | Third session: [Building content moderation systems for online platforms](#3-building-content-moderation-systems-for-online-platforms) |
| 10:30 am - 11 am | Coffee break |
| 11 am - 12:30 pm   | Third session (continued) |
| 12:30 pm - 1:30 pm | Lunch and closing remarks |


## Sessions

### 1 Balancing trade-offs in the design of AI systems
To operationalize Responsible AI, one needs to translate between technical specifications and human values, and to trade off between design alternatives. Every design choice carries normative implications, and every normative commitment constrains the design.

In this session, participants will engage with core aspects of Responsible AI through a case study on the use of AI in social services.

The general lessons learned about the nature of trade-offs from this government case study apply to any AI system design.

### 2 Deploying AI applications with foundation models & generative AI
The advent of technologies like ChatGPT has made the development of AI systems more accessible, enabling developers with basic software engineering skills to create AI-based applications even without technical expertise in AI. Ensuring these systems perform well, exhibit robustness and adhere to human values, however, requires more than software engineering skills.

In this session, participants will take a crash course in prompt engineering, have the opportunity to build AI systems using foundation models and generative AI, and delve into the challenges of evaluating and testing their performance with respect to Responsible AI.

### 3 Building content moderation systems for online platforms
Content moderation systems are powerful and sophisticated tools that have a crucial, daily impact on shaping speech in our society. Different design choices for content moderation systems are not simply technical: they imply significant normative meanings and invoke issues pertaining to various Responsible AI principles such as free speech, fairness, privacy, transparency, safety and human autonomy. Integrating these principles into a coherent functioning system is a daunting task.

In this “mini-hackathon” session, participants will experiment with operationalizing these normative values by designing a content moderation system, and will receive feedback on their work.

## Team

[**Niva Elkin-Koren, J.S.D.**](https://en-law.tau.ac.il/profile/elkiniva) is a Professor of Law at Tel Aviv University Faculty of Law and a Faculty Associate at the Berkman Klein Center for Internet & Society at Harvard University. She is the academic director of the Chief Justice Meir Shamgar Center for Digital Law and Innovation, a co-director of the Algorithmic Governance Lab at TAU Innovation Lab (“TIL”) and a member of the Academic Management Committee of TAU Center for Artificial Intelligence and Data Science. Her research is located at the intersection between law and information technology, focusing on values in design, intellectual property, governance by AI and governance of AI.

[**Avigdor Gal, D.Sc.**](https://agp.iem.technion.ac.il/avigal/) is the Benjamin and Florence Free Chaired Professor of Data Science and the Co-chair of the Center for Humanities & AI at the Technion - Israel Institute of Technology. He is with the Faculty of Data & Decision Sciences, where he led the design of the first engineering program in data science in Israel (and possibly the world). Gal’s research focuses on data integration under uncertainty, using state-of-the-art machine learning and deep learning techniques to improve data quality. Through his work as a consultant, his research is applied in multiple industries, including FinTech (e.g., Pagaya). Gal has been involved in developing methods for embedding Responsible AI in companies and government authorities through an education process that strengthens dialogue between data scientists and other stakeholders (e.g., lawyers and regulators).

[**Karni Chagal-Feferkorn, Ph.D.**](https://www.shamgarlaw.sites.tau.ac.il/en/pepole/dr.-karni-chagal-feferkorn) is an Assistant Professor of Law at the Academic Center of Law & Business in Ramat-Gan, working on AI and global regulation, particularly legal liability in cases where AI systems were involved in causing harm. Before that, she was a Law Postdoc fellow at the University of Ottawa.
Karni holds an LL.M. in Law & Technology from Stanford University and is a licensed attorney in New York, California and Israel.
In addition to her academic engagement, Karni is the co-founder of a consultancy firm that advises government agencies and private sector companies on global regulation.

[**Shlomi Hod**](https://shlomi.hod.xyz) is a computer science Ph.D. student working on Responsible AI at Boston University. Currently, he works with the Israeli Ministry of Health to release to the public the National Birth Registry using PETs (Privacy-Enhancing Technologies).
Last summer, he worked as a data scientist at Twitter, where he leveraged human-in-the-loop research to improve toxicity models.
In the past, Shlomi was the co-founder of the Israeli National Cyber Education Center. There he led the development of nationwide educational programs in computing for kids and teens. The center aims to increase the social mobility and tech participation of underrepresented groups in tech, such as women, minorities, and individuals from the suburbs of Israel. Before that, he led a data science and cybersecurity research team.

[**Amit Ashkenazi**](https://www.linkedin.com/in/amit-ashkenazi-1000b71ba/) is a law and technology expert, supporting public and private organizations on legal, policy and compliance aspects of cybersecurity, artificial intelligence and data protection. He has extensive experience in legal policymaking in both domestic and international contexts. Amit was the legal advisor of the Israeli National Cyber Directorate (INCD) at the Prime Minister's Office between 2014 and 2022. Before INCD, Amit was Head of the Legal Department in the Israeli Law, Information and Technology Authority (ILITA) in the Ministry of Justice, Israel’s Data Protection Authority. In both positions, Amit set up and led the legal departments and held responsibility for legal opinions, legislation, and international legal relations. Amit is a member of the ISO SC-42 WG1 expert group on AI, and recently advised the Israeli Ministry of Science and Technology on the development of national artificial intelligence regulatory policy. Amit is an adjunct lecturer on cyber law and policy at Tel Aviv University, the University of Haifa and Reichman University.

[**Hofit Wasserman Rozen**](https://www.linkedin.com/in/hofit-wasserman-rozen-843997b9/) is a research fellow at the Shamgar Center, TAU Faculty of Law, and a lawyer consulting on the ethical and regulatory aspects of developing, deploying and using artificial intelligence systems. Hofit is currently a Ph.D. candidate at Tel Aviv University (Law and Engineering faculties). Her research focuses on AI regulation and ethics, specifically the legal and ML aspects of explainability and the legal right to an explanation of AI systems. In her professional role, she also serves as a consultant in the global hi-tech industry, supporting upper management in organizational and strategic planning. Hofit is a licensed attorney in Israel, and her LL.B. and LL.M. (both with honors) are from Tel Aviv University.

## Contact

<div class="grid cards" markdown>

__Shlomi Hod__ [[email protected]](mailto:[email protected])

__Aditi Gupta__ [[email protected]](mailto:[email protected])

</div>

## Sponsors

<div class="image-grid">
<div class="image-cell"><img src="/assets/ieee.png" style="height:150px"></div>
<div class="image-cell"><img src="/assets/bu.png" style="height:120px"></div>
<div class="image-cell"><img src="/assets/technion.png" style="height:120px"></div>
<div class="image-cell"><img src="/assets/tau.png" style="height:100px"></div>
</div>