
What is this paper for?

So-called AI technologies are now widespread. As with the advent of the internet, organisations must take stock of what these changes mean across all aspects of their business or risk stagnating.

In openDemocracy’s case, this will include considering new tools and approaches for our journalism, distribution, audience, administrative and operations activities – that is, across the board. This makes sense when we consider that AI offers opportunities to streamline any activity that deals with information-based processes, i.e. much of our work.

From writing internal emails to investigative reporting, new tools will become available to help. Many of these can have significant impacts on how we work and manage resources. Some may involve substantial alterations to staff roles and responsibilities, or automated decisions that are prone to bias.

“Those who are subject to these systems are generally ignored in conversations about algorithmic governance and regulation.”

Meredith Whittaker, co-founder of AI Now Institute and professor at New York University

We will therefore benefit from a strategy and policies around how procurement, implementation and training take place. These will seek to smooth the transition, communicate the approach to stakeholders and ensure ethical principles are centred. Beyond this, they will help with future funding bids and business activities that require organisational proficiency in this area.

This document should help answer questions like:

  • Could a given set of work tasks be accomplished by an AI system?
  • What costs will be associated with that?
  • What risks to the organisation, our staff, readers, etc. does it entail?
  • Does the introduction of this system require training, specific data policies or new internal practices?
  • Do we have the policies and resources to support new projects reliant on AI?

The objectives are to:

  • Ensure the organisation is able to integrate complex new technologies and participate in projects dependent on them.
  • Ensure ethical principles (e.g. regarding HR) underpin policy and actions.

In what follows, we will consider various elements of AI strategy and policy design with a focus on surfacing viable organisational actions. Those actions will be highlighted (with “Action:”), then triaged and put forward as recommendations.

“News organisations can’t pay engineers as much as the tech giants, but they can implement AI technology to take up time-consuming tasks, thus freeing up journalists to produce the kind of investigative work that holds power to account and raises the organisation’s profile. Most don’t have an actionable template for introducing AI into the newsroom but emphasise the value of determining what problems need solving and where these technologies might fit.”

How AI is becoming an integral part of the news-making process, LSE

Principles {#principles}

Some of the questions we face when considering the integration of AI systems relate to fundamental issues of journalistic integrity, labour rights, data rights and other ethical principles. Many organisations have issued dedicated AI policies that state how they will deal with AI organisationally.

Relevant resources include Google’s internal policy, Microsoft’s approach to responsible AI, the Santa Clara Principles on Transparency and Accountability in Content Moderation, and the Oxford Commission on AI & Good Governance.

Such principles include ensuring that systems “avoid creating or reinforcing unfair bias” and are “accountable to people”.

We are committed to many of these principles in other domains such as labour rights. It is worth restating and updating them in the particular scope of AI. This should be considered as part of future organisational strategy exercises.

"Ethical guidelines on AI usually lack clear enforcement mechanisms: Voluntary commitments and recommendations for ethical AI development are less effective than binding commitments and practical processes that operationalize principles.”

Mozilla Internet Health Report 2020

**Action:** Define organisational commitment to key principles of ethical AI.

**Action:** Consider if our processes for managing AI will ensure ethical principles are upheld.

Deciding to implement new systems {#deciding-to-implement-new-systems}

We would benefit from a process to help decide whether and how to employ a given new technology. Such a process would involve researching the points listed below for each proposed system to ensure suitability and highlight any problems with implementation.

This needn’t be employed in every decision but may often help. In some cases, it may be easy to decide: it seems clear that a task like human translation of article copy is valuable but resource-intensive, and that limited budgets would be better spent elsewhere, e.g. on commissioning or investigative work. Furthermore, an appropriate solution, i.e. software translation, is increasingly effective, at least in producing a first draft which can be adjusted by a human editor.
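To make the translation example concrete, here is a minimal sketch of how a first-draft machine translation might be requested over a web API. The endpoint and field names are those of the public Google Cloud Translation v2 API, used purely for illustration; the `draft_translation` helper and key handling are our own placeholders, not an existing openDemocracy integration, and a comparable service could be substituted.

```python
import requests

# Illustrative only: request a first-draft machine translation of article copy.
# Assumes a Google Cloud Translation v2 API key; any comparable service could be swapped in.
TRANSLATE_URL = "https://translation.googleapis.com/language/translate/v2"


def draft_translation(text: str, target_lang: str, api_key: str) -> str:
    """Return a machine-generated first draft for a human editor to adjust."""
    response = requests.post(
        TRANSLATE_URL,
        params={"key": api_key},
        json={"q": text, "target": target_lang, "format": "text"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["data"]["translations"][0]["translatedText"]


# Example: produce a Spanish first draft, to be reviewed by a human editor.
# draft = draft_translation(article_body, "es", API_KEY)
```

The point is the shape of the workflow rather than the specific vendor: the machine produces the draft cheaply, and the human editor’s time is spent on adjustment rather than first-pass translation.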

In a less clear-cut case, we may want to gather data on the following points to help make an informed decision, as well as to assess how AI software would actually function.

Task analysis and data {#task-analysis-and-data}

Analyse the structure of the tasks to be automated (e.g. ‘translate article’). This can involve:

  • detailing the existing processes for accomplishing the task (e.g. ‘copy sent to staffer or freelancer; copy returned upon completion; freelancer is paid’),
  • describing how those will be replaced (e.g. ‘translation service connected to CMS translates copy on page directly; service is paid for per API call’).

We might then gather input data for a possible AI system, to assess its machine-readability and suitability for processing by a given piece of software. In the example of translation, this might include:

  • Samples of text for translation e.g. body copy, subject lines, social media copy, etc.
  • Other details e.g. what languages the copy for translation comes in
  • Where the data lives before processing e.g. Google docs
  • GDPR check: e.g. “no personal data passed to external AI system”

In a less obviously beneficial example than automated translation, we might initially gather data on the value of the task e.g. analytics data on reader numbers and quality of engagement on translated articles. This can help decide whether or not the eventual costs of purchasing, training for, migrating to, maintaining and using the software are justified.

We would then look at existing solutions (e.g. software) and see how they apply to our use case.
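As a sketch of how the points gathered above might be recorded consistently from one assessment to the next, the structure below is illustrative only: the `TaskAssessment` fields and example values are our own invention, not part of any existing openDemocracy system or tool.

```python
from dataclasses import dataclass, field


# Illustrative only: one way a task assessment could be captured as structured data.
@dataclass
class TaskAssessment:
    task: str                                # e.g. "translate article"
    existing_process: str                    # how the task is done today
    proposed_process: str                    # how an AI system would replace it
    input_samples: list[str] = field(default_factory=list)   # body copy, subject lines, etc.
    source_languages: list[str] = field(default_factory=list)
    data_location: str = ""                  # e.g. "Google Docs"
    gdpr_note: str = ""                      # e.g. "no personal data passed to external AI system"
    value_evidence: str = ""                 # e.g. analytics on translated-article engagement


translation_assessment = TaskAssessment(
    task="translate article",
    existing_process="copy sent to staffer or freelancer; returned and paid on completion",
    proposed_process="translation service connected to CMS; paid per API call",
    input_samples=["body copy", "subject lines", "social media copy"],
    source_languages=["es", "ru"],           # illustrative values
    data_location="Google Docs",
    gdpr_note="no personal data passed to external AI system",
    value_evidence="reader numbers and engagement quality on translated articles",
)
```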

Categorising {#categorising}

Categorising tasks can assist with analysis. Journalistic work can be usefully broken down into three areas:

  1. Information gathering and research (e.g. document search)
  2. Article production (e.g. article translation; interview transcription)
  3. Distribution (e.g. relevant audience identification)

A task, like translation, may primarily sit in the journalistic production category, but a chosen solution may also be useful in other categories, e.g. in category 1 by helping translate source material, or in category 3 by translating comments on the article and social media for non-native-speaking social media monitoring staff to understand.

These categories don’t capture operational areas of work, such as human resources, technical work and finance, that may also benefit from AI. We may add categories for those domains.

Cost analysis {#cost-analysis}

Does this system introduce organisational cost savings?

  • Savings on freelance translation costs

What costs, including unseen costs, might the organisation and other relevant stakeholders (including staff, readers, communities we serve) incur as a result?

  • Technical costs: cost of translation API + code development/maintenance/hosting costs

  • Staff training costs

  • Costs to staff: freelancers lose work; value of human translator’s nuance

Data risk {#data-risk}

Does the use of this system involve sharing user data or creating new data that may infringe on the rights of staff, readers, supporters or other stakeholders?

Systemic bias {#systemic-bias}

Are the AI systems involved capable of bias that may prejudice groups or individuals?

e.g. could automated translation models overlook cultural nuance, reducing quality?

Labour rights {#labour-rights}

Is this work that humans have done until now, where we would be displacing human work for the sake of efficiency without respecting our staff and the value they could continue to bring to the organisation? Is it a cost-cutting measure that fails to recognise the human costs, and the value lost to the organisation by reducing human involvement?

Public perception {#public-perception}

Does the use of this system affect trust in the organisation or specific aspects of our work?

e.g. could automated translations cause offence or reputational damage? Does swapping human translators for machines pose ethical problems?

Real examples of process in action {#real-examples-of-process-in-action}

Implementing automated toxic comment moderation with https://perspectiveapi.com

Task analysis and data {#task-analysis-and-data}

Existing process: Reader comments are moderated by a staff member, who actively monitors comment spaces and responds to complaints about bad comments that have already been published.

Replacement process: Automated system blocks comments that the model deems problematic. The system warns commenters before moderating, encouraging them to rewrite comments, which often leads to less toxic comments and reduced moderation time. A staff member still has to moderate after the fact, unblocking comments that were not really bad (false positives) and blocking those that the system did not catch (false negatives).

Gather input data: Text of user comments provided in article comments section.

Data on the value of the task: Time spent by staff member moderating toxic comments before implementation, compared with expected time spent with system in place. We may need to rely on benchmark data from other users of the system to gauge savings as we cannot make the comparison before implementation.

Existing solutions: Software solution (Perspective API) came as an option with our new commenting system (Coral Talk).
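For illustration, here is a minimal sketch of the scoring step that sits behind this kind of moderation. The endpoint and the TOXICITY attribute come from the public Perspective API, but the `should_hold_for_review` helper and the threshold value are assumptions made for the sake of the example; in practice Coral Talk handles this integration for us.

```python
import requests

# Minimal sketch of the scoring step behind automated comment moderation.
# Coral Talk integrates Perspective for us in practice; this only illustrates the idea.
PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
TOXICITY_THRESHOLD = 0.8  # assumed value; the real cut-off is a policy decision


def toxicity_score(comment_text: str, api_key: str) -> float:
    """Return Perspective's TOXICITY summary score (0.0-1.0) for a comment."""
    body = {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(PERSPECTIVE_URL, params={"key": api_key}, json=body, timeout=30)
    response.raise_for_status()
    return response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


def should_hold_for_review(comment_text: str, api_key: str) -> bool:
    """Hold the comment for human moderation if the score exceeds the threshold."""
    return toxicity_score(comment_text, api_key) >= TOXICITY_THRESHOLD
```

The threshold choice matters: set too low, it generates false positives for staff to unblock; set too high, it lets toxic comments through, which is exactly the trade-off the false positive/negative review step above is meant to manage.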

Categorising {#categorising}

This task falls in the category of journalistic distribution as it deals with supporting staff to manage public interactions after publication.

Cost analysis {#cost-analysis}

Cost savings: Time saved by editorial staff actively moderating comments.

Costs incurred: Technical setup and maintenance costs (e.g. 1.5 days a year of technical staff time).

Data risk {#data-risk}

“No personal data is used in any way that has not already been submitted for public consumption”

Systemic bias {#systemic-bias}

Moderation model could deem words or phrases to be offensive when they are not, leading to claims of censorship or prejudice.

Labour rights {#labour-rights}

We have never had the money to pay moderators, nor found effective volunteer models that enable humans to do this work without undermining their work elsewhere. This replacement system therefore displaces no one; it supports existing staff to focus on richer forms of engagement.

Public perception {#public-perception}

Yes. A community moderator – human or automated – judges content and has de facto censorship powers. The case for this should be made clearly, with recourse available to users. The Santa Clara Principles are relevant here.

**Action:** Establish this process as an organisational tool for analysing prospective new processes and systems. Develop the process over time to incorporate reflections.

**Action:** Create a live list of key, existing work tasks – e.g. translation; image research – to assess for AI viability. Share findings with staff, potentially with workshops running through the above process – this should engender familiarity and develop skills.

Future examples of process in action {#future-examples-of-process-in-action}

Here’s an illustrative list of features we might consider incorporating in future and where the above process could be useful.

This is a short outline, though the options are plentiful; more examples here.

**Action:** Regularly survey staff to discover which tasks may be suitable for automation. This would be an expansion of our existing staff software survey.

**Action:** We should consider ways of signposting AI usage for the maintenance of user trust. For example, just as we currently mark articles as “Opinion” or “Analysis”, we should be prepared to add journalistic labelling of content as e.g. “Computer generated”. “Public trust requires clear disclosure when using text-generating AI tools.”

Wider opportunities {#wider-opportunities}

This document contends that AI strategy will be necessary to ensure organisational competence and flourishing during the coming transformation. Beyond this, there will be opportunities for organisations that are specifically adept in this regard, that understand the issues, that have a basis for relevant decision-making and a reputation for doing so.

These include opportunities for collaboration on complex investigations, technical development, critical analysis, consultancy and research in the field. For example, should openDemocracy establish a solid foundation in AI practice, we will be well placed to tackle complex investigations and knowledgeably participate in collaborative work dealing with the subject.

To build on this, the organisation would commit to and invest in strategic activities that produce skilled staff who are confident in collaborating on and managing AI-oriented projects.

"The more organisations explore these kinds of collaborations, the sooner we will realise as an industry the value that collaboration can bring to help us make the most of the potential offered by AI technology."

"The impact of AI and collaboration on investigative journalism, LSE"

**Action:** Determine if and how openDemocracy wants to develop AI competencies and communicate those externally, with the public, and potential funders and partners – e.g. positioning our teams as AI-savvy.

**Action:** Decide how AI might relate to our brand and general strategy, the problems it might solve, or the needs it could meet.

**Action:** Assign responsibility for external relations with partners, clients, and wider AI resources with a mission to investigate and incorporate AI innovation.

Conclusion and recommendations {#conclusion-and-recommendations}

We may wish to build on this document by undertaking a discrete, organisational AI-readiness assessment, or we may incorporate specific actions into existing workflows in an ad hoc fashion.

Triaging the actions surfaced in this report is best done in the context of organisational discussion. Nevertheless, here is an attempt to bring together a set of prioritised recommendations based on the above and the current state of openDemocracy. Actions have been prioritised on a matrix of likely impact versus resource required to implement.

  • Create a live list of key, existing work tasks – e.g. translation; image research – to assess for AI viability.
    • Share findings with staff, potentially with workshops running through the above process – this should engender familiarity and develop skills.
  • Consider ways of signposting AI usage for the maintenance of user trust.
  • Establish and develop a process for analysing prospective new AI processes and systems.
  • Consider if our processes for managing AI will ensure ethical principles are upheld.
  • Technology radar: Regularly survey staff to discover which tasks may be suitable for automation, and what software people have seen or heard about that may be useful. This would be an expansion of our existing staff software survey.
  • Determine if and how openDemocracy wants to develop AI competencies and communicate those externally, with the public, and potential funders and partners – e.g. positioning our teams as AI-savvy.
  • Define organisational commitment to key principles of ethical AI.
  • Decide how AI might relate to our brand and general strategy, the problems it might solve, or the needs it could meet.
  • Assign responsibility for external relations with partners, clients, and wider AI resources with a mission to investigate and incorporate AI innovation.
  • Internal communication to staff about this policy and changes.
  • Staff training.
    • Along with the technology radar, future training exercises should encourage staff to consider their own technology needs with a view to identifying processes that could be targeted by AI systems. This involves developing skills in understanding which tasks can be made into data and processed.
    • Beyond that, training should help staff test and evaluate new software and processes.
  • Identify key obstacles (resources, skills, culture, management, etc.) and plan how to address them in a systematic way.
  • Assign roles and responsibilities and create a communications structure across the organisation to include all stakeholders.
  • Establish systems of monitoring and reviewing performance.