diff --git a/.github/issue_template.md b/.github/issue_template.md
index 9ff136cd5..ae5740c52 100644
--- a/.github/issue_template.md
+++ b/.github/issue_template.md
@@ -1,13 +1,3 @@
-Please replace this line with full information about your idea or problem. If it's a bug share as much as possible to reproduce it.
+# Overview
 
-Job story if there is one ...
-
-## Acceptance
-
-* [ ] ...
-
-## Tasks
-
-* [ ] ...
-
-## Analysis
+Please replace this line with full information about your idea or problem. If it's a bug, share as much as possible to reproduce it.
diff --git a/.github/stale.yaml b/.github/stale.yaml
new file mode 100644
index 000000000..cc7278093
--- /dev/null
+++ b/.github/stale.yaml
@@ -0,0 +1,23 @@
+# Number of days of inactivity before an issue becomes stale
+daysUntilStale: 90
+
+# Number of days of inactivity before a stale issue is closed
+daysUntilClose: 30
+
+# Issues with these labels will never be considered stale
+exemptLabels:
+  - feature
+  - enhancement
+  - bug
+
+# Label to use when marking an issue as stale
+staleLabel: wontfix
+
+# Comment to post when marking an issue as stale. Set to `false` to disable
+markComment: >
+  This issue has been automatically marked as stale because it has not had
+  recent activity. It will be closed if no further activity occurs. Thank you
+  for your contributions.
+
+# Comment to post when closing a stale issue. Set to `false` to disable
+closeComment: false
diff --git a/.github/workflows/general.yaml b/.github/workflows/general.yaml
new file mode 100644
index 000000000..cb5a86f1e
--- /dev/null
+++ b/.github/workflows/general.yaml
@@ -0,0 +1,44 @@
+name: general
+
+on:
+  push:
+    branches:
+      - main
+    tags:
+      - v*.*.*
+  pull_request:
+    branches:
+      - main
+
+jobs:
+
+  # Test
+
+  test:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Checkout
+        uses: actions/checkout@v1
+      - name: Configure
+        run: npm install
+      - name: Build
+        run: npm run build
+
+  # Deploy
+
+  deploy:
+    if: github.ref == 'refs/heads/main'
+    runs-on: ubuntu-latest
+    needs: [test]
+    steps:
+      - name: Checkout
+        uses: actions/checkout@v1
+      - name: Configure
+        run: npm install
+      - name: Build
+        run: npm run build
+      - name: Deploy
+        uses: JamesIves/github-pages-deploy-action@4.1.4
+        with:
+          branch: site
+          folder: site/.vuepress/dist
diff --git a/.gitignore b/.gitignore
new file mode 100644
index 000000000..4ca122b8d
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,6 @@
+node_modules/*
+site/.vuepress/dist/*
+sandbox/*
+.env
+
+.vercel
\ No newline at end of file
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
new file mode 100644
index 000000000..a881db2f0
--- /dev/null
+++ b/CONTRIBUTING.md
@@ -0,0 +1,110 @@
+# Contributing
+
+The website contribution guide.
+
+## Frontmatter
+
+Frontmatter is the metadata in pages. It is an essential "API" between content authors and theme designers. Here you can find the frontmatter definitions for the main categories of pages.
+
+### Common for all pages
+
+This frontmatter is common to all pages and may be used in any of them. Every page MUST have a title. All other values are optional.
+
+```md
+---
+title: # the title of the page
+description: # short description / summary / excerpt
+tagline: # one sentence summary
+image: /data-package-diagram.png
+layout: product | job [| blog]
+---
+```
+
+* `layout`: You only need to set a layout if you need a custom one. Default pages don't need it, nor does the blog section (the layout for blog pages is set automatically by the blog plugin).
+* `image`: only needed if you have a primary image for this page that is going to be used elsewhere. For example, for the blog, this is the featured image that is used on the blog listing page.
+* `description`: you can set this explicitly in the frontmatter OR have it generated automatically by using the `<!-- more -->` tag as per https://v1.vuepress.vuejs.org/theme/writing-a-theme.html#content-excerpt
+  * If set, the description will be used to set the meta description tag.
+* `tagline`: this is rarely used as you have title and description. TODO: specify which page types use this.
+
+### Page titles
+
+Our convention is that page titles are set in frontmatter, not in markdown. This allows them to be styled differently, etc.
+
+:white_check_mark:
+
+```md
+---
+title: My Page
+---
+
+This page is about X ...
+```
+
+:x:
+
+```md
+# My Page
+
+This page is about X
+```
+
+### Blog posts
+
+```md
+---
+category: case-studies | grantee-profiles | pilots | grantee-profiles-2019
+date: # date of publication in yyyy-mm-dd format
+author:
+tags: ["pilots", "case-studies"]
+---
+```
+
+#### Author
+
+The author field accepts a single author or multiple authors. A single author is a single value:
+
+```
+author: Rufus Pollock
+```
+
+Multiple authors are an array of values:
+
+```
+author:
+  - Rufus Pollock
+  - Lilly Winfree
+```
+
+#### Category
+
+When category == "case-studies" | "pilots"
+
+```md
+interviewee:
+subject_context:
+```
+
+When category == "grantee-profiles"
+
+```md
+github:
+twitter:
+website:
+```
+
+### Jobs pages
+
+```md
+tagline:
+pain:
+context: (?)
+hexagon:
+```
+
+### Product pages
+
+```md
+hexagon:
+github: # list of github repos ...
+```
diff --git a/LEAD.md b/LEAD.md
new file mode 100644
index 000000000..e8b747f77
--- /dev/null
+++ b/LEAD.md
@@ -0,0 +1 @@
+roll
diff --git a/LICENSE.md b/LICENSE.md
new file mode 100644
index 000000000..a209e75e7
--- /dev/null
+++ b/LICENSE.md
@@ -0,0 +1,21 @@
+The MIT License (MIT)
+
+Copyright (c) 2020 Open Knowledge Foundation
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/README.md b/README.md
index 3d536dce8..551384ae3 100644
--- a/README.md
+++ b/README.md
@@ -1,8 +1,10 @@
-# Frictionless - Project Management
+# Frictionless Data
 
-This is a repo for managing the Frictionless project – https://frictionlessdata.io/
+[![Build](https://img.shields.io/github/workflow/status/frictionlessdata/website/general/main)](https://github.com/frictionlessdata/frictionlessdata.io/actions)
+[![Codebase](https://img.shields.io/badge/codebase-github-brightgreen)](https://github.com/frictionlessdata/frictionlessdata.io)
+[![Support](https://img.shields.io/badge/support-slack-brightgreen)](https://join.slack.com/t/frictionlessdata/shared_invite/zt-17kpbffnm-tRfDW_wJgOw8tJVLvZTrBg)
 
-As such it is more core team focused. :smile:
+This is a repo for managing the Frictionless project – https://frictionlessdata.io/. As such it is more core team focused. :smile:
 
 * Frictionless Data: https://frictionlessdata.io/
 * Specifications: https://specs.frictionlessdata.io/
@@ -21,3 +23,18 @@ Want to cite this repo? Please use this DOI:
 * Also the catch-all for general questions, suggestions, ideas, support requests, etc.
 * **Project**: https://github.com/frictionlessdata/project/issues – the default place for core team to organize and plan work, schedule sprints, etc. **NOT** for general discussion, ideas, support, etc.
 * **Specs**: https://github.com/frictionlessdata/specs/issues
+
+## How to contribute to the website
+
+This is the new FrictionlessData.io website to be released in 2020. It reflects the recent updates made to the Frictionless Data project setup and brand.
+
+### Development
+
+```console
+$ npm install
+$ npm start
+```
+
+### Deployment
+
+New commits to the main branch are automatically deployed to GitHub Pages by a [workflow](.github/workflows/general.yaml).
diff --git a/index.html b/index.html
deleted file mode 100644
index 4df8f8154..000000000
--- a/index.html
+++ /dev/null
@@ -1,2446 +0,0 @@
-[2,446 deleted lines of generated HTML omitted: the rendered "Welcome to Livemark" template page, including its navigation chrome, the cars data table, the chart, the "Hello World" script output, and the Bootstrap cards]
\ No newline at end of file
diff --git a/index.md b/index.md
deleted file mode 100644
index 68469dc63..000000000
--- a/index.md
+++ /dev/null
@@ -1,122 +0,0 @@
----
-brand:
-  text: Livemark
-github:
-  user: frictionlessdata
-  repo: livemark
-links:
-  items:
-    - name: Documentation
-      path: https://livemark.frictionlessdata.io/
-# add other config options here or create livemark.yaml file
----
-
-# Welcome to Livemark
-
-> Edit `index.md` file to explore Livemark's features or just remove everything to start from a scratch.
-
-It's a template document created automatically to introduce Livemark. We will list here core [Livemark](https://livemark.frictionlessdata.io/) features and you can play with theme live editing the document. It's possible to use any standard Markdown features as well.
-
-## Logic
-
-We can pre-process our markdown file using [Jinja](https://jinja.palletsprojects.com/):
-
-{% for car in frictionless.extract('https://raw.githubusercontent.com/frictionlessdata/livemark/main/data/cars.csv', layout={"limitRows": 5}) %}
-- {{ car.brand }} {{ car.model }}: ${{ car.price }}
-{% endfor %}
-
-## Table
-
-We can visualize our data as a table using [HandsOnTable](https://handsontable.com/):
-
-```yaml table
-data: https://raw.githubusercontent.com/frictionlessdata/livemark/main/data/cars.csv
-width: 600
-order:
-  - [3, 'desc']
-columns:
-  - data: type
-  - data: brand
-  - data: model
-  - data: price
-  - data: kmpl
-  - data: bhp
-```
-
-## Chart
-
-Another option is to draw a chart using [Vega](https://vega.github.io/vega-lite/):
-
-```yaml chart
-data:
-  url: https://raw.githubusercontent.com/frictionlessdata/livemark/main/data/cars.csv
-mark: circle
-selection:
-  brush:
-    type: interval
-encoding:
-  x:
-    type: quantitative
-    field: kmpl
-    scale:
-      domain: [12,25]
-  y:
-    type: quantitative
-    field: price
-    scale:
-      domain: [100,900]
-  color:
-    condition:
-      selection: brush
-      field: type
-      type: nominal
-    value: grey
-  size:
-    type: quantitative
-    field: bhp
-width: 500
-height: 300
-```
-
-## Script
-
-Moreover, we can execute scripts in [Python](https://www.python.org/)/[Bash](https://www.gnu.org/software/bash/):
-
-```python script
-for number in range(1, 6):
-    print(f'Hello World #{number}!')
-```
-
-## Markup
-
-Markdown is not enough? Finally, let's add some markup with [Bootstrap](https://getbootstrap.com/):
-
-```html markup
-[Bootstrap grid markup lost to HTML stripping; it rendered three cards:
- ![Package](https://livemark.frictionlessdata.io/assets/data-package.png) "Data Package": "A simple container format for describing a coherent collection of data in a single package."
- ![Resource](https://livemark.frictionlessdata.io/assets/data-resource.png) "Data Resource": "A simple format to describe and package a single data resource such as a individual table or file."
- ![Schema](https://livemark.frictionlessdata.io/assets/table-schema.png) "Table Schema": "A simple format to declare a schema for tabular data. The schema is designed to be expressible in JSON."]
-```
diff --git a/metrics-track/README.md b/metrics/README.md
similarity index 100%
rename from metrics-track/README.md
rename to metrics/README.md
diff --git a/metrics-track/fd-repos-metrics.csv b/metrics/fd-repos-metrics.csv
similarity index 100%
rename from metrics-track/fd-repos-metrics.csv
rename to metrics/fd-repos-metrics.csv
diff --git a/package-lock.json b/package-lock.json
new file mode 100644
index 000000000..2a2872ad6
--- /dev/null
+++ b/package-lock.json
@@ -0,0 +1,14655 @@
+{
+  "name": "frictionlessdata.io",
+  "version": "1.0.0",
+  "lockfileVersion": 1,
+  "requires": true,
+  "dependencies": {
+[remainder of the npm-generated lockfile omitted: dependency entries through the end of the 14,655-line file]
"sha512-MXyyKQd9inhx1kDYPkFRVOBXQ20ES8Pto3T7UZ92xj2mY0EVD8oAVzeyYuVfy/mxAdTSIayOvg+aVzcHV2bn6Q==", + "dev": true, + "requires": { + "@babel/helper-create-class-features-plugin": "^7.13.0", + "@babel/helper-plugin-utils": "^7.13.0" + } + }, + "@babel/plugin-proposal-private-property-in-object": { + "version": "7.14.0", + "resolved": "https://registry.npmjs.org/@babel/plugin-proposal-private-property-in-object/-/plugin-proposal-private-property-in-object-7.14.0.tgz", + "integrity": "sha512-59ANdmEwwRUkLjB7CRtwJxxwtjESw+X2IePItA+RGQh+oy5RmpCh/EvVVvh5XQc3yxsm5gtv0+i9oBZhaDNVTg==", + "dev": true, + "requires": { + "@babel/helper-annotate-as-pure": "^7.12.13", + "@babel/helper-create-class-features-plugin": "^7.14.0", + "@babel/helper-plugin-utils": "^7.13.0", + "@babel/plugin-syntax-private-property-in-object": "^7.14.0" + } + }, + "@babel/plugin-proposal-unicode-property-regex": { + "version": "7.12.13", + "resolved": "https://registry.npmjs.org/@babel/plugin-proposal-unicode-property-regex/-/plugin-proposal-unicode-property-regex-7.12.13.tgz", + "integrity": "sha512-XyJmZidNfofEkqFV5VC/bLabGmO5QzenPO/YOfGuEbgU+2sSwMmio3YLb4WtBgcmmdwZHyVyv8on77IUjQ5Gvg==", + "dev": true, + "requires": { + "@babel/helper-create-regexp-features-plugin": "^7.12.13", + "@babel/helper-plugin-utils": "^7.12.13" + } + }, + "@babel/plugin-syntax-async-generators": { + "version": "7.8.4", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-async-generators/-/plugin-syntax-async-generators-7.8.4.tgz", + "integrity": "sha512-tycmZxkGfZaxhMRbXlPXuVFpdWlXpir2W4AMhSJgRKzk/eDlIXOhb2LHWoLpDF7TEHylV5zNhykX6KAgHJmTNw==", + "dev": true, + "requires": { + "@babel/helper-plugin-utils": "^7.8.0" + } + }, + "@babel/plugin-syntax-class-properties": { + "version": "7.12.13", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-class-properties/-/plugin-syntax-class-properties-7.12.13.tgz", + "integrity": 
"sha512-fm4idjKla0YahUNgFNLCB0qySdsoPiZP3iQE3rky0mBUtMZ23yDJ9SJdg6dXTSDnulOVqiF3Hgr9nbXvXTQZYA==", + "dev": true, + "requires": { + "@babel/helper-plugin-utils": "^7.12.13" + } + }, + "@babel/plugin-syntax-class-static-block": { + "version": "7.12.13", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-class-static-block/-/plugin-syntax-class-static-block-7.12.13.tgz", + "integrity": "sha512-ZmKQ0ZXR0nYpHZIIuj9zE7oIqCx2hw9TKi+lIo73NNrMPAZGHfS92/VRV0ZmPj6H2ffBgyFHXvJ5NYsNeEaP2A==", + "dev": true, + "requires": { + "@babel/helper-plugin-utils": "^7.12.13" + } + }, + "@babel/plugin-syntax-decorators": { + "version": "7.12.13", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-decorators/-/plugin-syntax-decorators-7.12.13.tgz", + "integrity": "sha512-Rw6aIXGuqDLr6/LoBBYE57nKOzQpz/aDkKlMqEwH+Vp0MXbG6H/TfRjaY343LKxzAKAMXIHsQ8JzaZKuDZ9MwA==", + "dev": true, + "requires": { + "@babel/helper-plugin-utils": "^7.12.13" + } + }, + "@babel/plugin-syntax-dynamic-import": { + "version": "7.8.3", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-dynamic-import/-/plugin-syntax-dynamic-import-7.8.3.tgz", + "integrity": "sha512-5gdGbFon+PszYzqs83S3E5mpi7/y/8M9eC90MRTZfduQOYW76ig6SOSPNe41IG5LoP3FGBn2N0RjVDSQiS94kQ==", + "dev": true, + "requires": { + "@babel/helper-plugin-utils": "^7.8.0" + } + }, + "@babel/plugin-syntax-export-namespace-from": { + "version": "7.8.3", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-export-namespace-from/-/plugin-syntax-export-namespace-from-7.8.3.tgz", + "integrity": "sha512-MXf5laXo6c1IbEbegDmzGPwGNTsHZmEy6QGznu5Sh2UCWvueywb2ee+CCE4zQiZstxU9BMoQO9i6zUFSY0Kj0Q==", + "dev": true, + "requires": { + "@babel/helper-plugin-utils": "^7.8.3" + } + }, + "@babel/plugin-syntax-json-strings": { + "version": "7.8.3", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-json-strings/-/plugin-syntax-json-strings-7.8.3.tgz", + "integrity": 
"sha512-lY6kdGpWHvjoe2vk4WrAapEuBR69EMxZl+RoGRhrFGNYVK8mOPAW8VfbT/ZgrFbXlDNiiaxQnAtgVCZ6jv30EA==", + "dev": true, + "requires": { + "@babel/helper-plugin-utils": "^7.8.0" + } + }, + "@babel/plugin-syntax-jsx": { + "version": "7.12.13", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-jsx/-/plugin-syntax-jsx-7.12.13.tgz", + "integrity": "sha512-d4HM23Q1K7oq/SLNmG6mRt85l2csmQ0cHRaxRXjKW0YFdEXqlZ5kzFQKH5Uc3rDJECgu+yCRgPkG04Mm98R/1g==", + "dev": true, + "requires": { + "@babel/helper-plugin-utils": "^7.12.13" + } + }, + "@babel/plugin-syntax-logical-assignment-operators": { + "version": "7.10.4", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-logical-assignment-operators/-/plugin-syntax-logical-assignment-operators-7.10.4.tgz", + "integrity": "sha512-d8waShlpFDinQ5MtvGU9xDAOzKH47+FFoney2baFIoMr952hKOLp1HR7VszoZvOsV/4+RRszNY7D17ba0te0ig==", + "dev": true, + "requires": { + "@babel/helper-plugin-utils": "^7.10.4" + } + }, + "@babel/plugin-syntax-nullish-coalescing-operator": { + "version": "7.8.3", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-nullish-coalescing-operator/-/plugin-syntax-nullish-coalescing-operator-7.8.3.tgz", + "integrity": "sha512-aSff4zPII1u2QD7y+F8oDsz19ew4IGEJg9SVW+bqwpwtfFleiQDMdzA/R+UlWDzfnHFCxxleFT0PMIrR36XLNQ==", + "dev": true, + "requires": { + "@babel/helper-plugin-utils": "^7.8.0" + } + }, + "@babel/plugin-syntax-numeric-separator": { + "version": "7.10.4", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-numeric-separator/-/plugin-syntax-numeric-separator-7.10.4.tgz", + "integrity": "sha512-9H6YdfkcK/uOnY/K7/aA2xpzaAgkQn37yzWUMRK7OaPOqOpGS1+n0H5hxT9AUw9EsSjPW8SVyMJwYRtWs3X3ug==", + "dev": true, + "requires": { + "@babel/helper-plugin-utils": "^7.10.4" + } + }, + "@babel/plugin-syntax-object-rest-spread": { + "version": "7.8.3", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-object-rest-spread/-/plugin-syntax-object-rest-spread-7.8.3.tgz", + "integrity": 
"sha512-XoqMijGZb9y3y2XskN+P1wUGiVwWZ5JmoDRwx5+3GmEplNyVM2s2Dg8ILFQm8rWM48orGy5YpI5Bl8U1y7ydlA==", + "dev": true, + "requires": { + "@babel/helper-plugin-utils": "^7.8.0" + } + }, + "@babel/plugin-syntax-optional-catch-binding": { + "version": "7.8.3", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-optional-catch-binding/-/plugin-syntax-optional-catch-binding-7.8.3.tgz", + "integrity": "sha512-6VPD0Pc1lpTqw0aKoeRTMiB+kWhAoT24PA+ksWSBrFtl5SIRVpZlwN3NNPQjehA2E/91FV3RjLWoVTglWcSV3Q==", + "dev": true, + "requires": { + "@babel/helper-plugin-utils": "^7.8.0" + } + }, + "@babel/plugin-syntax-optional-chaining": { + "version": "7.8.3", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-optional-chaining/-/plugin-syntax-optional-chaining-7.8.3.tgz", + "integrity": "sha512-KoK9ErH1MBlCPxV0VANkXW2/dw4vlbGDrFgz8bmUsBGYkFRcbRwMh6cIJubdPrkxRwuGdtCk0v/wPTKbQgBjkg==", + "dev": true, + "requires": { + "@babel/helper-plugin-utils": "^7.8.0" + } + }, + "@babel/plugin-syntax-private-property-in-object": { + "version": "7.14.0", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-private-property-in-object/-/plugin-syntax-private-property-in-object-7.14.0.tgz", + "integrity": "sha512-bda3xF8wGl5/5btF794utNOL0Jw+9jE5C1sLZcoK7c4uonE/y3iQiyG+KbkF3WBV/paX58VCpjhxLPkdj5Fe4w==", + "dev": true, + "requires": { + "@babel/helper-plugin-utils": "^7.13.0" + } + }, + "@babel/plugin-syntax-top-level-await": { + "version": "7.12.13", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-top-level-await/-/plugin-syntax-top-level-await-7.12.13.tgz", + "integrity": "sha512-A81F9pDwyS7yM//KwbCSDqy3Uj4NMIurtplxphWxoYtNPov7cJsDkAFNNyVlIZ3jwGycVsurZ+LtOA8gZ376iQ==", + "dev": true, + "requires": { + "@babel/helper-plugin-utils": "^7.12.13" + } + }, + "@babel/plugin-transform-arrow-functions": { + "version": "7.13.0", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-arrow-functions/-/plugin-transform-arrow-functions-7.13.0.tgz", + "integrity": 
"sha512-96lgJagobeVmazXFaDrbmCLQxBysKu7U6Do3mLsx27gf5Dk85ezysrs2BZUpXD703U/Su1xTBDxxar2oa4jAGg==", + "dev": true, + "requires": { + "@babel/helper-plugin-utils": "^7.13.0" + } + }, + "@babel/plugin-transform-async-to-generator": { + "version": "7.13.0", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-async-to-generator/-/plugin-transform-async-to-generator-7.13.0.tgz", + "integrity": "sha512-3j6E004Dx0K3eGmhxVJxwwI89CTJrce7lg3UrtFuDAVQ/2+SJ/h/aSFOeE6/n0WB1GsOffsJp6MnPQNQ8nmwhg==", + "dev": true, + "requires": { + "@babel/helper-module-imports": "^7.12.13", + "@babel/helper-plugin-utils": "^7.13.0", + "@babel/helper-remap-async-to-generator": "^7.13.0" + } + }, + "@babel/plugin-transform-block-scoped-functions": { + "version": "7.12.13", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-block-scoped-functions/-/plugin-transform-block-scoped-functions-7.12.13.tgz", + "integrity": "sha512-zNyFqbc3kI/fVpqwfqkg6RvBgFpC4J18aKKMmv7KdQ/1GgREapSJAykLMVNwfRGO3BtHj3YQZl8kxCXPcVMVeg==", + "dev": true, + "requires": { + "@babel/helper-plugin-utils": "^7.12.13" + } + }, + "@babel/plugin-transform-block-scoping": { + "version": "7.14.2", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-block-scoping/-/plugin-transform-block-scoping-7.14.2.tgz", + "integrity": "sha512-neZZcP19NugZZqNwMTH+KoBjx5WyvESPSIOQb4JHpfd+zPfqcH65RMu5xJju5+6q/Y2VzYrleQTr+b6METyyxg==", + "dev": true, + "requires": { + "@babel/helper-plugin-utils": "^7.13.0" + } + }, + "@babel/plugin-transform-classes": { + "version": "7.14.2", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-classes/-/plugin-transform-classes-7.14.2.tgz", + "integrity": "sha512-7oafAVcucHquA/VZCsXv/gmuiHeYd64UJyyTYU+MPfNu0KeNlxw06IeENBO8bJjXVbolu+j1MM5aKQtH1OMCNg==", + "dev": true, + "requires": { + "@babel/helper-annotate-as-pure": "^7.12.13", + "@babel/helper-function-name": "^7.14.2", + "@babel/helper-optimise-call-expression": "^7.12.13", + "@babel/helper-plugin-utils": 
"^7.13.0", + "@babel/helper-replace-supers": "^7.13.12", + "@babel/helper-split-export-declaration": "^7.12.13", + "globals": "^11.1.0" + } + }, + "@babel/plugin-transform-computed-properties": { + "version": "7.13.0", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-computed-properties/-/plugin-transform-computed-properties-7.13.0.tgz", + "integrity": "sha512-RRqTYTeZkZAz8WbieLTvKUEUxZlUTdmL5KGMyZj7FnMfLNKV4+r5549aORG/mgojRmFlQMJDUupwAMiF2Q7OUg==", + "dev": true, + "requires": { + "@babel/helper-plugin-utils": "^7.13.0" + } + }, + "@babel/plugin-transform-destructuring": { + "version": "7.13.17", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-destructuring/-/plugin-transform-destructuring-7.13.17.tgz", + "integrity": "sha512-UAUqiLv+uRLO+xuBKKMEpC+t7YRNVRqBsWWq1yKXbBZBje/t3IXCiSinZhjn/DC3qzBfICeYd2EFGEbHsh5RLA==", + "dev": true, + "requires": { + "@babel/helper-plugin-utils": "^7.13.0" + } + }, + "@babel/plugin-transform-dotall-regex": { + "version": "7.12.13", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-dotall-regex/-/plugin-transform-dotall-regex-7.12.13.tgz", + "integrity": "sha512-foDrozE65ZFdUC2OfgeOCrEPTxdB3yjqxpXh8CH+ipd9CHd4s/iq81kcUpyH8ACGNEPdFqbtzfgzbT/ZGlbDeQ==", + "dev": true, + "requires": { + "@babel/helper-create-regexp-features-plugin": "^7.12.13", + "@babel/helper-plugin-utils": "^7.12.13" + } + }, + "@babel/plugin-transform-duplicate-keys": { + "version": "7.12.13", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-duplicate-keys/-/plugin-transform-duplicate-keys-7.12.13.tgz", + "integrity": "sha512-NfADJiiHdhLBW3pulJlJI2NB0t4cci4WTZ8FtdIuNc2+8pslXdPtRRAEWqUY+m9kNOk2eRYbTAOipAxlrOcwwQ==", + "dev": true, + "requires": { + "@babel/helper-plugin-utils": "^7.12.13" + } + }, + "@babel/plugin-transform-exponentiation-operator": { + "version": "7.12.13", + "resolved": 
"https://registry.npmjs.org/@babel/plugin-transform-exponentiation-operator/-/plugin-transform-exponentiation-operator-7.12.13.tgz", + "integrity": "sha512-fbUelkM1apvqez/yYx1/oICVnGo2KM5s63mhGylrmXUxK/IAXSIf87QIxVfZldWf4QsOafY6vV3bX8aMHSvNrA==", + "dev": true, + "requires": { + "@babel/helper-builder-binary-assignment-operator-visitor": "^7.12.13", + "@babel/helper-plugin-utils": "^7.12.13" + } + }, + "@babel/plugin-transform-for-of": { + "version": "7.13.0", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-for-of/-/plugin-transform-for-of-7.13.0.tgz", + "integrity": "sha512-IHKT00mwUVYE0zzbkDgNRP6SRzvfGCYsOxIRz8KsiaaHCcT9BWIkO+H9QRJseHBLOGBZkHUdHiqj6r0POsdytg==", + "dev": true, + "requires": { + "@babel/helper-plugin-utils": "^7.13.0" + } + }, + "@babel/plugin-transform-function-name": { + "version": "7.12.13", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-function-name/-/plugin-transform-function-name-7.12.13.tgz", + "integrity": "sha512-6K7gZycG0cmIwwF7uMK/ZqeCikCGVBdyP2J5SKNCXO5EOHcqi+z7Jwf8AmyDNcBgxET8DrEtCt/mPKPyAzXyqQ==", + "dev": true, + "requires": { + "@babel/helper-function-name": "^7.12.13", + "@babel/helper-plugin-utils": "^7.12.13" + } + }, + "@babel/plugin-transform-literals": { + "version": "7.12.13", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-literals/-/plugin-transform-literals-7.12.13.tgz", + "integrity": "sha512-FW+WPjSR7hiUxMcKqyNjP05tQ2kmBCdpEpZHY1ARm96tGQCCBvXKnpjILtDplUnJ/eHZ0lALLM+d2lMFSpYJrQ==", + "dev": true, + "requires": { + "@babel/helper-plugin-utils": "^7.12.13" + } + }, + "@babel/plugin-transform-member-expression-literals": { + "version": "7.12.13", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-member-expression-literals/-/plugin-transform-member-expression-literals-7.12.13.tgz", + "integrity": "sha512-kxLkOsg8yir4YeEPHLuO2tXP9R/gTjpuTOjshqSpELUN3ZAg2jfDnKUvzzJxObun38sw3wm4Uu69sX/zA7iRvg==", + "dev": true, + "requires": { + 
"@babel/helper-plugin-utils": "^7.12.13" + } + }, + "@babel/plugin-transform-modules-amd": { + "version": "7.14.2", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-modules-amd/-/plugin-transform-modules-amd-7.14.2.tgz", + "integrity": "sha512-hPC6XBswt8P3G2D1tSV2HzdKvkqOpmbyoy+g73JG0qlF/qx2y3KaMmXb1fLrpmWGLZYA0ojCvaHdzFWjlmV+Pw==", + "dev": true, + "requires": { + "@babel/helper-module-transforms": "^7.14.2", + "@babel/helper-plugin-utils": "^7.13.0", + "babel-plugin-dynamic-import-node": "^2.3.3" + } + }, + "@babel/plugin-transform-modules-commonjs": { + "version": "7.14.0", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-modules-commonjs/-/plugin-transform-modules-commonjs-7.14.0.tgz", + "integrity": "sha512-EX4QePlsTaRZQmw9BsoPeyh5OCtRGIhwfLquhxGp5e32w+dyL8htOcDwamlitmNFK6xBZYlygjdye9dbd9rUlQ==", + "dev": true, + "requires": { + "@babel/helper-module-transforms": "^7.14.0", + "@babel/helper-plugin-utils": "^7.13.0", + "@babel/helper-simple-access": "^7.13.12", + "babel-plugin-dynamic-import-node": "^2.3.3" + } + }, + "@babel/plugin-transform-modules-systemjs": { + "version": "7.13.8", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-modules-systemjs/-/plugin-transform-modules-systemjs-7.13.8.tgz", + "integrity": "sha512-hwqctPYjhM6cWvVIlOIe27jCIBgHCsdH2xCJVAYQm7V5yTMoilbVMi9f6wKg0rpQAOn6ZG4AOyvCqFF/hUh6+A==", + "dev": true, + "requires": { + "@babel/helper-hoist-variables": "^7.13.0", + "@babel/helper-module-transforms": "^7.13.0", + "@babel/helper-plugin-utils": "^7.13.0", + "@babel/helper-validator-identifier": "^7.12.11", + "babel-plugin-dynamic-import-node": "^2.3.3" + } + }, + "@babel/plugin-transform-modules-umd": { + "version": "7.14.0", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-modules-umd/-/plugin-transform-modules-umd-7.14.0.tgz", + "integrity": "sha512-nPZdnWtXXeY7I87UZr9VlsWme3Y0cfFFE41Wbxz4bbaexAjNMInXPFUpRRUJ8NoMm0Cw+zxbqjdPmLhcjfazMw==", + "dev": true, + "requires": { + 
"@babel/helper-module-transforms": "^7.14.0", + "@babel/helper-plugin-utils": "^7.13.0" + } + }, + "@babel/plugin-transform-named-capturing-groups-regex": { + "version": "7.12.13", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-named-capturing-groups-regex/-/plugin-transform-named-capturing-groups-regex-7.12.13.tgz", + "integrity": "sha512-Xsm8P2hr5hAxyYblrfACXpQKdQbx4m2df9/ZZSQ8MAhsadw06+jW7s9zsSw6he+mJZXRlVMyEnVktJo4zjk1WA==", + "dev": true, + "requires": { + "@babel/helper-create-regexp-features-plugin": "^7.12.13" + } + }, + "@babel/plugin-transform-new-target": { + "version": "7.12.13", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-new-target/-/plugin-transform-new-target-7.12.13.tgz", + "integrity": "sha512-/KY2hbLxrG5GTQ9zzZSc3xWiOy379pIETEhbtzwZcw9rvuaVV4Fqy7BYGYOWZnaoXIQYbbJ0ziXLa/sKcGCYEQ==", + "dev": true, + "requires": { + "@babel/helper-plugin-utils": "^7.12.13" + } + }, + "@babel/plugin-transform-object-super": { + "version": "7.12.13", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-object-super/-/plugin-transform-object-super-7.12.13.tgz", + "integrity": "sha512-JzYIcj3XtYspZDV8j9ulnoMPZZnF/Cj0LUxPOjR89BdBVx+zYJI9MdMIlUZjbXDX+6YVeS6I3e8op+qQ3BYBoQ==", + "dev": true, + "requires": { + "@babel/helper-plugin-utils": "^7.12.13", + "@babel/helper-replace-supers": "^7.12.13" + } + }, + "@babel/plugin-transform-parameters": { + "version": "7.14.2", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-parameters/-/plugin-transform-parameters-7.14.2.tgz", + "integrity": "sha512-NxoVmA3APNCC1JdMXkdYXuQS+EMdqy0vIwyDHeKHiJKRxmp1qGSdb0JLEIoPRhkx6H/8Qi3RJ3uqOCYw8giy9A==", + "dev": true, + "requires": { + "@babel/helper-plugin-utils": "^7.13.0" + } + }, + "@babel/plugin-transform-property-literals": { + "version": "7.12.13", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-property-literals/-/plugin-transform-property-literals-7.12.13.tgz", + "integrity": 
"sha512-nqVigwVan+lR+g8Fj8Exl0UQX2kymtjcWfMOYM1vTYEKujeyv2SkMgazf2qNcK7l4SDiKyTA/nHCPqL4e2zo1A==", + "dev": true, + "requires": { + "@babel/helper-plugin-utils": "^7.12.13" + } + }, + "@babel/plugin-transform-regenerator": { + "version": "7.13.15", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-regenerator/-/plugin-transform-regenerator-7.13.15.tgz", + "integrity": "sha512-Bk9cOLSz8DiurcMETZ8E2YtIVJbFCPGW28DJWUakmyVWtQSm6Wsf0p4B4BfEr/eL2Nkhe/CICiUiMOCi1TPhuQ==", + "dev": true, + "requires": { + "regenerator-transform": "^0.14.2" + } + }, + "@babel/plugin-transform-reserved-words": { + "version": "7.12.13", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-reserved-words/-/plugin-transform-reserved-words-7.12.13.tgz", + "integrity": "sha512-xhUPzDXxZN1QfiOy/I5tyye+TRz6lA7z6xaT4CLOjPRMVg1ldRf0LHw0TDBpYL4vG78556WuHdyO9oi5UmzZBg==", + "dev": true, + "requires": { + "@babel/helper-plugin-utils": "^7.12.13" + } + }, + "@babel/plugin-transform-runtime": { + "version": "7.14.3", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-runtime/-/plugin-transform-runtime-7.14.3.tgz", + "integrity": "sha512-t960xbi8wpTFE623ef7sd+UpEC5T6EEguQlTBJDEO05+XwnIWVfuqLw/vdLWY6IdFmtZE+65CZAfByT39zRpkg==", + "dev": true, + "requires": { + "@babel/helper-module-imports": "^7.13.12", + "@babel/helper-plugin-utils": "^7.13.0", + "babel-plugin-polyfill-corejs2": "^0.2.0", + "babel-plugin-polyfill-corejs3": "^0.2.0", + "babel-plugin-polyfill-regenerator": "^0.2.0", + "semver": "^6.3.0" + } + }, + "@babel/plugin-transform-shorthand-properties": { + "version": "7.12.13", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-shorthand-properties/-/plugin-transform-shorthand-properties-7.12.13.tgz", + "integrity": "sha512-xpL49pqPnLtf0tVluuqvzWIgLEhuPpZzvs2yabUHSKRNlN7ScYU7aMlmavOeyXJZKgZKQRBlh8rHbKiJDraTSw==", + "dev": true, + "requires": { + "@babel/helper-plugin-utils": "^7.12.13" + } + }, + "@babel/plugin-transform-spread": { + 
"version": "7.13.0", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-spread/-/plugin-transform-spread-7.13.0.tgz", + "integrity": "sha512-V6vkiXijjzYeFmQTr3dBxPtZYLPcUfY34DebOU27jIl2M/Y8Egm52Hw82CSjjPqd54GTlJs5x+CR7HeNr24ckg==", + "dev": true, + "requires": { + "@babel/helper-plugin-utils": "^7.13.0", + "@babel/helper-skip-transparent-expression-wrappers": "^7.12.1" + } + }, + "@babel/plugin-transform-sticky-regex": { + "version": "7.12.13", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-sticky-regex/-/plugin-transform-sticky-regex-7.12.13.tgz", + "integrity": "sha512-Jc3JSaaWT8+fr7GRvQP02fKDsYk4K/lYwWq38r/UGfaxo89ajud321NH28KRQ7xy1Ybc0VUE5Pz8psjNNDUglg==", + "dev": true, + "requires": { + "@babel/helper-plugin-utils": "^7.12.13" + } + }, + "@babel/plugin-transform-template-literals": { + "version": "7.13.0", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-template-literals/-/plugin-transform-template-literals-7.13.0.tgz", + "integrity": "sha512-d67umW6nlfmr1iehCcBv69eSUSySk1EsIS8aTDX4Xo9qajAh6mYtcl4kJrBkGXuxZPEgVr7RVfAvNW6YQkd4Mw==", + "dev": true, + "requires": { + "@babel/helper-plugin-utils": "^7.13.0" + } + }, + "@babel/plugin-transform-typeof-symbol": { + "version": "7.12.13", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-typeof-symbol/-/plugin-transform-typeof-symbol-7.12.13.tgz", + "integrity": "sha512-eKv/LmUJpMnu4npgfvs3LiHhJua5fo/CysENxa45YCQXZwKnGCQKAg87bvoqSW1fFT+HA32l03Qxsm8ouTY3ZQ==", + "dev": true, + "requires": { + "@babel/helper-plugin-utils": "^7.12.13" + } + }, + "@babel/plugin-transform-unicode-escapes": { + "version": "7.12.13", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-unicode-escapes/-/plugin-transform-unicode-escapes-7.12.13.tgz", + "integrity": "sha512-0bHEkdwJ/sN/ikBHfSmOXPypN/beiGqjo+o4/5K+vxEFNPRPdImhviPakMKG4x96l85emoa0Z6cDflsdBusZbw==", + "dev": true, + "requires": { + "@babel/helper-plugin-utils": "^7.12.13" + } + }, + 
"@babel/plugin-transform-unicode-regex": { + "version": "7.12.13", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-unicode-regex/-/plugin-transform-unicode-regex-7.12.13.tgz", + "integrity": "sha512-mDRzSNY7/zopwisPZ5kM9XKCfhchqIYwAKRERtEnhYscZB79VRekuRSoYbN0+KVe3y8+q1h6A4svXtP7N+UoCA==", + "dev": true, + "requires": { + "@babel/helper-create-regexp-features-plugin": "^7.12.13", + "@babel/helper-plugin-utils": "^7.12.13" + } + }, + "@babel/preset-env": { + "version": "7.14.2", + "resolved": "https://registry.npmjs.org/@babel/preset-env/-/preset-env-7.14.2.tgz", + "integrity": "sha512-7dD7lVT8GMrE73v4lvDEb85cgcQhdES91BSD7jS/xjC6QY8PnRhux35ac+GCpbiRhp8crexBvZZqnaL6VrY8TQ==", + "dev": true, + "requires": { + "@babel/compat-data": "^7.14.0", + "@babel/helper-compilation-targets": "^7.13.16", + "@babel/helper-plugin-utils": "^7.13.0", + "@babel/helper-validator-option": "^7.12.17", + "@babel/plugin-bugfix-v8-spread-parameters-in-optional-chaining": "^7.13.12", + "@babel/plugin-proposal-async-generator-functions": "^7.14.2", + "@babel/plugin-proposal-class-properties": "^7.13.0", + "@babel/plugin-proposal-class-static-block": "^7.13.11", + "@babel/plugin-proposal-dynamic-import": "^7.14.2", + "@babel/plugin-proposal-export-namespace-from": "^7.14.2", + "@babel/plugin-proposal-json-strings": "^7.14.2", + "@babel/plugin-proposal-logical-assignment-operators": "^7.14.2", + "@babel/plugin-proposal-nullish-coalescing-operator": "^7.14.2", + "@babel/plugin-proposal-numeric-separator": "^7.14.2", + "@babel/plugin-proposal-object-rest-spread": "^7.14.2", + "@babel/plugin-proposal-optional-catch-binding": "^7.14.2", + "@babel/plugin-proposal-optional-chaining": "^7.14.2", + "@babel/plugin-proposal-private-methods": "^7.13.0", + "@babel/plugin-proposal-private-property-in-object": "^7.14.0", + "@babel/plugin-proposal-unicode-property-regex": "^7.12.13", + "@babel/plugin-syntax-async-generators": "^7.8.4", + "@babel/plugin-syntax-class-properties": "^7.12.13", + 
"@babel/plugin-syntax-class-static-block": "^7.12.13", + "@babel/plugin-syntax-dynamic-import": "^7.8.3", + "@babel/plugin-syntax-export-namespace-from": "^7.8.3", + "@babel/plugin-syntax-json-strings": "^7.8.3", + "@babel/plugin-syntax-logical-assignment-operators": "^7.10.4", + "@babel/plugin-syntax-nullish-coalescing-operator": "^7.8.3", + "@babel/plugin-syntax-numeric-separator": "^7.10.4", + "@babel/plugin-syntax-object-rest-spread": "^7.8.3", + "@babel/plugin-syntax-optional-catch-binding": "^7.8.3", + "@babel/plugin-syntax-optional-chaining": "^7.8.3", + "@babel/plugin-syntax-private-property-in-object": "^7.14.0", + "@babel/plugin-syntax-top-level-await": "^7.12.13", + "@babel/plugin-transform-arrow-functions": "^7.13.0", + "@babel/plugin-transform-async-to-generator": "^7.13.0", + "@babel/plugin-transform-block-scoped-functions": "^7.12.13", + "@babel/plugin-transform-block-scoping": "^7.14.2", + "@babel/plugin-transform-classes": "^7.14.2", + "@babel/plugin-transform-computed-properties": "^7.13.0", + "@babel/plugin-transform-destructuring": "^7.13.17", + "@babel/plugin-transform-dotall-regex": "^7.12.13", + "@babel/plugin-transform-duplicate-keys": "^7.12.13", + "@babel/plugin-transform-exponentiation-operator": "^7.12.13", + "@babel/plugin-transform-for-of": "^7.13.0", + "@babel/plugin-transform-function-name": "^7.12.13", + "@babel/plugin-transform-literals": "^7.12.13", + "@babel/plugin-transform-member-expression-literals": "^7.12.13", + "@babel/plugin-transform-modules-amd": "^7.14.2", + "@babel/plugin-transform-modules-commonjs": "^7.14.0", + "@babel/plugin-transform-modules-systemjs": "^7.13.8", + "@babel/plugin-transform-modules-umd": "^7.14.0", + "@babel/plugin-transform-named-capturing-groups-regex": "^7.12.13", + "@babel/plugin-transform-new-target": "^7.12.13", + "@babel/plugin-transform-object-super": "^7.12.13", + "@babel/plugin-transform-parameters": "^7.14.2", + "@babel/plugin-transform-property-literals": "^7.12.13", + 
"@babel/plugin-transform-regenerator": "^7.13.15", + "@babel/plugin-transform-reserved-words": "^7.12.13", + "@babel/plugin-transform-shorthand-properties": "^7.12.13", + "@babel/plugin-transform-spread": "^7.13.0", + "@babel/plugin-transform-sticky-regex": "^7.12.13", + "@babel/plugin-transform-template-literals": "^7.13.0", + "@babel/plugin-transform-typeof-symbol": "^7.12.13", + "@babel/plugin-transform-unicode-escapes": "^7.12.13", + "@babel/plugin-transform-unicode-regex": "^7.12.13", + "@babel/preset-modules": "^0.1.4", + "@babel/types": "^7.14.2", + "babel-plugin-polyfill-corejs2": "^0.2.0", + "babel-plugin-polyfill-corejs3": "^0.2.0", + "babel-plugin-polyfill-regenerator": "^0.2.0", + "core-js-compat": "^3.9.0", + "semver": "^6.3.0" + } + }, + "@babel/preset-modules": { + "version": "0.1.4", + "resolved": "https://registry.npmjs.org/@babel/preset-modules/-/preset-modules-0.1.4.tgz", + "integrity": "sha512-J36NhwnfdzpmH41M1DrnkkgAqhZaqr/NBdPfQ677mLzlaXo+oDiv1deyCDtgAhz8p328otdob0Du7+xgHGZbKg==", + "dev": true, + "requires": { + "@babel/helper-plugin-utils": "^7.0.0", + "@babel/plugin-proposal-unicode-property-regex": "^7.4.4", + "@babel/plugin-transform-dotall-regex": "^7.4.4", + "@babel/types": "^7.4.4", + "esutils": "^2.0.2" + } + }, + "@babel/runtime": { + "version": "7.14.0", + "resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.14.0.tgz", + "integrity": "sha512-JELkvo/DlpNdJ7dlyw/eY7E0suy5i5GQH+Vlxaq1nsNJ+H7f4Vtv3jMeCEgRhZZQFXTjldYfQgv2qmM6M1v5wA==", + "dev": true, + "requires": { + "regenerator-runtime": "^0.13.4" + } + }, + "@babel/template": { + "version": "7.12.13", + "resolved": "https://registry.npmjs.org/@babel/template/-/template-7.12.13.tgz", + "integrity": "sha512-/7xxiGA57xMo/P2GVvdEumr8ONhFOhfgq2ihK3h1e6THqzTAkHbkXgB0xI9yeTfIUoH3+oAeHhqm/I43OTbbjA==", + "dev": true, + "requires": { + "@babel/code-frame": "^7.12.13", + "@babel/parser": "^7.12.13", + "@babel/types": "^7.12.13" + } + }, + "@babel/traverse": { + "version": 
"7.14.2", + "resolved": "https://registry.npmjs.org/@babel/traverse/-/traverse-7.14.2.tgz", + "integrity": "sha512-TsdRgvBFHMyHOOzcP9S6QU0QQtjxlRpEYOy3mcCO5RgmC305ki42aSAmfZEMSSYBla2oZ9BMqYlncBaKmD/7iA==", + "dev": true, + "requires": { + "@babel/code-frame": "^7.12.13", + "@babel/generator": "^7.14.2", + "@babel/helper-function-name": "^7.14.2", + "@babel/helper-split-export-declaration": "^7.12.13", + "@babel/parser": "^7.14.2", + "@babel/types": "^7.14.2", + "debug": "^4.1.0", + "globals": "^11.1.0" + } + }, + "@babel/types": { + "version": "7.14.2", + "resolved": "https://registry.npmjs.org/@babel/types/-/types-7.14.2.tgz", + "integrity": "sha512-SdjAG/3DikRHpUOjxZgnkbR11xUlyDMUFJdvnIgZEE16mqmY0BINMmc4//JMJglEmn6i7sq6p+mGrFWyZ98EEw==", + "dev": true, + "requires": { + "@babel/helper-validator-identifier": "^7.14.0", + "to-fast-properties": "^2.0.0" + } + }, + "@braintree/sanitize-url": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/@braintree/sanitize-url/-/sanitize-url-6.0.0.tgz", + "integrity": "sha512-mgmE7XBYY/21erpzhexk4Cj1cyTQ9LzvnTxtzM17BJ7ERMNE6W72mQRo0I1Ud8eFJ+RVVIcBNhLFZ3GX4XFz5w==", + "dev": true + }, + "@fullhuman/postcss-purgecss": { + "version": "2.3.0", + "resolved": "https://registry.npmjs.org/@fullhuman/postcss-purgecss/-/postcss-purgecss-2.3.0.tgz", + "integrity": "sha512-qnKm5dIOyPGJ70kPZ5jiz0I9foVOic0j+cOzNDoo8KoCf6HjicIZ99UfO2OmE7vCYSKAAepEwJtNzpiiZAh9xw==", + "dev": true, + "requires": { + "postcss": "7.0.32", + "purgecss": "^2.3.0" + }, + "dependencies": { + "chalk": { + "version": "2.4.2", + "resolved": "https://registry.npmjs.org/chalk/-/chalk-2.4.2.tgz", + "integrity": "sha512-Mti+f9lpJNcwF4tWV8/OrTTtF1gZi+f8FqlyAdouralcFWFQWF2+NgCHShjkCb+IFBLq9buZwE1xckQU4peSuQ==", + "dev": true, + "requires": { + "ansi-styles": "^3.2.1", + "escape-string-regexp": "^1.0.5", + "supports-color": "^5.3.0" + }, + "dependencies": { + "supports-color": { + "version": "5.5.0", + "resolved": 
"https://registry.npmjs.org/supports-color/-/supports-color-5.5.0.tgz", + "integrity": "sha512-QjVjwdXIt408MIiAqCX4oUKsgU2EqAGzs2Ppkm4aQYbjm+ZEWEcW4SfFNTr4uMNZma0ey4f5lgLrkB0aX0QMow==", + "dev": true, + "requires": { + "has-flag": "^3.0.0" + } + } + } + }, + "postcss": { + "version": "7.0.32", + "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz", + "integrity": "sha512-03eXong5NLnNCD05xscnGKGDZ98CyzoqPSMjOe6SuoQY7Z2hIj0Ld1g/O/UQRuOle2aRtiIRDg9tDcTGAkLfKw==", + "dev": true, + "requires": { + "chalk": "^2.4.2", + "source-map": "^0.6.1", + "supports-color": "^6.1.0" + } + } + } + }, + "@hapi/hoek": { + "version": "9.2.0", + "resolved": "https://registry.npmjs.org/@hapi/hoek/-/hoek-9.2.0.tgz", + "integrity": "sha512-sqKVVVOe5ivCaXDWivIJYVSaEgdQK9ul7a4Kity5Iw7u9+wBAPbX1RMSnLLmp7O4Vzj0WOWwMAJsTL00xwaNug==", + "dev": true + }, + "@hapi/topo": { + "version": "5.0.0", + "resolved": "https://registry.npmjs.org/@hapi/topo/-/topo-5.0.0.tgz", + "integrity": "sha512-tFJlT47db0kMqVm3H4nQYgn6Pwg10GTZHb1pwmSiv1K4ks6drQOtfEF5ZnPjkvC+y4/bUPHK+bc87QvLcL+WMw==", + "dev": true, + "requires": { + "@hapi/hoek": "^9.0.0" + } + }, + "@limdongjin/vuepress-plugin-simple-seo": { + "version": "1.0.4-alpha.5", + "resolved": "https://registry.npmjs.org/@limdongjin/vuepress-plugin-simple-seo/-/vuepress-plugin-simple-seo-1.0.4-alpha.5.tgz", + "integrity": "sha512-p0uyWSsQ154D7PIIynTq8DHJWO6og+cvJ2YZ6BnOlX6a9OhG/OOpo6oOoN1PPPo5VR3hcIroChxo0HGiSzIgwA==", + "dev": true + }, + "@mrmlnc/readdir-enhanced": { + "version": "2.2.1", + "resolved": "https://registry.npmjs.org/@mrmlnc/readdir-enhanced/-/readdir-enhanced-2.2.1.tgz", + "integrity": "sha512-bPHp6Ji8b41szTOcaP63VlnbbO5Ny6dwAATtY6JTjh5N2OLrb5Qk/Th5cRkRQhkWCt+EJsYrNB0MiL+Gpn6e3g==", + "dev": true, + "requires": { + "call-me-maybe": "^1.0.1", + "glob-to-regexp": "^0.3.0" + } + }, + "@nodelib/fs.scandir": { + "version": "2.1.4", + "resolved": "https://registry.npmjs.org/@nodelib/fs.scandir/-/fs.scandir-2.1.4.tgz", + "integrity": 
"sha512-33g3pMJk3bg5nXbL/+CY6I2eJDzZAni49PfJnL5fghPTggPvBd/pFNSgJsdAgWptuFu7qq/ERvOYFlhvsLTCKA==", + "dev": true, + "requires": { + "@nodelib/fs.stat": "2.0.4", + "run-parallel": "^1.1.9" + }, + "dependencies": { + "@nodelib/fs.stat": { + "version": "2.0.4", + "resolved": "https://registry.npmjs.org/@nodelib/fs.stat/-/fs.stat-2.0.4.tgz", + "integrity": "sha512-IYlHJA0clt2+Vg7bccq+TzRdJvv19c2INqBSsoOLp1je7xjtr7J26+WXR72MCdvU9q1qTzIWDfhMf+DRvQJK4Q==", + "dev": true + } + } + }, + "@nodelib/fs.stat": { + "version": "1.1.3", + "resolved": "https://registry.npmjs.org/@nodelib/fs.stat/-/fs.stat-1.1.3.tgz", + "integrity": "sha512-shAmDyaQC4H92APFoIaVDHCx5bStIocgvbwQyxPRrbUY20V1EYTbSDchWbuwlMG3V17cprZhA6+78JfB+3DTPw==", + "dev": true + }, + "@nodelib/fs.walk": { + "version": "1.2.6", + "resolved": "https://registry.npmjs.org/@nodelib/fs.walk/-/fs.walk-1.2.6.tgz", + "integrity": "sha512-8Broas6vTtW4GIXTAHDoE32hnN2M5ykgCpWGbuXHQ15vEMqr23pB76e/GZcYsZCHALv50ktd24qhEyKr6wBtow==", + "dev": true, + "requires": { + "@nodelib/fs.scandir": "2.1.4", + "fastq": "^1.6.0" + } + }, + "@npmcli/git": { + "version": "2.0.9", + "resolved": "https://registry.npmjs.org/@npmcli/git/-/git-2.0.9.tgz", + "integrity": "sha512-hTMbMryvOqGLwnmMBKs5usbPsJtyEsMsgXwJbmNrsEuQQh1LAIMDU77IoOrwkCg+NgQWl+ySlarJASwM3SutCA==", + "dev": true, + "requires": { + "@npmcli/promise-spawn": "^1.3.2", + "lru-cache": "^6.0.0", + "mkdirp": "^1.0.4", + "npm-pick-manifest": "^6.1.1", + "promise-inflight": "^1.0.1", + "promise-retry": "^2.0.1", + "semver": "^7.3.5", + "which": "^2.0.2" + }, + "dependencies": { + "lru-cache": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/lru-cache/-/lru-cache-6.0.0.tgz", + "integrity": "sha512-Jo6dJ04CmSjuznwJSS3pUeWmd/H0ffTlkXXgwZi+eq1UCmqQwCh+eLsYOYCwY991i2Fah4h1BEMCx4qThGbsiA==", + "dev": true, + "requires": { + "yallist": "^4.0.0" + } + }, + "mkdirp": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/mkdirp/-/mkdirp-1.0.4.tgz", + "integrity": 
"sha512-vVqVZQyf3WLx2Shd0qJ9xuvqgAyKPLAiqITEtqW0oIUjzo3PePDd6fW9iFz30ef7Ysp/oiWqbhszeGWW2T6Gzw==", + "dev": true + }, + "semver": { + "version": "7.3.5", + "resolved": "https://registry.npmjs.org/semver/-/semver-7.3.5.tgz", + "integrity": "sha512-PoeGJYh8HK4BTO/a9Tf6ZG3veo/A7ZVsYrSA6J8ny9nb3B1VrpkuN+z9OE5wfE5p6H4LchYZsegiQgbJD94ZFQ==", + "dev": true, + "requires": { + "lru-cache": "^6.0.0" + } + }, + "yallist": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/yallist/-/yallist-4.0.0.tgz", + "integrity": "sha512-3wdGidZyq5PB084XLES5TpOSRA3wjXAlIWMhum2kRcv/41Sn2emQ0dycQW4uZXLejwKvg6EsvbdlVL+FYEct7A==", + "dev": true + } + } + }, + "@npmcli/installed-package-contents": { + "version": "1.0.7", + "resolved": "https://registry.npmjs.org/@npmcli/installed-package-contents/-/installed-package-contents-1.0.7.tgz", + "integrity": "sha512-9rufe0wnJusCQoLpV9ZPKIVP55itrM5BxOXs10DmdbRfgWtHy1LDyskbwRnBghuB0PrF7pNPOqREVtpz4HqzKw==", + "dev": true, + "requires": { + "npm-bundled": "^1.1.1", + "npm-normalize-package-bin": "^1.0.1" + } + }, + "@npmcli/move-file": { + "version": "1.1.2", + "resolved": "https://registry.npmjs.org/@npmcli/move-file/-/move-file-1.1.2.tgz", + "integrity": "sha512-1SUf/Cg2GzGDyaf15aR9St9TWlb+XvbZXWpDx8YKs7MLzMH/BCeopv+y9vzrzgkfykCGuWOlSu3mZhj2+FQcrg==", + "dev": true, + "requires": { + "mkdirp": "^1.0.4", + "rimraf": "^3.0.2" + }, + "dependencies": { + "mkdirp": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/mkdirp/-/mkdirp-1.0.4.tgz", + "integrity": "sha512-vVqVZQyf3WLx2Shd0qJ9xuvqgAyKPLAiqITEtqW0oIUjzo3PePDd6fW9iFz30ef7Ysp/oiWqbhszeGWW2T6Gzw==", + "dev": true + }, + "rimraf": { + "version": "3.0.2", + "resolved": "https://registry.npmjs.org/rimraf/-/rimraf-3.0.2.tgz", + "integrity": "sha512-JZkJMZkAGFFPP2YqXZXPbMlMBgsxzE8ILs4lMIX/2o0L9UBw9O/Y3o6wFw/i9YLapcUJWwqbi3kdxIPdC62TIA==", + "dev": true, + "requires": { + "glob": "^7.1.3" + } + } + } + }, + "@npmcli/node-gyp": { + "version": "1.0.2", + "resolved": 
"https://registry.npmjs.org/@npmcli/node-gyp/-/node-gyp-1.0.2.tgz", + "integrity": "sha512-yrJUe6reVMpktcvagumoqD9r08fH1iRo01gn1u0zoCApa9lnZGEigVKUd2hzsCId4gdtkZZIVscLhNxMECKgRg==", + "dev": true + }, + "@npmcli/promise-spawn": { + "version": "1.3.2", + "resolved": "https://registry.npmjs.org/@npmcli/promise-spawn/-/promise-spawn-1.3.2.tgz", + "integrity": "sha512-QyAGYo/Fbj4MXeGdJcFzZ+FkDkomfRBrPM+9QYJSg+PxgAUL+LU3FneQk37rKR2/zjqkCV1BLHccX98wRXG3Sg==", + "dev": true, + "requires": { + "infer-owner": "^1.0.4" + } + }, + "@npmcli/run-script": { + "version": "1.8.5", + "resolved": "https://registry.npmjs.org/@npmcli/run-script/-/run-script-1.8.5.tgz", + "integrity": "sha512-NQspusBCpTjNwNRFMtz2C5MxoxyzlbuJ4YEhxAKrIonTiirKDtatsZictx9RgamQIx6+QuHMNmPl0wQdoESs9A==", + "dev": true, + "requires": { + "@npmcli/node-gyp": "^1.0.2", + "@npmcli/promise-spawn": "^1.3.2", + "infer-owner": "^1.0.4", + "node-gyp": "^7.1.0", + "read-package-json-fast": "^2.0.1" + } + }, + "@sideway/address": { + "version": "4.1.2", + "resolved": "https://registry.npmjs.org/@sideway/address/-/address-4.1.2.tgz", + "integrity": "sha512-idTz8ibqWFrPU8kMirL0CoPH/A29XOzzAzpyN3zQ4kAWnzmNfFmRaoMNN6VI8ske5M73HZyhIaW4OuSFIdM4oA==", + "dev": true, + "requires": { + "@hapi/hoek": "^9.0.0" + } + }, + "@sideway/formula": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/@sideway/formula/-/formula-3.0.0.tgz", + "integrity": "sha512-vHe7wZ4NOXVfkoRb8T5otiENVlT7a3IAiw7H5M2+GO+9CDgcVUUsX1zalAztCmwyOr2RUTGJdgB+ZvSVqmdHmg==", + "dev": true + }, + "@sideway/pinpoint": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/@sideway/pinpoint/-/pinpoint-2.0.0.tgz", + "integrity": "sha512-RNiOoTPkptFtSVzQevY/yWtZwf/RxyVnPy/OcA9HBM3MlGDnBEYL5B41H0MTn0Uec8Hi+2qUtTfG2WWZBmMejQ==", + "dev": true + }, + "@sindresorhus/is": { + "version": "0.14.0", + "resolved": "https://registry.npmjs.org/@sindresorhus/is/-/is-0.14.0.tgz", + "integrity": 
"sha512-9NET910DNaIPngYnLLPeg+Ogzqsi9uM4mSboU5y6p8S5DzMTVEsJZrawi+BoDNUVBa2DhJqQYUFvMDfgU062LQ==", + "dev": true + }, + "@szmarczak/http-timer": { + "version": "1.1.2", + "resolved": "https://registry.npmjs.org/@szmarczak/http-timer/-/http-timer-1.1.2.tgz", + "integrity": "sha512-XIB2XbzHTN6ieIjfIMV9hlVcfPU26s2vafYWQcZHWXHOxiaRZYEDKEwdl129Zyg50+foYV2jCgtrqSA6qNuNSA==", + "dev": true, + "requires": { + "defer-to-connect": "^1.0.1" + } + }, + "@tootallnate/once": { + "version": "1.1.2", + "resolved": "https://registry.npmjs.org/@tootallnate/once/-/once-1.1.2.tgz", + "integrity": "sha512-RbzJvlNzmRq5c3O09UipeuXno4tA1FE6ikOjxZK0tuxVv3412l64l5t1W5pj4+rJq9vpkm/kwiR07aZXnsKPxw==", + "dev": true + }, + "@types/glob": { + "version": "7.1.3", + "resolved": "https://registry.npmjs.org/@types/glob/-/glob-7.1.3.tgz", + "integrity": "sha512-SEYeGAIQIQX8NN6LDKprLjbrd5dARM5EXsd8GI/A5l0apYI1fGMWgPHSe4ZKL4eozlAyI+doUE9XbYS4xCkQ1w==", + "dev": true, + "requires": { + "@types/minimatch": "*", + "@types/node": "*" + } + }, + "@types/json-schema": { + "version": "7.0.7", + "resolved": "https://registry.npmjs.org/@types/json-schema/-/json-schema-7.0.7.tgz", + "integrity": "sha512-cxWFQVseBm6O9Gbw1IWb8r6OS4OhSt3hPZLkFApLjM8TEXROBuQGLAH2i2gZpcXdLBIrpXuTDhH7Vbm1iXmNGA==", + "dev": true + }, + "@types/minimatch": { + "version": "3.0.4", + "resolved": "https://registry.npmjs.org/@types/minimatch/-/minimatch-3.0.4.tgz", + "integrity": "sha512-1z8k4wzFnNjVK/tlxvrWuK5WMt6mydWWP7+zvH5eFep4oj+UkrfiJTRtjCeBXNpwaA/FYqqtb4/QS4ianFpIRA==", + "dev": true + }, + "@types/node": { + "version": "15.6.1", + "resolved": "https://registry.npmjs.org/@types/node/-/node-15.6.1.tgz", + "integrity": "sha512-7EIraBEyRHEe7CH+Fm1XvgqU6uwZN8Q7jppJGcqjROMT29qhAuuOxYB1uEY5UMYQKEmA5D+5tBnhdaPXSsLONA==", + "dev": true + }, + "@types/q": { + "version": "1.5.4", + "resolved": "https://registry.npmjs.org/@types/q/-/q-1.5.4.tgz", + "integrity": 
"sha512-1HcDas8SEj4z1Wc696tH56G8OlRaH/sqZOynNNB+HF0WOeXPaxTtbYzJY2oEfiUxjSKjhCKr+MvR7dCHcEelug==", + "dev": true + }, + "@vssue/api-github-v3": { + "version": "1.4.7", + "resolved": "https://registry.npmjs.org/@vssue/api-github-v3/-/api-github-v3-1.4.7.tgz", + "integrity": "sha512-ukhOnzGQarmj606ZiYN9iCMyr3EJS3YEPdZXX+zBLVsuzjTL5ffLbbpXsEtPBh2XNt3Ig3XdzCvA9bVXhwy4mQ==", + "dev": true, + "requires": { + "@vssue/utils": "^1.4.7", + "axios": "^0.21.1" + } + }, + "@vssue/utils": { + "version": "1.4.7", + "resolved": "https://registry.npmjs.org/@vssue/utils/-/utils-1.4.7.tgz", + "integrity": "sha512-e94karP4szmSNT2L4bgIT+VGToBSY3bdlgmGcomcD2qCXTWDK4krSOYm8ES+BhHTcmCvzQYU/xenHR4tzrythA==", + "dev": true, + "requires": { + "date-fns": "^1.29.0", + "qs": "^6.6.0" + } + }, + "@vssue/vuepress-plugin-vssue": { + "version": "1.4.8", + "resolved": "https://registry.npmjs.org/@vssue/vuepress-plugin-vssue/-/vuepress-plugin-vssue-1.4.8.tgz", + "integrity": "sha512-0QzegHl/Rx4/XgXswThIJi4Yk+b6AIaM450jX6p4RbOM6yOTzEKLTMduUo54Rvhq/NHNusu4Yy/w1iY8NTqBEg==", + "dev": true, + "requires": { + "vssue": "^1.4.8" + } + }, + "@vue/babel-helper-vue-jsx-merge-props": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/@vue/babel-helper-vue-jsx-merge-props/-/babel-helper-vue-jsx-merge-props-1.2.1.tgz", + "integrity": "sha512-QOi5OW45e2R20VygMSNhyQHvpdUwQZqGPc748JLGCYEy+yp8fNFNdbNIGAgZmi9e+2JHPd6i6idRuqivyicIkA==", + "dev": true + }, + "@vue/babel-helper-vue-transform-on": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/@vue/babel-helper-vue-transform-on/-/babel-helper-vue-transform-on-1.0.2.tgz", + "integrity": "sha512-hz4R8tS5jMn8lDq6iD+yWL6XNB699pGIVLk7WSJnn1dbpjaazsjZQkieJoRX6gW5zpYSCFqQ7jUquPNY65tQYA==", + "dev": true + }, + "@vue/babel-plugin-jsx": { + "version": "1.0.6", + "resolved": "https://registry.npmjs.org/@vue/babel-plugin-jsx/-/babel-plugin-jsx-1.0.6.tgz", + "integrity": 
"sha512-RzYsvBhzKUmY2YG6LoV+W5PnlnkInq0thh1AzCmewwctAgGN6e9UFon6ZrQQV1CO5G5PeME7MqpB+/vvGg0h4g==", + "dev": true, + "requires": { + "@babel/helper-module-imports": "^7.0.0", + "@babel/plugin-syntax-jsx": "^7.0.0", + "@babel/template": "^7.0.0", + "@babel/traverse": "^7.0.0", + "@babel/types": "^7.0.0", + "@vue/babel-helper-vue-transform-on": "^1.0.2", + "camelcase": "^6.0.0", + "html-tags": "^3.1.0", + "svg-tags": "^1.0.0" + } + }, + "@vue/babel-plugin-transform-vue-jsx": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/@vue/babel-plugin-transform-vue-jsx/-/babel-plugin-transform-vue-jsx-1.2.1.tgz", + "integrity": "sha512-HJuqwACYehQwh1fNT8f4kyzqlNMpBuUK4rSiSES5D4QsYncv5fxFsLyrxFPG2ksO7t5WP+Vgix6tt6yKClwPzA==", + "dev": true, + "requires": { + "@babel/helper-module-imports": "^7.0.0", + "@babel/plugin-syntax-jsx": "^7.2.0", + "@vue/babel-helper-vue-jsx-merge-props": "^1.2.1", + "html-tags": "^2.0.0", + "lodash.kebabcase": "^4.1.1", + "svg-tags": "^1.0.0" + }, + "dependencies": { + "html-tags": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/html-tags/-/html-tags-2.0.0.tgz", + "integrity": "sha1-ELMKOGCF9Dzt41PMj6fLDe7qZos=", + "dev": true + } + } + }, + "@vue/babel-preset-app": { + "version": "4.5.13", + "resolved": "https://registry.npmjs.org/@vue/babel-preset-app/-/babel-preset-app-4.5.13.tgz", + "integrity": "sha512-pM7CR3yXB6L8Gfn6EmX7FLNE3+V/15I3o33GkSNsWvgsMp6HVGXKkXgojrcfUUauyL1LZOdvTmu4enU2RePGHw==", + "dev": true, + "requires": { + "@babel/core": "^7.11.0", + "@babel/helper-compilation-targets": "^7.9.6", + "@babel/helper-module-imports": "^7.8.3", + "@babel/plugin-proposal-class-properties": "^7.8.3", + "@babel/plugin-proposal-decorators": "^7.8.3", + "@babel/plugin-syntax-dynamic-import": "^7.8.3", + "@babel/plugin-syntax-jsx": "^7.8.3", + "@babel/plugin-transform-runtime": "^7.11.0", + "@babel/preset-env": "^7.11.0", + "@babel/runtime": "^7.11.0", + "@vue/babel-plugin-jsx": "^1.0.3", + "@vue/babel-preset-jsx": 
"^1.2.4", + "babel-plugin-dynamic-import-node": "^2.3.3", + "core-js": "^3.6.5", + "core-js-compat": "^3.6.5", + "semver": "^6.1.0" + } + }, + "@vue/babel-preset-jsx": { + "version": "1.2.4", + "resolved": "https://registry.npmjs.org/@vue/babel-preset-jsx/-/babel-preset-jsx-1.2.4.tgz", + "integrity": "sha512-oRVnmN2a77bYDJzeGSt92AuHXbkIxbf/XXSE3klINnh9AXBmVS1DGa1f0d+dDYpLfsAKElMnqKTQfKn7obcL4w==", + "dev": true, + "requires": { + "@vue/babel-helper-vue-jsx-merge-props": "^1.2.1", + "@vue/babel-plugin-transform-vue-jsx": "^1.2.1", + "@vue/babel-sugar-composition-api-inject-h": "^1.2.1", + "@vue/babel-sugar-composition-api-render-instance": "^1.2.4", + "@vue/babel-sugar-functional-vue": "^1.2.2", + "@vue/babel-sugar-inject-h": "^1.2.2", + "@vue/babel-sugar-v-model": "^1.2.3", + "@vue/babel-sugar-v-on": "^1.2.3" + } + }, + "@vue/babel-sugar-composition-api-inject-h": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/@vue/babel-sugar-composition-api-inject-h/-/babel-sugar-composition-api-inject-h-1.2.1.tgz", + "integrity": "sha512-4B3L5Z2G+7s+9Bwbf+zPIifkFNcKth7fQwekVbnOA3cr3Pq71q71goWr97sk4/yyzH8phfe5ODVzEjX7HU7ItQ==", + "dev": true, + "requires": { + "@babel/plugin-syntax-jsx": "^7.2.0" + } + }, + "@vue/babel-sugar-composition-api-render-instance": { + "version": "1.2.4", + "resolved": "https://registry.npmjs.org/@vue/babel-sugar-composition-api-render-instance/-/babel-sugar-composition-api-render-instance-1.2.4.tgz", + "integrity": "sha512-joha4PZznQMsxQYXtR3MnTgCASC9u3zt9KfBxIeuI5g2gscpTsSKRDzWQt4aqNIpx6cv8On7/m6zmmovlNsG7Q==", + "dev": true, + "requires": { + "@babel/plugin-syntax-jsx": "^7.2.0" + } + }, + "@vue/babel-sugar-functional-vue": { + "version": "1.2.2", + "resolved": "https://registry.npmjs.org/@vue/babel-sugar-functional-vue/-/babel-sugar-functional-vue-1.2.2.tgz", + "integrity": "sha512-JvbgGn1bjCLByIAU1VOoepHQ1vFsroSA/QkzdiSs657V79q6OwEWLCQtQnEXD/rLTA8rRit4rMOhFpbjRFm82w==", + "dev": true, + "requires": { + "@babel/plugin-syntax-jsx": 
"^7.2.0" + } + }, + "@vue/babel-sugar-inject-h": { + "version": "1.2.2", + "resolved": "https://registry.npmjs.org/@vue/babel-sugar-inject-h/-/babel-sugar-inject-h-1.2.2.tgz", + "integrity": "sha512-y8vTo00oRkzQTgufeotjCLPAvlhnpSkcHFEp60+LJUwygGcd5Chrpn5480AQp/thrxVm8m2ifAk0LyFel9oCnw==", + "dev": true, + "requires": { + "@babel/plugin-syntax-jsx": "^7.2.0" + } + }, + "@vue/babel-sugar-v-model": { + "version": "1.2.3", + "resolved": "https://registry.npmjs.org/@vue/babel-sugar-v-model/-/babel-sugar-v-model-1.2.3.tgz", + "integrity": "sha512-A2jxx87mySr/ulAsSSyYE8un6SIH0NWHiLaCWpodPCVOlQVODCaSpiR4+IMsmBr73haG+oeCuSvMOM+ttWUqRQ==", + "dev": true, + "requires": { + "@babel/plugin-syntax-jsx": "^7.2.0", + "@vue/babel-helper-vue-jsx-merge-props": "^1.2.1", + "@vue/babel-plugin-transform-vue-jsx": "^1.2.1", + "camelcase": "^5.0.0", + "html-tags": "^2.0.0", + "svg-tags": "^1.0.0" + }, + "dependencies": { + "camelcase": { + "version": "5.3.1", + "resolved": "https://registry.npmjs.org/camelcase/-/camelcase-5.3.1.tgz", + "integrity": "sha512-L28STB170nwWS63UjtlEOE3dldQApaJXZkOI1uMFfzf3rRuPegHaHesyee+YxQ+W6SvRDQV6UrdOdRiR153wJg==", + "dev": true + }, + "html-tags": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/html-tags/-/html-tags-2.0.0.tgz", + "integrity": "sha1-ELMKOGCF9Dzt41PMj6fLDe7qZos=", + "dev": true + } + } + }, + "@vue/babel-sugar-v-on": { + "version": "1.2.3", + "resolved": "https://registry.npmjs.org/@vue/babel-sugar-v-on/-/babel-sugar-v-on-1.2.3.tgz", + "integrity": "sha512-kt12VJdz/37D3N3eglBywV8GStKNUhNrsxChXIV+o0MwVXORYuhDTHJRKPgLJRb/EY3vM2aRFQdxJBp9CLikjw==", + "dev": true, + "requires": { + "@babel/plugin-syntax-jsx": "^7.2.0", + "@vue/babel-plugin-transform-vue-jsx": "^1.2.1", + "camelcase": "^5.0.0" + }, + "dependencies": { + "camelcase": { + "version": "5.3.1", + "resolved": "https://registry.npmjs.org/camelcase/-/camelcase-5.3.1.tgz", + "integrity": 
"sha512-L28STB170nwWS63UjtlEOE3dldQApaJXZkOI1uMFfzf3rRuPegHaHesyee+YxQ+W6SvRDQV6UrdOdRiR153wJg==", + "dev": true + } + } + }, + "@vue/component-compiler-utils": { + "version": "3.2.0", + "resolved": "https://registry.npmjs.org/@vue/component-compiler-utils/-/component-compiler-utils-3.2.0.tgz", + "integrity": "sha512-lejBLa7xAMsfiZfNp7Kv51zOzifnb29FwdnMLa96z26kXErPFioSf9BMcePVIQ6/Gc6/mC0UrPpxAWIHyae0vw==", + "dev": true, + "requires": { + "consolidate": "^0.15.1", + "hash-sum": "^1.0.2", + "lru-cache": "^4.1.2", + "merge-source-map": "^1.1.0", + "postcss": "^7.0.14", + "postcss-selector-parser": "^6.0.2", + "prettier": "^1.18.2", + "source-map": "~0.6.1", + "vue-template-es2015-compiler": "^1.9.0" + }, + "dependencies": { + "lru-cache": { + "version": "4.1.5", + "resolved": "https://registry.npmjs.org/lru-cache/-/lru-cache-4.1.5.tgz", + "integrity": "sha512-sWZlbEP2OsHNkXrMl5GYk/jKk70MBng6UU4YI/qGDYbgf6YbP4EvmqISbXCoJiRKs+1bSpFHVgQxvJ17F2li5g==", + "dev": true, + "requires": { + "pseudomap": "^1.0.2", + "yallist": "^2.1.2" + } + }, + "yallist": { + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/yallist/-/yallist-2.1.2.tgz", + "integrity": "sha1-HBH5IY8HYImkfdUS+TxmmaaoHVI=", + "dev": true + } + } + }, + "@vuepress/core": { + "version": "1.8.2", + "resolved": "https://registry.npmjs.org/@vuepress/core/-/core-1.8.2.tgz", + "integrity": "sha512-lh9BLC06k9s0wxTuWtCkiNj49fkbW87enp0XSrFZHEoyDGSGndQjZmMMErcHc5Hx7nrW1nzc33sPH1NNtJl0hw==", + "dev": true, + "requires": { + "@babel/core": "^7.8.4", + "@vue/babel-preset-app": "^4.1.2", + "@vuepress/markdown": "1.8.2", + "@vuepress/markdown-loader": "1.8.2", + "@vuepress/plugin-last-updated": "1.8.2", + "@vuepress/plugin-register-components": "1.8.2", + "@vuepress/shared-utils": "1.8.2", + "autoprefixer": "^9.5.1", + "babel-loader": "^8.0.4", + "cache-loader": "^3.0.0", + "chokidar": "^2.0.3", + "connect-history-api-fallback": "^1.5.0", + "copy-webpack-plugin": "^5.0.2", + "core-js": "^3.6.4", + "cross-spawn": 
"^6.0.5", + "css-loader": "^2.1.1", + "file-loader": "^3.0.1", + "js-yaml": "^3.13.1", + "lru-cache": "^5.1.1", + "mini-css-extract-plugin": "0.6.0", + "optimize-css-assets-webpack-plugin": "^5.0.1", + "portfinder": "^1.0.13", + "postcss-loader": "^3.0.0", + "postcss-safe-parser": "^4.0.1", + "toml": "^3.0.0", + "url-loader": "^1.0.1", + "vue": "^2.6.10", + "vue-loader": "^15.7.1", + "vue-router": "^3.4.5", + "vue-server-renderer": "^2.6.10", + "vue-template-compiler": "^2.6.10", + "vuepress-html-webpack-plugin": "^3.2.0", + "vuepress-plugin-container": "^2.0.2", + "webpack": "^4.8.1", + "webpack-chain": "^6.0.0", + "webpack-dev-server": "^3.5.1", + "webpack-merge": "^4.1.2", + "webpackbar": "3.2.0" + }, + "dependencies": { + "cross-spawn": { + "version": "6.0.5", + "resolved": "https://registry.npmjs.org/cross-spawn/-/cross-spawn-6.0.5.tgz", + "integrity": "sha512-eTVLrBSt7fjbDygz805pMnstIs2VTBNkRm0qxZd+M7A5XDdxVRWO5MxGBXZhjY4cqLYLdtrGqRf8mBPmzwSpWQ==", + "dev": true, + "requires": { + "nice-try": "^1.0.4", + "path-key": "^2.0.1", + "semver": "^5.5.0", + "shebang-command": "^1.2.0", + "which": "^1.2.9" + } + }, + "path-key": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/path-key/-/path-key-2.0.1.tgz", + "integrity": "sha1-QRyttXTFoUDTpLGRDUDYDMn0C0A=", + "dev": true + }, + "semver": { + "version": "5.7.1", + "resolved": "https://registry.npmjs.org/semver/-/semver-5.7.1.tgz", + "integrity": "sha512-sauaDf/PZdVgrLTNYHRtpXa1iRiKcaebiKQ1BJdpQlWH2lCvexQdX55snPFyK7QzpudqbCI0qXFfOasHdyNDGQ==", + "dev": true + }, + "shebang-command": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/shebang-command/-/shebang-command-1.2.0.tgz", + "integrity": "sha1-RKrGW2lbAzmJaMOfNj/uXer98eo=", + "dev": true, + "requires": { + "shebang-regex": "^1.0.0" + } + }, + "shebang-regex": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/shebang-regex/-/shebang-regex-1.0.0.tgz", + "integrity": "sha1-2kL0l0DAtC2yypcoVxyxkMmO/qM=", + "dev": true + 
}, + "which": { + "version": "1.3.1", + "resolved": "https://registry.npmjs.org/which/-/which-1.3.1.tgz", + "integrity": "sha512-HxJdYWq1MTIQbJ3nw0cqssHoTNU267KlrDuGZ1WYlxDStUtKUhOaJmh112/TZmHxxUfuJqPXSOm7tDyas0OSIQ==", + "dev": true, + "requires": { + "isexe": "^2.0.0" + } + } + } + }, + "@vuepress/markdown": { + "version": "1.8.2", + "resolved": "https://registry.npmjs.org/@vuepress/markdown/-/markdown-1.8.2.tgz", + "integrity": "sha512-zznBHVqW+iBkznF/BO/GY9RFu53khyl0Ey0PnGqvwCJpRLNan6y5EXgYumtjw2GSYn5nDTTALYxtyNBdz64PKg==", + "dev": true, + "requires": { + "@vuepress/shared-utils": "1.8.2", + "markdown-it": "^8.4.1", + "markdown-it-anchor": "^5.0.2", + "markdown-it-chain": "^1.3.0", + "markdown-it-emoji": "^1.4.0", + "markdown-it-table-of-contents": "^0.4.0", + "prismjs": "^1.13.0" + } + }, + "@vuepress/markdown-loader": { + "version": "1.8.2", + "resolved": "https://registry.npmjs.org/@vuepress/markdown-loader/-/markdown-loader-1.8.2.tgz", + "integrity": "sha512-mWzFXikCUcAN/chpKkqZpRYKdo0312hMv8cBea2hvrJYV6y4ODB066XKvXN8JwOcxuCjxWYJkhWGr+pXq1oTtw==", + "dev": true, + "requires": { + "@vuepress/markdown": "1.8.2", + "loader-utils": "^1.1.0", + "lru-cache": "^5.1.1" + } + }, + "@vuepress/plugin-active-header-links": { + "version": "1.8.2", + "resolved": "https://registry.npmjs.org/@vuepress/plugin-active-header-links/-/plugin-active-header-links-1.8.2.tgz", + "integrity": "sha512-JmXAQg8D7J8mcKe2Ue3BZ9dOCzJMJXP4Cnkkc/IrqfDg0ET0l96gYWZohCqlvRIWt4f0VPiFAO4FLYrW+hko+g==", + "dev": true, + "requires": { + "lodash.debounce": "^4.0.8" + } + }, + "@vuepress/plugin-back-to-top": { + "version": "1.8.2", + "resolved": "https://registry.npmjs.org/@vuepress/plugin-back-to-top/-/plugin-back-to-top-1.8.2.tgz", + "integrity": "sha512-htAf2m8+6cGmYQexWerznGBY10y1E4TBfebYC3Y3wqNjFjvXUmRKcAG/u6Yxvey4OFkQUxbth2ilKi/GlIW8aQ==", + "dev": true, + "requires": { + "lodash.debounce": "^4.0.8" + } + }, + "@vuepress/plugin-blog": { + "version": "1.9.4", + "resolved": 
"https://registry.npmjs.org/@vuepress/plugin-blog/-/plugin-blog-1.9.4.tgz", + "integrity": "sha512-7A4Y3mYrSOUKdzsTjeVOKt0XgZ0m1Iqq7BeZn7y9YeZfDcZ4Fx6UldsPfK2+THwtYwGzQ7Not3zO8djyk7z3ew==", + "dev": true, + "requires": { + "@vssue/api-github-v3": "^1.1.2", + "@vssue/vuepress-plugin-vssue": "^1.2.0", + "dayjs": "^1.10.3", + "vuejs-paginate": "^2.1.0", + "vuepress-plugin-disqus": "^0.2.0", + "vuepress-plugin-feed": "^0.1.8", + "vuepress-plugin-mailchimp": "^1.4.1", + "vuepress-plugin-sitemap": "^2.3.1" + } + }, + "@vuepress/plugin-last-updated": { + "version": "1.8.2", + "resolved": "https://registry.npmjs.org/@vuepress/plugin-last-updated/-/plugin-last-updated-1.8.2.tgz", + "integrity": "sha512-pYIRZi52huO9b6HY3JQNPKNERCLzMHejjBRt9ekdnJ1xhLs4MmRvt37BoXjI/qzvXkYtr7nmGgnKThNBVRTZuA==", + "dev": true, + "requires": { + "cross-spawn": "^6.0.5" + }, + "dependencies": { + "cross-spawn": { + "version": "6.0.5", + "resolved": "https://registry.npmjs.org/cross-spawn/-/cross-spawn-6.0.5.tgz", + "integrity": "sha512-eTVLrBSt7fjbDygz805pMnstIs2VTBNkRm0qxZd+M7A5XDdxVRWO5MxGBXZhjY4cqLYLdtrGqRf8mBPmzwSpWQ==", + "dev": true, + "requires": { + "nice-try": "^1.0.4", + "path-key": "^2.0.1", + "semver": "^5.5.0", + "shebang-command": "^1.2.0", + "which": "^1.2.9" + } + }, + "path-key": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/path-key/-/path-key-2.0.1.tgz", + "integrity": "sha1-QRyttXTFoUDTpLGRDUDYDMn0C0A=", + "dev": true + }, + "semver": { + "version": "5.7.1", + "resolved": "https://registry.npmjs.org/semver/-/semver-5.7.1.tgz", + "integrity": "sha512-sauaDf/PZdVgrLTNYHRtpXa1iRiKcaebiKQ1BJdpQlWH2lCvexQdX55snPFyK7QzpudqbCI0qXFfOasHdyNDGQ==", + "dev": true + }, + "shebang-command": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/shebang-command/-/shebang-command-1.2.0.tgz", + "integrity": "sha1-RKrGW2lbAzmJaMOfNj/uXer98eo=", + "dev": true, + "requires": { + "shebang-regex": "^1.0.0" + } + }, + "shebang-regex": { + "version": "1.0.0", + 
"resolved": "https://registry.npmjs.org/shebang-regex/-/shebang-regex-1.0.0.tgz", + "integrity": "sha1-2kL0l0DAtC2yypcoVxyxkMmO/qM=", + "dev": true + }, + "which": { + "version": "1.3.1", + "resolved": "https://registry.npmjs.org/which/-/which-1.3.1.tgz", + "integrity": "sha512-HxJdYWq1MTIQbJ3nw0cqssHoTNU267KlrDuGZ1WYlxDStUtKUhOaJmh112/TZmHxxUfuJqPXSOm7tDyas0OSIQ==", + "dev": true, + "requires": { + "isexe": "^2.0.0" + } + } + } + }, + "@vuepress/plugin-nprogress": { + "version": "1.8.2", + "resolved": "https://registry.npmjs.org/@vuepress/plugin-nprogress/-/plugin-nprogress-1.8.2.tgz", + "integrity": "sha512-3TOBee2NM3WLr1tdjDTGfrAMggjN+OlEPyKyv8FqThsVkDYhw48O3HwqlThp9KX7UbL3ExxIFBwWRFLC+kYrdw==", + "dev": true, + "requires": { + "nprogress": "^0.2.0" + } + }, + "@vuepress/plugin-register-components": { + "version": "1.8.2", + "resolved": "https://registry.npmjs.org/@vuepress/plugin-register-components/-/plugin-register-components-1.8.2.tgz", + "integrity": "sha512-6SUq3nHFMEh9qKFnjA8QnrNxj0kLs7+Gspq1OBU8vtu0NQmSvLFZVaMV7pzT/9zN2nO5Pld5qhsUJv1g71MrEA==", + "dev": true, + "requires": { + "@vuepress/shared-utils": "1.8.2" + } + }, + "@vuepress/plugin-search": { + "version": "1.8.2", + "resolved": "https://registry.npmjs.org/@vuepress/plugin-search/-/plugin-search-1.8.2.tgz", + "integrity": "sha512-JrSJr9o0Kar14lVtZ4wfw39pplxvvMh8vDBD9oW09a+6Zi/4bySPGdcdaqdqGW+OHSiZNvG+6uyfKSBBBqF6PA==", + "dev": true + }, + "@vuepress/shared-utils": { + "version": "1.8.2", + "resolved": "https://registry.npmjs.org/@vuepress/shared-utils/-/shared-utils-1.8.2.tgz", + "integrity": "sha512-6kGubc7iBDWruEBUU7yR+sQ++SOhMuvKWvWeTZJKRZedthycdzYz7QVpua0FaZSAJm5/dIt8ymU4WQvxTtZgTQ==", + "dev": true, + "requires": { + "chalk": "^2.3.2", + "escape-html": "^1.0.3", + "fs-extra": "^7.0.1", + "globby": "^9.2.0", + "gray-matter": "^4.0.1", + "hash-sum": "^1.0.2", + "semver": "^6.0.0", + "toml": "^3.0.0", + "upath": "^1.1.0" + }, + "dependencies": { + "chalk": { + "version": "2.4.2", + "resolved": 
"https://registry.npmjs.org/chalk/-/chalk-2.4.2.tgz", + "integrity": "sha512-Mti+f9lpJNcwF4tWV8/OrTTtF1gZi+f8FqlyAdouralcFWFQWF2+NgCHShjkCb+IFBLq9buZwE1xckQU4peSuQ==", + "dev": true, + "requires": { + "ansi-styles": "^3.2.1", + "escape-string-regexp": "^1.0.5", + "supports-color": "^5.3.0" + } + }, + "fs-extra": { + "version": "7.0.1", + "resolved": "https://registry.npmjs.org/fs-extra/-/fs-extra-7.0.1.tgz", + "integrity": "sha512-YJDaCJZEnBmcbw13fvdAM9AwNOJwOzrE4pqMqBq5nFiEqXUqHwlK4B+3pUw6JNvfSPtX05xFHtYy/1ni01eGCw==", + "dev": true, + "requires": { + "graceful-fs": "^4.1.2", + "jsonfile": "^4.0.0", + "universalify": "^0.1.0" + } + }, + "supports-color": { + "version": "5.5.0", + "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-5.5.0.tgz", + "integrity": "sha512-QjVjwdXIt408MIiAqCX4oUKsgU2EqAGzs2Ppkm4aQYbjm+ZEWEcW4SfFNTr4uMNZma0ey4f5lgLrkB0aX0QMow==", + "dev": true, + "requires": { + "has-flag": "^3.0.0" + } + } + } + }, + "@vuepress/theme-default": { + "version": "1.8.2", + "resolved": "https://registry.npmjs.org/@vuepress/theme-default/-/theme-default-1.8.2.tgz", + "integrity": "sha512-rE7M1rs3n2xp4a/GrweO8EGwqFn3EA5gnFWdVmVIHyr7C1nix+EqjpPQF1SVWNnIrDdQuCw38PqS+oND1K2vYw==", + "dev": true, + "requires": { + "@vuepress/plugin-active-header-links": "1.8.2", + "@vuepress/plugin-nprogress": "1.8.2", + "@vuepress/plugin-search": "1.8.2", + "docsearch.js": "^2.5.2", + "lodash": "^4.17.15", + "stylus": "^0.54.8", + "stylus-loader": "^3.0.2", + "vuepress-plugin-container": "^2.0.2", + "vuepress-plugin-smooth-scroll": "^0.0.3" + } + }, + "@webassemblyjs/ast": { + "version": "1.9.0", + "resolved": "https://registry.npmjs.org/@webassemblyjs/ast/-/ast-1.9.0.tgz", + "integrity": "sha512-C6wW5L+b7ogSDVqymbkkvuW9kruN//YisMED04xzeBBqjHa2FYnmvOlS6Xj68xWQRgWvI9cIglsjFowH/RJyEA==", + "dev": true, + "requires": { + "@webassemblyjs/helper-module-context": "1.9.0", + "@webassemblyjs/helper-wasm-bytecode": "1.9.0", + "@webassemblyjs/wast-parser": "1.9.0" + } + 
}, + "@webassemblyjs/floating-point-hex-parser": { + "version": "1.9.0", + "resolved": "https://registry.npmjs.org/@webassemblyjs/floating-point-hex-parser/-/floating-point-hex-parser-1.9.0.tgz", + "integrity": "sha512-TG5qcFsS8QB4g4MhrxK5TqfdNe7Ey/7YL/xN+36rRjl/BlGE/NcBvJcqsRgCP6Z92mRE+7N50pRIi8SmKUbcQA==", + "dev": true + }, + "@webassemblyjs/helper-api-error": { + "version": "1.9.0", + "resolved": "https://registry.npmjs.org/@webassemblyjs/helper-api-error/-/helper-api-error-1.9.0.tgz", + "integrity": "sha512-NcMLjoFMXpsASZFxJ5h2HZRcEhDkvnNFOAKneP5RbKRzaWJN36NC4jqQHKwStIhGXu5mUWlUUk7ygdtrO8lbmw==", + "dev": true + }, + "@webassemblyjs/helper-buffer": { + "version": "1.9.0", + "resolved": "https://registry.npmjs.org/@webassemblyjs/helper-buffer/-/helper-buffer-1.9.0.tgz", + "integrity": "sha512-qZol43oqhq6yBPx7YM3m9Bv7WMV9Eevj6kMi6InKOuZxhw+q9hOkvq5e/PpKSiLfyetpaBnogSbNCfBwyB00CA==", + "dev": true + }, + "@webassemblyjs/helper-code-frame": { + "version": "1.9.0", + "resolved": "https://registry.npmjs.org/@webassemblyjs/helper-code-frame/-/helper-code-frame-1.9.0.tgz", + "integrity": "sha512-ERCYdJBkD9Vu4vtjUYe8LZruWuNIToYq/ME22igL+2vj2dQ2OOujIZr3MEFvfEaqKoVqpsFKAGsRdBSBjrIvZA==", + "dev": true, + "requires": { + "@webassemblyjs/wast-printer": "1.9.0" + } + }, + "@webassemblyjs/helper-fsm": { + "version": "1.9.0", + "resolved": "https://registry.npmjs.org/@webassemblyjs/helper-fsm/-/helper-fsm-1.9.0.tgz", + "integrity": "sha512-OPRowhGbshCb5PxJ8LocpdX9Kl0uB4XsAjl6jH/dWKlk/mzsANvhwbiULsaiqT5GZGT9qinTICdj6PLuM5gslw==", + "dev": true + }, + "@webassemblyjs/helper-module-context": { + "version": "1.9.0", + "resolved": "https://registry.npmjs.org/@webassemblyjs/helper-module-context/-/helper-module-context-1.9.0.tgz", + "integrity": "sha512-MJCW8iGC08tMk2enck1aPW+BE5Cw8/7ph/VGZxwyvGbJwjktKkDK7vy7gAmMDx88D7mhDTCNKAW5tED+gZ0W8g==", + "dev": true, + "requires": { + "@webassemblyjs/ast": "1.9.0" + } + }, + "@webassemblyjs/helper-wasm-bytecode": { + "version": "1.9.0", + 
"resolved": "https://registry.npmjs.org/@webassemblyjs/helper-wasm-bytecode/-/helper-wasm-bytecode-1.9.0.tgz", + "integrity": "sha512-R7FStIzyNcd7xKxCZH5lE0Bqy+hGTwS3LJjuv1ZVxd9O7eHCedSdrId/hMOd20I+v8wDXEn+bjfKDLzTepoaUw==", + "dev": true + }, + "@webassemblyjs/helper-wasm-section": { + "version": "1.9.0", + "resolved": "https://registry.npmjs.org/@webassemblyjs/helper-wasm-section/-/helper-wasm-section-1.9.0.tgz", + "integrity": "sha512-XnMB8l3ek4tvrKUUku+IVaXNHz2YsJyOOmz+MMkZvh8h1uSJpSen6vYnw3IoQ7WwEuAhL8Efjms1ZWjqh2agvw==", + "dev": true, + "requires": { + "@webassemblyjs/ast": "1.9.0", + "@webassemblyjs/helper-buffer": "1.9.0", + "@webassemblyjs/helper-wasm-bytecode": "1.9.0", + "@webassemblyjs/wasm-gen": "1.9.0" + } + }, + "@webassemblyjs/ieee754": { + "version": "1.9.0", + "resolved": "https://registry.npmjs.org/@webassemblyjs/ieee754/-/ieee754-1.9.0.tgz", + "integrity": "sha512-dcX8JuYU/gvymzIHc9DgxTzUUTLexWwt8uCTWP3otys596io0L5aW02Gb1RjYpx2+0Jus1h4ZFqjla7umFniTg==", + "dev": true, + "requires": { + "@xtuc/ieee754": "^1.2.0" + } + }, + "@webassemblyjs/leb128": { + "version": "1.9.0", + "resolved": "https://registry.npmjs.org/@webassemblyjs/leb128/-/leb128-1.9.0.tgz", + "integrity": "sha512-ENVzM5VwV1ojs9jam6vPys97B/S65YQtv/aanqnU7D8aSoHFX8GyhGg0CMfyKNIHBuAVjy3tlzd5QMMINa7wpw==", + "dev": true, + "requires": { + "@xtuc/long": "4.2.2" + } + }, + "@webassemblyjs/utf8": { + "version": "1.9.0", + "resolved": "https://registry.npmjs.org/@webassemblyjs/utf8/-/utf8-1.9.0.tgz", + "integrity": "sha512-GZbQlWtopBTP0u7cHrEx+73yZKrQoBMpwkGEIqlacljhXCkVM1kMQge/Mf+csMJAjEdSwhOyLAS0AoR3AG5P8w==", + "dev": true + }, + "@webassemblyjs/wasm-edit": { + "version": "1.9.0", + "resolved": "https://registry.npmjs.org/@webassemblyjs/wasm-edit/-/wasm-edit-1.9.0.tgz", + "integrity": "sha512-FgHzBm80uwz5M8WKnMTn6j/sVbqilPdQXTWraSjBwFXSYGirpkSWE2R9Qvz9tNiTKQvoKILpCuTjBKzOIm0nxw==", + "dev": true, + "requires": { + "@webassemblyjs/ast": "1.9.0", + "@webassemblyjs/helper-buffer": "1.9.0", 
+ "@webassemblyjs/helper-wasm-bytecode": "1.9.0", + "@webassemblyjs/helper-wasm-section": "1.9.0", + "@webassemblyjs/wasm-gen": "1.9.0", + "@webassemblyjs/wasm-opt": "1.9.0", + "@webassemblyjs/wasm-parser": "1.9.0", + "@webassemblyjs/wast-printer": "1.9.0" + } + }, + "@webassemblyjs/wasm-gen": { + "version": "1.9.0", + "resolved": "https://registry.npmjs.org/@webassemblyjs/wasm-gen/-/wasm-gen-1.9.0.tgz", + "integrity": "sha512-cPE3o44YzOOHvlsb4+E9qSqjc9Qf9Na1OO/BHFy4OI91XDE14MjFN4lTMezzaIWdPqHnsTodGGNP+iRSYfGkjA==", + "dev": true, + "requires": { + "@webassemblyjs/ast": "1.9.0", + "@webassemblyjs/helper-wasm-bytecode": "1.9.0", + "@webassemblyjs/ieee754": "1.9.0", + "@webassemblyjs/leb128": "1.9.0", + "@webassemblyjs/utf8": "1.9.0" + } + }, + "@webassemblyjs/wasm-opt": { + "version": "1.9.0", + "resolved": "https://registry.npmjs.org/@webassemblyjs/wasm-opt/-/wasm-opt-1.9.0.tgz", + "integrity": "sha512-Qkjgm6Anhm+OMbIL0iokO7meajkzQD71ioelnfPEj6r4eOFuqm4YC3VBPqXjFyyNwowzbMD+hizmprP/Fwkl2A==", + "dev": true, + "requires": { + "@webassemblyjs/ast": "1.9.0", + "@webassemblyjs/helper-buffer": "1.9.0", + "@webassemblyjs/wasm-gen": "1.9.0", + "@webassemblyjs/wasm-parser": "1.9.0" + } + }, + "@webassemblyjs/wasm-parser": { + "version": "1.9.0", + "resolved": "https://registry.npmjs.org/@webassemblyjs/wasm-parser/-/wasm-parser-1.9.0.tgz", + "integrity": "sha512-9+wkMowR2AmdSWQzsPEjFU7njh8HTO5MqO8vjwEHuM+AMHioNqSBONRdr0NQQ3dVQrzp0s8lTcYqzUdb7YgELA==", + "dev": true, + "requires": { + "@webassemblyjs/ast": "1.9.0", + "@webassemblyjs/helper-api-error": "1.9.0", + "@webassemblyjs/helper-wasm-bytecode": "1.9.0", + "@webassemblyjs/ieee754": "1.9.0", + "@webassemblyjs/leb128": "1.9.0", + "@webassemblyjs/utf8": "1.9.0" + } + }, + "@webassemblyjs/wast-parser": { + "version": "1.9.0", + "resolved": "https://registry.npmjs.org/@webassemblyjs/wast-parser/-/wast-parser-1.9.0.tgz", + "integrity": 
"sha512-qsqSAP3QQ3LyZjNC/0jBJ/ToSxfYJ8kYyuiGvtn/8MK89VrNEfwj7BPQzJVHi0jGTRK2dGdJ5PRqhtjzoww+bw==", + "dev": true, + "requires": { + "@webassemblyjs/ast": "1.9.0", + "@webassemblyjs/floating-point-hex-parser": "1.9.0", + "@webassemblyjs/helper-api-error": "1.9.0", + "@webassemblyjs/helper-code-frame": "1.9.0", + "@webassemblyjs/helper-fsm": "1.9.0", + "@xtuc/long": "4.2.2" + } + }, + "@webassemblyjs/wast-printer": { + "version": "1.9.0", + "resolved": "https://registry.npmjs.org/@webassemblyjs/wast-printer/-/wast-printer-1.9.0.tgz", + "integrity": "sha512-2J0nE95rHXHyQ24cWjMKJ1tqB/ds8z/cyeOZxJhcb+rW+SQASVjuznUSmdz5GpVJTzU8JkhYut0D3siFDD6wsA==", + "dev": true, + "requires": { + "@webassemblyjs/ast": "1.9.0", + "@webassemblyjs/wast-parser": "1.9.0", + "@xtuc/long": "4.2.2" + } + }, + "@xtuc/ieee754": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/@xtuc/ieee754/-/ieee754-1.2.0.tgz", + "integrity": "sha512-DX8nKgqcGwsc0eJSqYt5lwP4DH5FlHnmuWWBRy7X0NcaGR0ZtuyeESgMwTYVEtxmsNGY+qit4QYT/MIYTOTPeA==", + "dev": true + }, + "@xtuc/long": { + "version": "4.2.2", + "resolved": "https://registry.npmjs.org/@xtuc/long/-/long-4.2.2.tgz", + "integrity": "sha512-NuHqBY1PB/D8xU6s/thBgOAiAP7HOYDQ32+BFZILJ8ivkUkAHQnWfn6WhL79Owj1qmUnoN/YPhktdIoucipkAQ==", + "dev": true + }, + "abbrev": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/abbrev/-/abbrev-1.1.1.tgz", + "integrity": "sha512-nne9/IiQ/hzIhY6pdDnbBtz7DjPTKrY00P/zvPSm5pOFkl6xuGrGnXn/VtTNNfNtAfZ9/1RtehkszU9qcTii0Q==", + "dev": true + }, + "accepts": { + "version": "1.3.7", + "resolved": "https://registry.npmjs.org/accepts/-/accepts-1.3.7.tgz", + "integrity": "sha512-Il80Qs2WjYlJIBNzNkK6KYqlVMTbZLXgHx2oT0pU/fjRHyEp+PEfEPY0R3WCwAGVOtauxh1hOxNgIf5bv7dQpA==", + "dev": true, + "requires": { + "mime-types": "~2.1.24", + "negotiator": "0.6.2" + } + }, + "acorn": { + "version": "7.4.1", + "resolved": "https://registry.npmjs.org/acorn/-/acorn-7.4.1.tgz", + "integrity": 
"sha512-nQyp0o1/mNdbTO1PO6kHkwSrmgZ0MT/jCCpNiwbUjGoRN4dlBhqJtoQuCnEOKzgTVwg0ZWiCoQy6SxMebQVh8A==", + "dev": true + }, + "acorn-node": { + "version": "1.8.2", + "resolved": "https://registry.npmjs.org/acorn-node/-/acorn-node-1.8.2.tgz", + "integrity": "sha512-8mt+fslDufLYntIoPAaIMUe/lrbrehIiwmR3t2k9LljIzoigEPF27eLk2hy8zSGzmR/ogr7zbRKINMo1u0yh5A==", + "dev": true, + "requires": { + "acorn": "^7.0.0", + "acorn-walk": "^7.0.0", + "xtend": "^4.0.2" + } + }, + "acorn-walk": { + "version": "7.2.0", + "resolved": "https://registry.npmjs.org/acorn-walk/-/acorn-walk-7.2.0.tgz", + "integrity": "sha512-OPdCF6GsMIP+Az+aWfAAOEt2/+iVDKE7oy6lJ098aoe59oAmK76qV6Gw60SbZ8jHuG2wH058GF4pLFbYamYrVA==", + "dev": true + }, + "agent-base": { + "version": "6.0.2", + "resolved": "https://registry.npmjs.org/agent-base/-/agent-base-6.0.2.tgz", + "integrity": "sha512-RZNwNclF7+MS/8bDg70amg32dyeZGZxiDuQmZxKLAlQjr3jGyLx+4Kkk58UO7D2QdgFIQCovuSuZESne6RG6XQ==", + "dev": true, + "requires": { + "debug": "4" + } + }, + "agentkeepalive": { + "version": "2.2.0", + "resolved": "https://registry.npmjs.org/agentkeepalive/-/agentkeepalive-2.2.0.tgz", + "integrity": "sha1-xdG9SxKQCPEWPyNvhuX66iAm4u8=", + "dev": true + }, + "aggregate-error": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/aggregate-error/-/aggregate-error-3.1.0.tgz", + "integrity": "sha512-4I7Td01quW/RpocfNayFdFVk1qSuoh0E7JrbRJ16nH01HhKFQ88INq9Sd+nd72zqRySlr9BmDA8xlEJ6vJMrYA==", + "dev": true, + "requires": { + "clean-stack": "^2.0.0", + "indent-string": "^4.0.0" + } + }, + "ajv": { + "version": "6.12.6", + "resolved": "https://registry.npmjs.org/ajv/-/ajv-6.12.6.tgz", + "integrity": "sha512-j3fVLgvTo527anyYyJOGTYJbG+vnnQYvE0m5mmkc1TK+nxAppkCLMIL0aZ4dblVCNoGShhm+kzE4ZUykBoMg4g==", + "dev": true, + "requires": { + "fast-deep-equal": "^3.1.1", + "fast-json-stable-stringify": "^2.0.0", + "json-schema-traverse": "^0.4.1", + "uri-js": "^4.2.2" + } + }, + "ajv-errors": { + "version": "1.0.1", + "resolved": 
"https://registry.npmjs.org/ajv-errors/-/ajv-errors-1.0.1.tgz", + "integrity": "sha512-DCRfO/4nQ+89p/RK43i8Ezd41EqdGIU4ld7nGF8OQ14oc/we5rEntLCUa7+jrn3nn83BosfwZA0wb4pon2o8iQ==", + "dev": true + }, + "ajv-keywords": { + "version": "3.5.2", + "resolved": "https://registry.npmjs.org/ajv-keywords/-/ajv-keywords-3.5.2.tgz", + "integrity": "sha512-5p6WTN0DdTGVQk6VjcEju19IgaHudalcfabD7yhDGeA6bcQnmL+CpveLJq/3hvfwd1aof6L386Ougkx6RfyMIQ==", + "dev": true + }, + "algoliasearch": { + "version": "3.35.1", + "resolved": "https://registry.npmjs.org/algoliasearch/-/algoliasearch-3.35.1.tgz", + "integrity": "sha512-K4yKVhaHkXfJ/xcUnil04xiSrB8B8yHZoFEhWNpXg23eiCnqvTZw1tn/SqvdsANlYHLJlKl0qi3I/Q2Sqo7LwQ==", + "dev": true, + "requires": { + "agentkeepalive": "^2.2.0", + "debug": "^2.6.9", + "envify": "^4.0.0", + "es6-promise": "^4.1.0", + "events": "^1.1.0", + "foreach": "^2.0.5", + "global": "^4.3.2", + "inherits": "^2.0.1", + "isarray": "^2.0.1", + "load-script": "^1.0.0", + "object-keys": "^1.0.11", + "querystring-es3": "^0.2.1", + "reduce": "^1.0.1", + "semver": "^5.1.0", + "tunnel-agent": "^0.6.0" + }, + "dependencies": { + "debug": { + "version": "2.6.9", + "resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz", + "integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==", + "dev": true, + "requires": { + "ms": "2.0.0" + } + }, + "events": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/events/-/events-1.1.1.tgz", + "integrity": "sha1-nr23Y1rQmccNzEwqH1AEKI6L2SQ=", + "dev": true + }, + "isarray": { + "version": "2.0.5", + "resolved": "https://registry.npmjs.org/isarray/-/isarray-2.0.5.tgz", + "integrity": "sha512-xHjhDr3cNBK0BzdUJSPXZntQUx/mwMS5Rw4A7lPJ90XGAO6ISP/ePDNuo0vhqOZU+UD5JoodwCAAoZQd3FeAKw==", + "dev": true + }, + "ms": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz", + "integrity": "sha1-VgiurfwAvmwpAd9fmGF4jeDVl8g=", + "dev": true + }, + "semver": { + 
"version": "5.7.1", + "resolved": "https://registry.npmjs.org/semver/-/semver-5.7.1.tgz", + "integrity": "sha512-sauaDf/PZdVgrLTNYHRtpXa1iRiKcaebiKQ1BJdpQlWH2lCvexQdX55snPFyK7QzpudqbCI0qXFfOasHdyNDGQ==", + "dev": true + } + } + }, + "alphanum-sort": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/alphanum-sort/-/alphanum-sort-1.0.2.tgz", + "integrity": "sha1-l6ERlkmyEa0zaR2fn0hqjsn74KM=", + "dev": true + }, + "ansi-align": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/ansi-align/-/ansi-align-3.0.0.tgz", + "integrity": "sha512-ZpClVKqXN3RGBmKibdfWzqCY4lnjEuoNzU5T0oEFpfd/z5qJHVarukridD4juLO2FXMiwUQxr9WqQtaYa8XRYw==", + "dev": true, + "requires": { + "string-width": "^3.0.0" + } + }, + "ansi-colors": { + "version": "3.2.4", + "resolved": "https://registry.npmjs.org/ansi-colors/-/ansi-colors-3.2.4.tgz", + "integrity": "sha512-hHUXGagefjN2iRrID63xckIvotOXOojhQKWIPUZ4mNUZ9nLZW+7FMNoE1lOkEhNWYsx/7ysGIuJYCiMAA9FnrA==", + "dev": true + }, + "ansi-escapes": { + "version": "4.3.2", + "resolved": "https://registry.npmjs.org/ansi-escapes/-/ansi-escapes-4.3.2.tgz", + "integrity": "sha512-gKXj5ALrKWQLsYG9jlTRmR/xKluxHV+Z9QEwNIgCfM1/uwPMCuzVVnh5mwTd+OuBZcwSIMbqssNWRm1lE51QaQ==", + "dev": true, + "requires": { + "type-fest": "^0.21.3" + } + }, + "ansi-html": { + "version": "0.0.7", + "resolved": "https://registry.npmjs.org/ansi-html/-/ansi-html-0.0.7.tgz", + "integrity": "sha1-gTWEAhliqenm/QOflA0S9WynhZ4=", + "dev": true + }, + "ansi-regex": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-2.1.1.tgz", + "integrity": "sha1-w7M6te42DYbg5ijwRorn7yfWVN8=", + "dev": true + }, + "ansi-styles": { + "version": "3.2.1", + "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-3.2.1.tgz", + "integrity": "sha512-VT0ZI6kZRdTh8YyJw3SMbYm/u+NqfsAxEpWO0Pf9sq8/e94WxxOpPKx9FR1FlyCtOVDNOQ+8ntlqFxiRc+r5qA==", + "dev": true, + "requires": { + "color-convert": "^1.9.0" + } + }, + "anymatch": { + "version": 
"2.0.0", + "resolved": "https://registry.npmjs.org/anymatch/-/anymatch-2.0.0.tgz", + "integrity": "sha512-5teOsQWABXHHBFP9y3skS5P3d/WfWXpv3FUpy+LorMrNYaT9pI4oLMQX7jzQ2KklNpGpWHzdCXTDT2Y3XGlZBw==", + "dev": true, + "requires": { + "micromatch": "^3.1.4", + "normalize-path": "^2.1.1" + }, + "dependencies": { + "normalize-path": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/normalize-path/-/normalize-path-2.1.1.tgz", + "integrity": "sha1-GrKLVW4Zg2Oowab35vogE3/mrtk=", + "dev": true, + "requires": { + "remove-trailing-separator": "^1.0.1" + } + } + } + }, + "aproba": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/aproba/-/aproba-1.2.0.tgz", + "integrity": "sha512-Y9J6ZjXtoYh8RnXVCMOU/ttDmk1aBjunq9vO0ta5x85WDQiQfUF9sIPBITdbiiIVcBo03Hi3jMxigBtsddlXRw==", + "dev": true + }, + "are-we-there-yet": { + "version": "1.1.5", + "resolved": "https://registry.npmjs.org/are-we-there-yet/-/are-we-there-yet-1.1.5.tgz", + "integrity": "sha512-5hYdAkZlcG8tOLujVDTgCT+uPX0VnpAH28gWsLfzpXYm7wP6mp5Q/gYyR7YQ0cKVJcXJnl3j2kpBan13PtQf6w==", + "dev": true, + "requires": { + "delegates": "^1.0.0", + "readable-stream": "^2.0.6" + } + }, + "argparse": { + "version": "1.0.10", + "resolved": "https://registry.npmjs.org/argparse/-/argparse-1.0.10.tgz", + "integrity": "sha512-o5Roy6tNG4SL/FOkCAN6RzjiakZS25RLYFrcMttJqbdd8BWrnA+fGz57iN5Pb06pvBGvl5gQ0B48dJlslXvoTg==", + "dev": true, + "requires": { + "sprintf-js": "~1.0.2" + } + }, + "arr-diff": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/arr-diff/-/arr-diff-4.0.0.tgz", + "integrity": "sha1-1kYQdP6/7HHn4VI1dhoyml3HxSA=", + "dev": true + }, + "arr-flatten": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/arr-flatten/-/arr-flatten-1.1.0.tgz", + "integrity": "sha512-L3hKV5R/p5o81R7O02IGnwpDmkp6E982XhtbuwSe3O4qOtMMMtodicASA1Cny2U+aCXcNpml+m4dPsvsJ3jatg==", + "dev": true + }, + "arr-union": { + "version": "3.1.0", + "resolved": 
"https://registry.npmjs.org/arr-union/-/arr-union-3.1.0.tgz", + "integrity": "sha1-45sJrqne+Gao8gbiiK9jkZuuOcQ=", + "dev": true + }, + "array-flatten": { + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/array-flatten/-/array-flatten-2.1.2.tgz", + "integrity": "sha512-hNfzcOV8W4NdualtqBFPyVO+54DSJuZGY9qT4pRroB6S9e3iiido2ISIC5h9R2sPJ8H3FHCIiEnsv1lPXO3KtQ==", + "dev": true + }, + "array-union": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/array-union/-/array-union-1.0.2.tgz", + "integrity": "sha1-mjRBDk9OPaI96jdb5b5w8kd47Dk=", + "dev": true, + "requires": { + "array-uniq": "^1.0.1" + } + }, + "array-uniq": { + "version": "1.0.3", + "resolved": "https://registry.npmjs.org/array-uniq/-/array-uniq-1.0.3.tgz", + "integrity": "sha1-r2rId6Jcx/dOBYiUdThY39sk/bY=", + "dev": true + }, + "array-unique": { + "version": "0.3.2", + "resolved": "https://registry.npmjs.org/array-unique/-/array-unique-0.3.2.tgz", + "integrity": "sha1-qJS3XUvE9s1nnvMkSp/Y9Gri1Cg=", + "dev": true + }, + "asn1": { + "version": "0.2.4", + "resolved": "https://registry.npmjs.org/asn1/-/asn1-0.2.4.tgz", + "integrity": "sha512-jxwzQpLQjSmWXgwaCZE9Nz+glAG01yF1QnWgbhGwHI5A6FRIEY6IVqtHhIepHqI7/kyEyQEagBC5mBEFlIYvdg==", + "dev": true, + "requires": { + "safer-buffer": "~2.1.0" + } + }, + "asn1.js": { + "version": "5.4.1", + "resolved": "https://registry.npmjs.org/asn1.js/-/asn1.js-5.4.1.tgz", + "integrity": "sha512-+I//4cYPccV8LdmBLiX8CYvf9Sp3vQsrqu2QNXRcrbiWvcx/UdlFiqUJJzxRQxgsZmvhXhn4cSKeSmoFjVdupA==", + "dev": true, + "requires": { + "bn.js": "^4.0.0", + "inherits": "^2.0.1", + "minimalistic-assert": "^1.0.0", + "safer-buffer": "^2.1.0" + }, + "dependencies": { + "bn.js": { + "version": "4.12.0", + "resolved": "https://registry.npmjs.org/bn.js/-/bn.js-4.12.0.tgz", + "integrity": "sha512-c98Bf3tPniI+scsdk237ku1Dc3ujXQTSgyiPUDEOe7tRkhrqridvh8klBv0HCEso1OLOYcHuCv/cS6DNxKH+ZA==", + "dev": true + } + } + }, + "assert": { + "version": "1.5.0", + "resolved": 
"https://registry.npmjs.org/assert/-/assert-1.5.0.tgz", + "integrity": "sha512-EDsgawzwoun2CZkCgtxJbv392v4nbk9XDD06zI+kQYoBM/3RBWLlEyJARDOmhAAosBjWACEkKL6S+lIZtcAubA==", + "dev": true, + "requires": { + "object-assign": "^4.1.1", + "util": "0.10.3" + }, + "dependencies": { + "inherits": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.1.tgz", + "integrity": "sha1-sX0I0ya0Qj5Wjv9xn5GwscvfafE=", + "dev": true + }, + "util": { + "version": "0.10.3", + "resolved": "https://registry.npmjs.org/util/-/util-0.10.3.tgz", + "integrity": "sha1-evsa/lCAUkZInj23/g7TeTNqwPk=", + "dev": true, + "requires": { + "inherits": "2.0.1" + } + } + } + }, + "assert-plus": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/assert-plus/-/assert-plus-1.0.0.tgz", + "integrity": "sha1-8S4PPF13sLHN2RRpQuTpbB5N1SU=", + "dev": true + }, + "assign-symbols": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/assign-symbols/-/assign-symbols-1.0.0.tgz", + "integrity": "sha1-WWZ/QfrdTyDMvCu5a41Pf3jsA2c=", + "dev": true + }, + "async": { + "version": "2.6.3", + "resolved": "https://registry.npmjs.org/async/-/async-2.6.3.tgz", + "integrity": "sha512-zflvls11DCy+dQWzTW2dzuilv8Z5X/pjfmZOWba6TNIVDm+2UDaJmXSOXlasHKfNBs8oo3M0aT50fDEWfKZjXg==", + "dev": true, + "requires": { + "lodash": "^4.17.14" + } + }, + "async-each": { + "version": "1.0.3", + "resolved": "https://registry.npmjs.org/async-each/-/async-each-1.0.3.tgz", + "integrity": "sha512-z/WhQ5FPySLdvREByI2vZiTWwCnF0moMJ1hK9YQwDTHKh6I7/uSckMetoRGb5UBZPC1z0jlw+n/XCgjeH7y1AQ==", + "dev": true + }, + "async-limiter": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/async-limiter/-/async-limiter-1.0.1.tgz", + "integrity": "sha512-csOlWGAcRFJaI6m+F2WKdnMKr4HhdhFVBk0H/QbJFMCr+uO2kwohwXQPxw/9OCxp05r5ghVBFSyioixx3gfkNQ==", + "dev": true + }, + "asynckit": { + "version": "0.4.0", + "resolved": "https://registry.npmjs.org/asynckit/-/asynckit-0.4.0.tgz", + "integrity": 
"sha1-x57Zf380y48robyXkLzDZkdLS3k=", + "dev": true + }, + "atob": { + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/atob/-/atob-2.1.2.tgz", + "integrity": "sha512-Wm6ukoaOGJi/73p/cl2GvLjTI5JM1k/O14isD73YML8StrH/7/lRFgmg8nICZgD3bZZvjwCGxtMOD3wWNAu8cg==", + "dev": true + }, + "autocomplete.js": { + "version": "0.36.0", + "resolved": "https://registry.npmjs.org/autocomplete.js/-/autocomplete.js-0.36.0.tgz", + "integrity": "sha512-jEwUXnVMeCHHutUt10i/8ZiRaCb0Wo+ZyKxeGsYwBDtw6EJHqEeDrq4UwZRD8YBSvp3g6klP678il2eeiVXN2Q==", + "dev": true, + "requires": { + "immediate": "^3.2.3" + } + }, + "autoprefixer": { + "version": "9.8.6", + "resolved": "https://registry.npmjs.org/autoprefixer/-/autoprefixer-9.8.6.tgz", + "integrity": "sha512-XrvP4VVHdRBCdX1S3WXVD8+RyG9qeb1D5Sn1DeLiG2xfSpzellk5k54xbUERJ3M5DggQxes39UGOTP8CFrEGbg==", + "dev": true, + "requires": { + "browserslist": "^4.12.0", + "caniuse-lite": "^1.0.30001109", + "colorette": "^1.2.1", + "normalize-range": "^0.1.2", + "num2fraction": "^1.2.2", + "postcss": "^7.0.32", + "postcss-value-parser": "^4.1.0" + } + }, + "aws-sign2": { + "version": "0.7.0", + "resolved": "https://registry.npmjs.org/aws-sign2/-/aws-sign2-0.7.0.tgz", + "integrity": "sha1-tG6JCTSpWR8tL2+G1+ap8bP+dqg=", + "dev": true + }, + "aws4": { + "version": "1.11.0", + "resolved": "https://registry.npmjs.org/aws4/-/aws4-1.11.0.tgz", + "integrity": "sha512-xh1Rl34h6Fi1DC2WWKfxUTVqRsNnr6LsKz2+hfwDxQJWmrx8+c7ylaqBMcHfl1U1r2dsifOvKX3LQuLNZ+XSvA==", + "dev": true + }, + "axios": { + "version": "0.21.4", + "resolved": "https://registry.npmjs.org/axios/-/axios-0.21.4.tgz", + "integrity": "sha512-ut5vewkiu8jjGBdqpM44XxjuCjq9LAKeHVmoVfHVzy8eHgxxq8SbAVQNovDA8mVi05kP0Ea/n/UzcSHcTJQfNg==", + "dev": true, + "requires": { + "follow-redirects": "^1.14.0" + } + }, + "babel-loader": { + "version": "8.2.2", + "resolved": "https://registry.npmjs.org/babel-loader/-/babel-loader-8.2.2.tgz", + "integrity": 
"sha512-JvTd0/D889PQBtUXJ2PXaKU/pjZDMtHA9V2ecm+eNRmmBCMR09a+fmpGTNwnJtFmFl5Ei7Vy47LjBb+L0wQ99g==", + "dev": true, + "requires": { + "find-cache-dir": "^3.3.1", + "loader-utils": "^1.4.0", + "make-dir": "^3.1.0", + "schema-utils": "^2.6.5" + } + }, + "babel-plugin-dynamic-import-node": { + "version": "2.3.3", + "resolved": "https://registry.npmjs.org/babel-plugin-dynamic-import-node/-/babel-plugin-dynamic-import-node-2.3.3.tgz", + "integrity": "sha512-jZVI+s9Zg3IqA/kdi0i6UDCybUI3aSBLnglhYbSSjKlV7yF1F/5LWv8MakQmvYpnbJDS6fcBL2KzHSxNCMtWSQ==", + "dev": true, + "requires": { + "object.assign": "^4.1.0" + } + }, + "babel-plugin-polyfill-corejs2": { + "version": "0.2.1", + "resolved": "https://registry.npmjs.org/babel-plugin-polyfill-corejs2/-/babel-plugin-polyfill-corejs2-0.2.1.tgz", + "integrity": "sha512-hXGSPbr6IbjeMyGew+3uGIAkRjBFSOJ9FLDZNOfHuyJZCcoia4nd/72J0bSgvfytcVfUcP/dxEVcUhVJuQRtSw==", + "dev": true, + "requires": { + "@babel/compat-data": "^7.13.11", + "@babel/helper-define-polyfill-provider": "^0.2.1", + "semver": "^6.1.1" + } + }, + "babel-plugin-polyfill-corejs3": { + "version": "0.2.1", + "resolved": "https://registry.npmjs.org/babel-plugin-polyfill-corejs3/-/babel-plugin-polyfill-corejs3-0.2.1.tgz", + "integrity": "sha512-WZCqF3DLUhdTD/P381MDJfuP18hdCZ+iqJ+wHtzhWENpsiof284JJ1tMQg1CE+hfCWyG48F7e5gDMk2c3Laz7w==", + "dev": true, + "requires": { + "@babel/helper-define-polyfill-provider": "^0.2.1", + "core-js-compat": "^3.9.1" + } + }, + "babel-plugin-polyfill-regenerator": { + "version": "0.2.1", + "resolved": "https://registry.npmjs.org/babel-plugin-polyfill-regenerator/-/babel-plugin-polyfill-regenerator-0.2.1.tgz", + "integrity": "sha512-T3bYyL3Sll2EtC94v3f+fA8M28q7YPTOZdB++SRHjvYZTvtd+WorMUq3tDTD4Q7Kjk1LG0gGromslKjcO5p2TA==", + "dev": true, + "requires": { + "@babel/helper-define-polyfill-provider": "^0.2.1" + } + }, + "balanced-match": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/balanced-match/-/balanced-match-1.0.2.tgz", + 
"integrity": "sha512-3oSeUO0TMV67hN1AmbXsK4yaqU7tjiHlbxRDZOpH0KW9+CeX4bRAaX0Anxt0tx2MrpRpWwQaPwIlISEJhYU5Pw==", + "dev": true + }, + "base": { + "version": "0.11.2", + "resolved": "https://registry.npmjs.org/base/-/base-0.11.2.tgz", + "integrity": "sha512-5T6P4xPgpp0YDFvSWwEZ4NoE3aM4QBQXDzmVbraCkFj8zHM+mba8SyqB5DbZWyR7mYHo6Y7BdQo3MoA4m0TeQg==", + "dev": true, + "requires": { + "cache-base": "^1.0.1", + "class-utils": "^0.3.5", + "component-emitter": "^1.2.1", + "define-property": "^1.0.0", + "isobject": "^3.0.1", + "mixin-deep": "^1.2.0", + "pascalcase": "^0.1.1" + }, + "dependencies": { + "define-property": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/define-property/-/define-property-1.0.0.tgz", + "integrity": "sha1-dp66rz9KY6rTr56NMEybvnm/sOY=", + "dev": true, + "requires": { + "is-descriptor": "^1.0.0" + } + }, + "is-accessor-descriptor": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/is-accessor-descriptor/-/is-accessor-descriptor-1.0.0.tgz", + "integrity": "sha512-m5hnHTkcVsPfqx3AKlyttIPb7J+XykHvJP2B9bZDjlhLIoEq4XoK64Vg7boZlVWYK6LUY94dYPEE7Lh0ZkZKcQ==", + "dev": true, + "requires": { + "kind-of": "^6.0.0" + } + }, + "is-data-descriptor": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/is-data-descriptor/-/is-data-descriptor-1.0.0.tgz", + "integrity": "sha512-jbRXy1FmtAoCjQkVmIVYwuuqDFUbaOeDjmed1tOGPrsMhtJA4rD9tkgA0F1qJ3gRFRXcHYVkdeaP50Q5rE/jLQ==", + "dev": true, + "requires": { + "kind-of": "^6.0.0" + } + }, + "is-descriptor": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/is-descriptor/-/is-descriptor-1.0.2.tgz", + "integrity": "sha512-2eis5WqQGV7peooDyLmNEPUrps9+SXX5c9pL3xEB+4e9HnGuDa7mB7kHxHw4CbqS9k1T2hOH3miL8n8WtiYVtg==", + "dev": true, + "requires": { + "is-accessor-descriptor": "^1.0.0", + "is-data-descriptor": "^1.0.0", + "kind-of": "^6.0.2" + } + } + } + }, + "base64-js": { + "version": "1.5.1", + "resolved": "https://registry.npmjs.org/base64-js/-/base64-js-1.5.1.tgz", + 
"integrity": "sha512-AKpaYlHn8t4SVbOHCy+b5+KKgvR4vrsD8vbvrbiQJps7fKDTkjkDry6ji0rUJjC0kzbNePLwzxq8iypo41qeWA==", + "dev": true + }, + "batch": { + "version": "0.6.1", + "resolved": "https://registry.npmjs.org/batch/-/batch-0.6.1.tgz", + "integrity": "sha1-3DQxT05nkxgJP8dgJyUl+UvyXBY=", + "dev": true + }, + "bcrypt-pbkdf": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/bcrypt-pbkdf/-/bcrypt-pbkdf-1.0.2.tgz", + "integrity": "sha1-pDAdOJtqQ/m2f/PKEaP2Y342Dp4=", + "dev": true, + "requires": { + "tweetnacl": "^0.14.3" + } + }, + "big.js": { + "version": "5.2.2", + "resolved": "https://registry.npmjs.org/big.js/-/big.js-5.2.2.tgz", + "integrity": "sha512-vyL2OymJxmarO8gxMr0mhChsO9QGwhynfuu4+MHTAW6czfq9humCB7rKpUjDd9YUiDPU4mzpyupFSvOClAwbmQ==", + "dev": true + }, + "binary-extensions": { + "version": "1.13.1", + "resolved": "https://registry.npmjs.org/binary-extensions/-/binary-extensions-1.13.1.tgz", + "integrity": "sha512-Un7MIEDdUC5gNpcGDV97op1Ywk748MpHcFTHoYs6qnj1Z3j7I53VG3nwZhKzoBZmbdRNnb6WRdFlwl7tSDuZGw==", + "dev": true + }, + "bindings": { + "version": "1.5.0", + "resolved": "https://registry.npmjs.org/bindings/-/bindings-1.5.0.tgz", + "integrity": "sha512-p2q/t/mhvuOj/UeLlV6566GD/guowlr0hHxClI0W9m7MWYkL1F0hLo+0Aexs9HSPCtR1SXQ0TD3MMKrXZajbiQ==", + "dev": true, + "optional": true, + "requires": { + "file-uri-to-path": "1.0.0" + } + }, + "bluebird": { + "version": "3.7.2", + "resolved": "https://registry.npmjs.org/bluebird/-/bluebird-3.7.2.tgz", + "integrity": "sha512-XpNj6GDQzdfW+r2Wnn7xiSAd7TM3jzkxGXBGTtWKuSXv1xUV+azxAm8jdWZN06QTQk+2N2XB9jRDkvbmQmcRtg==", + "dev": true + }, + "bn.js": { + "version": "5.2.0", + "resolved": "https://registry.npmjs.org/bn.js/-/bn.js-5.2.0.tgz", + "integrity": "sha512-D7iWRBvnZE8ecXiLj/9wbxH7Tk79fAh8IHaTNq1RWRixsS02W+5qS+iE9yq6RYl0asXx5tw0bLhmT5pIfbSquw==", + "dev": true + }, + "body-parser": { + "version": "1.19.0", + "resolved": "https://registry.npmjs.org/body-parser/-/body-parser-1.19.0.tgz", + "integrity": 
"sha512-dhEPs72UPbDnAQJ9ZKMNTP6ptJaionhP5cBb541nXPlW60Jepo9RV/a4fX4XWW9CuFNK22krhrj1+rgzifNCsw==", + "dev": true, + "requires": { + "bytes": "3.1.0", + "content-type": "~1.0.4", + "debug": "2.6.9", + "depd": "~1.1.2", + "http-errors": "1.7.2", + "iconv-lite": "0.4.24", + "on-finished": "~2.3.0", + "qs": "6.7.0", + "raw-body": "2.4.0", + "type-is": "~1.6.17" + }, + "dependencies": { + "debug": { + "version": "2.6.9", + "resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz", + "integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==", + "dev": true, + "requires": { + "ms": "2.0.0" + } + }, + "ms": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz", + "integrity": "sha1-VgiurfwAvmwpAd9fmGF4jeDVl8g=", + "dev": true + }, + "qs": { + "version": "6.7.0", + "resolved": "https://registry.npmjs.org/qs/-/qs-6.7.0.tgz", + "integrity": "sha512-VCdBRNFTX1fyE7Nb6FYoURo/SPe62QCaAyzJvUjwRaIsc+NePBEniHlvxFmmX56+HZphIGtV0XeCirBtpDrTyQ==", + "dev": true + } + } + }, + "bonjour": { + "version": "3.5.0", + "resolved": "https://registry.npmjs.org/bonjour/-/bonjour-3.5.0.tgz", + "integrity": "sha1-jokKGD2O6aI5OzhExpGkK897yfU=", + "dev": true, + "requires": { + "array-flatten": "^2.1.0", + "deep-equal": "^1.0.1", + "dns-equal": "^1.0.0", + "dns-txt": "^2.0.2", + "multicast-dns": "^6.0.1", + "multicast-dns-service-types": "^1.1.0" + } + }, + "boolbase": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/boolbase/-/boolbase-1.0.0.tgz", + "integrity": "sha1-aN/1++YMUes3cl6p4+0xDcwed24=", + "dev": true + }, + "boxen": { + "version": "4.2.0", + "resolved": "https://registry.npmjs.org/boxen/-/boxen-4.2.0.tgz", + "integrity": "sha512-eB4uT9RGzg2odpER62bBwSLvUeGC+WbRjjyyFhGsKnc8wp/m0+hQsMUvUe3H2V0D5vw0nBdO1hCJoZo5mKeuIQ==", + "dev": true, + "requires": { + "ansi-align": "^3.0.0", + "camelcase": "^5.3.1", + "chalk": "^3.0.0", + "cli-boxes": "^2.2.0", + "string-width": "^4.1.0", + 
"term-size": "^2.1.0", + "type-fest": "^0.8.1", + "widest-line": "^3.1.0" + }, + "dependencies": { + "ansi-regex": { + "version": "5.0.0", + "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz", + "integrity": "sha512-bY6fj56OUQ0hU1KjFNDQuJFezqKdrAyFdIevADiqrWHwSlbmBNMHp5ak2f40Pm8JTFyM2mqxkG6ngkHO11f/lg==", + "dev": true + }, + "ansi-styles": { + "version": "4.3.0", + "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-4.3.0.tgz", + "integrity": "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg==", + "dev": true, + "requires": { + "color-convert": "^2.0.1" + } + }, + "camelcase": { + "version": "5.3.1", + "resolved": "https://registry.npmjs.org/camelcase/-/camelcase-5.3.1.tgz", + "integrity": "sha512-L28STB170nwWS63UjtlEOE3dldQApaJXZkOI1uMFfzf3rRuPegHaHesyee+YxQ+W6SvRDQV6UrdOdRiR153wJg==", + "dev": true + }, + "chalk": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/chalk/-/chalk-3.0.0.tgz", + "integrity": "sha512-4D3B6Wf41KOYRFdszmDqMCGq5VV/uMAB273JILmO+3jAlh8X4qDtdtgCR3fxtbLEMzSx22QdhnDcJvu2u1fVwg==", + "dev": true, + "requires": { + "ansi-styles": "^4.1.0", + "supports-color": "^7.1.0" + } + }, + "color-convert": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-2.0.1.tgz", + "integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==", + "dev": true, + "requires": { + "color-name": "~1.1.4" + } + }, + "color-name": { + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.4.tgz", + "integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==", + "dev": true + }, + "emoji-regex": { + "version": "8.0.0", + "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz", + "integrity": 
"sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A==", + "dev": true + }, + "has-flag": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/has-flag/-/has-flag-4.0.0.tgz", + "integrity": "sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ==", + "dev": true + }, + "is-fullwidth-code-point": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/is-fullwidth-code-point/-/is-fullwidth-code-point-3.0.0.tgz", + "integrity": "sha512-zymm5+u+sCsSWyD9qNaejV3DFvhCKclKdizYaJUuHA83RLjb7nSuGnddCHGv0hk+KY7BMAlsWeK4Ueg6EV6XQg==", + "dev": true + }, + "string-width": { + "version": "4.2.2", + "resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.2.tgz", + "integrity": "sha512-XBJbT3N4JhVumXE0eoLU9DCjcaF92KLNqTmFCnG1pf8duUxFGwtP6AD6nkjw9a3IdiRtL3E2w3JDiE/xi3vOeA==", + "dev": true, + "requires": { + "emoji-regex": "^8.0.0", + "is-fullwidth-code-point": "^3.0.0", + "strip-ansi": "^6.0.0" + } + }, + "strip-ansi": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.0.tgz", + "integrity": "sha512-AuvKTrTfQNYNIctbR1K/YGTR1756GycPsg7b9bdV9Duqur4gv6aKqHXah67Z8ImS7WEz5QVcOtlfW2rZEugt6w==", + "dev": true, + "requires": { + "ansi-regex": "^5.0.0" + } + }, + "supports-color": { + "version": "7.2.0", + "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-7.2.0.tgz", + "integrity": "sha512-qpCAvRl9stuOHveKsn7HncJRvv501qIacKzQlO/+Lwxc9+0q2wLyv4Dfvt80/DPn2pqOBsJdDiogXGR9+OvwRw==", + "dev": true, + "requires": { + "has-flag": "^4.0.0" + } + }, + "type-fest": { + "version": "0.8.1", + "resolved": "https://registry.npmjs.org/type-fest/-/type-fest-0.8.1.tgz", + "integrity": "sha512-4dbzIzqvjtgiM5rw1k5rEHtBANKmdudhGyBEajN01fEyhaAIhsoKNy6y7+IN93IfpFtwY9iqi7kD+xwKhQsNJA==", + "dev": true + } + } + }, + "brace-expansion": { + "version": "1.1.11", + "resolved": 
"https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.11.tgz", + "integrity": "sha512-iCuPHDFgrHX7H2vEI/5xpz07zSHB00TpugqhmYtVmMO6518mCuRMoOYFldEBl0g187ufozdaHgWKcYFb61qGiA==", + "dev": true, + "requires": { + "balanced-match": "^1.0.0", + "concat-map": "0.0.1" + } + }, + "braces": { + "version": "2.3.2", + "resolved": "https://registry.npmjs.org/braces/-/braces-2.3.2.tgz", + "integrity": "sha512-aNdbnj9P8PjdXU4ybaWLK2IF3jc/EoDYbC7AazW6to3TRsfXxscC9UXOB5iDiEQrkyIbWp2SLQda4+QAa7nc3w==", + "dev": true, + "requires": { + "arr-flatten": "^1.1.0", + "array-unique": "^0.3.2", + "extend-shallow": "^2.0.1", + "fill-range": "^4.0.0", + "isobject": "^3.0.1", + "repeat-element": "^1.1.2", + "snapdragon": "^0.8.1", + "snapdragon-node": "^2.0.1", + "split-string": "^3.0.2", + "to-regex": "^3.0.1" + }, + "dependencies": { + "extend-shallow": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/extend-shallow/-/extend-shallow-2.0.1.tgz", + "integrity": "sha1-Ua99YUrZqfYQ6huvu5idaxxWiQ8=", + "dev": true, + "requires": { + "is-extendable": "^0.1.0" + } + } + } + }, + "brorand": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/brorand/-/brorand-1.1.0.tgz", + "integrity": "sha1-EsJe/kCkXjwyPrhnWgoM5XsiNx8=", + "dev": true + }, + "browserify-aes": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/browserify-aes/-/browserify-aes-1.2.0.tgz", + "integrity": "sha512-+7CHXqGuspUn/Sl5aO7Ea0xWGAtETPXNSAjHo48JfLdPWcMng33Xe4znFvQweqc/uzk5zSOI3H52CYnjCfb5hA==", + "dev": true, + "requires": { + "buffer-xor": "^1.0.3", + "cipher-base": "^1.0.0", + "create-hash": "^1.1.0", + "evp_bytestokey": "^1.0.3", + "inherits": "^2.0.1", + "safe-buffer": "^5.0.1" + } + }, + "browserify-cipher": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/browserify-cipher/-/browserify-cipher-1.0.1.tgz", + "integrity": "sha512-sPhkz0ARKbf4rRQt2hTpAHqn47X3llLkUGn+xEJzLjwY8LRs2p0v7ljvI5EyoRO/mexrNunNECisZs+gw2zz1w==", + "dev": true, + "requires": { + 
"browserify-aes": "^1.0.4", + "browserify-des": "^1.0.0", + "evp_bytestokey": "^1.0.0" + } + }, + "browserify-des": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/browserify-des/-/browserify-des-1.0.2.tgz", + "integrity": "sha512-BioO1xf3hFwz4kc6iBhI3ieDFompMhrMlnDFC4/0/vd5MokpuAc3R+LYbwTA9A5Yc9pq9UYPqffKpW2ObuwX5A==", + "dev": true, + "requires": { + "cipher-base": "^1.0.1", + "des.js": "^1.0.0", + "inherits": "^2.0.1", + "safe-buffer": "^5.1.2" + } + }, + "browserify-rsa": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/browserify-rsa/-/browserify-rsa-4.1.0.tgz", + "integrity": "sha512-AdEER0Hkspgno2aR97SAf6vi0y0k8NuOpGnVH3O99rcA5Q6sh8QxcngtHuJ6uXwnfAXNM4Gn1Gb7/MV1+Ymbog==", + "dev": true, + "requires": { + "bn.js": "^5.0.0", + "randombytes": "^2.0.1" + } + }, + "browserify-sign": { + "version": "4.2.1", + "resolved": "https://registry.npmjs.org/browserify-sign/-/browserify-sign-4.2.1.tgz", + "integrity": "sha512-/vrA5fguVAKKAVTNJjgSm1tRQDHUU6DbwO9IROu/0WAzC8PKhucDSh18J0RMvVeHAn5puMd+QHC2erPRNf8lmg==", + "dev": true, + "requires": { + "bn.js": "^5.1.1", + "browserify-rsa": "^4.0.1", + "create-hash": "^1.2.0", + "create-hmac": "^1.1.7", + "elliptic": "^6.5.3", + "inherits": "^2.0.4", + "parse-asn1": "^5.1.5", + "readable-stream": "^3.6.0", + "safe-buffer": "^5.2.0" + }, + "dependencies": { + "readable-stream": { + "version": "3.6.0", + "resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-3.6.0.tgz", + "integrity": "sha512-BViHy7LKeTz4oNnkcLJ+lVSL6vpiFeX6/d3oSH8zCW7UxP2onchk+vTGB143xuFjHS3deTgkKoXXymXqymiIdA==", + "dev": true, + "requires": { + "inherits": "^2.0.3", + "string_decoder": "^1.1.1", + "util-deprecate": "^1.0.1" + } + }, + "safe-buffer": { + "version": "5.2.1", + "resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.2.1.tgz", + "integrity": "sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ==", + "dev": true + } + } + }, + 
"browserify-zlib": { + "version": "0.2.0", + "resolved": "https://registry.npmjs.org/browserify-zlib/-/browserify-zlib-0.2.0.tgz", + "integrity": "sha512-Z942RysHXmJrhqk88FmKBVq/v5tqmSkDz7p54G/MGyjMnCFFnC79XWNbg+Vta8W6Wb2qtSZTSxIGkJrRpCFEiA==", + "dev": true, + "requires": { + "pako": "~1.0.5" + } + }, + "browserslist": { + "version": "4.16.6", + "resolved": "https://registry.npmjs.org/browserslist/-/browserslist-4.16.6.tgz", + "integrity": "sha512-Wspk/PqO+4W9qp5iUTJsa1B/QrYn1keNCcEP5OvP7WBwT4KaDly0uONYmC6Xa3Z5IqnUgS0KcgLYu1l74x0ZXQ==", + "dev": true, + "requires": { + "caniuse-lite": "^1.0.30001219", + "colorette": "^1.2.2", + "electron-to-chromium": "^1.3.723", + "escalade": "^3.1.1", + "node-releases": "^1.1.71" + } + }, + "buffer": { + "version": "4.9.2", + "resolved": "https://registry.npmjs.org/buffer/-/buffer-4.9.2.tgz", + "integrity": "sha512-xq+q3SRMOxGivLhBNaUdC64hDTQwejJ+H0T/NB1XMtTVEwNTrfFF3gAxiyW0Bu/xWEGhjVKgUcMhCrUy2+uCWg==", + "dev": true, + "requires": { + "base64-js": "^1.0.2", + "ieee754": "^1.1.4", + "isarray": "^1.0.0" + } + }, + "buffer-from": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/buffer-from/-/buffer-from-1.1.1.tgz", + "integrity": "sha512-MQcXEUbCKtEo7bhqEs6560Hyd4XaovZlO/k9V3hjVUF/zwW7KBVdSK4gIt/bzwS9MbR5qob+F5jusZsb0YQK2A==", + "dev": true + }, + "buffer-indexof": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/buffer-indexof/-/buffer-indexof-1.1.1.tgz", + "integrity": "sha512-4/rOEg86jivtPTeOUUT61jJO1Ya1TrR/OkqCSZDyq84WJh3LuuiphBYJN+fm5xufIk4XAFcEwte/8WzC8If/1g==", + "dev": true + }, + "buffer-json": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/buffer-json/-/buffer-json-2.0.0.tgz", + "integrity": "sha512-+jjPFVqyfF1esi9fvfUs3NqM0pH1ziZ36VP4hmA/y/Ssfo/5w5xHKfTw9BwQjoJ1w/oVtpLomqwUHKdefGyuHw==", + "dev": true + }, + "buffer-xor": { + "version": "1.0.3", + "resolved": "https://registry.npmjs.org/buffer-xor/-/buffer-xor-1.0.3.tgz", + "integrity": 
"sha1-JuYe0UIvtw3ULm42cp7VHYVf6Nk=", + "dev": true + }, + "builtin-status-codes": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/builtin-status-codes/-/builtin-status-codes-3.0.0.tgz", + "integrity": "sha1-hZgoeOIbmOHGZCXgPQF0eI9Wnug=", + "dev": true + }, + "builtins": { + "version": "1.0.3", + "resolved": "https://registry.npmjs.org/builtins/-/builtins-1.0.3.tgz", + "integrity": "sha1-y5T662HIaWRR2zZTThQi+U8K7og=", + "dev": true + }, + "bytes": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/bytes/-/bytes-3.1.0.tgz", + "integrity": "sha512-zauLjrfCG+xvoyaqLoV8bLVXXNGC4JqlxFCutSDWA6fJrTo2ZuvLYTqZ7aHBLZSMOopbzwv8f+wZcVzfVTI2Dg==", + "dev": true + }, + "cac": { + "version": "6.7.3", + "resolved": "https://registry.npmjs.org/cac/-/cac-6.7.3.tgz", + "integrity": "sha512-ECVqVZh74qgSuZG9YOt2OJPI3wGcf+EwwuF/XIOYqZBD0KZYLtgPWqFPxmDPQ6joxI1nOlvVgRV6VT53Ooyocg==", + "dev": true + }, + "cacache": { + "version": "12.0.4", + "resolved": "https://registry.npmjs.org/cacache/-/cacache-12.0.4.tgz", + "integrity": "sha512-a0tMB40oefvuInr4Cwb3GerbL9xTj1D5yg0T5xrjGCGyfvbxseIXX7BAO/u/hIXdafzOI5JC3wDwHyf24buOAQ==", + "dev": true, + "requires": { + "bluebird": "^3.5.5", + "chownr": "^1.1.1", + "figgy-pudding": "^3.5.1", + "glob": "^7.1.4", + "graceful-fs": "^4.1.15", + "infer-owner": "^1.0.3", + "lru-cache": "^5.1.1", + "mississippi": "^3.0.0", + "mkdirp": "^0.5.1", + "move-concurrently": "^1.0.1", + "promise-inflight": "^1.0.1", + "rimraf": "^2.6.3", + "ssri": "^6.0.1", + "unique-filename": "^1.1.1", + "y18n": "^4.0.0" + } + }, + "cache-base": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/cache-base/-/cache-base-1.0.1.tgz", + "integrity": "sha512-AKcdTnFSWATd5/GCPRxr2ChwIJ85CeyrEyjRHlKxQ56d4XJMGym0uAiKn0xbLOGOl3+yRpOTi484dVCEc5AUzQ==", + "dev": true, + "requires": { + "collection-visit": "^1.0.0", + "component-emitter": "^1.2.1", + "get-value": "^2.0.6", + "has-value": "^1.0.0", + "isobject": "^3.0.1", + "set-value": "^2.0.0", + 
"to-object-path": "^0.3.0", + "union-value": "^1.0.0", + "unset-value": "^1.0.0" + } + }, + "cache-loader": { + "version": "3.0.1", + "resolved": "https://registry.npmjs.org/cache-loader/-/cache-loader-3.0.1.tgz", + "integrity": "sha512-HzJIvGiGqYsFUrMjAJNDbVZoG7qQA+vy9AIoKs7s9DscNfki0I589mf2w6/tW+kkFH3zyiknoWV5Jdynu6b/zw==", + "dev": true, + "requires": { + "buffer-json": "^2.0.0", + "find-cache-dir": "^2.1.0", + "loader-utils": "^1.2.3", + "mkdirp": "^0.5.1", + "neo-async": "^2.6.1", + "schema-utils": "^1.0.0" + }, + "dependencies": { + "find-cache-dir": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/find-cache-dir/-/find-cache-dir-2.1.0.tgz", + "integrity": "sha512-Tq6PixE0w/VMFfCgbONnkiQIVol/JJL7nRMi20fqzA4NRs9AfeqMGeRdPi3wIhYkxjeBaWh2rxwapn5Tu3IqOQ==", + "dev": true, + "requires": { + "commondir": "^1.0.1", + "make-dir": "^2.0.0", + "pkg-dir": "^3.0.0" + } + }, + "find-up": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/find-up/-/find-up-3.0.0.tgz", + "integrity": "sha512-1yD6RmLI1XBfxugvORwlck6f75tYL+iR0jqwsOrOxMZyGYqUuDhJ0l4AXdO1iX/FTs9cBAMEk1gWSEx1kSbylg==", + "dev": true, + "requires": { + "locate-path": "^3.0.0" + } + }, + "locate-path": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/locate-path/-/locate-path-3.0.0.tgz", + "integrity": "sha512-7AO748wWnIhNqAuaty2ZWHkQHRSNfPVIsPIfwEOWO22AmaoVrWavlOcMR5nzTLNYvp36X220/maaRsrec1G65A==", + "dev": true, + "requires": { + "p-locate": "^3.0.0", + "path-exists": "^3.0.0" + } + }, + "make-dir": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/make-dir/-/make-dir-2.1.0.tgz", + "integrity": "sha512-LS9X+dc8KLxXCb8dni79fLIIUA5VyZoyjSMCwTluaXA0o27cCK0bhXkpgw+sTXVpPy/lSO57ilRixqk0vDmtRA==", + "dev": true, + "requires": { + "pify": "^4.0.1", + "semver": "^5.6.0" + } + }, + "p-locate": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/p-locate/-/p-locate-3.0.0.tgz", + "integrity": 
"sha512-x+12w/To+4GFfgJhBEpiDcLozRJGegY+Ei7/z0tSLkMmxGZNybVMSfWj9aJn8Z5Fc7dBUNJOOVgPv2H7IwulSQ==", + "dev": true, + "requires": { + "p-limit": "^2.0.0" + } + }, + "path-exists": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/path-exists/-/path-exists-3.0.0.tgz", + "integrity": "sha1-zg6+ql94yxiSXqfYENe1mwEP1RU=", + "dev": true + }, + "pkg-dir": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/pkg-dir/-/pkg-dir-3.0.0.tgz", + "integrity": "sha512-/E57AYkoeQ25qkxMj5PBOVgF8Kiu/h7cYS30Z5+R7WaiCCBfLq58ZI/dSeaEKb9WVJV5n/03QwrN3IeWIFllvw==", + "dev": true, + "requires": { + "find-up": "^3.0.0" + } + }, + "schema-utils": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-1.0.0.tgz", + "integrity": "sha512-i27Mic4KovM/lnGsy8whRCHhc7VicJajAjTrYg11K9zfZXnYIt4k5F+kZkwjnrhKzLic/HLU4j11mjsz2G/75g==", + "dev": true, + "requires": { + "ajv": "^6.1.0", + "ajv-errors": "^1.0.0", + "ajv-keywords": "^3.1.0" + } + }, + "semver": { + "version": "5.7.1", + "resolved": "https://registry.npmjs.org/semver/-/semver-5.7.1.tgz", + "integrity": "sha512-sauaDf/PZdVgrLTNYHRtpXa1iRiKcaebiKQ1BJdpQlWH2lCvexQdX55snPFyK7QzpudqbCI0qXFfOasHdyNDGQ==", + "dev": true + } + } + }, + "cacheable-request": { + "version": "6.1.0", + "resolved": "https://registry.npmjs.org/cacheable-request/-/cacheable-request-6.1.0.tgz", + "integrity": "sha512-Oj3cAGPCqOZX7Rz64Uny2GYAZNliQSqfbePrgAQ1wKAihYmCUnraBtJtKcGR4xz7wF+LoJC+ssFZvv5BgF9Igg==", + "dev": true, + "requires": { + "clone-response": "^1.0.2", + "get-stream": "^5.1.0", + "http-cache-semantics": "^4.0.0", + "keyv": "^3.0.0", + "lowercase-keys": "^2.0.0", + "normalize-url": "^4.1.0", + "responselike": "^1.0.2" + }, + "dependencies": { + "get-stream": { + "version": "5.2.0", + "resolved": "https://registry.npmjs.org/get-stream/-/get-stream-5.2.0.tgz", + "integrity": "sha512-nBF+F1rAZVCu/p7rjzgA+Yb4lfYXrpl7a6VmJrU8wF9I1CKvP/QwPNZHnOlwbTkY6dvtFIzFMSyQXbLoTQPRpA==", + "dev": true, + 
"requires": { + "pump": "^3.0.0" + } + }, + "lowercase-keys": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/lowercase-keys/-/lowercase-keys-2.0.0.tgz", + "integrity": "sha512-tqNXrS78oMOE73NMxK4EMLQsQowWf8jKooH9g7xPavRT706R6bkQJ6DY2Te7QukaZsulxa30wQ7bk0pm4XiHmA==", + "dev": true + }, + "normalize-url": { + "version": "4.5.1", + "resolved": "https://registry.npmjs.org/normalize-url/-/normalize-url-4.5.1.tgz", + "integrity": "sha512-9UZCFRHQdNrfTpGg8+1INIg93B6zE0aXMVFkw1WFwvO4SlZywU6aLg5Of0Ap/PgcbSw4LNxvMWXMeugwMCX0AA==", + "dev": true + } + } + }, + "call-bind": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/call-bind/-/call-bind-1.0.2.tgz", + "integrity": "sha512-7O+FbCihrB5WGbFYesctwmTKae6rOiIzmz1icreWJ+0aA7LJfuqhEso2T9ncpcFtzMQtzXf2QGGueWJGTYsqrA==", + "dev": true, + "requires": { + "function-bind": "^1.1.1", + "get-intrinsic": "^1.0.2" + } + }, + "call-me-maybe": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/call-me-maybe/-/call-me-maybe-1.0.1.tgz", + "integrity": "sha1-JtII6onje1y95gJQoV8DHBak1ms=", + "dev": true + }, + "caller-callsite": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/caller-callsite/-/caller-callsite-2.0.0.tgz", + "integrity": "sha1-hH4PzgoiN1CpoCfFSzNzGtMVQTQ=", + "dev": true, + "requires": { + "callsites": "^2.0.0" + } + }, + "caller-path": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/caller-path/-/caller-path-2.0.0.tgz", + "integrity": "sha1-Ro+DBE42mrIBD6xfBs7uFbsssfQ=", + "dev": true, + "requires": { + "caller-callsite": "^2.0.0" + } + }, + "callsites": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/callsites/-/callsites-2.0.0.tgz", + "integrity": "sha1-BuuE8A7qQT2oav/vrL/7Ngk7PFA=", + "dev": true + }, + "camel-case": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/camel-case/-/camel-case-3.0.0.tgz", + "integrity": "sha1-yjw2iKTpzzpM2nd9xNy8cTJJz3M=", + "dev": true, + "requires": { + "no-case": "^2.2.0", 
+ "upper-case": "^1.1.1" + } + }, + "camelcase": { + "version": "6.2.0", + "resolved": "https://registry.npmjs.org/camelcase/-/camelcase-6.2.0.tgz", + "integrity": "sha512-c7wVvbw3f37nuobQNtgsgG9POC9qMbNuMQmTCqZv23b6MIz0fcYpBiOlv9gEN/hdLdnZTDQhg6e9Dq5M1vKvfg==", + "dev": true + }, + "camelcase-css": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/camelcase-css/-/camelcase-css-2.0.1.tgz", + "integrity": "sha512-QOSvevhslijgYwRx6Rv7zKdMF8lbRmx+uQGx2+vDc+KI/eBnsy9kit5aj23AgGu3pa4t9AgwbnXWqS+iOY+2aA==", + "dev": true + }, + "caniuse-api": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/caniuse-api/-/caniuse-api-3.0.0.tgz", + "integrity": "sha512-bsTwuIg/BZZK/vreVTYYbSWoe2F+71P7K5QGEX+pT250DZbfU1MQ5prOKpPR+LL6uWKK3KMwMCAS74QB3Um1uw==", + "dev": true, + "requires": { + "browserslist": "^4.0.0", + "caniuse-lite": "^1.0.0", + "lodash.memoize": "^4.1.2", + "lodash.uniq": "^4.5.0" + } + }, + "caniuse-lite": { + "version": "1.0.30001228", + "resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001228.tgz", + "integrity": "sha512-QQmLOGJ3DEgokHbMSA8cj2a+geXqmnpyOFT0lhQV6P3/YOJvGDEwoedcwxEQ30gJIwIIunHIicunJ2rzK5gB2A==", + "dev": true + }, + "caseless": { + "version": "0.12.0", + "resolved": "https://registry.npmjs.org/caseless/-/caseless-0.12.0.tgz", + "integrity": "sha1-G2gcIf+EAzyCZUMJBolCDRhxUdw=", + "dev": true + }, + "chalk": { + "version": "4.1.1", + "resolved": "https://registry.npmjs.org/chalk/-/chalk-4.1.1.tgz", + "integrity": "sha512-diHzdDKxcU+bAsUboHLPEDQiw0qEe0qd7SYUn3HgcFlWgbDcfLGswOHYeGrHKzG9z6UYf01d9VFMfZxPM1xZSg==", + "dev": true, + "requires": { + "ansi-styles": "^4.1.0", + "supports-color": "^7.1.0" + }, + "dependencies": { + "ansi-styles": { + "version": "4.3.0", + "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-4.3.0.tgz", + "integrity": "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg==", + "dev": true, + "requires": { + "color-convert": 
"^2.0.1" + } + }, + "color-convert": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-2.0.1.tgz", + "integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==", + "dev": true, + "requires": { + "color-name": "~1.1.4" + } + }, + "color-name": { + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.4.tgz", + "integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==", + "dev": true + }, + "has-flag": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/has-flag/-/has-flag-4.0.0.tgz", + "integrity": "sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ==", + "dev": true + }, + "supports-color": { + "version": "7.2.0", + "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-7.2.0.tgz", + "integrity": "sha512-qpCAvRl9stuOHveKsn7HncJRvv501qIacKzQlO/+Lwxc9+0q2wLyv4Dfvt80/DPn2pqOBsJdDiogXGR9+OvwRw==", + "dev": true, + "requires": { + "has-flag": "^4.0.0" + } + } + } + }, + "check-more-types": { + "version": "2.24.0", + "resolved": "https://registry.npmjs.org/check-more-types/-/check-more-types-2.24.0.tgz", + "integrity": "sha1-FCD/sQ/URNz8ebQ4kbv//TKoRgA=", + "dev": true + }, + "chokidar": { + "version": "2.1.8", + "resolved": "https://registry.npmjs.org/chokidar/-/chokidar-2.1.8.tgz", + "integrity": "sha512-ZmZUazfOzf0Nve7duiCKD23PFSCs4JPoYyccjUFF3aQkQadqBhfzhjkwBH2mNOG9cTBwhamM37EIsIkZw3nRgg==", + "dev": true, + "requires": { + "anymatch": "^2.0.0", + "async-each": "^1.0.1", + "braces": "^2.3.2", + "fsevents": "^1.2.7", + "glob-parent": "^3.1.0", + "inherits": "^2.0.3", + "is-binary-path": "^1.0.0", + "is-glob": "^4.0.0", + "normalize-path": "^3.0.0", + "path-is-absolute": "^1.0.0", + "readdirp": "^2.2.1", + "upath": "^1.1.1" + } + }, + "chownr": { + "version": "1.1.4", + "resolved": 
"https://registry.npmjs.org/chownr/-/chownr-1.1.4.tgz", + "integrity": "sha512-jJ0bqzaylmJtVnNgzTeSOs8DPavpbYgEr/b0YL8/2GO3xJEhInFmhKMUnEJQjZumK7KXGFhUy89PrsJWlakBVg==", + "dev": true + }, + "chrome-trace-event": { + "version": "1.0.3", + "resolved": "https://registry.npmjs.org/chrome-trace-event/-/chrome-trace-event-1.0.3.tgz", + "integrity": "sha512-p3KULyQg4S7NIHixdwbGX+nFHkoBiA4YQmyWtjb8XngSKV124nJmRysgAeujbUVb15vh+RvFUfCPqU7rXk+hZg==", + "dev": true + }, + "ci-info": { + "version": "3.1.1", + "resolved": "https://registry.npmjs.org/ci-info/-/ci-info-3.1.1.tgz", + "integrity": "sha512-kdRWLBIJwdsYJWYJFtAFFYxybguqeF91qpZaggjG5Nf8QKdizFG2hjqvaTXbxFIcYbSaD74KpAXv6BSm17DHEQ==", + "dev": true + }, + "cint": { + "version": "8.2.1", + "resolved": "https://registry.npmjs.org/cint/-/cint-8.2.1.tgz", + "integrity": "sha1-cDhrG0jidz0NYxZqVa/5TvRFahI=", + "dev": true + }, + "cipher-base": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/cipher-base/-/cipher-base-1.0.4.tgz", + "integrity": "sha512-Kkht5ye6ZGmwv40uUDZztayT2ThLQGfnj/T71N/XzeZeo3nf8foyW7zGTsPYkEya3m5f3cAypH+qe7YOrM1U2Q==", + "dev": true, + "requires": { + "inherits": "^2.0.1", + "safe-buffer": "^5.0.1" + } + }, + "class-utils": { + "version": "0.3.6", + "resolved": "https://registry.npmjs.org/class-utils/-/class-utils-0.3.6.tgz", + "integrity": "sha512-qOhPa/Fj7s6TY8H8esGu5QNpMMQxz79h+urzrNYN6mn+9BnxlDGf5QZ+XeCDsxSjPqsSR56XOZOJmpeurnLMeg==", + "dev": true, + "requires": { + "arr-union": "^3.1.0", + "define-property": "^0.2.5", + "isobject": "^3.0.0", + "static-extend": "^0.1.1" + }, + "dependencies": { + "define-property": { + "version": "0.2.5", + "resolved": "https://registry.npmjs.org/define-property/-/define-property-0.2.5.tgz", + "integrity": "sha1-w1se+RjsPJkPmlvFe+BKrOxcgRY=", + "dev": true, + "requires": { + "is-descriptor": "^0.1.0" + } + } + } + }, + "clean-css": { + "version": "4.2.3", + "resolved": "https://registry.npmjs.org/clean-css/-/clean-css-4.2.3.tgz", + "integrity": 
"sha512-VcMWDN54ZN/DS+g58HYL5/n4Zrqe8vHJpGA8KdgUXFU4fuP/aHNw8eld9SyEIyabIMJX/0RaY/fplOo5hYLSFA==", + "dev": true, + "requires": { + "source-map": "~0.6.0" + } + }, + "clean-stack": { + "version": "2.2.0", + "resolved": "https://registry.npmjs.org/clean-stack/-/clean-stack-2.2.0.tgz", + "integrity": "sha512-4diC9HaTE+KRAMWhDhrGOECgWZxoevMc5TlkObMqNSsVU62PYzXZ/SMTjzyGAFF1YusgxGcSWTEXBhp0CPwQ1A==", + "dev": true + }, + "cli-boxes": { + "version": "2.2.1", + "resolved": "https://registry.npmjs.org/cli-boxes/-/cli-boxes-2.2.1.tgz", + "integrity": "sha512-y4coMcylgSCdVinjiDBuR8PCC2bLjyGTwEmPb9NHR/QaNU6EUOXcTY/s6VjGMD6ENSEaeQYHCY0GNGS5jfMwPw==", + "dev": true + }, + "cli-table": { + "version": "0.3.6", + "resolved": "https://registry.npmjs.org/cli-table/-/cli-table-0.3.6.tgz", + "integrity": "sha512-ZkNZbnZjKERTY5NwC2SeMeLeifSPq/pubeRoTpdr3WchLlnZg6hEgvHkK5zL7KNFdd9PmHN8lxrENUwI3cE8vQ==", + "dev": true, + "requires": { + "colors": "1.0.3" + } + }, + "cliui": { + "version": "5.0.0", + "resolved": "https://registry.npmjs.org/cliui/-/cliui-5.0.0.tgz", + "integrity": "sha512-PYeGSEmmHM6zvoef2w8TPzlrnNpXIjTipYK780YswmIP9vjxmd6Y2a3CB2Ks6/AU8NHjZugXvo8w3oWM2qnwXA==", + "dev": true, + "requires": { + "string-width": "^3.1.0", + "strip-ansi": "^5.2.0", + "wrap-ansi": "^5.1.0" + }, + "dependencies": { + "ansi-regex": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.1.0.tgz", + "integrity": "sha512-1apePfXM1UOSqw0o9IiFAovVz9M5S1Dg+4TrDwfMewQ6p/rmMueb7tWZjQ1rx4Loy1ArBggoqGpfqqdI4rondg==", + "dev": true + }, + "strip-ansi": { + "version": "5.2.0", + "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-5.2.0.tgz", + "integrity": "sha512-DuRs1gKbBqsMKIZlrffwlug8MHkcnpjs5VPmL1PAh+mA30U0DTotfDZ0d2UUsXpPmPmMMJ6W773MaA3J+lbiWA==", + "dev": true, + "requires": { + "ansi-regex": "^4.1.0" + } + } + } + }, + "clone-response": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/clone-response/-/clone-response-1.0.2.tgz", + 
"integrity": "sha1-0dyXOSAxTfZ/vrlCI7TuNQI56Ws=", + "dev": true, + "requires": { + "mimic-response": "^1.0.0" + } + }, + "coa": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/coa/-/coa-2.0.2.tgz", + "integrity": "sha512-q5/jG+YQnSy4nRTV4F7lPepBJZ8qBNJJDBuJdoejDyLXgmL7IEo+Le2JDZudFTFt7mrCqIRaSjws4ygRCTCAXA==", + "dev": true, + "requires": { + "@types/q": "^1.5.1", + "chalk": "^2.4.1", + "q": "^1.1.2" + }, + "dependencies": { + "chalk": { + "version": "2.4.2", + "resolved": "https://registry.npmjs.org/chalk/-/chalk-2.4.2.tgz", + "integrity": "sha512-Mti+f9lpJNcwF4tWV8/OrTTtF1gZi+f8FqlyAdouralcFWFQWF2+NgCHShjkCb+IFBLq9buZwE1xckQU4peSuQ==", + "dev": true, + "requires": { + "ansi-styles": "^3.2.1", + "escape-string-regexp": "^1.0.5", + "supports-color": "^5.3.0" + } + }, + "supports-color": { + "version": "5.5.0", + "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-5.5.0.tgz", + "integrity": "sha512-QjVjwdXIt408MIiAqCX4oUKsgU2EqAGzs2Ppkm4aQYbjm+ZEWEcW4SfFNTr4uMNZma0ey4f5lgLrkB0aX0QMow==", + "dev": true, + "requires": { + "has-flag": "^3.0.0" + } + } + } + }, + "code-point-at": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/code-point-at/-/code-point-at-1.1.0.tgz", + "integrity": "sha1-DQcLTQQ6W+ozovGkDi7bPZpMz3c=", + "dev": true + }, + "collection-visit": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/collection-visit/-/collection-visit-1.0.0.tgz", + "integrity": "sha1-S8A3PBZLwykbTTaMgpzxqApZ3KA=", + "dev": true, + "requires": { + "map-visit": "^1.0.0", + "object-visit": "^1.0.0" + } + }, + "color": { + "version": "3.1.3", + "resolved": "https://registry.npmjs.org/color/-/color-3.1.3.tgz", + "integrity": "sha512-xgXAcTHa2HeFCGLE9Xs/R82hujGtu9Jd9x4NW3T34+OMs7VoPsjwzRczKHvTAHeJwWFwX5j15+MgAppE8ztObQ==", + "dev": true, + "requires": { + "color-convert": "^1.9.1", + "color-string": "^1.5.4" + } + }, + "color-convert": { + "version": "1.9.3", + "resolved": 
"https://registry.npmjs.org/color-convert/-/color-convert-1.9.3.tgz", + "integrity": "sha512-QfAUtd+vFdAtFQcC8CCyYt1fYWxSqAiK2cSD6zDB8N3cpsEBAvRxp9zOGg6G/SHHJYAT88/az/IuDGALsNVbGg==", + "dev": true, + "requires": { + "color-name": "1.1.3" + } + }, + "color-name": { + "version": "1.1.3", + "resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.3.tgz", + "integrity": "sha1-p9BVi9icQveV3UIyj3QIMcpTvCU=", + "dev": true + }, + "color-string": { + "version": "1.5.5", + "resolved": "https://registry.npmjs.org/color-string/-/color-string-1.5.5.tgz", + "integrity": "sha512-jgIoum0OfQfq9Whcfc2z/VhCNcmQjWbey6qBX0vqt7YICflUmBCh9E9CiQD5GSJ+Uehixm3NUwHVhqUAWRivZg==", + "dev": true, + "requires": { + "color-name": "^1.0.0", + "simple-swizzle": "^0.2.2" + } + }, + "colorette": { + "version": "1.2.2", + "resolved": "https://registry.npmjs.org/colorette/-/colorette-1.2.2.tgz", + "integrity": "sha512-MKGMzyfeuutC/ZJ1cba9NqcNpfeqMUcYmyF1ZFY6/Cn7CNSAKx6a+s48sqLqyAiZuaP2TcqMhoo+dlwFnVxT9w==", + "dev": true + }, + "colors": { + "version": "1.0.3", + "resolved": "https://registry.npmjs.org/colors/-/colors-1.0.3.tgz", + "integrity": "sha1-BDP0TYCWgP3rYO0mDxsMJi6CpAs=", + "dev": true + }, + "combined-stream": { + "version": "1.0.8", + "resolved": "https://registry.npmjs.org/combined-stream/-/combined-stream-1.0.8.tgz", + "integrity": "sha512-FQN4MRfuJeHf7cBbBMJFXhKSDq+2kAArBlmRBvcvFE5BB1HZKXtSFASDhdlz9zOYwxh8lDdnvmMOe/+5cdoEdg==", + "dev": true, + "requires": { + "delayed-stream": "~1.0.0" + } + }, + "commander": { + "version": "2.20.3", + "resolved": "https://registry.npmjs.org/commander/-/commander-2.20.3.tgz", + "integrity": "sha512-GpVkmM8vF2vQUkj2LvZmD35JxeJOLCwJ9cUkugyk2nuhbv3+mJvpLYYt+0+USMxE+oj+ey/lJEnhZw75x/OMcQ==", + "dev": true + }, + "commondir": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/commondir/-/commondir-1.0.1.tgz", + "integrity": "sha1-3dgA2gxmEnOTzKWVDqloo6rxJTs=", + "dev": true + }, + "component-emitter": { + "version": "1.3.0", + 
"resolved": "https://registry.npmjs.org/component-emitter/-/component-emitter-1.3.0.tgz", + "integrity": "sha512-Rd3se6QB+sO1TwqZjscQrurpEPIfO0/yYnSin6Q/rD3mOutHvUrCAhJub3r90uNb+SESBuE0QYoB90YdfatsRg==", + "dev": true + }, + "compressible": { + "version": "2.0.18", + "resolved": "https://registry.npmjs.org/compressible/-/compressible-2.0.18.tgz", + "integrity": "sha512-AF3r7P5dWxL8MxyITRMlORQNaOA2IkAFaTr4k7BUumjPtRpGDTZpl0Pb1XCO6JeDCBdp126Cgs9sMxqSjgYyRg==", + "dev": true, + "requires": { + "mime-db": ">= 1.43.0 < 2" + } + }, + "compression": { + "version": "1.7.4", + "resolved": "https://registry.npmjs.org/compression/-/compression-1.7.4.tgz", + "integrity": "sha512-jaSIDzP9pZVS4ZfQ+TzvtiWhdpFhE2RDHz8QJkpX9SIpLq88VueF5jJw6t+6CUQcAoA6t+x89MLrWAqpfDE8iQ==", + "dev": true, + "requires": { + "accepts": "~1.3.5", + "bytes": "3.0.0", + "compressible": "~2.0.16", + "debug": "2.6.9", + "on-headers": "~1.0.2", + "safe-buffer": "5.1.2", + "vary": "~1.1.2" + }, + "dependencies": { + "bytes": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/bytes/-/bytes-3.0.0.tgz", + "integrity": "sha1-0ygVQE1olpn4Wk6k+odV3ROpYEg=", + "dev": true + }, + "debug": { + "version": "2.6.9", + "resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz", + "integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==", + "dev": true, + "requires": { + "ms": "2.0.0" + } + }, + "ms": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz", + "integrity": "sha1-VgiurfwAvmwpAd9fmGF4jeDVl8g=", + "dev": true + } + } + }, + "concat-map": { + "version": "0.0.1", + "resolved": "https://registry.npmjs.org/concat-map/-/concat-map-0.0.1.tgz", + "integrity": "sha1-2Klr13/Wjfd5OnMDajug1UBdR3s=", + "dev": true + }, + "concat-stream": { + "version": "1.6.2", + "resolved": "https://registry.npmjs.org/concat-stream/-/concat-stream-1.6.2.tgz", + "integrity": 
"sha512-27HBghJxjiZtIk3Ycvn/4kbJk/1uZuJFfuPEns6LaEvpvG1f0hTea8lilrouyo9mVc2GWdcEZ8OLoGmSADlrCw==", + "dev": true, + "requires": { + "buffer-from": "^1.0.0", + "inherits": "^2.0.3", + "readable-stream": "^2.2.2", + "typedarray": "^0.0.6" + } + }, + "configstore": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/configstore/-/configstore-5.0.1.tgz", + "integrity": "sha512-aMKprgk5YhBNyH25hj8wGt2+D52Sw1DRRIzqBwLp2Ya9mFmY8KPvvtvmna8SxVR9JMZ4kzMD68N22vlaRpkeFA==", + "dev": true, + "requires": { + "dot-prop": "^5.2.0", + "graceful-fs": "^4.1.2", + "make-dir": "^3.0.0", + "unique-string": "^2.0.0", + "write-file-atomic": "^3.0.0", + "xdg-basedir": "^4.0.0" + } + }, + "connect-history-api-fallback": { + "version": "1.6.0", + "resolved": "https://registry.npmjs.org/connect-history-api-fallback/-/connect-history-api-fallback-1.6.0.tgz", + "integrity": "sha512-e54B99q/OUoH64zYYRf3HBP5z24G38h5D3qXu23JGRoigpX5Ss4r9ZnDk3g0Z8uQC2x2lPaJ+UlWBc1ZWBWdLg==", + "dev": true + }, + "consola": { + "version": "2.15.3", + "resolved": "https://registry.npmjs.org/consola/-/consola-2.15.3.tgz", + "integrity": "sha512-9vAdYbHj6x2fLKC4+oPH0kFzY/orMZyG2Aj+kNylHxKGJ/Ed4dpNyAQYwJOdqO4zdM7XpVHmyejQDcQHrnuXbw==", + "dev": true + }, + "console-browserify": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/console-browserify/-/console-browserify-1.2.0.tgz", + "integrity": "sha512-ZMkYO/LkF17QvCPqM0gxw8yUzigAOZOSWSHg91FH6orS7vcEj5dVZTidN2fQ14yBSdg97RqhSNwLUXInd52OTA==", + "dev": true + }, + "console-control-strings": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/console-control-strings/-/console-control-strings-1.1.0.tgz", + "integrity": "sha1-PXz0Rk22RG6mRL9LOVB/mFEAjo4=", + "dev": true + }, + "consolidate": { + "version": "0.15.1", + "resolved": "https://registry.npmjs.org/consolidate/-/consolidate-0.15.1.tgz", + "integrity": "sha512-DW46nrsMJgy9kqAbPt5rKaCr7uFtpo4mSUvLHIUbJEjm0vo+aY5QLwBUq3FK4tRnJr/X0Psc0C4jf/h+HtXSMw==", + "dev": true, + 
"requires": { + "bluebird": "^3.1.1" + } + }, + "constants-browserify": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/constants-browserify/-/constants-browserify-1.0.0.tgz", + "integrity": "sha1-wguW2MYXdIqvHBYCF2DNJ/y4y3U=", + "dev": true + }, + "content-disposition": { + "version": "0.5.3", + "resolved": "https://registry.npmjs.org/content-disposition/-/content-disposition-0.5.3.tgz", + "integrity": "sha512-ExO0774ikEObIAEV9kDo50o+79VCUdEB6n6lzKgGwupcVeRlhrj3qGAfwq8G6uBJjkqLrhT0qEYFcWng8z1z0g==", + "dev": true, + "requires": { + "safe-buffer": "5.1.2" + } + }, + "content-type": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/content-type/-/content-type-1.0.4.tgz", + "integrity": "sha512-hIP3EEPs8tB9AT1L+NUqtwOAps4mk2Zob89MWXMHjHWg9milF/j4osnnQLXBCBFBk/tvIG/tUc9mOUJiPBhPXA==", + "dev": true + }, + "convert-source-map": { + "version": "1.7.0", + "resolved": "https://registry.npmjs.org/convert-source-map/-/convert-source-map-1.7.0.tgz", + "integrity": "sha512-4FJkXzKXEDB1snCFZlLP4gpC3JILicCpGbzG9f9G7tGqGCzETQ2hWPrcinA9oU4wtf2biUaEH5065UnMeR33oA==", + "dev": true, + "requires": { + "safe-buffer": "~5.1.1" + } + }, + "cookie": { + "version": "0.4.0", + "resolved": "https://registry.npmjs.org/cookie/-/cookie-0.4.0.tgz", + "integrity": "sha512-+Hp8fLp57wnUSt0tY0tHEXh4voZRDnoIrZPqlo3DPiI4y9lwg/jqx+1Om94/W6ZaPDOUbnjOt/99w66zk+l1Xg==", + "dev": true + }, + "cookie-signature": { + "version": "1.0.6", + "resolved": "https://registry.npmjs.org/cookie-signature/-/cookie-signature-1.0.6.tgz", + "integrity": "sha1-4wOogrNCzD7oylE6eZmXNNqzriw=", + "dev": true + }, + "copy-concurrently": { + "version": "1.0.5", + "resolved": "https://registry.npmjs.org/copy-concurrently/-/copy-concurrently-1.0.5.tgz", + "integrity": "sha512-f2domd9fsVDFtaFcbaRZuYXwtdmnzqbADSwhSWYxYB/Q8zsdUUFMXVRwXGDMWmbEzAn1kdRrtI1T/KTFOL4X2A==", + "dev": true, + "requires": { + "aproba": "^1.1.1", + "fs-write-stream-atomic": "^1.0.8", + "iferr": "^0.1.5", + "mkdirp": 
"^0.5.1", + "rimraf": "^2.5.4", + "run-queue": "^1.0.0" + } + }, + "copy-descriptor": { + "version": "0.1.1", + "resolved": "https://registry.npmjs.org/copy-descriptor/-/copy-descriptor-0.1.1.tgz", + "integrity": "sha1-Z29us8OZl8LuGsOpJP1hJHSPV40=", + "dev": true + }, + "copy-webpack-plugin": { + "version": "5.1.2", + "resolved": "https://registry.npmjs.org/copy-webpack-plugin/-/copy-webpack-plugin-5.1.2.tgz", + "integrity": "sha512-Uh7crJAco3AjBvgAy9Z75CjK8IG+gxaErro71THQ+vv/bl4HaQcpkexAY8KVW/T6D2W2IRr+couF/knIRkZMIQ==", + "dev": true, + "requires": { + "cacache": "^12.0.3", + "find-cache-dir": "^2.1.0", + "glob-parent": "^3.1.0", + "globby": "^7.1.1", + "is-glob": "^4.0.1", + "loader-utils": "^1.2.3", + "minimatch": "^3.0.4", + "normalize-path": "^3.0.0", + "p-limit": "^2.2.1", + "schema-utils": "^1.0.0", + "serialize-javascript": "^4.0.0", + "webpack-log": "^2.0.0" + }, + "dependencies": { + "find-cache-dir": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/find-cache-dir/-/find-cache-dir-2.1.0.tgz", + "integrity": "sha512-Tq6PixE0w/VMFfCgbONnkiQIVol/JJL7nRMi20fqzA4NRs9AfeqMGeRdPi3wIhYkxjeBaWh2rxwapn5Tu3IqOQ==", + "dev": true, + "requires": { + "commondir": "^1.0.1", + "make-dir": "^2.0.0", + "pkg-dir": "^3.0.0" + } + }, + "find-up": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/find-up/-/find-up-3.0.0.tgz", + "integrity": "sha512-1yD6RmLI1XBfxugvORwlck6f75tYL+iR0jqwsOrOxMZyGYqUuDhJ0l4AXdO1iX/FTs9cBAMEk1gWSEx1kSbylg==", + "dev": true, + "requires": { + "locate-path": "^3.0.0" + } + }, + "globby": { + "version": "7.1.1", + "resolved": "https://registry.npmjs.org/globby/-/globby-7.1.1.tgz", + "integrity": "sha1-+yzP+UAfhgCUXfral0QMypcrhoA=", + "dev": true, + "requires": { + "array-union": "^1.0.1", + "dir-glob": "^2.0.0", + "glob": "^7.1.2", + "ignore": "^3.3.5", + "pify": "^3.0.0", + "slash": "^1.0.0" + }, + "dependencies": { + "pify": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/pify/-/pify-3.0.0.tgz", + 
"integrity": "sha1-5aSs0sEB/fPZpNB/DbxNtJ3SgXY=", + "dev": true + } + } + }, + "ignore": { + "version": "3.3.10", + "resolved": "https://registry.npmjs.org/ignore/-/ignore-3.3.10.tgz", + "integrity": "sha512-Pgs951kaMm5GXP7MOvxERINe3gsaVjUWFm+UZPSq9xYriQAksyhg0csnS0KXSNRD5NmNdapXEpjxG49+AKh/ug==", + "dev": true + }, + "locate-path": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/locate-path/-/locate-path-3.0.0.tgz", + "integrity": "sha512-7AO748wWnIhNqAuaty2ZWHkQHRSNfPVIsPIfwEOWO22AmaoVrWavlOcMR5nzTLNYvp36X220/maaRsrec1G65A==", + "dev": true, + "requires": { + "p-locate": "^3.0.0", + "path-exists": "^3.0.0" + } + }, + "make-dir": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/make-dir/-/make-dir-2.1.0.tgz", + "integrity": "sha512-LS9X+dc8KLxXCb8dni79fLIIUA5VyZoyjSMCwTluaXA0o27cCK0bhXkpgw+sTXVpPy/lSO57ilRixqk0vDmtRA==", + "dev": true, + "requires": { + "pify": "^4.0.1", + "semver": "^5.6.0" + } + }, + "p-locate": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/p-locate/-/p-locate-3.0.0.tgz", + "integrity": "sha512-x+12w/To+4GFfgJhBEpiDcLozRJGegY+Ei7/z0tSLkMmxGZNybVMSfWj9aJn8Z5Fc7dBUNJOOVgPv2H7IwulSQ==", + "dev": true, + "requires": { + "p-limit": "^2.0.0" + } + }, + "path-exists": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/path-exists/-/path-exists-3.0.0.tgz", + "integrity": "sha1-zg6+ql94yxiSXqfYENe1mwEP1RU=", + "dev": true + }, + "pkg-dir": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/pkg-dir/-/pkg-dir-3.0.0.tgz", + "integrity": "sha512-/E57AYkoeQ25qkxMj5PBOVgF8Kiu/h7cYS30Z5+R7WaiCCBfLq58ZI/dSeaEKb9WVJV5n/03QwrN3IeWIFllvw==", + "dev": true, + "requires": { + "find-up": "^3.0.0" + } + }, + "schema-utils": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-1.0.0.tgz", + "integrity": "sha512-i27Mic4KovM/lnGsy8whRCHhc7VicJajAjTrYg11K9zfZXnYIt4k5F+kZkwjnrhKzLic/HLU4j11mjsz2G/75g==", + "dev": true, + "requires": { + "ajv": 
"^6.1.0", + "ajv-errors": "^1.0.0", + "ajv-keywords": "^3.1.0" + } + }, + "semver": { + "version": "5.7.1", + "resolved": "https://registry.npmjs.org/semver/-/semver-5.7.1.tgz", + "integrity": "sha512-sauaDf/PZdVgrLTNYHRtpXa1iRiKcaebiKQ1BJdpQlWH2lCvexQdX55snPFyK7QzpudqbCI0qXFfOasHdyNDGQ==", + "dev": true + }, + "slash": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/slash/-/slash-1.0.0.tgz", + "integrity": "sha1-xB8vbDn8FtHNF61LXYlhFK5HDVU=", + "dev": true + } + } + }, + "core-js": { + "version": "3.12.1", + "resolved": "https://registry.npmjs.org/core-js/-/core-js-3.12.1.tgz", + "integrity": "sha512-Ne9DKPHTObRuB09Dru5AjwKjY4cJHVGu+y5f7coGn1E9Grkc3p2iBwE9AI/nJzsE29mQF7oq+mhYYRqOMFN1Bw==", + "dev": true + }, + "core-js-compat": { + "version": "3.12.1", + "resolved": "https://registry.npmjs.org/core-js-compat/-/core-js-compat-3.12.1.tgz", + "integrity": "sha512-i6h5qODpw6EsHAoIdQhKoZdWn+dGBF3dSS8m5tif36RlWvW3A6+yu2S16QHUo3CrkzrnEskMAt9f8FxmY9fhWQ==", + "dev": true, + "requires": { + "browserslist": "^4.16.6", + "semver": "7.0.0" + }, + "dependencies": { + "semver": { + "version": "7.0.0", + "resolved": "https://registry.npmjs.org/semver/-/semver-7.0.0.tgz", + "integrity": "sha512-+GB6zVA9LWh6zovYQLALHwv5rb2PHGlJi3lfiqIHxR0uuwCgefcOJc59v9fv1w8GbStwxuuqqAjI9NMAOOgq1A==", + "dev": true + } + } + }, + "core-util-is": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/core-util-is/-/core-util-is-1.0.2.tgz", + "integrity": "sha1-tf1UIgqivFq1eqtxQMlAdUUDwac=", + "dev": true + }, + "cosmiconfig": { + "version": "5.2.1", + "resolved": "https://registry.npmjs.org/cosmiconfig/-/cosmiconfig-5.2.1.tgz", + "integrity": "sha512-H65gsXo1SKjf8zmrJ67eJk8aIRKV5ff2D4uKZIBZShbhGSpEmsQOPW/SKMKYhSTrqR7ufy6RP69rPogdaPh/kA==", + "dev": true, + "requires": { + "import-fresh": "^2.0.0", + "is-directory": "^0.3.1", + "js-yaml": "^3.13.1", + "parse-json": "^4.0.0" + } + }, + "create-ecdh": { + "version": "4.0.4", + "resolved": 
"https://registry.npmjs.org/create-ecdh/-/create-ecdh-4.0.4.tgz", + "integrity": "sha512-mf+TCx8wWc9VpuxfP2ht0iSISLZnt0JgWlrOKZiNqyUZWnjIaCIVNQArMHnCZKfEYRg6IM7A+NeJoN8gf/Ws0A==", + "dev": true, + "requires": { + "bn.js": "^4.1.0", + "elliptic": "^6.5.3" + }, + "dependencies": { + "bn.js": { + "version": "4.12.0", + "resolved": "https://registry.npmjs.org/bn.js/-/bn.js-4.12.0.tgz", + "integrity": "sha512-c98Bf3tPniI+scsdk237ku1Dc3ujXQTSgyiPUDEOe7tRkhrqridvh8klBv0HCEso1OLOYcHuCv/cS6DNxKH+ZA==", + "dev": true + } + } + }, + "create-hash": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/create-hash/-/create-hash-1.2.0.tgz", + "integrity": "sha512-z00bCGNHDG8mHAkP7CtT1qVu+bFQUPjYq/4Iv3C3kWjTFV10zIjfSoeqXo9Asws8gwSHDGj/hl2u4OGIjapeCg==", + "dev": true, + "requires": { + "cipher-base": "^1.0.1", + "inherits": "^2.0.1", + "md5.js": "^1.3.4", + "ripemd160": "^2.0.1", + "sha.js": "^2.4.0" + } + }, + "create-hmac": { + "version": "1.1.7", + "resolved": "https://registry.npmjs.org/create-hmac/-/create-hmac-1.1.7.tgz", + "integrity": "sha512-MJG9liiZ+ogc4TzUwuvbER1JRdgvUFSB5+VR/g5h82fGaIRWMWddtKBHi7/sVhfjQZ6SehlyhvQYrcYkaUIpLg==", + "dev": true, + "requires": { + "cipher-base": "^1.0.3", + "create-hash": "^1.1.0", + "inherits": "^2.0.1", + "ripemd160": "^2.0.0", + "safe-buffer": "^5.0.1", + "sha.js": "^2.4.8" + } + }, + "cross-spawn": { + "version": "7.0.3", + "resolved": "https://registry.npmjs.org/cross-spawn/-/cross-spawn-7.0.3.tgz", + "integrity": "sha512-iRDPJKUPVEND7dHPO8rkbOnPpyDygcDFtWjpeWNCgy8WP2rXcxXL8TskReQl6OrB2G7+UJrags1q15Fudc7G6w==", + "dev": true, + "requires": { + "path-key": "^3.1.0", + "shebang-command": "^2.0.0", + "which": "^2.0.1" + } + }, + "crypto-browserify": { + "version": "3.12.0", + "resolved": "https://registry.npmjs.org/crypto-browserify/-/crypto-browserify-3.12.0.tgz", + "integrity": "sha512-fz4spIh+znjO2VjL+IdhEpRJ3YN6sMzITSBijk6FK2UvTqruSQW+/cCZTSNsMiZNvUeq0CqurF+dAbyiGOY6Wg==", + "dev": true, + "requires": { + 
"browserify-cipher": "^1.0.0", + "browserify-sign": "^4.0.0", + "create-ecdh": "^4.0.0", + "create-hash": "^1.1.0", + "create-hmac": "^1.1.0", + "diffie-hellman": "^5.0.0", + "inherits": "^2.0.1", + "pbkdf2": "^3.0.3", + "public-encrypt": "^4.0.0", + "randombytes": "^2.0.0", + "randomfill": "^1.0.3" + } + }, + "crypto-random-string": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/crypto-random-string/-/crypto-random-string-2.0.0.tgz", + "integrity": "sha512-v1plID3y9r/lPhviJ1wrXpLeyUIGAZ2SHNYTEapm7/8A9nLPoyvVp3RK/EPFqn5kEznyWgYZNsRtYYIWbuG8KA==", + "dev": true + }, + "css": { + "version": "2.2.4", + "resolved": "https://registry.npmjs.org/css/-/css-2.2.4.tgz", + "integrity": "sha512-oUnjmWpy0niI3x/mPL8dVEI1l7MnG3+HHyRPHf+YFSbK+svOhXpmSOcDURUh2aOCgl2grzrOPt1nHLuCVFULLw==", + "dev": true, + "requires": { + "inherits": "^2.0.3", + "source-map": "^0.6.1", + "source-map-resolve": "^0.5.2", + "urix": "^0.1.0" + } + }, + "css-color-names": { + "version": "0.0.4", + "resolved": "https://registry.npmjs.org/css-color-names/-/css-color-names-0.0.4.tgz", + "integrity": "sha1-gIrcLnnPhHOAabZGyyDsJ762KeA=", + "dev": true + }, + "css-declaration-sorter": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/css-declaration-sorter/-/css-declaration-sorter-4.0.1.tgz", + "integrity": "sha512-BcxQSKTSEEQUftYpBVnsH4SF05NTuBokb19/sBt6asXGKZ/6VP7PLG1CBCkFDYOnhXhPh0jMhO6xZ71oYHXHBA==", + "dev": true, + "requires": { + "postcss": "^7.0.1", + "timsort": "^0.3.0" + } + }, + "css-loader": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/css-loader/-/css-loader-2.1.1.tgz", + "integrity": "sha512-OcKJU/lt232vl1P9EEDamhoO9iKY3tIjY5GU+XDLblAykTdgs6Ux9P1hTHve8nFKy5KPpOXOsVI/hIwi3841+w==", + "dev": true, + "requires": { + "camelcase": "^5.2.0", + "icss-utils": "^4.1.0", + "loader-utils": "^1.2.3", + "normalize-path": "^3.0.0", + "postcss": "^7.0.14", + "postcss-modules-extract-imports": "^2.0.0", + "postcss-modules-local-by-default": "^2.0.6", + 
"postcss-modules-scope": "^2.1.0", + "postcss-modules-values": "^2.0.0", + "postcss-value-parser": "^3.3.0", + "schema-utils": "^1.0.0" + }, + "dependencies": { + "camelcase": { + "version": "5.3.1", + "resolved": "https://registry.npmjs.org/camelcase/-/camelcase-5.3.1.tgz", + "integrity": "sha512-L28STB170nwWS63UjtlEOE3dldQApaJXZkOI1uMFfzf3rRuPegHaHesyee+YxQ+W6SvRDQV6UrdOdRiR153wJg==", + "dev": true + }, + "postcss-value-parser": { + "version": "3.3.1", + "resolved": "https://registry.npmjs.org/postcss-value-parser/-/postcss-value-parser-3.3.1.tgz", + "integrity": "sha512-pISE66AbVkp4fDQ7VHBwRNXzAAKJjw4Vw7nWI/+Q3vuly7SNfgYXvm6i5IgFylHGK5sP/xHAbB7N49OS4gWNyQ==", + "dev": true + }, + "schema-utils": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-1.0.0.tgz", + "integrity": "sha512-i27Mic4KovM/lnGsy8whRCHhc7VicJajAjTrYg11K9zfZXnYIt4k5F+kZkwjnrhKzLic/HLU4j11mjsz2G/75g==", + "dev": true, + "requires": { + "ajv": "^6.1.0", + "ajv-errors": "^1.0.0", + "ajv-keywords": "^3.1.0" + } + } + } + }, + "css-parse": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/css-parse/-/css-parse-2.0.0.tgz", + "integrity": "sha1-pGjuZnwW2BzPBcWMONKpfHgNv9Q=", + "dev": true, + "requires": { + "css": "^2.0.0" + } + }, + "css-select": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/css-select/-/css-select-2.1.0.tgz", + "integrity": "sha512-Dqk7LQKpwLoH3VovzZnkzegqNSuAziQyNZUcrdDM401iY+R5NkGBXGmtO05/yaXQziALuPogeG0b7UAgjnTJTQ==", + "dev": true, + "requires": { + "boolbase": "^1.0.0", + "css-what": "^3.2.1", + "domutils": "^1.7.0", + "nth-check": "^1.0.2" + } + }, + "css-select-base-adapter": { + "version": "0.1.1", + "resolved": "https://registry.npmjs.org/css-select-base-adapter/-/css-select-base-adapter-0.1.1.tgz", + "integrity": "sha512-jQVeeRG70QI08vSTwf1jHxp74JoZsr2XSgETae8/xC8ovSnL2WF87GTLO86Sbwdt2lK4Umg4HnnwMO4YF3Ce7w==", + "dev": true + }, + "css-tree": { + "version": "1.0.0-alpha.37", + "resolved": 
"https://registry.npmjs.org/css-tree/-/css-tree-1.0.0-alpha.37.tgz", + "integrity": "sha512-DMxWJg0rnz7UgxKT0Q1HU/L9BeJI0M6ksor0OgqOnF+aRCDWg/N2641HmVyU9KVIu0OVVWOb2IpC9A+BJRnejg==", + "dev": true, + "requires": { + "mdn-data": "2.0.4", + "source-map": "^0.6.1" + } + }, + "css-unit-converter": { + "version": "1.1.2", + "resolved": "https://registry.npmjs.org/css-unit-converter/-/css-unit-converter-1.1.2.tgz", + "integrity": "sha512-IiJwMC8rdZE0+xiEZHeru6YoONC4rfPMqGm2W85jMIbkFvv5nFTwJVFHam2eFrN6txmoUYFAFXiv8ICVeTO0MA==", + "dev": true + }, + "css-what": { + "version": "3.4.2", + "resolved": "https://registry.npmjs.org/css-what/-/css-what-3.4.2.tgz", + "integrity": "sha512-ACUm3L0/jiZTqfzRM3Hi9Q8eZqd6IK37mMWPLz9PJxkLWllYeRf+EHUSHYEtFop2Eqytaq1FizFVh7XfBnXCDQ==", + "dev": true + }, + "cssesc": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/cssesc/-/cssesc-3.0.0.tgz", + "integrity": "sha512-/Tb/JcjK111nNScGob5MNtsntNM1aCNUDipB/TkwZFhyDrrE47SOx/18wF2bbjgc3ZzCSKW1T5nt5EbFoAz/Vg==", + "dev": true + }, + "cssnano": { + "version": "4.1.11", + "resolved": "https://registry.npmjs.org/cssnano/-/cssnano-4.1.11.tgz", + "integrity": "sha512-6gZm2htn7xIPJOHY824ERgj8cNPgPxyCSnkXc4v7YvNW+TdVfzgngHcEhy/8D11kUWRUMbke+tC+AUcUsnMz2g==", + "dev": true, + "requires": { + "cosmiconfig": "^5.0.0", + "cssnano-preset-default": "^4.0.8", + "is-resolvable": "^1.0.0", + "postcss": "^7.0.0" + } + }, + "cssnano-preset-default": { + "version": "4.0.8", + "resolved": "https://registry.npmjs.org/cssnano-preset-default/-/cssnano-preset-default-4.0.8.tgz", + "integrity": "sha512-LdAyHuq+VRyeVREFmuxUZR1TXjQm8QQU/ktoo/x7bz+SdOge1YKc5eMN6pRW7YWBmyq59CqYba1dJ5cUukEjLQ==", + "dev": true, + "requires": { + "css-declaration-sorter": "^4.0.1", + "cssnano-util-raw-cache": "^4.0.1", + "postcss": "^7.0.0", + "postcss-calc": "^7.0.1", + "postcss-colormin": "^4.0.3", + "postcss-convert-values": "^4.0.1", + "postcss-discard-comments": "^4.0.2", + "postcss-discard-duplicates": "^4.0.2", + 
"postcss-discard-empty": "^4.0.1", + "postcss-discard-overridden": "^4.0.1", + "postcss-merge-longhand": "^4.0.11", + "postcss-merge-rules": "^4.0.3", + "postcss-minify-font-values": "^4.0.2", + "postcss-minify-gradients": "^4.0.2", + "postcss-minify-params": "^4.0.2", + "postcss-minify-selectors": "^4.0.2", + "postcss-normalize-charset": "^4.0.1", + "postcss-normalize-display-values": "^4.0.2", + "postcss-normalize-positions": "^4.0.2", + "postcss-normalize-repeat-style": "^4.0.2", + "postcss-normalize-string": "^4.0.2", + "postcss-normalize-timing-functions": "^4.0.2", + "postcss-normalize-unicode": "^4.0.1", + "postcss-normalize-url": "^4.0.1", + "postcss-normalize-whitespace": "^4.0.2", + "postcss-ordered-values": "^4.1.2", + "postcss-reduce-initial": "^4.0.3", + "postcss-reduce-transforms": "^4.0.2", + "postcss-svgo": "^4.0.3", + "postcss-unique-selectors": "^4.0.1" + } + }, + "cssnano-util-get-arguments": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/cssnano-util-get-arguments/-/cssnano-util-get-arguments-4.0.0.tgz", + "integrity": "sha1-7ToIKZ8h11dBsg87gfGU7UnMFQ8=", + "dev": true + }, + "cssnano-util-get-match": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/cssnano-util-get-match/-/cssnano-util-get-match-4.0.0.tgz", + "integrity": "sha1-wOTKB/U4a7F+xeUiULT1lhNlFW0=", + "dev": true + }, + "cssnano-util-raw-cache": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/cssnano-util-raw-cache/-/cssnano-util-raw-cache-4.0.1.tgz", + "integrity": "sha512-qLuYtWK2b2Dy55I8ZX3ky1Z16WYsx544Q0UWViebptpwn/xDBmog2TLg4f+DBMg1rJ6JDWtn96WHbOKDWt1WQA==", + "dev": true, + "requires": { + "postcss": "^7.0.0" + } + }, + "cssnano-util-same-parent": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/cssnano-util-same-parent/-/cssnano-util-same-parent-4.0.1.tgz", + "integrity": "sha512-WcKx5OY+KoSIAxBW6UBBRay1U6vkYheCdjyVNDm85zt5K9mHoGOfsOsqIszfAqrQQFIIKgjh2+FDgIj/zsl21Q==", + "dev": true + }, + "csso": { + 
"version": "4.2.0", + "resolved": "https://registry.npmjs.org/csso/-/csso-4.2.0.tgz", + "integrity": "sha512-wvlcdIbf6pwKEk7vHj8/Bkc0B4ylXZruLvOgs9doS5eOsOpuodOV2zJChSpkp+pRpYQLQMeF04nr3Z68Sta9jA==", + "dev": true, + "requires": { + "css-tree": "^1.1.2" + }, + "dependencies": { + "css-tree": { + "version": "1.1.3", + "resolved": "https://registry.npmjs.org/css-tree/-/css-tree-1.1.3.tgz", + "integrity": "sha512-tRpdppF7TRazZrjJ6v3stzv93qxRcSsFmW6cX0Zm2NVKpxE1WV1HblnghVv9TreireHkqI/VDEsfolRF1p6y7Q==", + "dev": true, + "requires": { + "mdn-data": "2.0.14", + "source-map": "^0.6.1" + } + }, + "mdn-data": { + "version": "2.0.14", + "resolved": "https://registry.npmjs.org/mdn-data/-/mdn-data-2.0.14.tgz", + "integrity": "sha512-dn6wd0uw5GsdswPFfsgMp5NSB0/aDe6fK94YJV/AJDYXL6HVLWBsxeq7js7Ad+mU2K9LAlwpk6kN2D5mwCPVow==", + "dev": true + } + } + }, + "cyclist": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/cyclist/-/cyclist-1.0.1.tgz", + "integrity": "sha1-WW6WmP0MgOEgOMK4LW6xs1tiJNk=", + "dev": true + }, + "d3": { + "version": "7.6.1", + "resolved": "https://registry.npmjs.org/d3/-/d3-7.6.1.tgz", + "integrity": "sha512-txMTdIHFbcpLx+8a0IFhZsbp+PfBBPt8yfbmukZTQFroKuFqIwqswF0qE5JXWefylaAVpSXFoKm3yP+jpNLFLw==", + "dev": true, + "requires": { + "d3-array": "3", + "d3-axis": "3", + "d3-brush": "3", + "d3-chord": "3", + "d3-color": "3", + "d3-contour": "4", + "d3-delaunay": "6", + "d3-dispatch": "3", + "d3-drag": "3", + "d3-dsv": "3", + "d3-ease": "3", + "d3-fetch": "3", + "d3-force": "3", + "d3-format": "3", + "d3-geo": "3", + "d3-hierarchy": "3", + "d3-interpolate": "3", + "d3-path": "3", + "d3-polygon": "3", + "d3-quadtree": "3", + "d3-random": "3", + "d3-scale": "4", + "d3-scale-chromatic": "3", + "d3-selection": "3", + "d3-shape": "3", + "d3-time": "3", + "d3-time-format": "4", + "d3-timer": "3", + "d3-transition": "3", + "d3-zoom": "3" + } + }, + "d3-array": { + "version": "3.2.0", + "resolved": "https://registry.npmjs.org/d3-array/-/d3-array-3.2.0.tgz", + 
"integrity": "sha512-3yXFQo0oG3QCxbF06rMPFyGRMGJNS7NvsV1+2joOjbBE+9xvWQ8+GcMJAjRCzw06zQ3/arXeJgbPYcjUCuC+3g==", + "dev": true, + "requires": { + "internmap": "1 - 2" + } + }, + "d3-axis": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/d3-axis/-/d3-axis-3.0.0.tgz", + "integrity": "sha512-IH5tgjV4jE/GhHkRV0HiVYPDtvfjHQlQfJHs0usq7M30XcSBvOotpmH1IgkcXsO/5gEQZD43B//fc7SRT5S+xw==", + "dev": true + }, + "d3-brush": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/d3-brush/-/d3-brush-3.0.0.tgz", + "integrity": "sha512-ALnjWlVYkXsVIGlOsuWH1+3udkYFI48Ljihfnh8FZPF2QS9o+PzGLBslO0PjzVoHLZ2KCVgAM8NVkXPJB2aNnQ==", + "dev": true, + "requires": { + "d3-dispatch": "1 - 3", + "d3-drag": "2 - 3", + "d3-interpolate": "1 - 3", + "d3-selection": "3", + "d3-transition": "3" + } + }, + "d3-chord": { + "version": "3.0.1", + "resolved": "https://registry.npmjs.org/d3-chord/-/d3-chord-3.0.1.tgz", + "integrity": "sha512-VE5S6TNa+j8msksl7HwjxMHDM2yNK3XCkusIlpX5kwauBfXuyLAtNg9jCp/iHH61tgI4sb6R/EIMWCqEIdjT/g==", + "dev": true, + "requires": { + "d3-path": "1 - 3" + } + }, + "d3-collection": { + "version": "1.0.7", + "resolved": "https://registry.npmjs.org/d3-collection/-/d3-collection-1.0.7.tgz", + "integrity": "sha512-ii0/r5f4sjKNTfh84Di+DpztYwqKhEyUlKoPrzUFfeSkWxjW49xU2QzO9qrPrNkpdI0XJkfzvmTu8V2Zylln6A==", + "dev": true + }, + "d3-color": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/d3-color/-/d3-color-3.1.0.tgz", + "integrity": "sha512-zg/chbXyeBtMQ1LbD/WSoW2DpC3I0mpmPdW+ynRTj/x2DAWYrIY7qeZIHidozwV24m4iavr15lNwIwLxRmOxhA==", + "dev": true + }, + "d3-contour": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/d3-contour/-/d3-contour-4.0.0.tgz", + "integrity": "sha512-7aQo0QHUTu/Ko3cP9YK9yUTxtoDEiDGwnBHyLxG5M4vqlBkO/uixMRele3nfsfj6UXOcuReVpVXzAboGraYIJw==", + "dev": true, + "requires": { + "d3-array": "^3.2.0" + } + }, + "d3-delaunay": { + "version": "6.0.2", + "resolved": 
"https://registry.npmjs.org/d3-delaunay/-/d3-delaunay-6.0.2.tgz", + "integrity": "sha512-IMLNldruDQScrcfT+MWnazhHbDJhcRJyOEBAJfwQnHle1RPh6WDuLvxNArUju2VSMSUuKlY5BGHRJ2cYyoFLQQ==", + "dev": true, + "requires": { + "delaunator": "5" + } + }, + "d3-dispatch": { + "version": "3.0.1", + "resolved": "https://registry.npmjs.org/d3-dispatch/-/d3-dispatch-3.0.1.tgz", + "integrity": "sha512-rzUyPU/S7rwUflMyLc1ETDeBj0NRuHKKAcvukozwhshr6g6c5d8zh4c2gQjY2bZ0dXeGLWc1PF174P2tVvKhfg==", + "dev": true + }, + "d3-drag": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/d3-drag/-/d3-drag-3.0.0.tgz", + "integrity": "sha512-pWbUJLdETVA8lQNJecMxoXfH6x+mO2UQo8rSmZ+QqxcbyA3hfeprFgIT//HW2nlHChWeIIMwS2Fq+gEARkhTkg==", + "dev": true, + "requires": { + "d3-dispatch": "1 - 3", + "d3-selection": "3" + } + }, + "d3-dsv": { + "version": "3.0.1", + "resolved": "https://registry.npmjs.org/d3-dsv/-/d3-dsv-3.0.1.tgz", + "integrity": "sha512-UG6OvdI5afDIFP9w4G0mNq50dSOsXHJaRE8arAS5o9ApWnIElp8GZw1Dun8vP8OyHOZ/QJUKUJwxiiCCnUwm+Q==", + "dev": true, + "requires": { + "commander": "7", + "iconv-lite": "0.6", + "rw": "1" + }, + "dependencies": { + "commander": { + "version": "7.2.0", + "resolved": "https://registry.npmjs.org/commander/-/commander-7.2.0.tgz", + "integrity": "sha512-QrWXB+ZQSVPmIWIhtEO9H+gwHaMGYiF5ChvoJ+K9ZGHG/sVsa6yiesAD1GC/x46sET00Xlwo1u49RVVVzvcSkw==", + "dev": true + }, + "iconv-lite": { + "version": "0.6.3", + "resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.6.3.tgz", + "integrity": "sha512-4fCk79wshMdzMp2rH06qWrJE4iolqLhCUH+OiuIgU++RB0+94NlDL81atO7GX55uUKueo0txHNtvEyI6D7WdMw==", + "dev": true, + "requires": { + "safer-buffer": ">= 2.1.2 < 3.0.0" + } + } + } + }, + "d3-ease": { + "version": "3.0.1", + "resolved": "https://registry.npmjs.org/d3-ease/-/d3-ease-3.0.1.tgz", + "integrity": "sha512-wR/XK3D3XcLIZwpbvQwQ5fK+8Ykds1ip7A2Txe0yxncXSdq1L9skcG7blcedkOX+ZcgxGAmLX1FrRGbADwzi0w==", + "dev": true + }, + "d3-fetch": { + "version": "3.0.1", + "resolved": 
"https://registry.npmjs.org/d3-fetch/-/d3-fetch-3.0.1.tgz", + "integrity": "sha512-kpkQIM20n3oLVBKGg6oHrUchHM3xODkTzjMoj7aWQFq5QEM+R6E4WkzT5+tojDY7yjez8KgCBRoj4aEr99Fdqw==", + "dev": true, + "requires": { + "d3-dsv": "1 - 3" + } + }, + "d3-force": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/d3-force/-/d3-force-3.0.0.tgz", + "integrity": "sha512-zxV/SsA+U4yte8051P4ECydjD/S+qeYtnaIyAs9tgHCqfguma/aAQDjo85A9Z6EKhBirHRJHXIgJUlffT4wdLg==", + "dev": true, + "requires": { + "d3-dispatch": "1 - 3", + "d3-quadtree": "1 - 3", + "d3-timer": "1 - 3" + } + }, + "d3-format": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/d3-format/-/d3-format-3.1.0.tgz", + "integrity": "sha512-YyUI6AEuY/Wpt8KWLgZHsIU86atmikuoOmCfommt0LYHiQSPjvX2AcFc38PX0CBpr2RCyZhjex+NS/LPOv6YqA==", + "dev": true + }, + "d3-geo": { + "version": "3.0.1", + "resolved": "https://registry.npmjs.org/d3-geo/-/d3-geo-3.0.1.tgz", + "integrity": "sha512-Wt23xBych5tSy9IYAM1FR2rWIBFWa52B/oF/GYe5zbdHrg08FU8+BuI6X4PvTwPDdqdAdq04fuWJpELtsaEjeA==", + "dev": true, + "requires": { + "d3-array": "2.5.0 - 3" + } + }, + "d3-hierarchy": { + "version": "3.1.2", + "resolved": "https://registry.npmjs.org/d3-hierarchy/-/d3-hierarchy-3.1.2.tgz", + "integrity": "sha512-FX/9frcub54beBdugHjDCdikxThEqjnR93Qt7PvQTOHxyiNCAlvMrHhclk3cD5VeAaq9fxmfRp+CnWw9rEMBuA==", + "dev": true + }, + "d3-interpolate": { + "version": "3.0.1", + "resolved": "https://registry.npmjs.org/d3-interpolate/-/d3-interpolate-3.0.1.tgz", + "integrity": "sha512-3bYs1rOD33uo8aqJfKP3JWPAibgw8Zm2+L9vBKEHJ2Rg+viTR7o5Mmv5mZcieN+FRYaAOWX5SJATX6k1PWz72g==", + "dev": true, + "requires": { + "d3-color": "1 - 3" + } + }, + "d3-path": { + "version": "3.0.1", + "resolved": "https://registry.npmjs.org/d3-path/-/d3-path-3.0.1.tgz", + "integrity": "sha512-gq6gZom9AFZby0YLduxT1qmrp4xpBA1YZr19OI717WIdKE2OM5ETq5qrHLb301IgxhLwcuxvGZVLeeWc/k1I6w==", + "dev": true + }, + "d3-polygon": { + "version": "3.0.1", + "resolved": 
"https://registry.npmjs.org/d3-polygon/-/d3-polygon-3.0.1.tgz", + "integrity": "sha512-3vbA7vXYwfe1SYhED++fPUQlWSYTTGmFmQiany/gdbiWgU/iEyQzyymwL9SkJjFFuCS4902BSzewVGsHHmHtXg==", + "dev": true + }, + "d3-quadtree": { + "version": "3.0.1", + "resolved": "https://registry.npmjs.org/d3-quadtree/-/d3-quadtree-3.0.1.tgz", + "integrity": "sha512-04xDrxQTDTCFwP5H6hRhsRcb9xxv2RzkcsygFzmkSIOJy3PeRJP7sNk3VRIbKXcog561P9oU0/rVH6vDROAgUw==", + "dev": true + }, + "d3-random": { + "version": "3.0.1", + "resolved": "https://registry.npmjs.org/d3-random/-/d3-random-3.0.1.tgz", + "integrity": "sha512-FXMe9GfxTxqd5D6jFsQ+DJ8BJS4E/fT5mqqdjovykEB2oFbTMDVdg1MGFxfQW+FBOGoB++k8swBrgwSHT1cUXQ==", + "dev": true + }, + "d3-scale": { + "version": "4.0.2", + "resolved": "https://registry.npmjs.org/d3-scale/-/d3-scale-4.0.2.tgz", + "integrity": "sha512-GZW464g1SH7ag3Y7hXjf8RoUuAFIqklOAq3MRl4OaWabTFJY9PN/E1YklhXLh+OQ3fM9yS2nOkCoS+WLZ6kvxQ==", + "dev": true, + "requires": { + "d3-array": "2.10.0 - 3", + "d3-format": "1 - 3", + "d3-interpolate": "1.2.0 - 3", + "d3-time": "2.1.1 - 3", + "d3-time-format": "2 - 4" + } + }, + "d3-scale-chromatic": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/d3-scale-chromatic/-/d3-scale-chromatic-3.0.0.tgz", + "integrity": "sha512-Lx9thtxAKrO2Pq6OO2Ua474opeziKr279P/TKZsMAhYyNDD3EnCffdbgeSYN5O7m2ByQsxtuP2CSDczNUIZ22g==", + "dev": true, + "requires": { + "d3-color": "1 - 3", + "d3-interpolate": "1 - 3" + } + }, + "d3-selection": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/d3-selection/-/d3-selection-3.0.0.tgz", + "integrity": "sha512-fmTRWbNMmsmWq6xJV8D19U/gw/bwrHfNXxrIN+HfZgnzqTHp9jOmKMhsTUjXOJnZOdZY9Q28y4yebKzqDKlxlQ==", + "dev": true + }, + "d3-shape": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/d3-shape/-/d3-shape-3.1.0.tgz", + "integrity": "sha512-tGDh1Muf8kWjEDT/LswZJ8WF85yDZLvVJpYU9Nq+8+yW1Z5enxrmXOhTArlkaElU+CTn0OTVNli+/i+HP45QEQ==", + "dev": true, + "requires": { + "d3-path": "1 - 3" + } + }, + 
"d3-time": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/d3-time/-/d3-time-3.0.0.tgz", + "integrity": "sha512-zmV3lRnlaLI08y9IMRXSDshQb5Nj77smnfpnd2LrBa/2K281Jijactokeak14QacHs/kKq0AQ121nidNYlarbQ==", + "dev": true, + "requires": { + "d3-array": "2 - 3" + } + }, + "d3-time-format": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/d3-time-format/-/d3-time-format-4.1.0.tgz", + "integrity": "sha512-dJxPBlzC7NugB2PDLwo9Q8JiTR3M3e4/XANkreKSUxF8vvXKqm1Yfq4Q5dl8budlunRVlUUaDUgFt7eA8D6NLg==", + "dev": true, + "requires": { + "d3-time": "1 - 3" + } + }, + "d3-timer": { + "version": "3.0.1", + "resolved": "https://registry.npmjs.org/d3-timer/-/d3-timer-3.0.1.tgz", + "integrity": "sha512-ndfJ/JxxMd3nw31uyKoY2naivF+r29V+Lc0svZxe1JvvIRmi8hUsrMvdOwgS1o6uBHmiz91geQ0ylPP0aj1VUA==", + "dev": true + }, + "d3-transition": { + "version": "3.0.1", + "resolved": "https://registry.npmjs.org/d3-transition/-/d3-transition-3.0.1.tgz", + "integrity": "sha512-ApKvfjsSR6tg06xrL434C0WydLr7JewBB3V+/39RMHsaXTOG0zmt/OAXeng5M5LBm0ojmxJrpomQVZ1aPvBL4w==", + "dev": true, + "requires": { + "d3-color": "1 - 3", + "d3-dispatch": "1 - 3", + "d3-ease": "1 - 3", + "d3-interpolate": "1 - 3", + "d3-timer": "1 - 3" + } + }, + "d3-voronoi": { + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/d3-voronoi/-/d3-voronoi-1.1.4.tgz", + "integrity": "sha512-dArJ32hchFsrQ8uMiTBLq256MpnZjeuBtdHpaDlYuQyjU0CVzCJl/BVW+SkszaAeH95D/8gxqAhgx0ouAWAfRg==", + "dev": true + }, + "d3-zoom": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/d3-zoom/-/d3-zoom-3.0.0.tgz", + "integrity": "sha512-b8AmV3kfQaqWAuacbPuNbL6vahnOJflOhexLzMMNLga62+/nh0JzvJ0aO/5a5MVgUFGS7Hu1P9P03o3fJkDCyw==", + "dev": true, + "requires": { + "d3-dispatch": "1 - 3", + "d3-drag": "2 - 3", + "d3-interpolate": "1 - 3", + "d3-selection": "2 - 3", + "d3-transition": "2 - 3" + } + }, + "dagre": { + "version": "0.8.5", + "resolved": "https://registry.npmjs.org/dagre/-/dagre-0.8.5.tgz", + "integrity": 
"sha512-/aTqmnRta7x7MCCpExk7HQL2O4owCT2h8NT//9I1OQ9vt29Pa0BzSAkR5lwFUcQ7491yVi/3CXU9jQ5o0Mn2Sw==", + "dev": true, + "requires": { + "graphlib": "^2.1.8", + "lodash": "^4.17.15" + } + }, + "dagre-d3": { + "version": "0.6.4", + "resolved": "https://registry.npmjs.org/dagre-d3/-/dagre-d3-0.6.4.tgz", + "integrity": "sha512-e/6jXeCP7/ptlAM48clmX4xTZc5Ek6T6kagS7Oz2HrYSdqcLZFLqpAfh7ldbZRFfxCZVyh61NEPR08UQRVxJzQ==", + "dev": true, + "requires": { + "d3": "^5.14", + "dagre": "^0.8.5", + "graphlib": "^2.1.8", + "lodash": "^4.17.15" + }, + "dependencies": { + "d3": { + "version": "5.16.0", + "resolved": "https://registry.npmjs.org/d3/-/d3-5.16.0.tgz", + "integrity": "sha512-4PL5hHaHwX4m7Zr1UapXW23apo6pexCgdetdJ5kTmADpG/7T9Gkxw0M0tf/pjoB63ezCCm0u5UaFYy2aMt0Mcw==", + "dev": true, + "requires": { + "d3-array": "1", + "d3-axis": "1", + "d3-brush": "1", + "d3-chord": "1", + "d3-collection": "1", + "d3-color": "1", + "d3-contour": "1", + "d3-dispatch": "1", + "d3-drag": "1", + "d3-dsv": "1", + "d3-ease": "1", + "d3-fetch": "1", + "d3-force": "1", + "d3-format": "1", + "d3-geo": "1", + "d3-hierarchy": "1", + "d3-interpolate": "1", + "d3-path": "1", + "d3-polygon": "1", + "d3-quadtree": "1", + "d3-random": "1", + "d3-scale": "2", + "d3-scale-chromatic": "1", + "d3-selection": "1", + "d3-shape": "1", + "d3-time": "1", + "d3-time-format": "2", + "d3-timer": "1", + "d3-transition": "1", + "d3-voronoi": "1", + "d3-zoom": "1" + } + }, + "d3-array": { + "version": "1.2.4", + "resolved": "https://registry.npmjs.org/d3-array/-/d3-array-1.2.4.tgz", + "integrity": "sha512-KHW6M86R+FUPYGb3R5XiYjXPq7VzwxZ22buHhAEVG5ztoEcZZMLov530mmccaqA1GghZArjQV46fuc8kUqhhHw==", + "dev": true + }, + "d3-axis": { + "version": "1.0.12", + "resolved": "https://registry.npmjs.org/d3-axis/-/d3-axis-1.0.12.tgz", + "integrity": "sha512-ejINPfPSNdGFKEOAtnBtdkpr24c4d4jsei6Lg98mxf424ivoDP2956/5HDpIAtmHo85lqT4pruy+zEgvRUBqaQ==", + "dev": true + }, + "d3-brush": { + "version": "1.1.6", + "resolved": 
"https://registry.npmjs.org/d3-brush/-/d3-brush-1.1.6.tgz", + "integrity": "sha512-7RW+w7HfMCPyZLifTz/UnJmI5kdkXtpCbombUSs8xniAyo0vIbrDzDwUJB6eJOgl9u5DQOt2TQlYumxzD1SvYA==", + "dev": true, + "requires": { + "d3-dispatch": "1", + "d3-drag": "1", + "d3-interpolate": "1", + "d3-selection": "1", + "d3-transition": "1" + } + }, + "d3-chord": { + "version": "1.0.6", + "resolved": "https://registry.npmjs.org/d3-chord/-/d3-chord-1.0.6.tgz", + "integrity": "sha512-JXA2Dro1Fxw9rJe33Uv+Ckr5IrAa74TlfDEhE/jfLOaXegMQFQTAgAw9WnZL8+HxVBRXaRGCkrNU7pJeylRIuA==", + "dev": true, + "requires": { + "d3-array": "1", + "d3-path": "1" + } + }, + "d3-color": { + "version": "1.4.1", + "resolved": "https://registry.npmjs.org/d3-color/-/d3-color-1.4.1.tgz", + "integrity": "sha512-p2sTHSLCJI2QKunbGb7ocOh7DgTAn8IrLx21QRc/BSnodXM4sv6aLQlnfpvehFMLZEfBc6g9pH9SWQccFYfJ9Q==", + "dev": true + }, + "d3-contour": { + "version": "1.3.2", + "resolved": "https://registry.npmjs.org/d3-contour/-/d3-contour-1.3.2.tgz", + "integrity": "sha512-hoPp4K/rJCu0ladiH6zmJUEz6+u3lgR+GSm/QdM2BBvDraU39Vr7YdDCicJcxP1z8i9B/2dJLgDC1NcvlF8WCg==", + "dev": true, + "requires": { + "d3-array": "^1.1.1" + } + }, + "d3-dispatch": { + "version": "1.0.6", + "resolved": "https://registry.npmjs.org/d3-dispatch/-/d3-dispatch-1.0.6.tgz", + "integrity": "sha512-fVjoElzjhCEy+Hbn8KygnmMS7Or0a9sI2UzGwoB7cCtvI1XpVN9GpoYlnb3xt2YV66oXYb1fLJ8GMvP4hdU1RA==", + "dev": true + }, + "d3-drag": { + "version": "1.2.5", + "resolved": "https://registry.npmjs.org/d3-drag/-/d3-drag-1.2.5.tgz", + "integrity": "sha512-rD1ohlkKQwMZYkQlYVCrSFxsWPzI97+W+PaEIBNTMxRuxz9RF0Hi5nJWHGVJ3Om9d2fRTe1yOBINJyy/ahV95w==", + "dev": true, + "requires": { + "d3-dispatch": "1", + "d3-selection": "1" + } + }, + "d3-dsv": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/d3-dsv/-/d3-dsv-1.2.0.tgz", + "integrity": "sha512-9yVlqvZcSOMhCYzniHE7EVUws7Fa1zgw+/EAV2BxJoG3ME19V6BQFBwI855XQDsxyOuG7NibqRMTtiF/Qup46g==", + "dev": true, + "requires": { + "commander": "2", 
+ "iconv-lite": "0.4", + "rw": "1" + } + }, + "d3-ease": { + "version": "1.0.7", + "resolved": "https://registry.npmjs.org/d3-ease/-/d3-ease-1.0.7.tgz", + "integrity": "sha512-lx14ZPYkhNx0s/2HX5sLFUI3mbasHjSSpwO/KaaNACweVwxUruKyWVcb293wMv1RqTPZyZ8kSZ2NogUZNcLOFQ==", + "dev": true + }, + "d3-fetch": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/d3-fetch/-/d3-fetch-1.2.0.tgz", + "integrity": "sha512-yC78NBVcd2zFAyR/HnUiBS7Lf6inSCoWcSxFfw8FYL7ydiqe80SazNwoffcqOfs95XaLo7yebsmQqDKSsXUtvA==", + "dev": true, + "requires": { + "d3-dsv": "1" + } + }, + "d3-force": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/d3-force/-/d3-force-1.2.1.tgz", + "integrity": "sha512-HHvehyaiUlVo5CxBJ0yF/xny4xoaxFxDnBXNvNcfW9adORGZfyNF1dj6DGLKyk4Yh3brP/1h3rnDzdIAwL08zg==", + "dev": true, + "requires": { + "d3-collection": "1", + "d3-dispatch": "1", + "d3-quadtree": "1", + "d3-timer": "1" + } + }, + "d3-format": { + "version": "1.4.5", + "resolved": "https://registry.npmjs.org/d3-format/-/d3-format-1.4.5.tgz", + "integrity": "sha512-J0piedu6Z8iB6TbIGfZgDzfXxUFN3qQRMofy2oPdXzQibYGqPB/9iMcxr/TGalU+2RsyDO+U4f33id8tbnSRMQ==", + "dev": true + }, + "d3-geo": { + "version": "1.12.1", + "resolved": "https://registry.npmjs.org/d3-geo/-/d3-geo-1.12.1.tgz", + "integrity": "sha512-XG4d1c/UJSEX9NfU02KwBL6BYPj8YKHxgBEw5om2ZnTRSbIcego6dhHwcxuSR3clxh0EpE38os1DVPOmnYtTPg==", + "dev": true, + "requires": { + "d3-array": "1" + } + }, + "d3-hierarchy": { + "version": "1.1.9", + "resolved": "https://registry.npmjs.org/d3-hierarchy/-/d3-hierarchy-1.1.9.tgz", + "integrity": "sha512-j8tPxlqh1srJHAtxfvOUwKNYJkQuBFdM1+JAUfq6xqH5eAqf93L7oG1NVqDa4CpFZNvnNKtCYEUC8KY9yEn9lQ==", + "dev": true + }, + "d3-interpolate": { + "version": "1.4.0", + "resolved": "https://registry.npmjs.org/d3-interpolate/-/d3-interpolate-1.4.0.tgz", + "integrity": "sha512-V9znK0zc3jOPV4VD2zZn0sDhZU3WAE2bmlxdIwwQPPzPjvyLkd8B3JUVdS1IDUFDkWZ72c9qnv1GK2ZagTZ8EA==", + "dev": true, + "requires": { + "d3-color": "1" + 
} + }, + "d3-path": { + "version": "1.0.9", + "resolved": "https://registry.npmjs.org/d3-path/-/d3-path-1.0.9.tgz", + "integrity": "sha512-VLaYcn81dtHVTjEHd8B+pbe9yHWpXKZUC87PzoFmsFrJqgFwDe/qxfp5MlfsfM1V5E/iVt0MmEbWQ7FVIXh/bg==", + "dev": true + }, + "d3-polygon": { + "version": "1.0.6", + "resolved": "https://registry.npmjs.org/d3-polygon/-/d3-polygon-1.0.6.tgz", + "integrity": "sha512-k+RF7WvI08PC8reEoXa/w2nSg5AUMTi+peBD9cmFc+0ixHfbs4QmxxkarVal1IkVkgxVuk9JSHhJURHiyHKAuQ==", + "dev": true + }, + "d3-quadtree": { + "version": "1.0.7", + "resolved": "https://registry.npmjs.org/d3-quadtree/-/d3-quadtree-1.0.7.tgz", + "integrity": "sha512-RKPAeXnkC59IDGD0Wu5mANy0Q2V28L+fNe65pOCXVdVuTJS3WPKaJlFHer32Rbh9gIo9qMuJXio8ra4+YmIymA==", + "dev": true + }, + "d3-random": { + "version": "1.1.2", + "resolved": "https://registry.npmjs.org/d3-random/-/d3-random-1.1.2.tgz", + "integrity": "sha512-6AK5BNpIFqP+cx/sreKzNjWbwZQCSUatxq+pPRmFIQaWuoD+NrbVWw7YWpHiXpCQ/NanKdtGDuB+VQcZDaEmYQ==", + "dev": true + }, + "d3-scale": { + "version": "2.2.2", + "resolved": "https://registry.npmjs.org/d3-scale/-/d3-scale-2.2.2.tgz", + "integrity": "sha512-LbeEvGgIb8UMcAa0EATLNX0lelKWGYDQiPdHj+gLblGVhGLyNbaCn3EvrJf0A3Y/uOOU5aD6MTh5ZFCdEwGiCw==", + "dev": true, + "requires": { + "d3-array": "^1.2.0", + "d3-collection": "1", + "d3-format": "1", + "d3-interpolate": "1", + "d3-time": "1", + "d3-time-format": "2" + } + }, + "d3-scale-chromatic": { + "version": "1.5.0", + "resolved": "https://registry.npmjs.org/d3-scale-chromatic/-/d3-scale-chromatic-1.5.0.tgz", + "integrity": "sha512-ACcL46DYImpRFMBcpk9HhtIyC7bTBR4fNOPxwVSl0LfulDAwyiHyPOTqcDG1+t5d4P9W7t/2NAuWu59aKko/cg==", + "dev": true, + "requires": { + "d3-color": "1", + "d3-interpolate": "1" + } + }, + "d3-selection": { + "version": "1.4.2", + "resolved": "https://registry.npmjs.org/d3-selection/-/d3-selection-1.4.2.tgz", + "integrity": "sha512-SJ0BqYihzOjDnnlfyeHT0e30k0K1+5sR3d5fNueCNeuhZTnGw4M4o8mqJchSwgKMXCNFo+e2VTChiSJ0vYtXkg==", + "dev": true + }, 
+ "d3-shape": { + "version": "1.3.7", + "resolved": "https://registry.npmjs.org/d3-shape/-/d3-shape-1.3.7.tgz", + "integrity": "sha512-EUkvKjqPFUAZyOlhY5gzCxCeI0Aep04LwIRpsZ/mLFelJiUfnK56jo5JMDSE7yyP2kLSb6LtF+S5chMk7uqPqw==", + "dev": true, + "requires": { + "d3-path": "1" + } + }, + "d3-time": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/d3-time/-/d3-time-1.1.0.tgz", + "integrity": "sha512-Xh0isrZ5rPYYdqhAVk8VLnMEidhz5aP7htAADH6MfzgmmicPkTo8LhkLxci61/lCB7n7UmE3bN0leRt+qvkLxA==", + "dev": true + }, + "d3-time-format": { + "version": "2.3.0", + "resolved": "https://registry.npmjs.org/d3-time-format/-/d3-time-format-2.3.0.tgz", + "integrity": "sha512-guv6b2H37s2Uq/GefleCDtbe0XZAuy7Wa49VGkPVPMfLL9qObgBST3lEHJBMUp8S7NdLQAGIvr2KXk8Hc98iKQ==", + "dev": true, + "requires": { + "d3-time": "1" + } + }, + "d3-timer": { + "version": "1.0.10", + "resolved": "https://registry.npmjs.org/d3-timer/-/d3-timer-1.0.10.tgz", + "integrity": "sha512-B1JDm0XDaQC+uvo4DT79H0XmBskgS3l6Ve+1SBCfxgmtIb1AVrPIoqd+nPSv+loMX8szQ0sVUhGngL7D5QPiXw==", + "dev": true + }, + "d3-transition": { + "version": "1.3.2", + "resolved": "https://registry.npmjs.org/d3-transition/-/d3-transition-1.3.2.tgz", + "integrity": "sha512-sc0gRU4PFqZ47lPVHloMn9tlPcv8jxgOQg+0zjhfZXMQuvppjG6YuwdMBE0TuqCZjeJkLecku/l9R0JPcRhaDA==", + "dev": true, + "requires": { + "d3-color": "1", + "d3-dispatch": "1", + "d3-ease": "1", + "d3-interpolate": "1", + "d3-selection": "^1.1.0", + "d3-timer": "1" + } + }, + "d3-zoom": { + "version": "1.8.3", + "resolved": "https://registry.npmjs.org/d3-zoom/-/d3-zoom-1.8.3.tgz", + "integrity": "sha512-VoLXTK4wvy1a0JpH2Il+F2CiOhVu7VRXWF5M/LroMIh3/zBAC3WAt7QoIvPibOavVo20hN6/37vwAsdBejLyKQ==", + "dev": true, + "requires": { + "d3-dispatch": "1", + "d3-drag": "1", + "d3-interpolate": "1", + "d3-selection": "1", + "d3-transition": "1" + } + } + } + }, + "dashdash": { + "version": "1.14.1", + "resolved": "https://registry.npmjs.org/dashdash/-/dashdash-1.14.1.tgz", + "integrity": 
"sha1-hTz6D3y+L+1d4gMmuN1YEDX24vA=", + "dev": true, + "requires": { + "assert-plus": "^1.0.0" + } + }, + "date-fns": { + "version": "1.30.1", + "resolved": "https://registry.npmjs.org/date-fns/-/date-fns-1.30.1.tgz", + "integrity": "sha512-hBSVCvSmWC+QypYObzwGOd9wqdDpOt+0wl0KbU+R+uuZBS1jN8VsD1ss3irQDknRj5NvxiTF6oj/nDRnN/UQNw==", + "dev": true + }, + "dayjs": { + "version": "1.10.4", + "resolved": "https://registry.npmjs.org/dayjs/-/dayjs-1.10.4.tgz", + "integrity": "sha512-RI/Hh4kqRc1UKLOAf/T5zdMMX5DQIlDxwUe3wSyMMnEbGunnpENCdbUgM+dW7kXidZqCttBrmw7BhN4TMddkCw==", + "dev": true + }, + "de-indent": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/de-indent/-/de-indent-1.0.2.tgz", + "integrity": "sha1-sgOOhG3DO6pXlhKNCAS0VbjB4h0=", + "dev": true + }, + "debug": { + "version": "4.3.1", + "resolved": "https://registry.npmjs.org/debug/-/debug-4.3.1.tgz", + "integrity": "sha512-doEwdvm4PCeK4K3RQN2ZC2BYUBaxwLARCqZmMjtF8a51J2Rb0xpVloFRnCODwqjpwnAoao4pelN8l3RJdv3gRQ==", + "dev": true, + "requires": { + "ms": "2.1.2" + } + }, + "decamelize": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/decamelize/-/decamelize-1.2.0.tgz", + "integrity": "sha1-9lNNFRSCabIDUue+4m9QH5oZEpA=", + "dev": true + }, + "decode-uri-component": { + "version": "0.2.0", + "resolved": "https://registry.npmjs.org/decode-uri-component/-/decode-uri-component-0.2.0.tgz", + "integrity": "sha1-6zkTMzRYd1y4TNGh+uBiEGu4dUU=", + "dev": true + }, + "decompress-response": { + "version": "3.3.0", + "resolved": "https://registry.npmjs.org/decompress-response/-/decompress-response-3.3.0.tgz", + "integrity": "sha1-gKTdMjdIOEv6JICDYirt7Jgq3/M=", + "dev": true, + "requires": { + "mimic-response": "^1.0.0" + } + }, + "deep-equal": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/deep-equal/-/deep-equal-1.1.1.tgz", + "integrity": "sha512-yd9c5AdiqVcR+JjcwUQb9DkhJc8ngNr0MahEBGvDiJw8puWab2yZlh+nkasOnZP+EGTAP6rRp2JzJhJZzvNF8g==", + "dev": true, + "requires": { + 
"is-arguments": "^1.0.4", + "is-date-object": "^1.0.1", + "is-regex": "^1.0.4", + "object-is": "^1.0.1", + "object-keys": "^1.1.1", + "regexp.prototype.flags": "^1.2.0" + } + }, + "deep-extend": { + "version": "0.6.0", + "resolved": "https://registry.npmjs.org/deep-extend/-/deep-extend-0.6.0.tgz", + "integrity": "sha512-LOHxIOaPYdHlJRtCQfDIVZtfw/ufM8+rVj649RIHzcm/vGwQRXFt6OPqIFWsm2XEMrNIEtWR64sY1LEKD2vAOA==", + "dev": true + }, + "deepmerge": { + "version": "1.5.2", + "resolved": "https://registry.npmjs.org/deepmerge/-/deepmerge-1.5.2.tgz", + "integrity": "sha512-95k0GDqvBjZavkuvzx/YqVLv/6YYa17fz6ILMSf7neqQITCPbnfEnQvEgMPNjH4kgobe7+WIL0yJEHku+H3qtQ==", + "dev": true + }, + "default-gateway": { + "version": "4.2.0", + "resolved": "https://registry.npmjs.org/default-gateway/-/default-gateway-4.2.0.tgz", + "integrity": "sha512-h6sMrVB1VMWVrW13mSc6ia/DwYYw5MN6+exNu1OaJeFac5aSAvwM7lZ0NVfTABuSkQelr4h5oebg3KB1XPdjgA==", + "dev": true, + "requires": { + "execa": "^1.0.0", + "ip-regex": "^2.1.0" + }, + "dependencies": { + "cross-spawn": { + "version": "6.0.5", + "resolved": "https://registry.npmjs.org/cross-spawn/-/cross-spawn-6.0.5.tgz", + "integrity": "sha512-eTVLrBSt7fjbDygz805pMnstIs2VTBNkRm0qxZd+M7A5XDdxVRWO5MxGBXZhjY4cqLYLdtrGqRf8mBPmzwSpWQ==", + "dev": true, + "requires": { + "nice-try": "^1.0.4", + "path-key": "^2.0.1", + "semver": "^5.5.0", + "shebang-command": "^1.2.0", + "which": "^1.2.9" + } + }, + "execa": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/execa/-/execa-1.0.0.tgz", + "integrity": "sha512-adbxcyWV46qiHyvSp50TKt05tB4tK3HcmF7/nxfAdhnox83seTDbwnaqKO4sXRy7roHAIFqJP/Rw/AuEbX61LA==", + "dev": true, + "requires": { + "cross-spawn": "^6.0.0", + "get-stream": "^4.0.0", + "is-stream": "^1.1.0", + "npm-run-path": "^2.0.0", + "p-finally": "^1.0.0", + "signal-exit": "^3.0.0", + "strip-eof": "^1.0.0" + } + }, + "get-stream": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/get-stream/-/get-stream-4.1.0.tgz", + "integrity": 
"sha512-GMat4EJ5161kIy2HevLlr4luNjBgvmj413KaQA7jt4V8B4RDsfpHk7WQ9GVqfYyyx8OS/L66Kox+rJRNklLK7w==", + "dev": true, + "requires": { + "pump": "^3.0.0" + } + }, + "is-stream": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/is-stream/-/is-stream-1.1.0.tgz", + "integrity": "sha1-EtSj3U5o4Lec6428hBc66A2RykQ=", + "dev": true + }, + "npm-run-path": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/npm-run-path/-/npm-run-path-2.0.2.tgz", + "integrity": "sha1-NakjLfo11wZ7TLLd8jV7GHFTbF8=", + "dev": true, + "requires": { + "path-key": "^2.0.0" + } + }, + "path-key": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/path-key/-/path-key-2.0.1.tgz", + "integrity": "sha1-QRyttXTFoUDTpLGRDUDYDMn0C0A=", + "dev": true + }, + "semver": { + "version": "5.7.1", + "resolved": "https://registry.npmjs.org/semver/-/semver-5.7.1.tgz", + "integrity": "sha512-sauaDf/PZdVgrLTNYHRtpXa1iRiKcaebiKQ1BJdpQlWH2lCvexQdX55snPFyK7QzpudqbCI0qXFfOasHdyNDGQ==", + "dev": true + }, + "shebang-command": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/shebang-command/-/shebang-command-1.2.0.tgz", + "integrity": "sha1-RKrGW2lbAzmJaMOfNj/uXer98eo=", + "dev": true, + "requires": { + "shebang-regex": "^1.0.0" + } + }, + "shebang-regex": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/shebang-regex/-/shebang-regex-1.0.0.tgz", + "integrity": "sha1-2kL0l0DAtC2yypcoVxyxkMmO/qM=", + "dev": true + }, + "which": { + "version": "1.3.1", + "resolved": "https://registry.npmjs.org/which/-/which-1.3.1.tgz", + "integrity": "sha512-HxJdYWq1MTIQbJ3nw0cqssHoTNU267KlrDuGZ1WYlxDStUtKUhOaJmh112/TZmHxxUfuJqPXSOm7tDyas0OSIQ==", + "dev": true, + "requires": { + "isexe": "^2.0.0" + } + } + } + }, + "defer-to-connect": { + "version": "1.1.3", + "resolved": "https://registry.npmjs.org/defer-to-connect/-/defer-to-connect-1.1.3.tgz", + "integrity": "sha512-0ISdNousHvZT2EiFlZeZAHBUvSxmKswVCEf8hW7KWgG4a8MVEu/3Vb6uWYozkjylyCxe0JBIiRB1jV45S70WVQ==", + "dev": 
true + }, + "define-properties": { + "version": "1.1.3", + "resolved": "https://registry.npmjs.org/define-properties/-/define-properties-1.1.3.tgz", + "integrity": "sha512-3MqfYKj2lLzdMSf8ZIZE/V+Zuy+BgD6f164e8K2w7dgnpKArBDerGYpM46IYYcjnkdPNMjPk9A6VFB8+3SKlXQ==", + "dev": true, + "requires": { + "object-keys": "^1.0.12" + } + }, + "define-property": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/define-property/-/define-property-2.0.2.tgz", + "integrity": "sha512-jwK2UV4cnPpbcG7+VRARKTZPUWowwXA8bzH5NP6ud0oeAxyYPuGZUAC7hMugpCdz4BeSZl2Dl9k66CHJ/46ZYQ==", + "dev": true, + "requires": { + "is-descriptor": "^1.0.2", + "isobject": "^3.0.1" + }, + "dependencies": { + "is-accessor-descriptor": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/is-accessor-descriptor/-/is-accessor-descriptor-1.0.0.tgz", + "integrity": "sha512-m5hnHTkcVsPfqx3AKlyttIPb7J+XykHvJP2B9bZDjlhLIoEq4XoK64Vg7boZlVWYK6LUY94dYPEE7Lh0ZkZKcQ==", + "dev": true, + "requires": { + "kind-of": "^6.0.0" + } + }, + "is-data-descriptor": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/is-data-descriptor/-/is-data-descriptor-1.0.0.tgz", + "integrity": "sha512-jbRXy1FmtAoCjQkVmIVYwuuqDFUbaOeDjmed1tOGPrsMhtJA4rD9tkgA0F1qJ3gRFRXcHYVkdeaP50Q5rE/jLQ==", + "dev": true, + "requires": { + "kind-of": "^6.0.0" + } + }, + "is-descriptor": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/is-descriptor/-/is-descriptor-1.0.2.tgz", + "integrity": "sha512-2eis5WqQGV7peooDyLmNEPUrps9+SXX5c9pL3xEB+4e9HnGuDa7mB7kHxHw4CbqS9k1T2hOH3miL8n8WtiYVtg==", + "dev": true, + "requires": { + "is-accessor-descriptor": "^1.0.0", + "is-data-descriptor": "^1.0.0", + "kind-of": "^6.0.2" + } + } + } + }, + "defined": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/defined/-/defined-1.0.0.tgz", + "integrity": "sha1-yY2bzvdWdBiOEQlpFRGZ45sfppM=", + "dev": true + }, + "del": { + "version": "4.1.1", + "resolved": "https://registry.npmjs.org/del/-/del-4.1.1.tgz", + 
"integrity": "sha512-QwGuEUouP2kVwQenAsOof5Fv8K9t3D8Ca8NxcXKrIpEHjTXK5J2nXLdP+ALI1cgv8wj7KuwBhTwBkOZSJKM5XQ==", + "dev": true, + "requires": { + "@types/glob": "^7.1.1", + "globby": "^6.1.0", + "is-path-cwd": "^2.0.0", + "is-path-in-cwd": "^2.0.0", + "p-map": "^2.0.0", + "pify": "^4.0.1", + "rimraf": "^2.6.3" + }, + "dependencies": { + "globby": { + "version": "6.1.0", + "resolved": "https://registry.npmjs.org/globby/-/globby-6.1.0.tgz", + "integrity": "sha1-9abXDoOV4hyFj7BInWTfAkJNUGw=", + "dev": true, + "requires": { + "array-union": "^1.0.1", + "glob": "^7.0.3", + "object-assign": "^4.0.1", + "pify": "^2.0.0", + "pinkie-promise": "^2.0.0" + }, + "dependencies": { + "pify": { + "version": "2.3.0", + "resolved": "https://registry.npmjs.org/pify/-/pify-2.3.0.tgz", + "integrity": "sha1-7RQaasBDqEnqWISY59yosVMw6Qw=", + "dev": true + } + } + } + } + }, + "delaunator": { + "version": "5.0.0", + "resolved": "https://registry.npmjs.org/delaunator/-/delaunator-5.0.0.tgz", + "integrity": "sha512-AyLvtyJdbv/U1GkiS6gUUzclRoAY4Gs75qkMygJJhU75LW4DNuSF2RMzpxs9jw9Oz1BobHjTdkG3zdP55VxAqw==", + "dev": true, + "requires": { + "robust-predicates": "^3.0.0" + } + }, + "delayed-stream": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/delayed-stream/-/delayed-stream-1.0.0.tgz", + "integrity": "sha1-3zrhmayt+31ECqrgsp4icrJOxhk=", + "dev": true + }, + "delegates": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/delegates/-/delegates-1.0.0.tgz", + "integrity": "sha1-hMbhWbgZBP3KWaDvRM2HDTElD5o=", + "dev": true + }, + "depd": { + "version": "1.1.2", + "resolved": "https://registry.npmjs.org/depd/-/depd-1.1.2.tgz", + "integrity": "sha1-m81S4UwJd2PnSbJ0xDRu0uVgtak=", + "dev": true + }, + "des.js": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/des.js/-/des.js-1.0.1.tgz", + "integrity": "sha512-Q0I4pfFrv2VPd34/vfLrFOoRmlYj3OV50i7fskps1jZWK1kApMWWT9G6RRUeYedLcBDIhnSDaUvJMb3AhUlaEA==", + "dev": true, + "requires": { + "inherits": "^2.0.1", 
+ "minimalistic-assert": "^1.0.0" + } + }, + "destroy": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/destroy/-/destroy-1.0.4.tgz", + "integrity": "sha1-l4hXRCxEdJ5CBmE+N5RiBYJqvYA=", + "dev": true + }, + "detect-node": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/detect-node/-/detect-node-2.1.0.tgz", + "integrity": "sha512-T0NIuQpnTvFDATNuHN5roPwSBG83rFsuO+MXXH9/3N1eFbn4wcPjttvjMLEPWJ0RGUYgQE7cGgS3tNxbqCGM7g==", + "dev": true + }, + "detective": { + "version": "5.2.0", + "resolved": "https://registry.npmjs.org/detective/-/detective-5.2.0.tgz", + "integrity": "sha512-6SsIx+nUUbuK0EthKjv0zrdnajCCXVYGmbYYiYjFVpzcjwEs/JMDZ8tPRG29J/HhN56t3GJp2cGSWDRjjot8Pg==", + "dev": true, + "requires": { + "acorn-node": "^1.6.1", + "defined": "^1.0.0", + "minimist": "^1.1.1" + } + }, + "diffie-hellman": { + "version": "5.0.3", + "resolved": "https://registry.npmjs.org/diffie-hellman/-/diffie-hellman-5.0.3.tgz", + "integrity": "sha512-kqag/Nl+f3GwyK25fhUMYj81BUOrZ9IuJsjIcDE5icNM9FJHAVm3VcUDxdLPoQtTuUylWm6ZIknYJwwaPxsUzg==", + "dev": true, + "requires": { + "bn.js": "^4.1.0", + "miller-rabin": "^4.0.0", + "randombytes": "^2.0.0" + }, + "dependencies": { + "bn.js": { + "version": "4.12.0", + "resolved": "https://registry.npmjs.org/bn.js/-/bn.js-4.12.0.tgz", + "integrity": "sha512-c98Bf3tPniI+scsdk237ku1Dc3ujXQTSgyiPUDEOe7tRkhrqridvh8klBv0HCEso1OLOYcHuCv/cS6DNxKH+ZA==", + "dev": true + } + } + }, + "dir-glob": { + "version": "2.2.2", + "resolved": "https://registry.npmjs.org/dir-glob/-/dir-glob-2.2.2.tgz", + "integrity": "sha512-f9LBi5QWzIW3I6e//uxZoLBlUt9kcp66qo0sSCxL6YZKc75R1c4MFCoe/LaZiBGmgujvQdxc5Bn3QhfyvK5Hsw==", + "dev": true, + "requires": { + "path-type": "^3.0.0" + } + }, + "dns-equal": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/dns-equal/-/dns-equal-1.0.0.tgz", + "integrity": "sha1-s55/HabrCnW6nBcySzR1PEfgZU0=", + "dev": true + }, + "dns-packet": { + "version": "1.3.2", + "resolved": 
"https://registry.npmjs.org/dns-packet/-/dns-packet-1.3.2.tgz", + "integrity": "sha512-qH/MS6fDOeNBTsF3k/v/SrwaXlDeewxgddXGUUfwauEBZkz7u59oF+3ZNSzcZeCuPWOfkqmcAnXW1gliiFW+1A==", + "dev": true, + "requires": { + "ip": "^1.1.0", + "safe-buffer": "^5.0.1" + } + }, + "dns-txt": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/dns-txt/-/dns-txt-2.0.2.tgz", + "integrity": "sha1-uR2Ab10nGI5Ks+fRB9iBocxGQrY=", + "dev": true, + "requires": { + "buffer-indexof": "^1.0.0" + } + }, + "docsearch.js": { + "version": "2.6.3", + "resolved": "https://registry.npmjs.org/docsearch.js/-/docsearch.js-2.6.3.tgz", + "integrity": "sha512-GN+MBozuyz664ycpZY0ecdQE0ND/LSgJKhTLA0/v3arIS3S1Rpf2OJz6A35ReMsm91V5apcmzr5/kM84cvUg+A==", + "dev": true, + "requires": { + "algoliasearch": "^3.24.5", + "autocomplete.js": "0.36.0", + "hogan.js": "^3.0.2", + "request": "^2.87.0", + "stack-utils": "^1.0.1", + "to-factory": "^1.0.0", + "zepto": "^1.2.0" + } + }, + "dom-converter": { + "version": "0.2.0", + "resolved": "https://registry.npmjs.org/dom-converter/-/dom-converter-0.2.0.tgz", + "integrity": "sha512-gd3ypIPfOMr9h5jIKq8E3sHOTCjeirnl0WK5ZdS1AW0Odt0b1PaWaHdJ4Qk4klv+YB9aJBS7mESXjFoDQPu6DA==", + "dev": true, + "requires": { + "utila": "~0.4" + } + }, + "dom-serializer": { + "version": "0.2.2", + "resolved": "https://registry.npmjs.org/dom-serializer/-/dom-serializer-0.2.2.tgz", + "integrity": "sha512-2/xPb3ORsQ42nHYiSunXkDjPLBaEj/xTwUO4B7XCZQTRk7EBtTOPaygh10YAAh2OI1Qrp6NWfpAhzswj0ydt9g==", + "dev": true, + "requires": { + "domelementtype": "^2.0.1", + "entities": "^2.0.0" + }, + "dependencies": { + "domelementtype": { + "version": "2.2.0", + "resolved": "https://registry.npmjs.org/domelementtype/-/domelementtype-2.2.0.tgz", + "integrity": "sha512-DtBMo82pv1dFtUmHyr48beiuq792Sxohr+8Hm9zoxklYPfa6n0Z3Byjj2IV7bmr2IyqClnqEQhfgHJJ5QF0R5A==", + "dev": true + }, + "entities": { + "version": "2.2.0", + "resolved": "https://registry.npmjs.org/entities/-/entities-2.2.0.tgz", + "integrity": 
"sha512-p92if5Nz619I0w+akJrLZH0MX0Pb5DX39XOwQTtXSdQQOaYH03S1uIQp4mhOZtAXrxq4ViO67YTiLBo2638o9A==", + "dev": true + } + } + }, + "dom-walk": { + "version": "0.1.2", + "resolved": "https://registry.npmjs.org/dom-walk/-/dom-walk-0.1.2.tgz", + "integrity": "sha512-6QvTW9mrGeIegrFXdtQi9pk7O/nSK6lSdXW2eqUspN5LWD7UTji2Fqw5V2YLjBpHEoU9Xl/eUWNpDeZvoyOv2w==", + "dev": true + }, + "domain-browser": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/domain-browser/-/domain-browser-1.2.0.tgz", + "integrity": "sha512-jnjyiM6eRyZl2H+W8Q/zLMA481hzi0eszAaBUzIVnmYVDBbnLxVNnfu1HgEBvCbL+71FrxMl3E6lpKH7Ge3OXA==", + "dev": true + }, + "domelementtype": { + "version": "1.3.1", + "resolved": "https://registry.npmjs.org/domelementtype/-/domelementtype-1.3.1.tgz", + "integrity": "sha512-BSKB+TSpMpFI/HOxCNr1O8aMOTZ8hT3pM3GQ0w/mWRmkhEDSFJkkyzz4XQsBV44BChwGkrDfMyjVD0eA2aFV3w==", + "dev": true + }, + "domhandler": { + "version": "2.4.2", + "resolved": "https://registry.npmjs.org/domhandler/-/domhandler-2.4.2.tgz", + "integrity": "sha512-JiK04h0Ht5u/80fdLMCEmV4zkNh2BcoMFBmZ/91WtYZ8qVXSKjiw7fXMgFPnHcSZgOo3XdinHvmnDUeMf5R4wA==", + "dev": true, + "requires": { + "domelementtype": "1" + } + }, + "dompurify": { + "version": "2.3.8", + "resolved": "https://registry.npmjs.org/dompurify/-/dompurify-2.3.8.tgz", + "integrity": "sha512-eVhaWoVibIzqdGYjwsBWodIQIaXFSB+cKDf4cfxLMsK0xiud6SE+/WCVx/Xw/UwQsa4cS3T2eITcdtmTg2UKcw==", + "dev": true + }, + "domutils": { + "version": "1.7.0", + "resolved": "https://registry.npmjs.org/domutils/-/domutils-1.7.0.tgz", + "integrity": "sha512-Lgd2XcJ/NjEw+7tFvfKxOzCYKZsdct5lczQ2ZaQY8Djz7pfAD3Gbp8ySJWtreII/vDlMVmxwa6pHmdxIYgttDg==", + "dev": true, + "requires": { + "dom-serializer": "0", + "domelementtype": "1" + } + }, + "dot-prop": { + "version": "5.3.0", + "resolved": "https://registry.npmjs.org/dot-prop/-/dot-prop-5.3.0.tgz", + "integrity": "sha512-QM8q3zDe58hqUqjraQOmzZ1LIH9SWQJTlEKCH4kJ2oQvLZk7RbQXvtDM2XEq3fwkV9CCvvH4LA0AV+ogFsBM2Q==", + "dev": true, + 
"requires": { + "is-obj": "^2.0.0" + } + }, + "dotenv": { + "version": "10.0.0", + "resolved": "https://registry.npmjs.org/dotenv/-/dotenv-10.0.0.tgz", + "integrity": "sha512-rlBi9d8jpv9Sf1klPjNfFAuWDjKLwTIJJ/VxtoTwIR6hnZxcEOQCZg2oIL3MWBYw5GpUDKOEnND7LXTbIpQ03Q==", + "dev": true + }, + "duplexer": { + "version": "0.1.2", + "resolved": "https://registry.npmjs.org/duplexer/-/duplexer-0.1.2.tgz", + "integrity": "sha512-jtD6YG370ZCIi/9GTaJKQxWTZD045+4R4hTk/x1UyoqadyJ9x9CgSi1RlVDQF8U2sxLLSnFkCaMihqljHIWgMg==", + "dev": true + }, + "duplexer3": { + "version": "0.1.4", + "resolved": "https://registry.npmjs.org/duplexer3/-/duplexer3-0.1.4.tgz", + "integrity": "sha1-7gHdHKwO08vH/b6jfcCo8c4ALOI=", + "dev": true + }, + "duplexify": { + "version": "3.7.1", + "resolved": "https://registry.npmjs.org/duplexify/-/duplexify-3.7.1.tgz", + "integrity": "sha512-07z8uv2wMyS51kKhD1KsdXJg5WQ6t93RneqRxUHnskXVtlYYkLqM0gqStQZ3pj073g687jPCHrqNfCzawLYh5g==", + "dev": true, + "requires": { + "end-of-stream": "^1.0.0", + "inherits": "^2.0.1", + "readable-stream": "^2.0.0", + "stream-shift": "^1.0.0" + } + }, + "ecc-jsbn": { + "version": "0.1.2", + "resolved": "https://registry.npmjs.org/ecc-jsbn/-/ecc-jsbn-0.1.2.tgz", + "integrity": "sha1-OoOpBOVDUyh4dMVkt1SThoSamMk=", + "dev": true, + "requires": { + "jsbn": "~0.1.0", + "safer-buffer": "^2.1.0" + } + }, + "ee-first": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/ee-first/-/ee-first-1.1.1.tgz", + "integrity": "sha1-WQxhFWsK4vTwJVcyoViyZrxWsh0=", + "dev": true + }, + "electron-to-chromium": { + "version": "1.3.738", + "resolved": "https://registry.npmjs.org/electron-to-chromium/-/electron-to-chromium-1.3.738.tgz", + "integrity": "sha512-vCMf4gDOpEylPSLPLSwAEsz+R3ShP02Y3cAKMZvTqule3XcPp7tgc/0ESI7IS6ZeyBlGClE50N53fIOkcIVnpw==", + "dev": true + }, + "elliptic": { + "version": "6.5.4", + "resolved": "https://registry.npmjs.org/elliptic/-/elliptic-6.5.4.tgz", + "integrity": 
"sha512-iLhC6ULemrljPZb+QutR5TQGB+pdW6KGD5RSegS+8sorOZT+rdQFbsQFJgvN3eRqNALqJer4oQ16YvJHlU8hzQ==", + "dev": true, + "requires": { + "bn.js": "^4.11.9", + "brorand": "^1.1.0", + "hash.js": "^1.0.0", + "hmac-drbg": "^1.0.1", + "inherits": "^2.0.4", + "minimalistic-assert": "^1.0.1", + "minimalistic-crypto-utils": "^1.0.1" + }, + "dependencies": { + "bn.js": { + "version": "4.12.0", + "resolved": "https://registry.npmjs.org/bn.js/-/bn.js-4.12.0.tgz", + "integrity": "sha512-c98Bf3tPniI+scsdk237ku1Dc3ujXQTSgyiPUDEOe7tRkhrqridvh8klBv0HCEso1OLOYcHuCv/cS6DNxKH+ZA==", + "dev": true + } + } + }, + "emoji-regex": { + "version": "7.0.3", + "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-7.0.3.tgz", + "integrity": "sha512-CwBLREIQ7LvYFB0WyRvwhq5N5qPhc6PMjD6bYggFlI5YyDgl+0vxq5VHbMOFqLg7hfWzmu8T5Z1QofhmTIhItA==", + "dev": true + }, + "emojis-list": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/emojis-list/-/emojis-list-3.0.0.tgz", + "integrity": "sha512-/kyM18EfinwXZbno9FyUGeFh87KC8HRQBQGildHZbEuRyWFOmv1U10o9BBp8XVZDVNNuQKyIGIu5ZYAAXJ0V2Q==", + "dev": true + }, + "encodeurl": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/encodeurl/-/encodeurl-1.0.2.tgz", + "integrity": "sha1-rT/0yG7C0CkyL1oCw6mmBslbP1k=", + "dev": true + }, + "encoding": { + "version": "0.1.13", + "resolved": "https://registry.npmjs.org/encoding/-/encoding-0.1.13.tgz", + "integrity": "sha512-ETBauow1T35Y/WZMkio9jiM0Z5xjHHmJ4XmjZOq1l/dXz3lr2sRn87nJy20RupqSh1F2m3HHPSp8ShIPQJrJ3A==", + "dev": true, + "optional": true, + "requires": { + "iconv-lite": "^0.6.2" + }, + "dependencies": { + "iconv-lite": { + "version": "0.6.3", + "resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.6.3.tgz", + "integrity": "sha512-4fCk79wshMdzMp2rH06qWrJE4iolqLhCUH+OiuIgU++RB0+94NlDL81atO7GX55uUKueo0txHNtvEyI6D7WdMw==", + "dev": true, + "optional": true, + "requires": { + "safer-buffer": ">= 2.1.2 < 3.0.0" + } + } + } + }, + "end-of-stream": { + "version": "1.4.4", + 
"resolved": "https://registry.npmjs.org/end-of-stream/-/end-of-stream-1.4.4.tgz", + "integrity": "sha512-+uw1inIHVPQoaVuHzRyXd21icM+cnt4CzD5rW+NC1wjOUSTOs+Te7FOv7AhN7vS9x/oIyhLP5PR1H+phQAHu5Q==", + "dev": true, + "requires": { + "once": "^1.4.0" + } + }, + "enhanced-resolve": { + "version": "4.5.0", + "resolved": "https://registry.npmjs.org/enhanced-resolve/-/enhanced-resolve-4.5.0.tgz", + "integrity": "sha512-Nv9m36S/vxpsI+Hc4/ZGRs0n9mXqSWGGq49zxb/cJfPAQMbUtttJAlNPS4AQzaBdw/pKskw5bMbekT/Y7W/Wlg==", + "dev": true, + "requires": { + "graceful-fs": "^4.1.2", + "memory-fs": "^0.5.0", + "tapable": "^1.0.0" + }, + "dependencies": { + "memory-fs": { + "version": "0.5.0", + "resolved": "https://registry.npmjs.org/memory-fs/-/memory-fs-0.5.0.tgz", + "integrity": "sha512-jA0rdU5KoQMC0e6ppoNRtpp6vjFq6+NY7r8hywnC7V+1Xj/MtHwGIbB1QaK/dunyjWteJzmkpd7ooeWg10T7GA==", + "dev": true, + "requires": { + "errno": "^0.1.3", + "readable-stream": "^2.0.1" + } + } + } + }, + "entities": { + "version": "1.1.2", + "resolved": "https://registry.npmjs.org/entities/-/entities-1.1.2.tgz", + "integrity": "sha512-f2LZMYl1Fzu7YSBKg+RoROelpOaNrcGmE9AZubeDfrCEia483oW4MI4VyFd5VNHIgQ/7qm1I0wUHK1eJnn2y2w==", + "dev": true + }, + "env-paths": { + "version": "2.2.1", + "resolved": "https://registry.npmjs.org/env-paths/-/env-paths-2.2.1.tgz", + "integrity": "sha512-+h1lkLKhZMTYjog1VEpJNG7NZJWcuc2DDk/qsqSTRRCOXiLjeQ1d1/udrUGhqMxUgAlwKNZ0cf2uqan5GLuS2A==", + "dev": true + }, + "envify": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/envify/-/envify-4.1.0.tgz", + "integrity": "sha512-IKRVVoAYr4pIx4yIWNsz9mOsboxlNXiu7TNBnem/K/uTHdkyzXWDzHCK7UTolqBbgaBz0tQHsD3YNls0uIIjiw==", + "dev": true, + "requires": { + "esprima": "^4.0.0", + "through": "~2.3.4" + } + }, + "envinfo": { + "version": "7.8.1", + "resolved": "https://registry.npmjs.org/envinfo/-/envinfo-7.8.1.tgz", + "integrity": "sha512-/o+BXHmB7ocbHEAs6F2EnG0ogybVVUdkRunTT2glZU9XAaGmhqskrvKwqXuDfNjEO0LZKWdejEEpnq8aM0tOaw==", + "dev": true + 
}, + "err-code": { + "version": "2.0.3", + "resolved": "https://registry.npmjs.org/err-code/-/err-code-2.0.3.tgz", + "integrity": "sha512-2bmlRpNKBxT/CRmPOlyISQpNj+qSeYvcym/uT0Jx2bMOlKLtSy1ZmLuVxSEKKyor/N5yhvp/ZiG1oE3DEYMSFA==", + "dev": true + }, + "errno": { + "version": "0.1.8", + "resolved": "https://registry.npmjs.org/errno/-/errno-0.1.8.tgz", + "integrity": "sha512-dJ6oBr5SQ1VSd9qkk7ByRgb/1SH4JZjCHSW/mr63/QcXO9zLVxvJ6Oy13nio03rxpSnVDDjFor75SjVeZWPW/A==", + "dev": true, + "requires": { + "prr": "~1.0.1" + } + }, + "error-ex": { + "version": "1.3.2", + "resolved": "https://registry.npmjs.org/error-ex/-/error-ex-1.3.2.tgz", + "integrity": "sha512-7dFHNmqeFSEt2ZBsCriorKnn3Z2pj+fd9kmI6QoWw4//DL+icEBfc0U7qJCisqrTsKTjw4fNFy2pW9OqStD84g==", + "dev": true, + "requires": { + "is-arrayish": "^0.2.1" + }, + "dependencies": { + "is-arrayish": { + "version": "0.2.1", + "resolved": "https://registry.npmjs.org/is-arrayish/-/is-arrayish-0.2.1.tgz", + "integrity": "sha1-d8mYQFJ6qOyxqLppe4BkWnqSap0=", + "dev": true + } + } + }, + "es-abstract": { + "version": "1.18.0", + "resolved": "https://registry.npmjs.org/es-abstract/-/es-abstract-1.18.0.tgz", + "integrity": "sha512-LJzK7MrQa8TS0ja2w3YNLzUgJCGPdPOV1yVvezjNnS89D+VR08+Szt2mz3YB2Dck/+w5tfIq/RoUAFqJJGM2yw==", + "dev": true, + "requires": { + "call-bind": "^1.0.2", + "es-to-primitive": "^1.2.1", + "function-bind": "^1.1.1", + "get-intrinsic": "^1.1.1", + "has": "^1.0.3", + "has-symbols": "^1.0.2", + "is-callable": "^1.2.3", + "is-negative-zero": "^2.0.1", + "is-regex": "^1.1.2", + "is-string": "^1.0.5", + "object-inspect": "^1.9.0", + "object-keys": "^1.1.1", + "object.assign": "^4.1.2", + "string.prototype.trimend": "^1.0.4", + "string.prototype.trimstart": "^1.0.4", + "unbox-primitive": "^1.0.0" + } + }, + "es-to-primitive": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/es-to-primitive/-/es-to-primitive-1.2.1.tgz", + "integrity": 
"sha512-QCOllgZJtaUo9miYBcLChTUaHNjJF3PYs1VidD7AwiEj1kYxKeQTctLAezAOH5ZKRH0g2IgPn6KwB4IT8iRpvA==", + "dev": true, + "requires": { + "is-callable": "^1.1.4", + "is-date-object": "^1.0.1", + "is-symbol": "^1.0.2" + } + }, + "es6-promise": { + "version": "4.2.8", + "resolved": "https://registry.npmjs.org/es6-promise/-/es6-promise-4.2.8.tgz", + "integrity": "sha512-HJDGx5daxeIvxdBxvG2cb9g4tEvwIk3i8+nhX0yGrYmZUzbkdg8QbDevheDB8gd0//uPj4c1EQua8Q+MViT0/w==", + "dev": true + }, + "escalade": { + "version": "3.1.1", + "resolved": "https://registry.npmjs.org/escalade/-/escalade-3.1.1.tgz", + "integrity": "sha512-k0er2gUkLf8O0zKJiAhmkTnJlTvINGv7ygDNPbeIsX/TJjGJZHuh9B2UxbsaEkmlEo9MfhrSzmhIlhRlI2GXnw==", + "dev": true + }, + "escape-goat": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/escape-goat/-/escape-goat-2.1.1.tgz", + "integrity": "sha512-8/uIhbG12Csjy2JEW7D9pHbreaVaS/OpN3ycnyvElTdwM5n6GY6W6e2IPemfvGZeUMqZ9A/3GqIZMgKnBhAw/Q==", + "dev": true + }, + "escape-html": { + "version": "1.0.3", + "resolved": "https://registry.npmjs.org/escape-html/-/escape-html-1.0.3.tgz", + "integrity": "sha1-Aljq5NPQwJdN4cFpGI7wBR0dGYg=", + "dev": true + }, + "escape-string-regexp": { + "version": "1.0.5", + "resolved": "https://registry.npmjs.org/escape-string-regexp/-/escape-string-regexp-1.0.5.tgz", + "integrity": "sha1-G2HAViGQqN/2rjuyzwIAyhMLhtQ=", + "dev": true + }, + "eslint-scope": { + "version": "4.0.3", + "resolved": "https://registry.npmjs.org/eslint-scope/-/eslint-scope-4.0.3.tgz", + "integrity": "sha512-p7VutNr1O/QrxysMo3E45FjYDTeXBy0iTltPFNSqKAIfjDSXC+4dj+qfyuD8bfAXrW/y6lW3O76VaYNPKfpKrg==", + "dev": true, + "requires": { + "esrecurse": "^4.1.0", + "estraverse": "^4.1.1" + } + }, + "esprima": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/esprima/-/esprima-4.0.1.tgz", + "integrity": "sha512-eGuFFw7Upda+g4p+QHvnW0RyTX/SVeJBDM/gCtMARO0cLuT2HcEKnTPvhjV6aGeqrCB/sbNop0Kszm0jsaWU4A==", + "dev": true + }, + "esrecurse": { + "version": "4.3.0", + 
"resolved": "https://registry.npmjs.org/esrecurse/-/esrecurse-4.3.0.tgz", + "integrity": "sha512-KmfKL3b6G+RXvP8N1vr3Tq1kL/oCFgn2NYXEtqP8/L3pKapUA4G8cFVaoF3SU323CD4XypR/ffioHmkti6/Tag==", + "dev": true, + "requires": { + "estraverse": "^5.2.0" + }, + "dependencies": { + "estraverse": { + "version": "5.2.0", + "resolved": "https://registry.npmjs.org/estraverse/-/estraverse-5.2.0.tgz", + "integrity": "sha512-BxbNGGNm0RyRYvUdHpIwv9IWzeM9XClbOxwoATuFdOE7ZE6wHL+HQ5T8hoPM+zHvmKzzsEqhgy0GrQ5X13afiQ==", + "dev": true + } + } + }, + "estraverse": { + "version": "4.3.0", + "resolved": "https://registry.npmjs.org/estraverse/-/estraverse-4.3.0.tgz", + "integrity": "sha512-39nnKffWz8xN1BU/2c79n9nB9HDzo0niYUqx6xyqUnyoAnQyyWpOTdZEeiCch8BBu515t4wp9ZmgVfVhn9EBpw==", + "dev": true + }, + "esutils": { + "version": "2.0.3", + "resolved": "https://registry.npmjs.org/esutils/-/esutils-2.0.3.tgz", + "integrity": "sha512-kVscqXk4OCp68SZ0dkgEKVi6/8ij300KBWTJq32P/dYeWTSwK41WyTxalN1eRmA5Z9UU/LX9D7FWSmV9SAYx6g==", + "dev": true + }, + "etag": { + "version": "1.8.1", + "resolved": "https://registry.npmjs.org/etag/-/etag-1.8.1.tgz", + "integrity": "sha1-Qa4u62XvpiJorr/qg6x9eSmbCIc=", + "dev": true + }, + "event-stream": { + "version": "3.3.4", + "resolved": "https://registry.npmjs.org/event-stream/-/event-stream-3.3.4.tgz", + "integrity": "sha1-SrTJoPWlTbkzi0w02Gv86PSzVXE=", + "dev": true, + "requires": { + "duplexer": "~0.1.1", + "from": "~0", + "map-stream": "~0.1.0", + "pause-stream": "0.0.11", + "split": "0.3", + "stream-combiner": "~0.0.4", + "through": "~2.3.1" + } + }, + "eventemitter3": { + "version": "4.0.7", + "resolved": "https://registry.npmjs.org/eventemitter3/-/eventemitter3-4.0.7.tgz", + "integrity": "sha512-8guHBZCwKnFhYdHr2ysuRWErTwhoN2X8XELRlrRwpmfeY2jjuUN4taQMsULKUVo1K4DvZl+0pgfyoysHxvmvEw==", + "dev": true + }, + "events": { + "version": "3.3.0", + "resolved": "https://registry.npmjs.org/events/-/events-3.3.0.tgz", + "integrity": 
"sha512-mQw+2fkQbALzQ7V0MY0IqdnXNOeTtP4r0lN9z7AAawCXgqea7bDii20AYrIBrFd/Hx0M2Ocz6S111CaFkUcb0Q==", + "dev": true + }, + "eventsource": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/eventsource/-/eventsource-1.1.0.tgz", + "integrity": "sha512-VSJjT5oCNrFvCS6igjzPAt5hBzQ2qPBFIbJ03zLI9SE0mxwZpMw6BfJrbFHm1a141AavMEB8JHmBhWAd66PfCg==", + "dev": true, + "requires": { + "original": "^1.0.0" + } + }, + "evp_bytestokey": { + "version": "1.0.3", + "resolved": "https://registry.npmjs.org/evp_bytestokey/-/evp_bytestokey-1.0.3.tgz", + "integrity": "sha512-/f2Go4TognH/KvCISP7OUsHn85hT9nUkxxA9BEWxFn+Oj9o8ZNLm/40hdlgSLyuOimsrTKLUMEorQexp/aPQeA==", + "dev": true, + "requires": { + "md5.js": "^1.3.4", + "safe-buffer": "^5.1.1" + } + }, + "execa": { + "version": "5.0.0", + "resolved": "https://registry.npmjs.org/execa/-/execa-5.0.0.tgz", + "integrity": "sha512-ov6w/2LCiuyO4RLYGdpFGjkcs0wMTgGE8PrkTHikeUy5iJekXyPIKUjifk5CsE0pt7sMCrMZ3YNqoCj6idQOnQ==", + "dev": true, + "requires": { + "cross-spawn": "^7.0.3", + "get-stream": "^6.0.0", + "human-signals": "^2.1.0", + "is-stream": "^2.0.0", + "merge-stream": "^2.0.0", + "npm-run-path": "^4.0.1", + "onetime": "^5.1.2", + "signal-exit": "^3.0.3", + "strip-final-newline": "^2.0.0" + } + }, + "expand-brackets": { + "version": "2.1.4", + "resolved": "https://registry.npmjs.org/expand-brackets/-/expand-brackets-2.1.4.tgz", + "integrity": "sha1-t3c14xXOMPa27/D4OwQVGiJEliI=", + "dev": true, + "requires": { + "debug": "^2.3.3", + "define-property": "^0.2.5", + "extend-shallow": "^2.0.1", + "posix-character-classes": "^0.1.0", + "regex-not": "^1.0.0", + "snapdragon": "^0.8.1", + "to-regex": "^3.0.1" + }, + "dependencies": { + "debug": { + "version": "2.6.9", + "resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz", + "integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==", + "dev": true, + "requires": { + "ms": "2.0.0" + } + }, + "define-property": { + "version": 
"0.2.5", + "resolved": "https://registry.npmjs.org/define-property/-/define-property-0.2.5.tgz", + "integrity": "sha1-w1se+RjsPJkPmlvFe+BKrOxcgRY=", + "dev": true, + "requires": { + "is-descriptor": "^0.1.0" + } + }, + "extend-shallow": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/extend-shallow/-/extend-shallow-2.0.1.tgz", + "integrity": "sha1-Ua99YUrZqfYQ6huvu5idaxxWiQ8=", + "dev": true, + "requires": { + "is-extendable": "^0.1.0" + } + }, + "ms": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz", + "integrity": "sha1-VgiurfwAvmwpAd9fmGF4jeDVl8g=", + "dev": true + } + } + }, + "express": { + "version": "4.17.1", + "resolved": "https://registry.npmjs.org/express/-/express-4.17.1.tgz", + "integrity": "sha512-mHJ9O79RqluphRrcw2X/GTh3k9tVv8YcoyY4Kkh4WDMUYKRZUq0h1o0w2rrrxBqM7VoeUVqgb27xlEMXTnYt4g==", + "dev": true, + "requires": { + "accepts": "~1.3.7", + "array-flatten": "1.1.1", + "body-parser": "1.19.0", + "content-disposition": "0.5.3", + "content-type": "~1.0.4", + "cookie": "0.4.0", + "cookie-signature": "1.0.6", + "debug": "2.6.9", + "depd": "~1.1.2", + "encodeurl": "~1.0.2", + "escape-html": "~1.0.3", + "etag": "~1.8.1", + "finalhandler": "~1.1.2", + "fresh": "0.5.2", + "merge-descriptors": "1.0.1", + "methods": "~1.1.2", + "on-finished": "~2.3.0", + "parseurl": "~1.3.3", + "path-to-regexp": "0.1.7", + "proxy-addr": "~2.0.5", + "qs": "6.7.0", + "range-parser": "~1.2.1", + "safe-buffer": "5.1.2", + "send": "0.17.1", + "serve-static": "1.14.1", + "setprototypeof": "1.1.1", + "statuses": "~1.5.0", + "type-is": "~1.6.18", + "utils-merge": "1.0.1", + "vary": "~1.1.2" + }, + "dependencies": { + "array-flatten": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/array-flatten/-/array-flatten-1.1.1.tgz", + "integrity": "sha1-ml9pkFGx5wczKPKgCJaLZOopVdI=", + "dev": true + }, + "debug": { + "version": "2.6.9", + "resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz", + "integrity": 
"sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==", + "dev": true, + "requires": { + "ms": "2.0.0" + } + }, + "ms": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz", + "integrity": "sha1-VgiurfwAvmwpAd9fmGF4jeDVl8g=", + "dev": true + }, + "qs": { + "version": "6.7.0", + "resolved": "https://registry.npmjs.org/qs/-/qs-6.7.0.tgz", + "integrity": "sha512-VCdBRNFTX1fyE7Nb6FYoURo/SPe62QCaAyzJvUjwRaIsc+NePBEniHlvxFmmX56+HZphIGtV0XeCirBtpDrTyQ==", + "dev": true + } + } + }, + "extend": { + "version": "3.0.2", + "resolved": "https://registry.npmjs.org/extend/-/extend-3.0.2.tgz", + "integrity": "sha512-fjquC59cD7CyW6urNXK0FBufkZcoiGG80wTuPujX590cB5Ttln20E2UB4S/WARVqhXffZl2LNgS+gQdPIIim/g==", + "dev": true + }, + "extend-shallow": { + "version": "3.0.2", + "resolved": "https://registry.npmjs.org/extend-shallow/-/extend-shallow-3.0.2.tgz", + "integrity": "sha1-Jqcarwc7OfshJxcnRhMcJwQCjbg=", + "dev": true, + "requires": { + "assign-symbols": "^1.0.0", + "is-extendable": "^1.0.1" + }, + "dependencies": { + "is-extendable": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/is-extendable/-/is-extendable-1.0.1.tgz", + "integrity": "sha512-arnXMxT1hhoKo9k1LZdmlNyJdDDfy2v0fXjFlmok4+i8ul/6WlbVge9bhM74OpNPQPMGUToDtz+KXa1PneJxOA==", + "dev": true, + "requires": { + "is-plain-object": "^2.0.4" + } + } + } + }, + "extglob": { + "version": "2.0.4", + "resolved": "https://registry.npmjs.org/extglob/-/extglob-2.0.4.tgz", + "integrity": "sha512-Nmb6QXkELsuBr24CJSkilo6UHHgbekK5UiZgfE6UHD3Eb27YC6oD+bhcT+tJ6cl8dmsgdQxnWlcry8ksBIBLpw==", + "dev": true, + "requires": { + "array-unique": "^0.3.2", + "define-property": "^1.0.0", + "expand-brackets": "^2.1.4", + "extend-shallow": "^2.0.1", + "fragment-cache": "^0.2.1", + "regex-not": "^1.0.0", + "snapdragon": "^0.8.1", + "to-regex": "^3.0.1" + }, + "dependencies": { + "define-property": { + "version": "1.0.0", + "resolved": 
"https://registry.npmjs.org/define-property/-/define-property-1.0.0.tgz", + "integrity": "sha1-dp66rz9KY6rTr56NMEybvnm/sOY=", + "dev": true, + "requires": { + "is-descriptor": "^1.0.0" + } + }, + "extend-shallow": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/extend-shallow/-/extend-shallow-2.0.1.tgz", + "integrity": "sha1-Ua99YUrZqfYQ6huvu5idaxxWiQ8=", + "dev": true, + "requires": { + "is-extendable": "^0.1.0" + } + }, + "is-accessor-descriptor": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/is-accessor-descriptor/-/is-accessor-descriptor-1.0.0.tgz", + "integrity": "sha512-m5hnHTkcVsPfqx3AKlyttIPb7J+XykHvJP2B9bZDjlhLIoEq4XoK64Vg7boZlVWYK6LUY94dYPEE7Lh0ZkZKcQ==", + "dev": true, + "requires": { + "kind-of": "^6.0.0" + } + }, + "is-data-descriptor": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/is-data-descriptor/-/is-data-descriptor-1.0.0.tgz", + "integrity": "sha512-jbRXy1FmtAoCjQkVmIVYwuuqDFUbaOeDjmed1tOGPrsMhtJA4rD9tkgA0F1qJ3gRFRXcHYVkdeaP50Q5rE/jLQ==", + "dev": true, + "requires": { + "kind-of": "^6.0.0" + } + }, + "is-descriptor": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/is-descriptor/-/is-descriptor-1.0.2.tgz", + "integrity": "sha512-2eis5WqQGV7peooDyLmNEPUrps9+SXX5c9pL3xEB+4e9HnGuDa7mB7kHxHw4CbqS9k1T2hOH3miL8n8WtiYVtg==", + "dev": true, + "requires": { + "is-accessor-descriptor": "^1.0.0", + "is-data-descriptor": "^1.0.0", + "kind-of": "^6.0.2" + } + } + } + }, + "extsprintf": { + "version": "1.3.0", + "resolved": "https://registry.npmjs.org/extsprintf/-/extsprintf-1.3.0.tgz", + "integrity": "sha1-lpGEQOMEGnpBT4xS48V06zw+HgU=", + "dev": true + }, + "fast-deep-equal": { + "version": "3.1.3", + "resolved": "https://registry.npmjs.org/fast-deep-equal/-/fast-deep-equal-3.1.3.tgz", + "integrity": "sha512-f3qQ9oQy9j2AhBe/H9VC91wLmKBCCU/gDOnKNAYG5hswO7BLKj09Hc5HYNz9cGI++xlpDCIgDaitVs03ATR84Q==", + "dev": true + }, + "fast-glob": { + "version": "2.2.7", + "resolved": 
"https://registry.npmjs.org/fast-glob/-/fast-glob-2.2.7.tgz", + "integrity": "sha512-g1KuQwHOZAmOZMuBtHdxDtju+T2RT8jgCC9aANsbpdiDDTSnjgfuVsIBNKbUeJI3oKMRExcfNDtJl4OhbffMsw==", + "dev": true, + "requires": { + "@mrmlnc/readdir-enhanced": "^2.2.1", + "@nodelib/fs.stat": "^1.1.2", + "glob-parent": "^3.1.0", + "is-glob": "^4.0.0", + "merge2": "^1.2.3", + "micromatch": "^3.1.10" + } + }, + "fast-json-stable-stringify": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/fast-json-stable-stringify/-/fast-json-stable-stringify-2.1.0.tgz", + "integrity": "sha512-lhd/wF+Lk98HZoTCtlVraHtfh5XYijIjalXck7saUtuanSDyLMxnHhSXEDJqHxD7msR8D0uCmqlkwjCV8xvwHw==", + "dev": true + }, + "fastq": { + "version": "1.11.0", + "resolved": "https://registry.npmjs.org/fastq/-/fastq-1.11.0.tgz", + "integrity": "sha512-7Eczs8gIPDrVzT+EksYBcupqMyxSHXXrHOLRRxU2/DicV8789MRBRR8+Hc2uWzUupOs4YS4JzBmBxjjCVBxD/g==", + "dev": true, + "requires": { + "reusify": "^1.0.4" + } + }, + "faye-websocket": { + "version": "0.11.4", + "resolved": "https://registry.npmjs.org/faye-websocket/-/faye-websocket-0.11.4.tgz", + "integrity": "sha512-CzbClwlXAuiRQAlUyfqPgvPoNKTckTPGfwZV4ZdAhVcP2lh9KUxJg2b5GkE7XbjKQ3YJnQ9z6D9ntLAlB+tP8g==", + "dev": true, + "requires": { + "websocket-driver": ">=0.5.1" + } + }, + "feed": { + "version": "2.0.4", + "resolved": "https://registry.npmjs.org/feed/-/feed-2.0.4.tgz", + "integrity": "sha512-sWatfulDP6d18qVaWcu34qmq9ml6UeN6nHSBJpNZ2muBqxjPAdT375whPYAHP+gqLfyabtYU5qf2Dv4nqtlp0w==", + "dev": true, + "requires": { + "luxon": "^1.3.3", + "xml": "^1.0.1" + } + }, + "figgy-pudding": { + "version": "3.5.2", + "resolved": "https://registry.npmjs.org/figgy-pudding/-/figgy-pudding-3.5.2.tgz", + "integrity": "sha512-0btnI/H8f2pavGMN8w40mlSKOfTK2SVJmBfBeVIj3kNw0swwgzyRq0d5TJVOwodFmtvpPeWPN/MCcfuWF0Ezbw==", + "dev": true + }, + "figures": { + "version": "3.2.0", + "resolved": "https://registry.npmjs.org/figures/-/figures-3.2.0.tgz", + "integrity": 
"sha512-yaduQFRKLXYOGgEn6AZau90j3ggSOyiqXU0F9JZfeXYhNa+Jk4X+s45A2zg5jns87GAFa34BBm2kXw4XpNcbdg==", + "dev": true, + "requires": { + "escape-string-regexp": "^1.0.5" + } + }, + "file-loader": { + "version": "3.0.1", + "resolved": "https://registry.npmjs.org/file-loader/-/file-loader-3.0.1.tgz", + "integrity": "sha512-4sNIOXgtH/9WZq4NvlfU3Opn5ynUsqBwSLyM+I7UOwdGigTBYfVVQEwe/msZNX/j4pCJTIM14Fsw66Svo1oVrw==", + "dev": true, + "requires": { + "loader-utils": "^1.0.2", + "schema-utils": "^1.0.0" + }, + "dependencies": { + "schema-utils": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-1.0.0.tgz", + "integrity": "sha512-i27Mic4KovM/lnGsy8whRCHhc7VicJajAjTrYg11K9zfZXnYIt4k5F+kZkwjnrhKzLic/HLU4j11mjsz2G/75g==", + "dev": true, + "requires": { + "ajv": "^6.1.0", + "ajv-errors": "^1.0.0", + "ajv-keywords": "^3.1.0" + } + } + } + }, + "file-uri-to-path": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/file-uri-to-path/-/file-uri-to-path-1.0.0.tgz", + "integrity": "sha512-0Zt+s3L7Vf1biwWZ29aARiVYLx7iMGnEUl9x33fbB/j3jR81u/O2LbqK+Bm1CDSNDKVtJ/YjwY7TUd5SkeLQLw==", + "dev": true, + "optional": true + }, + "fill-range": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/fill-range/-/fill-range-4.0.0.tgz", + "integrity": "sha1-1USBHUKPmOsGpj3EAtJAPDKMOPc=", + "dev": true, + "requires": { + "extend-shallow": "^2.0.1", + "is-number": "^3.0.0", + "repeat-string": "^1.6.1", + "to-regex-range": "^2.1.0" + }, + "dependencies": { + "extend-shallow": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/extend-shallow/-/extend-shallow-2.0.1.tgz", + "integrity": "sha1-Ua99YUrZqfYQ6huvu5idaxxWiQ8=", + "dev": true, + "requires": { + "is-extendable": "^0.1.0" + } + } + } + }, + "filter-obj": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/filter-obj/-/filter-obj-1.1.0.tgz", + "integrity": "sha1-mzERErxsYSehbgFsbF1/GeCAXFs=", + "dev": true + }, + "finalhandler": { + "version": "1.1.2", + 
"resolved": "https://registry.npmjs.org/finalhandler/-/finalhandler-1.1.2.tgz", + "integrity": "sha512-aAWcW57uxVNrQZqFXjITpW3sIUQmHGG3qSb9mUah9MgMC4NeWhNOlNjXEYq3HjRAvL6arUviZGGJsBg6z0zsWA==", + "dev": true, + "requires": { + "debug": "2.6.9", + "encodeurl": "~1.0.2", + "escape-html": "~1.0.3", + "on-finished": "~2.3.0", + "parseurl": "~1.3.3", + "statuses": "~1.5.0", + "unpipe": "~1.0.0" + }, + "dependencies": { + "debug": { + "version": "2.6.9", + "resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz", + "integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==", + "dev": true, + "requires": { + "ms": "2.0.0" + } + }, + "ms": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz", + "integrity": "sha1-VgiurfwAvmwpAd9fmGF4jeDVl8g=", + "dev": true + } + } + }, + "find-cache-dir": { + "version": "3.3.1", + "resolved": "https://registry.npmjs.org/find-cache-dir/-/find-cache-dir-3.3.1.tgz", + "integrity": "sha512-t2GDMt3oGC/v+BMwzmllWDuJF/xcDtE5j/fCGbqDD7OLuJkj0cfh1YSA5VKPvwMeLFLNDBkwOKZ2X85jGLVftQ==", + "dev": true, + "requires": { + "commondir": "^1.0.1", + "make-dir": "^3.0.2", + "pkg-dir": "^4.1.0" + } + }, + "find-up": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/find-up/-/find-up-4.1.0.tgz", + "integrity": "sha512-PpOwAdQ/YlXQ2vj8a3h8IipDuYRi3wceVQQGYWxNINccq40Anw7BlsEXCMbt1Zt+OLA6Fq9suIpIWD0OsnISlw==", + "dev": true, + "requires": { + "locate-path": "^5.0.0", + "path-exists": "^4.0.0" + } + }, + "flush-write-stream": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/flush-write-stream/-/flush-write-stream-1.1.1.tgz", + "integrity": "sha512-3Z4XhFZ3992uIq0XOqb9AreonueSYphE6oYbpt5+3u06JWklbsPkNv3ZKkP9Bz/r+1MWCaMoSQ28P85+1Yc77w==", + "dev": true, + "requires": { + "inherits": "^2.0.3", + "readable-stream": "^2.3.6" + } + }, + "follow-redirects": { + "version": "1.14.8", + "resolved": 
"https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.8.tgz", + "integrity": "sha512-1x0S9UVJHsQprFcEC/qnNzBLcIxsjAV905f/UkQxbclCsoTWlacCNOpQa/anodLl2uaEKFhfWOvM2Qg77+15zA==", + "dev": true + }, + "for-in": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/for-in/-/for-in-1.0.2.tgz", + "integrity": "sha1-gQaNKVqBQuwKxybG4iAMMPttXoA=", + "dev": true + }, + "foreach": { + "version": "2.0.5", + "resolved": "https://registry.npmjs.org/foreach/-/foreach-2.0.5.tgz", + "integrity": "sha1-C+4AUBiusmDQo6865ljdATbsG5k=", + "dev": true + }, + "forever-agent": { + "version": "0.6.1", + "resolved": "https://registry.npmjs.org/forever-agent/-/forever-agent-0.6.1.tgz", + "integrity": "sha1-+8cfDEGt6zf5bFd60e1C2P2sypE=", + "dev": true + }, + "form-data": { + "version": "2.3.3", + "resolved": "https://registry.npmjs.org/form-data/-/form-data-2.3.3.tgz", + "integrity": "sha512-1lLKB2Mu3aGP1Q/2eCOx0fNbRMe7XdwktwOruhfqqd0rIJWwN4Dh+E3hrPSlDCXnSR7UtZ1N38rVXm+6+MEhJQ==", + "dev": true, + "requires": { + "asynckit": "^0.4.0", + "combined-stream": "^1.0.6", + "mime-types": "^2.1.12" + } + }, + "forwarded": { + "version": "0.1.2", + "resolved": "https://registry.npmjs.org/forwarded/-/forwarded-0.1.2.tgz", + "integrity": "sha1-mMI9qxF1ZXuMBXPozszZGw/xjIQ=", + "dev": true + }, + "fp-and-or": { + "version": "0.1.3", + "resolved": "https://registry.npmjs.org/fp-and-or/-/fp-and-or-0.1.3.tgz", + "integrity": "sha512-wJaE62fLaB3jCYvY2ZHjZvmKK2iiLiiehX38rz5QZxtdN8fVPJDeZUiVvJrHStdTc+23LHlyZuSEKgFc0pxi2g==", + "dev": true + }, + "fragment-cache": { + "version": "0.2.1", + "resolved": "https://registry.npmjs.org/fragment-cache/-/fragment-cache-0.2.1.tgz", + "integrity": "sha1-QpD60n8T6Jvn8zeZxrxaCr//DRk=", + "dev": true, + "requires": { + "map-cache": "^0.2.2" + } + }, + "fresh": { + "version": "0.5.2", + "resolved": "https://registry.npmjs.org/fresh/-/fresh-0.5.2.tgz", + "integrity": "sha1-PYyt2Q2XZWn6g1qx+OSyOhBWBac=", + "dev": true + }, + "from": { + "version": 
"0.1.7", + "resolved": "https://registry.npmjs.org/from/-/from-0.1.7.tgz", + "integrity": "sha1-g8YK/Fi5xWmXAH7Rp2izqzA6RP4=", + "dev": true + }, + "from2": { + "version": "2.3.0", + "resolved": "https://registry.npmjs.org/from2/-/from2-2.3.0.tgz", + "integrity": "sha1-i/tVAr3kpNNs/e6gB/zKIdfjgq8=", + "dev": true, + "requires": { + "inherits": "^2.0.1", + "readable-stream": "^2.0.0" + } + }, + "fs-extra": { + "version": "8.1.0", + "resolved": "https://registry.npmjs.org/fs-extra/-/fs-extra-8.1.0.tgz", + "integrity": "sha512-yhlQgA6mnOJUKOsRUFsgJdQCvkKhcz8tlZG5HBQfReYZy46OwLcY+Zia0mtdHsOo9y/hP+CxMN0TU9QxoOtG4g==", + "dev": true, + "requires": { + "graceful-fs": "^4.2.0", + "jsonfile": "^4.0.0", + "universalify": "^0.1.0" + } + }, + "fs-minipass": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/fs-minipass/-/fs-minipass-2.1.0.tgz", + "integrity": "sha512-V/JgOLFCS+R6Vcq0slCuaeWEdNC3ouDlJMNIsacH2VtALiu9mV4LPrHc5cDl8k5aw6J8jwgWWpiTo5RYhmIzvg==", + "dev": true, + "requires": { + "minipass": "^3.0.0" + } + }, + "fs-write-stream-atomic": { + "version": "1.0.10", + "resolved": "https://registry.npmjs.org/fs-write-stream-atomic/-/fs-write-stream-atomic-1.0.10.tgz", + "integrity": "sha1-tH31NJPvkR33VzHnCp3tAYnbQMk=", + "dev": true, + "requires": { + "graceful-fs": "^4.1.2", + "iferr": "^0.1.5", + "imurmurhash": "^0.1.4", + "readable-stream": "1 || 2" + } + }, + "fs.realpath": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/fs.realpath/-/fs.realpath-1.0.0.tgz", + "integrity": "sha1-FQStJSMVjKpA20onh8sBQRmU6k8=", + "dev": true + }, + "fsevents": { + "version": "1.2.13", + "resolved": "https://registry.npmjs.org/fsevents/-/fsevents-1.2.13.tgz", + "integrity": "sha512-oWb1Z6mkHIskLzEJ/XWX0srkpkTQ7vaopMQkyaEIoq0fmtFVxOthb8cCxeT+p3ynTdkk/RZwbgG4brR5BeWECw==", + "dev": true, + "optional": true, + "requires": { + "bindings": "^1.5.0", + "nan": "^2.12.1" + } + }, + "function-bind": { + "version": "1.1.1", + "resolved": 
"https://registry.npmjs.org/function-bind/-/function-bind-1.1.1.tgz", + "integrity": "sha512-yIovAzMX49sF8Yl58fSCWJ5svSLuaibPxXQJFLmBObTuCr0Mf1KiPopGM9NiFjiYBCbfaa2Fh6breQ6ANVTI0A==", + "dev": true + }, + "gauge": { + "version": "2.7.4", + "resolved": "https://registry.npmjs.org/gauge/-/gauge-2.7.4.tgz", + "integrity": "sha1-LANAXHU4w51+s3sxcCLjJfsBi/c=", + "dev": true, + "requires": { + "aproba": "^1.0.3", + "console-control-strings": "^1.0.0", + "has-unicode": "^2.0.0", + "object-assign": "^4.1.0", + "signal-exit": "^3.0.0", + "string-width": "^1.0.1", + "strip-ansi": "^3.0.1", + "wide-align": "^1.1.0" + }, + "dependencies": { + "is-fullwidth-code-point": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/is-fullwidth-code-point/-/is-fullwidth-code-point-1.0.0.tgz", + "integrity": "sha1-754xOG8DGn8NZDr4L95QxFfvAMs=", + "dev": true, + "requires": { + "number-is-nan": "^1.0.0" + } + }, + "string-width": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/string-width/-/string-width-1.0.2.tgz", + "integrity": "sha1-EYvfW4zcUaKn5w0hHgfisLmxB9M=", + "dev": true, + "requires": { + "code-point-at": "^1.0.0", + "is-fullwidth-code-point": "^1.0.0", + "strip-ansi": "^3.0.0" + } + } + } + }, + "gensync": { + "version": "1.0.0-beta.2", + "resolved": "https://registry.npmjs.org/gensync/-/gensync-1.0.0-beta.2.tgz", + "integrity": "sha512-3hN7NaskYvMDLQY55gnW3NQ+mesEAepTqlg+VEbj7zzqEMBVNhzcGYYeqFo/TlYz6eQiFcp1HcsCZO+nGgS8zg==", + "dev": true + }, + "get-caller-file": { + "version": "2.0.5", + "resolved": "https://registry.npmjs.org/get-caller-file/-/get-caller-file-2.0.5.tgz", + "integrity": "sha512-DyFP3BM/3YHTQOCUL/w0OZHR0lpKeGrxotcHWcqNEdnltqFwXVfhEBQ94eIo34AfQpo0rGki4cyIiftY06h2Fg==", + "dev": true + }, + "get-intrinsic": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/get-intrinsic/-/get-intrinsic-1.1.1.tgz", + "integrity": "sha512-kWZrnVM42QCiEA2Ig1bG8zjoIMOgxWwYCEeNdwY6Tv/cOSeGpcoX4pXHfKUxNKVoArnrEr2e9srnAxxGIraS9Q==", + 
"dev": true, + "requires": { + "function-bind": "^1.1.1", + "has": "^1.0.3", + "has-symbols": "^1.0.1" + } + }, + "get-stdin": { + "version": "8.0.0", + "resolved": "https://registry.npmjs.org/get-stdin/-/get-stdin-8.0.0.tgz", + "integrity": "sha512-sY22aA6xchAzprjyqmSEQv4UbAAzRN0L2dQB0NlN5acTTK9Don6nhoc3eAbUnpZiCANAMfd/+40kVdKfFygohg==", + "dev": true + }, + "get-stream": { + "version": "6.0.1", + "resolved": "https://registry.npmjs.org/get-stream/-/get-stream-6.0.1.tgz", + "integrity": "sha512-ts6Wi+2j3jQjqi70w5AlN8DFnkSwC+MqmxEzdEALB2qXZYV3X/b1CTfgPLGJNMeAWxdPfU8FO1ms3NUfaHCPYg==", + "dev": true + }, + "get-value": { + "version": "2.0.6", + "resolved": "https://registry.npmjs.org/get-value/-/get-value-2.0.6.tgz", + "integrity": "sha1-3BXKHGcjh8p2vTesCjlbogQqLCg=", + "dev": true + }, + "getpass": { + "version": "0.1.7", + "resolved": "https://registry.npmjs.org/getpass/-/getpass-0.1.7.tgz", + "integrity": "sha1-Xv+OPmhNVprkyysSgmBOi6YhSfo=", + "dev": true, + "requires": { + "assert-plus": "^1.0.0" + } + }, + "github-markdown-css": { + "version": "3.0.1", + "resolved": "https://registry.npmjs.org/github-markdown-css/-/github-markdown-css-3.0.1.tgz", + "integrity": "sha512-9G5CIPsHoyk5ObDsb/H4KTi23J8KE1oDd4KYU51qwqeM+lKWAiO7abpSgCkyWswgmSKBiuE7/4f8xUz7f2qAiQ==", + "dev": true + }, + "glob": { + "version": "7.1.7", + "resolved": "https://registry.npmjs.org/glob/-/glob-7.1.7.tgz", + "integrity": "sha512-OvD9ENzPLbegENnYP5UUfJIirTg4+XwMWGaQfQTY0JenxNvvIKP3U3/tAQSPIu/lHxXYSZmpXlUHeqAIdKzBLQ==", + "dev": true, + "requires": { + "fs.realpath": "^1.0.0", + "inflight": "^1.0.4", + "inherits": "2", + "minimatch": "^3.0.4", + "once": "^1.3.0", + "path-is-absolute": "^1.0.0" + } + }, + "glob-parent": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz", + "integrity": "sha1-nmr2KZ2NO9K9QEMIMr0RPfkGxa4=", + "dev": true, + "requires": { + "is-glob": "^3.1.0", + "path-dirname": "^1.0.0" + }, + "dependencies": { + "is-glob": { + 
"version": "3.1.0", + "resolved": "https://registry.npmjs.org/is-glob/-/is-glob-3.1.0.tgz", + "integrity": "sha1-e6WuJCF4BKxwcHuWkiVnSGzD6Eo=", + "dev": true, + "requires": { + "is-extglob": "^2.1.0" + } + } + } + }, + "glob-to-regexp": { + "version": "0.3.0", + "resolved": "https://registry.npmjs.org/glob-to-regexp/-/glob-to-regexp-0.3.0.tgz", + "integrity": "sha1-jFoUlNIGbFcMw7/kSWF1rMTVAqs=", + "dev": true + }, + "global": { + "version": "4.4.0", + "resolved": "https://registry.npmjs.org/global/-/global-4.4.0.tgz", + "integrity": "sha512-wv/LAoHdRE3BeTGz53FAamhGlPLhlssK45usmGFThIi4XqnBmjKQ16u+RNbP7WvigRZDxUsM0J3gcQ5yicaL0w==", + "dev": true, + "requires": { + "min-document": "^2.19.0", + "process": "^0.11.10" + } + }, + "global-dirs": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/global-dirs/-/global-dirs-2.1.0.tgz", + "integrity": "sha512-MG6kdOUh/xBnyo9cJFeIKkLEc1AyFq42QTU4XiX51i2NEdxLxLWXIjEjmqKeSuKR7pAZjTqUVoT2b2huxVLgYQ==", + "dev": true, + "requires": { + "ini": "1.3.7" + } + }, + "globals": { + "version": "11.12.0", + "resolved": "https://registry.npmjs.org/globals/-/globals-11.12.0.tgz", + "integrity": "sha512-WOBp/EEGUiIsJSp7wcv/y6MO+lV9UoncWqxuFfm8eBwzWNgyfBd6Gz+IeKQ9jCmyhoH99g15M3T+QaVHFjizVA==", + "dev": true + }, + "globby": { + "version": "9.2.0", + "resolved": "https://registry.npmjs.org/globby/-/globby-9.2.0.tgz", + "integrity": "sha512-ollPHROa5mcxDEkwg6bPt3QbEf4pDQSNtd6JPL1YvOvAo/7/0VAm9TccUeoTmarjPw4pfUthSCqcyfNB1I3ZSg==", + "dev": true, + "requires": { + "@types/glob": "^7.1.1", + "array-union": "^1.0.2", + "dir-glob": "^2.2.2", + "fast-glob": "^2.2.6", + "glob": "^7.1.3", + "ignore": "^4.0.3", + "pify": "^4.0.1", + "slash": "^2.0.0" + } + }, + "got": { + "version": "9.6.0", + "resolved": "https://registry.npmjs.org/got/-/got-9.6.0.tgz", + "integrity": "sha512-R7eWptXuGYxwijs0eV+v3o6+XH1IqVK8dJOEecQfTmkncw9AV4dcw/Dhxi8MdlqPthxxpZyizMzyg8RTmEsG+Q==", + "dev": true, + "requires": { + "@sindresorhus/is": "^0.14.0", + 
"@szmarczak/http-timer": "^1.1.2", + "cacheable-request": "^6.0.0", + "decompress-response": "^3.3.0", + "duplexer3": "^0.1.4", + "get-stream": "^4.1.0", + "lowercase-keys": "^1.0.1", + "mimic-response": "^1.0.1", + "p-cancelable": "^1.0.0", + "to-readable-stream": "^1.0.0", + "url-parse-lax": "^3.0.0" + }, + "dependencies": { + "get-stream": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/get-stream/-/get-stream-4.1.0.tgz", + "integrity": "sha512-GMat4EJ5161kIy2HevLlr4luNjBgvmj413KaQA7jt4V8B4RDsfpHk7WQ9GVqfYyyx8OS/L66Kox+rJRNklLK7w==", + "dev": true, + "requires": { + "pump": "^3.0.0" + } + } + } + }, + "graceful-fs": { + "version": "4.2.6", + "resolved": "https://registry.npmjs.org/graceful-fs/-/graceful-fs-4.2.6.tgz", + "integrity": "sha512-nTnJ528pbqxYanhpDYsi4Rd8MAeaBA67+RZ10CM1m3bTAVFEDcd5AuA4a6W5YkGZ1iNXHzZz8T6TBKLeBuNriQ==", + "dev": true + }, + "graphlib": { + "version": "2.1.8", + "resolved": "https://registry.npmjs.org/graphlib/-/graphlib-2.1.8.tgz", + "integrity": "sha512-jcLLfkpoVGmH7/InMC/1hIvOPSUh38oJtGhvrOFGzioE1DZ+0YW16RgmOJhHiuWTvGiJQ9Z1Ik43JvkRPRvE+A==", + "dev": true, + "requires": { + "lodash": "^4.17.15" + } + }, + "gray-matter": { + "version": "4.0.3", + "resolved": "https://registry.npmjs.org/gray-matter/-/gray-matter-4.0.3.tgz", + "integrity": "sha512-5v6yZd4JK3eMI3FqqCouswVqwugaA9r4dNZB1wwcmrD02QkV5H0y7XBQW8QwQqEaZY1pM9aqORSORhJRdNK44Q==", + "dev": true, + "requires": { + "js-yaml": "^3.13.1", + "kind-of": "^6.0.2", + "section-matter": "^1.0.0", + "strip-bom-string": "^1.0.0" + } + }, + "handle-thing": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/handle-thing/-/handle-thing-2.0.1.tgz", + "integrity": "sha512-9Qn4yBxelxoh2Ow62nP+Ka/kMnOXRi8BXnRaUwezLNhqelnN49xKz4F/dPP8OYLxLxq6JDtZb2i9XznUQbNPTg==", + "dev": true + }, + "har-schema": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/har-schema/-/har-schema-2.0.0.tgz", + "integrity": "sha1-qUwiJOvKwEeCoNkDVSHyRzW37JI=", + "dev": true + }, 
+ "har-validator": { + "version": "5.1.5", + "resolved": "https://registry.npmjs.org/har-validator/-/har-validator-5.1.5.tgz", + "integrity": "sha512-nmT2T0lljbxdQZfspsno9hgrG3Uir6Ks5afism62poxqBM6sDnMEuPmzTq8XN0OEwqKLLdh1jQI3qyE66Nzb3w==", + "dev": true, + "requires": { + "ajv": "^6.12.3", + "har-schema": "^2.0.0" + } + }, + "has": { + "version": "1.0.3", + "resolved": "https://registry.npmjs.org/has/-/has-1.0.3.tgz", + "integrity": "sha512-f2dvO0VU6Oej7RkWJGrehjbzMAjFp5/VKPp5tTpWIV4JHHZK1/BxbFRtf/siA2SWTe09caDmVtYYzWEIbBS4zw==", + "dev": true, + "requires": { + "function-bind": "^1.1.1" + } + }, + "has-ansi": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/has-ansi/-/has-ansi-2.0.0.tgz", + "integrity": "sha1-NPUEnOHs3ysGSa8+8k5F7TVBbZE=", + "dev": true, + "requires": { + "ansi-regex": "^2.0.0" + } + }, + "has-bigints": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/has-bigints/-/has-bigints-1.0.1.tgz", + "integrity": "sha512-LSBS2LjbNBTf6287JEbEzvJgftkF5qFkmCo9hDRpAzKhUOlJ+hx8dd4USs00SgsUNwc4617J9ki5YtEClM2ffA==", + "dev": true + }, + "has-flag": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/has-flag/-/has-flag-3.0.0.tgz", + "integrity": "sha1-tdRU3CGZriJWmfNGfloH87lVuv0=", + "dev": true + }, + "has-symbols": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/has-symbols/-/has-symbols-1.0.2.tgz", + "integrity": "sha512-chXa79rL/UC2KlX17jo3vRGz0azaWEx5tGqZg5pO3NUyEJVB17dMruQlzCCOfUvElghKcm5194+BCRvi2Rv/Gw==", + "dev": true + }, + "has-unicode": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/has-unicode/-/has-unicode-2.0.1.tgz", + "integrity": "sha1-4Ob+aijPUROIVeCG0Wkedx3iqLk=", + "dev": true + }, + "has-value": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/has-value/-/has-value-1.0.0.tgz", + "integrity": "sha1-GLKB2lhbHFxR3vJMkw7SmgvmsXc=", + "dev": true, + "requires": { + "get-value": "^2.0.6", + "has-values": "^1.0.0", + "isobject": "^3.0.0" + } + }, + 
"has-values": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/has-values/-/has-values-1.0.0.tgz", + "integrity": "sha1-lbC2P+whRmGab+V/51Yo1aOe/k8=", + "dev": true, + "requires": { + "is-number": "^3.0.0", + "kind-of": "^4.0.0" + }, + "dependencies": { + "kind-of": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/kind-of/-/kind-of-4.0.0.tgz", + "integrity": "sha1-IIE989cSkosgc3hpGkUGb65y3Vc=", + "dev": true, + "requires": { + "is-buffer": "^1.1.5" + } + } + } + }, + "has-yarn": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/has-yarn/-/has-yarn-2.1.0.tgz", + "integrity": "sha512-UqBRqi4ju7T+TqGNdqAO0PaSVGsDGJUBQvk9eUWNGRY1CFGDzYhLWoM7JQEemnlvVcv/YEmc2wNW8BC24EnUsw==", + "dev": true + }, + "hash-base": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/hash-base/-/hash-base-3.1.0.tgz", + "integrity": "sha512-1nmYp/rhMDiE7AYkDw+lLwlAzz0AntGIe51F3RfFfEqyQ3feY2eI/NcwC6umIQVOASPMsWJLJScWKSSvzL9IVA==", + "dev": true, + "requires": { + "inherits": "^2.0.4", + "readable-stream": "^3.6.0", + "safe-buffer": "^5.2.0" + }, + "dependencies": { + "readable-stream": { + "version": "3.6.0", + "resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-3.6.0.tgz", + "integrity": "sha512-BViHy7LKeTz4oNnkcLJ+lVSL6vpiFeX6/d3oSH8zCW7UxP2onchk+vTGB143xuFjHS3deTgkKoXXymXqymiIdA==", + "dev": true, + "requires": { + "inherits": "^2.0.3", + "string_decoder": "^1.1.1", + "util-deprecate": "^1.0.1" + } + }, + "safe-buffer": { + "version": "5.2.1", + "resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.2.1.tgz", + "integrity": "sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ==", + "dev": true + } + } + }, + "hash-sum": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/hash-sum/-/hash-sum-1.0.2.tgz", + "integrity": "sha1-M7QHd3VMZDJXPBIMw4CLvRDUfwQ=", + "dev": true + }, + "hash.js": { + "version": "1.1.7", + "resolved": 
"https://registry.npmjs.org/hash.js/-/hash.js-1.1.7.tgz", + "integrity": "sha512-taOaskGt4z4SOANNseOviYDvjEJinIkRgmp7LbKP2YTTmVxWBl87s/uzK9r+44BclBSp2X7K1hqeNfz9JbBeXA==", + "dev": true, + "requires": { + "inherits": "^2.0.3", + "minimalistic-assert": "^1.0.1" + } + }, + "he": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/he/-/he-1.2.0.tgz", + "integrity": "sha512-F/1DnUGPopORZi0ni+CvrCgHQ5FyEAHRLSApuYWMmrbSwoN2Mn/7k+Gl38gJnR7yyDZk6WLXwiGod1JOWNDKGw==", + "dev": true + }, + "hex-color-regex": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/hex-color-regex/-/hex-color-regex-1.1.0.tgz", + "integrity": "sha512-l9sfDFsuqtOqKDsQdqrMRk0U85RZc0RtOR9yPI7mRVOa4FsR/BVnZ0shmQRM96Ji99kYZP/7hn1cedc1+ApsTQ==", + "dev": true + }, + "hmac-drbg": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/hmac-drbg/-/hmac-drbg-1.0.1.tgz", + "integrity": "sha1-0nRXAQJabHdabFRXk+1QL8DGSaE=", + "dev": true, + "requires": { + "hash.js": "^1.0.3", + "minimalistic-assert": "^1.0.0", + "minimalistic-crypto-utils": "^1.0.1" + } + }, + "hogan.js": { + "version": "3.0.2", + "resolved": "https://registry.npmjs.org/hogan.js/-/hogan.js-3.0.2.tgz", + "integrity": "sha1-TNnhq9QpQUbnZ55B14mHMrAse/0=", + "dev": true, + "requires": { + "mkdirp": "0.3.0", + "nopt": "1.0.10" + }, + "dependencies": { + "mkdirp": { + "version": "0.3.0", + "resolved": "https://registry.npmjs.org/mkdirp/-/mkdirp-0.3.0.tgz", + "integrity": "sha1-G79asbqCevI1dRQ0kEJkVfSB/h4=", + "dev": true + } + } + }, + "hosted-git-info": { + "version": "4.0.2", + "resolved": "https://registry.npmjs.org/hosted-git-info/-/hosted-git-info-4.0.2.tgz", + "integrity": "sha512-c9OGXbZ3guC/xOlCg1Ci/VgWlwsqDv1yMQL1CWqXDL0hDjXuNcq0zuR4xqPSuasI3kqFDhqSyTjREz5gzq0fXg==", + "dev": true, + "requires": { + "lru-cache": "^6.0.0" + }, + "dependencies": { + "lru-cache": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/lru-cache/-/lru-cache-6.0.0.tgz", + "integrity": 
"sha512-Jo6dJ04CmSjuznwJSS3pUeWmd/H0ffTlkXXgwZi+eq1UCmqQwCh+eLsYOYCwY991i2Fah4h1BEMCx4qThGbsiA==", + "dev": true, + "requires": { + "yallist": "^4.0.0" + } + }, + "yallist": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/yallist/-/yallist-4.0.0.tgz", + "integrity": "sha512-3wdGidZyq5PB084XLES5TpOSRA3wjXAlIWMhum2kRcv/41Sn2emQ0dycQW4uZXLejwKvg6EsvbdlVL+FYEct7A==", + "dev": true + } + } + }, + "hpack.js": { + "version": "2.1.6", + "resolved": "https://registry.npmjs.org/hpack.js/-/hpack.js-2.1.6.tgz", + "integrity": "sha1-h3dMCUnlE/QuhFdbPEVoH63ioLI=", + "dev": true, + "requires": { + "inherits": "^2.0.1", + "obuf": "^1.0.0", + "readable-stream": "^2.0.1", + "wbuf": "^1.1.0" + } + }, + "hsl-regex": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/hsl-regex/-/hsl-regex-1.0.0.tgz", + "integrity": "sha1-1JMwx4ntgZ4nakwNJy3/owsY/m4=", + "dev": true + }, + "hsla-regex": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/hsla-regex/-/hsla-regex-1.0.0.tgz", + "integrity": "sha1-wc56MWjIxmFAM6S194d/OyJfnDg=", + "dev": true + }, + "html-entities": { + "version": "1.4.0", + "resolved": "https://registry.npmjs.org/html-entities/-/html-entities-1.4.0.tgz", + "integrity": "sha512-8nxjcBcd8wovbeKx7h3wTji4e6+rhaVuPNpMqwWgnHh+N9ToqsCs6XztWRBPQ+UtzsoMAdKZtUENoVzU/EMtZA==", + "dev": true + }, + "html-tags": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/html-tags/-/html-tags-3.1.0.tgz", + "integrity": "sha512-1qYz89hW3lFDEazhjW0yVAV87lw8lVkrJocr72XmBkMKsoSVJCQx3W8BXsC7hO2qAt8BoVjYjtAcZ9perqGnNg==", + "dev": true + }, + "htmlparser2": { + "version": "3.10.1", + "resolved": "https://registry.npmjs.org/htmlparser2/-/htmlparser2-3.10.1.tgz", + "integrity": "sha512-IgieNijUMbkDovyoKObU1DUhm1iwNYE/fuifEoEHfd1oZKZDaONBSkal7Y01shxsM49R4XaMdGez3WnF9UfiCQ==", + "dev": true, + "requires": { + "domelementtype": "^1.3.1", + "domhandler": "^2.3.0", + "domutils": "^1.5.1", + "entities": "^1.1.1", + "inherits": "^2.0.1", + 
"readable-stream": "^3.1.1" + }, + "dependencies": { + "readable-stream": { + "version": "3.6.0", + "resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-3.6.0.tgz", + "integrity": "sha512-BViHy7LKeTz4oNnkcLJ+lVSL6vpiFeX6/d3oSH8zCW7UxP2onchk+vTGB143xuFjHS3deTgkKoXXymXqymiIdA==", + "dev": true, + "requires": { + "inherits": "^2.0.3", + "string_decoder": "^1.1.1", + "util-deprecate": "^1.0.1" + } + } + } + }, + "http-cache-semantics": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/http-cache-semantics/-/http-cache-semantics-4.1.0.tgz", + "integrity": "sha512-carPklcUh7ROWRK7Cv27RPtdhYhUsela/ue5/jKzjegVvXDqM2ILE9Q2BGn9JZJh1g87cp56su/FgQSzcWS8cQ==", + "dev": true + }, + "http-deceiver": { + "version": "1.2.7", + "resolved": "https://registry.npmjs.org/http-deceiver/-/http-deceiver-1.2.7.tgz", + "integrity": "sha1-+nFolEq5pRnTN8sL7HKE3D5yPYc=", + "dev": true + }, + "http-errors": { + "version": "1.7.2", + "resolved": "https://registry.npmjs.org/http-errors/-/http-errors-1.7.2.tgz", + "integrity": "sha512-uUQBt3H/cSIVfch6i1EuPNy/YsRSOUBXTVfZ+yR7Zjez3qjBz6i9+i4zjNaoqcoFVI4lQJ5plg63TvGfRSDCRg==", + "dev": true, + "requires": { + "depd": "~1.1.2", + "inherits": "2.0.3", + "setprototypeof": "1.1.1", + "statuses": ">= 1.5.0 < 2", + "toidentifier": "1.0.0" + }, + "dependencies": { + "inherits": { + "version": "2.0.3", + "resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.3.tgz", + "integrity": "sha1-Yzwsg+PaQqUC9SRmAiSA9CCCYd4=", + "dev": true + } + } + }, + "http-parser-js": { + "version": "0.5.3", + "resolved": "https://registry.npmjs.org/http-parser-js/-/http-parser-js-0.5.3.tgz", + "integrity": "sha512-t7hjvef/5HEK7RWTdUzVUhl8zkEu+LlaE0IYzdMuvbSDipxBRpOn4Uhw8ZyECEa808iVT8XCjzo6xmYt4CiLZg==", + "dev": true + }, + "http-proxy": { + "version": "1.18.1", + "resolved": "https://registry.npmjs.org/http-proxy/-/http-proxy-1.18.1.tgz", + "integrity": 
"sha512-7mz/721AbnJwIVbnaSv1Cz3Am0ZLT/UBwkC92VlxhXv/k/BBQfM2fXElQNC27BVGr0uwUpplYPQM9LnaBMR5NQ==", + "dev": true, + "requires": { + "eventemitter3": "^4.0.0", + "follow-redirects": "^1.0.0", + "requires-port": "^1.0.0" + } + }, + "http-proxy-agent": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/http-proxy-agent/-/http-proxy-agent-4.0.1.tgz", + "integrity": "sha512-k0zdNgqWTGA6aeIRVpvfVob4fL52dTfaehylg0Y4UvSySvOq/Y+BOyPrgpUrA7HylqvU8vIZGsRuXmspskV0Tg==", + "dev": true, + "requires": { + "@tootallnate/once": "1", + "agent-base": "6", + "debug": "4" + } + }, + "http-proxy-middleware": { + "version": "0.19.1", + "resolved": "https://registry.npmjs.org/http-proxy-middleware/-/http-proxy-middleware-0.19.1.tgz", + "integrity": "sha512-yHYTgWMQO8VvwNS22eLLloAkvungsKdKTLO8AJlftYIKNfJr3GK3zK0ZCfzDDGUBttdGc8xFy1mCitvNKQtC3Q==", + "dev": true, + "requires": { + "http-proxy": "^1.17.0", + "is-glob": "^4.0.0", + "lodash": "^4.17.11", + "micromatch": "^3.1.10" + } + }, + "http-signature": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/http-signature/-/http-signature-1.2.0.tgz", + "integrity": "sha1-muzZJRFHcvPZW2WmCruPfBj7rOE=", + "dev": true, + "requires": { + "assert-plus": "^1.0.0", + "jsprim": "^1.2.2", + "sshpk": "^1.7.0" + } + }, + "https-browserify": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/https-browserify/-/https-browserify-1.0.0.tgz", + "integrity": "sha1-7AbBDgo0wPL68Zn3/X/Hj//QPHM=", + "dev": true + }, + "https-proxy-agent": { + "version": "5.0.0", + "resolved": "https://registry.npmjs.org/https-proxy-agent/-/https-proxy-agent-5.0.0.tgz", + "integrity": "sha512-EkYm5BcKUGiduxzSt3Eppko+PiNWNEpa4ySk9vTC6wDsQJW9rHSa+UhGNJoRYp7bz6Ht1eaRIa6QaJqO5rCFbA==", + "dev": true, + "requires": { + "agent-base": "6", + "debug": "4" + } + }, + "human-signals": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/human-signals/-/human-signals-2.1.0.tgz", + "integrity": 
"sha512-B4FFZ6q/T2jhhksgkbEW3HBvWIfDW85snkQgawt07S7J5QXTk6BkNV+0yAeZrM5QpMAdYlocGoljn0sJ/WQkFw==", + "dev": true + }, + "humanize-ms": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/humanize-ms/-/humanize-ms-1.2.1.tgz", + "integrity": "sha1-xG4xWaKT9riW2ikxbYtv6Lt5u+0=", + "dev": true, + "requires": { + "ms": "^2.0.0" + } + }, + "iconv-lite": { + "version": "0.4.24", + "resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.4.24.tgz", + "integrity": "sha512-v3MXnZAcvnywkTUEZomIActle7RXXeedOR31wwl7VlyoXO4Qi9arvSenNQWne1TcRwhCL1HwLI21bEqdpj8/rA==", + "dev": true, + "requires": { + "safer-buffer": ">= 2.1.2 < 3" + } + }, + "icss-replace-symbols": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/icss-replace-symbols/-/icss-replace-symbols-1.1.0.tgz", + "integrity": "sha1-Bupvg2ead0njhs/h/oEq5dsiPe0=", + "dev": true + }, + "icss-utils": { + "version": "4.1.1", + "resolved": "https://registry.npmjs.org/icss-utils/-/icss-utils-4.1.1.tgz", + "integrity": "sha512-4aFq7wvWyMHKgxsH8QQtGpvbASCf+eM3wPRLI6R+MgAnTCZ6STYsRvttLvRWK0Nfif5piF394St3HeJDaljGPA==", + "dev": true, + "requires": { + "postcss": "^7.0.14" + } + }, + "ieee754": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/ieee754/-/ieee754-1.2.1.tgz", + "integrity": "sha512-dcyqhDvX1C46lXZcVqCpK+FtMRQVdIMN6/Df5js2zouUsqG7I6sFxitIC+7KYK29KdXOLHdu9zL4sFnoVQnqaA==", + "dev": true + }, + "iferr": { + "version": "0.1.5", + "resolved": "https://registry.npmjs.org/iferr/-/iferr-0.1.5.tgz", + "integrity": "sha1-xg7taebY/bazEEofy8ocGS3FtQE=", + "dev": true + }, + "ignore": { + "version": "4.0.6", + "resolved": "https://registry.npmjs.org/ignore/-/ignore-4.0.6.tgz", + "integrity": "sha512-cyFDKrqc/YdcWFniJhzI42+AzS+gNwmUzOSFcRCQYwySuBBBy/KjuxWLZ/FHEH6Moq1NizMOBWyTcv8O4OZIMg==", + "dev": true + }, + "ignore-walk": { + "version": "3.0.4", + "resolved": "https://registry.npmjs.org/ignore-walk/-/ignore-walk-3.0.4.tgz", + "integrity": 
"sha512-PY6Ii8o1jMRA1z4F2hRkH/xN59ox43DavKvD3oDpfurRlOJyAHpifIwpbdv1n4jt4ov0jSpw3kQ4GhJnpBL6WQ==", + "dev": true, + "requires": { + "minimatch": "^3.0.4" + } + }, + "immediate": { + "version": "3.3.0", + "resolved": "https://registry.npmjs.org/immediate/-/immediate-3.3.0.tgz", + "integrity": "sha512-HR7EVodfFUdQCTIeySw+WDRFJlPcLOJbXfwwZ7Oom6tjsvZ3bOkCDJHehQC3nxJrv7+f9XecwazynjU8e4Vw3Q==", + "dev": true + }, + "import-cwd": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/import-cwd/-/import-cwd-2.1.0.tgz", + "integrity": "sha1-qmzzbnInYShcs3HsZRn1PiQ1sKk=", + "dev": true, + "requires": { + "import-from": "^2.1.0" + } + }, + "import-fresh": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/import-fresh/-/import-fresh-2.0.0.tgz", + "integrity": "sha1-2BNVwVYS04bGH53dOSLUMEgipUY=", + "dev": true, + "requires": { + "caller-path": "^2.0.0", + "resolve-from": "^3.0.0" + } + }, + "import-from": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/import-from/-/import-from-2.1.0.tgz", + "integrity": "sha1-M1238qev/VOqpHHUuAId7ja387E=", + "dev": true, + "requires": { + "resolve-from": "^3.0.0" + } + }, + "import-lazy": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/import-lazy/-/import-lazy-2.1.0.tgz", + "integrity": "sha1-BWmOPUXIjo1+nZLLBYTnfwlvPkM=", + "dev": true + }, + "import-local": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/import-local/-/import-local-2.0.0.tgz", + "integrity": "sha512-b6s04m3O+s3CGSbqDIyP4R6aAwAeYlVq9+WUWep6iHa8ETRf9yei1U48C5MmfJmV9AiLYYBKPMq/W+/WRpQmCQ==", + "dev": true, + "requires": { + "pkg-dir": "^3.0.0", + "resolve-cwd": "^2.0.0" + }, + "dependencies": { + "find-up": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/find-up/-/find-up-3.0.0.tgz", + "integrity": "sha512-1yD6RmLI1XBfxugvORwlck6f75tYL+iR0jqwsOrOxMZyGYqUuDhJ0l4AXdO1iX/FTs9cBAMEk1gWSEx1kSbylg==", + "dev": true, + "requires": { + "locate-path": "^3.0.0" + } + }, + "locate-path": { 
+ "version": "3.0.0", + "resolved": "https://registry.npmjs.org/locate-path/-/locate-path-3.0.0.tgz", + "integrity": "sha512-7AO748wWnIhNqAuaty2ZWHkQHRSNfPVIsPIfwEOWO22AmaoVrWavlOcMR5nzTLNYvp36X220/maaRsrec1G65A==", + "dev": true, + "requires": { + "p-locate": "^3.0.0", + "path-exists": "^3.0.0" + } + }, + "p-locate": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/p-locate/-/p-locate-3.0.0.tgz", + "integrity": "sha512-x+12w/To+4GFfgJhBEpiDcLozRJGegY+Ei7/z0tSLkMmxGZNybVMSfWj9aJn8Z5Fc7dBUNJOOVgPv2H7IwulSQ==", + "dev": true, + "requires": { + "p-limit": "^2.0.0" + } + }, + "path-exists": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/path-exists/-/path-exists-3.0.0.tgz", + "integrity": "sha1-zg6+ql94yxiSXqfYENe1mwEP1RU=", + "dev": true + }, + "pkg-dir": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/pkg-dir/-/pkg-dir-3.0.0.tgz", + "integrity": "sha512-/E57AYkoeQ25qkxMj5PBOVgF8Kiu/h7cYS30Z5+R7WaiCCBfLq58ZI/dSeaEKb9WVJV5n/03QwrN3IeWIFllvw==", + "dev": true, + "requires": { + "find-up": "^3.0.0" + } + } + } + }, + "imurmurhash": { + "version": "0.1.4", + "resolved": "https://registry.npmjs.org/imurmurhash/-/imurmurhash-0.1.4.tgz", + "integrity": "sha1-khi5srkoojixPcT7a21XbyMUU+o=", + "dev": true + }, + "indent-string": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/indent-string/-/indent-string-4.0.0.tgz", + "integrity": "sha512-EdDDZu4A2OyIK7Lr/2zG+w5jmbuk1DVBnEwREQvBzspBJkCEbRa8GxU1lghYcaGJCnRWibjDXlq779X1/y5xwg==", + "dev": true + }, + "indexes-of": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/indexes-of/-/indexes-of-1.0.1.tgz", + "integrity": "sha1-8w9xbI4r00bHtn0985FVZqfAVgc=", + "dev": true + }, + "infer-owner": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/infer-owner/-/infer-owner-1.0.4.tgz", + "integrity": "sha512-IClj+Xz94+d7irH5qRyfJonOdfTzuDaifE6ZPWfx0N0+/ATZCbuTPq2prFl526urkQd90WyUKIh1DfBQ2hMz9A==", + "dev": true + }, + "inflight": { + 
"version": "1.0.6", + "resolved": "https://registry.npmjs.org/inflight/-/inflight-1.0.6.tgz", + "integrity": "sha1-Sb1jMdfQLQwJvJEKEHW6gWW1bfk=", + "dev": true, + "requires": { + "once": "^1.3.0", + "wrappy": "1" + } + }, + "inherits": { + "version": "2.0.4", + "resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.4.tgz", + "integrity": "sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ==", + "dev": true + }, + "ini": { + "version": "1.3.7", + "resolved": "https://registry.npmjs.org/ini/-/ini-1.3.7.tgz", + "integrity": "sha512-iKpRpXP+CrP2jyrxvg1kMUpXDyRUFDWurxbnVT1vQPx+Wz9uCYsMIqYuSBLV+PAaZG/d7kRLKRFc9oDMsH+mFQ==", + "dev": true + }, + "internal-ip": { + "version": "4.3.0", + "resolved": "https://registry.npmjs.org/internal-ip/-/internal-ip-4.3.0.tgz", + "integrity": "sha512-S1zBo1D6zcsyuC6PMmY5+55YMILQ9av8lotMx447Bq6SAgo/sDK6y6uUKmuYhW7eacnIhFfsPmCNYdDzsnnDCg==", + "dev": true, + "requires": { + "default-gateway": "^4.2.0", + "ipaddr.js": "^1.9.0" + } + }, + "internmap": { + "version": "2.0.3", + "resolved": "https://registry.npmjs.org/internmap/-/internmap-2.0.3.tgz", + "integrity": "sha512-5Hh7Y1wQbvY5ooGgPbDaL5iYLAPzMTUrjMulskHLH6wnv/A+1q5rgEaiuqEjB+oxGXIVZs1FF+R/KPN3ZSQYYg==", + "dev": true + }, + "ip": { + "version": "1.1.5", + "resolved": "https://registry.npmjs.org/ip/-/ip-1.1.5.tgz", + "integrity": "sha1-vd7XARQpCCjAoDnnLvJfWq7ENUo=", + "dev": true + }, + "ip-regex": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/ip-regex/-/ip-regex-2.1.0.tgz", + "integrity": "sha1-+ni/XS5pE8kRzp+BnuUUa7bYROk=", + "dev": true + }, + "ipaddr.js": { + "version": "1.9.1", + "resolved": "https://registry.npmjs.org/ipaddr.js/-/ipaddr.js-1.9.1.tgz", + "integrity": "sha512-0KI/607xoxSToH7GjN1FfSbLoU0+btTicjsQSWQlh/hZykN8KpmMf7uYwPW3R+akZ6R/w18ZlXSHBYXiYUPO3g==", + "dev": true + }, + "is-absolute-url": { + "version": "2.1.0", + "resolved": 
"https://registry.npmjs.org/is-absolute-url/-/is-absolute-url-2.1.0.tgz", + "integrity": "sha1-UFMN+4T8yap9vnhS6Do3uTufKqY=", + "dev": true + }, + "is-accessor-descriptor": { + "version": "0.1.6", + "resolved": "https://registry.npmjs.org/is-accessor-descriptor/-/is-accessor-descriptor-0.1.6.tgz", + "integrity": "sha1-qeEss66Nh2cn7u84Q/igiXtcmNY=", + "dev": true, + "requires": { + "kind-of": "^3.0.2" + }, + "dependencies": { + "kind-of": { + "version": "3.2.2", + "resolved": "https://registry.npmjs.org/kind-of/-/kind-of-3.2.2.tgz", + "integrity": "sha1-MeohpzS6ubuw8yRm2JOupR5KPGQ=", + "dev": true, + "requires": { + "is-buffer": "^1.1.5" + } + } + } + }, + "is-arguments": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/is-arguments/-/is-arguments-1.1.0.tgz", + "integrity": "sha512-1Ij4lOMPl/xB5kBDn7I+b2ttPMKa8szhEIrXDuXQD/oe3HJLTLhqhgGspwgyGd6MOywBUqVvYicF72lkgDnIHg==", + "dev": true, + "requires": { + "call-bind": "^1.0.0" + } + }, + "is-arrayish": { + "version": "0.3.2", + "resolved": "https://registry.npmjs.org/is-arrayish/-/is-arrayish-0.3.2.tgz", + "integrity": "sha512-eVRqCvVlZbuw3GrM63ovNSNAeA1K16kaR/LRY/92w0zxQ5/1YzwblUX652i4Xs9RwAGjW9d9y6X88t8OaAJfWQ==", + "dev": true + }, + "is-bigint": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/is-bigint/-/is-bigint-1.0.2.tgz", + "integrity": "sha512-0JV5+SOCQkIdzjBK9buARcV804Ddu7A0Qet6sHi3FimE9ne6m4BGQZfRn+NZiXbBk4F4XmHfDZIipLj9pX8dSA==", + "dev": true + }, + "is-binary-path": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/is-binary-path/-/is-binary-path-1.0.1.tgz", + "integrity": "sha1-dfFmQrSA8YenEcgUFh/TpKdlWJg=", + "dev": true, + "requires": { + "binary-extensions": "^1.0.0" + } + }, + "is-boolean-object": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/is-boolean-object/-/is-boolean-object-1.1.1.tgz", + "integrity": "sha512-bXdQWkECBUIAcCkeH1unwJLIpZYaa5VvuygSyS/c2lf719mTKZDU5UdDRlpd01UjADgmW8RfqaP+mRaVPdr/Ng==", + "dev": true, + 
"requires": { + "call-bind": "^1.0.2" + } + }, + "is-buffer": { + "version": "1.1.6", + "resolved": "https://registry.npmjs.org/is-buffer/-/is-buffer-1.1.6.tgz", + "integrity": "sha512-NcdALwpXkTm5Zvvbk7owOUSvVvBKDgKP5/ewfXEznmQFfs4ZRmanOeKBTjRVjka3QFoN6XJ+9F3USqfHqTaU5w==", + "dev": true + }, + "is-callable": { + "version": "1.2.3", + "resolved": "https://registry.npmjs.org/is-callable/-/is-callable-1.2.3.tgz", + "integrity": "sha512-J1DcMe8UYTBSrKezuIUTUwjXsho29693unXM2YhJUTR2txK/eG47bvNa/wipPFmZFgr/N6f1GA66dv0mEyTIyQ==", + "dev": true + }, + "is-ci": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/is-ci/-/is-ci-2.0.0.tgz", + "integrity": "sha512-YfJT7rkpQB0updsdHLGWrvhBJfcfzNNawYDNIyQXJz0IViGf75O8EBPKSdvw2rF+LGCsX4FZ8tcr3b19LcZq4w==", + "dev": true, + "requires": { + "ci-info": "^2.0.0" + }, + "dependencies": { + "ci-info": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/ci-info/-/ci-info-2.0.0.tgz", + "integrity": "sha512-5tK7EtrZ0N+OLFMthtqOj4fI2Jeb88C4CAZPu25LDVUgXJ0A3Js4PMGqrn0JU1W0Mh1/Z8wZzYPxqUrXeBboCQ==", + "dev": true + } + } + }, + "is-color-stop": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/is-color-stop/-/is-color-stop-1.1.0.tgz", + "integrity": "sha1-z/9HGu5N1cnhWFmPvhKWe1za00U=", + "dev": true, + "requires": { + "css-color-names": "^0.0.4", + "hex-color-regex": "^1.1.0", + "hsl-regex": "^1.0.0", + "hsla-regex": "^1.0.0", + "rgb-regex": "^1.0.1", + "rgba-regex": "^1.0.0" + } + }, + "is-core-module": { + "version": "2.4.0", + "resolved": "https://registry.npmjs.org/is-core-module/-/is-core-module-2.4.0.tgz", + "integrity": "sha512-6A2fkfq1rfeQZjxrZJGerpLCTHRNEBiSgnu0+obeJpEPZRUooHgsizvzv0ZjJwOz3iWIHdJtVWJ/tmPr3D21/A==", + "dev": true, + "requires": { + "has": "^1.0.3" + } + }, + "is-data-descriptor": { + "version": "0.1.4", + "resolved": "https://registry.npmjs.org/is-data-descriptor/-/is-data-descriptor-0.1.4.tgz", + "integrity": "sha1-C17mSDiOLIYCgueT8YVv7D8wG1Y=", + "dev": true, + 
"requires": { + "kind-of": "^3.0.2" + }, + "dependencies": { + "kind-of": { + "version": "3.2.2", + "resolved": "https://registry.npmjs.org/kind-of/-/kind-of-3.2.2.tgz", + "integrity": "sha1-MeohpzS6ubuw8yRm2JOupR5KPGQ=", + "dev": true, + "requires": { + "is-buffer": "^1.1.5" + } + } + } + }, + "is-date-object": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/is-date-object/-/is-date-object-1.0.4.tgz", + "integrity": "sha512-/b4ZVsG7Z5XVtIxs/h9W8nvfLgSAyKYdtGWQLbqy6jA1icmgjf8WCoTKgeS4wy5tYaPePouzFMANbnj94c2Z+A==", + "dev": true + }, + "is-descriptor": { + "version": "0.1.6", + "resolved": "https://registry.npmjs.org/is-descriptor/-/is-descriptor-0.1.6.tgz", + "integrity": "sha512-avDYr0SB3DwO9zsMov0gKCESFYqCnE4hq/4z3TdUlukEy5t9C0YRq7HLrsN52NAcqXKaepeCD0n+B0arnVG3Hg==", + "dev": true, + "requires": { + "is-accessor-descriptor": "^0.1.6", + "is-data-descriptor": "^0.1.4", + "kind-of": "^5.0.0" + }, + "dependencies": { + "kind-of": { + "version": "5.1.0", + "resolved": "https://registry.npmjs.org/kind-of/-/kind-of-5.1.0.tgz", + "integrity": "sha512-NGEErnH6F2vUuXDh+OlbcKW7/wOcfdRHaZ7VWtqCztfHri/++YKmP51OdWeGPuqCOba6kk2OTe5d02VmTB80Pw==", + "dev": true + } + } + }, + "is-directory": { + "version": "0.3.1", + "resolved": "https://registry.npmjs.org/is-directory/-/is-directory-0.3.1.tgz", + "integrity": "sha1-YTObbyR1/Hcv2cnYP1yFddwVSuE=", + "dev": true + }, + "is-extendable": { + "version": "0.1.1", + "resolved": "https://registry.npmjs.org/is-extendable/-/is-extendable-0.1.1.tgz", + "integrity": "sha1-YrEQ4omkcUGOPsNqYX1HLjAd/Ik=", + "dev": true + }, + "is-extglob": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/is-extglob/-/is-extglob-2.1.1.tgz", + "integrity": "sha1-qIwCU1eR8C7TfHahueqXc8gz+MI=", + "dev": true + }, + "is-fullwidth-code-point": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/is-fullwidth-code-point/-/is-fullwidth-code-point-2.0.0.tgz", + "integrity": "sha1-o7MKXE8ZkYMWeqq5O+764937ZU8=", + "dev": 
true + }, + "is-glob": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/is-glob/-/is-glob-4.0.1.tgz", + "integrity": "sha512-5G0tKtBTFImOqDnLB2hG6Bp2qcKEFduo4tZu9MT/H6NQv/ghhy30o55ufafxJ/LdH79LLs2Kfrn85TLKyA7BUg==", + "dev": true, + "requires": { + "is-extglob": "^2.1.1" + } + }, + "is-installed-globally": { + "version": "0.3.2", + "resolved": "https://registry.npmjs.org/is-installed-globally/-/is-installed-globally-0.3.2.tgz", + "integrity": "sha512-wZ8x1js7Ia0kecP/CHM/3ABkAmujX7WPvQk6uu3Fly/Mk44pySulQpnHG46OMjHGXApINnV4QhY3SWnECO2z5g==", + "dev": true, + "requires": { + "global-dirs": "^2.0.1", + "is-path-inside": "^3.0.1" + }, + "dependencies": { + "is-path-inside": { + "version": "3.0.3", + "resolved": "https://registry.npmjs.org/is-path-inside/-/is-path-inside-3.0.3.tgz", + "integrity": "sha512-Fd4gABb+ycGAmKou8eMftCupSir5lRxqf4aD/vd0cD2qc4HL07OjCeuHMr8Ro4CoMaeCKDB0/ECBOVWjTwUvPQ==", + "dev": true + } + } + }, + "is-lambda": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/is-lambda/-/is-lambda-1.0.1.tgz", + "integrity": "sha1-PZh3iZ5qU+/AFgUEzeFfgubwYdU=", + "dev": true + }, + "is-negative-zero": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/is-negative-zero/-/is-negative-zero-2.0.1.tgz", + "integrity": "sha512-2z6JzQvZRa9A2Y7xC6dQQm4FSTSTNWjKIYYTt4246eMTJmIo0Q+ZyOsU66X8lxK1AbB92dFeglPLrhwpeRKO6w==", + "dev": true + }, + "is-npm": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/is-npm/-/is-npm-4.0.0.tgz", + "integrity": "sha512-96ECIfh9xtDDlPylNPXhzjsykHsMJZ18ASpaWzQyBr4YRTcVjUvzaHayDAES2oU/3KpljhHUjtSRNiDwi0F0ig==", + "dev": true + }, + "is-number": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/is-number/-/is-number-3.0.0.tgz", + "integrity": "sha1-JP1iAaR4LPUFYcgQJ2r8fRLXEZU=", + "dev": true, + "requires": { + "kind-of": "^3.0.2" + }, + "dependencies": { + "kind-of": { + "version": "3.2.2", + "resolved": "https://registry.npmjs.org/kind-of/-/kind-of-3.2.2.tgz", + 
"integrity": "sha1-MeohpzS6ubuw8yRm2JOupR5KPGQ=", + "dev": true, + "requires": { + "is-buffer": "^1.1.5" + } + } + } + }, + "is-number-object": { + "version": "1.0.5", + "resolved": "https://registry.npmjs.org/is-number-object/-/is-number-object-1.0.5.tgz", + "integrity": "sha512-RU0lI/n95pMoUKu9v1BZP5MBcZuNSVJkMkAG2dJqC4z2GlkGUNeH68SuHuBKBD/XFe+LHZ+f9BKkLET60Niedw==", + "dev": true + }, + "is-obj": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/is-obj/-/is-obj-2.0.0.tgz", + "integrity": "sha512-drqDG3cbczxxEJRoOXcOjtdp1J/lyp1mNn0xaznRs8+muBhgQcrnbspox5X5fOw0HnMnbfDzvnEMEtqDEJEo8w==", + "dev": true + }, + "is-path-cwd": { + "version": "2.2.0", + "resolved": "https://registry.npmjs.org/is-path-cwd/-/is-path-cwd-2.2.0.tgz", + "integrity": "sha512-w942bTcih8fdJPJmQHFzkS76NEP8Kzzvmw92cXsazb8intwLqPibPPdXf4ANdKV3rYMuuQYGIWtvz9JilB3NFQ==", + "dev": true + }, + "is-path-in-cwd": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/is-path-in-cwd/-/is-path-in-cwd-2.1.0.tgz", + "integrity": "sha512-rNocXHgipO+rvnP6dk3zI20RpOtrAM/kzbB258Uw5BWr3TpXi861yzjo16Dn4hUox07iw5AyeMLHWsujkjzvRQ==", + "dev": true, + "requires": { + "is-path-inside": "^2.1.0" + } + }, + "is-path-inside": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/is-path-inside/-/is-path-inside-2.1.0.tgz", + "integrity": "sha512-wiyhTzfDWsvwAW53OBWF5zuvaOGlZ6PwYxAbPVDhpm+gM09xKQGjBq/8uYN12aDvMxnAnq3dxTyoSoRNmg5YFg==", + "dev": true, + "requires": { + "path-is-inside": "^1.0.2" + } + }, + "is-plain-obj": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/is-plain-obj/-/is-plain-obj-1.1.0.tgz", + "integrity": "sha1-caUMhCnfync8kqOQpKA7OfzVHT4=", + "dev": true + }, + "is-plain-object": { + "version": "2.0.4", + "resolved": "https://registry.npmjs.org/is-plain-object/-/is-plain-object-2.0.4.tgz", + "integrity": "sha512-h5PpgXkWitc38BBMYawTYMWJHFZJVnBquFE57xFpjB8pJFiF6gZ+bU+WyI/yqXiFR5mdLsgYNaPe8uao6Uv9Og==", + "dev": true, + "requires": { + "isobject": 
"^3.0.1" + } + }, + "is-regex": { + "version": "1.1.3", + "resolved": "https://registry.npmjs.org/is-regex/-/is-regex-1.1.3.tgz", + "integrity": "sha512-qSVXFz28HM7y+IWX6vLCsexdlvzT1PJNFSBuaQLQ5o0IEw8UDYW6/2+eCMVyIsbM8CNLX2a/QWmSpyxYEHY7CQ==", + "dev": true, + "requires": { + "call-bind": "^1.0.2", + "has-symbols": "^1.0.2" + } + }, + "is-resolvable": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/is-resolvable/-/is-resolvable-1.1.0.tgz", + "integrity": "sha512-qgDYXFSR5WvEfuS5dMj6oTMEbrrSaM0CrFk2Yiq/gXnBvD9pMa2jGXxyhGLfvhZpuMZe18CJpFxAt3CRs42NMg==", + "dev": true + }, + "is-stream": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/is-stream/-/is-stream-2.0.0.tgz", + "integrity": "sha512-XCoy+WlUr7d1+Z8GgSuXmpuUFC9fOhRXglJMx+dwLKTkL44Cjd4W1Z5P+BQZpr+cR93aGP4S/s7Ftw6Nd/kiEw==", + "dev": true + }, + "is-string": { + "version": "1.0.6", + "resolved": "https://registry.npmjs.org/is-string/-/is-string-1.0.6.tgz", + "integrity": "sha512-2gdzbKUuqtQ3lYNrUTQYoClPhm7oQu4UdpSZMp1/DGgkHBT8E2Z1l0yMdb6D4zNAxwDiMv8MdulKROJGNl0Q0w==", + "dev": true + }, + "is-symbol": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/is-symbol/-/is-symbol-1.0.4.tgz", + "integrity": "sha512-C/CPBqKWnvdcxqIARxyOh4v1UUEOCHpgDa0WYgpKDFMszcrPcffg5uhwSgPCLD2WWxmq6isisz87tzT01tuGhg==", + "dev": true, + "requires": { + "has-symbols": "^1.0.2" + } + }, + "is-typedarray": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/is-typedarray/-/is-typedarray-1.0.0.tgz", + "integrity": "sha1-5HnICFjfDBsR3dppQPlgEfzaSpo=", + "dev": true + }, + "is-windows": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/is-windows/-/is-windows-1.0.2.tgz", + "integrity": "sha512-eXK1UInq2bPmjyX6e3VHIzMLobc4J94i4AWn+Hpq3OU5KkrRC96OAcR3PRJ/pGu6m8TRnBHP9dkXQVsT/COVIA==", + "dev": true + }, + "is-wsl": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/is-wsl/-/is-wsl-1.1.0.tgz", + "integrity": "sha1-HxbkqiKwTRM2tmGIpmrzxgDDpm0=", + 
"dev": true + }, + "is-yarn-global": { + "version": "0.3.0", + "resolved": "https://registry.npmjs.org/is-yarn-global/-/is-yarn-global-0.3.0.tgz", + "integrity": "sha512-VjSeb/lHmkoyd8ryPVIKvOCn4D1koMqY+vqyjjUfc3xyKtP4dYOxM44sZrnqQSzSds3xyOrUTLTC9LVCVgLngw==", + "dev": true + }, + "isarray": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/isarray/-/isarray-1.0.0.tgz", + "integrity": "sha1-u5NdSFgsuhaMBoNJV6VKPgcSTxE=", + "dev": true + }, + "isexe": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/isexe/-/isexe-2.0.0.tgz", + "integrity": "sha1-6PvzdNxVb/iUehDcsFctYz8s+hA=", + "dev": true + }, + "isobject": { + "version": "3.0.1", + "resolved": "https://registry.npmjs.org/isobject/-/isobject-3.0.1.tgz", + "integrity": "sha1-TkMekrEalzFjaqH5yNHMvP2reN8=", + "dev": true + }, + "isstream": { + "version": "0.1.2", + "resolved": "https://registry.npmjs.org/isstream/-/isstream-0.1.2.tgz", + "integrity": "sha1-R+Y/evVa+m+S4VAOaQ64uFKcCZo=", + "dev": true + }, + "javascript-stringify": { + "version": "1.6.0", + "resolved": "https://registry.npmjs.org/javascript-stringify/-/javascript-stringify-1.6.0.tgz", + "integrity": "sha1-FC0RHzpuPa6PSpr9d9RYVbWpzOM=", + "dev": true + }, + "jju": { + "version": "1.4.0", + "resolved": "https://registry.npmjs.org/jju/-/jju-1.4.0.tgz", + "integrity": "sha1-o6vicYryQaKykE+EpiWXDzia4yo=", + "dev": true + }, + "joi": { + "version": "17.4.0", + "resolved": "https://registry.npmjs.org/joi/-/joi-17.4.0.tgz", + "integrity": "sha512-F4WiW2xaV6wc1jxete70Rw4V/VuMd6IN+a5ilZsxG4uYtUXWu2kq9W5P2dz30e7Gmw8RCbY/u/uk+dMPma9tAg==", + "dev": true, + "requires": { + "@hapi/hoek": "^9.0.0", + "@hapi/topo": "^5.0.0", + "@sideway/address": "^4.1.0", + "@sideway/formula": "^3.0.0", + "@sideway/pinpoint": "^2.0.0" + } + }, + "js-tokens": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/js-tokens/-/js-tokens-4.0.0.tgz", + "integrity": 
"sha512-RdJUflcE3cUzKiMqQgsCu06FPu9UdIJO0beYbPhHN4k6apgJtifcoCtT9bcxOpYBtpD2kCM6Sbzg4CausW/PKQ==", + "dev": true + }, + "js-yaml": { + "version": "3.14.1", + "resolved": "https://registry.npmjs.org/js-yaml/-/js-yaml-3.14.1.tgz", + "integrity": "sha512-okMH7OXXJ7YrN9Ok3/SXrnu4iX9yOk+25nqX4imS2npuvTYDmo/QEZoqwZkYaIDk3jVvBOTOIEgEhaLOynBS9g==", + "dev": true, + "requires": { + "argparse": "^1.0.7", + "esprima": "^4.0.0" + } + }, + "jsbn": { + "version": "0.1.1", + "resolved": "https://registry.npmjs.org/jsbn/-/jsbn-0.1.1.tgz", + "integrity": "sha1-peZUwuWi3rXyAdls77yoDA7y9RM=", + "dev": true + }, + "jsesc": { + "version": "2.5.2", + "resolved": "https://registry.npmjs.org/jsesc/-/jsesc-2.5.2.tgz", + "integrity": "sha512-OYu7XEzjkCQ3C5Ps3QIZsQfNpqoJyZZA99wd9aWd05NCtC5pWOkShK2mkL6HXQR6/Cy2lbNdPlZBpuQHXE63gA==", + "dev": true + }, + "json-buffer": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/json-buffer/-/json-buffer-3.0.0.tgz", + "integrity": "sha1-Wx85evx11ne96Lz8Dkfh+aPZqJg=", + "dev": true + }, + "json-parse-better-errors": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/json-parse-better-errors/-/json-parse-better-errors-1.0.2.tgz", + "integrity": "sha512-mrqyZKfX5EhL7hvqcV6WG1yYjnjeuYDzDhhcAAUrq8Po85NBQBJP+ZDUT75qZQ98IkUoBqdkExkukOU7Ts2wrw==", + "dev": true + }, + "json-parse-even-better-errors": { + "version": "2.3.1", + "resolved": "https://registry.npmjs.org/json-parse-even-better-errors/-/json-parse-even-better-errors-2.3.1.tgz", + "integrity": "sha512-xyFwyhro/JEof6Ghe2iz2NcXoj2sloNsWr/XsERDK/oiPCfaNhl5ONfp+jQdAZRQQ0IJWNzH9zIZF7li91kh2w==", + "dev": true + }, + "json-parse-helpfulerror": { + "version": "1.0.3", + "resolved": "https://registry.npmjs.org/json-parse-helpfulerror/-/json-parse-helpfulerror-1.0.3.tgz", + "integrity": "sha1-E/FM4C7tTpgSl7ZOueO5MuLdE9w=", + "dev": true, + "requires": { + "jju": "^1.1.0" + } + }, + "json-schema": { + "version": "0.2.3", + "resolved": 
"https://registry.npmjs.org/json-schema/-/json-schema-0.2.3.tgz", + "integrity": "sha1-tIDIkuWaLwWVTOcnvT8qTogvnhM=", + "dev": true + }, + "json-schema-traverse": { + "version": "0.4.1", + "resolved": "https://registry.npmjs.org/json-schema-traverse/-/json-schema-traverse-0.4.1.tgz", + "integrity": "sha512-xbbCH5dCYU5T8LcEhhuh7HJ88HXuW3qsI3Y0zOZFKfZEHcpWiHU/Jxzk629Brsab/mMiHQti9wMP+845RPe3Vg==", + "dev": true + }, + "json-stringify-safe": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/json-stringify-safe/-/json-stringify-safe-5.0.1.tgz", + "integrity": "sha1-Epai1Y/UXxmg9s4B1lcB4sc1tus=", + "dev": true + }, + "json3": { + "version": "3.3.3", + "resolved": "https://registry.npmjs.org/json3/-/json3-3.3.3.tgz", + "integrity": "sha512-c7/8mbUsKigAbLkD5B010BK4D9LZm7A1pNItkEwiUZRpIN66exu/e7YQWysGun+TRKaJp8MhemM+VkfWv42aCA==", + "dev": true + }, + "json5": { + "version": "2.2.0", + "resolved": "https://registry.npmjs.org/json5/-/json5-2.2.0.tgz", + "integrity": "sha512-f+8cldu7X/y7RAJurMEJmdoKXGB/X550w2Nr3tTbezL6RwEE/iMcm+tZnXeoZtKuOq6ft8+CqzEkrIgx1fPoQA==", + "dev": true, + "requires": { + "minimist": "^1.2.5" + } + }, + "jsonfile": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/jsonfile/-/jsonfile-4.0.0.tgz", + "integrity": "sha1-h3Gq4HmbZAdrdmQPygWPnBDjPss=", + "dev": true, + "requires": { + "graceful-fs": "^4.1.6" + } + }, + "jsonlines": { + "version": "0.1.1", + "resolved": "https://registry.npmjs.org/jsonlines/-/jsonlines-0.1.1.tgz", + "integrity": "sha1-T80kbcXQ44aRkHxEqwAveC0dlMw=", + "dev": true + }, + "jsonp": { + "version": "0.2.1", + "resolved": "https://registry.npmjs.org/jsonp/-/jsonp-0.2.1.tgz", + "integrity": "sha1-pltPoPEL2nGaBUQep7lMVfPhW64=", + "dev": true, + "requires": { + "debug": "^2.1.3" + }, + "dependencies": { + "debug": { + "version": "2.6.9", + "resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz", + "integrity": 
"sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==", + "dev": true, + "requires": { + "ms": "2.0.0" + } + }, + "ms": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz", + "integrity": "sha1-VgiurfwAvmwpAd9fmGF4jeDVl8g=", + "dev": true + } + } + }, + "jsonparse": { + "version": "1.3.1", + "resolved": "https://registry.npmjs.org/jsonparse/-/jsonparse-1.3.1.tgz", + "integrity": "sha1-P02uSpH6wxX3EGL4UhzCOfE2YoA=", + "dev": true + }, + "jsprim": { + "version": "1.4.1", + "resolved": "https://registry.npmjs.org/jsprim/-/jsprim-1.4.1.tgz", + "integrity": "sha1-MT5mvB5cwG5Di8G3SZwuXFastqI=", + "dev": true, + "requires": { + "assert-plus": "1.0.0", + "extsprintf": "1.3.0", + "json-schema": "0.2.3", + "verror": "1.10.0" + } + }, + "keyv": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/keyv/-/keyv-3.1.0.tgz", + "integrity": "sha512-9ykJ/46SN/9KPM/sichzQ7OvXyGDYKGTaDlKMGCAlg2UK8KRy4jb0d8sFc+0Tt0YYnThq8X2RZgCg74RPxgcVA==", + "dev": true, + "requires": { + "json-buffer": "3.0.0" + } + }, + "khroma": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/khroma/-/khroma-2.0.0.tgz", + "integrity": "sha512-2J8rDNlQWbtiNYThZRvmMv5yt44ZakX+Tz5ZIp/mN1pt4snn+m030Va5Z4v8xA0cQFDXBwO/8i42xL4QPsVk3g==", + "dev": true + }, + "killable": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/killable/-/killable-1.0.1.tgz", + "integrity": "sha512-LzqtLKlUwirEUyl/nicirVmNiPvYs7l5n8wOPP7fyJVpUPkvCnW/vuiXGpylGUlnPDnB7311rARzAt3Mhswpjg==", + "dev": true + }, + "kind-of": { + "version": "6.0.3", + "resolved": "https://registry.npmjs.org/kind-of/-/kind-of-6.0.3.tgz", + "integrity": "sha512-dcS1ul+9tmeD95T+x28/ehLgd9mENa3LsvDTtzm3vyBEO7RPptvAD+t44WVXaUjTBRcrpFeFlC8WCruUR456hw==", + "dev": true + }, + "kleur": { + "version": "3.0.3", + "resolved": "https://registry.npmjs.org/kleur/-/kleur-3.0.3.tgz", + "integrity": 
"sha512-eTIzlVOSUR+JxdDFepEYcBMtZ9Qqdef+rnzWdRZuMbOywu5tO2w2N7rqjoANZ5k9vywhL6Br1VRjUIgTQx4E8w==", + "dev": true + }, + "last-call-webpack-plugin": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/last-call-webpack-plugin/-/last-call-webpack-plugin-3.0.0.tgz", + "integrity": "sha512-7KI2l2GIZa9p2spzPIVZBYyNKkN+e/SQPpnjlTiPhdbDW3F86tdKKELxKpzJ5sgU19wQWsACULZmpTPYHeWO5w==", + "dev": true, + "requires": { + "lodash": "^4.17.5", + "webpack-sources": "^1.1.0" + } + }, + "latest-version": { + "version": "5.1.0", + "resolved": "https://registry.npmjs.org/latest-version/-/latest-version-5.1.0.tgz", + "integrity": "sha512-weT+r0kTkRQdCdYCNtkMwWXQTMEswKrFBkm4ckQOMVhhqhIMI1UT2hMj+1iigIhgSZm5gTmrRXBNoGUgaTY1xA==", + "dev": true, + "requires": { + "package-json": "^6.3.0" + } + }, + "lazy-ass": { + "version": "1.6.0", + "resolved": "https://registry.npmjs.org/lazy-ass/-/lazy-ass-1.6.0.tgz", + "integrity": "sha1-eZllXoZGwX8In90YfRUNMyTVRRM=", + "dev": true + }, + "libnpmconfig": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/libnpmconfig/-/libnpmconfig-1.2.1.tgz", + "integrity": "sha512-9esX8rTQAHqarx6qeZqmGQKBNZR5OIbl/Ayr0qQDy3oXja2iFVQQI81R6GZ2a02bSNZ9p3YOGX1O6HHCb1X7kA==", + "dev": true, + "requires": { + "figgy-pudding": "^3.5.1", + "find-up": "^3.0.0", + "ini": "^1.3.5" + }, + "dependencies": { + "find-up": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/find-up/-/find-up-3.0.0.tgz", + "integrity": "sha512-1yD6RmLI1XBfxugvORwlck6f75tYL+iR0jqwsOrOxMZyGYqUuDhJ0l4AXdO1iX/FTs9cBAMEk1gWSEx1kSbylg==", + "dev": true, + "requires": { + "locate-path": "^3.0.0" + } + }, + "locate-path": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/locate-path/-/locate-path-3.0.0.tgz", + "integrity": "sha512-7AO748wWnIhNqAuaty2ZWHkQHRSNfPVIsPIfwEOWO22AmaoVrWavlOcMR5nzTLNYvp36X220/maaRsrec1G65A==", + "dev": true, + "requires": { + "p-locate": "^3.0.0", + "path-exists": "^3.0.0" + } + }, + "p-locate": { + "version": "3.0.0", + 
"resolved": "https://registry.npmjs.org/p-locate/-/p-locate-3.0.0.tgz", + "integrity": "sha512-x+12w/To+4GFfgJhBEpiDcLozRJGegY+Ei7/z0tSLkMmxGZNybVMSfWj9aJn8Z5Fc7dBUNJOOVgPv2H7IwulSQ==", + "dev": true, + "requires": { + "p-limit": "^2.0.0" + } + }, + "path-exists": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/path-exists/-/path-exists-3.0.0.tgz", + "integrity": "sha1-zg6+ql94yxiSXqfYENe1mwEP1RU=", + "dev": true + } + } + }, + "linkify-it": { + "version": "2.2.0", + "resolved": "https://registry.npmjs.org/linkify-it/-/linkify-it-2.2.0.tgz", + "integrity": "sha512-GnAl/knGn+i1U/wjBz3akz2stz+HrHLsxMwHQGofCDfPvlf+gDKN58UtfmUquTY4/MXeE2x7k19KQmeoZi94Iw==", + "dev": true, + "requires": { + "uc.micro": "^1.0.1" + } + }, + "load-script": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/load-script/-/load-script-1.0.0.tgz", + "integrity": "sha1-BJGTngvuVkPuSUp+PaPSuscMbKQ=", + "dev": true + }, + "loader-runner": { + "version": "2.4.0", + "resolved": "https://registry.npmjs.org/loader-runner/-/loader-runner-2.4.0.tgz", + "integrity": "sha512-Jsmr89RcXGIwivFY21FcRrisYZfvLMTWx5kOLc+JTxtpBOG6xML0vzbc6SEQG2FO9/4Fc3wW4LVcB5DmGflaRw==", + "dev": true + }, + "loader-utils": { + "version": "1.4.0", + "resolved": "https://registry.npmjs.org/loader-utils/-/loader-utils-1.4.0.tgz", + "integrity": "sha512-qH0WSMBtn/oHuwjy/NucEgbx5dbxxnxup9s4PVXJUDHZBQY+s0NWA9rJf53RBnQZxfch7euUui7hpoAPvALZdA==", + "dev": true, + "requires": { + "big.js": "^5.2.2", + "emojis-list": "^3.0.0", + "json5": "^1.0.1" + }, + "dependencies": { + "json5": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/json5/-/json5-1.0.1.tgz", + "integrity": "sha512-aKS4WQjPenRxiQsC93MNfjx+nbF4PAdYzmd/1JIj8HYzqfbu86beTuNgXDzPknWk0n0uARlyewZo4s++ES36Ow==", + "dev": true, + "requires": { + "minimist": "^1.2.0" + } + } + } + }, + "locate-path": { + "version": "5.0.0", + "resolved": "https://registry.npmjs.org/locate-path/-/locate-path-5.0.0.tgz", + "integrity": 
"sha512-t7hw9pI+WvuwNJXwk5zVHpyhIqzg2qTlklJOf0mVxGSbe3Fp2VieZcduNYjaLDoy6p9uGpQEGWG87WpMKlNq8g==", + "dev": true, + "requires": { + "p-locate": "^4.1.0" + } + }, + "lodash": { + "version": "4.17.21", + "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.21.tgz", + "integrity": "sha512-v2kDEe57lecTulaDIuNTPy3Ry4gLGJ6Z1O3vE1krgXZNrsQ+LFTGHVxVjcXPs17LhbZVGedAJv8XZ1tvj5FvSg==", + "dev": true + }, + "lodash._reinterpolate": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/lodash._reinterpolate/-/lodash._reinterpolate-3.0.0.tgz", + "integrity": "sha1-DM8tiRZq8Ds2Y8eWU4t1rG4RTZ0=", + "dev": true + }, + "lodash.chunk": { + "version": "4.2.0", + "resolved": "https://registry.npmjs.org/lodash.chunk/-/lodash.chunk-4.2.0.tgz", + "integrity": "sha1-ZuXOH3btJ7QwPYxlEujRIW6BBrw=", + "dev": true + }, + "lodash.clonedeep": { + "version": "4.5.0", + "resolved": "https://registry.npmjs.org/lodash.clonedeep/-/lodash.clonedeep-4.5.0.tgz", + "integrity": "sha1-4j8/nE+Pvd6HJSnBBxhXoIblzO8=", + "dev": true + }, + "lodash.debounce": { + "version": "4.0.8", + "resolved": "https://registry.npmjs.org/lodash.debounce/-/lodash.debounce-4.0.8.tgz", + "integrity": "sha1-gteb/zCmfEAF/9XiUVMArZyk168=", + "dev": true + }, + "lodash.defaultsdeep": { + "version": "4.6.1", + "resolved": "https://registry.npmjs.org/lodash.defaultsdeep/-/lodash.defaultsdeep-4.6.1.tgz", + "integrity": "sha512-3j8wdDzYuWO3lM3Reg03MuQR957t287Rpcxp1njpEa8oDrikb+FwGdW3n+FELh/A6qib6yPit0j/pv9G/yeAqA==", + "dev": true + }, + "lodash.isempty": { + "version": "4.4.0", + "resolved": "https://registry.npmjs.org/lodash.isempty/-/lodash.isempty-4.4.0.tgz", + "integrity": "sha1-b4bL7di+TsmHvpqvM8loTbGzHn4=", + "dev": true + }, + "lodash.kebabcase": { + "version": "4.1.1", + "resolved": "https://registry.npmjs.org/lodash.kebabcase/-/lodash.kebabcase-4.1.1.tgz", + "integrity": "sha1-hImxyw0p/4gZXM7KRI/21swpXDY=", + "dev": true + }, + "lodash.memoize": { + "version": "4.1.2", + "resolved": 
"https://registry.npmjs.org/lodash.memoize/-/lodash.memoize-4.1.2.tgz", + "integrity": "sha1-vMbEmkKihA7Zl/Mj6tpezRguC/4=", + "dev": true + }, + "lodash.padstart": { + "version": "4.6.1", + "resolved": "https://registry.npmjs.org/lodash.padstart/-/lodash.padstart-4.6.1.tgz", + "integrity": "sha1-0uPuv/DZ05rVD1y9G1KnvOa7YRs=", + "dev": true + }, + "lodash.sortby": { + "version": "4.7.0", + "resolved": "https://registry.npmjs.org/lodash.sortby/-/lodash.sortby-4.7.0.tgz", + "integrity": "sha1-7dFMgk4sycHgsKG0K7UhBRakJDg=", + "dev": true + }, + "lodash.template": { + "version": "4.5.0", + "resolved": "https://registry.npmjs.org/lodash.template/-/lodash.template-4.5.0.tgz", + "integrity": "sha512-84vYFxIkmidUiFxidA/KjjH9pAycqW+h980j7Fuz5qxRtO9pgB7MDFTdys1N7A5mcucRiDyEq4fusljItR1T/A==", + "dev": true, + "requires": { + "lodash._reinterpolate": "^3.0.0", + "lodash.templatesettings": "^4.0.0" + } + }, + "lodash.templatesettings": { + "version": "4.2.0", + "resolved": "https://registry.npmjs.org/lodash.templatesettings/-/lodash.templatesettings-4.2.0.tgz", + "integrity": "sha512-stgLz+i3Aa9mZgnjr/O+v9ruKZsPsndy7qPZOchbqk2cnTU1ZaldKK+v7m54WoKIyxiuMZTKT2H81F8BeAc3ZQ==", + "dev": true, + "requires": { + "lodash._reinterpolate": "^3.0.0" + } + }, + "lodash.toarray": { + "version": "4.4.0", + "resolved": "https://registry.npmjs.org/lodash.toarray/-/lodash.toarray-4.4.0.tgz", + "integrity": "sha1-JMS/zWsvuji/0FlNsRedjptlZWE=", + "dev": true + }, + "lodash.trimend": { + "version": "4.5.1", + "resolved": "https://registry.npmjs.org/lodash.trimend/-/lodash.trimend-4.5.1.tgz", + "integrity": "sha1-EoBENyhrmMrYmWt5QU4RMAEUCC8=", + "dev": true + }, + "lodash.trimstart": { + "version": "4.5.1", + "resolved": "https://registry.npmjs.org/lodash.trimstart/-/lodash.trimstart-4.5.1.tgz", + "integrity": "sha1-j/TexTLYJIavWVc8OURZFOlEp/E=", + "dev": true + }, + "lodash.uniq": { + "version": "4.5.0", + "resolved": "https://registry.npmjs.org/lodash.uniq/-/lodash.uniq-4.5.0.tgz", + "integrity": 
"sha1-0CJTc662Uq3BvILklFM5qEJ1R3M=", + "dev": true + }, + "loglevel": { + "version": "1.7.1", + "resolved": "https://registry.npmjs.org/loglevel/-/loglevel-1.7.1.tgz", + "integrity": "sha512-Hesni4s5UkWkwCGJMQGAh71PaLUmKFM60dHvq0zi/vDhhrzuk+4GgNbTXJ12YYQJn6ZKBDNIjYcuQGKudvqrIw==", + "dev": true + }, + "lower-case": { + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/lower-case/-/lower-case-1.1.4.tgz", + "integrity": "sha1-miyr0bno4K6ZOkv31YdcOcQujqw=", + "dev": true + }, + "lowercase-keys": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/lowercase-keys/-/lowercase-keys-1.0.1.tgz", + "integrity": "sha512-G2Lj61tXDnVFFOi8VZds+SoQjtQC3dgokKdDG2mTm1tx4m50NUHBOZSBwQQHyy0V12A0JTG4icfZQH+xPyh8VA==", + "dev": true + }, + "lru-cache": { + "version": "5.1.1", + "resolved": "https://registry.npmjs.org/lru-cache/-/lru-cache-5.1.1.tgz", + "integrity": "sha512-KpNARQA3Iwv+jTA0utUVVbrh+Jlrr1Fv0e56GGzAFOXN7dk/FviaDW8LHmK52DlcH4WP2n6gI8vN1aesBFgo9w==", + "dev": true, + "requires": { + "yallist": "^3.0.2" + } + }, + "luxon": { + "version": "1.27.0", + "resolved": "https://registry.npmjs.org/luxon/-/luxon-1.27.0.tgz", + "integrity": "sha512-VKsFsPggTA0DvnxtJdiExAucKdAnwbCCNlMM5ENvHlxubqWd0xhZcdb4XgZ7QFNhaRhilXCFxHuoObP5BNA4PA==", + "dev": true + }, + "make-dir": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/make-dir/-/make-dir-3.1.0.tgz", + "integrity": "sha512-g3FeP20LNwhALb/6Cz6Dd4F2ngze0jz7tbzrD2wAV+o9FeNHe4rL+yK2md0J/fiSf1sa1ADhXqi5+oVwOM/eGw==", + "dev": true, + "requires": { + "semver": "^6.0.0" + } + }, + "make-fetch-happen": { + "version": "8.0.14", + "resolved": "https://registry.npmjs.org/make-fetch-happen/-/make-fetch-happen-8.0.14.tgz", + "integrity": "sha512-EsS89h6l4vbfJEtBZnENTOFk8mCRpY5ru36Xe5bcX1KYIli2mkSHqoFsp5O1wMDvTJJzxe/4THpCTtygjeeGWQ==", + "dev": true, + "requires": { + "agentkeepalive": "^4.1.3", + "cacache": "^15.0.5", + "http-cache-semantics": "^4.1.0", + "http-proxy-agent": "^4.0.1", + "https-proxy-agent": 
"^5.0.0", + "is-lambda": "^1.0.1", + "lru-cache": "^6.0.0", + "minipass": "^3.1.3", + "minipass-collect": "^1.0.2", + "minipass-fetch": "^1.3.2", + "minipass-flush": "^1.0.5", + "minipass-pipeline": "^1.2.4", + "promise-retry": "^2.0.1", + "socks-proxy-agent": "^5.0.0", + "ssri": "^8.0.0" + }, + "dependencies": { + "agentkeepalive": { + "version": "4.1.4", + "resolved": "https://registry.npmjs.org/agentkeepalive/-/agentkeepalive-4.1.4.tgz", + "integrity": "sha512-+V/rGa3EuU74H6wR04plBb7Ks10FbtUQgRj/FQOG7uUIEuaINI+AiqJR1k6t3SVNs7o7ZjIdus6706qqzVq8jQ==", + "dev": true, + "requires": { + "debug": "^4.1.0", + "depd": "^1.1.2", + "humanize-ms": "^1.2.1" + } + }, + "cacache": { + "version": "15.1.0", + "resolved": "https://registry.npmjs.org/cacache/-/cacache-15.1.0.tgz", + "integrity": "sha512-mfx0C+mCfWjD1PnwQ9yaOrwG1ou9FkKnx0SvzUHWdFt7r7GaRtzT+9M8HAvLu62zIHtnpQ/1m93nWNDCckJGXQ==", + "dev": true, + "requires": { + "@npmcli/move-file": "^1.0.1", + "chownr": "^2.0.0", + "fs-minipass": "^2.0.0", + "glob": "^7.1.4", + "infer-owner": "^1.0.4", + "lru-cache": "^6.0.0", + "minipass": "^3.1.1", + "minipass-collect": "^1.0.2", + "minipass-flush": "^1.0.5", + "minipass-pipeline": "^1.2.2", + "mkdirp": "^1.0.3", + "p-map": "^4.0.0", + "promise-inflight": "^1.0.1", + "rimraf": "^3.0.2", + "ssri": "^8.0.1", + "tar": "^6.0.2", + "unique-filename": "^1.1.1" + } + }, + "chownr": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/chownr/-/chownr-2.0.0.tgz", + "integrity": "sha512-bIomtDF5KGpdogkLd9VspvFzk9KfpyyGlS8YFVZl7TGPBHL5snIOnxeshwVgPteQ9b4Eydl+pVbIyE1DcvCWgQ==", + "dev": true + }, + "lru-cache": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/lru-cache/-/lru-cache-6.0.0.tgz", + "integrity": "sha512-Jo6dJ04CmSjuznwJSS3pUeWmd/H0ffTlkXXgwZi+eq1UCmqQwCh+eLsYOYCwY991i2Fah4h1BEMCx4qThGbsiA==", + "dev": true, + "requires": { + "yallist": "^4.0.0" + } + }, + "mkdirp": { + "version": "1.0.4", + "resolved": 
"https://registry.npmjs.org/mkdirp/-/mkdirp-1.0.4.tgz", + "integrity": "sha512-vVqVZQyf3WLx2Shd0qJ9xuvqgAyKPLAiqITEtqW0oIUjzo3PePDd6fW9iFz30ef7Ysp/oiWqbhszeGWW2T6Gzw==", + "dev": true + }, + "p-map": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/p-map/-/p-map-4.0.0.tgz", + "integrity": "sha512-/bjOqmgETBYB5BoEeGVea8dmvHb2m9GLy1E9W43yeyfP6QQCZGFNa+XRceJEuDB6zqr+gKpIAmlLebMpykw/MQ==", + "dev": true, + "requires": { + "aggregate-error": "^3.0.0" + } + }, + "rimraf": { + "version": "3.0.2", + "resolved": "https://registry.npmjs.org/rimraf/-/rimraf-3.0.2.tgz", + "integrity": "sha512-JZkJMZkAGFFPP2YqXZXPbMlMBgsxzE8ILs4lMIX/2o0L9UBw9O/Y3o6wFw/i9YLapcUJWwqbi3kdxIPdC62TIA==", + "dev": true, + "requires": { + "glob": "^7.1.3" + } + }, + "ssri": { + "version": "8.0.1", + "resolved": "https://registry.npmjs.org/ssri/-/ssri-8.0.1.tgz", + "integrity": "sha512-97qShzy1AiyxvPNIkLWoGua7xoQzzPjQ0HAH4B0rWKo7SZ6USuPcrUiAFrws0UH8RrbWmgq3LMTObhPIHbbBeQ==", + "dev": true, + "requires": { + "minipass": "^3.1.1" + } + }, + "yallist": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/yallist/-/yallist-4.0.0.tgz", + "integrity": "sha512-3wdGidZyq5PB084XLES5TpOSRA3wjXAlIWMhum2kRcv/41Sn2emQ0dycQW4uZXLejwKvg6EsvbdlVL+FYEct7A==", + "dev": true + } + } + }, + "map-age-cleaner": { + "version": "0.1.3", + "resolved": "https://registry.npmjs.org/map-age-cleaner/-/map-age-cleaner-0.1.3.tgz", + "integrity": "sha512-bJzx6nMoP6PDLPBFmg7+xRKeFZvFboMrGlxmNj9ClvX53KrmvM5bXFXEWjbz4cz1AFn+jWJ9z/DJSz7hrs0w3w==", + "dev": true, + "requires": { + "p-defer": "^1.0.0" + } + }, + "map-cache": { + "version": "0.2.2", + "resolved": "https://registry.npmjs.org/map-cache/-/map-cache-0.2.2.tgz", + "integrity": "sha1-wyq9C9ZSXZsFFkW7TyasXcmKDb8=", + "dev": true + }, + "map-stream": { + "version": "0.1.0", + "resolved": "https://registry.npmjs.org/map-stream/-/map-stream-0.1.0.tgz", + "integrity": "sha1-5WqpTEyAVaFkBKBnS3jyFffI4ZQ=", + "dev": true + }, + "map-visit": { + "version": 
"1.0.0", + "resolved": "https://registry.npmjs.org/map-visit/-/map-visit-1.0.0.tgz", + "integrity": "sha1-7Nyo8TFE5mDxtb1B8S80edmN+48=", + "dev": true, + "requires": { + "object-visit": "^1.0.0" + } + }, + "markdown-it": { + "version": "8.4.2", + "resolved": "https://registry.npmjs.org/markdown-it/-/markdown-it-8.4.2.tgz", + "integrity": "sha512-GcRz3AWTqSUphY3vsUqQSFMbgR38a4Lh3GWlHRh/7MRwz8mcu9n2IO7HOh+bXHrR9kOPDl5RNCaEsrneb+xhHQ==", + "dev": true, + "requires": { + "argparse": "^1.0.7", + "entities": "~1.1.1", + "linkify-it": "^2.0.0", + "mdurl": "^1.0.1", + "uc.micro": "^1.0.5" + } + }, + "markdown-it-anchor": { + "version": "5.3.0", + "resolved": "https://registry.npmjs.org/markdown-it-anchor/-/markdown-it-anchor-5.3.0.tgz", + "integrity": "sha512-/V1MnLL/rgJ3jkMWo84UR+K+jF1cxNG1a+KwqeXqTIJ+jtA8aWSHuigx8lTzauiIjBDbwF3NcWQMotd0Dm39jA==", + "dev": true + }, + "markdown-it-chain": { + "version": "1.3.0", + "resolved": "https://registry.npmjs.org/markdown-it-chain/-/markdown-it-chain-1.3.0.tgz", + "integrity": "sha512-XClV8I1TKy8L2qsT9iX3qiV+50ZtcInGXI80CA+DP62sMs7hXlyV/RM3hfwy5O3Ad0sJm9xIwQELgANfESo8mQ==", + "dev": true, + "requires": { + "webpack-chain": "^4.9.0" + }, + "dependencies": { + "webpack-chain": { + "version": "4.12.1", + "resolved": "https://registry.npmjs.org/webpack-chain/-/webpack-chain-4.12.1.tgz", + "integrity": "sha512-BCfKo2YkDe2ByqkEWe1Rw+zko4LsyS75LVr29C6xIrxAg9JHJ4pl8kaIZ396SUSNp6b4815dRZPSTAS8LlURRQ==", + "dev": true, + "requires": { + "deepmerge": "^1.5.2", + "javascript-stringify": "^1.6.0" + } + } + } + }, + "markdown-it-container": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/markdown-it-container/-/markdown-it-container-2.0.0.tgz", + "integrity": "sha1-ABm0P9Au7+zi8ZYKKJX7qBpARpU=", + "dev": true + }, + "markdown-it-emoji": { + "version": "1.4.0", + "resolved": "https://registry.npmjs.org/markdown-it-emoji/-/markdown-it-emoji-1.4.0.tgz", + "integrity": "sha1-m+4OmpkKljupbfaYDE/dsF37Tcw=", + "dev": true + }, + 
"markdown-it-footnote": { + "version": "3.0.3", + "resolved": "https://registry.npmjs.org/markdown-it-footnote/-/markdown-it-footnote-3.0.3.tgz", + "integrity": "sha512-YZMSuCGVZAjzKMn+xqIco9d1cLGxbELHZ9do/TSYVzraooV8ypsppKNmUJ0fVH5ljkCInQAtFpm8Rb3eXSrt5w==" + }, + "markdown-it-table-of-contents": { + "version": "0.4.4", + "resolved": "https://registry.npmjs.org/markdown-it-table-of-contents/-/markdown-it-table-of-contents-0.4.4.tgz", + "integrity": "sha512-TAIHTHPwa9+ltKvKPWulm/beozQU41Ab+FIefRaQV1NRnpzwcV9QOe6wXQS5WLivm5Q/nlo0rl6laGkMDZE7Gw==", + "dev": true + }, + "md5.js": { + "version": "1.3.5", + "resolved": "https://registry.npmjs.org/md5.js/-/md5.js-1.3.5.tgz", + "integrity": "sha512-xitP+WxNPcTTOgnTJcrhM0xvdPepipPSf3I8EIpGKeFLjt3PlJLIDG3u8EX53ZIubkb+5U2+3rELYpEhHhzdkg==", + "dev": true, + "requires": { + "hash-base": "^3.0.0", + "inherits": "^2.0.1", + "safe-buffer": "^5.1.2" + } + }, + "mdn-data": { + "version": "2.0.4", + "resolved": "https://registry.npmjs.org/mdn-data/-/mdn-data-2.0.4.tgz", + "integrity": "sha512-iV3XNKw06j5Q7mi6h+9vbx23Tv7JkjEVgKHW4pimwyDGWm0OIQntJJ+u1C6mg6mK1EaTv42XQ7w76yuzH7M2cA==", + "dev": true + }, + "mdurl": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/mdurl/-/mdurl-1.0.1.tgz", + "integrity": "sha1-/oWy7HWlkDfyrf7BAP1sYBdhFS4=", + "dev": true + }, + "media-typer": { + "version": "0.3.0", + "resolved": "https://registry.npmjs.org/media-typer/-/media-typer-0.3.0.tgz", + "integrity": "sha1-hxDXrwqmJvj/+hzgAWhUUmMlV0g=", + "dev": true + }, + "mem": { + "version": "8.1.1", + "resolved": "https://registry.npmjs.org/mem/-/mem-8.1.1.tgz", + "integrity": "sha512-qFCFUDs7U3b8mBDPyz5EToEKoAkgCzqquIgi9nkkR9bixxOVOre+09lbuH7+9Kn2NFpm56M3GUWVbU2hQgdACA==", + "dev": true, + "requires": { + "map-age-cleaner": "^0.1.3", + "mimic-fn": "^3.1.0" + }, + "dependencies": { + "mimic-fn": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/mimic-fn/-/mimic-fn-3.1.0.tgz", + "integrity": 
"sha512-Ysbi9uYW9hFyfrThdDEQuykN4Ey6BuwPD2kpI5ES/nFTDn/98yxYNLZJcgUAKPT/mcrLLKaGzJR9YVxJrIdASQ==", + "dev": true + } + } + }, + "memory-fs": { + "version": "0.4.1", + "resolved": "https://registry.npmjs.org/memory-fs/-/memory-fs-0.4.1.tgz", + "integrity": "sha1-OpoguEYlI+RHz7x+i7gO1me/xVI=", + "dev": true, + "requires": { + "errno": "^0.1.3", + "readable-stream": "^2.0.1" + } + }, + "merge-descriptors": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/merge-descriptors/-/merge-descriptors-1.0.1.tgz", + "integrity": "sha1-sAqqVW3YtEVoFQ7J0blT8/kMu2E=", + "dev": true + }, + "merge-source-map": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/merge-source-map/-/merge-source-map-1.1.0.tgz", + "integrity": "sha512-Qkcp7P2ygktpMPh2mCQZaf3jhN6D3Z/qVZHSdWvQ+2Ef5HgRAPBO57A77+ENm0CPx2+1Ce/MYKi3ymqdfuqibw==", + "dev": true, + "requires": { + "source-map": "^0.6.1" + } + }, + "merge-stream": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/merge-stream/-/merge-stream-2.0.0.tgz", + "integrity": "sha512-abv/qOcuPfk3URPfDzmZU1LKmuw8kT+0nIHvKrKgFrwifol/doWcdA4ZqsWQ8ENrFKkd67Mfpo/LovbIUsbt3w==", + "dev": true + }, + "merge2": { + "version": "1.4.1", + "resolved": "https://registry.npmjs.org/merge2/-/merge2-1.4.1.tgz", + "integrity": "sha512-8q7VEgMJW4J8tcfVPy8g09NcQwZdbwFEqhe/WZkoIzjn/3TGDwtOCYtXGxA3O8tPzpczCCDgv+P2P5y00ZJOOg==", + "dev": true + }, + "mermaid": { + "version": "9.1.2", + "resolved": "https://registry.npmjs.org/mermaid/-/mermaid-9.1.2.tgz", + "integrity": "sha512-RVf3hBKqiMfyORHboCaEjOAK1TomLO50hYRPvlTrZCXlCniM5pRpe8UlkHBjjpaLtioZnbdYv/vEVj7iKnwkJQ==", + "dev": true, + "requires": { + "@braintree/sanitize-url": "^6.0.0", + "d3": "^7.0.0", + "dagre": "^0.8.5", + "dagre-d3": "^0.6.4", + "dompurify": "2.3.8", + "graphlib": "^2.1.8", + "khroma": "^2.0.0", + "moment-mini": "^2.24.0", + "stylis": "^4.0.10" + } + }, + "methods": { + "version": "1.1.2", + "resolved": "https://registry.npmjs.org/methods/-/methods-1.1.2.tgz", 
+ "integrity": "sha1-VSmk1nZUE07cxSZmVoNbD4Ua/O4=", + "dev": true + }, + "micromatch": { + "version": "3.1.10", + "resolved": "https://registry.npmjs.org/micromatch/-/micromatch-3.1.10.tgz", + "integrity": "sha512-MWikgl9n9M3w+bpsY3He8L+w9eF9338xRl8IAO5viDizwSzziFEyUzo2xrrloB64ADbTf8uA8vRqqttDTOmccg==", + "dev": true, + "requires": { + "arr-diff": "^4.0.0", + "array-unique": "^0.3.2", + "braces": "^2.3.1", + "define-property": "^2.0.2", + "extend-shallow": "^3.0.2", + "extglob": "^2.0.4", + "fragment-cache": "^0.2.1", + "kind-of": "^6.0.2", + "nanomatch": "^1.2.9", + "object.pick": "^1.3.0", + "regex-not": "^1.0.0", + "snapdragon": "^0.8.1", + "to-regex": "^3.0.2" + } + }, + "miller-rabin": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/miller-rabin/-/miller-rabin-4.0.1.tgz", + "integrity": "sha512-115fLhvZVqWwHPbClyntxEVfVDfl9DLLTuJvq3g2O/Oxi8AiNouAHvDSzHS0viUJc+V5vm3eq91Xwqn9dp4jRA==", + "dev": true, + "requires": { + "bn.js": "^4.0.0", + "brorand": "^1.0.1" + }, + "dependencies": { + "bn.js": { + "version": "4.12.0", + "resolved": "https://registry.npmjs.org/bn.js/-/bn.js-4.12.0.tgz", + "integrity": "sha512-c98Bf3tPniI+scsdk237ku1Dc3ujXQTSgyiPUDEOe7tRkhrqridvh8klBv0HCEso1OLOYcHuCv/cS6DNxKH+ZA==", + "dev": true + } + } + }, + "mime": { + "version": "2.5.2", + "resolved": "https://registry.npmjs.org/mime/-/mime-2.5.2.tgz", + "integrity": "sha512-tqkh47FzKeCPD2PUiPB6pkbMzsCasjxAfC62/Wap5qrUWcb+sFasXUC5I3gYM5iBM8v/Qpn4UK0x+j0iHyFPDg==", + "dev": true + }, + "mime-db": { + "version": "1.47.0", + "resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.47.0.tgz", + "integrity": "sha512-QBmA/G2y+IfeS4oktet3qRZ+P5kPhCKRXxXnQEudYqUaEioAU1/Lq2us3D/t1Jfo4hE9REQPrbB7K5sOczJVIw==", + "dev": true + }, + "mime-types": { + "version": "2.1.30", + "resolved": "https://registry.npmjs.org/mime-types/-/mime-types-2.1.30.tgz", + "integrity": "sha512-crmjA4bLtR8m9qLpHvgxSChT+XoSlZi8J4n/aIdn3z92e/U47Z0V/yl+Wh9W046GgFVAmoNR/fmdbZYcSSIUeg==", + "dev": true, + 
"requires": { + "mime-db": "1.47.0" + } + }, + "mimic-fn": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/mimic-fn/-/mimic-fn-2.1.0.tgz", + "integrity": "sha512-OqbOk5oEQeAZ8WXWydlu9HJjz9WVdEIvamMCcXmuqUYjTknH/sqsWvhQ3vgwKFRR1HpjvNBKQ37nbJgYzGqGcg==", + "dev": true + }, + "mimic-response": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/mimic-response/-/mimic-response-1.0.1.tgz", + "integrity": "sha512-j5EctnkH7amfV/q5Hgmoal1g2QHFJRraOtmx0JpIqkxhBhI/lJSl1nMpQ45hVarwNETOoWEimndZ4QK0RHxuxQ==", + "dev": true + }, + "min-document": { + "version": "2.19.0", + "resolved": "https://registry.npmjs.org/min-document/-/min-document-2.19.0.tgz", + "integrity": "sha1-e9KC4/WELtKVu3SM3Z8f+iyCRoU=", + "dev": true, + "requires": { + "dom-walk": "^0.1.0" + } + }, + "mini-css-extract-plugin": { + "version": "0.6.0", + "resolved": "https://registry.npmjs.org/mini-css-extract-plugin/-/mini-css-extract-plugin-0.6.0.tgz", + "integrity": "sha512-79q5P7YGI6rdnVyIAV4NXpBQJFWdkzJxCim3Kog4078fM0piAaFlwocqbejdWtLW1cEzCexPrh6EdyFsPgVdAw==", + "dev": true, + "requires": { + "loader-utils": "^1.1.0", + "normalize-url": "^2.0.1", + "schema-utils": "^1.0.0", + "webpack-sources": "^1.1.0" + }, + "dependencies": { + "schema-utils": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-1.0.0.tgz", + "integrity": "sha512-i27Mic4KovM/lnGsy8whRCHhc7VicJajAjTrYg11K9zfZXnYIt4k5F+kZkwjnrhKzLic/HLU4j11mjsz2G/75g==", + "dev": true, + "requires": { + "ajv": "^6.1.0", + "ajv-errors": "^1.0.0", + "ajv-keywords": "^3.1.0" + } + } + } + }, + "minimalistic-assert": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/minimalistic-assert/-/minimalistic-assert-1.0.1.tgz", + "integrity": "sha512-UtJcAD4yEaGtjPezWuO9wC4nwUnVH/8/Im3yEHQP4b67cXlD/Qr9hdITCU1xDbSEXg2XKNaP8jsReV7vQd00/A==", + "dev": true + }, + "minimalistic-crypto-utils": { + "version": "1.0.1", + "resolved": 
"https://registry.npmjs.org/minimalistic-crypto-utils/-/minimalistic-crypto-utils-1.0.1.tgz", + "integrity": "sha1-9sAMHAsIIkblxNmd+4x8CDsrWCo=", + "dev": true + }, + "minimatch": { + "version": "3.0.4", + "resolved": "https://registry.npmjs.org/minimatch/-/minimatch-3.0.4.tgz", + "integrity": "sha512-yJHVQEhyqPLUTgt9B83PXu6W3rx4MvvHvSUvToogpwoGDOUQ+yDrR0HRot+yOCdCO7u4hX3pWft6kWBBcqh0UA==", + "dev": true, + "requires": { + "brace-expansion": "^1.1.7" + } + }, + "minimist": { + "version": "1.2.6", + "resolved": "https://registry.npmjs.org/minimist/-/minimist-1.2.6.tgz", + "integrity": "sha512-Jsjnk4bw3YJqYzbdyBiNsPWHPfO++UGG749Cxs6peCu5Xg4nrena6OVxOYxrQTqww0Jmwt+Ref8rggumkTLz9Q==", + "dev": true + }, + "minipass": { + "version": "3.1.3", + "resolved": "https://registry.npmjs.org/minipass/-/minipass-3.1.3.tgz", + "integrity": "sha512-Mgd2GdMVzY+x3IJ+oHnVM+KG3lA5c8tnabyJKmHSaG2kAGpudxuOf8ToDkhumF7UzME7DecbQE9uOZhNm7PuJg==", + "dev": true, + "requires": { + "yallist": "^4.0.0" + }, + "dependencies": { + "yallist": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/yallist/-/yallist-4.0.0.tgz", + "integrity": "sha512-3wdGidZyq5PB084XLES5TpOSRA3wjXAlIWMhum2kRcv/41Sn2emQ0dycQW4uZXLejwKvg6EsvbdlVL+FYEct7A==", + "dev": true + } + } + }, + "minipass-collect": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/minipass-collect/-/minipass-collect-1.0.2.tgz", + "integrity": "sha512-6T6lH0H8OG9kITm/Jm6tdooIbogG9e0tLgpY6mphXSm/A9u8Nq1ryBG+Qspiub9LjWlBPsPS3tWQ/Botq4FdxA==", + "dev": true, + "requires": { + "minipass": "^3.0.0" + } + }, + "minipass-fetch": { + "version": "1.3.3", + "resolved": "https://registry.npmjs.org/minipass-fetch/-/minipass-fetch-1.3.3.tgz", + "integrity": "sha512-akCrLDWfbdAWkMLBxJEeWTdNsjML+dt5YgOI4gJ53vuO0vrmYQkUPxa6j6V65s9CcePIr2SSWqjT2EcrNseryQ==", + "dev": true, + "requires": { + "encoding": "^0.1.12", + "minipass": "^3.1.0", + "minipass-sized": "^1.0.3", + "minizlib": "^2.0.0" + } + }, + "minipass-flush": { + "version": 
"1.0.5", + "resolved": "https://registry.npmjs.org/minipass-flush/-/minipass-flush-1.0.5.tgz", + "integrity": "sha512-JmQSYYpPUqX5Jyn1mXaRwOda1uQ8HP5KAT/oDSLCzt1BYRhQU0/hDtsB1ufZfEEzMZ9aAVmsBw8+FWsIXlClWw==", + "dev": true, + "requires": { + "minipass": "^3.0.0" + } + }, + "minipass-json-stream": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/minipass-json-stream/-/minipass-json-stream-1.0.1.tgz", + "integrity": "sha512-ODqY18UZt/I8k+b7rl2AENgbWE8IDYam+undIJONvigAz8KR5GWblsFTEfQs0WODsjbSXWlm+JHEv8Gr6Tfdbg==", + "dev": true, + "requires": { + "jsonparse": "^1.3.1", + "minipass": "^3.0.0" + } + }, + "minipass-pipeline": { + "version": "1.2.4", + "resolved": "https://registry.npmjs.org/minipass-pipeline/-/minipass-pipeline-1.2.4.tgz", + "integrity": "sha512-xuIq7cIOt09RPRJ19gdi4b+RiNvDFYe5JH+ggNvBqGqpQXcru3PcRmOZuHBKWK1Txf9+cQ+HMVN4d6z46LZP7A==", + "dev": true, + "requires": { + "minipass": "^3.0.0" + } + }, + "minipass-sized": { + "version": "1.0.3", + "resolved": "https://registry.npmjs.org/minipass-sized/-/minipass-sized-1.0.3.tgz", + "integrity": "sha512-MbkQQ2CTiBMlA2Dm/5cY+9SWFEN8pzzOXi6rlM5Xxq0Yqbda5ZQy9sU75a673FE9ZK0Zsbr6Y5iP6u9nktfg2g==", + "dev": true, + "requires": { + "minipass": "^3.0.0" + } + }, + "minizlib": { + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/minizlib/-/minizlib-2.1.2.tgz", + "integrity": "sha512-bAxsR8BVfj60DWXHE3u30oHzfl4G7khkSuPW+qvpd7jFRHm7dLxOjUk1EHACJ/hxLY8phGJ0YhYHZo7jil7Qdg==", + "dev": true, + "requires": { + "minipass": "^3.0.0", + "yallist": "^4.0.0" + }, + "dependencies": { + "yallist": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/yallist/-/yallist-4.0.0.tgz", + "integrity": "sha512-3wdGidZyq5PB084XLES5TpOSRA3wjXAlIWMhum2kRcv/41Sn2emQ0dycQW4uZXLejwKvg6EsvbdlVL+FYEct7A==", + "dev": true + } + } + }, + "mississippi": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/mississippi/-/mississippi-3.0.0.tgz", + "integrity": 
"sha512-x471SsVjUtBRtcvd4BzKE9kFC+/2TeWgKCgw0bZcw1b9l2X3QX5vCWgF+KaZaYm87Ss//rHnWryupDrgLvmSkA==", + "dev": true, + "requires": { + "concat-stream": "^1.5.0", + "duplexify": "^3.4.2", + "end-of-stream": "^1.1.0", + "flush-write-stream": "^1.0.0", + "from2": "^2.1.0", + "parallel-transform": "^1.1.0", + "pump": "^3.0.0", + "pumpify": "^1.3.3", + "stream-each": "^1.1.0", + "through2": "^2.0.0" + } + }, + "mixin-deep": { + "version": "1.3.2", + "resolved": "https://registry.npmjs.org/mixin-deep/-/mixin-deep-1.3.2.tgz", + "integrity": "sha512-WRoDn//mXBiJ1H40rqa3vH0toePwSsGb45iInWlTySa+Uu4k3tYUSxa2v1KqAiLtvlrSzaExqS1gtk96A9zvEA==", + "dev": true, + "requires": { + "for-in": "^1.0.2", + "is-extendable": "^1.0.1" + }, + "dependencies": { + "is-extendable": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/is-extendable/-/is-extendable-1.0.1.tgz", + "integrity": "sha512-arnXMxT1hhoKo9k1LZdmlNyJdDDfy2v0fXjFlmok4+i8ul/6WlbVge9bhM74OpNPQPMGUToDtz+KXa1PneJxOA==", + "dev": true, + "requires": { + "is-plain-object": "^2.0.4" + } + } + } + }, + "mkdirp": { + "version": "0.5.5", + "resolved": "https://registry.npmjs.org/mkdirp/-/mkdirp-0.5.5.tgz", + "integrity": "sha512-NKmAlESf6jMGym1++R0Ra7wvhV+wFW63FaSOFPwRahvea0gMUcGUhVeAg/0BC0wiv9ih5NYPB1Wn1UEI1/L+xQ==", + "dev": true, + "requires": { + "minimist": "^1.2.5" + } + }, + "moment-mini": { + "version": "2.24.0", + "resolved": "https://registry.npmjs.org/moment-mini/-/moment-mini-2.24.0.tgz", + "integrity": "sha512-9ARkWHBs+6YJIvrIp0Ik5tyTTtP9PoV0Ssu2Ocq5y9v8+NOOpWiRshAp8c4rZVWTOe+157on/5G+zj5pwIQFEQ==", + "dev": true + }, + "move-concurrently": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/move-concurrently/-/move-concurrently-1.0.1.tgz", + "integrity": "sha1-viwAX9oy4LKa8fBdfEszIUxwH5I=", + "dev": true, + "requires": { + "aproba": "^1.1.1", + "copy-concurrently": "^1.0.0", + "fs-write-stream-atomic": "^1.0.8", + "mkdirp": "^0.5.1", + "rimraf": "^2.5.4", + "run-queue": "^1.0.3" + } + }, + "ms": 
{ + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/ms/-/ms-2.1.2.tgz", + "integrity": "sha512-sGkPx+VjMtmA6MX27oA4FBFELFCZZ4S4XqeGOXCv68tT+jb3vk/RyaKWP0PTKyWtmLSM0b+adUTEvbs1PEaH2w==", + "dev": true + }, + "multicast-dns": { + "version": "6.2.3", + "resolved": "https://registry.npmjs.org/multicast-dns/-/multicast-dns-6.2.3.tgz", + "integrity": "sha512-ji6J5enbMyGRHIAkAOu3WdV8nggqviKCEKtXcOqfphZZtQrmHKycfynJ2V7eVPUA4NhJ6V7Wf4TmGbTwKE9B6g==", + "dev": true, + "requires": { + "dns-packet": "^1.3.1", + "thunky": "^1.0.2" + } + }, + "multicast-dns-service-types": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/multicast-dns-service-types/-/multicast-dns-service-types-1.1.0.tgz", + "integrity": "sha1-iZ8R2WhuXgXLkbNdXw5jt3PPyQE=", + "dev": true + }, + "nan": { + "version": "2.14.2", + "resolved": "https://registry.npmjs.org/nan/-/nan-2.14.2.tgz", + "integrity": "sha512-M2ufzIiINKCuDfBSAUr1vWQ+vuVcA9kqx8JJUsbQi6yf1uGRyb7HfpdfUr5qLXf3B/t8dPvcjhKMmlfnP47EzQ==", + "dev": true, + "optional": true + }, + "nanomatch": { + "version": "1.2.13", + "resolved": "https://registry.npmjs.org/nanomatch/-/nanomatch-1.2.13.tgz", + "integrity": "sha512-fpoe2T0RbHwBTBUOftAfBPaDEi06ufaUai0mE6Yn1kacc3SnTErfb/h+X94VXzI64rKFHYImXSvdwGGCmwOqCA==", + "dev": true, + "requires": { + "arr-diff": "^4.0.0", + "array-unique": "^0.3.2", + "define-property": "^2.0.2", + "extend-shallow": "^3.0.2", + "fragment-cache": "^0.2.1", + "is-windows": "^1.0.2", + "kind-of": "^6.0.2", + "object.pick": "^1.3.0", + "regex-not": "^1.0.0", + "snapdragon": "^0.8.1", + "to-regex": "^3.0.1" + } + }, + "negotiator": { + "version": "0.6.2", + "resolved": "https://registry.npmjs.org/negotiator/-/negotiator-0.6.2.tgz", + "integrity": "sha512-hZXc7K2e+PgeI1eDBe/10Ard4ekbfrrqG8Ep+8Jmf4JID2bNg7NvCPOZN+kfF574pFQI7mum2AUqDidoKqcTOw==", + "dev": true + }, + "neo-async": { + "version": "2.6.2", + "resolved": "https://registry.npmjs.org/neo-async/-/neo-async-2.6.2.tgz", + "integrity": 
"sha512-Yd3UES5mWCSqR+qNT93S3UoYUkqAZ9lLg8a7g9rimsWmYGK8cVToA4/sF3RrshdyV3sAGMXVUmpMYOw+dLpOuw==", + "dev": true + }, + "nice-try": { + "version": "1.0.5", + "resolved": "https://registry.npmjs.org/nice-try/-/nice-try-1.0.5.tgz", + "integrity": "sha512-1nh45deeb5olNY7eX82BkPO7SSxR5SSYJiPTrTdFUVYwAl8CKMA5N9PjTYkHiRjisVcxcQ1HXdLhx2qxxJzLNQ==", + "dev": true + }, + "no-case": { + "version": "2.3.2", + "resolved": "https://registry.npmjs.org/no-case/-/no-case-2.3.2.tgz", + "integrity": "sha512-rmTZ9kz+f3rCvK2TD1Ue/oZlns7OGoIWP4fc3llxxRXlOkHKoWPPWJOfFYpITabSow43QJbRIoHQXtt10VldyQ==", + "dev": true, + "requires": { + "lower-case": "^1.1.1" + } + }, + "node-emoji": { + "version": "1.10.0", + "resolved": "https://registry.npmjs.org/node-emoji/-/node-emoji-1.10.0.tgz", + "integrity": "sha512-Yt3384If5H6BYGVHiHwTL+99OzJKHhgp82S8/dktEK73T26BazdgZ4JZh92xSVtGNJvz9UbXdNAc5hcrXV42vw==", + "dev": true, + "requires": { + "lodash.toarray": "^4.4.0" + } + }, + "node-forge": { + "version": "0.10.0", + "resolved": "https://registry.npmjs.org/node-forge/-/node-forge-0.10.0.tgz", + "integrity": "sha512-PPmu8eEeG9saEUvI97fm4OYxXVB6bFvyNTyiUOBichBpFG8A1Ljw3bY62+5oOjDEMHRnd0Y7HQ+x7uzxOzC6JA==", + "dev": true + }, + "node-gyp": { + "version": "7.1.2", + "resolved": "https://registry.npmjs.org/node-gyp/-/node-gyp-7.1.2.tgz", + "integrity": "sha512-CbpcIo7C3eMu3dL1c3d0xw449fHIGALIJsRP4DDPHpyiW8vcriNY7ubh9TE4zEKfSxscY7PjeFnshE7h75ynjQ==", + "dev": true, + "requires": { + "env-paths": "^2.2.0", + "glob": "^7.1.4", + "graceful-fs": "^4.2.3", + "nopt": "^5.0.0", + "npmlog": "^4.1.2", + "request": "^2.88.2", + "rimraf": "^3.0.2", + "semver": "^7.3.2", + "tar": "^6.0.2", + "which": "^2.0.2" + }, + "dependencies": { + "lru-cache": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/lru-cache/-/lru-cache-6.0.0.tgz", + "integrity": "sha512-Jo6dJ04CmSjuznwJSS3pUeWmd/H0ffTlkXXgwZi+eq1UCmqQwCh+eLsYOYCwY991i2Fah4h1BEMCx4qThGbsiA==", + "dev": true, + "requires": { + "yallist": "^4.0.0" + } + 
}, + "nopt": { + "version": "5.0.0", + "resolved": "https://registry.npmjs.org/nopt/-/nopt-5.0.0.tgz", + "integrity": "sha512-Tbj67rffqceeLpcRXrT7vKAN8CwfPeIBgM7E6iBkmKLV7bEMwpGgYLGv0jACUsECaa/vuxP0IjEont6umdMgtQ==", + "dev": true, + "requires": { + "abbrev": "1" + } + }, + "rimraf": { + "version": "3.0.2", + "resolved": "https://registry.npmjs.org/rimraf/-/rimraf-3.0.2.tgz", + "integrity": "sha512-JZkJMZkAGFFPP2YqXZXPbMlMBgsxzE8ILs4lMIX/2o0L9UBw9O/Y3o6wFw/i9YLapcUJWwqbi3kdxIPdC62TIA==", + "dev": true, + "requires": { + "glob": "^7.1.3" + } + }, + "semver": { + "version": "7.3.5", + "resolved": "https://registry.npmjs.org/semver/-/semver-7.3.5.tgz", + "integrity": "sha512-PoeGJYh8HK4BTO/a9Tf6ZG3veo/A7ZVsYrSA6J8ny9nb3B1VrpkuN+z9OE5wfE5p6H4LchYZsegiQgbJD94ZFQ==", + "dev": true, + "requires": { + "lru-cache": "^6.0.0" + } + }, + "yallist": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/yallist/-/yallist-4.0.0.tgz", + "integrity": "sha512-3wdGidZyq5PB084XLES5TpOSRA3wjXAlIWMhum2kRcv/41Sn2emQ0dycQW4uZXLejwKvg6EsvbdlVL+FYEct7A==", + "dev": true + } + } + }, + "node-libs-browser": { + "version": "2.2.1", + "resolved": "https://registry.npmjs.org/node-libs-browser/-/node-libs-browser-2.2.1.tgz", + "integrity": "sha512-h/zcD8H9kaDZ9ALUWwlBUDo6TKF8a7qBSCSEGfjTVIYeqsioSKaAX+BN7NgiMGp6iSIXZ3PxgCu8KS3b71YK5Q==", + "dev": true, + "requires": { + "assert": "^1.1.1", + "browserify-zlib": "^0.2.0", + "buffer": "^4.3.0", + "console-browserify": "^1.1.0", + "constants-browserify": "^1.0.0", + "crypto-browserify": "^3.11.0", + "domain-browser": "^1.1.1", + "events": "^3.0.0", + "https-browserify": "^1.0.0", + "os-browserify": "^0.3.0", + "path-browserify": "0.0.1", + "process": "^0.11.10", + "punycode": "^1.2.4", + "querystring-es3": "^0.2.0", + "readable-stream": "^2.3.3", + "stream-browserify": "^2.0.1", + "stream-http": "^2.7.2", + "string_decoder": "^1.0.0", + "timers-browserify": "^2.0.4", + "tty-browserify": "0.0.0", + "url": "^0.11.0", + "util": "^0.11.0", + 
"vm-browserify": "^1.0.1" + }, + "dependencies": { + "punycode": { + "version": "1.4.1", + "resolved": "https://registry.npmjs.org/punycode/-/punycode-1.4.1.tgz", + "integrity": "sha1-wNWmOycYgArY4esPpSachN1BhF4=", + "dev": true + } + } + }, + "node-releases": { + "version": "1.1.72", + "resolved": "https://registry.npmjs.org/node-releases/-/node-releases-1.1.72.tgz", + "integrity": "sha512-LLUo+PpH3dU6XizX3iVoubUNheF/owjXCZZ5yACDxNnPtgFuludV1ZL3ayK1kVep42Rmm0+R9/Y60NQbZ2bifw==", + "dev": true + }, + "nopt": { + "version": "1.0.10", + "resolved": "https://registry.npmjs.org/nopt/-/nopt-1.0.10.tgz", + "integrity": "sha1-bd0hvSoxQXuScn3Vhfim83YI6+4=", + "dev": true, + "requires": { + "abbrev": "1" + } + }, + "normalize-path": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/normalize-path/-/normalize-path-3.0.0.tgz", + "integrity": "sha512-6eZs5Ls3WtCisHWp9S2GUy8dqkpGi4BVSz3GaqiE6ezub0512ESztXUwUB6C6IKbQkY2Pnb/mD4WYojCRwcwLA==", + "dev": true + }, + "normalize-range": { + "version": "0.1.2", + "resolved": "https://registry.npmjs.org/normalize-range/-/normalize-range-0.1.2.tgz", + "integrity": "sha1-LRDAa9/TEuqXd2laTShDlFa3WUI=", + "dev": true + }, + "normalize-url": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/normalize-url/-/normalize-url-2.0.1.tgz", + "integrity": "sha512-D6MUW4K/VzoJ4rJ01JFKxDrtY1v9wrgzCX5f2qj/lzH1m/lW6MhUZFKerVsnyjOhOsYzI9Kqqak+10l4LvLpMw==", + "dev": true, + "requires": { + "prepend-http": "^2.0.0", + "query-string": "^5.0.1", + "sort-keys": "^2.0.0" + }, + "dependencies": { + "query-string": { + "version": "5.1.1", + "resolved": "https://registry.npmjs.org/query-string/-/query-string-5.1.1.tgz", + "integrity": "sha512-gjWOsm2SoGlgLEdAGt7a6slVOk9mGiXmPFMqrEhLQ68rhQuBnpfs3+EmlvqKyxnCo9/PPlF+9MtY02S1aFg+Jw==", + "dev": true, + "requires": { + "decode-uri-component": "^0.2.0", + "object-assign": "^4.1.0", + "strict-uri-encode": "^1.0.0" + } + }, + "strict-uri-encode": { + "version": "1.1.0", + "resolved": 
"https://registry.npmjs.org/strict-uri-encode/-/strict-uri-encode-1.1.0.tgz", + "integrity": "sha1-J5siXfHVgrH1TmWt3UNS4Y+qBxM=", + "dev": true + } + } + }, + "normalize.css": { + "version": "8.0.1", + "resolved": "https://registry.npmjs.org/normalize.css/-/normalize.css-8.0.1.tgz", + "integrity": "sha512-qizSNPO93t1YUuUhP22btGOo3chcvDFqFaj2TRybP0DMxkHOCTYwp3n34fel4a31ORXy4m1Xq0Gyqpb5m33qIg==", + "dev": true + }, + "npm-bundled": { + "version": "1.1.2", + "resolved": "https://registry.npmjs.org/npm-bundled/-/npm-bundled-1.1.2.tgz", + "integrity": "sha512-x5DHup0SuyQcmL3s7Rx/YQ8sbw/Hzg0rj48eN0dV7hf5cmQq5PXIeioroH3raV1QC1yh3uTYuMThvEQF3iKgGQ==", + "dev": true, + "requires": { + "npm-normalize-package-bin": "^1.0.1" + } + }, + "npm-check-updates": { + "version": "11.5.13", + "resolved": "https://registry.npmjs.org/npm-check-updates/-/npm-check-updates-11.5.13.tgz", + "integrity": "sha512-4R9MOr101RdTWYKZSbIbCFIXYegxt0Z7CYMNSYUzkLuusMcueMoH3E/dgfL3h4S9W/z4XboDNbP6jV9FJP/73A==", + "dev": true, + "requires": { + "chalk": "^4.1.1", + "cint": "^8.2.1", + "cli-table": "^0.3.6", + "commander": "^6.2.1", + "find-up": "5.0.0", + "fp-and-or": "^0.1.3", + "get-stdin": "^8.0.0", + "globby": "^11.0.3", + "hosted-git-info": "^4.0.2", + "json-parse-helpfulerror": "^1.0.3", + "jsonlines": "^0.1.1", + "libnpmconfig": "^1.2.1", + "lodash": "^4.17.21", + "mem": "^8.1.1", + "minimatch": "^3.0.4", + "p-map": "^4.0.0", + "pacote": "^11.3.3", + "parse-github-url": "^1.0.2", + "progress": "^2.0.3", + "prompts": "^2.4.1", + "rc-config-loader": "^4.0.0", + "remote-git-tags": "^3.0.0", + "rimraf": "^3.0.2", + "semver": "^7.3.5", + "semver-utils": "^1.1.4", + "spawn-please": "^1.0.0", + "update-notifier": "^5.1.0" + }, + "dependencies": { + "@nodelib/fs.stat": { + "version": "2.0.4", + "resolved": "https://registry.npmjs.org/@nodelib/fs.stat/-/fs.stat-2.0.4.tgz", + "integrity": "sha512-IYlHJA0clt2+Vg7bccq+TzRdJvv19c2INqBSsoOLp1je7xjtr7J26+WXR72MCdvU9q1qTzIWDfhMf+DRvQJK4Q==", + "dev": true + }, + 
"ansi-regex": { + "version": "5.0.0", + "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz", + "integrity": "sha512-bY6fj56OUQ0hU1KjFNDQuJFezqKdrAyFdIevADiqrWHwSlbmBNMHp5ak2f40Pm8JTFyM2mqxkG6ngkHO11f/lg==", + "dev": true + }, + "ansi-styles": { + "version": "4.3.0", + "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-4.3.0.tgz", + "integrity": "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg==", + "dev": true, + "requires": { + "color-convert": "^2.0.1" + } + }, + "array-union": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/array-union/-/array-union-2.1.0.tgz", + "integrity": "sha512-HGyxoOTYUyCM6stUe6EJgnd4EoewAI7zMdfqO+kGjnlZmBDz/cR5pf8r/cR4Wq60sL/p0IkcjUEEPwS3GFrIyw==", + "dev": true + }, + "boxen": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/boxen/-/boxen-5.0.1.tgz", + "integrity": "sha512-49VBlw+PrWEF51aCmy7QIteYPIFZxSpvqBdP/2itCPPlJ49kj9zg/XPRFrdkne2W+CfwXUls8exMvu1RysZpKA==", + "dev": true, + "requires": { + "ansi-align": "^3.0.0", + "camelcase": "^6.2.0", + "chalk": "^4.1.0", + "cli-boxes": "^2.2.1", + "string-width": "^4.2.0", + "type-fest": "^0.20.2", + "widest-line": "^3.1.0", + "wrap-ansi": "^7.0.0" + } + }, + "braces": { + "version": "3.0.2", + "resolved": "https://registry.npmjs.org/braces/-/braces-3.0.2.tgz", + "integrity": "sha512-b8um+L1RzM3WDSzvhm6gIz1yfTbBt6YTlcEKAvsmqCZZFw46z626lVj9j1yEPW33H5H+lBQpZMP1k8l+78Ha0A==", + "dev": true, + "requires": { + "fill-range": "^7.0.1" + } + }, + "color-convert": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-2.0.1.tgz", + "integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==", + "dev": true, + "requires": { + "color-name": "~1.1.4" + } + }, + "color-name": { + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.4.tgz", + "integrity": 
"sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==", + "dev": true + }, + "commander": { + "version": "6.2.1", + "resolved": "https://registry.npmjs.org/commander/-/commander-6.2.1.tgz", + "integrity": "sha512-U7VdrJFnJgo4xjrHpTzu0yrHPGImdsmD95ZlgYSEajAn2JKzDhDTPG9kBTefmObL2w/ngeZnilk+OV9CG3d7UA==", + "dev": true + }, + "dir-glob": { + "version": "3.0.1", + "resolved": "https://registry.npmjs.org/dir-glob/-/dir-glob-3.0.1.tgz", + "integrity": "sha512-WkrWp9GR4KXfKGYzOLmTuGVi1UWFfws377n9cc55/tb6DuqyF6pcQ5AbiHEshaDpY9v6oaSr2XCDidGmMwdzIA==", + "dev": true, + "requires": { + "path-type": "^4.0.0" + } + }, + "emoji-regex": { + "version": "8.0.0", + "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz", + "integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A==", + "dev": true + }, + "fast-glob": { + "version": "3.2.5", + "resolved": "https://registry.npmjs.org/fast-glob/-/fast-glob-3.2.5.tgz", + "integrity": "sha512-2DtFcgT68wiTTiwZ2hNdJfcHNke9XOfnwmBRWXhmeKM8rF0TGwmC/Qto3S7RoZKp5cilZbxzO5iTNTQsJ+EeDg==", + "dev": true, + "requires": { + "@nodelib/fs.stat": "^2.0.2", + "@nodelib/fs.walk": "^1.2.3", + "glob-parent": "^5.1.0", + "merge2": "^1.3.0", + "micromatch": "^4.0.2", + "picomatch": "^2.2.1" + } + }, + "fill-range": { + "version": "7.0.1", + "resolved": "https://registry.npmjs.org/fill-range/-/fill-range-7.0.1.tgz", + "integrity": "sha512-qOo9F+dMUmC2Lcb4BbVvnKJxTPjCm+RRpe4gDuGrzkL7mEVl/djYSu2OdQ2Pa302N4oqkSg9ir6jaLWJ2USVpQ==", + "dev": true, + "requires": { + "to-regex-range": "^5.0.1" + } + }, + "find-up": { + "version": "5.0.0", + "resolved": "https://registry.npmjs.org/find-up/-/find-up-5.0.0.tgz", + "integrity": "sha512-78/PXT1wlLLDgTzDs7sjq9hzz0vXD+zn+7wypEe4fXQxCmdmqfGsEPQxmiCSQI3ajFV91bVSsvNtrJRiW6nGng==", + "dev": true, + "requires": { + "locate-path": "^6.0.0", + "path-exists": "^4.0.0" + } + }, + "glob-parent": { + "version": "5.1.2", 
+ "resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.2.tgz", + "integrity": "sha512-AOIgSQCepiJYwP3ARnGx+5VnTu2HBYdzbGP45eLw1vr3zB3vZLeyed1sC9hnbcOc9/SrMyM5RPQrkGz4aS9Zow==", + "dev": true, + "requires": { + "is-glob": "^4.0.1" + } + }, + "global-dirs": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/global-dirs/-/global-dirs-3.0.0.tgz", + "integrity": "sha512-v8ho2DS5RiCjftj1nD9NmnfaOzTdud7RRnVd9kFNOjqZbISlx5DQ+OrTkywgd0dIt7oFCvKetZSHoHcP3sDdiA==", + "dev": true, + "requires": { + "ini": "2.0.0" + } + }, + "globby": { + "version": "11.0.3", + "resolved": "https://registry.npmjs.org/globby/-/globby-11.0.3.tgz", + "integrity": "sha512-ffdmosjA807y7+lA1NM0jELARVmYul/715xiILEjo3hBLPTcirgQNnXECn5g3mtR8TOLCVbkfua1Hpen25/Xcg==", + "dev": true, + "requires": { + "array-union": "^2.1.0", + "dir-glob": "^3.0.1", + "fast-glob": "^3.1.1", + "ignore": "^5.1.4", + "merge2": "^1.3.0", + "slash": "^3.0.0" + } + }, + "ignore": { + "version": "5.1.8", + "resolved": "https://registry.npmjs.org/ignore/-/ignore-5.1.8.tgz", + "integrity": "sha512-BMpfD7PpiETpBl/A6S498BaIJ6Y/ABT93ETbby2fP00v4EbvPBXWEoaR1UBPKs3iR53pJY7EtZk5KACI57i1Uw==", + "dev": true + }, + "ini": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/ini/-/ini-2.0.0.tgz", + "integrity": "sha512-7PnF4oN3CvZF23ADhA5wRaYEQpJ8qygSkbtTXWBeXWXmEVRXK+1ITciHWwHhsjv1TmW0MgacIv6hEi5pX5NQdA==", + "dev": true + }, + "is-fullwidth-code-point": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/is-fullwidth-code-point/-/is-fullwidth-code-point-3.0.0.tgz", + "integrity": "sha512-zymm5+u+sCsSWyD9qNaejV3DFvhCKclKdizYaJUuHA83RLjb7nSuGnddCHGv0hk+KY7BMAlsWeK4Ueg6EV6XQg==", + "dev": true + }, + "is-installed-globally": { + "version": "0.4.0", + "resolved": "https://registry.npmjs.org/is-installed-globally/-/is-installed-globally-0.4.0.tgz", + "integrity": "sha512-iwGqO3J21aaSkC7jWnHP/difazwS7SFeIqxv6wEtLU8Y5KlzFTjyqcSIT0d8s4+dDhKytsk9PJZ2BkS5eZwQRQ==", + "dev": true, + 
"requires": { + "global-dirs": "^3.0.0", + "is-path-inside": "^3.0.2" + } + }, + "is-npm": { + "version": "5.0.0", + "resolved": "https://registry.npmjs.org/is-npm/-/is-npm-5.0.0.tgz", + "integrity": "sha512-WW/rQLOazUq+ST/bCAVBp/2oMERWLsR7OrKyt052dNDk4DHcDE0/7QSXITlmi+VBcV13DfIbysG3tZJm5RfdBA==", + "dev": true + }, + "is-number": { + "version": "7.0.0", + "resolved": "https://registry.npmjs.org/is-number/-/is-number-7.0.0.tgz", + "integrity": "sha512-41Cifkg6e8TylSpdtTpeLVMqvSBEVzTttHvERD741+pnZ8ANv0004MRL43QKPDlK9cGvNp6NZWZUBlbGXYxxng==", + "dev": true + }, + "is-path-inside": { + "version": "3.0.3", + "resolved": "https://registry.npmjs.org/is-path-inside/-/is-path-inside-3.0.3.tgz", + "integrity": "sha512-Fd4gABb+ycGAmKou8eMftCupSir5lRxqf4aD/vd0cD2qc4HL07OjCeuHMr8Ro4CoMaeCKDB0/ECBOVWjTwUvPQ==", + "dev": true + }, + "locate-path": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/locate-path/-/locate-path-6.0.0.tgz", + "integrity": "sha512-iPZK6eYjbxRu3uB4/WZ3EsEIMJFMqAoopl3R+zuq0UjcAm/MO6KCweDgPfP3elTztoKP3KtnVHxTn2NHBSDVUw==", + "dev": true, + "requires": { + "p-locate": "^5.0.0" + } + }, + "lru-cache": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/lru-cache/-/lru-cache-6.0.0.tgz", + "integrity": "sha512-Jo6dJ04CmSjuznwJSS3pUeWmd/H0ffTlkXXgwZi+eq1UCmqQwCh+eLsYOYCwY991i2Fah4h1BEMCx4qThGbsiA==", + "dev": true, + "requires": { + "yallist": "^4.0.0" + } + }, + "micromatch": { + "version": "4.0.4", + "resolved": "https://registry.npmjs.org/micromatch/-/micromatch-4.0.4.tgz", + "integrity": "sha512-pRmzw/XUcwXGpD9aI9q/0XOwLNygjETJ8y0ao0wdqprrzDa4YnxLcz7fQRZr8voh8V10kGhABbNcHVk5wHgWwg==", + "dev": true, + "requires": { + "braces": "^3.0.1", + "picomatch": "^2.2.3" + } + }, + "p-limit": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/p-limit/-/p-limit-3.1.0.tgz", + "integrity": "sha512-TYOanM3wGwNGsZN2cVTYPArw454xnXj5qmWF1bEoAc4+cU/ol7GVh7odevjp1FNHduHc3KZMcFduxU5Xc6uJRQ==", + "dev": true, + "requires": { + 
"yocto-queue": "^0.1.0" + } + }, + "p-locate": { + "version": "5.0.0", + "resolved": "https://registry.npmjs.org/p-locate/-/p-locate-5.0.0.tgz", + "integrity": "sha512-LaNjtRWUBY++zB5nE/NwcaoMylSPk+S+ZHNB1TzdbMJMny6dynpAGt7X/tl/QYq3TIeE6nxHppbo2LGymrG5Pw==", + "dev": true, + "requires": { + "p-limit": "^3.0.2" + } + }, + "p-map": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/p-map/-/p-map-4.0.0.tgz", + "integrity": "sha512-/bjOqmgETBYB5BoEeGVea8dmvHb2m9GLy1E9W43yeyfP6QQCZGFNa+XRceJEuDB6zqr+gKpIAmlLebMpykw/MQ==", + "dev": true, + "requires": { + "aggregate-error": "^3.0.0" + } + }, + "path-type": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/path-type/-/path-type-4.0.0.tgz", + "integrity": "sha512-gDKb8aZMDeD/tZWs9P6+q0J9Mwkdl6xMV8TjnGP3qJVJ06bdMgkbBlLU8IdfOsIsFz2BW1rNVT3XuNEl8zPAvw==", + "dev": true + }, + "rimraf": { + "version": "3.0.2", + "resolved": "https://registry.npmjs.org/rimraf/-/rimraf-3.0.2.tgz", + "integrity": "sha512-JZkJMZkAGFFPP2YqXZXPbMlMBgsxzE8ILs4lMIX/2o0L9UBw9O/Y3o6wFw/i9YLapcUJWwqbi3kdxIPdC62TIA==", + "dev": true, + "requires": { + "glob": "^7.1.3" + } + }, + "semver": { + "version": "7.3.5", + "resolved": "https://registry.npmjs.org/semver/-/semver-7.3.5.tgz", + "integrity": "sha512-PoeGJYh8HK4BTO/a9Tf6ZG3veo/A7ZVsYrSA6J8ny9nb3B1VrpkuN+z9OE5wfE5p6H4LchYZsegiQgbJD94ZFQ==", + "dev": true, + "requires": { + "lru-cache": "^6.0.0" + } + }, + "slash": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/slash/-/slash-3.0.0.tgz", + "integrity": "sha512-g9Q1haeby36OSStwb4ntCGGGaKsaVSjQ68fBxoQcutl5fS1vuY18H3wSt3jFyFtrkx+Kz0V1G85A4MyAdDMi2Q==", + "dev": true + }, + "string-width": { + "version": "4.2.2", + "resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.2.tgz", + "integrity": "sha512-XBJbT3N4JhVumXE0eoLU9DCjcaF92KLNqTmFCnG1pf8duUxFGwtP6AD6nkjw9a3IdiRtL3E2w3JDiE/xi3vOeA==", + "dev": true, + "requires": { + "emoji-regex": "^8.0.0", + "is-fullwidth-code-point": "^3.0.0", + 
"strip-ansi": "^6.0.0" + } + }, + "strip-ansi": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.0.tgz", + "integrity": "sha512-AuvKTrTfQNYNIctbR1K/YGTR1756GycPsg7b9bdV9Duqur4gv6aKqHXah67Z8ImS7WEz5QVcOtlfW2rZEugt6w==", + "dev": true, + "requires": { + "ansi-regex": "^5.0.0" + } + }, + "to-regex-range": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/to-regex-range/-/to-regex-range-5.0.1.tgz", + "integrity": "sha512-65P7iz6X5yEr1cwcgvQxbbIw7Uk3gOy5dIdtZ4rDveLqhrdJP+Li/Hx6tyK0NEb+2GCyneCMJiGqrADCSNk8sQ==", + "dev": true, + "requires": { + "is-number": "^7.0.0" + } + }, + "type-fest": { + "version": "0.20.2", + "resolved": "https://registry.npmjs.org/type-fest/-/type-fest-0.20.2.tgz", + "integrity": "sha512-Ne+eE4r0/iWnpAxD852z3A+N0Bt5RN//NjJwRd2VFHEmrywxf5vsZlh4R6lixl6B+wz/8d+maTSAkN1FIkI3LQ==", + "dev": true + }, + "update-notifier": { + "version": "5.1.0", + "resolved": "https://registry.npmjs.org/update-notifier/-/update-notifier-5.1.0.tgz", + "integrity": "sha512-ItnICHbeMh9GqUy31hFPrD1kcuZ3rpxDZbf4KUDavXwS0bW5m7SLbDQpGX3UYr072cbrF5hFUs3r5tUsPwjfHw==", + "dev": true, + "requires": { + "boxen": "^5.0.0", + "chalk": "^4.1.0", + "configstore": "^5.0.1", + "has-yarn": "^2.1.0", + "import-lazy": "^2.1.0", + "is-ci": "^2.0.0", + "is-installed-globally": "^0.4.0", + "is-npm": "^5.0.0", + "is-yarn-global": "^0.3.0", + "latest-version": "^5.1.0", + "pupa": "^2.1.1", + "semver": "^7.3.4", + "semver-diff": "^3.1.1", + "xdg-basedir": "^4.0.0" + } + }, + "wrap-ansi": { + "version": "7.0.0", + "resolved": "https://registry.npmjs.org/wrap-ansi/-/wrap-ansi-7.0.0.tgz", + "integrity": "sha512-YVGIj2kamLSTxw6NsZjoBxfSwsn0ycdesmc4p+Q21c5zPuZ1pl+NfxVdxPtdHvmNVOQ6XSYG4AUtyt/Fi7D16Q==", + "dev": true, + "requires": { + "ansi-styles": "^4.0.0", + "string-width": "^4.1.0", + "strip-ansi": "^6.0.0" + } + }, + "yallist": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/yallist/-/yallist-4.0.0.tgz", + 
"integrity": "sha512-3wdGidZyq5PB084XLES5TpOSRA3wjXAlIWMhum2kRcv/41Sn2emQ0dycQW4uZXLejwKvg6EsvbdlVL+FYEct7A==", + "dev": true + } + } + }, + "npm-install-checks": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/npm-install-checks/-/npm-install-checks-4.0.0.tgz", + "integrity": "sha512-09OmyDkNLYwqKPOnbI8exiOZU2GVVmQp7tgez2BPi5OZC8M82elDAps7sxC4l//uSUtotWqoEIDwjRvWH4qz8w==", + "dev": true, + "requires": { + "semver": "^7.1.1" + }, + "dependencies": { + "lru-cache": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/lru-cache/-/lru-cache-6.0.0.tgz", + "integrity": "sha512-Jo6dJ04CmSjuznwJSS3pUeWmd/H0ffTlkXXgwZi+eq1UCmqQwCh+eLsYOYCwY991i2Fah4h1BEMCx4qThGbsiA==", + "dev": true, + "requires": { + "yallist": "^4.0.0" + } + }, + "semver": { + "version": "7.3.5", + "resolved": "https://registry.npmjs.org/semver/-/semver-7.3.5.tgz", + "integrity": "sha512-PoeGJYh8HK4BTO/a9Tf6ZG3veo/A7ZVsYrSA6J8ny9nb3B1VrpkuN+z9OE5wfE5p6H4LchYZsegiQgbJD94ZFQ==", + "dev": true, + "requires": { + "lru-cache": "^6.0.0" + } + }, + "yallist": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/yallist/-/yallist-4.0.0.tgz", + "integrity": "sha512-3wdGidZyq5PB084XLES5TpOSRA3wjXAlIWMhum2kRcv/41Sn2emQ0dycQW4uZXLejwKvg6EsvbdlVL+FYEct7A==", + "dev": true + } + } + }, + "npm-normalize-package-bin": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/npm-normalize-package-bin/-/npm-normalize-package-bin-1.0.1.tgz", + "integrity": "sha512-EPfafl6JL5/rU+ot6P3gRSCpPDW5VmIzX959Ob1+ySFUuuYHWHekXpwdUZcKP5C+DS4GEtdJluwBjnsNDl+fSA==", + "dev": true + }, + "npm-package-arg": { + "version": "8.1.2", + "resolved": "https://registry.npmjs.org/npm-package-arg/-/npm-package-arg-8.1.2.tgz", + "integrity": "sha512-6Eem455JsSMJY6Kpd3EyWE+n5hC+g9bSyHr9K9U2zqZb7+02+hObQ2c0+8iDk/mNF+8r1MhY44WypKJAkySIYA==", + "dev": true, + "requires": { + "hosted-git-info": "^4.0.1", + "semver": "^7.3.4", + "validate-npm-package-name": "^3.0.0" + }, + "dependencies": { + 
"lru-cache": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/lru-cache/-/lru-cache-6.0.0.tgz", + "integrity": "sha512-Jo6dJ04CmSjuznwJSS3pUeWmd/H0ffTlkXXgwZi+eq1UCmqQwCh+eLsYOYCwY991i2Fah4h1BEMCx4qThGbsiA==", + "dev": true, + "requires": { + "yallist": "^4.0.0" + } + }, + "semver": { + "version": "7.3.5", + "resolved": "https://registry.npmjs.org/semver/-/semver-7.3.5.tgz", + "integrity": "sha512-PoeGJYh8HK4BTO/a9Tf6ZG3veo/A7ZVsYrSA6J8ny9nb3B1VrpkuN+z9OE5wfE5p6H4LchYZsegiQgbJD94ZFQ==", + "dev": true, + "requires": { + "lru-cache": "^6.0.0" + } + }, + "yallist": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/yallist/-/yallist-4.0.0.tgz", + "integrity": "sha512-3wdGidZyq5PB084XLES5TpOSRA3wjXAlIWMhum2kRcv/41Sn2emQ0dycQW4uZXLejwKvg6EsvbdlVL+FYEct7A==", + "dev": true + } + } + }, + "npm-packlist": { + "version": "2.2.2", + "resolved": "https://registry.npmjs.org/npm-packlist/-/npm-packlist-2.2.2.tgz", + "integrity": "sha512-Jt01acDvJRhJGthnUJVF/w6gumWOZxO7IkpY/lsX9//zqQgnF7OJaxgQXcerd4uQOLu7W5bkb4mChL9mdfm+Zg==", + "dev": true, + "requires": { + "glob": "^7.1.6", + "ignore-walk": "^3.0.3", + "npm-bundled": "^1.1.1", + "npm-normalize-package-bin": "^1.0.1" + } + }, + "npm-pick-manifest": { + "version": "6.1.1", + "resolved": "https://registry.npmjs.org/npm-pick-manifest/-/npm-pick-manifest-6.1.1.tgz", + "integrity": "sha512-dBsdBtORT84S8V8UTad1WlUyKIY9iMsAmqxHbLdeEeBNMLQDlDWWra3wYUx9EBEIiG/YwAy0XyNHDd2goAsfuA==", + "dev": true, + "requires": { + "npm-install-checks": "^4.0.0", + "npm-normalize-package-bin": "^1.0.1", + "npm-package-arg": "^8.1.2", + "semver": "^7.3.4" + }, + "dependencies": { + "lru-cache": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/lru-cache/-/lru-cache-6.0.0.tgz", + "integrity": "sha512-Jo6dJ04CmSjuznwJSS3pUeWmd/H0ffTlkXXgwZi+eq1UCmqQwCh+eLsYOYCwY991i2Fah4h1BEMCx4qThGbsiA==", + "dev": true, + "requires": { + "yallist": "^4.0.0" + } + }, + "semver": { + "version": "7.3.5", + "resolved": 
"https://registry.npmjs.org/semver/-/semver-7.3.5.tgz", + "integrity": "sha512-PoeGJYh8HK4BTO/a9Tf6ZG3veo/A7ZVsYrSA6J8ny9nb3B1VrpkuN+z9OE5wfE5p6H4LchYZsegiQgbJD94ZFQ==", + "dev": true, + "requires": { + "lru-cache": "^6.0.0" + } + }, + "yallist": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/yallist/-/yallist-4.0.0.tgz", + "integrity": "sha512-3wdGidZyq5PB084XLES5TpOSRA3wjXAlIWMhum2kRcv/41Sn2emQ0dycQW4uZXLejwKvg6EsvbdlVL+FYEct7A==", + "dev": true + } + } + }, + "npm-registry-fetch": { + "version": "10.1.2", + "resolved": "https://registry.npmjs.org/npm-registry-fetch/-/npm-registry-fetch-10.1.2.tgz", + "integrity": "sha512-KsM/TdPmntqgBFlfsbkOLkkE9ovZo7VpVcd+/eTdYszCrgy5zFl5JzWm+OxavFaEWlbkirpkou+ZYI00RmOBFA==", + "dev": true, + "requires": { + "lru-cache": "^6.0.0", + "make-fetch-happen": "^8.0.9", + "minipass": "^3.1.3", + "minipass-fetch": "^1.3.0", + "minipass-json-stream": "^1.0.1", + "minizlib": "^2.0.0", + "npm-package-arg": "^8.0.0" + }, + "dependencies": { + "lru-cache": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/lru-cache/-/lru-cache-6.0.0.tgz", + "integrity": "sha512-Jo6dJ04CmSjuznwJSS3pUeWmd/H0ffTlkXXgwZi+eq1UCmqQwCh+eLsYOYCwY991i2Fah4h1BEMCx4qThGbsiA==", + "dev": true, + "requires": { + "yallist": "^4.0.0" + } + }, + "yallist": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/yallist/-/yallist-4.0.0.tgz", + "integrity": "sha512-3wdGidZyq5PB084XLES5TpOSRA3wjXAlIWMhum2kRcv/41Sn2emQ0dycQW4uZXLejwKvg6EsvbdlVL+FYEct7A==", + "dev": true + } + } + }, + "npm-run-path": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/npm-run-path/-/npm-run-path-4.0.1.tgz", + "integrity": "sha512-S48WzZW777zhNIrn7gxOlISNAqi9ZC/uQFnRdbeIHhZhCA6UqpkOT8T1G7BvfdgP4Er8gF4sUbaS0i7QvIfCWw==", + "dev": true, + "requires": { + "path-key": "^3.0.0" + } + }, + "npmlog": { + "version": "4.1.2", + "resolved": "https://registry.npmjs.org/npmlog/-/npmlog-4.1.2.tgz", + "integrity": 
"sha512-2uUqazuKlTaSI/dC8AzicUck7+IrEaOnN/e0jd3Xtt1KcGpwx30v50mL7oPyr/h9bL3E4aZccVwpwP+5W9Vjkg==", + "dev": true, + "requires": { + "are-we-there-yet": "~1.1.2", + "console-control-strings": "~1.1.0", + "gauge": "~2.7.3", + "set-blocking": "~2.0.0" + } + }, + "nprogress": { + "version": "0.2.0", + "resolved": "https://registry.npmjs.org/nprogress/-/nprogress-0.2.0.tgz", + "integrity": "sha1-y480xTIT2JVyP8urkH6UIq28r7E=", + "dev": true + }, + "nth-check": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/nth-check/-/nth-check-1.0.2.tgz", + "integrity": "sha512-WeBOdju8SnzPN5vTUJYxYUxLeXpCaVP5i5e0LF8fg7WORF2Wd7wFX/pk0tYZk7s8T+J7VLy0Da6J1+wCT0AtHg==", + "dev": true, + "requires": { + "boolbase": "~1.0.0" + } + }, + "num2fraction": { + "version": "1.2.2", + "resolved": "https://registry.npmjs.org/num2fraction/-/num2fraction-1.2.2.tgz", + "integrity": "sha1-b2gragJ6Tp3fpFZM0lidHU5mnt4=", + "dev": true + }, + "number-is-nan": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/number-is-nan/-/number-is-nan-1.0.1.tgz", + "integrity": "sha1-CXtgK1NCKlIsGvuHkDGDNpQaAR0=", + "dev": true + }, + "oauth-sign": { + "version": "0.9.0", + "resolved": "https://registry.npmjs.org/oauth-sign/-/oauth-sign-0.9.0.tgz", + "integrity": "sha512-fexhUFFPTGV8ybAtSIGbV6gOkSv8UtRbDBnAyLQw4QPKkgNlsH2ByPGtMUqdWkos6YCRmAqViwgZrJc/mRDzZQ==", + "dev": true + }, + "object-assign": { + "version": "4.1.1", + "resolved": "https://registry.npmjs.org/object-assign/-/object-assign-4.1.1.tgz", + "integrity": "sha1-IQmtx5ZYh8/AXLvUQsrIv7s2CGM=", + "dev": true + }, + "object-copy": { + "version": "0.1.0", + "resolved": "https://registry.npmjs.org/object-copy/-/object-copy-0.1.0.tgz", + "integrity": "sha1-fn2Fi3gb18mRpBupde04EnVOmYw=", + "dev": true, + "requires": { + "copy-descriptor": "^0.1.0", + "define-property": "^0.2.5", + "kind-of": "^3.0.3" + }, + "dependencies": { + "define-property": { + "version": "0.2.5", + "resolved": 
"https://registry.npmjs.org/define-property/-/define-property-0.2.5.tgz", + "integrity": "sha1-w1se+RjsPJkPmlvFe+BKrOxcgRY=", + "dev": true, + "requires": { + "is-descriptor": "^0.1.0" + } + }, + "kind-of": { + "version": "3.2.2", + "resolved": "https://registry.npmjs.org/kind-of/-/kind-of-3.2.2.tgz", + "integrity": "sha1-MeohpzS6ubuw8yRm2JOupR5KPGQ=", + "dev": true, + "requires": { + "is-buffer": "^1.1.5" + } + } + } + }, + "object-hash": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/object-hash/-/object-hash-2.1.1.tgz", + "integrity": "sha512-VOJmgmS+7wvXf8CjbQmimtCnEx3IAoLxI3fp2fbWehxrWBcAQFbk+vcwb6vzR0VZv/eNCJ/27j151ZTwqW/JeQ==", + "dev": true + }, + "object-inspect": { + "version": "1.10.3", + "resolved": "https://registry.npmjs.org/object-inspect/-/object-inspect-1.10.3.tgz", + "integrity": "sha512-e5mCJlSH7poANfC8z8S9s9S2IN5/4Zb3aZ33f5s8YqoazCFzNLloLU8r5VCG+G7WoqLvAAZoVMcy3tp/3X0Plw==", + "dev": true + }, + "object-is": { + "version": "1.1.5", + "resolved": "https://registry.npmjs.org/object-is/-/object-is-1.1.5.tgz", + "integrity": "sha512-3cyDsyHgtmi7I7DfSSI2LDp6SK2lwvtbg0p0R1e0RvTqF5ceGx+K2dfSjm1bKDMVCFEDAQvy+o8c6a7VujOddw==", + "dev": true, + "requires": { + "call-bind": "^1.0.2", + "define-properties": "^1.1.3" + } + }, + "object-keys": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/object-keys/-/object-keys-1.1.1.tgz", + "integrity": "sha512-NuAESUOUMrlIXOfHKzD6bpPu3tYt3xvjNdRIQ+FeT0lNb4K8WR70CaDxhuNguS2XG+GjkyMwOzsN5ZktImfhLA==", + "dev": true + }, + "object-visit": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/object-visit/-/object-visit-1.0.1.tgz", + "integrity": "sha1-95xEk68MU3e1n+OdOV5BBC3QRbs=", + "dev": true, + "requires": { + "isobject": "^3.0.0" + } + }, + "object.assign": { + "version": "4.1.2", + "resolved": "https://registry.npmjs.org/object.assign/-/object.assign-4.1.2.tgz", + "integrity": 
"sha512-ixT2L5THXsApyiUPYKmW+2EHpXXe5Ii3M+f4e+aJFAHao5amFRW6J0OO6c/LU8Be47utCx2GL89hxGB6XSmKuQ==", + "dev": true, + "requires": { + "call-bind": "^1.0.0", + "define-properties": "^1.1.3", + "has-symbols": "^1.0.1", + "object-keys": "^1.1.1" + } + }, + "object.getownpropertydescriptors": { + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/object.getownpropertydescriptors/-/object.getownpropertydescriptors-2.1.2.tgz", + "integrity": "sha512-WtxeKSzfBjlzL+F9b7M7hewDzMwy+C8NRssHd1YrNlzHzIDrXcXiNOMrezdAEM4UXixgV+vvnyBeN7Rygl2ttQ==", + "dev": true, + "requires": { + "call-bind": "^1.0.2", + "define-properties": "^1.1.3", + "es-abstract": "^1.18.0-next.2" + } + }, + "object.pick": { + "version": "1.3.0", + "resolved": "https://registry.npmjs.org/object.pick/-/object.pick-1.3.0.tgz", + "integrity": "sha1-h6EKxMFpS9Lhy/U1kaZhQftd10c=", + "dev": true, + "requires": { + "isobject": "^3.0.1" + } + }, + "object.values": { + "version": "1.1.3", + "resolved": "https://registry.npmjs.org/object.values/-/object.values-1.1.3.tgz", + "integrity": "sha512-nkF6PfDB9alkOUxpf1HNm/QlkeW3SReqL5WXeBLpEJJnlPSvRaDQpW3gQTksTN3fgJX4hL42RzKyOin6ff3tyw==", + "dev": true, + "requires": { + "call-bind": "^1.0.2", + "define-properties": "^1.1.3", + "es-abstract": "^1.18.0-next.2", + "has": "^1.0.3" + } + }, + "obuf": { + "version": "1.1.2", + "resolved": "https://registry.npmjs.org/obuf/-/obuf-1.1.2.tgz", + "integrity": "sha512-PX1wu0AmAdPqOL1mWhqmlOd8kOIZQwGZw6rh7uby9fTc5lhaOWFLX3I6R1hrF9k3zUY40e6igsLGkDXK92LJNg==", + "dev": true + }, + "on-finished": { + "version": "2.3.0", + "resolved": "https://registry.npmjs.org/on-finished/-/on-finished-2.3.0.tgz", + "integrity": "sha1-IPEzZIGwg811M3mSoWlxqi2QaUc=", + "dev": true, + "requires": { + "ee-first": "1.1.1" + } + }, + "on-headers": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/on-headers/-/on-headers-1.0.2.tgz", + "integrity": 
"sha512-pZAE+FJLoyITytdqK0U5s+FIpjN0JP3OzFi/u8Rx+EV5/W+JTWGXG8xFzevE7AjBfDqHv/8vL8qQsIhHnqRkrA==", + "dev": true + }, + "once": { + "version": "1.4.0", + "resolved": "https://registry.npmjs.org/once/-/once-1.4.0.tgz", + "integrity": "sha1-WDsap3WWHUsROsF9nFC6753Xa9E=", + "dev": true, + "requires": { + "wrappy": "1" + } + }, + "onetime": { + "version": "5.1.2", + "resolved": "https://registry.npmjs.org/onetime/-/onetime-5.1.2.tgz", + "integrity": "sha512-kbpaSSGJTWdAY5KPVeMOKXSrPtr8C8C7wodJbcsd51jRnmD+GZu8Y0VoU6Dm5Z4vWr0Ig/1NKuWRKf7j5aaYSg==", + "dev": true, + "requires": { + "mimic-fn": "^2.1.0" + } + }, + "opencollective-postinstall": { + "version": "2.0.3", + "resolved": "https://registry.npmjs.org/opencollective-postinstall/-/opencollective-postinstall-2.0.3.tgz", + "integrity": "sha512-8AV/sCtuzUeTo8gQK5qDZzARrulB3egtLzFgteqB2tcT4Mw7B8Kt7JcDHmltjz6FOAHsvTevk70gZEbhM4ZS9Q==", + "dev": true + }, + "opn": { + "version": "5.5.0", + "resolved": "https://registry.npmjs.org/opn/-/opn-5.5.0.tgz", + "integrity": "sha512-PqHpggC9bLV0VeWcdKhkpxY+3JTzetLSqTCWL/z/tFIbI6G8JCjondXklT1JinczLz2Xib62sSp0T/gKT4KksA==", + "dev": true, + "requires": { + "is-wsl": "^1.1.0" + } + }, + "optimize-css-assets-webpack-plugin": { + "version": "5.0.6", + "resolved": "https://registry.npmjs.org/optimize-css-assets-webpack-plugin/-/optimize-css-assets-webpack-plugin-5.0.6.tgz", + "integrity": "sha512-JAYw7WrIAIuHWoKeSBB3lJ6ZG9PSDK3JJduv/FMpIY060wvbA8Lqn/TCtxNGICNlg0X5AGshLzIhpYrkltdq+A==", + "dev": true, + "requires": { + "cssnano": "^4.1.10", + "last-call-webpack-plugin": "^3.0.0" + } + }, + "original": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/original/-/original-1.0.2.tgz", + "integrity": "sha512-hyBVl6iqqUOJ8FqRe+l/gS8H+kKYjrEndd5Pm1MfBtsEKA038HkkdbAl/72EAXGyonD/PFsvmVG+EvcIpliMBg==", + "dev": true, + "requires": { + "url-parse": "^1.4.3" + } + }, + "os-browserify": { + "version": "0.3.0", + "resolved": 
"https://registry.npmjs.org/os-browserify/-/os-browserify-0.3.0.tgz", + "integrity": "sha1-hUNzx/XCMVkU/Jv8a9gjj92h7Cc=", + "dev": true + }, + "p-cancelable": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/p-cancelable/-/p-cancelable-1.1.0.tgz", + "integrity": "sha512-s73XxOZ4zpt1edZYZzvhqFa6uvQc1vwUa0K0BdtIZgQMAJj9IbebH+JkgKZc9h+B05PKHLOTl4ajG1BmNrVZlw==", + "dev": true + }, + "p-defer": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/p-defer/-/p-defer-1.0.0.tgz", + "integrity": "sha1-n26xgvbJqozXQwBKfU+WsZaw+ww=", + "dev": true + }, + "p-finally": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/p-finally/-/p-finally-1.0.0.tgz", + "integrity": "sha1-P7z7FbiZpEEjs0ttzBi3JDNqLK4=", + "dev": true + }, + "p-limit": { + "version": "2.3.0", + "resolved": "https://registry.npmjs.org/p-limit/-/p-limit-2.3.0.tgz", + "integrity": "sha512-//88mFWSJx8lxCzwdAABTJL2MyWB12+eIY7MDL2SqLmAkeKU9qxRvWuSyTjm3FUmpBEMuFfckAIqEaVGUDxb6w==", + "dev": true, + "requires": { + "p-try": "^2.0.0" + } + }, + "p-locate": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/p-locate/-/p-locate-4.1.0.tgz", + "integrity": "sha512-R79ZZ/0wAxKGu3oYMlz8jy/kbhsNrS7SKZ7PxEHBgJ5+F2mtFW2fK2cOtBh1cHYkQsbzFV7I+EoRKe6Yt0oK7A==", + "dev": true, + "requires": { + "p-limit": "^2.2.0" + } + }, + "p-map": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/p-map/-/p-map-2.1.0.tgz", + "integrity": "sha512-y3b8Kpd8OAN444hxfBbFfj1FY/RjtTd8tzYwhUqNYXx0fXx2iX4maP4Qr6qhIKbQXI02wTLAda4fYUbDagTUFw==", + "dev": true + }, + "p-retry": { + "version": "3.0.1", + "resolved": "https://registry.npmjs.org/p-retry/-/p-retry-3.0.1.tgz", + "integrity": "sha512-XE6G4+YTTkT2a0UWb2kjZe8xNwf8bIbnqpc/IS/idOBVhyves0mK5OJgeocjx7q5pvX/6m23xuzVPYT1uGM73w==", + "dev": true, + "requires": { + "retry": "^0.12.0" + } + }, + "p-try": { + "version": "2.2.0", + "resolved": "https://registry.npmjs.org/p-try/-/p-try-2.2.0.tgz", + "integrity": 
"sha512-R4nPAVTAU0B9D35/Gk3uJf/7XYbQcyohSKdvAxIRSNghFl4e71hVoGnBNQz9cWaXxO2I10KTC+3jMdvvoKw6dQ==", + "dev": true + }, + "package-json": { + "version": "6.5.0", + "resolved": "https://registry.npmjs.org/package-json/-/package-json-6.5.0.tgz", + "integrity": "sha512-k3bdm2n25tkyxcjSKzB5x8kfVxlMdgsbPr0GkZcwHsLpba6cBjqCt1KlcChKEvxHIcTB1FVMuwoijZ26xex5MQ==", + "dev": true, + "requires": { + "got": "^9.6.0", + "registry-auth-token": "^4.0.0", + "registry-url": "^5.0.0", + "semver": "^6.2.0" + } + }, + "pacote": { + "version": "11.3.3", + "resolved": "https://registry.npmjs.org/pacote/-/pacote-11.3.3.tgz", + "integrity": "sha512-GQxBX+UcVZrrJRYMK2HoG+gPeSUX/rQhnbPkkGrCYa4n2F/bgClFPaMm0nsdnYrxnmUy85uMHoFXZ0jTD0drew==", + "dev": true, + "requires": { + "@npmcli/git": "^2.0.1", + "@npmcli/installed-package-contents": "^1.0.6", + "@npmcli/promise-spawn": "^1.2.0", + "@npmcli/run-script": "^1.8.2", + "cacache": "^15.0.5", + "chownr": "^2.0.0", + "fs-minipass": "^2.1.0", + "infer-owner": "^1.0.4", + "minipass": "^3.1.3", + "mkdirp": "^1.0.3", + "npm-package-arg": "^8.0.1", + "npm-packlist": "^2.1.4", + "npm-pick-manifest": "^6.0.0", + "npm-registry-fetch": "^10.0.0", + "promise-retry": "^2.0.1", + "read-package-json-fast": "^2.0.1", + "rimraf": "^3.0.2", + "ssri": "^8.0.1", + "tar": "^6.1.0" + }, + "dependencies": { + "cacache": { + "version": "15.1.0", + "resolved": "https://registry.npmjs.org/cacache/-/cacache-15.1.0.tgz", + "integrity": "sha512-mfx0C+mCfWjD1PnwQ9yaOrwG1ou9FkKnx0SvzUHWdFt7r7GaRtzT+9M8HAvLu62zIHtnpQ/1m93nWNDCckJGXQ==", + "dev": true, + "requires": { + "@npmcli/move-file": "^1.0.1", + "chownr": "^2.0.0", + "fs-minipass": "^2.0.0", + "glob": "^7.1.4", + "infer-owner": "^1.0.4", + "lru-cache": "^6.0.0", + "minipass": "^3.1.1", + "minipass-collect": "^1.0.2", + "minipass-flush": "^1.0.5", + "minipass-pipeline": "^1.2.2", + "mkdirp": "^1.0.3", + "p-map": "^4.0.0", + "promise-inflight": "^1.0.1", + "rimraf": "^3.0.2", + "ssri": "^8.0.1", + "tar": "^6.0.2", + 
"unique-filename": "^1.1.1" + } + }, + "chownr": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/chownr/-/chownr-2.0.0.tgz", + "integrity": "sha512-bIomtDF5KGpdogkLd9VspvFzk9KfpyyGlS8YFVZl7TGPBHL5snIOnxeshwVgPteQ9b4Eydl+pVbIyE1DcvCWgQ==", + "dev": true + }, + "lru-cache": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/lru-cache/-/lru-cache-6.0.0.tgz", + "integrity": "sha512-Jo6dJ04CmSjuznwJSS3pUeWmd/H0ffTlkXXgwZi+eq1UCmqQwCh+eLsYOYCwY991i2Fah4h1BEMCx4qThGbsiA==", + "dev": true, + "requires": { + "yallist": "^4.0.0" + } + }, + "mkdirp": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/mkdirp/-/mkdirp-1.0.4.tgz", + "integrity": "sha512-vVqVZQyf3WLx2Shd0qJ9xuvqgAyKPLAiqITEtqW0oIUjzo3PePDd6fW9iFz30ef7Ysp/oiWqbhszeGWW2T6Gzw==", + "dev": true + }, + "p-map": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/p-map/-/p-map-4.0.0.tgz", + "integrity": "sha512-/bjOqmgETBYB5BoEeGVea8dmvHb2m9GLy1E9W43yeyfP6QQCZGFNa+XRceJEuDB6zqr+gKpIAmlLebMpykw/MQ==", + "dev": true, + "requires": { + "aggregate-error": "^3.0.0" + } + }, + "rimraf": { + "version": "3.0.2", + "resolved": "https://registry.npmjs.org/rimraf/-/rimraf-3.0.2.tgz", + "integrity": "sha512-JZkJMZkAGFFPP2YqXZXPbMlMBgsxzE8ILs4lMIX/2o0L9UBw9O/Y3o6wFw/i9YLapcUJWwqbi3kdxIPdC62TIA==", + "dev": true, + "requires": { + "glob": "^7.1.3" + } + }, + "ssri": { + "version": "8.0.1", + "resolved": "https://registry.npmjs.org/ssri/-/ssri-8.0.1.tgz", + "integrity": "sha512-97qShzy1AiyxvPNIkLWoGua7xoQzzPjQ0HAH4B0rWKo7SZ6USuPcrUiAFrws0UH8RrbWmgq3LMTObhPIHbbBeQ==", + "dev": true, + "requires": { + "minipass": "^3.1.1" + } + }, + "yallist": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/yallist/-/yallist-4.0.0.tgz", + "integrity": "sha512-3wdGidZyq5PB084XLES5TpOSRA3wjXAlIWMhum2kRcv/41Sn2emQ0dycQW4uZXLejwKvg6EsvbdlVL+FYEct7A==", + "dev": true + } + } + }, + "pako": { + "version": "1.0.11", + "resolved": 
"https://registry.npmjs.org/pako/-/pako-1.0.11.tgz", + "integrity": "sha512-4hLB8Py4zZce5s4yd9XzopqwVv/yGNhV1Bl8NTmCq1763HeK2+EwVTv+leGeL13Dnh2wfbqowVPXCIO0z4taYw==", + "dev": true + }, + "parallel-transform": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/parallel-transform/-/parallel-transform-1.2.0.tgz", + "integrity": "sha512-P2vSmIu38uIlvdcU7fDkyrxj33gTUy/ABO5ZUbGowxNCopBq/OoD42bP4UmMrJoPyk4Uqf0mu3mtWBhHCZD8yg==", + "dev": true, + "requires": { + "cyclist": "^1.0.1", + "inherits": "^2.0.3", + "readable-stream": "^2.1.5" + } + }, + "param-case": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/param-case/-/param-case-2.1.1.tgz", + "integrity": "sha1-35T9jPZTHs915r75oIWPvHK+Ikc=", + "dev": true, + "requires": { + "no-case": "^2.2.0" + } + }, + "parse-asn1": { + "version": "5.1.6", + "resolved": "https://registry.npmjs.org/parse-asn1/-/parse-asn1-5.1.6.tgz", + "integrity": "sha512-RnZRo1EPU6JBnra2vGHj0yhp6ebyjBZpmUCLHWiFhxlzvBCCpAuZ7elsBp1PVAbQN0/04VD/19rfzlBSwLstMw==", + "dev": true, + "requires": { + "asn1.js": "^5.2.0", + "browserify-aes": "^1.0.0", + "evp_bytestokey": "^1.0.0", + "pbkdf2": "^3.0.3", + "safe-buffer": "^5.1.1" + } + }, + "parse-github-url": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/parse-github-url/-/parse-github-url-1.0.2.tgz", + "integrity": "sha512-kgBf6avCbO3Cn6+RnzRGLkUsv4ZVqv/VfAYkRsyBcgkshNvVBkRn1FEZcW0Jb+npXQWm2vHPnnOqFteZxRRGNw==", + "dev": true + }, + "parse-json": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/parse-json/-/parse-json-4.0.0.tgz", + "integrity": "sha1-vjX1Qlvh9/bHRxhPmKeIy5lHfuA=", + "dev": true, + "requires": { + "error-ex": "^1.3.1", + "json-parse-better-errors": "^1.0.1" + } + }, + "parseurl": { + "version": "1.3.3", + "resolved": "https://registry.npmjs.org/parseurl/-/parseurl-1.3.3.tgz", + "integrity": "sha512-CiyeOxFT/JZyN5m0z9PfXw4SCBJ6Sygz1Dpl0wqjlhDEGGBP1GnsUVEL0p63hoG1fcj3fHynXi9NYO4nWOL+qQ==", + "dev": true + }, + "pascalcase": { 
+ "version": "0.1.1", + "resolved": "https://registry.npmjs.org/pascalcase/-/pascalcase-0.1.1.tgz", + "integrity": "sha1-s2PlXoAGym/iF4TS2yK9FdeRfxQ=", + "dev": true + }, + "path-browserify": { + "version": "0.0.1", + "resolved": "https://registry.npmjs.org/path-browserify/-/path-browserify-0.0.1.tgz", + "integrity": "sha512-BapA40NHICOS+USX9SN4tyhq+A2RrN/Ws5F0Z5aMHDp98Fl86lX8Oti8B7uN93L4Ifv4fHOEA+pQw87gmMO/lQ==", + "dev": true + }, + "path-dirname": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/path-dirname/-/path-dirname-1.0.2.tgz", + "integrity": "sha1-zDPSTVJeCZpTiMAzbG4yuRYGCeA=", + "dev": true + }, + "path-exists": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/path-exists/-/path-exists-4.0.0.tgz", + "integrity": "sha512-ak9Qy5Q7jYb2Wwcey5Fpvg2KoAc/ZIhLSLOSBmRmygPsGwkVVt0fZa0qrtMz+m6tJTAHfZQ8FnmB4MG4LWy7/w==", + "dev": true + }, + "path-is-absolute": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/path-is-absolute/-/path-is-absolute-1.0.1.tgz", + "integrity": "sha1-F0uSaHNVNP+8es5r9TpanhtcX18=", + "dev": true + }, + "path-is-inside": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/path-is-inside/-/path-is-inside-1.0.2.tgz", + "integrity": "sha1-NlQX3t5EQw0cEa9hAn+s8HS9/FM=", + "dev": true + }, + "path-key": { + "version": "3.1.1", + "resolved": "https://registry.npmjs.org/path-key/-/path-key-3.1.1.tgz", + "integrity": "sha512-ojmeN0qd+y0jszEtoY48r0Peq5dwMEkIlCOu6Q5f41lfkswXuKtYrhgoTpLnyIcHm24Uhqx+5Tqm2InSwLhE6Q==", + "dev": true + }, + "path-parse": { + "version": "1.0.7", + "resolved": "https://registry.npmjs.org/path-parse/-/path-parse-1.0.7.tgz", + "integrity": "sha512-LDJzPVEEEPR+y48z93A0Ed0yXb8pAByGWo/k5YYdYgpY2/2EsOsksJrq7lOHxryrVOn1ejG6oAp8ahvOIQD8sw==", + "dev": true + }, + "path-to-regexp": { + "version": "0.1.7", + "resolved": "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-0.1.7.tgz", + "integrity": "sha1-32BBeABfUi8V60SQ5yR6G/qmf4w=", + "dev": true + }, + 
"path-type": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/path-type/-/path-type-3.0.0.tgz", + "integrity": "sha512-T2ZUsdZFHgA3u4e5PfPbjd7HDDpxPnQb5jN0SrDsjNSuVXHJqtwTnWqG0B1jZrgmJ/7lj1EmVIByWt1gxGkWvg==", + "dev": true, + "requires": { + "pify": "^3.0.0" + }, + "dependencies": { + "pify": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/pify/-/pify-3.0.0.tgz", + "integrity": "sha1-5aSs0sEB/fPZpNB/DbxNtJ3SgXY=", + "dev": true + } + } + }, + "pause-stream": { + "version": "0.0.11", + "resolved": "https://registry.npmjs.org/pause-stream/-/pause-stream-0.0.11.tgz", + "integrity": "sha1-/lo0sMvOErWqaitAPuLnO2AvFEU=", + "dev": true, + "requires": { + "through": "~2.3" + } + }, + "pbkdf2": { + "version": "3.1.2", + "resolved": "https://registry.npmjs.org/pbkdf2/-/pbkdf2-3.1.2.tgz", + "integrity": "sha512-iuh7L6jA7JEGu2WxDwtQP1ddOpaJNC4KlDEFfdQajSGgGPNi4OyDc2R7QnbY2bR9QjBVGwgvTdNJZoE7RaxUMA==", + "dev": true, + "requires": { + "create-hash": "^1.1.2", + "create-hmac": "^1.1.4", + "ripemd160": "^2.0.1", + "safe-buffer": "^5.0.1", + "sha.js": "^2.4.8" + } + }, + "performance-now": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/performance-now/-/performance-now-2.1.0.tgz", + "integrity": "sha1-Ywn04OX6kT7BxpMHrjZLSzd8nns=", + "dev": true + }, + "picomatch": { + "version": "2.3.0", + "resolved": "https://registry.npmjs.org/picomatch/-/picomatch-2.3.0.tgz", + "integrity": "sha512-lY1Q/PiJGC2zOv/z391WOTD+Z02bCgsFfvxoXXf6h7kv9o+WmsmzYqrAwY63sNgOxE4xEdq0WyUnXfKeBrSvYw==", + "dev": true + }, + "pify": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/pify/-/pify-4.0.1.tgz", + "integrity": "sha512-uB80kBFb/tfd68bVleG9T5GGsGPjJrLAUpR5PZIrhBnIaRTQRjqdJSsIKkOP6OAIFbj7GOrcudc5pNjZ+geV2g==", + "dev": true + }, + "pinkie": { + "version": "2.0.4", + "resolved": "https://registry.npmjs.org/pinkie/-/pinkie-2.0.4.tgz", + "integrity": "sha1-clVrgM+g1IqXToDnckjoDtT3+HA=", + "dev": true + }, + "pinkie-promise": { + 
"version": "2.0.1", + "resolved": "https://registry.npmjs.org/pinkie-promise/-/pinkie-promise-2.0.1.tgz", + "integrity": "sha1-ITXW36ejWMBprJsXh3YogihFD/o=", + "dev": true, + "requires": { + "pinkie": "^2.0.0" + } + }, + "pkg-dir": { + "version": "4.2.0", + "resolved": "https://registry.npmjs.org/pkg-dir/-/pkg-dir-4.2.0.tgz", + "integrity": "sha512-HRDzbaKjC+AOWVXxAU/x54COGeIv9eb+6CkDSQoNTt4XyWoIJvuPsXizxu/Fr23EiekbtZwmh1IcIG/l/a10GQ==", + "dev": true, + "requires": { + "find-up": "^4.0.0" + } + }, + "portfinder": { + "version": "1.0.28", + "resolved": "https://registry.npmjs.org/portfinder/-/portfinder-1.0.28.tgz", + "integrity": "sha512-Se+2isanIcEqf2XMHjyUKskczxbPH7dQnlMjXX6+dybayyHvAf/TCgyMRlzf/B6QDhAEFOGes0pzRo3by4AbMA==", + "dev": true, + "requires": { + "async": "^2.6.2", + "debug": "^3.1.1", + "mkdirp": "^0.5.5" + }, + "dependencies": { + "debug": { + "version": "3.2.7", + "resolved": "https://registry.npmjs.org/debug/-/debug-3.2.7.tgz", + "integrity": "sha512-CFjzYYAi4ThfiQvizrFQevTTXHtnCqWfe7x1AhgEscTz6ZbLbfoLRLPugTQyBth6f8ZERVUSyWHFD/7Wu4t1XQ==", + "dev": true, + "requires": { + "ms": "^2.1.1" + } + } + } + }, + "posix-character-classes": { + "version": "0.1.1", + "resolved": "https://registry.npmjs.org/posix-character-classes/-/posix-character-classes-0.1.1.tgz", + "integrity": "sha1-AerA/jta9xoqbAL+q7jB/vfgDqs=", + "dev": true + }, + "postcss": { + "version": "7.0.35", + "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.35.tgz", + "integrity": "sha512-3QT8bBJeX/S5zKTTjTCIjRF3If4avAT6kqxcASlTWEtAFCb9NH0OUxNDfgZSWdP5fJnBYCMEWkIFfWeugjzYMg==", + "dev": true, + "requires": { + "chalk": "^2.4.2", + "source-map": "^0.6.1", + "supports-color": "^6.1.0" + }, + "dependencies": { + "chalk": { + "version": "2.4.2", + "resolved": "https://registry.npmjs.org/chalk/-/chalk-2.4.2.tgz", + "integrity": "sha512-Mti+f9lpJNcwF4tWV8/OrTTtF1gZi+f8FqlyAdouralcFWFQWF2+NgCHShjkCb+IFBLq9buZwE1xckQU4peSuQ==", + "dev": true, + "requires": { + "ansi-styles": "^3.2.1", 
+ "escape-string-regexp": "^1.0.5", + "supports-color": "^5.3.0" + }, + "dependencies": { + "supports-color": { + "version": "5.5.0", + "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-5.5.0.tgz", + "integrity": "sha512-QjVjwdXIt408MIiAqCX4oUKsgU2EqAGzs2Ppkm4aQYbjm+ZEWEcW4SfFNTr4uMNZma0ey4f5lgLrkB0aX0QMow==", + "dev": true, + "requires": { + "has-flag": "^3.0.0" + } + } + } + } + } + }, + "postcss-calc": { + "version": "7.0.5", + "resolved": "https://registry.npmjs.org/postcss-calc/-/postcss-calc-7.0.5.tgz", + "integrity": "sha512-1tKHutbGtLtEZF6PT4JSihCHfIVldU72mZ8SdZHIYriIZ9fh9k9aWSppaT8rHsyI3dX+KSR+W+Ix9BMY3AODrg==", + "dev": true, + "requires": { + "postcss": "^7.0.27", + "postcss-selector-parser": "^6.0.2", + "postcss-value-parser": "^4.0.2" + } + }, + "postcss-colormin": { + "version": "4.0.3", + "resolved": "https://registry.npmjs.org/postcss-colormin/-/postcss-colormin-4.0.3.tgz", + "integrity": "sha512-WyQFAdDZpExQh32j0U0feWisZ0dmOtPl44qYmJKkq9xFWY3p+4qnRzCHeNrkeRhwPHz9bQ3mo0/yVkaply0MNw==", + "dev": true, + "requires": { + "browserslist": "^4.0.0", + "color": "^3.0.0", + "has": "^1.0.0", + "postcss": "^7.0.0", + "postcss-value-parser": "^3.0.0" + }, + "dependencies": { + "postcss-value-parser": { + "version": "3.3.1", + "resolved": "https://registry.npmjs.org/postcss-value-parser/-/postcss-value-parser-3.3.1.tgz", + "integrity": "sha512-pISE66AbVkp4fDQ7VHBwRNXzAAKJjw4Vw7nWI/+Q3vuly7SNfgYXvm6i5IgFylHGK5sP/xHAbB7N49OS4gWNyQ==", + "dev": true + } + } + }, + "postcss-convert-values": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/postcss-convert-values/-/postcss-convert-values-4.0.1.tgz", + "integrity": "sha512-Kisdo1y77KUC0Jmn0OXU/COOJbzM8cImvw1ZFsBgBgMgb1iL23Zs/LXRe3r+EZqM3vGYKdQ2YJVQ5VkJI+zEJQ==", + "dev": true, + "requires": { + "postcss": "^7.0.0", + "postcss-value-parser": "^3.0.0" + }, + "dependencies": { + "postcss-value-parser": { + "version": "3.3.1", + "resolved": 
"https://registry.npmjs.org/postcss-value-parser/-/postcss-value-parser-3.3.1.tgz", + "integrity": "sha512-pISE66AbVkp4fDQ7VHBwRNXzAAKJjw4Vw7nWI/+Q3vuly7SNfgYXvm6i5IgFylHGK5sP/xHAbB7N49OS4gWNyQ==", + "dev": true + } + } + }, + "postcss-discard-comments": { + "version": "4.0.2", + "resolved": "https://registry.npmjs.org/postcss-discard-comments/-/postcss-discard-comments-4.0.2.tgz", + "integrity": "sha512-RJutN259iuRf3IW7GZyLM5Sw4GLTOH8FmsXBnv8Ab/Tc2k4SR4qbV4DNbyyY4+Sjo362SyDmW2DQ7lBSChrpkg==", + "dev": true, + "requires": { + "postcss": "^7.0.0" + } + }, + "postcss-discard-duplicates": { + "version": "4.0.2", + "resolved": "https://registry.npmjs.org/postcss-discard-duplicates/-/postcss-discard-duplicates-4.0.2.tgz", + "integrity": "sha512-ZNQfR1gPNAiXZhgENFfEglF93pciw0WxMkJeVmw8eF+JZBbMD7jp6C67GqJAXVZP2BWbOztKfbsdmMp/k8c6oQ==", + "dev": true, + "requires": { + "postcss": "^7.0.0" + } + }, + "postcss-discard-empty": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/postcss-discard-empty/-/postcss-discard-empty-4.0.1.tgz", + "integrity": "sha512-B9miTzbznhDjTfjvipfHoqbWKwd0Mj+/fL5s1QOz06wufguil+Xheo4XpOnc4NqKYBCNqqEzgPv2aPBIJLox0w==", + "dev": true, + "requires": { + "postcss": "^7.0.0" + } + }, + "postcss-discard-overridden": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/postcss-discard-overridden/-/postcss-discard-overridden-4.0.1.tgz", + "integrity": "sha512-IYY2bEDD7g1XM1IDEsUT4//iEYCxAmP5oDSFMVU/JVvT7gh+l4fmjciLqGgwjdWpQIdb0Che2VX00QObS5+cTg==", + "dev": true, + "requires": { + "postcss": "^7.0.0" + } + }, + "postcss-functions": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/postcss-functions/-/postcss-functions-3.0.0.tgz", + "integrity": "sha1-DpTQFERwCkgd4g3k1V+yZAVkJQ4=", + "dev": true, + "requires": { + "glob": "^7.1.2", + "object-assign": "^4.1.1", + "postcss": "^6.0.9", + "postcss-value-parser": "^3.3.0" + }, + "dependencies": { + "chalk": { + "version": "2.4.2", + "resolved": 
"https://registry.npmjs.org/chalk/-/chalk-2.4.2.tgz", + "integrity": "sha512-Mti+f9lpJNcwF4tWV8/OrTTtF1gZi+f8FqlyAdouralcFWFQWF2+NgCHShjkCb+IFBLq9buZwE1xckQU4peSuQ==", + "dev": true, + "requires": { + "ansi-styles": "^3.2.1", + "escape-string-regexp": "^1.0.5", + "supports-color": "^5.3.0" + } + }, + "postcss": { + "version": "6.0.23", + "resolved": "https://registry.npmjs.org/postcss/-/postcss-6.0.23.tgz", + "integrity": "sha512-soOk1h6J3VMTZtVeVpv15/Hpdl2cBLX3CAw4TAbkpTJiNPk9YP/zWcD1ND+xEtvyuuvKzbxliTOIyvkSeSJ6ag==", + "dev": true, + "requires": { + "chalk": "^2.4.1", + "source-map": "^0.6.1", + "supports-color": "^5.4.0" + } + }, + "postcss-value-parser": { + "version": "3.3.1", + "resolved": "https://registry.npmjs.org/postcss-value-parser/-/postcss-value-parser-3.3.1.tgz", + "integrity": "sha512-pISE66AbVkp4fDQ7VHBwRNXzAAKJjw4Vw7nWI/+Q3vuly7SNfgYXvm6i5IgFylHGK5sP/xHAbB7N49OS4gWNyQ==", + "dev": true + }, + "supports-color": { + "version": "5.5.0", + "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-5.5.0.tgz", + "integrity": "sha512-QjVjwdXIt408MIiAqCX4oUKsgU2EqAGzs2Ppkm4aQYbjm+ZEWEcW4SfFNTr4uMNZma0ey4f5lgLrkB0aX0QMow==", + "dev": true, + "requires": { + "has-flag": "^3.0.0" + } + } + } + }, + "postcss-js": { + "version": "2.0.3", + "resolved": "https://registry.npmjs.org/postcss-js/-/postcss-js-2.0.3.tgz", + "integrity": "sha512-zS59pAk3deu6dVHyrGqmC3oDXBdNdajk4k1RyxeVXCrcEDBUBHoIhE4QTsmhxgzXxsaqFDAkUZfmMa5f/N/79w==", + "dev": true, + "requires": { + "camelcase-css": "^2.0.1", + "postcss": "^7.0.18" + } + }, + "postcss-load-config": { + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/postcss-load-config/-/postcss-load-config-2.1.2.tgz", + "integrity": "sha512-/rDeGV6vMUo3mwJZmeHfEDvwnTKKqQ0S7OHUi/kJvvtx3aWtyWG2/0ZWnzCt2keEclwN6Tf0DST2v9kITdOKYw==", + "dev": true, + "requires": { + "cosmiconfig": "^5.0.0", + "import-cwd": "^2.0.0" + } + }, + "postcss-loader": { + "version": "3.0.0", + "resolved": 
"https://registry.npmjs.org/postcss-loader/-/postcss-loader-3.0.0.tgz", + "integrity": "sha512-cLWoDEY5OwHcAjDnkyRQzAXfs2jrKjXpO/HQFcc5b5u/r7aa471wdmChmwfnv7x2u840iat/wi0lQ5nbRgSkUA==", + "dev": true, + "requires": { + "loader-utils": "^1.1.0", + "postcss": "^7.0.0", + "postcss-load-config": "^2.0.0", + "schema-utils": "^1.0.0" + }, + "dependencies": { + "schema-utils": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-1.0.0.tgz", + "integrity": "sha512-i27Mic4KovM/lnGsy8whRCHhc7VicJajAjTrYg11K9zfZXnYIt4k5F+kZkwjnrhKzLic/HLU4j11mjsz2G/75g==", + "dev": true, + "requires": { + "ajv": "^6.1.0", + "ajv-errors": "^1.0.0", + "ajv-keywords": "^3.1.0" + } + } + } + }, + "postcss-merge-longhand": { + "version": "4.0.11", + "resolved": "https://registry.npmjs.org/postcss-merge-longhand/-/postcss-merge-longhand-4.0.11.tgz", + "integrity": "sha512-alx/zmoeXvJjp7L4mxEMjh8lxVlDFX1gqWHzaaQewwMZiVhLo42TEClKaeHbRf6J7j82ZOdTJ808RtN0ZOZwvw==", + "dev": true, + "requires": { + "css-color-names": "0.0.4", + "postcss": "^7.0.0", + "postcss-value-parser": "^3.0.0", + "stylehacks": "^4.0.0" + }, + "dependencies": { + "postcss-value-parser": { + "version": "3.3.1", + "resolved": "https://registry.npmjs.org/postcss-value-parser/-/postcss-value-parser-3.3.1.tgz", + "integrity": "sha512-pISE66AbVkp4fDQ7VHBwRNXzAAKJjw4Vw7nWI/+Q3vuly7SNfgYXvm6i5IgFylHGK5sP/xHAbB7N49OS4gWNyQ==", + "dev": true + } + } + }, + "postcss-merge-rules": { + "version": "4.0.3", + "resolved": "https://registry.npmjs.org/postcss-merge-rules/-/postcss-merge-rules-4.0.3.tgz", + "integrity": "sha512-U7e3r1SbvYzO0Jr3UT/zKBVgYYyhAz0aitvGIYOYK5CPmkNih+WDSsS5tvPrJ8YMQYlEMvsZIiqmn7HdFUaeEQ==", + "dev": true, + "requires": { + "browserslist": "^4.0.0", + "caniuse-api": "^3.0.0", + "cssnano-util-same-parent": "^4.0.0", + "postcss": "^7.0.0", + "postcss-selector-parser": "^3.0.0", + "vendors": "^1.0.0" + }, + "dependencies": { + "postcss-selector-parser": { + "version": "3.1.2", + 
"resolved": "https://registry.npmjs.org/postcss-selector-parser/-/postcss-selector-parser-3.1.2.tgz", + "integrity": "sha512-h7fJ/5uWuRVyOtkO45pnt1Ih40CEleeyCHzipqAZO2e5H20g25Y48uYnFUiShvY4rZWNJ/Bib/KVPmanaCtOhA==", + "dev": true, + "requires": { + "dot-prop": "^5.2.0", + "indexes-of": "^1.0.1", + "uniq": "^1.0.1" + } + } + } + }, + "postcss-minify-font-values": { + "version": "4.0.2", + "resolved": "https://registry.npmjs.org/postcss-minify-font-values/-/postcss-minify-font-values-4.0.2.tgz", + "integrity": "sha512-j85oO6OnRU9zPf04+PZv1LYIYOprWm6IA6zkXkrJXyRveDEuQggG6tvoy8ir8ZwjLxLuGfNkCZEQG7zan+Hbtg==", + "dev": true, + "requires": { + "postcss": "^7.0.0", + "postcss-value-parser": "^3.0.0" + }, + "dependencies": { + "postcss-value-parser": { + "version": "3.3.1", + "resolved": "https://registry.npmjs.org/postcss-value-parser/-/postcss-value-parser-3.3.1.tgz", + "integrity": "sha512-pISE66AbVkp4fDQ7VHBwRNXzAAKJjw4Vw7nWI/+Q3vuly7SNfgYXvm6i5IgFylHGK5sP/xHAbB7N49OS4gWNyQ==", + "dev": true + } + } + }, + "postcss-minify-gradients": { + "version": "4.0.2", + "resolved": "https://registry.npmjs.org/postcss-minify-gradients/-/postcss-minify-gradients-4.0.2.tgz", + "integrity": "sha512-qKPfwlONdcf/AndP1U8SJ/uzIJtowHlMaSioKzebAXSG4iJthlWC9iSWznQcX4f66gIWX44RSA841HTHj3wK+Q==", + "dev": true, + "requires": { + "cssnano-util-get-arguments": "^4.0.0", + "is-color-stop": "^1.0.0", + "postcss": "^7.0.0", + "postcss-value-parser": "^3.0.0" + }, + "dependencies": { + "postcss-value-parser": { + "version": "3.3.1", + "resolved": "https://registry.npmjs.org/postcss-value-parser/-/postcss-value-parser-3.3.1.tgz", + "integrity": "sha512-pISE66AbVkp4fDQ7VHBwRNXzAAKJjw4Vw7nWI/+Q3vuly7SNfgYXvm6i5IgFylHGK5sP/xHAbB7N49OS4gWNyQ==", + "dev": true + } + } + }, + "postcss-minify-params": { + "version": "4.0.2", + "resolved": "https://registry.npmjs.org/postcss-minify-params/-/postcss-minify-params-4.0.2.tgz", + "integrity": 
"sha512-G7eWyzEx0xL4/wiBBJxJOz48zAKV2WG3iZOqVhPet/9geefm/Px5uo1fzlHu+DOjT+m0Mmiz3jkQzVHe6wxAWg==", + "dev": true, + "requires": { + "alphanum-sort": "^1.0.0", + "browserslist": "^4.0.0", + "cssnano-util-get-arguments": "^4.0.0", + "postcss": "^7.0.0", + "postcss-value-parser": "^3.0.0", + "uniqs": "^2.0.0" + }, + "dependencies": { + "postcss-value-parser": { + "version": "3.3.1", + "resolved": "https://registry.npmjs.org/postcss-value-parser/-/postcss-value-parser-3.3.1.tgz", + "integrity": "sha512-pISE66AbVkp4fDQ7VHBwRNXzAAKJjw4Vw7nWI/+Q3vuly7SNfgYXvm6i5IgFylHGK5sP/xHAbB7N49OS4gWNyQ==", + "dev": true + } + } + }, + "postcss-minify-selectors": { + "version": "4.0.2", + "resolved": "https://registry.npmjs.org/postcss-minify-selectors/-/postcss-minify-selectors-4.0.2.tgz", + "integrity": "sha512-D5S1iViljXBj9kflQo4YutWnJmwm8VvIsU1GeXJGiG9j8CIg9zs4voPMdQDUmIxetUOh60VilsNzCiAFTOqu3g==", + "dev": true, + "requires": { + "alphanum-sort": "^1.0.0", + "has": "^1.0.0", + "postcss": "^7.0.0", + "postcss-selector-parser": "^3.0.0" + }, + "dependencies": { + "postcss-selector-parser": { + "version": "3.1.2", + "resolved": "https://registry.npmjs.org/postcss-selector-parser/-/postcss-selector-parser-3.1.2.tgz", + "integrity": "sha512-h7fJ/5uWuRVyOtkO45pnt1Ih40CEleeyCHzipqAZO2e5H20g25Y48uYnFUiShvY4rZWNJ/Bib/KVPmanaCtOhA==", + "dev": true, + "requires": { + "dot-prop": "^5.2.0", + "indexes-of": "^1.0.1", + "uniq": "^1.0.1" + } + } + } + }, + "postcss-modules-extract-imports": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/postcss-modules-extract-imports/-/postcss-modules-extract-imports-2.0.0.tgz", + "integrity": "sha512-LaYLDNS4SG8Q5WAWqIJgdHPJrDDr/Lv775rMBFUbgjTz6j34lUznACHcdRWroPvXANP2Vj7yNK57vp9eFqzLWQ==", + "dev": true, + "requires": { + "postcss": "^7.0.5" + } + }, + "postcss-modules-local-by-default": { + "version": "2.0.6", + "resolved": "https://registry.npmjs.org/postcss-modules-local-by-default/-/postcss-modules-local-by-default-2.0.6.tgz", + 
"integrity": "sha512-oLUV5YNkeIBa0yQl7EYnxMgy4N6noxmiwZStaEJUSe2xPMcdNc8WmBQuQCx18H5psYbVxz8zoHk0RAAYZXP9gA==", + "dev": true, + "requires": { + "postcss": "^7.0.6", + "postcss-selector-parser": "^6.0.0", + "postcss-value-parser": "^3.3.1" + }, + "dependencies": { + "postcss-value-parser": { + "version": "3.3.1", + "resolved": "https://registry.npmjs.org/postcss-value-parser/-/postcss-value-parser-3.3.1.tgz", + "integrity": "sha512-pISE66AbVkp4fDQ7VHBwRNXzAAKJjw4Vw7nWI/+Q3vuly7SNfgYXvm6i5IgFylHGK5sP/xHAbB7N49OS4gWNyQ==", + "dev": true + } + } + }, + "postcss-modules-scope": { + "version": "2.2.0", + "resolved": "https://registry.npmjs.org/postcss-modules-scope/-/postcss-modules-scope-2.2.0.tgz", + "integrity": "sha512-YyEgsTMRpNd+HmyC7H/mh3y+MeFWevy7V1evVhJWewmMbjDHIbZbOXICC2y+m1xI1UVfIT1HMW/O04Hxyu9oXQ==", + "dev": true, + "requires": { + "postcss": "^7.0.6", + "postcss-selector-parser": "^6.0.0" + } + }, + "postcss-modules-values": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/postcss-modules-values/-/postcss-modules-values-2.0.0.tgz", + "integrity": "sha512-Ki7JZa7ff1N3EIMlPnGTZfUMe69FFwiQPnVSXC9mnn3jozCRBYIxiZd44yJOV2AmabOo4qFf8s0dC/+lweG7+w==", + "dev": true, + "requires": { + "icss-replace-symbols": "^1.1.0", + "postcss": "^7.0.6" + } + }, + "postcss-nested": { + "version": "4.2.3", + "resolved": "https://registry.npmjs.org/postcss-nested/-/postcss-nested-4.2.3.tgz", + "integrity": "sha512-rOv0W1HquRCamWy2kFl3QazJMMe1ku6rCFoAAH+9AcxdbpDeBr6k968MLWuLjvjMcGEip01ak09hKOEgpK9hvw==", + "dev": true, + "requires": { + "postcss": "^7.0.32", + "postcss-selector-parser": "^6.0.2" + } + }, + "postcss-normalize-charset": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/postcss-normalize-charset/-/postcss-normalize-charset-4.0.1.tgz", + "integrity": "sha512-gMXCrrlWh6G27U0hF3vNvR3w8I1s2wOBILvA87iNXaPvSNo5uZAMYsZG7XjCUf1eVxuPfyL4TJ7++SGZLc9A3g==", + "dev": true, + "requires": { + "postcss": "^7.0.0" + } + }, + 
"postcss-normalize-display-values": { + "version": "4.0.2", + "resolved": "https://registry.npmjs.org/postcss-normalize-display-values/-/postcss-normalize-display-values-4.0.2.tgz", + "integrity": "sha512-3F2jcsaMW7+VtRMAqf/3m4cPFhPD3EFRgNs18u+k3lTJJlVe7d0YPO+bnwqo2xg8YiRpDXJI2u8A0wqJxMsQuQ==", + "dev": true, + "requires": { + "cssnano-util-get-match": "^4.0.0", + "postcss": "^7.0.0", + "postcss-value-parser": "^3.0.0" + }, + "dependencies": { + "postcss-value-parser": { + "version": "3.3.1", + "resolved": "https://registry.npmjs.org/postcss-value-parser/-/postcss-value-parser-3.3.1.tgz", + "integrity": "sha512-pISE66AbVkp4fDQ7VHBwRNXzAAKJjw4Vw7nWI/+Q3vuly7SNfgYXvm6i5IgFylHGK5sP/xHAbB7N49OS4gWNyQ==", + "dev": true + } + } + }, + "postcss-normalize-positions": { + "version": "4.0.2", + "resolved": "https://registry.npmjs.org/postcss-normalize-positions/-/postcss-normalize-positions-4.0.2.tgz", + "integrity": "sha512-Dlf3/9AxpxE+NF1fJxYDeggi5WwV35MXGFnnoccP/9qDtFrTArZ0D0R+iKcg5WsUd8nUYMIl8yXDCtcrT8JrdA==", + "dev": true, + "requires": { + "cssnano-util-get-arguments": "^4.0.0", + "has": "^1.0.0", + "postcss": "^7.0.0", + "postcss-value-parser": "^3.0.0" + }, + "dependencies": { + "postcss-value-parser": { + "version": "3.3.1", + "resolved": "https://registry.npmjs.org/postcss-value-parser/-/postcss-value-parser-3.3.1.tgz", + "integrity": "sha512-pISE66AbVkp4fDQ7VHBwRNXzAAKJjw4Vw7nWI/+Q3vuly7SNfgYXvm6i5IgFylHGK5sP/xHAbB7N49OS4gWNyQ==", + "dev": true + } + } + }, + "postcss-normalize-repeat-style": { + "version": "4.0.2", + "resolved": "https://registry.npmjs.org/postcss-normalize-repeat-style/-/postcss-normalize-repeat-style-4.0.2.tgz", + "integrity": "sha512-qvigdYYMpSuoFs3Is/f5nHdRLJN/ITA7huIoCyqqENJe9PvPmLhNLMu7QTjPdtnVf6OcYYO5SHonx4+fbJE1+Q==", + "dev": true, + "requires": { + "cssnano-util-get-arguments": "^4.0.0", + "cssnano-util-get-match": "^4.0.0", + "postcss": "^7.0.0", + "postcss-value-parser": "^3.0.0" + }, + "dependencies": { + "postcss-value-parser": { + 
"version": "3.3.1", + "resolved": "https://registry.npmjs.org/postcss-value-parser/-/postcss-value-parser-3.3.1.tgz", + "integrity": "sha512-pISE66AbVkp4fDQ7VHBwRNXzAAKJjw4Vw7nWI/+Q3vuly7SNfgYXvm6i5IgFylHGK5sP/xHAbB7N49OS4gWNyQ==", + "dev": true + } + } + }, + "postcss-normalize-string": { + "version": "4.0.2", + "resolved": "https://registry.npmjs.org/postcss-normalize-string/-/postcss-normalize-string-4.0.2.tgz", + "integrity": "sha512-RrERod97Dnwqq49WNz8qo66ps0swYZDSb6rM57kN2J+aoyEAJfZ6bMx0sx/F9TIEX0xthPGCmeyiam/jXif0eA==", + "dev": true, + "requires": { + "has": "^1.0.0", + "postcss": "^7.0.0", + "postcss-value-parser": "^3.0.0" + }, + "dependencies": { + "postcss-value-parser": { + "version": "3.3.1", + "resolved": "https://registry.npmjs.org/postcss-value-parser/-/postcss-value-parser-3.3.1.tgz", + "integrity": "sha512-pISE66AbVkp4fDQ7VHBwRNXzAAKJjw4Vw7nWI/+Q3vuly7SNfgYXvm6i5IgFylHGK5sP/xHAbB7N49OS4gWNyQ==", + "dev": true + } + } + }, + "postcss-normalize-timing-functions": { + "version": "4.0.2", + "resolved": "https://registry.npmjs.org/postcss-normalize-timing-functions/-/postcss-normalize-timing-functions-4.0.2.tgz", + "integrity": "sha512-acwJY95edP762e++00Ehq9L4sZCEcOPyaHwoaFOhIwWCDfik6YvqsYNxckee65JHLKzuNSSmAdxwD2Cud1Z54A==", + "dev": true, + "requires": { + "cssnano-util-get-match": "^4.0.0", + "postcss": "^7.0.0", + "postcss-value-parser": "^3.0.0" + }, + "dependencies": { + "postcss-value-parser": { + "version": "3.3.1", + "resolved": "https://registry.npmjs.org/postcss-value-parser/-/postcss-value-parser-3.3.1.tgz", + "integrity": "sha512-pISE66AbVkp4fDQ7VHBwRNXzAAKJjw4Vw7nWI/+Q3vuly7SNfgYXvm6i5IgFylHGK5sP/xHAbB7N49OS4gWNyQ==", + "dev": true + } + } + }, + "postcss-normalize-unicode": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/postcss-normalize-unicode/-/postcss-normalize-unicode-4.0.1.tgz", + "integrity": "sha512-od18Uq2wCYn+vZ/qCOeutvHjB5jm57ToxRaMeNuf0nWVHaP9Hua56QyMF6fs/4FSUnVIw0CBPsU0K4LnBPwYwg==", + "dev": true, + 
"requires": { + "browserslist": "^4.0.0", + "postcss": "^7.0.0", + "postcss-value-parser": "^3.0.0" + }, + "dependencies": { + "postcss-value-parser": { + "version": "3.3.1", + "resolved": "https://registry.npmjs.org/postcss-value-parser/-/postcss-value-parser-3.3.1.tgz", + "integrity": "sha512-pISE66AbVkp4fDQ7VHBwRNXzAAKJjw4Vw7nWI/+Q3vuly7SNfgYXvm6i5IgFylHGK5sP/xHAbB7N49OS4gWNyQ==", + "dev": true + } + } + }, + "postcss-normalize-url": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/postcss-normalize-url/-/postcss-normalize-url-4.0.1.tgz", + "integrity": "sha512-p5oVaF4+IHwu7VpMan/SSpmpYxcJMtkGppYf0VbdH5B6hN8YNmVyJLuY9FmLQTzY3fag5ESUUHDqM+heid0UVA==", + "dev": true, + "requires": { + "is-absolute-url": "^2.0.0", + "normalize-url": "^3.0.0", + "postcss": "^7.0.0", + "postcss-value-parser": "^3.0.0" + }, + "dependencies": { + "normalize-url": { + "version": "3.3.0", + "resolved": "https://registry.npmjs.org/normalize-url/-/normalize-url-3.3.0.tgz", + "integrity": "sha512-U+JJi7duF1o+u2pynbp2zXDW2/PADgC30f0GsHZtRh+HOcXHnw137TrNlyxxRvWW5fjKd3bcLHPxofWuCjaeZg==", + "dev": true + }, + "postcss-value-parser": { + "version": "3.3.1", + "resolved": "https://registry.npmjs.org/postcss-value-parser/-/postcss-value-parser-3.3.1.tgz", + "integrity": "sha512-pISE66AbVkp4fDQ7VHBwRNXzAAKJjw4Vw7nWI/+Q3vuly7SNfgYXvm6i5IgFylHGK5sP/xHAbB7N49OS4gWNyQ==", + "dev": true + } + } + }, + "postcss-normalize-whitespace": { + "version": "4.0.2", + "resolved": "https://registry.npmjs.org/postcss-normalize-whitespace/-/postcss-normalize-whitespace-4.0.2.tgz", + "integrity": "sha512-tO8QIgrsI3p95r8fyqKV+ufKlSHh9hMJqACqbv2XknufqEDhDvbguXGBBqxw9nsQoXWf0qOqppziKJKHMD4GtA==", + "dev": true, + "requires": { + "postcss": "^7.0.0", + "postcss-value-parser": "^3.0.0" + }, + "dependencies": { + "postcss-value-parser": { + "version": "3.3.1", + "resolved": "https://registry.npmjs.org/postcss-value-parser/-/postcss-value-parser-3.3.1.tgz", + "integrity": 
"sha512-pISE66AbVkp4fDQ7VHBwRNXzAAKJjw4Vw7nWI/+Q3vuly7SNfgYXvm6i5IgFylHGK5sP/xHAbB7N49OS4gWNyQ==", + "dev": true + } + } + }, + "postcss-ordered-values": { + "version": "4.1.2", + "resolved": "https://registry.npmjs.org/postcss-ordered-values/-/postcss-ordered-values-4.1.2.tgz", + "integrity": "sha512-2fCObh5UanxvSxeXrtLtlwVThBvHn6MQcu4ksNT2tsaV2Fg76R2CV98W7wNSlX+5/pFwEyaDwKLLoEV7uRybAw==", + "dev": true, + "requires": { + "cssnano-util-get-arguments": "^4.0.0", + "postcss": "^7.0.0", + "postcss-value-parser": "^3.0.0" + }, + "dependencies": { + "postcss-value-parser": { + "version": "3.3.1", + "resolved": "https://registry.npmjs.org/postcss-value-parser/-/postcss-value-parser-3.3.1.tgz", + "integrity": "sha512-pISE66AbVkp4fDQ7VHBwRNXzAAKJjw4Vw7nWI/+Q3vuly7SNfgYXvm6i5IgFylHGK5sP/xHAbB7N49OS4gWNyQ==", + "dev": true + } + } + }, + "postcss-reduce-initial": { + "version": "4.0.3", + "resolved": "https://registry.npmjs.org/postcss-reduce-initial/-/postcss-reduce-initial-4.0.3.tgz", + "integrity": "sha512-gKWmR5aUulSjbzOfD9AlJiHCGH6AEVLaM0AV+aSioxUDd16qXP1PCh8d1/BGVvpdWn8k/HiK7n6TjeoXN1F7DA==", + "dev": true, + "requires": { + "browserslist": "^4.0.0", + "caniuse-api": "^3.0.0", + "has": "^1.0.0", + "postcss": "^7.0.0" + } + }, + "postcss-reduce-transforms": { + "version": "4.0.2", + "resolved": "https://registry.npmjs.org/postcss-reduce-transforms/-/postcss-reduce-transforms-4.0.2.tgz", + "integrity": "sha512-EEVig1Q2QJ4ELpJXMZR8Vt5DQx8/mo+dGWSR7vWXqcob2gQLyQGsionYcGKATXvQzMPn6DSN1vTN7yFximdIAg==", + "dev": true, + "requires": { + "cssnano-util-get-match": "^4.0.0", + "has": "^1.0.0", + "postcss": "^7.0.0", + "postcss-value-parser": "^3.0.0" + }, + "dependencies": { + "postcss-value-parser": { + "version": "3.3.1", + "resolved": "https://registry.npmjs.org/postcss-value-parser/-/postcss-value-parser-3.3.1.tgz", + "integrity": "sha512-pISE66AbVkp4fDQ7VHBwRNXzAAKJjw4Vw7nWI/+Q3vuly7SNfgYXvm6i5IgFylHGK5sP/xHAbB7N49OS4gWNyQ==", + "dev": true + } + } + }, + 
"postcss-safe-parser": { + "version": "4.0.2", + "resolved": "https://registry.npmjs.org/postcss-safe-parser/-/postcss-safe-parser-4.0.2.tgz", + "integrity": "sha512-Uw6ekxSWNLCPesSv/cmqf2bY/77z11O7jZGPax3ycZMFU/oi2DMH9i89AdHc1tRwFg/arFoEwX0IS3LCUxJh1g==", + "dev": true, + "requires": { + "postcss": "^7.0.26" + } + }, + "postcss-selector-parser": { + "version": "6.0.6", + "resolved": "https://registry.npmjs.org/postcss-selector-parser/-/postcss-selector-parser-6.0.6.tgz", + "integrity": "sha512-9LXrvaaX3+mcv5xkg5kFwqSzSH1JIObIx51PrndZwlmznwXRfxMddDvo9gve3gVR8ZTKgoFDdWkbRFmEhT4PMg==", + "dev": true, + "requires": { + "cssesc": "^3.0.0", + "util-deprecate": "^1.0.2" + } + }, + "postcss-svgo": { + "version": "4.0.3", + "resolved": "https://registry.npmjs.org/postcss-svgo/-/postcss-svgo-4.0.3.tgz", + "integrity": "sha512-NoRbrcMWTtUghzuKSoIm6XV+sJdvZ7GZSc3wdBN0W19FTtp2ko8NqLsgoh/m9CzNhU3KLPvQmjIwtaNFkaFTvw==", + "dev": true, + "requires": { + "postcss": "^7.0.0", + "postcss-value-parser": "^3.0.0", + "svgo": "^1.0.0" + }, + "dependencies": { + "postcss-value-parser": { + "version": "3.3.1", + "resolved": "https://registry.npmjs.org/postcss-value-parser/-/postcss-value-parser-3.3.1.tgz", + "integrity": "sha512-pISE66AbVkp4fDQ7VHBwRNXzAAKJjw4Vw7nWI/+Q3vuly7SNfgYXvm6i5IgFylHGK5sP/xHAbB7N49OS4gWNyQ==", + "dev": true + } + } + }, + "postcss-unique-selectors": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/postcss-unique-selectors/-/postcss-unique-selectors-4.0.1.tgz", + "integrity": "sha512-+JanVaryLo9QwZjKrmJgkI4Fn8SBgRO6WXQBJi7KiAVPlmxikB5Jzc4EvXMT2H0/m0RjrVVm9rGNhZddm/8Spg==", + "dev": true, + "requires": { + "alphanum-sort": "^1.0.0", + "postcss": "^7.0.0", + "uniqs": "^2.0.0" + } + }, + "postcss-value-parser": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/postcss-value-parser/-/postcss-value-parser-4.1.0.tgz", + "integrity": "sha512-97DXOFbQJhk71ne5/Mt6cOu6yxsSfM0QGQyl0L25Gca4yGWEGJaig7l7gbCX623VqTBNGLRLaVUCnNkcedlRSQ==", + "dev": 
true + }, + "prepend-http": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/prepend-http/-/prepend-http-2.0.0.tgz", + "integrity": "sha1-6SQ0v6XqjBn0HN/UAddBo8gZ2Jc=", + "dev": true + }, + "prettier": { + "version": "1.19.1", + "resolved": "https://registry.npmjs.org/prettier/-/prettier-1.19.1.tgz", + "integrity": "sha512-s7PoyDv/II1ObgQunCbB9PdLmUcBZcnWOcxDh7O0N/UwDEsHyqkW+Qh28jW+mVuCdx7gLB0BotYI1Y6uI9iyew==", + "dev": true, + "optional": true + }, + "pretty-error": { + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/pretty-error/-/pretty-error-2.1.2.tgz", + "integrity": "sha512-EY5oDzmsX5wvuynAByrmY0P0hcp+QpnAKbJng2A2MPjVKXCxrDSUkzghVJ4ZGPIv+JC4gX8fPUWscC0RtjsWGw==", + "dev": true, + "requires": { + "lodash": "^4.17.20", + "renderkid": "^2.0.4" + } + }, + "pretty-hrtime": { + "version": "1.0.3", + "resolved": "https://registry.npmjs.org/pretty-hrtime/-/pretty-hrtime-1.0.3.tgz", + "integrity": "sha1-t+PqQkNaTJsnWdmeDyAesZWALuE=", + "dev": true + }, + "pretty-time": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/pretty-time/-/pretty-time-1.1.0.tgz", + "integrity": "sha512-28iF6xPQrP8Oa6uxE6a1biz+lWeTOAPKggvjB8HAs6nVMKZwf5bG++632Dx614hIWgUPkgivRfG+a8uAXGTIbA==", + "dev": true + }, + "prismjs": { + "version": "1.27.0", + "resolved": "https://registry.npmjs.org/prismjs/-/prismjs-1.27.0.tgz", + "integrity": "sha512-t13BGPUlFDR7wRB5kQDG4jjl7XeuH6jbJGt11JHPL96qwsEHNX2+68tFXqc1/k+/jALsbSWJKUOT/hcYAZ5LkA==", + "dev": true + }, + "process": { + "version": "0.11.10", + "resolved": "https://registry.npmjs.org/process/-/process-0.11.10.tgz", + "integrity": "sha1-czIwDoQBYb2j5podHZGn1LwW8YI=", + "dev": true + }, + "process-nextick-args": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/process-nextick-args/-/process-nextick-args-2.0.1.tgz", + "integrity": "sha512-3ouUOpQhtgrbOa17J7+uxOTpITYWaGP7/AhoR3+A+/1e9skrzelGi/dXzEYyvbxubEF6Wn2ypscTKiKJFFn1ag==", + "dev": true + }, + "progress": { + "version": "2.0.3", + 
"resolved": "https://registry.npmjs.org/progress/-/progress-2.0.3.tgz", + "integrity": "sha512-7PiHtLll5LdnKIMw100I+8xJXR5gW2QwWYkT6iJva0bXitZKa/XMrSbdmg3r2Xnaidz9Qumd0VPaMrZlF9V9sA==", + "dev": true + }, + "promise-inflight": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/promise-inflight/-/promise-inflight-1.0.1.tgz", + "integrity": "sha1-mEcocL8igTL8vdhoEputEsPAKeM=", + "dev": true + }, + "promise-retry": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/promise-retry/-/promise-retry-2.0.1.tgz", + "integrity": "sha512-y+WKFlBR8BGXnsNlIHFGPZmyDf3DFMoLhaflAnyZgV6rG6xu+JwesTo2Q9R6XwYmtmwAFCkAk3e35jEdoeh/3g==", + "dev": true, + "requires": { + "err-code": "^2.0.2", + "retry": "^0.12.0" + } + }, + "prompts": { + "version": "2.4.1", + "resolved": "https://registry.npmjs.org/prompts/-/prompts-2.4.1.tgz", + "integrity": "sha512-EQyfIuO2hPDsX1L/blblV+H7I0knhgAd82cVneCwcdND9B8AuCDuRcBH6yIcG4dFzlOUqbazQqwGjx5xmsNLuQ==", + "dev": true, + "requires": { + "kleur": "^3.0.3", + "sisteransi": "^1.0.5" + } + }, + "proxy-addr": { + "version": "2.0.6", + "resolved": "https://registry.npmjs.org/proxy-addr/-/proxy-addr-2.0.6.tgz", + "integrity": "sha512-dh/frvCBVmSsDYzw6n926jv974gddhkFPfiN8hPOi30Wax25QZyZEGveluCgliBnqmuM+UJmBErbAUFIoDbjOw==", + "dev": true, + "requires": { + "forwarded": "~0.1.2", + "ipaddr.js": "1.9.1" + } + }, + "prr": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/prr/-/prr-1.0.1.tgz", + "integrity": "sha1-0/wRS6BplaRexok/SEzrHXj19HY=", + "dev": true + }, + "ps-tree": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/ps-tree/-/ps-tree-1.2.0.tgz", + "integrity": "sha512-0VnamPPYHl4uaU/nSFeZZpR21QAWRz+sRv4iW9+v/GS/J5U5iZB5BNN6J0RMoOvdx2gWM2+ZFMIm58q24e4UYA==", + "dev": true, + "requires": { + "event-stream": "=3.3.4" + } + }, + "pseudomap": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/pseudomap/-/pseudomap-1.0.2.tgz", + "integrity": "sha1-8FKijacOYYkX7wqKw0wa5aaChrM=", + 
"dev": true + }, + "psl": { + "version": "1.8.0", + "resolved": "https://registry.npmjs.org/psl/-/psl-1.8.0.tgz", + "integrity": "sha512-RIdOzyoavK+hA18OGGWDqUTsCLhtA7IcZ/6NCs4fFJaHBDab+pDDmDIByWFRQJq2Cd7r1OoQxBGKOaztq+hjIQ==", + "dev": true + }, + "public-encrypt": { + "version": "4.0.3", + "resolved": "https://registry.npmjs.org/public-encrypt/-/public-encrypt-4.0.3.tgz", + "integrity": "sha512-zVpa8oKZSz5bTMTFClc1fQOnyyEzpl5ozpi1B5YcvBrdohMjH2rfsBtyXcuNuwjsDIXmBYlF2N5FlJYhR29t8Q==", + "dev": true, + "requires": { + "bn.js": "^4.1.0", + "browserify-rsa": "^4.0.0", + "create-hash": "^1.1.0", + "parse-asn1": "^5.0.0", + "randombytes": "^2.0.1", + "safe-buffer": "^5.1.2" + }, + "dependencies": { + "bn.js": { + "version": "4.12.0", + "resolved": "https://registry.npmjs.org/bn.js/-/bn.js-4.12.0.tgz", + "integrity": "sha512-c98Bf3tPniI+scsdk237ku1Dc3ujXQTSgyiPUDEOe7tRkhrqridvh8klBv0HCEso1OLOYcHuCv/cS6DNxKH+ZA==", + "dev": true + } + } + }, + "pump": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/pump/-/pump-3.0.0.tgz", + "integrity": "sha512-LwZy+p3SFs1Pytd/jYct4wpv49HiYCqd9Rlc5ZVdk0V+8Yzv6jR5Blk3TRmPL1ft69TxP0IMZGJ+WPFU2BFhww==", + "dev": true, + "requires": { + "end-of-stream": "^1.1.0", + "once": "^1.3.1" + } + }, + "pumpify": { + "version": "1.5.1", + "resolved": "https://registry.npmjs.org/pumpify/-/pumpify-1.5.1.tgz", + "integrity": "sha512-oClZI37HvuUJJxSKKrC17bZ9Cu0ZYhEAGPsPUy9KlMUmv9dKX2o77RUmq7f3XjIxbwyGwYzbzQ1L2Ks8sIradQ==", + "dev": true, + "requires": { + "duplexify": "^3.6.0", + "inherits": "^2.0.3", + "pump": "^2.0.0" + }, + "dependencies": { + "pump": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/pump/-/pump-2.0.1.tgz", + "integrity": "sha512-ruPMNRkN3MHP1cWJc9OWr+T/xDP0jhXYCLfJcBuX54hhfIBnaQmAUMfDcG4DM5UMWByBbJY69QSphm3jtDKIkA==", + "dev": true, + "requires": { + "end-of-stream": "^1.1.0", + "once": "^1.3.1" + } + } + } + }, + "punycode": { + "version": "2.1.1", + "resolved": 
"https://registry.npmjs.org/punycode/-/punycode-2.1.1.tgz", + "integrity": "sha512-XRsRjdf+j5ml+y/6GKHPZbrF/8p2Yga0JPtdqTIY2Xe5ohJPD9saDJJLPvp9+NSBprVvevdXZybnj2cv8OEd0A==", + "dev": true + }, + "pupa": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/pupa/-/pupa-2.1.1.tgz", + "integrity": "sha512-l1jNAspIBSFqbT+y+5FosojNpVpF94nlI+wDUpqP9enwOTfHx9f0gh5nB96vl+6yTpsJsypeNrwfzPrKuHB41A==", + "dev": true, + "requires": { + "escape-goat": "^2.0.0" + } + }, + "purgecss": { + "version": "2.3.0", + "resolved": "https://registry.npmjs.org/purgecss/-/purgecss-2.3.0.tgz", + "integrity": "sha512-BE5CROfVGsx2XIhxGuZAT7rTH9lLeQx/6M0P7DTXQH4IUc3BBzs9JUzt4yzGf3JrH9enkeq6YJBe9CTtkm1WmQ==", + "dev": true, + "requires": { + "commander": "^5.0.0", + "glob": "^7.0.0", + "postcss": "7.0.32", + "postcss-selector-parser": "^6.0.2" + }, + "dependencies": { + "chalk": { + "version": "2.4.2", + "resolved": "https://registry.npmjs.org/chalk/-/chalk-2.4.2.tgz", + "integrity": "sha512-Mti+f9lpJNcwF4tWV8/OrTTtF1gZi+f8FqlyAdouralcFWFQWF2+NgCHShjkCb+IFBLq9buZwE1xckQU4peSuQ==", + "dev": true, + "requires": { + "ansi-styles": "^3.2.1", + "escape-string-regexp": "^1.0.5", + "supports-color": "^5.3.0" + }, + "dependencies": { + "supports-color": { + "version": "5.5.0", + "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-5.5.0.tgz", + "integrity": "sha512-QjVjwdXIt408MIiAqCX4oUKsgU2EqAGzs2Ppkm4aQYbjm+ZEWEcW4SfFNTr4uMNZma0ey4f5lgLrkB0aX0QMow==", + "dev": true, + "requires": { + "has-flag": "^3.0.0" + } + } + } + }, + "commander": { + "version": "5.1.0", + "resolved": "https://registry.npmjs.org/commander/-/commander-5.1.0.tgz", + "integrity": "sha512-P0CysNDQ7rtVw4QIQtm+MRxV66vKFSvlsQvGYXZWR3qFU0jlMKHZZZgw8e+8DSah4UDKMqnknRDQz+xuQXQ/Zg==", + "dev": true + }, + "postcss": { + "version": "7.0.32", + "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz", + "integrity": 
"sha512-03eXong5NLnNCD05xscnGKGDZ98CyzoqPSMjOe6SuoQY7Z2hIj0Ld1g/O/UQRuOle2aRtiIRDg9tDcTGAkLfKw==", + "dev": true, + "requires": { + "chalk": "^2.4.2", + "source-map": "^0.6.1", + "supports-color": "^6.1.0" + } + } + } + }, + "q": { + "version": "1.5.1", + "resolved": "https://registry.npmjs.org/q/-/q-1.5.1.tgz", + "integrity": "sha1-fjL3W0E4EpHQRhHxvxQQmsAGUdc=", + "dev": true + }, + "qs": { + "version": "6.10.1", + "resolved": "https://registry.npmjs.org/qs/-/qs-6.10.1.tgz", + "integrity": "sha512-M528Hph6wsSVOBiYUnGf+K/7w0hNshs/duGsNXPUCLH5XAqjEtiPGwNONLV0tBH8NoGb0mvD5JubnUTrujKDTg==", + "dev": true, + "requires": { + "side-channel": "^1.0.4" + } + }, + "query-string": { + "version": "6.14.1", + "resolved": "https://registry.npmjs.org/query-string/-/query-string-6.14.1.tgz", + "integrity": "sha512-XDxAeVmpfu1/6IjyT/gXHOl+S0vQ9owggJ30hhWKdHAsNPOcasn5o9BW0eejZqL2e4vMjhAxoW3jVHcD6mbcYw==", + "dev": true, + "requires": { + "decode-uri-component": "^0.2.0", + "filter-obj": "^1.1.0", + "split-on-first": "^1.0.0", + "strict-uri-encode": "^2.0.0" + } + }, + "querystring": { + "version": "0.2.0", + "resolved": "https://registry.npmjs.org/querystring/-/querystring-0.2.0.tgz", + "integrity": "sha1-sgmEkgO7Jd+CDadW50cAWHhSFiA=", + "dev": true + }, + "querystring-es3": { + "version": "0.2.1", + "resolved": "https://registry.npmjs.org/querystring-es3/-/querystring-es3-0.2.1.tgz", + "integrity": "sha1-nsYfeQSYdXB9aUFFlv2Qek1xHnM=", + "dev": true + }, + "querystringify": { + "version": "2.2.0", + "resolved": "https://registry.npmjs.org/querystringify/-/querystringify-2.2.0.tgz", + "integrity": "sha512-FIqgj2EUvTa7R50u0rGsyTftzjYmv/a3hO345bZNrqabNqjtgiDMgmo4mkUjd+nzU5oF3dClKqFIPUKybUyqoQ==", + "dev": true + }, + "queue-microtask": { + "version": "1.2.3", + "resolved": "https://registry.npmjs.org/queue-microtask/-/queue-microtask-1.2.3.tgz", + "integrity": "sha512-NuaNSa6flKT5JaSYQzJok04JzTL1CA6aGhv5rfLW3PgqA+M2ChpZQnAC8h8i4ZFkBS8X5RqkDBHA7r4hej3K9A==", + "dev": true + }, + 
"randombytes": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/randombytes/-/randombytes-2.1.0.tgz", + "integrity": "sha512-vYl3iOX+4CKUWuxGi9Ukhie6fsqXqS9FE2Zaic4tNFD2N2QQaXOMFbuKK4QmDHC0JO6B1Zp41J0LpT0oR68amQ==", + "dev": true, + "requires": { + "safe-buffer": "^5.1.0" + } + }, + "randomfill": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/randomfill/-/randomfill-1.0.4.tgz", + "integrity": "sha512-87lcbR8+MhcWcUiQ+9e+Rwx8MyR2P7qnt15ynUlbm3TU/fjbgz4GsvfSUDTemtCCtVCqb4ZcEFlyPNTh9bBTLw==", + "dev": true, + "requires": { + "randombytes": "^2.0.5", + "safe-buffer": "^5.1.0" + } + }, + "range-parser": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/range-parser/-/range-parser-1.2.1.tgz", + "integrity": "sha512-Hrgsx+orqoygnmhFbKaHE6c296J+HTAQXoxEF6gNupROmmGJRoyzfG3ccAveqCBrwr/2yxQ5BVd/GTl5agOwSg==", + "dev": true + }, + "raw-body": { + "version": "2.4.0", + "resolved": "https://registry.npmjs.org/raw-body/-/raw-body-2.4.0.tgz", + "integrity": "sha512-4Oz8DUIwdvoa5qMJelxipzi/iJIi40O5cGV1wNYp5hvZP8ZN0T+jiNkL0QepXs+EsQ9XJ8ipEDoiH70ySUJP3Q==", + "dev": true, + "requires": { + "bytes": "3.1.0", + "http-errors": "1.7.2", + "iconv-lite": "0.4.24", + "unpipe": "1.0.0" + } + }, + "rc": { + "version": "1.2.8", + "resolved": "https://registry.npmjs.org/rc/-/rc-1.2.8.tgz", + "integrity": "sha512-y3bGgqKj3QBdxLbLkomlohkvsA8gdAiUQlSBJnBhfn+BPxg4bc62d8TcBW15wavDfgexCgccckhcZvywyQYPOw==", + "dev": true, + "requires": { + "deep-extend": "^0.6.0", + "ini": "~1.3.0", + "minimist": "^1.2.0", + "strip-json-comments": "~2.0.1" + } + }, + "rc-config-loader": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/rc-config-loader/-/rc-config-loader-4.0.0.tgz", + "integrity": "sha512-//LRTblJEcqbmmro1GCmZ39qZXD+JqzuD8Y5/IZU3Dhp3A1Yr0Xn68ks8MQ6qKfKvYCWDveUmRDKDA40c+sCXw==", + "dev": true, + "requires": { + "debug": "^4.1.1", + "js-yaml": "^4.0.0", + "json5": "^2.1.2", + "require-from-string": "^2.0.2" + }, + "dependencies": { + 
"argparse": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/argparse/-/argparse-2.0.1.tgz", + "integrity": "sha512-8+9WqebbFzpX9OR+Wa6O29asIogeRMzcGtAINdpMHHyAg10f05aSFVBbcEqGf/PXw1EjAZ+q2/bEBg3DvurK3Q==", + "dev": true + }, + "js-yaml": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/js-yaml/-/js-yaml-4.1.0.tgz", + "integrity": "sha512-wpxZs9NoxZaJESJGIZTyDEaYpl0FKSA+FB9aJiyemKhMwkxQg63h4T1KJgUGHpTqPDNRcmmYLugrRjJlBtWvRA==", + "dev": true, + "requires": { + "argparse": "^2.0.1" + } + } + } + }, + "read-package-json-fast": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/read-package-json-fast/-/read-package-json-fast-2.0.2.tgz", + "integrity": "sha512-5fyFUyO9B799foVk4n6ylcoAktG/FbE3jwRKxvwaeSrIunaoMc0u81dzXxjeAFKOce7O5KncdfwpGvvs6r5PsQ==", + "dev": true, + "requires": { + "json-parse-even-better-errors": "^2.3.0", + "npm-normalize-package-bin": "^1.0.1" + } + }, + "readable-stream": { + "version": "2.3.7", + "resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-2.3.7.tgz", + "integrity": "sha512-Ebho8K4jIbHAxnuxi7o42OrZgF/ZTNcsZj6nRKyUmkhLFq8CHItp/fy6hQZuZmP/n3yZ9VBUbp4zz/mX8hmYPw==", + "dev": true, + "requires": { + "core-util-is": "~1.0.0", + "inherits": "~2.0.3", + "isarray": "~1.0.0", + "process-nextick-args": "~2.0.0", + "safe-buffer": "~5.1.1", + "string_decoder": "~1.1.1", + "util-deprecate": "~1.0.1" + } + }, + "readdirp": { + "version": "2.2.1", + "resolved": "https://registry.npmjs.org/readdirp/-/readdirp-2.2.1.tgz", + "integrity": "sha512-1JU/8q+VgFZyxwrJ+SVIOsh+KywWGpds3NTqikiKpDMZWScmAYyKIgqkO+ARvNWJfXeXR1zxz7aHF4u4CyH6vQ==", + "dev": true, + "requires": { + "graceful-fs": "^4.1.11", + "micromatch": "^3.1.10", + "readable-stream": "^2.0.2" + } + }, + "reduce": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/reduce/-/reduce-1.0.2.tgz", + "integrity": "sha512-xX7Fxke/oHO5IfZSk77lvPa/7bjMh9BuCk4OOoX5XTXrM7s0Z+MkPfSDfz0q7r91BhhGSs8gii/VEN/7zhCPpQ==", + "dev": true, 
+ "requires": { + "object-keys": "^1.1.0" + } + }, + "reduce-css-calc": { + "version": "2.1.8", + "resolved": "https://registry.npmjs.org/reduce-css-calc/-/reduce-css-calc-2.1.8.tgz", + "integrity": "sha512-8liAVezDmUcH+tdzoEGrhfbGcP7nOV4NkGE3a74+qqvE7nt9i4sKLGBuZNOnpI4WiGksiNPklZxva80061QiPg==", + "dev": true, + "requires": { + "css-unit-converter": "^1.1.1", + "postcss-value-parser": "^3.3.0" + }, + "dependencies": { + "postcss-value-parser": { + "version": "3.3.1", + "resolved": "https://registry.npmjs.org/postcss-value-parser/-/postcss-value-parser-3.3.1.tgz", + "integrity": "sha512-pISE66AbVkp4fDQ7VHBwRNXzAAKJjw4Vw7nWI/+Q3vuly7SNfgYXvm6i5IgFylHGK5sP/xHAbB7N49OS4gWNyQ==", + "dev": true + } + } + }, + "regenerate": { + "version": "1.4.2", + "resolved": "https://registry.npmjs.org/regenerate/-/regenerate-1.4.2.tgz", + "integrity": "sha512-zrceR/XhGYU/d/opr2EKO7aRHUeiBI8qjtfHqADTwZd6Szfy16la6kqD0MIUs5z5hx6AaKa+PixpPrR289+I0A==", + "dev": true + }, + "regenerate-unicode-properties": { + "version": "8.2.0", + "resolved": "https://registry.npmjs.org/regenerate-unicode-properties/-/regenerate-unicode-properties-8.2.0.tgz", + "integrity": "sha512-F9DjY1vKLo/tPePDycuH3dn9H1OTPIkVD9Kz4LODu+F2C75mgjAJ7x/gwy6ZcSNRAAkhNlJSOHRe8k3p+K9WhA==", + "dev": true, + "requires": { + "regenerate": "^1.4.0" + } + }, + "regenerator-runtime": { + "version": "0.13.7", + "resolved": "https://registry.npmjs.org/regenerator-runtime/-/regenerator-runtime-0.13.7.tgz", + "integrity": "sha512-a54FxoJDIr27pgf7IgeQGxmqUNYrcV338lf/6gH456HZ/PhX+5BcwHXG9ajESmwe6WRO0tAzRUrRmNONWgkrew==", + "dev": true + }, + "regenerator-transform": { + "version": "0.14.5", + "resolved": "https://registry.npmjs.org/regenerator-transform/-/regenerator-transform-0.14.5.tgz", + "integrity": "sha512-eOf6vka5IO151Jfsw2NO9WpGX58W6wWmefK3I1zEGr0lOD0u8rwPaNqQL1aRxUaxLeKO3ArNh3VYg1KbaD+FFw==", + "dev": true, + "requires": { + "@babel/runtime": "^7.8.4" + } + }, + "regex-not": { + "version": "1.0.2", + "resolved": 
"https://registry.npmjs.org/regex-not/-/regex-not-1.0.2.tgz", + "integrity": "sha512-J6SDjUgDxQj5NusnOtdFxDwN/+HWykR8GELwctJ7mdqhcyy1xEc4SRFHUXvxTp661YaVKAjfRLZ9cCqS6tn32A==", + "dev": true, + "requires": { + "extend-shallow": "^3.0.2", + "safe-regex": "^1.1.0" + } + }, + "regexp.prototype.flags": { + "version": "1.3.1", + "resolved": "https://registry.npmjs.org/regexp.prototype.flags/-/regexp.prototype.flags-1.3.1.tgz", + "integrity": "sha512-JiBdRBq91WlY7uRJ0ds7R+dU02i6LKi8r3BuQhNXn+kmeLN+EfHhfjqMRis1zJxnlu88hq/4dx0P2OP3APRTOA==", + "dev": true, + "requires": { + "call-bind": "^1.0.2", + "define-properties": "^1.1.3" + } + }, + "regexpu-core": { + "version": "4.7.1", + "resolved": "https://registry.npmjs.org/regexpu-core/-/regexpu-core-4.7.1.tgz", + "integrity": "sha512-ywH2VUraA44DZQuRKzARmw6S66mr48pQVva4LBeRhcOltJ6hExvWly5ZjFLYo67xbIxb6W1q4bAGtgfEl20zfQ==", + "dev": true, + "requires": { + "regenerate": "^1.4.0", + "regenerate-unicode-properties": "^8.2.0", + "regjsgen": "^0.5.1", + "regjsparser": "^0.6.4", + "unicode-match-property-ecmascript": "^1.0.4", + "unicode-match-property-value-ecmascript": "^1.2.0" + } + }, + "registry-auth-token": { + "version": "4.2.1", + "resolved": "https://registry.npmjs.org/registry-auth-token/-/registry-auth-token-4.2.1.tgz", + "integrity": "sha512-6gkSb4U6aWJB4SF2ZvLb76yCBjcvufXBqvvEx1HbmKPkutswjW1xNVRY0+daljIYRbogN7O0etYSlbiaEQyMyw==", + "dev": true, + "requires": { + "rc": "^1.2.8" + } + }, + "registry-url": { + "version": "5.1.0", + "resolved": "https://registry.npmjs.org/registry-url/-/registry-url-5.1.0.tgz", + "integrity": "sha512-8acYXXTI0AkQv6RAOjE3vOaIXZkT9wo4LOFbBKYQEEnnMNBpKqdUrI6S4NT0KPIo/WVvJ5tE/X5LF/TQUf0ekw==", + "dev": true, + "requires": { + "rc": "^1.2.8" + } + }, + "regjsgen": { + "version": "0.5.2", + "resolved": "https://registry.npmjs.org/regjsgen/-/regjsgen-0.5.2.tgz", + "integrity": "sha512-OFFT3MfrH90xIW8OOSyUrk6QHD5E9JOTeGodiJeBS3J6IwlgzJMNE/1bZklWz5oTg+9dCMyEetclvCVXOPoN3A==", + "dev": true + }, + 
"regjsparser": { + "version": "0.6.9", + "resolved": "https://registry.npmjs.org/regjsparser/-/regjsparser-0.6.9.tgz", + "integrity": "sha512-ZqbNRz1SNjLAiYuwY0zoXW8Ne675IX5q+YHioAGbCw4X96Mjl2+dcX9B2ciaeyYjViDAfvIjFpQjJgLttTEERQ==", + "dev": true, + "requires": { + "jsesc": "~0.5.0" + }, + "dependencies": { + "jsesc": { + "version": "0.5.0", + "resolved": "https://registry.npmjs.org/jsesc/-/jsesc-0.5.0.tgz", + "integrity": "sha1-597mbjXW/Bb3EP6R1c9p9w8IkR0=", + "dev": true + } + } + }, + "relateurl": { + "version": "0.2.7", + "resolved": "https://registry.npmjs.org/relateurl/-/relateurl-0.2.7.tgz", + "integrity": "sha1-VNvzd+UUQKypCkzSdGANP/LYiKk=", + "dev": true + }, + "remote-git-tags": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/remote-git-tags/-/remote-git-tags-3.0.0.tgz", + "integrity": "sha512-C9hAO4eoEsX+OXA4rla66pXZQ+TLQ8T9dttgQj18yuKlPMTVkIkdYXvlMC55IuUsIkV6DpmQYi10JKFLaU+l7w==", + "dev": true + }, + "remove-markdown": { + "version": "0.3.0", + "resolved": "https://registry.npmjs.org/remove-markdown/-/remove-markdown-0.3.0.tgz", + "integrity": "sha1-XktmdJOpNXlyjz1S7MHbnKUF3Jg=", + "dev": true + }, + "remove-trailing-separator": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/remove-trailing-separator/-/remove-trailing-separator-1.1.0.tgz", + "integrity": "sha1-wkvOKig62tW8P1jg1IJJuSN52O8=", + "dev": true + }, + "renderkid": { + "version": "2.0.5", + "resolved": "https://registry.npmjs.org/renderkid/-/renderkid-2.0.5.tgz", + "integrity": "sha512-ccqoLg+HLOHq1vdfYNm4TBeaCDIi1FLt3wGojTDSvdewUv65oTmI3cnT2E4hRjl1gzKZIPK+KZrXzlUYKnR+vQ==", + "dev": true, + "requires": { + "css-select": "^2.0.2", + "dom-converter": "^0.2", + "htmlparser2": "^3.10.1", + "lodash": "^4.17.20", + "strip-ansi": "^3.0.0" + } + }, + "repeat-element": { + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/repeat-element/-/repeat-element-1.1.4.tgz", + "integrity": 
"sha512-LFiNfRcSu7KK3evMyYOuCzv3L10TW7yC1G2/+StMjK8Y6Vqd2MG7r/Qjw4ghtuCOjFvlnms/iMmLqpvW/ES/WQ==", + "dev": true + }, + "repeat-string": { + "version": "1.6.1", + "resolved": "https://registry.npmjs.org/repeat-string/-/repeat-string-1.6.1.tgz", + "integrity": "sha1-jcrkcOHIirwtYA//Sndihtp15jc=", + "dev": true + }, + "request": { + "version": "2.88.2", + "resolved": "https://registry.npmjs.org/request/-/request-2.88.2.tgz", + "integrity": "sha512-MsvtOrfG9ZcrOwAW+Qi+F6HbD0CWXEh9ou77uOb7FM2WPhwT7smM833PzanhJLsgXjN89Ir6V2PczXNnMpwKhw==", + "dev": true, + "requires": { + "aws-sign2": "~0.7.0", + "aws4": "^1.8.0", + "caseless": "~0.12.0", + "combined-stream": "~1.0.6", + "extend": "~3.0.2", + "forever-agent": "~0.6.1", + "form-data": "~2.3.2", + "har-validator": "~5.1.3", + "http-signature": "~1.2.0", + "is-typedarray": "~1.0.0", + "isstream": "~0.1.2", + "json-stringify-safe": "~5.0.1", + "mime-types": "~2.1.19", + "oauth-sign": "~0.9.0", + "performance-now": "^2.1.0", + "qs": "~6.5.2", + "safe-buffer": "^5.1.2", + "tough-cookie": "~2.5.0", + "tunnel-agent": "^0.6.0", + "uuid": "^3.3.2" + }, + "dependencies": { + "qs": { + "version": "6.5.2", + "resolved": "https://registry.npmjs.org/qs/-/qs-6.5.2.tgz", + "integrity": "sha512-N5ZAX4/LxJmF+7wN74pUD6qAh9/wnvdQcjq9TZjevvXzSUo7bfmw91saqMjzGS2xq91/odN2dW/WOl7qQHNDGA==", + "dev": true + } + } + }, + "require-directory": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/require-directory/-/require-directory-2.1.1.tgz", + "integrity": "sha1-jGStX9MNqxyXbiNE/+f3kqam30I=", + "dev": true + }, + "require-from-string": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/require-from-string/-/require-from-string-2.0.2.tgz", + "integrity": "sha512-Xf0nWe6RseziFMu+Ap9biiUbmplq6S9/p+7w7YXP/JBHhrUDDUhwa+vANyubuqfZWTveU//DYVGsDG7RKL/vEw==", + "dev": true + }, + "require-main-filename": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/require-main-filename/-/require-main-filename-2.0.0.tgz", 
+ "integrity": "sha512-NKN5kMDylKuldxYLSUfrbo5Tuzh4hd+2E8NPPX02mZtn1VuREQToYe/ZdlJy+J3uCpfaiGF05e7B8W0iXbQHmg==", + "dev": true + }, + "requires-port": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/requires-port/-/requires-port-1.0.0.tgz", + "integrity": "sha1-kl0mAdOaxIXgkc8NpcbmlNw9yv8=", + "dev": true + }, + "resolve": { + "version": "1.20.0", + "resolved": "https://registry.npmjs.org/resolve/-/resolve-1.20.0.tgz", + "integrity": "sha512-wENBPt4ySzg4ybFQW2TT1zMQucPK95HSh/nq2CFTZVOGut2+pQvSsgtda4d26YrYcr067wjbmzOG8byDPBX63A==", + "dev": true, + "requires": { + "is-core-module": "^2.2.0", + "path-parse": "^1.0.6" + } + }, + "resolve-cwd": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/resolve-cwd/-/resolve-cwd-2.0.0.tgz", + "integrity": "sha1-AKn3OHVW4nA46uIyyqNypqWbZlo=", + "dev": true, + "requires": { + "resolve-from": "^3.0.0" + } + }, + "resolve-from": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/resolve-from/-/resolve-from-3.0.0.tgz", + "integrity": "sha1-six699nWiBvItuZTM17rywoYh0g=", + "dev": true + }, + "resolve-url": { + "version": "0.2.1", + "resolved": "https://registry.npmjs.org/resolve-url/-/resolve-url-0.2.1.tgz", + "integrity": "sha1-LGN/53yJOv0qZj/iGqkIAGjiBSo=", + "dev": true + }, + "responselike": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/responselike/-/responselike-1.0.2.tgz", + "integrity": "sha1-kYcg7ztjHFZCvgaPFa3lpG9Loec=", + "dev": true, + "requires": { + "lowercase-keys": "^1.0.0" + } + }, + "ret": { + "version": "0.1.15", + "resolved": "https://registry.npmjs.org/ret/-/ret-0.1.15.tgz", + "integrity": "sha512-TTlYpa+OL+vMMNG24xSlQGEJ3B/RzEfUlLct7b5G/ytav+wPrplCpVMFuwzXbkecJrb6IYo1iFb0S9v37754mg==", + "dev": true + }, + "retry": { + "version": "0.12.0", + "resolved": "https://registry.npmjs.org/retry/-/retry-0.12.0.tgz", + "integrity": "sha1-G0KmJmoh8HQh0bC1S33BZ7AcATs=", + "dev": true + }, + "reusify": { + "version": "1.0.4", + "resolved": 
"https://registry.npmjs.org/reusify/-/reusify-1.0.4.tgz", + "integrity": "sha512-U9nH88a3fc/ekCF1l0/UP1IosiuIjyTh7hBvXVMHYgVcfGvt897Xguj2UOLDeI5BG2m7/uwyaLVT6fbtCwTyzw==", + "dev": true + }, + "rgb-regex": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/rgb-regex/-/rgb-regex-1.0.1.tgz", + "integrity": "sha1-wODWiC3w4jviVKR16O3UGRX+rrE=", + "dev": true + }, + "rgba-regex": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/rgba-regex/-/rgba-regex-1.0.0.tgz", + "integrity": "sha1-QzdOLiyglosO8VI0YLfXMP8i7rM=", + "dev": true + }, + "rimraf": { + "version": "2.7.1", + "resolved": "https://registry.npmjs.org/rimraf/-/rimraf-2.7.1.tgz", + "integrity": "sha512-uWjbaKIK3T1OSVptzX7Nl6PvQ3qAGtKEtVRjRuazjfL3Bx5eI409VZSqgND+4UNnmzLVdPj9FqFJNPqBZFve4w==", + "dev": true, + "requires": { + "glob": "^7.1.3" + } + }, + "ripemd160": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/ripemd160/-/ripemd160-2.0.2.tgz", + "integrity": "sha512-ii4iagi25WusVoiC4B4lq7pbXfAp3D9v5CwfkY33vffw2+pkDjY1D8GaN7spsxvCSx8dkPqOZCEZyfxcmJG2IA==", + "dev": true, + "requires": { + "hash-base": "^3.0.0", + "inherits": "^2.0.1" + } + }, + "robust-predicates": { + "version": "3.0.1", + "resolved": "https://registry.npmjs.org/robust-predicates/-/robust-predicates-3.0.1.tgz", + "integrity": "sha512-ndEIpszUHiG4HtDsQLeIuMvRsDnn8c8rYStabochtUeCvfuvNptb5TUbVD68LRAILPX7p9nqQGh4xJgn3EHS/g==", + "dev": true + }, + "run-parallel": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/run-parallel/-/run-parallel-1.2.0.tgz", + "integrity": "sha512-5l4VyZR86LZ/lDxZTR6jqL8AFE2S0IFLMP26AbjsLVADxHdhB/c0GUsH+y39UfCi3dzz8OlQuPmnaJOMoDHQBA==", + "dev": true, + "requires": { + "queue-microtask": "^1.2.2" + } + }, + "run-queue": { + "version": "1.0.3", + "resolved": "https://registry.npmjs.org/run-queue/-/run-queue-1.0.3.tgz", + "integrity": "sha1-6Eg5bwV9Ij8kOGkkYY4laUFh7Ec=", + "dev": true, + "requires": { + "aproba": "^1.1.1" + } + }, + "rw": { + "version": 
"1.3.3", + "resolved": "https://registry.npmjs.org/rw/-/rw-1.3.3.tgz", + "integrity": "sha512-PdhdWy89SiZogBLaw42zdeqtRJ//zFd2PgQavcICDUgJT5oW10QCRKbJ6bg4r0/UY2M6BWd5tkxuGFRvCkgfHQ==", + "dev": true + }, + "rxjs": { + "version": "6.6.7", + "resolved": "https://registry.npmjs.org/rxjs/-/rxjs-6.6.7.tgz", + "integrity": "sha512-hTdwr+7yYNIT5n4AMYp85KA6yw2Va0FLa3Rguvbpa4W3I5xynaBZo41cM3XM+4Q6fRMj3sBYIR1VAmZMXYJvRQ==", + "dev": true, + "requires": { + "tslib": "^1.9.0" + } + }, + "safe-buffer": { + "version": "5.1.2", + "resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.1.2.tgz", + "integrity": "sha512-Gd2UZBJDkXlY7GbJxfsE8/nvKkUEU1G38c1siN6QP6a9PT9MmHB8GnpscSmMJSoF8LOIrt8ud/wPtojys4G6+g==", + "dev": true + }, + "safe-regex": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/safe-regex/-/safe-regex-1.1.0.tgz", + "integrity": "sha1-QKNmnzsHfR6UPURinhV91IAjvy4=", + "dev": true, + "requires": { + "ret": "~0.1.10" + } + }, + "safer-buffer": { + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/safer-buffer/-/safer-buffer-2.1.2.tgz", + "integrity": "sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg==", + "dev": true + }, + "sax": { + "version": "1.2.4", + "resolved": "https://registry.npmjs.org/sax/-/sax-1.2.4.tgz", + "integrity": "sha512-NqVDv9TpANUjFm0N8uM5GxL36UgKi9/atZw+x7YFnQ8ckwFGKrl4xX4yWtrey3UJm5nP1kUbnYgLopqWNSRhWw==", + "dev": true + }, + "schema-utils": { + "version": "2.7.1", + "resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-2.7.1.tgz", + "integrity": "sha512-SHiNtMOUGWBQJwzISiVYKu82GiV4QYGePp3odlY1tuKO7gPtphAT5R/py0fA6xtbgLL/RvtJZnU9b8s0F1q0Xg==", + "dev": true, + "requires": { + "@types/json-schema": "^7.0.5", + "ajv": "^6.12.4", + "ajv-keywords": "^3.5.2" + } + }, + "section-matter": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/section-matter/-/section-matter-1.0.0.tgz", + "integrity": 
"sha512-vfD3pmTzGpufjScBh50YHKzEu2lxBWhVEHsNGoEXmCmn2hKGfeNLYMzCJpe8cD7gqX7TJluOVpBkAequ6dgMmA==", + "dev": true, + "requires": { + "extend-shallow": "^2.0.1", + "kind-of": "^6.0.0" + }, + "dependencies": { + "extend-shallow": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/extend-shallow/-/extend-shallow-2.0.1.tgz", + "integrity": "sha1-Ua99YUrZqfYQ6huvu5idaxxWiQ8=", + "dev": true, + "requires": { + "is-extendable": "^0.1.0" + } + } + } + }, + "select-hose": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/select-hose/-/select-hose-2.0.0.tgz", + "integrity": "sha1-Yl2GWPhlr0Psliv8N2o3NZpJlMo=", + "dev": true + }, + "selfsigned": { + "version": "1.10.11", + "resolved": "https://registry.npmjs.org/selfsigned/-/selfsigned-1.10.11.tgz", + "integrity": "sha512-aVmbPOfViZqOZPgRBT0+3u4yZFHpmnIghLMlAcb5/xhp5ZtB/RVnKhz5vl2M32CLXAqR4kha9zfhNg0Lf/sxKA==", + "dev": true, + "requires": { + "node-forge": "^0.10.0" + } + }, + "semver": { + "version": "6.3.0", + "resolved": "https://registry.npmjs.org/semver/-/semver-6.3.0.tgz", + "integrity": "sha512-b39TBaTSfV6yBrapU89p5fKekE2m/NwnDocOVruQFS1/veMgdzuPcnOM34M6CwxW8jH/lxEa5rBoDeUwu5HHTw==", + "dev": true + }, + "semver-diff": { + "version": "3.1.1", + "resolved": "https://registry.npmjs.org/semver-diff/-/semver-diff-3.1.1.tgz", + "integrity": "sha512-GX0Ix/CJcHyB8c4ykpHGIAvLyOwOobtM/8d+TQkAd81/bEjgPHrfba41Vpesr7jX/t8Uh+R3EX9eAS5be+jQYg==", + "dev": true, + "requires": { + "semver": "^6.3.0" + } + }, + "semver-utils": { + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/semver-utils/-/semver-utils-1.1.4.tgz", + "integrity": "sha512-EjnoLE5OGmDAVV/8YDoN5KiajNadjzIp9BAHOhYeQHt7j0UWxjmgsx4YD48wp4Ue1Qogq38F1GNUJNqF1kKKxA==", + "dev": true + }, + "send": { + "version": "0.17.1", + "resolved": "https://registry.npmjs.org/send/-/send-0.17.1.tgz", + "integrity": "sha512-BsVKsiGcQMFwT8UxypobUKyv7irCNRHk1T0G680vk88yf6LBByGcZJOTJCrTP2xVN6yI+XjPJcNuE3V4fT9sAg==", + "dev": true, + "requires": { + 
"debug": "2.6.9", + "depd": "~1.1.2", + "destroy": "~1.0.4", + "encodeurl": "~1.0.2", + "escape-html": "~1.0.3", + "etag": "~1.8.1", + "fresh": "0.5.2", + "http-errors": "~1.7.2", + "mime": "1.6.0", + "ms": "2.1.1", + "on-finished": "~2.3.0", + "range-parser": "~1.2.1", + "statuses": "~1.5.0" + }, + "dependencies": { + "debug": { + "version": "2.6.9", + "resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz", + "integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==", + "dev": true, + "requires": { + "ms": "2.0.0" + }, + "dependencies": { + "ms": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz", + "integrity": "sha1-VgiurfwAvmwpAd9fmGF4jeDVl8g=", + "dev": true + } + } + }, + "mime": { + "version": "1.6.0", + "resolved": "https://registry.npmjs.org/mime/-/mime-1.6.0.tgz", + "integrity": "sha512-x0Vn8spI+wuJ1O6S7gnbaQg8Pxh4NNHb7KSINmEWKiPE4RKOplvijn+NkmYmmRgP68mc70j2EbeTFRsrswaQeg==", + "dev": true + }, + "ms": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/ms/-/ms-2.1.1.tgz", + "integrity": "sha512-tgp+dl5cGk28utYktBsrFqA7HKgrhgPsg6Z/EfhWI4gl1Hwq8B/GmY/0oXZ6nF8hDVesS/FpnYaD/kOWhYQvyg==", + "dev": true + } + } + }, + "serialize-javascript": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/serialize-javascript/-/serialize-javascript-4.0.0.tgz", + "integrity": "sha512-GaNA54380uFefWghODBWEGisLZFj00nS5ACs6yHa9nLqlLpVLO8ChDGeKRjZnV4Nh4n0Qi7nhYZD/9fCPzEqkw==", + "dev": true, + "requires": { + "randombytes": "^2.1.0" + } + }, + "serve-index": { + "version": "1.9.1", + "resolved": "https://registry.npmjs.org/serve-index/-/serve-index-1.9.1.tgz", + "integrity": "sha1-03aNabHn2C5c4FD/9bRTvqEqkjk=", + "dev": true, + "requires": { + "accepts": "~1.3.4", + "batch": "0.6.1", + "debug": "2.6.9", + "escape-html": "~1.0.3", + "http-errors": "~1.6.2", + "mime-types": "~2.1.17", + "parseurl": "~1.3.2" + }, + "dependencies": { + "debug": { + "version": 
"2.6.9", + "resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz", + "integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==", + "dev": true, + "requires": { + "ms": "2.0.0" + } + }, + "http-errors": { + "version": "1.6.3", + "resolved": "https://registry.npmjs.org/http-errors/-/http-errors-1.6.3.tgz", + "integrity": "sha1-i1VoC7S+KDoLW/TqLjhYC+HZMg0=", + "dev": true, + "requires": { + "depd": "~1.1.2", + "inherits": "2.0.3", + "setprototypeof": "1.1.0", + "statuses": ">= 1.4.0 < 2" + } + }, + "inherits": { + "version": "2.0.3", + "resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.3.tgz", + "integrity": "sha1-Yzwsg+PaQqUC9SRmAiSA9CCCYd4=", + "dev": true + }, + "ms": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz", + "integrity": "sha1-VgiurfwAvmwpAd9fmGF4jeDVl8g=", + "dev": true + }, + "setprototypeof": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/setprototypeof/-/setprototypeof-1.1.0.tgz", + "integrity": "sha512-BvE/TwpZX4FXExxOxZyRGQQv651MSwmWKZGqvmPcRIjDqWub67kTKuIMx43cZZrS/cBBzwBcNDWoFxt2XEFIpQ==", + "dev": true + } + } + }, + "serve-static": { + "version": "1.14.1", + "resolved": "https://registry.npmjs.org/serve-static/-/serve-static-1.14.1.tgz", + "integrity": "sha512-JMrvUwE54emCYWlTI+hGrGv5I8dEwmco/00EvkzIIsR7MqrHonbD9pO2MOfFnpFntl7ecpZs+3mW+XbQZu9QCg==", + "dev": true, + "requires": { + "encodeurl": "~1.0.2", + "escape-html": "~1.0.3", + "parseurl": "~1.3.3", + "send": "0.17.1" + } + }, + "set-blocking": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/set-blocking/-/set-blocking-2.0.0.tgz", + "integrity": "sha1-BF+XgtARrppoA93TgrJDkrPYkPc=", + "dev": true + }, + "set-value": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/set-value/-/set-value-2.0.1.tgz", + "integrity": "sha512-JxHc1weCN68wRY0fhCoXpyK55m/XPHafOmK4UWD7m2CI14GMcFypt4w/0+NV5f/ZMby2F6S2wwA7fgynh9gWSw==", + "dev": true, + 
"requires": { + "extend-shallow": "^2.0.1", + "is-extendable": "^0.1.1", + "is-plain-object": "^2.0.3", + "split-string": "^3.0.1" + }, + "dependencies": { + "extend-shallow": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/extend-shallow/-/extend-shallow-2.0.1.tgz", + "integrity": "sha1-Ua99YUrZqfYQ6huvu5idaxxWiQ8=", + "dev": true, + "requires": { + "is-extendable": "^0.1.0" + } + } + } + }, + "setimmediate": { + "version": "1.0.5", + "resolved": "https://registry.npmjs.org/setimmediate/-/setimmediate-1.0.5.tgz", + "integrity": "sha1-KQy7Iy4waULX1+qbg3Mqt4VvgoU=", + "dev": true + }, + "setprototypeof": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/setprototypeof/-/setprototypeof-1.1.1.tgz", + "integrity": "sha512-JvdAWfbXeIGaZ9cILp38HntZSFSo3mWg6xGcJJsd+d4aRMOqauag1C63dJfDw7OaMYwEbHMOxEZ1lqVRYP2OAw==", + "dev": true + }, + "sha.js": { + "version": "2.4.11", + "resolved": "https://registry.npmjs.org/sha.js/-/sha.js-2.4.11.tgz", + "integrity": "sha512-QMEp5B7cftE7APOjk5Y6xgrbWu+WkLVQwk8JNjZ8nKRciZaByEW6MubieAiToS7+dwvrjGhH8jRXz3MVd0AYqQ==", + "dev": true, + "requires": { + "inherits": "^2.0.1", + "safe-buffer": "^5.0.1" + } + }, + "shebang-command": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/shebang-command/-/shebang-command-2.0.0.tgz", + "integrity": "sha512-kHxr2zZpYtdmrN1qDjrrX/Z1rR1kG8Dx+gkpK1G4eXmvXswmcE1hTWBWYUzlraYw1/yZp6YuDY77YtvbN0dmDA==", + "dev": true, + "requires": { + "shebang-regex": "^3.0.0" + } + }, + "shebang-regex": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/shebang-regex/-/shebang-regex-3.0.0.tgz", + "integrity": "sha512-7++dFhtcx3353uBaq8DDR4NuxBetBzC7ZQOhmTQInHEd6bSrXdiEyzCvG07Z44UYdLShWUyXt5M/yhz8ekcb1A==", + "dev": true + }, + "side-channel": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/side-channel/-/side-channel-1.0.4.tgz", + "integrity": "sha512-q5XPytqFEIKHkGdiMIrY10mvLRvnQh42/+GoBlFW3b2LXLE2xxJpZFdm94we0BaoV3RwJyGqg5wS7epxTv0Zvw==", + 
"dev": true, + "requires": { + "call-bind": "^1.0.0", + "get-intrinsic": "^1.0.2", + "object-inspect": "^1.9.0" + } + }, + "signal-exit": { + "version": "3.0.3", + "resolved": "https://registry.npmjs.org/signal-exit/-/signal-exit-3.0.3.tgz", + "integrity": "sha512-VUJ49FC8U1OxwZLxIbTTrDvLnf/6TDgxZcK8wxR8zs13xpx7xbG60ndBlhNrFi2EMuFRoeDoJO7wthSLq42EjA==", + "dev": true + }, + "simple-swizzle": { + "version": "0.2.2", + "resolved": "https://registry.npmjs.org/simple-swizzle/-/simple-swizzle-0.2.2.tgz", + "integrity": "sha1-pNprY1/8zMoz9w0Xy5JZLeleVXo=", + "dev": true, + "requires": { + "is-arrayish": "^0.3.1" + } + }, + "sisteransi": { + "version": "1.0.5", + "resolved": "https://registry.npmjs.org/sisteransi/-/sisteransi-1.0.5.tgz", + "integrity": "sha512-bLGGlR1QxBcynn2d5YmDX4MGjlZvy2MRBDRNHLJ8VI6l6+9FUiyTFNJ0IveOSP0bcXgVDPRcfGqA0pjaqUpfVg==", + "dev": true + }, + "sitemap": { + "version": "3.2.2", + "resolved": "https://registry.npmjs.org/sitemap/-/sitemap-3.2.2.tgz", + "integrity": "sha512-TModL/WU4m2q/mQcrDgNANn0P4LwprM9MMvG4hu5zP4c6IIKs2YLTu6nXXnNr8ODW/WFtxKggiJ1EGn2W0GNmg==", + "dev": true, + "requires": { + "lodash.chunk": "^4.2.0", + "lodash.padstart": "^4.6.1", + "whatwg-url": "^7.0.0", + "xmlbuilder": "^13.0.0" + } + }, + "slash": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/slash/-/slash-2.0.0.tgz", + "integrity": "sha512-ZYKh3Wh2z1PpEXWr0MpSBZ0V6mZHAQfYevttO11c51CaWjGTaadiKZ+wVt1PbMlDV5qhMFslpZCemhwOK7C89A==", + "dev": true + }, + "smart-buffer": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/smart-buffer/-/smart-buffer-4.1.0.tgz", + "integrity": "sha512-iVICrxOzCynf/SNaBQCw34eM9jROU/s5rzIhpOvzhzuYHfJR/DhZfDkXiZSgKXfgv26HT3Yni3AV/DGw0cGnnw==", + "dev": true + }, + "smoothscroll-polyfill": { + "version": "0.4.4", + "resolved": "https://registry.npmjs.org/smoothscroll-polyfill/-/smoothscroll-polyfill-0.4.4.tgz", + "integrity": 
"sha512-TK5ZA9U5RqCwMpfoMq/l1mrH0JAR7y7KRvOBx0n2869aLxch+gT9GhN3yUfjiw+d/DiF1mKo14+hd62JyMmoBg==", + "dev": true + }, + "snapdragon": { + "version": "0.8.2", + "resolved": "https://registry.npmjs.org/snapdragon/-/snapdragon-0.8.2.tgz", + "integrity": "sha512-FtyOnWN/wCHTVXOMwvSv26d+ko5vWlIDD6zoUJ7LW8vh+ZBC8QdljveRP+crNrtBwioEUWy/4dMtbBjA4ioNlg==", + "dev": true, + "requires": { + "base": "^0.11.1", + "debug": "^2.2.0", + "define-property": "^0.2.5", + "extend-shallow": "^2.0.1", + "map-cache": "^0.2.2", + "source-map": "^0.5.6", + "source-map-resolve": "^0.5.0", + "use": "^3.1.0" + }, + "dependencies": { + "debug": { + "version": "2.6.9", + "resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz", + "integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==", + "dev": true, + "requires": { + "ms": "2.0.0" + } + }, + "define-property": { + "version": "0.2.5", + "resolved": "https://registry.npmjs.org/define-property/-/define-property-0.2.5.tgz", + "integrity": "sha1-w1se+RjsPJkPmlvFe+BKrOxcgRY=", + "dev": true, + "requires": { + "is-descriptor": "^0.1.0" + } + }, + "extend-shallow": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/extend-shallow/-/extend-shallow-2.0.1.tgz", + "integrity": "sha1-Ua99YUrZqfYQ6huvu5idaxxWiQ8=", + "dev": true, + "requires": { + "is-extendable": "^0.1.0" + } + }, + "ms": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz", + "integrity": "sha1-VgiurfwAvmwpAd9fmGF4jeDVl8g=", + "dev": true + }, + "source-map": { + "version": "0.5.7", + "resolved": "https://registry.npmjs.org/source-map/-/source-map-0.5.7.tgz", + "integrity": "sha1-igOdLRAh0i0eoUyA2OpGi6LvP8w=", + "dev": true + } + } + }, + "snapdragon-node": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/snapdragon-node/-/snapdragon-node-2.1.1.tgz", + "integrity": "sha512-O27l4xaMYt/RSQ5TR3vpWCAB5Kb/czIcqUFOM/C4fYcLnbZUc1PkjTAMjof2pBWaSTwOUd6qUHcFGVGj7aIwnw==", + 
"dev": true, + "requires": { + "define-property": "^1.0.0", + "isobject": "^3.0.0", + "snapdragon-util": "^3.0.1" + }, + "dependencies": { + "define-property": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/define-property/-/define-property-1.0.0.tgz", + "integrity": "sha1-dp66rz9KY6rTr56NMEybvnm/sOY=", + "dev": true, + "requires": { + "is-descriptor": "^1.0.0" + } + }, + "is-accessor-descriptor": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/is-accessor-descriptor/-/is-accessor-descriptor-1.0.0.tgz", + "integrity": "sha512-m5hnHTkcVsPfqx3AKlyttIPb7J+XykHvJP2B9bZDjlhLIoEq4XoK64Vg7boZlVWYK6LUY94dYPEE7Lh0ZkZKcQ==", + "dev": true, + "requires": { + "kind-of": "^6.0.0" + } + }, + "is-data-descriptor": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/is-data-descriptor/-/is-data-descriptor-1.0.0.tgz", + "integrity": "sha512-jbRXy1FmtAoCjQkVmIVYwuuqDFUbaOeDjmed1tOGPrsMhtJA4rD9tkgA0F1qJ3gRFRXcHYVkdeaP50Q5rE/jLQ==", + "dev": true, + "requires": { + "kind-of": "^6.0.0" + } + }, + "is-descriptor": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/is-descriptor/-/is-descriptor-1.0.2.tgz", + "integrity": "sha512-2eis5WqQGV7peooDyLmNEPUrps9+SXX5c9pL3xEB+4e9HnGuDa7mB7kHxHw4CbqS9k1T2hOH3miL8n8WtiYVtg==", + "dev": true, + "requires": { + "is-accessor-descriptor": "^1.0.0", + "is-data-descriptor": "^1.0.0", + "kind-of": "^6.0.2" + } + } + } + }, + "snapdragon-util": { + "version": "3.0.1", + "resolved": "https://registry.npmjs.org/snapdragon-util/-/snapdragon-util-3.0.1.tgz", + "integrity": "sha512-mbKkMdQKsjX4BAL4bRYTj21edOf8cN7XHdYUJEe+Zn99hVEYcMvKPct1IqNe7+AZPirn8BCDOQBHQZknqmKlZQ==", + "dev": true, + "requires": { + "kind-of": "^3.2.0" + }, + "dependencies": { + "kind-of": { + "version": "3.2.2", + "resolved": "https://registry.npmjs.org/kind-of/-/kind-of-3.2.2.tgz", + "integrity": "sha1-MeohpzS6ubuw8yRm2JOupR5KPGQ=", + "dev": true, + "requires": { + "is-buffer": "^1.1.5" + } + } + } + }, + "sockjs": { + 
"version": "0.3.21", + "resolved": "https://registry.npmjs.org/sockjs/-/sockjs-0.3.21.tgz", + "integrity": "sha512-DhbPFGpxjc6Z3I+uX07Id5ZO2XwYsWOrYjaSeieES78cq+JaJvVe5q/m1uvjIQhXinhIeCFRH6JgXe+mvVMyXw==", + "dev": true, + "requires": { + "faye-websocket": "^0.11.3", + "uuid": "^3.4.0", + "websocket-driver": "^0.7.4" + } + }, + "sockjs-client": { + "version": "1.5.1", + "resolved": "https://registry.npmjs.org/sockjs-client/-/sockjs-client-1.5.1.tgz", + "integrity": "sha512-VnVAb663fosipI/m6pqRXakEOw7nvd7TUgdr3PlR/8V2I95QIdwT8L4nMxhyU8SmDBHYXU1TOElaKOmKLfYzeQ==", + "dev": true, + "requires": { + "debug": "^3.2.6", + "eventsource": "^1.0.7", + "faye-websocket": "^0.11.3", + "inherits": "^2.0.4", + "json3": "^3.3.3", + "url-parse": "^1.5.1" + }, + "dependencies": { + "debug": { + "version": "3.2.7", + "resolved": "https://registry.npmjs.org/debug/-/debug-3.2.7.tgz", + "integrity": "sha512-CFjzYYAi4ThfiQvizrFQevTTXHtnCqWfe7x1AhgEscTz6ZbLbfoLRLPugTQyBth6f8ZERVUSyWHFD/7Wu4t1XQ==", + "dev": true, + "requires": { + "ms": "^2.1.1" + } + } + } + }, + "socks": { + "version": "2.6.1", + "resolved": "https://registry.npmjs.org/socks/-/socks-2.6.1.tgz", + "integrity": "sha512-kLQ9N5ucj8uIcxrDwjm0Jsqk06xdpBjGNQtpXy4Q8/QY2k+fY7nZH8CARy+hkbG+SGAovmzzuauCpBlb8FrnBA==", + "dev": true, + "requires": { + "ip": "^1.1.5", + "smart-buffer": "^4.1.0" + } + }, + "socks-proxy-agent": { + "version": "5.0.0", + "resolved": "https://registry.npmjs.org/socks-proxy-agent/-/socks-proxy-agent-5.0.0.tgz", + "integrity": "sha512-lEpa1zsWCChxiynk+lCycKuC502RxDWLKJZoIhnxrWNjLSDGYRFflHA1/228VkRcnv9TIb8w98derGbpKxJRgA==", + "dev": true, + "requires": { + "agent-base": "6", + "debug": "4", + "socks": "^2.3.3" + } + }, + "sort-keys": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/sort-keys/-/sort-keys-2.0.0.tgz", + "integrity": "sha1-ZYU1WEhh7JfXMNbPQYIuH1ZoQSg=", + "dev": true, + "requires": { + "is-plain-obj": "^1.0.0" + } + }, + "source-list-map": { + "version": "2.0.1", + "resolved": 
"https://registry.npmjs.org/source-list-map/-/source-list-map-2.0.1.tgz", + "integrity": "sha512-qnQ7gVMxGNxsiL4lEuJwe/To8UnK7fAnmbGEEH8RpLouuKbeEm0lhbQVFIrNSuB+G7tVrAlVsZgETT5nljf+Iw==", + "dev": true + }, + "source-map": { + "version": "0.6.1", + "resolved": "https://registry.npmjs.org/source-map/-/source-map-0.6.1.tgz", + "integrity": "sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g==", + "dev": true + }, + "source-map-resolve": { + "version": "0.5.3", + "resolved": "https://registry.npmjs.org/source-map-resolve/-/source-map-resolve-0.5.3.tgz", + "integrity": "sha512-Htz+RnsXWk5+P2slx5Jh3Q66vhQj1Cllm0zvnaY98+NFx+Dv2CF/f5O/t8x+KaNdrdIAsruNzoh/KpialbqAnw==", + "dev": true, + "requires": { + "atob": "^2.1.2", + "decode-uri-component": "^0.2.0", + "resolve-url": "^0.2.1", + "source-map-url": "^0.4.0", + "urix": "^0.1.0" + } + }, + "source-map-support": { + "version": "0.5.19", + "resolved": "https://registry.npmjs.org/source-map-support/-/source-map-support-0.5.19.tgz", + "integrity": "sha512-Wonm7zOCIJzBGQdB+thsPar0kYuCIzYvxZwlBa87yi/Mdjv7Tip2cyVbLj5o0cFPN4EVkuTwb3GDDyUx2DGnGw==", + "dev": true, + "requires": { + "buffer-from": "^1.0.0", + "source-map": "^0.6.0" + } + }, + "source-map-url": { + "version": "0.4.1", + "resolved": "https://registry.npmjs.org/source-map-url/-/source-map-url-0.4.1.tgz", + "integrity": "sha512-cPiFOTLUKvJFIg4SKVScy4ilPPW6rFgMgfuZJPNoDuMs3nC1HbMUycBoJw77xFIp6z1UJQJOfx6C9GMH80DiTw==", + "dev": true + }, + "spawn-please": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/spawn-please/-/spawn-please-1.0.0.tgz", + "integrity": "sha512-Kz33ip6NRNKuyTRo3aDWyWxeGeM0ORDO552Fs6E1nj4pLWPkl37SrRtTnq+MEopVaqgmaO6bAvVS+v64BJ5M/A==", + "dev": true + }, + "spdy": { + "version": "4.0.2", + "resolved": "https://registry.npmjs.org/spdy/-/spdy-4.0.2.tgz", + "integrity": "sha512-r46gZQZQV+Kl9oItvl1JZZqJKGr+oEkB08A6BzkiR7593/7IbtuncXHd2YoYeTsG4157ZssMu9KYvUHLcjcDoA==", + "dev": true, + "requires": 
{ + "debug": "^4.1.0", + "handle-thing": "^2.0.0", + "http-deceiver": "^1.2.7", + "select-hose": "^2.0.0", + "spdy-transport": "^3.0.0" + } + }, + "spdy-transport": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/spdy-transport/-/spdy-transport-3.0.0.tgz", + "integrity": "sha512-hsLVFE5SjA6TCisWeJXFKniGGOpBgMLmerfO2aCyCU5s7nJ/rpAepqmFifv/GCbSbueEeAJJnmSQ2rKC/g8Fcw==", + "dev": true, + "requires": { + "debug": "^4.1.0", + "detect-node": "^2.0.4", + "hpack.js": "^2.1.6", + "obuf": "^1.1.2", + "readable-stream": "^3.0.6", + "wbuf": "^1.7.3" + }, + "dependencies": { + "readable-stream": { + "version": "3.6.0", + "resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-3.6.0.tgz", + "integrity": "sha512-BViHy7LKeTz4oNnkcLJ+lVSL6vpiFeX6/d3oSH8zCW7UxP2onchk+vTGB143xuFjHS3deTgkKoXXymXqymiIdA==", + "dev": true, + "requires": { + "inherits": "^2.0.3", + "string_decoder": "^1.1.1", + "util-deprecate": "^1.0.1" + } + } + } + }, + "split": { + "version": "0.3.3", + "resolved": "https://registry.npmjs.org/split/-/split-0.3.3.tgz", + "integrity": "sha1-zQ7qXmOiEd//frDwkcQTPi0N0o8=", + "dev": true, + "requires": { + "through": "2" + } + }, + "split-on-first": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/split-on-first/-/split-on-first-1.1.0.tgz", + "integrity": "sha512-43ZssAJaMusuKWL8sKUBQXHWOpq8d6CfN/u1p4gUzfJkM05C8rxTmYrkIPTXapZpORA6LkkzcUulJ8FqA7Uudw==", + "dev": true + }, + "split-string": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/split-string/-/split-string-3.1.0.tgz", + "integrity": "sha512-NzNVhJDYpwceVVii8/Hu6DKfD2G+NrQHlS/V/qgv763EYudVwEcMQNxd2lh+0VrUByXN/oJkl5grOhYWvQUYiw==", + "dev": true, + "requires": { + "extend-shallow": "^3.0.0" + } + }, + "sprintf-js": { + "version": "1.0.3", + "resolved": "https://registry.npmjs.org/sprintf-js/-/sprintf-js-1.0.3.tgz", + "integrity": "sha1-BOaSb2YolTVPPdAVIDYzuFcpfiw=", + "dev": true + }, + "sshpk": { + "version": "1.16.1", + "resolved": 
"https://registry.npmjs.org/sshpk/-/sshpk-1.16.1.tgz", + "integrity": "sha512-HXXqVUq7+pcKeLqqZj6mHFUMvXtOJt1uoUx09pFW6011inTMxqI8BA8PM95myrIyyKwdnzjdFjLiE6KBPVtJIg==", + "dev": true, + "requires": { + "asn1": "~0.2.3", + "assert-plus": "^1.0.0", + "bcrypt-pbkdf": "^1.0.0", + "dashdash": "^1.12.0", + "ecc-jsbn": "~0.1.1", + "getpass": "^0.1.1", + "jsbn": "~0.1.0", + "safer-buffer": "^2.0.2", + "tweetnacl": "~0.14.0" + } + }, + "ssri": { + "version": "6.0.2", + "resolved": "https://registry.npmjs.org/ssri/-/ssri-6.0.2.tgz", + "integrity": "sha512-cepbSq/neFK7xB6A50KHN0xHDotYzq58wWCa5LeWqnPrHG8GzfEjO/4O8kpmcGW+oaxkvhEJCWgbgNk4/ZV93Q==", + "dev": true, + "requires": { + "figgy-pudding": "^3.5.1" + } + }, + "stable": { + "version": "0.1.8", + "resolved": "https://registry.npmjs.org/stable/-/stable-0.1.8.tgz", + "integrity": "sha512-ji9qxRnOVfcuLDySj9qzhGSEFVobyt1kIOSkj1qZzYLzq7Tos/oUUWvotUPQLlrsidqsK6tBH89Bc9kL5zHA6w==", + "dev": true + }, + "stack-utils": { + "version": "1.0.5", + "resolved": "https://registry.npmjs.org/stack-utils/-/stack-utils-1.0.5.tgz", + "integrity": "sha512-KZiTzuV3CnSnSvgMRrARVCj+Ht7rMbauGDK0LdVFRGyenwdylpajAp4Q0i6SX8rEmbTpMMf6ryq2gb8pPq2WgQ==", + "dev": true, + "requires": { + "escape-string-regexp": "^2.0.0" + }, + "dependencies": { + "escape-string-regexp": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/escape-string-regexp/-/escape-string-regexp-2.0.0.tgz", + "integrity": "sha512-UpzcLCXolUWcNu5HtVMHYdXJjArjsF9C0aNnquZYY4uW/Vu0miy5YoWvbV345HauVvcAUnpRuhMMcqTcGOY2+w==", + "dev": true + } + } + }, + "start-server-and-test": { + "version": "1.12.3", + "resolved": "https://registry.npmjs.org/start-server-and-test/-/start-server-and-test-1.12.3.tgz", + "integrity": "sha512-YNL/QdZ8gLYoAcvAFo/S2J4W0WS6Bi8HX/qZ74bMhZXEAMykvg7/8+vs0cPulhGBfoD4NGdbeEuV5wyhi1tlig==", + "dev": true, + "requires": { + "bluebird": "3.7.2", + "check-more-types": "2.24.0", + "debug": "4.3.1", + "execa": "5.0.0", + "lazy-ass": "1.6.0", + "ps-tree": 
"1.2.0", + "wait-on": "5.3.0" + } + }, + "static-extend": { + "version": "0.1.2", + "resolved": "https://registry.npmjs.org/static-extend/-/static-extend-0.1.2.tgz", + "integrity": "sha1-YICcOcv/VTNyJv1eC1IPNB8ftcY=", + "dev": true, + "requires": { + "define-property": "^0.2.5", + "object-copy": "^0.1.0" + }, + "dependencies": { + "define-property": { + "version": "0.2.5", + "resolved": "https://registry.npmjs.org/define-property/-/define-property-0.2.5.tgz", + "integrity": "sha1-w1se+RjsPJkPmlvFe+BKrOxcgRY=", + "dev": true, + "requires": { + "is-descriptor": "^0.1.0" + } + } + } + }, + "statuses": { + "version": "1.5.0", + "resolved": "https://registry.npmjs.org/statuses/-/statuses-1.5.0.tgz", + "integrity": "sha1-Fhx9rBd2Wf2YEfQ3cfqZOBR4Yow=", + "dev": true + }, + "std-env": { + "version": "2.3.0", + "resolved": "https://registry.npmjs.org/std-env/-/std-env-2.3.0.tgz", + "integrity": "sha512-4qT5B45+Kjef2Z6pE0BkskzsH0GO7GrND0wGlTM1ioUe3v0dGYx9ZJH0Aro/YyA8fqQ5EyIKDRjZojJYMFTflw==", + "dev": true, + "requires": { + "ci-info": "^3.0.0" + } + }, + "stream-browserify": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/stream-browserify/-/stream-browserify-2.0.2.tgz", + "integrity": "sha512-nX6hmklHs/gr2FuxYDltq8fJA1GDlxKQCz8O/IM4atRqBH8OORmBNgfvW5gG10GT/qQ9u0CzIvr2X5Pkt6ntqg==", + "dev": true, + "requires": { + "inherits": "~2.0.1", + "readable-stream": "^2.0.2" + } + }, + "stream-combiner": { + "version": "0.0.4", + "resolved": "https://registry.npmjs.org/stream-combiner/-/stream-combiner-0.0.4.tgz", + "integrity": "sha1-TV5DPBhSYd3mI8o/RMWGvPXErRQ=", + "dev": true, + "requires": { + "duplexer": "~0.1.1" + } + }, + "stream-each": { + "version": "1.2.3", + "resolved": "https://registry.npmjs.org/stream-each/-/stream-each-1.2.3.tgz", + "integrity": "sha512-vlMC2f8I2u/bZGqkdfLQW/13Zihpej/7PmSiMQsbYddxuTsJp8vRe2x2FvVExZg7FaOds43ROAuFJwPR4MTZLw==", + "dev": true, + "requires": { + "end-of-stream": "^1.1.0", + "stream-shift": "^1.0.0" + } + }, + 
"stream-http": { + "version": "2.8.3", + "resolved": "https://registry.npmjs.org/stream-http/-/stream-http-2.8.3.tgz", + "integrity": "sha512-+TSkfINHDo4J+ZobQLWiMouQYB+UVYFttRA94FpEzzJ7ZdqcL4uUUQ7WkdkI4DSozGmgBUE/a47L+38PenXhUw==", + "dev": true, + "requires": { + "builtin-status-codes": "^3.0.0", + "inherits": "^2.0.1", + "readable-stream": "^2.3.6", + "to-arraybuffer": "^1.0.0", + "xtend": "^4.0.0" + } + }, + "stream-shift": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/stream-shift/-/stream-shift-1.0.1.tgz", + "integrity": "sha512-AiisoFqQ0vbGcZgQPY1cdP2I76glaVA/RauYR4G4thNFgkTqr90yXTo4LYX60Jl+sIlPNHHdGSwo01AvbKUSVQ==", + "dev": true + }, + "strict-uri-encode": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/strict-uri-encode/-/strict-uri-encode-2.0.0.tgz", + "integrity": "sha1-ucczDHBChi9rFC3CdLvMWGbONUY=", + "dev": true + }, + "string-width": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/string-width/-/string-width-3.1.0.tgz", + "integrity": "sha512-vafcv6KjVZKSgz06oM/H6GDBrAtz8vdhQakGjFIvNrHA6y3HCF1CInLy+QLq8dTJPQ1b+KDUqDFctkdRW44e1w==", + "dev": true, + "requires": { + "emoji-regex": "^7.0.1", + "is-fullwidth-code-point": "^2.0.0", + "strip-ansi": "^5.1.0" + }, + "dependencies": { + "ansi-regex": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.1.0.tgz", + "integrity": "sha512-1apePfXM1UOSqw0o9IiFAovVz9M5S1Dg+4TrDwfMewQ6p/rmMueb7tWZjQ1rx4Loy1ArBggoqGpfqqdI4rondg==", + "dev": true + }, + "strip-ansi": { + "version": "5.2.0", + "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-5.2.0.tgz", + "integrity": "sha512-DuRs1gKbBqsMKIZlrffwlug8MHkcnpjs5VPmL1PAh+mA30U0DTotfDZ0d2UUsXpPmPmMMJ6W773MaA3J+lbiWA==", + "dev": true, + "requires": { + "ansi-regex": "^4.1.0" + } + } + } + }, + "string.prototype.trimend": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/string.prototype.trimend/-/string.prototype.trimend-1.0.4.tgz", + "integrity": 
"sha512-y9xCjw1P23Awk8EvTpcyL2NIr1j7wJ39f+k6lvRnSMz+mz9CGz9NYPelDk42kOz6+ql8xjfK8oYzy3jAP5QU5A==", + "dev": true, + "requires": { + "call-bind": "^1.0.2", + "define-properties": "^1.1.3" + } + }, + "string.prototype.trimstart": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/string.prototype.trimstart/-/string.prototype.trimstart-1.0.4.tgz", + "integrity": "sha512-jh6e984OBfvxS50tdY2nRZnoC5/mLFKOREQfw8t5yytkoUsJRNxvI/E39qu1sD0OtWI3OC0XgKSmcWwziwYuZw==", + "dev": true, + "requires": { + "call-bind": "^1.0.2", + "define-properties": "^1.1.3" + } + }, + "string_decoder": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/string_decoder/-/string_decoder-1.1.1.tgz", + "integrity": "sha512-n/ShnvDi6FHbbVfviro+WojiFzv+s8MPMHBczVePfUpDJLwoLT0ht1l4YwBCbi8pJAveEEdnkHyPyTP/mzRfwg==", + "dev": true, + "requires": { + "safe-buffer": "~5.1.0" + } + }, + "strip-ansi": { + "version": "3.0.1", + "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-3.0.1.tgz", + "integrity": "sha1-ajhfuIU9lS1f8F0Oiq+UJ43GPc8=", + "dev": true, + "requires": { + "ansi-regex": "^2.0.0" + } + }, + "strip-bom-string": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/strip-bom-string/-/strip-bom-string-1.0.0.tgz", + "integrity": "sha1-5SEekiQ2n7uB1jOi8ABE3IztrZI=", + "dev": true + }, + "strip-eof": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/strip-eof/-/strip-eof-1.0.0.tgz", + "integrity": "sha1-u0P/VZim6wXYm1n80SnJgzE2Br8=", + "dev": true + }, + "strip-final-newline": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/strip-final-newline/-/strip-final-newline-2.0.0.tgz", + "integrity": "sha512-BrpvfNAE3dcvq7ll3xVumzjKjZQ5tI1sEUIKr3Uoks0XUl45St3FlatVqef9prk4jRDzhW6WZg+3bk93y6pLjA==", + "dev": true + }, + "strip-json-comments": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/strip-json-comments/-/strip-json-comments-2.0.1.tgz", + "integrity": "sha1-PFMZQukIwml8DsNEhYwobHygpgo=", + "dev": true 
+ }, + "striptags": { + "version": "3.1.1", + "resolved": "https://registry.npmjs.org/striptags/-/striptags-3.1.1.tgz", + "integrity": "sha1-yMPn/db7S7OjKjt1LltePjgJPr0=", + "dev": true + }, + "stylehacks": { + "version": "4.0.3", + "resolved": "https://registry.npmjs.org/stylehacks/-/stylehacks-4.0.3.tgz", + "integrity": "sha512-7GlLk9JwlElY4Y6a/rmbH2MhVlTyVmiJd1PfTCqFaIBEGMYNsrO/v3SeGTdhBThLg4Z+NbOk/qFMwCa+J+3p/g==", + "dev": true, + "requires": { + "browserslist": "^4.0.0", + "postcss": "^7.0.0", + "postcss-selector-parser": "^3.0.0" + }, + "dependencies": { + "postcss-selector-parser": { + "version": "3.1.2", + "resolved": "https://registry.npmjs.org/postcss-selector-parser/-/postcss-selector-parser-3.1.2.tgz", + "integrity": "sha512-h7fJ/5uWuRVyOtkO45pnt1Ih40CEleeyCHzipqAZO2e5H20g25Y48uYnFUiShvY4rZWNJ/Bib/KVPmanaCtOhA==", + "dev": true, + "requires": { + "dot-prop": "^5.2.0", + "indexes-of": "^1.0.1", + "uniq": "^1.0.1" + } + } + } + }, + "stylis": { + "version": "4.1.1", + "resolved": "https://registry.npmjs.org/stylis/-/stylis-4.1.1.tgz", + "integrity": "sha512-lVrM/bNdhVX2OgBFNa2YJ9Lxj7kPzylieHd3TNjuGE0Re9JB7joL5VUKOVH1kdNNJTgGPpT8hmwIAPLaSyEVFQ==", + "dev": true + }, + "stylus": { + "version": "0.54.8", + "resolved": "https://registry.npmjs.org/stylus/-/stylus-0.54.8.tgz", + "integrity": "sha512-vr54Or4BZ7pJafo2mpf0ZcwA74rpuYCZbxrHBsH8kbcXOwSfvBFwsRfpGO5OD5fhG5HDCFW737PKaawI7OqEAg==", + "dev": true, + "requires": { + "css-parse": "~2.0.0", + "debug": "~3.1.0", + "glob": "^7.1.6", + "mkdirp": "~1.0.4", + "safer-buffer": "^2.1.2", + "sax": "~1.2.4", + "semver": "^6.3.0", + "source-map": "^0.7.3" + }, + "dependencies": { + "debug": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/debug/-/debug-3.1.0.tgz", + "integrity": "sha512-OX8XqP7/1a9cqkxYw2yXss15f26NKWBpDXQd0/uK/KPqdQhxbPa994hnzjcE2VqQpDslf55723cKPUOGSmMY3g==", + "dev": true, + "requires": { + "ms": "2.0.0" + } + }, + "mkdirp": { + "version": "1.0.4", + "resolved": 
"https://registry.npmjs.org/mkdirp/-/mkdirp-1.0.4.tgz", + "integrity": "sha512-vVqVZQyf3WLx2Shd0qJ9xuvqgAyKPLAiqITEtqW0oIUjzo3PePDd6fW9iFz30ef7Ysp/oiWqbhszeGWW2T6Gzw==", + "dev": true + }, + "ms": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz", + "integrity": "sha1-VgiurfwAvmwpAd9fmGF4jeDVl8g=", + "dev": true + }, + "source-map": { + "version": "0.7.3", + "resolved": "https://registry.npmjs.org/source-map/-/source-map-0.7.3.tgz", + "integrity": "sha512-CkCj6giN3S+n9qrYiBTX5gystlENnRW5jZeNLHpe6aue+SrHcG5VYwujhW9s4dY31mEGsxBDrHR6oI69fTXsaQ==", + "dev": true + } + } + }, + "stylus-loader": { + "version": "3.0.2", + "resolved": "https://registry.npmjs.org/stylus-loader/-/stylus-loader-3.0.2.tgz", + "integrity": "sha512-+VomPdZ6a0razP+zinir61yZgpw2NfljeSsdUF5kJuEzlo3khXhY19Fn6l8QQz1GRJGtMCo8nG5C04ePyV7SUA==", + "dev": true, + "requires": { + "loader-utils": "^1.0.2", + "lodash.clonedeep": "^4.5.0", + "when": "~3.6.x" + } + }, + "supports-color": { + "version": "6.1.0", + "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-6.1.0.tgz", + "integrity": "sha512-qe1jfm1Mg7Nq/NSh6XE24gPXROEVsWHxC1LIx//XNlD9iw7YZQGjZNjYN7xGaEG6iKdA8EtNFW6R0gjnVXp+wQ==", + "dev": true, + "requires": { + "has-flag": "^3.0.0" + } + }, + "svg-tags": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/svg-tags/-/svg-tags-1.0.0.tgz", + "integrity": "sha1-WPcc7jvVGbWdSyqEO2x95krAR2Q=", + "dev": true + }, + "svgo": { + "version": "1.3.2", + "resolved": "https://registry.npmjs.org/svgo/-/svgo-1.3.2.tgz", + "integrity": "sha512-yhy/sQYxR5BkC98CY7o31VGsg014AKLEPxdfhora76l36hD9Rdy5NZA/Ocn6yayNPgSamYdtX2rFJdcv07AYVw==", + "dev": true, + "requires": { + "chalk": "^2.4.1", + "coa": "^2.0.2", + "css-select": "^2.0.0", + "css-select-base-adapter": "^0.1.1", + "css-tree": "1.0.0-alpha.37", + "csso": "^4.0.2", + "js-yaml": "^3.13.1", + "mkdirp": "~0.5.1", + "object.values": "^1.1.0", + "sax": "~1.2.4", + "stable": "^0.1.8", + "unquote": 
"~1.1.1", + "util.promisify": "~1.0.0" + }, + "dependencies": { + "chalk": { + "version": "2.4.2", + "resolved": "https://registry.npmjs.org/chalk/-/chalk-2.4.2.tgz", + "integrity": "sha512-Mti+f9lpJNcwF4tWV8/OrTTtF1gZi+f8FqlyAdouralcFWFQWF2+NgCHShjkCb+IFBLq9buZwE1xckQU4peSuQ==", + "dev": true, + "requires": { + "ansi-styles": "^3.2.1", + "escape-string-regexp": "^1.0.5", + "supports-color": "^5.3.0" + } + }, + "supports-color": { + "version": "5.5.0", + "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-5.5.0.tgz", + "integrity": "sha512-QjVjwdXIt408MIiAqCX4oUKsgU2EqAGzs2Ppkm4aQYbjm+ZEWEcW4SfFNTr4uMNZma0ey4f5lgLrkB0aX0QMow==", + "dev": true, + "requires": { + "has-flag": "^3.0.0" + } + } + } + }, + "tailwindcss": { + "version": "1.9.6", + "resolved": "https://registry.npmjs.org/tailwindcss/-/tailwindcss-1.9.6.tgz", + "integrity": "sha512-nY8WYM/RLPqGsPEGEV2z63riyQPcHYZUJpAwdyBzVpxQHOHqHE+F/fvbCeXhdF1+TA5l72vSkZrtYCB9hRcwkQ==", + "dev": true, + "requires": { + "@fullhuman/postcss-purgecss": "^2.1.2", + "autoprefixer": "^9.4.5", + "browserslist": "^4.12.0", + "bytes": "^3.0.0", + "chalk": "^3.0.0 || ^4.0.0", + "color": "^3.1.2", + "detective": "^5.2.0", + "fs-extra": "^8.0.0", + "html-tags": "^3.1.0", + "lodash": "^4.17.20", + "node-emoji": "^1.8.1", + "normalize.css": "^8.0.1", + "object-hash": "^2.0.3", + "postcss": "^7.0.11", + "postcss-functions": "^3.0.0", + "postcss-js": "^2.0.0", + "postcss-nested": "^4.1.1", + "postcss-selector-parser": "^6.0.0", + "postcss-value-parser": "^4.1.0", + "pretty-hrtime": "^1.0.3", + "reduce-css-calc": "^2.1.6", + "resolve": "^1.14.2" + } + }, + "tapable": { + "version": "1.1.3", + "resolved": "https://registry.npmjs.org/tapable/-/tapable-1.1.3.tgz", + "integrity": "sha512-4WK/bYZmj8xLr+HUCODHGF1ZFzsYffasLUgEiMBY4fgtltdO6B4WJtlSbPaDTLpYTcGVwM2qLnFTICEcNxs3kA==", + "dev": true + }, + "tar": { + "version": "6.1.11", + "resolved": "https://registry.npmjs.org/tar/-/tar-6.1.11.tgz", + "integrity": 
"sha512-an/KZQzQUkZCkuoAA64hM92X0Urb6VpRhAFllDzz44U2mcD5scmT3zBc4VgVpkugF580+DQn8eAFSyoQt0tznA==", + "dev": true, + "requires": { + "chownr": "^2.0.0", + "fs-minipass": "^2.0.0", + "minipass": "^3.0.0", + "minizlib": "^2.1.1", + "mkdirp": "^1.0.3", + "yallist": "^4.0.0" + }, + "dependencies": { + "chownr": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/chownr/-/chownr-2.0.0.tgz", + "integrity": "sha512-bIomtDF5KGpdogkLd9VspvFzk9KfpyyGlS8YFVZl7TGPBHL5snIOnxeshwVgPteQ9b4Eydl+pVbIyE1DcvCWgQ==", + "dev": true + }, + "mkdirp": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/mkdirp/-/mkdirp-1.0.4.tgz", + "integrity": "sha512-vVqVZQyf3WLx2Shd0qJ9xuvqgAyKPLAiqITEtqW0oIUjzo3PePDd6fW9iFz30ef7Ysp/oiWqbhszeGWW2T6Gzw==", + "dev": true + }, + "yallist": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/yallist/-/yallist-4.0.0.tgz", + "integrity": "sha512-3wdGidZyq5PB084XLES5TpOSRA3wjXAlIWMhum2kRcv/41Sn2emQ0dycQW4uZXLejwKvg6EsvbdlVL+FYEct7A==", + "dev": true + } + } + }, + "term-size": { + "version": "2.2.1", + "resolved": "https://registry.npmjs.org/term-size/-/term-size-2.2.1.tgz", + "integrity": "sha512-wK0Ri4fOGjv/XPy8SBHZChl8CM7uMc5VML7SqiQ0zG7+J5Vr+RMQDoHa2CNT6KHUnTGIXH34UDMkPzAUyapBZg==", + "dev": true + }, + "terser": { + "version": "4.8.0", + "resolved": "https://registry.npmjs.org/terser/-/terser-4.8.0.tgz", + "integrity": "sha512-EAPipTNeWsb/3wLPeup1tVPaXfIaU68xMnVdPafIL1TV05OhASArYyIfFvnvJCNrR2NIOvDVNNTFRa+Re2MWyw==", + "dev": true, + "requires": { + "commander": "^2.20.0", + "source-map": "~0.6.1", + "source-map-support": "~0.5.12" + } + }, + "terser-webpack-plugin": { + "version": "1.4.5", + "resolved": "https://registry.npmjs.org/terser-webpack-plugin/-/terser-webpack-plugin-1.4.5.tgz", + "integrity": "sha512-04Rfe496lN8EYruwi6oPQkG0vo8C+HT49X687FZnpPF0qMAIHONI6HEXYPKDOE8e5HjXTyKfqRd/agHtH0kOtw==", + "dev": true, + "requires": { + "cacache": "^12.0.2", + "find-cache-dir": "^2.1.0", + "is-wsl": "^1.1.0", + 
"schema-utils": "^1.0.0", + "serialize-javascript": "^4.0.0", + "source-map": "^0.6.1", + "terser": "^4.1.2", + "webpack-sources": "^1.4.0", + "worker-farm": "^1.7.0" + }, + "dependencies": { + "find-cache-dir": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/find-cache-dir/-/find-cache-dir-2.1.0.tgz", + "integrity": "sha512-Tq6PixE0w/VMFfCgbONnkiQIVol/JJL7nRMi20fqzA4NRs9AfeqMGeRdPi3wIhYkxjeBaWh2rxwapn5Tu3IqOQ==", + "dev": true, + "requires": { + "commondir": "^1.0.1", + "make-dir": "^2.0.0", + "pkg-dir": "^3.0.0" + } + }, + "find-up": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/find-up/-/find-up-3.0.0.tgz", + "integrity": "sha512-1yD6RmLI1XBfxugvORwlck6f75tYL+iR0jqwsOrOxMZyGYqUuDhJ0l4AXdO1iX/FTs9cBAMEk1gWSEx1kSbylg==", + "dev": true, + "requires": { + "locate-path": "^3.0.0" + } + }, + "locate-path": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/locate-path/-/locate-path-3.0.0.tgz", + "integrity": "sha512-7AO748wWnIhNqAuaty2ZWHkQHRSNfPVIsPIfwEOWO22AmaoVrWavlOcMR5nzTLNYvp36X220/maaRsrec1G65A==", + "dev": true, + "requires": { + "p-locate": "^3.0.0", + "path-exists": "^3.0.0" + } + }, + "make-dir": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/make-dir/-/make-dir-2.1.0.tgz", + "integrity": "sha512-LS9X+dc8KLxXCb8dni79fLIIUA5VyZoyjSMCwTluaXA0o27cCK0bhXkpgw+sTXVpPy/lSO57ilRixqk0vDmtRA==", + "dev": true, + "requires": { + "pify": "^4.0.1", + "semver": "^5.6.0" + } + }, + "p-locate": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/p-locate/-/p-locate-3.0.0.tgz", + "integrity": "sha512-x+12w/To+4GFfgJhBEpiDcLozRJGegY+Ei7/z0tSLkMmxGZNybVMSfWj9aJn8Z5Fc7dBUNJOOVgPv2H7IwulSQ==", + "dev": true, + "requires": { + "p-limit": "^2.0.0" + } + }, + "path-exists": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/path-exists/-/path-exists-3.0.0.tgz", + "integrity": "sha1-zg6+ql94yxiSXqfYENe1mwEP1RU=", + "dev": true + }, + "pkg-dir": { + "version": "3.0.0", + "resolved": 
"https://registry.npmjs.org/pkg-dir/-/pkg-dir-3.0.0.tgz", + "integrity": "sha512-/E57AYkoeQ25qkxMj5PBOVgF8Kiu/h7cYS30Z5+R7WaiCCBfLq58ZI/dSeaEKb9WVJV5n/03QwrN3IeWIFllvw==", + "dev": true, + "requires": { + "find-up": "^3.0.0" + } + }, + "schema-utils": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-1.0.0.tgz", + "integrity": "sha512-i27Mic4KovM/lnGsy8whRCHhc7VicJajAjTrYg11K9zfZXnYIt4k5F+kZkwjnrhKzLic/HLU4j11mjsz2G/75g==", + "dev": true, + "requires": { + "ajv": "^6.1.0", + "ajv-errors": "^1.0.0", + "ajv-keywords": "^3.1.0" + } + }, + "semver": { + "version": "5.7.1", + "resolved": "https://registry.npmjs.org/semver/-/semver-5.7.1.tgz", + "integrity": "sha512-sauaDf/PZdVgrLTNYHRtpXa1iRiKcaebiKQ1BJdpQlWH2lCvexQdX55snPFyK7QzpudqbCI0qXFfOasHdyNDGQ==", + "dev": true + } + } + }, + "text-table": { + "version": "0.2.0", + "resolved": "https://registry.npmjs.org/text-table/-/text-table-0.2.0.tgz", + "integrity": "sha1-f17oI66AUgfACvLfSoTsP8+lcLQ=", + "dev": true + }, + "through": { + "version": "2.3.8", + "resolved": "https://registry.npmjs.org/through/-/through-2.3.8.tgz", + "integrity": "sha1-DdTJ/6q8NXlgsbckEV1+Doai4fU=", + "dev": true + }, + "through2": { + "version": "2.0.5", + "resolved": "https://registry.npmjs.org/through2/-/through2-2.0.5.tgz", + "integrity": "sha512-/mrRod8xqpA+IHSLyGCQ2s8SPHiCDEeQJSep1jqLYeEUClOFG2Qsh+4FU6G9VeqpZnGW/Su8LQGc4YKni5rYSQ==", + "dev": true, + "requires": { + "readable-stream": "~2.3.6", + "xtend": "~4.0.1" + } + }, + "thunky": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/thunky/-/thunky-1.1.0.tgz", + "integrity": "sha512-eHY7nBftgThBqOyHGVN+l8gF0BucP09fMo0oO/Lb0w1OF80dJv+lDVpXG60WMQvkcxAkNybKsrEIE3ZtKGmPrA==", + "dev": true + }, + "timers-browserify": { + "version": "2.0.12", + "resolved": "https://registry.npmjs.org/timers-browserify/-/timers-browserify-2.0.12.tgz", + "integrity": 
"sha512-9phl76Cqm6FhSX9Xe1ZUAMLtm1BLkKj2Qd5ApyWkXzsMRaA7dgr81kf4wJmQf/hAvg8EEyJxDo3du/0KlhPiKQ==", + "dev": true, + "requires": { + "setimmediate": "^1.0.4" + } + }, + "timsort": { + "version": "0.3.0", + "resolved": "https://registry.npmjs.org/timsort/-/timsort-0.3.0.tgz", + "integrity": "sha1-QFQRqOfmM5/mTbmiNN4R3DHgK9Q=", + "dev": true + }, + "to-arraybuffer": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/to-arraybuffer/-/to-arraybuffer-1.0.1.tgz", + "integrity": "sha1-fSKbH8xjfkZsoIEYCDanqr/4P0M=", + "dev": true + }, + "to-factory": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/to-factory/-/to-factory-1.0.0.tgz", + "integrity": "sha1-hzivi9lxIK0dQEeXKtpVY7+UebE=", + "dev": true + }, + "to-fast-properties": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/to-fast-properties/-/to-fast-properties-2.0.0.tgz", + "integrity": "sha1-3F5pjL0HkmW8c+A3doGk5Og/YW4=", + "dev": true + }, + "to-object-path": { + "version": "0.3.0", + "resolved": "https://registry.npmjs.org/to-object-path/-/to-object-path-0.3.0.tgz", + "integrity": "sha1-KXWIt7Dn4KwI4E5nL4XB9JmeF68=", + "dev": true, + "requires": { + "kind-of": "^3.0.2" + }, + "dependencies": { + "kind-of": { + "version": "3.2.2", + "resolved": "https://registry.npmjs.org/kind-of/-/kind-of-3.2.2.tgz", + "integrity": "sha1-MeohpzS6ubuw8yRm2JOupR5KPGQ=", + "dev": true, + "requires": { + "is-buffer": "^1.1.5" + } + } + } + }, + "to-readable-stream": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/to-readable-stream/-/to-readable-stream-1.0.0.tgz", + "integrity": "sha512-Iq25XBt6zD5npPhlLVXGFN3/gyR2/qODcKNNyTMd4vbm39HUaOiAM4PMq0eMVC/Tkxz+Zjdsc55g9yyz+Yq00Q==", + "dev": true + }, + "to-regex": { + "version": "3.0.2", + "resolved": "https://registry.npmjs.org/to-regex/-/to-regex-3.0.2.tgz", + "integrity": "sha512-FWtleNAtZ/Ki2qtqej2CXTOayOH9bHDQF+Q48VpWyDXjbYxA4Yz8iDB31zXOBUlOHHKidDbqGVrTUvQMPmBGBw==", + "dev": true, + "requires": { + "define-property": 
"^2.0.2", + "extend-shallow": "^3.0.2", + "regex-not": "^1.0.2", + "safe-regex": "^1.1.0" + } + }, + "to-regex-range": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/to-regex-range/-/to-regex-range-2.1.1.tgz", + "integrity": "sha1-fIDBe53+vlmeJzZ+DU3VWQFB2zg=", + "dev": true, + "requires": { + "is-number": "^3.0.0", + "repeat-string": "^1.6.1" + } + }, + "toidentifier": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/toidentifier/-/toidentifier-1.0.0.tgz", + "integrity": "sha512-yaOH/Pk/VEhBWWTlhI+qXxDFXlejDGcQipMlyxda9nthulaxLZUNcUqFxokp0vcYnvteJln5FNQDRrxj3YcbVw==", + "dev": true + }, + "toml": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/toml/-/toml-3.0.0.tgz", + "integrity": "sha512-y/mWCZinnvxjTKYhJ+pYxwD0mRLVvOtdS2Awbgxln6iEnt4rk0yBxeSBHkGJcPucRiG0e55mwWp+g/05rsrd6w==", + "dev": true + }, + "toposort": { + "version": "1.0.7", + "resolved": "https://registry.npmjs.org/toposort/-/toposort-1.0.7.tgz", + "integrity": "sha1-LmhELZ9k7HILjMieZEOsbKqVACk=", + "dev": true + }, + "tough-cookie": { + "version": "2.5.0", + "resolved": "https://registry.npmjs.org/tough-cookie/-/tough-cookie-2.5.0.tgz", + "integrity": "sha512-nlLsUzgm1kfLXSXfRZMc1KLAugd4hqJHDTvc2hDIwS3mZAfMEuMbc03SujMF+GEcpaX/qboeycw6iO8JwVv2+g==", + "dev": true, + "requires": { + "psl": "^1.1.28", + "punycode": "^2.1.1" + } + }, + "tr46": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/tr46/-/tr46-1.0.1.tgz", + "integrity": "sha1-qLE/1r/SSJUZZ0zN5VujaTtwbQk=", + "dev": true, + "requires": { + "punycode": "^2.1.0" + } + }, + "tslib": { + "version": "1.14.1", + "resolved": "https://registry.npmjs.org/tslib/-/tslib-1.14.1.tgz", + "integrity": "sha512-Xni35NKzjgMrwevysHTCArtLDpPvye8zV/0E4EyYn43P7/7qvQwPh9BGkHewbMulVntbigmcT7rdX3BNo9wRJg==", + "dev": true + }, + "tty-browserify": { + "version": "0.0.0", + "resolved": "https://registry.npmjs.org/tty-browserify/-/tty-browserify-0.0.0.tgz", + "integrity": 
"sha1-oVe6QC2iTpv5V/mqadUk7tQpAaY=", + "dev": true + }, + "tunnel-agent": { + "version": "0.6.0", + "resolved": "https://registry.npmjs.org/tunnel-agent/-/tunnel-agent-0.6.0.tgz", + "integrity": "sha1-J6XeoGs2sEoKmWZ3SykIaPD8QP0=", + "dev": true, + "requires": { + "safe-buffer": "^5.0.1" + } + }, + "tweetnacl": { + "version": "0.14.5", + "resolved": "https://registry.npmjs.org/tweetnacl/-/tweetnacl-0.14.5.tgz", + "integrity": "sha1-WuaBd/GS1EViadEIr6k/+HQ/T2Q=", + "dev": true + }, + "type-fest": { + "version": "0.21.3", + "resolved": "https://registry.npmjs.org/type-fest/-/type-fest-0.21.3.tgz", + "integrity": "sha512-t0rzBq87m3fVcduHDUFhKmyyX+9eo6WQjZvf51Ea/M0Q7+T374Jp1aUiyUl0GKxp8M/OETVHSDvmkyPgvX+X2w==", + "dev": true + }, + "type-is": { + "version": "1.6.18", + "resolved": "https://registry.npmjs.org/type-is/-/type-is-1.6.18.tgz", + "integrity": "sha512-TkRKr9sUTxEH8MdfuCSP7VizJyzRNMjj2J2do2Jr3Kym598JVdEksuzPQCnlFPW4ky9Q+iA+ma9BGm06XQBy8g==", + "dev": true, + "requires": { + "media-typer": "0.3.0", + "mime-types": "~2.1.24" + } + }, + "typed.js": { + "version": "git+ssh://git@github.com/mattboldt/typed.js.git#337109d9ac6558475eea301693e64071dafc9961", + "from": "git+https://github.com/mattboldt/typed.js.git", + "dev": true + }, + "typedarray": { + "version": "0.0.6", + "resolved": "https://registry.npmjs.org/typedarray/-/typedarray-0.0.6.tgz", + "integrity": "sha1-hnrHTjhkGHsdPUfZlqeOxciDB3c=", + "dev": true + }, + "typedarray-to-buffer": { + "version": "3.1.5", + "resolved": "https://registry.npmjs.org/typedarray-to-buffer/-/typedarray-to-buffer-3.1.5.tgz", + "integrity": "sha512-zdu8XMNEDepKKR+XYOXAVPtWui0ly0NtohUscw+UmaHiAWT8hrV1rr//H6V+0DvJ3OQ19S979M0laLfX8rm82Q==", + "dev": true, + "requires": { + "is-typedarray": "^1.0.0" + } + }, + "uc.micro": { + "version": "1.0.6", + "resolved": "https://registry.npmjs.org/uc.micro/-/uc.micro-1.0.6.tgz", + "integrity": "sha512-8Y75pvTYkLJW2hWQHXxoqRgV7qb9B+9vFEtidML+7koHUFapnVJAZ6cKs+Qjz5Aw3aZWHMC6u0wJE3At+nSGwA==", + 
"dev": true + }, + "unbox-primitive": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/unbox-primitive/-/unbox-primitive-1.0.1.tgz", + "integrity": "sha512-tZU/3NqK3dA5gpE1KtyiJUrEB0lxnGkMFHptJ7q6ewdZ8s12QrODwNbhIJStmJkd1QDXa1NRA8aF2A1zk/Ypyw==", + "dev": true, + "requires": { + "function-bind": "^1.1.1", + "has-bigints": "^1.0.1", + "has-symbols": "^1.0.2", + "which-boxed-primitive": "^1.0.2" + } + }, + "unicode-canonical-property-names-ecmascript": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/unicode-canonical-property-names-ecmascript/-/unicode-canonical-property-names-ecmascript-1.0.4.tgz", + "integrity": "sha512-jDrNnXWHd4oHiTZnx/ZG7gtUTVp+gCcTTKr8L0HjlwphROEW3+Him+IpvC+xcJEFegapiMZyZe02CyuOnRmbnQ==", + "dev": true + }, + "unicode-match-property-ecmascript": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/unicode-match-property-ecmascript/-/unicode-match-property-ecmascript-1.0.4.tgz", + "integrity": "sha512-L4Qoh15vTfntsn4P1zqnHulG0LdXgjSO035fEpdtp6YxXhMT51Q6vgM5lYdG/5X3MjS+k/Y9Xw4SFCY9IkR0rg==", + "dev": true, + "requires": { + "unicode-canonical-property-names-ecmascript": "^1.0.4", + "unicode-property-aliases-ecmascript": "^1.0.4" + } + }, + "unicode-match-property-value-ecmascript": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/unicode-match-property-value-ecmascript/-/unicode-match-property-value-ecmascript-1.2.0.tgz", + "integrity": "sha512-wjuQHGQVofmSJv1uVISKLE5zO2rNGzM/KCYZch/QQvez7C1hUhBIuZ701fYXExuufJFMPhv2SyL8CyoIfMLbIQ==", + "dev": true + }, + "unicode-property-aliases-ecmascript": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/unicode-property-aliases-ecmascript/-/unicode-property-aliases-ecmascript-1.1.0.tgz", + "integrity": "sha512-PqSoPh/pWetQ2phoj5RLiaqIk4kCNwoV3CI+LfGmWLKI3rE3kl1h59XpX2BjgDrmbxD9ARtQobPGU1SguCYuQg==", + "dev": true + }, + "union-value": { + "version": "1.0.1", + "resolved": 
"https://registry.npmjs.org/union-value/-/union-value-1.0.1.tgz", + "integrity": "sha512-tJfXmxMeWYnczCVs7XAEvIV7ieppALdyepWMkHkwciRpZraG/xwT+s2JN8+pr1+8jCRf80FFzvr+MpQeeoF4Xg==", + "dev": true, + "requires": { + "arr-union": "^3.1.0", + "get-value": "^2.0.6", + "is-extendable": "^0.1.1", + "set-value": "^2.0.1" + } + }, + "uniq": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/uniq/-/uniq-1.0.1.tgz", + "integrity": "sha1-sxxa6CVIRKOoKBVBzisEuGWnNP8=", + "dev": true + }, + "uniqs": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/uniqs/-/uniqs-2.0.0.tgz", + "integrity": "sha1-/+3ks2slKQaW5uFl1KWe25mOawI=", + "dev": true + }, + "unique-filename": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/unique-filename/-/unique-filename-1.1.1.tgz", + "integrity": "sha512-Vmp0jIp2ln35UTXuryvjzkjGdRyf9b2lTXuSYUiPmzRcl3FDtYqAwOnTJkAngD9SWhnoJzDbTKwaOrZ+STtxNQ==", + "dev": true, + "requires": { + "unique-slug": "^2.0.0" + } + }, + "unique-slug": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/unique-slug/-/unique-slug-2.0.2.tgz", + "integrity": "sha512-zoWr9ObaxALD3DOPfjPSqxt4fnZiWblxHIgeWqW8x7UqDzEtHEQLzji2cuJYQFCU6KmoJikOYAZlrTHHebjx2w==", + "dev": true, + "requires": { + "imurmurhash": "^0.1.4" + } + }, + "unique-string": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/unique-string/-/unique-string-2.0.0.tgz", + "integrity": "sha512-uNaeirEPvpZWSgzwsPGtU2zVSTrn/8L5q/IexZmH0eH6SA73CmAA5U4GwORTxQAZs95TAXLNqeLoPPNO5gZfWg==", + "dev": true, + "requires": { + "crypto-random-string": "^2.0.0" + } + }, + "universalify": { + "version": "0.1.2", + "resolved": "https://registry.npmjs.org/universalify/-/universalify-0.1.2.tgz", + "integrity": "sha512-rBJeI5CXAlmy1pV+617WB9J63U6XcazHHF2f2dbJix4XzpUF0RS3Zbj0FGIOCAva5P/d/GBOYaACQ1w+0azUkg==", + "dev": true + }, + "unpipe": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/unpipe/-/unpipe-1.0.0.tgz", + "integrity": 
"sha1-sr9O6FFKrmFltIF4KdIbLvSZBOw=", + "dev": true + }, + "unquote": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/unquote/-/unquote-1.1.1.tgz", + "integrity": "sha1-j97XMk7G6IoP+LkF58CYzcCG1UQ=", + "dev": true + }, + "unset-value": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/unset-value/-/unset-value-1.0.0.tgz", + "integrity": "sha1-g3aHP30jNRef+x5vw6jtDfyKtVk=", + "dev": true, + "requires": { + "has-value": "^0.3.1", + "isobject": "^3.0.0" + }, + "dependencies": { + "has-value": { + "version": "0.3.1", + "resolved": "https://registry.npmjs.org/has-value/-/has-value-0.3.1.tgz", + "integrity": "sha1-ex9YutpiyoJ+wKIHgCVlSEWZXh8=", + "dev": true, + "requires": { + "get-value": "^2.0.3", + "has-values": "^0.1.4", + "isobject": "^2.0.0" + }, + "dependencies": { + "isobject": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/isobject/-/isobject-2.1.0.tgz", + "integrity": "sha1-8GVWEJaj8dou9GJy+BXIQNh+DIk=", + "dev": true, + "requires": { + "isarray": "1.0.0" + } + } + } + }, + "has-values": { + "version": "0.1.4", + "resolved": "https://registry.npmjs.org/has-values/-/has-values-0.1.4.tgz", + "integrity": "sha1-bWHeldkd/Km5oCCJrThL/49it3E=", + "dev": true + } + } + }, + "upath": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/upath/-/upath-1.2.0.tgz", + "integrity": "sha512-aZwGpamFO61g3OlfT7OQCHqhGnW43ieH9WZeP7QxN/G/jS4jfqUkZxoryvJgVPEcrl5NL/ggHsSmLMHuH64Lhg==", + "dev": true + }, + "update-notifier": { + "version": "4.1.3", + "resolved": "https://registry.npmjs.org/update-notifier/-/update-notifier-4.1.3.tgz", + "integrity": "sha512-Yld6Z0RyCYGB6ckIjffGOSOmHXj1gMeE7aROz4MG+XMkmixBX4jUngrGXNYz7wPKBmtoD4MnBa2Anu7RSKht/A==", + "dev": true, + "requires": { + "boxen": "^4.2.0", + "chalk": "^3.0.0", + "configstore": "^5.0.1", + "has-yarn": "^2.1.0", + "import-lazy": "^2.1.0", + "is-ci": "^2.0.0", + "is-installed-globally": "^0.3.1", + "is-npm": "^4.0.0", + "is-yarn-global": "^0.3.0", + 
"latest-version": "^5.0.0", + "pupa": "^2.0.1", + "semver-diff": "^3.1.1", + "xdg-basedir": "^4.0.0" + }, + "dependencies": { + "ansi-styles": { + "version": "4.3.0", + "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-4.3.0.tgz", + "integrity": "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg==", + "dev": true, + "requires": { + "color-convert": "^2.0.1" + } + }, + "chalk": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/chalk/-/chalk-3.0.0.tgz", + "integrity": "sha512-4D3B6Wf41KOYRFdszmDqMCGq5VV/uMAB273JILmO+3jAlh8X4qDtdtgCR3fxtbLEMzSx22QdhnDcJvu2u1fVwg==", + "dev": true, + "requires": { + "ansi-styles": "^4.1.0", + "supports-color": "^7.1.0" + } + }, + "color-convert": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-2.0.1.tgz", + "integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==", + "dev": true, + "requires": { + "color-name": "~1.1.4" + } + }, + "color-name": { + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.4.tgz", + "integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==", + "dev": true + }, + "has-flag": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/has-flag/-/has-flag-4.0.0.tgz", + "integrity": "sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ==", + "dev": true + }, + "supports-color": { + "version": "7.2.0", + "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-7.2.0.tgz", + "integrity": "sha512-qpCAvRl9stuOHveKsn7HncJRvv501qIacKzQlO/+Lwxc9+0q2wLyv4Dfvt80/DPn2pqOBsJdDiogXGR9+OvwRw==", + "dev": true, + "requires": { + "has-flag": "^4.0.0" + } + } + } + }, + "upper-case": { + "version": "1.1.3", + "resolved": "https://registry.npmjs.org/upper-case/-/upper-case-1.1.3.tgz", + "integrity": 
"sha1-9rRQHC7EzdJrp4vnIilh3ndiFZg=", + "dev": true + }, + "uri-js": { + "version": "4.4.1", + "resolved": "https://registry.npmjs.org/uri-js/-/uri-js-4.4.1.tgz", + "integrity": "sha512-7rKUyy33Q1yc98pQ1DAmLtwX109F7TIfWlW1Ydo8Wl1ii1SeHieeh0HHfPeL2fMXK6z0s8ecKs9frCuLJvndBg==", + "dev": true, + "requires": { + "punycode": "^2.1.0" + } + }, + "urix": { + "version": "0.1.0", + "resolved": "https://registry.npmjs.org/urix/-/urix-0.1.0.tgz", + "integrity": "sha1-2pN/emLiH+wf0Y1Js1wpNQZ6bHI=", + "dev": true + }, + "url": { + "version": "0.11.0", + "resolved": "https://registry.npmjs.org/url/-/url-0.11.0.tgz", + "integrity": "sha1-ODjpfPxgUh63PFJajlW/3Z4uKPE=", + "dev": true, + "requires": { + "punycode": "1.3.2", + "querystring": "0.2.0" + }, + "dependencies": { + "punycode": { + "version": "1.3.2", + "resolved": "https://registry.npmjs.org/punycode/-/punycode-1.3.2.tgz", + "integrity": "sha1-llOgNvt8HuQjQvIyXM7v6jkmxI0=", + "dev": true + } + } + }, + "url-loader": { + "version": "1.1.2", + "resolved": "https://registry.npmjs.org/url-loader/-/url-loader-1.1.2.tgz", + "integrity": "sha512-dXHkKmw8FhPqu8asTc1puBfe3TehOCo2+RmOOev5suNCIYBcT626kxiWg1NBVkwc4rO8BGa7gP70W7VXuqHrjg==", + "dev": true, + "requires": { + "loader-utils": "^1.1.0", + "mime": "^2.0.3", + "schema-utils": "^1.0.0" + }, + "dependencies": { + "schema-utils": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-1.0.0.tgz", + "integrity": "sha512-i27Mic4KovM/lnGsy8whRCHhc7VicJajAjTrYg11K9zfZXnYIt4k5F+kZkwjnrhKzLic/HLU4j11mjsz2G/75g==", + "dev": true, + "requires": { + "ajv": "^6.1.0", + "ajv-errors": "^1.0.0", + "ajv-keywords": "^3.1.0" + } + } + } + }, + "url-parse": { + "version": "1.5.10", + "resolved": "https://registry.npmjs.org/url-parse/-/url-parse-1.5.10.tgz", + "integrity": "sha512-WypcfiRhfeUP9vvF0j6rw0J3hrWrw6iZv3+22h6iRMJ/8z1Tj6XfLP4DsUix5MhMPnXpiHDoKyoZ/bdCkwBCiQ==", + "dev": true, + "requires": { + "querystringify": "^2.1.1", + "requires-port": "^1.0.0" + 
} + }, + "url-parse-lax": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/url-parse-lax/-/url-parse-lax-3.0.0.tgz", + "integrity": "sha1-FrXK/Afb42dsGxmZF3gj1lA6yww=", + "dev": true, + "requires": { + "prepend-http": "^2.0.0" + } + }, + "use": { + "version": "3.1.1", + "resolved": "https://registry.npmjs.org/use/-/use-3.1.1.tgz", + "integrity": "sha512-cwESVXlO3url9YWlFW/TA9cshCEhtu7IKJ/p5soJ/gGpj7vbvFrAY/eIioQ6Dw23KjZhYgiIo8HOs1nQ2vr/oQ==", + "dev": true + }, + "util": { + "version": "0.11.1", + "resolved": "https://registry.npmjs.org/util/-/util-0.11.1.tgz", + "integrity": "sha512-HShAsny+zS2TZfaXxD9tYj4HQGlBezXZMZuM/S5PKLLoZkShZiGk9o5CzukI1LVHZvjdvZ2Sj1aW/Ndn2NB/HQ==", + "dev": true, + "requires": { + "inherits": "2.0.3" + }, + "dependencies": { + "inherits": { + "version": "2.0.3", + "resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.3.tgz", + "integrity": "sha1-Yzwsg+PaQqUC9SRmAiSA9CCCYd4=", + "dev": true + } + } + }, + "util-deprecate": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/util-deprecate/-/util-deprecate-1.0.2.tgz", + "integrity": "sha1-RQ1Nyfpw3nMnYvvS1KKJgUGaDM8=", + "dev": true + }, + "util.promisify": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/util.promisify/-/util.promisify-1.0.1.tgz", + "integrity": "sha512-g9JpC/3He3bm38zsLupWryXHoEcS22YHthuPQSJdMy6KNrzIRzWqcsHzD/WUnqe45whVou4VIsPew37DoXWNrA==", + "dev": true, + "requires": { + "define-properties": "^1.1.3", + "es-abstract": "^1.17.2", + "has-symbols": "^1.0.1", + "object.getownpropertydescriptors": "^2.1.0" + } + }, + "utila": { + "version": "0.4.0", + "resolved": "https://registry.npmjs.org/utila/-/utila-0.4.0.tgz", + "integrity": "sha1-ihagXURWV6Oupe7MWxKk+lN5dyw=", + "dev": true + }, + "utils-merge": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/utils-merge/-/utils-merge-1.0.1.tgz", + "integrity": "sha1-n5VxD1CiZ5R7LMwSR0HBAoQn5xM=", + "dev": true + }, + "uuid": { + "version": "3.4.0", + 
"resolved": "https://registry.npmjs.org/uuid/-/uuid-3.4.0.tgz", + "integrity": "sha512-HjSDRw6gZE5JMggctHBcjVak08+KEVhSIiDzFnT9S9aegmp85S/bReBVTb4QTFaRNptJ9kuYaNhnbNEOkbKb/A==", + "dev": true + }, + "validate-npm-package-name": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/validate-npm-package-name/-/validate-npm-package-name-3.0.0.tgz", + "integrity": "sha1-X6kS2B630MdK/BQN5zF/DKffQ34=", + "dev": true, + "requires": { + "builtins": "^1.0.3" + } + }, + "vary": { + "version": "1.1.2", + "resolved": "https://registry.npmjs.org/vary/-/vary-1.1.2.tgz", + "integrity": "sha1-IpnwLG3tMNSllhsLn3RSShj2NPw=", + "dev": true + }, + "vendors": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/vendors/-/vendors-1.0.4.tgz", + "integrity": "sha512-/juG65kTL4Cy2su4P8HjtkTxk6VmJDiOPBufWniqQ6wknac6jNiXS9vU+hO3wgusiyqWlzTbVHi0dyJqRONg3w==", + "dev": true + }, + "verror": { + "version": "1.10.0", + "resolved": "https://registry.npmjs.org/verror/-/verror-1.10.0.tgz", + "integrity": "sha1-OhBcoXBTr1XW4nDB+CiGguGNpAA=", + "dev": true, + "requires": { + "assert-plus": "^1.0.0", + "core-util-is": "1.0.2", + "extsprintf": "^1.2.0" + } + }, + "vm-browserify": { + "version": "1.1.2", + "resolved": "https://registry.npmjs.org/vm-browserify/-/vm-browserify-1.1.2.tgz", + "integrity": "sha512-2ham8XPWTONajOR0ohOKOHXkm3+gaBmGut3SRuu75xLd/RRaY6vqgh8NBYYk7+RW3u5AtzPQZG8F10LHkl0lAQ==", + "dev": true + }, + "vssue": { + "version": "1.4.8", + "resolved": "https://registry.npmjs.org/vssue/-/vssue-1.4.8.tgz", + "integrity": "sha512-Stp0CxF65Uv658qgYUgYKEDiWM8wskUfKCOT9ISJwz/Qn58N050vYnFYClnUXG060ZQi1YQxkTCuJAw8ee3YlQ==", + "dev": true, + "requires": { + "@vssue/utils": "^1.4.7", + "github-markdown-css": "^3.0.1", + "vue": "^2.6.10", + "vue-i18n": "^8.11.2", + "vue-property-decorator": "^8.1.1" + } + }, + "vue": { + "version": "2.6.12", + "resolved": "https://registry.npmjs.org/vue/-/vue-2.6.12.tgz", + "integrity": 
"sha512-uhmLFETqPPNyuLLbsKz6ioJ4q7AZHzD8ZVFNATNyICSZouqP2Sz0rotWQC8UNBF6VGSCs5abnKJoStA6JbCbfg==", + "dev": true + }, + "vue-class-component": { + "version": "7.2.6", + "resolved": "https://registry.npmjs.org/vue-class-component/-/vue-class-component-7.2.6.tgz", + "integrity": "sha512-+eaQXVrAm/LldalI272PpDe3+i4mPis0ORiMYxF6Ae4hyuCh15W8Idet7wPUEs4N4YptgFHGys4UrgNQOMyO6w==", + "dev": true + }, + "vue-disqus": { + "version": "3.0.5", + "resolved": "https://registry.npmjs.org/vue-disqus/-/vue-disqus-3.0.5.tgz", + "integrity": "sha512-T3Y68lXf5W2lYt6j4Y3kZ4opLPH0EAzqriy11MS4D4Q2+UN0tFuUXeYP1MxfvdyaCEboXSM6CUswxsULuNV70Q==", + "dev": true + }, + "vue-hot-reload-api": { + "version": "2.3.4", + "resolved": "https://registry.npmjs.org/vue-hot-reload-api/-/vue-hot-reload-api-2.3.4.tgz", + "integrity": "sha512-BXq3jwIagosjgNVae6tkHzzIk6a8MHFtzAdwhnV5VlvPTFxDCvIttgSiHWjdGoTJvXtmRu5HacExfdarRcFhog==", + "dev": true + }, + "vue-i18n": { + "version": "8.24.4", + "resolved": "https://registry.npmjs.org/vue-i18n/-/vue-i18n-8.24.4.tgz", + "integrity": "sha512-RZE94WUAGxEiBAANxQ0pptbRwDkNKNSXl3fnJslpFOxVMF6UkUtMDSuYGuW2blDrVgweIXVpethOVkYoNNT9xw==", + "dev": true + }, + "vue-loader": { + "version": "15.9.7", + "resolved": "https://registry.npmjs.org/vue-loader/-/vue-loader-15.9.7.tgz", + "integrity": "sha512-qzlsbLV1HKEMf19IqCJqdNvFJRCI58WNbS6XbPqK13MrLz65es75w392MSQ5TsARAfIjUw+ATm3vlCXUJSOH9Q==", + "dev": true, + "requires": { + "@vue/component-compiler-utils": "^3.1.0", + "hash-sum": "^1.0.2", + "loader-utils": "^1.1.0", + "vue-hot-reload-api": "^2.3.0", + "vue-style-loader": "^4.1.0" + } + }, + "vue-property-decorator": { + "version": "8.5.1", + "resolved": "https://registry.npmjs.org/vue-property-decorator/-/vue-property-decorator-8.5.1.tgz", + "integrity": "sha512-O6OUN2OMsYTGPvgFtXeBU3jPnX5ffQ9V4I1WfxFQ6dqz6cOUbR3Usou7kgFpfiXDvV7dJQSFcJ5yUPgOtPPm1Q==", + "dev": true, + "requires": { + "vue-class-component": "^7.1.0" + } + }, + "vue-router": { + "version": "3.5.1", + 
"resolved": "https://registry.npmjs.org/vue-router/-/vue-router-3.5.1.tgz", + "integrity": "sha512-RRQNLT8Mzr8z7eL4p7BtKvRaTSGdCbTy2+Mm5HTJvLGYSSeG9gDzNasJPP/yOYKLy+/cLG/ftrqq5fvkFwBJEw==", + "dev": true + }, + "vue-server-renderer": { + "version": "2.6.12", + "resolved": "https://registry.npmjs.org/vue-server-renderer/-/vue-server-renderer-2.6.12.tgz", + "integrity": "sha512-3LODaOsnQx7iMFTBLjki8xSyOxhCtbZ+nQie0wWY4iOVeEtTg1a3YQAjd82WvKxrWHHTshjvLb7OXMc2/dYuxw==", + "dev": true, + "requires": { + "chalk": "^1.1.3", + "hash-sum": "^1.0.2", + "he": "^1.1.0", + "lodash.template": "^4.5.0", + "lodash.uniq": "^4.5.0", + "resolve": "^1.2.0", + "serialize-javascript": "^3.1.0", + "source-map": "0.5.6" + }, + "dependencies": { + "ansi-styles": { + "version": "2.2.1", + "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-2.2.1.tgz", + "integrity": "sha1-tDLdM1i2NM914eRmQ2gkBTPB3b4=", + "dev": true + }, + "chalk": { + "version": "1.1.3", + "resolved": "https://registry.npmjs.org/chalk/-/chalk-1.1.3.tgz", + "integrity": "sha1-qBFcVeSnAv5NFQq9OHKCKn4J/Jg=", + "dev": true, + "requires": { + "ansi-styles": "^2.2.1", + "escape-string-regexp": "^1.0.2", + "has-ansi": "^2.0.0", + "strip-ansi": "^3.0.0", + "supports-color": "^2.0.0" + } + }, + "serialize-javascript": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/serialize-javascript/-/serialize-javascript-3.1.0.tgz", + "integrity": "sha512-JIJT1DGiWmIKhzRsG91aS6Ze4sFUrYbltlkg2onR5OrnNM02Kl/hnY/T4FN2omvyeBbQmMJv+K4cPOpGzOTFBg==", + "dev": true, + "requires": { + "randombytes": "^2.1.0" + } + }, + "source-map": { + "version": "0.5.6", + "resolved": "https://registry.npmjs.org/source-map/-/source-map-0.5.6.tgz", + "integrity": "sha1-dc449SvwczxafwwRjYEzSiu19BI=", + "dev": true + }, + "supports-color": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-2.0.0.tgz", + "integrity": "sha1-U10EXOa2Nj+kARcIRimZXp3zJMc=", + "dev": true + } + } + }, + 
"vue-style-loader": { + "version": "4.1.3", + "resolved": "https://registry.npmjs.org/vue-style-loader/-/vue-style-loader-4.1.3.tgz", + "integrity": "sha512-sFuh0xfbtpRlKfm39ss/ikqs9AbKCoXZBpHeVZ8Tx650o0k0q/YCM7FRvigtxpACezfq6af+a7JeqVTWvncqDg==", + "dev": true, + "requires": { + "hash-sum": "^1.0.2", + "loader-utils": "^1.0.2" + } + }, + "vue-template-compiler": { + "version": "2.6.12", + "resolved": "https://registry.npmjs.org/vue-template-compiler/-/vue-template-compiler-2.6.12.tgz", + "integrity": "sha512-OzzZ52zS41YUbkCBfdXShQTe69j1gQDZ9HIX8miuC9C3rBCk9wIRjLiZZLrmX9V+Ftq/YEyv1JaVr5Y/hNtByg==", + "dev": true, + "requires": { + "de-indent": "^1.0.2", + "he": "^1.1.0" + } + }, + "vue-template-es2015-compiler": { + "version": "1.9.1", + "resolved": "https://registry.npmjs.org/vue-template-es2015-compiler/-/vue-template-es2015-compiler-1.9.1.tgz", + "integrity": "sha512-4gDntzrifFnCEvyoO8PqyJDmguXgVPxKiIxrBKjIowvL9l+N66196+72XVYR8BBf1Uv1Fgt3bGevJ+sEmxfZzw==", + "dev": true + }, + "vue-typed-js": { + "version": "0.1.2", + "resolved": "https://registry.npmjs.org/vue-typed-js/-/vue-typed-js-0.1.2.tgz", + "integrity": "sha512-9kDGuz17uYJghe/1hepRWiWYssgUFtrKnMGPHlWjVRxdvs/d9xYYWMkgzztjhx1R1B33hXkN8ctmkzCYHQ+caA==", + "dev": true, + "requires": { + "typed.js": "git+https://github.com/mattboldt/typed.js.git" + } + }, + "vuejs-paginate": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/vuejs-paginate/-/vuejs-paginate-2.1.0.tgz", + "integrity": "sha512-gnwyXlmCiDOu9MLWxN5UJ4PGijKGNOMpHG8ujsrynCzTJljn/rp7Jq0WiDGDAMi5/u0AHuYIHhced+tUW4jblA==", + "dev": true + }, + "vuepress": { + "version": "1.8.2", + "resolved": "https://registry.npmjs.org/vuepress/-/vuepress-1.8.2.tgz", + "integrity": "sha512-BU1lUDwsA3ghf7a9ga4dsf0iTc++Z/l7BR1kUagHWVBHw7HNRgRDfAZBDDQXhllMILVToIxaTifpne9mSi94OA==", + "dev": true, + "requires": { + "@vuepress/core": "1.8.2", + "@vuepress/theme-default": "1.8.2", + "cac": "^6.5.6", + "envinfo": "^7.2.0", + "opencollective-postinstall": 
"^2.0.2", + "update-notifier": "^4.0.0" + } + }, + "vuepress-html-webpack-plugin": { + "version": "3.2.0", + "resolved": "https://registry.npmjs.org/vuepress-html-webpack-plugin/-/vuepress-html-webpack-plugin-3.2.0.tgz", + "integrity": "sha512-BebAEl1BmWlro3+VyDhIOCY6Gef2MCBllEVAP3NUAtMguiyOwo/dClbwJ167WYmcxHJKLl7b0Chr9H7fpn1d0A==", + "dev": true, + "requires": { + "html-minifier": "^3.2.3", + "loader-utils": "^0.2.16", + "lodash": "^4.17.3", + "pretty-error": "^2.0.2", + "tapable": "^1.0.0", + "toposort": "^1.0.0", + "util.promisify": "1.0.0" + }, + "dependencies": { + "big.js": { + "version": "3.2.0", + "resolved": "https://registry.npmjs.org/big.js/-/big.js-3.2.0.tgz", + "integrity": "sha512-+hN/Zh2D08Mx65pZ/4g5bsmNiZUuChDiQfTUQ7qJr4/kuopCr88xZsAXv6mBoZEsUI4OuGHlX59qE94K2mMW8Q==", + "dev": true + }, + "commander": { + "version": "2.17.1", + "resolved": "https://registry.npmjs.org/commander/-/commander-2.17.1.tgz", + "integrity": "sha512-wPMUt6FnH2yzG95SA6mzjQOEKUU3aLaDEmzs1ti+1E9h+CsrZghRlqEM/EJ4KscsQVG8uNN4uVreUeT8+drlgg==", + "dev": true + }, + "emojis-list": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/emojis-list/-/emojis-list-2.1.0.tgz", + "integrity": "sha1-TapNnbAPmBmIDHn6RXrlsJof04k=", + "dev": true + }, + "html-minifier": { + "version": "3.5.21", + "resolved": "https://registry.npmjs.org/html-minifier/-/html-minifier-3.5.21.tgz", + "integrity": "sha512-LKUKwuJDhxNa3uf/LPR/KVjm/l3rBqtYeCOAekvG8F1vItxMUpueGd94i/asDDr8/1u7InxzFA5EeGjhhG5mMA==", + "dev": true, + "requires": { + "camel-case": "3.0.x", + "clean-css": "4.2.x", + "commander": "2.17.x", + "he": "1.2.x", + "param-case": "2.1.x", + "relateurl": "0.2.x", + "uglify-js": "3.4.x" + } + }, + "json5": { + "version": "0.5.1", + "resolved": "https://registry.npmjs.org/json5/-/json5-0.5.1.tgz", + "integrity": "sha1-Hq3nrMASA0rYTiOWdn6tn6VJWCE=", + "dev": true + }, + "loader-utils": { + "version": "0.2.17", + "resolved": 
"https://registry.npmjs.org/loader-utils/-/loader-utils-0.2.17.tgz", + "integrity": "sha1-+G5jdNQyBabmxg6RlvF8Apm/s0g=", + "dev": true, + "requires": { + "big.js": "^3.1.3", + "emojis-list": "^2.0.0", + "json5": "^0.5.0", + "object-assign": "^4.0.1" + } + }, + "uglify-js": { + "version": "3.4.10", + "resolved": "https://registry.npmjs.org/uglify-js/-/uglify-js-3.4.10.tgz", + "integrity": "sha512-Y2VsbPVs0FIshJztycsO2SfPk7/KAF/T72qzv9u5EpQ4kB2hQoHlhNQTsNyy6ul7lQtqJN/AoWeS23OzEiEFxw==", + "dev": true, + "requires": { + "commander": "~2.19.0", + "source-map": "~0.6.1" + }, + "dependencies": { + "commander": { + "version": "2.19.0", + "resolved": "https://registry.npmjs.org/commander/-/commander-2.19.0.tgz", + "integrity": "sha512-6tvAOO+D6OENvRAh524Dh9jcfKTYDQAqvqezbCW82xj5X0pSrcpxtvRKHLG0yBY6SD7PSDrJaj+0AiOcKVd1Xg==", + "dev": true + } + } + }, + "util.promisify": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/util.promisify/-/util.promisify-1.0.0.tgz", + "integrity": "sha512-i+6qA2MPhvoKLuxnJNpXAGhg7HphQOSUq2LKMZD0m15EiskXUkMvKdF4Uui0WYeCUGea+o2cw/ZuwehtfsrNkA==", + "dev": true, + "requires": { + "define-properties": "^1.1.2", + "object.getownpropertydescriptors": "^2.0.3" + } + } + } + }, + "vuepress-plugin-container": { + "version": "2.1.5", + "resolved": "https://registry.npmjs.org/vuepress-plugin-container/-/vuepress-plugin-container-2.1.5.tgz", + "integrity": "sha512-TQrDX/v+WHOihj3jpilVnjXu9RcTm6m8tzljNJwYhxnJUW0WWQ0hFLcDTqTBwgKIFdEiSxVOmYE+bJX/sq46MA==", + "dev": true, + "requires": { + "@vuepress/shared-utils": "^1.2.0", + "markdown-it-container": "^2.0.0" + } + }, + "vuepress-plugin-dehydrate": { + "version": "1.1.5", + "resolved": "https://registry.npmjs.org/vuepress-plugin-dehydrate/-/vuepress-plugin-dehydrate-1.1.5.tgz", + "integrity": "sha512-9F2x1vLCK4poPUMkLupD4HsgWdbZ68Escvma+DE1Dk6aAJdH5FGwmfOMxj4sMCBwz7S4s6bTMna+QQgD3+bzBA==", + "dev": true, + "requires": { + "@vuepress/shared-utils": "^1.2.0" + } + }, + "vuepress-plugin-disqus": 
{ + "version": "0.2.0", + "resolved": "https://registry.npmjs.org/vuepress-plugin-disqus/-/vuepress-plugin-disqus-0.2.0.tgz", + "integrity": "sha512-kx+AeVzjJ9lx9bufLt1/X35V1VXfnQ1srkDMIzFKD9NyQ3eycsWQRcGO1dFe1HMrY3+7fTu+1/JeUEUEpGZ5tw==", + "dev": true, + "requires": { + "vue-disqus": "^3.0.5" + } + }, + "vuepress-plugin-feed": { + "version": "0.1.9", + "resolved": "https://registry.npmjs.org/vuepress-plugin-feed/-/vuepress-plugin-feed-0.1.9.tgz", + "integrity": "sha512-iOJkR7zPmJAX0TEVdxNsUT07xNQB6lZFpU7DqsYzO01FhaPkMOOVM5Vx5a/iOOuOggAeoI9H9yuah+cRmCImlw==", + "dev": true, + "requires": { + "feed": "2.0.4", + "lodash.defaultsdeep": "4.6.1", + "lodash.isempty": "4.4.0", + "lodash.trimend": "^4.5.1", + "lodash.trimstart": "^4.5.1", + "remove-markdown": "0.3.0", + "striptags": "3.1.1" + } + }, + "vuepress-plugin-mailchimp": { + "version": "1.4.2", + "resolved": "https://registry.npmjs.org/vuepress-plugin-mailchimp/-/vuepress-plugin-mailchimp-1.4.2.tgz", + "integrity": "sha512-4t5ZaKZXu5ZkwgE+WW//7CgXgz6DEhRefGrO5aql4PwapauNXlHKgQ2JMf9FRe5y5WHjNpDHYveEDNzISZmxJw==", + "dev": true, + "requires": { + "jsonp": "^0.2.1", + "query-string": "^6.9.0" + } + }, + "vuepress-plugin-sitemap": { + "version": "2.3.1", + "resolved": "https://registry.npmjs.org/vuepress-plugin-sitemap/-/vuepress-plugin-sitemap-2.3.1.tgz", + "integrity": "sha512-n+8lbukhrKrsI9H/EX0EBgkE1pn85LAQFvQ5dIvrZP4Kz6JxPOPPNTQmZMhahQV1tXbLZQCEN7A1WZH4x+arJQ==", + "dev": true, + "requires": { + "sitemap": "^3.0.0" + } + }, + "vuepress-plugin-smooth-scroll": { + "version": "0.0.3", + "resolved": "https://registry.npmjs.org/vuepress-plugin-smooth-scroll/-/vuepress-plugin-smooth-scroll-0.0.3.tgz", + "integrity": "sha512-qsQkDftLVFLe8BiviIHaLV0Ea38YLZKKonDGsNQy1IE0wllFpFIEldWD8frWZtDFdx6b/O3KDMgVQ0qp5NjJCg==", + "dev": true, + "requires": { + "smoothscroll-polyfill": "^0.4.3" + } + }, + "wait-on": { + "version": "5.3.0", + "resolved": "https://registry.npmjs.org/wait-on/-/wait-on-5.3.0.tgz", + "integrity": 
"sha512-DwrHrnTK+/0QFaB9a8Ol5Lna3k7WvUR4jzSKmz0YaPBpuN2sACyiPVKVfj6ejnjcajAcvn3wlbTyMIn9AZouOg==", + "dev": true, + "requires": { + "axios": "^0.21.1", + "joi": "^17.3.0", + "lodash": "^4.17.21", + "minimist": "^1.2.5", + "rxjs": "^6.6.3" + } + }, + "watchpack": { + "version": "1.7.5", + "resolved": "https://registry.npmjs.org/watchpack/-/watchpack-1.7.5.tgz", + "integrity": "sha512-9P3MWk6SrKjHsGkLT2KHXdQ/9SNkyoJbabxnKOoJepsvJjJG8uYTR3yTPxPQvNDI3w4Nz1xnE0TLHK4RIVe/MQ==", + "dev": true, + "requires": { + "chokidar": "^3.4.1", + "graceful-fs": "^4.1.2", + "neo-async": "^2.5.0", + "watchpack-chokidar2": "^2.0.1" + }, + "dependencies": { + "anymatch": { + "version": "3.1.2", + "resolved": "https://registry.npmjs.org/anymatch/-/anymatch-3.1.2.tgz", + "integrity": "sha512-P43ePfOAIupkguHUycrc4qJ9kz8ZiuOUijaETwX7THt0Y/GNK7v0aa8rY816xWjZ7rJdA5XdMcpVFTKMq+RvWg==", + "dev": true, + "optional": true, + "requires": { + "normalize-path": "^3.0.0", + "picomatch": "^2.0.4" + } + }, + "binary-extensions": { + "version": "2.2.0", + "resolved": "https://registry.npmjs.org/binary-extensions/-/binary-extensions-2.2.0.tgz", + "integrity": "sha512-jDctJ/IVQbZoJykoeHbhXpOlNBqGNcwXJKJog42E5HDPUwQTSdjCHdihjj0DlnheQ7blbT6dHOafNAiS8ooQKA==", + "dev": true, + "optional": true + }, + "braces": { + "version": "3.0.2", + "resolved": "https://registry.npmjs.org/braces/-/braces-3.0.2.tgz", + "integrity": "sha512-b8um+L1RzM3WDSzvhm6gIz1yfTbBt6YTlcEKAvsmqCZZFw46z626lVj9j1yEPW33H5H+lBQpZMP1k8l+78Ha0A==", + "dev": true, + "optional": true, + "requires": { + "fill-range": "^7.0.1" + } + }, + "chokidar": { + "version": "3.5.1", + "resolved": "https://registry.npmjs.org/chokidar/-/chokidar-3.5.1.tgz", + "integrity": "sha512-9+s+Od+W0VJJzawDma/gvBNQqkTiqYTWLuZoyAsivsI4AaWTCzHG06/TMjsf1cYe9Cb97UCEhjz7HvnPk2p/tw==", + "dev": true, + "optional": true, + "requires": { + "anymatch": "~3.1.1", + "braces": "~3.0.2", + "fsevents": "~2.3.1", + "glob-parent": "~5.1.0", + "is-binary-path": "~2.1.0", + "is-glob": 
"~4.0.1", + "normalize-path": "~3.0.0", + "readdirp": "~3.5.0" + } + }, + "fill-range": { + "version": "7.0.1", + "resolved": "https://registry.npmjs.org/fill-range/-/fill-range-7.0.1.tgz", + "integrity": "sha512-qOo9F+dMUmC2Lcb4BbVvnKJxTPjCm+RRpe4gDuGrzkL7mEVl/djYSu2OdQ2Pa302N4oqkSg9ir6jaLWJ2USVpQ==", + "dev": true, + "optional": true, + "requires": { + "to-regex-range": "^5.0.1" + } + }, + "fsevents": { + "version": "2.3.2", + "resolved": "https://registry.npmjs.org/fsevents/-/fsevents-2.3.2.tgz", + "integrity": "sha512-xiqMQR4xAeHTuB9uWm+fFRcIOgKBMiOBP+eXiyT7jsgVCq1bkVygt00oASowB7EdtpOHaaPgKt812P9ab+DDKA==", + "dev": true, + "optional": true + }, + "glob-parent": { + "version": "5.1.2", + "resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.2.tgz", + "integrity": "sha512-AOIgSQCepiJYwP3ARnGx+5VnTu2HBYdzbGP45eLw1vr3zB3vZLeyed1sC9hnbcOc9/SrMyM5RPQrkGz4aS9Zow==", + "dev": true, + "optional": true, + "requires": { + "is-glob": "^4.0.1" + } + }, + "is-binary-path": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/is-binary-path/-/is-binary-path-2.1.0.tgz", + "integrity": "sha512-ZMERYes6pDydyuGidse7OsHxtbI7WVeUEozgR/g7rd0xUimYNlvZRE/K2MgZTjWy725IfelLeVcEM97mmtRGXw==", + "dev": true, + "optional": true, + "requires": { + "binary-extensions": "^2.0.0" + } + }, + "is-number": { + "version": "7.0.0", + "resolved": "https://registry.npmjs.org/is-number/-/is-number-7.0.0.tgz", + "integrity": "sha512-41Cifkg6e8TylSpdtTpeLVMqvSBEVzTttHvERD741+pnZ8ANv0004MRL43QKPDlK9cGvNp6NZWZUBlbGXYxxng==", + "dev": true, + "optional": true + }, + "readdirp": { + "version": "3.5.0", + "resolved": "https://registry.npmjs.org/readdirp/-/readdirp-3.5.0.tgz", + "integrity": "sha512-cMhu7c/8rdhkHXWsY+osBhfSy0JikwpHK/5+imo+LpeasTF8ouErHrlYkwT0++njiyuDvc7OFY5T3ukvZ8qmFQ==", + "dev": true, + "optional": true, + "requires": { + "picomatch": "^2.2.1" + } + }, + "to-regex-range": { + "version": "5.0.1", + "resolved": 
"https://registry.npmjs.org/to-regex-range/-/to-regex-range-5.0.1.tgz", + "integrity": "sha512-65P7iz6X5yEr1cwcgvQxbbIw7Uk3gOy5dIdtZ4rDveLqhrdJP+Li/Hx6tyK0NEb+2GCyneCMJiGqrADCSNk8sQ==", + "dev": true, + "optional": true, + "requires": { + "is-number": "^7.0.0" + } + } + } + }, + "watchpack-chokidar2": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/watchpack-chokidar2/-/watchpack-chokidar2-2.0.1.tgz", + "integrity": "sha512-nCFfBIPKr5Sh61s4LPpy1Wtfi0HE8isJ3d2Yb5/Ppw2P2B/3eVSEBjKfN0fmHJSK14+31KwMKmcrzs2GM4P0Ww==", + "dev": true, + "optional": true, + "requires": { + "chokidar": "^2.1.8" + } + }, + "wbuf": { + "version": "1.7.3", + "resolved": "https://registry.npmjs.org/wbuf/-/wbuf-1.7.3.tgz", + "integrity": "sha512-O84QOnr0icsbFGLS0O3bI5FswxzRr8/gHwWkDlQFskhSPryQXvrTMxjxGP4+iWYoauLoBvfDpkrOauZ+0iZpDA==", + "dev": true, + "requires": { + "minimalistic-assert": "^1.0.0" + } + }, + "webidl-conversions": { + "version": "4.0.2", + "resolved": "https://registry.npmjs.org/webidl-conversions/-/webidl-conversions-4.0.2.tgz", + "integrity": "sha512-YQ+BmxuTgd6UXZW3+ICGfyqRyHXVlD5GtQr5+qjiNW7bF0cqrzX500HVXPBOvgXb5YnzDd+h0zqyv61KUD7+Sg==", + "dev": true + }, + "webpack": { + "version": "4.46.0", + "resolved": "https://registry.npmjs.org/webpack/-/webpack-4.46.0.tgz", + "integrity": "sha512-6jJuJjg8znb/xRItk7bkT0+Q7AHCYjjFnvKIWQPkNIOyRqoCGvkOs0ipeQzrqz4l5FtN5ZI/ukEHroeX/o1/5Q==", + "dev": true, + "requires": { + "@webassemblyjs/ast": "1.9.0", + "@webassemblyjs/helper-module-context": "1.9.0", + "@webassemblyjs/wasm-edit": "1.9.0", + "@webassemblyjs/wasm-parser": "1.9.0", + "acorn": "^6.4.1", + "ajv": "^6.10.2", + "ajv-keywords": "^3.4.1", + "chrome-trace-event": "^1.0.2", + "enhanced-resolve": "^4.5.0", + "eslint-scope": "^4.0.3", + "json-parse-better-errors": "^1.0.2", + "loader-runner": "^2.4.0", + "loader-utils": "^1.2.3", + "memory-fs": "^0.4.1", + "micromatch": "^3.1.10", + "mkdirp": "^0.5.3", + "neo-async": "^2.6.1", + "node-libs-browser": "^2.2.1", + 
"schema-utils": "^1.0.0", + "tapable": "^1.1.3", + "terser-webpack-plugin": "^1.4.3", + "watchpack": "^1.7.4", + "webpack-sources": "^1.4.1" + }, + "dependencies": { + "acorn": { + "version": "6.4.2", + "resolved": "https://registry.npmjs.org/acorn/-/acorn-6.4.2.tgz", + "integrity": "sha512-XtGIhXwF8YM8bJhGxG5kXgjkEuNGLTkoYqVE+KMR+aspr4KGYmKYg7yUe3KghyQ9yheNwLnjmzh/7+gfDBmHCQ==", + "dev": true + }, + "schema-utils": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-1.0.0.tgz", + "integrity": "sha512-i27Mic4KovM/lnGsy8whRCHhc7VicJajAjTrYg11K9zfZXnYIt4k5F+kZkwjnrhKzLic/HLU4j11mjsz2G/75g==", + "dev": true, + "requires": { + "ajv": "^6.1.0", + "ajv-errors": "^1.0.0", + "ajv-keywords": "^3.1.0" + } + } + } + }, + "webpack-chain": { + "version": "6.5.1", + "resolved": "https://registry.npmjs.org/webpack-chain/-/webpack-chain-6.5.1.tgz", + "integrity": "sha512-7doO/SRtLu8q5WM0s7vPKPWX580qhi0/yBHkOxNkv50f6qB76Zy9o2wRTrrPULqYTvQlVHuvbA8v+G5ayuUDsA==", + "dev": true, + "requires": { + "deepmerge": "^1.5.2", + "javascript-stringify": "^2.0.1" + }, + "dependencies": { + "javascript-stringify": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/javascript-stringify/-/javascript-stringify-2.1.0.tgz", + "integrity": "sha512-JVAfqNPTvNq3sB/VHQJAFxN/sPgKnsKrCwyRt15zwNCdrMMJDdcEOdubuy+DuJYYdm0ox1J4uzEuYKkN+9yhVg==", + "dev": true + } + } + }, + "webpack-dev-middleware": { + "version": "3.7.3", + "resolved": "https://registry.npmjs.org/webpack-dev-middleware/-/webpack-dev-middleware-3.7.3.tgz", + "integrity": "sha512-djelc/zGiz9nZj/U7PTBi2ViorGJXEWo/3ltkPbDyxCXhhEXkW0ce99falaok4TPj+AsxLiXJR0EBOb0zh9fKQ==", + "dev": true, + "requires": { + "memory-fs": "^0.4.1", + "mime": "^2.4.4", + "mkdirp": "^0.5.1", + "range-parser": "^1.2.1", + "webpack-log": "^2.0.0" + } + }, + "webpack-dev-server": { + "version": "3.11.2", + "resolved": "https://registry.npmjs.org/webpack-dev-server/-/webpack-dev-server-3.11.2.tgz", + "integrity": 
"sha512-A80BkuHRQfCiNtGBS1EMf2ChTUs0x+B3wGDFmOeT4rmJOHhHTCH2naNxIHhmkr0/UillP4U3yeIyv1pNp+QDLQ==", + "dev": true, + "requires": { + "ansi-html": "0.0.7", + "bonjour": "^3.5.0", + "chokidar": "^2.1.8", + "compression": "^1.7.4", + "connect-history-api-fallback": "^1.6.0", + "debug": "^4.1.1", + "del": "^4.1.1", + "express": "^4.17.1", + "html-entities": "^1.3.1", + "http-proxy-middleware": "0.19.1", + "import-local": "^2.0.0", + "internal-ip": "^4.3.0", + "ip": "^1.1.5", + "is-absolute-url": "^3.0.3", + "killable": "^1.0.1", + "loglevel": "^1.6.8", + "opn": "^5.5.0", + "p-retry": "^3.0.1", + "portfinder": "^1.0.26", + "schema-utils": "^1.0.0", + "selfsigned": "^1.10.8", + "semver": "^6.3.0", + "serve-index": "^1.9.1", + "sockjs": "^0.3.21", + "sockjs-client": "^1.5.0", + "spdy": "^4.0.2", + "strip-ansi": "^3.0.1", + "supports-color": "^6.1.0", + "url": "^0.11.0", + "webpack-dev-middleware": "^3.7.2", + "webpack-log": "^2.0.0", + "ws": "^6.2.1", + "yargs": "^13.3.2" + }, + "dependencies": { + "is-absolute-url": { + "version": "3.0.3", + "resolved": "https://registry.npmjs.org/is-absolute-url/-/is-absolute-url-3.0.3.tgz", + "integrity": "sha512-opmNIX7uFnS96NtPmhWQgQx6/NYFgsUXYMllcfzwWKUMwfo8kku1TvE6hkNcH+Q1ts5cMVrsY7j0bxXQDciu9Q==", + "dev": true + }, + "schema-utils": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-1.0.0.tgz", + "integrity": "sha512-i27Mic4KovM/lnGsy8whRCHhc7VicJajAjTrYg11K9zfZXnYIt4k5F+kZkwjnrhKzLic/HLU4j11mjsz2G/75g==", + "dev": true, + "requires": { + "ajv": "^6.1.0", + "ajv-errors": "^1.0.0", + "ajv-keywords": "^3.1.0" + } + } + } + }, + "webpack-log": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/webpack-log/-/webpack-log-2.0.0.tgz", + "integrity": "sha512-cX8G2vR/85UYG59FgkoMamwHUIkSSlV3bBMRsbxVXVUk2j6NleCKjQ/WE9eYg9WY4w25O9w8wKP4rzNZFmUcUg==", + "dev": true, + "requires": { + "ansi-colors": "^3.0.0", + "uuid": "^3.3.2" + } + }, + "webpack-merge": { + "version": "4.2.2", + 
"resolved": "https://registry.npmjs.org/webpack-merge/-/webpack-merge-4.2.2.tgz", + "integrity": "sha512-TUE1UGoTX2Cd42j3krGYqObZbOD+xF7u28WB7tfUordytSjbWTIjK/8V0amkBfTYN4/pB/GIDlJZZ657BGG19g==", + "dev": true, + "requires": { + "lodash": "^4.17.15" + } + }, + "webpack-sources": { + "version": "1.4.3", + "resolved": "https://registry.npmjs.org/webpack-sources/-/webpack-sources-1.4.3.tgz", + "integrity": "sha512-lgTS3Xhv1lCOKo7SA5TjKXMjpSM4sBjNV5+q2bqesbSPs5FjGmU6jjtBSkX9b4qW87vDIsCIlUPOEhbZrMdjeQ==", + "dev": true, + "requires": { + "source-list-map": "^2.0.0", + "source-map": "~0.6.1" + } + }, + "webpackbar": { + "version": "3.2.0", + "resolved": "https://registry.npmjs.org/webpackbar/-/webpackbar-3.2.0.tgz", + "integrity": "sha512-PC4o+1c8gWWileUfwabe0gqptlXUDJd5E0zbpr2xHP1VSOVlZVPBZ8j6NCR8zM5zbKdxPhctHXahgpNK1qFDPw==", + "dev": true, + "requires": { + "ansi-escapes": "^4.1.0", + "chalk": "^2.4.1", + "consola": "^2.6.0", + "figures": "^3.0.0", + "pretty-time": "^1.1.0", + "std-env": "^2.2.1", + "text-table": "^0.2.0", + "wrap-ansi": "^5.1.0" + }, + "dependencies": { + "chalk": { + "version": "2.4.2", + "resolved": "https://registry.npmjs.org/chalk/-/chalk-2.4.2.tgz", + "integrity": "sha512-Mti+f9lpJNcwF4tWV8/OrTTtF1gZi+f8FqlyAdouralcFWFQWF2+NgCHShjkCb+IFBLq9buZwE1xckQU4peSuQ==", + "dev": true, + "requires": { + "ansi-styles": "^3.2.1", + "escape-string-regexp": "^1.0.5", + "supports-color": "^5.3.0" + } + }, + "supports-color": { + "version": "5.5.0", + "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-5.5.0.tgz", + "integrity": "sha512-QjVjwdXIt408MIiAqCX4oUKsgU2EqAGzs2Ppkm4aQYbjm+ZEWEcW4SfFNTr4uMNZma0ey4f5lgLrkB0aX0QMow==", + "dev": true, + "requires": { + "has-flag": "^3.0.0" + } + } + } + }, + "websocket-driver": { + "version": "0.7.4", + "resolved": "https://registry.npmjs.org/websocket-driver/-/websocket-driver-0.7.4.tgz", + "integrity": "sha512-b17KeDIQVjvb0ssuSDF2cYXSg2iztliJ4B9WdsuB6J952qCPKmnVq4DyW5motImXHDC1cBT/1UezrJVsKw5zjg==", 
+ "dev": true, + "requires": { + "http-parser-js": ">=0.5.1", + "safe-buffer": ">=5.1.0", + "websocket-extensions": ">=0.1.1" + } + }, + "websocket-extensions": { + "version": "0.1.4", + "resolved": "https://registry.npmjs.org/websocket-extensions/-/websocket-extensions-0.1.4.tgz", + "integrity": "sha512-OqedPIGOfsDlo31UNwYbCFMSaO9m9G/0faIHj5/dZFDMFqPTcx6UwqyOy3COEaEOg/9VsGIpdqn62W5KhoKSpg==", + "dev": true + }, + "whatwg-url": { + "version": "7.1.0", + "resolved": "https://registry.npmjs.org/whatwg-url/-/whatwg-url-7.1.0.tgz", + "integrity": "sha512-WUu7Rg1DroM7oQvGWfOiAK21n74Gg+T4elXEQYkOhtyLeWiJFoOGLXPKI/9gzIie9CtwVLm8wtw6YJdKyxSjeg==", + "dev": true, + "requires": { + "lodash.sortby": "^4.7.0", + "tr46": "^1.0.1", + "webidl-conversions": "^4.0.2" + } + }, + "when": { + "version": "3.6.4", + "resolved": "https://registry.npmjs.org/when/-/when-3.6.4.tgz", + "integrity": "sha1-RztRfsFZ4rhQBUl6E5g/CVQS404=", + "dev": true + }, + "which": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/which/-/which-2.0.2.tgz", + "integrity": "sha512-BLI3Tl1TW3Pvl70l3yq3Y64i+awpwXqsGBYWkkqMtnbXgrMD+yj7rhW0kuEDxzJaYXGjEW5ogapKNMEKNMjibA==", + "dev": true, + "requires": { + "isexe": "^2.0.0" + } + }, + "which-boxed-primitive": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/which-boxed-primitive/-/which-boxed-primitive-1.0.2.tgz", + "integrity": "sha512-bwZdv0AKLpplFY2KZRX6TvyuN7ojjr7lwkg6ml0roIy9YeuSr7JS372qlNW18UQYzgYK9ziGcerWqZOmEn9VNg==", + "dev": true, + "requires": { + "is-bigint": "^1.0.1", + "is-boolean-object": "^1.1.0", + "is-number-object": "^1.0.4", + "is-string": "^1.0.5", + "is-symbol": "^1.0.3" + } + }, + "which-module": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/which-module/-/which-module-2.0.0.tgz", + "integrity": "sha1-2e8H3Od7mQK4o6j6SzHD4/fm6Ho=", + "dev": true + }, + "wide-align": { + "version": "1.1.3", + "resolved": "https://registry.npmjs.org/wide-align/-/wide-align-1.1.3.tgz", + "integrity": 
"sha512-QGkOQc8XL6Bt5PwnsExKBPuMKBxnGxWWW3fU55Xt4feHozMUhdUMaBCk290qpm/wG5u/RSKzwdAC4i51YigihA==", + "dev": true, + "requires": { + "string-width": "^1.0.2 || 2" + }, + "dependencies": { + "ansi-regex": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-3.0.0.tgz", + "integrity": "sha1-7QMXwyIGT3lGbAKWa922Bas32Zg=", + "dev": true + }, + "string-width": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/string-width/-/string-width-2.1.1.tgz", + "integrity": "sha512-nOqH59deCq9SRHlxq1Aw85Jnt4w6KvLKqWVik6oA9ZklXLNIOlqg4F2yrT1MVaTjAqvVwdfeZ7w7aCvJD7ugkw==", + "dev": true, + "requires": { + "is-fullwidth-code-point": "^2.0.0", + "strip-ansi": "^4.0.0" + } + }, + "strip-ansi": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-4.0.0.tgz", + "integrity": "sha1-qEeQIusaw2iocTibY1JixQXuNo8=", + "dev": true, + "requires": { + "ansi-regex": "^3.0.0" + } + } + } + }, + "widest-line": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/widest-line/-/widest-line-3.1.0.tgz", + "integrity": "sha512-NsmoXalsWVDMGupxZ5R08ka9flZjjiLvHVAWYOKtiKM8ujtZWr9cRffak+uSE48+Ob8ObalXpwyeUiyDD6QFgg==", + "dev": true, + "requires": { + "string-width": "^4.0.0" + }, + "dependencies": { + "ansi-regex": { + "version": "5.0.0", + "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz", + "integrity": "sha512-bY6fj56OUQ0hU1KjFNDQuJFezqKdrAyFdIevADiqrWHwSlbmBNMHp5ak2f40Pm8JTFyM2mqxkG6ngkHO11f/lg==", + "dev": true + }, + "emoji-regex": { + "version": "8.0.0", + "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz", + "integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A==", + "dev": true + }, + "is-fullwidth-code-point": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/is-fullwidth-code-point/-/is-fullwidth-code-point-3.0.0.tgz", + "integrity": 
"sha512-zymm5+u+sCsSWyD9qNaejV3DFvhCKclKdizYaJUuHA83RLjb7nSuGnddCHGv0hk+KY7BMAlsWeK4Ueg6EV6XQg==", + "dev": true + }, + "string-width": { + "version": "4.2.2", + "resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.2.tgz", + "integrity": "sha512-XBJbT3N4JhVumXE0eoLU9DCjcaF92KLNqTmFCnG1pf8duUxFGwtP6AD6nkjw9a3IdiRtL3E2w3JDiE/xi3vOeA==", + "dev": true, + "requires": { + "emoji-regex": "^8.0.0", + "is-fullwidth-code-point": "^3.0.0", + "strip-ansi": "^6.0.0" + } + }, + "strip-ansi": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.0.tgz", + "integrity": "sha512-AuvKTrTfQNYNIctbR1K/YGTR1756GycPsg7b9bdV9Duqur4gv6aKqHXah67Z8ImS7WEz5QVcOtlfW2rZEugt6w==", + "dev": true, + "requires": { + "ansi-regex": "^5.0.0" + } + } + } + }, + "worker-farm": { + "version": "1.7.0", + "resolved": "https://registry.npmjs.org/worker-farm/-/worker-farm-1.7.0.tgz", + "integrity": "sha512-rvw3QTZc8lAxyVrqcSGVm5yP/IJ2UcB3U0graE3LCFoZ0Yn2x4EoVSqJKdB/T5M+FLcRPjz4TDacRf3OCfNUzw==", + "dev": true, + "requires": { + "errno": "~0.1.7" + } + }, + "wrap-ansi": { + "version": "5.1.0", + "resolved": "https://registry.npmjs.org/wrap-ansi/-/wrap-ansi-5.1.0.tgz", + "integrity": "sha512-QC1/iN/2/RPVJ5jYK8BGttj5z83LmSKmvbvrXPNCLZSEb32KKVDJDl/MOt2N01qU2H/FkzEa9PKto1BqDjtd7Q==", + "dev": true, + "requires": { + "ansi-styles": "^3.2.0", + "string-width": "^3.0.0", + "strip-ansi": "^5.0.0" + }, + "dependencies": { + "ansi-regex": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.1.0.tgz", + "integrity": "sha512-1apePfXM1UOSqw0o9IiFAovVz9M5S1Dg+4TrDwfMewQ6p/rmMueb7tWZjQ1rx4Loy1ArBggoqGpfqqdI4rondg==", + "dev": true + }, + "strip-ansi": { + "version": "5.2.0", + "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-5.2.0.tgz", + "integrity": "sha512-DuRs1gKbBqsMKIZlrffwlug8MHkcnpjs5VPmL1PAh+mA30U0DTotfDZ0d2UUsXpPmPmMMJ6W773MaA3J+lbiWA==", + "dev": true, + "requires": { + "ansi-regex": "^4.1.0" + } + } 
+ } + }, + "wrappy": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/wrappy/-/wrappy-1.0.2.tgz", + "integrity": "sha1-tSQ9jz7BqjXxNkYFvA0QNuMKtp8=", + "dev": true + }, + "write-file-atomic": { + "version": "3.0.3", + "resolved": "https://registry.npmjs.org/write-file-atomic/-/write-file-atomic-3.0.3.tgz", + "integrity": "sha512-AvHcyZ5JnSfq3ioSyjrBkH9yW4m7Ayk8/9My/DD9onKeu/94fwrMocemO2QAJFAlnnDN+ZDS+ZjAR5ua1/PV/Q==", + "dev": true, + "requires": { + "imurmurhash": "^0.1.4", + "is-typedarray": "^1.0.0", + "signal-exit": "^3.0.2", + "typedarray-to-buffer": "^3.1.5" + } + }, + "ws": { + "version": "6.2.2", + "resolved": "https://registry.npmjs.org/ws/-/ws-6.2.2.tgz", + "integrity": "sha512-zmhltoSR8u1cnDsD43TX59mzoMZsLKqUweyYBAIvTngR3shc0W6aOZylZmq/7hqyVxPdi+5Ud2QInblgyE72fw==", + "dev": true, + "requires": { + "async-limiter": "~1.0.0" + } + }, + "xdg-basedir": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/xdg-basedir/-/xdg-basedir-4.0.0.tgz", + "integrity": "sha512-PSNhEJDejZYV7h50BohL09Er9VaIefr2LMAf3OEmpCkjOi34eYyQYAXUTjEQtZJTKcF0E2UKTh+osDLsgNim9Q==", + "dev": true + }, + "xml": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/xml/-/xml-1.0.1.tgz", + "integrity": "sha1-eLpyAgApxbyHuKgaPPzXS0ovweU=", + "dev": true + }, + "xmlbuilder": { + "version": "13.0.2", + "resolved": "https://registry.npmjs.org/xmlbuilder/-/xmlbuilder-13.0.2.tgz", + "integrity": "sha512-Eux0i2QdDYKbdbA6AM6xE4m6ZTZr4G4xF9kahI2ukSEMCzwce2eX9WlTI5J3s+NU7hpasFsr8hWIONae7LluAQ==", + "dev": true + }, + "xtend": { + "version": "4.0.2", + "resolved": "https://registry.npmjs.org/xtend/-/xtend-4.0.2.tgz", + "integrity": "sha512-LKYU1iAXJXUgAXn9URjiu+MWhyUXHsvfp7mcuYm9dSUKK0/CjtrUwFAxD82/mCWbtLsGjFIad0wIsod4zrTAEQ==", + "dev": true + }, + "y18n": { + "version": "4.0.3", + "resolved": "https://registry.npmjs.org/y18n/-/y18n-4.0.3.tgz", + "integrity": "sha512-JKhqTOwSrqNA1NY5lSztJ1GrBiUodLMmIZuLiDaMRJ+itFd+ABVE8XBjOvIWL+rSqNDC74LCSFmlb/U4UZ4hJQ==", + 
"dev": true + }, + "yallist": { + "version": "3.1.1", + "resolved": "https://registry.npmjs.org/yallist/-/yallist-3.1.1.tgz", + "integrity": "sha512-a4UGQaWPH59mOXUYnAG2ewncQS4i4F43Tv3JoAM+s2VDAmS9NsK8GpDMLrCHPksFT7h3K6TOoUNn2pb7RoXx4g==", + "dev": true + }, + "yargs": { + "version": "13.3.2", + "resolved": "https://registry.npmjs.org/yargs/-/yargs-13.3.2.tgz", + "integrity": "sha512-AX3Zw5iPruN5ie6xGRIDgqkT+ZhnRlZMLMHAs8tg7nRruy2Nb+i5o9bwghAogtM08q1dpr2LVoS8KSTMYpWXUw==", + "dev": true, + "requires": { + "cliui": "^5.0.0", + "find-up": "^3.0.0", + "get-caller-file": "^2.0.1", + "require-directory": "^2.1.1", + "require-main-filename": "^2.0.0", + "set-blocking": "^2.0.0", + "string-width": "^3.0.0", + "which-module": "^2.0.0", + "y18n": "^4.0.0", + "yargs-parser": "^13.1.2" + }, + "dependencies": { + "find-up": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/find-up/-/find-up-3.0.0.tgz", + "integrity": "sha512-1yD6RmLI1XBfxugvORwlck6f75tYL+iR0jqwsOrOxMZyGYqUuDhJ0l4AXdO1iX/FTs9cBAMEk1gWSEx1kSbylg==", + "dev": true, + "requires": { + "locate-path": "^3.0.0" + } + }, + "locate-path": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/locate-path/-/locate-path-3.0.0.tgz", + "integrity": "sha512-7AO748wWnIhNqAuaty2ZWHkQHRSNfPVIsPIfwEOWO22AmaoVrWavlOcMR5nzTLNYvp36X220/maaRsrec1G65A==", + "dev": true, + "requires": { + "p-locate": "^3.0.0", + "path-exists": "^3.0.0" + } + }, + "p-locate": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/p-locate/-/p-locate-3.0.0.tgz", + "integrity": "sha512-x+12w/To+4GFfgJhBEpiDcLozRJGegY+Ei7/z0tSLkMmxGZNybVMSfWj9aJn8Z5Fc7dBUNJOOVgPv2H7IwulSQ==", + "dev": true, + "requires": { + "p-limit": "^2.0.0" + } + }, + "path-exists": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/path-exists/-/path-exists-3.0.0.tgz", + "integrity": "sha1-zg6+ql94yxiSXqfYENe1mwEP1RU=", + "dev": true + } + } + }, + "yargs-parser": { + "version": "13.1.2", + "resolved": 
"https://registry.npmjs.org/yargs-parser/-/yargs-parser-13.1.2.tgz", + "integrity": "sha512-3lbsNRf/j+A4QuSZfDRA7HRSfWrzO0YjqTJd5kjAq37Zep1CEgaYmrH9Q3GwPiB9cHyd1Y1UwggGhJGoxipbzg==", + "dev": true, + "requires": { + "camelcase": "^5.0.0", + "decamelize": "^1.2.0" + }, + "dependencies": { + "camelcase": { + "version": "5.3.1", + "resolved": "https://registry.npmjs.org/camelcase/-/camelcase-5.3.1.tgz", + "integrity": "sha512-L28STB170nwWS63UjtlEOE3dldQApaJXZkOI1uMFfzf3rRuPegHaHesyee+YxQ+W6SvRDQV6UrdOdRiR153wJg==", + "dev": true + } + } + }, + "yocto-queue": { + "version": "0.1.0", + "resolved": "https://registry.npmjs.org/yocto-queue/-/yocto-queue-0.1.0.tgz", + "integrity": "sha512-rVksvsnNCdJ/ohGc6xgPwyN8eheCxsiLM8mxuE/t/mOVqJewPuO1miLpTHQiRgTKCLexL4MeAFVagts7HmNZ2Q==", + "dev": true + }, + "zepto": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/zepto/-/zepto-1.2.0.tgz", + "integrity": "sha1-4Se9nmb9hGvl6rSME5SIL3wOT5g=", + "dev": true + } + } +} diff --git a/package.json b/package.json new file mode 100644 index 000000000..717e6ae29 --- /dev/null +++ b/package.json @@ -0,0 +1,28 @@ +{ + "name": "frictionlessdata.io", + "version": "1.0.0", + "description": "Frictionless Data Website", + "scripts": { + "build": "vuepress build site", + "start": "vuepress dev site", + "update": "ncu -u" + }, + "devDependencies": { + "@limdongjin/vuepress-plugin-simple-seo": "^1.0.4-alpha.5", + "@vssue/api-github-v3": "^1.4.7", + "@vssue/vuepress-plugin-vssue": "^1.4.8", + "@vuepress/plugin-back-to-top": "^1.8.2", + "@vuepress/plugin-blog": "^1.9.4", + "dotenv": "^10.0.0", + "lodash": "^4.17.21", + "markdown-it-footnote": "", + "mermaid": "^9.1.2", + "npm-check-updates": "^11.5.13", + "start-server-and-test": "^1.12.3", + "tailwindcss": "^1.1.4", + "vue-typed-js": "^0.1.2", + "vuepress": "^1.8.2", + "vuepress-plugin-dehydrate": "^1.1.5", + "vuepress-plugin-feed": "^0.1.9" + } +} diff --git a/site/.drafts/data-package/README.md b/site/.drafts/data-package/README.md 
new file mode 100644 index 000000000..31eb5b356 --- /dev/null +++ b/site/.drafts/data-package/README.md @@ -0,0 +1,282 @@ +# Data Packages + +Data Package is a simple **container** format used to describe and package a collection of data. The format provides a simple contract for data interoperability that supports frictionless delivery, installation and management of data. + +Data Packages can be used to package any kind of data. At the same time, for specific common data types such as tabular data it has support for providing important additional descriptive metadata -- for example, describing the columns and data types in a CSV. + +The following core principles inform our approach: + +* Simplicity +* Extensibility and customisation by design +* Metadata that is human-editable and machine-usable +* Reuse of existing standard formats for data +* Language, technology and infrastructure agnostic + +## The Data Package Suite of Specifications + +Over time the single Data Package spec has evolved into a suite of specs -- partly through componentization whereby the original spec is in several components and partly through extension. + +The main specifications are: + +* [Data Package specification][dp], a simple format for packaging data for sharing between tools and people +* [Tabular Data Package][tdp], a format for packaging tabular data that builds on Data Package and which uses: + * [Table Schema][ts], a specification for defining a *schema* for tabular data + * [CSV Dialect Description Format][spec-csvddf] (CSV-DDF), a specification for defining a *dialect* for CSV data. + +### How do these specifications relate? + +A **Data Package** can "contain" any type of file. A **Tabular Data Package** is a type of Data Package specialized for tabular data and which "contains" one or more CSV files. In a Tabular Data Package, each CSV must have a *schema* defined using **Table Schema** and, optionally, a *dialect* defined using **CSV-DDF**. 
An application or library that consumes Tabular Data Packages therefore must be able to understand not only the full Data Package specification, but also Table Schema and CSV-DDF. + +![Tabular Data Package](./tabular-data-package.png) + +For more information on each specification, see below: + +## Data Package + +- [Overview][dp] +- [Full Specification][spec-dp] + +## Tabular Data Package + +- [Overview][tdp] +- [Full Specification][spec-tdp] + +## Table Schema + +- [Overview][ts] +- [Full Specification][spec-ts] + +## CSV Dialect Description Format + +- [Full Specification][spec-csvddf] + +[dp]: /tooling/data-package-tools +[dp-main]: /data-package +[tdp]: /blog/2016/07/21/publish-tabular/ +[ts]: /tooling/table-schema-tools +[ts-types]: https://specs.frictionlessdata.io/table-schema/#field-descriptors +[csv]: /blog/2018/07/09/csv/ +[json]: http://en.wikipedia.org/wiki/JSON + +[spec-dp]: https://specs.frictionlessdata.io/data-package/ +[spec-tdp]: https://specs.frictionlessdata.io/tabular-data-package/ +[spec-ts]: https://specs.frictionlessdata.io/table-schema/ +[spec-csvddf]: https://specs.frictionlessdata.io/csv-dialect/ + +[publish]: /docs/publish/ +[pub-tabular]: /blog/2016/07/21/publish-tabular/ +[pub-online]: /blog/2016/08/29/publish-online/ +[pub-any]: /blog/2016/07/21/publish-any/ +[pub-geo]: /blog/2016/04/30/publish-geo/ +[pub-faq]: /blog/2016/04/20/publish-faq/ + +[dp-creator]: http://create.frictionlessdata.io +[dp-viewer]: http://create.frictionlessdata.io + +## Getting Started + +Creating a Data Package is very easy: all you need to do is put a `datapackage.json` "descriptor" file in the top-level directory of your set of data files. + +A minimal example Data Package would look like this on disk: + +``` +datapackage.json + +# a data file(s) (CSV in this case but could be any type of data). Data files may go either in data subdirectory or in the main directory +data +data/more-data.csv + +# (Optional!) 
A README (in markdown format)
+README.md
+```
+
+Any number of additional files such as more data files, scripts (for processing or analyzing the data) and other material may be provided but are not required.
+
+:::tip
+There is a full **[RFC-style specification of Data Package format](https://specs.frictionlessdata.io/data-package/)** to complement this quick introduction.
+
+The [Tabular Data Package](/blog/2016/07/21/publish-tabular/) format extends Data Packages for tabular data. It supports providing additional information such as data types of columns.
+:::
+
+### datapackage.json
+
+The `datapackage.json` file is the basic building block of a Data Package and is the only required file. It provides:
+
+* General metadata such as the name of the package, its license, its publisher and source, etc.
+* A "manifest" in the form of a list of the data resources (data files) included in this data package along with information on those files (e.g. schema)
+
+As its file extension indicates, it must be a [JSON][json] file. Here's a very minimal example of a `datapackage.json` file:
+
+```json
+{
+  "name": "a-unique-human-readable-and-url-usable-identifier",
+  "title": "A nice title",
+  "licenses" : [ ... ],
+  "sources" : [...],
+  "resources": [{
+    // see below for what a resource descriptor looks like
+  }]
+}
+```
+
+**Note:** a complete list of potential attributes and their meaning can be found in the [full Data Package spec][spec].
+
+[spec]: https://specs.frictionlessdata.io/data-package/
+
+**Note:** the Data Package format is **extensible**: publishers may add their own additional metadata as well as constraints on the format and type of data by adding their own attributes to the `datapackage.json`.
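Because the descriptor is plain JSON, it can be generated with any language's standard tooling. Here is a minimal sketch in Python using only the standard library; the package name and resource path are placeholder values for illustration, not part of any real dataset:

```python
import json

# A minimal descriptor mirroring the example above; "my-first-data-package"
# and the resource path are placeholders chosen for this sketch.
descriptor = {
    "name": "my-first-data-package",
    "title": "A nice title",
    "resources": [{"path": "data/more-data.csv"}],
}

# Write the descriptor to datapackage.json at the top of the package...
with open("datapackage.json", "w") as f:
    json.dump(descriptor, f, indent=2)

# ...and read it back to confirm it round-trips as valid JSON.
with open("datapackage.json") as f:
    loaded = json.load(f)

print(loaded["name"])  # → my-first-data-package
```

Any tool that understands the Data Package spec can now discover the dataset by reading this one file.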
+
+Here is a much more extensive example of a `datapackage.json` file:
+
+```json
+{
+  "name": "a-unique-human-readable-and-url-usable-identifier",
+  "datapackage_version": "1.0-beta",
+  "title": "A nice title",
+  "description": "...",
+  "version": "2.0",
+  "keywords": ["name", "My new keyword"],
+  "licenses": [{
+    "url": "http://opendatacommons.org/licenses/pddl/",
+    "name": "Open Data Commons Public Domain",
+    "version": "1.0",
+    "id": "odc-pddl"
+  }],
+  "sources": [{
+    "name": "World Bank and OECD",
+    "web": "http://data.worldbank.org/indicator/NY.GDP.MKTP.CD"
+  }],
+  "contributors": [{
+    "title": "Joe Bloggs",
+    "email": "joe@bloggs.com",
+    "web": "http://www.bloggs.com"
+  }],
+  "maintainers": [{
+    // like contributors
+  }],
+  "publishers": [{
+    // like contributors
+  }],
+  "dependencies": {
+    "data-package-name": ">=1.0"
+  },
+  "resources": [
+    {
+      // ... see below ...
+    }
+  ],
+  // extend your datapackage.json with attributes that are not
+  // part of the data package spec
+  // we add a views attribute to display Recline Dataset Graph Views
+  // in our Data Package Viewer
+  "views": [
+    {
+      // ... see below ...
+    }
+  ],
+  // you can add your own attributes to a datapackage.json, too
+  "my-own-attribute": "data-packages-are-awesome"
+}
+```
+
+### Resources
+
+You list data files in the `resources` entry of the `datapackage.json`:
+
+```json
+  {
+    // one of url or path should be present
+    "path": "relative-path-to-file", // e.g. data/mydata.csv
+    "url": "online url" // e.g. http://mysite.org/some-data.csv
+  }
+```
+
+### Views
+
+The [Data Package Viewer](http://data.okfn.org/tools/view) will display a [Recline Dataset Graph View](http://okfnlabs.org/recline/docs/views.html) when a `views` entry is provided in the datapackage.json.
+ +* Include the `resourceName` property if you have more than one resource and want to display a graph for a resource other than the first resource + +* In the `state` property + * the `group` property is the name of the resource field whose values will be used on the y axis in the `bars` graph type and the x axis in all other graph types + * the `series` property is an array of one or more names of resource fields whose values will be used on the x axis in the `bars` graph type and the y axis in all other graph types + * the `graphType` may be one of `lines-and-points`, `lines`, `points`, `bars`, or `columns` + +```json +{ + "id": "graph", + "label": "Graph", + "resourceName": "a-resource-name", + "type": "Graph", + "state": { + "group": "a-resource-field-name", + "series": [ + "another-resource-field-name" + ], + "graphType": "lines-and-points" + } +} +``` + +### Examples + +Many exemplar data packages can be found on [datahub][datahub]. Specific examples: + +#### World GDP + +A Data Package which includes the data locally in the repo (data is CSV). + + + +Here's the `datapackage.json`: + +https://pkgstore.datahub.io/core/gdp/9/datapackage.json + +#### S&P 500 Companies Data + +This is an example with more than one resource in the data package. + + + +Here's the `datapackage.json`: + +https://pkgstore.datahub.io/core/s-and-p-500-companies/10/datapackage.json + +#### GeoJSON and TopoJSON + +You can see an example on how to package GeoJSON files [here](https://datahub.io/examples/geojson-tutorial). + +DataHub does not currently support the TopoJSON format. You can use “Vega Graph Spec” and display your TopoJSON data using the [Vega specification][vega]. See an example [here](https://datahub.io/examples/vega-views-tutorial-topojson). + +## Next Steps + +* Read the [full specification](https://specs.frictionlessdata.io/data-package). +* Get to know the [tools](/products/data-package). +* Understand how it can be wrapped in a [Data Package](/data-package). 
+ +[datahub]: https://datahub.io/core +[vega]: https://vega.github.io/vega/ +[ISO 3166-2 country codes]: https://github.com/datasets/country-codes + +[dp]: /products/data-package +[dp-main]: /data-package +[tdp]: /blog/2016/07/21/publish-tabular/ +[ts]: /products/table-schema/ +[ts-types]: https://specs.frictionlessdata.io/table-schema/#field-descriptors +[csv]: /blog/2018/07/09/csv/ +[json]: http://en.wikipedia.org/wiki/JSON + +[spec-dp]: https://specs.frictionlessdata.io/data-package/ +[spec-tdp]: https://specs.frictionlessdata.io/tabular-data-package/ +[spec-ts]: https://specs.frictionlessdata.io/table-schema/ +[spec-csvddf]: https://specs.frictionlessdata.io/csv-dialect/ + +[publish]: /docs/publish/ +[pub-tabular]: /blog/2016/07/21/publish-tabular/ +[pub-online]: /blog/2016/08/29/publish-online/ +[pub-any]: /blog/2016/07/21/publish-any/ +[pub-geo]: /blog/2016/04/30/publish-geo/ +[pub-faq]: /blog/2016/04/20/publish-faq/ + +[dp-creator]: http://create.frictionlessdata.io +[dp-viewer]: http://create.frictionlessdata.io + diff --git a/site/.drafts/data-package/tabular-data-package.png b/site/.drafts/data-package/tabular-data-package.png new file mode 100644 index 000000000..f2d9d2415 Binary files /dev/null and b/site/.drafts/data-package/tabular-data-package.png differ diff --git a/site/.drafts/guide/README.md b/site/.drafts/guide/README.md new file mode 100644 index 000000000..961e00689 --- /dev/null +++ b/site/.drafts/guide/README.md @@ -0,0 +1,358 @@ +# Guide + +:::tip +This guide is still early-stage. We are currently consolidating our existing materials into one place here. Thus, in addition to this guide, you may want to check out these other resources: + +* [Introduction to Table Schema][ts] +* [Introduction to Data Package][dp] +::: + +[ts]: /table-schema/ +[dp]: /data-package/ + +## Introduction + +### What is Frictionless Data? 
+
+Frictionless Data is a progressive open-source framework for building data infrastructure -- data management, data integration, data flows, etc.
+
+Unlike some other frameworks, Frictionless is designed from the ground up to be both incrementally adoptable and "progressive". Its purpose is to work with, build on, and enhance your existing data and tooling (rather than replacing it). It is also extremely lightweight!
+
+The core of the framework is a suite of ultra-simple patterns to describe and organize data. This allows the data to flow fluidly between tools and across teams. The patterns have been refined to zen-like simplicity, and they can be picked up in minutes and immediately integrated with other libraries or existing projects.
+
+At the same time, Frictionless Data is also perfectly capable of powering sophisticated data workflows when used in combination with modern tooling and supporting libraries. This is possible because the framework follows an "atomic" approach[^atomic] where specs and tools are broken down into small components that can be used on their own but also composed together to make larger ones. This allows you to take a minimalistic approach for simple solutions and then combine components for more complex solutions.
+
+If you’d like to learn more about Frictionless before diving in, we created a video walking through the core principles.
+
+[^atomic]: We have borrowed the concept of Atomic Data from the web design field. For us, it means tools or specs are **a)** broken down into their minimum viable components, and **b)** these components are combinable into larger and more complex components and systems. The Atomic approach is what underpins the incremental adoptability and the ability to scale from the simplest situation to highly complex data engineering.
+
+### Getting Started
+
+:::warning INFO
+The official guide assumes some basic knowledge about data.
If you are completely new to working with data -- for example, you haven't heard of CSV or JSON, or have never used a spreadsheet -- it may be a good idea to pick up some of those basics first and then come back! A good starting point would be the first module, "What is Data?", at [School of Data][scoda].
+:::
+
+[scoda]: https://schoolofdata.org/
+
+## Declarative Data
+
+At the core of Frictionless is a system that enables us to declaratively describe data (and datasets) using a straightforward syntax.
+
+## Table
+
+A table is a collection of related **data** represented in **rows** and **columns**. In a table, the intersection between a row and a column is called a **cell**. Tables are widely used in different contexts and fields, ranging from data analysis to data research.
+
+Tables come in different formats, such as CSV, JSON, and Excel. Here's an example of each of these formats.
+
+```csv
+Name,Email,Age
+Jill,jill@foo.com,25
+Jack,jack@bar.com,33
+```
+
+```json
+{
+  "name": "Jill",
+  "email": "jill@foo.com",
+  "age": 25
+}
+```
+
+```excel
+Name,Email,Age
+Jill,jill@foo.com,25
+Jack,jack@bar.com,33
+```
+
+### Table Schema
+
+Table Schema is a specification for providing a “schema” (similar to a [database schema](https://en.wikipedia.org/wiki/Database_schema)) for tabular data. This information includes the expected type of each value in a column *(“string”, “number”, “date”, etc.)*, constraints on the value *(“this string can only be at most 10 characters long”)*, and the expected format of the data *(“this field should only contain strings that look like email addresses")*. Table Schema can also specify relations between tables.
+
+Here's our simple `helloworld.csv` CSV (you can paste this into a local file):
+
+```csv
+Name,Email,Age
+Jill,jill@foo.com,25
+Jack,jack@bar.com,33
+```
+
+As a table, it looks like this:
+
+| Name | Email | Age |
+|------|--------------|-----|
+| Jill | jill@foo.com | 25 |
+| Jack | jack@bar.com | 33 |
+
+And here's a **Table Schema** (in JSON) to describe that file. Note that a minimum age of 18 is specified in the `Age` column, and a string that looks like an email address must be present in the `Email` column:
+
+```json
+{
+  "fields": [
+    {
+      "name": "Name",
+      "type": "string",
+      "description": "User’s name"
+    },
+    {
+      "name": "Email",
+      "type": "string",
+      "format": "email",
+      "description": "User’s email"
+    },
+    {
+      "name": "Age",
+      "type": "integer",
+      "description": "User’s age",
+      "constraints": {
+        "minimum": 18
+      }
+    }
+  ]
+}
+```
+
+Copy and paste this into a file called `tableschema.json` in the same directory as your CSV file.
+
+Well done! 👏 You have just created your very first Table Schema!
+
+### Validation
+
+Having errors in your data is not uncommon. They also often get in the way of quick and timely data analysis for many data users. Validating data helps ease the process of collecting data by checking the **quality** and **validity** of a data source before publishing it.
+
+Let's take a look at how to validate the tabular data we created in the previous section using the GoodTables [Python][py] and [JavaScript][js] libraries. GoodTables is a set of libraries and a command-line tool for validating and transforming tabular data. These libraries exist to identify structural and content errors in your tabular data, so they can be fixed quickly. For example, a table schema contains information on fields and their assigned data types, making it possible to highlight misplaced data types (e.g. a string in an age column where an integer is expected, or an integer in an email column where a string is expected).
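The kind of check a Table Schema makes possible can be sketched in a few lines of plain Python. This is only an illustrative stand-in (it is not the Frictionless libraries' API, and the email pattern is a deliberately simplified assumption); the data and constraints come from the example above:

```python
import csv
import io
import re

# The helloworld.csv content and the constraints from the Table Schema above.
CSV_DATA = "Name,Email,Age\nJill,jill@foo.com,25\nJack,jack@bar.com,33\n"
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")  # simplified email check

def check_row(row):
    """Return a list of error messages for one row, per the schema above."""
    errors = []
    if not EMAIL_RE.match(row["Email"]):                # format: email
        errors.append(f"{row['Name']}: Email is not a valid email address")
    try:
        age = int(row["Age"])                           # type: integer
        if age < 18:                                    # constraints: minimum 18
            errors.append(f"{row['Name']}: Age must be at least 18")
    except ValueError:
        errors.append(f"{row['Name']}: Age is not an integer")
    return errors

reader = csv.DictReader(io.StringIO(CSV_DATA))
all_errors = [e for row in reader for e in check_row(row)]
print(all_errors)  # → [] (both rows satisfy the schema)
```

Dedicated tools do the same job far more thoroughly, as the next section shows.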
+
+Here's an example of how to validate tabular data using the [Python library][py]. Using your terminal, install `goodtables` with the package manager [pip][pip]:
+
+```bash
+pip install goodtables
+```
+
+You can see a list of options by using the `--help` argument:
+
+```bash
+goodtables --help
+```
+
+To validate our data, we need to run the `goodtables` command followed by the path to the file:
+
+```bash
+goodtables helloworld.csv
+```
+
+Goodtables supports CSV, XLS, XLSX, ODS, and JSON. After running the command above, we get the following validation report, which follows the goodtables [report JSON Schema][json]:
+
+```bash
+DATASET
+=======
+{
+  'error-count': 0,
+  'preset': 'nested',
+  'table-count': 1,
+  'time': 0.104,
+  'valid': True
+}
+
+TABLE [1]
+=========
+{
+  'encoding': 'utf-8',
+  'error-count': 0,
+  'format': 'csv',
+  'headers': ['name', 'email', 'age'],
+  'row-count': 3,
+  'scheme': 'file',
+  'source': 'helloworld.csv',
+  'time': 0.003,
+  'valid': True
+}
+```
+
+Now, consider the following CSV with invalid data. Let's check for structural or content errors in the tabular data:
+
+```csv
+id,name,age,
+1,John,24,john@mail.com
+1,Jane,14,jane@mail.com
+1,Jane,14,jane@mail.com
+,Jane,22,7
+```
+
+```bash
+DATASET
+=======
+{
+  'error-count': 2,
+  'preset': 'nested',
+  'table-count': 1,
+  'time': 0.105,
+  'valid': False
+}
+
+TABLE [1]
+=========
+{
+  'encoding': 'utf-8',
+  'error-count': 2,
+  'format': 'csv',
+  'headers': ['id', 'name', 'age', ''],
+  'row-count': 5,
+  'scheme': 'file',
+  'source': 'invalid.csv',
+  'time': 0.003,
+  'valid': False
+}
+---------
+[-,4] [blank-header] Header in column 4 is blank
+[4,-] [duplicate-row] Row 4 is duplicated to row(s) 3
+```
+
+You might notice that this validation report looks a bit different. The two lines at the bottom of the report, `blank-header` and `duplicate-row`, are structural errors.
+
+Now, let's see how we can do the same with the Goodtables [JavaScript][js] library.
First, install the [GoodTables package][js]:

```bash
npm install goodtables
```

After installing the package, let's create an example. Create an `index.js` file and add the following:

```javascript
const goodtables = require('goodtables');

// Validate a CSV file
async function validate () {
  const source = 'helloworld.csv'
  const report = await goodtables.validate(source)
  console.log(report)
}

validate();
```

The result shows that the CSV contains some structural errors:

```bash
{ 'error-count': 1,
  preset: 'nested',
  'table-count': 1,
  tables:
   [ { encoding: null,
       'error-count': 1,
       errors: [Array],
       format: null,
       headers: [],
       'row-count': 0,
       schema: null,
       scheme: 'http',
       source: 'helloworld.csv',
       time: 0.001,
       valid: false } ],
  time: 0.004,
  valid: false,
  warnings: [] }
```

>**Additionally, here's a video walkthrough of the content outlined above**

[py]: https://github.com/frictionlessdata/goodtables-py
[js]: https://github.com/frictionlessdata/goodtables-js
[pip]: https://pip.pypa.io/en/stable/
[json]: https://github.com/frictionlessdata/goodtables-py/blob/master/goodtables/schemas/report.json

:::tip NOTE
We can also use the GoodTables online tool to validate any tabular data.
:::

Let's head over to the [GoodTables][gt] website and log in with GitHub to start the process of validating our data.

[gt]: https://goodtables.io/

![goodtables dashboard](https://i.imgur.com/Mxkgsoa.png)

Add a data source in the dashboard using GitHub (Amazon S3 is also supported, but we're only covering GitHub here):

:::tip INFO
We need to create a GitHub repository to store our `helloworld.csv` file. Make sure you use the valid CSV from our example above.
:::

![adding source to goodtables](https://i.imgur.com/6H7jOsf.png)

Because we have valid and well-structured data in our `helloworld.csv`, the results will come back as valid, as seen in the image below.

![valid data](https://i.imgur.com/cfp1Jej.png)

Now, let's change to invalid tabular data and see what the checks return:

```csv
Name,Email,,Age
Jill,jill@foo.com
Jack,jack@bar.com,33
23,Jane,jane@foo.com, 22, 33
```

![Invalid data](https://i.imgur.com/LIDV1OC.png)

Of course, this build will fail because some structural errors were detected by GoodTables (**"Blank Header", "Missing value", and "Extra Value"**).

>**Additionally, here's a video walkthrough of the content outlined above**

### Tabulator

[Tabulator][Tabulator] is a consistent interface for reading and writing streams of tabular data, in Python and on the command line. Tabulator is designed to help when dealing with data flows from diverse sources: imagine having to work with some data that is human-generated and some that is machine-generated, while handling a range of issues related to formatting, encoding, and markup. This is exactly what Tabulator was designed for.

Tabulator is a useful building block for data fetching and data processing. It provides a clean, structured stream of data covering Excel, CSV, SQL, Google Sheets, etc.

Here's an example of using the [Tabulator] library to read tabular data. First, install the library:

```bash
pip install tabulator
```

Let's run an example using Python:

```python
import tabulator

with tabulator.Stream('helloworld.csv', headers=1) as stream:
    print(stream.headers)  # [header1, header2, ..]
    for row in stream:
        print(row)  # [value1, value2, ..]
```

Also, [Tabulator] ships with a CLI tool that can be used to read tabular data directly:

```bash
tabulator https://github.com/frictionlessdata/tabulator-py/raw/4c1b3943ac98be87b551d87a777d0f7ca4904701/data/table.csv.gz
id, name
1, english
2,中国人
```

[Tabulator]: https://github.com/frictionlessdata/tabulator-py

## Ready for More?

We’ve briefly introduced the core concepts of Frictionless Data - the rest of the guide will cover more concepts and tooling in much more detail. For the next section, let's move on and learn about the Data Resource.

diff --git a/site/.drafts/jobs/create-visualizations/README.md b/site/.drafts/jobs/create-visualizations/README.md new file mode 100644 index 000000000..15c90da56 --- /dev/null +++ b/site/.drafts/jobs/create-visualizations/README.md @@ -0,0 +1,27 @@
---
title: Create Visualizations
tagline: Present data in visual form such as graphs, tables, etc.
description: You want to easily visualize and interact with data.
pain: The ecosystem is quite diverse. Generally, creating interactive and beautiful visualizations takes a lot of effort.
context: Even for human-readable formats, people like to have the option to visualize datasets in multiple ways.
hexagon: create visualizations
layout: job
---

## Examples

```
Spreadsheets:

TODO: some R / python code /

Open Tableau
```

## Solutions

**Data Package Views**
Distribute recommended ways of viewing a dataset in declarative ways.

**data-package-render-js**
Render Data Package Views in JavaScript applications.

diff --git a/site/.drafts/jobs/do-analysis-and-machine-learning/README.md b/site/.drafts/jobs/do-analysis-and-machine-learning/README.md new file mode 100644 index 000000000..8c90dc968 --- /dev/null +++ b/site/.drafts/jobs/do-analysis-and-machine-learning/README.md @@ -0,0 +1,16 @@
---
title: Do Analysis & Machine Learning
tagline: Analysis of data including statistical models and machine learning.
+description: Once you have access to proper data that fits your purpose, you want to leverage Artificial Intelligence techniques to get deeper insights or provide new features to users. +pain: The Machine Learning ecosystem may sound scary for people without mathematical training. It doesn't have to be like this. +context: You will hardly do this before having a complete data pipeline already in place. +hexagon: do analysis & machine learning +layout: job +--- + +## Examples + +``` +import scikit +... +``` diff --git a/site/.drafts/jobs/do-initial-data-exploration/README.md b/site/.drafts/jobs/do-initial-data-exploration/README.md new file mode 100644 index 000000000..06668821f --- /dev/null +++ b/site/.drafts/jobs/do-initial-data-exploration/README.md @@ -0,0 +1,25 @@ +--- +title: Do Initial Data Exploration +tagline: Quickly understand the content and quality of a dataset. +description: When searching for datasets or just receiving one from a third party, you need quick ways of checking what it contains and evaluating the quality in general. +pain: Initially, you may do this with a Unix tool such as sed or open in Excel. Over time, you will want to have descriptive statistics and attribute descriptions directly in the data hub and in the terminal. +context: It's part of the task of starting to work with a new dataset. +hexagon: do initial data exploration +layout: job +--- + +## Examples + +``` +wc mydata.csv +grep "Rome" mydata.csv | wc +xsv mydata.csv +``` + +## Solutions + +**DataHub** +It already provides simple visualizations for every dataset, right from the web. + +**Data Curator** +Quickly analyze datasets in a simple desktop application. 
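The shell one-liners above translate naturally into a few lines of Python. Here is a stdlib-only sketch of a quick first look at a dataset (the data itself is a hypothetical stand-in for `mydata.csv`):

```python
import csv
import io

# Hypothetical stand-in for mydata.csv
DATA = """city,year,rainfall
Rome,2019,752
Rome,2020,810
Lima,2020,13
"""

rows = list(csv.DictReader(io.StringIO(DATA)))

# Number of data rows, similar in spirit to `wc`
print(len(rows))  # 3

# Rows matching a value, like `grep "Rome" mydata.csv | wc`
print(sum(1 for r in rows if r["city"] == "Rome"))  # 2

# A quick per-column profile: distinct values in each column
for column in rows[0]:
    print(column, len({r[column] for r in rows}))
```

For a real file, replace `io.StringIO(DATA)` with `open("mydata.csv")`; tools like `xsv` and the DataHub views do the same kind of profiling, only faster and more thoroughly.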
diff --git a/site/.drafts/jobs/document-dataset/README.md b/site/.drafts/jobs/document-dataset/README.md new file mode 100644 index 000000000..1e41e24d6 --- /dev/null +++ b/site/.drafts/jobs/document-dataset/README.md @@ -0,0 +1,23 @@ +--- +title: Document Dataset +tagline: Human and machine readable documentation of a dataset. +description: You want to have standard ways of documenting what a dataset is about, how it was collected, attribute names and values. +pain: Initially, you may do this in a Google Doc or in a Markdown file versioned with Git. Over time, the Frictionless Data-way is to document together with the dataset, following the Data Package specification. +context: Generally neglected, you should start thinking about this as soon as the project starts. Your future self will thank you. +hexagon: document dataset +layout: job +--- + +## Examples + +``` +echo "# About this dataset" >> README.md +``` + +## Solutions + +**Data Package** +You may follow this specification to distribute a datapackage.json with schema and documentation for every dataset. + +**Data Package Creator** +Online tool to guide you through your first Data Package. diff --git a/site/.drafts/jobs/find-datasets/README.md b/site/.drafts/jobs/find-datasets/README.md new file mode 100644 index 000000000..32aa76cb4 --- /dev/null +++ b/site/.drafts/jobs/find-datasets/README.md @@ -0,0 +1,18 @@ +--- +title: Find Datasets +tagline: Find the data you need quickly (preferably in a single place). +description: Before getting to insights, you first need to find good datasets. +pain: Initially, you may have some datasets at hand. Over time, you will want to have good references already saved as favorites. +context: Often, it's one of the first requirements of data projects. 
+hexagon: find datasets +layout: job +--- + +## Examples + +Google: "Rainfall in the Amazon CSV" + +## Solutions + +**DataHub** +Created by the original authors of Frictionless Data, it's a place for finding datasets of public interest. Everything is containerized using Data Packages. diff --git a/site/.drafts/jobs/gracefully-scale-scope/README.md b/site/.drafts/jobs/gracefully-scale-scope/README.md new file mode 100644 index 000000000..2cbbd9b64 --- /dev/null +++ b/site/.drafts/jobs/gracefully-scale-scope/README.md @@ -0,0 +1,9 @@ +--- +title: Gracefully Scale Scope +tagline: Incrementally add new steps to your flow or system e.g. add validation to an existing solution. +description: Add new data-related tasks without spending energy thinking about integration issues. +pain: It's a constant concern in data projects. +context: It happens throughout the whole life of the project. +hexagon: gracefully scale scope +layout: job +--- diff --git a/site/.drafts/jobs/gracefully-scale-size/README.md b/site/.drafts/jobs/gracefully-scale-size/README.md new file mode 100644 index 000000000..79b1c1ea2 --- /dev/null +++ b/site/.drafts/jobs/gracefully-scale-size/README.md @@ -0,0 +1,9 @@ +--- +title: Gracefully Scale Size +tagline: Perform well with small or big data. +description: Increase the speed or the amount of data without spending energy thinking about scaling issues. This means you can start simple and lightweight and still scale up to large volumes. +pain: Initially, you may create solutions that provenly won't scale. Over time, after the project starts to grow, you will want to change the architecture. +context: It happens throughout the whole life of the project. 
hexagon: gracefully scale size
layout: job
---

diff --git a/site/.drafts/jobs/have-a-data-hub/README.md b/site/.drafts/jobs/have-a-data-hub/README.md new file mode 100644 index 000000000..467c6ec41 --- /dev/null +++ b/site/.drafts/jobs/have-a-data-hub/README.md @@ -0,0 +1,20 @@
---
title: Have a Data Hub
tagline: A hub for managing/storing multiple datasets with access for machines and humans.
description: You want to easily share data, metadata, and related documentation with other people.
pain: There are multiple solutions for public interest datasets. In the private space, organizations may take years to benefit from offering this feature to their teams.
context: It's one of the last steps when building a data pipeline.
hexagon: have a data hub
layout: job
---

## Examples

```
datahub-cli publish my-data-package/
```

## Solutions

**DataHub**
It's a free and open source data hub, built to work seamlessly with other Frictionless Data tools.

diff --git a/site/.drafts/jobs/orchestrate-data-platform/README.md b/site/.drafts/jobs/orchestrate-data-platform/README.md new file mode 100644 index 000000000..7ebfc2f38 --- /dev/null +++ b/site/.drafts/jobs/orchestrate-data-platform/README.md @@ -0,0 +1,17 @@
---
title: Orchestrate Data Platform
tagline: Scheduling, deployment and monitoring of the other steps.
description: Once the pipeline starts to grow, you need ways to manage all the tasks and to ensure everything remains running as expected.
pain: Initially, you may do this on a single machine, scheduling tasks using cron. Over time, you will want to have one place to monitor what's currently running and logs of old processes.
context: You usually add orchestration when it's proven that the project will continue to scale over time.
hexagon: orchestrate data platform
layout: job
---

## Examples

```
python my-pipeline-as-script.py

# crontab entry (five time fields)
* * * * * my-pipeline-as-script.py
```

diff --git a/site/.drafts/jobs/pipeline-transformations/README.md b/site/.drafts/jobs/pipeline-transformations/README.md new file mode 100644 index 000000000..28c8d6b2e --- /dev/null +++ b/site/.drafts/jobs/pipeline-transformations/README.md @@ -0,0 +1,23 @@
---
title: Pipeline Transformations
tagline: Clean up and transform data using an automated pipeline of operations.
description: When a project starts to grow, data transformations that once lived in a single file need to be decoupled from each other to scale.
pain: Initially, you may do this in functions inside a single script file. Over time, the Frictionless Data-way is to move these functions into a framework that scales more easily and can be understood by new contributors.
context: It's part of every data project; only the complexity differs.
hexagon: pipeline transformations
layout: job
---

## Examples

```
cat mydata.csv
  | sed s/Roma/Rome/g
  | head -n 50
  > cleandata.csv
```

## Solutions

**Data Package Pipelines**
Write transformations in declarative files.

diff --git a/site/.drafts/jobs/pull-dataset/README.md b/site/.drafts/jobs/pull-dataset/README.md new file mode 100644 index 000000000..4062e05e8 --- /dev/null +++ b/site/.drafts/jobs/pull-dataset/README.md @@ -0,0 +1,37 @@
---
title: Pull Dataset
tagline: Get raw data from a source to your environment quickly and repeatedly.
description: In the process of getting from data to insight, you need to download data from external sources.
pain: Initially, you may get data using just the browser or curl. Over time, you will want to pull using a CLI or library compatible with the Data Package specification.
context: Together with finding good datasets, it's one of the first steps of every data project.
hexagon: pull dataset
layout: job
---

# {{ $page.frontmatter.title }}

**{{ $page.frontmatter.tagline }}**

{{ $page.frontmatter.description }}

## Examples

Click the download link on the website

```
$ wget "
```

## Solutions

**data-cli**
CLI tool for interacting with datasets hosted in DataHub.

**Tabulator**
It creates consistent streams for interacting with datasets available via HTTP, FTP, and Amazon S3.

**DataFlows**
Declare this task as a step in a data pipeline.

**Data Package Pipelines**
Declare this task as a step in a data pipeline.

diff --git a/site/.drafts/jobs/push-dataset/README.md b/site/.drafts/jobs/push-dataset/README.md new file mode 100644 index 000000000..dabe9945a --- /dev/null +++ b/site/.drafts/jobs/push-dataset/README.md @@ -0,0 +1,27 @@
---
title: Push Dataset
tagline: Push data from your environment to an external location, e.g. (cloud) storage or a database.
description: Following the task of producing or enriching data, you need to publish the resulting dataset so others can consume it.
pain: Depending on whether the consumer is a machine or a person, you may start doing it with Git or even by e-mail. Over time, you want to start wrapping data in Data Packages. When reasonable, use streams, too.
context: Unless you're working alone or don't care about sharing results with others, you need to push data to a third party.
hexagon: push dataset
layout: job
---

## Examples

```
s3cmd put localfile.csv s3://my-bucket/file.csv
```

## Solutions

**data-cli**
A command line tool to push data to DataHub.

**Tabulator**
Python library to work with streams of multiple formats. CSV, JSON, and XLS are supported.

**DataFlows**
You may push data as the final step of a data transformation pipeline.

**Data Package Pipelines**
You may push data as the final step of a data transformation pipeline.
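Wrapping data in a Data Package before pushing mostly means writing a `datapackage.json` descriptor next to the files. Here is a minimal, stdlib-only sketch of such a descriptor for a single CSV resource (names, paths, and field types are illustrative):

```python
import json

# Minimal Data Package descriptor for one CSV resource.
# The package name, file path, and field types are illustrative.
descriptor = {
    "name": "helloworld",
    "resources": [
        {
            "name": "helloworld",
            "path": "helloworld.csv",
            "schema": {
                "fields": [
                    {"name": "Name", "type": "string"},
                    {"name": "Email", "type": "string", "format": "email"},
                    {"name": "Age", "type": "integer"},
                ]
            },
        }
    ],
}

# Serialized, this is the datapackage.json you would publish next to the CSV
print(json.dumps(descriptor, indent=2))
```

Once the descriptor sits alongside the CSV, tools in the ecosystem can treat the two as a single unit when pushing.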
diff --git a/site/.drafts/jobs/quickly-edit-dataset/README.md b/site/.drafts/jobs/quickly-edit-dataset/README.md new file mode 100644 index 000000000..2dc3a8192 --- /dev/null +++ b/site/.drafts/jobs/quickly-edit-dataset/README.md @@ -0,0 +1,21 @@ +--- +title: Quickly Edit Dataset +tagline: Quickly correct a value or a column. +description: You want ways of quickly editing a dataset, or to collaborate on one, without having to setup a complete development environment. +pain: Although convenient, Excel may cause multiple issues when opening and saving tabular data. You're better doing it in tools more appropriate for data pipelines. +context: It isn't needed in every project. When it is, you may not have time to research the best tools. +hexagon: quickly edit dataset +layout: job +--- + +## Examples + +Spreadsheets: Edit a cell in the sheet + +## Solutions + +**Data Curator** +Edit CSV and XLS files without unintentionally changing the raw data. Also, automatically detects the schema for every file. + +**Delimiter** +Edit and sync CSV files with GitHub directly in the browser. diff --git a/site/.drafts/jobs/store-dataset/README.md b/site/.drafts/jobs/store-dataset/README.md new file mode 100644 index 000000000..69308a359 --- /dev/null +++ b/site/.drafts/jobs/store-dataset/README.md @@ -0,0 +1,23 @@ +--- +title: Store Dataset +tagline: Store data somewhere, whether locally or the cloud, structured or unstructured. +description: You want to have ways to store data for later access or for sharing with others. +pain: The problem depends a lot on how you're using the data. When the size grows larger than the memory available, you better start considering how to scale. +context: Storing data is a necessity from the beginning of every project. +hexagon: store dataset +layout: job +--- + +## Examples + +``` +datahub push my-dataset/ --sql +``` + +## Solutions + +**DataHub** +Compatible with Data Packages, it offers a cloud space for storing datasets. 
**Tabulator**
You may use Tabulator to generate and consume streams of data from multiple file formats.

diff --git a/site/.drafts/jobs/validate-dataset/README.md b/site/.drafts/jobs/validate-dataset/README.md new file mode 100644 index 000000000..8106ebb9a --- /dev/null +++ b/site/.drafts/jobs/validate-dataset/README.md @@ -0,0 +1,28 @@
---
title: Validate Dataset
tagline: Check syntax and structure of data against a schema.
description: When relying on external sources, you need to ensure a dataset remains valid over time.
pain: Initially, you may do validation by hand or in a simple script. Over time, the Frictionless Data-way is to declare expectations in a file and have a library continuously verify them for you.
context: It becomes a requirement when external changes frequently break your pipeline.
hexagon: validate dataset
layout: job
---

## Examples

```
for row in mydata:
    if not validDate(row[0]):
        log("Bad data: " + row[0])
```

## Solutions

**Table Schema**
It's an implementation-agnostic way to declare a schema for tabular data.

**GoodTables**
It validates whether a dataset complies with a Table Schema.

**Data Quality Dashboard**
It builds on top of GoodTables to provide a dashboard showing the state of multiple datasets.

diff --git a/site/.drafts/jobs/version-dataset/README.md b/site/.drafts/jobs/version-dataset/README.md new file mode 100644 index 000000000..d561e87a2 --- /dev/null +++ b/site/.drafts/jobs/version-dataset/README.md @@ -0,0 +1,22 @@
---
title: Version Dataset
tagline: Keep each revision of a dataset and its metadata and track changes between them.
description: You want to version datasets for the same reasons you version code and infrastructure – for reproducibility and for making it easier to track changes.
pain: Often done in quick-and-dirty scripts, it works well enough in short exploratory projects. When scaling in file or team size, you want robust tools made for the job.
context: Not every project needs versioning from the start. When the necessity appears, it is at the start of the data pipeline.
hexagon: version dataset
layout: job
---

## Examples

```
git add mydata.csv
git commit -m "Change mydata"
git tag v1.0
```

## Solutions

**Data Package**
This specification tracks schema and business-specific changes. Everything lives in a datapackage.json file that goes with the dataset.

diff --git a/site/.drafts/specs/README.md b/site/.drafts/specs/README.md new file mode 100644 index 000000000..a1de87f13 --- /dev/null +++ b/site/.drafts/specs/README.md @@ -0,0 +1,14 @@
---
title: Frictionless Specs
tagline: Lightweight yet comprehensive data specifications.
layout: product
---

At the core of Frictionless is a set of patterns for describing data, including Data Package (for datasets), Data Resource (for files) and Table Schema (for tables).

## Specifications

- [Overview](https://specs.frictionlessdata.io/)
- [Data Package](https://specs.frictionlessdata.io/data-package/)
- [Data Resource](https://specs.frictionlessdata.io/data-resource/)
- [Table Schema](https://specs.frictionlessdata.io/table-schema/)

diff --git a/site/.drafts/table-schema/README.md b/site/.drafts/table-schema/README.md new file mode 100644 index 000000000..04363befb --- /dev/null +++ b/site/.drafts/table-schema/README.md @@ -0,0 +1,71 @@
# Table Schema

Table Schema is a specification for providing a “schema” (similar to a [database schema](https://en.wikipedia.org/wiki/Database_schema)) for tabular data. This information includes the expected type of each value in a column *(“string”, “number”, “date”, etc.)*, constraints on the value *(“this string can only be at most 10 characters long”)*, and the expected format of the data *(“this field should only contain strings that look like email addresses”)*. Table Schema can also specify relations between tables.
+ +Given the following table of user information: + +| Name | Email | Age | +|------|--------------|-----| +| Jill | jill@foo.com | 25 | +| Jack | jack@bar.com | 33 | + +An example schema would look like the following (in JSON). Note that a minimum age of 18 is specified in the `Age` column and a string that looks like an email address must be present in the `Email` column: + + +```json +{ + "fields": [ + { + "name": "Name", + "type": "string", + "description": "User’s name" + }, + { + "name": "Email", + "type": "string", + "format": "email", + "description": "User’s email" + }, + { + "name": "Age", + "type": "integer", + "description": "User’s age", + "constraints": { + "minimum": 18 + } + } + ] +} +``` + +[Software](/products/tabulator) that supports reading and validating tabular data against a Table Schema can help publishers and ordinary users improve the quality of CSV and Excel files online by flagging validation errors based on the types, formats, and constraints specified in the schema. For an example, see [goodtables](/blog/2016/06/24/validating-data/). + +## Next Steps + +* Read the [full specification](https://specs.frictionlessdata.io/table-schema/). +* Get to know the [tools](/products/table-schema-tools). +* Understand how it can be used in a [Data Package](/data-package). 
[dp]: /data-package
[dp-main]: /data-package
[tdp]: /data-package/#tabular-data-package
[ts]: /table-schema/
[ts-types]: https://specs.frictionlessdata.io/table-schema/#field-descriptors
[csv]: /blog/2018/07/09/csv/
[json]: http://en.wikipedia.org/wiki/JSON

[spec-dp]: https://specs.frictionlessdata.io/data-package/
[spec-tdp]: https://specs.frictionlessdata.io/tabular-data-package/
[spec-ts]: https://specs.frictionlessdata.io/table-schema/
[spec-csvddf]: https://specs.frictionlessdata.io/csv-dialect/

[publish]: /docs/publish/
[pub-tabular]: /blog/2016/07/21/publish-tabular/
[pub-online]: /blog/2016/08/29/publish-online/
[pub-any]: /blog/2016/07/21/publish-any/
[pub-geo]: /blog/2016/04/30/publish-geo/
[pub-faq]: /blog/2016/04/20/publish-faq/

[dp-creator]: http://create.frictionlessdata.io
[dp-viewer]: http://create.frictionlessdata.io

diff --git a/site/.drafts/tooling/application/README.md b/site/.drafts/tooling/application/README.md new file mode 100644 index 000000000..9af63796d --- /dev/null +++ b/site/.drafts/tooling/application/README.md @@ -0,0 +1,17 @@
---
title: Frictionless Application
tagline: Frictionless is a visual application to describe, extract, validate, and transform tabular data.
layout: product
---

## Application

> A new generation Frictionless Application is a work in progress. At the moment, this section is an overview of the existing graphical interfaces for working with Frictionless.
+ +## Browser Tools + +- [Data Package Creator](https://create.frictionlessdata.io/) +- [On-Demand Validation](https://try.goodtables.io/) + +## Managed Solution +- [GoodTables](https://goodtables.io) diff --git a/site/.drafts/tooling/data-package-pipelines/README.md b/site/.drafts/tooling/data-package-pipelines/README.md new file mode 100644 index 000000000..fc9776fbd --- /dev/null +++ b/site/.drafts/tooling/data-package-pipelines/README.md @@ -0,0 +1,60 @@ +--- +title: Data Package Pipelines +tagline: A framework for processing data packages in pipelines of modular components. +description: A framework for processing data packages in pipelines of modular components. +hexagon: +github: datapackage-pipelines, datapackage-pipelines-aws, datapackage-pipelines-elasticsearch, datapackage-pipelines-github, datapackage-pipelines-spss, datapackage-pipelines-ckan, datapackage-pipelines-sourcespec-registry, datapackage-pipelines-goodtables +layout: product +--- + +Data Package Pipelines is a declarative, stream-based framework for building tabular data processing pipelines. It can be used for all extract, transform, and load (ETL) tasks, and is particularly suited for working with diverse and heterogeneous data sources of varying and unknown quality. + +There are many tools and frameworks for doing ETL work with data. Data Package Pipelines is another one! The focus here is on wrangling and controlling messy data from various sources, and using the Frictionless Data tooling to transform these feeds into a stream of clean, consistent data. + +Data Package Pipelines is part of [Frictionless Data](https://frictionlessdata.io), a project funded and maintained by the [Open Knowledge Foundation](https://okfn.org) and [Datopian](https://datopian.com). 
## Check it out

- [Get the code](https://github.com/frictionlessdata/datapackage-pipelines)
- [Integrations](https://github.com/frictionlessdata?utf8=%E2%9C%93&q=pipeline&type=&language=)
- [Data Flows](https://github.com/datahq/dataflows)

## Where it is used

Data Package Pipelines is ideal for building complex ETL pipelines with a diverse collection of data sources. It uses a declarative pipeline format, which helps teams of engineers and non-technical staff collaborate on data processing and integration projects.

## A simple example

A pipeline spec file:

```yaml
worldbank-co2-emissions:
  title: CO2 emission data from the World Bank
  description: Data per year, provided in metric tons per capita.
  pipeline:
    -
      run: update_package
      parameters:
        name: 'co2-emissions'
        title: 'CO2 emissions (metric tons per capita)'
        homepage: 'http://worldbank.org/'
    -
      run: load
      parameters:
        from: "http://api.worldbank.org/v2/en/indicator/EN.ATM.CO2E.PC?downloadformat=excel"
        name: 'global-data'
        format: xls
        headers: 4
    -
      run: set_types
      parameters:
        resources: global-data
        types:
          "[12][0-9]{3}":
            type: number
    -
      run: dump_to_zip
      parameters:
        out-file: co2-emissions-wb.zip
```

diff --git a/site/.drafts/tooling/datahub/README.md b/site/.drafts/tooling/datahub/README.md new file mode 100644 index 000000000..462bbfe00 --- /dev/null +++ b/site/.drafts/tooling/datahub/README.md @@ -0,0 +1,25 @@
---
title: DataHub
tagline: A SaaS platform built on Frictionless Data that allows publishing and sharing data, as well as discovery of high-quality curated data.
description: A SaaS platform built on Frictionless Data that allows publishing and sharing data, as well as discovery of high-quality curated data.
hexagon:
github:
layout: product
---

**{{ $page.frontmatter.tagline }}**

DataHub.io provides data wranglers and publishers with a simple platform to share data, and to discover high-quality, curated datasets.
It is software-as-a-service that allows significant free usage, and offers plans for more advanced storage and functionality requirements. Data on DataHub.io is described with [Data Package](https://specs.frictionlessdata.io/data-package/) and [Table Schema](/products/table-schema/).

Publishing data publicly often involves a great deal of friction. It is easy to publish on code-sharing platforms such as GitHub, but those are not optimized for the display and discovery of data. DataHub.io seeks to address this need by providing a simple, easy-to-use platform for publishing data, with core functionality such as visualization and access.

DataHub.io is part of [Frictionless Data](https://frictionlessdata.io), a project funded and maintained by the [Open Knowledge Foundation](https://okfn.org) and [Datopian](https://datopian.com).

## Check it out

[Visit datahub.io](https://datahub.io)

## Where it is used

DataHub.io is a useful solution for sharing datasets, and for discovering high-quality datasets that others have produced.

diff --git a/site/.drafts/tooling/framework/README.md b/site/.drafts/tooling/framework/README.md new file mode 100644 index 000000000..6f0003547 --- /dev/null +++ b/site/.drafts/tooling/framework/README.md @@ -0,0 +1,17 @@
---
title: Frictionless Framework
tagline: Frictionless is a framework to describe, extract, validate, and transform tabular data.
layout: product
---

## Python

Frictionless is a framework to describe, extract, validate, and transform tabular data (a DEVT framework). It supports a wide range of data schemes and formats, and provides integrations with popular platforms. The framework is powered by the lightweight yet comprehensive Frictionless Data Specifications:

- [Documentation](https://framework.frictionlessdata.io/)

## JavaScript

Frictionless.js is a lightweight, standardized "stream-plus-metadata" interface for accessing files and datasets, especially tabular ones (CSV, Excel).
- [Documentation](https://github.com/frictionlessdata/frictionless-js)

diff --git a/site/.drafts/tooling/goodtables/README.md b/site/.drafts/tooling/goodtables/README.md new file mode 100644 index 000000000..175141f9a --- /dev/null +++ b/site/.drafts/tooling/goodtables/README.md @@ -0,0 +1,50 @@
---
title: GoodTables
tagline: A simple yet powerful tool to ensure the quality of tabular data, in Python and on the command line.
description: A simple yet powerful tool to ensure the quality of tabular data, in Python and on the command line.
hexagon:
github: goodtables-py, goodtables.io, goodtables-ui, goodtables-js
layout: product
---

GoodTables is a managed service to validate tabular data. It can check the structure of your data (e.g. all rows have the same number of columns) and its contents (e.g. all dates are valid). Internally, it uses the [Data Quality Spec](https://github.com/frictionlessdata/data-quality-spec) for common tabular data errors. GoodTables also supports data described by [Data Package](/tooling/data-package-tools/) and [Table Schema](/tooling/table-schema-tools/).

Let's visit the [GoodTables][gt] website and log in with GitHub to start the process of validating our data.

[gt]: https://goodtables.io/

![goodtables dashboard](https://i.imgur.com/Mxkgsoa.png)

Add a data source in the dashboard using GitHub (Amazon S3 is also supported, but we're only covering GitHub here):

:::tip INFO
We need to create a GitHub repository to store our `helloworld.csv` file. Make sure you use the valid CSV from our example above.
+:::
+
+![adding source to goodtables](https://i.imgur.com/6H7jOsf.png)
+
+Because we have valid and well-structured data in our `helloworld.csv`, the results will come back as valid, as seen in the image below.
+
+![valid data](https://i.imgur.com/cfp1Jej.png)
+
+Now, let's switch to invalid tabular data and see what the checks return:
+
+```csv
+Name,Email,,Age
+Jill,jill@foo.com
+Jack,jack@bar.com,33
+23,Jane,jane@foo.com, 22, 33
+```
+
+![Invalid data](https://i.imgur.com/LIDV1OC.png)
+
+Of course, this build will fail because GoodTables detected several structural errors (**"Blank Header", "Missing value", and "Extra Value"**).
+
+>**Additionally, here's a video walkthrough of the content outlined above**
+
+
+
+[py]: https://github.com/frictionlessdata/goodtables-py
+[js]: https://github.com/frictionlessdata/goodtables-js
+[pip]: https://pip.pypa.io/en/stable/
+[json]: https://github.com/frictionlessdata/goodtables-py/blob/master/goodtables/schemas/report.json
diff --git a/site/.drafts/tooling/labs/README.md b/site/.drafts/tooling/labs/README.md
new file mode 100644
index 000000000..b27aecf85
--- /dev/null
+++ b/site/.drafts/tooling/labs/README.md
@@ -0,0 +1,24 @@
+---
+title: Frictionless Data Labs
+tagline: Tooling and approaches to data that are in varied stages of maturity.
+description: Tooling and approaches to data that are in varied stages of maturity.
+hexagon:
+github: data-quality-cli
+layout: product
+---
+
+Under the Frictionless Data umbrella, there are several useful tools and ideas contributed by the core team and the wider community that are not yet fully fleshed out as stable solutions. In general, we take an approach of rough consensus and running code, and any or all of these tools could be further developed as we, as a community, work on new projects and use cases that require a "frictionless" approach to data.
+
+## Labs tools
+
+### Data Curator
+
+Data Curator is a simple desktop data editor to help describe, validate and share usable open data. It uses [Table Schema](/products/table-schema/) and [Data Package](/products/data-package/) to import and export data, and also for the internal representation of data.
+
+[Find out more](https://github.com/ODIQueensland/data-curator)
+
+### Data Quality Dashboard and CLI
+
+The Data Quality Dashboard (and its accompanying [CLI](https://github.com/frictionlessdata/data-quality-cli)) provides a simple way to visualize the data quality of a collection of data. There are methods to automatically ingest all tabular data on a given CKAN endpoint, and to provide a custom collection of data. The state of the data quality for each file is saved, so that change over time can be tracked.
+
+[Find out more](https://github.com/frictionlessdata/data-quality-dashboard)
diff --git a/site/.drafts/tooling/libraries/README.md b/site/.drafts/tooling/libraries/README.md
new file mode 100644
index 000000000..efb9998de
--- /dev/null
+++ b/site/.drafts/tooling/libraries/README.md
@@ -0,0 +1,60 @@
+---
+title: Frictionless Libraries
+tagline: The Frictionless code is available in 10 languages
+layout: product
+---
+
+## Data Package
+
+Data Package is a simple container format used to describe and package a collection of data. The full specification is available [here](https://specs.frictionlessdata.io/data-package).
+
+There is a growing set of online and offline software for working with Data Packages. You will find tools for creating, viewing, validating, publishing and managing them.
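All of these libraries revolve around the same artifact: a `datapackage.json` descriptor that lists a package's resources and their schemas. As a rough, standard-library-only illustration of what such a descriptor contains (a sketch of the format, not the API of any library listed below), here is a helper that infers a minimal descriptor from a CSV header:

```python
import csv
import io
import json

def infer_descriptor(name, csv_text):
    """Build a minimal Data Package descriptor for one inline CSV resource.

    Real libraries infer richer types; this sketch declares every field a string.
    """
    header = next(csv.reader(io.StringIO(csv_text)))
    return {
        "name": name,
        "resources": [
            {
                "name": name,
                "profile": "tabular-data-resource",
                "schema": {"fields": [{"name": h, "type": "string"} for h in header]},
            }
        ],
    }

descriptor = infer_descriptor("helloworld", "Name,Email,Age\nJill,jill@foo.com,25\n")
print(json.dumps(descriptor, indent=2))
```

A real library would additionally infer field types, validate the descriptor against the specification, and read the data itself.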
+
+| Language | URL |
+|----------|-----|
+| Clojure | https://github.com/frictionlessdata/datapackage-clj |
+| Go | https://github.com/frictionlessdata/datapackage-go |
+| Java | https://github.com/frictionlessdata/datapackage-java |
+| Javascript | https://github.com/frictionlessdata/datapackage-js |
+| Julia | https://github.com/frictionlessdata/DataPackage.jl |
+| MATLAB | https://github.com/KrisKusano/datapackage |
+| PHP | https://github.com/frictionlessdata/datapackage-php |
+| Python | https://github.com/frictionlessdata/datapackage-py |
+| R | https://github.com/frictionlessdata/datapackage-r |
+| Ruby | https://github.com/frictionlessdata/datapackage-rb |
+| Swift | https://github.com/frictionlessdata/datapackage-swift |
+
+## Table Schema
+
+Table Schema is a specification for providing a “schema” (similar to a database schema) for tabular data. The full specification is available [here](https://specs.frictionlessdata.io/table-schema/).
+
+Libraries supporting Table Schema are available in multiple languages.
+
+| Language | URL |
+|----------|-----|
+| Clojure | https://github.com/frictionlessdata/tableschema-clj |
+| Go | https://github.com/frictionlessdata/tableschema-go |
+| Java | https://github.com/frictionlessdata/tableschema-java |
+| Javascript | https://github.com/frictionlessdata/tableschema-js |
+| Julia | https://github.com/frictionlessdata/TableSchema.jl |
+| PHP | https://github.com/frictionlessdata/tableschema-php |
+| Python | https://github.com/frictionlessdata/tableschema-py |
+| R | https://github.com/frictionlessdata/tableschema-r |
+| Ruby | https://github.com/frictionlessdata/tableschema-rb |
+| Swift | https://github.com/frictionlessdata/tableschema-swift |
+
+## Others
+
+It's also worth looking at other libraries built with specific use cases in mind.
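Whatever the language, the Table Schema libraries above share one core job: checking a CSV header against the schema's fields and casting each row's values to the declared types. A minimal standard-library sketch of that idea (a hypothetical helper, not the real `tableschema` API):

```python
import csv
import io

SCHEMA = {"fields": [
    {"name": "Name", "type": "string"},
    {"name": "Email", "type": "string"},
    {"name": "Age", "type": "integer"},
]}

# Only two of the many Table Schema types, for illustration.
CASTS = {"string": str, "integer": int}

def cast_rows(schema, csv_text):
    """Check the header against the schema, then cast each row's values."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)
    names = [field["name"] for field in schema["fields"]]
    if header != names:
        raise ValueError(f"header {header} does not match schema fields {names}")
    for row in reader:
        yield [CASTS[field["type"]](value) for field, value in zip(schema["fields"], row)]

rows = list(cast_rows(SCHEMA, "Name,Email,Age\nJill,jill@foo.com,25\n"))
print(rows)  # [['Jill', 'jill@foo.com', 25]]
```

Real implementations handle many more types (dates, booleans, geopoints), constraints, and missing values.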
+
+| Use case | Language | URL |
+|----------|----------|-----|
+| CKAN Datastore | Python | https://github.com/frictionlessdata/tableschema-ckan-datastore |
+| DataShape | Python | https://github.com/frictionlessdata/tableschema-datashape |
+| Elasticsearch | Python | https://github.com/frictionlessdata/tableschema-elasticsearch-py |
+| Google BigQuery | Python | https://github.com/frictionlessdata/tableschema-bigquery-py |
+| OpenRefine | Python | https://github.com/frictionlessdata/tableschema-openrefine-py |
+| ORM | Javascript | https://github.com/frictionlessdata/tableschema-models-js |
+| Pandas | Python | https://github.com/frictionlessdata/tableschema-pandas-py |
+| SPSS | Python | https://github.com/frictionlessdata/tableschema-spss-py |
+| SQL | Javascript | https://github.com/frictionlessdata/tableschema-sql-js |
+| SQL | Python | https://github.com/frictionlessdata/tableschema-sql-py |
diff --git a/site/.vuepress/components/TeamProfile.vue b/site/.vuepress/components/TeamProfile.vue
new file mode 100644
index 000000000..0312e894c
--- /dev/null
+++ b/site/.vuepress/components/TeamProfile.vue
@@ -0,0 +1,252 @@
+
+
+
+
+
diff --git a/site/.vuepress/components/mermaid.vue b/site/.vuepress/components/mermaid.vue
new file mode 100644
index 000000000..9abe92b8a
--- /dev/null
+++ b/site/.vuepress/components/mermaid.vue
@@ -0,0 +1,22 @@
+// .vuepress/components/mermaid.vue
+
+
+
+
+
+
diff --git a/site/.vuepress/config.js b/site/.vuepress/config.js
new file mode 100644
index 000000000..c447b6880
--- /dev/null
+++ b/site/.vuepress/config.js
@@ -0,0 +1,262 @@
+require("dotenv").config();
+const lodash = require("lodash");
+const webpack = require("webpack");
+
+module.exports = {
+  title: "Frictionless Data",
+  description: "Data software and standards",
+  head: [
+    ["link", { rel: "icon", href: "./public/img/favicon.ico" }],
+    [
+      "link",
+      {
+        rel: "icon",
+        type: "image/png",
+        sizes: "32x32",
+        href: "./public/img/favicon-32x32.png",
+      },
+    ],
+    [
+      "link",
+      {
+        rel: "icon",
+        type: "image/png",
+        sizes: "16x16",
+        href:
"./public/img/favicon-16x16.png", + }, + ], + [ + "link", + { + rel: "apple-touch-icon", + sizes: "180x180", + href: "./public/img/apple-touch-icon.png", + }, + ], + [ + "link", + { + rel: "mask-icon", + color: "#000000", + href: "./public/img/safari-pinned-tab.svg", + }, + ], + ["link", { rel: "manifest", href: "./public/img/site.webmanifest" }], + [ + "link", + { + rel: "stylesheet", + href: + "https://fonts.googleapis.com/css?family=Lato:300,400,700,900&display=swap", + }, + ], + ], + postcss: { + plugins: [ + require("tailwindcss")("./tailwind.config.js"), + require("autoprefixer"), + ], + }, + configureWebpack: (config) => { + return { plugins: [new webpack.EnvironmentPlugin({ ...process.env })] }; + }, + markdown: { + linkify: true, + typographer: true, + breaks: true, + html: true, + toc: { + includeLevel: [2, 3], + }, + extendMarkdown: (md) => { + md.use(require("markdown-it-footnote")); + }, + }, + themeConfig: { + logo: "/img/frictionless-color-full-logo.svg", + // repo: "https://github.com/frictionlessdata", + // repoLabel: "GitHub", + docsBranch: "main", + docsRepo: "https://github.com/frictionlessdata/frictionlessdata.io", + docsDir: "site", + lastUpdated: "Last Updated", + // defaults to false, set to true to enable + editLinks: true, + smoothScroll: true, + footer_col1_title: "About", + footer_col1_row1: "About", + footer_col1_row2: "Contact", + footer_col1_row3: "Terms of Use", + footer_col1_row4: "Privacy Policy", + footer_col2_title: "Help", + footer_col2_row1: "Support", + footer_col2_row2: "Get started", + footer_col2_row3: "Community", + footer_col2_row4: "Forum", + footer_col3_title: "More", + footer_col3_row1: "Reproducible Research", + footer_col3_row2: "Design Assets", + footer_col3_row3: "Blog", + footer_col3_row4: "Contribute", + footer_col4_title: "Social", + footer_col4_row1: "GitHub", + footer_col4_row2: "Twitter", + footer_col4_row3: "Slack", + footer_col4_row4: "Matrix", + footer_col4_row5: "Dev", + navbar_icon1_link: 
"https://matrix.to/#/#frictionlessdata:matrix.okfn.org", + navbar_icon1_image: "/img/home/matrix.svg", + navbar_icon1_title: "Matrix", + navbar_icon2_link: + "https://join.slack.com/t/frictionlessdata/shared_invite/zt-17kpbffnm-tRfDW_wJgOw8tJVLvZTrBg", + navbar_icon2_image: "/img/home/slack-icon.png", + navbar_icon2_title: "Slack", + navbar_icon3_link: "https://twitter.com/frictionlessd8a", + navbar_icon3_image: "/img/home/twitter-icon.svg", + navbar_icon3_title: "Twitter", + navbar_icon4_link: "https://github.com/frictionlessdata", + navbar_icon4_image: "/img/home/github-icon.svg", + navbar_icon4_title: "GitHub", + sidebar: "auto", + nav: [ + { + text: "Introduction", + link: "/introduction/", + }, + { text: "Projects", link: "/projects/" }, + { text: "Universe", link: "/universe/" }, + { text: "Adoption", link: "/adoption/" }, + { text: "People", link: "/people/" }, + { text: "Fellows", link: "https://fellows.frictionlessdata.io/" }, + { + text: "Development", + ariaLabel: "Development Menu", + items: [ + { text: "Architecture", link: "/development/architecture/" }, + { text: "Roadmap", link: "/development/roadmap/" }, + { text: "Process", link: "/development/process/" }, + ], + }, + { + text: "Work With Us", + ariaLabel: "Work With Us Menu", + items: [ + { text: "Get Help", link: "/work-with-us/get-help/" }, + { text: "Contribute", link: "/work-with-us/contribute/" }, + { text: "Code of Conduct", link: "/work-with-us/code-of-conduct/" }, + { text: "Events Calendar", link: "/work-with-us/events/" }, + { + text: "Forum", + link: "https://github.com/frictionlessdata/project/discussions", + }, + { + text: "Chat (Slack)", + link: + "https://join.slack.com/t/frictionlessdata/shared_invite/zt-17kpbffnm-tRfDW_wJgOw8tJVLvZTrBg", + }, + { + text: "Chat (Matrix)", + link: "https://matrix.to/#/#frictionlessdata:matrix.okfn.org", + }, + ], + }, + { text: "Blog", link: "/blog/" }, + ], + }, + plugins: [ + [ + "@vuepress/blog", + { + directories: [ + { + // Unique ID of 
current classification + id: "blog", + // Target directory + dirname: "blog", + // Path of the `entry page` (or `list page`) + path: "/blog/", + itemPermalink: "/blog/:year/:month/:day/:slug", + pagination: { + lengthPerPage: 10, + }, + }, + ], + frontmatters: [ + { + id: "tag", + keys: ["tag", "tags"], + path: "/tag/", + layout: "Tags", + scopeLayout: "Tag", + frontmatter: { title: "Tag" }, + }, + ], + feed: { + canonical_base: "https://frictionlessdata.io", + }, + }, + ], + [ + "vuepress-plugin-feed", + { + canonical_base: "https://frictionlessdata.io", + sort: (entries) => lodash.reverse(lodash.sortBy(entries, "date")), + }, + ], + [ + "vuepress-plugin-dehydrate", + { + // disable SSR + noSSR: "404.html", + // remove scripts + noScript: [ + // support glob patterns + "foo/*.html", + "**/static.html", + ], + }, + ], + [ + "@limdongjin/vuepress-plugin-simple-seo", + { + default_site_name: "Frictionless Data", + default_image: "/img/frictionless-color-logo.png", + }, + ], + ["@vuepress/back-to-top"], + ], + head: [ + [ + "script", + { + src: "https://plausible.io/js/script.js", + "data-domain": "frictionlessdata.io", + }, + ], + ["script", { src: "https://unpkg.com/honeycomb-grid@3.1.3" }], + ["script", { src: "https://unpkg.com/svg.js@2.7.1" }], + ], +}; + +// TODO: add to the navbar if needed +// { +// text: "Jobs to be done", +// items: [ +// { text: "Create Visualizations", link: "/jobs/create-visualizations/" }, +// { text: "Do Analysis and Machine Learning", link: "/jobs/do-analysis-and-machine-learning/" }, +// { text: "Do Initial Data Exploration", link: "/jobs/do-initial-data-exploration/" }, +// { text: "Document Dataset", link: "/jobs/document-dataset/" }, +// { text: "Find Datasets", link: "/jobs/find-datasets/" }, +// { text: "Gracefully Scale Scope", link: "/jobs/gracefully-scale-scope/" }, +// { text: "Gracefully Scale Size", link: "/jobs/gracefully-scale-size/" }, +// { text: "Have a Data Hub", link: "/jobs/have-a-data-hub/" }, +// { text: 
"Orchestrate Data Platform", link: "/jobs/orchestrate-data-platform/" }, +// { text: "Pipeline transformations", link: "/jobs/pipeline-transformations/" }, +// { text: "Pull Dataset", link: "/jobs/pull-dataset/" }, +// { text: "Push Dataset", link: "/jobs/push-dataset/" }, +// { text: "Quickly edit dataset", link: "/jobs/quickly-edit-dataset/" }, +// { text: "Store Dataset", link: "/jobs/store-dataset/" }, +// { text: "Validate Dataset", link: "/jobs/validate-dataset/" }, +// { text: "Version dataset", link: "/jobs/version-dataset/" } +// ] +// }, diff --git a/site/.vuepress/enhanceApp.js b/site/.vuepress/enhanceApp.js new file mode 100644 index 000000000..fa9e453f4 --- /dev/null +++ b/site/.vuepress/enhanceApp.js @@ -0,0 +1,382 @@ +const redirectList = [ + { + path: "/software/", + redirect: "/", + }, + { + path: "/articles/", + redirect: "/blog/", + }, + { + path: "/docs/", + redirect: "/guide/", + }, + { + path: "/articles/nimblelearn-dpc/", + redirect: "/blog/2019/07/22/nimblelearn-dpc/", + }, + { + path: "/articles/datacurator/", + redirect: "/blog/2019/03/01/datacurator/", + }, + { + path: "/articles/nimblelearn/", + redirect: "/blog/2018/07/20/nimblelearn/", + }, + { + path: + "/articles/center-for-data-science-and-public-policy-workforce-data-initiative/", + redirect: + "/blog/2017/08/15/center-for-data-science-and-public-policy-workforce-data-initiative/", + }, + { + path: "/articles/openml/", + redirect: "/blog/2017/12/04/openml/", + }, + { + path: "/articles/zegami/", + redirect: "/blog/2017/09/28/zegami/", + }, + { + path: "/articles/cmso/", + redirect: "/blog/2017/05/23/cmso/", + }, + { + path: "/articles/collections-as-data/", + redirect: "/blog/2017/08/09/collections-as-data/", + }, + { + path: "/articles/the-data-retriever/", + redirect: "/blog/2017/05/24/the-data-retriever/", + }, + { + path: "/articles/dataship/", + redirect: "/blog/2016/11/15/dataship/", + }, + { + path: "/articles/dataworld/", + redirect: "/blog/2017/04/11/dataworld/", + }, + { 
+ path: "/articles/john-snow-labs/", + redirect: "/blog/2017/03/28/john-snow-labs/", + }, + { + path: "/articles/tesera/", + redirect: "/blog/2016/11/15/tesera/", + }, + { + path: "/articles/open-power-system-data/", + redirect: "/blog/2016/11/15/open-power-system-data/", + }, + { + path: "/articles/matt-thompson/", + redirect: "/blog/2017/10/26/matt-thompson/", + }, + { + path: "/articles/georges-labreche/", + redirect: "/blog/2017/10/24/georges-labreche/", + }, + { + path: "/articles/oleg-lavrovsky/", + redirect: "/blog/2018/07/16/oleg-lavrovsky/", + }, + { + path: "/articles/open-knowledge-greece/", + redirect: "/blog/2017/10/27/open-knowledge-greece/", + }, + { + path: "/articles/ori-hoch/", + redirect: "/blog/2018/07/16/ori-hoch/", + }, + { + path: "/articles/daniel-fireman/", + redirect: "/blog/2017/11/01/daniel-fireman/", + }, + { + path: "/articles/nes/", + redirect: "/blog/2019/07/03/nes/", + }, + { + path: "/articles/nes-tool/", + redirect: "/blog/2020/01/23/nes-tool/", + }, + { + path: "/articles/andre-heughebaert/", + redirect: "/blog/2019/09/12/andre-heughebaert/", + }, + { + path: "/articles/stephan-max/", + redirect: "/blog/2019/07/02/stephan-max/", + }, + { + path: "/articles/open-referral/", + redirect: "/blog/2019/07/09/open-referral/", + }, + { + path: "/articles/frictionless-darwincore/", + redirect: "/blog/2020/01/22/frictionless-darwincore/", + }, + { + path: "/articles/open-referral-tool/", + redirect: "/blog/2020/01/22/open-referral-tool/", + }, + { + path: "/articles/dm4t/", + redirect: "/blog/2017/12/19/dm4t/", + }, + { + path: "/articles/ukds/", + redirect: "/blog/2017/12/12/ukds/", + }, + { + path: "/articles/university-of-pittsburgh/", + redirect: "/blog/2017/12/15/university-of-pittsburgh/", + }, + { + path: "/articles/elife/", + redirect: "/blog/2017/10/24/elife/", + }, + { + path: "/articles/causa-natura-pescando-datos/", + redirect: "/blog/2017/08/15/causa-natura-pescando-datos/", + }, + { + path: 
"/articles/university-of-cambridge/", + redirect: "/blog/2017/08/15/university-of-cambridge/", + }, + { + path: + "/articles/pacific-northwest-national-laboratory-active-data-biology/", + redirect: + "/blog/2018/05/07/pacific-northwest-national-laboratory-active-data-biology/", + }, + { + path: "/docs/using-data-packages-in-clojure/", + redirect: "/blog/2018/05/07/using-data-packages-in-clojure/", + }, + { + path: "/docs/using-data-packages-in-java/", + redirect: "/blog/2018/04/28/using-data-packages-in-java/", + }, + { + path: "/docs/using-data-packages-in-go/", + redirect: "/blog/2018/02/16/using-data-packages-in-go/", + }, + { + path: "/docs/creating-tabular-data-packages-in-r/", + redirect: "/blog/2018/02/14/creating-tabular-data-packages-in-r/", + }, + { + path: "/docs/using-data-packages-in-r/", + redirect: "/blog/2018/02/14/using-data-packages-in-r/", + }, + { + path: "/docs/using-data-packages-in-python/", + redirect: "/blog/2016/08/29/using-data-packages-in-python/", + }, + { + path: "/docs/creating-tabular-data-packages-in-python/", + redirect: "/blog/2016/07/21/creating-tabular-data-packages-in-python/", + }, + { + path: "/docs/creating-tabular-data-packages-in-javascript/", + redirect: "/blog/2018/04/04/creating-tabular-data-packages-in-javascript/", + }, + { + path: "/docs/joining-data-in-python/", + redirect: "/blog/2018/04/06/joining-data-in-python/", + }, + { + path: "/docs/joining-tabular-data-in-python/", + redirect: "/blog/2018/04/05/joining-tabular-data-in-python/", + }, + { + path: "/docs/publish-data-as-data-packages/", + redirect: "/blog/2018/07/16/publish-data-as-data-packages/", + }, + { + path: "/docs/applying-licenses/", + redirect: "/blog/2018/03/27/applying-licenses/", + }, + { + path: "/docs/publish/", + redirect: "/blog/2016/08/30/publish/", + }, + { + path: "/docs/publish-online/", + redirect: "/blog/2016/08/29/publish-online/", + }, + { + path: "/docs/publish-any/", + redirect: "/blog/2016/07/21/publish-any/", + }, + { + path: 
"/docs/publish-tabular/", + redirect: "/blog/2016/07/21/publish-tabular/", + }, + { + path: "/docs/publish-geo/", + redirect: "/blog/2016/04/30/publish-geo/", + }, + { + path: "/docs/publish-faq/", + redirect: "/blog/2016/04/20/publish-faq/", + }, + { + path: "/docs/point-location-data/", + redirect: "/blog/2018/07/16/point-location-data/", + }, + { + path: "/docs/csv/", + redirect: "/blog/2018/07/09/csv/", + }, + { + path: "/docs/validating-data/", + redirect: "/blog/2016/06/24/validating-data/", + }, + { + path: "/field-guide/", + redirect: "/tag/field-guide", + }, + { + path: "/field-guide/well-packaged-datasets/", + redirect: "/blog/2018/03/07/well-packaged-datasets/", + }, + { + path: "/field-guide/visible-findable-shareable-data/", + redirect: "/blog/2018/07/16/visible-findable-shareable-data/", + }, + { + path: "/field-guide/validated-tabular-data/", + redirect: "/blog/2018/07/16/validated-tabular-data/", + }, + { + path: "/field-guide/used-and-useful-data/", + redirect: "/blog/2019/05/20/used-and-useful-data/", + }, + { + path: "/field-guide/automatically-validated-tabular-data/", + redirect: "/blog/2018/03/12/automatically-validated-tabular-data/", + }, + { + path: "/field-guide/data-publication-workflow-example/", + redirect: "/blog/2018/03/12/data-publication-workflow-example/", + }, + { + path: "/docs/tutorial-template/", + redirect: "/contribute/", + }, + { + path: "/docs/developer-guide/", + redirect: "/contribute/", + }, + { + path: "/universe/", + redirect: "/adoption/", + }, + { + path: "/docs/data-package/", + redirect: "/standards", + }, + { + path: "/docs/table-schema/", + redirect: "/table-schema/", + }, + { + path: "/tooling/data-package-tools/", + redirect: "/tooling/python/", + }, + { + path: "/tooling/table-schema-tools/", + redirect: "/tooling/python/", + }, + { + path: "/guide/", + redirect: "/introduction/", + }, + { + path: "/tooling/application/", + redirect: "/software/", + }, + { + path: "/tooling/framework/", + redirect: 
"/software/", + }, + { + path: "/tooling/libraries/", + redirect: "/software/", + }, + { + path: "/tooling/goodtables/", + redirect: "/software/", + }, + { + path: "/tooling/datahub/", + redirect: "/software/", + }, + { + path: "/tooling/labs/", + redirect: "/software/", + }, + { + path: "/specs/", + redirect: "/standards/", + }, + { + path: "/reproducible-research/", + redirect: "/adoption/", + }, + { + path: "/team/", + redirect: "/people/", + }, + { + path: "/about/", + redirect: "/introduction/", + }, + { + path: "/support/", + redirect: "/work-with-us/get-help/", + }, + { + path: "/contribute/", + redirect: "/work-with-us/contribute/", + }, + { + path: "/code-of-conduct/", + redirect: "/work-with-us/code-of-conduct/", + }, + { + path: "/events/", + redirect: "/work-with-us/events/", + }, + { + path: "/table-schema", + redirect: "/standards", + }, + { + path: "/data-package/", + redirect: "/standards/", + }, + { + path: "/software/", + redirect: "/projects/", + }, + { + path: "/standards/", + redirect: "/projects/", + }, + { + path: "/software", + redirect: "/projects/", + }, + { + path: "/standards", + redirect: "/projects/", + }, +]; + +export default ({ Vue, router }) => { + router.addRoutes(redirectList); +}; diff --git a/site/.vuepress/public/CNAME b/site/.vuepress/public/CNAME new file mode 100644 index 000000000..ad11e68a8 --- /dev/null +++ b/site/.vuepress/public/CNAME @@ -0,0 +1 @@ +frictionlessdata.io diff --git a/site/.vuepress/public/data-package-diagram.png b/site/.vuepress/public/data-package-diagram.png new file mode 100644 index 000000000..277549af7 Binary files /dev/null and b/site/.vuepress/public/data-package-diagram.png differ diff --git a/site/.vuepress/public/hero.svg b/site/.vuepress/public/hero.svg new file mode 100755 index 000000000..a895fa33a --- /dev/null +++ b/site/.vuepress/public/hero.svg @@ -0,0 +1,576 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/site/.vuepress/public/img/Diagram-intro-guide.png b/site/.vuepress/public/img/Diagram-intro-guide.png new file mode 100644 index 000000000..b465b3693 Binary files /dev/null and b/site/.vuepress/public/img/Diagram-intro-guide.png differ diff --git a/site/.vuepress/public/img/Frictionless_hackathon.png b/site/.vuepress/public/img/Frictionless_hackathon.png new file mode 100644 index 000000000..254cff73d Binary files /dev/null and b/site/.vuepress/public/img/Frictionless_hackathon.png differ diff --git a/site/.vuepress/public/img/Open-data-blend.png b/site/.vuepress/public/img/Open-data-blend.png new file mode 100644 index 000000000..429ec0c9d Binary files /dev/null and b/site/.vuepress/public/img/Open-data-blend.png differ diff --git a/site/.vuepress/public/img/adoption/bcodmo.png b/site/.vuepress/public/img/adoption/bcodmo.png new file mode 100644 index 000000000..d90b12f11 Binary files /dev/null and b/site/.vuepress/public/img/adoption/bcodmo.png 
differ
diff --git a/site/.vuepress/public/img/adoption/data-curator.png b/site/.vuepress/public/img/adoption/data-curator.png
new file mode 100644
index 000000000..25e7a17b8
Binary files /dev/null and b/site/.vuepress/public/img/adoption/data-curator.png differ
diff --git a/site/.vuepress/public/img/adoption/dryad.png b/site/.vuepress/public/img/adoption/dryad.png
new file mode 100644
index 000000000..ef1b12d93
Binary files /dev/null and b/site/.vuepress/public/img/adoption/dryad.png differ
diff --git a/site/.vuepress/public/img/adoption/etalab.png b/site/.vuepress/public/img/adoption/etalab.png
new file mode 100644
index 000000000..4f3dccb0d
Binary files /dev/null and b/site/.vuepress/public/img/adoption/etalab.png differ
diff --git a/site/.vuepress/public/img/adoption/european-commission.svg b/site/.vuepress/public/img/adoption/european-commission.svg
new file mode 100644
index 000000000..f02468583
--- /dev/null
+++ b/site/.vuepress/public/img/adoption/european-commission.svg
@@ -0,0 +1,233 @@
+ [233 lines of SVG markup omitted]
diff --git a/site/.vuepress/public/img/adoption/hubmap.png b/site/.vuepress/public/img/adoption/hubmap.png
new file mode 100644
index 000000000..b7a3c2255
Binary files /dev/null and b/site/.vuepress/public/img/adoption/hubmap.png differ
diff --git a/site/.vuepress/public/img/adoption/nimblelearn.png b/site/.vuepress/public/img/adoption/nimblelearn.png
new file mode 100644
index 000000000..0b85de675
Binary files /dev/null and b/site/.vuepress/public/img/adoption/nimblelearn.png differ
diff --git a/site/.vuepress/public/img/adoption/odb.png b/site/.vuepress/public/img/adoption/odb.png
new file mode 100644
index 000000000..8e796cdf5
Binary files /dev/null and b/site/.vuepress/public/img/adoption/odb.png differ
diff --git a/site/.vuepress/public/img/adoption/openml.png b/site/.vuepress/public/img/adoption/openml.png
new file mode 100644 index 000000000..2ea6eb64e Binary files /dev/null and b/site/.vuepress/public/img/adoption/openml.png differ diff --git a/site/.vuepress/public/img/adoption/oxford-drg.png b/site/.vuepress/public/img/adoption/oxford-drg.png new file mode 100644 index 000000000..cc1bfcbd9 Binary files /dev/null and b/site/.vuepress/public/img/adoption/oxford-drg.png differ diff --git a/site/.vuepress/public/img/adoption/pudl.png b/site/.vuepress/public/img/adoption/pudl.png new file mode 100644 index 000000000..67fb23323 Binary files /dev/null and b/site/.vuepress/public/img/adoption/pudl.png differ diff --git a/site/.vuepress/public/img/adoption/schema-collaboration.png b/site/.vuepress/public/img/adoption/schema-collaboration.png new file mode 100644 index 000000000..f20e01e8c Binary files /dev/null and b/site/.vuepress/public/img/adoption/schema-collaboration.png differ diff --git a/site/.vuepress/public/img/adoption/validata.png b/site/.vuepress/public/img/adoption/validata.png new file mode 100644 index 000000000..f75623e93 Binary files /dev/null and b/site/.vuepress/public/img/adoption/validata.png differ diff --git a/site/.vuepress/public/img/apple-touch-icon.png b/site/.vuepress/public/img/apple-touch-icon.png new file mode 100644 index 000000000..1c14c05ea Binary files /dev/null and b/site/.vuepress/public/img/apple-touch-icon.png differ diff --git a/site/.vuepress/public/img/blog/BCODMO-data-pipelines.png b/site/.vuepress/public/img/blog/BCODMO-data-pipelines.png new file mode 100644 index 000000000..7af3258cb Binary files /dev/null and b/site/.vuepress/public/img/blog/BCODMO-data-pipelines.png differ diff --git a/site/.vuepress/public/img/blog/CFDE-logo.png b/site/.vuepress/public/img/blog/CFDE-logo.png new file mode 100644 index 000000000..4e3459b33 Binary files /dev/null and b/site/.vuepress/public/img/blog/CFDE-logo.png differ diff --git a/site/.vuepress/public/img/blog/Community-call-Dryad.png 
b/site/.vuepress/public/img/blog/Community-call-Dryad.png new file mode 100644 index 000000000..21ccdfef6 Binary files /dev/null and b/site/.vuepress/public/img/blog/Community-call-Dryad.png differ diff --git a/site/.vuepress/public/img/blog/Community-call-webrecorder.png b/site/.vuepress/public/img/blog/Community-call-webrecorder.png new file mode 100644 index 000000000..bd55d0212 Binary files /dev/null and b/site/.vuepress/public/img/blog/Community-call-webrecorder.png differ diff --git a/site/.vuepress/public/img/blog/DPCKAN-blog.png b/site/.vuepress/public/img/blog/DPCKAN-blog.png new file mode 100644 index 000000000..737871a70 Binary files /dev/null and b/site/.vuepress/public/img/blog/DPCKAN-blog.png differ diff --git a/site/.vuepress/public/img/blog/Data-cataloguing-multi.png b/site/.vuepress/public/img/blog/Data-cataloguing-multi.png new file mode 100644 index 000000000..f1e0518c0 Binary files /dev/null and b/site/.vuepress/public/img/blog/Data-cataloguing-multi.png differ diff --git a/site/.vuepress/public/img/blog/Deploy-solutions.png b/site/.vuepress/public/img/blog/Deploy-solutions.png new file mode 100644 index 000000000..fa9065897 Binary files /dev/null and b/site/.vuepress/public/img/blog/Deploy-solutions.png differ diff --git a/site/.vuepress/public/img/blog/Flatterer.png b/site/.vuepress/public/img/blog/Flatterer.png new file mode 100644 index 000000000..fc26d95dc Binary files /dev/null and b/site/.vuepress/public/img/blog/Flatterer.png differ diff --git a/site/.vuepress/public/img/blog/GQ.jpeg b/site/.vuepress/public/img/blog/GQ.jpeg new file mode 100644 index 000000000..994c825ae Binary files /dev/null and b/site/.vuepress/public/img/blog/GQ.jpeg differ diff --git a/site/.vuepress/public/img/blog/HuBMAP-Retina-Logo-Color.png b/site/.vuepress/public/img/blog/HuBMAP-Retina-Logo-Color.png new file mode 100644 index 000000000..359f8a399 Binary files /dev/null and b/site/.vuepress/public/img/blog/HuBMAP-Retina-Logo-Color.png differ diff --git 
a/site/.vuepress/public/img/blog/June-community-call.png b/site/.vuepress/public/img/blog/June-community-call.png new file mode 100644 index 000000000..38e1803bd Binary files /dev/null and b/site/.vuepress/public/img/blog/June-community-call.png differ diff --git a/site/.vuepress/public/img/blog/Kevin-Photo.jpeg b/site/.vuepress/public/img/blog/Kevin-Photo.jpeg new file mode 100644 index 000000000..58f016fd0 Binary files /dev/null and b/site/.vuepress/public/img/blog/Kevin-Photo.jpeg differ diff --git a/site/.vuepress/public/img/blog/Libraries-Hacked.png b/site/.vuepress/public/img/blog/Libraries-Hacked.png new file mode 100644 index 000000000..bd9087e4c Binary files /dev/null and b/site/.vuepress/public/img/blog/Libraries-Hacked.png differ diff --git a/site/.vuepress/public/img/blog/Lindsay.jpeg b/site/.vuepress/public/img/blog/Lindsay.jpeg new file mode 100644 index 000000000..ac80f773b Binary files /dev/null and b/site/.vuepress/public/img/blog/Lindsay.jpeg differ diff --git a/site/.vuepress/public/img/blog/Livemark page.png b/site/.vuepress/public/img/blog/Livemark page.png new file mode 100644 index 000000000..0aa73369c Binary files /dev/null and b/site/.vuepress/public/img/blog/Livemark page.png differ diff --git a/site/.vuepress/public/img/blog/Melvin.jpeg b/site/.vuepress/public/img/blog/Melvin.jpeg new file mode 100644 index 000000000..2f9f9809a Binary files /dev/null and b/site/.vuepress/public/img/blog/Melvin.jpeg differ diff --git a/site/.vuepress/public/img/blog/November-community-call.png b/site/.vuepress/public/img/blog/November-community-call.png new file mode 100644 index 000000000..2a45b997d Binary files /dev/null and b/site/.vuepress/public/img/blog/November-community-call.png differ diff --git a/site/.vuepress/public/img/blog/OpenReferral.png b/site/.vuepress/public/img/blog/OpenReferral.png new file mode 100644 index 000000000..39777a578 Binary files /dev/null and b/site/.vuepress/public/img/blog/OpenReferral.png differ diff --git 
a/site/.vuepress/public/img/blog/README.md b/site/.vuepress/public/img/blog/README.md new file mode 100644 index 000000000..d32e49865 --- /dev/null +++ b/site/.vuepress/public/img/blog/README.md @@ -0,0 +1,44 @@ +--- +title: Frictionless Public Utility Data - A Pilot Study +date: 2020-03-18 +tags: +category: frictionless-data +--- + +_This blog post describes a Frictionless Data Pilot with the Public Utility Data Liberation project. Pilot projects are part of the [Frictionless Data for Reproducible Research project](https://frictionlessdata.io/reproducible-research/). Written by Zane Selvans, Christina Gosnell, and Lilly Winfree._ + + + +The Public Utility Data Liberation project, [PUDL](https://catalyst.coop/pudl/), aims to make US energy data easier to access and use. Much of this data, including information about the cost of electricity, how much fuel is being burned, power plant usage, and emissions, is not well documented or is in difficult-to-use formats. Last year, PUDL joined forces with the Frictionless Data for Reproducible Research team as a Pilot project to release this public utility data. PUDL takes the original spreadsheets, CSV files, and databases and turns them into unified Frictionless [tabular data packages](https://frictionlessdata.io/docs/tabular-data-package/) that can be used to populate a database, or read in directly with Python, R, Microsoft Access, and many other tools. + +![Catalyst Logo](./SimpleSquareWalking.png) + +## What is PUDL? +The PUDL project, which is coordinated by [Catalyst Cooperative](https://catalyst.coop/pudl/), is focused on creating an energy utility data product that can serve a wide range of users. PUDL was inspired to make this data more accessible because the current US utility data ecosystem is fragmented, and commercial products are expensive. There are hundreds of gigabytes of information available from government agencies, but they are often difficult to work with, and different sources can be hard to combine.
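A data package is just data files plus a `datapackage.json` descriptor, so even without the Frictionless libraries a reader can inspect one with nothing but the standard library. Below is a minimal, stdlib-only sketch; the resource and field names are hypothetical, not taken from the actual PUDL release.

```python
import json

# A trimmed, hypothetical datapackage.json descriptor in the style of a
# Frictionless Tabular Data Package (real PUDL descriptors are far larger).
descriptor = json.loads("""
{
  "name": "example-utility-data",
  "resources": [
    {
      "name": "plants",
      "path": "data/plants.csv",
      "schema": {
        "fields": [
          {"name": "plant_id", "type": "integer"},
          {"name": "plant_name", "type": "string"},
          {"name": "capacity_mw", "type": "number"}
        ],
        "primaryKey": "plant_id"
      }
    }
  ]
}
""")

# List each resource and its typed columns -- the metadata that lets tools
# load the CSV into a database or dataframe without guessing types.
for resource in descriptor["resources"]:
    fields = {f["name"]: f["type"] for f in resource["schema"]["fields"]}
    print(resource["name"], fields)
```

In practice the `datapackage` Python library (or R and other implementations) consumes the same descriptor directly, which is what makes the packaged data portable across tools.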
+ +PUDL users include researchers, activists, journalists, and policy makers. They have a wide range of technical backgrounds, from grassroots organizers who might only feel comfortable with spreadsheets, to PhDs with cloud computing resources, so it was important to provide data that would work for all users. + +Before PUDL, much of this data was freely available to download from various sources, but it was typically messy and not well documented. This led to a lack of uniformity and reproducibility amongst projects that were using this data. The users were scraping the data together in their own way, making it hard to compare analyses or understand outcomes. Therefore, one of the goals for PUDL was to minimize these duplicated efforts, and enable the creation of lasting, cumulative outputs. + +## What were the main Pilot goals? +The main focus of this Pilot was to create a way to openly share the utility data in a reproducible way that would be understandable to PUDL’s many potential users. The first change Catalyst identified they wanted to make during the Pilot was to their data storage medium. PUDL was previously creating a PostgreSQL database as the main data output. However, many users, even those with technical experience, found setting up the separate database software a major hurdle that prevented them from accessing and using the processed data. They also desired a static, archivable, platform-independent format. Therefore, Catalyst decided to transition PUDL away from PostgreSQL, and instead try Frictionless Tabular Data Packages. They also wanted a way to share the processed data without needing to commit to long-term maintenance and curation, meaning they needed the outputs to continue being useful to users even if they only had minimal resources to dedicate to maintenance and updates. The team decided to package their data into Tabular Data Packages and identified Zenodo as a good option for openly hosting that packaged data.
+ +Catalyst also recognized that most users only want to download the outputs and use them directly, and do not care about reproducing the data processing pipeline themselves, but it was still important to provide the processing pipeline code publicly to support transparency and reproducibility. Therefore, in this Pilot, they focused on transitioning their existing ETL pipeline from outputting a PostgreSQL database, which was defined using SQLAlchemy, to outputting data packages, which could then be archived publicly on Zenodo. Importantly, they needed this pipeline to maintain the metadata, data type information, and database structural information that had already been accumulated. This rich metadata needed to be stored alongside the data itself, so future users could understand where the data came from and understand its meaning. The Catalyst team used Tabular Data Packages to record and store this metadata (see the code here: https://github.com/catalyst-cooperative/pudl/blob/master/src/pudl/load/metadata.py). + +Another complicating factor is that many of the PUDL datasets are fairly entangled with each other. The PUDL team ideally wanted users to be able to pick and choose which datasets they actually wanted to download and use, without requiring them to download it all (currently about 100GB of data when uncompressed). However, they were worried that if single datasets were downloaded, users might miss that some of the datasets were meant to be used together. So, the PUDL team created information, which they call “glue”, that shows which datasets are linked together and should ideally be used in tandem.
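One way such “glue” can be expressed is with Table Schema foreign keys in the package descriptor, which declare that a column in one resource refers to rows in another. The fragment below is a hypothetical illustration of the pattern, not PUDL's actual descriptor; all resource and field names are invented.

```json
{
  "name": "plant-generator-glue-example",
  "resources": [
    {
      "name": "generators",
      "path": "data/generators.csv",
      "schema": {
        "fields": [
          {"name": "plant_id", "type": "integer"},
          {"name": "generator_id", "type": "string"}
        ],
        "primaryKey": ["plant_id", "generator_id"],
        "foreignKeys": [
          {
            "fields": "plant_id",
            "reference": {"resource": "plants", "fields": "plant_id"}
          }
        ]
      }
    }
  ]
}
```

A tool reading the `generators` resource can follow the `foreignKeys` entry to discover that it is only meaningful alongside the `plants` resource.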
+ +The culmination of this Pilot was a release of the PUDL data (access it here – https://zenodo.org/record/3672068 and read the corresponding documentation here – https://catalystcoop-pudl.readthedocs.io/en/v0.3.2/), which includes integrated data from the EIA Form 860, EIA Form 923, the EPA Continuous Emissions Monitoring System (CEMS), the EPA Integrated Planning Model (IPM), and FERC Form 1. + +## What problems were encountered during this Pilot? +One issue that the group encountered during the Pilot was that the data types available in Postgres are substantially richer than those available natively in the Tabular Data Package standard. However, this issue is an endemic problem of wanting to work with several different platforms, so the team compromised and worked with the least common denominator. In the future, PUDL might store several different sets of data types for use in different contexts, for example, one for freezing the data out into data packages, one for SQLite, and one for Pandas. + +Another problem encountered during the Pilot resulted from testing the limits of the draft Tabular Data Package specifications. There were aspects of the specifications that the Catalyst team assumed were fully implemented in the reference (Python) implementation of the Frictionless toolset, but were in fact still works in progress. This work led the Frictionless team to start a documentation improvement project, including a revision of the specifications website to incorporate this feedback. + +Through the Pilot, the teams worked to implement new Frictionless features, including the specification of composite primary keys and foreign key references that point to external data packages. Other new Frictionless functionality created during this Pilot included partitioning of large resources into resource groups, in which all resources use identical table schemas, and adding gzip compression of resources.
The Pilot also focused on implementing more complete validation through goodtables, including bytes/hash checks, foreign key checks, and primary key checks, though there is still more work to be done here. + +## Future Directions +A common problem with using publicly available energy data is that the federal agencies creating the data do not use version control or maintain change logs for the data they publish, but they do frequently go back years after the fact to revise or alter previously published data, with no notification. To combat this problem, Catalyst is using data packages to encapsulate the raw inputs to the ETL process. They are setting up a process which will periodically check to see if the federal agencies’ posted data has been updated or changed, create an archive, and upload it to Zenodo. They will also store metadata in non-tabular data packages, indicating which information is stored in each file (year, state, month, etc.) so that there can be a uniform process of querying those raw input data packages. This will mean the raw inputs won’t have to be archived alongside every data release. Instead, one can simply refer to these other versioned archives of the inputs. Catalyst hopes these version-controlled raw archives will also be useful to other researchers. + +Another next step for Catalyst will be to make the ETL and new dataset integration more modular, to hopefully make it easier for others to integrate new datasets. For instance, they are planning on integrating the EIA 861 and the ISO/RTO LMP data next. Other future plans include simplifying metadata storage, using Docker to containerize the ETL process for better reproducibility, and setting up a [Pangeo](https://pangeo.io/) instance for live interactive data access without requiring anyone to download any data at all.
The team would also like to build visualizations that sit on top of the database, making an interactive, regularly updated map of US coal plants and their operating costs, compared to new renewable energy in the same area. They would also like to visualize power plant operational attributes from EPA CEMS (e.g., ramp rates, min/max operating loads, relationship between load factor and heat rate, marginal additional fuel required for a startup event…). + +Have you used PUDL? The team would love to hear feedback from users of the published data so that they can understand how to improve it, based on real user experiences. If you are integrating other US energy/electricity data of interest, please talk to the PUDL team about whether they might want to integrate it into PUDL, to help ensure that it’s all more standardized and can be maintained long term. Also let them know what other datasets you would find useful (e.g., FERC EQR, FERC 714, PHMSA Pipelines, MSHA mines…). If you have questions, please ask them on GitHub (https://github.com/catalyst-cooperative/pudl) so that the answers will be public for others to find as well.
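The resource-partitioning and gzip-compression pattern described in the Pilot can be sketched in a few lines of standard-library Python: one large table is split by a partition key (year, here) into several compressed files that all share one table schema. The rows and field names below are invented for illustration; this is not the actual PUDL pipeline code.

```python
import csv
import gzip
import io

# Hypothetical rows for one large table, to be partitioned by year so that
# each partition can become its own resource in a resource group sharing a
# single table schema.
rows = [
    {"year": 2018, "plant_id": 1, "net_generation_mwh": 1200.5},
    {"year": 2018, "plant_id": 2, "net_generation_mwh": 980.0},
    {"year": 2019, "plant_id": 1, "net_generation_mwh": 1310.75},
]
fieldnames = ["year", "plant_id", "net_generation_mwh"]

partitions = {}  # year -> gzip-compressed CSV bytes
for year in sorted({r["year"] for r in rows}):
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    for r in rows:
        if r["year"] == year:
            writer.writerow(r)
    # Each partition is compressed individually, like a gzip-compressed
    # resource in a data package.
    partitions[year] = gzip.compress(buf.getvalue().encode("utf-8"))

# Reading one partition back only requires decompressing and parsing its CSV.
restored = list(csv.DictReader(
    io.StringIO(gzip.decompress(partitions[2018]).decode("utf-8"))))
print(len(partitions), "partitions;", len(restored), "rows restored for 2018")
```

Because every partition carries the same schema, a consumer can download only the years it needs, which is exactly the pick-and-choose access pattern the PUDL team wanted to support.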
diff --git a/site/.vuepress/public/img/blog/Repository.png b/site/.vuepress/public/img/blog/Repository.png new file mode 100644 index 000000000..f11aec12a Binary files /dev/null and b/site/.vuepress/public/img/blog/Repository.png differ diff --git a/site/.vuepress/public/img/blog/Shashis-presentation.png b/site/.vuepress/public/img/blog/Shashis-presentation.png new file mode 100644 index 000000000..15c531dfc Binary files /dev/null and b/site/.vuepress/public/img/blog/Shashis-presentation.png differ diff --git a/site/.vuepress/public/img/blog/SimpleSquareWalking.png b/site/.vuepress/public/img/blog/SimpleSquareWalking.png new file mode 100644 index 000000000..67fb23323 Binary files /dev/null and b/site/.vuepress/public/img/blog/SimpleSquareWalking.png differ diff --git a/site/.vuepress/public/img/blog/TU-Delft-feedback.png b/site/.vuepress/public/img/blog/TU-Delft-feedback.png new file mode 100644 index 000000000..c577748a4 Binary files /dev/null and b/site/.vuepress/public/img/blog/TU-Delft-feedback.png differ diff --git a/site/.vuepress/public/img/blog/TUDelft-training.png b/site/.vuepress/public/img/blog/TUDelft-training.png new file mode 100644 index 000000000..27cf0e9b3 Binary files /dev/null and b/site/.vuepress/public/img/blog/TUDelft-training.png differ diff --git a/site/.vuepress/public/img/blog/Victoria.jpeg b/site/.vuepress/public/img/blog/Victoria.jpeg new file mode 100644 index 000000000..dc952bb4b Binary files /dev/null and b/site/.vuepress/public/img/blog/Victoria.jpeg differ diff --git a/site/.vuepress/public/img/blog/Zarena.jpeg b/site/.vuepress/public/img/blog/Zarena.jpeg new file mode 100644 index 000000000..44bd2c52c Binary files /dev/null and b/site/.vuepress/public/img/blog/Zarena.jpeg differ diff --git a/site/.vuepress/public/img/blog/andre.png b/site/.vuepress/public/img/blog/andre.png new file mode 100644 index 000000000..812f34cee Binary files /dev/null and b/site/.vuepress/public/img/blog/andre.png differ diff --git 
a/site/.vuepress/public/img/blog/anne-fellow.png b/site/.vuepress/public/img/blog/anne-fellow.png new file mode 100644 index 000000000..b43de7d19 Binary files /dev/null and b/site/.vuepress/public/img/blog/anne-fellow.png differ diff --git a/site/.vuepress/public/img/blog/april.png b/site/.vuepress/public/img/blog/april.png new file mode 100644 index 000000000..39ba5d938 Binary files /dev/null and b/site/.vuepress/public/img/blog/april.png differ diff --git a/site/.vuepress/public/img/blog/auto-validate.png b/site/.vuepress/public/img/blog/auto-validate.png new file mode 100644 index 000000000..96212af10 Binary files /dev/null and b/site/.vuepress/public/img/blog/auto-validate.png differ diff --git a/site/.vuepress/public/img/blog/bcodmoLogo.jpg b/site/.vuepress/public/img/blog/bcodmoLogo.jpg new file mode 100644 index 000000000..3d525f3e7 Binary files /dev/null and b/site/.vuepress/public/img/blog/bcodmoLogo.jpg differ diff --git a/site/.vuepress/public/img/blog/cambridge.png b/site/.vuepress/public/img/blog/cambridge.png new file mode 100644 index 000000000..cd4114f1e Binary files /dev/null and b/site/.vuepress/public/img/blog/cambridge.png differ diff --git a/site/.vuepress/public/img/blog/carlos.jpg b/site/.vuepress/public/img/blog/carlos.jpg new file mode 100644 index 000000000..c204bbfca Binary files /dev/null and b/site/.vuepress/public/img/blog/carlos.jpg differ diff --git a/site/.vuepress/public/img/blog/causanatura.png b/site/.vuepress/public/img/blog/causanatura.png new file mode 100644 index 000000000..12f6142f3 Binary files /dev/null and b/site/.vuepress/public/img/blog/causanatura.png differ diff --git a/site/.vuepress/public/img/blog/chicago.png b/site/.vuepress/public/img/blog/chicago.png new file mode 100755 index 000000000..ea792363a Binary files /dev/null and b/site/.vuepress/public/img/blog/chicago.png differ diff --git a/site/.vuepress/public/img/blog/cmoa-logo.png b/site/.vuepress/public/img/blog/cmoa-logo.png new file mode 100644 index 
000000000..4a49eefa5 Binary files /dev/null and b/site/.vuepress/public/img/blog/cmoa-logo.png differ diff --git a/site/.vuepress/public/img/blog/cmso-logo.png b/site/.vuepress/public/img/blog/cmso-logo.png new file mode 100644 index 000000000..0631e3918 Binary files /dev/null and b/site/.vuepress/public/img/blog/cmso-logo.png differ diff --git a/site/.vuepress/public/img/blog/commallama.png b/site/.vuepress/public/img/blog/commallama.png new file mode 100644 index 000000000..8b502fabc Binary files /dev/null and b/site/.vuepress/public/img/blog/commallama.png differ diff --git a/site/.vuepress/public/img/blog/community-call-april-img.png b/site/.vuepress/public/img/blog/community-call-april-img.png new file mode 100644 index 000000000..bf3f3b9b2 Binary files /dev/null and b/site/.vuepress/public/img/blog/community-call-april-img.png differ diff --git a/site/.vuepress/public/img/blog/community-call-pic.png b/site/.vuepress/public/img/blog/community-call-pic.png new file mode 100644 index 000000000..8c6c35aa4 Binary files /dev/null and b/site/.vuepress/public/img/blog/community-call-pic.png differ diff --git a/site/.vuepress/public/img/blog/community.jpg b/site/.vuepress/public/img/blog/community.jpg new file mode 100644 index 000000000..8f63b5419 Binary files /dev/null and b/site/.vuepress/public/img/blog/community.jpg differ diff --git a/site/.vuepress/public/img/blog/dani-fellow.png b/site/.vuepress/public/img/blog/dani-fellow.png new file mode 100644 index 000000000..5e296790e Binary files /dev/null and b/site/.vuepress/public/img/blog/dani-fellow.png differ diff --git a/site/.vuepress/public/img/blog/daniel-fireman-image.jpg b/site/.vuepress/public/img/blog/daniel-fireman-image.jpg new file mode 100644 index 000000000..2047b0f31 Binary files /dev/null and b/site/.vuepress/public/img/blog/daniel-fireman-image.jpg differ diff --git a/site/.vuepress/public/img/blog/data-curator-logo.png b/site/.vuepress/public/img/blog/data-curator-logo.png new file mode 100644 
index 000000000..511566264 Binary files /dev/null and b/site/.vuepress/public/img/blog/data-curator-logo.png differ diff --git a/site/.vuepress/public/img/blog/data-dag-blog.png b/site/.vuepress/public/img/blog/data-dag-blog.png new file mode 100644 index 000000000..0cedeb6dd Binary files /dev/null and b/site/.vuepress/public/img/blog/data-dag-blog.png differ diff --git a/site/.vuepress/public/img/blog/data-retriever-logo.png b/site/.vuepress/public/img/blog/data-retriever-logo.png new file mode 100755 index 000000000..a589739a8 Binary files /dev/null and b/site/.vuepress/public/img/blog/data-retriever-logo.png differ diff --git a/site/.vuepress/public/img/blog/data-world-logo.png b/site/.vuepress/public/img/blog/data-world-logo.png new file mode 100755 index 000000000..d732dd44e Binary files /dev/null and b/site/.vuepress/public/img/blog/data-world-logo.png differ diff --git a/site/.vuepress/public/img/blog/dataship-logo.png b/site/.vuepress/public/img/blog/dataship-logo.png new file mode 100755 index 000000000..e84ea77b7 Binary files /dev/null and b/site/.vuepress/public/img/blog/dataship-logo.png differ diff --git a/site/.vuepress/public/img/blog/dataship.gif b/site/.vuepress/public/img/blog/dataship.gif new file mode 100755 index 000000000..008820251 Binary files /dev/null and b/site/.vuepress/public/img/blog/dataship.gif differ diff --git a/site/.vuepress/public/img/blog/deploy-solutions-logo.png b/site/.vuepress/public/img/blog/deploy-solutions-logo.png new file mode 100644 index 000000000..cb9f6e030 Binary files /dev/null and b/site/.vuepress/public/img/blog/deploy-solutions-logo.png differ diff --git a/site/.vuepress/public/img/blog/dm4t.png b/site/.vuepress/public/img/blog/dm4t.png new file mode 100644 index 000000000..4b49302cf Binary files /dev/null and b/site/.vuepress/public/img/blog/dm4t.png differ diff --git a/site/.vuepress/public/img/blog/eclipse_epc.png b/site/.vuepress/public/img/blog/eclipse_epc.png new file mode 100644 index 
000000000..4c008747a Binary files /dev/null and b/site/.vuepress/public/img/blog/eclipse_epc.png differ diff --git a/site/.vuepress/public/img/blog/elife-logo.png b/site/.vuepress/public/img/blog/elife-logo.png new file mode 100644 index 000000000..586cc3586 Binary files /dev/null and b/site/.vuepress/public/img/blog/elife-logo.png differ diff --git a/site/.vuepress/public/img/blog/evelyn-fellow.jpg b/site/.vuepress/public/img/blog/evelyn-fellow.jpg new file mode 100644 index 000000000..899a1b6d0 Binary files /dev/null and b/site/.vuepress/public/img/blog/evelyn-fellow.jpg differ diff --git a/site/.vuepress/public/img/blog/facebook-color.png b/site/.vuepress/public/img/blog/facebook-color.png new file mode 100644 index 000000000..64991147e Binary files /dev/null and b/site/.vuepress/public/img/blog/facebook-color.png differ diff --git a/site/.vuepress/public/img/blog/fd-home.png b/site/.vuepress/public/img/blog/fd-home.png new file mode 100644 index 000000000..54a537a69 Binary files /dev/null and b/site/.vuepress/public/img/blog/fd-home.png differ diff --git a/site/.vuepress/public/img/blog/fd_reproducible.png b/site/.vuepress/public/img/blog/fd_reproducible.png new file mode 100644 index 000000000..9485df2c3 Binary files /dev/null and b/site/.vuepress/public/img/blog/fd_reproducible.png differ diff --git a/site/.vuepress/public/img/blog/fdwc.png b/site/.vuepress/public/img/blog/fdwc.png new file mode 100644 index 000000000..88863ba2f Binary files /dev/null and b/site/.vuepress/public/img/blog/fdwc.png differ diff --git a/site/.vuepress/public/img/blog/february.png b/site/.vuepress/public/img/blog/february.png new file mode 100644 index 000000000..a32394938 Binary files /dev/null and b/site/.vuepress/public/img/blog/february.png differ diff --git a/site/.vuepress/public/img/blog/fellows-cohort3.png b/site/.vuepress/public/img/blog/fellows-cohort3.png new file mode 100644 index 000000000..145cb8485 Binary files /dev/null and 
b/site/.vuepress/public/img/blog/fellows-cohort3.png differ diff --git a/site/.vuepress/public/img/blog/fellows-ending.jpg b/site/.vuepress/public/img/blog/fellows-ending.jpg new file mode 100644 index 000000000..ac3921777 Binary files /dev/null and b/site/.vuepress/public/img/blog/fellows-ending.jpg differ diff --git a/site/.vuepress/public/img/blog/fosdem2020.jpeg b/site/.vuepress/public/img/blog/fosdem2020.jpeg new file mode 100644 index 000000000..46ef498c3 Binary files /dev/null and b/site/.vuepress/public/img/blog/fosdem2020.jpeg differ diff --git a/site/.vuepress/public/img/blog/framework.png b/site/.vuepress/public/img/blog/framework.png new file mode 100644 index 000000000..a01c5dcf4 Binary files /dev/null and b/site/.vuepress/public/img/blog/framework.png differ diff --git a/site/.vuepress/public/img/blog/frictionless-logo.png b/site/.vuepress/public/img/blog/frictionless-logo.png new file mode 100644 index 000000000..13b459c71 Binary files /dev/null and b/site/.vuepress/public/img/blog/frictionless-logo.png differ diff --git a/site/.vuepress/public/img/blog/frictionlessdata-hangout.png b/site/.vuepress/public/img/blog/frictionlessdata-hangout.png new file mode 100644 index 000000000..c4dea0c58 Binary files /dev/null and b/site/.vuepress/public/img/blog/frictionlessdata-hangout.png differ diff --git a/site/.vuepress/public/img/blog/georges-labreche-image.png b/site/.vuepress/public/img/blog/georges-labreche-image.png new file mode 100755 index 000000000..83ab981e2 Binary files /dev/null and b/site/.vuepress/public/img/blog/georges-labreche-image.png differ diff --git a/site/.vuepress/public/img/blog/gift.jpg b/site/.vuepress/public/img/blog/gift.jpg new file mode 100644 index 000000000..df57ca9b5 Binary files /dev/null and b/site/.vuepress/public/img/blog/gift.jpg differ diff --git a/site/.vuepress/public/img/blog/intermine.png b/site/.vuepress/public/img/blog/intermine.png new file mode 100644 index 000000000..6fb7599e9 Binary files /dev/null and 
b/site/.vuepress/public/img/blog/intermine.png differ diff --git a/site/.vuepress/public/img/blog/interoperability-test-bed-eu-commission.png b/site/.vuepress/public/img/blog/interoperability-test-bed-eu-commission.png new file mode 100644 index 000000000..7356f6799 Binary files /dev/null and b/site/.vuepress/public/img/blog/interoperability-test-bed-eu-commission.png differ diff --git a/site/.vuepress/public/img/blog/jacqueline-fellow.jpg b/site/.vuepress/public/img/blog/jacqueline-fellow.jpg new file mode 100644 index 000000000..7229285ca Binary files /dev/null and b/site/.vuepress/public/img/blog/jacqueline-fellow.jpg differ diff --git a/site/.vuepress/public/img/blog/january.png b/site/.vuepress/public/img/blog/january.png new file mode 100644 index 000000000..fa5bb96c2 Binary files /dev/null and b/site/.vuepress/public/img/blog/january.png differ diff --git a/site/.vuepress/public/img/blog/john-snow-labs-logo.png b/site/.vuepress/public/img/blog/john-snow-labs-logo.png new file mode 100755 index 000000000..61f16cb3e Binary files /dev/null and b/site/.vuepress/public/img/blog/john-snow-labs-logo.png differ diff --git a/site/.vuepress/public/img/blog/july-community-call-flatterer.png b/site/.vuepress/public/img/blog/july-community-call-flatterer.png new file mode 100644 index 000000000..95bee2506 Binary files /dev/null and b/site/.vuepress/public/img/blog/july-community-call-flatterer.png differ diff --git a/site/.vuepress/public/img/blog/june21-community-call.png b/site/.vuepress/public/img/blog/june21-community-call.png new file mode 100644 index 000000000..5e6236b44 Binary files /dev/null and b/site/.vuepress/public/img/blog/june21-community-call.png differ diff --git a/site/.vuepress/public/img/blog/kate-fellow.png b/site/.vuepress/public/img/blog/kate-fellow.png new file mode 100644 index 000000000..885389a02 Binary files /dev/null and b/site/.vuepress/public/img/blog/kate-fellow.png differ diff --git a/site/.vuepress/public/img/blog/katerina-fellow.jpg 
b/site/.vuepress/public/img/blog/katerina-fellow.jpg new file mode 100644 index 000000000..8cc8924b5 Binary files /dev/null and b/site/.vuepress/public/img/blog/katerina-fellow.jpg differ diff --git a/site/.vuepress/public/img/blog/libraries-hacked-logo.png b/site/.vuepress/public/img/blog/libraries-hacked-logo.png new file mode 100644 index 000000000..723682590 Binary files /dev/null and b/site/.vuepress/public/img/blog/libraries-hacked-logo.png differ diff --git a/site/.vuepress/public/img/blog/livemark-page.png b/site/.vuepress/public/img/blog/livemark-page.png new file mode 100644 index 000000000..0aa73369c Binary files /dev/null and b/site/.vuepress/public/img/blog/livemark-page.png differ diff --git a/site/.vuepress/public/img/blog/march.png b/site/.vuepress/public/img/blog/march.png new file mode 100644 index 000000000..9688f0e09 Binary files /dev/null and b/site/.vuepress/public/img/blog/march.png differ diff --git a/site/.vuepress/public/img/blog/matt-thompson-image.png b/site/.vuepress/public/img/blog/matt-thompson-image.png new file mode 100644 index 000000000..ff4d808a1 Binary files /dev/null and b/site/.vuepress/public/img/blog/matt-thompson-image.png differ diff --git a/site/.vuepress/public/img/blog/may.png b/site/.vuepress/public/img/blog/may.png new file mode 100644 index 000000000..693fb7e28 Binary files /dev/null and b/site/.vuepress/public/img/blog/may.png differ diff --git a/site/.vuepress/public/img/blog/metrics-spec.png b/site/.vuepress/public/img/blog/metrics-spec.png new file mode 100644 index 000000000..d9732dc9d Binary files /dev/null and b/site/.vuepress/public/img/blog/metrics-spec.png differ diff --git a/site/.vuepress/public/img/blog/nes_logo.png b/site/.vuepress/public/img/blog/nes_logo.png new file mode 100644 index 000000000..6443fea9d Binary files /dev/null and b/site/.vuepress/public/img/blog/nes_logo.png differ diff --git a/site/.vuepress/public/img/blog/nikhilvats.jpeg b/site/.vuepress/public/img/blog/nikhilvats.jpeg new file 
mode 100644 index 000000000..08d5433eb Binary files /dev/null and b/site/.vuepress/public/img/blog/nikhilvats.jpeg differ diff --git a/site/.vuepress/public/img/blog/nimblelearn-logo.png b/site/.vuepress/public/img/blog/nimblelearn-logo.png new file mode 100644 index 000000000..dbe11b538 Binary files /dev/null and b/site/.vuepress/public/img/blog/nimblelearn-logo.png differ diff --git a/site/.vuepress/public/img/blog/odi.jpg b/site/.vuepress/public/img/blog/odi.jpg new file mode 100644 index 000000000..498f73a9e Binary files /dev/null and b/site/.vuepress/public/img/blog/odi.jpg differ diff --git a/site/.vuepress/public/img/blog/oleg-lavrovsky-image.jpg b/site/.vuepress/public/img/blog/oleg-lavrovsky-image.jpg new file mode 100644 index 000000000..3b44b3abb Binary files /dev/null and b/site/.vuepress/public/img/blog/oleg-lavrovsky-image.jpg differ diff --git a/site/.vuepress/public/img/blog/open-access-week-2019.png b/site/.vuepress/public/img/blog/open-access-week-2019.png new file mode 100644 index 000000000..2eb87711b Binary files /dev/null and b/site/.vuepress/public/img/blog/open-access-week-2019.png differ diff --git a/site/.vuepress/public/img/blog/open-access-week-2020.png b/site/.vuepress/public/img/blog/open-access-week-2020.png new file mode 100644 index 000000000..8ee074bbe Binary files /dev/null and b/site/.vuepress/public/img/blog/open-access-week-2020.png differ diff --git a/site/.vuepress/public/img/blog/open-data-blend-home-page.png b/site/.vuepress/public/img/blog/open-data-blend-home-page.png new file mode 100644 index 000000000..469156c4f Binary files /dev/null and b/site/.vuepress/public/img/blog/open-data-blend-home-page.png differ diff --git a/site/.vuepress/public/img/blog/open-knowledge-greece-logo.png b/site/.vuepress/public/img/blog/open-knowledge-greece-logo.png new file mode 100755 index 000000000..36cecfdee Binary files /dev/null and b/site/.vuepress/public/img/blog/open-knowledge-greece-logo.png differ diff --git 
a/site/.vuepress/public/img/blog/openml-logo.png b/site/.vuepress/public/img/blog/openml-logo.png
new file mode 100644
index 000000000..904e91adf
Binary files /dev/null and b/site/.vuepress/public/img/blog/openml-logo.png differ
diff --git a/site/.vuepress/public/img/blog/opsd-logo.png b/site/.vuepress/public/img/blog/opsd-logo.png
new file mode 100644
index 000000000..e8fdaadf0
Binary files /dev/null and b/site/.vuepress/public/img/blog/opsd-logo.png differ
diff --git a/site/.vuepress/public/img/blog/opsd-logo.svg b/site/.vuepress/public/img/blog/opsd-logo.svg
new file mode 100755
index 000000000..93fcefff2
--- /dev/null
+++ b/site/.vuepress/public/img/blog/opsd-logo.svg
@@ -0,0 +1,769 @@
[SVG markup stripped; surviving text: "image/svg+xml"]
diff --git a/site/.vuepress/public/img/blog/ori-hoch-image.png b/site/.vuepress/public/img/blog/ori-hoch-image.png
new file mode 100755
index 000000000..f9131cafa
Binary files /dev/null and b/site/.vuepress/public/img/blog/ori-hoch-image.png differ
diff --git a/site/.vuepress/public/img/blog/pnnl.png b/site/.vuepress/public/img/blog/pnnl.png
new file mode 100644
index 000000000..88f43c0e6
Binary files /dev/null and b/site/.vuepress/public/img/blog/pnnl.png differ
diff --git a/site/.vuepress/public/img/blog/ritwik-fellow.png b/site/.vuepress/public/img/blog/ritwik-fellow.png
new file mode 100644
index 000000000..010f037c3
Binary files /dev/null and b/site/.vuepress/public/img/blog/ritwik-fellow.png differ
diff --git a/site/.vuepress/public/img/blog/sam-fellow.jpeg b/site/.vuepress/public/img/blog/sam-fellow.jpeg
new file mode 100644
index 000000000..82d36fcd9
Binary files /dev/null and b/site/.vuepress/public/img/blog/sam-fellow.jpeg differ
diff --git a/site/.vuepress/public/img/blog/sara.jpeg b/site/.vuepress/public/img/blog/sara.jpeg
new file mode 100644
index 000000000..3cb9bc606
Binary files /dev/null and b/site/.vuepress/public/img/blog/sara.jpeg differ
diff --git a/site/.vuepress/public/img/blog/sara.png b/site/.vuepress/public/img/blog/sara.png
new file mode 100644
index 000000000..458c1f7b3
Binary files /dev/null and b/site/.vuepress/public/img/blog/sara.png differ
diff --git a/site/.vuepress/public/img/blog/schema-collaboration.png b/site/.vuepress/public/img/blog/schema-collaboration.png
new file mode 100644
index 000000000..f20e01e8c
Binary files /dev/null and b/site/.vuepress/public/img/blog/schema-collaboration.png differ
diff --git a/site/.vuepress/public/img/blog/schema.gouv.fr.png b/site/.vuepress/public/img/blog/schema.gouv.fr.png
new file mode 100644
index 000000000..ca2be01e1
Binary files /dev/null and b/site/.vuepress/public/img/blog/schema.gouv.fr.png differ
diff --git a/site/.vuepress/public/img/blog/stephanmax.jpg b/site/.vuepress/public/img/blog/stephanmax.jpg
new file mode 100644
index 000000000..864939021
Binary files /dev/null and b/site/.vuepress/public/img/blog/stephanmax.jpg differ
diff --git a/site/.vuepress/public/img/blog/tesera-logo.png b/site/.vuepress/public/img/blog/tesera-logo.png
new file mode 100755
index 000000000..7fbe332bd
Binary files /dev/null and b/site/.vuepress/public/img/blog/tesera-logo.png differ
diff --git a/site/.vuepress/public/img/blog/ukds-logo.png b/site/.vuepress/public/img/blog/ukds-logo.png
new file mode 100644
index 000000000..9d80310cc
Binary files /dev/null and b/site/.vuepress/public/img/blog/ukds-logo.png differ
diff --git a/site/.vuepress/public/img/blog/uop-logo.jpg b/site/.vuepress/public/img/blog/uop-logo.jpg
new file mode 100644
index 000000000..494905cab
Binary files /dev/null and b/site/.vuepress/public/img/blog/uop-logo.jpg differ
diff --git a/site/.vuepress/public/img/blog/uop-logo.png b/site/.vuepress/public/img/blog/uop-logo.png
new file mode 100644
index 000000000..33d5678c6
Binary files /dev/null and b/site/.vuepress/public/img/blog/uop-logo.png differ
diff --git a/site/.vuepress/public/img/blog/used.png b/site/.vuepress/public/img/blog/used.png
new file mode 100644
index 000000000..dbe1c15c5
Binary files /dev/null and b/site/.vuepress/public/img/blog/used.png differ
diff --git a/site/.vuepress/public/img/blog/valid.png b/site/.vuepress/public/img/blog/valid.png
new file mode 100644
index 000000000..e202b5044
Binary files /dev/null and b/site/.vuepress/public/img/blog/valid.png differ
diff --git a/site/.vuepress/public/img/blog/visible.png b/site/.vuepress/public/img/blog/visible.png
new file mode 100644
index 000000000..2166f272b
Binary files /dev/null and b/site/.vuepress/public/img/blog/visible.png differ
diff --git a/site/.vuepress/public/img/blog/well-packaged.png b/site/.vuepress/public/img/blog/well-packaged.png
new file mode 100644
index 000000000..dc817c131
Binary files /dev/null and b/site/.vuepress/public/img/blog/well-packaged.png differ
diff --git a/site/.vuepress/public/img/blog/wheat.png b/site/.vuepress/public/img/blog/wheat.png
new file mode 100644
index 000000000..7cbac8f1c
Binary files /dev/null and b/site/.vuepress/public/img/blog/wheat.png differ
diff --git a/site/.vuepress/public/img/blog/workflow.png b/site/.vuepress/public/img/blog/workflow.png
new file mode 100644
index 000000000..ade27630b
Binary files /dev/null and b/site/.vuepress/public/img/blog/workflow.png differ
diff --git a/site/.vuepress/public/img/blog/zegami-logo.png b/site/.vuepress/public/img/blog/zegami-logo.png
new file mode 100644
index 000000000..a98407935
Binary files /dev/null and b/site/.vuepress/public/img/blog/zegami-logo.png differ
diff --git a/site/.vuepress/public/img/favicon-16x16.png b/site/.vuepress/public/img/favicon-16x16.png
new file mode 100644
index 000000000..d81596c27
Binary files /dev/null and b/site/.vuepress/public/img/favicon-16x16.png differ
diff --git a/site/.vuepress/public/img/favicon-32x32.png b/site/.vuepress/public/img/favicon-32x32.png
new file mode 100644
index 000000000..8e8afc7ee
Binary files /dev/null and b/site/.vuepress/public/img/favicon-32x32.png differ
diff --git a/site/.vuepress/public/img/favicon.ico b/site/.vuepress/public/img/favicon.ico
new file mode 100644
index 000000000..a220cc26c
Binary files /dev/null and b/site/.vuepress/public/img/favicon.ico differ
diff --git a/site/.vuepress/public/img/frictionless-black-full-logo-blackfont.svg b/site/.vuepress/public/img/frictionless-black-full-logo-blackfont.svg
new file mode 100644
index 000000000..f55d21f6b
--- /dev/null
+++ b/site/.vuepress/public/img/frictionless-black-full-logo-blackfont.svg
@@ -0,0 +1,27 @@
[SVG markup stripped]
diff --git a/site/.vuepress/public/img/frictionless-black-full-logo.svg b/site/.vuepress/public/img/frictionless-black-full-logo.svg
new file mode 100644
index 000000000..83a844fae
--- /dev/null
+++ b/site/.vuepress/public/img/frictionless-black-full-logo.svg
@@ -0,0 +1,178 @@
[SVG markup stripped; surviving text: "image/svg+xml"]
diff --git a/site/.vuepress/public/img/frictionless-black-logo.svg b/site/.vuepress/public/img/frictionless-black-logo.svg
new file mode 100644
index 000000000..1adcf6476
--- /dev/null
+++ b/site/.vuepress/public/img/frictionless-black-logo.svg
@@ -0,0 +1,173 @@
[SVG markup stripped; surviving text: "image/svg+xml"]
diff --git a/site/.vuepress/public/img/frictionless-color-full-logo.svg b/site/.vuepress/public/img/frictionless-color-full-logo.svg
new file mode 100644
index 000000000..4b3b8264a
--- /dev/null
+++ b/site/.vuepress/public/img/frictionless-color-full-logo.svg
@@ -0,0 +1,99 @@
[SVG markup stripped; surviving text: "image/svg+xml"]
diff --git a/site/.vuepress/public/img/frictionless-color-logo.png b/site/.vuepress/public/img/frictionless-color-logo.png
new file mode 100644
index 000000000..13b459c71
Binary files /dev/null and b/site/.vuepress/public/img/frictionless-color-logo.png differ
diff --git a/site/.vuepress/public/img/frictionless-color-logo.svg b/site/.vuepress/public/img/frictionless-color-logo.svg
new file mode 100644
index 000000000..dec6eddfd
--- /dev/null
+++ b/site/.vuepress/public/img/frictionless-color-logo.svg
@@ -0,0 +1,87 @@
[SVG markup stripped; surviving text: "image/svg+xml"]
diff --git a/site/.vuepress/public/img/home/18f.png b/site/.vuepress/public/img/home/18f.png
new file mode 100644
index 000000000..6d44063ac
Binary files /dev/null and b/site/.vuepress/public/img/home/18f.png differ
diff --git a/site/.vuepress/public/img/home/Bzdusek_Karolina.jpg b/site/.vuepress/public/img/home/Bzdusek_Karolina.jpg
new file mode 100644
index 000000000..517af0af1
Binary files /dev/null and b/site/.vuepress/public/img/home/Bzdusek_Karolina.jpg differ
diff --git a/site/.vuepress/public/img/home/alfred.svg b/site/.vuepress/public/img/home/alfred.svg
new file mode 100755
index 000000000..dccc04282
--- /dev/null
+++ b/site/.vuepress/public/img/home/alfred.svg
@@ -0,0 +1,30 @@
[SVG markup stripped]
diff --git a/site/.vuepress/public/img/home/bcodmo.png b/site/.vuepress/public/img/home/bcodmo.png
new file mode 100644
index 000000000..8bdd5fce2
Binary files /dev/null and b/site/.vuepress/public/img/home/bcodmo.png differ
diff --git a/site/.vuepress/public/img/home/beam.svg b/site/.vuepress/public/img/home/beam.svg
new file mode 100755
index 000000000..3b668da21
--- /dev/null
+++ b/site/.vuepress/public/img/home/beam.svg
@@ -0,0 +1,94 @@
[SVG markup stripped]
diff --git a/site/.vuepress/public/img/home/data-gouv-fr.png b/site/.vuepress/public/img/home/data-gouv-fr.png
new file mode 100644
index 000000000..c11f39ed7
Binary files /dev/null and b/site/.vuepress/public/img/home/data-gouv-fr.png differ
diff --git a/site/.vuepress/public/img/home/data-gov-uk.png b/site/.vuepress/public/img/home/data-gov-uk.png
new file mode 100644
index 000000000..67970edb9
Binary files /dev/null and b/site/.vuepress/public/img/home/data-gov-uk.png differ
diff --git a/site/.vuepress/public/img/home/data-package-new.svg b/site/.vuepress/public/img/home/data-package-new.svg
new file mode 100755
index 000000000..e6a4904bd
--- /dev/null
+++ b/site/.vuepress/public/img/home/data-package-new.svg
@@ -0,0 +1,3 @@
[SVG markup stripped]
diff --git a/site/.vuepress/public/img/home/data-package-orange.svg b/site/.vuepress/public/img/home/data-package-orange.svg
new file mode 100755
index 000000000..ceb6d6f01
--- /dev/null
+++ b/site/.vuepress/public/img/home/data-package-orange.svg
@@ -0,0 +1,3 @@
[SVG markup stripped]
diff --git a/site/.vuepress/public/img/home/dataflows-new.svg b/site/.vuepress/public/img/home/dataflows-new.svg
new file mode 100755
index 000000000..54eecd9df
--- /dev/null
+++ b/site/.vuepress/public/img/home/dataflows-new.svg
@@ -0,0 +1,20 @@
[SVG markup stripped]
diff --git a/site/.vuepress/public/img/home/dataflows.svg b/site/.vuepress/public/img/home/dataflows.svg
new file mode 100755
index 000000000..5051faf2b
--- /dev/null
+++ b/site/.vuepress/public/img/home/dataflows.svg
@@ -0,0 +1,20 @@
[SVG markup stripped]
diff --git a/site/.vuepress/public/img/home/datahub-new.svg b/site/.vuepress/public/img/home/datahub-new.svg
new file mode 100755
index 000000000..f516fa328
--- /dev/null
+++ b/site/.vuepress/public/img/home/datahub-new.svg
@@ -0,0 +1,12 @@
[SVG markup stripped; surviving text: "DATAHUB"]
diff --git a/site/.vuepress/public/img/home/datahub-new2.svg b/site/.vuepress/public/img/home/datahub-new2.svg
new file mode 100755
index 000000000..1f7c00564
--- /dev/null
+++ b/site/.vuepress/public/img/home/datahub-new2.svg
@@ -0,0 +1,3 @@
[SVG markup stripped]
diff --git a/site/.vuepress/public/img/home/datopian-logo-white.png b/site/.vuepress/public/img/home/datopian-logo-white.png
new file mode 100644
index 000000000..5655b9a16
Binary files /dev/null and b/site/.vuepress/public/img/home/datopian-logo-white.png differ
diff --git a/site/.vuepress/public/img/home/datopian.svg b/site/.vuepress/public/img/home/datopian.svg
new file mode 100644
index 000000000..509018bb8
--- /dev/null
+++ b/site/.vuepress/public/img/home/datopian.svg
@@ -0,0 +1,8 @@
[SVG markup stripped]
diff --git a/site/.vuepress/public/img/home/dengineers-color.png b/site/.vuepress/public/img/home/dengineers-color.png
new file mode 100755
index 000000000..060e79596
Binary files /dev/null and b/site/.vuepress/public/img/home/dengineers-color.png differ
diff --git a/site/.vuepress/public/img/home/dengineers-color.svg b/site/.vuepress/public/img/home/dengineers-color.svg
new file mode 100755
index 000000000..0d7a2f8b3
--- /dev/null
+++ b/site/.vuepress/public/img/home/dengineers-color.svg
@@ -0,0 +1,679 @@
[SVG markup stripped]
diff --git a/site/.vuepress/public/img/home/dev.svg b/site/.vuepress/public/img/home/dev.svg
new file mode 100644
index 000000000..5e0857364
--- /dev/null
+++ b/site/.vuepress/public/img/home/dev.svg
@@ -0,0 +1,3 @@
[SVG markup stripped]
diff --git a/site/.vuepress/public/img/home/discord-icon.svg b/site/.vuepress/public/img/home/discord-icon.svg
new file mode 100644
index 000000000..c975de50c
--- /dev/null
+++ b/site/.vuepress/public/img/home/discord-icon.svg
@@ -0,0 +1 @@
[SVG markup stripped]
\ No newline at end of file
diff --git a/site/.vuepress/public/img/home/discord.svg b/site/.vuepress/public/img/home/discord.svg
new file mode 100644
index 000000000..6e7714e4a
--- /dev/null
+++ b/site/.vuepress/public/img/home/discord.svg
@@ -0,0 +1 @@
[SVG markup stripped]
\ No newline at end of file
diff --git a/site/.vuepress/public/img/home/dmai.png b/site/.vuepress/public/img/home/dmai.png
new file mode 100644
index 000000000..650adffb2
Binary files /dev/null and b/site/.vuepress/public/img/home/dmai.png differ
diff --git a/site/.vuepress/public/img/home/dscientists-color.png b/site/.vuepress/public/img/home/dscientists-color.png
new file mode 100755
index 000000000..16bb9dc74
Binary files /dev/null and b/site/.vuepress/public/img/home/dscientists-color.png differ
diff --git a/site/.vuepress/public/img/home/dscientists-color.svg b/site/.vuepress/public/img/home/dscientists-color.svg
new file mode 100755
index 000000000..4d89ea672
--- /dev/null
+++ b/site/.vuepress/public/img/home/dscientists-color.svg
@@ -0,0 +1,148 @@
[SVG markup stripped]
diff --git a/site/.vuepress/public/img/home/for-data-engineers-thick.svg b/site/.vuepress/public/img/home/for-data-engineers-thick.svg
new file mode 100755
index 000000000..ec7c857d8
--- /dev/null
+++ b/site/.vuepress/public/img/home/for-data-engineers-thick.svg
@@ -0,0 +1,707 @@
[SVG markup stripped]
diff --git a/site/.vuepress/public/img/home/for-data-scientists-thick.svg b/site/.vuepress/public/img/home/for-data-scientists-thick.svg
new file mode 100755
index 000000000..2d6ea16d1
--- /dev/null
+++ b/site/.vuepress/public/img/home/for-data-scientists-thick.svg
@@ -0,0 +1,144 @@
[SVG markup stripped]
diff --git a/site/.vuepress/public/img/home/for-researchers-thick.svg b/site/.vuepress/public/img/home/for-researchers-thick.svg
new file mode 100755
index 000000000..6b96541ac
--- /dev/null
+++ b/site/.vuepress/public/img/home/for-researchers-thick.svg
@@ -0,0 +1,101 @@
[SVG markup stripped]
diff --git a/site/.vuepress/public/img/home/francisco-alvez.png b/site/.vuepress/public/img/home/francisco-alvez.png
new file mode 100644
index 000000000..58b7272bb
Binary files /dev/null and b/site/.vuepress/public/img/home/francisco-alvez.png differ
diff --git a/site/.vuepress/public/img/home/github-icon.svg b/site/.vuepress/public/img/home/github-icon.svg
new file mode 100644
index 000000000..487febc8f
--- /dev/null
+++ b/site/.vuepress/public/img/home/github-icon.svg
@@ -0,0 +1 @@
[SVG markup stripped]
\ No newline at end of file
diff --git a/site/.vuepress/public/img/home/github.svg b/site/.vuepress/public/img/home/github.svg
new file mode 100755
index 000000000..c3ce5e793
--- /dev/null
+++ b/site/.vuepress/public/img/home/github.svg
@@ -0,0 +1,3 @@
[SVG markup stripped]
diff --git a/site/.vuepress/public/img/home/gitter.svg b/site/.vuepress/public/img/home/gitter.svg
new file mode 100755
index 000000000..46b70e238
--- /dev/null
+++ b/site/.vuepress/public/img/home/gitter.svg
@@ -0,0 +1,3 @@
[SVG markup stripped]
diff --git a/site/.vuepress/public/img/home/goodtables-new.svg b/site/.vuepress/public/img/home/goodtables-new.svg
new file mode 100755
index 000000000..d64da5442
--- /dev/null
+++ b/site/.vuepress/public/img/home/goodtables-new.svg
@@ -0,0 +1,10 @@
[SVG markup stripped]
diff --git a/site/.vuepress/public/img/home/goodtables.svg b/site/.vuepress/public/img/home/goodtables.svg
new file mode 100755
index 000000000..d3bde8622
--- /dev/null
+++ b/site/.vuepress/public/img/home/goodtables.svg
@@ -0,0 +1,16 @@
[SVG markup stripped]
diff --git a/site/.vuepress/public/img/home/jen_thomas.jpeg b/site/.vuepress/public/img/home/jen_thomas.jpeg
new file mode 100644
index 000000000..5f07585ca
Binary files /dev/null and b/site/.vuepress/public/img/home/jen_thomas.jpeg differ
diff --git a/site/.vuepress/public/img/home/kaggle.png b/site/.vuepress/public/img/home/kaggle.png
new file mode 100644
index 000000000..6c570749e
Binary files /dev/null and b/site/.vuepress/public/img/home/kaggle.png differ
diff --git a/site/.vuepress/public/img/home/logo-thicker.svg b/site/.vuepress/public/img/home/logo-thicker.svg
new file mode 100755
index 000000000..a8f445f7e
--- /dev/null
+++ b/site/.vuepress/public/img/home/logo-thicker.svg
@@ -0,0 +1,12 @@
[SVG markup stripped; surviving text: "frictionless", "data", "frictionless", "data"]
diff --git a/site/.vuepress/public/img/home/logo-white-thicker.svg b/site/.vuepress/public/img/home/logo-white-thicker.svg
new file mode 100755
index 000000000..46de55816
--- /dev/null
+++ b/site/.vuepress/public/img/home/logo-white-thicker.svg
@@ -0,0 +1,12 @@
[SVG markup stripped; surviving text: "frictionless", "data", "frictionless", "data"]
diff --git a/site/.vuepress/public/img/home/matrix.svg b/site/.vuepress/public/img/home/matrix.svg
new file mode 100644
index 000000000..1f13e4a16
--- /dev/null
+++ b/site/.vuepress/public/img/home/matrix.svg
@@ -0,0 +1 @@
[SVG markup stripped]
\ No newline at end of file
diff --git a/site/.vuepress/public/img/home/okfn-logo-white.png b/site/.vuepress/public/img/home/okfn-logo-white.png
new file mode 100644
index 000000000..85455d0f3
Binary files /dev/null and b/site/.vuepress/public/img/home/okfn-logo-white.png differ
diff --git a/site/.vuepress/public/img/home/okn.svg b/site/.vuepress/public/img/home/okn.svg
new file mode 100644
index 000000000..9a14a0015
--- /dev/null
+++ b/site/.vuepress/public/img/home/okn.svg
@@ -0,0 +1,8 @@
[SVG markup stripped]
diff --git a/site/.vuepress/public/img/home/oleg-lavrovsky.png b/site/.vuepress/public/img/home/oleg-lavrovsky.png
new file mode 100644
index 000000000..b0f9555f5
Binary files /dev/null and b/site/.vuepress/public/img/home/oleg-lavrovsky.png differ
diff --git a/site/.vuepress/public/img/home/opendata.svg b/site/.vuepress/public/img/home/opendata.svg
new file mode 100755
index 000000000..3c8c691ab
--- /dev/null
+++ b/site/.vuepress/public/img/home/opendata.svg
@@ -0,0 +1,6 @@
[SVG markup stripped]
diff --git a/site/.vuepress/public/img/home/openrefine.png b/site/.vuepress/public/img/home/openrefine.png
new file mode 100644
index 000000000..44c200055
Binary files /dev/null and b/site/.vuepress/public/img/home/openrefine.png differ
diff --git a/site/.vuepress/public/img/home/pandas.png b/site/.vuepress/public/img/home/pandas.png
new file mode 100644
index 000000000..19b4698be
Binary files /dev/null and b/site/.vuepress/public/img/home/pandas.png differ
diff --git a/site/.vuepress/public/img/home/pushing-data.svg b/site/.vuepress/public/img/home/pushing-data.svg
new file mode 100755
index 000000000..5efc07972
--- /dev/null
+++ b/site/.vuepress/public/img/home/pushing-data.svg
@@ -0,0 +1,13 @@
[SVG markup stripped]
diff --git a/site/.vuepress/public/img/home/researchers-color.png b/site/.vuepress/public/img/home/researchers-color.png
new file mode 100755
index 000000000..85a90255e
Binary files /dev/null and b/site/.vuepress/public/img/home/researchers-color.png differ
diff --git a/site/.vuepress/public/img/home/researchers-color.svg b/site/.vuepress/public/img/home/researchers-color.svg
new file mode 100755
index 000000000..ec87e5d18
--- /dev/null
+++ b/site/.vuepress/public/img/home/researchers-color.svg
@@ -0,0 +1,43 @@
[SVG markup stripped]
diff --git a/site/.vuepress/public/img/home/simon-tyrrell.jpg b/site/.vuepress/public/img/home/simon-tyrrell.jpg
new file mode 100644
index 000000000..9fe93eac4
Binary files /dev/null and b/site/.vuepress/public/img/home/simon-tyrrell.jpg differ
diff --git a/site/.vuepress/public/img/home/slack-icon.png b/site/.vuepress/public/img/home/slack-icon.png
new file mode 100644
index 000000000..165a4814b
Binary files /dev/null and b/site/.vuepress/public/img/home/slack-icon.png differ
diff --git a/site/.vuepress/public/img/home/slide-creator.png b/site/.vuepress/public/img/home/slide-creator.png
new file mode 100644
index 000000000..0748a1fb0
Binary files /dev/null and b/site/.vuepress/public/img/home/slide-creator.png differ
diff --git a/site/.vuepress/public/img/home/slide-datahub.png b/site/.vuepress/public/img/home/slide-datahub.png
new file mode 100644
index 000000000..edf9c1cd3
Binary files /dev/null and b/site/.vuepress/public/img/home/slide-datahub.png differ
diff --git a/site/.vuepress/public/img/home/slide-framework.png b/site/.vuepress/public/img/home/slide-framework.png
new file mode 100644
index 000000000..a01c5dcf4
Binary files /dev/null and b/site/.vuepress/public/img/home/slide-framework.png differ
diff --git a/site/.vuepress/public/img/home/slide-goodtables.png b/site/.vuepress/public/img/home/slide-goodtables.png
new file mode 100644
index 000000000..75e69d444
Binary files /dev/null and b/site/.vuepress/public/img/home/slide-goodtables.png differ
diff --git a/site/.vuepress/public/img/home/slide-package.png b/site/.vuepress/public/img/home/slide-package.png
new file mode 100644
index 000000000..a46effbd4
Binary files /dev/null and b/site/.vuepress/public/img/home/slide-package.png differ
diff --git a/site/.vuepress/public/img/home/snippet.png b/site/.vuepress/public/img/home/snippet.png
new file mode 100644
index 000000000..cd2cb00b7
Binary files /dev/null and b/site/.vuepress/public/img/home/snippet.png differ
diff --git a/site/.vuepress/public/img/home/software.png b/site/.vuepress/public/img/home/software.png
new file mode 100644
index 000000000..49cc0310d
Binary files /dev/null and b/site/.vuepress/public/img/home/software.png differ
diff --git a/site/.vuepress/public/img/home/software_crop.png b/site/.vuepress/public/img/home/software_crop.png
new file mode 100644
index 000000000..a6f176bf1
Binary files /dev/null and b/site/.vuepress/public/img/home/software_crop.png differ
diff --git a/site/.vuepress/public/img/home/sourcing-data.svg b/site/.vuepress/public/img/home/sourcing-data.svg
new file mode 100755
index 000000000..037fecdd6
--- /dev/null
+++ b/site/.vuepress/public/img/home/sourcing-data.svg
@@ -0,0 +1,10 @@
[SVG markup stripped]
diff --git a/site/.vuepress/public/img/home/standards.png b/site/.vuepress/public/img/home/standards.png
new file mode 100644
index 000000000..58b5e555b
Binary files /dev/null and b/site/.vuepress/public/img/home/standards.png differ
diff --git a/site/.vuepress/public/img/home/standards_crop.png b/site/.vuepress/public/img/home/standards_crop.png
new file mode 100644
index 000000000..57c0d0e12
Binary files /dev/null and b/site/.vuepress/public/img/home/standards_crop.png differ
diff --git a/site/.vuepress/public/img/home/toolbox.png b/site/.vuepress/public/img/home/toolbox.png
new file mode 100755
index 000000000..5a3bb3f7a
Binary files /dev/null and b/site/.vuepress/public/img/home/toolbox.png differ
diff --git a/site/.vuepress/public/img/home/toolbox.svg b/site/.vuepress/public/img/home/toolbox.svg
new file mode 100755
index 000000000..28097eaa7
--- /dev/null
+++ b/site/.vuepress/public/img/home/toolbox.svg
@@ -0,0 +1,103 @@
[SVG markup stripped]
diff --git a/site/.vuepress/public/img/home/transforming-data.svg b/site/.vuepress/public/img/home/transforming-data.svg
new file mode 100755
index 000000000..9c8be674c
--- /dev/null
+++ b/site/.vuepress/public/img/home/transforming-data.svg
@@ -0,0 +1,8 @@
[SVG markup stripped]
diff --git a/site/.vuepress/public/img/home/twitter-icon.svg b/site/.vuepress/public/img/home/twitter-icon.svg
new file mode 100644
index 000000000..5e8dd39f9
--- /dev/null
+++ b/site/.vuepress/public/img/home/twitter-icon.svg
@@ -0,0 +1 @@
[SVG markup stripped]
\ No newline at end of file
diff --git a/site/.vuepress/public/img/home/twitter.svg b/site/.vuepress/public/img/home/twitter.svg
new file mode 100755
index 000000000..317e03d7c
--- /dev/null
+++ b/site/.vuepress/public/img/home/twitter.svg
@@ -0,0 +1,3 @@
[SVG markup stripped]
diff --git a/site/.vuepress/public/img/introduction/audience.png b/site/.vuepress/public/img/introduction/audience.png
new file mode 100644
index 000000000..68c1a58c3
Binary files /dev/null and b/site/.vuepress/public/img/introduction/audience.png differ
diff --git a/site/.vuepress/public/img/introduction/report.png b/site/.vuepress/public/img/introduction/report.png
new file mode 100644
index 000000000..484698f0d
Binary files /dev/null and b/site/.vuepress/public/img/introduction/report.png differ
diff --git a/site/.vuepress/public/img/introduction/structure.png b/site/.vuepress/public/img/introduction/structure.png
new file mode 100644
index 000000000..69171fa67
Binary files /dev/null and b/site/.vuepress/public/img/introduction/structure.png differ
diff --git a/site/.vuepress/public/img/job-stories.png b/site/.vuepress/public/img/job-stories.png
new file mode 100644
index 000000000..6ddad832d
Binary files /dev/null and b/site/.vuepress/public/img/job-stories.png differ
diff --git a/site/.vuepress/public/img/planet-earth.png b/site/.vuepress/public/img/planet-earth.png
new file mode 100644
index 000000000..9e005e50d
Binary files /dev/null and b/site/.vuepress/public/img/planet-earth.png differ
diff --git a/site/.vuepress/public/img/safari-pinned-tab.svg b/site/.vuepress/public/img/safari-pinned-tab.svg
new file mode 100644
index 000000000..48ec21adf
--- /dev/null
+++ b/site/.vuepress/public/img/safari-pinned-tab.svg
@@ -0,0 +1,41 @@
[SVG markup stripped; surviving text: "Created by potrace 1.11, written by Peter Selinger 2001-2013"]
diff --git a/site/.vuepress/public/img/site.webmanifest b/site/.vuepress/public/img/site.webmanifest
new file mode 100644
index 000000000..b20abb7cb
--- /dev/null
+++ b/site/.vuepress/public/img/site.webmanifest
@@ -0,0 +1,19 @@
+{
+  "name": "",
+  "short_name": "",
+  "icons": [
+    {
+      "src": "/android-chrome-192x192.png",
+      "sizes": "192x192",
+      "type": "image/png"
+    },
+    {
+      "src": "/android-chrome-512x512.png",
+      "sizes": "512x512",
+      "type": "image/png"
+    }
+  ],
+  "theme_color": "#ffffff",
+  "background_color": "#ffffff",
+  "display": "standalone"
+}
diff --git a/site/.vuepress/public/img/software/coming-soon.png b/site/.vuepress/public/img/software/coming-soon.png
new file mode 100644
index 000000000..b4c3f575e
Binary files /dev/null and b/site/.vuepress/public/img/software/coming-soon.png differ
diff --git a/site/.vuepress/public/img/software/components.png b/site/.vuepress/public/img/software/components.png
new file mode 100644
index 000000000..3118ad8f4
Binary files /dev/null and b/site/.vuepress/public/img/software/components.png differ
diff --git a/site/.vuepress/public/img/software/datahub.png b/site/.vuepress/public/img/software/datahub.png
new file mode 100644
index 000000000..659e213fa
Binary files /dev/null and b/site/.vuepress/public/img/software/datahub.png differ
diff --git a/site/.vuepress/public/img/software/framework.png b/site/.vuepress/public/img/software/framework.png
new file mode 100644
index 000000000..4b43af171
Binary files /dev/null and b/site/.vuepress/public/img/software/framework.png differ
diff --git a/site/.vuepress/public/img/software/libraries.png b/site/.vuepress/public/img/software/libraries.png
new file mode 100644
index 000000000..70aecdf25
Binary files /dev/null and b/site/.vuepress/public/img/software/libraries.png differ
diff --git a/site/.vuepress/public/img/software/livemark.png b/site/.vuepress/public/img/software/livemark.png
new file mode 100644
index 000000000..2520fee9a
Binary files /dev/null and b/site/.vuepress/public/img/software/livemark.png differ
diff --git a/site/.vuepress/public/img/software/repository.png b/site/.vuepress/public/img/software/repository.png
new file mode 100644
index 000000000..f53193316
Binary files /dev/null and b/site/.vuepress/public/img/software/repository.png differ
diff --git a/site/.vuepress/public/img/standards/csv-dialect.png b/site/.vuepress/public/img/standards/csv-dialect.png
new file mode 100644
index 000000000..2dc644649
Binary files /dev/null and b/site/.vuepress/public/img/standards/csv-dialect.png differ
diff --git a/site/.vuepress/public/img/standards/data-package-views.png b/site/.vuepress/public/img/standards/data-package-views.png
new file mode 100644
index 000000000..f45bd5fd0
Binary files /dev/null and b/site/.vuepress/public/img/standards/data-package-views.png differ
diff --git a/site/.vuepress/public/img/standards/data-package.png b/site/.vuepress/public/img/standards/data-package.png
new file mode 100644
index 000000000..bf70b0b06
Binary files /dev/null and b/site/.vuepress/public/img/standards/data-package.png differ
diff --git a/site/.vuepress/public/img/standards/data-resource.png b/site/.vuepress/public/img/standards/data-resource.png
new file mode 100644
index 000000000..6907970c0
Binary files /dev/null and b/site/.vuepress/public/img/standards/data-resource.png differ
diff --git a/site/.vuepress/public/img/standards/fiscal-data-package.png b/site/.vuepress/public/img/standards/fiscal-data-package.png
new file mode 100644
index 000000000..008500430
Binary files /dev/null and b/site/.vuepress/public/img/standards/fiscal-data-package.png differ
diff --git a/site/.vuepress/public/img/standards/specs-diagram.png b/site/.vuepress/public/img/standards/specs-diagram.png
new file mode 100644
index 000000000..ea64940a3
Binary files /dev/null and b/site/.vuepress/public/img/standards/specs-diagram.png differ
diff --git a/site/.vuepress/public/img/standards/table-schema.png b/site/.vuepress/public/img/standards/table-schema.png
new file mode 100644
index 000000000..e138a95d4
Binary files /dev/null and b/site/.vuepress/public/img/standards/table-schema.png differ
diff --git a/site/.vuepress/public/img/structure.png b/site/.vuepress/public/img/structure.png
new file mode 100644
index 000000000..5b8033cf1
Binary files /dev/null and b/site/.vuepress/public/img/structure.png differ
diff --git a/site/.vuepress/public/img/suitcase.png b/site/.vuepress/public/img/suitcase.png
new file mode 100644
index 000000000..ccc6014f9
Binary files /dev/null and b/site/.vuepress/public/img/suitcase.png differ
diff --git a/site/.vuepress/public/img/tag.png b/site/.vuepress/public/img/tag.png
new file mode 100644
index 000000000..cd08f72ab
Binary files /dev/null and b/site/.vuepress/public/img/tag.png differ
diff --git a/site/.vuepress/public/img/translation.png b/site/.vuepress/public/img/translation.png
new file mode 100644
index 000000000..723938151
Binary files /dev/null and b/site/.vuepress/public/img/translation.png differ
diff --git a/site/.vuepress/theme/components/BlogIndex.vue b/site/.vuepress/theme/components/BlogIndex.vue
new file mode 100644
index 000000000..99caee88e
--- /dev/null
+++ b/site/.vuepress/theme/components/BlogIndex.vue
@@ -0,0 +1,96 @@
[Vue component source stripped]
diff --git a/site/.vuepress/theme/components/BlogPost.vue b/site/.vuepress/theme/components/BlogPost.vue
new file mode 100644
index 000000000..52e510351
--- /dev/null
+++ b/site/.vuepress/theme/components/BlogPost.vue
@@ -0,0 +1,33 @@
[Vue component source stripped]
diff --git a/site/.vuepress/theme/components/BlogTag.vue b/site/.vuepress/theme/components/BlogTag.vue
new file mode 100644
index 000000000..46b51d91d
--- /dev/null
+++ b/site/.vuepress/theme/components/BlogTag.vue
@@ -0,0 +1,17 @@
[Vue component source stripped]
\ No newline at end of file
diff --git a/site/.vuepress/theme/components/Footer.vue b/site/.vuepress/theme/components/Footer.vue
new file mode 100644
index 000000000..8cb4e2930
--- /dev/null
+++ b/site/.vuepress/theme/components/Footer.vue
@@ -0,0 +1,119 @@
[Vue component source stripped]
diff --git a/site/.vuepress/theme/components/FooterSidebar.vue b/site/.vuepress/theme/components/FooterSidebar.vue
new file mode 100644
index 000000000..cfdfc403a
--- /dev/null
+++ b/site/.vuepress/theme/components/FooterSidebar.vue
@@ -0,0 +1,55 @@
[Vue component source stripped]
diff --git a/site/.vuepress/theme/components/Home.vue b/site/.vuepress/theme/components/Home.vue
new file mode 100644
index 000000000..7e67bec12
--- /dev/null
+++ b/site/.vuepress/theme/components/Home.vue
@@ -0,0 +1,93 @@
[Vue component source stripped]
diff --git a/site/.vuepress/theme/components/Job.vue b/site/.vuepress/theme/components/Job.vue
new file mode 100644
index 000000000..7ef72d61c
--- /dev/null
+++ b/site/.vuepress/theme/components/Job.vue
@@ -0,0 +1,70 @@
[Vue component source stripped]
diff --git a/site/.vuepress/theme/components/JobsDiagram.vue b/site/.vuepress/theme/components/JobsDiagram.vue
new file mode 100644
index 000000000..1dd05837e
--- /dev/null
+++ b/site/.vuepress/theme/components/JobsDiagram.vue
@@ -0,0 +1,172 @@
[Vue component source stripped]
diff --git a/site/.vuepress/theme/components/JobsDiagramSmall.vue b/site/.vuepress/theme/components/JobsDiagramSmall.vue
new file mode 100644
index 000000000..de50ff084
--- /dev/null
+++ b/site/.vuepress/theme/components/JobsDiagramSmall.vue
@@ -0,0 +1,172 @@
[Vue component source stripped]
diff --git a/site/.vuepress/theme/components/Navbar.vue b/site/.vuepress/theme/components/Navbar.vue
new file mode 100644
index 000000000..60acdb9e2
--- /dev/null
+++ b/site/.vuepress/theme/components/Navbar.vue
@@ -0,0 +1,150 @@
[Vue component source stripped]
diff --git a/site/.vuepress/theme/components/Page.vue b/site/.vuepress/theme/components/Page.vue
new file mode 100644
index 000000000..a5a704d6c
--- /dev/null
+++ b/site/.vuepress/theme/components/Page.vue
@@ -0,0 +1,36 @@
[Vue component source stripped]
diff --git a/site/.vuepress/theme/components/Product.vue b/site/.vuepress/theme/components/Product.vue
new file mode 100644
index 000000000..8bb4e1d7c
--- /dev/null
+++ b/site/.vuepress/theme/components/Product.vue
@@ -0,0 +1,69 @@
[Vue component source stripped]
diff --git a/site/.vuepress/theme/components/Tag.vue b/site/.vuepress/theme/components/Tag.vue
new file mode 100644
index 000000000..7d2699410
--- /dev/null
+++ b/site/.vuepress/theme/components/Tag.vue
@@ -0,0 +1,61 @@
[Vue component source stripped]
diff --git a/site/.vuepress/theme/index.js b/site/.vuepress/theme/index.js
new file mode 100644
index 000000000..b91b8a576
--- /dev/null
+++ b/site/.vuepress/theme/index.js
@@ -0,0 +1,3 @@
+module.exports = {
+  extend: '@vuepress/theme-default'
+}
diff --git a/site/.vuepress/theme/layouts/Layout.vue b/site/.vuepress/theme/layouts/Layout.vue
new file mode 100644
index 000000000..bae6d4c2d
--- /dev/null
+++ b/site/.vuepress/theme/layouts/Layout.vue
@@ -0,0 +1,184 @@
[Vue component source stripped]
diff --git a/site/.vuepress/theme/layouts/Tags.vue b/site/.vuepress/theme/layouts/Tags.vue
new file mode 100644
index 000000000..9e6f67a6d
--- /dev/null
+++ b/site/.vuepress/theme/layouts/Tags.vue
@@ -0,0 +1,13 @@
[Vue component source stripped]
diff --git a/site/.vuepress/theme/styles/index.styl b/site/.vuepress/theme/styles/index.styl
new file mode 100644
index 000000000..0fc386f52
--- /dev/null
+++ b/site/.vuepress/theme/styles/index.styl
@@ -0,0 +1,43 @@
+@tailwind base;
+@tailwind components;
+@tailwind utilities;
+
+body
+  font-family 'Lato', sans-serif
+  max-width: 1920px
+  margin 0 auto
+  color black
+.navbar
+  border-bottom 1px solid #f8f8f8
+  color black
+  .site-name
+    display none
+.dropdown-wrapper .dropdown-title
+  pointer-events none
+
+@media (max-width: 720px)
+  .dropdown-wrapper .dropdown-title
+    pointer-events auto
+    outline none
+
+.dropdown-wrapper span {
+  color:black;
+} + +.theme-default-content:not(.custom) { + max-width: 992px !important; +} + +img.logo { + padding-left: 10px; +} + +p > span, h1, h2, h3 { + font-weight: 500; + color: black; +} + +.custom-block + &.tip + background-color #f3f5f7 + /* border-color #ea6d4c */ diff --git a/site/.vuepress/theme/styles/palette.styl b/site/.vuepress/theme/styles/palette.styl new file mode 100644 index 000000000..a7a334280 --- /dev/null +++ b/site/.vuepress/theme/styles/palette.styl @@ -0,0 +1,2 @@ +$accentColor = #EA6D4C +$navbarHeight = 4.6rem \ No newline at end of file diff --git a/site/.vuepress/theme/util/index.js b/site/.vuepress/theme/util/index.js new file mode 100644 index 000000000..204696992 --- /dev/null +++ b/site/.vuepress/theme/util/index.js @@ -0,0 +1,251 @@ +export const hashRE = /#.*$/ +export const extRE = /\.(md|html)$/ +export const endingSlashRE = /\/$/ +export const outboundRE = /^(https?:|mailto:|tel:|[a-zA-Z]{4,}:)/ + +export function formatDate(date) { + var options = { year: 'numeric', month: 'long', day: 'numeric' }; + const d = new Date(date); + return d.toLocaleDateString("en-US", options) +} + +export function normalize (path) { + return decodeURI(path) + .replace(hashRE, '') + .replace(extRE, '') +} + +export function getHash (path) { + const match = path.match(hashRE) + if (match) { + return match[0] + } +} + +export function isExternal (path) { + return outboundRE.test(path) +} + +export function isMailto (path) { + return /^mailto:/.test(path) +} + +export function isTel (path) { + return /^tel:/.test(path) +} + +export function ensureExt (path) { + if (isExternal(path)) { + return path + } + const hashMatch = path.match(hashRE) + const hash = hashMatch ? 
hashMatch[0] : '' + const normalized = normalize(path) + + if (endingSlashRE.test(normalized)) { + return path + } + return normalized + '.html' + hash +} + +export function isActive (route, path) { + const routeHash = route.hash + const linkHash = getHash(path) + if (linkHash && routeHash !== linkHash) { + return false + } + const routePath = normalize(route.path) + const pagePath = normalize(path) + return routePath === pagePath +} + +export function resolvePage (pages, rawPath, base) { + if (isExternal(rawPath)) { + return { + type: 'external', + path: rawPath + } + } + if (base) { + rawPath = resolvePath(rawPath, base) + } + const path = normalize(rawPath) + for (let i = 0; i < pages.length; i++) { + if (normalize(pages[i].regularPath) === path) { + return Object.assign({}, pages[i], { + type: 'page', + path: ensureExt(pages[i].path) + }) + } + } + console.error(`[vuepress] No matching page found for sidebar item "${rawPath}"`) + return {} +} + +function resolvePath (relative, base, append) { + const firstChar = relative.charAt(0) + if (firstChar === '/') { + return relative + } + + if (firstChar === '?' 
|| firstChar === '#') { + return base + relative + } + + const stack = base.split('/') + + // remove trailing segment if: + // - not appending + // - appending to trailing slash (last segment is empty) + if (!append || !stack[stack.length - 1]) { + stack.pop() + } + + // resolve relative path + const segments = relative.replace(/^\//, '').split('/') + for (let i = 0; i < segments.length; i++) { + const segment = segments[i] + if (segment === '..') { + stack.pop() + } else if (segment !== '.') { + stack.push(segment) + } + } + + // ensure leading slash + if (stack[0] !== '') { + stack.unshift('') + } + + return stack.join('/') +} + +/** + * @param { Page } page + * @param { string } regularPath + * @param { SiteData } site + * @param { string } localePath + * @returns { SidebarGroup } + */ +export function resolveSidebarItems (page, regularPath, site, localePath) { + const { pages, themeConfig } = site + + const localeConfig = localePath && themeConfig.locales + ? themeConfig.locales[localePath] || themeConfig + : themeConfig + + const pageSidebarConfig = page.frontmatter.sidebar || localeConfig.sidebar || themeConfig.sidebar + if (pageSidebarConfig === 'auto') { + return resolveHeaders(page) + } + + const sidebarConfig = localeConfig.sidebar || themeConfig.sidebar + if (!sidebarConfig) { + return [] + } else { + const { base, config } = resolveMatchingConfig(regularPath, sidebarConfig) + return config + ? 
config.map(item => resolveItem(item, pages, base)) + : [] + } +} + +/** + * @param { Page } page + * @returns { SidebarGroup } + */ +function resolveHeaders (page) { + const headers = groupHeaders(page.headers || []) + return [{ + type: 'group', + collapsable: false, + title: page.title, + path: null, + children: headers.map(h => ({ + type: 'auto', + title: h.title, + basePath: page.path, + path: page.path + '#' + h.slug, + children: h.children || [] + })) + }] +} + +export function groupHeaders (headers) { + // group h3s under h2 + headers = headers.map(h => Object.assign({}, h)) + let lastH2 + headers.forEach(h => { + if (h.level === 2) { + lastH2 = h + } else if (lastH2) { + (lastH2.children || (lastH2.children = [])).push(h) + } + }) + return headers.filter(h => h.level === 2) +} + +export function resolveNavLinkItem (linkItem) { + return Object.assign(linkItem, { + type: linkItem.items && linkItem.items.length ? 'links' : 'link' + }) +} + +/** + * @param { Route } route + * @param { Array | Array | [link: string]: SidebarConfig } config + * @returns { base: string, config: SidebarConfig } + */ +export function resolveMatchingConfig (regularPath, config) { + if (Array.isArray(config)) { + return { + base: '/', + config: config + } + } + for (const base in config) { + if (ensureEndingSlash(regularPath).indexOf(encodeURI(base)) === 0) { + return { + base, + config: config[base] + } + } + } + return {} +} + +function ensureEndingSlash (path) { + return /(\.html|\/)$/.test(path) + ? path + : path + '/' +} + +function resolveItem (item, pages, base, groupDepth = 1) { + if (typeof item === 'string') { + return resolvePage(pages, item, base) + } else if (Array.isArray(item)) { + return Object.assign(resolvePage(pages, item[0], base), { + title: item[1] + }) + } else { + if (groupDepth > 3) { + console.error( + '[vuepress] detected a too deep nested sidebar group.' 
+ ) + } + const children = item.children || [] + if (children.length === 0 && item.path) { + return Object.assign(resolvePage(pages, item.path, base), { + title: item.title + }) + } + return { + type: 'group', + path: item.path, + title: item.title, + sidebarDepth: item.sidebarDepth, + children: children.map(child => resolveItem(child, pages, base, groupDepth + 1)), + collapsable: item.collapsable !== false + } + } +} diff --git a/site/README.md b/site/README.md new file mode 100644 index 000000000..f054f0064 --- /dev/null +++ b/site/README.md @@ -0,0 +1,337 @@ +--- +layout: home +title: Frictionless Data +description: Data software and standards +heroImage: /img/home/toolbox.png +heroText: The frictionless toolkit for data integration +tagline: Frictionless is an open-source toolkit that brings simplicity to the data experience - whether you're wrangling a CSV or engineering complex pipelines. +features: +- title: Approachable + details: A lean and minimal core. Quick to understand, quick to use. +- title: Incrementally Adoptable + details: Start with just what you need, scale as you grow. +- title: Progressive + details: Enhance, rather than replace, your existing tools and workflows. +--- + + + + +
+

For anyone who works with data
Especially complex data or across tools or teams.

+
+
+
+ +

Researchers

+

Make your research data more reproducible

+
+
+
+
+ +

Data Scientists

+

Easily create data processing pipelines

+
+
+
+
+ +

Data Engineers

+

Standardize complex data platforms

+
+
+
+
+ +
+
+

Frictionless Data Integration and Management


Data integration is the job of bringing complex data together, cleaning it up, knitting it together and pushing it into downstream applications, analytics or warehouses – and you can do this reliably, repeatedly and automatically with Frictionless.
+
+
+
+
+
+ +
+
+

Packaging Data

+

Package data with its metadata and schema for increased usability and clarity.

+
+
+
+
+ +
+
+

Transforming Data

+

Data often requires some transformations, like cleaning or conversions from one format to another.

+
+
+
+
+ +
+
+

Pushing and Storing Data

+

Frictionless has several plugins for accessing and storing data, for example in a SQL database.

+
+
+
+
+ +
+
+
+ + + +
+ +
+

User Testimonials

+ The Frictionless Data project has been adopted by many organizations and individuals. +
+
+ +
+
+ +

"The Frictionless Data stack is proving itself to be a solid foundation on which to build the next wave of open data. It promotes FAIR data from inception in order to build modern Open Data Portals!"

+

- Francisco Alves, Frictionless Contributor

+
+
+ +
+
+ +

"Frictionless is the happy mix of being easy to understand and use along with being extensible and open, it's pretty much the perfect way of bundling data and metadata together. Don't leave home without it :-)"

+

- Simon Tyrrell, Frictionless Tool Fund Grantee

+
+
+ +
+
+ +

"Data standards are a powerful instrument to support the next generation of users, develop compelling use cases and define new ecosystems that create jobs. Frictionless Data has helped us to achieve all of the above."

+

- Oleg Lavrovsky, Frictionless Tool Fund Grantee

+
+
+ + + + +
+
+
+ + + + diff --git a/site/adoption/README.md b/site/adoption/README.md new file mode 100644 index 000000000..594fd6480 --- /dev/null +++ b/site/adoption/README.md @@ -0,0 +1,536 @@ +--- +title: Frictionless Adoption +--- + +# Frictionless Adoption + +Projects and collaborations that use Frictionless. + +The Frictionless Data project provides software and standards to work with data. On this page we share projects and collaborations that use Frictionless, including collaborations with the Frictionless Team as well as community projects that use our toolkit. + +:::tip +If you use Frictionless in your work and want to share it with the community, please write to the Frictionless Team using any of the contacts provided on this site and we will add your project to this page. +::: + +## Pilot Collaborations + +We work closely with data researchers and institutions to help them integrate Frictionless into their workflow. Click on individual Pilots to learn more. +
+
+ + +
+
+ + +

BCO-DMO

+
+

A Pilot with the Biological and Chemical Oceanography Data Management Office (BCO-DMO).

+
+
+ + +
+
+ + +

PUDL

+
+

A pilot with the Public Utility Data Liberation project (PUDL) to make US energy data easier to use.

+
+
+ + +
+
+ + +

Dryad

+
+

A pilot to add Frictionless Data Validation within Dryad, a curated resource that makes research data discoverable, freely reusable, and citable.

+
+
+ + +
+
+ + +

Data Readiness Group

+
+

A pilot with Dr. Philippe Rocca-Serra at Oxford's Data Readiness Group to remove the friction in reported scientific experimental results by applying the Data Package specifications.

+
+
+ + +
+
+ + +

Data Management for TEDDINET

+
+

A pilot to use Frictionless Data approaches to address data legacy issues facing the TEDDINET project, a research network addressing the challenges of transforming energy demand in our buildings.

+
+
+ + +
+
+ + +

Western Pennsylvania Regional Data Center

+
+

A pilot with the Western Pennsylvania Regional Data Center, part of the University of Pittsburgh Center for Urban and Social Research, to improve the quality and description of datasets in CKAN-based open data portals.

+
+
+ + +
+
+ + +

UK Data Service

+
+

A pilot with the UK Data Service to use Frictionless Data software to assess and report on data quality, and to make a case for generating visualizations from the resulting data and metadata.

+
+
+ + +
+
+ + +

eLife

+
+

A pilot to explore the use of the goodtables library to validate all scientific research datasets hosted by eLife and make a case for open data reuse in the field of Life and BioMedical sciences.

+
+
+ + +
+
+ + +

University of Cambridge - Retinal Mosaics

+
+

A pilot to trial Frictionless software for packaging and reading data to support computational techniques to investigate development of the nervous system.

+
+
+ + +
+
+ + +

Pacific Northwest National Laboratory - Active Data Biology

+
+

A pilot to explore the use of Frictionless Data's specifications and software to generate schemas for tabular data and validate metadata stored as part of a biological application on GitHub.

+
+
+ + +
+
+ + +

Causa Natura - Pescando Datos

+
+

A pilot to explore the use of data validation software in the Causa Natura project to improve quality of data to support fisher communities and advocacy groups.

+
+
+ + +
+
+ +## Tool Fund Grantee Projects +As part of the [Reproducible Research project](#frictionless-data-for-reproducible-research), we awarded several projects with small grants to build new tooling for open research based on the Frictionless codebase. Click on individual Tool Fund profiles to learn more. + +
+
+ + +
+
+ + +

Schema Collaboration

+
+

Data managers and researchers collaborate to write packages and tabular schemas (by Carles Pina Estany).

+
+
+ + +
+
+ + +

Frictionless Data Package for InterMine

+
+

Add data package support to InterMine, an open-source biological data warehouse (by Nikhil Vats).

+
+
+ + +
+
+ + +

Frictionless Data for Wheat

+
+

Added Frictionless support to the Designing Future Wheat project data portal which houses large-scale wheat datasets (by Simon Tyrrell and Xingdong Bian).

+
+
+ + +
+
+ + +

Metrics in Context

+
+

Developing an open standard to describe metadata of scholarly metrics by using Frictionless specifications (by Asura Enkhbayar).

+
+
+ + +
+
+ + +

Analysis of spontaneous activity patterns in developing neural circuits using Frictionless Data tools

+
+

Evaluate the use of Frictionless Data as a common format for the analysis of neuronal spontaneous activity recordings in comparison to HDF5 (by Stephen Eglen and Alexander Shtyrov).

+
+
+ + +
+
+ + +

Neuroscience Experiments System Tool Fund

+
+

Adapt the existing export component of RIDC NeuroMat's Neuroscience Experiments System to conform to the Frictionless Data specifications (by João Alexandre Peschanski, Cassiano dos Santos and Carlos Eduardo Ribas).

+
+
+ + +
+
+ + +

Frictionless DarwinCore

+
+

A tool to convert DarwinCore Archives into Frictionless Data Packages (by André Heughebaert).

+
+
+ + +
+
+ + +

Frictionless Google Sheets Tool (WIP)

+
+

Prototype a Data Package import/export add-on to Google Sheets (by Stephan Max).

+
+
+ + +
+
+ + +

Frictionless Open Referral

+
+

Implement datapackage bundling of Open Referral CSV files, which contain human health and social services data (by Shelby Switzer and Greg Bloom).

+
+
+ + +
+
+ + +

Software Libraries Grantees

+
+

In 2017, 6 grantees were awarded funds to translate the Frictionless Python libraries into other software languages. The awardees and languages were: Matt Thompson - Clojure; Ori Hoch - PHP; Daniel Fireman - Go; Georges Labrèche - Java; Oleg Lavrovsky - Julia; and Open Knowledge Greece - R. You can read more about each of them on the people page.

+
+
+ +
+
+ +## Community Projects + +The Frictionless Data project develops open source standards and software that can be re-used by anyone. Here is a list of projects that our community has created on top of Frictionless. If you would like your project to be featured here, let us know! + +
+
+ + +
+
+ + +

Libraries Hacked

+
+

Libraries Hacked is a project started in 2014 to promote the use of open data in libraries.

+
+
+ + +
+
+ + +

Open Data Blend

+
+

Open Data Blend is a set of open data services that aim to make large and complex UK open data easier to analyse.

+
+
+ + +
+
+ + +

Data Curator

+
+

Data Curator is a simple desktop data editor to help describe, validate and share usable open data.

+
+
+ + +
+
+ + +

HuBMAP

+
+

HuBMAP is creating an open, global atlas of the human body at the cellular level.

+
+
+ + +
+
+ + +

Etalab

+
+

Etalab, a department of the French interministerial digital service, launched schema.data.gouv.fr.

+
+
+ + +
+
+ + +

Nimble Learn - datapackage-m

+
+

A set of functions written in Power Query M for working with Tabular Data Packages in Power BI Desktop and Power Query for Excel.

+
+
+ + +
+
+ + +

Nimble Learn - Datapackage-connector

+
+

Power BI Custom Connector that loads one or more tables from Tabular Data Packages into Power BI.

+
+
+ + +
+
+ + +

Zegami

+
+

Zegami is using Frictionless Data specifications for data management and syntactic analysis on their visual data analysis platform.

+
+
+ + +
+
+ + +

Center for Data Science and Public Policy, Workforce Data Initiative

+
+

Supporting state and local workforce boards in managing and publishing data.

+
+
+ + +
+
+ + +

Cell Migration Standardization Organization

+
+

Using Frictionless Data specs to package cell migration data and load it into Pandas for data analysis and creation of visualizations.

+
+
+ + +
+
+ + +

Collections as Data Facets - Carnegie Museum of Art Collection Data

+
+

Use of Frictionless Data specifications in the release of the Carnegie Museum of Art's Collection Data for public access & creative use.

+
+
+ + +
+
+ + +

OpenML

+
+

OpenML is an online platform and service for machine learning, whose goal is to make ML and data analysis simple.

+
+
+ + +
+
+ + +

The Data Retriever

+
+

Data Retriever uses Frictionless Data specifications to generate and package metadata for publicly available data.

+
+
+ + +
+
+ + +

Tesera

+
+

Tesera uses Frictionless Data specifications to package data in readiness for use in different systems and components.

+
+
+ + +
+
+ + +

data.world

+
+

data.world uses Frictionless Data specifications to generate schema and metadata related to an uploaded dataset and containerize all three in a Tabular Data Package.

+
+
+ + +
+
+ + +

John Snow Labs

+
+

John Snow Labs uses Frictionless Data specifications to make data available to users for analysis.

+
+
+ + +
+
+ + +

Open Power System Data

+
+

Open Power System Data uses Frictionless Data specifications to make energy data available for analysis and modeling.

+
+
+ + +
+
+ + +

Dataship

+
+

Dataship used Frictionless Data specifications as the basis for its notebooks for data analysis, which are easy to execute, edit, and share.

+
+
+ + +
+
+ + +

European Commission

+
+

The European Commission launched a CSV schema validator using the tabular data package specification, as part of the ISA² Interoperability Testbed.

+
+
+ + +
+
+ + +

Validata

+
+

OpenDataFrance created Validata, a platform for local public administration in France to validate CSV files on the web, using the tabular data package specification.

+
+
+ +
+
+ +## Find Frictionless Datasets +Where can I find Frictionless Datasets? +- The Frictionless team maintains a list of Frictionless Datasets from GitHub on this site: data-package.frictionlessdata.io +- You can find Frictionless Datasets on Zenodo by [searching for #frictionlessdata](https://zenodo.org/search?page=1&size=20&q=keywords:%22frictionlessdata%22) +- There are several external databases that allow export of data as datapackages, including [datahub.io](https://datahub.io/), [data.world](https://data.world/), and [Intermine](http://intermine.org/). +*Don't see your database listed here? Let us know!* + +## Grant-funded work + +### Frictionless Data for Reproducible Research + +From September 2018 until December 2021, the Frictionless Data team focused on enhanced dissemination and training activities, and further iterations on our software and specifications via a range of collaborations with research partners. We aimed to use Frictionless tooling to resolve research data workflow issues, create a new wave of open science advocates, and teach about FAIR data management. This pivotal work was funded by the Alfred P. Sloan Foundation and overseen by the Frictionless team at the Open Knowledge Foundation. You can read more details about this grant [here](https://blog.okfn.org/2018/07/12/sloan-foundation-funds-frictionless-data-for-reproducible-research/). + +#### Pilot Collaborations + +Pilots are intensive, hands-on collaborations with researcher teams to resolve their research data management workflow issues with Frictionless Data software and specs. You can read about the Pilot projects on our [blog](/tag/pilot/). + +#### Tool Fund + +The Tool Fund is a $5000 grant to develop an open tool for reproducible science or research built using the Frictionless Data codebase. Learn more by reading [Tool Fund Blogs](/tag/tool-fund/) or by visiting the [Tool Fund site](https://toolfund.frictionlessdata.io/).
+ +#### Fellows Programme + +The [Fellows Programme](https://fellows.frictionlessdata.io/) trains early-career researchers to become champions of the Frictionless Data tools and approaches in their field. Read more about the Programme, including Fellows biographies and the programme syllabus, on the [Fellows website](https://fellows.frictionlessdata.io/). + +### Data Institutions - Website Update + +In 2021, we partnered with the Open Data Institute (ODI) to improve our existing documentation and add new features on Frictionless Data to create a better user experience for all. Working with a series of feedback sessions from our community members, we created our new [documentation portal](https://framework.frictionlessdata.io/) for the Frictionless Framework and several new tutorials. Read more about this grant [here](https://blog.okfn.org/2021/04/14/unveiling-the-new-frictionless-data-documentation-portal/). + +### Frictionless Field Guide + +In 2017, OKF received funding from the Open Data Institute to create a Frictionless Data Field Guide. This guide provided step-by-step instructions for improving data publishing workflows. The [field guide](/tag/field-guide) introduced new ways of working informed by the Frictionless Data suite of software that data publishers can use independently, or adapt into existing personal and organisational workflows. You can read more details about this work [here](https://blog.okfn.org/2018/03/27/improving-your-data-publishing-workflow-with-the-frictionless-data-field-guide/). + +### Data Package Integrations + +In 2016, Google funded OKF to work on tool integration for Data Packages as part of our broader work on Frictionless Data to support the open data community. You can read more about this work [here](https://blog.okfn.org/2016/02/01/google-funds-frictionless-data-initiative-at-open-knowledge/). + +### Data Packages Development + +In 2016, OKF received funding from The Alfred P.
Sloan Foundation to work on a broad range of activities to enable better research and more effective civic tech through Frictionless Data. The funding targeted standards work, tooling, and infrastructure around “data packages” as well as piloting and outreach activities to support researchers and civic technologists in addressing real problems encountered when working with data. You can read more about this work [here](https://blog.okfn.org/2016/02/29/sloan-foundation-funds-frictionless-data-tooling-and-engagement-at-open-knowledge/). diff --git a/site/blog/2016-04-20-publish-faq/README.md b/site/blog/2016-04-20-publish-faq/README.md new file mode 100644 index 000000000..2aa644b24 --- /dev/null +++ b/site/blog/2016-04-20-publish-faq/README.md @@ -0,0 +1,210 @@ +--- +title: FAQ on Publishing Data Packages +date: 2016-04-20 +tags: +category: publishing-data +--- + +FAQs and best practice patterns for publishing data packages. + +Complete specifications are available at [specs/data-package](https://specs.frictionlessdata.io/data-package/). + +## Data Package Name + +The Data Package name is used in the `name` field of the `datapackage.json`. + +*This name is also frequently used for the folder/directory in which the Data Package is stored.* + +As per the Data Package spec, the name SHOULD be: + +* lower-case +* use '-' for word separators +* reasonably concise (3-4 words) + +**Naming conventions** + +For country-specific datasets: + +``` +{topic} # e.g. gdp +{topic}-{2-digit-iso} # e.g. gdp-us +``` + +For time series data: + +``` +[...-]year +[...-]quarter +[...-]month +[...-]day +``` + +---- + +## Resource and File Names + +Similar to Data Package Names: + +* lower-case +* use '-' for word separators + +Resource names SHOULD usually be the same as the name of the associated file on disk but without the file extension, e.g.
+ +``` +gdp-quarterly # resource name +gdp-quarterly.csv # on disk +``` + +Naming conventions for files follow those for data packages in terms of country or time series facets. + +---- + +## Descriptor `datapackage.json` + +### Alignment + +With JSON, data is structured in a nested way through curly and square brackets. Though the alignment of these structures is not relevant for computer programs, it makes it easier for the human reader if they are properly aligned. + +Good alignment: + +```json +{ + "name": "corruption-perceptions-index", + "title": "Corruption Perceptions Index (CPI)", + "sources": [ + { + "name": "Transparency International", + "web": "http://www.transparency.org/research/cpi/overview" + } + ], +... +} +``` + +Bad alignment: + +```json +{ + "name": "corruption-perceptions-index","title": "Corruption Perceptions Index (CPI)", + "sources": + [{ + "name": "Transparency International", + "web": "http://www.transparency.org/research/cpi/overview"}] + , +... +} +``` + +Please make sure your `datapackage.json` is well structured to ease the understanding of your Data Package content. The [Online DataPackage.json Creator](https://create.frictionlessdata.io/) can help you create the general structure. + +### Contributors field + +Add the 'contributors' field (original author of the package - see [specs/data-package](https://specs.frictionlessdata.io/data-package/)) if you wish to keep the credits for the package. + +---- + +## Data Package Folder Names and Structure + +It is standard practice to use the Data Package name (from the `datapackage.json`) for the name of the folder/directory in which the Data Package is kept. + +If storing in e.g. git(hub), this would also be the name of the repository. + +If you include scripts that automate the data extraction process, these should be stored in a `script` folder/directory. + +---- + +## README + +A README is a text file giving (human-readable) information about your dataset.
+ +Data Packages SHOULD have a README. + +### Formatting + +The README SHOULD be a plain text file (not Word or rich text, etc.) and SHOULD use markdown to allow for formatting. + +### File Name + +If markdown is used, the file SHOULD be named `README.md`; otherwise it SHOULD be named `README.txt`. + +### Sections + +You can include anything you like in your README. It is standard practice to include some (if possible all) of the following sections: **Introduction, Data, Preparation, License**. + +You SHOULD NOT include the title of the Data Package at the top of the README. + +Each section other than the introduction should be headed with its name using a level 2 heading in markdown, e.g. for the data section you would have the following markdown in your README: + +``` +## Data +``` + +#### Introduction + +Start with a short description of the dataset (the first sentence and first paragraph should be extractable to provide short standalone descriptions). + +Unlike other sections, **this section SHOULD NOT have a heading** as it starts the README (i.e. you do not need the heading `## Introduction`). + +#### Data + +Put specific information about the data in a Data section. This can be things like information about the source of the data, the specific structure of the data, missing values, etc. + +#### Preparation + +Put information on preparing the data in a Preparation section. In particular, any instructions about how to run any preparation and processing scripts to generate the data should go here. + +#### License + +Put additional information on the permissions and licensing of the data in the Data Package in the License section. + +Since licensing information is often not clear from the data producers, the guideline here is to license the Data Package under the Public Domain Dedication and License, and then to add any relevant information or disclaimers regarding the source data.
+ +See, for example: + +* +* + +See also the following thread + +---- + +## Validate and preview your Data Package + +Use the [Data Package Creator][dp-creator] to check that your `datapackage.json` and Data Package are good to go. Simply drop the URL to your `datapackage.json` file in the input box, or upload from a local source, and press `Validate`. If everything is fine, `Status: Valid` is returned. + +Then use the [Online Data Package viewer app][dp-viewer] to have a preview of your Data Package. + +---- + +## Examples + +For examples of well-structured Data Packages, see: + +* For tabular data: +* For geospatial data: + +Recommended reading: Find out how to use Frictionless Data software to improve your data publishing workflow in our new and comprehensive [Frictionless Data Field Guide][field-guide]. + +[dp]: /data-package +[dp-main]: /data-package +[tdp]: /data-package/#tabular-data-package +[ts]: /table-schema/ +[ts-types]: https://specs.frictionlessdata.io/table-schema/#field-descriptors +[csv]: /blog/2018/07/09/csv/ +[json]: http://en.wikipedia.org/wiki/JSON + +[spec-dp]: https://specs.frictionlessdata.io/data-package/ +[spec-tdp]: https://specs.frictionlessdata.io/tabular-data-package/ +[spec-ts]: https://specs.frictionlessdata.io/table-schema/ +[spec-csvddf]: https://specs.frictionlessdata.io/csv-dialect/ + +[publish]: /docs/publish/ +[pub-tabular]: /blog/2016/07/21/publish-tabular/ +[pub-online]: /blog/2016/08/29/publish-online/ +[pub-any]: /blog/2016/07/21/publish-any/ +[pub-geo]: /blog/2016/04/30/publish-geo/ +[pub-faq]: /blog/2016/04/20/publish-faq/ +[field-guide]: /tag/field-guide + +[dp-creator]: http://create.frictionlessdata.io +[dp-viewer]: http://create.frictionlessdata.io diff --git a/site/blog/2016-04-30-publish-geo/README.md b/site/blog/2016-04-30-publish-geo/README.md new file mode 100644 index 000000000..8366e843e --- /dev/null +++ b/site/blog/2016-04-30-publish-geo/README.md @@ -0,0 +1,64 @@ +--- +title: Publishing Geospatial Data as a
Data Package
+date: 2016-04-30
+tags: 
+description: A guide on how to publish geospatial data as datapackages
+category: publishing-data
+---
+
+Publishing your geodata as Data Packages is very easy.
+
+You have two options for publishing your geodata:
+
+* **Geo Data Package** (recommended). This is a basic Data Package with the requirement that data be in GeoJSON, plus a few special additions to the metadata for geodata. See the next section for instructions on how to do this.
+* **Generic Data Package**. This allows you to publish geodata in any kind of format (KML, Shapefiles, SpatiaLite, etc.). If you choose this option you will want to follow the standard [instructions for packaging any kind of data as a Data Package][pub-any].
+
+We recommend the Geo Data Package option where possible, as it makes it much easier for you to use third-party tools with your Data Package. For example, the [datapackage viewer][dp-viewer] on this site will automatically preview a Geo Data Package.
+
+::: tip
+Note: this document focuses on *vector* geodata, i.e. points, lines, polygons, etc. (not imagery or raster data).
+:::
+
+## Geo Data Packages
+
+### Examples
+
+#### [Traffic signs of Hansbeke, Belgium](https://github.com/peterdesmet/traffic-signs-hansbeke)
+
+An example of using `point` geometries with described properties in a real-world situation.
+
+[View it with the Data Package Viewer][view-2] (*deprecated*)
+
+[view-2]: http://data.okfn.org/tools/view?url=https%3A%2F%2Fgithub.com%2Fpeterdesmet%2Ftraffic-signs-hansbeke
+
+#### [GeoJSON example on DataHub](https://datahub.io/examples/geojson-tutorial)
+
+See more Geo Data Packages in the [example data packages](https://github.com/frictionlessdata/example-data-packages) GitHub repository.
+
+::: tip
+Recommended reading: Find out how to use Frictionless Data software to improve your data publishing workflow in our [Introduction][introduction]. 
+
+:::
+
+[dp]: /data-package
+[dp-main]: /data-package
+[tdp]: /data-package/#tabular-data-package
+[ts]: /table-schema/
+[ts-types]: https://specs.frictionlessdata.io/table-schema/#field-descriptors
+[csv]: /blog/2018/07/09/csv/
+[json]: http://en.wikipedia.org/wiki/JSON
+
+[spec-dp]: https://specs.frictionlessdata.io/data-package/
+[spec-tdp]: https://specs.frictionlessdata.io/tabular-data-package/
+[spec-ts]: https://specs.frictionlessdata.io/table-schema/
+[spec-csvddf]: https://specs.frictionlessdata.io/csv-dialect/
+
+[publish]: /docs/publish/
+[pub-tabular]: /blog/2016/07/21/publish-tabular/
+[pub-online]: /blog/2016/08/29/publish-online/
+[pub-any]: /blog/2016/07/21/publish-any/
+[pub-geo]: /blog/2016/04/30/publish-geo/
+[pub-faq]: /blog/2016/04/20/publish-faq/
+[introduction]: /introduction
+
+[dp-creator]: http://create.frictionlessdata.io
+[dp-viewer]: http://create.frictionlessdata.io
diff --git a/site/blog/2016-07-21-creating-tabular-data-packages-in-python/README.md b/site/blog/2016-07-21-creating-tabular-data-packages-in-python/README.md
new file mode 100644
index 000000000..2e3163eba
--- /dev/null
+++ b/site/blog/2016-07-21-creating-tabular-data-packages-in-python/README.md
@@ -0,0 +1,127 @@
+---
+title: Creating Data Packages in Python
+date: 2016-07-21
+tags: ["Python"]
+description: A guide on how to create datapackages in Python
+category: working-with-data-packages
+---
+
+This tutorial will show you how to install the Python library for working with Data Packages and Table Schema, load a CSV file, infer its schema, and write a Tabular Data Package.
+
+## Setup
+
+For this tutorial, we will need the [Data Package library](https://github.com/frictionlessdata/datapackage-py) ([PyPI](https://pypi.python.org/pypi/datapackage)).
+
+```bash
+pip install datapackage
+```
+
+## Creating basic metadata
+
+You can start using the library by importing `datapackage`. 
+
+```python
+import datapackage
+```
+
+The `Package()` class allows you to work with data packages. Use it to create a blank Data Package called `package` like so:
+
+```python
+package = datapackage.Package()
+```
+
+You can then add useful metadata by adding keys to the `descriptor` dict attribute. Below, we are adding the required `name` key as well as a human-readable `title` key. For the keys supported, please consult the full [Data Package spec](https://specs.frictionlessdata.io/data-package/#metadata). Note, we will be creating the required `resources` key further down below.
+
+```python
+package.descriptor['name'] = 'periodic-table'
+package.descriptor['title'] = 'Periodic Table'
+```
+
+To view your descriptor at any time, simply type
+
+```python
+package.descriptor
+```
+
+## Inferring a CSV Schema
+
+Let's say we have a file called `data.csv` ([download](https://github.com/frictionlessdata/example-data-packages/blob/master/periodic-table/data.csv)) in our working directory that looks like this:
+
+| atomic number | symbol | name | atomic mass | metal or nonmetal? |
+|----------------|--------|---------------|-------------------------|-----------------------|
+| 1 | H | Hydrogen | 1.00794 | nonmetal |
+| 2 | He | Helium | 4.002602 | noble gas |
+| 3 | Li | Lithium | 6.941 | alkali metal |
+| 4 | Be | Beryllium | 9.012182 | alkaline earth metal |
+| 5 | B | Boron | 10.811 | metalloid |
+
+We can infer our CSV's schema by using `infer` from the Table Schema library. The `infer` function inspects a sample of your dataset and guesses the expected datatype for each column. 
To infer a schema for our dataset and view it, we will simply run
+
+```python
+package.infer('periodic-table/data.csv')
+package.descriptor
+```
+
+If you need to infer a schema for more than one tabular data resource, use the glob pattern `**/*.csv` instead:
+
+```python
+package.infer('**/*.csv')
+package.descriptor
+```
+
+We are now ready to save our `datapackage.json` file locally. The `package.save()` method makes this possible.
+
+```python
+package.save('datapackage.json')
+```
+
+The `datapackage.json`
+([download](https://github.com/frictionlessdata/example-data-packages/blob/master/periodic-table/datapackage.json)) is inlined below. Note that atomic number has been correctly inferred as an `integer` and atomic mass as a `number` (float) while every other column is a `string`.
+
+```json
+{
+  "profile": "tabular-data-package",
+  "resources": [{
+    "path": "data.csv",
+    "profile": "tabular-data-resource",
+    "name": "data",
+    "format": "csv",
+    "mediatype": "text/csv",
+    "encoding": "UTF-8",
+    "schema": {
+      "fields": [{
+        "name": "atomic number",
+        "type": "integer",
+        "format": "default"
+      },
+      {
+        "name": "symbol",
+        "type": "string",
+        "format": "default"
+      },
+      {
+        "name": "name",
+        "type": "string",
+        "format": "default"
+      },
+      {
+        "name": "atomic mass",
+        "type": "number",
+        "format": "default"
+      },
+      {
+        "name": "metal or nonmetal?",
+        "type": "string",
+        "format": "default"
+      }],
+      "missingValues": [""]
+    }
+  }],
+  "name": "periodic-table",
+  "title": "Periodic Table"
+}
```
+
+## Publishing
+
+Now that you have created your Data Package, you might want to [publish your data online](/blog/2016/08/29/publish-online/) so that you can share it with others. 
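Before sharing, it can be worth a quick sanity check that the descriptor has the keys a Tabular Data Package needs. A minimal stdlib-only sketch (`check_descriptor` is an illustrative helper, not part of `datapackage-py`):

```python
# Hypothetical helper: sanity-check a descriptor dict (e.g. package.descriptor,
# or one loaded from datapackage.json with json.load) for required keys.
def check_descriptor(descriptor):
    problems = []
    if 'name' not in descriptor:
        problems.append("missing required 'name'")
    if not descriptor.get('resources'):
        problems.append("missing or empty 'resources'")
    for i, resource in enumerate(descriptor.get('resources', [])):
        # every resource needs either a 'path' or inline 'data'
        if 'path' not in resource and 'data' not in resource:
            problems.append("resource %d has neither 'path' nor 'data'" % i)
    return problems

descriptor = {'name': 'periodic-table', 'resources': [{'path': 'data.csv'}]}
print(check_descriptor(descriptor))  # an empty list means no problems found
```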
diff --git a/site/blog/2016-07-21-publish-any/README.md b/site/blog/2016-07-21-publish-any/README.md
new file mode 100644
index 000000000..8f2022261
--- /dev/null
+++ b/site/blog/2016-07-21-publish-any/README.md
@@ -0,0 +1,63 @@
+---
+title: Publish Any Kind of Data as a Data Package
+date: 2016-07-21
+tags: 
+description: A guide on how to publish any kind of data as datapackages
+category: publishing-data
+---
+
+
+You can publish **any kind of data** as Data Packages. It's as simple as 1-2-3:
+
+1. Get your data together
+2. Add a `datapackage.json` file to wrap those data files up into a useful whole (with key information like the license, title and format)
+3. [optional] Share it with others, for example, by uploading the data package online
+
+## 1. Get your data together
+
+Get your data together in one folder (you can have data in subfolders of that folder too, if you wish).
+
+## 2. Add a datapackage.json file
+
+The `datapackage.json` is a small file in [JSON][json] format that describes your dataset. You'll need to create this file and then place it in the directory you created.
+
+*Don't worry if you don't know what JSON is - we provide some tools such as [Data Package Creator][dp-creator] that can automatically create this file for you.*
+
+
+There are two options for creating the `datapackage.json`:
+
+**Option 1**: Use the online [datapackage.json creator tool][dp-creator] - just answer a few questions and give it your data files and it will spit out a `datapackage.json` for you to include in your project.
+
+**Option 2**: Do it yourself - if you're familiar with JSON you can create this yourself. Take a look at the [Data Package Specification][spec-dp].
+
+## 3. Put the data package online
+
+See the [step-by-step instructions for putting your Data Package online][pub-online].
+
+::: tip
+Recommended reading: Find out how to use Frictionless Data software to improve your data publishing workflow in our [Introduction][introduction]. 
+::: + +[dp]: /data-package +[dp-main]: /data-package +[tdp]: /data-package/#tabular-data-package +[ts]: /table-schema/ +[ts-types]: https://specs.frictionlessdata.io/table-schema/#field-descriptors +[csv]: /blog/2018/07/09/csv/ +[json]: http://en.wikipedia.org/wiki/JSON + +[spec-dp]: https://specs.frictionlessdata.io/data-package/ +[spec-tdp]: https://specs.frictionlessdata.io/tabular-data-package/ +[spec-ts]: https://specs.frictionlessdata.io/table-schema/ +[spec-csvddf]: https://specs.frictionlessdata.io/csv-dialect/ + +[publish]: /docs/publish/ +[pub-tabular]: /blog/2016/07/21/publish-tabular/ +[pub-online]: /blog/2016/08/29/publish-online/ +[pub-any]: /blog/2016/07/21/publish-any/ +[pub-geo]: /blog/2016/04/30/publish-geo/ +[pub-faq]: /blog/2016/04/20/publish-faq/ +[introduction]: /introduction + +[dp-creator]: http://create.frictionlessdata.io +[dp-viewer]: http://create.frictionlessdata.io diff --git a/site/blog/2016-07-21-publish-tabular/README.md b/site/blog/2016-07-21-publish-tabular/README.md new file mode 100644 index 000000000..667faa7b6 --- /dev/null +++ b/site/blog/2016-07-21-publish-tabular/README.md @@ -0,0 +1,86 @@ +--- +title: Publish Tabular Data as a Data Package +date: 2016-07-21 +tags: +description: A guide on how to publish tabular datapackages +category: publishing-data +--- + +Here's how to publish your tabular data as [Tabular Data Packages][spec-tdp]. There are 4 steps: + +1. Create a folder (directory) - this folder will hold your "data package" +2. Put your data into comma-separated values files ([CSV][csv-blog]) and add them to that folder +3. Add a `datapackage.json` file to hold some information about the data package and the data in it e.g. a title, who created it, how other people can use it (licensing), etc +4. Upload the data package online + +### 1. Create a Directory (Folder) + +### 2. Create your CSV files + +CSV is a common file format for storing a (single) table of data (for example, a single sheet in a spreadsheet). 
If you've got more than one table you can save multiple CSV files, one for each table.
+
+Put the CSV files in the directory you created -- we suggest putting them in a subdirectory called `data` so that your base directory does not get too cluttered up.
+
+You can produce CSV files from almost any application that handles data, including spreadsheets like Excel and databases like MySQL or PostgreSQL.
+
+You can find out more about CSVs and how to produce them in our [guide to CSV][csv-blog] or by doing a quick search online for CSV + the name of your tool.
+
+### 3. Add a datapackage.json file
+
+The `datapackage.json` is a small file in [JSON][json] format that gives a bit of information about your dataset. You'll need to create this file and then place it in the directory you created.
+
+> *Don't worry if you don't know what JSON is - we provide some tools that can automatically create this file for you.*
+
+There are three options for creating the `datapackage.json`:
+
+**Option 1:** Use the online [datapackage.json creator tool][dp-creator] - answer a few questions and give it your data files and it will spit out a `datapackage.json` for you to include in your project.
+
+**Option 2:** Do it yourself - if you're familiar with JSON you can create this yourself. Take a look at the [Data Package][spec-dp] and [Tabular Data Package][spec-tdp] specifications.
+
+**Option 3:** Use the Python, JavaScript, PHP, Julia, R, Clojure, Java, Ruby or Go [libraries][libraries] for working with data packages.
+
+### 4. Put the data package online
+
+See [Putting Your Data Package online][pub-online]
+
+----
+
+## Appendix: Examples of Tabular Data Packages
+
+Pay special attention to the scripts directory (and look at the commit logs!) 
+ +- [datahub.io/core/finance-vix](https://datahub.io/core/finance-vix) +- [datahub.io/core/s-and-p-500-companies](https://datahub.io/core/s-and-p-500-companies) +- [datahub.io/core/co2-fossil-global](https://datahub.io/core/co2-fossil-global) +- [datahub.io/core/imf-weo](https://datahub.io/core/imf-weo) + +::: tip +Recommended reading: Find out how to use Frictionless Data software to improve your data publishing workflow in our [introduction][introduction]. +::: + +[dp]: /data-package +[dp-main]: /data-package +[ts]: /table-schema/ +[ts-types]: https://specs.frictionlessdata.io/table-schema/#field-descriptors + +[csv-blog]: /blog/2018/07/09/csv/ + +[json]: http://en.wikipedia.org/wiki/JSON + +[spec-dp]: https://specs.frictionlessdata.io/data-package/ +[spec-tdp]: https://specs.frictionlessdata.io/tabular-data-package/ +[spec-ts]: https://specs.frictionlessdata.io/table-schema/ +[spec-csvddf]: https://specs.frictionlessdata.io/csv-dialect/ + +[publish]: /docs/publish/ +[pub-tabular]: /blog/2016/07/21/publish-tabular/ +[pub-online]: /blog/2016/08/29/publish-online/ +[pub-any]: /blog/2016/07/21/publish-any/ +[pub-geo]: /blog/2016/04/30/publish-geo/ +[pub-faq]: /blog/2016/04/20/publish-faq/ +[field-guide]: /tag/field-guide + +[dp-creator]: http://create.frictionlessdata.io +[dp-viewer]: http://create.frictionlessdata.io +[libraries]: /software +[introduction]: /introduction \ No newline at end of file diff --git a/site/blog/2016-08-29-using-data-packages-in-python/README.md b/site/blog/2016-08-29-using-data-packages-in-python/README.md new file mode 100644 index 000000000..3cfe08a8d --- /dev/null +++ b/site/blog/2016-08-29-using-data-packages-in-python/README.md @@ -0,0 +1,203 @@ +--- +title: Using Data Packages in Python +date: 2016-08-29 +tags: ["Python"] +description: A guide on how to use datapackages with Python +category: working-with-data-packages +--- + +> This tutorial uses `datapackage-py` which has been replaced with `frictionless-py` (as of 2021). 
See the [Frictionless Framework documentation](https://framework.frictionlessdata.io/) for help with `frictionless-py`. + +This tutorial will show you how to install the Python libraries for working with Tabular Data Packages and demonstrate a very simple example of loading a Tabular Data Package from the web and pushing it directly into a local SQL database. Short examples of pushing your dataset to Google’s BigQuery and Amazon’s RedShift follow. + +## Setup + +For this tutorial, we will need the main Python Data Package library: + + + +You can install it as follows: + +```bash +pip install datapackage +``` + +## Reading Basic Metadata + +In this case, we are using an example Tabular Data Package containing the periodic table stored on [GitHub](https://github.com/frictionlessdata/example-data-packages/tree/master/periodic-table) ([datapackage.json](https://raw.githubusercontent.com/frictionlessdata/example-data-packages/master/periodic-table/datapackage.json), [data.csv](https://raw.githubusercontent.com/frictionlessdata/example-data-packages/master/periodic-table/data.csv)). This dataset includes the atomic number, symbol, element name, atomic mass, and the metallicity of the element. Here are the first five rows: + +| atomic number | symbol | name | atomic mass | metal or nonmetal? | +|---------------|--------|-----------|-------------|----------------------| +| 1 | H | Hydrogen | 1.00794 | nonmetal | +| 2 | He | Helium | 4.002602 | noble gas | +| 3 | Li | Lithium | 6.941 | alkali metal | +| 4 | Be | Beryllium | 9.012182 | alkaline earth metal | +| 5 | B | Boron | 10.811 | metalloid | + +You can start using the library by importing `datapackage`. Data Packages can be loaded either from a local path or directly from the web. 
+
+```python
+import datapackage
+url = 'https://raw.githubusercontent.com/frictionlessdata/example-data-packages/master/periodic-table/datapackage.json'
+dp = datapackage.Package(url)
+```
+
+At the most basic level, Data Packages provide a standardized format for general metadata (for example, the dataset title, source, author, and/or description) about your dataset. Now that you have loaded this Data Package, you have access to this metadata via the `descriptor` dict attribute. Note that these fields are optional and may not be specified for all Data Packages. For more information on which fields are supported, see [the full Data Package standard](https://specs.frictionlessdata.io/data-package/).
+
+```python
+print(dp.descriptor['title'])
+> "Periodic Table"
+```
+
+## Reading Data
+
+Now that you have loaded your Data Package, you can read its data. A Data Package can contain multiple files which are accessible via the `resources` attribute. The `resources` attribute is an array of objects containing information (e.g. path, schema, description) about each file in the package.
+
+You can access the data in a given resource in the `resources` array by reading the `data` attribute. 
For example, using our Periodic Table Data Package, we can return all elements with an atomic number of less than 10 by doing the following:
+
+```python
+print([e['name'] for e in dp.resources[0].data if int(e['atomic number']) < 10])
+
+> ['Hydrogen', 'Helium', 'Lithium', 'Beryllium', 'Boron', 'Carbon', 'Nitrogen', 'Oxygen', 'Fluorine']
+```
+
+If you don't want to load all data in memory at once, you can lazily access the data using the `iter()` method on the resource:
+
+```python
+rows = dp.resources[0].iter()
+next(rows)
+
+> {'metal or nonmetal?': 'nonmetal', 'symbol': 'H', 'name': 'Hydrogen', 'atomic mass': '1.00794', 'atomic number': '1'}
+
+next(rows)
+
+> {'metal or nonmetal?': 'noble gas', 'symbol': 'He', 'name': 'Helium', 'atomic mass': '4.002602', 'atomic number': '2'}
+
+next(rows)
+
+> {'metal or nonmetal?': 'alkali metal', 'symbol': 'Li', 'name': 'Lithium', 'atomic mass': '6.941', 'atomic number': '3'}
+```
+
+## Loading into an SQL database
+
+[Tabular Data Packages](https://specs.frictionlessdata.io/tabular-data-package/) contain schema information about their data using [Table Schema](https://specs.frictionlessdata.io/table-schema/). This means you can easily import your Data Package into the SQL backend of your choice. In this case, we are creating an [SQLite](http://sqlite.org/) database in a new file named `periodic-table-datapackage.db`. 
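Conceptually, the loading step maps each Table Schema field to a typed SQL column. A stdlib-only sketch of that idea with `sqlite3` (the type map below is a simplified illustration, not the library's full mapping):

```python
import sqlite3

# Simplified illustration of a Table Schema -> SQLite type map
TYPE_MAP = {'integer': 'INTEGER', 'number': 'FLOAT', 'string': 'TEXT'}

# A few fields from the inferred schema above
fields = [
    {'name': 'atomic number', 'type': 'integer'},
    {'name': 'symbol', 'type': 'string'},
    {'name': 'atomic mass', 'type': 'number'},
]

# Build a typed column list and create the table
columns = ', '.join('"%s" %s' % (f['name'], TYPE_MAP[f['type']]) for f in fields)
conn = sqlite3.connect(':memory:')  # use a file path for a real database
conn.execute('CREATE TABLE data (%s)' % columns)
conn.execute('INSERT INTO data VALUES (?, ?, ?)', (1, 'H', 1.00794))
print(list(conn.execute('SELECT * FROM data')))
```

The Table Schema SQL Storage library introduced next does this (and much more) for you.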
+
+To load the data into SQL we will need the Table Schema SQL Storage library:
+
+
+
+You can install it by doing:
+
+```bash
+pip install tableschema-sql
+```
+
+Now you can load your data as follows:
+
+```python
+# create the database connection (using SQLAlchemy)
+from sqlalchemy import create_engine
+
+# Load and save table to SQL
+engine = create_engine('sqlite:///periodic-table-datapackage.db')
+dp.save(storage='sql', engine=engine)
+
+```
+
+One way to check if your data has been saved successfully is by running
+
+```python
+list(engine.execute('SELECT * from data'))
+```
+
+Alternatively, if you have `sqlite3` installed, you can inspect and play with your newly created database. Note that column type information has been translated from the Table Schema format to native SQLite types:
+
+```sql
+$ sqlite3 periodic-table-datapackage.db
+SQLite version 3.19.3 2017-06-27 16:48:08
+Enter ".help" for usage hints.
+
+/*check database schema*/
+
+sqlite> .schema
+CREATE TABLE data (
+	"atomic number" INTEGER, 
+	symbol TEXT, 
+	name TEXT, 
+	"atomic mass" FLOAT, 
+	"metal or nonmetal?" TEXT
+);
+
+/*view all records in the data table*/
+
+sqlite> SELECT * from data;
+
+```
+
+
+## Loading into BigQuery
+
+Loading into BigQuery requires some setup on Google's infrastructure, but once that is completed, loading data can be just as frictionless. Here are the steps to follow:
+
+1. Create a new project - [link](https://console.cloud.google.com/iam-admin/projects)
+2. Create a new service account key - [link](https://console.developers.google.com/apis/credentials)
+3. Download the credentials as JSON and save them as `credentials.json`
+4. Create a dataset for your project - [link](https://bigquery.cloud.google.com/welcome/) (e.g. 
"dataset")
+
+To load the data into BigQuery using Python, we will need the Table Schema BigQuery Storage library:
+
+
+
+You can install it as follows:
+
+```bash
+pip install tableschema-bigquery
+```
+
+The code snippet below should be enough to push your dataset into the cloud!
+
+```python
+import io
+import os
+import json
+from tableschema import Table
+from apiclient.discovery import build
+from oauth2client.client import GoogleCredentials
+
+# Prepare BigQuery credentials
+os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'credentials.json'
+credentials = GoogleCredentials.get_application_default()
+service = build('bigquery', 'v2', credentials=credentials)
+project = json.load(io.open('credentials.json', encoding='UTF-8'))['project_id']
+
+# Load and save table to BigQuery
+table = Table('data.csv', schema='schema.json')
+table.save('data', storage='bigquery', service=service, project=project, dataset='dataset')
+```
+
+If everything is in place, you should now be able to inspect your dataset on BigQuery.
+
+![BigQuery Schema](./bigquery-schema.png)
+
+![BigQuery Preview](./bigquery-preview.png)
+
+## Loading into Amazon RedShift
+
+Similar to Google's BigQuery, Amazon RedShift requires [some setup](http://docs.aws.amazon.com/redshift/latest/gsg/getting-started.html) on AWS. Once you've created your cluster, however, all you need to do is use your cluster endpoint to create a connection string for SQLAlchemy.
+
+> Note: using the [sqlalchemy-redshift dialect](https://sqlalchemy-redshift.readthedocs.io/en/latest/index.html) is optional, as the `postgres://` dialect is sufficient to load your table into AWS RedShift. 
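The connection string itself is just a standard SQLAlchemy database URL assembled from your cluster's details; for example (every value below is a placeholder, not a real endpoint):

```python
# Every value below is a placeholder -- substitute your own cluster details.
user = 'awsuser'
password = 'my-secret'
host = 'examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com'
port = 5439
database = 'dev'

REDSHIFT_URL = 'postgres://%s:%s@%s:%d/%s' % (user, password, host, port, database)
print(REDSHIFT_URL)
```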
+ +![AWS RedShift](./aws-redshift-cluster-endpoint.png) + +```python +# create the database connection (using SQLAlchemy) +REDSHIFT_URL = 'postgres://:@.redshift.amazonaws.com:5439/' +from sqlalchemy import create_engine + +# load and save table to RedShift +engine = create_engine(REDSHIFT_URL) +dp.save(storage='sql', engine=engine) + +# check if data has been saved successfully +list(engine.execute('SELECT * from data')) +``` \ No newline at end of file diff --git a/site/blog/2016-08-29-using-data-packages-in-python/aws-redshift-cluster-endpoint.png b/site/blog/2016-08-29-using-data-packages-in-python/aws-redshift-cluster-endpoint.png new file mode 100644 index 000000000..7b4006c60 Binary files /dev/null and b/site/blog/2016-08-29-using-data-packages-in-python/aws-redshift-cluster-endpoint.png differ diff --git a/site/blog/2016-08-29-using-data-packages-in-python/bigquery-preview.png b/site/blog/2016-08-29-using-data-packages-in-python/bigquery-preview.png new file mode 100644 index 000000000..f8d76355f Binary files /dev/null and b/site/blog/2016-08-29-using-data-packages-in-python/bigquery-preview.png differ diff --git a/site/blog/2016-08-29-using-data-packages-in-python/bigquery-schema.png b/site/blog/2016-08-29-using-data-packages-in-python/bigquery-schema.png new file mode 100644 index 000000000..acb62cd2d Binary files /dev/null and b/site/blog/2016-08-29-using-data-packages-in-python/bigquery-schema.png differ diff --git a/site/blog/2016-08-30-publish/README.md b/site/blog/2016-08-30-publish/README.md new file mode 100644 index 000000000..73d990dc4 --- /dev/null +++ b/site/blog/2016-08-30-publish/README.md @@ -0,0 +1,37 @@ +--- +title: Publish Data as Data Packages - Overview +date: 2016-08-30 +tags: +category: publishing-data +description: A guide on how to publish datapackages +--- + +You can publish **any kind of data** as a Data Package. + +Making existing data into a Data Package is very straightforward. 
Once you have packaged up your data, you can make it available for others by [putting it online][online] or sending an email. + +[online]: /blog/2016/08/29/publish-online/ + +## I want to package up and publish data that is … + +### Tabular + +Rows and columns like in a spreadsheet? It's tabular … + +[Here's a tutorial on publishing tabular data](/blog/2016/07/21/publish-tabular) + +### Geospatial + +Map or location related? It's geospatial … + +[Here's a tutorial on publishing geodata](/blog/2016/04/30/publish-geo) + +### Any Kind + +Any kind of data you have - graph, binary, RDF … + +[Here's a tutorial on publishing other types of data](/blog/2016/07/21/publish-any) + +::: tip +Recommended reading: Find out how to use Frictionless Data software to improve your data publishing workflow in our [introduction](/introduction). +::: \ No newline at end of file diff --git a/site/blog/2016-11-15-dataship/README.md b/site/blog/2016-11-15-dataship/README.md new file mode 100644 index 000000000..4a036d663 --- /dev/null +++ b/site/blog/2016-11-15-dataship/README.md @@ -0,0 +1,39 @@ +--- +title: Dataship +date: 2016-11-15 +author: Frictionless Data +tags: ["case-studies", "Data Package", "goodtables.io", "Data CLI"] +category: case-studies +interviewee: Waylon Flinn +subject_context: Dataship is using Frictionless Data specifications as the basis for its easy to execute, edit and share notebooks for data analysis. +image: /img/blog/dataship-logo.png +description: A way to share data and analysis, from simple charts to complex machine learning, with anyone in the world easily and for free. +--- + +[Dataship](https://dataship.io/) is a way to share data and analysis, from simple charts to complex machine learning, with anyone in the world easily and for free. It allows you to create notebooks that hold and deliver your data, as well as text, images and inline scripts for doing analysis and visualization. 
The people you share it with can read, execute and even edit a copy of your notebook and publish the remixed version as a fork. + + + +One of the main challenges we face with data is that it’s hard to share it with others. Tools like Jupyter (iPython notebook)[^jupyter] make it much easier and more affordable to do analysis (with the help of open source projects like numpy[^numpy] and pandas[^pandas]). What they don’t do is allow you to *cheaply and easily share that with the world*. **If it were as easy to share data and analysis as it is to share pictures of your breakfast, the world would be a more enlightened place.** Dataship is helping to build that world. + +Every notebook on Dataship is also a Data Package[^datapackage]. Like other Data Packages it can be downloaded, along with its data, just by giving its URL to software like data-cli[^data-cli]. Additionally, working with existing Data Packages is easy. Just as you can fork other notebooks, you can also fork existing Data Packages, even when they’re located somewhere else, like GitHub. + +![Dataship GIF](./dataship.gif)
*Dataship in action* + +Every cell in a notebook is represented by a resource entry[^resource] in an underlying Data Package. This also allows for interesting possibilities. One of these is executable Data Packages. Since the code is included inline and its dependencies are explicit and bounded, very simple software could be written to execute a Data Package-based notebook from the command line, printing the results to the console and writing images to the current directory. + +It would be useful to have a JavaScript version of some of the functionality in goodtables[^goodtables] available for use, specifically header detection in parsed csv contents (output of PapaParse), as well as an option in dpm to not put things in a ‘datapackages’ folder, as I rarely need this when downloading a dataset. + +dpm, mentioned above, is now deprecated. Check out DataHub's [data-cli](https://github.com/datahq/data-cli) + +My next task will be building and integrating the machine learning and neural network components into Dataship. After that I’ll be focusing on features that allow organizations to store private encrypted data, in addition to the default public storage. The focus of the platform will always be open data, but hosting closed data sources will allow us to nudge people towards sharing, when it makes sense. + +As for additional use cases, the volume of personal data is growing exponentially- from medical data to internet activity and media consumption. These are just a few existing examples. The rise of the Internet of Things will only accelerate this. People are also beginning to see the value in controlling their data themselves. Providing mechanisms for doing this will likely become important over the next ten years. 
+ +[^jupyter]: Jupyter Notebook: +[^resource]: Data Package Resource: +[^numpy]: NumPy: Python package for scientific computing: +[^pandas]: Pandas: Python package for data analysis: +[^datapackage]: Data Packages: +[^data-cli]: DataHub's data commandline tool: +[^goodtables]: goodtables: diff --git a/site/blog/2016-11-15-dataship/dataship-logo.png b/site/blog/2016-11-15-dataship/dataship-logo.png new file mode 100755 index 000000000..e84ea77b7 Binary files /dev/null and b/site/blog/2016-11-15-dataship/dataship-logo.png differ diff --git a/site/blog/2016-11-15-dataship/dataship.gif b/site/blog/2016-11-15-dataship/dataship.gif new file mode 100755 index 000000000..008820251 Binary files /dev/null and b/site/blog/2016-11-15-dataship/dataship.gif differ diff --git a/site/blog/2016-11-15-open-power-system-data/README.md b/site/blog/2016-11-15-open-power-system-data/README.md new file mode 100644 index 000000000..f3ff5c16d --- /dev/null +++ b/site/blog/2016-11-15-open-power-system-data/README.md @@ -0,0 +1,82 @@ +--- +title: Open Power System Data +date: 2016-11-15 +tags: ["case-studies"] +category: case-studies +subject_context: Open Power System Data uses Frictionless Data specifications to avail energy data for analysis and modeling +image: /img/blog/opsd-logo.svg +description: A free-of-charge and open platform providing the data needed for power system analysis and modeling. +author: Lion Hirth and Ingmar Schlecht +--- + +[Open Power System Data](http://open-power-system-data.org/) aims at providing a **free-of-charge** and **open** platform[^platform] that provides the data needed for power system analysis and modeling. + +All of our project members are energy researchers. We struggled collecting this kind of data in what is typically a very burdensome and lengthy process. In doing my PhD, I spent the first year collecting data and realized that *not only had many others done that before, but that many others coming later would have to do it again*. 
This is arguably a huge waste of time and resources, so we thought we (Open Power System Data) should align ourselves and join forces to do this properly, once and for all, and in a free and open manner to be used by everyone. We are funded for two years by the German government. After starting work in 2015, we have about one more year to go.
+
+On the one hand, people who are interested in European power systems are lucky because a lot of the data needed for that research is available. If you work on, say, Chinese systems, and you are not employed at the Chinese power company, you probably won’t find anything. On the other hand, if you search long enough (and you know where to look), you can find stuff online (and usually free of charge) on European power systems---not everything you want, but a
+big chunk, so in that respect, we are all lucky. However, this data is quite problematic for many reasons.
+
+[![Available Data](./opsd-1.png)](http://data.open-power-system-data.org/)
+
+*Data availability overview on the platform*
+
+Some of the problems we face in working with data include:
+
+- varied data sources and formats
+- licensing issues
+- 'dirty' data
+
+### Inconsistent Data Sources and formats
+
+First, it is scattered throughout the Internet and very hard to Google. For example, the Spanish government will only publish their data in the Spanish language, while the German government will publish only in German, so you need to speak 20 languages if we are talking about Europe. Second, it is often of low quality. For instance, we work a lot with time series data---that is, hourly data for electricity generation and consumption. Twice a year, during the shift between summer and winter, there is sort of an “extra” or “missing” hour to account for daylight savings time. Every single data source has a different approach for how to handle that. While some datasets just ignore it, some double the hours, while others call the third hour something like "3a" and "3b". 
To align these data sources, you have to handle all these different approaches. In addition, some data providers, for example, provide data in one format for the years 2010 and 2011, and then for 2012 and 2013 in a different format, and 2014 and 2015 in yet another format. A lot of that data comes in little chunks, so some datasets have one file for everything (which is great) but then others provide files split by the year, the month, or even the day. **If you are not familiar with programming, you can’t write scripts to download that, and you have to manually download three years of daily data files: thousands of files**. Worse, these files come in different formats: some companies and agencies provide CSV files, others Excel files, and still others provide formats which are not very broadly used (e.g. XML and NetCDF). + +### Licensing Questions + +And maybe least known, but really tricky for us, is the fact that all those data are subject to copyright. These data are open in the sense that they are on the Internet to be accessed freely, but they are not open in the legal sense; you are not allowed to use them or republish them or share them with others. If you look at the terms of use that you agree to in order to download, they will usually say that all those data are subject to copyright and you are not allowed to do anything with them, essentially. + +![Available Data](./opsd-open-data.png) + +This last fact is somewhat surprising. Mostly, the belief is that if something is free online then it’s “Open” but legally that, of course, doesn’t say anything; **just because something is on YouTube and you can access that for free, that doesn’t mean you can copy, resample, and sell it to someone.
And the same is true for data.** So, in the project, we are trying to convince these owners and suppliers of data to change their terms of use, provide good licenses, publish data under an open license, preferably something like Creative Commons[^cc] or the ODbL[^odbl], or something else that people from the open world use. That’s a very burdensome process; we just talked to four German transmission system operators and it took us a full year of meetings and emails to convince them. They finally signed on to open licensing last month. + +### 'Dirty' Data aka the Devil in the Details + +Some of the most annoying problems are not the major problems, but all these surprising minor problems. As I mentioned earlier, I work a lot with time series data and there are so many weird mistakes, errors, or random facts in the data. For example, we have one source where every day, the 24th hour of the day is *simply missing* so the days only have 23 hours. Another weird phenomenon is that another data source, a huge data source that publishes a lot, only starts the year aligned on weeks, so if the first Monday falls on January 4th, they might miss the first four days of the year. If you want to model energy consumption for a year, you can’t use the data at all because the first four days are missing. So, it’s nitty-gritty, nasty stuff like this that makes the work really burdensome at this scale of information: you have to find these errors while looking at hundreds of thousands of data points. There’s, of course, nothing you can easily do manually. + +Our target users are researchers, economists, or engineers interested in energy; they are mostly familiar with Excel, or some statistical software like R, SPSS, or STATA, but they are not programmers or data scientists. As a result, they are not experts in data handling and not trained in detecting errors, missing data, and correct interpolation.
If you know where to look to find gaps in your data, this is quickly done. However, if you are doing this kind of data wrangling for the first time (and you don’t really want to do it, but rather you want to learn something about solar power in Switzerland) then this is, of course, a long detour for a lot of our users. + +We collect time series data for renewable and thermal power plants, each of which we compile into a dataset that follows the specification for a Tabular Data Package[^tdp], consisting of a `datapackage.json` file for metadata and a CSV file containing the actual data. On top of this we include the same data in Excel format and also some differently structured CSV files to suit the needs of different user types. We also implemented a framework that parses the content of the `datapackage.json` and renders it into a more human-readable form for our website. + +Where the data in each column is homogeneous in terms of the original source, as is the case with time series data, the `datapackage.json` file is used to document the sources per column. + +We started this project only knowing what we wanted to do in vague terms, but very little understanding of how to go about it, so we weren’t clear at all about how to publish this data. The first idea that we had was to build a database without any of us knowing what a database actually was. + +**Step-by-step, we realized we would like to offer a full “package” of all data that users can download in one click and have everything they need on their hard drive.** Sort of a full model input package of everything a researcher would like with the option to just delete (or simply ignore) the data that is not useful. + +We had a first workshop[^firstworkshop] with potential users, and I think one of us, maybe it was Ingmar, Googled you and found out about the [Data Package specification](https://specs.frictionlessdata.io/data-package/). 
That it perfectly fit our needs was pretty evident within a few minutes, and we decided to go along with this. + +A lot of our clients are practitioners that use Microsoft Excel as a standard tool. If I look at a data source, and I open a well structured Excel sheet with colors and (visually) well structured tables, it makes it a lot easier for me to get a first glimpse of the data and an insight as to what’s in there, what’s not in there, its quality, how well it is documented, and so on. So the one difficulty I see from a user perspective with the Data Package specification (at least, in the way we use it) is that CSV and JSON files take more than one click in a browser to get a human-readable, easily understandable, picture of the data. + +The stuff that is convenient for humans to structure content---colors, headlines, bolding, the right number of decimals, different types of data sorted by blocks, with visual spaces in between; this stuff makes a table aesthetically convenient to read, but is totally unnecessary for being machine-readable. The number one priority for us is to have the data in a format that’s machine-readable and my view is that Frictionless Data/Data Packages are perfect for this. But from the *have-a-first-glimpse-at-the-data-as-a-human perspective*, having a nice colored Excel table, from my personal point of view, is still preferable. We have decided in the end just to provide both. We publish everything as a Data Package and on top of that we also publish the data in an Excel file for those who prefer it. On top of that we publish everything in an SQLite database for our clients and users who would like it in an SQL database. + +We also think there is potential to expand on the [Data Package Viewer](http://data.okfn.org/tools/view) tool provided by Open Knowledge International. In its current state, we cannot really use it, because it hangs on the big datasets we're working with. 
So mainly, I would imagine that for large datasets, the Data Package Viewer should not try to show and visualize all data but just, for example, show a summary. Furthermore, it would be nice if it also offered possibilities to filter the datasets for downloading of subsets. The filter criteria could be specified as part of the `datapackage.json`. + +The old data package viewer, referenced above, is now deprecated. The new data package viewer, available on [create.frictionlessdata.io](http://create.frictionlessdata.io), addresses the issues raised above. + +Generally, I think such an online Data Package viewer could be made more and more feature-rich as you go. It could, for example, also offer possibilities to download the data in alternative formats such as Excel or SQLite, which would be generated by the Data Package viewer automatically on the server side (of course, the data would then need to be cached on the server side). + +Advantages I see from those things are: + +* Ease of use for data providers: Just provide the CSV with a proper description of all fields in the `datapackage.json`, and everything else is taken care of by the online Data Package viewer. +* Ease of use for data consumers: They get what they want (filtered) in the format they prefer. +* Implicitly, that would also do a proper validation of the `datapackage.json`: Because if you have an error there, then things will also be messed up in the automatically generated files. So that also ensures good `datapackage.json` metadata quality in general, which is important for all sorts of things you can do with Data Packages. + +Regarding the data processing workflow we created, I would refer you to our processing scripts[^scripts] on GitHub.
I talked a lot about time series data – this should give you an [overview](https://github.com/Open-Power-System-Data/time_series/blob/master/main.ipynb); here are the [processing details](https://github.com/Open-Power-System-Data/time_series/blob/master/processing.ipynb). + +In the coming days, we are going to extend the geographic scope and other various details---user friendliness, interpolation, data quality issues---so no big changes, just further work in the same direction. + +[^cc]: +[^odbl]: +[^tdp]: Tabular Data Package specifications: +[^firstworkshop]: First Workshop of Open Power System Data: +[^scripts]: GitHub repository: +[^platform]: Data Platform: \ No newline at end of file diff --git a/site/blog/2016-11-15-open-power-system-data/opsd-1.png b/site/blog/2016-11-15-open-power-system-data/opsd-1.png new file mode 100755 index 000000000..a10ed4b1b Binary files /dev/null and b/site/blog/2016-11-15-open-power-system-data/opsd-1.png differ diff --git a/site/blog/2016-11-15-open-power-system-data/opsd-logo.svg b/site/blog/2016-11-15-open-power-system-data/opsd-logo.svg new file mode 100755 index 000000000..93fcefff2 --- /dev/null +++ b/site/blog/2016-11-15-open-power-system-data/opsd-logo.svg @@ -0,0 +1,769 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + image/svg+xml + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/site/blog/2016-11-15-open-power-system-data/opsd-open-data.png b/site/blog/2016-11-15-open-power-system-data/opsd-open-data.png new file mode 100755 index 000000000..d9f977eed Binary files /dev/null and b/site/blog/2016-11-15-open-power-system-data/opsd-open-data.png differ diff --git a/site/blog/2016-11-15-tesera/README.md 
b/site/blog/2016-11-15-tesera/README.md new file mode 100644 index 000000000..55cc43305 --- /dev/null +++ b/site/blog/2016-11-15-tesera/README.md @@ -0,0 +1,137 @@ +--- +title: Tesera +date: 2016-11-15 +tags: ["case-studies"] +category: case-studies +subject_context: Tesera uses Frictionless Data specifications to package data in readiness for use in different systems and components. +image: /img/blog/tesera-logo.png +description: Creating data-driven applications in the cloud. +author: Spencer Cox +--- + +Tesera is an employee-owned company, founded in 1997. Our focus is helping our clients create data-driven applications in the cloud. We also maintain two core product lines in addition to our consulting practice. [MRAT.ca](https://www.linkedin.com/showcase/municipal-risk-assessment-tool/about/) helps municipalities identify risk of basement flooding, while [forestinventory.ca](https://cran.r-project.org/web/packages/forestinventory/index.html) (High Resolution Inventory Services) enables forest and natural resource companies to access a new level of accuracy and precision in resource inventories and carbon measurement. + +[![MRAT + HRIS](./mrathris.png)](http://tesera.com/)
*[MRAT.ca](https://www.linkedin.com/showcase/municipal-risk-assessment-tool/about/) and forestinventory.ca* + +We deal with data from a variety of sources ranging from sample plots to in situ sensors. These range from grab samples and measurements to remotely sensed information from LiDAR, colour infrared, and others. Many proprietary specifications exist across those data sources, and to work around this, we’ve adopted CSV as our universal format. We use Data Packages[^datapackages], CSV files, and Table Schema[^tableschema] to create database tables, validate data schemas and domains, and import data from S3[^amazons3] to PostgreSQL, DynamoDB[^amazondynamodb], and Elastic[^elastic]. In some cases we also use these Frictionless Data specs to move between application components, in particular where multiple technologies (Python, R, JavaScript, and others) are utilized in a workflow.
*Example of validation error ("not a number") on import driven by Table Schema metadata* + +We discovered Frictionless Data through GitHub by following Max Ogden and some of the interesting work he is doing with [Dat](http://datproject.org/). We were looking for simpler, more usable alternatives to the “standards” web-services craze of the 2000s. We had implemented a large interoperability hub for observation data called the Water and Environmental Hub (WEHUB)[^wehub], which supported various [OGC](http://www.opengeospatial.org/) standards ([WaterML](http://www.opengeospatial.org/standards/waterml), [SOS](http://www.opengeospatial.org/standards/sos)). It was supposed to make important information accessible to many stakeholders, but in reality, nobody was using it. We were looking for a simpler way to enable data access and use for developers and downloaders alike. + +We are especially keen on software that enables faster interoperability, especially within an AWS environment. We envision a framework of loaders, validators, sanitizers, analyzers, and exporters, fundamentally based around Amazon S3, various databases, and Lambda or Elastic Container Service[^amazonec2] (for larger processes). **Having supported a lot of clients with a lot of projects, our goal has been to remove the common grunt work associated with data workflows to enable effort to be prioritized towards the use and application of the data.** + +For instance, every data portal needs a way to import data into the system and likely a way to export data from the system. Depending on the complexity of the application and the size of the imports and exports, various approaches were utilized which directly leveraged the database or relied on various libraries.
*The friction required to load and begin to make use of the data often consumed a large portion of project budgets.* By moving towards common methods of import and export (as enabled by Data Package and Table Schema and deployed to Elastic Container Service and/or Lambda), we’ve been able to standardize that aspect of our data applications and not have to revisit it. + +As the "Internet of Things" threatens to release yet another round of standards for what is essentially observation data, we hope to keep things simple and use what we have for these use cases as well. Smaller imports and exports can readily be executed by Lambda; when they are more complex or resource-intensive, Lambda can trigger an ECS task to complete the work. + +We developed some basic CSV to DynamoDB and ElasticSearch loaders in support of a Common Operating Picture toolset for the [Fort McMurray Wildfires](https://en.wikipedia.org/wiki/2016_Fort_McMurray_wildfire). In the coming days, we would like to clean those up, along with our existing RDS loaders and Lambda functions, and start moving towards the framework described. We are cleaning up and open sourcing a number of utilities to facilitate these workflows with the goal of being able to describe data types in CSV files, then automatically map them or input them into a model. There may be an opportunity to explicitly identify how spatial feature information is carried within a Data Package or Table Schema. + +We are excited about the potential of the method and framework itself to provide almost [Zapier](https://zapier.com/)- or +[IFTTT](https://ifttt.com)-like capabilities for CSV data, where we can rapidly accomplish many common use cases, enabling resources to be prioritized towards business value.
On the application side, we have been getting pretty excited about ElasticSearch and Kibana[^kibana] and perhaps extending them to enable more seamless exploration of large, dynamic geospatial datasets, especially where the data is continuous/temporal in nature and existing GIS technology falls pretty flat. This will be important as smart cities and "Internet of Things" use cases advance. + +## Projects + +*This next section will explore two Tesera-developed projects that employ the Frictionless Data specifications: the Provincial Growth and Yield Initiative Plot Sharing App (PGYI) and Mackenzie DataStream.* + +### 1. Provincial Growth and Yield Initiative Plot Sharing App + + + +![The Provincial Growth and Yield Initiative Plot Sharing App](./fgrow-report-committed.png)
*The Provincial Growth and Yield Initiative Plot Sharing App* + +With this app, we are enabling the 16 government and industrial members of [Forest Growth Organization of Western Canada (FGrOW)](https://fgrow.friresearch.ca/) to seamlessly share forest plot measurement data with each other and know that the data will be interoperable and meet their specifications. Specifications were designed primarily with the data manager in mind and were formatted as a contribution guidelines document. From this document, the [afgo-pgyi](https://github.com/tesera/datatheme-afgo-pgyi) "Data Theme" was created which contains the Data Package details as well as the several Table Schemas required to assemble a dataset. Having access to this large and interoperable dataset will enable their members to improve their growth and yield models and respond to bioclimatic changes as they occur. + +We supported FGrOW in creating a set of data standards and then created the Table Schemas to enable a validation workflow. The members upload a set of relational CSV files which are packaged up as Data Packages, uploaded to S3, and then validated by the Lambda Data Package Validator. The results of this initial validation are returned to the user as errors (cannot proceed) or warnings (something is wrong but it can be accepted). + +![PGYI import violations](./fgrow-import-violations.png)
*PGYI import violations* + +At this stage the data is considered imported. If there are no errors, the user is able to stage their dataset, which uses the Lambda RDS Loader to import the Data Package into an RDS PostgreSQL instance. This triggers a number of more sophisticated validation functions relating to tree growth rates, measurement impossibilities, and sanity checks at the database level. + +![PGYI staging violations](./fgrow-staging-violations.png)
*PGYI staging violations* + +Having previously ensured the data meets the Table Schema and was loaded successfully, we have confidence in executing custom database functions without having to handle endless data anomalies and exceptions. A simple example check to see if a tree's species changes between measurements is illustrated below: + +```sql +CREATE OR REPLACE FUNCTION staging.get_upload_trees_species_violations(in_upload_id text) +RETURNS SETOF staging.violation AS $$ + +BEGIN + -- RULE 1: tree species should not change over time + RETURN QUERY + + SELECT + '0'::text, + staged_tree.upload_id, + + staged_tree.source_row_index, + 'trees'::text, + array_to_string(ARRAY[staged_tree.company, staged_tree.company_plot_number, staged_tree.tree_number::text], '-'), + + 'trees.species.change'::text, + 'warning'::text, + format('Tree species changed from %s to %s', committed_tree.species, staged_tree.species) + + FROM staging.staged_trees staged_tree + INNER JOIN staging.committed_trees committed_tree + USING (company, company_plot_number, tree_number) + + WHERE staged_tree.upload_id = in_upload_id + AND (staged_tree.species NOTNULL AND staged_tree.species <> 'No') + AND staged_tree.species != committed_tree.species; + +END; +$$ LANGUAGE plpgsql; +``` + +Again the user is presented with violations as errors or warnings, and they can choose to commit the plots without errors into the shared database. Essentially, this three-step workflow from imported, to staged, to committed allows FGrOW to ensure quality data that will be useful for their modeling and analysis purposes. + +FGrOW has built a database that currently has 2,400 permanent sample plots, each containing many trees, and altogether tens of millions of measurements across a wide variety of strata, including various natural regions and sub-regions. This database provides the numeric power to produce and refine better growth models and enable companies to adapt their planning and management to real conditions.
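The first step of this workflow, the Table Schema validation that runs before any staging, can be sketched in a few lines of Python. This is an illustrative, self-contained example, not Tesera's actual Lambda validator; the field names and constraints below are hypothetical, simplified stand-ins for the real PGYI contribution schema.

```python
import csv
import io

# Hypothetical, simplified Table Schema for a "trees" CSV; these fields and
# constraints are illustrative only, not FGrOW's actual contribution schema.
TREE_SCHEMA = {
    "fields": [
        {"name": "company_plot_number", "type": "integer",
         "constraints": {"required": True}},
        {"name": "tree_number", "type": "integer",
         "constraints": {"required": True}},
        {"name": "dbh_cm", "type": "number",
         "constraints": {"minimum": 0, "maximum": 500}},
        {"name": "species", "type": "string",
         "constraints": {"enum": ["Sw", "Pl", "Aw", "Fb"]}},
    ]
}

CASTS = {"integer": int, "number": float, "string": str}

def validate_rows(csv_text, schema):
    """Check each CSV row against the schema; return (row, field, message) violations."""
    violations = []
    for i, row in enumerate(csv.DictReader(io.StringIO(csv_text)), start=1):
        for field in schema["fields"]:
            name = field["name"]
            constraints = field.get("constraints", {})
            raw = (row.get(name) or "").strip()
            if not raw:
                # Missing values only violate the schema when the field is required.
                if constraints.get("required"):
                    violations.append((i, name, "required value is missing"))
                continue
            try:
                value = CASTS[field["type"]](raw)
            except ValueError:
                violations.append((i, name, f"'{raw}' is not a {field['type']}"))
                continue
            # Range and enum constraints apply only to successfully cast values.
            if "minimum" in constraints and value < constraints["minimum"]:
                violations.append((i, name, f"{value} is below the minimum"))
            if "maximum" in constraints and value > constraints["maximum"]:
                violations.append((i, name, f"{value} is above the maximum"))
            if "enum" in constraints and value not in constraints["enum"]:
                violations.append((i, name, f"'{value}' is not an allowed value"))
    return violations
```

A contribution with a non-numeric diameter, an unknown species code, or a missing plot number would come back as a list of violations that a UI could then present as errors or warnings, mirroring the imported/staged/committed flow described above.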
+ +There are many cases where industries might wish to bring together measurement data in a consistent way to maximize their productivity. **One of the more obvious examples is in agriculture, where precision information is increasingly collected at the local or individual farm level, but bringing this information together in aggregate would produce new and greater insight with regard to productivity, broad-scale change, and perhaps adaptation strategies for climate change.** + +### 2. Mackenzie DataStream + + + +![Mackenzie DataStream App](./mackenzie-2.png)
*Mackenzie DataStream App* + +[Mackenzie DataStream](http://www.mackenziedatastream.org/) is an open access platform for exploring and sharing water data in the Mackenzie River Basin. DataStream's mission is to promote knowledge sharing and advance collaborative and evidence-based decision making throughout the Basin. The Mackenzie River Basin is extremely large, measuring 1.8 million square kilometers, and as such, monitoring is a large challenge. To overcome this challenge, water quality monitoring is carried out by a variety of partners, which include communities and Aboriginal, territorial, and federal governments. With multiple parties collecting and sharing information, Mackenzie DataStream had to overcome challenges of trust and interoperability. + +![The Mackenzie River Basin](./mackenzie-6.png)
*The Mackenzie River Basin* + +Tesera leveraged the Data Package standard as an easy way for government and community partners alike to import data into the system. We used Table Schema to define the structure and constraints of the Data Themes, which we represented in a simple, visible way. + +![Table fields and validation rules derived from Table Schema](./mackenzie-1.png)
*Table fields and validation rules derived from Table Schema* + +The backend of this system also relies on the Data Package Validator and the Relational Database Loader. The observation data is then exposed to the client via a simple [Express.js](http://expressjs.com/) API as JSON. The Frictionless Data specifications help us ensure clean, consistent data and make visualization a breeze. We push the data to [Plotly](https://plot.ly/) to build the charts, as it provides lots of options for scientific plotting, as well as a good API, at a minimal cost. + +![Mackenzie DataStream visualization example](./mackenzie-10.png)
*Mackenzie DataStream visualization example* + +The Mackenzie DataStream is gaining momentum and partners. The [Fort Nelson First Nation](http://www.fortnelsonfirstnation.org/) has joined on as a contributing partner, and the [Government of Northwest Territories](http://www.gov.nt.ca/) is looking to apply DataStream to a few other data types and to bring on some additional partners in water permitting and cumulative effects monitoring. We think of this as a simple and effective way to make environmental monitoring data more accessible. + +![Mackenzie DataStream environmental observation data](./mackenzie-3.png)
*Mackenzie DataStream environmental observation data* + +There are many ways to monitor the environment, but bringing the data together according to standards, ensuring that it is loaded correctly, and making it accessible via a simple API seems pretty universal. We are working through a UX/UI overhaul and then hope to open source the entire DataStream application for other organizations that are collecting environmental observation data and looking to increase its utility to citizens, scientists, and consultants alike. + +![Mackenzie DataStream summary statistics](./mackenzie-4.png)
*Mackenzie DataStream summary statistics* + +[^jupyter]: Jupyter Notebook: +[^resource]: Data Package Resource: +[^numpy]: NumPy: Python package for scientific computing: +[^pandas]: Pandas: Python package for data analysis: +[^datapackages]: Data Packages: +[^goodtables]: goodtables: +[^tableschema]: Table Schema: +[^amazons3]: Amazon Simple Storage Service (Amazon S3): +[^amazonlambda]: Amazon AWS Lambda: +[^github]: GitHub: +[^amazonec2]: Amazon EC2: Virtual Server Hosting: +[^amazondynamodb]: Amazon DynamoDB: +[^elastic]: Elastic Search: +[^kibana]: Kibana: +[^r]: The R Project for Statistical Computing: +[^tsconstraints]: Table Schema Field Constraints: +[^wehub]: Water and Environmental Hub: diff --git a/site/blog/2016-11-15-tesera/fgrow-import-violations.png b/site/blog/2016-11-15-tesera/fgrow-import-violations.png new file mode 100755 index 000000000..525f122b9 Binary files /dev/null and b/site/blog/2016-11-15-tesera/fgrow-import-violations.png differ diff --git a/site/blog/2016-11-15-tesera/fgrow-report-committed.png b/site/blog/2016-11-15-tesera/fgrow-report-committed.png new file mode 100755 index 000000000..702e2456b Binary files /dev/null and b/site/blog/2016-11-15-tesera/fgrow-report-committed.png differ diff --git a/site/blog/2016-11-15-tesera/fgrow-staging-violations.png b/site/blog/2016-11-15-tesera/fgrow-staging-violations.png new file mode 100755 index 000000000..0968fb4b1 Binary files /dev/null and b/site/blog/2016-11-15-tesera/fgrow-staging-violations.png differ diff --git a/site/blog/2016-11-15-tesera/mackenzie-1.png b/site/blog/2016-11-15-tesera/mackenzie-1.png new file mode 100755 index 000000000..1b4fc2613 Binary files /dev/null and b/site/blog/2016-11-15-tesera/mackenzie-1.png differ diff --git a/site/blog/2016-11-15-tesera/mackenzie-10.png b/site/blog/2016-11-15-tesera/mackenzie-10.png new file mode 100755 index 000000000..f714e5f40 Binary files /dev/null and b/site/blog/2016-11-15-tesera/mackenzie-10.png differ diff --git 
a/site/blog/2016-11-15-tesera/mackenzie-2.png b/site/blog/2016-11-15-tesera/mackenzie-2.png new file mode 100755 index 000000000..0a5d35c8b Binary files /dev/null and b/site/blog/2016-11-15-tesera/mackenzie-2.png differ diff --git a/site/blog/2016-11-15-tesera/mackenzie-3.png b/site/blog/2016-11-15-tesera/mackenzie-3.png new file mode 100755 index 000000000..c8da53476 Binary files /dev/null and b/site/blog/2016-11-15-tesera/mackenzie-3.png differ diff --git a/site/blog/2016-11-15-tesera/mackenzie-4.png b/site/blog/2016-11-15-tesera/mackenzie-4.png new file mode 100755 index 000000000..20f29168c Binary files /dev/null and b/site/blog/2016-11-15-tesera/mackenzie-4.png differ diff --git a/site/blog/2016-11-15-tesera/mackenzie-6.png b/site/blog/2016-11-15-tesera/mackenzie-6.png new file mode 100755 index 000000000..0fb6a8235 Binary files /dev/null and b/site/blog/2016-11-15-tesera/mackenzie-6.png differ diff --git a/site/blog/2016-11-15-tesera/mackenzie-8.png b/site/blog/2016-11-15-tesera/mackenzie-8.png new file mode 100755 index 000000000..cb34fc8f7 Binary files /dev/null and b/site/blog/2016-11-15-tesera/mackenzie-8.png differ diff --git a/site/blog/2016-11-15-tesera/mrathris.png b/site/blog/2016-11-15-tesera/mrathris.png new file mode 100755 index 000000000..d1ce59947 Binary files /dev/null and b/site/blog/2016-11-15-tesera/mrathris.png differ diff --git a/site/blog/2016-11-15-tesera/tesera-logo.png b/site/blog/2016-11-15-tesera/tesera-logo.png new file mode 100755 index 000000000..db2cdbedb Binary files /dev/null and b/site/blog/2016-11-15-tesera/tesera-logo.png differ diff --git a/site/blog/2017-03-28-john-snow-labs/README.md b/site/blog/2017-03-28-john-snow-labs/README.md new file mode 100644 index 000000000..365dec88f --- /dev/null +++ b/site/blog/2017-03-28-john-snow-labs/README.md @@ -0,0 +1,33 @@ +--- +title: John Snow Labs +date: 2017-03-28 +tags: ["case-studies"] +category: case-studies +interviewee: Ida Lucente +subject_context: John Snow Labs uses 
Frictionless Data specifications to avail data to users for analysis +image: /img/blog/john-snow-labs-logo.png +description: Turnkey data to data science, analytics and software teams in the healthcare industry. +author: Ida Lucente +--- + +[John Snow Labs](https://www.johnsnowlabs.com/) accelerates data science and analytics teams by providing clean, rich and current data sets for analysis. Our customers typically license between 50 and 500 data sets for a given project, so providing both data and metadata in a simple, standard format that is easily usable with a wide range of tools is important. + +Each data set we license is curated by a domain expert and then goes through both an automated DataOps platform and a manual review process. This is done in order to deal with a string of data challenges. First, it’s often hard to find the right data sets for a given problem. Second, data files come in different formats, and include dirty and missing data. Data types are inconsistent across different files, making it hard to join multiple data sets in one analysis. Null values, dates, currencies, units and identifiers are represented differently. Datasets aren’t updated on a standard or public schedule, which means manual labor is often required to know when they’ve been updated. And then, data sets from different sources have different licenses - we use over 100 data sources, which means well over 100 different +data licenses that we help our clients be compliant with. + +The most popular data format in which we deliver data is the Data Package[^datapackage]. Each of our datasets is available, among other formats, as a pair of `data.csv` and `datapackage.json` files, complying with the specs[^specs]. We currently provide over 900 data sets that leverage the Frictionless Data specifications. + +Two years ago, when we were defining the product requirements and architecture, we researched six different standards for metadata definition over a few months.
We found Frictionless Data as part of that research, and after careful consideration decided to adopt it for all the datasets we curate. The Frictionless Data specifications were the simplest to implement, the simplest to explain to our customers, and they enable immediate loading of data into the widest variety of analytical tools.
+
+Our data curation guidelines have added more specific requirements that are currently underspecified in the Frictionless Data specifications. For example, there are guidelines for dataset naming, keywords, length of the description, field naming, identifier field naming and types, and some of the properties supported for each field. Adding these to Frictionless Data would make it harder to comply with the specifications, but would also raise the quality bar of standard datasets; so it may be best to add them as recommendations.
+
+Another area where the Frictionless Data specifications are worth expanding is a more explicit definition of the properties of each data type - in particular geospatial data, timestamp data, identifiers, currencies and units. We have found a need to extend the type system and the properties for each field’s type, in order to enable consistent mapping of schemas to the different analytics tools that our customers use (Hadoop, Spark, MySQL, Elasticsearch, etc.). We recommend adding these to the specifications.
+
+We are working with [Open Knowledge International](http://www.okfn.org/) on open-sourcing some of the libraries and tools we’re building. Internally, we are adding more automated validations, additional output file formats, and automated pipelines to load data into Elasticsearch[^elasticsearch] and Kibana[^kibana], to enable interactive data discovery & visualization.
+
+The core use case we see for the Frictionless Data specs is making data ready for analytics. There is a lot of Open Data out there, but a lot of effort is still required to make it usable.
This single use case expands into as many variations as there are BI & data management tools, so we have many years of work ahead of us to address this one core use case.
+
+[^datapackage]: Data Package: [https://specs.frictionlessdata.io/data-package/](https://specs.frictionlessdata.io/data-package/)
+[^specs]: Frictionless Data Specifications: [https://specs.frictionlessdata.io/](https://specs.frictionlessdata.io/)
+[^elasticsearch]: Elasticsearch
+[^kibana]: Kibana
diff --git a/site/blog/2017-03-28-john-snow-labs/john-snow-labs-logo.png b/site/blog/2017-03-28-john-snow-labs/john-snow-labs-logo.png
new file mode 100755
index 000000000..80659a00f
Binary files /dev/null and b/site/blog/2017-03-28-john-snow-labs/john-snow-labs-logo.png differ
diff --git a/site/blog/2017-03-31-data-package-views-proposal/README.md b/site/blog/2017-03-31-data-package-views-proposal/README.md
new file mode 100644
index 000000000..2bfafa45c
--- /dev/null
+++ b/site/blog/2017-03-31-data-package-views-proposal/README.md
@@ -0,0 +1,1761 @@
+---
+title: Data Package Views Proposal
+date: 2017-03-31
+tags: ["specs", "views"]
+image: https://docs.google.com/drawings/d/1M_6Vcal4PPSHpuKpzJQGvRUbPb5yeaAdRHomIIbfnlY/pub?w=790&h=1402
+description: Producers and consumers of data want to have their data presented in tables and graphs -- "views" on the data. This outlines a proposal for a Frictionless approach to this, including a spec and tooling.
+author: Rufus Pollock
+---
+
+**Update: Sep 2017: the core views proposal is now an official spec: https://specs.frictionlessdata.io/views/**
+
+## Introduction
+
+### Motivation
+
+Producers and consumers of data [packages] want to have their data presented in tables and graphs -- "views" on the data.
+
+*Why?
For a range of reasons -- from simple eyeballing to drawing out key insights.*
+
+```mermaid
+graph LR
+  data[Your Data] --> table[Table]
+  data --> grap[Graph]
+
+  classDef implemented fill:lightblue,stroke:#333,stroke-width:4px;
+  class data implemented;
+```
+
+To achieve this we need to provide:
+
+* A tool-chain to create these views from the data.
+* A descriptive language for specifying views such as tables, graphs and maps.
+
+These requirements are addressed through the introduction of Data Package "Views" and associated tooling.
+
+```mermaid
+graph LR
+
+  subgraph Data Package
+    resource[Resource]
+    view[View]
+    resource -.-> view
+  end
+
+  view --> toolchain
+  toolchain --> svg["Rendered Graph (SVG)"]
+  toolchain --> table[Table]
+```
+
+We take a "running code" approach -- as in the rest of the Frictionless Data effort. We develop the spec by building a working implementation, and iterate around the spec as we develop. As development progresses the spec crystallizes and becomes ever less subject to change.
+
+Our current implementation efforts focus on providing JavaScript tools and building a data package registry (DPR).
+
+### Desired Features
+
+* Specify views such as graphs and tables as part of a data package
+  * => views should be describable in a specification serializable as JSON
+* Simple things are simple: adding a bar chart or line chart is fast and easy -- seconds to do and requiring minimal knowledge
+* Powerful and extensible: complex and powerful graphing is also possible
+* Reuse: leverage the power of existing specs like [Vega][] (and tools like Vega and Plotly)
+* Composable: the views spec should be independent but composable with the other data package specs (and even usable on its own)
+
+[Vega]: http://vega.github.io/
+
+### Concepts and Background
+
+*In progress -- not quite finished*
+
+To generate visualizations you usually want the following three types of information:
+
+- metadata: e.g.
the title of the graph, credits etc
+- graph: description / specification of the graph itself
+- data: specification of data sources for the graph including location and key metadata like types
+
+The data spec itself often consists of three distinct parts:
+
+- "raw / graph data": a spec / description of data exactly in the form needed by the visualization system. This is often a very well defined spec, e.g. an array of series ...
+- locate/describe: a spec of where to get data from, e.g. a `url` or `data` attribute, plus some information on that data such as format and types.
+- transform: a spec of how to transform external data prior to use, e.g. pivoting or filtering it
+
+From this description it should be clear that the latter two data specs -- locate/describe and transform -- are actually generic and independent of the specific graphing library. The only thing the graphing library really needs is a clear description of the "raw" format which it directly consumes. Thus, we can consider a natural grouping of specs as:
+
+- general metadata - e.g. title of graph, credits etc [provided by e.g. Data Package / define yourself!]
+- data: sourcing and transform [provided by e.g. Data Resource]
+  - sourcing: how to source data from external sources
+  - transform: how to transform data, e.g. pivot it, select one field, scale a field etc
+- graph description / specification [provided by e.g. Vega]
+  - graph data (raw): data as directly consumed by the graph spec (usually JSON based if we are talking about JS web-based visualization)
+
+However, in many visualization tools -- including specs like Vega -- these items are combined together. This is understandable, as these tools seek to offer users a "complete solution". However, **decoupling these parts and having clearly defined interfaces would offer significant benefits**:
+
+* Extensibility: it would be easier to extend and adapt the system. For example, adding new data import options could be done without changing the graph system.
+* Composability: we can combine different parts together in different ways. For example, data import and transformation could be used for generating data for tabular display as well as graphing.
+* Reusability: we want to reuse existing tools and specifications wherever possible. If we keep the specs relatively separate we can reuse the best spec for each job.
+* Reliability: when the system is decoupled it is easier to test and check.
+
+In summary, smaller pieces, loosely joined, make it easier to adapt and evolve the specs and the associated tooling.
+
+
+## The Tool Chain
+
+*We describe the tool chain first -- following the Frictionless Data practice of "running code". The tool chain is what turns source data + a graph description into a rendered graph.*
+
+:::info
+NOTE: In the v1 described here there is no transform support yet.
+:::
+
+***Figure 1: From Data Package View Spec to Rendered output***
+
+```mermaid
+graph TD
+  pre[Pre-cursor views e.g. Recline] --bespoke conversions--> dpv[Data Package Views]
+  dpv --"normalize (correct any variations and ensure key fields are present)"--> dpvn["Data Package Views
(Normalized)"] + dpvn --"compile in resource & data ([future] do transforms)"--> dpvnd["Self-Contained View
(All data and schema inline)"] + dpvnd --compile to native spec--> plotly[Plotly Spec] + dpvnd --compile to native spec--> vega[Vega Spec] + plotly --render--> html[svg/png/etc] + vega --render--> html +``` + +:::info +**IMPORTANT**: an important "convention" we adopt for the "compiling-in" of data is that resource data should be inlined into an `_values` attribute. If the data is tabular this attribute should be an array of *arrays* (not objects). +::: + + +### Graphs + +***Figure 2: Conversion paths*** + +```mermaid +graph LR + inplotly["Plotly DP Spec"] --> plotly[Plotly JSON] + simple[Simple Spec] --> plotly + simple .-> vega[Vega JSON] + invega[Vega DP Spec] --> vega + vegalite[Vega Lite DP Spec] --> vega + recline[Recline] .-> simple + plotly --plotly lib--> svg[SVG / PNG] + vega --vega lib--> svg + + classDef implemented fill:lightblue,stroke:#333,stroke-width:4px; + class recline,simple,plotly,svg implemented; +``` + +Notes: + +* Implemented paths are shown in lightblue. +* Left-most column (Recline): pre-specs that we can convert to our standard specs +* Second-from-left column: DP View spec types. +* Second-from-right column: the graphing libraries we can use (which all output to SVG) + +### Geo support + +:::info +**Note**: support for customizing map is limited to JS atm - there is no real map "spec" in JSON yet beyond the trivial version. + +**Note**: vega has some geo support but geo here means full geojson style mapping. 
+::: + +```mermaid +graph LR + + geo[Geo Resource] --> map + map[Map Spec] --> leaflet[Leaflet] + + classDef implemented fill:lightblue,stroke:#333,stroke-width:4px; + class geo,map,leaflet implemented; +``` + +### Table support + +```mermaid +graph LR + resource[Tabular Resource] --> table + table[Table Spec] --> handsontable[HandsOnTable] + table --> html[Simple HTML Table] + + classDef implemented fill:lightblue,stroke:#333,stroke-width:4px; + class resource,table,handsontable implemented; +``` + +### Summary + +***Figure 3: From Data Package View to Rendered output flow (richer version of diagram 1)*** + + + + +--- + + +## Data Package Views Specification + +*This is the formal specification* + +Data Package Views ("Views") define data views such as graphs or tables based on the data in a Data Package. + +> TODO: could they exist independently of a data package? Yes! + +Views are defined in the `views` property of the Data Package descriptor. + +`views` MUST be an array. Each entry in the array MUST be an object. This object MUST follow the Data Package View specification set out here. + +A View MUST have the following form: + +```javascript +{ + // generic metadata - very similar to Data Resource or Data Package + "name": "..." // unique identifier for view within list of views. (should we call it id ?) + "title": "My view" // title for this graph + ... 
+
+
+  // data sources for this spec
+  "resources": [ resource1, resource2 ]
+
+  "specType": "" // one of simple | plotly | vega
+  // graph spec
+  "spec":
+}
+```
+
+### Data Source Spec
+
+The data source spec is as follows:
+
+```
+resources: [ resourceObjOrId, resourceObjOrId ]
+```
+
+That is: an array where each entry is either:
+
+* A number - indicating the resource index in the resource array of the parent Data Package
+* A string - indicating the name of the resource in the resource array of the parent Data Package
+* An Object: being a full Data Resource object
+
+:::info
+The resources array is termed "compiled" if all resources are objects and all data on those resources has been inlined onto an attribute named `_values`. At this point, the view is entirely self-contained -- all resources and their associated data are "inside" the view object and no external data loading is required.
+:::
+
+### Graph Spec
+
+#### Simple Graph Spec
+
+The simple graph spec provides a very simple graph descriptor setup that aims for an 80/20 result. It supports only the following graph types:
+
+* Line `line` (single and multiple). Especially time series
+* Bar `bar` - Especially time series
+* (?) Pie `pie` -- we are considering excluding pie charts as they are not widely used and are often poor information design
+* (?) Stacked bar
+
+
+```
+// example data
+
+| x | y | z |
+-------------
+| 1 | 8 | 5 |
+-------------
+| 2 | 9 | 7 |
+-------------
+```
+
+
+```javascript
+{
+  "type": "line",
+  "group": "x",
+  "series": [ "y", "z" ]
+}
+```
+
+#### Table Spec
+
+```
+{
+  "name": "table-1",
+  "resources": ["resource-1"],
+  "specType": "table"
+}
+```
+
+#### Vega Spec
+
+*We are using Vega as an input: raw Vega plus a few tweaks to support data input out of line from the spec (e.g. resources)*
+
+This is straight-up Vega. The only modification is that we leave out data references (where we need to know a table name we can rely on the names in the resources array).
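Since the Vega `data` property is left out, a rendering tool has to rebuild it from the view's resources. A minimal sketch of that step, under the conventions described earlier (row data inlined on `_values` as an array of arrays, field names taken from the resource's Table Schema) -- `vegaDataFromResources` is a hypothetical helper, not part of the spec:

```javascript
// Hypothetical helper: build the `data` array a Vega renderer expects from a
// view's compiled resources. Assumes each resource has a `name`, a schema
// under `schema.fields`, and rows inlined on `_values` as an array of arrays
// (the "compiled" convention described above).
function vegaDataFromResources(resources) {
  return resources.map(resource => {
    const fieldNames = resource.schema.fields.map(field => field.name);
    const values = resource._values.map(row =>
      Object.fromEntries(row.map((value, i) => [fieldNames[i], value]))
    );
    return { name: resource.name, values };
  });
}
```

Feeding the result back in as the spec's `data` property then yields a complete, renderable Vega spec.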
+ +This example is just copied from http://vega.github.io/vega-editor/?mode=vega&spec=bar + +```javascript +{ + "width": 400, + "height": 200, + "padding": {"top": 10, "left": 30, "bottom": 30, "right": 10}, + + // NOTE: data property is MISSING here! + + "scales": [ + { + "name": "x", + "type": "ordinal", + "range": "width", + "domain": {"data": "table", "field": "x"} + }, + { + "name": "y", + "type": "linear", + "range": "height", + "domain": {"data": "table", "field": "y"}, + "nice": true + } + ], + "axes": [ + {"type": "x", "scale": "x"}, + {"type": "y", "scale": "y"} + ], + "marks": [ + { + "type": "rect", + "from": {"data": "table"}, + "properties": { + "enter": { + "x": {"scale": "x", "field": "x"}, + "width": {"scale": "x", "band": true, "offset": -1}, + "y": {"scale": "y", "field": "y"}, + "y2": {"scale": "y", "value": 0} + }, + "update": { + "fill": {"value": "steelblue"} + }, + "hover": { + "fill": {"value": "red"} + } + } + } + ] +} +``` + +To understand how this fits together with the overall spec here's the full view -- note how the data and graph spec are separated: + +```javascript +{ + "title": "My amazing bar chart" + "resources": [ + { + "name": "table", + "data": [ + {"x": 1, "y": 28}, {"x": 2, "y": 55}, + {"x": 3, "y": 43}, {"x": 4, "y": 91}, + {"x": 5, "y": 81}, {"x": 6, "y": 53}, + {"x": 7, "y": 19}, {"x": 8, "y": 87}, + {"x": 9, "y": 52}, {"x": 10, "y": 48}, + {"x": 11, "y": 24}, {"x": 12, "y": 49}, + {"x": 13, "y": 87}, {"x": 14, "y": 66}, + {"x": 15, "y": 17}, {"x": 16, "y": 27}, + {"x": 17, "y": 68}, {"x": 18, "y": 16}, + {"x": 19, "y": 49}, {"x": 20, "y": 15} + ] + } + ], + "specType": "vega", + "spec": { + "width": 400, + "height": 200, + "padding": {"top": 10, "left": 30, "bottom": 30, "right": 10}, + "scales": [ + { + "name": "x", + "type": "ordinal", + "range": "width", + "domain": {"data": "table", "field": "x"} + }, + { + "name": "y", + "type": "linear", + "range": "height", + "domain": {"data": "table", "field": "y"}, + "nice": 
true
+      }
+    ],
+    "axes": [
+      {"type": "x", "scale": "x"},
+      {"type": "y", "scale": "y"}
+    ],
+    "marks": [
+      {
+        "type": "rect",
+        "from": {"data": "table"},
+        "properties": {
+          "enter": {
+            "x": {"scale": "x", "field": "x"},
+            "width": {"scale": "x", "band": true, "offset": -1},
+            "y": {"scale": "y", "field": "y"},
+            "y2": {"scale": "y", "value": 0}
+          },
+          "update": {
+            "fill": {"value": "steelblue"}
+          },
+          "hover": {
+            "fill": {"value": "red"}
+          }
+        }
+      }
+    ]
+  }
+}
+```
+
+#### Vega support status 31 Mar 2017
+
+> We have implemented Vega graph spec support, but there are some limitations that are described below.
+
+##### Support for Vega specs that do not have data transforms, e.g.:
+
+```json
+{
+  "width": 800,
+  "height": 500,
+  "data": [
+    {
+      "name": "drive",
+      "url": "data/driving.csv"
+    }
+  ],
+  ...
+}
+```
+
+Such a spec can be used in a Data Package as follows:
+
+```json
+{
+  "name": "some dp",
+  ...
+  "resources": [...],
+  "views": [
+    {
+      "name": "demo-dp",
+      "title": "demo dp",
+      "resources": [0], // or can be omitted as it refers to the first resource
+      "specType": "vega",
+      "spec": {
+        "width": 800,
+        "height": 500,
+        // NOTE no data property in here
+        ...
+      }
+    }
+  ]
+}
+```
+
+So the information about the dataset is moved to the "resources" attribute of the Data Package and looks like the following:
+
+```json
+"resources": [
+  {
+    "name": "drive",
+    "path": "data/driving.csv",
+    "schema": {
+      "fields": [
+        {
+          "name": "side",
+          "type": "string"
+        },
+        ...
+      ]
+    }
+  }
+],
+```
+
+##### Multiple resources per view
+
+If there are no data transforms, Vega specs with multiple datasets are also supported, e.g. https://staging.datapackaged.com/anuveyatsu/lifelines
+
+```json
+{
+  "width": 400,
+  "height": 100,
+  "data": [
+    {
+      "name": "people",
+      "url": "data/people.csv"
+    },
+    {
+      "name": "events",
+      "format": {"parse": {"when": "date"}},
+      "url": "data/events.csv"
+    }
+  ],
+  ...
+
+}
+```
+
+#### Vega-lite spec
+
+Identical to the Vega approach.
+
+#### Plotly spec
+
+Identical to the Vega approach.
+
+
+-------------------------------------------------
+
+## Appendix: Simple Views - FAQ
+
+Why not vega-lite?
+
+* The vega-lite approach to multiple lines is a bit weird and counter-intuitive. This matters as this is a very common case.
+* Overall, vega-lite retains too much complexity.
+
+Why not this alternative series notation:
+
+```
+  series: [["x", "y"], ["x", "z"]]
+```
+
+* pros: explicit about the series ...
+* pros: you can plot two different series with different x values (as long as they have some commonality ... )
+* cons: more verbose.
+* cons: the second point, about multiple x values, is actually confusing for most users ... (?)
+
+Simplicity is crucial here, so those cons outweigh the pros.
+
+The following assumes the data has been provided in standard table form - with relevant naming for tables if there are multiple tables.
+
+#### vega-lite simple example
+
+```javascript
+{
+  "mark": "line",
+  "encoding": {
+    "x": {"field": "Date", "type": "temporal"},
+    "y": {"field": "VIXClose", "type": "quantitative"}
+  }
+}
+```
+
+* What we don't like: having to tell it explicitly what the types are - can we infer that?
+
+## Appendix - Recline Views
+
+To specify a Recline view, it must have `type` and `state` attributes.
+
+### `"type"` attribute
+
+> We used to use this attribute to identify what graph type we need: plotly or vega-lite. I suppose it should be used for something else. Right now, if the `"type"` attribute is set to `vega-lite`, we render a vega-lite chart. In all other cases we render a plotly chart.
+
+### `"state"` attribute
+
+The `"state"` attribute must be an object and have `"graphType"`, `"group"` and `"series"` attributes.
+
+`"graphType"` indicates the type of the chart - line chart, bar chart, etc. The value must be a string.
+
+`"group"` is used to specify the base axis - right now it is used as the abscissa. It must be a string that is usually a primary key in a resource.
+ +`"series"` is used to specify ordinate - it must be an array of string elements. Each element represents a field name. + +### Example of the `views` attribute: + +``` +"views": [ + { + "type": "Graph", + "state": { + "graphType": "lines", + "group": "date", + "series": [ "autumn" ] + } + } + ] +``` + +## Appendix: Analysis of Vis libraries data objects + +Focus on "inline" data structure - i.e. data structure in memory. + +Motivation for this: need to generate this data structure. + +### Vega + +See: https://github.com/vega/vega/wiki/Data#examples + +Data structure is standard "array of objects": + +``` +[ + {"x":0, "y":3}, + {"x":1, "y":5} +] + +# note in Vega docs it is put as ... +[{"x":0, "y":3}, {"x":1, "y":5}] +``` + +Also supports simple array of values `[1,2,3]` which is implicitly mapped to: + +``` +[ + { "data": 1 }, + { "data": 2 }, + { "data": 3 } +] +``` + +Note that internally vega adds `_id` to all rows as a unique identifier (cf pandas). + +``` +# this input +[{"x":0, "y":3}, {"x":1, "y":5}] + +# internally becomes +[{"_id":0, "x":0, "y":3}, {"_id":1, "x":1, "y":5}] +``` + +You can also add a `name` attribute to name the data table and then the data is put in `values`: + +``` +{ + "name": "table", + "values": [12, 23, 47, 6, 52, 19] +} +``` + +Finally, inside the overall vega spec you put the data inside a `data` property: + +``` +{ + "width": 400, + "height": 200, + "padding": {"top": 10, "left": 30, "bottom": 30, "right": 10}, + "data": [ + { + "name": "table", + "values": [ + {"x": 1, "y": 28}, {"x": 2, "y": 55}, + ... +``` + +See https://vega.github.io/vega-editor/?mode=vega + +#### Remote Data + +Looks like a lot like Resource. 
Assume that data is mapped to inline structure + + +### Vega-Lite + +https://vega.github.io/vega-lite/docs/data.html + +Same as Vega except that: + +* `data` is an object not an array -- only one data source allowed + * Will be given the name `source` when converting to Vega +* Only one property allowed: `values` + * And for remote data: `url` and `format` + +### Plotly + +http://help.plot.ly/json-chart-schema/ +https://plot.ly/javascript/reference/ + +The Plotly model does not separate the data out quite as cleanly as vega does. The structure for Plotly json specs is as follows: + +* Oriented around "series" +* Each series includes its data **plus** the spec for the graph + * The data is stored in two attributes `x` and `y` +* Separate `layout` property giving overall layout (e.g. margins, titles etc) + +To give a sense of how it works this is the JS for creating a Plotly graph: + +```javascript +Plotly.plot('graphDiv', data, layout); +``` + +#### Examples + +From http://help.plot.ly/json-chart-schema/ and links therein: + +```javascript +{ + "data": [ + { + "x": [ + "giraffes", + "orangutans", + "monkeys" + ], + "y": [ + 20, + 14, + 23 + ], + "type": "bar" + } + ] +} +``` + +```javascript +[ + { + "name": "SF Zoo", + "marker": { + "color": "rgba(55, 128, 191, 0.6)", + "line": { + "color": "rgba(55, 128, 191, 1.0)", + "width": 1 + } + }, + "y": [ + "giraffes", + "orangutans", + "monkeys" + ], + "x": [ + 20, + 14, + 23 + ], + "type": "bar", + "orientation": "h", + "uid": "a4a45d" + }, + { + "name": "LA Zoo", + "marker": { + "color": "rgba(255, 153, 51, 0.6)", + "line": { + "color": "rgba(255, 153, 51, 1.0)", + "width": 1 + } + }, + "y": [ + "giraffes", + "orangutans", + "monkeys" + ], + "x": [ + 12, + 18, + 29 + ], + "type": "bar", + "orientation": "h", + "uid": "d912bc" + } +] +``` + +## Appendix: Plotly Graph spec research + +We would like users to be able to use Plotly JSON chart schema in their Data Package Views specs so they can take full advantage of 
Plotly's capabilities.
+
+Here's an example - https://plot.ly/~Dreamshot/8259/
+
+```json
+{
+  "data": [
+    {
+      "name": "Col2",
+      "uid": "babced",
+      "fillcolor": "rgb(224, 102, 102)",
+      "y": [
+        "17087182",
+        "29354370",
+        "38760373",
+        "40912332",
+      ],
+      "x": [
+        "2000-01-01",
+        "2001-01-01",
+        "2002-01-01",
+        "2003-01-01",
+      ],
+      "fill": "tonexty",
+      "type": "scatter",
+      "mode": "none"
+    }
+  ],
+  "layout": {
+    "autosize": false,
+    "yaxis": {
+      "range": [
+        0,
+        1124750578.9473684
+      ],
+      "type": "linear",
+      "autorange": true,
+      "title": ""
+    },
+    "title": "Total Number of Websites",
+    "height": 500,
+    "width": 800,
+    "xaxis": {
+      "tickformat": "%Y",
+      "title": "...",
+      "showgrid": false,
+      "range": [
+        946702800000,
+        1451624400000
+      ],
+      "type": "date",
+      "autorange": true
+    }
+  }
+}
+```
+
+So the major requirement will be to link the Plotly data structure to external data resources in the Data Package View.
+
+**Key point: Plotly data is of the form:**
+
+```javascript
+data: [
+  {
+    "name": ...
+    x: [...],
+    y: [...],
+    z: [...] // optional
+  },
+  ...
+]
+```
+
+We just need a way to bind these ...
+
+```javascript
+data: [
+  {
+    name: // by convention this must match the resource - if that is not possible use resource
+    resource: .... // only if name cannot match resource
+    x: "field name ..." // if this is a string not an array then look it up in the resource ...
+    y: "field name ..."
+    z: "field name ..."
+  },
+  ...
+]
+```
+
+*Using this approach we would support most of the Basic, Statistical and 3D charts of the Plotly library. We would not support pie charts (labels, values), maps ...*
+
+:::info
+**Data manipulations -- not supported**
+
+In some examples of Plotly there are manipulations (e.g. filtering) on the raw data. As this is done in JavaScript outside of the Plotly JSON language we would not be able to support this.
+:::
+
+
+In the `plotlyToPlotly` function:
+```javascript
+export function plotlyToPlotly(view) {
+  let plotlySpec = Object.assign({}, view.spec)
+
+  for (const trace of plotlySpec.data) {
+    if (trace.resource) {
+      let resource = findResourceByNameOrIndex(view, trace.resource)
+      const rowsAsObjects = true
+      const rows = getResourceCachedValues(resource, rowsAsObjects)
+      if (trace.xField) {
+        trace.x = rows.map(row => row[trace.xField])
+        delete trace.xField
+      }
+      if (trace.yField) {
+        trace.y = rows.map(row => row[trace.yField])
+        delete trace.yField
+      }
+      if (trace.zField) {
+        trace.z = rows.map(row => row[trace.zField])
+        delete trace.zField
+      }
+
+      delete trace.resource
+    }
+  }
+
+  return plotlySpec
+}
+```
+
+
+## Appendix: Data Transform Research
+
+### Plotly Transforms
+
+No libraries for data transforms have been found.
+
+### Vega Transforms
+
+https://github.com/vega/vega/wiki/Data-Transforms - v2
+
+https://vega.github.io/vega/docs/transforms/ - v3
+
+Vega-provided Data Transforms can be used to manipulate datasets before rendering a visualisation. E.g., one may need to perform transformations such as aggregation or filtering (there are many types; see the links above) of a dataset and display the graph only after that. Another situation would be creating a new dataset by applying various calculations to an old one.
+
+Usually transforms are defined in the `transform` array inside the `data` property.
+
+"Transforms that do not filter or generate new data objects can be used within the transform array of a mark definition to specify post-encoding transforms."
+
+Examples:
+
+#### Filtering
+
+https://vega.github.io/vega-editor/?mode=vega&spec=parallel_coords
+
+This example filters rows that have both `Horsepower` and `Miles_per_Gallon` fields.
+
+```json
+{
+  "data": [
+    {
+      "name": "cars",
+      "url": "data/cars.json",
+      "transform": [
+        {
+          "type": "filter",
+          "test": "datum.Horsepower && datum.Miles_per_Gallon"
+        }
+      ]
+    }
+  ]
+}
+```
+
+#### Geopath, aggregate, lookup, filter, sort, voronoi and linkpath
+
+https://vega.github.io/vega-editor/?mode=vega&spec=airports
+
+This example has a lot of transforms - in some cases there is only one transform applied to a dataset, in other cases there is a sequence of transforms.
+
+To the first dataset, it applies the `geopath` transform, which maps GeoJSON features to SVG path strings. It uses the `albersUsa` projection type ([more about projections](https://vega.github.io/vega/docs/projections/)).
+
+To the second dataset, it applies a sum operation on the "count" field and outputs it as a "flights" field.
+
+For the third dataset:
+1) It compares the dataset's "iata" field against the "origin" field of the "traffic" dataset. Matching values are output as a "traffic" field.
+2) Next, it filters out all values that are null.
+3) After that, it applies the `geo` transform as in the first dataset above.
+4) Next, it filters out null layout_x and layout_y values.
+5) Then, it sorts the dataset by the traffic.flights field in descending order.
+6) After that, it applies the `voronoi` transform to compute a Voronoi diagram based on the "layout_x" and "layout_y" fields.
+
+For the last dataset:
+1) First, it filters values for which there is a signal called "hover" (specified in the Vega spec's "signals" property) with an "iata" attribute that matches the dataset's "origin" field.
+2) Next, it looks up matching values of the "airports" dataset's "iata" field against its "origin" and "destination" fields. The output fields are saved as "_source" and "_target".
+3) It filters "_source" and "_target" values that are truthy (not null).
+4) Finally, the linkpath transform creates visual links between nodes ([more about linkpath](https://vega.github.io/vega/docs/transforms/linkpath/)).
+
+```json
+{
+  "data": [
+    {
+      "name": "states",
+      "url": "data/us-10m.json",
+      "format": {"type": "topojson", "feature": "states"},
+      "transform": [
+        {
+          "type": "geopath", "projection": "albersUsa",
+          "scale": 1200, "translate": [450, 280]
+        }
+      ]
+    },
+    {
+      "name": "traffic",
+      "url": "data/flights-airport.csv",
+      "format": {"type": "csv", "parse": "auto"},
+      "transform": [
+        {
+          "type": "aggregate", "groupby": ["origin"],
+          "summarize": [{"field": "count", "ops": ["sum"], "as": ["flights"]}]
+        }
+      ]
+    },
+    {
+      "name": "airports",
+      "url": "data/airports.csv",
+      "format": {"type": "csv", "parse": "auto"},
+      "transform": [
+        {
+          "type": "lookup", "on": "traffic", "onKey": "origin",
+          "keys": ["iata"], "as": ["traffic"]
+        },
+        {
+          "type": "filter",
+          "test": "datum.traffic != null"
+        },
+        {
+          "type": "geo", "projection": "albersUsa",
+          "scale": 1200, "translate": [450, 280],
+          "lon": "longitude", "lat": "latitude"
+        },
+        {
+          "type": "filter",
+          "test": "datum.layout_x != null && datum.layout_y != null"
+        },
+        { "type": "sort", "by": "-traffic.flights" },
+        { "type": "voronoi", "x": "layout_x", "y": "layout_y" }
+      ]
+    },
+    {
+      "name": "routes",
+      "url": "data/flights-airport.csv",
+      "format": {"type": "csv", "parse": "auto"},
+      "transform": [
+        { "type": "filter", "test": "hover && hover.iata == datum.origin" },
+        {
+          "type": "lookup", "on": "airports", "onKey": "iata",
+          "keys": ["origin", "destination"], "as": ["_source", "_target"]
+        },
+        { "type": "filter", "test": "datum._source && datum._target" },
+        { "type": "linkpath" }
+      ]
+    }
+  ]
+}
+```
+
+#### Further research on Vega transforms
+
+https://github.com/vega/vega-dataflow-examples/
+
+It is quite difficult for me to read the code, as there is not enough documentation. I have included the simplest example here:
+
+`vega-dataflow.js` contains Dataflow, all transforms and vega's utilities.
+
+```htmlmixed
+<!DOCTYPE html>
+<html>
+  <head>
+    <title>Dataflow CountPattern</title>
+    <script src="vega-dataflow.js"></script>
+  </head>
+  <body>
+    <textarea id="text" rows="10" cols="80"></textarea>
+    <div>Frequency Threshold <input id="slider" type="range" min="1" max="10" step="1" value="4"></div>
+    <pre id="output"></pre>
+    <script>
+      // the dataflow code shown below goes here
+    </script>
+  </body>
+</html>
+```
+
+`df` is a Dataflow instance where we register (.add) functions and parameters - as below on line 36-38. The same with adding transforms - lines 40-44. We can pass different parameters to the transforms depending on requirements of each of them. Event handlers can added by using `.on` method of the Dataflow instance - lines 46-48.
+
+```javascript
+var tx = vega.transforms; // all transforms 
+var out = document.querySelector('#output');
+var area = document.querySelector('#text');
+area.value = [
+  "Despite myriad tools for visualizing data, there remains a gap between the notational efficiency of high-level visualization systems and the expressiveness and accessibility of low-level graphical systems."
+].join('\n\n');
+var stopwords = "(i|me|my|myself|we|us|our|ours|ourselves|you|your|yours|yourself|yourselves|he|him|his)";
+
+var get = vega.field('data');
+
+function readText(_, pulse) {
+  if (this.value) pulse.rem = this.value;
+  return pulse.source = pulse.add = [vega.ingest(area.value)];
+}
+
+function threshold(_) {
+  var freq = _.freq,
+      f = function(t) { return t.count >= freq; };
+  return (f.fields = ['count'], f);
+}
+
+function updatePage() {
+  out.innerText = c1.value.slice()
+    .sort(function(a,b) {
+      return (b.count - a.count)
+        || (b.text > a.text ? -1 : a.text > b.text ? 1 : 0);
+    })
+    .map(function(t) {
+      return t.text + ': ' + t.count;
+    })
+    .join('\n');
+}
+
+var df = new vega.Dataflow(), // create a new Dataflow instance
+// then add various operators into Dataflow instance:
+    ft = df.add(4), // word frequency threshold
+    ff = df.add(threshold, {freq:ft}),
+    rt = df.add(readText),
+    // add transforms (tx):
+    cp = df.add(tx.CountPattern, {field:get, case:'lower',
+      pattern:'[\\w\']{2,}', stopwords:stopwords, pulse:rt}),
+    cc = df.add(tx.Collect, {pulse:cp}),
+    fc = df.add(tx.Filter, {expr:ff, pulse:cc}),
+    c1 = df.add(tx.Collect, {pulse:fc}),
+    up = df.add(updatePage, {pulse: c1});
+df.on(df.events(area, 'keyup').debounce(250), rt)
+  .on(df.events('#slider', 'input'), ft, function(_, e) { return +e.target.value; })
+  .run();
+```
+---
+> Below is an older analysis.
+
+There are a number of transforms, located in different libraries. The basic ones are here: https://github.com/vega/vega-dataflow/tree/master/src/transforms
+
+Generally, all data flow happens in the [vega-dataflow module](https://github.com/vega/vega-dataflow). Lots of complicated operations are performed on data input and parameters. Some transform functions inherit from other functions/classes, which makes it difficult to separate them:
+
+Filter function:
+```javascript
+export default function Filter(params) {
+  Transform.call(this, fastmap(), params);
+}
+
+var prototype = inherits(Filter, Transform);
+
+// more code for prototype
+```
+and Transform is:
+```javascript
+import Operator from './Operator';
+import {inherits} from 'vega-util';
+
+/**
+ * Abstract class for operators that process data tuples.
+ * Subclasses must provide a {@link transform} method for operator processing.
+ * @constructor
+ * @param {*} [init] - The initial value for this operator.
+ * @param {object} [params] - The parameters for this operator.
+ * @param {Operator} [source] - The operator from which to receive pulses.
+ */
+export default function Transform(init, params) {
+  Operator.call(this, init, null, params);
+}
+
+var prototype = inherits(Transform, Operator);
+
+/**
+ * Overrides {@link Operator.evaluate} for transform operators.
+ * Marshalls parameter values and then invokes {@link transform}.
+ * @param {Pulse} pulse - the current dataflow pulse.
+ * @return {Pulse} The output pulse (or StopPropagation). A falsy return
+     value (including undefined) will let the input pulse pass through.
+ */
+prototype.evaluate = function(pulse) {
+  var params = this.marshall(pulse.stamp),
+      out = this.transform(params, pulse);
+  params.clear();
+  return out;
+};
+
+/**
+ * Process incoming pulses.
+ * Subclasses should override this method to implement transforms.
+ * @param {Parameters} _ - The operator parameter values.
+ * @param {Pulse} pulse - The current dataflow pulse.
+ * @return {Pulse} The output pulse (or StopPropagation). A falsy return
+ *   value (including undefined) will let the input pulse pass through.
+ */
+prototype.transform = function() {};
+```
+and, as we can see, Transform inherits from Operator, and so on.
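The `inherits` helper used above comes from vega-util; in isolation, the prototype-chain wiring it performs can be sketched like this (a simplified stand-in, not vega's exact code):

```javascript
// Simplified sketch of the inherits() pattern used by Transform/Operator.
// Not vega-util's exact implementation.
function inherits(child, parent) {
  var proto = child.prototype = Object.create(parent.prototype);
  proto.constructor = child;
  return proto;
}

function Operator(init) { this.value = init; }
Operator.prototype.evaluate = function () { return this.value; };

function Transform(init) { Operator.call(this, init); }
var prototype = inherits(Transform, Operator);
prototype.transform = function () {}; // subclasses override this

var t = new Transform(42);
console.log(t instanceof Operator, t.evaluate()); // true 42
```

Any `Transform` instance therefore picks up `evaluate` from `Operator.prototype`, while `transform` is meant to be overridden per subclass.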
+
+
+But some of the transform functions look independent:
+
+Getting cross product:
+```javascript
+// filter is an optional function for selectively including tuples in the cross product.
+function cross(input, a, b, filter) {
+  var data = [],
+      t = {},
+      n = input.length,
+      i = 0,
+      j, left;
+
+  for (; i<n; ++i) {
+    t[a] = left = input[i];
+    for (j=0; j<n; ++j) {
+      t[b] = input[j];
+      if (!filter || filter(t)) {
+        data.push(t);
+        t = {};
+        t[a] = left;
+      }
+    }
+  }
+
+  return data;
+}
+```
+
+---
+
+...and output them to the `multi-year-report` resource.
+
+The output contains two fields:
+
+- `activity`, which is called `activity` in all sources
+- `amount`, which has varying names in different resources (e.g. `Amount`, `2009_amount`, `amount` etc.)
+
+#### ***`join`***
+
+Joins two streamed resources. 
+
+"Joining" in our case means taking the *target* resource, and adding fields to each of its rows by looking up data in the _source_ resource. 
+
+A special case for the join operation is when there is no target stream, and all unique rows from the source are used to create it. 
+This mode is called _deduplication_ mode: the target resource will be created, and deduplicated rows from the source will be added to it.
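+For example, a deduplication-mode step could look like this (resource names are hypothetical; the `null` target key is what triggers deduplication):
+
+```yaml
+- run: join
+  parameters:
+    source:
+      name: raw_registrations
+      key: ["email"]
+      delete: yes
+    target:
+      name: unique_registrations
+      key: null
+```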
+
+_Parameters_:
+
+- `source` - information regarding the _source_ resource
+  - `name` - name of the resource
+  - `key` - One of
+    - List of field names which should be used as the lookup key
+    - String, which would be interpreted as a Python format string used to form the key (e.g. `{}:{field_name_2}`)
+  - `delete` - delete from data-package after joining (`False` by default)
+- `target` - Target resource to hold the joined data. Should define at least the following properties:
+  - `name` - as in `source`
+  - `key` - as in `source`, or `null` for creating the target resource and performing _deduplication_.
+- `fields` - mapping of fields from the source resource to the target resource. 
+  Keys should be field names in the target resource.
+  Values can define two attributes:
+  - `name` - field name in the source (by default is the same as the target field name)
+
+  - `aggregate` - aggregation strategy (how to handle multiple _source_ rows with the same key). Can take the following options: 
+    - `sum` - summarise aggregated values. 
+      For numeric values it's the arithmetic sum, for strings the concatenation of strings, and other types will error.
+
+    - `avg` - calculate the average of aggregated values.
+
+      For numeric values it's the arithmetic average, and other types will error.
+
+    - `max` - calculate the maximum of aggregated values.
+
+      For numeric values it's the arithmetic maximum, for strings the dictionary maximum, and other types will error.
+
+    - `min` - calculate the minimum of aggregated values.
+
+      For numeric values it's the arithmetic minimum, for strings the dictionary minimum, and other types will error.
+
+    - `first` - take the first value encountered
+
+    - `last` - take the last value encountered
+
+    - `count` - count the number of occurrences of a specific key
+      For this method, specifying `name` is not required. In case it is specified, `count` will count the number of non-null values for that source field.
+
+    - `set` - collect all distinct values of the aggregated field, unordered 
+    
+    - `array` - collect all values of the aggregated field, in order of appearance   
+
+    - `any` - pick any value.
+
+    By default, `aggregate` takes the `any` value.
+
+  If neither `name` nor `aggregate` needs to be specified, the mapping can map to the empty object `{}` or to `null`.
+- `full`  - Boolean,
+  - If `True` (the default), failed lookups in the source will result in "null" values at the target.
+  - if `False`, failed lookups in the source will result in dropping the row from the target.
+
+_Important: the "source" resource **must** appear before the "target" resource in the data-package._
+
+*Examples*:
+
+```yaml
+- run: join
+  parameters: 
+    source:
+      name: world_population
+      key: ["country_code"]
+      delete: yes
+    target:
+      name: country_gdp_2015
+      key: ["CC"]
+    fields:
+      population:
+        name: "census_2015"        
+    full: true
+```
+
+The above example aims to create a package containing the GDP and Population of each country in the world.
+
+We have one resource (`world_population`) with data that looks like:
+
+| country_code | country_name   | census_2000 | census_2015 |
+| ------------ | -------------- | ----------- | ----------- |
+| UK           | United Kingdom | 58857004    | 64715810    |
+| ...          |                |             |             |
+
+And another resource (`country_gdp_2015`) with data that looks like:
+
+| CC   | GDP (£m) | Net Debt (£m) |
+| ---- | -------- | ------------- |
+| UK   | 1832318  | 1606600       |
+| ...  |          |               |
+
+The `join` command will match rows in both datasets based on the `country_code` / `CC` fields, and then copy the value of the `census_2015` field into a new `population` field.
+
+The resulting data package will have the `world_population` resource removed and the `country_gdp_2015` resource looking like:
+
+| CC   | GDP (£m) | Net Debt (£m) | population |
+| ---- | -------- | ------------- | ---------- |
+| UK   | 1832318  | 1606600       | 64715810   |
+| ...  |          |               |            |
+
+
+
+A more complex example:
+
+```yaml
+- run: join
+  parameters: 
+    source:
+      name: screen_actor_salaries
+      key: "{production} ({year})"
+    target:
+      name: mgm_movies
+      key: "{title}"
+    fields:
+      num_actors:
+        aggregate: 'count'
+      average_salary:
+        name: salary
+        aggregate: 'avg'
+      total_salaries:
+        name: salary
+        aggregate: 'sum'
+    full: false
+```
+
+This example aims to analyse salaries for screen actors in the MGM studios.
+
+Once more, we have one resource (`screen_actor_salaries`) with data that looks like:
+
+| year | production                  | actor             | salary   |
+| ---- | --------------------------- | ----------------- | -------- |
+| 2016 | Vertigo 2                   | Mr. T             | 15000000 |
+| 2016 | Vertigo 2                   | Robert Downey Jr. | 7000000  |
+| 2015 | The Fall - Resurrection     | Jeniffer Lawrence | 18000000 |
+| 2015 | Alf - The Return to Melmack | The Rock          | 12000000 |
+| ...  |                             |                   |          |
+
+And another resource (`mgm_movies`) with data that looks like:
+
+| title                     | director      | producer     |
+| ------------------------- | ------------- | ------------ |
+| Vertigo 2 (2016)          | Lindsay Lohan | Lee Ka Shing |
+| iRobot - The Movie (2018) | Mr. T         | Mr. T        |
+| ...                       |               |              |
+
+The `join` command will match rows in both datasets based on the movie name and production year. Notice how we overcome incompatible fields by using different key patterns.
+
+The resulting dataset could look like:
+
+| title            | director      | producer     | num_actors | average_salary | total_salaries |
+| ---------------- | ------------- | ------------ | ---------- | -------------- | -------------- |
+| Vertigo 2 (2016) | Lindsay Lohan | Lee Ka Shing | 2          | 11000000       | 22000000       |
+| ...              |               |              |            |                |                |
+
+
+---
+
+### Vega Dataflow usage for DP views
+
+Vega has quite a lot of data transform functions available; however, most of them require a complicated JSON descriptor. Although we may implement more of them in the future, at the moment we could start with the most basic and essential ones:
+
+**List of transforms that we could use:**
+
+* Aggregate
+* Filter
+* Formula (applies a given formula to the dataset)
+* Sample
+
+#### Aggregate example
+
+We have a dataset with 4 fields - a, b, c and d. Let's apply a different aggregation method to each of them - count, sum, min and max:
+
+```javascript
+const vegadataflow = require('./build/vega-dataflow.js');
+
+var tx = vegadataflow.transforms,
+    changeset = vegadataflow.changeset;
+
+var data = [
+ {
+   "a": 17.76,
+   "b": 20.14,
+   "c": 17.05,
+   "d": 17.79
+ },
+ {
+   "a": 19.19,
+   "b": 21.29,
+   "c": 19.19,
+   "d": 19.92
+ },
+ {
+   "a": 20.33,
+   "b": 22.9,
+   "c": 19.52,
+   "d": 21.12
+ },
+ {
+   "a": 20.15,
+   "b": 20.72,
+   "c": 19.04,
+   "d": 19.31
+ },
+ {
+   "a": 17.93,
+   "b": 18.09,
+   "c": 16.99,
+   "d": 17.01
+ }
+];
+
+var a = vegadataflow.field('a'),
+    b = vegadataflow.field('b'),
+    c = vegadataflow.field('c'),
+    d = vegadataflow.field('d');
+
+var df = new vegadataflow.Dataflow(),
+    col = df.add(tx.Collect),
+    agg = df.add(tx.Aggregate, {
+            fields: [a, b, c, d],
+            ops: ['count', 'sum', 'min', 'max'],
+            pulse: col
+          }),
+    out = df.add(tx.Collect, {pulse: agg});
+
+df.pulse(col, changeset().insert(data)).run();
+
+console.dir(out.value);
+```
+
+Output:
+```javascript
+[ 
+  {
+    _id: 7, 
+    count_a: 5, 
+    sum_b: 103.14, 
+    min_c: 16.99, 
+    max_d: 21.12 
+  }
+]
+```
+
+#### Filter example
+
+Using the dataset from the example above, let's filter out rows where the value of field `a` is not greater than 19:
+
+```javascript
+const vegadataflow = require('./build/vega-dataflow.js');
+
+var tx = vegadataflow.transforms,
+    changeset = vegadataflow.changeset;
+
+var data = [
+ {
+   "a": 17.76,
+   "b": 20.14,
+   "c": 17.05,
+   "d": 17.79
+ },
+ {
+   "a": 19.19,
+   "b": 21.29,
+   "c": 19.19,
+   "d": 19.92
+ },
+ {
+   "a": 20.33,
+   "b": 22.9,
+   "c": 19.52,
+   "d": 21.12
+ },
+ {
+   "a": 20.15,
+   "b": 20.72,
+   "c": 19.04,
+   "d": 19.31
+ },
+ {
+   "a": 17.93,
+   "b": 18.09,
+   "c": 16.99,
+   "d": 17.01
+ }
+];
+
+var a = vegadataflow.field('a');
+
+var filter1 = vegadataflow.accessor(d => { return d.a > 19 }, ['a']);
+
+var df = new vegadataflow.Dataflow(),
+    ex = df.add(null),
+    col = df.add(tx.Collect),
+    fil = df.add(tx.Filter, {expr: ex, pulse: col}),
+    out = df.add(tx.Collect, {pulse: fil});
+
+df.pulse(col, changeset().insert(data));
+df.update(ex, filter1).run();
+
+console.log(out.value);
+
+```
+
+Output:
+```javascript
+[ 
+  { a: 19.19, b: 21.29, c: 19.19, d: 19.92, _id: 3 },
+  { a: 20.33, b: 22.9, c: 19.52, d: 21.12, _id: 4 },
+  { a: 20.15, b: 20.72, c: 19.04, d: 19.31, _id: 5 } 
+]
+```
+
+#### Formula example
+
+Using the same dataset, let's apply formulas to derive new fields:
+
+```javascript
+const vegadataflow = require('./build/vega-dataflow.js');
+
+var tx = vegadataflow.transforms,
+    changeset = vegadataflow.changeset;
+
+var data = [
+ {
+   "a": 17.76,
+   "b": 20.14,
+   "c": 17.05,
+   "d": 17.79
+ },
+ {
+   "a": 19.19,
+   "b": 21.29,
+   "c": 19.19,
+   "d": 19.92
+ },
+ {
+   "a": 20.33,
+   "b": 22.9,
+   "c": 19.52,
+   "d": 21.12
+ },
+ {
+   "a": 20.15,
+   "b": 20.72,
+   "c": 19.04,
+   "d": 19.31
+ },
+ {
+   "a": 17.93,
+   "b": 18.09,
+   "c": 16.99,
+   "d": 17.01
+ }
+];
+
+
+var df = new vegadataflow.Dataflow(),
+    e = vegadataflow.field('e'),
+    f = vegadataflow.field('f'),
+    formula1 = vegadataflow.accessor(d => { return d.a * 10; }, ['a']),
+    formula2 = vegadataflow.accessor(d => { return d.b / 10; }, ['b']),
+    col = df.add(tx.Collect),
+    fa = df.add(tx.Formula, {expr: formula1, as: 'e', pulse: col}),
+    fb = df.add(tx.Formula, {expr: formula2, as: 'f', pulse: fa});
+
+df.pulse(col, changeset().insert(data)).run();
+
+console.log(col.value.map(e));
+console.log(col.value.map(f));
+```
+
+Output:
+```
+[ 177.60000000000002, 191.9, 203.29999999999998, 201.5, 179.3 ]
+[ 2.0140000000000002, 2.129, 2.29, 2.072, 1.809 ]
+```
+
+#### Sample example
+
+Let's create a dataset with 100 rows and take a sample of 10 from it:
+
+```javascript
+const vegadataflow = require('./build/vega-dataflow.js');
+
+var tx = vegadataflow.transforms,
+    changeset = vegadataflow.changeset;
+
+var n = 100,
+    sampleSize = 10,
+    data = Array(n),
+    i;
+
+for (i=0; i<n; ++i) {
+  data[i] = {v: i};
+}
+
+var df = new vegadataflow.Dataflow(),
+    col = df.add(tx.Collect),
+    sam = df.add(tx.Sample, {size: sampleSize, pulse: col});
+
+df.pulse(col, changeset().insert(data)).run();
+
+console.log(sam.value.length);
+```
+
+---
+
+#### Filter
+
+```javascript
+{
+  ...
+  transform: {
+    type: 'filter',
+    expr: 'data.fieldName > 10'
+  },
+  ...
+}
+```
+For the `filter` type, the expression should evaluate to true or false; only rows for which it evaluates to a truthy value will be kept.
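One way a view implementation could evaluate such a descriptor is to compile the `expr` string into a predicate; `compileFilter` below is a hypothetical sketch, not an existing API:

```javascript
// Hypothetical helper: compile a filter descriptor's `expr` string into a
// row predicate. Each row is exposed as `data`, matching the descriptor syntax.
function compileFilter(descriptor) {
  return new Function('data', 'return (' + descriptor.transform.expr + ');');
}

var descriptor = {transform: {type: 'filter', expr: 'data.fieldName > 10'}};
var keep = compileFilter(descriptor);
var rows = [{fieldName: 5}, {fieldName: 12}, {fieldName: 30}];

console.log(rows.filter(keep)); // [ { fieldName: 12 }, { fieldName: 30 } ]
```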
+
+#### Formula
+
+```javascript
+{
+  ...
+  transform: {
+    type: 'formula',
+    expr: ['data.fieldName * 2', 'data.fieldName + 10'],
+    as: ['x', 'y']
+  },
+  ...
+}
+```
+
+For the `formula` type, each expression is applied to the data and its output is stored in the corresponding new field specified in the `as` property.
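A hypothetical sketch of applying such a descriptor (again assuming each row is exposed as `data` inside the expressions):

```javascript
// Hypothetical helper: evaluate each `expr` and store the result in the
// matching field from `as`. Mutates the rows for simplicity.
function applyFormula(rows, descriptor) {
  var exprs = descriptor.transform.expr.map(function (e) {
    return new Function('data', 'return (' + e + ');');
  });
  return rows.map(function (row) {
    descriptor.transform.as.forEach(function (name, i) {
      row[name] = exprs[i](row);
    });
    return row;
  });
}

var descriptor = {transform: {type: 'formula',
                              expr: ['data.fieldName * 2', 'data.fieldName + 10'],
                              as: ['x', 'y']}};
console.log(applyFormula([{fieldName: 4}], descriptor));
// [ { fieldName: 4, x: 8, y: 14 } ]
```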
+
+#### Sample
+
+```javascript
+  ...
+  transform: {
+    type: 'sample',
+    size: 'some integer'
+  },
+  ...
+```
+For the `sample` type, only the sample size is needed.
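A hypothetical sketch of applying a `sample` descriptor - a simple random pick without replacement rather than vega's reservoir sampling:

```javascript
// Hypothetical helper: draw `size` rows at random, without replacement.
function applySample(rows, descriptor) {
  var size = Math.min(descriptor.transform.size, rows.length),
      pool = rows.slice(),
      out = [];
  while (out.length < size) {
    out.push(pool.splice(Math.floor(Math.random() * pool.length), 1)[0]);
  }
  return out;
}

var rows = [];
for (var i = 0; i < 100; i++) rows.push({v: i});

var sample = applySample(rows, {transform: {type: 'sample', size: 10}});
console.log(sample.length); // 10
```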
+
+
+## Appendix: SQL Transforms
+
+Just use SQL, e.g.:
+
+* http://harelba.github.io/q/
+* https://github.com/agershun/alasql
+* https://github.com/google/lovefield
diff --git a/site/blog/2017-04-11-dataworld/README.md b/site/blog/2017-04-11-dataworld/README.md
new file mode 100644
index 000000000..902234fd3
--- /dev/null
+++ b/site/blog/2017-04-11-dataworld/README.md
@@ -0,0 +1,39 @@
+---
+title: data.world
+date: 2017-04-11
+tags: ["case-studies"]
+category: case-studies
+interviewee: Bryon Jacob
+subject_context: data.world uses Frictionless Data specifications to generate schema and metadata related to an uploaded dataset and containerize all three in a Tabular Data Package
+image: /img/blog/data-world-logo.png
+description: Allow users to download a version of a data.world dataset that retains the structured metadata and schema for offline analysis
+author: Bryon Jacob
+---
+
+At [data.world][dataworld], we deal with a great diversity of data, both in terms of content and in terms of source format - most people working with data are emailing each other spreadsheets or CSVs, and not formally defining schema or semantics for what’s contained in these data files.
+
+When [data.world][dataworld] ingests tabular data, we “virtualize” the tables away from their source format, and build layers of type and semantic information on top of the raw data. What this allows us to do is to produce a clean Tabular Data Package[^tdp] for any dataset, whether the input is CSV files, Excel Spreadsheets, JSON data, SQLite Database files - any format that we know how to extract tabular information from - we can present it as cleaned-up CSV data with a `datapackage.json` that describes the schema and metadata of the contents.
+
+![Available Data](./data-world-1.png) 
+*Tabular Data Package structure on disk*
+
+We would also like to see graph data packages developed as part of the Frictionless Data specifications, or “Universal Data Packages” that can encapsulate both tabular and graph data. It would be great to be able to present tabular and graph data in the same package and develop software that knows how to use these things together.
+
+To elaborate on this, it makes a lot of sense to normalize tabular data down to clean, well-formed CSVs, and for data that is more graph-like, to normalize it to a standard format. RDF[^rdf] is a well-established and standardized format, with many serialized forms that could be used interchangeably (RDF XML, Turtle, N-Triples, or JSON-LD, for example). The metadata in the `datapackage.json` would be extremely minimal, since the schema for RDF data is encoded into the data file itself. It might be helpful to use the `datapackage.json` descriptor to catalog the standard taxonomies and ontologies that were in use; for example, it would be useful to know if a file contained SKOS[^skos] vocabularies, or OWL[^owl] classes.
+
+In the coming days, we want to continue to enrich the metadata we include in Tabular Data Packages exported from [data.world][dataworld], and we’re looking into using `datapackage.json` as an import format as well as an export option.
+
+[data.world][dataworld] works with lots of data across many domains - what’s great about Frictionless Data is that it's a lightweight set of content specifications that can be a starting point for building domain-specific content standards - it really helps with the “first mile” of standardizing data and making it interoperable.
+
+![Available Data](./data-world-2.png)
+*Tabular datasets can be downloaded as Tabular Data Packages*
+
+In a certain sense, a Tabular Data Package is sort of like an open-source, cross-platform, accessible replacement for spreadsheets that can act as a “binder” for several related tables of data. **I could easily imagine web or desktop-based tools that look and function much like a traditional spreadsheet, but use Data Packages as their serialization format.**
+
+To read more about Data Package integration at [data.world][dataworld], read our post: [Try This: Frictionless data.world](https://meta.data.world/try-this-frictionless-data-world-ad36b6422ceb#.rbbf8k40t). Sign up, and start playing with data.
+
+[dataworld]: https://data.world
+[^package]: Tabular Data Package: [/data-package/#tabular-data-package](/data-package/#tabular-data-package)
+[^datapackage]: Data Packages: [/data-package](/data-package)
+[^rdf]: RDF: Resource Description Framework:
+[^tdp]: Tabular Data Package specifications: [https://specs.frictionlessdata.io/tabular-data-package](https://specs.frictionlessdata.io/tabular-data-package)
+[^skos]: SKOS: Simple Knowledge Organization System:
+[^owl]: OWL Web Ontology Language:
diff --git a/site/blog/2017-04-11-dataworld/data-world-1.png b/site/blog/2017-04-11-dataworld/data-world-1.png
new file mode 100755
index 000000000..b8615c886
Binary files /dev/null and b/site/blog/2017-04-11-dataworld/data-world-1.png differ
diff --git a/site/blog/2017-04-11-dataworld/data-world-2.png b/site/blog/2017-04-11-dataworld/data-world-2.png
new file mode 100755
index 000000000..c68492e4f
Binary files /dev/null and b/site/blog/2017-04-11-dataworld/data-world-2.png differ
diff --git a/site/blog/2017-04-11-dataworld/data-world-logo.png b/site/blog/2017-04-11-dataworld/data-world-logo.png
new file mode 100755
index 000000000..509e074fb
Binary files /dev/null and b/site/blog/2017-04-11-dataworld/data-world-logo.png differ
diff --git a/site/blog/2017-05-23-cmso/README.md
b/site/blog/2017-05-23-cmso/README.md
new file mode 100644
index 000000000..663302bfc
--- /dev/null
+++ b/site/blog/2017-05-23-cmso/README.md
@@ -0,0 +1,57 @@
+---
+title: Cell Migration Standardization Organization
+date: 2017-05-23
+tags: ["case-studies"]
+category: case-studies
+interviewee: Paola Masuzzo
+subject_context: CMSO uses Frictionless Data specs to package cell migration data and load it into Pandas for data analysis and creation of visualizations.
+image: /img/blog/cmso-logo.png
+description: Building standards for cell migration data in order to enable data sharing in the field.
+author: Paola Masuzzo
+---
+
+Researchers worldwide try to understand how cells move, a process extremely important for many physiological and pathological conditions. [Cell migration](https://en.wikipedia.org/wiki/Cell_migration) is in fact involved in many processes, like wound healing, neuronal development and cancer invasion. The [Cell Migration Standardization Organization](https://cmso.science/) (CMSO) is a community building standards for cell migration data, in order to enable data sharing in the field. The organization has three main working groups:
+
+- Minimal reporting requirement (developing [MIACME](https://github.com/CellMigStandOrg/MIACME), i.e. the Minimum Information About a Cell Migration Experiment)
+- Controlled Vocabularies
+- Data Formats and APIs
+
+In our last working group, we discussed where the Data Package specifications[^datapackages] could be used or expanded for the definition of a standard format and the corresponding libraries to interact with these standards. In particular, we have started to address the standardization of cell tracking data. This is data produced using tracking software that reconstructs cell movement in time based on images from a microscope.
+
+![Diagram](./cmso-1.png)
+_In pink, the [ISA](http://isa-tools.org/) (Investigation Study Assay) model to annotate the experimental metadata; in blue, the [OME](http://www.openmicroscopy.org/) (Open Microscopy Environment) model for the imaging data; in green, our biotracks format based on the Data Package specification for the analytics data (cell tracking, positions, features etc.); in purple, CV: Controlled Vocabulary; and in turquoise, [MIACME](https://github.com/CellMigStandOrg/MIACME): Minimum Information About a Cell Migration Experiment. [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/) Credit: Paola Masuzzo (text) and CMSO (diagram)._
+
+CMSO deals specifically with cell migration data (a subject of cell biology). Our main challenge lies in the heterogeneity of the data. This diversity has its origin in two factors:
+
+- **Experimentally**: Cell migration data can be produced using many diverse techniques (imaging, non-imaging, dynamic, static, high-throughput/screening, etc.)
+- **Analytically**: These data are produced using many diverse software packages, each of these writing data to specific (sometimes proprietary) file formats.
+
+This diversity hampers (or at least makes very difficult) procedures like meta-analysis, data integration, data mining, and last but not least, data _reproducibility_.
+
+CMSO has developed and is about to release the first specification of a [Cell Tracking format](https://cellmigstandorg.github.io/Tracks/). This specification is built on a tabular representation, i.e. data are stored in tables. The current v0.1 of this specification can be seen [here](https://cellmigstandorg.github.io/Tracks/v0.1/).
+
+CMSO is using the _Tabular_ Data Package[^tdp] specification to represent cell migration-derived tracking data, as illustrated
+[here](https://github.com/CellMigStandOrg/biotracks/). The specification is used for two goals:
+
+1. **Create a Data Package representation** where the data---in our case objects (e.g.
cells detected in microscopy images), links and optionally tracks---are stored in CSV files, while metadata and schema[^tableschema] information are stored in a JSON file.
+2. **Write** this Data Package to a pandas[^pandas] dataframe, to aid quick inspection and visualization.
+
+You can see some examples [here](https://github.com/CellMigStandOrg/biotracks/tree/master/examples).
+
+I am an Open Science fan and advocate, so I try to keep up to date with the initiatives of the
+[Open Knowledge International](https://okfn.org) teams. I think I first became aware of Frictionless Data when I saw a tweet and I checked the specs out. Also, CMSO really wanted to keep a possible specification and file format light and simple. So different people of the team must have googled for 'CSV and JSON formats' or something like that, and Frictionless Data popped out :).
+
+I have opened a couple of issues on the [GitHub page of the spec](https://github.com/frictionlessdata/specs), detailing what I would like to see developed in the Frictionless Data project. The CMSO is not sure yet if the Data Package representation will be the one we’ll go for in the very end, because we would first like to know how sustainable/sustained this spec will be in the future.
+
+CMSO is looking into expanding the [list of examples](https://github.com/CellMigStandOrg/biotracks/tree/master/examples) we have so far in terms of tracking software. Personally, I would like to choose a reference data set (a live-cell, time-lapse microscopy data set), and run different cell tracking algorithms/software packages on it. Then I want to put the results into a common, light and easy-to-interpret CSV+JSON format (the biotracks format), and show people how data containerization[^philosophy] can be the way to go to enable research data exchange and knowledge discovery at large.
+
+With most other specifications, cell tracking data are stored in tabular format, but metadata are never kept together with the data, which makes data interpretation and sharing very difficult. The Frictionless Data specifications take good care of this aspect. Some other formats are based on XML[^xml] annotation, which certainly does the job, but are perhaps heavier (even though perhaps more sustainable in the long term). I hate Excel formats, and unfortunately I need to parse those too. I love the integration with Python[^python] and the pandas[^pandas] system; this is a big plus when doing data science.
+
+As a researcher, I mostly deal with research data. I am pretty sure that if this could work for cell migration data, it could work for many cell biology disciplines as well. I recommend speaking to more researchers and data producers to determine additional use cases!
+
+[^pandas]: Pandas: Python package for data analysis:
+[^datapackages]: Data Package: [https://specs.frictionlessdata.io/data-package](https://specs.frictionlessdata.io/data-package)
+[^xml]: Extensible Markup Language:
+[^tdp]: Tabular Data Package: [https://specs.frictionlessdata.io/tabular-data-package](https://specs.frictionlessdata.io/tabular-data-package)
+[^tableschema]: Table Schema: [https://specs.frictionlessdata.io/table-schema](https://specs.frictionlessdata.io/table-schema)
+[^philosophy]: Design Philosophy: [specs](https://specs.frictionlessdata.io/)
+[^python]: Data Package-aware libraries in Python: , ,
diff --git a/site/blog/2017-05-23-cmso/cmso-1.png b/site/blog/2017-05-23-cmso/cmso-1.png
new file mode 100644
index 000000000..3fdf71b18
Binary files /dev/null and b/site/blog/2017-05-23-cmso/cmso-1.png differ
diff --git a/site/blog/2017-05-23-cmso/cmso-logo.png b/site/blog/2017-05-23-cmso/cmso-logo.png
new file mode 100644
index 000000000..abd8e9560
Binary files /dev/null and b/site/blog/2017-05-23-cmso/cmso-logo.png differ
diff --git
a/site/blog/2017-05-24-the-data-retriever/README.md b/site/blog/2017-05-24-the-data-retriever/README.md
new file mode 100644
index 000000000..c36544d26
--- /dev/null
+++ b/site/blog/2017-05-24-the-data-retriever/README.md
@@ -0,0 +1,52 @@
+---
+title: The Data Retriever
+date: 2017-05-24
+tags: ["case-studies"]
+category: case-studies
+interviewee: Ethan White
+subject_context: Data Retriever uses Frictionless Data specifications to generate and package metadata for publicly available data
+image: /img/blog/data-retriever-logo.png
+description: The Data Retriever is a package manager for data. It downloads, cleans, and stores publicly available data, so that analysts spend less time cleaning and managing data, and more time analyzing it.
+author: Ethan White
+---
+
+[The Data Retriever](http://www.data-retriever.org/) automates the tasks of finding, downloading, and cleaning up publicly available data, and then stores them in a variety of databases and file formats. This lets data analysts spend less time cleaning up and managing data, and more time analyzing it.
+
+We originally built the Data Retriever starting in 2010 with a focus on ecological data. Over time, we realized that the common challenges with finding, downloading, and cleaning up ecological data applied to data in most other fields, so we rebranded and started integrating data from other fields as well.
+
+The Data Retriever is primarily focused on *tabular* data, but we’re starting work on supporting spatial data as well.
+
+![Diagram](./data-retriever-install.gif)
+*The Data Retriever automatically installing the [BBS (USGS North American Breeding Bird Survey)](https://www.pwrc.usgs.gov/bbs/) dataset*
+
+Data is often messy and needs cleaning and restructuring before it can be effectively used. It is often not feasible to modify and redistribute the data due to licensing and other limitations (Editor's note: see our [Open Power System Data case study](/blog/2016/11/15/open-power-system-data/) for more on this).
+
+We need to make it as easy as possible for contributors to [add new datasets](https://retriever.readthedocs.io/en/latest/retriever.lib.html#retriever-lib-package). For relatively clean datasets this means having a simple, easy-to-work-with metadata standard to describe existing data. The description for each dataset is written in a single file which gets read by our plugin infrastructure.
+
+To describe the structure of simple data, we originally created a YAML-like[^yaml] metadata structure. When the Data Package[^datapackage] specs were created by [Open Knowledge International](https://okfn.org/), we decided to switch over to using this standard so that others could benefit from the metadata we were creating and so that we could benefit from the standards-based infrastructure[^software] being created around the specs.
+
+The transition to the Data Package specification was fairly smooth as most of the fields we needed were already included in the specs. The only things that we needed to add were fields for restructuring poorly formatted data, since the spec assumes the data is well structured to begin with. For example, we use custom fields for describing how to convert [**wide** data to **long** data](https://en.wikipedia.org/wiki/Wide_and_narrow_data).
+
+We first learned about Frictionless Data through the [announcement](https://blog.okfn.org/2016/02/29/sloan-foundation-funds-frictionless-data-tooling-and-engagement-at-open-knowledge/) of their funding by the Sloan Foundation.
Going forward, we would love to see the Data Package spec expanded to include information about "imperfections" in data. It currently assumes that the person creating the metadata can modify the raw data files to comply with the standard rules of data structure. However, this doesn’t work if someone else is distributing the data, which is a very common use +case. + +The expansion of the standard would include things like a way to indicate wide versus long data with enough information to uniquely describe how to translate from one to the other, as well as information on single tables that are composed from data in many separate files. We have already been adding new fields to the JSON to accomplish some of these things and would be happy to be part of a larger dialog about implementing them more widely. For the wide-data-to-long-data example mentioned above, we use `ct_column` and `ct_names` fields and a `ct-type` type to indicate how to transform the data into a properly normalized form. + +The other thing we’ve come across is the need to develop a clear specification for [semantic versioning](http://semver.org/) of Data Packages. The specification includes an optional `version` field[^version] for keeping track of changes to the package. This version has a standard structure from semantic versioning in software that includes major, minor, and patch level changes. Unlike in software, there is no clearly established standard for what changes in different version numbers indicate. Since we work with a lot of different datasets, we’ve been changing a lot of version numbers over the last year; this has led us to [open a discussion with the OKFN team](https://github.com/frictionlessdata/specs/issues/421) about developing a standard to apply to these changes. + +Our next big step is working on the challenge of **simple data integration**. One of the major challenges data analysts have after they have cleaned up and prepared individual data sources is combining them.
General solutions to the data integration problem (e.g. linked data approaches) have proven too difficult, but we are approaching the problem by tackling a small number of common use cases and involving humans in the development of metadata describing the linkages between datasets. + +The major specification that is available for ecological data is the [Ecological Metadata Language (EML)](https://knb.ecoinformatics.org/#external//emlparser/docs/index.html). It is an XML[^xml] based spec that includes a lot of information specific to ecological datasets. The nice thing about EML---which is also its challenge---is that it is very comprehensive. This gives it a lot of strength in a linked data context, but also means that it is difficult to drive adoption by users. + +The Frictionless Data specifications line up better with our approach to data[^philosophy], which is to complement lightweight computational methods with human contributions to make data easier to work with quickly. + +Community contributions to our work are welcome. We work hard to make all of our development efforts open and inclusive (see our [Code of Conduct](https://github.com/weecology/retriever/blob/master/docs/code_of_conduct.rst)) and love it when new developers, data scientists, and domain specialists [contribute](http://www.data-retriever.org/#contribute). A contribution can be as easy as adding a new dataset by following [a set of prompts](https://retriever.readthedocs.io/en/latest/retriever.lib.html#retriever-lib-package) to create a new JSON file and submitting a [PR](https://help.github.com/articles/about-pull-requests/) on GitHub, or even just opening an issue to tell us about a dataset that would be useful to you. So, [open an issue](http://github.com/weecology/retriever/issues/new), submit a PR, or stop by our [Gitter chat channel](https://gitter.im/weecology/retriever) and say "Hi".
We also participate in [Google Summer of Code](https://developers.google.com/open-source/gsoc/), which is a great opportunity for students interested in being directly supported to work on the project. + +[^pandas]: Pandas: Python package for data analysis: +[^datapackage]: Data Package: [https://specs.frictionlessdata.io/data-package](https://specs.frictionlessdata.io/data-package) +[^xml]: Extensible Markup Language: +[^tdp]: Tabular Data Package: [https://specs.frictionlessdata.io/tabular-data-package](https://specs.frictionlessdata.io/tabular-data-package) +[^tableschema]: Table Schema: [https://specs.frictionlessdata.io/table-schema](https://specs.frictionlessdata.io/table-schema) +[^philosophy]: Design Philosophy: [/specs/#design-philosophy](https://specs.frictionlessdata.io/#design-philosophy) +[^python]: Data Package-aware libraries in Python: , , +[^version]: Data Package version field: [/specs/#version](https://specs.frictionlessdata.io/patterns/#data-package-version) +[^yaml]: YAML Ain't Markup Language: diff --git a/site/blog/2017-05-24-the-data-retriever/data-retriever-install.gif b/site/blog/2017-05-24-the-data-retriever/data-retriever-install.gif new file mode 100755 index 000000000..3f2292e80 Binary files /dev/null and b/site/blog/2017-05-24-the-data-retriever/data-retriever-install.gif differ diff --git a/site/blog/2017-05-24-the-data-retriever/data-retriever-logo.png b/site/blog/2017-05-24-the-data-retriever/data-retriever-logo.png new file mode 100755 index 000000000..a589739a8 Binary files /dev/null and b/site/blog/2017-05-24-the-data-retriever/data-retriever-logo.png differ diff --git a/site/blog/2017-06-26-pacific-northwest-national-laboratory-active-data-biology/README.md b/site/blog/2017-06-26-pacific-northwest-national-laboratory-active-data-biology/README.md new file mode 100644 index 000000000..966f3a664 --- /dev/null +++ b/site/blog/2017-06-26-pacific-northwest-national-laboratory-active-data-biology/README.md @@ -0,0 +1,101 @@ +--- 
+title: Pacific Northwest National Laboratory - Active Data Biology +date: 2017-06-26 +author: Sam Payne (PNNL), Joon-Yong Lee (PNNL), Dan Fowler (OKI) +tags: ["pilot"] +category: pilots +subject_context: Sam Payne and Joon-Yong Lee work at the Pacific Northwest National Laboratory. Together, we explored use of Frictionless Data's specifications and software to generate schema for tabular data and validate metadata stored as part of a biological application on GitHub. +image: /img/blog/pnnl.png +description: Using goodtables to validate metadata stored as part of a biological application on GitHub. +--- + +## Context + +### Problem We Were Trying To Solve + +Sam Payne and his team at the Pacific Northwest National Laboratory (PNNL) have designed an application called [Active Data Biology](https://adbio.pnnl.gov/) (ADBio), which is an interactive web-based suite of tools for analyzing high-throughput omics (a set of related fields of study in biology). The goal is to visualize and analyze datasets while still enabling seamless collaboration between computational and non-computational domain experts. The tool provides several views on the same data, facilitating different avenues of investigation. + +One of the high-level goals of ADBio was to make collaborative data analysis work in a similar manner to collaborative software development (versioned, asynchronous, flexible, sharable, global). You can read more of the motivation in the Open Knowledge International blog post [Git for Data Analysis – why version control is essential for collaboration and for gaining public trust](https://blog.okfn.org/2016/11/29/git-for-data-analysis-why-version-control-is-essential-collaboration-public-trust/) written by Sam Payne as part of the pilot. To facilitate this goal, Sam and his team used version-controlled repositories as the storage mechanism for all required resources.
Data, software (for conducting analyses), and insights (gained from these analyses) for the project all get checked into the same repository. ADBio pulls data and software directly from the repository and serves up an interactive visualization for data exploration. Any insight you choose to record gets checked back into the repository. + +![ADBio](./adbio.png) + +When we were first approached by Sam and his team, they outlined several use cases for which it might be valuable to have formal Data Package support (with the benefit of the associated tooling) within their framework. In the end, we decided to work on the first: *validating metadata associated with ADBio repositories*. + +### Use Case: Validating Metadata + +To initiate a project in Active Data Biology, users start with a dataset of quantitative molecular measurements across multiple samples combined with metadata for each sample. Each repository on ADBio contains these two types of files. For clinical experiments, the metadata may include information about a participant’s age, gender, disease stage, etc. For an environmental experiment, this may be geographical location, temperature, time of day, etc. One [example](https://github.com/ActiveDataBio/ADB-User-Study/blob/master/metadata.tsv) of a metadata file can be found in the ADB-User-Study project repository under the [ActiveDataBio organization on GitHub](https://github.com/ActiveDataBio/). + +The metadata file can be updated or expanded during the course of analysis. This is currently not easily done within ADBio. Moreover, the researchers lacked any formal schema describing the metadata file and its contents. It was suggested that having a Data Package formalizing the metadata file would be a benefit. This would also enable validation of the contents, according to the schema stored as part of the Data Package.
Finally, the researchers also requested the development of a web UI to edit the metadata file that would be an application within the ADBio suite. Users could then update the schema online, and it would be versioned through GitHub like everything else. + +### Scenario + +A user gets updated survival information for patients in a clinical study and wants to update the metadata associated with this experiment. Within ADBio, the user opens the "Metadata" app and enters new information into the user interface. When finished, the user clicks a ‘save’ button and the data is validated against the schema. If it fails, the specific cells are highlighted and annotated with failure codes. If it passes, the new metadata file is checked into the repository with a user-specified comment for the commit message. + +## The Work + +### What Did We Do + +This was a valuable pilot for several reasons. For one, the researchers’ interests in openness and the value of public, versioned infrastructure like GitHub for tabular, flat file datasets aligned well with the overall interests of the project. OKI’s first step was to start a new repository to track progress [in the open](https://github.com/frictionlessdata/pilot-pnnl). In addition, OKI also created their own ["fork" (i.e. versioned copy) of the repository](https://github.com/frictionlessdata/ADB-User-Study) in which PNNL stored their exemplar metadata file. + +### Data + +The `metadata.tsv` file is specially formatted compared to other TSV (tab-separated values) files in that it contains two extra rows below the header for describing a column’s *methods* and *descriptions*. While this is a neat way of storing metadata for each column, it is not particularly standard: ordinarily, we would expect all rows below the header to contain actual data. Nevertheless, it provided a great start to the development of a custom schema.
We used the information stored in these rows to generate a [Table Schema](https://specs.frictionlessdata.io/table-schema/) for the data compatible with our software ([the schema](https://github.com/frictionlessdata/ADB-User-Study/blob/master/metadata-schema.json)). + +For instance, if a column in the original metadata.tsv file had the text `categorical` in its `#methods` row, we knew that this translated very well to our [enum (short for enumerated list) constraint](https://specs.frictionlessdata.io/table-schema/#constraints). However, this was not enough. We had to infer from the values below in the dataset which values were actually valid categorical values for that column. So, for example, the `PlatinumStatus` column could only be one of `Resistant`, `Sensitive`, or `Tooearly` leading to the following constraint definition in Table Schema: + +``` +"constraints": { + "enum": [ + "Resistant", + "Sensitive", + "Tooearly" + ] +} +``` + +More straightforward was the translation of the `#descriptions` row; each description was translated directly into a [description attribute](https://specs.frictionlessdata.io/table-schema/#description) on the column: + +``` +"description": "It describes whether the patient was resistant to platinum (chemotherapy) treatment", +``` + +What the `metadata.tsv` file did not record at all was any information about the "type" of value expected for each column. For instance, the `days_to_death` column would never contain a value that was of a "geopoint" type, but rather always a number (and a whole number at that). Likewise, the `additional_immuno_therapy` column would always be a True/False (i.e. boolean) value. With PNNL’s domain expertise, OKI added these expectations to the schema so that `days_to_death` could be relied upon to always be an integer and `additional_immuno_therapy` a boolean (True/False) value. 
+ +``` +{ + "name": "additional_immuno_therapy", + "type": "boolean" +} +``` + +Up to this point, the dataset provided by PNNL was adequately described by our specifications. One challenge, though, was how to deal with the many missing values in the dataset. While we had discussed the [topic](https://github.com/frictionlessdata/specs/issues/97), we had not yet established a formal way of specifying it. In part due to observed usage and the needs of the pilot, we formalized an approach to recording information about which values signal missing data in [mid-August 2016](https://twitter.com/OKFNLabs/status/765568650699018241). We added this information to the Table Schema: + +``` +"missingValues": [ + "[Not Applicable]", + "[Not Available]", + "[Pending]" +] +``` + +### Software + +Goodtables had [existed](http://okfnlabs.org/blog/2015/02/20/introducing-goodtables.html) as a Python library and web application developed by Open Knowledge International to support the validation of tabular datasets both in terms of structure and also with respect to a published schema as described above. This software was put to good use in a local government context. + +For this pilot, and in coordination with other work in the project, we took the opportunity to drastically improve the software to support the online, automated validation referenced in the above use case. We took as inspiration the workflow in use in software development environments around the world---continuous automated testing---and applied it to data. This involved not only updating the Python library to reflect the specification development to date, but also the design of a new data publishing workflow that is applicable beyond PNNL’s needs. It is designed to be extensible, so that custom checks and custom backends (e.g. other places where one might publish a dataset) can take advantage of this workflow.
For example, in addition to datasets stored on GitHub, the new goodtables supports the automated validation of datasets stored on S3 and we are currently working on validation of datasets stored on CKAN. + +Goodtables supports validation of tabular data in GitHub repositories to solve the use case for Active Data Biology. On every update to the dataset, a validation task is run on the stored data. + +## Review + +### How Effective Was It + +The omics team at PNNL are still investigating the use of goodtables.io for their use case, but early reports are positive: + +> We created a schema and started testing it. So far so good! I think this is going to work for a lot of projects which want to store data in a repo. + +As a real test of the generality of goodtables, we also tried to apply it to another project. This second project is a public repository describing measurements of metabolites in ion mobility mass spectrometry. Here, we are again using flat files for structured data. The data is actually a library of information describing metabolites, and we know that the library will be growing. So it was very similar to the ADBio project, in that the curated data would be continually updated. (see for the project itself, and for a validation script that leverages goodtables). 
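The kind of schema-driven cell checking described in this pilot---enum constraints, expected types, and `missingValues` markers---boils down to logic along the following lines. This is a simplified, illustrative sketch, not goodtables' actual code, and the field definitions are examples taken from the schema discussed above:

```python
# Simplified illustration of schema-driven cell validation.
# Field definitions mirror the examples above; error codes are invented.
schema = {
    "missingValues": ["[Not Applicable]", "[Not Available]", "[Pending]"],
    "fields": {
        "PlatinumStatus": {"type": "string",
                           "enum": ["Resistant", "Sensitive", "Tooearly"]},
        "days_to_death": {"type": "integer"},
    },
}

def check_cell(field_name, value):
    """Return None if the cell is valid, else a short error code."""
    if value in schema["missingValues"]:
        return None  # recognised marker for missing data, not an error
    field = schema["fields"][field_name]
    if field["type"] == "integer":
        try:
            int(value)
        except ValueError:
            return "type-error"
    enum = field.get("enum")
    if enum is not None and value not in enum:
        return "enum-error"
    return None

print(check_cell("PlatinumStatus", "Sensitive"))    # None
print(check_cell("PlatinumStatus", "[Pending]"))    # None
print(check_cell("days_to_death", "not-a-number"))  # type-error
```

Running checks like this on every push is what turns a static schema into the continuous validation workflow described above.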
+ +Of course, technical issues that they have encountered have been translated into GitHub issues and are being addressed: + +- +- +- diff --git a/site/blog/2017-06-26-pacific-northwest-national-laboratory-active-data-biology/adbio.png b/site/blog/2017-06-26-pacific-northwest-national-laboratory-active-data-biology/adbio.png new file mode 100644 index 000000000..79c411453 Binary files /dev/null and b/site/blog/2017-06-26-pacific-northwest-national-laboratory-active-data-biology/adbio.png differ diff --git a/site/blog/2017-06-26-pacific-northwest-national-laboratory-active-data-biology/pnnl.png b/site/blog/2017-06-26-pacific-northwest-national-laboratory-active-data-biology/pnnl.png new file mode 100644 index 000000000..daf05162b Binary files /dev/null and b/site/blog/2017-06-26-pacific-northwest-national-laboratory-active-data-biology/pnnl.png differ diff --git a/site/blog/2017-08-09-collections-as-data/README.md b/site/blog/2017-08-09-collections-as-data/README.md new file mode 100644 index 000000000..f4aa4755c --- /dev/null +++ b/site/blog/2017-08-09-collections-as-data/README.md @@ -0,0 +1,15 @@ +--- +title: Collections as Data Facets - Carnegie Museum of Art Collection Data +date: 2017-08-09 +tags: ["case-studies"] +author: Dan Fowler +category: case-studies +interviewee: David Newbury and Dan Fowler +subject_context: In this ‘Always Already Computational - Collections as Data’ facet, Open Knowledge International’s Dan Fowler and Carnegie Museum of Art’s (CMOA) David Newbury document the release of CMOA data on Github for public access and creative use, and use of Frictionless Data’s set of specifications in the process.
+image: /img/blog/cmoa-logo.png +description: Use of Frictionless Data specifications in the release of Carnegie Museum of Art’s Collection Data for public access & creative use +--- + +This blog post was [originally published as part of the Collections as Data Facets document collections](https://collectionsasdata.github.io/facet2/) on the Always Already Computational - Collections as Data website. + diff --git a/site/blog/2017-08-09-collections-as-data/cmoa-logo.png b/site/blog/2017-08-09-collections-as-data/cmoa-logo.png new file mode 100644 index 000000000..4a49eefa5 Binary files /dev/null and b/site/blog/2017-08-09-collections-as-data/cmoa-logo.png differ diff --git a/site/blog/2017-08-09-tutorial-template/README.md b/site/blog/2017-08-09-tutorial-template/README.md new file mode 100644 index 000000000..e0e4c66e3 --- /dev/null +++ b/site/blog/2017-08-09-tutorial-template/README.md @@ -0,0 +1,80 @@ +--- +title: Template for Tutorials +date: 2017-08-09 +tags: +--- + +This post provides you with a template for writing Frictionless Data tutorials. Specifically, tutorials of the form: **How to do X thing using Y Frictionless Data tool**. + + + +## Introduction + +You want to start by introducing what you are doing, e.g. + +> In this tutorial you'll learn how to {do a thing using a tool} to {provide some benefit} (This first sentence may be inspired by a [user story](http://frictionlessdata.io/user-stories/)). + +Clearly state the objective of your tutorial in the title and then once again in more detail at the very beginning of the tutorial. This gives readers an idea of what to expect and helps them determine if they want to continue reading. + +> **Tutorial time** : 20 minutes +> +> **Audience** : Beginner Data Packagers {user role} with {skill level}.
+ +Then continue like this: + +> ## What you'll need +> +> You'll need a basic understanding of: +> +> - JSON syntax +> - how to run commands in Terminal +> +> To complete this tutorial you'll need: +> +> - a computer (macOS or Windows) with access to the internet +> - an account on datahub.io ([here's how](https://datahub.ckan.io/about)) +> +> ## Introduction +> +> Introduce any basic concepts. +> +> To {achieve the benefit} we'll guide you through these steps: +> +> 1. [import the data](#1-import-the-data) +> 2. [generate a table schema](#2-generate-a-table-schema) +> 3. [create a data package](#3-create-a-data-package) +> 4. [publish the data package](#4-publish-the-data-package) +> +> ### 1. Import the data +> +> Write in a friendly, conversational style. Using humor is fine. +> +> ### 2. Generate a table schema +> +> Include pictures. Highlight key items on screenshots. Make sure pictures can be viewed in full size. +> +> ### 3. Create a data package +> +> Explain why something must be done, not just how to do it. +> +> ### 4. Publish the data package +> +> In this step you'll... +> +> +> ## Congratulations +> +> In 4 simple steps you've learned how to {do a thing}. With this new knowledge, now you can {achieve a benefit}.
+> +> Now go {do something} +> +> ## Learn more +> +> ### Related Guides +> +> - Tabular Data Package guide - +> +> ### References +> +> - [Tabular Data Package specification](/specs/tabular-data-package/) + diff --git a/site/blog/2017-08-15-causa-natura-pescando-datos/README.md b/site/blog/2017-08-15-causa-natura-pescando-datos/README.md new file mode 100644 index 000000000..e355d843b --- /dev/null +++ b/site/blog/2017-08-15-causa-natura-pescando-datos/README.md @@ -0,0 +1,88 @@ +--- +title: Causa Natura - Pescando Datos +author: Gabriela Rodriguez (Causa Natura/Engine Room), Adrià Mercader (OKI), Jo Barratt (OKI), Eduardo Rolón (Causa Natura) +date: 2017-08-15 +tags: ["pilot"] +category: pilots +subject_context: Eduardo Rolón is the Executive Director of Causa Natura and Gabriela Rodriguez from Causa Natura/Engine Room is working on the Pescando Datos platform. Together, we explored use of data validation software in the project to improve the quality of data to support fisher communities and advocacy groups. +image: /img/blog/causanatura.png +description: Using goodtables to improve the quality of data to support fisher communities and advocacy groups. +--- + +## Context + +Causa Natura is a non-profit organization based in Mexico. It supports public policies to allow management of natural resources respecting human rights, equity, efficiency and sustainability. This project, “Pescando Datos”, seeks to advocate for improved public policies for more than just subsidies allocation, through the collection, analysis, and visualization of data around subsidies available to fishing communities in Mexico. + +After an extended period of analysis, a web platform is being built to explore and visualize the data, with launch due later in 2017.
Following a meeting at csv,conf, after a presentation by Adrià Mercader on [‘Continuous Data Validation for Everybody’](https://www.youtube.com/watch?v=Gk2F4hncAgY&index=35&list=PLg5zZXwt2ZW5UIz13oI56vfZjF6mvpIXN), we piloted with Causa Natura to explore how our goodtables service can support the project. We spoke to Eduardo Rolón, Executive Director of Causa Natura, and Gabriela Rodriguez, who is working on the platform. + +### Problem We Were Trying To Solve + +Causa Natura are making a lot of freedom of information requests in Mexico for information to do with fishers in order to understand how policies are impacting people. The data is needed to support a range of stakeholders, from the many co-op fisher communities to advocacy organisations. + +> Eduardo Rolón: Advocacy organizations, either from CSOs or from the fisheries sector may be more interested in data that evaluates and supports policy recommendations. Fisher communities have more immediate needs, such as how to obtain better governmental services and support. + +> Gabriela Rodriguez: The data is important to us because campaigns and decisions will be made based on the analysis of the data Causa Natura collected. To be able to do the required analysis we need good data. + +> Gabriela Rodriguez: Currently, there is a tedious process of cleaning to give us data that can be worked on. Much of the data Causa Natura was using came as PDFs and needed to be processed. We process a lot of PDFs and Excel files and there are a lot of problems getting the OCR to capture the information correctly to csv. For example, names are not consistent and this causes us a lot of problems. + +## The Work + +### Software + +goodtables was an existing Python library and web application developed by Open Knowledge International to support the validation of tabular datasets both in terms of structure and also with respect to a published schema.
We introduced goodtables in a [blog post](http://okfnlabs.org/blog/2015/02/20/introducing-goodtables.html) earlier this year. + +On top of that, Open Knowledge International has developed goodtables.io, a web service for continuous data validation that connects to different data sources to generate structure and content reports. + + +### What Did We Do + +Let’s see how goodtables.io has helped to identify source and structural errors in the Causa Natura pilot dataset: + +![ADBio](./pescandodatos1.png) + +After we’ve signed in, we synchronize our GitHub repositories and activate the repository we want to validate (https://github.com/frictionlessdata/pilot-causanatura): + +![ADBio](./pescandodatos2.png) + +Once the repository is activated, every time there is an update on the data hosted on GitHub, the service will generate a validation report. This is what one of these reports looks like: + +![ADBio](./pescandodatos3.png) + +Here, we see that there are 59 valid tables, but the report has identified source and structural errors in 41 of the other tables hosted on the repository, including: + +* duplicate rows +* duplicate headers +* blank rows +* missing values + +The full list of checks exercised by goodtables.io can be found in the [Data Quality Spec](https://github.com/frictionlessdata/data-quality-spec/blob/master/spec.json). And the full report can be found [here](http://goodtables.io/github/frictionlessdata/pilot-causanatura/jobs/7). + +After identifying errors we went back to do a manual cleanup of the data.
As we mentioned, there is no need to run goodtables.io validation manually - it happens on any GitHub push for all activated repositories: + +![ADBio](./pescandodatos4.png) + +If we need to customize a validation process we can put a goodtables.yml configuration file on the repository root, allowing us to tweak settings like the actual checks to perform, the limit of rows to check, etc.: + +![ADBio](./pescandodatos5.png) + +And instant feedback is available via GitHub commit statuses and a goodtables.io badge that can be included in the README file: + +![ADBio](./pescandodatos6.png) + +## Review + +> Gabriela Rodriguez: Right now I have not been using it extensively yet but I have a lot of faith that it could get incorporated in the process of importing data into the Github repository. It should be easy to introduce into our workflow. I really like the process of hooks after git-push as I’m trying to get the organization to use Github for new data. I really like the validation part and that a report is generated each time data is pushed. This is very important and very useful. This makes it easier for the people who are doing the cleaning of data who may not have experience with GitHub. + +> Gabriela Rodriguez: The web interface needs a lot of usability work. But the idea is awesome. There are problems and it is kind of hard to use at the moment as it takes a long time to sync repositories and the process is not clear, but I think it has a huge potential to make a difference to the work we are doing, mostly if people use Github to store data then it could make a difference. + +## Next Steps + +### Areas for further work + +> Gabriela Rodriguez: With continuous integration it would be very helpful to be notified with messages about the problems in the data. Perhaps email notifications would be a good way to go, or integrations with other programs - Slack for example - would be fantastic.
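As an aside, the goodtables.yml customization mentioned earlier might look something like the following sketch. The option names and file paths here are assumptions for illustration and should be checked against the goodtables.io documentation:

```yaml
# Illustrative goodtables.yml sketch — option names and paths are
# assumptions, not verified against the goodtables.io documentation.
files:
  - source: data/subsidios.csv        # hypothetical data file
    schema: data/subsidios.schema.json
settings:
  row_limit: 10000                    # stop checking after this many rows
```

Committing a file like this to the repository root would let each push be validated with the project's own checks and limits rather than the defaults.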
+ +One thing to note is that all the errors shown following the analysis refer to the structure of the data files (missing headers, duplicate rows, etc). Including schema validation against some of the files would be a very logical next step in testing whether the contents of the data are what is expected. We are now planning to work with Causa Natura to take the steps to identify a subset of the data and create a base schema/data package that will be easily expandable and extendable. + +### Find Out More + +To explore for yourself and collaborate, see the Pescando Datos project on [github](https://github.com/pescandodatos/datos) and our goodtables [reports](http://goodtables.io/github/frictionlessdata/pilot-causanatura) from the project. diff --git a/site/blog/2017-08-15-causa-natura-pescando-datos/causanatura.png b/site/blog/2017-08-15-causa-natura-pescando-datos/causanatura.png new file mode 100644 index 000000000..12f6142f3 Binary files /dev/null and b/site/blog/2017-08-15-causa-natura-pescando-datos/causanatura.png differ diff --git a/site/blog/2017-08-15-causa-natura-pescando-datos/pescandodatos1.png b/site/blog/2017-08-15-causa-natura-pescando-datos/pescandodatos1.png new file mode 100644 index 000000000..e50fbfead Binary files /dev/null and b/site/blog/2017-08-15-causa-natura-pescando-datos/pescandodatos1.png differ diff --git a/site/blog/2017-08-15-causa-natura-pescando-datos/pescandodatos2.png b/site/blog/2017-08-15-causa-natura-pescando-datos/pescandodatos2.png new file mode 100644 index 000000000..942881f36 Binary files /dev/null and b/site/blog/2017-08-15-causa-natura-pescando-datos/pescandodatos2.png differ diff --git a/site/blog/2017-08-15-causa-natura-pescando-datos/pescandodatos3.png b/site/blog/2017-08-15-causa-natura-pescando-datos/pescandodatos3.png new file mode 100644 index 000000000..36d81e815 Binary files /dev/null and b/site/blog/2017-08-15-causa-natura-pescando-datos/pescandodatos3.png differ diff --git
a/site/blog/2017-08-15-causa-natura-pescando-datos/pescandodatos4.png b/site/blog/2017-08-15-causa-natura-pescando-datos/pescandodatos4.png new file mode 100644 index 000000000..5510fe113 Binary files /dev/null and b/site/blog/2017-08-15-causa-natura-pescando-datos/pescandodatos4.png differ diff --git a/site/blog/2017-08-15-causa-natura-pescando-datos/pescandodatos5.png b/site/blog/2017-08-15-causa-natura-pescando-datos/pescandodatos5.png new file mode 100644 index 000000000..6e344b63f Binary files /dev/null and b/site/blog/2017-08-15-causa-natura-pescando-datos/pescandodatos5.png differ diff --git a/site/blog/2017-08-15-causa-natura-pescando-datos/pescandodatos6.png b/site/blog/2017-08-15-causa-natura-pescando-datos/pescandodatos6.png new file mode 100644 index 000000000..d3c4de27b Binary files /dev/null and b/site/blog/2017-08-15-causa-natura-pescando-datos/pescandodatos6.png differ diff --git a/site/blog/2017-08-15-center-for-data-science-and-public-policy-workforce-data-initiative/README.md b/site/blog/2017-08-15-center-for-data-science-and-public-policy-workforce-data-initiative/README.md new file mode 100644 index 000000000..b53b89fc7 --- /dev/null +++ b/site/blog/2017-08-15-center-for-data-science-and-public-policy-workforce-data-initiative/README.md @@ -0,0 +1,27 @@ +--- +title: Center for Data Science and Public Policy, Workforce Data Initiative +date: 2017-08-15 +tags: ["case-studies"] +category: case-studies +interviewee: Matt Bauman +subject_context: The Workforce Data Initiative uses Frictionless Data specifications and software to support collection and distribution of workforce data and statistics in the US. +image: /img/blog/chicago.png +description: Supporting state and local workforce boards in managing and publishing data. +--- + +The Workforce Data Initiative aims to modernize the US workforce through the use of data. 
One aspect of this initiative is to help state and local workforce boards collect, aggregate, and distribute statistics on the effectiveness of training providers and the programs they offer. The US Department of Labor mandates that every eligible training provider (ETP) work with state workforce +boards to track the outcomes of their students in order to receive federal funding. We are building a suite of open-source tools using open data specifications in order to help make this a reality; this collection of tools is called the Training Provider Outcomes Toolkit (TPOT). This specific tool, the etp-uploader, is a website that state workforce boards can deploy for training providers to upload their individual-level data. + +There are many hundreds or thousands of training providers within the purview of each workforce development board. Each one must securely upload their participant data to their workforce board. This means that the workforce development boards must be equipped to receive and validate the data. + +Training providers range from small trade apprenticeships to community colleges to multi-state organizations, with a wide range of data sophistication. The ways in which the workforce data board collects participant outcomes must be easy and accessible to all organizations. At the same time, it must be easy for the board itself to automatically process and validate the datasets. + +We use the Frictionless Data Table Schema specification to define the required columns and data value constraints. This is decoupled from the code, allowing each state to precisely define their requirements and easily create custom instances of the site. We expose this flexibility through a [Heroku build script](https://id.heroku.com/login). + +We have modified the [goodtables-web project](https://github.com/frictionlessdata/goodtables-web) to add support for uploading to an S3 repository. 
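The schema-driven validation described above (required columns plus value constraints, kept separate from the code) can be sketched in plain Python. This is a toy illustration: the field names and constraints are hypothetical, and a real deployment would load the schema from the decoupled Table Schema file and validate it with the tableschema or goodtables libraries.

```python
# Toy sketch of Table Schema-style validation. The schema below is
# hypothetical; in a real deployment the schema lives in a separate file,
# so each state can change requirements without touching validator code.

schema = {
    "fields": [
        {"name": "student_id", "type": "string", "constraints": {"required": True}},
        {"name": "completed", "type": "boolean", "constraints": {"required": True}},
        {"name": "wage", "type": "number", "constraints": {"minimum": 0}},
    ]
}

def validate_row(row, schema):
    """Return a list of error messages for one row (a dict) against the schema."""
    errors = []
    for field in schema["fields"]:
        name = field["name"]
        constraints = field.get("constraints", {})
        value = row.get(name)
        if value is None:
            if constraints.get("required"):
                errors.append(f"missing required value: {name}")
            continue
        if field["type"] == "number":
            try:
                value = float(value)
            except ValueError:
                errors.append(f"not a number: {name}={value!r}")
                continue
            if "minimum" in constraints and value < constraints["minimum"]:
                errors.append(f"below minimum: {name}={value}")
    return errors

print(validate_row({"student_id": "a1", "completed": True, "wage": "-3"}, schema))
# → ['below minimum: wage=-3.0']
```

Because the schema is plain data rather than code, swapping in a different state's requirements is a configuration change, which is the point of decoupling the schema from the uploader.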
We’ve further extended it to allow for uploading metadata about the uploaded file after it is validated. This metadata is uploaded as a separate file. In the future, we may use the data package standard to describe these two files as a single data package. + +I am excited to see the new developments around goodtables-py 1.0 and beyond. It will be nice to eventually move our upload website to the new APIs. One possible area for improvement in the goodtables-web validator is better error messages when specific data values do not match constraints. I’ve imagined adding a custom “data_constraint_error” field to the Table Schema that would allow for friendlier errors, or perhaps dynamically generating such error messages using the constraints themselves. + + I think that this general structure — a validated table upload software — is very useful and could be used for a wide variety of applications. It may make sense to allow for even more easy customizations to the site. + +The extension to goodtables-web is open source and available [here](https://github.com/workforce-data-initiative/etp-uploader) with a demo also running at [http://send.dataatwork.org](http://send.dataatwork.org) \ No newline at end of file diff --git a/site/blog/2017-08-15-center-for-data-science-and-public-policy-workforce-data-initiative/chicago.png b/site/blog/2017-08-15-center-for-data-science-and-public-policy-workforce-data-initiative/chicago.png new file mode 100755 index 000000000..789df88b5 Binary files /dev/null and b/site/blog/2017-08-15-center-for-data-science-and-public-policy-workforce-data-initiative/chicago.png differ diff --git a/site/blog/2017-08-15-university-of-cambridge/README.md b/site/blog/2017-08-15-university-of-cambridge/README.md new file mode 100644 index 000000000..276fa2128 --- /dev/null +++ b/site/blog/2017-08-15-university-of-cambridge/README.md @@ -0,0 +1,191 @@ +--- +title: University of Cambridge - Retinal Mosaics +date: 2017-08-15 +author: Stephen Eglen (University 
of Cambridge), Dan Fowler (OKI) +tags: ["pilot"] +category: pilots +subject_context: Stephen Eglen is a Reader in Computational Neuroscience at the University of Cambridge. Together, we are trialling software for packaging and reading data to support computational techniques to investigate development of the nervous system. +image: /img/blog/cambridge.png +description: Investigating the applicability of the Data Package concept to facilitate data reuse in the field of Computational Neuroscience. +--- + +## Context + +### Problem We Were Trying To Solve + +Stephen Eglen was looking to investigate the applicability of the Data Package concept to facilitate data reuse in the field of Computational Neuroscience. The following figure describes the kind of data he is collecting. He was eventually seeking to get around 100–160 fields like this. Each circle is a cell drawn approximately to scale. The units of measurement are microns (micrometers). + +![ADBio](./camimage1.png) + +The basic data are quite simple: two columns of numbers that describe the center of each circle, where each circle represents a [retinal ganglion cell](https://en.wikipedia.org/wiki/Retinal_ganglion_cell). The metadata that add context to these data are as follows: the radius of each circle is 10um representing the average radius of this cell type; the dashed line represents the sample window which is the region of space within which the cells were sampled; the species from which the cells were sampled is a cat. + +The key question posed by the collection of such data in large quantities is “where to store all these metadata”. More formally, Stephen wanted a way to include the following metadata with sampled data: + + 1. Cell type: on center retinal ganglion cells + 2. Species: cat + 3. Radius of soma: 10 um + 4.
Citation to original paper where data were presented: Wässle H, Boycott BB, Illing RB (1981) Morphology and mosaic of on- and off-beta cells in the cat retina and some functional considerations. Proc R Soc Lond B Biol Sci 212:177–195. + 5. Unit of measurement: micron. + 6. (Optionally) A raw image from where the data were taken. e.g. http://www.damtp.cam.ac.uk/user/sje30/data/mosaics/w81_scan/w81all.png + +The long-term goal was to build a website/repository containing 100+ examples of these “retinal mosaics”. The website would allow people to view the mosaics, download the data files, or run some simple analyses online. + +## The Work + +### What Did We Do + +The Data Package specification is meant to be a container for data providing a consistent interface to tabular data. The specification outlines a number of different fields that can be stored within a descriptor file, `datapackage.json`. For this example, we can assign a title to this Data Package by setting the field `title` to “Example Retinal Mosaic”: + +`"title" : "Example Retinal Mosaic"` + +We can also, for instance, set a `homepage` for the dataset, which is a URL for the home on the web that is related to this Data Package. + +`"homepage": "http://www.damtp.cam.ac.uk/user/sje30/data/mosaics/w81_scan/"` + +Some of the other metadata Stephen required do not, however, map well to existing specified fields in a Data Package. For instance, as the Data Package is not specialized for storing biological data, there is no field for “species”. (The Digital Curation Centre maintains a database of domain-specific metadata standards for biology and other fields.)
The specification is intended to be quite flexible in these circumstances: + +>Adherence to the specification does not imply that additional, non-specified properties cannot be used: a descriptor MAY include any number of properties in addition to those described as required and optional properties […] This flexibility enables specific communities to extend Data Packages as appropriate for the data they manage. + +As an example, we stored the radius in a new field called `soma-radius` in the “root” level of the Data Package: + +`"soma-radius": 10` + +While there are many different ways in which this particular element of metadata could have been stored, this was a good start that would allow easy iteration. In storing this metadata in the datapackage.json, Data Package-aware tooling could read it with the data. + +For example, using the Data Package library (“datapkg”) written by ROpenSci, we can replace multiple calls to the data input functions (in this case, Stephen used the `read.table()` and `scan()` functions) with a single call to `datapkg_read()` and store the result in a new object, `rmdp`, which combines both metadata and data. + +```r
# install_github("frictionlessdata/datapackage-r")
library(datapkg)
rmdp <- datapkg_read()

# read soma radius from rmdp
soma.rad <- rmdp$`soma-radius`

# read other metadata from the same object
on <- rmdp$data$points
w <- rmdp$data$window

# plot
plot(on, type='n', asp=1, bty='n')
symbols(on, circles=rep(soma.rad, nrow(on)), add=TRUE, inch=FALSE, bg="red")
rect(w[[1]], w[[3]], w[[2]], w[[4]], lty=2)
```
 + +Below is the annotated datapackage.json with the metadata at the top followed by the resource information.
As a start, we have included information on cell type, species, and units in the description, but we can easily create new fields to store these values in a more structured way as we did with `soma-radius`: + +```json
{
  "name": "example-retinal-mosaic",
  "title": "Example Retinal Mosaic",
  "homepage": "http://www.damtp.cam.ac.uk/user/sje30/data/mosaics/w81_scan/",
  "image": "http://www.damtp.cam.ac.uk/user/sje30/data/mosaics/w81_scan/w81all.png",
  "description": "This is an example retinal mosaic Data Package.",
  "cell type": "on center",
  "soma-radius": 10
}
```
 + +I’ve used a `sources` array with a single source object: + +```json
{
  "sources": [{
    "name": "Wässle H, Boycott BB, Illing RB (1981) Morphology and mosaic of on- and off-beta cells in the cat retina and some functional considerations. Proc R Soc Lond B Biol Sci 212:177–195."
  }]
}
```
 + +The `resources` array in datapackage.json listed the files in the original datasets with `dialect` and `schema` information included. We named the first resource “points” with the filename set in `path`. Because it is a space-delimited tabular file without a header, we needed to pass that information to the `dialect` attribute so that `datapkg_read()` can read the file. The `schema` attribute specifies the `type` of all the values in the table (e.g. “number”) as well as additional `constraints` on the value. Stephen noted that you can’t have an x coordinate without the y, so we have set `required` to true for both fields. In addition, Stephen noted that the “window” rectangle is a simple validation on the data, so I have translated the x and y bounds of the window to constraining conditions on each column. We do understand that assessing the validity of such data can be more complicated, however.
+ +```json +{ + "resources": [ + { + "name": "points", + "path": "w81s1on.txt", + "dialect": { + "delimiter": "\t", + "header": false + }, + "schema": { + "fields": [ + { + "name": "x", + "type": "number", + "constraints": { + "required": true, + "minimum": 28.08, + "maximum": 778.08 + } + }, + { + "name": "y", + "type": "number", + "constraints": { + "required": true, + "minimum": 16.2, + "maximum": 1007.02 + } + } + ] + } + } + ] +} +``` + +For the completeness of the example, we have also added a resource for the “window” rectangle even though (a) we have already stored this data in the `constraints` key of the points matrix and (b) it is ideally stored as a vector not a data frame. The benefit of this method is the ability to load all the data files at once and reference them from a common datapackage. + +```json +{ + "name": "window", + "path": "w81s1w.txt", + "dialect": { + "delimiter": "\t", + "header": false + }, + "schema": { + "fields": [ + { + "name": "xmin", + "type": "number" + }, + { + "name": "xmax", + "type": "number" + }, + { + "name": "ymin", + "type": "number" + }, + { + "name": "ymax", + "type": "number" + } + ] + } +} +``` + +## Review + +### How Effective Was It + +The pilot tackled an interesting use case: providing a generic “home” for metadata related to an experiment, +in a way that is clear and easy to read without the overhead of a more advanced, domain-specific specification. +In a more complicated example, storing the metadata with the data for each sample---paired with a tool that could +read this metadata---could provide an “object-oriented” style of working with experimental data. + +We have not tried this out on multiple samples (this is forthcoming), so we don’t have much information yet on +the usefulness of this approach, but the exercise raised several important issues to potentially address with the +Data Package format: + + 1. 
Stephen’s request for a specified location for storing units in a structured way comes up often: https://github.com/frictionlessdata/specs/issues/216 + 2. More iterations, with a greater variety of data sources, would help in trialling this approach + 3. Stephen wanted to store a non-tabular data file (an image) with the tabular datasets that comprise his datasets. This is currently not allowed, but the subsequent definition of a Tabular Data Resource could pave the way for a method of specifying types of different resources and the kind of processing, validation or otherwise, that could be done with each. + +## Next Steps + +### Areas for future work + +Stephen now has about 100 retinal mosaics that might make for a nice use case of the Data Package. In addition, the Frictionless Data Tool Fund has funded the development of the next version of the R Data Package. This will make some of the improvements brought to the Data Package specifications in the past few months available in the R library. diff --git a/site/blog/2017-08-15-university-of-cambridge/cambridge.png b/site/blog/2017-08-15-university-of-cambridge/cambridge.png new file mode 100644 index 000000000..88e1053e1 Binary files /dev/null and b/site/blog/2017-08-15-university-of-cambridge/cambridge.png differ diff --git a/site/blog/2017-08-15-university-of-cambridge/camimage1.png b/site/blog/2017-08-15-university-of-cambridge/camimage1.png new file mode 100644 index 000000000..ef2d0bbd3 Binary files /dev/null and b/site/blog/2017-08-15-university-of-cambridge/camimage1.png differ diff --git a/site/blog/2017-09-28-zegami/README.md b/site/blog/2017-09-28-zegami/README.md new file mode 100644 index 000000000..a8a9cbef4 --- /dev/null +++ b/site/blog/2017-09-28-zegami/README.md @@ -0,0 +1,60 @@ +--- +title: Zegami +date: 2017-09-28 +tags: ["case-studies"] +category: case-studies +interviewee: Roger Noble and Andrew Stretton +subject_context: Zegami is using Frictionless Data specifications for data management and syntactic
analysis on their visual data analysis platform +image: /img/blog/zegami-logo.png +description: As a visual data exploration and analytics platform, Zegami makes the exploration of large collections of image rich information quick and simple. +author: Roger Noble and Andrew Stretton +--- + +[Zegami](https://www.zegami.com) makes information more visual and accessible, enabling intuitive exploration, search and discovery of large data sets. Zegami combines the power of machine learning and human pattern recognition to reveal hidden insights and new perspectives. + +![imagesearch](./zegami-2.gif)
*image search on Zegami* + +It provides a more powerful tool for visual data than what’s possible with spreadsheets or typical business intelligence tools. By presenting data within a single field of view, Zegami enables users to easily discover patterns and correlations, facilitating new insights and discoveries that would otherwise not be possible. + +![metadatasearch](./zegami-3.gif)
*metadata search on Zegami* + +For Zegami to shine, our users need to be able to easily import their data so they can get actionable insight with minimal fuss. In building an analytics platform we face the unique challenge of having to support a wide variety of data sources and formats. The challenge is compounded by the fact that the data we deal with is rarely clean. + +At the outset, we also faced the challenge of how best to store and transmit data between our components and micro-services. In addition to an open, extensible and simple yet powerful data format, we wanted one that can preserve data types and formatting, and be parsed by all the client applications we use, which includes server-side applications, web clients and visualisation frameworks. + +We first heard about messytables[^messytables] and of the data protocols site (currently Frictionless Data Specifications[^specs]) through a lightning talk at EuroSciPy 2015. This meant that when we searched for various things around jsontableschema (now tableschema[^tableschema]), we landed on the Frictionless Data project. + +We are currently using the specifications in the following ways: + +- We use tabulator.Stream[^tabulator] to parse data on our back end. +- We use schema infer from tableschema-py[^tableschemapy] to store an extended json table schema to represent data structures in our system. We are also developing custom json parsers using json paths and the ijson library. + +In the coming days, we plan on using:
- datapackage-pipelines[^dpp] as a spec for the way we treat joins and multi-step data operations in our system
- tabulator in a polyglot persistence scenario[^polyglot] - storing data in both storage buckets and either elasticsearch[^elasticsearch] or another column store like [druid.io](http://druid.io). + +![Diagram](./zegami-1.jpg) + +Moving forward, it would be interesting to see tableschema and tabulator as a communication protocol over websockets.
This would allow for a really smooth experience when using handsontable[^handsontable] spreadsheets with a datapackage of some kind. A socket-to-socket version of datapackage-pipelines, which runs on container orchestration systems, would also be interesting. There are a few protocols similar to datapackage-pipelines, such as Dask[^dask], which is not serialisable and therefore unsuitable for applications where front-end communication is necessary or where the pipelines need to be used by non-coders. + +We are also keen to know more about repositories around the world that use datapackages[^datapackage] so that we can import the data and show users and owners of those repositories the benefits of browsing and visualising data in Zegami. + +In terms of other potential use cases, it would be useful to create a Python-based alternative to the dreamfactory API server[^dreamfactory]. [wqio](http://wq.io/) is one example, but it is quite hard to use and a lighter version would be great. Perhaps CKAN[^ckan] datastore could be licensed in a more open way? + +In terms of the next steps for us, we are currently working on a SaaS implementation of Zegami which will dramatically reduce the effort required to start working with Zegami. We are then planning on developing a series of APIs so developers can create their own data transformation pipelines. One of our developers, Andrew Stretton, will be running Frictionless Data sessions at PyData London[^pydata] on Tuesday, October 3 and PyCon UK[^pyconuk] on Friday, October 27.
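The schema-inference step mentioned earlier can be approximated in a few lines of plain Python. This is only a toy sketch of the idea behind tableschema-py's `infer`: guess the narrowest type that every value in a column satisfies (the real implementation also handles booleans, dates, casting and much more, and the column names here are illustrative).

```python
# Toy sketch of Table Schema type inference: try the narrowest cast first
# and fall back to "string". tableschema-py's real `infer` covers far more.

def infer_type(values):
    """Guess a Table Schema type for a column of raw string values."""
    def all_cast(cast):
        for value in values:
            try:
                cast(value)
            except ValueError:
                return False
        return True
    if all_cast(int):
        return "integer"
    if all_cast(float):
        return "number"
    return "string"

def infer_schema(header, rows):
    """Build a minimal Table Schema descriptor from a header and sample rows."""
    columns = zip(*rows)  # transpose rows into columns
    return {
        "fields": [
            {"name": name, "type": infer_type(list(column))}
            for name, column in zip(header, columns)
        ]
    }

schema = infer_schema(["id", "score", "label"],
                      [["1", "3.5", "a"], ["2", "4", "b"]])
print([f["type"] for f in schema["fields"]])
# → ['integer', 'number', 'string']
```

Storing the inferred descriptor alongside the data is what makes the extended json table schema approach described above possible.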
+ +[^messytables]: Library for parsing messy tabular data: +[^specs]: Frictionless Data Specifications: [specs](https://specs.frictionlessdata.io/) +[^tableschema]: Table Schema: [https://specs.frictionlessdata.io/table-schema](https://specs.frictionlessdata.io/table-schema) +[^tabulator]:Tabulator: library for reading and writing tabular data +[^polyglot]: Polyglot Persistence: +[^tableschemapy]: Table Schema Python Library: +[^elasticsearch]: Elastic Search: +[^handsontable]: Handsontable: Javascript spreadsheet component for web apps: +[^dpp]: Data Package Pipelines: +[^dask]:Dask Custom Graphs: +[^datapackage]: Data Packages: [https://specs.frictionlessdata.io/data-package](https://specs.frictionlessdata.io/data-package) +[^dreamfactory]: Dream Factory: +[^ckan]: CKAN: Open Source Data Portal Platform: +[^pydata]: PyData London, October 2017 Meetup: +[^pyconuk]: PyCon UK 2017 Schedule: diff --git a/site/blog/2017-09-28-zegami/zegami-1.jpg b/site/blog/2017-09-28-zegami/zegami-1.jpg new file mode 100755 index 000000000..6c9751201 Binary files /dev/null and b/site/blog/2017-09-28-zegami/zegami-1.jpg differ diff --git a/site/blog/2017-09-28-zegami/zegami-2.gif b/site/blog/2017-09-28-zegami/zegami-2.gif new file mode 100755 index 000000000..2cf7a7510 Binary files /dev/null and b/site/blog/2017-09-28-zegami/zegami-2.gif differ diff --git a/site/blog/2017-09-28-zegami/zegami-3.gif b/site/blog/2017-09-28-zegami/zegami-3.gif new file mode 100755 index 000000000..6862bb2f4 Binary files /dev/null and b/site/blog/2017-09-28-zegami/zegami-3.gif differ diff --git a/site/blog/2017-09-28-zegami/zegami-logo.png b/site/blog/2017-09-28-zegami/zegami-logo.png new file mode 100644 index 000000000..9e4c1aa4e Binary files /dev/null and b/site/blog/2017-09-28-zegami/zegami-logo.png differ diff --git a/site/blog/2017-10-24-elife/README.md b/site/blog/2017-10-24-elife/README.md new file mode 100644 index 000000000..34f02f9a6 --- /dev/null +++ b/site/blog/2017-10-24-elife/README.md @@ 
-0,0 +1,102 @@ +--- +title: eLife +date: 2017-10-24 +author: Naomi Penfold (eLife), Adrià Mercader (OKI), and Jo Barratt (OKI) +tags: ["pilot"] +category: pilots +subject_context: Naomi Penfold is an Innovation Officer at eLife. Together, we explored use of goodtables library to validate all scientific research datasets hosted by eLife and make a case for open data reuse in the field of Life and BioMedical sciences. +image: /img/blog/elife-logo.png +description: Investigating the applicability of the goodtables library to facilitate data validation in the field of Life and Biomedical Sciences and make a case for the reusability of data shared with eLife as additional or supporting files. +--- + +## Context + +[eLife](https://elifesciences.org/) is a non-profit organisation with a mission to help scientists accelerate discovery by operating a platform for research communication that encourages and recognises the most responsible behaviours in science. eLife publishes important research in all areas of life and biomedical sciences. The research is selected and evaluated by working scientists and is made freely available to all readers. + +### Problem We Were Trying To Solve + +Having met at csv,conf,v3 in Portland in May 2017, eLife's [Naomi Penfold](https://www.youtube.com/watch?v=YYWNSWNq-do&list=PLg5zZXwt2ZW5UIz13oI56vfZjF6mvpIXN&index=27) and Open Knowledge International's [Adrià Mercader](https://www.youtube.com/watch?v=Gk2F4hncAgY&index=35&list=PLg5zZXwt2ZW5UIz13oI56vfZjF6mvpIXN) determined that eLife would be a good candidate for a Frictionless Data pilot. eLife has a strong emphasis on research data, and stood to benefit from the data validation service offered by Frictionless Data's goodtables. + + +## The Work +In order to assess the potential for a goodtables integration at eLife, we first needed to measure the quality of source data shared directly through eLife. 
+ 
+### Software + +To explore the data published on the eLife platform, we used the goodtables library[^gt-py]. Both the goodtables Python library and web service[^gtweb] were developed by Open Knowledge International to support the validation of tabular datasets both in terms of structure and also with respect to a published schema. You can read more about them [in this introductory blog post](http://okfnlabs.org/blog/2015/02/20/introducing-goodtables.html). + +### What Did We Do + +The first stage was to perform validation on all files made available through the eLife API in order to generate a report on data quality - this would allow us to understand the current state of eLife-published data and present the possibility of doing more exciting things with the data such as more comprehensive tests or visualisations. + +The process: + +* We downloaded a large subset of the article metadata made available via the eLife public API[^eLife-api]. +* We parsed all metadata files in order to extract the data files linked to each article, regardless of whether it was an additional file or a figure source. This gave us a direct link to each data file linked to the parent article. +* We then ran the validation process on each file, storing the resulting report for future analysis. + +All scripts used in the process as well as the outputs can be found in [our pilot repository](https://github.com/frictionlessdata/pilot-elife). + +Here are some high-level statistics for the process: + +We analyzed 3910 articles, 1085 of which had data files. The most common format was Microsoft Excel Open XML Format Spreadsheet (xlsx), with 89% of all 4318 files being published in this format. Older versions of Excel and CSV files made up the rest. + +![datasets analysed by eLife image](./elife1.png) *A summary of the eLife research articles analysed as part of the Frictionless Data pilot work* + +In terms of validation, more than 75% of the articles analyzed contained at least one invalid file.
Of course, “valid” is a relative term, defined by the tests that are set within goodtables, and the results need to be reviewed so that the checks performed can be adjusted. For instance, errors raised by blank rows are really common in Excel files, as people add a title on the first row, leaving an empty row before the data, or empty rows are detected at the end of the sheet. + +Other reported errors that might point to genuine problems included duplicated headers, extra headers, missing values and incorrectly formatted values (e.g. a date format instead of a gene name), to give just some examples. Here’s a summary of the raw number of errors encountered. For a more complete description of each error, see the Data Quality Spec[^dq-spec]: + +| Error Type | Count |
|-------------------|-------|
| Blank rows | 45748 |
| Duplicate rows | 9384 |
| Duplicate headers | 6672 |
| Blank headers | 2479 |
| Missing values | 1032 |
| Extra values | 39 |
| Source errors | 11 |
| Format errors | 4 | + +## Review + +### How Effective Was It + +Following analysis of a sample of the results, the vast majority of the errors appear to be due to the data being presented in nice-looking tables, using formatting to make particular elements more visually clear, as opposed to a machine-readable format: + +![example tables image shared by Naomi](./elife3.png) *Data from Maddox et al. was shared in a machine-readable format (top), and adapted here to demonstrate how such data are often shared in a format that looks nice to the human reader (bottom).
Source: Source data
The data file is presented as is and adapted from Maddox et al. eLife 2015;4:e04995 under the Creative Commons Attribution License (CC BY 4.0).* + +This is not limited to the academic field, of course, and the tendency to present data in spreadsheets so it is visually appealing is perhaps more prevalent in other areas.
This may be because consumers of the data are even less likely to have the data processed by machines, or because the data is collated by people with no experience of having to use it in their work. + +In general the eLife datasets were of better quality than, for instance, those created by government organisations, where structural issues like missing headers, extra cells, etc. are much more common. So although the results here have been good, the community could derive substantial benefit from researchers going that extra mile to make files more machine-friendly and embrace more robust data description techniques like Data Packages. + +Because these types of ‘errors’ are so common, we have introduced default `ignore blank rows` and `ignore duplicate rows` options in [our standalone validator](https://try.goodtables.io), since this helps bring more complex errors to the surface and focusses attention on the errors which may be less trivial to resolve. Excluding duplicate and blank rows as well as duplicate headers (the most common but also relatively simple errors), 6.4% (277/4318) of data files had errors remaining, affecting 10% of research articles (112/1085). + +Having said this, the relevance of these errors should not be underplayed as `blank rows`, `duplicate rows` and other human-centered formatting preferences can still result in errors that prevent machine readability. Although the errors were often minor and easy to fix in our case, these seemingly simple errors can be obstructive to anyone trying to reuse data in a more computational workflow. Computational analysis software such as R[^rlang] requires that columns hold variables and rows hold individual observations. + +Much rarer were errors related to difficulties retrieving and opening data files.
It was certainly helpful to flag articles with files that were not actually possible to open (source-error), and the eLife production team are resolving these issues. While only representing a small number of datasets, this is one key use case for goodtables: enabling publishers to regularly check continued data availability after publication. + +The use case for authors is clear — to identify how a dataset could be reshaped to make it reusable. However, this demands extra work if reshaping is left until the point of sharing the work. In fact, it is important that any issues are resolved before final publication, to avoid adding updated versions of publications/datasets. Tools that reduce this burden by making it easy to quickly edit a datafile to resolve the errors are of interest moving forward. In the meantime, it may be helpful to consider some key best practices as datasets are collected. + +Overall, the findings from this pilot demonstrate that there are different ways of producing data for sharing: datasets are predominantly presented in an Excel file with human aesthetics in mind, rather than structured for use by a statistical program. We found few issues with the data itself beyond presentation preferences. This is encouraging and is a great starting point for venturing forward with helping researchers to make greater use of open data. + +## Next Steps +### Areas for further work + +Libraries such as goodtables help to flag the gap between the current situation and the ideal situation, which is machine-readability. Most of the issues identified by goodtables in the datasets shared with eLife relate to structuring the data for human visual consumption: adding space around the table, merging header cells, etc. We encourage researchers to make data as easy to consume as possible, and recognise that datasets built primarily to look good to humans may only be sufficient for low-level reuse.
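The default `ignore blank rows` and `ignore duplicate rows` options mentioned earlier amount to a simple pre-processing pass over the parsed rows. Here is a plain-Python sketch of what that pass removes (the sample rows are invented, in the style of the human-formatted tables described above):

```python
# Sketch of the "ignore blank rows" / "ignore duplicate rows" defaults:
# drop fully blank rows and exact duplicates before running deeper checks.

def drop_blank_and_duplicate_rows(rows):
    seen = set()
    cleaned = []
    for row in rows:
        if all(cell.strip() == "" for cell in row):
            continue  # fully blank row, e.g. spacing under a title row
        key = tuple(row)
        if key in seen:
            continue  # exact duplicate of an earlier row
        seen.add(key)
        cleaned.append(row)
    return cleaned

rows = [
    ["A Nice Title", "", ""],   # title row typical of human-formatted sheets
    ["", "", ""],               # blank spacer row
    ["gene", "value", "unit"],  # the real header
    ["abc", "1.2", "um"],
    ["abc", "1.2", "um"],       # duplicate observation
]
print(drop_blank_and_duplicate_rows(rows))
# → [['A Nice Title', '', ''], ['gene', 'value', 'unit'], ['abc', '1.2', 'um']]
```

Filtering these common cases out first surfaces the less trivial errors, such as missing values or wrongly typed cells, which is why the standalone validator now enables this behaviour by default.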
+ +Moving forward, we are interested in tools and workflows that help to improve data quality earlier in the research lifecycle or make it easy to reshape at the point of sharing or reuse. + +## Find Out More +https://github.com/frictionlessdata/pilot-elife + +Parts of this post are [cross-posted](https://elifesciences.org/labs/b6de9fb0/data-reusability-a-pilot-with-goodtables) on eLife Labs[^elifelabs]. + +[^pasquetto]: Irene V. Pasquetto , Bernadette M. Randles, and Christine L. Borgman, On the Reuse of Scientific Data: +[^gtweb]: goodtables web service: +[^gt-py]: goodtables Python library: +[^csv]: csv,conf,v3: +[^eLife-api]: eLife Public API: +[^elife-repo]: eLife Frictionless Data pilot repository on Github: +[^dq-spec]: Data Quality Spec: +[^rlang]: R Programming Language: Popular open-source programming language and platform for data analysis: +[^elifelabs]: eLife Labs: diff --git a/site/blog/2017-10-24-elife/elife-logo.png b/site/blog/2017-10-24-elife/elife-logo.png new file mode 100644 index 000000000..1f8036f17 Binary files /dev/null and b/site/blog/2017-10-24-elife/elife-logo.png differ diff --git a/site/blog/2017-10-24-elife/elife1.png b/site/blog/2017-10-24-elife/elife1.png new file mode 100644 index 000000000..a125896bf Binary files /dev/null and b/site/blog/2017-10-24-elife/elife1.png differ diff --git a/site/blog/2017-10-24-elife/elife2.png b/site/blog/2017-10-24-elife/elife2.png new file mode 100644 index 000000000..fee3d9a86 Binary files /dev/null and b/site/blog/2017-10-24-elife/elife2.png differ diff --git a/site/blog/2017-10-24-elife/elife3.png b/site/blog/2017-10-24-elife/elife3.png new file mode 100644 index 000000000..f62f99271 Binary files /dev/null and b/site/blog/2017-10-24-elife/elife3.png differ diff --git a/site/blog/2017-10-24-georges-labreche/README.md b/site/blog/2017-10-24-georges-labreche/README.md new file mode 100644 index 000000000..9fe645a83 --- /dev/null +++ b/site/blog/2017-10-24-georges-labreche/README.md @@ -0,0 +1,30 @@ 
+---
+title: "Tool Fund Grantee: Georges Labrèche"
+date: 2017-10-24
+tags: ["grantee-profiles"]
+author: Georges Labreche
+category: grantee-profiles
+image: /img/blog/georges-labreche-image.png
+# description: Tool Fund Grantee - Java
+github: https://github.com/georgeslabreche
+twitter: https://twitter.com/georgeslabreche
+website: https://linkedin.com/in/georgeslabreche
+---
+
+This grantee profile features Georges Labrèche for our series of Frictionless Data Tool Fund posts, written to shine a light on Frictionless Data’s Tool Fund grantees, their work and to let our technical community know how they can get involved.
+
+
+
+I arrived in Kosovo from New York back in 2014 in order to conduct field research for my Master’s thesis in International Affairs: I was studying the distinct phenomenon of Digital State-Building, i.e. the use of online digital technologies to promote statehood. I didn’t pack much on my trip here but did bring along a lot of entrepreneurial drive to start a digital agency with strong elements of corporate social responsibility and tech community building. Initially, I had hoped to leverage my background as a Software Engineer to build a small service-oriented startup, but in light of Kosovo’s ongoing state-building processes and push for good governance and anti-corruption, I saw the opportunity to establish a civic-tech NGO, [Open Data Kosovo](https://opendatakosovo.org) (ODK), as a means of getting local techies to play an active part in state-building by applying their digital skills to increase government transparency and accountability.
+
+Work aside, I have a passion for continuous learning, so if you were to meet me I would probably steer the conversation towards what I recently learned on my latest online edX course.
My current deep-dives are around space, physics, astronautics and robotics, and you would likely find me happily struggling with my homework for online courses in these fields or getting excited about the next scheduled SpaceX launch in my spare time. I am also passionate about travel, particularly experiences that combine visits to UNESCO World Heritage sites, discovery of local cuisines, and hiking and mountain climbing in the great outdoors.
+
+I first heard about Frictionless Data from [Tin Geber](https://tin.fyi/) (formerly of The Engine Room). He contacted me directly with a link to the Frictionless Data Tool Fund grant and asked me to apply. A couple of days later, [Andrew Russell](https://fr.linkedin.com/in/andrew-russell-b4a4665), the UN Development Coordinator and UNDP Resident Representative in Kosovo, asked me about Frictionless Data and the Tool Fund grant on Twitter, and I have since had the opportunity to explain the concept behind Frictionless Data to several people.
+
+At first I was just really excited about using the already available Frictionless Data Python library for a procurement data importer we were working on for an Open Contracting Data Standard (OCDS) project. Here in Kosovo, my organization has liberated public procurement datasets that we’ve transformed into an open format, but without any strong or consistent data processing methodology. As I went through the specifications, it became clear to me that this was exactly what our procurement data liberation workflow was missing. I also wanted to do more than just use it: I wanted to contribute to it and make it more accessible to other developer communities, especially in Java, which I am proficient in.
+
+Data is messy and, for developers, cleaning and processing data from one project to another can quickly turn an awesome end-product idea into a burdensome chore.
Data packages and Frictionless Data tools and libraries are important because they allow developers to focus more on the end-product itself without having to worry about heavy lifting in the data processing pipeline.
+
+Members of programming communities are, as a whole, involved in infinitely diverse projects and problem-solving initiatives. Working with that diversity allows us to explore use cases that we would never have imagined when conceptualizing such libraries, and tapping into such an ecosystem of programmers would serve to enhance future versions of the libraries.
+
+All my work around extending the implementation of Frictionless Data libraries in Java will be available on Github in these two repositories: [datapackage-java](https://github.com/frictionlessdata/datapackage-java) and [tableschema-java](https://github.com/frictionlessdata/tableschema-java). Comments, forks and pull requests are welcome.
\ No newline at end of file
diff --git a/site/blog/2017-10-24-georges-labreche/georges-labreche-image.png b/site/blog/2017-10-24-georges-labreche/georges-labreche-image.png
new file mode 100755
index 000000000..83ab981e2
Binary files /dev/null and b/site/blog/2017-10-24-georges-labreche/georges-labreche-image.png differ
diff --git a/site/blog/2017-10-26-matt-thompson/README.md b/site/blog/2017-10-26-matt-thompson/README.md
new file mode 100644
index 000000000..820ef4a16
--- /dev/null
+++ b/site/blog/2017-10-26-matt-thompson/README.md
@@ -0,0 +1,29 @@
+---
+title: "Tool Fund Grantee: Matt Thompson"
+date: 2017-10-26
+tags: ["tool-fund"]
+author: Matt Thompson
+category: grantee-profiles
+image: /img/blog/matt-thompson-image.png
+github: https://github.com/cblop
+twitter: https://twitter.com/_mthom
+website: http://mthompson.org/
+---
+
+This grantee profile features Matt Thompson for our series of Frictionless Data Tool Fund posts, written to shine a light on Frictionless Data’s Tool Fund grantees, their work and to let our technical community know how they
can get involved.
+
+
+
+My name is Matt Thompson; I am from Bristol, UK, and work as a lecturer in Creative Computing at Bath Spa University. I have been involved in the Clojure community for a while, running the [Bristol Clojurians group](https://bristolclojurians.github.io) since 2014. I was involved in the [DM4T project](http://www.cs.bath.ac.uk/dm4t/) during my postdoc at Bath University, where we used Frictionless Data software to create metadata for large datasets recording domestic energy usage.
+
+We worked with Open Knowledge International’s Developer Advocate, Dan Fowler, on the DM4T project at Bath in a collaboration which turned into a soon-to-be-published pilot study for the project. [Dan showed us](https://github.com/frictionlessdata/pilot-dm4t) how the Frictionless Data software could allow us to quickly automate ways to annotate our datasets with metadata. We came away excited about the possibilities that the Frictionless Data software enables for the datasets we’re working with.
+
+When the [call for applications](https://blog.okfn.org/2017/03/01/announcing-the-frictionless-data-tool-fund/) for Frictionless Data’s Tool Fund was made in May 2017, I was already building tools in Clojure for working with Frictionless Data as part of DM4T, and I decided to apply to enable me to flesh them out into well-tested, documented libraries.
+
+The problem we had with the DM4T project was that the same kinds of data were being collected by many different projects run by different universities across the country. In addition to describing the energy usage of different appliances, the data also includes different types of readings (electricity usage, gas usage, humidity levels, temperature readings, etc.). Different projects store their data in different ways, with some using MySQL databases and others using CSV tables.
The Frictionless Data software allows us to create simple JSON files describing the metadata for each project in a uniform way that is easy for the collaborating researchers to understand and implement.
+
+Once datasets are put into the public domain, it is extremely useful to also have the metadata that describes them. This would enable people to, for example, run queries across multiple datasets. One example in our case would be to ask: “What was the energy usage of all homes in Exeter for January 2014?”. This information would be contained in datasets that are curated by different people, and so we need uniform metadata in order to be able to make these kinds of queries.
+
+We run Clojure events and workshops twice a month as part of the [Bristol Clojurians group](https://bristolclojurians.github.io), so interested people can drop in and discuss the work we’re doing with Frictionless Data. I’m also planning to give a talk about Frictionless Data at one of the Clojurian events.
+
+You can follow the development of the Clojure libraries on the [Clojure Data Package library](https://github.com/frictionlessdata/datapackage-clj) and [Clojure Table Schema library](https://github.com/frictionlessdata/tableschema-clj) Github repositories.
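For illustration, the "simple JSON files describing the metadata" mentioned above follow Frictionless Data's Table Schema specification. A minimal, hypothetical schema for an energy-usage table might look like the sketch below; the field names are invented here, not DM4T's actual ones:

```json
{
  "fields": [
    { "name": "timestamp", "type": "datetime", "description": "Time of the reading" },
    { "name": "appliance", "type": "string", "description": "Appliance being monitored" },
    { "name": "energy_kwh", "type": "number", "description": "Energy used, in kWh" }
  ]
}
```

With uniform schemas like this, cross-dataset queries such as the Exeter example become feasible, because every project's columns can be interpreted the same way.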
\ No newline at end of file
diff --git a/site/blog/2017-10-26-matt-thompson/matt-thompson-image.png b/site/blog/2017-10-26-matt-thompson/matt-thompson-image.png
new file mode 100644
index 000000000..ff4d808a1
Binary files /dev/null and b/site/blog/2017-10-26-matt-thompson/matt-thompson-image.png differ
diff --git a/site/blog/2017-10-27-open-knowledge-greece/README.md b/site/blog/2017-10-27-open-knowledge-greece/README.md
new file mode 100644
index 000000000..74da8c89a
--- /dev/null
+++ b/site/blog/2017-10-27-open-knowledge-greece/README.md
@@ -0,0 +1,35 @@
+---
+title: "Tool Fund Grantee: Open Knowledge Greece"
+date: 2017-10-27
+tags:
+author: Open Knowledge Greece
+category: grantee-profiles
+image: /img/blog/open-knowledge-greece-logo.png
+# description: Tool Fund Grantee - R
+github: https://github.com/okgreece
+twitter: https://twitter.com/okfngr
+website: http://okfn.gr/
+---
+
+This grantee profile features Open Knowledge Greece for our series of Frictionless Data Tool Fund posts, written to shine a light on Frictionless Data’s Tool Fund grantees, their work and to let our technical community know how they can get involved.
+
+
+
+[Open Knowledge Greece](http://okfn.gr/), formally appointed as the Greek Chapter of Open Knowledge International, was established in 2012 by a group of academics, developers, citizens, hackers and State representatives. We are supported by a national network of volunteers, most of whom are experienced professionals in the fields of Computer Science, Mathematics, Medicine, Journalism, Agriculture, etc.
+Our team consists of community members who are interested in open data, linked data technologies, coding and data journalism, and who do their best to apply scientific results to community activities.
+
+We were very excited when we read about [Frictionless Data](/) on the [Open Knowledge International blog](https://blog.okfn.org) and have been following the progress of the project carefully.
When we saw the Frictionless Data Tool Fund had been [announced](https://blog.okfn.org/2017/03/01/announcing-the-frictionless-data-tool-fund/), we were certain that we wanted to be part of this project and build tools that can help in removing the friction in working with data.
+
+People and organizations working with data are interested in analyzing, visualizing or building apps based on data, but they end up spending most of their time cleaning and preparing the data.
+
+Frictionless Data software and specifications aim to make data ready for use. There are a lot of open data repositories in different formats, and they often include dirty and missing data with null values, dates, currencies, units and identifiers which are represented differently and need effort to make them usable. Moreover, datasets from different sources have different licenses and are often not up to date, and these inconsistencies make including them in one analysis difficult.
+
+We received the Frictionless Data Tool Fund grant for the development of libraries in the R language. R is a powerful open source programming language and environment for statistical computing and graphics that is widely used among statisticians and data miners for developing statistical software and data analysis.
+
+We are going to implement two Frictionless Data libraries in R: [Table Schema](https://github.com/frictionlessdata/tableschema-r) for publishing and sharing tabular-style data and [Data Package](https://github.com/frictionlessdata/datapackage-r) for describing a coherent collection of data in a single package, following the Frictionless Data [specifications](https://specs.frictionlessdata.io/data-package/). Comments, forks, and pull requests are welcome in the two repositories.
+
+Users will be able to load a data package into R in seconds so that the data can be used for analysis and visualizations, invalid data can be fixed, and, in general, friction is removed from working with data, especially when shifting from one language to another. For example, you will be able to get and analyze a data package in R, which will in turn make visualizations in other graphical interfaces easier.
+
+We are also very delighted to be hosting the [Open Knowledge Festival](http://2018.okfestival.org/) (OKFest) in Thessaloniki, Greece from 3-6 May 2018. OKFest is expected to bring together over 1,500 people from more than 60 countries. During the Festival’s full four-day program, participants will work together to share their skills and experiences and build the very tools and partnerships that will further the power of openness as a positive force for change. Open Knowledge Greece CEO [Dr. Charalampos Bratsas](https://twitter.com/bratsas) and the rest of the team are very excited to host the biggest gathering of the open knowledge community in our country. Look out for OKFest updates on the website and on [Twitter](https://twitter.com/OKFestival).
+
+Interested in learning more about OK Greece? See our [Facebook](https://www.facebook.com/okfngreece/) page, read our [blog](http://okfn.gr/blog-magazine/) and watch our videos on [YouTube](https://www.youtube.com/channel/UCWk9gT45Pdgg2wUJg_bRxXg/).
diff --git a/site/blog/2017-10-27-open-knowledge-greece/open-knowledge-greece-logo.png b/site/blog/2017-10-27-open-knowledge-greece/open-knowledge-greece-logo.png
new file mode 100755
index 000000000..36cecfdee
Binary files /dev/null and b/site/blog/2017-10-27-open-knowledge-greece/open-knowledge-greece-logo.png differ
diff --git a/site/blog/2017-11-01-daniel-fireman/README.md b/site/blog/2017-11-01-daniel-fireman/README.md
new file mode 100644
index 000000000..a048e25c3
--- /dev/null
+++ b/site/blog/2017-11-01-daniel-fireman/README.md
@@ -0,0 +1,28 @@
+---
+title: "Tool Fund Grantee: Daniel Fireman"
+date: 2017-11-01
+tags:
+author: Daniel Fireman
+category: grantee-profiles
+image: /img/blog/daniel-fireman-image.jpg
+# description: Tool Fund Grantee - Go
+github: https://github.com/danielfireman
+twitter: https://twitter.com/daniellfireman
+website: https://linkedin.com/in/danielfireman/
+---
+
+This grantee profile features Daniel Fireman for our series of Frictionless Data Tool Fund posts, written to shine a light on Frictionless Data’s Tool Fund grantees, their work and to let our technical community know how they can get involved.
+
+
+
+I was born in [Maceió](https://www.google.com/search?site=&tbm=isch&source=hp&biw=1600&bih=783&q=Macei%C3%B3&oq=Macei%C3%B3&gs_l=img.3..0l7j0i30k1l3.707.4892.0.5214.9.7.0.0.0.0.245.904.0j1j3.4.0....0...1.1.64.img..5.4.903.0..35i39k1.p1SYqvZtcYw), a sunny coastal city in the Northeast of Brazil. It was still the 20th century when I had my first contact with an Intel 80386 and installed Conectiva Linux Guarani 3.0. A lot has happened since then: a bachelor’s degree in Computer Science at UFCG, for instance, after three years as a research assistant in the Distributed Systems Lab (LSD). It was already the 21st century when I realized that distributed and scalable systems were the way to go. I kept on studying the field and pursued an MSc at UFMG. From there I joined Google and spent 6 happy years working at multiple offices (NYC, ZRH, BHZ).
I got the chance to work on a myriad of projects, ranging from social networks to Google’s default Java HTTP/RPC server framework. Currently, I’m back at UFCG doing a Ph.D. in cloud computing performance. It is easy to find me at hackathons and other efforts to increase the transparency of public data. I have also been busy working on projects like [contratospublicos.info](http://www.madrid.org/cs/Satellite?pagename=PortalContratacion/Page/PCON_home) and Frictionless Data, using Go to improve data transparency in Brazil and around the world.
+
+I started following [Open Knowledge International (OKI) on Twitter](https://twitter.com/OKFN) after watching a talk by [Vitor Baptista](https://github.com/vitorbaptista) at UFCG. I learnt about Frictionless Data from posts by OKI and liked the overall idea a lot. I have been a Golang enthusiast for a while now, but I hadn’t thought of applying to the fund until I had a quick chat with [Nazareno Andrade](https://github.com/nazareno) that started with Golang and ended with: “what about the [Frictionless Data Tool Fund](https://toolfund.frictionlessdata.io/)?”
+
+Go has a lot to deliver in terms of combining simplicity of reading/writing, correctness, and performance. I believe bringing the experience and solid specifications of Frictionless Data to the Go ecosystem will not only make data description, validation and processing easier and faster, but also help to decrease the distance between data analysis/processing and production serving systems, resulting in simpler and more solid infrastructure.
+
+In the coming weeks, I hope to use the Tool Fund grant I received to bring Go’s performance and concurrency capabilities to data processing, and to deliver a set of tools distributed as standalone, multi-platform binaries which are very easy to download and install. I am currently working on my Ph.D., and one pitfall I have come across is the use of one environment/system to collect/generate data and another to process it.
I will be working to alleviate this issue in order to make it easier to process tabular data in Go.
+
+From the developer’s perspective, it is really great to use open source software. This is especially true when the community around the software fosters its usage and welcomes contributors. That ends up increasing the overall quality of the software, which benefits all users.
+
+The source code will be hosted at Github’s [tableschema-go](https://github.com/frictionlessdata/tableschema-go) and [datapackage-go](https://github.com/frictionlessdata/datapackage-go) repositories. We are going to use issues to track development progress and next steps.
\ No newline at end of file
diff --git a/site/blog/2017-11-01-daniel-fireman/daniel-fireman-image.jpg b/site/blog/2017-11-01-daniel-fireman/daniel-fireman-image.jpg
new file mode 100644
index 000000000..2047b0f31
Binary files /dev/null and b/site/blog/2017-11-01-daniel-fireman/daniel-fireman-image.jpg differ
diff --git a/site/blog/2017-12-04-openml/README.md b/site/blog/2017-12-04-openml/README.md
new file mode 100644
index 000000000..436ee9757
--- /dev/null
+++ b/site/blog/2017-12-04-openml/README.md
@@ -0,0 +1,40 @@
+---
+title: OpenML
+date: 2017-12-04
+tags: ["case-studies"]
+category: case-studies
+interviewee: Heidi Seibold, Joaquin Vanschoren
+subject_context: OpenML is an online platform that automatically organizes data sets, machine learning algorithms, and experiments into a coherent whole, connected to the people who created them.
+image: /img/blog/openml-logo.png
+description: OpenML is an online platform dedicated to creating an open, online ecosystem for machine learning
+---
+
+[OpenML](http://openml.org) is an online platform and service for machine learning, whose goal is to make machine learning and data analysis simple, accessible, collaborative and open, with an optimal division of labour between computers and humans.
People can upload and share data sets and questions (prediction tasks) on OpenML, which they then collaboratively solve using machine learning algorithms.
+
+[![](./openml-dashboard-intro.png)](https://www.youtube.com/embed/1N3qATxXrpE)
+*A brief introduction to OpenML*
+
+We offer [open source tools](https://www.openml.org/guide/api) to download data into your [favorite machine learning environments](https://www.openml.org/guide/integrations) and work with it. You can then upload your results back onto the platform so that others can learn from you. If you have data, you can use OpenML to get insights on what machine learning method works well to answer your question. Machine learners can use OpenML to find interesting data sets and questions that are relevant for others and also for machine learning research (e.g. learning how algorithms behave on different types of data sets).
+
+Users typically store their data in all kinds of formats, which makes it hard to simplify the data upload process on OpenML. Currently we only allow data in the ARFF format. We are looking to make it as easy as possible for users to upload, download and work with data from OpenML, while keeping the datasets in machine-readable formats and making metadata available in easy-to-read formats for our users. We would also like to make datasets from other services available on OpenML. Most of these external sources currently contain data in varied formats, but some, e.g. [data.world](https://data.world/), have started adopting and using [data packages](https://specs.frictionlessdata.io/data-package/). You can read more about data.world’s adoption and use of data packages [here](/blog/2017/04/11/dataworld/) and [here](https://meta.data.world/try-this-frictionless-data-world-ad36b6422ceb).
+
+[![](./openml-upload-data.png)](https://biteable.com/watch/upload-data-to-openml-1575659/4500a42627a119f548c7cb0ec3ec4a25ee8a576f)
+*Learn how to upload data on OpenML in 1 minute*
+
+We first heard about the Frictionless Data project through [School of Data](https://schoolofdata.org). One of the OpenML core members is also involved in School of Data and used data packages in one of the open data workshops from School of Data Switzerland. In the coming months, we are looking to adopt the [Frictionless Data specifications](https://specs.frictionlessdata.io/) to improve user friendliness on OpenML. We hope to make it possible for users to upload and connect datasets in the [data package format](https://specs.frictionlessdata.io/data-package/). This will be a great shift, because it would enable people to easily build and share machine learning models trained on any dataset in the Frictionless Data ecosystem.
+
+OpenML currently works with tabular data in the Attribute-Relation File Format ([ARFF](https://weka.wikispaces.com/ARFF+%28stable+version%29)), accompanied by metadata in an XML or JSON file. It is actually very similar to Frictionless Data’s [tabular data package](https://specs.frictionlessdata.io/tabular-data-package/) specification, but with ARFF instead of CSV.
+
+![](./openml-dataset-list.png)
+*Image of dataset list on OpenML*
+
+ARFF (Attribute-Relation File Format) is a CSV file with a header that lists the names of the attributes (columns) and their data types. The latter, especially, is very important for data analysis. For instance, say that you have a column with values 1, 2, 3. It is very important to know whether that is just a number (1, 2, 3 ice creams), a rank (1st, 2nd, 3rd place), or a category (item 1, item 2, item 3). This is missing from CSV data. ARFF also allows connecting multiple tables together, although we don’t really use this right now.
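To make the distinction concrete, here is a small, purely illustrative ARFF file (the relation and attribute names are invented, not a real OpenML dataset) showing the same values 1, 2, 3 declared as a number, a rank, and a category:

```text
% Hypothetical ARFF example. The same values 1, 2, 3 can mean very
% different things depending on the declared attribute type:
%   quantity - just a number (1, 2, 3 ice creams)
%   rank     - an ordered rank (1st, 2nd, 3rd place)
%   item     - a category (item 1, item 2, item 3)
@relation icecream_sales

@attribute quantity numeric
@attribute rank {1st, 2nd, 3rd}
@attribute item {item_1, item_2, item_3}

@data
1, 1st, item_1
2, 2nd, item_2
3, 3rd, item_3
```

The `@data` section is plain CSV; everything above it is the type information that a bare CSV file lacks.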
+
+![](./openml-dataset-overview.png)
+*Image of a dataset overview on OpenML*
+
+The metadata is free-form information about the dataset. It is mostly key-value data, although some values are more structured. It is stored in our database and exported to simple JSON or XML. [Here’s an example](https://www.openml.org/d/2/json). It covers basic information (textual description of the dataset, owner, format, license, etc.) as well as statistics (number of instances, number of features, number of missing values, details about the data distribution, and results of simple machine learning algorithms run on the data), and summary statistics (mainly used for the quick overview plots).
+
+We firmly believe that if data packages become the go-to specification for sharing data in scientific communities, accessibility to data that’s currently ‘hidden’ in data platforms and university libraries will improve vastly, and we are keen to adopt and use the specification on OpenML in the coming months.
+
+Interested in contributing to our quest to adopt the [data package specification](https://specs.frictionlessdata.io/data-package/) as an import and export option for data on the OpenML platform? [Start here](https://github.com/openml/OpenML/issues/482).
diff --git a/site/blog/2017-12-04-openml/openml-dashboard-intro.png b/site/blog/2017-12-04-openml/openml-dashboard-intro.png new file mode 100644 index 000000000..659b4caeb Binary files /dev/null and b/site/blog/2017-12-04-openml/openml-dashboard-intro.png differ diff --git a/site/blog/2017-12-04-openml/openml-dataset-list.png b/site/blog/2017-12-04-openml/openml-dataset-list.png new file mode 100644 index 000000000..b207adcb5 Binary files /dev/null and b/site/blog/2017-12-04-openml/openml-dataset-list.png differ diff --git a/site/blog/2017-12-04-openml/openml-dataset-overview.png b/site/blog/2017-12-04-openml/openml-dataset-overview.png new file mode 100644 index 000000000..3ca0fd9e6 Binary files /dev/null and b/site/blog/2017-12-04-openml/openml-dataset-overview.png differ diff --git a/site/blog/2017-12-04-openml/openml-logo.png b/site/blog/2017-12-04-openml/openml-logo.png new file mode 100644 index 000000000..904e91adf Binary files /dev/null and b/site/blog/2017-12-04-openml/openml-logo.png differ diff --git a/site/blog/2017-12-04-openml/openml-upload-data.png b/site/blog/2017-12-04-openml/openml-upload-data.png new file mode 100644 index 000000000..3d0c9e43d Binary files /dev/null and b/site/blog/2017-12-04-openml/openml-upload-data.png differ diff --git a/site/blog/2017-12-12-ukds/README.md b/site/blog/2017-12-12-ukds/README.md new file mode 100644 index 000000000..1ff7a421f --- /dev/null +++ b/site/blog/2017-12-12-ukds/README.md @@ -0,0 +1,272 @@ +--- +title: UK Data Service +date: 2017-12-12 +author: Brook Elgie (OKI), Paul Walsh (OKI) +tags: ["pilot"] +category: pilots +subject_context: The UK Data Service collection includes major UK government-sponsored surveys, cross-national surveys, longitudinal studies, UK census data, international aggregate, business data, and qualitative data. Together, we used Frictionless Data software to assess and report on data quality, and make a case for generating visualisations with ensuing data and metadata. 
+image: /img/blog/ukds-logo.png
+description: Using Frictionless Data software to assess and report on data quality and make a case for generating visualizations with ensuing data and metadata.
+---
+
+The UK Data Service, like many other research repository services, employs a range of closed source software solutions for the publication and consumption of research data. The data itself is often published in closed and proprietary data formats, and it is not always, nor purposefully, published in a way that enables data reuse.
+
+Based on an initial exploration of user need, we identified, together with the UK Data Service, the following areas for a Frictionless Data pilot:
+
+* Convert data and metadata to open formats using open source tools.
+* Use the Frictionless Data toolchain to assess and report on data quality (as a proxy for reusability).
+* Demonstrate the possibility of generating visualizations from source data and metadata, described with Frictionless Data specifications.
+* Host the data with all these attributes (open formats, reusable quality, visualized) on an open source platform for data.
+
+We worked with data that was publicly accessible, and therefore in its post-publication phase. This also informed the way we designed the work, as a set of connected processing and transport steps, very much outside of the publication process itself. While this was acceptable for the scope of the pilot, the real power of the approach we demonstrate here lies in integrating it with the pre-publication phases of data, via a combined automated and manually curated data process. Indeed, we can see via this pilot the potential to streamline the demonstrated workflow into a complete research data publication process, and we would welcome the opportunity to conduct one or more pilots that build on this approach, deeply integrated into pre-publication data workflows.
+
+## Context
+
+The UK Data Service offers an online repository where researchers can archive, publish and share research data, called [Reshare](http://reshare.ukdataservice.ac.uk/). Reshare exposes an [OAI-PMH](https://www.openarchives.org/pmh/) endpoint to facilitate metadata harvesting.
+
+[datahub.io](http://datahub.io/) is a data workflows web application built around the modular Frictionless Data toolchain, designed to find, share and publish high quality data online. Each entry has a ‘Showcase’ to display data package properties and preview data with tables and simple visualisations. As well as the Showcase, [datahub.io](http://datahub.io/) provides straightforward, direct access to import data into a variety of tools used by researchers: R, Pandas, Python, JavaScript, and SQL. [Frictionless Data Data Packages](https://specs.frictionlessdata.io/data-package/) can be pushed to datahub.io to create dataset entries.
+
+### Problem We Were Trying To Solve
+
+We wanted to investigate the use of the Data Package concept and Frictionless Data software to facilitate the reuse of data archived in Reshare.
+
+We were especially interested in trialling pipelines to automate data harvesting from UKDS into [datahub.io](http://datahub.io/) using Frictionless Data software such as [datapackage-pipelines](https://github.com/frictionlessdata/datapackage-pipelines), and in creating appropriate processors to translate widely used statistics file formats, such as [SPSS](https://www.ibm.com/analytics/us/en/technology/spss/), to text-based tabular data formats such as CSV.
+
+We chose the Data Package Pipelines library because it provides us with a well-tested and mature framework of established processors for working with tabular data from a variety of sources and formats. Custom processors can easily be added to extend pipeline functionality. Pipelines can be configured using a simple declarative specification.
Other tools supporting the underlying Frictionless Data specifications, such as [tableschema](https://github.com/frictionlessdata/tableschema-py/) and [goodtables](https://github.com/frictionlessdata/goodtables-py/), can be easily integrated as appropriate.
+
+In this pilot we trialled tools to:
+
+* automate data harvesting from UKDS to [datahub.io](http://datahub.io/) through a data package pipeline.
+* translate binary data formats (SPSS) to text-based tabular formats.
+* validate tabular data harvested from UKDS with goodtables.
+* fix or work around common data issues identified from the validation report, in the source-spec:
+  * correct file encoding
+  * skip non-data rows
+  * skip specified validation checks (duplicate-rows)
+  * specify header rows in csv files
+  * explicitly define tabular headers
+* trial the [datahub.io](http://datahub.io/) API with real-world data.
+* use the Showcase features of [datahub.io](http://datahub.io/) to provide instance data previews and visualisations.
+
+### The Work
+
+#### What Did We Do
+
+During the pilot, we focussed on creating a reusable pipeline of processors to harvest data and dataset metadata from the UKDS Reshare service, and output valid Data Packages with tabular resources. Each pipeline processor step was created as a separate module to facilitate testing and reuse in other similar pipelines.
+
+UKDS datasets were selected from [the UKDS list](http://reshare.ukdataservice.ac.uk/cgi/stats/report/most_popular_eprints). Entries were selected based on the data format we intended to write processors for (.csv, .tsv, .xls, or .sav), how the dataset might help demonstrate various aspects of the pipeline, and how well they might lend themselves to visualisation on [datahub.io](http://datahub.io/).
+ +Below is an outline of the pipeline flow from UKDS Reshare Archive to [datahub.io](http://datahub.io/) entry: + +![pipeline flow from UKDS Reshare Archive to datahub.io](./ukds-pipeline-flow.png) +*pipeline flow from UKDS Reshare Archive to [datahub.io](http://datahub.io/)* + +##### Specifying an Entry + +We wanted to ensure that each UKDS dataset to be maintained on [datahub.io](http://datahub.io/) could be easily configured to specify where to harvest its resource data and dataset metadata. We also wanted to add other configuration details to help customise the pipeline to work with tricky resources, and view specifications for subsequent visualisation on [datahub.io](http://datahub.io/). + +The source-spec for each Reshare entry defines a list of URLs for each resource in the dataset that we’re interested in harvesting, and the resource format (csv, tsv, xls, or spss). + +If an OAI ID is provided, it will be used to harvest dataset metadata from the Reshare OAI endpoint. + +As well as defining source locations, we also want to provide a way to customise downstream processor behaviour, to help work around potential resource issues. + +Below is an example yaml source-spec for two entries, demonstrating various configuration options. 
+
+```yaml
+entries:
+
+  civil-servant-survey: # entry name
+    source: # a list of sources
+      - url: http://reshare.ukdataservice.ac.uk/851401/10/Coded_SurveyData.csv
+        format: csv
+      - url: http://reshare.ukdataservice.ac.uk/851401/2/key%20%283%29.csv
+        format: csv
+    goodtables: # custom processor config for goodtables
+      skip_checks:
+        - duplicate-row
+    oai-id: 851401 # OAI ID to harvest dataset metadata
+
+  uk-gov-petitions:
+    source:
+      - url: http://reshare.ukdataservice.ac.uk/851614/1/gov_pet_metadata.tab
+        format: tsv
+    tabulator: # custom processor config for tabulator
+      encoding: utf-8 # explicitly define source file encoding
+      headers: # explicitly define missing column headers
+        - id
+        - title
+        - department
+        - starting
+        - closing
+    oai-id: 851614
+    views:
+      - views/petitions-view.json
+```
+
+Sources in the first entry, *civil-servant-survey*, contain duplicate rows, which would normally fail goodtables validation. Here we will allow the `duplicate-row` check to be skipped.
+
+The source in the second entry has the wrong character encoding and no headers declared. We can fix these issues to allow the pipeline to process the resource by explicitly specifying the file encoding, and declaring the column headers.
+
+#### Data Set and Resource Harvest
+
+We identified that Reshare has an OAI-PMH2-compatible endpoint for harvesting information about each Reshare data set, so we created a `ukds.add_oai_metadata` pipeline processor. OAI metadata is compatible with Dublin Core Elements, and the processor translates this into Data Package-compatible properties.
+
+Resources are added to the newly created Data Package and downloaded from the URLs defined in the yaml configuration. We support adding SPSS (.sav), CSV, TSV and XLS file formats.
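The translation performed by the OAI metadata step can be pictured as a small field mapping from Dublin Core elements to Data Package properties. The sketch below is purely illustrative — the `DC_TO_DATAPACKAGE` table and the `map_oai_to_datapackage` helper are hypothetical, not the processor's actual code:

```python
# Illustrative sketch: mapping harvested Dublin Core elements to Data Package
# properties. Hypothetical helper, not the actual ukds.add_oai_metadata code.

DC_TO_DATAPACKAGE = {
    "title": "title",              # dc:title -> package title
    "description": "description",  # dc:description -> package description
    "identifier": "homepage",      # dc:identifier (a URL) -> package homepage
}

def map_oai_to_datapackage(dc_record):
    """Build a minimal Data Package descriptor from a Dublin Core record."""
    descriptor = {"resources": []}  # resources are appended in a later step
    for dc_key, dp_key in DC_TO_DATAPACKAGE.items():
        if dc_key in dc_record:
            descriptor[dp_key] = dc_record[dc_key]
    return descriptor

record = {"title": "Civil Servant Survey",
          "identifier": "http://reshare.ukdataservice.ac.uk/851401/"}
print(map_oai_to_datapackage(record)["homepage"])
# -> http://reshare.ukdataservice.ac.uk/851401/
```

In the real pipeline, the descriptor would then flow on to the subsequent processors, which attach the downloaded resources.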
+
+To support the widely used SPSS format, we created an `spss.add_spss` processor that makes use of the [tableschema-spss](https://github.com/frictionlessdata/tableschema-spss-py) plugin to read SPSS files and create tableschema descriptors from them.
+
+#### Validation Reports and Common Issues
+
+To help ensure data quality, we want to validate the harvested tabular data before continuing the pipeline. We created a [`goodtables.validate` processor](https://github.com/frictionlessdata/datapackage-pipelines-goodtables), which will write a validation report for each resource. If a resource fails to validate against its schema, or has other data issues, the pipeline will fail. Errors can be identified from validation reports, fixed, and the pipeline re-run.
+
+Below are examples of issues revealed by validation that can occur when working with real-world data.
+
+##### “Au pairing after the au pair scheme”: specifying an xls sheet, and working around non-data rows
+
+The [“Au Pairing” dataset](http://reshare.ukdataservice.ac.uk/851656/) has a single .xls resource we’re interested in harvesting. The file contains four sheets, and we’re interested in the second one, which contains the data. So we specify our entry:
+
+```yaml
+au-pairing:
+  source:
+    - url: http://reshare.ukdataservice.ac.uk/851656/6/GumtreeAds_AuPairsAnalysis1.xls
+      format: xls
+  tabulator:
+    sheet: 2 # use sheet 2 in the file
+  oai-id: 851656
+```
+
+Notice we have indicated which sheet in the file to use.
+
+The data sheet has a single header row, but it also has this header row repeated at intervals throughout the data, presumably to aid the human reader when reviewing the data manually.
+
+![](./ukds-au-pairing-datasheet.png)
+*screengrab of the UKDS "Au Pairing" datasheet*
+
+For machine processing, this isn’t ideal.
In fact, it will fail our goodtables validation processor with the following (truncated) report:
+
+```yaml
+{
+  "time": 0.466,
+  "valid": false,
+  "error-count": 13,
+  "table-count": 1,
+  "tables": [
+    {
+      ...
+      "errors": [
+        {
+          "code": "duplicate-row",
+          "message": "Row 347 is duplicated to row(s) 236",
+          "row-number": 347,
+          "column-number": null,
+          "row": [
+            "happy/energetic/caring/loving outlook required",
+            "CV requested",
+            "gender specified",
+            "cooking",
+            ...
+          ]
+        },
+        ...
+      ]
+    }
+  ],
+  ...
+}
+```
+
+You can find the full report [here](https://gist.github.com/brew/8401e2875ec6d829baf95b79cd677e28).
+
+The report tells us there are 13 errors, and lists where they are. In this case they indicate that duplicate rows are present (the repeated header). This can either be fixed within Reshare, or we can add a parameter to our entry specification to skip each row that contains the duplicate header:
+
+```yaml
+au-pairing:
+  source:
+    - url: http://reshare.ukdataservice.ac.uk/851656/6/GumtreeAds_AuPairsAnalysis1.xls
+      format: xls
+  tabulator:
+    sheet: 2
+    skip_rows: [237, 292, 348, 402, 458, 511, 564, 618, 673, 726, 779, 832, 886, 937, 990]
+  goodtables:
+    skip_checks:
+      - duplicate-row
+  oai-id: 851656
+```
+
+Above we’ve added a `skip_rows` parameter with a list of row numbers to skip when generating the data package. We also instruct goodtables to skip the `duplicate-row` check.
+The output csv resource file will no longer contain rows with the duplicate header.
+
+##### “UK government petitions”: wrong file encoding, and specifying missing headers
+
+The “[gov.uk](http://gov.uk/) petitions” dataset has a TSV data file we’re interested in. However, it has been saved with the wrong character encoding, and attempting to open it may return an error or display some characters incorrectly.
+
+Additionally, there is no header row specified at the top of the file, so the resulting data package won’t have the correct header information in the resource’s schema.
+
+We can fix both of these issues in our entry specification:
+
+```yaml
+uk-gov-petitions:
+  source:
+    - url: http://reshare.ukdataservice.ac.uk/851614/1/gov_pet_metadata.tab
+      format: tsv
+  tabulator:
+    encoding: utf-8 # specify file encoding
+    headers: # define missing headers
+      - id
+      - title
+      - department
+      - starting
+      - closing
+  oai-id: 851614
+```
+
+Above, we have defined the character encoding we want to use when opening the file, and we’ve explicitly defined the headers to use. These headers will be added to the first row of the output csv resource file in the data package.
+
+We can also use the `headers` parameter to define which row contains header information. By default this is the first row. However, sometimes a data file will have the headers on a different row:
+
+![](./ukds-govt-petitions-datasheet.png)
+*screengrab of the UKDS "Government Petitions" datasheet*
+
+This example file has its headers in row three; other information and an empty row occupy the first two rows. We can tell our pipeline which row contains headers by specifying it in the entry configuration:
+
+```yaml
+example-entry:
+  source:
+    - url: http://www.newcastle.gov.uk/sites/drupalncc.newcastle.gov.uk/files/wwwfileroot/your-council/local_transparency/january_2012.csv
+      format: csv
+  tabulator:
+    headers: 3 # specifying which row contains headers
+```
+
+#### Add Data Package Views
+
+View specs can be added to the data package to enable [datahub.io](http://datahub.io/) to create visualisations from resource data in the data package. The `views` property is a list of file paths to json files containing [view-spec](https://specs.frictionlessdata.io/views/) compatible views.
+
+Currently, [datahub.io](http://datahub.io/) supports views written either with a ‘simple’ views-spec or using Vega (v2.6.5). See the [datahub.io docs](https://datahub.io/docs/features/views) for more details about the supported views-spec.
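For orientation, a minimal ‘simple’ view descriptor might look roughly like the following. This is a schematic sketch only: the field names are our reading of the views-spec and should be checked against the current spec and the datahub.io docs.

```python
import json

# Schematic sketch of a 'simple' view-spec, as might live in a file such as
# views/petitions-view.json. Illustrative field names, not an authoritative example.
view = {
    "name": "petitions-view",
    "title": "Petitions per department",
    "resources": ["gov_pet_metadata"],  # resource name(s) the view draws on
    "specType": "simple",
    "spec": {
        "type": "bar",
        "group": "department",  # x axis: group by this field
        "series": ["id"],       # one series per listed field
    },
}

# Serialise the descriptor as it would be written to the json file
print(json.dumps(view, indent=2))
```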
+
+#### Push to [datahub.io](http://datahub.io/)
+
+Once the harvesting pipeline has been run, the resulting data packages are pushed to [datahub.io](http://datahub.io/) using the [`datahub.dump.to_datahub`](https://github.com/datahq/datahub-cli) processor.
+
+This creates or updates an entry for the package on datahub. If a view has been defined in the entry configuration, this will be created on the [datahub.io](http://datahub.io/) entry Showcase page.
+
+### Review
+
+We were able to demonstrate that a data processing pipeline using Frictionless Data tools can automate the harvesting, validation and transformation of data, and its upload to a data package-compatible third-party service, based on a simple configuration.
+
+### Next Steps
+
+The pilot data package pipeline runs locally in a development environment, but given that each processor has been written as a separate module, these could be used within any pipeline. [datahub.io](http://datahub.io/) uses datapackage-pipelines within its infrastructure, and the processors developed for this project could be used within [datahub.io](http://datahub.io/) itself to facilitate the automatic harvesting of datasets from OAI-PMH-enabled data sources.
+
+Once a pipeline is in place, it can be scheduled to run each day (or week, month, etc.). This would ensure [datahub.io](http://datahub.io/) is kept up to date with data on UKDS Reshare.
+
+Working with ‘real-world’ data from UKDS Reshare has helped to identify and prioritise improvements and future features for [datahub.io](http://datahub.io/).
+
+### Additional Resources
+
+* [The main code repository for this pilot](https://github.com/frictionlessdata/pilot-ukds).
+* [A framework for processing data packages in pipelines of modular components](https://github.com/frictionlessdata/datapackage-pipelines).
+* [A Data Package Pipelines processor for SPSS file formats](https://github.com/frictionlessdata/datapackage-pipelines-spss).
+* [A Data Package Pipelines processor for validating tabular data using goodtables-py](https://github.com/frictionlessdata/datapackage-pipelines-goodtables).
+* [A Data Package Pipelines processor to push data packages to datahub.io](https://github.com/datahq/datapackage-pipelines-datahub). \ No newline at end of file diff --git a/site/blog/2017-12-12-ukds/ukds-au-pairing-datasheet.png b/site/blog/2017-12-12-ukds/ukds-au-pairing-datasheet.png new file mode 100644 index 000000000..5678c9683 Binary files /dev/null and b/site/blog/2017-12-12-ukds/ukds-au-pairing-datasheet.png differ diff --git a/site/blog/2017-12-12-ukds/ukds-govt-petitions-datasheet.png b/site/blog/2017-12-12-ukds/ukds-govt-petitions-datasheet.png new file mode 100644 index 000000000..2671584dc Binary files /dev/null and b/site/blog/2017-12-12-ukds/ukds-govt-petitions-datasheet.png differ diff --git a/site/blog/2017-12-12-ukds/ukds-logo.png b/site/blog/2017-12-12-ukds/ukds-logo.png new file mode 100644 index 000000000..9d80310cc Binary files /dev/null and b/site/blog/2017-12-12-ukds/ukds-logo.png differ diff --git a/site/blog/2017-12-12-ukds/ukds-pipeline-flow.png b/site/blog/2017-12-12-ukds/ukds-pipeline-flow.png new file mode 100644 index 000000000..61b36239c Binary files /dev/null and b/site/blog/2017-12-12-ukds/ukds-pipeline-flow.png differ diff --git a/site/blog/2017-12-15-university-of-pittsburgh/README.md b/site/blog/2017-12-15-university-of-pittsburgh/README.md new file mode 100644 index 000000000..79547d7c8 --- /dev/null +++ b/site/blog/2017-12-15-university-of-pittsburgh/README.md @@ -0,0 +1,119 @@
+---
+title: Western Pennsylvania Regional Data Center
+date: 2017-12-15
+author: Adrià Mercader (OKI)
+tags: ["pilot"]
+category: pilots
+subject_context: In this pilot study, we set out to showcase a possible implementation that expounds on quality and description of datasets in CKAN-based open data portals.
The Western Pennsylvania Regional Data Center is part of the University of Pittsburgh Center for Urban and Social Research.
+image: /img/blog/uop-logo.jpg
+description: Using the ckanext-validation extension to highlight quality of datasets in CKAN-based open data portals.
+---
+
+## Context
+
+One of the main goals of the Frictionless Data project is to help improve data quality by providing easy-to-integrate libraries and services for data validation. We have integrated data validation seamlessly with different backends like GitHub and Amazon S3 via the online service [goodtables.io](https://goodtables.io/), but we also wanted to explore closer integrations with other platforms.
+
+An obvious choice for that is Open Data portals. They are still one of the main forms of dissemination of Open Data, especially for governments and other organizations. They provide a single entry point to data relating to a particular region or thematic area and provide users with tools to discover and access different datasets. On the backend, publishers also have tools available for the validation and publication of datasets.
+
+Data quality varies widely across different portals, reflecting the publication processes and requirements of the hosting organizations. In general, it is difficult for users to assess the quality of the data, and there is a lack of descriptors for the actual data fields. At the publisher level, while strong emphasis has been put on metadata standards and interoperability, publishers don’t generally have the same help or guidance when dealing with data quality or description.
+
+We believe that data quality in Open Data portals can have a central place on both these fronts, user-centric and publisher-centric, and we started this pilot to showcase a possible implementation.
+
+To field test our implementation we chose the [Western Pennsylvania Regional Data Center](https://www.wprdc.org) (WPRDC), managed by the [University of Pittsburgh Center for Urban and Social Research](http://ucsur.pitt.edu/). The Regional Data Center made for a good pilot as the project team takes an agile approach to managing their own CKAN instance, with support from OpenGov, a member of the CKAN association. As the open data repository is used by a diverse array of data publishers (including project partners Allegheny County and the City of Pittsburgh), the Regional Data Center provides a good test case for the implementation across a variety of data types and publishing processes. WPRDC is a great example of a well-managed Open Data portal, where datasets are actively maintained and the portal itself is just one component of a wider Open Data strategy. It also provides a good variety of publishers, including public sector agencies, academic institutions, and nonprofit organizations. The project’s partnership with the Digital Scholarship Services team at the University Library System also provides data management expertise not typically available in many open data implementations.
+
+## The Work
+
+### What Did We Do
+
+The portal software that we chose for this pilot is [CKAN](https://ckan.org), the world's leading open source software for Open Data portals ([source](https://github.com/jalbertbowden/open-library/blob/master/lib/d2.1-state-of-the-art-report-and-evaluation-of-existing-open-data-platforms-2015-01-06-route-to-pa.pdf)). Open Knowledge International initially fostered the CKAN project and is now a member of the [CKAN Association](https://ckan.org/about/association/).
+
+We created [ckanext-validation](https://github.com/frictionlessdata/ckanext-validation), a CKAN extension that provides a low-level API and readily available features for data validation and reporting that can be added to any CKAN instance.
This is powered by [goodtables](https://github.com/frictionlessdata/goodtables-py), a library developed by Open Knowledge International to support the validation of tabular datasets.
+
+The extension allows users to perform data validation against any tabular resource, such as CSV or Excel files. This generates a report that is stored against a particular resource, describing issues found with the data, both at the structural level (missing headers, blank rows, etc.) and at the data schema level (wrong data types, values out of range, etc.).
+
+![](./ckanext-validation.png)
+*data validation on CKAN made possible by ckanext-validation extension*
+
+This provides a good overview of the quality of the data, not only to users but also to publishers, who can improve the quality of the data file by addressing these issues. The reports can be easily accessed via badges that provide a quick visual indication of the quality of the data file.
+
+![](./data-validity-badges.png)
+*badges indicating quality of data files on CKAN*
+
+There are two default modes for performing the data validation when creating or updating resources. Data validation can be performed automatically in the background asynchronously, or as part of the dataset creation in the user interface. In the latter case the validation will be performed immediately after uploading or linking to a new tabular file, giving quick feedback to publishers.
+
+![](./data-validation-on-upload.png)
+*data validation on upload or linking to a new tabular file on CKAN*
+
+The extension adds functionality to provide a [schema](https://specs.frictionlessdata.io/table-schema/) for the data that describes the expected fields and types as well as other constraints, allowing validation to be performed on the actual contents of the data. Additionally, the schema is stored with the resource metadata, so it can be displayed in the UI or accessed via the API.
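To make the schema-level checks concrete, here is a deliberately tiny sketch of the kind of type validation that a Table Schema enables. It is a toy stand-in for illustration only, not goodtables or the extension's actual code:

```python
# Toy illustration of schema-based cell validation, in the spirit of
# Table Schema + goodtables. Not the actual extension code.

CASTS = {"integer": int, "number": float, "string": str}

def validate_row(row, fields):
    """Return a list of error messages for one row, checked against a field list."""
    errors = []
    for value, field in zip(row, fields):
        try:
            CASTS[field["type"]](value)
        except ValueError:
            errors.append("%s: %r is not of type %s"
                          % (field["name"], value, field["type"]))
    return errors

fields = [{"name": "id", "type": "integer"},
          {"name": "amount", "type": "number"}]
print(validate_row(["1", "9.99"], fields))  # -> []
print(validate_row(["x", "9.99"], fields))  # one error, for the 'id' field
```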
+
+The extension also provides some utility commands for CKAN maintainers, including the generation of [reports](https://github.com/frictionlessdata/ckanext-validation#data-validation-reports) showing the number of valid and invalid tabular files, a breakdown of the error types and links to the individual resources. This gives maintainers a snapshot of the general quality of the data hosted in their CKAN instance at any given moment in time.
+
+As mentioned before, we field tested the validation extension on the Western Pennsylvania Regional Data Center (WPRDC). At the time of the import the portal hosted 258 datasets. Of these, 221 datasets had tabular resources, totalling 626 files (mainly CSV and XLSX files). Bearing in mind that we performed only the default validation, which includes just structural checks (i.e. not schema-based ones), these are the results:
+
+>466 resources - validation success
+
+>156 resources - validation failure
+
+>4 resources - validation error
+
+The errors found are due to current limitations in the validation extension with large files.
+
+Here’s a breakdown of the formats:
+
+| | Valid resources | Invalid / Errored resources |
+|:----:|:---------------:|:---------------------------:|
+| CSV | 443 | 64 |
+| XLSX | 21 | 57 |
+| XLS | 2 | 39 |
+
+And of the error types (more information about each error type can be found in the [Data Quality Specification](https://github.com/frictionlessdata/data-quality-spec/blob/master/spec.json)):
+
+| Type of Error | Error Count |
+|------------------|-------------|
+| Blank row | 19654 |
+| Duplicate row | 810 |
+| Blank header | 299 |
+| Duplicate header | 270 |
+| Source error | 30 |
+| Extra value | 11 |
+| Format error | 9 |
+| HTTP error | 2 |
+| Missing value | 1 |
+
+By far the largest number of errors is caused by blank and duplicate rows.
These are generally caused by Excel adding extra rows at the end of the file or by publishers formatting the files for human rather than machine consumption. Examples of this include adding a title in the first cell (like in this case: [portal page](https://data.wprdc.org/dataset/046e5b6a-0f90-4f8e-8c16-14057fd8872e/resource/b4aa617d-1cb8-42d0-8eb6-b650097cf2bf) | [file](https://data.wprdc.org/dataset/046e5b6a-0f90-4f8e-8c16-14057fd8872e/resource/b4aa617d-1cb8-42d0-8eb6-b650097cf2bf/download/30-day-blotter-data-dictionary.xlsx)) or even more complex layouts ([portal page](https://data.wprdc.org/dataset/9c4eab3b-e05d-4af8-ad18-76e4c1a71a74/resource/21a032e9-6345-42b3-b61e-10de29280946) | [file](https://data.wprdc.org/dataset/9c4eab3b-e05d-4af8-ad18-76e4c1a71a74/resource/21a032e9-6345-42b3-b61e-10de29280946/download/permitsummaryissuedmarch2015.xlsx)), with logos and links. Blank and duplicate header errors, as in this case ([portal page](https://data.wprdc.org/dataset/543ae03d-3ef4-45c7-b766-2ed49338120f/resource/f587d617-7afa-4e79-8010-c0d2bdff4c04) | [file](https://data.wprdc.org/dataset/543ae03d-3ef4-45c7-b766-2ed49338120f/resource/f587d617-7afa-4e79-8010-c0d2bdff4c04/download/opendata-citiparks---summer-meal-sites-2015.csv)), are also normally caused by Excel storing extra empty columns (something that cannot be noticed directly in Excel).
+
+These errors are easy to spot and fix manually once the file has been opened for inspection, but this is still an extra step that data consumers need to perform before using the data in their own processes. They are also errors that could easily be fixed automatically as part of a data clean-up step before publication. Perhaps this is something that could be developed in the validation extension in the future.
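Such an automatic clean-up step could be as simple as dropping all-blank rows and trimming trailing columns that are empty in every row. Below is a hedged stdlib sketch of the idea — a hypothetical illustration, not a feature of ckanext-validation:

```python
def clean_table(rows):
    """Drop all-blank rows, then trim trailing columns that are empty in every row."""
    rows = [r for r in rows if any(cell.strip() for cell in r)]
    if not rows:
        return rows
    # Widest index of a non-blank cell across all remaining rows
    width = max(
        max((i + 1 for i, cell in enumerate(r) if cell.strip()), default=0)
        for r in rows
    )
    return [r[:width] for r in rows]

table = [
    ["id", "name", ""],
    ["1", "Alice", ""],
    ["", "", ""],   # blank row, e.g. left behind by Excel
    ["2", "Bob", ""],
]
print(clean_table(table))  # -> [['id', 'name'], ['1', 'Alice'], ['2', 'Bob']]
```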
+
+Other less common errors include Source errors, which cover anything that prevented the file from being read by goodtables, like encoding issues, or HTTP responses and HTML files incorrectly marked as Excel files (like in this case: [portal page](https://data.wprdc.org/dataset/9c4eab3b-e05d-4af8-ad18-76e4c1a71a74/resource/9ea45609-e3b0-445a-8ace-0addb973fdf5) | [file](https://data.wprdc.org/dataset/9c4eab3b-e05d-4af8-ad18-76e4c1a71a74/resource/9ea45609-e3b0-445a-8ace-0addb973fdf5/download/plipublicwebsitemonthlysummaryaugust2017.xls)). Extra value errors are generally caused by not properly quoting fields that contain commas, thus breaking the parser (example: [portal page](https://data.wprdc.org/dataset/3130f583-9499-472b-bb5a-f63a6ff6059a/resource/12d9e6e1-3657-4cad-a430-119d34b1a5b2) | [file](https://data.wprdc.org/dataset/3130f583-9499-472b-bb5a-f63a6ff6059a/resource/12d9e6e1-3657-4cad-a430-119d34b1a5b2/download/crashdatadictionary.csv)).
+
+Format errors are caused by incorrectly labelling the format of the hosted file, for instance CSV when it links to an Excel file ([portal page](https://data.wprdc.org/dataset/669b2409-bb4b-46e5-9d91-c36876b58a17/resource/e919ecd3-bb11-4883-a041-bded25dc651c) | [file](https://data.wprdc.org/dataset/669b2409-bb4b-46e5-9d91-c36876b58a17/resource/e919ecd3-bb11-4883-a041-bded25dc651c/download/2016-cveu-inspections.xlsx)), CSV linking to HTML ([portal page](https://data.wprdc.org/dataset/libraries/resource/14babf3f-4932-4828-8b49-3c9a03bae6d0) | [file](https://wprdc-maps.carto.com/u/wprdc/builder/1142950f-f054-4b3f-8c52-2f020e23cf78/embed)) or XLS linking to XLSX ([portal page](https://data.wprdc.org/dataset/40188e1c-6d2e-4f20-9391-607bd3054949/resource/cf0617a1-b950-4aa7-a36d-dc9da412ddf7) | [file](https://data.wprdc.org/dataset/40188e1c-6d2e-4f20-9391-607bd3054949/resource/cf0617a1-b950-4aa7-a36d-dc9da412ddf7/download/transportation.xls)). These are all easily fixed at the metadata level.
+
+Finally, HTTP errors just show that the linked file hosted elsewhere does not exist or has been moved.
+
+Again, it is important to stress that the checks performed are just [basic and structural checks](https://github.com/frictionlessdata/goodtables-py#validation) that affect the availability of the file and its general structure. The addition of standardized schemas would allow for a more thorough and precise validation, checking the data contents and ensuring that this is what was expected.
+
+Also, it is worth noting that WPRDC follows the excellent practice of publishing data dictionaries describing the contents of the data files. These are generally published in CSV format and can themselves present validation errors. As we saw before, using the validation extension we can assign a schema defined in the Table Schema spec to a resource. This will be used during the validation, but the information could also be used to render it nicely in the UI or export it consistently as a CSV or PDF file.
+
+All the generated reports can be further analyzed using the output files stored [in this repository](https://github.com/frictionlessdata/pilot-wprdc).
+
+Additionally, to help browse the validation reports created from the WPRDC site we have set up a demo site that mirrors the datasets, organizations and groups hosted there (at the time we did the import).
+
+All tabular resources have the validation report attached, which can be accessed by clicking on the data valid / invalid badges.
+
+## Next Steps
+
+### Areas for further work
+
+The validation extension for CKAN currently provides a very basic workflow for validation at creation and update time: if the validation fails in any way, you are not allowed to create or edit the dataset.
Maintainers can define a set of default validation options to make it more permissive, but even so, some publishers probably wouldn’t want to enforce all validation checks before allowing the creation of a dataset, or would want to apply validation only to datasets from a particular organization or of a particular type. Of course the [underlying API](https://github.com/frictionlessdata/ckanext-validation#action-functions) is available for extension developers to implement these workflows, but the validation extension itself could provide some of them.
+
+The user interface for defining the validation options can definitely be improved, and we are planning to integrate a [Schema Creator](https://github.com/frictionlessdata/ckanext-validation/issues/10) to make it easier for publishers to describe their data with a schema based on the actual fields in the file. If the resource has a schema assigned, this information can be presented nicely in the UI and exported in different formats.
+
+The validation extension is a first iteration to demonstrate the capabilities of integrating data validation directly into CKAN, but we are keen to hear about different ways in which this could be expanded or integrated into other workflows, so any feedback or thoughts are appreciated.
+ +### Additional Resources + +* Check the [full documentation](https://github.com/frictionlessdata/ckanext-validation/blob/master/README.md#how-it-works) for ckanext-validation, covering all details on how to install it and configure it, features and available API + +* Source material: + * [ckanext-validation codebase](https://github.com/frictionlessdata/ckanext-validation) + * [Western Pennsylvania Regional Data Center Github repository](https://github.com/frictionlessdata/pilot-wprdc) \ No newline at end of file diff --git a/site/blog/2017-12-15-university-of-pittsburgh/ckanext-validation.png b/site/blog/2017-12-15-university-of-pittsburgh/ckanext-validation.png new file mode 100644 index 000000000..8c0771941 Binary files /dev/null and b/site/blog/2017-12-15-university-of-pittsburgh/ckanext-validation.png differ diff --git a/site/blog/2017-12-15-university-of-pittsburgh/data-validation-on-upload.png b/site/blog/2017-12-15-university-of-pittsburgh/data-validation-on-upload.png new file mode 100644 index 000000000..bed2a6c87 Binary files /dev/null and b/site/blog/2017-12-15-university-of-pittsburgh/data-validation-on-upload.png differ diff --git a/site/blog/2017-12-15-university-of-pittsburgh/data-validity-badges.png b/site/blog/2017-12-15-university-of-pittsburgh/data-validity-badges.png new file mode 100644 index 000000000..f8208c258 Binary files /dev/null and b/site/blog/2017-12-15-university-of-pittsburgh/data-validity-badges.png differ diff --git a/site/blog/2017-12-15-university-of-pittsburgh/uop-logo.jpg b/site/blog/2017-12-15-university-of-pittsburgh/uop-logo.jpg new file mode 100644 index 000000000..00f027bcf Binary files /dev/null and b/site/blog/2017-12-15-university-of-pittsburgh/uop-logo.jpg differ diff --git a/site/blog/2017-12-19-dm4t/README.md b/site/blog/2017-12-19-dm4t/README.md new file mode 100644 index 000000000..13f76f59d --- /dev/null +++ b/site/blog/2017-12-19-dm4t/README.md @@ -0,0 +1,474 @@ +--- +title: Data Management for TEDDINET +date: 
2017-12-19
+author: Julian Padget (DM4T), Dan Fowler (OKI), Evgeny Karev (OKI), Paul Walsh (OKI), Jo Barratt (OKI)
+tags: ["pilot"]
+category: pilots
+subject_context: Open Knowledge International and Data Management for TEDDINET project (DM4T) have worked together on a proof-of-concept pilot using Frictionless Data specifications to address some of the data management challenges faced by DM4T.
+image: /img/blog/dm4t.png
+description: Frictionless Data specifications for data legacy
+---
+
+# DM4T Pilot
+
+## Pilot Name
+Data Management for TEDDINET (DM4T)
+
+## Authors
+Julian Padget (DM4T), Dan Fowler (OKI), Evgeny Karev (OKI)
+
+## Field
+Energy Data
+
+## FD Tech Involved
+- Frictionless Data specs: http://specs.frictionlessdata.io/
+- Data Package Pipelines: https://github.com/frictionlessdata/datapackage-pipelines
+- Goodtables: https://github.com/frictionlessdata/goodtables-py
+
+`packagist` has now moved to [create.frictionlessdata.io](http://create.frictionlessdata.io)
+
+## Context
+
+### Problem We Were Trying To Solve
+
+Open Knowledge International and the Data Management for TEDDINET project (DM4T) agreed to work together on a proof-of-concept pilot to attempt to use Frictionless Data approaches to address some of the data legacy issues facing the TEDDINET project, a research network addressing the challenges of transforming energy demand in our buildings as a key component of the transition to an affordable, low carbon energy system. The problem as described on the DM4T website:
+
+>The Engineering and Physical Sciences Research Council (EPSRC), the UK's main agency for funding research in engineering and the physical sciences, funded 22 projects over two calls in 2010 and 2012 to investigate ‘Transforming Energy Demand through Digital Innovation’ (TEDDI) as a means to find out how people use energy in homes and what can be done to reduce energy consumption.
A lot of data is being collected at different levels of detail in a variety of housing throughout the UK, but the level of detail are largely defined by the needs of each individual project. At the same time, the Research Councils UK (RCUK) are defining guidelines for what happens to data generated by projects they fund which require researchers to take concrete actions to store, preserve, and document their data for future reference.
+
+>The problem, however, is that there is relatively little awareness, limited experience and only emerging practice of how to incorporate data management into much of physical science research. This is in contrast to established procedures for data formats and sharing in the biosciences, stemming from international collaboration on the Human Genome Project, and in the social sciences, where data from national surveys, including census data, have been centrally archived for many years. Consequently, current solutions may be able to meet a minimal interpretation of the requirements, but not effectively deliver the desired data legacy.
+
+The DM4T group selected three suitable datasets on which to base this work and provided domain knowledge to ensure the pilot is applicable to real use cases.
+
+Output was tracked here: https://github.com/frictionlessdata/pilot-dm4t/issues
+
+## The work
+
+We will use the `refit-cleaned` dataset to show the capabilities of the Frictionless Data specs and software. For this work, we limited the size of the dataset to keep the showcase to a reasonable running time. The Frictionless Data software is, however, designed to scale well, and this process could be reproduced for the whole dataset. It is worth noting, though, that processing speed could be a bottleneck for research work on datasets this big.
+ +### REFIT: Electrical Load Measurements (Cleaned) + +> Link to the dataset: https://github.com/frictionlessdata/pilot-dm4t/tree/delivery/datasets/refit-cleaned + +For each house in the study, this dataset consists of granular readings of electrical load. There were 20 houses in total, and each house had a different mix of devices plugged into the electrical load sensor. The dataset was distributed as a zipped file (~500MB) containing 20 CSVs with a combined ~120 million rows. + +``` +Time,Unix,Aggregate,Appliance1,Appliance2,Appliance3,Appliance4,Appliance5,Appliance6,Appliance7,Appliance8,Appliance9 +2013-10-09 13:06:17,1381323977,523,74,0,69,0,0,0,0,0,1 +2013-10-09 13:06:31,1381323991,526,75,0,69,0,0,0,0,0,1 +2013-10-09 13:06:46,1381324006,540,74,0,68,0,0,0,0,0,1 +2013-10-09 13:07:01,1381324021,532,74,0,68,0,0,0,0,0,1 +2013-10-09 13:07:15,1381324035,540,74,0,69,0,0,0,0,0,1 +2013-10-09 13:07:18,1381324038,539,74,0,69,0,0,0,0,0,1 +2013-10-09 13:07:30,1381324050,537,74,0,69,0,0,0,0,0,1 +2013-10-09 13:07:32,1381324052,537,74,0,69,0,0,0,0,0,1 +2013-10-09 13:07:44,1381324064,548,74,0,69,0,0,0,0,0,1 +``` + +Given that these datasets were already provided in well structured CSV files, it was straightforward to translate the data dictionary found in the dataset’s README into the relevant fields in the datapackage.json. We did not need to alter the CSVs that comprise the dataset. + +### Creating a data package using Datapackage Pipelines + +> Link to the Datapackage Pipelines project: https://github.com/frictionlessdata/datapackage-pipelines + +Datapackage Pipelines is a framework for declarative stream-processing of tabular data. It is built upon the concepts and tooling of the Frictionless Data project. The basic concept in this framework is the pipeline. A pipeline has a list of processing steps, and it generates a single data package as its output. Pipelines are defined in a declarative way, not in code. 
One or more pipelines can be defined in a `pipeline-spec.yaml` file. This file specifies the list of processors (referenced by name) and their execution parameters.
+
+One of the main purposes of the Frictionless Data project is data containerization. It means that instead of having two separate sources of knowledge about the data (the data files and a textual README), we're going to put both of them into a container based on the `Data Package` specification. This allows us to:
+
+- Ensure that the dataset description is shipped with the data files
+- Provide column data type information to allow type validation
+- Use the Frictionless Data tooling for reading and validating datasets
+- Enable usage of other software which supports Frictionless Data specifications
+
+First, we used the `datapackage-pipelines` library to create a data package from the raw dataset. We need a declarative file called `pipeline-spec.yaml` to describe the data transformation steps:
+
+> pipeline-spec.yaml
+
+```yaml
+refit-cleaned:
+  pipeline:
+    - run: add_metadata
+      parameters:
+        name: refit-electrical-load-measurements
+        title: 'REFIT: Electrical Load Measurements'
+        license: CC-BY-4.0
+        description: Collection of this dataset was supported by the Engineering and Physical Sciences Research Council (EPSRC) via the project entitled Personalised Retrofit Decision Support Tools for UK Homes using Smart Home Technology (REFIT), which is a collaboration among the Universities of Strathclyde, Loughborough and East Anglia. The dataset includes data from 20 households from the Loughborough area over the period 2013 - 2015. Additional information about REFIT is available from www.refitsmarthomes.org.
+        sources:
+          -
+            title: 'REFIT: Electrical Load Measurements (Cleaned)'
+            web: 'https://pure.strath.ac.uk/portal/en/datasets/refit-electrical-load-measurements-cleaned(9ab14b0e-19ac-4279-938f-27f643078cec).html'
+            email: researchdataproject@strath.ac.uk
+    - run: add_resource
+      parameters:
+        name: 'house-1'
+        url: 'datasets/refit-cleaned/House_1.csv'
+        format: csv
+
+    # Other resources are omitted
+
+    - run: stream_remote_resources
+    - run: set_types
+      parameters:
+        resources: "house-[0-9]{1,2}"
+        types:
+          "Time":
+            type: datetime
+            format: "fmt:%Y-%m-%d %H:%M:%S"
+          Unix:
+            type: integer
+          Aggregate:
+            type: integer
+          "Appliance[1-9]":
+            type: integer
+    - run: processors.modify_descriptions
+      parameters:
+        resources: house-1
+        descriptions:
+          Appliance1:
+            description: Fridge
+          Appliance2:
+            description: Chest Freezer
+          Appliance3:
+            description: Upright Freezer
+          Appliance4:
+            description: Tumble Dryer
+          Appliance5:
+            description: Washing Machine
+          Appliance6:
+            description: Dishwasher
+          Appliance7:
+            description: Computer Site
+          Appliance8:
+            description: Television Site
+          Appliance9:
+            description: Electric Heater
+
+    # Other resources are omitted
+
+    - run: dump.to_path
+      parameters:
+        out-path: packages/refit-cleaned
+```
+
+The process contains these steps:
+- Create the data package metadata
+- Add all data files from the disk
+- Stream the resources into the data package
+- Update resource descriptions using a custom processor
+- Save the data package to the disk
+
+Now we're ready to run this pipeline:
+
+```bash
+$ dpp run ./refit-cleaned
+```
+
+After this step we have a data package containing a descriptor:
+
+> packages/refit-cleaned/datapackage.json
+
+```json
+{
+  "bytes": 1121187,
+  "count_of_rows": 19980,
+  "description": "Collection of this dataset was supported by the Engineering and Physical Sciences Research Council (EPSRC) via the project entitled Personalised Retrofit Decision Support Tools for UK Homes using Smart Home Technology
(REFIT), which is a collaboration among the Universities of Strathclyde, Loughborough and East Anglia. The dataset includes data from 20 households from the Loughborough area over the period 2013 - 2015. Additional information about REFIT is available from www.refitsmarthomes.org.",
+  "hash": "433ff35135e0a43af6f00f04cb8e666d",
+  "license": "CC-BY-4.0",
+  "name": "refit-electrical-load-measurements",
+  "resources": [
+    {
+      "bytes": 55251,
+      "count_of_rows": 999,
+      "dialect": {
+        "delimiter": ",",
+        "doubleQuote": true,
+        "lineTerminator": "\r\n",
+        "quoteChar": "\"",
+        "skipInitialSpace": false
+      },
+      "dpp:streamedFrom": "datasets/refit-cleaned/House_1.csv",
+      "dpp:streaming": true,
+      "encoding": "utf-8",
+      "format": "csv",
+      "hash": "ad42fbf1302cabe30e217ff105d5a7fd",
+      "name": "house-1",
+      "path": "data/house-1.csv",
+      "schema": {
+        "fields": [
+          {
+            "format": "%Y-%m-%d %H:%M:%S",
+            "name": "Time",
+            "type": "datetime"
+          },
+          {
+            "name": "Unix",
+            "type": "integer"
+          },
+          {
+            "name": "Aggregate",
+            "type": "integer"
+          },
+          {
+            "description": "Fridge",
+            "name": "Appliance1",
+            "type": "integer"
+          },
+          {
+            "description": "Chest Freezer",
+            "name": "Appliance2",
+            "type": "integer"
+          },
+          {
+            "description": "Upright Freezer",
+            "name": "Appliance3",
+            "type": "integer"
+          },
+          {
+            "description": "Tumble Dryer",
+            "name": "Appliance4",
+            "type": "integer"
+          },
+          {
+            "description": "Washing Machine",
+            "name": "Appliance5",
+            "type": "integer"
+          },
+          {
+            "description": "Dishwasher",
+            "name": "Appliance6",
+            "type": "integer"
+          },
+          {
+            "description": "Computer Site",
+            "name": "Appliance7",
+            "type": "integer"
+          },
+          {
+            "description": "Television Site",
+            "name": "Appliance8",
+            "type": "integer"
+          },
+          {
+            "description": "Electric Heater",
+            "name": "Appliance9",
+            "type": "integer"
+          }
+        ]
+      }
+    },
+
+  # Other resources are omitted
+
+  ]
+}
+```
+
+And a list of data files linked in the descriptor:
+
+```bash
+$ ls packages/refit-cleaned/data
+house-10.csv
house-13.csv house-17.csv house-1.csv house-2.csv house-5.csv house-8.csv
+house-11.csv house-15.csv house-18.csv house-20.csv house-3.csv house-6.csv house-9.csv
+house-12.csv house-16.csv house-19.csv house-21.csv house-4.csv house-7.csv
+```
+
+### Validating a data package using Goodtables
+
+Goodtables is a software family for tabular data validation. It's available as a Python library, a command-line tool, a [web application](https://try.goodtables.io/) and a [continuous validation service](https://goodtables.io/).
+
+The main features of Goodtables are:
+
+- Structural checks: Ensure that there are no empty rows, no blank headers, etc.
+- Content checks: Ensure that the values have the correct types ("string", "number", "date", etc.), that their format is valid ("string must be an e-mail"), and that they respect the constraints ("age must be a number greater than 18").
+- Support for multiple tabular formats: CSV, Excel files, LibreOffice, Data Package, etc.
+- Parallelized validations for multi-table datasets
+
+Because we provided data types for the columns at the wrapping stage, here we validate both the data structure and compliance with the data types using the Goodtables command-line interface:
+
+```bash
+$ goodtables packages/refit-cleaned/datapackage.json
+DATASET
+=======
+{'error-count': 0,
+ 'preset': 'datapackage',
+ 'table-count': 20,
+ 'time': 4.694,
+ 'valid': True}
+```
+
+### Modifying a data package using Packagist
+
+If we need to modify our data package, we can use [Packagist](https://create.frictionlessdata.io/). It provides a straightforward UI to modify and validate data package descriptors.
With its easy-to-use interface we are able to:
+
+- Load/validate/save a data package
+- Update data package metadata
+- Add/remove/modify data package resources
+- Add/remove/modify data resource fields
+- Set type/format for data values
+
+![Packagist UI](./packagist.png)
+
+In the figure above we have loaded the `refit-cleaned` data package into the Packagist UI to make changes to the data package as needed.
+
+### Publishing a data package to Amazon S3
+
+> Link to the published package: https://s3.eu-central-1.amazonaws.com/pilot-dm4t/pilot-dm4t/packages/refit-cleaned/datapackage.json
+
+In this section we will show how data packages can be moved from one data storage system to another. This is possible because the data has been containerised.
+
+One important feature of the `datapackage-pipelines` project is that it works as a conveyor. We can push our data package not only to the local disk but also to other destinations, for example to Amazon S3:
+
+> pipeline-spec.yaml
+
+```yaml
+refit-cleaned:
+
+  # Initial steps are omitted
+
+  - run: aws.dump.to_s3
+    parameters:
+      bucket: pilot-dm4t
+      path: pilot-dm4t/packages/refit-cleaned
+```
+
+Running the pipeline again:
+
+```bash
+$ dpp run ./refit-cleaned
+```
+
+And now our data package is published to the Amazon S3 remote storage:
+
+![screenshot of S3 storage](https://i.imgur.com/5Z7EPDR.png)
+
+### Getting insight from data using Python libraries
+
+> Link to the demonstration script: https://github.com/frictionlessdata/pilot-dm4t/blob/delivery/scripts/refit-cleaned.py
+
+The Frictionless Data project provides Python libraries (along with libraries for 8 other languages) to work with data packages programmatically.
We used the `datapackage` library to analyse the `refit-cleaned` data package:
+
+```python
+import datetime
+import statistics
+from datapackage import Package
+
+# Get aggregates
+consumption = {}
+package = Package('packages/refit-cleaned/datapackage.json')
+for resource in package.resources:
+    for row in resource.iter(keyed=True):
+        hour = row['Time'].hour
+        consumption.setdefault(hour, [])
+        consumption[hour].append(row['Aggregate'])
+
+# Get averages
+for hour in consumption:
+    consumption[hour] = statistics.mean(consumption[hour])
+
+# Print results
+for hour in sorted(consumption):
+    print('Average consumption at %02d hours: %.0f' % (hour, consumption[hour]))
+```
+
+Now we can run it from the command line:
+
+```bash
+$ python scripts/refit-cleaned.py
+Average consumption at 00 hours: 232
+Average consumption at 01 hours: 213
+Average consumption at 02 hours: 247
+Average consumption at 03 hours: 335
+Average consumption at 04 hours: 215
+Average consumption at 05 hours: 690
+Average consumption at 06 hours: 722
+Average consumption at 07 hours: 648
+Average consumption at 08 hours: 506
+Average consumption at 09 hours: 464
+Average consumption at 10 hours: 364
+Average consumption at 11 hours: 569
+Average consumption at 12 hours: 520
+Average consumption at 13 hours: 497
+Average consumption at 14 hours: 380
+Average consumption at 15 hours: 383
+Average consumption at 16 hours: 459
+Average consumption at 17 hours: 945
+Average consumption at 18 hours: 733
+Average consumption at 19 hours: 732
+Average consumption at 20 hours: 471
+Average consumption at 21 hours: 478
+Average consumption at 22 hours: 325
+Average consumption at 23 hours: 231
+```
+
+Here we were able to get the averages for electricity consumption grouped by hour.
We could have achieved this in different ways, but using the Frictionless Data specs and software provides some important advantages:
+
+- Because the data is wrapped into a data package, we were able to validate it and read it already converted to the correct types (e.g. native Python `datetime` objects). No need for any kind of string parsing.
+- The Frictionless Data software uses file streams under the hood. This means that only the current row is kept in memory, so we're able to handle datasets bigger than the available RAM.
+
+### Exporting data to an ElasticSearch cluster
+
+> Link to the export script: https://github.com/frictionlessdata/pilot-dm4t/blob/delivery/scripts/refit-cleaned.py
+
+The Frictionless Data software provides plugins to export data to various backends such as SQL, BigQuery, etc. We will export the first resource from our data package for future analysis:
+
+```python
+from elasticsearch import Elasticsearch
+from datapackage import Package
+from tableschema_elasticsearch import Storage
+
+# Get resource
+package = Package('packages/refit-cleaned/datapackage.json')
+resource = package.get_resource('house-1')
+
+# Create storage
+engine = Elasticsearch()
+storage = Storage(engine)
+
+# Write data
+storage.create('refit-cleaned', [('house-1', resource.schema.descriptor)])
+list(storage.write('refit-cleaned', 'house-1', resource.read(keyed=True), ['Unix']))
+```
+
+Now we are able to check that our documents are indexed:
+
+```bash
+$ http http://localhost:9200/_cat/indices?v
+```
+
+### Getting insight from data using Kibana
+
+To demonstrate how the Frictionless Data specs and software empower the usage of other analytics tools, we will use the ElasticSearch/Kibana stack. In the previous step we imported our data package into an ElasticSearch cluster.
It allows us to visualize the data using a simple UI:
+
+![screenshot of elasticsearch cluster](https://i.imgur.com/Fm373F4.png)
+
+In this screenshot we see the distribution of the average electricity consumption. This is just an example of what you can do by having the ability to easily load datasets into other analytical software.
+
+## Review
+
+### The results
+
+In this pilot, we have been able to demonstrate the following:
+
+- Packaging the `refit-cleaned` dataset as a data package using the Data Package Pipelines library
+- Validating the data package using the Goodtables library
+- Modifying data package metadata using the Packagist UI
+- Uploading the dataset to Amazon S3 and an ElasticSearch cluster using Frictionless Data tools
+- Reading and analysing the created data package in Python using the Frictionless Data library
+
+### Current limitations
+
+The central challenge of working with these datasets is their size. Publishing the results of these research projects as flat files for immediate analysis is beneficial; however, the scale of each of these datasets (gigabytes of data, millions of rows) is a challenge to deal with no matter how you store it. Processing this data through Data Package pipelines takes a long time.
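One mitigation for the memory side of this problem (an illustrative sketch, not part of the pilot's codebase): the per-hour lists accumulated in the analysis script earlier can be replaced by an incremental mean, which keeps a constant-size accumulator per group no matter how many readings stream through:

```python
import statistics

class RunningMean:
    """Incremental mean: O(1) memory per group instead of storing every reading."""

    def __init__(self):
        self.count = 0
        self.mean = 0.0

    def add(self, x):
        self.count += 1
        self.mean += (x - self.mean) / self.count

# Hypothetical Aggregate readings for a single hour
readings = [523, 526, 540, 532, 540]
rm = RunningMean()
for x in readings:
    rm.add(x)

# The incremental result matches the batch computation
print(round(rm.mean, 6), round(statistics.mean(readings), 6))  # 532.2 532.2
```

Combined with the row streaming already used by the tooling, this keeps whole-dataset aggregation bounded in memory even at the multi-gigabyte scale described above.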
+
+### Next Steps
+
+- Improve the speed of the data package creation step
+
+### Find Out More
+
+- https://github.com/frictionlessdata/pilot-pnnl
+
+### Source Material
+
+- https://app.hubspot.com/sales/2281421/deal/146418008
+- https://discuss.okfn.org/c/working-groups/open-archaeology
+- https://github.com/frictionlessdata/pilot-open-archaeology \ No newline at end of file diff --git a/site/blog/2017-12-19-dm4t/dm4t.png b/site/blog/2017-12-19-dm4t/dm4t.png new file mode 100644 index 000000000..c99093e95 Binary files /dev/null and b/site/blog/2017-12-19-dm4t/dm4t.png differ diff --git a/site/blog/2017-12-19-dm4t/packagist.png b/site/blog/2017-12-19-dm4t/packagist.png new file mode 100644 index 000000000..b614c51c6 Binary files /dev/null and b/site/blog/2017-12-19-dm4t/packagist.png differ diff --git a/site/blog/2018-02-14-creating-tabular-data-packages-in-r/README.md b/site/blog/2018-02-14-creating-tabular-data-packages-in-r/README.md new file mode 100644 index 000000000..0092ba3cf --- /dev/null +++ b/site/blog/2018-02-14-creating-tabular-data-packages-in-r/README.md @@ -0,0 +1,181 @@ +---
+title: Creating Data Packages in R
+date: 2018-02-14
+tags: ["R"]
+author: Kleanthis Koupidis
+description: A guide on how to create a data package with R
+category: working-with-data-packages
+---
+
+[Open Knowledge Greece][okgreece] was one of 2017's [Frictionless Data Tool Fund][toolfund] grantees tasked with extending the implementation of core Frictionless Data libraries in the R programming language. You can read more about this in [their grantee profile][toolfund-okgreece]. In this tutorial, [Kleanthis Koupidis](https://gr.linkedin.com/in/kleanthis-koupidis-8348b88b), a Data Scientist and Statistician at Open Knowledge Greece, explains how to create Data Packages in R.
+
+## Creating Data Packages in R
+
+This tutorial will show you how to install the R library for working with Data Packages and Table Schema, load a CSV file, infer its schema, and write a Tabular Data Package.
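If you have not installed the library yet, it can be installed from GitHub via devtools, as shown in the companion "Using Data Packages in R" tutorial (an internet connection is assumed):

```r
# Install the devtools package if it is not already available
install.packages("devtools")

# Install the development version of datapackage.r from GitHub
devtools::install_github("frictionlessdata/datapackage-r")
```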
+
+## Load
+
+For this tutorial, we will need the Data Package R library ([datapackage.r](https://github.com/frictionlessdata/datapackage-r)).
+You can start using the library by loading `datapackage.r`.
+
+```r
+  library(datapackage.r)
+```
+
+You can add useful metadata by adding keys to the metadata dict attribute. Below, we are adding the required `name` key as well as a human-readable `title` key. For the keys supported, please consult the full [Data Package spec][dp]. Note that we will create the required `resources` key further below.
+
+```r
+  dataPackage = Package.load()
+  dataPackage$descriptor['name'] = 'period-table'
+  dataPackage$descriptor['title'] = 'Periodic Table'
+  # commit the changes to Package class
+  dataPackage$commit()
+
+  ## [1] TRUE
+```
+
+## Infer a CSV Schema
+
+We will use the periodic-table data from this [remote path](https://raw.githubusercontent.com/frictionlessdata/datapackage-r/9eed05d1710fd69a0cb74f7941c7f142563f571b/vignettes/example_data/data.csv)
+
+| atomic.number | symbol | name      | atomic.mass | metal.or.nonmetal.   |
+|---------------|--------|-----------|-------------|----------------------|
+| 1             | H      | Hydrogen  | 1.00794     | nonmetal             |
+| 2             | He     | Helium    | 4.002602    | noble gas            |
+| 3             | Li     | Lithium   | 6.941       | alkali metal         |
+| 4             | Be     | Beryllium | 9.012182    | alkaline earth metal |
+| 5             | B      | Boron     | 10.811      | metalloid            |
+| 6             | C      | Carbon    | 12.0107     | nonmetal             |
+| 7             | N      | Nitrogen  | 14.0067     | nonmetal             |
+| 8             | O      | Oxygen    | 15.9994     | nonmetal             |
+| 9             | F      | Fluorine  | 18.9984032  | halogen              |
+| 10            | Ne     | Neon      | 20.1797     | noble gas            |
+
+We can guess our CSV's [schema][ts] by using `infer` from the Table Schema library. We pass the remote link directly to the `infer` function, the result of which is an inferred schema. For example, if the processor detects only integers in a given column, it will assign `integer` as a column type.
+
+```r
+  filepath = 'https://raw.githubusercontent.com/okgreece/datapackage-r/master/vignettes/exampledata/data.csv'
+
+  schema = tableschema.r::infer(filepath)
+```
+
+Once we have a schema, we are ready to add a `resources` key to the Data Package that points to the resource path and its newly created schema. Below we define the resources in three ways: using JSON text format, using R list objects with the usual assignment operator, and directly using the `addResource` function of the `Package` class:
+
+```r
+  # define resources using json text
+  resources = helpers.from.json.to.list(
+    '[{
+      "name": "data",
+      "path": "filepath",
+      "schema": "schema"
+    }]'
+  )
+  resources[[1]]$schema = schema
+  resources[[1]]$path = filepath
+
+  # or define resources using list object
+  resources = list(list(
+    name = "data",
+    path = filepath,
+    schema = schema
+  ))
+```
+
+And now, add the resources to the Data Package:
+
+```r
+  dataPackage$descriptor[['resources']] = resources
+  dataPackage$commit()
+
+  ## [1] TRUE
+```
+
+Or you can add resources directly using the `addResource` function of the `Package` class:
+
+```r
+  resources = list(list(
+    name = "data",
+    path = filepath,
+    schema = schema
+  ))
+
+  dataPackage$addResource(resources)
+```
+
+Now we are ready to write our `datapackage.json` file to the current working directory.
+
+```r
+  dataPackage$save('example_data')
+```
+
+The `datapackage.json` ([download](https://raw.githubusercontent.com/okgreece/datapackage-r/master/vignettes/exampledata/package.json)) is inlined below. Note that atomic number has been correctly inferred as an `integer` and atomic mass as a `number` (float) while every other column is a `string`.
+ +``` + jsonlite::prettify(helpers.from.list.to.json(dataPackage$descriptor)) + + ## { + ## "profile": "data-package", + ## "name": "period-table", + ## "title": "Periodic Table", + ## "resources": [ + ## { + ## "name": "data", + ## "path": "https://raw.githubusercontent.com/okgreece/datapackage-r/master/vignettes/exampledata/data.csv", + ## "schema": { + ## "fields": [ + ## { + ## "name": "atomic number", + ## "type": "integer", + ## "format": "default" + ## }, + ## { + ## "name": "symbol", + ## "type": "string", + ## "format": "default" + ## }, + ## { + ## "name": "name", + ## "type": "string", + ## "format": "default" + ## }, + ## { + ## "name": "atomic mass", + ## "type": "number", + ## "format": "default" + ## }, + ## { + ## "name": "metal or nonmetal?", + ## "type": "string", + ## "format": "default" + ## } + ## ], + ## "missingValues": [ + ## "" + ## ] + ## }, + ## "profile": "data-resource", + ## "encoding": "utf-8" + ## } + ## ] + ## } + ## +``` + +## Publishing + +Now that you have created your Data Package, you might want to [publish your data online](/blog/2016/08/29/publish-online/) so that you can share it with others. + +Now that you have created a data package in R, [find out how to use data packages in R in this tutorial][use-r]. 
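As a quick sanity check — assuming `save('example_data')` wrote the descriptor to `example_data/datapackage.json` — the freshly written package can be loaded back and inspected with the same API used above:

```r
library(datapackage.r)

# Reload the package we just saved and confirm the metadata survived the round trip
dataPackage = Package.load('example_data/datapackage.json')
dataPackage$descriptor$name

## [1] "period-table"
```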
+
+[dp]: https://specs.frictionlessdata.io/data-package/
+[tdp]: https://specs.frictionlessdata.io/tabular-data-package/
+[okgreece]: http://okfn.gr/
+[toolfund]: https://toolfund.frictionlessdata.io
+[toolfund-okgreece]:https://frictionlessdata.io/articles/open-knowledge-greece/
+[dp-r]: https://github.com/frictionlessdata/datapackage-r
+[ts]: https://specs.frictionlessdata.io/table-schema/
+[r-devtools]: https://cran.r-project.org/package=devtools
+[fd-gitter]: http://gitter.im/frictionlessdata/chat
+[dp-r-issues]: https://github.com/frictionlessdata/datapackage-r/issues
+
+[use-r]: /blog/2018/02/14/using-data-packages-in-r/ diff --git a/site/blog/2018-02-14-using-data-packages-in-r/README.md b/site/blog/2018-02-14-using-data-packages-in-r/README.md new file mode 100644 index 000000000..3edff57d7 --- /dev/null +++ b/site/blog/2018-02-14-using-data-packages-in-r/README.md @@ -0,0 +1,184 @@ +---
+title: Using Data Packages in R
+date: 2018-02-14
+tags: ["R"]
+author: Kleanthis Koupidis
+description: A guide on how to use data packages with R
+category: working-with-data-packages
+---
+
+[Open Knowledge Greece][okgreece] was one of 2017's [Frictionless Data Tool Fund][toolfund] grantees tasked with extending the implementation of core Frictionless Data libraries in the R programming language. You can read more about this in [their grantee profile][toolfund-okgreece]. In this tutorial, [Kleanthis Koupidis](https://gr.linkedin.com/in/kleanthis-koupidis-8348b88b), a Data Scientist and Statistician at Open Knowledge Greece, explains how to work with Data Packages in R.
+
+This tutorial will show you how to install the R libraries for working with Tabular Data Packages and demonstrate a very simple example of loading a Tabular Data Package from the web, pushing it directly into a local SQL database, and sending a query to retrieve results.
+
+::: tip
+For a comprehensive introduction to creating tabular data packages in R, [start by going through this tutorial][create-r].
+:::
+
+## Setup
+
+For this tutorial, we will need the Data Package R library ([datapackage.r][dp-r]). The [devtools library](https://cran.r-project.org/package=devtools) is also required to install the datapackage.r library from GitHub.
+
+```r
+  # Install devtools package if not already
+  install.packages("devtools")
+```
+
+And then install the development version of [datapackage.r][dp-r] from GitHub.
+
+```r
+  devtools::install_github("frictionlessdata/datapackage-r")
+```
+
+## Load
+
+You can start using the library by loading `datapackage.r`.
+
+```r
+  library(datapackage.r)
+```
+
+## Reading Basic Metadata
+
+In this case, we are using an example Tabular Data Package containing the periodic table stored on [GitHub](https://github.com/frictionlessdata/example-data-packages/tree/master/periodic-table) ([datapackage.json](https://raw.githubusercontent.com/frictionlessdata/example-data-packages/master/periodic-table/datapackage.json), [data.csv](https://raw.githubusercontent.com/frictionlessdata/example-data-packages/master/periodic-table/data.csv)). This dataset includes the atomic number, symbol, element name, atomic mass, and the metallicity of the element. Here are the first five rows:
+
+```r
+  url = 'https://raw.githubusercontent.com/okgreece/datapackage-r/master/vignettes/example_data/data.csv'
+  pt_data = read.csv2(url, sep = ',')
+  knitr::kable(head(pt_data, 5), align = 'c')
+```
+
+| atomic.number | symbol | name      | atomic.mass | metal.or.nonmetal.   |
+|---------------|--------|-----------|-------------|----------------------|
+| 1             | H      | Hydrogen  | 1.00794     | nonmetal             |
+| 2             | He     | Helium    | 4.002602    | noble gas            |
+| 3             | Li     | Lithium   | 6.941       | alkali metal         |
+| 4             | Be     | Beryllium | 9.012182    | alkaline earth metal |
+| 5             | B      | Boron     | 10.811      | metalloid            |
+
+Data Packages can be loaded either from a local path or directly from the web.
+
+```r
+  url = 'https://raw.githubusercontent.com/okgreece/datapackage-r/master/vignettes/exampledata/package.json'
+  datapackage = Package.load(url)
+  datapackage$resources[[1]]$descriptor$profile = 'tabular-data-resource' # tabular resource descriptor profile
+  datapackage$resources[[1]]$commit() # commit changes
+
+  ## [1] TRUE
+```
+
+At the most basic level, Data Packages provide a standardized format for general metadata (for example, the dataset title, source, author, and/or description) about your dataset. Now that you have loaded this Data Package, you have access to this `metadata` using the metadata dict attribute. Note that these fields are optional and may not be specified for all Data Packages. For more information on which fields are supported, see [the full Data Package standard][dp].
+
+```r
+  datapackage$descriptor$title
+
+  ## [1] "Periodic Table"
+```
+
+## Reading Data
+
+Now that you have loaded your Data Package, you can read its data. A Data Package can contain multiple files which are accessible via the `resources` attribute. The `resources` attribute is an array of objects containing information (e.g. path, schema, description) about each file in the package.
+
+You can access the data in a given resource in the `resources` array through the resource's `table` attribute.
+
+```r
+  table = datapackage$resources[[1]]$table
+  periodic_table_data = table$read()
+```
+
+You can further manipulate list objects in R using the [purrr](https://cran.r-project.org/package=purrr) and [rlist](https://cran.r-project.org/package=rlist) packages.
+
+## Loading into an SQL database
+
+[Tabular Data Packages][tdp] contain schema information about their data using [Table Schema][ts]. This means you can easily import your Data Package into the SQL backend of your choice. In this case, we are creating an [SQLite](http://sqlite.org/) database.
+
+To create a new SQLite database and load the data into SQL we will need the [DBI](https://cran.r-project.org/package=DBI) package and the [RSQLite](https://cran.r-project.org/package=RSQLite) package, which bundles [SQLite](https://www.sqlite.org/) (no external software is needed).
+
+You can install and load them by using:
+
+```r
+  install.packages(c("DBI","RSQLite"))
+
+  library(DBI)
+  library(RSQLite)
+```
+
+To create a new SQLite database, you simply supply the filename to `dbConnect()`:
+
+```r
+  dp.database = dbConnect(RSQLite::SQLite(), "") # temporary database
+```
+
+We will use the [data.table](https://cran.r-project.org/package=data.table) package to convert the list object holding the data into a data frame object and copy it to a database table.
+
+```r
+  # install data.table package if not already
+  # install.packages("data.table")
+
+  periodic_table_sql = data.table::rbindlist(periodic_table_data)
+  periodic_table_sql = setNames(periodic_table_sql,unlist(datapackage$resources[[1]]$headers))
+```
+
+You can easily copy an R data frame into a SQLite database with `dbWriteTable()`:
+
+```r
+  dbWriteTable(dp.database, "periodic_table_sql", periodic_table_sql)
+  # show remote tables accessible through this connection
+  dbListTables(dp.database)
+
+  ## [1] "periodic_table_sql"
+```
+
+The data are now in the database.
+
+We can issue queries to the database and return the first 5 elements:
+
+```r
+  dbGetQuery(dp.database, 'SELECT * FROM periodic_table_sql LIMIT 5')
+
+  ##   atomic number symbol      name atomic mass   metal or nonmetal?
+  ## 1             1      H  Hydrogen    1.007940             nonmetal
+  ## 2             2     He    Helium    4.002602            noble gas
+  ## 3             3     Li   Lithium    6.941000         alkali metal
+  ## 4             4     Be Beryllium    9.012182 alkaline earth metal
+  ## 5             5      B     Boron   10.811000            metalloid
+```
+
+Or return all elements with an atomic number of less than 10:
+
+```r
+  dbGetQuery(dp.database, 'SELECT * FROM periodic_table_sql WHERE "atomic number" < 10')
+
+  ##   atomic number symbol      name atomic mass   metal or nonmetal?
+  ## 1             1      H  Hydrogen    1.007940             nonmetal
+  ## 2             2     He    Helium    4.002602            noble gas
+  ## 3             3     Li   Lithium    6.941000         alkali metal
+  ## 4             4     Be Beryllium    9.012182 alkaline earth metal
+  ## 5             5      B     Boron   10.811000            metalloid
+  ## 6             6      C    Carbon   12.010700             nonmetal
+  ## 7             7      N  Nitrogen   14.006700             nonmetal
+  ## 8             8      O    Oxygen   15.999400             nonmetal
+  ## 9             9      F  Fluorine   18.998403              halogen
+```
+
+You can find more about using databases and SQLite in R in the vignettes of the [DBI](https://cran.r-project.org/package=DBI) and [RSQLite](https://cran.r-project.org/package=RSQLite) packages.
+
+We welcome your feedback and questions via our [Frictionless Data Gitter chat][fd-gitter] or via [Github issues][dp-r-issues] on the [datapackage-r][dp-r] repository.
+
+[dp]: https://specs.frictionlessdata.io/data-package/
+[tdp]: https://specs.frictionlessdata.io/tabular-data-package/
+[okgreece]: http://okfn.gr/
+[toolfund]: https://toolfund.frictionlessdata.io
+[toolfund-okgreece]:https://frictionlessdata.io/articles/open-knowledge-greece/
+[dp-r]: https://github.com/frictionlessdata/datapackage-r
+[ts]: https://specs.frictionlessdata.io/table-schema/
+[r-devtools]: https://cran.r-project.org/package=devtools
+[fd-gitter]: http://gitter.im/frictionlessdata/chat
+[dp-r-issues]: https://github.com/frictionlessdata/datapackage-r/issues
+
+[create-r]: /blog/2018/02/14/creating-tabular-data-packages-in-r/ diff --git a/site/blog/2018-02-16-using-data-packages-in-go/README.md b/site/blog/2018-02-16-using-data-packages-in-go/README.md new file mode 100644 index 000000000..86ac8d04e --- /dev/null +++ b/site/blog/2018-02-16-using-data-packages-in-go/README.md @@ -0,0 +1,207 @@ +---
+title: Using Data Packages in Go
+date: 2018-02-16
+tags: ["Go"]
+author: Daniel Fireman
+description: A guide on how to use data packages with Go
+category: working-with-data-packages
+---
+
+
+Daniel Fireman was one of 2017's [Frictionless Data Tool Fund][toolfund] grantees tasked with extending the implementation of core Frictionless Data libraries in the Go
programming language. You can read more about this in [his grantee profile][toolfund-daniel]. In this post, Fireman will show you how to install and use the [Go](http://golang.org) libraries for working with [Tabular Data Packages][tdp]. + +Our goal in this tutorial is to load a data package from the web and read its metadata and contents. + +## Setup +For this tutorial, we will need the [datapackage-go][dp-go] and [tableschema-go][ts-go] packages, which provide all the functionality to deal with a Data Package's metadata and its contents. + +We are going to use the [dep tool](https://golang.github.io/dep/) to manage the dependencies of our new project: + +```sh +$ cd $GOPATH/src/newdataproj +$ dep init +``` + +## The Periodic Table Data Package + +A [Data Package][dp] is a simple container format used to describe and package a collection of data. It consists of two parts: + +* Metadata that describes the structure and contents of the package +* Resources such as data files that form the contents of the package + +In this tutorial, we are using a [Tabular Data Package][tdp] containing the periodic table. The package descriptor ([datapackage.json][datapackage.json]) and contents ([data.csv][data.csv]) are stored on GitHub. This dataset includes the atomic number, symbol, element name, atomic mass, and the metallicity of the element. Here are the header and the first three rows: + +| atomic number | symbol | name | atomic mass | metal or nonmetal? | +|---------------|--------|----------|-------------|--------------------| +| 1 | H | Hydrogen | 1.00794 | nonmetal | +| 2 | He | Helium | 4.002602 | noble gas | +| 3 | Li | Lithium | 6.941 | alkali metal | + +## Inspecting Package Metadata + +Let's start off by creating the `main.go`, which loads the data package and inspects some of its metadata. 
+ +```go +package main + +import ( + "fmt" + + "github.com/frictionlessdata/datapackage-go/datapackage" +) + +func main() { + pkg, err := datapackage.Load("https://raw.githubusercontent.com/frictionlessdata/example-data-packages/62d47b454d95a95b6029214b9533de79401e953a/periodic-table/datapackage.json") + if err != nil { + panic(err) + } + fmt.Println("Package loaded successfully.") +} +``` + +Before running the code, you need to tell the dep tool to update our project dependencies. Don't worry; you won't need to do it again in this tutorial. + +```sh +$ dep ensure +$ go run main.go +Package loaded successfully. +``` + +Now that you have loaded the periodic table Data Package, you have access to its `title` and `name` fields through the [Package.Descriptor() function](https://godoc.org/github.com/frictionlessdata/datapackage-go/datapackage#Package.Descriptor). To do so, let's change our main function to (omitting error handling for the sake of brevity, but we know it is _very_ important): + +```go +func main() { + pkg, _ := datapackage.Load("https://raw.githubusercontent.com/frictionlessdata/example-data-packages/62d47b454d95a95b6029214b9533de79401e953a/periodic-table/datapackage.json") + fmt.Println("Name:", pkg.Descriptor()["name"]) + fmt.Println("Title:", pkg.Descriptor()["title"]) +} +``` + +And rerun the program: + +```sh +$ go run main.go +Name: period-table +Title: Periodic Table +``` + +And as you can see, the printed fields match the [package descriptor][datapackage.json]. For more information about the Data Package structure, please take a look at the [specification](https://specs.frictionlessdata.io/data-package/). + +## Quick Look At the Data + +Now that you have loaded your Data Package, it is time to process its contents. The package content consists of one or more resources. You can access [Resources][dp-go-resource] via the [Package.GetResource()](https://godoc.org/github.com/frictionlessdata/datapackage-go/datapackage#Package.GetResource()) method. 
Let's print the periodic table `data` resource contents.
+
+```go
+func main() {
+	pkg, _ := datapackage.Load("https://raw.githubusercontent.com/frictionlessdata/example-data-packages/62d47b454d95a95b6029214b9533de79401e953a/periodic-table/datapackage.json")
+	res := pkg.GetResource("data")
+	table, _ := res.ReadAll()
+	for _, row := range table {
+		fmt.Println(row)
+	}
+}
+```
+
+```sh
+$ go run main.go
+[atomic number symbol name atomic mass metal or nonmetal?]
+[1 H Hydrogen 1.00794 nonmetal]
+[2 He Helium 4.002602 noble gas]
+[3 Li Lithium 6.941 alkali metal]
+[4 Be Beryllium 9.012182 alkaline earth metal]
+...
+```
+
+The [Resource.ReadAll()](https://godoc.org/github.com/frictionlessdata/datapackage-go/datapackage#Resource.ReadAll) method loads the whole table in memory as raw strings and returns it as a Go `[][]string`. This can be quite useful for taking a quick look at the data or performing a visual sanity check.
+
+## Processing the Data Package's Content
+
+Even though the string representation can be useful for a quick sanity check, you probably want to use actual language types to process the data. Don't worry, you won't need to fight the casting battle yourself. The Data Package Go libraries provide a rich set of methods to deal with data loading in a very idiomatic way (very similar to [encoding/json](https://golang.org/pkg/encoding/json/)).
+
+As an example, let's change our `main` function to use actual types to store the periodic table and print the elements with atomic mass smaller than 10.
+ +```go +package main + +import ( + "fmt" + + "github.com/frictionlessdata/datapackage-go/datapackage" + "github.com/frictionlessdata/tableschema-go/csv" +) + +type element struct { + Number int `tableheader:"atomic number"` + Symbol string `tableheader:"symbol"` + Name string `tableheader:"name"` + Mass float64 `tableheader:"atomic mass"` + Metal string `tableheader:"metal or nonmetal?"` +} + +func main() { + pkg, _ := datapackage.Load("https://raw.githubusercontent.com/frictionlessdata/example-data-packages/62d47b454d95a95b6029214b9533de79401e953a/periodic-table/datapackage.json") + resource := pkg.GetResource("data") + + var elements []element + resource.Cast(&elements, csv.LoadHeaders()) + for _, e := range elements { + if e.Mass < 10 { + fmt.Printf("%+v\n", e) + } + } +} +``` + +```sh +$ go run main.go +{Number:1 Symbol:H Name:Hydrogen Mass:1.00794 Metal:nonmetal} +{Number:2 Symbol:He Name:Helium Mass:4.002602 Metal:noble gas} +{Number:3 Symbol:Li Name:Lithium Mass:6.941 Metal:alkali metal} +{Number:4 Symbol:Be Name:Beryllium Mass:9.012182 Metal:alkaline earth metal} +``` + +In the example above, all rows in the table are loaded into memory. Then every row is parsed into an `element` object and appended to the slice. The `resource.Cast` call returns an error if the whole table cannot be successfully parsed. + +If you don't want to load all data in memory at once, you can lazily access each row using [Resource.Iter](https://godoc.org/github.com/frictionlessdata/datapackage-go/datapackage#Resource.Iter) and use [Schema.CastRow](https://godoc.org/github.com/frictionlessdata/tableschema-go/schema#Schema.CastRow) to cast each row into an `element` object. 
That would change our main function to: + +```go +func main() { + pkg, _ := datapackage.Load("https://raw.githubusercontent.com/frictionlessdata/example-data-packages/62d47b454d95a95b6029214b9533de79401e953a/periodic-table/datapackage.json") + resource := pkg.GetResource("data") + + iter, _ := resource.Iter(csv.LoadHeaders()) + sch, _ := resource.GetSchema() + var e element + for iter.Next() { + sch.CastRow(iter.Row(), &e) + if e.Mass < 10 { + fmt.Printf("%+v\n", e) + } + } +} +``` + +```sh +$ go run main.go +{Number:1 Symbol:H Name:Hydrogen Mass:1.00794 Metal:nonmetal} +{Number:2 Symbol:He Name:Helium Mass:4.002602 Metal:noble gas} +{Number:3 Symbol:Li Name:Lithium Mass:6.941 Metal:alkali metal} +{Number:4 Symbol:Be Name:Beryllium Mass:9.012182 Metal:alkaline earth metal} +``` + +And our code is ready to deal with the growth of the periodic table in a very memory-efficient way :-) + +We welcome your feedback and questions via our [Frictionless Data Gitter chat][fd-gitter] or via [GitHub issues][dp-go-issues] on the datapackage-go repository. 
+ +[dp]: https://specs.frictionlessdata.io/data-package/ +[tdp]: https://specs.frictionlessdata.io/tabular-data-package/ +[toolfund]: https://toolfund.frictionlessdata.io +[toolfund-daniel]:/blog/2017/11/01/daniel-fireman/ +[dp-go]: https://github.com/frictionlessdata/datapackage-go +[ts-go]: https://github.com/frictionlessdata/tableschema-go +[ts]: /table-schema/ +[dp-go-resource]:https://godoc.org/github.com/frictionlessdata/datapackage-go/datapackage#Resource +[fd-gitter]: http://gitter.im/frictionlessdata/chat +[dp-go-issues]: https://github.com/frictionlessdata/datapackage-go/issues +[datapackage.json]: https://raw.githubusercontent.com/frictionlessdata/example-data-packages/62d47b454d95a95b6029214b9533de79401e953a/periodic-table/datapackage.json +[data.csv]: https://raw.githubusercontent.com/frictionlessdata/example-data-packages/62d47b454d95a95b6029214b9533de79401e953a/periodic-table/data.csv diff --git a/site/blog/2018-03-07-well-packaged-datasets/README.md b/site/blog/2018-03-07-well-packaged-datasets/README.md new file mode 100644 index 000000000..99d0d84d1 --- /dev/null +++ b/site/blog/2018-03-07-well-packaged-datasets/README.md @@ -0,0 +1,148 @@ +--- +title: Well packaged datasets +date: 2018-03-07 +tags: ["Data Package Creator", "field-guide"] +category: +image: /img/blog/well-packaged.png +description: There's an art to creating a good collection of data. Improve the quality of your datasets; making use of schemas, metadata, and data packages. +--- + +When sharing multiple datasets on a specific subject with a varied audience, it is important to ensure that whoever accesses the data understands the context around it, and can quickly access licensing and other attribution information. + +In this section, you will learn how to collate related datasets in one place, and easily create a schema that contains descriptive metadata for your data collection. 
+ +## Write a Table Schema + +Simply put, a schema is a blueprint that tells us how your data is structured, and what type of content is to be expected in it. You can think of it as a data dictionary. Having a table schema at hand makes it possible to run more precise validation checks on your data, both at a structural and content level. + +For this section, we will use the [Data Package Creator](https://create.frictionlessdata.io) and [Gross Domestic Product dataset for all countries (1960 - 2014)](http://datahub.io/core/gdp). + +**Data Package** is a format that makes it possible to put your data collection and relevant information that provides context about your data in one container before you share it. All contextual information, such as metadata and your data schema, is published in a JSON file named *datapackage.json*. + +**Data Package Creator** is an online service that facilitates the creation and editing of data packages. The service automatically generates a *datapackage.json* file for you as you add and edit data that is part of your data collection. We refer to each piece of data in a data collection as a **data resource**. + +[Data Package Creator](https://create.frictionlessdata.io) loads with dummy data to make it easy to understand how metadata and sample resources help generate the *datapackage.json* file. There are three ways in which a user can add data resources on [Data Package Creator](https://create.frictionlessdata.io): + +1. Provide a hyperlink to your data resource (highly recommended). + + If your data resource is publicly available, like on GitHub or in a data repository, simply obtain the URL and paste it in the **Path** section. To learn how to publish your data resource online, check the publish your dataset section. + +2. Create your data resource within the service. + + If your data resource isn't published online, you'll have to define its fields from scratch. 
Depending on how complex your data is, this can be time consuming, but it's still easier than creating the descriptor JSON file from scratch.
+
+3. **Load a Data Package** option
+
+   With this option, you can load a pre-existing *datapackage.json* file to view and edit its metadata and resource fields.
+***
+Let's use the [Gross Domestic Product dataset for all countries (1960 - 2014)](https://github.com/frictionlessdata/example-data-packages/blob/master/gross-domestic-product-all/data/gdp.csv), which is publicly available on GitHub.
+
+Obtain a link to the raw CSV file by clicking on the Raw button at the top right corner of the GitHub file preview page, as shown in figure 1 below. The resulting hyperlink looks like `https://raw.githubusercontent.com/datasets/continent-codes/master/data/continent-codes.csv`
+![Above, raw button highlighted in red](./figure-1.png)
+*Figure 1: Above, raw button highlighted in red.*
+ +Paste your hyperlink in the *Path* section and click on the *Load* button. Each column in your table translates to a *field*. You should be prompted to add all fields identified in your data resource, as in Figure 2 below. Click on the prompt to load the fields. + +
+![annotated in red, a prompt to add all fields inferred from your data resource](./figure-2.png)
+*Figure 2: annotated in red, a prompt to add all fields inferred from your data resource.*
+ +The page that follows looks like Figure 3 below. Each column from the GDP dataset has been mapped to a *field*. The data type for each column has been inferred correctly, and we can preview data under each field by hovering over the field name. It is also possible to edit all sections of our data resource’s fields as we can see below. + +
+![all fields inferred from your data resource](./figure-3.png)
+*Figure 3: all fields inferred from your data resource.*
+
+You can now edit data types and formats as necessary, and optionally add titles and descriptive information to your fields. For example, the data type for our {Year} field should be ***year*** and not ***integer***. Our {Value} column has numeric information with decimal places.
+
+By definition, values under the ***integer*** data type are whole numbers. The ***number*** data type is more appropriate for the {Value} column. When in doubt about what data type to use, consult the [Table Schema data types cheat sheet](https://specs.frictionlessdata.io/table-schema/#types-and-formats).
+
+Click on the ![settings](./settings.png) icon to pick a suitable profile for your data resource. [Here’s more information about Frictionless Data profiles](https://specs.frictionlessdata.io/profiles/).
+
+If your dataset has other data resources, add them by scrolling to the bottom of the page, clicking on **Add Resource**, and repeating the same process as we just did.
+![Prompt to add more data resources](./figure-4.png)
+*Figure 4: Prompt to add more data resources.*
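+A field definition in the generated schema is just a small JSON object. As a rough sketch of what the Creator produces for this resource (assuming the GDP file's columns are Country Name, Country Code, Year, and Value; titles and descriptions omitted), the schema fragment would look like:

```json
{
  "fields": [
    { "name": "Country Name", "type": "string", "format": "default" },
    { "name": "Country Code", "type": "string", "format": "default" },
    { "name": "Year", "type": "year", "format": "default" },
    { "name": "Value", "type": "number", "format": "default" }
  ]
}
```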
+ +## Add your dataset's metadata + +In the previous section, we described metadata for each of our datasets, but we're still missing metadata for our collection of datasets. You can add it via the **Metadata** section on the left side bar, describing things like the dataset name, description, author, license, etc. + +
+![Add Data Package Metadata](./figure-5.png)
+
+The **Profile** section under metadata allows us to specify what kind of data collection we are packaging.
+* *Data Package*
+This is the base, more general profile. Use it if your dataset contains resources of mixed formats, like tabular and geographical data. The base requirement for a valid Data Package profile is the *datapackage.json* file. See the [Data Package specification](https://specs.frictionlessdata.io/data-package/) for more information.
+
+* *Tabular Data Package*
+If your data contains only tabular resources like CSVs and spreadsheets, use the Tabular Data Package profile. See the [Tabular Data Package specification](https://specs.frictionlessdata.io/tabular-data-package/) for more information.
+* *Fiscal Data Package*
+If your data contains fiscal information like budgets and expenditure data, use the Fiscal Data Package profile. See the [Fiscal Data Package specification](https://specs.frictionlessdata.io/fiscal-data-package/) for more information.
+
+In our example, as we only have a CSV data resource, the *Tabular Data Package* profile is the best option.
+
+In the **Keywords** section, you can add any keywords that help make your data collection more discoverable. For our dataset, we might use the keywords *GDP, National Accounts, National GDP, Regional GDP*. Other datasets could include the country name, dataset area (e.g. "health" or "environmental"), etc.
+
+Now that we have created a Data Package, we can **Validate** or **Download** it. But first, let’s see what our datapackage.json file looks like. With every addition and modification, the [Data Package Creator](https://create.frictionlessdata.io) has been populating the *datapackage.json* file for us. Click on the **{···}** icon to view the *datapackage.json* file. As you can see below, any edit we make to the description of the Value field is reflected in the JSON file in real time.
+
+The **Validate** button allows us to confirm whether we chose the correct Profile for our Data Package.
The two possible outcomes at this stage are: + +
+![Data Package is Invalid](./figure-6.png)
+
+This message appears when there is a validation error, such as a missing required attribute (e.g. the data package name) or an incorrect profile (e.g. Tabular Data Package with geographical data). Review the metadata and profiles to find the mistake and try validating again.
+![Data Package is Valid](./figure-7.png)
+
+All good! This message means that your data package is valid and can be downloaded.
+
+## Download your Data Package
+
+As we said earlier, the base requirement for a valid Data Package profile is the *datapackage.json* file, which contains your data schema and metadata. We call this the descriptor file. You can download your descriptor file by clicking on the **Download** button.
+
+* If your data resources, like ours, were linked from an online public source, sharing the *datapackage.json* file is sufficient, since it contains URLs to your data resources.
+
+* If you manually created a data resource and its fields, remember to add all your data resources and the downloaded *datapackage.json* file in one folder before sharing it.
+
+The way to structure your dataset depends on your data, and what extra artifacts it contains (e.g. images, scripts, reports, etc.). In this section, we'll show a complete example with:
+
+* **Data files**: The files with the actual data (e.g. CSV, XLS, GeoJSON, ...)
+* **Documentation**: How the data was collected, any caveats, how to update it, etc.
+* **Metadata**: Where the data comes from, what's in the files, what's their source and license, etc.
+* **Scripts**: Software scripts that were used to generate, update, or modify the data.
+
+Your final Data Package file directory should look like this:
+
+```
+data/
+  dataresource1.csv
+  dataresource2.csv
+datapackage.json
+```
+* **data/**: All data files are contained in this folder. In our example, there is only one: `data/gdp.csv`.
+
+* **datapackage.json**: This file describes the dataset's metadata. For example, what the dataset is, where its files are, what they contain, what each column means (for tabular data), what the source, license, and authors are, and so on. As it's a machine-readable specification, other software can import and validate your files.
+
+Congratulations!
You have now created a schema for your data, and combined it with descriptive metadata and your data collection to create your first data package! diff --git a/site/blog/2018-03-07-well-packaged-datasets/figure-1.png b/site/blog/2018-03-07-well-packaged-datasets/figure-1.png new file mode 100644 index 000000000..2a5ecdab7 Binary files /dev/null and b/site/blog/2018-03-07-well-packaged-datasets/figure-1.png differ diff --git a/site/blog/2018-03-07-well-packaged-datasets/figure-2.png b/site/blog/2018-03-07-well-packaged-datasets/figure-2.png new file mode 100644 index 000000000..14ab720b7 Binary files /dev/null and b/site/blog/2018-03-07-well-packaged-datasets/figure-2.png differ diff --git a/site/blog/2018-03-07-well-packaged-datasets/figure-3.png b/site/blog/2018-03-07-well-packaged-datasets/figure-3.png new file mode 100644 index 000000000..6d21c6992 Binary files /dev/null and b/site/blog/2018-03-07-well-packaged-datasets/figure-3.png differ diff --git a/site/blog/2018-03-07-well-packaged-datasets/figure-4.png b/site/blog/2018-03-07-well-packaged-datasets/figure-4.png new file mode 100644 index 000000000..de12a3c34 Binary files /dev/null and b/site/blog/2018-03-07-well-packaged-datasets/figure-4.png differ diff --git a/site/blog/2018-03-07-well-packaged-datasets/figure-5.png b/site/blog/2018-03-07-well-packaged-datasets/figure-5.png new file mode 100644 index 000000000..28b85b14a Binary files /dev/null and b/site/blog/2018-03-07-well-packaged-datasets/figure-5.png differ diff --git a/site/blog/2018-03-07-well-packaged-datasets/figure-6.png b/site/blog/2018-03-07-well-packaged-datasets/figure-6.png new file mode 100644 index 000000000..7948125e4 Binary files /dev/null and b/site/blog/2018-03-07-well-packaged-datasets/figure-6.png differ diff --git a/site/blog/2018-03-07-well-packaged-datasets/figure-7.png b/site/blog/2018-03-07-well-packaged-datasets/figure-7.png new file mode 100644 index 000000000..b45a04bfe Binary files /dev/null and 
b/site/blog/2018-03-07-well-packaged-datasets/figure-7.png differ diff --git a/site/blog/2018-03-07-well-packaged-datasets/settings.png b/site/blog/2018-03-07-well-packaged-datasets/settings.png new file mode 100644 index 000000000..b6597030b Binary files /dev/null and b/site/blog/2018-03-07-well-packaged-datasets/settings.png differ diff --git a/site/blog/2018-03-07-well-packaged-datasets/well-packaged.png b/site/blog/2018-03-07-well-packaged-datasets/well-packaged.png new file mode 100644 index 000000000..dc817c131 Binary files /dev/null and b/site/blog/2018-03-07-well-packaged-datasets/well-packaged.png differ diff --git a/site/blog/2018-03-12-automatically-validated-tabular-data/README.md b/site/blog/2018-03-12-automatically-validated-tabular-data/README.md new file mode 100644 index 000000000..3bf7511ee --- /dev/null +++ b/site/blog/2018-03-12-automatically-validated-tabular-data/README.md @@ -0,0 +1,87 @@
+---
+title: Automatically validated tabular data
+date: 2018-03-12
+tags: ["goodtables.io", "field-guide"]
+category:
+image: /img/blog/auto-validate.png
+description: Automatic validation means you'll be the first to know if a change in your data causes a problem. Learn how to incorporate automatic validation into your workflow.
+---
+
+One-off validation of your tabular datasets can be tedious, especially when a large amount of published data is maintained and updated regularly.
+
+Running continuous checks on data provides regular feedback and contributes to better data quality, as errors can be flagged and fixed early on. This section introduces you to tools that continually check your data for errors and flag content and structural issues as they arise. By eliminating the need to run manual checks on tabular datasets every time they are updated, they make your data workflow more efficient.
+
+In this section, you will learn how to set up automatic tabular data validation using goodtables, so your data is validated every time it's updated.
Although not strictly necessary, it's useful to [know about Data Packages and Table Schema](/blog/2018/03/07/well-packaged-datasets/) before proceeding, as they allow you to describe your data in more detail, enabling more advanced validations.
+
+We will show how to set up automated tabular data validations for data published on:
+
+* [CKAN][ckan], an open source data publishing platform;
+* [GitHub](https://github.com/), a hosting service;
+* [Amazon S3](https://aws.amazon.com/s3/), a data storage service.
+
+If you don't use any of these platforms, you can still set up the validation using [goodtables-py][gt-py]; it will just require some technical knowledge.
+
+If you do use one of these platforms, the data validation report looks like:
+
+[![Figure 1: Goodtables.io tabular data validation report](./goodtablesio-screenshot.png)](https://goodtables.io/github/vitorbaptista/birmingham_schools/jobs/3)
+*Figure 1: Goodtables.io tabular data validation report.*
+
+## Validate tabular data automatically on CKAN
+
+[CKAN](https://ckan.org/) is an open source platform for publishing data online. It is widely used across the planet, including by the federal governments of the USA, United Kingdom, Brazil, and others.
+
+To automatically validate tabular data on CKAN, enable the [ckanext-validation](https://github.com/frictionlessdata/ckanext-validation) extension, which uses goodtables to run continuous checks on your data. The [ckanext-validation](https://github.com/frictionlessdata/ckanext-validation) extension:
+
+* Adds a badge next to each dataset showing its validation status (valid or invalid), and
+* Allows users to access the validation report, making it possible for errors to be identified and fixed.
+
+![Figure 2: Annotated in red, automated validation checks on datasets in CKAN](./ckan-validation.png)
+*Figure 2: Annotated in red, automated validation checks on datasets in CKAN.*
+
+The installation and usage instructions for the [ckanext-validation](https://github.com/frictionlessdata/ckanext-validation) extension are available on [GitHub](https://github.com/frictionlessdata/ckanext-validation).
+
+
+## Validate tabular data automatically on GitHub
+
+If your data is hosted on GitHub, you can use the goodtables web service to automatically validate it on every change.
+
+For this section, you will first need to create a [GitHub repository](https://help.github.com/articles/create-a-repo/) and add tabular data to it.
+
+Once you have tabular data in your GitHub repository:
+
+1. Log in on [goodtables.io](https://goodtables.io/) using your GitHub account and accept the permissions confirmation.
+1. Once we've synchronized your repository list, go to the [Manage Sources](https://goodtables.io/settings) page and enable the repository with the data you want to validate.
+    * If you can't find the repository, try clicking on the Refresh button on the Manage Sources page.
+
+Goodtables will then validate all tabular data files (CSV, XLS, XLSX, ODS) and [data packages](https://specs.frictionlessdata.io/data-package/) in the repository. These validations will be executed on every change, including pull requests.
+
+
+## Validate tabular data automatically on Amazon S3
+
+If your data is hosted on Amazon S3, you can use [goodtables.io][gtio] to automatically validate it on every change.
+
+Setting it up is a technical process, as you need to know how to configure your Amazon S3 bucket. However, once it's configured, the validations happen automatically on any tabular data created or updated. Find the detailed instructions [here][gtio:s3].
+
+
+## Custom setup of automatic tabular data validation
+
+If you don't use any of the officially supported data publishing platforms, you can use [goodtables-py][gt-py] directly to validate your data. This is the most flexible option, as you can configure exactly when and how your tabular data is validated. For example, if your data comes from an external source, you could validate it once before you process it (so you catch errors in the source data), and once after cleaning, just before you publish it, so you catch errors introduced by your data processing.
+
+The instructions on how to do this are technical, and can be found at [https://github.com/frictionlessdata/goodtables-py][gt-py].
+
+[gtio]: https://goodtables.io/ "Goodtables.io"
+[gtio:s3]: https://docs.goodtables.io/getting_started/s3.html "Goodtables.io Amazon S3 instructions"
+[github]: https://github.com/ "GitHub"
+[s3]: https://aws.amazon.com/s3/ "Amazon S3"
+[s3-region-bug]: https://github.com/frictionlessdata/goodtables.io/issues/136 "Can't add S3 bucket with other region that Oregon (us-west-2)"
+[howto-s3bucket]: https://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-bucket.html "How do I create an S3 Bucket?"
+[howto-s3upload]: https://docs.aws.amazon.com/AmazonS3/latest/user-guide/upload-objects.html "How do I upload files and folders to an S3 Bucket?"
+[howto-iamuser]: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html?icmpid=docs_iam_console "Create an IAM User in your AWS account" +[bucket-overview]: https://s3.console.aws.amazon.com/s3/buckets/ "Amazon S3 Bucket list" +[gh-new-repo]: https://help.github.com/articles/create-a-repo/ "GitHub: Create new repository tutorial" +[gtio-managesources]: https://goodtables.io/settings "Goodtables.io: Manage sources" +[datapackage]: /data-package/ "Data Package" +[gtio-dataschema]: writing_data_schema.html "Writing a data schema" +[gtio-configuring]: configuring.html "Configuring goodtables.io" +[gt-py]: https://github.com/frictionlessdata/goodtables-py +[ckan]: https://ckan.org diff --git a/site/blog/2018-03-12-automatically-validated-tabular-data/auto-validate.png b/site/blog/2018-03-12-automatically-validated-tabular-data/auto-validate.png new file mode 100644 index 000000000..96212af10 Binary files /dev/null and b/site/blog/2018-03-12-automatically-validated-tabular-data/auto-validate.png differ diff --git a/site/blog/2018-03-12-automatically-validated-tabular-data/ckan-validation.png b/site/blog/2018-03-12-automatically-validated-tabular-data/ckan-validation.png new file mode 100644 index 000000000..1dbb4a8b8 Binary files /dev/null and b/site/blog/2018-03-12-automatically-validated-tabular-data/ckan-validation.png differ diff --git a/site/blog/2018-03-12-automatically-validated-tabular-data/goodtablesio-screenshot.png b/site/blog/2018-03-12-automatically-validated-tabular-data/goodtablesio-screenshot.png new file mode 100644 index 000000000..360951f40 Binary files /dev/null and b/site/blog/2018-03-12-automatically-validated-tabular-data/goodtablesio-screenshot.png differ diff --git a/site/blog/2018-03-12-data-publication-workflow-example/README.md b/site/blog/2018-03-12-data-publication-workflow-example/README.md new file mode 100644 index 000000000..e4ca321b2 --- /dev/null +++ b/site/blog/2018-03-12-data-publication-workflow-example/README.md @@ 
-0,0 +1,138 @@
+---
+title: Data publication workflow example
+date: 2018-03-12
+tags: ["Data Package Creator", "goodtables.io", "field-guide"]
+category:
+image: /img/blog/workflow.png
+description: There's a lot to see in the world of Frictionless Data. If you're confused about how it all comes together, take a look at our publication workflow example.
+---
+
+In this section, we will walk through the publication process, using a dataset of the periodic table of elements as an example. We will define its metadata by creating a data package, describe the structure of the CSV using a Table Schema, validate it on Goodtables, and finally publish it to a public CKAN instance. Let's start.
+
+First, let's look at the data. It is available as a CSV file at [this link][data.csv]. The first five rows look like:
+
+| atomic number | symbol | name | atomic mass | metal or nonmetal? |
+| --- | --- | --- | --- | --- |
+| 1 | H | Hydrogen | 1.00794 | nonmetal |
+| 2 | He | Helium | 4.002602 | noble gas |
+| 3 | Li | Lithium | 6.941 | alkali metal |
+| 4 | Be | Beryllium | 9.012182 | alkaline earth metal |
+| 5 | B | Boron | 10.811 | metalloid |
+
+As we can see, some fields are numeric, both integer (atomic number) and floating point (atomic mass), while others are textual. Our first objective is to describe the metadata by creating a Data Package, and its contents by creating a Table Schema.
+
+## Step 1. Package our data as a Data Package
+
+The easiest way to create a data package is using the [Data Package Creator][dp:creator]. It provides a graphical interface to describe the data package's metadata, add resources, and define the schema for tabular resources. This is what you should see when you first open it:
+
+![Data Package Creator](./dp-creator.png)
+*Data Package Creator*
+
+The left side bar contains the metadata for the Data Package as a whole, and the main part on the right contains the metadata for each specific resource.
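+Everything the interface edits ends up in a single *datapackage.json* descriptor. As a rough sketch of its overall shape (the names and path here are placeholders, not values from our example):

```json
{
  "name": "my-data-package",
  "profile": "tabular-data-package",
  "resources": [
    {
      "name": "my-resource",
      "path": "https://example.com/data.csv",
      "profile": "tabular-data-resource",
      "schema": {
        "fields": [
          { "name": "id", "type": "integer" },
          { "name": "value", "type": "number" }
        ]
      }
    }
  ]
}
```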
+ +Let's add our CSV resource. On the main section of the page, fill the inputs with: + +* **Name**: periodic-table +* **Path**: https://raw.githubusercontent.com/frictionlessdata/example-data-packages/d2b96aaed6ab12db41d73022a2988eeb292116e9/periodic-table/data.csv + +Resource and data package names must be unique and lowercase, and can contain only letters, numbers, and the characters ".", "-" and "_". + +Then click on the *Load* button. After a few seconds, a new box should appear with the text "Add all inferred fields (data has 5 extra column(s))". Click on it, and the fields will be created, with their data types and formats inferred from the data. This saves us time, as we don't need to start from scratch. + +The Data Package Creator inferred almost all data types correctly, except for the *atomic mass* column. It inferred the column as having integer values, but as we can see from the data, the numbers aren't whole: they have fractional parts. Just change the type to **number**, and that's it. + +You can view a sample of the data in each column by hovering the mouse below its name. + +Usually, we would now add titles and descriptions to each field. However, as this is a simple dataset, the field names should be enough. In the end, the fields are: + +| Name | Data type | Data format | +| --- | --- | --- | +| atomic number | integer | default | +| symbol | string | default | +| name | string | default | +| atomic mass | number | default | +| metal or nonmetal? | string | default | + +We can add more information about this resource by clicking on the gear icon to the left of the "Load" button. Add the following information: + +* **Title**: Periodic table +* **Profile**: Tabular Data Resource +* **Format**: csv +* **Encoding**: (blank) +* **Description**: (blank) + +After this, we're only missing metadata for the data package as a whole, available on the left sidebar.
Add: + +* **Name**: periodic-table +* **Title**: Periodic table +* **Profile**: Tabular Data Package +* **Description**: Periodic table of chemical elements +* **Home Page**: https://github.com/frictionlessdata/example-data-packages/tree/master/periodic-table +* **Version**: 1.0.0 +* **License**: CC0-1.0 +* **Author**: (blank) + +![](./dp-creator-filled.png) + +Let's validate the data package, to ensure we haven't missed anything. Just click on the *Validate* button on the bottom left, and you should see a green message "Data package is valid!". This means that the data package descriptor is valid, but not necessarily its contents (we'll check them in the next step). + +Save the data package by clicking on the *Download* button. This will download a "datapackage.json" file that contains everything we added here. Our next step is to use it to validate the data. + +## Step 2. Validate our data package and its contents + +We now have a data package with our CSV file, including a table schema describing the contents and types of its columns. Our final step before publishing is validating the data, so we can avoid publishing data with errors. To do so, we'll use goodtables. + +[Goodtables][goodtables] is a tabular data validator that allows you to automatically check for errors such as blank headers, duplicate rows, data with the wrong type (e.g. should be a number but is a date), and others. As it supports data packages, we can simply load the one we created in the previous step. Let's do it. + +1. Go to https://try.goodtables.io +1. On the *Source* input, click on the *Upload File* link. +1. Click on *Browse...* and select the *datapackage.json* file you downloaded in the previous step +1. 
Click on *Validate* + +After a few seconds, you should see: + +![try.goodtables](./try-goodtables.png) +*try.goodtables* + +This means that: + +* The data package is valid +* The CSV file is valid +* There are no blank rows or headers, or duplicate rows +* The data is valid according to the table schema we created (numbers are numbers, and so on) + +Although it can't tell you if your data is correct, for example if the Aluminium +atomic mass is 26.9815386, it does ensure that all atomic mass values are +numbers, among other validations. + +Now that we've created a data package, described our data with a table schema, +and validated it, we can finally publish it. + +## Step 3. Publish the data + +Our final step is to publish the dataset. The specific instructions will vary depending on where you're publishing. In this example, we'll see how to publish to a public [CKAN][ckan] instance, the [Datahub](https://old.datahub.io). If you want to use it and don't have an account yet, you can request one via [our community page][datahub:request-org]. *(Note: this example is now out of date. See the [CKAN docs](https://docs.ckan.org/en/2.9/) for more up-to-date information.)* Let's start. + +After you're logged in, go to the [datasets list page][datahub:dataset-list] and click on the `Import Data Package` button. On this form, click on "Upload", select the `datapackage.json` file we created in the previous step, and choose your organisation. We'll keep the visibility as private for now, so we can review the dataset before it's made public. + +![Importing a data package to the DataHub](./datahub-import-datapackage.png) +*Importing a data package to the DataHub* + +If you don't see the "Import Data Package" button in your CKAN instance, install the [ckanext-datapackager][ckanext-datapackager] extension to add support for importing and exporting your datasets as data packages.
+ +You will be redirected to the newly created dataset on CKAN, with its metadata and resource extracted from the data package. Double-check that everything looks right, and when you're finished, click on the "Manage" button and change the visibility to "Public". + +[![Data package in CKAN](./datahub-dataset.png)][datahub:dataset] + +That's it! CKAN supports data packages via the [ckanext-datapackager][ckanext-datapackager] extension, so importing (and exporting) data packages is trivial, as all the work on describing the dataset was done while creating the data package. + +[data.csv]: https://raw.githubusercontent.com/frictionlessdata/example-data-packages/d2b96aaed6ab12db41d73022a2988eeb292116e9/periodic-table/data.csv +[dp:creator]: https://create.frictionlessdata.io/ "Data Package Creator" +[goodtables]: https://goodtables.io +[published-dataset]: https://datahub.ckan.io/dataset/period-table-9896953431 +[ckan]: https://ckan.org +[datahub]: https://datahub.ckan.io +[datahub:request-org]: https://discuss.okfn.org/c/open-knowledge-labs/datahub +[datahub:dataset-list]: https://old.datahub.io/dataset +[datahub:import-dp]: https://datahub.ckan.io/import_datapackage +[datahub:dataset]: https://datahub.ckan.io/dataset/period-table-9896953431 +[ckanext-datapackager]: https://github.com/frictionlessdata/ckanext-datapackager diff --git a/site/blog/2018-03-12-data-publication-workflow-example/datahub-dataset.png b/site/blog/2018-03-12-data-publication-workflow-example/datahub-dataset.png new file mode 100644 index 000000000..fd54c72e9 Binary files /dev/null and b/site/blog/2018-03-12-data-publication-workflow-example/datahub-dataset.png differ diff --git a/site/blog/2018-03-12-data-publication-workflow-example/datahub-import-datapackage.png b/site/blog/2018-03-12-data-publication-workflow-example/datahub-import-datapackage.png new file mode 100644 index 000000000..33cf5b36a Binary files /dev/null and
b/site/blog/2018-03-12-data-publication-workflow-example/datahub-import-datapackage.png differ diff --git a/site/blog/2018-03-12-data-publication-workflow-example/dp-creator-filled.png b/site/blog/2018-03-12-data-publication-workflow-example/dp-creator-filled.png new file mode 100644 index 000000000..a4bbcb2ec Binary files /dev/null and b/site/blog/2018-03-12-data-publication-workflow-example/dp-creator-filled.png differ diff --git a/site/blog/2018-03-12-data-publication-workflow-example/dp-creator.png b/site/blog/2018-03-12-data-publication-workflow-example/dp-creator.png new file mode 100644 index 000000000..a2b5b8db2 Binary files /dev/null and b/site/blog/2018-03-12-data-publication-workflow-example/dp-creator.png differ diff --git a/site/blog/2018-03-12-data-publication-workflow-example/try-goodtables.png b/site/blog/2018-03-12-data-publication-workflow-example/try-goodtables.png new file mode 100644 index 000000000..75779d594 Binary files /dev/null and b/site/blog/2018-03-12-data-publication-workflow-example/try-goodtables.png differ diff --git a/site/blog/2018-03-12-data-publication-workflow-example/workflow.png b/site/blog/2018-03-12-data-publication-workflow-example/workflow.png new file mode 100644 index 000000000..ade27630b Binary files /dev/null and b/site/blog/2018-03-12-data-publication-workflow-example/workflow.png differ diff --git a/site/blog/2018-03-27-applying-licenses/README.md b/site/blog/2018-03-27-applying-licenses/README.md new file mode 100644 index 000000000..bf9891359 --- /dev/null +++ b/site/blog/2018-03-27-applying-licenses/README.md @@ -0,0 +1,224 @@ +--- +title: Applying licenses, waivers or public domain marks +date: 2018-03-27 +tags: ["licenses"] +description: A guide on applying licenses, waivers or public domain marks to data packages +category: publishing-data +--- + +Applying licenses, waivers or public domain marks to [data packages](https://specs.frictionlessdata.io/data-package/) and [data
resources](https://specs.frictionlessdata.io/data-resource/) helps people understand how they can use, modify and share the contents of a data package. + +It is recommended that you apply a license, waiver or public domain mark to a data package using the [`licenses`](https://specs.frictionlessdata.io/data-package/#licenses) property. The value assigned to the data package `licenses` property applies to all the data, files and metadata in the data package unless specified otherwise. + +You can optionally apply a license to a data resource. This allows a license that differs from the data package license to be applied to the data resource. If the data resource [`licenses`](https://specs.frictionlessdata.io/data-resource/#optional-properties) property is not specified, it inherits the data package `licenses`. + +## Specifying a license + +The Frictionless Data specification states that a [license](https://specs.frictionlessdata.io/data-package/#licenses) must contain a `name` property and/or a `path` property, and may contain a `title` property. + +> * `name`: The name MUST be an [Open Definition license ID](http://licenses.opendefinition.org) +> * `path`: A [url-or-path](https://specs.frictionlessdata.io/data-resource/#url-or-path) string, that is a fully qualified HTTP address, or a relative POSIX path +> * `title`: A human-readable title + +You can specify the location of a license using a URL or a Path. + +### Specify a license using a URL + +To specify a license using a URL, use the fully qualified HTTP address as the value in the `path` property, e.g. + +``` +"licenses": [{ + "path": "https://cdla.io/sharing-1-0/", + "title": "Community Data License Agreement – Sharing, Version 1.0" +}] +``` + +### Specify a license using a Path + +To specify a license using a path, use a relative POSIX path to the file in the data package as the value in the `path` property, e.g.
+ +``` +"licenses": [{ + "path": "LICENSE.pdf" +}] +``` + +In this example, LICENSE.pdf would be in the root of the data package folder, e.g. + +``` +folder + |- datapackage.json + |- LICENSE.pdf + |- README.md + |- data + |- data.csv + |- reference-data.csv + +``` + +It is recommended that the license is provided in [Markdown](http://commonmark.org) format to simplify its display in data platforms and other software. + +The license can be a separate file or included in the `README.md` file. If license information is included in the `README.md` file, it is recommended that it follows the [guide for formatting a README file](/blog/2016/04/20/publish-faq/#readme). + +## Applying a license + +These scenarios apply to either the data package or a data resource. + +1. [Apply an open license](#apply-an-open-license) +2. [Apply a non-open license](#apply-a-non-open-license) +3. [Apply a waiver](#apply-a-waiver) +4. [Apply a public domain mark](#apply-a-public-domain-mark) +5. [Do not apply a license](#do-not-apply-a-license) + +Other considerations: +* [Provide additional license information](#provide-additional-license-information) +* [Copyright belongs to multiple parties](#copyright-belongs-to-multiple-parties) +* [License may become legally binding](#license-may-become-legally-binding) +* [Software may not fully support the Frictionless Data specification](#software-may-not-fully-support-the-frictionless-data-specification) + +### Apply an open license + +For an [open license](http://opendefinition.org/licenses/), use `name`, `path` and `title`, e.g. + +``` +"licenses": [{ + "name": "CC-BY-4.0", + "path": "https://creativecommons.org/licenses/by/4.0/", + "title": "Creative Commons Attribution 4.0" +}] +``` + +`name` must be an [Open Definition license ID](http://licenses.opendefinition.org), however note that some license IDs are placeholders or have been retired and should not be used, e.g.
[other-at](http://licenses.opendefinition.org/licenses/other-at.json), [other-open](http://licenses.opendefinition.org/licenses/other-open.json), [other-pd](http://licenses.opendefinition.org/licenses/other-pd.json), [notspecified](http://licenses.opendefinition.org/licenses/notspecified.json), [ukcrown-withrights](http://licenses.opendefinition.org/licenses/ukcrown-withrights.json). + +### Apply a non-open license + +To apply a non-open license, use the `path` and optionally the `title` properties. It is preferred that the license is published at a URL (a fully qualified HTTP address), e.g. + +``` +"licenses": [{ + "path": "https://creativecommons.org/licenses/by-nc-nd/4.0/", + "title": "Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)" +}] +``` + +If the license is not available at a URL, you can [specify a license using a path](#specify-a-license-using-a-path). + +### Apply a waiver + +You can indicate that copyright has been waived by referencing a waiver at a URL in the `path` property, e.g. + +``` +"licenses": [{ + "name": "CC0-1.0", + "path": "https://creativecommons.org/publicdomain/zero/1.0/", + "title": "CC0 1.0" +}] +``` + +If the waiver is not available at a URL, you can [specify a waiver using a path](#specify-a-license-using-a-path). + +### Apply a public domain mark + +You can indicate that there is no copyright in the data or that copyright has expired, using the [public domain mark](https://creativecommons.org/share-your-work/public-domain/pdm/) or other public domain dedications, e.g. + +``` +"licenses": [{ + "path": "http://creativecommons.org/publicdomain/mark/1.0/", + "title": "Public Domain Mark" +}] +``` + +If the public domain dedication is not available at a URL, you can [specify the public domain dedication using a path](#specify-a-license-using-a-path).
+ +### Do not apply a license + +If you have not decided what license to apply but still want to publish the data package, describe the situation in a file in the data package, e.g. + +``` +"licenses": [{ + "path": "README.md" +}] +``` + +## Other considerations + +### Provide additional license information + +It can be helpful to data consumers to provide additional copyright or attribution information such as: + +* copyright notice - this allows a data publisher to specify a short copyright notice +* copyright statement URL - a URL to a copyright statement +* preferred attribution text - the text to be used when attributing the creator(s) of the data +* attribution URL - a URL to be used when building an attribution link + +This is explained in the ODI [Publisher's Guide to the Open Data Rights Statement Vocabulary](https://theodi.org/guides/publishers-guide-to-the-open-data-rights-statement-vocabulary) and [Re-users Guide to the Open Data Rights Statement Vocabulary](https://theodi.org/guides/odrs-reusers-guide). + +Some licenses require that data consumers provide the copyright notice in the attribution (e.g. [CC BY 4.0 Section 3](https://creativecommons.org/licenses/by/4.0/legalcode#s3)). + +Some data publishers may waive some of their rights under a license, e.g. + +> [Noosa Wedding Locations](https://data.gov.au/dataset/noosa-wedding-locations) data by [Noosa Shire Council](https://www.noosa.qld.gov.au) is licensed under a [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/) licence. +> Noosa Shire Council waives the requirements of attribution under this licence, for this data. + +You can include this information, either: +* in the file containing license information (e.g. `README.md`) +* as additional metadata properties in the datapackage.json + +The data package specification supports adding [additional metadata properties](https://specs.frictionlessdata.io/data-package/#descriptor) to the datapackage.json, e.g. 
+ +``` +{ + "name" : "coastal-data-system-near-real-time-wave-data", + "title" : "Coastal Data System – Near real time wave data", + "licenses" : [{ + "name": "CC-BY-4.0", + "path": "https://creativecommons.org/licenses/by/4.0/", + "title": "Creative Commons Attribution 4.0" + }], + "copyrightNotice": "© The State of Queensland 1995–2017", + "copyrightStatement": "https://www.qld.gov.au/legal/copyright", + "attributionText": "Science, Information Technology and Innovation, Queensland Government, Coastal Data System – Near real time wave data, licensed under Creative Commons Attribution 4.0 sourced on 26 December 2017", + "resources": [ + { + "path": "https://data.qld.gov.au/dataset/coastal-data-system-near-real-time-wave-data", + ... + } + ] +} +``` + +### Copyright belongs to multiple parties + +Sometimes data in a resource may be combined from multiple sources that are licensed in different ways. You can indicate this by placing two or more licenses in the `licenses` property. Further explanation should be given in the `README.md`. + +``` +"licenses": [{ + "name": "PDDL-1.0", + "path": "http://opendatacommons.org/licenses/pddl/", + "title": "Open Data Commons Public Domain Dedication and License v1.0" + }, + { + "name": "CC-BY-SA-4.0", + "path": "https://creativecommons.org/licenses/by-sa/4.0/", + "title": "Creative Commons Attribution Share-Alike 4.0" + }] +``` + +### License may become legally binding + +The [specification](https://specs.frictionlessdata.io/data-package/#licenses) for `licenses` states: + +> **This property is not legally binding and does not guarantee the package is licensed under the terms defined in this property.** + +A data package may be uploaded to a data platform and the `licenses` applied to the data resources may be publicly displayed. This may make the license legally binding, or give the perception that it is. Please check your specific situation before publishing the data.
+ +### Software may not fully support the Frictionless Data specification + +Be aware that some data platforms or software may not fully support the Frictionless Data specification. This may result in license information being lost or other issues. Always test your data publication to ensure you communicate the correct license information. + +For example, at the time of writing: + +* [CKAN Data Package extension](https://github.com/frictionlessdata/ckanext-datapackager): + * does not upload the `README.md` file in a data package. If you have described license information in the `README.md` file, this will be lost ([issue #60](https://github.com/frictionlessdata/ckanext-datapackager/issues/60)) + * does not display license information in the datapackage.json file correctly ([issue #62](https://github.com/frictionlessdata/ckanext-datapackager/issues/62)) + +* [Data Curator](/blog/2019/03/01/datacurator/) only allows the user to select from a limited set of open licenses to describe the data package and data resource licenses. diff --git a/site/blog/2018-04-04-creating-tabular-data-packages-in-javascript/README.md b/site/blog/2018-04-04-creating-tabular-data-packages-in-javascript/README.md new file mode 100644 index 000000000..6de43de64 --- /dev/null +++ b/site/blog/2018-04-04-creating-tabular-data-packages-in-javascript/README.md @@ -0,0 +1,35 @@ +--- +title: Creating Data Packages in JavaScript +date: 2018-04-04 +tags: ["JavaScript"] +category: working-with-data-packages +--- + +This tutorial will show you how to install the JavaScript libraries for working with Data Packages and Table Schema, load a CSV file, infer its schema, and write a Tabular Data Package. + +## Setup + +For this tutorial we will need [datapackage-js](https://github.com/frictionlessdata/datapackage-js), which is a JavaScript library for working with Data Packages.
+ +Using Node Package Manager (`npm`), install the latest version of the `datapackage` package by entering the following into your command line: +```bash +npm install datapackage@latest +``` + +Run the `datapackage --help` command to find out all options available to you. + +## Creating a package + +The basic building block of a data package is the `datapackage.json` file. It contains the schema and metadata of your data collections. + +Now that the node package for working with data packages has been installed, create a directory for your project, and use the command `datapackage infer path/to/file.csv` to generate a schema for your dataset. To save this file in the directory for editing and sharing, simply append `> datapackage.json` to the command above, like so: + +```bash +datapackage infer path/to/file.csv > datapackage.json +``` + +This creates a `datapackage.json` file in this directory. + +## Publishing + +Now that you have created your Data Package, you might want to [publish your data online](/blog/2016/08/29/publish-online/) so that you can share it with others. diff --git a/site/blog/2018-04-05-joining-tabular-data-in-python/README.md b/site/blog/2018-04-05-joining-tabular-data-in-python/README.md new file mode 100644 index 000000000..9455ffdc7 --- /dev/null +++ b/site/blog/2018-04-05-joining-tabular-data-in-python/README.md @@ -0,0 +1,107 @@ +--- +title: Joining Tabular Data +date: 2018-04-05 +tags: ["Python"] +category: working-with-data-packages +--- + +In a [separate guide](/blog/2018/04/06/joining-data-in-python/), I walked through joining a tabular dataset with one containing geographic information. In this guide, I will demonstrate an example of joining two tabular datasets. + +There are, of course, various more robust ways of joining tabular data. The example listed below is intended to demonstrate how the current libraries and specifications work together to perform this common task.
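As a point of comparison, the same kind of key-based join can be done with a general-purpose data library such as pandas. A self-contained sketch (the tiny inline tables here are stand-ins for the GDP and CPI datasets used in this guide):

```python
import pandas as pd

# toy stand-ins for the GDP and CPI tables described in this guide
gdp = pd.DataFrame({
    'Country Code': ['AFG', 'AFG'],
    'Year': [2004, 2005],
    'Value': [5285461999.33739, 6275076016.47174],
})
cpi = pd.DataFrame({
    'Country Code': ['AFG', 'AFG'],
    'Year': [2004, 2005],
    'CPI': [63.1318927309, 71.1409742918],
})

# inner join on the shared key columns, then derive the Real GDP column
merged = gdp.merge(cpi, on=['Country Code', 'Year'])
merged['Real GDP'] = 100 * merged['Value'] / merged['CPI']
```

The approach below uses the Frictionless libraries instead, which keeps the schema metadata alongside the data.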
+ +## Data + +In this case, we want to join a dataset containing the *nominal* Gross Domestic Product (GDP) per country per year with the Consumer Price Index (CPI) per country per year. By adjusting a given GDP measure by the CPI, a measure of inflation, one can derive the *real* GDP, a measure of economic output adjusted for price changes over time. To do that, of course, we need to join these independent datasets on the common values "Country Code" and "Year". + +### GDP + +| Country Name | Country Code | Year | Value | +|---|---|---|---| +| Afghanistan | AFG | 2004 | 5285461999.33739 | +| Afghanistan | AFG | 2005 | 6275076016.47174 | +| Afghanistan | AFG | 2006 | 7057598406.61553 | +| Afghanistan | AFG | 2007 | 9843842455.48323 | +| Afghanistan | AFG | 2008 | 10190529882.4878 | + +### CPI + +| Country Name | Country Code | Year | CPI | +|---|---|---|---| +| Afghanistan | AFG | 2004 | 63.1318927309 | +| Afghanistan | AFG | 2005 | 71.1409742918 | +| Afghanistan | AFG | 2006 | 76.3021776777 | +| Afghanistan | AFG | 2007 | 82.7748069188 | +| Afghanistan | AFG | 2008 | 108.0666000101 | + +## Loading the Data + +As usual, the first step is to load the Data Packages library `datapackage`. We also need to import `DictWriter` to write our merged rows to a new CSV. + +```python +import datapackage +from csv import DictWriter + +cpi_dp = datapackage.DataPackage('https://raw.githubusercontent.com/frictionlessdata/example-data-packages/master/cpi/datapackage.json') +gdp_dp = datapackage.DataPackage('https://raw.githubusercontent.com/frictionlessdata/example-data-packages/master/gross-domestic-product-all/datapackage.json') +``` + +Given that our source data has already been packaged in [Tabular Data Package](https://specs.frictionlessdata.io/tabular-data-package/) format, we know that we have a [*schema*](https://specs.frictionlessdata.io/table-schema) for each CSV which specifies useful information for each column.
We'd like to merge and preserve this schema information as we'll need it for specifying the combined schema in our new Data Package. Note that we're also adding a new derived column named 'Real GDP' and giving it a type of `number`. + +```python +field_info = [] +field_info.extend(cpi_dp.resources[0].descriptor['schema']['fields']) +field_info.extend(gdp_dp.resources[0].descriptor['schema']['fields']) +field_info.append({'name': 'Real GDP', 'type': 'number'}) +``` + +Now that we have this information, we can generate a `fieldnames` array containing only the names of the columns to eventually pass to `DictWriter` when we're ready to write out our new CSV. + +```python +fieldnames = [f['name'] for f in field_info] +``` + +## Joining the Data + +What follows is a fairly simple example of iterating through each row of each CSV and creating a new `merged_row` when 'Year' and 'Country Code' match on the two datasets. We are also calculating our derived 'Real GDP' column based on the information found in the original columns. + +```python +with open('real_gdp.csv', 'w') as csvfile: + writer = DictWriter(csvfile, fieldnames=fieldnames) + writer.writeheader() + for gdp_row in gdp_dp.resources[0].data: + for cpi_row in cpi_dp.resources[0].data: + if gdp_row['Year'] == cpi_row['Year'] and gdp_row['Country Code'] == cpi_row['Country Code']: + merged_row = gdp_row.copy() + merged_row.update(cpi_row) + merged_row.update({'Real GDP': 100*(float(gdp_row['Value'])/float(cpi_row['CPI']))}) + writer.writerow(merged_row) + +``` + +## Creating a New Data Package + +Now that we've created our new CSV `real_gdp.csv`, we can use the Data Package library to package it up with some useful metadata. Note that we are passing the merged `field_info` array into our `schema` definition. Given that we are generating this Data Package "by hand", we need to run the `validate` method on the new Data Package object to make sure that we are, indeed, creating a valid Data Package.
After validating the Data Package metadata, we can either write the Data Package directly or save the whole thing as a zip file using the `save` method. + +```python +dp = datapackage.Package() +dp.descriptor['name'] = 'real-gdp' +dp.descriptor['resources'] = [ + { + 'name': 'data', + 'path': 'real_gdp.csv', + 'format': 'csv', + 'schema': { + 'fields': field_info + } + } +] + +# commit the descriptor changes so the package object picks them up +dp.commit() +dp.validate() + +with open('datapackage.json', 'w') as f: + f.write(dp.to_json()) + +# dp.save("real_gdp.zip") +``` diff --git a/site/blog/2018-04-06-joining-data-in-python/README.md b/site/blog/2018-04-06-joining-data-in-python/README.md new file mode 100644 index 000000000..723b6e82c --- /dev/null +++ b/site/blog/2018-04-06-joining-data-in-python/README.md @@ -0,0 +1,103 @@ +--- +title: Joining Data +date: 2018-04-06 +tags: ["Python"] +description: A guide on how to join data packages with Python +category: working-with-data-packages +--- + +Joining multiple datasets on a common value or set of values is a common data wrangling task. For instance, one might have a dataset listing Gross Domestic Product (GDP) per country and a separate dataset containing geographic outlines of country borders. If these independent datasets have a shared property (for instance, the three-letter country code as [defined in ISO 3166-1](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-3)), we should be able to create one consolidated dataset to generate a map of GDP per country. This guide will walk through this simple use case. + +## Example Data + +For this example, we are going to use two example Data Packages from our [example data packages repository](https://github.com/frictionlessdata/example-data-packages/) with the properties described above. The first is an example of a Data Package containing a GeoJSON file. [GeoJSON](http://geojson.org/) is a format for representing geographical features in [JSON](http://json.org/).
This particular GeoJSON file lists countries in its `features` array and specifies the country code as a property on each "feature". In this case, the country code is stored on the key "ISO_A3" of the feature's `properties` object. + +```json +{ + "type": "FeatureCollection", + "features": [ + { + "type": "Feature", + "properties": { + "ADMIN": "Ukraine", + "ISO_A3": "UKR" + }, + "geometry": { + "type": "Polygon", + "coordinates": [ + "..." + ] + } + } + ] +} +``` + +The second Data Package is a typical [Tabular Data Package](https://specs.frictionlessdata.io/tabular-data-package) containing a GDP measure for each country in the world for the year 2014. Country codes are stored, naturally, on the "Country Code" column. + +| Country Name | Country Code | Year | Value | +|---|---|---|---| +| Ukraine | UKR | 2014 | 131805126738.287 | +| United Arab Emirates | ARE | 2014 | 401646583173.427 | +| United Kingdom | GBR | 2014 | 2941885537461.48 | +| United States | USA | 2014 | 17419000000000 | +| Uruguay | URY | 2014 | 57471277325.1312 | + +## Reading and Joining Data + +As in our [Using Data Packages in Python guide](/blog/2016/08/29/using-data-packages-in-python/), the first step before joining is to read the data for each Data Package onto our computer. We do this by importing the `datapackage` library and passing the Data Package URL to its `Package` constructor. We are also importing the standard Python `json` library to read and write our GeoJSON file.
+ +```python +import json +import datapackage + +countries_url = 'https://raw.githubusercontent.com/frictionlessdata/example-data-packages/master/geo-countries/datapackage.json' +gdp_url = 'https://raw.githubusercontent.com/frictionlessdata/example-data-packages/master/gross-domestic-product-2014/datapackage.json' + +countries_dp = datapackage.Package(countries_url) +gdp_dp = datapackage.Package(gdp_url) +``` + +Learn more about creating data packages in Python [in this tutorial](/blog/2016/07/21/creating-tabular-data-packages-in-python/). + +Our GeoJSON data is returned as a `bytes` object by the resource's `raw_read()` method. To create our `world` GeoJSON dict, we first need to decode this `bytes` object to a UTF-8 string and pass it to `json.loads`. + +```python +world = json.loads(countries_dp.get_resource('countries').raw_read().decode('UTF-8')) +``` + +At this point, joining the data can be accomplished by iterating through each country in the `world['features']` array and adding a property "GDP (2014)" if "Country Code" on the `gdp_dp` Data Package object matches "ISO_A3" on the given GeoJSON feature. The value of "GDP (2014)" is derived from the "Value" column on the `gdp_dp` Data Package object. + +```python +for feature in world['features']: + matches = [gdp['Value'] for gdp in gdp_dp.resources[0].data if gdp['Country Code'] == feature['properties']['ISO_A3']] + if matches: + feature['properties']['GDP (2014)'] = float(matches[0]) + else: + feature['properties']['GDP (2014)'] = 0 +``` + +Finally, we can output our consolidated GeoJSON dataset into a new file called "world_gdp_2014.geojson" using `json.dump` and create a new Data Package container for it.
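The output step itself is small; a sketch of writing the consolidated dict to disk (a minimal stand-in FeatureCollection is defined here so the fragment is self-contained; in the guide, reuse the `world` dict built above):

```python
import json

# stand-in for the `world` dict built above
world = {"type": "FeatureCollection", "features": []}

# write the consolidated GeoJSON to a new file
with open('world_gdp_2014.geojson', 'w') as f:
    json.dump(world, f)
```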
For a more thorough walkthrough on creating a Data Package, please consult the [Creating Data Packages in Python](/blog/2016/07/21/creating-tabular-data-packages-in-python/) guide.

```python
new_dp = datapackage.Package()
new_dp.descriptor['name'] = 'consolidated-dataset'
new_dp.descriptor['resources'] = [
    {
      'name': 'data',
      'path': 'world_gdp_2014.geojson'
    }
]
new_dp.commit()
new_dp.save('datapackage.zip')
```

We can now quickly render this GeoJSON file into a [choropleth map](https://en.wikipedia.org/wiki/Choropleth_map) using [QGIS](http://qgis.org/en/site/):

![GDP Map Example](./gdp_map_example.png)

Or we can rely on GitHub to render our GeoJSON for us. When you click a country, its property list will show up featuring "ADMIN", "ISO_A3", and the newly added "GDP (2014)" property. diff --git a/site/blog/2018-04-06-joining-data-in-python/gdp_map_example.png b/site/blog/2018-04-06-joining-data-in-python/gdp_map_example.png new file mode 100644 index 000000000..a0bb09a0d Binary files /dev/null and b/site/blog/2018-04-06-joining-data-in-python/gdp_map_example.png differ diff --git a/site/blog/2018-04-28-using-data-packages-in-java/README.md b/site/blog/2018-04-28-using-data-packages-in-java/README.md new file mode 100644 index 000000000..c785c1c5b --- /dev/null +++ b/site/blog/2018-04-28-using-data-packages-in-java/README.md @@ -0,0 +1,150 @@
---
title: Using Data Packages in Java
date: 2018-04-28
tags: ["Java"]
author: Georges Labrèche
description: A guide on how to use datapackage with Java
category: working-with-data-packages
---

Georges Labrèche was one of 2017's [Frictionless Data Tool Fund][toolfund] grantees, tasked with extending the implementation of the core Frictionless Data libraries to the Java programming language. You can read more about this in [his grantee profile](/blog/2017/10/24/georges-labreche/).

In this post, Labrèche will show you how to install and use the [Java](https://www.java.com/en/) libraries for working with [Tabular Data Packages][tdp].

Our goal in this tutorial is to load tabular data from a CSV file and infer its data types and table schema.

## Setup

First things first, you'll want to grab the [datapackage-java][dp-java] and [tableschema-java][ts-java] libraries.


## The Data

For our example, we will use a [Tabular Data Package][tdp] containing the periodic table. You can find the [data package descriptor][datapackage.json] and the [data][data.csv] on GitHub.

A [Data Package][dp] is a simple container format used to describe and package a collection of data. It consists of two parts:

* Metadata that describes the structure and contents of the package
* Resources such as data files that form the contents of the package

## Packaging

Let's start by fetching and packaging the data:

```java

// fetch the data
URL url = new URL("https://raw.githubusercontent.com/frictionlessdata/example-data-packages/62d47b454d95a95b6029214b9533de79401e953a/periodic-table/datapackage.json");

// package the data
Package dp = new Package(url);

```

That's it, you're all set to start playing with the packaged data. There are parameters you can set, such as loading a schema or imposing strict validation, so be sure to go through the project's [README][dp-java-readme] for more detail.

## Iterating

Now that you have a Data Package instance, let's see what the data looks like. A data package can contain more than one resource, so you have to use the `Package.getResource()` method to specify which resource you'd like to access.

Let's iterate over the data:

```java

// Get a resource named data from the data package
Resource resource = dp.getResource("data");

// Get the Iterator
Iterator iter = resource.iter();

// Iterate
while(iter.hasNext()){
    String[] row = (String[]) iter.next();
    String atomicNumber = row[0];
    String symbol = row[1];
    String name = row[2];
    String atomicMass = row[3];
    String metalOrNonMetal = row[4];
}

```

Notice how we're fetching all values as `String`. This may not be what you want, particularly for the atomic number and mass. Alternatively, you can trigger data type inference and casting like this:

```java

// Get the Iterator.
// The third boolean is the cast flag.
Iterator iter = resource.iter(false, false, true);

// Iterate
while(iter.hasNext()){
    Object[] row = (Object[]) iter.next();
    int atomicNumber = (int) row[0];
    String symbol = (String) row[1];
    String name = (String) row[2];
    float atomicMass = (float) row[3];
    String metalOrNonMetal = (String) row[4];
}

```

And that's it, your data is now associated with the appropriate data types!

## Inferring the Schema

We wouldn't have had to infer the data types if we had included a [Table Schema][ts] when creating an instance of our Data Package. If a Table Schema is not available, then it's something that can also be inferred and created with `tableschema-java`:

```java

URL url = new URL("https://raw.githubusercontent.com/frictionlessdata/example-data-packages/62d47b454d95a95b6029214b9533de79401e953a/periodic-table/data.csv");
Table table = new Table(url);
Schema schema = table.inferSchema();
schema.write("/path/to/write/schema.json");

```

The type inference algorithm tries to cast each value to the available types; every successful cast increments a popularity score for the type in question, and the type with the best score is returned at the end.

The inference algorithm traverses all of the table's rows and attempts to cast every single value of the table.
When dealing with large tables, you might want to limit the number of rows that the inference algorithm processes: + +```java + +// Only process the first 25 rows for type inference. +Schema schema = table.inferSchema(25); + +``` + +Be sure to go through `tableschema-java`'s [README][ts-java-readme] as well to learn more about how to operate with [Table Schema][ts]. + + +## Contributing +In case you discovered an issue that you'd like to contribute a fix for, or if you would like to extend functionality: + +```sh + +# install jabba and maven2 +$ cd tableschema-java +$ jabba install 1.8 +$ jabba use 1.8 +$ mvn install -DskipTests=true -Dmaven.javadoc.skip=true -B -V +$ mvn test -B + +``` + +Make sure that all tests pass, and submit a PR with your contributions once you're ready. + +We also welcome your feedback and questions via our [Frictionless Data Gitter chat][fd-gitter] or via [GitHub issues][dp-java-issues] on the datapackage-java repository. + +[dp]: https://specs.frictionlessdata.io/data-package/ +[tdp]: https://specs.frictionlessdata.io/tabular-data-package/ +[ts]: https://specs.frictionlessdata.io/table-schema/ +[toolfund]: https://toolfund.frictionlessdata.io +[dp-java]: https://github.com/frictionlessdata/datapackage-java +[ts-java]: https://github.com/frictionlessdata/tableschema-java +[fd-gitter]: http://gitter.im/frictionlessdata/chat +[dp-java-issues]: https://github.com/frictionlessdata/datapackage-java/issues +[dp-java-readme]: https://github.com/frictionlessdata/datapackage-java/blob/master/README.md +[ts-java-readme]: https://github.com/frictionlessdata/tableschema-java/blob/master/README.md +[datapackage.json]: https://raw.githubusercontent.com/frictionlessdata/example-data-packages/62d47b454d95a95b6029214b9533de79401e953a/periodic-table/datapackage.json +[data.csv]: https://raw.githubusercontent.com/frictionlessdata/example-data-packages/62d47b454d95a95b6029214b9533de79401e953a/periodic-table/data.csv diff --git 
a/site/blog/2018-05-07-using-data-packages-in-clojure/README.md b/site/blog/2018-05-07-using-data-packages-in-clojure/README.md new file mode 100644 index 000000000..68e933d24 --- /dev/null +++ b/site/blog/2018-05-07-using-data-packages-in-clojure/README.md @@ -0,0 +1,145 @@
---
title: Using Data Packages in Clojure
date: 2018-05-07
tags: ["Clojure"]
author: Matt Thompson
description: A guide on how to use datapackage with Clojure
category: working-with-data-packages
---

Matt Thompson was one of 2017's [Frictionless Data Tool Fund][toolfund] grantees, tasked with extending the implementation of the core Frictionless Data [data package][dp-clj] and [table schema][ts-clj] libraries to the Clojure programming language. You can read more about this in [his grantee profile][toolfund-matt]. In this post, Thompson will show you how to set up and use the [Clojure](http://clojure.org) libraries for working with [Tabular Data Packages][tdp].

This tutorial uses a worked example of downloading a data package from a remote location on the web, and using the Frictionless Data tools to read its contents and metadata into Clojure data structures.

## Setup

First, we need to set up the project structure using the [Leiningen](http://leiningen.org) tool. If you don't have Leiningen set up on your system, follow the link to download and install it. Once it is set up, run the following command from the command line to create the folders and files for a basic Clojure project:

```sh

lein new periodic-table

```

This will create the *periodic-table* folder. Inside the *periodic-table/src/periodic_table* folder should be a file named *core.clj*. This is the file you need to edit during this tutorial.

## The Data

For this tutorial, we will use a pre-created data package, the Periodic Table Data Package hosted by the Frictionless Data project. A [Data Package][dp] is a simple container format used to describe and package a collection of data.
It consists of two parts:

* Metadata that describes the structure and contents of the package
* Resources such as data files that form the contents of the package

Our Clojure code will download the data package and process it using the metadata information contained in the package. The data package can be found [here on GitHub][datapackage.json].

The data package contains data about elements in the periodic table, including each element's name, atomic number, symbol and atomic weight. The table below shows a sample taken from the first three rows of the CSV file:

| atomic number | symbol | name | atomic mass | metal or nonmetal? |
|---------------|--------|----------|-------------|--------------------|
| 1 | H | Hydrogen | 1.00794 | nonmetal |
| 2 | He | Helium | 4.002602 | noble gas |
| 3 | Li | Lithium | 6.941 | alkali metal |


## Loading the Data Package

The first step is to load the data package into a Clojure data structure (a map). We start by requiring the data package library in our code (giving it the alias **dp**). Then we can use the **load** function to load our data package into our project. Enter the following code into the core.clj file:

```clojure
(ns periodic-table.core
  (:require [frictionlessdata.datapackage :as dp]
            [frictionlessdata.tableschema :as ts]
            [clojure.spec.alpha :as s]))

(def pkg
  (dp/load "https://raw.githubusercontent.com/frictionlessdata/example-data-packages/62d47b454d95a95b6029214b9533de79401e953a/periodic-table/datapackage.json"))
```

This pulls the data in from the remote GitHub location and converts the metadata into a Clojure map.
We can access this metadata by using the `descriptor` function along with keys such as `:name` and `:title` to get the relevant information:

```clojure
(println (str "Package name:" (dp/descriptor pkg :name)))
(println (str "Package title:" (dp/descriptor pkg :title)))
```

The package descriptor contains metadata that describes the contents of the data package. What about accessing the data itself? We can get to it using the `get-resources` function:

```clojure
(def table (dp/get-resources pkg :data))

(doseq [row table]
  (println row))
```

The above code locates the data in the data package, then goes through it line by line and prints the contents.

## Casting Types with core.spec

We can use Clojure's [spec](https://clojure.org/guides/spec) library to define a schema for our data, which can then be used to cast the types of the data in the CSV file.

Below is a spec description of a periodic element type, consisting of an atomic number, atomic symbol, the element's name, its mass, and whether or not the element is a metal or non-metal:

```clojure
(s/def ::number int?)
(s/def ::symbol string?)
(s/def ::name string?)
(s/def ::mass float?)
(s/def ::metal string?)

(s/def ::element (s/keys :req [::number ::symbol ::name ::mass ::metal]))
```

The above spec can be used to cast values in our tabular data so that they match the specified schema. The example below shows our tabular data values being cast to fit the spec description. Then the `-main` function loops through the elements, printing only those with an atomic mass of less than 10.

```clojure
(ns periodic-table.core
  (:require [frictionlessdata.datapackage :as dp]
            [frictionlessdata.tableschema :as ts]
            [clojure.spec.alpha :as s]))

(s/def ::number int?)
(s/def ::symbol string?)
(s/def ::name string?)
(s/def ::mass float?)
(s/def ::metal string?)

(s/def ::element (s/keys :req [::number ::symbol ::name ::mass ::metal]))

(def pkg
  (dp/load "https://raw.githubusercontent.com/frictionlessdata/example-data-packages/62d47b454d95a95b6029214b9533de79401e953a/periodic-table/datapackage.json"))

(def resources (dp/get-resources pkg :data))

(def elements (dp/cast resources ::element))

(defn -main []
  (doseq [e elements]
    (if (< (:mass e) 10)
      (println e))))
```

When run, the program produces the following output:

```sh
$ lein run
{::number 1 ::symbol "H" ::name "Hydrogen" ::mass 1.00794 ::metal "nonmetal"}
{::number 2 ::symbol "He" ::name "Helium" ::mass 4.002602 ::metal "noble gas"}
{::number 3 ::symbol "Li" ::name "Lithium" ::mass 6.941 ::metal "alkali metal"}
{::number 4 ::symbol "Be" ::name "Beryllium" ::mass 9.012182 ::metal "alkaline earth metal"}
```

This concludes our simple tutorial for using the Clojure libraries for Frictionless Data.

We welcome your feedback and questions via our [Frictionless Data Gitter chat][fd-gitter] or via [GitHub issues][dp-clj-issues] on the [datapackage-clj][dp-clj] repository.

[dp]: https://specs.frictionlessdata.io/data-package/
[tdp]: https://specs.frictionlessdata.io/tabular-data-package/
[ts]: /table-schema/
[toolfund]: https://toolfund.frictionlessdata.io
[dp-clj]: https://github.com/frictionlessdata/datapackage-clj
[ts-clj]: https://github.com/frictionlessdata/tableschema-clj
[fd-gitter]: http://gitter.im/frictionlessdata/chat
[dp-clj-issues]: https://github.com/frictionlessdata/datapackage-clj/issues
[datapackage.json]: https://raw.githubusercontent.com/frictionlessdata/example-data-packages/62d47b454d95a95b6029214b9533de79401e953a/periodic-table/datapackage.json
[toolfund-matt]: /blog/2017/10/26/matt-thompson/ diff --git a/site/blog/2018-07-09-csv/README.md b/site/blog/2018-07-09-csv/README.md new file mode 100644 index 000000000..76a9e9602 --- /dev/null +++ b/site/blog/2018-07-09-csv/README.md @@ -0,0 +1,208 @@
---
title: CSV - Comma Separated Values
date: 2018-07-09
tags: ["csv"]
category: general
---

This page provides an overview of the CSV (Comma Separated Values) format for data.


CSV is a very old, very simple and very common "standard" for (tabular) data. We say "standard" in quotes because there was never a formal standard for CSV, though in 2005 someone did put together an [RFC][rfc] for it.

CSV is supported by a **huge** number of tools, from spreadsheets like Excel, OpenOffice and Google Docs to complex databases to almost all programming languages. As such it is probably the most widely supported structured data format in the world.

----

## The Format

Key points are:

* CSV is probably the simplest possible structured format for data
* CSV strikes a delicate balance, remaining readable by both machines & humans
* CSV is a two dimensional structure consisting of rows of data, each row containing multiple cells. Rows are (usually) separated by line terminators so each row corresponds to one line.
Cells within a row are separated by commas (hence the C(ommas) part)
  * Note that strictly we're really talking about DSV files, in that we can allow 'delimiters' between cells other than a comma. However, many people and many programs still call such data CSV (since comma is so common as the delimiter)
* CSV is a "text-based" format, i.e. a CSV file *is* a text file. This makes it amenable for processing with all kinds of text-oriented tools (from text editors to [unix tools like sed, grep etc][cldw])

[cldw]: https://github.com/rgrp/command-line-data-wrangling

### What a CSV looks like

If you open up a CSV file in a text editor it would look something like:

    A,B,C
    1,2,3
    4,"5,3",6

Here there are 3 rows each of 3 columns. Notice how the second column in the last line is "quoted" because the content of that value actually contains a "," character. Without the quotes this character would be interpreted as a column separator. To avoid this confusion we put quotes around the whole value. The result is that we have 3 rows each of 3 columns. (Note a CSV file does not *have* to have the same number of columns in each row.)

### Dialects of CSVs

As mentioned above, CSV files can have quite a bit of variation in structure. Key options are:

* Field delimiter: rather than comma `,` people often use things like `\t` (tab), `;` or `|`
* Record terminator / line terminator: is it `\n` (unix), `\r\n` (dos) or something else ...
* How do you quote records that contain your delimiter?

You can read more in the [CSV Dialect Description Format][spec-csvddf], which defines a small JSON-oriented structure for specifying what options a CSV uses.

### What is Missing in CSV?

* CSV lacks any way to specify type information: that is, there is no way to distinguish "1" the string from 1 the number. This shortcoming can be addressed by adding some form of simple schema.
For example, [Table Schema][ts] provides a very simple way to describe your schema externally, whilst [Linked CSV][linked-csv] is an example of doing this "inline" (that is, in the CSV).
* No support for relationships between different "tables". This is similar to the previous point, and again [Table Schema][ts] provides a way to address this by providing additional schema information externally.
* CSV is really only for tabular data -- it is not so good for data with nesting or where structure is not especially tabular (though remember most data can be put into tabular form if you try hard enough!)

### Links

Specifications and overviews:

* [RFC specification of CSV][rfc]
* [CSV Dialect Description Format][spec-csvddf]
* [CSV on Wikipedia][wiki]

----

## Tools

The great thing about CSV is the huge level of tool support. The following is not intended to be comprehensive but is more at the eclectic end of the spectrum.

### Desktop

All spreadsheet programs, including Excel, OpenOffice and Google Docs Spreadsheets, support opening, editing and saving CSVs.

### View a CSV file in your Browser

You can view a CSV file in your browser (saving you the hassle of downloading it and opening it). Options include:

* You can use datapipes:

  Just paste your CSV file and away you go.

* Install this [Chrome Browser Extension][chrome-csv]. This can be used both for online files and for files on your local disk (if you open them with your browser!)

### Unix Command Line Manipulation

See

* Using [unix command line tools on CSV][cldw]
* The wonderful [csvkit][] (python)

### Power Tools

* [OpenRefine][] is a powerful tool for editing and manipulating data and works very well with CSV
* [Data Explorer][datax] supports importing CSVs and manipulating and changing them using javascript in the browser

### Libraries

This is heavily biased towards python!
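As a quick illustration of the quoting and dialect points above, here is a sketch using Python's built-in `csv` module (the data is the small example from earlier; the tab-separated variant is made up for the example):

```python
import csv
import io

# The quoted example from above: "5,3" contains the delimiter,
# so it is wrapped in quotes and parsed back as a single cell.
text = 'A,B,C\n1,2,3\n4,"5,3",6\n'
rows = list(csv.reader(io.StringIO(text)))
print(rows)  # [['A', 'B', 'C'], ['1', '2', '3'], ['4', '5,3', '6']]

# Dialect options: e.g. reading tab-separated data by
# passing a different field delimiter.
tsv = 'A\tB\tC\n1\t2\t3\n'
tsv_rows = list(csv.reader(io.StringIO(tsv), delimiter='\t'))
print(tsv_rows)  # [['A', 'B', 'C'], ['1', '2', '3']]
```

Note how the quoted cell comes back as the single value `5,3`, while the unquoted commas act as separators.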
+ +#### Python + +* Built in csv library is good +* The wonderful [csvkit][] (python) +* [messytables][] (python) - convert lots of badly structured data into CSV (or + other formats) + +#### Node + +Nothing in standard lib yet and best option seems to be: + +* + +---- + +## Tips and Tricks + +### CSVs and Git + +Get git to handle CSV diffs in a sensible way (very useful if you are [using +git or another version control system to store data][git-for-data]). + +Make these changes to config files: + + # ~/.config/git/attributes + *.csv diff=csv + + # ~/.gitconfig + [diff "csv"] + wordRegex = [^,\n]+[,\n]|[,] + +Then do: + + git diff --word-diff + # make it even nicer + git diff --word-diff --color-words + +Credit for these fixups to [contributors on this question on +StackExchange](http://opendata.stackexchange.com/questions/748/is-there-a-git-for-data) +and to [James Smith](http://theodi.org/blog/adapting-git-simple-data). + +[rfc]: http://tools.ietf.org/html/rfc4180 +[wiki]: http://en.wikipedia.org/wiki/Comma-separated_values +[csvkit]: http://csvkit.readthedocs.org/ +[messytables]: http://messytables.readthedocs.org +[git-for-data]: http://blog.okfn.org/2013/07/02/git-and-github-for-data/ +[linked-csv]: http://jenit.github.io/linked-csv/ +[chrome-csv]: https://chrome.google.com/webstore/detail/recline-csv-viewer/ibfcfelnbfhlbpelldnngdcklnndhael +[OpenRefine]: http://openrefine.org/ +[datax]: http://explorer.okfnlabs.org/ + +[dp]: /data-package +[dp-main]: /data-package +[tdp]: /data-package/#tabular-data-package +[ts]: /table-schema/ +[ts-types]: https://specs.frictionlessdata.io/table-schema/#field-descriptors +[csv]: /blog/2018/07/09/csv/ +[json]: http://en.wikipedia.org/wiki/JSON + +[spec-dp]: https://specs.frictionlessdata.io/data-package/ +[spec-tdp]: https://specs.frictionlessdata.io/tabular-data-package/ +[spec-ts]: https://specs.frictionlessdata.io/table-schema/ +[spec-csvddf]: https://specs.frictionlessdata.io/csv-dialect/ + +[publish]: /docs/publish/ 

[pub-tabular]: /blog/2016/07/21/publish-tabular/
[pub-online]: /blog/2016/08/29/publish-online/
[pub-any]: /blog/2016/07/21/publish-any/
[pub-geo]: /blog/2016/04/30/publish-geo/
[pub-faq]: /blog/2016/04/20/publish-faq/

[dp-creator]: http://create.frictionlessdata.io
[dp-viewer]: http://create.frictionlessdata.io diff --git a/site/blog/2018-07-09-developer-guide/README.md b/site/blog/2018-07-09-developer-guide/README.md new file mode 100644 index 000000000..1c1e9810e --- /dev/null +++ b/site/blog/2018-07-09-developer-guide/README.md @@ -0,0 +1,223 @@
---
title: Developer Guide
date: 2018-07-09
tags:
category: contributing
---


This guide introduces you to the Frictionless Data tool stack and how you can contribute to it. *Update note (2021): this blog is out of date. Please see the [contributing guide](/work-with-us/contribute/) for updated information.*



## Asking questions and getting help

If you have a question or want help, the best way to get assistance is to join our public chat channel and ask there -- prompt responses are guaranteed:



## Example and Test Data Packages

We have prepared a variety of example and test data packages for use in development:

* Standard test data packages in the Python test suite:
* Exemplar data packages (used in tutorials):
* Core Data Packages -- a variety of high quality "real-world" reference and indicator datasets as data packages:


## Key Concepts and Pre-requisites

This entity diagram gives an overview of how the main objects fit together. The top row is a generic Data Package and the row below shows the case of Tabular Data Package.

This guide will focus on [Tabular Data Packages][spec-tdp] as that is the most commonly used form of Data Packages and is suited to most tools.

![overview of data packages and tabular data packages](./overview-of-data-packages.png)
*overview of data packages and tabular data packages*

This guide will assume you already have some high-level familiarity with the [Data Package family of specifications][spec-dp]. Please take a few minutes to look at the [overview][dp-main] if you are not yet familiar with those specs.

## Implementing a Data Package Tool Stack

Here's a diagram that illustrates some of the core components of a full Data Package implementation.

The *italicised items* are there to indicate that this functionality is less important and is often not included in implementations.

![core components of a full Data Package implementation](./data-package-core-components.png)
*core components of a full Data Package implementation*

### General Introduction

As a Developer, the primary actions you want to support are:

* Importing data (and metadata) in a Data Package into your system
* Exporting data (and metadata) from your system to a Data Package

Additional actions include:

* Creating a Data Package from scratch
* Validating the data in a Data Package (is the data how it says it should be)
* Validating the metadata in a Data Package
* Visualizing the Data Package
* Publishing the Data Package to an online repository

### Programming Language

This is example pseudo-code for a Data Package library in a programming language like Python or Javascript.

**Importing a Data Package**

```
# location of Data Package e.g.
URL or path on disk
var location = /my/data/package/

# this "imports" the Data Package providing a native DataPackage object to work with
# Note: you usually will not load the data itself
var myDataPackage = new DataPackage(location)
var myDataResource = myDataPackage.getResource(indexOrNameOfResource)

# this would return an iterator over row objects if the data was in rows
# optional support for casting data to the right type based on Table Schema
var dataStream = myDataResource.stream(cast=True)

# instead of an iterator you may want simply to convert to native structured data type
# for example, in R where you have a dataframe you would do something like
var dataframe = myDataResource.asDataFrame()
```

**Accessing metadata**

```
# Here we access the Data Package metadata
# the exact accessor structure is up to you - here it is an attribute called
# descriptor that acts like a dictionary
print myDataPackage.descriptor['title']
```

**Exporting a Data Package**

A simple functional style approach that gets across the idea:

```
# e.g.
a location on disk
var exportLocation = /path/to/where/data/package/should/go
export_data_package(nativeDataObject, exportLocation)
```

A more object-oriented model fitting with our previous examples would be:

```
var myDataPackage = export_data_package(nativeDataObject)
myDataPackage.save(exportLocation)

# if the native data is more like a table then you might have
var myDataPackage = new DataPackage()
myDataPackage.addResourceFromNativeDataObject(nativeDataObject)

# once populated you can save it to disk
myDataPackage.saveToDisk(path)

# You can also provide access to the Data Package datapackage.json
# That way clients of your library can decide how they save this themselves
var readyForJSONSaving = myDataPackage.dataPackageJSON()
saveJson(readyForJSONSaving, '/path/to/save/datapackage.json')
```

**Creating a Data Package from scratch**

```
var myMetadata = {
  title: 'My Amazing Data'
}
var myDataPackage = new DataPackage(myMetadata)
```

**Data Validation**

This is Tabular Data specific.

```
var resource = myDataPackage.getResource()
# check the data conforms to the Table Schema
resource.validate()

# more explicit version might look like
var schema = resource.schemaAsJSON()
var tsValidator = new TSValidator(schema)
# validate a data stream
tsValidator.validate(resource.stream())
```

**Validating Metadata**



### Specific Software and Platforms

For a particular tool or platform usually all you need is simple import or export:

```
# import into SQL (implemented in some language)
import_datapackage_into_sql(pathToDataPackage, sqlDatabaseInfo)

# import into Google BigQuery
import_datapackage_into_bigquery(pathToDataPackage, bigQueryInfo)
```

## Examples

### Python

The main Python library for working with Data Packages is `datapackage`:

See

Additional functionality such as Table Schema integration:

*
*
*

`tabulator` is a utility library that provides a consistent interface for reading tabular data:



Here's an overview of the Python libraries available and how they fit together:

![how the different tableschema libraries in python fit together](./tableschema-python.png)
*how the different tableschema libraries in python fit together*

### Javascript

Following "Node" style we have partitioned the Javascript library into pieces, see this list of libraries:

*

### SQL Integration

[Here's a walk-through](https://github.com/frictionlessdata/tableschema-sql-py) of the SQL integration for [Table Schema][spec-ts] written in python. This integration allows you to generate SQL tables, load and extract data based on [Table Schema][spec-ts] descriptors.
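To make the idea concrete, here is a rough sketch (not the actual `tableschema-sql-py` API; the descriptor, type mapping and table name are invented for illustration) of how a Table Schema descriptor can drive SQL table creation, using only Python's standard `sqlite3` module:

```python
import sqlite3

# Hypothetical mapping from Table Schema field types to SQL column
# types -- an illustration of the idea, not the library's own mapping.
SQL_TYPES = {'integer': 'INTEGER', 'number': 'REAL', 'string': 'TEXT'}

# A made-up Table Schema descriptor for the periodic-table example
schema = {'fields': [
    {'name': 'atomic_number', 'type': 'integer'},
    {'name': 'symbol', 'type': 'string'},
    {'name': 'atomic_mass', 'type': 'number'},
]}

# Build one SQL column definition per Table Schema field
columns = ', '.join(
    '{} {}'.format(field['name'], SQL_TYPES.get(field['type'], 'TEXT'))
    for field in schema['fields'])

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE elements ({})'.format(columns))
conn.execute("INSERT INTO elements VALUES (1, 'H', 1.00794)")
print(conn.execute('SELECT symbol FROM elements').fetchone())  # ('H',)
```

The real integration handles much more (constraints, dialects, loading and extracting data), but the descriptor-to-DDL translation above is the core of it.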
+ +Related blog post: + +[dp]: /data-package +[dp-main]: /introduction +[tdp]: /data-package/#tabular-data-package +[ts]: /table-schema/ +[ts-types]: https://specs.frictionlessdata.io/table-schema/#field-descriptors +[csv]: /blog/2018/07/09/csv/ +[json]: http://en.wikipedia.org/wiki/JSON + +[spec-dp]: https://specs.frictionlessdata.io/data-package/ +[spec-tdp]: https://specs.frictionlessdata.io/tabular-data-package/ +[spec-ts]: https://specs.frictionlessdata.io/table-schema/ +[spec-csvddf]: https://specs.frictionlessdata.io/csv-dialect/ + +[publish]: /docs/publish/ +[pub-tabular]: /blog/2016/07/21/publish-tabular/ +[pub-online]: /blog/2016/08/29/publish-online/ +[pub-any]: /blog/2016/07/21/publish-any/ +[pub-geo]: /blog/2016/04/30/publish-geo/ +[pub-faq]: /blog/2016/04/20/publish-faq/ + +[dp-creator]: http://create.frictionlessdata.io +[dp-viewer]: http://create.frictionlessdata.io diff --git a/site/blog/2018-07-09-developer-guide/data-package-core-components.png b/site/blog/2018-07-09-developer-guide/data-package-core-components.png new file mode 100644 index 000000000..d9d02e93e Binary files /dev/null and b/site/blog/2018-07-09-developer-guide/data-package-core-components.png differ diff --git a/site/blog/2018-07-09-developer-guide/overview-of-data-packages.png b/site/blog/2018-07-09-developer-guide/overview-of-data-packages.png new file mode 100644 index 000000000..79fbd6816 Binary files /dev/null and b/site/blog/2018-07-09-developer-guide/overview-of-data-packages.png differ diff --git a/site/blog/2018-07-09-developer-guide/tableschema-python.png b/site/blog/2018-07-09-developer-guide/tableschema-python.png new file mode 100644 index 000000000..c3aeb81f4 Binary files /dev/null and b/site/blog/2018-07-09-developer-guide/tableschema-python.png differ diff --git a/site/blog/2018-07-09-validating-data/README.md b/site/blog/2018-07-09-validating-data/README.md new file mode 100644 index 000000000..4e34b21fd --- /dev/null +++ 
b/site/blog/2018-07-09-validating-data/README.md @@ -0,0 +1,116 @@
---
title: Validating Data
date: 2018-07-09
tags:
category: validating-data
---

Tabular data (e.g. data stored in [CSV](/blog/2018/07/09/csv/) and Excel worksheets) is one of the most common forms of data available on the web. This guide will walk through validating tabular data using Frictionless Data software.

This guide shows how you can validate your tabular data and check both:

* Structure: are there too many rows or columns in some places?
* Schema: does the data fit its schema? Are the values in the date column actually dates? Are all the numbers greater than zero?

We will walk through two methods of performing validation:

* Web service: an online service called **goodtables**. This option requires no technical knowledge or expertise.
* Using the [Python goodtables library](https://github.com/frictionlessdata/goodtables-py). This allows you full control over the validation process but requires knowledge of Python.

## goodtables

[goodtables](http://goodtables.io/) is a free, open-source, hosted service for validating tabular data. goodtables checks your data for its *structure*, and, optionally, its adherence to a specified *schema*. Where the latter fails, goodtables highlights content errors so you can fix them speedily.

goodtables will give quick and simple feedback on where your tabular data may not yet be quite perfect.

![goodtables screenshot](./goodtables-screenshot.png)

To get started with one-off validation of your tabular datasets, use [try.goodtables.io](http://try.goodtables.io). All you need to do is upload or provide a link to a CSV file and hit the "Validate" button.

![goodtables Provide URL](./goodtables-provide-data.png)

![goodtables Validate button](./goodtables-validate.png)

If your data is structurally valid, you should receive the following result:

![goodtables Valid](./goodtables-valid.png)

If not...
+ +![goodtables Invalid](./goodtables-invalid.png) + +The report should highlight the structural issues found in your data for correction. For instance, a poorly structured tabular dataset may consist of a header row with too many (or too few) columns compared to its data rows. + +You can also provide a schema for your tabular data defined using JSON Table Schema. + +![goodtables Provide Schema](./goodtables-provide-schema.png) + +Briefly, the format allows users to specify not only the types of information within each column in a tabular dataset, but also expected values. For more information, see the [introduction](/introduction/) or [the full standard](https://specs.frictionlessdata.io/table-schema/). + +## Python + goodtables + +goodtables is also available as a Python library. The following short snippets demonstrate loading and validating data in a file called `data.csv` (and, in the second example, validating the same data file against `schema.json`). + +### Validating Structure + +```python +from goodtables import validate + +report = validate('data.csv') +report['valid']  # overall result: True or False +report['table-count']  # number of tables validated +report['error-count']  # total number of errors found +report['tables'][0]['valid']  # result for the first table +report['tables'][0]['source']  # source of the first table +report['tables'][0]['errors'][0]['code']  # code of the first error, if any +``` + + +### Validating Schema + +```python +from goodtables import validate + +# sync source/schema fields order +report = validate('data.csv', schema='schema.json', order_fields=True) + +... +``` + +## Continuous Data Validation + +To streamline data validation and allow seamless integration into different publishing workflows, we have set up a continuous data validation hosted service that builds on top of Frictionless Data libraries. goodtables.io provides support for different backends.
At this time, users can use it to check any datasets hosted on GitHub and Amazon S3 buckets, automatically running validation against data files every time they are updated, and providing a user-friendly report of any issues found. + +![Data Valid](./goodtables-continuous-validation.png) + +Start your continuous data validation here: + +Blog post on goodtables Python library and goodtables web service: + +See the `README.md` for more information. + +Find more examples of validating tabular data in the [Frictionless Data Field Guide][field-guide]. + +[dp]: /data-package +[dp-main]: /data-package +[tdp]: /data-package/#tabular-data-package +[ts]: /table-schema/ +[ts-types]: https://specs.frictionlessdata.io/table-schema/#field-descriptors +[csv]: /blog/2018/07/09/csv/ +[json]: http://en.wikipedia.org/wiki/JSON +[field-guide]: /tag/field-guide + +[spec-dp]: https://specs.frictionlessdata.io/data-package/ +[spec-tdp]: https://specs.frictionlessdata.io/tabular-data-package/ +[spec-ts]: https://specs.frictionlessdata.io/table-schema/ +[spec-csvddf]: https://specs.frictionlessdata.io/csv-dialect/ + +[publish]: /docs/publish/ +[pub-tabular]: /blog/2016/07/21/publish-tabular/ +[pub-online]: /blog/2016/08/29/publish-online/ +[pub-any]: /blog/2016/07/21/publish-any/ +[pub-geo]: /blog/2016/04/30/publish-geo/ +[pub-faq]: /blog/2016/04/20/publish-faq/ + +[dp-creator]: http://create.frictionlessdata.io +[dp-viewer]: http://create.frictionlessdata.io diff --git a/site/blog/2018-07-09-validating-data/goodtables-continuous-validation.png b/site/blog/2018-07-09-validating-data/goodtables-continuous-validation.png new file mode 100644 index 000000000..bf3a3b65a Binary files /dev/null and b/site/blog/2018-07-09-validating-data/goodtables-continuous-validation.png differ diff --git a/site/blog/2018-07-09-validating-data/goodtables-invalid.png b/site/blog/2018-07-09-validating-data/goodtables-invalid.png new file mode 100644 index 000000000..b2602383b Binary files /dev/null and
b/site/blog/2018-07-09-validating-data/goodtables-invalid.png differ diff --git a/site/blog/2018-07-09-validating-data/goodtables-provide-data.png b/site/blog/2018-07-09-validating-data/goodtables-provide-data.png new file mode 100644 index 000000000..f82b7c718 Binary files /dev/null and b/site/blog/2018-07-09-validating-data/goodtables-provide-data.png differ diff --git a/site/blog/2018-07-09-validating-data/goodtables-provide-schema.png b/site/blog/2018-07-09-validating-data/goodtables-provide-schema.png new file mode 100644 index 000000000..96e832a05 Binary files /dev/null and b/site/blog/2018-07-09-validating-data/goodtables-provide-schema.png differ diff --git a/site/blog/2018-07-09-validating-data/goodtables-screenshot.png b/site/blog/2018-07-09-validating-data/goodtables-screenshot.png new file mode 100644 index 000000000..29d687503 Binary files /dev/null and b/site/blog/2018-07-09-validating-data/goodtables-screenshot.png differ diff --git a/site/blog/2018-07-09-validating-data/goodtables-valid.png b/site/blog/2018-07-09-validating-data/goodtables-valid.png new file mode 100644 index 000000000..7a1c7660d Binary files /dev/null and b/site/blog/2018-07-09-validating-data/goodtables-valid.png differ diff --git a/site/blog/2018-07-09-validating-data/goodtables-validate.png b/site/blog/2018-07-09-validating-data/goodtables-validate.png new file mode 100644 index 000000000..e10a151bf Binary files /dev/null and b/site/blog/2018-07-09-validating-data/goodtables-validate.png differ diff --git a/site/blog/2018-07-16-oleg-lavrovsky/README.md b/site/blog/2018-07-16-oleg-lavrovsky/README.md new file mode 100644 index 000000000..286d57a64 --- /dev/null +++ b/site/blog/2018-07-16-oleg-lavrovsky/README.md @@ -0,0 +1,40 @@ +--- +title: "Tool Fund Grantee: Oleg Lavrovsky" +date: 2018-07-16 +tags: +author: Oleg Lavrovsky +category: grantee-profiles +image: /img/blog/oleg-lavrovsky-image.jpg +# description: Tool Fund Grantee - Julia +github: http://github.com/loleg +twitter: 
http://twitter.com/loleg +website: http://datalets.ch +--- + +_This grantee profile features Oleg Lavrovsky for our series of Frictionless Data Tool Fund posts, written to shine a light on Frictionless Data’s Tool Fund grantees, their work and to let our technical community know how they can get involved._ + + + +We are digital natives, dazzled by the boundless information and cultural resources of electronic networks, tuned in to a life on - and offline, dimly aware of all kinds of borders being rewritten. I was born in the Soviet Union and grew up in Canada, immersed in the wonders of creative code on Apple II and DOS-era personal computers, doing fun things in programming environments from [BASIC](https://www.scullinsteel.com/apple2/) to C++/C#/.NET (hey [@ooswald](https://github.com/ooswald)!) to Perl (hey [@virtualsue](https://github.com/virtualsue)!) to Java (hey [@timcolson](https://github.com/timcolson)!) to JavaScript (hey [@jermolene](https://github.com/jermolene)!) to Python (hey [@gasman](https://github.com/gasman)!), all of which find some use in the freelance work I now do based in my adoptive home of Switzerland - a country of [plurality](https://en.wikipedia.org/wiki/Swiss_people). + +Over the years, I have tried other languages like Clojure and Pascal, Groovy and Go, Erlang and Haskell, Scala and R, even ARM C/C++ and x86 assembly. Some have stuck in my dev chain, others have not. As far as possible, I hope to keep a beginner’s mind open to new paradigms, a solid craft of working on code and data with care, and the wisdom to avoid jumping off every tempting new thing on the horizon. + +I first came across tendrils of Open Knowledge ten years ago while living in Oxford, a vibrant community of thinkers and civic reformers. After we started a [hackspace](https://oxhack.org/), I got more involved in extracurricular open source activities, joined barcamps and hackathons, started contributing to projects. 
I started to see so-called 'big IT' or 'enterprise software' challenges to be, on many levels, problems of incompatible or intractable data standards. It was in the U.K. that I also discovered civic tech and open data activism. + +Helping to start a Swiss [Open Knowledge chapter](http://make.opendata.ch/) presented me with the opportunity to be involved in an ambitious and exciting techno-political movement, and to learn from some of the most deeply ethical and forward-thinking people in Information Technology. Running the [School of Data](http://forum.schoolofdata.ch/) working group and supporting many projects in the Swiss [Opendata.ch](https://opendata.ch) association and international network is today no longer just a weekend activity: it is my `master` branch. + +I first heard the term *frictionless* from a [philosopher](https://andrewjtaggart.com/) who warned of a world where IT removes friction to the point where we live anywhere, and do anything, at the cost of social alienation - and, along with it, grave consequences to our well-being. There are parallels here to "closed datasets", which may well be padlocked for a reason. Throwing them into the wind may deprive them of the nurturing care of the original owners. The open data community offers them a softer landing. + +Some of the conversations that led to *Frictionless Data* took place at [OKCon 2013](https://opendata.ch/2013/09/okcon-2013-some-swiss-highlights/) in Geneva, where I was busy [mining the Law](https://make.opendata.ch/legal/). Max Ogden mentioned related ideas in his [talk](https://vimeo.com/channels/okcon2013/79932550) there on [Dat Project](https://datproject.org/). It later became a regular topic in the [Open Knowledge Labs hangouts](http://okfnlabs.org/) and elsewhere. My first impression was mixed: I liked the idea in principle, but found it hard to foresee what the standardization process could accomplish. 
It took me a couple of years to catch up, gain experience in putting the [Open Definition](http://opendefinition.org/) to use, struggle with some of the fundamental issues myself - just to wholly accept the idea of an open data ecosystem. + +Working with more unwieldy data as well as having an interest in Data Science, and the great vibe of a growing community all led me to test the waters with the [Julia language](https://julialang.org/). I quickly became a fan, and started looking for ways to include it in my workflow. Thanks to the collaboration enabled by the Frictionless Data Tool Fund, I will now be able to focus on this goal and start connecting the dots more quickly. More bridges need to be built to help open data users use Julia's computing environment, and Julia users could use sturdier access to open data. + +There are two high level use cases which I think are particularly interesting when it comes to Frictionless Data: strongly typed and easy to validate dataset schema leading to a "light" version of semantic interoperability, helping data analysts, developers, even automated agents, to see at a glance how compatible datasets might be. Take a look at [dataship](/blog/2016/11/15/dataship/), [open power system data](/blog/2016/11/15/open-power-system-data/) and other case studies at [Frictionlessdata.io](/) for examples. The other is the pipelines approach which, as a feature of Unix and [other OS](https://docs.microsoft.com/en-us/powershell/scripting/learn/understanding-the-powershell-pipeline?view=powershell-7) is the basis for an incredibly powerful system building tool, now laying the foundation of a rich and reliable world of [shared data](http://datahub.io/blog/core-data-essential-datasets-for-data-wranglers-and-data-scientists). 
+ +At a more practical level, I have been using Data Packages to publish data for [hackathons](http://hack.opendata.ch), School of Data [workshops](http://schoolofdata.ch) and other activities in my Open Knowledge chapter, and regularly explaining the concepts and training people to use Frictionless Data tools in the Open Data module I teach at the [Bern University of Applied Sciences](https://www.bfh.ch/en/home.html). I have built support for them into [Dribdat](http://datalets.ch/dribdat), a tool we use for connecting the dots between people, code and data. + +Over the years, I have made small contributions to OKI’s codebases on projects like [CKAN](https://ckan.org/). Contributing to the Frictionless Data project clears the way to the frontlines of development: putting better tools in users’ hands, committing directly to the needs of the community, setting an elevated expectation of responsibility and quality. That said, I am a novice in Julia. But my initial ambition is modest: make a working set of tools, produce a stable [v1.0 specification](https://blog.okfn.org/2017/09/05/frictionless-data-v1-0/) release. Run tests, get reviewed, interact with the community, and iterate. This project will be a learning process, and my intention is to widen the goalposts as much as I can for others to follow. + +The Julia language also needs to be better known, so I will start threads on the [OKI forums](https://discuss.okfn.org/u/loleg), at the [School of Data](https://schoolofdata.org/), in technical and academic circles. I am likewise really looking forward to representing Frictionless Data in the diverse and wide-ranging [Julia community](https://julialang.org/community/), sharing whatever questions and needs arise both ways. 
The specifications, libraries and tools will help to preserve key information on widely used datasets, foster a more in-depth technical discussion between everyone involved in data sharing, and open the door to more critical feedback loops between creators, publishers and users of open data. + +I will be developing the [datapackage-jl](https://github.com/loleg/datapackage-jl) and [tableschema-jl](https://github.com/loleg/tableschema-jl) libraries on GitHub, and you can follow me on [GitHub](http://github.com/loleg/) to see how this develops and read stories about putting Frictionless Data libraries to use. Please feel free to [write me a note](http://datalets.ch/), send in your use case, respond to anything I'm working on or writing about, share a tricky dataset or any other kind of challenge - and [let's chat](https://gitter.im/frictionlessdata/chat)! diff --git a/site/blog/2018-07-16-oleg-lavrovsky/oleg-lavrovsky-image.jpg b/site/blog/2018-07-16-oleg-lavrovsky/oleg-lavrovsky-image.jpg new file mode 100644 index 000000000..3b44b3abb Binary files /dev/null and b/site/blog/2018-07-16-oleg-lavrovsky/oleg-lavrovsky-image.jpg differ diff --git a/site/blog/2018-07-16-ori-hoch/README.md b/site/blog/2018-07-16-ori-hoch/README.md new file mode 100644 index 000000000..718e6cb38 --- /dev/null +++ b/site/blog/2018-07-16-ori-hoch/README.md @@ -0,0 +1,38 @@ +--- +title: "Tool Fund Grantee: Ori Hoch" +date: 2018-07-16 +tags: +author: Ori Hoch +category: grantee-profiles +image: /img/blog/ori-hoch-image.png +# description: Tool Fund Grantee - PHP +github: https://github.com/OriHoch +twitter: https://twitter.com/OriHoch +website: https://www.linkedin.com/in/ori-hoch-bb62b033/ +--- + +_This grantee profile features Ori Hoch for our series of Frictionless Data Tool Fund posts, written to shine a light on Frictionless Data’s Tool Fund grantees, their work and to let our technical community know how they can get involved._ + + + +My name is Ori Hoch, I am 35 years old, living in 
Israel and married with 2 children. I recently took my family to Midburn - the Israeli regional Burning Man event where I juggled some fire clubs in the main burn ceremony. Through the Tool Fund, I am working on implementing the PHP libraries for Frictionless Data. I am also working on several other open source data projects: [Open Knesset](https://github.com/hasadna/Open-Knesset), [Open Budget](https://github.com/OpenBudget/budgetkey-data-pipelines), [Beit Hatfutsot](https://github.com/beit-hatfutsot) - all projects are open source and fully transparent - both the code and the development process - which I think is a great way to work. I’m also very interested in community and teamwork - how to get a group of people working together on a common goal, a hard task in normal scenarios which grows even more complex when dealing with volunteers / open source contributors. Of course, besides all the philosophical ideals I’m also a hard-core technologist who loves diving into complex problems, finding and implementing the right solution. + +I first heard about the Frictionless Data ecosystem from my activity in [The Public Knowledge Workshop](http://www.hasadna.org.il/en/) where I worked with Adam Kariv and Paul Walsh. I have a lot of experience working with data, and know many of the common problems and pitfalls. One of the major obstacles is interoperability between different data sources. Having the core Frictionless Data libraries available in different languages will allow for easier interoperability and integrations between sources. + +At [Beit Hatfutsot](http://www.bh.org.il/) (The Museum of The Jewish People), we aggregate data from many sources, including some data from PHP frameworks such as MediaWiki / Wordpress. At the moment we ask developers of the external data sources to create a datapackage for us, based on a given schema. Frictionless Data libraries for PHP will make this much easier for people to do, and will have a huge effect in reducing errors. 
+ +In addition to interoperability, the Frictionless Data specifications and software are based on the combined experience of many individuals working on a variety of data projects. Anyone using the libraries and tools will benefit from these experiences and will avoid problems and pitfalls which other people encountered in the past. + +I welcome PHP enthusiasts to join in the development effort of the [tableschema](https://github.com/frictionlessdata/tableschema-php) and [datapackage](https://github.com/frictionlessdata/datapackage-php) libraries which I am currently working on. Both repositories follow standard GitHub development flow using Issues, Pull Requests, Releases et al. Check the README and CONTRIBUTING files in the repositories above for more details and reach out to me or the rest of the Frictionless Data developer community on [the active Gitter channel](https://gitter.im/frictionlessdata/chat). + +I would also love to have PHP developers use the core libraries to write some more high-level tools. For example - consider an organization which has some data in their Wordpress / Drupal installation which they would like to publish or use with Frictionless Data compatible tools. Without a compatible plugin for their framework it would require them to either write some custom code or create the datapackage manually - both options are time-consuming and error-prone. If they had a ready-to-use plugin for their framework which publishes a compliant datapackage - it would greatly simplify the process and ensure interoperability. + +With the availability of the PHP libraries for Frictionless Data the task of developing such plugins will be greatly simplified. The libraries handle all the work of creating / loading datapackages and ensuring they conform to the specs, allowing the developer to focus on the plugin logic.
+ +Additional possibilities for leveraging the PHP libraries: + +- Import plugins for loading datapackages into a data store +- Visualization tools to allow people to view and analyze data packages from PHP-based code +- Integration of existing Frictionless Data tools so they are available for use from PHP, for example the [datapackage-pipelines](https://github.com/frictionlessdata/datapackage-pipelines) framework + +Finally, I would like to thank Open Knowledge International and the Sloan Foundation for the opportunity to work on the forefront of the open data ecosystem. I think that the tools we are developing now will have tremendous effects on how we manage and use data in the future and we have not yet seen all the possible benefits and outcomes from this work. diff --git a/site/blog/2018-07-16-ori-hoch/ori-hoch-image.png b/site/blog/2018-07-16-ori-hoch/ori-hoch-image.png new file mode 100755 index 000000000..f9131cafa Binary files /dev/null and b/site/blog/2018-07-16-ori-hoch/ori-hoch-image.png differ diff --git a/site/blog/2018-07-16-point-location-data/README.md b/site/blog/2018-07-16-point-location-data/README.md new file mode 100644 index 000000000..ff41fa416 --- /dev/null +++ b/site/blog/2018-07-16-point-location-data/README.md @@ -0,0 +1,419 @@ +--- +title: Point location data in CSV files +date: 2018-07-16 +tags: +category: publishing-data +--- + +This guide explores the options available to represent point location data in a CSV file within a Data Package. + + + +First, some key concepts: + +* A [Table Schema](https://specs.frictionlessdata.io/table-schema/) describes tabular data. +* Tabular data is often provided in a [CSV - Comma Separated Values][csv] file. +* Tabular data may include data about locations. +* Locations can be represented by points, lines, polygons and more complex geometry. +* Points are often represented by a longitude, latitude coordinate pair.
There is much debate on [which value should go first](https://macwright.org/2015/03/23/geojson-second-bite.html#position) and [tools have their own preferences](https://macwright.org/lonlat/). Explicitly stating the [axis-order](https://www.w3.org/TR/sdw-bp/#bp-crs) of coordinates is important so that, when the data is used, it represents the correct location. +* To keep things simple, you should use [digital degrees](https://en.wikipedia.org/wiki/Decimal_degrees) `-27.1944, 151.2660`, not [degrees, minutes, seconds](https://en.wikipedia.org/wiki/Latitude#Preliminaries) or northings and eastings `27.1944° S, 151.2660° E`. +* Representing locations other than points in a CSV can be complicated as the shape is represented by many coordinate pairs that combine to make the shape (think joining the dots). +* A coordinate pair is inadequate to accurately show a location on a map. You also need a [coordinate reference system](https://en.wikipedia.org/wiki/Spatial_reference_system) and sometimes a date. +* A coordinate reference system describes the datum, [geoid](https://en.wikipedia.org/wiki/Geoid), [coordinate system](https://en.wikipedia.org/wiki/Coordinate_system), and [map projection](https://en.wikipedia.org/wiki/Map_projection) of the location data. +* Dates detailing when the location was recorded are also important because things change over time, e.g. the shape of an [electoral boundary](https://web.archive.org/web/20171029095929/http://ecq.qld.gov.au/__data/assets/pdf_file/0009/70956/26.5.17_Extraordinary-Gazette_QRC-Final-Determination.pdf), or the [location of a continent](http://www.icsm.gov.au/datum/what-gda2020). + +The key information to describe a point location is a: + +* coordinate pair and their axis order +* coordinate reference system +* date + +Assumptions are often made about coordinate reference systems and dates, e.g.
+ +* The coordinate reference system may be assumed to be the World Geodetic System 1984 ([WGS84](https://en.wikipedia.org/wiki/World_Geodetic_System)), which is currently used for the Global Positioning System (GPS) satellite navigation system. This coordinate reference system is used by the majority of interactive maps on the web. +* The date is often assumed to be today. + +## Point data +How can point location data be: + +1. represented in a CSV file? +2. described as part of a Data Package? + +The options for representing point locations in a CSV file are to define one or more fields of type: + +1. [geopoint, format: default](#1-geopoint-default) +2. [geopoint, format: array](#2-geopoint-array) +3. [geopoint, format: object](#3-geopoint-object) +4. [number with constraints](#4-numbers-with-constraints) +5. [string, format: default](#5-string-and-foreign-key-reference-to-well-known-place-name) and a foreign key reference +6. [string, format: uri](#6-use-a-uniform-resource-identifier-to-reference-a-location) reference to an external resource with the geometry +7. [geojson, format: default](#7-geojson) + +Each option is described below with a sample CSV file, Data Package fragment and some thoughts on pros and cons. + +Each option should, in a human and machine-readable way, specify: + +* the coordinate reference system +* the axis order of the coordinates (if not specified by the coordinate reference system) +* the date associated with the location data + +(Geocoding using an address is out of scope for the moment, but similar techniques will apply.) + +### 1. Geopoint, default + +The type [Geopoint](https://specs.frictionlessdata.io/table-schema/#geopoint), format: default is a string of the pattern `"lon, lat"`, where lon is the longitude and lat is the latitude (the space after the comma is optional). E.g. `"90, 45"`.
+ +#### CSV + +| Office | Location (Lon, Lat) | +|--------|---------------------| +| Dalby | "151.2660, -27.1944"| + +#### Data Package fragment + +``` +{ + "fields": [ + { + "name": "Office", + "type": "string" + }, + { + "name": "Location (Lon, Lat)", + "type": "geopoint" + } + ] +} +``` + +**Thoughts** + +* [Currently](https://github.com/frictionlessdata/specs/issues/345) you cannot use the `minimum` or `maximum` constraint to limit longitude or latitude values to a minimum bounding rectangle +* The order of Lon, Lat is defined in the standard but: + * may not be obvious to the person looking at the file + * may not be machine-readable without referring to resources outside the Data Package + +### 2. Geopoint, array + +An array of exactly two items, where each item is a number, the first item is longitude and the second item is latitude, e.g. `[90, 45]` + +#### CSV + +| Office | Location (Lon, Lat) | +|--------|----------------------| +| Dalby | [151.2660, -27.1944] | + +#### Data Package fragment + +``` +{ + "fields": [ + { + "name": "Office", + "type": "string" + }, + { + "name": "Location (Lon, Lat)", + "type": "geopoint", + "format": "array" + } + ] +} +``` + +**Thoughts** + +* [Currently](https://github.com/frictionlessdata/specs/issues/345) you cannot use the `minimum` or `maximum` constraint to limit longitude or latitude values to a minimum bounding rectangle +* The order of Lon, Lat is defined in the standard but: + * may not be obvious to the person looking at the file + * may not be machine-readable without referring to resources outside the Data Package + +### 3. Geopoint, object + +A JSON object with exactly two keys, `lon` and `lat`, where each value is a number, e.g.
`{"lon": 90, "lat": 45}` + +#### CSV + +| Office | Location (Lon, Lat) | +|--------|---------------------------------| +| Dalby |{"lon":151.2660, "lat": -27.1944}| + +#### Data Package fragment + +``` +{ + "fields": [ + { + "name": "Office", + "type": "string" + }, + { + "name": "Location (Lon, Lat)", + "type": "geopoint", + "format": "object" + } + ] +} +``` + +**Thoughts** + +* [Currently](https://github.com/frictionlessdata/specs/issues/345) you cannot use the `minimum` or `maximum` constraint to limit longitude or latitude values to a minimum bounding rectangle +* The axis order is explicit. [Stating how coordinate values are encoded](https://www.w3.org/TR/sdw-bp/#bp-crs) is a W3C spatial data on the web best practice. + + +### 4. Numbers with constraints +Two columns of type [number](https://specs.frictionlessdata.io/table-schema/#number) with [constraints](https://specs.frictionlessdata.io/table-schema/#constraints) to limit latitude and longitude values. + +#### CSV + +| Office | Lat | Lon | +|--------|--------|--------| +| Dalby |-27.1944|151.2660| + +#### Data Package fragment + +``` +{ + "fields": [ + { + "name": "Office", + "type": "string" + }, + { + "name": "Lat", + "type": "number", + "constraints": { + "minimum": -90, + "maximum": 90 + } + }, + { + "name": "Lon", + "type": "number", + "constraints": { + "minimum": -180, + "maximum": 180 + } + } + ] +} +``` + +**Thoughts** + +* You can constrain latitude and longitude values to a minimum bounding rectangle +* Constraints are not required, so invalid values are possible +* It is not obvious to software that the columns are location data unless specific names are used: X,Y; Lat,Lon; Latitude,Longitude; and [many other combinations](http://doc.arcgis.com/en/arcgis-online/reference/csv-gpx.htm) +* Lat, Lon or Lon, Lat - you choose the order +* No way to force a pair of coordinates and support missing values. + * If you add a required constraint to both, you can’t have a missing location.
+ * If you don’t add a required constraint, you could have lat without lon or vice versa. + +### 5. String and Foreign key reference to well-known place-name + +All the previous examples assume you know the coordinates of the location. What if you only know the name? You can use a name, of type: [string](https://specs.frictionlessdata.io/table-schema/#string), to refer to another data resource and use the name to determine the coordinates. This data resource is often called a [Gazetteer](https://en.wikipedia.org/wiki/Gazetteer). [Often](https://en.wikipedia.org/wiki/Gazetteer#List_of_gazetteers) a website or API is placed in front of the data so you can provide a name and the location data is returned. + +A date may be an additional field included in the foreign key relationship. + +#### CSV + +Offices.csv + +| office-name | town | +|----------------------|-------| +| Dalby Drop In Centre | Dalby | + + +Gazetteer.csv + +| city-or-town | location | +|--------------|---------------------------------| +| Dalby |{"lon":151.2660, "lat": -27.1944}| + +#### Data Package fragment + +``` +{ + "resources": [ + { + "name": "office-locations", + "path": "offices.csv", + "schema": { + "fields": [ + { + "name": "office-name", + "title": "Office Name", + "type": "string" + }, + { + "name": "town", + "title": "Town", + "description": "Town name in gazetteer", + "type": "string" + } + ], + "foreignKeys": [ + { + "fields": "town", + "reference": { + "resource": "gazetteer", + "fields": "city-or-town" + } + } + ] + } + }, + { + "name": "gazetteer", + "description": "External Gazetteer", + "url": "https://example.com/gazetteer.csv", + "schema": { + "fields": [ + { + "name": "city-or-town", + "type": "string", + "constraints": { + "unique": true, + "required": true + } + }, + { + "name": "location", + "type": "geopoint", + "format": "object" + } + ], + "primaryKey": [ + "city-or-town" + ] + } + } + ] +} +``` + +**Thoughts** + +* We haven't come across many gazetteers published in CSV format + 
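The foreign key lookup above can be sketched in a few lines of plain Python. This is a hypothetical illustration only (the inline CSV strings mirror the sample tables above, and no Frictionless Data library is used):

```python
import csv
import io
import json

# Inline stand-ins for offices.csv and gazetteer.csv (illustrative data)
offices_csv = "office-name,town\nDalby Drop In Centre,Dalby\n"
gazetteer_csv = 'city-or-town,location\nDalby,"{""lon"":151.2660, ""lat"": -27.1944}"\n'

# Build a town -> (lon, lat) lookup from the gazetteer;
# the location column is a geopoint in "object" format, i.e. JSON
gazetteer = {}
for row in csv.DictReader(io.StringIO(gazetteer_csv)):
    point = json.loads(row["location"])
    gazetteer[row["city-or-town"]] = (point["lon"], point["lat"])

# Resolve each office's town name to coordinates via the foreign key
for row in csv.DictReader(io.StringIO(offices_csv)):
    lon, lat = gazetteer[row["town"]]
    print(row["office-name"], lon, lat)
```

Note the CSV quoting: because the geopoint object contains a comma, the whole cell must be quoted and its inner quotes doubled.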
+### 6. Use a Uniform Resource Identifier to reference a location + +Use a type: [string](https://specs.frictionlessdata.io/table-schema/#string), format: uri, to provide a link to a resource that includes the geometry. + +#### CSV + +| office-name | Location uri | +|-------------|----------------------------------------------------------------| +| Dalby | http://nominatim.openstreetmap.org/details.php?place_id=114278 | + +#### Data Package fragment + +``` +"schema": { + "fields": [ + { + "name": "office-name", + "type": "string" + }, + { + "name": "Location uri", + "type": "string", + "format": "uri" + } + ] + } + ``` + +**Thoughts** + +* [Link to Spatial Things from popular repositories](https://www.w3.org/TR/sdw-bp/#bp-linking-2) is a W3C spatial data on the web best practice. +* Things can move over time, consider [data versioning](https://www.w3.org/TR/sdw-bp/#bp-dataversioning), another W3C spatial data on the web best practice. +* Is there a way to define the bulk of the uri outside of the column and reduce the column entry to the id? Is this wise or desirable? + + +### 7. GeoJSON + +Use a field of type [GeoJSON](https://specs.frictionlessdata.io/table-schema/#geojson) to represent a location. + +#### CSV + +| Office | Location | +|--------|----------------------------------------------------| +| Dalby | {"type":"Point","coordinates":[151.2660,-27.1944]} | + +#### Data Package fragment + +``` +{ + "fields": [ + { + "name": "Office", + "type": "string" + }, + { + "name": "Location", + "type": "geojson", + "format": "default" + } + ] +} +``` + +**Thoughts** + +* Geometry isn't constrained to a point; it could be a line or polygon. +* [GeoJSON](https://tools.ietf.org/html/rfc7946#page-12) only supports the WGS84 coordinate reference system. +* The axis order is explicit. [Stating how coordinate values are encoded](https://www.w3.org/TR/sdw-bp/#bp-crs) is a W3C spatial data on the web best practice. GeoJSON only supports lon, lat axis order.
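To show how a consumer might read such a field, here is a minimal sketch that parses a GeoJSON cell with a standard JSON parser; the column names and sample value follow the table above, and the snippet is illustrative rather than a particular library's API:

```python
import csv
import io
import json

# Inline stand-in for the CSV table above (illustrative data)
data = 'Office,Location\nDalby,"{""type"":""Point"",""coordinates"":[151.2660,-27.1944]}"\n'

for row in csv.DictReader(io.StringIO(data)):
    geometry = json.loads(row["Location"])
    assert geometry["type"] == "Point"
    # GeoJSON fixes the axis order: longitude first, then latitude
    lon, lat = geometry["coordinates"]
    print(row["Office"], lon, lat)
```

Because GeoJSON fixes both the coordinate reference system (WGS84) and the axis order, no extra metadata is needed to interpret the coordinates.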
+ +## Related Work + +### Frictionless data + +* [Table Schema](https://specs.frictionlessdata.io/table-schema/) +* [Publishing Geospatial Data as a Data Package](/blog/2016/04/30/publish-geo/) +* [Spatial Data Package investigation - research and report by Steve Bennett](https://research.okfn.org/spatial-data-package-investigation/) + +### World Wide Web Consortium (W3C) + +* [Data on the Web Best Practices](https://www.w3.org/TR/dwbp/) +* [Spatial Data on the Web Best Practices](http://www.w3.org/TR/sdw-bp) + +These documents advise on best practices related to the publication of data and spatial data on the web. + +### Australian Government - CSV GEO AU + +[csv-geo-au](https://github.com/TerriaJS/nationalmap/wiki/csv-geo-au) is a specification for publishing point or region-mapped Australian geospatial data in CSV format to data.gov.au and other open data portals. + +### IETF - GeoJSON + +[GeoJSON](https://tools.ietf.org/html/rfc7946) is a geospatial data interchange format based on JavaScript Object Notation (JSON). + +### OGC - Simple Feature Access + +The Open Geospatial Consortium - [OpenGIS Simple Feature Access](http://www.opengeospatial.org/standards/sfa) is also called ISO 19125. It provides a model for geometric objects associated with a Spatial Reference System. + +Recommended reading: We recently commissioned research work to determine how necessary and useful it would be to create a Geo Data Package as a core Frictionless Data offering. Follow the discussions [here on Discuss](https://discuss.okfn.org/t/geo-data-package/6143) and read [the final report into the spatial data package investigation by Steve Bennett](https://research.okfn.org/spatial-data-package-investigation/). Examples following the recommendations in this research will be added in due course. 
+
+[dp]: /data-package
+[dp-main]: /data-package
+[tdp]: /data-package/#tabular-data-package
+[ts]: /table-schema/
+[ts-types]: https://specs.frictionlessdata.io/table-schema/#field-descriptors
+[csv]: /blog/2018/07/09/csv/
+[json]: http://en.wikipedia.org/wiki/JSON
+
+[spec-dp]: https://specs.frictionlessdata.io/data-package/
+[spec-tdp]: https://specs.frictionlessdata.io/tabular-data-package/
+[spec-ts]: https://specs.frictionlessdata.io/table-schema/
+[spec-csvddf]: https://specs.frictionlessdata.io/csv-dialect/
+
+[publish]: /docs/publish/
+[pub-tabular]: /blog/2016/07/21/publish-tabular/
+[pub-online]: /blog/2016/08/29/publish-online/
+[pub-any]: /blog/2016/07/21/publish-any/
+[pub-geo]: /blog/2016/04/30/publish-geo/
+[pub-faq]: /blog/2016/04/20/publish-faq/
+
+[dp-creator]: http://create.frictionlessdata.io
+[dp-viewer]: http://create.frictionlessdata.io
diff --git a/site/blog/2018-07-16-publish-data-as-data-packages/README.md b/site/blog/2018-07-16-publish-data-as-data-packages/README.md
new file mode 100644
index 000000000..c86d3bcb0
--- /dev/null
+++ b/site/blog/2018-07-16-publish-data-as-data-packages/README.md
@@ -0,0 +1,34 @@
+---
+title: Packaging your Data
+date: 2018-07-16
+tags: ["datapackage"]
+category: working-with-data-packages
+---
+
+You can package any kind of data as a Data Package.
+
+1. Get your data together
+    1. Get your data together in one folder (you can have data in subfolders of that folder too if you wish).
+1. Add a `datapackage.json` file to package those data files into a useful whole (with key information like the license, title and format)
+    1. The datapackage.json is a small file in JSON format that gives a bit of information about your dataset. You'll need to create this file and then place it in the directory you created.
+    1. Don't worry if you don't know what JSON is - we provide some tools that can automatically create this file for you.
+    1. There are two options for creating the datapackage.json:
+        1. 
Use the [Data Package Creator][dp-creator] tool
+            1. Just answer a few questions and give it your data files and it will spit out a datapackage.json for you to include in your project
+        1. Use the [Python][dp-py], [JavaScript][dp-js], [PHP][dp-php], [Julia][dp-jl], [R][dp-r], [Clojure][dp-clj], [Java][dp-java], [Ruby][dp-rb] or [Go][dp-go] libraries for working with data packages.
+
+Recommended reading: Find out how to use Frictionless Data software to improve your data publishing workflow in our new and comprehensive [Frictionless Data Field Guide][field-guide].
+
+[field-guide]: /tag/field-guide
+
+[dp-creator]: http://create.frictionlessdata.io
+
+[dp-js]: https://github.com/frictionlessdata/datapackage-js
+[dp-py]: https://github.com/frictionlessdata/datapackage-py
+[dp-php]: https://github.com/frictionlessdata/datapackage-php
+[dp-java]: https://github.com/frictionlessdata/datapackage-java
+[dp-clj]: https://github.com/frictionlessdata/datapackage-clj
+[dp-jl]: https://github.com/frictionlessdata/datapackage-jl
+[dp-r]: https://github.com/frictionlessdata/datapackage-r
+[dp-go]: https://github.com/frictionlessdata/datapackage-go
+[dp-rb]: https://github.com/frictionlessdata/datapackage-rb
diff --git a/site/blog/2018-07-16-validated-tabular-data/README.md b/site/blog/2018-07-16-validated-tabular-data/README.md
new file mode 100644
index 000000000..763b07b3c
--- /dev/null
+++ b/site/blog/2018-07-16-validated-tabular-data/README.md
@@ -0,0 +1,113 @@
+---
+title: Validated tabular data
+date: 2018-07-16
+tags: ["try.goodtables.io", "Goodtables CLI", "field-guide"]
+category:
+image: /img/blog/valid.png
+description: When it comes to validating tabular data, you have some great tools at your disposal. We take a look at a couple of ways to utilise goodtables.
+---
+
+Errors in data are not uncommon. They also often get in the way of quick and timely data analysis for many data users. 
What if there was a way to quickly identify errors in your data to accelerate the process by which you fix them before sharing your data or using it for analysis?
+
+In this section, we will learn how to carry out one-time data validation using:
+* a free web tool called [try.goodtables.io](https://try.goodtables.io),
+* the goodtables command line tool, which you can use on your local machine.
+
+Our working assumption is that you already know what a data schema and a data package are, and how to create them. If not, [start here](/blog/2018/03/07/well-packaged-datasets/).
+
+## One-time data validation with try.goodtables.io
+
+Now that you have your data package, you may want to check it for errors. We refer to this process as data validation. Raw data is often ‘messy’ or ‘dirty’: it contains errors and irrelevant bits that make it inaccurate and difficult to analyse quickly and draw insight from. **Goodtables** exists to identify structural and content errors in your tabular data so they can be fixed quickly. As with other tools mentioned in this field guide, goodtables aims to help data publishers improve the quality of their data before the data is shared elsewhere and used for analysis, or archived.
+
+**Types of errors identified in the validation process**
+
+Here are some of the errors that try.goodtables.io highlights. A more exhaustive list is available [here](https://github.com/frictionlessdata/goodtables-py#validation).
+
+| **Structural Errors** | |
+|-----------------------|---------------------------------------------------------------------------------|
+| blank-header | There is a blank header name. All cells in the header row must have a value. |
+| duplicate-header | There are multiple columns with the same name. All column names must be unique. |
+| blank-row | Rows must have at least one non-blank cell. |
+| duplicate-row | Rows can't be duplicated. |
+| extra-value | A row has more columns than the header. |
+| missing-value | A row has fewer columns than the header. |
+| **Content Errors** | |
+| schema-error | Schema is not valid. |
+| non-matching-header | The header's name in the schema is different from what's in the data. |
+| extra-header | The data contains a header not defined in the schema. |
+| missing-header | The data doesn't contain a header defined in the schema. |
+| type-or-format-error | The value can’t be cast based on the schema type and format for this field. |
+
+**Load tabular data for one-time validation**
+
+You can add a dataset for one-time validation on [try.goodtables.io](https://try.goodtables.io) in two ways:
+* If your tabular data is publicly available online, obtain a link to the tabular data you would like to validate and paste it into the **{Source}** section.
+* Alternatively, click on the Upload file prompt in the **{Source}** section to load a tabular dataset from your local machine.
+
+**Validating data without a schema**
+
+In this section we will illustrate how to check tabular data for structural errors on [try.goodtables.io](https://try.goodtables.io/) when a data schema is not available. For this tutorial we will use a [sample CSV file with errors](https://raw.githubusercontent.com/frictionlessdata/goodtables-py/bc6470a970aacf65f20a3ddb7f71eb05a2a31c70/data/invalid-on-structure.csv).
+
+Copy and paste the file's URL into the **{Source}** input. When you click on the **{Validate}** button, [try.goodtables.io](https://try.goodtables.io/) presents an exhaustive list of structural errors in your dataset.
+
+![Add dataset link in the Source field, or select the Upload file option](./figure-1.png)
+*Figure 1: Add dataset link in the Source field, or select the Upload file option.*
+
+If needed, you can disable two types of validation checks:
+
+ * Ignore blank rows
+ Use this checkbox to indicate whether blank rows should be considered as errors, or simply ignored. 
Check this option if missing data is a known issue that cannot be fixed immediately, e.g. if you are not the owner/publisher of the data.
+
+ * Ignore duplicate rows
+ Use this checkbox to indicate whether duplicate rows should be considered as errors, or simply ignored.
+
+ We will leave all boxes unchecked for our example. On validate, we receive a list of 12 errors, as we can see in figure 2 below.
+
+![dataset errors outlined on try.goodtables.io](./figure-2.png)
+*Figure 2: dataset errors outlined on try.goodtables.io.*
+
+[try.goodtables.io](https://try.goodtables.io) points us to the specific cells containing errors so they can be fixed easily. We can use this list as a guide to fix all errors in our data manually, and run a second validation test to confirm that all issues are resolved. If no errors are found, the message will be as in figure 3 below:
+
+![valid data message on goodtables.io](./figure-3.png)
+*Figure 3: valid data message on goodtables.io.*
+
+Improving data quality is an iterative process that should involve data publishers and maintainers. Tools such as [try.goodtables.io](https://try.goodtables.io) allow you to focus on complex issues, such as whether the presented data is correct, instead of wasting time on simple (but very common) errors like incorrect date formats.
+
+**Validating tabular data with a schema**
+
+A data schema contains information on the structure of your tabular data. Providing a data schema as part of the validation process on [try.goodtables.io](https://try.goodtables.io) makes it possible to check your dataset for content errors. For example, a schema contains information on fields and their assigned data types, making it possible to highlight misplaced data, e.g. text in an amounts column where numeric data is expected. If you haven’t yet, learn how to create a data schema for your data collection before continuing with this section. 
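The kind of check a schema enables can be sketched in a few lines of plain Python. This is only the idea behind a `type-or-format-error`, not goodtables itself, and the `amount` field is an invented example:

```python
# A minimal Table Schema field: an "amount" column expected to be numeric
field = {"name": "amount", "type": "number"}

def cast(field, value):
    """Try to cast a raw CSV cell according to the field's declared type."""
    if field["type"] == "number":
        try:
            return float(value)
        except ValueError:
            raise TypeError("type-or-format-error: %r is not a valid %s for "
                            "field %r" % (value, field["type"], field["name"]))
    return value  # other types would be handled similarly

ok = cast(field, "120.50")       # casts cleanly to 120.5
try:
    cast(field, "unknown")       # text in a numeric column
    error = None
except TypeError as exc:
    error = str(exc)             # reported instead of silently accepted

print(ok, error is not None)  # 120.5 True
```

Without the schema, "unknown" is just another string; with it, the mismatch between the declared type and the cell value becomes a reportable error.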
+
+To test how this works, you can use:
+* any of the data packages from [this Data Package collection on GitHub](https://github.com/frictionlessdata/example-data-packages), which comprises example data packages curated by the Frictionless Data team, or
+* [any of the Core Data Packages on DataHub](http://datahub.io/core/). The Core Data project provides essential data for data wranglers and the data science community. Read more about it [here](https://datahub.io/docs/core-data).
+
+In any given Data Package, the *datapackage.json* file contains the schema and the data folder contains tabular data to be validated against the schema.
+
+Often you will find yourself working in workflows that involve many datasets, which are updated regularly. In cases such as this, one-time validation on try.goodtables.io is probably not the answer. But fear not! Goodtables can automate the validation process so that errors are checked for continually. Find out more in our [continuous and automated data validation section](/blog/2018/03/12/automatically-validated-tabular-data).
+
+## One-time data validation with the goodtables command line tool
+
+The same validations that we've done on try.goodtables.io can also be done on your local machine using goodtables. This is especially useful for big datasets, or if your data is not publicly accessible online. However, this is a slightly technical task, which requires basic knowledge of the command line (CLI). If you don't know how to use the CLI, or are a bit rusty, we recommend reading the [Introduction to the command-line tutorial](https://tutorial.djangogirls.org/en/intro_to_command_line/) before proceeding.
+
+For this section, you will need:
+* Python, the programming language which the goodtables command-line tool is written in - [[installation instructions](https://tutorial.djangogirls.org/en/python_installation/)]
+* PIP, a tool that allows you to install packages written in Python. 
Installing Python automatically installs PIP, but if not, see the [installation instructions]
+* Basic knowledge of how to use the command line (see the [Introduction to the command-line](https://tutorial.djangogirls.org/en/intro_to_command_line/) if you want to brush up your skills)
+
+Once Python is set up, open your **Terminal** and install goodtables using the package manager, PIP, by running `pip install goodtables`.
+
+![installing goodtables command-line tool with pip in Terminal](./figure-4.gif)
+*Figure 4: installing goodtables command-line tool with pip in Terminal.*
+
+To validate a data file, type `goodtables` followed by the path to your file, e.g. `goodtables path/to/file.csv`. You can pass multiple file paths one after the other, or even the path to a *datapackage.json* file.
+
+For our first example, we will download and check [this simple location CSV data file](https://github.com/frictionlessdata/datapackage-py/blob/master/data/data.csv) for errors. In the second instance, we will validate this [Department of Data Expenses dataset, which contains errors](https://raw.githubusercontent.com/frictionlessdata/goodtables-py/bc6470a970aacf65f20a3ddb7f71eb05a2a31c70/data/invalid-on-structure.csv).
+
+![Validating data files using goodtables in Terminal](./figure-5.gif)
+*Figure 5: Validating data files using goodtables in Terminal.*
+
+You can see the list of options by running `goodtables --help`. The full documentation, including the list of validation checks that can be run, is available [on the goodtables-py repository on GitHub](https://github.com/frictionlessdata/goodtables-py).
+
+Congratulations, you now know how to validate your tabular data using the command line!
+
+If you regularly update your data or maintain many different datasets, running the validations manually can be time-consuming. The solution is to automate this process, so the data is validated every time it changes, ensuring the errors are caught as soon as possible. 
Find out how to do it in the "[Automating the validation checks](/blog/2018/03/12/automatically-validated-tabular-data)" section. diff --git a/site/blog/2018-07-16-validated-tabular-data/figure-1.png b/site/blog/2018-07-16-validated-tabular-data/figure-1.png new file mode 100644 index 000000000..241fb2c3d Binary files /dev/null and b/site/blog/2018-07-16-validated-tabular-data/figure-1.png differ diff --git a/site/blog/2018-07-16-validated-tabular-data/figure-2.png b/site/blog/2018-07-16-validated-tabular-data/figure-2.png new file mode 100644 index 000000000..4df47a6a1 Binary files /dev/null and b/site/blog/2018-07-16-validated-tabular-data/figure-2.png differ diff --git a/site/blog/2018-07-16-validated-tabular-data/figure-3.png b/site/blog/2018-07-16-validated-tabular-data/figure-3.png new file mode 100644 index 000000000..1eed76c46 Binary files /dev/null and b/site/blog/2018-07-16-validated-tabular-data/figure-3.png differ diff --git a/site/blog/2018-07-16-validated-tabular-data/figure-4.gif b/site/blog/2018-07-16-validated-tabular-data/figure-4.gif new file mode 100644 index 000000000..8eb66132e Binary files /dev/null and b/site/blog/2018-07-16-validated-tabular-data/figure-4.gif differ diff --git a/site/blog/2018-07-16-validated-tabular-data/figure-5.gif b/site/blog/2018-07-16-validated-tabular-data/figure-5.gif new file mode 100644 index 000000000..018587080 Binary files /dev/null and b/site/blog/2018-07-16-validated-tabular-data/figure-5.gif differ diff --git a/site/blog/2018-07-16-validated-tabular-data/reliable.png b/site/blog/2018-07-16-validated-tabular-data/reliable.png new file mode 100644 index 000000000..47465e4d8 Binary files /dev/null and b/site/blog/2018-07-16-validated-tabular-data/reliable.png differ diff --git a/site/blog/2018-07-16-validated-tabular-data/valid.png b/site/blog/2018-07-16-validated-tabular-data/valid.png new file mode 100644 index 000000000..e202b5044 Binary files /dev/null and 
b/site/blog/2018-07-16-validated-tabular-data/valid.png differ
diff --git a/site/blog/2018-07-16-visible-findable-shareable-data/README.md b/site/blog/2018-07-16-visible-findable-shareable-data/README.md
new file mode 100644
index 000000000..664fdfcda
--- /dev/null
+++ b/site/blog/2018-07-16-visible-findable-shareable-data/README.md
@@ -0,0 +1,170 @@
+---
+title: Visible, findable, shareable data
+date: 2018-07-16
+tags: ["Goodtables", "field-guide"]
+category:
+image: /img/blog/visible.png
+description: Getting your data out into the world is a crucial step towards its being used and useful. We walk through the steps to publishing on the top data platforms.
+---
+
+
+This section requires knowledge of how to [write a Table Schema](/blog/2018/03/07/well-packaged-datasets/#write-a-table-schema) and [attach descriptive metadata](/blog/2018/03/07/well-packaged-datasets/#add-your-dataset-s-metadata) to your data collection.
+
+Creating and sharing Data Packages is important for both data publishers and data users because it provides a common and open specification to describe your dataset's metadata. This facilitates data reuse, as users don't need to understand each data publisher's specific metadata format, and as the specification is machine-readable, it also allows tools to parse the metadata. This enables software to:
+
+* Import the data packages into different tools and languages, like Python and R
+* Validate the data contents according to the schema described in the data package
+* Convert the data package into other formats, for example loading it into a SQL database for further analysis
+
+Although these reasons are not unique to publishing data as data packages, here's why we think data publishers should consider publishing in this format:
+
+* Archiving data collections using data packages ensures data publishers can update data more efficiently at any time. 
The associated schema is a guide to existing data fields and acceptable data types for individual tabular data resources, and can be easily built upon.
+
+* Sharing data with descriptive metadata and its associated schema provides context for your data no matter where it is used, and significantly cuts down on time spent researching data provenance before using acquired data.
+
+* Data Packages allow for accountability and enrich the feedback process, as data publishers can add metadata with contact information for users to reach out to them, and licensing to spell out accepted use of published data.
+
+If you don't need your own data portal, there are many platforms where you can publish your data (if you need your own, check [CKAN](https://ckan.org/)). In the section below, we dive into a few options. Read along and decide which option is most suitable:
+
+## Publish Packaged Data in our community CKAN instance
+
+CKAN is an open source platform for publishing data that makes it easy to discover, use and share data. [datahub.ckan.io](http://datahub.ckan.io) is a public instance of CKAN that allows anyone to publish their data.
+
+Here’s why you should consider creating an organization on [datahub.ckan.io](http://datahub.ckan.io) and publishing datasets therein:
+* [datahub.ckan.io](https://datahub.ckan.io/) is free for all to use! The file upload size limit on the platform is currently 100 MB.
+* The decision on whether to publish datasets publicly or privately rests with data publishers.
+* [datahub.ckan.io](http://datahub.ckan.io) organizations allow for multiple users to collaborate with varied privileges:
+  * **Admin**: Can add/edit and delete datasets, as well as manage organization members.
+  * **Editor**: Can add and edit datasets, but not manage organization members.
+  * **Member**: Can view the organization's private datasets, but not add new datasets.
+
+To publish data on [datahub.ckan.io](http://datahub.ckan.io):
+
+1. 
Request a new Organization to be created on the platform for you via [our community page](https://discuss.okfn.org/c/open-knowledge-labs/datahub).
+ This is required only to ensure spammers don’t take up space and hog resources on the platform.
+
+ The request format is simple and requires:
+ * **Title**: This will be the name of your Organization on [datahub.ckan.io](http://datahub.ckan.io), e.g.
+ *My New Organization*
+
+ * **Slug**: This is an acronym, word or hyphenated phrase that will be added to the end of the [datahub.ckan.io](http://datahub.ckan.io) url to uniquely identify your Organization and associate your data collections with it, e.g.
+ *my-new-organization*
+
+ * **Username**: The username you provide is associated with an email address on [datahub.ckan.io](http://datahub.ckan.io) and allows us to give you admin access to your Organization on [datahub.ckan.io](http://datahub.ckan.io).
+
+2. Log in and add new datasets
+
+   Adding datasets on [datahub.ckan.io](http://datahub.ckan.io) is no different from using any other CKAN platform, but [here’s a good guide by Dan Fowler](http://okfnlabs.org/blog/2016/07/25/publish-data-packages-to-datahub-ckan.html) for first timers.
+
+3. Publish and share public datasets widely.
+
+   On [datahub.ckan.io](http://datahub.ckan.io), you can either publish datasets privately, meaning only members of your organization have access to them, or publicly, as open data. [Find out more](http://okfnlabs.org/blog/2016/07/25/publish-data-packages-to-datahub-ckan.html).
+
+
+## Publish Packaged Data on DataHub.io
+
+DataHub.io is a platform for finding, sharing and publishing high quality data online.
+
+[DataHub.io](http://datahub.io) and [datahub.ckan.io](http://datahub.ckan.io) share the same name for historical reasons. [Datahub.ckan.io](http://datahub.ckan.io) used to be the DataHub, but was moved to its current address, and the current DataHub uses new software developed from scratch.
+
+1. Set up a data publisher / user account on [DataHub.io](http://datahub.io)
+
+   Join the [datahub.io community group](https://gitter.im/datahubio/chat), introduce yourself and request an account.
+
+2. Publish Datasets on [DataHub.io](http://datahub.io)
+
+   [This post](http://datahub.io/docs/getting-started/publishing-data) provides helpful information on publishing datasets on [DataHub.io](http://datahub.io).
+
+
+## Publish Packaged Data on GitHub
+
+GitHub is the largest repository of source code, with [more than 20 million
+users](https://github.com/blog/2345-celebrating-nine-years-of-github-with-an-anniversary-sale).
+Although the focus is on hosting source code, any type of file can be hosted: documents, theses, images, shapefiles. You can even host an entire static website with [GitHub Pages](https://pages.github.com/).
+
+By using GitHub, you get all the advantages of using a version control system such as Git, where every modification to your files is tracked. You also get an issue ticketing system, wiki pages, milestones tracking, and other useful
+collaboration tools.
+
+**What types of datasets can be hosted on GitHub?**
+
+Although GitHub offers many useful functionalities, not all datasets are a good fit for it. The main limitations are:
+
+* Individual files must be smaller than 100 MB
+* The entire repository must be smaller than 1 GB
+  * The repository size includes not only the current files, but all of their previous versions.
+
+You can store larger files using [git-lfs](https://git-lfs.github.com/), but we won't go into detail about it in this post.
+
+It's also useful if your data files use text-based file formats like CSV or GeoJSON, as then git is able to show you exactly what changed between two versions of the files. However, even if you use binary file formats like XLS, GitHub is still useful.
+
+**Step 1. Organise your dataset folder structure**
+
+The way to structure your dataset depends on your data, and what extra artifacts it contains (e.g. images, scripts, reports, etc.). In this section, we'll show a complete example with:
+
+* **Data files**: The files with the actual data (e.g. CSV, XLS, GeoJSON, ...)
+* **Documentation**: How was the data collected, any caveats, how to update it, etc.
+* **Metadata**: Where the data comes from, what's in the files, what's their source and license, etc.
+* **Scripts**: Software scripts that were used to generate, update, or modify the data.
+
+Even though we'll see an example that has all of these different types of files, this isn't always the case. For example, datasets that were manually collected might not have any scripts. 
+
+This is the final structure:
+
+```
+data/
+  schools.csv
+  cities.csv
+docs/
+  screenshot.png
+scripts/
+  clean_data.py
+Makefile
+datapackage.json
+README.md
+```
+
+* **data/**: All data files are contained in this folder. In our example, there are two: `data/schools.csv` and `data/cities.csv`.
+* **docs/**: Images, sample analysis, and other documentation files regarding the dataset. The main documentation is in `README.md`, but in this folder you can add any images used in the README, and other writings about the dataset.
+* **scripts/**: All scripts are contained in this folder. There could be scripts to scrape the data, join different files, clean them, etc. Depending on the programming language you use, you might also add requirements files like `requirements.txt` for Python, or `package.json` for NodeJS.
+* **Makefile**: The scripts are only part of the puzzle; we also need to know how to run them: in which order they should be executed, which one to run to update the data, and so on. You could document this information textually in the `README.md` file, but the `Makefile` allows you to have executable documentation. You can think of it as a script to run the scripts. If you have never written a Makefile, read [Why Use Make](https://bost.ocks.org/mike/make/).
+* **datapackage.json**: This file describes the dataset's metadata. For example, what the dataset is, where its files are, what they contain, what each column means (for tabular data), what the source, license, and authors are, and so on. As it's a machine-readable specification, other software can import and validate your files. See HOW TO CREATE A DATA PACKAGE for instructions on writing this file.
+* **README.md**: This is where the dataset is described for humans. We recommend the following sections:
+  * **Introduction**: A short description of the dataset, what it contains, the time or geographical area it covers
+  * **Data**: What is the data structure? Does it use any codes? 
How do you define missing values (e.g. 'N/A' or '-1')?
+  * **Preparation**: How was the data collected? How do I update the data? Was it modified in any way? If you have a `Makefile`, this section will mostly document how to run it. Otherwise you can describe how to run the scripts, or how to collect the data manually.
+  * **License**: There are two issues here: the license of the data itself, and the license of the package you are creating (including any scripts). Our recommendation is to license the package you created as [CC0][cc0], and add any relevant information or disclaimers regarding the source data's license.
+
+To summarize, these are the folders, files, and their respective contents in this structure:
+
+| Path | Type | Contents |
+| --- | --- | --- |
+| data/ | Data | Dataset's data files. |
+| docs/ | Documentation | Images, analysis, and other documentation files. |
+| scripts/ | Scripts | Scripts used for creating, modifying, or analysing the dataset. |
+| Makefile | Scripts | Executable documentation on how to run the scripts. |
+| datapackage.json | Metadata | Data Package descriptor file. |
+| README.md | Documentation | Textual description of the dataset with description, preparation steps, license, etc. |
+
+**Step 2. Upload the dataset to GitHub**
+
+1. Log in to (or create) an account on GitHub
+1. Create [a new repository][gh:newrepo]
+   * Write a short description about the dataset
+1. On your repository page, click on the "Upload files" link
+1. Upload the files you created in the previous step
+   * If you have files larger than 25 MB, you'll need to either [upload using the command line][gh:addfiles-cli], or use the [GitHub Desktop client][gh:desktop].
+
+**(Optional) Step 3. Enable automatic tabular data validation**
+
+You can automatically validate your tabular data files using [goodtables.io][gt.io]. This will take only a few minutes, and will ensure you'll always know when there are errors with your dataset, maintaining its quality. 
[Read the walkthrough here](/blog/2018/03/12/automatically-validated-tabular-data).
+
+The sample dataset used in this example, the list of schools in Birmingham, UK, is available [in this repository](https://github.com/vitorbaptista/birmingham_schools).
+
+[why-make]: https://bost.ocks.org/mike/make/ "Why use Make"
+[publish-faq]: /guides/publish-faq/ "Publishing Data Packages - FAQ"
+[gt.io]: https://goodtables.io
+[gt.io:github]: https://docs.goodtables.io/getting_started/github.html "Validating data on GitHub"
+[cc0]: https://creativecommons.org/publicdomain/zero/1.0/ "Creative Commons Public Domain Dedication"
+[gh:newrepo]: https://github.com/new "GitHub New Repository"
+[gh:desktop]: https://desktop.github.com/
+[gh:addfiles]: https://help.github.com/articles/adding-a-file-to-a-repository/ "Adding a file to a repository"
+[gh:addfiles-cli]: https://help.github.com/articles/adding-a-file-to-a-repository-using-the-command-line/ "Adding a file to a repository using the command line"
diff --git a/site/blog/2018-07-16-visible-findable-shareable-data/visible.png b/site/blog/2018-07-16-visible-findable-shareable-data/visible.png
new file mode 100644
index 000000000..2166f272b
Binary files /dev/null and b/site/blog/2018-07-16-visible-findable-shareable-data/visible.png differ
diff --git a/site/blog/2018-07-20-nimblelearn/README.md b/site/blog/2018-07-20-nimblelearn/README.md
new file mode 100644
index 000000000..2c52582b4
--- /dev/null
+++ b/site/blog/2018-07-20-nimblelearn/README.md
@@ -0,0 +1,37 @@
+---
+title: Nimble Learn - Data Package M (datapackage-m)
+date: 2018-07-20
+tags: ["case-studies"]
+author: Michael Amadi
+category: case-studies
+interviewee: Michael Amadi
+subject_context: Nimble Learn is the Business Intelligence (BI) and Advanced Analytics consultancy behind datapackage-m, a set of functions for working with Tabular Data Packages in Power BI Desktop and Power Query for Excel. 
+image: /img/blog/nimblelearn-logo.png +description: Nimble Learn's datapackage-m is a set of functions for working with Tabular Data Packages in Power BI Desktop and Power Query for Excel. +--- + +[Data Package M](https://github.com/nimblelearn/datapackage-m), also known as *datapackage-m*, is a set of functions written in [Power Query M](https://docs.microsoft.com/en-us/powerquery-m/) for working with Tabular Data Packages in [Power BI Desktop](https://powerbi.microsoft.com/en-us/desktop/) and [Power Query for Excel](https://support.office.com/en-us/article/introduction-to-microsoft-power-query-for-excel-6e92e2f4-2079-4e1f-bad5-89f6269cd605) (also known as 'Get & Transform Data' in Excel 2016). + +datapackage-m makes use of the Data Package, Data Resource, Tabular Data Package, Tabular Data Resource, and Table Schema specifications, enabling you to go from data to insight in Power BI and Excel, faster. + +In 2014, while searching the web for high quality open data, we stumbled across the Frictionless Data project. On learning about [Data Packages](/data-package/), we spent some time getting acquainted with the specs and began to use Tabular Data Packages for some internal projects. datapackage-m then started off as an internal tool at Nimble Learn for working with Tabular Data Packages. + +![](./datapackage-m-power-bi.gif) +*How datapackage-m works in Power BI* + +datapackage-m now implements [v1 of the Frictionless Data specs](https://blog.okfn.org/2017/09/05/frictionless-data-v1-0/) from a Tabular Data Package consumption perspective. By implementing a broad number of the specs, datapackage-m is able to extract the tables from most [Tabular Data Packages](https://specs.frictionlessdata.io/tabular-data-package/), or Data Packages with tabular resources, in seconds. These tables can be quickly loaded into a Power BI Data Model or an Excel Worksheet (or Data Model), ready for you to analyse. 
datapackage-m currently handles Gzip compressed resources and we’re looking into support for Zip. We have successfully tested datapackage-m with several Data Packages from [Datahub](https://datahub.io/core) and the Frictionless Data [Example Data Packages](https://github.com/frictionlessdata/example-data-packages) GitHub repository. + +In working with data, there are often many repetitive tasks required to get data into a state that can be analysed. Even when the requirement is just to profile and assess whether a new dataset is suitable for a given use case, a lot of time can be wasted getting it into good tabular shape. [Data Packages](/data-package/) are designed to alleviate this issue, and datapackage-m makes them available for use in Power BI and Excel. + +We find that the Frictionless Data specs are simple to use from both a data publisher and data consumer perspective. We’ve seen a great number of other specifications that are feature-rich but too verbose. In contrast to these, the Frictionless Data specs are minimalist and support use cases where Data Packages are created using one’s favourite text editor. + +![](./datapackage-m-excel.gif) +*How datapackage-m works in Excel* + +There’s an ongoing discussion around a Data Resource compression pattern which is important from a data publishing perspective i.e. due to ongoing file storage and bandwidth costs. Once this pattern is agreed upon and published, it would be good to see this added to the [Data Resource](https://specs.frictionlessdata.io/data-resource/) and [Tabular Data Resource](https://specs.frictionlessdata.io/tabular-data-resource/) specs not too long after. + +Other than this, we would like to see another Data Package profile that extends the Tabular Data Package with semantic layer metadata. In addition to the Tabular Data Profile properties, this ‘Semantic Data Package’ would have properties for measure definitions, attribute hierarchies, and other semantic layer metadata. 
Something like this could be used to programmatically generate [Semantic Data Models](https://en.wikipedia.org/wiki/Semantic_data_model) in a data analytics tool of choice and populate them directly with the tabular data. + +[There are many existing use cases for Tabular Data Packages](http://okfnlabs.org/blog/2017/12/21/bootstrapping-data-standards-with-frictionless-data.html), and we see ‘Subject Area’ Tabular Data Packages as a significant additional use case that is worth exploring. By ‘Subject Area’, we mean a Tabular Data Package that combines relevant Tabular Data Resources from other high-quality Tabular Data Packages. This would help to reduce the time spent seeking out related data for a given area of analysis and could save researchers tonnes of time, for example. + +In addition to datapackage-m, Nimble Learn is working on a public-facing project that is focused on publishing pre-integrated open data from various sources as subject area Tabular Data Packages. We also plan on extending datapackage-m to adopt more Frictionless Data specifications. Keep an eye out for all these updates [on GitHub](https://github.com/nimblelearn).
diff --git a/site/blog/2018-07-20-nimblelearn/datapackage-m-excel.gif b/site/blog/2018-07-20-nimblelearn/datapackage-m-excel.gif new file mode 100644 index 000000000..054c899dd Binary files /dev/null and b/site/blog/2018-07-20-nimblelearn/datapackage-m-excel.gif differ diff --git a/site/blog/2018-07-20-nimblelearn/datapackage-m-power-bi.gif b/site/blog/2018-07-20-nimblelearn/datapackage-m-power-bi.gif new file mode 100644 index 000000000..054c899dd Binary files /dev/null and b/site/blog/2018-07-20-nimblelearn/datapackage-m-power-bi.gif differ diff --git a/site/blog/2018-07-20-nimblelearn/nimblelearn-logo.png b/site/blog/2018-07-20-nimblelearn/nimblelearn-logo.png new file mode 100644 index 000000000..a6874eefd Binary files /dev/null and b/site/blog/2018-07-20-nimblelearn/nimblelearn-logo.png differ diff --git a/site/blog/2018-08-29-publish-online/README.md b/site/blog/2018-08-29-publish-online/README.md new file mode 100644 index 000000000..48d6e3779 --- /dev/null +++ b/site/blog/2018-08-29-publish-online/README.md @@ -0,0 +1,98 @@ +--- +title: Publish Your Data Package Online +date: 2018-08-29 +tags: +category: publishing-data +--- + +This tutorial is about how to publish your Data Package online for others to find and use. + +It assumes you have already finished packaging up your data as a Data Package (if not, [check out the instructions here](/blog/2018/07/16/publish-data-as-data-packages/)). + +## It's Only Files Online + +Publishing your Data Package is incredibly simple: you just need to post it online somewhere that others can access. + +**Note:** if you just want to share your Data Package with a few others, you can just send it directly, for example via email. Since a Data Package is just some files, there are as many ways to do this as there are ways to put files online. Here we will just provide some general tips and illustrate some of the most popular publishing options.
+ +**Advertise it** + +Once you have published your data package, you may want to advertise it to others. One way to advertise the existence of your dataset is to add it to the catalog-list file in the [registry repo](https://github.com/datasets/registry/); it will then automagically appear as a community dataset on the [data.okfn.org](http://data.okfn.org/data) site. + +## GitHub, Bitbucket, etc. + +One nice option for more technically-inclined publishers is to manage your Data Package in a Git or Mercurial repository and push it to GitHub, Bitbucket, or a similar code-hosting service. + +## S3, Google Storage, etc. + +Cloud storage services like S3 and Google Storage are perfect for storing your Data Packages. + +## Google Drive + +The directory structure of a Data Package shared on Google Drive must be flat; that is, the Data Package must not contain any folders. + +**OK** +``` +shared-folder +|-- datapackage.json +|-- README.md +|-- data.csv +``` + +**Not OK** +``` +shared-folder +|-- datapackage.json +|-- README.md +|-- data + |-- data.csv +``` + +1. Upload your Data Package folder ([help][google-drive-upload]) + +2. Change your folder's share setting to **Public on the web - Anyone on the Internet can find and view** ([help][google-drive-share-settings]) + +3. Get a shareable link for your folder ([help][google-drive-share]) + +4. Find your folder's ID in the link + * *Example Link:* + * ```https://drive.google.com/open?id=0B-f6D5RM8awSfkdtRWpiTlpxdmhPblJRd2NhdHpHMFZPOFZKcWhpT2NkQlZCUlNWUnFwaHM&authuser=0``` + * *Example ID:* + * ```0B-f6D5RM8awSfkdtRWpiTlpxdmhPblJRd2NhdHpHMFZPOFZKcWhpT2NkQlZCUlNWUnFwaHM``` + +5.
Your ```datapackage.json``` link is ```https://googledrive.com/host/{ID}/datapackage.json```; for example, using the *Example ID* from the previous step, the ```datapackage.json``` link is: + * ```https://googledrive.com/host/0B-f6D5RM8awSfkdtRWpiTlpxdmhPblJRd2NhdHpHMFZPOFZKcWhpT2NkQlZCUlNWUnFwaHM/datapackage.json``` + +[google-drive-upload]: https://support.google.com/drive/answer/2424368 +[google-drive-share-settings]: https://support.google.com/drive/answer/2494886 +[google-drive-share]: https://support.google.com/drive/answer/2494822 + +## Dropbox + +Just upload your files to Dropbox. + +You do need to be a bit careful as Dropbox does not always replicate your local file layout in its online URLs. Therefore, make sure you read the [Key Tips](#key-tips) section below. + +## Key Tips + +However you publish your Data Package, there are a few key points to keep in +mind: + +* All the files in the Data Package should be accessible online +* The structure of your Data Package should be preserved. Specifically, the paths between your `datapackage.json` and the data files must be preserved. For example, if your Data Package directory looked like this on disk: + + datapackage.json + data.csv + somedir/other-data.csv + + then online it should look like: + + http://your.website.com/mydatapackage/datapackage.json + http://your.website.com/mydatapackage/data.csv + http://your.website.com/mydatapackage/somedir/other-data.csv + + This can be a problem with services such as Google Drive, where files in a given folder don't have a web address that relates to that folder. The reason we need to preserve relative paths is that Data Package client software will compute the full path of each file from the location of the `datapackage.json` itself plus the relative path given in the `datapackage.json` resources section.
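The path computation described above can be sketched in a few lines of Python (a stdlib-only illustration using the example URLs from this section; real client libraries do essentially the same thing):

```python
from urllib.parse import urljoin

def resolve_resource_url(datapackage_url, resource_path):
    """Compute a resource's full URL from the datapackage.json location
    plus the relative path given in the resources section."""
    return urljoin(datapackage_url, resource_path)

base = "http://your.website.com/mydatapackage/datapackage.json"
print(resolve_resource_url(base, "data.csv"))
print(resolve_resource_url(base, "somedir/other-data.csv"))
```

If a service rewrites URLs so that files no longer share a common base, this computation breaks, which is exactly why the relative layout must be preserved.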
+ +Recommended reading: Find out how to use Frictionless Data software to improve your data publishing workflow in our new and comprehensive [Frictionless Data Field Guide][field-guide]. + +[field-guide]: /tag/field-guide diff --git a/site/blog/2019-03-01-datacurator/README.md b/site/blog/2019-03-01-datacurator/README.md new file mode 100644 index 000000000..7130cb385 --- /dev/null +++ b/site/blog/2019-03-01-datacurator/README.md @@ -0,0 +1,139 @@ +--- +title: Data Curator +date: 2019-03-01 +tags: ["case-studies"] +category: case-studies +interviewee: Stephen Gates +subject_context: Data Curator makes use of the Frictionless Data Specifications to allow users to define information about their data using their desktop computer, prior to publishing it on the Internet +image: /img/blog/data-curator-logo.png +description: Data Curator is a simple desktop editor to help describe, validate, and share usable open data. +--- + +# Data Curator - share usable open data + +Open data producers are increasingly focusing on improving open data so it can be easily used to create insight and drive positive change. + +Open data is more likely to be used if data consumers can: + +- understand the structure of the data +- understand the quality of the data +- understand why and how the data was collected +- look up the meaning of codes used in the data +- access the data in an open machine-readable format +- know how the data is licensed and how it can be reused + +Data Curator enables open data producers to define all this information using their desktop computer, prior to publishing it on the Internet. + +Data Curator uses the [Frictionless Data specification](https://specs.frictionlessdata.io/) and software to package the data and supporting information in a [Tabular Data Package](https://specs.frictionlessdata.io/tabular-data-package/ "Tabular Data Package specification"). 
+ +![Data Curator screenshot](./data-curator.png) + +## Using Data Curator + +Here’s how to use Data Curator to share usable open data in a data package: + +1. Download [Data Curator](https://github.com/ODIQueensland/data-curator/releases/latest "Download Data Curator for Windows or macOS") for Windows or macOS +2. In Data Curator, either: + - create some data + - open an Excel sheet + - open a separated value file (e.g. CSV, TSV) +3. Follow the steps below... + +### Describe the data + +The Frictionless Data specification allows you to describe tabular data using a [Table Schema](https://specs.frictionlessdata.io/table-schema/ "Table Schema specification"). A Table Schema allows each field in the data to be given: +- a `name`, `title` and `description` +- a data `type` (e.g. `string`, `integer`) and `format` (e.g. `uri`, `email`) +- one or more `constraints` (e.g. `required`, `unique`) to limit data values and improve data validation + +The Table Schema also allows you to describe the characters used to represent missing values (e.g. `n/a`, `tba`), primary keys, and foreign key relationships. + +After adding data in Data Curator, to create a Table Schema: + +- Give your data a header row, if it doesn't have one +- Set the header row to give each field a `name` +- Guess column properties to give each field a `type` and `format` +- Set column properties to improve the data `type` and `format` guesses, and add a `title`, `description` and `constraints` +- Set table properties to give the table a `name`, define missing values, a primary key, and foreign keys. + +### Validate the data + +Using Data Curator, you can validate whether the data complies with each field's `type`, `format` and `constraints`. Errors can be filtered in different ways so you can correct them by row, by column, or by error type. + +In some cases data errors cannot be corrected, as they should be corrected in the source system and not as part of the data packaging process.
If you're happy to publish the data with errors, the error messages can be appended to the provenance information. + +### Provide context + +Data Curator lets you add provenance information to help people understand why and how the data was collected and determine if it is fit for their purpose. + +Provenance information can be entered using [Markdown](http://commonmark.org "Markdown specification"). You can preview the Markdown formatting in Data Curator. + +![Add provenance information screenshot](./data-curator-2.png) + +You should follow the [Readme FAQ](/blog/2016/04/20/publish-faq/ "Publishing Data Packages - FAQ") when writing provenance information or, even easier, cut and paste from this [sample](https://github.com/ODIQueensland/data-curator/blob/develop/test/features/tools/sample-provenance-information.md "Sample Provenance Information Markdown file on GitHub"). + +### Explain the meaning of codes + +Data Curator supports foreign key relationships between data. Often a set of codes is used in a column of data and the list of valid codes and their description is in another table. The Frictionless Data specification enables linking this data within a table or across two tables in the same data package. + +We've implemented the [Foreign Keys to Data Packages pattern](https://specs.frictionlessdata.io/patterns/#table-schema:-foreign-keys-to-data-packages "The Foreign Keys to Data Packages pattern") so you can have foreign key relationships across two data packages. This is really useful if you want to share code-lists across organisations. + +You can define foreign key relationships in Data Curator in the table properties and the relationships are checked when you validate the data. + +### Save the data in an open format + +Data Curator lets you save data as a comma, semicolon, or tab separated value file. A matching [CSV Dialect](https://specs.frictionlessdata.io/csv-dialect/ "The CSV Dialect specification") is added to the data package. 
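To make the pieces above concrete, here is a hand-written sketch (Python, stdlib only) of a resource descriptor that combines a Table Schema foreign key with a semicolon CSV Dialect. The `schools`/`school-types` names and fields are invented for illustration; they are not actual Data Curator output:

```python
import json

# A hypothetical resource: a schools table whose "type_code" column points
# at a separate code-list resource, saved as a semicolon-separated file.
resource = {
    "name": "schools",
    "path": "data/schools.csv",
    "dialect": {"delimiter": ";"},  # the matching CSV Dialect
    "schema": {
        "fields": [
            {"name": "school_id", "type": "integer",
             "constraints": {"required": True, "unique": True}},
            {"name": "type_code", "type": "string"},
        ],
        "missingValues": ["", "n/a", "tba"],
        "primaryKey": "school_id",
        "foreignKeys": [
            {
                "fields": "type_code",
                "reference": {"resource": "school-types", "fields": "code"},
            }
        ],
    },
}
print(json.dumps(resource, indent=2))
```

The `foreignKeys` entry is what lets validation confirm that every `type_code` value appears in the referenced code-list.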
+ +### Apply an open license + +Applying a license, waiver, or public domain mark to a [data package](https://specs.frictionlessdata.io/data-package/#licenses "The licenses property in the Data Package specification") and its [resources](https://specs.frictionlessdata.io/data-resource/#optional-properties "The licenses property in the Data Resource specification") helps people understand how they can use, modify, and share the contents of the data package. + +![Apply open license to data package screenshot](./data-curator-3.png) + +Although there are many ways to [apply a licence, waiver or public domain mark](/blog/2018/03/27/applying-licenses/ "Guide to applying licenses, waivers or public domain marks to data packages") to a data package, Data Curator only allows you to use open licences - after all, its purpose is to share usable open data. + +### Export the data package + +To ensure only usable open data is shared, Data Curator applies some checks before allowing a data package to be exported. These go beyond the mandatory requirements* in the Frictionless Data specification. + +To export a tabular data package, it must have: +- a header row +- a table schema* +- a table (resource) `name`* +- a data package `name`* +- provenance information +- an open licence applied to the data package + +If a data package `version` is used, it must follow the [data package version pattern](https://specs.frictionlessdata.io/patterns/#data-package-version "Data Package Version pattern"). 
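The version pattern recommends SemVer-style `MAJOR.MINOR.PATCH` strings. A simplified check might look like this (an illustrative sketch that ignores the pre-release and build suffixes full SemVer also allows):

```python
import re

# Simplified: accepts only plain MAJOR.MINOR.PATCH version strings.
VERSION_RE = re.compile(r"^\d+\.\d+\.\d+$")

def is_valid_version(version):
    return bool(VERSION_RE.match(version))

print(is_valid_version("1.0.0"))   # True
print(is_valid_version("v1.0"))    # False
```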
+ +Before exporting a data package, you should: +- add a `title` and `description` to each field, table and data package +- acknowledge any data sources and contributors +- validate the data and add any known errors to the provenance information + +The data package is exported as a `datapackage.zip` file that contains the: + +- data files in a `/data` directory +- data package, table (resource), table schema, and CSV dialect properties in a `datapackage.json` file +- provenance information in a `README.md` file + +### Share the data + +Share the `datapackage.zip` with open data consumers by publishing it on the Internet or on an open data platform. Some platforms support uploading, displaying, and downloading data packages. + +Open data consumers will be able to read the data package with one of the many applications and software libraries that work with data packages, including Data Curator. + +## Get Started + +**[Download Data Curator](https://github.com/ODIQueensland/data-curator/releases/latest "Download Data Curator for Windows or macOS")** for Windows or macOS and start sharing usable open data. + +## Who made Data Curator? + +Data Curator was made possible with funding from the [Queensland Government](https://www.qld.gov.au) and the guidance of the Open Data Policy team within the Department of Housing and Public Works. We're grateful for the ideas and testing provided by open data champions in the Department of Environment and Science, and the Department of Transport and Main Roads. + +The project was led by [Stephen Gates](https://theodi.org/article/open-data-pathway-introducing-country-level-statistics/) from the [ODI Australian Network](https://www.linkedin.com/company/odiaustraliannetwork/about/). Software development was coordinated by Gavin Kennedy and performed by Matt Mulholland from the [Queensland Cyber Infrastructure Foundation](https://www.qcif.edu.au) (QCIF).
+ +Data Curator uses the Frictionless Data software libraries maintained by [Open Knowledge International](https://okfn.org) and we're extremely grateful for the support provided by [the team](https://github.com/orgs/frictionlessdata/teams/core/members). + +Data Curator started life as [Comma Chameleon](http://comma-chameleon.io "Comma Chameleon - A desktop CSV editor for data publishers"), an [experiment](https://youtu.be/wIIw0cTeUG0 "Stuart Harrison explains Comma Chameleon at CSVConf") by [the ODI](https://theodi.org "The Open Data Institute"). The ODI and the ODI Australian Network agreed to take the software in [different directions](https://theodi.org/article/odi-toolbox-application-experiments-from-comma-chameleon-to-data-curator/ "Stephen Fortune explains why Data Curator is a fork of Comma Chameleon"). diff --git a/site/blog/2019-03-01-datacurator/data-curator-2.png b/site/blog/2019-03-01-datacurator/data-curator-2.png new file mode 100644 index 000000000..522587f4e Binary files /dev/null and b/site/blog/2019-03-01-datacurator/data-curator-2.png differ diff --git a/site/blog/2019-03-01-datacurator/data-curator-3.png b/site/blog/2019-03-01-datacurator/data-curator-3.png new file mode 100644 index 000000000..770cef760 Binary files /dev/null and b/site/blog/2019-03-01-datacurator/data-curator-3.png differ diff --git a/site/blog/2019-03-01-datacurator/data-curator-logo.png b/site/blog/2019-03-01-datacurator/data-curator-logo.png new file mode 100644 index 000000000..511566264 Binary files /dev/null and b/site/blog/2019-03-01-datacurator/data-curator-logo.png differ diff --git a/site/blog/2019-03-01-datacurator/data-curator.png b/site/blog/2019-03-01-datacurator/data-curator.png new file mode 100644 index 000000000..229ca4920 Binary files /dev/null and b/site/blog/2019-03-01-datacurator/data-curator.png differ diff --git a/site/blog/2019-05-20-used-and-useful-data/README.md b/site/blog/2019-05-20-used-and-useful-data/README.md new file mode 100644 index
000000000..5bbba4c95 --- /dev/null +++ b/site/blog/2019-05-20-used-and-useful-data/README.md @@ -0,0 +1,428 @@ +--- +title: Used and useful data +date: 2019-05-20 +tags: ["Data Package Creator", "goodtables.io", "Command-line", "field-guide"] +category: +image: /img/blog/used.png +description: Concerned that your data is just not being used? We've got some great tips and best practices to improve the uptake of your data +--- + +## Include a data schema + +Simply put, a schema is a blueprint that tells us how your data is structured, and what type of content is to be expected in it. You can think of it as a data dictionary. Having a table schema at hand makes it possible to run more precise validation checks on your data, both at a structural and content level. + +For this section, we will use the [Data Package Creator](https://create.frictionlessdata.io) and the [Gross Domestic Product dataset for all countries (1960 - 2014)](http://datahub.io/core/gdp). + +**Data Package** is a format that makes it possible to put your data collection and relevant information that provides context about your data in one container before you share it. All contextual information, such as metadata and your data schema, is published in a JSON file named *datapackage.json*. + +**Data Package Creator** is an online service that facilitates the creation and editing of data packages. The service automatically generates a *datapackage.json* file for you as you add and edit data that is part of your data collection. We refer to each piece of data in a data collection as a **data resource**. + +[Data Package Creator](https://create.frictionlessdata.io) loads with dummy data to make it easy to understand how metadata and sample resources help generate the *datapackage.json* file. There are three ways in which a user can add data resources on [Data Package Creator](https://create.frictionlessdata.io): + +1. Provide a hyperlink to your data resource (highly recommended).
+ + If your data resource is publicly available, like on GitHub or in a data repository, simply obtain the URL and paste it in the **Path** section. To learn how to publish your data resource online, check the publish your dataset section. + +2. Create your data resource within the service. + + If your data resource isn't published online, you'll have to manually define each of its fields. Depending on how complex your data is, this can be time consuming, but it's still easier than creating the descriptor JSON file from scratch. + +3. **Load a Data Package** option + + With this option, you can load a pre-existing *datapackage.json* file to view and edit its metadata and resource fields. + +Our [Gross Domestic Product dataset for all countries (1960 - 2014)](https://github.com/frictionlessdata/example-data-packages/blob/master/gross-domestic-product-all/data/gdp.csv) is publicly available on GitHub. + +Obtain a link to the raw CSV file by clicking on the Raw button at the top right corner of the GitHub file preview page, as shown in Figure 1 below. The resulting hyperlink looks like https://raw.githubusercontent.com/datasets/continent-codes/master/data/continent-codes.csv + +![Above, raw button highlighted in red](./figure-1.png) +*Figure 1: Above, raw button highlighted in red.* + +Paste your hyperlink in the *Path* section and click on the *Load* button. Each column in your table translates to a *field*. You should be prompted to add all fields identified in your data resource, as in Figure 2 below. Click on the prompt to load the fields. + +![annotated in red, a prompt to add all fields inferred from your data resource](./figure-2.png) +*Figure 2: annotated in red, a prompt to add all fields inferred from your data resource.* + +The page that follows looks like Figure 3 below.
Each column from the GDP dataset has been mapped to a *field*. The data type for each column has been inferred correctly, and we can preview data under each field by hovering over the field name. It is also possible to edit all sections of our data resource’s fields as we can see below. + +![all fields inferred from your data resource](./figure-3.png) +*Figure 3: all fields inferred from your data resource.* + +You can now edit data types and formats as necessary, and optionally add titles and descriptive information to your fields. For example, the data type for our {Year} field should be ***year*** and not ***integer***. Our {Value} column has numeric information with decimal places. + +By definition, values under the ***integer*** data type are whole numbers. The ***number*** data type is more appropriate for the {Value} column. When in doubt about what data type to use, consult the [Table Schema data types cheat sheet](https://specs.frictionlessdata.io/table-schema/#types-and-formats). + +Click on the ![](./settings.png) icon to pick a suitable profile for your data resource. [Here’s more information about Frictionless Data profiles](https://specs.frictionlessdata.io/profiles/). + +If your dataset has other data resources, add them by scrolling to the bottom of the page, clicking on **Add Resource**, and repeating the same process as we just did. + +![Prompt to add more data resources](./figure-4.png) +*Figure 4: Prompt to add more data resources.* + + +---- + + +## Add descriptive metadata + +In the previous section, we described each of our data resources, but we're still missing metadata for the dataset collection as a whole. You can add it via the **Metadata** section on the left sidebar, describing things like the dataset name, description, author, license, etc.
+ +![Add Data Package Metadata](./figure-5.png) + +The **Profile** section under metadata allows us to specify what kind of data collection we are packaging. +* *Data Package* +This is the base, most general profile. Use it if your dataset contains resources of mixed formats, like tabular and geographical data. The base requirement for a valid Data Package profile is the *datapackage.json* file. See the [Data Package specification](https://specs.frictionlessdata.io/data-package/) for more information. + +* *Tabular Data Package* +If your data contains only tabular resources like CSVs and spreadsheets, use the Tabular Data Package profile. See the [Tabular Data Package specification](https://specs.frictionlessdata.io/tabular-data-package/) for more information. +* *Fiscal Data Package* +If your data contains fiscal information like budgets and expenditure data, use the Fiscal Data Package profile. See the [Fiscal Data Package specification](https://specs.frictionlessdata.io/fiscal-data-package/) for more information. + +In our example, as we only have a CSV data resource, the *Tabular Data Package* profile is the best option. + +In the **Keywords** section, you can add any keywords that help make your data collection more discoverable. For our dataset, we might use the keywords *GDP, National Accounts, National GDP, Regional GDP*. Other datasets could include the country name, dataset area (e.g. "health" or "environmental"), etc. + +Now that we have created a Data Package, we can **Validate** or **Download** it. But first, let’s see what our datapackage.json file looks like. With every addition and modification, the [Data Package Creator](https://create.frictionlessdata.io) has been populating the *datapackage.json* file for us. Click on the **{···}** icon to view the *datapackage.json* file. As you can see below, any edit we make to the description of the Value field is reflected in the JSON file in real time.
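Since the generated file is plain JSON, you can also assemble a descriptor by hand. A trimmed sketch for our GDP example (only two of the fields are shown, and the title and description text is illustrative rather than copied from the real dataset):

```python
import json

# A trimmed datapackage.json descriptor for the GDP example.
descriptor = {
    "name": "gdp",
    "title": "Gross Domestic Product for all countries (1960 - 2014)",
    "profile": "tabular-data-package",
    "keywords": ["GDP", "National Accounts", "National GDP", "Regional GDP"],
    "resources": [
        {
            "name": "gdp",
            "path": "data/gdp.csv",
            "profile": "tabular-data-resource",
            "schema": {
                "fields": [
                    {"name": "Year", "type": "year"},
                    {"name": "Value", "type": "number",
                     "description": "GDP value for the year (illustrative)"},
                ]
            },
        }
    ],
}
print(json.dumps(descriptor, indent=2))
```

Note the `year` and `number` types match the corrections we made in the Creator above.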
+ +The **Validate** button allows us to confirm whether we chose the correct Profile for our Data Package. The two possible outcomes at this stage are: + +![Data Package is Invalid](./figure-6.png) + +This message appears when there is a validation error, such as a missing required attribute (e.g. the data package name) or an incorrect profile (e.g. a Tabular Data Package with geographical data). Review the metadata and profiles to find the mistake and try validating again. + +![Data Package is Valid](./figure-7.png) + +All good! This message means that your data package is valid, and we can download it. + + +---- + + +## Create Data Packages + +As we said earlier, the base requirement for a valid Data Package profile is the *datapackage.json* file, which contains your data schema and metadata. We call this the descriptor file. You can download your descriptor file by clicking on the **Download** button. + +* If your data resources, like ours, were linked from an online public source, sharing the *datapackage.json* file is sufficient, since it contains URLs to your data resources. + +* If you manually created a data resource and its fields, remember to add all your data resources and the downloaded *datapackage.json* file in one folder before sharing it. + +The way to structure your dataset depends on your data, and what extra artifacts it contains (e.g. images, scripts, reports, etc.). In this section, we'll show a complete example with: + +* **Data files**: The files with the actual data (e.g. CSV, XLS, GeoJSON, ...) +* **Documentation**: How the data was collected, any caveats, how to update it, etc. +* **Metadata**: Where the data comes from, what's in the files, what's their source and license, etc. +* **Scripts**: Software scripts that were used to generate, update, or modify the data.
+ +Your final Data Package file directory should look like this: + +``` +data/ + dataresource1.csv + dataresource2.csv +datapackage.json +``` +* **data/**: All data files are contained in this folder. In our example, there is only one: `data/gdp.csv`. + +* **datapackage.json**: This file describes the dataset's metadata. For example, what the dataset is, where its files are, what they contain, what each column means (for tabular data), what the source, license, and authors are, and so on. As it's a machine-readable specification, other software can import and validate your files. + +Congratulations! You have now created a schema for your data, and combined it with descriptive metadata and your data collection to create your first data package! + + +---- + + +## Validate your packaged data automatically + +Running continuous checks on data provides regular feedback and contributes to better data quality as errors can be flagged and fixed early on. + +In this section, you will learn how to set up automatic tabular data validation using goodtables, so your data is validated every time it's updated. Although not strictly necessary, it's useful to [know about Data Packages and Table Schema](/blog/2018/03/07/well-packaged-datasets/) before proceeding, as they let you describe your data in more detail, enabling more advanced validations. + +We will show how to set up automated tabular data validations for data published on: + +* [CKAN](https://ckan.org/), an open source platform for publishing data in the open, that makes it easy to discover, use and share data; +* [GitHub](https://github.com/), a web platform for collaborating on projects as well as publishing, sharing and storing resources, such as data files; +* [Amazon S3](https://aws.amazon.com/s3/), a data storage service by Amazon.
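To get a feel for the kind of checks these services run, here is a tiny stdlib-only sketch in the spirit of goodtables: it flags blank rows and cell values that fail a simple type cast (the real library performs many more structural and content checks, and reports them in a richer format):

```python
import csv
import io

def check_table(csv_text, casts):
    """Report (line, error-code) pairs: blank rows, and cells that fail
    the casting function given for their column (e.g. int, float)."""
    errors = []
    reader = csv.DictReader(io.StringIO(csv_text))
    for line, row in enumerate(reader, start=2):  # the header is line 1
        if all((value or "").strip() == "" for value in row.values()):
            errors.append((line, "blank-row"))
            continue
        for name, cast in casts.items():
            try:
                cast(row[name])
            except (TypeError, ValueError):
                errors.append((line, "type-error:" + name))
    return errors

# Line 3 is blank; line 4 has a bad year ("196O") and a bad value ("x").
sample = "country,year,value\nAFG,1960,537777811.1\n,,\nALB,196O,x\n"
print(check_table(sample, {"year": int, "value": float}))
```

Running a check like this on every change is exactly what the platform integrations below automate for you.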
+ +Even if you don't use any of these platforms, you can still set up the validation using [goodtables-py][gt-py]; it just requires some technical knowledge. + +### Validate tabular data automatically on CKAN + +[CKAN](https://ckan.org/) is an open source platform for publishing data online. It is widely used across the planet, including by the federal governments of the USA, United Kingdom, Brazil, and others. + +To automatically validate tabular data on CKAN, enable the [ckanext-validation](https://github.com/frictionlessdata/ckanext-validation) extension, which uses goodtables to run continuous checks on your data. The [ckanext-validation](https://github.com/frictionlessdata/ckanext-validation) extension: + +* Adds a badge next to each dataset showing its validation status (valid or invalid), and +* allows users to access the validation report, making it possible for errors to be identified and fixed. + +![annotated in red, automated validation checks on datasets in CKAN](./figure-8.png) +*Figure 8: annotated in red, automated validation checks on datasets in CKAN.* + +The installation and usage instructions for the [ckanext-validation](https://github.com/frictionlessdata/ckanext-validation) extension are available on [GitHub](https://github.com/frictionlessdata/ckanext-validation). + + +### Validate tabular data automatically on GitHub + +If your data is hosted on GitHub, you can use [https://goodtables.io][gt.io] to automatically validate it on every change. + +For this section, you will first need to create a [GitHub repository](https://help.github.com/articles/create-a-repo/) and add tabular data to it. + +Once you have tabular data in your GitHub repository: + +1. Log in to [goodtables.io][gt.io] with your GitHub account and accept the permissions confirmation. +1. Once we've synchronized your repository list, go to the [Manage Sources](https://goodtables.io/settings) page and enable the repository with the data you want to validate.
   * If you can't find the repository, try clicking on the Refresh button on the Manage Sources page.

Goodtables will then validate all tabular data files (CSV, XLS, XLSX, ODS) and [data packages](/data-package/) in the repository. These validations will be executed on every change, including pull requests.

### Validate tabular data automatically on Amazon S3

If your data is hosted on Amazon S3, you can use [https://goodtables.io][gt.io] to automatically validate it on every change.

It is a technical process to set up, as you need to know how to configure your Amazon S3 bucket. However, once it's configured, the validations happen automatically on any tabular data created or updated. Find the detailed instructions [here][gt.io:s3].

### Custom setup of automatic tabular data validation

If you don't use any of the officially supported data publishing platforms, you can use [goodtables-py][gt-py] directly to validate your data. This is the most flexible option, as you can configure exactly when and how your tabular data is validated. For example, if your data comes from an external source, you could validate it once before you process it (so you catch errors in the source data), and once after cleaning, just before you publish it (so you catch errors introduced by your data processing).

The instructions on how to do this are technical, and can be found at [https://github.com/frictionlessdata/goodtables-py][gt-py].

----


## Publish packaged data

Creating and sharing Data Packages is important for both data publishers and data users because it provides a common and open specification to describe your dataset's metadata. This facilitates data reuse, as users don't need to understand each data publisher's specific metadata format, and as the specification is machine-readable, it also allows tools to parse the metadata.
This enables software to:

* Import the data packages into different tools and languages, like Python and R
* Validate the data contents according to the schema described in the data package
* Convert the data package into other formats, for example loading it into a SQL database for further analysis

Although these reasons are not unique to publishing data as data packages, here's why we think data publishers should consider publishing in this format:

* Archiving data collections using data packages ensures data publishers can update data more efficiently at any time. The associated schema is a guide to existing data fields and acceptable data types for individual tabular data resources, and can be easily built upon.

* Sharing data with descriptive metadata and its associated schema provides context for your data no matter where it is used, and significantly cuts down on time spent researching data provenance before using acquired data.

* Data Packages allow for accountability and enrich the feedback process, as data publishers can add metadata with contact information for users to reach out to them, and licensing to spell out accepted use of published data.

If you don't need your own data portal, there are many platforms where you can publish your data (if you need your own, check [CKAN](https://ckan.org/)). In the sections below, we dive into a few options. Read along and decide which option is most suitable:

### Publish packaged data in our community CKAN instance

CKAN is an open source platform for publishing data that makes it easy to discover, use and share data. [datahub.ckan.io][ckan-datahub] is a public instance of CKAN that allows anyone to publish their data.

Here's why you should consider creating an organization on [datahub.ckan.io][ckan-datahub] and publishing datasets there:
* [datahub.ckan.io][ckan-datahub] is free for all to use! The file upload size limit on the platform is currently 100 MB.
* The decision on whether to publicly or privately publish datasets rests with data publishers.
* [datahub.ckan.io][ckan-datahub] organizations allow multiple users to collaborate with varied privileges:
    * **Admin**: Can add/edit and delete datasets, as well as manage organization members.
    * **Editor**: Can add and edit datasets, but not manage organization members.
    * **Member**: Can view the organization's private datasets, but not add new datasets.

To publish data on [datahub.ckan.io][ckan-datahub]:

1. Request a new Organization to be created for you on the platform via [our community page](https://discuss.okfn.org/c/open-knowledge-labs/datahub).
   This is required only to ensure spammers don't take up space and hog resources on the platform.

   The request format is simple and requires:
   * **Title**: This will be the name of your Organization on [datahub.ckan.io][ckan-datahub], e.g.
     *My New Organization*

   * **Slug**: This is an acronym, word or hyphenated phrase that will be added to the end of the [datahub.ckan.io][ckan-datahub] URL to uniquely identify your Organization and associate your data collections with it, e.g.
     *my-new-organization*

   * **Username**: The username you provide is associated with an email address on [datahub.ckan.io][ckan-datahub] and allows us to give you admin access to your Organization on [datahub.ckan.io][ckan-datahub].

2. Log in and add new datasets.

   Adding datasets on [datahub.ckan.io][ckan-datahub] is no different from using any other CKAN platform, but [here’s a good guide by Dan Fowler](http://okfnlabs.org/blog/2016/07/25/publish-data-packages-to-datahub-ckan.html) for first-timers.

3. Publish and share public datasets widely.

   On [datahub.ckan.io][ckan-datahub], you can either publish datasets privately, meaning only members of your organization have access to them, or publicly, as open data. [Find out more](http://okfnlabs.org/blog/2016/07/25/publish-data-packages-to-datahub-ckan.html).


### Publish packaged data on DataHub.io

DataHub.io is a platform for finding, sharing and publishing high quality data online.

[DataHub.io][datahub] and [datahub.ckan.io][ckan-datahub] share the same name for historical reasons. [Datahub.ckan.io][ckan-datahub] used to be the DataHub, but was moved to its current address, and the current DataHub uses new software developed from scratch.

1. Set up a data publisher / user account on [DataHub.io][datahub].

   Join the [datahub.io community group](https://gitter.im/datahubio/chat), introduce yourself and request an account.

2. Publish datasets on [DataHub.io][datahub].

   [This post](http://datahub.io/docs/getting-started/publishing-data) provides helpful information on publishing datasets on [DataHub.io][datahub].


### Publish packaged data on GitHub

GitHub is the largest repository of source code, with [more than 20 million
users](https://github.com/blog/2345-celebrating-nine-years-of-github-with-an-anniversary-sale). Although the focus is on hosting source code, any type of file can be hosted.
Documents, theses, images, shapefiles: you can even host an entire static website with [GitHub Pages](https://pages.github.com/).

By using GitHub, you get all the advantages of a version control system such as Git, where every modification to your files is tracked. You also get an issue ticketing system, wiki pages, milestone tracking, and other useful collaboration tools.

**What types of datasets can be hosted on GitHub?**

Although GitHub offers many useful functionalities, not all datasets are a good fit for it. The main limitations are:

* Individual files must be smaller than 100 MB
* The entire repository must be smaller than 1 GB
  * The repository size includes not only the current files, but all of their previous versions.

You can store larger files using [git-lfs](https://git-lfs.github.com/), but we won't go into detail about it in this section.

It's also useful if your data files use text-based file formats like CSV or GeoJSON, as then git is able to show you exactly what changed between two versions of the files. However, even if you use binary file formats like XLS, GitHub is still useful.

**Step 1. Organise your dataset folder structure**

The way to structure your dataset depends on your data and what extra artifacts it contains (e.g. images, scripts, reports, etc.). In this section, we'll show a complete example with:

* **Data files**: The files with the actual data (e.g. CSV, XLS, GeoJSON, ...)
* **Documentation**: How was the data collected, any caveats, how to update it, etc.
* **Metadata**: Where the data comes from, what's in the files, what's their source and license, etc.
* **Scripts**: Software scripts that were used to generate, update, or modify the data.

Even though we'll see an example that has all of these different types of files, this isn't always the case. For example, datasets that were collected manually might not have any scripts.
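A small helper can scaffold a conventional layout for these components. The directory and file names below are a suggestion only, not a requirement of any tool:

```python
import os

# Suggested directories for the dataset components listed above
DIRECTORIES = ["data", "docs", "scripts"]
# Suggested top-level files (created empty, to be filled in by hand)
TOP_FILES = ["README.md", "datapackage.json", "Makefile"]

def scaffold(root):
    """Create an empty dataset skeleton under `root`."""
    for name in DIRECTORIES:
        os.makedirs(os.path.join(root, name), exist_ok=True)
    for name in TOP_FILES:
        # "touch" an empty placeholder file
        open(os.path.join(root, name), "a").close()

scaffold("my-dataset")
```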
Consider this folder structure:

```
data/
  schools.csv
  cities.csv
docs/
  screenshot.png
scripts/
  clean_data.py
Makefile
datapackage.json
README.md
```

* **data/**: All data files are contained in this folder. In our example, there are two: `data/schools.csv` and `data/cities.csv`.
* **docs/**: Images, sample analysis, and other documentation files regarding the dataset. The main documentation is in `README.md`, but in this folder you can add any images used in the README, and other writings about the dataset.
* **scripts/**: All scripts are contained in this folder. There could be scripts to scrape the data, join different files, clean them, etc. Depending on the programming language you use, you might also add requirements files like `requirements.txt` for Python, or `package.json` for NodeJS.
* **Makefile**: The scripts are only part of the puzzle; we also need to know how to run them: in which order they should be executed, which one to run to update the data, and so on. You could document this information textually in the `README.md` file, but the `Makefile` gives you executable documentation. You can think of it as a script to run the scripts. If you have never written a Makefile, read [Why Use Make](https://bost.ocks.org/mike/make/).
* **datapackage.json**: This file describes the dataset's metadata. For example, what the dataset is, where its files are, what they contain, what each column means (for tabular data), and what the source, license, and authors are. As it's a machine-readable specification, other software can import and validate your files. See [how to create a data package](/blog/2018/03/07/well-packaged-datasets/) for instructions on writing this file.
* **README.md**: This is where the dataset is described for humans.
  We recommend the following sections:
    * **Introduction**: A short description of the dataset, what it contains, and the time or geographical area it covers
    * **Data**: What is the data structure? Does it use any codes? How are missing values defined (e.g. 'N/A' or '-1')?
    * **Preparation**: How was the data collected? How do I update the data? Was it modified in any way? If you have a `Makefile`, this section will mostly document how to run it. Otherwise you can describe how to run the scripts, or how to collect the data manually.
    * **License**: There are two issues here: the license of the data itself, and the license of the package you are creating (including any scripts). Our recommendation is to license the package you created as [CC0][cc0], and add any relevant information or disclaimers regarding the source data's license.

To summarize, these are the folders, files, and their respective contents in this structure:

| Path | Type | Contents |
| --- | --- | --- |
| data/ | Data | Dataset's data files. |
| docs/ | Documentation | Images, analysis, and other documentation files. |
| scripts/ | Scripts | Scripts used for creating, modifying, or analysing the dataset. |
| Makefile | Scripts | Executable documentation on how to run the scripts. |
| datapackage.json | Metadata | Data Package descriptor file. |
| README.md | Documentation | Textual description of the dataset with description, preparation steps, license, etc. |

**Step 2. Upload the dataset to GitHub**

1. Log in to (or create) an account on GitHub
1. Create [a new repository][gh:newrepo]
   * Write a short description of the dataset
1. On your repository page, click on the "Upload files" link
1. Upload the files you created in the previous step
   * If you have files larger than 25 MB, you'll need to either [upload using the command line][gh:addfiles-cli] or use the [GitHub Desktop client][gh:desktop].

**(Optional) Step 3. Enable automatic tabular data validation**

You can automatically validate your tabular data files using [goodtables.io][gt.io]. This will take only a few minutes, and will ensure you always know when there are errors in your dataset, maintaining its quality. [Read the walkthrough here](/blog/2018/03/12/automatically-validated-tabular-data).

The sample dataset used in this example, a list of schools in Birmingham, UK, is available [in this repository](https://github.com/vitorbaptista/birmingham_schools).


----


## Share packaged data effectively

Publishing packaged data is not enough. To avoid useful information being hidden away in open archives online, it is necessary to engage communities that could be interested in your data. Community engagement should not be viewed as a one-off assignment, but rather as a continuous effort to increase the impact of your data publishing work.

Some best practices:

1. Publish quickly, update often.

   The true value of published data lies in its use and reuse by open data communities. Publish data as soon as possible and update it regularly so users have access to the latest information.

2. Set up feedback loops.

   Your data publishing platform should aim to get community “buy-in” by encouraging participatory processes. Feedback loops are important because:

   * they allow data users to ask for clarifications and request additional information about specific datasets, if need be;
   * they allow data publishers to understand what communities need and publish data driven by user demand, increasing the chance it'll be used;
   * they provide an avenue for data publishers to learn how their data is used, so they can gauge its impact.

   Examples of feedback loops that data publishers can set up include:

   * Adding a comments section in a data platform.
     Needless to say, the comments section should be monitored closely to ensure that responses are sent promptly, and that discussions remain respectful and on topic.
   * A dedicated social platform channel, such as a Google Group or Facebook group, with a prominently placed link from the data platform, for sharing updates and for collating and responding to feedback.
   * An e-mail address where users can contact the people responsible for the datasets with clarifications, suggestions, or error reports.


3. Meet open data communities in the places they already meet.

   Communities thrive when there’s continued discourse over shared interests.
   Data publishers should be active in existing networks, as supporters and collaborators in community data initiatives. Some of the ways this can be done, leveraging Open Knowledge communities and others, include:

   * Kickstarting and joining discussions in online forums
   * Blogs
     As a data publisher, running a data blog is a great way to create awareness about the data you publish, and an avenue to highlight how data users are drawing insight from it. This encourages use and reuse of your data. If you don’t run a data blog, there are plenty of open data blogs that welcome external contributions, e.g. [here’s how](https://blog.okfn.org/submit/) you can contribute guest posts on [blog.okfn.org](http://blog.okfn.org).

   * Open Knowledge Discuss
     The Open Knowledge discussion platform is a great place to start and contribute to conversations on specific subjects. [Dive in](https://discuss.okfn.org)!

   * Gitter
     Gitter is a chat platform that’s well suited for more technical discussions around open data. If you are looking to engage technical data users, consider joining our [Open Knowledge Foundation channel](https://gitter.im/okfn/chat) or the [Frictionless Data project channel](https://gitter.im/frictionlessdata/chat).
   * In-person meetups
     Organizing and participating in meetups, hackathons and domain-specific conferences is a good way to engage with communities.

   * Community calls, webinars and podcasts

Finally, to maintain an active community of data users as a data publisher:
* Keep your datasets updated and highlight changes that might be of interest to the community. For example, if the changes are relevant to a specific data request, reach out and let the user know.
* Have a human representative play an active role in community activities. Bots can be fun and efficient, but they are limited and can get in the way of meaningful interactions.
* Be flexible and transparent. Listen to your community's needs and respond appropriately and in a timely fashion, e.g. consider publishing datasets that are in high demand first, or more regularly. Archive, rather than delete, datasets; if one must be deleted, give forewarning and explain why.
* Set up a sharing system to regularly (e.g. fortnightly) showcase notable data use cases by the community, to inspire other community members.
+ + + +[why-make]: https://bost.ocks.org/mike/make/ "Why use Make" +[publish-faq]: /guides/publish-faq/ "Publishing Data Packages - FAQ" +[gt.io]: https://goodtables.io +[gt.io:github]: https://docs.goodtables.io/getting_started/github.html "Validating data on GitHub" +[gt.io:s3]: https://docs.goodtables.io/getting_started/s3.html "Validating data on Amazon S3" +[gt-py]: https://github.com/frictionlessdata/goodtables-py "Goodtables.py" +[cc0]: https://creativecommons.org/publicdomain/zero/1.0/ "Creative Commons Public Domain Dedication" +[ckan]: https://ckan.org +[ckan-datahub]: https://datahub.ckan.io +[datahub]: https://datahub.io +[gh:newrepo]: https://github.com/new "GitHub New Repository" +[gh:desktop]: https://desktop.github.com/ +[gh:addfiles]: https://help.github.com/articles/adding-a-file-to-a-repository/ "Adding a file to a repository" +[gh:addfiles-cli]: https://help.github.com/articles/adding-a-file-to-a-repository-using-the-command-line/ "Adding a file to a repository using the command line" +[gtio]: https://goodtables.io/ "Goodtables.io" +[github]: https://github.com/ "GitHub" +[s3]: https://aws.amazon.com/s3/ "Amazon S3" +[s3-region-bug]: https://github.com/frictionlessdata/goodtables.io/issues/136 "Can't add S3 bucket with other region that Oregon (us-west-2)" +[howto-s3bucket]: https://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-bucket.html "How do I create an S3 Bucket?" +[howto-s3upload]: https://docs.aws.amazon.com/AmazonS3/latest/user-guide/upload-objects.html "How do I upload files and folders to an S3 Bucket?" 
+[howto-iamuser]: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html?icmpid=docs_iam_console "Create an IAM User in your AWS account" +[bucket-overview]: https://s3.console.aws.amazon.com/s3/buckets/ "Amazon S3 Bucket list" +[gh-new-repo]: https://help.github.com/articles/create-a-repo/ "GitHub: Create new repository tutorial" +[gtio-managesources]: https://goodtables.io/settings "Goodtables.io: Manage sources" +[datapackage]: /data-package/ "Data Package" +[gtio-dataschema]: writing_data_schema.html "Writing a data schema" +[gtio-configuring]: configuring.html "Configuring goodtables.io" diff --git a/site/blog/2019-05-20-used-and-useful-data/figure-1.png b/site/blog/2019-05-20-used-and-useful-data/figure-1.png new file mode 100644 index 000000000..2a5ecdab7 Binary files /dev/null and b/site/blog/2019-05-20-used-and-useful-data/figure-1.png differ diff --git a/site/blog/2019-05-20-used-and-useful-data/figure-2.png b/site/blog/2019-05-20-used-and-useful-data/figure-2.png new file mode 100644 index 000000000..14ab720b7 Binary files /dev/null and b/site/blog/2019-05-20-used-and-useful-data/figure-2.png differ diff --git a/site/blog/2019-05-20-used-and-useful-data/figure-3.png b/site/blog/2019-05-20-used-and-useful-data/figure-3.png new file mode 100644 index 000000000..6d21c6992 Binary files /dev/null and b/site/blog/2019-05-20-used-and-useful-data/figure-3.png differ diff --git a/site/blog/2019-05-20-used-and-useful-data/figure-4.png b/site/blog/2019-05-20-used-and-useful-data/figure-4.png new file mode 100644 index 000000000..de12a3c34 Binary files /dev/null and b/site/blog/2019-05-20-used-and-useful-data/figure-4.png differ diff --git a/site/blog/2019-05-20-used-and-useful-data/figure-5.png b/site/blog/2019-05-20-used-and-useful-data/figure-5.png new file mode 100644 index 000000000..28b85b14a Binary files /dev/null and b/site/blog/2019-05-20-used-and-useful-data/figure-5.png differ diff --git 
a/site/blog/2019-05-20-used-and-useful-data/figure-6.png b/site/blog/2019-05-20-used-and-useful-data/figure-6.png new file mode 100644 index 000000000..7948125e4 Binary files /dev/null and b/site/blog/2019-05-20-used-and-useful-data/figure-6.png differ diff --git a/site/blog/2019-05-20-used-and-useful-data/figure-7.png b/site/blog/2019-05-20-used-and-useful-data/figure-7.png new file mode 100644 index 000000000..b45a04bfe Binary files /dev/null and b/site/blog/2019-05-20-used-and-useful-data/figure-7.png differ diff --git a/site/blog/2019-05-20-used-and-useful-data/figure-8.png b/site/blog/2019-05-20-used-and-useful-data/figure-8.png new file mode 100644 index 000000000..750448f48 Binary files /dev/null and b/site/blog/2019-05-20-used-and-useful-data/figure-8.png differ diff --git a/site/blog/2019-05-20-used-and-useful-data/settings.png b/site/blog/2019-05-20-used-and-useful-data/settings.png new file mode 100644 index 000000000..b6597030b Binary files /dev/null and b/site/blog/2019-05-20-used-and-useful-data/settings.png differ diff --git a/site/blog/2019-05-20-used-and-useful-data/used.png b/site/blog/2019-05-20-used-and-useful-data/used.png new file mode 100644 index 000000000..dbe1c15c5 Binary files /dev/null and b/site/blog/2019-05-20-used-and-useful-data/used.png differ diff --git a/site/blog/2019-07-02-stephan-max/README.md b/site/blog/2019-07-02-stephan-max/README.md new file mode 100644 index 000000000..83f3eeb7a --- /dev/null +++ b/site/blog/2019-07-02-stephan-max/README.md @@ -0,0 +1,35 @@ +--- +title: "Tool Fund Grantee: Stephan Max" +date: 2019-07-02 +tags: ["tool-fund"] +author: Stephan Max and Lilly Winfree +category: grantee-profiles-2019 +image: /img/blog/stephanmax.jpg +# description: Tool Fund Grantee - Stephan Max +--- + +This grantee profile features Stephan Max for our series of Frictionless Data Tool Fund posts, written to shine a light on Frictionless Data’s Tool Fund grantees, their work and to let our technical community know how they can 
get involved.



### Meet Stephan Max

Hi, my name is Stephan Max and I am a computer scientist based in Cologne, Germany. I’ve been in the industry for over 10 years now and have worked for all kinds of companies, ranging from my own startup (crowd-funded online journalism), through a big corporation (IBM), to an established African business data startup (Asoko Insight). I am now a filter engineer at eyeo, trying to make the web a fair, open, and safe place for everybody.

I love working with kids and teenagers, cooking, and making music—I just recently started drum lessons!

### How did you first hear about Frictionless Data?

I’ve been following the work of the Open Knowledge Foundation for a while now and contributed to the German branch as a mentor for the teenage hackathon weekends project “Jugend Hackt” (Youth Hacks). I first heard about the Frictionless Data program when the OKF announced funding by the Sloan Foundation in 2018. After watching Serah Njambi Rono’s talk on YouTube ([https://www.youtube.com/watch?v=3Ranx9Jz0Ro](https://www.youtube.com/watch?v=3Ranx9Jz0Ro)) and reading about the Reproducible Research Tool Fund on Twitter, I knew I wanted to contribute.

### Why did you apply for a Tool Fund grant?

I first heard about the concepts and challenges around Reproducible Research when taking the MOOC “Data Science” from Johns Hopkins University on Coursera. Since I had my fair share of work inside proprietary data formats and tools, I was happy to see that there are people out there making serious efforts to remedy the loss of attribution and data manipulation steps. After browsing through OKF’s Frictionless Data website, I was even happier to find that there are actual tools, libraries, and standards already available. Applying for the Tool Fund and contributing my own humble idea was a no-brainer for me.

### What specific issues are you looking to address with the Tool Fund?

My goal is to add a Data Package import/export add-on to Google Sheets.
I understand that a lot of data wrangling is still done in Sheets and Excel, with files being swapped around. A lot of information is lost that way. Where did the data initially come from? How was it manipulated, cleaned, or otherwise altered? How can we feed spreadsheets back into a Reproducible Research pipeline? I think Data Packages are a brilliant format to model and preserve exactly that information. While I do not want to lure people away from the tools they are already familiar with, I think we can bridge the gap between Google Sheets and Frictionless Data by making Data Packages a first-class citizen.

### How can the open data and open source community engage with the work you are doing around the Frictionless Data Google Sheets add-on?

I think open source and open data are a unique and wonderful opportunity to tap the “wisdom of the crowd” and ensure that software and information are, and remain, accessible to everyone. In the first few weeks I will focus on getting a first prototype and sufficient documentation up, so you can all play with the Data Package import/export add-on as soon as possible. After that, I invite you to take a look at our GitHub repository ([https://github.com/frictionlessdata/googlesheets-datapackage-tools](https://github.com/frictionlessdata/googlesheets-datapackage-tools)), play around with the tool, and contribute. Raising an issue, opening a pull request, improving the documentation, giving feedback on the user experience—everything counts! I am so stoked to be part of this Frictionless Data journey and can’t wait to see what we will accomplish. Thank you very much in advance!
diff --git a/site/blog/2019-07-02-stephan-max/stephanmax.jpg b/site/blog/2019-07-02-stephan-max/stephanmax.jpg new file mode 100644 index 000000000..864939021 Binary files /dev/null and b/site/blog/2019-07-02-stephan-max/stephanmax.jpg differ diff --git a/site/blog/2019-07-03-nes/Joao.jpg b/site/blog/2019-07-03-nes/Joao.jpg new file mode 100644 index 000000000..2b9999294 Binary files /dev/null and b/site/blog/2019-07-03-nes/Joao.jpg differ diff --git a/site/blog/2019-07-03-nes/README.md b/site/blog/2019-07-03-nes/README.md new file mode 100644 index 000000000..6b5a9f68f --- /dev/null +++ b/site/blog/2019-07-03-nes/README.md @@ -0,0 +1,45 @@

---
title: "Tool Fund Grantee: Carlos Eduardo Ribas and João Alexandre Peschanski"
date: 2019-07-03
tags: ["tool-fund"]
author: Carlos Eduardo Ribas, João Alexandre Peschanski, and Lilly Winfree
category: grantee-profiles-2019
image: /img/blog/carlos.jpg
# description: Tool Fund Grantee - NES
---

This grantee profile features Carlos Eduardo Ribas and João Alexandre Peschanski from the Neuroscience Experiments System (NES) for our series of Frictionless Data Tool Fund posts, written to shine a light on Frictionless Data’s Tool Fund grantees and their work, and to let our technical community know how they can get involved.



## Meet Carlos, João, and RIDC NeuroMat

João Alexandre Peschanski is the [Cásper Líbero](https://en.wikipedia.org/wiki/Faculdade_C%C3%A1sper_L%C3%ADbero) Professor of Digital Media and Computational Journalism and the research supervisor of the dissemination team of the [Research, Innovation and Dissemination Center for Neuromathematics](https://en.wikipedia.org/wiki/NeuroMat) (RIDC NeuroMat), from the São Paulo Research Foundation. He is also the president of the [Wiki Movimento Brasil](https://meta.wikimedia.org/wiki/Wikimedia_Community_User_Group_Brasil), the Brazilian affiliate of the Wikimedia movement.
As an academic, he has worked on open crowdsourcing resources, as well as structured narratives and the semantic web.

Carlos Eduardo Ribas is the leading software developer at the RIDC NeuroMat. He holds a position at the [University of São Paulo](https://en.wikipedia.org/wiki/University_of_S%C3%A3o_Paulo) as a systems analyst. He is the development team leader of the [Neuroscience Experiments System](https://github.com/neuromat/nes).

The RIDC NeuroMat is a research center established in 2013 at the University of São Paulo, in Brazil. Among the core missions of NeuroMat are the development of open-source computational tools and an active role in open knowledge, open science and scientific dissemination. The NeuroMat project was recently renewed until July 31, 2024.

## How did you first hear about Frictionless Data and why did you apply for a Tool Fund grant?

We learned about the Tool Fund from an [announcement](https://br.okfn.org/2019/02/21/open-knowledge-internacional-anuncia-fundo-para-ferramenta-de-frictionless-data/) in Portuguese that was posted by Open Knowledge Brasil. The Frictionless Data Tool Fund grant is also an opportunity to connect with like-minded professionals and their projects, and eventually to build and support a community deeply engaged with the development of open science and tools.

Public databases are seen as crucial by many members of the neuroscientific community as a means of moving forward more effectively in understanding the functioning and treatment of brain pathologies. However, open data alone is not enough: it should be created in a way that allows it to be easily shared and used. Data and metadata should be readable by researchers and machines, and Frictionless Data can certainly help with this.

In our case, NES and the NeuroMat Open Database were developed to establish a standard for data collection in neuroscientific experiments.
The standardization of data collection is key for reproducible science. The advantage of the Frictionless Data approach for us is fundamentally being able to standardize data opening and sharing within the scientific community.

## What specific issues are you looking to address with the Tool Fund?

NES is an open-source tool that aims to assist neuroscience research laboratories in routine procedures for data collection. NES was developed to store a large amount of data in a structured way, allowing researchers to seek and share data and metadata of neuroscience experiments. To the best of our knowledge, there are no other open-source software tools which provide a way to record the data and metadata involved in all steps of an electrophysiological experiment and also register experimental data and its fundamental provenance information. With the anonymization of sensitive information, the data collected using NES can be made publicly available through the [NeuroMat Open Database](https://neuromatdb.numec.prp.usp.br/), which allows any researcher to reproduce the experiment or simply use the data in a different study.

The system already has several features ready to use, such as participant registration, experiment management, questionnaire management and data export. Some types of data that NES deals with are tasks, stimuli, instructions, EEG, EMG, TMS and questionnaires. Questionnaires are produced with [LimeSurvey](https://www.limesurvey.org/) (open-source software).

We propose to change NES to follow the Frictionless Data philosophy. The data export module can be adjusted to reflect the set of specifications for data and metadata interoperability and to output the Data Package format, and any other feature can be brought in accordance with the proposed philosophy. A major feature to be developed is a JSON “descriptor” file with initial information related to the experiment.
However, as sensitive information may be present at this stage, public access to such data will be granted only after anonymization and submission of the experiment to the NeuroMat Open Database. + +Bringing NES in line with the Frictionless Data philosophy opens up an opportunity for scientists to have access not only to a universe of well-documented and labeled data, but also to understand the process that generated this data. + +## How can the open data, open source, community engage with the work you are doing around Frictionless Data and NES? + +The source code is available on [GitHub](https://github.com/neuromat/nes) ([documentation link](https://nes.readthedocs.io/en/latest/)). Development is done on the Django framework. The license is the Mozilla Public License Version 2.0. NES is an open source project managed using the Git version control system, so contributing is as easy as forking the project and committing your enhancements. + +As the RIDC NeuroMat has published [elsewhere](https://neuromat.numec.prp.usp.br/content/in-defense-of-public-scientific-data-sharing-a-neuromat-op-ed/), the work on NES is part of a broader agenda for the development of a database that allows public access to neuroscientific data (physiological measures and functional assessments). We hope our engagement with the Frictionless Data community will open up possibilities for sharing and partnering to move this agenda forward.
diff --git a/site/blog/2019-07-03-nes/carlos.jpg b/site/blog/2019-07-03-nes/carlos.jpg new file mode 100644 index 000000000..c204bbfca Binary files /dev/null and b/site/blog/2019-07-03-nes/carlos.jpg differ diff --git a/site/blog/2019-07-03-nes/group3.png b/site/blog/2019-07-03-nes/group3.png new file mode 100644 index 000000000..b73989173 Binary files /dev/null and b/site/blog/2019-07-03-nes/group3.png differ diff --git a/site/blog/2019-07-09-open-referral/GB.png b/site/blog/2019-07-09-open-referral/GB.png new file mode 100644 index 000000000..f4f15eb7a Binary files /dev/null and b/site/blog/2019-07-09-open-referral/GB.png differ diff --git a/site/blog/2019-07-09-open-referral/OpenReferral.png b/site/blog/2019-07-09-open-referral/OpenReferral.png new file mode 100644 index 000000000..39777a578 Binary files /dev/null and b/site/blog/2019-07-09-open-referral/OpenReferral.png differ diff --git a/site/blog/2019-07-09-open-referral/README.md b/site/blog/2019-07-09-open-referral/README.md new file mode 100644 index 000000000..a4d462ef8 --- /dev/null +++ b/site/blog/2019-07-09-open-referral/README.md @@ -0,0 +1,32 @@ +--- +title: "Tool Fund Grantee: Greg Bloom and Shelby Switzer" +date: 2019-07-09 +tags: ["tool-fund"] +author: Greg Bloom, Shelby Switzer, and Lilly Winfree +category: grantee-profiles-2019 +image: /img/blog/OpenReferral.png +# description: Tool Fund Grantee - Open Referral +github: https://github.com/openreferral/ +--- + +This grantee profile features Greg Bloom & Shelby Switzer for our series of Frictionless Data Tool Fund posts, written to shine a light on Frictionless Data’s Tool Fund grantees, their work and to let our technical community know how they can get involved. + + + +### Meet Greg, Shelby, and Open Referral + +Shelby Switzer and Greg Bloom work with [Open Referral](https://openreferral.org/), which develops data standards and open source tools for health, human, and social services. 
For the Tool Fund, they will be building out datapackage support for all their interfaces, from the open source tools that transform and validate human services data to the Human Services API Specification. Greg is the founder of the Open Referral Initiative, and has experience in nonprofit communications, cooperative development, and community organizing. Shelby is a long-time civic tech contributor, and will be the lead developer on this project. + +> I got my start in tech through civic tech and open data. After a variety of software development and API product management roles in my career, including most recently leading the API and integrations team at a healthcare technology company, I’ve returned to my roots to write about and contribute to open source, community-focused tech projects full-time. - Shelby + +Open Referral develops data standards and open platforms that make it easy to share and find information about community resources – i.e. the health, human, and social services available to people in need. The Open Referral Initiative is developing the Human Services Data Toolkit – a suite of open source data management tools that facilitate transformation, validation, and publication of standardized data about health, human, and social services. By leveraging the JSON datapackage specification across each of these components, we can provide a comprehensive approach to frictionless data management of information about any kind of community resources provisioned by governments, charity, and civic institutions. + +### Shelby, how did you hear about Frictionless Data? + +I think I heard about Frictionless Data first over the past year or two just through working with Open Referral. I was doing research on what tools already existed out there for data munging and CSV processing, to help inform my own work with open data and specifically diverse sets of community resource data. First impressions?
I thought it was awesome, and wanted to explore more to figure out how to incorporate some of FD’s specs and tools into my own pipelines. + +### What specific issues are you looking to address with the Tool Fund grant? + +I’m definitely excited about building out datapackage support for all our interfaces, from the open source tools that transform and validate human services data to the Human Services API Specification. This will help us plug-and-play tools much more efficiently to build pipelines customized to each deployment. A lot of our work is in Ruby, JavaScript, and PHP, so I think this will be an opportunity to help contribute some tools in those languages to the Frictionless Data ecosystem, for example a Ruby library for generating datapackages given an input directory or a library for generating a SQL Server database from a datapackage. We want to do more with our existing data pipeline tools, especially to link them together using the datapackage spec as a common exchange format. We’re also about to use some of these tools in specific projects in the US validating and federating community resource data sets, and we hoped that applying for a tool grant might help us have the runway to iterate on tool improvements based on what we learn from these deployments. + +### How can the open data, open source, community engage with the work you are doing around Frictionless Data and the Human Data Services Toolkit? 
diff --git a/site/blog/2019-07-09-open-referral/shelby.jpg b/site/blog/2019-07-09-open-referral/shelby.jpg new file mode 100644 index 000000000..daa90db90 Binary files /dev/null and b/site/blog/2019-07-09-open-referral/shelby.jpg differ diff --git a/site/blog/2019-07-22-nimblelearn-dpc/README.md b/site/blog/2019-07-22-nimblelearn-dpc/README.md new file mode 100644 index 000000000..34815d4ef --- /dev/null +++ b/site/blog/2019-07-22-nimblelearn-dpc/README.md @@ -0,0 +1,26 @@ +--- +title: Nimble Learn - Data Package Connector (datapackage-connector) +date: 2019-07-22 +tags: ["case-studies"] +category: case-studies +author: Michael Amadi +interviewee: Michael Amadi +subject_context: Nimble Learn is the Business Intelligence (BI) and Advanced Analytics consultancy behind datapackage-connector, a Power BI Custom Connector for loading tables directly from Tabular Data Packages into Power BI through the 'Get Data' experience. +image: /img/blog/nimblelearn-logo.png +description: Nimble Learn's datapackage-connector is a Power BI Custom Connector for loading tables directly from Tabular Data Packages into Power BI through the 'Get Data' experience. +--- + +[Data Package Connector](https://github.com/nimblelearn/datapackage-connector), also known as datapackage-connector, is a [Power BI Custom Connector](https://docs.microsoft.com/en-us/power-bi/desktop-connector-extensibility) that enables you to quickly load one or more tables from Tabular Data Packages into Power BI. It builds on top of one of our other Frictionless Data projects, [Data Package M](/blog/2018/07/20/nimblelearn/) (also known as datapackage-m), and provides a user friendly Power BI ‘Get Data’ experience and also allows these Power BI tables to be refreshed directly from Tabular Data Packages within the Power BI Service. This has been a sought after capability because the Data Package M functions alone don’t currently support this scenario. 
When we first created datapackage-m, we thought it would be quite powerful if it were possible to include a ‘Get Data’ experience in Power BI for Tabular Data Packages, but this wasn’t possible with Power Query M functions alone. For those of you not too familiar with Power BI, the ‘Get Data’ experience is a user interface (UI) wizard that guides you through some simple steps to get data from supported data sources in Power BI. With datapackage-connector, we’ve introduced a ‘Get Data’ experience for Tabular Data Packages which makes it easier to build Power BI reports and dashboards from Tabular Data Packages. This is especially useful when a Tabular Data Package has several tables that you’d like to load into Power BI in one go. + +![](./datapackage-connector-power-bi.gif) +*How datapackage-connector works in Power BI* + +datapackage-m has one major limitation from a Power BI perspective: it doesn’t support the ability to refresh data from within the Power BI service, which means data refreshes must be done from Power BI Desktop. datapackage-connector, being a Power BI connector, doesn’t have this limitation. This unlocks a new usage scenario where Power BI reports and dashboards can be built directly on top of Tabular Data Packages and kept up-to-date through scheduled data refreshes. + +![](./datapackage-connector-power-bi-service.png) +*datapackage-connector supports data refresh in the Power BI service* + +datapackage-connector reuses the same Power Query M functions from datapackage-m and this means that it has the same level of Frictionless Data specs support. We’ll be keeping these two projects aligned as we further expand their support for the specs. Read more about datapackage-m [here](/blog/2018/07/20/nimblelearn/), and check out the documentation for datapackage-connector on our [GitHub repo](https://github.com/nimblelearn/datapackage-connector).
diff --git a/site/blog/2019-07-22-nimblelearn-dpc/datapackage-connector-power-bi-service.png b/site/blog/2019-07-22-nimblelearn-dpc/datapackage-connector-power-bi-service.png new file mode 100644 index 000000000..cc71518c7 Binary files /dev/null and b/site/blog/2019-07-22-nimblelearn-dpc/datapackage-connector-power-bi-service.png differ diff --git a/site/blog/2019-07-22-nimblelearn-dpc/datapackage-connector-power-bi.gif b/site/blog/2019-07-22-nimblelearn-dpc/datapackage-connector-power-bi.gif new file mode 100644 index 000000000..fb55e3a8b Binary files /dev/null and b/site/blog/2019-07-22-nimblelearn-dpc/datapackage-connector-power-bi.gif differ diff --git a/site/blog/2019-07-22-nimblelearn-dpc/nimblelearn-logo.png b/site/blog/2019-07-22-nimblelearn-dpc/nimblelearn-logo.png new file mode 100644 index 000000000..a6874eefd Binary files /dev/null and b/site/blog/2019-07-22-nimblelearn-dpc/nimblelearn-logo.png differ diff --git a/site/blog/2019-08-29-welcome-frictionless-fellows/README.md b/site/blog/2019-08-29-welcome-frictionless-fellows/README.md new file mode 100644 index 000000000..63cbe992f --- /dev/null +++ b/site/blog/2019-08-29-welcome-frictionless-fellows/README.md @@ -0,0 +1,27 @@ +--- +title: A warm welcome to our Frictionless Data for Reproducible Research Fellows +date: 2019-08-29 +tags: ['fellows'] +category: +image: /img/blog/fd_reproducible.png +description: We are very excited to introduce the Frictionless Data for Reproducible Research Fellows Programme +author: Lilly Winfree +--- + +As part of our commitment to opening up scientific knowledge, we recently launched the [Frictionless Data for Reproducible Research Fellows Programme](https://fellows.frictionlessdata.io/), which will run from mid-September until June 2020. 
+ +We received over 200 impressive applications for the Programme, and are very excited to introduce the four selected Fellows: + +**Monica Granados**, a Mitacs Canadian Science Policy Fellow; +**Selene Yang**, a graduate student researcher at the National University of La Plata, Argentina; +**Daniel Ouso**, a postgraduate researcher at the International Centre of Insect Physiology and Ecology; +**Lily Zhao**, a graduate student researcher at the University of California, Santa Barbara. + +Next month, the Fellows will be writing blogs to further introduce themselves to the Frictionless Data community, so stay tuned to learn more about these impressive researchers. + +The Programme will train early career researchers to become champions of the Frictionless Data tools and approaches in their field. Fellows will learn about Frictionless Data, including how to use Frictionless tools in their domains to improve reproducible research workflows, and how to advocate for open science. Working closely with the Frictionless Data team, Fellows will lead training workshops at conferences, host events at universities and in labs, and write blogs and other communications content. + +As the programme progresses, we will be sharing the Fellows’ work on making research more reproducible with the Frictionless Data software suite by posting a series of blogs here and on the [Fellows website](https://fellows.frictionlessdata.io/). In June 2020, the Programme will culminate in a community call where all Fellows will present what they have learned over the nine months: we encourage attendance by our community. If you are interested in learning more about the Programme, the [syllabus](http://fellows.frictionlessdata.io/syllabus/), [lessons](http://fellows.frictionlessdata.io/lessons/), and [resources](http://fellows.frictionlessdata.io/resources/) are open. 
## More About Frictionless Data +The Fellows Programme is part of the Frictionless Data for Reproducible Research project at Open Knowledge Foundation. This project, funded by the Sloan Foundation, applies our work in Frictionless Data to data-driven research disciplines, in order to facilitate data workflows in research contexts. Frictionless Data is a set of specifications for data and metadata interoperability, accompanied by a collection of software libraries that implement these specifications, and a range of best practices for data management. Frictionless Data’s other current projects include the [Tool Fund](https://blog.okfn.org/2019/07/04/meet-our-2019-frictionless-data-tool-fund-grantees/), in which four grantees are developing open source tooling for reproducible research. The Fellows Programme will be running until June 2020, and we will post updates as the Programme progresses. diff --git a/site/blog/2019-09-12-andre-heughebaert/README.md b/site/blog/2019-09-12-andre-heughebaert/README.md new file mode 100644 index 000000000..25a81c71e --- /dev/null +++ b/site/blog/2019-09-12-andre-heughebaert/README.md @@ -0,0 +1,28 @@ +--- +title: "Tool Fund Grantee: André Heughebaert" +date: 2019-09-12 +tags: ["tool-fund"] +author: André Heughebaert and Lilly Winfree +category: grantee-profiles-2019 +image: /img/blog/andre.png +--- + +This grantee profile features André Heughebaert for our series of Frictionless Data Tool Fund posts, written to shine a light on Frictionless Data’s Tool Fund grantees, their work, and to let our technical community know how they can get involved. + + + +### Meet André Heughebaert + +With 30+ years of experience in Software Development (mostly Database, GIS and webapps), I’ve seen a wide variety of technologies, programming languages and paradigm changes. I currently work as an IT Software Engineer at the [Belgian Biodiversity Platform](https://www.biodiversity.be/) and as the Belgian GBIF Node manager.
Today, my activities focus on Open Data advocacy and technical support for the publication and re-use of Open Data through the GBIF Network. This includes intensive use of Darwin Core standards and related Biodiversity tools. Since 2016, I have been the chair of the [GBIF](https://www.gbif.org/) Participation Nodes Committee. Before that, I worked on Banking systems, early Digital TV and VoD servers, and an e-Learning platform. Last but certainly not least, I’m the proud father of four children. I live in Brussels and work for the federal public service. + +### How did you first hear about Frictionless Data? + +Acquainted with CKAN, I discovered Frictionless Data through Twitter and the OKF website. Soon after that, I published my [first Data Package](https://datahub.io/andrejjh/junibis_data) on historical movements of troops during the Napoleonic campaign of Belgium in 1815, underlying [JunIBIS.be](http://www.junibis.be/), a website launched with a friend for the 200th anniversary of this event. + +### What specific issues are you looking to address with the Tool Fund? + +The suggested tool will automatically convert Darwin Core Archives into Frictionless Data Packages, offering new perspectives to the GBIF community of data publishers and users. I will especially pay attention to potential incompatibilities between the two standards. Limitations of the [Darwin Core Star schema](https://github.com/gbif/ipt/wiki/DwCAHowToGuide) encouraged me to investigate emerging open data standards, and the Frictionless Data Tool Fund grant is an excellent opportunity for me to bridge these two Open Data tool ecosystems. + +### How can the open data, open source, community engage with the work you are doing around Frictionless Data, Darwin Core Archive and GBIF? + +I do hope the Frictionless and GBIF communities will help me with issuing, tracking and solving incompatibilities, and also with building up new synergies.
You can engage with André’s Tool Fund at the [Frictionless DarwinCore repository](https://github.com/frictionlessdata/FrictionlessDarwinCore). diff --git a/site/blog/2019-09-12-andre-heughebaert/andre.png b/site/blog/2019-09-12-andre-heughebaert/andre.png new file mode 100644 index 000000000..812f34cee Binary files /dev/null and b/site/blog/2019-09-12-andre-heughebaert/andre.png differ diff --git a/site/blog/2019-10-21-fellows-reflect-on-open-access-week/README.md b/site/blog/2019-10-21-fellows-reflect-on-open-access-week/README.md new file mode 100644 index 000000000..de8a45af3 --- /dev/null +++ b/site/blog/2019-10-21-fellows-reflect-on-open-access-week/README.md @@ -0,0 +1,22 @@ +--- +title: The Fellows reflect on Open Access Week +date: 2019-10-21 +tags: ['fellows'] +category: +image: /img/blog/open-access-week-2019.png +description: A compilation of the Fellows' thoughts and reflections on this year's Open Access Week theme. +author: Lilly Winfree +--- +The theme of this year's [Open Access Week](http://www.openaccessweek.org/) is "Open for Whom", which inspired us to reflect on what Open Access means, why it is important, and especially how people are (positively and negatively) affected by openness in science. Below you will find short thoughts from our four Fellows: + +### Sele +The privilege of science. We speak about opening up the production of knowledge, yet we often do not analyze what this privileged access really means. Open Access for whom? Could we also add the "by whom"? What do we produce, what do we share, who enables this? This week leaves me with more doubts and reflections, because it is not enough to think about whom we deposit the knowledge for; we have to analyze the place from which we stand when we share it. + +### Monica +Open science touts the tantalizing prospect of making science accessible to anyone, anywhere in the world.
Yet simply making data and code freely available doesn’t make science easier to access for everyone, everywhere. To me this year’s Open Access Week theme is challenging the open science community to think about how we rebuild the inequitable system we are dismantling. How do we make science both physically reachable and comprehensible, while not putting the burden of transforming the system on marginalized groups? These are, perhaps, the most important questions for the open science community, and we must address them before we can hope to make real progressive change to the way knowledge is created and shared. + +### Ouso +Open Access Week is an important event for open access awareness. It highlights advances in the realms of modern scientific research practices and communication. But I wish one thing for you this week: that Open Access (OA) should bother you. Is your scientific practise in line with it? The week’s theme question - “Open for whom?” - eagerly arrested my attention; what is its meaning from my perspective? I would love to pick your mind on it too, but I’ll stick to mine for this bit. Foremost, let me clarify: in its classical meaning OA is unrestricted access to literature, yet here I will mean unrestricted access to all products of research, more or less synonymous with Open Science. My personal summary of it is #NROA – No Restrictions, Only Attribution. However, I must remind us of the need to remain ethical which, I think, is an aspect of the “for whom?”. This question insinuates that certain risks are associated with openness, signalling a red light, and inevitably invoking fear. The fears are chiefly fuelled by ignorance of the many protective options available with openness. In most of my interactions regarding openness, the elephant in the room is security. Of the common triad of Open Science - open access (literature), open source, and open data - the last is most affected with regard to security.
Some people profit from data without consent, at the expense of the public. Also, mainly from the aspect of the former two, some researchers have feared the hijacking of their in-development ideas from open spaces, missing out on well-deserved recognition/attribution. This has led some to grant access only preferentially, to closed research groups. Openness is very welcoming, attracting people/entities with good and bad intentions indiscriminately, but mitigations have been put or are in the process of being put in place. These include data protection [laws](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3381593), [policies](https://www.who.int/ethics/research/en/) for anonymisation and confidentiality in research, portals for [preregistration](https://cos.io/prereg/) of research ideas and design, and micro-publishers like [flashPub](https://www.flashpub.io/about) for quick sharing of knowledge. + +### Lily +A question I ask other scientists as part of my research is: "Who or what do you consider to be the beneficiaries of your research?". The answers range from 'the broader academic community', to 'local communities', to 'everyone'. I then ask a follow-up question, "How successful do you think you have been in reaching this audience?". Interestingly, scientists who answer that their beneficiaries are other academics are more likely to consider themselves successful in reaching their target audience, citing scientific publications as the source of their success. Scientists trying to reach broader audiences often feel that they are unsuccessful in having their work affect the general public or local residents near their study site. They mention limited funding, not having enough time, and feeling awkward stepping outside their comfort zone as things hindering them. However, in my opinion, they are brave in sharing these sentiments. Wanting to reach a broader audience and recognizing one's limitations is an important step in the spirit of Open Access Week.
Sele mentioned that we can consider not only 'open for whom' but 'open by whom'. By explicitly considering these questions early on, in an inclusive, iterative and transparent manner, researchers, practitioners, and communities can build a more equitable and streamlined pipeline from research to impact. diff --git a/site/blog/2020-01-22-frictionless-darwincore/FDdarwin1.png b/site/blog/2020-01-22-frictionless-darwincore/FDdarwin1.png new file mode 100644 index 000000000..dffa9b11c Binary files /dev/null and b/site/blog/2020-01-22-frictionless-darwincore/FDdarwin1.png differ diff --git a/site/blog/2020-01-22-frictionless-darwincore/FDdarwin2.png b/site/blog/2020-01-22-frictionless-darwincore/FDdarwin2.png new file mode 100644 index 000000000..164038d67 Binary files /dev/null and b/site/blog/2020-01-22-frictionless-darwincore/FDdarwin2.png differ diff --git a/site/blog/2020-01-22-frictionless-darwincore/README.md b/site/blog/2020-01-22-frictionless-darwincore/README.md new file mode 100644 index 000000000..2ee17b67c --- /dev/null +++ b/site/blog/2020-01-22-frictionless-darwincore/README.md @@ -0,0 +1,46 @@ +--- +title: Frictionless DarwinCore developed by André Heughebaert +date: 2020-01-22 +tags: ["tool-fund"] +author: André Heughebaert and Lilly Winfree +category: grantee-profiles-2019 +image: /img/blog/fdwc.png +--- + +This blog is part of a series showcasing projects developed during the 2019 Frictionless Data Tool Fund. + + + +Originally published [https://blog.okfn.org/2019/12/09/andre-heughebaert-frictionless-darwincore/](https://blog.okfn.org/2019/12/09/andre-heughebaert-frictionless-darwincore/) + +*The 2019 Frictionless Data Tool Fund provided four mini-grants of $5,000 to support individuals or organisations in developing an open tool for reproducible research built using the Frictionless Data specifications and software. This fund is part of the Frictionless Data for Reproducible Research project, which is funded by the Sloan Foundation. 
This project applies our work in Frictionless Data to data-driven research disciplines, in order to facilitate reproducible data workflows in research contexts.* + +André Heughebaert is an open biodiversity data advocate in his work and his free time. He is an IT Software Engineer at the Belgian Biodiversity Platform and is also the Belgian GBIF (Global Biodiversity Information Facility) Node manager. In this role, he has worked with the Darwin Core Standards and Open Biodiversity data on a daily basis. This work inspired him to apply for the Tool Fund, where he has developed a tool to convert DarwinCore Archives into Frictionless Data Packages. + +The DarwinCore Archive (DwCA) is a standardised container for biodiversity data and metadata widely used within the GBIF community, which consists of more than 1,500 institutions around the world. The DwCA is used to publish biodiversity data about observations, collection specimens, species checklists and sampling events. However, this domain-specific standard has some limitations, mainly the star schema (core table + extensions), rules that are sometimes too permissive, and a lack of controlled vocabularies for certain terms. These limitations encouraged André to investigate emerging open data standards. In 2016, he discovered Frictionless Data and published his first data package on historical data from the 1815 Napoleonic Campaign of Belgium. He was then encouraged to create a tool that would, in part, build a bridge between these two open data ecosystems. + +As a result, the Frictionless DarwinCore tool converts DwCA into Frictionless Data Packages, and also gives access to the vast Frictionless Data software ecosystem, enabling constraints validation and support of a fully relational data schema. Technically speaking, the tool is implemented as a Python library, and is exposed as a Command Line Interface.
The tool automatically converts: + +* the DwCA data schema into datapackage.json +* EML metadata into a human-readable markdown README file +* data files, when necessary (that is, when default values are described) + +The resulting zip file complies with both the DarwinCore and Frictionless specifications. + +![flow](./FDdarwin1.png)
*Frictionless DarwinCore Flow* + +André hopes that bridging the two standards will give an excellent opportunity for the GBIF community to provide open biodiversity data to a wider audience. He says this is also a good opportunity to discover the Frictionless Data specifications and assess their applicability to the biodiversity domain. In fact, on 9th October 2019, André presented the tool at a GBIF Global Nodes meeting. It was perceived by the nodes managers community as exploratory and pioneering work. While the command line interface offers a simple user interface for non-programmers, others might prefer the more flexible and sophisticated Python API. André encourages anyone working with DarwinCore data, including all data publishers and data users of the GBIF network, to try out the new tool. + +“I’m quite optimistic that the project will feed the necessary reflection on the evolution of our biodiversity standards and data flows.” + +To get started, installation of the tool is done through a single pip install command (full directions can be found in the project README). Central to the tool is a table of DarwinCore terms linking a Data Package type, format and constraints for every DwC term. The tool can be used as a CLI directly from your terminal window or as a Python library for developers. The tool can work with either locally stored or online DwCA. Once converted to a Tabular Data Package, the DwC data can then be ingested and further processed by software such as Goodtables, OpenRefine or any other Frictionless Data software. + +André has aspirations to take the Frictionless DarwinCore tool further by encapsulating the tool in a web-service that will directly deliver Goodtables reports from a DwCA, which will make it even more user friendly. A further improvement would be to include an import pathway for DarwinCore data into OpenRefine, which is a popular tool in the GBIF community.
André’s long-term hope is that the Data Package will become an optional format for data download on GBIF.org. + +![workflow](./FDdarwin2.png)
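To show the shape of the conversion output described above — using only the Python standard library and illustrative file names, since the tool's exact layout may differ — a converted package is essentially a zip holding a datapackage.json descriptor plus CSV resources:

```python
import csv
import io
import json
import zipfile

# Illustrative sketch (not the tool's actual output): a zip with a
# datapackage.json descriptor and one CSV resource using Darwin Core terms.
descriptor = {
    "name": "dwca-example",
    "resources": [{
        "name": "occurrence",
        "path": "data/occurrence.csv",
        "schema": {"fields": [
            {"name": "scientificName", "type": "string"},
            {"name": "eventDate", "type": "date"},
        ]},
    }],
}

# Build the package in memory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("datapackage.json", json.dumps(descriptor))
    zf.writestr("data/occurrence.csv",
                "scientificName,eventDate\nVulpes vulpes,1815-06-18\n")

# A consumer opens the zip, reads the descriptor, and loads each resource.
with zipfile.ZipFile(buf) as zf:
    dp = json.loads(zf.read("datapackage.json"))
    resource = dp["resources"][0]
    with zf.open(resource["path"]) as f:
        rows = list(csv.DictReader(io.TextIOWrapper(f, "utf-8")))

print(rows[0]["scientificName"])  # Vulpes vulpes
```

In practice a Frictionless Data library would handle this reading step, with type coercion and constraint validation driven by the schema in the descriptor.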
+ +Further reading: + +Repository: https://github.com/frictionlessdata/FrictionlessDarwinCore + +Project blog: https://andrejjh.github.io/fdwc.github.io/ diff --git a/site/blog/2020-01-22-frictionless-darwincore/fdwc.png b/site/blog/2020-01-22-frictionless-darwincore/fdwc.png new file mode 100644 index 000000000..dda381520 Binary files /dev/null and b/site/blog/2020-01-22-frictionless-darwincore/fdwc.png differ diff --git a/site/blog/2020-01-22-open-referral-tool/OR.png b/site/blog/2020-01-22-open-referral-tool/OR.png new file mode 100644 index 000000000..be294c8e3 Binary files /dev/null and b/site/blog/2020-01-22-open-referral-tool/OR.png differ diff --git a/site/blog/2020-01-22-open-referral-tool/OpenReferral.png b/site/blog/2020-01-22-open-referral-tool/OpenReferral.png new file mode 100644 index 000000000..39777a578 Binary files /dev/null and b/site/blog/2020-01-22-open-referral-tool/OpenReferral.png differ diff --git a/site/blog/2020-01-22-open-referral-tool/README.md b/site/blog/2020-01-22-open-referral-tool/README.md new file mode 100644 index 000000000..6a6b6eb36 --- /dev/null +++ b/site/blog/2020-01-22-open-referral-tool/README.md @@ -0,0 +1,35 @@ +--- +title: Frictionless Open Referral developed by Shelby Switzer and Greg Bloom +date: 2020-01-22 +tags: ["tool-fund"] +author: Greg Bloom, Shelby Switzer, and Lilly Winfree +category: grantee-profiles-2019 +image: /img/blog/OpenReferral.png +--- + +This blog is part of a series showcasing projects developed during the 2019 Tool Fund. + +Originally published at [https://blog.okfn.org/2020/01/15/frictionless-data-tool-fund-update-shelby-switzer-and-greg-bloom-open-referral/](https://blog.okfn.org/2020/01/15/frictionless-data-tool-fund-update-shelby-switzer-and-greg-bloom-open-referral/) + +*The 2019 Tool Fund provided four mini-grants of $5,000 to support individuals or organisations in developing an open tool for reproducible research built using the Frictionless Data specifications and software. 
This Fund is part of the Frictionless Data for Reproducible Research project, which is funded by the Sloan Foundation. This project applies our work in Frictionless Data to data-driven research disciplines, in order to facilitate reproducible data workflows in research contexts.* + +Open Referral creates standards for health, human, and social services data -- the data found in community resource directories used to help find resources for people in need. In many organizations, this data lives in a multitude of formats, from handwritten notes to Excel files on a laptop to Microsoft SQL databases in the cloud. For community resource directories to be maximally useful to the public, this disparate data must be converted into an interoperable format. Many organizations have decided to use Open Referral’s Human Services Data Specification (HSDS) as that format. However, to accurately represent this data, HSDS uses multiple linked tables, which can be challenging to work with. To make this process easier, Greg Bloom and Shelby Switzer from Open Referral decided to implement datapackage bundling of their CSV files using the Frictionless Data Tool Fund. + +In order to accurately represent the relationships between organizations, the services they provide, and the locations where they are offered, Open Referral’s Human Services Data Specification (HSDS) makes sense of disparate data by linking multiple CSV files together with foreign keys. Open Referral used Frictionless Data’s datapackage to specify the tables’ contents and relationships in a single machine-readable file, so that this standardized format could transport HSDS-compliant data in a way that all of the teams who work with this data can use: CSVs of linked data.
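The shape of such a descriptor can be sketched with nothing but the standard library. The table and field names below are illustrative, not the actual HSDS schema; the point is how `foreignKeys` links one CSV resource to another:

```python
import json

# Illustrative datapackage.json descriptor: services.csv references
# organizations.csv by a foreign key. The real HSDS schema defines many
# more tables and fields than shown here.
descriptor = {
    "name": "hsds-example",
    "resources": [
        {
            "name": "organizations",
            "path": "organizations.csv",
            "schema": {
                "fields": [
                    {"name": "id", "type": "string"},
                    {"name": "name", "type": "string"},
                ],
                "primaryKey": "id",
            },
        },
        {
            "name": "services",
            "path": "services.csv",
            "schema": {
                "fields": [
                    {"name": "id", "type": "string"},
                    {"name": "organization_id", "type": "string"},
                ],
                "primaryKey": "id",
                "foreignKeys": [
                    {
                        "fields": "organization_id",
                        "reference": {"resource": "organizations", "fields": "id"},
                    }
                ],
            },
        },
    ],
}

print(json.dumps(descriptor, indent=2))
```

Because the relationships live in this one machine-readable file, any Frictionless-aware tool can follow the links between the CSVs without custom code.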
+ +In the Tool Fund, Open Referral worked on their HSDS Transformer tool, which enables a group or person to transform data into an HSDS-compliant data package, so that it can then be combined with other data or used in any number of applications. The HSDS Transformer is a Ruby library that can be used during the extract, transform, load (ETL) workflow of raw community resource data. This library extracts the community resource data, transforms that data into HSDS-compliant CSVs, and generates a datapackage.json that describes the data output. The Transformer can also output the datapackage as a zip file, called HSDS Zip, enabling systems to send and receive a single compressed file rather than multiple files. The Transformer can be spun up in a Docker container — and once it’s live, the API can deliver a payload that includes links to the source data and to the configuration file that maps the source data to HSDS fields. The Transformer then grabs the source data and uses the configuration file to transform the data and return a zip file of the HSDS-compliant datapackage. + +![DemoApp](./OR.png)
*A demo app consuming the API generated from the HSDS Zip* + +The Open Referral team has also been working on projects related to the HSDS Transformer and HSDS Zip. For example, the HSDS Validator checks that a given datapackage of community service data is HSDS-compliant. Additionally, they have used these tools in the field with a project in Miami. For this project, the HSDS Transformer was used to transform data from a Microsoft SQL Server into an HSDS Zip. Then that zipped datapackage was used to populate a Human Services Data API with a generated developer portal and OpenAPI Specification. + +Further, as part of this work, the team also contributed to the original source code for the datapackage-rb Ruby gem. They added a new feature to infer a datapackage.json schema from a given set of CSVs, so that you can generate the JSON file automatically from your dataset. + +Greg and Shelby are eager for the Open Referral community to use these new tools and provide feedback. To use these tools currently, users should either be a Ruby developer who can use the gem as part of another Ruby project, or be familiar enough with Docker and HTTP APIs to start a Docker container and make an HTTP request to it. You can use the HSDS Transformer as a Ruby gem in another project or as a standalone API. In the future, the project might expand to include hosting the HSDS Transformer as a cloud service that anyone can use to transform their data, eliminating many of these technical requirements. + +Interested in using these new tools? Open Referral wants to hear your feedback. For example, would it be useful to develop an extract-transform-load API, hosted in the cloud, that enables recurring transformation of nonstandardized human service directory data sources into an HSDS-compliant datapackage? You can reach them via their GitHub repos.
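The inference idea can be illustrated with a small, self-contained sketch. This is a naive simplification in Python (the actual feature lives in the datapackage-rb Ruby gem and is more thorough): read the header for field names and guess each column's type from a sample of rows.

```python
import csv

def infer_schema(path, sample_size=100):
    """Naive schema inference sketch: field names from the CSV header,
    field types guessed from a sample of rows."""
    def guess(values):
        try:
            for v in values:
                int(v)
            return "integer"
        except ValueError:
            pass
        try:
            for v in values:
                float(v)
            return "number"
        except ValueError:
            return "string"

    with open(path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        rows = [row for _, row in zip(range(sample_size), reader)]
    columns = list(zip(*rows)) if rows else [[] for _ in header]
    return {"fields": [{"name": name, "type": guess(col)}
                       for name, col in zip(header, columns)]}
```

Real inference implementations also handle dates, booleans, missing values, and type priority rules; this sketch only shows why a sample of rows is enough to bootstrap a usable schema.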
+ +Further reading: + +Repository: https://github.com/openreferral/hsds-transformer +HSDS Transformer: https://openreferral.github.io/hsds-transformer/ diff --git a/site/blog/2020-01-23-nes-tool/README.md b/site/blog/2020-01-23-nes-tool/README.md new file mode 100644 index 000000000..7d2011671 --- /dev/null +++ b/site/blog/2020-01-23-nes-tool/README.md @@ -0,0 +1,34 @@ +--- +title: Neuroscience Experiments System Tool Fund +date: 2020-01-23 +tags: ["tool-fund"] +author: Carlos Eduardo Ribas, João Alexandre Peschanski, and Lilly Winfree +category: grantee-profiles-2019 +image: /img/blog/nes_logo.png +--- + +This blog is part of a series showcasing projects developed during the 2019 Tool Fund. + + +Originally published at [https://blog.okfn.org/2019/12/16/neuroscience-experiments-system-frictionless-tool/](https://blog.okfn.org/2019/12/16/neuroscience-experiments-system-frictionless-tool/) + +*The 2019 Tool Fund provided four mini-grants of $5,000 to support individuals or organisations in developing an open tool for reproducible research built using the Frictionless Data specifications and software. This Fund is part of the Frictionless Data for Reproducible Research project, which is funded by the Sloan Foundation. This project applies our work in Frictionless Data to data-driven research disciplines, in order to facilitate reproducible data workflows in research contexts.* + +The Research, Innovation and Dissemination Center for Neuromathematics (RIDC NeuroMat) is a research center established in 2013 by the São Paulo Research Foundation (FAPESP) at the University of São Paulo, in Brazil. A core mission of NeuroMat is the development of open-source computational tools to aid in scientific dissemination and advance open knowledge and open science. To this end, the team has created the Neuroscience Experiments System (NES), which is an open-source tool to assist neuroscience research laboratories in routine procedures for data collection.
To more effectively understand the function and treatment of brain pathologies, NES aids in recording data and metadata from various experiments, including clinical data, electrophysiological data, and fundamental provenance information. NES then stores that data in a structured way, allowing researchers to seek and share data and metadata from those neuroscience experiments. For the 2019 Tool Fund, the NES team, particularly João Alexandre Peschanski, Cassiano dos Santos and Carlos Eduardo Ribas, proposed to adapt their existing export component to conform to the Frictionless Data specifications. + +Public databases are seen as crucial by many members of the neuroscientific community as a means of moving science forward. However, simply opening up data is not enough; it should be created in a way that can be easily shared and used. For example, data and metadata should be readable by both researchers and machines, yet they typically are not. When the NES team learned about Frictionless Data, they were interested in trying to implement the specifications to help make the data and metadata in NES machine readable. For them, the advantage of the Frictionless Data approach was to be able to standardize data opening and sharing within the neuroscience community. + +Before the Tool Fund, NES had an export component that set up a file with folders and documents with information on an entire experiment (including data collected from participants, device metadata, questionnaires, etc.), but they wanted to improve this export to be more structured and open. By implementing Frictionless Data specifications, the resulting export component includes the Data Package (datapackage.json) and the folders/files inside the archive, with a root folder called data.
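Based on that description, a hypothetical export archive might be laid out as below. The file names are illustrative, not the actual NES output; the fixed points are the datapackage.json descriptor at the root and the `data` folder holding the exported files:

```
experiment-export.zip
├── datapackage.json
└── data/
    ├── Experiment.csv
    ├── participants/
    │   └── Participants.csv
    └── questionnaires/
        └── responses.csv
```

Because datapackage.json describes each file under `data/` as a resource, generic Frictionless tooling can read the archive without NES-specific knowledge.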
With this new “frictionless” export component, researchers can transport and share their export data with other researchers in a recognized open standard format (the Data Package), facilitating the understanding of that exported data. They have also incorporated Goodtables into the unit tests to check data structure. + +The RIDC NeuroMat team’s expectation is that many researchers, particularly neuroscientists and experimentalists, will have an interest in using the freely available NES tool. With the anonymization of sensitive information, the data collected using NES can be made publicly available through the NeuroMat Open Database, allowing any researcher to reproduce the experiment or simply use the data in a different study. In addition to storing collected experimental data and guiding and documenting all the steps involved in a neuroscience experiment, NES integrates, via a REST API, with the Neuroscience Experiment Database, another NeuroMat project, where NES users can publish their experiments for other researchers to reproduce or to use as inspiration for further experiments. + +![export](./nes1.png)
*Screenshot of the export of an experiment* +![data](./nes2.png)
*Screenshot of the export of data on participants* +![tree](./tree.png)
*Picture of a hypothetical export file tree of type Per Experiment after the Frictionless Data implementation* + +## Further reading + +* Repository: https://github.com/neuromat/nes +* User manual: https://nes.readthedocs.io/en/latest/ +* NeuroMat blog: https://neuromat.numec.prp.usp.br/ +* Post on NES at the NeuroMat blog: https://neuromat.numec.prp.usp.br/content/a-pathway-to-reproducible-science-the-neuroscience-experiments-system/ diff --git a/site/blog/2020-01-23-nes-tool/nes1.png b/site/blog/2020-01-23-nes-tool/nes1.png new file mode 100644 index 000000000..73bc88071 Binary files /dev/null and b/site/blog/2020-01-23-nes-tool/nes1.png differ diff --git a/site/blog/2020-01-23-nes-tool/nes2.png b/site/blog/2020-01-23-nes-tool/nes2.png new file mode 100644 index 000000000..89a6e7fe5 Binary files /dev/null and b/site/blog/2020-01-23-nes-tool/nes2.png differ diff --git a/site/blog/2020-01-23-nes-tool/nes_logo.png b/site/blog/2020-01-23-nes-tool/nes_logo.png new file mode 100644 index 000000000..f5aab6bff Binary files /dev/null and b/site/blog/2020-01-23-nes-tool/nes_logo.png differ diff --git a/site/blog/2020-01-23-nes-tool/tree.png b/site/blog/2020-01-23-nes-tool/tree.png new file mode 100644 index 000000000..b2befecef Binary files /dev/null and b/site/blog/2020-01-23-nes-tool/tree.png differ diff --git a/site/blog/2020-02-10-frictionless-data-pipelines-for-open-ocean/README.md b/site/blog/2020-02-10-frictionless-data-pipelines-for-open-ocean/README.md new file mode 100644 index 000000000..b015c962e --- /dev/null +++ b/site/blog/2020-02-10-frictionless-data-pipelines-for-open-ocean/README.md @@ -0,0 +1,83 @@ +--- +title: Frictionless Data Pipelines for Ocean Science +date: 2020-02-10 +tags: ["pilot"] +author: Adam Shepherd, Amber York, Danie Kinkade, and Lilly Winfree +category: frictionless-data +image: /img/blog/bcodmoLogo.jpg +--- + +This blog post describes a Frictionless Data Pilot with the Biological and Chemical Oceanography Data Management Office 
(BCO-DMO). Pilot projects are part of the [Frictionless Data for Reproducible Research project](https://frictionlessdata.io/reproducible-research/). Written by the BCO-DMO team members Adam Shepherd, Amber York, and Danie Kinkade, with development by Conrad Schloer. + + + +![BCO-DMO logo](./bcodmoLogo.jpg) + +Scientific research is implicitly reliant upon the creation, management, analysis, synthesis, and interpretation of data. When properly stewarded, data hold great potential to demonstrate the reproducibility of scientific results and accelerate scientific discovery. [The Biological and Chemical Oceanography Data Management Office (BCO-DMO)](https://www.bco-dmo.org/) is a publicly accessible earth science data repository established by the National Science Foundation [(NSF)](https://www.nsf.gov/) for the curation of biological, chemical, and biogeochemical oceanographic data from research in coastal, marine, and laboratory environments. With the groundswell surrounding the [FAIR data principles](https://doi.org/10.1038/sdata.2016.18), BCO-DMO recognized an opportunity to improve its curation services to better support reproducibility of results, while increasing process efficiencies for incoming data submissions. **In 2019, BCO-DMO worked with the Frictionless Data team at Open Knowledge Foundation to develop a web application called Laminar for creating Frictionless Data Package Pipelines that help data managers process data efficiently while recording the provenance of their activities to support reproducibility of results.** + +The mission of BCO-DMO is to provide investigators with data management services that span the full data lifecycle, from data management planning to data publication and archiving. + +BCO-DMO provides free access to oceanographic data through a web-based catalog with tools and features facilitating assessment of fitness for purpose.
The result of this effort is a database containing over **9,000 datasets from a variety of oceanographic and limnological measurements** including those from: in situ sampling, moorings, floats and gliders, sediment traps; laboratory and mesocosm experiments; satellite images; derived parameters and model output; and synthesis products from data integration efforts. The project has worked with over 2,600 data contributors representing over 1,000 funded projects. + +As the catalog of data holdings continued to grow in both size and the variety of data types it curates, BCO-DMO needed to retool its data infrastructure with three goals. First, to improve the transportation of data to, from, and within BCO-DMO’s ecosystem. Second, to support reproducibility of research by making all curation activities of the office completely transparent and traceable. Finally, to improve the efficiency and consistency across data management staff. Until recently, data curation activities in the office were largely dependent on the individual capabilities of each data manager. While some of the staff were fluent in Python and other scripting languages, others were dependent on in-house custom developed tools. These in-house tools were extremely useful and flexible, but they were developed for an aging computing paradigm grounded in physical hardware accessing local data resources on disk. While locally stored data is still the convention at BCO-DMO, the distributed nature of the web coupled with the challenges of big data stretched this toolset beyond its original intention. + +In 2015, we were introduced to the idea of data containerization and the Frictionless Data project in a [Data Packages BoF](https://www.rd-alliance.org/data-packages-bof-p6-bof-session.html) at the [Research Data Alliance](https://www.rd-alliance.org/) conference in Paris, France. 
After evaluating the Frictionless Data specifications and tools, BCO-DMO developed a strategy to underpin its new data infrastructure on the ideas behind this project. + +While the concept of data packaging is not new, the simplicity and extensibility of the Frictionless Data implementation made it easy to adopt within an existing infrastructure. **BCO-DMO identified the Data Package Pipelines (DPP) project in the Frictionless Data toolset as key to achieving its data curation goals.** DPP implements the philosophy of declarative workflows, which trade code in a specific programming language that tells a computer how a task should be completed for declarative, structured statements that detail what should be done. These structured statements abstract the user writing the statements from the actual code executing them, and are useful for reproducibility over long periods of time as programming languages age and change or algorithms improve. This flexibility was appealing because it meant the intent of the data manager could be translated into many varying programming (and data) languages over time without having to refactor older workflows. In data management, that means that one of the languages a DPP workflow captures is provenance – a common need across oceanographic datasets for reproducibility. DPP workflows translated into records of provenance explicitly communicate to data submitters and future data users what BCO-DMO had done during the curation phase. Secondly, because workflow steps need to be interpreted by computers into code that carries out the instructions, it helped data management staff converge on a declarative language they could all share. This convergence meant cohesiveness, consistency, and efficiency across the team if we could implement DPP in a way they could all use.
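Concretely, a DPP workflow is declared in a pipeline-spec.yaml file rather than written as a script. A minimal sketch might look like the following; the pipeline name, paths, and parameters are illustrative, using what the DPP documentation describes as the built-in `load` and `dump_to_path` processors:

```yaml
example-curation:
  title: Example curation pipeline
  pipeline:
    - run: load
      parameters:
        from: data/source.csv
        name: source
    - run: dump_to_path
      parameters:
        out-path: output
```

Each `run` step names a processor; custom processors (like BCO-DMO's) are referenced the same way, so the *what* of the workflow stays readable even as the *how* behind each processor evolves.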
+ +**In 2018, BCO-DMO formed a partnership with Open Knowledge Foundation (OKF) to develop a web application that would help any BCO-DMO data manager use the declarative language they had developed in a consistent way.** Why develop a web application for DPP? As the data management staff evaluated DPP and Frictionless Data, they found that there was a learning curve to setting up the DPP environment, and a deep understanding of the Frictionless Data ‘Data Package’ specification was required. The web application abstracted this required knowledge to achieve two main goals: 1) consistently structured Data Packages (datapackage.json) with all the required metadata employed at BCO-DMO, and 2) efficiencies of time by eliminating typos and syntax errors made by data managers. Thus, the partnership with OKF focused on meeting the needs of scientific research data within the Frictionless Data ecosystem of specs and tools. + +[Data Package Pipelines](https://github.com/frictionlessdata/datapackage-pipelines) is implemented in Python and comes with some built-in processors that can be used in a workflow. BCO-DMO took its own declarative language and identified gaps in the built-in processors. For these gaps, BCO-DMO and OKF developed Python implementations for the missing declarations to support the curation of oceanographic data, and the result was a new set of processors made available on [Github](https://github.com/BCODMO/bcodmo_processors). + +Some notable BCO-DMO processors are: + +[boolean_add_computed_field](https://github.com/BCODMO/bcodmo_processors#bcodmo_pipeline_processorsboolean_add_computed_field) – Computes a new field to add to the data based on whether a particular row satisfies a certain set of criteria. +Example: Where Cruise_ID = ‘AT39-05’ and Station = 6, set Latitude to 22.1645.
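The idea behind this processor can be sketched in plain Python. This is a simplification of my own, not the processor's actual code: the real processor is configured declaratively in the pipeline spec and supports more operators.

```python
def add_computed_field(rows, rules):
    """Apply every matching (condition, field, value) rule to each row,
    e.g. where Cruise_ID == 'AT39-05' and Station == 6, set Latitude."""
    for row in rows:
        for condition, field, value in rules:
            if condition(row):
                row[field] = value
        yield row

# Illustrative rule mirroring the example above.
rules = [
    (lambda r: r["Cruise_ID"] == "AT39-05" and r["Station"] == 6,
     "Latitude", 22.1645),
]
rows = [{"Cruise_ID": "AT39-05", "Station": 6},
        {"Cruise_ID": "AT39-05", "Station": 7}]
result = list(add_computed_field(rows, rules))
```

In the declarative version, the condition and the computed value live in the pipeline spec as structured parameters rather than as code, which is what makes the step auditable as provenance.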
+ +[convert_date](https://github.com/BCODMO/bcodmo_processors#bcodmo_pipeline_processorsconvert_date) – Converts any number of fields containing date information into a single date field with display format and timezone options. Often date information is reported in multiple columns such as `year`, `month`, `day`, `hours_local_time`, `minutes_local_time`, `seconds_local_time`. For spatio-temporal datasets, it’s important to know the UTC date and time of the recorded data to ensure that searches for data with a time range are accurate. Here, these columns are combined to form an ISO 8601-compliant UTC datetime value. + +[convert_to_decimal_degrees](https://github.com/BCODMO/bcodmo_processors#bcodmo_pipeline_processorsconvert_to_decimal_degrees) – Converts a single field containing coordinate information from degrees-minutes-seconds or degrees-decimal_minutes to decimal_degrees. The standard representation at BCO-DMO for spatial data conforms to the decimal degrees specification. + +[reorder_fields](https://github.com/BCODMO/bcodmo_processors#bcodmo_pipeline_processorsreorder_fields) – Changes the order of columns within the data. This is a convention within the oceanographic data community to put certain columns at the beginning of tabular data to help contextualize the following columns. Examples of columns that are typically moved to the beginning are: dates, locations, instrument or vessel identifiers, and depth at collection. + +The remaining processors used by BCO-DMO can be found at https://github.com/BCODMO/bcodmo_processors. + +## How does Laminar work? + +In our collaboration with OKF, BCO-DMO developed use cases based on real-world data submissions. One such example is a recent Arctic Nitrogen Fixation Rates dataset.
+ +![Arctic dataset](./bcodmo1.png) + +The original dataset shown above needed the following curation steps to make the data more interoperable and reusable: + +* Convert lat/lon to decimal degrees +* Add timestamp (UTC) in ISO format +* Change ‘Collection Depth’ values of “surface” to 0 +* Remove parentheses and units from column names (field descriptions and units captured in metadata) +* Remove spaces from column names + +The web application, named Laminar, is built on top of DPP and helps data managers at BCO-DMO perform these operations in a consistent way. First, Laminar prompts us to name and describe the current pipeline being developed, and assumes that the data manager wants to load some data in to start the pipeline, and prompts for a source location. + +![Laminar](./bcodmo2.png) + +After providing a name and description of our DPP workflow, we provide a data source to load, and give it the name, ‘nfix’. + +In subsequent pipeline steps, we refer to ‘nfix’ as the resource we want to transform. For example, to convert the latitude and longitude into decimal degrees, we add a new step to the pipeline, select the ‘Convert to decimal degrees’ processor (a proxy for our custom processor `convert_to_decimal_degrees`), select the ‘nfix’ resource, select a field from that ‘nfix’ data source, and specify the Python regex pattern identifying where the values for the degrees, minutes and seconds can be found in each value of the latitude column. + +![processor step](./bcodmo3.png) + +Similarly, in step 7 of this pipeline, we want to generate an ISO 8601-compliant UTC datetime value by combining the pre-existing ‘Date’ and ‘Local Time’ columns. This step is depicted below: + +![date processing step](./bcodmo4.png) + +After the pipeline is completed, the interface displays all steps, and lets the data manager execute the pipeline by clicking the green ‘play’ button at the bottom.
This button then generates the pipeline-spec.yaml file, executes the pipeline, and can display the resulting dataset. + +![all steps](./bcodmo5.png) + +![data](./bcodmo6.png) + +The resulting DPP workflow contained 223 lines across this 12-step operation, and for a data manager, the web application reduces the chance of error compared with generating the pipeline by hand. Ultimately, our work with OKF helped us develop processors that follow the DPP conventions. + +Our goal for the pilot project with OKF was to have BCO-DMO data managers using Laminar for processing 80% of the data submissions we receive. The pilot was so successful that data managers have processed 95% of new data submissions to the repository using the application. + +This is exciting from a data management processing perspective because the use of Laminar is more sustainable, and it acted to bring the team together to determine best strategies for processing, documentation, etc. This increase in consistency and efficiency is welcomed from an administrative perspective and helps with the training of any new data managers coming to the team. + +The OKF team were excellent partners and the catalysts of a successful project. The next steps for BCO-DMO are to build on the success of the Frictionless Data Package Pipelines by implementing the Frictionless Data Goodtables specification for data validation, to help us develop submission guidelines for common data types. Special thanks to the OKF team – Lilly Winfree, Evgeny Karev, and Jo Barrett.
diff --git a/site/blog/2020-02-10-frictionless-data-pipelines-for-open-ocean/bcodmo1.png b/site/blog/2020-02-10-frictionless-data-pipelines-for-open-ocean/bcodmo1.png new file mode 100644 index 000000000..3d5456036 Binary files /dev/null and b/site/blog/2020-02-10-frictionless-data-pipelines-for-open-ocean/bcodmo1.png differ diff --git a/site/blog/2020-02-10-frictionless-data-pipelines-for-open-ocean/bcodmo2.png b/site/blog/2020-02-10-frictionless-data-pipelines-for-open-ocean/bcodmo2.png new file mode 100644 index 000000000..f86d1c471 Binary files /dev/null and b/site/blog/2020-02-10-frictionless-data-pipelines-for-open-ocean/bcodmo2.png differ diff --git a/site/blog/2020-02-10-frictionless-data-pipelines-for-open-ocean/bcodmo3.png b/site/blog/2020-02-10-frictionless-data-pipelines-for-open-ocean/bcodmo3.png new file mode 100644 index 000000000..a2b661ba6 Binary files /dev/null and b/site/blog/2020-02-10-frictionless-data-pipelines-for-open-ocean/bcodmo3.png differ diff --git a/site/blog/2020-02-10-frictionless-data-pipelines-for-open-ocean/bcodmo4.png b/site/blog/2020-02-10-frictionless-data-pipelines-for-open-ocean/bcodmo4.png new file mode 100644 index 000000000..be1c5193b Binary files /dev/null and b/site/blog/2020-02-10-frictionless-data-pipelines-for-open-ocean/bcodmo4.png differ diff --git a/site/blog/2020-02-10-frictionless-data-pipelines-for-open-ocean/bcodmo5.png b/site/blog/2020-02-10-frictionless-data-pipelines-for-open-ocean/bcodmo5.png new file mode 100644 index 000000000..d19064bbd Binary files /dev/null and b/site/blog/2020-02-10-frictionless-data-pipelines-for-open-ocean/bcodmo5.png differ diff --git a/site/blog/2020-02-10-frictionless-data-pipelines-for-open-ocean/bcodmo6.png b/site/blog/2020-02-10-frictionless-data-pipelines-for-open-ocean/bcodmo6.png new file mode 100644 index 000000000..b2346a9bd Binary files /dev/null and b/site/blog/2020-02-10-frictionless-data-pipelines-for-open-ocean/bcodmo6.png differ diff --git 
a/site/blog/2020-02-10-frictionless-data-pipelines-for-open-ocean/bcodmoLogo.jpg b/site/blog/2020-02-10-frictionless-data-pipelines-for-open-ocean/bcodmoLogo.jpg new file mode 100644 index 000000000..3d525f3e7 Binary files /dev/null and b/site/blog/2020-02-10-frictionless-data-pipelines-for-open-ocean/bcodmoLogo.jpg differ diff --git a/site/blog/2020-03-18-frictionless-data-pilot-study/README.md b/site/blog/2020-03-18-frictionless-data-pilot-study/README.md new file mode 100644 index 000000000..09abb08ec --- /dev/null +++ b/site/blog/2020-03-18-frictionless-data-pilot-study/README.md @@ -0,0 +1,46 @@ +--- +title: Frictionless Public Utility Data - A Pilot Study +date: 2020-03-18 +tags: ["pilot"] +author: Zane Selvans, Christina Gosnell, and Lilly Winfree +category: frictionless-data +image: /img/blog/SimpleSquareWalking.png +--- + +This blog post describes a Frictionless Data Pilot with the Public Utility Data Liberation project. Pilot projects are part of the [Frictionless Data for Reproducible Research project](https://frictionlessdata.io/reproducible-research/). Written by Zane Selvans, Christina Gosnell, and Lilly Winfree. + + + +The Public Utility Data Liberation project, [PUDL](https://catalyst.coop/pudl/), aims to make US energy data easier to access and use. Much of this data, including information about the cost of electricity, how much fuel is being burned, powerplant usage, and emissions, is not well documented or is in difficult-to-use formats. Last year, PUDL joined forces with the Frictionless Data for Reproducible Research team as a Pilot project to release this public utility data. PUDL takes the original spreadsheets, CSV files, and databases and turns them into unified Frictionless [tabular data packages](https://frictionlessdata.io/docs/tabular-data-package/) that can be used to populate a database, or read in directly with Python, R, Microsoft Access, and many other tools. + +![Catalyst Logo](./SimpleSquareWalking.png) + +## What is PUDL?
+The PUDL project, which is coordinated by [Catalyst Cooperative](https://catalyst.coop/pudl/), is focused on creating an energy utility data product that can serve a wide range of users. PUDL was inspired to make this data more accessible because the current US utility data ecosystem is fragmented, and commercial products are expensive. There are hundreds of gigabytes of information available from government agencies, but they are often difficult to work with, and different sources can be hard to combine. + +PUDL users include researchers, activists, journalists, and policy makers. They have a wide range of technical backgrounds, from grassroots organizers who might only feel comfortable with spreadsheets, to PhDs with cloud computing resources, so it was important to provide data that would work for all users. + +Before PUDL, much of this data was freely available to download from various sources, but it was typically messy and not well documented. This led to a lack of uniformity and reproducibility amongst projects that were using this data. The users were scraping the data together in their own way, making it hard to compare analyses or understand outcomes. Therefore, one of the goals for PUDL was to minimize these duplicated efforts, and enable the creation of lasting, cumulative outputs. + +## What were the main Pilot goals? +The main focus of this Pilot was to create a way to openly share the utility data in a reproducible way that would be understandable to PUDL’s many potential users. The first change Catalyst identified they wanted to make during the Pilot was with their data storage medium. PUDL was previously creating a PostgreSQL database as the main data output. However, many users, even those with technical experience, found setting up the separate database software a major hurdle that prevented them from accessing and using the processed data. They also desired a static, archivable, platform-independent format.
Therefore, Catalyst decided to transition PUDL away from PostgreSQL, and instead try Frictionless Tabular Data Packages. They also wanted a way to share the processed data without needing to commit to long-term maintenance and curation, meaning they needed the outputs to continue being useful to users even if they only had minimal resources to dedicate to the maintenance and updates. The team decided to package their data into Tabular Data Packages and identified Zenodo as a good option for openly hosting that packaged data. + +Catalyst also recognized that most users only want to download the outputs and use them directly, and do not care about reproducing the data processing pipeline themselves, but it was still important to provide the processing pipeline code publicly to support transparency and reproducibility. Therefore, in this Pilot, they focused on transitioning their existing ETL pipeline from outputting a PostgreSQL database that was defined using SQLAlchemy, to outputting datapackages which could then be archived publicly on Zenodo. Importantly, they needed this pipeline to maintain the metadata, information about data type, and database structural information that had already been accumulated. This rich metadata needed to be stored alongside the data itself, so future users could understand where the data came from and understand its meaning. The Catalyst team used Tabular Data Packages to record and store this metadata (see the code here: https://github.com/catalyst-cooperative/pudl/blob/master/src/pudl/load/metadata.py). + +Another complicating factor is that many of the PUDL datasets are fairly entangled with each other. The PUDL team ideally wanted users to be able to pick and choose which datasets they actually wanted to download and use without requiring them to download it all (currently about 100GB of data when uncompressed).
However, they were worried that if single datasets were downloaded, users might miss that some of the datasets were meant to be used together. So, the PUDL team created information, which they call “glue”, that shows which datasets are linked together and should ideally be used in tandem. + +The culmination of this Pilot was a release of the PUDL data (access it here – https://zenodo.org/record/3672068 and read the corresponding documentation here – https://catalystcoop-pudl.readthedocs.io/en/v0.3.2/), which includes integrated data from the EIA Form 860, EIA Form 923, the EPA Continuous Emissions Monitoring System (CEMS), the EPA Integrated Planning Model (IPM), and FERC Form 1. + +## What problems were encountered during this Pilot? +One issue that the group encountered during the Pilot was that the data types available in Postgres are substantially richer than those natively supported by the Tabular Data Package standard. However, this is an endemic problem of wanting to work with several different platforms, so the team compromised and worked with the least common denominator. In the future, PUDL might store several different sets of data types for use in different contexts, for example, one for freezing the data out into data packages, one for SQLite, and one for Pandas. + +Another problem encountered during the Pilot resulted from testing the limits of the draft Tabular Data Package specifications. There were aspects of the specifications that the Catalyst team assumed were fully implemented in the reference (Python) implementation of the Frictionless toolset, but which were in fact still works in progress. This work led the Frictionless team to start a documentation improvement project, including a revision of the specifications website to incorporate this feedback.
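Both problems above trace back to the shape of the descriptor itself. The following is a minimal, hypothetical sketch (illustrative only; the resource and field names are invented and are not PUDL's actual metadata) of a Tabular Data Package descriptor, `datapackage.json`, which couples a CSV resource with a Table Schema drawn from the spec's small portable type set:

```python
import json

# Hypothetical descriptor for a single tabular resource. Richer PostgreSQL
# types (SMALLINT, TIMESTAMPTZ, ...) are narrowed to the Table Schema's
# small portable set: "integer", "number", "date", "string", etc.
descriptor = {
    "profile": "tabular-data-package",
    "name": "example-utility-data",
    "resources": [
        {
            "name": "generation",
            "path": "data/generation.csv",
            "profile": "tabular-data-resource",
            "schema": {
                "fields": [
                    {"name": "plant_id", "type": "integer"},
                    {"name": "report_date", "type": "date"},
                    {"name": "net_generation_mwh", "type": "number"},
                ],
                # Composite primary key: one row per plant per report date.
                "primaryKey": ["plant_id", "report_date"],
            },
        }
    ],
}

# The descriptor travels alongside the CSV files as datapackage.json.
with open("datapackage.json", "w") as f:
    json.dump(descriptor, f, indent=2)
```

Because the descriptor is plain JSON, it stays readable in a static archive such as Zenodo even if the tooling around it changes.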
+ +Through the Pilot, the teams worked to implement new Frictionless features, including the specification of composite primary keys and foreign key references that point to external data packages. Other new Frictionless functionality created during this Pilot included partitioning large resources into resource groups, in which all resources use identical table schemas, and gzip compression of resources. The Pilot also focused on implementing more complete validation through goodtables, including bytes/hash checks, foreign key checks, and primary key checks, though there is still more work to be done here. + +## Future Directions +A common problem with using publicly available energy data is that the federal agencies creating the data do not use version control or maintain change logs for the data they publish, but they do frequently go back years after the fact to revise or alter previously published data, with no notification. To combat this problem, Catalyst is using data packages to encapsulate the raw inputs to the ETL process. They are setting up a process which will periodically check whether the federal agencies’ posted data has been updated or changed, create an archive, and upload it to Zenodo. They will also store metadata in non-tabular data packages, indicating which information is stored in each file (year, state, month, etc.) so that there can be a uniform process for querying those raw input data packages. This will mean the raw inputs won’t have to be archived alongside every data release. Instead, one can simply refer to these other versioned archives of the inputs. Catalyst hopes these version-controlled raw archives will also be useful to other researchers. + +Another next step for Catalyst will be to make the ETL and new dataset integration more modular, to make it easier for others to integrate new datasets. For instance, they are planning on integrating the EIA 861 and the ISO/RTO LMP data next.
Other future plans include simplifying metadata storage, using Docker to containerize the ETL process for better reproducibility, and setting up a [Pangeo](https://pangeo.io/) instance for live interactive data access without requiring anyone to download any data at all. The team would also like to build visualizations that sit on top of the database, such as an interactive, regularly updated map of US coal plants and their operating costs, compared to new renewable energy in the same area. They would also like to visualize power plant operational attributes from EPA CEMS (e.g., ramp rates, min/max operating loads, the relationship between load factor and heat rate, marginal additional fuel required for a startup event…). + +Have you used PUDL? The team would love to hear feedback from users of the published data so that they can understand how to improve it, based on real user experiences. If you are integrating other US energy/electricity data of interest, please talk to the PUDL team about whether they might want to integrate it into PUDL to help ensure that it’s all more standardized and can be maintained long term. Also let them know what other datasets you would find useful (e.g. FERC EQR, FERC 714, PHMSA Pipelines, MSHA mines…). If you have questions, please ask them on GitHub (https://github.com/catalyst-cooperative/pudl) so that the answers will be public for others to find as well.
diff --git a/site/blog/2020-03-18-frictionless-data-pilot-study/SimpleSquareWalking.png b/site/blog/2020-03-18-frictionless-data-pilot-study/SimpleSquareWalking.png new file mode 100644 index 000000000..67fb23323 Binary files /dev/null and b/site/blog/2020-03-18-frictionless-data-pilot-study/SimpleSquareWalking.png differ diff --git a/site/blog/2020-03-20-joining-the-frictionless-data-team/README.md b/site/blog/2020-03-20-joining-the-frictionless-data-team/README.md new file mode 100644 index 000000000..e60273cae --- /dev/null +++ b/site/blog/2020-03-20-joining-the-frictionless-data-team/README.md @@ -0,0 +1,30 @@ +--- +title: Joining the Frictionless Data Team +date: 2020-03-20 +tags: ["team"] +category: +image: /img/blog/gift.jpg +description: Introducing a new team member - Gift Egwuenu +author: Gift Egwuenu +--- + +Hi there, My name is [Gift Egwuenu][gift] and I'm super excited to share I joined [Datopian](https://datopian.com/) as a Frontend Developer and Developer Evangelist! 🎉 + + + +[Frictionless Data](https://frictionlessdata.io) is an open-source toolkit that brings simplicity and grace to the data experience. We want every Data Engineer or Data Scientist to know about it and benefit from it. + +Part of my job involves spreading the word about Frictionless Data and encouraging community involvement by sharing what you can achieve with the toolkit 😃 + +My other day-to-day activities include the following and more: + +* Working on Frictionless Data tools +* Working closely and interacting with the Frictionless Data Community via (chats, remote hangouts, and in-person events) +* Writing documentation, guide and blog posts for Frictionless Data + +I'm glad I get to do this as a full-time job because I'm passionate about teaching and learning 🚀 and I'm excited to be a part of the [Frictionless Data community](https://frictionlessdata.io/) where I get to contribute, share, learn and interact with the data community. 
+ +[gift]: https://giftegwuenu.com +[datopian]: https://datopian.com/ +[fd]: https://frictionlessdata.io +[fd-comm]: https://frictionlessdata.io/ diff --git a/site/blog/2020-04-16-annoucing-frictionless-data-virtual-hangout/README.md b/site/blog/2020-04-16-annoucing-frictionless-data-virtual-hangout/README.md new file mode 100644 index 000000000..c52318a72 --- /dev/null +++ b/site/blog/2020-04-16-annoucing-frictionless-data-virtual-hangout/README.md @@ -0,0 +1,28 @@ +--- +title: Announcing Frictionless Data Community Virtual Hangout - 20 April +date: 2020-04-16 +tags: ["events", "community-hangout"] +category: events +image: /img/blog/community.jpg +description: Invitation to our first virtual hangout in April 2020 +author: Gift Egwuenu +--- + +![Photo by William White on Unsplash](https://i.imgur.com/rls4pCT.jpg) + + +__We are thrilled to announce we'll be hosting a virtual community hangout to share recent developments in the Frictionless Data community. This will be a 1-hour meeting where community members come together to discuss key topics in the data community.__ + +**Here are some key discussions we hope to cover:** + +- Introductions & share the purpose of this hangout. +- Share the update on the new website release and general Frictionless Data related updates. +- Have community members share their thoughts and general feedback on Frictionless Data. +- Share information about CSV Conf. + +The hangout is scheduled to happen on **20th April 2020 at 5 pm CET**. If you would like to attend, [you can sign up for the event in advance here.](https://zoom.us/meeting/register/tJEqdOyspzgvG9wlVM_3Z_6yyL8wzc-v03Bq) Everyone is welcome. + + +Looking forward to seeing you there! 
+ + diff --git a/site/blog/2020-04-23-table-schema-catalog/README.md b/site/blog/2020-04-23-table-schema-catalog/README.md new file mode 100644 index 000000000..cfee51ee4 --- /dev/null +++ b/site/blog/2020-04-23-table-schema-catalog/README.md @@ -0,0 +1,69 @@ +--- +title: Open Data Quality, Standardization and Why we Need a Schema Catalog +date: 2020-04-23 +tags: ["table-schema"] +category: table-schema +image: +description: The issue of open data quality has been a prominent subject of discussion for years. +author: Johan Richer +--- + +[Jailbreak](https://www.jailbreak.paris/) is a French company founded by former employees of [Etalab](https://www.etalab.gouv.fr/), the national open data agency, and [Easter-eggs](https://www.easter-eggs.com/), GNU/Linux experts since 1997. + +# Open Data Quality: The case of France + +The issue of open data quality has been a prominent subject of discussion for years. These articles cover it in more depth: [Why the Open Definition Matters for Open Data: Quality, Compatibility and Simplicity][open-definition] and, more recently, [Open data quality – the next shift in open data?][open-data] + + +[open-definition]: https://blog.okfn.org/2014/09/30/why-the-open-definition-matters-for-open-data-quality-compatibility-and-simplicity/ +[open-data]: https://blog.okfn.org/2017/05/31/open-data-quality-the-next-shift-in-open-data/ + +Since 2017, [OpenDataFrance](http://www.opendatafrance.net/) has made it a top priority to help open data producers, mainly local governments, improve the quality of their open data. + +Jailbreak joined the team tasked with finding solutions to that problem. We proposed to start with data validation as a way to point out quality problems in datasets, and chose [Goodtables](https://goodtables.io/) as a basis for that. We developed a new UI with adaptations and localization for French users, as well as some custom checks to tackle specific errors which are common in French datasets.
This has given birth to a validator tool called [Validata](https://go.validata.fr/). + +Like Goodtables, it checks tabular data sources for structural problems, such as blank rows, but where it really shines is when you give it a schema to validate your data against. + +## Schemas as Standards + +A schema is a file describing the way the data should be formatted. For example, if a column exists for dates, the schema is where it would be specified. This way, the validator can automatically check that all the cells in that column contain dates. + +[Table Schemas](https://frictionlessdata.io/specs/table-schema/) are perfect for open data, which is often just tabular data such as CSV or XLSX files. They're also really easy to write and, if enough people use them, they can become *de facto* standards for datasets. + +Spearheaded by OpenDataFrance, the French open data community has created 8 common open data schemas as part of a so-called "Socle commun des données locales" (Common Ground of Local Data). These are now the standards to publish datasets on subjects like grants given to non-profits, decisions voted in local assemblies or stations for electric vehicles. + +What we learned with Validata is that the **schemas and tools we created in order to improve open data quality are only as good as their popularity**. If only a few are using the schemas to publish their data, nobody else will follow these schemas and, immediately, the validator tool is not as useful anymore. The quality is not improving if the "standards" are not used. But, most importantly, **a standard cannot be self-proclaimed.** + +## Where are all the schemas? We need catalog(s) + +A few months ago, Etalab launched [schema.data.gouv.fr](https://schema.data.gouv.fr/), an official open data schema catalog specific to France. The idea is now to go next-level and start **a community-run schema catalog which would be both inclusive and international**.
At first it would share Table Schemas, but it could also be open to other kinds of schemas, such as [Data Package Schemas](https://frictionlessdata.io/specs/data-package/). + +For schemas to become standards, they must be easily found and usable. They must be shared. We propose to open a new chapter for Table Schemas with [schemas.frictionlessdata.io](https://schemas.frictionlessdata.io/) as the place to catalog them. + +Each schema page could link to tools, calling users to appropriate actions; for example "Validate a file" with Goodtables or Validata, "Create a file" with [CSV Good Generator](https://github.com/etalab/csv-gg) developed by Etalab or [tsfaker](https://gitlab.com/healthdatahub/tsfaker/), or "Download a template" with [table-schema-resource-template](https://framagit.org/opendataschema/table-schema-resource-template), etc. + +The website for the catalog should have all the features needed, such as full-text search and filtered search (by country, etc.). It should also have an API to make use of the catalog within other tools, for example, an open data portal proposing schemas when people upload a data package. This is an idea already experimented with by ODI in [Octopub](https://octopub.io/). + +Those are some ideas that need to be expanded. We have to give schemas their chance to shine! + + +:::tip Situation: Poor quality of open data +Question: How to improve the quality of open data? + +1. Problem: Standardization of common datasets +Solution: Table Schemas +Example: A schema for the names of babies born in a city in a given year. + +2. Problem: Checking the quality of datasets +Solution: Goodtables +Example: [Validata](https://go.validata.fr/), an adaptation of Goodtables for French open data. + +3.
Problem: Sharing open data standards +Solution: Schema Catalog +Example: [SCDL](https://scdl.opendatafrance.net/), Schema.data.gouv.fr, Schemas.frictionlessdata.io +::: + +There's an ongoing conversation about this project on [Frictionless Data Forum][schema-catalog] and it's open to feedback and contribution. + + +[schema-catalog]: https://github.com/frictionlessdata/forum/issues/5 \ No newline at end of file diff --git a/site/blog/2020-04-28-recap-post-frictionless-data-hangout-april-2020/README.md b/site/blog/2020-04-28-recap-post-frictionless-data-hangout-april-2020/README.md new file mode 100644 index 000000000..94ca6e06b --- /dev/null +++ b/site/blog/2020-04-28-recap-post-frictionless-data-hangout-april-2020/README.md @@ -0,0 +1,32 @@ +--- +title: Recap - Frictionless Data Community Virtual Hangout April 2020 +date: 2020-04-28 +tags: ["events", "community-hangout"] +category: events +image: /img/blog/frictionlessdata-hangout.png +description: Here's a recap from the Frictionless Data community hangout with highlights and video recording. +author: Gift Egwuenu +--- + +The first edition of [Frictionless Data Community Hangout][announcement] held on 20 April 2020 and it was a huge success and a great time spent with members of the community. +We had over 16 guests join the event - the highlight from this event was community interaction. We had several people ask questions regarding things they needed clarity on about Frictionless Data and people shared what they are currently working on. + + + +[announcement]: https://frictionlessdata.io/blog/2020/04/16/annoucing-frictionless-data-virtual-hangout/ + +The event started with members of the community doing introductions across the room so we get to know each other better. People were also interested in knowing: + +- how frictionless data is different from Pandas +- how are we moving towards the Open Science direction + +Another topic that surfaced was how people are using Frictionless Data. 
A great example here is Johan Richer’s use case. [Johan Richer](https://twitter.com/JohanRicher), who works with the Open Data France team, shared a proposal for building a community-led project called the [Table Schema Catalog](https://frictionlessdata.io/blog/2020/04/23/table-schema-catalog). This project will serve as a single source of truth and a collection of table schemas from different organizations, making all table schemas discoverable and usable. Here’s a great opportunity for the community to show some support! Johan is looking for collaborators, so if this project sounds interesting to you, you can find more details on the [Frictionless Data Forum](https://github.com/frictionlessdata/forum/issues/5) and get yourself involved. + +Finally, we rounded up the hangout with Lilly from the [Open Knowledge Foundation](https://okfn.org/) team. She shared information about the upcoming [CSV Conf](https://csvconf.com/) on May 13-14, 2020 and the [Frictionless Data Tool Fund](https://blog.okfn.org/2020/03/02/announcing-the-2020-frictionless-data-tool-fund/) which, by the way, is still open for applications until May 17, 2020. + +The event went pretty well thanks to everyone who showed up - I think it's a great start to cultivating community growth around Frictionless Data. We've scheduled another hangout for May 21, 2020. **Early registration is open**: [go register now](https://us02web.zoom.us/meeting/register/tZMsf-qrrjopHtGZwMyM7tCmp_YyPlNms6wK) so you don't miss out. We are also opening up spots for people in the community to share what they are working on and anything related to Frictionless Data that'll benefit the entire community. If this sounds appealing to you - reach out to us on [Discord](https://discordapp.com/invite/Sewv6av) and we’ll set it up. + +For more updates on the project, join our online community on [Discord](https://discordapp.com/invite/Sewv6av) and follow [@frictionlessd8a](https://twitter.com/frictionlessd8a) on Twitter!
+ +Thank you and look forward to seeing you at our next event! + diff --git a/site/blog/2020-04-30-frictionless-data-workshop/README.md b/site/blog/2020-04-30-frictionless-data-workshop/README.md new file mode 100644 index 000000000..db784a917 --- /dev/null +++ b/site/blog/2020-04-30-frictionless-data-workshop/README.md @@ -0,0 +1,19 @@ +--- +title: Join our free virtual Frictionless Data workshop on 20th May +date: 2020-04-30 +tags: ["events"] +category: events +image: /img/blog/fd_reproducible.png +description: Join our free virtual Frictionless Data workshop on 20th May +author: Lilly Winfree +--- + +**Join us on 20th May at 4pm BST/10am CDT for a Frictionless Data workshop led by the Reproducible Research fellows.** This 90-minute workshop will cover an introduction to the open-source Frictionless Data tools. Participants will learn about data wrangling, including how to document metadata, package data into a datapackage, write a schema to describe data, and validate data. The workshop is suitable for beginners and those looking to learn more about using Frictionless Data. It will be presented in English, but you can ask questions in English or Spanish. + +### Everyone is welcome to join, but you must register to attend using this [link](https://us02web.zoom.us/meeting/register/tZIlcOCoqzMpHdIDge8bHuaOpC3oiVD21zkh). + +The fellows programme is part of the [Frictionless Data for Reproducible Research project](http://frictionlessdata.io/reproducible-research/) overseen by the [Open Knowledge Foundation](https://okfn.org/). This project, funded by the Sloan Foundation, applies our work in Frictionless Data to data-driven research disciplines, in order to facilitate data workflows in research contexts.
+ +At its core, Frictionless Data is a set of [specifications](https://frictionlessdata.io/specs) for data and metadata interoperability, accompanied by a collection of [software libraries](https://github.com/frictionlessdata) that implement these specifications, and a range of best practices for data management. The core specification, the Data Package, is a simple and practical “container” for data and metadata. This workshop will be led by the members of the first cohort of the fellows programme: Lily Zhao, Daniel Ouso, Monica Granados, and Selene Yang. You can read more about their work during this programme here: http://fellows.frictionlessdata.io/blog/. + +Additionally, applications are now open for the second cohort of fellows. Read more about applying here: https://blog.okfn.org/2020/04/27/apply-now-to-become-a-frictionless-data-reproducible-research-fellow/ \ No newline at end of file diff --git a/site/blog/2020-05-01-announcing-new-website/README.md b/site/blog/2020-05-01-announcing-new-website/README.md new file mode 100644 index 000000000..bfd59f00c --- /dev/null +++ b/site/blog/2020-05-01-announcing-new-website/README.md @@ -0,0 +1,35 @@ +--- +title: Announcing Our New Website Release +date: 2020-05-01 +tags: ["news"] +category: news +image: /img/blog/fd-home.png +description: In this article, We talked about the reasons we decided redesign our website with a few highlights on the new changes made. +author: Gift Egwuenu +--- + +We’re excited to announce the launch of our newly designed Frictionless Data website. The goal of the rebranding was to better communicate our brand values and improve the user experience. We want Frictionless Data to be wildly successful – we want people to not only know about us, but also also use our tools by default. + +
+ Frictionless Data Homepage +
Screenshot of Frictionless Data Homepage
+
+ +We’ve improved the layout of our content, done some general changes on our brand logo, design, as well as on the whole site structure - the navigation is now more accessible with a sidebar option integrated so you can access key items easily and you get more from a quick read. + +
+ Revamped Frictionless Brand Logo +
Revamped Frictionless Brand Logo
+
+ +We have a new [Team page](https://frictionlessdata.io/team/) with a list of Core Team Members, Tool Fund Partners, and Reproducible Research Fellows contributing effort to the project. There are also many other smaller, but impactful changes, all aiming to make the experience of the Frictionless Data website much better for you. + +
+  Team Page +
Frictionless Data Team Page
+
+ +In our bid to increase the adoption of our tooling and specifications, we are also working on rewriting our documentation. This effort has birthed a new subpage called the [Guide](https://frictionlessdata.io/guide/) - its first section is even already published on the website. Furthermore, we'll be releasing different How-To sections that'll walk our users through the steps required to solve a real-world data problem. + + +We hope you find our new website fresher, cleaner and clearer. If you have any feedback and/or improvement suggestions, please let us know on our [Discord Channel](https://discordapp.com/invite/Sewv6av) or on [Twitter](https://twitter.com/frictionlessd8a). diff --git a/site/blog/2020-05-01-announcing-new-website/brand.png b/site/blog/2020-05-01-announcing-new-website/brand.png new file mode 100644 index 000000000..b4ad3ca39 Binary files /dev/null and b/site/blog/2020-05-01-announcing-new-website/brand.png differ diff --git a/site/blog/2020-05-01-announcing-new-website/home.png b/site/blog/2020-05-01-announcing-new-website/home.png new file mode 100644 index 000000000..54a537a69 Binary files /dev/null and b/site/blog/2020-05-01-announcing-new-website/home.png differ diff --git a/site/blog/2020-05-01-announcing-new-website/team.png b/site/blog/2020-05-01-announcing-new-website/team.png new file mode 100644 index 000000000..66ca7dbc0 Binary files /dev/null and b/site/blog/2020-05-01-announcing-new-website/team.png differ diff --git a/site/blog/2020-05-20-frictionless-data-may-hangout/README.md b/site/blog/2020-05-20-frictionless-data-may-hangout/README.md new file mode 100644 index 000000000..ad4469c10 --- /dev/null +++ b/site/blog/2020-05-20-frictionless-data-may-hangout/README.md @@ -0,0 +1,22 @@ +--- +title: Frictionless Data May 2020 Virtual Hangout - 21 May +date: 2020-05-20 +tags: ["events", "community-hangout"] +category: events +image: /img/blog/community.jpg +description: +author: Gift Egwuenu +--- + +We are hosting
another round of our virtual community hangout to share recent developments in the Frictionless Data community and to connect with other community members. This will be a 1-hour meeting where community members come together to discuss key topics in the data community. + +![Photo by Perry Grone on Unsplash](./community.jpeg) + + +The hangout is scheduled for **21st May 2020 at 5 pm BST**. If you would like to attend, [you can sign up for the event here](https://us02web.zoom.us/meeting/register/tZMsf-qrrjopHtGZwMyM7tCmp_YyPlNms6wK). + + +Looking forward to seeing you there! + + + diff --git a/site/blog/2020-05-20-frictionless-data-may-hangout/community.jpeg b/site/blog/2020-05-20-frictionless-data-may-hangout/community.jpeg new file mode 100644 index 000000000..4c7821fe4 Binary files /dev/null and b/site/blog/2020-05-20-frictionless-data-may-hangout/community.jpeg differ diff --git a/site/blog/2020-05-21-etalab-case-study-schemas-data-gouv-fr/README.md b/site/blog/2020-05-21-etalab-case-study-schemas-data-gouv-fr/README.md new file mode 100644 index 000000000..3ed7738a0 --- /dev/null +++ b/site/blog/2020-05-21-etalab-case-study-schemas-data-gouv-fr/README.md @@ -0,0 +1,68 @@ +--- +title: schema.data.gouv.fr - An Open Data Schema Catalog for France +date: 2020-05-22 +tags: ["case-studies"] +category: case-study +image: /img/blog/schema.gouv.fr.png +description: +author: Geoffrey Aldebert, Antoine Augusti +--- + +In June 2019, [Etalab](https://etalab.gouv.fr), a department of the French interministerial digital service (DINUM), launched [schema.data.gouv.fr](https://schema.data.gouv.fr), a platform listing schemas for France. It could be described as what Johan Richer recently called a [schema catalog](https://frictionlessdata.io/blog/2020/04/23/table-schema-catalog/). This project is an initiative of data.gouv.fr, the French open data platform, which is developed and maintained by Etalab.
+ +![schema.gouv.fr homepage](/img/blog/schema.gouv.fr.png) + +## What's a schema? + +A schema declares a data model in a clear and precise manner: it describes the various fields and their types in a structured, consistent way, according to a specification. For example, [Table Schema](https://specs.frictionlessdata.io/table-schema/) is a simple language to declare a schema for tabular data. + +Schemas are well suited for a wide range of applications: validating data against a schema, documenting a data model, consolidating data from multiple sources, generating example datasets, or proposing tailored input forms. This wide range of applications makes schemas an important tool for both producers and reusers. + +## Advancing open data quality +A common complaint of open data reusers has been the poor quality of the data and data structures that change over time, without notice. The OKFN spoke about this issue in mid-2017 in a blog post, [Open data quality – the next shift in open data?](https://blog.okfn.org/2017/05/31/open-data-quality-the-next-shift-in-open-data/) + +With [schema.data.gouv.fr](https://schema.data.gouv.fr), Etalab promotes high-quality open data: producers are encouraged to discuss and come up with an appropriate schema for the data they want to publish, and to document it with a recognised specification. Producers will then be able to make sure that the data they publish conforms to the schema over time. Reusers benefit from high-quality documentation, a stable data structure, and increased quality of the data. + +## Impacts +The first impact of the launch of [schema.data.gouv.fr](https://schema.data.gouv.fr) has been to **put the challenge of open data quality at the forefront**. It acknowledges that this is not a solved problem and that producers should embrace schemas, validators, documentation, and automated testing to raise the quality of the data they publish.
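The kind of check described under "What's a schema?" is conceptually simple. Here is a minimal, hypothetical sketch using only the Python standard library (real validators such as Goodtables and Validata do far more than this), verifying a date column against a Table Schema-style date field:

```python
from datetime import datetime

# Hypothetical Table Schema fragment (not a real published schema):
# the "date_debut" column must contain ISO 8601 dates.
field = {"name": "date_debut", "type": "date"}

def invalid_rows(cells, date_format="%Y-%m-%d"):
    """Return the 1-based row numbers whose cell is not a valid date."""
    bad = []
    for row_number, cell in enumerate(cells, start=1):
        try:
            datetime.strptime(cell, date_format)
        except ValueError:
            bad.append(row_number)
    return bad

column = ["2020-05-22", "22/05/2020", "2020-06-01"]  # sample cells
print(field["name"], "errors on rows:", invalid_rows(column))  # row 2 fails
```

The French-style date on row 2 fails the ISO check, which is exactly the kind of error report a producer can act on before publishing.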
It's also a recognition of the efforts already made by the community, for example the "Socle commun des données locales" (Common Ground of Local Data) by [OpenDataFrance](http://www.opendatafrance.net). + +To help producers discover schemas and understand how they can be helpful, in March 2020 we published a [long guide](https://guides.etalab.gouv.fr/producteurs-schemas/) going over the steps producers are encouraged to follow when creating a schema: discovery, discussions, implementation, publication and finally referencing the schema on schema.data.gouv.fr. + +Since the launch, producers have worked with their reusers and published various schemas: [carpooling places](https://schema.data.gouv.fr/etalab/schema-lieux-covoiturage/latest.html) or [defibrillators](https://schema.data.gouv.fr/arsante/schema-dae/latest.html) to name a few. People had in-depth discussions about their data model, encouraged by the thoroughness of the Table Schema specification. Producers worked hard to clean their data and finally reached a point where their dataset is 100% aligned with the schema, without any errors. + +## What's next + +Here are a few things we are working on and hope to be able to finish in the coming years. + +### Improved data models defined in the law + +Right now, when data models are introduced by law, the data model is often described by a table. We'd like to offer a schema when these laws are published, to ease adoption by the community and improve discoverability. + +### Integration with data.gouv.fr + +The schema.data.gouv.fr initiative is mainly based on datasets published on the French open data platform data.gouv.fr. However, these tools are still quite separate today. In the coming months, we would like to strengthen the link between schema.data.gouv.fr and data.gouv.fr by promoting existing schemas directly on the open data platform.
+ +First, we would like to inform users of the existence of a consolidated dataset based on an existing schema and provide them with its quality report. Such a feature is newly available on schema.data.gouv.fr. The same feature will arrive soon on data.gouv.fr. + +![Quality report screenshot](./schema-1.png) + +Second, we're looking into integrating schemas into the data publishing process on data.gouv.fr. We could help users by letting them know that a schema corresponding to their dataset already exists. We could suggest what changes to make to get their data directly validated. We already started doing this with a simple implementation: we post comments on datasets which are supposed to follow a schema, letting producers know if the data is valid and, if not, enabling them to access a report to troubleshoot. + +Another possibility would be to offer a new service on data.gouv.fr such as the generation of data from an automatically generated form. This is the goal of the ongoing development of [CSV-GG](https://csv-gg.etalab.studio/?schema=etalab%2Fschema-lieux-covoiturage), which generates a form from an existing Table Schema. This could help users to directly produce validated data. + +![Form generation screenshot](./schema-2.png) + + +### Automation + +In the longer term, we also plan to automate data consolidation based on a schema as much as possible. For that, we need to better know and understand the available resources on the platform. This could be done by systematically analyzing the content of a new resource and trying to fetch metadata such as headers or the type of data in each column. + +These metadata could then be used to identify datasets with similar structures and link them to an existing schema, or to propose creating a new one if it does not already exist. + +We could also take advantage of the tool [CSVAPI](https://github.com/etalab/csvapi), which is currently in use on data.gouv.fr to preview data of a specific dataset.
CSVAPI could evolve to offer new features such as highlighting quality problems directly in the dataset or navigating through different datasets with the same, or partially shared, structures. The schema associated with a dataset could also enable a better preview by associating a type with each field. For example, a postal code could be recognized as such and its leading zero would not be stripped.
+
## Conclusion
+
All of the features mentioned in this article are intended to promote the usefulness and the value of schemas and lead to the creation of new ones. We hope this will result in an increase in the overall quality of the data hosted on data.gouv.fr.
+
Furthermore, we strongly believe that these features will help connect users and producers with similar interests, and will therefore be in line with the community-based nature of data.gouv.fr. diff --git a/site/blog/2020-05-21-etalab-case-study-schemas-data-gouv-fr/schema-1.png b/site/blog/2020-05-21-etalab-case-study-schemas-data-gouv-fr/schema-1.png new file mode 100644 index 000000000..ea3946a26 Binary files /dev/null and b/site/blog/2020-05-21-etalab-case-study-schemas-data-gouv-fr/schema-1.png differ diff --git a/site/blog/2020-05-21-etalab-case-study-schemas-data-gouv-fr/schema-2.png b/site/blog/2020-05-21-etalab-case-study-schemas-data-gouv-fr/schema-2.png new file mode 100644 index 000000000..e429352a2 Binary files /dev/null and b/site/blog/2020-05-21-etalab-case-study-schemas-data-gouv-fr/schema-2.png differ diff --git a/site/blog/2020-06-05-june-virtual-hangout/README.md b/site/blog/2020-06-05-june-virtual-hangout/README.md new file mode 100644 index 000000000..ed5ea52f1 --- /dev/null +++ b/site/blog/2020-06-05-june-virtual-hangout/README.md @@ -0,0 +1,27 @@ +--- +title: Frictionless Data June 2020 Virtual Hangout - 25 June +date: 2020-06-05 +tags: ["events", "community-hangout"] +category: events +image: /img/blog/community.jpg +description: +author: Gift Egwuenu +--- +
+We are hosting a virtual
community hangout to share recent developments in the Frictionless Data community. It is also an avenue to connect with other community members. This will be a 1-hour meeting where community members come together to discuss key topics in the data community.
+
![Photo by Perry Grone on Unsplash](/img/blog/community.jpg)
+
The hangout is scheduled for **25th June 2020 at 5pm BST / 4pm UTC**. If you would like to attend the hangout, [you can sign up for the event using this form](https://forms.gle/3wEGBy2q4Q6pdNfK8).
+
Looking forward to seeing you there!
+
## Community Hangout Recording
+
If you missed the community hangout and would like to catch up on what was discussed, here's a recording of the hangout.
+
diff --git a/site/blog/2020-06-26-csvconf-frictionless-recap/README.md b/site/blog/2020-06-26-csvconf-frictionless-recap/README.md new file mode 100644 index 000000000..24dbacef0 --- /dev/null +++ b/site/blog/2020-06-26-csvconf-frictionless-recap/README.md @@ -0,0 +1,18 @@ +--- +title: csv,conf,v5 Frictionless Data talks and recap +date: 2020-06-26 +tags: ["events"] +category: events +image: /img/blog/commallama.png +description: csv,conf,v5, which occurred virtually in May 2020, featured several talks about using Frictionless Data... +author: Lilly Winfree +--- +[csv,conf,v5](https://csvconf.com/), which occurred virtually in May 2020, featured several talks about using Frictionless Data, and was also organized by two members of the Frictionless Data team, Lilly Winfree and Jo Barratt. csv,conf is a community conference that brings diverse groups together to discuss data topics, and features stories about data sharing and data analysis from science, journalism, government, and open source. Over the years we have had over a hundred different talks from a huge range of speakers, most of which you can still watch back on our [YouTube Channel](http://youtube.com/csvconf).
+ +COVID-19 threw a wrench in our plans for csv,conf,v5, and we ended up converting the conference to a virtual event. We were looking forward to our first conference in Washington DC, but unfortunately, like many other in-person events, this was not going to be possible in 2020. However, there were many positive outcomes of moving to a virtual conference. For instance, the number of attendees quadrupled (over 1000 people registered!) and people were able to attend from all over the world. + +During the conference, there were several talks showcasing Frictionless Data. Two of the Frictionless Data Fellows, Monica Granados and Lily Zhao, presented a talk (“[How Frictionless Data Can Help You Grease Your Data](https://youtu.be/tZmu5DGPRmA)”) that had over 100 people watching live, which is many more than would have been at their talk in person. Other related projects gave talks that incorporated Frictionless Data, such as Christina Gosnell and Pablo Virgo from Catalyst Cooperative discussing “[Getting climate advocates the data they need.](https://youtu.be/ktLTC7SENHk)” I also recommend watching “[Data and Code for Reproducible Research](https://youtu.be/3Ban-orpVtc)” by Lisa Federer and Maryam Zaringhalam, and “[Low-Income Data Diaries - How “Low-Tech” Data Experiences Can Inspire Accessible Data Skills and Tool Design](https://youtu.be/XV_jxbB1cBY)” by David Selassie Opoku. You can see the full list of talks, with links to slides and videos, on the csv,conf website: [https://csvconf.com/speakers/](https://csvconf.com/speakers/). + +If you are planning on organizing a virtual event, you can read more about how csv,conf,v5 was planned here: [https://csvconf.com/going-online](https://csvconf.com/going-online). + +We hope to see some of you next year for csv,conf,v6! 
\ No newline at end of file diff --git a/site/blog/2020-07-10-tool-fund-intermine/README.md b/site/blog/2020-07-10-tool-fund-intermine/README.md new file mode 100644 index 000000000..92232fef6 --- /dev/null +++ b/site/blog/2020-07-10-tool-fund-intermine/README.md @@ -0,0 +1,23 @@ +--- +title: Adding Data Package Specifications to InterMine’s im-tables +date: 2020-07-10 +tags: ["tool-fund"] +category: grantee-profiles +image: /img/blog/nikhilvats.jpeg +description: This grantee profile features Nikhil Vats for our series of Frictionless Data Tool Fund posts... +author: Nikhil Vats +--- + +*This grantee profile features Nikhil Vats for our series of Frictionless Data Tool Fund posts, written to shine a light on Frictionless Data’s Tool Fund grantees, their work and to let our technical community know how they can get involved.* + +## Meet Nikhil Vats +I am an undergraduate student pursuing BE Computer Science and MSc Economics from BITS Pilani, India. My open-source journey started as a Google Summer of Code student with [Open Bioinformatics Foundation](https://www.open-bio.org/) in 2019 and currently, I am a mentor at [InterMine](http://intermine.org/) for Outreachy. I’ve been working part-time as a full-stack web developer for the last two years. The latest project that I worked on was [DaanCorona](https://daancorona.tech/) (daan is a Hindi word which means donation) - a non-profit initiative to help small businesses affected by Coronavirus in India. Through the Frictionless Data Tool Fund, I would like to give back to the open-source community by adding data package specifications to InterMine’s im-tables. Also, I love animals, music and cinema! + +## How did you first hear about Frictionless Data? +I first heard about Frictionless Data from my mentor Yo Yehudi. She had sent an email to the InterMine community explaining the Frictionless Data initiative. The introductory video of Frictionless Data by Rufus Pollock inspired me deeply. 
I researched Frictionless Data Specifications, Data Packages, and other tools and was amazed by how useful they can be when working with data. I wanted to contribute to Frictionless Data because I loved its design philosophy and the plethora of potential tools that can go a long way in changing how we produce, consume, and reuse data in research.
+
## What specific issues are you looking to address with the Tool Fund?
InterMine is an open-source biological data warehouse. Over thirty different InterMine instances exist and can be viewed using InterMine’s web interface [im-tables](https://github.com/intermine/im-tables-3), a JavaScript-based query results table data displayer. The export functionality of im-tables supports common formats like CSV, TSV, and JSON. Whilst this is standardized across different instances of InterMine, exported data doesn’t conform to any specific standards, resulting in friction, especially when integrating with other tools. Adding Data Package specifications and integrating with the Frictionless Data specifications will ensure seamless integration, reusability, and sharing of data among individuals and apps, and will affect a broad number of InterMines based in research institutes around the world. In the long run, I would also like to develop and add a specification for InterMine’s data to the Frictionless Data registry.
+
## How can the open data, open source, or open science communities engage with the work you are doing?
I will be working on the [im-tables](https://github.com/intermine/im-tables-3) and [intermine](https://github.com/intermine/intermine) GitHub repositories, writing blogs every month to share my progress. I also plan to write documentation, tutorials, and contributing guidelines to help new contributors get started easily. I want to encourage and welcome anyone who wants to contribute or get started with open source to work on this project.
I’ll be happy to help you get familiar with InterMine and this project. You can get in touch [here](http://chat.intermine.org/) or [here](https://discord.com/invite/2UgfM2k). Lastly, I welcome everyone to try out and use the features added during this project to make data frictionless, usable, and open! \ No newline at end of file diff --git a/site/blog/2020-07-16-tool-fund-polar-institute/README.md b/site/blog/2020-07-16-tool-fund-polar-institute/README.md new file mode 100644 index 000000000..26a7f8a98 --- /dev/null +++ b/site/blog/2020-07-16-tool-fund-polar-institute/README.md @@ -0,0 +1,29 @@ +--- +title: schema-collaboration Tool Fund +date: 2020-07-16 +tags: ["tool-fund"] +category: grantee-profiles +image: /img/blog/fd_reproducible.png +description: This grantee profile features Carles Pina Estany for our series of Frictionless Data Tool Fund posts... +author: Carles Pina Estany +--- + +*This grantee profile features Carles Pina Estany for our series of Frictionless Data Tool Fund posts, written to shine a light on Frictionless Data’s Tool Fund grantees, their work and to let our technical community know how they can get involved.* + +## Meet Carles Pina Estany +I’m Carles and I’m currently working part-time for the [Swiss Polar Institute](https://swisspolar.ch/) as a software engineer. I'm not a scientist but I like working with scientists, for science institutions, in research, education and with free/open source software. You can read more about me on my website: [https://carles.pina.cat/](https://carles.pina.cat/). + +One of the tasks in the institute is to publish data and encourage researchers to provide detailed metadata. Often this metadata is written by researchers together with a data manager and without a tool in place to do this, it can become tricky to keep track of versions and progress. 
Frictionless Data schemas provide a model on which the metadata can be written to ensure it is machine-readable and standardised, but completing the metadata in JSON files is not very user-friendly. My Tool Fund project, `schema-collaboration`, will help data managers and researchers collaborate easily to document data following the existing Frictionless Data datapackage and tableschema specifications.
+
## How did you first hear about Frictionless Data?
We have had Frictionless Data on our radar for about a year. [Lilly Winfree’s talk at FOSDEM 2020](https://fosdem.org/2020/schedule/event/open_research_frictionless_data/) gave us a good insight into how it could be used and we realised that it was a good fit. Recently we have been improving the way that we describe data for a collaborating organisation: Frictionless Data was a natural way to go and we started using it to describe all datasets. [create.frictionlessdata.io](https://create.frictionlessdata.io) was a good start for creating a first draft of the tableschema and datapackage, but we lacked a tool to collaborate with the researchers when describing a data set.
+
## What specific issues are you looking to address with the Tool Fund?
Collaboration between data managers and researchers needs to be as easy as possible for both sides. Currently there is no tool to collaboratively document tabular data and data packages easily. Using this tool, the researcher will be able to enter the information in a controlled manner and the data manager will be able to give feedback on what’s missing or what needs to be changed through a common platform.
+
Hopefully this will lead to a more productive use of time on both sides, and having the data described with machine-readable Frictionless Data schemas will make it easier to validate, reuse and document consistently.
The tool will be based on [datapackage-ui](https://github.com/frictionlessdata/datapackage-ui) for the frontend, allowing all those involved to collaborate on the metadata through a user-friendly UI. Django will be used for the backend and Docker will be used for installation and deployments. + +## How can the open data, open source, or open science communities engage with the work you are doing? +This project will be based on [datapackage-ui](https://github.com/frictionlessdata/datapackage-ui) so using this tool and opening and fixing issues would be useful contributions to the project. + +Feel free to submit issues, ideas and PR on the Github repository [schema-collaboration](https://github.com/frictionlessdata/schema-collaboration) or [Discord](https://discord.com/invite/j9DNFNw) and test the project on the staging deployment when available. \ No newline at end of file diff --git a/site/blog/2020-07-21-data-matrices-pilot/README.md b/site/blog/2020-07-21-data-matrices-pilot/README.md new file mode 100644 index 000000000..4d5e7b6e5 --- /dev/null +++ b/site/blog/2020-07-21-data-matrices-pilot/README.md @@ -0,0 +1,22 @@ +--- +title: Clarifying the semantics of data matrices and results tables - a Frictionless Data Pilot +date: 2020-07-21 +tags: ['pilot'] +category: +image: /img/adoption/oxford-drg.png +description: This Pilot will focus on removing the friction in reported scientific experimental results by applying the Data Package specifications... +author: Philippe Rocca-Serra and Lilly Winfree +--- +*As part of the Frictionless Data for Reproducible Research project, funded by the Sloan Foundation, we have started a Pilot collaboration with the Data Readiness Group at the Department of Engineering Science of the University of Oxford; the group will be represented by Dr. Philippe Rocca-Serra, an Associate Member of Faculty. 
This Pilot will focus on removing the friction in reported scientific experimental results by applying the Data Package specifications.*
+
Publishing of scientific experimental results is frequently done in ad-hoc ways that are seldom consistent. For example, results are often deposited as idiosyncratic sets of Excel files or tabular files that contain very little structure or description, making them difficult to use, understand and integrate. Interpreting such tables requires human expertise, which is both costly and slow, and leads to low reuse. Ambiguous tables of results can lead researchers to rerun analysis or computation over the raw data before they understand the published tables. This current approach is broken, does not fit users’ data mining workflows, and limits meta-analysis. A better procedure for organizing and structuring information would reduce unnecessary use of computational resources, which is where the Frictionless Data project comes into play. This Pilot collaboration aims to help researchers publish their results in a more structured, reusable way.
+
In this Pilot, we will use (and possibly extend) Frictionless [tabular data packages](https://specs.frictionlessdata.io/tabular-data-package/) to devise both generic and specialized templates. These templates can be used to unambiguously report experimental results. Our short term goal from this work is to develop a set of Frictionless Data Packages for targeted use cases where impact is high. We will focus first on creating templates for statistical comparison results, such as differential analysis, enrichment analysis, high-throughput screens, and univariate comparisons, in genomics research by using the [STATO ontology](http://stato-ontology.org/) within tabular data packages.
+
Our longer term goals are that these templates will be incorporated into publishing systems to allow for clearer reporting of results, more knowledge extraction, and more reproducible science.
For instance, we anticipate that this work will allow for increased consistency of table structure in publications, as well as increased data reuse owing to predictable syntax and layout. We also hope this work will ease the creation of linked data graphs from tables of results due to clarified semantics.
+
An additional goal is to create code that is compatible with R’s [ggplot2 library](https://ggplot2.tidyverse.org/), which would allow for easy generation of data analysis plots. To this end, we plan on working with R developers in the future to create a package that will generate Frictionless Data compliant data packages.
+
This work has recently begun, and will continue throughout the year. We have already met with some challenges, such as working on ways to transform, or normalize, data and ways to incorporate RDF linked data (you can read our related [conversations in GitHub](https://github.com/frictionlessdata/forum/issues/18)). We are also working on how to define a ‘generic’ table layout definition, which is broad enough to be reused in as wide a range of situations as possible.
+
If you are interested in staying up to date on this work, we encourage you to check out these repositories: https://gitlab.com/datascriptor/datascriptor-fldatapackages and https://github.com/ISA-tools/frictionless-collab. Additionally, we will (virtually) be at the eLife Sprint in September to work on closely related work, which you can read about here: https://sprint.elifesciences.org/data-paper-skeleton-tools-for-life-sciences/. Throughout this Pilot, we are planning on reaching out to the community to test these ideas and get feedback. Please contact us on GitHub or in [Discord](https://discord.gg/2UgfM2k) if you are interested in contributing.
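To make the idea of a results-table template more concrete, here is a minimal sketch of what a Table Schema for a differential analysis results table might look like, together with a naive row check using only the standard library. The field names and constraints are illustrative assumptions, not the Pilot's actual templates:

```python
import json

# A minimal, hypothetical Table Schema descriptor for a differential
# analysis results table (field names are illustrative only).
schema = {
    "fields": [
        {"name": "feature_id", "type": "string", "constraints": {"required": True}},
        {"name": "log2_fold_change", "type": "number"},
        {"name": "p_value", "type": "number", "constraints": {"minimum": 0, "maximum": 1}},
    ],
    "primaryKey": "feature_id",
}

def check_row(row, schema):
    """Naive check: required fields are present and number fields parse."""
    errors = []
    for field in schema["fields"]:
        name = field["name"]
        required = field.get("constraints", {}).get("required", False)
        value = row.get(name)
        if value in (None, "") and required:
            errors.append(f"missing required field: {name}")
        elif value not in (None, "") and field["type"] == "number":
            try:
                float(value)
            except ValueError:
                errors.append(f"not a number: {name}={value!r}")
    return errors

row = {"feature_id": "ENSG0000001", "log2_fold_change": "1.8", "p_value": "0.03"}
print(check_row(row, schema))  # → []
print(json.dumps(schema, indent=2)[:40])
```

In practice the Frictionless tooling performs this kind of validation (and much more) against the full specification; the point here is only that the template itself is a small, machine-readable JSON document that travels with the data.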
\ No newline at end of file diff --git a/site/blog/2020-08-03-tool-fund-cambridge-neuro/README.md b/site/blog/2020-08-03-tool-fund-cambridge-neuro/README.md new file mode 100644 index 000000000..5a7cee101 --- /dev/null +++ b/site/blog/2020-08-03-tool-fund-cambridge-neuro/README.md @@ -0,0 +1,31 @@ +--- +title: Analysis of spontaneous activity patterns in developing neural circuits using Frictionless Data tools +date: 2020-08-03 +tags: ["tool-fund"] +category: grantee-profiles +image: /img/blog/fd_reproducible.png +description: This grantee profile features Stephen Eglen for our series of Frictionless Data Tool Fund posts... +author: Stephen Eglen +--- +*This grantee profile features Stephen Eglen for our series of Frictionless Data Tool Fund posts, written to shine a light on Frictionless Data’s Tool Fund grantees, their work and to let our technical community know how they can get involved.*
+
## Meet Stephen Eglen
+
I am a Reader in Computational Neuroscience at the University of Cambridge. A large part of my work involves analysing neuronal recordings taken from high-throughput recording devices, such as multi-electrode arrays. Despite these arrays having been in use for many years, there are still no standard formats for exchanging data, and so we spend lots of time simply reformatting data as we pass it around different groups, or use different analysis tools. Our previous work ([https://doi.org/10.1186/2047-217X-3-3](https://doi.org/10.1186/2047-217X-3-3)) used HDF5; the aim of our current project is to evaluate the use of Frictionless Data as a common format for the analysis of our spontaneous activity recordings, both past and present. The bulk of the work this summer will be done by a talented Natural Science undergraduate at Cambridge, Alexander Shtyrov.
+
## How did you first hear about Frictionless Data?
+ +I had the good fortune to meet Dr Rufus Pollock in 2015 at a scientific meeting where I was presenting our work from 2014 and he was presenting an introduction to Frictionless Data. We then developed a case study (circa 2016) using a simpler data set (the spatial distribution of neurons in the retina). Skipping forward a few years, I saw the call for applications from Frictionless Data and decided it might be a good time to see how the project had developed. Rather than developing further tools, after discussions with the Frictionless Data team, we decided to make a case study for the application of these tools. + +## What specific issues are you looking to address with the Tool Fund? + +Our goals are: + +1. Convert our existing datasets (Eglen et al 2014) into Frictionless Data containers. +2. Compare the relative merits of the containers vs HDF5 for storing "medium-sized" (megabytes, rather than gigabytes) data files. Aspects to consider will include portability, efficiency and ease of access. +3. Develop a case study for analysing spontaneous activity patterns with a generative approach to model the underlying neuronal networks. This code has been developed by colleagues at Cambridge in Matlab, but has yet to be tested on our spontaneous activity patterns. +4. Write up our findings for publication in a peer-reviewed journal. + +## How can the open data, open source, or open science communities engage with the work you are doing? + +We have a GitHub repository, but it is currently private (shared also with Frictionless Data) as it contains some recent datasets relating to human patients that are not yet ready to be shared. We hope to release it as soon as we can, where it will be linked to from my home page: [https://sje30.github.io](https://sje30.github.io). We aim to share all our findings from this project for the benefit of the community. 
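As a rough sketch of what packaging one of these recordings could look like, the snippet below builds a minimal, hypothetical `datapackage.json` descriptor for a table of spike times using only the standard library. The resource path, field names and types are illustrative assumptions, not the project's actual layout:

```python
import json

# Hypothetical descriptor for one multi-electrode array recording.
# The resource path and schema fields are illustrative only.
descriptor = {
    "name": "spontaneous-activity-recording",
    "resources": [
        {
            "name": "spike-times",
            "path": "spike_times.csv",
            "format": "csv",
            "schema": {
                "fields": [
                    {"name": "electrode", "type": "string"},
                    {"name": "spike_time_s", "type": "number"},
                ]
            },
        }
    ],
}

# Serialise to datapackage.json so the CSV travels with its metadata.
print(json.dumps(descriptor, indent=2))
```

Unlike an HDF5 container, the data stays in plain CSV alongside a small JSON file, which is part of the portability trade-off the project sets out to evaluate.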
diff --git a/site/blog/2020-08-17-frictionless-wheat/README.md b/site/blog/2020-08-17-frictionless-wheat/README.md new file mode 100644 index 000000000..5ee9bbc7a --- /dev/null +++ b/site/blog/2020-08-17-frictionless-wheat/README.md @@ -0,0 +1,32 @@ +--- +title: Frictionless Data for Wheat +date: 2020-08-17 +tags: ["tool-fund"] +category: grantee-profiles +image: /img/blog/fd_reproducible.png +description: This grantee profile features Simon Tyrrell, Xingdong Bian, and Robert Davey for our series of Frictionless Data Tool Fund posts... +author: Simon Tyrrell +--- +*This grantee profile features Simon Tyrrell, Xingdong Bian, and Robert Davey for our series of Frictionless Data Tool Fund posts, written to shine a light on Frictionless Data’s Tool Fund grantees, their work and to let our technical community know how they can get involved.* + +## Meet the Grassroots team + +Hi I’m Simon Tyrrell and I’m a research software engineer having spent most of my career in academia. My first degree was in Maths and I did my PhD in Cheminformatics, both done at the University of Sheffield. After some postdoctoral fellowships in Computational Chemistry, I now happily reside in the field of Bioinformatics here at the Earlham Institute (EI) writing software to a diet of tea and loud guitars, both listened to and played. + +Xingdong Bian is a member of the [Data Infrastructure group](http://www.earlham.ac.uk/davey-group), he joined the Earlham Institute in January 2010 and was involved in the development of EI’s Laboratory Information Management System (MISO) and the TGAC Browser. He has worked on solutions for data visualisation, managing servers, genomic databases and bioinformatics tools. Xingdong is now working mainly on the Grassroots project as a research software engineer. He has a BSc in Computer Science from the University of Sheffield and a MSc in Software Engineering from the University of York. 
+ +Robert Davey leads the Data Infrastructure group at the Earlham Institute and is the PI for the Grassroots project. He has a PhD in Computer Science from the University of East Anglia, undertaken at the Roberts lab in the [National Collection of Yeast Cultures](http://www.ncyc.co.uk). Rob leads a number of large computing infrastructure development and deployment projects, is a certified [Software Carpentry](https://software-carpentry.org) Instructor and Trainer, an editorial board member for Nature Scientific Data, and a [Software Sustainability Institute](https://www.software.ac.uk) Fellow. + +Together Xingdong and I work in Robert Davey’s team at the Earlham Institute developing Grassroots. This is a set of middleware tools for sharing bioinformatics data and services so that users and developers can do scientific analyses as easily as possible. + +## How did you first hear about Frictionless Data? + +We have always been big believers in the FAIR data principles and when we saw a tweet about the Frictionless Data tool fund, the more that we read about it, the more it seemed to be exactly what we were after! Even without the fund, it is likely to have been something that we would have looked to implement anyway. + +## What specific issues are you looking to address with the Tool Fund? + +As part of the Designing Future Wheat (DFW) project, we currently have two different repositories: the [DFW data portal](https://opendata.earlham.ac.uk/), using [iRODS](https://irods.org/) with [mod_eirods_dav](https://github.com/billyfish/eirods-dav), and a [digital repository](https://ckan.grassroots.tools/) using [CKAN](https://ckan.org/). Both of these contain a wide variety of heterogeneous data such as genetic sequences, field trial experiment results, images, spreadsheets, publications, etc., and we are trying to standardise how to expose these datasets and their associated metadata. This is where Frictionless Data comes in! 
The ability to have consistent methods of accessing this information should make it easier for other researchers and data scientists to do some great work with all of this data.
+
## How can the open data, open source, or open science communities engage with the work you are doing?
+
We firmly believe in open source and open data, and everything that we create is freely available. We plan to build a selection of Frictionless Data tools and make them available on our existing data portals so people can try them out and give feedback. These will be rolled out incrementally so that progress is visible from early on. Our initial set of work will focus on extending the DFW data portal that uses one of our existing tools, eirods-dav (https://github.com/billyfish/eirods-dav), which is a tool for exposing the data in an iRODS repository in a user-friendly way, with rich APIs for developers and data scientists too. So if anyone has any feedback, ideas, suggestions, rants :-), please raise an issue at the GitHub repo; the more, the merrier! \ No newline at end of file diff --git a/site/blog/2020-08-24-august-virtual-hangout/README.md b/site/blog/2020-08-24-august-virtual-hangout/README.md new file mode 100644 index 000000000..3aac2a1d4 --- /dev/null +++ b/site/blog/2020-08-24-august-virtual-hangout/README.md @@ -0,0 +1,43 @@ +--- +title: Frictionless Data August 2020 Virtual Hangout - 27 August +date: 2020-08-27 +tags: ['events', 'community-hangout'] +category: events +image: /img/blog/community.jpg +description: +author: Sébastien Lavoie +--- +
+We are hosting a virtual community hangout to share recent developments in the Frictionless Data community. It is also an avenue to connect with other community members. This will be a 1-hour meeting where community members come together to discuss key topics in the data community.
+
![Photo by Perry Grone on Unsplash](/img/blog/community.jpg)
+
The hangout is scheduled for **27th August 2020 at 5pm BST / 4pm UTC**. If you would like to attend the hangout, [you can sign up for the event using this form](https://forms.gle/3wEGBy2q4Q6pdNfK8).
+
Looking forward to seeing you there!
+
## Community Hangout Recording
+
If you missed the community hangout and would like to catch up on what was discussed, here's a recording of the hangout.
+
Here is a short summary of what we were up to:
+
- An RFC (request for comments) we are working on and other tools:
  - [Restructuring Libraries to Drivers and Toolkits](https://github.com/frictionlessdata/project/blob/master/rfcs/0006-software-structure.md)
  - [tabulator-py](https://github.com/frictionlessdata/tabulator-py)
  - [datapackage-py](https://github.com/frictionlessdata/datapackage-py)
  - [data.js](https://github.com/datopian/data.js)
  - [jsv](https://github.com/datopian/jsv)
  - [frictionless-py](https://github.com/frictionlessdata/frictionless-py)
  - [datapackage-swift](https://github.com/frictionlessdata/datapackage-swift)
  - [tableschema-swift](https://github.com/frictionlessdata/tableschema-swift)
- Discussion on Google Analytics vs alternatives, some of which are open source.
+
## Technical presentation on frictionless-py
+
We also made available a technical presentation of a new tool we are working on: [frictionless-py](https://github.com/frictionlessdata/frictionless-py). If you would like to delve deeper into the nuts and bolts of it, here it is for your enjoyment!
+
diff --git a/site/blog/2020-09-01-hello-fellows-cohort2/README.md b/site/blog/2020-09-01-hello-fellows-cohort2/README.md new file mode 100644 index 000000000..698d1bfd8 --- /dev/null +++ b/site/blog/2020-09-01-hello-fellows-cohort2/README.md @@ -0,0 +1,52 @@ +--- +title: Say hello to the second cohort of Frictionless Fellows!
+date: 2020-09-01 +tags: ['fellows'] +category: +image: /img/blog/fellows-ending.jpg +description: We are very excited to introduce the newest Fellows for Cohort 2 of the Frictionless Data Reproducible Research Fellows Programme... +author: Lilly Winfree +--- + +We are very excited to introduce the newest Fellows for Cohort 2 of the Frictionless Data [Reproducible Research Fellows Programme](https://fellows.frictionlessdata.io/)! Over the next nine months, these eight early career researchers will be learning about open science, data management, and how to use Frictionless Data tooling in their work to make their data more open and their research more reusable. As an introduction, each Fellow has written a short blog about themselves and their goals. Read below to meet the Fellows and click on their individual blogs to learn more about them! + +*** + +Katerina picture + +**Hi everyone, my name is Katerina Drakoulaki**, I am from Greece and Cyprus, and I’m currently doing my PhD at the National and Kapodistrian University of Athens. My PhD combines all my interests: linguistics, language disorders, music cognition, and working with children! Research reproducibility is important in order to reliably identify and provide intervention to children with difficulties. Read more about [Katerina here.](https://fellows.frictionlessdata.io/blog/hello-katerina/) +*** + +Evelyn picture + +**Hello everybody! I'm Evelyn Night**, an MSc student at the [University of Nairobi](https://www.uonbi.ac.ke/) and a research fellow at the [International Center of Insect Physiology and Ecology](http://www.icipe.org/). Growing up in a tiny village in Kano plains of Western Kenya, I always had a passion for learning. Fast forward through the years I find my way into academia pursuing a master’s degree and characterizing insect pollinator communities using morphometric and molecular tools for my thesis. 
My goal is to improve agricultural research capacity in the country and also to enhance the formation of policies that would ensure an increase in agricultural productivity. Read more about [Evelyn here.](https://fellows.frictionlessdata.io/blog/hello-evelyn/)
***

Dani picture

**Hi everyone! I'm Dani**, a cognitive neuroscientist and open science enthusiast. I live and work in San Sebastian, a beautiful city by the sea in northern Spain. We have a responsibility to overcome the current incentive system in the Academy to provide more honest, accessible, and quality research. I look forward to learning more about Frictionless Data tools and incorporating them into my work so that my research is open to everyone. Read more about [Dani here.](https://fellows.frictionlessdata.io/blog/hello-dani/)
***

Kate picture

**Hello hi! I’m Kate Bowie**, a 28-year-old midwesterner studying the human microbiome, or the collection of bacteria that live in and on the human body. As I dive deeper into the field of microbiome science, I am becoming an advocate for putting resources and time into improving research reproducibility. I wanted to become a Frictionless Fellow so that I could learn tools to help microbiome science data workflows become more reproducible and engage in the open science community. Read more about [Kate here.](https://fellows.frictionlessdata.io/blog/hello-kate/)
***

Sam picture

**Hello! My name is Sam Wilairat.** I am currently earning a Master of Library and Information Science degree (MLIS) and have an interest in data librarianship. As a fellow, I’m hoping to learn frictionless data principles and tools to ultimately promote them at my institution via education and outreach to researchers. I believe Open Science is the future and the more people embrace it, the more equitable and innovative research will be! 
Read more about [Sam here.](https://fellows.frictionlessdata.io/blog/hello-sam/)
***

Anne picture

**Hey everyone, I'm Anne!** I’m a graduate student based in Geneva, Switzerland who was born and bred in a few places across the United States (including New York, Chicago, Houston, and Washington DC!). Here in Switzerland, I study international institutions with the eye of an anthropologist or sociologist, through long-term ethnographic research. I’m excited to learn how to apply the Frictionless Data tools in my work throughout these nine months, and to experiment with new forms of conveying social science research in the process.
***

Ritwik picture

**Hi Ritwik here!** I am based near Delhi, India and am doing my master’s in Sustainable Buildings, Energy Conservation and Climate Change at the International Institute of Information Technology Hyderabad. It is very important that the research carried out in this domain is reproducible and available to all so we can use it to spread awareness among people. Read more about [Ritwik here.](https://fellows.frictionlessdata.io/blog/hello-ritwik/)
***

Jacqueline picture

**Hi! My name is Jacqueline.** I am a Master’s Candidate and Interdisciplinary Innovation Fellow in the Department of Computer and Information Science at the University of Pennsylvania. I applied to be a Reproducible Research Fellow to build space into my research process for actively exploring open science and reproducibility issues. As a scientist, I consider it an obligation to share my knowledge as widely and freely as possible and to ensure that my findings can be vetted through replication studies and other important checks. 
Read more about [Jacqueline here.](https://fellows.frictionlessdata.io/blog/hello-jacqueline/) \ No newline at end of file diff --git a/site/blog/2020-09-16-goodtables-bcodmo/README.md b/site/blog/2020-09-16-goodtables-bcodmo/README.md new file mode 100644 index 000000000..53cee245d --- /dev/null +++ b/site/blog/2020-09-16-goodtables-bcodmo/README.md @@ -0,0 +1,27 @@ +--- +title: Goodtables - Expediting the data submission and submitter feedback process +date: 2020-09-16 +tags: ['pilot'] +category: +image: /img/blog/bcodmoLogo.jpg +description: This post describes the second part of our Pilot collaboration with BCO-DMO... +author: Adam Shepherd, Amber York, Danie Kinkade, and Lilly Winfree +--- +*This post was originally published on the [BCO-DMO blog](https://blog.bco-dmo.org/2020/09/14/goodtables).* + +Earlier this year, the [Biological and Chemical Oceanography Data Management Office (BCO-DMO)](https://www.bco-dmo.org/) completed a pilot project with the [Open Knowledge Foundation (OKF)](https://okfn.org/) to [streamline the data curation processes for oceanographic datasets using Frictionless Data Pipelines (FDP)](https://blog.okfn.org/2020/02/10/frictionless-data-pipelines-for-ocean-science/). The goal of this pilot was to construct reproducible workflows that transformed the original data submitted to the office into archive-quality, [FAIR-compliant](https://doi.org/10.1038/sdata.2016.18) versions. FDP lets a user define an order of processing steps to perform on some data, and the project developed new processing steps specific to the needs of these oceanographic datasets. These ordered processing steps are saved into a configuration file that is then available to be used anytime the archived version of the dataset must be reproduced. The primary value of these configuration files is that they capture and make the curation process at BCO-DMO transparent. Subsequently, we found additional value internally by using FDP in three other areas. 
First, they made the curation process across our data managers much more consistent than the ad-hoc data processing scripts they individually produced before FDP. Second, we found that data managers saved time because they could reuse pre-existing pipelines to process newer versions submitted for pre-existing datasets. Finally, the configuration files helped us keep track of what processes were used in case a bug or error was ever found in the processing code. This project exceeded our goal of using FDP on at least 80% of data submissions to BCO-DMO; we now use it almost 100% of the time.

As a major deliverable from BCO-DMO’s [recent NSF award](https://www.nsf.gov/awardsearch/showAward?AWD_ID=1924618), the office planned to refactor its entire data infrastructure using techniques that would allow BCO-DMO to respond more rapidly to technological change. Using Frictionless Data as a backbone for data transport is a large piece of that transformation. Continuing to work with OKF, both groups sought to focus the collaboration on how to improve the data submission process at BCO-DMO.

![Duplication error](./duplication_error.png)

*Goodtables noticed a duplicate row in an uploaded tabular data file.*

Part of what makes BCO-DMO a successful data curation office is our hands-on work helping researchers achieve compliance with the [NSF’s Sample and Data Policy coming from their Ocean Sciences division](https://www.nsf.gov/pubs/2017/nsf17037/nsf17037.jsp). Yet, a steady and constant queue of data submissions means that it can take some weeks before our data managers can thoroughly review data submissions and provide necessary feedback to submitters. In response, BCO-DMO has been creating a lightweight web application for submitting data while ensuring such a tool preserves the easy experience of submitting data that presently exists. 
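The kind of immediate feedback such a submission tool can give is easy to picture with a minimal, hypothetical row check — for instance, flagging longitudes outside the valid range of -180 to 180. This is a sketch for illustration only (the field name `lon` and the error format are invented here), not BCO-DMO's or Goodtables' actual code:

```python
# A toy bounds check in the spirit of a custom tabular validation check.
# The field name "lon" and the (row_number, value) error format are
# invented for this sketch.
def check_longitude(rows, field="lon"):
    """Return (row_number, value) pairs whose longitude falls outside
    the valid range of -180 to 180 decimal degrees."""
    errors = []
    for number, row in enumerate(rows, start=1):
        value = row.get(field)
        if value is None:
            continue  # no longitude in this row; nothing to check
        if not -180 <= float(value) <= 180:
            errors.append((number, float(value)))
    return errors

rows = [
    {"lon": "-70.5"},  # valid
    {"lon": "181.2"},  # out of bounds
    {"lon": "12.0"},   # valid
]
print(check_longitude(rows))  # [(2, 181.2)]
```

A real profile would attach many such checks (depth above the surface, inconsistent time formats, and so on) to the columns a submitter declares.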
Working with OKF, we wanted to expedite the data review process by providing data submitters with as much immediate feedback as possible using Frictionless Data’s [GoodTables project](https://goodtables.io/).

Through a data submission platform, researchers would be able to upload data to BCO-DMO and, if tabular, get immediate feedback from Goodtables about whether it was correctly formatted or whether any other quality issues existed. With these reports at their disposal, submitters could update their submissions without having to wait for a BCO-DMO data manager to review them. For small and minor changes, this saves the submitter the headache of having to wait for simple feedback. The goal is to catch submitters at a time when they are focused on this data submission so that they don’t have to return weeks later and reconstitute their headspace around these data again. We catch them when their head is in the game.

Goodtables provides us with a framework to branch out beyond simple tabular validation by developing data profiles. These profiles would let a submitter specify the type of data they are submitting. Is the data a bottle or CTD file? Does it contain latitude, longitude, time, or depth observations? These questions, optional for submitters to answer, would provide even further validation steps to get improved feedback immediately. For example, specifying that a file contains latitude or longitude columns could detect whether all values fall within valid bounds. Or that a depth column contains values above the surface. Or that the column pertaining to the time of an observation has inconsistent formatting across some of the rows. BCO-DMO can expand on this platform to continue to add new and better quality checks that submitters can use.

![Out-of-bounds longitude](./goodtables_lon-out-of-bounds.png)
*Goodtables noticed a longitude that is outside a range of -180 to 180. 
This happened because BCO-DMO recommends using decimal degrees format between -180 and 180 and defined a Goodtables check for longitude fields.* \ No newline at end of file diff --git a/site/blog/2020-09-16-goodtables-bcodmo/duplication_error.png b/site/blog/2020-09-16-goodtables-bcodmo/duplication_error.png new file mode 100644 index 000000000..9aff9c25e Binary files /dev/null and b/site/blog/2020-09-16-goodtables-bcodmo/duplication_error.png differ diff --git a/site/blog/2020-09-16-goodtables-bcodmo/goodtables_lon-out-of-bounds.png b/site/blog/2020-09-16-goodtables-bcodmo/goodtables_lon-out-of-bounds.png new file mode 100644 index 000000000..79b9bc460 Binary files /dev/null and b/site/blog/2020-09-16-goodtables-bcodmo/goodtables_lon-out-of-bounds.png differ diff --git a/site/blog/2020-09-17-tool-fund-metrics/README.md b/site/blog/2020-09-17-tool-fund-metrics/README.md new file mode 100644 index 000000000..9d72bdc5c --- /dev/null +++ b/site/blog/2020-09-17-tool-fund-metrics/README.md @@ -0,0 +1,44 @@ +--- +title: Metrics in Context +date: 2020-09-17 +tags: ['tool-fund'] +category: grantee-profiles +image: /img/blog/fd_reproducible.png +description: This grantee profile features Asura for our series of Frictionless Data Tool Fund posts... +author: Asura Enkhbayar +---

*This grantee profile features Asura for our series of Frictionless Data Tool Fund posts, written to shine a light on Frictionless Data’s Tool Fund grantees, their work and to let our technical community know how they can get involved.*

## Meet Asura

Hallihallöchen meine Lieben!

I’m Asura and I’m a doctoral student at Simon Fraser University, Vancouver. I’m working in the muddy area between data science, communication, and philosophy in order to explore questions of power and systemic inequality within scholarly communication. This means that I work at the ScholCommLab as a data scientist, while exploring the philosophical issues in my doctoral project. 
Concretely, I am intending to develop an analytic framework for the study of citations as infrastructure, building on critical feminist theory and Science and Technology Studies (STS). However, I remain a coder and tinkerer at heart, which is how I ended up working with Frictionless Data on Metrics in Context.

## How did you first hear about Frictionless Data?

I first heard about Frictionless Data at the pre-csv,conf,v4 meetup hosted by Open Knowledge Foundation in 2019. I remember being quite impressed by the basic premise of Frictionless, although I hadn’t grasped the full picture of the technicalities yet. During the main conference I then learnt about more opportunities to get involved such as the Fellowship and the Tool Fund. I left csv,conf with great impressions and plans to work out an application but then life, a.k.a. my PhD, happened... I had forgotten about Frictionless Data, until I recently found out that the Tool Fund is going into its second round. At the time I had started working with the Make Data Count team on data citations, then ideas and topics fell into place, and here I am now!

## What specific issues are you looking to address with the Tool Fund?

In this project, I want to address a common theme within the critique of modern technology in our data-driven world: the lack of context for data and, often related, biases in databases. Algorithmic and database biases have moved into the spotlight of critical thought on how technology exacerbates systemic inequalities. Following these insights, I want to address the need for different (rather than simply more) context and metadata for scholarly metrics in the face of racial, gender, and geographic biases which plague modern academia.

It isn’t controversial to say that scholarly metrics have become an integral part of scholarship and are probably here to stay. Controversy usually comes into play once we discuss how and for which purposes metrics are used. 
This typically refers to the (mis)use of citation counts and citation-based indicators1 for research assessment and governance, which also led to a considerable number of initiatives and movements calling for a responsible use of metrics2. However, I would like to take a step back and redirect the attention to the origin of the data underlying citation counts.

These conversations about the inherent biases of citation databases are not entirely new and scholars across disciplines have been highlighting the consequential systemic issues. However, in this project I am not proposing a solution to overcome or abolish these biases per se, but rather I want to shine light on the opaque mechanisms of capturing metrics which lead to the aforementioned inequalities. In other words, I propose to develop an open data standard3 for scholarly metrics which documents the context in which the data was captured. This metadata describes the properties of the capturing apparatus of a scholarly event (e.g., a citation, news mention, or tweet of an article) such as the limitations of document coverage (what kind of articles are indexed?), the kind of events captured (tweets, retweets, or both, maybe?) or other technicalities (is Facebook considered as a whole or only a subset of public pages?).

While metrics in context don’t remove systemic inequality, they make the usually hidden and inaccessible biases visible and explicit. In doing so, they facilitate conversations about structural issues in academia and eventually contribute to the development of better infrastructures for the future.

## How can the open data, open source, or open science communities engage with the work you are doing?

Metrics in Context will be fully conducted out in the open, which means that all resources will be available on GitHub and I will do my best to transparently document progress and decisions. 
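To make the idea of metadata about the "capturing apparatus" concrete, a single record in such a standard might look something like the following sketch. Every field name and value here is invented purely for illustration; the project's actual schema is still being designed:

```python
# Hypothetical "metrics in context" record: all field names and values
# below are invented for illustration, not the project's actual standard.
capture_context = {
    "source": "ExampleCitationIndex",  # invented data source name
    "event_type": "citation",          # e.g. citation, news mention, tweet
    "document_coverage": "journal articles with a DOI, indexed from 1996 onward",
    "events_captured": ["citations from indexed documents only"],
    "known_limitations": "non-English venues and books are largely excluded",
}
print(capture_context["event_type"])
```

The point of such a record is not the exact fields but that limitations of coverage travel alongside the counts themselves.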
+ +The project is organized in three parts (roughly breaking down into conceptual questions, technical implementation, and scholarly application) and I invite all of you to leave your ideas, thoughts, and critiques via email or a Github issue. + +You can see the full roadmap with a detailed breakdown of tasks here: https://github.com/Bubblbu/metrics-in-context/issues/2 + +--- + 1. There is extensive literature for the critique of indicators such as the h-index or Journal Impact Factor. See Haustein and Larivière (2015) for an overview. + 2. See DORA and the Leiden Manifesto for two prominent examples of responsible research metrics initiatives + 3. I am expecting references to this xkcd comic on standards: https://xkcd.com/927/ \ No newline at end of file diff --git a/site/blog/2020-10-08-frictionless-framework/README.md b/site/blog/2020-10-08-frictionless-framework/README.md new file mode 100644 index 000000000..9d59b4460 --- /dev/null +++ b/site/blog/2020-10-08-frictionless-framework/README.md @@ -0,0 +1,69 @@ +--- +title: Announcing the New Frictionless Framework +date: 2020-10-08 +tags: ["news"] +category: news +image: /img/blog/frictionless-logo.png +description: We are excited to announce our new high-level Python framework, frictionless-py. Frictionless-py was created to... +author: Evgeny Karev and Lilly Winfree +--- +## Frictionless Framework +We are excited to announce our new high-level Python framework, frictionless-py: https://github.com/frictionlessdata/frictionless-py. Frictionless-py was created to simplify overall user-experience for working with Frictionless Data in Python. It provides several high-level improvements in addition to many low-level fixes. Read more details below, or watch this intro video by Frictionless developer Evgeny: https://youtu.be/VPnC8cc6ly0 + +## Why did we write new Python code? 
+Frictionless Data has been in development for almost a decade, with global users and projects spanning domains from science to government to finance. However, our main Python libraries (`datapackage`,`goodtables`, `tableschema`,`tabulator`) were originally built with some inconsistencies that have confused users over the years. We had started redoing our documentation for our existing code, and realized we had a larger issue on our hands - mainly that the disparate Python libraries had overlapping functionalities and we were not able to clearly articulate how they all fit together to form a bigger picture. We realized that overall, the existing user experience was not where we wanted it to be. Evgeny, the Frictionless Data technical lead developer, had been thinking about ways to improve the Python code for a while, and the outcome of that work is `frictionless-py`. + +## What happens to the old Python code (`datapackage-py`, `goodtables-py`, `tableschema-py`, `tabulator-py`)? How does this affect current users? +`Datapackage-py` (see [details](https://github.com/frictionlessdata/datapackage-py#datapackage-py)), `tableschema-py` (see [details](https://github.com/frictionlessdata/tableschema-py#tableschema-py)), `tabulator-py` (see [details](https://github.com/frictionlessdata/tabulator-py#tabulator-py)) still exist, will not be altered, and will be maintained. If your project is using this code, these changes are not breaking and there is no action you need to take at this point. However, we will be focusing new development on `frictionless-py`, and encourage you to consider starting to experiment with or work with `frictionless-py` during the last months of 2020 and migrate to it starting from 2021 [(here is our migration guide)](https://github.com/frictionlessdata/frictionless-py/blob/master/docs/target/migration-guide/README.md). The one important thing to note is that `goodtables-py` has been subsumed by `frictionless-py` (since version 3 of Goodtables). 
We will continue to bug-fix `goodtables@2.x` in [this branch](https://github.com/frictionlessdata/goodtables-py/tree/goodtables) and it is also still available on [PyPi](https://pypi.org/project/goodtables/) as it was before. Please note that the `frictionless@3.x` API is not stable as we are continuing to work on it at the moment. We will release `frictionless@4.x` by the end of 2020 to be the first SemVer/stable version.

## What does `frictionless-py` do?
`Frictionless-py` has four main functions for working with data: describe, extract, validate, and transform. These are inspired by typical data analysis and data management methods.

*Describe your data*: You can infer, edit and save metadata of your data tables. This is a first step for ensuring data quality and usability. Frictionless metadata includes general information about your data like a textual description, as well as field types and other tabular data details.

*Extract your data*: You can read your data using a unified tabular interface. Data quality and consistency are guaranteed by a schema. Frictionless supports various file protocols like HTTP, FTP, and S3 and data formats like CSV, XLS, JSON, SQL, and others.

*Validate your data*: You can validate data tables, resources, and datasets. Frictionless generates a unified validation report and supports many options to customize the validation process.

*Transform your data*: You can clean, reshape, and transfer your data tables and datasets. Frictionless provides a pipeline capability and a lower-level interface to work with the data.

Additional features:
- Powerful Python framework
- Convenient command-line interface
- Low memory consumption for data of any size
- Reasonable performance on big data
- Support for compressed files
- Custom checks and formats
- Fully pluggable architecture
- An included API server
- More than 1000 tests

## How can users get started? 
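Before diving into the guides, the *describe* step above can be made concrete with a toy sketch of schema inference over a small table. This is a simplified stand-in for the idea, not frictionless-py's actual algorithm, and the sample data is invented:

```python
# Toy schema inference in the spirit of frictionless-py's "describe" verb
# (a simplified stand-in for the idea, not the library's actual code).
def infer_field_type(values):
    """Pick the narrowest type that fits every value in a column."""
    for kind, cast in (("integer", int), ("number", float)):
        try:
            for value in values:
                cast(value)  # raises ValueError if the type doesn't fit
            return kind
        except ValueError:
            continue
    return "string"

def describe(header, rows):
    """Return a minimal Table Schema-like dict for tabular data."""
    columns = list(zip(*rows))  # pivot rows into columns
    return {
        "fields": [
            {"name": name, "type": infer_field_type(column)}
            for name, column in zip(header, columns)
        ]
    }

schema = describe(["id", "temp", "station"],
                  [("1", "20.5", "A"), ("2", "19.8", "B")])
print(schema)
# {'fields': [{'name': 'id', 'type': 'integer'}, {'name': 'temp', 'type': 'number'}, {'name': 'station', 'type': 'string'}]}
```

The real library does far more (formats, protocols, constraints, reports), but the inferred-schema-as-metadata shape is the core idea the four verbs build on.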
We recommend that you begin by reading the [Getting Started Guide](https://colab.research.google.com/drive/1VyDx6C3pxF3Vab8MxH_sI86OTSNmYuDJ) and the [Introduction Guide](https://colab.research.google.com/drive/1HGXJa7BWyEgoGZLkC6tKt2DMqgeHibEY). We also have in-depth documentation for [Describing Data](https://colab.research.google.com/drive/1eIq1ZTUntJplRxkGHxmqlxZ0zyXCm0wU), [Extracting Data](https://colab.research.google.com/drive/1is_PcpzFl42aWI2B2tHaBGj3jxsKZ_eZ), [Validating Data](https://colab.research.google.com/drive/1cJSZlG_v6OI3I2FtnXdKOSPjhwZNjMK1), and [Transforming Data](https://colab.research.google.com/drive/1C4dFWDExyxzGIwLUovrDQZghZK4JK2PD).

## How can you give us feedback?
What do you think? Let us know your thoughts, suggestions, or issues by joining us in our community chat on [Discord](https://discord.com/invite/j9DNFNw) or by opening an issue in the `frictionless-py` repo: https://github.com/frictionlessdata/frictionless-py/issues.

## FAQs

### Where’s the documentation?
Are you a new user? Start here: [Getting Started](https://github.com/frictionlessdata/frictionless-py/blob/master/docs/target/getting-started/README.md) & [Introduction Guide](https://github.com/frictionlessdata/frictionless-py/blob/master/docs/target/introduction-guide/README.md)
Are you an existing user? Start here: [Migration Guide](https://github.com/frictionlessdata/frictionless-py/blob/master/docs/target/migration-guide/README.md)
The full list of documentation can be found here: https://github.com/frictionlessdata/frictionless-py#documentation

### What’s the difference between `datapackage` and `frictionless`?
In general, `frictionless` is our new generation software while `tabulator`/`tableschema`/`datapackage`/`goodtables` are our previous generation software. `Frictionless` has a lot of improvements over them. 
Please see this issue for the full answer and a code example: https://github.com/frictionlessdata/frictionless-py/issues/428 + +### I’ve spotted a bug - where do I report it? +Let us know by opening an issue in the `frictionless-py` repo: https://github.com/frictionlessdata/frictionless-py/issues. For `tabulator`/`tableschema`/`datapackage` issues, please use the corresponding issue tracker and we will triage it for you. Thanks! + +### I have a question - where do I get help? +You can ask us questions in our Discord chat and someone from the main developer team or from the community will help you. Here is an invitation link: https://discord.com/invite/j9DNFNw. We also have a Twitter account [(@frictionlessd8a)](https://twitter.com/frictionlessd8a) and community calls where you can come meet the team and ask questions: http://frictionlessdata.io/events/. + +### I want to help - how do I contribute? +Amazing, thank you! We always welcome community contributions. Start here (https://frictionlessdata.io/contribute/) and here (https://github.com/frictionlessdata/frictionless-py/blob/master/CONTRIBUTING.md) and you can also reach out to Evgeny (@roll) or Lilly (@lwinfree) on GitHub if you need help. 
+

### Additional Links/Resources
- Intro to `frictionless-py` video: https://youtu.be/VPnC8cc6ly0
- `frictionless-py` repository: https://github.com/frictionlessdata/frictionless-py
- Frictionless Data website: https://frictionlessdata.io/ \ No newline at end of file diff --git a/site/blog/2020-10-19/fellows-reflect-on-open-access-week/README.md b/site/blog/2020-10-19/fellows-reflect-on-open-access-week/README.md new file mode 100644 index 000000000..586fded0e --- /dev/null +++ b/site/blog/2020-10-19/fellows-reflect-on-open-access-week/README.md @@ -0,0 +1,35 @@ +--- +title: The Fellows reflect on Open Access Week +date: 2020-10-19 +tags: ['fellows'] +category: +image: /img/blog/open-access-week-2020.png +description: A compilation of the Fellows' thoughts and reflections on this year's Open Access Week theme. +author: Lilly Winfree +---

The theme of this year's [Open Access Week](http://www.openaccessweek.org/) is "Open with Purpose: Taking Action to Build Structural Equity and Inclusion". How can we be more purposeful in the open space? How can we work towards true equity and inclusion? The following blog is a compilation of the Fellows' thoughts and reflections on this theme.

### Katerina
When I read this year's theme I wondered how I could relate to it. Inclusion was the word that made it for me. At first I thought about how the Fellowship itself was inclusive for me, a person with a humanities background who had not had the chance to receive any institutional or structured support when it comes to programming and data management. Afterwards, what came to mind is how inclusive the things I'm currently learning on the programme are with regard to the populations I work with in my clinical role. Cognitive accessibility is an effort to make online content more accessible to persons with overall cognitive difficulties, that is, difficulties with memory, attention, and language. 
These are not rare difficulties, as they characterize individuals with learning difficulties (developmental language disorder, dyslexia), autism spectrum disorder, attention deficit-hyperactivity disorder (ADHD), dementia, aphasia and other cognitive difficulties following a stroke. I discovered a lot of initiatives and guidelines on how online content could be more accessible: using alternatives to text, such as figures, audio, or a simpler layout; making content appear in predictable ways; giving more time to individuals to interact with the content; and focusing on the readability of the content, among others. In sum, many individuals among us have difficulties accessing online content in an optimal way. More information about what we can do about it is available [here](https://developer.mozilla.org/en-US/docs/Web/Accessibility/Cognitive_accessibility#WCAG_Guidelines) and [here](https://www.w3.org/WAI/cognitive/).

### Dani
Once again, we see academia and the overall scientific research environment engaged in a discussion about who should bear the costs of scientific publications. Few have welcomed with open arms the [new agreement](https://www.nature.com/articles/d41586-020-02959-1) between a few German institutions and the Nature Publishing group. The obvious gap between what the prestigious publishing group demands and what researchers can afford has turned the news into some sort of bad joke. However, it seems that many have accepted by now other relatively cheaper Open Access publishing arrangements. At least, relatively cheaper for them. Research funding is nowadays so scarce and precarious in many countries that a simple article processing charge of 1200€ will prevent researchers from submitting to such journals. No doubt there is good will in those who fight to make the current publishing model more open. 
However, I can’t help but feel there is a lack of awareness of the financial gap involved in setting an acceptable threshold for article processing charges that are based on the standards of the world's major economic powers. + +### Sam +Libraries spend an enormous amount of money paying journal subscription fees in order to give their patrons access to cutting edge research. Imagine a world in which paywalls are a thing of the past and these thousands of dollars currently reserved at every library for journal subscription costs could be redistributed. Librarians need to support Open Access and to publicly reject the current systems in place that restrict access to information for the majority of the global community. Librarians should stop and ask themselves, what are the long term effects of supporting the current system? What historic injustices are being perpetuated by paying for standard subscription-based journals? If librarianship is based upon providing equitable service to all information users, supporting Open Access is a necessity. + +### Anne +My colleagues and I have been having interesting discussions about what Open Access means in the context of our respective disciplines, and so many of them have boiled down to funding models, and how to make sure that the (financial) incentives are in the right place. So when I approached these questions of structural equity and inclusion, I wondered how we can balance the ideals of open access that allow for creative collaboration, open knowledge, and more equitable contributions (all things that brought us all together at OKF) with the necessary requirements of funding and the pressure to publish. 
In my own discipline, these [debates have been happening for a long time](https://savageminds.org/2015/05/27/open-access-what-cultural-anthropology-gets-right-and-american-anthropologist-gets-wrong/), and were recently brought to light because of an experimental Open Access journal called [HAU](https://anthrodendum.org/2018/06/13/hau-is-dead-long-live-oa-initiatives/), which was founded by the late [David Graeber](https://www.nytimes.com/2020/09/04/books/david-graeber-dead.html). Furthermore, as a journalist, I tend not to equate open access with accessibility more generally, because making something available or open doesn't necessarily mean that it will be used (let alone understood by a wider audience!). This is the integral role that journalism can play within the open access academic community, in my view: through increased data literacy, visualisation tools, and what I call "translation through storytelling". This is what drew me to #dataviz, and why I'm creating interactive visualisations of human rights data from the United Nations with OKF. While the Universal Periodic Review is well-known for being one of the most inclusive and equitable venues at the UN, few know about it outside of Geneva. So as Open Access Week comes to a close, I've been starting to re-think the movement as "open, accessible, fundable, and understandable". Maybe it's not as catchy, but it's what I hope to embody!

### Ritwik
Whenever I see terms such as 'Open access' and 'Open Science', I usually think about how we can make changes to the current research environment so as to extract meaning from the open research space and allow people to learn more about this and move beyond conventional 'Research Journals'. One of the ways we can empower structural and racial equity in research is by investing in Open Science infrastructures and services and capacity building for Open Science, by including open translation services and tools like GitHub to lower the language barrier. 
Not every potential reader of openly available science is fluent in English, and automatic translation is not always correct, but even rough translations can still convey the overall meaning. We can take help from open source development programs to empower organisations like CC Extractor and other free local translation software so we can include languages like Spanish, Italian, Hindi, Japanese and other native languages, so that everyone is able to break those barriers and understand literature promoted in different languages. Similarly, we can provide sustainable funding mechanisms and foster decentralized, community-owned/-run non-profit open source initiatives in this space. We can also apply an inclusive, holistic approach to science and research in the sense of Open Scholarship to include human value education, open scholcomm and open education with a view on teaching in the seminar and classroom, etc. - basically the whole variety of research and teaching practices that define academic life, but still remain underrepresented in the larger debate around Open Science.

### Evelyn
'Equity' and 'inclusion' are two words that I know too well given the yawning gaps that exist between the haves and have-nots in African society. Research indeed is the core of any society, identifying calamities and solving them in the most sustainable of ways. These two words therefore occupy an integral space in the open research arena since structural equity and inclusion would mean that research knowledge is given for free irrespective of any societal construct for productive downstream research. 
Although open access has been lauded for promoting access to high-quality research at no cost, authors still face sky-high publishing fees that limit the number of papers that make it into the open, [especially in low- and middle-income regions like Africa](https://www.researchprofessionalnews.com/rr-news-africa-pan-african-2020-10-open-access-publishing-fees-a-crisis-for-african-research/). The need to subsidize open publishing costs is thus apparent, with the overall goal of strengthening research capacity and impactful research so that such regions can be on par with the rest of the world in research and development. Research societies and governments need to forge bilateral pacts that encourage open access by introducing waivers on publishing costs and by curbing the predatory journals that more often than not derail the reputations of scientists. Indeed, achieving structural equity and inclusion will require that authors and users of scientific papers alike can disseminate and access knowledge for free.

### Kate
In organizing my ideas for a coherent reflection on the theme of this year’s Open Access Week, I thought of recent news out of the United Kingdom. The UK’s National Institute for Health Research (NIHR) recently revealed [new measures](https://www.nihr.ac.uk/news/nihr-responds-to-the-governments-call-for-further-reduction-in-bureaucracy-with-new-measures/25633) that no longer require universities to hold memberships in specific charters and concordats to receive grant funding. This may seem like a move towards removing roadblocks to funding; however, membership in these charters, such as the [Athena SWAN Charter](https://www.advance-he.ac.uk/equality-charters/athena-swan-charter) and the [Race Equality Charter](https://www.advance-he.ac.uk/equality-charters/race-equality-charter), provides universities with strategies to identify and address institutional and cultural barriers. 
In a 2020 world in which the Open Access community picks a theme that explicitly names “structural equity and inclusion” among its goals, institutions of power, like the UK’s NIHR, seem tone-deaf in no longer requiring the charters that guide them toward those structures. I commend the Open Access community for leading the way by prioritizing equity and inclusion in its pursuit of shared knowledge, and I believe we should all challenge institutional frameworks, like the UK’s NIHR, to embrace the values of the open access community.

### Jacqueline
As a machine learning researcher, this year’s Open Access Week theme resonates with me. Open access, structural equity, and inclusion should be explicit goals in artificial intelligence (AI) research. To quote the [Algorithmic Justice League](https://www.ajl.org/), "Technology should serve all of us. Not just the privileged few." However, the demographics of the AI community do not reflect societal diversity, and this can allow algorithms to reinforce harmful [systemic biases](https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/). But even if we know who is writing the algorithms that affect our lives, we often don’t know how these predictive systems make their decisions. A recent [response](https://www.nature.com/articles/s41586-020-2766-y) to a Google Health [closed source tool](https://www.nature.com/articles/s41586-019-1799-6) for breast cancer screening argues that failing to release code and training data undermines the scientific value, transparency, and reproducibility of AI systems. Ironically, however, this well-worded argument lies behind a paywall that limits transparency by design. 
Competing views on closed access AI publishing are captured in the 2018 [boycott](https://openaccess.engineering.oregonstate.edu/) of Nature Machine Intelligence, its [coverage](https://www.sciencemag.org/news/2018/05/why-are-ai-researchers-boycotting-new-nature-journal-and-shunning-others) in the scientific media, and the journal’s [rebuttal](https://www.nature.com/articles/s42256-020-0144-y). Whether you stand by [Plan S](https://www.coalition-s.org/) or not, open conversations around the ethics of access and transparency are important steps toward safe, equitable, and inclusive AI. diff --git a/site/blog/2020-10-28-october-virtual-hangout/README.md b/site/blog/2020-10-28-october-virtual-hangout/README.md new file mode 100644 index 000000000..e707a6c3b --- /dev/null +++ b/site/blog/2020-10-28-october-virtual-hangout/README.md @@ -0,0 +1,30 @@ +--- +title: Frictionless Data October 2020 Virtual Hangout +date: 2020-10-28 +tags: ['events', 'community-hangout'] +category: events +image: /img/blog/community.jpg +description: +author: Lilly Winfree +--- + + +## Did you miss our October community call? + +We had a great presentation by Keith Hughitt, who told us about his work on using Frictionless to create infrastructure for sharing biology and genomics data packages. You can watch his presentation here: + + +## Other agenda items of note included: + +- We are hiring a community manager! The full details are here: [https://okfn.org/about/jobs/](https://okfn.org/about/jobs/) +- Help us prioritize adding new features to frictionless-py! You can vote on which features you want to see here: [https://github.com/frictionlessdata/frictionless-py/issues/486](https://github.com/frictionlessdata/frictionless-py/issues/486) +- You can read more about frictionless-py here: [https://frictionlessdata.io/blog/2020/10/08/frictionless-framework/](https://frictionlessdata.io/blog/2020/10/08/frictionless-framework/) + +## Join us next month! +Our next meeting will be on 19th November. 
You can sign up here: [https://forms.gle/5HeMrt2MDCYSYWxT8](https://forms.gle/5HeMrt2MDCYSYWxT8). We’ll discuss new features of frictionless-py, and there will be time for your updates too. Do you want to share something with the community? Let us know when you sign up! + +## Call recording: +Here is the recording of the full call: + + +As always, join us on [Discord](https://discord.com/invite/j9DNFNw) or [Twitter](https://twitter.com/frictionlessd8a) to say hi or ask any questions! \ No newline at end of file diff --git a/site/blog/2020-11-18-dryad-pilot/README.md b/site/blog/2020-11-18-dryad-pilot/README.md new file mode 100644 index 000000000..ee08471f3 --- /dev/null +++ b/site/blog/2020-11-18-dryad-pilot/README.md @@ -0,0 +1,16 @@ +--- +title: Dryad and Frictionless Data collaboration +date: 2020-11-18 +tags: ['pilot'] +category: +image: /img/adoption/dryad.png +description: Announcing a new Pilot collaboration with the data repository Dryad... +author: Tracy Teal +--- +*By Tracy Teal; originally posted in the Dryad blog: https://blog.datadryad.org/2020/11/18/frictionless-data/*

Guided by our commitment to make research data publishing more seamless and re-usable, we are thrilled to partner with the Open Knowledge Foundation and the Frictionless Data team to enhance our submission processes. By integrating the Frictionless Data toolkit, Dryad will be able to provide direct feedback to authors on the structure of uploaded tabular files. This will also allow automated file-level metadata to be created at upload and made available for download for published datasets.

We are excited to get moving on this project and, with support from the Sloan Foundation, the Open Knowledge Foundation has just announced a job opening to contribute to this work. 
Please check out the posting and circulate it to any developers who may be interested in building out this functionality with us: https://okfn.org/about/jobs/ + +*Stay tuned for a project update in July 2021!* \ No newline at end of file diff --git a/site/blog/2020-11-19-november-virtual-hangout/README.md b/site/blog/2020-11-19-november-virtual-hangout/README.md new file mode 100644 index 000000000..cb386f8ae --- /dev/null +++ b/site/blog/2020-11-19-november-virtual-hangout/README.md @@ -0,0 +1,40 @@ +--- +title: Frictionless Data November 2020 Virtual Hangout +date: 2020-11-19 +tags: ['events', 'community-hangout'] +category: events +image: /img/blog/community.jpg +description: +author: Sébastien Lavoie +--- + +## A recap from our November community call + +This time around, we were offered a fantastic presentation by Costas Simatos, the team leader of the [ISA2 Interoperability Test Bed Action](https://joinup.ec.europa.eu/collection/interoperability-test-bed-repository/solution/interoperability-test-bed) at the European Commission! 
He revealed some powerful tools to validate data against specifications, including the following:

- A [Table Schema validator](https://www.itb.ec.europa.eu/json/tableschema/upload), along with a [news article](https://joinup.ec.europa.eu/collection/interoperability-test-bed-repository/solution/interoperability-test-bed/news/table-schema-validator) about it;
- More validators to work with [XML](https://www.itb.ec.europa.eu/docs/guides/latest/validatingXML/), [RDF](https://www.itb.ec.europa.eu/docs/guides/latest/validatingRDF/), [JSON](https://www.itb.ec.europa.eu/docs/guides/latest/validatingJSON/) and [CSV](https://www.itb.ec.europa.eu/docs/guides/latest/validatingCSV/) (based on a fork of [tableschema-java](https://github.com/frictionlessdata/tableschema-java)), as well as a [custom CSV validator](https://www.itb.ec.europa.eu/csv/kohesio/upload) built for [Kohesio](https://kohesio.eu/), the _"Project Information Portal for Cohesion Policy"_;
- A feature-rich [conformance testing platform](https://www.itb.ec.europa.eu/itb/).

If you would like to dive deeper and watch Costas' presentation, here it is:



## Other agenda items from our hangout

- We are hiring a community manager as well as a software developer! The full details are here: [https://okfn.org/about/jobs/](https://okfn.org/about/jobs/)
- Interested in giving your feedback on [an issue about raster geoinformation](https://github.com/frictionlessdata/frictionless-py/issues/536)?

## Join us next month!

Our next meeting will be on December 17. You can [sign up here](https://forms.gle/5HeMrt2MDCYSYWxT8). We’ll discuss using the Frictionless data package for the web archive data package (WACZ format), give some space to talk about geospatial data standards and coping with Covid, and show a member's platform dedicated to open data hackathons!

As always, there will be time for your updates too. Do you want to share something with the community? Let us know when you sign up! 
+ +## Call recording + +On a final note, here is the recording of the full call: + + + +As usual, you can join us on [Discord](https://discord.com/invite/j9DNFNw) or [Twitter](https://twitter.com/frictionlessd8a) to say hi or ask any questions. See you there! diff --git a/site/blog/2020-11-26-fellows-packaging/README.md b/site/blog/2020-11-26-fellows-packaging/README.md new file mode 100644 index 000000000..792957df7 --- /dev/null +++ b/site/blog/2020-11-26-fellows-packaging/README.md @@ -0,0 +1,75 @@ +--- +title: Packaging Research Data with the Frictionless Fellows +date: 2020-11-26 +tags: ['fellows'] +category: +image: /img/standards/data-package.png +description: Our Reproducible Research Fellows recently learned all about packaging their data by using the Data Package Creator. To help others learn how they too can package their data, the Fellows wrote about packaging their data... +author: Lilly Winfree +--- + +Have you ever been looking at a dataset and had no idea what the data values mean? What units are being used? What does that acronym in the first column mean? What is the license for this data? + +These are all very common issues that make data hard to understand and use. At Frictionless Data, we work to solve these issues by packaging data with its metadata - aka the description of the data. To help you package your data, we have [code in several languages](https://frictionlessdata.io/software/) and a browser tool, called [Data Package Creator](https://create.frictionlessdata.io/). + +Our Reproducible Research Fellows recently learned all about packaging their data by using the Data Package Creator. To help others learn how they too can package their data, the Fellows wrote about packaging their data in blogs that you can read below! + +*** + +### [Data Package is Valid! 
By Ouso Daniel](https://fellows.frictionlessdata.io/blog/ouso-datapackage-blog/) (Cohort 1) +“To quality-check the integrity of your data package creation, you must validate it before downloading it for sharing, among many things. The best you can get from that process is "Data package is valid!". What about before then?” + +*** + +### [Combating other people’s data by Monica Granados](https://fellows.frictionlessdata.io/blog/monica-datapackage-blog/) (Cohort 1) +“Follow the #otherpeoplesdata on Twitter and in it you will find a trove of data users trying to make sense of data they did not collect. While the data may be open, having no metadata or information about what variables mean, doesn’t make it very accessible….Without definitions and an explanation of the data, taking the data out of the context of my experiment and adding it to something like a meta-analysis is difficult. Enter Data packages.” + +*** + +### [Data Package Blog by Lily Zhao](https://fellows.frictionlessdata.io/blog/lily-datapackage-blog/) (Cohort 1) +"When I started graduate school, I was shocked to learn that seafood is actually the most internationally traded food commodity in the world….However, for many developing countries being connected to the global seafood market can be a double-edged sword….Over the course of my master's degree, I developed a passion for studying these issues, which is why I am excited to share with you my experience turning some of the data my collaborators [collected] into a packaged dataset using the Open Knowledge Foundation’s Datapackage tool.” + +*** + +### [¿Cómo empaquetamos datos y por qué es importante organizar la bolsa del supermercado? By Sele Yang](https://fellows.frictionlessdata.io/blog/sele-datapackage-blog/) (Cohort 1) +“Packaging abortion data from OpenStreetMap. This is a post to share with you the process and the steps for creating datapackages. What is that? A datapackage is basically a package that streamlines the way we share and replicate data. It is like a container of data ready to be transported along the knowledge highway (geeky, right).” + +*** + +### [So you want to get your data package validated? By Katerina Drakoulaki](https://fellows.frictionlessdata.io/blog/katerina-datapackage-blog/) (Cohort 2) +“Have you ever found any kind of dataset, (or been given one by your PI/collaborator) and had no idea what the data were about? During my PhD I've had my fair share of not knowing how code works, or how stimuli were supposed to be presented, or how data were supposed to be analysed….The datapackage tool tries to solve one of these issues, more specifically creating packages in which data make sense, and have all the explanations (metadata) necessary to understand and manipulate them.” + +*** + +### [Constructing a basic data package in Python by Jacqueline Maasch](https://fellows.frictionlessdata.io/blog/jacqueline-datapackage-blog/) (Cohort 2) +“As a machine learning researcher, I am constantly scraping, merging, reshaping, exploring, modeling, and generating data. Because I do most of my data management and analysis in Python, I find it convenient to package my data in Python as well. The screenshots below are a walk-through of basic data package construction in Python.” + +*** + +### [Sharing data from your own scientific publication by Dani Alcalá-López](https://fellows.frictionlessdata.io/blog/dani-datapackage-blog/) (Cohort 2) +“What better way to start working with open data than by sharing a Data Package from one of my own publications? In this tutorial, I will explain how to use the Frictionless Data tools to share tabular data from a scientific publication openly. 
This will make easier for anyone to reuse this data.” + +*** + +### [Data Package Blog by Sam Wilairat](https://fellows.frictionlessdata.io/blog/sam-datapackage-blog/) (Cohort 2) +“As a library science student with an interest in pursuing data librarianship, learning how to create, manage, and share frictionless data is important. These past few months I've been learning about Frictionless Data and how to use Frictionless Data Tools to support reproducible research….To learn how to use the Frictionless Data Tools, I decided to pursue an independent project and am working on creating a comprehensive dataset of OER (open educational resources) health science materials that can be filtered by material type, media format, topic, and more.” + +*** + +### [Let's Talk Data Packaging by Evelyn Night](https://fellows.frictionlessdata.io/blog/evelyn-datapackage-blog/) (Cohort 2) +“A few weeks ago I met data packages for the first time and I was intrigued since I had spent too much time in the past wrangling missing and inconsistent values. Packaging data therefore taught me that arranging and preserving data does not have to be tedious anymore. Here, I show how I packaged a bit of my data (unpublished) into a neat json document using the Data Package creator . I am excited to show you just how much I have come from knowing nothing to being able to package and extract the json output.” + +*** + +### [[Data]packaging human rights with the Universal Periodic Review by Anne Lee Steele](https://fellows.frictionlessdata.io/blog/anne-datapackage-blog/) (Cohort 2) +“All of the records for the Universal Periodic Review have been uploaded online, and are available for the public. However, it’s not likely that the everyday user would be able to make heads or tails of what it actually means….The way I think about it, the Data Package is a way of explaining the categories used within the data itself, in case someone besides an expert is using them. 
While sections like "Recommendation" and "Recommending State" may be somewhat self-explanatory, I can imagine that this will get way more complicated with purely numerical data.” + +*** + +### [Creating a datapackage for microbial community data (and a phyloseq object) by Kate Bowie](https://fellows.frictionlessdata.io/blog/kate-datapackage-blog/) (Cohort 2) +“I study bacteria, and lucky for me, bacteria are everywhere….My lab often tries many different ways to handle the mock [bacteria] community, so it’s important that the analysis be documented and reproducible. To address this, I decided to generate a data package using a tool created by the Open Knowledge Foundation. Here is my experience creating a data package of our data, the metadata, and associated software.” + +*** + +### [Using Weather and Rainfall Data to Validate by Ritwik Agarwal](https://fellows.frictionlessdata.io/blog/ritwik-datapackage-blog/) (Cohort 2) +“I am using a data resource from Telangana Open Data....it is an open source data repository commissioned by the state government here in India and basically it archives and stores Weather, Topological, Agriculture and Infrastructure data which then can be used by research students and stakeholders keen to study and make reports in it….CSV files are very versatile, but cannot handle the metadata with all the necessary context. We need to make sure that people can find our data and the information they need to understand our data. That's where the Data Package comes in! 
” diff --git a/site/blog/2020-12-17-december-virtual-hangout/README.md b/site/blog/2020-12-17-december-virtual-hangout/README.md new file mode 100644 index 000000000..78af91ca1 --- /dev/null +++ b/site/blog/2020-12-17-december-virtual-hangout/README.md @@ -0,0 +1,41 @@ +--- +title: Frictionless Data December 2020 Virtual Hangout +date: 2020-12-17 +tags: ['events', 'community-hangout'] +category: events +image: /img/blog/community.jpg +description: +author: Sébastien Lavoie +--- + +## A recap from our December community call + +We had a presentation about "using frictionless data package for web archive data package (WACZ format)". More details in [this GitHub issue](https://github.com/frictionlessdata/forum/issues/69). + +If you would like to dive deeper and watch Ilya's presentation, you can find it here: + + + +## Other agenda items from our hangout + +* People are interested in tools dealing with “small” data. +* What are we up to for 2021? What’s the roadmap for Frictionless? + * Specs are stable. + * Always bet on JavaScript! We will keep focusing on working with tools for JavaScript. Its versatility is required for desktop apps, using dynamic frameworks like React, etc. + * We will keep working on [https://github.com/frictionlessdata/frictionless-js](https://github.com/frictionlessdata/frictionless-js) +* [https://github.com/datopian/data-cli](https://github.com/datopian/data-cli) Command line tool for working with data, Data Packages and the DataHub +* [https://github.com/datopian/datapub](https://github.com/datopian/datapub) React-based framework for building data publishing workflows (especially for CKAN) + +## Join us next time! + +Our next meeting will be announced in January 2021! You can [sign up here](https://forms.gle/5HeMrt2MDCYSYWxT8) to be notified when the hangout is scheduled. We’ll give some space to talk about geospatial data standards and coping with Covid, and to show a member's platform dedicated to open data hackathons! 
As always, there will be time for your updates too. Do you want to share something with the community? Let us know when you sign up! + +## Call recording + +On a final note, here is the recording of the full call: + + + +As usual, you can join us on [Discord](https://discord.com/invite/j9DNFNw) or [Twitter](https://twitter.com/frictionlessd8a) to say hi or ask any questions. See you there! diff --git a/site/blog/2021-01-13-partnering-with-ODI/README.md b/site/blog/2021-01-13-partnering-with-ODI/README.md new file mode 100644 index 000000000..7951e6cb9 --- /dev/null +++ b/site/blog/2021-01-13-partnering-with-ODI/README.md @@ -0,0 +1,35 @@ +--- +title: Partnering with ODI to improve Frictionless Data +date: 2021-01-13 +tags: ['news'] +category: news +image: /img/blog/fd_reproducible.png +description: +author: Sara Petti +--- +Originally published: https://blog.okfn.org/2021/01/12/partnering-with-odi-to-improve-frictionless-data/

_In the framework of the Open Data Institute’s [fund to develop open source tools for data institutions](https://theodi.org/article/call-for-proposals-funding-to-develop-open-source-tools-for-data-institutions/), the [Open Knowledge Foundation (OKF)](https://okfn.org) has been awarded funds to improve the quality and interoperability of Frictionless Data._

In light of our effort to make data open and accessible, we are thrilled to announce we will be partnering with the [Open Data Institute (ODI)](https://theodi.org/) to improve our existing documentation and add new features to [Frictionless Data](https://frictionlessdata.io/), creating a better user experience for all.
To achieve this, we will be working with a cohort of users from our active and engaged community to create better documentation that fits their needs. Our main goal is to make it easier for current and future users to understand and make use of the Frictionless Data tools and data libraries to their fullest potential. 
We know how frustrating it can be to try to use existing code (or learn new code) that has incomplete documentation, and we don’t want that to be a barrier for our users anymore. This is why we are very grateful to the ODI for granting us the opportunity to improve upon our existing documentation. +## So, what will be changing? + +* We will have a new project overview section, to help our users understand how to use Frictionless Data for their specific needs. +* We will improve the existing documentation, to make sure even brand-new users can quickly understand everything. +* We will have tutorials, to showcase real user experiences and provide user-friendly examples. +* We will add a FAQ section. + +## And when will all of that be ready? +Very soon! By the beginning of April everything will be online, so stay tuned (and frictionless)! + +## Call for user feedback +Feedback from our community is crucial to us, and part of this grant will be used to fund an evaluation of the existing documentation by our users in the form of user feedback sessions. +Are you using our Frictionless Data tools or our Python data library? Then we want to hear from you! +We are currently looking for novice and intermediate users to help us review our documentation, in order to make it more useful for you and all our future users. +For every user session you take part in, you will be given £50 for your time and feedback. +Are you interested? Then fill in [this form](https://docs.google.com/forms/d/e/1FAIpQLSezZVuKjqnFL9CHtuWVjDwDu8Cv1gQCAIs85TtDYQUv1t9hVw/viewform). + +## More about Frictionless Data +Frictionless Data is a set of specifications for data and metadata interoperability, accompanied by a collection of software libraries that implement these specifications, and a range of best practices for data management. The project is funded by the Sloan Foundation. 
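As a concrete illustration of "specifications plus software", here is a minimal sketch of the `datapackage.json` descriptor at the heart of a Frictionless Data Package; the package name, the `data.csv` path and the field definitions below are invented for illustration:

```python
import json

# A minimal Data Package descriptor written by hand, following the shape
# defined by the Frictionless Data Package and Table Schema specifications.
# The package name, file path and fields are illustrative examples only.
descriptor = {
    "name": "example-package",
    "title": "Example Data Package",
    "licenses": [{"name": "CC0-1.0"}],
    "resources": [
        {
            "name": "measurements",
            "path": "data.csv",
            "schema": {
                # A Table Schema: one entry per column of data.csv
                "fields": [
                    {"name": "site", "type": "string",
                     "description": "Sampling site identifier"},
                    {"name": "reading", "type": "number",
                     "description": "Sensor reading in mg/L"},
                ]
            },
        }
    ],
}

# Saving the descriptor next to the data file is all it takes to turn
# a plain CSV into a (minimal) data package.
with open("datapackage.json", "w") as f:
    json.dump(descriptor, f, indent=2)
```

Tools such as the Python data library or the Data Package Creator can then read a descriptor like this and check that the data file actually matches its declared schema.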
diff --git a/site/blog/2021-01-18-schema-collaboration/README.md b/site/blog/2021-01-18-schema-collaboration/README.md new file mode 100644 index 000000000..5e1a9b5ef --- /dev/null +++ b/site/blog/2021-01-18-schema-collaboration/README.md @@ -0,0 +1,58 @@ +--- +title: Schema Collaboration +date: 2021-01-18 +tags: ["tool-fund"] +category: grantee-profiles +image: /img/blog/schema-collaboration.png +description: The tool is designed to help data managers and researchers document data packages. The documentation needs to be started by the data manager who then sends the URL to the researchers allowing them to edit the schema. +author: Carles Pina Estany +--- + +*This blog is part of a series showcasing projects developed during the 2020 Tool Fund. The Tool Fund provided five mini-grants of $5,000 to support individuals or organisations in developing an open tool for reproducible research built using the Frictionless Data specifications and software. This Fund is part of the Frictionless Data for Reproducible Research project, which is funded by the Sloan Foundation. This project applies our work in Frictionless Data to data-driven research disciplines, in order to facilitate reproducible data workflows in research contexts.* + +## What problem does Schema-Collaboration solve? +As a software engineer, I’ve spent more than a decade developing software used by researchers or data managers using different technologies. I have been involved in free software communities and projects for more than 20 years. + +Whilst working for a polar research institute, we saw the opportunity to take advantage of Frictionless data packages to describe datasets in a machine readable way ready for publication. But it was difficult for data managers and researchers to collaborate effectively on this, particularly when one or both groups were not familiar with Frictionless schemas. 
We needed a way for researchers submitting datasets to get feedback from the data managers to ensure that the dataset’s schema was correct. + +## How does Schema-Collaboration make collaborating easier? +The Frictionless [Data Package Creator](https://create.frictionlessdata.io/) is a very good Web-based tool to create the schemas but it didn’t help out of the box on the collaboration part. The solution in this tool fund was to build a system that uses Data Package Creator to enable data managers and researchers to create and share dataset schemas, edit them, post messages and export the schemas in different formats (text, Markdown, PDF). To encourage collaboration within a project multiple researchers can work on the same schema. Being able to view the description in human-readable formats makes it easier to spot mistakes and to integrate with third-party data repositories. + +From a data manager’s perspective the tool allows them to keep tabs on the datasets being managed and their progress. It prevents details getting lost in emails and hopefully provides a nicer interface to encourage better collaboration. + +In other words: think of a very simplified “Google Docs” specialised for data packages. + +## Who can use Schema-Collaboration? +The tool is designed to help data managers(*) and researchers document data packages. The documentation (which is based on Frictionless schemas) needs to be started by the data manager who then sends the URL to the researchers allowing them to edit the schema. + +*: or anybody who wants to collaborate on creating a data package. + +![Data-packages](https://user-images.githubusercontent.com/74717970/104922881-8e788c80-599b-11eb-9260-21b9a5747a8f.png) +*Data managers can view a list of datapackages within the Schema-Collaboration tool.* + +## How can I use this tool? + To evaluate the tool it is possible to use the [public demo server](https://carles.eu.pythonanywhere.com/) or to install it locally on a computer. 
+ +It was packaged in a Docker container to make it easier to install on servers. There is full [documentation available](https://github.com/frictionlessdata/schema-collaboration/blob/master/docker/README.md). + +Once the tool is installed, it is used via a web browser by both data managers and researchers. + +![datapackage-detail](https://user-images.githubusercontent.com/74717970/104923256-19598700-599c-11eb-9cc4-19bb7637fdaa.png) +*You can view details about the datapackage, including comments from the data manager or other users, and also edit the datapackage.* + +## Future plans for Schema-Collaboration +We plan to install schema-collaboration at the Swiss Polar Institute to describe polar data sets. + +In the upcoming January Frictionless Data community call (sign up [here](https://docs.google.com/forms/d/e/1FAIpQLSeuNCopxXauMkrWvF6VHqOyHMcy54SfNDOseVXfWRQZWkvqjQ/viewform) to join), I will do a demo and I would really appreciate feedback. Please feel free to use it and add issues (bugs or ideas) in the [GitHub repository](https://github.com/frictionlessdata/schema-collaboration). + +## Tech stack +For the curious: schema-collaboration is developed using Python and Django and uses the django-crispy-forms package to create the forms. It supports sqlite3 and MariaDB databases. + +## Thanks to… +In order to integrate Data Package Creator with schema-collaboration, some changes were needed in the Data Package Creator. Evgeny (@roll on GitHub/Discord) from the Frictionless Data project made the changes needed to achieve this and helped with the integration. Thank you very much! 
+ +**Further reading:** + +GitHub repository: https://github.com/frictionlessdata/schema-collaboration + +Meet Carles Pina Estany: https://frictionlessdata.io/blog/2020/07/16/tool-fund-polar-institute/#meet-carles-pina-estany diff --git a/site/blog/2021-01-26/sara-petti/README.md b/site/blog/2021-01-26/sara-petti/README.md new file mode 100644 index 000000000..b56392570 --- /dev/null +++ b/site/blog/2021-01-26/sara-petti/README.md @@ -0,0 +1,18 @@ +--- +title: Meet the new Frictionless Data Community Manager +date: 2021-01-26 +tags: ["team"] +category: +image: /img/blog/sara.png +description: Introducing a new team member - Sara Petti. Sara will be working as Community Manager of the Frictionless Data Project. If you are interested in knowing more about Frictionless Data, or you are already using our tools feel free to contact Sara! +author: Sara Petti +--- +Hi everyone, + +I am Sara Petti, the new [Frictionless Data](https://frictionlessdata.io/) Community Manager. After a very happy decade in Brussels, I moved to Hamburg, Germany in February last year, just in time to live through the global pandemic in a brand-new place. With social life at a standstill, I finally decided to start something I had wanted to do for quite some time: learn to code in order to do data visualisation. Right now I am learning Python slowly but surely, and when I am not going bananas over cleaning data I draw comics or experiment with fermenting items from my kitchen (mostly vegetables). So if you are also a passionate breeder of lactobacillus bacteria, we should definitely get in touch! + +Back in Brussels I worked with public libraries, developing projects with them, but also advocating for them to be on the EU agenda. Talking with some of the most innovative librarians, I became well aware of the importance of granting free access to information and knowledge to everyone in order to empower citizens and foster democracy. 
I started to take an interest in the whole open movement, monitoring projects and policy development, and quickly became passionate about it.
+
+I think the real turning point for me was when I got to know an amazing project on opening air quality data developed by some public libraries in Colombia and tried to replicate it in Europe. At that point, I really understood the implications of the open movement: by opening data, citizens are able to gain ownership of subjects that are compelling to them and to advocate for policy improvement. Sadly, more often than not, when data is made available it is not directly reusable.
diff --git a/site/blog/2021-01-30-fellows-validation/README.md b/site/blog/2021-01-30-fellows-validation/README.md new file mode 100644 index 000000000..d88c79b6b --- /dev/null +++ b/site/blog/2021-01-30-fellows-validation/README.md @@ -0,0 +1,48 @@ +--- +title: Learning how to validate research data - A Fellows blog +date: 2021-01-30 +tags: ['fellows'] +category: +image: /img/introduction/report.png +description: Interested in learning more about how you can validate your data? Read on to see how the Frictionless Fellows validated their research data and learn their tips and tricks! +author: Lilly Winfree +--- + +Have you ever heard a data horror story about Excel automatically changing all numbers into dates without so much as a warning? Have you ever accidentally entered a wrong data value into a spreadsheet, or accidentally deleted a cell? What if there was an easy way to detect errors in data types and content? Well there is! That is the main goal of Goodtables, the Frictionless data validation service, and also the `Frictionless-py` `validate` function. Interested in learning more about how you can validate your data? Read on to see how the Frictionless Fellows validated their research data and learn their tips and tricks! + +:::tip +Click on the links below to read the whole blog. +::: + +### [Don't you wish your table was as clean as mine? By Monica Granados](https://fellows.frictionlessdata.io/blog/monica-goodtables-blog/) (Cohort 1) +“How many times have you gotten a data frame from a colleague or downloaded data that had missing values? Or it’s missing a column name? Do you wish you were never that person? Well introducing Goodtables – your solution to counteracting bad data frames! 
As part of the inaugural Frictionless Data Fellows, I took Goodtables out for a spin.”
+
+### [Validando datos un paquete a la vez by Sele Yang](https://fellows.frictionlessdata.io/blog/sele-goodtables-blog/) (Cohort 1)
+“I worked with the database I have been using for the programme, which can be found in my GitHub repository. It is a geographic database on abortion clinics, downloaded from OpenStreetMap via OverpassTurbo….Goodtables is a very powerful tool that gives us the possibility of constant, simple validation to keep our databases in optimal condition, not only for our own work, but also so that others can reproduce and use them.”
+
+### [Tabular data: Before you use the data by Ouso Daniel](https://fellows.frictionlessdata.io/blog/ouso-goodtables-blog/) (Cohort 1)
+“I want to talk about goodtables, a Frictionless data (FD) tool for validating tabular data sets. As hinted by the name, you only want to work on/with tabular data in good condition; the tool highlights errors in your tabular dataset, with the precision of the exact location of your error. Again, the beautiful thing about FD tools is that they don’t discriminate on your preferences: they encompass the Linux-based CLI, Python and GUI folks, among others.”
+
+### [Data Validation Of My Interview Dataset Using Goodtables by Lily Zhao](https://fellows.frictionlessdata.io/blog/lily-goodtables-blog/) (Cohort 1)
+“I used goodtables to validate the interview data we gathered as part of the first chapter of my PhD.
These data were collected in Mo'orea, French Polynesia, where we interviewed both residents and scientists regarding the future of research in Mo'orea….Amplifying local involvement and unifying the perspectives of researchers and coastal communities is critical not only in reducing inequity in science, but also in securing lasting coral reef health.”
+
+### [Walking through the `frictionless` framework by Jacqueline Maasch](https://fellows.frictionlessdata.io/blog/jacqueline-goodtables-blog/) (Cohort 2)
+“While the GoodTables web server is a convenient tool for automated data validation, the frictionless framework allows for validation right within your Python scripts. We'll demonstrate some key frictionless functionality, both in Python and command line syntax. As an illustrative point, we will use a CSV file that contains an invalid element – a remnant of careless file creation.”
+
+### [Validating your data before sharing with the community by Dani Alcalá-López](https://fellows.frictionlessdata.io/blog/dani-goodtables-blog/) (Cohort 2)
+“Once we have decided to share our data with the rest of the world, it is important to make sure that other people will be able to reuse it. This means providing as much metadata as possible, but also checking that there are no errors in the data that might prevent others from benefiting from our data. Goodtables is a simple tool that you can use both on the web and in the command-line interface to carry out this verification process.”
+
+### [Goodtables blog by Sam Wilairat](https://fellows.frictionlessdata.io/blog/sam-goodtables-blog/) (Cohort 2)
+“Now let's try validating the same data using the Goodtables command line tool! ….Once the installation is complete, type "goodtables path/to/file.csv".
You will either receive a green message stating that the data is valid, or a red message, like the one I have shown below, showing that the data is not valid!”
+
+### [Using goodtables to validate metadata from multiple sequencing runs by Kate Bowie](https://fellows.frictionlessdata.io/blog/kate-goodtables-blog/) (Cohort 2)
+“Here, I will show you how I used a schema and GoodTables to make sure my metadata files could be combined, so I can use them for downstream microbial diversity analysis….It's extremely helpful that GoodTables pointed this error out, because if I tried to combine these metadata files in R with non-matching case as it is here, then it would create TWO separate columns for the metadata….Now I will be able to combine these metadata files together and it will make my data analysis pipeline a lot smoother.”
+
+### [Reflecting on 'datafication', data prep, and UTF-8 with goodtables.io by Anne Lee Steele](https://fellows.frictionlessdata.io/blog/anne-goodtables-blog/) (Cohort 2)
+“Before I knew it, it was 2021, and revisiting my data in the new year has made me realize just how much time and effort goes into cleaning, structuring, and formatting datasets – and how much more goes into making them understandable for others (i.e. through Frictionless' data-package). I'd always thought of these processes as a kind of black box, where 'data analysis' simply happens. But in reality, it's the fact that we've been spending so much time on preparatory work that points to how important these processes actually are: and how much goes into making sure that data can be used before analyzing it in the first place.”
+
+### [Validate it the GoodTables way!
By Evelyn Night](https://fellows.frictionlessdata.io/blog/evelyn-goodtables-blog/) (Cohort 2)
+“Errors may sometimes occur while describing data in a tabular format and these could be in the structure, such as missing headers and duplicated rows, or in the content, for instance assigning the wrong character to a string. Some of these errors could be easily spotted by the naked eye and fixed during the data curation process while others may just go unnoticed and later impede some downstream analytical workflows. GoodTables is handy for flagging common errors that come with tabular data handling, as it recognises these discrepancies quickly and efficiently to enable users to debug their data easily.”
+
+### [Using the frictionless framework for data validation by Katerina Drakoulaki](https://fellows.frictionlessdata.io/blog/katerina-goodtables-blog/) (Cohort 2)
+“Thus, similar to what the data package creator and goodtables.io do, frictionless detects your variables and their names, and infers the type of data. However, it detected some of my variables as strings, when they are in fact integers. Of course, goodtables did not detect this, as my data were generally -in terms of formatting- valid. Not inferring the right type of data can be a problem both for future me, but also for other people looking at my data.”
diff --git a/site/blog/2021-02-03-january-virtual-hangout/README.md b/site/blog/2021-02-03-january-virtual-hangout/README.md
new file mode 100644
index 000000000..10668f2bc
--- /dev/null
+++ b/site/blog/2021-02-03-january-virtual-hangout/README.md
@@ -0,0 +1,46 @@
+---
+title: Frictionless Data January 2021 Virtual Hangout
+date: 2021-02-03
+tags: ['events', 'community-hangout']
+category: events
+image: /img/blog/january.png
+description: On January 28th we had our first Frictionless Data Community Call for 2021. It was great to see it was so well attended!
+author: Sara Petti
+---
+
+## A recap from our January community call
+
+On January 28th we had our first Frictionless Data Community Call for 2021. It was great to see it was so well attended!
+
+We heard a presentation by Carles Pina i Estany on schema-collaboration, a system that uses Data Package Creator to enable data managers and researchers to create and share dataset schemas, edit them, post messages and export the schemas in different formats (text, Markdown, PDF). Before this tool was developed, researchers communicated with a data manager via email for each datapackage they were publishing, which considerably slowed down the whole process, besides making it more difficult.
+
+To discover more about schema-collaboration, have a look at it on [GitHub](https://github.com/frictionlessdata/schema-collaboration/) or read [the blog](https://frictionlessdata.io/blog/2021/01/18/schema-collaboration/) Carles wrote about the project. If you would like to dive deeper and watch Carles’ presentation, you can find it here:
+
+
+
+## Other agenda items from our hangout
+
+- [Open Data Day](https://opendataday.org/) is fast approaching! If you are organising something to celebrate open data on March 6th, let us know! You still have a few days to apply for mini-grants for your community events.
+
+- [csv,conf,v6](https://csvconf.com/) is happening on May 4-5. If you want to give a talk, make sure to submit a proposal by February 28th. More info [here](https://csvconf.com/submit/).
+
+## News from the community
+
+Giuseppe Peronato and [cividi](https://cividi.ch/) started using Frictionless Data for data pipelines using (Geo-)Spatial datasets, e.g. raster data and GeoJSONs. You can have a look [here](https://github.com/datahq/dataflows/pull/153).
They have also been looking more closely at the Creator’s UI library in a [prototype](https://github.com/gperonato/archive-forger) with researchers, and releasing a [QGIS plugin](https://blog.datalets.ch/073/) for Frictionless Data.
+
+Thorben started working with the official vaccination publication of the German Federal Health Authority, which is replaced daily; a Data Package Pipeline run by a GitHub Action scrapes it and saves it as a Data Package. If you are interested, have a look [here](https://github.com/n0rdlicht/rki-vaccination-scraper).
+
+## Join us next month!
+
+Our next meeting will be on 25th February. Don’t miss the opportunity to get a code demonstration on frictionless.py by our very own Evgeny Karev (@roll). You can [sign up here](https://docs.google.com/forms/d/e/1FAIpQLSeuNCopxXauMkrWvF6VHqOyHMcy54SfNDOseVXfWRQZWkvqjQ/viewform?usp=sf_link).
+
+Do you want to share something with the community? Let us know when you sign up!
+
+## Call recording:
+
+On a final note, here is the recording of the full call:
+
+
+
+As usual, you can join us on [Discord](https://discord.com/invite/j9DNFNw) or [Twitter](https://twitter.com/frictionlessd8a) to say hi or ask any questions. See you there!
diff --git a/site/blog/2021-02-04-tableschema-to-template/README.md b/site/blog/2021-02-04-tableschema-to-template/README.md
new file mode 100644
index 000000000..70735da92
--- /dev/null
+++ b/site/blog/2021-02-04-tableschema-to-template/README.md
@@ -0,0 +1,18 @@
+---
+title: HuBMAP - Table Schema generating an Excel template
+date: 2021-02-04
+tags: ["case-studies"]
+category: case-studies
+image: /img/blog/HuBMAP-Retina-Logo-Color.png
+description: The Human BioMolecular Atlas Program has developed tableschema-to-template, which takes a Frictionless Table Schema as input and returns an Excel template with embedded documentation and basic validations.
+author: Chuck McCallum
+---
+HuBMAP ([Human BioMolecular Atlas Program](https://portal.hubmapconsortium.org/)) is creating an open, global atlas of the human body at the cellular level. To do this, we’re incorporating data from dozens of different assay types, and as many institutions. Each assay type has its own metadata requirements, and Frictionless Table Schemas are an important part of our validation framework, to ensure that the metadata supplied by the labs is good.
+
+That system has worked well, as far as it goes, but when there are errors, it’s a pain for the labs to read the error message, find the original TSV, scroll to the appropriate row and column, re-enter, re-save, re-upload… and hopefully not repeat! To simplify that process, we’ve made [tableschema-to-template](https://pypi.org/project/tableschema-to-template/#description): it takes a Table Schema as input, and returns an Excel template with embedded documentation and some basic validations.
+
+`pip install tableschema-to-template`
+
+`ts2xl.py schema.yaml new-template.xlsx`
+
+It can be used either as a command-line tool, or as a Python library. Right now the generated Excel files offer pull-downs for enum constraints, and also check that floats, integers, and booleans are in the correct format, and that numbers are in bounds. Adding support for regex pattern constraints is a high priority for us… What features are important to you? Issues and PRs are welcome at the [GitHub repo](https://github.com/hubmapconsortium/tableschema-to-template).
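The input is a standard Frictionless Table Schema. As a purely illustrative sketch (the field names and enum values below are made up for this example, not HuBMAP's real metadata schema), a `schema.yaml` of this shape is the kind of file the `ts2xl.py` command above expects:

```yaml
# Illustrative Table Schema; see https://specs.frictionlessdata.io/table-schema/
fields:
  - name: assay_type
    description: Rendered as a pull-down in the generated Excel template
    type: string
    constraints:
      enum: [RNA-seq, ATAC-seq, CODEX]
  - name: cell_count
    description: Checked as an integer, and bounds-checked
    type: integer
    constraints:
      minimum: 0
```

The `enum` and type constraints are presumably what drive the pull-downs and format checks described above.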
diff --git a/site/blog/2021-02-26-halfway-ODI/README.md b/site/blog/2021-02-26-halfway-ODI/README.md
new file mode 100644
index 000000000..703479244
--- /dev/null
+++ b/site/blog/2021-02-26-halfway-ODI/README.md
@@ -0,0 +1,29 @@
+---
+title: How we are improving the quality and interoperability of Frictionless Data
+date: 2021-02-26
+tags: ['news']
+category: news
+image: /img/blog/odi.jpg
+description: We are halfway through the process of reviewing our documentation and adding new features to Frictionless Data, and wanted to give a status update showing how this work is improving the overall Frictionless experience.
+author: Sara Petti
+---
+Originally published: https://blog.okfn.org/2021/02/25/how-we-are-improving-the-quality-and-interoperability-of-frictionless-data/
+
+As we [announced in January](https://frictionlessdata.io/blog/2021/01/13/partnering-with-odi/#so-what-will-be-changing), the [Open Knowledge Foundation](http://okfn.org/) has been awarded funds from [the Open Data Institute](https://theodi.org/) to improve the quality and interoperability of [Frictionless Data](https://frictionlessdata.io). We are halfway through the process of reviewing our documentation and adding new features to Frictionless Data, and wanted to give a status update showing how this work is improving the overall Frictionless experience.
+
+We have already held four feedback sessions and have been delighted to meet 16 users from very diverse backgrounds and with different levels of expertise in using Frictionless Data, some of whom we already knew and some we did not. In spite of the variety of users, it was very interesting to see a widespread consensus on the way the documentation can be improved. You can have a look at a few of the community PRs [here](https://github.com/frictionlessdata/frictionless-py/pull/708) and [here](https://github.com/frictionlessdata/frictionless-py/pull/637).
+
+We are very grateful to all the Frictionless Data users who took part in our sessions - they helped us see all of our guides with fresh eyes. It was very important for us to do this review together with the Frictionless Data community because they are (together with those to come) the ones who will benefit from it, and so are the best placed to flag issues and propose changes.
+
+Every comment is being carefully reviewed at the moment and the new documentation will soon be released.
+
+## What are the next steps?
+
+* We are going to have 8 to 12 more users giving us feedback in the coming month.
+* We are also adding a FAQ section based on the questions we got from our users in the past.
+
+If you have any feedback and/or improvement suggestions, please let us know on our [Discord Channel](https://discordapp.com/invite/Sewv6av) or on [Twitter](https://twitter.com/frictionlessd8a).
+
+## More about Frictionless Data
+
+Frictionless Data is a set of specifications for data and metadata interoperability, accompanied by a collection of software libraries that implement these specifications, and a range of best practices for data management. The project is funded by the Sloan Foundation.
diff --git a/site/blog/2021-03-01-february-virtual-hangout/README.md b/site/blog/2021-03-01-february-virtual-hangout/README.md
new file mode 100644
index 000000000..662e25f6d
--- /dev/null
+++ b/site/blog/2021-03-01-february-virtual-hangout/README.md
@@ -0,0 +1,38 @@
+---
+title: Frictionless Data February 2021 Virtual Hangout
+date: 2021-03-01
+tags: ['events', 'community-hangout']
+category: events
+image: /img/blog/february.png
+description: On this February Community Call we had a top-notch code demonstration of the new frictionless.py framework by our Frictionless Data senior developer Evgeny Karev.
+author: Sara Petti
+---
+
+## A recap from our February community call
+
+On this February Community Call we had a top-notch code demonstration of the new frictionless.py framework by our Frictionless Data senior developer Evgeny Karev. We had been very much looking forward to presenting the new framework to you all and we were very pleased that so many of you joined us. If you would like to know more about it, you can explore the new Frictionless Python framework through the [documentation portal](https://framework.frictionlessdata.io/) or on [GitHub](https://github.com/frictionlessdata/frictionless-py).
+
+If you couldn’t make it to the call, or you are just curious and would like to go over the presentation again, here it is:
+
+
+
+## Other agenda items from our hangout
+
+[Open Data Day](https://opendataday.org/) is fast approaching with over 200 events organised online on March 6th. Together with the [Frictionless Data Fellows](https://fellows.frictionlessdata.io/) we will be celebrating open research data. Join us online from 3pm UTC. [RSVP here](https://us02web.zoom.us/meeting/register/tZUvdeuspjMoGtK-rR8wV4IrnfEW_5-KdLkG) for the link to join this virtual event. This event is open to everyone.
+
+## Join us next month!
+
+Our next meeting will be on 25th March. We will hear about Hackathons to facilitate the creation of web tools to create field-specific FAIR archive files from Oleg Lavrovsky and Giuseppe Peronato.
+
+You can sign up [here](https://docs.google.com/forms/d/e/1FAIpQLSeuNCopxXauMkrWvF6VHqOyHMcy54SfNDOseVXfWRQZWkvqjQ/viewform?usp=sf_link).
+
+Do you want to share something with the community? Let us know when you sign up!
+
+## Call recording:
+
+On a final note, here is the recording of the full call:
+
+
+
+As usual, you can join us on [Discord](https://discord.com/invite/j9DNFNw) or [Twitter](https://twitter.com/frictionlessd8a) to say hi or ask any questions. See you there!
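To give a flavour of what the demo covered: the framework's `validate` function checks tabular data against a schema and reports each error with its row and field position. Below is a stdlib-only toy sketch of that idea (an illustration of the concept only, not the real frictionless API — see the documentation portal linked above for the actual functions; the schema and data here are made up):

```python
import csv
import io

# Hypothetical schema: map each column name to the Python type its values
# should parse as. (frictionless uses rich Table Schema descriptors instead.)
SCHEMA = {"id": int, "name": str, "age": int}

def toy_validate(csv_text, schema):
    """Return (row, field, message) tuples for cells that fail type parsing."""
    errors = []
    reader = csv.DictReader(io.StringIO(csv_text))
    for row_pos, row in enumerate(reader, start=2):  # row 1 is the header
        for field, expected in schema.items():
            try:
                expected(row.get(field, ""))
            except ValueError:
                errors.append((row_pos, field, f"expected {expected.__name__}"))
    return errors

csv_text = "id,name,age\n1,alice,34\n2,bob,not-a-number\n"
print(toy_validate(csv_text, SCHEMA))  # → [(3, 'age', 'expected int')]
```

The real framework reports far more error types (structural problems, constraint violations, encoding issues) in a structured Report object, but the row/field-position idea is the same.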
diff --git a/site/blog/2021-03-05-frictionless-data-for-wheat/README.md b/site/blog/2021-03-05-frictionless-data-for-wheat/README.md
new file mode 100644
index 000000000..adcc5f135
--- /dev/null
+++ b/site/blog/2021-03-05-frictionless-data-for-wheat/README.md
@@ -0,0 +1,64 @@
+---
+title: Frictionless Data for Wheat
+date: 2021-03-05
+tags: ["tool-fund"]
+category: grantee-profiles
+image: /img/blog/wheat.png
+description: This blog is part of a series showcasing projects developed during the 2020-2021 Tool Fund. The Tool Fund provided five mini-grants of $5,000 to support individuals or organisations in developing an open tool for reproducible research.
+author: Simon Tyrrell and Xingdong Bian
+---
+
+*This blog is part of a series showcasing projects developed during the 2020-2021 Tool Fund. The Tool Fund provided five mini-grants of $5,000 to support individuals or organisations in developing an open tool for reproducible research built using the Frictionless Data specifications and software. This Fund is part of the Frictionless Data for Reproducible Research project, which is funded by the Sloan Foundation. This project applies our work in Frictionless Data to data-driven research disciplines, in order to facilitate reproducible data workflows in research contexts.*
+
+We are Simon Tyrrell and Xingdong Bian, both research software engineers in Robert Davey’s Data Infrastructure and Algorithms group at the Earlham Institute. We built the [Grassroots Infrastructure project](https://grassroots.tools/), which aims to create an easily-deployable suite of computing middleware tools to help users and developers gain access to scientific data. This is part of the [Designing Future Wheat (DFW)](https://designingfuturewheat.org.uk/) project. We have added Frictionless Data support to two separate parts of this project, and we’ll now describe each of these in turn.

+## Why add Frictionless to the Designing Future Wheat project?
+The first part of the Tool Fund project we added Frictionless Data to is the DFW data portal, which delivers large-scale wheat datasets tied to semantically marked-up metadata. These datasets are heterogeneous and vary from field trial information and sequencing data through to phenotyping images. Given the different needs of the users of this data, there is an increasing need to manage the data and its associated metadata in a way that makes dissemination as easy as possible. So the issue we had was how to standardize the methods for accessing this data/metadata, and how to label it using both well-defined ontologies and standards, so that we could deliver consistent data packages to users in an interoperable way. This is where Frictionless Data came in, giving data scientists a consistent, well-defined standard to use when building programs or workflows to access the data stored on the portal.
+
+The portal uses a combination of an [iRODS](https://irods.org/) repository to store the data and metadata, and [Apache](https://httpd.apache.org/) to host the files, with our in-house developed Apache module, mod_eirods_dav, linking the two together. It is this module that we added the Frictionless Data support to; further details are available in the [documentation](https://github.com/billyfish/eirods-dav#frictionless-data-support).
+
+## How does the new Frictionless implementation work?
+So what does it do? Well, it can generate a datapackage.json file automatically for any number of specified directories. These Data Packages can either be generated dynamically on each access or can optionally be written back to the iRODS repository and served like any other static file stored there.
Since every iRODS repository can use different metadata keys to store the information that the Data Packages require, the key names are completely configurable: you specify the iRODS metadata keys to use in the mod_eirods_dav configuration file, and you can do things like combining the values of multiple iRODS metadata keys with generic strings to produce the value that you want to use in the Data Package. Currently the Data Package’s name, title, description, authors, unique identifier and license details are all supported. For each entry within the Data Package’s resources array, the name, path, checksum and size attributes are also stored.
+
+As well as standard entries within the Data Package, we also added support for Tabular Data Packages. As with standard entries, all of the keys for the column names can be generated by setting the required directives within the module configuration file.
+
+![imgblog](https://user-images.githubusercontent.com/74717970/110128100-b5154a00-7dc6-11eb-8d8a-a915a49e6742.png)
+Figure 1: A Data Package generated automatically by mod_eirods_dav
+
+![imgblog2](https://user-images.githubusercontent.com/74717970/110128509-25bc6680-7dc7-11eb-8c2e-ff966169f9c5.png)
+Figure 2: A Tabular Data Package generated automatically by mod_eirods_dav
+
+## Adding CKAN support
+The second of the tools that we have implemented Frictionless Data support for is the DFW CKAN website. Primarily we use this to store publications from the project output. We currently have over 300 entries in there, and since the collection is getting larger and larger, we needed a more manageable way of achieving better data integration, especially when other systems are used across the project by our collaborators.
+
+So we built a simple Python Django webapp to do this:
+
+![imgblog3](https://user-images.githubusercontent.com/74717970/110128662-58fef580-7dc7-11eb-88c9-46e8e36b4def.png)
+
+The webapp queries the REST API provided by CKAN to get the datasets’ metadata as JSON, and then uses the [Frictionless CKAN Mapper](https://github.com/frictionlessdata/frictionless-ckan-mapper) to convert that JSON into a datapackage.json that conforms to the Frictionless Data standard. If any of the resources under a dataset are CSV files, their headings are extracted as a [tabular data package schema](https://specs.frictionlessdata.io/table-schema/) and integrated into the datapackage.json file itself. As well as providing the datapackage.json file as a download through the Django web app, it is also possible to push the datapackage.json back to CKAN as a resource file on the page. This requires a CKAN user key with the relevant permissions.
+
+![imgblog4](https://user-images.githubusercontent.com/74717970/110128881-94012900-7dc7-11eb-9833-e46f351477be.png)
+
+## How can you try this tool?
+The tool can be used by accessing its REST interface:
+* `/convert?q={ckan-dataset-id}` - convert the CKAN dataset JSON to datapackage JSON, e.g. /convert?q=0c03fa08-2142-426b-b1ca-fa852f909aa6
+* `/convert_resources?q={ckan-dataset-id}` - convert the CKAN dataset JSON to datapackage JSON with resources; if any of the resource files are CSV files, the tabular data package will also be converted, e.g. /convert_resources?q=grassroots-frictionless-data-test
+* `/convert_push?q={ckan-dataset-id}&key={ckan-user-key}` - push the generated datapackage.json to the CKAN entry.
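Calling these endpoints from a script is straightforward; a small stdlib-only sketch that assembles the query URLs (the base URL here is a hypothetical local deployment, not a real service):

```python
from urllib.parse import urlencode

# Hypothetical deployment -- substitute the address of your own instance
BASE = "http://localhost:8000"

def convert_url(dataset_id, resources=False, push_key=None):
    """Build a query URL for the convert / convert_resources / convert_push endpoints."""
    if push_key is not None:
        return f"{BASE}/convert_push?" + urlencode({"q": dataset_id, "key": push_key})
    endpoint = "convert_resources" if resources else "convert"
    return f"{BASE}/{endpoint}?" + urlencode({"q": dataset_id})

print(convert_url("grassroots-frictionless-data-test", resources=True))
# → http://localhost:8000/convert_resources?q=grassroots-frictionless-data-test
```

Fetching one of these URLs returns the generated datapackage.json, which can then be fed into any Frictionless-aware tooling.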
+An example REST query page:
+
+![imgblog5](https://user-images.githubusercontent.com/74717970/110129172-efcbb200-7dc7-11eb-9230-a70cbbd6d9cf.png)
+
+It is possible to have your own local deployment of the tool too, by downloading the web app from its GitHub repository, installing the requirements, and running the server with
+
+`python manage.py runserver 8000`
+
+Our collaborators can utilise the datapackage.json and integrate the CKAN entries into their own tools or projects with ease, as it conforms to the Frictionless Data standard.
+
+## Next Steps for Frictionlessly Designing Future Wheat
+It has been a hugely positive step to implement support for Frictionless Data Packages, and we’ve already used these packages ourselves after two of our servers decided to fall over within three days of each other! Our future plans are to add support for further metadata keys within the datapackage.json files and to expose more datasets as Frictionless Data Packages. For the CKAN side, there are a few improvements that can be made in future: firstly, make the base CKAN URL configurable in a config file, so the tool can be used with any CKAN website; secondly, create a Dockerfile for the whole Django app, so it is more portable and easier to deploy. You can keep track of the project at the following links:
+
+* The Designing Future Wheat Data Portal: https://opendata.earlham.ac.uk/wheat/under_license/toronto/
+* The Designing Future Wheat publications portal: https://ckan.grassroots.tools
+* mod_eirods_dav: https://github.com/billyfish/eirods-dav
+* CKAN Frictionless Data web application: https://github.com/TGAC/ckan-frictionlessdata
+
diff --git a/site/blog/2021-03-10-fellows-reproducing/README.md b/site/blog/2021-03-10-fellows-reproducing/README.md
new file mode 100644
index 000000000..b82e15c52
--- /dev/null
+++ b/site/blog/2021-03-10-fellows-reproducing/README.md
@@ -0,0 +1,60 @@
+---
+title: Is reproducing someone else’s research data a Frictionless experience?
+date: 2021-03-10
+tags: ['fellows']
+category:
+image: /img/standards/data-package.png
+description: As a test of research reproducibility measures, we tasked the Frictionless Fellows with reproducing each other’s data packages...
+author: Lilly Winfree
+---
+
+The “reproducibility crisis” is a hot topic in scientific research these days. Can you reproduce published data from another laboratory? Can you follow the published scientific methods and get the same result? Unfortunately, the answer to these questions is often no.
+
+One of the goals of Frictionless Data is to help researchers make their work more reproducible. To achieve this, we focus on making data more understandable (make sure to document your metadata!), of higher quality (via validation checks), and easier to reuse (by standardization and packaging).
+
+As a test of these reproducibility measures, we tasked the Frictionless Fellows with reproducing each other’s data packages! This was a great learning experience for the Fellows and revealed some important lessons about how to make their data more (re)usable. Click on the blog links below to read more about their experiences!
+
+***
+
+### [Reproduciendo un viaje a Mo'rea by Sele Yang](https://fellows.frictionlessdata.io/blog/sele-partner-blog/) (Cohort 1)
+“My journey through Lily’s data took me to Mo’orea, French Polynesia, from where she, using different tools, collected a total of 175 interviews with both residents and researchers from the region....To reproduce Lily’s data, I initially used the DataPackage Creator tool to load her raw information and start reviewing the data type specifications created automatically by the tool.”
+
+***
+
+### [Packaging Ouso’s Data by Lily Zhao](https://fellows.frictionlessdata.io/blog/lily-partner-blog/) (Cohort 1)
+ “This week I had the opportunity to work with my colleague's data. He created a Datapackage which I replicated.
In doing so, I learned a lot about the Datapackage web interface….Using these data Ouso and his co-authors evaluate the ability of high-resolution melting analysis to identify illegally targeted wildlife species.”
+
+***
+
+### [Data Barter: Real-life data interactions by Ouso Daniel](https://fellows.frictionlessdata.io/blog/ouso-partner-blog/) (Cohort 1)
+ “Exchanging data packages and working backwards from them is an important test in the illustration of the overall goal of the Frictionless Data initiative. Remember, FD seeks to facilitate and promote open and reproducible research, consequently promoting collaboration. By trying to reproduce Monica's work I was able to capture an error, which I highlighted for her attention, thus improving the work. Exactly how science is supposed to work!”
+
+***
+
+### [On README files, sharing data and interoperability by Anne Lee Steele](https://fellows.frictionlessdata.io/blog/anne-partner-blog/) (Cohort 2)
+ “One of the goals of the Frictionless Data Fellowship has been to help us make our research more interoperable, which is another way of saying: something that other researchers can use, even if they have entirely different systems or tools with which they approach the same topic….What if researchers of all types wrote prototypical "data packages" about their research, that gave greater context to their work, or explained its wider relevance? In my fields, many researchers tend to find this in 'the art of the footnote', but this type of informal knowledge or context is not operationalized in any real way.”
+
+***
+
+### [Using Frictionless tools to help you understand open data by Dani Alcalá-López](https://fellows.frictionlessdata.io/blog/dani-partner-blog/) (Cohort 2)
+“A few weeks ago, the fellows did an interesting exercise: We would try to replicate each other's DataPackages in pairs. We had spent some time before creating and validating DataPackages with our own data.
Now it was time to see how it would be to work with someone else's. This experience was intended to be a way for us to check what it was like to be on the other side.”
+
+***
+
+### [Validating someone else's data! By Katerina Drakoulaki](https://fellows.frictionlessdata.io/blog/katerina-partner-blog/) (Cohort 2)
+“The first thing I did was to go through the README file on my fellow's repository. Since the repository was in a completely different field, I really had to read through everything very carefully, and think about the terms they used….Validating the data (to the extent that it was possible after all) was easy using the goodtables tools.”
+
+***
+
+### [Reproducing Jacqueline's Datapackage and Revalidating her Data! By Sam Wilairat](https://fellows.frictionlessdata.io/blog/sam-reproduce-blog/) (Cohort 2)
+ “Using Jacqueline's GitHub repository, Frictionless Data Package Creator, and Goodtables, I feel that I can confidently reuse her dataset for my own research purposes. While there was one piece of metadata missing from her dataset, her publicly published datapackage .JSON file on her repository helped me to quickly figure out how to interpret the unlabeled column. I also feel confident that the data is valid because after doing a visual scan of the dataset, I used the Goodtables tool to double check that the data was valid!”
+
+***
+
+### [Reproducing a data package by Jacqueline Maasch](https://fellows.frictionlessdata.io/blog/jacqueline-pkg-reprod-blog/) (Cohort 2)
+ “Is it easy to reproduce someone else's data package? Sometimes, but not always. Tools that automate data management can standardize the process, making reproducibility simpler to achieve.
However, accurately anticipating a tool's expected behavior is essential, especially when mixing technologies.”
+
+***
+
+### [Validating data from Daniel Alcalá-López by Evelyn Night](https://fellows.frictionlessdata.io/blog/evelyn-partner-blog/) (Cohort 2)
+ “In a fast paced research world where there’s an approximate increase of 8-9% in scientific publications every year, an overload of information is usually fed to the outside world. Unfortunately for us, most of this information is often wasted due to the reproducibility crisis marred by data or code that’s often locked away. We explored the question, ‘how reproducible is your data?’ by exchanging personal data and validating them according to the instructions that are outlined in the fellows’ recent goodtables blogs.”
diff --git a/site/blog/2021-03-29-february-virtual-hangout/README.md b/site/blog/2021-03-29-february-virtual-hangout/README.md
new file mode 100644
index 000000000..8ad7137ad
--- /dev/null
+++ b/site/blog/2021-03-29-february-virtual-hangout/README.md
@@ -0,0 +1,37 @@
+---
+title: Frictionless Data March 2021 Virtual Hangout
+date: 2021-03-29
+tags: ['events', 'community-hangout']
+category: events
+image: /img/blog/march.png
+description: On our last Frictionless Data community call on March 25th, we dealt with a very current topic thanks to Thorben Westerhuys, who presented his project on Frictionless Vaccination data.
+author: Sara Petti
+---
+
+## A recap from our March community call
+On our last Frictionless Data community call on March 25th, we dealt with a very current topic thanks to Thorben Westerhuys, who presented his project on Frictionless Vaccination data.
+
+To compensate for the lack of time perspective in the government data, Thorben has developed a spatiotemporal tracker for state-level covid vaccination data, which takes the data provided by the government, reformats it and makes it available to everyone in a structured, more machine-readable form.
+ +To discover more about this great project, have a look at it on [GitHub](https://github.com/n0rdlicht/rki-vaccination-scraper). If you would like to dive deeper and discover all the project’s applications, you can watch Thorben’s presentation here: + + + +## Other agenda items from our hangout + +[csv,conf,v6](https://csvconf.com/) is happening on May 4-5. Registrations are open. Don’t forget to book your place! + +## Join us next month! + +Our next meeting will be on April 29th. We will hear a presentation from the Frictionless Fellows. You can sign up [here](https://docs.google.com/forms/d/e/1FAIpQLSeuNCopxXauMkrWvF6VHqOyHMcy54SfNDOseVXfWRQZWkvqjQ/viewform?usp=sf_link). + +Do you want to share something with the community? Let us know when you sign up! + +## Call recording: + +On a final note, here is the recording of the full call: + + +
+ +As usual, you can join us on [Discord](https://discord.com/invite/j9DNFNw) or [Twitter](https://twitter.com/frictionlessd8a) to say hi or ask any questions. See you there! diff --git a/site/blog/2021-04-13-data-package-for-intermine/README.md b/site/blog/2021-04-13-data-package-for-intermine/README.md new file mode 100644 index 000000000..e5f2e3cd7 --- /dev/null +++ b/site/blog/2021-04-13-data-package-for-intermine/README.md @@ -0,0 +1,28 @@ +--- +title: Frictionless Data Package for InterMine +date: 2021-04-13 +tags: ["tool-fund"] +category: grantee-profiles +image: /img/blog/intermine.png +description: This blog is part of a series showcasing projects developed during the 2020 Tool Fund. +author: Nikhil Vats +--- + +*This blog is part of a series showcasing projects developed during the 2020-2021 Tool Fund. The Tool Fund provided five mini-grants of $5,000 to support individuals or organisations in developing an open tool for reproducible research built using the Frictionless Data specifications and software. This Fund is part of the Frictionless Data for Reproducible Research project, which is funded by the Sloan Foundation. This project applies our work in Frictionless Data to data-driven research disciplines, in order to facilitate reproducible data workflows in research contexts.* + +My name is Nikhil and I am a pre-final year student pursuing M.Sc. Economics and B.E. Computer Science from BITS Pilani, India. For my Frictionless Data Tool Fund, I worked with [InterMine](http://intermine.org) which is an open-source biological data warehouse and offers a webapp to query and download that data in multiple formats like CSV, TSV, JSON, XML, etc. However, it is sometimes difficult for new users to understand the InterMine data since it is complex and structured. Also, for developers to contribute to InterMine in a more effective way, they need to understand the data and its structure at the core of InterMine, and this can be difficult for new developers. 
+
+To help resolve these user needs, my solution was to design a data package for InterMine and give users the option to download the data package along with the results of any query. This would help them understand the structure of the results, such as classes and attributes, by describing all the attributes and summarizing other important information such as data sources, primary key(s), etc. Also, other fields, like the app version, a link to the query and a timestamp, can help them trace any potential errors. The new feature to export data packages is available in both the old version of the InterMine webapps and the new version (BlueGenes). Users can use either app to build a query and then go to the results page, where they can click on the export button, which provides the option to export a Frictionless Data Package (see the images below for detailed steps).
+
+Within InterMine, there are over 30 mines that provide biological data for organisms like flies, humans, rats, etc. For this Frictionless Tool Fund, the target audience is the InterMine community, whether it’s researchers in institutes around the world or Google Summer of Code and Outreachy applicants who can understand the process of querying and the structure of data to kickstart their contribution.
+
+While this Tool Fund is over, a future idea to improve this work is adding class and attribute descriptions to the data package using the configuration files in the InterMine codebase. The class description file already exists but we need to add the attribute descriptions. Another possible future expansion would be integrating this feature with one of the Frictionless tools, like Goodtables. For more details, see the images below and read the documentation for the tool [here](https://github.com/intermine/im-docs/blob/master/versioned_docs/version-5.0.0/webapp/frictionless/index.md).
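As a rough illustration of the idea, a descriptor exported alongside query results might look like the sketch below. All field names, keys and values here are hypothetical, chosen for illustration only; InterMine's real export format differs:

```python
import json

# Illustrative datapackage.json for a query result export.
# Names and values are invented, not taken from InterMine's actual output.
datapackage = {
    "name": "intermine-query-results",
    "created": "2021-04-13T12:00:00Z",           # timestamp helps trace errors
    "homepage": "https://example.org/query/42",  # hypothetical link to the query
    "resources": [
        {
            "path": "results.tsv",
            "schema": {
                "fields": [
                    {"name": "Gene.symbol", "type": "string"},
                    {"name": "Gene.length", "type": "integer"},
                ],
                "primaryKey": ["Gene.symbol"],
            },
        }
    ],
}

# Serialised next to the results so consumers can interpret the columns
print(json.dumps(datapackage, indent=2))
```

The point of the sketch is that each result column is paired with its type and provenance, so a consumer can interpret the table without guessing.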
+ +Screenshot 1 : Step 1 to export data package +![screenshot1](https://user-images.githubusercontent.com/74717970/114539496-d6364980-9c54-11eb-8d17-b4eb35f483b4.png) + +Screenshot 2 : Step 2 to export data package +![screenshot2](https://user-images.githubusercontent.com/74717970/114539567-e9e1b000-9c54-11eb-933d-4545f79a3b65.png) + +Screenshot 3 : A sample data package +![screenshot3](https://user-images.githubusercontent.com/74717970/114539626-f49c4500-9c54-11eb-8452-fdf6bf810686.png) diff --git a/site/blog/2021-04-14-new-data-documentation-portal/README.md b/site/blog/2021-04-14-new-data-documentation-portal/README.md new file mode 100644 index 000000000..417bb0337 --- /dev/null +++ b/site/blog/2021-04-14-new-data-documentation-portal/README.md @@ -0,0 +1,38 @@ +--- +title: Unveiling the new Frictionless Data documentation portal +date: 2021-04-14 +tags: ['news'] +category: news +image: /img/blog/framework.png +description: We invite you all to read our new and improved documentation portal that we created for Frictionless Framework +author: Sara Petti +--- +Originally published: https://blog.okfn.org/2021/04/14/unveiling-the-new-frictionless-data-documentation-portal/ + +Have you used Frictionless Data documentation in the past and been confused or wanted more examples? Are you a brand new Frictionless Data user looking to get started learning? + +We invite you all to read our new and improved [documentation portal](https://framework.frictionlessdata.io/)! Thanks to a [fund that the Open Knowledge Foundation was awarded](https://frictionlessdata.io/blog/2021/01/13/partnering-with-odi/) from the [Open Data Institute](https://theodi.org/), we have completely reworked the guides of our [Frictionless Data Framework website](https://framework.frictionlessdata.io/) according to the suggestions from a cohort of users gathered in several feedback sessions throughout the months of February and March. 
+
+We cannot stress enough how precious those feedback sessions have been to us. They were an excellent opportunity to connect with our users and reflect together with them on how to make all our guides more useful for current and future users. The enthusiasm and engagement that the community showed for the process was great to see and reminded us that the link with the community should be at the core of open source projects.
+
+We were amazed by the amount of extremely useful input that we got. While we are still digesting some of the suggestions and working out how best to implement them, we have made many changes to make the documentation a smoother, Frictionless experience.
+
+## So what’s new?
+
+A common theme from the feedback sessions was that it was sometimes difficult for novice users to understand the whole potential of the Frictionless specifications. To help make this clearer, we added a more detailed explanation, user examples and user stories to our [Introduction](https://framework.frictionlessdata.io/docs/guides/introduction). We also added some extra installation tips and a troubleshooting section to our [Quick Start guide](https://framework.frictionlessdata.io/docs/guides/quick-start).
+
+The users also suggested several code changes, like more realistic code examples, better explanations of functions, and the ability to run code examples in both the command line and Python. This last suggestion was prompted because most of the guides use a mix of command-line and Python syntax, which was confusing to our users. We have clarified that by adding a switch in the code snippets that allows users to work with pure Python syntax or the pure command line (when possible), as you can see [here](https://framework.frictionlessdata.io/docs/guides/basic-examples). We also put together an [FAQ section](https://framework.frictionlessdata.io/docs/faq/) based on questions that were often asked on our [Discord chat](https://discord.com/invite/Sewv6av).
If you have suggestions for other common questions to add, let us know!
+
+The documentation revamping process also included the publication of new tutorials. We worked on two new Frictionless tutorials, which are published under the Notebooks link in the navigation menu. While working on those, we were inspired by the feedback sessions and realised that it made sense to give our community the possibility to contribute to the project with some real-life examples of Frictionless Data use. The user selection process has started and we hope to get the new tutorials online by the end of the month, so stay tuned!
+
+## What’s next?
+
+Our commitment to continually improving our documentation does not end with this project! Do you have suggestions for changes you would like to see in our documentation? Please reach out to us or open a [pull request](https://github.com/frictionlessdata/frictionless-py/pulls). Everyone is welcome to contribute! Learn how to do it [here](https://framework.frictionlessdata.io/docs/development/contributing).
+
+## Thanks, thanks, thanks!
+
+Once again, we are very grateful to the Open Data Institute for giving us the chance to focus on this documentation in order to improve it. We cannot thank enough all our users who took part in the feedback sessions; your contributions were precious.
+
+## More about Frictionless Data
+
+Frictionless Data is a set of specifications for data and metadata interoperability, accompanied by a collection of software libraries that implement these specifications, and a range of best practices for data management. The project is funded by the Sloan Foundation.
diff --git a/site/blog/2021-05-03-april-virtual-hangout/README.md b/site/blog/2021-05-03-april-virtual-hangout/README.md new file mode 100644 index 000000000..fc8d5a774 --- /dev/null +++ b/site/blog/2021-05-03-april-virtual-hangout/README.md @@ -0,0 +1,48 @@ +--- +title: Frictionless Data April 2021 Virtual Hangout +date: 2021-05-03 +tags: ['events', 'community-hangout'] +category: events +image: /img/blog/april.png +description: On our last Frictionless Data community call on April 29 we had an interactive session with our great Frictionless Data Fellows. +author: Sara Petti +--- + +On our last Frictionless Data community call on April 29th we had an interactive session with our great Frictionless Data Fellows: Daniel Alcalá López, Kate Bowie, Katerina Drakoulaki, Anne Lee, Jacqueline Maasch, Evelyn Night and Samantha Wilairat. + +The Fellows are early career researchers recruited to become champions of the Frictionless Data tools and approaches in their field. During the nine months of their fellowship, which started in August 2020, the Fellows learned how to use Frictionless tools in their domains to improve reproducible research workflows, and how to advocate for open science. It was a real pleasure to work with this amazing cohort. Sadly the fellowship is coming to an end, but we are sure we will hear a lot from them in the future. + +You can learn more about them [here](https://fellows.frictionlessdata.io/), and read all the great blogs they wrote [here](https://fellows.frictionlessdata.io/blog/). + +If you would like to hear directly from the Fellows about their experience with Frictionless Data and what the fellowship meant for them, you can have a look at the presentation they made during the community call here below: + + + +## Other agenda items from our hangout +[csv,conf,v6](https://csvconf.com/) is happening on May 4-5. It is free and virtual - register [here](https://www.eventbrite.com/e/csvconfv6-tickets-144250211265). 
There are two Frictionless sessions: +* May 4th: Frictionless Data workshop led by the Reproducible Research fellows, don’t miss the opportunity to meet the Fellows again! +* May 5th: Frictionless Data for Wheat by Simon Tyrrell + +Full programme here: https://csvconf.com/speakers + +## News from the Community + +Oleg Lavrovsky presented instant APIs for small Frictionless Data-powered apps. [Here](https://scene.rip/) is an example app developed during the latest Swiss OpenGLAM hackathon. To know more about it, you can also check: + +* The [source code](https://github.com/we-art-o-nauts/the-scene-lives) which uses [DataFlows](https://github.com/datahq/dataflows) for the aggregation, and the [Pandas Data Package reader](https://github.com/rgieseke/pandas-datapackage-reader) as the basis for filtering. +* The [project page](https://hack.glam.opendata.ch/project/114) and slides which outline the motivation to collect and homogenize electronic art archives. +* An [earlier attempt](https://github.com/loleg/baumkataster-data) which involves a city tree catalogue. The team is also building on this approach in several projects at [cividi](http://github.com/cividi). + +## Join us next month! + +Our next meeting will be on May 27th. We will hear a presentation from Simon Tyrrell on his Tool Fund project - Frictionless Data for Wheat. You can sign up [here](https://docs.google.com/forms/d/e/1FAIpQLSeuNCopxXauMkrWvF6VHqOyHMcy54SfNDOseVXfWRQZWkvqjQ/viewform?usp=sf_link). + +Do you want to share something with the community? Let us know when you sign up! + +## Call recording: + +On a final note, here is the recording of the full call: + + + +As usual, you can join us on [Discord](https://discord.com/invite/j9DNFNw) or [Twitter](https://twitter.com/frictionlessd8a) to say hi or ask any questions. See you there! 
diff --git a/site/blog/2021-06-01-may-virtual-hangout/README.md b/site/blog/2021-06-01-may-virtual-hangout/README.md
new file mode 100644
index 000000000..18420a0b3
--- /dev/null
+++ b/site/blog/2021-06-01-may-virtual-hangout/README.md
@@ -0,0 +1,33 @@
+---
+title: Frictionless Data May 2021 Virtual Hangout
+date: 2021-06-01
+tags: ['events', 'community-hangout']
+category: events
+image: /img/blog/may.png
+description: On our last Frictionless Data community call on May 27 we had Simon Tyrrell and Xingdong Bian from the Earlham Institute giving a presentation on Frictionless Data for Wheat.
+author: Sara Petti
+---
+On our last Frictionless Data community call on May 27th we had Simon Tyrrell and Xingdong Bian from the Earlham Institute giving a presentation on Frictionless Data for Wheat. The project was developed during the Frictionless Tool Fund 2020-2021.
+
+Simon and Xingdong are part of Designing Future Wheat, a research group studying how to increase the amount of wheat produced in a field in order to meet global demand by 2050. To run the project, they collect a great amount of data in large-scale datasets, which are shared with many different users. Frictionless Data is used to make that data available, usable and interoperable for everyone.
+
+You can learn more about the Designing Future Wheat project [here](https://frictionlessdata.io/blog/2021/03/05/frictionless-data-for-wheat/). If you would like to dive deeper and discover all about the Frictionless implementation, you can watch Simon’s and Xingdong’s presentation here:
+
+
+
+# Other agenda items from our hangout
+We are super happy to share with you [Frictionless Repository - a GitHub Action for the continuous data validation of your repo](https://repository.frictionlessdata.io/).
+We are actively looking for feedback, so please let us know what you think.
+# Join us next month!
+
+Our next meeting will be on June 24th.
We will hear a presentation from
+Nikhil Vats on Frictionless Data Package for InterMine. You can sign up [here](https://docs.google.com/forms/d/e/1FAIpQLSeuNCopxXauMkrWvF6VHqOyHMcy54SfNDOseVXfWRQZWkvqjQ/viewform?usp=sf_link).
+
+Do you want to share something with the community? Let us know when you sign up!
+# Call recording:
+On a final note, here is the recording of the full call:
+
+
+ +As usual, you can join us on [Discord](https://discord.com/invite/j9DNFNw) or [Twitter](https://twitter.com/frictionlessd8a) to say hi or ask any questions. See you there! diff --git a/site/blog/2021-06-16-new-changes-to-the-website/README.md b/site/blog/2021-06-16-new-changes-to-the-website/README.md new file mode 100644 index 000000000..0630be4c7 --- /dev/null +++ b/site/blog/2021-06-16-new-changes-to-the-website/README.md @@ -0,0 +1,22 @@ +--- +title: Announcing New Changes to Our Website +date: 2021-06-16 +tags: ["news"] +category: news +image: /img/blog/fd-home.png +description: We have finished making some new changes that we are very excited to tell you about! +author: Sara Petti +--- +Have you noticed some changes to our website? Building upon last year’s [website redesign](https://frictionlessdata.io/blog/2020/05/01/announcing-new-website/), we have finished making some new changes that we are very excited to tell you about! When we started reviewing our documentation for the [Frictionless Python Framework](http://framework.frictionlessdata.io/) with the [support of the ODI](https://frictionlessdata.io/blog/2021/01/13/partnering-with-odi/#so-what-will-be-changing) back in January, we quickly realised that our main website could benefit from some revamping as well, in order to make it more user-friendly and easier to navigate. + +We needed to clarify the relationship between our main project website and the website of all our Frictionless standards, software, and specifications, which all had different layouts and visual styles. The harmonisation process is still ongoing, but we are already very happy with the fact that the new website offers a comprehensive view of all our tools. + +It was important for us that people visiting our website for the very first time could quickly understand what Frictionless Data is and how it can be useful to them. 
We did that by reorganising the homepage and the navigation, which had been a bit confusing for some users. We also updated most of the text to better reflect the current status of the project, but also to clearly state what Frictionless Data is. Users should now be able to understand at a glance that Frictionless is composed of two main parts, [software](https://frictionlessdata.io/software/) and [standards](https://frictionlessdata.io/standards/), which make it more accessible for a broad range of people working with data.
+
+*Screenshot of the new website, 2021-06-16.*
+
+Users will also easily find examples of [projects and collaborations that adopted Frictionless](https://frictionlessdata.io/adoption/), which can be very useful to better understand the full potential of the Frictionless toolkit.
+
+Our goal with this new website is to give visitors an easier way to learn about Frictionless Data, encourage them to try it out and join our great community. The new architecture should reflect that, and should make it easier for people to understand that Frictionless Data is a progressive open-source framework for building data infrastructure, aiming at making it easier to work with data. Being an open-source project, we welcome and cherish everybody’s contribution. Speaking of which, we would love to hear your feedback! Let us know what you think about the new website, if you have any comments or if you see any further improvement we could make. We have created a [GitHub issue](https://github.com/frictionlessdata/website/issues/198) you can use to give us your thoughts.
+
+Thank you!
diff --git a/site/blog/2021-06-22-livemark/README.md b/site/blog/2021-06-22-livemark/README.md
new file mode 100644
index 000000000..8d4b78c82
--- /dev/null
+++ b/site/blog/2021-06-22-livemark/README.md
@@ -0,0 +1,27 @@
+---
+title: Welcome Livemark - the New Frictionless Data Tool
+date: 2021-06-22
+tags: ["news"]
+category: news
+image: /img/blog/livemark-page.png
+description: Introducing you to Livemark, the new Frictionless Data tool that allows you to publish data articles easily.
+author: Sara Petti
+---
+We are very excited to announce that a new tool has been added to the Frictionless Data toolkit: Livemark. What is that? Livemark is a great tool that allows you to publish data articles very easily, giving you the possibility to see your data live on a working website in the blink of an eye.
+
+## How does it work?
+
+Livemark is a Python library that generates a static page, extending Markdown with interactive charts, tables, scripts, and much much more. You can use the Frictionless framework as a `frictionless` variable to work with your tabular data in Livemark.
+
+Livemark offers a series of useful features, like automatically generating a table of contents and providing a scroll-to-top button when you scroll down your document. You can also customise the layout of your newly created webpage.
+
+## How can you get started?
+Livemark is very easy to use. We invite you to watch this great demo by developer Evgeny Karev:
+
+
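To give a rough feel for the kind of work Livemark automates, here is a conceptual sketch in plain Python of turning CSV data into a Markdown table for a generated page. This is deliberately not Livemark's actual API, just the underlying idea of rendering data into page content:

```python
import csv
import io

def csv_to_markdown_table(text: str) -> str:
    """Render CSV text as a Markdown table (conceptual sketch, not Livemark's API)."""
    rows = list(csv.reader(io.StringIO(text)))
    header, body = rows[0], rows[1:]
    lines = ["| " + " | ".join(header) + " |",
             "| " + " | ".join("---" for _ in header) + " |"]
    lines += ["| " + " | ".join(row) + " |" for row in body]
    return "\n".join(lines)

print(csv_to_markdown_table("id,language\n1,english\n2,french"))
```

Livemark layers charts, scripts and layout on top of this kind of transformation, so authors only write Markdown and point at their data.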
+ +You can also have a look at the [documentation on GitHub](https://frictionlessdata.github.io/livemark/). + +## What do you think? +If you create a site using Livemark, please let us know! Frictionless Data is an open source project, therefore we encourage you to give us feedback. Let us know your thoughts, suggestions, or issues by joining us in our community chat on [Discord]( https://discord.com/invite/Sewv6av) or by opening an issue in the [GitHub repo](https://github.com/frictionlessdata/livemark). diff --git a/site/blog/2021-06-23-frictionless-specs-european-commission/README.md b/site/blog/2021-06-23-frictionless-specs-european-commission/README.md new file mode 100644 index 000000000..8ed1693ec --- /dev/null +++ b/site/blog/2021-06-23-frictionless-specs-european-commission/README.md @@ -0,0 +1,29 @@ +--- +title: A Short Case Study Involving Table Schema Frictionless Specs at the European Union +date: 2021-06-28 +tags: ['table-schema', 'specifications', 'validator', 'tabular-data'] +category: news +image: /img/blog/interoperability-test-bed-eu-commission.png +description: The Frictionless specifications are helping with simplifying data validation for applications in production at the European Union. +author: Sébastien Lavoie +--- + +Do you remember [Costas Simatos](https://joinup.ec.europa.eu/user/73932)? He introduced the Frictionless Data community to the [Interoperability Test Bed](https://joinup.ec.europa.eu/collection/interoperability-test-bed-repository) (ITB), an online platform that can be used to test systems against technical specifications --- curious minds will find a recording of his presentation on the subject [available on YouTube](https://www.youtube.com/watch?v=pJFsJW96fuA). Amongst the tools it offers, there is a [CSV validator](https://joinup.ec.europa.eu/collection/interoperability-test-bed-repository/solution/csvvalidator) which relies on the [Table Schema specifications](https://specs.frictionlessdata.io/table-schema/). 
Those specifications filled a gap that the [RFC 4180](https://datatracker.ietf.org/doc/html/rfc4180) didn't address by having a structured way of defining the content of individual fields in terms of data types, formats and constraints, which is a clear benefit of the Frictionless specifications as reported back in 2020 [when a beta version of the CSV validator was launched](https://joinup.ec.europa.eu/collection/interoperability-test-bed-repository/solution/interoperability-test-bed/news/table-schema-validator). + +--- + +Frictionless specifications are flexible while allowing users to define unambiguously the expected content of a given field, therefore they were officially adopted to [realise the validator for the Kohesio pilot phase of 2014-2020](https://joinup.ec.europa.eu/collection/interoperability-test-bed-repository/solution/interoperability-test-bed/news/test-bed-support-kohesio-pilot), [Kohesio](https://kohesio.eu/) being the _"Project Information Portal for Cohesion Policy"_. The Table Schema specifications made it easy and convenient for the Interoperability Test Bed to establish constraints and describe the data to be validated in a concise way based on an initial set of [CSV syntax rules](https://joinup.ec.europa.eu/collection/semantic-interoperability-community-semic/solution/kohesio-validator/specification), converting written and mostly non-technical definitions to their Frictionless equivalent. Using simple JSON objects, Frictionless specifications allowed the ITB to enforce data validation in multiple ways as can be observed from the [schema used for the CSV validator](https://github.com/ISAITB/validator-resources-kohesio/blob/master/resources/schemas/schema.json). The following list of items calls attention to the core aspects of the Table Schema standard that were taken advantage of: + +* Dates can be defined with string formatting (e.g. 
`%d/%m/%Y` stands for `day/month/year`);
+* Constraints can indicate whether a column can contain empty values or not;
+* Constraints can also specify a valid range of values (e.g. `"minimum": 0.0` and `"maximum": 100.0`);
+* Constraints can specify an enumeration of valid values to choose from (e.g. `"enum" : ["2014-2020", "2021-2027"]`);
+* Constraints can be specified in custom ways, such as with [regular expressions](https://en.wikipedia.org/wiki/Regular_expression) for powerful string matching capabilities;
+* Data types can be enforced for any column;
+* Columns can be forced to adopt a specific name and a description can be provided for each one of them.
+
+Because these specifications can be expressed as portable text files, they became part of a multitude of tools to provide greater convenience to users, and the validation process has been [documented extensively](https://www.itb.ec.europa.eu/docs/guides/latest/validatingCSV/index.html). JSON code snippets from the documentation highlight the fact that this format conveys all the necessary information in a readable manner and lets users extend the original specifications as needed. In this particular instance, the CSV validator can be used as a [Docker image](https://hub.docker.com/repository/docker/isaitb/validator-kohesio), as part of a [command-line application](https://www.itb.ec.europa.eu/csv-offline/kohesio/validator.zip), inside a [web application](https://www.itb.ec.europa.eu/csv/kohesio/upload) and even as a [SOAP API](https://www.itb.ec.europa.eu/csv/soap/kohesio/validation?wsdl).
+
+Frictionless specifications were the missing piece of the puzzle that enabled the ITB to rely on a well-documented set of standards for their data validation needs. But there is more on the table (no pun intended): whether you need to manage files, tables or entire datasets, there are [Frictionless standards](/standards/) to cover you.
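To make that list concrete, here is a hand-rolled sketch of how such Table Schema constraints map onto row-level checks. The schema fragment is illustrative (field names and bounds are invented, not taken from the actual Kohesio schema), and real validation should of course go through the ITB validator or Frictionless tooling:

```python
from datetime import datetime

# Illustrative Table Schema fragment using the features listed above;
# field names and bounds are invented, not taken from the Kohesio schema.
schema = {
    "fields": [
        {"name": "Period", "type": "string",
         "constraints": {"required": True, "enum": ["2014-2020", "2021-2027"]}},
        {"name": "Start_date", "type": "date", "format": "%d/%m/%Y"},
        {"name": "Cofinancing_rate", "type": "number",
         "constraints": {"minimum": 0.0, "maximum": 100.0}},
    ]
}

def check_row(row, schema):
    """Return a list of constraint violations for one CSV row (dict of strings)."""
    errors = []
    for field in schema["fields"]:
        name = field["name"]
        value = row.get(name, "")
        cons = field.get("constraints", {})
        if cons.get("required") and value == "":
            errors.append(f"{name}: missing required value")
            continue
        if field["type"] == "date":
            try:
                datetime.strptime(value, field["format"])
            except ValueError:
                errors.append(f"{name}: not a {field['format']} date")
        elif field["type"] == "number":
            try:
                number = float(value)
            except ValueError:
                errors.append(f"{name}: not a number")
                continue
            if not cons.get("minimum", number) <= number <= cons.get("maximum", number):
                errors.append(f"{name}: out of range")
        if "enum" in cons and value not in cons["enum"]:
            errors.append(f"{name}: not one of {cons['enum']}")
    return errors

print(check_row({"Period": "2014-2020", "Start_date": "01/01/2015",
                 "Cofinancing_rate": "85.0"}, schema))  # → []
```

A row with a missing period, an ISO-formatted date or a rate of 150 would instead come back with one violation per broken constraint.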
As the growing [list of adopters and collaborations](/adoption/) demonstrates, there are many use cases to make a data project shine with Frictionless. + +Are you working on a great project that should become the next glowing star in the world of Frictionless Data? Feel free to [reach out](/work-with-us/get-help/) to spread the good news! diff --git a/site/blog/2021-06-25-june-virtual-hangout/README.md b/site/blog/2021-06-25-june-virtual-hangout/README.md new file mode 100644 index 000000000..6a6471e10 --- /dev/null +++ b/site/blog/2021-06-25-june-virtual-hangout/README.md @@ -0,0 +1,48 @@ +--- +title: Frictionless Data June 2021 Virtual Hangout +date: 2021-06-25 +tags: ['events', 'community-hangout'] +category: events +image: /img/blog/june21-community-call.png +description: At our June Frictionless Data community call we had Nikhil Vats giving a presentation on Frictionless Package for InterMine... +author: Sara Petti +--- + +At our last Frictionless Data community call on June 24th we had Nikhil Vats giving a presentation on Frictionless Package for InterMine. The project was developed during the Frictionless Toolfund 2020-2021. + +InterMine is an open source biological data warehouse that creates databases of biological data accessed by sophisticated web query tools. Nikhil worked on the Frictionless Data Package integration, which is extremely helpful for users, as it describes all the fields of their query, specifically: name of field, type of field, class path, field and class ontology link. + +You can learn more about the Data Package for InterMine project [here](https://frictionlessdata.io/blog/2021/04/13/data-package-for-intermine/). 
If you would like to dive deeper and learn all about the Frictionless implementation, you can watch Nikhil Vats’ presentation here:
+
+
+
+## Other agenda items from our hangout
+
+### Linked data support
+Nikhil’s presentation naturally led to a discussion on adding support for linked data and ontologies to Frictionless Data. On several occasions the community has shown interest in extending the Frictionless specifications by incorporating standard attributes, such as ontology terms, for improved interoperability. There have also been several discussions about supporting JSON-LD or RDF in the main specifications for improved data linking and querying. Would this help your work? Let us know what you think and whether you would be interested in participating in this project.
+
+### New tool: Livemark
+We are super happy to share with you the newest entry in the Frictionless Data toolkit: Livemark - a static page generator with built-in support for tables and charts, as well as data processing and validation with Frictionless: https://frictionlessdata.github.io/livemark/
+
+To learn more about it, check out [our latest blog](https://frictionlessdata.io/blog/2021/06/22/livemark/) (featuring a great demo by developer Evgeny Karev).
+
+As usual, we would love to hear what you think, so please share your thoughts, comments and feedback with us.
+
+## News from the community
+Michael Amadi from Nimble Learn presented the [Open Data Blend project](https://www.opendatablend.io/) - a set of open data services that aim to make large and complex UK open data easier to analyse. Open Data Blend’s bulk data API is built on the Frictionless Data specs. Keep an eye out for an upcoming blog with more details!
+
+Frictionless contributor Peter Desmet proposed to start a Frictionless Data community on Zenodo. We are currently discussing the best way to do that on [Discord](https://discord.com/invite/j9DNFNw) in the *datasets* channel. Join us there if you are interested or have ideas! 
+
+## Join us next month!
+
+Our next meeting will be on July 29th. We will hear a presentation from Dave Rowe on Public Libraries Open Data Schema. You can sign up [here](https://docs.google.com/forms/d/e/1FAIpQLSeuNCopxXauMkrWvF6VHqOyHMcy54SfNDOseVXfWRQZWkvqjQ/viewform?usp=sf_link).
+
+Do you want to share something with the community? Let us know when you sign up!
+
+## Call recording
+On a final note, here is the recording of the full call:
+
+
+
+As usual, you can join us on [Discord](https://discord.com/invite/j9DNFNw) or [Twitter](https://twitter.com/frictionlessd8a) to say hi or ask any questions. See you there!
\ No newline at end of file
diff --git a/site/blog/2021-07-02-farewell-fellows/README.md b/site/blog/2021-07-02-farewell-fellows/README.md
new file mode 100644
index 000000000..415a0416a
--- /dev/null
+++ b/site/blog/2021-07-02-farewell-fellows/README.md
@@ -0,0 +1,27 @@
+---
+title: A Bittersweet Ending to Frictionless Fellows Cohort 2
+date: 2021-07-02
+tags: ['fellows']
+category:
+image: /img/blog/fellows-ending.jpg
+description: As a final assignment, the Fellows have written blogs reflecting upon their experiences and what they learned during the programme...
+author: Lilly Winfree
+---
+
+To say that I am proud of the [second cohort of Frictionless Fellows](/blog/2020/09/01/hello-fellows-cohort2/) is an understatement. Their insight, discussions, and breakthroughs have been a true joy to witness, and I feel so lucky to have had the chance to work and learn with each of them. Over the last 9 months, they not only learned about Frictionless Data tooling, how to make their research more reproducible, and how to advocate for open science, but also gave many presentations (some for the first time in public!), published papers, wrote dissertations, and gained confidence in their coding skills. I know each of them will be a leader in the open space, so keep an eye on them! 
+ +As a final assignment, the Fellows have written blogs reflecting upon their experiences and what they learned during the programme. I’ve copied blurbs from each below, but be sure to click on the links to read more from each Fellow! + +* [Endings, Beginnings, and Reflections - by Anne Lee Steele](https://fellows.frictionlessdata.io/blog/anne-final-blog/) +“What came out of this fellowship, as my colleagues have said time and time again, is much more than I ever could have imagined. Over the course of the past year, I've had fascinating debates with my cohort, and learned about how different disciplines unpack complex debates surrounding transparency, openness, and accessibility (as well as many other things). I've learned how to engage with the universe of open knowledge, and have even started working on my own related projects! With the support of OKF, I've learned how to give presentations in public, and think about data in ways I never had before.” + +* [A done deal - by Evelyn Night](https://fellows.frictionlessdata.io/blog/evelyn-final-blog/) +“The fellowship was both exhilarating and educative. I got to engage in Open Science conversations, learned about and used frictionless tools like the Data Package Creator and Goodtables. I also navigated the open data landscape using CLI, Python, and git. I also got to engage in the Frictionless Community calls where software geniuses presented their work and also held Open science-centered conversations. These discussions enhanced my understanding of the Open Science movement and I felt a great honor to be involved in such meetings. I learned so much that the 9 months flew by.” + +* [A fellowship concludes - by Jacqueline Maasch](https://fellows.frictionlessdata.io/blog/jacqueline-final-blog/) +“It is hard to believe that my time as a Reproducible Research Fellow is over. 
I am most grateful for this program giving me a dedicated space in which to learn, a community with which to engage, and language with which to arm myself. I have been exposed to issues in open science that I had never encountered before, and have had the privilege of discussing these issues with people from across the world. I will miss the journal clubs the most!” + +* [My experience in the fellows program - a reflection - by Katerina Drakoulaki](https://fellows.frictionlessdata.io/blog/katerina-final-blog/) +“I got into the fellowship just with the hope of getting the opportunity to learn things I didn't have the opportunity to learn on my own. That is, I did not have specific expectations, I was (and still am) grateful to be in. I feel that all the implicit expectations I might have had are all fulfilled. I got an amazing boost in my digital skills altogether and I know exactly why (no I did not gain a few IQ points). I was in a helpful community and I matured in a way that enabled me to have more of a growth mindset. I also saw other people 'fail', as in having their code not working and having to google the solution! I have to say all the readings, the discussions, the tutorials, the Frictionless tools have been amazing, but this shift in my mindset has been the greatest gift the fellowship has given me.” + +Thank you Fellows! 
As a bonus, here are the reflections from the first cohort of Fellows: https://blog.okfn.org/2020/06/09/reflecting-on-the-first-cohort-of-frictionless-data-reproducible-research-fellows/
diff --git a/site/blog/2021-07-12-open-data-blend/README.md b/site/blog/2021-07-12-open-data-blend/README.md
new file mode 100644
index 000000000..17e7084b6
--- /dev/null
+++ b/site/blog/2021-07-12-open-data-blend/README.md
@@ -0,0 +1,92 @@
+---
+title: Open Data Blend
+date: 2021-07-12
+tags: ["case-studies"]
+category: case-studies
+image: /img/blog/open-data-blend-home-page.png
+description: Open Data Blend is a set of open data services that aim to make large and complex UK open data easier to analyse.
+author: Michael Amadi
+---
+
+[Open Data Blend](https://www.opendatablend.io) is a set of open data services that aim to make large and complex UK open data easier to analyse. We source the raw open data, transform it into [dimensional models](https://en.wikipedia.org/wiki/Dimensional_modeling) (also referred to as ‘star schemas’), cleanse and enrich it, add metadata to support its reuse, and make this processed data openly available as compressed CSV, Apache ORC, and Apache Parquet data files. In summary, we provide analysis-ready open data with an emphasis on quality over quantity. We are excited to tell you more about Open Data Blend and how it uses Frictionless Data specifications to make this data easier to understand and use.
+
+There are two core data services: Open Data Blend Datasets and Open Data Blend Analytics. Open Data Blend Datasets has a user interface (UI) called the [Open Data Blend Dataset UI](http://opendatablend.io/datasets) and a bulk data API called the [Open Data Blend Dataset API](https://packages.opendatablend.io/v1). [Open Data Blend Analytics](https://www.opendatablend.io/analytics) is an interactive analytical query service that can be used from popular BI tools like Excel, Power BI Desktop, and Tableau Desktop. 
+
+![open-data-blend-home-page](https://user-images.githubusercontent.com/74717970/125306833-d4515480-e32f-11eb-8a6d-306ce25cc854.png)
+
+## Why Open Data Blend Was Created
+The idea behind Open Data Blend was born at [Nimble Learn](https://www.nimblelearn.com/) in 2014 after several pain points were experienced when working with large and complex UK open datasets. One of these pain points was that a significant effort, and access to large computational resources, was needed to prepare the data for analysis in a reasonable timeframe. Another pain point was that the lookups and data dictionaries would often be buried in unstructured sources like Word documents, PDF files, and web pages.
+
+## Our Frictionless Data Journey
+At Nimble Learn, we have over six years’ experience working with the Frictionless Data specifications. We have delivered two other Frictionless Data projects to date: Data Package M and Data Package Connector.
+
+[Data Package M](https://github.com/nimblelearn/datapackage-m) is a Power Query M library that simplifies the loading of Tabular Data Packages into Excel or Power BI.
+
+![data-package-m](https://user-images.githubusercontent.com/74717970/125307259-314d0a80-e330-11eb-938b-c70cf3de7cc6.png)
+
+You can read the Frictionless Data case study for Data Package M [here](https://frictionlessdata.io/blog/2018/07/20/nimblelearn/).
+
+[Data Package Connector](https://github.com/nimblelearn/datapackage-connector) is a [Power BI custom connector](https://docs.microsoft.com/en-us/power-bi/connect-data/desktop-connector-extensibility#custom-connectors) that enables one or more tables from Data Packages that implement the Table Schema specification to be loaded directly into Power BI through the 'Get Data' experience. 
+ +![data-package-connector](https://user-images.githubusercontent.com/74717970/125307384-4de94280-e330-11eb-9ef1-f66084ceca32.png) + +The Frictionless Data case study for Data Package Connector can be read [here](https://frictionlessdata.io/blog/2019/07/22/nimblelearn-dpc/). + +## How Open Data Blend Uses Frictionless Data +During over six years of extensive research and development into open data publishing, we reviewed and evaluated several open standards that could be used as a base for our open data API. After carefully weighing the pros and cons of each, we chose to adopt the Frictionless Data specifications because they were lightweight, simple, robust, and highly scalable. We also wanted our users to benefit from the growing ecosystem of [Frictionless Data tools](https://libraries.frictionlessdata.io/) that make Frictionless Data even more accessible. + +The Open Data Blend Dataset UI and the Open Data Blend Dataset API are both powered by Frictionless Data. When you visit the [Open Data Blend Datasets](https://www.opendatablend.io/datasets) page, all of the information that you see nicely presented is coming from a data package that conforms to the [Data Package Catalog pattern](https://specs.frictionlessdata.io/patterns/#describing-data-package-catalogs-using-the-data-package-format). Clicking on one of the datasets takes you to a dedicated dataset page that is driven by extended [Data Package metadata](https://specs.frictionlessdata.io/data-package/). The ‘Get metadata’ button at the top of each dataset page reveals the contents of the underlying datapackage.json file. 
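To give a feel for what such a catalog descriptor looks like, here is a minimal, hypothetical sketch following the Data Package Catalog pattern (the real Open Data Blend catalog lists many more datasets and properties; only the road safety datapackage.json URL below comes from this post):

```json
{
  "profile": "data-package-catalog",
  "name": "open-data-blend-datasets",
  "resources": [
    {
      "profile": "data-package",
      "name": "open-data-blend-road-safety",
      "path": "https://packages.opendatablend.io/v1/open-data-blend-road-safety/datapackage.json"
    }
  ]
}
```

Each resource in the catalog points at a dataset's own datapackage.json, so a client can walk from the catalog to every dataset's full metadata.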
+
+So far, we have implemented and extended the following Frictionless Data specifications and patterns:
+
+* [Data Package](https://specs.frictionlessdata.io/data-package/)
+* [Table Schema](https://specs.frictionlessdata.io/table-schema/)
+* [Data Catalogue pattern](https://specs.frictionlessdata.io/patterns/#describing-data-package-catalogs-using-the-data-package-format)
+* [Compressed resources pattern](https://specs.frictionlessdata.io/patterns/#compression-of-resources)
+
+You can see how deeply ingrained the Frictionless Data specifications are just by skimming through the Open Data Blend Dataset API [reference documentation](https://docs.opendatablend.io/open-data-blend-datasets/dataset-api).
+
+## How Open Data Blend Helps
+Each Open Data Blend dataset is presented with helpful metadata. The data is modelled and enriched to enable effective data analysis. The columns that contain descriptive values are carefully combined into [dimension tables](https://en.wikipedia.org/wiki/Dimension_(data_warehouse)) and those that contain measurable facts are grouped into [fact tables](https://en.wikipedia.org/wiki/Fact_table). Modelling the data in this way makes it easier to understand and analyse. You can learn more about these dimensional modelling concepts [here](https://en.wikipedia.org/wiki/Dimensional_modeling) and [here](https://en.wikipedia.org/wiki/Star_schema).
+
+![open-data-blend-datasets-page](https://user-images.githubusercontent.com/74717970/125307186-21cdc180-e330-11eb-9650-68bbd75ea0dd.png)
+
+In addition to CSVs, we make the data available as Apache ORC and Apache Parquet files because they are two of the most popular and efficient open file formats for analytical workloads. Libraries available for [Python](https://arrow.apache.org/docs/python/parquet.html), [R](https://arrow.apache.org/docs/r/), and other popular languages make it possible to query these files very quickly. 
If you are a data engineer, data analyst, or data scientist with access to data lake storage, such as Amazon S3 and Azure Data Lake Storage Gen2, the ORC or Parquet files can be ingested into your data lake. Once there, you can query them interactively using data lake engines like Apache Spark, Azure Synapse Analytics, Databricks, Dremio, and Trino.
+
+To accelerate the data acquisition process when working with Open Data Blend datasets through code, we have developed a lightweight Python package called ‘opendatablend’. Once installed, this package allows you to effortlessly cache our data files locally with just a few lines of Python. Data engineers, data analysts, and data scientists can use the opendatablend package to get data and use it with whatever data tools they prefer. For example, a data scientist might start off doing some exploratory data analysis (EDA) in [Pandas](https://pandas.pydata.org/) or [Koalas](https://koalas.readthedocs.io/) using a [Jupyter notebook](https://jupyter.org/), transition to feature engineering, and then train and score machine learning models using [scikit-learn](https://scikit-learn.org/) or [Spark MLlib](https://spark.apache.org/mllib/).
+
+Below is a simple example that shows how easy opendatablend for Python is to use:
+
+```python
+import opendatablend as odb
+import pandas as pd
+
+dataset_path = 'https://packages.opendatablend.io/v1/open-data-blend-road-safety/datapackage.json'
+
+# Specify the resource name of the data file. In this example, the 'date' data file will be requested in .parquet format. 
+resource_name = 'date-parquet' + +# Get the data and store the output object +output = odb.get_data(dataset_path, resource_name) + +# Print the file locations +print(output.data_file_name) +print(output.metadata_file_name) + +# Read a subset of the columns into a dataframe +df_date = pd.read_parquet(output.data_file_name, columns=['drv_date_key', 'drv_date', 'drv_month_name', 'drv_month_number', 'drv_quarter_name', 'drv_quarter_number', 'drv_year']) + +# Check the contents of the dataframe +df_date +``` + +You can learn more about the opendatablend package [here](https://github.com/opendatablend/opendatablend-py). + +To further reduce the time to value and to make the open data insights more accessible, the [Open Data Blend Analytics](https://www.opendatablend.io/analytics) service can be used with business intelligence (BI) tools like Excel, Power BI Desktop, and Tableau Desktop to directly analyse the data over a live connection. Depending on the use case, this can remove the need to work with the data files altogether. + +![open-data-blend-excel-experience](https://user-images.githubusercontent.com/74717970/125306996-f77c0400-e32f-11eb-99f4-1aa898678f9e.gif) + +## Want to Learn More About Open Data Blend? +You can visit the Open Data Blend website [here](https://www.opendatablend.io) to learn more about the services. We also have some comprehensive documentation available [here](https://docs.opendatablend.io/), where Frictionless Data specific documentation can be found [here](https://docs.opendatablend.io/open-data-blend-datasets/frictionless-data). If you would like to contribute to the project, you can find out how [here](https://www.opendatablend.io/get-involved). + +Follow us on Twitter [@opendatablend](https://www.twitter.com/opendatablend) to get our latest news, feature highlights, thoughts, and tips. 
+
diff --git a/site/blog/2021-07-21-frictionless-repository/README.md b/site/blog/2021-07-21-frictionless-repository/README.md
new file mode 100644
index 000000000..e6d23265f
--- /dev/null
+++ b/site/blog/2021-07-21-frictionless-repository/README.md
@@ -0,0 +1,32 @@
+---
+title: Frictionless Repository
+date: 2021-07-21
+tags: ["news"]
+category: news
+image: /img/blog/Repository.png
+description: Introducing you to Repository, the new Frictionless Data tool that allows you to automate the validation workflows of your datasets.
+author: Sara Petti & Evgeny Karev
+---
+Are you looking for a way to automate the validation workflows of your datasets? Look no further, Frictionless Repository is here!
+
+We are very excited to announce that a new tool has been added to the Frictionless Data toolkit: Frictionless Repository. This is a GitHub Action that continuously validates the data in your repository, ensuring the quality of your data by reporting any problems with your datasets in no time.
+
+## How does it work?
+
+Every time you add or update any tabular data file in your repository, Frictionless Repository runs a validation. Missing header? Data type mismatch? You will get a neat, visual, human-readable validation report straight away, which will show any problems your data may have. The report lets you spot immediately where an error occurred, making it extremely easy to correct. You can even get a Markdown badge to display in your repository to show that your data is valid.
+
+Frictionless Repository only requires a simple installation. It is completely serverless, and it doesn't rely on any third-party hardware except for the GitHub infrastructure.
+
+## Let’s go!
+
+Before you get started, have a look at developer Evgeny Karev’s demo:
+
+

 

+
+We also encourage you to check out the dedicated [documentation website](https://repository.frictionlessdata.io/) to get more detailed information.
+
+## What do you think?
+
+If you use Frictionless Repository, please let us know! Frictionless Data is an open source project, so we encourage you to give us feedback. Let us know your thoughts, suggestions, or issues by joining us in our community chat on [Discord](https://discord.com/invite/Sewv6av) or by opening an issue in the [GitHub repo](https://github.com/frictionlessdata/repository).
+
diff --git a/site/blog/2021-08-02-apply-fellows/README.md b/site/blog/2021-08-02-apply-fellows/README.md
new file mode 100644
index 000000000..b08c25238
--- /dev/null
+++ b/site/blog/2021-08-02-apply-fellows/README.md
@@ -0,0 +1,25 @@
+---
+title: Apply Now - become a Frictionless Data Reproducible Research Fellow
+date: 2021-08-02
+tags: ['fellows']
+category:
+image: /img/blog/fd_reproducible.png
+description: Apply today to join the Third Cohort of Frictionless Data Fellows!
+author: Lilly Winfree
+---
+
+*The Frictionless Data Reproducible Research [Fellows Program](http://fellows.frictionlessdata.io), supported by the Sloan Foundation, aims to train graduate students, postdoctoral scholars, and early career researchers to become champions for open, reproducible research using Frictionless Data tools and approaches in their field.*
+
+### Apply today to join the Third Cohort of Frictionless Data Fellows!
+Fellows will learn about Frictionless Data, including how to use Frictionless tools in their domains to improve reproducible research workflows, and how to advocate for open science. Working closely with the Frictionless Data team, Fellows will lead training workshops at conferences, host events at universities and in labs, and write blogs and other communications content. 
In addition to mentorship, we are providing Fellows with stipends of $5,000 to support their work and time during the nine-month-long Fellowship. We welcome applications using this [form](https://forms.gle/3t9EoHKWYUnBdzHF8) from 4th August until 31st August 2021, with the Fellowship starting in October. We value diversity and encourage applicants from communities that are under-represented in science and technology, people of colour, women, people with disabilities, and LGBTI+ individuals. Questions? Please read the [FAQ](https://fellows.frictionlessdata.io/apply), and feel free to email us (frictionlessdata@okfn.org) if your question is not answered in the FAQ.
+
+### Frictionless Data for Reproducible Research
+The Fellowship is part of the [Frictionless Data for Reproducible Research](http://frictionlessdata.io/adoption/#frictionless-data-for-reproducible-research/) project at [Open Knowledge Foundation](https://okfn.org/), and is the third iteration. Frictionless Data aims to reduce the friction often found when working with data, such as when data is poorly structured, incomplete, hard to find, or archived in difficult-to-use formats. This project, funded by the Sloan Foundation and the Open Knowledge Foundation, applies our work to data-driven research disciplines, in order to help researchers and the research community resolve data workflow issues. At its core, Frictionless Data is a set of specifications for data and metadata interoperability, accompanied by a collection of software libraries that implement these specifications, and a range of best practices for data management. The core specification, the Data Package, is a simple and practical “container” for data and metadata. The Frictionless Data approach aims to address identified needs for improving data-driven research such as generalized, standard metadata formats, interoperable data, and open-source tooling for data validation. 
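To illustrate that “container” idea, a minimal datapackage.json pairs a data file with a Table Schema describing its fields (the file, field names and constraints below are hypothetical examples, not from a real dataset):

```json
{
  "name": "field-observations",
  "resources": [
    {
      "name": "observations",
      "path": "observations.csv",
      "schema": {
        "fields": [
          {"name": "site", "type": "string"},
          {"name": "date", "type": "date", "format": "%d/%m/%Y"},
          {"name": "count", "type": "integer", "constraints": {"minimum": 0}}
        ]
      }
    }
  ]
}
```

Tools that implement the specifications can then read and validate observations.csv against this schema automatically.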
+
+### Fellowship program
+During the Fellowship, our team will be on hand to work closely with you as you complete the work. We will help you learn Frictionless Data tooling and software, and provide you with resources to help you create workshops and presentations. Also, we will announce Fellows on the project website and will be publishing your blogs and workshop slides within our network channels. We will provide mentorship on how to work on an Open project, and will work with you to achieve your Fellowship goals. You can read more about the first two cohorts of the Programme in the Fellows blog: http://fellows.frictionlessdata.io/blog/.
+
+### How to apply
+The Fund is open to early career research individuals, such as graduate students and postdoctoral scholars, anywhere in the world, and in any scientific discipline. Successful applicants will be enthusiastic about reproducible research and open science, have some experience with communications, writing, or giving presentations, and have some technical skills (basic experience with Python, R, or Matlab for example), but do not need to be technically proficient. If you are interested, but do not have all of the qualifications, we still encourage you to [apply](https://forms.gle/3t9EoHKWYUnBdzHF8). We welcome applications using this [form](https://forms.gle/3t9EoHKWYUnBdzHF8) from 4th August until 31st August 2021.
+
+If you have any questions, please email the team at frictionlessdata@okfn.org and check out the [Fellows FAQ section](https://fellows.frictionlessdata.io/apply). [Apply](https://forms.gle/3t9EoHKWYUnBdzHF8) soon, and share with your networks!
diff --git a/site/blog/2021-08-06-recap-community-calls/README.md b/site/blog/2021-08-06-recap-community-calls/README.md
new file mode 100644
index 000000000..48723bf59
--- /dev/null
+++ b/site/blog/2021-08-06-recap-community-calls/README.md
@@ -0,0 +1,29 @@
+---
+title: The first six community calls of 2021 - a recap. 
+date: 2021-08-06
+tags: ['events', 'community-hangout']
+category: events
+image: /img/blog/community-call-pic.png
+description: Looking back at all that has happened in the Frictionless Community over these past 6 months.
+author: Sara Petti
+---
+
+We are halfway through 2021 (aka 2020 part two), and we thought it would be a good moment to look back at all that has happened in the Frictionless Community over these past 6 months. We’re so grateful for everyone in the community - thanks for your contributions, discussions, and participation! A big part of the community is our monthly call, so in case you’ve missed any of the community calls of 2021, here is a quick recap.
+
+We started the year with a great presentation by Carles Pina i Estany. Carles is a very active member of our community and also a tool-fund grantee. He presented his tool-fund project: [Frictionless schema-collaboration](https://frictionlessdata.io/blog/2021/01/18/schema-collaboration/). What is that? It’s a system that uses Data Package Creator to enable data managers and researchers to create and share dataset schemas, edit them, post messages and export the schemas in different formats (like text, Markdown or PDF). It is a very useful tool: previously, researchers had to communicate with data managers via email for each data package they were publishing, whereas Frictionless schema-collaboration makes that communication easier and faster.
+
+February was a great month: we started [improving the documentation of the Frictionless Framework website](https://frictionlessdata.io/blog/2021/02/26/halfway-odi/#what-are-the-next-steps) together with the community, and we had a brilliant code demonstration of the newly-released Frictionless Python Framework by senior developer Evgeny Karev at the monthly community call. How great was that? That particular call broke our attendance record; it was fantastic to have so many of you there! 
And in case you were not there, we recorded Evgeny’s demo and you can watch it on [YouTube](LINK).
+
+March marked one year since the beginning of the Covid-19 pandemic in Europe and the Americas. It seemed fair to dedicate that community call to Covid-19 data, so we had Thorben Westerhuys presenting his project on Frictionless vaccination data. Thorben developed [a spatiotemporal tracker for state level covid vaccination data in Germany](https://github.com/n0rdlicht/rki-vaccination-scraper) to solve the problems linked to governments publishing vaccination data that is not parsed for machines. His vaccination scraper takes that data, reformats it and makes it available to everyone in a structured, more machine-readable form.
+
+At the end of April we had an interactive session with the [Frictionless Fellows](https://fellows.frictionlessdata.io/). Daniel Alcalá López, Kate Bowie, Katerina Drakoulaki, Anne Lee, Jacqueline Maasch, Evelyn Night and Samantha Wilairat took some time to tell the community about their journey through Open Science. They also shared with the community some of the things they learnt during their 9-month fellowship and how they plan to integrate them into their work. This cohort of fellows made us very proud; they were a true joy to work with. Keep an eye on them all, they will be leaders in Open Science! And in case you are interested in becoming a Frictionless Fellow, we are currently recruiting the 3rd cohort. More info on the programme and how to apply [here](https://frictionlessdata.io/blog/2021/08/02/apply-fellows/).
+
+During the April call we also got a short presentation on instant APIs for small Frictionless Data-powered apps by Oleg Lavrovsky. Oleg is also an active member of our community; you have probably already met him at many of our calls.
+
+May started gloriously with csv,conf, where we had two talks on Frictionless Data. One was by the Fellows, and the other one was by Simon Tyrrell. 
On top of the one at csv,conf, Simon gave a presentation together with Xingdong Bian about their [Frictionless Data for Wheat project](https://frictionlessdata.io/blog/2021/03/05/frictionless-data-for-wheat) at the monthly call. Simon and Xingdong are researchers at the Earlham Institute, and they are both tool-fund grantees, like Carles. They presented their project to the community and explained how they use Frictionless Data to make their large amounts of data available, usable and interoperable for everyone.
+
+The last call we had was in June, also featuring a tool-fund grantee: Nikhil Vats. Nikhil presented the [Frictionless Data Package integration he developed for InterMine](https://frictionlessdata.io/blog/2021/04/13/data-package-for-intermine/), an open source biological data warehouse that creates databases of biological data accessed by sophisticated web query tools. Nikhil’s integration makes users’ queries more useful, as it describes all the fields of their query, specifically: name of field, type of field, class path, field and class ontology link.
+In the same call, Michael Amadi announced the release of Open Data Blend, a great project using Frictionless Data. If you find it cool and would like to know more about it, read [this case study](https://frictionlessdata.io/blog/2021/07/12/open-data-blend/), but also make sure you don’t miss the October community call, because we will be hearing a presentation on it!
+
+July’s call was canceled last minute, but it has been rescheduled to August 12th, and it’s going to be extremely interesting! If you have not signed up yet, please do so [here](https://docs.google.com/forms/d/e/1FAIpQLSeuNCopxXauMkrWvF6VHqOyHMcy54SfNDOseVXfWRQZWkvqjQ/viewform). We will be hearing from Dave Rowe (aka [Libraries Hacked](https://www.librarieshacked.org/)) about how he uses Frictionless Data specs and standards for public libraries open data. 
+The first semester of 2021 was also great because we [completed our website redesign](https://frictionlessdata.io/blog/2021/06/16/new-changes-to-the-website/) and we added two great tools to the Frictionless Data toolkit: [Livemark](https://frictionlessdata.io/blog/2021/06/22/livemark) and [Frictionless Repository](https://frictionlessdata.io/blog/2021/07/21/frictionless-repository/). These tools get better and better every day thanks to the valuable contributions of the community. Thanks to you all for making the Frictionless Data project so great. Nothing could have happened without you! diff --git a/site/blog/2021-08-09/dryad-pilot/README.md new file mode 100644 index 000000000..660174a17 --- /dev/null +++ b/site/blog/2021-08-09/dryad-pilot/README.md @@ -0,0 +1,22 @@ +--- +title: Frictionless Data and Dryad join forces to validate research data +date: 2021-08-09 +tags: ['pilot'] +category: +image: /img/adoption/dryad.png +description: A great way to share research data is to upload it to a repository, but how do we ensure that uploaded data doesn't have issues? Frictionless Data and Dryad join forces to revamp the upload page for the Dryad application, now including the Frictionless validation functionality to check for data quality. +author: Daniella Lowenberg and Lilly Winfree +--- +What happens to scientific data after it is generated? The answer is complicated - sometimes that data is shared with other researchers, sometimes it is hidden away on a private hard drive. Sharing research data is a key part of open science, the movement to make research more accessible and usable by everyone to drive faster advances in science. A great way to share research data is to upload it to a repository, but simply uploading data is not the final step here.
Ideally, the uploaded data will be of high quality - that is, it won’t have errors or missing data, and it will have enough descriptive information that other researchers can also use it! Over the last 6 months, we collaborated with the data repository Dryad to make it easier for researchers to upload their high-quality data for sharing. + +[Dryad](https://datadryad.org/stash) is a community-led data repository that allows researchers to submit data from any field, which not only promotes open science, but also helps researchers comply with open data policies from funders and journals. Because Dryad accepts all kinds of data, they need to curate that data for quality, ensure that it does not present risk, and check that it has comprehensive metadata so the data can be reused. We quickly realized our shared goals, and formed a Pilot collaboration to add Frictionless validation functionality to the Dryad data upload page. Both teams agreed on how important it is to give researchers immediate feedback about their data as they are submitting it so they can make edits in that moment, and learn about data best practices. + +The outcome of this collaboration is a revamped upload page for the Dryad application. Researchers uploading tabular data (CSV, XLS, XLSX) under 25MB will have the files automatically validated using the Frictionless tool. These checks are based on the built-in validation of Frictionless Framework (read the validation guide [here](https://framework.frictionlessdata.io/docs/guides/validation-guide)), and include checking for data errors such as blank cells, missing headers, or incorrectly formatted data. The Frictionless report will help guide researchers on which issues should be resolved, allowing researchers to edit and re-upload files before submitting their dataset for curation and publication.
+ +![Screen Shot 2021-08-06 at 8 10 41 AM](https://user-images.githubusercontent.com/74717970/128690898-2095f1c7-060d-4398-ac92-33f65c068c4c.png) +*When a data file is uploaded, researchers can see if the data passed the Tabular Data Checks or if there are any issues. Clicking “View 1 Issues” shows more details describing the error.* + +![Screen Shot 2021-08-06 at 8 12 01 AM](https://user-images.githubusercontent.com/74717970/128690994-16be9845-59ec-4f3b-9b76-28a163dfa1e3.png) +*This uploaded data file has a blank header. With this information, the researcher can fix the error and re-upload the data.* + +This work was funded by the Sloan Foundation as part of the Frictionless Data for Reproducible Research project. This project was truly collaborative - most of the technical work was completed by contractor Cassiano Reinert Novais dos Santos with supervision and support from the Dryad team: Daniella Lowenberg, Scott Fisher, Ryan Scherle, and the CDL UX team (Rachael Hu and John Kratz); as well as support from the Frictionless team, Evgeny Karev, Lilly Winfree, and Sara Petti. If you have any feedback on the Dryad upload page, please let us know! diff --git a/site/blog/2021-08-16-august-12-call/README.md b/site/blog/2021-08-16-august-12-call/README.md new file mode 100644 index 000000000..92f398de6 --- /dev/null +++ b/site/blog/2021-08-16-august-12-call/README.md @@ -0,0 +1,43 @@ +--- +title: Frictionless Data 12 August 2021 Virtual Hangout +date: 2021-08-16 +tags: ['events', 'community-hangout'] +category: events +image: /img/blog/Libraries-Hacked.png +description: At our Frictionless Data community call we had Dave Rowe giving a presentation on Frictionless Data for public libraries... +author: Sara Petti +--- + +On our last Frictionless Data community call on August 12th we had Dave Rowe (aka Libraries Hacked) giving a presentation on Frictionless Data standards and tooling for public libraries’ data.
+ +Libraries Hacked is a project promoting open data in libraries and creating digital prototypes from that data. Public libraries hold a lot of data, but this data is often not shared and it lacks common standards for data sharing. With the introduction of data schemas, Dave developed a series of tools to show libraries what they could do with their data. For example, Dave demonstrated membership mapping, library maps and a [mobile libraries dashboard](https://www.mobilelibraries.org/map) that displays mobile library vans, estimates their location and automatically generates paper timelines. + +You can learn more about the Libraries Hacked project [here](https://www.librarieshacked.org/). If you would like to dive deeper and discover all about what you can do with Frictionless library data, you can watch Dave Rowe’s presentation here: + + + +## Other agenda items from our hangout + +### Frictionless Hackathon in October! +Join the Frictionless Data community for a two-day virtual event to create new project prototypes based on existing Frictionless open source code. It’s going to be fun! +We need to decide on a date to hold this event, and are currently considering Thursdays and Fridays in October. You can vote on Discord. +Keep an eye on the website for more info: https://frictionlessdata.io/hackathon/#what-s-a-hackathon + +### Recruiting the 3rd cohort of Frictionless Fellows +Are you an early career researcher interested in Open Science? We are recruiting the 3rd cohort of Frictionless Fellows! During their 9-month Fellowship, Fellows will lead training workshops, host events at universities and in labs, and write blogs and other communications content. You will be mentored by Frictionless Data product manager Lilly Winfree, PhD and we will help you learn Frictionless Data tooling and software. Applications are open until August 31st.
+More info [here](https://frictionlessdata.io/blog/2021/08/02/apply-fellows/). +You can apply via this [form](https://docs.google.com/forms/d/e/1FAIpQLSdR1Qz5GL5A1BrqgFxDBOXScvNoS5AeyCWixNwtcApXUttT8Q/viewform). + +# Join us in 2 weeks! + +Yes, that’s right, August is our lucky month: we have not one, but two community calls! Our next meeting will be in just 2 weeks, on August 26th. We will hear a presentation from +Amber York and Adam Shepherd from BCO-DMO on Frictionless Data Pipelines. You can sign up [here](https://docs.google.com/forms/d/e/1FAIpQLSeuNCopxXauMkrWvF6VHqOyHMcy54SfNDOseVXfWRQZWkvqjQ/viewform?usp=sf_link). + +Do you want to share something with the community? Let us know when you sign up! + +# Call recording: +On a final note, here is the recording of the full call: + + + +As usual, you can join us on [Discord](https://discord.com/invite/j9DNFNw) or [Twitter](https://twitter.com/frictionlessd8a) to say hi or ask any questions. See you there! diff --git a/site/blog/2021-08-31-august-26-community-call/README.md b/site/blog/2021-08-31-august-26-community-call/README.md new file mode 100644 index 000000000..a69784ce3 --- /dev/null +++ b/site/blog/2021-08-31-august-26-community-call/README.md @@ -0,0 +1,44 @@ +--- +title: Frictionless Data 26 August 2021 Virtual Hangout +date: 2021-08-31 +tags: ['events', 'community-hangout'] +category: events +image: /img/blog/BCODMO-data-pipelines.png +description: At our Frictionless Data community call we had Amber York and Adam Shepherd from BCO-DMO giving a presentation on Frictionless Data Pipelines for Ocean Science... +author: Sara Petti +--- +On our last Frictionless Data community call on August 26th we had Amber York and Adam Shepherd from BCO-DMO giving a presentation on Frictionless Data Pipelines for Ocean Science. + +BCO-DMO is a biological and chemical oceanography data management office, working with scientists to make sure that their data is publicly available and archived for everyone else to use.
+ +BCO-DMO processes around 500 datasets a year, with all sorts of variability. In the beginning the staff was writing ad hoc scripts and software to process that data, but that quickly became a challenge, as the catalogue continued to grow in both size and the variety of data types it curates. + +Having worked for several years with Frictionless Data, BCO-DMO identified the Data Package Pipelines (DPP) project in the Frictionless toolkit as key to overcoming those challenges and achieving its data curation goals. +Together with the Frictionless Data team at Open Knowledge Foundation, BCO-DMO developed Laminar, a web application to create Frictionless Data Package Pipelines. Laminar helps data managers process data efficiently while recording the provenance of their activities to support reproducibility of results. + +You can learn more about the project [here](https://frictionlessdata.io/blog/2020/02/10/frictionless-data-pipelines-for-open-ocean/). If you would like to dive deeper and discover all about Frictionless Data Pipelines, you can watch Amber York and Adam Shepherd’s presentation: + + + +## Other agenda items from our hangout +### Frictionless Hackathon on 7-8 October! +Join the Frictionless Data community for a two-day virtual event to create new project prototypes based on existing Frictionless open source code. It’s going to be fun! +We are currently accepting project submissions, so if you have a cool project in mind, based on existing Frictionless open source code, this could be an excellent opportunity to prototype it, together with other Frictionless users from all around the world. You can pitch anything - your idea doesn't need to be complete/fully planned. We can also help you formulate a project if you have an idea but aren't sure about it. You can also submit ideas for existing projects you need help with!
+ +Use [this form](https://docs.google.com/forms/d/e/1FAIpQLSdd41pbfWaCYQHkQNTaf49kht1cUg7_Tg-NzqdP11pHWrD7yA/viewform) to submit your project. +Keep an eye [on the website](https://frictionlessdata.io/hackathon/) for more info. + +# Join us next month! + +Our next meeting will be on September 30th, exceptionally one hour later than usual. We will hear a presentation from Daniella Lowenberg and Cassiano Reinert Novais dos Santos on the Frictionless Data validation implemented for the Dryad application. + +You can sign up [here](https://docs.google.com/forms/d/e/1FAIpQLSeuNCopxXauMkrWvF6VHqOyHMcy54SfNDOseVXfWRQZWkvqjQ/viewform?usp=sf_link). + +Do you want to share something with the community? Let us know when you sign up! + +# Call recording: +On a final note, here is the recording of the full call: + + + + As usual, you can join us on [Discord](https://discord.com/invite/j9DNFNw) or [Twitter](https://twitter.com/frictionlessd8a) to say hi or ask any questions. See you there! diff --git a/site/blog/2021-09-30/hackathon-preview/README.md b/site/blog/2021-09-30/hackathon-preview/README.md new file mode 100644 index 000000000..da5519d77 --- /dev/null +++ b/site/blog/2021-09-30/hackathon-preview/README.md @@ -0,0 +1,29 @@ +--- +title: Why you should join the Frictionless Data Hackathon +date: 2021-09-30 +tags: ['events'] +category: events +image: /img/Frictionless_hackathon.png +description: A sneak preview of the submitted projects to give you all a few good reasons to register for the Frictionless Data Hackathon +author: Sara Petti +--- +The Frictionless Data Online Hackathon is fast approaching and we just can’t wait for it to start! + +If you are not sure yet whether to participate or not, bear in mind that it will be a great opportunity to test some of the newest Frictionless tools, like Livemark and Frictionless Repository, and to play around with frictionless-py and other new Frictionless code.
It will also be a great chance for you to meet other Frictionless users and contributors from all around the world and build a project prototype together. + +**Not convinced yet? Go and explore the proposed projects on the [Dashboard](https://frictionless-hackathon.herokuapp.com/event/1#top)! You will see there is a project for every taste, so surely there must be one that sounds right for you!** + +Are you a big fan of geodata? In that case you will probably want to join the [frictionless-geojson team](https://frictionless-hackathon.herokuapp.com/project/9), which is planning to create a frictionless-py plugin to add support for reading, writing and inlining GeoJSON. If you are a devoted CKAN user who would like to see more Frictionless functionalities in it, you may decide to join the [Data package manager for CKAN project](https://frictionless-hackathon.herokuapp.com/project/8). + +In case you read our [blog about Livemark](https://frictionlessdata.io/blog/2021/06/22/livemark/) and have been intrigued by this new Frictionless tool ever since, your moment has come! You can finally try it out by joining the [Citation Context Reports](https://frictionless-hackathon.herokuapp.com/project/11), the [Dataset List](https://frictionless-hackathon.herokuapp.com/project/3) project, or the [Frictionless Community Insights](http://frictionless-hackathon.herokuapp.com/project/12) project. If you are interested in dataset discoverability and linkage, you may want to join the [Things not Datasets](https://frictionless-hackathon.herokuapp.com/project/10) team. + +Oh, and please let us know in advance if you are a big bug smasher! You will be a coveted participant for all projects and we need to make sure everybody gets a fair share of your skills, including us in our effort to improve the [Frictionless Python Framework](https://frictionless-hackathon.herokuapp.com/project/4).
+ +But enough describing the projects; instead, hear about them directly from the people who proposed them: + + + + +Hurry up and register for the hackathon if you haven’t done so yet: registration is only open until the end of this week via [this form](https://forms.gle/Xr4gcnQnhShMJrWeA). + +More information on the Frictionless Data Hackathon is available on the [dedicated webpage](https://frictionlessdata.io/hackathon/). You can also follow news on the day itself through [Twitter](https://twitter.com/frictionlessd8a/): #FrictionlessHackathon and #FrictionlessHack2021. diff --git a/site/blog/2021-10-06-september-community-call/README.md b/site/blog/2021-10-06-september-community-call/README.md new file mode 100644 index 000000000..ba62a74c7 --- /dev/null +++ b/site/blog/2021-10-06-september-community-call/README.md @@ -0,0 +1,43 @@ +--- +title: Frictionless Data September 2021 Virtual Hangout +date: 2021-10-06 +tags: ['events', 'community-hangout'] +category: events +image: /img/blog/Community-call-Dryad.png +description: At our Frictionless Data community call we had Daniella Lowenberg and Cassiano Reinert Novais dos Santos presenting the Frictionless Data integration into Dryad... +author: Sara Petti +--- +On our last Frictionless Data community call on September 30th we had Daniella Lowenberg from Dryad and developer Cassiano Reinert Novais dos Santos giving a presentation on the Frictionless Data integration into Dryad. + +Dryad is a community-led repository that makes research data discoverable, freely reusable, and citable. To ensure the quality of the submitted data, Dryad needs to curate it. It therefore made total sense to integrate the Frictionless Data validation functionality into its upload page. + +A pilot was started at the beginning of 2021 to add an automatic tabular data validation check to all uploaded files under 25MB, and it went live in June 2021.
Since then, more than 11,000 research data files have been validated, and around 1,000 failed the validation test. 98.4% of the researchers whose files failed managed to fix their errors easily and resubmit their data. + +All the code of the Frictionless Data integration is open source and lives in the [Dryad GitHub repository](https://github.com/orgs/CDL-Dryad/repositories), so go and have a look if you want and please let us know if you have any feedback. + +You can learn more about the project [here](https://frictionlessdata.io/blog/2021/08/09/dryad-pilot/). If you would like to dive deeper and discover all about the Frictionless Data validation functionality integrated into Dryad, you can watch Daniella Lowenberg and Cassiano Reinert Novais dos Santos’ presentation here: + + + +## Other agenda items from our hangout +### Frictionless Hackathon on 7-8 October! +Join the Frictionless Data community for a two-day virtual event to create new project prototypes based on existing Frictionless open source code. It’s going to be fun! +Go and explore the dashboard to learn more about all the projects we plan to work on. +For general information, just go to the [dedicated page](https://frictionlessdata.io/hackathon/). +We are accepting last-minute registrations [via this form](https://forms.gle/ZhrVfSBrNy2UPRZc9), so hurry up if you want to be on board! + +# Join us next month! +Our next meeting will be on October 28th. We will hear a presentation from Michael Amadi on Open Data Blend datasets powered by Frictionless Data. + +Ahead of our next call, you can learn more about Open Data Blend [here](https://frictionlessdata.io/blog/2021/07/12/open-data-blend/). + +You can sign up [here](https://docs.google.com/forms/d/e/1FAIpQLSeuNCopxXauMkrWvF6VHqOyHMcy54SfNDOseVXfWRQZWkvqjQ/viewform?usp=sf_link). + +Do you want to share something with the community? Let us know when you sign up!
+ +# Call recording: +On a final note, here is the recording of the full call: + + + +As usual, you can join us on [Discord](https://discord.com/invite/j9DNFNw) or [Twitter](https://twitter.com/frictionlessd8a) to say hi or ask any questions. See you there! diff --git a/site/blog/2021-10-13-hackathon-wrap/README.md b/site/blog/2021-10-13-hackathon-wrap/README.md new file mode 100644 index 000000000..e79ce5b93 --- /dev/null +++ b/site/blog/2021-10-13-hackathon-wrap/README.md @@ -0,0 +1,57 @@ +--- +title: Wrapping up the Frictionless Hackathon +date: 2021-10-13 +tags: ['events'] +category: events +image: /img/Frictionless_hackathon.png +description: It's a wrap! Recap of the Frictionless Data Hackathon on 7-8 October +author: Sara Petti +--- +The first (of many we hope!) Frictionless Data Hackathon is over, and it was great! Many thanks to all who helped make it such a success this past week. + +The prize for the best project, voted by the participants, went to the DPCKAN team. Well done André, Andrés, Carolina, Daniel, Francisco and Gabriel! +*“I feel pretty happy after this frictionless hackathon experience. We've grown more in 2 days than would have been possible in one month. The knowledge and experience exchange was remarkable,”* said the winning team. + +It was also great to see participants who had never taken part in a hackathon before being enthusiastic about it. *“I loved the helpfulness of the community members, as well as the diversity of participants.”* + +*“It was such a great opportunity to network with other people interested in data quality and open data!”* + +*“It was amazing to see a weightless tool used in development. I want to learn more about it and integrate it into my projects.”* + +Over 20 people signed up for the hackathon from Africa, Asia, Europe, South America and North America. We had a very diverse audience and saw a lot of new faces. The event ran from 7th to 8th October on our Discord server.
The result of those 2 days of intense collaboration was four great projects: + +## DPCKAN +The DPCKAN project was proposed by a team working on the data portal of the state of Minas Gerais in Brazil. To ensure quality metadata and automate the publishing process, the team decided to develop a tool that would allow publishing and updating datasets described with Frictionless Standards in a CKAN instance. + +The main objectives for the hackathon were to refine the package update functions and clean up the documentation. + +You can check out the project’s [GitHub repository](https://github.com/dados-mg/dpckan) to see the improvements that were made during the hackathon. + +## Frictionless Tutorials +The main objective of this project was to write new tutorials using the Python Frictionless Framework. The team not only created a tutorial, but also wrote [more detailed instructions](https://docs.google.com/document/d/1zbWMmIeU8DUwzGaEih0JGJ-DMGug5-2UksRN1x4fvj8/edit?usp=sharing) on how to create new tutorials for future contributors. + +You can have a look at the tutorial written during the hackathon [here](https://colab.research.google.com/drive/1tTtynfnExykcTYon1j6Y8OgzQZEXpQvP?usp=sharing). + +## Covid tracker +The main objective of this project was to test Livemark, one of the newest Frictionless tools, with real data and provide an example of all its functionalities. Besides the charts and tables, the information is available on an interactive map, which also takes into account the accuracy of the official data. + +You can have a look at the Covid Tracker [here](https://covid-tracker.frictionlessdata.io/). + +## Frictionless Community Insight +The objective of this project, proposed by the Frictionless core team, was to build a [Livemark](https://livemark.frictionlessdata.io/) website telling a story about the Frictionless Data community using the data from the community survey we ran in September.
+ +The main goals for the hackathon were to clean the data from the survey, visualise it and display it as a story on the Livemark website. + +You can have a look at the [draft website](https://community-insights.frictionlessdata.io/). + +Four other great projects were started during the hackathon but not finished: + +**Dataset List**, another Livemark project to list all the datapackages on GitHub, **Frictionless Geojson**, an extension to add GeoJSON read and write support in frictionless-py, **Improve Frictionless Data Python Framework**, a project to get familiar with the codebase, and **Citation Context Reports**, a project to create Frictionless data schemas for scholarly citation data. + +Interestingly, one of the participants started off his own project during the hackathon, building a Discord-Matrix bridge to allow Frictionless users and contributors to join the community Discord chat using an open standard. Even though the Matrix bridge did not take part in the voting, it is still a notable project. If you are interested in knowing more about it you can have a look at [this GitHub issue](https://github.com/frictionlessdata/project/issues/698). + +On the last day of the hackathon, one hour before the end of the event, the teams pitched their projects. Here’s a recording of the event if you missed it and want to have a look:
diff --git a/site/blog/2021-11-03-october-community-call/README.md b/site/blog/2021-11-03-october-community-call/README.md new file mode 100644 index 000000000..a180dbaa2 --- /dev/null +++ b/site/blog/2021-11-03-october-community-call/README.md @@ -0,0 +1,44 @@ +--- +title: Frictionless Data October 2021 Virtual Hangout +date: 2021-11-03 +tags: ['events', 'community-hangout'] +category: events +image: /img/Open-data-blend.png +description: At our Frictionless Data community call we had Michael Amadi presenting Open Data Blend and their Frictionless Data journey... +author: Sara Petti +--- +On our last Frictionless Data community call on October 28th we had Michael Amadi from Nimble Learn giving a presentation on Open Data Blend and their Frictionless Data journey. + +Open Data Blend is a set of open data services that aim to make large and complex UK open data easier to analyse. The Open Data Blend datasets have two interfaces: a UI and an API, both powered by Frictionless Data. The datasets themselves are built on top of three Frictionless Data specifications: data package, data resource and table schema; and they incorporate some Frictionless Data patterns. + +The project addresses some of the main open data challenges: +* Large data volumes that are difficult to manage due to their size +* Overwhelming complexity in data analysis +* Open data shared in sub-optimal file formats for data analysis (e.g. PDFs) +* When companies and organisation aggregate data, refine it and add value to it, they often don’t openly share the cleaned data + +You can learn more on the project [here](https://frictionlessdata.io/blog/2021/07/12/open-data-blend/). If you would like to dive deeper and discover all about how Open Data Blend uses the Frictionless Data toolkit, you can watch Michael Amadi’s presentation here: + + + +# Other agenda items from our hangout + +* Senior developer Evgeny Karev presented Livemark at PyData on October 29th. 
If you missed it and want to have a look, check out the recording [here](https://zoom.us/rec/play/yyFTEAW3_v4cPGUNbiHS95-vlgICgNYeVdK_N9VHOdHxLDoKbTE9EZvbVpZMjIV8-WAr3qmZ9vZPoVsU.QXvKRI1hOrCwv8Lg?startTime=1635487241000&_x_zm_rtaid=iuuaYWHFSEec21FRLG7Cig.1635861744121.d2b5a7e329a988e4ea49b64e3d6e66b6&_x_zm_rhtaid=460) (for Livemark jump to 1:03:03). +* The third cohort of Frictionless Fellows officially kicked off in mid-October. You will get to meet them next year during one of our community calls. Meanwhile, stay tuned to learn more about them! +* We don’t have any presentation planned for the December community call yet. Would you like to present something? Drop us a line to let us know! + +# Join us next month! + +Our next community call is one week earlier than usual (to avoid conflict with American Thanksgiving), on November 18th. We will hear a presentation from Peter Desmet on a Frictionless Data exchange format for camera trapping data. + +You can sign up [here](https://docs.google.com/forms/d/e/1FAIpQLSeuNCopxXauMkrWvF6VHqOyHMcy54SfNDOseVXfWRQZWkvqjQ/viewform?usp=sf_link). + +Do you want to share something with the community? Let us know when you sign up! + +# Call recording: + +On a final note, here is the recording of the full call: + + + +As usual, you can join us on [Discord](https://discord.com/invite/j9DNFNw) or [Twitter](https://twitter.com/frictionlessd8a) to say hi or ask any questions. See you there!
diff --git a/site/blog/2021-11-23-november-community-call/README.md b/site/blog/2021-11-23-november-community-call/README.md new file mode 100644 index 000000000..cfb3561de --- /dev/null +++ b/site/blog/2021-11-23-november-community-call/README.md @@ -0,0 +1,43 @@ +--- +title: Frictionless Data November 2021 Virtual Hangout +date: 2021-11-23 +tags: ['events', 'community-hangout'] +category: events +image: /img/blog/November-community-call.png +description: At our Frictionless Data community call we had Peter Desmet presenting Frictionless Data exchange format for camera trapping data... +author: Sara Petti +--- +On our last Frictionless Data community call on November 18th we had Peter Desmet from the Research Institute for Nature and Forest (INBO) giving a presentation on a Frictionless Data exchange format for camera trapping data. + +Camera trapping is a non-invasive wildlife monitoring technique that has generated more and more data in the last few years. Darwin Core, a well established standard in the biodiversity field, does not capture the full scope of camera trapping data (e.g. it does not express the camera setup) and it is therefore not ideal. To tackle this problem, the camera trap data package was developed, using Frictionless Data standards. The camera trap data package is both a **model** and a **format** to exchange camera trapping data, and it is designed to capture all the essential data and metadata of camera trap studies. + +The camera trap data package model includes: +* Metadata about the project +* Deployment info about the location, the camera and the time +* Media, including the file URL, the timestamp and whether it is a sequence +* Observations about the files (Is it blank? What kind of animal can we see? etc.) + +The format is similar to a Frictionless Data data package.
It includes: **metadata** about the project and the data package structure, **CSV files** for the deployments, the media captured in the deployments, and the observations in those media. + +If you would like to dive deeper and discover all about the Frictionless Data exchange format for camera trapping data, you can watch Peter Desmet’s presentation here: + + + +You can also find Peter’s presentation deck [here](https://speakerdeck.com/peterdesmet/camtrap-dp-using-frictionless-standards-for-a-camera-trapping-data-exchange-format). + +## Other agenda items from our hangout +We are part of the organisation of the FOSDEM DevRoom Open Research Tools & Technologies this year too. We would love to have someone from the Frictionless community giving a talk. If you are interested please let us know! We are very happy to help you structure your idea, if needed. Calls for participation will be issued soon. Keep an eye on [this page](https://fosdem.org/2022/news/2021-11-02-devroom-cfp/). + +# Join us next month! +Our next community call is one week earlier than usual, on December 16th, because of the winter holidays. Keith Hughitt is going to present some ideas around representing data processing flows as a DAG inside of a datapackage.json, and tools for interacting with and visualizing such DAGs. + +You can sign up [here](https://docs.google.com/forms/d/e/1FAIpQLSeuNCopxXauMkrWvF6VHqOyHMcy54SfNDOseVXfWRQZWkvqjQ/viewform?usp=sf_link). + +Do you want to share something with the community? Let us know when you sign up! + +# Call recording: +On a final note, here is the recording of the full call: + + + +As usual, you can join us on [Discord](https://discord.com/invite/j9DNFNw) or [Twitter](https://twitter.com/frictionlessd8a) to say hi or ask any questions. See you there!
diff --git a/site/blog/2021-12-07-3rd-cohort-fellows/README.md b/site/blog/2021-12-07-3rd-cohort-fellows/README.md new file mode 100644 index 000000000..6769d6f63 --- /dev/null +++ b/site/blog/2021-12-07-3rd-cohort-fellows/README.md @@ -0,0 +1,54 @@ +--- +title: Meet the 3rd cohort of Frictionless Fellows! +date: 2021-12-07 +tags: ['fellows'] +category: +image: /img/blog/fellows-cohort3.png +description: Meet the 3rd cohort of Frictionless Fellows, a great bunch of early career researchers on their way to become champions for reproducibility and open science +author: Sara Petti +--- +We are very excited to introduce you to the 3rd cohort of [Frictionless Data Reproducible Research Fellows](https://fellows.frictionlessdata.io/)! Over the coming months, this group of six early career researchers will be learning about open science, data management, and how to use Frictionless Data tooling in their work to make their data more open and their research more reusable. Keep an eye on them, as they are on their way to becoming champions of reproducibility! For now, go and read the introductory blogs they wrote about themselves to know more about them and their goals for this fellowship. + + + +**Hi, everyone! My name is Guo-Qiang Zhang**, and I am from China. Right after I finished my residency training in Pediatrics, I joined Prof Bright I. Nwaru’s group and started my doctoral studies at the Krefting Research Centre at the University of Gothenburg (Sweden). My doctoral project is to look at the effects of sex hormones on women’s health (especially asthma), utilizing epidemiological methods as well as evidence synthesis tools (e.g., systematic review, umbrella review). + +In my first year of doctoral studies, I had the opportunity to participate in the course “Reproducibility in Medical Research” led by Prof Nwaru. It was the first time I had heard about Open Science and research reproducibility.
As a “fresh” full-time doctoral student full of passion for medical research, I felt overwhelmed by waves of frustration when I learned about the reproducibility crisis. After spending some time with my frustration, I came to realize that I could in fact do something. In my first project, my colleagues and I conducted an umbrella review on a highly controversial topic: the impact of menopausal hormone therapy on women’s health. We put extensive effort into making the review process as transparent as possible: we developed protocols for data extraction and statistical analysis in advance, documented key steps of the review process, verified data in the published literature, and made all datasets and R scripts publicly available. +To keep reading about Guo-Qiang, click [here](https://fellows.frictionlessdata.io/blog/hello-guo-qiang/). + + + +**Hi all! My name is Victoria**. I'm a physics graduate student and recovering engineer living in Berlin. I grew up mainly in my family’s native country of Singapore, but consider myself an American, and am still workshopping a straightforward answer to “where are you from.” + +In my past life I worked in materials QA testing; currently, I’m at the German Aerospace Centre designing laser systems in the THz range - a type of non-visible light that hangs out on the electromagnetic spectrum between infrared and microwave. + +My Open Science journey has just begun and I’m stoked! I started to get interested in topics around data transparency and accessibility after a series of escalating frustrations with information dynamics in medical technology, beginning in my own field of gas sensing, then discovering similar disparities in tangential fields. +Read more about Victoria [here](https://fellows.frictionlessdata.io/blog/hello-victoria/). + + + +**Hello everybody! My name is Zarena**. I grew up in the Kyrgyz Republic, yet spent half of my life studying and working abroad.
Currently, I am a Research Assistant for the project Creating Culturally Appropriate Research Ethics in Central Asia (CARE) at Nazarbayev University in Kazakhstan. I am also a Mad activist and an interdisciplinary human rights researcher. I like to think of my research activities as going beyond academia to engage with, and have an effect on, broader socio-political structures. + +Although I believe that life would not progress without frictions, when it comes to science and research, I feel ‘frictions’ - manifested in the form of paywalls, bureaucratic and corporate management, or other structural barriers - should be deconstructed. So, I am joining the Frictionless Data Fellowship Programme with the purpose of learning more about open and FAIR research. +Learn more about Zarena [here](https://fellows.frictionlessdata.io/blog/hello-zarena/). + + + +**Hi everyone, my name is Melvin Ochieng**, and I'm a pathologist and up-and-coming soil scientist. I was born in Kenya in a town called Eldoret that is famous for producing Kenyan marathon champions. I was raised in Kenya in my early childhood days and in Tanzania afterwards. I like to consider myself both Kenyan and Tanzanian at heart because the two countries took part in molding the person I am today. I am currently a master's student at the University of Mohammed VI Polytechnic in Morocco, studying fertilizer science and technology. Over the past two years, my research focused on the potato cyst nematode (PCN), a quarantine pest first reported in Kenya in 2015. + +I'm excited to start this journey as a Frictionless Data fellow with my fellows for this cohort. I just recently found out about open science and I couldn't be more excited to learn more about this concept and how it will influence me as a researcher. Advancement in technology has opened up the world in so many ways and made possible extensive networks for collaborations globally.
Notably, the problems the world is facing today require a global, collaborative approach to solve. Therefore, reproducible research is of key importance in promoting this collaboration. +To learn more about Melvin, click [here](https://fellows.frictionlessdata.io/blog/hello-melvin/). + + + +**Hello! My name is Kevin Kidambasi** (KK). I was born and raised in Vihiga County of western Kenya. Currently, I live in Nairobi, the capital city of Kenya. I am a master’s student at Jomo Kenyatta University of Agriculture and Technology (JKUAT), registered in the Department of Biochemistry. My MSc research at the International Centre of Insect Physiology and Ecology (icipe) focuses on the role of haematophagous camel-specific biting keds (Hippobosca camelina) in disease transmission in Laisamis, Marsabit County of northern Kenya. My broad research interest is in studying host-pathogen interactions to understand infection mechanisms of diseases in order to discover novel control and treatment targets. + +I am interested in improving research reproducibility because it allows other researchers to confirm the accuracy of my data and correct any bias, as well as validate the relevance of the conclusions drawn from the results. It also allows data to be analyzed in different ways and thus give new insights and lead the research in new directions. In addition, improving research reproducibility would allow the scientific community to understand how the conclusions of a study were reached and pinpoint any mistakes in data analyses. In general, research reproducibility enhances openness, research collaboration, and data accessibility, which in turn increases public trust in science and hence encourages public participation in and support for research. This enables public understanding of how research is conducted and of its importance. +Read more about Kevin [here](https://fellows.frictionlessdata.io/blog/hello-kevin/). + + + +**Greetings! My name is Lindsay Gypin**, she/her.
I grew up in Denver, Colorado and began my career as a K-12 educator. I taught high school English and worked as a school librarian before becoming disillusioned with the politicization of public education and determining my skills were better suited for work in public libraries. Attending library school after having worked in libraries for so many years, I found myself drawn to courses in the research data management track of librarianship, and in qualitative research methods. I recently became a Data Services Librarian at the University of North Carolina Greensboro, where I hope to assist scholars in making their research data more open and accessible. + +For some time, I have wanted to build a reproducible workflow to uncover systemic bias in library catalogs. I’m hoping the Fellows Programme will help me build the foundation to do so. +To learn more about Lindsay, click [here](https://fellows.frictionlessdata.io/blog/hello-lindsay/). diff --git a/site/blog/2021-12-17-december-community-call/README.md b/site/blog/2021-12-17-december-community-call/README.md new file mode 100644 index 000000000..1f89b4ce2 --- /dev/null +++ b/site/blog/2021-12-17-december-community-call/README.md @@ -0,0 +1,42 @@ +--- +title: Frictionless Data December 2021 Virtual Hangout +date: 2021-12-17 +tags: ['events', 'community-hangout'] +category: events +image: /img/blog/data-dag-blog.png +description: At our Frictionless Data community call we had Keith Hughitt presenting some ideas around representing data processing flows as a DAG inside of a datapackage.json... +author: Sara Petti +--- +On the last Frictionless Data community call of the year, on December 16th, we had Keith Hughitt from the National Cancer Institute (NCI) sharing (and demoing) his ideas around representing data processing flows as a DAG (Directed Acyclic Graph) inside of a datapackage.json, and tools for interacting with and visualizing such DAGs.
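As a rough illustration of what this could look like (the `provenance` property and its node and edge fields below are hypothetical sketches of ours, not Keith's actual schema), a datapackage.json can carry a provenance DAG in a custom metadata property, and a few lines of Python can then walk it:

```python
# Hypothetical sketch: a datapackage.json descriptor with a custom
# "provenance" property encoding the processing history as a DAG.
# The property name and node/edge fields are illustrative only.
descriptor = {
    "name": "gene-expression",
    "resources": [{"name": "expr-normalized", "path": "data/expr_normalized.csv"}],
    "provenance": {
        "nodes": {
            "raw": {"desc": "Raw counts as downloaded"},
            "filtered": {"desc": "Low-count genes removed"},
            "normalized": {"desc": "Counts normalized and log-transformed"},
        },
        # Each edge points from an input dataset to the dataset derived from it.
        "edges": [["raw", "filtered"], ["filtered", "normalized"]],
    },
}

def lineage(dag, node):
    """Return the chain of ancestors leading to `node` (assumes a linear chain;
    a real DAG walker would handle multiple parents per node)."""
    parents = {dst: src for src, dst in dag["edges"]}
    chain = [node]
    while chain[-1] in parents:
        chain.append(parents[chain[-1]])
    return list(reversed(chain))

print(lineage(descriptor["provenance"], "normalized"))  # ['raw', 'filtered', 'normalized']
```

In the idea Keith describes, each node would in turn be associated with its own data package, and a web UI would render the same structure graphically.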
+ +Keith started thinking about this when he realised that cleaning and processing data are not neutral, self-evident processes; on the contrary, there is a lot of bias in them. The decisions made to clean the raw data are not generally included in the publications and are not made available in any transparent way. To allow collaboration and reproducibility, Keith thought of embedding an annotated data provenance DAG in a datapackage.json using the Frictionless specs. + +The basic process Keith has in mind to solve this problem is: +* The data provenance is encoded as a DAG in the metadata +* For each processing step in the workflow, the previous DAG is copied and extended +* Each node of the DAG represents a dataset at a particular stage of processing, and it can be associated with annotations and views +* Data packages would be generated and associated with each node +* A web UI reads the metadata and renders the DAG. + +If you would like to dive deeper and learn all about representing data processing flows as a DAG inside of a Data Package, you can watch Keith Hughitt’s presentation here: + + + +If you find this idea interesting, come and talk to Keith on [Discord](https://discord.com/invite/j9DNFNw)! He would love to hear what you think and whether you have other ideas in mind. + +## Other agenda items from our hangout +We are part of the organisation of the [FOSDEM](https://fosdem.org/) Thematic Track *Open Research Tools & Technologies* this year too. We would love to have someone from the Frictionless community give a talk. The deadline has been extended and you have until December 23rd to submit a talk proposal! More info on [this page](https://fosdem.org/2022/news/2021-11-02-devroom-cfp/). + +# Join us next month! +The next community call is next year, on January 21st. Francisco Alves, from the DPCKAN team who won the Frictionless Data hackathon back in October, is going to present their prototype and how it evolved.
+ +You can sign up [here](https://docs.google.com/forms/d/e/1FAIpQLSeuNCopxXauMkrWvF6VHqOyHMcy54SfNDOseVXfWRQZWkvqjQ/viewform?usp=sf_link). + +Do you want to share something with the community? Let us know when you sign up! + +# Call recording: +On a final note, here is the recording of the full call: + + + +As usual, you can join us on [Discord](https://discord.com/invite/j9DNFNw) or [Twitter](https://twitter.com/frictionlessd8a) to say hi or ask any questions. See you there! diff --git a/site/blog/2022-01-12-frictionless-dp-for-nih-cfde-project/README.md b/site/blog/2022-01-12-frictionless-dp-for-nih-cfde-project/README.md new file mode 100644 index 000000000..4df9b1d00 --- /dev/null +++ b/site/blog/2022-01-12-frictionless-dp-for-nih-cfde-project/README.md @@ -0,0 +1,41 @@ +--- +title: Frictionless data packages for the NIH CFDE project +date: 2022-01-12 +tags: ['pilot'] +category: +image: /img/blog/CFDE-logo.png +description: The Frictionless Data team has been working with Dr. Philippe Rocca-Serra on increasing data set discoverability and highlighting how disparate data can be combined... +author: Lilly Winfree (OKF), Philippe Rocca-Serra (University of Oxford) on behalf of the NIH-CFDE +--- +Scientific work produces a wealth of data every year - ranging from electrical signals in neurons to maze-running in mice to hospital readmission counts in patients. Taken as a whole, this data could be queried to discover new connections that could lead to new breakthroughs – how does that increased neuronal activity lead to better memory performance in a mouse, and does that relate to improved Alzheimer's outcomes in humans? The data is there, but it is often difficult to find and mobilize. + +A main reason this data is under-utilized is that datasets are often created in fragmented, domain-specific, or proprietary formats that aren’t easily used by others. The Frictionless Data team has been working with Dr.
Philippe Rocca-Serra on some of these key challenges – increasing data set discoverability and highlighting how disparate data can be combined. Establishing a dataset catalogue, or index, represents a solution for helping scientists discover data. But this requires some level of data standardization from different sources. To accomplish this, Dr. Rocca-Serra and the NIH Common Fund Data Ecosystem (NIH CFDE) opted to work with the Frictionless Data for Reproducible Research project at the Open Knowledge Foundation (OKF). + +The [NIH Common Fund Data Ecosystem](https://www.nih-cfde.org) project launched in 2019 with the aim of providing a data discovery portal in the form of a single venue where all data coordinating centers (DCC) funded by the NIH would index their experimental metadata. Therefore, the [NIH-CFDE](https://www.nih-cfde.org) is meant to be a data catalogue (Figure 1), allowing users to search the entire set of NIH funded programs from one single data aggregating site. Achieving this goal is no mean feat: it requires striking a balance between functional simplicity and useful detail. Data extraction from individual coordinating centers (for example the LINCS DCC) into the selected format should be as straightforward as possible, yet the underlying object model needs to be rich enough to allow meaningful structuring of the information. + + +![Figure 1](./figure1.png) + +> **Figure 1** shows the landing page of the NIH-CFDE data portal, which by default welcomes visitors with a histogram detailing the distribution of datasets by data type and file count. These settings may be changed to show sample counts, species, or anatomical location, for instance. +url: https://www.nih-cfde.org/ + + +Furthermore, it is highly desirable to ensure that structural and content validation is performed prior to upload, so only valid submissions are sent to the Deriva-based NIH CFDE catalogue.
How could the team achieve these goals while keeping the agility and flexibility required to allow iterations to occur, adjustments to be made, and user feedback to be integrated without major overhauls? + +Owing to the nature of the defined backend, the Deriva System, and the overall consistency of data stored by most DCCs, an object model was built around key objects, connected together via linked tables, very much following the [RDBMS / OLAP cubes paradigm](https://en.wikipedia.org/wiki/OLAP_cube). + +With this as a background, the choice of the [OKF Frictionless Data Package framework](https://frictionlessdata.io/standards/) came to the fore. The Frictionless specifications are straightforward to understand and supported by libraries available in different languages, allowing creation, I/O operations, and validation of object models as well as instance data. + +Frictionless specifications offer several features which assist many aspects of data interoperation and reuse. The tabular data is always shipped with a JSON-formatted definition of the field headers. Each field is typed to a data type but can also be marked up with an RDF type. Terminology harmonization relies on four resources: NCBI Taxonomy for species descriptions, UBERON for anatomical terms, OBI for experimental methods, and EDAM for data types and file formats. Regular expressions can be specified in the data model for input validation, and last but not least, the declaration of missing information can be made explicit and specific. The CFDE CrossCut Metadata Model (C2M2) relies on Frictionless specifications to define the objects and their relations (Figure 2). + +![Figure 2](./figure2.png) + +> **Figure 2** shows the latest version of the NIH CFDE data model, where the central objects that enable data discovery are identified: namely study, biomaterial, biosample, and file, each coming with a tight, essential set of attributes, some of which are associated with controlled vocabularies.
url: https://docs.nih-cfde.org/en/latest/c2m2/draft-C2M2_specification/ + + +Researchers can submit their metadata to the portal via the [Datapackage Submission System](https://docs.nih-cfde.org/en/latest/cfde-submit/docs/index.html)(Figure 3). By incorporating Frictionless specifications to produce a common metadata model and applying a thin layer of semantic harmonization on core biological objects, we are closer to the goal of making available an aggregated data index that increases visibility, reusability and clarity of access to a wealth of experimental data. The NIH CFDE data portal currently indexes over 2 million data files, mainly from RNA-Seq and imaging experiments from 9 major NIH programs: a treasure trove for data miners. + +![Figure 3](./figure3.png) + +> **Figure 3** shows the architecture of the software components supporting the overall operation, from ETL from the individual DCC into the NIH CFDE data model to the validation and upload component. +url: https://docs.nih-cfde.org/en/latest/cfde-submit/docs/ diff --git a/site/blog/2022-01-12-frictionless-dp-for-nih-cfde-project/figure1.png b/site/blog/2022-01-12-frictionless-dp-for-nih-cfde-project/figure1.png new file mode 100644 index 000000000..5e3ce9634 Binary files /dev/null and b/site/blog/2022-01-12-frictionless-dp-for-nih-cfde-project/figure1.png differ diff --git a/site/blog/2022-01-12-frictionless-dp-for-nih-cfde-project/figure2.png b/site/blog/2022-01-12-frictionless-dp-for-nih-cfde-project/figure2.png new file mode 100644 index 000000000..7b0a4bae0 Binary files /dev/null and b/site/blog/2022-01-12-frictionless-dp-for-nih-cfde-project/figure2.png differ diff --git a/site/blog/2022-01-12-frictionless-dp-for-nih-cfde-project/figure3.png b/site/blog/2022-01-12-frictionless-dp-for-nih-cfde-project/figure3.png new file mode 100644 index 000000000..d21336a12 Binary files /dev/null and b/site/blog/2022-01-12-frictionless-dp-for-nih-cfde-project/figure3.png differ diff --git 
a/site/blog/2022-01-18-frictionless-planet/README.md b/site/blog/2022-01-18-frictionless-planet/README.md new file mode 100644 index 000000000..9ea573917 --- /dev/null +++ b/site/blog/2022-01-18-frictionless-planet/README.md @@ -0,0 +1,30 @@ +--- +title: Frictionless Planet – Save the Date +date: 2022-01-18 +tags: ['events'] +category: events +image: /img/blog/facebook-color.png +description: You are invited to a brainstorming session to discuss how to make open climate data more useful +author: Open Knowledge Foundation +--- +Originally published: https://blog.okfn.org/2022/01/10/frictionless-planet-save-the-date/ + +We believe that an ecosystem of organisations combining tools, techniques and strategies to transform datasets relevant to the climate crisis into applied knowledge and actionable campaigns can get us closer to the Paris agreement goals. Today, scientists, academics and activists are working against the clock to save us from the greatest catastrophe of our times. But they are doing so under-resourced, siloed and disconnected – sometimes even facing physical threats, or achieving only very local, isolated impact. We want to reverse that by activating a cross-sectoral sharing process of tools, techniques and technologies to open the data and unleash the power of knowledge to fight against climate change. We already started with the Frictionless Data process – collaborating with researcher groups to [better manage ocean research data](https://frictionlessdata.io/blog/2020/09/16/goodtables-bcodmo/) and [openly publish cleaned, integrated energy data](https://frictionlessdata.io/blog/2020/03/18/frictionless-data-pilot-study/) – and we want to expand into an action-oriented alliance leading to cross-regional, cross-sectoral, sustainable collaboration. We need to use the best tools and the best minds of our times to fight the problems of our times.
+ +We consider you – and your organisation – leading thinkers, doers and communicators, leveraging technology and creativity in a unique way, with the potential to lead to meaningful change, and we would love to invite you to an initial brainstorming session as we think of common efforts, a sustainability path and a road of action for the next three years and beyond. + +What will we do together during this brainstorming session? Our overarching goal is to make open climate data more useful. To that end, during this initial session, we will conceptualise ways of cleaning and standardising open climate data, create more reproducible and efficient methods of consuming and analysing that data, and focus on ways to put this data into the hands of those who can truly drive change. + +# What to bring? + +* An effective effort or idea you feel proud of, at the intersection of digital technology and climate change. +* A data problem you are struggling with. +* Your best post-holidays smile. + +# When? + +13:30 GMT – 20 January – Registration open [here](https://www.eventbrite.co.uk/e/frictionless-planet-tickets-242708286017). **SOLD OUT** + +20:30 GMT – 21 January – Registration open [here](https://www.eventbrite.co.uk/e/frictionless-planet-tickets-242807803677). + +Limited slots, 25 attendees per session. diff --git a/site/blog/2022-02-02-january-community-call/README.md b/site/blog/2022-02-02-january-community-call/README.md new file mode 100644 index 000000000..01a42b63e --- /dev/null +++ b/site/blog/2022-02-02-january-community-call/README.md @@ -0,0 +1,51 @@ +--- +title: Frictionless Data January 2022 Virtual Hangout +date: 2022-02-02 +tags: ['events', 'community-hangout'] +category: events +image: /img/blog/DPCKAN-blog.png +description: At our Frictionless Data community call we heard a presentation on DPCKAN by Francisco Alves...
+author: Sara Petti +--- +On January 27th, for the first Frictionless Data community call of the year, we heard a presentation on the Data Package Manager for CKAN (DPCKAN) from Francisco Alves - leader of the proactive transparency policy in the Brazilian State of Minas Gerais. + +You may remember Francisco and DPCKAN from the [Frictionless Data Hackathon](https://frictionlessdata.io/blog/2021/10/13/hackathon-wrap/) back in October 2021, where his team won the hack with this very project. + +## So what is DPCKAN? + +It all started with the desire to publish all the raw data on the Fiscal Transparency portal of the State of Minas Gerais, which is built on a [CKAN](https://ckan.org/) instance, as open data following the Frictionless standards. + +Francisco and his team wanted to install a data package, and be able to work with it locally. They also wanted the ability to partially update a dataset already uploaded in CKAN without overwriting it (this particular feature was developed during the Hackathon). That’s how the Data Package Manager was born. It is now in active development. + +## And what’s next? + +Francisco and his team would like to: +* Make it possible to read a data package directly from CKAN +* Make CKAN Datastore respect the Frictionless table schema types +* Have human-readable metadata visualisation +* Contribute back upstream to Frictionless Data, CKAN, etc. + +Francisco also gave a quick demo of what DPCKAN looks like. You can watch the full presentation (including the demo): + + + +If you are interested in DPCKAN, come and talk to Francisco on [Discord](https://discord.com/invite/j9DNFNw)! You can also check out the presentation slides in [this GitHub repository](https://github.com/dados-mg/frictionless-hangout-jan2022). + +## Other agenda items from our hangout +This year as well, we are helping organise the [FOSDEM](https://fosdem.org/2022/) Thematic Track *Open Research Tools & Technologies*. +Join us on February 5th!
Among the many interesting talks, you will have the opportunity to catch senior developer Evgeny Karev presenting the newest Frictionless tool: [Livemark](https://fosdem.org/2022/schedule/event/open_research_livemark/). +Have a look at [the programme](https://fosdem.org/2022/schedule/track/open_research_tools_and_technologies/). The event is free of charge and there is no need to register. You can just drop in to the talks that you like. + +# Join us next month! +The next community call is on February 24th. We don’t have a presentation scheduled yet, so if you have a project that you would like to present to the community, this could be your chance! Email us if you have something in mind: sara.petti@okfn.org. + +You can sign up for the call already [here](https://docs.google.com/forms/d/e/1FAIpQLSeuNCopxXauMkrWvF6VHqOyHMcy54SfNDOseVXfWRQZWkvqjQ/viewform?usp=sf_link). + +Do you want to share something with the community? Let us know when you sign up! + +# Call recording: +On a final note, here is the recording of the full call: + + + +As usual, you can join us on [Discord](https://discord.com/invite/j9DNFNw) or [Twitter](https://twitter.com/frictionlessd8a) to say hi or ask any questions. See you there! diff --git a/site/blog/2022-02-07-libraries-hacked/README.md b/site/blog/2022-02-07-libraries-hacked/README.md new file mode 100644 index 000000000..9b487797c --- /dev/null +++ b/site/blog/2022-02-07-libraries-hacked/README.md @@ -0,0 +1,104 @@ +--- +title: Libraries Hacked +date: 2022-02-07 +tags: ["case-studies"] +category: case-studies +image: /img/blog/libraries-hacked-logo.png +description: Libraries Hacked is a project started in 2014 to promote the use of open data in libraries. That includes publishing data about libraries, as well as using other open datasets to enhance library data. +author: Dave Rowe +--- +I started the [Libraries Hacked](https://www.librarieshacked.org/) project in 2014.
Inspired by ‘tech for good’ open data groups and hackathons, I wanted to explore how libraries could leverage data for innovation and service improvement. I had already been involved in the work of the group [Bath Hacked](https://www.bathhacked.org/), and worked at the local Council in Bath, releasing large amounts of open data that was well used by the community. That included data such as live car park occupancy, traffic surveys, and air quality monitoring. + +Getting involved in civic data publishing led me to explore data software, tools, and standards. I’ve used the Frictionless standards of Table Schema and CSV Dialect, as well as the code libraries that can be utilised to implement these. Data standards are an essential tool for data publishers in order to make data easily usable and reproducible across different organisations. + +Public library services in England are managed by 150 local government organisations. The central government Department for Digital, Culture, Media and Sport (DCMS) holds responsibility for superintending those services. In September 2019 they convened a meeting about public library data. + +Library data, of many kinds, is not well utilised in England. + +* **Lack of public data**. There are relatively few library services sharing data about themselves for public use. +* **Low expectations**. There is no guidance on what data to share. Some services will publish certain datasets, but these will likely be different from the ones others publish. +* **Few standards**. The structure of any published data will be unique to each library service. For example, there are published lists of library branches from [Nottinghamshire County Council](https://www.opendatanottingham.org.uk/dataset.aspx?id=1) and [North Somerset Council](https://data.gov.uk/dataset/9342032d-ab88-462f-b31c-4fb07fd4da6f/libraries). Both are out of date, and have different fields, field names, field types, and file formats.
+ +The meeting discussed these issues, amongst others. The problems are understood, but difficult to tackle, as no organisation has direct responsibility for library data. There are also difficult underlying causes - low skills and funding being two major ones. + +Large-scale culture change will take many years. But to begin some sector-led collaborative work, a group of the attendees agreed to define the fields for a core selection of library datasets. The project would involve data practitioners from across English library services. + +The datasets would cover: + +* **Events**: the events that happen in libraries, their attendance, and outcomes +* **Library branches**: physical building locations, opening hours, and contact details +* **Loans**: the items lent from libraries, with counts, time periods, and categories +* **Stock**: the number of items held in libraries, with categories +* **Mobile library stops**: locations of mobile library stops, and their timetabled frequency +* **Physical visits**: how many people visit library premises +* **Membership**: counts of people who are library members, at small-area geographies. + +These can be split into three categories: + +* **Registers**. Data that should be updated when it changes. A list of library branches is a permanent register, to be updated when there are changes to those branches. +* **Snapshot**. Data that is released as a point-in-time representation. Library membership will be continually changing, but a snapshot of membership counts should be released at regular intervals. +* **Time-series**. Data that is new every time it is published. Loans data should be published at regular intervals, each published file being an addition to the existing set. + +To work on these, we held an in-person workshop at the DCMS offices. This featured an exciting interruption by a fire drill, and we had to relocate to a nearby café (difficult for a meeting with many people held in London!).
We also formed an online group using Slack to trial and discuss the data. + +## Schemas and Frictionless Data + +The majority of our discussions were practical rather than technical, such as what data would be most useful, whether or not it was currently used locally by services, and common problems. + +However, to formalise how data should be structured, it became clear that it would be necessary to create technical 'data schemas'. + +It can be easy to decide on the data you want, but fail to describe it properly. For example, we could provide people with a spreadsheet that included a column title such as 'Closed date'. I'd expect people to enter a date in that column, but we'd end up with all kinds of formats. + +The [Table Schema](https://specs.frictionlessdata.io/table-schema/) specification for defining data, from Frictionless Data, provided a good option for tackling this problem. Not only would it allow us to create a detailed description for the data fields, but we could use other Frictionless tools such as [Good Tables](https://goodtables.io/). This would allow library services to validate their data before publishing. Things like mismatched date formats would be picked up by the validator, and it would give instructions for how to fix the issue. We would also provide 'human-readable' guidance on the datasets. + +Frictionless Data is an [Open Knowledge Foundation](https://okfn.org/) project, and using tools from an internationally renowned body was also good practice. The schemas are UK-centric but could be adapted and reused by international library services. + +The schemas are all documented at [Public Library Open Data](https://schema.librarydata.uk/), including guidance, links to sample data, and the technical definition files. + +## Lessons learned + +The initial datasets are not comprehensive. They are designed to be a starting point, allowing more to be developed from service requirements.
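To make the 'Closed date' example from the schemas discussion above concrete, here is a minimal Table Schema fragment (an illustrative sketch, not one of the published library schemas) that pins the field to a single date format, together with a small stdlib-only check of the kind a validator such as Good Tables performs against the schema:

```python
from datetime import datetime

# Illustrative Table Schema fragment for the 'Closed date' example.
# This is a sketch of ours, not one of the published library schemas.
schema = {
    "fields": [
        {"name": "Library name", "type": "string", "constraints": {"required": True}},
        # Pinning the type and format prevents '01/02/2020' vs '2020-02-01' mix-ups.
        {"name": "Closed date", "type": "date", "format": "%Y-%m-%d"},
    ]
}

def check_closed_date(value, fmt="%Y-%m-%d"):
    """Return True if the value parses under the schema's date format."""
    try:
        datetime.strptime(value, fmt)
        return True
    except ValueError:
        return False

print(check_closed_date("2020-02-01"))  # True
print(check_closed_date("01/02/2020"))  # False: wrong format, would be flagged
```

A validator reading the schema would flag the second value and point publishers at the expected pattern, which is roughly the feedback loop described above.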
+ +The schemas are overly focussed towards 'physical' library services. It wasn't long after these meetings that public libraries adjusted to provide all-digital services due to lockdowns. There is nothing here to cover valuable usage datasets like the video views that library services receive on YouTube and Facebook. + +Some datasets have become even more important. The physical visits schema describes how to structure library footfall data, allowing for differences in collection methods and intervals. This kind of data is now in high demand, to analyse how library service visits recover. + +Some of the discussions we had were fascinating. It was important to involve the people who work with this data on a daily basis. They will know how easy it is to extract and manipulate, and many of the pitfalls that come with interpreting it. + +### Complexity + +There was often a battle between complexity and simplicity. Complex data is good: it is often more robust, for example through the use of external identifiers. But simplicity is also good, for data publishers and consumers. + +Public library services will primarily employ data workers who are not formally trained in using data. Where there are complex concepts (e.g. Table Schema itself), they are used because they make data publishing easier and more consistent. + +Public data should also be made as accessible as possible for the public, while being detailed enough to be useful. In this way the data schemas tend towards simplicity. + +### Standards not standardisation + +There is a difference between a standard format for data, and standardised data. The schemas are primarily aimed at getting data from multiple services into the same format, to share analysis techniques between library services, and to have usable data when merged with other services. + +There were some cases where we decided against standardising the actual data within data fields.
For example, there is a column in the loans and the stock datasets called 'Item type'. This is a category description of the library item, such as 'Adult fiction'. In some previous examples of data collection this data is standardised into a uniform set of categories, in order to make it easily comparable. + +That kind of exercise defies reality though. Library services may have their own set of categories, many of them interesting and unique. To use a standard set would mean that library services would have to convert their underlying data. As well as being extra work, it would mean a loss of data. It would also mean that library services would be unlikely to use the converted data themselves. Why use such data if it doesn’t reflect what you actually hold? + +The downside is that anyone analysing combined data would have to decide themselves how to compare data in those fields. However, that would at least be a clear task for the data analyst - and would most likely be an easier exercise to do in bulk. + +### Detail + +In my ideal world, data would be as detailed as possible. Instead of knowing how many items a library lent every month, I want that data for every hour. In fact I want to have every lending record! But realistically that would make the data unwieldy and difficult to work with, and wouldn’t be in line with the statistics libraries are used to. + +We primarily made decisions based upon what library services already do. In a lot of cases this was data aggregated into monthly counts, with fields such as library branch and item type used to break down that data. + +## The future + +The initial meetings were held over two years ago, and it seems longer than that! A lot has happened in the meantime. We are still in a global pandemic that, from a library perspective, has de-prioritised anything other than core services. + +However, there are good examples of the data in action.
Barnet libraries [publish 5 out of the 7 data schemas](https://open.barnet.gov.uk/dataset/e14dj/library-data) on a regular basis. + +I have also been creating tools that highlight how the data can be used, such as [Library map](https://www.librarymap.co.uk) and [Mobile libraries](https://www.mobilelibraries.org). + +There is national work underway that can make use of these schemas. The British Library is working on a [Single Digital Presence](https://www.artscouncil.org.uk/blog/single-digital-presence-libraries) project that will require data from library services in a standard form. + +Internationally there are calls for more public library open data. The International Federation of Library Associations and Institutions (IFLA) has [released a statement on Open Library Data](https://www.ifla.org/news/ifla-releases-statement-on-open-library-data/) calling for "governments to ensure, either directly or through supporting others, the collection and open publication of data about libraries and their use". It would be great to work with organisations like IFLA to promote schemas that could be reused internationally as well as for local services. There could also be the opportunity to use other Frictionless Data tools to aid in publishing data, such as [DataHub](https://datahub.io/). + +Hopefully in the future there can be workshops, training events, and conferences that allow these data schemas to be discussed and further developed.
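
As a footnote for the technically curious: the 'Closed date' problem described earlier is exactly what a Table Schema field definition guards against. Here is a minimal pure-Python sketch of that kind of date-format check (illustrative only - in practice the Frictionless tools perform this validation directly from the schema, and the column name and sample values below are just the example used earlier):

```python
from datetime import datetime

# The kind of mixed values you get when a spreadsheet column
# is labelled only "Closed date", with no format specified.
submitted = ["2021-03-01", "01/03/2021", "1st March 2021", ""]

def check_closed_date(value: str) -> bool:
    """Accept only ISO 8601 dates (YYYY-MM-DD), roughly what a
    Table Schema field of type "date" requires by default."""
    try:
        datetime.strptime(value, "%Y-%m-%d")
        return True
    except ValueError:
        return False

for value in submitted:
    status = "ok" if check_closed_date(value) else "needs fixing"
    print(f"{value!r}: {status}")
```

Running this flags everything except the ISO-formatted value, which is the kind of feedback a validator can give publishers before data is released.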
diff --git a/site/blog/2022-02-10-nasa-earth-mission-science/README.md b/site/blog/2022-02-10-nasa-earth-mission-science/README.md new file mode 100644 index 000000000..8f7475ada --- /dev/null +++ b/site/blog/2022-02-10-nasa-earth-mission-science/README.md @@ -0,0 +1,42 @@ +--- +title: Frictionless response to NASA Earth mission science data processing +date: 2022-02-10 +tags: ["news"] +category: news +image: /img/blog/eclipse_epc.png +description: Why Frictionless Data would benefit the ESO mission science data processing in its effort to find a more integrated approach to enhance data architecture efficiency and promote the open science principles... +author: Lilly Winfree & Sara Petti +--- +We are very excited to announce that we responded to a [request for information](https://sam.gov/opp/869f4051df38475591fa48fce5b0868d/view) that was recently published by NASA for its [Earth System Observatory (ESO)](https://science.nasa.gov/earth-science/earth-system-observatory). + +What is ESO? It is a set of (mainly satellite) missions providing information on planet Earth, which can guide efforts related to climate change, natural hazard mitigation, fighting forest fires, and improving real-time agricultural processes. + +With this request for information, ESO wants to gather expert advice on ways to find a more integrated approach to enhance data architecture efficiency and promote the open science principles. + +**We believe Frictionless Data would benefit the mission science data processing in several ways.** Here’s how: + +First, Frictionless automatically infers metadata and schemas from a data file, and allows users to edit that information. Creating good metadata is vital for downstream data users – if you can’t understand the data, you can’t use it (or can’t *easily* use it). Similarly, having a data schema is useful for interoperability, promoting the usefulness of datasets. + +The second Frictionless function we think will be helpful is data validation. 
Frictionless validates both the structure and content of a dataset, using built-in and custom checks. For instance, Frictionless will check for missing values, incorrect data types, or other constraints (e.g. temperature data points that exceed a certain threshold). If any errors are detected, Frictionless will generate a report for the user detailing the error so the user can fix the data during processing. + +Finally, users can write reproducible data transformation pipelines with Frictionless. Writing declarative transform pipelines allows humans and machines to understand the data cleaning steps and repeat those processes if needed in the future. Collectively, these functions create well documented, high quality, clean data that can then be used in further downstream analysis. + +We provided them with two examples of relevant collaboration: + +### Use Case 1 + +The [Biological and Chemical Oceanography Data Management Office (BCO-DMO)](https://www.bco-dmo.org/) cleans and hosts a wide variety of open oceanography data sets for use by researchers. A main problem for them was that the data being submitted was messy and not standardized, and it was time-consuming and difficult for their data managers to clean in a reproducible, documented way. They implemented Frictionless code to create a new data transformation pipeline that ingests the messy data, performs defined cleaning/transforming steps, documents those steps, and produces a cleaned, standardized dataset. It also produces a (human and machine-readable) document detailing all the transformation steps so that downstream users could understand what happened to the data and undo/repeat if necessary. This process not only helps data managers clean data faster and more efficiently, but also drives open science by making the hosted data more understandable and usable while preserving provenance.
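
To make the content checks described above concrete, here is a minimal pure-Python sketch of a threshold constraint on temperature values (the field names, rows, and threshold are invented for this example; the real Frictionless framework expresses such checks declaratively and produces a structured validation report with row and field positions):

```python
# Illustrative sketch of a content check: flag rows whose temperature
# value is missing, not numeric, or above a maximum threshold.
MAX_TEMP_C = 60.0

rows = [
    {"station": "A1", "temp_c": "21.4"},
    {"station": "A2", "temp_c": ""},       # missing value
    {"station": "A3", "temp_c": "n/a"},    # wrong data type
    {"station": "A4", "temp_c": "999.0"},  # exceeds threshold
]

def check_row(row_number, row):
    """Return an error message for a bad row, or None if it passes."""
    value = row["temp_c"]
    if value == "":
        return f"row {row_number}: missing value in 'temp_c'"
    try:
        temp = float(value)
    except ValueError:
        return f"row {row_number}: 'temp_c' is not a number: {value!r}"
    if temp > MAX_TEMP_C:
        return f"row {row_number}: 'temp_c' exceeds {MAX_TEMP_C}: {temp}"
    return None

# Collect one message per failing row, mirroring a validation report
# that points the user at the exact row and field to fix.
report = [err for i, row in enumerate(rows, start=1)
          if (err := check_row(i, row)) is not None]
for err in report:
    print(err)
```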
+ +More info on this use case [here](https://frictionlessdata.io/blog/2020/02/10/frictionless-data-pipelines-for-open-ocean/). + +### Use Case 2 + +[Dryad](https://datadryad.org/stash) is a biological data repository with a large user base. In our collaboration, their main issue was that they did not have the people-power to curate all the submitted datasets, so they implemented Frictionless tooling to help data submitters curate their data as they submit it. When data is submitted on the Dryad platform, Frictionless performs validation checks, and generates a report if any errors are found. The data submitter can then fix that error (e.g. there are no headers in row 1) and resubmit. Creating easy-to-understand error reports helps submitters understand how to create more usable, standardized data, and also frees up valuable time for the Dryad data management team. Ultimately, the Dryad data repository now hosts higher quality open science data. + +More info on this use case [here](https://frictionlessdata.io/blog/2021/08/09/dryad-pilot/). + +*** + +Are there other ways you think Frictionless Data could help the ESO project? Let us know! + +*Image used: Antarctica Eclipsed. NASA image courtesy of the DSCOVR EPIC team. NASA Earth Observatory images by Joshua Stevens, using Landsat data from the U.S. Geological Survey. Story by Sara E. Pratt.* diff --git a/site/blog/2022-03-03-community-call-february/README.md b/site/blog/2022-03-03-community-call-february/README.md new file mode 100644 index 000000000..c9172e7bb --- /dev/null +++ b/site/blog/2022-03-03-community-call-february/README.md @@ -0,0 +1,32 @@ +--- +title: Frictionless Data February 2022 Virtual Hangout +date: 2022-03-03 +tags: ['events', 'community-hangout'] +category: events +image: /img/blog/Community-call-webrecorder.png +description: At our Frictionless Data community call we heard about the effort to standardise the WACZ format from Ilya Kreymer and Ed Summers...
+author: Sara Petti +--- +On our second community call of the year, on February 24th, we had Ilya Kreymer and Ed Summers from [Webrecorder](https://webrecorder.net/) updating us on their effort in standardising the WACZ format, which they had already discussed with us at an early development stage in the community call of December 2020 (you can read the blog [here](https://frictionlessdata.io/blog/2020/12/17/december-virtual-hangout/#a-recap-from-our-december-community-call)). + +Webrecorder is a suite of open source tools and packages to capture interactive websites and replay them at a later time as accurately as possible. They created the WACZ format to have a portable format for archived web content that can be distributed and contain additional useful metadata about the web archives, using the Frictionless Data Package standard. + +Ed & Ilya also hoped to discuss with the community the possibility of signing these Data Packages, in order to provide an optional mechanism to make web archives bundled in WACZ more trusted, because a cryptographic proof of who the author of a Data Package is might be interesting for other projects as well. Unfortunately the call was rather empty. Maybe it was because of the change of time, but in case there are other reasons why you did not come, please let us know (by dropping an email at sara.petti@okfn.org or with a direct message on Discord/Matrix). + +We did record the call though, so in case anyone is interested in having that discussion, we could always try to have it asynchronously on [Discord](https://discord.com/invite/Sewv6av) or [Matrix](https://matrix.to/#/#frictionless-data:matrix.org). + + + +Their current proposal to create signed WACZ packages is summarised [on GitHub](https://github.com/webrecorder/wacz-auth-spec/blob/main/spec.md), so you can always reach out to them there as well. +# Join us next month! +Next community call is on March 31st.
We are going to hear from Johan Richer from Multi, who is going to present the latest prototype by Etalab and his theory of [portal vs catalogue](https://jailbreak.gitlab.io/investigation-catalogue/synthese.html#/3). + +You can sign up for the call [here](https://docs.google.com/forms/d/e/1FAIpQLSeuNCopxXauMkrWvF6VHqOyHMcy54SfNDOseVXfWRQZWkvqjQ/viewform?usp=sf_link). + +Do you want to share something with the community? Let us know when you sign up! +# Call recording: +On a final note, here is the recording of the full call: + + + +As usual, you can join us on [Discord](https://discord.com/invite/j9DNFNw), [Matrix](https://matrix.to/#/#frictionless-data:matrix.org) or [Twitter](https://twitter.com/frictionlessd8a) to say hi or ask any questions. See you there! diff --git a/site/blog/2022-03-09-save-our-planet/README.md b/site/blog/2022-03-09-save-our-planet/README.md new file mode 100644 index 000000000..4bc19927d --- /dev/null +++ b/site/blog/2022-03-09-save-our-planet/README.md @@ -0,0 +1,49 @@ +--- +title: Combining Data Skills, Knowledge, and Networks to Save our Planet +date: 2022-03-09 +tags: ['news'] +category: news +image: /img/blog/facebook-color.png +description: To start this year fresh and inspired, we convened two gatherings of climate researchers, activists, and organisations to brainstorm ways to collaborate to make open climate data more usable, accessible, and impactful... +author: Lilly Winfree +--- +During these past tumultuous years, it has been striking to witness the role that information has played in furthering suffering: misinformation, lack of data transparency, and closed technology have worsened the pandemic, increased political strife, and hurt climate policy.
Building on these observations, the team at Open Knowledge Foundation are refocusing our energies on how we can come together to empower people, communities, and organisations to create and use open knowledge to solve the most urgent issues of our time, including climate change, inequality, and access to knowledge. Undaunted by these substantial challenges, we entered 2022 with enthusiasm for finding ways to work together, starting with climate data. + +To start this year fresh and inspired, we convened two gatherings of climate researchers, activists, and organisations to brainstorm ways to collaborate to make open climate data more usable, accessible, and impactful. Over 30 experts attended the two sessions, from organisations around the world, and we identified and discussed many problems in the climate data space. We confirmed our initial theory that many of us are working in silos and that combining skills, knowledge and networks can result in a powerful alliance across tech communities, data experts and climate crisis activists. + +Now, we want to share with you some common themes from these sessions and ask: how can we work together to solve these pressing climate issues? + +A primary concern of attendees was **the disconnect between how (and why) data is produced and how data can (and should) be used**. This disconnect shows up as frictions for data use: we know that much existing “open” data isn’t actually usable. During the call, many participants mentioned they frequently can’t find open data, and even when they can find it, they can’t easily access it. Even when they can access the data, they often can’t easily use it. + +So why is it so hard to find, access, and use climate data? First, climate data is not particularly well standardised or curated, and data creators need better training in data management best practices.
Another issue is that many climate data users don’t have the technical training or knowledge required to clean messy data, greatly slowing down their research or policy work. + +### How will the Open Knowledge Foundation fix the identified problems? Skills, standards and community. + +An aim for this work will be to bridge the gaps between data creators and users. We plan to host several workshops in the future to work with both these groups, focusing on identifying both skills gaps and data gaps, then working towards capacity building. + +Our goal with capacity building will be to give a data platform to those most affected by climate change. How do we make it easier for less technical or newer data users to effectively use climate data? Our future workshops will focus on training data creators and users with the [Open Knowledge Frictionless Data tooling](https://frictionlessdata.io/) to better manage data, create higher quality data, and share data in impactful ways that will empower trained researchers and activists alike. For instance, the Frictionless toolbox can help data creators generate clean data that is easy to understand, share, and use, and the new Frictionless tool Livemark can help data consumers easily share climate data with impactful visualisations and narratives. + +Another theme that emerged from the brainstorm sessions was the role data plays in generating knowledge versus the role knowledge plays in generating data, and how this interplay can be maximised to create change. For instance, **we need to take a hard look at how "open" replicates cycles of inequalities**. Several people brought up the great work citizen scientists are doing for climate research, and how these efforts are rarely recognised by governments or other official research channels. So much vital data on local impacts of climate change are being lost as they aren’t being incorporated into official datasets.
How do we make data more equitable, ensuring that those most affected by climate change can use data to tell their stories? + + +We call on data organisations, climate researchers, and activists to join us in these efforts. How can we best work together to solve pressing climate change issues? Would you like to partner with us for workshops, or do you have other ideas for collaborations? Let us know! We would like to give our utmost thanks to the organisations that joined our brainstorming sessions for paving the way in this important work. To continue planning this work, we are creating a space to talk in our Frictionless Data community chat, and we invite all interested parties to join us. We are currently migrating our community from Discord to Slack. We encourage you to join the Slack channel, which will soon be populated with all Frictionless community members: https://join.slack.com/t/frictionlessdata/shared_invite/zt-14x9bxnkm-2y~uQcmmrqarSP2kV39_Kg +(We also have a Matrix mirror if you prefer Matrix: https://matrix.to/#/#frictionless-data:matrix.org) + +Finally, we’d like to share this list of resources that attendees shared during the calls: +* Patrick J McGovern Data for Climate 2022 Accelerator: [https://www.mcgovern.org/foundation-awards-4-5m-including-new-accelerator-grants-to-advance-data-driven-climate-solutions/](https://www.mcgovern.org/foundation-awards-4-5m-including-new-accelerator-grants-to-advance-data-driven-climate-solutions/) +* Open Climate: [https://www.appropedia.org/OpenClimate](https://www.appropedia.org/OpenClimate) +* Environmental Data and Governance Initiative: [https://envirodatagov.org/](https://envirodatagov.org/) +* Earth Science Information Partners: [https://www.esipfed.org/about](https://www.esipfed.org/about) +* Course on environmental data journalism by School of Data Brazil: [https://escoladedados.org/courses/jornalismo-de-dados-ambientais/](https://escoladedados.org/courses/jornalismo-de-dados-ambientais/) +* 
Catalogue of environmental databases in Brazil by School of Data Brazil: [https://bit.ly/dados-ambientais](https://bit.ly/dados-ambientais) +* A monthly meetup for small companies to share best practices (and data): [https://climatiq.io/blog/climate-action-net-zero-ambition-best-practices-for-sme](https://climatiq.io/blog/climate-action-net-zero-ambition-best-practices-for-sme) +* Reddit Datasets: [https://www.reddit.com/r/datasets/](https://www.reddit.com/r/datasets/) +* Hardware information standard: [https://barbal.co/the-open-know-how-manifest-specification-version-1-0/](https://barbal.co/the-open-know-how-manifest-specification-version-1-0/) +* Catalyst Cooperative: [https://github.com/catalyst-cooperative/pudl](https://github.com/catalyst-cooperative/pudl) and [https://zenodo.org/communities/catalyst-cooperative/](https://zenodo.org/communities/catalyst-cooperative/) +* Research Data Alliance Agriculture: [https://www.rd-alliance.org/rda-disciplines/rda-and-agriculture](https://www.rd-alliance.org/rda-disciplines/rda-and-agriculture) +* Open Climate Now!: [https://branch.climateaction.tech/issues/issue-2/open-climate-now/](https://branch.climateaction.tech/issues/issue-2/open-climate-now/) +* Metadata Game Changers: [https://metadatagamechangers.com](https://metadatagamechangers.com) +* Excellent lecture by J McGlade bridging attitudes etc. 
to the data story and behaviour change effects: [https://www.youtube.com/watch?v=eIRlLlrnmBM\&t=1561s](https://www.youtube.com/watch?v=eIRlLlrnmBM\&t=1561s) +* The Integrated-Assessment Modeling Community (IAMC) is developing a Python package "pyam" for scenario analysis & data visualization: [https://pyam-iamc.readthedocs.io](https://pyam-iamc.readthedocs.io) +* IIASA is hosting numerous scenario ensemble databases, see [https://data.ece.iiasa.ac.at](https://data.ece.iiasa.ac.at), most importantly the scenario ensemble supporting the quantitative assessment in the IPCC 1.5°C Special Report (2018), and a similar database will be released in two months together with IPCC AR6 WG3 +* Letter to IEA by the openmod community, [https://forum.openmod.org/t/open-letter-to-iea-and-member-countries-requesting-open-data/2949](https://forum.openmod.org/t/open-letter-to-iea-and-member-countries-requesting-open-data/2949) diff --git a/site/blog/2022-04-13-march-community-call/README.md b/site/blog/2022-04-13-march-community-call/README.md new file mode 100644 index 000000000..e995df0a1 --- /dev/null +++ b/site/blog/2022-04-13-march-community-call/README.md @@ -0,0 +1,37 @@ +--- +title: Frictionless Data March 2022 Virtual Hangout +date: 2022-04-13 +tags: ['events', 'community-hangout'] +category: events +image: /img/blog/Data-cataloguing-multi.png +description: At our Frictionless Data community call we had a discussion with Johan Richer from Multi.coop around his theory of portal vs catalogue. +author: Sara Petti +--- +At our last community call on March 31st, we had a discussion with Johan Richer from [Multi](https://www.multi.coop/) around his theory of portal vs catalogue. + +The discussion started with a presentation of the latest catalogue prototype by [Etalab](https://www.data.gouv.fr/fr/) currently in development: https://github.com/etalab/catalogage-donnees. 
Data cataloguing has become a major component of open data policies in France, but there are issues related to the maintainability of the catalogue and the traceability of the data. + +In the beginning the data producers were also the data publishers, and therefore the purpose of a portal was to catalogue, publish, and store the data. Recently the process became more complicated, and cataloguing became a prerequisite to publication. Instead of publishing by default, data producers want to make sure that the data is clean before injecting it into the portal. This started a new internal data management workflow that the portals were not made for. So how can we restore the broken link between catalogue and portal? Johan thinks data lineage is key. + +If you want to know more about it, you can go and have a look at Johan’s presentation [here](https://jailbreak.gitlab.io/investigation-catalogue/synthese.html#/3) (in French, but [here’s a shortcut to the Google translation](https://jailbreak-gitlab-io.translate.goog/investigation-catalogue/synthese.html?_x_tr_sl=fr&_x_tr_tl=en#/) if you’d rather have it in English), or watch the recording: + + + +# News from the community + +Our community chat has moved from Discord to Slack! In the community survey we ran last year, many people suggested moving to Slack, and the terms of service are definitely better (ranking B vs E for Discord, according to https://tosdr.org/). We will also be able to organise questions and answers better, and that will definitely be an added value for the community. + +To join our community chat: https://frictionlessdata.slack.com/messages/general + +# Join us next month! +Next community call is on April 28th. We are going to hear about open science practices at the Turing Way from former Frictionless Fellow Anne Lee Steele.
+You can sign up for the call [here](https://docs.google.com/forms/d/e/1FAIpQLSeuNCopxXauMkrWvF6VHqOyHMcy54SfNDOseVXfWRQZWkvqjQ/viewform?usp=sf_link). + +Do you want to share something with the community? Let us know when you sign up! + +# Call recording: +On a final note, here is the recording of the full call: + + + +Join us on [Slack](https://frictionlessdata.slack.com/messages/general) or [Twitter](https://twitter.com/frictionlessd8a) to say hi or ask any questions. See you there! diff --git a/site/blog/2022-05-05-april-community-call/README.md b/site/blog/2022-05-05-april-community-call/README.md new file mode 100644 index 000000000..13b42ff65 --- /dev/null +++ b/site/blog/2022-05-05-april-community-call/README.md @@ -0,0 +1,37 @@ +--- +title: Frictionless Data April 2022 Virtual Hangout +date: 2022-05-05 +tags: ['events', 'community-hangout'] +category: events +image: /img/blog/community-call-april-img.png +description: At our Frictionless Data community call we had a discussion around open science best practices and the Turing Way with former Frictionless Fellow Anne Lee Steele... +author: Sara Petti +--- + +At our last community call on April 28th, we had a discussion around open science best practices and the Turing Way with Anne Lee Steele, who, you might remember, was part of the [second cohort of Frictionless Fellows](https://frictionlessdata.io/blog/2020/09/01/hello-fellows-cohort2/). + +The Turing Way is an open source and community-led handbook for reproducible, ethical and collaborative research. It is composed of more than 240 pages created by ~300 researchers over the course of 3 years, written collaboratively via GitHub PRs - in contrast to the notion of single- or small-authorship papers. + +There is currently an effort to make the Turing Way develop meta-practices that can be applied to other areas as well; one example is documentation.
+ +A great outcome of the call was the proposal to have closer cooperation between the Frictionless Data community and the Turing Way’s, possibly developing a chapter on Open Infrastructures for research to contribute upstream. This chapter would set the context and provide a vision for how to evaluate tools and platforms with a Turing Way perspective on reproducibility, ethical alternatives and collaboration in practice. For more info about this proposal, check [this issue](https://github.com/alan-turing-institute/the-turing-way/issues/2337). + +If you want to know more about the Turing Way, have a look at the [project website](https://the-turing-way.netlify.app/welcome.html). You can also check out the full recording of the call: + + + +# News from the community + +* You’re all invited to join the Frictionless Fellows for a free virtual workshop on Open Science best practices on May 25 at 2pm UTC! +In this beginner-friendly workshop, Fellows will demonstrate how to use the Frictionless tools to make research data more understandable, usable, and open. You will learn how to use the Frictionless non-coding tools to manipulate metadata and schemas (and why that is important!) and how to validate data in a hands-on format. Learn more & sign up on the Fellows website: [https://fellows.frictionlessdata.io/](https://fellows.frictionlessdata.io/). + +* Reminder that our community chat has moved to Slack. Join us [there](https://join.slack.com/t/frictionlessdata/shared_invite/zt-17kpbffnm-tRfDW_wJgOw8tJVLvZTrBg). We now also have a fully operating [Matrix bridge](https://matrix.to/#/#frictionlessdata:matrix.okfn.org), so if you prefer you can join us from there as well. + +# Join us next month! +Next community call is on May 26th. We are going to hear Nick Kellett from Deploy Solutions explain to us how to build citizen science and climate change solutions using Frictionless.
+ +You can sign up for the call [here](https://docs.google.com/forms/d/e/1FAIpQLSeuNCopxXauMkrWvF6VHqOyHMcy54SfNDOseVXfWRQZWkvqjQ/viewform?usp=sf_link). + +Do you want to share something with the community? Let us know when you sign up. + +Join us on [Slack](https://join.slack.com/t/frictionlessdata/shared_invite/zt-17kpbffnm-tRfDW_wJgOw8tJVLvZTrBg) (also via [Matrix](https://matrix.to/#/#frictionlessdata:matrix.okfn.org)) or [Twitter](https://twitter.com/frictionlessd8a) to say hi or ask any questions. See you there! diff --git a/site/blog/2022-05-24-TU-Delft-training/README.md b/site/blog/2022-05-24-TU-Delft-training/README.md new file mode 100644 index 000000000..0e56d512a --- /dev/null +++ b/site/blog/2022-05-24-TU-Delft-training/README.md @@ -0,0 +1,52 @@ +--- +title: Workshop on FAIR and Frictionless Workflows for Tabular Data +date: 2022-05-24 +tags: ['events'] +category: events +image: /img/blog/TUDelft-training.png +description: On 28 and 29 April 4TU.ResearchData and the Frictionless Data team joined forces to organize an online workshop on “FAIR and frictionless workflows for tabular data” +author: Paula Martinez Lavanchy +--- +*Originally published on: https://community.data.4tu.nl/2022/05/19/workshop-on-fair-and-frictionless-workflows-for-tabular-data/* + +4TU.ResearchData and Frictionless Data joined forces to organize the workshop [“FAIR and frictionless workflows for tabular data”](https://community.data.4tu.nl/2022/03/22/workshop-fair-and-frictionless-workflows-for-tabular-data-online/). The workshop took place on 28 and 29 April 2022 in an online format. + +On 28 and 29 April we ran the workshop “FAIR and frictionless workflows for tabular data” in collaboration with members of the [Frictionless Data project team](https://frictionlessdata.io/). + +This workshop was envisioned as a pilot to create training on reproducible and FAIR tools that researchers can use when working with tabular data, from creation to publication.
The programme was a mixture of presentations, exercises and hands-on live coding sessions. We got a lot of inspiration from [The Carpentries](https://carpentries.org/) style of workshops and tried to create a safe, inclusive and interactive learning experience for the participants. + +The workshop started with an introduction to Reproducible and FAIR research given by [Eirini Zormpa](https://www.tudelft.nl/library/research-data-management/r/support/data-stewardship/contact/eirini-zormpa) (Trainer at 4TU.ResearchData), who also introduced learners to best practices for data organization of tabular data based on the [Data Carpentry for Ecologists lesson](https://datacarpentry.org/spreadsheet-ecology-lesson/). You can have a look at [Eirini’s slides here](https://4turesearchdata-carpentries.github.io/frictionless-data-workshop/data-organisation.html#1). + +The introduction was followed by a hands-on session exploring the [Frictionless Data framework](https://framework.frictionlessdata.io/). The Frictionless Data project has developed a full data management framework for Python to describe, extract, validate, and transform tabular data following the FAIR principles. [Lilly Winfree](https://www.linkedin.com/in/lilly-winfree-phd/) used Jupyter Notebook to introduce learners to the different tools, as it helps visualize the steps of the workflow. You can access the presentation and the notebook (and all the materials of the workshop) used by Lilly in [this GitHub repository](https://github.com/4TUResearchData-Carpentries/FAIR-and-Frictionless-workflows-for-tabular-data-). + +During the hands-on coding session, the learners practiced what they were learning on an example dataset from ecology (source of the dataset: [Data Carpentry for Ecologists](https://datacarpentry.org/ecology-workshop/)).
Later in the workshop, Katerina Drakoulaki, Frictionless Data fellow and helper, also gave an example of how to apply the framework tools to a [dataset coming from the computational musicology field](https://github.com/4TUResearchData-Carpentries/FAIR-and-Frictionless-workflows-for-tabular-data-/blob/main/03_Frictionless%20Data-MBn%20presentation_28-4-2022.pdf). + +We concluded the workshop with a presentation about [Data Publication](https://github.com/4TUResearchData-Carpentries/FAIR-and-Frictionless-workflows-for-tabular-data-/blob/main/04_FAIRandFRictionless%20workflows_Data_Publication.pdf) by [Paula Martinez Lavanchy](https://www.tudelft.nl/staff/p.m.martinezlavanchy/?cHash=38d458b8cd0f7bc5562cd130725220c6), Research Data Officer at 4TU.ResearchData. The presentation focused on why researchers should publish their data, how to select the data to publish and how to choose a good data repository that helps apply the FAIR principles to the researchers’ data. Paula also briefly demoed the features of 4TU.ResearchData using the [repository sandbox](https://sandbox.data.4tu.nl/). + +Besides the instructors, we also had a great team of helpers that were there in case the learners encountered any technical problems or had questions during the live coding session. We would like to give a big thank you to: Nicolas Dintzner – TU Delft Data Steward of the Faculty of Technology, Policy & Management, Katerina Drakoulaki – Postdoctoral researcher at NKUA & Frictionless Data Fellow, Aleksandra Wilczynska – Data Manager at TU Delft Library & the Digital Competence Center and Sara Petti – Project Manager at Open Knowledge Foundation. + +![image](./TUDelft-training.png) + +> **Image:** Top-left: Eirini Zormpa – Trainer of RDM and Open Science at TU Delft Library & 4TU.ResearchData, Top-right: Lilly Winfree – Product Manager of Frictionless Data at the Open Knowledge Foundation, Bottom: Katerina Drakoulaki – Postdoctoral researcher at NKUA & Frictionless Data fellow.

Nineteen learners joined the workshop. The audience had a broad range of backgrounds, with both researchers and support staff (e.g. data curators, research data managers, research software engineers, data librarians, etc.) represented. The workshop received quite positive feedback. Most of the learners’ expectations were fulfilled (79%) and they would recommend the workshop to other researchers (93%). It was also nice to know that most of the learners felt they could apply what they learned immediately and felt comfortable learning in the workshop.

![image](./TU-Delft-feedback.png)

> **Image:** Feedback from the training event

This feedback from the learners has helped us start thinking about how to improve future runs of the workshop. For example, we used less time than we had planned, which creates the opportunity to provide instruction on more features of the framework or to add more exercises or practice time. The learners also indicated they would have liked a common document (e.g. Google Doc or HackMD) to share reference material and to document the code that the instructor was typing in case they got lost.

Even though there is room for improvement, the learners appreciated the highly practical approach of the workshop, the space they had to practice what they learned and the overall quality of the Frictionless Data framework tools. Here are some of the strengths that learners mentioned:

*‘Hands-on, can start using what I learned immediately’*

*‘Practical experience with the framework and working on shared examples.’*

*‘Machine readable data and packaging for interoperability through frictionless’*

*‘Very clear content. Assured assistance in case of technical problems. Adherence to timelines with breaks. Provided many in-depth links.
Friendly atmosphere.’*

We at the 4TU.ResearchData team greatly enjoyed this collaboration that allowed us to help build the skills that researchers and other users of the repository need to make research data findable, accessible, interoperable and reusable (FAIR).

diff --git a/site/blog/2022-05-24-TU-Delft-training/TU-Delft-feedback.png b/site/blog/2022-05-24-TU-Delft-training/TU-Delft-feedback.png
new file mode 100644
index 000000000..c577748a4
Binary files /dev/null and b/site/blog/2022-05-24-TU-Delft-training/TU-Delft-feedback.png differ
diff --git a/site/blog/2022-05-24-TU-Delft-training/TUDelft-training.png b/site/blog/2022-05-24-TU-Delft-training/TUDelft-training.png
new file mode 100644
index 000000000..27cf0e9b3
Binary files /dev/null and b/site/blog/2022-05-24-TU-Delft-training/TUDelft-training.png differ
diff --git a/site/blog/2022-06-01-deploy-solutions/README.md b/site/blog/2022-06-01-deploy-solutions/README.md
new file mode 100644
index 000000000..e98f70c1a
--- /dev/null
+++ b/site/blog/2022-06-01-deploy-solutions/README.md
@@ -0,0 +1,45 @@
---
title: Frictionless Data May 2022 Community Call
date: 2022-06-01
tags: ['events', 'community-hangout']
category: events
image: /img/blog/Deploy-solutions.png
description: At our Frictionless Data community call we heard about citizen science and climate change solutions using Frictionless Data
author: Sara Petti
---
At our last community call on May 28th, we heard from Nick Kellett, Pan Khantidhara and Justin Mosbey of [Deploy Solutions](https://www.deploy.solutions/) about citizen science and climate change solutions using Frictionless Data.

Deploy Solutions builds software that can help with climate change disruptions, and they are using Frictionless Data to help! They develop cloud-hosted solutions using big data from satellites, and, since 2019, they have adopted a citizen focus in climate change research.
They researched and identified the main problems that prevent people and communities from acting in case of climate change disasters:

* Citizens feel overwhelmed by the volume of information received.
* They feel the information they get is not personalised to their needs.
* Authorities have difficulties directly collaborating and sharing information with citizens.

The solution they propose is the creation of a complete map-centred web application that can be built very quickly (~4 hours) with basic functionalities to provide basic and reliable information for disaster response, while allowing users to upload citizen science observations.

The app takes Earth observation imagery from satellites and associates it with imagery that citizens are taking on the ground, to check that the machine learning algorithms applied are correctly predicting the disaster extent.

It also visualises the data coming in to look for trends, gathering historic data and comparing it with what is predicted. The quantity of information needed for such an app is huge and, more often than not, it comes from different sources and does not follow any standards. It is therefore tricky to describe and validate. You might have guessed it by now: Frictionless Data is helping with that.

If you are interested in knowing more about Deploy Solutions and how they are using Frictionless Data, you can watch the full presentation (including Pan Khantidhara’s demo!):



If you have questions or feedback, you can let us know in [Slack](https://join.slack.com/t/frictionlessdata/shared_invite/zt-17kpbffnm-tRfDW_wJgOw8tJVLvZTrBg), or you can reach out to Deploy Solutions directly.

# Join us next month!
Next community call is on June 30th. Join us to meet the 3rd cohort of Frictionless Fellows and hear about their reproducibility and open science journey!

You can already sign up for the call [here](https://docs.google.com/forms/d/e/1FAIpQLSeuNCopxXauMkrWvF6VHqOyHMcy54SfNDOseVXfWRQZWkvqjQ/viewform?usp=sf_link).

Do you want to share something with the community? Let us know when you sign up.

Would you like to present at one of the next community calls? Please fill out [this form](https://forms.gle/AWpbxyiGESNSUFK2A).

Join our community on [Slack](https://join.slack.com/t/frictionlessdata/shared_invite/zt-17kpbffnm-tRfDW_wJgOw8tJVLvZTrBg) (also via [Matrix](https://matrix.to/#/#frictionlessdata:matrix.okfn.org)) or [Twitter](https://twitter.com/frictionlessd8a). See you there!

# Call Recording

On a final note, here is the recording of the full call:

diff --git a/site/blog/2022-07-04-june-community-call/README.md b/site/blog/2022-07-04-june-community-call/README.md
new file mode 100644
index 000000000..fd0406425
--- /dev/null
+++ b/site/blog/2022-07-04-june-community-call/README.md
@@ -0,0 +1,29 @@
---
title: Frictionless Data June 2022 Community Call
date: 2022-07-04
tags: ['events', 'community-hangout']
category: events
image: /img/blog/June-community-call.png
description: At our Frictionless Data community call we met the 3rd cohort of Frictionless Fellows
author: Sara Petti
---
On June 30th we had a very special community call. Instead of a project presentation, this time we had the chance to meet the 3rd cohort of [Frictionless Fellows](http://fellows.frictionlessdata.io/) and hear about their reproducibility and open science journey.

The fellows are a group of early career researchers interested in learning about open science and data management by using the Frictionless Data tools in their own research projects. Melvin Adhiambo, Lindsay Gypin, Kevin Kidambasi, Victoria Stanley, and Guo-Qiang Zhang are almost at the end of their nine-month fellowship.
During the past nine months they have learnt open science principles and how to discuss them (especially with colleagues who are not convinced yet!). They also learnt data management skills, and how to correctly use metadata and data schemas. Besides using the Frictionless Data browser tools, there was also a coding component to the fellowship, as they used the Frictionless Python tools as well.

The Fellows also ran workshops and wrote great blog posts during the last nine months. You can read them [here](http://fellows.frictionlessdata.io/blog).

If you are interested in knowing more about the fellows’ research fields and what being a Frictionless Data Fellow meant for them, go and watch the full recording of the call:



# Join us next month!
Next community call is on July 28th. Join us to hear David Raznick tell us about [Flatterer](https://flatterer.opendata.coop/), a new tool that helps convert JSON into tabular data.

You can already sign up for the call [here](https://docs.google.com/forms/d/e/1FAIpQLSeuNCopxXauMkrWvF6VHqOyHMcy54SfNDOseVXfWRQZWkvqjQ/viewform?usp=sf_link).

Do you want to share something with the community? Let us know when you sign up.

Would you like to present at one of the next community calls? Please fill out [this form](https://forms.gle/AWpbxyiGESNSUFK2A).

Join our community on [Slack](https://join.slack.com/t/frictionlessdata/shared_invite/zt-17kpbffnm-tRfDW_wJgOw8tJVLvZTrBg) (also via [Matrix](https://matrix.to/#/#frictionlessdata:matrix.okfn.org)) or [Twitter](https://twitter.com/frictionlessd8a). See you there!
diff --git a/site/blog/2022-07-05-frictionless-planet-conversation/README.md b/site/blog/2022-07-05-frictionless-planet-conversation/README.md new file mode 100644 index 000000000..80771e9d0 --- /dev/null +++ b/site/blog/2022-07-05-frictionless-planet-conversation/README.md @@ -0,0 +1,26 @@ +--- +title: Frictionless Planet and Lacuna Fund discuss gaps in climate datasets for machine learning +date: 2022-07-05 +tags: ['news'] +category: news +image: /img/blog/facebook-color.png +description: On 24 June we hosted a conversation with the Lacuna Fund about datasets for climate change... +author: Lilly Winfree +--- +Originally published on: https://blog.okfn.org/2022/07/05/frictionless-planet-and-lacuna-fund-discuss-gaps-in-climate-datasets-for-machine-learning/ + +On 24 June we hosted a conversation with the Lacuna Fund about datasets for climate change where we heard all about the Lacuna Fund’s recently launched Request for Proposals around Datasets for Climate Applications. We were joined by climate data users and creators from around the globe. This conversation is a part of Open Knowledge Foundation’s recent work on building a Frictionless Planet by using open tools and design principles to tackle the world’s largest problems, including climate change. + +A lacuna is a gap, a blank space or a missing part of an item. Today there are gaps in the datasets that are available to train and evaluate machine learning models. This is especially true when it comes to specific populations and geographies. The Lacuna Fund was created to support data scientists in closing those gaps in machine learning datasets needed to better understand and tackle urgent problems in their communities, like those linked to the climate crisis. 

Lacuna Fund is currently accepting proposals for two climate tracks: [Climate & Energy](https://s31207.pcdn.co/wp-content/uploads/sites/11/2022/04/Climate-and-Energy-RFP-Final.pdf) and [Climate & Health](https://s31207.pcdn.co/wp-content/uploads/sites/11/2022/04/Climate-and-Health-RFP-Final.pdf). The first track looks at the intersection between energy, climate, and green recovery, and the second focuses on health and strategies to mitigate the impact of the climate crisis. Proposals should focus on machine learning datasets: either collecting and annotating new data, annotating and releasing existing data, or expanding existing datasets and increasing usability. Lacuna Fund’s guiding principles include equity, ethics, and a participatory approach, and those values are very important for this work. Accordingly, proposals should include a plan for data management and licensing, privacy, and how the data will be shared. The target audience for this call is data scientists, with a focus on under-represented communities in Africa, Asia, and Latin America.

During the call, we also discussed whether participants have specific data gaps in their fields, like a lack of data on how extreme heat events affect human health. The response was a strong “Yes”! Participants described working in “data deserts” where there is often missing data, leading to less accurate machine learning algorithms. Another common issue is data quality and trust in data, especially from “official” sources. Tackling data transparency will be important for creating impactful climate policy. We’d like to ask you the same question: if your group could have access to one data set that would have a large impact on your work, what is that data set?

If you are interested in applying for the Lacuna Fund’s open requests for proposals (RFP), please check out these resources:

* Apply page: [https://lacunafund.org/apply/](https://lacunafund.org/apply/)
* Q&A (questions from potential applicants): https://s31207.pcdn.co/wp-content/uploads/sites/11/2022/06/QA-Climate-2022.pdf
* RFP for Climate & Energy: https://s31207.pcdn.co/wp-content/uploads/sites/11/2022/04/Climate-and-Energy-RFP-Final.pdf
* RFP for Climate & Health: https://s31207.pcdn.co/wp-content/uploads/sites/11/2022/04/Climate-and-Health-RFP-Final.pdf
* Applicant webinar recording: https://vimeo.com/711365252
* Proposals are due 17th July

diff --git a/site/blog/2022-07-14-flatterer/README.md b/site/blog/2022-07-14-flatterer/README.md
new file mode 100644
index 000000000..18cb1f440
--- /dev/null
+++ b/site/blog/2022-07-14-flatterer/README.md
@@ -0,0 +1,54 @@
---
title: "Announcing Flatterer: converting structured data into tabular data"
date: 2022-07-14
tags: ["case-studies"]
category: case-studies
image: /img/blog/Flatterer.png
description: "In this blog post, we introduce Flatterer — a new tool that helps convert JSON into tabular data."
author: Open Data Services
---
*Originally posted on: https://medium.com/opendatacoop/announcing-flatterer-converting-structured-data-into-tabular-data-c4652eae27c9*

*In this blog post, we introduce Flatterer - a new tool that helps convert JSON into tabular data. To hear more about Flatterer, [sign up](https://docs.google.com/forms/d/e/1FAIpQLSeuNCopxXauMkrWvF6VHqOyHMcy54SfNDOseVXfWRQZWkvqjQ/viewform) to join David Raznick at the Frictionless Data community call on July 28th.*

Open data needs to be available in formats people want to work with. In our experience at [Open Data Services](https://opendataservices.coop/), we’ve found that developers often want access to structured data (for example, JSON) while analysts are used to working with flat data (in CSV files or tables).
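To make the structured-versus-flat contrast concrete, here is a toy sketch of the kind of flattening involved (plain Python, not Flatterer's actual implementation; the `_link` and `_link_main` field names are illustrative only):

```python
import json

def flatten(records):
    # Toy flattener (illustration only, not Flatterer's code):
    # one main table, plus a child table per list-valued field,
    # joined back to the parent via a generated _link key.
    main, children = [], {}
    for i, record in enumerate(records):
        row = {"_link": str(i)}
        for key, value in record.items():
            if isinstance(value, list):
                table = children.setdefault(key, [])
                for j, item in enumerate(value):
                    child = {"_link": f"{i}.{key}.{j}", "_link_main": str(i)}
                    child.update(item)
                    table.append(child)
            else:
                row[key] = value
        main.append(row)
    return main, children

data = json.loads('[{"id": 1, "name": "Acme", "owners": [{"person": "a"}, {"person": "b"}]}]')
main, children = flatten(data)
```

A real flattener also has to handle arbitrary nesting depth, type inference and streaming, which is exactly where Flatterer's Rust implementation comes in.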
+ +More and more data is being published as JSON, but for most analysts this isn’t particularly useful. For many, working with JSON means needing to spend time converting the structured data into tables before they can get started. + +That’s where [Flatterer](https://github.com/kindly/flatterer) comes in. Flatterer is an opinionated JSON to CSV/XLSX/SQLITE/PARQUET converter. It helps people to convert JSON into relational, tabular data that can be easily analysed. It’s fast and memory efficient, and can be run either in the [command line](https://flatterer.opendata.coop/) or as a [Python library](https://deepnote.com/@david-raznick/Flatterer-Demo-FWeGccp_QKCu1WAEGQ0mEQ). The Python library supports creating data frames for all the flattened data, making it easy to analyse and visualise. + +## What does it do? +With Flatterer you can: + +* easily convert JSON to flat relational data such as CSV, XLSX, Database Tables, Pandas Dataframes and Parquet; +* convert JSON into data packages, so you can use Frictionless data to convert into any database format; +* create a data dictionary that contains metadata about the conversion, including fields contained in the dataset, to help you understand the data you are looking at; +* create a new table for each one-to-many relationship, alongside \_link fields that help to join the data together. + +## Why we built it +When you receive a JSON file where the structure is deeply nested or not well specified, it’s hard to determine what the data contains. Even if you know the JSON structure, it can still be time consuming to work out how to flatten the JSON into a relational structure for data analysis, and to be part of a data pipeline. +Flatterer aims to be the first tool to go to when faced with this problem. Although you may still need to handwrite code, Flatterer has a number of benefits over most handwritten approaches: +* it’s fast – written in Rust but with Python bindings for ease of use. 
It can be 10x faster than hand-written Python flattening;
* it’s memory efficient – Flatterer uses a custom streaming JSON parser, which means that a long list of objects nested within the JSON will be streamed, so less data needs to be loaded into memory at once;
* it gives you fast, memory efficient output to CSV/XLSX/SQLITE/PARQUET;
* it uses best practices learnt from our experience flattening JSON countless times, such as generating keys to link one-to-many tables to their parents.

## Using Flatterer in the OpenOwnership data pipeline
As an example, we’ve used [Flatterer](https://github.com/kindly/flatterer) to help [OpenOwnership](https://www.openownership.org/) create a data pipeline to make information about who owns and controls companies available in a [variety of data formats](https://bods-data.openownership.org/). In the example below, we’ve used Flatterer to convert beneficial ownership data from the Register of Enterprises of the Republic of Latvia and the OpenOwnership Register from JSON into CSV, SQLite, PostgreSQL, BigQuery and Datasette formats.

![img-1-flatterer](https://user-images.githubusercontent.com/74717970/179058338-08ce8ea1-9b1f-4c4c-b59c-64b04cd450f6.png)

Alongside converting the data into different formats, Flatterer has created a data dictionary that shows the fields contained in the dataset, alongside the field type and field counts. In the example below, we show how this dictionary interprets person_statement fields contained in the Beneficial Ownership Data Standard.

![img-2-flatterer](https://user-images.githubusercontent.com/74717970/179058526-19694210-514e-4215-bf9d-f6abc7ef5400.png)

Finally, you can see Flatterer has created special \_link fields to help with joining the tables together. The example below shows how the \_link field helps join [entity identifiers](https://medium.com/opendatacoop/why-do-open-organisational-identifiers-matter-46af05ab30a) to statements about beneficial ownership.
+ +![img-3-flatterer](https://user-images.githubusercontent.com/74717970/179058657-ae4ab534-9fdb-4d6d-ad59-56521f0218e0.png) + +## What’s next? +Next, we’ll be working to make Flatterer more user friendly. We’ll be exploring creating a desktop interface, improving type guessing for fields, and giving more summary statistics about the input data. We welcome feedback on the tool through [GitHub](https://github.com/kindly/flatterer/issues), and are really interested to find out what kind of improvements you'd like to see. + +More information about using Flatterer is available on [deepnote](https://deepnote.com/@david-raznick/Flatterer-Demo-FWeGccp_QKCu1WAEGQ0mEQ). To hear more about Flatterer, you can join David Raznick at Frictionless Data’s monthly community call on July 28th. + +#### At Open Data Services Cooperative we’re always happy to discuss how developing or implementing open data standards could support your goals. Find out more about [our work](https://opendataservices.coop/) and [get in touch](https://opendataservices.coop/#contact). diff --git a/site/blog/2022-07-20-Lilly-message-to-community/README.md b/site/blog/2022-07-20-Lilly-message-to-community/README.md new file mode 100644 index 000000000..6fdeba471 --- /dev/null +++ b/site/blog/2022-07-20-Lilly-message-to-community/README.md @@ -0,0 +1,31 @@ +--- +title: "Thank you from Lilly - A message to the community" +date: 2022-07-20 +tags: +category: +image: /img/blog/fosdem2020.jpeg +description: "I’m writing to let you all know that this is my final week working on Frictionless Data with Open Knowledge Foundation..." +author: Lilly Winfree +--- + +Dear Frictionless community, + +I’m writing to let you all know that this is my final week working on Frictionless Data with Open Knowledge Foundation. It has been a true pleasure to get to interact with you all over the last four years! 
Rest assured that Frictionless Data is in good hands with the team at Open Knowledge (Evgeny, Sara, Shashi, Edgar, and the rest of the OKF tech team). + +What’s next for me? I’m still staying in the data space, moving to product at data.world (did you know they export data as datapackages?)! Maybe you’ll see me presenting a demo at an upcoming Frictionless community call ;-) + +If you’ll allow me to reminisce for a few minutes, here are some of my favourite Frictionless memories from my time working on this project: + +**The Frictionless Hackathon:** In October 2021, we hosted the first-ever Frictionless Hackathon (virtually of course), and it was so cool to see all the projects and contributors from around the world! You can read all about it in [the summary blog here](https://frictionlessdata.io/blog/2021/10/13/hackathon-wrap/). Should we do another Hackathon? Let Sara know what you think! (Special shout-out to Oleg who set up the Hackathon software and inspired the entire event!) + +**Pilot collaborations**: We started our first Reproducible Research pilot collaboration with the Biological and Chemical Oceanographic Data Management Office (BCO-DMO) team in 2019, and learned so much from this implementation! This resulted in a new data processing pipeline for BCO-DMO data managers that used Frictionless to reproducibly clean and document data. This work ultimately led to the creation of the Frictionless Framework. You can check out all the other [Pilots on the Adoption page](https://frictionlessdata.io/adoption/#pilot-collaborations) too. + +**Fellows**: Getting to mentor and teach 17 Fellows was truly a spectacular experience. These current (and future) leaders in open science and open scholarship are people to keep an eye on – they are brilliant! You can read all about their experience as Fellows on [their blog](https://fellows.frictionlessdata.io/blog/). 

**The Frictionless Team at OKF**: I’ve been very lucky to get to work with the best team while being at OKF! Many of you already know how helpful and smart my colleagues are, but in case you don’t know, I will tell you! Evgeny has been carefully leading the technical development of Frictionless with a clear vision, making my job easy and fun. Sara has transformed how the community feels and works, which is no small feat! Shashi and Edgar have been working on the project for less than a year, but their contributions to the code base and their help answering questions have already made a big impact! I will miss working with these excellent humans, and all of you in the community who have made Frictionless a special place!

Thank you all for being a part of the Frictionless community and for working with me in the past! I wish you all the best, and maybe I will see some of you in Buenos Aires in April for [csv,conf,v7](https://csvconf.com/)?

Cheers!

-- [Lilly](https://lwinfree.github.io/)
\ No newline at end of file
diff --git a/site/blog/2022-08-02-community-call-july-flatterer/README.md b/site/blog/2022-08-02-community-call-july-flatterer/README.md
new file mode 100644
index 000000000..bc84b3851
--- /dev/null
+++ b/site/blog/2022-08-02-community-call-july-flatterer/README.md
@@ -0,0 +1,38 @@
---
title: Frictionless Data July 2022 Community Call
date: 2022-08-03
tags: ['events', 'community-hangout']
category: events
image: /img/blog/july-community-call-flatterer.png
description: At our Frictionless Data community call we heard from David Raznick about Flatterer, a tool he developed to convert JSON data into tabular data...
author: Sara Petti
---
At our last community call on July 28th, we heard David Raznick (an ex-OKFer, now working at [Open Data Services](https://opendataservices.coop/)) present Flatterer, a tool he developed to convert structured JSON data into tabular data using the Frictionless Data specifications.

David has been working with many different open data standards that rely on deeply nested JSON. To make the data in standard formats more human-readable, users often flatten JSON files with flattening tools, but the result they get is very large spreadsheets, which can be difficult to work with.

Flattening tools are also often used to unflatten tabular data into JSON. That way, the data, initially written in a more human-readable format, can then be used according to the standards. Unfortunately the result is not optimal: the output of flattening tools is often not user-friendly, and the user would probably still need to tweak it by hand, for example by modifying header names and/or the way tables are joined together.

Flatterer aims to make these processes easier and faster. It can convert your JSON file in the blink of an eye into the tabular format of your choice: CSV, XLSX, Parquet, PostgreSQL or SQLite. Flatterer will convert your JSON file into a main table, with keys to link one-to-many tables to their parents. That way the data is tidy and easier to work with.

If you are interested in knowing more about Flatterer, have a look at David’s presentation and demo:



You can also read more about the project here: https://flatterer.opendata.coop/, or have a look at [the project documentation](https://deepnote.com/@david-raznick/Flatterer-Demo-15678671-ca7f-40a0-aed5-6004190d2611).

# Join us next month!
Next community call is on August 25th. Frictionless Data developer Shashi Gharti will discuss with the community a tool she would like to add to the Frictionless Framework. Stay tuned to know more!

You can already sign up for the call [here](https://docs.google.com/forms/d/e/1FAIpQLSeuNCopxXauMkrWvF6VHqOyHMcy54SfNDOseVXfWRQZWkvqjQ/viewform?usp=sf_link).

Do you want to share something with the community? Let us know when you sign up.

Would you like to present at one of the next community calls?
Please fill out [this form](https://forms.gle/AWpbxyiGESNSUFK2A).

Join our community on [Slack](https://join.slack.com/t/frictionlessdata/shared_invite/zt-17kpbffnm-tRfDW_wJgOw8tJVLvZTrBg) (also via [Matrix](https://matrix.to/#/#frictionlessdata:matrix.okfn.org)) or [Twitter](https://twitter.com/frictionlessd8a). See you there!

# Call Recording
On a final note, here is the recording of the full call:


diff --git a/site/blog/2022-08-29-frictionless-framework-release/README.md b/site/blog/2022-08-29-frictionless-framework-release/README.md
new file mode 100644
index 000000000..db2455f10
--- /dev/null
+++ b/site/blog/2022-08-29-frictionless-framework-release/README.md
@@ -0,0 +1,366 @@
---
title: Welcome Frictionless Framework (v5)
date: 2022-08-29
tags: ["news"]
category: news
image: /img/blog/framework.png
description: We are very excited to announce the beta release of Frictionless Framework v5
author: Evgeny Karev
---
We're releasing the first beta of Frictionless Framework (v5)!

Since the initial Frictionless Framework release we had been collecting feedback and analyzing both high-level user needs and bug reports to identify shortcomings and areas that could be improved in the next version of the framework. Once that process was done, we started working on a new v5 with the goal of making the framework more bullet-proof and easier to maintain, and of simplifying the user interface. Today, this version is almost stable and ready to be published. Let's go through the main improvements we have made:

# Improved Metadata

This year we started working on the Frictionless Application; at the same time, we were thinking about next steps for the [Frictionless Standards](https://specs.frictionlessdata.io/). For both we need a well-defined and easy-to-understand metadata model. Partially it's already published as standards like Table Schema, and partially it's going to be published as standards like File Dialect and possibly validation/transform metadata.

## Dialect

In v4 of the framework we had the Control/Dialect/Layout concepts to describe resource details related to different formats and schemes, as well as tabular details like header rows. In v5 these have been merged into a single concept called Dialect, which is going to be standardised as a File Dialect spec. Here is an example:

#### YAML

```yaml
header: true
headerRows: [2, 3]
commentChar: '#'
csv:
  delimiter: ';'
```

A dialect descriptor can be saved and reused within a resource. Technically, it's possible to provide settings for different schemes and formats within one Dialect (e.g. for CSV and Excel), so it's possible to create e.g. one reusable dialect for a data package. The legacy CSV Dialect spec is supported and will be supported forever, so it's possible to provide CSV properties on the root level:

#### YAML

```yaml
header: true
delimiter: ';'
```

For performance and codebase maintainability reasons, some marginal Layout features have been removed completely, such as `skip/pick/limit/offsetFields/etc`. It's possible to achieve the same results using the Pipeline concept as a part of the transformation workflow.

Read an article about [Dialect Class](https://framework.frictionlessdata.io/docs/framework/dialect.html) for more information.

## Checklist

Checklist is a new concept introduced in v5. It's basically a collection of validation checks and a few other settings that make "validation rules" sharable. For example:

#### YAML

```yaml
checks:
  - type: ascii-value
  - type: row_constraint
    formula: id > 1
skipErrors:
  - duplicate-label
```

By saving and sharing this checklist, it's possible to tune data quality requirements for a data file or a set of data files. This concept provides the ability to create data quality "libraries" within projects or domains.
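For intuition, a `row_constraint` check like the one above evaluates its formula against each row and flags the rows where it fails. A minimal sketch of the assumed semantics (stdlib Python, not the Framework's actual implementation):

```python
# Toy illustration of assumed row-constraint semantics (not the
# Frictionless implementation): evaluate the formula per row and
# collect the rows where it does not hold.
rows = [{"id": 1, "name": "english"}, {"id": 2, "name": "spanish"}]
formula = "id > 1"
failed = [row for row in rows if not eval(formula, {}, dict(row))]
```

In the Framework itself, such rows surface as errors in the validation report rather than being returned directly.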
We can use a checklist for validation:

#### CLI

```bash
frictionless validate table1.csv --checklist checklist.yaml
frictionless validate table2.csv --checklist checklist.yaml
```

Here is a list of other changes:

| From (v4) | To (v5) |
|-----------|---------|
| Check(descriptor) | Check.from_descriptor(descriptor) |
| check.code | check.type |

Read an article about [Checklist Class](https://framework.frictionlessdata.io/docs/framework/checklist.html) for more information.

## Pipeline

In v4, Pipeline was a complex concept similar to a validation Inquiry. We reworked it for v5 to be a lightweight set of transform steps that can be applied to a data resource or a data package. For example:

#### YAML

```yaml
steps:
  - type: table-normalize
  - type: cell-set
    fieldName: version
    value: v5
```

Similar to the Checklist concept, Pipeline is a reusable (data-abstract) object that can be saved to a descriptor and used in a complex data workflow:

#### CLI

```bash
frictionless transform table1.csv --pipeline pipeline.yaml
frictionless transform table2.csv --pipeline pipeline.yaml
```

Here is another list of changes:

| From (v4) | To (v5) |
|-----------|---------|
| Step(descriptor) | Step.from_descriptor(descriptor) |
| step.code | step.type |

Read an article about [Pipeline Class](https://framework.frictionlessdata.io/docs/framework/pipeline.html) for more information.

## Resource

There are no changes in the Resource related to the standards, although by default the `type` property is now used instead of `profile` to mark a resource as a table. This can be changed using the `--standards v1` flag.

It's now possible to set a Checklist and a Pipeline as Resource properties, similar to Dialect and Schema:

#### YAML

```yaml
path: table.csv
# ...
checklist:
  checks:
    - type: ascii-value
    - type: row_constraint
      formula: id > 1
pipeline:
  steps:
    - type: table-normalize
    - type: cell-set
      fieldName: version
      value: v5
```

Or using dereference:

#### YAML

```yaml
path: table.csv
# ...
checklist: checklist.yaml
pipeline: pipeline.yaml
```

In this case validation/transformation will use them by default, providing the ability to ship validation rules and transformation pipelines within resources and packages. This is an important development for data publishers who want to define what they consider to be valid for their datasets, as well as for sharing raw data along with its cleaning pipeline steps:

#### CLI

```bash
frictionless validate resource.yaml # will use the checklist above
frictionless transform resource.yaml # will use the pipeline above
```

There are minor changes in the `stats` property. It now uses named keys to simplify hash distinction (md5/sha256 are calculated by default and, for performance reasons, this is no longer configurable as it was in v4):

#### Python

```python
from frictionless import describe

resource = describe('table.csv', stats=True)
print(resource.stats)
```

```python
{'md5': '6c2c61dd9b0e9c6876139a449ed87933',
 'sha256': 'a1fd6c5ff3494f697874deeb07f69f8667e903dd94a7bc062dd57550cea26da8',
 'bytes': 30,
 'fields': 2,
 'rows': 2}
```

Here is a list of other changes:

| From (v4) | To (v5) |
|-----------|---------|
| for row in resource: | for row in resource.row_stream |

Read an article about [Resource Class](https://framework.frictionlessdata.io/docs/framework/resource.html) for more information.
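The named `stats` keys are plain digests and counts over the file's raw bytes. As an illustration, the md5/sha256/bytes values can be reproduced with the standard library (the sample bytes here are made up, so the digests will not match the example output above):

```python
import hashlib

# Reproduce the hash/size portion of the v5 stats keys for a small
# CSV payload (hypothetical sample data, not the post's table.csv).
data = b"id,name\n1,english\n2,spanish\n"

stats = {
    "md5": hashlib.md5(data).hexdigest(),
    "sha256": hashlib.sha256(data).hexdigest(),
    "bytes": len(data),
}
```

The `fields` and `rows` keys, by contrast, come from parsing the table itself, which the framework does while reading the resource.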

## Package
There are no standards-related changes in the Package, although it's now possible to use resource dereference:

#### YAML

```yaml
name: package
resources:
  - resource1.yaml
  - resource2.yaml
```

Read an article about [Package Class](https://framework.frictionlessdata.io/docs/framework/package.html) for more information.

## Catalog
Catalog is a new concept: a collection of data packages that can be written inline or using dereference:

#### YAML

```yaml
name: catalog
packages:
  - package1.yaml
  - package2.yaml
```

Read an article about [Catalog Class](https://framework.frictionlessdata.io/docs/framework/catalog.html) for more information.

## Detector
Detector is now a metadata class (it wasn't in v4), so it can be saved and shared like other metadata classes:

#### Python

```python
from frictionless import Detector

detector = Detector(sample_size=1000)
print(detector)
```

```
{'sampleSize': 1000}
```

Read an article about [Detector Class](https://framework.frictionlessdata.io/docs/framework/detector.html) for more information.

## Inquiry
There are a few changes in the Inquiry concept, which is known for its use in the [Frictionless Repository](https://repository.frictionlessdata.io/) project:

| From (v4) | To (v5) |
|-----------|---------|
| inquiryTask.source | inquiryTask.path |
| inquiryTask.source | inquiryTask.resource |
| inquiryTask.source | inquiryTask.package |

Read an article about [Inquiry Class](https://framework.frictionlessdata.io/docs/framework/inquiry.html) for more information.

## Report

The Report concept has been significantly simplified by removing the `resource` property from `reportTask`. It has been replaced by the `name/type/place/labels` properties. Also, `report.time` is now `report.stats.seconds`.
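The reshaping just described can be sketched as a small helper. This is hypothetical illustrative code, not part of the framework: it flattens a v4 `reportTask.resource` into the new v5 properties and moves `time` under `stats.seconds`:

```python
# Hypothetical v4 -> v5 reportTask reshaping (illustrative only):
# the nested `resource` object becomes flat name/type/place/labels
# properties, and `time` moves under `stats.seconds`.
def migrate_report_task(v4_task: dict) -> dict:
    resource = v4_task.get("resource", {})
    fields = resource.get("schema", {}).get("fields", [])
    return {
        "valid": v4_task.get("valid"),
        "name": resource.get("name"),           # was reportTask.resource.name
        "type": resource.get("profile"),        # was reportTask.resource.profile
        "place": resource.get("path"),          # was reportTask.resource.path
        "labels": [f["name"] for f in fields],  # derived from resource.schema
        "stats": {"seconds": v4_task.get("time")},
    }

v4_task = {
    "valid": True,
    "time": 0.091,
    "resource": {
        "name": "table",
        "profile": "tabular-data-resource",
        "path": "table.csv",
        "schema": {"fields": [{"name": "id"}, {"name": "name"}]},
    },
}
print(migrate_report_task(v4_task))
```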
A `warnings: List[str]` property has been added to both `report` and `reportTask` to provide non-error information, such as reached limits:

#### CLI

```bash
frictionless validate table.csv --yaml
```

```yaml
valid: true
stats:
  tasks: 1
  warnings: 0
  errors: 0
  seconds: 0.091
warnings: []
errors: []
tasks:
  - valid: true
    name: table
    type: table
    place: table.csv
    labels:
      - id
      - name
    stats:
      md5: 6c2c61dd9b0e9c6876139a449ed87933
      sha256: a1fd6c5ff3494f697874deeb07f69f8667e903dd94a7bc062dd57550cea26da8
      bytes: 30
      fields: 2
      rows: 2
      warnings: 0
      errors: 0
      seconds: 0.091
    warnings: []
    errors: []
```

| From (v4) | To (v5) |
|-----------|---------|
| report.time | report.stats.seconds |
| reportTask.time | reportTask.stats.seconds |
| reportTask.resource.name | reportTask.name |
| reportTask.resource.profile | reportTask.type |
| reportTask.resource.path | reportTask.place |
| reportTask.resource.schema | reportTask.labels |

Read an article about [Report Class](https://framework.frictionlessdata.io/docs/framework/report.html) for more information.

## Schema
Changes in the Schema class:

| From (v4) | To (v5) |
|-----------|---------|
| Schema(descriptor) | Schema.from_descriptor(descriptor) |

## Error
There are a few changes in the Error data structure:

| From (v4) | To (v5) |
|-----------|---------|
| error.code | error.type |
| error.name | error.title |
| error.rowPosition | error.rowNumber |
| error.fieldPosition | error.fieldNumber |

## Types
Note that all the metadata entities that have multiple implementations in v5 are based on a unified `type` model.
It means that they use the `type` property to provide type information:

| From (v4) | To (v5) |
|-----------|---------|
| resource.profile | resource.type |
| check.code | check.type |
| control.code | control.type |
| error.code | error.type |
| field.type | field.type |
| step.type | step.type |

The new v5 version still supports the old notation in descriptors for backward compatibility.

# Improved Model
For historical reasons, Frictionless has been mixing declarative metadata and an object model for many years. Since the first implementation of the `datapackage` library, we have used different approaches to sync internal state in order to provide both interfaces: descriptor and object model. In Frictionless Framework v4 this technique was taken to a really sophisticated level, with special observable dictionary classes. It was quite smart and nice to use for quick prototyping in a REPL, but it was hard to maintain and error-prone.

In Framework v5 we finally decided to follow the "right way" of handling this problem and split descriptors and the object model completely.

## Descriptors
In the Frictionless world we deal with a lot of declarative metadata descriptors, such as packages, schemas, pipelines, etc. Nothing changes in v5 in this regard. For example, here is a Table Schema:

#### YAML

```yaml
fields:
  - name: id
    type: integer
  - name: name
    type: string
```

## Object Model
The difference comes when we create a metadata instance based on this descriptor. In v4 all the metadata classes were subclasses of `dict`, providing a mix between a descriptor and an object model for state management. In v5 there is a clear boundary between descriptor and object model.
All state is managed, as it should be, in a normal Python class using class attributes:

#### Python

```python
from frictionless import Schema

# Here we create a proper object model from a descriptor
schema = Schema.from_descriptor('schema.yaml')
# Here we export it back to a descriptor
descriptor = schema.to_descriptor()
```

There are a few important traits of the new model:

- it's not possible to create a metadata instance from an invalid descriptor
- it's almost always guaranteed that a metadata instance is valid
- it's not possible to mix dicts and classes in methods like `package.add_resource`
- it's not possible to export an invalid descriptor

This separation might require adding a few extra lines of code, but in the end it gives us much less fragile programs. It's especially important for software integrators who want to be sure that they write working code. At the same time, for quick prototyping and discovery Frictionless still provides high-level actions like the `validate` function, which are more forgiving regarding user input.

## Static Typing
One of the most important consequences of "fixing" state management in Frictionless is our new ability to provide static typing for the framework codebase. This work is in progress, but we have already added a lot of types and the codebase successfully passes `pyright` validation. We highly recommend enabling `pyright` in your IDE to see all the type problems in advance:

![type-error](https://user-images.githubusercontent.com/74717970/187296542-9ee89ed3-999e-44b3-b3e4-32f1df125f4e.png)

# Livemark Docs
We're happy to announce that we're finally ready to drop a JavaScript dependency for docs generation, as we have migrated it to Livemark. Moreover, Livemark's ability to execute scripts inside the documentation, and other nifty features like simple tabs or a reference generator, will save us hours and hours of writing better docs.

## Script Execution
![livemark-1](https://user-images.githubusercontent.com/74717970/187296761-09eb95c9-7245-4d75-8753-8b1bee635f62.png)

## Reference Generation
![livemark-2](https://user-images.githubusercontent.com/74717970/187296860-cb2cc587-c518-47c1-9534-0c1d3f57e552.png)

## Happy Contributors
We hope that the Livemark docs writing experience will make our contributors happier and help grow our community of Frictionless authors and users. Let's chat in our [Slack](https://join.slack.com/t/frictionlessdata/shared_invite/zt-17kpbffnm-tRfDW_wJgOw8tJVLvZTrBg) if you have questions or just want to say hi.

Read [Livemark Docs](https://framework.frictionlessdata.io/blog/2022/08-22-frictionless-framework-v5.html#:~:text=Read-,Livemark%20Docs,-for%20more%20information) for more information.
diff --git a/site/blog/2022-08-30-community-call-github-integration/README.md b/site/blog/2022-08-30-community-call-github-integration/README.md
new file mode 100644
index 000000000..9874bb370
--- /dev/null
+++ b/site/blog/2022-08-30-community-call-github-integration/README.md
@@ -0,0 +1,36 @@
---
title: Frictionless Data August 2022 Community Call
date: 2022-08-30
tags: ['events', 'community-hangout']
category: events
image: /img/blog/Shashis-presentation.png
description: At our Frictionless Data community call we had our very own Frictionless Data developer Shashi Gharti presenting to the community the new Frictionless GitHub integration...
author: Sara Petti
---
On the last community call on August 25th, we had our very own Frictionless Data developer Shashi Gharti presenting to the community the new Frictionless GitHub integration, which reads and writes data packages from/to GitHub repositories.

Besides reading and writing packages, the integration also allows the creation of containers for data packages: the [catalog](https://framework.frictionlessdata.io/docs/framework/catalog.html), a list of packages from multiple repositories in GitHub.
To select which repository you want to be in the catalog, you can use any GitHub qualifier. + +The Frictionless GitHub integration is part of the beta release of [Frictionless Framework version 5](https://frictionlessdata.io/blog/2022/08/29/frictionless-framework-release/). + +If you are interested in knowing more about the Frictionless GitHub integration, have a look at Shashi’s presentation and demo: + + + +You can also check out [Shashi’s slides](https://docs.google.com/presentation/d/1hhHEgEqzIkIpzCZ_FW-DjJtImxPI8jdi7Ck5OXiiDsM/edit?usp=sharing) or have a look at [the project documentation](https://framework.frictionlessdata.io/docs/portals/github.html#reference-portals.githubcontrol). If you use the Frictionless Framework v5 and its GitHub integration, please let us know! And if you have any feedback, feel free to open an issue in the [repository](https://github.com/frictionlessdata/framework) + +# Join us next month! +Next community call is on September 29th. Frictionless Data lead developer Evgeny Karev will be presenting the Frictionless Framework version 5, so make sure not to miss it! + +You can sign up for the call already [here](https://docs.google.com/forms/d/e/1FAIpQLSeuNCopxXauMkrWvF6VHqOyHMcy54SfNDOseVXfWRQZWkvqjQ/viewform?usp=sf_link). + +Do you want to share something with the community? Let us know when you sign up. + +Would you like to present at one of the next community calls? Please fill out [this form](https://forms.gle/AWpbxyiGESNSUFK2A). + +Join our community on [Slack](https://join.slack.com/t/frictionlessdata/shared_invite/zt-17kpbffnm-tRfDW_wJgOw8tJVLvZTrBg) (also via [Matrix](https://matrix.to/#/#frictionlessdata:matrix.okfn.org)) or [Twitter](https://twitter.com/frictionlessd8a). See you there! 
+ +# Call Recording +On a final note, here is the recording of the full call: + + diff --git a/site/design/README.md b/site/design/README.md new file mode 100644 index 000000000..62a8ddac7 --- /dev/null +++ b/site/design/README.md @@ -0,0 +1,103 @@ +--- +title: Frictionless +--- + +# {{ $page.frontmatter.title }} + +Our mission is to bring simplicity and gracefulness to the messy world of data. We build products for developers and data engineers. And those who aspire to become one. + +
+ + ## Light Logo + + +
+
+ +
+
+
+ +
+
+
+ +
+ + ## Dark Logo + + +
+
+ +
+
+
+ +
+
+
+ +
+ + ## Light Logotype + + +
+
+ +
+
+
+ +
+
+
+ +
+ + ## Dark Logotype + + +
+
+ +
+
+
+ +
+
+
+ + \ No newline at end of file diff --git a/site/development/architecture/README.md b/site/development/architecture/README.md new file mode 100644 index 000000000..99e029bf5 --- /dev/null +++ b/site/development/architecture/README.md @@ -0,0 +1,7 @@ +--- +title: Frictionless Architecture +--- + +# Frictionless Architecture + +![Design](/img/structure.png) diff --git a/site/development/process/README.md b/site/development/process/README.md new file mode 100644 index 000000000..2c6e71e9f --- /dev/null +++ b/site/development/process/README.md @@ -0,0 +1,97 @@ +--- +title: Frictionless Process +--- + +# Frictionless Process + +This document proposes a process to work on the technical side of the Frictionless Data project. The goal - have things manageable for a minimal price. + +## Project + +The specific of the project is a huge amount of components and actors (repositories, issues, contributors etc). The process should be effective in handling this specific. + +## Process + +The main idea to focus on getting things done and reduce the price of maintaining the process instead of trying to fully mimic some popular methodologies. We use different ideas from different methodologies. + +## Roles + +- Product Owner (PO) +- Product Manager (PM) +- Developer Advocate (DA) +- Technical Lead (TL) +- Senior Developer (SD) +- Junior Developer (JD) + +## Board + +We use a kanban board located at https://github.com/orgs/frictionlessdata/projects/2?fullscreen=true to work on the project. 
The board has the following columns (ordered by issue stage):
- Backlog - unprocessed issues without labels and processed issues with labels
- Priority - prioritized issues planned for the next iterations (estimated and assigned)
- Current - current iteration issues promoted during iteration planning (estimated and assigned)
- Review - issues under the review process
- Done - completed issues

## Workflow

Work on the project is a live process split into two-week iterations between iteration plannings (including retrospectives):
- Inside an iteration, assigned persons work on their current issues while a subset of roles processes and prioritizes issues
- During iteration planning, the team moves issues from the Priority column to the Current column and assigns persons. Instead of estimating individual issues, the assigned person approves the amount of work for the current iteration as a high-level estimation.

## Milestones

As milestones we use concrete achievements, e.g. from our roadmap. These could be tools or spec versions like "spec-v1". We don't use workflow-related milestones like "current" or "backlog", managing that via the board labeling system.

## Labels

Aside from internal waffle labels and helper labels like "question", we use core color-coded labels based on SemVer. The main point of processing issues from Inbox to Backlog is to add one of these labels, because we need to plan releases, breaking-change announcements, etc.:

![labels](https://cloud.githubusercontent.com/assets/557395/17673693/f6391676-632a-11e6-9971-945623b68e16.png)

## Assignments

Every issue in the Current column should be assigned to a person, meaning "this person should do some work on this issue to unblock it". The assigned person should re-assign an issue to the current blocker. This provides a good real-time overview of the project.

## Analysis

After planning, it's highly recommended that the assigned person writes a short plan for solving the issue (it could be a list of steps) and asks someone to check it. This work could also be done at earlier stages by a subset of roles.

## Branching

We use Git Flow with some simplifications (see the OKI coding standards). The master branch should always be "green" on tests, and new features/fixes should come from pull requests. Direct committing to master could be allowed for a subset of roles in some cases.

## Pull Requests

A pull request should be visually merged on the board with the corresponding issue using an "It fixes #issue-number" sentence in the pull request description (initial comment). If there is no corresponding issue for the pull request, it should be handled as an issue, with labeling etc.

## Reviews

After sending a pull request, the author should assign it to another person, "asking" for a code review. After the review, the code should be merged into the codebase by the pull request author (or a person having enough rights).

## Documentation

By default, documentation for a tool should be written in README.md, without additional files and folders. It should be clean and well-structured. The API should be documented in the code as docstrings. We compile project-level docs automatically.

## Testing

Tests should be written following the OKI coding standards. Start writing tests from the top (matching high-level requirements) and move to the bottom (if needed). The most high-level tests are implemented as test suites at the project level (integration tests between different tools).

## Releasing

We use SemVer for versioning and GitHub Actions for testing and releasing/deployments. We prefer a short release cycle (features and fixes can be released immediately). Releases should be configured using tags, based on the package examples workflow provided by OKI.
+ +The release process: +- merge changes to the main branch on GitHub + - use "Squash and Merge" + - use clean commit message +- pull the changes locally +- update the software version according to SemVer rules + - in Python projets we use `/assets/VERSION` + - in JavaScript projects we use standard `package.json` +- update a CHANGELOG file adding info about new feature or important changes +- run `main release` (it will release automatically) + +## References + +- [Open Knowledge International Coding Standards](https://github.com/okfn/coding-standards) +- [MUI Versioning Strategy](https://mui.com/versions/#versioning-strategy) diff --git a/site/development/roadmap/README.md b/site/development/roadmap/README.md new file mode 100644 index 000000000..6be3dbe51 --- /dev/null +++ b/site/development/roadmap/README.md @@ -0,0 +1,101 @@ +--- +title: Frictionless Roadmap +--- + +# Frictionless Roadmap + + + + diff --git a/site/hackathon/README(pt-br).md b/site/hackathon/README(pt-br).md new file mode 100644 index 000000000..f153c9282 --- /dev/null +++ b/site/hackathon/README(pt-br).md @@ -0,0 +1,31 @@ +--- +title: Frictionless Hackathon +--- + +# Junte-se à comunidade de Frictionless Data para dois dias de Hackathon virtual em 7 e 8 de Outubro! + +> As inscrições já estão abertas no formulário: [https://forms.gle/ZhrVfSBrNy2UPRZc9](https://forms.gle/ZhrVfSBrNy2UPRZc9) + +## O que é um hackathon? +Você irá trabalhar com um grupo de outros usuários do Frictionless para criar novos protótipos baseados no código aberto do projeto. Por exemplo: usar a nova ferramenta [Livemark](https://livemark.frictionlessdata.io/) para criar sites de storytelling de dados ou o [React Components](https://components.frictionlessdata.io/) para adicionar a camada de validação de dados da sua aplicação. + +## Quem pode participar deste hackathon? +Nós estamos buscando contribuições de todos os tamanhos e níveis de habilidades! 
Algumas habilidades que você poderá trazer incluem: programação em Python (ou outras linguagens também!), escrever documentação, gestão de projetos, design, muitas ideias e muito entusiasmo! Você estará em um time para que vocês possam aprender e ajudar uns aos outros. Você não precisa ser familiarizado com o Frictionless ainda - poderá aprender durante o evento. + +## Por que eu deveria participar? +Em primeiro lugar, porque vai ser divertido! Você conhecerá outros usuários do Frictionless e aprenderá algo novo. Esta também é uma oportunidade em que você terá o suporte contínuo da principal equipe do Frictionless para ajudar a realizar seu protótipo. Além disso, haverá prêmios (detalhes em breve). + +## Quando o hackathon ocorrerá? +O hackathon será virtual e ocorrerá de 7 a 8 de outubro. O evento terá início de madrugada em 7 de outubro e terminará de tarde em 8 de outubro no horário brasileiro (os horários exatos serão anunciados em breve). Isso permitirá que pessoas de todo o mundo participem em um período que seja bom para elas. Estaremos usando Github e Zoom para coordenar e trabalhar virtualmente. As equipes serão capazes de se formar desde antes para que possam se organizar e estarem prontos quando a hora chegar. + +## Quero me inscrever! +Use [este formulário](https://forms.gle/ZhrVfSBrNy2UPRZc9) para se registrar. O evento será gratuito e também teremos algumas bolsas para participantes que, de outra forma, não poderiam comparecer. Inscreva-se para uma bolsa de U$ 300 usando este [formulário de bolsa](https://forms.gle/jwxVYjDYs31t1YmKA +) + +## Quais projetos estarão no Hackathon? +Os projetos vão desde um plug-in GeoJSON para frictionless-py, à código Python para trabalhar com Datapackages em CKAN, à criação de um site estático para listar todos os conjuntos de dados Frictionless no GitHub até à criação de novos tutoriais para código Frictionless. 
+Todos os projetos serão adicionados ao painel do evento em [https://frictionless-hackathon.herokuapp.com/event/1#top](https://frictionless-hackathon.herokuapp.com/event/1#top), mantido por [DribDat](https://dribdat.cc/). +Interessado em trabalhar em seu próprio projeto? Envia-nos um email! + +## Eu tenho dúvidas... +Envie um e-mail para frictionlessdata@okfn.org se você tiver dúvidas ou quiser apoiar o Hackathon. diff --git a/site/hackathon/README.md b/site/hackathon/README.md new file mode 100644 index 000000000..1ca024a16 --- /dev/null +++ b/site/hackathon/README.md @@ -0,0 +1,39 @@ +--- +title: Frictionless Hackathon +--- + +# Join the Frictionless Data community for a two-day virtual Hackathon on 7-8 October! + +> Registration is now open using this form: [https://forms.gle/ZhrVfSBrNy2UPRZc9](https://forms.gle/ZhrVfSBrNy2UPRZc9) + +> See the Participation Guide at the bottom for more info! + +## What’s a hackathon? +You’ll work within a group of other Frictionless users to create new project prototypes based on existing Frictionless open source code. For example, use the new [Livemark](https://livemark.frictionlessdata.io/) tool to create websites that display data-driven storytelling, or use Frictionless React [Components](https://components.frictionlessdata.io/) to add data validation to your application. + +## Who should participate in this hackathon? +We’re looking for contributions of all sizes and skill levels! Some skills that you would bring include: coding in Python (other languages supported too!), writing documentation, project management, having ideas, design skills, and general enthusiasm! You’ll be in a team, so you can learn from each other and help each other. You don’t have to be familiar with Frictionless yet - you can learn that during the event. + +## Why should I participate? +First of all, it will be fun! You’ll meet other Frictionless users and learn something new. 
This is also an opportunity where you'll have the uninterrupted support of the Frictionless core team to help you realize your prototype. Also, there will be prizes (details to be announced later).

## When will the hackathon occur?
The hackathon will be virtual and occur on 7-8 October. The event will start at 9am CEST on 7 October, and will end at 6pm CEST on 8 October. This will allow people from around the world to participate during a time that works for them. We will be using Github and Zoom to coordinate and work virtually. Teams will be able to form before the event occurs so you can start coordinating early and hit the ground running.

## Sign me up!
Use [this form](https://forms.gle/ZhrVfSBrNy2UPRZc9) to register. The event will be free, and we will also have some scholarships for attendees who would otherwise be unable to attend. Apply for a $300 scholarship using this [scholarship form](https://forms.gle/jwxVYjDYs31t1YmKA).

## What projects will be at the Hackathon?
Projects will range from a GeoJSON plugin for frictionless-py, to Python code to work with Data Packages in CKAN, to creating a static site to list all the Frictionless datasets on GitHub, to creating new tutorials for Frictionless code.
All of the projects will be added to the event dashboard at [https://frictionless-hackathon.herokuapp.com/event/1#top](https://frictionless-hackathon.herokuapp.com/event/1#top), powered by [DribDat](https://dribdat.cc/).
Interested in working on your own project? Email us!

## I have questions...
Please email us at frictionlessdata@okfn.org if you have questions or would like to support the Hackathon.
+ +# Participation Guide + +([Here is a link to the Guide](https://docs.google.com/document/d/e/2PACX-1vReWY9N26SbveoCM7Ra4wEry8k7a5rCa3UzpBijfU_mmyME58DRDKmu0QUmx75mif4367IZdtLijFzO/pub)) + + diff --git a/site/introduction/README.md b/site/introduction/README.md new file mode 100644 index 000000000..80f9d9857 --- /dev/null +++ b/site/introduction/README.md @@ -0,0 +1,56 @@ +# Frictionless Data + +Get a quick introduction to Frictionless in "5 minutes". + +Frictionless Data is a progressive open-source framework for building data infrastructure -- data management, data integration, data flows, etc. It includes various data standards and provides software to work with data. + +:::tip +This introduction assumes some basic knowledge about data. If you are new to working with data we recommend starting with the first module, "What is Data?", at [School of Data](https://schoolofdata.org/). +::: + +## Why Frictionless? + +The Frictionless Data project aims to make it easier to work with data - by reducing common data workflow issues (what we call *friction*). Frictionless Data consists of two main parts, software and standards. + +![Structure](/img/introduction/structure.png) + +### Frictionless Software + +The software is based on a suite of data standards that have been designed to make it easy to describe data structure and content so that data is more interoperable, easier to understand, and quicker to use. There are several aspects to the Frictionless software, including two high-level data frameworks (for Python and JavaScript), 10 low-level libraries for other languages, like R, and also visual interfaces and applications. You can read more about how to use the software (and find documentation) on the [projects](/projects) page. + +For example, here is a validation report created by the [Frictionless Repository](https://repository.frictionlessdata.io/) software. 
Data validation is one of the main focuses of Frictionless Data and this is a good visual representation of how the project might help to reveal common problems working with data. + +![Report](/img/introduction/report.png) + +### Frictionless Standards + +The Standards (aka Specifications) help to describe data. The core specification is called a **Data Package**, which is a simple container format used to describe and package a collection of data files. The format provides a contract for data interoperability that supports frictionless delivery, installation and management of data. + +A Data Package can contain any kind of data. At the same time, Data Packages can be specialized and enriched for specific types of data so there are, for example, Tabular Data Packages for tabular data, Geo Data Packages for geo data, etc. + +To learn more about Data Packages and the other specifications, check out the [projects](/projects) page or watch this video to learn more about the motivation behind packaging data. + + + +## How can I use Frictionless? + +You can use Frictionless to describe your data (add metadata and schemas), validate your data, and transform your data. You can also write custom data standards based on the Frictionless specifications. For example, you can use Frictionless to: +* easily add metadata to your data before you publish it. +* quickly validate your data to check the data quality before you share it. +* build a declarative pipeline to clean and process data before analyzing it. + +Usually, new users start by trying out the software. The software gives you an ability to work with Frictionless using visual interfaces or programming languages. + +As a new user you might not need to dive too deeply into the standards as our software incapsulates its concepts. 
On the other hand, once you feel comfortable with the Frictionless software, you might start reading the Frictionless standards to get a better understanding of what happens under the hood, or to start creating your metadata descriptors more proficiently.

## Who uses Frictionless?

The Frictionless Data project has a very diverse audience, ranging from climate scientists, to humanities researchers, to government data centers.

![Audience](/img/introduction/audience.png)

During our project development we have had various collaborations with institutions and individuals. We keep track of our [Pilots](/tag/pilot) and [Case Studies](/tag/case-studies) with blog posts, and we welcome our community to share their experiences using our standards and software. Generally speaking, you can apply Frictionless in almost every field where you work with data. Your Frictionless use case could range from a simple data table validation to writing complex data pipelines.

## Ready for more?

As a next step, we recommend you start using one of our [Software](/projects) projects, get to know our [Standards](/projects), or read about other users' experiences in the [Pilots](/tag/pilot) and [Case Studies](/tag/case-studies) sections. Also, we welcome you to reach out on [Slack](https://join.slack.com/t/frictionlessdata/shared_invite/zt-17kpbffnm-tRfDW_wJgOw8tJVLvZTrBg) or [Matrix](https://matrix.to/#/#frictionlessdata:matrix.okfn.org) to say hi or ask questions!
diff --git a/site/people/README.md b/site/people/README.md
new file mode 100644
index 000000000..9cfc33cfb
--- /dev/null
+++ b/site/people/README.md
@@ -0,0 +1,741 @@
---
title: Frictionless People
---

# Frictionless People

People working on the Frictionless Data project.

Frictionless Data is a collective effort made by many amazing people working on various projects.
We're a group of software engineers, data scientists, community builders, and, in general, people who are interested in working towards a fair, free and open future.

:::tip
There are many ways to join the movement. If you are interested in working on Frictionless Data, don't hesitate to contact the Frictionless team using any of the contacts provided on this site.
:::

## Core Team

Frictionless Data has a small core team at the Open Knowledge Foundation and Datopian, and the project is supported by a large community of contributors.
+ +
+ +## Tool Fund Partners + +Frictionless data has funded international partners who have worked to develop various tools and libraries for the project, and they are featured below. You can read more about their individual projects on the [Adoption page](/adoption/). + +
+ +
+ +## Fellows Programme + +The Frictionless Data for Reproducible Research Fellows are early career researchers that are being trained to become champions of the Frictionless Data tools and approaches in their fields of research. You can read more about the Fellows on the [Fellows site](https://fellows.frictionlessdata.io/). + +
+ +
+ +## Code Contributors + +Frictionless Data is possible due to our awesome contributor community. You can click on the pictures below to see code contributions in detail. This is only a subset of all the people working on the project - please take a look on our [Github Organization](https://github.com/frictionlessdata) to view more. Are you interested in contributing? Check out our [Contributing page](/work-with-us/contribute/) to get started. + +

+[Contributor grid - one widget per repository: project, website, specs, datahub.io, frictionless-py, frictionless-js, datapackage-py, tableschema-py, datapackage-js, tableschema-js, datapackage-rb, tableschema-rb, datapackage-php, tableschema-php, datapackage-java, tableschema-java, datapackage-go, tableschema-go, datapackage-r, tableschema-r, datapackage-swift, tableschema-swift, datapackage-jl, tableschema-jl, datapackage-clj, tableschema-clj]
+
+
+
+
+
+
+
diff --git a/site/projects/README.md b/site/projects/README.md
new file mode 100644
index 000000000..5e0cd7a88
--- /dev/null
+++ b/site/projects/README.md
@@ -0,0 +1,135 @@
+---
+title: Frictionless Projects
+---
+
+# Frictionless Projects
+
+Open source projects for working with data.
+
+The Frictionless Data project provides a rich set of open source projects for working with data. There are tools, a visual application, and software for many programming platforms.
+
+:::tip
+This document is an overview of the Frictionless Projects - for more in-depth information, please click on one of the projects below and you will be redirected to the corresponding documentation portal.
+:::
+
+## Software and Standards
+
+Below is the list of core Frictionless projects developed by the core Frictionless Team:
+
+- **Frictionless Application (soon):** Data management application for the browser and desktop for working with tabular data.
+- **Frictionless Framework:** Python framework to describe, extract, validate, and transform tabular data.
+- **Livemark:** Static site generator that extends Markdown with charts, tables, scripts, and more.
+- **Frictionless Repository:** GitHub Action allowing you to validate tabular data on every commit to your repository.
+- **Frictionless Standards:** Lightweight yet comprehensive data standards such as Data Package and Table Schema.
+- **Datahub:** A web platform built on Frictionless Data that allows discovering, publishing, and sharing data.
+
+
+## Which software is right for me?
+
+Choosing the right tool for the job can be challenging. Here are our recommendations:
+
+### Visual Interfaces
+
+If you prefer to use a visual interface:
+
+- **Frictionless Application (coming soon):** We're working on our brand-new Frictionless Application that will be released in 2021. Until then, you can use [Data Package Creator](https://create.frictionlessdata.io/) to create and edit data packages and [Goodtables On-Demand](http://try.goodtables.io/) for data validation.
+- **Frictionless Repository:** For ensuring the quality of your data on GitHub, Frictionless provides [Frictionless Repository](https://repository.frictionlessdata.io/). This creates visual quality reports and validation statuses on GitHub every time you commit your data.
+- **Datahub:** For discovering, publishing, and sharing data, we have [Datahub](https://datahub.io/), which is built on Frictionless software. Using this software as a service, you can sign in to find, share, and publish quality data.
+
+### Command-line Interfaces
+
+If you prefer working at the command line:
+
+- **Frictionless Framework:** For describing, extracting, validating, and transforming data, Frictionless provides the [Frictionless Framework's](https://framework.frictionlessdata.io/) command-line interface. Using the "frictionless" command you can achieve many goals without needing to write Python code.
+- **Livemark:** For data journalists and technical writers, we have a project called [Livemark](https://livemark.frictionlessdata.io/). Using the "livemark" command in the CLI you can publish a website that incorporates Frictionless functions and is powered by Markdown articles.
+- **Datahub:** Frictionless provides a command-line tool called [Data](https://datahub.io/docs/features/data-cli), which is an important part of the Datahub project.
The "data" command is available in JavaScript environments and helps you interact with data stored on Datahub.
+
+### Programming Languages
+
+If you want to use or write your own Frictionless code:
+
+- **Frictionless Framework:** For general data programming in Python, the [Frictionless Framework](https://framework.frictionlessdata.io/) is the way to go. You can describe, extract, validate, and transform your data. It's also possible to extend the framework by adding new validation checks, transformation steps, etc. In addition, there is a lightweight version of the framework written in [JavaScript](https://github.com/frictionlessdata/frictionless-js).
+- **Frictionless Universe:** For Frictionless implementations in other languages like [R](https://github.com/frictionlessdata/frictionless-r) or Java, and for visual components, we have [Frictionless Universe](../universe/). Each library provides metadata validation and editing along with other low-level data operations like reading or writing tabular files.
+
+## Which standard is right for me?
+
+To help you pick a standard to use, we've categorized them according to how many files you are working with.
+
+### Collection of Files
+
+If you have more than one file:
+
+- **Data Package**: Use a [Data Package](https://specs.frictionlessdata.io/data-package/) for describing datasets of any file format. Data Package is a basic container format for describing a collection of data in a single "package". It provides a basis for convenient delivery, installation, and management of datasets.
+- **Fiscal Data Package**: For fiscal data, use a [Fiscal Data Package](https://specs.frictionlessdata.io/fiscal-data-package/). This lightweight and user-oriented format is for publishing and consuming fiscal data. It addresses how fiscal data should be packaged and provides means for publishers to best convey the meaning of the data - so it can be optimally used by consumers.
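To make the descriptor format concrete, here is a minimal sketch of a `datapackage.json` built with nothing but the Python standard library. The package name, resource names, and paths are invented for illustration and cover only a small subset of the Data Package spec:

```python
import json

# A minimal Data Package descriptor: one "package" grouping two resources.
# Names and paths are illustrative; the full spec defines many more
# properties (licenses, sources, contributors, etc.).
descriptor = {
    "name": "example-fiscal-dataset",
    "title": "Example Fiscal Dataset",
    "resources": [
        {"name": "budget", "path": "data/budget.csv", "format": "csv"},
        {"name": "spending", "path": "data/spending.csv", "format": "csv"},
    ],
}

# Serialize it as datapackage.json, the file that tools look for
# at the root of the package.
datapackage_json = json.dumps(descriptor, indent=2)
print(datapackage_json)
```

The descriptor travels alongside the data files, so any Frictionless-aware tool can discover and read the whole dataset from this single entry point.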
+ +### Individual File + +If you need to describe an individual file: + +- **Data Resource**: Use [Data Resource](https://specs.frictionlessdata.io/data-resource/) for describing individual files. Data Resource is a format to describe and package a single data resource of any file format, such as an individual table or file. It can also be extended for specific use cases. +- **Tabular Data Resource**: For tabular data, use the Data Resource extension called [Tabular Data Resource](https://specs.frictionlessdata.io/tabular-data-resource/). Tabular Data Resource describes a single *tabular* data resource such as a CSV file. It includes support for metadata and schemas to describe the data content and structure. +- **Table Schema**: To describe only the schema of a tabular data file, use [Table Schema](https://specs.frictionlessdata.io/table-schema/). Table Schema is a format to declare a schema for tabular data. The schema is designed to be expressible in JSON. You can have a schema as independent metadata or use it with a Tabular Data Resource. +- **CSV Dialect**: To specify the CSV dialect within a schema, use [CSV Dialect](https://specs.frictionlessdata.io/csv-dialect/). This defines a format to describe the various dialects of CSV files in a language agnostic manner. This is important because CSV files might be published in different forms, making it harder to read the data without errors. CSV Dialect can be used with a Tabular Data Resource to provide additional information. 
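As an illustration of how these pieces fit together, the sketch below pairs a Table Schema with a CSV Dialect and naively casts rows using only the Python standard library. The field names and sample data are hypothetical, and real tooling (such as the Frictionless Framework) handles far more cases:

```python
import csv
import io

# A Table Schema declares the fields of a tabular resource; a CSV Dialect
# describes how the raw file is delimited. Both are plain JSON-style objects.
schema = {
    "fields": [
        {"name": "id", "type": "integer"},
        {"name": "name", "type": "string"},
        {"name": "amount", "type": "number"},
    ]
}
dialect = {"delimiter": ";", "quoteChar": '"'}

raw = 'id;name;amount\n1;"Books";19.99\n'

# Read the file using the dialect, then cast each cell per the schema.
casts = {"integer": int, "number": float, "string": str}
reader = csv.reader(
    io.StringIO(raw),
    delimiter=dialect["delimiter"],
    quotechar=dialect["quoteChar"],
)
header = next(reader)
rows = [
    {f["name"]: casts[f["type"]](cell)
     for f, cell in zip(schema["fields"], row)}
    for row in reader
]
print(rows)  # [{'id': 1, 'name': 'Books', 'amount': 19.99}]
```

The dialect tells the reader how to split the raw text, while the schema gives each column a name and a type - which is exactly the division of labour the two standards formalize.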
diff --git a/site/universe/README.md b/site/universe/README.md
new file mode 100644
index 000000000..111a24370
--- /dev/null
+++ b/site/universe/README.md
@@ -0,0 +1,63 @@
+---
+title: Frictionless Universe
+---
+
+# Frictionless Universe
+
+Community-driven projects based on Frictionless Software and Standards:
+
+## Visual
+
+- [Data Curator](https://github.com/qcif/data-curator)
+- [Delimiter](https://github.com/frictionlessdata/delimiter)
+
+## Python
+
+- [datapackage-py](https://github.com/frictionlessdata/datapackage-py)
+- [tableschema-py](https://github.com/frictionlessdata/tableschema-py)
+- [datapackage-pipelines](https://github.com/frictionlessdata/datapackage-pipelines)
+- [frictionless-ckan-mapper](https://github.com/frictionlessdata/frictionless-ckan-mapper)
+
+## JavaScript
+
+- [frictionless-js](https://github.com/frictionlessdata/frictionless-js)
+- [datapackage-js](https://github.com/frictionlessdata/datapackage-js)
+- [tableschema-js](https://github.com/frictionlessdata/tableschema-js)
+- [datapackage-render-js](https://github.com/frictionlessdata/datapackage-render-js)
+- [frictionless-components](https://github.com/frictionlessdata/components)
+
+## R
+
+- [frictionless-r](https://github.com/frictionlessdata/frictionless-r)
+- [datapackage-r](https://github.com/frictionlessdata/datapackage-r)
+- [tableschema-r](https://github.com/frictionlessdata/tableschema-r)
+
+## Ruby
+
+- [datapackage-rb](https://github.com/frictionlessdata/datapackage-rb)
+- [tableschema-rb](https://github.com/frictionlessdata/tableschema-rb)
+
+## PHP
+
+- [datapackage-php](https://github.com/frictionlessdata/datapackage-php)
+- [tableschema-php](https://github.com/frictionlessdata/tableschema-php)
+
+## Java
+
+- [datapackage-java](https://github.com/frictionlessdata/datapackage-java)
+- [tableschema-java](https://github.com/frictionlessdata/tableschema-java)
+
+## Swift
+
+- [datapackage-swift](https://github.com/frictionlessdata/datapackage-swift)
+- 
[tableschema-swift](https://github.com/frictionlessdata/tableschema-swift) + +## Go + +- [datapackage-go](https://github.com/frictionlessdata/datapackage-go) +- [tableschema-go](https://github.com/frictionlessdata/tableschema-go) + +## Julia + +- [datapackage-jl](https://github.com/frictionlessdata/datapackage.jl) +- [tableschema-jl](https://github.com/frictionlessdata/tableschema.jl) diff --git a/site/work-with-us/code-of-conduct/README.md b/site/work-with-us/code-of-conduct/README.md new file mode 100644 index 000000000..c865b3164 --- /dev/null +++ b/site/work-with-us/code-of-conduct/README.md @@ -0,0 +1,69 @@ +# Code of Conduct + +## Introduction + +The goal of this Code of Conduct is to make explicit the type of participation that is expected, and the behaviour that is unacceptable. These guidelines are to be adhered to by all Frictionless Data team members, all partners on a given project, and all other participants. + +This Code of Conduct applies to all the projects that Frictionless Data hosts/organises and describes the standards of behaviour that we expect all our partners to observe when taking part in our projects. We expect all voices to be welcomed at our events and strive to empower everyone to feel able to participate fully. + +## This Code is applicable to +* All public areas of participation, including but not limited to discussion forums, mailing lists, issue trackers, social media, and in-person venues such as conferences and workshops. +* All private areas of participation, including but not limited to email and closed platforms such as Slack or Matrix. +* Any project that Frictionless Data leads on or partners in. + +## What we expect +The following behaviours are expected from all project participants, including Frictionless Data core team members, project partners, and all other participants. + +* Lead by example by being considerate in your actions and decisions. +* Be respectful in speech and action, especially in disagreement. 
+* Refrain from demeaning, discriminatory, or harassing behaviour and speech.
+* Take responsibility for your mistakes - we all make them.
+* Be mindful of your fellow participants. If someone is in distress, or if someone is in violation of these guidelines, reach out.
+
+## What we find unacceptable
+We do not tolerate harassment of participants at our events in any form. Harassment includes offensive verbal comments, deliberate intimidation, harassing photography or recording, inappropriate physical contact, and unwanted sexual attention. Anything that makes someone feel uncomfortable could be deemed harassment. For more information and examples about what constitutes harassment, please refer to [OpenCon’s Code of Conduct in Brief](https://www.opencon2018.org/code_of_conduct) and the [Gathering for Open Source Hardware’s examples of behaviour](http://openhardware.science/gosh-2017/gosh-code-of-conduct/).
+
+This non-exhaustive list shows examples of behaviours that are unacceptable from all participants:
+
+* Violence and threats of violence.
+* Derogatory comments of any form, including those related to gender and expression, sexual orientation, disability, mental illness, neuro(a)typicality, physical appearance, body size, race, religion, age, or socio-economic status.
+* Sexual images or behaviour.
+* Posting or threatening to post other people’s personally identifying information (“doxing”).
+* Deliberate misgendering or use of former names, or improper titles.
+* Inappropriate photography or recording.
+* Physical contact without affirmative consent.
+* Unwelcome sexual attention. This includes sexualised comments or jokes; inappropriate touching, groping, and unwelcome sexual advances.
+* Deliberate intimidation, stalking, or following (online or in person).
+* Sustained disruption of conference events, including talks and presentations.
+* Advocating for, or encouraging, any of the above behaviour.
+
+## Consequences of unacceptable behaviour
+Unacceptable behaviour from any participant in any public or private forum around projects we are involved in, including those with decision-making authority, will not be tolerated.
+
+Anyone asked to stop unacceptable behaviour is expected to comply immediately.
+
+If a participant engages in unacceptable behaviour, any action deemed appropriate will be taken, up to and including a temporary ban, permanent expulsion from participatory forums, or reporting to local law enforcement for criminal offences.
+
+## Reporting
+If you are subject to, or witness, unacceptable behaviour, or have any other concerns, please email frictionlessdata@okfn.org. We will handle all reports with discretion, and you can report anonymously if you wish using [this form](https://docs.google.com/forms/d/e/1FAIpQLSfoly-CZT9ZONcns4uG7BsoxGObRqgTlI6NdfvlYSCRVyy_QQ/viewform?usp=sf_link).
+
+In your report, please do your best to include:
+
+* Your contact information (unless you wish to report anonymously)
+* Identifying information (e.g. names, nicknames, pseudonyms) of the participant who has violated the Code of Conduct
+* The behaviour that was in violation
+* The approximate time of the behaviour
+* If possible, where the Code of Conduct violation happened
+* The circumstances surrounding the incident
+* Other people involved in the incident
+* If you believe the incident is ongoing, please let us know
+* If there is a publicly available record (e.g. mailing list record), please include a link
+* Any additional helpful information
+
+We will fully investigate any reports, follow up with the reporter (unless the report is anonymous), and work with the reporter to decide what action to take. If the complaint is about someone on the response team, that person will recuse themselves from handling the response.
+
+## Confidentiality
+All reports will be kept confidential.
When we discuss incidents with people who are reported, we will anonymize details as much as we can to protect reporter privacy. In some cases we may determine that a public statement will need to be made. If that’s the case, the identities of all victims and reporters will remain confidential unless those individuals instruct us otherwise. + +## License and attribution +This Code of Conduct is distributed under a [Creative Commons Attribution-ShareAlike license](https://creativecommons.org/licenses/by-sa/4.0/). It draws heavily on the [Open Knowledge Foundation Code of Conduct](https://okfn.org/about/code-of-conduct/), which is based on this [Mozilla Code of Conduct](https://wiki.mozilla.org/Participation/Community_Gatherings/Brazil_2016/Code_of_Conduct), the School of Data Code of Conduct, and the [csv,conf Code of Conduct](https://csvconf.com/coc/). diff --git a/site/work-with-us/contribute/README.md b/site/work-with-us/contribute/README.md new file mode 100644 index 000000000..138f7fafd --- /dev/null +++ b/site/work-with-us/contribute/README.md @@ -0,0 +1,25 @@ +# Contribute + +## Introduction + +We welcome contributions -- and you don't have to be a software developer to get involved! The first step to becoming a Frictionless Data contributor is to become a Frictionless Data user. Please read the following guidelines, and feel free to reach out to us if you have any questions. Thanks for your interest in helping make Frictionless awesome! + +## General Guidelines + +### Reporting a bug or issue: +We use [Github](https://github.com/frictionlessdata/) as a code and issues hosting platform. To report a bug or propose a new feature, please open an issue. For issues with a specific code repository, please open an issue in that specific repository's tracker on GitHub. For example: https://github.com/frictionlessdata/frictionless-py/issues + +### Give us feedback/suggestions/propose a new idea: +What if the issue is not a bug but a question? 
Please head to the [discussion forum](https://github.com/frictionlessdata/project/discussions). This is an excellent place to give us thorough feedback about your experience as a whole. In the same way, you may participate in existing discussions and make your voice heard.
+
+### Pull requests:
+For pull requests, we ask that you first create an issue and then open a pull request linked to that issue. Look for issues labelled "help wanted" or "first-time contributor." We welcome pull requests from anyone!
+
+### Specific guidelines:
+Each individual software project has more specific contribution guidelines that you can find in the README in the project's repository. For example: https://github.com/frictionlessdata/frictionless-js#developers
+
+## Documentation
+Are you seeking to advocate and educate people in the data space? We always welcome contributions to our documentation! You can help improve our documentation by opening pull requests if you find typos, have ideas to improve the clarity of the document, or want to translate the text into a non-English language. You can also write tutorials (like this one: [Frictionless Describe and Extract Tutorial](https://colab.research.google.com/drive/12RmGajHamGP5wOoAhy8N7Gchn9TmVnG-)). Let us know if you would like to contribute or if you are interested but need some help!
+
+## Share your work with us!
+Are you using Frictionless with your data? Have you spoken at a conference about using Frictionless? We would love to hear about it! We also have opportunities for blog writing and presenting at our monthly community calls - [contact us](mailto:frictionlessdata@okfn.org) to learn more!
\ No newline at end of file
diff --git a/site/work-with-us/events/README.md b/site/work-with-us/events/README.md
new file mode 100644
index 000000000..f1901101e
--- /dev/null
+++ b/site/work-with-us/events/README.md
@@ -0,0 +1,21 @@
+---
+title: Events Calendar
+---
+
+# Events Calendar
+
+## Introduction
+
+The Frictionless Data calendar lists our upcoming [events](/tag/events/), including webinars, virtual hangouts, and more.
+
+## Frictionless Data Monthly Community Call
+
+Join the vibrant Frictionless Data community on a call on the last Thursday of every month to hear about recent project developments! You can sign up here: [https://forms.gle/rtK7xZw5vrwouTE98](https://forms.gle/rtK7xZw5vrwouTE98)
+
+## Calendar
+
+:::tip
+You can add any upcoming event to your calendar by clicking on a specific event and **selecting copy to my calendar**.
+:::
+
+
diff --git a/site/work-with-us/get-help/README.md b/site/work-with-us/get-help/README.md
new file mode 100644
index 000000000..0ee9f835f
--- /dev/null
+++ b/site/work-with-us/get-help/README.md
@@ -0,0 +1,19 @@
+# Need Help?

We're happy to provide support! Please reach out to us by using one of the following methods:

+
+## Community Support
+
+You can ask any questions in our [Slack Community Chat room](https://join.slack.com/t/frictionlessdata/shared_invite/zt-17kpbffnm-tRfDW_wJgOw8tJVLvZTrBg) (the Chat room is also accessible via [Matrix](https://matrix.to/#/#frictionlessdata:matrix.okfn.org)). You can also start a thread in [GitHub Discussions](https://github.com/frictionlessdata/project/discussions). Frictionless is a big community of people with expertise in many different domains. Feel free to ask us any questions!
+
+## School of Data
+
+School of Data is a project overseen by the Open Knowledge Foundation consisting of a network of individuals and organizations working on empowering civil society organizations, journalists, and citizens with the skills they need to use data effectively. School of Data provides data literacy training and resources for learning how to work with data.
+
+[School of Data](https://schoolofdata.org)
+
+## Paid Support
+
+Professional, timely support is available on a paid basis from the creators of Frictionless Data at Datopian and the Open Knowledge Foundation. Please get in touch via:
+
+[Datopian](http://datopian.com/contact)
+Open Knowledge Foundation:
diff --git a/tailwind.config.js b/tailwind.config.js
new file mode 100644
index 000000000..c0e545be8
--- /dev/null
+++ b/tailwind.config.js
@@ -0,0 +1,25 @@
+module.exports = {
+  corePlugins: {
+    preflight: false,
+    gridTemplateColumns: true,
+  },
+  theme: {
+    screens: {
+      'xs': '360px',
+      'sm': '640px',
+      'md': '768px',
+      'lg': '1024px',
+      'xl': '1280px',
+      'xxl': '1480px',
+    },
+    extend: {
+      colors: {
+        'primary': '#d4e5d9',
+        'secondary': '#CFE1E6',
+        'dark-blue': '#96bdc8'
+      }
+    }
+  },
+  variants: {},
+  plugins: []
+}