diff --git a/1.3.0/404.html b/1.3.0/404.html index e1d4ae79..581ca559 100644 --- a/1.3.0/404.html +++ b/1.3.0/404.html @@ -541,6 +541,8 @@ + + @@ -604,7 +606,7 @@ - Installing Archipelago Drupal 9 on OSX (macOS) + Installing Archipelago Drupal 10 on OSX (macOS) @@ -624,7 +626,7 @@ - Installing Archipelago Drupal 9 on Ubuntu 18.04 or 20.04 + Installing Archipelago Drupal 10 on Ubuntu 18.04 or 20.04 @@ -644,7 +646,7 @@ - Installing Archipelago Drupal 9 on Windows 10/11 + Installing Archipelago Drupal 10 on Windows 10/11 @@ -679,6 +681,26 @@ +
This documentation was generated with Material for MkDocs. The repo/branch is at https://github.com/esmero/archipelago-documentation/tree/1.0.0, and the site is built using the following Github workflow: https://github.com/esmero/archipelago-documentation/blob/1.0.0/.github/workflows/ci.yml.
+This documentation was generated with Material for MkDocs. The repo/branch is at https://github.com/esmero/archipelago-documentation/tree/1.3.0, and the site is built using the following Github workflow: https://github.com/esmero/archipelago-documentation/blob/1.3.0/.github/workflows/ci.yml.
This software is a Metropolitan New York Library Council Open-Source initiative and part of the Archipelago Commons project.
@@ -2462,7 +2483,7 @@Archipelago Commons, or simply Archipelago, is an evolving Open Source Digital Objects Repository / DAM Server Architecture based on the popular CMS Drupal8/9
and released under GLP V.3 License
.
Archipelago Commons, or simply Archipelago, is an evolving Open Source Digital Objects Repository / DAM Server Architecture based on the popular CMS Drupal 9/10
and released under GPL V.3 License
.
Archipelago is a mix of deeply integrated custom-coded Drupal modules (made with care by us) and a curated and well-configured Drupal instance, running under a discrete and well-planned set of service containers.
Archipelago was dreamt as a multi-tenant, distributed, capable system (as its name suggests!) and can live isolated or in flocks of similar deployments, sharing storage, services, or -- even better -- just the discovery layer. Learn more about the different Software Services
used by Archipelago.
Archipelago's primary focus is to serve the greater GLAM community
by providing a flexible, consistent, and unified way of describing, storing, linking, exposing metadata and media assets. We respect identities and existing workflows. We endeavor to design Archipelago in ways that empower communities of every size and shape.
Finally, Archipelago tries to stay humble, slim, and nimble in nature with a small code base full of inline comments and @todos
. All of our work is driven by a clear and concise but thoughtful planned technical roadmap --updated in tandem with new releases.
Digital Culture of Metropolitan New York (DCMNY)
-Empire Archival Discovery Cooperative (EADC) Finding Aid Toolkit
@@ -2570,9 +2586,9 @@Frick Collection and Webrecorder Team Web Archives Collaboration
Hamilton College Library & IT Services
+Hamilton College Library & IT Services (https://litsdigital.hamilton.edu/)
Rensselaer Polytechnic Institute Libraries
-San Diego State University Libraries Digital Collections
-Consiglio Nazionale delle Ricerche / National Research Council of Italy
@@ -2657,7 +2667,7 @@ap:entitymapping
keyWill tell Archipelago that the JSON key images
should be treated as containing Entity IDs for a Drupal Entity of type (entity:file
) File. This has many interessting consequences. Archipelago, on edit/update/ingest will try (hard) to get a hold of Files with ID 1, 2 and 3. If in temporary storage Arhcipelago will move them to its final Permanent Location, will make sure Drupal knows those files are being used by this ADO, will run multiple Technical Metadata Extractions and classify internally the Files, adding everything it could learn from them. In practice, this means that Archipelago will write for you additional structures into the JSON enriching your Metadata.
Will tell Archipelago that the JSON key images
should be treated as containing Entity IDs for a Drupal Entity of type (entity:file
) File. This has many interesting consequences. Archipelago, on edit/update/ingest, will try (hard) to get a hold of Files with ID 1, 2 and 3. If they are in temporary storage, Archipelago will move them to their final Permanent Location, will make sure Drupal knows those files are being used by this ADO, will run multiple Technical Metadata Extractions and classify the Files internally, adding everything it could learn from them. In practice, this means that Archipelago will write additional structures into the JSON for you, enriching your Metadata.
Without this structure, the images
key would not trigger any logic but will of course still exist and can always still be used as a list of numbers while templating.
This also implies that for a persisted ADO with those values, if you edit the JSON and delete e.g. the number (integer or string representation of an integer) 3, Archipelago will disconnect the File Entity with ID 3 from this ADO, remove the enriched metadata, and mark the File as no longer being used by this ADO. If nobody else is using the File it will become temporary and eventually be automatically removed from the system, if that is set up at the Drupal filesystem level.
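To make the mapping itself concrete, here is a minimal, illustrative sketch of how such an ADO fragment could look (only the keys discussed above are shown; real ADOs usually also list other file keys such as documents or videos, and node-reference keys under entity:node, so treat the exact shape as an assumption rather than a canonical record):

```json
{
  "images": [1, 2, 3],
  "ap:entitymapping": {
    "entity:file": [
      "images"
    ]
  }
}
```

With the mapping present, the three integers under images are treated as Drupal File Entity IDs and enriched as described above; without it they would remain plain numbers usable only for templating.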
Using the same example ap:entitymapping
structure, the following snippet:
Archipelago Commons, or simply Archipelago, is an evolving Open Source Digital Objects Repository / DAM Server Architecture based on the popular CMS Drupal8/9
and released under GLP V.3 License
.
Archipelago is a mix of deeply integrated custom-coded Drupal modules (made with care by us) and a curated and well-configured Drupal instance, running under a discrete and well-planned set of service containers.
Archipelago was dreamt as a multi-tenant, distributed, capable system (as its name suggests!) and can live isolated or in flocks of similar deployments, sharing storage, services, or -- even better -- just the discovery layer. Learn more about the different Software Services
used by Archipelago.
Archipelago's primary focus is to serve the greater GLAM community
by providing a flexible, consistent, and unified way of describing, storing, linking, exposing metadata and media assets. We respect identities and existing workflows. We endeavor to design Archipelago in ways that empower communities of every size and shape.
Finally, Archipelago tries to stay humble, slim, and nimble in nature with a small code base full of inline comments and @todos
. All of our work is driven by a clear and concise but thoughtful planned technical roadmap --updated in tandem with new releases.
Ingesting Only Digital Objects or Both Digital Objects and Collections uses similar processes, with a few key differences. Click here to jump to the Ingesting both Digital Objects and Collections and/or Creative Work Series (Compound) Objects section of this guide page.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"AMIviaSpreadsheets/#ingesting-only-new-digital-objects","title":"Ingesting Only New Digital Objects","text":"From either the main Content page or the AMI Sets List page, select the 'Start an AMI set' button to begin.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"AMIviaSpreadsheets/#step-1-plugin-selection","title":"Step 1: Plugin Selection","text":"Select the Plugin type you will be using from the dropdown menu.
Spreadsheet Importer (if using local CSV file)
*The Remote JSON API Importer
and additional remote import source options (for other repository systems) will be covered in separate tutorials following future releases.
Select 'Create New ADOs' as the Operation you would like to perform.
If using Google Sheets Importer:
If using Spreadsheet Importer:
Select the data transformation approach--how your source data will be transformed into ADO (Archipelago Digital Object) Metadata.
You will have 3 options for your data transformation approach:
You will also need to select which columns contain filenames, entities, or URLs where files can be fetched from. Select what columns correspond to the Digital Object types found in your spreadsheet source.
Lastly, for this step, you will need to select the destination Fields and Bundles for your New ADOs. If your spreadsheet source only contains Digital Objects, select Strawberry (Descriptive Metadata source) for Digital Object
Select your global ADO mappings.
ismemberof
collection membership relationship predicate column if applicable. For AMI source spreadsheets containing only non-Creative Work Series (Compound) Objects, only ismemberof
can be mapped properly. To use ispartof
relationship setup, please refer to the steps outlined in the separate section below.
- `ismemberof` and/or `ispartof` (and/or whatever predicate corresponds with the relationship you are mapping)
- these columns can be used to connect related objects using the object-to-object relationship that matches your needs
- in default Archipelago configurations, `ismemberof` is used for Collection Membership and `ispartof` is used for Parent-Child Object Relationships (so a Child ADO would reference the Parent ADO in `ispartof`)
- these columns can hold 3 types of values
  - empty (no value)
  - an integer to connect an object to another object's corresponding row in the same spreadsheet/CSV
    * Ex: Row 2 corresponds to a Digital Object Collection; for a Digital Object corresponding to Row 3, the 'ismemberof' column contains a value of '2'. The Digital Object in Row 3 would be ingested as a member of the Digital Object Collection in Row 2.
  - a UUID to connect with an already ingested object
node_uuid
column.Under the 'Base ADO mappings', select the label
column for ADO Label. This selection is only used as a fail-safe (in case your AMI JSON Ingest Template does not have any mapping for a column to be mapped to the JSON label
key, or your source data csv does not contain a label
if going Direct for data transformation).
Provide an optional ZIP file containing your assets.
The file upload size restrictions specified in your Archipelago instance will apply here (512MB maximum by default).
You will now see a message letting you know that 'Your source data was saved and is available as a CSV at linktotheAMIgenerated.csv'.
The message will also let you know that your New AMI Set was created and provide a link to the AMI Set page.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"AMIviaSpreadsheets/#step-7-ami-set-processing","title":"Step 7: AMI Set Processing","text":"Your newly created AMI Set will now need to be Processed.
If you clicked on the 'see it here' link in Step 6, you will be brought to the AMI Set page for review. You may also select Process
from the Operations
menu for the AMI set from the main AMI sets
page. From the Process
page you can review the JSON configuration for your set (determined by your selections in the preceding steps).
You may wish to double check the settings configured in your AMI Set in the Raw Metadata (JSON) on the AMI Set View
tab before Processing.
To Process this set, navigate to the Process
tab. You will have multiple options related to the Processing outcome for your AMI Set.
Enqueuing and File Processing Options
Select Confirm
to continue.
You will be returned to AMI sets
page and see a brief confirmation message regarding the Enqueuing and Processing options you selected.
If you chose to 'Confirm\" and Process your AMI Set immediately, proceed to Step 9: Processing and ADO Creation.
If the chose to 'Enqueue' your AMI Set and the Queue operations for your Archipelago instance have been configured, you can simply leave your AMI Set in the Queue for Processing on the preconfigured schedule. Common timing for AMI Set Processes schedules are typically setup to run every three to six hours. Contact your Archipelago Administrators for details about your particular Archipelago's Processing schedule.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"AMIviaSpreadsheets/#step-8-queue-manager-push-may-be-restricted-to-administrator-users-only","title":"Step 8: Queue Manager Push (may be restricted to Administrator Users only)","text":"If you chose to place your AMI set in the Queue to Process in step 7 and you wish to manually kickstart the Queue Processes, navigate to the Queue Manager found at /admin/config/system/queue-ui
. (Be sure to select the Queue Manager
under the System section, not the Queue Manager for Hydroponic Service
under the Archipelago section).
To Process your AMI Set immediately from the Queue Manager page, select the checkbox next to the 'AMI Digital Object Ingester Queue Worker'. Keep the Action
menu set to Batch Process
and click the Apply to selected items
button.
Your AMI set will now be Processed. You can follow the set's progress through the Processing queues
loading screen.
After your AMI set is Processed, you will receive confirmation messages letting you know your Digital Objects were successfully created.
From this message, you can click on each ADO title to review the new created Digital Object (or Collection) if you wish. Or, you may proceed to step 10.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"AMIviaSpreadsheets/#step-10-review-your-newly-created-digital-objects-directly-or-via-ami-set-report","title":"Step 10: Review your newly created Digital Objects directly or via AMI Set Report","text":"/admin/content
and review your newly created Digital Objects. After ensuring that files and metadata elements were mapped correctly, you may choose to change the Status for your Digital Objects to 'Published'.Option 2: Use the AMI Set Report
From the main AMI sets
page, select Report
from the Operations
menu for the AMI set you wish to review.
This Report will contain information related to the last Processing operation run against your AMI Set.
datetime
stamplevel
(INFO, WARNING, or ERRORS) applicabilitymessage
summarizing the Processing outcome--including a title/label link to the created ADO if successfuldetails
summary containing system information related to the operations.From either the main Content page or the AMI Sets List page, select the 'Start an AMI set' button to begin.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"AMIviaSpreadsheets/#steps-1-plugin-selection-step-2-operation-and-spreadsheet-source-selection","title":"Steps 1: Plugin Selection & Step 2: Operation and Spreadsheet Source Selection","text":"Follow the same instructions found above for Ingesting New Digital Objects.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"AMIviaSpreadsheets/#step-3-data-transformation-selections_1","title":"Step 3: Data Transformation Selections","text":"To import Digital Objects and Digital Object Collections and/or Creative Work Series (Compound) Objects at the same time/from same spreadsheet source, you will need to select the Custom (Expert Mode)
option for your data transformation approach.
You will then need to 'Select your Custom Data Transformation and Mapping Options' for each of your Digital Object, Collection, and Creative Work Series (Compound) types.
For Collection and Creative Work Series (Compound) objects:
Strawberry (Descriptive Metadata source) for Digital Object Collection
images
if you are uploading a thumbnail image for your Collection.For each non-Creative Work Series (Compound) Digital Object type in your spreadsheet source:
Strawberry (Descriptive Metadata source) for Digital Object
For example, for 'Map' type Digital Objects, you would select the following options (as depicted in this screenshot):
Select your global ADO mappings.
ismemberof
and ispartof
).- `ismemberof` and/or `ispartof` (and/or whatever predicate corresponds with the relationship you are mapping)\n- these columns can be used to connect related objects using the object-to-object relationship that matches your needs\n- in default Archipelago configurations, `ismemberof` is used for Collection Membership and `ispartof` is used for Parent-Child Object Relationships (so a Child ADO would reference the Parent ADO in `ispartof`)\n- these columns can hold 3 types of values\n - empty (no value)\n - an integer to connect an object to another object's corresponding row in the same spreadsheet/CSV\n * Ex: Row 2 corresponds to a Digital Object Collection; for a Digital Object corresponding to Row 3, the 'ismemberof' column contains a value of '2'. The Digital Object in Row 3 would be ingested as a member of the Digital Object Collection in Row 2.\n - a UUID to connect with an already ingested object\n
node_uuid
column.label
column for ADO Label. This selection is only used as a fail-safe (in case your AMI JSON Ingest Template does not have any mapping for a column to be mapped to the JSON label
key, or your source data csv does not contain a label
if going Direct for data transformation).ismemberof
is also selected in the ADO Parent Columns. In order to make sure that Digital Objects containing the corresponding UUID or spreadsheet row number for any corresponding Parent ADOs (Creative Work Series/Compounds) are connected correctly, make sure ispartof
is also selected in the ADO Parent Columns.Select the corresponding Columns for the Required ADO mappings.
Follow the same instructions found in Steps 5-10 above. As part of step 10, make sure your Digital Objects were ingested into the corresponding Collections you mapped them to in your spreadsheet source. Please note, you will need to Publish the Digital Objects before the Objects will appear in the Collection's View page (whether accessed as a logged-in Admin user or Anonymous/Public user). Celebrate your next AMI success with another fresh coffee, tea, or cookie!
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"CODE_OF_CONDUCT/","title":"Archipelago - code of conduct / anti-harassment policy","text":"The Archipelago Commons community and the Metropolitan New York Library Council (METRO) are dedicated to providing a welcoming and positive experience for all participants, whether they are at a formal gathering, in a social setting, or taking part in activities online. This includes any forum, mailing list, wiki, web site, IRC channel, public meeting, conference, workshop/training or private correspondence. The Archipelago community welcomes participation from people all over the world, and these community members bring with them a wide variety of professional, personal and social backgrounds; whatever these may be, we treat colleagues with dignity and respect.
This Code of Conduct governs how we behave in public or in private. We expect it to be honored by everyone who represents the project officially or informally, claims affiliation with the project, or participates directly.
We ask that all community members adhere to the following expectations:
METRO and Archipelago have a zero-tolerance policy for verbal, physical, and sexual harassment. Anyone who is asked to stop a hostile or harassing behavior is expected to do so immediately. Here, for reference, are New York State\u2019s requirements.
Harassment includes: Offensive verbal comments related to sex, gender, ethnicity, nationality, socioeconomic status, sexual orientation, disability, physical appearance, body size, age, race, religion; sexual or discriminatory images in public spaces; deliberate intimidation; stalking; harassing photography or recording; sustained disruption of talks or other events; inappropriate physical contact; and unwelcome sexual attention.
Participation in discussions and activities should be respectful at all times. Please refrain from making inappropriate comments. Create opportunities for all people to speak, exercising tolerance of the perspectives and opinions of others. When we disagree, we do this in a polite and professional manner. We may not always agree. When frustrated, we back away and look for good intentions, not reasons to be more frustrated. When we see a flaw in a contribution, we offer guidance on how to fix it.
Participants in METRO and Archipelago communication channels violating this code of conduct may be sanctioned or expelled at the discretion of the organizers of the meeting (if the channel is an in-person event) or the Archipelago Advisory Board (if the channel is online).
"},{"location":"CODE_OF_CONDUCT/#initial-incident","title":"Initial Incident","text":"If you are being harassed, notice that someone else is being harassed, or have any other concerns, and you feel comfortable speaking with the offender, please inform the offender that he/she/they has affected you negatively. Oftentimes, the offending behavior is unintentional, and the accidental offender and offended will resolve the incident by having that initial discussion. Participants asked to stop any harassing behavior are expected to comply immediately.
"},{"location":"CODE_OF_CONDUCT/#escalation","title":"Escalation","text":"If the offender insists that they did not offend, if offender is actively harassing you, or if direct engagement is not a good option for you at this time, then you will need a third party to step in. To report any violation of the following code of conduct or if you have any questions or suggestions about this code of conduct, please contact archipelago-community@metro.org or fill out this form anonymously. This will be sent to leadership at METRO and the advisory board member currently acting as the Code of Conduct liaison. Our enforcement guidelines work in accordance with those published at the Contributor Covenant.
Upon review, if METRO leadership and the Code of Conduct Liaison determine that the incident constitutes harassment they may take any action they deem appropriate, including warning the offender, expulsion from the meeting or other community channels, or contacting a higher authority such as a representative from the offender's institution.
These policies draw from many other code of conduct documents, including but not limited to: code4lib, DLF, Islandora, ICG, Samvera, WikimediaNYC, and IDOCE
"},{"location":"I7solrImporter/","title":"Using the Islandora 7 Solr Importer","text":"From either the main Content page or the AMI Sets List page, select the 'Start an AMI set' button to begin.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"I7solrImporter/#step-1-plugin-selection","title":"Step 1: Plugin Selection","text":"Select the Islandora 7 Solr Importer from the dropdown menu.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"I7solrImporter/#step-2-section-1-solr-server-configuration","title":"Step 2, section 1: Solr Server Configuration","text":"You will only have the option to select 'Create New ADOs' as the Operation you would like to perform.
For the Solr Server Configuration section, you will need to provide all of the following information:
You will also need to select the Starting Row you would like to begin fetching results from, and the Number of Rows to fetch.
The Starting Row is an offset and defaults to 0, which is the most common (and recommended) approach. For the Total Number of Rows to Fetch, setting this to empty or null will automatically (refresh when selecting 'Next' button at bottom of page) prefill with the real Number Rows found by the Solr Query invoked. If you set this number higher than the actual results we will only fetch what can be fetched.
For larger collections, you may wish to create multiple/split AMI ingest sets by selecting a specified number of rows.
In this step you will need to make determinations on how you would like to map your Islandora 7 digital objects to your Archipelago repository and whether or not you would like to fetch additional file datastreams, such as those for thumbnail images, transcripts, OCRs/HOCRs, etc.
Selecting \"Collapse Multi Children Objects\" will collapse Children Datastreams into a single ADO with many attached files (single row in the generated AMI set .csv file). Book Pages will be fetched but also the Top Level PDF (if one exists in your Islandora instance).
In the Required ADO mappings, you will need to specify which Archipelago type you want to map each Islandora Content Model found in your source collection.
If you had left \"Collapse Multi Children Objects\" unselected, you will also need to specify the Islandora Content Model to ADO types mapping for possible Children.
- You can also specify an ADO (Object or Collection) to be used as the Parent of Imported Objects. By selecting an existing ADO (Object or Collection) here using the autocomplete/search, the generated AMI set .csv file will contain an 'ismemberof' column containing the UUID of the selected ADO for every row. - Under \"Additional Datastreams to Fetch\", you can select any number and/or combination of extra file datastreams to retrieve from your harvest. Please note that the I7 Importer will fetch every possible datastream that is present in your source I7 repository, but the additional file datastreams referenced may not be associated with actual files for every digital object.
Language from form itself:
Additional datastreams to fetch. OBJ datastream will always be fetched. Not all datastreams listed here might be present once your data is fetched.
Select the data transformation approach--how your source data will be transformed into ADO (Archipelago Digital Object) Metadata. As noted in the list below, 'Custom (Expert Mode)' is the recommended choice for AMI sets generated using the Islandora 7 Solr Importer plugin.
You will have 3 options for your data transformation approach:
You will also need to Select which columns contain filenames, entities or URLS where files can be fetched from. Select what columns correspond to the Digital Object types found in your spreadsheet source. If you fetched additional file datastreams during Step 2, you will see those columns listed here as well (see screenshot below for examples).
Lastly, for this step, you will need to select the destination Fields and Bundles for your New ADOs. If your spreadsheet source only contains Digital Objects, select Strawberry (Descriptive Metadata source) for Digital Object
Template
and use the AMI Ingest JSON template that corresponds with your metadata elements.Select images
, documents
, and audios
for the file source/fetching.
Select your global ADO mappings.
node_uuid
and any relationship predicate columns (such as ismemberof
).If using Sheet 1 of the Demo AMI Ingest set (found above):
ismemberof
and node_uuid
for ADO Parent columnslabel
column for ADO LabelFor standard Spreadsheet or Google Sheets AMI ingests, you would use this step to provide an optional ZIP file containing your assets.
For your Islandora 7 Solr Importer process, the generated AMI set.csv file will contain the necessary URLs to the corresponding Islandora 7 file datastreams for each object as needed. Select next to skip this ZIP upload step and proceed.
After you provide a title for your AMI set under \"Please Name your AMI set\", select \"Press to Create Set\"
","tags":["AMI","Archipelago Multi Importer"]},{"location":"I7solrImporter/#step-6-ami-set-confirmation","title":"Step 6: AMI Set Confirmation","text":"You will now see a message letting your know your \"New AMI Set was created\". You will be able to review the generated .csv file directly from this page under Source Data File.
While you may immediately select \"Process\" from this AMI Set Confirmation page to use the Islandora 7 Importer generated .csv file as-is to ingest the ADOs in your AMI set, it is strongly recommended that you review the .csv file first. AMI is configured to trim unecessary (for Archipelago) and de-duplicate redundant Solr source data, but you may wish to pare down the sourced data even further and/or conduct general metadata review and cleanup before migrating your content. You will also likely want to make adjustments to your AMI Ingest JSON Template based on your review, depending on the variation of metadata columns/keys found in your source repostiory.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"I7solrImporter/#next-steps","title":"Next Steps","text":"To proceed with Processing your AMI Set, click here to be directed to the main Ingesting Digital Objects via Spreadsheets.
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"about/","title":"About this Documentation","text":"This documentation was generated with Material for MkDocs. The repo/branch is at https://github.com/esmero/archipelago-documentation/tree/1.0.0, and the site is built using the following Github workflow: https://github.com/esmero/archipelago-documentation/blob/1.0.0/.github/workflows/ci.yml.
","tags":["Documentation","Contributing"]},{"location":"about/#contributing","title":"Contributing","text":"pip install mkdocs-material mike
.mike delete --all && mike deploy 1.0.0 default && mike set-default 1.0.0 && mike serve
to see and test changes.This software is a Metropolitan New York Library Council Open-Source initiative and part of the Archipelago Commons project.
"},{"location":"acknowledgments/#license","title":"License","text":"GPLv3
"},{"location":"ami_index/","title":"Archipelago Multi-Importer (AMI)","text":"Archipelago Multi-Importer (AMI) is a module for batch/bulk/mass ingests of Archipelago digital objects (ADOs) and collections. AMI also enables you to perform batch administrative actions, such as updating, patching/revising, or deleting digital objects and collections. AMI's Solr Importer plugin can be used to create AMI ingests and migrating content from existing Solr-sourcable digital repositories (such as Islandora 7).
","tags":["AMI","Archipelago Multi Importer"]},{"location":"ami_index/#ami-overview-and-under-the-hood-explanations","title":"AMI Overview and Under-the-Hood Explanations","text":"From the desk of Diego Pino
AMI provides Tabulated data ingest for ADOs with customizable input plugins. Each Spreadsheet (or Google Spreadsheet) goes through a Configuration Multi-step setup and generates at the end an AMI Set. AMI Sets then can be enqueued or directly ingested, its generated Objects purged and reingested again, its source data (generated and enriched with UUIDS) CSV replaced, improved and uploaded again and ingested.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"ami_index/#learn-more-about-metadata-in-archipelago-and-ami","title":"Learn More about Metadata in Archipelago and AMI","text":"Please review the Metadata in Archipelago overview to learn about Archipelago's unique approach to metadata and how this applies in the context of AMI set adminstration.
Click to read the full AMI 0.4.0 (Archipelago - 1.0.0) Pre-Release Notes.","tags":["AMI","Archipelago Multi Importer"]},{"location":"ami_index/#setup-steps","title":"Setup Steps","text":"AMI has Ingest, Update and Patch capabilities. AMI has a plugin system to fetch data. The data can come from multiple sources and right now CSV/EXCEL or Google Spreadsheets are the ones enabled. It does parent/children validation, makes sure that parents are ingested first, cleans broken relationships, allows arbitrary multi relations to be generated in a single ROW (ismemberof, partOf, etc) pointing to other rows or existing ADOs (via UUIDs) and can process rows directly as JSON or preprocessed via a Metadata Display entity (twig template) capable of producing JSON output. These templates can be configured by \u201ctype\u201d, Articles v/s 3DModel can have different ones. Even which columns contain Files can be configured at that level.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"ami_index/#ami-set-entity","title":"AMI Set Entity","text":"Ami Sets are special custom entities that hold an Ingest Strategy generated via the previous Setup steps (as JSON with all it's settings), a CSV with data imported from the original source (with UUIDs prepopulated if they were not provided by the user). These AMI sets are simpler and faster than \u201cbatch sets\u201d because they do not have a single entry per Object to be ingested. All data lives in a CSV. This means the CSV of an AMI set can be corrected and reuploaded. Users can then Process a Set either putting the to be ingested ADOs in the queue and let Hydroponics Service do the rest or directly via Batch on the UI. ADOs generated by a set can also be purged from there. These sets can also be created manually if needed of any of the chosen settings modified anytime. Which AMI set generated the Ingest is also tracked in a newly created ADO\u2019s JSON and any other extra data (or fixed data e.g common Rights statements, or LoD) can be provided by a Twig Template. Ingest is amazingly fast. We monitored Ingest with Remote URL(islandora Datastreams) files of 15Mbytes average at a speed of 2 seconds per Object (including all post processing) continuously for a set of 100+.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"ami_index/#search-and-replace","title":"Search and Replace","text":"This module also provides a simple search/replace text VBO action (handles JSON as text) and a full blown JSONPATCH VBO action to batch modify ADOs. The last one is extremely powerful permitting multiple operations at the same time with tests. E.g replace a certain value, add another value, remove another value only if a certain test (e.g \u201ctype\u201d:\u201dArticle\u201d and \u201cdate_of_digital\u201d: \u201c2020-09-09\u201d) matches. If any tests fail the whole operation will be canceled for that ADO. An incomplete \u201cWebform\u201d VBO action is present but not fully functional yet. This one allows you to choose a Webform, a certain element inside that Webform and then find and replace using the same Interface you would see while editing/adding a new ADO via the web form workflow.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"ami_index/#getting-started-with-ami","title":"Getting started with AMI","text":"You can access AMI through the AMI Sets
tab on the main Content page found at /admin/content
or directly at /amiset/list
.
If you plan on using the Google Sheets Importer option, you will need to Configure the Google Sheets API.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"ami_index/#example-spreadsheetcsv","title":"Example Spreadsheet/CSV","text":"Please refer to or use a fresh/new copy of the Demo Archipelago Digital Objects (ADOs) spreadsheet to import a small set of Digital Objects, using the same assets part of the One-Step Demo content ingest guide.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"ami_index/#example-json-template","title":"Example JSON template","text":"This JSON template can be used during the Data Transformation (step 3) of your AMI Import. This particular template corresponds with the metadata elements found in the Default Descriptive Metadata and Default Digital Object Collection webforms shipped with Archipelago 1.0.0.
Click to view the example 1.0.0 AMI JSON templateTo use this template, copy and paste the JSON below directly into a new Metadata Display, found here for a local http://localhost:8001/metadatadisplay/list
or http://yoursite.org/metadatadisplay/list
. Select JSON
as the 'Primary mime type this Twig Template entity will generate as output' for this new Metadata Display.
{\n \"type\": {{ data.type|json_encode|raw }},\n \"label\": {{ data.label|json_encode|raw }},\n \"issue_number\": {{ data.issue_number|json_encode|raw }},\n \"interviewee\": {{ data.interviewee|json_encode|raw }},\n \"interviewer\": {{ data.interviewer|json_encode|raw }},\n \"duration\": {{ data.duration|json_encode|raw }},\n \"website_url\": {{ data.website_url|json_encode|raw }},\n \"description\": {{ data.description|json_encode|raw }},\n \"date_created\": {{ data.date_created|json_encode|raw }},\n \"date_created_edtf\": {{ data.date_created_edtf|json_encode|raw }},\n \"date_created_free\": {{ data.date_created_free|json_encode|raw }},\n \"creator\": {{ data.creator|json_encode|raw }},\n \"creator_lod\": {{ data.creator_lod|json_encode|raw }},\n \"publisher\": {{ data.publisher|json_encode|raw }},\n \"language\": {{ data.language|json_encode|raw }},\n \"ismemberof\": [],\n \"ispartof\": [],\n \"sequence_id\": {{ data.sequence_id|json_encode|raw }}, \n \"owner\": {{ data.owner|json_encode|raw }},\n \"local_identifier\": {{ data.local_identifier|json_encode|raw }},\n \"related_item_host_title_info_title\": {{ data.related_item_host_title_info_title|json_encode|raw }},\n \"related_item_host_display_label\": {{ data.related_item_host_display_label|json_encode|raw }},\n \"related_item_host_type_of_resource\": {{ data.related_item_host_type_of_resource|json_encode|raw }},\n \"related_item_host_local_identifier\": {{ data.related_item_host_local_identifier|json_encode|raw }},\n \"related_item_note\": {{ data.related_item_note|json_encode|raw }},\n \"related_item_host_location_url\": {{ data.related_item_host_location_url|json_encode|raw }},\n \"note\": {{ data.note|json_encode|raw }},\n \"physical_description_note_condition\": {{ data.physical_description_note_condition|json_encode|raw }},\n \"note_publishinginfo\": {{ data.note_publishinginfo|json_encode|raw }},\n \"physical_location\": {{ data.physical_location|json_encode|raw }},\n \"physical_description_extent\": {{ data.physical_description_extent|json_encode|raw }},\n \"date_published\": {{ data.date_published|json_encode|raw }},\n \"date_embargo_lift\": {{ data.date_embargo_lift|json_encode|raw }},\n \"rights_statements\": {{ data.rights_statements|json_encode|raw }},\n \"rights\": {{ data.rights|json_encode|raw }},\n \"subject_loc\": {{ data.subject_loc|json_encode|raw }},\n \"subject_lcnaf_personal_names\": {{ data.subject_lcnaf_personal_names|json_encode|raw }},\n \"subject_lcnaf_corporate_names\": {{ data.subject_lcnaf_corporate_names|json_encode|raw }},\n \"subject_lcnaf_geographic_names\": {{ data.subject_lcnaf_geographic_names|json_encode|raw }},\n \"subject_lcgft_terms\": {{ data.subject_lcgft_terms|json_encode|raw }},\n \"subject_wikidata\": {{ data.subject_wikidata|json_encode|raw }},\n \"edm_agent\": {{ data.edm_agent|json_encode|raw }},\n \"term_aat_getty\": {{ data.term_aat_getty|json_encode|raw }},\n \"viaf\": {{ data.viaf|json_encode|raw }},\n \"pubmed_mesh\": {{ data.pubmed_mesh|json_encode|raw }},\n \"europeana_concepts\": {{ data.europeana_concepts|json_encode|raw }},\n \"europeana_agents\": {{ data.europeana_agents|json_encode|raw }},\n \"europeana_places\": {{ data.europeana_places|json_encode|raw }},\n \"geographic_location\": {{ data.geographic_location|json_encode|raw }},\n \"subjects_local_personal_names\": {{ data.subjects_local_personal_names|json_encode|raw }},\n \"subjects_local\": {{ data.subjects_locals|json_encode|raw }},\n \"audios\": [],\n \"images\": [],\n \"models\": [],\n \"videos\": [],\n 
\"documents\": [],\n \"as:generator\": {\n \"type\": \"Create\",\n \"actor\": {\n \"url\": {{ setURL|json_encode|raw }},\n \"name\": \"ami\",\n \"type\": \"Service\"\n },\n \"endTime\": \"{{\"now\"|date(\"c\")}}\",\n \"summary\": \"Generator\",\n \"@context\": \"https:\\/\\/www.w3.org\\/ns\\/activitystreams\"\n },\n \"upload_associated_warcs\": []\n }\n
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"ami_lod_rec/","title":"Using AMI's Linked Data Reconciliation","text":"Archipelago Multi Importer (AMI)'s Linked Data Reconciliation tool can be used to enrich your metadata with Linked Data (LoD). Using this tool, you can map values from your topical/subject metadata elements to your preferred LoD vocabulary source. These mappings can then be transformed via a corresponding Metadata Display (Twig) template to process the values into JSON-formatted metadata for your specified AMI set.
The aim of this tool is to automize as much of the reconciliation process as feasible within Archipelago. Please be aware that data reconciliation will still be in part a manual and potentially time intensive process.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"ami_lod_rec/#important-note-preliminary-pre-requisite-ami-set-configuration","title":"Important Note: Preliminary / Pre-requisite AMI Set Configuration","text":"In order to Reconciliate an AMI Set, you will need to have selected the 'Template' or 'Custom' data transformation approach (then also, via 'Template' for your Digital Object or Collection types) during Step 3 : Data Transformation of your AMI Set configuration.
Your source spreadsheet will also need to contain at least one column containing terms/names (values) you want to reconcile against an LoD Authority Source. Multiple values should be separated by '|@|'.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"ami_lod_rec/#step-1-select-the-ami-set-you-will-be-working-with","title":"Step 1: Select the AMI Set you will be working with.","text":"From the main AMI Sets List page, click on your AMI Set's Name, or select the 'Edit' option from the Operations menu on the right-hand side of the Sets list.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"ami_lod_rec/#step-2-reconcile-lod-tab","title":"Step 2: Reconcile LoD Tab","text":"Navigate to the Reconcile LoD tab.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"ami_lod_rec/#step-3-lod-reconciling-selections","title":"Step 3: LoD Reconciling Selections","text":"From the list of columns from your spreadsheet source, select which columns you want to reconcile against LoD providers.
Under the LoD Sources section, select how your chosen Columns will be LoD reconciled. - LoD reconcile options will be on the left, LoD Authority Sources will be on the right. - Example: 'local_subjects' will be mapped to 'LoC subjects (LCSH)'
Full list of potential LoD Authority SourcesTo preview the values contained in the column(s) you selected, click the 'Inspect cleaned/split up column values' button.
Tip: This preview step provides you with the opportunity to return to your AMI Set source CSV and make any necessary label/term corrections such as outliers and formatting errors before processing. This can be done multiple times until your source set is fully prepared. If using this workflow, you will tick the 'Re-process only adding new terms/LoD Authority Sources' processing option after replacing your updated source CSV (see screenshot below)
When ready, there are multiple processing options to select from depending on your current need/workflow. - To process immediately, select 'Process LoD from Source' - To enqueue the batch process, select 'Enqueue but do not process Batch in real time. - To add new data (i.e. terms, LoD Authority Sources) to existing reconciliation (e.g after replacing source CSV data), select 'Re-process only adding new terms/LoD Authority Sources
Important note: if you have previously run LoD Reconciliation for your AMI set, this action will overwrite any manually corrected LoD on your Processed CSV. Please make sure you have a backup if unsure.
Depending on the size of your AMI Set, the Reconciliation processing may take a few minutes.
When the process is finished, you will see a brief confirmation message.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"ami_lod_rec/#step-4-edit-reconciled-lod","title":"Step 4: Edit Reconciled LoD","text":"Open the 'Edit Reconciled LoD' tab.
You will see a table (form) containing: - Your Original term values (labels) - The CSV Column Header/Key from the source spreadsheet where the value is found - A Checked option you can use to denote that an LoD mapping has been reviewed/revisioned - The Linked Data Label and URL pairing selected during the LoD reconciliation process
The results table will show 10 original terms and mappings per page. You can advance through the pages using the page numbers and navigational arrows above and below the table.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"ami_lod_rec/#step-5-review-and-edit-your-reconciled-lod-mappings","title":"Step 5: Review and Edit your Reconciled LoD Mappings","text":"Review the LoD reconciliation mappings, to make sure the best terms were selected for your metadata.
As you advance through your review process, it is recommended that you use the 'Save Current LoD Page' at the bottom of each results page as you work. This will preserve the corrections you may have made and update the LoD Reconciled data for your AMI Set within the editing form.
When you have finished editing/reviewing your data, you must select 'Save all LoD back to CSV File' or else your LoD selections will not be preserved.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"ami_lod_rec/#step-6-ami-set-review-and-twig-metadata-display-preparation","title":"Step 6: AMI Set Review and Twig (Metadata Display) Preparation","text":"You will now need to make sure that the Metadata Display (Twig) Template you selected to use during your initial AMI Set configuration is setup to Process your LoD mapped Label and URL selections into your Digital Objects and Collections JSON metadata.
For every JSON key/element in your metadata that you need to process the LoD Reconciled data into, you need to specify in your Template that data for this element will be read from the 'Processed Data' LoD information.
In the following example Twig snippet, the \"subject_loc\" JSON key will map corresponding values from the 'Processed Data' (data.lod) LoD information into a newly created Digital Object/Collection during the AMI Set Processing.
\"subject_loc\": {{ data_lod.mods_subject_topic.loc_subjects_thing|json_encode|raw }},\n
The same general pattern can be adapted to apply to different mapping scenarios (original CSV source columns to Reconciled LoD Sources) as needed.
Full list of Column Options => Corresponding LoD SourcesTo proceed with Processing your AMI Set, click here to be directed to the main Ingesting Digital Objects via Spreadsheets.
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"ami_spreadsheet_overview/","title":"Spreadsheet Formatting Overview","text":"","tags":["AMI","Archipelago Multi Importer"]},{"location":"ami_spreadsheet_overview/#spreadsheet-formatting-overview","title":"Spreadsheet Formatting Overview","text":"There are multiple ways a spreadsheet/CSV file can be structured to work with AMI, depending on the data transformation and mapping you will be using.
Columns in your spreadsheet/CSV can be mapped to different data (files) and metadata elements (label, description, subjects, etc.).
It is recommended that different types of files are placed into separate columns--\"images\", \"documents\", \"models\", \"videos\", \"audios\", \"texts\".
/var/www/html/d8content/myAMIimage.jpg
s3://myAMIuploads/myAMIdocument.pdf
https://dogsaregreat.edu/dogs.tiff
Every spreadsheet/CSV file should contain the following Columns:
type
label
node_uuid
sequence_id
for Creative Work Series (compound) children objectsRecommended Columns:
ismemberof
and/or ispartof
(and/or whatever predicate corresponds with the relationship you are mapping)ismemberof
is used for Collection Membership and ispartof
is used for Parent-Child Object Relationships (so a Child ADO would reference the Parent ADO in ispartof
)You can use direct JSON snippets such as:
[{\"uri\": \"http://id.loc.gov/authorities/subjects/sh95008857\",\"label\": \"Digital libraries\"}]\n
- If you have an advanced twig template with the necessary logic, you can place data in cells that can be parsed and structured in various ways (such as multiple values separated by semicolons split accordingly, capitalization of values based on defined patterns, etc.) Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"ami_update/","title":"Using AMI's Update Operations","text":"Archipelago Multi Importer (AMI)'s Update Operations can be used to Update, Replace, or Append metadata values or files for existing Digital Objects and Collections found in your Archipelago. You can prepare and use AMI Update Sets in different ways using one of three functional operations, depending on your update needs. This guide will provide a general overview of the three main functions and how each operation may be useful.
","tags":["AMI","Archipelago Multi Importer","AMI Update Sets","Update","Replace","Append"]},{"location":"ami_update/#important-notes-preliminary-pre-requisites","title":"Important Notes: Preliminary / Pre-requisites","text":"You need to have existing Digital Objects or Collections (ADOs) in your Archipelago to work with. You should have a prepared AMI Update Set CSV that contains at least the following columns/headers:
You should be familiar with the basic mechanics of AMI Set Configuration noted in Steps 1-6.
Best Practices
For all AMI Update operations, it is strongly recommended to both:
Before Updating, use the 'Export Archipelago Digital Objects to CSV content item' Action available on the main Content
page and the Find and Replace
page menus to generate a CSV of your non-modified objects. If something unintended occurs with your Update execution, you could use this CSV of your non-modified objects to restore your objects (or just a field or two) as needed.
Create a small test batch CSV referencing one to two/three ADOs to test the execution of your desired Update actions on before running your larger Update Sets. There is no 'Undo' or 'Revert Changes' button that can be used for an AMI Update Set. You do have the option to 'Revert Revisions' on an object-by-object basis, but that is not ideal for reverting changes that were executing across large batches of ADOs. See the 'Checking Your Changes' documentation section for more information about reviewing and potentially reverting Revisions.
As with regular/Create New AMI Sets, you will have to select your preferred Data Tranformation configuration during Step 3 : of your AMI Update Set Configuration.
Caution with using Templates for Data Transformation
If you are planning to use the 'Template' or 'Custom (Export Mode)' data transformation approach for your AMI Update Set configuration, you will need to have prepared your corresponding AMI Ingest Template to account for the specific Update actions you have planned.
It is important to keep in mind that all of the metadata elements for your existing ADOs metadata may not necessarily be present in your AMI Update Set Source CSV. For example, you may have only prepared your AMI Update Set Source CSV to contain a limited number of headers/columns, such as only those required (node_uuid, label) and one or two metadata elements you wish to update (such as \"subjects\"). If you choose to pass your AMI Update Set through a Twig template, the output after Processing your AMI Update Set may overwrite your existing data if you do not have all of the necessary logic/checks in place to preserve the existing metadata if desired.
In other words, imagine your twig template contains this statement:
\"subjects\": {{ data.subjects|json_encode|raw }}, \n
Independently of IF your CSV contains \"subjects\" as a header/column, the Twig template will still output an empty \"subjects\", which, when using the \"Replace\" mode will wipe out any existing \"subjects\" in your ADO.
During any update operation (independently of the functional operation chosen) and IF you are using/passing your CSV through a template, AMI will provide an extra Context key for you to reference in your Twig Template. You can always reference 'dataOriginal.subjects' for example -- all dataOriginal.xx keys will contain the values of the existing metadata for your ADOs. This allows you to make \"smart\" templates that check IF a certain key/values exists, compare the unmodified (and to be modified) ADO(s) with the new data passed, then generate the desired output.
","tags":["AMI","Archipelago Multi Importer","AMI Update Sets","Update","Replace","Append"]},{"location":"ami_update/#update-set-processing-options","title":"Update Set Processing Options","text":"Beginning from Step 7, Processing of your AMI Set Configuration, select the Update operation that best corresponds to your targeted Update scenario.
","tags":["AMI","Archipelago Multi Importer","AMI Update Sets","Update","Replace","Append"]},{"location":"ami_update/#1-normal-update-operation","title":"1. Normal Update Operation","text":"The Normal Update Operation 'will update a complete existing ADO's configured target field with new JSON Content.' This will replace everything in an ADO with new processed data.
The Replace Update Operation Replace 'will replace JSON keys found in an ADO's configured target field with new JSON content. Not provided ones (fields/JSON keys) will be kept.'
The Append update operation 'will append values to existing JSON keys in an ADO's configured target field. New ones (fields/JSON keys) will be added too.'
For the other AMI Set Process options and steps, please refer to the information found from Steps 7-10 in this complementary documentation for Create New ADOs AMI Sets. See the 'Checking Your Changes' documentation section for more information about reviewing and potentially reverting Revisions.
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
","tags":["AMI","Archipelago Multi Importer","AMI Update Sets","Update","Replace","Append"]},{"location":"annotations/","title":"Annotations in Archipelago","text":"Archipelago extends Annotorius to provide W3C-compliant Web Annotations for Digital Objects. These annotations can be added per image (when multiple), edited for text and shape adjustments, and saved/discarded using the regular Edit mode (bonus track 1: temp storage that persists when you log out and come back in to your session). Archipelago also exposes a full API for WebAnnotations, that keeps track of which Images (referenced in the Strawberryfield @ as:image
) were annotated and creates the W3C valid entries inside your Digital Object's JSON @ ap:annotationCollection
(bonus track 2: multiple users can annotate the same resource, enabling digital scholarship collaboration opportunities).
Important Note: For any image-based Digital Objects you would like to apply annotations to, the Digital Object type
must be setup to display the image file(s) using the Open SeaDragon viewer. More information about about Managing Display Modes in Archipelago can be found here. Please stay tuned for updates announcing web annotation integration for Mirador 3.
https://yoursite.org/admin/structure/types/manage/digital_object/display/digital_object_viewmode_fullitem
You are now ready to get started adding annotations!
"},{"location":"annotations/#adding-and-saving-annotations","title":"Adding and Saving Annotations","text":"Shift
key. Click and then drag to apply either a Rectangular box or multi-point Polygon shape.ap:annotationCollection
key.Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"archifilepersistencestrategy/","title":"Archipelago's File Persistence Strategy","text":""},{"location":"archifilepersistencestrategy/#how-are-files-for-archipelago-digital-objects-ados-persisted-what-happens-with-those-fishtanks","title":"How are files for Archipelago Digital Objects (ADOs) persisted? (What happens with those fishtanks?)","text":"A few Event Subscribers/Data describing logics happen in a certain order:
User Uploads via a webform Element a new File or via Drush/Batch ingest that attaches (via JSONAPI) a file.
If the webform is involved, Archipelago acts quickly and calls directly (before the Node even exists) the file classifier, that will:
as:somefiletype
JSON structure into the main ADO
SBF JSON
, with info about the file, checksums, size, Drupal fids, uuid, etc. This is a heavy function part of the StrawberryfieldFilePersisterService
. It does a lot, making use of optimized logic, but may do more in the future to handle too-many/too-big file needs (FYI: the solution will be simple, add to a queue and process later). The user finishes the form, saves and confirms the ADO creation, and finally all the Node events fire.
On presave StrawberryfieldEventPresaveSubscriberAsFileStructureGenerator
runs and checks if 2.1 already was processed. This is needed since the user could have triggered an ingest via drush/JSONAPI/Webhooks etc. If all is well (this is a less expensive check) Archipelago continues.
On presave (next) StrawberryfieldEventPresaveSubscriberFilePersister
runs, checking all TEMPORARY files described in as:somefiletype
and actually copying them to the right \"desired\" location.
And on Save StrawberryfieldEventInsertFileUsageUpdater
runs, also marking the file as \"being\" used by a Strawberry driven Node (a different Event).
Note: Anytime we remove directly from the raw JSON a full as:somefiletype
structure of a sub-element from an as:structure
we force Archipelago to do all the above again, and Archipelago can regenerate technical metadata. This has been used when updating EXIF binaries or even when something went wrong (while testing, but this stuff is safe no worries). Eventually, there will be a BIG red button that does that if you do not like JSON editing.
Discussions related to Archipelago's file persistence strategy and planned potential strategies can be found here: Strawberryfield Issues: 107, and here: Strawberryfield Issues: 76. This page will be updated with additional information following future developments.
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/","title":"Archipelago-deployment: upgrading Drupal 9 to Drupal 10 (1.1.0 to 1.3.0)","text":"","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/#what-is-this-documentation-for","title":"What is this documentation for?","text":"If you already have a well-set-up and well-loved Archipelago (1.1.0) running Drupal 9 (D9), this documentation will allow you to update to 1.3.0 on Drupal 10 (D10) without any major issues.
D9 is no longer supported as of the end of November 2023. D10 has been around for a little while, and even if every module is not supported yet, what you need and want for Archipelago has long been ready for D10. However, Archipelago is still D9 compatible if it's necessary for you to stay back a little longer.
","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/#requirements","title":"Requirements","text":"composer
and drush
.Backups are always going to be your best friends. Archipelago's code, database, and settings are mostly self-contained in your current archipelago-deployment
repo folder, and backing up is simple because of that.
On a terminal, cd
into your running archipelago-deployment
folder and shut down your docker-compose
ensemble by running the following:
docker-compose down\n
","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/#step-2","title":"Step 2:","text":"Verify that all containers are actually down. The following command should return an empty listing:
docker ps\n
If anything is still running, wait a little longer and run the command again.
","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/#step-3","title":"Step 3:","text":"Now let's tar.gz
the whole ensemble with data and configs. As an example we will save this into your $HOME
folder. As a good practice we append the current date (YEAR-MONTH-DAY) to the filename. Here we assume today is October 31st of 2023.
cd ..\nsudo tar -czvpf $HOME/archipelago-deployment-D9-20231031.tar.gz archipelago-deployment\ncd archipelago-deployment\n
The process may take a few minutes. Now let's verify that all is there and that the tar.gz
is not corrupt.
tar -tvvf $HOME/archipelago-deployment-D9-20231031.tar.gz \n
You will see a listing of files, and at the end you will see something like this: Archive Format: POSIX pax interchange format, Compression: gzip
. If corrupt (Do you have enough space? Did your ssh connection drop?) you will see the following:
tar: Unrecognized archive format\n
","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/#step-4","title":"Step 4:","text":"Restart your docker-compose
ensemble, and wait a little while for all to start.
docker-compose up -d\n
","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/#step-5","title":"Step 5:","text":"Export/backup all of your live Archipelago configurations (this allows you to compare/come back in case you lose something custom during the upgrade).
docker exec esmero-php mkdir config/backup\ndocker exec esmero-php drush cex --destination=/var/www/html/config/backup\n
Good. Now it's safe to begin the upgrade process.
","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/#upgrading-to-130","title":"Upgrading to 1.3.0","text":"","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/#step-1-edit-docker-composeryml","title":"Step 1: Edit docker-composer.yml","text":"First we are going to edit your docker-compose.yml file to reference the latest PHP container as needed. Starting in your Archipelago deployment directory location, run the following commands:
If you have not already, run:
docker-compose down\n
Then open your docker-compose.yml file:
nano docker-compose.yml\n
Inside your docker-compose.yml
file, page down to the php
section and change the image
section to match exactly as follows:
image: \"esmero/php-8.1-fpm:1.2.0-multiarch\"\n
Next page down to the iiif
section and change the image
section to match exactly as follows:
image: \"esmero/cantaloupe-s3:6.0.1-multiarch\"\n
Save your changes.
","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/#step-2-docker-pull-and-check","title":"Step 2: docker pull and check","text":"Time to fetch the latest branch and update our docker compose
and composer
dependencies. To pull the images and bring up the ensemble, run:
docker compose pull\ndocker compose up -d\n
Give all a little time to start. Please be patient. To ensure all is well, run (more than once if necessary) the following:
docker ps\n
You should see something like this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n355e13878b7e nginx \"/docker-entrypoint.\u2026\" 10 minutes ago Up 10 minutes 0.0.0.0:8001->80/tcp esmero-web\n86b685008158 solr:8.11.2 \"docker-entrypoint.s\u2026\" 10 minutes ago Up 10 minutes 0.0.0.0:8983->8983/tcp esmero-solr\na8f0d9c6d4a9 esmero/cantaloupe-s3:6.0.1-multiarch \"sh -c 'java -Dcanta\u2026\" 10 minutes ago Up 10 minutes 0.0.0.0:8183->8182/tcp esmero-cantaloupe\n6642340b2496 mariadb:10.6.12-focal \"docker-entrypoint.s\u2026\" 10 minutes ago Up 10 minutes 3306/tcp esmero-db\n0aef7df34037 minio/minio:RELEASE.2022-06-11T19-55-32Z \"/usr/bin/docker-ent\u2026\" 10 minutes ago Up 10 minutes 0.0.0.0:9000-9001->9000-9001/tcp esmero-minio\n28ee3fb4e7a7 esmero/php-8.1-fpm:1.2.0-multiarch \"docker-php-entrypoi\u2026\" 10 minutes ago Up 10 minutes 9000/tcp esmero-php\na81c36d51a81 esmero/esmero-nlp:fasttext-multiarch \"/usr/local/bin/entr\u2026\" 10 minutes ago Up 10 minutes 0.0.0.0:6400->6400/tcp esmero-nlp\n
Important here is the STATUS
column. It needs to be a number that goes up in time every time you run docker ps
again (and again).
Now we are going to tell composer
to update the key Drupal and Archipelago modules.
First we are going to disable and remove a few minor Drupal modules. Run the following commands (in order):
docker exec -ti esmero-php bash -c \"drush pm-uninstall fancy_file_delete\"\ndocker exec -ti esmero-php bash -c \"drush pm:uninstall quickedit\"\ndocker exec -ti esmero-php bash -c \"composer remove drupal/fancy_file_delete\"\n
Now update the versions used for these Drupal modules:
docker exec -ti esmero-php bash -c \"composer require drupal/jquery_ui_slider:^2 drupal/jquery_ui_effects:^2 drupal/jquery_ui:1.6 drupal/jquery_ui_datepicker:^2 drupal/jquery_ui_touch_punch:^1 drupal/better_exposed_filters:6.0.3 --no-update --with-all-dependencies\"\n
And now update one other Drupal module and the main Archipelago modules:
docker exec -ti esmero-php bash -c \"composer require 'drupal/views_bulk_operations:^4.2' 'strawberryfield/strawberryfield:1.3.0.x-dev' 'strawberryfield/webform_strawberryfield:1.3.0.x-dev' 'strawberryfield/format_strawberryfield:1.3.0.x-dev' 'strawberryfield/strawberry_runners:0.7.0.x-dev' 'archipelago/ami:0.7.0.x-dev' --no-update --with-all-dependencies\"\n
From inside your archipelago-deployment
repo folder we are now going to open up file permissions
for some of your most protected Drupal files.
sudo chmod 777 web/sites/default\nsudo chmod 666 web/sites/default/*settings.php\nsudo chmod 666 web/sites/default/*services.yml\n
","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/#step-4-disableremove-for-additional-select-drupal-modules","title":"Step 4: Disable/Remove for additional select Drupal modules","text":"We are going to remove additional select Drupal modules that are not 1.3.0 or D10 compliant.
Please run each of the following commands separately, in order, and do not skip any commands.
docker exec -ti esmero-php bash -c \"composer remove symfony/http-kernel symfony/yaml --no-update\"\ndocker exec -ti esmero-php bash -c \"composer require drupal/config_inspector:^2 --no-update\" \ndocker exec -ti esmero-php bash -c \"drush pm:uninstall jsonapi_earlyrendering_workaround\"\ndocker exec -ti esmero-php bash -c \"composer remove drupal/jsonapi_earlyrendering_workaround --no-update\"\ndocker exec -ti esmero-php bash -c \"composer require 'drupal/core:^10' 'drupal/core-recommended:^10' 'drupal/core-composer-scaffold:^10' 'drupal/core-project-message:^10' --update-with-dependencies --no-update\"\ndocker exec -ti esmero-php bash -c \"composer require 'drupal/core-dev:^10' --dev --update-with-dependencies --no-update\"\ndocker exec -ti esmero-php bash -c \"composer require 'drush/drush:^12' --no-update\"\ndocker exec -ti esmero-php bash -c \"composer require drupal/twig_tweak:^2 --no-update\"\ndocker exec -ti esmero-php bash -c \"composer require drupal/config_inspector:^2 --no-update\"\ndocker exec -ti esmero-php bash -c \"composer require drupal/config_update:2.0.x-dev --no-update\"\ndocker exec -ti esmero-php bash -c \"composer remove drupal/config_update_ui --no-update\"\ndocker exec -ti esmero-php bash -c \"composer require 'drupal/context:^5.0@RC' --no-update\"\ndocker exec -ti esmero-php bash -c \"composer require 'drupal/devel:^5.1' --no-update\"\ndocker exec -ti esmero-php bash -c \"composer remove drupal/sophron --no-update\"\ndocker exec -ti esmero-php bash -c \"composer remove fileeye/mimemap --no-update\"\ndocker exec -ti esmero-php bash -c \"composer require drupal/imagemagick:^3 --no-update\" \ndocker exec -ti esmero-php bash -c \"composer remove fileeye/mimemap --no-update\" \ndocker exec -ti esmero-php bash -c \"drush pm:uninstall jsonapi_earlyrendering_workaround\"\ndocker exec -ti esmero-php bash -c \"composer remove drupal/jsonapi_earlyrendering_workaround --no-update\"\ndocker exec -ti esmero-php bash -c \"composer require 'drupal/imce:^3.0' --no-update\" \ndocker exec -ti esmero-php bash -c \"composer require 'drupal/search_api_attachments:^9.0' --no-update\" \ndocker exec -ti esmero-php bash -c \"composer require 'drupal/twig_tweak:^3.2' --no-update\"\ndocker exec -ti esmero-php bash -c \"composer require 'drupal/webform_entity_handler:^2.0' --no-update\"\ndocker exec -ti esmero-php bash -c \"composer require 'drupal/webformnavigation:^2.0' --no-update\"\ndocker exec -ti esmero-php bash -c \"composer remove drupal/form_mode_manager --no-update\"\n
Well done! If you see no issues and all ends in Green colored messages, all is good! Jump to Step 5
","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/#what-if-all-is-not-ok-and-i-see-red-and-a-lot-of-dependency-explanations","title":"What if all is not OK, and I see red and a lot of dependency explanations?","text":"If you have manually installed packages via composer in the past that are NO longer Drupal 10 compatible you may see errors. In that case you need to check each package website's (normally https://www.drupal.org/project/the_module_name) and check if there is a Drupal 10 compatible version.
If so run:
docker exec -ti esmero-php bash -c \"composer require 'drupal/the_module_name:^VERSION_NUMBER_THAT_WORKS_ON_DRUPAL10_' --update-with-dependencies --no-update\" and run **Step 3 ** again (and again until all is cleared)\n
If not, try to find a replacement module that does something similar, but in any case you may end up having to remove before proceeding. Give us a ping/slack/google group/open a github ISSUE if you find yourself uncertain about this.
docker exec -ti esmero-php bash -c \"drush pm-uninstall the_module_name\"\ndocker exec -ti esmero-php bash -c \"composer remove drupal/the_module_name --no-update\"\n
","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/#step-5-update-composerjson","title":"Step 5: Update composer.json","text":"Now you need to update your composer.json
file to include an important patch. Starting in your Archipelago deployment directory location, run the following commands:
nano composer.json\n
Inside your composer.json
file, page down to the \"patches\"
section and change the section to match exactly as follows:
\"patches\": {\n \"drupal/form_mode_manager\": {\n \"D10 compatibility\": \"https://www.drupal.org/files/issues/2023-10-11/3297262-20.patch\"\n },\n \"drupal/ds\": {\n \"https://www.drupal.org/project/ds/issues/3338860\": \"https://www.drupal.org/files/issues/2023-04-04/3338860-5-d10-compatible_0.patch\"\n }\n }\n
Save your changes and then run:
docker exec -ti esmero-php bash -c \"composer update -W\"\n
","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/#step-6-one-final-round-of-drupal-module-updates","title":"Step 6: One final round of Drupal module updates","text":"We will now run a few more updates for additional Drupal modules.
Please run each of the following commands separately, in order, and do not skip any commands.
docker exec -ti esmero-php bash -c \"composer require mglaman/composer-drupal-lenient\"\ndocker exec -ti esmero-php bash -c \"composer config --merge --json extra.drupal-lenient.allowed-list '[\\\"drupal/form_mode_manager\\\"]'\"\ndocker exec -ti esmero-php bash -c \"composer require 'drupal/form_mode_manager:2.x-dev@dev'\"\ndocker exec -ti esmero-php bash -c \"composer require 'drupal/color:^1.0'\" \ndocker exec -ti esmero-php bash -c \"composer require drupal/hal\" \ndocker exec -ti esmero-php bash -c \"composer require drupal/aggregator\"\ndocker exec -ti esmero-php bash -c \"composer require drupal/ckeditor\" \ndocker exec -ti esmero-php bash -c \"composer require drupal/seven\"\ndocker exec -ti esmero-php bash -c \"composer require archipelago/archipelago_subtheme:1.3.0.x-dev\"\ndocker exec -ti esmero-php bash -c \"drush pm:uninstall quickedit\"\ndocker exec -ti esmero-php bash -c \"composer require drupal/quickedit drupal/classy drupal/stable\"\ndocker exec -ti esmero-php bash -c \"drush pm:uninstall hal\"\ndocker exec -ti esmero-php bash -c \"drush pm:install jquery_ui\"\n
Whew, that's a lot of module updates! Now run one final database update command:
docker exec -ti esmero-php bash -c \"drush updatedb\"\n
","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/#step-7-optional-syncs","title":"Step 7: Optional Syncs","text":"Optionally, you can sync your new Archipelago 1.3.0 and bring in all the latest configs and settings. For this you have two options (no worries, remember you made a backup!):
","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/#a-partial-sync-which-will-bring-new-configs-and-update-existing-ones-but-will-not-remove-ones-that-only-exist-in-your-custom-setup-eg-new-webforms-or-view-modes","title":"A Partial Sync, which will bring new configs and update existing ones but will not remove ones that only exist in your custom setup, e.g. new Webforms or View Modes.","text":"docker exec esmero-php drush cim -y --partial \n
","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/#a-complete-sync-which-will-bring-new-things-and-update-existing-but-will-also-remove-all-the-ones-that-are-not-part-of-130-its-a-like-clean-factory-reset","title":"A Complete Sync, which will bring new things and update existing but will also remove all the ones that are not part of 1.3.0. It's a like clean factory reset.","text":"docker exec esmero-php drush cim -y \n
If all goes well here and you see no errors it's time to reindex Solr
because there are new Fields. Run the following:
docker exec esmero-php drush search-api-reindex\ndocker exec esmero-php drush search-api-index\n
You might see some warnings related to modules dealing with previously non-existent data\u2014no worries, just ignore those.
If you made it this far you are done with code/devops (are we ever ready?), and that means you should be able to (hopefully) stay in the Drupal 10 realm for a few years!
","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/#step-8-update-or-not-your-metadata-display-entities-and-menu-items","title":"Step 8: Update (or not) your Metadata Display Entities and menu items","text":"Recommended: If you want to add new templates and menu items 1.3.0 provides, run this:
docker exec -ti esmero-php bash -c 'scripts/archipelago/deploy.sh'\n
Once that is done, you can choose to update all Metadata Displays (twig templates) we ship with new 1.3.0 versions (heavily adjusted IIIF manifests, better Object description template using Macros). Before you do this, we strongly recommend that you first make sure to manually (copy/paste) backup any Twig templates you have modified. If unsure, do not run the command that comes after this warning! You can always manually copy the new templates from the d8content/metadatadisplays
folder which contains text versions (again, copy/paste) of each shipped template you can pick and use when you feel ready.
If you are sure (like really?) you want to overwrite the ones you modified (sure, just checking?), then you can run this:
docker exec -ti esmero-php bash -c 'scripts/archipelago/update_deployed.sh'\n
Done! (For realz now)
","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/#step-9-or-should-we-say-10","title":"Step 9 (or should we say 10)","text":"Please login to your Archipelago and test/check all is working! Enjoy 1.3.0 and Drupal 10. Thanks!
","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/#need-help-blue-screen-missed-a-step-need-a-hug-and-such","title":"Need help? Blue Screen? Missed a step? Need a hug and such?","text":"If you see any issues or errors or need help with a step, please let us know (ASAP!). You can either open an issue
in this repository or use the Google Group. We are here to help.
GPLv3
","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-democontent/","title":"Adding Demo Archipelago Digital Objects (ADOs) to your Repository","text":"We make this optional since we feel not everyone wants to have Digital Objects from other people using space in their system. Still, if you are new to Archipelago we encourage you to do this. Its a simply way to get started without thinking too much. You can learn and test. Then delete and move over.
","tags":["Archipelago Digital Objects","Demo Content"]},{"location":"archipelago-deployment-democontent/#prerequisites","title":"Prerequisites","text":"","tags":["Archipelago Digital Objects","Demo Content"]},{"location":"archipelago-deployment-democontent/#the-new-way-archipelago-100-rc2-or-higher","title":"The new way Archipelago 1.0.0-RC2 or higher.","text":"jsonapi
drupal user and an admin
one and you can login and out of your server.admin
user. (If you followed one of the deployment guides, password will be archipelago
)admin
user. Content
-> Ami Sets
. You will see a single AMI Set
already in place. edit
Button), press on the little down arrow
and choose Process
. DESIRED ADOS STATUSES AFTER PROCESS
, change all from Draft to Published, leave Enqueue but do not process Batch in realtime
unchecked and press \"Confirm\". The Ingest will start and a progress bar will advance. Once ready a list of Ingest Objects should appear.jsonapi
drupal user and you can login and out of your server.Go into your archipelago-deployment
folder and into the d8content
folder that is inside it, e.g.
cd archipelago-deployment/d8content\ngit clone https://github.com/esmero/archipelago-recyclables\n
","tags":["Archipelago Digital Objects","Demo Content"]},{"location":"archipelago-deployment-democontent/#step-2-ingest-the-objects","title":"Step 2: Ingest the Objects","text":"docker exec -ti esmero-php bash -c 'd8content/archipelago-recyclables/deploy_ados.sh'\n
You will see multiple outputs similar to this:
Files in provided location:\n - anne_001.jpg\n - anne_002.jpg\n - anne_003.jpg\n - anne_004.jpg\n - anne_005.jpg\n - anne_006.jpg\n - anne_007.jpg\n - anne_008.jpg\n - anne_009.jpg\n - anne_010.jpg\nFile anne_001.jpg sucessfully uploaded with Internal Drupal file ID 5\nFile anne_002.jpg sucessfully uploaded with Internal Drupal file ID 6 \nFile anne_003.jpg sucessfully uploaded with Internal Drupal file ID 7\nFile anne_004.jpg sucessfully uploaded with Internal Drupal file ID 8\nFile anne_005.jpg sucessfully uploaded with Internal Drupal file ID 9 \nFile anne_006.jpg sucessfully uploaded with Internal Drupal file ID 10 \nFile anne_007.jpg sucessfully uploaded with Internal Drupal file ID 11 \nFile anne_008.jpg sucessfully uploaded with Internal Drupal file ID 12\nFile anne_009.jpg sucessfully uploaded with Internal Drupal file ID 13 \nFile anne_010.jpg sucessfully uploaded with Internal Drupal file ID 14\nNew Object 'Anne of Green Gables : Chapters 1 and 2' with UUID 9eb28775-d73a-4904-bc79-f0e925075bc5 successfully ingested. Thanks!\n
The gist here is that if the script says Thanks
you are good.
archipelago-deployment/d8content/archipelago-recyclables/deploy_ados.sh
Inside you will find lines like this one:
drush archipelago:jsonapi-ingest /var/www/html/d8content/archipelago-recyclables/ado/0c2dc01a-7dc2-48a9-b4fd-3f82331ec803.json --uuid=0c2dc01a-7dc2-48a9-b4fd-3f82331ec803 --bundle=digital_object --uri=http://esmero-web --files=/var/www/html/d8content/archipelago-recyclables/ado/0c2dc01a-7dc2-48a9-b4fd-3f82331ec803 --user=jsonapi --password=jsonapi --moderation_state=published;\n
What you want here is to modify/replace the absolute paths that point your demo objects (.json) and their assets (folders with the same name). Basically replace every entry of /var/www/html/d8content/archipelago-recyclables/
with the path to archipelago-recyclables
.
If you have trouble running this or see errors or need help with a step (its only two steps), please let us know (ASAP!). You can either open an issue
in this repository or use the Google Group. We are here to help.
GPLv3
","tags":["Archipelago Digital Objects","Demo Content"]},{"location":"archipelago-deployment-live-gitworkflow/","title":"Managing, sheltering, pruning and nurturing your own custom Archipelago","text":"Now that you have your base Archipelago Live Deployment running (Do you? If not, go back!) you may be wondering about things like:
Archipelagos
are living beings. They evolve and become beautiful, closer and closer to your needs. Because of that resetting
your particularities on every Archipelago
code release is not a good idea, nor even recommended. What you want is to keep your own Drupal Settings
\u2014your facets, your themes, your Solr fields, your own modules, and all their configurations\u2014safe and be able to restore all in case something goes wrong.
The ones we ship with every Release will reset
your Archipelago's settings to Factory defaults if applied wildly
.
This is where Github
comes in place.
Prerequisites:
git config --global --edit
on your Live Instance and Set your user name/email/etc.Vi
! In case of emergency/panic press ESC
and type :x
to escape and/or run away in terror. To edit Press i
and uncomment the lines. Once Done press ESC
and type :x
to save.Let's fork https://github.com/esmero/archipelago-deployment-live under your own Account via the web. Happy Note: Since 2021 also keeping forked branches in sync with the origin can be done via the UI directly.
","tags":["Github"]},{"location":"archipelago-deployment-live-gitworkflow/#12-connect-your-live-instance-terminal","title":"1.2 Connect your Live instance terminal.","text":"Move to your repository's base folder, and let's start by adding your New Fork as a secondary Git Origin
. Replace in this command yourOwnAccount
with (guess what?) your own account:
git remote add upstream https://github.com/yourOwnAccount/archipelago-deployment-live\n
Now check if you have two remotes (origin
=> This repository, upstream
=> your own fork):
git remote -v\n
You will see this:
origin https://github.com/esmero/archipelago-deployment-live (fetch)\norigin https://github.com/esmero/archipelago-deployment-live (push)\nupstream https://github.com/yourOwnAccount/archipelago-deployment-live (fetch)\nupstream https://github.com/yourOwnAccount/archipelago-deployment-live (push)\n
Good!
","tags":["Github"]},{"location":"archipelago-deployment-live-gitworkflow/#13-now-lets-create-from-your-current-live-instance-a-new-branch","title":"1.3 Now let's create from your current Live Instance a new Branch.","text":"We will push this branch into your Fork and it will be all yours to maintain. Please replace yourOwnOrg
with any Name you want for this. We like to keep the current Branch name in place after your personal prefix:
git checkout -b yourOwnOrg-1.0.0-RC3\n
Good, you now have a new local
branch named yourOwnOrg-1.0.0-RC3
, and it's time to decide what we are going to push into Github.
By default our deployment strategy (this repository) ignores a few files you want to have in Github. Also, there are things like the Installed Drupal Modules and PHP Libraries (the Source Code), the Database, Caches, your Secrets (.env
file), and your Drupal settings.php
file. You FOR SURE do not want to have these in Github and are better suited for a private Backup Storage.
Let's start by push
ing what you have (no commits, your new yourOwnOrg-1.0.0-RC3
as it is) to your new Fork. From there on we can add new Commits and files:
git push upstream yourOwnOrg-1.0.0-RC3\n
And Git will respond with the following (use your yourOwnAccount
personal Github Access Token as password):
Username for 'https://github.com': yourOwnAccount\nPassword for 'https://yourOwnAccount@github.com': \nTotal 0 (delta 0), reused 0 (delta 0)\nremote: \nremote: Create a pull request for 'yourOwnOrg-1.0.0-RC3' on GitHub by visiting:\nremote: https://github.com/yourOwnAccount/archipelago-deployment-live/pull/new/yourOwnOrg-1.0.0-RC3\nremote: \nTo https://github.com/yourOwnAccount/archipelago-deployment-live\n * [new branch] yourOwnOrg-1.0.0-RC3 -> yourOwnOrg-1.0.0-RC3\n
","tags":["Github"]},{"location":"archipelago-deployment-live-gitworkflow/#15-first-commit","title":"1.5 First Commit","text":"Right now this new Branch (go and check it out at https://github.com/yourOwnAccount/archipelago-deployment-live/tree/yourOwnOrg-1.0.0-RC3) will not differ at all from 1.0.0-RC3. That is OK. To make your Branch unique, what we want is to \"commit\" our changes. How do we do this?
Let's add our composer.json
and composer.lock
to our change list. Both of these files are quite personal, and as you add more Drupal Modules, dependencies, or Upgrade your Archipelgo and/or Drupal Core and Modules, all of these corresponding files will change. See the -f
? Because our base deployment ignores that file and you want it, we \"Force\" add it. Note: At this stage composer.lock
won't be added at all because it's still the same as before. So you can only \"add\" files that have changes.
git add drupal/composer.json \ngit add -f drupal/composer.lock\n
Now we can see what is new and will be committed by executing:
git status\n
You may see something like this:
On branch yourOwnOrg-1.0.0-RC3 \nChanges to be committed:\n (use \"git restore --staged <file>...\" to unstage)\n new file: drupal/composer.json\n\nChanges not staged for commit:\n (use \"git add <file>...\" to update what will be committed)\n (use \"git restore <file>...\" to discard changes in working directory)\n modified: drupal/scripts/archipelago/deploy.sh\n modified: drupal/scripts/archipelago/update_deployed.sh\n\nUntracked files:\n (use \"git add <file>...\" to include in what will be committed)\n deploy/ec2-docker/docker-compose.yml\n drupal/.editorconfig\n drupal/.gitattributes\n
If you do not want to add each Changes not staged for commit
individually (WE recommend you only commit what you need. Be warned and take caution.), you can also issue a git add .
, which means add all.
git add drupal/scripts/archipelago/deploy.sh\ngit add drupal/scripts/archipelago/update_deployed.sh\ngit add deploy/ec2-docker/docker-compose.yml\n
In this case we are also committing docker-compose.yml
, which you may have customized and modified to your domain (See Install Guide Step 3), deploy.sh
and update_deployed.sh
scripts. If you ever need to avoid tracking certain files at all, you can edit the .gitignore
file and add more patterns to it (look at it, it's fun!).
git commit -m \"Fresh Install of Archipelago for yourOwnOrg\"\n
If you had your email/user account setup correctly (see Prerequisites) you will see:
Fresh Install of Archipelago yourOwnOrg\n 4 files changed, 360 insertions(+), 46 deletions(-)\n create mode 100644 deploy/ec2-docker/docker-compose.yml\n create mode 100644 drupal/composer.json\n
And now finally you can push this back to your Fork:
git push upstream yourOwnOrg-1.0.0-RC3\n
And Git will respond with the following (use your yourOwnAccount
personal Github Access Token as password):
Username for 'https://github.com': yourOwnAccount\nPassword for 'https://yourOwnAccount@github.com': \nEnumerating objects: 18, done.\nCounting objects: 100% (18/18), done.\nCompressing objects: 100% (10/10), done.\nWriting objects: 100% (10/10), 2.26 KiB | 2.26 MiB/s, done.\nTotal 10 (delta 5), reused 0 (delta 0)\nremote: Resolving deltas: 100% (5/5), completed with 5 local objects.\nTo https://github.com/yourOwnAccount/archipelago-deployment-live\n d9fa835..3427ce5 yourOwnOrg-1.0.0-RC3 -> yourOwnOrg-1.0.0-RC3\n
And done.
","tags":["Github"]},{"location":"archipelago-deployment-live-gitworkflow/#2-keeping-your-archipelago-modules-updated-during-releases","title":"2. Keeping your Archipelago Modules Updated during releases","text":"Releases in Archipelago are a bit different to other OSS projects. When a Release is done (let's say 1.0.0-RC2) we freeze the current release branches in every module we provide, package the release, and inmediatelly start with a new Release Cycle (6 months long normally) by creating in each repository a new Set of Branches (for this example 1.0.0-RC3). All new commits, fixes, improvements, features now will ALWAYS go into the Open/on-going new cycle branches (for this example 1.0.0-RC3), and once we are done we do it all over again. We freeze (for this example 1.0.0-RC3), and a new release cycle starts with fresh new \"WIP\" branches (for this example 1.1.0).
Some Modules like AMI or Strawberry Runners have their independent Version but are released together anyway, e.g. for 1.0.0-RC3 both AMI and Strawberry Runners are 0.2.0. Why? Because work started later than the core Archipelago and also because they are not really CORE. So what happens with main
branches? In our project main
branches are never experimental. They are always a 1:1 with the latest stable release. So main
will contain a full commit of 1.0.0-RC2 until we freeze 1.0.0-RC3 when main
gets that code. Over and over. Nice, right?
The following modules are the ones we update on every release:
strawberryfield/strawberryfield
strawberryfield/format_strawberryfield
strawberryfield/webform_strawberryfield
archipelago/ami
strawberryfield/strawberry_runners
We also update macro modules that are meant for deployment like this Repository and https://github.com/esmero/archipelago-deployment.
To keep your Archipelago up to date, especially once you \"go custom\" as described in this Documentation, the process is quite simple, e.g. to fetch latest 1.0.0-RC3
updates during the 1.0.0-RC3
release cycle run:
docker exec -ti esmero-php bash -c \"composer require strawberryfield/strawberryfield:dev-1.0.0-RC3 strawberryfield/format_strawberryfield:dev-1.0.0-RC3 strawberryfield/webform_strawberryfield:dev-1.0.0-RC3 archipelago/ami:0.2.0.x-dev strawberryfield/strawberry_runners:0.2.0.x-dev strawberryfield/strawberry_runners:0.2.0.x-dev archipelago/archipelago_subtheme:dev-1.0.0-RC3 -W\"\n
And then run any Database updates that may be needed:
docker exec -ti esmero-php bash -c \"drush updatedb\"\n
This will bring all the new code and all (except if there are BUGS!) should work as expected.
Note: Archipelago really tries hard to be as backwards compatible as possible and rarely will you see a non-documented or non-dealt-with deprecation.
Note 2: We of course recommend always running the Stable (frozen) release, but since code is plastic and fixes will go into a WIP open branch, you should be safe enough to move all modules together.
You can run these commands any time you need, and while the release is open you will always get the latest code (even if it's always the same branch). Please follow/subscribe to each Module's Github to be aware of changes/issues and improvements.
","tags":["Github"]},{"location":"archipelago-deployment-live-gitworkflow/#3-keeping-your-archipelagos-drupal-contributed-modules-and-core-updated","title":"3. Keeping your Archipelago's Drupal Contributed Modules and Core updated","text":"","tags":["Github"]},{"location":"archipelago-deployment-live-gitworkflow/#31-contributed-modules","title":"3.1 Contributed Modules.","text":"To keep your Archipelago's Drupal up to date check your Drupal at https://yoursite.org/admin/modules/update. Make sure you check mostly (yes mostly, no need to overreact) for Security Updates. Not every Drupal contributed module (project) keeps backwards compatibility, and we try to test every version we ship (as in this repository's composer.lock
files) before releasing. Once you detect a major change/requirement, visit the Project's Changelog Website, and take some time reading it. If you feel confident it's not going to break all, copy the suggested Composer command, e.g. if you visit https://www.drupal.org/project/google_api_client/releases/8.x-3.2 you will see that the update is suggested as:
Install with Composer: $ composer require 'drupal/google_api_client:^3.2'\n
Using the same module as an example, before doing any final updates, check your current running version (take note in case you need to revert):
docker exec -ti esmero-php bash -c \"composer info 'drupal/google_api_client\"\n
Keep the version around.
Now let's update, which means using the suggested command translated to our own Docker world like this (notice the -W
):
docker exec -ti esmero-php bash -c \"composer require 'drupal/google_api_client:^3.2 -W\"\n
And then run any Database updates that may be needed:
docker exec -ti esmero-php bash -c \"drush updatedb\"\n
This will update that module. Test your website. Depending on what you update, you want to focus first on the functionality it provides, and then create/edit/delete a fictitious Digital Object to ensure it did not add any errors to your most beloved Digital Objects workflows.
If you see errors or you feel it's not acting as it should, you can revert by doing:
docker exec -ti esmero-php bash -c \"composer require 'drupal/google_api_client:^VERSION_YOU_KEPT_AROUND -W\"\n
And then run any Database updates that may be needed:
docker exec -ti esmero-php bash -c \"drush updatedb\"\n
If this happens we encourage you to please \ud83d\udc4f share your findings with our community/slack/Github ISSUE here.
","tags":["Github"]},{"location":"archipelago-deployment-live-gitworkflow/#31-drupal-core-inside-the-same-major-version","title":"3.1 Drupal Core inside the same major version:","text":"This is quite similar to a contributed module but normally involves at least 3 dependencies and of course larger changes.
","tags":["Github"]},{"location":"archipelago-deployment-live-gitworkflow/#exact-version","title":"Exact Version","text":"Inside the same major version, e.g. inside Drupal 9, if you are currently running Drupal 9.0.1
and you want to update to an exact latest (as I write 9.2.4
):
docker exec -ti esmero-php bash -c \"composer require drupal/core:9.2.4 drupal/core-dev:9.2.4 drupal/core-composer-scaffold:9.2.4 drupal/core-project-message:9.2.4 --update-with-dependencies\"\n
Or under Drupal 8, if you are currently running Drupal 8.9.14
and you want to update to an exact latest (as I write 8.9.18
):
docker exec -ti esmero-php bash -c \"composer require drupal/core-dev:8.9.18 drupal/core:8.9.18 drupal/core-composer-scaffold:8.9.18 --update-with-dependencies\"\n
And then for both cases run any Database updates that may be needed:
docker exec -ti esmero-php bash -c \"drush updatedb\"\n
","tags":["Github"]},{"location":"archipelago-deployment-live-gitworkflow/#alternative-major-version","title":"Alternative Major Version","text":"If you want to only remember a single command
and want to be sure to also get all extra packages for Drupal 9, run:
docker exec -ti esmero-php bash -c \"composer require drupal/core-dev:^9 drupal/core:^9 drupal/core-composer-scaffold:^9 drupal/core-project-message:^9 -W\"\n
Or for Drupal 8:
docker exec -ti esmero-php bash -c \"composer require drupal/core-dev:^8 drupal/core:^8 drupal/core-composer-scaffold:^8 drupal/core-project-message:^8 -W\"\n
And then for both cases run any Database updates that may be needed:
docker exec -ti esmero-php bash -c \"drush updatedb\"\n
This will always get you the latest Drupal
and dependencies
allowed by your composer.json
.
Since major versions may bring larger deprecations, contributed modules will stay behind, and the world (and your database may collapse), we really recommend that you do some tests first (locally) or follow one of our guides. We at Archipelago will always document a larger version update. Currently, the Drupal 8 to Drupal 9 Update is documented in detail here.
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback, or open an ISSUE in this Archipelago Deployment Live Repository.
Return to Archipelago Live Deployment.
","tags":["Github"]},{"location":"archipelago-deployment-live-moveToLive/","title":"Moving fromarchipelago-deployment
to archipelago-deployment-live
","text":"","tags":["Archipelago-deployment","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-moveToLive/#what-is-this-documentation-for","title":"What is this documentation for?","text":"If you have been using/running/populating an instance with Archipelago Digital Objects that was set up using our simpler-to-deploy but harder-to-customize archipelago-deployment strategy and can't wait to move to this one\u2014meant for a larger (and somehow easier to maintain and upgrade on the long run) instance\u2014but (wait!) you do not want to ingest again, set up again, configure users, etc. (You already did that!), this is your documentation.
","tags":["Archipelago-deployment","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-moveToLive/#what-is-this-documentation-not-for","title":"What is this documentation not for?","text":"To install an archipelago-deployment-live
from scratch or to keep (forever) syncing between the two deployment options in a quantum phase shifting eternum like a time crystal.
archipelago-deployment
as a basis.composer
, drush
, Linux Permissions, and git
of course.In a nutshell: archipelago-deployment-live
uses a different folder structure moving configuration storage, data storage outside of your webroot, and allows a much finer control of your settings (safer) and Docker containers. In a nutshell inside the first nutshell: archipelago-deployment-live
also ignores more files so keeping customized versions, your own packages, your own settings around, and version controlled is much easier. Lastly: archipelago-deployment-live
makes more use of Cloud Services, e.g. so if you have been running min.io
as local mounted storage you may now consider moving storage (files) to a cloud service like AWS S3.
In a nutshell: Since both run the same code and use the same Docker Containers, the data is actually the same. Everything is just persisted in different places.
","tags":["Archipelago-deployment","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-moveToLive/#getting-the-new-repo-in-place","title":"Getting the new repo in place","text":"First you need to clone this repository and (hopefully) store in the same parent folder to your current archipelago-deployment
one. For the purpose of this tutorial we will assume you have archipelago-deployment
cloned in this location: $HOME/archipelago-deployment
.
Locate your archipelago-deployment
folder in your terminal. Do an ls
to make sure you can see the folder (not the content) and run:
git clone https://github.com/esmero/archipelago-deployment-live\ncd archipelago-deployment-live\ngit checkout 1.0.0-RC3\ncd ..\ncd archipelago-deployment\n
Now you have side by side $HOME/archipelago_deployment
and $HOME/archipelago-deployment-live
.
This will give you the base structure.
Before touching anything let's start by generating a backup of your current deployment (safety first).
","tags":["Archipelago-deployment","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-moveToLive/#backing-up","title":"Backing up","text":"","tags":["Archipelago-deployment","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-moveToLive/#step-1","title":"Step 1:","text":"Shut down your docker-compose
ensemble. Inside your original archipelago-deployment
folder run this:
docker-compose down\n
","tags":["Archipelago-deployment","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-moveToLive/#step-2","title":"Step 2:","text":"Verify all containers are actually down:
docker ps\n
The following command should return an empty listing. If anything is still running, wait a little longer and run the previous command again.
","tags":["Archipelago-deployment","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-moveToLive/#step-3","title":"Step 3:","text":"Now let's tar.gz the whole ensemble with data and configs. As an example we will save this into your $HOME
folder. As a good practice we append the current date (YEAR-MONTH-DAY) to the filename. Here we assume today is December 1st of 2021:
sudo tar -czvpf $HOME/archipelago-deployment-backup-20211201.tar.gz ../archipelago-deployment\n
The process may take a few minutes. Now let's verify that all is there and that the tar.gz
is not corrupt:
tar -tvvf $HOME/archipelago-deployment-backup-20211201.tar.gz \n
You will see a listing of files. If corrupt (do you have enough space? did your ssh connection drop?) you will see:
tar: Unrecognized archive format\n
Done! If you are running a public instance we can allow ourselves to start Docker again to avoid downtime:
docker-compose up -d\n
","tags":["Archipelago-deployment","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-moveToLive/#the-directory-structures","title":"The directory structures","text":"Now that you backed all up we can spend some minutes looking at both directory structures.
If you observe both deployment strategies side by side you will inmediately notice the most important similarities and also differences:
archipelago-deployment Live archipelago-deployment.\n\u251c\u2500\u2500 config_storage\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 iiifconfig\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 nginxconfig\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 nginxconfig_selfcert\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 php-fpm\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 solrconfig\n\u251c\u2500\u2500 data_storage\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 db\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 iiifcache\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 iiiftmp\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 letsencrypt\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 minio-data\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 ngnixcache\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 selfcert\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 solrcore\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 solrlib\n\u251c\u2500\u2500 deploy\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 azure-kubernetes\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 ec2-docker\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 kubernetes\n\u251c\u2500\u2500 docs\n\u2514\u2500\u2500 drupal\n\u2502 \u251c\u2500\u2500 config\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 d8content\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 docs\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 drush\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 patches\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 persistent\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 private\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 scripts\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 vendor\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 web\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 xdebug\n
.\n\u251c\u2500\u2500 config\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 sync\n\u251c\u2500\u2500 d8content\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 metadatadisplays\n\u251c\u2500\u2500 docs\n\u251c\u2500\u2500 drush\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 Commands\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 sites\n\u251c\u2500\u2500 nginxconfigford8\n\u251c\u2500\u2500 patches\n\u251c\u2500\u2500 persistent\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 db\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 iiifcache\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 iiifconfig\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 miniodata\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 solrconfig\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 solrcore\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 solrlib\n\u251c\u2500\u2500 private\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 webform\n\u251c\u2500\u2500 scripts\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 archipelago\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 composer\n\u251c\u2500\u2500 vendor\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 archipelago\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 asm89\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 aws\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 behat\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 bin\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 brick\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 chi-teck\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 composer\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 consolidation\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 container-interop\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 cweagans\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 data-values\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 dflydev\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 doctrine\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 drupal\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 drush\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 easyrdf\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 egulias\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 enlightn\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 erusev\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 evenement\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 ezyang\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 fabpot\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 fileeye\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 firebase\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 frictionlessdata\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 google\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 graham-campbell\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 grasmash\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 guzzlehttp\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 instaclick\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 jcalderonzumba\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 jean85\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 jmikola\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 justinrainbow\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 laminas\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 league\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 lsolesen\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 maennchen\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 markbaker\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 masterminds\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 mglaman\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 mhor\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 mikey179\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 mixnode\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 ml\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 monolog\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 mtdowling\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 myclabs\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 nesbot\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 nette\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 nikic\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 paragonie\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 
pear\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 phar-io\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 phenx\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 phpdocumentor\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 phplang\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 phpmailer\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 phpoffice\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 phpoption\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 phpseclib\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 phpspec\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 phpstan\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 phpunit\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 professional-wiki\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 psr\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 psy\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 ralouphie\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 ramsey\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 react\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 sebastian\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seld\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 sirbrillig\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 solarium\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 squizlabs\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 stack\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 strawberryfield\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 swaggest\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 symfony\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 symfony-cmf\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 theseer\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 twbs\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 twig\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 typo3\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 vlucas\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 web64\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 webflo\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 webmozart\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 wikibase\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 wikimedia\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 zaporylie\n\u251c\u2500\u2500 web\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 core\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 libraries\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 modules\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 profiles\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 sites\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 themes","tags":["Archipelago-deployment","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-moveToLive/#the-data","title":"The Data","text":"
Let's start by focusing on the data
, in our case the Database, Solr, and File (S3 + Private) storage. Collapsing here a few folders will make this easier to read. Marked with a *
are matching folders that contain DB, Solr Core, the S3 min.io data (if you are using local storage) and also Drupal's very own private
folder:
.\n\u251c\u2500\u2500 config_storage\n\u251c\u2500\u2500 data_storage\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 * db *\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 iiifcache\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 iiiftmp\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 letsencrypt\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 * minio-data *\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 ngnixcache\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 selfcert\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 solrcore\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 solrlib\n\u251c\u2500\u2500 deploy\n\u251c\u2500\u2500 docs\n\u2514\u2500\u2500 drupal\n\u2502 \u251c\u2500\u2500 config\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 d8content\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 docs\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 drush\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 patches\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 persistent\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 * private *\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 scripts\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 vendor\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 web\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 xdebug\n
.\n\u251c\u2500\u2500 config\n\u251c\u2500\u2500 d8content\n\u251c\u2500\u2500 docs\n\u251c\u2500\u2500 drush\n\u251c\u2500\u2500 nginxconfigford8\n\u251c\u2500\u2500 patches\n\u251c\u2500\u2500 persistent\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 * db *\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 iiifcache\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 iiifconfig\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 * miniodata *\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 solrconfig\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 * solrcore *\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 solrlib\n\u251c\u2500\u2500 * private *\n\u251c\u2500\u2500 scripts\n\u251c\u2500\u2500 vendor\n\u251c\u2500\u2500 web\n","tags":["Archipelago-deployment","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-moveToLive/#copying-the-data-into-the-new-structure","title":"Copying the Data into the new Structure","text":"
To do so we need to stop Docker again. This is needed because Databases sometimes keep an open Change Log and Locks in place, and if there is any interaction or cron running, your data may end up corrupted.
","tags":["Archipelago-deployment","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-moveToLive/#step-1_1","title":"Step 1:","text":"Shut down your docker-compose
ensemble. Inside your original archipelago-deployment
folder run this:
docker-compose down\n
","tags":["Archipelago-deployment","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-moveToLive/#step-2_1","title":"Step 2:","text":"Verify all containers are actually down:
docker ps\n
","tags":["Archipelago-deployment","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-moveToLive/#step-3_1","title":"Step 3:","text":"We will copy DB, min.io (File and ADO storage as files) and Drupal's private (temporary files, caches) folders to its new place:
sudo cp -rpv persistent/db ../archipelago-deployment-live/data_storage/db\nsudo cp -rpv persistent/solrcore ../archipelago-deployment-live/data_storage/solrcore\nsudo cp -rpv persistent/miniodata ../archipelago-deployment-live/data_storage/minio-data\nsudo cp -rpv private ../archipelago-deployment-live/drupal/private\n
Running -rpv
will copy verbosely and recursively while preserving original permissions.
Done!
You can now start docker-compose
again:
docker-compose up -d\n
","tags":["Archipelago-deployment","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-moveToLive/#the-web","title":"The Web","text":"Collapsing again a few folders to aid in readability, we can now focus on your actual Drupal/Archipelago Code/Web and settings. To be honest (we are), you can easily reinstall and restore all this via composer
, but we can also move folders as a learning experience/time and bandwidth experience. Marked with a *
are matching folders you want to copy over:
.\n\u251c\u2500\u2500 config_storage\n\u251c\u2500\u2500 data_storage\n\u251c\u2500\u2500 deploy\n\u251c\u2500\u2500 docs\n\u2514\u2500\u2500 drupal\n\u2502 \u251c\u2500\u2500 * config *\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 d8content\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 docs\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 drush\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 patches\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 persistent\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 private\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 scripts\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 * vendor *\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 * web *\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 xdebug\n
.\n\u251c\u2500\u2500 * config *\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 sync\n\u251c\u2500\u2500 d8content\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 metadatadisplays\n\u251c\u2500\u2500 docs\n\u251c\u2500\u2500 drush\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 Commands\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 sites\n\u251c\u2500\u2500 nginxconfigford8\n\u251c\u2500\u2500 patches\n\u251c\u2500\u2500 persistent\n\u251c\u2500\u2500 private\n\u251c\u2500\u2500 scripts\n\u251c\u2500\u2500 * vendor *\n\u251c\u2500\u2500 * web *\n","tags":["Archipelago-deployment","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-moveToLive/#copying-the-web-into-the-new-structure","title":"Copying the Web into the new Structure","text":"
No need to stop Docker again. We can do this while your Archipelago is still running.
","tags":["Archipelago-deployment","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-moveToLive/#step-1_2","title":"Step 1:","text":"We will copy all important folders over. From your archipelago-deployment
folder run:
sudo cp -rpv vendor ../archipelago-deployment-live/drupal/vendor\nsudo cp -rpv web ../archipelago-deployment-live/drupal/web\nsudo cp -rpv config ../archipelago-deployment-live/drupal/config\n
And also, selectively, a few files we know you are very fond of!
sudo cp -rpv composer.json ../archipelago-deployment-live/drupal/composer.json\nsudo cp -rpv composer.lock ../archipelago-deployment-live/drupal/composer.lock\n
Done!
","tags":["Archipelago-deployment","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-moveToLive/#ssl-enviromentals-configurations-settings-and-docker","title":"SSL, Enviromentals, Configurations, Settings and Docker","text":"We are almost done, but archipelago-deployment-live
has a different, safer way of defining SSL Certs, credentials, and global settings for your Archipelago. We will start first by copying settings as they are (most likely not very safe), and then we can update passwords/etc. to make your system better-prepared for the world.
To learn more about these general settings please read this section of the parent Documentation (who likes duplicated documentation? Nobody.). The gist here is (after reading, please do not skip) that we need to add our service definitions into a .env
file.
Coming from archipelago-deployment
means and assumes that you are running Amazon Linux 2 using the suggested locations in this document, that you have a vanilla deployment, and that you followed these instructions, so your values for $HOME/archipelago-deployment-live/deploy/ec2-docker/.env
will be the following:
ARCHIPELAGO_ROOT=/home/ec2-user/archipelago-deployment-live\nARCHIPELAGO_EMAIL=your@validemail.org\nARCHIPELAGO_DOMAIN=your.domain.org\nMINIO_ACCESS_KEY=minio\nMINIO_SECRET_KEY=minio123\nMYSQL_ROOT_PASSWORD=esmero-db\nMINIO_BUCKET_MEDIA=archipelago\nMINIO_FOLDER_PREFIX_MEDIA=/\nMINIO_BUCKET_CACHE=archipelago\nMINIO_FOLDER_PREFIX_CACHE=/\n
If you plan on staying on local storage driven min.io
, MINIO_BUCKET_CACHE
and MINIO_FOLDER_PREFIX_CACHE
are not going to be used. If you are planning on moving your storage from local to cloud-driven, please replace these with the right values, e.g. AWS IAM keys and secrets plus bucket names and prefixes (folders). Again, refer to the parent Documentation for setting this up.
Once you have that in place (Double-check. If something goes wrong here we can always fine-tune and fix again.), we need to decide on a new docker-compose
file, and you may need to customize it depending on your choices and current and future needs.
If you already have an SSL certificate, and it's provided by CertBot
you can either copy the certs from your current system (will totally depend on your setup since archipelago-deployment
does not provide out-of-the-box SSL Certs) to $HOME/archipelago-deployment-live/data_storage/letsencrypt
.
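As a rough sketch (assuming your current certificates live in the standard Certbot location /etc/letsencrypt; adjust the source path to wherever your existing setup actually keeps them), that copy could look like this:
# Copy the whole Certbot tree (certs, keys, renewal configs), preserving permissions\nsudo cp -rpv /etc/letsencrypt/. $HOME/archipelago-deployment-live/data_storage/letsencrypt/\n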
A normal folder structure for that is:
.\n\u251c\u2500\u2500 accounts\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 acme-v02.api.letsencrypt.org\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 directory\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 cac9f8218ef18e4f11ec053785bbf648\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 meta.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 private_key.json\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 regr.json\n\u251c\u2500\u2500 archive\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 your.domain.org\n\u2502\u00a0\u00a0 \u00a0\u00a0 \u251c\u2500\u2500 cert1.pem\n\u2502\u00a0\u00a0 \u00a0\u00a0 \u251c\u2500\u2500 chain1.pem\n\u2502\u00a0\u00a0 \u00a0\u00a0 \u251c\u2500\u2500 fullchain1.pem\n\u2502\u00a0\u00a0 \u00a0\u00a0 \u2514\u2500\u2500 privkey1.pem\n\u2502\u00a0\n\u251c\u2500\u2500 csr\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 0000_csr-certbot.pem\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 0001_csr-certbot.pem\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 0002_csr-certbot.pem\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 0003_csr-certbot.pem\n\u251c\u2500\u2500 keys\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 0000_key-certbot.pem\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 0001_key-certbot.pem\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 0002_key-certbot.pem\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 0003_key-certbot.pem\n\u251c\u2500\u2500 live\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 README\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 your.domain.org\n\u2502\u00a0\u00a0 \u00a0\u00a0 \u251c\u2500\u2500 cert.pem -> ../../archive/your.domain.org/cert1.pem\n\u2502\u00a0\u00a0 \u00a0\u00a0 \u251c\u2500\u2500 chain.pem -> ../../archive/your.domain.org/chain1.pem\n\u2502\u00a0\u00a0 \u00a0\u00a0 \u251c\u2500\u2500 fullchain.pem -> ../../archive/your.domain.org/fullchain1.pem\n\u2502\u00a0\u00a0 \u00a0\u00a0 \u251c\u2500\u2500 privkey.pem -> ../../archive/your.domain.org/privkey1.pem\n\u2502\u00a0\u00a0 \u00a0\u00a0 \u2514\u2500\u2500 README\n\u251c\u2500\u2500 renewal\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 your.domain.org.conf\n\u2502\u00a0\u00a0 \n\u2514\u2500\u2500 renewal-hooks\n \u251c\u2500\u2500 deploy\n \u251c\u2500\u2500 post\n \u2514\u2500\u2500 pre\n
Or if your SSL cert is up for renewal, you can just let Archipelago request it for you. Renewal will happen auto-magically, and you may never ever need to worry about that in the future.
Finally, let's adapt the docker-compose
file we need to our previous (but still current!) archipelago-deployment
reality.
For x86/AMD, run (for ARM64/Apple M1 please check the parent Documentation):
cp $HOME/archipelago-deployment-live/deploy/ec2-docker/docker-compose-aws-s3.yml $HOME/archipelago-deployment-live/deploy/ec2-docker/docker-compose.yml\nnano $HOME/archipelago-deployment-live/deploy/ec2-docker/docker-compose.yml\n
And replace the content with this slightly modified version. Note: we really only changed the lines after this comment: # THIS DIFFERS FROM THE STANDARD ONE...
.
# Run docker-compose up -d\n\nversion: '3.5'\nservices:\n web:\n container_name: esmero-web\n image: staticfloat/nginx-certbot\n restart: always\n environment:\n CERTBOT_EMAIL: ${ARCHIPELAGO_EMAIL}\n ENVSUBST_VARS: FQDN\n FQDN: ${ARCHIPELAGO_DOMAIN}\n ports:\n - \"80:80\"\n - \"443:443\"\n volumes:\n - ${ARCHIPELAGO_ROOT}/config_storage/nginxconfig/conf.d:/etc/nginx/user.conf.d\n - ${ARCHIPELAGO_ROOT}/config_storage/nginxconfig/certbot_extra_domains:/etc/nginx/certbot/extra_domains:ro\n - ${ARCHIPELAGO_ROOT}/drupal:/var/www/html:cached\n - ${ARCHIPELAGO_ROOT}/data_storage/ngnixcache:/var/cache/nginx\n - ${ARCHIPELAGO_ROOT}/data_storage/letsencrypt:/etc/letsencrypt\n depends_on:\n - solr\n - php\n - db\n tty: true\n networks:\n - host-net\n - esmero-net\n php:\n container_name: esmero-php\n restart: always\n image: \"esmero/php-7.4-fpm:1.0.0-RC2-multiarch\"\n tty: true\n networks:\n - host-net\n - esmero-net\n volumes:\n - ${ARCHIPELAGO_ROOT}/config_storage/php-fpm/www.conf:/usr/local/etc/php-fpm.d/www.conf\n - ${ARCHIPELAGO_ROOT}/drupal:/var/www/html:cached\n environment:\n MINIO_ACCESS_KEY: ${MINIO_ACCESS_KEY}\n MINIO_SECRET_KEY: ${MINIO_SECRET_KEY}\n MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}\n MINIO_BUCKET_MEDIA: ${MINIO_BUCKET_MEDIA}\n MINIO_FOLDER_PREFIX_MEDIA: ${MINIO_FOLDER_PREFIX_MEDIA}\n solr:\n container_name: esmero-solr\n restart: always\n image: \"solr:8.8.2\"\n tty: true\n ports:\n - \"8983:8983\"\n networks:\n - host-net\n - esmero-net\n volumes:\n - ${ARCHIPELAGO_ROOT}/data_storage/solrcore:/var/solr/data\n - ${ARCHIPELAGO_ROOT}/config_storage/solrconfig:/drupalconfig\n - ${ARCHIPELAGO_ROOT}/data_storage/solrlib:/opt/solr/contrib/archipelago/lib\n entrypoint:\n - docker-entrypoint.sh\n - solr-precreate\n - drupal\n - /drupalconfig\n db:\n image: mysql:8.0.22\n command: mysqld --default-authentication-plugin=mysql_native_password --max_allowed_packet=256M\n container_name: esmero-db\n restart: always\n environment:\n MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}\n networks:\n - host-net\n - esmero-net\n volumes:\n - ${ARCHIPELAGO_ROOT}/data_storage/db:/var/lib/mysql\n nlp:\n container_name: esmero-nlp\n restart: always\n image: \"esmero/esmero-nlp:1.0\"\n ports:\n - \"6400:6400\"\n networks:\n - host-net\n - esmero-net\n iiif:\n container_name: esmero-cantaloupe\n image: \"esmero/cantaloupe-s3:4.1.9RC\"\n restart: always\n ports:\n - \"8183:8182\"\n networks:\n - host-net\n - esmero-net\n environment:\n AWS_ACCESS_KEY_ID: ${MINIO_ACCESS_KEY}\n AWS_SECRET_ACCESS_KEY: ${MINIO_SECRET_KEY}\n # THIS DIFFERS FROM THE STANDARD ONE AND ENABLES LOCAL FILESYSTEM CACHE INSTEAD OF AWS S3 one\n CACHE_SERVER_DERIVATIVE: FilesystemCache\n S3SOURCE_BASICLOOKUPSTRATEGY_BUCKET_NAME: ${MINIO_BUCKET_MEDIA}\n S3SOURCE_BASICLOOKUPSTRATEGY_PATH_PREFIX: ${MINIO_FOLDER_PREFIX_MEDIA}\n S3CACHE_BUCKET_NAME: ${MINIO_BUCKET_CACHE} \n S3CACHE_OBJECT_KEY_PREFIX: ${MINIO_FOLDER_PREFIX_CACHE} \n XMS: 2g\n XMX: 4g\n volumes:\n - ${ARCHIPELAGO_ROOT}/config_storage/iiifconfig:/etc/cantaloupe\n - ${ARCHIPELAGO_ROOT}/data_storage/iiifcache:/var/cache/cantaloupe\n - ${ARCHIPELAGO_ROOT}/data_storage/iiiftmp:/var/cache/cantaloupe_tmp\n minio:\n container_name: esmero-minio\n restart: always\n image: minio/minio:latest\n volumes:\n - ${ARCHIPELAGO_ROOT}/data_storage/minio-data:/data:cached\n ports:\n - \"9000:9000\"\n - \"9001:9001\"\n networks:\n - host-net\n - esmero-net\n environment:\n MINIO_HTTP_TRACE: /tmp/minio-log.txt\n MINIO_ROOT_USER: ${MINIO_ACCESS_KEY}\n MINIO_ROOT_PASSWORD: ${MINIO_SECRET_KEY}\n 
MINIO_ACCESS_KEY: ${MINIO_ACCESS_KEY}\n MINIO_SECRET_KEY: ${MINIO_SECRET_KEY}\n # THIS DIFFERS FROM THE STANDARD ONE AND ENABLES LOCAL MINIO INSTEAD OF AWS S3 one \n command: server /data --console-address \":9001\"\nnetworks:\n host-net:\n driver: bridge\n esmero-net:\n driver: bridge\n internal: true\n
Press CTRL+X (and confirm saving when prompted), and you are done. Now the final test!!
","tags":["Archipelago-deployment","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-moveToLive/#shutdown-the-old-one-start-the-new-one","title":"Shutdown the old one, start the new one","text":"So we are ready. Testing may be a hit-or-miss thing here. Did we cover all the steps? Did a command fail? The good thing is that we can start the new ensemble, and all our old ones will survive. And we can come back over and over until we are ready. Let's try!
We will start by shutting down the running Docker ensemble:
cd $HOME/archipelago-deployment\ndocker-compose down\n
Now let's go to our new deployment. Docker starts here in a different folder:
cd $HOME/archipelago-deployment-live/deploy/ec2-docker\ndocker-compose up\n
You may notice that we removed the -d
. Why? We want to see all the messages and notice/mark/copy any errors, e.g. did the SSL CERT load correctly? Did the MYSQL import work out? To avoid shutting it down while all starts, please open another Terminal and type:
docker ps\n
And look at the uptimes. Do you see any containers restarting (where CREATED and STATUS differ by a lot and STATUS keeps resetting to 0)? A healthy deployment will look similar to this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\nf794c25db64c esmero/cantaloupe-s3:4.1.9RC2-arm64 \"sh -c 'java -Dcanta\u2026\" 6 seconds ago Up 3 seconds 0.0.0.0:8183->8182/tcp esmero-cantaloupe\n5b791445720f jonasal/nginx-certbot \"/docker-entrypoint.\u2026\" 6 seconds ago Up 3 seconds 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp esmero-web\ne38fbbd86edf esmero/esmero-nlp:1.0.1-RC2-arm64 \"/usr/local/bin/entr\u2026\" 11 seconds ago Up 6 seconds 0.0.0.0:6400->6400/tcp esmero-nlp\nc84a0a4d43e9 minio/minio:latest \"/usr/bin/docker-ent\u2026\" 11 seconds ago Up 6 seconds 0.0.0.0:9000-9001->9000-9001/tcp esmero-minio\n3ec176a960c3 esmero/php-7.4-fpm:1.0.0-RC2-multiarch \"docker-php-entrypoi\u2026\" 11 seconds ago Up 6 seconds 9000/tcp esmero-php\ne762ad7ea5e2 solr:8.8.2 \"docker-entrypoint.s\u2026\" 11 seconds ago Up 6 seconds 0.0.0.0:8983->8983/tcp esmero-solr\n381166d61f8c mariadb:10.5.10-focal \"docker-entrypoint.s\u2026\" 11 seconds ago Up 6 seconds 3306/tcp \n
If you feel that all seems to be fine, open a browser window and visit your website. See if you can log in and see ADOs. If not you can momentarily shut down this new Docker ensemble and restart the older one. Nothing is lost! Then with time and tea/coffee and fresh eyes come back and re-trace your steps. 95% of the issues are incorrect values in the .env
file. The other 5% may be on us. If you run into any trouble please get in touch!
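If you need more detail than docker ps gives you, the per-container logs are usually the fastest way to spot what went wrong (container names as defined in the docker-compose.yml above):
# Tail the most recent log lines of the usual suspects\ndocker logs --tail 100 esmero-web\ndocker logs --tail 100 esmero-php\ndocker logs --tail 100 esmero-db\n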
Happy deploying!
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback, or open an ISSUE in this Archipelago Deployment Live Repository.
Return to Archipelago Live Deployment.
","tags":["Archipelago-deployment","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-readme/","title":"Archipelago Deployment Live","text":"A Cloud / Local production ready Archipelago Deployment using Docker and soon Kubernetes.
","tags":["Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-readme/#what-is-this-repo-for","title":"What is this repo for?","text":"Running Archipelago Commons on a live public instance using SSL with Blob/Object Storage backend
Docker
running as a service and docker-compose
Docker
basics knowledge and how to manage packages in your SystemBasically this guide is meant for humans with basic to medium DevOps
background or humans with patience that are willing to troubleshoot, ask, and try again when that background is not (yet) enough. And we are here to help.
Deploy your base system
Make sure your Firewall/AWS Security group has these ports open for everyone to access
Setup your system using your favorite package manager with
e.g. for Amazon Linux 2 (x86/amd64) these steps are tested:
sudo yum update -y\nsudo amazon-linux-extras install -y docker\nsudo service docker start\nsudo usermod -a -G docker ec2-user\nsudo chkconfig docker on\nsudo systemctl enable docker\nsudo yum install -y git htop tree\nsudo curl -L \"https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)\" -o /usr/local/bin/docker-compose\nsudo chmod +x /usr/local/bin/docker-compose\nsudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose\nsudo reboot\n
Reboot is needed to allow Docker to take full control over your OS resources.
","tags":["Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-readme/#step-2","title":"Step 2:","text":"In your location of choice clone this repo
git clone https://github.com/esmero/archipelago-deployment-live\ncd archipelago-deployment-live\ngit checkout 1.0.0\n
","tags":["Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-readme/#step-3-setup-your-enviromental-variables-for-dockerservices","title":"Step 3. Setup your enviromental variables for Docker/Services","text":"","tags":["Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-readme/#setup-enviromentals","title":"Setup Enviromentals","text":"Setup your deployment enviromental variables by copying the template
cp deploy/ec2-docker/.env.template deploy/ec2-docker/.env\n
and editing it nano deploy/ec2-docker/.env\n
The content of that file would be similar to this.
ARCHIPELAGO_ROOT=/home/ec2-user/archipelago-deployment-live\nARCHIPELAGO_EMAIL=your@validemail.org\nARCHIPELAGO_DOMAIN=your.domain.org\nMINIO_ACCESS_KEY=THE_S3_AZURE_OR_LOCAL_MINIO_KEY\nMINIO_SECRET_KEY=THE_S3_AZURE_OR_LOCAL_MINIO_SECRET\nMYSQL_ROOT_PASSWORD=YOUR_MYSQL_PASSWORD_FOR_ARCHIPELAGO\nMINIO_BUCKET_MEDIA=THE_NAME_OF_YOUR_S3_BUCKET_FOR_PERSISTENT_STORAGE\nMINIO_FOLDER_PREFIX_MEDIA=media/\nMINIO_BUCKET_CACHE=THE_NAME_OF_YOUR_S3_BUCKET_FOR_IIIF_STORAGE\nMINIO_FOLDER_PREFIX_CACHE=iiifcache/\nREDIS_PASSWORD=YOUR_REDIS_PASSWORD\n
What does each key mean?
ARCHIPELAGO_ROOT
: the absolute path to your archipelago-deployment-live
git repo in your host machine.ARCHIPELAGO_EMAIL
: a valid email, will be used to register your SSL Certificate via Certbot.ARCHIPELAGO_DOMAIN
: a valid domain name for your repository. This domain will be also used to request your SSL Certificate via Certbot.MINIO_ACCESS_KEY
: If you are running a Cloud Service backed S3/Azure Storage this needs to be generated there. The user/IAM owner of this ACCESS KEY needs to have access to read/write the bucket you will configure in this same .env
. If running local min.io
whatever you set will be used.MINIO_SECRET_KEY
: If you are running a Cloud Service backed S3/Azure Storage this needs to be generated there. The user/IAM owner of the matching SECRET_KEY needs to have access to read/write the bucket you will configure in this same .env
file. If running local min.io
whatever you set will be used.MYSQL_ROOT_PASSWORD
: The MySQL 8 or MariaDB 10 password. This password will also be used later during Drupal deployment via drush
MINIO_BUCKET_MEDIA
: The name of your Persistent Storage Bucket. If using local min.io we recommend keeping it simple, e.g. archipelago
.MINIO_FOLDER_PREFIX_MEDIA
: The folder
(a prefix really) where your DO Storage and File storage will go inside the MINIO_BUCKET_MEDIA
Bucket. media/
is a fine name for this one and common in archipelago deployments. IMPORTANT: Always terminate these with a /
. MINIO_BUCKET_CACHE
: The name of your IIIF Cache storage Bucket. May be the same as MINIO_BUCKET_MEDIA
. If different, make sure your MINIO_ACCESS_KEY
and/or IAM role ACL have permission to read write to this one too.MINIO_FOLDER_PREFIX_CACHE
: The folder
(a prefix really) where Cantaloupe will/can write its iiif
caches. iiifcache/
is a lovely name we use a lot. IMPORTANT: Always terminate these with a /
.REDIS_PASSWORD
: Password for your REDIS (Drupal Cache/Queue storage) if you decide to enable the Drupal REDIS module.IMPORTANT NOTE
: For AWS EC2. If you selected an IAM role
for your server when setting it up/deploying it, min.io
will use the AWS EC2-backed internal API to request access to your S3. This means the ROLE itself needs to have read/write access (ACL) to the given Bucket(s) and your key/secrets won't be able to override that. Please do not ignore this note. It will save you a LOT of frustration and coffee. You can also run an EC2 instace without a given IAM and in that case just the ACCESS_KEY/SECRET will matter.
Now that you know, you also know that these values should not be shared and this .env
file should not be committed/kept in version control. Please be careful.
docker-compose
will read this .env
and start all services for you based on its content.
Once you have modified this you are ready for your first big decision.
","tags":["Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-readme/#running-a-fully-qualified-domain-you-wish-a-validsigned-certificate-for-amdintel-architecture","title":"Running a fully qualified domain you wish a valid/signed certificate for AMD/INTEL Architecture?","text":"This means you will use the docker-compose-aws-s3.yml
. Do the following:
cp deploy/ec2-docker/docker-compose-aws-s3.yml deploy/ec2-docker/docker-compose.yml\n
","tags":["Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-readme/#running-a-fully-qualified-domain-you-wish-a-validsigned-certificate-for-arm64apple-m1-architecture","title":"Running a fully qualified domain you wish a valid/signed certificate for ARM64/Apple M1 Architecture?","text":"This means you will use the docker-compose-aws-s3-arm64.yml
. Do the following:
cp deploy/ec2-docker/docker-compose-aws-s3-arm64.yml deploy/ec2-docker/docker-compose.yml\n
","tags":["Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-readme/#optional-expert-extra-domains-does-not-apply-to-arm64apple-m1-architecture","title":"Optional (expert) extra domains (does not apply to ARM64/Apple M1 Architecture):","text":"If you have more than a single domain you may create a text file inside config_storage/nginxconfig/certbot_extra_domains/your.domain.org
and write for each subdomain there an entry/line.
Only if you are not running a fully qualified domain you wish a valid/signed. We really DO not recommend this route. IF you plan on using this deployment for local testing or running on non SSL please go for https://github.com/esmero/archipelago-deployment which delivers the same experience in less than 20 minutes deployment time.
Generate a self signed Cert
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout data_storage/selfcert/private/nginx.key -out data_storage/selfcert/certs/nginx.crt \nsudo openssl dhparam -out data_storage/selfcert/dhparam.pem 4096\ncp deploy/ec2-docker/docker-compose-selfsigned.yml deploy/ec2-docker/docker-compose.yml\n
Note: Self signed docker-compose.yml file is setup to use min.io with local storage
volumes:\n - ${ARCHIPELAGO_ROOT}/data_storage/minio-data:/data:cached\n
This folder will be created by min.io. If you are using a secondary Drive (e.g. magnetic) you can modify your deploy/ec2-docker/docker-compose.yml
to use a folder there, e.g.
volumes:\n - /persistentinotherdrive/data_storage/minio-data:/data:cached\n
Make sure your logged in user can read/write to it.
NOTE: If you want to use AWS S3 storage for the self signed version replace the minio Service yaml block with this Service Block in your new deploy/ec2-docker/docker-compose.yml
. You can mix and match services and even remove all :cached
statements for improved R/W volumen performance.
sudo chown 8183:8183 config_storage/iiifconfig/cantaloupe.properties\nsudo chown -R 8183:8183 data_storage/iiifcache\nsudo chown -R 8183:8183 data_storage/iiiftmp\nsudo chown -R 8983:8983 data_storage/solrcore\n
","tags":["Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-readme/#actual-first-run","title":"Actual first run","text":"Time to spin our docker containers for the first time. We will start all without going into background so log/error checking is easier, especially if you have selected a Valid/Signed Cert choice and also want to be sure S3 keys/access are working.
cd deploy/ec2-docker\ndocker-compose up\n
You will see a lot of things happening. Check for errors/problems/clear alerts and give all a minute or so to start. Ok, let's assume your setup managed to request a valid signed SSL cert, you will see a nice message!
- Congratulations! Your certificate and chain have been saved at:XXXXX\n Your certificate will expire on 20XX-XX-XX. To obtain a new or\n tweaked version of this certificate in the future, simply run\n certbot again. To non-interactively renew *all* of your\n certificates, run \"certbot renew\"\n
Archipelago will do that for you whenever it's about to expire so no need to deal with this manually, even when docker-compose
restarts.
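If you ever want to double-check the issued certificate and its expiry dates yourself (optional; the path follows the folder structure shown earlier, with your real domain in place of your.domain.org), openssl can print them:
sudo openssl x509 -noout -dates -in $HOME/archipelago-deployment-live/data_storage/letsencrypt/live/your.domain.org/fullchain.pem\n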
Now press CTRL+C. docker-compose
will shutdown gracefully. Good!
Copy the shipped default composer.default.json to composer.json and composer.default.lock to composer.lock (ONLY if you are installing from scratch):
cp ../../drupal/composer.default.json ../../drupal/composer.json\ncp ../../drupal/composer.default.lock ../../drupal/composer.lock\n
Start Docker again
docker-compose up -d\n
Wait a few seconds and run:
docker exec -ti esmero-php bash -c \"chown -R www-data:www-data private\"\ndocker exec -ti esmero-php bash -c \"chown -R www-data:www-data web/sites\"\ndocker exec -ti esmero-php bash -c \"composer install\"\n
Composer install will take a little while and bring all your PHP libraries.
Once done, execute our setup script that will prepare your Drupal settings.php
and bring some of the .env
environment variables into the Drupal environment.
docker exec -ti esmero-php bash -c 'scripts/archipelago/setup.sh'\n
And now you can deploy Drupal!
IMPORTANT: In the following command, make sure you replace (inside root:MYSQL_ROOT_PASSWORD)
the MYSQL_ROOT_PASSWORD
string with the value you used/assigned in your .env
file for MYSQL_ROOT_PASSWORD
. And replace ADMIN_PASSWORD
with a password that is safe and you won't forget! That password is for your Drupal superuser (uid 1).
docker exec -ti -u www-data esmero-php bash -c \"cd web;../vendor/bin/drush -y si --verbose --existing-config --db-url=mysql://root:MYSQL_ROOT_PASSWORD@esmero-db/drupal --account-name=admin --account-pass=ADMIN_PASSWORD -r=/var/www/html/web --sites-subdir=default --notify=false;drush cr;chown -R www-data:www-data sites;\"\n
","tags":["Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-readme/#step-6-users-and-initial-content","title":"Step 6. Users and initial Content.","text":"After installation is done (may take a few) you can install initial users and assign them roles. Copy each line separately. A missing permission will not let you ingest the initial Metadata Displays and AMI set.
docker exec -ti esmero-php bash -c 'drush ucrt demo --password=\"demo\"; drush urol metadata_pro \"demo\"'\n
docker exec -ti esmero-php bash -c 'drush ucrt jsonapi --password=\"jsonapi\"; drush urol metadata_api \"jsonapi\"'\n
docker exec -ti esmero-php bash -c 'drush urol administrator \"admin\"'\n
Before ingesting the base content we need to make sure we can access your JSON-API
on for your new domain. That means we need to change internal urls (https://esmero-web
) to the new valid SSL driven ones. This is easy:
On your host machine (no need to docker exec
these ones), replace first in the following command your.domain.org
with the domain you setup in your .env
file. Go to (cd into) your base git clone folder (Important: YOUR BASE CLONE FOLDER) and then run
sed -i 's/http:\\/\\/esmero-web/https:\\/\\/your.domain.org/g' drupal/scripts/archipelago/deploy.sh\n sed -i 's/http:\\/\\/esmero-web/https:\\/\\/your.domain.org/g' drupal/scripts/archipelago/update_deployed.sh\n
Now your deploy.sh
and update_deployed.sh
are updated and ready. Let's ingest some Twig Templates, an AMI Set, menus, and Blocks.
docker exec -ti esmero-php bash -c 'scripts/archipelago/deploy.sh'\n
NOTE: update_deployed.sh
is not needed when deploying for the first time and totally discouraged on a customized Archipelago. If you make modifications to your Twig templates
, that command will replace the ones shipped by us with fresh copies, overwriting all your modifications. Only run it to recover from larger errors or when you need to update non-customized templates with newer versions.
By default archipelago ships with a public-facing and an internal-facing IIIF Server URL configured. These URLs are used by a number of IIIF-enabled viewers and need to be changed to reflect your new reality (a real domain name and a proxied path!). These settings belong to the strawberryfield/format_strawberryfield
module.
First check your current settings:
docker exec -ti esmero-php bash -c \"drush config-get format_strawberryfield.iiif_settings\"\n
You will see the following:
pub_server_url: 'http://localhost:8183/iiif/2'\nint_server_url: 'http://esmero-cantaloupe:8182/iiif/2'\n
Let's modify pub_server_url
. Replace in the following command your.domain.org
with the domain you defined in your .env
file. NOTE: We are passing the -y
flag to drush
to avoid having to answer \"yes\".
docker exec -ti esmero-php bash -c \"drush -y config-set format_strawberryfield.iiif_settings pub_server_url https://your.domain.org/cantaloupe/iiif/2\"\n
Finally Done! Now you can log into your new Archipelago using https
and start exploring. Thank you for following this guide!
This applies to AWS m6g
and t3g
Instances and is documented inline in this guide. Please open an ISSUE in this repository if you run into any problems. Please review https://github.com/esmero/archipelago-deployment-live/blob/1.0.0/deploy/ec2-docker/docker-compose-aws-s3-arm64.yml for more info.
Run
uname -m \n
x86(64 bit)
processor system output will be x86_64
ARM(64 bit)
processor system output will be aarch64
This software is a Metropolitan New York Library Council Open-Source initiative and part of the Archipelago Commons project.
","tags":["Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-readme/#license","title":"License","text":"GPLv3
","tags":["Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-updatingContainers/","title":"How to update your Docker containers","text":"From time to time you may have a need to update the containers themselves. Primarily this is done for security releases.
","tags":["Github","Docker","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-updatingContainers/#1-update-docker-composeyml","title":"1. Update docker-compose.yml","text":"The first thing you need to do is to edit your docker-compose.yml
file and replace the version of the container with the new one you wish to use.
Navigate to your docker-compose.yml
file and open it to edit. On Debian installs it would look like this:
cd /usr/local/archipelago/deploy/archipelago-deployment-live/deploy/ec2-docker\n vi docker-compose.yml\n
You want to change the image line to reflect the name of the new image you wish to use:
image: esmero/php-7.4-fpm:1.0.0-RC3-multiarch\n
might become:
image: esmero/php-8.0-fpm:1.1.0-multiarch\n
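If you want a quick overview of every image your compose file currently pins before editing (optional), you can grep for them first:
grep -n 'image:' docker-compose.yml\n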
Save your change. If you use vi like in the above, it would look like this:
:wq\n
","tags":["Github","Docker","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-updatingContainers/#pull-the-new-images","title":"Pull the new image(s)","text":"Docker Compose will now allow us to grab the new image(s) while your current system is running:
docker-compose pull\n
","tags":["Github","Docker","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-updatingContainers/#stop-and-restart-the-container","title":"Stop and restart the container","text":"It is necesary to stop and start the container or the current image will continue to be used:
docker-compose stop container-name\n
Wait for it to stop. Then bring it back up:
docker-compose up -d \n
It is important to use the -d flag or you will have your live instance stuck in your terminal. You want it to run in the background. The -d
flag stands for detached.
If you are more comfortable having all the containers go down and up, you can do that with the following:
docker-compose down\ndocker-compose up\n
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback, or open an ISSUE in this Archipelago Deployment Live Repository.
Return to Archipelago Live Deployment.
","tags":["Github","Docker","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-upgradeFromD8ToD9/","title":"Archipelago-deployment-live: upgrading Drupal 8 to Drupal 9 (1.0.0-RC2 to 1.0.0-RC3)","text":"","tags":["Archipelago-deployment-live","Drupal 8","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromD8ToD9/#what-is-this-documentation-for","title":"What is this documentation for?","text":"If you already have a well-set-up and well-loved Archipelago (RC2 or your own custom version) running Drupal 8 (D8), this documentation will allow you to update to Drupal 9 (D9) without major issues.
D8 is no longer supported as of the end of November 2021. D9 has been around for a little while and even if every module is not supported yet, what you need and want for Archipelago has long been D9-ready.
","tags":["Archipelago-deployment-live","Drupal 8","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromD8ToD9/#requirements","title":"Requirements","text":"composer
and drush
.Backups are always going to be your best friends. Archipelago's code, database and settings are mostly self-contained in your current archipelago-deployment-live
repo folder, and backing up is simple because of that.
Shut down your docker-compose
ensemble. Move to your archipelago-deployment-live
folder and run this:
cd deploy/ec2-docker\ndocker-compose down\n
","tags":["Archipelago-deployment-live","Drupal 8","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromD8ToD9/#step-2","title":"Step 2:","text":"Verify that all containers are actually down. The following command should return an empty listing. If anything is still running, wait a little longer, and run the following comman again.
docker ps\n
","tags":["Archipelago-deployment-live","Drupal 8","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromD8ToD9/#step-3","title":"Step 3:","text":"Now let's tar.gz
the whole ensemble with data and configs. As an example we will save this into your $HOME
folder. As a good practice we append the current date (YEAR-MONTH-DAY) to the filename. Here we assume today is December 1st of 2021.
sudo tar -czvpf $HOME/archipelago-deployment-live-backup-20211201.tar.gz ../../../archipelago-deployment-live\n
The process may take a few minutes. Now let's verify that all is there and that the tar.gz
is not corrupt.
tar -tvvf $HOME/archipelago-deployment-live-backup-20211201.tar.gz \n
You will see a listing of files. If corrupt (Do you have enough space? Did your ssh connection drop?) you will see the following:
tar: Unrecognized archive format\n
","tags":["Archipelago-deployment-live","Drupal 8","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromD8ToD9/#step-4","title":"Step 4:","text":"Restart your docker-compose
ensemble, and wait a little while for all to start.
docker-compose up -d\n
","tags":["Archipelago-deployment-live","Drupal 8","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromD8ToD9/#step-5","title":"Step 5:","text":"Export/backup all of your live Archipelago configurations (this allows you to compare/come back in case you lose something custom during the upgrade).
docker exec esmero-php mkdir config/backup\ndocker exec esmero-php drush cex --destination=/var/www/html/config/backup\n
Good. Now it's safe to begin the upgrade.
","tags":["Archipelago-deployment-live","Drupal 8","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromD8ToD9/#upgrading-to-100-rc3","title":"Upgrading to 1.0.0-RC3","text":"","tags":["Archipelago-deployment-live","Drupal 8","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromD8ToD9/#step-1_1","title":"Step 1:","text":"First we are going to disable modules that are not part of 1.0.0-RC3 or are not yet compatible with D9. Run the following command:
docker exec esmero-php drush pm-uninstall module_missing_message_fixer markdown webprofiler key_value webform_views\n
From inside your archipelago-deployment-live
repo folder we are going to open up the file permissions
for some of your most protected Drupal files.
cd ../../\nsudo chmod 777 drupal/web/sites/default\nsudo chmod 666 drupal/web/sites/default/*settings.php\nsudo chmod 666 drupal/web/sites/default/*services.yml\n
","tags":["Archipelago-deployment-live","Drupal 8","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromD8ToD9/#step-2_1","title":"Step 2:","text":"Time to fetch the 1.0.0-RC3
branch and update our docker-compose
and composer
dependencies. We are also going to stop the current docker
ensemble to update all containers to newer versions:
cd deploy/ec2-docker\ndocker-compose down\ngit checkout 1.0.0-RC3 \n
Then copy the appropriate docker-compose
file for your architecture:
cp docker-compose-osx.yml docker-compose.yml\n
Linux/x86-64/AMD64 cp docker-compose-linux.yml docker-compose.yml\n
OSX (macOS)/Linux/ARM64 cp docker-compose-arm64.yml docker-compose.yml\n
Finally, pull the images, and bring up the ensemble:
docker compose pull \ndocker compose up -d\n
Give all a little time to start. The latest min.io
adds a new console, and your Solr
core and Database
need to be upgraded. Please be patient. To ensure all is well, run (more than once if necessary) the following:
docker ps\n
You should see something like this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n867fd2a42134 nginx \"/docker-entrypoint.\u2026\" 32 seconds ago Up 27 seconds 0.0.0.0:8001->80/tcp, :::8001->80/tcp esmero-web\n8663e84a9b48 solr:8.8.2 \"docker-entrypoint.s\u2026\" 33 seconds ago Up 30 seconds 0.0.0.0:8983->8983/tcp, :::8983->8983/tcp esmero-solr\n9b580fa0088f minio/minio:latest \"/usr/bin/docker-ent\u2026\" 33 seconds ago Up 28 seconds 0.0.0.0:9000-9001->9000-9001/tcp, :::9000-9001->9000-9001/tcp esmero-minio\n50e2f41c7b60 esmero/esmero-nlp:1.0 \"/usr/local/bin/entr\u2026\" 33 seconds ago Up 30 seconds 0.0.0.0:6400->6400/tcp, :::6400->6400/tcp esmero-nlp\n300810fd6f03 esmero/cantaloupe-s3:4.1.9RC \"sh -c 'java -Dcanta\u2026\" 33 seconds ago Up 30 seconds 0.0.0.0:8183->8182/tcp, :::8183->8182/tcp esmero-cantaloupe\n248e4638ba2a mysql:8.0.22 \"docker-entrypoint.s\u2026\" 33 seconds ago Up 28 seconds 3306/tcp, 33060/tcp esmero-db\n141ace919344 esmero/php-7.4-fpm:1.0.0-RC2-multiarch \"docker-php-entrypoi\u2026\" 33 seconds ago Up 28 seconds 9000/tcp esmero-php\n
Important here is the STATUS
column. It needs to be a number that goes up in time every time you run docker ps
again (and again).
Now we are going to tell composer
to actually fetch the new code and dependencies using the 1.0.0-RC3 provided composer.lock
and update the whole Drupal/PHP/JS environment.
docker exec -ti esmero-php bash -c \"composer install\"\n
This will fail (sorry!) for a few packages but no worries, they need to be patched and composer is not that smart. So simply run it again:
docker exec -ti esmero-php bash -c \"composer install\"\n
Well done! If you see no issues and all ends in a Green colored message all is good! Jump to Step 4
","tags":["Archipelago-deployment-live","Drupal 8","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromD8ToD9/#what-if-not-all-is-ok-and-i-see-red-and-a-lot-of-dependency-explanations","title":"What if not all is OK and I see red and a lot of dependency explanations?","text":"If you have manually installed packages via composer in the past that are NO longer Drupal 9 compatible you may see errors. In that case you need to check each package website's (normally https://www.drupal.org/project/the_module_name) and check if there is a Drupal 9 compatible version.
If so run:
docker exec -ti esmero-php bash -c \"composer require 'drupal/the_module_name:^VERSION_NUMBER_THAT_WORKS_ON_DRUPAL9_' --update-with-dependencies --no-update\" and run **Step 3 ** again (and again until all is cleared)\n
If not: try to find a replacement module that does something simular, but in any case you may end having to remove before proceding. Give us a ping/slack/google group/open a github ISSUE if you find yourself uncertain about this.
docker exec -ti esmero-php bash -c \"composer remove drupal/the_module_name --no-update\"\ndocker exec -ti esmero-php bash -c \" drush pm-uninstall the_module_name\"\n
","tags":["Archipelago-deployment-live","Drupal 8","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromD8ToD9/#step-4_1","title":"Step 4:","text":"We will now ask Drupal to update some internal configs and databases. They will bring you up to date with RC3 settings and D9 particularities.
docker exec -ti esmero-php bash -c 'scripts/archipelago/setup.sh'\ndocker exec -ti esmero-php bash -c 'drush urol administrator \"admin\"'\ndocker exec -ti esmero-php bash -c \"drush updatedb\"\n
","tags":["Archipelago-deployment-live","Drupal 8","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromD8ToD9/#step-5_1","title":"Step 5:","text":"Previously D8 installations had a \"module/profile\" driven installation. Those are no longer used or even exist as part of core, but a profile can't be changed once installed so you have to do the following to avoid Drupal complaining about our new and simpler way of doing things (a small roll back):
docker exec -ti esmero-php bash -c \"sed -i 's/minimal: 1000/standard: 1000/g' config/sync/core.extension.yml\"\ndocker exec -ti esmero-php bash -c \"sed -i 's/profile: minimal/profile: standard/g' config/sync/core.extension.yml\"\n
","tags":["Archipelago-deployment-live","Drupal 8","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromD8ToD9/#step-6","title":"Step 6:","text":"Now you can Sync your new Archipelago 1.0.0-RC3 and bring all the new configs and settings in. For this you have two options (no worries, remember you made a backup!):
","tags":["Archipelago-deployment-live","Drupal 8","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromD8ToD9/#a-partial-sync-which-will-bring-new-configs-and-update-existing-ones-but-will-not-remove-ones-that-only-exist-in-your-custom-setup-eg-new-webforms-or-view-modes","title":"A Partial Sync, which will bring new configs and update existing ones but will not remove ones that only exist in your custom setup, e.g. new Webforms or View Modes.","text":"docker exec esmero-php drush cim -y --partial\n
","tags":["Archipelago-deployment-live","Drupal 8","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromD8ToD9/#a-complete-sync-which-will-bring-new-configs-and-update-existing-ones-but-will-also-remove-all-the-ones-that-are-not-part-of-rc3-its-a-like-clean-factory-reset","title":"A Complete Sync, which will bring new configs and update existing ones but will also remove all the ones that are not part of RC3. It's a like clean factory reset.","text":"docker exec esmero-php drush cim -y\n
If all goes well here and you see no errors it's time to reindex Solr
because there are new Fields. Run the following:
docker exec esmero-php drush search-api-reindex\ndocker exec esmero-php drush search-api-index\n
You might see some warnings related to modules dealing with previously non-existent data\u2014no worries, just ignore those.
If you made it this far you are done with code/devops (are we ever ready?), and that means you should be able to (hopefully) stay in the Drupal 9 realm for a few years!
","tags":["Archipelago-deployment-live","Drupal 8","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromD8ToD9/#step-7-update-or-not-your-metadata-display-entities-and-menu-items","title":"Step 7: Update (or not) your Metadata Display Entities and Menu items.","text":"Recommended: If you want to add new templates and menu items 1.0.0-RC3 provides, run this:
docker exec -ti esmero-php bash -c 'scripts/archipelago/deploy.sh'\n
Once that is done, you can choose to update all Metadata Displays (Twig templates) we ship with new 1.0.0-RC3 versions (heavily fixed IIIF manifest, Markdown to HTML for Metadata, better Object descriptions). But before you do this, we really recommend that you first make sure to manually (copy/paste) backup any Twig templates you have modified. If unusure, do not run the command that comes after this warning! You can always manually copy the new templates from the d8content/metadatadisplays
folder which contains text versions (again, copy/paste) of each shipped template you can pick and use when you feel ready.
If you are sure (like really?) you want to overwrite the ones you modified (sure, just checking?), then you can run this:
docker exec -ti esmero-php bash -c 'scripts/archipelago/update_deployed.sh'\n
Done! (For realz now)
Please log into your Archipelago and test/check all is working! Enjoy 1.0.0-RC3 and Drupal 9. Thanks!
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback, or open an ISSUE in this Archipelago Deployment Live Repository.
Return to Archipelago Live Deployment.
","tags":["Archipelago-deployment-live","Drupal 8","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromRC3/","title":"Archipelago-deployment-live: upgrading from 1.0.0-RC3 to 1.0.0","text":"","tags":["Archipelago-deployment-live","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromRC3/#what-is-this-documentation-for","title":"What is this documentation for?","text":"If you already have a well-set-up and well-loved Archipelago (RC3 or your own custom version) running Drupal 9, this documentation will allow you to update to 1.0.0 without major issues.
","tags":["Archipelago-deployment-live","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromRC3/#requirements","title":"Requirements","text":"composer
and drush
.Backups are always going to be your best friends. Archipelago's code, database and settings are mostly self-contained in your current archipelago-deployment-live
repo folder, and backing up is simple because of that.
Shut down your docker-compose
ensemble. Move to your archipelago-deployment-live
folder and run this:
cd deploy/ec2-docker\ndocker-compose down\n
","tags":["Archipelago-deployment-live","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromRC3/#step-2","title":"Step 2:","text":"Verify that all containers are actually down. The following command should return an empty listing. If anything is still running, wait a little longer, and run the following comman again.
docker ps\n
","tags":["Archipelago-deployment-live","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromRC3/#step-3","title":"Step 3:","text":"Now let's tar.gz
the whole ensemble with data and configs. As an example we will save this into your $HOME
folder. As a good practice we append the current date (YEAR-MONTH-DAY) to the filename. Here we assume today is December 1st of 2021.
sudo tar -czvpf $HOME/archipelago-deployment-live-backup-20220803.tar.gz ../../../archipelago-deployment-live\n
The process may take a few minutes. Now let's verify that all is there and that the tar.gz
is not corrupt.
tar -tvvf $HOME/archipelago-deployment-live-backup-20220803.tar.gz \n
You will see a listing of files. If corrupt (Do you have enough space? Did your ssh connection drop?) you will see the following:
tar: Unrecognized archive format\n
","tags":["Archipelago-deployment-live","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromRC3/#step-4","title":"Step 4:","text":"Restart your docker-compose
ensemble, and wait a little while for all to start.
docker-compose up -d\n
","tags":["Archipelago-deployment-live","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromRC3/#step-5","title":"Step 5:","text":"Export/backup all of your live Archipelago configurations (this allows you to compare/come back in case you lose something custom during the upgrade).
docker exec esmero-php mkdir config/backup\ndocker exec esmero-php drush cex --destination=/var/www/html/config/backup\n
Good. Now it's safe to begin the upgrade.
","tags":["Archipelago-deployment-live","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromRC3/#upgrading-to-100","title":"Upgrading to 1.0.0","text":"","tags":["Archipelago-deployment-live","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromRC3/#step-1_1","title":"Step 1:","text":"First we are going to disable modules that are not part of 1.0.0 or are not yet compatible with Drupal 9.4.x or higher . Run the following command:
docker exec esmero-php drush pm-uninstall search_api_solr_defaults entity_reference\n
From inside your archipelago-deployment-live
repo folder we are going to open up the file permissions
for some of your most protected Drupal files.
cd ../../\nsudo chmod 777 drupal/web/sites/default\nsudo chmod 666 drupal/web/sites/default/*settings.php\nsudo chmod 666 drupal/web/sites/default/*services.yml\n
","tags":["Archipelago-deployment-live","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromRC3/#step-2_1","title":"Step 2:","text":"First let's back up our current composer.lock:
cp drupal/composer.lock drupal/composer.original.lock\n
Time to fetch the 1.0.0
branch and update our docker-compose
and composer
dependencies. We are also going to stop the current docker
ensemble to update all containers to newer versions:
cd deploy/ec2-docker\ndocker-compose down\ngit fetch\ngit checkout 1.0.0 \n
If you decide to enable the Drupal REDIS module, make sure to add the REDIS_PASSWORD
variable to your .env
file.
IMPORTANT NOTE
: For AWS EC2. If you selected an IAM role
for your server when setting it up/deploying it, min.io
will use the AWS EC2-backed internal API to request access to your S3. This means the ROLE itself needs to have read/write access (ACL) to the given Bucket(s) and your key/secrets won't be able to override that. Please do not ignore this note. It will save you a LOT of frustration and coffee. You can also run an EC2 instance without a given IAM and in that case just the ACCESS_KEY/SECRET will matter.
Now that you know, you also know that these values should not be shared and this .env
file should not be committed/kept in version control. Please be careful.
Now let's back up the existing docker-compose
file:
cp docker-compose.yml docker-compose-original.yml\n
Then copy the appropriate docker-compose
file for your architecture:
cp docker-compose-aws-s3.yml docker-compose.yml\n
Linux/ARM64/Apple Silicon (M1 and M2) cp docker-compose-aws-s3-arm64.yml docker-compose.yml\n
Next, let's review what's changed in case any customizations need to be brought into the new docker-compose
configurations:
git diff --no-index docker-compose-original.yml docker-compose.yml\n
You should encounter something like the following:
diff --git a/docker-compose-original.yml b/docker-compose.yml\nindex 6f5b17e..282417f 100644\n--- a/docker-compose-original.yml\n+++ b/docker-compose.yml\n@@ -1,5 +1,5 @@\n # Run docker-compose up -d\n-\n+# Docker file for AMD64/X86 machines\n version: '3.5'\n services:\n web:\n@@ -23,6 +23,7 @@ services:\n - solr\n - php\n - db\n+ - redis\n tty: true\n networks:\n - host-net\n@@ -30,7 +31,7 @@ services:\n php:\n container_name: esmero-php\n restart: always\n- image: \"esmero/php-7.4-fpm:1.0.0-RC2-multiarch\"\n+ image: \"esmero/php-8.0-fpm:1.1.0-multiarch\"\n tty: true\n networks:\n - host-net\n@@ -44,10 +45,11 @@ services:\n MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}\n MINIO_BUCKET_MEDIA: ${MINIO_BUCKET_MEDIA}\n MINIO_FOLDER_PREFIX_MEDIA: ${MINIO_FOLDER_PREFIX_MEDIA}\n+ REDIS_PASSWORD: ${REDIS_PASSWORD}\n solr:\n container_name: esmero-solr\n restart: always\n- image: \"solr:8.8.2\"\n+ image: \"solr:8.11.2\"\n
As you can see, most of the changes in this example are for new images and a new service/container/environment variable (REDIS), but you may have custom settings for your containers. Review any differences carefully and make adjustments as needed.
Finally, pull the images:
docker compose pull \n
1.0.0 provides a new Cantaloupe that uses different permissions so we need to adapt those. From your current folder (../ec2-deploy) run:
sudo chown 8183:8183 ../../config_storage/iiifconfig/cantaloupe.properties\nsudo chown -R 8183:8183 ../../data_storage/iiifcache\nsudo chown -R 8183:8183 ../../data_storage/iiiftmp\n
Time to start the ensemble again
docker compose up -d\n
Give all a little time to start. Solr
core and Database
need to be upgraded, Cantaloupe is new and this brings also Redis for caching. Please be patient. To ensure all is well, run (more than once if necessary) the following:
docker ps\n
You should see something like this: e.g if running on ARM64 You should see something like this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n4ed2f62e866e jonasal/nginx-certbot \"/docker-entrypoint.\u2026\" 32 seconds ago Up 27 seconds 0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp esmero-web\ne6b4383039c3 minio/minio:RELEASE.2022-06-11T19-55-32Z \"/usr/bin/docker-ent\u2026\" 33 seconds ago Up 30 seconds 0.0.0.0:9000-9001->9000-9001/tcp, :::9000-9001->9000-9001/tcp esmero-minio\nf2b6b173b7e2 solr:8.11.2 \"docker-entrypoint.s\u2026\" 33 seconds ago Up 28 seconds 0.0.0.0:8983->8983/tcp, :::8983->8983/tcp esmero-solr\na553bf484343 esmero/php-8.0-fpm:1.0.0-multiarch \"docker-php-entrypoi\u2026\" 33 seconds ago Up 30 seconds 9000/tcp esmero-php\necb47349ae94 esmero/esmero-nlp:fasttext-multiarch \"/usr/local/bin/entr\u2026\" 33 seconds ago Up 30 second 0.0.0.0:6400->6400/tcp, :::6400->6400/tcp esmero-nlp\n61272dce034a redis:6.2-alpine \"docker-entrypoint.s\u2026\" 33 seconds ago Up 28 seconds esmero-redis\n0ee9869f809b esmero/cantaloupe-s3:6.0.0-multiarch \"sh -c 'java -Dcanta\u2026\" 33 seconds ago Up 28 seconds 0.0.0.0:8183->8182/tcp, :::8183->8182/tcp esmero-cantaloupe\n131d072567ce mariadb:10.6.8-focal \"docker-entrypoint.s\u2026\" 33 seconds ago Up 28 seconds 3306/tcp esmero-db esmero-php\n
Important here is the STATUS
column. It needs to be a number that goes up in time every time you run docker ps
again (and again).
Instead of using the provided composer.default.lock
out of the box we are going to loosen certain dependencies and bring manually Archipelago modules, all this to make update easier and future upgrades less of a pain.
First, as a sanity check let's make sure nothing happened to our original composer.lock
fileby doing a diff against our backed up file:
git diff --no-index ../../drupal/composer.original.lock ../../drupal/composer.lock\n
If all is ok, there should be no output. If there's any output, copy your backed up file back to default:
cp ../../drupal/composer.original.lock ../../drupal/composer.lock\n
Finally, we bring over the modules:
docker exec -ti esmero-php bash -c \"composer require drupal/core:^9 drupal/core-composer-scaffold:^9 drupal/core-project-message:^9 drupal/core-recommended:^9\"\ndocker exec -ti esmero-php bash -c \"composer require drupal/core-dev:^9 --dev\"\ndocker exec -ti esmero-php bash -c \"composer require drupal/tokenuuid:^2\"\ndocker exec -ti esmero-php bash -c \"composer require 'drupal/facets:^2.0'\"\ndocker exec -ti esmero-php bash -c \"composer require drupal/moderated_content_bulk_publish:^2\"\ndocker exec -ti esmero-php bash -c \"composer require drupal/queue_ui:^3.1\"\ndocker exec -ti esmero-php bash -c \"composer require drupal/jquery_ui_touch_punch:^1.1\"\ndocker exec -ti esmero-php bash -c \"composer require archipelago/ami:0.4.0.x-dev strawberryfield/format_strawberryfield:1.0.0.x-dev strawberryfield/strawberryfield:1.0.0.x-dev strawberryfield/strawberry_runners:0.4.0.x-dev strawberryfield/webform_strawberryfield:1.0.0.x-dev drupal/views_bulk_operations:^4.1\"\n
Now we are going to tell composer
to actually fetch the new code and dependencies using composer.lock
and update the whole Drupal/PHP/JS environment.
docker exec -ti esmero-php bash -c \"composer update -W\"\ndocker exec -ti esmero-php bash -c \"drush cr\"\ndocker exec -ti esmero-php bash -c \"drush en jquery_ui_touch_punch\"\ndocker exec -ti esmero-php bash -c \"drush updatedb\"\n
Well done! If you see no issues and all ends in a Green colored message all is good! Jump to Step 4
","tags":["Archipelago-deployment-live","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromRC3/#what-if-not-all-is-ok-and-i-see-red-and-a-lot-of-dependency-explanations","title":"What if not all is OK and I see red and a lot of dependency explanations?","text":"If you have manually installed packages via composer in the past that are NO longer Drupal 9 compatible you may see errors. In that case you need to check each package website's (normally https://www.drupal.org/project/the_module_name) and check if there is a Drupal 9 compatible version.
If so run:
docker exec -ti esmero-php bash -c \"composer require 'drupal/the_module_name:^VERSION_NUMBER_THAT_WORKS_ON_DRUPAL9_' --update-with-dependencies --no-update\" and run **Step 3 ** again (and again until all is cleared)\n
If not: try to find a replacement module that does something simular, but in any case you may end having to remove before proceding. Give us a ping/slack/google group/open a github ISSUE if you find yourself uncertain about this.
docker exec -ti esmero-php bash -c \"composer remove drupal/the_module_name --no-update\"\ndocker exec -ti esmero-php bash -c \" drush pm-uninstall the_module_name\"\n
","tags":["Archipelago-deployment-live","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromRC3/#step-4_1","title":"Step 4:","text":"We will now ask Drupal to update some internal configs and databases. They will bring you up to date with 1.0.0 settings and D9 particularities.
docker exec -ti esmero-php bash -c 'scripts/archipelago/setup.sh'\ndocker exec -ti esmero-php bash -c \"drush updatedb\"\n
","tags":["Archipelago-deployment-live","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromRC3/#step-5_1","title":"Step 5:","text":"Now you can Sync your new Archipelago 1.0.0 and bring all the new configs and settings in. For this you have two options (no worries, remember you made a backup!):
","tags":["Archipelago-deployment-live","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromRC3/#a-partial-sync-which-will-bring-new-configs-and-update-existing-ones-but-will-not-remove-ones-that-only-exist-in-your-custom-setup-eg-new-webforms-or-view-modes","title":"A Partial Sync, which will bring new configs and update existing ones but will not remove ones that only exist in your custom setup, e.g. new Webforms or View Modes.","text":"docker exec esmero-php drush cim -y --partial\n
","tags":["Archipelago-deployment-live","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromRC3/#a-complete-sync-which-will-bring-new-configs-and-update-existing-ones-but-will-also-remove-all-the-ones-that-are-not-part-of-rc3-its-a-like-clean-factory-reset","title":"A Complete Sync, which will bring new configs and update existing ones but will also remove all the ones that are not part of RC3. It's a like clean factory reset.","text":"docker exec esmero-php drush cim -y\n
If all goes well here and you see no errors it's time to reindex Solr
because there are new Fields. Run the following:
docker exec esmero-php drush search-api-reindex\ndocker exec esmero-php drush search-api-index\n
You might see some warnings related to modules dealing with previously non-existent data\u2014no worries, just ignore those.
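Optionally, if you want to keep an eye on how the reindexing is progressing, the Search API drush commands (assuming they are available in your installation, which ships with Search API) can report the tracker status for each index:
docker exec esmero-php drush search-api:status\n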
If you made it this far you are done with code/devops (are we ever ready?), and that means you should be able to (hopefully) stay in the Drupal 9 realm for a few years!
","tags":["Archipelago-deployment-live","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromRC3/#step-7-update-or-not-your-metadata-display-entities-and-menu-items","title":"Step 7: Update (or not) your Metadata Display Entities and Menu items.","text":"Recommended: If you want to add new templates and menu items 1.0.0 provides, go to your base Github repo folder, replace in the following commands your.domain.org
with the actual domain of your Server and run those individually:
sed -i 's/http:\\/\\/esmero-web/https:\\/\\/your.domain.org/g' drupal/scripts/archipelago/deploy.sh\n
sed -i 's/http:\\/\\/esmero-web/https:\\/\\/your.domain.org/g' drupal/scripts/archipelago/update_deployed.sh\n
Now update your Metadata Display Templates and Blocks
docker exec -ti esmero-php bash -c 'scripts/archipelago/deploy.sh'\n
Once that is done, you can choose to update all Metadata Displays (Twig templates) we ship with new 1.0.0 versions (heavily fixed IIIF manifest, Markdown to HTML for Metadata, better Object descriptions). But before you do this, we really recommend that you first make sure to manually (copy/paste) back up any Twig templates you have modified. If unsure, do not run the command that comes after this warning! You can always manually copy the new templates from the d8content/metadatadisplays
folder which contains text versions (again, copy/paste) of each shipped template you can pick and use when you feel ready.
If you are sure (like really?) you want to overwrite the ones you modified (sure, just checking?), then you can run this:
docker exec -ti esmero-php bash -c 'scripts/archipelago/update_deployed.sh'\n
Done! (For realz now)
Please log into your Archipelago and test/check all is working! Enjoy 1.0.0. Thanks!
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback, or open an ISSUE in this Archipelago Deployment Live Repository.
Return to Archipelago Live Deployment.
","tags":["Archipelago-deployment-live","Drupal 9"]},{"location":"archipelago-deployment-osx/","title":"Installing Archipelago Drupal 10 on OSX (macOS)","text":"","tags":["Archipelago-deployment","Drupal 10","macOS","OSX"]},{"location":"archipelago-deployment-osx/#about-running-terminal-commands","title":"About running terminal commands","text":"This guide assumes you are comfortable enough running terminal (bash) commands on an OSX Computer.
We made sure that you can copy
and paste
each of these commands from this guide directly into your terminal.
You will notice sometimes commands span more than a single line of text. If that is the case, always make sure you copy and paste a single line at a time and press the Enter
key afterwards. We also suggest you look at the output.
If something fails (and we hope it does not) troubleshooting will be much easier if you can share that output when asking for help.
Happy deploying!
","tags":["Archipelago-deployment","Drupal 10","macOS","OSX"]},{"location":"archipelago-deployment-osx/#osx-macos","title":"OSX (macOS):","text":"Ventura
or Higher on Intel (i5/i7) and Apple Silicon Chips (M1/M2/M3) the tested version is: 4.23.0(120376)
. You may go newer of course.Preferences
-> General
: check Use gRPC FUSE for file sharing
and restart. Especially if you are using your $HOME
folder for deploying, e.g. /Users/username
.Preferences
-> Resources
: 4 Gbytes of RAM is the recommended minimum and works; 8 Gbytes is faster and snappier.
folder or even better, in an external drive formatted using a Case Sensitive Unix Filesystem (Mac OS Extended (Case-sensitive, Journaled)).
Note 2: \"Use gRPC FUSE for file sharing\" experience may vary, recent Docker for Mac does it well. In older RC1 ones it was evil. Changing/Disabling it after having installed Archipelago may affect your S3/Minio storage accessibility. Please let us know what your experience on this is.
","tags":["Archipelago-deployment","Drupal 10","macOS","OSX"]},{"location":"archipelago-deployment-osx/#wait-question-do-you-have-a-previous-version-of-archipelago-running","title":"Wait! Question: Do you have a previous version of Archipelago running?","text":"If so, let's give that hard working repository a break first. If not, skip to Step 1:
docker-compose down\ndocker-compose rm\n
Let's stop the containers gracefully first, run:
docker stop esmero-web\ndocker stop esmero-solr\ndocker stop esmero-db\ndocker stop esmero-cantaloupe\ndocker stop esmero-php\ndocker stop esmero-minio\ndocker stop esmero-nlp\n
Now we need to remove them, run:
docker rm esmero-web\ndocker rm esmero-solr\ndocker rm esmero-db\ndocker rm esmero-cantaloupe\ndocker rm esmero-php\ndocker rm esmero-minio\ndocker rm esmero-nlp\n
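Optionally, you can double check that nothing was left behind by listing all containers with the same docker CLI used above; none of the esmero-* containers should show up anymore:
docker ps -a\n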
Ok, now we are ready to start. Depending on what type of Chip your Apple uses you have two options:
","tags":["Archipelago-deployment","Drupal 10","macOS","OSX"]},{"location":"archipelago-deployment-osx/#step-1-intel-docker-deployment-on-intel-chips-apple-machines","title":"Step 1 (Intel): Docker Deployment on Intel Chips Apple Machines","text":"git clone https://github.com/esmero/archipelago-deployment.git archipelago-deployment\ncd archipelago-deployment\ngit checkout 1.3.0\ncp docker-compose-osx.yml docker-compose.yml\ndocker-compose pull\ndocker-compose up -d\n
","tags":["Archipelago-deployment","Drupal 10","macOS","OSX"]},{"location":"archipelago-deployment-osx/#step-1-m1-docker-deployment-on-apple-silicon-chips-m1","title":"Step 1 (M1): Docker Deployment on Apple Silicon Chips (M1)","text":"git clone https://github.com/esmero/archipelago-deployment.git archipelago-deployment\ncd archipelago-deployment\ngit checkout 1.3.0\ncp docker-compose-arm64.yml docker-compose.yml\ndocker-compose pull\ndocker-compose up -d\n
Note: If you are running on an Intel Apple Machine from an external Drive or a partition/filesystem that is Case Sensitive
and is not syncing automatically to Apple Cloud
you can also use docker-compose-linux.yml
. Note 2: docker-compose.yml
is git ignored in case you make local adjustments or changes to it.
Once all containers are up and running (you can do a docker ps
to check), access the minio console at http://localhost:9001
using your most loved Web Browser with the following credentials:
user:minio\npass:minio123\n
and once logged in, press on \"Buckets\" (left tools column) and then on \"Create Bucket\" (top right) and under \"Bucket Name\" type archipelago
. Leave all other options unchecked for now (you can experiment with those later), and make sure you write archipelago
(no spaces, lowercase) and press \"Save\". Done! That is where we will persist all your Files and also your File copies of each Digital Object. You can always go there and explore what Archipelago (well really Strawberryfield does the hard work) has persisted so you can get comfortable with our architecture.
The following will run composer inside the esmero-php container to download all dependencies and Drupal Core too:
docker exec -ti esmero-php bash -c \"composer install\"\n
Once that command finishes run our setup script:
docker exec -ti esmero-php bash -c 'scripts/archipelago/setup.sh'\n
Explanation: That script will append some important configurations to your local web/sites/default/settings.php
.
Note: We say local
because your whole Drupal web root (the one you cloned) is also mounted inside the esmero-php and esmero-web containers. So edits to PHP files, for example, can be done without accessing the container directly from your local folder.
If this is the first time you deploy Drupal using the provided Configurations run:
docker exec -ti -u www-data esmero-php bash -c \"cd web;../vendor/bin/drush -y si --verbose --existing-config --db-url=mysql://root:esmerodb@esmero-db/drupal --account-name=admin --account-pass=archipelago -r=/var/www/html/web --sites-subdir=default --notify=false;drush cr;chown -R www-data:www-data sites;\"\n
Note: You will see these warnings: [warning] The \"block_content:1cdf7155-eb60-4f27-9e5e-64fffe93127a\" was not found
[warning] The \"facets_summary_block:search_page_facet_summary\" was not found
Nothing to worry about. We will provide the missing part in Step 5.
Note 2: Please be patient. This step takes since composer 2.0 25-30% longer because of how the most recent Drupal Installation code fetches translations and other resources (see Performed install task
). This means progress might look like getting \"stuck\", go and get a coffee/tea and let it run to the end.
Once finished, this will give you an admin
Drupal user with archipelago
as password (Change this if running on a public instance!).
Final Note about Steps 2-3: You don't need to, nor should you do this more than once. You can destroy/stop/update, recreate your Docker containers, and start again (git pull
), and your Drupal and Data will persist once you're past the Installation complete
message. I repeat, all other containers' data is persisted inside the persistent/
folder contained in this cloned git repository. Drupal and all its code is visible, editable, and stable inside your web/
folder.
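As a rough sketch of that cycle (assuming you deployed via docker-compose as described in Step 1), refreshing the containers later without touching your data usually looks like this, run from your cloned repository folder:
git pull\ndocker-compose pull\ndocker-compose up -d\n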
docker exec -ti esmero-php bash -c 'drush ucrt demo --password=\"demo\"; drush urol metadata_pro \"demo\"'\n
docker exec -ti esmero-php bash -c 'drush ucrt jsonapi --password=\"jsonapi\"; drush urol metadata_api \"jsonapi\"'\n
docker exec -ti esmero-php bash -c 'drush urol administrator \"admin\"'\n
","tags":["Archipelago-deployment","Drupal 10","macOS","OSX"]},{"location":"archipelago-deployment-osx/#step-5-ingest-some-metadata-displays-to-make-playing-much-more-interactive","title":"Step 5: Ingest some Metadata Displays to make playing much more interactive","text":"Archipelago is more fun without having to start writing Metadata Displays (in Twig) before you know what they actually are. Since you should now have a jsonapi
user and jsonapi should be enabled, you can use that awesome functionality of D8 to get that done. We have 4 demo Metadata Display Entities that go well with the demo Webform we provided. To do that execute in your shell (copy and paste):
docker exec -ti esmero-php bash -c 'scripts/archipelago/deploy.sh'\n
Open your most loved Web Browser and point it to http://localhost:8001
.
Note: It can take some time to start the first time (Drupal needs some warming up).
Also, to make this docker-compose easier to use we are doing something named bind mounting
(or similar...) your folders. The good thing is that you can edit files in your machine, and they get updated instantly to docker. The bad thing is that the OSX (macOS) driver runs slower than on Linux. Speed is a huge factor here, but you get the flexibility of changing, backing up, and persisting files without needing a Docker University Degree.
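For reference, that bind mount is just the volumes entry in the docker-compose.yml you copied earlier; in the provided files it looks roughly like this (your copy may differ slightly):
- ${PWD}:/var/www/html:cached\n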
One-Step Demo content ingest
","tags":["Archipelago-deployment","Drupal 10","macOS","OSX"]},{"location":"archipelago-deployment-osx/#need-help-blue-screen-missed-a-step-need-a-hug-and-such","title":"Need help? Blue Screen? Missed a step? Need a hug and such?","text":"If you see any issues or errors or need help with a step, please let us know (ASAP!). You can either open an issue
in this repository or use the Google Group. We are here to help.
If you like this, let us know!
","tags":["Archipelago-deployment","Drupal 10","macOS","OSX"]},{"location":"archipelago-deployment-osx/#caring-coding-fixing-testing","title":"Caring & Coding + Fixing + Testing","text":"GPLv3
","tags":["Archipelago-deployment","Drupal 10","macOS","OSX"]},{"location":"archipelago-deployment-readme/","title":"Archipelago Docker Deployment","text":"Updated: October 31st 2023
This repository serves as bootstrap for a Archipelago 1.3.0 deployment on a localhost for development/testing/customizing via Docker and provides a more unified experience this time:
arm64
architecture Chips like Raspberry Pi 4, with specially built arm64 docker containers. The only differences now between deployment strategies is the DB. Blazing fast OCR.The skeleton project contains all the pieces needed to run a local deployment of a vanilla Archipelago including (YES!) content provided as an optional feature from archipelago-recyclables
","tags":["Archipelago-deployment","Drupal 10","Docker"]},{"location":"archipelago-deployment-readme/#starting-from-zero","title":"Starting from ZERO","text":"This is the recommended, simplest way for this release. There are a too many, tons of fun new features, Metadata Displays, Webforms, New formatters and Twig extensions, improved viewers, new and improved JS libraries, OpenCV/Face Detection, smarter NLP, File composting, better HUGE import/update capabilities, bug fixes (yes so many) so please try them out. The team has also updated the DEMO AMI set (Content) to showcase metadata/display improvements.
","tags":["Archipelago-deployment","Drupal 10","Docker"]},{"location":"archipelago-deployment-readme/#macos-intel-or-apple-silicon-m1m2m3","title":"macOS Intel or Apple Silicon M1/M2/M3:","text":"Step by Step deployment on macOS
","tags":["Archipelago-deployment","Drupal 10","Docker"]},{"location":"archipelago-deployment-readme/#ubuntu-1804-or-2004","title":"Ubuntu 18.04 or 20.04:","text":"Step by Step deployment on Ubuntu
","tags":["Archipelago-deployment","Drupal 10","Docker"]},{"location":"archipelago-deployment-readme/#windows-10-or-11","title":"Windows 10 or 11:","text":"Step by Step deployment on Windows
","tags":["Archipelago-deployment","Drupal 10","Docker"]},{"location":"archipelago-deployment-readme/#more-fun-if-you-add-content","title":"More fun if you add content:","text":"One-Step Demo content ingest
If you like it (or not), want new features, or want to be part of making this better (documenting, coding and planning) let us know. Make your voice and opinion be heard, this is a community effort.
","tags":["Archipelago-deployment","Drupal 10","Docker"]},{"location":"archipelago-deployment-readme/#caring-coding-fixing-testing","title":"Caring & Coding + Fixing + Testing","text":"This software is a Metropolitan New York Library Council Open-Source initiative and part of the Archipelago Commons project.
","tags":["Archipelago-deployment","Drupal 10","Docker"]},{"location":"archipelago-deployment-readme/#license","title":"License","text":"GPLv3
","tags":["Archipelago-deployment","Drupal 10","Docker"]},{"location":"archipelago-deployment-ubuntu/","title":"Installing Archipelago Drupal 10 on Ubuntu 18.04 or 20.04","text":"","tags":["Archipelago-deployment","Drupal 10","Ubuntu 18.04","Ubuntu 20.04"]},{"location":"archipelago-deployment-ubuntu/#about-running-terminal-commands","title":"About running terminal commands","text":"This guide assumes you are comfortable enough running terminal (bash) commands on a Linux Computer.
We made sure that you can copy
and paste
each of these commands from this guide directly into your terminal.
You will notice sometimes commands span more than a single line of text. If that is the case, always make sure you copy and paste a single line at a time and press the Enter
key afterwards. We suggest you also look at the output.
If something fails (and we hope it does not) troubleshooting will be much easier if you can share that output when asking for help.
Happy deploying!
","tags":["Archipelago-deployment","Drupal 10","Ubuntu 18.04","Ubuntu 20.04"]},{"location":"archipelago-deployment-ubuntu/#prerequisites","title":"Prerequisites","text":"sudo apt install apt-transport-https ca-certificates curl software-properties-common\ncurl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -\nsudo add-apt-repository \"deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable\"\nsudo apt update\nsudo apt-cache policy docker-ce\nsudo apt install docker-ce\nsudo systemctl status docker\n\nsudo usermod -aG docker ${USER}\n
Log out, and log in again!
sudo apt install docker-compose\n
Git tools are included by default in Ubuntu.
","tags":["Archipelago-deployment","Drupal 10","Ubuntu 18.04","Ubuntu 20.04"]},{"location":"archipelago-deployment-ubuntu/#wait-question-do-you-have-a-previous-version-of-archipelago-running","title":"Wait! Question: Do you have a previous version of Archipelago running?","text":"If so, let's give that hard working repository a break first. If not, Step 1:
docker-compose down\ndocker-compose rm\n
Let's stop the containers gracefully first, run:
docker stop esmero-web\ndocker stop esmero-solr\ndocker stop esmero-db\ndocker stop esmero-cantaloupe\ndocker stop esmero-php\ndocker stop esmero-minio\ndocker stop esmero-nlp\n
Now we need to remove them so we run the following:
docker rm esmero-web\ndocker rm esmero-solr\ndocker rm esmero-db\ndocker rm esmero-cantaloupe\ndocker rm esmero-php\ndocker rm esmero-minio\ndocker rm esmero-nlp\n
Ok, now we are ready to start.
","tags":["Archipelago-deployment","Drupal 10","Ubuntu 18.04","Ubuntu 20.04"]},{"location":"archipelago-deployment-ubuntu/#step-1-deployment","title":"Step 1: Deployment","text":"","tags":["Archipelago-deployment","Drupal 10","Ubuntu 18.04","Ubuntu 20.04"]},{"location":"archipelago-deployment-ubuntu/#prefer-to-watch-a-video-to-see-what-its-like-to-install-go-to-our-user-contributed-documentation1","title":"Prefer to watch a video to see what it's like to install? Go to ouruser contributed documentation
[^1]!","text":"","tags":["Archipelago-deployment","Drupal 10","Ubuntu 18.04","Ubuntu 20.04"]},{"location":"archipelago-deployment-ubuntu/#important","title":"IMPORTANT","text":"If you run docker-compose
as root user (using sudo
) some enviromental variables, like the current folder used inside the docker-compose.yml
to mount the Volumes, will not work and you will see a bunch of errors.
There are two possible solutions.
sudo
needed).{$PWD}
inside your docker-compose.yml
with either the full path to your current folder, or with a .
and wrap that whole line in double quotes, basically making the paths for volumes relatives.Instead of: - ${PWD}:/var/www/html:cached
use: - \".:/var/www/html:cached\"
Now that you got it, let's deploy:
git clone https://github.com/esmero/archipelago-deployment.git archipelago-deployment\ncd archipelago-deployment\ngit checkout 1.3.0\n
cp docker-compose-linux.yml docker-compose.yml\ndocker-compose pull\ndocker-compose up -d\n
Note: docker-compose.yml
is git ignored in case you make local adjustments or changes to it.
You need to make sure Docker can read/write to your local Drive, a.k.a mounted volumes (especially if you decided not to run it as root
because we told you so!).
This means in practice running:
sudo chown -R 8183:8183 persistent/iiifcache\nsudo chown -R 8983:8983 persistent/solrcore\n
And then:
docker exec -ti esmero-php bash -c \"chown -R www-data:www-data private\"\n
Question: Why is this last command different? Answer: Just a variation. The long answer is that the internal www-data
user in that container (Alpine Linux) has uid:82, but on Ubuntu the www-data
user has a different one so we let Docker assign the uid from inside instead. In practice you could also run directly sudo chown -R 82:82 private
which would only apply to an Alpine use case, which can differ in the future! Does this make sense? No worries if not.
Once all containers are up and running (you can do a docker ps
to check), access http://localhost:9001
using your most loved Web Browser with the following credentials:
user:minio\npass:minio123\n
and create a bucket named \"archipelago\". To do so go to the Buckets
section in the navigation pane, and click Create Bucket +
. Type archipelago
under Bucket Name
and submit, done! That is where we will persist all your Files and also your File copies of each Digital Object. You can always go there and explore what Archipelago (well really Strawberryfield does the hard work) has persisted so you can get comfortable with our architecture.
The following will run composer inside the esmero-php container to download all dependencies and Drupal Core too.
docker exec -ti esmero-php bash -c \"composer install\"\n
You might see a warning: Do not run Composer as root/super user! See https://getcomposer.org/root for details
and the a long list of PHP packages. Don't worry. All is good here. Keep following the instructions! Once that command finishes run our setup script:
docker exec -ti esmero-php bash -c 'scripts/archipelago/setup.sh'\n
Explanation: That script will append some important configurations to your local web/sites/default/settings.php
.
Note: We say local
because your whole Drupal web root (the one you cloned) is also mounted inside the esmero-php and esmero-web containers. So edits to PHP files, for example, can be done without accessing the container directly from your local folder.
If this is the first time you're deploying Drupal using the provided Configurations run:
docker exec -ti -u www-data esmero-php bash -c \"cd web;../vendor/bin/drush -y si --verbose --existing-config --db-url=mysql://root:esmerodb@esmero-db/drupal --account-name=admin --account-pass=archipelago -r=/var/www/html/web --sites-subdir=default --notify=false;drush cr;chown -R www-data:www-data sites;\"\n
Note: You will see these warnings: [warning] The \"block_content:1cdf7155-eb60-4f27-9e5e-64fffe93127a\" was not found
[warning] The \"facets_summary_block:search_page_facet_summary\" was not found
Nothing to worry about. We will provide the missing part in Step 5.
Note 2: Please be patient. This step takes now 25-30% longer because of how the most recent Drupal Installation code fetches translations and other resources (see Performed install task
). This means progress might look like getting \"stuck\", go and get a coffee/tea and let it run to the end.
Once finished, this will give you an admin
Drupal user with archipelago
as password (change this if running on a public instance!) and also set the right Docker Container owner for your Drupal installation files.
Final note about Steps 2-3: You don't need to, nor should you do this more than once. You can destroy/stop/update, recreate your Docker containers, and start again (git pull
), and your Drupal and Data will persist once you've passed the Installation complete
message. I repeat, all other containers' data is persisted inside the persistent/
folder contained in this cloned git repository. Drupal and all its code is visible, editable, and stable inside your web/
folder.
docker exec -ti esmero-php bash -c 'drush ucrt demo --password=\"demo\"; drush urol metadata_pro \"demo\"'\n
docker exec -ti esmero-php bash -c 'drush ucrt jsonapi --password=\"jsonapi\"; drush urol metadata_api \"jsonapi\"'\n
docker exec -ti esmero-php bash -c 'drush urol administrator \"admin\"'\n
","tags":["Archipelago-deployment","Drupal 10","Ubuntu 18.04","Ubuntu 20.04"]},{"location":"archipelago-deployment-ubuntu/#step-5-ingest-some-metadata-displays-to-make-playing-much-more-interactive","title":"Step 5: Ingest some Metadata Displays to make playing much more interactive","text":"Archipelago is more fun without having to start writing Metadata Displays (in Twig) before you know what they actually are. Since you should now have a jsonapi
user and jsonapi should be enabled, you can use that awesome functionality of D8 to get that done. We have 4 demo Metadata Display Entities that go well with the demo Webform we provided. To do that execute in your shell (copy and paste):
docker exec -ti esmero-php bash -c 'scripts/archipelago/deploy.sh'\n
You are done! Open your most loved Web Browser and point it to http://localhost:8001
Note: It can take some time to start the first time (Drupal needs some warming up). The Ubuntu deployment is WAY faster than the OSX deployment because of the way the bind mount volumes are handled by the driver. Our experience is that Archipelago basically reacts instantly!
","tags":["Archipelago-deployment","Drupal 10","Ubuntu 18.04","Ubuntu 20.04"]},{"location":"archipelago-deployment-ubuntu/#step-6-optional-but-more-fun-if-you-add-content","title":"Step 6: Optional but more fun if you add content","text":"One-Step Demo content ingest
","tags":["Archipelago-deployment","Drupal 10","Ubuntu 18.04","Ubuntu 20.04"]},{"location":"archipelago-deployment-ubuntu/#need-help-blue-screen-missed-a-step-need-a-hug","title":"Need help? Blue Screen? Missed a step? Need a hug?","text":"If you see any issues or errors or need help with a step, please let us know (ASAP!). You can either open an issue
in this repository or use the Google Group. We are here to help.
If you like this, let us know!
","tags":["Archipelago-deployment","Drupal 10","Ubuntu 18.04","Ubuntu 20.04"]},{"location":"archipelago-deployment-ubuntu/#user-contributed-documentation-a-video1","title":"User contributed documentation (A Video!)[^1]:","text":"Installing Archipelago on AWS Ubuntu by Zach Spalding: https://youtu.be/RBy7UMxSmyQ
[^1]: You may find this user contributed tutorial video, which was created for an earlier Archipelago release, to be helpful. Please note that there are significant differences between the executed steps and that you need to follow the current release instructions in order to have a successful deployment.
","tags":["Archipelago-deployment","Drupal 10","Ubuntu 18.04","Ubuntu 20.04"]},{"location":"archipelago-deployment-ubuntu/#caring-coding-fixing-testing","title":"Caring & Coding + Fixing + Testing","text":"GPLv3
","tags":["Archipelago-deployment","Drupal 10","Ubuntu 18.04","Ubuntu 20.04"]},{"location":"archipelago-deployment-windows/","title":"Installing Archipelago Drupal 9 on Windows 10/11","text":"","tags":["Archipelago-deployment","Drupal 10","Windows","Ubuntu 18.04","Ubuntu 20.04"]},{"location":"archipelago-deployment-windows/#prerequisites","title":"Prerequisites","text":"Open the Docker Desktop app. The Docker service should start up automatically with a status showing when the service is up and running.
Open an Ubuntu Terminal session (type Ubuntu
in the Windows Start menu).
Bring everything up to date: sudo apt update && sudo apt upgrade -y
Follow the steps for deployment in Ubuntu.
","tags":["Archipelago-deployment","Drupal 10","Windows","Ubuntu 18.04","Ubuntu 20.04"]},{"location":"archipelago-deployment-windows/#acknowledgment","title":"Acknowledgment","text":"Thanks to Corinne Chatnik for documenting these steps!
","tags":["Archipelago-deployment","Drupal 10","Windows","Ubuntu 18.04","Ubuntu 20.04"]},{"location":"archipelago-deployment-windows/#need-help-blue-screen-missed-a-step-need-a-hug","title":"Need help? Blue Screen? Missed a step? Need a hug?","text":"If you see any issues or errors or need help with a step, please let us know (ASAP!). You can either open an issue
in this repository or use the Google Group. We are here to help.
If you like this, let us know!
","tags":["Archipelago-deployment","Drupal 10","Windows","Ubuntu 18.04","Ubuntu 20.04"]},{"location":"archipelago-deployment-windows/#caring-coding-fixing-testing","title":"Caring & Coding + Fixing + Testing","text":"GPLv3
","tags":["Archipelago-deployment","Drupal 10","Windows","Ubuntu 18.04","Ubuntu 20.04"]},{"location":"createdisplaymodes/","title":"Creating Display Modes for Archipelago Digital Objects","text":"We recommend checking out our primer on Display Modes for a broader overview on Form Modes and View Modes for Archipelago Digital Objects (ADOs).
But how do you create and enable these Display Modes in the first place? Let's find out.
"},{"location":"createdisplaymodes/#adding-a-new-form-mode","title":"Adding a new Form Mode","text":"Why would you want to create a new form mode? One common reason is to create different data entry experiences for users with different roles. Let's create an example form mode called \"Student Webform\" -- we can imagine a deployment where Students need a simplified form for ADO creation. We are going to create a form mode, enable it for Digital Objects, and give it some custom settings that differentiate it from existing form modes.
Navigate to yoursite/admin/structure/display-modes
Click on Form modes. This image shows the basic Form Modes shipped with Archipelago
Click the \"Add Form mode\" button at the top of the page. Then select the \"Content\" entity type from the list. In this example, we ultimately want the form mode to be applied to Archipelago Digital Objects, which is a Content entity type.
Enter the name of your Form Mode and hit save. Here we are entering \"Student Webform\".
Great. Now you will see your new Form mode in the list! Let's put it to use.
Head to yoursite/admin/structure/types/manage/digital_object
and click the \"Manage Form Display\" tab. As mentioned above, in this example we want to add a new Form Mode for ADOs, so we are dealing with the Digital Object content type. Scroll to the bottom of this page and look for the \"Custom Display Settings\" area, which is collapsed by default. Expand it, and you should see this.
Enable \"Student Webform\" and hit save! Now scroll back up the page. You'll see it enabled like so.
Now select our new \"Student Webform\" tab. From here, you have many options and can configure input fields as you see fit! To finish out our specific example though, let's finally add our Student Webform to the display. Click on the settings gear icon next to the Descriptive Metadata field.
You'll see that the default webform named \"Descriptive Metadata\" is entered. To add custom content to this Field Widget, start typing in the autocomplete. This example assumes you've created a webform called Student Webform
in yoursite/admin/structure/webform
. For info on how to create a new Webform with proper settings, see our Webforms as input guide.
After you've selected your \"Student Webform\" in the Field Widget setting, hit Update, and then Save at the bottom of the page.
All done! So let's recap. We created a new form mode. We added this form mode to the Manage Form Display > Custom Display Settings options for Digital Objects. And finally we configured the Field Widget for Descriptive Metadata in our new Form Mode to use a new Webform. This last step is arbitrary to this example. We could have enabled or disabled fields, or changed other field widget settings depending on our needs. But configuring different Webforms as Field Widgets for Descriptive Metadata is a common use case in Archipelago.
Thanks for reading this far! But there is more. We might want to display, in addition to ingest, our ADOs in custom ways. The process for creating new View Modes (the other type of Display Mode) is quite similar to creating new Form Modes, but let's walk through it with another example case.
"},{"location":"createdisplaymodes/#adding-a-new-view-mode","title":"Adding a new View Mode","text":"Why would you want to create a new View Mode? Maybe there is a new type of media you are attaching to ADOs that you want to display using the proper player or tool. Or maybe you want to simplify the ADO display, removing fields from the display page. In this example let's create a new View Mode for ADOs that adds some fields to the display to show the Author and Published date of the object.
Navigate to yoursite/admin/structure/display-modes
Select View modes, and click the \"Add View mode\" at the top of the page.
Select Content as your entity type.
Enter the name of your new View Mode and save. Ours is \"Digital Object with Publishing Information\"
Now let's enable this View mode. Go to yoursite/admin/structure/types/manage/digital_object
and click the \"Manage Display\" tab.
Scroll to the bottom of the page and expand the \"Custom Display Settings\" area. You will see our newly created View Mode. Enable it and hit save.
Now scroll back to the page top. You will see \"Digital Object with Publishing Information\" in the list of View Modes, so go ahead and select it.
Scroll down until you see the \"Disabled\" section. This section contains fields that are available to the ADO content type, but are not enabled in this display mode. Let's enable Author and Post date by changing the \"Region\" column dropdown from \"Disabled\" to \"Content\". (To learn more about Regions in Drupal, see here). Basically, this ensures that this field has a home in the page layout. Hit save.
Now, if you want ADOs to use this View Mode for display, there is one last step. You need to select \"Digital Object with Publishing Info\" as the view mode Display Settings when adding new content. This area is located on the right side of the page. See below:
Now, when we view the individual ADO, these new fields have been added to the display.
All done! This was quite a simple example, but now you are aware of how to customize your own ADO display. It can only get more complex and exciting from here.
Let's recap. We created a new View Mode. We enabled this View Mode in Manage Display > Custom Display Settings for Digital Objects. We enabled new fields (in this case, just for instruction, the Author and Post date fields) to make our new View Mode unique, and learned about Disabled fields in the process. We selected our new View Mode in the Display Settings area (slightly confusing wording because yes, this is a View Mode, subset of Display Mode) during ADO creation (for more on creating new objects, see this guide).
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"customwebformelements/","title":"Archipelago Custom Webform Elements","text":"In addition to the core elements provided by the Drupal Webform module, Archipelago also deploys a robust set of custom webform elements specific to digital repositories metadata needs and use cases.
","tags":["Webform","Form Mode","Webform Elements"]},{"location":"customwebformelements/#linked-data","title":"Linked Data:","text":"(*found under Composite Elements in \"Add Element\" menu)
Library of Congress (LoC) Linked Open data
Multi LoD Source Agent Items
Wikidata
Getty Vocabulary Term
VIAF
Location GEOJSON (Nominatim--Open Street Maps)
PubMed MeSH Suggest
SNAC Constellation Linked Open Data
Europeana Entity Suggest
Enhancements for Audio, Document, Image, Video file uploads
Import Metadata from a File (such as XML)
Import Metadata in CSV format from a File
Computed Metadata Transplant
Computed Token
Computed Twig
You can review the coding behind these custom elements here: https://github.com/esmero/webform_strawberryfield/tree/1.1.0/src/Element
","tags":["Webform","Form Mode","Webform Elements"]},{"location":"devops/","title":"Archipelago Software Services","text":"At the core of the Archipelago philosophy is our commitment to both simplicity and flexibility.
"},{"location":"devops/#under-the-hood-archipelagos-architecture-is","title":"Under the hood, Archipelago's architecture is:","text":"Installation is entirely Dockerized and scripted with easy-to-follow directions.
Information related to non-Dockerized installation and configruation can be found here: Traditional Installation Notes
"},{"location":"devops/#strawberryfield-modules-at-the-heart-of-every-archipelago","title":"Strawberryfield Modules at the heart of every Archipelago:","text":"Documentation related to the Strawberryfield modules can be found here: Strawberryfields Forever
"},{"location":"devops/#archipelago-also-extends-these-powerful-tools","title":"Archipelago also extends these powerful tools:","text":"Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"documentation_about/","title":"About This Documentation","text":"Documentation is vital to our community, and contributions are welcome, greatly appreciated, and encouraged.
","tags":["Documentation","Contributing","Examples"]},{"location":"documentation_about/#how-to-contribute","title":"How to Contribute","text":"Difficulty Level: Moderate\u2013Difficult
Below are some examples of features that are currently in use on the site. To explore more visit the Material for MkDocs documentation.
","tags":["Documentation","Contributing","Examples"]},{"location":"documentation_features/#examples","title":"Examples","text":"","tags":["Documentation","Contributing","Examples"]},{"location":"documentation_features/#images","title":"Images","text":"Images are located in the docs/images
folder. You can add new ones there and link to them by relative path. For example, if you added strawberries_color.png
, you would embed it like so:
Image
MarkupResult![New Documentation Image](images/strawberries_color.png)\n
","tags":["Documentation","Contributing","Examples"]},{"location":"documentation_features/#admonitions","title":"Admonitions","text":"Question Admonition
MarkupResult??? question \"What is a collapsible admonition?\"\n\n This is a collapsible admonition. It can have a title, and it collapses so as not to interrupt the flow the of the document, but it provides useful information as needed.\n
What is a collapsible admonition? This is a collapsible admonition. It can have a title, and it collapses so as not to interrupt the flow the of the document, but it provides useful information as needed.
You can read more about admonitions with further examples in the Material for MkDocs documentation.
","tags":["Documentation","Contributing","Examples"]},{"location":"documentation_features/#code-blocks","title":"Code blocks","text":"Code block with title and highlighted lines
MarkupResult```html+twig title=\"HTML in a TWIG template\" hl_lines=\"8 9 10\"\n{% for image in attribute(data, 'as:image')|slice(0,1) %}\n {% if image[\"flv:exif\"] %}\n {% set height = image[\"flv:exif\"].ImageHeight%}\n {% else %}\n {% set width = 1200 %}\n {% endif %}\n {% set imageurl = IIIFserverurl ~ image['url']|replace({'s3://':''})|replace({'private://':''})|url_encode %}\n<a href=\"{{ nodeurl }}\" title=\"{{ data.label }}\" alt=\"{{ data.label }}\">\n<img src=\"{{ imageurl }}/pct:5,5,90,30/,400/0/default.jpg\" class=\"d-block.w-auto\" alt=\"{{ image.name }}\" height=\"400px\" style=\"min-width:1200px\">\n</a> \n{% endfor %}\n```\n
HTML in a TWIG template{% for image in attribute(data, 'as:image')|slice(0,1) %}\n {% if image[\"flv:exif\"] %}\n {% set height = image[\"flv:exif\"].ImageHeight%}\n {% else %}\n {% set width = 1200 %}\n {% endif %}\n {% set imageurl = IIIFserverurl ~ image['url']|replace({'s3://':''})|replace({'private://':''})|url_encode %}\n<a href=\"{{ nodeurl }}\" title=\"{{ data.label }}\" alt=\"{{ data.label }}\">\n<img src=\"{{ imageurl }}/pct:5,5,90,30/,400/0/default.jpg\" class=\"d-block.w-auto\" alt=\"{{ image.name }}\" height=\"400px\" style=\"min-width:1200px\">\n</a> \n{% endfor %}\n
","tags":["Documentation","Contributing","Examples"]},{"location":"documentation_features/#quirks","title":"Quirks","text":"Because of the use of front matter (the block of YAML at the top that contains settings and data for the file) the markup for a horizontal rule is restricted. To create one you have to use the following:
Horizontal Rule
MarkupResult___\n
Info
The above are underscore (_
) characters, as opposed to hyphens (-
).
Some of the documentation that is automatically deployed from the repos have special comments that are converted to theme-specific elements via script.
Front Matter
Deployment Repo with Front MatterDocumentation Repo with Front Matter<!--documentation\n---\ntitle: \"Adding Demo Archipelago Digital Objects (ADOs) to your Repository\"\ntags:\n - Archipelago Digital Objects\n - Demo Content\n---\ndocumentation-->\n
---\ntitle: \"Adding Demo Archipelago Digital Objects (ADOs) to your Repository\"\ntags:\n - Archipelago Digital Objects\n - Demo Content\n---\n
Switching Elements
Deployment Repo with Theme-specific MarkupDocumentation Repo with Theme-specific Markup<!--switch_below\n\n??? info \"OSX (macOS)/x86-64\"\n\n ```shell\n cp docker-compose-osx.yml docker-compose.yml\n ```\n\n??? info \"Linux/x86-64/AMD64\"\n\n ```shell\n cp docker-compose-linux.yml docker-compose.yml\n ```\n\n??? info \"OSX (macOS)/Linux/ARM64\"\n\n ```shell\n cp docker-compose-arm64.yml docker-compose.yml\n ```\n\nswitch_below-->\n\n___\n\nOSX (macOS)/x86-64:\n\n```shell\ncp docker-compose-osx.yml docker-compose.yml\n```\n\n___\n\nLinux/x86-64/AMD64:\n\n```shell\ncp docker-compose-linux.yml docker-compose.yml\n```\n\n___\n\nOSX (macOS)/Linux/ARM64:\n\n```shell\ncp docker-compose-arm64.yml docker-compose.yml\n```\n\n___\n\n<!--switch_above\nswitch_above-->\n
??? info \"OSX (macOS)/x86-64\"\n\n ```shell\n cp docker-compose-osx.yml docker-compose.yml\n ```\n\n??? info \"Linux/x86-64/AMD64\"\n\n ```shell\n cp docker-compose-linux.yml docker-compose.yml\n ```\n\n??? info \"OSX (macOS)/Linux/ARM64\"\n\n ```shell\n cp docker-compose-arm64.yml docker-compose.yml\n ```\n\n<!--repo_docs\n\n___\n\nOSX (macOS)/x86-64:\n\n```shell\ncp docker-compose-osx.yml docker-compose.yml\n```\n\n___\n\nLinux/x86-64/AMD64:\n\n```shell\ncp docker-compose-linux.yml docker-compose.yml\n```\n\n___\n\nOSX (macOS)/Linux/ARM64:\n\n```shell\ncp docker-compose-arm64.yml docker-compose.yml\n```\n\n___\n\nrepo_docs-->\n
","tags":["Documentation","Contributing","Examples"]},{"location":"documentation_technical/","title":"Documentation Technical Details","text":"Archipelago documentation is generated using the following open source projects:
To use any advanced features not mentioned in these pages, you can look through the documentation for each of the above projects.
In addition to the pages added directly via this repository, there are some pages automatically deployed here with GitHub Actions from the following repositories:
Both the main READMEs and documentation in the docs
folders for those repositories are prepended with archipelago-deployment
and archipelago-deployment-live
respectively and copied to the docs
folder here with the rest of the documentation. In practice that means those pieces of documentation need to be edited in those repositories directly.
A brief overview of the specific functionality or workflow area that will be covered in your documentation.
","tags":["Documentation","Contributing","Examples"]},{"location":"documentation_template/#stepsguides","title":"Steps/Guides","text":"Step two example with images:
Step three example with Details section:
Click to open this Details SectionMore Details in a List
Step four example with a simple code block:
{% for image in attribute(data, 'as:image')|slice(0,1) %}\n {% if image[\"flv:exif\"] %}\n {% set height = image[\"flv:exif\"].ImageHeight%}\n {% else %}\n {% set width = 1200 %}\n {% endif %}\n {% set imageurl = IIIFserverurl ~ image['url']|replace({'s3://':''})|replace({'private://':''})|url_encode %}\n<a href=\"{{ nodeurl }}\" title=\"{{ data.label }}\" alt=\"{{ data.label }}\">\n<img src=\"{{ imageurl }}/pct:5,5,90,30/,400/0/default.jpg\" class=\"d-block.w-auto\" alt=\"{{ image.name }}\" height=\"400px\" style=\"min-width:1200px\">\n</a> \n{% endfor %}\n
Step five example with a code block that has a title and highlighted lines:
HTML in a TWIG template{% for image in attribute(data, 'as:image')|slice(0,1) %}\n {% if image[\"flv:exif\"] %}\n {% set height = image[\"flv:exif\"].ImageHeight%}\n {% else %}\n {% set width = 1200 %}\n {% endif %}\n {% set imageurl = IIIFserverurl ~ image['url']|replace({'s3://':''})|replace({'private://':''})|url_encode %}\n<a href=\"{{ nodeurl }}\" title=\"{{ data.label }}\" alt=\"{{ data.label }}\">\n<img src=\"{{ imageurl }}/pct:5,5,90,30/,400/0/default.jpg\" class=\"d-block.w-auto\" alt=\"{{ image.name }}\" height=\"400px\" style=\"min-width:1200px\">\n</a> \n{% endfor %}\n
Last Step Example.
Congratulations! \ud83c\udf89
","tags":["Documentation","Contributing","Examples"]},{"location":"documentation_workflow/","title":"GitHub Workflow","text":"git checkout -b ISSUE-100\n
cp docs/documentation_template.md docs/new_documentation.md\n
nav
section of the mkdocs.yml
configuration file at the root of the repo. For example: nav:\n - Home: index.md\n - About Archipelago:\n - Archipelago's Philosophy & Guiding Principles: ourtake.md\n - Strawberryfields Forever: strawberryfields.md\n - Software Services: devops.md\n - New Documentation: new_documentation.md\n - Code of Conduct: CODE_OF_CONDUCT.md\n - Instructions and Guides:\n - Archipelago-Deployment:\n - Start: archipelago-deployment-readme.md\n - Installing Archipelago Drupal 9 on OSX (macOS): archipelago-deployment-osx.md\n - Installing Archipelago Drupal 9 on Ubuntu 18.04 or 20.04: archipelago-deployment-ubuntu.md\n - Installing Archipelago Drupal 9 on Windows 10/11: archipelago-deployment-windows.md\n - Adding Demo Archipelago Digital Objects (ADOs) to your Repository: archipelago-deployment-democontent.md\n...\n
To view the changes locally, first install the Python libraries using the Python package manager pip:
pip install mkdocs-material mike git+https://github.com/jldiaz/mkdocs-plugin-tags.git mkdocs-git-revision-date-localized-plugin mkdocs-glightbox\n
You may need to install Python on your machine. Download Python or use your favorite operating system package manager such as Homebrew. Now you can build the site locally, e.g. for the documentation using the 1.0.0 branch:
mike deploy 1.0.0\nmike set-default 1.0.0\n
If you create a new branch to match the issue number as in step 3, you would use your branch instead of 1.0.0. For example, a branch of ISSUE-129. mike deploy ISSUE-129\nmike set-default ISSUE-129\n
mike serve\n
git add .\ngit commit -m \"Create new docs with useful information.\"\ngit push origin ISSUE-100\n
Resolves #100
.Drupal, the project, puts out new core releases on a regular schedule. Your Archipelago site needs to apply the security updates and possibly minor releases between major core updates. Major core updates will typically coincide with an updated Archipelago stable release.
Updating core is done via Composer:
","tags":["Archipelago-deployment","Archipelago-deployment-live","DevOps","Drupal","Drupal Core"]},{"location":"drupal_core_update/#stepsguides","title":"Steps/Guides","text":"docker exec -ti esmero-php bash -c \"composer update \"drupal/core-*:^9\" --with-all-dependencies --dry-run\n
The --dry-run
flag will allow you to see what will be updated. Once you review the updates and are ready to go with the full update, you will run the same command without the dry-run
flag.docker exec -ti esmero-php bash -c \"composer update \"drupal/core-*:^9\" --with-all-dependencies\"\n
docker exec -ti esmero-php bash -c \"drush updatedb\"\n
docker exec -ti esmero-php bash -c \"drush cache:rebuild\"\n
Occasionally there will be other Drupal modules that Archipelago uses, and they need to be updated at the same time you run a Core update. This is an example of updating Drupal Webform, which was required for moving to Drupal 9.5.x:Updating a Drupal module
docker exec -ti esmero-php bash -c \"composer update \"drupal/webform:6.1.4\n
docker exec -ti esmero-php bash -c \"drush updatedb\"\n
or docker exec -ti esmero-php bash -c \"drush updb\"\n
docker exec -ti esmero-php bash -c \"drush cache:rebuild\"\n
or docker exec -ti esmero-php bash -c \"drush cr\"\n
Archipelago's Advanced Batch Find and Replace functionality provides different ways for you to efficiently Find/Search and Replace metadata values found in the raw JSON of your Digital Objects and Collections. Advanced Batch Find and Replace makes use of customized Actions that extend Drupal's VBO module to enable these powerful batch metadata replacement Actions in your Archipelago environment.
","tags":["Advanced Batch Find and Replace","Advanced Find and Replace","Find and Replace","Search and Replace","Metadata","Review"]},{"location":"find_and_replace/#where-to-find","title":"Where to Find","text":"In default Archipelagos, you can find Advanced Batch Find and Replace:
Tools
menu > Advanced Batch Find and Replace
/search-and-replace
/admin/structure/views/view/solr_search_content_with_find_and_replace
. The default Facets referenced above can be found at /admin/structure/block/list/archipelago_subtheme
in the Sidebar Second
section. Please proceed with caution if making any changes to the default configurations for this View or the Facets referenced on this View Page. From the main page (display title 'Search and Replace'), you will see:
Fulltext Search
boxActions
Select/deselect all results in this view (all pages)
via toggle switch\u25ba Raw Metadata (JSON)
section beneath each each individual Object/Collection containing the full Raw JSON metadata record for referenceSelected items in this view
(will be 0 items to start).Selected items
available for preview on this main/top page.You will also see a listing of a few different default Facets configured to help guide your selection of potential Digital Objects/Collections:
type
The default options available through the Action dropdown menu include:
Export Archipelago Digital Objects to CSV content item
Text based find and replace Metadata for Archipelago Digital Objects content item
Webform find-and-replace Metadata for Archipelago Digital Objects content item
JSON Patch Metadata for Archipelago Digital Objects content item
Publish Digital Object
Unpublish Digital Object
Change the author of content
Delete selected entities/translations
* denotes Action options that are also shared with the main Content
Page Action Menu
After reviewing the 'Important Notes & Workflow Recommendations' below, please see the following separate pages for detailed examples walking through the usage of the three different Find and Replace specific actions.
Important Note
The Actions available through Archipelago's Advanced Batch Find and Replace can potentially have repository-wide effects. It is strongly recommended that you proceed with caution when executing any of the available Actions.
","tags":["Advanced Batch Find and Replace","Advanced Find and Replace","Find and Replace","Search and Replace","Metadata","Review"]},{"location":"find_and_replace/#simulation-mode","title":"Simulation Mode","text":"Before executing any of the available Find and Replace Actions, the best-practice workflow recommendation is to always first run in Simulation Mode:
After applying any of the Find and Replace Actions, you can review the specific changes that were made within the Revision history of the impacted Digital Objects and Collections.
Find and Replace
page results listing or the main Content
page, navigate to the Digital Object/Collection you wish to review.Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
","tags":["Advanced Batch Find and Replace","Advanced Find and Replace","Find and Replace","Search and Replace","Metadata","Review"]},{"location":"find_and_replace_action_json_patch/","title":"JSON Patch Find and Replace","text":"Enables you to carry out advanced JSON Patch operations within your metadata.
Note
Please refer to the main Find and Replace documentation page for a general overview of where to find within your Archipelago, a general overview of default options and important notes and workflow recommendations.
","tags":["Advanced Batch Find and Replace","JSON Patch Find and Replace","Search and Replace","JSON Patch","JSON Pointer"]},{"location":"find_and_replace_action_json_patch/#what-is-a-json-patch-and-when-to-use-it","title":"What is a JSON Patch and when to use it?","text":"Before we dive into the mechanics of doing JSON Patching Batch in Archipelago we need to learn what a JSON Patch is and of course when applying this action is useful, possible (or not).
A JSON Patch is a JSON
Document containing precise operations that can heavily modify the structure and values of an existing JSON Document, in this case the RAW JSON found inside a strawberryfield of our ADOs.
The operations available for modifications of an JSON document are:
And there is also one (very important) used to check/validate the existence of values/keys:
Even if you can have multiple operations in a single JSON Patch Document, they are always applied as a whole. Means if any of those fail nothing will be applied and in concrete, in our VBO action, no change will be done to your ADO. This in combination with the \u201ctest\u201d operation gives you a lot of safety and a way of discerning/skipping completely a complex set of operations on e.g. a failed \u201dtest\u201d.
JSON Patch uses JSON Pointers
as arguments in all of these operations to target a specific key/value of your JSON.
Using the following JSON Snippet as example:
\"subject_lcgft_terms\": [\n {\n \"uri\": \"http:\\/\\/id.loc.gov\\/authorities\\/genreForms\\/gf2017027249\",\n \"label\": \"Photographs\"\n }\n]\n
These JSON Pointers will resolve in the following manner:
/subject_lcgft_terms
[\n {\n \"uri\": \"http:\\/\\/id.loc.gov\\/authorities\\/genreForms\\/gf2017027249\",\n \"label\": \"Photographs\"\n }\n]\n
/subject_lcgft_terms/0
{\n \"uri\": \"http:\\/\\/id.loc.gov\\/authorities\\/genreForms\\/gf2017027249\",\n \"label\": \"Photographs\"\n}\n
/subject_lcgft_terms/0/label
Photographs\n
AS you can see a JSON Pointer is very precise , allowing you to target complete structures and values but it does not allow Wildcard Operations. Means you can not \"search\" or do loosy comparissons. This very fact limits many times the use case. E.g. if you have a list of terms like this:
\"terms\": [\n \"term1\",\n \"term2\",\n \"term3\"\n]\n
and you want to \"test\" for the existence of \"term3\"
before applying a change, you would need to know exactly at what position inside the terms Array
(Starting from 0) it will/should be found. And that might not be consistent across every ADO.
So how do we use these pointers in an operation inside a JSON Patch document? Using the same fake \"terms\" JSON snippet the previously listed, operations are written like this:
","tags":["Advanced Batch Find and Replace","JSON Patch Find and Replace","Search and Replace","JSON Patch","JSON Pointer"]},{"location":"find_and_replace_action_json_patch/#add","title":"add","text":"{ \"op\": \"add\", \"path\": \"/terms/0\", \"value\": \"term_another\" }\n
This will add before the first term (in this case \"term1\" ) \"term_another\" as a value.
At the endyou can use a dash (-
) (e.g. \"/terms/-\") instead of the numeric position of an array entry to denote that the \"value\" needs to be added at the end. This is needed specially for empty lists. You can not target 0
position on an empty array.
{ \"op\": \"remove\", \"path\": \"/terms/1\"}\n
This will remove the second term (in this case \"term2\" )
","tags":["Advanced Batch Find and Replace","JSON Patch Find and Replace","Search and Replace","JSON Patch","JSON Pointer"]},{"location":"find_and_replace_action_json_patch/#replace","title":"replace","text":"{ \"op\": \"replace\", \"path\": \"/terms/1\", \"value\": \"term_again\" }\n
This will remove the second term (in this case \"term2\" ) and put in its place \"term_again\" as a value. Basically two operationes, \u201cremove\u201c and \u201cadd \u201c in a single step","tags":["Advanced Batch Find and Replace","JSON Patch Find and Replace","Search and Replace","JSON Patch","JSON Pointer"]},{"location":"find_and_replace_action_json_patch/#move","title":"move","text":"{ \"op\": \"move\", \"from\": \"/terms\", \"path\": \"/terms_somewhere_else\"}\n
This will copy all values inside the top JSON key terms into a new top JSON key named terms_somewhere_else and then remove the old terms key.
{ \"op\": \"copy\", \"from\": \"/terms\", \"path\": \"/terms_somewhere_else\"}\n
Similar to \u201cmove\u201d, it will copy the values inside the top JSON key terms into a new top JSON key named terms_somewhere_else, without removing the old terms key or its content!
{ \"op\": \"test\", \"path\": \"/terms/0\", \"value\": \"term1\"}\n
Finally, \u201ctest\u201d will check whether position 0 of terms holds the value \"term1\". If not, the test will fail. If a single test fails, the whole JSON Patch will be cancelled. Tests cannot be concatenated via OR boolean operators, so they always act individually: two tests with one failing is a failed JSON Patch.
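In practice that makes multiple tests behave like a logical AND. For example (a sketch using keys present in the demo ADO shown later in this guide), a Patch meant to run only against English-language Photographs would start with:
{ \"op\": \"test\", \"path\": \"/type\", \"value\": \"Photograph\"},\n{ \"op\": \"test\", \"path\": \"/language\", \"value\": \"English\"}\n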
There is more, of course! The complete documentation can be found here
So, when should you use JSON Patch? There are a few general rules/suggestions:
When you need to act on very specific positions inside your JSON (e.g. first at position 0, then again at position 1, etc.).
When fixing a value, in the sense of putting a static replacement, is not what you need. Other Actions documented will allow you to replace one value with another fixed one, but JSON Patch will allow you to use existing data inside your target JSON and move it/copy it.
Now that we know what it is and when we should use it, we can do a small exercise. The goal:
For all ADOs of type photograph that are a member of the Collection with Node ID 16, transform the description key from a single value into an array.
As with every other VBO action described in our documentation, start by selecting the ADOs you want to modify using the exposed Search Field(s) and/or Facets present in the Search and Replace View
found at /search-and-replace
.
Once you have filtered the list down to a manageable size, containing at least the ADOs you plan on modifying (for a JSON Patch operation it could be more, too, because you can \u201ctest\u201d and match), press either Select / deselect all results in this view to pass all the results (this includes all pages, not only the currently visible one) or go selectively one by one by checking the toggle found beside each ADO's title. You will see the number in Selected 0 item in this view increase. Now press Apply to selected items. We will use for this example Laddie the dog running in the garden, Bronx, N.Y., undated [c. 1910-1918?], an ADO we provide in our DEMO sets.
It may be redundant to say, but Batch Actions are intended to be used when a modification needs to be applied to more than one item, implying there is a pattern. For a single ADO you can always do this faster directly via the EDIT tab.
Keep your JSON Patches (and friends) around
Since JSON Patching involves writing a sometimes complex JSON Document, please keep an application (or text file) around where you can copy/paste and save your JSON Patches for reuse or future reference. Archipelago will not store nor remember between runs the JSON Patch document you submitted. It is also very useful to copy and have at hand the RAW JSON of one of the ADOs you plan to modify, as a reference/aid while building the given JSON Patch document.
The default configuration of the JSON Patch form will contain an example JSON Patch set of Commands (Document).
We are going to replace this one with our own valid JSON Patch document. Notice that it does not require a root {} Object wrapper: it is really a list (or array) of operations.
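At its smallest, such a document is simply an array wrapping a single operation. A minimal, purely illustrative example:
[\n { \"op\": \"test\", \"path\": \"/type\", \"value\": \"Photograph\" }\n]\n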
A bit of repetition, but needed to explain the JSON Patch document: you can see in the following Note box how we copied the RAW JSON of one ADO to be patched, to have a reference while building the JSON Patch. For the sake of brevity we have removed the longer Image File technical metadata here.
RAW JSON of Laddie the dog running in the garden, Bronx, N.Y., undated [c. 1910-1918?]
Laddie the dog running in the garden, Bronx, N.Y., undated [c. 1910-1918?] | before Patching{\n \"note\": \"\",\n \"type\": \"Photograph\",\n \"viaf\": \"\",\n \"label\": \"Laddie the dog running in the garden, Bronx, N.Y., undated [c. 1910-1918?]\",\n \"model\": [],\n \"owner\": \"New-York Historical Society, 170 Central Park West, New York, NY 10024, 212-873-3400.\",\n \"audios\": [],\n \"images\": [\n 26\n ],\n \"models\": [],\n \"rights\": \"This digital image may be used for educational or scholarly purposes without restriction. Commercial and other uses of the item are prohibited without prior written permission from the New-York Historical Society. For more information, please visit the New-York Historical Society's Rights and Reproductions Department web page at [http:\\/\\/www.nyhistory.org\\/about\\/rights-reproductions](http:\\/\\/www.nyhistory.org\\/about\\/rights-reproductions).\",\n \"videos\": [],\n \"creator\": \"\",\n \"ap:tasks\": {\n \"ap:sortfiles\": \"index\"\n },\n \"duration\": \"\",\n \"ispartof\": [],\n \"language\": \"English\",\n \"documents\": [],\n \"edm_agent\": \"\",\n \"publisher\": \"\",\n \"ismemberof\": [\n 16\n ],\n \"creator_lod\": [\n {\n \"name_uri\": \"\",\n \"role_uri\": \"http:\\/\\/id.loc.gov\\/vocabulary\\/relators\\/pht\",\n \"agent_type\": \"personal\",\n \"name_label\": \"Stonebridge, George Ehler\",\n \"role_label\": \"Photographer\"\n }\n ],\n \"description\": \"George Ehler Stonebridge (d. 1941) was an amateur photographer who lived and worked in the Bronx, New York. He left little record of himself, but an invaluable one of his surroundings and interests. Stonebridge lived at several locations in the Bronx with his wife Bella, and their three children Grace, George, and William. He worked at the Northern Gaslight Company, although the position he held is unknown. In addition to taking photographs, Stonebridge wrote poetry and prose about his love of the Bronx, his children, and in honor of military victories. Some of Stonebridge's photographs appeared in local papers. In 1898, he was an authorized reporter and photographer for the North Side News; in 1905 he was an authorized reporter for the Bronx Borough Record and Times, and probably took photographs for that paper as well. Stonebridge was fascinated with the subject of military preparedness. Training rituals and staged battles were one of his favorite photographic subjects. His 1898 poem, \\\"Remember the Maine,\\\" celebrates the United States' victory in the Spanish-American War. He was especially proud of soldiers from the Bronx, and photographed historical tablets throughout the Borough commemorating previous military victories. Stonebridge also used his photographs to illustrate lectures. 
In 1907, he gave several lectures on \\\"The Training of War,\\\" using colored lantern slides.\",\n \"interviewee\": \"\",\n \"interviewer\": \"\",\n \"pubmed_mesh\": null,\n \"sequence_id\": \"\",\n \"subject_loc\": [\n {\n \"uri\": \"http:\\/\\/id.loc.gov\\/authorities\\/subjects\\/sh85038796\",\n \"label\": \"Dogs\"\n }\n ],\n \"website_url\": \"\",\n \"as:generator\": {\n \"type\": \"Update\",\n \"actor\": {\n \"url\": \"http:\\/\\/localhost:8001\\/form\\/descriptive-metadata\",\n \"name\": \"descriptive_metadata\",\n \"type\": \"Service\"\n },\n \"endTime\": \"2022-12-05T09:19:37-05:00\",\n \"summary\": \"Generator\",\n \"@context\": \"https:\\/\\/www.w3.org\\/ns\\/activitystreams\"\n },\n \"date_created\": \"1910-01-01\",\n \"issue_number\": null,\n \"date_published\": \"\",\n \"subjects_local\": null,\n \"term_aat_getty\": \"\",\n \"ap:entitymapping\": {\n \"entity:file\": [\n \"model\",\n \"audios\",\n \"images\",\n \"videos\",\n \"documents\",\n \"upload_associated_warcs\"\n ],\n \"entity:node\": [\n \"ispartof\",\n \"ismemberof\"\n ]\n },\n \"europeana_agents\": \"\",\n \"europeana_places\": \"\",\n \"local_identifier\": \"nyhs_PR066_6136\",\n \"subject_wikidata\": [\n {\n \"uri\": \"http:\\/\\/www.wikidata.org\\/entity\\/Q3381576\",\n \"label\": \"black-and-white photography\"\n },\n {\n \"uri\": \"http:\\/\\/www.wikidata.org\\/entity\\/Q60\",\n \"label\": \"New York City\"\n }\n ],\n \"date_created_edtf\": \"\",\n \"date_created_free\": null,\n \"date_embargo_lift\": null,\n \"physical_location\": null,\n \"related_item_note\": null,\n \"rights_statements\": \"In Copyright - Educational Use Permitted\",\n \"europeana_concepts\": \"\",\n \"geographic_location\": {\n \"lat\": \"40.8466508\",\n \"lng\": \"-73.8785937\",\n \"city\": \"New York\",\n \"state\": \"New York\",\n \"value\": \"The Bronx, Bronx County, New York, United States\",\n \"county\": \"\",\n \"osm_id\": \"9691916\",\n \"country\": \"United States\",\n \"category\": \"boundary\",\n \"locality\": \"\",\n \"osm_type\": \"relation\",\n \"postcode\": \"\",\n \"country_code\": \"us\",\n \"display_name\": \"The Bronx, Bronx County, New York, United States\",\n \"neighbourhood\": \"Bronx County\",\n \"state_district\": \"\"\n },\n \"note_publishinginfo\": null,\n \"subject_lcgft_terms\": [\n {\n \"uri\": \"http:\\/\\/id.loc.gov\\/authorities\\/genreForms\\/gf2017027249\",\n \"label\": \"Photographs\"\n }\n ],\n \"upload_associated_warcs\": [],\n \"physical_description_extent\": \"\",\n \"subject_lcnaf_personal_names\": \"\",\n \"subject_lcnaf_corporate_names\": \"\",\n \"subjects_local_personal_names\": \"\",\n \"related_item_host_location_url\": null,\n \"subject_lcnaf_geographic_names\": [\n {\n \"uri\": \"http:\\/\\/id.loc.gov\\/authorities\\/names\\/n81059724\",\n \"label\": \"Bronx (New York, N.Y.)\"\n },\n {\n \"uri\": \"http:\\/\\/id.loc.gov\\/authorities\\/names\\/n79007751\",\n \"label\": \"New York (N.Y.)\"\n }\n ],\n \"related_item_host_display_label\": null,\n \"related_item_host_local_identifier\": null,\n \"related_item_host_title_info_title\": \"\",\n \"related_item_host_type_of_resource\": null,\n \"physical_description_note_condition\": null\n}\n
Based on our own plan:
Only touch ADOs of type photograph that are a member of (ismemberof) the Collection with Node ID 16:
{ \"op\": \"test\", \"path\": \"/type\", \"value\": \"Photograph\"},\n{ \"op\": \"test\", \"path\": \"/ismemberof\", \"value\": [16]}\n
Notice that, to be really sure, we also match the data type: an array with a single value 16 of type integer (this makes sense, since the operation is also JSON and will be evaluated the same way as the source RAW JSON). This is a precise match: if the ADO belongs to multiple Collections it will, of course, fail.
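If that strictness is not what you want, one possible workaround (shown here only as a sketch, and assuming the Collection of interest is always the first entry) is to test a single position of the array rather than the whole value:
{ \"op\": \"test\", \"path\": \"/ismemberof/0\", \"value\": 16 }\n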
Transform the description key from a single value into an array:
{\"op\": \"add\",\"path\": \"/temp_description_array\",\"value\": []},\n{\"op\": \"move\",\"from\": \"/description\",\"path\": \"/temp_description_array/-\"},\n{\"op\": \"move\",\"from\": \"/temp_description_array\",\"path\": \"/description\"},\n
This is a multi-step operation. Given that JSON Patch cannot \"cast\" types and depends on a given datatype being present before, e.g., adding a new value to it, we use a temporary key here. Notice that you cannot \u201cadd\u201d or \u201cmove\u201d to, e.g., position 0, because the destination array is still empty (that would fail). But by using the dash you can command it to put the value at the end, which on an empty list is also the beginning (we are starting to understand this!).
{ \"op\": \"add\", \"path\": \"/subject_wikidata/-\", \"value\": {\n \"uri\": \"https://www.wikidata.org/wiki/Q1196071\",\n \"label\": \"collie\"\n }\n}\n
Copy the geographic_location.state value and put it into subjects_local:
{\"op\": \"remove\", \"path\": \"/subjects_local\"},\n{\"op\": \"add\", \"path\": \"/subjects_local\", \"value\": []},\n{\"op\": \"copy\", \"from\": \"/geographic_location/state\", \"path\": \"/subjects_local/-\"}\n
Why so many operations? Because initially \"subjects_local\" had a value of null, and because of that it is not suited for being treated as a multivalued key. So we need to remove it first, recreate it as an empty list, and then we can copy. Pro note: you could partially rewrite this as a replace operation!
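Such a replace-based variant could look roughly like this (an illustrative sketch, not the version used in the final Patch below): replace swaps the null value for an empty array in one step, and the copy stays the same.
{\"op\": \"replace\", \"path\": \"/subjects_local\", \"value\": []},\n{\"op\": \"copy\", \"from\": \"/geographic_location/state\", \"path\": \"/subjects_local/-\"}\n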
Careful: if subjects_local already contained values, you would lose them! You can add a \u201ctest\u201d or move the data instead of recreating the key. Many choices.
The final JSON Patch will look like this. Copy it into the Configuration JSON Patch commands form field:
[\n {\"op\": \"test\", \"path\": \"/type\", \"value\": \"Photograph\"},\n {\"op\": \"test\", \"path\": \"/ismemberof\", \"value\": [16]},\n {\"op\": \"add\",\"path\": \"/temp_description_array\",\"value\": []},\n {\"op\": \"move\",\"from\": \"/description\",\"path\": \"/temp_description_array/-\"},\n {\"op\": \"move\",\"from\": \"/temp_description_array\",\"path\": \"/description\"},\n {\"op\": \"add\",\"path\": \"/subject_wikidata/-\",\"value\": \n {\n \"uri\": \"https://www.wikidata.org/wiki/Q1196071\",\n \"label\": \"collie\"\n }\n },\n {\"op\": \"remove\", \"path\": \"/subjects_local\"},\n {\"op\": \"add\", \"path\": \"/subjects_local\", \"value\": []},\n {\"op\": \"copy\", \"from\": \"/geographic_location/state\", \"path\": \"/subjects_local/-\"}\n]\n
The inverse process online
Now that you know (or are starting to understand) the manual process, you can also try this online tool, which, based on a source and a destination JSON, generates the JSON Patch needed to mutate one JSON into the other. The logic might not always be what you need: most likely it will not take into account that you actually need to move values, and it will prefer to set fixed values via an add operation.
","tags":["Advanced Batch Find and Replace","JSON Patch Find and Replace","Search and Replace","JSON Patch","JSON Pointer"]},{"location":"find_and_replace_action_json_patch/#step-3-run-the-json-patch-action-in-simulation-mode","title":"Step 3: Run the JSON Patch Action in simulation mode","text":"Ready. Now check the \"only simulate and debug affected JSON\" checkbox. We want to see if we did well but not yet modify any ADOs. Press Apply
button. You will get another confirmation screen. Press Execute Action
.
It will run quickly (in this example) and you will be redirected back to the original Drupal View from Step 1. If your source ADO is actually the one from our Demo Collection, you might see a diff, something very similar to this:
129d129 < \"description\": \"George Ehler Stonebridge (d. 1941) was an amateur photographer who lived and worked in the Bronx, New York. He left little record of himself, but an invaluable one of his surroundings and interests. Stonebridge lived at several locations in the Bronx with his wife Bella, and their three children Grace, George, and William. He worked at the Northern Gaslight Company, although the position he held is unknown. In addition to taking photographs, Stonebridge wrote poetry and prose about his love of the Bronx, his children, and in honor of military victories. Some of Stonebridge's photographs appeared in local papers. In 1898, he was an authorized reporter and photographer for the North Side News; in 1905 he was an authorized reporter for the Bronx Borough Record and Times, and probably took photographs for that paper as well. Stonebridge was fascinated with the subject of military preparedness. Training rituals and staged battles were one of his favorite photographic subjects. His 1898 poem, \\\"Remember the Maine,\\\" celebrates the United States' victory in the Spanish-American War. He was especially proud of soldiers from the Bronx, and photographed historical tablets throughout the Borough commemorating previous military victories. Stonebridge also used his photographs to illustrate lectures. In 1907, he gave several lectures on \\\"The Training of War,\\\" using colored lantern slides.\", 155d154 < \"subjects_local\": null, 182a180,183 > }, > { > \"uri\": \"https:\\/\\/www.wikidata.org\\/wiki\\/Q1196071\", > \"label\": \"collie\" 236c238,244 < \"physical_description_note_condition\": null --- > \"physical_description_note_condition\": null, > \"description\": [ > \"George Ehler Stonebridge (d. 1941) was an amateur photographer who lived and worked in the Bronx, New York. He left little record of himself, but an invaluable one of his surroundings and interests. Stonebridge lived at several locations in the Bronx with his wife Bella, and their three children Grace, George, and William. He worked at the Northern Gaslight Company, although the position he held is unknown. In addition to taking photographs, Stonebridge wrote poetry and prose about his love of the Bronx, his children, and in honor of military victories. Some of Stonebridge's photographs appeared in local papers. In 1898, he was an authorized reporter and photographer for the North Side News; in 1905 he was an authorized reporter for the Bronx Borough Record and Times, and probably took photographs for that paper as well. Stonebridge was fascinated with the subject of military preparedness. Training rituals and staged battles were one of his favorite photographic subjects. His 1898 poem, \\\"Remember the Maine,\\\" celebrates the United States' victory in the Spanish-American War. He was especially proud of soldiers from the Bronx, and photographed historical tablets throughout the Borough commemorating previous military victories. Stonebridge also used his photographs to illustrate lectures. In 1907, he gave several lectures on \\\"The Training of War,\\\" using colored lantern slides.\" > ], > \"subjects_local\": [ > \"New York\" > ]\n
Which means your Patch would have been applied!
In case something went wrong, e.g. any of the operations did not match your source data, you will see a WARNING like this:
Patch could not be applied for Laddie the dog running in the garden, Bronx, N.Y., undated [c. 1910-1918?]\n
","tags":["Advanced Batch Find and Replace","JSON Patch Find and Replace","Search and Replace","JSON Patch","JSON Pointer"]},{"location":"find_and_replace_action_json_patch/#step-4-run-the-json-patch-action-but-for-real","title":"Step 4: Run the JSON Patch Action but for real!","text":"Now that actual patching. Repeat from Step 1 to Step 3 but keep \"only simulate and debug affected JSON\" unchecked and follow the steps again. You ADO will be modified and you will get almost no notifications except of an action completed notice (in soothing green). If you check Laddie's ADO RAW json (expand in the same resulting view) it will look now like this
Laddie the dog running in the garden, Bronx, N.Y., undated [c. 1910-1918?] JSON after patching{ \n \"note\": \"\",\n \"type\": \"Photograph\",\n \"viaf\": \"\",\n \"label\": \"Laddie the dog running in the garden, Bronx, N.Y., undated [c. 1910-1918?]\",\n \"model\": [],\n \"owner\": \"New-York Historical Society, 170 Central Park West, New York, NY 10024, 212-873-3400.\",\n \"audios\": [],\n \"images\": [\n 26\n ],\n \"models\": [],\n \"rights\": \"This digital image may be used for educational or scholarly purposes without restriction. Commercial and other uses of the item are prohibited without prior written permission from the New-York Historical Society. For more information, please visit the New-York Historical Society's Rights and Reproductions Department web page at [http:\\/\\/www.nyhistory.org\\/about\\/rights-reproductions](http:\\/\\/www.nyhistory.org\\/about\\/rights-reproductions).\",\n \"videos\": [],\n \"creator\": \"\",\n \"ap:tasks\": {\n \"ap:sortfiles\": \"index\"\n },\n \"duration\": \"\",\n \"ispartof\": [],\n \"language\": \"English\",\n \"documents\": [],\n \"edm_agent\": \"\",\n \"publisher\": \"\",\n \"ismemberof\": [\n 16\n ],\n \"creator_lod\": [\n {\n \"name_uri\": \"\",\n \"role_uri\": \"http:\\/\\/id.loc.gov\\/vocabulary\\/relators\\/pht\",\n \"agent_type\": \"personal\",\n \"name_label\": \"Stonebridge, George Ehler\",\n \"role_label\": \"Photographer\"\n }\n ],\n \"description\": [\n \"George Ehler Stonebridge (d. 1941) was an amateur photographer who lived and worked in the Bronx, New York. He left little record of himself, but an invaluable one of his surroundings and interests. Stonebridge lived at several locations in the Bronx with his wife Bella, and their three children Grace, George, and William. He worked at the Northern Gaslight Company, although the position he held is unknown. In addition to taking photographs, Stonebridge wrote poetry and prose about his love of the Bronx, his children, and in honor of military victories. Some of Stonebridge's photographs appeared in local papers. In 1898, he was an authorized reporter and photographer for the North Side News; in 1905 he was an authorized reporter for the Bronx Borough Record and Times, and probably took photographs for that paper as well. Stonebridge was fascinated with the subject of military preparedness. Training rituals and staged battles were one of his favorite photographic subjects. His 1898 poem, \\\"Remember the Maine,\\\" celebrates the United States' victory in the Spanish-American War. He was especially proud of soldiers from the Bronx, and photographed historical tablets throughout the Borough commemorating previous military victories. Stonebridge also used his photographs to illustrate lectures. 
In 1907, he gave several lectures on \\\"The Training of War,\\\" using colored lantern slides.\"\n ],\n \"interviewee\": \"\",\n \"interviewer\": \"\",\n \"pubmed_mesh\": null,\n \"sequence_id\": \"\",\n \"subject_loc\": [\n {\n \"uri\": \"http:\\/\\/id.loc.gov\\/authorities\\/subjects\\/sh85038796\",\n \"label\": \"Dogs\"\n }\n ],\n \"website_url\": \"\",\n \"as:generator\": {\n \"type\": \"Update\",\n \"actor\": {\n \"url\": \"http:\\/\\/localhost:8001\\/form\\/descriptive-metadata\",\n \"name\": \"descriptive_metadata\",\n \"type\": \"Service\"\n },\n \"endTime\": \"2022-12-05T09:19:37-05:00\",\n \"summary\": \"Generator\",\n \"@context\": \"https:\\/\\/www.w3.org\\/ns\\/activitystreams\"\n },\n \"date_created\": \"1910-01-01\",\n \"issue_number\": null,\n \"date_published\": \"\",\n \"subjects_local\": [\n \"New York\"\n ],\n \"term_aat_getty\": \"\",\n \"ap:entitymapping\": {\n \"entity:file\": [\n \"model\",\n \"audios\",\n \"images\",\n \"videos\",\n \"documents\",\n \"upload_associated_warcs\"\n ],\n \"entity:node\": [\n \"ispartof\",\n \"ismemberof\"\n ]\n },\n \"europeana_agents\": \"\",\n \"europeana_places\": \"\",\n \"local_identifier\": \"nyhs_PR066_6136\",\n \"subject_wikidata\": [\n {\n \"uri\": \"http:\\/\\/www.wikidata.org\\/entity\\/Q3381576\",\n \"label\": \"black-and-white photography\"\n },\n {\n \"uri\": \"http:\\/\\/www.wikidata.org\\/entity\\/Q60\",\n \"label\": \"New York City\"\n },\n {\n \"uri\": \"https:\\/\\/www.wikidata.org\\/wiki\\/Q1196071\",\n \"label\": \"collie\"\n }\n ],\n \"date_created_edtf\": \"\",\n \"date_created_free\": null,\n \"date_embargo_lift\": null,\n \"physical_location\": null,\n \"related_item_note\": null,\n \"rights_statements\": \"In Copyright - Educational Use Permitted\",\n \"europeana_concepts\": \"\",\n \"geographic_location\": {\n \"lat\": \"40.8466508\",\n \"lng\": \"-73.8785937\",\n \"city\": \"New York\",\n \"state\": \"New York\",\n \"value\": \"The Bronx, Bronx County, New York, United States\",\n \"county\": \"\",\n \"osm_id\": \"9691916\",\n \"country\": \"United States\",\n \"category\": \"boundary\",\n \"locality\": \"\",\n \"osm_type\": \"relation\",\n \"postcode\": \"\",\n \"country_code\": \"us\",\n \"display_name\": \"The Bronx, Bronx County, New York, United States\",\n \"neighbourhood\": \"Bronx County\",\n \"state_district\": \"\"\n },\n \"note_publishinginfo\": null,\n \"subject_lcgft_terms\": [\n {\n \"uri\": \"http:\\/\\/id.loc.gov\\/authorities\\/genreForms\\/gf2017027249\",\n \"label\": \"Photographs\"\n }\n ],\n \"upload_associated_warcs\": [],\n \"physical_description_extent\": \"\",\n \"subject_lcnaf_personal_names\": \"\",\n \"subject_lcnaf_corporate_names\": \"\",\n \"subjects_local_personal_names\": \"\",\n \"related_item_host_location_url\": null,\n \"subject_lcnaf_geographic_names\": [\n {\n \"uri\": \"http:\\/\\/id.loc.gov\\/authorities\\/names\\/n81059724\",\n \"label\": \"Bronx (New York, N.Y.)\"\n },\n {\n \"uri\": \"http:\\/\\/id.loc.gov\\/authorities\\/names\\/n79007751\",\n \"label\": \"New York (N.Y.)\"\n }\n ],\n \"related_item_host_display_label\": null,\n \"related_item_host_local_identifier\": null,\n \"related_item_host_title_info_title\": \"\",\n \"related_item_host_type_of_resource\": null,\n \"physical_description_note_condition\": null\n}\n
That is all. Again, keep your JSON Patches safe in a text document, test/try simple things first, look for patterns, look for No-Nos that can become \"tests\" to avoid touching ADOs that do not need to be updated and always remember that the destination type (single value, array or object) of an existing Key might affect your complex logic. Happy Patching!
Thank you for reading! Please contact us on our\u00a0Archipelago Commons Google Group\u00a0with any questions or feedback.
Return to the main\u00a0Find and Replace documentation page\u00a0or the\u00a0Archipelago Documentation main page.
","tags":["Advanced Batch Find and Replace","JSON Patch Find and Replace","Search and Replace","JSON Patch","JSON Pointer"]},{"location":"find_and_replace_action_text/","title":"Text Based Find and Replace","text":"The text-based find and replace is case-sensitive and space-sensitive, and while it's the most simple of the actions, it's quite powerful. For this reason it's important to be very precise and target only what's intended. Below are a guide and some examples of use cases for this action.
Note
Please refer to the main Find and Replace documentation page for a general overview of where to find this feature within your Archipelago, the default options available, and important notes and workflow recommendations.
","tags":["Advanced Batch Find and Replace","Advanced Find and Replace","Find and Replace","Search and Replace","Metadata","Review"]},{"location":"find_and_replace_action_text/#step-by-step-guide","title":"Step-by-Step Guide","text":"Tools > Advanced Batch Find and Replace
.Select / deselect all results (all pages, x total)
or toggle the buttons for individual objects.\u25ba Raw Metadata (JSON)
for some of the objects and double-check that the text being searched is only targeting what is intended and that the replacement text makes sense.Text based find and replace Metadata for Archipelago Digital Objects content item
from the Action
dropdown.Selected X items
and review the list.Apply to selected items
button (don't worry, nothing will happen yet).JSON Search String
) and replace (JSON Replacement String
).If you're absolutely certain about the replacement you have targeted, uncheck the 'only simulate and debug affected JSON' option and select Apply
.
only simulate and debug affected JSON
This option, which is selected by default, will simulate the action and show the list of objects that would be affected, along with the number of modifications for each object and a total number of results processed. For each JSON key and value affected the modifications will count 2:
1 for the deletion of the current key and value
+
1 for the creation of the modified key and value
Replacing a JSON key
Use Case: A JSON key is currently singular but should be plural.
JSON key example with empty array value...\n\"myKey\": [],\n...\n
JSON key example with array values...\n\"myKey\": [\"strawberries\",\"blueberries\",\"blackberries\"],\n...\n
Follow the steps above and use the following for the search and replace values:
Search Value\"myKey\":\n
Replace Value\"myKeys\":\n
Tip
By using quotes and the colon instead of myKey
only, we avoid unintentionally replacing other instances of the text within the JSON.
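As a hypothetical illustration: if the same JSON also contained a key named \"myKeyword\" (not present in this example), searching for the bare text myKey and replacing it with myKeys would touch that key too, leaving behind an unintended result such as:
\"myKeysword\": [],\n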
After applying the changes, we have the following key:
JSON key example with empty array value after update...\n\"myKeys\": [],\n...\n
JSON key example with array values after update...\n\"myKeys\": [\"strawberries\",\"blueberries\",\"blackberries\"],\n...\n
Replacing a JSON value
Use Case: After a batch ingest, it was discovered that JSON values across ADOs in multiple keys contain the same typo: Agnes Meyerhoff
(two fs) instead of Agnes Meyerhof
.
...\n\"creator_lod\": [\n {\n \"name_uri\": \"\",\n \"role_uri\": \"http:\\/\\/id.loc.gov\\/vocabulary\\/relators\\/art\",\n \"agent_type\": \"personal\",\n \"name_label\": \"Meyerhoff, Agnes\",\n \"role_label\": \"Artist\"\n },\n {\n \"name_uri\": \"\",\n \"role_uri\": \"http:\\/\\/id.loc.gov\\/vocabulary\\/relators\\/col\",\n \"agent_type\": \"personal\",\n \"name_label\": \"Messenger, Maria, , 1849-1937\",\n \"role_label\": \"Collector\"\n }\n],\n\"description\": \"Inscription on mount: \\\"Meyerhoff, Agnes \\\\ Frankfurt - a\\/M. \\\\ Inv el-lith \\\\ Painter.\\\" Inscription on verso: \\\"Agnes Meyerhoff \\\\ Frankfurt a\\/M \\\\ inv. [at?] lith. \\\\ [maker in?]\\\".\",\n...\n
Follow the steps above and use the following for the search and replace values:
Search ValueMeyerhoff, Agnes\n
Replace ValueMeyerhof, Agnes\n
After applying the changes, we have the following values:
JSON values after update...\n\"creator_lod\": [\n {\n \"name_uri\": \"\",\n \"role_uri\": \"http:\\/\\/id.loc.gov\\/vocabulary\\/relators\\/art\",\n \"agent_type\": \"personal\",\n \"name_label\": \"Meyerhof, Agnes\",\n \"role_label\": \"Artist\"\n },\n {\n \"name_uri\": \"\",\n \"role_uri\": \"http:\\/\\/id.loc.gov\\/vocabulary\\/relators\\/col\",\n \"agent_type\": \"personal\",\n \"name_label\": \"Messenger, Maria, , 1849-1937\",\n \"role_label\": \"Collector\"\n }\n],\n\"description\": \"Inscription on mount: \\\"Meyerhof, Agnes \\\\ Frankfurt - a\\/M. \\\\ Inv el-lith \\\\ Painter.\\\" Inscription on verso: \\\"Agnes Meyerhoff \\\\ Frankfurt a\\/M \\\\ inv. [at?] lith. \\\\ [maker in?]\\\".\",\n...\n
Replacing a JSON value with escape characters
Use Case: The URL for a website that appears in multiple keys needs to be updated from http://hubblesite.org
to https://hubblesite.org
.
...\n\"rights\": \"This digital image may be used for educational or scholarly purposes without restriction. Commercial and other uses of the item are prohibited without prior written permission from the NASA and the Space Telescope Science Institute (STScI). For more information, please visit the NASA and the Space Telescope Science Institute's Copyright web page at [http:\\/\\/hubblesite.org\\/copyright](http:\\/\\/hubblesite.org\\/copyright).\",\n...\n\"description\": \"\\\"The largest NASA Hubble Space Telescope image ever assembled, this sweeping bird\u2019s-eye view of a portion of the Andromeda galaxy (M31) is the sharpest large composite image ever taken of our galactic next-door neighbor. Though the galaxy is over 2 million light-years away, The Hubble Space Telescope is powerful enough to resolve individual stars in a 61,000-light-year-long stretch of the galaxy\u2019s pancake-shaped disk. ... The panorama is the product of the Panchromatic Hubble Andromeda Treasury (PHAT) program. Images were obtained from viewing the galaxy in near-ultraviolet, visible, and near-infrared wavelengths, using the Advanced Camera for Surveys and the Wide Field Camera 3 aboard Hubble. This cropped view shows a 48,000-light-year-long stretch of the galaxy in its natural visible-light color, as photographed with Hubble's Advanced Camera for Surveys in red and blue filters July 2010 through October 2013.\\\" -full description available at: [http:\\/\\/hubblesite.org\\/image\\/3476\\/gallery\\/73-phat](http:\\/\\/hubblesite.org\\/image\\/3476\\/gallery\\/73-phat).\",\n...\n
Follow the steps above and use the following for the search and replace values:
Search Valuehttp://hubblesite.org\n
Replace Valuehttps://hubblesite.org\n
Note
You'll notice that the escape characters for the forward slash (\\/
), which appear in the raw JSON, do not need to be included in the search or replace values.
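In other words, you search for the value as it reads, not as it is stored. For this example, the stored (escaped) form and the search string to use differ like this:
Stored in the raw JSON: http:\\/\\/hubblesite.org\nSearch Value to enter: http://hubblesite.org\n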
After applying the changes, we have the following values:
JSON value example with URL after update...\n\"rights\": \"This digital image may be used for educational or scholarly purposes without restriction. Commercial and other uses of the item are prohibited without prior written permission from the NASA and the Space Telescope Science Institute (STScI). For more information, please visit the NASA and the Space Telescope Science Institute's Copyright web page at [https:\\/\\/hubblesite.org\\/copyright](https:\\/\\/hubblesite.org\\/copyright).\",\n...\n\"description\": \"\\\"The largest NASA Hubble Space Telescope image ever assembled, this sweeping bird\u2019s-eye view of a portion of the Andromeda galaxy (M31) is the sharpest large composite image ever taken of our galactic next-door neighbor. Though the galaxy is over 2 million light-years away, The Hubble Space Telescope is powerful enough to resolve individual stars in a 61,000-light-year-long stretch of the galaxy\u2019s pancake-shaped disk. ... The panorama is the product of the Panchromatic Hubble Andromeda Treasury (PHAT) program. Images were obtained from viewing the galaxy in near-ultraviolet, visible, and near-infrared wavelengths, using the Advanced Camera for Surveys and the Wide Field Camera 3 aboard Hubble. This cropped view shows a 48,000-light-year-long stretch of the galaxy in its natural visible-light color, as photographed with Hubble's Advanced Camera for Surveys in red and blue filters July 2010 through October 2013.\\\" -full description available at: [https:\\/\\/hubblesite.org\\/image\\/3476\\/gallery\\/73-phat](https:\\/\\/hubblesite.org\\/image\\/3476\\/gallery\\/73-phat).\",\n...\n
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the main Find and Replace documentation page or the Archipelago Documentation main page.
","tags":["Advanced Batch Find and Replace","Advanced Find and Replace","Find and Replace","Search and Replace","Metadata","Review"]},{"location":"find_and_replace_action_webform/","title":"Webform Find and Replace","text":"Webform Find and Replace enables you to search against values found within defined Webform elements to apply metadata replacements with targeted care. Below are a guide and an example use case for this Action.
Note
Please refer to the main Find and Replace documentation page for a general overview of where to find this feature within your Archipelago, the default options available, and important notes and workflow recommendations.
","tags":["Advanced Batch Find and Replace","Webform Find and Replace","Search and Replace"]},{"location":"find_and_replace_action_webform/#important-notes-about-different-webform-elements","title":"Important Notes about Different Webform Elements","text":"Maximum Length as Defined by your Webform Element Configuration OR Theme Defaults
For certain text-based webform element types, the maximum field length (maxlength
) defined in your specific webform element configurations will be enforced during Webform Find and Replace operations. If no maximum length is defined, the Admin Theme will enforce a maximum length of 128 characters. Please see our main Webforms documentation for information about configuring webforms in Archipelago.
Some Complex Webform Elements Not Available
Please note that some complex webform elements are not available for use with Webform Find and Replace. Any webform element that requires user interactions (such as the Nominatim Open Street Maps lookup/query and selection) is not available for usage. The different file upload webform elements are also not available for use with Webform Find and Replace.
","tags":["Advanced Batch Find and Replace","Webform Find and Replace","Search and Replace"]},{"location":"find_and_replace_action_webform/#step-by-step-guide","title":"Step by Step Guide","text":"Tools > Advanced Batch Find and Replace
.Select / deselect all results (all pages, x total)
or toggle the buttons for individual objects.\u25ba Raw Metadata (JSON)
for some of the objects and double-check that the metadata field and value you are targeting for replacement is present.Webform find-and-replace Metadata for Archipelago Digital Objects content item
from the Action
dropdown.Selected X items
and review the list.Apply to selected items
button (don't worry, nothing will happen yet). Apply
.In the following example configuration, for the selected 'Senju no oubashi (Senju great bridge)' object, the Media Type (type JSON key) value of \"Visual Artwork\" will be replaced with the type
JSON key value of \"Photograph\".
Selection of Single ADO and Webform find-and-replace
Action
Webform Find and Replace Form
Confirmation of Successfully Executed Changes
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the main Find and Replace documentation page or the Archipelago Documentation main page.
","tags":["Advanced Batch Find and Replace","Webform Find and Replace","Search and Replace"]},{"location":"firstobject/","title":"Your First Digital Object","text":"You followed every Deployment step and you have now a local Archipelago
instance. Great!
So what now? It is time to give your new repository a try and we feel the best way is to start by ingesting a simple Digital Object.
Note
This guide will assume Archipelago is running on http://localhost:8001
, so if you wizardly deployed everything in a different location, please replace all URIs
with your own setup while following this guide.
Start by opening http://localhost:8001
in your favourite Web Browser.
Your Demo deployment will have a fancy Home page with some banners and a small explanation of what Archipelago is and can do. Feel free to read through that now or later.
Click on Log in
in the top left corner and use your demo
credentials from the deployment guide.
(or whatever password you decided was easy for you to remember during the deployment phase)
Press the Log in
button.
Great, welcome demo
user! This user has limited credentials and uses the same global theme as any anonymous user would. Still, demo
can create content, so let's use those super powers and give that a try.
You will see a new Menu item
under Tools
on the top navigation bar named Add Content
. Click it!
As you already know, Archipelago is built on Drupal 8/9
, a very extensible CMS
. In practice that means you have (at least) the same functionality any Drupal deployment has, and that is also true for Content Management.
Drupal ships by default with a very flexible Content Entity Type
named Node
. Nodes
are used for creating Articles and simple Pages but also in Archipelago as Digital Objects
. Drupal has a pretty tight integration with Nodes
and that means you get a lot of fun and useful functionality by default by using them.
An Article
and a Digital Object
are both of type Nodes
, but each one represents a different Content Type
. Content Types
are also named Bundles
. An individual Content, like \"Black and White photograph of a kind Dog\" is named a Content Entity
or, more specifically in this case, a Node
.
What have Article
and Digital Object
Content types in common, and what sets them apart?
Base Fields
and also user configurable set of Fields
attached (or bundled together).Article
has a title, a Body and the option to add an image.Digital Object
has a title but also a special, very flexible one named Strawberry Field
(more about that later).Fields are where you put your data into and also where your data comes from when you expose it to the world.
Nodes
, as any other Content entity have Base Fields (which means you can't remove or configure them) that are used all over the place. Good examples are the title
and also the owner, named uid
(you!).Field Widget
is used to input data into a Field.Field Formatter
that allows you to setup how it is displayed to the World.Field Formatters
(the way you want to show your content formatted to the world) is named a Display Mode
. You can have many, create new ones and remove them, but only use one at the time.Field Widgets
(the way you want to Create and Edit a Node
) is named a Form Mode
. You can also have many, create new ones but only use one at the time.Each Content Type can have different Permissions (using the build in User Roles
system).
Display Modes
. In Practice this means Display Modes
are attached to Content Types
.Form Modes
. In Practice this means Form Modes
are attached to Content Types
.Form Mode
can have its own Permissions.There is of course a lot more to Nodes, Content Types, Formatters, Widgets and in general Content Entities but this is a good start to understand what will happen next.
"},{"location":"firstobject/#adding-content","title":"Adding Content","text":"Below you see all the Content Types
defined by default in Archipelago. Let's click on Digital Object
to get your first Digital Object Node.
What you see below is a Form Mode
in action. A multi-step Webform that will ingest metadata into a field of type Strawberry Field (where all the magic happens) attached to that field using a Webform Field Widget
, an editorial/advanced Block on the right side, and a Quick Save
button at the bottom for saving the session.
Let's fill out the form to begin our ingest. We recommend using similar values as the ones shown in the screen capture to make following the tutorial easier.
Make sure you select Photograph
as Media Type
and all the fields with a red *
are filled up. Then press Move on to next step
at the bottom of the webform to load the next step in line.
Since this is our first digital object we do not yet have a Digital Object Collection for which My First Digital Object
could be a member of. In other words, you can leave Collection Membership
blank and click Next: Upload Files
.
We assume you come from a world where repositories define different Content types and the shape, the fields and values (Schema) are fixed and set by someone else or at least quite complicated to configure. This is where Archipelago differs and starts to propose its own style. You noticed that there is a single Content Type named Digital Object
and you have here a single Web Form. So how does this allow you to have images, sequences, videos, audio, 3D images, etc?
There are many ways of answering that, Archipelago works under the idea of an (or many) Open Schema(s), and that notion permeates the whole environment. Practical answer and simplest way to explain based on this demo is:
Digital Object
is a generic container for any shape of metadata. Metadata is generated either via this Webform-based widget you're currently using, manually (power-user need only) or via APIs. Because of this, Metadata can take any shape to express your needs of Digital Objects and therefore we do not recommend making multiple Digital Object types. However, if you ever do need more Digital Object types, the option is available.Webform
, built using the Webform Module
and Webforms can be setup in almost infinite ways. Any field, combo, or style can be used. Multi Step, single step - we made sure they always only touch/modify data they know how to touch, so even a single input element webform would ensure any previous metadata to persist even if not readable by itself (See the potential?). And Each Webform can be also quite smart!Strawberry field
.We will come back to this later.
"},{"location":"firstobject/#linked-data","title":"Linked Data","text":"As the name of this step suggests; you will be adding all your Linked Data elements here. This step showcases some of the autocomplete Linked Data Webform elements we built for Archipelago. We truly believe in Wikidata as an open, honest, source of Linked Open Data and also one where you can contribute back. But we also have LoC autocompletes and Getty.
Again, enter all fields with a red *
and when you are finished, click Move on to next step
Tip
When entering a location, place or address you will need to click on the Search OpenStreet Maps
button, which is what that big red arrow is pointing to in the screenshot below.
Now we will upload our Photograph
. Click Choose Files
to open your file selector window and choose which file you would like to ingest.
Once you've uploaded your file, you will see all the Exif data extracted from the image, like so...
Once you've mentally digested all of that data, let's go ahead and click Save Metadata
.
By clicking Save Metadata
we are simply persisting all the metadata in the current webform session. The actual ingest of the Object happens when you click Save
on the next and final step, Complete
.
Alright, we've made it. We've added metadata, linked Data, uploaded our files and now... we're ready to save! Go ahead and change the status from Draft to Published and click Save
.
Once you hit save you should see the following green message and your first Archipelago Digital Object!
Congratulations on creating your first digital object! \ud83c\udf53
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"fragaria/","title":"Fragaria Redirects Module","text":"Archipelago's Fragaria Redirect Module is a Drupal 9/10 module that provides dynamic redirect routes matched against existing Search API field values. This module will also provide future Unique IDs (API) integrations and PURLs.
","tags":["Fragaria","Redirects"]},{"location":"fragaria/#prerequisites","title":"Prerequisites","text":"Before proceeding with the following configuration steps, you need to first create the Strawberry Key Name Provider and Solr Field that corresponds to the Field that will be matched against the variable part of the route exists.
In other words, if your Digital Objects have a field and value such as:
\"legacy_PID\": [\n \"mylegacyrepo:1234\"\n ], \n
You need to make sure the values from the legacy_PID
JSON key are indexed (as a Solr/Search API Field) and ready for use as part of the Fragaria Redirects configuration.
Best Practice
Your new Solr field should to be of field type \"String\" for a perfect match and best results. Using \"Full Text\" or a related variant Solr field type will allow for a partial match, which might lead multiple original URLs redirecting to the first match in Archipelago.
","tags":["Fragaria","Redirects"]},{"location":"fragaria/#fragaria-redirect-entity-configuration","title":"Fragaria Redirect Entity Configuration","text":"Navigate to /admin/config/archipelago/fragariaredirect
.
Select the Add a Redirect Config
button.
Enter a label for the Fragaria Redirect Entity you are configuring.
Enter the Prefix (that follows your domain) for the Redirect Route.
node/
or do/
as a Prefix. Even if these will technically work (redirect), using either of these Prefix paths will override your existing Paths defined by Drupal and Archipelago.If applicable, enter the the Suffixes (that follow the prefix + the variable part) for the Redirect Route.
Instead of fixed Prefixes add a single {catch_all} variable suffix at the end
as needed. Checking this will disable any entered static suffixes.Select the Search API Index where the Field that will be matched against the variable part of the route exists.
Select the Search API Field that will be matched against the variable part of the route.
If applicable, add static prefixes for to the variable part/argument of the path.
Add static suffixes for to the variable part/argument of the path.
Select the Type of HTTP redirect to perform.
Lastly, select the box next to Is this Fragaria Redirect Route active?
to set your Redirect to active
, and Save your configuration.
The above example configuration would enable a Temporary redirect from a legacy repository site with a URL of https://mylegacyrepo.edu//mylegacyrepo/object/mylegacyrepo:1234 to your new Archipelago PURL of https://mynewarchipelagorepo.edu/do/mynewADOUUID.
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
","tags":["Fragaria","Redirects"]},{"location":"generalqa_minio_logging/","title":"Min.io Logging","text":"Q: How can I see my minio (S3) docker container's realtime traffic and requests?
A: For standard demo deployments, mini.io storage server runs on the esmero-minio
docker container. Steps are:
Install the mc
binaries (minio client) for your platform following this instructions. e.g for OSX run on your terminal:
brew install minio/stable/mc\nmc alias set esmero-minio http://localhost:9000 user password\n
with http://localhost:9000
being your current machine's min.io URL and exposed port, user
being your username (defaults to minio
) and your original choosen password
(defaults to minio123
)
Run a trace
to watch realtime activity on your terminal:
mc admin trace -v -a --debug --insecure --no-color esmero-minio\n
Note: mc
client is also AWS S3 compatible and can be used to move/copy/delete files on the local instance and to/from a remote AWS storage.
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"generalqa_smtp_configuration/","title":"SMTP Configuration","text":"Q: How can I enable SMTP for Archipelago?
A: For standard demo deployments, SMTP is not setup to send emails. To enable SMTP:
Enter the following commands in your terminal. Note: make sure docker is running. Optionally, you can verify that all Archipelago containers are present by entering the docker ps
command first.
docker exec -ti esmero-php bash -c 'php -dmemory_limit=-1 /usr/bin/composer require drupal/smtp:^1.0'\ndocker exec -ti esmero-php bash -c 'drush en -y smtp'\n
Check that the SMTP module has been enabled by navigating (as admin user) to the EXTEND module menu item (localhost:8001/admin/modules
). You should see \"SMTP Authentication Support\" listed.
Navigate to localhost:8001/admin/config/system/smtp
to configure the SMTP settings.
Save your settings, then test by adding a recipient address in the \u201cSEND TEST E-MAIL\u201d field.
Note: Depending on your email provider, you may also need to enable \u201cless secure\u201d applications in your account settings (such as here for Google email accounts: https://myaccount.google.com/lesssecureapps)
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"generalqa_twig_modules_configuration/","title":"Twig Modules Configuration","text":"Q: When attempting to save a Twig template for a Metadata Display, I receive an error message related to an Unknown \"bamboo_load_entity\" function
.
A: You need to enable the necessary Twig modules.
Navigate to: yoursite/admin/modules
In the \u201cEnter a part of the module name or description\u201d box, enter \u201cbam\u201d to filter for the related Bamboo Twig modules. Alternatively, scroll down to the Bamboo Twig modules section on this page.
Check the box next to each of the following to enable (some may already be enabled):
Click Install
.
After receiving the successful installation confirmation, check to make sure you are now able to save your Twig template without receiving an error message.
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"giveortake/","title":"Archipelago Contribution Guide","text":"Contributing Documentation
Looking to contribute documentation? Start here.
Archipelago welcomes and appreciates any type of contribution, from use cases and needs, questions, documentation, devops and configuration and -- of course -- code, fixes, or new features. To make the process less painful, we recommend you first to read our documentation and deploy a local instance. After that please follow the guidelines below to help you get started.
Archipelago
welcomes, appreciates, and recognizes any and all types of contribution. This includes input on all use cases and needs, questions or answers, documentation, DevOps, and configurations. We also welcome general ideas, thoughts, and even dreams for the future of our repository! Of course, we also invite you to contribute PHP code, including fixes and new features.
We will be helpful, kind, and open. We encourage discussions and always respect one another's opinions, language, gender, style, backgrounds, origins, and destinations, provided they come from the same root values of respect, as stated here. We support conflict resolution using nothing more than basic common sense. We value diversity in all its shapes, forms, colors, epoches, numbers, and kinds, with or without labels, including in-between and evolving. We always assume we can do better and that you have done a lot. Under this very basic social framework, this is how we hope you can contribute:
"},{"location":"giveortake/#where-the-wild-things-live","title":"Where The Wild Things Live","text":"Archipelago has 5 active GitHub repositories
Strawberryfield
.We host a community interaction channel, our google group. This is the best place to ask questions and make suggestions that are not specific to a single module, and/or if you would like to contribute to a larger conversation within our community. Discussions work best in this forum (not excluding GitHub of course), and our official announcements are posted there too.
"},{"location":"giveortake/#documentation-workflow","title":"Documentation Workflow","text":"Documentation is an evolving effort, and we need help. This guide lives in GitHub in Archipelago Documentation. Documentation and Development Worklfow both work the same way, so keep reading!
"},{"location":"giveortake/#development-workflow","title":"Development Workflow","text":"Start by reading open ISSUES (so you don't end up redoing what someone else is already working on) and looking at our Roadmap for version 1.0.0. If the solution to your problem is not there or if there is an unchecked element in the roadmap, this is a great opportunity to help by creating a new ISSUE.
Next, start by opening an GitHub ISSUE in any of the 5 GitHub repositories, depending on what it is you are trying to do.
Please be concise with the title of your ISSUE so that it is easy to understand. Use Markdown to explain the what, how, etc, of your contribution. Note: Even if something related is already in the works, you can still contribute. Just add your comments on any open ISSUE. Or, if you think you want to contribute with a totally different perspective, feel free to open a new ISSUE anyway. We can always discuss next steps starting from there. Every community has its rhythm and style and our style is just beginning to develop. We are still figuring out what works best for everyone.
Once you are done and you feel comfortable working to make a change yourself, take note of the ISSUE number
(lets name it #issuenumber
).
The gist is:
As a best practice, we encourage pull requests to discuss/fix existing code, new code, and documention changes.
For the full step-by-step workflow, we will use Archipelago Documentation and the 1.0.0
branch as example. The same applies to any of the other repositories: just change the remote urls and use the most current branch name.
Fork the Archipelago Documentation Upstream source repository to your own personal GitHub account (e.g. YOU). Copy the URL of your Archipelago Documentation fork (you will need it for the git clone
command below).
$ git clone https://github.com/YOU/archipelago-documentation\n$ cd archipelago-documentation\n$ git checkout 1.0.0\n
"},{"location":"giveortake/#set-up-git-remote-as-upstream","title":"Set Up Git Remote As upstream
","text":"$ git remote add upstream https://github.com/esmero/archipelago-documentation\n$ git fetch upstream\n$ git merge upstream/1.0.0\n...\n
"},{"location":"giveortake/#create-your-issue-branch","title":"Create Your ISSUE Branch","text":"Before making changes, make sure you create a branch using the ISSUE number you created for these contributions.
$ git checkout -b ISSUE-6\n
"},{"location":"giveortake/#do-some-clean-up-and-test-locally","title":"Do Some Clean Up and Test Locally","text":"After your code changes, make sure
PHP
, run phpcs --standard=Drupal yourchanged.file.php
. We (try our best to) use Drupal 8 coding standards.MARKDOWN
file, make sure it renders well (you can use Textmate, Atom, Textile, etc to preview) and that links are not broken.PHP
, please test your changes live on your local instance of Archipelago. All non-documentation modules are already inside web/modules/contrib/
.After verification, commit your changes. This is very good post on how to write commit messages.
$ git commit -am 'Fix that Strawberry'\n
"},{"location":"giveortake/#push-to-the-branch","title":"Push To The Branch","text":"Push your locally committed changes to the remote origin (your fork)
$ git push origin ISSUE-6\n
"},{"location":"giveortake/#create-a-pull-request","title":"Create A Pull Request","text":"Pull requests can be created via GitHub. This document explains in detail how to create one. After your Pull Request gets peer reviewed and approved, it can be merged. Discussion can happen and peers can ask you for modifications, fixes or more information on how to test. We will be respectful. You will be given credit for all your contributions and shown appreciation. There is no wrong and never too little. There could never be too much!
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"googleapi/","title":"Configuration for Google Sheets API","text":"To allow the Archipelago Multi Importer (AMI) to read from Google spreadsheets, you first need to configure the Google Sheets API as outlined in the following instructions.
Please note:
Login to the Google Developer Console. You will see the API & Services Dashboard.
If you have not created Credentials or a Project before, you will need to first create a Project.
Next, click the Create credentials
select box and select OAuth client ID
You will now need to Configure the Consent Screen.
On the initial OAuth Consent Screen setup, select Internal
for User Type.
Now enter AMI
as the App name, and your email address in the User support email. You may also wish to add Authorized domains (bottom of image below) as well.
On the Scopes page, select Add or Remove Scopes
. Then either search/filter the API table for the Google Sheets API, or, under Manually add scopes
enter: https://www.googleapis.com/auth/spreadsheets.readonly
After selecting or entering in the Google Sheets API, you should see this listed under Sensitive Scopes
.
Review the information on the Summary
page, then Save.
You will now be able to Create Oauth client ID
. Select Web Application
as the Application type
Enter \"AMI\" under 'Name' and add any URIs you will be using below.
http://localhost:8001/google_api_client/callback
All URIs need to include /google_api_client/callback
After Saving, you will see a message notifying you that the OAuth client was created. You can copy the Client ID
and Client Secret
directly from this confirmation message into a text editor. You can also access the information from Credentials
in the APIs & Services
section in the Developer console, where you will have additional options for downloading, copying, and modifying if needed.
On the 'Add Google Api Client account' configuration page, enter the following information using your Client ID
and Client Secret
. 'Developer Key' is optional. Select Google Sheets API
 under 'Services' and https://www.googleapis.com/auth/spreadsheets.readonly
under 'Scopes'. Check the box for Is Access Type Offline
. Select the Save button.
You will now need to Authenticate your AMI Google API Client. Return to the Google API Client Listing page. Under the Operation menu on the right-hand side of the AMI client listing, select Authenticate
.
You will be directed to the Google Consent Screen. You may need to log in to your corresponding Google Account before proceeding. When logged in, you will see the following screen requesting that AMI be allowed to \"View your Google Spreadsheets\". Click Allow
.
On the Google API Client Listing page, your AMI client listing should now have 'Yes' under 'Is Authenticated'. You are now ready to use Google Sheets with AMI! Return to the main AMI documentation page to get started.
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"inthewild/","title":"Archipelagos in the Wild \ud83d\uddfa\ufe0f","text":"Explore Archipelago instances running free across digital realms.
Note
Please be aware that some of the following Archipelago instances are still brewing and these links may change. Stay tuned for future updates to live production sites when available.
"},{"location":"inthewild/#metro-archipelago","title":"METRO + Archipelago","text":"The Archipelagos listed below are supported by the Digital Services Team at the Metropolitan New York Library Council. \ud83e\uddd1\u200d\ud83c\udf3e \ud83d\udc1d \ud83c\udf53
Archipelago Playground and Studio Site
Barnard College
Digital Culture of Metropolitan New York (DCMNY)
Empire Archival Discovery Cooperative (EADC) Finding Aid Toolkit
Empire Immersive Experiences
Frick Collection and Webrecorder Team Web Archives Collaboration
Hamilton College Library & IT Services
Olin College Library Phoenix Files
New York State COVID-19 Personal History Initiative
Rensselaer Polytechnic Institute Libraries
San Diego State University Libraries Digital Collections
Union College Library
Western Washington University
From all around our beautiful shared world. \ud83c\udfe1 \ud83c\udfeb \ud83c\udfdb\ufe0f
Amherst College
Association Montessori Internationale
California Revealed
Consiglio Nazionale delle Ricerche / National Research Council of Italy
University of Edinburgh Libraries
If you have a public Archipelago instance you'd like to share on this page \ud83c\udfdd\ufe0f\ud83d\udccd, please contact us. We would love to add your great work to this list! \ud83d\udc9a
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"metadata_display_preview/","title":"Metadata Display Preview","text":"Archipelago's Metadata Display Preview is a very handy tool for your repository toolkit that enables you to preview the output of your Metadata Display (Twig) Templates (found at /metadatadisplay/list
). You can use the Metadata Display Preview to test and check the results of any type of Template (HTML Display, JSON Ingest, IIIF JSON, XML, etc.) against both Archipelago Digital Objects (ADOs) and AMI Sets (rows within).
Prerequisite Note
Before diving into Metadata Display (Twig) Template changes, we recommend reading our Twigs in Archipelago documentation overview guide and also our Working with Twig primer.
","tags":["Metadata Display","Twig Template","Preview"]},{"location":"metadata_display_preview/#step-by-step","title":"Step-by-Step","text":"Navigate to the Metadata Display list at /admin/content/metadatadisplay/list
(or through the admin menu via Manage > Content > Metadata Displays
). From the main Metadata Display List page, you can access all of the different display, rendering, and processing templates found in your Archipelago.
Selecting a Metadata Display Template
Open and select 'Edit' for the Template you wish to Edit and/or Preview.
Editing a Metadata Display Template
You will now be able to select either an Archipelago Digital Object (ADO) or AMI Set to Preview. Both selection types will use an autocomplete search (make sure the autocomplete matches fully against your selection before proceeding).
Archipelago Digital Object (ADO) selection
AMI Set and Row selection
For the Row, you can enter either a (CSV row) number
AMI Set and Row selection
Or a label found within the Source Data CSV:
After you select your ADO or AMI Set and press the Show Preview
button, the fuller Preview section will open up on the right side of the screen. The left side will continue to show the Metadata Display Template you originally selected to Edit.
Tip
It is strongly recommended to always select the option to \"Show Preview using native Output Format (e.g. HTML)\".
Archipelago Digital Object (ADO) selection against an HTML Display template
AMI Set and Row selection against a JSON Ingest template
To keep track of the JSON keys used in your template select the Show Preview with JSON keys used in this template
option before pressing Show Preview
. For more details see below.
Within the Preview Section on the right side of the screen:
From the Edit + Preview mode, you can:
Select the Show Preview
button as you make changes to refresh the Preview output and check your work. After saving any changes you may have made to your selected Template, all of the displays/AMI Sets/other outputs that reference this same Template will reflect the changes made.
Note
This feature is available as of strawberryfield/format_strawberryfield:1.2.0.x-dev
and archipelago/ami:0.6.0.x-dev
. To make use of it before the official 0.6.0/1.2.0 release you can run the following commands:
docker exec -ti esmero-php bash -c \"composer require 'archipelago/ami:0.6.0.x-dev as 0.5.0.x-dev' 'strawberryfield/format_strawberryfield:1.2.0.x-dev as 1.1.0.x-dev'\"\n
docker exec -ti esmero-php bash -c \"drush updb\"\n
docker exec -ti esmero-php bash -c \"drush cr\"\n
When creating or editing a Metadata Display Twig template, you can keep track of the JSON keys being used in the template by enabling the option after selecting an Archipelago Digital Object (ADO) or AMI Set row before pressing Show preview
:
Enable Metadata Display Preview Variables
The last two tabs in the Preview section above expand to show two tables listing the JSON keys that are used and unused by the template. The used keys are sorted by first instance line number (from the template) and the unused keys are sorted alphabetically.
Metadata Display Preview Variables Used JSON Keys
Metadata Display Preview Variables Unused JSON Keys
The JSON Keys that appear in these tables will vary based on changes to the template and the selected ADO or AMI set row.
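For example, a minimal sketch of a display template (assuming, as in the Metadata Display templates shipped with Archipelago, that the ADO's strawberryfield JSON is exposed to the template as data) could be:
{% if data.label %}\n  <h2>{{ data.label }}</h2>\n{% endif %}\n{% if data.description %}\n  <p>{{ data.description }}</p>\n{% endif %}\n
Previewing this sketch against an ADO would list label and description in the used JSON keys table, while every other key present in that ADO's JSON would show up under the unused keys.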
","tags":["Metadata Display","Twig Template","Preview"]},{"location":"metadata_display_preview/#warnings-and-errors","title":"Warnings and Errors","text":"Warnings and Errors encountered during the processing will be shown at the top of the Preview section. A line number (from the template) will be included in the message if available.
Warning
A Warning will be generated if output can be rendered, and the output will be displayed below it.
Error
An Error will be generated if no output can be rendered, and no output will be displayed.
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
","tags":["Metadata Display","Twig Template","Preview"]},{"location":"metadatainarchipelago/","title":"Your JSON, our JSON - RAW Metadata in Archipelago","text":"From the desk of Diego Pino
Archipelago's RAW metadata is stored as JSON and this is core to our architecture. To avoid writing RAW over and over, this document will refer to RAW Metadata simply as Metadata.
Data and Metadata can be extremely complex and extensive. The use cases that define what Data, Media and Metadata to collect, to catalog and expose, to use during search and discovery or to enable interactive functionality, including questions like \"what public facing schemas, formats and serializations I need or want to be compliant with\" are as diverse and complex as the Metadata driving them.
But also Metadata, in specific, is plastic and evolving as are use cases. And more over, some Metadata is descriptive and some Metadata is technical and there are other types of Metadata too, e.g Control Metadata.
Finally Metadata is very close to their generators. Means you and your peers will know, better than any Software Development team, what is needed, useful and, many times also, available given what use case you have, end users needs and resources at hand, your In real life workflows and future expectations.
"},{"location":"metadatainarchipelago/#reason-behind-using-json","title":"Reason behind using JSON","text":"Drupal, the OSS CMS system Archipelago uses and extends, is RDB driven. This means that Content Types
normally follow the idea of an Entity with Fields attached. Each of these Fields becomes then a Database Table and the sum of all these fields living under a Content Type
definition, a fixed schema.
For integration and interoperability reasons with the larger Drupal ecosystem, we inherit in Archipelago the idea of an Entity, in specific, a Content Entity (Node) and Content Type
(Bundled fields for a Node). But instead of generating (and encouraging) the use of hundreds of fixed fields to describe your Digital Objects we put all Metadata as JSON, means a JSON BLOB, into a single smart Field of type Strawberry Field
. \ud83c\udf53
We go a long way of making as much as possible flexible and dynamic. This also implies the definition (and separation) of what an Archipelago Digital Object (ADO) is in our Architecture v/s what a general Drupal Content Type (e.g a static page or a blog post) is defined in code as: \"Any Content type that uses a Strawberry Field is an ADO and will be processed as such\". No configuration is needed. In other words, all is a NODE but any node that uses a Strawberry Field gets a different treatment and will be named in Archipelago an ADO.
One of the challenges of our flexible approach is how to allow Drupal to access the JSON in a way, as native as possible, to generate filtered listings via Drupal Views, free text Search and Faceting. To make this happen Strawberry Field uses a JSON Querying and Exposing as \"Native Field Properties\" logic. Through a special type of Plugin system named Strawberry Key Name Providers and associated Configuration Entities (can be found at /admin/structure/strawberry_keynameprovider
), you have control on which keys and values of your JSON are going to be exposed as field properties of any Strawberry Field, allowing Drupal through this to access values in a flat manner and expose them to the Search API natively. The access to the values of any JSON is done via JMESPATH expressions and then transformed either to a list of values or even \"cast\" into more complex data Data types, like an Entity Reference (means a connection to another Entity).
This gives you a lot of power and control and makes a lot of very heavy operations lighter. You can even plan upfront or evolve these properties in time.
In other words, you control how storage is mapped to Discovery and this allows Drupal Views to work that way too. Of course this also means traditional SQL based Drupal Views won't have access to these internals (for filtering) given that your JSON data nor the virtual Properties generated via Strawberry Key Name Providers are not accessible as individual RDB tables to generate SQL joins and that is why we heavily depend on the Search API (Solr).
"},{"location":"metadatainarchipelago/#open-schema-what-is-yours-what-is-archipelagos","title":"Open Schema. What is yours, what is Archipelago's","text":"What can you add to an ADO's Strawberry Field? As long as it is valid JSON, Archipelago can store it, display it, transform it and search across it in Archipelago. The way you manage Metadata can be as \"intime\" or \"aligned\" to other schemas as you want. Still, there are a few suggested keys/functional ideas:
"},{"location":"metadatainarchipelago/#suggested-json-keys","title":"Suggested JSON keys","text":""},{"location":"metadatainarchipelago/#the-type-key","title":"Thetype
key","text":"{\n \"type\": \"Photograph\"\n}\n
The type
JSON key has a semantic and functional importance in Archipelago. Given that we don't use multiple Drupal Content Types to denote the difference between e.g. a Photograph or a Painting (which would also mean you would be stuck with one or other if we did), we use this key's value to allow Archipelago to select/swap View Modes. This approach also allows for your own needs to define what an ADO in real life or digital realm is (the WHAT). This key is also important when doing AMI
based batch ingests since many of the mappings and decisions (e.g. what Template to use to transform your CSV or if the Destination Drupal Content Type is going to be a Digital Object or a Digital Object Collection) will depend on this.
Note: Archipelago does something extra fun too when using type
value for View Mode Selection (and this is also a feature of one of the Key Name provider Plugins). It will flatten the JSON first and then fetch all type
keys. How does this in practice work?
{\n \"type\": \"Photograph\",\n \"subtypes\": [\n {\n \"type\": \"125 film\"\n },\n {\n \"type\": \"Instant film\"\n }\n ] \n}\n
Means, while doing a View Mode Selection Archipelago will bring all found type
key values together and will have ['Photograph', '125 film', 'Instant film'] available as choices, meaning you will be able to make even finer decisions on how to display your ADOs. View Mode selection is based on order or evaluation, means we recommend putting the more specific mappings first.
label
key","text":"{\n \"label\": \"Black and White Photograph of Cataloger working with JSON\"\n}\n
Archipelago will use the label
key's value to populate the ADO's (Drupal Node) Title. Drupal has a length limit for its native build in Node Entity Title but JSON has not, so in case of more than 255 characters Archipelago will truncate the Title (not the label
key's value) adding an ellipsis (...) as suffix.
Because of the need of having Technical Metadata, Descriptive Metadata and Semantic Metadata while generating different representations of your JSON via Metadata Display Entities (Twig templates) transformations, we store and characterize Files attached to an ADO as part of the JSON. We also use a set of special keys to map and cast JSON keys and values to Drupal's internal Entities system via their Numeric and/or UUID IDs.
Through this, Archipelago will also move files between upload locations and permanent storage, execute Technical metadata extraction, keep track of ADO to ADO relationships (e.g ispartof or ismemberof) and emulate what a traditional Drupal Entity Reference field
would do without the limitations (speed and immutability) a static RDB definition imposes.
ap:entitymapping
key","text":"{\n \"ap:entitymapping\":{\n \"entity:file\": [\n \"model\",\n \"audios\",\n \"images\",\n \"videos\",\n \"documents\",\n \"upload_associated_warcs\"\n ],\n \"entity:node\": [\n \"ispartof\",\n \"ismemberof\"\n ]\n }\n}\n
the ap:entitymapping
is a hint for Archipelago. With this key we can treat certain keys and their values as Drupal Numeric Entity IDs instead of semantically unknown values.
In the presence of the structure exemplified above the following JSON snipped:
\"images\": [\n 1,\n 2,\n 3\n ] \n
Will tell Archipelago that the JSON key images
should be treated as containing Entity IDs for a Drupal Entity of type (entity:file
) File. This has many interessting consequences. Archipelago, on edit/update/ingest will try (hard) to get a hold of Files with ID 1, 2 and 3. If in temporary storage Arhcipelago will move them to its final Permanent Location, will make sure Drupal knows those files are being used by this ADO, will run multiple Technical Metadata Extractions and classify internally the Files, adding everything it could learn from them. In practice, this means that Archipelago will write for you additional structures into the JSON enriching your Metadata.
Without this structure, the images
key would not trigger any logic but will of course still exist and can always still be used as a list of numbers while templating.
This also implies that for a persisted ADO with those values, if you edit the JSON and delete e.g. the number (integer
or string
representation of an integer
) 3
, Archipelago will disconnect the File Entity with ID 3 from this ADO, remove the enriched metadata and mark the File as not being anymore used by this ADO. If nobody else is using the File it will become temporary
and eventually be automatically removed from the system, if that is setup at the Drupal - Filesystem - level.
Using the same example ap:entitymapping
structure, the following snippet:
\"ispartof\": [\n 2000\n ] \n
Will hint to Archipelago on assumed connection between this ADO and another ADO with Drupal Entity ID 2000
. This will drive other functionality in Archipelago (semantic), allowing for example a Navigation Breadcrumb to be built using all connections found in its hierarchical path.
In Archipelago ADO to ADO relationships are normally from Child to Parent and hopefully (but not enforced!) building an Acyclic graph, from leaves to trunk. This will also allow inheritance to happen. This means also that a Parent ADO needs to exist before connecting/relating to it (chicken first). But if it does not, the system will not fail and assume a temporarily broken relationship (egg stays safely intact).
Entity mapping key also drives a very special compatibility addition to any ADO. Archipelago will populate Native Computed Drupal fields (attached at run time to each ADO) with these values loading and exposing them as Drupal Entities, processing both Files and Node Entities and making them visible outside the scope of a Strawberry Field
to the whole CMS.
The following Computed fields are provided:
field_file_drop
: Computed Entity Reference Field. Needed also for JSON API level upload of Files to an ADO (Drupal need). It will expose all File Entities referenced in an ADO, independently of the type of the File.field_sbf_nodetonode
: Computed Entity Reference Field. It will expose all Nodes (other ADOs) Entities referenced in an ADO, independently of the Content type and/or the semantic predicate (ismemberof, ispartof, etc) used.These Fields, because of their native Drupal nature, can be used directly everywhere, e.g. in the Search API to index all related ADOs (or any of their Fields and subproperties, even deeply chained, tree down) without having to specify what predicate is used. Said differently, they act as aggregators, as a generic \"isrelatedto\" property bringing all together.
"},{"location":"metadatainarchipelago/#the-asas_file_type-keys","title":"Theas:{AS_FILE_TYPE}
keys","text":"As explained in the ap:entitymapping
section above, when Archipelago gets hold of a File entity it will enrich your JSON with its extracted data. Archipelago will compute and append to your JSON a set of controlled as:{AS_FILE_TYPE}
keys containing a classified File's Metadata. The naming will be automatic based on grouping Files by their Mime Types.
The possible values for as:{AS_FILE_TYPE}
are
as:image
as:document
as:video
as:audio
as:application
as:text
as:model
as:multipart
as:message
An example for an Image attached to an ADO:
{\n \"as:image\": {\n \"urn:uuid:ef596613-b2e7-444e-865d-efabbf1c59b0\": {\n \"url\": \"s3:\\/\\/de2\\/image-f6268bde41a39874bc69e57ac70d9764-view-ef596613-b2e7-444e-865d-efabbf1c59b0.jp2\",\n \"name\": \"f6268bde41a39874bc69e57ac70d9764_view.jp2\",\n \"tags\": [],\n \"type\": \"Image\",\n \"dr:fid\": 7461,\n \"dr:for\": \"images\",\n \"dr:uuid\": \"ef596613-b2e7-444e-865d-efabbf1c59b0\",\n \"checksum\": \"de2862d4accf5165d32cd0c3db7e7123\",\n \"flv:exif\": {\n \"FileSize\": \"932 KiB\",\n \"MIMEType\": \"image\\/jp2\",\n \"ImageSize\": \"1375x2029\",\n \"ColorSpace\": \"sRGB\",\n \"ImageWidth\": 1375,\n \"ImageHeight\": 2029\n },\n \"sequence\": 1,\n \"flv:pronom\": {\n \"label\": \"JP2 (JPEG 2000 part 1)\",\n \"mimetype\": \"image\\/jp2\",\n \"pronom_id\": \"info:pronom\\/x-fmt\\/392\",\n \"detection_type\": \"signature\"\n },\n \"dr:filesize\": 954064,\n \"dr:mimetype\": \"image\\/jp2\",\n \"crypHashFunc\": \"md5\",\n \"flv:identify\": {\n \"1\": {\n \"width\": \"1375\",\n \"format\": \"JP2\",\n \"height\": \"2029\",\n \"orientation\": \"Undefined\"\n }\n }\n }\n }\n}\n
That is a lot of Metadata! But to understand what is happening here, we need to dissect this into more readable chunks. Let's start with the basics from root to leaves of this hierarchy.
"},{"location":"metadatainarchipelago/#direct-file-level-metadata","title":"Direct File level Metadata","text":"Every Classified File inside the as:{AS_FILE_TYPE}
key will be contained in a unique URN JSON Object property:
\"urn:uuid:ef596613-b2e7-444e-865d-efabbf1c59b0\": {}\n
We use a Property instead of a \"List or Array\" of Technical Metadata because this allows us (at code level) to access quickly from e.g. as:image
structure all the data for a File Entity with UUID ef596613-b2e7-444e-865d-efabbf1c59b0
without iterating. (Also now you know what urn:uuid:ef596613-b2e7-444e-865d-efabbf1c59b0 means.)
Next, inside that property, the following Data provides basic Information about the File so you can access/make decisions when Templating. Notice the duplication of similar data at different levels. Duplication is on purpose and again, allows you to access certain JSON values (or filter) quicker without having to go to other keys or hierarchies to make decisions.
{\n \"url\": \"s3:\\/\\/de2\\/image-f6268bde41a39874bc69e57ac70d9764-view-ef596613-b2e7-444e-865d-efabbf1c59b0.jp2\",\n \"name\": \"Original Name of my Image.jp2\",\n \"tags\": [],\n \"type\": \"Image\",\n \"dr:fid\": 3,\n \"dr:for\": \"images\",\n \"dr:uuid\": \"ef596613-b2e7-444e-865d-efabbf1c59b0\",\n \"crypHashFunc\": \"md5\",\n \"checksum\": \"de2862d4accf5165d32cd0c3db7e7123\",\n \"dr:filesize\": 954064,\n \"dr:mimetype\": \"image\\/jp2\",\n \"sequence\": 1\n
\"url\"
: Contains the Final Storage location/URI of the File. It's prefixed with the configured Streamwrapper, a functional symbolic link to the underlying complexities of the backend storage. e.g s3://
implies an S3 API backend with a (hidden/abstracted) set of credentials, Bucket and Prefixes inside the bucket. This value is also used in Archipelago's IIIF Cantaloupe Service as the Image id
when building a IIIF Image API URL.\"name\"
: The Original Name of the File. Can be used to give a Download a human readable name or as an internal hint/preservation for you.\"tags\"
: Unused by default. You can use this for your own logic if needed.\"type\"
: A redundant (contextual, at this level) key whose value will match {AS_FILE_TYPE}
already found at 2 levels before. Allows you to know what File type this is when iterating over this File's data (without having to look back, or on our Code, when dealing with Flattened JSON).\"dr:fid\"
: The Drupal Entity Numeric ID.\"dr:for\"
: Where in your JSON (top level key) this File ID was stored (or in other words where you can find the value of \"dr:fid\"
. All this will match / was be driven of course by ap:entitymapping
. Sometimes (try uploading a WARC file and run the queue) this key might contain flv:{ACTIVE_STRAWBERRY_RUNNERS_CONFIG_ID}
. This means the File will have been generated by an active Strawberry Runners Processor and not uploaded by you. ACTIVE_STRAWBERRY_RUNNERS_CONFIG_ID
will be the Machine name (or ID) of a given Strawberry Runners Processor Configuration Entity.\"dr:uuid\"
: A redundant (contextual, at this level) key whose value will match the Drupal File entity UUID for this File.\"crypHashFunc\"
: What Cryptographic function was used for generating the checksum. By default Archipelago will do MD5 (faster but also because S3 APIs use that to ensure upload consistency and E-tag). In the future others can be enabled and made configurable\"checksum\"
: The Checksum (calculated) of this File via \"crypHashFunc\"
\"dr:filesize\"
: The File size in Bytes.\"dr:mimetype\"
: The Drupal level infered Mime Type. Archipelago extends this list. This is based on the File Extension.\"sequence\"
: A number (integer) denoting order of this file relative to other files of the same type inside the JSON. Which default type ordering is used will depend on how the ADO was created/edited, but can be overriden using Control Metadata.Deeper inside this structure Archipelago will produce Extracted Technical Metadata. Some of this Metadata will be common to every File Type, some will be specific to a subset, like Moving Media or PDFs. What runs and how it runs can be configured at the File Persister Service Settings configuration form found at/admin/config/archipelago/filepersisting
. Why there? These are service that run syncroniusly on ADO save (Create/Edit) and in while doing File persistance.
\"flv:exif\"
: EXIF Tool extraction for a file. The number of elements that come out might vary, for an Image file it might be normally short, but a PDF might have a very extensive and long list. The above mentioned File Persister Service Settings form allows you to also set a Files Cap Number, that will, once reached, limit and reduce the EXIF. This is very useful if you want to control the size of your complete JSON for any reason you feel that is needed (performance, readability, etc).
{\n \"flv:exif\": {\n \"FileSize\": \"932 KiB\",\n \"MIMEType\": \"image\\/jp2\",\n \"ImageSize\": \"1375x2029\",\n \"ColorSpace\": \"sRGB\",\n \"ImageWidth\": 1375,\n \"ImageHeight\": 2029\n },\n}\n
\"flv:identify\"
: Graphics Magic Identity binary will run on every file and format it knows how to run (and will try even on the ones it does not). Will give you data similar to EXIF but processed based on the actual File and not just extracted from the EXIF data found at the header. Notice that the details will be inside a \"1\", \"2\", etc property. This is because Identify might also go deeper and for e.g a Multi Layer Tiff extract different sequences on the same File.
{\n \"flv:identify\": {\n \"1\": {\n \"width\": \"1375\",\n \"format\": \"JP2\",\n \"height\": \"2029\",\n \"orientation\": \"Undefined\"\n }\n }\n}\n
\"flv:pronom\"
: Droid, a File Signature detection tool will find a matching pronom_id
for your File based on https://www.nationalarchives.gov.uk/aboutapps/pronom/droid-signature-files.htm. This detection type is deeper that EXIF or the mime type based on extension, reading from binary data. It allows you to get small differences between formats (even if e.g both are JP2) and thus make decisions like \"Will Cantaloupe IIIF Image Server
be able to handle this type?\". This has also positive Digital Preservation consequences.
{\n \"flv:pronom\": {\n \"label\": \"JP2 (JPEG 2000 part 1)\",\n \"mimetype\": \"image\\/jp2\",\n \"pronom_id\": \"info:pronom\\/x-fmt\\/392\",\n \"detection_type\": \"signature\"\n },\n}\n
\"flv:mediainfo\"
: Media Info works on Video and Audio. It goes very detailed into codecs
andstreams
and the output added to your JSON might look massive. This is also very needed when working with IIIF Manifests and deciding if a certain Video will be able to play natively on a browser or if Cantaloupe IIIF Image Server
will be able to extract individual frames as images. This again has positive Digital Preservation consequences. The Following is an example of an MP4 file generated via Quicktime on an Apple MacOS computer.
{\n \"flv:mediainfo\": {\n \"menus\": [],\n \"audios\": [\n {\n \"id\": {\n \"fullName\": \"1\",\n \"shortName\": \"1\"\n },\n \"count\": \"282\",\n \"title\": \"Core Media Audio\",\n \"format\": {\n \"fullName\": \"AAC LC\",\n \"shortName\": \"AAC\"\n },\n \"bit_rate\": {\n \"textValue\": \"85.3 kb\\/s\",\n \"absoluteValue\": 85264\n },\n \"codec_id\": \"mp4a-40-2\",\n \"duration\": {\n \"milliseconds\": 21215\n },\n \"channel_s\": {\n \"textValue\": \"1 channel\",\n \"absoluteValue\": 1\n },\n \"frame_rate\": {\n \"textValue\": \"43.066 FPS (1024 SPF)\",\n \"absoluteValue\": 43\n },\n \"format_info\": \"Advanced Audio Codec Low Complexity\",\n \"frame_count\": \"914\",\n \"stream_size\": {\n \"bit\": 226109\n },\n \"streamorder\": \"0\",\n \"tagged_date\": \"UTC 2017-12-05 17:14:10\",\n \"encoded_date\": \"UTC 2017-12-05 17:14:07\",\n \"source_delay\": \"-0\",\n \"bit_rate_mode\": {\n \"fullName\": \"Variable\",\n \"shortName\": \"VBR\"\n },\n \"samples_count\": \"935582\",\n \"sampling_rate\": {\n \"textValue\": \"44.1 kHz\",\n \"absoluteValue\": 44100\n },\n \"channel_layout\": \"C\",\n \"kind_of_stream\": {\n \"fullName\": \"Audio\",\n \"shortName\": \"Audio\"\n },\n \"commercial_name\": \"AAC\",\n \"source_duration\": [\n \"21269\",\n \"21 s 269 ms\",\n \"21 s 269 ms\",\n \"21 s 269 ms\",\n \"00:00:21.269\"\n ],\n \"compression_mode\": {\n \"fullName\": \"Lossy\",\n \"shortName\": \"Lossy\"\n },\n \"channel_positions\": {\n \"fullName\": \"1\\/0\\/0\",\n \"shortName\": \"Front: C\"\n },\n \"samples_per_frame\": \"1024\",\n \"stream_identifier\": \"0\",\n \"source_frame_count\": \"916\",\n \"source_stream_size\": [\n \"226460\",\n \"221 KiB (1%)\",\n \"221 KiB\",\n \"221 KiB\",\n \"221 KiB\",\n \"221.2 KiB\",\n \"221 KiB (1%)\"\n ],\n \"source_delay_source\": \"Container\",\n \"format_additionalfeatures\": \"LC\",\n \"proportion_of_this_stream\": \"0.01178\",\n \"count_of_stream_of_this_kind\": \"1\",\n \"source_streamsize_proportion\": \"0.01180\"\n }\n ],\n \"images\": [],\n \"others\": [\n {\n \"type\": \"meta\",\n \"count\": \"188\",\n \"duration\": {\n \"milliseconds\": 21215\n },\n \"frame_count\": \"1\",\n \"kind_of_stream\": {\n \"fullName\": \"Other\",\n \"shortName\": \"Other\"\n },\n \"stream_identifier\": [\n \"0\",\n \"1\"\n ],\n \"count_of_stream_of_this_kind\": \"2\"\n },\n {\n \"type\": \"meta\",\n \"count\": \"188\",\n \"duration\": {\n \"milliseconds\": 21215\n },\n \"frame_count\": \"1\",\n \"kind_of_stream\": {\n \"fullName\": \"Other\",\n \"shortName\": \"Other\"\n },\n \"stream_identifier\": [\n \"1\",\n \"2\"\n ],\n \"count_of_stream_of_this_kind\": \"2\"\n }\n ],\n \"videos\": [\n {\n \"id\": {\n \"fullName\": \"2\",\n \"shortName\": \"2\"\n },\n \"count\": \"380\",\n \"title\": \"Core Media Video\",\n \"width\": {\n \"textValue\": \"1 280 pixels\",\n \"absoluteValue\": 1280\n },\n \"format\": {\n \"fullName\": \"AVC\",\n \"shortName\": \"AVC\"\n },\n \"height\": {\n \"textValue\": \"720 pixels\",\n \"absoluteValue\": 720\n },\n \"bit_rate\": {\n \"textValue\": \"7 144 kb\\/s\",\n \"absoluteValue\": 7144261\n },\n \"codec_id\": \"avc1\",\n \"duration\": {\n \"milliseconds\": 21215\n },\n \"rotation\": \"0.000\",\n \"bit_depth\": {\n \"textValue\": \"8 bits\",\n \"absoluteValue\": 8\n },\n \"scan_type\": {\n \"fullName\": \"Progressive\",\n \"shortName\": \"Progressive\"\n },\n \"format_url\": \"http:\\/\\/developers.videolan.org\\/x264.html\",\n \"frame_rate\": {\n \"textValue\": \"29.970 (29970\\/1000) FPS\",\n \"absoluteValue\": 29\n },\n \"buffer_size\": 
\"768000\",\n \"color_range\": \"Limited\",\n \"color_space\": \"YUV\",\n \"format_info\": \"Advanced Video Codec\",\n \"frame_count\": \"636\",\n \"stream_size\": {\n \"bit\": 18951244\n },\n \"streamorder\": \"1\",\n \"tagged_date\": \"UTC 2017-12-05 17:14:10\",\n \"encoded_date\": \"UTC 2017-12-05 17:14:07\",\n \"bit_rate_mode\": {\n \"fullName\": \"Variable\",\n \"shortName\": \"VBR\"\n },\n \"codec_id_info\": \"Advanced Video Coding\",\n \"framerate_den\": \"1000\",\n \"framerate_num\": \"29970\",\n \"sampled_width\": \"1280\",\n \"format_profile\": \"Main@L3.1\",\n \"kind_of_stream\": {\n \"fullName\": \"Video\",\n \"shortName\": \"Video\"\n },\n \"sampled_height\": \"720\",\n \"color_primaries\": \"BT.709\",\n \"commercial_name\": \"AVC\",\n \"format_settings\": \"CABAC \\/ 2 Ref Frames\",\n \"frame_rate_mode\": {\n \"fullName\": \"Variable\",\n \"shortName\": \"VFR\"\n },\n \"bits_pixel_frame\": \"0.259\",\n \"maximum_bit_rate\": {\n \"textValue\": \"768 kb\\/s\",\n \"absoluteValue\": 768000\n },\n \"stream_identifier\": \"0\",\n \"chroma_subsampling\": [\n \"4:2:0\",\n \"4:2:0\"\n ],\n \"maximum_frame_rate\": [\n \"30.000\",\n \"30.000 FPS\"\n ],\n \"minimum_frame_rate\": [\n \"28.571\",\n \"28.571 FPS\"\n ],\n \"pixel_aspect_ratio\": \"1.000\",\n \"colour_range_source\": \"Stream\",\n \"format_settings_gop\": \"M=2, N=30\",\n \"internet_media_type\": \"video\\/H264\",\n \"matrix_coefficients\": \"BT.709\",\n \"original_frame_rate\": [\n \"25.000\",\n \"25.000 FPS\"\n ],\n \"display_aspect_ratio\": {\n \"textValue\": \"16:9\",\n \"absoluteValue\": 1.778\n },\n \"format_settings_cabac\": {\n \"fullName\": \"Yes\",\n \"shortName\": \"Yes\"\n },\n \"codec_configuration_box\": \"avcC\",\n \"colour_primaries_source\": \"Container \\/ Stream\",\n \"transfer_characteristics\": \"BT.709\",\n \"proportion_of_this_stream\": \"0.98734\",\n \"colour_description_present\": \"Yes\",\n \"matrix_coefficients_source\": \"Container \\/ Stream\",\n \"count_of_stream_of_this_kind\": \"1\",\n \"transfer_characteristics_source\": \"Container \\/ Stream\",\n \"format_settings_reference_frames\": [\n \"2\",\n \"2 frames\"\n ],\n \"colour_description_present_source\": \"Container \\/ Stream\"\n }\n ],\n \"general\": {\n \"count\": \"336\",\n \"format\": {\n \"fullName\": \"MPEG-4\",\n \"shortName\": \"MPEG-4\"\n },\n \"codec_id\": [\n \"qt \",\n \"qt 0000.00 (qt )\"\n ],\n \"datasize\": \"19177730\",\n \"duration\": {\n \"milliseconds\": 21215\n },\n \"file_name\": \"c98e7bc52e4bd3fe5681a746f2d9c76f_diego4\",\n \"file_size\": {\n \"bit\": 19194157\n },\n \"footersize\": \"0\",\n \"frame_rate\": {\n \"textValue\": \"29.970 FPS\",\n \"absoluteValue\": 29\n },\n \"headersize\": \"16427\",\n \"othercount\": \"2\",\n \"folder_name\": \"\\/tmp\\/ami\\/setfiles\\/cb606b13b823eaea784dc77c460f3baf\",\n \"frame_count\": \"636\",\n \"stream_size\": {\n \"bit\": 16804\n },\n \"tagged_date\": \"UTC 2017-12-05 17:14:10\",\n \"audio_codecs\": \"AAC LC\",\n \"codec_id_url\": \"http:\\/\\/www.apple.com\\/quicktime\\/download\\/standalone.html\",\n \"codecs_video\": \"AVC\",\n \"encoded_date\": \"UTC 2017-12-05 17:14:07\",\n \"isstreamable\": \"Yes\",\n \"complete_name\": \"\\/tmp\\/ami\\/setfiles\\/cb606b13b823eaea784dc77c460f3baf\\/c98e7bc52e4bd3fe5681a746f2d9c76f_diego4.m4v\",\n \"file_extension\": \"m4v\",\n \"format_profile\": \"QuickTime\",\n \"kind_of_stream\": {\n \"fullName\": \"General\",\n \"shortName\": \"General\"\n },\n \"codecid_version\": \"0000.00\",\n \"commercial_name\": \"MPEG-4\",\n 
\"writing_library\": {\n \"fullName\": \"Apple QuickTime\",\n \"shortName\": \"Apple QuickTime\"\n },\n \"overall_bit_rate\": {\n \"fullName\": \"7 238 kb\\/s\",\n \"shortName\": \"7237957\"\n },\n \"audio_format_list\": \"AAC LC\",\n \"stream_identifier\": \"0\",\n \"video_format_list\": \"AVC\",\n \"codecid_compatible\": \"qt \",\n \"file_name_extension\": \"c98e7bc52e4bd3fe5681a746f2d9c76f_diego4.m4v\",\n \"internet_media_type\": \"video\\/mp4\",\n \"encoded_library_name\": \"Apple QuickTime\",\n \"comapplequicktimemake\": \"Apple\",\n \"overall_bit_rate_mode\": {\n \"fullName\": \"Variable\",\n \"shortName\": \"VBR\"\n },\n \"comapplequicktimemodel\": \"iPhone SE\",\n \"count_of_audio_streams\": \"1\",\n \"count_of_video_streams\": \"1\",\n \"comapplequicktimesoftware\": \"10.3.2\",\n \"proportion_of_this_stream\": \"0.00088\",\n \"audio_format_withhint_list\": \"AAC LC\",\n \"video_format_withhint_list\": \"AVC\",\n \"file_last_modification_date\": {\n \"date\": \"2022-10-19 20:02:32.000000\",\n \"timezone\": \"UTC\",\n \"timezone_type\": 3\n },\n \"count_of_stream_of_this_kind\": \"1\",\n \"comapplequicktimecreationdate\": \"2017-10-25T16:58:17-0400\",\n \"format_extensions_usually_used\": \"braw mov mp4 m4v m4a m4b m4p m4r 3ga 3gpa 3gpp 3gp 3gpp2 3g2 k3g jpm jpx mqv ismv isma ismt f4a f4b f4v\",\n \"comapplequicktimelocationiso6709\": \"+40.6145-074.2678+020.977\\/\",\n \"file_last_modification_date_local\": {\n \"date\": \"2022-10-19 20:02:32.000000\",\n \"timezone\": \"America\\/New_York\",\n \"timezone_type\": 3\n }\n },\n \"version\": \"21.09\",\n \"subtitles\": []\n }\n }\n}\n
\"flv:pdfinfo\"
: PDF Info will get Page level Information for a PDF or Ghostscript document. The dimensions displayed in the following example are not in pixels but points (resolution independent) and are also used for IIIF generation when deciding at what rasterized pixel size a given PDF document page will be rendered. Same as with flv:identify
, the technical metadata will be contained inside a keyed (string but semantically an integer) property. In this particular case each number is a page sequence in the original PDF order.
{\n \"flv:pdfinfo\": {\n \"1\": {\n \"width\": \"612\",\n \"height\": \"792\",\n \"rotation\": \"0\",\n \"orientation\": \"TopLeft\"\n }\n }\n}\n
@TODO: add the extra special key used by Strawberry Runners when it attaches a file. e.g WARC to WACZ
"},{"location":"metadatainarchipelago/#did-you-know","title":"Did you #know?","text":"If you delete a whole as:{AS_FILE_TYPE}
structure or one of the File level structures (a urn:uuid:{uuid}
key and its children), Archipelago will recreate it. If you modify any internal value contained in it, Archipelago will do nothing and will trust you (and if you do strange things like modifying the url
something might even fail e.g in a IIIF Metadata Display Entity Twig Template). No data edit there will trigger a modification/moving/deletion of a File (or e.g write back EXIF to be binary). You will have time to revert to a previous revision (version) of the ADO if any manual change was done. So, should you modify/delete this structures? Almost never. Ever. But you might find needs for that someday. Also to be noted. Producing this structure for a large file in S3:// is intensive. It needs to be downloaded to a local path and if the File is a few Gigabytes in size Archipelago might even run out of PHP processing time. If that ever happens you can also copy/paste from a previous revision of the ADO the relevant piece. If archipelago finds it (implied in the previous explanation) it will not have to regenerate it. The AMI module does this in an async/enqueued way to avoid time out issues and can reuse a cached metadata extraction between runs, but when working directly on an ADO via e.g a webform or via RAW edit, take that in account. More work is being done to allow also one on one async File operations and larger uploads via the web.
ap:tasks
keys","text":"As mentioned briefly before, there is also Control Metadata. What do we mean with that? Control metadata in Archipelago's way of allowing you to give, through metadata, (that you might want to preserve or not) instructions to Archipelago that relate to processing. Let's start with the basic one:
{\n \"ap:tasks\": {\n \"ap:sortfiles\": \"index\"\n }\n}\n
\"ap:sortfiles\"
key will instruct Archipelago to sort (create a sequence
key and a sequential number (integer value) inside each Metadata File entry of a as:{AS_FILE_TYPE}
structure. Values can be one of ['natural', 'index', 'manual']
defaulting, if absent or has an invalid value, to natural
. - natural
: files will be sorted by File Name, the filename
key found at the same level of sequence
in the previously mentioned as:{AS_FILE_TYPE}
structure. a Photograph_1.jpeg
will come before a Photograph_10.jpeg
. The way a human being naturally would order by name. - index
files will be sorted by the order in which they appear inside the upload JSON key (the dr:for
key, one of the keys mapped in the ap:entitymapping
structure under entity:file
explained before. e.g. images
:[5, 10 , 1 ], would imply the File Entity with Drupal ID 5 would get \"sequence\": 1
, the one with Drupal ID 10 will get \"sequence\": 1
, etc. This is the default when ingesting via the AMI module given the need to preserve file order that has/might have unknown names or names you don't have control of (thus natural won't work) coming from e.g a Remote, HTTP/HTTPS location - manual
: You can modify the values manually for any sequence
key inside as:{AS_FILE_TYPE}
structure and those values will stick.
What do we mean with stick? Well, everytime archipelago gets a change in this \"ap:sortfiles\"
, e.g a new File is added, a File is deleted, automatic re-sorting will happen.
{\n \"ap:tasks\": {\n \"ap:forcepost\": true\n }\n}\n
\"ap:forcepost\"
: A boolean. The functionality of this key is provided by the Strawberry Runners Module. Will force Strawberry Runners Post processing for this ADO.
Each Configured and active Postprocessor provided by the Strawberry Runners
module might or not kick in
by evaluating a set of rules. If rule evaluates to TRUE, the PostProcessor will generate a certain output., e.g a Solr Indexed Strawberry Flavor
Data Source containing OCR, HOCR and NLP metadata for one or more pages of a PDF.
Everytime a Create or Update operation on an ADO happens, these rules will be evaluated and the Processor will be enqueued as a future task. But at the moment of executing, when the queue workers take one item, a check will be made and if the result of a previous run (e.g HOCR) is already present in the system (e.g in Solr) and it's veryfied to be belonging to the same source, the actual heavyload of processing the PDF will be skipped.
While testing, coding, doing complex changes in the system (like modifying largely the settings for one processor) or even in the case of an ISSUE (e.g HOCR was wrongly run with a setting that made all look like garbage!) you can instruct Archipelago run again without checks. And again. And again. basically everytime you Save an ADO by setting ap:forcepost
to true
. This can also be used batch and is already implied (means it does it for you but only once, without modifying the JSON) in the Trigger Strawberrry Runners process/reprocess for Archipelago Digital Objects
VBO action we provide.
In the absence of \"ap:forcepost\"
the value is implicitly false
, same as setting it explicitly to false
.
{\n \"ap:tasks\": {\n \"ap:nopost\": [\n 'pager'\n ]\n }\n}\n
\"ap:nopost\"
: an array or list of ACTIVE_STRAWBERRY_RUNNERS_CONFIG_ID
entries, means Machine names (or IDs) Strawberry Runners Processor Configuration Entities. The functionality of this key is provided by the Strawberry Runners. If not an array it will be ignored. Any value present not matching an active Strawberry Runners Processor Configuration Entity ID will also be ignored. Effectively, any post processors in this list will be skipped for this ADO. This allows a finer grained avoidance of some expensive processing that might lead to unsuable data. E.g a particular Manuscript ADO that Tesseract won't be able to OCR correctly. Adding this key to an ADO that was already processed won't remove existing generated/stored processing.
{\n \"ap:tasks\": {\n \"ap:ami\": {\n \"metadata_display\": 7\n\n }\n }\n}\n
\"ap:ami\"
is a newer key ( as of Archipelago 1.1.0 and AMI 0.5.0) and for now can only contain another single key named \"metadata_display\"
. The value of this one can be either a single Integer, the Drupal ID of a Metadata Display Entity or a string, the UUID
of a Metadata Display Entity. The functionality triggered by this key is provided by the AMI module and will do something extremely powerfull: it will take the complete JSON and process through the Twig or Metadata Display Entity refererenced in its value, IF, and only IF, the output of that template is JSON. This runs before any other event (Archipelago runs a ton of events that validate, enrich, check, etc your ADOs from the moment you SAVE or Create it) and because of that allows you to totally pivot, transform, change RAW data coming into Archipelago, e.g via the JSON:API into the structure you need/want. Said differently, you could push JSON from a totally different system and if the referenced Metadata Display Entity is well written, end with a perfectly aligned JSON matching your internal structure without modifying the INPUT manually. Because Twig is very powerful you can also do cleanups, complex logic, etc. More over, you can transform any existing ADO via Batch by adding this key(s) and values using the JSON Patch VBO action. Once processed and if all went well, meaning the output of the Template is valid JSON, the key itself will be removed. This, to avoid running over and over (invisibly to you) on further operations/edits/etc. This is a one time operation that does not stick. What happens if it does not run well, fails, errors out or the Template referenced does not exist? You get a second change (everyone deserves one), the Original ingested JSON, without transformations is kept. All this is very similar to what the AMI module does via a CSV but in this case its atomic. We know what you are thinking. You can process data twice, via AMI and then at the end pass it again through another template based on a certain logic coming from the first? yes. you can!
In the future \"ap:ami\"
might contain more keys to do more advanced File level actions. Archipelago is being constantly enhanced!
Archipelago also keeps information about who/how a certain JSON was generated. Depending on how the Ingest/Edit of an ADO happened, this can be automatically generated or added manually (the case for AMI ingests).
The structure is simple and not accumulative because there is also versioning at the ADO (Drupal) level that allows you to look back if needed.
{\n \"as:generator\": {\n \"type\": \"Update\",\n \"actor\": {\n \"url\": \"http:\\/\\/localhost:8001\\/form\\/default-descriptive-metadata-ami\",\n \"name\": \"default_descriptive_metadata_ami\",\n \"type\": \"Service\"\n },\n \"endTime\": \"2022-03-16T15:51:24-04:00\",\n \"summary\": \"Generator\",\n \"@context\": \"https:\\/\\/www.w3.org\\/ns\\/activitystreams\"\n },\n}\n
The \"as:generator\"
Conforms to the Activity Stream Vocabulary Version 2.0 and keeps track of the last Operation executed on an ADO. Edits and Ingests via the Webform Widget will create this automatically using the Canonical URL of the Webform That generated the content and \"type\"
might be either \"Update\"
or \"Create
\". ADOs that were processed via \"ap:ami\"
will have automatically one generated to express the fact that the Original JSON was modified by a Metadata Display Entity. Objects created via AMI and using a Metadata Display Entity
can also add via the Twig template syntax the AMI Set ID used to generate the ADO (or Update) allowing the Service URL (\"url\"
) to be faceted/searched for (e.g show me all objects ingested via AMI Set with ID 4).
Anything or everything else (including unknow data, future data, upcoming data) belongs to you. How you name it, how you structure, how it evolves is up to you and the functionality and integration you want. That said, as someone (the writer) that enjoys cooking and had to learn the hardway (experimenting and failing sometmes) the basics before doing proper meals for others to enjoy, we suggest you plan on this before inventing the next Open Schema.
Note: Why the out-of-context Cooking Analogy?
This idea is deeply embedded in our Architecture. We see Metadata as ingredients. Your JSON is your fridge (or pantry or both). Metadata Display Entities, and their Twig Templates, recipes that allow you to pick and choose Ingredients and your Twig coding skills (and filters, functions, loops and conditionals) your basic cooking skills. This analogy has many consequences:
The Open Schema you will get from a Vanilla Archipelago already covers many many uses cases and was developed by a caring team of metadata professionals and practitioners working with Archipelago for a while already. It covers LoD and most Description needs for your Whys, Wheres, When, Who/Whom. Some tips:
property_1
might be hard to document for you. But original_artifact_in_collection
might be better (and denotes semantically the value might be a boolean, true of false). Use plural and singular in your naming to denote that something might contain more than one entry. Try to be generic but assertive. mods_modsinfo_namepart
is tempting but is already hinting a single original fixed schema. And you might end using the same value (the who) in Dublin Core, IIIF, schema.org, etc outputs. So mybe author
instead? This also leads to: sometimes multiple keys are better than many deeply nested ones where understanding. You can keep authors and contributors in separate keys.{# #}
explaining why/what it holds. You can also add Help/Extra info when designing your schema via a the Webform. Each element has extra properties to do so and that way you can also explain others (the ones using the Webform to add/edit) what the purpose of your metadata is.Do you have your own Kitchen/cooking tips you want to share? We hope you enjoy the learning process and the many choices Archipelago provides.
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"metadatatwigs/","title":"Twig Templates and Archipelago","text":"Archipelago uses a fast, cached templating system that is core to Drupal, called Twig
. In its guts (or its heart?) Archipelago uses this system to transform the close to your needs open schema metadata that lives in every strawberryfield
as JSON into close to other one's fixed schema needs metadata. This is quite simple, but it is an essential component of our vision of how a repository should manage metadata.
Twig is a template engine for PHP part of Symfony framework.
This templating system is exposed to Archipelago users through the UI, and is stored in the repository as content. This setup empowers users to fully control how metadata is transformed and published without touching their individual sources or needing to manage hard-coded configurations. We named these readily accessible and powerful templates Metadata Display entities
, but they serve more than just display needs.
Twig drives every Page in a Drupal 8/9/10 environment.
Twig drives every aspect of your ADO exposure to the world in Archipelago and even batch Ingest.
Templates or recipes can be shared, exported, ingested, updated, and adapted in many ways. This means you can make changes quickly without having to wait for the next major release of Archipelago or your favorite Metadata Schema Specs Committee\u2019s agreement to implement the next or the last version. This module not only handles metadata but media assets as well. It will extract local or remote URIs and files from your metadata and render them as media viewers: books, 3D models, images, panoramas, A/V, all with IIIF in its soul.
Metadata Display Entities are used for:
Archipelago Ships with:
You can find these templates here:
Archipelago (the humans) will keep adding and refining these with every release.
"},{"location":"metadatatwigs/#instructions-and-examples","title":"Instructions and Examples","text":"While a lot of core needs and use cases are covered with the Twig Templates shipped with Archipelago, you may want to add more Input elements to your Webforms, which in turn will generate new JSON Values, which in turn you may want to show/expose to end users.
Knowing (even if you do not plan to) how to edit or create your own Twig templates is important.
format_strawberryfield
can do and what many other possibilities are exposed through our templating system in this guide: Strawberryfield Formatters.Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"modifyingfileextensionsinwebform/","title":"Customizing Webforms (Modifying allowable file extensions)","text":"A guide to walk users through how to modify the Webform Descriptive Metadata
to allow additional file extensions to be ingested into Archipelago. This is the default Webform with Archipelago by following archipelago-deployment.
When creating an Archipelago Digital Object (ADO), on Step 4 of the ingest, Attach Files
, there is a step during the ingest to upload the files associated with your ADO. There will be a section on the Webform outlining the maximum number of files allowed, the maximum file size allowed, and the allowed file extensions that can be uploaded.
Let's say we are creating an ADO with the media type DigitalDocument
and this ADO contains a data set saved as a csv
file, but when we get to Step 4 of the ingest workflow we find that csv
is not an allowed file extension. Fortunately, Archipelago has no restrictions on what file extensions can be uploaded, but some use cases will require a little configuring to fit a specific need. This guide will walk users through the steps to modify the default Webform, Descriptive Metadata
, to allow additional file extensions to be included during an ingest.
Prerequisites for following this guide:
Once logged in as admin
, the first thing we need to do is navigate to the Webforms page so we can edit the Webform Descriptive Metadata.
Click on Manage
, then Structure
and when the page loads, scroll down and click Webforms
.
This is where all of the Webforms inside your Archipelago live. For this guide we're going to edit the Webform Descriptive Metadata
. Go ahead and click Build
under the OPERATIONS
column for Descriptive Metadata
.
Here we see all of the elements in Descriptive Metadata
; Title, Media type, Description, Linked Data elements, etc. The element that we want to edit is Upload Associated Documents
as this is the field you will use to upload pdf
, doc
, rtf
, txt
, etc. files during the ingest workflow. Click on Edit
under the OPERATIONS
column.
A new screen will pop up named Edit Upload Associated Documents element
. This is where you can configure the maximum number of values (under ELEMENT SETTINGS
), the maximum file size and also edit the allowed file extensions for this element, which is what we'll be doing. The latter both exist under FILE SETTINGS
section, highlighted in the screenshot below.
When you scroll down you'll see the Allowed file extensions
field. This is where we will add the csv
file extension. Please note: All file extensions are separated by a space; no ,
or .
between the values.
Once you've added all the file extensions your project needs, scroll down to the bottom of Edit Upload Associated Documents element
and click Save
.
This next step is imperative for saving your changes: scroll to the bottom of your elements list page and click Save elements
in order to persist all changes made.
Woohoo! Now when you are ingesting a DigitalDocument
object, you will be able to add csv
files! \ud83c\udf53
When logged in as an admin, we go to Manage > Structure > Webforms and click on Build
under the OPERATIONS
column of Descriptive Metadata
(shortcut: /admin/structure/webform/manage/descriptive_metadata). Then we click on Upload Associated Documents
to edit the element, scroll down to the Allowed file extensions field and add csv
without .
or ,
separating the values. Click Save
at the bottom of the Edit Upload Associated Documents element
page and then Save elements
at the bottom of the Webform page.
wav
or aiff
file for \"MusicRecording\" or an mov
file for a \\\"Movie\\\"? The steps are virtually the same as what is outlined in this guide! The difference here is that instead of editing Upload Associated Documents
, you will need to edit the field element that is associated with your ADO's media type. For example, with Media type MusicRecording
, you will edit Upload Audio File
, for Movie
, you will edit Videos
.
When editing an element inside Descriptive Metadata
, at the top of the window Edit Upload Associated Documents element
(see Step 3 for a recap on how to get here) there is a tab next to General
titled Conditions
. Inside of Conditions
we have CONDITIONAL LOGIC
which is where the Webform is told which Media type
needs this element to be visible in the Webform. In the example below, we know that the field element Upload Associated Documents
will be visible when DigitalDocument
, Thesis
, or Book
is the selected Media type
.
This is also the place where you can add new logic or delete existing logic by clicking the +
or -
next to the TRIGGER/VALUE
to create new conditionals.
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"ourtake/","title":"Archipelago's Philosophy & Guiding Principles","text":"Archipelago operates under a different concept than the one we all have become used to in recent times. We like to think this is not done by re-inventing the wheel, but by making sure the road is clean, level, and with fewer obstacles than before. We do this by removing some heavy weight from the top, some unneeded ballast, plus, of course, some well positioned innovations to make the ride enjoyable.
We also like to say that Archipelago is like a Metadata Synthesizer (LFO anyone?) and we want to give you all the knobs, parameters, inputs and outputs to make the best out of it. Still, you can make \"music\" by just tapping the keyboard.
To get here we had to come to a full stop first. Look around. Question everything we knew. Research and test (repeat), and then re-architect slowly on new and old assumptions, and especially on new community values.
"},{"location":"ourtake/#whys-and-whats-of-archipelago","title":"Whys and Whats of Archipelago","text":"Because this topic is near and dear to our hearts, we are taking extra care with writing this important document. Please stay tuned for the full, verbose, heartfelt, and detailed long story of Archipelago's origins, development, future hopes and dreams.
In the meantime, please consider reviewing this presentation created by Archipelago's Lead Architect Diego Pino which captures the essence of Archipelago's philosophy and guiding principles:
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"presentations_events/","title":"Archipelago Presentations, Events, and Additional Resources","text":"Important General & Internal Recordings Notes
Please be aware that some of the presentation documents shared above may contain links to older documentation resources that have since changed or are no longer available. We recommend referring to the latest documentation versions available on this site whenever needed.
METRO's Digital Services Team facilitated many different internal training sessions throughout 2020-2022. If you and your team need access to any of these sessions that were recorded, please contact us. Thank you!
"},{"location":"presentations_events/#2023","title":"2023","text":"Archipelago Late 2022 Workshop Series:
McCarthy, B. J. (2022). Archipelago Commons: Using the Archipelago and AMI software to provide access to Rensselaer Polytechnic Institute's engineering drawings, a pilot project. Issues in Science and Technology Librarianship, 101. https://doi.org/10.29173/istl2717
Open Perspectives Forum. Monger, Jenifer J.; McCarthy, Brenden. (November 2022)
Migration, Collaboration and Innovation with Archipelago Commons. Monger, Jenifer J. (September 2022)
\ud83c\udf53 Archipelago 1.0.0 - August 2022 Release Announcement (August 2022) and updated Specs and Features List
Open Repositories June 2022
Formation of the Archipelago Working Group (April 2022)
\ud83c\udf53 Archipelago 1.0.0-RC3 and 1.0.0 Release Announcement - November 2021
AMIA Conference Workshop: Building a Web Archive-Capable Digital Repository with Webrecorder and Archipelago. Kreymer, Ilya; Ramirez-Lopez, Lorena; Dickson, Emma; Pino Navarro, Diego; Sherrick (Lund), Allison. (November 2021)
Solr Importer AMI Migrations, Showcase and Roundtable. Pino Navarro, Diego; Sherrick (Lund), Allison. (July 2021)
IIIF Annual 2021 Conference:
June 2021 Open Repositories Conference:
WebRecorder + Archipelago Workshop. Pino Navarro, Diego; Sherrick (Lund), Allison; Kreymer, Ilya; Ramirez-Lopez, Lorena; Dickson, Emma. (May 2021)
Twig Templates and Archipelago. Pino Navarro, Diego; Sherrick (Lund), Allison. (May 2021)
\ud83c\udf53Archipelago 1.0.0-RC2 Release Announcement (May 2021) and Archipelago RC2 Specs and Features List
Working with Archipelago Multi-Importer (AMI). Pino Navarro, Diego; Sherrick (Lund), Allison. (April 2021)
Archipelago Digital Objects Repository (an) architecture to last. Pino Navarro, Diego. (DrupalCon North America 2021)
Metadata, Schemas and Media in Archipelago. Pino Navarro, Diego; Sherrick (Lund), Allison (February 2021)
Deploying Archipelago 1.0.0-RC1. Pino Navarro, Diego; Sherrick (Lund), Allison. (February 2021)
\ud83c\udf53 Archipelago 1.0.0-RC1 Release Announcement (December 2020)
Webforms in Archipelago. Pino Navarro, Diego; Sherrick (Lund), Allison; Palmentiero, Jennifer. (December 2020)
IIIF and Archipelago - Community Call. Pino Navarro, Diego. (October 2020)
Archipelago : an empathic Digital Repository Architecture (September 2020)
\ud83c\udf53 Archipelago 8.x-1.0-beta3 Release Announcement (July 2020)
If you have a public Archipelago presentation, recording, or other resource you'd like to share on this page \ud83c\udfdd\ufe0f\ud83d\udccd, please contact us. We would love to add your great work to this list! \ud83d\udc9a
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"search_advanced/","title":"Advanced Search","text":"This page is under construction. Please stay tuned for further updates and thank you for your patience as we continue to brew up more documentation.
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
","tags":["Search","Search API","Solr","Solr Index","Facets","Strawberry Key Name Providers"]},{"location":"search_solr_index/","title":"Search and Solr Overview","text":"This page is under construction. Please stay tuned for further updates and thank you for your patience as we continue to brew up more documentation.
","tags":["Search","Search API","Solr","Solr Index","Facets","Strawberry Key Name Providers","OCR","HOCR"]},{"location":"search_solr_index/#preamble-prerequisites","title":"Preamble + prerequisites","text":"Before diving into any Search and Solr Configuration, please review our Metadata in Archipelago overview documentation, which provides important context for understanding how the shape of your Archipelago Digital Objects/Collections (ADOs) metadata will inform your Search and Solr options and outcomes.
","tags":["Search","Search API","Solr","Solr Index","Facets","Strawberry Key Name Providers","OCR","HOCR"]},{"location":"search_solr_index/#archipelago-and-solr","title":"Archipelago and Solr","text":"Archipelago's latest Release (1.1.0) uses Apache Solr 9.1, which incorporates some major improvements and changes from Solr 8. Please refer to the [primary Solr documentation]((https://solr.apache.org/guide/solr/9_1/index.html) for the most comprehensive and in-depth information about Solr's wide breadth of functionality and configuration options.
","tags":["Search","Search API","Solr","Solr Index","Facets","Strawberry Key Name Providers","OCR","HOCR"]},{"location":"search_solr_index/#instructions-and-guides","title":"Instructions and Guides","text":"Archipelago uses solr-ocrhighlighting v0.8.4, built by the Development Team at the Bavarian State Library.
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
","tags":["Search","Search API","Solr","Solr Index","Facets","Strawberry Key Name Providers","OCR","HOCR"]},{"location":"security_bots/","title":"Managing Bots","text":"A public-facing production instance will likely encounter bad bots and other malicious traffic that will consume resources. There are many solutions available that address a variety of different needs, but we provide basic configurations and a Docker image for integrating the NGINX Ultimate Bad Bot & Referrer Blocker.
Warning
Before proceeding, please be sure to familiarize yourself with the NGINX Ultimate Bad Bot & Referrer Blocker README.
","tags":["Security","Bots"]},{"location":"security_bots/#deployment","title":"Deployment","text":"MSMTP_ACCOUNT=SMTP_ACCOUNT_NAME\nMSMTP_EMAIL=repositorysupport@metro.org\nMSMTP_HOST=smtp.metro.org\nMSMTP_PASSWORD=YOUR_SMTP_PASSWORD\nMSMTP_PORT=SMTP_PORT\nMSMTP_STARTTLS=on\nNGXBLOCKER_ENABLE=false\nNGXBLOCKER_CRON=00 22 * * *\nNGXBLOCKER_CRON_COMMAND=/usr/local/sbin/update-ngxblocker -x\nNGXBLOCKER_CRON_START=false\n
# Run docker-compose up -d\n# Docker file for Arm64 and Apple M1 machines\nversion: '3.5'\nservices:\n web:\n container_name: esmero-web\n # image: jonasal/nginx-certbot\n image: esmero/nginx-bot-blocker:1.1.0-multiarch\n restart: always\n environment:\n CERTBOT_EMAIL: ${ARCHIPELAGO_EMAIL}\n ENVSUBST_VARS: FQDN\n FQDN: ${ARCHIPELAGO_DOMAIN}\n NGINX_ENVSUBST_OUTPUT_DIR: /etc/nginx/user_conf.d\n MSMTP_ACCOUNT: ${MSMTP_ACCOUNT}\n MSMTP_EMAIL: ${MSMTP_EMAIL}\n MSMTP_HOST: ${MSMTP_HOST}\n MSMTP_PASSWORD: ${MSMTP_PASSWORD}\n MSMTP_PORT: ${MSMTP_PORT}\n MSMTP_STARTTLS: ${MSMTP_STARTTLS}\n NGXBLOCKER_CRON: ${NGXBLOCKER_CRON}\n NGXBLOCKER_CRON_COMMAND: ${NGXBLOCKER_CRON_COMMAND}\n NGXBLOCKER_CRON_START: ${NGXBLOCKER_CRON_START}\n NGXBLOCKER_ENABLE: ${NGXBLOCKER_ENABLE}\n ports:\n - \"80:80\"\n - \"443:443\"\n volumes:\n - ${ARCHIPELAGO_ROOT}/config_storage/nginxconfig/template:/etc/nginx/templates\n - ${ARCHIPELAGO_ROOT}/drupal:/var/www/html:cached\n - ${ARCHIPELAGO_ROOT}/data_storage/ngnixcache:/var/cache/nginx\n - ${ARCHIPELAGO_ROOT}/data_storage/letsencrypt:/etc/letsencrypt\n - ${ARCHIPELAGO_ROOT}/config_storage/nginxconfig/bots.d:/etc/nginx/bots.d\n
First pull the new image:
docker compose pull\n
Note
If using an older version of docker, don't forget the hyphen:
docker-compose pull\n
Now bring the Docker ensemble down and up again:
docker compose down && docker compose up -d\n
Note
If using an older version of docker, don't forget the hyphen:
docker-compose down && docker-compose up -d\n
Run the install script for the bot blocker in the default dry run mode and review the output:
docker exec -ti esmero-web bash -c \"/usr/local/sbin/install-ngxblocker\"\n
docker exec -ti esmero-web bash -c \"/usr/local/sbin/install-ngxblocker -x\"\n
docker exec -ti esmero-web bash -c \"/usr/local/sbin/setup-ngxblocker -v /etc/nginx/templates -e .copy\"\n
docker exec -ti esmero-web bash -c \"/usr/local/sbin/setup-ngxblocker -v /etc/nginx/templates -e .copy -x\"\n
Enable the bot blocker and cron (if applicable): .env
MSMTP_ACCOUNT=SMTP_ACCOUNT_NAME\nMSMTP_EMAIL=repositorysupport@metro.org\nMSMTP_HOST=smtp.metro.org\nMSMTP_PASSWORD=YOUR_SMTP_PASSWORD\nMSMTP_PORT=SMTP_PORT\nMSMTP_STARTTLS=on\nNGXBLOCKER_ENABLE=true\nNGXBLOCKER_CRON=00 22 * * *\nNGXBLOCKER_CRON_COMMAND=/usr/local/sbin/update-ngxblocker -x\nNGXBLOCKER_CRON_START=true\n
Note
If MSMTP_EMAIL
is blank and cron is enabled the flag for sending email notifications will be skipped.
Bring the Docker ensemble down and back up again:
docker compose down && docker compose up -d\n
Note
If using an older version of docker, don't forget the hyphen:
docker-compose down && docker-compose up -d\n
Test that it is working by following the \"TESTING\" section (STEP 10) in the official documentation: https://github.com/mitchellkrogza/nginx-ultimate-bad-bot-blocker
If looking to use this solution as part of an upgrade (from 1.0.0 to 1.1.0, for example) we recommend coming back to the above steps after successfully completing the upgrade. After the upgrade, you will only need to add the environment variables and docker compose configurations and follow the steps as detailed above.
","tags":["Security","Bots"]},{"location":"security_bots/#advanced-configuration","title":"Advanced Configuration","text":"Because our Docker containers only persist our mounted files and folders, any advanced configurations may require overriding the files generated by our esmero-web container on boot. For example, the above setup-ngxblocker
script is normally responsible for writing the following include lines:
include /etc/nginx/bots.d/blockbots.conf;\ninclude /etc/nginx/bots.d/ddos.conf;\n
Because the script is unable to place them in the correct part of our nginx.conf.template
file, which in turn generates our nginx.conf
file (see Using environment variables in nginx configuration), our own script adds (when NGINXBLOCKER_ENABLE=true
) or removes (when NGINXBLOCKER_ENABLE=false
) the lines to an empty file, which in turn is statically included in our main nginx.conf.template
file. One option provided by setup-ngxblocker
is to exclude (-d
) the DDOS rule. In our case, we need to manually override the lines in our template file to reproduce this behavior:
Example
nginx.conf.templateupstream cantaloupe {\n server esmero-cantaloupe:8182;\n}\n\nserver {\n listen 443 ssl;\n server_name ${FQDN};\n ssl_certificate /etc/letsencrypt/live/${FQDN}/fullchain.pem;\n ssl_certificate_key /etc/letsencrypt/live/${FQDN}/privkey.pem;\n client_max_body_size 1536M; ## Match with PHP from FPM container\n\n root /var/www/html/web; ## <-- Your only path reference.\n\n fastcgi_send_timeout 120s;\n fastcgi_read_timeout 120s;\n fastcgi_pass_request_headers on;\n\n fastcgi_buffers 16 16k;\n fastcgi_buffer_size 32k;\n\n # Please adapt to your needs\n proxy_buffers 16 16k; \n proxy_buffer_size 16k;\n\n #include /etc/nginx/conf.d/bots.include;\n include /etc/nginx/bots.d/blockbots.conf;\n
Note
Keep in mind that from this point, when disabling/enabling the bot blocker via the environment variable, you'll also need to uncomment/comment the added line.
Another more generally applicable approach is to override files that are part of the docker image:
Example
Our bash script (setup_bot_blocker.sh) is triggered by and runs just before the startup script (start_nginx_certbot.sh) for the NGINX Certbot image. For any advanced needs involving custom startup behavior, our script can be modified and overwridden:
docker cp esmero-web:/scripts/setup_bot_blocker.sh drupal/scripts/archipelago/\n
# Run docker-compose up -d\n# Docker file for Arm64 and Apple M1 machines\nversion: '3.5'\nservices:\n web:\n container_name: esmero-web\n # image: jonasal/nginx-certbot\n image: esmero/nginx-bot-blocker:1.1.0-multiarch\n restart: always\n environment:\n CERTBOT_EMAIL: ${ARCHIPELAGO_EMAIL}\n ENVSUBST_VARS: FQDN\n FQDN: ${ARCHIPELAGO_DOMAIN}\n NGINX_ENVSUBST_OUTPUT_DIR: /etc/nginx/user_conf.d\n MSMTP_ACCOUNT: ${MSMTP_ACCOUNT}\n MSMTP_EMAIL: ${MSMTP_EMAIL}\n MSMTP_HOST: ${MSMTP_HOST}\n MSMTP_PASSWORD: ${MSMTP_PASSWORD}\n MSMTP_PORT: ${MSMTP_PORT}\n MSMTP_STARTTLS: ${MSMTP_STARTTLS}\n NGXBLOCKER_CRON: ${NGXBLOCKER_CRON}\n NGXBLOCKER_CRON_COMMAND: ${NGXBLOCKER_CRON_COMMAND}\n NGXBLOCKER_CRON_START: ${NGXBLOCKER_CRON_START}\n NGXBLOCKER_ENABLE: ${NGXBLOCKER_ENABLE}\n ports:\n - \"80:80\"\n - \"443:443\"\n volumes:\n - ${ARCHIPELAGO_ROOT}/config_storage/nginxconfig/template:/etc/nginx/templates\n - ${ARCHIPELAGO_ROOT}/drupal:/var/www/html:cached\n - ${ARCHIPELAGO_ROOT}/data_storage/ngnixcache:/var/cache/nginx\n - ${ARCHIPELAGO_ROOT}/data_storage/letsencrypt:/etc/letsencrypt\n - ${ARCHIPELAGO_ROOT}/config_storage/nginxconfig/bots.d:/etc/nginx/bots.d\n - ${ARCHIPELAGO_ROOT}/drupal/scripts/archipelago/setup_bot_blocker.sh:/scripts/setup_bot_blocker.sh\n
#!/bin/bash\n\nset -e\n\nif [ ! -z \"${MSMTP_EMAIL}\" ]; then\n envsubst < /root/.msmtprc.template > /root/.msmtprc\nfi\n\nif [ \"${NGXBLOCKER_CRON_START}\" = true ]; then\n if [ ! -z \"${MSMTP_EMAIL}\" ]; then\n CRON_COMMAND=\"${NGXBLOCKER_CRON} ${NGXBLOCKER_CRON_COMMAND} -e ${MSMTP_EMAIL}\"\n else\n CRON_COMMAND=\"${NGXBLOCKER_CRON} ${NGXBLOCKER_CRON_COMMAND} -n\"\n fi\n echo \"${CRON_COMMAND}\" | crontab - &&\n /etc/init.d/cron start\nfi\n\nif [ ! -f /etc/nginx/templates/bots.include.copy ]; then\n touch /etc/nginx/templates/bots.include.copy\nfi\nif [ ! -f /etc/nginx/templates/bots.include.template ]; then\n touch /etc/nginx/templates/bots.include.template\nfi\n\nif [ \"${NGXBLOCKER_ENABLE}\" = true ]; then\n if [ ! -L /etc/nginx/conf.d/botblocker-nginx-settings.conf ]; then\n ln -s /etc/nginx/bots_settings_conf.d/botblocker-nginx-settings.conf /etc/nginx/conf.d/botblocker-nginx-settings.conf\n fi\n if [ ! -L /etc/nginx/conf.d/globalblacklist.conf ]; then\n ln -s /etc/nginx/bots_settings_conf.d/globalblacklist.conf /etc/nginx/conf.d/globalblacklist.conf\n fi\n if ! grep -q blockbots.conf /etc/nginx/templates/bots.include.copy; then\n echo \"include /etc/nginx/bots.d/blockbots.conf;\" >> /etc/nginx/templates/bots.include.copy\n fi\n #if ! grep -q ddos.conf /etc/nginx/templates/bots.include.copy; then\n # echo \"include /etc/nginx/bots.d/ddos.conf;\" >> /etc/nginx/templates/bots.include.copy\n #fi\n if ! grep -q blockbots.conf /etc/nginx/user_conf.d/bots.include; then\n echo \"include /etc/nginx/bots.d/blockbots.conf;\" >> /etc/nginx/user_conf.d/bots.include\n fi\n #if ! grep -q ddos.conf /etc/nginx/user_conf.d/bots.include; then\n # echo \"include /etc/nginx/bots.d/ddos.conf;\" >> /etc/nginx/user_conf.d/bots.include\n #fi\n cp /etc/nginx/templates/bots.include.copy /etc/nginx/templates/bots.include.template\nelse\n >|/etc/nginx/templates/bots.include.template\n >|/etc/nginx/user_conf.d/bots.include\n if [ -L /etc/nginx/conf.d/botblocker-nginx-settings.conf ]; then\n rm /etc/nginx/conf.d/botblocker-nginx-settings.conf\n fi\n if [ -L /etc/nginx/conf.d/globalblacklist.conf ]; then\n rm /etc/nginx/conf.d/globalblacklist.conf\n fi\nfi\n
include
line from the existing files: docker exec -ti esmero-web bash -c \"sed -i '/include \\/etc\\/nginx\\/bots.d\\/ddos.conf/d' /etc/nginx/templates/bots.include.copy\"\n
docker exec -ti esmero-web bash -c \"sed -i '/include \\/etc\\/nginx\\/bots.d\\/ddos.conf/d' /etc/nginx/templates/bots.include.template\"\n
docker exec -ti esmero-web bash -c \"sed -i '/include \\/etc\\/nginx\\/bots.d\\/ddos.conf/d' /etc/nginx/user_conf.d/bots.include\"\n
Finally we bring the Docker ensemble down and back up again to propagate the changes in our container:
docker compose down && docker compose up -d\n
Note
If using an older version of docker, don't forget the hyphen:
docker-compose down && docker-compose up -d\n
The above is an example of a more complicated customization, but it's a pattern that can be used more generally throughout the Docker containers, i.e.:
- ${ARCHIPELAGO_ROOT}/LOCATION_ON_HOST/CUSTOMIZED_FILE_ON_HOST:/LOCATION_IN_DOCKER_CONTAINER/FILE_IN_DOCKER_CONTAINER\n
Work-In-Progress Note This documentation page is still under construction and content may change with future updates. Please use caution when implementing any instructions referenced herein, as there may be missing steps or corresponding configuration files. Thank you for your patience as we continue to update Archipelago's documentation.
The steps found below describe one potential manual SSL configuration for Archipelago deployments. A git clone
deployment option will be available for future releases.
This process takes less than 10 minutes of reading YML files and editing a few files (described below) to get SSL running and set up with auto-renewal.
First, configure Certbot, following the instructions found on https://certbot.eff.org.
Inside a /persistent partition, establish the following folder structure. Note: you can keep the existing folder structure if you so choose. A benefit of the following structure is that it decouples the git clone of archipelago-deployment, which is made to be self sustainable and good for coding or smaller deployments.
[ec2-user@ip-17x-xx-x-xxx persistent]$ ls -lah\ntotal 64K\ndrwxr-xr-x 14 root root 4.0K Oct 5 23:11 .\ndr-xr-xr-x 19 root root 275 Dec 15 2019 ..\ndrwxr-xr-x 8 999 999 4096 Oct 13 20:07 db\ndrwxr-xr-x 13 root root 4.0K Oct 5 23:03 drupal8\ndrwxr-xr-x 5 8183 8183 4.0K Feb 23 2020 iiifcache\ndrwxr-xr-x 2 root root 4.0K Feb 23 2020 iiifconfig\ndrwxr-xr-x 4 root root 4.0K Oct 5 22:45 nginx_conf\ndrwxr-xr-x 3 root root 4.0K Feb 26 2019 solrconfig\ndrwxr-xr-x 3 8983 8983 4.0K Feb 26 2019 solrcore\n
To get to this point, create a git clone of archipelago deployment and then copy the content of the /persistent out of the repo folder into this structure. The original (or what is left) archipelago-deployment ends inside a drupal8 folder here.
Copy and paste the following to create a local copy of this file:
docker-compose.yml**Be sure to replace youremail@gmail.com with your email address.
version: '3.5'\n services:\n web:\n container_name: esmero-web\n image: staticfloat/nginx-certbot\n restart: always\n environment:\n CERTBOT_EMAIL: \"youremail@gmail.com\"\n ports:\n - \"80:80\"\n - \"443:443\"\n volumes:\n - /persistent/nginx_conf/conf.d:/etc/nginx/user.conf.d:ro\n - /persistent/nginx_conf/certbot_extra_domains:/etc/nginx/certbot/extra_domains:ro\n - /persistent/drupal8:/var/www/html:cached\n depends_on:\n - solr\n - php\n tty: true\n networks:\n - host-net\n - esmero-net\n php:\n container_name: esmero-php\n restart: always\n image: \"esmero/php-7.3-fpm:latest\"\n tty: true\n networks:\n - host-net\n - esmero-net\n volumes:\n - ${PWD}:/var/www/html:cached\n solr:\n container_name: esmero-solr\n restart: always\n image: \"solr:7.5.0\"\n tty: true\n ports:\n - \"8983:8983\"\n networks:\n - host-net\n - esmero-net\n volumes:\n - /persistent/solrcore:/opt/solr/server/solr/mycores:cached\n - /persistent/solrconfig:/drupalconfig:cached\n entrypoint:\n - docker-entrypoint.sh\n - solr-precreate\n - drupal\n - /drupalconfig\n # see https://hub.docker.com/_/mysql/\n db:\n image: mysql:5.7\n command: --max_allowed_packet=256M\n container_name: esmero-db\n restart: always\n environment:\n MYSQL_ROOT_PASSWORD: esmerodb\n networks:\n - host-net\n - esmero-net\n volumes:\n - /persistent/db:/var/lib/mysql:cached\n iiif:\n container_name: esmero-cantaloupe\n image: \"esmero/cantaloupe-s3:4.1.6\"\n restart: always\n ports:\n - \"8183:8182\"\n networks:\n - host-net\n - esmero-net\n volumes:\n - /persistent/iiifconfig:/etc/cantaloupe\n - /persistent/iiifcache:/var/cache/cantaloupe\n networks:\n host-net:\n driver: bridge\n esmero-net:\n driver: bridge\n internal: true\n
Note: This file shows how the folders in Step 1 are being used, and how SSL is being automatically deployed and renewed (without any human interaction other than starting the docker-compose and watching the logs).
Now copy and paste the following to create a local copy of this file:
ngnix.conf**Be sure to replace all instances of yoursite.org with your own domain.
# goes into /persistent/nginx_conf/conf.d/nginx.conf\n upstream cantaloupe {\n server esmero-cantaloupe:8182;\n }\n\n server {\n listen 443 ssl;\n server_name yoursite.org;\n ssl_certificate /etc/letsencrypt/live/yourstie.org/fullchain.pem;\n ssl_certificate_key /etc/letsencrypt/live/yoursite.org/privkey.pem;\n\n client_max_body_size 512M; ## Match with PHP from FPM container\n\n root /var/www/html/web; ## <-- Your only path reference.\n\n fastcgi_send_timeout 120s;\n fastcgi_read_timeout 120s;\n fastcgi_pass_request_headers on;\n\n fastcgi_buffers 16 16k;\n fastcgi_buffer_size 32k;\n\n # Cantaloupe proxypass\n location /cantaloupe/ {\n proxy_set_header X-Forwarded-Proto $scheme;\n proxy_set_header X-Forwarded-Host $host;\n proxy_set_header X-Forwarded-Port $server_port;\n proxy_set_header X-Forwarded-Path /cantaloupe/;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n if ($request_uri ~* \"/cantaloupe/(.*)\") {\n proxy_pass http://cantaloupe/$1;\n }\n }\n\n location = /favicon.ico {\n log_not_found off;\n access_log off;\n }\n\n location = /robots.txt {\n allow all;\n log_not_found off;\n access_log off;\n }\n\n # Very rarely should these ever be accessed outside of your lan\n location ~* \\.(txt|log)$ {\n deny all;\n }\n\n location ~ \\..*/.*\\.php$ {\n return 403;\n }\n\n location ~ ^/sites/.*/private/ {\n return 403;\n }\n\n # Allow \"Well-Known URIs\" as per RFC 5785\n location ~* ^/.well-known/ {\n allow all;\n }\n\n # Block access to \"hidden\" files and directories whose names begin with a\n # period. This includes directories used by version control systems such\n # as Subversion or Git to store control files.\n location ~ (^|/)\\. {\n return 403;\n }\n\n location / {\n try_files $uri /index.php?$query_string; # For Drupal >= 7\n }\n\n location @rewrite {\n rewrite ^/(.*)$ /index.php?q=$1;\n }\n\n # Don't allow direct access to PHP files in the vendor directory.\n location ~ /vendor/.*\\.php$ {\n deny all;\n return 404;\n }\n\n # Allow Modules to be updated via UI (still we believe composer is the way) \n rewrite ^/core/authorize.php/core/authorize.php(.*)$ /core/authorize.php$1;\n\n # In Drupal 8, we must also match new paths where the '.php' appears in\n # the middle, such as update.php/selection. The rule we use is strict,\n # and only allows this pattern with the update.php front controller.\n # This allows legacy path aliases in the form of\n # blog/index.php/legacy-path to continue to route to Drupal nodes. If\n # you do not have any paths like that, then you might prefer to use a\n # laxer rule, such as:\n # location ~ \\.php(/|$) {\n # The laxer rule will continue to work if Drupal uses this new URL\n # pattern with front controllers other than update.php in a future\n # release.\n location ~ '\\.php$|^/update.php' {\n fastcgi_split_path_info ^(.+?\\.php)(|/.*)$;\n include fastcgi_params;\n # Block httpoxy attacks. See https://httpoxy.org/.\n fastcgi_param HTTP_PROXY \"\";\n fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;\n fastcgi_param PATH_INFO $fastcgi_path_info;\n fastcgi_param PHP_VALUE \"upload_max_filesize=512M \\n post_max_size=512M\";\n proxy_read_timeout 900s;\n fastcgi_intercept_errors on;\n fastcgi_pass esmero-php:9000;\n }\n\n # Fighting with Styles? This little gem is amazing.\n location ~ ^/sites/.*/files/styles/ { # For Drupal >= 7\n try_files $uri @rewrite;\n }\n\n # Handle private files through Drupal.\n location ~ ^/system/files/ { # For Drupal >= 7\n try_files $uri /index.php?$query_string;\n }\n}\n
Create the following folder:
/persistent/nginx_conf/conf.d/\n
Place the ngnix.conf file inside the /conf.d/
folder.
Create also this other folder:
/persistent/nginx_conf/certbot_extra_domains/\n
Inside the /certbot_extra_domains/
folder, create a text file named the same way as your domain (which can/or not contain additional subdomains but needs to exist).
cat /persistent/nginx_conf/certbot_extra_domains/yoursite.org\n
drwxr-xr-x 2 root root 4.0K Oct 5 22:46 .\ndrwxr-xr-x 4 root root 4.0K Oct 5 22:45 ..\n-rw-r--r-- 1 root root 48 Oct 5 22:46 yoursite.org\n
Optionally, create additional subdomains if needed.
cat /persistent/nginx_conf/certbot_extra_domains/yoursite.org\nsubdomain.yoursite.org\nanothersub.yoursite.org\n
Make sure you have edited the docker-compose.yml
and ngnix.conf
files you created to match your own information. Also make sure to also adjust the paths if you do not want the /persistent approach described in Step 1.
Run the following commands:
docker -compose up -d\ndocker ps\n
You should see this:
b5a04747ee06 staticfloat/nginx-certbot \"/bin/bash /scripts/\u2026\" 8 days ago Up 8 days 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp esmero-web\n84afae094b57 esmero/php-7.3-fpm:latest \"docker-php-entrypoi\u2026\" 8 days ago Up 8 days 9000/tcp esmero-php\n13a9214acfd0 esmero/cantaloupe-s3:4.1.6 \"sh -c 'java -Dcanta\u2026\" 8 days ago Up 8 days 0.0.0.0:8183->8182/tcp esmero-cantaloupe\n044dd5bc7245 mysql:5.7 \"docker-entrypoint.s\u2026\" 8 days ago Up 8 days 3306/tcp, 33060/tcp esmero-db\n31f4f0f45acc solr:7.5.0 \"docker-entrypoint.s\u2026\" 8 days ago Up 8 days 0.0.0.0:8983->8983/tcp esmero-solr\n
SSL has now been configured for your Archipelago instance.
Adding SSL to Archipelago running docker by Zachary Spalding: https://youtu.be/rfH5TLzIRIQ
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"strawberry_key_name_providers/","title":"Strawberry Key Name Providers, Solr Field, and Facet Configuration","text":"For an overview of how Strawberry Key Name Providers fit within the context of the rest of Archipelago, please see the Drupal and JSON section in our Metadata in Archipelago overview documentation.
In order to expose the Strawberry Field JSON keys (and values) for Archipelago Digital Objects (ADOs) to Search/Solr, Views, and Facets, we need to make use of a plugin system called Strawberry Key Name Providers. The following guide covers - Configuring first the Strawberry Key Name Providers - Then configuring the corresponding Solr Fields necessary for Search and Views exposure - Finally, the configuration of Facets and placement of Facet blocks on your theme as needed.
","tags":["Strawberry Key Name Providers","Solr","Facets","Search"]},{"location":"strawberry_key_name_providers/#creating-a-strawberry-key-name-provider","title":"Creating a Strawberry Key Name Provider","text":"First, we'll start with an example of a Strawberry Field JSON key that we would like to expose:
date_created_edtf
...\n\"subject_wikidata\": [\n {\n \"uri\": \"http:\\/\\/www.wikidata.org\\/entity\\/Q55488\",\n \"label\": \"railway station\"\n }\n],\n\"date_created_edtf\": {\n \"date_to\": \"\",\n \"date_free\": \"2016~\\/2017~\",\n \"date_from\": \"\",\n \"date_type\": \"date_free\"\n},\n\"date_created_free\": null,\n...\n
Next, we are going to create a new Strawberry Key Name Provider by going to Administration > Structure > Strawberry Key Name Providers
, pressing the + Add Strawberry Key Name Provider
button, filling in the fields as follows, and saving:
Label
: Date Created EDTF
Strawberry Key Name Provider Plugin
: JmesPath Strawberry Field Key Name Provider
One or more comma separated valid JMESPaths
: date_created_edtf.date_free
Exposed Strawberry Field Property
(under the One or more comma separated valid JMESPaths
field) is set to date_created_edtf_date_free
. This is the Strawberry Field Property
that will hold the data coming from the JMESPath Query when evaluated against and ADO's JSON and will be visible as a Strawberry Field Property to Drupal and the Search API. When doing this in a production environment, you might want to change the automatically generated value and assign a simpler one to remember. You can always do this by pressing Edit
. But for the purpose of this documentation please keep date_created_edtf_date_free
.Is Date?
: \u2611
You'll notice that there are four plugins, each with different options, available for different use cases. Below you'll find each plugin with examples from the providers that come with a default deployment.
","tags":["Strawberry Key Name Providers","Solr","Facets","Search"]},{"location":"strawberry_key_name_providers/#entity-reference-jmespath-strawberry-field-key-name-provider","title":"Entity Reference JmesPath Strawberry Field Key Name Provider","text":"ismemberof
One or more comma separated valid JMESPaths: ismemberof
Entity type: node
hoCR Service
Source JSON Key used to read the Service/Flavour: ap:hocr
Subject Labels
One or more comma separated valid JMESPaths: subject_loc[*].label, subject_wikidata[*].label, subject_lcnaf_geographic_names[*].label,subject_temporal[*].label, subject_lcgft_terms[*].label, term_aat_getty[*].label, pubmed_mesh[*].label
Best Practice
As in the example below, if there are a group of flat and unique keys that you want to expose, we recommend creating one provider with this plugin and using a list of keys instead of creating multiple providers. This Provider will also auto assign Lists of Properties from an external JSON-LD ontology/vocabulary (e.g Schema.org). It uses direct access approach, e.g. type
will get all values for any JSON Key named type
at any hierarchy level (across the whole JSON document) and it will also use the same exact name (type
) for the Exposed Strawberry Field Property
.
schema.org
Additional keys separated by commas: ismemberof,type,hocr,city,category,country,state,display_name,author,license
Administration > Configuration > Search and metadata > Search API > Drupal Content to Solr 8 > Fields
.Add fields
button.\ud83c\udf53 Strawberry (Descriptive Metadata source) (field_descriptive_metadata)
, e.g. for the key mapped above, look for field_descriptive_metadata:date_created_edtf_date_free
.Type
for the field is correct (date
for the example in this guide).Administration > Configuration > Search and metadata > Search API
and click on the link to the index for your Drupal data.Queue all items for reindexing
button.Index now
button.Administration > Configuration > Search and metadata > Facets
.+ Add facet
button.Facet source
: View Solr search content, display Page
Field
: \ud83c\udf53 Strawberry (Descriptive Metadata source) >> date_created_edtf_date_free (field_descriptive_metadata:date_created_edtf_date_free)
Name
: \ud83c\udf53 Strawberry (Descriptive Metadata source) >> date_created_edtf_date_free
Edit
for the facet we just created and adjusting the many options available as needed. For the example in this guide, we'll adjust the below from the default settings:Facet settings
\u2611
Date item processor
Date display
\ud83d\udd18
Actual date with granularity
Granularity
\ud83d\udd18
Year
URL alias
: sbf_date_created_edtf
Administration > Structure > Block layout
.Archipelago Base Theme
.Place block
button next to the appropriate region. For the example in this guide, we'll be placing the block in the Sidebar second
region.\ud83c\udf53 Strawberry (Descriptive Metadata source) >> date_created_edtf_date_free
Place block
button next to the facet. Once the block is added, you can drag and drop it to change its position among the existing blocks and saving.Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
","tags":["Strawberry Key Name Providers","Solr","Facets","Search"]},{"location":"strawberryfield-formatters/","title":"Strawberryfield Formatters","text":"This documentation will give a brief overview of Archipelago's Strawberryfield Formatters and how they work using the default View mode Digital Object Full View
as an example.
When taking a look at your First Digital Object note that multiple formatters are working together to create this Display
( or View mode
). Since \"My First Digital Object\" is a Photograph
the Display
being used is Digital Object Full View
which, by default, uses formatters to:
Object Description
and Type of Resource
.When editing an ADO, at the top of the Webform page there is a tab titled Manage display
which will take us to where all the Formatters live. Take note that the DISPLAY SETTINGS
shown in the screenshot below are using the Default View mode.
Once the page loads the Default
View mode is automatically selected. However, because we are editing an object with the Media type
Photograph
, we need to edit the View mode Digital Object Full View
since it is the Default View mode for this Media type
.
The ADO Type to View mode Mapping page tells the ADOs which View mode to use by default per Media type. This page can be accessed at yoursite//admin/config/archipelago/viewmode_mapping
There are two sections in Manage display
for Digital Object Full View
: 1) Content and 2) Disabled. Moving a field into Content means this formatter will be used to the display the ADO in some way. The formatters moved to Disabled are inactive and are subsequently not being used for displaying the ADO.
There are four fields named \ud83c\udf53Strawberry
and each one is a copy of the field \ud83c\udf53Strawberry (Descriptive Metadata source)
. Since the names of the fields do not imply their function, they have been named Strawberry in four different ways (Italiano, Deutsch, Din\u00e9 Bizaad, and English) in order to organize and help users visually remember which field is doing what for the Display
.
Recall My First Digital Object at beginning of this document where there were 3 sections highlighted in Red, Blue, and Green.
\ud83c\udf53Fragola
) there is the Strawberry Field Formatter for IIIF media which takes the image stored in S3 to display the photograph with the image viewer.\ud83c\udf53Erdbeere
) there is the Strawberry Field Formatter for Custom Metadata Templates which displays the raw JSON metadata using configurable Twig templates. In this example, the default Twig template uses the JSON key type
to display the Type of Resource
.\ud83c\udf53Strawberry (Descriptive Metadata)
) there is the Strawberry Default Formatter which is used to display the Raw JSON Metadata.The decision for how your metadata is displayed is totally in your control.
Under the WIDGET
column, there is a quick description/overview of what the formatter is doing.
And by clicking on the gear icon under the OPERATIONS
column, all of the options for configuring the formatter are revealed. To use \ud83c\udf53Fragola
as an example (the Formatter for IIIF media), we can choose which JSON Key is being used to fetch the IIIF Media URLs (found inside the raw JSON being played with Strawberry Default Formatter
), the maximum height and width of the viewer, etc.
And then with \ud83c\udf53Erdbeere
(the Formatter for Custom Metadata Templates) there is the option, among many others, to configure which Twig template the formatter will use for displaying your Metadata.
More information about Managing Metadata Displays with Twig Templates can be found here.
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"strawberryfields/","title":"Strawberryfields Forever","text":""},{"location":"strawberryfields/#what-strawberry-fields-does-why-we-built-it-and-what-issues-it-addresses","title":"What Strawberry fields does, why we built it, and what issues it addresses","text":"Archipelago integrates transparently into the Drupal 8 ecosystem using its Core Content Entity System (Nodes), Discovery (Search API) and in general all its Core Components plus a few well maintained external ones.
By design (and because we think its imperative), Archipelago takes full charge of the metadata layer and associated media assets by implementing a highly configurable, smart Drupal field written in JSON named Strawberryfield
that attaches to any content.
All of JSON's internals, keys, paths, and values are dynamically exposed to the rest of the ecosystem. Strawberryfield even remembers its structure as data evolves by storing JSON paths of every little detail.
"},{"location":"strawberryfields/#nothing-is-real","title":"Nothing Is Real","text":"Archipelago includes additional companion modules, Webform_strawberryfield
and Format_strawberryfield
that extend the core metadata capabilities of the main Strawberryfield
module and allow the same flexibility to be exposed during ingest and viewing of digital objects.
The in-development Strawberry Runners
and AMI
modules further extend Archipelago's capabilities. Additional information related to these modules will be made available following initial public releases.
Webform Strawberryfield
(we had a better name) extends and integrates into the amazing Drupal Webform module
to allow Archipelago users to build any possible metadata and media, ingest and edit, workflows directly via the UI using webforms.
By not having a hardcoded ingest method, Archipelago can be used outside the GLAM community too, as a pure data repository in biological sciences, digital humanities, archives, or even as a mixed, multidisciplinary/cross-domain system.
We also added WIKIDATA
, LoC
, Getty
, and VIAF
authority querying elements to aid in linking to external Linked Open Data sources.
All these integrations are made to help local needs and community identities to survive the never-ending race for the next metadata schema. They are made to prototype, plan, and grow independently of how metadata will need to be exposed yesterday or tomorrow. And we plan to add more.
Explore what other features webform_strawberryfield
provides to help with ingesting, reading, and interacting with your metadata during that process.
Format Strawberryfield
(we had even a better name but...) deals with taking your JSON based metadata and casting
, mashing, mixing, exposing, displaying, and transforming it to allow rich interaction for users and other systems with your digital objects.
In its guts (or heart?), Archipelago does something quite simple but core to our concept of repository: it transforms in realtime the close to your needs open schema metadata that lives in strawberryfield as JSON into close to other one's fixed schema needs metadata; any destination format, using a fast, cached templating system. A templating system that is core to Drupal, called Twig
:
This templating system is exposed to Archipelago users through the UI and stored side by side in the repository as content (we named them Metadata Display entities
, but they not only serve display needs!) so users can fully control how metadata is transformed and published without touching their individual sources.
Templates or recipes can be shared, exported, ingested, updated, and adapted in many ways. Fast changes are possible without having to wait for the next mayor release of Archipelago or your favorited Metadata Schema Specs Committee agreeing on the next or the last version. Of course, this module not only handles metadata but media assets too, extracting local or remote URIs and files from your metadata and rendering them as media viewers: books, 3D models, images, panoramas, A/V with IIIF in its soul.
You can learn more about what format_strawberryfield can do and what many other possibilities are exposed through our templating system.
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"traditional-install/","title":"Traditional install","text":""},{"location":"traditional-install/#traditional-installation-notes","title":"Traditional Installation Notes","text":"For those who prefer classic approaches to system installation and configuration (instead of Dockerized deployment), this page is reserved for notes, recommendations, and guides.
Please stay tuned for additional future updates. Thank you!
Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"twig_extensions/","title":"Twig Extensions","text":"One advantage of Drupal's integration of the Twig template engine is the availability of extensions (filters and functions).
","tags":["Twig","Twig Filters","Twig Functions","Twig Templates","Examples","JSON","Markdown","HTML"]},{"location":"twig_extensions/#default-twig-extensions-from-symfony","title":"Default Twig Extensions from Symfony","text":"The Symfony PHP framework, which is integrated into Drupal Core, provides extensions, which we use in our default templates:
Additionally, we have some very handy Drupal-specific extensions:
Finally, we have a growing list of extensions that apply to our own specific use cases:
","tags":["Twig","Twig Filters","Twig Functions","Twig Templates","Examples","JSON","Markdown","HTML"]},{"location":"twig_extensions/#twig-filters-from-archipelago","title":"Twig Filters from Archipelago","text":"edtf_2_human_date
The edtf_2_human_date
filter takes an EDTF date and an optional language code (defaults to English), and converts it to a human-readable format using the EDTF PHP library. The list of language codes is available here.
Let's start with the following metadata fragment: Metadata Fragment
...\n\"subject_wikidata\": \"\",\n\"date_created_edtf\": {\n \"date_to\": \"\",\n \"date_free\": \"~1899\",\n \"date_from\": \"\",\n \"date_type\": \"date_free\"\n},\n\"date_created_free\": null,\n...\n
Then we pass the date_free
field through the trim
filter (as a precaution, in case there's any accidental whitespace), and then we finally hand off the field to our edtf_2_human_date
filter: edtf_2_human_date
{{ data.date_created_edtf.date_free|trim|edtf_2_human_date('en') }}\n\n{# Output: Circa 1899 #}\n
html_2_markdown
The html_2_markdown
filter, as the name suggests, converts HTML to Markdown.
We start with this string of HTML: HTML string
{% set html_string = \"\n <ul>\n <li>One thing</li>\n <li>Another thing</li>\n <li>The last thing</li>\n </ul>\n\" %}\n
Then we pass it to the filter: html_2_markdown
{{ html_string | html_2_markdown }}\n\n{# Output:\n - One thing\n - Another thing\n - The last thing\n#}\n
markdown_2_html
The markdown_2_html
filter, as the name suggests, is the reverse of the above and converts Markdown to HTML.
We start with this string of Markdown: Markdown string
{% set markdown_string = \"\n - One thing\n - Another thing\n - The last thing\n\" %}\n
Then we pass it to the filter: markdown_2_html
{{ markdown_string | markdown_2_html }}\n\n{# Output:\n <ul>\n <li>One thing</li>\n <li>Another thing</li>\n <li>The last thing</li>\n </ul>\n#}\n
sbf_json_decode
The sbf_json_decode
filter decodes a JSON-encoded string.
We start with this string of JSON string: JSON string
{% set json_string = \"\n {\n \\\"date_to\\\": \\\"\\\",\n \\\"date_free\\\": \\\"~1899\\\",\n \\\"date_from\\\": \\\"\\\",\n \\\"date_type\\\": \\\"date_free\\\"\n }\n\" %}\n
Then we pass it to the filter: sbf_json_decode
{% json_decoded = json_string | sbf_json_decode %}\n\n{{ json_decoded.date_free }}\n{# Output:\n ~1899\n#}\n
","tags":["Twig","Twig Filters","Twig Functions","Twig Templates","Examples","JSON","Markdown","HTML"]},{"location":"twig_extensions/#twig-functions-from-archipelago","title":"Twig Functions from Archipelago","text":"clipboard_copy
The clipboard_copy
function, using the clipboard-copy-element library, takes a provided CSS class for the element(s) whose text we'd like to copy, and targets the CSS class of an existing HTML element on the page or generates an HTML element that can be clicked to copy the text to the user's clipboard.
Usage
clipboard_copy usage{{ clipboard_copy('CSS CLASS','OPTIONAL CSS CLASS(ES)','OPTIONAL TEXT') }}\n
This function takes three arguments:
clipboard-copy-button
) or classes (space-separated) for the copy button if auto-generating or a single, unique class if using your own existing button(s) Copy to Clipboard
) for the copy button if auto-generatingIn the examples below, we want users to be able to copy the text from three different kinds of HTML elements: a div
, an input
, and an a
hyperlink href.
First we start by giving the div element(s) we'd like to copy a unique class:
div element text<div class=\"csl-bib-body-container chicago-fullnote-bibliography\">\n <div id=\"copy-csl\" class=\"csl-bib-body\">\n <div class=\"csl-entry\">\n New York Botanical Garden. \u201cDescriptive Guide to the Grounds, Buildings and Collections.\u201d\n </div>\n </div>\n</div>\n
Then we pass the class to the function:
clipboard_copy for div element text{{ clipboard_copy('csl-bib-body','','Copy Bibliography Entry') }}\n
Note
The class can be attached to parent elements of the element we are ultimately targeting if needed, but any intermediate characters may get caught up in the copied text.
Or to give the generated button multiple classes (in case they need additional styling):
clipboard_copy for div element text{{ clipboard_copy('csl-bib-body','custom custom-button','Copy Bibliography Entry') }}\n
The result for the above div
example looks as follows:
The following is the HTML for the auto-generated button with no provided CSS class:
<button class=\"clipboard-copy-button\">\n <clipboard-copy for=\"copy-csl\" tabindex=\"0\" role=\"button\">Copy Bibliography Entry</clipboard-copy>\n</button>\n
And the following is HTML for the auto-generated button with multiple CSS classes provided:
<button class=\"custom custom-button\">\n <clipboard-copy for=\"copy-csl\" tabindex=\"0\" role=\"button\">Copy Bibliography Entry</clipboard-copy>\n</button>\n
Note
The clipboard-copy-element library requires an element ID. If the element being copied does not have an ID, one will automatically generated and assigned.
Copying input element value with auto-generated buttonFirst we start by giving the input element(s) we'd like to copy a unique class:
input element value{% if attribute(data, 'as:image')|length > 0 or attribute(data, 'as:document')|length > 0 %}\n <h2>\n <span class=\"align-middle\">Direct Link to Digital Object's IIIF Presentation Manifest V3 </span>\n <img src=\"https://iiif.io/img/logo-iiif-34x30.png\">\n </h2>\n {% set iiifmanifest = nodeurl|render ~ \"/metadata/iiifmanifest/default.jsonld\" %}\n <input type=\"text\" value=\"{{ iiifmanifest }}\" id=\"iiifmanifest_copy\" size=\"{{ iiifmanifest|length }}\" class=\"col-xs-3 copy-content\">\n{% endif %}\n
Then we pass the class to the function:
clipboard_copy for input element value{{ clipboard_copy('copy-content','',\"Copy Link to Digital Object's IIIF Presentation Manifest V3\") }}\n
Or to give the generated button multiple classes (in case they need additional styling):
clipboard_copy for input element text{{ clipboard_copy('copy-content','custom custom-button',\"Copy Link to Digital Object's IIIF Presentation Manifest V3\") }}\n
The result for the above input
example looks as follows:
The following is the HTML for the auto-generated button with no provided CSS class:
<button class=\"clipboard-copy-button\">\n <clipboard-copy for=\"iiifmanifest_copy\" tabindex=\"0\" role=\"button\">Copy Link to Digital Object's IIIF Presentation Manifest V3</clipboard-copy>\n</button>\n
And the following is HTML for the auto-generated button with multiple CSS classes provided:
<button class=\"custom custom-button\">\n <clipboard-copy for=\"iiifmanifest_copy\" tabindex=\"0\" role=\"button\">Copy Link to Digital Object's IIIF Presentation Manifest V3</clipboard-copy>\n</button>\n
Note
The clipboard-copy-element library requires an element ID. If the element being copied does not have an ID, one will automatically generated and assigned.
Copying anchor element hyperlink href with auto-generated buttonFirst we start by giving the a
element(s) we'd like to copy a unique class:
<a id=\"copy-documentation-id\" class=\"copy-documentation-class row\" href=\"https://docs.archipelago.nyc\">Archipelago Documentation</a>\n
Then we pass the class to the function:
clipboard_copy for anchor element hyperlink href{{ clipboard_copy('copy-documentation-class','',\"Copy Link to Documentation\") }}\n
Or to give the generated button multiple classes (in case they need additional styling):
clipboard_copy for anchor element text{{ clipboard_copy('copy-documentation-class','custom custom-button',\"Copy Link to Documentation\") }}\n
The result for the above anchor example looks as follows:
The following is the HTML for the auto-generated button with no provided CSS class:
<button class=\"clipboard-copy-button\">\n <clipboard-copy for=\"copy-documentation-id\" tabindex=\"0\" role=\"button\">Copy Link to Documentation</clipboard-copy>\n</button>\n
And the following is HTML for the auto-generated button with multiple CSS classes provided:
<button class=\"custom custom-button\">\n <clipboard-copy for=\"copy-documentation-id\" tabindex=\"0\" role=\"button\">Copy Link to Documentation</clipboard-copy>\n</button>\n
Note
The clipboard-copy-element library requires an element ID. If the element being copied does not have an ID, one will automatically generated and assigned.
The above examples automatically generate copy
buttons. They can be styled, but if we need more control over the button placement and styling, we can use our own button(s) by ensuring that they meet the following requirements:
<copy-clipboard>
element (this can be hidden) with a for
attribute, whose value is the ID of the source element, attached to the element acting as the button.First we start by giving the div element(s) we'd like to copy a unique class:
div element text<div class=\"csl-bib-body-container chicago-fullnote-bibliography\">\n <div id=\"copy-csl\" class=\"csl-bib-body\">\n <div class=\"csl-entry\">\n New York Botanical Garden. \u201cDescriptive Guide to the Grounds, Buildings and Collections.\u201d\n </div>\n </div>\n</div>\n
Then we generate the button and pass the class to the function:
clipboard_copy custom button for div element text<button class=\"custom-button btn btn-primary btn-sm\">\n <clipboard-copy for=\"copy-csl\">Copy Text</clipboard-copy>\n</button>\n\n{{ clipboard_copy('csl-bib-body','custom-button','') }}\n
Note
The clipboard-copy-element library requires an element ID. If the element being copied does not have an ID, one will automatically generated and assigned.
Copying input element value with custom buttonFirst we start by giving the input element(s) we'd like to copy a unique class:
input element value{% if attribute(data, 'as:image')|length > 0 or attribute(data, 'as:document')|length > 0 %}\n <h2>\n <span class=\"align-middle\">Direct Link to Digital Object's IIIF Presentation Manifest V3 </span>\n <img src=\"https://iiif.io/img/logo-iiif-34x30.png\">\n </h2>\n {% set iiifmanifest = nodeurl|render ~ \"/metadata/iiifmanifest/default.jsonld\" %}\n <input type=\"text\" value=\"{{ iiifmanifest }}\" id=\"iiifmanifest_copy\" size=\"{{ iiifmanifest|length }}\" class=\"col-xs-3 copy-content\">\n{% endif %}\n
Then we generate the button and pass the class to the function:
clipboard_copy custom button for input element value<button class=\"custom-button btn btn-primary btn-sm\">\n <clipboard-copy for=\"iiifmanifest_copy\">Copy Input</clipboard-copy>\n</button>\n\n{{ clipboard_copy('copy-content','custom-button','') }}\n
Note
The clipboard-copy-element library requires an element ID. If the element being copied does not have an ID, one will automatically generated and assigned.
Copying anchor element with custom buttonFirst we start by giving the a
element(s) we'd like to copy a unique class:
<a id=\"copy-documentation-id\" class=\"copy-documentation-class row\" href=\"https://docs.archipelago.nyc\">Archipelago Documentation</a>\n
Then we generate the button and pass the class to the function:
clipboard_copy custom button for anchor element hyperlink href<button class=\"custom-button btn btn-primary btn-sm\">\n <clipboard-copy for=\"copy-documentation-id\">Copy Link</clipboard-copy>\n</button>\n\n{{ clipboard_copy('copy-documentation-class','custom-button','') }}\n
Note
The clipboard-copy-element library requires an element ID. If the element being copied does not have an ID, one will automatically generated and assigned.
sbf_entity_ids_by_label
The sbf_entity_ids_by_label
function, as the name suggests, provides a Drupal entity ID for the following Drupal entity types:
If we start with the user entity jsonapi
, we can do the following: sbf_entity_ids_by_label
{% set jsonapi_user_ids=sbf_entity_ids_by_label('jsonapi','user','') %}\n\n{% for jsonapi_user_id in jsonapi_user_ids %}\n {{ jsonapi_user_id }}\n{% endfor %}\n\n{# Output:\n 3\n#}\n
As you can see above, the sbf_entity_ids_by_label
function takes three arguments: the label to match ('jsonapi' in this case), the entity type ('user'), and the bundle (left empty here, since user entities have no bundles).
We then loop through the returned result, which is an array of IDs (in this case, just a single one).
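The same pattern can be applied to other entity types. Below is a small, hypothetical sketch (assuming node is among the supported entity types; the label is just an example and the returned IDs depend entirely on your own repository) that looks up an ADO by its label, leaving the bundle empty:
{# Hypothetical example: look up an ADO (node) by its label; bundle left empty #}\n{% set ado_ids = sbf_entity_ids_by_label('Laddie the dog running in the garden, Bronx, N.Y., undated [c. 1910-1918?]', 'node', '') %}\n{% for ado_id in ado_ids %}\n  {{ ado_id }}\n{% endfor %}\n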
sbf_search_api
The sbf_search_api
function executes a search API query against a specified index.
{% set search_results=sbf_search_api('default_solr_index','strawberry',[],{'status':1},[]) %}\n{% set labels=search_results['results']['13']['fields']['label_2'] %}\n<ul>\n {% for label in labels %}\n <li>{{ label }}</li>\n {% endfor %}\n</ul>\n
As you can see above, the sbf_search_api
function takes eight arguments:
For this example we end up with the following output:
The Twig Recipe Cards below reference common Metadata transformation, display, or other use cases/needs you may have in your own Archipelago repository.
","tags":["Twig","Twig Filters","Twig Functions","Twig Templates","Examples","JSON"]},{"location":"twig_recipe_cards/#getting-started-working-with-twig-in-archipelago","title":"Getting Started Working with Twig in Archipelago","text":"We recommend reading through our main Metadata Display Preview and Twigs in Archipelago documentation overview guides, and also our Working with Twig primer before diving into applying any of these recipes in your own Archipelago.
","tags":["Twig","Twig Filters","Twig Functions","Twig Templates","Examples","JSON"]},{"location":"twig_recipe_cards/#ami-ingest-template-adaptations-common-use-cases-and-twig-recipe-cards","title":"AMI Ingest Template Adaptations -- Common Use Cases and Twig Recipe Cards:","text":"Use Case #1: I used AMI LoD Reconciliation to reconciliate the values in my AMI Set Source CSV mods_subject_topic
column against both LCSH and Wikidata. I would like to map the reconciliated values into the Archipelago default subject_loc
and subject_wikidata
JSON keys.
Twig Recipe Card for Use Case #1:
{#- LCSH -#}\n {% if data.mods_subject_topic|length > 0 %}\n \"subject_loc\": {{ data_lod.mods_subject_topic.loc_subjects_thing|json_encode|raw }},\n {% endif %}\n {#- Wikidata -#} \n {% set subject_wikidata = [] %}\n {% for source, reconciliated in data_lod %}\n {% if (('subject' in source) or ('genre' in source)) and reconciliated.wikidata_subjects_thing and reconciliated.wikidata_subjects_thing|length > 0 %}\n {% set subject_wikidata = subject_wikidata|merge(reconciliated.wikidata_subjects_thing) %}\n {% endif %}\n {% endfor %} \n
Use Case #2: I have both columns containing a mods_subject_authority_lcsh_topic
(labels) and corresponding mods_subject_authority_lcsh_valueuri
(URIs) data in my AMI Set Source Data CSV that I would like to pair and map into the Archipelago default subject_loc
JSON key.
Twig Recipe Card for Use Case #2:
{%- if data['mods_subject_authority_lcsh_topic'] is defined and not empty -%}\n {% set subjects = data[\"mods_subject_authority_lcsh_topic\"] is iterable ? data[\"mods_subject_authority_lcsh_topic\"] : data[\"mods_subject_authority_lcsh_topic\"]|split('|@|') %} \n {% set subject_uris = data[\"mods_subject_authority_lcsh_valueuri\"] is defined ? data[\"mods_subject_authority_lcsh_valueuri\"] : '' %} \n {% set subject_uris_list = subject_uris is iterable ? subject_uris : subject_uris|split('|@|') %}\n \"subject_loc\": [\n {% for subject in subjects %}\n {\n \"uri\": {{ subject_uris_list[loop.index0]|default('')|json_encode|raw }},\n \"label\": {{ subject|json_encode|raw }}\n }\n {{ not loop.last ? ',' : '' }}\n {% endfor %}\n ],\n{%- endif -%}\n
Use case #3: I have dc.creator
and dc.contributor
columns in my AMI Set Source Data CSV with simple JSON-encoded values (e.g. source column cells contain [\"Name 1, Name 2\"]
) that I would like to map to the Archipelago default creator_lod
JSON key.
Twig Recipe Card for Use Case #3:
{% if data['dc.creator']|length > 0 or data['dc.contributor']|length > 0 %}\n {% set total_creators = (data[\"dc.creator\"]|length) + (data[\"dc.contributor\"]|length) %}\n {% set current_creator = 0 %} \n \"creator_lod\": [\n {% for creator in data[\"dc.creator\"] %}\n {% set current_creator = current_creator + 1 %}\n {% set creator_source = data[\"dc.creator\"][loop.index0] %}\n {\n \"name_uri\": null,\n \"agent_type\": null,\n \"name_label\": {{ creator|json_encode|raw }},\n \"role_label\": \"Creator\",\n \"role_uri\": \"http://id.loc.gov/vocabulary/relators/cre\"\n }\n {{ current_creator != total_creators ? ',' : '' }}\n {% endfor %}\n {% for creator in data[\"dc.contributor\"] %}\n {% set current_creator = current_creator + 1 %}\n {% set creator_source = data[\"dc.contributor\"][loop.index0] %}\n {\n \"name_uri\": null,\n \"agent_type\": null,\n \"name_label\": {{ creator|json_encode|raw }},\n \"role_label\": \"Contributor\",\n \"role_uri\": \"http://id.loc.gov/vocabulary/relators/ctb\"\n }\n {{ current_creator != total_creators ? ',' : '' }}\n {% endfor %} \n ],\n{% endif %} \n
Use Case #4: I have a mix of different columns containing Creator/Contributor/Other-Role-Types Name values with or without corresponding URI values that I would like to map to the default Archipelago creator_lod
JSON key. Twig Recipe Card for Use Case #4:
Click to view the full Recipe Card{#- START Names from LoD and MODS CSV with/without URIS. -#} \n {# Updated August 26th 2022, by Diego Pino. New checks/logic for mods_name_type_role_composed_or_more_namepart\n - Check first IF for a given namepart there is already reconciliaton. \n - IF not i check if there is a matching valueuri, \n - If not leave the URL empty and use the value in the namepart (label) only?\n - Only check/use mods_name_corporate/personal_namepart field IF there are no other fields\n - That specify Roles. Since normally in ISLANDORA that field (no role) is a Catch all names one\n - And in that case USE creator as the default ROLE\n #}\n {%~ set creator_lod = [] -%} \n {# Used to keep track of parts after the type (corporate, etc) that are no roles\n but authority properties. Add more if you find them #}\n {% set roles_that_are_no_roles = ['authority_naf','authority_marcrelator',''] %}\n {# Used to keep track of the ones that are reconciled already #}\n {%- set name_has_creator_lod = [] -%}\n {%- for key,value in data_lod -%}\n {%- if key starts with 'mods_name_' and key ends with '_namepart' -%}\n {# If there is mods_name_SOMETHING_namepart in data_lod we keep track so we \n do not try afterwards to use that Sources KEY from the CSV.\n #}\n {%- set name_has_creator_lod = name_has_creator_lod|merge([key]) -%}\n {# Now we remove 'mods_name_' and '_namepart' #}\n {%- set name_type_and_role = key|replace({'mods_name_':'', '_namepart':''}) -%}\n {# We will only target personal or corporate. If any of those are missing we skip? #}\n {% set name_type = null %}\n {%- if name_type_and_role starts with 'personal_' -%}\n {% set name_type = 'personal' %}\n {%- elseif name_type_and_role starts with 'corporate_' -%}\n {%- set name_type = 'corporate' -%}\n {%- endif -%}\n {%- if name_type is not empty -%}\n {#- Now we remove 'type', e.g 'corporate_' -#}\n {%- set name_role = name_type_and_role|replace({(name_type ~ '_'):''}) -%}\n {# in case the name_role contains one of roles_that_are_no_roles, e.g\n something like `creator_authority_marcrelator` we remove that #}\n {% for role_that_is_no_role in roles_that_are_no_roles %}\n {%- set name_role = name_role|replace({(role_that_is_no_role):''}) -%}\n {% endfor %}\n {# After removing all what can not be a role if we end with an empty #}\n {% if name_role|trim|length == 0 %}\n {%- set name_role = \"creator\" %}\n {% else %}\n {%- set name_role = name_role|replace({'\\\\/':'//' , '_':' '})|trim -%}\n {% endif %}\n {#- we iterate over all possible vocabularies and fetch the reconciliated names from them (if any) -#}\n {%- for approach, names in value -%} \n {#- if there are actually name pairs (name and uri) that were reconciliated we use them -#}\n {%- if names|length > 0 -%}\n {#- we call the ami_lod_reconcile twig extension with the role label using the LoC Relators endpoint in english and get 1 result -#}\n {%- set role_uri = ami_lod_reconcile(name_role|lower|capitalize,'loc;relators;thing','en',1) -%}\n {#- for each found name pair in a list of possible LoD reconciliated elements we generate the final structure that goes into \"creator_lod\" json key -#}\n {%- for name in names -%} \n {%- set creator_lod = creator_lod|merge([{'role_label': name_role|lower|capitalize, 'role_uri': role_uri[0].uri, \"agent_type\": name_type, \"name_label\": name.label, \"name_uri\": name.uri}]) -%} \n {%- endfor -%}\n {%- endif -%}\n {%- endfor -%}\n {% endif -%}\n {%- endif -%} \n {%- endfor -%}\n {# Now go for the RAW CSV data for names #}\n {%- for 
key,value in data -%}\n {# here we skip values previoulsy fetched from LoD and stored in name_has_creator_lod #}\n {%- if key not in name_has_creator_lod and key starts with 'mods_name_' and key ends with '_namepart' -%}\n {# If there is mods_name_SOMETHING_namepart in data_lod we keep track so we \n do not try afterwards to use that Sources KEY from the CSV.\n #}\n {%- set name_has_creator_lod = name_has_creator_lod|merge([key]) -%}\n {# Now we remove 'mods_name_' and '_namepart' #}\n {%- set name_type_and_role = key|replace({'mods_name_':'', '_namepart':''}) -%}\n {# We will only target personal or corporate. If any of those are missing we skip? #}\n {%- set name_type = null -%}\n {%- if name_type_and_role starts with 'personal_' -%}\n {%- set name_type = 'personal' -%}\n {%- elseif name_type_and_role starts with 'corporate_' -%}\n {%- set name_type = 'corporate' -%}\n {%- endif -%}\n {% if name_type is not empty %}\n {# Now we remove 'type', e.g 'corporate_' #}\n {%- set name_role = name_type_and_role|replace({(name_type ~ '_'):''}) -%}\n {# in case the name_role contains one of roles_that_are_no_roles, e.g\n something like `creator_authority_marcrelator` we remove that #}\n {% for role_that_is_no_role in roles_that_are_no_roles %}\n {%- set name_role = name_role|replace({(role_that_is_no_role):''}) -%}\n {% endfor %}\n {# After removing all what can not be a role if we end with an empty #}\n {% if name_role|trim|length == 0 %}\n {%- set name_role = \"creator\" %}\n {% else %}\n {%- set name_role = name_role|replace({'\\\\/':'//' , '_':' '})|trim -%}\n {% endif %}\n {# Now we check if there is a corresponding _valueuri for this #}\n {% set name_uris = [] %}\n {%- if data[('mods_name_' ~ name_type_and_role ~ '_valueuri')] is not empty \n and data[('mods_name_' ~ name_type_and_role ~ '_valueuri')] != '' -%}\n {%- set name_uris = data[('mods_name_' ~ name_type_and_role ~ '_valueuri')]|split('|@|') -%}\n {%- endif -%}\n {%- set role_uri = ami_lod_reconcile(name_role|lower|capitalize,'loc;relators;thing','en',1) -%}\n {#- we split and iterate over the value of the mods_name key -#}\n {# NOTE. THIS IS TARGETING Anything after the year 1000, or 2000 #}\n {%- for index,name in value|replace({'|@|1':', 1', '|@|2':', 2', '|@|-':', -'})|split('|@|') -%}\n {%- if name is not empty and name|trim != '' -%}\n {%- set name_uri = null -%}\n {# Here we can check if one of the names IS not a name (e.g a year? #}\n {#- we call the ami_lod_reconcile twig extension with the role label using the LoC Relators endpoint in english and get 1 result -#}\n {%- if name_uris[index] is defined and name_uris[index] is not empty -%}\n {%- set name_uri = name_uris[index] -%}\n {%- endif -%}\n {%- set creator_lod = creator_lod|merge([{'role_label': name_role|lower|capitalize, 'role_uri': role_uri[0].uri, \"agent_type\": name_type, \"name_label\": name, \"name_uri\": name_uri}]) -%}\n {%- endif -%}\n {%- endfor -%}\n {%- endif -%}\n {%- endif -%}\n {%- endfor ~%}\n {# Use reduce filter + other logic for depulicating #}\n {% set creator_lod = creator_lod|reduce((unique, item) => item in unique ? unique : unique|merge([item]), []) %}\n \"creator_lod\": {{ creator_lod|json_encode|raw -}},\n {#- END Names from LoD and MODS CSV with/without URIS. -#}\n
Use Case #5: I have geographic location information that I would like to reconciliate against Nominatim and map into the default Archipelago 'geographic_location' key. Some of my AMI Source Data CSVs contain values/labels and some contain coordinates.
Twig Recipe Card for Use Case #5 with variation notes:
{#- <-- Geographic Info and terms:\n Includes options for geographic info for:\n - Nominatim lookup by value/label\n - Nominatim lookup by coordinates \n -#}\n {#- use value for Nominatim search -#}\n {% if data.mods_subject_geographic|length > 0 %}\n {% set nominatim_from_label = ami_lod_reconcile(data.mods_subject_geographic,'nominatim;thing;search','en') -%}\n \"geographic_location\": {{ nominatim_from_label|json_encode|raw }},\n {% endif %}\n {#- use coordinates for Nominatim search, if provided -#}\n {% if data.mods_subject_cartographics_coordinates|length > 0 %}\n {% set nominatim_from_coordinates = ami_lod_reconcile(data.mods_subject_cartographics_coordinates,'nominatim;thing;reverse','en') -%}\n \"geographic_location\": {{ nominatim_from_coordinates|json_encode|raw }},\n {% endif %}\n{#- Geographic Info and terms --> #} \n
Use Case #6: I have date values in a dc.date
column that contain instances of 'circa' or 'Circa', which I would like to replace with the EDTF-friendly '~' and map to the Archipelago default 'date_created_edtf' JSON key.
Twig Recipe Card for Use Case #6:
{% if data['dc.date'] is defined %}\n {% set datecleaned = data['dc.date']|replace({\"circa \":\"~\", \"Circa \":\"~\"}) %}\n \"date_created_edtf\": {\n \"date_to\": \"\",\n \"date_free\": {{ datecleaned|json_encode|raw }},\n \"date_from\": \"\",\n \"date_type\": \"date_free\"\n }, \n {% endif %}\n
More recipe cards will be added over time. Please see our Archipelago Contribution Guide to learn about contributing your own recipe card or other documentation.
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
","tags":["Twig","Twig Filters","Twig Functions","Twig Templates","Examples","JSON"]},{"location":"utility_scripts/","title":"Utility Scripts","text":"If you've already followed deployment guides for archipelago-deployment and archipelago-deployment-live, you may have used some shell scripts that archipelago provides. The scripts are available in the scripts/archipelago/
and drupal/scripts/archipelago/
folders respectively.
Metadata Display Entity Twig Templates can be exported out of and imported into both local and remote deployments with the following script: import_export.sh
. The script can be run interactively or non-interactively.
Docker host vs. Docker container
Because the script uses the Docker .env
file for the JSONAPI user and URL by default, we recommend running this directly on the host.
Running the script interactively will guide you through a number of prompts to configure and import or export to an existing folder or to one which will be created.
./import_export.sh -n\n
","tags":["Bash","Scripts","DevOps"]},{"location":"utility_scripts/#non-interactive-mode","title":"Non-interactive Mode","text":"To run the command non-interactively provide the required and optional parameters with the necessary arguments as needed.
Options for Non-interactive Mode
-i
or -e
(required)
\u00a0\u00a0\u00a0\u00a0Import or export, respectively, Metadata Display Entity Twig Templates to or from a local folder.
-s path
(required)
\u00a0\u00a0\u00a0\u00a0The absolute path of the local folder to export to or import from.
-j path/filename
(only required if the .env
file containing the JSONAPI user and password is in a non-standard location)
\u00a0\u00a0\u00a0\u00a0The absolute path to the .env
file containing the JSONAPI user and password.
-d url
(required if URL is not in .env
file or importing to or exporting from a remote deployment)
\u00a0\u00a0\u00a0\u00a0The URL of the archipelago deployment.
-k
(optional)
\u00a0\u00a0\u00a0\u00a0Keep any existing files ending with .json
in the specified folder (the default is to delete) before exporting.
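Putting several of these options together, a hypothetical non-interactive invocation that exports from a remote deployment into a local folder, reads the JSONAPI credentials from a .env file in a non-standard location, and keeps any existing .json files could look like this (the folder path, .env path, and URL are only illustrative):
# Hypothetical combined example (adjust paths and URL to your own setup)\n./import_export.sh -e -s /home/user/metadatadisplay_export -j /home/user/custom/.env -d https://archipelago.nyc -k\n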
JSONAPI User
The JSONAPI user credentials, by default, will be read from the .env
files in the following locations (relative to the root of the deployment):
./deploy/ec2-user/.env
archipelago-deployment ./.env
A separate file can also be passed as an argument using the -j
option.
JSONAPI_USER=jsonapi\nJSONAPI_PASSWORD=jsonapi\n
Exporting from local archipelago-deployment-live
./import_export.sh -e -s /home/ec2-user/metadatadisplay_export\n
After logging into the archipelago-deployment-live host, the above command will delete any files with the .json
extension if the destination folder exists. Otherwise, the folder will be created. The JSONAPI user credentials and domain from the .env
file will then be used to download the files, so please make sure these are set. Exporting from local archipelago-deployment
./import_export.sh -e -s /home/user/metadatadisplay_export -d http://localhost:8001\n
This will work the same way as the above example, but the URL is passed as an argument in this case since the .env
file will not contain (in most cases) the domain. As above, the JSONAPI user credentials will have to be set in the .env
file. Exporting from remote archipelago-deployment-live
./import_export.sh -e -s /home/user/metadatadisplay_export -d https://archipelago.nyc\n
This is essentially the same as the example directly above, except that in this case the JSONAPI user credentials in the .env
file will have to be set to the ones used to access the remote instance. Importing locally into archipelago-deployment-live
./import_export.sh -i -s /home/user/metadatadisplay_import\n
This is essentially the same as the first example above, except that the import option (-i
) is used. The folder name is changed for the sake of example, but you can use the same folder that was used for exporting. Importing locally into archipelago-deployment
./import_export.sh -i -s /home/user/metadatadisplay_import -d http://localhost:8001\n
As in the example directly above, this corresponds to the example for exporting with a local archipelago-deployment instance, except that the import option (-i
) is used. Importing from local instance into remote archipelago-deployment-live
./import_export.sh -i -s /home/user/metadatadisplay_import -d https://archipelago.nyc\n
In this example, the locally exported files are being imported into a remote instance. As in the above examples with remote instances, the JSONAPI user credentials need to be set in the .env
file to those with access to the remote instance.","tags":["Bash","Scripts","DevOps"]},{"location":"utility_scripts/#automatic-deployment-script","title":"Automatic Deployment Script","text":"If you're frequently deploying locally with archipelago-deployment, you may want to use the automated deployment script available at scripts/archipelago/devops/auto_deploy.sh
. The script is interactive and can be called from the root of the deployment, e.g. /home/user/archipelago-deployment/
:
Automatic Deployment
scripts/archipelago/devops/auto_deploy.sh
Follow the prompts and select your options to complete the deployment.
","tags":["Bash","Scripts","DevOps"]},{"location":"webforms/","title":"Webforms in Archipelago","text":"The Webform Strawberryfield module provides Drupal Webform ( == awesome piece of code) integrations for StrawberryField so you can really have control over your Metadata ingests. These custom elements provide Drupal Webform integrations for Archipelago\u2019s StrawberryField so you can have fine grained and detailed control over your Metadata ingests and edits.
","tags":["Webform","Form Mode","Webform Elements"]},{"location":"webforms/#instructions-and-guides","title":"Instructions and Guides","text":"Use these webforms or their elements to create a custom webform for your own repository/project needs
Archipelago Default Deployment Webforms
Descriptive Metadata
Digital Object Collection
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
","tags":["Webform","Form Mode","Webform Elements"]},{"location":"webformsasinput/","title":"How to Create a Webform as an Input Method for Archipelago Digital Objects (ADO) / Primer on Display Modes","text":"Drupal 8/9 provides a lot of out-of-the-box functionality to setup the way Content Entities (Nodes or in our case ADOs) are exposed to users with the proper credentials. That functionality lives under the \"Display Modes\" and can be accessed at yoursite/admin/structure/display-modes
.
In a few quick words, the Display Mode concept covers: how your Content Entities and their associated Fields are formatted, so that when a user lands on a Content Page they are displayed in a certain, hopefully pleasing, way, and also how users with the proper Credentials can fill in inputs/edit values for each field
a Content Entity provides.
First, formatting output (basically building the front facing page for each content entity) is done by a View Mode
. Second, defining how/what input method you are going to use to create or edit Content entities, is handled by a Form Mode
. Both Modes are, in Drupal lingo, Configuration Entities: they provide things you can configure, you can name them and reuse them, and those configurations can all be exported and reimported using YAML files. Both Modes also have the following in common:
The main difference, other than their purpose (Output v/s Input) is that, on View Modes, the settings you apply to each field are associated to \"Formatters\" and on Form Modes, the settings you apply to each field are connected to \"Widgets\".
So, to summarize, this is what lives under the Concept of a \"Display Mode\":
","tags":["Webform","Form Mode","View Mode","Display Mode","Manage Display","Manage Form Display","Handler"]},{"location":"webformsasinput/#view-mode","title":"View Mode","text":"SBF
will provide a large list of possible Formatters
, like IIIF driven viewers, Video formatters, Metadata Display (Twig template driven) ones, etc. This is because a SBF type of field has much more than just a text value, it contains a full graph of metadata and properties, inclusive links to Files and provenance metadata.Which Widgets are available will depend on the \"type\" of field the Content Entity has.
Node
Title will have a single Text Input with some options, like the size of the Textfield used to feed it.SBF
(strawberryfield), will provide a larger list of possible Widgets, ranging from raw JSON input (which you could select if your data was already in the right format) to the reason we are reading this: Webform driven Widgets
. These Widgets include:If you chose a widget other than the raw JSON, the widget will take the raw JSON to build, massage and enrich the data so that it can be presented in a visual format by the SBF. This is because a SBF type of field has much more than just a text value. It contains a full graph of metadata and properties, inclusive links to Files and provenance metadata, which for example allows us to use an Upload field directly in the attached/configured webform. - Form modes also have an additional benefit. Each one can have fine grained permissions. That way you can have many different Form Modes, but allow only certain ones to be visible, or usable by users of a given Drupal Role.
Good question! So, to enable, configure, and customize these Display Modes you have to navigate to your Content Type
Configuration page in your running Archipelago. This is found at /admin/structure/types
. Note: the way things are named in Drupal can be confusing to even the most deeply committed Drupal user, so bear in mind some terms will change. Feel free to read and re-read.
You can see that for every existing Content Type, there is a drop down menu with options:
On the top you will see all your View Modes Listed, with the Default
one selected and expanded. The Table that follows has one row per Field attached/part of this Content Type. Some of the fields are part of the Content Type itself, in this case Digital Object (bundled) and some other ones are common to every Content Entity derived from a Node. The \"Field\" column contains each field name (not their type, reason why you don't see Strawberry Field there!) but we can tell you right now that there is one, named \"Descriptive Metadata\", that is of SBF
type.
How do we know that the field named \"Descriptive Metadata\" is a Strawberryfield? Well, we set up the Digital Object Content Type for you that way, but you can also know what we know by pressing the \"Manage fields\" tab at the top (don't forget to come back to \"Manage display\" afterwards!)
Also, surprise: your Content Entity really has just 2 fields! And that, friends, is one of the secret ingredients of Archipelago. Everything goes into a Single Field. But wait: I see more fields in my Manage Display table. Why? Well, some of them are base fields, part of what a Drupal Node is: base field means you cannot remove them, they are part of the Definition itself. One obvious one is the Title
.
But there are also some fields very particular to Archipelago: you can see there are also ones named \"Formatter Object Metadata\", \"Media\", and one named \"Static Media\"! Where do those come from? Those are also Strawberryfields. It sounds confusing, but it is really simple. They are not really \"fields\" in the sense of having different data than \"Descriptive Metadata\". They are in-memory, realtime copies of the \"Descriptive Metadata\" SBF field and are there to overcome one limitation of Drupal 8:
Each Field can have a single \"Formatter\" setup per field.
But we want to re-use the JSON data to show a Viewer, show Metadata as HTML directly on the ADO/NODE landing page, and we also want to, for example, sometimes format images as Thumbnails instead of only using a IIIF viewer. These CopyFields (legal term) also have a nice performance advantage: Drupal needs to fetch the data from the real Field, \"Descriptive Metadata\", from the database only once, and then just makes the data available in real time to its copies. That makes everything fast, very very fast! And of course flexible. As you dig more into Archipelago you will see the benefits of this approach. Finally, if you need to, you can make more CopyFields. But the reality is that there is a single, only one, SBF in each Digital Object and it's named \"Descriptive Metadata\".
You can also simply not care about the type and trust the UI. It will just show Formatters that are right for each type and expose Configuration options (and a little abstract of the current ones) under the Widget column. The Operations column allows you to set up each Widget. The Widget term here is a bit confusing: these are not really Widgets in terms of Data Input, but in terms of \"Configuration\" Input. But D8/9 is evolving and it is getting better. Those settings always apply only to the current View Mode.
You can play with this, experiment, and change some settings to get more comfortable. We humbly propose that you complement this info with the official Drupal 8 Documentation and also apply custom settings to your own, custom View Mode so you don't end up changing base, expected functionality while you are still learning.
","tags":["Webform","Form Mode","View Mode","Display Mode","Manage Display","Manage Form Display","Handler"]},{"location":"webformsasinput/#manage-form-display","title":"Manage Form Display","text":"On the top you will see all your Form Modes Listed, with the Default
one selected and expanded. The Table that follows has one row per Field attached/part of this Content Type. The list of fields here is shorter: the SBF CopyFields are not present because all data really goes only into real fields. Also, some other, display-only ones (meaning you cannot modify them) will not appear here. Again, some of the fields are part of the Content Type itself, in this case Digital Object (bundled), and some other ones are common to every Content Entity derived from a Node. The \"Field\" column contains each field name and the Widget column allows you to select what type of Input you are going to use to feed it on ingest/edit. On the right you will see again a little gear that allows you to configure the settings for a particular Widget. Those settings always apply only to the current Form Mode.
So, the one we want to understand is the one attached to the \"Descriptive Metadata\" field: currently one named \"Strawberryfield webform based input with inline rendering\". There are two others, but let's start with this one. Press on the gear to the right on the same row.
As you can see, there are not too many options. But the main, first text input is an Autocomplete field that will resolve against your existing Webforms. So, guess what: if you want to use your own Webform to feed a SBF, what do you do? You type the name, let the autocomplete work, select the right Webform, maybe your own custom one, and then you press \"Update\". Once that is done you need to \"Save\" your Form Mode (hint: button at the bottom of the page).
We wish life was that easy (and it will be once we are done with refining Drupal's UI), but for now there are some extra things you need to do to make sure the Webform, your custom one, can speak JSON. The default one you get, also named \"Descriptive Metadata (descriptive_metadata)\", same as the field, is already set up to be used. That means if you create a new Webform by copying that one, you can start using it immediately. But if you created one from scratch (different tutorial) you need to adjust some settings.
","tags":["Webform","Form Mode","View Mode","Display Mode","Manage Display","Manage Form Display","Handler"]},{"location":"webformsasinput/#setting-up-a-webform-to-talk-strawberryfield","title":"Setting up a Webform to talk Strawberryfield","text":"Navigate to your Webform Managment form at /admin/structure/webform
If you already created a Webform (different tutorial on how to do that) you will see your own named one in that list. For the purpose of this documentation I created one named \"Diego Test\" (Hi, I'm Diego..) and in the right-most column, \"Operations\", you will have a Drop Down Menu. On your own Webform row, press on \"Settings\".
The first time, this can be a little bit intimidating. We recommend taking baby steps, since the Webform Module is a very powerful one but also exposes you to a lot of (and sometimes too many) options. Even more, if you are new to Webforms, we recommend copying the \"Descriptive Metadata\" Webform we provided first, and making small changes to it (starting by naming it your own way!) so you can see how that affects your workflow and experience, and how that interacts with the created metadata. The Webform Module provides testing and building capabilities, so you have a Playground there before actually ingesting ADOs. Copying it will also move over all the needed settings for SBF interaction, so your work will be much easier.
But we know you did not do that (where is the fun there, right?). So let's set up one from scratch.
","tags":["Webform","Form Mode","View Mode","Display Mode","Manage Display","Manage Form Display","Handler"]},{"location":"webformsasinput/#general-settings","title":"General Settings","text":"Gist here is (look at the screenshot and copy the settings):
Gist here is (again, look at the screenshot and copy the settings):
The glue, the pièce de résistance. The handler is the one that knows how to talk to a SBF. In simple words, the handler (any handler) provides functionality that does something with a Webform Submission. The one that you want to select here is the \"Strawberryfield harvester\" handler. Add it, name it whatever you like (or copy what you see in the screenshot) and make sure you select, if running using our deployment strategy, \"S3 File System\" as the option for \"Permanent Destination for uploaded files\". The wording is tricky there: it is not really permanent, since that is handled by Archipelago, but rather a temporary destination for the Webform while you are working on ingesting an Object. It is not really wrong either: it is permanent for the Webform, but we have better plans for the files and metadata!
Save your settings. And you are ready to roll. That webform can now be used as a Setting for any of the StrawberryField Widgets that use Webforms.
Finally (the real finally). Archipelago encourages at least one Field/JSON key to be present always. One with \"type\" as key value. So make sure that your Custom Webform has that one.
There are two ways of doing that:
You can copy how it is set up from the provided Webform's Elements, from the main Descriptive Metadata Webform, and then add one \"select\" element to yours using the same \"type\" \"key\". What is always important in Archipelago is the key value, since that is what builds the JSON for your metadata. The Description can be anything, but for UI consistency you may want to keep it the same across all your webforms.
Or, advanced, you can use the import/export capabilities (Webforms are just YAML files!) and export/copy your custom one as text, add the following element before or after some existing elements there
type:\n '#type': select\n '#title': 'Media Type'\n '#options': schema_org_creative_works\n '#required': true\n '#label_attributes':\n class:\n - custom-form-input-heading\n
And then reimport.
Having a \"type\" value will make your life easier. You don't need it, but everything works smoother that way.
Since you have a single Content Type named Digital Object, having a Webform field whose key is \"type\", which leads to a \"type\" JSON key, allows you to discern the nature of your Digital Object (book or Podcast, Image or 3D) and do smart, nice things with them.
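As a quick, hypothetical sketch of why that helps, a Metadata Display Twig template could branch on that key (the type values here are only illustrative; use whatever values your own webform provides):
{# Hypothetical example: vary the display based on the type JSON key #}\n{% if data.type == 'Photograph' %}\n  <p>This object is a photograph.</p>\n{% elseif data.type == 'Book' %}\n  <p>This object is a book.</p>\n{% endif %}\n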
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
","tags":["Webform","Form Mode","View Mode","Display Mode","Manage Display","Manage Form Display","Handler"]},{"location":"workingtwigs/","title":"Working with Twig in Archipelago","text":"The following information can also be found in this Presentation from the \"Twig Templates and Archipelago\" Spring 2021 Workshop:
Note
All examples shown below use the following JSON snippet from Laddie the dog running in the garden, Bronx, N.Y., undated [c. 1910-1918?].
Click to view image of the JSON snippet. Click to view this snippet as JSON.{\n \"type\": \"Photograph\",\n \"label\": \"Laddie the dog running in the garden, Bronx, N.Y., undated [c. 1910-1918?]\",\n \"owner\": \"New-York Historical Society, 170 Central Park West, New York, NY 10024, 212-873-3400.\",\n \"rights\": \"This digital image may be used for educational or scholarly purposes without restriction. Commercial and other uses of the item are prohibited without prior written permission from the New-York Historical Society. For more information, please visit the New-York Historical Society's Rights and Reproductions Department web page at http:\\/\\/www.nyhistory.org\\/about\\/rights-reproductions\",\n \"language\": [\n \"English\"\n ],\n \"documents\": [],\n \"publisher\": \"\",\n \"ismemberof\": \"111\",\n \"creator_lod\": [\n {\n \"name_uri\": \"\",\n \"role_uri\": \"http:\\/\\/id.loc.gov\\/vocabulary\\/relators\\/pht\",\n \"agent_type\": \"personal\",\n \"name_label\": \"Stonebridge, George Ehler\",\n \"role_label\": \"Photographer\"\n }\n ],\n \"description\": \"George Ehler Stonebridge (d. 1941) was an amateur photographer who lived and worked in the Bronx, New York.\",\n \"subject_loc\": [\n {\n \"uri\": \"http:\\/\\/id.loc.gov\\/authorities\\/subjects\\/sh85038796\",\n \"label\": \"Dogs\"\n }\n ],\n \"date_created\": \"1910-01-01\"\n}\n
","tags":["Twig","Twig Templates","Twig Filters","Twig Functions","Examples","JSON"]},{"location":"workingtwigs/#first-know-your-data","title":"First: Know Your Data","text":"Understanding the basic structure of your JSON data.
Single JSON Value.
\"type\": \"Photograph\"
Multiple JSON Values (Array of Enumeration of Strings)
- For \"language\": [\"English\",\"Spanish\"]
- \"language\" = JSON Key or Property - \"[\"English\",\"Spanish\"]\" = Multiple JSON Values (Array of Enumeration of Strings)
Multiple JSON Values (Array of Enumeration of Objects)
\"subject_loc\":[{\"uri\":\"http://..\",\"label\":\"Dogs\"},{\"uri\":\"http://..\",\"label\":\"Pets\"}]
Data is known as Context in Twig Lingo.
All your JSON Strawberryfield Metadata is accessible inside a Variable named data in your twig template.
You can access the values by using data DOT Property (attribute) Name
.
data.type
will contain \"Photograph\"data.language
will contain [ \"English\" ]data.language[0]
will contain \"English\" data.subject_loc
will contain [{ \"uri\":\"http://..\",\"label\": \"Dog\" }]data.subject_loc[0].uri
will contain \"http://..\"
data.subject_loc[0].label
will contain \"Dog\"Note
You also have access to other info in your context node
: such asnode.id
is the Drupal ID of your Current ADO; Also is_front
, language
, is_admin
, logged_in
; and more!
Twig for Template Designers
https://twig.symfony.com/doc/3.x/templates.html
","tags":["Twig","Twig Templates","Twig Filters","Twig Functions","Examples","JSON"]},{"location":"workingtwigs/#simple-examples-using-printing-statements","title":"Simple examples using Printing Statements","text":"Single JSON Value Example
Twig templateHello I am a {{ data.type }} and very happy to meet you\n
Rendered outputHello I am a Photograph and very happy to meet you\n
Multiple JSON Values Example
Twig templateHello I was classified as \"{{ data.subject_loc[0].label }}\" and very happy to meet you\n
Rendered outputHello I was classified as \"Dogs\" and very happy to meet you\n
","tags":["Twig","Twig Templates","Twig Filters","Twig Functions","Examples","JSON"]},{"location":"workingtwigs/#twig-statements-and-executing","title":"Twig Statements and Executing","text":"If in Twig
https://twig.symfony.com/doc/3.x/tags/if.html
","tags":["Twig","Twig Templates","Twig Filters","Twig Functions","Examples","JSON"]},{"location":"workingtwigs/#rendered-output-based-upon-different-twig-conditionals-operators-tests-assignments-and-filters","title":"Rendered Output based upon different Twigconditionals
, operators
, tests
, assignments
, and filters
","text":"Conditionals, Operator, and Test Usage
Twig Template{% if data.subject_loc is defined %}\nHey I have a Subject Key\n{% else %}\nUps no Subject Key\n{% endif %}\n
Rendered OutputHello I was classified as \"Dogs\" and very happy to meet you\n
Loop Usage
Twig Template{% for key, subject in data.subject_loc %}\n* Subject {{ subject.label }} found at position {{ key }}\n{% endfor %}\n
Rendered Output* Subject Dogs found at position 0\n
Assignment, Filter, and Loop Usage
Twig Template{% for subject in data.subject_loc %}\n{% set label_lowercase = subject.label|lower %}\nMy lower case Subject is {{ label_lowercase }}\n{% endfor %}\n
Rendered Output`My lower case Subject is dogs`\n
Loop Scope
Twig Template{% for subject in data.subject_loc %}\n {% set label_lowercase = subject.label|lower %}\nMy lower case Subject is {{ label_lowercase }}\n{% endfor %}\n{# \n The below won\u2019t display because it was assigned inside \n The For Loop\n#}\n{{ label_lowercase }}\n
Rendered Output`My lower case Subject is dogs`\n
","tags":["Twig","Twig Templates","Twig Filters","Twig Functions","Examples","JSON"]},{"location":"workingtwigs/#full-examples-for-common-uses-cases","title":"Full Examples for Common Uses Cases:","text":"Use Case #1
I have multiple LoD Subjects and want to display them in my page as a clickable ordered list but I\u2019m a safe/careful person.
Twig Example for Use Case #1{% if data.subject_loc is iterable and data.subject_loc is not empty %}\n<h2>My Subjects</h2>\n<ul>\n {% for subject in data.subject_loc %}\n <li>\n <a href=\"{{ subject.uri }}\" title=\"{{ subject.label|capitalize }}\" target=\"_blank\">\n {{ subject.label }}\n </a>\n </li> \n {% endfor %}\n</ul>\n{% endif %}\n
Use Case #2
I have sometimes a publication date. I want to show it in beautiful human readable language.
Twig Example for Use Case #2{% if data.date_published is not empty %}\n<h2>Date {{ data.label }} was published:</h2>\n<p>\n{{ data.date_published|date(\"F jS \\\\o\\\\f Y \\\\a\\\\t g:ia\") }}\n</p>\n{% endif %}\n
About date
Use Case #3 (Full Curry)
{# May 4th 2021 @dpino: I have sometimes a user provided creation date. I want to show it in beautiful human readable language but fallback to automatic date if absent. I also want in the last case to show it was either \u201ccreated\u201d or \u201cupdated\u201d. #}
\"as:generator\": {\n \"type\": \"Update\",\n \"actor\": {\n \"url\": \"https:\\/\\/archipelago.nyc\\/form\\/descriptive-metadata\",\n \"name\": \"descriptive_metadata\",\n \"type\": \"Service\"\n },\n \"endTime\": \"2021-03-17T13:24:01-04:00\",\n \"summary\": \"Generator\",\n \"@context\": \"https:\\/\\/www.w3.org\\/ns\\/activitystreams\"\n }\n
Twig Example for Use Case #3{% if data.date_created is not empty %}\n<h2>Date {{ data.label }} was created:</h2>\n<p>\n {{ data.date_created|date(\"F jS \\\\o\\\\f Y \\\\a\\\\t g:ia\") }}\n</p>\n{% else %}\n<h2>Date {{ data.label }} was {{ attribute(data, 'as:generator').type|lower }}d in this repository:</h2>\n<p>\n {{ attribute(data, 'as:generator').endTime|date(\"F jS \\\\o\\\\f Y \\\\a\\\\t g:ia\") }}\n</p>\n{% endif %}\n
","tags":["Twig","Twig Templates","Twig Filters","Twig Functions","Examples","JSON"]},{"location":"workingtwigs/#a-recommended-workflow","title":"A Recommended Workflow","text":"You want to create a New Metadata Display (HTML) or a new (XML) Schema based format?
data.label
info and check where your Frame uses a Title or a Label. Remove that text (Cmd+X or Ctrl+X) and replace with a {{ data.label }}
. Press Preview. Do you see your title?data.subject_loc
){# I added this because .. #}
Once the Template is in place you can use it in a Formatter, as Endpoint, in your Search Results or just keep it around until you and the world are ready!
","tags":["Twig","Twig Templates","Twig Filters","Twig Functions","Examples","JSON"]},{"location":"workingtwigs/#and-now-its-your-turn","title":"And now it's your turn!","text":"We hope you found the information presented here to be helpful in getting started working with Twigs in Archipelago. Click here to return to the main Twigs in Archipelago documentation. Happy Twigging!
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
","tags":["Twig","Twig Templates","Twig Filters","Twig Functions","Examples","JSON"]},{"location":"xdebug/","title":"Debugging PHP in Archipelago","text":"This document describes how to enable Xdebug for local PHP development using the PHPStorm IDE and a docker container running the Archipelago esmero-php:development
image. It involves interacting with the esmero/archipelago-docker-images repo and the esmero/archipelago-deployment repo.
Run the following commands from your /archipelago-deployment
directory:
docker-compose down
\\ docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d
This version of docker-compose up
uses an override file to modify our services. docker-compose.dev.yml
we now have an extra PHP container called esmero-php-debug
.
To stop the containers in the future, run docker-compose -f docker-compose.yml -f docker-compose.dev.yml down
.
(To make these commands easier to remember, consider making bash aliases in your .bashrc file.) (If you are running your development on a Linux system, you may need to make a modification to your xdebug configuration file on the esmero-php-dev container. See appendix at the bottom of this page.)
So we have reloaded the containers and now you are ready for Part 2.
In PHPStorm, open your archipelago-deployment
project.
Go to Preferences > Languages & Frameworks > PHP > Debug
or Settings > PHP > Servers
. In this window there is an Xdebug section. Use these settings:
9003
. (do NOT use the default, 9000)Your settings should look like this. Hit APPLY and OK.
Go to Preferences > Languages & Frameworks > PHP > Servers
. We will create a new server here. Use these settings:
docker-debug-server
localhost
8001
archipelago-deployment
directory in the File/Directory
column.Absolute path on the server
add /var/www/html
Hit APPLY and OK and close the window.
Go to Run > Edit Configurations
. Hit the +
Button to create a new PHP Remote Debug. Name whatever you want, I called mine Archipelago
. Use these settings:
docker-debug-server
from dropdown (we created this in step 3)archipelago
(this matches the key set in our container)
Note: If you try to validate your connection, it will fail. But that's ok.
Validate your connection. With Run > Edit Configurations
still open, you can hit the link that says \"Validate\". Use these settings in the following validation window:
<your local path>/archipelago-deployment/web
http://localhost:8001
Hit VALIDATE. You should get a series of green check marks. If you get a warning about missing php.ini
file, that is OK, our file has a different name in the container (xdebug.ini
) and is still being read correctly.
We have had success using the XDebug Helper extension in Chrome. Once you have the extension installed, right-click on the bug icon in the top right of your chrome browser window and select \"Options\" to configure the IDE key. Under \"IDE\", select \"Other\", and in the text box, enter \"archipelago\"
Hit the button (top right bar of PHPStorm) that looks like a telephone, for Start Listening for PHP Debug Connections
.
Now, you can use Run > Debug
and select the Archipelago
named configuration that we created in the previous steps. The debugging console will appear. It will say it is waiting for incoming connection from 'archipelago' .
Right now the debugging session is not enabled. Browse to localhost:8001
. Click on the gray XDebug Helper icon at the top right of your window and select the green \"Debug\" button. This will tell chrome to set the xdebug session key when you reload the page.
Now set a breakpoint in your code, and refresh the page. If you have breakpoints set, either manually, or from leaving \"Break at first line in PHP scripts\" checked, you should have output now in the debugger.
If you are done actively debugging, it is best to click the green XDebug Helper icon and select \"Disable\". This will greatly improve speed and performance for your app in development. When you need to debug, just turn on debugging using the XDebug Helper button again.
If you would like to see the output of your xdebug logs, run the following script: docker exec -ti esmero-php bash -c 'tail -f /tmp/xdebug.log > /proc/1/fd/2'
Then, you can use the typical docker logs command on the esmero-php
container, and you will see the xdebug output: docker logs esmero-php -f
Xdebug makes accessing variables in Drupal kind of great. Many possibilities, including debugging for Twig templates. Happy debugging!
"},{"location":"xdebug/#appendix-xdebug-on-a-linux-host","title":"Appendix: XDebug on a linux host","text":"If you are developing on a linux machine, you may need to make a change to the xdebug configuration file.
/archipelago-deployment/xdebug
folder called xdebug.ini
and enter the following text: zend_extension=xdebug\n\n[xdebug]\nxdebug.mode=develop,debug\nxdebug.discover_client_host = 1\nxdebug.start_with_request=yes\n
php-debug:\n ...\n volumes:\n - ${PWD}:/var/www/html:cached\n # Bind mount custom xdebug configuration file...\n - ${PWD}/xdebug/xdebug.ini:/usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini\n
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Archipelago Commons Intro","text":"Archipelago Commons, or simply Archipelago, is an evolving Open Source Digital Objects Repository / DAM Server Architecture based on the popular CMS Drupal 9/10
and released under GLP V.3 License
.
Archipelago is a mix of deeply integrated custom-coded Drupal modules (made with care by us) and a curated and well-configured Drupal instance, running under a discrete and well-planned set of service containers.
Archipelago was dreamt as a multi-tenant, distributed, capable system (as its name suggests!) and can live isolated or in flocks of similar deployments, sharing storage, services, or -- even better -- just the discovery layer. Learn more about the different Software Services
used by Archipelago.
Archipelago's primary focus is to serve the greater GLAM community
by providing a flexible, consistent, and unified way of describing, storing, linking, exposing metadata and media assets. We respect identities and existing workflows. We endeavor to design Archipelago in ways that empower communities of every size and shape.
Finally, Archipelago tries to stay humble, slim, and nimble in nature with a small code base full of inline comments and @todos
. All of our work is driven by a clear and concise but thoughtful planned technical roadmap --updated in tandem with new releases.
Ingesting Only Digital Objects or Both Digital Objects and Collections uses similar processes, with a few key differences. Click here to jump to the Ingesting both Digital Objects and Collections and/or Creative Work Series (Compound) Objects section of this guide page.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"AMIviaSpreadsheets/#ingesting-only-new-digital-objects","title":"Ingesting Only New Digital Objects","text":"From either the main Content page or the AMI Sets List page, select the 'Start an AMI set' button to begin.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"AMIviaSpreadsheets/#step-1-plugin-selection","title":"Step 1: Plugin Selection","text":"Select the Plugin type you will be using from the dropdown menu.
Spreadsheet Importer (if using local CSV file)
*The Remote JSON API Importer
and additional remote import source options (for other repository systems) will be covered in separate tutorials following future releases.
Select 'Create New ADOs' as the Operation you would like to perform.
If using Google Sheets Importer:
If using Spreadsheet Importer:
Select the data transformation approach--how your source data will be transformed into ADO (Archipelago Digital Object) Metadata.
You will have 3 options for your data transformation approach:
You will also need to Select which columns contain filenames, entities or URLS where files can be fetched from. Select what columns correspond to the Digital Object types found in your spreadsheet source.
Lastly, for this step, you will need to select the destination Fields and Bundles for your New ADOs. If your spreadsheet source only contains Digital Objects, select Strawberry (Descriptive Metadata source) for Digital Object
Select your global ADO mappings.
ismemberof
collection membership relationship predicate column if applicable. For AMI source spreadsheets containing only non-Creative Work Series (Compound) Objects, only ismemberof
can be mapped properly. To use ispartof
relationship setup, please refer to the steps outlined in the separate section below.- `ismemberof` and/or `ispartof` (and/or whatever predicate corresponds with the relationship you are mapping)\n- these columns can be used to connect related objects using the object-to-object relationship that matches your needs\n- in default Archipelago configurations, `ismemberof` is used for Collection Membership and `ispartof` is used for Parent-Child Object Relationships (so a Child ADO would reference the Parent ADO in `ispartof`)\n- these columns can hold 3 types of values\n - empty (no value)\n - an integer to connect an object to another object's corresponding row in the same spreadsheet/CSV\n * Ex: Row 2 corresponds to a Digital Object Collection; for a Digital Object corresponding to Row 3, the 'ismemberof' column contains a value of '2'. The Digital Object in Row 3 would be ingested as a member of the Digital Object Collection in Row 2.\n - a UUID to connect with an already ingested object\n
node_uuid
column.Under the 'Base ADO mappings', select the label
column for ADO Label. This selection is only used as a fail-safe (in case your AMI JSON Ingest Template does not have any mapping for a column to be mapped to the JSON label
key, or your source data csv does not contain a label
if going Direct for data transformation).
Provide an optional ZIP file containing your assets.
The file upload size restrictions specified in your Archipelago instance will apply here (512MB maximum by default).
You will now see a message letting you know that 'Your source data was saved and is available as a CSV at linktotheAMIgenerated.csv
The message will also let you know that your New AMI Set was created and provide a link to the AMI Set page.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"AMIviaSpreadsheets/#step-7-ami-set-processing","title":"Step 7: AMI Set Processing","text":"Your newly created AMI Set will now need to be Processed.
If you clicked on the 'see it here' link in Step 6, you will be brought to the AMI Set page for review. You may also select Process
from the Operations
menu for the AMI set from the main AMI sets
page. From the Process
page you can review the JSON configuration for your set (determined by your selections in the preceding steps).
You may wish to double check the settings configured in your AMI Set in the Raw Metadata (JSON) on the AMI Set View
tab before Processing.
To Process this set, navigate to the Process
tab. You will have multiple options related to the Processing outcome for your AMI Set.
Enqueuing and File Processing Options
Select Confirm
to continue.
You will be returned to AMI sets
page and see a brief confirmation message regarding the Enqueuing and Processing options you selected.
If you chose to 'Confirm\" and Process your AMI Set immediately, proceed to Step 9: Processing and ADO Creation.
If the chose to 'Enqueue' your AMI Set and the Queue operations for your Archipelago instance have been configured, you can simply leave your AMI Set in the Queue for Processing on the preconfigured schedule. Common timing for AMI Set Processes schedules are typically setup to run every three to six hours. Contact your Archipelago Administrators for details about your particular Archipelago's Processing schedule.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"AMIviaSpreadsheets/#step-8-queue-manager-push-may-be-restricted-to-administrator-users-only","title":"Step 8: Queue Manager Push (may be restricted to Administrator Users only)","text":"If you chose to place your AMI set in the Queue to Process in step 7 and you wish to manually kickstart the Queue Processes, navigate to the Queue Manager found at /admin/config/system/queue-ui
. (Be sure to select the Queue Manager
under the System section, not the Queue Manager for Hydroponic Service
under the Archipelago section).
To Process your AMI Set immediately from the Queue Manager page, select the checkbox next to the 'AMI Digital Object Ingester Queue Worker'. Keep the Action
menu set to Batch Process
and click the Apply to selected items
button.
Your AMI set will now be Processed. You can follow the set's progress through the Processing queues
loading screen.
After your AMI set is Processed, you will receive confirmation messages letting you know your Digital Objects were successfully created.
From this message, you can click on each ADO title to review the new created Digital Object (or Collection) if you wish. Or, you may proceed to step 10.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"AMIviaSpreadsheets/#step-10-review-your-newly-created-digital-objects-directly-or-via-ami-set-report","title":"Step 10: Review your newly created Digital Objects directly or via AMI Set Report","text":"/admin/content
and review your newly created Digital Objects. After ensuring that files and metadata elements were mapped correctly, you may choose to change the Status for your Digital Objects to 'Published'.Option 2: Use the AMI Set Report
From the main AMI sets
page, select Report
from the Operations
menu for the AMI set you wish to review.
This Report will contain information related to the last Processing operation run against your AMI Set.
datetime
stamplevel
(INFO, WARNING, or ERRORS) applicabilitymessage
summarizing the Processing outcome--including a title/label link to the created ADO if successfuldetails
summary containing system information related to the operations.From either the main Content page or the AMI Sets List page, select the 'Start an AMI set' button to begin.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"AMIviaSpreadsheets/#steps-1-plugin-selection-step-2-operation-and-spreadsheet-source-selection","title":"Steps 1: Plugin Selection & Step 2: Operation and Spreadsheet Source Selection","text":"Follow the same instructions found above for Ingesting New Digital Objects.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"AMIviaSpreadsheets/#step-3-data-transformation-selections_1","title":"Step 3: Data Transformation Selections","text":"To import Digital Objects and Digital Object Collections and/or Creative Work Series (Compound) Objects at the same time/from same spreadsheet source, you will need to select the Custom (Expert Mode)
option for your data transformation approach.
You will then need to 'Select your Custom Data Transformation and Mapping Options' for each of your Digital Object, Collection, and Creative Work Series (Compound) types.
For Collection and Creative Work Series (Compound) objects:
Strawberry (Descriptive Metadata source) for Digital Object Collection
images
if you are uploading a thumbnail image for your Collection.For each non-Creative Work Series (Compound) Digital Object type in your spreadsheet source:
Strawberry (Descriptive Metadata source) for Digital Object
For example, for 'Map' type Digital Objects, you would select the following options (as depicted in this screenshot):
Select your global ADO mappings.
ismemberof
and ispartof
).- `ismemberof` and/or `ispartof` (and/or whatever predicate corresponds with the relationship you are mapping)\n- these columns can be used to connect related objects using the object-to-object relationship that matches your needs\n- in default Archipelago configurations, `ismemberof` is used for Collection Membership and `ispartof` is used for Parent-Child Object Relationships (so a Child ADO would reference the Parent ADO in `ispartof`)\n- these columns can hold 3 types of values\n - empty (no value)\n - an integer to connect an object to another object's corresponding row in the same spreadsheet/CSV\n * Ex: Row 2 corresponds to a Digital Object Collection; for a Digital Object corresponding to Row 3, the 'ismemberof' column contains a value of '2'. The Digital Object in Row 3 would be ingested as a member of the Digital Object Collection in Row 2.\n - a UUID to connect with an already ingested object\n
node_uuid
column.label
column for ADO Label. This selection is only used as a fail-safe (in case your AMI JSON Ingest Template does not have any mapping for a column to be mapped to the JSON label
key, or your source data csv does not contain a label
if going Direct for data transformation).ismemberof
is also selected in the ADO Parent Columns. In order to make sure that Digital Objects containing the corresponding UUID or spreadsheet row number for any corresponding Parent ADOs (Creative Work Series/Compounds), make sure ispartof
is also selected in the ADO Parent Columns.Select the corresponding Columns for the Required ADO mappings.
Follow the same instructions found in Steps 5-10 above. As part of step 10, make sure your Digital Objects were ingested into the corresponding Collections you mapped them to in your spreadsheet source. Please note, you will need to Publish the Digital Objects before the Objects will appear in the Collection's View page (whether accessed as a logged-in Admin user or Anonymous/Public user). Celebrate your next AMI success with another fresh coffee, tea, or cookie!
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"CODE_OF_CONDUCT/","title":"Archipelago - code of conduct / anti-harassment policy","text":"The Archipelago Commons community and the Metropolitan New York Library Council (METRO) are dedicated to providing a welcoming and positive experience for all participants, whether they are at a formal gathering, in a social setting, or taking part in activities online. This includes any forum, mailing list, wiki, web site, IRC channel, public meeting, conference, workshop/training or private correspondence. The Archipelago community welcomes participation from people all over the world, and these community members bring with them a wide variety of professional, personal and social backgrounds; whatever these may be, we treat colleagues with dignity and respect.
This Code of Conduct governs how we behave in public or in private. We expect it to be honored by everyone who represents the project officially or informally, claims affiliation with the project, or participates directly.
We ask that all community members adhere to the following expectations:
METRO and Archipelago have a zero-tolerance policy for verbal, physical, and sexual harassment. Anyone who is asked to stop a hostile or harassing behavior is expected to do so immediately. Here, for reference, are New York State\u2019s requirements.
Harassment includes: Offensive verbal comments related to sex, gender, ethnicity, nationality, socioeconomic status, sexual orientation, disability, physical appearance, body size, age, race, religion; sexual or discriminatory images in public spaces; deliberate intimidation; stalking; harassing photography or recording; sustained disruption of talks or other events; inappropriate physical contact; and unwelcome sexual attention.
Participation in discussions and activities should be respectful at all times. Please refrain from making inappropriate comments. Create opportunities for all people to speak, exercising tolerance of the perspectives and opinions of others. When we disagree, we do this in a polite and professional manner. We may not always agree. When frustrated, we back away and look for good intentions, not reasons to be more frustrated. When we see a flaw in a contribution, we offer guidance on how to fix it.
Participants in METRO and Archipelago communication channels violating this code of conduct may be sanctioned or expelled at the discretion of the organizers of the meeting (if the channel is an in-person event) or the Archipelago Advisory Board (if the channel is online).
"},{"location":"CODE_OF_CONDUCT/#initial-incident","title":"Initial Incident","text":"If you are being harassed, notice that someone else is being harassed, or have any other concerns, and you feel comfortable speaking with the offender, please inform the offender that he/she/they has affected you negatively. Oftentimes, the offending behavior is unintentional, and the accidental offender and offended will resolve the incident by having that initial discussion. Participants asked to stop any harassing behavior are expected to comply immediately.
"},{"location":"CODE_OF_CONDUCT/#escalation","title":"Escalation","text":"If the offender insists that they did not offend, if offender is actively harassing you, or if direct engagement is not a good option for you at this time, then you will need a third party to step in. To report any violation of the following code of conduct or if you have any questions or suggestions about this code of conduct, please contact archipelago-community@metro.org or fill out this form anonymously. This will be sent to leadership at METRO and the advisory board member currently acting as the Code of Conduct liaison. Our enforcement guidelines work in accordance with those published at the Contributor Covenant.
Upon review, if METRO leadership and the Code of Conduct Liaison determine that the incident constitutes harassment they may take any action they deem appropriate, including warning the offender, expulsion from the meeting or other community channels, or contacting a higher authority such as a representative from the offender's institution.
These policies draw from many other code of conduct documents, including but not limited to: code4lib, DLF, Islandora, ICG, Samvera, WikimediaNYC, and IDOCE
"},{"location":"I7solrImporter/","title":"Using the Islandora 7 Solr Importer","text":"From either the main Content page or the AMI Sets List page, select the 'Start an AMI set' button to begin.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"I7solrImporter/#step-1-plugin-selection","title":"Step 1: Plugin Selection","text":"Select the Islandora 7 Solr Importer from the dropdown menu.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"I7solrImporter/#step-2-section-1-solr-server-configuration","title":"Step 2, section 1: Solr Server Configuration","text":"You will only have the option to select 'Create New ADOs' as the Operation you would like to perform.
For the Solr Server Configuration section, you will need to provide all of the following information:
You will also need to select the Starting Row you would like to begin fetching results from, and the Number of Rows to fetch.
The Starting Row is an offset and defaults to 0, which is the most common (and recommended) approach. For the Total Number of Rows to Fetch, setting this to empty or null will automatically (refresh when selecting 'Next' button at bottom of page) prefill with the real Number Rows found by the Solr Query invoked. If you set this number higher than the actual results we will only fetch what can be fetched.
For larger collections, you may wish to create multiple/split AMI ingest sets by selecting a specified number of rows.
In this step you will need to make determinations on how you would like to map your Islandora 7 digital objects to your Archipelago repository and whether or not you would like to fetch additional file datastreams, such as those for thumbnail images, transcripts, OCRs/HOCRs, etc.
Selecting \"Collapse Multi Children Objects\" will collapse Children Datastreams into a single ADO with many attached files (single row in the generated AMI set .csv file). Book Pages will be fetched but also the Top Level PDF (if one exists in your Islandora instance).
In the Required ADO mappings, you will need to specify which Archipelago type you want to map each Islandora Content Model found in your source collection.
If you had left \"Collapse Multi Children Objects\" unselected, you will also need to specify the Islandora Content Model to ADO types mapping for possible Children.
- You can also specify an ADO (Object or Collection) to be used as the Parent of Imported Objects. By selecting an existing ADO (Object or Collection) here using the autocomplete/search, the generated AMI set .csv file will contain an 'ismemberof' column containing the UUID of the selected ADO for every row. - Under \"Additional Datastreams to Fetch\", you can select any number and/or combination of extra file datastreams to retrieve from your harvest. Please note that the I7 Importer will fetch every possible datastream that is present in your source I7 repository, but the additional file datastreams referenced may not be associated with actual files for every digital object.
Language from form itself:
Additional datastreams to fetch. OBJ datastream will always be fetched. Not all datastreams listed here might be present once your data is fetched.
Select the data transformation approach--how your source data will be transformed into ADO (Archipelago Digital Object) Metadata. As noted in the list below, 'Custom (Expert Mode)' is the recommended choice for AMI sets generated using the Islandora 7 Solr Importer plugin.
You will have 3 options for your data transformation approach:
You will also need to Select which columns contain filenames, entities or URLS where files can be fetched from. Select what columns correspond to the Digital Object types found in your spreadsheet source. If you fetched additional file datastreams during Step 2, you will see those columns listed here as well (see screenshot below for examples).
Lastly, for this step, you will need to select the destination Fields and Bundles for your New ADOs. If your spreadsheet source only contains Digital Objects, select Strawberry (Descriptive Metadata source) for Digital Object
Template
and use the AMI Ingest JSON template that corresponds with your metadata elements.Select images
, documents
, and audios
for the file source/fetching.
Select your global ADO mappings.
node_uuid
and any relationship predicate columns (such as ismemberof
).If using Sheet 1 of the Demo AMI Ingest set (found above):
ismemberof
and node_uuid
for ADO Parent columnslabel
column for ADO LabelFor standard Spreadsheet or Google Sheets AMI ingests, you would use this step to provide an optional ZIP file containing your assets.
For your Islandora 7 Solr Importer process, the generated AMI set.csv file will contain the necessary URLs to the corresponding Islandora 7 file datastreams for each object as needed. Select next to skip this ZIP upload step and proceed.
After you provide a title for your AMI set under \"Please Name your AMI set\", select \"Press to Create Set\"
","tags":["AMI","Archipelago Multi Importer"]},{"location":"I7solrImporter/#step-6-ami-set-confirmation","title":"Step 6: AMI Set Confirmation","text":"You will now see a message letting your know your \"New AMI Set was created\". You will be able to review the generated .csv file directly from this page under Source Data File.
While you may immediately select \"Process\" from this AMI Set Confirmation page to use the Islandora 7 Importer generated .csv file as-is to ingest the ADOs in your AMI set, it is strongly recommended that you review the .csv file first. AMI is configured to trim unecessary (for Archipelago) and de-duplicate redundant Solr source data, but you may wish to pare down the sourced data even further and/or conduct general metadata review and cleanup before migrating your content. You will also likely want to make adjustments to your AMI Ingest JSON Template based on your review, depending on the variation of metadata columns/keys found in your source repostiory.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"I7solrImporter/#next-steps","title":"Next Steps","text":"To proceed with Processing your AMI Set, click here to be directed to the main Ingesting Digital Objects via Spreadsheets.
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"about/","title":"About this Documentation","text":"This documentation was generated with Material for MkDocs. The repo/branch is at https://github.com/esmero/archipelago-documentation/tree/1.3.0, and the site is built using the following Github workflow: https://github.com/esmero/archipelago-documentation/blob/1.3.0/.github/workflows/ci.yml.
","tags":["Documentation","Contributing"]},{"location":"about/#contributing","title":"Contributing","text":"pip install mkdocs-material mike
.mike delete --all && mike deploy 1.0.0 default && mike set-default 1.0.0 && mike serve
to see and test changes.This software is a Metropolitan New York Library Council Open-Source initiative and part of the Archipelago Commons project.
"},{"location":"acknowledgments/#license","title":"License","text":"GPLv3
"},{"location":"ami_index/","title":"Archipelago Multi-Importer (AMI)","text":"Archipelago Multi-Importer (AMI) is a module for batch/bulk/mass ingests of Archipelago digital objects (ADOs) and collections. AMI also enables you to perform batch administrative actions, such as updating, patching/revising, or deleting digital objects and collections. AMI's Solr Importer plugin can be used to create AMI ingests and migrating content from existing Solr-sourcable digital repositories (such as Islandora 7).
","tags":["AMI","Archipelago Multi Importer"]},{"location":"ami_index/#ami-overview-and-under-the-hood-explanations","title":"AMI Overview and Under-the-Hood Explanations","text":"From the desk of Diego Pino
AMI provides Tabulated data ingest for ADOs with customizable input plugins. Each Spreadsheet (or Google Spreadsheet) goes through a Configuration Multi-step setup and generates at the end an AMI Set. AMI Sets then can be enqueued or directly ingested, its generated Objects purged and reingested again, its source data (generated and enriched with UUIDS) CSV replaced, improved and uploaded again and ingested.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"ami_index/#learn-more-about-metadata-in-archipelago-and-ami","title":"Learn More about Metadata in Archipelago and AMI","text":"Please review the Metadata in Archipelago overview to learn about Archipelago's unique approach to metadata and how this applies in the context of AMI set adminstration.
Click to read the full AMI 0.4.0 (Archipelago - 1.0.0) Pre-Release Notes.","tags":["AMI","Archipelago Multi Importer"]},{"location":"ami_index/#setup-steps","title":"Setup Steps","text":"AMI has Ingest, Update and Patch capabilities. AMI has a plugin system to fetch data. The data can come from multiple sources and right now CSV/EXCEL or Google Spreadsheets are the ones enabled. It does parent/children validation, makes sure that parents are ingested first, cleans broken relationships, allows arbitrary multi relations to be generated in a single ROW (ismemberof, partOf, etc) pointing to other rows or existing ADOs (via UUIDs) and can process rows directly as JSON or preprocessed via a Metadata Display entity (twig template) capable of producing JSON output. These templates can be configured by \u201ctype\u201d, Articles v/s 3DModel can have different ones. Even which columns contain Files can be configured at that level.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"ami_index/#ami-set-entity","title":"AMI Set Entity","text":"Ami Sets are special custom entities that hold an Ingest Strategy generated via the previous Setup steps (as JSON with all it's settings), a CSV with data imported from the original source (with UUIDs prepopulated if they were not provided by the user). These AMI sets are simpler and faster than \u201cbatch sets\u201d because they do not have a single entry per Object to be ingested. All data lives in a CSV. This means the CSV of an AMI set can be corrected and reuploaded. Users can then Process a Set either putting the to be ingested ADOs in the queue and let Hydroponics Service do the rest or directly via Batch on the UI. ADOs generated by a set can also be purged from there. These sets can also be created manually if needed of any of the chosen settings modified anytime. Which AMI set generated the Ingest is also tracked in a newly created ADO\u2019s JSON and any other extra data (or fixed data e.g common Rights statements, or LoD) can be provided by a Twig Template. Ingest is amazingly fast. We monitored Ingest with Remote URL(islandora Datastreams) files of 15Mbytes average at a speed of 2 seconds per Object (including all post processing) continuously for a set of 100+.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"ami_index/#search-and-replace","title":"Search and Replace","text":"This module also provides a simple search/replace text VBO action (handles JSON as text) and a full blown JSONPATCH VBO action to batch modify ADOs. The last one is extremely powerful permitting multiple operations at the same time with tests. E.g replace a certain value, add another value, remove another value only if a certain test (e.g \u201ctype\u201d:\u201dArticle\u201d and \u201cdate_of_digital\u201d: \u201c2020-09-09\u201d) matches. If any tests fail the whole operation will be canceled for that ADO. An incomplete \u201cWebform\u201d VBO action is present but not fully functional yet. This one allows you to choose a Webform, a certain element inside that Webform and then find and replace using the same Interface you would see while editing/adding a new ADO via the web form workflow.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"ami_index/#getting-started-with-ami","title":"Getting started with AMI","text":"You can access AMI through the AMI Sets
tab on the main Content page found at /admin/content
or directly at /amiset/list
.
If you plan on using the Google Sheets Importer option, you will need to Configure the Google Sheets API.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"ami_index/#example-spreadsheetcsv","title":"Example Spreadsheet/CSV","text":"Please refer to or use a fresh/new copy of the Demo Archipelago Digital Objects (ADOs) spreadsheet to import a small set of Digital Objects, using the same assets part of the One-Step Demo content ingest guide.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"ami_index/#example-json-template","title":"Example JSON template","text":"This JSON template can be used during the Data Transformation (step 3) of your AMI Import. This particular template corresponds with the metadata elements found in the Default Descriptive Metadata and Default Digital Object Collection webforms shipped with Archipelago 1.0.0.
Click to view the example 1.0.0 AMI JSON templateTo use this template, copy and paste the JSON below directly into a new Metadata Display, found here for a local http://localhost:8001/metadatadisplay/list
or http://yoursite.org/metadatadisplay/list
. Select JSON
as the 'Primary mime type this Twig Template entity will generate as output' for this new Metadata Display.
{\n \"type\": {{ data.type|json_encode|raw }},\n \"label\": {{ data.label|json_encode|raw }},\n \"issue_number\": {{ data.issue_number|json_encode|raw }},\n \"interviewee\": {{ data.interviewee|json_encode|raw }},\n \"interviewer\": {{ data.interviewer|json_encode|raw }},\n \"duration\": {{ data.duration|json_encode|raw }},\n \"website_url\": {{ data.website_url|json_encode|raw }},\n \"description\": {{ data.description|json_encode|raw }},\n \"date_created\": {{ data.date_created|json_encode|raw }},\n \"date_created_edtf\": {{ data.date_created_edtf|json_encode|raw }},\n \"date_created_free\": {{ data.date_created_free|json_encode|raw }},\n \"creator\": {{ data.creator|json_encode|raw }},\n \"creator_lod\": {{ data.creator_lod|json_encode|raw }},\n \"publisher\": {{ data.publisher|json_encode|raw }},\n \"language\": {{ data.language|json_encode|raw }},\n \"ismemberof\": [],\n \"ispartof\": [],\n \"sequence_id\": {{ data.sequence_id|json_encode|raw }}, \n \"owner\": {{ data.owner|json_encode|raw }},\n \"local_identifier\": {{ data.local_identifier|json_encode|raw }},\n \"related_item_host_title_info_title\": {{ data.related_item_host_title_info_title|json_encode|raw }},\n \"related_item_host_display_label\": {{ data.related_item_host_display_label|json_encode|raw }},\n \"related_item_host_type_of_resource\": {{ data.related_item_host_type_of_resource|json_encode|raw }},\n \"related_item_host_local_identifier\": {{ data.related_item_host_local_identifier|json_encode|raw }},\n \"related_item_note\": {{ data.related_item_note|json_encode|raw }},\n \"related_item_host_location_url\": {{ data.related_item_host_location_url|json_encode|raw }},\n \"note\": {{ data.note|json_encode|raw }},\n \"physical_description_note_condition\": {{ data.physical_description_note_condition|json_encode|raw }},\n \"note_publishinginfo\": {{ data.note_publishinginfo|json_encode|raw }},\n \"physical_location\": {{ data.physical_location|json_encode|raw }},\n \"physical_description_extent\": {{ data.physical_description_extent|json_encode|raw }},\n \"date_published\": {{ data.date_published|json_encode|raw }},\n \"date_embargo_lift\": {{ data.date_embargo_lift|json_encode|raw }},\n \"rights_statements\": {{ data.rights_statements|json_encode|raw }},\n \"rights\": {{ data.rights|json_encode|raw }},\n \"subject_loc\": {{ data.subject_loc|json_encode|raw }},\n \"subject_lcnaf_personal_names\": {{ data.subject_lcnaf_personal_names|json_encode|raw }},\n \"subject_lcnaf_corporate_names\": {{ data.subject_lcnaf_corporate_names|json_encode|raw }},\n \"subject_lcnaf_geographic_names\": {{ data.subject_lcnaf_geographic_names|json_encode|raw }},\n \"subject_lcgft_terms\": {{ data.subject_lcgft_terms|json_encode|raw }},\n \"subject_wikidata\": {{ data.subject_wikidata|json_encode|raw }},\n \"edm_agent\": {{ data.edm_agent|json_encode|raw }},\n \"term_aat_getty\": {{ data.term_aat_getty|json_encode|raw }},\n \"viaf\": {{ data.viaf|json_encode|raw }},\n \"pubmed_mesh\": {{ data.pubmed_mesh|json_encode|raw }},\n \"europeana_concepts\": {{ data.europeana_concepts|json_encode|raw }},\n \"europeana_agents\": {{ data.europeana_agents|json_encode|raw }},\n \"europeana_places\": {{ data.europeana_places|json_encode|raw }},\n \"geographic_location\": {{ data.geographic_location|json_encode|raw }},\n \"subjects_local_personal_names\": {{ data.subjects_local_personal_names|json_encode|raw }},\n \"subjects_local\": {{ data.subjects_locals|json_encode|raw }},\n \"audios\": [],\n \"images\": [],\n \"models\": [],\n \"videos\": [],\n 
\"documents\": [],\n \"as:generator\": {\n \"type\": \"Create\",\n \"actor\": {\n \"url\": {{ setURL|json_encode|raw }},\n \"name\": \"ami\",\n \"type\": \"Service\"\n },\n \"endTime\": \"{{\"now\"|date(\"c\")}}\",\n \"summary\": \"Generator\",\n \"@context\": \"https:\\/\\/www.w3.org\\/ns\\/activitystreams\"\n },\n \"upload_associated_warcs\": []\n }\n
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"ami_lod_rec/","title":"Using AMI's Linked Data Reconciliation","text":"Archipelago Multi Importer (AMI)'s Linked Data Reconciliation tool can be used to enrich your metadata with Linked Data (LoD). Using this tool, you can map values from your topical/subject metadata elements to your preferred LoD vocabulary source. These mappings can then be transformed via a corresponding Metadata Display (Twig) template to process the values into JSON-formatted metadata for your specified AMI set.
The aim of this tool is to automize as much of the reconciliation process as feasible within Archipelago. Please be aware that data reconciliation will still be in part a manual and potentially time intensive process.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"ami_lod_rec/#important-note-preliminary-pre-requisite-ami-set-configuration","title":"Important Note: Preliminary / Pre-requisite AMI Set Configuration","text":"In order to Reconciliate an AMI Set, you will need to have selected the 'Template' or 'Custom' data transformation approach (then also, via 'Template' for your Digital Object or Collection types) during Step 3 : Data Transformation of your AMI Set configuration.
Your source spreadsheet will also need to contain at least one column containing terms/names (values) you want to reconcile against an LoD Authority Source. Multiple values should be separated by '|@|'.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"ami_lod_rec/#step-1-select-the-ami-set-you-will-be-working-with","title":"Step 1: Select the AMI Set you will be working with.","text":"From the main AMI Sets List page, click on your AMI Set's Name, or select the 'Edit' option from the Operations menu on the right-hand side of the Sets list.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"ami_lod_rec/#step-2-reconcile-lod-tab","title":"Step 2: Reconcile LoD Tab","text":"Navigate to the Reconcile LoD tab.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"ami_lod_rec/#step-3-lod-reconciling-selections","title":"Step 3: LoD Reconciling Selections","text":"From the list of columns from your spreadsheet source, select which columns you want to reconcile against LoD providers.
Under the LoD Sources section, select how your chosen Columns will be LoD reconciled. - LoD reconcile options will be on the left, LoD Authority Sources will be on the right. - Example: 'local_subjects' will be mapped to 'LoC subjects (LCSH)'
Full list of potential LoD Authority SourcesTo preview the values contained in the column(s) you selected, click the 'Inspect cleaned/split up column values' button.
Tip: This preview step provides you with the opportunity to return to your AMI Set source CSV and make any necessary label/term corrections such as outliers and formatting errors before processing. This can be done multiple times until your source set is fully prepared. If using this workflow, you will tick the 'Re-process only adding new terms/LoD Authority Sources' processing option after replacing your updated source CSV (see screenshot below)
When ready, there are multiple processing options to select from depending on your current need/workflow. - To process immediately, select 'Process LoD from Source' - To enqueue the batch process, select 'Enqueue but do not process Batch in real time. - To add new data (i.e. terms, LoD Authority Sources) to existing reconciliation (e.g after replacing source CSV data), select 'Re-process only adding new terms/LoD Authority Sources
Important note: if you have previously run LoD Reconciliation for your AMI set, this action will overwrite any manually corrected LoD on your Processed CSV. Please make sure you have a backup if unsure.
Depending on the size of your AMI Set, the Reconciliation processing may take a few minutes.
When the process is finished, you will see a brief confirmation message.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"ami_lod_rec/#step-4-edit-reconciled-lod","title":"Step 4: Edit Reconciled LoD","text":"Open the 'Edit Reconciled LoD' tab.
You will see a table (form) containing: - Your Original term values (labels) - The CSV Column Header/Key from the source spreadsheet where the value is found - A Checked option you can use to denote that an LoD mapping has been reviewed/revisioned - The Linked Data Label and URL pairing selected during the LoD reconciliation process
The results table will show 10 original terms and mappings per page. You can advance through the pages using the page numbers and navigational arrows above and below the table.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"ami_lod_rec/#step-5-review-and-edit-your-reconciled-lod-mappings","title":"Step 5: Review and Edit your Reconciled LoD Mappings","text":"Review the LoD reconciliation mappings, to make sure the best terms were selected for your metadata.
As you advance through your review process, it is recommended that you use the 'Save Current LoD Page' at the bottom of each results page as you work. This will preserve the corrections you may have made and update the LoD Reconciled data for your AMI Set within the editing form.
When you have finished editing/reviewing your data, you must select 'Save all LoD back to CSV File' or else your LoD selections will not be preserved.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"ami_lod_rec/#step-6-ami-set-review-and-twig-metadata-display-preparation","title":"Step 6: AMI Set Review and Twig (Metadata Display) Preparation","text":"You will now need to make sure that the Metadata Display (Twig) Template you selected to use during your initial AMI Set configuration is setup to Process your LoD mapped Label and URL selections into your Digital Objects and Collections JSON metadata.
For every JSON key/element in your metadata that you need to process the LoD Reconciled data into, you need to specify in your Template that data for this element will be read from the 'Processed Data' LoD information.
In the following example Twig snippet, the \"subject_loc\" JSON key will map corresponding values from the 'Processed Data' (data.lod) LoD information into a newly created Digital Object/Collection during the AMI Set Processing.
\"subject_loc\": {{ data_lod.mods_subject_topic.loc_subjects_thing|json_encode|raw }},\n
The same general pattern can be adapted to apply to different mapping scenarios (original CSV source columns to Reconciled LoD Sources) as needed.
Full list of Column Options => Corresponding LoD SourcesTo proceed with Processing your AMI Set, click here to be directed to the main Ingesting Digital Objects via Spreadsheets.
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"ami_spreadsheet_overview/","title":"Spreadsheet Formatting Overview","text":"","tags":["AMI","Archipelago Multi Importer"]},{"location":"ami_spreadsheet_overview/#spreadsheet-formatting-overview","title":"Spreadsheet Formatting Overview","text":"There are multiple ways a spreadsheet/CSV file can be structured to work with AMI, depending on the data transformation and mapping you will be using.
Columns in your spreadsheet/CSV can be mapped to different data (files) and metadata elements (label, description, subjects, etc.).
It is recommended that different types of files are placed into separate columns--\"images\", \"documents\", \"models\", \"videos\", \"audios\", \"texts\".
/var/www/html/d8content/myAMIimage.jpg
s3://myAMIuploads/myAMIdocument.pdf
https://dogsaregreat.edu/dogs.tiff
Every spreadsheet/CSV file should contain the following Columns:
type
label
node_uuid
sequence_id
for Creative Work Series (compound) children objectsRecommended Columns:
ismemberof
and/or ispartof
(and/or whatever predicate corresponds with the relationship you are mapping)ismemberof
is used for Collection Membership and ispartof
is used for Parent-Child Object Relationships (so a Child ADO would reference the Parent ADO in ispartof
)You can use direct JSON snippets such as:
[{\"uri\": \"http://id.loc.gov/authorities/subjects/sh95008857\",\"label\": \"Digital libraries\"}]\n
- If you have an advanced twig template with the necessary logic, you can place data in cells that can be parsed and structured in various ways (such as multiple values separated by semicolons split accordingly, capitalization of values based on defined patterns, etc.) Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
","tags":["AMI","Archipelago Multi Importer"]},{"location":"ami_update/","title":"Using AMI's Update Operations","text":"Archipelago Multi Importer (AMI)'s Update Operations can be used to Update, Replace, or Append metadata values or files for existing Digital Objects and Collections found in your Archipelago. You can prepare and use AMI Update Sets in different ways using one of three functional operations, depending on your update needs. This guide will provide a general overview of the three main functions and how each operation may be useful.
","tags":["AMI","Archipelago Multi Importer","AMI Update Sets","Update","Replace","Append"]},{"location":"ami_update/#important-notes-preliminary-pre-requisites","title":"Important Notes: Preliminary / Pre-requisites","text":"You need to have existing Digital Objects or Collections (ADOs) in your Archipelago to work with. You should have a prepared AMI Update Set CSV that contains at least the following columns/headers:
You should be familiar with the basic mechanics of AMI Set Configuration noted in Steps 1-6.
Best Practices
For all AMI Update operations, it is strongly recommended to both:
Before Updating, use the 'Export Archipelago Digital Objects to CSV content item' Action available on the main Content
page and the Find and Replace
page menus to generate a CSV of your non-modified objects. If something unintended occurs with your Update execution, you could use this CSV of your non-modified objects to restore your objects (or just a field or two) as needed.
Create a small test batch CSV referencing one to two/three ADOs to test the execution of your desired Update actions on before running your larger Update Sets. There is no 'Undo' or 'Revert Changes' button that can be used for an AMI Update Set. You do have the option to 'Revert Revisions' on an object-by-object basis, but that is not ideal for reverting changes that were executing across large batches of ADOs. See the 'Checking Your Changes' documentation section for more information about reviewing and potentially reverting Revisions.
As with regular/Create New AMI Sets, you will have to select your preferred Data Tranformation configuration during Step 3 : of your AMI Update Set Configuration.
Caution with using Templates for Data Transformation
If you are planning to use the 'Template' or 'Custom (Export Mode)' data transformation approach for your AMI Update Set configuration, you will need to have prepared your corresponding AMI Ingest Template to account for the specific Update actions you have planned.
It is important to keep in mind that all of the metadata elements for your existing ADOs metadata may not necessarily be present in your AMI Update Set Source CSV. For example, you may have only prepared your AMI Update Set Source CSV to contain a limited number of headers/columns, such as only those required (node_uuid, label) and one or two metadata elements you wish to update (such as \"subjects\"). If you choose to pass your AMI Update Set through a Twig template, the output after Processing your AMI Update Set may overwrite your existing data if you do not have all of the necessary logic/checks in place to preserve the existing metadata if desired.
In other words, imagine your twig template contains this statement:
\"subjects\": {{ data.subjects|json_encode|raw }}, \n
Independently of IF your CSV contains \"subjects\" as a header/column, the Twig template will still output an empty \"subjects\", which, when using the \"Replace\" mode will wipe out any existing \"subjects\" in your ADO.
During any update operation (independently of the functional operation chosen) and IF you are using/passing your CSV through a template, AMI will provide an extra Context key for you to reference in your Twig Template. You can always reference 'dataOriginal.subjects' for example -- all dataOriginal.xx keys will contain the values of the existing metadata for your ADOs. This allows you to make \"smart\" templates that check IF a certain key/values exists, compare the unmodified (and to be modified) ADO(s) with the new data passed, then generate the desired output.
","tags":["AMI","Archipelago Multi Importer","AMI Update Sets","Update","Replace","Append"]},{"location":"ami_update/#update-set-processing-options","title":"Update Set Processing Options","text":"Beginning from Step 7, Processing of your AMI Set Configuration, select the Update operation that best corresponds to your targeted Update scenario.
","tags":["AMI","Archipelago Multi Importer","AMI Update Sets","Update","Replace","Append"]},{"location":"ami_update/#1-normal-update-operation","title":"1. Normal Update Operation","text":"The Normal Update Operation 'will update a complete existing ADO's configured target field with new JSON Content.' This will replace everything in an ADO with new processed data.
The Replace Update Operation Replace 'will replace JSON keys found in an ADO's configured target field with new JSON content. Not provided ones (fields/JSON keys) will be kept.'
The Append update operation 'will append values to existing JSON keys in an ADO's configured target field. New ones (fields/JSON keys) will be added too.'
For the other AMI Set Process options and steps, please refer to the information found from Steps 7-10 in this complementary documentation for Create New ADOs AMI Sets. See the 'Checking Your Changes' documentation section for more information about reviewing and potentially reverting Revisions.
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
","tags":["AMI","Archipelago Multi Importer","AMI Update Sets","Update","Replace","Append"]},{"location":"annotations/","title":"Annotations in Archipelago","text":"Archipelago extends Annotorius to provide W3C-compliant Web Annotations for Digital Objects. These annotations can be added per image (when multiple), edited for text and shape adjustments, and saved/discarded using the regular Edit mode (bonus track 1: temp storage that persists when you log out and come back in to your session). Archipelago also exposes a full API for WebAnnotations, that keeps track of which Images (referenced in the Strawberryfield @ as:image
) were annotated and creates the W3C valid entries inside your Digital Object's JSON @ ap:annotationCollection
(bonus track 2: multiple users can annotate the same resource, enabling digital scholarship collaboration opportunities).
Important Note: For any image-based Digital Objects you would like to apply annotations to, the Digital Object type
must be setup to display the image file(s) using the Open SeaDragon viewer. More information about about Managing Display Modes in Archipelago can be found here. Please stay tuned for updates announcing web annotation integration for Mirador 3.
https://yoursite.org/admin/structure/types/manage/digital_object/display/digital_object_viewmode_fullitem
You are now ready to get started adding annotations!
"},{"location":"annotations/#adding-and-saving-annotations","title":"Adding and Saving Annotations","text":"Shift
key. Click and then drag to apply either a Rectangular box or multi-point Polygon shape.ap:annotationCollection
key.Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"archifilepersistencestrategy/","title":"Archipelago's File Persistence Strategy","text":""},{"location":"archifilepersistencestrategy/#how-are-files-for-archipelago-digital-objects-ados-persisted-what-happens-with-those-fishtanks","title":"How are files for Archipelago Digital Objects (ADOs) persisted? (What happens with those fishtanks?)","text":"A few Event Subscribers/Data describing logics happen in a certain order:
User Uploads via a webform Element a new File or via Drush/Batch ingest that attaches (via JSONAPI) a file.
If the webform is involved, Archipelago acts quickly and calls directly (before the Node even exists) the file classifier, that will:
as:somefiletype
JSON structure into the main ADO
SBF JSON
, with info about the file, checksums, size, Drupal fids, uuid, etc. This is a heavy function part of the StrawberryfieldFilePersisterService
. It does a lot, making use of optimized logic, but may do more in the future to handle too-many/too-big files needs (FYI: solution will be simple, add to a queue and process later).The user finishes the form, saves and and confirms the ADO creation, and finally all the Node events fire.
On presave StrawberryfieldEventPresaveSubscriberAsFileStructureGenerator
runs and checks if 2.1 already was processed. This is needed since the user could have triggered an ingest via drush/JSONAPI/Webhooks etc. If all is well (this is a less expensive check) Archipelago continues.
On presave (next) StrawberryfieldEventPresaveSubscriberFilePersister
runs, checking all TEMPORARY files described in as:somefiletype
and actually copying them to the right \"desired\" location.
And on Save StrawberryfieldEventInsertFileUsageUpdater
also marking the file as \"being\" used by a Strawberry driven Node (different Event).
Note: Anytime we remove directly from the raw JSON a full as:somefiletype
structure of a sub-element from an as:structure
we force Archipelago to do all the above again, and Archipelago can regenerate technical metadata. This has been used when updating EXIF binaries or even when something went wrong (while testing, but this stuff is safe no worries). Eventually, there will be a BIG red button that does that if you do not like JSON editing.
Discussions related to Archipelago's file persistence strategy and planned potential strategies can be be found here: Strawberryfield Issues: 107, and here: Strawberryfield Issues: 76. This page will be updated with additional information following future developments.
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/","title":"Archipelago-deployment: upgrading Drupal 9 to Drupal 10 (1.1.0 to 1.3.0)","text":"","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/#what-is-this-documentation-for","title":"What is this documentation for?","text":"If you already have a well-set-up and well-loved Archipelago (1.1.0) running Drupal 9 (D9), this documentation will allow you to update to 1.3.0 on Drupal 10 (D10) without any major issues.
D9 is no longer supported as of the end of November 2023. D10 has been around for a little while, and even if every module is not supported yet, what you need and want for Archipelago has long been ready for D10. However, Archipelago is still D9 compatible if it's necessary for you to stay back a little longer.
","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/#requirements","title":"Requirements","text":"composer
and drush
.Backups are always going to be your best friends. Archipelago's code, database, and settings are mostly self-contained in your current archipelago-deployment
repo folder, and backing up is simple because of that.
On a terminal, cd
into your running archipelago-deployment
folder and shut down your docker-compose
ensemble by running the following:
docker-compose down\n
","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/#step-2","title":"Step 2:","text":"Verify that all containers are actually down. The following command should return an empty listing:
docker ps\n
If anything is still running, wait a little longer and run the command again.
","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/#step-3","title":"Step 3:","text":"Now let's tar.gz
the whole ensemble with data and configs. As an example we will save this into your $HOME
folder. As a good practice we append the current date (YEAR-MONTH-DAY) to the filename. Here we assume today is October 31st of 2023.
cd ..\nsudo tar -czvpf $HOME/archipelago-deployment-D9-20231031.tar.gz archipelago-deployment\ncd archipelago-deployment\n
The process may take a few minutes. Now let's verify that all is there and that the tar.gz
is not corrupt.
tar -tvvf $HOME/archipelago-deployment-D9-20231031.tar.gz \n
You will see a listing of files, and at the end you will see something like this: Archive Format: POSIX pax interchange format, Compression: gzip
. If corrupt (Do you have enough space? Did your ssh connection drop?) you will see the following:
tar: Unrecognized archive format\n
","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/#step-4","title":"Step 4:","text":"Restart your docker-compose
ensemble, and wait a little while for all to start.
docker-compose up -d\n
","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/#step-5","title":"Step 5:","text":"Export/backup all of your live Archipelago configurations (this allows you to compare/come back in case you lose something custom during the upgrade).
docker exec esmero-php mkdir config/backup\ndocker exec esmero-php drush cex --destination=/var/www/html/config/backup\n
Good. Now it's safe to begin the upgrade process.
","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/#upgrading-to-130","title":"Upgrading to 1.3.0","text":"","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/#step-1-edit-docker-composeryml","title":"Step 1: Edit docker-composer.yml","text":"First we are going to edit your docker-compose.yml file to reference the latest PHP container as needed. Starting in your Archipelago deployment directory location, run the following commands:
If you have not already, run:
docker-compose down\n
Then open your docker-compose.yml file:
nano docker-compose.yml\n
Inside your docker-compose.yml
file, page down to the php
section and change the image
section to match exactly as follows:
image: \"esmero/php-8.1-fpm:1.2.0-multiarch\"\n
Next page down to the iiif
section and change the image
section to match exactly as follows:
image: \"esmero/cantaloupe-s3:6.0.1-multiarch\"\n
Save your changes.
","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/#step-2-docker-pull-and-check","title":"Step 2: docker pull and check","text":"Time to fetch the latest branch and update our docker compose
and composer
dependencies. To pull the images and bring up the ensemble, run:
docker compose pull\ndocker compose up -d\n
Give all a little time to start. Please be patient. To ensure all is well, run (more than once if necessary) the following:
docker ps\n
You should see something like this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n355e13878b7e nginx \"/docker-entrypoint.\u2026\" 10 minutes ago Up 10 minutes 0.0.0.0:8001->80/tcp esmero-web\n86b685008158 solr:8.11.2 \"docker-entrypoint.s\u2026\" 10 minutes ago Up 10 minutes 0.0.0.0:8983->8983/tcp esmero-solr\na8f0d9c6d4a9 esmero/cantaloupe-s3:6.0.1-multiarch \"sh -c 'java -Dcanta\u2026\" 10 minutes ago Up 10 minutes 0.0.0.0:8183->8182/tcp esmero-cantaloupe\n6642340b2496 mariadb:10.6.12-focal \"docker-entrypoint.s\u2026\" 10 minutes ago Up 10 minutes 3306/tcp esmero-db\n0aef7df34037 minio/minio:RELEASE.2022-06-11T19-55-32Z \"/usr/bin/docker-ent\u2026\" 10 minutes ago Up 10 minutes 0.0.0.0:9000-9001->9000-9001/tcp esmero-minio\n28ee3fb4e7a7 esmero/php-8.1-fpm:1.2.0-multiarch \"docker-php-entrypoi\u2026\" 10 minutes ago Up 10 minutes 9000/tcp esmero-php\na81c36d51a81 esmero/esmero-nlp:fasttext-multiarch \"/usr/local/bin/entr\u2026\" 10 minutes ago Up 10 minutes 0.0.0.0:6400->6400/tcp esmero-nlp\n
Important here is the STATUS
column. It needs to be a number that goes up in time every time you run docker ps
again (and again).
Now we are going to tell composer
to update the key Drupal and Archipelago modules.
First we are going to disable and remove a few minor Drupal modules. Run the following commands (in order):
docker exec -ti esmero-php bash -c \"drush pm-uninstall fancy_file_delete\"\ndocker exec -ti esmero-php bash -c \"drush pm:uninstall quickedit\"\ndocker exec -ti esmero-php bash -c \"composer remove drupal/fancy_file_delete\"\n
Now update the versions used for these Drupal modules:
docker exec -ti esmero-php bash -c \"composer require drupal/jquery_ui_slider:^2 drupal/jquery_ui_effects:^2 drupal/jquery_ui:1.6 drupal/jquery_ui_datepicker:^2 drupal/jquery_ui_touch_punch:^1 drupal/better_exposed_filters:6.0.3 --no-update --with-all-dependencies\"\n
And now update one other Drupal module and the main Archipelago modules:
docker exec -ti esmero-php bash -c \"composer require 'drupal/views_bulk_operations:^4.2' 'strawberryfield/strawberryfield:1.3.0.x-dev' 'strawberryfield/webform_strawberryfield:1.3.0.x-dev' 'strawberryfield/format_strawberryfield:1.3.0.x-dev' 'strawberryfield/strawberry_runners:0.7.0.x-dev' 'archipelago/ami:0.7.0.x-dev' --no-update --with-all-dependencies\"\n
From inside your archipelago-deployment
repo folder we are now going to open up file permissions
for some of your most protected Drupal files.
sudo chmod 777 web/sites/default\nsudo chmod 666 web/sites/default/*settings.php\nsudo chmod 666 web/sites/default/*services.yml\n
","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/#step-4-disableremove-for-additional-select-drupal-modules","title":"Step 4: Disable/Remove for additional select Drupal modules","text":"We are going to remove additional select Drupal modules that are not 1.3.0 or D10 compliant.
Please run each of the following commands separately, in order, and do not skip any commands.
docker exec -ti esmero-php bash -c \"composer remove symfony/http-kernel symfony/yaml --no-update\"\ndocker exec -ti esmero-php bash -c \"composer require drupal/config_inspector:^2 --no-update\" \ndocker exec -ti esmero-php bash -c \"drush pm:uninstall jsonapi_earlyrendering_workaround\"\ndocker exec -ti esmero-php bash -c \"composer remove drupal/jsonapi_earlyrendering_workaround --no-update\"\ndocker exec -ti esmero-php bash -c \"composer require 'drupal/core:^10' 'drupal/core-recommended:^10' 'drupal/core-composer-scaffold:^10' 'drupal/core-project-message:^10' --update-with-dependencies --no-update\"\ndocker exec -ti esmero-php bash -c \"composer require 'drupal/core-dev:^10' --dev --update-with-dependencies --no-update\"\ndocker exec -ti esmero-php bash -c \"composer require 'drush/drush:^12' --no-update\"\ndocker exec -ti esmero-php bash -c \"composer require drupal/twig_tweak:^2 --no-update\"\ndocker exec -ti esmero-php bash -c \"composer require drupal/config_inspector:^2 --no-update\"\ndocker exec -ti esmero-php bash -c \"composer require drupal/config_update:2.0.x-dev --no-update\"\ndocker exec -ti esmero-php bash -c \"composer remove drupal/config_update_ui --no-update\"\ndocker exec -ti esmero-php bash -c \"composer require 'drupal/context:^5.0@RC' --no-update\"\ndocker exec -ti esmero-php bash -c \"composer require 'drupal/devel:^5.1' --no-update\"\ndocker exec -ti esmero-php bash -c \"composer remove drupal/sophron --no-update\"\ndocker exec -ti esmero-php bash -c \"composer remove fileeye/mimemap --no-update\"\ndocker exec -ti esmero-php bash -c \"composer require drupal/imagemagick:^3 --no-update\" \ndocker exec -ti esmero-php bash -c \"composer remove fileeye/mimemap --no-update\" \ndocker exec -ti esmero-php bash -c \"drush pm:uninstall jsonapi_earlyrendering_workaround\"\ndocker exec -ti esmero-php bash -c \"composer remove drupal/jsonapi_earlyrendering_workaround --no-update\"\ndocker exec -ti esmero-php bash -c \"composer require 'drupal/imce:^3.0' --no-update\" \ndocker exec -ti esmero-php bash -c \"composer require 'drupal/search_api_attachments:^9.0' --no-update\" \ndocker exec -ti esmero-php bash -c \"composer require 'drupal/twig_tweak:^3.2' --no-update\"\ndocker exec -ti esmero-php bash -c \"composer require 'drupal/webform_entity_handler:^2.0' --no-update\"\ndocker exec -ti esmero-php bash -c \"composer require 'drupal/webformnavigation:^2.0' --no-update\"\ndocker exec -ti esmero-php bash -c \"composer remove drupal/form_mode_manager --no-update\"\n
Well done! If you see no issues and all ends in Green colored messages, all is good! Jump to Step 5
","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/#what-if-all-is-not-ok-and-i-see-red-and-a-lot-of-dependency-explanations","title":"What if all is not OK, and I see red and a lot of dependency explanations?","text":"If you have manually installed packages via composer in the past that are NO longer Drupal 10 compatible you may see errors. In that case you need to check each package website's (normally https://www.drupal.org/project/the_module_name) and check if there is a Drupal 10 compatible version.
If so run:
docker exec -ti esmero-php bash -c \"composer require 'drupal/the_module_name:^VERSION_NUMBER_THAT_WORKS_ON_DRUPAL10_' --update-with-dependencies --no-update\" and run **Step 3 ** again (and again until all is cleared)\n
If not, try to find a replacement module that does something similar, but in any case you may end up having to remove before proceeding. Give us a ping/slack/google group/open a github ISSUE if you find yourself uncertain about this.
docker exec -ti esmero-php bash -c \"drush pm-uninstall the_module_name\"\ndocker exec -ti esmero-php bash -c \"composer remove drupal/the_module_name --no-update\"\n
","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/#step-5-update-composerjson","title":"Step 5: Update composer.json","text":"Now you need to update your composer.json
file to include an important patch. Starting in your Archipelago deployment directory location, run the following commands:
nano composer.json\n
Inside your composer.json
file, page down to the \"patches\"
section and change the section to match exactly as follows:
\"patches\": {\n \"drupal/form_mode_manager\": {\n \"D10 compatibility\": \"https://www.drupal.org/files/issues/2023-10-11/3297262-20.patch\"\n },\n \"drupal/ds\": {\n \"https://www.drupal.org/project/ds/issues/3338860\": \"https://www.drupal.org/files/issues/2023-04-04/3338860-5-d10-compatible_0.patch\"\n }\n }\n
Save your changes and then run:
docker exec -ti esmero-php bash -c \"composer update -W\"\n
","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/#step-6-one-final-round-of-drupal-module-updates","title":"Step 6: One final round of Drupal module updates","text":"We will now run a few more updates for additional Drupal modules.
Please run each of the following commands separately, in order, and do not skip any commands.
docker exec -ti esmero-php bash -c \"composer require mglaman/composer-drupal-lenient\"\ndocker exec -ti esmero-php bash -c \"composer config --merge --json extra.drupal-lenient.allowed-list '[\\\"drupal/form_mode_manager\\\"]'\"\ndocker exec -ti esmero-php bash -c \"composer require 'drupal/form_mode_manager:2.x-dev@dev'\"\ndocker exec -ti esmero-php bash -c \"composer require 'drupal/color:^1.0'\" \ndocker exec -ti esmero-php bash -c \"composer require drupal/hal\" \ndocker exec -ti esmero-php bash -c \"composer require drupal/aggregator\"\ndocker exec -ti esmero-php bash -c \"composer require drupal/ckeditor\" \ndocker exec -ti esmero-php bash -c \"composer require drupal/seven\"\ndocker exec -ti esmero-php bash -c \"composer require archipelago/archipelago_subtheme:1.3.0.x-dev\"\ndocker exec -ti esmero-php bash -c \"drush pm:uninstall quickedit\"\ndocker exec -ti esmero-php bash -c \"composer require drupal/quickedit drupal/classy drupal/stable\"\ndocker exec -ti esmero-php bash -c \"drush pm:uninstall hal\"\ndocker exec -ti esmero-php bash -c \"drush pm:install jquery_ui\"\n
Whew, that's a lot of module updates! Now run one final database update command:
docker exec -ti esmero-php bash -c \"drush updatedb\"\n
","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/#step-7-optional-syncs","title":"Step 7: Optional Syncs","text":"Optionally, you can sync your new Archipelago 1.3.0 and bring in all the latest configs and settings. For this you have two options (no worries, remember you made a backup!):
","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/#a-partial-sync-which-will-bring-new-configs-and-update-existing-ones-but-will-not-remove-ones-that-only-exist-in-your-custom-setup-eg-new-webforms-or-view-modes","title":"A Partial Sync, which will bring new configs and update existing ones but will not remove ones that only exist in your custom setup, e.g. new Webforms or View Modes.","text":"docker exec esmero-php drush cim -y --partial \n
","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/#a-complete-sync-which-will-bring-new-things-and-update-existing-but-will-also-remove-all-the-ones-that-are-not-part-of-130-its-a-like-clean-factory-reset","title":"A Complete Sync, which will bring new things and update existing but will also remove all the ones that are not part of 1.3.0. It's a like clean factory reset.","text":"docker exec esmero-php drush cim -y \n
If all goes well here and you see no errors it's time to reindex Solr
because there are new Fields. Run the following:
docker exec esmero-php drush search-api-reindex\ndocker exec esmero-php drush search-api-index\n
You might see some warnings related to modules dealing with previously non-existent data\u2014no worries, just ignore those.
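If you want to keep an eye on how far indexing has progressed, the Search API drush integration can report it (optional):
docker exec esmero-php drush search-api:status\n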
If you made it this far you are done with code/devops (are we ever ready?), and that means you should be able to (hopefully) stay in the Drupal 10 realm for a few years!
","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/#step-8-update-or-not-your-metadata-display-entities-and-menu-items","title":"Step 8: Update (or not) your Metadata Display Entities and menu items","text":"Recommended: If you want to add new templates and menu items 1.3.0 provides, run this:
docker exec -ti esmero-php bash -c 'scripts/archipelago/deploy.sh'\n
Once that is done, you can choose to update all Metadata Displays (twig templates) we ship with new 1.3.0 versions (heavily adjusted IIIF manifests, better Object description template using Macros). Before you do this, we strongly recommend that you first make sure to manually (copy/paste) backup any Twig templates you have modified. If unsure, do not run the command that comes after this warning! You can always manually copy the new templates from the d8content/metadatadisplays
folder which contains text versions (again, copy/paste) of each shipped template you can pick and use when you feel ready.
If you are sure (like really?) you want to overwrite the ones you modified (sure, just checking?), then you can run this:
docker exec -ti esmero-php bash -c 'scripts/archipelago/update_deployed.sh'\n
Done! (For realz now)
","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/#step-9-or-should-we-say-10","title":"Step 9 (or should we say 10)","text":"Please login to your Archipelago and test/check all is working! Enjoy 1.3.0 and Drupal 10. Thanks!
","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-UpgradeDrupalD9toD10/#need-help-blue-screen-missed-a-step-need-a-hug-and-such","title":"Need help? Blue Screen? Missed a step? Need a hug and such?","text":"If you see any issues or errors or need help with a step, please let us know (ASAP!). You can either open an issue
in this repository or use the Google Group. We are here to help.
GPLv3
","tags":["Archipelago-deployment","Drupal 9","Drupal 10"]},{"location":"archipelago-deployment-democontent/","title":"Adding Demo Archipelago Digital Objects (ADOs) to your Repository","text":"We make this optional since we feel not everyone wants to have Digital Objects from other people using space in their system. Still, if you are new to Archipelago we encourage you to do this. Its a simply way to get started without thinking too much. You can learn and test. Then delete and move over.
","tags":["Archipelago Digital Objects","Demo Content"]},{"location":"archipelago-deployment-democontent/#prerequisites","title":"Prerequisites","text":"","tags":["Archipelago Digital Objects","Demo Content"]},{"location":"archipelago-deployment-democontent/#the-new-way-archipelago-100-rc2-or-higher","title":"The new way Archipelago 1.0.0-RC2 or higher.","text":"jsonapi
drupal user and an admin
one and you can login and out of your server.admin
user. (If you followed one of the deployment guides, password will be archipelago
)admin
user. Content
-> Ami Sets
. You will see a single AMI Set
already in place. edit
Button), press on the little down arrow
and choose Process
. DESIRED ADOS STATUSES AFTER PROCESS
, change all from Draft to Published, leave Enqueue but do not process Batch in realtime
unchecked and press \"Confirm\". The Ingest will start and a progress bar will advance. Once ready a list of Ingest Objects should appear.jsonapi
drupal user and you can login and out of your server.Go into your archipelago-deployment
folder and into the d8content
folder that is inside it, e.g.
cd archipelago-deployment/d8content\ngit clone https://github.com/esmero/archipelago-recyclables\n
","tags":["Archipelago Digital Objects","Demo Content"]},{"location":"archipelago-deployment-democontent/#step-2-ingest-the-objects","title":"Step 2: Ingest the Objects","text":"docker exec -ti esmero-php bash -c 'd8content/archipelago-recyclables/deploy_ados.sh'\n
You will see multiple outputs similar to this:
Files in provided location:\n - anne_001.jpg\n - anne_002.jpg\n - anne_003.jpg\n - anne_004.jpg\n - anne_005.jpg\n - anne_006.jpg\n - anne_007.jpg\n - anne_008.jpg\n - anne_009.jpg\n - anne_010.jpg\nFile anne_001.jpg sucessfully uploaded with Internal Drupal file ID 5\nFile anne_002.jpg sucessfully uploaded with Internal Drupal file ID 6 \nFile anne_003.jpg sucessfully uploaded with Internal Drupal file ID 7\nFile anne_004.jpg sucessfully uploaded with Internal Drupal file ID 8\nFile anne_005.jpg sucessfully uploaded with Internal Drupal file ID 9 \nFile anne_006.jpg sucessfully uploaded with Internal Drupal file ID 10 \nFile anne_007.jpg sucessfully uploaded with Internal Drupal file ID 11 \nFile anne_008.jpg sucessfully uploaded with Internal Drupal file ID 12\nFile anne_009.jpg sucessfully uploaded with Internal Drupal file ID 13 \nFile anne_010.jpg sucessfully uploaded with Internal Drupal file ID 14\nNew Object 'Anne of Green Gables : Chapters 1 and 2' with UUID 9eb28775-d73a-4904-bc79-f0e925075bc5 successfully ingested. Thanks!\n
The gist here is that if the script says Thanks
you are good.
archipelago-deployment/d8content/archipelago-recyclables/deploy_ados.sh
Inside you will find lines like this one:
drush archipelago:jsonapi-ingest /var/www/html/d8content/archipelago-recyclables/ado/0c2dc01a-7dc2-48a9-b4fd-3f82331ec803.json --uuid=0c2dc01a-7dc2-48a9-b4fd-3f82331ec803 --bundle=digital_object --uri=http://esmero-web --files=/var/www/html/d8content/archipelago-recyclables/ado/0c2dc01a-7dc2-48a9-b4fd-3f82331ec803 --user=jsonapi --password=jsonapi --moderation_state=published;\n
What you want here is to modify/replace the absolute paths that point your demo objects (.json) and their assets (folders with the same name). Basically replace every entry of /var/www/html/d8content/archipelago-recyclables/
with the path to archipelago-recyclables
.
If you have trouble running this or see errors or need help with a step (its only two steps), please let us know (ASAP!). You can either open an issue
in this repository or use the Google Group. We are here to help.
GPLv3
","tags":["Archipelago Digital Objects","Demo Content"]},{"location":"archipelago-deployment-live-gitworkflow/","title":"Managing, sheltering, pruning and nurturing your own custom Archipelago","text":"Now that you have your base Archipelago Live Deployment running (Do you? If not, go back!) you may be wondering about things like:
Archipelagos
are living beings. They evolve and become beautiful, closer and closer to your needs. Because of that resetting
your particularities on every Archipelago
code release is not a good idea, nor even recommended. What you want is to keep your own Drupal Settings
\u2014your facets, your themes, your Solr fields, your own modules, and all their configurations\u2014safe and be able to restore all in case something goes wrong.
The ones we ship with every Release will reset
your Archipelago's settings to Factory defaults if applied wildly
.
This is where Github
comes in place.
Prerequisites:
git config --global --edit
on your Live Instance and Set your user name/email/etc.Vi
! In case of emergency/panic press ESC
and type :x
to escape and/or run away in terror. To edit Press i
and uncomment the lines. Once Done press ESC
and type :x
to save.Let's fork https://github.com/esmero/archipelago-deployment-live under your own Account via the web. Happy Note: Since 2021 also keeping forked branches in sync with the origin can be done via the UI directly.
","tags":["Github"]},{"location":"archipelago-deployment-live-gitworkflow/#12-connect-your-live-instance-terminal","title":"1.2 Connect your Live instance terminal.","text":"Move to your repository's base folder, and let's start by adding your New Fork as a secondary Git Origin
. Replace in this command yourOwnAccount
with (guess what?) your own account:
git remote add upstream https://github.com/yourOwnAccount/archipelago-deployment-live\n
Now check if you have two remotes (origin
=> This repository, upstream
=> your own fork):
git remote -v\n
You will see this:
origin https://github.com/esmero/archipelago-deployment-live (fetch)\norigin https://github.com/esmero/archipelago-deployment-live (push)\nupstream https://github.com/yourOwnAccount/archipelago-deployment-live (fetch)\nupstream https://github.com/yourOwnAccount/archipelago-deployment-live (push)\n
Good!
","tags":["Github"]},{"location":"archipelago-deployment-live-gitworkflow/#13-now-lets-create-from-your-current-live-instance-a-new-branch","title":"1.3 Now let's create from your current Live Instance a new Branch.","text":"We will push this branch into your Fork and it will be all yours to maintain. Please replace yourOwnOrg
with any Name you want for this. We like to keep the current Branch name in place after your personal prefix:
git checkout -b yourOwnOrg-1.0.0-RC3\n
Good, you now have a new local
branch named yourOwnOrg-1.0.0-RC3
, and it's time to decide what we are going to push into Github.
By default our deployment strategy (this repository) ignores a few files you want to have in Github. Also, there are things like the Installed Drupal Modules and PHP Libraries (the Source Code), the Database, Caches, your Secrets (.env
file), and your Drupal settings.php
file. You FOR SURE do not want to have these in Github and are better suited for a private Backup Storage.
Let's start by push
ing what you have (no commits, your new yourOwnOrg-1.0.0-RC3
as it is) to your new Fork. From there on we can add new Commits and files:
git push upstream yourOwnOrg-1.0.0-RC3\n
And Git will respond with the following (use your yourOwnAccount
personal Github Access Token as password):
Username for 'https://github.com': yourOwnAccount\nPassword for 'https://yourOwnAccount@github.com': \nTotal 0 (delta 0), reused 0 (delta 0)\nremote: \nremote: Create a pull request for 'yourOwnOrg-1.0.0-RC3' on GitHub by visiting:\nremote: https://github.com/yourOwnAccount/archipelago-deployment-live/pull/new/yourOwnOrg-1.0.0-RC3\nremote: \nTo https://github.com/yourOwnAccount/archipelago-deployment-live\n * [new branch] yourOwnOrg-1.0.0-RC3 -> yourOwnOrg-1.0.0-RC3\n
","tags":["Github"]},{"location":"archipelago-deployment-live-gitworkflow/#15-first-commit","title":"1.5 First Commit","text":"Right now this new Branch (go and check it out at https://github.com/yourOwnAccount/archipelago-deployment-live/tree/yourOwnOrg-1.0.0-RC3) will not differ at all from 1.0.0-RC3. That is OK. To make your Branch unique, what we want is to \"commit\" our changes. How do we do this?
Let's add our composer.json
and composer.lock
to our change list. Both of these files are quite personal, and as you add more Drupal Modules, dependencies, or Upgrade your Archipelgo and/or Drupal Core and Modules, all of these corresponding files will change. See the -f
? Because our base deployment ignores that file and you want it, we \"Force\" add it. Note: At this stage composer.lock
won't be added at all because it's still the same as before. So you can only \"add\" files that have changes.
git add drupal/composer.json \ngit add -f drupal/composer.lock\n
Now we can see what is new and will be committed by executing:
git status\n
You may see something like this:
On branch yourOwnOrg-1.0.0-RC3 \nChanges to be committed:\n (use \"git restore --staged <file>...\" to unstage)\n new file: drupal/composer.json\n\nChanges not staged for commit:\n (use \"git add <file>...\" to update what will be committed)\n (use \"git restore <file>...\" to discard changes in working directory)\n modified: drupal/scripts/archipelago/deploy.sh\n modified: drupal/scripts/archipelago/update_deployed.sh\n\nUntracked files:\n (use \"git add <file>...\" to include in what will be committed)\n deploy/ec2-docker/docker-compose.yml\n drupal/.editorconfig\n drupal/.gitattributes\n
If you do not want to add each Changes not staged for commit
individually (WE recommend you only commit what you need. Be warned and take caution.), you can also issue a git add .
, which means add all.
git add drupal/scripts/archipelago/deploy.sh\ngit add drupal/scripts/archipelago/update_deployed.sh\ngit add deploy/ec2-docker/docker-compose.yml\n
In this case we are also committing docker-compose.yml
, which you may have customized and modified to your domain (See Install Guide Step 3), deploy.sh
and update_deployed.sh
scripts. If you ever need to avoid tracking certain files at all, you can edit the .gitignore
file and add more patterns to it (look at it, it's fun!).
git commit -m \"Fresh Install of Archipelago for yourOwnOrg\"\n
If you had your email/user account setup correctly (see Prerequisites) you will see:
Fresh Install of Archipelago yourOwnOrg\n 4 files changed, 360 insertions(+), 46 deletions(-)\n create mode 100644 deploy/ec2-docker/docker-compose.yml\n create mode 100644 drupal/composer.json\n
And now finally you can push this back to your Fork:
git push upstream yourOwnOrg-1.0.0-RC3\n
And Git will respond with the following (use your yourOwnAccount
personal Github Access Token as password):
Username for 'https://github.com': yourOwnAccount\nPassword for 'https://yourOwnAccount@github.com': \nEnumerating objects: 18, done.\nCounting objects: 100% (18/18), done.\nCompressing objects: 100% (10/10), done.\nWriting objects: 100% (10/10), 2.26 KiB | 2.26 MiB/s, done.\nTotal 10 (delta 5), reused 0 (delta 0)\nremote: Resolving deltas: 100% (5/5), completed with 5 local objects.\nTo https://github.com/yourOwnAccount/archipelago-deployment-live\n d9fa835..3427ce5 yourOwnOrg-1.0.0-RC3 -> yourOwnOrg-1.0.0-RC3\n
And done.
","tags":["Github"]},{"location":"archipelago-deployment-live-gitworkflow/#2-keeping-your-archipelago-modules-updated-during-releases","title":"2. Keeping your Archipelago Modules Updated during releases","text":"Releases in Archipelago are a bit different to other OSS projects. When a Release is done (let's say 1.0.0-RC2) we freeze the current release branches in every module we provide, package the release, and inmediatelly start with a new Release Cycle (6 months long normally) by creating in each repository a new Set of Branches (for this example 1.0.0-RC3). All new commits, fixes, improvements, features now will ALWAYS go into the Open/on-going new cycle branches (for this example 1.0.0-RC3), and once we are done we do it all over again. We freeze (for this example 1.0.0-RC3), and a new release cycle starts with fresh new \"WIP\" branches (for this example 1.1.0).
Some Modules like AMI or Strawberry Runners have their independent Version but are released together anyway, e.g. for 1.0.0-RC3 both AMI and Strawberry Runners are 0.2.0. Why? Because work started later than the core Archipelago and also because they are not really CORE. So what happens with main
branches? In our project main
branches are never experimental. They are always a 1:1 with the latest stable release. So main
will contain a full commit of 1.0.0-RC2 until we freeze 1.0.0-RC3 when main
gets that code. Over and over. Nice, right?
The following modules are the ones we update on every release:
strawberryfield/strawberryfield
strawberryfield/format_strawberryfield
strawberryfield/webform_strawberryfield
archipelago/ami
strawberryfield/strawberry_runners
We also update macro modules that are meant for deployment like this Repository and https://github.com/esmero/archipelago-deployment.
To keep your Archipelago up to date, especially once you \"go custom\" as described in this Documentation, the process is quite simple, e.g. to fetch latest 1.0.0-RC3
updates during the 1.0.0-RC3
release cycle run:
docker exec -ti esmero-php bash -c \"composer require strawberryfield/strawberryfield:dev-1.0.0-RC3 strawberryfield/format_strawberryfield:dev-1.0.0-RC3 strawberryfield/webform_strawberryfield:dev-1.0.0-RC3 archipelago/ami:0.2.0.x-dev strawberryfield/strawberry_runners:0.2.0.x-dev strawberryfield/strawberry_runners:0.2.0.x-dev archipelago/archipelago_subtheme:dev-1.0.0-RC3 -W\"\n
And then run any Database updates that may be needed:
docker exec -ti esmero-php bash -c \"drush updatedb\"\n
This will bring all the new code and all (except if there are BUGS!) should work as expected.
Note: Archipelago really tries hard to be as backwards compatible as possible and rarely will you see a non-documented or non-dealt-with deprecation.
Note 2: We of course recommend always running the Stable (frozen) release, but since code is plastic and fixes will go into a WIP open branch, you should be safe enough to move all modules together.
You can run these commands any time you need, and while the release is open you will always get the latest code (even if it's always the same branch). Please follow/subscribe to each Module's Github to be aware of changes/issues and improvements.
","tags":["Github"]},{"location":"archipelago-deployment-live-gitworkflow/#3-keeping-your-archipelagos-drupal-contributed-modules-and-core-updated","title":"3. Keeping your Archipelago's Drupal Contributed Modules and Core updated","text":"","tags":["Github"]},{"location":"archipelago-deployment-live-gitworkflow/#31-contributed-modules","title":"3.1 Contributed Modules.","text":"To keep your Archipelago's Drupal up to date check your Drupal at https://yoursite.org/admin/modules/update. Make sure you check mostly (yes mostly, no need to overreact) for Security Updates. Not every Drupal contributed module (project) keeps backwards compatibility, and we try to test every version we ship (as in this repository's composer.lock
files) before releasing. Once you detect a major change/requirement, visit the Project's Changelog Website, and take some time reading it. If you feel confident it's not going to break all, copy the suggested Composer command, e.g. if you visit https://www.drupal.org/project/google_api_client/releases/8.x-3.2 you will see that the update is suggested as:
Install with Composer: $ composer require 'drupal/google_api_client:^3.2'\n
Using the same module as an example, before doing any final updates, check your current running version (take note in case you need to revert):
docker exec -ti esmero-php bash -c \"composer info 'drupal/google_api_client\"\n
Keep the version around.
Now let's update, which means using the suggested command translated to our own Docker world like this (notice the -W
):
docker exec -ti esmero-php bash -c \"composer require 'drupal/google_api_client:^3.2 -W\"\n
And then run any Database updates that may be needed:
docker exec -ti esmero-php bash -c \"drush updatedb\"\n
This will update that module. Test your website. Depending on what you update, you want to focus first on the functionality it provides, and then create/edit/delete a fictitious Digital Object to ensure it did not add any errors to your most beloved Digital Objects workflows.
If you see errors or you feel it's not acting as it should, you can revert by doing:
docker exec -ti esmero-php bash -c \"composer require 'drupal/google_api_client:^VERSION_YOU_KEPT_AROUND -W\"\n
And then run any Database updates that may be needed:
docker exec -ti esmero-php bash -c \"drush updatedb\"\n
If this happens we encourage you to please \ud83d\udc4f share your findings with our community/slack/Github ISSUE here.
","tags":["Github"]},{"location":"archipelago-deployment-live-gitworkflow/#31-drupal-core-inside-the-same-major-version","title":"3.1 Drupal Core inside the same major version:","text":"This is quite similar to a contributed module but normally involves at least 3 dependencies and of course larger changes.
","tags":["Github"]},{"location":"archipelago-deployment-live-gitworkflow/#exact-version","title":"Exact Version","text":"Inside the same major version, e.g. inside Drupal 9, if you are currently running Drupal 9.0.1
and you want to update to an exact latest (as I write 9.2.4
):
docker exec -ti esmero-php bash -c \"composer require drupal/core:9.2.4 drupal/core-dev:9.2.4 drupal/core-composer-scaffold:9.2.4 drupal/core-project-message:9.2.4 --update-with-dependencies\"\n
Or under Drupal 8, if you are currently running Drupal 8.9.14
and you want to update to an exact latest (as I write 8.9.18
):
docker exec -ti esmero-php bash -c \"composer require drupal/core-dev:8.9.18 drupal/core:8.9.18 drupal/core-composer-scaffold:8.9.18 --update-with-dependencies\"\n
And then for both cases run any Database updates that may be needed:
docker exec -ti esmero-php bash -c \"drush updatedb\"\n
","tags":["Github"]},{"location":"archipelago-deployment-live-gitworkflow/#alternative-major-version","title":"Alternative Major Version","text":"If you want to only remember a single command
and want to be sure to also get all extra packages for Drupal 9, run:
docker exec -ti esmero-php bash -c \"composer require drupal/core-dev:^9 drupal/core:^9 drupal/core-composer-scaffold:^9 drupal/core-project-message:^9 -W\"\n
Or for Drupal 8:
docker exec -ti esmero-php bash -c \"composer require drupal/core-dev:^8 drupal/core:^8 drupal/core-composer-scaffold:^8 drupal/core-project-message:^8 -W\"\n
And then for both cases run any Database updates that may be needed:
docker exec -ti esmero-php bash -c \"drush updatedb\"\n
This will always get you the latest Drupal
and dependencies
allowed by your composer.json
.
Since major versions may bring larger deprecations, contributed modules will stay behind, and the world (and your database may collapse), we really recommend that you do some tests first (locally) or follow one of our guides. We at Archipelago will always document a larger version update. Currently, the Drupal 8 to Drupal 9 Update is documented in detail here.
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback, or open an ISSUE in this Archipelago Deployment Live Repository.
Return to Archipelago Live Deployment.
","tags":["Github"]},{"location":"archipelago-deployment-live-moveToLive/","title":"Moving fromarchipelago-deployment
to archipelago-deployment-live
","text":"","tags":["Archipelago-deployment","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-moveToLive/#what-is-this-documentation-for","title":"What is this documentation for?","text":"If you have been using/running/populating an instance with Archipelago Digital Objects that was set up using our simpler-to-deploy but harder-to-customize archipelago-deployment strategy and can't wait to move to this one\u2014meant for a larger (and somehow easier to maintain and upgrade on the long run) instance\u2014but (wait!) you do not want to ingest again, set up again, configure users, etc. (You already did that!), this is your documentation.
","tags":["Archipelago-deployment","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-moveToLive/#what-is-this-documentation-not-for","title":"What is this documentation not for?","text":"To install an archipelago-deployment-live
from scratch or to keep (forever) syncing between the two deployment options in a quantum phase shifting eternum like a time crystal.
archipelago-deployment
as a basis.composer
, drush
, Linux Permissions, and git
of course.In a nutshell: archipelago-deployment-live
uses a different folder structure moving configuration storage, data storage outside of your webroot, and allows a much finer control of your settings (safer) and Docker containers. In a nutshell inside the first nutshell: archipelago-deployment-live
also ignores more files so keeping customized versions, your own packages, your own settings around, and version controlled is much easier. Lastly: archipelago-deployment-live
makes more use of Cloud Services, e.g. so if you have been running min.io
as local mounted storage you may now consider moving storage (files) to a cloud service like AWS S3.
In a nutshell: Since both run the same code and use the same Docker Containers, the data is actually the same. Everything is just persisted in different places.
","tags":["Archipelago-deployment","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-moveToLive/#getting-the-new-repo-in-place","title":"Getting the new repo in place","text":"First you need to clone this repository and (hopefully) store in the same parent folder to your current archipelago-deployment
one. For the purpose of this tutorial we will assume you have archipelago-deployment
cloned in this location: $HOME/archipelago-deployment
.
Locate your archipelago-deployment
folder in your terminal. Do an ls
to make sure you can see the folder (not the content) and run:
git clone https://github.com/esmero/archipelago-deployment-live\ncd archipelago-deployment-live\ngit checkout 1.0.0-RC3\ncd ..\ncd archipelago-deployment\n
Now you have side by side $HOME/archipelago_deployment
and $HOME/archipelago-deployment-live
.
This will give you the base structure.
Before touching anything let's start by generating a backup of your current deployment (safety first).
","tags":["Archipelago-deployment","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-moveToLive/#backing-up","title":"Backing up","text":"","tags":["Archipelago-deployment","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-moveToLive/#step-1","title":"Step 1:","text":"Shut down your docker-compose
ensemble. Inside your original archipelago-deployment
folder run this:
docker-compose down\n
","tags":["Archipelago-deployment","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-moveToLive/#step-2","title":"Step 2:","text":"Verify all containers are actually down:
docker ps\n
The following command should return an empty listing. If anything is still running, wait a little longer and run the previous command again.
","tags":["Archipelago-deployment","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-moveToLive/#step-3","title":"Step 3:","text":"Now let's tar.gz the whole ensemble with data and configs. As an example we will save this into your $HOME
folder. As a good practice we append the current date (YEAR-MONTH-DAY) to the filename. Here we assume today is December 1st of 2021:
sudo tar -czvpf $HOME/archipelago-deployment-backup-20211201.tar.gz ../archipelago-deployment\n
The process may take a few minutes. Now let's verify that all is there and that the tar.gz
is not corrupt:
tar -tvvf $HOME/archipelago-deployment-backup-20211201.tar.gz \n
You will see a listing of files. If corrupt (do you have enough space? did your ssh connection drop?) you will see:
tar: Unrecognized archive format\n
Done! If you are running a public instance we can allow ourselves to start Docker again to avoid downtime:
docker-compose up -d\n
","tags":["Archipelago-deployment","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-moveToLive/#the-directory-structures","title":"The directory structures","text":"Now that you backed all up we can spend some minutes looking at both directory structures.
If you observe both deployment strategies side by side you will inmediately notice the most important similarities and also differences:
archipelago-deployment Live archipelago-deployment.\n\u251c\u2500\u2500 config_storage\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 iiifconfig\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 nginxconfig\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 nginxconfig_selfcert\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 php-fpm\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 solrconfig\n\u251c\u2500\u2500 data_storage\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 db\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 iiifcache\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 iiiftmp\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 letsencrypt\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 minio-data\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 ngnixcache\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 selfcert\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 solrcore\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 solrlib\n\u251c\u2500\u2500 deploy\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 azure-kubernetes\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 ec2-docker\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 kubernetes\n\u251c\u2500\u2500 docs\n\u2514\u2500\u2500 drupal\n\u2502 \u251c\u2500\u2500 config\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 d8content\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 docs\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 drush\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 patches\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 persistent\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 private\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 scripts\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 vendor\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 web\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 xdebug\n
.\n\u251c\u2500\u2500 config\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 sync\n\u251c\u2500\u2500 d8content\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 metadatadisplays\n\u251c\u2500\u2500 docs\n\u251c\u2500\u2500 drush\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 Commands\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 sites\n\u251c\u2500\u2500 nginxconfigford8\n\u251c\u2500\u2500 patches\n\u251c\u2500\u2500 persistent\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 db\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 iiifcache\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 iiifconfig\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 miniodata\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 solrconfig\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 solrcore\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 solrlib\n\u251c\u2500\u2500 private\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 webform\n\u251c\u2500\u2500 scripts\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 archipelago\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 composer\n\u251c\u2500\u2500 vendor\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 archipelago\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 asm89\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 aws\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 behat\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 bin\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 brick\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 chi-teck\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 composer\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 consolidation\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 container-interop\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 cweagans\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 data-values\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 dflydev\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 doctrine\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 drupal\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 drush\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 easyrdf\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 egulias\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 enlightn\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 erusev\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 evenement\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 ezyang\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 fabpot\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 fileeye\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 firebase\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 frictionlessdata\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 google\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 graham-campbell\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 grasmash\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 guzzlehttp\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 instaclick\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 jcalderonzumba\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 jean85\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 jmikola\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 justinrainbow\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 laminas\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 league\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 lsolesen\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 maennchen\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 markbaker\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 masterminds\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 mglaman\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 mhor\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 mikey179\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 mixnode\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 ml\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 monolog\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 mtdowling\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 myclabs\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 nesbot\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 nette\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 nikic\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 paragonie\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 
pear\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 phar-io\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 phenx\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 phpdocumentor\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 phplang\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 phpmailer\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 phpoffice\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 phpoption\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 phpseclib\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 phpspec\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 phpstan\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 phpunit\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 professional-wiki\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 psr\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 psy\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 ralouphie\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 ramsey\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 react\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 sebastian\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seld\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 sirbrillig\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 solarium\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 squizlabs\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 stack\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 strawberryfield\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 swaggest\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 symfony\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 symfony-cmf\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 theseer\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 twbs\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 twig\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 typo3\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 vlucas\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 web64\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 webflo\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 webmozart\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 wikibase\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 wikimedia\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 zaporylie\n\u251c\u2500\u2500 web\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 core\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 libraries\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 modules\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 profiles\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 sites\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 themes","tags":["Archipelago-deployment","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-moveToLive/#the-data","title":"The Data","text":"
Let's start by focusing on the data
, in our case the Database, Solr, and File (S3 + Private) storage. Collapsing here a few folders will make this easier to read. Marked with a *
are matching folders that contain DB, Solr Core, the S3 min.io data (if you are using local storage) and also Drupal's very own private
folder:
.\n\u251c\u2500\u2500 config_storage\n\u251c\u2500\u2500 data_storage\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 * db *\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 iiifcache\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 iiiftmp\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 letsencrypt\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 * minio-data *\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 ngnixcache\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 selfcert\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 solrcore\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 solrlib\n\u251c\u2500\u2500 deploy\n\u251c\u2500\u2500 docs\n\u2514\u2500\u2500 drupal\n\u2502 \u251c\u2500\u2500 config\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 d8content\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 docs\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 drush\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 patches\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 persistent\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 * private *\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 scripts\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 vendor\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 web\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 xdebug\n
.\n\u251c\u2500\u2500 config\n\u251c\u2500\u2500 d8content\n\u251c\u2500\u2500 docs\n\u251c\u2500\u2500 drush\n\u251c\u2500\u2500 nginxconfigford8\n\u251c\u2500\u2500 patches\n\u251c\u2500\u2500 persistent\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 * db *\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 iiifcache\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 iiifconfig\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 * miniodata *\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 solrconfig\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 * solrcore *\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 solrlib\n\u251c\u2500\u2500 * private *\n\u251c\u2500\u2500 scripts\n\u251c\u2500\u2500 vendor\n\u251c\u2500\u2500 web\n","tags":["Archipelago-deployment","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-moveToLive/#copying-the-data-into-the-new-structure","title":"Copying the Data into the new Structure","text":"
To do so we need to stop Docker again. This is needed because Databases sometimes keep an open Change Log and Locks in place, and if there is any interaction or cron running, your data may end up corrupted.
","tags":["Archipelago-deployment","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-moveToLive/#step-1_1","title":"Step 1:","text":"Shut down your docker-compose
ensemble. Inside your original archipelago-deployment
folder run this:
docker-compose down\n
","tags":["Archipelago-deployment","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-moveToLive/#step-2_1","title":"Step 2:","text":"Verify all containers are actually down:
docker ps\n
","tags":["Archipelago-deployment","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-moveToLive/#step-3_1","title":"Step 3:","text":"We will copy DB, min.io (File and ADO storage as files) and Drupal's private (temporary files, caches) folders to its new place:
sudo cp -rpv persistent/db ../archipelago-deployment-live/data_storage/db\nsudo cp -rpv persistent/solrcore ../archipelago-deployment-live/data_storage/solrcore\nsudo cp -rpv persistent/miniodata ../archipelago-deployment-live/data_storage/minio-data\nsudo cp -rpv private ../archipelago-deployment-live/drupal/private\n
Running -rpv
will copy verbosely and recursively while preserving original permissions.
Done!
You can now start docker-compose
again:
docker-compose up -d\n
","tags":["Archipelago-deployment","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-moveToLive/#the-web","title":"The Web","text":"Collapsing again a few folders to aid in readability, we can now focus on your actual Drupal/Archipelago Code/Web and settings. To be honest (we are), you can easily reinstall and restore all this via composer
, but we can also move folders as a learning experience/time and bandwidth experience. Marked with a *
are matching folders you want to copy over:
.\n\u251c\u2500\u2500 config_storage\n\u251c\u2500\u2500 data_storage\n\u251c\u2500\u2500 deploy\n\u251c\u2500\u2500 docs\n\u2514\u2500\u2500 drupal\n\u2502 \u251c\u2500\u2500 * config *\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 d8content\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 docs\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 drush\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 patches\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 persistent\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 private\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 scripts\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 * vendor *\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 * web *\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 xdebug\n
.\n\u251c\u2500\u2500 * config *\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 sync\n\u251c\u2500\u2500 d8content\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 metadatadisplays\n\u251c\u2500\u2500 docs\n\u251c\u2500\u2500 drush\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 Commands\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 sites\n\u251c\u2500\u2500 nginxconfigford8\n\u251c\u2500\u2500 patches\n\u251c\u2500\u2500 persistent\n\u251c\u2500\u2500 private\n\u251c\u2500\u2500 scripts\n\u251c\u2500\u2500 * vendor *\n\u251c\u2500\u2500 * web *\n","tags":["Archipelago-deployment","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-moveToLive/#copying-the-web-into-the-new-structure","title":"Copying the Web into the new Structure","text":"
No need to stop Docker again. We can do this while your Archipelago is still running.
","tags":["Archipelago-deployment","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-moveToLive/#step-1_2","title":"Step 1:","text":"We will copy all important folders over. From your archipelago-deployment
folder run:
sudo cp -rpv vendor ../archipelago-deployment-live/drupal/vendor\nsudo cp -rpv web ../archipelago-deployment-live/drupal/web\nsudo cp -rpv config ../archipelago-deployment-live/drupal/config\n
And also, selectively, a few files we know you are very fond of!
sudo cp -rpv composer.json ../archipelago-deployment-live/drupal/composer.json\nsudo cp -rpv composer.lock ../archipelago-deployment-live/drupal/composer.lock\n
Done!
","tags":["Archipelago-deployment","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-moveToLive/#ssl-enviromentals-configurations-settings-and-docker","title":"SSL, Enviromentals, Configurations, Settings and Docker","text":"We are almost done, but archipelago-deployment-live
has a different, safer way of defining SSL Certs, credentials, and global settings for your Archipelago. We will start first by copying settings as they are (most likely not very safe), and then we can update passwords/etc. to make your system better-prepared for the world.
To learn more about these general settings please read this section of the parent Documentation (who likes duplicated documentation? Nobody.). The gist here is (after reading, please do not skip) that we need to add our service definitions into a .env
file.
Coming from archipelago-deployment
means and assumes that you are running AWS Linux 2 using the suggested locations in this document, that you have a vanilla deployment, and that you followed these instructions) so your values for $HOME/archipelago-deployment-live/deploy/ec2-docker/.env
will be the following:
ARCHIPELAGO_ROOT=/home/ec2-user/archipelago-deployment-live\nARCHIPELAGO_EMAIL=your@validemail.org\nARCHIPELAGO_DOMAIN=your.domain.org\nMINIO_ACCESS_KEY=minio\nMINIO_SECRET_KEY=minio123\nMYSQL_ROOT_PASSWORD=esmero-db\nMINIO_BUCKET_MEDIA=archipelago\nMINIO_FOLDER_PREFIX_MEDIA=/\nMINIO_BUCKET_CACHE=archipelago\nMINIO_FOLDER_PREFIX_CACHE=/\n
If you plan on staying on local storage driven min.io
, MINIO_BUCKET_CACHE
and MINIO_FOLDER_PREFIX_CACHE
are not going to be used. If you are planning on moving your Storage from local to cloud driven please replace with the right values, e.g. AWS IAM keys and Secrets + bucket names and prefixes (folders). Again, refer to the parent Documentation for setting this up.
Once you have that in place (Double-check. If something goes wrong here we can always fine-tune and fix again.), we need to decide on a new docker-compose
file, and you may need to customize it depending on your choices and current and future needs.
If you already have an SSL certificate, and it's provided by CertBot
you can either copy the certs from your current system (will totally depend on your setup since archipelago-deployment
does not provide out-of-the-box SSL Certs) to $HOME/archipelago-deployment-live/data_storage/letsencrypt
.
A normal folder structure for that is:
.\n\u251c\u2500\u2500 accounts\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 acme-v02.api.letsencrypt.org\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 directory\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 cac9f8218ef18e4f11ec053785bbf648\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 meta.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 private_key.json\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 regr.json\n\u251c\u2500\u2500 archive\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 your.domain.org\n\u2502\u00a0\u00a0 \u00a0\u00a0 \u251c\u2500\u2500 cert1.pem\n\u2502\u00a0\u00a0 \u00a0\u00a0 \u251c\u2500\u2500 chain1.pem\n\u2502\u00a0\u00a0 \u00a0\u00a0 \u251c\u2500\u2500 fullchain1.pem\n\u2502\u00a0\u00a0 \u00a0\u00a0 \u2514\u2500\u2500 privkey1.pem\n\u2502\u00a0\n\u251c\u2500\u2500 csr\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 0000_csr-certbot.pem\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 0001_csr-certbot.pem\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 0002_csr-certbot.pem\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 0003_csr-certbot.pem\n\u251c\u2500\u2500 keys\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 0000_key-certbot.pem\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 0001_key-certbot.pem\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 0002_key-certbot.pem\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 0003_key-certbot.pem\n\u251c\u2500\u2500 live\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 README\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 your.domain.org\n\u2502\u00a0\u00a0 \u00a0\u00a0 \u251c\u2500\u2500 cert.pem -> ../../archive/your.domain.org/cert1.pem\n\u2502\u00a0\u00a0 \u00a0\u00a0 \u251c\u2500\u2500 chain.pem -> ../../archive/your.domain.org/chain1.pem\n\u2502\u00a0\u00a0 \u00a0\u00a0 \u251c\u2500\u2500 fullchain.pem -> ../../archive/your.domain.org/fullchain1.pem\n\u2502\u00a0\u00a0 \u00a0\u00a0 \u251c\u2500\u2500 privkey.pem -> ../../archive/your.domain.org/privkey1.pem\n\u2502\u00a0\u00a0 \u00a0\u00a0 \u2514\u2500\u2500 README\n\u251c\u2500\u2500 renewal\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 your.domain.org.conf\n\u2502\u00a0\u00a0 \n\u2514\u2500\u2500 renewal-hooks\n \u251c\u2500\u2500 deploy\n \u251c\u2500\u2500 post\n \u2514\u2500\u2500 pre\n
Or if your SSL cert is up for renewal, you can just let Archipelago request it for you. Renewal will happen auto-magically, and you may never ever need to worry about that in the future.
Finally, let's adapt the docker-compose
file we need to our previous (but still current!) archipelago-deployment
reality.
For x86/AMD, run (for ARM64/Apple M1 please check the parent Documentation):
cp $home/archipelago-deployment-live/deploy/ec2-docker/docker-compose-aws-s3.yml $home/archipelago-deployment-live/deploy/ec2-docker/docker-compose.yml\nnano $home/archipelago-deployment-live/deploy/ec2-docker/docker-compose.yml\n
And replace the content with this slightly modified version. Note: we really only changed the lines after this comment: # THIS DIFFERS FROM THE NORMAL ONE...
.
# Run docker-compose up -d\n\nversion: '3.5'\nservices:\n web:\n container_name: esmero-web\n image: staticfloat/nginx-certbot\n restart: always\n environment:\n CERTBOT_EMAIL: ${ARCHIPELAGO_EMAIL}\n ENVSUBST_VARS: FQDN\n FQDN: ${ARCHIPELAGO_DOMAIN}\n ports:\n - \"80:80\"\n - \"443:443\"\n volumes:\n - ${ARCHIPELAGO_ROOT}/config_storage/nginxconfig/conf.d:/etc/nginx/user.conf.d\n - ${ARCHIPELAGO_ROOT}/config_storage/nginxconfig/certbot_extra_domains:/etc/nginx/certbot/extra_domains:ro\n - ${ARCHIPELAGO_ROOT}/drupal:/var/www/html:cached\n - ${ARCHIPELAGO_ROOT}/data_storage/ngnixcache:/var/cache/nginx\n - ${ARCHIPELAGO_ROOT}/data_storage/letsencrypt:/etc/letsencrypt\n depends_on:\n - solr\n - php\n - db\n tty: true\n networks:\n - host-net\n - esmero-net\n php:\n container_name: esmero-php\n restart: always\n image: \"esmero/php-7.4-fpm:1.0.0-RC2-multiarch\"\n tty: true\n networks:\n - host-net\n - esmero-net\n volumes:\n - ${ARCHIPELAGO_ROOT}/config_storage/php-fpm/www.conf:/usr/local/etc/php-fpm.d/www.conf\n - ${ARCHIPELAGO_ROOT}/drupal:/var/www/html:cached\n environment:\n MINIO_ACCESS_KEY: ${MINIO_ACCESS_KEY}\n MINIO_SECRET_KEY: ${MINIO_SECRET_KEY}\n MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}\n MINIO_BUCKET_MEDIA: ${MINIO_BUCKET_MEDIA}\n MINIO_FOLDER_PREFIX_MEDIA: ${MINIO_FOLDER_PREFIX_MEDIA}\n solr:\n container_name: esmero-solr\n restart: always\n image: \"solr:8.8.2\"\n tty: true\n ports:\n - \"8983:8983\"\n networks:\n - host-net\n - esmero-net\n volumes:\n - ${ARCHIPELAGO_ROOT}/data_storage/solrcore:/var/solr/data\n - ${ARCHIPELAGO_ROOT}/config_storage/solrconfig:/drupalconfig\n - ${ARCHIPELAGO_ROOT}/data_storage/solrlib:/opt/solr/contrib/archipelago/lib\n entrypoint:\n - docker-entrypoint.sh\n - solr-precreate\n - drupal\n - /drupalconfig\n db:\n image: mysql:8.0.22\n command: mysqld --default-authentication-plugin=mysql_native_password --max_allowed_packet=256M\n container_name: esmero-db\n restart: always\n environment:\n MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}\n networks:\n - host-net\n - esmero-net\n volumes:\n - ${ARCHIPELAGO_ROOT}/data_storage/db:/var/lib/mysql\n nlp:\n container_name: esmero-nlp\n restart: always\n image: \"esmero/esmero-nlp:1.0\"\n ports:\n - \"6400:6400\"\n networks:\n - host-net\n - esmero-net\n iiif:\n container_name: esmero-cantaloupe\n image: \"esmero/cantaloupe-s3:4.1.9RC\"\n restart: always\n ports:\n - \"8183:8182\"\n networks:\n - host-net\n - esmero-net\n environment:\n AWS_ACCESS_KEY_ID: ${MINIO_ACCESS_KEY}\n AWS_SECRET_ACCESS_KEY: ${MINIO_SECRET_KEY}\n # THIS DIFFERS FROM THE STANDARD ONE AND ENABLES LOCAL FILESYSTEM CACHE INSTEAD OF AWS S3 one\n CACHE_SERVER_DERIVATIVE: FilesystemCache\n S3SOURCE_BASICLOOKUPSTRATEGY_BUCKET_NAME: ${MINIO_BUCKET_MEDIA}\n S3SOURCE_BASICLOOKUPSTRATEGY_PATH_PREFIX: ${MINIO_FOLDER_PREFIX_MEDIA}\n S3CACHE_BUCKET_NAME: ${MINIO_BUCKET_CACHE} \n S3CACHE_OBJECT_KEY_PREFIX: ${MINIO_FOLDER_PREFIX_CACHE} \n XMS: 2g\n XMX: 4g\n volumes:\n - ${ARCHIPELAGO_ROOT}/config_storage/iiifconfig:/etc/cantaloupe\n - ${ARCHIPELAGO_ROOT}/data_storage/iiifcache:/var/cache/cantaloupe\n - ${ARCHIPELAGO_ROOT}/data_storage/iiiftmp:/var/cache/cantaloupe_tmp\n minio:\n container_name: esmero-minio\n restart: always\n image: minio/minio:latest\n volumes:\n - ${ARCHIPELAGO_ROOT}/data_storage/minio-data:/data:cached\n ports:\n - \"9000:9000\"\n - \"9001:9001\"\n networks:\n - host-net\n - esmero-net\n environment:\n MINIO_HTTP_TRACE: /tmp/minio-log.txt\n MINIO_ROOT_USER: ${MINIO_ACCESS_KEY}\n MINIO_ROOT_PASSWORD: ${MINIO_SECRET_KEY}\n 
MINIO_ACCESS_KEY: ${MINIO_ACCESS_KEY}\n MINIO_SECRET_KEY: ${MINIO_SECRET_KEY}\n # THIS DIFFERS FROM THE STANDARD ONE AND ENABLES LOCAL MINIO INSTEAD OF AWS S3 one \n command: server /data --console-address \":9001\"\nnetworks:\n host-net:\n driver: bridge\n esmero-net:\n driver: bridge\n internal: true\n
Press CNTRL-X, and you are done. Now the final test!!
","tags":["Archipelago-deployment","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-moveToLive/#shutdown-the-old-one-start-the-new-one","title":"Shutdown the old one, start the new one","text":"So we are ready. Testing may be a hit-or-miss thing here. Did we cover all the steps? Did a command fail? The good thing is that we can start the new ensemble, and all our old ones will survive. And we can come back over and over until we are ready. Let's try!
We will start by shutting down the running Docker ensemble:
cd $HOME/archipelago-deployment\ndocker-compose down\n
Now let's go to our new deployment. Docker starts here in a different folder:
cd $HOME/archipelago-deployment-live/deploy/ec2-docker\ndocker-compose up\n
You may notice that we removed the -d
. Why? We want to see all the messages and notice/mark/copy any errors, e.g. did the SSL CERT load correctly? Did the MYSQL import work out? To avoid shutting it down while all starts, please open another Terminal and type:
docker ps\n
And look at the up-times. Do you see any Containers restarting (where Created and the Status differ for a lot and Status keeps resetting to 0?)? A healthy deployment will look similar to this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\nf794c25db64c esmero/cantaloupe-s3:4.1.9RC2-arm64 \"sh -c 'java -Dcanta\u2026\" 6 seconds ago Up 3 seconds 0.0.0.0:8183->8182/tcp esmero-cantaloupe\n5b791445720f jonasal/nginx-certbot \"/docker-entrypoint.\u2026\" 6 seconds ago Up 3 seconds 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp esmero-web\ne38fbbd86edf esmero/esmero-nlp:1.0.1-RC2-arm64 \"/usr/local/bin/entr\u2026\" 11 seconds ago Up 6 seconds 0.0.0.0:6400->6400/tcp esmero-nlp\nc84a0a4d43e9 minio/minio:latest \"/usr/bin/docker-ent\u2026\" 11 seconds ago Up 6 seconds 0.0.0.0:9000-9001->9000-9001/tcp esmero-minio\n3ec176a960c3 esmero/php-7.4-fpm:1.0.0-RC2-multiarch \"docker-php-entrypoi\u2026\" 11 seconds ago Up 6 seconds 9000/tcp esmero-php\ne762ad7ea5e2 solr:8.8.2 \"docker-entrypoint.s\u2026\" 11 seconds ago Up 6 seconds 0.0.0.0:8983->8983/tcp esmero-solr\n381166d61f8c mariadb:10.5.10-focal \"docker-entrypoint.s\u2026\" 11 seconds ago Up 6 seconds 3306/tcp \n
If you feel that all seems to be fine, open a browser window and visit your website. See if you can log in and see ADOs. If not you can momentarily shut down this new Docker ensemble and restart the older one. Nothing is lost! Then with time and tea/coffee and fresh eyes come back and re-trace your steps. 95% of the issues are incorrect values in the .env
file. The other 5% may be on us. If you run into any trouble please get in touch!
Happy deploying!
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback, or open an ISSUE in this Archipelago Deployment Live Repository.
Return to Archipelago Live Deployment.
","tags":["Archipelago-deployment","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-readme/","title":"Archipelago Deployment Live","text":"A Cloud / Local production ready Archipelago Deployment using Docker and soon Kubernetes.
","tags":["Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-readme/#what-is-this-repo-for","title":"What is this repo for?","text":"Running Archipelago Commons on a live public instance using SSL with Blob/Object Storage backend
Docker
running as a service and docker-compose
Docker
basic knowledge and how to manage packages in your System. Basically, this guide is meant for humans with basic to medium DevOps
background or humans with patience that are willing to troubleshoot, ask, and try again when that background is not (yet) enough. And we are here to help.
Deploy your base system
Make sure your Firewall/AWS Security group has these ports open for everyone to access
Set up your system using your favorite package manager with
e.g. for Amazon Linux 2 (x86/amd64) these steps are tested:
sudo yum update -y\nsudo amazon-linux-extras install -y docker\nsudo service docker start\nsudo usermod -a -G docker ec2-user\nsudo chkconfig docker on\nsudo systemctl enable docker\nsudo yum install -y git htop tree\nsudo curl -L \"https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)\" -o /usr/local/bin/docker-compose\nsudo chmod +x /usr/local/bin/docker-compose\nsudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose\nsudo reboot\n
Reboot is needed to allow Docker to take full control over your OS resources.
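After the reboot, you can confirm Docker and docker-compose are ready before moving on (a minimal check, nothing Archipelago-specific):
docker --version\ndocker-compose --version\ndocker run --rm hello-world\n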
","tags":["Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-readme/#step-2","title":"Step 2:","text":"In your location of choice clone this repo
git clone https://github.com/esmero/archipelago-deployment-live\ncd archipelago-deployment-live\ngit checkout 1.0.0\n
","tags":["Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-readme/#step-3-setup-your-enviromental-variables-for-dockerservices","title":"Step 3. Setup your enviromental variables for Docker/Services","text":"","tags":["Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-readme/#setup-enviromentals","title":"Setup Enviromentals","text":"Setup your deployment enviromental variables by copying the template
cp deploy/ec2-docker/.env.template deploy/ec2-docker/.env\n
and editing it nano deploy/ec2-docker/.env\n
The content of that file would be similar to this.
ARCHIPELAGO_ROOT=/home/ec2-user/archipelago-deployment-live\nARCHIPELAGO_EMAIL=your@validemail.org\nARCHIPELAGO_DOMAIN=your.domain.org\nMINIO_ACCESS_KEY=THE_S3_AZURE_OR_LOCAL_MINIO_KEY\nMINIO_SECRET_KEY=THE_S3_AZURE_OR_LOCAL_MINIO_SECRET\nMYSQL_ROOT_PASSWORD=YOUR_MYSQL_PASSWORD_FOR_ARCHIPELAGO\nMINIO_BUCKET_MEDIA=THE_NAME_OF_YOUR_S3_BUCKET_FOR_PERSISTENT_STORAGE\nMINIO_FOLDER_PREFIX_MEDIA=media/\nMINIO_BUCKET_CACHE=THE_NAME_OF_YOUR_S3_BUCKET_FOR_IIIF_STORAGE\nMINIO_FOLDER_PREFIX_CACHE=iiifcache/\nREDIS_PASSWORD=YOUR_REDIS_PASSWORD\n
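A simple way to generate strong values for the password/secret keys (just a suggestion; any strong generator works) is:
openssl rand -hex 20\n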
What does each key mean?
ARCHIPELAGO_ROOT
: the absolute path to your archipelago-deployment-live
git repo in your host machine.ARCHIPELAGO_EMAIL
: a valid email, will be used to register your SSL Certificate via Certbot.ARCHIPELAGO_DOMAIN
: a valid domain name for your repository. This domain will be also used to request your SSL Certificate via Certbot.MINIO_ACCESS_KEY
: If you are running a Cloud Service backed S3/Azure Storage this needs to be generated there. The user/IAM owner of this ACCESS KEY needs to have access to read/write the bucket you will configure in this same .env
. If running local min.io
whatever you set will be used.MINIO_SECRET_KEY
: If you are running a Cloud Service backed S3/Azure Storage this needs to be generated there. The user/IAM owner of the matching SECRET_KEY needs to have access to read/write the bucket you will configure in this same .env
file. If running local min.io
whatever you set will be used.MYSQL_ROOT_PASSWORD
: The MySQL 8 or MariaDB password. This password will be used later also during Drupal deployment via drush
MINIO_BUCKET_MEDIA
: The name of your Persistent Storage Bucket. If using min.io local we recommend keeping it simple, e.g. archipelago
.MINIO_FOLDER_PREFIX_MEDIA
: The folder
(a prefix really) where your DO Storage and File storage will go inside the MINIO_BUCKET_MEDIA
Bucket. media/
is a fine name for this one and common in archipelago deployments. IMPORTANT: Always terminate these with a /
. MINIO_BUCKET_CACHE
: The name of your IIIF Cache storage Bucket. May be the same as MINIO_BUCKET_MEDIA
. If different make sure your your MINIO_ACCESS_KEY
and/or IAM role ACL have permission to read write to this one too.MINIO_FOLDER_PREFIX_CACHE
: The folder
(a prefix really) where Cantaloupe will/can write its iiif
caches. iiifcache/
is a lovely name we use a lot. IMPORTANT: Always terminate these with a /
.REDIS_PASSWORD
: Password for your REDIS (Drupal Cache/Queue storage) if you decide to enable the Drupal REDIS module.IMPORTANT NOTE
: For AWS EC2. If you selected an IAM role
for your server when setting it up/deploying it, min.io
will use the AWS EC2-backed internal API to request access to your S3. This means the ROLE itself needs to have read/write access (ACL) to the given Bucket(s) and your key/secrets won't be able to override that. Please do not ignore this note. It will save you a LOT of frustration and coffee. You can also run an EC2 instace without a given IAM and in that case just the ACCESS_KEY/SECRET will matter.
Now that you know, you also know that these values should not be shared and this .env
file should not be committed/kept in version control. Please be careful.
docker-compose
will read this .env
and start all services for you based on its content.
Once you have modified this you are ready for your first big decision.
","tags":["Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-readme/#running-a-fully-qualified-domain-you-wish-a-validsigned-certificate-for-amdintel-architecture","title":"Running a fully qualified domain you wish a valid/signed certificate for AMD/INTEL Architecture?","text":"This means you will use the docker-compose-aws-s3.yml
. Do the following:
cp deploy/ec2-docker/docker-compose-aws-s3.yml deploy/ec2-docker/docker-compose.yml\n
","tags":["Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-readme/#running-a-fully-qualified-domain-you-wish-a-validsigned-certificate-for-arm64apple-m1-architecture","title":"Running a fully qualified domain you wish a valid/signed certificate for ARM64/Apple M1 Architecture?","text":"This means you will use the docker-compose-aws-s3-arm64.yml
. Do the following:
cp deploy/ec2-docker/docker-compose-aws-s3-arm64.yml deploy/ec2-docker/docker-compose.yml\n
","tags":["Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-readme/#optional-expert-extra-domains-does-not-apply-to-arm64apple-m1-architecture","title":"Optional (expert) extra domains (does not apply to ARM64/Apple M1 Architecture):","text":"If you have more than a single domain you may create a text file inside config_storage/nginxconfig/certbot_extra_domains/your.domain.org
and write for each subdomain there an entry/line.
Only if you are not running a fully qualified domain you wish a valid/signed. We really DO not recommend this route. IF you plan on using this deployment for local testing or running on non SSL please go for https://github.com/esmero/archipelago-deployment which delivers the same experience in less than 20 minutes deployment time.
Generate a self signed Cert
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout data_storage/selfcert/private/nginx.key -out data_storage/selfcert/certs/nginx.crt \nsudo openssl dhparam -out data_storage/selfcert/dhparam.pem 4096\ncp deploy/ec2-docker/docker-compose-selfsigned.yml deploy/ec2-docker/docker-compose.yml\n
Note: Self signed docker-compose.yml file is setup to use min.io with local storage
volumes:\n - ${ARCHIPELAGO_ROOT}/data_storage/minio-data:/data:cached\n
This folder will be created by min.io. If you are using a secondary Drive (e.g. magnetic) you can modify your deploy/ec2-docker/docker-compose.yml
to use a folder there, e.g.
volumes:\n - /persistentinotherdrive/data_storage/minio-data:/data:cached\n
Make sure your logged in user can read/write to it.
NOTE: If you want to use AWS S3 storage for the self signed version replace the minio Service yaml block with this Service Block in your new deploy/ec2-docker/docker-compose.yml
. You can mix and match services and even remove all :cached
statements for improved R/W volumen performance.
sudo chown 8183:8183 config_storage/iiifconfig/cantaloupe.properties\nsudo chown -R 8183:8183 data_storage/iiifcache\nsudo chown -R 8183:8183 data_storage/iiiftmp\nsudo chown -R 8983:8983 data_storage/solrcore\n
","tags":["Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-readme/#actual-first-run","title":"Actual first run","text":"Time to spin our docker containers for the first time. We will start all without going into background so log/error checking is easier, especially if you have selected a Valid/Signed Cert choice and also want to be sure S3 keys/access are working.
cd deploy/ec2-docker\ndocker-compose up\n
You will see a lot of things happening. Check for errors/problems/clear alerts and give all a minute or so to start. Ok, let's assume your setup managed to request a valid signed SSL cert, you will see a nice message!
- Congratulations! Your certificate and chain have been saved at:XXXXX\n Your certificate will expire on 20XX-XX-XX. To obtain a new or\n tweaked version of this certificate in the future, simply run\n certbot again. To non-interactively renew *all* of your\n certificates, run \"certbot renew\"\n
Archipelago will do that for you whenever it's about to expire so no need to deal with this manually, even when docker-compose
restarts.
Now press CTRL+C. docker-compose
will shut down gracefully. Good!
Copy the shipped default composer.default.json to composer.json and composer.default.lock to composer.lock (ONLY if you are installing from scratch):
cp ../../drupal/composer.default.json ../../drupal/composer.json\ncp ../../drupal/composer.default.lock ../../drupal/composer.lock\n
Start Docker again
docker-compose up -d\n
Wait a few seconds and run:
docker exec -ti esmero-php bash -c \"chown -R www-data:www-data private\"\ndocker exec -ti esmero-php bash -c \"chown -R www-data:www-data web/sites\"\ndocker exec -ti esmero-php bash -c \"composer install\"\n
Composer install will take a little while and bring all your PHP libraries.
Once done, execute our setup script that will prepare your Drupal settings.php
and bring some of the .env
environmental variables to the Drupal environment.
docker exec -ti esmero-php bash -c 'scripts/archipelago/setup.sh'\n
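If you are curious (optional), you can peek at the end of the generated settings file to confirm the script ran:
docker exec -ti esmero-php bash -c \"tail -n 20 web/sites/default/settings.php\"\n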
And now you can deploy Drupal!
IMPORTANT: Make sure you replace in the following command inside root:MYSQL_ROOT_PASSWORD
the MYSQL_ROOT_PASSWORD
string with the value you used/assigned in your .env
file for MYSQL_ROOT_PASSWORD
. And replace ADMIN_PASSWORD
with a password that is safe and that you won't forget! That password is for your Drupal super user (uid:0).
docker exec -ti -u www-data esmero-php bash -c \"cd web;../vendor/bin/drush -y si --verbose --existing-config --db-url=mysql://root:MYSQL_ROOT_PASSWORD@esmero-db/drupal --account-name=admin --account-pass=ADMIN_PASSWORD -r=/var/www/html/web --sites-subdir=default --notify=false;drush cr;chown -R www-data:www-data sites;\"\n
","tags":["Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-readme/#step-6-users-and-initial-content","title":"Step 6. Users and initial Content.","text":"After installation is done (may take a few) you can install initial users and assign them roles. Copy each line separately. A missing permission will not let you ingest the initial Metadata Displays and AMI set.
docker exec -ti esmero-php bash -c 'drush ucrt demo --password=\"demo\"; drush urol metadata_pro \"demo\"'\n
docker exec -ti esmero-php bash -c 'drush ucrt jsonapi --password=\"jsonapi\"; drush urol metadata_api \"jsonapi\"'\n
docker exec -ti esmero-php bash -c 'drush urol administrator \"admin\"'\n
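You can verify the users and roles were created as expected (optional; drush user:information takes a comma separated list of names):
docker exec -ti esmero-php bash -c 'drush uinf demo,jsonapi'\n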
Before ingesting the base content we need to make sure we can access your JSON-API
for your new domain. That means we need to change the internal URLs (https://esmero-web
) to the new valid SSL driven ones. This is easy:
On your host machine (no need to docker exec
these ones), replace first in the following command your.domain.org
with the domain you setup in your .env
file. Go to (cd into) your base git clone folder (Important: YOUR BASE CLONE FOLDER) and then run
sed -i 's/http:\\/\\/esmero-web/https:\\/\\/your.domain.org/g' drupal/scripts/archipelago/deploy.sh\n sed -i 's/http:\\/\\/esmero-web/https:\\/\\/your.domain.org/g' drupal/scripts/archipelago/update_deployed.sh\n
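A quick way to confirm the replacement worked (optional; use your real domain instead of your.domain.org):
grep -n your.domain.org drupal/scripts/archipelago/deploy.sh drupal/scripts/archipelago/update_deployed.sh\n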
Now your deploy.sh
and update_deployed.sh
are updated and ready. Let's ingest some Twig Templates, an AMI Set, menus and Blocks.
docker exec -ti esmero-php bash -c 'scripts/archipelago/deploy.sh'\n
NOTE: update_deployed.sh
is not needed when deploying for the first time and totally discouraged on a customized Archipelago. If you make modifications to your Twig templates
, that command will replace the ones shipped by us with fresh copies, overwriting all your modifications. Only run it to recover from larger errors or when you need to update non-customized templates with newer versions.
By default Archipelago ships with a public facing and an internal facing IIIF Server URL configured. These URLs are used by a number of IIIF enabled viewers and need to be changed to reflect your new reality (a real domain name and a proxied path!). These settings belong to the strawberryfield/format_strawberryfield
module.
First check your current settings:
docker exec -ti esmero-php bash -c \"drush config-get format_strawberryfield.iiif_settings\"\n
You will see the following:
pub_server_url: 'http://localhost:8183/iiif/2'\nint_server_url: 'http://esmero-cantaloupe:8182/iiif/2'\n
Let's modify pub_server_url
. Replace in the following command your.domain.org
with the domain you defined in your .env
file. NOTE: We are passing the -y
flag to drush
to avoid having to answer \"yes\".
docker exec -ti esmero-php bash -c \"drush -y config-set format_strawberryfield.iiif_settings pub_server_url https://your.domain.org/cantaloupe/iiif/2\"\n
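You can re-run the earlier config-get command to confirm the new public URL is in place:
docker exec -ti esmero-php bash -c \"drush config-get format_strawberryfield.iiif_settings\"\n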
Finally Done! Now you can log into your new Archipelago using https
and start exploring. Thank you for following this guide!
This applies to AWS m6g
and t4g
Instances and is documented inline in this guide. Please open an ISSUE in this repository if you run into any problems. Please review https://github.com/esmero/archipelago-deployment-live/blob/1.0.0/deploy/ec2-docker/docker-compose-aws-s3-arm64.yml for more info.
Run
uname -m \n
x86(64 bit)
processor system output will be x86_64
ARM(64 bit)
processor system output will be aarch64
This software is a Metropolitan New York Library Council Open-Source initiative and part of the Archipelago Commons project.
","tags":["Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-readme/#license","title":"License","text":"GPLv3
","tags":["Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-updatingContainers/","title":"How to update your Docker containers","text":"From time to time you may have a need to update the containers themselves. Primarily this is done for security releases.
","tags":["Github","Docker","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-updatingContainers/#1-update-docker-composeyml","title":"1. Update docker-compose.yml","text":"The first thing you need to do is to edit your docker-compose.yml
file and replace the version of the container with the new one you wish to use.
Navigate to your docker-compose.yml
file and open it to edit. On Debian installs it would look like this:
cd /usr/local/archipelago/deploy/archipelago-deployment-live/deploy/ec2-docker\n vi docker-compose.yml\n
You want to change the image line to reflect the name of the new image you wish to use:
image: esmero/php-7.4-fpm:1.0.0-RC3-multiarch\n
might become:
image: esmero/php-8.0-fpm:1.1.0-multiarch\n
Save your change. If you use vi as in the above, it would look like this:
:wq\n
","tags":["Github","Docker","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-updatingContainers/#pull-the-new-images","title":"Pull the new image(s)","text":"Docker Compose will now allow us to grab the new image(s) while your current system is running:
docker-compose pull\n
","tags":["Github","Docker","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-updatingContainers/#stop-and-restart-the-container","title":"Stop and restart the container","text":"It is necesary to stop and start the container or the current image will continue to be used:
docker-compose stop container-name\n
Wait for it to stop. Then bring it back up:
docker-compose up -d \n
It is important to use the -d flag or you will have your live instance stuck in your terminal. You want it to run in the background. The -d
flag stands for detached.
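As a concrete example (names taken from the shipped docker-compose.yml, where the service key is php even though the container is named esmero-php), cycling just the PHP service would look like this:
docker-compose stop php\ndocker-compose up -d\n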
If you are more comfortable having all the containers go down and up you can do that with the following:
docker-compose down\ndocker-compose up\n
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback, or open an ISSUE in this Archipelago Deployment Live Repository.
Return to Archipelago Live Deployment.
","tags":["Github","Docker","Archipelago-deployment-live"]},{"location":"archipelago-deployment-live-upgradeFromD8ToD9/","title":"Archipelago-deployment-live: upgrading Drupal 8 to Drupal 9 (1.0.0-RC2 to 1.0.0-RC3)","text":"","tags":["Archipelago-deployment-live","Drupal 8","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromD8ToD9/#what-is-this-documentation-for","title":"What is this documentation for?","text":"If you already have a well-set-up and well-loved Archipelago (RC2 or your own custom version) running Drupal 8 (D8), this documentation will allow you to update to Drupal 9 (D9) without major issues.
D8 is no longer supported as of the end of November 2021. D9 has been around for a little while and even if every module is not supported yet, what you need and want for Archipelago has long been D9-ready.
","tags":["Archipelago-deployment-live","Drupal 8","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromD8ToD9/#requirements","title":"Requirements","text":"composer
and drush
.Backups are always going to be your best friends. Archipelago's code, database and settings are mostly self-contained in your current archipelago-deployment-live
repo folder, and backing up is simple because of that.
Shut down your docker-compose
ensemble. Move to your archipelago-deployment-live
folder and run this:
cd deploy/ec2-docker\ndocker-compose down\n
","tags":["Archipelago-deployment-live","Drupal 8","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromD8ToD9/#step-2","title":"Step 2:","text":"Verify that all containers are actually down. The following command should return an empty listing. If anything is still running, wait a little longer, and run the following comman again.
docker ps\n
","tags":["Archipelago-deployment-live","Drupal 8","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromD8ToD9/#step-3","title":"Step 3:","text":"Now let's tar.gz
the whole ensemble with data and configs. As an example we will save this into your $HOME
folder. As a good practice we append the current date (YEAR-MONTH-DAY) to the filename. Here we assume today is December 1st of 2021.
sudo tar -czvpf $HOME/archipelago-deployment-live-backup-20211201.tar.gz ../../../archipelago-deployment-live\n
The process may take a few minutes. Now let's verify that all is there and that the tar.gz
is not corrupt.
tar -tvvf $HOME/archipelago-deployment-live-backup-20211201.tar.gz \n
You will see a listing of files. If corrupt (Do you have enough space? Did your ssh connection drop?) you will see the following:
tar: Unrecognized archive format\n
","tags":["Archipelago-deployment-live","Drupal 8","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromD8ToD9/#step-4","title":"Step 4:","text":"Restart your docker-compose
ensemble, and wait a little while for all to start.
docker-compose up -d\n
","tags":["Archipelago-deployment-live","Drupal 8","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromD8ToD9/#step-5","title":"Step 5:","text":"Export/backup all of your live Archipelago configurations (this allows you to compare/come back in case you lose something custom during the upgrade).
docker exec esmero-php mkdir config/backup\ndocker exec esmero-php drush cex --destination=/var/www/html/config/backup\n
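You can quickly confirm the export landed where expected (optional):
docker exec esmero-php ls config/backup\n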
Good. Now it's safe to begin the upgrade.
","tags":["Archipelago-deployment-live","Drupal 8","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromD8ToD9/#upgrading-to-100-rc3","title":"Upgrading to 1.0.0-RC3","text":"","tags":["Archipelago-deployment-live","Drupal 8","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromD8ToD9/#step-1_1","title":"Step 1:","text":"First we are going to disable modules that are not part of 1.0.0-RC3 or are not yet compatible with D9. Run the following command:
docker exec esmero-php drush pm-uninstall module_missing_message_fixer markdown webprofiler key_value webform_views\n
From inside your archipelago-deployment-live
repo folder we are going to open up the file permissions
for some of your most protected Drupal files.
cd ../../\nsudo chmod 777 drupal/web/sites/default\nsudo chmod 666 drupal/web/sites/default/*settings.php\nsudo chmod 666 drupal/web/sites/default/*services.yml\n
","tags":["Archipelago-deployment-live","Drupal 8","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromD8ToD9/#step-2_1","title":"Step 2:","text":"Time to fetch the 1.0.0-RC3
branch and update our docker-compose
and composer
dependencies. We are also going to stop the current docker
ensemble to update all containers to newer versions:
cd deploy/ec2-docker\ndocker-compose down\ngit checkout 1.0.0-RC3 \n
Then copy the appropriate docker-compose
file for your architecture:
cp docker-compose-osx.yml docker-compose.yml\n
Linux/x86-64/AMD64 cp docker-compose-linux.yml docker-compose.yml\n
OSX (macOS)/Linux/ARM64 cp docker-compose-arm64.yml docker-compose.yml\n
Finally, pull the images, and bring up the ensemble:
docker compose pull \ndocker compose up -d\n
Give all a little time to start. The latest min.io
adds a new console, and your Solr
core and Database
need to be upgraded. Please be patient. To ensure all is well, run (more than once if necessary) the following:
docker ps\n
You should see something like this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n867fd2a42134 nginx \"/docker-entrypoint.\u2026\" 32 seconds ago Up 27 seconds 0.0.0.0:8001->80/tcp, :::8001->80/tcp esmero-web\n8663e84a9b48 solr:8.8.2 \"docker-entrypoint.s\u2026\" 33 seconds ago Up 30 seconds 0.0.0.0:8983->8983/tcp, :::8983->8983/tcp esmero-solr\n9b580fa0088f minio/minio:latest \"/usr/bin/docker-ent\u2026\" 33 seconds ago Up 28 seconds 0.0.0.0:9000-9001->9000-9001/tcp, :::9000-9001->9000-9001/tcp esmero-minio\n50e2f41c7b60 esmero/esmero-nlp:1.0 \"/usr/local/bin/entr\u2026\" 33 seconds ago Up 30 seconds 0.0.0.0:6400->6400/tcp, :::6400->6400/tcp esmero-nlp\n300810fd6f03 esmero/cantaloupe-s3:4.1.9RC \"sh -c 'java -Dcanta\u2026\" 33 seconds ago Up 30 seconds 0.0.0.0:8183->8182/tcp, :::8183->8182/tcp esmero-cantaloupe\n248e4638ba2a mysql:8.0.22 \"docker-entrypoint.s\u2026\" 33 seconds ago Up 28 seconds 3306/tcp, 33060/tcp esmero-db\n141ace919344 esmero/php-7.4-fpm:1.0.0-RC2-multiarch \"docker-php-entrypoi\u2026\" 33 seconds ago Up 28 seconds 9000/tcp esmero-php\n
Important here is the STATUS
column. It needs to be a number that goes up in time every time you run docker ps
again (and again).
Now we are going to tell composer
to actually fetch the new code and dependencies using the 1.0.0-RC3 provided composer.lock
and update the whole Drupal/PHP/JS environment.
docker exec -ti esmero-php bash -c \"composer install\"\n
This will fail (sorry!) for a few packages but no worries, they need to be patched and composer is not that smart. So simply run it again:
docker exec -ti esmero-php bash -c \"composer install\"\n
Well done! If you see no issues and all ends in a Green colored message all is good! Jump to Step 4
","tags":["Archipelago-deployment-live","Drupal 8","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromD8ToD9/#what-if-not-all-is-ok-and-i-see-red-and-a-lot-of-dependency-explanations","title":"What if not all is OK and I see red and a lot of dependency explanations?","text":"If you have manually installed packages via composer in the past that are NO longer Drupal 9 compatible you may see errors. In that case you need to check each package website's (normally https://www.drupal.org/project/the_module_name) and check if there is a Drupal 9 compatible version.
If so run:
docker exec -ti esmero-php bash -c \"composer require 'drupal/the_module_name:^VERSION_NUMBER_THAT_WORKS_ON_DRUPAL9_' --update-with-dependencies --no-update\" and run **Step 3 ** again (and again until all is cleared)\n
If not: try to find a replacement module that does something similar, but in any case you may end up having to remove it before proceeding. Give us a ping/slack/google group/open a github ISSUE if you find yourself uncertain about this.
docker exec -ti esmero-php bash -c \"composer remove drupal/the_module_name --no-update\"\ndocker exec -ti esmero-php bash -c \" drush pm-uninstall the_module_name\"\n
","tags":["Archipelago-deployment-live","Drupal 8","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromD8ToD9/#step-4_1","title":"Step 4:","text":"We will now ask Drupal to update some internal configs and databases. They will bring you up to date with RC3 settings and D9 particularities.
docker exec -ti esmero-php bash -c 'scripts/archipelago/setup.sh'\ndocker exec -ti esmero-php bash -c 'drush urol administrator \"admin\"'\ndocker exec -ti esmero-php bash -c \"drush updatedb\"\n
","tags":["Archipelago-deployment-live","Drupal 8","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromD8ToD9/#step-5_1","title":"Step 5:","text":"Previously D8 installations had a \"module/profile\" driven installation. Those are no longer used or even exist as part of core, but a profile can't be changed once installed so you have to do the following to avoid Drupal complaining about our new and simpler way of doing things (a small roll back):
docker exec -ti esmero-php bash -c \"sed -i 's/minimal: 1000/standard: 1000/g' config/sync/core.extension.yml\"\ndocker exec -ti esmero-php bash -c \"sed -i 's/profile: minimal/profile: standard/g' config/sync/core.extension.yml\"\n
","tags":["Archipelago-deployment-live","Drupal 8","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromD8ToD9/#step-6","title":"Step 6:","text":"Now you can Sync your new Archipelago 1.0.0-RC3 and bring all the new configs and settings in. For this you have two options (no worries, remember you made a backup!):
","tags":["Archipelago-deployment-live","Drupal 8","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromD8ToD9/#a-partial-sync-which-will-bring-new-configs-and-update-existing-ones-but-will-not-remove-ones-that-only-exist-in-your-custom-setup-eg-new-webforms-or-view-modes","title":"A Partial Sync, which will bring new configs and update existing ones but will not remove ones that only exist in your custom setup, e.g. new Webforms or View Modes.","text":"docker exec esmero-php drush cim -y --partial\n
","tags":["Archipelago-deployment-live","Drupal 8","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromD8ToD9/#a-complete-sync-which-will-bring-new-configs-and-update-existing-ones-but-will-also-remove-all-the-ones-that-are-not-part-of-rc3-its-a-like-clean-factory-reset","title":"A Complete Sync, which will bring new configs and update existing ones but will also remove all the ones that are not part of RC3. It's a like clean factory reset.","text":"docker exec esmero-php drush cim -y\n
If all goes well here and you see no errors it's time to reindex Solr
because there are new Fields. Run the following:
docker exec esmero-php drush search-api-reindex\ndocker exec esmero-php drush search-api-index\n
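If you want to watch indexing progress (a hedged suggestion; the command is provided by the Search API module in recent Drush versions), you can run:
docker exec esmero-php drush search-api:status\n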
You might see some warnings related to modules dealing with previously non-existent data\u2014no worries, just ignore those.
If you made it this far you are done with code/devops (are we ever ready?), and that means you should be able to (hopefully) stay in the Drupal 9 realm for a few years!
","tags":["Archipelago-deployment-live","Drupal 8","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromD8ToD9/#step-7-update-or-not-your-metadata-display-entities-and-menu-items","title":"Step 7: Update (or not) your Metadata Display Entities and Menu items.","text":"Recommended: If you want to add new templates and menu items 1.0.0-RC3 provides, run this:
docker exec -ti esmero-php bash -c 'scripts/archipelago/deploy.sh'\n
Once that is done, you can choose to update all Metadata Displays (Twig templates) we ship with new 1.0.0-RC3 versions (heavily fixed IIIF manifest, Markdown to HTML for Metadata, better Object descriptions). But before you do this, we really recommend that you first make sure to manually (copy/paste) back up any Twig templates you have modified. If unsure, do not run the command that comes after this warning! You can always manually copy the new templates from the d8content/metadatadisplays
folder which contains text versions (again, copy/paste) of each shipped template you can pick and use when you feel ready.
If you are sure (like really?) you want to overwrite the ones you modified (sure, just checking?), then you can run this:
docker exec -ti esmero-php bash -c 'scripts/archipelago/update_deployed.sh'\n
Done! (For realz now)
Please log into your Archipelago and test/check all is working! Enjoy 1.0.0-RC3 and Drupal 9. Thanks!
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback, or open an ISSUE in this Archipelago Deployment Live Repository.
Return to Archipelago Live Deployment.
","tags":["Archipelago-deployment-live","Drupal 8","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromRC3/","title":"Archipelago-deployment-live: upgrading from 1.0.0-RC3 to 1.0.0","text":"","tags":["Archipelago-deployment-live","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromRC3/#what-is-this-documentation-for","title":"What is this documentation for?","text":"If you already have a well-set-up and well-loved Archipelago (RC3 or your own custom version) running Drupal 9, this documentation will allow you to update to 1.0.0 without major issues.
","tags":["Archipelago-deployment-live","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromRC3/#requirements","title":"Requirements","text":"composer
and drush
.Backups are always going to be your best friends. Archipelago's code, database and settings are mostly self-contained in your current archipelago-deployment-live
repo folder, and backing up is simple because of that.
Shut down your docker-compose
ensemble. Move to your archipelago-deployment-live
folder and run this:
cd deploy/ec2-docker\ndocker-compose down\n
","tags":["Archipelago-deployment-live","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromRC3/#step-2","title":"Step 2:","text":"Verify that all containers are actually down. The following command should return an empty listing. If anything is still running, wait a little longer, and run the following comman again.
docker ps\n
","tags":["Archipelago-deployment-live","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromRC3/#step-3","title":"Step 3:","text":"Now let's tar.gz
the whole ensemble with data and configs. As an example we will save this into your $HOME
folder. As a good practice we append the current date (YEAR-MONTH-DAY) to the filename. Here we assume today is December 1st of 2021.
sudo tar -czvpf $HOME/archipelago-deployment-live-backup-20220803.tar.gz ../../../archipelago-deployment-live\n
The process may take a few minutes. Now let's verify that all is there and that the tar.gz
is not corrupt.
tar -tvvf $HOME/archipelago-deployment-live-backup-20220803.tar.gz \n
You will see a listing of files. If corrupt (Do you have enough space? Did your ssh connection drop?) you will see the following:
tar: Unrecognized archive format\n
","tags":["Archipelago-deployment-live","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromRC3/#step-4","title":"Step 4:","text":"Restart your docker-compose
ensemble, and wait a little while for all to start.
docker-compose up -d\n
","tags":["Archipelago-deployment-live","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromRC3/#step-5","title":"Step 5:","text":"Export/backup all of your live Archipelago configurations (this allows you to compare/come back in case you lose something custom during the upgrade).
docker exec esmero-php mkdir config/backup\ndocker exec esmero-php drush cex --destination=/var/www/html/config/backup\n
Good. Now it's safe to begin the upgrade.
","tags":["Archipelago-deployment-live","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromRC3/#upgrading-to-100","title":"Upgrading to 1.0.0","text":"","tags":["Archipelago-deployment-live","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromRC3/#step-1_1","title":"Step 1:","text":"First we are going to disable modules that are not part of 1.0.0 or are not yet compatible with Drupal 9.4.x or higher . Run the following command:
docker exec esmero-php drush pm-uninstall search_api_solr_defaults entity_reference\n
From inside your archipelago-deployment-live
repo folder we are going to open up the file permissions
for some of your most protected Drupal files.
cd ../../\nsudo chmod 777 drupal/web/sites/default\nsudo chmod 666 drupal/web/sites/default/*settings.php\nsudo chmod 666 drupal/web/sites/default/*services.yml\n
","tags":["Archipelago-deployment-live","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromRC3/#step-2_1","title":"Step 2:","text":"First let's back up our current composer.lock:
cp drupal/composer.lock drupal/composer.original.lock\n
Time to fetch the 1.0.0
branch and update our docker-compose
and composer
dependencies. We are also going to stop the current docker
ensemble to update all containers to newer versions:
cd deploy/ec2-docker\ndocker-compose down\ngit fetch\ngit checkout 1.0.0 \n
If you decide to enable the Drupal REDIS module, make sure to add the REDIS_PASSWORD
variable to your .env
file.
IMPORTANT NOTE
: For AWS EC2. If you selected an IAM role
for your server when setting it up/deploying it, min.io
will use the AWS EC2-backed internal API to request access to your S3. This means the ROLE itself needs to have read/write access (ACL) to the given Bucket(s) and your key/secrets won't be able to override that. Please do not ignore this note. It will save you a LOT of frustration and coffee. You can also run an EC2 instance without a given IAM and in that case just the ACCESS_KEY/SECRET will matter.
Now that you know, you also know that these values should not be shared and this .env
file should not be committed/kept in version control. Please be careful.
Now let's back up the existing docker-compose
file:
cp docker-compose.yml docker-compose-original.yml\n
Then copy the appropriate docker-compose
file for your architecture:
cp docker-compose-aws-s3.yml docker-compose.yml\n
Linux/ARM64/Apple Silicon (M1 and M2) cp docker-compose-aws-s3-arm64.yml docker-compose.yml\n
Next, let's review what's changed in case any customizations need to be brought into the new docker-compose
configurations:
git diff --no-index docker-compose-original.yml docker-compose.yml\n
You should encounter something like the following:
diff --git a/docker-compose-original.yml b/docker-compose.yml\nindex 6f5b17e..282417f 100644\n--- a/docker-compose-original.yml\n+++ b/docker-compose.yml\n@@ -1,5 +1,5 @@\n # Run docker-compose up -d\n-\n+# Docker file for AMD64/X86 machines\n version: '3.5'\n services:\n web:\n@@ -23,6 +23,7 @@ services:\n - solr\n - php\n - db\n+ - redis\n tty: true\n networks:\n - host-net\n@@ -30,7 +31,7 @@ services:\n php:\n container_name: esmero-php\n restart: always\n- image: \"esmero/php-7.4-fpm:1.0.0-RC2-multiarch\"\n+ image: \"esmero/php-8.0-fpm:1.1.0-multiarch\"\n tty: true\n networks:\n - host-net\n@@ -44,10 +45,11 @@ services:\n MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}\n MINIO_BUCKET_MEDIA: ${MINIO_BUCKET_MEDIA}\n MINIO_FOLDER_PREFIX_MEDIA: ${MINIO_FOLDER_PREFIX_MEDIA}\n+ REDIS_PASSWORD: ${REDIS_PASSWORD}\n solr:\n container_name: esmero-solr\n restart: always\n- image: \"solr:8.8.2\"\n+ image: \"solr:8.11.2\"\n
As you can see, most of the changes in this example are for new images and a new service/container/environment variable (REDIS), but you may have custom settings for your containers. Review any differences carefully and make adjustments as needed.
Finally, pull the images:
docker compose pull \n
1.0.0 provides a new Cantaloupe that uses different permissions so we need to adapt those. From your current folder (../ec2-deploy) run:
sudo chown 8183:8183 ../../config_storage/iiifconfig/cantaloupe.properties\nsudo chown -R 8183:8183 ../../data_storage/iiifcache\nsudo chown -R 8183:8183 ../../data_storage/iiiftmp\n
Time to start the ensemble again
docker compose up -d\n
Give all a little time to start. Solr
core and Database
need to be upgraded, Cantaloupe is new and this brings also Redis for caching. Please be patient. To ensure all is well, run (more than once if necessary) the following:
docker ps\n
You should see something like this (e.g. if running on ARM64):
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n4ed2f62e866e jonasal/nginx-certbot \"/docker-entrypoint.\u2026\" 32 seconds ago Up 27 seconds 0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp esmero-web\ne6b4383039c3 minio/minio:RELEASE.2022-06-11T19-55-32Z \"/usr/bin/docker-ent\u2026\" 33 seconds ago Up 30 seconds 0.0.0.0:9000-9001->9000-9001/tcp, :::9000-9001->9000-9001/tcp esmero-minio\nf2b6b173b7e2 solr:8.11.2 \"docker-entrypoint.s\u2026\" 33 seconds ago Up 28 seconds 0.0.0.0:8983->8983/tcp, :::8983->8983/tcp esmero-solr\na553bf484343 esmero/php-8.0-fpm:1.0.0-multiarch \"docker-php-entrypoi\u2026\" 33 seconds ago Up 30 seconds 9000/tcp esmero-php\necb47349ae94 esmero/esmero-nlp:fasttext-multiarch \"/usr/local/bin/entr\u2026\" 33 seconds ago Up 30 second 0.0.0.0:6400->6400/tcp, :::6400->6400/tcp esmero-nlp\n61272dce034a redis:6.2-alpine \"docker-entrypoint.s\u2026\" 33 seconds ago Up 28 seconds esmero-redis\n0ee9869f809b esmero/cantaloupe-s3:6.0.0-multiarch \"sh -c 'java -Dcanta\u2026\" 33 seconds ago Up 28 seconds 0.0.0.0:8183->8182/tcp, :::8183->8182/tcp esmero-cantaloupe\n131d072567ce mariadb:10.6.8-focal \"docker-entrypoint.s\u2026\" 33 seconds ago Up 28 seconds 3306/tcp esmero-db esmero-php\n
Important here is the STATUS
column. It needs to be a number that goes up in time every time you run docker ps
again (and again).
Instead of using the provided composer.default.lock
out of the box we are going to loosen certain dependencies and bring manually Archipelago modules, all this to make update easier and future upgrades less of a pain.
First, as a sanity check let's make sure nothing happened to our original composer.lock
file by doing a diff against our backed up file:
git diff --no-index ../../drupal/composer.original.lock ../../drupal/composer.lock\n
If all is ok, there should be no output. If there's any output, copy your backed up file back to default:
cp ../../drupal/composer.original.lock ../../drupal/composer.lock\n
Finally, we bring over the modules:
docker exec -ti esmero-php bash -c \"composer require drupal/core:^9 drupal/core-composer-scaffold:^9 drupal/core-project-message:^9 drupal/core-recommended:^9\"\ndocker exec -ti esmero-php bash -c \"composer require drupal/core-dev:^9 --dev\"\ndocker exec -ti esmero-php bash -c \"composer require drupal/tokenuuid:^2\"\ndocker exec -ti esmero-php bash -c \"composer require 'drupal/facets:^2.0'\"\ndocker exec -ti esmero-php bash -c \"composer require drupal/moderated_content_bulk_publish:^2\"\ndocker exec -ti esmero-php bash -c \"composer require drupal/queue_ui:^3.1\"\ndocker exec -ti esmero-php bash -c \"composer require drupal/jquery_ui_touch_punch:^1.1\"\ndocker exec -ti esmero-php bash -c \"composer require archipelago/ami:0.4.0.x-dev strawberryfield/format_strawberryfield:1.0.0.x-dev strawberryfield/strawberryfield:1.0.0.x-dev strawberryfield/strawberry_runners:0.4.0.x-dev strawberryfield/webform_strawberryfield:1.0.0.x-dev drupal/views_bulk_operations:^4.1\"\n
Now we are going to tell composer
to actually fetch the new code and dependencies using composer.lock
and update the whole Drupal/PHP/JS environment.
docker exec -ti esmero-php bash -c \"composer update -W\"\ndocker exec -ti esmero-php bash -c \"drush cr\"\ndocker exec -ti esmero-php bash -c \"drush en jquery_ui_touch_punch\"\ndocker exec -ti esmero-php bash -c \"drush updatedb\"\n
Well done! If you see no issues and all ends in a Green colored message all is good! Jump to Step 4
","tags":["Archipelago-deployment-live","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromRC3/#what-if-not-all-is-ok-and-i-see-red-and-a-lot-of-dependency-explanations","title":"What if not all is OK and I see red and a lot of dependency explanations?","text":"If you have manually installed packages via composer in the past that are NO longer Drupal 9 compatible you may see errors. In that case you need to check each package website's (normally https://www.drupal.org/project/the_module_name) and check if there is a Drupal 9 compatible version.
If so run:
docker exec -ti esmero-php bash -c \"composer require 'drupal/the_module_name:^VERSION_NUMBER_THAT_WORKS_ON_DRUPAL9_' --update-with-dependencies --no-update\" and run **Step 3 ** again (and again until all is cleared)\n
If not: try to find a replacement module that does something similar, but in any case you may end up having to remove it before proceeding. Give us a ping/slack/google group/open a github ISSUE if you find yourself uncertain about this.
docker exec -ti esmero-php bash -c \"composer remove drupal/the_module_name --no-update\"\ndocker exec -ti esmero-php bash -c \" drush pm-uninstall the_module_name\"\n
","tags":["Archipelago-deployment-live","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromRC3/#step-4_1","title":"Step 4:","text":"We will now ask Drupal to update some internal configs and databases. They will bring you up to date with 1.0.0 settings and D9 particularities.
docker exec -ti esmero-php bash -c 'scripts/archipelago/setup.sh'\ndocker exec -ti esmero-php bash -c \"drush updatedb\"\n
","tags":["Archipelago-deployment-live","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromRC3/#step-5_1","title":"Step 5:","text":"Now you can Sync your new Archipelago 1.0.0 and bring all the new configs and settings in. For this you have two options (no worries, remember you made a backup!):
","tags":["Archipelago-deployment-live","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromRC3/#a-partial-sync-which-will-bring-new-configs-and-update-existing-ones-but-will-not-remove-ones-that-only-exist-in-your-custom-setup-eg-new-webforms-or-view-modes","title":"A Partial Sync, which will bring new configs and update existing ones but will not remove ones that only exist in your custom setup, e.g. new Webforms or View Modes.","text":"docker exec esmero-php drush cim -y --partial\n
","tags":["Archipelago-deployment-live","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromRC3/#a-complete-sync-which-will-bring-new-configs-and-update-existing-ones-but-will-also-remove-all-the-ones-that-are-not-part-of-rc3-its-a-like-clean-factory-reset","title":"A Complete Sync, which will bring new configs and update existing ones but will also remove all the ones that are not part of RC3. It's a like clean factory reset.","text":"docker exec esmero-php drush cim -y\n
If all goes well here and you see no errors it's time to reindex Solr
because there are new Fields. Run the following:
docker exec esmero-php drush search-api-reindex\ndocker exec esmero-php drush search-api-index\n
You might see some warnings related to modules dealing with previously non-existent data\u2014no worries, just ignore those.
If you made it this far you are done with code/devops (are we ever ready?), and that means you should be able to (hopefully) stay in the Drupal 9 realm for a few years!
","tags":["Archipelago-deployment-live","Drupal 9"]},{"location":"archipelago-deployment-live-upgradeFromRC3/#step-7-update-or-not-your-metadata-display-entities-and-menu-items","title":"Step 7: Update (or not) your Metadata Display Entities and Menu items.","text":"Recommended: If you want to add new templates and menu items 1.0.0 provides, go to your base Github repo folder, replace in the following commands your.domain.org
with the actual domain of your Server and run those individually:
sed -i 's/http:\\/\\/esmero-web/https:\\/\\/your.domain.org/g' drupal/scripts/archipelago/deploy.sh\n
sed -i 's/http:\\/\\/esmero-web/https:\\/\\/your.domain.org/g' drupal/scripts/archipelago/update_deployed.sh\n
Now update your Metadata Display Templates and Blocks
docker exec -ti esmero-php bash -c 'scripts/archipelago/deploy.sh'\n
Once that is done, you can choose to update all Metadata Displays (Twig templates) we ship with new 1.0.0 versions (heavily fixed IIIF manifest, Markdown to HTML for Metadata, better Object descriptions). But before you do this, we really recommend that you first make sure to manually (copy/paste) back up any Twig templates you have modified. If unsure, do not run the command that comes after this warning! You can always manually copy the new templates from the d8content/metadatadisplays
folder which contains text versions (again, copy/paste) of each shipped template you can pick and use when you feel ready.
If you are sure (like really?) you want to overwrite the ones you modified (sure, just checking?), then you can run this:
docker exec -ti esmero-php bash -c 'scripts/archipelago/update_deployed.sh'\n
Done! (For realz now)
Please log into your Archipelago and test/check all is working! Enjoy 1.0.0. Thanks!
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback, or open an ISSUE in this Archipelago Deployment Live Repository.
Return to Archipelago Live Deployment.
","tags":["Archipelago-deployment-live","Drupal 9"]},{"location":"archipelago-deployment-osx/","title":"Installing Archipelago Drupal 10 on OSX (macOS)","text":"","tags":["Archipelago-deployment","Drupal 10","macOS","OSX"]},{"location":"archipelago-deployment-osx/#about-running-terminal-commands","title":"About running terminal commands","text":"This guide assumes you are comfortable enough running terminal (bash) commands on an OSX Computer.
We made sure that you can copy
and paste
each of these commands from this guide directly into your terminal.
You will notice sometimes commands span more than a single line of text. If that is the case, always make sure you copy and paste a single line at a time and press the Enter
key afterwards. We suggest also you look at the output.
If something fails (and we hope it does not) troubleshooting will be much easier if you can share that output when asking for help.
Happy deploying!
","tags":["Archipelago-deployment","Drupal 10","macOS","OSX"]},{"location":"archipelago-deployment-osx/#osx-macos","title":"OSX (macOS):","text":"Ventura
or Higher on Intel (i5/i7) and Apple Silicon Chips (M1/M2/M3) the tested version is: 4.23.0(120376)
. You may go newer of course.Preferences
-> General
: check Use gRPC FUSE for file sharing
and restart. Especially if you are using your $HOME
folder for deploying, e.g. /Users/username
.Preferences
-> Resources
: 4 Gbytes of RAM is the recommended minimum and works; 8 Gbytes is faster and snappier.
folder or even better, in an external drive formatted using a Case Sensitive Unix Filesystem (Mac OS Extended (Case-sensitive, Journaled)).
Note 2: \"Use gRPC FUSE for file sharing\" experience may vary, recent Docker for Mac does it well. In older RC1 ones it was evil. Changing/Disabling it after having installed Archipelago may affect your S3/Minio storage accessibility. Please let us know what your experience on this is.
","tags":["Archipelago-deployment","Drupal 10","macOS","OSX"]},{"location":"archipelago-deployment-osx/#wait-question-do-you-have-a-previous-version-of-archipelago-running","title":"Wait! Question: Do you have a previous version of Archipelago running?","text":"If so, let's give that hard working repository a break first. If not, skip to Step 1:
docker-compose down\ndocker-compose rm\n
Let's stop the containers gracefully first, run:
docker stop esmero-web\ndocker stop esmero-solr\ndocker stop esmero-db\ndocker stop esmero-cantaloupe\ndocker stop esmero-php\ndocker stop esmero-minio\ndocker stop esmero-nlp\n
Now we need to remove them, run:
docker rm esmero-web\ndocker rm esmero-solr\ndocker rm esmero-db\ndocker rm esmero-cantaloupe\ndocker rm esmero-php\ndocker rm esmero-minio\ndocker rm esmero-nlp\n
Ok, now we are ready to start. Depending on what type of Chip your Apple uses you have two options:
","tags":["Archipelago-deployment","Drupal 10","macOS","OSX"]},{"location":"archipelago-deployment-osx/#step-1-intel-docker-deployment-on-intel-chips-apple-machines","title":"Step 1 (Intel): Docker Deployment on Intel Chips Apple Machines","text":"git clone https://github.com/esmero/archipelago-deployment.git archipelago-deployment\ncd archipelago-deployment\ngit checkout 1.3.0\ncp docker-compose-osx.yml docker-compose.yml\ndocker-compose pull\ndocker-compose up -d\n
","tags":["Archipelago-deployment","Drupal 10","macOS","OSX"]},{"location":"archipelago-deployment-osx/#step-1-m1-docker-deployment-on-apple-silicon-chips-m1","title":"Step 1 (M1): Docker Deployment on Apple Silicon Chips (M1)","text":"git clone https://github.com/esmero/archipelago-deployment.git archipelago-deployment\ncd archipelago-deployment\ngit checkout 1.3.0\ncp docker-compose-arm64.yml docker-compose.yml\ndocker-compose pull\ndocker-compose up -d\n
Note: If you are running on an Intel Apple Machine from an external Drive or a partition/filesystem that is Case Sensitive
and is not syncing automatically to Apple Cloud
you can also use docker-compose-linux.yml
. Note2: docker-compose.yml
is git ignored in case you make local adjustments or changes to it.
Once all containers are up and running (you can do a docker ps
to check), access the minio console at http://localhost:9001
using your most loved Web Browser with the following credentials:
user:minio\npass:minio123\n
and once logged in, press on \"Buckets\" (left tools column) and then on \"Create Bucket\" (top right) and under \"Bucket Name\" type archipelago
. Leave all other options unchecked for now (you can experiment with those later), and make sure you write archipelago
(no spaces, lowercase) and press \"Save\". Done! That is where we will persist all your Files and also your File copies of each Digital Object. You can always go there and explore what Archipelago (well really Strawberryfield does the hard work) has persisted so you can get comfortable with our architecture.
The following will run composer inside the esmero-php container to download all dependencies and Drupal Core too:
docker exec -ti esmero-php bash -c \"composer install\"\n
Once that command finishes run our setup script:
docker exec -ti esmero-php bash -c 'scripts/archipelago/setup.sh'\n
Explanation: That script will append some important configurations to your local web/sites/default/settings.php
.
Note: We say local
because your whole Drupal web root (the one you cloned) is also mounted inside the esmero-php and esmero-web containers. So edits to PHP files, for example, can be done without accessing the container directly from your local folder.
If this is the first time you deploy Drupal using the provided Configurations run:
docker exec -ti -u www-data esmero-php bash -c \"cd web;../vendor/bin/drush -y si --verbose --existing-config --db-url=mysql://root:esmerodb@esmero-db/drupal --account-name=admin --account-pass=archipelago -r=/var/www/html/web --sites-subdir=default --notify=false;drush cr;chown -R www-data:www-data sites;\"\n
Note: You will see these warnings: [warning] The \"block_content:1cdf7155-eb60-4f27-9e5e-64fffe93127a\" was not found
[warning] The \"facets_summary_block:search_page_facet_summary\" was not found
Nothing to worry about. We will provide the missing part in Step 5.
Note 2: Please be patient. This step takes since composer 2.0 25-30% longer because of how the most recent Drupal Installation code fetches translations and other resources (see Performed install task
). This means progress might look like getting \"stuck\", go and get a coffee/tea and let it run to the end.
Once finished, this will give you an admin
Drupal user with archipelago
as password (Change this if running on a public instance!).
Final Note about Steps 2-3: You don't need to, nor should you do this more than once. You can destroy/stop/update, recreate your Docker containers, and start again (git pull
), and your Drupal and Data will persist once you're past the Installation complete
message. I repeat, all other containers' data is persisted inside the persistent/
folder contained in this cloned git repository. Drupal and all its code is visible, editable, and stable inside your web/
folder.
docker exec -ti esmero-php bash -c 'drush ucrt demo --password=\"demo\"; drush urol metadata_pro \"demo\"'\n
docker exec -ti esmero-php bash -c 'drush ucrt jsonapi --password=\"jsonapi\"; drush urol metadata_api \"jsonapi\"'\n
docker exec -ti esmero-php bash -c 'drush urol administrator \"admin\"'\n
","tags":["Archipelago-deployment","Drupal 10","macOS","OSX"]},{"location":"archipelago-deployment-osx/#step-5-ingest-some-metadata-displays-to-make-playing-much-more-interactive","title":"Step 5: Ingest some Metadata Displays to make playing much more interactive","text":"Archipelago is more fun without having to start writing Metadata Displays (in Twig) before you know what they actually are. Since you should now have a jsonapi
user and jsonapi should be enabled, you can use that awesome functionality of D8 to get that done. We have 4 demo Metadata Display Entities that go well with the demo Webform we provided. To do that execute in your shell (copy and paste):
docker exec -ti esmero-php bash -c 'scripts/archipelago/deploy.sh'\n
Open your most loved Web Browser and point it to http://localhost:8001
.
Note: It can take some time to start the first time (Drupal needs some warming up).
Also, to make this docker-compose easier to use we are doing something named bind mounting
(or similar...) your folders. The good thing is that you can edit files in your machine, and they get updated instantly to docker. The bad thing is that the OSX (macOS) driver runs slower than on Linux. Speed is a huge factor here, but you get the flexibility of changing, backing up, and persisting files without needing a Docker University Degree.
One-Step Demo content ingest
","tags":["Archipelago-deployment","Drupal 10","macOS","OSX"]},{"location":"archipelago-deployment-osx/#need-help-blue-screen-missed-a-step-need-a-hug-and-such","title":"Need help? Blue Screen? Missed a step? Need a hug and such?","text":"If you see any issues or errors or need help with a step, please let us know (ASAP!). You can either open an issue
in this repository or use the Google Group. We are here to help.
If you like this, let us know!
","tags":["Archipelago-deployment","Drupal 10","macOS","OSX"]},{"location":"archipelago-deployment-osx/#caring-coding-fixing-testing","title":"Caring & Coding + Fixing + Testing","text":"GPLv3
","tags":["Archipelago-deployment","Drupal 10","macOS","OSX"]},{"location":"archipelago-deployment-readme/","title":"Archipelago Docker Deployment","text":"Updated: October 31st 2023
This repository serves as bootstrap for a Archipelago 1.3.0 deployment on a localhost for development/testing/customizing via Docker and provides a more unified experience this time:
arm64
architecture Chips like Raspberry Pi 4, with specially built arm64 docker containers. The only differences now between deployment strategies is the DB. Blazing fast OCR.The skeleton project contains all the pieces needed to run a local deployment of a vanilla Archipelago including (YES!) content provided as an optional feature from archipelago-recyclables
","tags":["Archipelago-deployment","Drupal 10","Docker"]},{"location":"archipelago-deployment-readme/#starting-from-zero","title":"Starting from ZERO","text":"This is the recommended, simplest way for this release. There are a too many, tons of fun new features, Metadata Displays, Webforms, New formatters and Twig extensions, improved viewers, new and improved JS libraries, OpenCV/Face Detection, smarter NLP, File composting, better HUGE import/update capabilities, bug fixes (yes so many) so please try them out. The team has also updated the DEMO AMI set (Content) to showcase metadata/display improvements.
","tags":["Archipelago-deployment","Drupal 10","Docker"]},{"location":"archipelago-deployment-readme/#macos-intel-or-apple-silicon-m1m2m3","title":"macOS Intel or Apple Silicon M1/M2/M3:","text":"Step by Step deployment on macOS
","tags":["Archipelago-deployment","Drupal 10","Docker"]},{"location":"archipelago-deployment-readme/#ubuntu-1804-or-2004","title":"Ubuntu 18.04 or 20.04:","text":"Step by Step deployment on Ubuntu
","tags":["Archipelago-deployment","Drupal 10","Docker"]},{"location":"archipelago-deployment-readme/#windows-10-or-11","title":"Windows 10 or 11:","text":"Step by Step deployment on Windows
","tags":["Archipelago-deployment","Drupal 10","Docker"]},{"location":"archipelago-deployment-readme/#more-fun-if-you-add-content","title":"More fun if you add content:","text":"One-Step Demo content ingest
If you like it (or not), want new features, or want to be part of making this better (documenting, coding, and planning), let us know. Make your voice and opinion heard; this is a community effort.
","tags":["Archipelago-deployment","Drupal 10","Docker"]},{"location":"archipelago-deployment-readme/#caring-coding-fixing-testing","title":"Caring & Coding + Fixing + Testing","text":"This software is a Metropolitan New York Library Council Open-Source initiative and part of the Archipelago Commons project.
","tags":["Archipelago-deployment","Drupal 10","Docker"]},{"location":"archipelago-deployment-readme/#license","title":"License","text":"GPLv3
","tags":["Archipelago-deployment","Drupal 10","Docker"]},{"location":"archipelago-deployment-ubuntu/","title":"Installing Archipelago Drupal 10 on Ubuntu 18.04 or 20.04","text":"","tags":["Archipelago-deployment","Drupal 10","Ubuntu 18.04","Ubuntu 20.04"]},{"location":"archipelago-deployment-ubuntu/#about-running-terminal-commands","title":"About running terminal commands","text":"This guide assumes you are comfortable enough running terminal (bash) commands on a Linux Computer.
We made sure that you can copy
and paste
each of these commands from this guide directly into your terminal.
You will notice sometimes commands span more than a single line of text. If that is the case, always make sure you copy and paste a single line at a time and press the Enter
key afterwards. We suggest you also look at the output.
If something fails (and we hope it does not) troubleshooting will be much easier if you can share that output when asking for help.
Happy deploying!
","tags":["Archipelago-deployment","Drupal 10","Ubuntu 18.04","Ubuntu 20.04"]},{"location":"archipelago-deployment-ubuntu/#prerequisites","title":"Prerequisites","text":"sudo apt install apt-transport-https ca-certificates curl software-properties-common\ncurl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -\nsudo add-apt-repository \"deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable\"\nsudo apt update\nsudo apt-cache policy docker-ce\nsudo apt install docker-ce\nsudo systemctl status docker\n\nsudo usermod -aG docker ${USER}\n
Log out, and log in again!
sudo apt install docker-compose\n
Git tools are included by default in Ubuntu.
","tags":["Archipelago-deployment","Drupal 10","Ubuntu 18.04","Ubuntu 20.04"]},{"location":"archipelago-deployment-ubuntu/#wait-question-do-you-have-a-previous-version-of-archipelago-running","title":"Wait! Question: Do you have a previous version of Archipelago running?","text":"If so, let's give that hard working repository a break first. If not, Step 1:
docker-compose down\ndocker-compose rm\n
Let's stop the containers gracefully first, run:
docker stop esmero-web\ndocker stop esmero-solr\ndocker stop esmero-db\ndocker stop esmero-cantaloupe\ndocker stop esmero-php\ndocker stop esmero-minio\ndocker stop esmero-nlp\n
Now we need to remove them so we run the following:
docker rm esmero-web\ndocker rm esmero-solr\ndocker rm esmero-db\ndocker rm esmero-cantaloupe\ndocker rm esmero-php\ndocker rm esmero-minio\ndocker rm esmero-nlp\n
Ok, now we are ready to start.
","tags":["Archipelago-deployment","Drupal 10","Ubuntu 18.04","Ubuntu 20.04"]},{"location":"archipelago-deployment-ubuntu/#step-1-deployment","title":"Step 1: Deployment","text":"","tags":["Archipelago-deployment","Drupal 10","Ubuntu 18.04","Ubuntu 20.04"]},{"location":"archipelago-deployment-ubuntu/#prefer-to-watch-a-video-to-see-what-its-like-to-install-go-to-our-user-contributed-documentation1","title":"Prefer to watch a video to see what it's like to install? Go to ouruser contributed documentation
[^1]!","text":"","tags":["Archipelago-deployment","Drupal 10","Ubuntu 18.04","Ubuntu 20.04"]},{"location":"archipelago-deployment-ubuntu/#important","title":"IMPORTANT","text":"If you run docker-compose
as root user (using sudo
) some environmental variables, like the current folder used inside the docker-compose.yml
to mount the Volumes, will not work and you will see a bunch of errors.
There are two possible solutions.
sudo
needed).{$PWD}
inside your docker-compose.yml
with either the full path to your current folder, or with a .
and wrap that whole line in double quotes, basically making the paths for volumes relative. Instead of: - ${PWD}:/var/www/html:cached
use: - \".:/var/www/html:cached\"
Now that you got it, let's deploy:
git clone https://github.com/esmero/archipelago-deployment.git archipelago-deployment\ncd archipelago-deployment\ngit checkout 1.3.0\n
cp docker-compose-linux.yml docker-compose.yml\ndocker-compose pull\ndocker-compose up -d\n
Note: docker-compose.yml
is git ignored in case you make local adjustments or changes to it.
You need to make sure Docker can read/write to your local Drive, a.k.a mounted volumes (especially if you decided not to run it as root
because we told you so!).
This means in practice running:
sudo chown -R 8183:8183 persistent/iiifcache\nsudo chown -R 8983:8983 persistent/solrcore\n
And then:
docker exec -ti esmero-php bash -c \"chown -R www-data:www-data private\"\n
Question: Why is this last command different? Answer: Just a variation. The long answer is that the internal www-data
user in that container (Alpine Linux) has uid:82, but on Ubuntu the www-data
user has a different one so we let Docker assign the uid from inside instead. In practice you could also run directly sudo chown -R 82:82 private
which would only apply to an Alpine use case, which can differ in the future! Does this make sense? No worries if not.
Once all containers are up and running (you can do a docker ps
to check), access http://localhost:9001
using your most loved Web Browser with the following credentials:
user:minio\npass:minio123\n
and create a bucket named \"archipelago\". To do so go to the Buckets
section in the navigation pane, and click Create Bucket +
. Type archipelago
under Bucket Name
and submit, done! That is where we will persist all your Files and also your File copies of each Digital Object. You can always go there and explore what Archipelago (well really Strawberryfield does the hard work) has persisted so you can get comfortable with our architecture.
The following will run composer inside the esmero-php container to download all dependencies and Drupal Core too.
docker exec -ti esmero-php bash -c \"composer install\"\n
You might see a warning: Do not run Composer as root/super user! See https://getcomposer.org/root for details
and then a long list of PHP packages. Don't worry. All is good here. Keep following the instructions! Once that command finishes, run our setup script:
docker exec -ti esmero-php bash -c 'scripts/archipelago/setup.sh'\n
Explanation: That script will append some important configurations to your local web/sites/default/settings.php
.
Note: We say local
because your whole Drupal web root (the one you cloned) is also mounted inside the esmero-php and esmero-web containers. So edits to PHP files, for example, can be done without accessing the container directly from your local folder.
If this is the first time you're deploying Drupal using the provided Configurations run:
docker exec -ti -u www-data esmero-php bash -c \"cd web;../vendor/bin/drush -y si --verbose --existing-config --db-url=mysql://root:esmerodb@esmero-db/drupal --account-name=admin --account-pass=archipelago -r=/var/www/html/web --sites-subdir=default --notify=false;drush cr;chown -R www-data:www-data sites;\"\n
Note: You will see these warnings: [warning] The \"block_content:1cdf7155-eb60-4f27-9e5e-64fffe93127a\" was not found
[warning] The \"facets_summary_block:search_page_facet_summary\" was not found
Nothing to worry about. We will provide the missing part in Step 5.
Note 2: Please be patient. This step now takes 25-30% longer because of how the most recent Drupal Installation code fetches translations and other resources (see Performed install task
). This means progress might look like it's getting \"stuck\"; go get a coffee/tea and let it run to the end.
Once finished, this will give you an admin
Drupal user with archipelago
as password (change this if running on a public instance!) and also set the right Docker Container owner for your Drupal installation files.
Final note about Steps 2-3: You don't need to, nor should you do this more than once. You can destroy/stop/update, recreate your Docker containers, and start again (git pull
), and your Drupal and Data will persist once you've passed the Installation complete
message. I repeat, all other containers' data is persisted inside the persistent/
folder contained in this cloned git repository. Drupal and all its code is visible, editable, and stable inside your web/
folder.
docker exec -ti esmero-php bash -c 'drush ucrt demo --password=\"demo\"; drush urol metadata_pro \"demo\"'\n
docker exec -ti esmero-php bash -c 'drush ucrt jsonapi --password=\"jsonapi\"; drush urol metadata_api \"jsonapi\"'\n
docker exec -ti esmero-php bash -c 'drush urol administrator \"admin\"'\n
","tags":["Archipelago-deployment","Drupal 10","Ubuntu 18.04","Ubuntu 20.04"]},{"location":"archipelago-deployment-ubuntu/#step-5-ingest-some-metadata-displays-to-make-playing-much-more-interactive","title":"Step 5: Ingest some Metadata Displays to make playing much more interactive","text":"Archipelago is more fun without having to start writing Metadata Displays (in Twig) before you know what they actually are. Since you should now have a jsonapi
user and jsonapi should be enabled, you can use that awesome functionality of D8 to get that done. We have 4 demo Metadata Display Entities that go well with the demo Webform we provided. To do that execute in your shell (copy and paste):
docker exec -ti esmero-php bash -c 'scripts/archipelago/deploy.sh'\n
You are done! Open your most loved Web Browser and point it to http://localhost:8001
Note: It can take some time to start the first time (Drupal needs some warming up). The Ubuntu deployment is WAY faster than the OSX deployment because of the way the bind mount volumes are handled by the driver. Our experience is that Archipelago basically reacts instantly!
","tags":["Archipelago-deployment","Drupal 10","Ubuntu 18.04","Ubuntu 20.04"]},{"location":"archipelago-deployment-ubuntu/#step-6-optional-but-more-fun-if-you-add-content","title":"Step 6: Optional but more fun if you add content","text":"One-Step Demo content ingest
","tags":["Archipelago-deployment","Drupal 10","Ubuntu 18.04","Ubuntu 20.04"]},{"location":"archipelago-deployment-ubuntu/#need-help-blue-screen-missed-a-step-need-a-hug","title":"Need help? Blue Screen? Missed a step? Need a hug?","text":"If you see any issues or errors or need help with a step, please let us know (ASAP!). You can either open an issue
in this repository or use the Google Group. We are here to help.
If you like this, let us know!
","tags":["Archipelago-deployment","Drupal 10","Ubuntu 18.04","Ubuntu 20.04"]},{"location":"archipelago-deployment-ubuntu/#user-contributed-documentation-a-video1","title":"User contributed documentation (A Video!)[^1]:","text":"Installing Archipelago on AWS Ubuntu by Zach Spalding: https://youtu.be/RBy7UMxSmyQ
[^1]: You may find this user contributed tutorial video, which was created for an earlier Archipelago release, to be helpful. Please note that there are significant differences between the executed steps and that you need to follow the current release instructions in order to have a successful deployment.
","tags":["Archipelago-deployment","Drupal 10","Ubuntu 18.04","Ubuntu 20.04"]},{"location":"archipelago-deployment-ubuntu/#caring-coding-fixing-testing","title":"Caring & Coding + Fixing + Testing","text":"GPLv3
","tags":["Archipelago-deployment","Drupal 10","Ubuntu 18.04","Ubuntu 20.04"]},{"location":"archipelago-deployment-windows/","title":"Installing Archipelago Drupal 9 on Windows 10/11","text":"","tags":["Archipelago-deployment","Drupal 10","Windows","Ubuntu 18.04","Ubuntu 20.04"]},{"location":"archipelago-deployment-windows/#prerequisites","title":"Prerequisites","text":"Open the Docker Desktop app. The Docker service should start up automatically with a status showing when the service is up and running.
Open an Ubuntu Terminal session (type Ubuntu
in the Windows Start menu).
Bring everything up to date: sudo apt update && sudo apt upgrade -y
Follow the steps for deployment in Ubuntu.
","tags":["Archipelago-deployment","Drupal 10","Windows","Ubuntu 18.04","Ubuntu 20.04"]},{"location":"archipelago-deployment-windows/#acknowledgment","title":"Acknowledgment","text":"Thanks to Corinne Chatnik for documenting these steps!
","tags":["Archipelago-deployment","Drupal 10","Windows","Ubuntu 18.04","Ubuntu 20.04"]},{"location":"archipelago-deployment-windows/#need-help-blue-screen-missed-a-step-need-a-hug","title":"Need help? Blue Screen? Missed a step? Need a hug?","text":"If you see any issues or errors or need help with a step, please let us know (ASAP!). You can either open an issue
in this repository or use the Google Group. We are here to help.
If you like this, let us know!
","tags":["Archipelago-deployment","Drupal 10","Windows","Ubuntu 18.04","Ubuntu 20.04"]},{"location":"archipelago-deployment-windows/#caring-coding-fixing-testing","title":"Caring & Coding + Fixing + Testing","text":"GPLv3
","tags":["Archipelago-deployment","Drupal 10","Windows","Ubuntu 18.04","Ubuntu 20.04"]},{"location":"createdisplaymodes/","title":"Creating Display Modes for Archipelago Digital Objects","text":"We recommend checking out our primer on Display Modes for a broader overview on Form Modes and View Modes for Archipelago Digital Objects (ADOs).
But how do you create and enable these Display Modes in the first place? Let's find out.
"},{"location":"createdisplaymodes/#adding-a-new-form-mode","title":"Adding a new Form Mode","text":"Why would you want to create a new form mode? One common reason is to create different data entry experiences for users with different roles. Let's create an example form mode called \"Student Webform\" -- we can imagine a deployment where Students need a simplified form for ADO creation. We are going to create a form mode, enable it for Digital Objects, and give it some custom settings that differentiate it from existing form modes.
Navigate to yoursite/admin/structure/display-modes
Click on Form modes. This image shows the basic Form Modes shipped with Archipelago
Click the \"Add Form mode\" button at the top of the page. Then select the \"Content\" entity type from the list. In this example, we ultimately want the form mode to be applied to Archipelago Digital Objects, which is a Content entity type.
Enter the name of your Form Mode and hit save. Here we are entering \"Student Webform\".
Great. Now you will see your new Form mode in the list! Let's put it to use.
Head to yoursite/admin/structure/types/manage/digital_object
and click the \"Manage Form Display\" tab. As mentioned above, in this example we want to add a new Form Mode for ADOs, so we are dealing with the Digital Object content type. Scroll to the bottom of this page and look for the \"Custom Display Settings\" area, which is collapsed by default. Expand it, and you should see this.
Enable \"Student Webform\" and hit save! Now scroll back up the page. You'll see it enabled like so.
Now select our new \"Student Webform\" tab. From here, you have many options and can configure input fields as you see fit! To finish out our specific example though, let's finally add our Student Webform to the display. Click on the settings gear icon next to the Descriptive Metadata field.
You'll see that the default webform named \"Descriptive Metadata\" is entered. To add custom content to this Field Widget, start typing in the autocomplete. This example assumes you've created a webform called Student Webform
in yoursite/admin/structure/webform
. For info on how to create a new Webform with proper settings, see our Webforms as input guide.
After you've selected your \"Student Webform\" in the Field Widget setting, hit Update, and then Save at the bottom of the page.
All done! So let's recap. We created a new form mode. We added this form mode to the Manage Form Display > Custom Display Settings options for Digital Objects. And finally we configured the Field Widget for Descriptive Metadata in our new Form Mode to use a new Webform. This last step is arbitrary to this example. We could have enabled or disabled fields, or changed other field widget settings depending on our needs. But configuring different Webforms as Field Widgets for Descriptive Metadata is a common use case in Archipelago.
Thanks for reading this far! But there is more. We might want to display, in addition to ingest, our ADOs in custom ways. The process for creating new View Modes (the other type of Display Mode) is quite similar to creating new Form Modes, but let's walk through it with another example case.
"},{"location":"createdisplaymodes/#adding-a-new-view-mode","title":"Adding a new View Mode","text":"Why would you want to create a new View Mode? Maybe there is a new type of media you are attaching to ADOs that you want to display using the proper player or tool. Or maybe you want to simplify the ADO display, removing fields from the display page. In this example let's create a new View Mode for ADOs that adds some fields to the display to show the Author and Published date of the object.
Navigate to yoursite/admin/structure/display-modes
Select View modes, and click the \"Add View mode\" at the top of the page.
Select Content as your entity type.
Enter the name of your new View Mode and save. Ours is \"Digital Object with Publishing Information\"
Now let's enable this View mode. Go to yoursite/admin/structure/types/manage/digital_object
and click the \"Manage Display\" tab.
Scroll to the bottom of the page and expand the \"Custom Display Settings\" area. You will see our newly created View Mode. Enable it and hit save.
Now scroll back to the page top. You will see \"Digital Object with Publishing Information\" in the list of View Modes, so go ahead and select it.
Scroll down until you see the \"Disabled\" section. This section contains fields that are available to the ADO content type, but are not enabled in this display mode. Let's enable Author and Post date by changing the \"Region\" column dropdown from \"Disabled\" to \"Content\". (To learn more about Regions in Drupal, see here). Basically, this ensures that this field has a home in the page layout. Hit save.
Now, if you want ADOs to use this View Mode for display, there is one last step. You need to select \"Digital Object with Publishing Info\" as the view mode Display Settings when adding new content. This area is located on the right side of the page. See below:
Now, when we view the individual ADO, these new fields have been added to the display.
All done! This was quite a simple example, but now you are aware of how to customize your own ADO display. It can only get more complex and exciting from here.
Let's recap. We created a new View Mode. We enabled this View Mode in Manage Display > Custom Display Settings for Digital Objects. We enabled new fields (in this case, just for instruction, the Author and Post date fields) to make our new View Mode unique, and learned about Disabled fields in the process. We selected our new View Mode in the Display Settings area (slightly confusing wording because yes, this is a View Mode, subset of Display Mode) during ADO creation (for more on creating new objects, see this guide).
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"customwebformelements/","title":"Archipelago Custom Webform Elements","text":"In addition to the core elements provided by the Drupal Webform module, Archipelago also deploys a robust set of custom webform elements specific to digital repositories metadata needs and use cases.
","tags":["Webform","Form Mode","Webform Elements"]},{"location":"customwebformelements/#linked-data","title":"Linked Data:","text":"(*found under Composite Elements in \"Add Element\" menu)
Library of Congress (LoC) Linked Open data
Multi LoD Source Agent Items
Wikidata
Getty Vocabulary Term
VIAF
Location GEOJSON (Nominatim--Open Street Maps)
PubMed MeSH Suggest
SNAC Constellation Linked Open Data
Europeana Entity Suggest
Enhancements for Audio, Document, Image, Video file uploads
Import Metadata from a File (such as XML)
Import Metadata in CSV format from a File
Computed Metadata Transplant
Computed Token
Computed Twig
You can review the coding behind these custom elements here: https://github.com/esmero/webform_strawberryfield/tree/1.1.0/src/Element
","tags":["Webform","Form Mode","Webform Elements"]},{"location":"devops/","title":"Archipelago Software Services","text":"At the core of the Archipelago philosophy is our commitment to both simplicity and flexibility.
"},{"location":"devops/#under-the-hood-archipelagos-architecture-is","title":"Under the hood, Archipelago's architecture is:","text":"Installation is entirely Dockerized and scripted with easy-to-follow directions.
Information related to non-Dockerized installation and configruation can be found here: Traditional Installation Notes
"},{"location":"devops/#strawberryfield-modules-at-the-heart-of-every-archipelago","title":"Strawberryfield Modules at the heart of every Archipelago:","text":"Documentation related to the Strawberryfield modules can be found here: Strawberryfields Forever
"},{"location":"devops/#archipelago-also-extends-these-powerful-tools","title":"Archipelago also extends these powerful tools:","text":"Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"documentation_about/","title":"About This Documentation","text":"Documentation is vital to our community, and contributions are welcome, greatly appreciated, and encouraged.
","tags":["Documentation","Contributing","Examples"]},{"location":"documentation_about/#how-to-contribute","title":"How to Contribute","text":"Difficulty Level: Moderate\u2013Difficult
Below are some examples of features that are currently in use on the site. To explore more visit the Material for MkDocs documentation.
","tags":["Documentation","Contributing","Examples"]},{"location":"documentation_features/#examples","title":"Examples","text":"","tags":["Documentation","Contributing","Examples"]},{"location":"documentation_features/#images","title":"Images","text":"Images are located in the docs/images
folder. You can add new ones there and link to them by relative path. For example, if you added strawberries_color.png
, you would embed it like so:
Image
MarkupResult![New Documentation Image](images/strawberries_color.png)\n
","tags":["Documentation","Contributing","Examples"]},{"location":"documentation_features/#admonitions","title":"Admonitions","text":"Question Admonition
MarkupResult??? question \"What is a collapsible admonition?\"\n\n This is a collapsible admonition. It can have a title, and it collapses so as not to interrupt the flow the of the document, but it provides useful information as needed.\n
What is a collapsible admonition? This is a collapsible admonition. It can have a title, and it collapses so as not to interrupt the flow the of the document, but it provides useful information as needed.
You can read more about admonitions with further examples in the Material for MkDocs documentation.
","tags":["Documentation","Contributing","Examples"]},{"location":"documentation_features/#code-blocks","title":"Code blocks","text":"Code block with title and highlighted lines
MarkupResult```html+twig title=\"HTML in a TWIG template\" hl_lines=\"8 9 10\"\n{% for image in attribute(data, 'as:image')|slice(0,1) %}\n {% if image[\"flv:exif\"] %}\n {% set height = image[\"flv:exif\"].ImageHeight%}\n {% else %}\n {% set width = 1200 %}\n {% endif %}\n {% set imageurl = IIIFserverurl ~ image['url']|replace({'s3://':''})|replace({'private://':''})|url_encode %}\n<a href=\"{{ nodeurl }}\" title=\"{{ data.label }}\" alt=\"{{ data.label }}\">\n<img src=\"{{ imageurl }}/pct:5,5,90,30/,400/0/default.jpg\" class=\"d-block.w-auto\" alt=\"{{ image.name }}\" height=\"400px\" style=\"min-width:1200px\">\n</a> \n{% endfor %}\n```\n
HTML in a TWIG template{% for image in attribute(data, 'as:image')|slice(0,1) %}\n {% if image[\"flv:exif\"] %}\n {% set height = image[\"flv:exif\"].ImageHeight%}\n {% else %}\n {% set width = 1200 %}\n {% endif %}\n {% set imageurl = IIIFserverurl ~ image['url']|replace({'s3://':''})|replace({'private://':''})|url_encode %}\n<a href=\"{{ nodeurl }}\" title=\"{{ data.label }}\" alt=\"{{ data.label }}\">\n<img src=\"{{ imageurl }}/pct:5,5,90,30/,400/0/default.jpg\" class=\"d-block.w-auto\" alt=\"{{ image.name }}\" height=\"400px\" style=\"min-width:1200px\">\n</a> \n{% endfor %}\n
","tags":["Documentation","Contributing","Examples"]},{"location":"documentation_features/#quirks","title":"Quirks","text":"Because of the use of front matter (the block of YAML at the top that contains settings and data for the file) the markup for a horizontal rule is restricted. To create one you have to use the following:
Horizontal Rule
MarkupResult___\n
Info
The above are underscore (_
) characters, as opposed to hyphens (-
).
Some of the documentation that is automatically deployed from the repos have special comments that are converted to theme-specific elements via script.
Front Matter
Deployment Repo with Front MatterDocumentation Repo with Front Matter<!--documentation\n---\ntitle: \"Adding Demo Archipelago Digital Objects (ADOs) to your Repository\"\ntags:\n - Archipelago Digital Objects\n - Demo Content\n---\ndocumentation-->\n
---\ntitle: \"Adding Demo Archipelago Digital Objects (ADOs) to your Repository\"\ntags:\n - Archipelago Digital Objects\n - Demo Content\n---\n
Switching Elements
Deployment Repo with Theme-specific MarkupDocumentation Repo with Theme-specific Markup<!--switch_below\n\n??? info \"OSX (macOS)/x86-64\"\n\n ```shell\n cp docker-compose-osx.yml docker-compose.yml\n ```\n\n??? info \"Linux/x86-64/AMD64\"\n\n ```shell\n cp docker-compose-linux.yml docker-compose.yml\n ```\n\n??? info \"OSX (macOS)/Linux/ARM64\"\n\n ```shell\n cp docker-compose-arm64.yml docker-compose.yml\n ```\n\nswitch_below-->\n\n___\n\nOSX (macOS)/x86-64:\n\n```shell\ncp docker-compose-osx.yml docker-compose.yml\n```\n\n___\n\nLinux/x86-64/AMD64:\n\n```shell\ncp docker-compose-linux.yml docker-compose.yml\n```\n\n___\n\nOSX (macOS)/Linux/ARM64:\n\n```shell\ncp docker-compose-arm64.yml docker-compose.yml\n```\n\n___\n\n<!--switch_above\nswitch_above-->\n
??? info \"OSX (macOS)/x86-64\"\n\n ```shell\n cp docker-compose-osx.yml docker-compose.yml\n ```\n\n??? info \"Linux/x86-64/AMD64\"\n\n ```shell\n cp docker-compose-linux.yml docker-compose.yml\n ```\n\n??? info \"OSX (macOS)/Linux/ARM64\"\n\n ```shell\n cp docker-compose-arm64.yml docker-compose.yml\n ```\n\n<!--repo_docs\n\n___\n\nOSX (macOS)/x86-64:\n\n```shell\ncp docker-compose-osx.yml docker-compose.yml\n```\n\n___\n\nLinux/x86-64/AMD64:\n\n```shell\ncp docker-compose-linux.yml docker-compose.yml\n```\n\n___\n\nOSX (macOS)/Linux/ARM64:\n\n```shell\ncp docker-compose-arm64.yml docker-compose.yml\n```\n\n___\n\nrepo_docs-->\n
","tags":["Documentation","Contributing","Examples"]},{"location":"documentation_technical/","title":"Documentation Technical Details","text":"Archipelago documentation is generated using the following open source projects:
To use any advanced features not mentioned in these pages, you can look through the documentation for each of the above projects.
In addition to the pages added directly via this repository, there are some pages automatically deployed here with GitHub Actions from the following repositories:
Both the main READMEs and documentation in the docs
folders for those repositories are prepended with archipelago-deployment
and archipelago-deployment-live
respectively and copied to the docs
folder here with the rest of the documentation. In practice that means those pieces of documentation need to be edited in those repositories directly.
A brief overview of the specific functionality or workflow area that will be covered in your documentation.
","tags":["Documentation","Contributing","Examples"]},{"location":"documentation_template/#stepsguides","title":"Steps/Guides","text":"Step two example with images:
Step three example with Details section:
Click to open this Details SectionMore Details in a List
Step four example with a simple code block:
{% for image in attribute(data, 'as:image')|slice(0,1) %}\n {% if image[\"flv:exif\"] %}\n {% set height = image[\"flv:exif\"].ImageHeight%}\n {% else %}\n {% set width = 1200 %}\n {% endif %}\n {% set imageurl = IIIFserverurl ~ image['url']|replace({'s3://':''})|replace({'private://':''})|url_encode %}\n<a href=\"{{ nodeurl }}\" title=\"{{ data.label }}\" alt=\"{{ data.label }}\">\n<img src=\"{{ imageurl }}/pct:5,5,90,30/,400/0/default.jpg\" class=\"d-block.w-auto\" alt=\"{{ image.name }}\" height=\"400px\" style=\"min-width:1200px\">\n</a> \n{% endfor %}\n
Step five example with a code block that has a title and highlighted lines:
HTML in a TWIG template{% for image in attribute(data, 'as:image')|slice(0,1) %}\n {% if image[\"flv:exif\"] %}\n {% set height = image[\"flv:exif\"].ImageHeight%}\n {% else %}\n {% set width = 1200 %}\n {% endif %}\n {% set imageurl = IIIFserverurl ~ image['url']|replace({'s3://':''})|replace({'private://':''})|url_encode %}\n<a href=\"{{ nodeurl }}\" title=\"{{ data.label }}\" alt=\"{{ data.label }}\">\n<img src=\"{{ imageurl }}/pct:5,5,90,30/,400/0/default.jpg\" class=\"d-block.w-auto\" alt=\"{{ image.name }}\" height=\"400px\" style=\"min-width:1200px\">\n</a> \n{% endfor %}\n
Last Step Example.
Congratulations! \ud83c\udf89
","tags":["Documentation","Contributing","Examples"]},{"location":"documentation_workflow/","title":"GitHub Workflow","text":"git checkout -b ISSUE-100\n
cp docs/documentation_template.md docs/new_documentation.md\n
nav
section of the mkdocs.yml
configuration file at the root of the repo. For example: nav:\n - Home: index.md\n - About Archipelago:\n - Archipelago's Philosophy & Guiding Principles: ourtake.md\n - Strawberryfields Forever: strawberryfields.md\n - Software Services: devops.md\n - New Documentation: new_documentation.md\n - Code of Conduct: CODE_OF_CONDUCT.md\n - Instructions and Guides:\n - Archipelago-Deployment:\n - Start: archipelago-deployment-readme.md\n - Installing Archipelago Drupal 9 on OSX (macOS): archipelago-deployment-osx.md\n - Installing Archipelago Drupal 9 on Ubuntu 18.04 or 20.04: archipelago-deployment-ubuntu.md\n - Installing Archipelago Drupal 9 on Windows 10/11: archipelago-deployment-windows.md\n - Adding Demo Archipelago Digital Objects (ADOs) to your Repository: archipelago-deployment-democontent.md\n...\n
To view the changes locally, first install the Python libraries using the Python package manager pip:
pip install mkdocs-material mike git+https://github.com/jldiaz/mkdocs-plugin-tags.git mkdocs-git-revision-date-localized-plugin mkdocs-glightbox\n
You may need to install Python on your machine. Download Python or use your favorite operating system package manager such as Homebrew. Now you can build the site locally, e.g. for the documentation using the 1.0.0 branch:
mike deploy 1.0.0\nmike set-default 1.0.0\n
If you create a new branch to match the issue number as in step 3, you would use your branch instead of 1.0.0. For example, a branch of ISSUE-129. mike deploy ISSUE-129\nmike set-default ISSUE-129\n
mike serve\n
git add .\ngit commit -m \"Create new docs with useful information.\"\ngit push origin ISSUE-100\n
Resolves #100
.Drupal, the project, puts out new core releases on a regular schedule. Your Archipelago site needs to apply the security updates and possibly minor releases between major core updates. Major core updates will typically coincide with an updated Archipelago stable release.
Updating core is done via Composer:
","tags":["Archipelago-deployment","Archipelago-deployment-live","DevOps","Drupal","Drupal Core"]},{"location":"drupal_core_update/#stepsguides","title":"Steps/Guides","text":"docker exec -ti esmero-php bash -c \"composer update \"drupal/core-*:^9\" --with-all-dependencies --dry-run\n
The --dry-run
flag will allow you to see what will be updated. Once you review the updates and are ready to go with the full update, you will run the same command without the dry-run
flag.docker exec -ti esmero-php bash -c \"composer update \"drupal/core-*:^9\" --with-all-dependencies\"\n
docker exec -ti esmero-php bash -c \"drush updatedb\"\n
docker exec -ti esmero-php bash -c \"drush cache:rebuild\"\n
Occasionally there will be other Drupal modules that Archipelago uses, and they need to be updated at the same time you run a Core update. This is an example of updating Drupal Webform, which was required for moving to Drupal 9.5.x:Updating a Drupal module
docker exec -ti esmero-php bash -c \"composer update \"drupal/webform:6.1.4\n
docker exec -ti esmero-php bash -c \"drush updatedb\"\n
or docker exec -ti esmero-php bash -c \"drush updb\"\n
docker exec -ti esmero-php bash -c \"drush cache:rebuild\"\n
or docker exec -ti esmero-php bash -c \"drush cr\"\n
Archipelago's Advanced Batch Find and Replace functionality provides different ways for you to efficiently Find/Search and Replace metadata values found in the raw JSON of your Digital Objects and Collections. Advanced Batch Find and Replace makes use of customized Actions that extend Drupal's VBO module to enable these powerful batch metadata replacement Actions in your Archipelago environment.
","tags":["Advanced Batch Find and Replace","Advanced Find and Replace","Find and Replace","Search and Replace","Metadata","Review"]},{"location":"find_and_replace/#where-to-find","title":"Where to Find","text":"In default Archipelagos, you can find Advanced Batch Find and Replace:
Tools
menu > Advanced Batch Find and Replace
/search-and-replace
/admin/structure/views/view/solr_search_content_with_find_and_replace
. The default Facets referenced above can be found at /admin/structure/block/list/archipelago_subtheme
in the Sidebar Second
section. Please proceed with caution if making any changes to the default configurations for this View or the Facets referenced on this View Page. From the main page (display title 'Search and Replace'), you will see:
Fulltext Search
boxActions
Select/deselect all results in this view (all pages)
via toggle switch\u25ba Raw Metadata (JSON)
section beneath each each individual Object/Collection containing the full Raw JSON metadata record for referenceSelected items in this view
(will be 0 items to start).Selected items
available for preview on this main/top page.You will also see a listing of a few different default Facets configured to help guide your selection of potential Digital Objects/Collections:
type
The default options available through the Action dropdown menu include:
Export Archipelago Digital Objects to CSV content item
Text based find and replace Metadata for Archipelago Digital Objects content item
Webform find-and-replace Metadata for Archipelago Digital Objects content item
JSON Patch Metadata for Archipelago Digital Objects content item
Publish Digital Object
Unpublish Digital Object
Change the author of content
Delete selected entities/translations
* denotes Action options that are also shared with the main Content
Page Action Menu
After reviewing the 'Important Notes & Workflow Recommendations' below, please see the following separate pages for detailed examples walking through the usage of the three different Find and Replace specific actions.
Important Note
The Actions available through Archipelago's Advanced Batch Find and Replace can potentially have repository-wide effects. It is strongly recommended that you proceed with caution when executing any of the available Actions.
","tags":["Advanced Batch Find and Replace","Advanced Find and Replace","Find and Replace","Search and Replace","Metadata","Review"]},{"location":"find_and_replace/#simulation-mode","title":"Simulation Mode","text":"Before executing any of the available Find and Replace Actions, the best-practice workflow recommendation is to always first run in Simulation Mode:
After applying any of the Find and Replace Actions, you can review the specific changes that were made within the Revision history of the impacted Digital Objects and Collections.
Find and Replace
page results listing or the main Content
page, navigate to the Digital Object/Collection you wish to review.Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
","tags":["Advanced Batch Find and Replace","Advanced Find and Replace","Find and Replace","Search and Replace","Metadata","Review"]},{"location":"find_and_replace_action_json_patch/","title":"JSON Patch Find and Replace","text":"Enables you to carry out advanced JSON Patch operations within your metadata.
Note
Please refer to the main Find and Replace documentation page for a general overview of where to find within your Archipelago, a general overview of default options and important notes and workflow recommendations.
","tags":["Advanced Batch Find and Replace","JSON Patch Find and Replace","Search and Replace","JSON Patch","JSON Pointer"]},{"location":"find_and_replace_action_json_patch/#what-is-a-json-patch-and-when-to-use-it","title":"What is a JSON Patch and when to use it?","text":"Before we dive into the mechanics of doing JSON Patching Batch in Archipelago we need to learn what a JSON Patch is and of course when applying this action is useful, possible (or not).
A JSON Patch is a JSON
Document containing precise operations that can heavily modify the structure and values of an existing JSON Document, in this case the RAW JSON found inside a strawberryfield of our ADOs.
The operations available for modifications of an JSON document are:
And there is also one (very important) used to check/validate the existence of values/keys:
Even if you can have multiple operations in a single JSON Patch Document, they are always applied as a whole. Means if any of those fail nothing will be applied and in concrete, in our VBO action, no change will be done to your ADO. This in combination with the \u201ctest\u201d operation gives you a lot of safety and a way of discerning/skipping completely a complex set of operations on e.g. a failed \u201dtest\u201d.
JSON Patch uses JSON Pointers
as arguments in all of these operations to target a specific key/value of your JSON.
Using the following JSON Snippet as example:
\"subject_lcgft_terms\": [\n {\n \"uri\": \"http:\\/\\/id.loc.gov\\/authorities\\/genreForms\\/gf2017027249\",\n \"label\": \"Photographs\"\n }\n]\n
These JSON Pointers will resolve in the following manner:
/subject_lcgft_terms
[\n {\n \"uri\": \"http:\\/\\/id.loc.gov\\/authorities\\/genreForms\\/gf2017027249\",\n \"label\": \"Photographs\"\n }\n]\n
/subject_lcgft_terms/0
{\n \"uri\": \"http:\\/\\/id.loc.gov\\/authorities\\/genreForms\\/gf2017027249\",\n \"label\": \"Photographs\"\n}\n
/subject_lcgft_terms/0/label
Photographs\n
AS you can see a JSON Pointer is very precise , allowing you to target complete structures and values but it does not allow Wildcard Operations. Means you can not \"search\" or do loosy comparissons. This very fact limits many times the use case. E.g. if you have a list of terms like this:
\"terms\": [\n \"term1\",\n \"term2\",\n \"term3\"\n]\n
and you want to \"test\" for the existence of \"term3\"
before applying a change, you would need to know exactly at what position inside the terms Array
(Starting from 0) it will/should be found. And that might not be consistent across every ADO.
So how do we use these pointers in an operation inside a JSON Patch document? Using the same fake \"terms\" JSON snippet the previously listed, operations are written like this:
","tags":["Advanced Batch Find and Replace","JSON Patch Find and Replace","Search and Replace","JSON Patch","JSON Pointer"]},{"location":"find_and_replace_action_json_patch/#add","title":"add","text":"{ \"op\": \"add\", \"path\": \"/terms/0\", \"value\": \"term_another\" }\n
This will add before the first term (in this case \"term1\" ) \"term_another\" as a value.
At the endyou can use a dash (-
) (e.g. \"/terms/-\") instead of the numeric position of an array entry to denote that the \"value\" needs to be added at the end. This is needed specially for empty lists. You can not target 0
position on an empty array.
{ \"op\": \"remove\", \"path\": \"/terms/1\"}\n
This will remove the second term (in this case \"term2\" )
","tags":["Advanced Batch Find and Replace","JSON Patch Find and Replace","Search and Replace","JSON Patch","JSON Pointer"]},{"location":"find_and_replace_action_json_patch/#replace","title":"replace","text":"{ \"op\": \"replace\", \"path\": \"/terms/1\", \"value\": \"term_again\" }\n
This will remove the second term (in this case \"term2\" ) and put in its place \"term_again\" as a value. Basically two operationes, \u201cremove\u201c and \u201cadd \u201c in a single step","tags":["Advanced Batch Find and Replace","JSON Patch Find and Replace","Search and Replace","JSON Patch","JSON Pointer"]},{"location":"find_and_replace_action_json_patch/#move","title":"move","text":"{ \"op\": \"move\", \"from\": \"/terms\", \"path\": \"/terms_somewhere_else\"}\n
This will copy all values inside top JSON key terms
into a new top JSON key named terms_somewhere_else
and remove then the old terms
key.
{ \"op\": \"copy\", \"from\": \"/terms\", \"path\": \"/terms_somewhere_else\"}\n
Similar to \u201cmove\u201c, it will copy values inside top JSON key terms
into a new top JSON key named terms_somewhere_else
without removing then the old terms
key or its content!
{ \"op\": \"test\", \"path\": \"/terms/0\", \"value\": \"term1\"}\n
Finally, \u201ctest\u201c will check if on position 0
of terms there is a value of \"term1\". If not, the test will fail. If a single test fails the whole JSON Patch will be cancelled. Tests can not be concatenated via OR bolean operators. So they always act individually. Two tests with one failing is a failed JSON Patch.
There is more of course! The Complete documentation can be found here
So. When to use JSON Patch? There are a few general rules/suggestions:
0
, then again at position 1
, etc)fixing
a value, in the sense of putting a static replacement, is not what you need. Other Actions
documented will allow you to replace one value with another fixed one. But JSON Patch will allow you to use existing data inside your target JSON and move it/copy it.Now that we know what it is and when we should do it, we can make a small exercise. The goal:
photograph
that are member of collection with Node ID 16
description
key from a single value into an array.As with every other VBO action described in our documentation, start by selecting the ADOs you want to modify using the exposed Search Field(s) and/or Facets present in the Search and Replace View
found at /search-and-replace
.
Once you have filtered down the list to manageable size, containing at least the ADOs you plan on modifying (but for JSON Patch operation could be more too because you can \u201ctest\u201c and match ), press either on Select / deselect all results in this view
to pass all the result (this includes all pages, not only the currently visible one) or go selectively one by one by checking on the toggle
found beside each ADO's title. You will see how the number in Selected 0 item in this view
increases. Now press Apply to select Items
. We will use for this example Laddie the dog running in the garden, Bronx, N.Y., undated [c. 1910-1918?]
, an ADO we provide in our DEMO sets.
Redundant to say, Batch Actions are intended to be used when a modification needs to be applied to more than one item, implying there is a pattern. For single ADOs you can always do this faster directly via the EDIT tab.
Keep your JSON Patches (and friends) aroundSince JSON Patching involves writing a, sometimes complex, JSON Document, please keep around an Application (or Text File) where you can copy/paste and save your JSON Patches for resuse or future references. Archipelago will not store nor remember between runs the JSON Patch document you submitted. It is also very useful to copy and have at hand the RAW JSON
of one of the ADOs you plan to modify as a reference/aid while building the given JSON Patch document.
The default config JSON Patch form will contain an example JSON Patch set of Commands (Document)
We are going to replace this one with a valid JSON document. Notice that it does not require a root {}
Object wrapper and it is really a list (or array) of operations.
A bit of repetition but needed to explain the JSON Patch document. You can see on the following Note box how we copyed the RAW JSON of one ADO to be Patched to have a reference while building the JSON PATCH. for the sake of brevity we remove here the longer Image File technical Metadata.
RAW JSON ofLaddie the dog running in the garden, Bronx, N.Y., undated [c. 1910-1918?]
Laddie the dog running in the garden, Bronx, N.Y., undated [c. 1910-1918?] | before Patching{\n \"note\": \"\",\n \"type\": \"Photograph\",\n \"viaf\": \"\",\n \"label\": \"Laddie the dog running in the garden, Bronx, N.Y., undated [c. 1910-1918?]\",\n \"model\": [],\n \"owner\": \"New-York Historical Society, 170 Central Park West, New York, NY 10024, 212-873-3400.\",\n \"audios\": [],\n \"images\": [\n 26\n ],\n \"models\": [],\n \"rights\": \"This digital image may be used for educational or scholarly purposes without restriction. Commercial and other uses of the item are prohibited without prior written permission from the New-York Historical Society. For more information, please visit the New-York Historical Society's Rights and Reproductions Department web page at [http:\\/\\/www.nyhistory.org\\/about\\/rights-reproductions](http:\\/\\/www.nyhistory.org\\/about\\/rights-reproductions).\",\n \"videos\": [],\n \"creator\": \"\",\n \"ap:tasks\": {\n \"ap:sortfiles\": \"index\"\n },\n \"duration\": \"\",\n \"ispartof\": [],\n \"language\": \"English\",\n \"documents\": [],\n \"edm_agent\": \"\",\n \"publisher\": \"\",\n \"ismemberof\": [\n 16\n ],\n \"creator_lod\": [\n {\n \"name_uri\": \"\",\n \"role_uri\": \"http:\\/\\/id.loc.gov\\/vocabulary\\/relators\\/pht\",\n \"agent_type\": \"personal\",\n \"name_label\": \"Stonebridge, George Ehler\",\n \"role_label\": \"Photographer\"\n }\n ],\n \"description\": \"George Ehler Stonebridge (d. 1941) was an amateur photographer who lived and worked in the Bronx, New York. He left little record of himself, but an invaluable one of his surroundings and interests. Stonebridge lived at several locations in the Bronx with his wife Bella, and their three children Grace, George, and William. He worked at the Northern Gaslight Company, although the position he held is unknown. In addition to taking photographs, Stonebridge wrote poetry and prose about his love of the Bronx, his children, and in honor of military victories. Some of Stonebridge's photographs appeared in local papers. In 1898, he was an authorized reporter and photographer for the North Side News; in 1905 he was an authorized reporter for the Bronx Borough Record and Times, and probably took photographs for that paper as well. Stonebridge was fascinated with the subject of military preparedness. Training rituals and staged battles were one of his favorite photographic subjects. His 1898 poem, \\\"Remember the Maine,\\\" celebrates the United States' victory in the Spanish-American War. He was especially proud of soldiers from the Bronx, and photographed historical tablets throughout the Borough commemorating previous military victories. Stonebridge also used his photographs to illustrate lectures. 
In 1907, he gave several lectures on \\\"The Training of War,\\\" using colored lantern slides.\",\n \"interviewee\": \"\",\n \"interviewer\": \"\",\n \"pubmed_mesh\": null,\n \"sequence_id\": \"\",\n \"subject_loc\": [\n {\n \"uri\": \"http:\\/\\/id.loc.gov\\/authorities\\/subjects\\/sh85038796\",\n \"label\": \"Dogs\"\n }\n ],\n \"website_url\": \"\",\n \"as:generator\": {\n \"type\": \"Update\",\n \"actor\": {\n \"url\": \"http:\\/\\/localhost:8001\\/form\\/descriptive-metadata\",\n \"name\": \"descriptive_metadata\",\n \"type\": \"Service\"\n },\n \"endTime\": \"2022-12-05T09:19:37-05:00\",\n \"summary\": \"Generator\",\n \"@context\": \"https:\\/\\/www.w3.org\\/ns\\/activitystreams\"\n },\n \"date_created\": \"1910-01-01\",\n \"issue_number\": null,\n \"date_published\": \"\",\n \"subjects_local\": null,\n \"term_aat_getty\": \"\",\n \"ap:entitymapping\": {\n \"entity:file\": [\n \"model\",\n \"audios\",\n \"images\",\n \"videos\",\n \"documents\",\n \"upload_associated_warcs\"\n ],\n \"entity:node\": [\n \"ispartof\",\n \"ismemberof\"\n ]\n },\n \"europeana_agents\": \"\",\n \"europeana_places\": \"\",\n \"local_identifier\": \"nyhs_PR066_6136\",\n \"subject_wikidata\": [\n {\n \"uri\": \"http:\\/\\/www.wikidata.org\\/entity\\/Q3381576\",\n \"label\": \"black-and-white photography\"\n },\n {\n \"uri\": \"http:\\/\\/www.wikidata.org\\/entity\\/Q60\",\n \"label\": \"New York City\"\n }\n ],\n \"date_created_edtf\": \"\",\n \"date_created_free\": null,\n \"date_embargo_lift\": null,\n \"physical_location\": null,\n \"related_item_note\": null,\n \"rights_statements\": \"In Copyright - Educational Use Permitted\",\n \"europeana_concepts\": \"\",\n \"geographic_location\": {\n \"lat\": \"40.8466508\",\n \"lng\": \"-73.8785937\",\n \"city\": \"New York\",\n \"state\": \"New York\",\n \"value\": \"The Bronx, Bronx County, New York, United States\",\n \"county\": \"\",\n \"osm_id\": \"9691916\",\n \"country\": \"United States\",\n \"category\": \"boundary\",\n \"locality\": \"\",\n \"osm_type\": \"relation\",\n \"postcode\": \"\",\n \"country_code\": \"us\",\n \"display_name\": \"The Bronx, Bronx County, New York, United States\",\n \"neighbourhood\": \"Bronx County\",\n \"state_district\": \"\"\n },\n \"note_publishinginfo\": null,\n \"subject_lcgft_terms\": [\n {\n \"uri\": \"http:\\/\\/id.loc.gov\\/authorities\\/genreForms\\/gf2017027249\",\n \"label\": \"Photographs\"\n }\n ],\n \"upload_associated_warcs\": [],\n \"physical_description_extent\": \"\",\n \"subject_lcnaf_personal_names\": \"\",\n \"subject_lcnaf_corporate_names\": \"\",\n \"subjects_local_personal_names\": \"\",\n \"related_item_host_location_url\": null,\n \"subject_lcnaf_geographic_names\": [\n {\n \"uri\": \"http:\\/\\/id.loc.gov\\/authorities\\/names\\/n81059724\",\n \"label\": \"Bronx (New York, N.Y.)\"\n },\n {\n \"uri\": \"http:\\/\\/id.loc.gov\\/authorities\\/names\\/n79007751\",\n \"label\": \"New York (N.Y.)\"\n }\n ],\n \"related_item_host_display_label\": null,\n \"related_item_host_local_identifier\": null,\n \"related_item_host_title_info_title\": \"\",\n \"related_item_host_type_of_resource\": null,\n \"physical_description_note_condition\": null\n}\n
Based on our own plan:
photograph
and memberof
Node ID 16
{ \"op\": \"test\", \"path\": \"/type\", \"value\": \"Photograph\"},\n{ \"op\": \"test\", \"path\": \"/ismemberof\", \"value\": [16]}\n
Notice that to be really sure we also match the data type, an array
with a single value 16
of type integer (makes sense since the Operation is also a JSON and will be evaluated in the same way as the source RAW JSON). This is a precise match. if the ADO belongs to multiple Collections it will fail of course.
description
key from a single value into an array.{\"op\": \"add\",\"path\": \"/temp_description_array\",\"value\": []},\n{\"op\": \"move\",\"from\": \"/description\",\"path\": \"/temp_description_array/-\"},\n{\"op\": \"move\",\"from\": \"/temp_description_array\",\"path\": \"/description\"},\n
This is a multi step operation. Given that JSON Patch can not \"cast\" types and depends on a given datatype to be present before, e.g. adding a new value to it, we use here a temporary key. Notice that you can not \u201cadd\u201c or \u201cmove\u201c to e.g. the position 0
because the destination array is indeed empty (that will fail). But by using the dash
you can command it to put it at the end, which on an empty list is also at the beginning (we are starting to understand this!).
{ \"op\": \"add\", \"path\": \"/subject_wikidata/-\", \"value\": {\n \"uri\": \"https://www.wikidata.org/wiki/Q1196071\",\n \"label\": \"collie\"\n }\n}\n
geographic_location.state
value and put it into subjects_local
:{\"op\": \"remove\", \"path\": \"/subjects_local\"},\n{\"op\": \"add\", \"path\": \"/subjects_local\", \"value\": []},\n{\"op\": \"copy\", \"from\": \"/geographic_location/state\", \"path\": \"/subjects_local/-\"}\n
Why so many operations? Because initially \"subjects_local\"
had a value of null
and it is not suited to generate/add to it as a multivalued key because of that. So we need to remove it first, recreate it as an empty list and then we can copy. Pro Note: you could partially rewrite this as replace
operation!
You will lose it! you can add a \u201ctest\u201c or move data instead of recreating. Many choices.
The final JSON Patch will look like this. Copy it into the Configuration JSON Patch commands form field:
[\n {\"op\": \"test\", \"path\": \"/type\", \"value\": \"Photograph\"},\n {\"op\": \"test\", \"path\": \"/ismemberof\", \"value\": [16]},\n {\"op\": \"add\",\"path\": \"/temp_description_array\",\"value\": []},\n {\"op\": \"move\",\"from\": \"/description\",\"path\": \"/temp_description_array/-\"},\n {\"op\": \"move\",\"from\": \"/temp_description_array\",\"path\": \"/description\"},\n {\"op\": \"add\",\"path\": \"/subject_wikidata/-\",\"value\": \n {\n \"uri\": \"https://www.wikidata.org/wiki/Q1196071\",\n \"label\": \"collie\"\n }\n },\n {\"op\": \"remove\", \"path\": \"/subjects_local\"},\n {\"op\": \"add\", \"path\": \"/subjects_local\", \"value\": []},\n {\"op\": \"copy\", \"from\": \"/geographic_location/state\", \"path\": \"/subjects_local/-\"}\n]\n
The inverse process online Now that you know (or are starting to understand) the manual process you can also try this online tool that allows you, based on a source and a destination JSON, generate the needed JSON Patch to mutate one JSON into the another. The logic might not be always what you need and most likely it will not take in account that you actually need to move values and will prefer to fix values to be added via an add operation
","tags":["Advanced Batch Find and Replace","JSON Patch Find and Replace","Search and Replace","JSON Patch","JSON Pointer"]},{"location":"find_and_replace_action_json_patch/#step-3-run-the-json-patch-action-in-simulation-mode","title":"Step 3: Run the JSON Patch Action in simulation mode","text":"Ready. Now check the \"only simulate and debug affected JSON\" checkbox. We want to see if we did well but not yet modify any ADOs. Press Apply
button. You will get another confirmation screen. Press Execute Action
.
It will run quick (on this example) and you will redirected back on the original Drupal Views of Step 1. If your source ADO
is actually the one from our Demo Collection you might see a diff
, something very similar to this:
129d129 < \"description\": \"George Ehler Stonebridge (d. 1941) was an amateur photographer who lived and worked in the Bronx, New York. He left little record of himself, but an invaluable one of his surroundings and interests. Stonebridge lived at several locations in the Bronx with his wife Bella, and their three children Grace, George, and William. He worked at the Northern Gaslight Company, although the position he held is unknown. In addition to taking photographs, Stonebridge wrote poetry and prose about his love of the Bronx, his children, and in honor of military victories. Some of Stonebridge's photographs appeared in local papers. In 1898, he was an authorized reporter and photographer for the North Side News; in 1905 he was an authorized reporter for the Bronx Borough Record and Times, and probably took photographs for that paper as well. Stonebridge was fascinated with the subject of military preparedness. Training rituals and staged battles were one of his favorite photographic subjects. His 1898 poem, \\\"Remember the Maine,\\\" celebrates the United States' victory in the Spanish-American War. He was especially proud of soldiers from the Bronx, and photographed historical tablets throughout the Borough commemorating previous military victories. Stonebridge also used his photographs to illustrate lectures. In 1907, he gave several lectures on \\\"The Training of War,\\\" using colored lantern slides.\", 155d154 < \"subjects_local\": null, 182a180,183 > }, > { > \"uri\": \"https:\\/\\/www.wikidata.org\\/wiki\\/Q1196071\", > \"label\": \"collie\" 236c238,244 < \"physical_description_note_condition\": null --- > \"physical_description_note_condition\": null, > \"description\": [ > \"George Ehler Stonebridge (d. 1941) was an amateur photographer who lived and worked in the Bronx, New York. He left little record of himself, but an invaluable one of his surroundings and interests. Stonebridge lived at several locations in the Bronx with his wife Bella, and their three children Grace, George, and William. He worked at the Northern Gaslight Company, although the position he held is unknown. In addition to taking photographs, Stonebridge wrote poetry and prose about his love of the Bronx, his children, and in honor of military victories. Some of Stonebridge's photographs appeared in local papers. In 1898, he was an authorized reporter and photographer for the North Side News; in 1905 he was an authorized reporter for the Bronx Borough Record and Times, and probably took photographs for that paper as well. Stonebridge was fascinated with the subject of military preparedness. Training rituals and staged battles were one of his favorite photographic subjects. His 1898 poem, \\\"Remember the Maine,\\\" celebrates the United States' victory in the Spanish-American War. He was especially proud of soldiers from the Bronx, and photographed historical tablets throughout the Borough commemorating previous military victories. Stonebridge also used his photographs to illustrate lectures. In 1907, he gave several lectures on \\\"The Training of War,\\\" using colored lantern slides.\" > ], > \"subjects_local\": [ > \"New York\" > ]\n
Which means your Patch would have been applied!
In case something went wrong, e.g. if any of the operations did not match your source data, you will see a WARNING
like this
Patch could not be applied for Laddie the dog running in the garden, Bronx, N.Y., undated [c. 1910-1918?]\n
","tags":["Advanced Batch Find and Replace","JSON Patch Find and Replace","Search and Replace","JSON Patch","JSON Pointer"]},{"location":"find_and_replace_action_json_patch/#step-4-run-the-json-patch-action-but-for-real","title":"Step 4: Run the JSON Patch Action but for real!","text":"Now that actual patching. Repeat from Step 1 to Step 3 but keep \"only simulate and debug affected JSON\" unchecked and follow the steps again. You ADO will be modified and you will get almost no notifications except of an action completed notice (in soothing green). If you check Laddie's ADO RAW json (expand in the same resulting view) it will look now like this
Laddie the dog running in the garden, Bronx, N.Y., undated [c. 1910-1918?] JSON after patching{ \n \"note\": \"\",\n \"type\": \"Photograph\",\n \"viaf\": \"\",\n \"label\": \"Laddie the dog running in the garden, Bronx, N.Y., undated [c. 1910-1918?]\",\n \"model\": [],\n \"owner\": \"New-York Historical Society, 170 Central Park West, New York, NY 10024, 212-873-3400.\",\n \"audios\": [],\n \"images\": [\n 26\n ],\n \"models\": [],\n \"rights\": \"This digital image may be used for educational or scholarly purposes without restriction. Commercial and other uses of the item are prohibited without prior written permission from the New-York Historical Society. For more information, please visit the New-York Historical Society's Rights and Reproductions Department web page at [http:\\/\\/www.nyhistory.org\\/about\\/rights-reproductions](http:\\/\\/www.nyhistory.org\\/about\\/rights-reproductions).\",\n \"videos\": [],\n \"creator\": \"\",\n \"ap:tasks\": {\n \"ap:sortfiles\": \"index\"\n },\n \"duration\": \"\",\n \"ispartof\": [],\n \"language\": \"English\",\n \"documents\": [],\n \"edm_agent\": \"\",\n \"publisher\": \"\",\n \"ismemberof\": [\n 16\n ],\n \"creator_lod\": [\n {\n \"name_uri\": \"\",\n \"role_uri\": \"http:\\/\\/id.loc.gov\\/vocabulary\\/relators\\/pht\",\n \"agent_type\": \"personal\",\n \"name_label\": \"Stonebridge, George Ehler\",\n \"role_label\": \"Photographer\"\n }\n ],\n \"description\": [\n \"George Ehler Stonebridge (d. 1941) was an amateur photographer who lived and worked in the Bronx, New York. He left little record of himself, but an invaluable one of his surroundings and interests. Stonebridge lived at several locations in the Bronx with his wife Bella, and their three children Grace, George, and William. He worked at the Northern Gaslight Company, although the position he held is unknown. In addition to taking photographs, Stonebridge wrote poetry and prose about his love of the Bronx, his children, and in honor of military victories. Some of Stonebridge's photographs appeared in local papers. In 1898, he was an authorized reporter and photographer for the North Side News; in 1905 he was an authorized reporter for the Bronx Borough Record and Times, and probably took photographs for that paper as well. Stonebridge was fascinated with the subject of military preparedness. Training rituals and staged battles were one of his favorite photographic subjects. His 1898 poem, \\\"Remember the Maine,\\\" celebrates the United States' victory in the Spanish-American War. He was especially proud of soldiers from the Bronx, and photographed historical tablets throughout the Borough commemorating previous military victories. Stonebridge also used his photographs to illustrate lectures. 
In 1907, he gave several lectures on \\\"The Training of War,\\\" using colored lantern slides.\"\n ],\n \"interviewee\": \"\",\n \"interviewer\": \"\",\n \"pubmed_mesh\": null,\n \"sequence_id\": \"\",\n \"subject_loc\": [\n {\n \"uri\": \"http:\\/\\/id.loc.gov\\/authorities\\/subjects\\/sh85038796\",\n \"label\": \"Dogs\"\n }\n ],\n \"website_url\": \"\",\n \"as:generator\": {\n \"type\": \"Update\",\n \"actor\": {\n \"url\": \"http:\\/\\/localhost:8001\\/form\\/descriptive-metadata\",\n \"name\": \"descriptive_metadata\",\n \"type\": \"Service\"\n },\n \"endTime\": \"2022-12-05T09:19:37-05:00\",\n \"summary\": \"Generator\",\n \"@context\": \"https:\\/\\/www.w3.org\\/ns\\/activitystreams\"\n },\n \"date_created\": \"1910-01-01\",\n \"issue_number\": null,\n \"date_published\": \"\",\n \"subjects_local\": [\n \"New York\"\n ],\n \"term_aat_getty\": \"\",\n \"ap:entitymapping\": {\n \"entity:file\": [\n \"model\",\n \"audios\",\n \"images\",\n \"videos\",\n \"documents\",\n \"upload_associated_warcs\"\n ],\n \"entity:node\": [\n \"ispartof\",\n \"ismemberof\"\n ]\n },\n \"europeana_agents\": \"\",\n \"europeana_places\": \"\",\n \"local_identifier\": \"nyhs_PR066_6136\",\n \"subject_wikidata\": [\n {\n \"uri\": \"http:\\/\\/www.wikidata.org\\/entity\\/Q3381576\",\n \"label\": \"black-and-white photography\"\n },\n {\n \"uri\": \"http:\\/\\/www.wikidata.org\\/entity\\/Q60\",\n \"label\": \"New York City\"\n },\n {\n \"uri\": \"https:\\/\\/www.wikidata.org\\/wiki\\/Q1196071\",\n \"label\": \"collie\"\n }\n ],\n \"date_created_edtf\": \"\",\n \"date_created_free\": null,\n \"date_embargo_lift\": null,\n \"physical_location\": null,\n \"related_item_note\": null,\n \"rights_statements\": \"In Copyright - Educational Use Permitted\",\n \"europeana_concepts\": \"\",\n \"geographic_location\": {\n \"lat\": \"40.8466508\",\n \"lng\": \"-73.8785937\",\n \"city\": \"New York\",\n \"state\": \"New York\",\n \"value\": \"The Bronx, Bronx County, New York, United States\",\n \"county\": \"\",\n \"osm_id\": \"9691916\",\n \"country\": \"United States\",\n \"category\": \"boundary\",\n \"locality\": \"\",\n \"osm_type\": \"relation\",\n \"postcode\": \"\",\n \"country_code\": \"us\",\n \"display_name\": \"The Bronx, Bronx County, New York, United States\",\n \"neighbourhood\": \"Bronx County\",\n \"state_district\": \"\"\n },\n \"note_publishinginfo\": null,\n \"subject_lcgft_terms\": [\n {\n \"uri\": \"http:\\/\\/id.loc.gov\\/authorities\\/genreForms\\/gf2017027249\",\n \"label\": \"Photographs\"\n }\n ],\n \"upload_associated_warcs\": [],\n \"physical_description_extent\": \"\",\n \"subject_lcnaf_personal_names\": \"\",\n \"subject_lcnaf_corporate_names\": \"\",\n \"subjects_local_personal_names\": \"\",\n \"related_item_host_location_url\": null,\n \"subject_lcnaf_geographic_names\": [\n {\n \"uri\": \"http:\\/\\/id.loc.gov\\/authorities\\/names\\/n81059724\",\n \"label\": \"Bronx (New York, N.Y.)\"\n },\n {\n \"uri\": \"http:\\/\\/id.loc.gov\\/authorities\\/names\\/n79007751\",\n \"label\": \"New York (N.Y.)\"\n }\n ],\n \"related_item_host_display_label\": null,\n \"related_item_host_local_identifier\": null,\n \"related_item_host_title_info_title\": \"\",\n \"related_item_host_type_of_resource\": null,\n \"physical_description_note_condition\": null\n}\n
That is all. Again, keep your JSON Patches safe in a text document, test/try simple things first, look for patterns, look for No-Nos that can become \"tests\" to avoid touching ADOs that do not need to be updated, and always remember that the destination type (single value, array, or object) of an existing key might affect your complex logic. Happy Patching!
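As a hypothetical sketch of such a guard (the key and values below are invented for illustration), a leading test makes the whole patch fail, with the \"Patch could not be applied\" warning, for any ADO that does not match, leaving it untouched:
[\n {\"op\": \"test\", \"path\": \"/rights_statements\", \"value\": \"In Copyright\"},\n {\"op\": \"replace\", \"path\": \"/rights_statements\", \"value\": \"In Copyright - Educational Use Permitted\"}\n]\n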
Thank you for reading! Please contact us on our\u00a0Archipelago Commons Google Group\u00a0with any questions or feedback.
Return to the main\u00a0Find and Replace documentation page\u00a0or the\u00a0Archipelago Documentation main page.
","tags":["Advanced Batch Find and Replace","JSON Patch Find and Replace","Search and Replace","JSON Patch","JSON Pointer"]},{"location":"find_and_replace_action_text/","title":"Text Based Find and Replace","text":"The text-based find and replace is case-sensitive and space-sensitive, and while it's the most simple of the actions, it's quite powerful. For this reason it's important to be very precise and target only what's intended. Below are a guide and some examples of use cases for this action.
Note
Please refer to the main Find and Replace documentation page for a general overview of where to find within your Archipelago, a general overview of default options and important notes and workflow recommendations.
","tags":["Advanced Batch Find and Replace","Advanced Find and Replace","Find and Replace","Search and Replace","Metadata","Review"]},{"location":"find_and_replace_action_text/#step-by-step-guide","title":"Step-by-Step Guide","text":"Tools > Advanced Batch Find and Replace
.Select / deselect all results (all pages, x total)
or toggle the buttons for individual objects.\u25ba Raw Metadata (JSON)
for some of the objects and double-check that the text being searched is only targeting what is intended and that the replacement text makes sense.Text based find and replace Metadata for Archipelago Digital Objects content item
from the Action
dropdown.Selected X items
and review the list.Apply to selected items
button (don't worry, nothing will happen yet).JSON Search String
) and replace (JSON Replacement String
).If you're absolutely certain about the replacement you have targeted, uncheck the 'only simulate and debug affected JSON' option and select Apply
.
only simulate and debug affected JSON
This option, which is selected by default, will simulate the action and show the list of objects that would be affected, along with the number of modifications for each object and a total number of results processed. For each JSON key and value affected, the modifications will count as 2:
1 for the deletion of the current key and value
+
1 for the creation of the modified key and value
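For instance (purely as a worked example), if the simulation reports 4 modifications for a given object, that corresponds to 2 affected JSON keys/values: 2 deletions plus 2 creations.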
Replacing a JSON key
Use Case: A JSON key is currently singular but should be plural.
JSON key example with empty array value...\n\"myKey\": [],\n...\n
JSON key example with array values...\n\"myKey\": [\"strawberries\",\"blueberries\",\"blackberries\"],\n...\n
Follow the steps above and use the following for the search and replace values:
Search Value\"myKey\":\n
Replace Value\"myKeys\":\n
Tip
By using quotes and the colon instead of myKey
only, we avoid unintentionally replacing other instances of the text within the JSON.
After applying the changes, we have the following key:
JSON key example with empty array value after update...\n\"myKeys\": [],\n...\n
JSON key example with array values after update...\n\"myKeys\": [\"strawberries\",\"blueberries\",\"blackberries\"],\n...\n
Replacing a JSON value
Use Case: After a batch ingest, it was discovered that JSON values across ADOs in multiple keys contain the same typo: Agnes Meyerhoff
(two fs) instead of Agnes Meyerhof
.
...\n\"creator_lod\": [\n {\n \"name_uri\": \"\",\n \"role_uri\": \"http:\\/\\/id.loc.gov\\/vocabulary\\/relators\\/art\",\n \"agent_type\": \"personal\",\n \"name_label\": \"Meyerhoff, Agnes\",\n \"role_label\": \"Artist\"\n },\n {\n \"name_uri\": \"\",\n \"role_uri\": \"http:\\/\\/id.loc.gov\\/vocabulary\\/relators\\/col\",\n \"agent_type\": \"personal\",\n \"name_label\": \"Messenger, Maria, , 1849-1937\",\n \"role_label\": \"Collector\"\n }\n],\n\"description\": \"Inscription on mount: \\\"Meyerhoff, Agnes \\\\ Frankfurt - a\\/M. \\\\ Inv el-lith \\\\ Painter.\\\" Inscription on verso: \\\"Agnes Meyerhoff \\\\ Frankfurt a\\/M \\\\ inv. [at?] lith. \\\\ [maker in?]\\\".\",\n...\n
Follow the steps above and use the following for the search and replace values:
Search ValueMeyerhoff, Agnes\n
Replace ValueMeyerhof, Agnes\n
After applying the changes, we have the following values:
JSON values after update...\n\"creator_lod\": [\n {\n \"name_uri\": \"\",\n \"role_uri\": \"http:\\/\\/id.loc.gov\\/vocabulary\\/relators\\/art\",\n \"agent_type\": \"personal\",\n \"name_label\": \"Meyerhof, Agnes\",\n \"role_label\": \"Artist\"\n },\n {\n \"name_uri\": \"\",\n \"role_uri\": \"http:\\/\\/id.loc.gov\\/vocabulary\\/relators\\/col\",\n \"agent_type\": \"personal\",\n \"name_label\": \"Messenger, Maria, , 1849-1937\",\n \"role_label\": \"Collector\"\n }\n],\n\"description\": \"Inscription on mount: \\\"Meyerhof, Agnes \\\\ Frankfurt - a\\/M. \\\\ Inv el-lith \\\\ Painter.\\\" Inscription on verso: \\\"Agnes Meyerhoff \\\\ Frankfurt a\\/M \\\\ inv. [at?] lith. \\\\ [maker in?]\\\".\",\n...\n
Replacing a JSON value with escape characters
Use Case: The URL for a website that appears in multiple keys needs to be updated from http://hubblesite.org
to https://hubblesite.org
.
...\n\"rights\": \"This digital image may be used for educational or scholarly purposes without restriction. Commercial and other uses of the item are prohibited without prior written permission from the NASA and the Space Telescope Science Institute (STScI). For more information, please visit the NASA and the Space Telescope Science Institute's Copyright web page at [http:\\/\\/hubblesite.org\\/copyright](http:\\/\\/hubblesite.org\\/copyright).\",\n...\n\"description\": \"\\\"The largest NASA Hubble Space Telescope image ever assembled, this sweeping bird\u2019s-eye view of a portion of the Andromeda galaxy (M31) is the sharpest large composite image ever taken of our galactic next-door neighbor. Though the galaxy is over 2 million light-years away, The Hubble Space Telescope is powerful enough to resolve individual stars in a 61,000-light-year-long stretch of the galaxy\u2019s pancake-shaped disk. ... The panorama is the product of the Panchromatic Hubble Andromeda Treasury (PHAT) program. Images were obtained from viewing the galaxy in near-ultraviolet, visible, and near-infrared wavelengths, using the Advanced Camera for Surveys and the Wide Field Camera 3 aboard Hubble. This cropped view shows a 48,000-light-year-long stretch of the galaxy in its natural visible-light color, as photographed with Hubble's Advanced Camera for Surveys in red and blue filters July 2010 through October 2013.\\\" -full description available at: [http:\\/\\/hubblesite.org\\/image\\/3476\\/gallery\\/73-phat](http:\\/\\/hubblesite.org\\/image\\/3476\\/gallery\\/73-phat).\",\n...\n
Follow the steps above and use the following for the search and replace values:
Search Valuehttp://hubblesite.org\n
Replace Valuehttps://hubblesite.org\n
Note
You'll notice that the escape characters for the forward slash (\\/
), which appear in the raw JSON, do not need to be included in the search or replace values.
After applying the changes, we have the following values:
JSON value example with URL after update...\n\"rights\": \"This digital image may be used for educational or scholarly purposes without restriction. Commercial and other uses of the item are prohibited without prior written permission from the NASA and the Space Telescope Science Institute (STScI). For more information, please visit the NASA and the Space Telescope Science Institute's Copyright web page at [https:\\/\\/hubblesite.org\\/copyright](https:\\/\\/hubblesite.org\\/copyright).\",\n...\n\"description\": \"\\\"The largest NASA Hubble Space Telescope image ever assembled, this sweeping bird\u2019s-eye view of a portion of the Andromeda galaxy (M31) is the sharpest large composite image ever taken of our galactic next-door neighbor. Though the galaxy is over 2 million light-years away, The Hubble Space Telescope is powerful enough to resolve individual stars in a 61,000-light-year-long stretch of the galaxy\u2019s pancake-shaped disk. ... The panorama is the product of the Panchromatic Hubble Andromeda Treasury (PHAT) program. Images were obtained from viewing the galaxy in near-ultraviolet, visible, and near-infrared wavelengths, using the Advanced Camera for Surveys and the Wide Field Camera 3 aboard Hubble. This cropped view shows a 48,000-light-year-long stretch of the galaxy in its natural visible-light color, as photographed with Hubble's Advanced Camera for Surveys in red and blue filters July 2010 through October 2013.\\\" -full description available at: [https:\\/\\/hubblesite.org\\/image\\/3476\\/gallery\\/73-phat](https:\\/\\/hubblesite.org\\/image\\/3476\\/gallery\\/73-phat).\",\n...\n
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the main Find and Replace documentation page or the Archipelago Documentation main page.
","tags":["Advanced Batch Find and Replace","Advanced Find and Replace","Find and Replace","Search and Replace","Metadata","Review"]},{"location":"find_and_replace_action_webform/","title":"Webform Find and Replace","text":"Webform Find and Replace enables you to search against values found within defined Webform elements to apply metadata replacements with targeted care. Below are a guide and an example use case for this Action.
Note
Please refer to the main Find and Replace documentation page for a general overview of where to find within your Archipelago, a general overview of default options and important notes and workflow recommendations.
","tags":["Advanced Batch Find and Replace","Webform Find and Replace","Search and Replace"]},{"location":"find_and_replace_action_webform/#important-notes-about-different-webform-elements","title":"Important Notes about Different Webform Elements","text":"Maximum Length as Defined by your Webform Element Configuration OR Theme Defaults
For certain text-based webform element types, the maximum field length (maxlength
) defined in your specific webform element configurations will be enforced during Webform Find and Replace operations. If no maximum length is defined, the Admin Theme will enforce a maximum length of 128 characters. Please see our main Webforms documentation for information about configuring webforms in Archipelago.
Some Complex Webform Elements Not Available
Please note that some complex webform elements are not available for use with Webform Find and Replace. Any webform element that requires user interactions (such as the Nominatim Open Street Maps lookup/query and selection) is not available for usage. The different file upload webform elements are also not available for use with Webform Find and Replace.
","tags":["Advanced Batch Find and Replace","Webform Find and Replace","Search and Replace"]},{"location":"find_and_replace_action_webform/#step-by-step-guide","title":"Step by Step Guide","text":"Tools > Advanced Batch Find and Replace
.Select / deselect all results (all pages, x total)
or toggle the buttons for individual objects.\u25ba Raw Metadata (JSON)
for some of the objects and double-check that the metadata field and value you are targeting for replacement is present.Webform find-and-replace Metadata for Archipelago Digital Objects content item
from the Action
dropdown.Selected X items
and review the list.Apply to selected items
button (don't worry, nothing will happen yet). Apply
.In the following example configuration, for the selected 'Senju no oubashi (Senju great bridge)' object, the Media Type (type JSON key) value of \"Visual Artwork\" will be replaced with the type
JSON key value of \"Photograph\".
Selection of Single ADO and Webform find-and-replace
Action
Webform Find and Replace Form
Confirmation of Successfully Executed Changes
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the main Find and Replace documentation page or the Archipelago Documentation main page.
","tags":["Advanced Batch Find and Replace","Webform Find and Replace","Search and Replace"]},{"location":"firstobject/","title":"Your First Digital Object","text":"You followed every Deployment step and you have now a local Archipelago
instance. Great!
So what now? It is time to give your new repository a try and we feel the best way is to start by ingesting a simple Digital Object.
Note
This guide will assume Archipelago is running on http://localhost:8001
, so if you wizardly deployed everything in a different location, please replace all URIs
with your own setup while following this guide.
Start by opening http://localhost:8001
in your favourite Web Browser.
Your Demo deployment will have a fancy Home page with some banners and a small explanation of what Archipelago is and can do. Feel free to read through that now or later.
Click on Log in
in the top left corner and use your demo
credentials from the deployment guide.
(or whatever password you decided was easy for you to remember during the deployment phase)
Press the Log in
button.
Great, welcome demo
user! This user has limited credentials and uses the same global theme as any anonymous user would. Still, demo
can create content, so let's use those super powers and give that a try.
You will see a new Menu item
under Tools
on the top navigation bar named Add Content
. Click it!
As you already know, Archipelago is built on Drupal 9/10, a very extensible CMS. In practice that means you have (at least) the same functionality any Drupal deployment has, and that is also true for Content Management.
Drupal ships by default with a very flexible Content Entity Type
named Node
. Nodes
are used for creating Articles and simple Pages but also in Archipelago as Digital Objects
. Drupal has a pretty tight integration with Nodes
and that means you get a lot of fun and useful functionality by default by using them.
An Article
and a Digital Object
are both of type Nodes
, but each one represents a different Content Type
. Content Types
are also named Bundles
. An individual Content, like \"Black and White photograph of a kind Dog\" is named a Content Entity
or more specific in this case a Node
.
What have Article
and Digital Object
Content types in common and what puts the apart?
Base Fields
and also user configurable set of Fields
attached (or bundled together).Article
has a title, a Body and the option to add an image.Digital Object
has a title but also a special, very flexible one named Strawberry Field
(more about that later).Fields are where you put your data into and also where your data comes from when you expose it to the world.
Nodes
, as any other Content entity have Base Fields (which means you can't remove or configure them) that are used all over the place. Good examples are the title
and also the owner, named uid
(you!).Field Widget
is used to input data into a Field.Field Formatter
that allows you to setup how it is displayed to the World.Field Formatters
(the way you want to show your content formatted to the world) is named a Display Mode
. You can have many, create new ones and remove them, but only use one at the time.Field Widgets
(the way you want to Create and Edit a Node
) is named a Form Mode
. You can also have many, create new ones but only use one at the time.Each Content Type can have different Permissions (using the build in User Roles
system).
Display Modes
. In Practice this means Display Modes
are attached to Content Types
.Form Modes
. In Practice this means Form Modes
are attached to Content Types
.Form Mode
can have its own Permissions.There is of course a lot more to Nodes, Content Types, Formatters, Widgets and in general Content Entities but this is a good start to understand what will happen next.
"},{"location":"firstobject/#adding-content","title":"Adding Content","text":"Below you see all the Content Types
defined by default in Archipelago. Let's click on Digital Object
to get your first Digital Object Node.
What you see below is a Form Mode
in action. A multi-step Webform that will ingest metadata into a field of type Strawberry Field (where all the magic happens) attached to that field using a Webform Field Widget
, an editorial/advanced Block on the right side, and a Quick Save
button at the bottom for saving the session.
Let's fill out the form to begin our ingest. We recommend using similar values as the ones shown in the screen capture to make following the tutorial easier.
Make sure you select Photograph
as Media Type
and all the fields with a red *
are filled up. Then press Move on to next step
at the bottom of the webform to load the next step in line.
Since this is our first digital object, we do not yet have a Digital Object Collection of which My First Digital Object could be a member. In other words, you can leave Collection Membership
blank and click Next: Upload Files
.
We assume you come from a world where repositories define different Content types and the shape, the fields and values (Schema) are fixed and set by someone else or at least quite complicated to configure. This is where Archipelago differs and starts to propose its own style. You noticed that there is a single Content Type named Digital Object
and you have here a single Web Form. So how does this allow you to have images, sequences, videos, audio, 3D images, etc?
There are many ways of answering that, Archipelago works under the idea of an (or many) Open Schema(s), and that notion permeates the whole environment. Practical answer and simplest way to explain based on this demo is:
Digital Object
is a generic container for any shape of metadata. Metadata is generated either via this Webform-based widget you're currently using, manually (power-user need only) or via APIs. Because of this, Metadata can take any shape to express your needs of Digital Objects and therefore we do not recommend making multiple Digital Object types. However, if you ever do need more Digital Object types, the option is available.Webform
, built using the Webform Module
and Webforms can be setup in almost infinite ways. Any field, combo, or style can be used. Multi Step, single step - we made sure they always only touch/modify data they know how to touch, so even a single input element webform would ensure any previous metadata to persist even if not readable by itself (See the potential?). And Each Webform can be also quite smart!Strawberry field
.We will come back to this later.
"},{"location":"firstobject/#linked-data","title":"Linked Data","text":"As the name of this step suggests; you will be adding all your Linked Data elements here. This step showcases some of the autocomplete Linked Data Webform elements we built for Archipelago. We truly believe in Wikidata as an open, honest, source of Linked Open Data and also one where you can contribute back. But we also have LoC autocompletes and Getty.
Again, enter all fields with a red *
and when you are finished, click Move on to next step
Tip
When entering a location, place or address you will need to click on the Search OpenStreet Maps
button, which is what that big red arrow is pointing to in the screenshot below.
Now we will upload our Photograph
. Click Choose Files
to open your file selector window and choose which file you would like to ingest.
Once you've uploaded your file, you will see all the Exif data extracted from the image, like so...
Once you've mentally digested all of that data, let's go ahead and click Save Metadata
.
By clicking Save Metadata
we are simply persisting all the metadata in the current webform session. The actual ingest of the Object happens when you click Save
on the next and final step, Complete
.
Alright, we've made it. We've added metadata, linked Data, uploaded our files and now... we're ready to save! Go ahead and change the status from Draft to Published and click Save
.
Once you hit save you should see the following green message and your first Archipelago Digital Object!
Congratulations on creating your first digital object! \ud83c\udf53
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"fragaria/","title":"Fragaria Redirects Module","text":"Archipelago's Fragaria Redirect Module is a Drupal 9/10 module that provides dynamic redirect routes matched against existing Search API field values. This module will also provide future Unique IDs (API) integrations and PURLs.
","tags":["Fragaria","Redirects"]},{"location":"fragaria/#prerequisites","title":"Prerequisites","text":"Before proceeding with the following configuration steps, you need to first create the Strawberry Key Name Provider and Solr Field that corresponds to the Field that will be matched against the variable part of the route exists.
In other words, if your Digital Objects have a field and value such as:
\"legacy_PID\": [\n \"mylegacyrepo:1234\"\n ], \n
You need to make sure the values from the legacy_PID
JSON key are indexed (as a Solr/Search API Field) and ready for use as part of the Fragaria Redirects configuration.
Best Practice
Your new Solr field should be of field type \"String\" for a perfect match and best results. Using \"Full Text\" or a related variant Solr field type will allow for a partial match, which might lead to multiple original URLs redirecting to the first match in Archipelago.
","tags":["Fragaria","Redirects"]},{"location":"fragaria/#fragaria-redirect-entity-configuration","title":"Fragaria Redirect Entity Configuration","text":"Navigate to /admin/config/archipelago/fragariaredirect
.
Select the Add a Redirect Config
button.
Enter a label for the Fragaria Redirect Entity you are configuring.
Enter the Prefix (that follows your domain) for the Redirect Route.
node/
or do/
as a Prefix. Even if these will technically work (redirect), using either of these Prefix paths will override your existing Paths defined by Drupal and Archipelago.If applicable, enter the the Suffixes (that follow the prefix + the variable part) for the Redirect Route.
Instead of fixed Prefixes add a single {catch_all} variable suffix at the end
as needed. Checking this will disable any entered static suffixes.Select the Search API Index where the Field that will be matched against the variable part of the route exists.
Select the Search API Field that will be matched against the variable part of the route.
If applicable, add static prefixes for to the variable part/argument of the path.
Add static suffixes for to the variable part/argument of the path.
Select the Type of HTTP redirect to perform.
Lastly, select the box next to Is this Fragaria Redirect Route active?
to set your Redirect to active
, and Save your configuration.
The above example configuration would enable a Temporary redirect from a legacy repository site with a URL of https://mylegacyrepo.edu//mylegacyrepo/object/mylegacyrepo:1234 to your new Archipelago PURL of https://mynewarchipelagorepo.edu/do/mynewADOUUID.
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
","tags":["Fragaria","Redirects"]},{"location":"generalqa_minio_logging/","title":"Min.io Logging","text":"Q: How can I see my minio (S3) docker container's realtime traffic and requests?
A: For standard demo deployments, mini.io storage server runs on the esmero-minio
docker container. Steps are:
Install the mc
binaries (minio client) for your platform following this instructions. e.g for OSX run on your terminal:
brew install minio/stable/mc\nmc alias set esmero-minio http://localhost:9000 user password\n
with http://localhost:9000
being your current machines mini.io URL and exposed port, user
being your username (defaults to minio
) and your original choosen password
(defaults to minio123
)
Run a trace
to watch realtime activity on your terminal:
mc admin trace -v -a --debug --insecure --no-color esmero-minio\n
Note: mc
client is also AWS S3 compatible and can be used to move/copy/delete files on the local instance and to/from a remote AWS storage.
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"generalqa_smtp_configuration/","title":"SMTP Configuration","text":"Q: How can I enable SMTP for Archipelago?
A: For standard demo deployments, SMTP is not setup to send emails. To enable SMTP:
Enter the following commands in your terminal. Note: make sure docker is running. Optionally, you can verify that all Archipelago containers are present by entering the docker ps
command first.
docker exec -ti esmero-php bash -c 'php -dmemory_limit=-1 /usr/bin/composer require drupal/smtp:^1.0'\ndocker exec -ti esmero-php bash -c 'drush en -y smtp'\n
Check that the SMTP module has been enabled by navigating (as admin user) to the EXTEND module menu item (localhost:8001/admin/modules
). You should see \"SMTP Authentication Support\" listed.
Navigate to localhost:8001/admin/config/system/smtp
to configure the SMTP settings.
Save your settings, then test by adding a recipient address in the \u201cSEND TEST E-MAIL\u201d field.
Note: Depending on your email provider, you may also need to enable \u201cless secure\u201d applications in your account settings (such as here for Google email accounts: https://myaccount.google.com/lesssecureapps)
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"generalqa_twig_modules_configuration/","title":"Twig Modules Configuration","text":"Q: When attempting to save a Twig template for a Metadata Display, I receive an error message related to an Unknown \"bamboo_load_entity\" function
.
A: You need to enable the necessary Twig modules.
Navigate to: yoursite/admin/modules
In the \u201cEnter a part of the module name or description\u201d box, enter \u201cbam\u201d to filter for the related Bamboo Twig modules. Alternatively, scroll down to the Bamboo Twig modules section on this page.
Check the box next to each of the following to enable (some may already be enabled):
Click Install
.
After receiving the successful installation confirmation, check to make sure you are now able to save your Twig template without receiving an error message.
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"giveortake/","title":"Archipelago Contribution Guide","text":"Contributing Documentation
Looking to contribute documentation? Start here.
Archipelago welcomes and appreciates any type of contribution, from use cases and needs, questions, documentation, devops and configuration and -- of course -- code, fixes, or new features. To make the process less painful, we recommend you first to read our documentation and deploy a local instance. After that please follow the guidelines below to help you get started.
Archipelago
welcomes, appreciates, and recognizes any and all types of contribution. This includes input on all use cases and needs, questions or answers, documentation, DevOps, and configurations. We also welcome general ideas, thoughts, and even dreams for the future of our repository! Of course, we also invite you to contribute PHP code, including fixes and new features.
We will be helpful, kind, and open. We encourage discussions and always respect one another's opinions, language, gender, style, backgrounds, origins, and destinations, provided they come from the same root values of respect, as stated here. We support conflict resolution using nothing more than basic common sense. We value diversity in all its shapes, forms, colors, epoches, numbers, and kinds, with or without labels, including in-between and evolving. We always assume we can do better and that you have done a lot. Under this very basic social framework, this is how we hope you can contribute:
"},{"location":"giveortake/#where-the-wild-things-live","title":"Where The Wild Things Live","text":"Archipelago has 5 active GitHub repositories
Strawberryfield
.We host a community interaction channel, our google group. This is the best place to ask questions and make suggestions that are not specific to a single module, and/or if you would like to contribute to a larger conversation within our community. Discussions work best in this forum (not excluding GitHub of course), and our official announcements are posted there too.
"},{"location":"giveortake/#documentation-workflow","title":"Documentation Workflow","text":"Documentation is an evolving effort, and we need help. This guide lives in GitHub in Archipelago Documentation. Documentation and Development Worklfow both work the same way, so keep reading!
"},{"location":"giveortake/#development-workflow","title":"Development Workflow","text":"Start by reading open ISSUES (so you don't end up redoing what someone else is already working on) and looking at our Roadmap for version 1.0.0. If the solution to your problem is not there or if there is an unchecked element in the roadmap, this is a great opportunity to help by creating a new ISSUE.
Next, start by opening an GitHub ISSUE in any of the 5 GitHub repositories, depending on what it is you are trying to do.
Please be concise with the title of your ISSUE so that it is easy to understand. Use Markdown to explain the what, how, etc, of your contribution. Note: Even if something related is already in the works, you can still contribute. Just add your comments on any open ISSUE. Or, if you think you want to contribute with a totally different perspective, feel free to open a new ISSUE anyway. We can always discuss next steps starting from there. Every community has its rhythm and style and our style is just beginning to develop. We are still figuring out what works best for everyone.
Once you are done and you feel comfortable working to make a change yourself, take note of the ISSUE number
(lets name it #issuenumber
).
The gist is:
As a best practice, we encourage pull requests to discuss/fix existing code, new code, and documention changes.
For the full step-by-step workflow, we will use Archipelago Documentation and the 1.0.0
branch as example. The same applies to any of the other repositories: just change the remote urls and use the most current branch name.
Fork the Archipelago Documentation Upstream source repository to your own personal GitHub account (e.g. YOU). Copy the URL of your Archipelago Documentation fork (you will need it for the git clone
command below).
$ git clone https://github.com/YOU/archipelago-documentation\n$ cd archipelago-documentation\n$ git checkout 1.0.0\n
"},{"location":"giveortake/#set-up-git-remote-as-upstream","title":"Set Up Git Remote As upstream
","text":"$ git remote add upstream https://github.com/esmero/archipelago-documentation\n$ git fetch upstream\n$ git merge upstream/1.0.0\n...\n
"},{"location":"giveortake/#create-your-issue-branch","title":"Create Your ISSUE Branch","text":"Before making changes, make sure you create a branch using the ISSUE number you created for these contributions.
$ git checkout -b ISSUE-6\n
"},{"location":"giveortake/#do-some-clean-up-and-test-locally","title":"Do Some Clean Up and Test Locally","text":"After your code changes, make sure
PHP
, run phpcs --standard=Drupal yourchanged.file.php
. We (try our best to) use Drupal 8 coding standards.MARKDOWN
file, make sure it renders well (you can use Textmate, Atom, Textile, etc to preview) and that links are not broken.PHP
, please test your changes live on your local instance of Archipelago. All non-documentation modules are already inside web/modules/contrib/
.After verification, commit your changes. This is very good post on how to write commit messages.
$ git commit -am 'Fix that Strawberry'\n
"},{"location":"giveortake/#push-to-the-branch","title":"Push To The Branch","text":"Push your locally committed changes to the remote origin (your fork)
$ git push origin ISSUE-6\n
"},{"location":"giveortake/#create-a-pull-request","title":"Create A Pull Request","text":"Pull requests can be created via GitHub. This document explains in detail how to create one. After your Pull Request gets peer reviewed and approved, it can be merged. Discussion can happen and peers can ask you for modifications, fixes or more information on how to test. We will be respectful. You will be given credit for all your contributions and shown appreciation. There is no wrong and never too little. There could never be too much!
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"googleapi/","title":"Configuration for Google Sheets API","text":"To allow the Archipelago Multi Importer (AMI) to read from Google spreadsheets, you first need to configure the Google Sheets API as outlined in the following instructions.
Please note:
Login to the Google Developer Console. You will see the API & Services Dashboard.
If you have not created Credentials or a Project before, you will need to first create a Project.
Next, click the Create credentials
select box and select OAuth client ID
You will now need to Configure the Consent Screen.
On the initial OAuth Consent Screen setup, select Internal
for User Type.
Now enter AMI
as the App name, and your email address in the User support email. You may also wish to add Authorized domains (bottom of image below) as well.
On the Scopes page, select Add or Remove Scopes
. Then either search/filter the API table for the Google Sheets API. Or, under Manually add scopes
enter: https://www.googleapis.com/auth/spreadsheets.readonly
After selecting or entering in the Google Sheets API, you should see this listed under Sensitive Scopes
.
Review the information on the Summary
page, then Save.
You will now be able to Create Oauth client ID
. Select Web Application
as the Application type
Enter \"AMI\" under 'Name' and add any URIs you will be using below.
http://localhost:8001/google_api_client/callback
All URIs need to include /google_api_client/callback
After Saving, you will see a message notifying you that the OAuth client was created. You can copy the Client ID
and Client Secret
directly from this confirmation message into a text editor. You can also access the information from Credentials
in the APIs & Services
section in the Developer console, where you will have additional options for downloading, copying, and modifying if needed.
On the 'Add Google Api Client account' configuration page, enter the following information using your Client ID
and Client Secret
. 'Developer Key' is optional. Select Google Sheets API
under 'Services' and https://www,googleapis.com/auth/spreadsheets.readonly
under 'Scopes'. Check the box for Is Access Type Offline
. Select the Save button.
You will now need to Authenticate your AMI Google API Client. Return to the Google API Client Listing page. Under the Operation menu on the right-hand side of the AMI client listing, select Authenticate
.
You will be directed to the Google Consent Screen. You may need to login to your corresponding Google Account before proceeding. When loged in, you will see the following screen requesting that AMI is allowed to \"View your Google Spreadsheets\". Click Allow
.
On the Google API Client Listing page, your AMI client listing should now have 'Yes' under 'Is Authenticated'. You are now ready to use Google Sheets with AMI! Return to the main AMI documentation page to get started.
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"inthewild/","title":"Archipelagos in the Wild \ud83d\uddfa\ufe0f","text":"Explore Archipelago instances running free across digital realms.
Note
*Please be aware that some of the following Archipelago instances are still brewing and these links may change. Stay tuned for future updates to live production sites when available.
"},{"location":"inthewild/#metro-archipelago","title":"METRO + Archipelago","text":"The Archipelagos listed below are supported by the Digital Services Team at the Metropolitan New York Library Council. \ud83e\uddd1\u200d\ud83c\udf3e \ud83d\udc1d \ud83c\udf53
Archipelago Playground and Studio Site
Barnard College
Digital Culture of Metropolitan New York (DCMNY)
Empire Archival Discovery Cooperative (EADC) Finding Aid Toolkit
Empire Immersive Experiences
Frick Collection and Webrecorder Team Web Archives Collaboration
Hamilton College Library & IT Services (https://litsdigital.hamilton.edu/)
Olin College Library Phoenix Files
New York State COVID-19 Personal History Initiative
Rensselaer Polytechnic Institute Libraries
San Diego State University Libraries Digital Collections
Union College Library
Western Washington University
From all around our beautiful shared world. \ud83c\udfe1 \ud83c\udfeb \ud83c\udfdb\ufe0f
Amherst College
Association Montessori Internationale
California Revealed
Christian Observatory of the Pro Civitate Christiana
Consiglio Nazionale delle Ricerche / National Research Council of Italy
University of Edinburgh Libraries
If you have a public Archipelago instance you'd like to share on this page \ud83c\udfdd\ufe0f\ud83d\udccd, please contact us. We would love to add your great work to this list! \ud83d\udc9a
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"metadata_display_preview/","title":"Metadata Display Preview","text":"Archipelago's Metadata Display Preview is a very handy tool for your repository toolkit that enables you to preview the output of your Metadata Display (Twig) Templates (found at /metadatadisplay/list
). You can use the Metadata Display Preview to test and check the results of any type of Template (HTML Display, JSON Ingest, IIIF JSON, XML, etc.) against both Archipelago Digital Objects (ADOs) and AMI Sets (rows within).
Prerequisite Note
Before diving into Metadata Display (Twig) Template changes, we recommend reading our Twigs in Archipelago documentation overview guide and also our Working with Twig primer.
","tags":["Metadata Display","Twig Template","Preview"]},{"location":"metadata_display_preview/#step-by-step","title":"Step-by-Step","text":"Navigate to the Metadata Display list at /admin/content/metadatadisplay/list
(or through the admin menu via Manage > Content > Metadata Displays
). From the main Metadata Display List page, you can access all of the different display, rendering, and processing templates found in your Archipelago.
Selecting a Metadata Display Template
Open and select 'Edit' for the Template you wish to Edit and/or Preview.
Editing a Metadata Display Template
You will now be able to select either an Archipelago Digital Object (ADO) or AMI Set to Preview. Both selection types will use an autocomplete search (make sure the autocomplete matches fully against your selection before proceeding).
Archipelago Digital Object (ADO) selection
AMI Set and Row selection
For the Row, you can enter either a (CSV row) number
AMI Set and Row selection
Or a label found within the Source Data CSV:
After you select your ADO or AMI Set and press the Show Preview
button, the fuller Preview section will open up on the right side of the screen. The left side will continue to show the Metadata Display Template you originally selected to Edit.
Tip
It is strongly recommended to always select the option to \"Show Preview using native Output Format (e.g. HTML)\".
Archipelago Digital Object (ADO) selection against an HTML Display template
AMI Set and Row selection against a JSON Ingest template
To keep track of the JSON keys used in your template select the Show Preview with JSON keys used in this template
option before pressing Show Preview
. For more details see below.
Within the Preview Section on the right side of the screen:
From the Edit + Preview mode, you can:
Select the Show Preview
button as you make changes to refresh the Preview output and check your work. After saving any changes you may have made to your selected Template, all of the displays/AMI Sets/other outputs that reference this same Template will reflect the changes made.
Note
This feature is available as of strawberryfield/format_strawberryfield:1.2.0.x-dev
and archipelago/ami:0.6.0.x-dev
. To make use of it before the official 0.6.0/1.2.0 release you can run the following commands:
docker exec -ti esmero-php bash -c \"composer require 'archipelago/ami:0.6.0.x-dev as 0.5.0.x-dev' 'strawberryfield/format_strawberryfield:1.2.0.x-dev as 1.1.0.x-dev'\"\n
docker exec -ti esmero-php bash -c \"drush updb\"\n
docker exec -ti esmero-php bash -c \"drush cr\"\n
When creating or editing a Metadata Display Twig template, you can keep track of the JSON keys being used in the template by enabling the option after selecting an Archipelago Digital Object (ADO) or AMI Set row before pressing Show preview
:
Enable Metadata Display Preview Variables
The last two tabs in the Preview section above expand to show two tables listing the JSON keys that are used and unused by the template. The used keys are sorted by first instance line number (from the template) and the unused keys are sorted alphabetically.
Metadata Display Preview Variables Used JSON Keys
Metadata Display Preview Variables Unused JSON Keys
The JSON Keys that appear in these tables will vary based on changes to the template and the selected ADO or AMI set row.
","tags":["Metadata Display","Twig Template","Preview"]},{"location":"metadata_display_preview/#warnings-and-errors","title":"Warnings and Errors","text":"Warnings and Errors encountered during the processing will be shown at the top of the Preview section. A line number (from the template) will be included in the message if available.
Warning
A Warning will be generated if output can be rendered, and the output will be displayed below it.
Error
An Error will be generated if no output can be rendered, and no output will be displayed.
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
","tags":["Metadata Display","Twig Template","Preview"]},{"location":"metadatainarchipelago/","title":"Your JSON, our JSON - RAW Metadata in Archipelago","text":"From the desk of Diego Pino
Archipelago's RAW metadata is stored as JSON and this is core to our architecture. To avoid writing RAW over and over, this document will refer to RAW Metadata simply as Metadata.
Data and Metadata can be extremely complex and extensive. The use cases that define what Data, Media and Metadata to collect, to catalog and expose, to use during search and discovery or to enable interactive functionality, including questions like \"what public facing schemas, formats and serializations I need or want to be compliant with\" are as diverse and complex as the Metadata driving them.
But also Metadata, in specific, is plastic and evolving as are use cases. And more over, some Metadata is descriptive and some Metadata is technical and there are other types of Metadata too, e.g Control Metadata.
Finally Metadata is very close to their generators. Means you and your peers will know, better than any Software Development team, what is needed, useful and, many times also, available given what use case you have, end users needs and resources at hand, your In real life workflows and future expectations.
"},{"location":"metadatainarchipelago/#reason-behind-using-json","title":"Reason behind using JSON","text":"Drupal, the OSS CMS system Archipelago uses and extends, is RDB driven. This means that Content Types
normally follow the idea of an Entity with Fields attached. Each of these Fields becomes then a Database Table and the sum of all these fields living under a Content Type
definition, a fixed schema.
For integration and interoperability reasons with the larger Drupal ecosystem, we inherit in Archipelago the idea of an Entity, in specific, a Content Entity (Node) and Content Type
(Bundled fields for a Node). But instead of generating (and encouraging) the use of hundreds of fixed fields to describe your Digital Objects we put all Metadata as JSON, means a JSON BLOB, into a single smart Field of type Strawberry Field
. \ud83c\udf53
We go a long way of making as much as possible flexible and dynamic. This also implies the definition (and separation) of what an Archipelago Digital Object (ADO) is in our Architecture v/s what a general Drupal Content Type (e.g a static page or a blog post) is defined in code as: \"Any Content type that uses a Strawberry Field is an ADO and will be processed as such\". No configuration is needed. In other words, all is a NODE but any node that uses a Strawberry Field gets a different treatment and will be named in Archipelago an ADO.
One of the challenges of our flexible approach is how to allow Drupal to access the JSON in a way, as native as possible, to generate filtered listings via Drupal Views, free text Search and Faceting. To make this happen Strawberry Field uses a JSON Querying and Exposing as \"Native Field Properties\" logic. Through a special type of Plugin system named Strawberry Key Name Providers and associated Configuration Entities (can be found at /admin/structure/strawberry_keynameprovider
), you have control on which keys and values of your JSON are going to be exposed as field properties of any Strawberry Field, allowing Drupal through this to access values in a flat manner and expose them to the Search API natively. The access to the values of any JSON is done via JMESPATH expressions and then transformed either to a list of values or even \"cast\" into more complex data Data types, like an Entity Reference (means a connection to another Entity).
This gives you a lot of power and control and makes a lot of very heavy operations lighter. You can even plan upfront or evolve these properties in time.
In other words, you control how storage is mapped to Discovery, and this allows Drupal Views to work that way too. Of course, this also means traditional SQL-based Drupal Views won't have access to these internals (for filtering), given that neither your JSON data nor the virtual Properties generated via Strawberry Key Name Providers are accessible as individual RDB tables for SQL joins. That is why we depend heavily on the Search API (Solr).
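As a rough illustration of how this works (the key name and the expression here are invented for the example and are not Archipelago defaults), a Strawberry Key Name Provider configured with a JMESPath expression such as subject_lcnaf[*].label, run over the following fragment, would expose the flat list ['Photographs', 'New York (N.Y.)'] as a field property ready for Solr indexing and faceting:
{\n \"subject_lcnaf\": [\n {\n \"label\": \"Photographs\"\n },\n {\n \"label\": \"New York (N.Y.)\"\n }\n ]\n}\n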
"},{"location":"metadatainarchipelago/#open-schema-what-is-yours-what-is-archipelagos","title":"Open Schema. What is yours, what is Archipelago's","text":"What can you add to an ADO's Strawberry Field? As long as it is valid JSON, Archipelago can store it, display it, transform it and search across it in Archipelago. The way you manage Metadata can be as \"intime\" or \"aligned\" to other schemas as you want. Still, there are a few suggested keys/functional ideas:
"},{"location":"metadatainarchipelago/#suggested-json-keys","title":"Suggested JSON keys","text":""},{"location":"metadatainarchipelago/#the-type-key","title":"Thetype
key","text":"{\n \"type\": \"Photograph\"\n}\n
The type
JSON key has semantic and functional importance in Archipelago. Given that we don't use multiple Drupal Content Types to denote the difference between e.g. a Photograph or a Painting (which would also mean you would be stuck with one or the other if we did), we use this key's value to allow Archipelago to select/swap View Modes. This approach also allows you to define, for your own needs, what an ADO is in the real-life or digital realm (the WHAT). This key is also important when doing AMI
based batch ingests since many of the mappings and decisions (e.g. what Template to use to transform your CSV or if the Destination Drupal Content Type is going to be a Digital Object or a Digital Object Collection) will depend on this.
Note: Archipelago does something extra fun too when using type
value for View Mode Selection (and this is also a feature of one of the Key Name provider Plugins). It will flatten the JSON first and then fetch all type
keys. How does this work in practice?
{\n \"type\": \"Photograph\",\n \"subtypes\": [\n {\n \"type\": \"125 film\"\n },\n {\n \"type\": \"Instant film\"\n }\n ] \n}\n
This means that, while doing a View Mode Selection, Archipelago will bring all found type
key values together and will have ['Photograph', '125 film', 'Instant film'] available as choices, so you will be able to make even finer decisions on how to display your ADOs. View Mode selection is based on order of evaluation, so we recommend putting the more specific mappings first.
label
key","text":"{\n \"label\": \"Black and White Photograph of Cataloger working with JSON\"\n}\n
Archipelago will use the label
key's value to populate the ADO's (Drupal Node) Title. Drupal has a length limit for its native built-in Node Entity Title but JSON does not, so in case of more than 255 characters Archipelago will truncate the Title (not the label
key's value), adding an ellipsis (...) as a suffix.
Because of the need to have Technical Metadata, Descriptive Metadata and Semantic Metadata while generating different representations of your JSON via Metadata Display Entity (Twig template) transformations, we store and characterize Files attached to an ADO as part of the JSON. We also use a set of special keys to map and cast JSON keys and values to Drupal's internal Entities system via their Numeric and/or UUID IDs.
Through this, Archipelago will also move files between upload locations and permanent storage, execute Technical Metadata extraction, keep track of ADO to ADO relationships (e.g. ispartof or ismemberof) and emulate what a traditional Drupal Entity Reference field
would do without the limitations (speed and immutability) a static RDB definition imposes.
ap:entitymapping
key","text":"{\n \"ap:entitymapping\":{\n \"entity:file\": [\n \"model\",\n \"audios\",\n \"images\",\n \"videos\",\n \"documents\",\n \"upload_associated_warcs\"\n ],\n \"entity:node\": [\n \"ispartof\",\n \"ismemberof\"\n ]\n }\n}\n
The ap:entitymapping
key is a hint for Archipelago. With this key we can treat certain keys and their values as Drupal Numeric Entity IDs instead of semantically unknown values.
In the presence of the structure exemplified above, the following JSON snippet:
\"images\": [\n 1,\n 2,\n 3\n ] \n
Will tell Archipelago that the JSON key images
should be treated as containing Entity IDs for a Drupal Entity of type (entity:file
) File. This has many interesting consequences. Archipelago, on edit/update/ingest, will try (hard) to get a hold of the Files with IDs 1, 2 and 3. If they are in temporary storage, Archipelago will move them to their final Permanent Location, will make sure Drupal knows those files are being used by this ADO, will run multiple Technical Metadata Extractions and classify the Files internally, adding everything it could learn from them. In practice, this means that Archipelago will write additional structures into the JSON for you, enriching your Metadata.
Without this structure, the images
key would not trigger any logic, but it would of course still exist and could still be used as a list of numbers while templating.
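Putting the mapping and the mapped key together, a minimal hand-written fragment (illustrative only; a real ADO will carry many more keys, and Archipelago will enrich it further on save) could look like this:
{\n \"type\": \"Photograph\",\n \"label\": \"My test Photograph\",\n \"images\": [\n 1,\n 2,\n 3\n ],\n \"ap:entitymapping\": {\n \"entity:file\": [\n \"images\"\n ]\n }\n}\n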
This also implies that for a persisted ADO with those values, if you edit the JSON and delete e.g. the number (integer
or string
representation of an integer
) 3
, Archipelago will disconnect the File Entity with ID 3 from this ADO, remove the enriched metadata and mark the File as no longer being used by this ADO. If nobody else is using the File it will become temporary
and eventually be removed automatically from the system, if that is set up at the Drupal Filesystem level.
Using the same example ap:entitymapping
structure, the following snippet:
\"ispartof\": [\n 2000\n ] \n
Will hint to Archipelago that there is an assumed connection between this ADO and another ADO with Drupal Entity ID 2000
. This will drive other functionality in Archipelago (semantic), allowing for example a Navigation Breadcrumb to be built using all connections found in its hierarchical path.
In Archipelago, ADO to ADO relationships are normally from Child to Parent, hopefully (but not enforced!) building an acyclic graph, from leaves to trunk. This also allows inheritance to happen. It also means that a Parent ADO needs to exist before you connect/relate to it (chicken first). But if it does not, the system will not fail and will assume a temporarily broken relationship (the egg stays safely intact).
The Entity mapping key also drives a very special compatibility addition to any ADO. Archipelago will populate Native Computed Drupal fields (attached at run time to each ADO) with these values, loading and exposing them as Drupal Entities, processing both File and Node Entities and making them visible outside the scope of a Strawberry Field
to the whole CMS.
The following Computed fields are provided:
field_file_drop
: Computed Entity Reference Field. Also needed for JSON:API level upload of Files to an ADO (a Drupal need). It will expose all File Entities referenced in an ADO, independently of the type of the File.field_sbf_nodetonode
: Computed Entity Reference Field. It will expose all Node Entities (other ADOs) referenced in an ADO, independently of the Content Type and/or the semantic predicate (ismemberof, ispartof, etc.) used. These Fields, because of their native Drupal nature, can be used directly everywhere, e.g. in the Search API to index all related ADOs (or any of their Fields and subproperties, even deeply chained, tree down) without having to specify which predicate is used. Said differently, they act as aggregators, as a generic \"isrelatedto\" property bringing everything together.
"},{"location":"metadatainarchipelago/#the-asas_file_type-keys","title":"Theas:{AS_FILE_TYPE}
keys","text":"As explained in the ap:entitymapping
section above, when Archipelago gets hold of a File entity it will enrich your JSON with its extracted data. Archipelago will compute and append to your JSON a set of controlled as:{AS_FILE_TYPE}
keys containing a classified File's Metadata. The naming will be automatic based on grouping Files by their Mime Types.
The possible values for as:{AS_FILE_TYPE}
are
as:image
as:document
as:video
as:audio
as:application
as:text
as:model
as:multipart
as:message
An example for an Image attached to an ADO:
{\n \"as:image\": {\n \"urn:uuid:ef596613-b2e7-444e-865d-efabbf1c59b0\": {\n \"url\": \"s3:\\/\\/de2\\/image-f6268bde41a39874bc69e57ac70d9764-view-ef596613-b2e7-444e-865d-efabbf1c59b0.jp2\",\n \"name\": \"f6268bde41a39874bc69e57ac70d9764_view.jp2\",\n \"tags\": [],\n \"type\": \"Image\",\n \"dr:fid\": 7461,\n \"dr:for\": \"images\",\n \"dr:uuid\": \"ef596613-b2e7-444e-865d-efabbf1c59b0\",\n \"checksum\": \"de2862d4accf5165d32cd0c3db7e7123\",\n \"flv:exif\": {\n \"FileSize\": \"932 KiB\",\n \"MIMEType\": \"image\\/jp2\",\n \"ImageSize\": \"1375x2029\",\n \"ColorSpace\": \"sRGB\",\n \"ImageWidth\": 1375,\n \"ImageHeight\": 2029\n },\n \"sequence\": 1,\n \"flv:pronom\": {\n \"label\": \"JP2 (JPEG 2000 part 1)\",\n \"mimetype\": \"image\\/jp2\",\n \"pronom_id\": \"info:pronom\\/x-fmt\\/392\",\n \"detection_type\": \"signature\"\n },\n \"dr:filesize\": 954064,\n \"dr:mimetype\": \"image\\/jp2\",\n \"crypHashFunc\": \"md5\",\n \"flv:identify\": {\n \"1\": {\n \"width\": \"1375\",\n \"format\": \"JP2\",\n \"height\": \"2029\",\n \"orientation\": \"Undefined\"\n }\n }\n }\n }\n}\n
That is a lot of Metadata! But to understand what is happening here, we need to dissect this into more readable chunks. Let's start with the basics from root to leaves of this hierarchy.
"},{"location":"metadatainarchipelago/#direct-file-level-metadata","title":"Direct File level Metadata","text":"Every Classified File inside the as:{AS_FILE_TYPE}
key will be contained in a unique URN JSON Object property:
\"urn:uuid:ef596613-b2e7-444e-865d-efabbf1c59b0\": {}\n
We use a Property instead of a \"List or Array\" of Technical Metadata because this allows us (at code level) to quickly access, from e.g. the as:image
structure, all the data for a File Entity with UUID ef596613-b2e7-444e-865d-efabbf1c59b0
without iterating. (Also now you know what urn:uuid:ef596613-b2e7-444e-865d-efabbf1c59b0 means.)
Next, inside that property, the following Data provides basic Information about the File so you can access/make decisions when Templating. Notice the duplication of similar data at different levels. Duplication is on purpose and again, allows you to access certain JSON values (or filter) quicker without having to go to other keys or hierarchies to make decisions.
{\n \"url\": \"s3:\\/\\/de2\\/image-f6268bde41a39874bc69e57ac70d9764-view-ef596613-b2e7-444e-865d-efabbf1c59b0.jp2\",\n \"name\": \"Original Name of my Image.jp2\",\n \"tags\": [],\n \"type\": \"Image\",\n \"dr:fid\": 3,\n \"dr:for\": \"images\",\n \"dr:uuid\": \"ef596613-b2e7-444e-865d-efabbf1c59b0\",\n \"crypHashFunc\": \"md5\",\n \"checksum\": \"de2862d4accf5165d32cd0c3db7e7123\",\n \"dr:filesize\": 954064,\n \"dr:mimetype\": \"image\\/jp2\",\n \"sequence\": 1\n
\"url\"
: Contains the Final Storage location/URI of the File. It's prefixed with the configured Streamwrapper, a functional symbolic link to the underlying complexities of the backend storage. e.g s3://
implies an S3 API backend with a (hidden/abstracted) set of credentials, Bucket and Prefixes inside the bucket. This value is also used in Archipelago's IIIF Cantaloupe Service as the Image id
when building a IIIF Image API URL.\"name\"
: The Original Name of the File. Can be used to give a Download a human readable name or as an internal hint/preservation for you.\"tags\"
: Unused by default. You can use this for your own logic if needed.\"type\"
: A redundant (contextual, at this level) key whose value will match {AS_FILE_TYPE}
already found at 2 levels before. Allows you to know what File type this is when iterating over this File's data (without having to look back, or on our Code, when dealing with Flattened JSON).\"dr:fid\"
: The Drupal Entity Numeric ID.\"dr:for\"
: Where in your JSON (top level key) this File ID was stored (or, in other words, where you can find the value of \"dr:fid\"
). All this will match / will of course be driven by ap:entitymapping
. Sometimes (try uploading a WARC file and run the queue) this key might contain flv:{ACTIVE_STRAWBERRY_RUNNERS_CONFIG_ID}
. This means the File will have been generated by an active Strawberry Runners Processor and not uploaded by you. ACTIVE_STRAWBERRY_RUNNERS_CONFIG_ID
will be the Machine name (or ID) of a given Strawberry Runners Processor Configuration Entity.\"dr:uuid\"
: A redundant (contextual, at this level) key whose value will match the Drupal File entity UUID for this File.\"crypHashFunc\"
: What Cryptographic function was used for generating the checksum. By default Archipelago will do MD5 (faster, but also because S3 APIs use that to ensure upload consistency and the E-tag). In the future, others can be enabled and made configurable.\"checksum\"
: The Checksum (calculated) of this File via \"crypHashFunc\"
\"dr:filesize\"
: The File size in Bytes.\"dr:mimetype\"
: The Drupal level inferred Mime Type. Archipelago extends this list. This is based on the File Extension.\"sequence\"
: A number (integer) denoting the order of this file relative to other files of the same type inside the JSON. Which default ordering is used will depend on how the ADO was created/edited, but it can be overridden using Control Metadata.
. Why there? These are service that run syncroniusly on ADO save (Create/Edit) and in while doing File persistance.
\"flv:exif\"
: EXIF Tool extraction for a file. The number of elements that come out might vary, for an Image file it might be normally short, but a PDF might have a very extensive and long list. The above mentioned File Persister Service Settings form allows you to also set a Files Cap Number, that will, once reached, limit and reduce the EXIF. This is very useful if you want to control the size of your complete JSON for any reason you feel that is needed (performance, readability, etc).
{\n \"flv:exif\": {\n \"FileSize\": \"932 KiB\",\n \"MIMEType\": \"image\\/jp2\",\n \"ImageSize\": \"1375x2029\",\n \"ColorSpace\": \"sRGB\",\n \"ImageWidth\": 1375,\n \"ImageHeight\": 2029\n },\n}\n
\"flv:identify\"
: The GraphicsMagick Identify binary will run on every file and format it knows how to handle (and will try even on the ones it does not). It will give you data similar to EXIF but processed based on the actual File and not just extracted from the EXIF data found in the header. Notice that the details will be inside a \"1\", \"2\", etc. property. This is because Identify might also go deeper and, e.g. for a Multi Layer TIFF, extract different sequences from the same File.
{\n \"flv:identify\": {\n \"1\": {\n \"width\": \"1375\",\n \"format\": \"JP2\",\n \"height\": \"2029\",\n \"orientation\": \"Undefined\"\n }\n }\n}\n
\"flv:pronom\"
: Droid, a File Signature detection tool will find a matching pronom_id
for your File based on https://www.nationalarchives.gov.uk/aboutapps/pronom/droid-signature-files.htm. This detection type is deeper than EXIF or the extension-based mime type, reading from the binary data. It allows you to catch small differences between formats (even if e.g. both are JP2) and thus make decisions like \"Will Cantaloupe IIIF Image Server
be able to handle this type?\". This also has positive Digital Preservation consequences.
{\n \"flv:pronom\": {\n \"label\": \"JP2 (JPEG 2000 part 1)\",\n \"mimetype\": \"image\\/jp2\",\n \"pronom_id\": \"info:pronom\\/x-fmt\\/392\",\n \"detection_type\": \"signature\"\n },\n}\n
\"flv:mediainfo\"
: MediaInfo works on Video and Audio. It goes into great detail about codecs
andstreams
and the output added to your JSON might look massive. This is also very needed when working with IIIF Manifests and deciding if a certain Video will be able to play natively on a browser or if Cantaloupe IIIF Image Server
will be able to extract individual frames as images. This again has positive Digital Preservation consequences. The Following is an example of an MP4 file generated via Quicktime on an Apple MacOS computer.
{\n \"flv:mediainfo\": {\n \"menus\": [],\n \"audios\": [\n {\n \"id\": {\n \"fullName\": \"1\",\n \"shortName\": \"1\"\n },\n \"count\": \"282\",\n \"title\": \"Core Media Audio\",\n \"format\": {\n \"fullName\": \"AAC LC\",\n \"shortName\": \"AAC\"\n },\n \"bit_rate\": {\n \"textValue\": \"85.3 kb\\/s\",\n \"absoluteValue\": 85264\n },\n \"codec_id\": \"mp4a-40-2\",\n \"duration\": {\n \"milliseconds\": 21215\n },\n \"channel_s\": {\n \"textValue\": \"1 channel\",\n \"absoluteValue\": 1\n },\n \"frame_rate\": {\n \"textValue\": \"43.066 FPS (1024 SPF)\",\n \"absoluteValue\": 43\n },\n \"format_info\": \"Advanced Audio Codec Low Complexity\",\n \"frame_count\": \"914\",\n \"stream_size\": {\n \"bit\": 226109\n },\n \"streamorder\": \"0\",\n \"tagged_date\": \"UTC 2017-12-05 17:14:10\",\n \"encoded_date\": \"UTC 2017-12-05 17:14:07\",\n \"source_delay\": \"-0\",\n \"bit_rate_mode\": {\n \"fullName\": \"Variable\",\n \"shortName\": \"VBR\"\n },\n \"samples_count\": \"935582\",\n \"sampling_rate\": {\n \"textValue\": \"44.1 kHz\",\n \"absoluteValue\": 44100\n },\n \"channel_layout\": \"C\",\n \"kind_of_stream\": {\n \"fullName\": \"Audio\",\n \"shortName\": \"Audio\"\n },\n \"commercial_name\": \"AAC\",\n \"source_duration\": [\n \"21269\",\n \"21 s 269 ms\",\n \"21 s 269 ms\",\n \"21 s 269 ms\",\n \"00:00:21.269\"\n ],\n \"compression_mode\": {\n \"fullName\": \"Lossy\",\n \"shortName\": \"Lossy\"\n },\n \"channel_positions\": {\n \"fullName\": \"1\\/0\\/0\",\n \"shortName\": \"Front: C\"\n },\n \"samples_per_frame\": \"1024\",\n \"stream_identifier\": \"0\",\n \"source_frame_count\": \"916\",\n \"source_stream_size\": [\n \"226460\",\n \"221 KiB (1%)\",\n \"221 KiB\",\n \"221 KiB\",\n \"221 KiB\",\n \"221.2 KiB\",\n \"221 KiB (1%)\"\n ],\n \"source_delay_source\": \"Container\",\n \"format_additionalfeatures\": \"LC\",\n \"proportion_of_this_stream\": \"0.01178\",\n \"count_of_stream_of_this_kind\": \"1\",\n \"source_streamsize_proportion\": \"0.01180\"\n }\n ],\n \"images\": [],\n \"others\": [\n {\n \"type\": \"meta\",\n \"count\": \"188\",\n \"duration\": {\n \"milliseconds\": 21215\n },\n \"frame_count\": \"1\",\n \"kind_of_stream\": {\n \"fullName\": \"Other\",\n \"shortName\": \"Other\"\n },\n \"stream_identifier\": [\n \"0\",\n \"1\"\n ],\n \"count_of_stream_of_this_kind\": \"2\"\n },\n {\n \"type\": \"meta\",\n \"count\": \"188\",\n \"duration\": {\n \"milliseconds\": 21215\n },\n \"frame_count\": \"1\",\n \"kind_of_stream\": {\n \"fullName\": \"Other\",\n \"shortName\": \"Other\"\n },\n \"stream_identifier\": [\n \"1\",\n \"2\"\n ],\n \"count_of_stream_of_this_kind\": \"2\"\n }\n ],\n \"videos\": [\n {\n \"id\": {\n \"fullName\": \"2\",\n \"shortName\": \"2\"\n },\n \"count\": \"380\",\n \"title\": \"Core Media Video\",\n \"width\": {\n \"textValue\": \"1 280 pixels\",\n \"absoluteValue\": 1280\n },\n \"format\": {\n \"fullName\": \"AVC\",\n \"shortName\": \"AVC\"\n },\n \"height\": {\n \"textValue\": \"720 pixels\",\n \"absoluteValue\": 720\n },\n \"bit_rate\": {\n \"textValue\": \"7 144 kb\\/s\",\n \"absoluteValue\": 7144261\n },\n \"codec_id\": \"avc1\",\n \"duration\": {\n \"milliseconds\": 21215\n },\n \"rotation\": \"0.000\",\n \"bit_depth\": {\n \"textValue\": \"8 bits\",\n \"absoluteValue\": 8\n },\n \"scan_type\": {\n \"fullName\": \"Progressive\",\n \"shortName\": \"Progressive\"\n },\n \"format_url\": \"http:\\/\\/developers.videolan.org\\/x264.html\",\n \"frame_rate\": {\n \"textValue\": \"29.970 (29970\\/1000) FPS\",\n \"absoluteValue\": 29\n },\n \"buffer_size\": 
\"768000\",\n \"color_range\": \"Limited\",\n \"color_space\": \"YUV\",\n \"format_info\": \"Advanced Video Codec\",\n \"frame_count\": \"636\",\n \"stream_size\": {\n \"bit\": 18951244\n },\n \"streamorder\": \"1\",\n \"tagged_date\": \"UTC 2017-12-05 17:14:10\",\n \"encoded_date\": \"UTC 2017-12-05 17:14:07\",\n \"bit_rate_mode\": {\n \"fullName\": \"Variable\",\n \"shortName\": \"VBR\"\n },\n \"codec_id_info\": \"Advanced Video Coding\",\n \"framerate_den\": \"1000\",\n \"framerate_num\": \"29970\",\n \"sampled_width\": \"1280\",\n \"format_profile\": \"Main@L3.1\",\n \"kind_of_stream\": {\n \"fullName\": \"Video\",\n \"shortName\": \"Video\"\n },\n \"sampled_height\": \"720\",\n \"color_primaries\": \"BT.709\",\n \"commercial_name\": \"AVC\",\n \"format_settings\": \"CABAC \\/ 2 Ref Frames\",\n \"frame_rate_mode\": {\n \"fullName\": \"Variable\",\n \"shortName\": \"VFR\"\n },\n \"bits_pixel_frame\": \"0.259\",\n \"maximum_bit_rate\": {\n \"textValue\": \"768 kb\\/s\",\n \"absoluteValue\": 768000\n },\n \"stream_identifier\": \"0\",\n \"chroma_subsampling\": [\n \"4:2:0\",\n \"4:2:0\"\n ],\n \"maximum_frame_rate\": [\n \"30.000\",\n \"30.000 FPS\"\n ],\n \"minimum_frame_rate\": [\n \"28.571\",\n \"28.571 FPS\"\n ],\n \"pixel_aspect_ratio\": \"1.000\",\n \"colour_range_source\": \"Stream\",\n \"format_settings_gop\": \"M=2, N=30\",\n \"internet_media_type\": \"video\\/H264\",\n \"matrix_coefficients\": \"BT.709\",\n \"original_frame_rate\": [\n \"25.000\",\n \"25.000 FPS\"\n ],\n \"display_aspect_ratio\": {\n \"textValue\": \"16:9\",\n \"absoluteValue\": 1.778\n },\n \"format_settings_cabac\": {\n \"fullName\": \"Yes\",\n \"shortName\": \"Yes\"\n },\n \"codec_configuration_box\": \"avcC\",\n \"colour_primaries_source\": \"Container \\/ Stream\",\n \"transfer_characteristics\": \"BT.709\",\n \"proportion_of_this_stream\": \"0.98734\",\n \"colour_description_present\": \"Yes\",\n \"matrix_coefficients_source\": \"Container \\/ Stream\",\n \"count_of_stream_of_this_kind\": \"1\",\n \"transfer_characteristics_source\": \"Container \\/ Stream\",\n \"format_settings_reference_frames\": [\n \"2\",\n \"2 frames\"\n ],\n \"colour_description_present_source\": \"Container \\/ Stream\"\n }\n ],\n \"general\": {\n \"count\": \"336\",\n \"format\": {\n \"fullName\": \"MPEG-4\",\n \"shortName\": \"MPEG-4\"\n },\n \"codec_id\": [\n \"qt \",\n \"qt 0000.00 (qt )\"\n ],\n \"datasize\": \"19177730\",\n \"duration\": {\n \"milliseconds\": 21215\n },\n \"file_name\": \"c98e7bc52e4bd3fe5681a746f2d9c76f_diego4\",\n \"file_size\": {\n \"bit\": 19194157\n },\n \"footersize\": \"0\",\n \"frame_rate\": {\n \"textValue\": \"29.970 FPS\",\n \"absoluteValue\": 29\n },\n \"headersize\": \"16427\",\n \"othercount\": \"2\",\n \"folder_name\": \"\\/tmp\\/ami\\/setfiles\\/cb606b13b823eaea784dc77c460f3baf\",\n \"frame_count\": \"636\",\n \"stream_size\": {\n \"bit\": 16804\n },\n \"tagged_date\": \"UTC 2017-12-05 17:14:10\",\n \"audio_codecs\": \"AAC LC\",\n \"codec_id_url\": \"http:\\/\\/www.apple.com\\/quicktime\\/download\\/standalone.html\",\n \"codecs_video\": \"AVC\",\n \"encoded_date\": \"UTC 2017-12-05 17:14:07\",\n \"isstreamable\": \"Yes\",\n \"complete_name\": \"\\/tmp\\/ami\\/setfiles\\/cb606b13b823eaea784dc77c460f3baf\\/c98e7bc52e4bd3fe5681a746f2d9c76f_diego4.m4v\",\n \"file_extension\": \"m4v\",\n \"format_profile\": \"QuickTime\",\n \"kind_of_stream\": {\n \"fullName\": \"General\",\n \"shortName\": \"General\"\n },\n \"codecid_version\": \"0000.00\",\n \"commercial_name\": \"MPEG-4\",\n 
\"writing_library\": {\n \"fullName\": \"Apple QuickTime\",\n \"shortName\": \"Apple QuickTime\"\n },\n \"overall_bit_rate\": {\n \"fullName\": \"7 238 kb\\/s\",\n \"shortName\": \"7237957\"\n },\n \"audio_format_list\": \"AAC LC\",\n \"stream_identifier\": \"0\",\n \"video_format_list\": \"AVC\",\n \"codecid_compatible\": \"qt \",\n \"file_name_extension\": \"c98e7bc52e4bd3fe5681a746f2d9c76f_diego4.m4v\",\n \"internet_media_type\": \"video\\/mp4\",\n \"encoded_library_name\": \"Apple QuickTime\",\n \"comapplequicktimemake\": \"Apple\",\n \"overall_bit_rate_mode\": {\n \"fullName\": \"Variable\",\n \"shortName\": \"VBR\"\n },\n \"comapplequicktimemodel\": \"iPhone SE\",\n \"count_of_audio_streams\": \"1\",\n \"count_of_video_streams\": \"1\",\n \"comapplequicktimesoftware\": \"10.3.2\",\n \"proportion_of_this_stream\": \"0.00088\",\n \"audio_format_withhint_list\": \"AAC LC\",\n \"video_format_withhint_list\": \"AVC\",\n \"file_last_modification_date\": {\n \"date\": \"2022-10-19 20:02:32.000000\",\n \"timezone\": \"UTC\",\n \"timezone_type\": 3\n },\n \"count_of_stream_of_this_kind\": \"1\",\n \"comapplequicktimecreationdate\": \"2017-10-25T16:58:17-0400\",\n \"format_extensions_usually_used\": \"braw mov mp4 m4v m4a m4b m4p m4r 3ga 3gpa 3gpp 3gp 3gpp2 3g2 k3g jpm jpx mqv ismv isma ismt f4a f4b f4v\",\n \"comapplequicktimelocationiso6709\": \"+40.6145-074.2678+020.977\\/\",\n \"file_last_modification_date_local\": {\n \"date\": \"2022-10-19 20:02:32.000000\",\n \"timezone\": \"America\\/New_York\",\n \"timezone_type\": 3\n }\n },\n \"version\": \"21.09\",\n \"subtitles\": []\n }\n }\n}\n
\"flv:pdfinfo\"
: PDF Info will get Page level Information for a PDF or Ghostscript document. The dimensions displayed in the following example are not in pixels but points (resolution independent) and are also used for IIIF generation when deciding at what rasterized pixel size a given PDF document page will be rendered. Same as with flv:identify
, the technical metadata will be contained inside a keyed (string but semantically an integer) property. In this particular case each number is a page sequence in the original PDF order.
{\n \"flv:pdfinfo\": {\n \"1\": {\n \"width\": \"612\",\n \"height\": \"792\",\n \"rotation\": \"0\",\n \"orientation\": \"TopLeft\"\n }\n }\n}\n
@TODO: add the extra special key used by Strawberry Runners when it attaches a file. e.g WARC to WACZ
"},{"location":"metadatainarchipelago/#did-you-know","title":"Did you #know?","text":"If you delete a whole as:{AS_FILE_TYPE}
structure or one of the File level structures (a urn:uuid:{uuid}
key and its children), Archipelago will recreate it. If you modify any internal value contained in it, Archipelago will do nothing and will trust you (and if you do strange things like modifying the url
something might even fail e.g in a IIIF Metadata Display Entity Twig Template). No data edit there will trigger a modification/moving/deletion of a File (or e.g write back EXIF to be binary). You will have time to revert to a previous revision (version) of the ADO if any manual change was done. So, should you modify/delete this structures? Almost never. Ever. But you might find needs for that someday. Also to be noted. Producing this structure for a large file in S3:// is intensive. It needs to be downloaded to a local path and if the File is a few Gigabytes in size Archipelago might even run out of PHP processing time. If that ever happens you can also copy/paste from a previous revision of the ADO the relevant piece. If archipelago finds it (implied in the previous explanation) it will not have to regenerate it. The AMI module does this in an async/enqueued way to avoid time out issues and can reuse a cached metadata extraction between runs, but when working directly on an ADO via e.g a webform or via RAW edit, take that in account. More work is being done to allow also one on one async File operations and larger uploads via the web.
ap:tasks
keys","text":"As mentioned briefly before, there is also Control Metadata. What do we mean by that? Control Metadata is Archipelago's way of allowing you to give Archipelago, through metadata (that you might want to preserve or not), instructions that relate to processing. Let's start with the basic one:
{\n \"ap:tasks\": {\n \"ap:sortfiles\": \"index\"\n }\n}\n
\"ap:sortfiles\"
key will instruct Archipelago to sort (create a sequence
key and a sequential number (integer value) inside each Metadata File entry of an as:{AS_FILE_TYPE}
structure). Values can be one of ['natural', 'index', 'manual']
defaulting, if absent or has an invalid value, to natural
. - natural
: files will be sorted by File Name, the filename
key found at the same level of sequence
in the previously mentioned as:{AS_FILE_TYPE}
structure. a Photograph_1.jpeg
will come before a Photograph_10.jpeg
. The way a human being naturally would order by name. - index
files will be sorted by the order in which they appear inside the upload JSON key (the dr:for
key, one of the keys mapped in the ap:entitymapping
structure under entity:file
explained before). e.g. images
:[5, 10 , 1 ], would imply the File Entity with Drupal ID 5 would get \"sequence\": 1
, the one with Drupal ID 10 will get \"sequence\": 2
, etc. This is the default when ingesting via the AMI module, given the need to preserve the order of files that have (or might have) unknown names or names you don't control (thus natural won't work), e.g. files coming from a Remote HTTP/HTTPS location. - manual
: You can modify the values manually for any sequence
key inside as:{AS_FILE_TYPE}
structure and those values will stick.
What do we mean by stick? Well, every time Archipelago gets a change relevant to this \"ap:sortfiles\"
setting, e.g. a new File is added or a File is deleted, automatic re-sorting will happen.
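As an illustration of manual ordering (the UUIDs are placeholders and all other per-file keys are omitted here for brevity), the sequence values below are preserved as written because ap:sortfiles is set to manual:
{\n \"ap:tasks\": {\n \"ap:sortfiles\": \"manual\"\n },\n \"as:image\": {\n \"urn:uuid:11111111-1111-1111-1111-111111111111\": {\n \"sequence\": 2\n },\n \"urn:uuid:22222222-2222-2222-2222-222222222222\": {\n \"sequence\": 1\n }\n }\n}\n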
{\n \"ap:tasks\": {\n \"ap:forcepost\": true\n }\n}\n
\"ap:forcepost\"
: A boolean. The functionality of this key is provided by the Strawberry Runners Module. It will force Strawberry Runners post-processing for this ADO.
Each Configured and active Postprocessor provided by the Strawberry Runners
module might or might not kick in
by evaluating a set of rules. If a rule evaluates to TRUE, the PostProcessor will generate a certain output, e.g. a Solr Indexed Strawberry Flavor
Data Source containing OCR, HOCR and NLP metadata for one or more pages of a PDF.
Every time a Create or Update operation on an ADO happens, these rules will be evaluated and the Processor will be enqueued as a future task. But at the moment of executing, when the queue workers take one item, a check will be made, and if the result of a previous run (e.g. HOCR) is already present in the system (e.g. in Solr) and is verified to belong to the same source, the actual heavy load of processing the PDF will be skipped.
While testing, coding, doing complex changes in the system (like largely modifying the settings for one processor) or even in the case of an ISSUE (e.g. HOCR was wrongly run with a setting that made everything look like garbage!) you can instruct Archipelago to run again without checks. And again. And again. Basically every time you Save an ADO, by setting ap:forcepost
to true
. This can also be used in batch and is already implied (meaning it does it for you, but only once, without modifying the JSON) in the Trigger Strawberry Runners process/reprocess for Archipelago Digital Objects
VBO action we provide.
In the absence of \"ap:forcepost\"
the value is implicitly false
, same as setting it explicitly to false
.
{\n \"ap:tasks\": {\n \"ap:nopost\": [\n 'pager'\n ]\n }\n}\n
\"ap:nopost\"
: an array or list of ACTIVE_STRAWBERRY_RUNNERS_CONFIG_ID
entries, meaning Machine names (or IDs) of Strawberry Runners Processor Configuration Entities. The functionality of this key is provided by the Strawberry Runners module. If the value is not an array it will be ignored. Any value present that does not match an active Strawberry Runners Processor Configuration Entity ID will also be ignored. Effectively, any post processors in this list will be skipped for this ADO. This allows a finer grained avoidance of some expensive processing that might lead to unusable data, e.g. a particular Manuscript ADO that Tesseract won't be able to OCR correctly. Adding this key to an ADO that was already processed won't remove existing generated/stored processing.
{\n \"ap:tasks\": {\n \"ap:ami\": {\n \"metadata_display\": 7\n\n }\n }\n}\n
\"ap:ami\"
is a newer key (as of Archipelago 1.1.0 and AMI 0.5.0) and for now can only contain a single key named \"metadata_display\"
. The value of this one can be either a single Integer, the Drupal ID of a Metadata Display Entity or a string, the UUID
of a Metadata Display Entity. The functionality triggered by this key is provided by the AMI module and will do something extremely powerful: it will take the complete JSON and process it through the Twig or Metadata Display Entity referenced in its value, IF, and only IF, the output of that template is JSON. This runs before any other event (Archipelago runs a ton of events that validate, enrich, check, etc. your ADOs from the moment you Save or Create them), and because of that it allows you to totally pivot, transform or change RAW data coming into Archipelago, e.g. via the JSON:API, into the structure you need/want. Said differently, you could push JSON from a totally different system and, if the referenced Metadata Display Entity is well written, end up with a perfectly aligned JSON matching your internal structure without modifying the INPUT manually. Because Twig is very powerful you can also do cleanups, complex logic, etc. Moreover, you can transform any existing ADO via Batch by adding this key and its values using the JSON Patch VBO action. Once processed, and if all went well, meaning the output of the Template is valid JSON, the key itself will be removed. This is to avoid running over and over (invisibly to you) on further operations/edits/etc. This is a one time operation that does not stick. What happens if it does not run well, fails, errors out or the Template referenced does not exist? You get a second chance (everyone deserves one): the Original ingested JSON, without transformations, is kept. All this is very similar to what the AMI module does via a CSV, but in this case it is atomic. We know what you are thinking. Can you process data twice, via AMI and then at the end pass it again through another template based on a certain logic coming from the first? Yes, you can!
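As a sketch of that batch route (this is a minimal RFC 6902 JSON Patch document; the exact patch the JSON Patch VBO action expects may differ, and an add on an already existing ap:tasks member would replace it), adding the key to an ADO could look like:
[\n {\n \"op\": \"add\",\n \"path\": \"/ap:tasks\",\n \"value\": {\n \"ap:ami\": {\n \"metadata_display\": 7\n }\n }\n }\n]\n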
In the future \"ap:ami\"
might contain more keys to do more advanced File level actions. Archipelago is being constantly enhanced!
Archipelago also keeps information about who generated a certain JSON and how. Depending on how the Ingest/Edit of an ADO happened, this can be automatically generated or added manually (the case for AMI ingests).
The structure is simple and not accumulative because there is also versioning at the ADO (Drupal) level that allows you to look back if needed.
{\n \"as:generator\": {\n \"type\": \"Update\",\n \"actor\": {\n \"url\": \"http:\\/\\/localhost:8001\\/form\\/default-descriptive-metadata-ami\",\n \"name\": \"default_descriptive_metadata_ami\",\n \"type\": \"Service\"\n },\n \"endTime\": \"2022-03-16T15:51:24-04:00\",\n \"summary\": \"Generator\",\n \"@context\": \"https:\\/\\/www.w3.org\\/ns\\/activitystreams\"\n },\n}\n
The \"as:generator\"
key conforms to the Activity Streams Vocabulary Version 2.0 and keeps track of the last Operation executed on an ADO. Edits and Ingests via the Webform Widget will create this automatically using the Canonical URL of the Webform that generated the content, and \"type\"
might be either \"Update\"
or \"Create
\". ADOs that were processed via \"ap:ami\"
will automatically have one generated to express the fact that the Original JSON was modified by a Metadata Display Entity. Objects created via AMI and using a Metadata Display Entity
can also add via the Twig template syntax the AMI Set ID used to generate the ADO (or Update) allowing the Service URL (\"url\"
) to be faceted/searched for (e.g show me all objects ingested via AMI Set with ID 4).
Anything or everything else (including unknown data, future data, upcoming data) belongs to you. How you name it, how you structure it, how it evolves is up to you and the functionality and integration you want. That said, as someone (the writer) who enjoys cooking and had to learn the hard way (experimenting and failing sometimes) the basics before making proper meals for others to enjoy, we suggest you plan this before inventing the next Open Schema.
Note: Why the out-of-context Cooking Analogy?
This idea is deeply embedded in our Architecture. We see Metadata as ingredients. Your JSON is your fridge (or pantry or both). Metadata Display Entities, and their Twig Templates, recipes that allow you to pick and choose Ingredients and your Twig coding skills (and filters, functions, loops and conditionals) your basic cooking skills. This analogy has many consequences:
The Open Schema you will get from a Vanilla Archipelago already covers many, many use cases and was developed by a caring team of metadata professionals and practitioners who have been working with Archipelago for a while already. It covers LoD and most Description needs for your Whys, Wheres, Whens, Who/Whom. Some tips:
property_1
might be hard to document for you. But original_artifact_in_collection
might be better (and semantically denotes that the value might be a boolean, true or false). Use plural and singular in your naming to denote whether something might contain more than one entry. Try to be generic but assertive. mods_modsinfo_namepart
is tempting but is already hinting at a single original fixed schema. And you might end up using the same value (the who) in Dublin Core, IIIF, schema.org, etc. outputs. So maybe author
instead? This also leads to: sometimes multiple flat keys are easier to understand than many deeply nested ones. You can keep authors and contributors in separate keys (see the small example after this list).
explaining why/what it holds. You can also add Help/Extra info when designing your schema via a the Webform. Each element has extra properties to do so and that way you can also explain others (the ones using the Webform to add/edit) what the purpose of your metadata is.Do you have your own Kitchen/cooking tips you want to share? We hope you enjoy the learning process and the many choices Archipelago provides.
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"metadatatwigs/","title":"Twig Templates and Archipelago","text":"Archipelago uses a fast, cached templating system that is core to Drupal, called Twig
. In its guts (or its heart?) Archipelago uses this system to transform the close-to-your-needs open schema metadata that lives in every strawberryfield
as JSON into metadata that is close to other people's fixed schema needs. This is quite simple, but it is an essential component of our vision of how a repository should manage metadata.
Twig is a template engine for PHP and part of the Symfony framework.
This templating system is exposed to Archipelago users through the UI, and is stored in the repository as content. This setup empowers users to fully control how metadata is transformed and published without touching their individual sources or needing to manage hard-coded configurations. We named these readily accessible and powerful templates Metadata Display entities
, but they serve more than just display needs.
Twig drives every Page in a Drupal 8/9/10 environment.
Twig drives every aspect of your ADO exposure to the world in Archipelago and even batch Ingest.
Templates or recipes can be shared, exported, ingested, updated, and adapted in many ways. This means you can make changes quickly without having to wait for the next major release of Archipelago or your favorite Metadata Schema Specs Committee\u2019s agreement to implement the next or the last version. This module not only handles metadata but media assets as well. It will extract local or remote URIs and files from your metadata and render them as media viewers: books, 3D models, images, panoramas, A/V, all with IIIF in its soul.
Metadata Display Entities are used for:
Archipelago Ships with:
You can find these templates here:
Archipelago (the humans) will keep adding and refining these with every release.
"},{"location":"metadatatwigs/#instructions-and-examples","title":"Instructions and Examples","text":"While a lot of core needs and use cases are covered with the Twig Templates shipped with Archipelago, you may want to add more Input elements to your Webforms, which in turn will generate new JSON Values, which in turn you may want to show/expose to end users.
Knowing (even if you do not plan to) how to edit or create your own Twig templates is important.
format_strawberryfield
can do and what many other possibilities are exposed through our templating system in this guide: Strawberryfield Formatters.Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"modifyingfileextensionsinwebform/","title":"Customizing Webforms (Modifying allowable file extensions)","text":"A guide to walk users through how to modify the Webform Descriptive Metadata
to allow additional file extensions to be ingested into Archipelago. This is the default Webform that ships with Archipelago when following archipelago-deployment.
When creating an Archipelago Digital Object (ADO), on Step 4 of the ingest, Attach Files
, you will upload the files associated with your ADO. There will be a section on the Webform outlining the maximum number of files allowed, the maximum file size allowed, and the allowed file extensions that can be uploaded.
Let's say we are creating an ADO with the media type DigitalDocument
and this ADO contains a data set saved as a csv
file, but when we get to Step 4 of the ingest workflow we find that csv
is not an allowed file extension. Fortunately, Archipelago has no restrictions on what file extensions can be uploaded, but some use cases will require a little configuring to fit a specific need. This guide will walk users through the steps to modify the default Webform, Descriptive Metadata
, to allow additional file extensions to be included during an ingest.
Prerequisites for following this guide:
Once logged in as admin
, the first thing we need to do is navigate to the Webforms page so we can edit the Webform Descriptive Metadata.
Click on Manage
, then Structure
and when the page loads, scroll down and click Webforms
.
This is where all of the Webforms inside your Archipelago live. For this guide we're going to edit the Webform Descriptive Metadata
. Go ahead and click Build
under the OPERATIONS
column for Descriptive Metadata
.
Here we see all of the elements in Descriptive Metadata
; Title, Media type, Description, Linked Data elements, etc. The element that we want to edit is Upload Associated Documents
as this is the field you will use to upload pdf
, doc
, rtf
, txt
, etc. files during the ingest workflow. Click on Edit
under the OPERATIONS
column.
A new screen will pop up named Edit Upload Associated Documents element
. This is where you can configure the maximum number of values (under ELEMENT SETTINGS
), the maximum file size and also edit the allowed file extensions for this element, which is what we'll be doing. The latter both exist under FILE SETTINGS
section, highlighted in the screenshot below.
When you scroll down you'll see the Allowed file extensions
field. This is where we will add the csv
file extension. Please note: All file extensions are separated by a space; no ,
or .
between the values.
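For example, if the current value of the field were pdf doc docx txt rtf (illustrative only; your installation's defaults may differ) and you add csv, the field would end up reading:
pdf doc docx txt rtf csv\n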
Once you've added all the file extensions your project needs, scroll down to the bottom of Edit Upload Associated Documents element
and click Save
.
This next step is imperative for saving your changes: scroll to the bottom of your elements list page and click Save elements
in order to persist all changes made.
Woohoo! Now when you are ingesting a DigitalDocument
object, you will be able to add csv
files! \ud83c\udf53
When logged in as an admin, we go to Manage > Structure > Webforms and click on Build
under the OPERATIONS
column of Descriptive Metadata
(shortcut: /admin/structure/webform/manage/descriptive_metadata). Then we click on Upload Associated Documents
to edit the element, scroll down to the Allowed file extensions field and add csv
without .
or ,
separating the values. Click Save
at the bottom of the Edit Upload Associated Documents element
page and then Save elements
at the bottom of the Webform page.
wav
or aiff
file for \"MusicRecording\" or an mov
file for a \"Movie\"? The steps are virtually the same as what is outlined in this guide! The difference here is that instead of editing Upload Associated Documents
, you will need to edit the field element that is associated with your ADO's media type. For example, with Media type MusicRecording
, you will edit Upload Audio File
, for Movie
, will edit Videos
.
When editing an element inside Descriptive Metadata
, at the top of the window Edit Upload Associated Documents element
(see Step 3 for a recap on how to get here) there is a tab next to General
titled Conditions
. Inside of Conditions
we have CONDITIONAL LOGIC
which is where the Webform is told which Media type
needs this element to be visible in the Webform. In the example below, we know that the field element Upload Associated Documents
will be visible when DigitalDocument
, Thesis
and Book
are the selected Media type
.
This is also the place you can add new logic or delete present logic by clicking the +
or -
next to the TRIGGER/VALUE
to create new conditionals.
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"ourtake/","title":"Archipelago's Philosophy & Guiding Principles","text":"Archipelago operates under a different concept than the one we all have become used to in recent times. We like to think this is not done by re-inventing the wheel, but by making sure the road is clean, level, and with fewer obstacles than before. We do this by removing some heavy weight from the top, some unneeded ballast, plus, of course, some well positioned innovations to make the ride enjoyable.
We also like to say that Archipelago is like a Metadata Synthetizer (LFO anyone?) and we want to give you all the knobs, parameters, inputs and outputs to make the best out of it. Still, you can make \"music\" by just tapping the keyboard.
To get here we had to do a full stop first. Look around. Questioning everything we knew. Research and test (repeat) and then re-architect slowly on new and old assumptions, and especially new community values.
"},{"location":"ourtake/#whys-and-whats-of-archipelago","title":"Whys and Whats of Archipelago","text":"Because this topic is near and dear to our hearts, we are taking extra care with writing this important document. Please stay tuned for the full, verbose, heartfelt, and detailed long story of Archipelago's origins, development, future hopes and dreams.
In the meantime, please consider reviewing this presentation created by Archipelago's Lead Architect Diego Pino which captures the essence of Archipelago's philosophy and guiding principles:
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"presentations_events/","title":"Archipelago Presentations, Events, and Additional Resources","text":"Important General & Internal Recordings Notes
Please be aware that some of the presentation documents shared above may contain links to older documentation resources that have since changed or are no longer available. We recommend referring to the latest documentation versions available on this site whenever needed.
METRO's Digital Services Team facilitated many different internal training sessions throughout 2020-2022. If you and your team need access to any of these sessions that were recorded, please contact us. Thank you!
"},{"location":"presentations_events/#2023","title":"2023","text":"Archipelago Late 2022 Workshop Series:
McCarthy, B. J. (2022). Archipelago Commons: Using the Archipelago and AMI software to provide access to Rensselaer Polytechnic Institute's engineering drawings, a pilot project. Issues in Science and Technology Librarianship, 101. https://doi.org/10.29173/istl2717
Open Perspectives Forum. Monger, Jenifer J.; McCarthy, Brenden. (November 2022)
Migration, Collaboration and Innovation with Archipelago Commons. Monger, Jenifer J. (September 2022)
\ud83c\udf53 Archipelago 1.0.0 - August 2022 Release Announcement (August 2022) and updated Specs and Features List
Open Repositories June 2022
Formation of the Archipelago Working Group (April 2022)
\ud83c\udf53 Archipelago 1.0.0-RC3 and 1.0.0 Release Announcement - November 2021
AMIA Conference Workshop: Building a Web Archive-Capable Digital Repository with Webrecorder and Archipelago. Kreymer, Ilya; Ramirez-Lopez, Lorena; Dickson, Emma; Pino Navarro, Diego; Sherrick (Lund), Allison. (November 2021)
Solr Importer AMI Migrations, Showcase and Roundtable. Pino Navarro, Diego; Sherrick (Lund), Allison. (July 2021)
IIIF Annual 2021 Conference:
June 2021 Open Repositories Conference:
WebRecorder + Archipelago Workshop. Pino Navarro, Diego; Sherrick (Lund), Allison; Kreymer, Ilya; Ramirez-Lopez, Lorena; Dickson, Emma. (May 2021)
Twig Templates and Archipelago. Pino Navarro, Diego; Sherrick (Lund), Allison. (May 2021)
\ud83c\udf53Archipelago 1.0.0-RC2 Release Announcement (May 2021) and Archipelago RC2 Specs and Features List
Working with Archipelago Multi-Importer (AMI). Pino Navarro, Diego; Sherrick (Lund), Allison. (April 2021)
Archipelago Digital Objects Repository (an) architecture to last. Pino Navarro, Diego. (DrupalCon North America 2021)
Metadata, Schemas and Media in Archipelago. Pino Navarro, Diego; Sherrick (Lund), Allison (February 2021)
Deploying Archipelago 1.0.0-RC1. Pino Navarro, Diego; Sherrick (Lund), Allison. (February 2021)
\ud83c\udf53 Archipelago 1.0.0-RC1 Release Announcement (December 2020)
Webforms in Archipelago. Pino Navarro, Diego; Sherrick (Lund), Allison; Palmentiero, Jennifer. (December 2020)
IIIF and Archipelago - Community Call. Pino Navarro, Diego. (October 2020)
Archipelago : an empathic Digital Repository Architecture (September 2020)
\ud83c\udf53 Archipelago 8.x-1.0-beta3 Release Announcement (July 2020)
If you have a public Archipelago presentation, recording, or other resource you'd like to share on this page \ud83c\udfdd\ufe0f\ud83d\udccd, please contact us. We would love to add your great work to this list! \ud83d\udc9a
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"search_advanced/","title":"Advanced Search","text":"This page is under construction. Please stay tuned for further updates and thank you for your patience as we continue to brew up more documentation.
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
","tags":["Search","Search API","Solr","Solr Index","Facets","Strawberry Key Name Providers"]},{"location":"search_solr_index/","title":"Search and Solr Overview","text":"This page is under construction. Please stay tuned for further updates and thank you for your patience as we continue to brew up more documentation.
","tags":["Search","Search API","Solr","Solr Index","Facets","Strawberry Key Name Providers","OCR","HOCR"]},{"location":"search_solr_index/#preamble-prerequisites","title":"Preamble + prerequisites","text":"Before diving into any Search and Solr Configuration, please review our Metadata in Archipelago overview documentation, which provides important context for understanding how the shape of your Archipelago Digital Objects/Collections (ADOs) metadata will inform your Search and Solr options and outcomes.
","tags":["Search","Search API","Solr","Solr Index","Facets","Strawberry Key Name Providers","OCR","HOCR"]},{"location":"search_solr_index/#archipelago-and-solr","title":"Archipelago and Solr","text":"Archipelago's latest Release (1.1.0) uses Apache Solr 9.1, which incorporates some major improvements and changes from Solr 8. Please refer to the [primary Solr documentation]((https://solr.apache.org/guide/solr/9_1/index.html) for the most comprehensive and in-depth information about Solr's wide breadth of functionality and configuration options.
","tags":["Search","Search API","Solr","Solr Index","Facets","Strawberry Key Name Providers","OCR","HOCR"]},{"location":"search_solr_index/#instructions-and-guides","title":"Instructions and Guides","text":"Archipelago uses solr-ocrhighlighting v0.8.4, built by the Development Team at the Bavarian State Library.
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
","tags":["Search","Search API","Solr","Solr Index","Facets","Strawberry Key Name Providers","OCR","HOCR"]},{"location":"security_bots/","title":"Managing Bots","text":"A public-facing production instance will likely encounter bad bots and other malicious traffic that will consume resources. There are many solutions available that address a variety of different needs, but we provide basic configurations and a Docker image for integrating the NGINX Ultimate Bad Bot & Referrer Blocker.
Warning
Before proceeding, please be sure to familiarize yourself with the NGINX Ultimate Bad Bot & Referrer Blocker README.
","tags":["Security","Bots"]},{"location":"security_bots/#deployment","title":"Deployment","text":"MSMTP_ACCOUNT=SMTP_ACCOUNT_NAME\nMSMTP_EMAIL=repositorysupport@metro.org\nMSMTP_HOST=smtp.metro.org\nMSMTP_PASSWORD=YOUR_SMTP_PASSWORD\nMSMTP_PORT=SMTP_PORT\nMSMTP_STARTTLS=on\nNGXBLOCKER_ENABLE=false\nNGXBLOCKER_CRON=00 22 * * *\nNGXBLOCKER_CRON_COMMAND=/usr/local/sbin/update-ngxblocker -x\nNGXBLOCKER_CRON_START=false\n
# Run docker-compose up -d\n# Docker file for Arm64 and Apple M1 machines\nversion: '3.5'\nservices:\n web:\n container_name: esmero-web\n # image: jonasal/nginx-certbot\n image: esmero/nginx-bot-blocker:1.1.0-multiarch\n restart: always\n environment:\n CERTBOT_EMAIL: ${ARCHIPELAGO_EMAIL}\n ENVSUBST_VARS: FQDN\n FQDN: ${ARCHIPELAGO_DOMAIN}\n NGINX_ENVSUBST_OUTPUT_DIR: /etc/nginx/user_conf.d\n MSMTP_ACCOUNT: ${MSMTP_ACCOUNT}\n MSMTP_EMAIL: ${MSMTP_EMAIL}\n MSMTP_HOST: ${MSMTP_HOST}\n MSMTP_PASSWORD: ${MSMTP_PASSWORD}\n MSMTP_PORT: ${MSMTP_PORT}\n MSMTP_STARTTLS: ${MSMTP_STARTTLS}\n NGXBLOCKER_CRON: ${NGXBLOCKER_CRON}\n NGXBLOCKER_CRON_COMMAND: ${NGXBLOCKER_CRON_COMMAND}\n NGXBLOCKER_CRON_START: ${NGXBLOCKER_CRON_START}\n NGXBLOCKER_ENABLE: ${NGXBLOCKER_ENABLE}\n ports:\n - \"80:80\"\n - \"443:443\"\n volumes:\n - ${ARCHIPELAGO_ROOT}/config_storage/nginxconfig/template:/etc/nginx/templates\n - ${ARCHIPELAGO_ROOT}/drupal:/var/www/html:cached\n - ${ARCHIPELAGO_ROOT}/data_storage/ngnixcache:/var/cache/nginx\n - ${ARCHIPELAGO_ROOT}/data_storage/letsencrypt:/etc/letsencrypt\n - ${ARCHIPELAGO_ROOT}/config_storage/nginxconfig/bots.d:/etc/nginx/bots.d\n
First pull the new image:
docker compose pull\n
Note
If using an older version of docker, don't forget the hyphen:
docker-compose pull\n
Now bring the Docker ensemble down and up again:
docker compose down && docker compose up -d\n
Note
If using an older version of docker, don't forget the hyphen:
docker-compose down && docker-compose up -d\n
Run the install script for the bot blocker in the default dry run mode and review the output:
docker exec -ti esmero-web bash -c \"/usr/local/sbin/install-ngxblocker\"\n
docker exec -ti esmero-web bash -c \"/usr/local/sbin/install-ngxblocker -x\"\n
docker exec -ti esmero-web bash -c \"/usr/local/sbin/setup-ngxblocker -v /etc/nginx/templates -e .copy\"\n
docker exec -ti esmero-web bash -c \"/usr/local/sbin/setup-ngxblocker -v /etc/nginx/templates -e .copy -x\"\n
Enable the bot blocker and cron (if applicable): .env
MSMTP_ACCOUNT=SMTP_ACCOUNT_NAME\nMSMTP_EMAIL=repositorysupport@metro.org\nMSMTP_HOST=smtp.metro.org\nMSMTP_PASSWORD=YOUR_SMTP_PASSWORD\nMSMTP_PORT=SMTP_PORT\nMSMTP_STARTTLS=on\nNGXBLOCKER_ENABLE=true\nNGXBLOCKER_CRON=00 22 * * *\nNGXBLOCKER_CRON_COMMAND=/usr/local/sbin/update-ngxblocker -x\nNGXBLOCKER_CRON_START=true\n
Note
If MSMTP_EMAIL
is blank and cron is enabled the flag for sending email notifications will be skipped.
Bring the Docker ensemble down and back up again:
docker compose down && docker compose up -d\n
Note
If using an older version of docker, don't forget the hyphen:
docker-compose down && docker-compose up -d\n
Test that it is working by following the \"TESTING\" section (STEP 10) in the official documentation: https://github.com/mitchellkrogza/nginx-ultimate-bad-bot-blocker
If looking to use this solution as part of an upgrade (from 1.0.0 to 1.1.0, for example) we recommend coming back to the above steps after successfully completing the upgrade. After the upgrade, you will only need to add the environment variables and docker compose configurations and follow the steps as detailed above.
","tags":["Security","Bots"]},{"location":"security_bots/#advanced-configuration","title":"Advanced Configuration","text":"Because our Docker containers only persist our mounted files and folders, any advanced configurations may require overriding the files generated by our esmero-web container on boot. For example, the above setup-ngxblocker
script is normally responsible for writing the following include lines:
include /etc/nginx/bots.d/blockbots.conf;\ninclude /etc/nginx/bots.d/ddos.conf;\n
Because the script is unable to place them in the correct part of our nginx.conf.template
file, which in turn generates our nginx.conf
file (see Using environment variables in nginx configuration), our own script adds (when NGXBLOCKER_ENABLE=true
) or removes (when NGXBLOCKER_ENABLE=false
) the lines to an empty file, which in turn is statically included in our main nginx.conf.template
file. One option provided by setup-ngxblocker
is to exclude (-d
) the DDOS rule. In our case, we need to manually override the lines in our template file to reproduce this behavior:
Example
nginx.conf.templateupstream cantaloupe {\n server esmero-cantaloupe:8182;\n}\n\nserver {\n listen 443 ssl;\n server_name ${FQDN};\n ssl_certificate /etc/letsencrypt/live/${FQDN}/fullchain.pem;\n ssl_certificate_key /etc/letsencrypt/live/${FQDN}/privkey.pem;\n client_max_body_size 1536M; ## Match with PHP from FPM container\n\n root /var/www/html/web; ## <-- Your only path reference.\n\n fastcgi_send_timeout 120s;\n fastcgi_read_timeout 120s;\n fastcgi_pass_request_headers on;\n\n fastcgi_buffers 16 16k;\n fastcgi_buffer_size 32k;\n\n # Please adapt to your needs\n proxy_buffers 16 16k; \n proxy_buffer_size 16k;\n\n #include /etc/nginx/conf.d/bots.include;\n include /etc/nginx/bots.d/blockbots.conf;\n
Note
Keep in mind that from this point, when disabling/enabling the bot blocker via the environment variable, you'll also need to uncomment/comment the added line.
Another more generally applicable approach is to override files that are part of the docker image:
Example
Our bash script (setup_bot_blocker.sh) is triggered by and runs just before the startup script (start_nginx_certbot.sh) for the NGINX Certbot image. For any advanced needs involving custom startup behavior, our script can be modified and overridden:
docker cp esmero-web:/scripts/setup_bot_blocker.sh drupal/scripts/archipelago/\n
# Run docker-compose up -d\n# Docker file for Arm64 and Apple M1 machines\nversion: '3.5'\nservices:\n web:\n container_name: esmero-web\n # image: jonasal/nginx-certbot\n image: esmero/nginx-bot-blocker:1.1.0-multiarch\n restart: always\n environment:\n CERTBOT_EMAIL: ${ARCHIPELAGO_EMAIL}\n ENVSUBST_VARS: FQDN\n FQDN: ${ARCHIPELAGO_DOMAIN}\n NGINX_ENVSUBST_OUTPUT_DIR: /etc/nginx/user_conf.d\n MSMTP_ACCOUNT: ${MSMTP_ACCOUNT}\n MSMTP_EMAIL: ${MSMTP_EMAIL}\n MSMTP_HOST: ${MSMTP_HOST}\n MSMTP_PASSWORD: ${MSMTP_PASSWORD}\n MSMTP_PORT: ${MSMTP_PORT}\n MSMTP_STARTTLS: ${MSMTP_STARTTLS}\n NGXBLOCKER_CRON: ${NGXBLOCKER_CRON}\n NGXBLOCKER_CRON_COMMAND: ${NGXBLOCKER_CRON_COMMAND}\n NGXBLOCKER_CRON_START: ${NGXBLOCKER_CRON_START}\n NGXBLOCKER_ENABLE: ${NGXBLOCKER_ENABLE}\n ports:\n - \"80:80\"\n - \"443:443\"\n volumes:\n - ${ARCHIPELAGO_ROOT}/config_storage/nginxconfig/template:/etc/nginx/templates\n - ${ARCHIPELAGO_ROOT}/drupal:/var/www/html:cached\n - ${ARCHIPELAGO_ROOT}/data_storage/ngnixcache:/var/cache/nginx\n - ${ARCHIPELAGO_ROOT}/data_storage/letsencrypt:/etc/letsencrypt\n - ${ARCHIPELAGO_ROOT}/config_storage/nginxconfig/bots.d:/etc/nginx/bots.d\n - ${ARCHIPELAGO_ROOT}/drupal/scripts/archipelago/setup_bot_blocker.sh:/scripts/setup_bot_blocker.sh\n
#!/bin/bash\n\nset -e\n\nif [ ! -z \"${MSMTP_EMAIL}\" ]; then\n envsubst < /root/.msmtprc.template > /root/.msmtprc\nfi\n\nif [ \"${NGXBLOCKER_CRON_START}\" = true ]; then\n if [ ! -z \"${MSMTP_EMAIL}\" ]; then\n CRON_COMMAND=\"${NGXBLOCKER_CRON} ${NGXBLOCKER_CRON_COMMAND} -e ${MSMTP_EMAIL}\"\n else\n CRON_COMMAND=\"${NGXBLOCKER_CRON} ${NGXBLOCKER_CRON_COMMAND} -n\"\n fi\n echo \"${CRON_COMMAND}\" | crontab - &&\n /etc/init.d/cron start\nfi\n\nif [ ! -f /etc/nginx/templates/bots.include.copy ]; then\n touch /etc/nginx/templates/bots.include.copy\nfi\nif [ ! -f /etc/nginx/templates/bots.include.template ]; then\n touch /etc/nginx/templates/bots.include.template\nfi\n\nif [ \"${NGXBLOCKER_ENABLE}\" = true ]; then\n if [ ! -L /etc/nginx/conf.d/botblocker-nginx-settings.conf ]; then\n ln -s /etc/nginx/bots_settings_conf.d/botblocker-nginx-settings.conf /etc/nginx/conf.d/botblocker-nginx-settings.conf\n fi\n if [ ! -L /etc/nginx/conf.d/globalblacklist.conf ]; then\n ln -s /etc/nginx/bots_settings_conf.d/globalblacklist.conf /etc/nginx/conf.d/globalblacklist.conf\n fi\n if ! grep -q blockbots.conf /etc/nginx/templates/bots.include.copy; then\n echo \"include /etc/nginx/bots.d/blockbots.conf;\" >> /etc/nginx/templates/bots.include.copy\n fi\n #if ! grep -q ddos.conf /etc/nginx/templates/bots.include.copy; then\n # echo \"include /etc/nginx/bots.d/ddos.conf;\" >> /etc/nginx/templates/bots.include.copy\n #fi\n if ! grep -q blockbots.conf /etc/nginx/user_conf.d/bots.include; then\n echo \"include /etc/nginx/bots.d/blockbots.conf;\" >> /etc/nginx/user_conf.d/bots.include\n fi\n #if ! grep -q ddos.conf /etc/nginx/user_conf.d/bots.include; then\n # echo \"include /etc/nginx/bots.d/ddos.conf;\" >> /etc/nginx/user_conf.d/bots.include\n #fi\n cp /etc/nginx/templates/bots.include.copy /etc/nginx/templates/bots.include.template\nelse\n >|/etc/nginx/templates/bots.include.template\n >|/etc/nginx/user_conf.d/bots.include\n if [ -L /etc/nginx/conf.d/botblocker-nginx-settings.conf ]; then\n rm /etc/nginx/conf.d/botblocker-nginx-settings.conf\n fi\n if [ -L /etc/nginx/conf.d/globalblacklist.conf ]; then\n rm /etc/nginx/conf.d/globalblacklist.conf\n fi\nfi\n
To exclude the DDOS rule with this approach, remove the corresponding ddos.conf include
line from the existing files: docker exec -ti esmero-web bash -c \"sed -i '/include \\/etc\\/nginx\\/bots.d\\/ddos.conf/d' /etc/nginx/templates/bots.include.copy\"\n
docker exec -ti esmero-web bash -c \"sed -i '/include \\/etc\\/nginx\\/bots.d\\/ddos.conf/d' /etc/nginx/templates/bots.include.template\"\n
docker exec -ti esmero-web bash -c \"sed -i '/include \\/etc\\/nginx\\/bots.d\\/ddos.conf/d' /etc/nginx/user_conf.d/bots.include\"\n
Finally we bring the Docker ensemble down and back up again to propagate the changes in our container:
docker compose down && docker compose up -d\n
Note
If using an older version of docker, don't forget the hyphen:
docker-compose down && docker-compose up -d\n
The above is an example of a more complicated customization, but it's a pattern that can be used more generally throughout the Docker containers, i.e.:
- ${ARCHIPELAGO_ROOT}/LOCATION_ON_HOST/CUSTOMIZED_FILE_ON_HOST:/LOCATION_IN_DOCKER_CONTAINER/FILE_IN_DOCKER_CONTAINER\n
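For instance, if you ever needed to customize the Certbot startup script itself (purely an illustration of the pattern, assuming it lives alongside setup_bot_blocker.sh under /scripts/), you would copy it out, edit it, and mount your copy back over the original:
docker cp esmero-web:/scripts/start_nginx_certbot.sh drupal/scripts/archipelago/start_nginx_certbot.sh\n# then add a matching line to the web service volumes in docker-compose.yml:\n# - ${ARCHIPELAGO_ROOT}/drupal/scripts/archipelago/start_nginx_certbot.sh:/scripts/start_nginx_certbot.sh\n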
Work-In-Progress Note This documentation page is still under construction and content may change with future updates. Please use caution when implementing any instructions referenced herein, as there may be missing steps or corresponding configuration files. Thank you for your patience as we continue to update Archipelago's documentation.
The steps found below describe one potential manual SSL configuration for Archipelago deployments. A git clone
deployment option will be available for future releases.
This process takes less than 10 minutes of reading YML and editing a few files (described below) to get SSL running and set up with auto-renewal.
First, configure Certbot, following the instructions found on https://certbot.eff.org.
Inside a /persistent partition, establish the following folder structure. Note: you can keep the existing folder structure if you so choose. A benefit of the following structure is that it decouples the git clone of archipelago-deployment, which is made to be self-sustainable and good for coding or smaller deployments.
[ec2-user@ip-17x-xx-x-xxx persistent]$ ls -lah\ntotal 64K\ndrwxr-xr-x 14 root root 4.0K Oct 5 23:11 .\ndr-xr-xr-x 19 root root 275 Dec 15 2019 ..\ndrwxr-xr-x 8 999 999 4096 Oct 13 20:07 db\ndrwxr-xr-x 13 root root 4.0K Oct 5 23:03 drupal8\ndrwxr-xr-x 5 8183 8183 4.0K Feb 23 2020 iiifcache\ndrwxr-xr-x 2 root root 4.0K Feb 23 2020 iiifconfig\ndrwxr-xr-x 4 root root 4.0K Oct 5 22:45 nginx_conf\ndrwxr-xr-x 3 root root 4.0K Feb 26 2019 solrconfig\ndrwxr-xr-x 3 8983 8983 4.0K Feb 26 2019 solrcore\n
To get to this point, create a git clone of archipelago-deployment and then copy the content of its /persistent folder out of the repo into this structure. The original (or what is left of) archipelago-deployment ends up inside a drupal8 folder here.
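A rough sketch of those steps (assuming the clone's persistent/ folder matches the layout above; adjust paths and branch to your environment):
cd /persistent\ngit clone https://github.com/esmero/archipelago-deployment.git drupal8\n# Move the persistent service folders out of the clone so they match the structure shown above\nmv drupal8/persistent/* /persistent/\n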
Copy and paste the following to create a local copy of this file:
docker-compose.yml. Be sure to replace youremail@gmail.com with your email address.
version: '3.5'\n services:\n web:\n container_name: esmero-web\n image: staticfloat/nginx-certbot\n restart: always\n environment:\n CERTBOT_EMAIL: \"youremail@gmail.com\"\n ports:\n - \"80:80\"\n - \"443:443\"\n volumes:\n - /persistent/nginx_conf/conf.d:/etc/nginx/user.conf.d:ro\n - /persistent/nginx_conf/certbot_extra_domains:/etc/nginx/certbot/extra_domains:ro\n - /persistent/drupal8:/var/www/html:cached\n depends_on:\n - solr\n - php\n tty: true\n networks:\n - host-net\n - esmero-net\n php:\n container_name: esmero-php\n restart: always\n image: \"esmero/php-7.3-fpm:latest\"\n tty: true\n networks:\n - host-net\n - esmero-net\n volumes:\n - ${PWD}:/var/www/html:cached\n solr:\n container_name: esmero-solr\n restart: always\n image: \"solr:7.5.0\"\n tty: true\n ports:\n - \"8983:8983\"\n networks:\n - host-net\n - esmero-net\n volumes:\n - /persistent/solrcore:/opt/solr/server/solr/mycores:cached\n - /persistent/solrconfig:/drupalconfig:cached\n entrypoint:\n - docker-entrypoint.sh\n - solr-precreate\n - drupal\n - /drupalconfig\n # see https://hub.docker.com/_/mysql/\n db:\n image: mysql:5.7\n command: --max_allowed_packet=256M\n container_name: esmero-db\n restart: always\n environment:\n MYSQL_ROOT_PASSWORD: esmerodb\n networks:\n - host-net\n - esmero-net\n volumes:\n - /persistent/db:/var/lib/mysql:cached\n iiif:\n container_name: esmero-cantaloupe\n image: \"esmero/cantaloupe-s3:4.1.6\"\n restart: always\n ports:\n - \"8183:8182\"\n networks:\n - host-net\n - esmero-net\n volumes:\n - /persistent/iiifconfig:/etc/cantaloupe\n - /persistent/iiifcache:/var/cache/cantaloupe\n networks:\n host-net:\n driver: bridge\n esmero-net:\n driver: bridge\n internal: true\n
Note: This file shows how the folders in Step 1 are being used, and how SSL is being automatically deployed and renewed (without any human interaction other than starting the docker-compose and watching the logs).
Now copy and paste the following to create a local copy of this file:
nginx.conf. Be sure to replace all instances of yoursite.org with your own domain.
# goes into /persistent/nginx_conf/conf.d/nginx.conf\n upstream cantaloupe {\n server esmero-cantaloupe:8182;\n }\n\n server {\n listen 443 ssl;\n server_name yoursite.org;\n ssl_certificate /etc/letsencrypt/live/yoursite.org/fullchain.pem;\n ssl_certificate_key /etc/letsencrypt/live/yoursite.org/privkey.pem;\n\n client_max_body_size 512M; ## Match with PHP from FPM container\n\n root /var/www/html/web; ## <-- Your only path reference.\n\n fastcgi_send_timeout 120s;\n fastcgi_read_timeout 120s;\n fastcgi_pass_request_headers on;\n\n fastcgi_buffers 16 16k;\n fastcgi_buffer_size 32k;\n\n # Cantaloupe proxypass\n location /cantaloupe/ {\n proxy_set_header X-Forwarded-Proto $scheme;\n proxy_set_header X-Forwarded-Host $host;\n proxy_set_header X-Forwarded-Port $server_port;\n proxy_set_header X-Forwarded-Path /cantaloupe/;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n if ($request_uri ~* \"/cantaloupe/(.*)\") {\n proxy_pass http://cantaloupe/$1;\n }\n }\n\n location = /favicon.ico {\n log_not_found off;\n access_log off;\n }\n\n location = /robots.txt {\n allow all;\n log_not_found off;\n access_log off;\n }\n\n # Very rarely should these ever be accessed outside of your lan\n location ~* \\.(txt|log)$ {\n deny all;\n }\n\n location ~ \\..*/.*\\.php$ {\n return 403;\n }\n\n location ~ ^/sites/.*/private/ {\n return 403;\n }\n\n # Allow \"Well-Known URIs\" as per RFC 5785\n location ~* ^/.well-known/ {\n allow all;\n }\n\n # Block access to \"hidden\" files and directories whose names begin with a\n # period. This includes directories used by version control systems such\n # as Subversion or Git to store control files.\n location ~ (^|/)\\. {\n return 403;\n }\n\n location / {\n try_files $uri /index.php?$query_string; # For Drupal >= 7\n }\n\n location @rewrite {\n rewrite ^/(.*)$ /index.php?q=$1;\n }\n\n # Don't allow direct access to PHP files in the vendor directory.\n location ~ /vendor/.*\\.php$ {\n deny all;\n return 404;\n }\n\n # Allow Modules to be updated via UI (still we believe composer is the way) \n rewrite ^/core/authorize.php/core/authorize.php(.*)$ /core/authorize.php$1;\n\n # In Drupal 8, we must also match new paths where the '.php' appears in\n # the middle, such as update.php/selection. The rule we use is strict,\n # and only allows this pattern with the update.php front controller.\n # This allows legacy path aliases in the form of\n # blog/index.php/legacy-path to continue to route to Drupal nodes. If\n # you do not have any paths like that, then you might prefer to use a\n # laxer rule, such as:\n # location ~ \\.php(/|$) {\n # The laxer rule will continue to work if Drupal uses this new URL\n # pattern with front controllers other than update.php in a future\n # release.\n location ~ '\\.php$|^/update.php' {\n fastcgi_split_path_info ^(.+?\\.php)(|/.*)$;\n include fastcgi_params;\n # Block httpoxy attacks. See https://httpoxy.org/.\n fastcgi_param HTTP_PROXY \"\";\n fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;\n fastcgi_param PATH_INFO $fastcgi_path_info;\n fastcgi_param PHP_VALUE \"upload_max_filesize=512M \\n post_max_size=512M\";\n proxy_read_timeout 900s;\n fastcgi_intercept_errors on;\n fastcgi_pass esmero-php:9000;\n }\n\n # Fighting with Styles? This little gem is amazing.\n location ~ ^/sites/.*/files/styles/ { # For Drupal >= 7\n try_files $uri @rewrite;\n }\n\n # Handle private files through Drupal.\n location ~ ^/system/files/ { # For Drupal >= 7\n try_files $uri /index.php?$query_string;\n }\n}\n
Create the following folder:
/persistent/nginx_conf/conf.d/\n
Place the nginx.conf file inside the /conf.d/
folder.
Create also this other folder:
/persistent/nginx_conf/certbot_extra_domains/\n
Inside the /certbot_extra_domains/
folder, create a text file named after your domain (the file may or may not list additional subdomains, but it needs to exist).
cat /persistent/nginx_conf/certbot_extra_domains/yoursite.org\n
drwxr-xr-x 2 root root 4.0K Oct 5 22:46 .\ndrwxr-xr-x 4 root root 4.0K Oct 5 22:45 ..\n-rw-r--r-- 1 root root 48 Oct 5 22:46 yoursite.org\n
Optionally, create additional subdomains if needed.
cat /persistent/nginx_conf/certbot_extra_domains/yoursite.org\nsubdomain.yoursite.org\nanothersub.yoursite.org\n
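The folder and domain file can be created with a couple of commands, for example (leave the file empty if you have no extra subdomains; it only needs to exist):
mkdir -p /persistent/nginx_conf/certbot_extra_domains\nprintf 'subdomain.yoursite.org\\nanothersub.yoursite.org\\n' > /persistent/nginx_conf/certbot_extra_domains/yoursite.org\n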
Make sure you have edited the docker-compose.yml
and nginx.conf
files you created to match your own information. Also adjust the paths if you do not want the /persistent approach described in Step 1.
Run the following commands:
docker-compose up -d\ndocker ps\n
You should see this:
b5a04747ee06 staticfloat/nginx-certbot \"/bin/bash /scripts/\u2026\" 8 days ago Up 8 days 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp esmero-web\n84afae094b57 esmero/php-7.3-fpm:latest \"docker-php-entrypoi\u2026\" 8 days ago Up 8 days 9000/tcp esmero-php\n13a9214acfd0 esmero/cantaloupe-s3:4.1.6 \"sh -c 'java -Dcanta\u2026\" 8 days ago Up 8 days 0.0.0.0:8183->8182/tcp esmero-cantaloupe\n044dd5bc7245 mysql:5.7 \"docker-entrypoint.s\u2026\" 8 days ago Up 8 days 3306/tcp, 33060/tcp esmero-db\n31f4f0f45acc solr:7.5.0 \"docker-entrypoint.s\u2026\" 8 days ago Up 8 days 0.0.0.0:8983->8983/tcp esmero-solr\n
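To confirm the certificate was issued and HTTPS responds, you can watch the esmero-web logs and probe your domain (optional, but useful on first boot):
docker logs -f esmero-web\ncurl -I https://yoursite.org\n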
SSL has now been configured for your Archipelago instance.
Adding SSL to Archipelago running docker by Zachary Spalding: https://youtu.be/rfH5TLzIRIQ
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"strawberry_key_name_providers/","title":"Strawberry Key Name Providers, Solr Field, and Facet Configuration","text":"For an overview of how Strawberry Key Name Providers fit within the context of the rest of Archipelago, please see the Drupal and JSON section in our Metadata in Archipelago overview documentation.
In order to expose the Strawberry Field JSON keys (and values) for Archipelago Digital Objects (ADOs) to Search/Solr, Views, and Facets, we need to make use of a plugin system called Strawberry Key Name Providers. The following guide covers: first, configuring the Strawberry Key Name Providers; then, configuring the corresponding Solr Fields necessary for Search and Views exposure; and finally, the configuration of Facets and the placement of Facet blocks on your theme as needed.
","tags":["Strawberry Key Name Providers","Solr","Facets","Search"]},{"location":"strawberry_key_name_providers/#creating-a-strawberry-key-name-provider","title":"Creating a Strawberry Key Name Provider","text":"First, we'll start with an example of a Strawberry Field JSON key that we would like to expose:
date_created_edtf
...\n\"subject_wikidata\": [\n {\n \"uri\": \"http:\\/\\/www.wikidata.org\\/entity\\/Q55488\",\n \"label\": \"railway station\"\n }\n],\n\"date_created_edtf\": {\n \"date_to\": \"\",\n \"date_free\": \"2016~\\/2017~\",\n \"date_from\": \"\",\n \"date_type\": \"date_free\"\n},\n\"date_created_free\": null,\n...\n
Next, we are going to create a new Strawberry Key Name Provider by going to Administration > Structure > Strawberry Key Name Providers
, pressing the + Add Strawberry Key Name Provider
button, filling in the fields as follows, and saving:
Label
: Date Created EDTF
Strawberry Key Name Provider Plugin
: JmesPath Strawberry Field Key Name Provider
One or more comma separated valid JMESPaths
: date_created_edtf.date_free
Exposed Strawberry Field Property
(under the One or more comma separated valid JMESPaths
field) is set to date_created_edtf_date_free
. This is the Strawberry Field Property
that will hold the data coming from the JMESPath Query when evaluated against an ADO's JSON and will be visible as a Strawberry Field Property to Drupal and the Search API. When doing this in a production environment, you might want to change the automatically generated value and assign a simpler one to remember. You can always do this by pressing Edit
. But for the purpose of this documentation please keep date_created_edtf_date_free
.Is Date?
: \u2611
You'll notice that there are four plugins, each with different options, available for different use cases. Below you'll find each plugin with examples from the providers that come with a default deployment.
","tags":["Strawberry Key Name Providers","Solr","Facets","Search"]},{"location":"strawberry_key_name_providers/#entity-reference-jmespath-strawberry-field-key-name-provider","title":"Entity Reference JmesPath Strawberry Field Key Name Provider","text":"ismemberof
One or more comma separated valid JMESPaths: ismemberof
Entity type: node
hOCR Service
Source JSON Key used to read the Service/Flavour: ap:hocr
Subject Labels
One or more comma separated valid JMESPaths: subject_loc[*].label, subject_wikidata[*].label, subject_lcnaf_geographic_names[*].label,subject_temporal[*].label, subject_lcgft_terms[*].label, term_aat_getty[*].label, pubmed_mesh[*].label
Best Practice
As in the example below, if there is a group of flat and unique keys that you want to expose, we recommend creating one provider with this plugin and using a list of keys instead of creating multiple providers. This Provider will also auto-assign Lists of Properties from an external JSON-LD ontology/vocabulary (e.g. Schema.org). It uses a direct access approach, e.g. type
will get all values for any JSON Key named type
at any hierarchy level (across the whole JSON document) and it will also use the same exact name (type
) for the Exposed Strawberry Field Property
.
schema.org
Additional keys separated by commas: ismemberof,type,hocr,city,category,country,state,display_name,author,license
Administration > Configuration > Search and metadata > Search API > Drupal Content to Solr 8 > Fields
.Add fields
button.\ud83c\udf53 Strawberry (Descriptive Metadata source) (field_descriptive_metadata)
, e.g. for the key mapped above, look for field_descriptive_metadata:date_created_edtf_date_free
.Type
for the field is correct (date
for the example in this guide).Administration > Configuration > Search and metadata > Search API
and click on the link to the index for your Drupal data.Queue all items for reindexing
button.Index now
button.Administration > Configuration > Search and metadata > Facets
.+ Add facet
button.Facet source
: View Solr search content, display Page
Field
: \ud83c\udf53 Strawberry (Descriptive Metadata source) >> date_created_edtf_date_free (field_descriptive_metadata:date_created_edtf_date_free)
Name
: \ud83c\udf53 Strawberry (Descriptive Metadata source) >> date_created_edtf_date_free
Edit
for the facet we just created and adjusting the many options available as needed. For the example in this guide, we'll adjust the below from the default settings:Facet settings
\u2611
Date item processor
Date display
\ud83d\udd18
Actual date with granularity
Granularity
\ud83d\udd18
Year
URL alias
: sbf_date_created_edtf
Administration > Structure > Block layout
.Archipelago Base Theme
.Place block
button next to the appropriate region. For the example in this guide, we'll be placing the block in the Sidebar second
region.\ud83c\udf53 Strawberry (Descriptive Metadata source) >> date_created_edtf_date_free
Place block
button next to the facet. Once the block is added, you can drag and drop it to change its position among the existing blocks and saving.Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
","tags":["Strawberry Key Name Providers","Solr","Facets","Search"]},{"location":"strawberryfield-formatters/","title":"Strawberryfield Formatters","text":"This documentation will give a brief overview of Archipelago's Strawberryfield Formatters and how they work using the default View mode Digital Object Full View
as an example.
When taking a look at your First Digital Object, note that multiple formatters are working together to create this Display
(or View mode
). Since \"My First Digital Object\" is a Photograph
the Display
being used is Digital Object Full View
which, by default, uses formatters to:
Object Description
and Type of Resource
.When editing an ADO, at the top of the Webform page there is a tab titled Manage display
which will take us to where all the Formatters live. Take note that the DISPLAY SETTINGS
shown in the screenshot below are using the Default View mode.
Once the page loads the Default
View mode is automatically selected. However, because we are editing an object with the Media type
Photograph
, we need to edit the View mode Digital Object Full View
since it is the Default View mode for this Media type
.
The ADO Type to View mode Mapping page tells the ADOs which View mode to use by default per Media type. This page can be accessed at yoursite//admin/config/archipelago/viewmode_mapping
There are two sections in Manage display
for Digital Object Full View
: 1) Content and 2) Disabled. Moving a field into Content means this formatter will be used to the display the ADO in some way. The formatters moved to Disabled are inactive and are subsequently not being used for displaying the ADO.
There are four fields named \ud83c\udf53Strawberry
and each one is a copy of the field \ud83c\udf53Strawberry (Descriptive Metadata source)
. Since the names of the fields do not imply their function, they have been named Strawberry in four different ways (Italiano, Deutsch, Din\u00e9 Bizaad, and English) in order to organize and help users visually remember which field is doing what for the Display
.
Recall My First Digital Object at beginning of this document where there were 3 sections highlighted in Red, Blue, and Green.
\ud83c\udf53Fragola
) there is the Strawberry Field Formatter for IIIF media which takes the image stored in S3 to display the photograph with the image viewer.\ud83c\udf53Erdbeere
) there is the Strawberry Field Formatter for Custom Metadata Templates which displays the raw JSON metadata using configurable Twig templates. In this example, the default Twig template uses the JSON key type
to display the Type of Resource
.\ud83c\udf53Strawberry (Descriptive Metadata)
) there is the Strawberry Default Formatter which is used to display the Raw JSON Metadata.The decision for how your metadata is displayed is totally in your control.
Under the WIDGET
column, there is a quick description/overview of what the formatter is doing.
And by clicking on the gear icon under the OPERATIONS
column, all of the options for configuring the formatter are revealed. To use \ud83c\udf53Fragola
as an example (the Formatter for IIIF media), we can choose which JSON Key is being used to fetch the IIIF Media URLs (found inside the raw JSON being played with Strawberry Default Formatter
), the maximum height and width of the viewer, etc.
And then with \ud83c\udf53Erdbeere
(the Formatter for Custom Metadata Templates) there is the option, among many others, to configure which Twig template the formatter will use for displaying your Metadata.
More information about Managing Metadata Displays with Twig Templates can be found here.
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"strawberryfields/","title":"Strawberryfields Forever","text":""},{"location":"strawberryfields/#what-strawberry-fields-does-why-we-built-it-and-what-issues-it-addresses","title":"What Strawberry fields does, why we built it, and what issues it addresses","text":"Archipelago integrates transparently into the Drupal 8 ecosystem using its Core Content Entity System (Nodes), Discovery (Search API) and in general all its Core Components plus a few well maintained external ones.
By design (and because we think its imperative), Archipelago takes full charge of the metadata layer and associated media assets by implementing a highly configurable, smart Drupal field written in JSON named Strawberryfield
that attaches to any content.
All of JSON's internals, keys, paths, and values are dynamically exposed to the rest of the ecosystem. Strawberryfield even remembers its structure as data evolves by storing JSON paths of every little detail.
"},{"location":"strawberryfields/#nothing-is-real","title":"Nothing Is Real","text":"Archipelago includes additional companion modules, Webform_strawberryfield
and Format_strawberryfield
that extend the core metadata capabilities of the main Strawberryfield
module and allow the same flexibility to be exposed during ingest and viewing of digital objects.
The in-development Strawberry Runners
and AMI
modules further extend Archipelago's capabilities. Additional information related to these modules will be made available following initial public releases.
Webform Strawberryfield
(we had a better name) extends and integrates into the amazing Drupal Webform module
to allow Archipelago users to build any possible metadata and media, ingest and edit, workflows directly via the UI using webforms.
By not having a hardcoded ingest method, Archipelago can be used outside the GLAM community too, as a pure data repository in biological sciences, digital humanities, archives, or even as a mixed, multidisciplinary/cross-domain system.
We also added WIKIDATA
, LoC
, Getty
, and VIAF
authority querying elements to aid in linking to external Linked Open Data sources.
All these integrations are made to help local needs and community identities to survive the never-ending race for the next metadata schema. They are made to prototype, plan, and grow independently of how metadata will need to be exposed yesterday or tomorrow. And we plan to add more.
Explore what other features webform_strawberryfield
provides to help with ingesting, reading, and interacting with your metadata during that process.
Format Strawberryfield
(we had even a better name but...) deals with taking your JSON based metadata and casting
, mashing, mixing, exposing, displaying, and transforming it to allow rich interaction for users and other systems with your digital objects.
In its guts (or heart?), Archipelago does something quite simple but core to our concept of repository: it transforms in realtime the close to your needs open schema metadata that lives in strawberryfield as JSON into close to other one's fixed schema needs metadata; any destination format, using a fast, cached templating system. A templating system that is core to Drupal, called Twig
:
This templating system is exposed to Archipelago users through the UI and stored side by side in the repository as content (we named them Metadata Display entities
, but they not only serve display needs!) so users can fully control how metadata is transformed and published without touching their individual sources.
Templates or recipes can be shared, exported, ingested, updated, and adapted in many ways. Fast changes are possible without having to wait for the next mayor release of Archipelago or your favorited Metadata Schema Specs Committee agreeing on the next or the last version. Of course, this module not only handles metadata but media assets too, extracting local or remote URIs and files from your metadata and rendering them as media viewers: books, 3D models, images, panoramas, A/V with IIIF in its soul.
You can learn more about what format_strawberryfield can do and what many other possibilities are exposed through our templating system.
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"traditional-install/","title":"Traditional install","text":""},{"location":"traditional-install/#traditional-installation-notes","title":"Traditional Installation Notes","text":"For those who prefer classic approaches to system installation and configuration (instead of Dockerized deployment), this page is reserved for notes, recommendations, and guides.
Please stay tuned for additional future updates. Thank you!
Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"},{"location":"twig_extensions/","title":"Twig Extensions","text":"One advantage of Drupal's integration of the Twig template engine is the availability of extensions (filters and functions).
","tags":["Twig","Twig Filters","Twig Functions","Twig Templates","Examples","JSON","Markdown","HTML"]},{"location":"twig_extensions/#default-twig-extensions-from-symfony","title":"Default Twig Extensions from Symfony","text":"The Symfony PHP framework, which is integrated into Drupal Core, provides extensions, which we use in our default templates:
Additionally, we have some very handy Drupal-specific extensions:
Finally, we have a growing list of extensions that apply to our own specific use cases:
","tags":["Twig","Twig Filters","Twig Functions","Twig Templates","Examples","JSON","Markdown","HTML"]},{"location":"twig_extensions/#twig-filters-from-archipelago","title":"Twig Filters from Archipelago","text":"edtf_2_human_date
The edtf_2_human_date
filter takes an EDTF date and an optional language code (defaults to English), and converts it to a human-readable format using the EDTF PHP library. The list of language codes is available here.
Let's start with the following metadata fragment: Metadata Fragment
...\n\"subject_wikidata\": \"\",\n\"date_created_edtf\": {\n \"date_to\": \"\",\n \"date_free\": \"~1899\",\n \"date_from\": \"\",\n \"date_type\": \"date_free\"\n},\n\"date_created_free\": null,\n...\n
Then we pass the date_free
field through the trim
filter (as a precaution, in case there's any accidental whitespace), and then we finally hand off the field to our edtf_2_human_date
filter: edtf_2_human_date
{{ data.date_created_edtf.date_free|trim|edtf_2_human_date('en') }}\n\n{# Output: Circa 1899 #}\n
html_2_markdown
The html_2_markdown
filter, as the name suggests, converts HTML to Markdown.
We start with this string of HTML: HTML string
{% set html_string = \"\n <ul>\n <li>One thing</li>\n <li>Another thing</li>\n <li>The last thing</li>\n </ul>\n\" %}\n
Then we pass it to the filter: html_2_markdown
{{ html_string | html_2_markdown }}\n\n{# Output:\n - One thing\n - Another thing\n - The last thing\n#}\n
markdown_2_html
The markdown_2_html
filter, as the name suggests, is the reverse of the above and converts Markdown to HTML.
We start with this string of Markdown: Markdown string
{% set markdown_string = \"\n - One thing\n - Another thing\n - The last thing\n\" %}\n
Then we pass it to the filter: markdown_2_html
{{ markdown_string | markdown_2_html }}\n\n{# Output:\n <ul>\n <li>One thing</li>\n <li>Another thing</li>\n <li>The last thing</li>\n </ul>\n#}\n
sbf_json_decode
The sbf_json_decode
filter decodes a JSON-encoded string.
We start with this JSON string: JSON string
{% set json_string = \"\n {\n \\\"date_to\\\": \\\"\\\",\n \\\"date_free\\\": \\\"~1899\\\",\n \\\"date_from\\\": \\\"\\\",\n \\\"date_type\\\": \\\"date_free\\\"\n }\n\" %}\n
Then we pass it to the filter: sbf_json_decode
{% set json_decoded = json_string | sbf_json_decode %}\n\n{{ json_decoded.date_free }}\n{# Output:\n ~1899\n#}\n
","tags":["Twig","Twig Filters","Twig Functions","Twig Templates","Examples","JSON","Markdown","HTML"]},{"location":"twig_extensions/#twig-functions-from-archipelago","title":"Twig Functions from Archipelago","text":"clipboard_copy
The clipboard_copy
function, using the clipboard-copy-element library, takes a provided CSS class for the element(s) whose text we'd like to copy, and targets the CSS class of an existing HTML element on the page or generates an HTML element that can be clicked to copy the text to the user's clipboard.
Usage
clipboard_copy usage{{ clipboard_copy('CSS CLASS','OPTIONAL CSS CLASS(ES)','OPTIONAL TEXT') }}\n
This function takes three arguments:
clipboard-copy-button
) or classes (space-separated) for the copy button if auto-generating or a single, unique class if using your own existing button(s) Copy to Clipboard
) for the copy button if auto-generatingIn the examples below, we want users to be able to copy the text from three different kinds of HTML elements: a div
, an input
, and an a
hyperlink href.
First we start by giving the div element(s) we'd like to copy a unique class:
div element text<div class=\"csl-bib-body-container chicago-fullnote-bibliography\">\n <div id=\"copy-csl\" class=\"csl-bib-body\">\n <div class=\"csl-entry\">\n New York Botanical Garden. \u201cDescriptive Guide to the Grounds, Buildings and Collections.\u201d\n </div>\n </div>\n</div>\n
Then we pass the class to the function:
clipboard_copy for div element text{{ clipboard_copy('csl-bib-body','','Copy Bibliography Entry') }}\n
Note
The class can be attached to parent elements of the element we are ultimately targeting if needed, but any intermediate characters may get caught up in the copied text.
Or to give the generated button multiple classes (in case they need additional styling):
clipboard_copy for div element text{{ clipboard_copy('csl-bib-body','custom custom-button','Copy Bibliography Entry') }}\n
The result for the above div
example looks as follows:
The following is the HTML for the auto-generated button with no provided CSS class:
<button class=\"clipboard-copy-button\">\n <clipboard-copy for=\"copy-csl\" tabindex=\"0\" role=\"button\">Copy Bibliography Entry</clipboard-copy>\n</button>\n
And the following is HTML for the auto-generated button with multiple CSS classes provided:
<button class=\"custom custom-button\">\n <clipboard-copy for=\"copy-csl\" tabindex=\"0\" role=\"button\">Copy Bibliography Entry</clipboard-copy>\n</button>\n
Note
The clipboard-copy-element library requires an element ID. If the element being copied does not have an ID, one will automatically generated and assigned.
Copying input element value with auto-generated buttonFirst we start by giving the input element(s) we'd like to copy a unique class:
input element value{% if attribute(data, 'as:image')|length > 0 or attribute(data, 'as:document')|length > 0 %}\n <h2>\n <span class=\"align-middle\">Direct Link to Digital Object's IIIF Presentation Manifest V3 </span>\n <img src=\"https://iiif.io/img/logo-iiif-34x30.png\">\n </h2>\n {% set iiifmanifest = nodeurl|render ~ \"/metadata/iiifmanifest/default.jsonld\" %}\n <input type=\"text\" value=\"{{ iiifmanifest }}\" id=\"iiifmanifest_copy\" size=\"{{ iiifmanifest|length }}\" class=\"col-xs-3 copy-content\">\n{% endif %}\n
Then we pass the class to the function:
clipboard_copy for input element value{{ clipboard_copy('copy-content','',\"Copy Link to Digital Object's IIIF Presentation Manifest V3\") }}\n
Or to give the generated button multiple classes (in case they need additional styling):
clipboard_copy for input element text{{ clipboard_copy('copy-content','custom custom-button',\"Copy Link to Digital Object's IIIF Presentation Manifest V3\") }}\n
The result for the above input
example looks as follows:
The following is the HTML for the auto-generated button with no provided CSS class:
<button class=\"clipboard-copy-button\">\n <clipboard-copy for=\"iiifmanifest_copy\" tabindex=\"0\" role=\"button\">Copy Link to Digital Object's IIIF Presentation Manifest V3</clipboard-copy>\n</button>\n
And the following is HTML for the auto-generated button with multiple CSS classes provided:
<button class=\"custom custom-button\">\n <clipboard-copy for=\"iiifmanifest_copy\" tabindex=\"0\" role=\"button\">Copy Link to Digital Object's IIIF Presentation Manifest V3</clipboard-copy>\n</button>\n
Note
The clipboard-copy-element library requires an element ID. If the element being copied does not have an ID, one will automatically generated and assigned.
Copying anchor element hyperlink href with auto-generated buttonFirst we start by giving the a
element(s) we'd like to copy a unique class:
<a id=\"copy-documentation-id\" class=\"copy-documentation-class row\" href=\"https://docs.archipelago.nyc\">Archipelago Documentation</a>\n
Then we pass the class to the function:
clipboard_copy for anchor element hyperlink href{{ clipboard_copy('copy-documentation-class','',\"Copy Link to Documentation\") }}\n
Or to give the generated button multiple classes (in case they need additional styling):
clipboard_copy for anchor element text{{ clipboard_copy('copy-documentation-class','custom custom-button',\"Copy Link to Documentation\") }}\n
The result for the above anchor example looks as follows:
The following is the HTML for the auto-generated button with no provided CSS class:
<button class=\"clipboard-copy-button\">\n <clipboard-copy for=\"copy-documentation-id\" tabindex=\"0\" role=\"button\">Copy Link to Documentation</clipboard-copy>\n</button>\n
And the following is HTML for the auto-generated button with multiple CSS classes provided:
<button class=\"custom custom-button\">\n <clipboard-copy for=\"copy-documentation-id\" tabindex=\"0\" role=\"button\">Copy Link to Documentation</clipboard-copy>\n</button>\n
Note
The clipboard-copy-element library requires an element ID. If the element being copied does not have an ID, one will automatically generated and assigned.
The above examples automatically generate copy
buttons. They can be styled, but if we need more control over the button placement and styling, we can use our own button(s) by ensuring that they meet the following requirements:
<copy-clipboard>
element (this can be hidden) with a for
attribute, whose value is the ID of the source element, attached to the element acting as the button.First we start by giving the div element(s) we'd like to copy a unique class:
div element text<div class=\"csl-bib-body-container chicago-fullnote-bibliography\">\n <div id=\"copy-csl\" class=\"csl-bib-body\">\n <div class=\"csl-entry\">\n New York Botanical Garden. \u201cDescriptive Guide to the Grounds, Buildings and Collections.\u201d\n </div>\n </div>\n</div>\n
Then we generate the button and pass the class to the function:
clipboard_copy custom button for div element text<button class=\"custom-button btn btn-primary btn-sm\">\n <clipboard-copy for=\"copy-csl\">Copy Text</clipboard-copy>\n</button>\n\n{{ clipboard_copy('csl-bib-body','custom-button','') }}\n
Note
The clipboard-copy-element library requires an element ID. If the element being copied does not have an ID, one will automatically generated and assigned.
Copying input element value with custom buttonFirst we start by giving the input element(s) we'd like to copy a unique class:
input element value{% if attribute(data, 'as:image')|length > 0 or attribute(data, 'as:document')|length > 0 %}\n <h2>\n <span class=\"align-middle\">Direct Link to Digital Object's IIIF Presentation Manifest V3 </span>\n <img src=\"https://iiif.io/img/logo-iiif-34x30.png\">\n </h2>\n {% set iiifmanifest = nodeurl|render ~ \"/metadata/iiifmanifest/default.jsonld\" %}\n <input type=\"text\" value=\"{{ iiifmanifest }}\" id=\"iiifmanifest_copy\" size=\"{{ iiifmanifest|length }}\" class=\"col-xs-3 copy-content\">\n{% endif %}\n
Then we generate the button and pass the class to the function:
clipboard_copy custom button for input element value<button class=\"custom-button btn btn-primary btn-sm\">\n <clipboard-copy for=\"iiifmanifest_copy\">Copy Input</clipboard-copy>\n</button>\n\n{{ clipboard_copy('copy-content','custom-button','') }}\n
Note
The clipboard-copy-element library requires an element ID. If the element being copied does not have an ID, one will automatically generated and assigned.
Copying anchor element with custom buttonFirst we start by giving the a
element(s) we'd like to copy a unique class:
<a id=\"copy-documentation-id\" class=\"copy-documentation-class row\" href=\"https://docs.archipelago.nyc\">Archipelago Documentation</a>\n
Then we generate the button and pass the class to the function:
clipboard_copy custom button for anchor element hyperlink href<button class=\"custom-button btn btn-primary btn-sm\">\n <clipboard-copy for=\"copy-documentation-id\">Copy Link</clipboard-copy>\n</button>\n\n{{ clipboard_copy('copy-documentation-class','custom-button','') }}\n
Note
The clipboard-copy-element library requires an element ID. If the element being copied does not have an ID, one will automatically generated and assigned.
sbf_entity_ids_by_label
The sbf_entity_ids_by_label
function, as the name suggests, provides a Drupal entity ID for the following Drupal entity types:
If we start with the user entity jsonapi
, we can do the following: sbf_entity_ids_by_label
{% set jsonapi_user_ids=sbf_entity_ids_by_label('jsonapi','user','') %}\n\n{% for jsonapi_user_id in jsonapi_user_ids %}\n {{ jsonapi_user_id }}\n{% endfor %}\n\n{# Output:\n 3\n#}\n
As you can see above, the sbf_entity_ids_by_label
function takes three arguments:
We then loop through the returned result, which is an array of IDs (in this case, just a single one).
sbf_search_api
The sbf_search_api
function executes a search API query against a specified index.
{% set search_results=sbf_search_api('default_solr_index','strawberry',[],{'status':1},[]) %}\n{% set labels=search_results['results']['13']['fields']['label_2'] %}\n<ul>\n {% for label in labels %}\n <li>{{ label }}</li>\n {% endfor %}\n</ul>\n
As you can see above, the sbf_search_api
function takes eight arguments:
For this example we end up with the following output:
The Twig Recipe Cards below reference common Metadata transformation, display, or other use cases/needs you may have in your own Archipelago repository.
","tags":["Twig","Twig Filters","Twig Functions","Twig Templates","Examples","JSON"]},{"location":"twig_recipe_cards/#getting-started-working-with-twig-in-archipelago","title":"Getting Started Working with Twig in Archipelago","text":"We recommend reading through our main Metadata Display Preview and Twigs in Archipelago documentation overview guides, and also our Working with Twig primer before diving into applying any of these recipes in your own Archipelago.
","tags":["Twig","Twig Filters","Twig Functions","Twig Templates","Examples","JSON"]},{"location":"twig_recipe_cards/#ami-ingest-template-adaptations-common-use-cases-and-twig-recipe-cards","title":"AMI Ingest Template Adaptations -- Common Use Cases and Twig Recipe Cards:","text":"Use Case #1: I used AMI LoD Reconciliation to reconciliate the values in my AMI Set Source CSV mods_subject_topic
column against both LCSH and Wikidata. I would like to map the reconciliated values into the Archipelago default subject_loc
and subject_wikidata
JSON keys.
Twig Recipe Card for Use Case #1:
{#- LCSH -#}\n {% if data.mods_subject_topic|length > 0 %}\n \"subject_loc\": {{ data_lod.mods_subject_topic.loc_subjects_thing|json_encode|raw }},\n {% endif %}\n {#- Wikidata -#} \n {% set subject_wikidata = [] %}\n {% for source, reconciliated in data_lod %}\n {% if (('subject' in source) or ('genre' in source)) and reconciliated.wikidata_subjects_thing and reconciliated.wikidata_subjects_thing|length > 0 %}\n {% set subject_wikidata = subject_wikidata|merge(reconciliated.wikidata_subjects_thing) %}\n {% endif %}\n {% endfor %} \n
Use Case #2: I have both columns containing a mods_subject_authority_lcsh_topic
(labels) and corresponding mods_subject_authority_lcsh_valueuri
(URIs) data in my AMI Set Source Data CSV that I would like to pair and map into the Archipelago default subject_loc
JSON key.
Twig Recipe Card for Use Case #2:
{%- if data['mods_subject_authority_lcsh_topic'] is defined and not empty -%}\n {% set subjects = data[\"mods_subject_authority_lcsh_topic\"] is iterable ? data[\"mods_subject_authority_lcsh_topic\"] : data[\"mods_subject_authority_lcsh_topic\"]|split('|@|') %} \n {% set subject_uris = data[\"mods_subject_authority_lcsh_valueuri\"] is defined ? data[\"mods_subject_authority_lcsh_valueuri\"] : '' %} \n {% set subject_uris_list = subject_uris is iterable ? subject_uris : subject_uris|split('|@|') %}\n \"subject_loc\": [\n {% for subject in subjects %}\n {\n \"uri\": {{ subject_uris_list[loop.index0]|default('')|json_encode|raw }},\n \"label\": {{ subject|json_encode|raw }}\n }\n {{ not loop.last ? ',' : '' }}\n {% endfor %}\n ],\n{%- endif -%}\n
Use case #3: I have dc.creator
and dc.contributor
columns in my AMI Set Source Data CSV with simple JSON-encoded values (e.g. source column cells contain [\"Name 1, Name 2\"]
) that I would like to map to the Archipelago default creator_lod
JSON key.
Twig Recipe Card for Use Case #3:
{% if data['dc.creator']|length > 0 or data['dc.contributor']|length > 0 %}\n {% set total_creators = (data[\"dc.creator\"]|length) + (data[\"dc.contributor\"]|length) %}\n {% set current_creator = 0 %} \n \"creator_lod\": [\n {% for creator in data[\"dc.creator\"] %}\n {% set current_creator = current_creator + 1 %}\n {% set creator_source = data[\"dc.creator\"][loop.index0] %}\n {\n \"name_uri\": null,\n \"agent_type\": null,\n \"name_label\": {{ creator|json_encode|raw }},\n \"role_label\": \"Creator\",\n \"role_uri\": \"http://id.loc.gov/vocabulary/relators/cre\"\n }\n {{ current_creator != total_creators ? ',' : '' }}\n {% endfor %}\n {% for creator in data[\"dc.contributor\"] %}\n {% set current_creator = current_creator + 1 %}\n {% set creator_source = data[\"dc.contributor\"][loop.index0] %}\n {\n \"name_uri\": null,\n \"agent_type\": null,\n \"name_label\": {{ creator|json_encode|raw }},\n \"role_label\": \"Contributor\",\n \"role_uri\": \"http://id.loc.gov/vocabulary/relators/ctb\"\n }\n {{ current_creator != total_creators ? ',' : '' }}\n {% endfor %} \n ],\n{% endif %} \n
Use Case #4: I have a mix of different columns containing Creator/Contributor/Other-Role-Types Name values with or without corresponding URI values that I would like to map to the default Archipelago creator_lod
JSON key. Twig Recipe Card for Use Case #4:
Click to view the full Recipe Card{#- START Names from LoD and MODS CSV with/without URIS. -#} \n {# Updated August 26th 2022, by Diego Pino. New checks/logic for mods_name_type_role_composed_or_more_namepart\n - Check first IF for a given namepart there is already reconciliaton. \n - IF not i check if there is a matching valueuri, \n - If not leave the URL empty and use the value in the namepart (label) only?\n - Only check/use mods_name_corporate/personal_namepart field IF there are no other fields\n - That specify Roles. Since normally in ISLANDORA that field (no role) is a Catch all names one\n - And in that case USE creator as the default ROLE\n #}\n {%~ set creator_lod = [] -%} \n {# Used to keep track of parts after the type (corporate, etc) that are no roles\n but authority properties. Add more if you find them #}\n {% set roles_that_are_no_roles = ['authority_naf','authority_marcrelator',''] %}\n {# Used to keep track of the ones that are reconciled already #}\n {%- set name_has_creator_lod = [] -%}\n {%- for key,value in data_lod -%}\n {%- if key starts with 'mods_name_' and key ends with '_namepart' -%}\n {# If there is mods_name_SOMETHING_namepart in data_lod we keep track so we \n do not try afterwards to use that Sources KEY from the CSV.\n #}\n {%- set name_has_creator_lod = name_has_creator_lod|merge([key]) -%}\n {# Now we remove 'mods_name_' and '_namepart' #}\n {%- set name_type_and_role = key|replace({'mods_name_':'', '_namepart':''}) -%}\n {# We will only target personal or corporate. If any of those are missing we skip? #}\n {% set name_type = null %}\n {%- if name_type_and_role starts with 'personal_' -%}\n {% set name_type = 'personal' %}\n {%- elseif name_type_and_role starts with 'corporate_' -%}\n {%- set name_type = 'corporate' -%}\n {%- endif -%}\n {%- if name_type is not empty -%}\n {#- Now we remove 'type', e.g 'corporate_' -#}\n {%- set name_role = name_type_and_role|replace({(name_type ~ '_'):''}) -%}\n {# in case the name_role contains one of roles_that_are_no_roles, e.g\n something like `creator_authority_marcrelator` we remove that #}\n {% for role_that_is_no_role in roles_that_are_no_roles %}\n {%- set name_role = name_role|replace({(role_that_is_no_role):''}) -%}\n {% endfor %}\n {# After removing all what can not be a role if we end with an empty #}\n {% if name_role|trim|length == 0 %}\n {%- set name_role = \"creator\" %}\n {% else %}\n {%- set name_role = name_role|replace({'\\\\/':'//' , '_':' '})|trim -%}\n {% endif %}\n {#- we iterate over all possible vocabularies and fetch the reconciliated names from them (if any) -#}\n {%- for approach, names in value -%} \n {#- if there are actually name pairs (name and uri) that were reconciliated we use them -#}\n {%- if names|length > 0 -%}\n {#- we call the ami_lod_reconcile twig extension with the role label using the LoC Relators endpoint in english and get 1 result -#}\n {%- set role_uri = ami_lod_reconcile(name_role|lower|capitalize,'loc;relators;thing','en',1) -%}\n {#- for each found name pair in a list of possible LoD reconciliated elements we generate the final structure that goes into \"creator_lod\" json key -#}\n {%- for name in names -%} \n {%- set creator_lod = creator_lod|merge([{'role_label': name_role|lower|capitalize, 'role_uri': role_uri[0].uri, \"agent_type\": name_type, \"name_label\": name.label, \"name_uri\": name.uri}]) -%} \n {%- endfor -%}\n {%- endif -%}\n {%- endfor -%}\n {% endif -%}\n {%- endif -%} \n {%- endfor -%}\n {# Now go for the RAW CSV data for names #}\n {%- for 
key,value in data -%}\n {# here we skip values previoulsy fetched from LoD and stored in name_has_creator_lod #}\n {%- if key not in name_has_creator_lod and key starts with 'mods_name_' and key ends with '_namepart' -%}\n {# If there is mods_name_SOMETHING_namepart in data_lod we keep track so we \n do not try afterwards to use that Sources KEY from the CSV.\n #}\n {%- set name_has_creator_lod = name_has_creator_lod|merge([key]) -%}\n {# Now we remove 'mods_name_' and '_namepart' #}\n {%- set name_type_and_role = key|replace({'mods_name_':'', '_namepart':''}) -%}\n {# We will only target personal or corporate. If any of those are missing we skip? #}\n {%- set name_type = null -%}\n {%- if name_type_and_role starts with 'personal_' -%}\n {%- set name_type = 'personal' -%}\n {%- elseif name_type_and_role starts with 'corporate_' -%}\n {%- set name_type = 'corporate' -%}\n {%- endif -%}\n {% if name_type is not empty %}\n {# Now we remove 'type', e.g 'corporate_' #}\n {%- set name_role = name_type_and_role|replace({(name_type ~ '_'):''}) -%}\n {# in case the name_role contains one of roles_that_are_no_roles, e.g\n something like `creator_authority_marcrelator` we remove that #}\n {% for role_that_is_no_role in roles_that_are_no_roles %}\n {%- set name_role = name_role|replace({(role_that_is_no_role):''}) -%}\n {% endfor %}\n {# After removing all what can not be a role if we end with an empty #}\n {% if name_role|trim|length == 0 %}\n {%- set name_role = \"creator\" %}\n {% else %}\n {%- set name_role = name_role|replace({'\\\\/':'//' , '_':' '})|trim -%}\n {% endif %}\n {# Now we check if there is a corresponding _valueuri for this #}\n {% set name_uris = [] %}\n {%- if data[('mods_name_' ~ name_type_and_role ~ '_valueuri')] is not empty \n and data[('mods_name_' ~ name_type_and_role ~ '_valueuri')] != '' -%}\n {%- set name_uris = data[('mods_name_' ~ name_type_and_role ~ '_valueuri')]|split('|@|') -%}\n {%- endif -%}\n {%- set role_uri = ami_lod_reconcile(name_role|lower|capitalize,'loc;relators;thing','en',1) -%}\n {#- we split and iterate over the value of the mods_name key -#}\n {# NOTE. THIS IS TARGETING Anything after the year 1000, or 2000 #}\n {%- for index,name in value|replace({'|@|1':', 1', '|@|2':', 2', '|@|-':', -'})|split('|@|') -%}\n {%- if name is not empty and name|trim != '' -%}\n {%- set name_uri = null -%}\n {# Here we can check if one of the names IS not a name (e.g a year? #}\n {#- we call the ami_lod_reconcile twig extension with the role label using the LoC Relators endpoint in english and get 1 result -#}\n {%- if name_uris[index] is defined and name_uris[index] is not empty -%}\n {%- set name_uri = name_uris[index] -%}\n {%- endif -%}\n {%- set creator_lod = creator_lod|merge([{'role_label': name_role|lower|capitalize, 'role_uri': role_uri[0].uri, \"agent_type\": name_type, \"name_label\": name, \"name_uri\": name_uri}]) -%}\n {%- endif -%}\n {%- endfor -%}\n {%- endif -%}\n {%- endif -%}\n {%- endfor ~%}\n {# Use reduce filter + other logic for depulicating #}\n {% set creator_lod = creator_lod|reduce((unique, item) => item in unique ? unique : unique|merge([item]), []) %}\n \"creator_lod\": {{ creator_lod|json_encode|raw -}},\n {#- END Names from LoD and MODS CSV with/without URIS. -#}\n
Use Case #5: I have geographic location information that I would like to reconciliate against Nominatim and map into the default Archipelago 'geographic_location' key. I have AMI Source Data CSVs which contain values/labels and some which contain coordinates.
Twig Recipe Card for Use Case #5 with variation notes:
{#- <-- Geographic Info and terms:\n Includes options for geographic info for:\n - Nominatim lookup by value/label\n - Nominatim lookup by coordinates \n -#}\n {#- use value for Nominatim search -#}\n {% if data.mods_subject_geographic|length > 0 %}\n {% set nominatim_from_label = ami_lod_reconcile(data.mods_subject_geographic,'nominatim;thing;search','en') -%}\n \"geographic_location\": {{ nominatim_from_label|json_encode|raw }},\n {% endif %}\n {#- use coordinates for Nominatim search, if provided -#}\n {% if data.mods_subject_cartographics_coordinates|length > 0 %}\n {% set nominatim_from_coordinates = ami_lod_reconcile(data.mods_subject_cartographics_coordinates,'nominatim;thing;reverse','en') -%}\n \"geographic_location\": {{ nominatim_from_coordinates|json_encode|raw }},\n {% endif %}\n{#- Geographic Info and terms --> #} \n
Use Case #6: I have date values in a dc.date
column that contain instances of 'circa' or 'Circa' where I would like to replace with the EDTF-friendly '~' instead and map to the Archipelago default 'date_created_edtf' JSON key.
Twig Recipe Card for Use Case #6:
{% if data['dc.date'] is defined %}\n {% set datecleaned = data['dc.date']|replace({\"circa \":\"~\", \"Circa \":\"~\"}) %}\n \"date_created_edtf\": {\n \"date_to\": \"\",\n \"date_free\": {{ datecleaned|json_encode|raw }},\n \"date_from\": \"\",\n \"date_type\": \"date_free\"\n }, \n {% endif %}\n
More recipe cards will be added over time. Please see our Archipelago Contribution Guide to learn about contributing your own recipe card or other documentation.
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
","tags":["Twig","Twig Filters","Twig Functions","Twig Templates","Examples","JSON"]},{"location":"utility_scripts/","title":"Utility Scripts","text":"If you've already followed deployment guides for archipelago-deployment and archipelago-deployment-live, you may have used some shell scripts that archipelago provides. The scripts are available in the scripts/archipelago/
and drupal/scripts/archipelago/
folders respectively.
Metadata Display Entity Twig Templates can be exported out of and imported into both local and remote deployments with the following script: import_export.sh
. The script can be run interactively or non-interactively.
Docker host vs. Docker container
Because the script uses the Docker .env
file for the JSONAPI user and URL by default, we recommend running this directly on the host.
Running the script interactively will guide you through a number of prompts to configure the run and then import from or export to an existing folder (or one that will be created).
./import_export.sh -n\n
","tags":["Bash","Scripts","DevOps"]},{"location":"utility_scripts/#non-interactive-mode","title":"Non-interactive Mode","text":"To run the command non-interactively provide the required and optional parameters with the necessary arguments as needed.
Options for Non-interactive Mode
-i
or -e
(required)
\u00a0\u00a0\u00a0\u00a0Import or export, respectively, Metadata Display Entity Twig Templates using a local folder.
-s path
(required)
\u00a0\u00a0\u00a0\u00a0The absolute path of the local folder to export to or import from.
-j path/filename
(only required if the .env
file containing the JSONAPI user and password is in a non-standard location)
\u00a0\u00a0\u00a0\u00a0The absolute path to the .env
file containing the JSONAPI user and password.
-d url
(required if URL is not in .env
file or importing to or exporting from a remote deployment)
\u00a0\u00a0\u00a0\u00a0The URL of the archipelago deployment.
-k
(optional)
\u00a0\u00a0\u00a0\u00a0Keep any existing files ending with .json
in the specified folder before exporting (the default is to delete them).
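For instance, a single non-interactive export combining the options above might look like the following sketch; the folder, URL, and .env path are placeholders for your own values:
./import_export.sh -e -s /home/user/metadatadisplay_export -d https://your-archipelago-site.org -j /home/user/custom.env -k\n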
JSONAPI User
The JSONAPI user credentials, by default, will be read from the .env
files in the following locations (relative to the root of the deployment):
archipelago-deployment-live: ./deploy/ec2-user/.env
archipelago-deployment: ./.env
A separate file can also be passed as an argument using the -j
option.
JSONAPI_USER=jsonapi\nJSONAPI_PASSWORD=jsonapi\n
Exporting from local archipelago-deployment-live
./import_export.sh -e -s /home/ec2-user/metadatadisplay_export\n
After logging into the archipelago-deployment-live host, the above command will delete any files with the .json
extension if the destination folder exists. Otherwise, the folder will be created. The JSONAPI user credentials and domain from the .env
file will then be used to download the files, so please make sure these are set. Exporting from local archipelago-deployment
./import_export.sh -e -s /home/user/metadatadisplay_export -d http://localhost:8001\n
This will work the same way as the above example, but the URL is passed as an argument in this case since the .env
file will not (in most cases) contain the domain. As above, the JSONAPI user credentials will have to be set in the .env
file. Exporting from remote archipelago-deployment-live
./import_export.sh -e -s /home/user/metadatadisplay_export -d https://archipelago.nyc\n
This is essentially the same as the example directly above, except that in this case the JSONAPI user credentials in the .env
file will have to be set to the ones used to access the remote instance. Importing locally into archipelago-deployment-live
./import_export.sh -i -s /home/user/metadatadisplay_import\n
This is essentially the same as the first example above, except that the import option (-i
) is used. The folder name is changed for the sake of example, but you can use the same folder that was used for exporting. Importing locally into archipelago-deployment
./import_export.sh -i -s /home/user/metadatadisplay_import -d http://localhost:8001\n
As in the example directly above, this corresponds to the example for exporting with a local archipelago-deployment instance, except that the import option (-i
) is used. Importing from local instance into remote archipelago-deployment-live
./import_export.sh -i -s /home/user/metadatadisplay_import -d https://archipelago.nyc\n
In this example, the locally exported files are being imported into a remote instance. As in the above examples with remote instances, the JSONAPI user credentials need to be set in the .env
file to those with access to the remote instance.","tags":["Bash","Scripts","DevOps"]},{"location":"utility_scripts/#automatic-deployment-script","title":"Automatic Deployment Script","text":"If you're frequently deploying locally with archipelago-deployment, you may want to use the automated deployment script available at scripts/archipelago/devops/auto_deploy.sh
. The script is interactive and can be called from the root of the deployment, e.g. /home/user/archipelago-deployment/
:
Automatic Deployment
scripts/archipelago/devops/auto_deploy.sh
Follow the prompts and select your options to complete the deployment.
","tags":["Bash","Scripts","DevOps"]},{"location":"webforms/","title":"Webforms in Archipelago","text":"The Webform Strawberryfield module provides Drupal Webform ( == awesome piece of code) integrations for StrawberryField so you can really have control over your Metadata ingests. These custom elements provide Drupal Webform integrations for Archipelago\u2019s StrawberryField so you can have fine grained and detailed control over your Metadata ingests and edits.
","tags":["Webform","Form Mode","Webform Elements"]},{"location":"webforms/#instructions-and-guides","title":"Instructions and Guides","text":"Use these webforms or their elements to create a custom webform for your own repository/project needs
Archipelago Default Deployment Webforms
Descriptive Metadata
Digital Object Collection
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
","tags":["Webform","Form Mode","Webform Elements"]},{"location":"webformsasinput/","title":"How to Create a Webform as an Input Method for Archipelago Digital Objects (ADO) / Primer on Display Modes","text":"Drupal 8/9 provides a lot of out-of-the-box functionality to setup the way Content Entities (Nodes or in our case ADOs) are exposed to users with the proper credentials. That functionality lives under the \"Display Modes\" and can be accessed at yoursite/admin/structure/display-modes
.
In a few quick words, the Display Mode Concept covers two things: formatting your Content Entities and their associated Fields, so that when a user lands on a Content Page they are displayed in a certain, hopefully pleasing, way; and how users with the proper Credentials can fill in or edit values for each field a Content Entity provides.
First, formatting output (basically building the front-facing page for each content entity) is done by a View Mode
. Second, defining how/what input method you are going to use to create or edit Content Entities is handled by a Form Mode
. Both Modes are, in Drupal Lingo, Configuration Entities: they provide things you can configure, you can name them and reuse them, and those configurations can all be exported and reimported using YAML files. Both Modes also have the following in common:
The main difference, other than their purpose (Output vs. Input), is that on View Modes the settings you apply to each field are associated with \"Formatters\", while on Form Modes the settings you apply to each field are connected to \"Widgets\".
So, to sum up, this is what lives under the Concept of a \"Display Mode\":
","tags":["Webform","Form Mode","View Mode","Display Mode","Manage Display","Manage Form Display","Handler"]},{"location":"webformsasinput/#view-mode","title":"View Mode","text":"SBF
will provide a large list of possible Formatters
, like IIIF driven viewers, Video formatters, Metadata Display (Twig template driven) ones, etc. This is because a SBF type of field has much more than just a text value: it contains a full graph of metadata and properties, including links to Files and provenance metadata. Which Widgets are available will depend on the \"type\" of field the Content Entity has.
Node
Title will have a single Text Input with some options, like the size of the Textfield used to feed it.SBF
(strawberryfield), will provide a larger list of possible Widgets, ranging from raw JSON input (which you could select if your data was already in the right format) to the reason we are reading this: Webform driven Widgets
. These Widgets include:If you chose a widget other than the raw JSON, the widget will take the raw JSON to build, massage and enrich the data so that it can be presented in a visual format by the SBF. This is because a SBF type of field has much more than just a text value. It contains a full graph of metadata and properties, inclusive links to Files and provenance metadata, which for example allows us to use an Upload field directly in the attached/configured webform. - Form modes also have an additional benefit. Each one can have fine grained permissions. That way you can have many different Form Modes, but allow only certain ones to be visible, or usable by users of a given Drupal Role.
Good question! So, to enable, configure, and customize these Display Modes you have to navigate to your Content Type
Configuration page in your running Archipelago. This is found at /admin/structure/types
. Note: the way things are named in Drupal can be confusing to even the most deeply committed Drupal user, so bear in mind some terms will change. Feel free to read and re-read.
You can see that for every existent Content Type, there is a drop down menu with options:
On the top you will see all your View Modes Listed, with the Default
one selected and expanded. The Table that follows has one row per Field attached/part of this Content Type. Some of the fields are part of the Content Type itself, in this case Digital Object (bundled) and some other ones are common to every Content Entity derived from a Node. The \"Field\" column contains each field name (not their type, reason why you don't see Strawberry Field there!) but we can tell you right now that there is one, named \"Descriptive Metadata\", that is of SBF
type.
How do we know that the field named \"Descriptive Metadata\" is a Strawberryfield? Well, we set-up the Digital Object Content Type for you that way, but also you can know what we know by pressing on \"Manage fields\" Tab on the top (don't forget to come back to \"Manage display\", afterwards!)
Also Surprise: You Content Entity has really really just 2 fields! And that, friends, is one of the secret ingredients of Archipelago. All goes into a Single Field. But wait: i see more fields in my Manage Display table. Why? Well. Some of them are base fields, part of what a Drupal Node is: base field means you can not remove them, they are part of the Definition itself. One obvious one is the Title
.
But there are also some fields very particular to Archipelago: You can see there are also ones named \"Formatter Object Metadata\", \"Media\" and one named \"Static Media\"!. Where does come from? Those are also Strawberryfields. It sounds confusing but it is really simple. They are really not \"fields\" in the sense of having different data than \"Descriptive Metadata\". Those are In Memory, realtime, copies of the \"Descriptive Metadata\" SBF field and are there to overcome one limitation of Drupal 8:
Each Field can have a single \"Formatter\" setup per field.
But we want to re-usue the JSON data to show a Viewer, Show Metadata as HTML directly on the ADO/NODE landing page, and we want also to, for example, format sometimes images as Thumbnails and not using a IIIF viewer only. This CopyFields (Legal term) have also a nice Performance advantage. Drupal needs to fetch only once the data from the real Field, \"Descriptive Metadata\", from the database. And then just makes the data available in real time to its copies. That makes all fast, very very fast! And of course flexible. As you dig more into Archipelago you will see the benefits of this approach. Finally, if you need to, you can make more CopyFields. But the reality is, there is a single, only one, SBF in each Digital Object and its named \"Descriptive Metadata\".
You can also simply not care about the type and trust the UI. It will just show Formatters that are right for each type and expose Configuration options (and a little abstract of the current ones) under the Widget Column. Operations Columns allows you to setup each Widget. Widget term here is a bit confusing. These are not really Widget in terms of Data Input, but in terms of \"Configuration\" Input. But D8/9 is evolving and its getting better. Those settings apply always only to the current View Mode.
You can play with this, experiment and change some settings to get more comfortable. We humbly propose you that you complete this info with the official Drupal 8 Documentation and also apply custom settings to your own, custom View Mode so you don't end changing base, expected functionality while you are still learning.
","tags":["Webform","Form Mode","View Mode","Display Mode","Manage Display","Manage Form Display","Handler"]},{"location":"webformsasinput/#manage-form-display","title":"Manage Form Display","text":"On the top you will see all your Form Modes Listed, with the Default
one selected and expanded. The Table that follows has one row per Field attached/partof this Content Type. The list of fields here is shorter, the SBF CopyFields are not present because all data goes really only into real fields. Also some other, display only ones (means you can not modify them) will not appear here. Again, Some of the fields are part of the Content Type itself, in this case Digital Object (bundled) and some other ones are common to every Content Entity derived from a Node. \"Field\" column contains each field name and the Widget Column allows you to select what type of Input you are going to use to feed it on Ingest/edit. On the right you will see again a little gear, that allows you to configure the settings for a particular Widget. Those settings apply always only to the current Form Mode.
So. The one we want to understand is the one attached to the \"Descriptive Metadata\" field. Currently one named \"Strawberryfield webform based input with inline rendering\". There are other two. But let's start with this. Press on the Gear to the right on the same row.
AS you can see there are not too many options. But, the main, first Text input is an Autocomplete field that will resolve against your existing Webforms. So, guess what. If you want to use your own Webform to feed a SBF, what do you do? You type the name, let the autocomplete work, select the right Webform, maybe your own custom one, and the you press \"Update\". Once that is done you need to \"Save\" your Form Mode (hint, button at the bottom of the page).
We wish life was that easy (and it will once we are done with refining Drupal's UI) but for now there are some extra things you need to do to make sure the Webform, your custom one, can speak JSON. The default one you get named also \"Descriptive Metadata (descriptive_metadata)\", same as the field, is already setup to be used. Means if you create a new Webform by Copying that one, you can start using it inmediately. But if you created one from scratch (Different tutorial) you need to setup some settings.
","tags":["Webform","Form Mode","View Mode","Display Mode","Manage Display","Manage Form Display","Handler"]},{"location":"webformsasinput/#setting-up-a-webform-to-talk-strawberryfield","title":"Setting up a Webform to talk Strawberryfield","text":"Navigate to your Webform Managment form at /admin/structure/webform
If you already created a Webform (different tutorial on how to do that) you will see your own named one in that list. I created for the purpose of this documentation one named \"Diego Test\" (Hi, i'm Diego..) and on the most right Column, \"Operations\" you will haven an Drop Down Menu. On your own Webform row, press on \"Settings\".
First time, this can be a little bit intimitading. We recommend going baby steps since the Webform Module is a very powerful one but also exposes you to a lot (and sometimes too many) options. Even more, if you are new to Webforms, we recomment you to copy the \"Descriptive Metadata\" Webform we provided first, and make small changes to it (starting by naming it your own way!) so you can see how that affects your workflow and experience, and how that interacts with the created metadata. The Webform Module provides testing and building capabilities, so you have a Playground there before actually ingesting ADOs. Copying it will also make all the needed settings for SBF interaction to be moved over, so your work will be much easier.
But we know you did not do that (where is the fun there right?). So lets setup one from scratch.
","tags":["Webform","Form Mode","View Mode","Display Mode","Manage Display","Manage Form Display","Handler"]},{"location":"webformsasinput/#general-settings","title":"General Settings","text":"Gist here is (look at the screenshot and copy the settings):
Gist here is (again, look at the screenshot and copy the settings):
The glue, the piece of resistance. The handler is the one that knows how to talk to a SBF. In simple words, the handler (any handler) provides functionality that does something with a Webform Submission. The one that you want to select here, is the \"Strawberryfield harvester\" handler. Add it, name it whatever you like (or copy what you see in the screenshot) and make sure you select, if running using our deployment strategy, \"S3 File System\" as the option for \"Permanent Destination for uploaded files\". The wording is tricky there, its not really Permanent, since that is handled by Archipelago, but more to Temporary, while working in ingesting an Object, destination for the Webform. Its not really wrong neither. Its permanent for the Webform, but we have better plans for the files and metadata!
Save your settings. And you are ready to roll. That webform can now be used as a Setting for any of the StrawberryField Widgets that use Webforms.
Finally (the real finally). Archipelago encourages at least one Field/JSON key to be present always. One with \"type\" as key value. So make sure that your Custom Webform has that one.
There are two ways of doing that:
You can copy how it is setup from the provided Webform's Elements, from the main Descriptive Metadata Webform and then add one \"select\" element to yours using the same \"type\" \"key\".Important in Archipelago is always the key value since that is what builds the JSON for your metadata. The Description can be any, but for UI consistency you could want to keep it the same across all your webforms.
Or, advanced, you can use the import/export capabilities (Webforms are just YAML files!) and export/copy your custom one as text, add the following element before or after some existing elements there
type:\n '#type': select\n '#title': 'Media Type'\n '#options': schema_org_creative_works\n '#required': true\n '#label_attributes':\n class:\n - custom-form-input-heading\n
And then reimport.
Having a \"type\" value will make your life easier. You don't need it, but everything works smoother that way.
Since you have a single Content Type named Digital Object, having a Webform field that has as key \"type\", which leads to a \"type\" JSON key, allows you to discern the Nature of your Digital Object, book or Podcast, Image or 3D and do smart, nice things with them.
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
","tags":["Webform","Form Mode","View Mode","Display Mode","Manage Display","Manage Form Display","Handler"]},{"location":"workingtwigs/","title":"Working with Twig in Archipelago","text":"The following information can also be found in this Presentation from the \"Twig Templates and Archipelago\" Spring 2021 Workshop:
Note
All examples shown below are using the following JSON snipped from Laddie the dog running in the garden, Bronx, N.Y., undated [c. 1910-1918?].
Click to view image of the JSON snippet. Click to view this snippet as JSON.{\n \"type\": \"Photograph\",\n \"label\": \"Laddie the dog running in the garden, Bronx, N.Y., undated [c. 1910-1918?]\",\n \"owner\": \"New-York Historical Society, 170 Central Park West, New York, NY 10024, 212-873-3400.\",\n \"rights\": \"This digital image may be used for educational or scholarly purposes without restriction. Commercial and other uses of the item are prohibited without prior written permission from the New-York Historical Society. For more information, please visit the New-York Historical Society's Rights and Reproductions Department web page at http:\\/\\/www.nyhistory.org\\/about\\/rights-reproductions\",\n \"language\": [\n \"English\"\n ],\n \"documents\": [],\n \"publisher\": \"\",\n \"ismemberof\": \"111\",\n \"creator_lod\": [\n {\n \"name_uri\": \"\",\n \"role_uri\": \"http:\\/\\/id.loc.gov\\/vocabulary\\/relators\\/pht\",\n \"agent_type\": \"personal\",\n \"name_label\": \"Stonebridge, George Ehler\",\n \"role_label\": \"Photographer\"\n }\n ],\n \"description\": \"George Ehler Stonebridge (d. 1941) was an amateur photographer who lived and worked in the Bronx, New York.\",\n \"subject_loc\": [\n {\n \"uri\": \"http:\\/\\/id.loc.gov\\/authorities\\/subjects\\/sh85038796\",\n \"label\": \"Dogs\"\n }\n ],\n \"date_created\": \"1910-01-01\"\n}\n
","tags":["Twig","Twig Templates","Twig Filters","Twig Functions","Examples","JSON"]},{"location":"workingtwigs/#first-know-your-data","title":"First: Know Your Data","text":"Understanding the basic structure of your JSON data.
Single JSON Value.
\"type\": \"Photograph\"
Multiple JSON Values (Array of Enumeration of Strings)
- For \"language\": [\"English\",\"Spanish\"]
- \"language\" = JSON Key or Property - \"[\"English\",\"Spanish\"]\" = Multiple JSON Values (Array of Enumeration of Strings)
Multiple JSON Values (Array of Enumeration of Objects)
\"subject_loc\":[{\"uri\":\"http://..\",\"label\":\"Dogs\"},{\"uri\":\"http://..\",\"label\":\"Pets\"}]
Data is known as Context in Twig Lingo.
All your JSON Strawberryfield Metadata is accessible inside a Variable named data in your twig template.
You can access the values by using data DOT Property (attribute) Name
.
data.type
will contain \"Photograph\"data.language
will contain [ \"English\" ]data.language[0]
will contain \"English\" data.subject_loc
will contain [{ \"uri\":\"http://..\",\"label\": \"Dog\" }]data.subject_loc.uri
will contain \"http://..\"
data.subject_loc.label
will contain \"Dog\"Note
You also have access to other info in your context node
: such asnode.id
is the Drupal ID of your Current ADO; Also is_front
, language
, is_admin
, logged_in
; and more!
Twig for Template Designers
https://twig.symfony.com/doc/3.x/templates.html
","tags":["Twig","Twig Templates","Twig Filters","Twig Functions","Examples","JSON"]},{"location":"workingtwigs/#simple-examples-using-printing-statements","title":"Simple examples using Printing Statements","text":"Single JSON Value Example
Twig templateHello I am a {{ data.type }} and very happy to meet you\n
Rendered outputHello I am a Photograph and very happy to meet you\n
Multiple JSON Values Example
Twig templateHello I was classified as \"{{ data.subject_loc[0].label }}\" and very happy to meet you\n
Rendered outputHello I was classified as \"Dogs\" and very happy to meet you\n
","tags":["Twig","Twig Templates","Twig Filters","Twig Functions","Examples","JSON"]},{"location":"workingtwigs/#twig-statements-and-executing","title":"Twig Statements and Executing","text":"If in Twig
https://twig.symfony.com/doc/3.x/tags/if.html
","tags":["Twig","Twig Templates","Twig Filters","Twig Functions","Examples","JSON"]},{"location":"workingtwigs/#rendered-output-based-upon-different-twig-conditionals-operators-tests-assignments-and-filters","title":"Rendered Output based upon different Twigconditionals
, operators
, tests
, assignments
, and filters
","text":"Conditionals, Operator, and Test Usage
Twig Template{% if data.subject_loc is defined %}\nHey I have a Subject Key\n{% else %}\nUps no Subject Key\n{% endif %}\n
Rendered OutputHello I was classified as \"Dogs\" and very happy to meet you\n
Loop Usage
Twig Template{% for key, subject in data.subject_loc %}\n* Subject {{ subject.label }} found at position {{ key }}\n{% endfor %}\n
Rendered Output* Subject Dogs found at position 0\n
Assignment, Filter, and Loop Usage
Twig Template{% for subject in data.subject_loc %}\n{% set label_lowercase = subject.label|lower %}\nMy lower case Subject is {{ label_lowercase }}\n{% endfor %}\n
Rendered Output`My lower case Subject is dogs`\n
Loop Scope
Twig Template{% for subject in data.subject_loc %}\n {% set label_lowercase = subject.label|lower %}\nMy lower case Subject is {{ label_lowercase }}\n{% endfor %}\n{# \n The below won\u2019t display because it was assigned inside \n The For Loop\n#}\n{{ label_lowercase }}\n
Rendered Output`My lower case Subject is dogs`\n
","tags":["Twig","Twig Templates","Twig Filters","Twig Functions","Examples","JSON"]},{"location":"workingtwigs/#full-examples-for-common-uses-cases","title":"Full Examples for Common Uses Cases:","text":"Use Case #1
I have multiple LoD Subjects and want to display them in my page as a clickable ordered list but I\u2019m a safe/careful person.
Twig Example for Use Case #1{% if data.subject_loc is iterable and data.subject_loc is not empty %}\n<h2>My Subjects</h2>\n<ul>\n {% for subject in data.subject_loc %}\n <li>\n <a href=\"{{ subject.uri }}\" title=\"{{ subject.label|capitalize }}\" target=\"_blank\">\n {{ subject.label }}\n </a>\n </li> \n {% endfor %}\n</ul>\n{% endif %}\n
Use Case #2
I have sometimes a publication date. I want to show it in beautiful human readable language.
Twig Example for Use Case #2{% if data.date_published is not empty %}\n<h2>Date {{ data.label }} was published:</h2>\n<p>\n{{ data.date_published|date(\"F jS \\\\o\\\\f Y \\\\a\\\\t g:ia\") }}\n</p>\n{% endif %}\n
About date
Use Case #3 (Full Curry)
{# May 4th 2021 @dpino: I have sometimes a user provided creation date. I want to show it in beautiful human readable language but fallback to automatic date if absent. I also want in the last case to show it was either \u201ccreated\u201d or \u201cupdated\u201d. #}
\"as:generator\": {\n \"type\": \"Update\",\n \"actor\": {\n \"url\": \"https:\\/\\/archipelago.nyc\\/form\\/descriptive-metadata\",\n \"name\": \"descriptive_metadata\",\n \"type\": \"Service\"\n },\n \"endTime\": \"2021-03-17T13:24:01-04:00\",\n \"summary\": \"Generator\",\n \"@context\": \"https:\\/\\/www.w3.org\\/ns\\/activitystreams\"\n }\n
Twig Example for Use Case #3{% if data.date_created is not empty %}\n<h2>Date {{ data.label }} was created:</h2>\n<p>\n {{ data.date_created|date(\"F jS \\\\o\\\\f Y \\\\a\\\\t g:ia\") }}\n</p>\n{% else %}\n<h2>Date {{ data.label }} was {{ attribute(data, 'as:generator').type|lower }}d in this repository:</h2>\n<p>\n {{ attribute(data, 'as:generator').endTime|date(\"F jS \\\\o\\\\f Y \\\\a\\\\t g:ia\") }}\n</p>\n{% endif %}\n
","tags":["Twig","Twig Templates","Twig Filters","Twig Functions","Examples","JSON"]},{"location":"workingtwigs/#a-recommended-workflow","title":"A Recommended Workflow","text":"You want to create a New Metadata Display (HTML) or a new (XML) Schema based format?
data.label
info and check where your Frame uses a Title or a Label. Remove that text (Cmd+X or Ctrl+X) and replace with a {{ data.label }}
. Press Preview. Do you see your title?data.subject_loc
){# I added this because .. #}
Once the Template is in place you can use it in a Formatter, as Endpoint, in your Search Results or just keep it around until you and the world are ready!
","tags":["Twig","Twig Templates","Twig Filters","Twig Functions","Examples","JSON"]},{"location":"workingtwigs/#and-now-its-your-turn","title":"And now it's your turn!","text":"We hope you found the information presented here to be helpful in getting started working with Twigs in Archipelago. Click here to return to the main Twigs in Archipelago documentation. Happy Twigging!
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
","tags":["Twig","Twig Templates","Twig Filters","Twig Functions","Examples","JSON"]},{"location":"xdebug/","title":"Debugging PHP in Archipelago","text":"This document describes how to enable Xdebug for local PHP development using the PHPStorm IDE and a docker container running the Archipelago esmero-php:development
image. It involves interacting with the esmero/archipelago-docker-images repo and the esmero/archipelago-deployment repo.
Run the following commands from your /archipelago-deployment
directory:
docker-compose down
\\ docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d
This version of docker-compose up
uses an override file to modify our services. docker-compose.dev.yml
we now have an extra PHP container called esmero-php-debug
.
To stop the containers in the future, run docker-compose -f docker-compose.yml -f docker-compose.dev.yml down
.
(To make these commands easier to remember, consider making bash aliases in your .bashrc file.) (If you are running your development on a Linux system, you may need to make a modification to your xdebug configuration file on the esmero-php-dev container. See appendix at the bottom of this page.)
So we have reloaded the containers and now you are ready for Part 2.
In PHPStorm, open your archipelago-deployment
project.
Go to Preferences > Languages & Frameworks > PHP > Debug
or Settings > PHP > Servers
. In this window there is an Xdebug section. Use these settings:
9003
. (do NOT use the default, 9000)Your settings should look like this. Hit APPLY and OK.
Go to Preferences > Languages & Frameworks > PHP > Servers
. We will create a new server here. Use these settings:
docker-debug-server
localhost
8001
archipelago-deployment
directory in the File/Directory
column.Absolute path on the server
add /var/www/html
Hit APPLY and OK and close the window.
Go to Run > Edit Configurations
. Hit the +
Button to create a new PHP Remote Debug. Name whatever you want, I called mine Archipelago
. Use these settings:
docker-debug-server
from dropdown (we created this in step 3)archipelago
(this matches the key set in our container)
Note: If you try to validate your connection, it will fail. But that's ok.
Validate your connection. With Run > Edit Configurations
still open, you can hit the link that says \"Validate\". Use these settings in the following validation window:
<your local path>/archipelago-deployment/web
http://localhost:8001
Hit VALIDATE. You should get a series of green check marks. If you get a warning about missing php.ini
file, that is OK, our file has a different name in the container (xdebug.ini
) and is still being read correctly.
We have had success using the XDebug Helper extension in Chrome. Once you have the extension installed, right-click on the bug icon in the top right of your chrome browser window and select \"Options\" to configure the IDE key. Under \"IDE\", select \"Other\", and in the text box, enter \"archipelago\"
Hit the button (top right bar of PHPStorm) that looks like a telephone, for Start Listening for PHP Debug Connections
.
Now, you can use Run > Debug
and select the Archipelago
named configuration that we created in the previous steps. The debugging console will appear. It will say it is waiting for incoming connection from 'archipelago' .
Right now the debugging session is not enabled. Browse to localhost:8001
. Click on the gray XDebug Helper icon at the top right of your window and select the green \"Debug\" button. This will tell chrome to set the xdebug session key when you reload the page.
Now set a breakpoint in your code, and refresh the page. If you have breakpoints set, either manually, or from leaving \"Break at first line in PHP scripts\" checked, you should have output now in the debugger.
If you are done actively debugging, it is best to click the green XDebug Helper icon and select \"Disable\". This will greatly improve speed and performance for your app in development. When you need to debug, just turn on debugging using the XDebug Helper button again.
If you would like to see the output of your xdebug logs, run the following script: docker exec -ti esmero-php bash -c 'tail -f /tmp/xdebug.log > /proc/1/fd/2'
Then, you can use the typical docker logs command on the esmero-php
container, and you will see the xdebug output: docker logs esmero-php -f
Xdebug makes accessing variables in Drupal kind of great. Many possibilities, including debugging for Twig templates. Happy debugging!
"},{"location":"xdebug/#appendix-xdebug-on-a-linux-host","title":"Appendix: XDebug on a linux host","text":"If you are developing on a linux machine, you may need to make a change to the xdebug configuration file.
/archipelago-deployment/xdebug
folder called xdebug.ini
and enter the following text: zend_extension=xdebug\n\n[xdebug]\nxdebug.mode=develop,debug\nxdebug.discover_client_host = 1\nxdebug.start_with_request=yes\n
php-debug:\n ...\n volumes:\n - ${PWD}:/var/www/html:cached\n # Bind mount custom xdebug configuration file...\n - ${PWD}/xdebug/xdebug.ini:/usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini\n
Thank you for reading! Please contact us on our Archipelago Commons Google Group with any questions or feedback.
Return to the Archipelago Documentation main page.
"}]} \ No newline at end of file diff --git a/1.3.0/search_advanced/index.html b/1.3.0/search_advanced/index.html index 74621905..b5bb665d 100644 --- a/1.3.0/search_advanced/index.html +++ b/1.3.0/search_advanced/index.html @@ -565,6 +565,8 @@ + + @@ -628,7 +630,7 @@ - Installing Archipelago Drupal 9 on OSX (macOS) + Installing Archipelago Drupal 10 on OSX (macOS) @@ -648,7 +650,7 @@ - Installing Archipelago Drupal 9 on Ubuntu 18.04 or 20.04 + Installing Archipelago Drupal 10 on Ubuntu 18.04 or 20.04 @@ -668,7 +670,7 @@ - Installing Archipelago Drupal 9 on Windows 10/11 + Installing Archipelago Drupal 10 on Windows 10/11 @@ -703,6 +705,26 @@ +