Changing IRIs to remove plurals
charishruthi committed May 22, 2024
1 parent 4df60f8 commit 32315f7
Showing 1 changed file with 28 additions and 28 deletions.
56 changes: 28 additions & 28 deletions Ontologies/v3/explanation-ontology.owl
@@ -1416,7 +1416,7 @@ Contextual knowledge in turn is the knowledge that is derived from such relevant
<rdf:Description rdf:about="https://purl.org/heals/eo#ModelExplanationOutputs"/>
<owl:Restriction>
<owl:onProperty rdf:resource="http://semanticscience.org/resource/SIO_000232"/>
- <owl:someValuesFrom rdf:resource="https://purl.org/heals/eo#ContrastiveSaliencyMethods"/>
+ <owl:someValuesFrom rdf:resource="https://purl.org/heals/eo#ContrastiveSaliencyMethod"/>
</owl:Restriction>
</owl:intersectionOf>
</owl:Class>
@@ -1469,11 +1469,11 @@ event that did not occur), the output of interest.</dc:description>



- <!-- https://purl.org/heals/eo#ContrastiveSaliencyMethods -->
+ <!-- https://purl.org/heals/eo#ContrastiveSaliencyMethod -->

- <owl:Class rdf:about="https://purl.org/heals/eo#ContrastiveSaliencyMethods">
+ <owl:Class rdf:about="https://purl.org/heals/eo#ContrastiveSaliencyMethod">
<rdfs:subClassOf rdf:resource="https://purl.org/heals/eo#SaliencyMethod"/>
- <rdfs:label>Contrastive saliency methods</rdfs:label>
+ <rdfs:label>Contrastive Saliency Method</rdfs:label>
</owl:Class>


@@ -1546,7 +1546,7 @@ event that did not occur), the output of interest.</dc:description>

<owl:Class rdf:about="https://purl.org/heals/eo#CounterfactualSaliencyMethod">
<rdfs:subClassOf rdf:resource="https://purl.org/heals/eo#SaliencyMethod"/>
- <rdfs:label>Counterfactual saliency method</rdfs:label>
+ <rdfs:label>Counterfactual Saliency Method</rdfs:label>
</owl:Class>


@@ -2224,7 +2224,7 @@ understanding and knowledge [(McNeil and Krajack, 2008) of how the world works,
</owl:Restriction>
<owl:Restriction>
<owl:onProperty rdf:resource="http://semanticscience.org/resource/SIO_000232"/>
- <owl:someValuesFrom rdf:resource="https://purl.org/heals/eo#RationaleProvidingMethod"/>
+ <owl:someValuesFrom rdf:resource="https://purl.org/heals/eo#ProvidingRationaleMethod"/>
</owl:Restriction>
</owl:unionOf>
</owl:Class>
@@ -2358,6 +2358,23 @@ understanding and knowledge [(McNeil and Krajack, 2008) of how the world works,



+ <!-- https://purl.org/heals/eo#ProvidingRationaleMethod -->
+
+ <owl:Class rdf:about="https://purl.org/heals/eo#ProvidingRationaleMethod">
+ <rdfs:subClassOf rdf:resource="https://purl.org/heals/eo#ExplanationMethod"/>
+ <rdfs:subClassOf>
+ <owl:Restriction>
+ <owl:onProperty rdf:resource="http://semanticscience.org/resource/SIO_000229"/>
+ <owl:someValuesFrom rdf:resource="https://purl.org/heals/eo#LocalExplanation"/>
+ </owl:Restriction>
+ </rdfs:subClassOf>
+ <terms:description>Work in the natural language processing and computer vision domains that generates rationales/explanations derived from input text would be considered as local self explanations. Here however, new words or phrases could be generated so the feature space can be richer than the original input space.</terms:description>
+ <terms:source>Arya, V., Bellamy, R. K., Chen, P. Y., Dhurandhar, A., Hind, M., Hoffman, S. C., ... &amp; Zhang, Y. (2019). One explanation does not fit all: A toolkit and taxonomy of ai explainability techniques. arXiv preprint arXiv:1909.03012.</terms:source>
+ <rdfs:label>Providing Rationale Method</rdfs:label>
+ </owl:Class>
+
+
+
<!-- https://purl.org/heals/eo#RationaleExplanation -->

<owl:Class rdf:about="https://purl.org/heals/eo#RationaleExplanation">
@@ -2434,7 +2451,7 @@ understanding and knowledge [(McNeil and Krajack, 2008) of how the world works,
<rdf:Description rdf:about="https://purl.org/heals/eo#LocalExplanation"/>
<owl:Restriction>
<owl:onProperty rdf:resource="http://semanticscience.org/resource/SIO_000232"/>
- <owl:someValuesFrom rdf:resource="https://purl.org/heals/eo#RationaleProvidingMethod"/>
+ <owl:someValuesFrom rdf:resource="https://purl.org/heals/eo#ProvidingRationaleMethod"/>
</owl:Restriction>
</owl:intersectionOf>
</owl:Class>
@@ -2458,23 +2475,6 @@ understanding and knowledge [(McNeil and Krajack, 2008) of how the world works,



- <!-- https://purl.org/heals/eo#RationaleProvidingMethod -->
-
- <owl:Class rdf:about="https://purl.org/heals/eo#RationaleProvidingMethod">
- <rdfs:subClassOf rdf:resource="https://purl.org/heals/eo#ExplanationMethod"/>
- <rdfs:subClassOf>
- <owl:Restriction>
- <owl:onProperty rdf:resource="http://semanticscience.org/resource/SIO_000229"/>
- <owl:someValuesFrom rdf:resource="https://purl.org/heals/eo#LocalExplanation"/>
- </owl:Restriction>
- </rdfs:subClassOf>
- <terms:description>Work in the natural language processing and computer vision domains that generates rationales/explanations derived from input text would be considered as local self explanations. Here however, new words or phrases could be generated so the feature space can be richer than the original input space.</terms:description>
- <terms:source>Arya, V., Bellamy, R. K., Chen, P. Y., Dhurandhar, A., Hind, M., Hoffman, S. C., ... &amp; Zhang, Y. (2019). One explanation does not fit all: A toolkit and taxonomy of ai explainability techniques. arXiv preprint arXiv:1909.03012.</terms:source>
- <rdfs:label>Providing rationale method</rdfs:label>
- </owl:Class>
-
-
-
<!-- https://purl.org/heals/eo#Reasoning_Mode -->

<owl:Class rdf:about="https://purl.org/heals/eo#Reasoning_Mode">
@@ -3284,7 +3284,7 @@ vol. 267, pp. 1–38, 2019.</terms:source>
<!-- https://purl.org/heals/eo#BRCG -->

<owl:NamedIndividual rdf:about="https://purl.org/heals/eo#BRCG">
- <rdf:type rdf:resource="https://purl.org/heals/eo#RationaleProvidingMethod"/>
+ <rdf:type rdf:resource="https://purl.org/heals/eo#ProvidingRationaleMethod"/>
<terms:source>S. Dash, O. Günlük, and D. Wei. Boolean decision rules via column generation. In
Proceedings of the 32nd International Conference on Neural Information Processing Systems (NeurIPS), 2018.</terms:source>
<sio:hasSynonym>Boolean decision rules via column generation</sio:hasSynonym>
@@ -3370,7 +3370,7 @@ Proceedings of the 32nd International Conference on Neural Information Processin
<!-- https://purl.org/heals/eo#GLRM -->

<owl:NamedIndividual rdf:about="https://purl.org/heals/eo#GLRM">
- <rdf:type rdf:resource="https://purl.org/heals/eo#RationaleProvidingMethod"/>
+ <rdf:type rdf:resource="https://purl.org/heals/eo#ProvidingRationaleMethod"/>
<terms:source>D. Wei, S. Dash, T. Gao, and O. Günlük. Generalized linear rule models. In
Proceedings of the 36th International Conference on Machine Learning (ICML), 2019.</terms:source>
<sio:hasSynonym>Generalized linear rule models</sio:hasSynonym>
@@ -3402,7 +3402,7 @@ Proceedings of the 36th International Conference on Machine Learning (ICML), 201
<!-- https://purl.org/heals/eo#LIME -->

<owl:NamedIndividual rdf:about="https://purl.org/heals/eo#LIME">
- <rdf:type rdf:resource="https://purl.org/heals/eo#ContrastiveSaliencyMethods"/>
+ <rdf:type rdf:resource="https://purl.org/heals/eo#ContrastiveSaliencyMethod"/>
<terms:source>Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. 2016. “&quot;Why should I trust you?&quot;: Explaining the Predictions of Any Classifier.” In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Kdd San Francisco, ca, 1135–44. New York, NY: Association for Computing Machinery.</terms:source>
<sio:hasSynonym>Locally Interpretable Model-Agnostic Explanations</sio:hasSynonym>
<rdfs:label>LIME</rdfs:label>
@@ -3441,7 +3441,7 @@ Proceedings of the 36th International Conference on Machine Learning (ICML), 201
<!-- https://purl.org/heals/eo#SHAP -->

<owl:NamedIndividual rdf:about="https://purl.org/heals/eo#SHAP">
- <rdf:type rdf:resource="https://purl.org/heals/eo#ContrastiveSaliencyMethods"/>
+ <rdf:type rdf:resource="https://purl.org/heals/eo#ContrastiveSaliencyMethod"/>
<terms:source>Lundberg, Scott M., and Su-In Lee. &quot;A unified approach to interpreting model predictions.&quot; Advances in neural information processing systems 30 (2017).</terms:source>
<sio:hasSynonym>SHAPley Additive Explanations</sio:hasSynonym>
<rdfs:label>SHAP</rdfs:label>

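For anyone consuming the ontology, this rename means that RDF data, queries, or application code still pointing at the old IRIs (https://purl.org/heals/eo#ContrastiveSaliencyMethods and https://purl.org/heals/eo#RationaleProvidingMethod) will no longer match the classes defined in this file. The sketch below shows one possible way to migrate existing triples, assuming rdflib and a local copy of Ontologies/v3/explanation-ontology.owl; the file names, the RENAMES mapping, and the rename_iris helper are illustrative assumptions, not part of this commit.

# Minimal migration sketch (assumption, not part of this commit): rewrite any
# triples that still use the pre-rename IRIs so downstream RDF stays aligned
# with the renamed classes.
from rdflib import Graph, URIRef

# Old IRI (removed by this commit) -> new IRI (introduced by this commit).
RENAMES = {
    "https://purl.org/heals/eo#ContrastiveSaliencyMethods":
        "https://purl.org/heals/eo#ContrastiveSaliencyMethod",
    "https://purl.org/heals/eo#RationaleProvidingMethod":
        "https://purl.org/heals/eo#ProvidingRationaleMethod",
}

def rename_iris(path_in: str, path_out: str) -> None:
    """Load an RDF/XML file, replace renamed IRIs in every triple, and save it."""
    g = Graph()
    g.parse(path_in, format="xml")
    # Snapshot the triples before mutating the graph.
    for s, p, o in list(g):
        updated = tuple(
            URIRef(RENAMES[str(term)])
            if isinstance(term, URIRef) and str(term) in RENAMES
            else term
            for term in (s, p, o)
        )
        if updated != (s, p, o):
            g.remove((s, p, o))
            g.add(updated)
    g.serialize(destination=path_out, format="xml")

if __name__ == "__main__":
    # Hypothetical local file names; adjust to your own checkout.
    rename_iris("explanation-ontology.owl", "explanation-ontology-migrated.owl")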