Should we improve computation of figures and the result description? #828
Participants: @matthiasschaub, @joker234, @Gigaszi, @mcauer, @Hagellach37

Commits referencing this issue:
- Oct 24, 2024, JanReifenrath added a commit: "…the attribute completeness plot #828"
- Oct 24, 2024, JanReifenrath added a commit: "…ness, still needs to be made less brittle #828"
- Oct 25, 2024, JanReifenrath added a commit: "…indicator.py more readable #828"
- Oct 31, 2024, JanReifenrath added a commit
- Oct 31, 2024, JanReifenrath added a commit: "…e multiple attributes/ tags. Also changed the tests to adjust to the changed description and wrote a unittest for a description with multiple attributes #828"
- Oct 31, 2024, Gigaszi added a commit
- Nov 7, 2024, JanReifenrath added a commit
- Nov 12, 2024, Gigaszi pushed five commits (the same commit messages as above)
Computation of figures
Currently, the figures displayed in the dashboard are mostly created in the backend and included in the result, but some parts are still handled in the frontend, e.g. formatting of titles or rendering of the quality badges (e.g. 'low currentness'). Would it make sense to create the whole figure in the backend?
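If figure creation moved entirely to the backend, an indicator could return a finished figure specification that the frontend only renders. A minimal sketch, assuming Plotly is used server-side; the function name, badge thresholds, and gauge layout are illustrative assumptions, not the project's actual API:

```python
# Minimal sketch: build the complete figure, including the formatted
# title and the quality badge, in the backend. `create_figure` and the
# thresholds below are hypothetical, not project code.
import plotly.graph_objects as go


def create_figure(topic_name: str, ratio: float) -> dict:
    # Badge text that the frontend currently renders itself
    # (e.g. 'low currentness'); here it becomes part of the title.
    badge = "low" if ratio < 0.25 else "medium" if ratio < 0.75 else "high"
    fig = go.Figure(
        go.Indicator(
            mode="gauge+number",
            value=round(ratio, 2),
            title={"text": f"Attribute Completeness: {topic_name} ({badge})"},
            gauge={"axis": {"range": [0, 1]}},
        )
    )
    # Serialize to a plain dict so the figure can be embedded in the
    # JSON result and rendered without further frontend logic.
    return fig.to_dict()
```

The frontend would then pass the returned specification straight to its plotting library, keeping all presentation logic in one place.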
Result description
Currently, the textual result description does not name the topic or the attributes used for the indicator. For example:
"The ratio of the features (all: 1034299.9) compared to features with expected tags (matched: 276785.2) is 0.27. Around 25-75% of the features match the expected tags."
Without that context, the description is hard to interpret. Should we improve it by including the topic and attribute names?
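One way to add the missing context would be a description template that interpolates the topic and attribute names. A rough sketch; the function, its parameters, and the example topic and attribute are made up for illustration, with the counts taken from the example above:

```python
# Rough sketch of a result description that names the topic and the
# expected attributes. `build_description` is hypothetical.
def build_description(
    topic: str, attributes: list[str], all_count: float, matched_count: float
) -> str:
    ratio = matched_count / all_count
    attribute_list = ", ".join(attributes)
    return (
        f"The ratio of all '{topic}' features ({all_count:.1f}) compared to "
        f"'{topic}' features carrying the expected attributes "
        f"({attribute_list}) ({matched_count:.1f}) is {ratio:.2f}."
    )


# Prints: The ratio of all 'building count' features (1034299.9) compared
# to 'building count' features carrying the expected attributes (height)
# (276785.2) is 0.27.
print(build_description("building count", ["height"], 1034299.9, 276785.2))
```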