Implement an ML model to identify performance issues in running Indy services, based on aggregated logs + metrics in Elasticsearch. Use the ML model to:

- Predict performance during performance testing executions (?)
- Identify performance bottlenecks and recommend areas to optimize (a rough sketch of one possible approach follows this list)
- Trigger automated investigations when Indy SLOs are breached
- Provide a Jupyter notebook containing initial investigation results, linked to the datasets as appropriate
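As a minimal sketch of the bottleneck-identification idea: assume the per-request span durations have already been flattened into one row per request with one column per subsystem (the column names below are invented for illustration, not a real schema). Unsupervised anomaly detection with scikit-learn's IsolationForest flags requests whose timing profile looks unusual; flagged requests would be candidates for bottleneck analysis or for triggering an SLO-breach investigation. This is one possible modeling approach, not a prescribed solution.

```python
# Minimal sketch: unsupervised anomaly detection over per-request span durations.
# All column names below are assumptions about the flattened metric events,
# not a confirmed schema.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical feature table: one row per request (trace), one column per
# subsystem's span duration in milliseconds.
requests = pd.DataFrame(
    {
        "storage_ms": [12, 14, 11, 13, 210, 12],
        "remote_fetch_ms": [95, 90, 102, 98, 97, 940],
        "metadata_merge_ms": [30, 28, 33, 31, 29, 27],
    }
)

model = IsolationForest(n_estimators=200, contamination="auto", random_state=42)
model.fit(requests)

# predict() returns -1 for requests whose span profile looks anomalous; these
# are the candidates for bottleneck investigation or an SLO-breach workflow.
requests["anomaly"] = model.predict(requests[["storage_ms", "remote_fetch_ms", "metadata_merge_ms"]])
print(requests[requests["anomaly"] == -1])
```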
We will provide aggregated log events and/or metric events. The metric events will contain opentracing.io-compatible spans, with IDs that tie the events together in context. The aggregated log events also carry some contextual information, but there is a lot of overlap with the metric data. The span data in our metric events will contain measurements from subsystems (and threaded-off sub-processes) for each request. We may also be able to provide aggregated system-level metrics (think Prometheus, not OpenTracing) such as memory usage.
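Purely as a data-access illustration, here is a minimal sketch of pulling span documents out of Elasticsearch and grouping them by trace ID so each request's subsystem measurements sit together. The endpoint, index pattern, and field names (`trace_id`, `operation`, `duration_ms`) are assumptions about the event schema rather than the real mapping, and the snippet assumes the elasticsearch-py 8.x client.

```python
# Minimal sketch: fetch OpenTracing-style span documents from Elasticsearch and
# group them by trace ID. Index pattern and field names are hypothetical.
from collections import defaultdict

from elasticsearch import Elasticsearch

es = Elasticsearch("http://elasticsearch:9200")  # hypothetical endpoint

resp = es.search(
    index="indy-metrics-*",  # hypothetical index pattern
    query={"range": {"@timestamp": {"gte": "now-1h"}}},
    size=1000,
)

# Group spans by trace ID; each group is one request's set of measurements.
traces = defaultdict(list)
for hit in resp["hits"]["hits"]:
    doc = hit["_source"]
    traces[doc["trace_id"]].append(
        {"operation": doc["operation"], "duration_ms": doc["duration_ms"]}
    )

# For each request, report total time and the three slowest spans.
for trace_id, spans in traces.items():
    total = sum(span["duration_ms"] for span in spans)
    slowest = sorted(spans, key=lambda s: -s["duration_ms"])[:3]
    print(trace_id, total, slowest)
```

The grouped traces could then be aggregated into the per-request feature rows used in the modeling sketch above.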
You may use the programming language of your choice, but the chosen processing framework must run in an OpenShift environment (Kubernetes with restricted container access / privileges).