# ModsysML (Apollo)

Maintained by [Apollo](https://www.apolloapi.io/), [ModsysML](https://modsys.vercel.app/) is an open-source model management toolkit for continuous model improvement. It helps generative AI developers evaluate and compare LLM outputs, test output quality, catch regressions, and automate their evaluations.

The [`modsys.connectors.avid`](https://github.com/modsysML/modsysML/tree/main/modsys/connectors/avid) module generates AVID reports from LLM evaluation runs performed by ModsysML. To do so, you need information about the model being evaluated, a text-based summary of the outcomes, details of the evaluation outcomes dataset (a description and a link to its location in your local or cloud storage), and a path where the report will be saved.

Here is a minimal example of the above, for an evaluation done on `gpt-3.5-turbo` by OpenAI.

```python
# Source: https://github.com/modsysML/modsysML/blob/main/modsys/connectors/avid/cloud.py
from modsys.connectors.avid.cloud import AVIDProvider

AVIDProvider().create_report(
    provider_name='openai',           # model provider
    provider_model='gpt-3.5-turbo',   # model that was evaluated
    dataset_name='eval_data',         # name of the evaluation outcomes dataset
    dataset_link='s3://your/bucket/eval_data.csv',  # location of the dataset in local/cloud storage
    summary='',                       # text-based summary of the evaluation outcomes
    path_to_save_report='/path/to/report/eval_report.json'  # where the AVID report is written
)
```
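The report is written as JSON to the path passed in `path_to_save_report`. As a quick sanity check, you can load it back with the standard library; this is a minimal sketch that only assumes the file exists at that path and contains valid JSON (the exact fields depend on the ModsysML/AVID report schema):

```python
import json

# Load the AVID report that create_report wrote to disk and peek at its
# top-level structure.
with open('/path/to/report/eval_report.json') as f:
    report = json.load(f)

print(list(report.keys()))
```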
