Completions
When you "execute" a prompt, Promptmetheus will compile your prompt into a plain text string and send it to the API of the inference provider of your choice, where it will be completed by the selected LLM and returned to Promptmetheus for you to inspect.
Ratings
🚧 In the making...
Automatic evaluations
Take a look at the Evaluators section for more information on how evaluators and automatic evaluations work.
Prompt fragments
🚧 In the making...
Used model (LLM) and settings
At the bottom left of each completion you can find the identifier of the model that generated it, together with the values of the selected model parameters:
...and indicators for Seed, JSON Mode, and Stop Sequences.
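One way to picture this is as a small settings snapshot stored alongside each completion. The interface below is a hypothetical sketch of that shape; the field names, and the inclusion of parameters such as temperature, are assumptions for illustration, not Promptmetheus' actual data model.

```ts
// Hypothetical shape of the settings shown with a completion.
// Field names are illustrative only.
interface CompletionSettings {
  model: string;            // model identifier, e.g. "claude-sonnet-4" (example)
  temperature: number;      // sampling temperature (assumed parameter)
  maxTokens: number;        // output token limit (assumed parameter)
  seed?: number;            // set if a fixed seed was used
  jsonMode?: boolean;       // true if JSON mode was enabled
  stopSequences?: string[]; // stop sequences, if any
}
```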
Inference metrics
At the bottom right you can find a selection of relevant metrics for the completion (see the sketch after this list for how they relate to each other):
Inference Time in seconds
Inference Speed in tokens per second (tps)
Token Count (input/output)
Inference Cost (in cents)
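To make the relationship between these numbers concrete, here is a hypothetical helper that derives them from the raw call data. The function name, field names, and per-token prices are placeholders, not actual provider rates or Promptmetheus code.

```ts
// Hypothetical derivation of the displayed inference metrics.
interface InferenceMetrics {
  inferenceTimeSec: number;   // wall-clock time of the API call, in seconds
  inferenceSpeedTps: number;  // output tokens per second
  inputTokens: number;        // tokens sent to the model
  outputTokens: number;       // tokens generated by the model
  inferenceCostCents: number; // total cost in cents
}

function computeMetrics(
  startMs: number,
  endMs: number,
  inputTokens: number,
  outputTokens: number,
  inputPricePerTokenCents: number,  // placeholder pricing
  outputPricePerTokenCents: number  // placeholder pricing
): InferenceMetrics {
  const inferenceTimeSec = (endMs - startMs) / 1000;
  return {
    inferenceTimeSec,
    inferenceSpeedTps: outputTokens / inferenceTimeSec,
    inputTokens,
    outputTokens,
    inferenceCostCents:
      inputTokens * inputPricePerTokenCents +
      outputTokens * outputPricePerTokenCents,
  };
}
```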
Associated prompt
🚧 In the making...
Search, filter, display mode, and sweep
You can search completions and/or filter them by rating with the respective actions at the top right of the screen.
Additionally, there are three display modes that you can toggle through:
text only
text plus ratings, actions, model parameters, and inference metrics
same as 2, plus prompt fragments
The last action in the list is the sweep button, which clears all completions from the current prompt. Note that there is currently no "undo" and cleared completions cannot be restored, which is why you have to confirm this action.
Completion exports
🚧 In the making...