Completions

When you "execute" a prompt, Promptmetheus compiles it into a plain-text string and sends it to the API of your chosen inference provider. There, the selected LLM generates a completion, which is returned to Promptmetheus for you to inspect.
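The flow above can be sketched in a few lines of Python. Note that the fragment names, model identifier, and payload shape below are illustrative assumptions, not the actual Promptmetheus internals or provider API:

```python
# Hypothetical sketch of prompt compilation and request assembly.
# Fragment names and payload fields are assumptions for illustration.

fragments = {
    "context": "You are a helpful assistant.",
    "task": "Summarize the following text in one sentence.",
    "sample": "The quick brown fox jumps over the lazy dog.",
}

# Compile the prompt fragments into a single plain-text string.
compiled_prompt = "\n\n".join(fragments.values())

# Build the request payload for the chosen inference provider.
payload = {
    "model": "gpt-4o-mini",   # example model identifier
    "prompt": compiled_prompt,
    "temperature": 0.7,
}

print(payload["prompt"])
```

The provider's response would then be attached to the prompt as a completion for inspection and rating.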

Completion example

Ratings

🚧 In the making...

Automatic evaluations

Take a look at the Evaluators section for more information on how evaluators and automatic evaluations work.

Prompt fragments

🚧 In the making...

Used model (LLM) and settings

At the bottom left of each completion, you can find the identifier of the model that was used for the completion, together with the values of the selected model parameters:

...and indicators for Seed, JSON Mode, and Stop Sequences.
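For context, these indicators typically correspond to request parameters sent to the provider. The parameter names and values below follow common provider conventions (e.g. OpenAI-style APIs) and are assumptions, not the exact fields Promptmetheus uses:

```python
# Hedged example: typical model parameters behind a completion.
# Names follow common provider conventions; values are illustrative.

params = {
    "temperature": 0.7,
    "top_p": 1.0,
    "max_tokens": 256,
    "seed": 42,                                   # Seed: reproducible sampling
    "response_format": {"type": "json_object"},   # JSON Mode
    "stop": ["\n\n", "END"],                      # Stop Sequences
}
```

A fixed seed makes repeated runs of the same prompt more comparable, JSON mode constrains the output to valid JSON, and stop sequences cut generation off at the given strings.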

Inference metrics

At the bottom right you can find a selection of relevant metrics for the completion.

  • Inference Time (in seconds)
  • Inference Speed (in tokens per second, tps)
  • Token Count (input/output)
  • Inference Cost (in cents)
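These metrics can be derived from the token counts, the elapsed time, and the provider's per-token pricing. The sketch below shows one plausible way to compute them; the function name and the pricing figures are made-up assumptions, not actual provider rates:

```python
# Hypothetical sketch: deriving inference metrics for a completion.
# Pricing values (cents per 1k tokens) are illustrative, not real rates.

def inference_metrics(input_tokens, output_tokens, seconds,
                      cents_per_1k_in, cents_per_1k_out):
    return {
        "inference_time_s": seconds,
        # Speed is measured on the generated (output) tokens.
        "inference_speed_tps": output_tokens / seconds,
        "token_count": (input_tokens, output_tokens),
        # Cost combines input and output tokens at their respective rates.
        "inference_cost_cents": (input_tokens * cents_per_1k_in
                                 + output_tokens * cents_per_1k_out) / 1000,
    }

metrics = inference_metrics(input_tokens=500, output_tokens=200,
                            seconds=2.5,
                            cents_per_1k_in=0.5, cents_per_1k_out=1.5)
print(metrics)  # 80.0 tps, 0.55 cents
```

Input and output tokens are priced separately because most providers charge more for generated tokens than for prompt tokens.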

Associated prompt

🚧 In the making...

Completion prompt example

Search, filter, display mode, and sweep

You can search completions and/or filter them by rating with the respective actions at the top right of the screen.

Additionally, there are three display modes that you can toggle through:

  1. text only
  2. text plus ratings, actions, model parameters, and inference metrics
  3. same as 2, plus prompt fragments

The last action in the list is the sweep button, which clears all completions from the current prompt. Note that there is currently no "undo" button, and cleared completions cannot be restored (which is why you have to confirm this action).

Completion exports

🚧 In the making...