OpenAI Provider¶
Below is how to instantiate OpenAI as a provider, along with the feedback functions available only from OpenAI.
Additionally, all feedback functions listed in the base LLMProvider class can also be run with OpenAI.
trulens_eval.feedback.provider.openai.OpenAI¶
Bases: LLMProvider
Out-of-the-box feedback functions calling OpenAI APIs.
Creates an OpenAI provider with out-of-the-box feedback functions.
Example

```python
from trulens_eval.feedback.provider.openai import OpenAI

openai_provider = OpenAI()
```
| PARAMETER | DESCRIPTION |
|---|---|
| `model_engine` | The OpenAI completion model. A default model is used when none is given. TYPE: `str` |
| `**kwargs` | Additional arguments to pass to the `OpenAIEndpoint`, which are then passed to `OpenAIClient` and finally to the OpenAI client. |
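The `**kwargs` forwarding chain described above (provider → endpoint → client) can be sketched with plain Python classes. The class names below are illustrative stand-ins, not the real trulens_eval classes:

```python
# Toy sketch of the argument-forwarding chain: extra keyword arguments
# flow from the provider down to the underlying client unchanged.
# These classes are illustrative, not the real trulens_eval classes.

class FakeClient:
    def __init__(self, **kwargs):
        # The innermost client receives whatever was not consumed above it.
        self.config = kwargs

class FakeEndpoint:
    def __init__(self, **kwargs):
        # The endpoint forwards all extra arguments to the client.
        self.client = FakeClient(**kwargs)

class FakeProvider:
    def __init__(self, model_engine="some-model", **kwargs):
        # The provider keeps its own parameters and forwards the rest.
        self.model_engine = model_engine
        self.endpoint = FakeEndpoint(**kwargs)

provider = FakeProvider(model_engine="gpt-4", timeout=30)
```

Here `timeout=30` ends up on the client while `model_engine` stays on the provider, mirroring how extra arguments reach the OpenAI client.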
Functions¶
moderation_hate¶
Uses OpenAI's Moderation API. A function that checks if text is hate speech.
Example

```python
from trulens_eval import Feedback
from trulens_eval.feedback.provider.openai import OpenAI

openai_provider = OpenAI()
feedback = Feedback(
    openai_provider.moderation_hate, higher_is_better=False
).on_output()
```
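Because `higher_is_better=False`, a lower score is better. A minimal sketch of how the returned float might be thresholded downstream; the helper and threshold value are illustrative, not part of trulens_eval:

```python
# Illustrative helper: flag an output whose moderation score exceeds a
# chosen threshold. The score convention follows the docs above: 0.0 means
# the category was not detected, 1.0 means it was. The threshold is an
# assumption for this sketch.

def flag_output(score: float, threshold: float = 0.5) -> bool:
    """Return True when the moderation score is high enough to flag."""
    if not 0.0 <= score <= 1.0:
        raise ValueError(f"score must be in [0.0, 1.0], got {score}")
    return score > threshold

flag_output(0.92)  # high score: flagged
flag_output(0.03)  # low score: passes
```

The same interpretation applies to every moderation function below, since they all return a float in [0.0, 1.0] with `higher_is_better=False`.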
| PARAMETER | DESCRIPTION |
|---|---|
| `text` | Text to evaluate. TYPE: `str` |

| RETURNS | DESCRIPTION |
|---|---|
| `float` | A value between 0.0 (not hate) and 1.0 (hate). |
moderation_hatethreatening¶
Uses OpenAI's Moderation API. A function that checks if text is threatening speech.
Example

```python
from trulens_eval import Feedback
from trulens_eval.feedback.provider.openai import OpenAI

openai_provider = OpenAI()
feedback = Feedback(
    openai_provider.moderation_hatethreatening, higher_is_better=False
).on_output()
```

| PARAMETER | DESCRIPTION |
|---|---|
| `text` | Text to evaluate. TYPE: `str` |

| RETURNS | DESCRIPTION |
|---|---|
| `float` | A value between 0.0 (not threatening) and 1.0 (threatening). |
moderation_selfharm¶
Uses OpenAI's Moderation API. A function that checks if text is about self-harm.
Example

```python
from trulens_eval import Feedback
from trulens_eval.feedback.provider.openai import OpenAI

openai_provider = OpenAI()
feedback = Feedback(
    openai_provider.moderation_selfharm, higher_is_better=False
).on_output()
```

| PARAMETER | DESCRIPTION |
|---|---|
| `text` | Text to evaluate. TYPE: `str` |

| RETURNS | DESCRIPTION |
|---|---|
| `float` | A value between 0.0 (not self-harm) and 1.0 (self-harm). |
moderation_sexual¶
Uses OpenAI's Moderation API. A function that checks if text is sexual speech.
Example

```python
from trulens_eval import Feedback
from trulens_eval.feedback.provider.openai import OpenAI

openai_provider = OpenAI()
feedback = Feedback(
    openai_provider.moderation_sexual, higher_is_better=False
).on_output()
```

| PARAMETER | DESCRIPTION |
|---|---|
| `text` | Text to evaluate. TYPE: `str` |

| RETURNS | DESCRIPTION |
|---|---|
| `float` | A value between 0.0 (not sexual) and 1.0 (sexual). |
moderation_sexualminors¶
Uses OpenAI's Moderation API. A function that checks if text is about sexual minors.
Example

```python
from trulens_eval import Feedback
from trulens_eval.feedback.provider.openai import OpenAI

openai_provider = OpenAI()
feedback = Feedback(
    openai_provider.moderation_sexualminors, higher_is_better=False
).on_output()
```

| PARAMETER | DESCRIPTION |
|---|---|
| `text` | Text to evaluate. TYPE: `str` |

| RETURNS | DESCRIPTION |
|---|---|
| `float` | A value between 0.0 (not sexual minors) and 1.0 (sexual minors). |
moderation_violence¶
Uses OpenAI's Moderation API. A function that checks if text is about violence.
Example

```python
from trulens_eval import Feedback
from trulens_eval.feedback.provider.openai import OpenAI

openai_provider = OpenAI()
feedback = Feedback(
    openai_provider.moderation_violence, higher_is_better=False
).on_output()
```

| PARAMETER | DESCRIPTION |
|---|---|
| `text` | Text to evaluate. TYPE: `str` |

| RETURNS | DESCRIPTION |
|---|---|
| `float` | A value between 0.0 (not violence) and 1.0 (violence). |
moderation_violencegraphic¶
Uses OpenAI's Moderation API. A function that checks if text is about graphic violence.
Example

```python
from trulens_eval import Feedback
from trulens_eval.feedback.provider.openai import OpenAI

openai_provider = OpenAI()
feedback = Feedback(
    openai_provider.moderation_violencegraphic, higher_is_better=False
).on_output()
```

| PARAMETER | DESCRIPTION |
|---|---|
| `text` | Text to evaluate. TYPE: `str` |

| RETURNS | DESCRIPTION |
|---|---|
| `float` | A value between 0.0 (not graphic violence) and 1.0 (graphic violence). |
moderation_harassment¶
Uses OpenAI's Moderation API. A function that checks if text is harassment.
Example

```python
from trulens_eval import Feedback
from trulens_eval.feedback.provider.openai import OpenAI

openai_provider = OpenAI()
feedback = Feedback(
    openai_provider.moderation_harassment, higher_is_better=False
).on_output()
```

| PARAMETER | DESCRIPTION |
|---|---|
| `text` | Text to evaluate. TYPE: `str` |

| RETURNS | DESCRIPTION |
|---|---|
| `float` | A value between 0.0 (not harassment) and 1.0 (harassment). |
moderation_harassment_threatening¶
Uses OpenAI's Moderation API. A function that checks if text is threatening harassment.
Example

```python
from trulens_eval import Feedback
from trulens_eval.feedback.provider.openai import OpenAI

openai_provider = OpenAI()
feedback = Feedback(
    openai_provider.moderation_harassment_threatening, higher_is_better=False
).on_output()
```

| PARAMETER | DESCRIPTION |
|---|---|
| `text` | Text to evaluate. TYPE: `str` |

| RETURNS | DESCRIPTION |
|---|---|
| `float` | A value between 0.0 (not harassment/threatening) and 1.0 (harassment/threatening). |
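Since each moderation function returns an independent float in [0.0, 1.0], the per-category scores can be combined downstream. A minimal sketch of one way to do this; the combination rule and category names here are illustrative, not a trulens_eval API:

```python
# Illustrative aggregation: each moderation feedback returns a float in
# [0.0, 1.0] where higher means more problematic. One simple overall check
# is to find the worst-scoring category. The category names and scores
# below are assumptions for this sketch.

def worst_category(scores: dict) -> tuple:
    """Return the (category, score) pair with the highest score."""
    category = max(scores, key=scores.get)
    return category, scores[category]

scores = {
    "hate": 0.02,
    "violence": 0.71,
    "sexual": 0.01,
}
category, score = worst_category(scores)  # ("violence", 0.71)
```

Taking the maximum is conservative: an output is only considered clean if every category scores low.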