Using the Perspective API, this plugin warns users when a comment exceeds the predefined toxicity threshold. Toxic comments are flagged and held back from being posted until reviewed by a moderator.
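The flow above can be sketched as two small steps: building an AnalyzeComment request body in the shape the public Perspective API expects, and comparing the returned score to the threshold. This is a minimal illustration, not the plugin's actual internals; the helper names are hypothetical.

```javascript
// Build an AnalyzeComment request body for a comment, following the public
// Perspective API request shape. `model` and `doNotStore` mirror the
// TALK_PERSPECTIVE_MODEL and TALK_PERSPECTIVE_DO_NOT_STORE settings.
function buildAnalyzeRequest(text, { model = 'SEVERE_TOXICITY', doNotStore = true } = {}) {
  return {
    comment: { text },
    requestedAttributes: { [model]: {} },
    doNotStore,
  };
}

// Decide whether a comment should be held for moderation, given the
// summary score returned by Perspective (a probability in [0, 1]).
function isToxic(score, threshold) {
  return score > threshold;
}
```

A comment scoring 0.9 against a threshold of 0.8 would be held back; one scoring 0.5 would be posted normally.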
- `TALK_PERSPECTIVE_API_KEY` (required) - The API key for Perspective. You can register and get your own key at http://perspectiveapi.com/.
- `TALK_TOXICITY_THRESHOLD` - If a comment's toxicity exceeds this threshold, the comment will be rejected. (Default
- `TALK_PERSPECTIVE_API_ENDPOINT` - API endpoint for hitting the Perspective API. (Default
- `TALK_PERSPECTIVE_TIMEOUT` - The timeout for sending a comment to be processed before the toxicity analysis is skipped, parsed by `ms`. (Default
- `TALK_PERSPECTIVE_DO_NOT_STORE` - Whether the API stores or deletes the comment text and context from this request after it has been evaluated. Stored comments will be used for future research and community model-building purposes to improve the API over time. See Perspective API - Analyze Comment Request. (Default true)
- `TALK_PERSPECTIVE_SEND_FEEDBACK` - If set to `TRUE`, this plugin will send moderation actions back to Perspective as feedback to improve their model. (Default
- `TALK_PERSPECTIVE_MODEL` - Determines the Perspective API toxicity model that should be used, e.g. `SEVERE_TOXICITY`. A list of available models provided by the Perspective API can be found here. When displaying the toxicity score, this model will be used by default. If this model isn't available on the comment metadata (such as when the model has been changed), it will fall back to the stored `TOXICITY` model, as Talk will always fetch that. (Default
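Taken together, a configuration might look like the following sketch. The values shown are illustrative examples only, not the plugin's documented defaults; consult the Perspective API documentation for the correct endpoint and model names.

```shell
# Example environment configuration for the toxic-comments plugin.
# All values below are illustrative, not documented defaults.
export TALK_PERSPECTIVE_API_KEY="your-api-key"   # required
export TALK_TOXICITY_THRESHOLD="0.8"             # reject comments scoring above this
export TALK_PERSPECTIVE_API_ENDPOINT="https://commentanalyzer.googleapis.com/v1alpha1"
export TALK_PERSPECTIVE_TIMEOUT="300ms"          # parsed by the `ms` package
export TALK_PERSPECTIVE_DO_NOT_STORE="true"
export TALK_PERSPECTIVE_SEND_FEEDBACK="FALSE"
export TALK_PERSPECTIVE_MODEL="SEVERE_TOXICITY"
```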