talk-plugin-toxic-comments


Using the Perspective API, this plugin warns users when a comment exceeds the predefined toxicity threshold. Toxic comments are flagged and held back from being posted until a moderator has reviewed them.

For more information on what toxic comments are, see the Toxic Comments documentation; for a look at how the plugin works in practice, see the accompanying blog post.

Configuration:

  • TALK_PERSPECTIVE_API_KEY (required) - The API Key for Perspective. You can register and get your own key at http://perspectiveapi.com/.
  • TALK_TOXICITY_THRESHOLD - If a comment's toxicity score exceeds this threshold, the comment will be rejected. (Default 0.8)
  • TALK_PERSPECTIVE_API_ENDPOINT - The endpoint used to reach the Perspective API. (Default https://commentanalyzer.googleapis.com/v1alpha1)
  • TALK_PERSPECTIVE_TIMEOUT - How long to wait for a comment to be processed before skipping the toxicity analysis; the value is parsed by the ms package. (Default 300ms)
  • TALK_PERSPECTIVE_DO_NOT_STORE - Whether the API should delete (rather than store) the comment text and context from this request after it has been evaluated. Stored comments may be used for future research and community model building purposes to improve the API over time; see the Perspective API AnalyzeComment request documentation for details. (Default true)
  • TALK_PERSPECTIVE_SEND_FEEDBACK - If set to true, this plugin will send moderation actions back to Perspective as feedback to improve their model. (Default false)
  • TALK_PERSPECTIVE_MODEL - Determines which Perspective API toxicity model should be used, e.g. TOXICITY vs. SEVERE_TOXICITY; the list of available models is provided in the Perspective API documentation. When displaying the toxicity score, this model is used by default. If this model isn't available on the comment metadata (such as when the model has been changed), Talk falls back to the stored TOXICITY model, which it always fetches. (Default SEVERE_TOXICITY)
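
To make these settings concrete, below is a minimal sketch of the kind of AnalyzeComment request the plugin issues against the endpoint above. It assumes Node 18+ for the global fetch; the isToxic helper and the hardcoded 300 ms timeout are illustrative and not the plugin's actual source.

```ts
// Sketch: score a comment with Perspective using the settings above.
const API_KEY = process.env.TALK_PERSPECTIVE_API_KEY;
const ENDPOINT =
  process.env.TALK_PERSPECTIVE_API_ENDPOINT ??
  "https://commentanalyzer.googleapis.com/v1alpha1";
const THRESHOLD = parseFloat(process.env.TALK_TOXICITY_THRESHOLD ?? "0.8");
const MODEL = process.env.TALK_PERSPECTIVE_MODEL ?? "SEVERE_TOXICITY";
const DO_NOT_STORE =
  (process.env.TALK_PERSPECTIVE_DO_NOT_STORE ?? "true") !== "false";

// Returns true when the comment's summary score exceeds the threshold.
async function isToxic(text: string): Promise<boolean> {
  const res = await fetch(`${ENDPOINT}/comments:analyze?key=${API_KEY}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      comment: { text },
      requestedAttributes: { [MODEL]: {} },
      doNotStore: DO_NOT_STORE,
    }),
    // TALK_PERSPECTIVE_TIMEOUT: the real plugin parses values like
    // "300ms" with the ms package; hardcoded here for brevity.
    signal: AbortSignal.timeout(300),
  });
  const data = await res.json();
  const score: number = data.attributeScores[MODEL].summaryScore.value;
  return score > THRESHOLD;
}
```

If the request fails or the timeout elapses, the plugin skips the toxicity analysis rather than blocking the comment, which is the behavior TALK_PERSPECTIVE_TIMEOUT controls.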