FirebaseVertexAI Framework Reference

ImagenSafetyFilterLevel

@available(iOS 15.0, macOS 12.0, tvOS 15.0, watchOS 8.0, *)
public struct ImagenSafetyFilterLevel : ProtoEnum, Sendable

A filter level controlling how aggressively to filter sensitive content.

Text prompts provided as input to Imagen on Vertex AI, as well as images (generated or uploaded), are assessed against a list of safety filters that cover harmful categories (for example, violence, sexual, derogatory, and toxic content). This filter level controls how aggressively potentially harmful content is filtered out of responses. See the safetySetting documentation and the Responsible AI and usage guidelines for more details.

  • The most aggressive filtering level; the strictest blocking.

    Declaration

    Swift

    public static let blockLowAndAbove: ImagenSafetyFilterLevel
  • Blocks some problematic prompts and responses.

    Declaration

    Swift

    public static let blockMediumAndAbove: ImagenSafetyFilterLevel
  • Reduces the number of requests blocked due to safety filters.

    Important

    This may increase objectionable content generated by Imagen.

    Declaration

    Swift

    public static let blockOnlyHigh: ImagenSafetyFilterLevel
  • The least aggressive filtering level; blocks very few problematic prompts and responses.

    Important

    Access to this feature is restricted and may require your use case to be reviewed and approved by Cloud support.

    Declaration

    Swift

    public static let blockNone: ImagenSafetyFilterLevel
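
The filter level is typically supplied via safety settings when creating an Imagen model. The sketch below shows one plausible way to do this; the model name and the `ImagenSafetySettings` initializer shape are assumptions based on the FirebaseVertexAI SDK surface, not something defined on this page, so verify them against your SDK version.

```swift
import FirebaseVertexAI

// Obtain the Vertex AI service instance for the default Firebase app.
let vertex = VertexAI.vertexAI()

// Create an Imagen model with a chosen safety filter level.
// The model name below is illustrative; use a model available to your project.
let model = vertex.imagenModel(
  modelName: "imagen-3.0-generate-002",
  safetySettings: ImagenSafetySettings(
    safetyFilterLevel: .blockMediumAndAbove
  )
)
```

A stricter level such as `.blockLowAndAbove` blocks more prompts and responses, while `.blockOnlyHigh` or `.blockNone` reduce blocking at the cost of potentially more objectionable output (and `.blockNone` may require approval, as noted above).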