Understand and use safety settings

You can use safety settings to adjust the likelihood of getting responses that may be considered harmful. By default, the safety settings block content with a medium or higher probability of being unsafe across any dimension or category.
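For example, here is a minimal Swift sketch of what that default blocking looks like at runtime: it sends a prompt with no explicit safety settings and then inspects why the prompt or the response was blocked. The property names used here (`promptFeedback`, `blockReason`, `finishReason`, `safetyRatings`) reflect our reading of the Firebase AI Logic Swift SDK; treat them as assumptions and verify them against the SDK reference.


import FirebaseAI

// A minimal sketch: with no explicit `safetySettings`, the default settings
// block content rated medium or higher in any category.
let model = FirebaseAI.firebaseAI(backend: .googleAI()).generativeModel(
  modelName: "GEMINI_MODEL_NAME"
)

func inspectSafetyOutcome(for prompt: String) async throws {
  let response = try await model.generateContent(prompt)

  // If the prompt itself was blocked, the block reason is reported in the
  // prompt feedback (property names assumed; check the SDK reference).
  if let blockReason = response.promptFeedback?.blockReason {
    print("Prompt was blocked: \(blockReason)")
    return
  }

  // If the response was stopped for safety, the candidate's finish reason and
  // per-category safety ratings explain why.
  if let candidate = response.candidates.first {
    print("Finish reason: \(String(describing: candidate.finishReason))")
    for rating in candidate.safetyRatings {
      print("\(rating.category): \(rating.probability)")
    }
  }
}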


Safety settings for Gemini models


Learn more about the safety settings available for Gemini models in the Gemini Developer API documentation.

Swift

Configure SafetySettings when you create the GenerativeModel instance.

Example with one safety setting:


import FirebaseAI

// Specify the safety settings as part of creating the `GenerativeModel` instance
let model = FirebaseAI.firebaseAI(backend: .googleAI()).generativeModel(
  modelName: "GEMINI_MODEL_NAME",
  safetySettings: [
    SafetySetting(harmCategory: .harassment, threshold: .blockOnlyHigh)
  ]
)

// ...

Example with multiple safety settings:


import FirebaseAI

let harassmentSafety = SafetySetting(harmCategory: .harassment, threshold: .blockOnlyHigh)
let hateSpeechSafety = SafetySetting(harmCategory: .hateSpeech, threshold: .blockMediumAndAbove)

// Specify the safety settings as part of creating the `GenerativeModel` instance
let model = FirebaseAI.firebaseAI(backend: .googleAI()).generativeModel(
  modelName: "GEMINI_MODEL_NAME",
  safetySettings: [harassmentSafety, hateSpeechSafety]
)

// ...

Kotlin

Configure SafetySettings when you create the GenerativeModel instance.

Example with one safety setting:


import com.google.firebase.ai.type.HarmBlockThreshold
import com.google.firebase.ai.type.HarmCategory
import com.google.firebase.ai.type.SafetySetting

// Specify the safety settings as part of creating the `GenerativeModel` instance
val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
    modelName = "GEMINI_MODEL_NAME",
    safetySettings = listOf(
        SafetySetting(HarmCategory.HARASSMENT, HarmBlockThreshold.ONLY_HIGH)
    )
)

// ...

Example with multiple safety settings:


import com.google.firebase.ai.type.HarmBlockThreshold
import com.google.firebase.ai.type.HarmCategory
import com.google.firebase.ai.type.SafetySetting

val harassmentSafety = SafetySetting(HarmCategory.HARASSMENT, HarmBlockThreshold.ONLY_HIGH)
val hateSpeechSafety = SafetySetting(HarmCategory.HATE_SPEECH, HarmBlockThreshold.MEDIUM_AND_ABOVE)

// Specify the safety settings as part of creating the `GenerativeModel` instance
val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
    modelName = "GEMINI_MODEL_NAME",
    safetySettings = listOf(harassmentSafety, hateSpeechSafety)
)

// ...

Java

Configure SafetySettings when you create the GenerativeModel instance.

Example with one safety setting:


SafetySetting harassmentSafety = new SafetySetting(HarmCategory.HARASSMENT,
        HarmBlockThreshold.ONLY_HIGH);

// Specify the safety settings as part of creating the `GenerativeModel` instance
GenerativeModelFutures model = GenerativeModelFutures.from(
        FirebaseAI.getInstance(GenerativeBackend.googleAI())
                .generativeModel(
                  /* modelName */ "GEMINI_MODEL_NAME",
                  /* generationConfig is optional */ null,
                  Collections.singletonList(harassmentSafety)
                )
);

// ...

Example with multiple safety settings:


SafetySetting harassmentSafety = new SafetySetting(HarmCategory.HARASSMENT,
        HarmBlockThreshold.ONLY_HIGH);

SafetySetting hateSpeechSafety = new SafetySetting(HarmCategory.HATE_SPEECH,
        HarmBlockThreshold.MEDIUM_AND_ABOVE);

// Specify the safety settings as part of creating the `GenerativeModel` instance
GenerativeModelFutures model = GenerativeModelFutures.from(
        FirebaseAI.getInstance(GenerativeBackend.googleAI())
                .generativeModel(
                  /* modelName */ "GEMINI_MODEL_NAME",
                  /* generationConfig is optional */ null,
                  List.of(harassmentSafety, hateSpeechSafety)
                )
);

// ...

Web

Configure SafetySettings when you create the GenerativeModel instance.

Example with one safety setting:


import { HarmBlockThreshold, HarmCategory, getAI, getGenerativeModel, GoogleAIBackend } from "firebase/ai";

// ...

const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

const safetySettings = [
  {
    category: HarmCategory.HARM_CATEGORY_HARASSMENT,
    threshold: HarmBlockThreshold.BLOCK_ONLY_HIGH,
  },
];

// Specify the safety settings as part of creating the `GenerativeModel` instance
const model = getGenerativeModel(ai, { model: "GEMINI_MODEL_NAME", safetySettings });

// ...

Example with multiple safety settings:


import { HarmBlockThreshold, HarmCategory, getAI, getGenerativeModel, GoogleAIBackend } from "firebase/ai";

// ...

const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

const safetySettings = [
  {
    category: HarmCategory.HARM_CATEGORY_HARASSMENT,
    threshold: HarmBlockThreshold.BLOCK_ONLY_HIGH,
  },
  {
    category: HarmCategory.HARM_CATEGORY_HATE_SPEECH,
    threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
  },
];

// Specify the safety settings as part of creating the `GenerativeModel` instance
const model = getGenerativeModel(ai, { model: "GEMINI_MODEL_NAME", safetySettings });

// ...

Dart

Configure SafetySettings when you create the GenerativeModel instance.

Example with one safety setting:


// ...

final safetySettings = [
  SafetySetting(HarmCategory.harassment, HarmBlockThreshold.high)
];

// Specify the safety settings as part of creating the `GenerativeModel` instance
final model = FirebaseAI.googleAI().generativeModel(
  model: 'GEMINI_MODEL_NAME',
  safetySettings: safetySettings,
);

// ...

Example with multiple safety settings:


// ...

final safetySettings = [
  SafetySetting(HarmCategory.harassment, HarmBlockThreshold.high),
  SafetySetting(HarmCategory.hateSpeech, HarmBlockThreshold.high),
];

// Specify the safety settings as part of creating the `GenerativeModel` instance
final model = FirebaseAI.googleAI().generativeModel(
  model: 'GEMINI_MODEL_NAME',
  safetySettings: safetySettings,
);

// ...

Unity

Configure SafetySettings when you create the GenerativeModel instance.

Example with one safety setting:


// ...

// Specify the safety settings as part of creating the `GenerativeModel` instance
var ai = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI());
var model = ai.GetGenerativeModel(
  modelName: "GEMINI_MODEL_NAME",
  safetySettings: new SafetySetting[] {
    new SafetySetting(HarmCategory.Harassment, SafetySetting.HarmBlockThreshold.OnlyHigh)
  }
);

// ...

Example with multiple safety settings:


// ...

var harassmentSafety = new SafetySetting(HarmCategory.Harassment, SafetySetting.HarmBlockThreshold.OnlyHigh);
var hateSpeechSafety = new SafetySetting(HarmCategory.HateSpeech, SafetySetting.HarmBlockThreshold.MediumAndAbove);

// Specify the safety settings as part of creating the `GenerativeModel` instance
var ai = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI());
var model = ai.GetGenerativeModel(
  modelName: "GEMINI_MODEL_NAME",
  safetySettings: new SafetySetting[] { harassmentSafety, hateSpeechSafety }
);

// ...

Safety settings for Imagen models


See the Google Cloud documentation for Imagen models to learn about all the supported safety settings and their available values.

Swift

Configure ImagenSafetySettings when you create the ImagenModel instance.


import FirebaseAI

// Specify the safety settings as part of creating the `ImagenModel` instance
let model = FirebaseAI.firebaseAI(backend: .googleAI()).imagenModel(
  modelName: "IMAGEN_MODEL_NAME",
  safetySettings: ImagenSafetySettings(
    safetyFilterLevel: .blockLowAndAbove,
    personFilterLevel: .allowAdult
  )
)

// ...

Kotlin

Configure ImagenSafetySettings when you create the ImagenModel instance.


// Specify the safety settings as part of creating the `ImagenModel` instance
val model = Firebase.ai(backend = GenerativeBackend.googleAI()).imagenModel(
  modelName = "IMAGEN_MODEL_NAME",
  safetySettings = ImagenSafetySettings(
    safetyFilterLevel = ImagenSafetyFilterLevel.BLOCK_LOW_AND_ABOVE,
    personFilterLevel = ImagenPersonFilterLevel.BLOCK_ALL
  )
)

// ...

Java

Configure ImagenSafetySettings when you create the ImagenModel instance.


ImagenSafetySettings safetySettings = new ImagenSafetySettings(
        ImagenSafetyFilterLevel.BLOCK_LOW_AND_ABOVE,
        ImagenPersonFilterLevel.BLOCK_ALL);

// Specify the safety settings as part of creating the `ImagenModel` instance
// (assumes an `imagenModel` overload that accepts the safety settings as a third argument)
ImagenModelFutures model = ImagenModelFutures.from(
        FirebaseAI.getInstance(GenerativeBackend.googleAI())
                .imagenModel(
                  /* modelName */ "IMAGEN_MODEL_NAME",
                  /* imageGenerationConfig */ null,
                  /* safetySettings */ safetySettings)
);

// ...

Web

Configure ImagenSafetySettings when you create the ImagenModel instance.


// ...

const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

// Specify the safety settings as part of creating the `ImagenModel` instance
const model = getImagenModel(
  ai,
  {
    model: "IMAGEN_MODEL_NAME",
    safetySettings: {
      safetyFilterLevel: ImagenSafetyFilterLevel.BLOCK_LOW_AND_ABOVE,
      personFilterLevel: ImagenPersonFilterLevel.ALLOW_ADULT,
    }
  }
);

// ...

Dart

Configure ImagenSafetySettings when you create the ImagenModel instance.


// ...

// Specify the safety settings as part of creating the `ImagenModel` instance
final model = FirebaseAI.googleAI().imagenModel(
  model: 'IMAGEN_MODEL_NAME',
  safetySettings: ImagenSafetySettings(
    ImagenSafetyFilterLevel.blockLowAndAbove,
    ImagenPersonFilterLevel.allowAdult,
  ),
);

// ...

Unity

Using Imagen is not yet supported for Unity, but check back soon!

Other options to control content generation

  • Learn more about prompt design so you can influence the model to generate output that is specific to your needs.
  • Configure model parameters to control how the model generates a response. For Gemini models, these parameters include the maximum number of output tokens, temperature (randomness), topK, and topP. For Imagen models, they include aspect ratio, person generation, watermarking, and more. A combined sketch follows this list.
  • Use system instructions to steer the behavior of the model. This feature is like a preamble that you add before the model is exposed to any further instructions from the end user.
  • Pass a response schema along with the prompt to specify a specific output schema. This feature is most commonly used for generating JSON output, but it can also be used for classification tasks (for example, when you want the model to use specific labels or tags).
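
As a rough illustration of how these options fit together with the safety settings on this page, the following Swift sketch configures model parameters, a system instruction, and a safety setting on one model instance. The exact initializer signatures (`GenerationConfig`, `ModelContent`, and the `generativeModel` parameters) are assumptions based on our reading of the Firebase AI Logic Swift SDK; confirm them in the guides linked above.


import FirebaseAI

// A minimal sketch (assumed SDK surface): model parameters, a system
// instruction, and a safety setting combined on one `GenerativeModel`.
let config = GenerationConfig(
  temperature: 0.7,       // randomness of sampling
  topP: 0.95,
  topK: 40,
  maxOutputTokens: 1024
)

let model = FirebaseAI.firebaseAI(backend: .googleAI()).generativeModel(
  modelName: "GEMINI_MODEL_NAME",
  generationConfig: config,
  safetySettings: [
    SafetySetting(harmCategory: .harassment, threshold: .blockOnlyHigh)
  ],
  systemInstruction: ModelContent(role: "system", parts: "You are a concise, friendly assistant.")
)

// ...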