Label images securely with Cloud Vision using Firebase Auth and Firebase Functions on Apple platforms
In order to call a Google Cloud API from your app, you need to create an intermediate REST API that handles authorization and protects secret values such as API keys. You then need to write code in your mobile app to authenticate to and communicate with this intermediate service.

One way to create this REST API is by using Firebase Authentication and Firebase Functions, which gives you a managed, serverless gateway to Google Cloud APIs that handles authentication and can be called from your mobile app with pre-built SDKs.

This guide demonstrates how to use this technique to call the Cloud Vision API from your app. This method will allow all authenticated users to access Cloud Vision billed services through your Cloud project, so consider whether this auth mechanism is sufficient for your use case before proceeding.
*Last updated: 2025-08-16 (UTC)*

> **Note:** The Firebase ML Vision SDK for labeling objects in an image is now deprecated ([see the outdated docs here](/docs/ml/ios/label-images-deprecated)). This page describes how, as an alternative to the deprecated SDK, you can call Cloud Vision APIs using Firebase Auth and Firebase Functions to allow only authenticated users to access the API.

> **Note:** Use of the Cloud Vision APIs is subject to the [Google Cloud Platform License Agreement](https://cloud.google.com/terms/) and [Service Specific Terms](https://cloud.google.com/terms/service-terms), and billed accordingly.
> For billing information, see the [Pricing](https://cloud.google.com/vision/pricing) page.

> **Looking for on-device image labeling?** Try the [standalone ML Kit library](https://developers.google.com/ml-kit/vision/image-labeling).

## Before you begin

### Configure your project

If you have not already added Firebase to your app, do so by following the steps in the [getting started guide](/docs/ios/setup).

Use Swift Package Manager to install and manage Firebase dependencies.

> **Note:** Visit [our installation guide](/docs/ios/installation-methods) to learn about the different ways you can add Firebase SDKs to your Apple project, including importing frameworks directly and using CocoaPods.

1. In Xcode, with your app project open, navigate to **File > Add Packages**.
2. When prompted, add the Firebase Apple platforms SDK repository:

   ```text
   https://github.com/firebase/firebase-ios-sdk.git
   ```

   > **Note:** New projects should use the default (latest) SDK version, but you can choose an older version if needed.
3. Choose the Firebase ML library.
4. Add the `-ObjC` flag to the *Other Linker Flags* section of your target's build settings.
5. When finished, Xcode will automatically begin resolving and downloading your dependencies in the background.

Next, perform some in-app setup:

1. In your app, import Firebase:

   Swift

   ```swift
   import FirebaseMLModelDownloader
   ```

   Objective-C

   ```objective-c
   @import FirebaseMLModelDownloader;
   ```

A few more configuration steps, and we're ready to go:

1. If you haven't already enabled Cloud-based APIs for your project, do so now:

   1. Open the [Firebase ML APIs page](//console.firebase.google.com/project/_/ml/apis) in the Firebase console.
   2. If you haven't already upgraded your project to the [pay-as-you-go Blaze pricing plan](/pricing), click **Upgrade** to do so.
      (You'll be prompted to upgrade only if your project isn't on the Blaze pricing plan.) Only projects on the Blaze pricing plan can use Cloud-based APIs.
   3. If Cloud-based APIs aren't already enabled, click **Enable Cloud-based APIs**.
2. Configure your existing Firebase API keys to disallow access to the Cloud Vision API:
   1. Open the [Credentials](https://console.cloud.google.com/apis/credentials?project=_) page of the Cloud console.
   2. For each API key in the list, open the editing view, and in the Key Restrictions section, add all of the available APIs *except* the Cloud Vision API to the list.

## Deploy the callable function

Next, deploy the Cloud Function you will use to bridge your app and the Cloud Vision API. The `functions-samples` repository contains an example you can use.

By default, accessing the Cloud Vision API through this function will allow only authenticated users of your app access to the Cloud Vision API. You can modify the function for different requirements.

To deploy the function:

1. Clone or download the [functions-samples repo](https://github.com/firebase/functions-samples) and change to the `Node-1st-gen/vision-annotate-image` directory:

   ```shell
   git clone https://github.com/firebase/functions-samples
   cd Node-1st-gen/vision-annotate-image
   ```

2. Install dependencies:

   ```shell
   cd functions
   npm install
   cd ..
   ```

3. If you don't have the Firebase CLI, [install it](/docs/cli#setup_update_cli).
4. Initialize a Firebase project in the `vision-annotate-image` directory. When prompted, select your project in the list.

   ```shell
   firebase init
   ```

5. Deploy the function:

   ```shell
   firebase deploy --only functions:annotateImage
   ```

## Add Firebase Auth to your app

The callable function deployed above will reject any request from non-authenticated users of your app.
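This gating behavior can be sketched in code. The following is a hypothetical, simplified Node.js sketch of what an `annotateImage` callable does (not the actual `functions-samples` implementation), with the Vision client stubbed out so the control flow runs standalone:

```javascript
// Hypothetical, simplified sketch of the callable's core logic. NOT the
// actual functions-samples implementation: the Vision client is passed in
// and stubbed below so the auth-gating flow can run without dependencies.
async function annotateImage(data, context, visionClient) {
  // Reject requests from non-authenticated users. A real callable would
  // throw functions.https.HttpsError("unauthenticated", ...) here.
  if (!context.auth) {
    throw new Error("unauthenticated");
  }
  // Forward the client's Cloud Vision JSON request to the Vision API.
  const [response] = await visionClient.batchAnnotateImages({ requests: [data] });
  return response.responses[0];
}

// Stub Vision client returning a canned label annotation (values invented).
const stubVision = {
  batchAnnotateImages: async () => [
    { responses: [{ labelAnnotations: [{ description: "Cat", mid: "/m/01yrx", score: 0.98 }] }] },
  ],
};

const request = {
  image: { content: "<base64-encoded image>" },
  features: [{ maxResults: 5, type: "LABEL_DETECTION" }],
};

// Authenticated calls go through; unauthenticated calls are rejected.
annotateImage(request, { auth: { uid: "user123" } }, stubVision)
  .then((res) => console.log(res.labelAnnotations[0].description)); // "Cat"
annotateImage(request, {}, stubVision)
  .catch((err) => console.log(err.message)); // "unauthenticated"
```

The real function also needs the `firebase-functions` and `@google-cloud/vision` packages, which `npm install` pulls in during the deploy steps above.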
If you have not already done so, you will need to [add Firebase Auth to your app](https://firebase.google.com/docs/auth/ios/start#add_to_your_app).

## Add necessary dependencies to your app

Use Swift Package Manager to install the Cloud Functions for Firebase library.

Now you are ready to label images.

### 1. Prepare the input image

In order to call Cloud Vision, the image must be formatted as a base64-encoded string. To process a `UIImage`:

Swift

```swift
guard let imageData = uiImage.jpegData(compressionQuality: 1.0) else { return }
let base64encodedImage = imageData.base64EncodedString()
```

Objective-C

```objective-c
NSData *imageData = UIImageJPEGRepresentation(uiImage, 1.0f);
NSString *base64encodedImage =
    [imageData base64EncodedStringWithOptions:NSDataBase64Encoding76CharacterLineLength];
```

### 2. Invoke the callable function to label the image

To label objects in an image, invoke the callable function passing a [JSON Cloud Vision request](https://cloud.google.com/vision/docs/request#json_request_format).

1. First, initialize an instance of Cloud Functions:

   Swift

   ```swift
   lazy var functions = Functions.functions()
   ```

   Objective-C

   ```objective-c
   @property(strong, nonatomic) FIRFunctions *functions;
   ```

2. Create a request with [Type](https://cloud.google.com/vision/docs/reference/rest/v1/Feature#type) set to `LABEL_DETECTION`:

   Swift

   ```swift
   let requestData = [
     "image": ["content": base64encodedImage],
     "features": ["maxResults": 5, "type": "LABEL_DETECTION"]
   ]
   ```

   Objective-C

   ```objective-c
   NSDictionary *requestData = @{
     @"image": @{@"content": base64encodedImage},
     @"features": @{@"maxResults": @5, @"type": @"LABEL_DETECTION"}
   };
   ```

3. Finally, invoke the function:

   Swift

   ```swift
   do {
     let result = try await functions.httpsCallable("annotateImage").call(requestData)
     print(result)
   } catch {
     if let error = error as NSError? {
       if error.domain == FunctionsErrorDomain {
         let code = FunctionsErrorCode(rawValue: error.code)
         let message = error.localizedDescription
         let details = error.userInfo[FunctionsErrorDetailsKey]
       }
       // ...
     }
   }
   ```

   Objective-C

   ```objective-c
   [[_functions HTTPSCallableWithName:@"annotateImage"]
     callWithObject:requestData
     completion:^(FIRHTTPSCallableResult * _Nullable result, NSError * _Nullable error) {
       if (error) {
         if ([error.domain isEqualToString:@"com.firebase.functions"]) {
           FIRFunctionsErrorCode code = error.code;
           NSString *message = error.localizedDescription;
           NSObject *details = error.userInfo[@"details"];
         }
         // ...
       }
       // Function completed successfully
       // Get information about labeled objects
     }];
   ```

### 3. Get information about labeled objects

If the image labeling operation succeeds, a JSON response of [BatchAnnotateImagesResponse](https://cloud.google.com/vision/docs/reference/rest/v1/BatchAnnotateImagesResponse) will be returned in the task's result. Each object in the `labelAnnotations` array represents something that was labeled in the image. For each label, you can get the label's text description, its [Knowledge Graph entity ID](https://developers.google.com/knowledge-graph/) (if available), and the confidence score of the match. For example:

Swift

```swift
if let labelArray = (result?.data as? [String: Any])?["labelAnnotations"] as? [[String: Any]] {
  for labelObj in labelArray {
    let text = labelObj["description"]
    let entityId = labelObj["mid"]
    let confidence = labelObj["score"]
  }
}
```

Objective-C

```objective-c
NSArray *labelArray = result.data[@"labelAnnotations"];
for (NSDictionary *labelObj in labelArray) {
  NSString *text = labelObj[@"description"];
  NSString *entityId = labelObj[@"mid"];
  NSNumber *confidence = labelObj[@"score"];
}
```
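For reference, the three client-side steps above (encode the image, build the request, read the labels) can be traced end to end outside the app. This is an illustrative Node.js sketch, the same runtime as the deployed function; the Cloud Vision call is replaced by a mocked response whose label values are invented:

```javascript
// 1. Prepare the input image: Cloud Vision expects base64-encoded bytes.
//    (A 3-byte stand-in is used here instead of real JPEG data.)
const imageBytes = Buffer.from([0xff, 0xd8, 0xff]);
const base64encodedImage = imageBytes.toString("base64");

// 2. Build the JSON Cloud Vision request with type LABEL_DETECTION.
const requestData = {
  image: { content: base64encodedImage },
  features: [{ maxResults: 5, type: "LABEL_DETECTION" }],
};

// 3. Read labels out of a BatchAnnotateImagesResponse-shaped result.
//    The annotation values below are invented for illustration.
const mockResult = {
  labelAnnotations: [
    { description: "Street", mid: "/m/01c8br", score: 0.87 },
    { description: "Building", mid: "/m/0cgh4", score: 0.81 },
  ],
};

for (const labelObj of mockResult.labelAnnotations) {
  const text = labelObj.description; // label's text description
  const entityId = labelObj.mid;     // Knowledge Graph entity ID, if any
  const confidence = labelObj.score; // confidence score of the match
  console.log(`${text} (${entityId}): ${confidence}`);
}
```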