diff --git a/mediapipe/tasks/ios/vision/face_detector/sources/MPPFaceDetector.h b/mediapipe/tasks/ios/vision/face_detector/sources/MPPFaceDetector.h
index 885a490715..21c1b11503 100644
--- a/mediapipe/tasks/ios/vision/face_detector/sources/MPPFaceDetector.h
+++ b/mediapipe/tasks/ios/vision/face_detector/sources/MPPFaceDetector.h
@@ -84,11 +84,9 @@ NS_SWIFT_NAME(FaceDetector)
  * interest. Rotation will be applied according to the `orientation` property of the provided
  * `MPImage`. Only use this method when the `FaceDetector` is created with running mode `.image`.
  *
- * This method supports classification of RGBA images. If your `MPImage` has a source type of
- * `.pixelBuffer` or `.sampleBuffer`, the underlying pixel buffer must have one of the
- * following pixel format types:
- * 1. kCVPixelFormatType_32BGRA
- * 2. kCVPixelFormatType_32RGBA
+ * This method supports performing face detection on RGBA images. If your `MPImage` has a source
+ * type of `.pixelBuffer` or `.sampleBuffer`, the underlying pixel buffer must use
+ * `kCVPixelFormatType_32BGRA` as its pixel format.
  *
  * If your `MPImage` has a source type of `.image` ensure that the color space is
  * RGB with an Alpha channel.
@@ -109,11 +107,9 @@ NS_SWIFT_NAME(FaceDetector)
  * the provided `MPImage`. Only use this method when the `FaceDetector` is created with running
  * mode `.video`.
  *
- * This method supports classification of RGBA images. If your `MPImage` has a source type of
- * `.pixelBuffer` or `.sampleBuffer`, the underlying pixel buffer must have one of the
- * following pixel format types:
- * 1. kCVPixelFormatType_32BGRA
- * 2. kCVPixelFormatType_32RGBA
+ * This method supports performing face detection on RGBA images. If your `MPImage` has a source
+ * type of `.pixelBuffer` or `.sampleBuffer`, the underlying pixel buffer must use
+ * `kCVPixelFormatType_32BGRA` as its pixel format.
  *
  * If your `MPImage` has a source type of `.image` ensure that the color space is RGB with an Alpha
  * channel.
@@ -145,17 +141,15 @@ NS_SWIFT_NAME(FaceDetector)
  * It's required to provide a timestamp (in milliseconds) to indicate when the input image is sent
  * to the face detector. The input timestamps must be monotonically increasing.
  *
- * This method supports classification of RGBA images. If your `MPImage` has a source type of
- * `.pixelBuffer` or `.sampleBuffer`, the underlying pixel buffer must have one of the
- * following pixel format types:
- * 1. kCVPixelFormatType_32BGRA
- * 2. kCVPixelFormatType_32RGBA
+ * This method supports performing face detection on RGBA images. If your `MPImage` has a source
+ * type of `.pixelBuffer` or `.sampleBuffer`, the underlying pixel buffer must use
+ * `kCVPixelFormatType_32BGRA` as its pixel format.
  *
  * If the input `MPImage` has a source type of `.image` ensure that the color
  * space is RGB with an Alpha channel.
  *
  * If this method is used for classifying live camera frames using `AVFoundation`, ensure that you
- * request `AVCaptureVideoDataOutput` to output frames in `kCMPixelFormat_32RGBA` using its
+ * request `AVCaptureVideoDataOutput` to output frames in `kCMPixelFormat_32BGRA` using its
  * `videoSettings` property.
  *
  * @param image A live stream image data of type `MPImage` on which face detection is to be
diff --git a/mediapipe/tasks/ios/vision/face_landmarker/sources/MPPFaceLandmarker.h b/mediapipe/tasks/ios/vision/face_landmarker/sources/MPPFaceLandmarker.h
index 53cb8ecd71..13ecd7387d 100644
--- a/mediapipe/tasks/ios/vision/face_landmarker/sources/MPPFaceLandmarker.h
+++ b/mediapipe/tasks/ios/vision/face_landmarker/sources/MPPFaceLandmarker.h
@@ -57,11 +57,9 @@ NS_SWIFT_NAME(FaceLandmarker)
  * interest. Rotation will be applied according to the `orientation` property of the provided
  * `MPImage`. Only use this method when the `FaceLandmarker` is created with `.image`.
  *
- * This method supports RGBA images. If your `MPImage` has a source type of `.pixelBuffer` or
- * `.sampleBuffer`, the underlying pixel buffer must have one of the following pixel format
- * types:
- * 1. kCVPixelFormatType_32BGRA
- * 2. kCVPixelFormatType_32RGBA
+ * This method supports performing face landmark detection on RGBA images. If your `MPImage` has a
+ * source type of `.pixelBuffer` or `.sampleBuffer`, the underlying pixel buffer must use
+ * `kCVPixelFormatType_32BGRA` as its pixel format.
  *
  * If your `MPImage` has a source type of `.image` ensure that the color space is RGB with an
  * Alpha channel.
@@ -80,10 +78,9 @@ NS_SWIFT_NAME(FaceLandmarker)
  * the provided `MPImage`. Only use this method when the `FaceLandmarker` is created with running
  * mode `.video`.
  *
- * This method supports RGBA images. If your `MPImage` has a source type of `.pixelBuffer` or
- * `.sampleBuffer`, the underlying pixel buffer must have one of the following pixel format types:
- * 1. kCVPixelFormatType_32BGRA
- * 2. kCVPixelFormatType_32RGBA
+ * This method supports performing face landmark detection on RGBA images. If your `MPImage` has a
+ * source type of `.pixelBuffer` or `.sampleBuffer`, the underlying pixel buffer must use
+ * `kCVPixelFormatType_32BGRA` as its pixel format.
  *
  * If your `MPImage` has a source type of `.image` ensure that the color space is RGB with an Alpha
  * channel.
@@ -113,16 +110,15 @@ NS_SWIFT_NAME(FaceLandmarker)
  * It's required to provide a timestamp (in milliseconds) to indicate when the input image is sent
  * to the face detector. The input timestamps must be monotonically increasing.
  *
- * This method supports RGBA images. If your `MPImage` has a source type of `.pixelBuffer` or
- * `.sampleBuffer`, the underlying pixel buffer must have one of the following pixel format types:
- * 1. kCVPixelFormatType_32BGRA
- * 2. kCVPixelFormatType_32RGBA
+ * This method supports performing face landmark detection on RGBA images. If your `MPImage` has a
+ * source type of `.pixelBuffer` or `.sampleBuffer`, the underlying pixel buffer must use
+ * `kCVPixelFormatType_32BGRA` as its pixel format.
  *
  * If the input `MPImage` has a source type of `.image` ensure that the color space is RGB with an
  * Alpha channel.
  *
  * If this method is used for classifying live camera frames using `AVFoundation`, ensure that you
- * request `AVCaptureVideoDataOutput` to output frames in `kCMPixelFormat_32RGBA` using its
+ * request `AVCaptureVideoDataOutput` to output frames in `kCMPixelFormat_32BGRA` using its
  * `videoSettings` property.
  *
  * @param image A live stream image data of type `MPImage` on which face landmark detection is to be
diff --git a/mediapipe/tasks/ios/vision/gesture_recognizer/sources/MPPGestureRecognizer.h b/mediapipe/tasks/ios/vision/gesture_recognizer/sources/MPPGestureRecognizer.h
index 4585087189..2b8e50c072 100644
--- a/mediapipe/tasks/ios/vision/gesture_recognizer/sources/MPPGestureRecognizer.h
+++ b/mediapipe/tasks/ios/vision/gesture_recognizer/sources/MPPGestureRecognizer.h
@@ -63,11 +63,9 @@ NS_SWIFT_NAME(GestureRecognizer)
  * `MPImage`. Only use this method when the `GestureRecognizer` is created with running mode,
  * `.image`.
  *
- * This method supports gesture recognition of RGBA images. If your `MPImage` has a source type of
- * `.pixelBuffer` or `.sampleBuffer`, the underlying pixel buffer must have one of the following
- * pixel format types:
- * 1. kCVPixelFormatType_32BGRA
- * 2. kCVPixelFormatType_32RGBA
+ * This method supports performing gesture recognition on RGBA images. If your `MPImage` has a
+ * source type of `.pixelBuffer` or `.sampleBuffer`, the underlying pixel buffer must use
+ * `kCVPixelFormatType_32BGRA` as its pixel format.
  *
  * If your `MPImage` has a source type of `.image` ensure that the color space is RGB with an Alpha
  * channel.
@@ -92,11 +90,9 @@ NS_SWIFT_NAME(GestureRecognizer)
  * It's required to provide the video frame's timestamp (in milliseconds). The input timestamps must
  * be monotonically increasing.
  *
- * This method supports gesture recognition of RGBA images. If your `MPImage` has a source type of
- * `.pixelBuffer` or `.sampleBuffer`, the underlying pixel buffer must have one of the following
- * pixel format types:
- * 1. kCVPixelFormatType_32BGRA
- * 2. kCVPixelFormatType_32RGBA
+ * This method supports performing gesture recognition on RGBA images. If your `MPImage` has a
+ * source type of `.pixelBuffer` or `.sampleBuffer`, the underlying pixel buffer must use
+ * `kCVPixelFormatType_32BGRA` as its pixel format.
  *
  * If your `MPImage` has a source type of `.image` ensure that the color space is RGB with an Alpha
  * channel.
@@ -129,18 +125,16 @@ NS_SWIFT_NAME(GestureRecognizer)
  * It's required to provide a timestamp (in milliseconds) to indicate when the input image is sent
  * to the gesture recognizer. The input timestamps must be monotonically increasing.
  *
- * This method supports gesture recognition of RGBA images. If your `MPImage` has a source type of
- * `.pixelBuffer` or `.sampleBuffer`, the underlying pixel buffer must have one of the following
- * pixel format types:
- * 1. kCVPixelFormatType_32BGRA
- * 2. kCVPixelFormatType_32RGBA
+ * This method supports performing gesture recognition on RGBA images. If your `MPImage` has a
+ * source type of `.pixelBuffer` or `.sampleBuffer`, the underlying pixel buffer must use
+ * `kCVPixelFormatType_32BGRA` as its pixel format.
  *
  * If the input `MPImage` has a source type of `.image` ensure that the color space is RGB with an
  * Alpha channel.
  *
  * If this method is used for performing gesture recognition on live camera frames using
  * `AVFoundation`, ensure that you request `AVCaptureVideoDataOutput` to output frames in
- * `kCMPixelFormat_32RGBA` using its `videoSettings` property.
+ * `kCMPixelFormat_32BGRA` using its `videoSettings` property.
  *
  * @param image A live stream image data of type `MPImage` on which gesture recognition is to be
  * performed.
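Every hunk above makes the same two corrections: the accepted pixel buffer format narrows to `kCVPixelFormatType_32BGRA` only, and the `AVCaptureVideoDataOutput` guidance now names `kCMPixelFormat_32BGRA`. A minimal Swift sketch of the camera-side configuration the corrected comments describe (the helper name is illustrative, not part of this patch):

```swift
import AVFoundation

/// Returns a capture output that delivers frames in 32BGRA, the only pixel
/// format the corrected documentation accepts for `.pixelBuffer` and
/// `.sampleBuffer` sources.
func makeBGRAVideoOutput() -> AVCaptureVideoDataOutput {
  let output = AVCaptureVideoDataOutput()
  // kCMPixelFormat_32BGRA and kCVPixelFormatType_32BGRA share the same
  // four-char-code value ('BGRA'), so either constant works here.
  output.videoSettings = [
    kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
  ]
  return output
}
```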
diff --git a/mediapipe/tasks/ios/vision/image_classifier/sources/MPPImageClassifier.h b/mediapipe/tasks/ios/vision/image_classifier/sources/MPPImageClassifier.h
index 6e7ae57bfa..6b093b8c4d 100644
--- a/mediapipe/tasks/ios/vision/image_classifier/sources/MPPImageClassifier.h
+++ b/mediapipe/tasks/ios/vision/image_classifier/sources/MPPImageClassifier.h
@@ -199,7 +199,7 @@ NS_SWIFT_NAME(ImageClassifier)
  * Alpha channel.
  *
  * If this method is used for classifying live camera frames using `AVFoundation`, ensure that you
- * request `AVCaptureVideoDataOutput` to output frames in `kCMPixelFormat_32RGBA` using its
+ * request `AVCaptureVideoDataOutput` to output frames in `kCMPixelFormat_32BGRA` using its
  * `videoSettings` property.
  *
  * @param image A live stream image data of type `MPImage` on which image classification is to be
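The live-stream comments touched by this patch also require monotonically increasing millisecond timestamps. A sketch of that flow for the face detector, assuming the Swift surface these headers expose (`FaceDetectorOptions`, `MPImage(sampleBuffer:)`, `detectAsync(image:timestampInMilliseconds:)`, `FaceDetectorLiveStreamDelegate`); the class name and model path are illustrative:

```swift
import AVFoundation
import MediaPipeTasksVision

final class FaceStreamHandler: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate,
                               FaceDetectorLiveStreamDelegate {
  private var faceDetector: FaceDetector?

  init(modelPath: String) throws {
    super.init()
    let options = FaceDetectorOptions()
    options.baseOptions.modelAssetPath = modelPath
    options.runningMode = .liveStream
    options.faceDetectorLiveStreamDelegate = self
    faceDetector = try FaceDetector(options: options)
  }

  func captureOutput(_ output: AVCaptureOutput,
                     didOutput sampleBuffer: CMSampleBuffer,
                     from connection: AVCaptureConnection) {
    // The buffer must hold 32BGRA pixels; see the `videoSettings` note above.
    guard let image = try? MPImage(sampleBuffer: sampleBuffer) else { return }
    // Presentation timestamps increase monotonically across frames, which
    // satisfies the requirement stated in the live-stream documentation.
    let timestampMs = Int(
      CMTimeGetSeconds(CMSampleBufferGetPresentationTimeStamp(sampleBuffer)) * 1000)
    try? faceDetector?.detectAsync(image: image, timestampInMilliseconds: timestampMs)
  }

  // Results arrive asynchronously on this delegate callback, not as a
  // return value of `detectAsync`.
  func faceDetector(_ faceDetector: FaceDetector,
                    didFinishDetection result: FaceDetectorResult?,
                    timestampInMilliseconds: Int,
                    error: Error?) {
    guard let result = result else { return }
    print("Detected \(result.detections.count) face(s) at \(timestampInMilliseconds) ms")
  }
}
```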