1. Defining VideoCamera
OpenCV provides a class, CvVideoCamera, that implements high-level camera controls and a preview GUI while remaining highly customizable. CvVideoCamera is built atop AVFoundation and exposes some of the underlying AVFoundation objects, so an application developer can mix high-level CvVideoCamera features with low-level AVFoundation features. The application implements most of the GUI itself; it can disable the video preview or specify a parent view in which CvVideoCamera will render it. Moreover, the application can process each video frame as it is captured, and if it edits a frame in place, CvVideoCamera shows the result in the preview. CvVideoCamera is therefore a suitable starting point for our project.
- OpenCV also provides a class named CvPhotoCamera for capturing high-quality still images rather than a continuous video stream. Unlike CvVideoCamera, CvPhotoCamera does not let us apply custom image processing to the live preview.
- Subclassing CvVideoCamera
We will create a subclass of CvVideoCamera in a new header file named VideoCamera.h. There, we declare the subclass's public interface, including a new property and method, as in the following code:
#import <opencv2/videoio/cap_ios.h>
@interface VideoCamera : CvVideoCamera
@property (nonatomic,assign) BOOL letterboxPreview;
- (void)setPointOfInterestInParentViewSpace:(CGPoint)point;
@end
The - (void)setPointOfInterestInParentViewSpace:(CGPoint)point; method will set a single point of interest for the camera's autofocus and autoexposure algorithms. After a brief search for the best solution, the camera should reconfigure itself so that its focal distance and mid-tone level match the neighborhood of the given point, which is expressed in pixel coordinates within the preview's parent view. In other words, after the adjustment, the point and its neighborhood should be in focus and approximately 50% gray. A good autoexposure algorithm may, however, allow some variation depending on the colors and brightness of other regions of the scene.
In the class's implementation file, VideoCamera.m, we will add a private interface with one property, the customPreviewLayer layer, as in the following code:
#import "VideoCamera.h"
@interface VideoCamera ()
@property (nonatomic, strong) CALayer *customPreviewLayer;
@end
To customize the preview layer's layout, we will override the following methods of CvVideoCamera:
- (int)imageWidth and - (int)imageHeight: These methods should return the horizontal and vertical resolution that the camera is currently using. The superclass's implementation is faulty (as of OpenCV 3.1) because it relies on a set of assumptions about the default resolutions of the various quality modes, instead of querying the current resolution directly.
- (void)updateSize: The superclass uses this method to make assumptions about the camera's resolution. This is actually counterproductive; as the preceding bullet point describes, such assumptions are unreliable and unnecessary.
- (void)layoutPreviewLayer: This method should lay out the preview in a way that respects the current device orientation. The superclass's implementation is faulty (as of OpenCV 3.1); in some cases, the preview is stretched or incorrectly oriented.
To obtain the correct resolution, we can query the camera's current capture parameters via an AVFoundation class named AVCaptureVideoDataOutput. Refer to the following code, which overrides the getter for imageWidth:
- (int)imageWidth {
AVCaptureVideoDataOutput *output = [self.captureSession.outputs lastObject];
NSDictionary *videoSettings = [output videoSettings];
int videoWidth = [[videoSettings objectForKey:@"Width"] intValue];
return videoWidth;
}
Similarly, let's override the getter for imageHeight:
- (int)imageHeight {
AVCaptureVideoDataOutput *output = [self.captureSession.outputs lastObject];
NSDictionary *videoSettings = [output videoSettings];
int videoHeight = [[videoSettings objectForKey:@"Height"] intValue];
return videoHeight;
}
Now we have fully solved the problem of querying the camera's resolution, so we can override the updateSize method with an empty implementation:
- (void)updateSize {
// Do nothing.
}
When showing the video preview, we first center it in its parent view. Then, we find its aspect ratio and choose a preview size that preserves that aspect ratio. If letterboxPreview is YES, the preview may be smaller than the parent view in one dimension. Otherwise, it may be larger than the parent view in one dimension, in which case its edges extend beyond the view and are cropped. The following code shows how to position and scale the preview:
- (void)layoutPreviewLayer {
if (self.parentView != nil) {
// Center the video preview.
self.customPreviewLayer.position = CGPointMake(0.5 * self.parentView.frame.size.width, 0.5 * self.parentView.frame.size.height);
// Find the video's aspect ratio.
CGFloat videoAspectRatio = self.imageWidth / (CGFloat)self.imageHeight;
// Scale the video preview while maintaining its aspect ratio.
CGFloat boundsW;
CGFloat boundsH;
if (self.imageHeight > self.imageWidth) {
if (self.letterboxPreview) {
boundsH = self.parentView.frame.size.height;
boundsW = boundsH * videoAspectRatio;
} else {
boundsW = self.parentView.frame.size.width;
boundsH = boundsW / videoAspectRatio;
}
} else {
if (self.letterboxPreview) {
boundsW = self.parentView.frame.size.width;
boundsH = boundsW / videoAspectRatio;
} else {
boundsH = self.parentView.frame.size.height;
boundsW = boundsH * videoAspectRatio;
}
}
self.customPreviewLayer.bounds = CGRectMake(0.0, 0.0, boundsW, boundsH);
}
}
Now let's turn to the - (void)setPointOfInterestInParentViewSpace:(CGPoint)point; method. AVFoundation lets us specify a point of interest for focus and exposure. Here is our implementation, which checks the camera's autofocus and autoexposure capabilities, performs coordinate conversions, validates the coordinates, and sets the point of interest via the AVFoundation functionality:
- (void)setPointOfInterestInParentViewSpace:(CGPoint)parentViewPoint {
if (!self.running) {
return;
}
// Find the current capture device.
NSArray *captureDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
AVCaptureDevice *captureDevice;
for (captureDevice in captureDevices) {
if (captureDevice.position == self.defaultAVCaptureDevicePosition) {
break;
}
}
BOOL canSetFocus = [captureDevice isFocusModeSupported:AVCaptureFocusModeAutoFocus] && captureDevice.isFocusPointOfInterestSupported;
BOOL canSetExposure = [captureDevice isExposureModeSupported:AVCaptureExposureModeAutoExpose] && captureDevice.isExposurePointOfInterestSupported;
if (!canSetFocus && !canSetExposure) {
return;
}
if (![captureDevice lockForConfiguration:nil]) {
return;
}
// Find the preview's offset relative to the parent view.
CGFloat offsetX = 0.5 * (self.parentView.bounds.size.width - self.customPreviewLayer.bounds.size.width);
CGFloat offsetY = 0.5 * (self.parentView.bounds.size.height - self.customPreviewLayer.bounds.size.height);
// Find the focus coordinates, proportional to the preview size.
CGFloat focusX = (parentViewPoint.x - offsetX) / self.customPreviewLayer.bounds.size.width;
CGFloat focusY = (parentViewPoint.y - offsetY) / self.customPreviewLayer.bounds.size.height;
if (focusX < 0.0 || focusX > 1.0 || focusY < 0.0 || focusY > 1.0) {
// The point is outside the preview.
return;
}
// Adjust the focus coordinates based on the orientation.
// They should be in the landscape-right coordinate system.
switch (self.defaultAVCaptureVideoOrientation) {
case AVCaptureVideoOrientationPortraitUpsideDown: {
CGFloat oldFocusX = focusX;
focusX = 1.0 - focusY;
focusY = oldFocusX;
break;
}
case AVCaptureVideoOrientationLandscapeLeft: {
focusX = 1.0 - focusX;
focusY = 1.0 - focusY;
break;
}
case AVCaptureVideoOrientationLandscapeRight: {
// Do nothing.
break;
}
default: { // Portrait
CGFloat oldFocusX = focusX;
focusX = focusY;
focusY = 1.0 - oldFocusX;
break;
}
}
if (self.defaultAVCaptureDevicePosition == AVCaptureDevicePositionFront) {
// De-mirror the X coordinate.
focusX = 1.0 - focusX;
}
CGPoint focusPoint = CGPointMake(focusX, focusY);
if (canSetFocus) {
// Auto-focus on the selected point.
captureDevice.focusMode = AVCaptureFocusModeAutoFocus;
captureDevice.focusPointOfInterest = focusPoint;
}
if (canSetExposure) {
// Auto-expose for the selected point.
captureDevice.exposureMode = AVCaptureExposureModeAutoExpose;
captureDevice.exposurePointOfInterest = focusPoint;
}
[captureDevice unlockForConfiguration];
}
We have now implemented a class that can configure the camera and capture frames. However, we still need to implement another class that chooses the configuration and receives the frames.
2. Defining ViewController
- Define the ViewController class's private interface. It depends on the headers that define the public interfaces of our own classes, ViewController and VideoCamera. Let's import these dependencies by adding the following code at the top of ViewController.m:
#import <Photos/Photos.h>
#import <Social/Social.h>
#import <opencv2/core.hpp>
#import <opencv2/imgcodecs.hpp>
#import <opencv2/imgcodecs/ios.h>
#import <opencv2/imgproc.hpp>
#import "ViewController.h"
#import "VideoCamera.h"
- Define the ViewController class's instance variables. We will use several cv::Mat objects to store color and grayscale versions of a still image and of camera images. Our GUI objects will include an image view, an activity indicator (a busy spinner), and a toolbar. We will use an instance of our VideoCamera class to control the camera and to capture and display video frames. Finally, a Boolean variable will track whether the user has pressed the save button, meaning that the next frame should be saved. Here are the relevant variable declarations:
@interface ViewController () <CvVideoCameraDelegate> {
cv::Mat originalStillMat;
cv::Mat updatedStillMatGray;
cv::Mat updatedStillMatRGBA;
cv::Mat updatedVideoMatGray;
cv::Mat updatedVideoMatRGBA;
}
@property IBOutlet UIImageView *imageView;
@property IBOutlet UIActivityIndicatorView *activityIndicatorView;
@property IBOutlet UIToolbar *toolbar;
@property VideoCamera *videoCamera;
@property BOOL saveNextFrame;
@end
Note that the class name is followed by <CvVideoCameraDelegate>, meaning that the class implements a protocol named CvVideoCameraDelegate. This protocol is part of OpenCV, and it defines one method, - (void)processImage:(cv::Mat &)mat, for processing video frames. Later, in the section on controlling the camera, we will discuss how this callback relates to our VideoCamera class.
- Declare ViewController's methods. Some of them are callbacks that handle GUI events, such as a button press. Let's declare the following callbacks for the video preview's tap-to-focus feature, a color-versus-grayscale segmented control, a switch-camera button, and a save button:
- (IBAction)onTapToSetPointOfInterest:(UITapGestureRecognizer *)tapGesture;
- (IBAction)onColorModeSelected:(UISegmentedControl *)segmentedControl;
- (IBAction)onSwitchCameraButtonPressed;
- (IBAction)onSaveButtonPressed;
Besides the callbacks for gesture and button interactions, ViewController has several other methods. After a change to the camera's state or the image processing settings, we will call a refresh method to update the display. Other methods will help us process, save, and share images, and start and stop the app's busy mode. Here are the relevant declarations:
- (void)refresh;
- (void)processImage:(cv::Mat &)mat;
- (void)processImageHelper:(cv::Mat &)mat;
- (void)saveImage:(UIImage *)image;
- (void)showSaveImageFailureAlertWithMessage:(NSString *)message;
- (void)showSaveImageSuccessAlertWithImage:(UIImage *)image;
- (UIAlertAction *)shareImageActionWithTitle:(NSString *)title serviceType:(NSString *)serviceType image:(UIImage *)image;
- (void)startBusyMode;
- (void)stopBusyMode;
- Implement the declared methods in ViewController.m. First, we will load a still image from file and convert it to an appropriate format. Then, we will create an instance of VideoCamera with our image view as the preview's parent view. We will tell the camera to send frames to this view controller (its delegate) and to preview in a high-resolution mode at 30 FPS. Here is the implementation of viewDidLoad:
- (void)viewDidLoad {
[super viewDidLoad];
UIImage *originalStillImage = [UIImage imageNamed:@"Fleur.jpg"];
UIImageToMat(originalStillImage, originalStillMat);
self.videoCamera = [[VideoCamera alloc] initWithParentView:self.imageView];
self.videoCamera.delegate = self;
self.videoCamera.defaultAVCaptureSessionPreset = AVCaptureSessionPresetHigh;
self.videoCamera.defaultFPS = 30;
self.videoCamera.letterboxPreview = YES;
}
In the viewDidLayoutSubviews method, we configure the camera's orientation to match the device's orientation (note that this method is called again whenever the orientation changes), as in the following code:
- (void)viewDidLayoutSubviews {
[super viewDidLayoutSubviews];
switch ([UIDevice currentDevice].orientation) {
case UIDeviceOrientationPortraitUpsideDown:
self.videoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationPortraitUpsideDown;
break;
case UIDeviceOrientationLandscapeLeft:
self.videoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationLandscapeLeft;
break;
case UIDeviceOrientationLandscapeRight:
self.videoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationLandscapeRight;
break;
default:
self.videoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationPortrait;
break;
}
[self refresh];
}
Note that after reconfiguring the camera, we call the refresh method. The refresh method checks whether the camera is running. If it is, we ensure that the still image is hidden, and we stop and restart the camera. Otherwise (if no camera is running), we reprocess the still image and display the result. This processing consists of converting the image to an appropriate color format and passing it to the processImage: method. Remember that CvVideoCamera, and likewise our VideoCamera subclass, passes video frames to the processImage: method of the CvVideoCameraDelegate protocol; here, refresh applies the same image processing method to the still image. Let's look at the implementation of the refresh method:
- (void)refresh {
if (self.videoCamera.running) {
// Hide the still image.
self.imageView.image = nil;
// Restart the video.
[self.videoCamera stop];
[self.videoCamera start];
}
else {
// Refresh the still image.
UIImage *image;
if (self.videoCamera.grayscaleMode) {
cv::cvtColor(originalStillMat, updatedStillMatGray, cv::COLOR_RGBA2GRAY);
[self processImage:updatedStillMatGray];
image = MatToUIImage(updatedStillMatGray);
} else {
cv::cvtColor(originalStillMat, updatedStillMatRGBA, cv::COLOR_RGBA2BGRA);
[self processImage:updatedStillMatRGBA];
cv::cvtColor(updatedStillMatRGBA, updatedStillMatRGBA, cv::COLOR_BGRA2RGBA);
image = MatToUIImage(updatedStillMatRGBA);
}
self.imageView.image = image;
}
}
When the user taps the preview's parent view, we will pass the tap's location to the setPointOfInterestInParentViewSpace: method that we implemented earlier in VideoCamera. Here is the relevant callback for the tap event:
- (IBAction)onTapToSetPointOfInterest:(UITapGestureRecognizer *)tapGesture {
if (tapGesture.state == UIGestureRecognizerStateEnded) {
if (self.videoCamera.running) {
CGPoint tapPoint = [tapGesture locationInView:self.imageView];
[self.videoCamera setPointOfInterestInParentViewSpace:tapPoint];
}
}
}
When the user selects a color mode in the segmented control, we will set VideoCamera's grayscaleMode property (inherited from CvVideoCamera) to YES or NO. After setting grayscaleMode, we will call ViewController's refresh method to restart the camera with the appropriate settings. Here is the callback that handles changes in the segmented control's state:
- (IBAction)onColorModeSelected:(UISegmentedControl *)segmentedControl {
switch (segmentedControl.selectedSegmentIndex) {
case 0:
self.videoCamera.grayscaleMode = NO;
break;
default:
self.videoCamera.grayscaleMode = YES;
break;
}
[self refresh];
}
When the user presses the switch-camera button, we will activate the next camera, or cycle back to the initial state, which shows the still image. During each transition, we must ensure that the previous camera is stopped or the previous still image is hidden, and then start the next camera or process and display the next still image. Again, the refresh method helps. Here is the button callback's implementation:
- (IBAction)onSwitchCameraButtonPressed {
if (self.videoCamera.running) {
switch (self.videoCamera.defaultAVCaptureDevicePosition) {
case AVCaptureDevicePositionFront:
self.videoCamera.defaultAVCaptureDevicePosition = AVCaptureDevicePositionBack;
[self refresh];
break;
default:
[self.videoCamera stop];
[self refresh];
break;
}
}
else {
// Hide the still image.
self.imageView.image = nil;
self.videoCamera.defaultAVCaptureDevicePosition = AVCaptureDevicePositionFront;
[self.videoCamera start];
}
}
In the processImage: method, after ensuring that the image is correctly rotated, we will pass it to another method named processImageHelper:, which will be a convenient place to implement most of our image processing. Finally, if the user has pressed the save button, we will convert the image to an appropriate format and pass it to the saveImage: method. The code is as follows:
- (void)processImage:(cv::Mat &)mat {
if (self.videoCamera.running) {
switch (self.videoCamera.defaultAVCaptureVideoOrientation) {
case AVCaptureVideoOrientationLandscapeLeft:
case AVCaptureVideoOrientationLandscapeRight:
// The landscape video is captured upside-down.
// Rotate it by 180 degrees.
cv::flip(mat, mat, -1);
break;
default:
break;
}
}
[self processImageHelper:mat];
if (self.saveNextFrame) {
// The video frame, 'mat', is not safe for long-running
// operations such as saving to file. Thus, we copy its
// data to another cv::Mat first.
UIImage *image;
if (self.videoCamera.grayscaleMode) {
mat.copyTo(updatedVideoMatGray);
image = MatToUIImage(updatedVideoMatGray);
} else {
cv::cvtColor(mat, updatedVideoMatRGBA, cv::COLOR_BGRA2RGBA);
image = MatToUIImage(updatedVideoMatRGBA);
}
[self saveImage:image];
self.saveNextFrame = NO;
}
}
So far, we have not done much image processing, just some color conversions and rotations. Let's add the following method, where we will later perform additional image processing, such as blending images:
- (void)processImageHelper:(cv::Mat &)mat {
// TODO.
}
Note:
Typically, a camera's firmware, or at least its driver, can efficiently convert the captured video to a planar YUV format. Then, if an application needs only grayscale data, it can simply read or copy the Y plane. This approach is more efficient than capturing RGB frames and converting them to grayscale. Accordingly, when CvVideoCamera's grayscaleMode property is YES, it captures planar YUV frames and passes the Y plane to the processImage: method of the CvVideoCameraDelegate protocol.
- Starting and stopping busy mode
While busy saving or sharing a photo, we want to show the activity indicator and disable all toolbar items. Conversely, when no longer busy, we want to hide the activity indicator and re-enable the toolbar items. Because these actions affect the GUI, we must ensure that they run on the app's main thread.
The following code starts busy mode on the main thread:
- (void)startBusyMode {
dispatch_async(dispatch_get_main_queue(), ^{
[self.activityIndicatorView startAnimating];
for (UIBarItem *item in self.toolbar.items) {
item.enabled = NO;
}
});
}
Similarly, the following method stops busy mode:
- (void)stopBusyMode {
dispatch_async(dispatch_get_main_queue(), ^{
[self.activityIndicatorView stopAnimating];
for (UIBarItem *item in self.toolbar.items) {
item.enabled = YES;
}
});
}
- Saving an image to the Photos library
When the user presses the save button, we start busy mode. Then, if the camera is running, we arrange to save the next frame. Otherwise, we immediately save the processed version of the still image. Here is the event handler:
- (IBAction)onSaveButtonPressed {
[self startBusyMode];
if (self.videoCamera.running) {
self.saveNextFrame = YES;
} else {
[self saveImage:self.imageView.image];
}
}
The saveImage: method handles the transactions with the filesystem and the Photos library. First, we try to write a PNG file to the app's temporary directory. Then, we try to create an asset in the Photos library based on this file; as part of this process, the file is copied automatically. We call other helper methods to show an alert dialog describing the transaction's success or failure. Here is the method's implementation:
- (void)saveImage:(UIImage *)image {
// Try to save the image to a temporary file.
NSString *outputPath = [NSString stringWithFormat:@"%@%@", NSTemporaryDirectory(), @"output.png"];
if (![UIImagePNGRepresentation(image) writeToFile:outputPath atomically:YES]) {
// Show an alert describing the failure.
[self showSaveImageFailureAlertWithMessage:@"The image could not be saved to the temporary directory."];
return;
}
// Try to add the image to the Photos library.
NSURL *outputURL = [NSURL fileURLWithPath:outputPath];
PHPhotoLibrary *photoLibrary = [PHPhotoLibrary sharedPhotoLibrary];
[photoLibrary performChanges:^{
[PHAssetChangeRequest creationRequestForAssetFromImageAtFileURL:outputURL];
} completionHandler:^(BOOL success, NSError *error) {
if (success) {
// Show an alert describing the success, with sharing
// options.
[self showSaveImageSuccessAlertWithImage:image];
} else {
// Show an alert describing the failure.
[self showSaveImageFailureAlertWithMessage:error.localizedDescription];
}
}];
}
Here is the code for the failure alert:
- (void)showSaveImageFailureAlertWithMessage:(NSString *)message {
UIAlertController* alert = [UIAlertController alertControllerWithTitle:@"Failed to save image" message:message preferredStyle:UIAlertControllerStyleAlert];
UIAlertAction* okAction = [UIAlertAction actionWithTitle:@"OK" style:UIAlertActionStyleDefault handler:^(UIAlertAction * _Nonnull action) {
[self stopBusyMode];
}];
[alert addAction:okAction];
[self presentViewController:alert animated:YES completion:nil];
}
- Sharing an image
If the image was successfully saved to the Photos library, we want to show the user another alert with sharing options. The following method checks the availability of various social media services and builds an alert action button for each available one. Although they target different services, the action buttons resemble each other, so we factor the common code out into the shareImageActionWithTitle:serviceType:image: method. We also provide a "Do not share" action button that does nothing except stop the app's busy mode:
- (void)showSaveImageSuccessAlertWithImage:(UIImage *)image {
// Create a "Saved image" alert.
UIAlertController* alert = [UIAlertController alertControllerWithTitle:@"Saved image" message:@"The image has been added to your Photos library. Would you like to share it with your friends?" preferredStyle:UIAlertControllerStyleAlert];
// If the user has a Facebook account on this device, add a
// "Post on Facebook" button to the alert.
if ([SLComposeViewController isAvailableForServiceType:SLServiceTypeFacebook]) {
UIAlertAction* facebookAction = [self shareImageActionWithTitle:@"Post on Facebook" serviceType:SLServiceTypeFacebook image:image];
[alert addAction:facebookAction];
}
// If the user has a Twitter account on this device, add a
// "Tweet" button to the alert.
if ([SLComposeViewController isAvailableForServiceType:SLServiceTypeTwitter]) {
UIAlertAction* twitterAction = [self shareImageActionWithTitle:@"Tweet" serviceType:SLServiceTypeTwitter image:image];
[alert addAction:twitterAction];
}
// If the user has a Sina Weibo account on this device, add a
// "Post on Sina Weibo" button to the alert.
if ([SLComposeViewController isAvailableForServiceType:SLServiceTypeSinaWeibo]) {
UIAlertAction* sinaWeiboAction = [self shareImageActionWithTitle:@"Post on Sina Weibo" serviceType:SLServiceTypeSinaWeibo image:image];
[alert addAction:sinaWeiboAction];
}
// If the user has a Tencent Weibo account on this device, add a
// "Post on Tencent Weibo" button to the alert.
if ([SLComposeViewController isAvailableForServiceType:SLServiceTypeTencentWeibo]) {
UIAlertAction* tencentWeiboAction = [self shareImageActionWithTitle:@"Post on Tencent Weibo" serviceType:SLServiceTypeTencentWeibo image:image];
[alert addAction:tencentWeiboAction];
}
// Add a "Do not share" button to the alert.
UIAlertAction* doNotShareAction = [UIAlertAction actionWithTitle:@"Do not share" style:UIAlertActionStyleDefault handler:^(UIAlertAction * _Nonnull action) {
[self stopBusyMode];
}];
[alert addAction:doNotShareAction];
// Show the alert.
[self presentViewController:alert animated:YES completion:nil];
}
- (UIAlertAction *)shareImageActionWithTitle:(NSString *)title serviceType:(NSString *)serviceType image:(UIImage *)image {
UIAlertAction* action = [UIAlertAction actionWithTitle:title style:UIAlertActionStyleDefault handler:^(UIAlertAction * _Nonnull action) {
SLComposeViewController *composeViewController = [SLComposeViewController composeViewControllerForServiceType:serviceType];
[composeViewController addImage:image];
[self presentViewController:composeViewController animated:YES completion:^{
[self stopBusyMode];
}];
}];
return action;
}