WebRTC-Android Practice 01

This post records my hands-on walkthrough of the article "Android WebRTC完整入门教程" (a complete introductory tutorial on Android WebRTC).

Project source code: https://github.com/popobo/WebRTC_Android

01 Using the Camera

Basic Concepts

  • RTC (Real Time Communication): communication in real time
  • WebRTC: real-time communication for the web
  • Signaling: the exchange of strings that describe the media or the network
  • SDP (Session Description Protocol): a protocol for describing a session, mainly its media information (a sample fragment follows this list)
  • ICE (Interactive Connectivity Establishment): a framework that combines STUN and TURN to find a working connection path between peers
  • STUN (Session Traversal Utilities for NAT): utilities that let a client discover its public address when it sits behind a NAT
  • TURN (Traversal Using Relays around NAT): relays traffic through a server when direct NAT traversal fails
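
To make signaling concrete, here is a minimal hand-written SDP fragment of the kind a peer would send during signaling; the values are illustrative only, not taken from this project:

v=0
o=- 46117317 2 IN IP4 127.0.0.1
s=-
t=0 0
m=audio 9 UDP/TLS/RTP/SAVPF 111
a=rtpmap:111 opus/48000/2

The m= line declares an audio stream, and the a=rtpmap line maps payload type 111 to the Opus codec.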

How to Use

Adding the WebRTC Library

Add the dependency to the module-level build.gradle (Module: cpp). The version below is the latest official prebuilt release at the time of writing (March 2020); you can also build the library yourself.

dependencies {
    ...
    // Add the WebRTC library
    implementation 'org.webrtc:google-webrtc:1.0.30039'
    ...
}

Adding Permissions

Add the camera and audio-recording permissions to AndroidManifest.xml. Note that on Android 6.0 and above these are dangerous permissions, so they must also be granted at runtime (or enabled manually under Settings).

    <!-- Camera and audio recording permissions -->
    <uses-permission android:name="android.permission.CAMERA"/>
    <uses-permission android:name="android.permission.RECORD_AUDIO"/>

Requesting Permissions on Android 6.0+

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        ...
        verifyCameraAudioPermissions(this);
    }

    private static final int REQUEST_CAMERA_RECORD_AUDIO = 1;

    private static final String[] PERMISSIONS_CAMERA_RECORD_AUDIO = {
            Manifest.permission.CAMERA,
            Manifest.permission.RECORD_AUDIO };

    // Helper that checks and, if necessary, requests the permissions
    public static void verifyCameraAudioPermissions(Activity activity) {
        try {
            // Check whether the camera permission has already been granted
            int permission = ActivityCompat.checkSelfPermission(activity,
                    Manifest.permission.CAMERA);
            if (permission != PackageManager.PERMISSION_GRANTED) {
                // Not granted yet: request both permissions, which pops up a system dialog
                ActivityCompat.requestPermissions(activity,
                        PERMISSIONS_CAMERA_RECORD_AUDIO, REQUEST_CAMERA_RECORD_AUDIO);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
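
The snippet above only fires the request; the original code does not show the callback that receives the user's choice. A minimal sketch of handling the result (the all-granted check and the toast text are my own additions, not from the project):

    @Override
    public void onRequestPermissionsResult(int requestCode, String[] permissions, int[] grantResults) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults);
        if (requestCode == REQUEST_CAMERA_RECORD_AUDIO) {
            // Verify that every requested permission was granted
            boolean allGranted = grantResults.length > 0;
            for (int result : grantResults) {
                allGranted &= (result == PackageManager.PERMISSION_GRANTED);
            }
            if (!allGranted) {
                // Without camera/microphone access the capturer cannot start
                Toast.makeText(this, "Camera and microphone permissions are required",
                        Toast.LENGTH_LONG).show();
            }
        }
    }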

Adding a SurfaceViewRenderer

SurfaceViewRenderer is WebRTC's subclass of SurfaceView; it renders video frames with OpenGL.

<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <org.webrtc.SurfaceViewRenderer
        android:id="@+id/localView"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintLeft_toLeftOf="parent"
        app:layout_constraintRight_toRightOf="parent"
        app:layout_constraintTop_toTopOf="parent" />

</androidx.constraintlayout.widget.ConstraintLayout>

Using the Camera

The main steps are as follows:

  1. Create a PeerConnectionFactory
  2. Create and start a VideoCapturer
  3. Create a VideoSource with the PeerConnectionFactory
  4. Create a VideoTrack from the PeerConnectionFactory and the VideoSource
  5. Initialize the SurfaceViewRenderer video view
  6. Render the VideoTrack into the SurfaceViewRenderer

public class MainActivity extends AppCompatActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        // create PeerConnectionFactory
        // PeerConnectionFactory creates the key objects: PeerConnection, VideoTrack, AudioTrack, etc.
        PeerConnectionFactory.InitializationOptions initializationOptions =
                PeerConnectionFactory.InitializationOptions.builder(this).createInitializationOptions();
        PeerConnectionFactory.initialize(initializationOptions);
        PeerConnectionFactory peerConnectionFactory = PeerConnectionFactory.builder().createPeerConnectionFactory();

        // create AudioSource
        AudioSource audioSource = peerConnectionFactory.createAudioSource(new MediaConstraints());
        AudioTrack audioTrack = peerConnectionFactory.createAudioTrack("101", audioSource);

        EglBase.Context eglBaseContext = EglBase.create().getEglBaseContext();

        SurfaceTextureHelper surfaceTextureHelper = SurfaceTextureHelper.create("CaptureThread", eglBaseContext);
        // create VideoCapturer
        VideoCapturer videoCapturer = createCameraCapturer();
        VideoSource videoSource = peerConnectionFactory.createVideoSource(videoCapturer.isScreencast());
        videoCapturer.initialize(surfaceTextureHelper, getApplicationContext(), videoSource.getCapturerObserver());
        videoCapturer.startCapture(480, 640, 30); // width, height, frames per second

        SurfaceViewRenderer localView = findViewById(R.id.localView);
        localView.setMirror(true);
        localView.init(eglBaseContext, null);

        // create VideoTrack (give it an id distinct from the audio track's "101")
        VideoTrack videoTrack = peerConnectionFactory.createVideoTrack("102", videoSource);
        // display in localView
        videoTrack.addSink(localView);
    }

    private VideoCapturer createCameraCapturer() {
        Camera1Enumerator enumerator = new Camera1Enumerator(false);
        final String[] deviceNames = enumerator.getDeviceNames();

        // First, try to find front facing camera
        for (String deviceName : deviceNames) {
            if (enumerator.isFrontFacing(deviceName)) {
                VideoCapturer videoCapturer = enumerator.createCapturer(deviceName, null);

                if (videoCapturer != null) {
                    return videoCapturer;
                }
            }
        }

        // Front facing camera not found, try something else
        for (String deviceName : deviceNames) {
            if (!enumerator.isFrontFacing(deviceName)) {
                VideoCapturer videoCapturer = enumerator.createCapturer(deviceName, null);

                if (videoCapturer != null) {
                    return videoCapturer;
                }
            }
        }

        return null;
    }
    ...
}
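
One thing the activity above never does is stop the capturer or release the renderer. A minimal cleanup sketch, assuming videoCapturer and localView have been promoted from locals to fields of the activity (they are locals in the original code):

    @Override
    protected void onDestroy() {
        try {
            // Stop delivering frames before tearing anything down
            videoCapturer.stopCapture();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        videoCapturer.dispose();
        localView.release();
        super.onDestroy();
    }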

  WebRTC uses OpenGL for rendering (the preview), which raises three questions:

  • Where does the data come from?
  • Where is it rendered to?
  • How is it rendered?

  Reference article: https://www.cnblogs.com/elesos/p/9509691.html
