Single-View Robot Pose and Joint Angle Estimation via Render & Compare
Yann Labbe, Justin Carpentier, Mathieu Aubry, Josef Sivic; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 1654-1663
On the scene flow task:
Menze, Moritz, and Andreas Geiger. "Object scene flow for autonomous vehicles." Proceedings of the IEEE conference on computer vision and pattern recognition. 2015.
Task definition:
Scene flow is commonly defined as a flow field describing the 3D motion at every point in the scene.
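As a worked form of this definition (notation is my own shorthand, not the paper's): the scene flow at a point is simply its 3D displacement between consecutive time steps.

```latex
% Scene flow as a per-point 3D displacement field (shorthand notation, not from the paper):
% for every point x_t tracked from time t to t+1,
\mathbf{s}(\mathbf{x}_t) = \mathbf{x}_{t+1} - \mathbf{x}_t,
\qquad \mathbf{x}_t,\ \mathbf{x}_{t+1} \in \mathbb{R}^3 .
```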
EffiScene: Efficient Per-Pixel Rigidity Inference for Unsupervised Joint Learning of Optical Flow, Depth, Camera Pose and Motion Segmentation
Yang Jiao, Trac D. Tran, Guangming Shi
Four different low-level vision subtasks are handled jointly: optical flow F, stereo depth D, camera pose P, and motion segmentation S.
The key observation is that most scenes are dominated by rigid structure: with respect to scene depth and object motion, the geometry is fixed, so S can be inferred robustly from F, D, and P (a sketch of this idea follows the module list below). The paper proposes a framework named EffiScene, which first obtains robust estimates of depth and optical flow and then computes the camera pose via Perspective-n-Point (PnP).
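A minimal sketch of that pose-from-depth-and-flow step, assuming a pinhole camera and OpenCV's RANSAC PnP solver (my illustration of the general idea, not EffiScene's actual pipeline; the function name and signature are hypothetical):

```python
# Sketch: recover camera pose from a depth map and an optical flow field via PnP.
# Assumes pinhole intrinsics K; NOT the paper's code, just the general idea.
import numpy as np
import cv2

def pose_from_depth_and_flow(depth, flow, K):
    """depth: (H, W) metric depth at frame t; flow: (H, W, 2) optical flow t -> t+1;
    K: (3, 3) intrinsics. Returns camera rotation R (3, 3) and translation t (3,)."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))

    # Back-project frame-t pixels to 3D using the depth map.
    z = depth.reshape(-1)
    x = (u.reshape(-1) - K[0, 2]) * z / K[0, 0]
    y = (v.reshape(-1) - K[1, 2]) * z / K[1, 1]
    pts3d = np.stack([x, y, z], axis=1).astype(np.float32)

    # 2D correspondences in frame t+1 come from the optical flow.
    pts2d = np.stack([u.reshape(-1) + flow[..., 0].reshape(-1),
                      v.reshape(-1) + flow[..., 1].reshape(-1)], axis=1).astype(np.float32)

    # Subsample valid pixels; RANSAC-PnP rejects independently moving pixels as outliers.
    idx = np.flatnonzero(z > 0)[::50]
    _, rvec, tvec, _ = cv2.solvePnPRansac(pts3d[idx], pts2d[idx],
                                          K.astype(np.float32), np.zeros(4))
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec.reshape(3)
```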
For the joint estimation, the paper proposes a Rigidity From Motion (RfM) layer with three core components:
(1) correlation extraction
(2) boundary learning
(3) outlier exclusion
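As a rough illustration of how the motion-segmentation / rigidity mask S can be inferred from F, D, and P (my own simplification, not the actual RfM layer): warp each pixel with the camera motion implied by depth and pose, and compare the induced "rigid flow" with the observed optical flow.

```python
# Sketch: per-pixel rigidity mask from optical flow F, depth D, and camera pose P.
# Simplified illustration only -- not EffiScene's RfM layer; the threshold is arbitrary.
import numpy as np

def rigidity_mask(depth, flow, K, R, t, thresh=1.0):
    """depth: (H, W); flow: (H, W, 2); K: (3, 3) intrinsics; (R, t): camera motion.
    Returns a boolean (H, W) mask: True where the observed motion is explained by
    camera motion alone (rigid background), False for independently moving pixels."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))

    # Back-project to 3D, apply the camera motion, and re-project ("rigid flow").
    x = (u - K[0, 2]) * depth / K[0, 0]
    y = (v - K[1, 2]) * depth / K[1, 1]
    pts = np.stack([x, y, depth], axis=-1) @ R.T + t
    z_cam = np.maximum(pts[..., 2], 1e-6)
    rigid_flow = np.stack([K[0, 0] * pts[..., 0] / z_cam + K[0, 2] - u,
                           K[1, 1] * pts[..., 1] / z_cam + K[1, 2] - v], axis=-1)

    # Pixels whose observed flow matches the rigid flow are labeled rigid.
    return np.linalg.norm(flow - rigid_flow, axis=-1) < thresh
```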
Self-Point-Flow: Self-Supervised Scene Flow Estimation From Point Clouds With Optimal Transport and Random Walk
Ruibo Li, Guosheng Lin, Lihua Xie
Fusing the Old with the New: Learning Relative Camera Pose with Geometry-Guided Uncertainty
Bingbing Zhuang, Manmohan Chandraker
Camera Pose Matters: Improving Depth Prediction by Mitigating Pose Distribution Bias
Extreme Low-Light Environment-Driven Image Denoising Over Permanently Shadowed Lunar Regions With a Physical Noise Model
Ben Moseley, Valentin Bickel, Ignacio G. Lopez-Francos, Loveneesh Rana
Towards Rolling Shutter Correction and Deblurring in Dynamic Scenes
Zhihang Zhong, Yinqiang Zheng, Imari Sato
Learning Temporal Consistency for Low Light Video Enhancement From Single Images
Exploring Intermediate Representation for Monocular Vehicle Pose Estimation
Other papers of interest:
Weakly Supervised Learning of Rigid 3D Scene Flow
Zan Gojcic, Or Litany, Andreas Wieser, Leonidas J. Guibas, Tolga Birdal
3D scene flow is a way of representing a dynamic 3D scene:
A pointwise, unconstrained motion representation has two drawbacks:
1. Most scenes are dominated by rigid-body motion; an unconstrained formulation lets different parts of the same rigid body move with inconsistent motion parameters.
2. Learning the motion directly requires annotations, but scene flow labels are prohibitively expensive to obtain, and the domain gap of simulated data cannot be ignored.
The paper first splits the image into foreground (movable objects) and background (static structures).
Background motion comes from the ego-motion, while the foreground flow comes from rigidly moving objects; a short sketch of this rigid parameterization follows points (1) and (2) below.
(1) Each object is constrained to move rigidly as a whole.
(2) Only object masks (dynamic-object annotations) and the ego-motion (obtainable from an IMU) are required as supervision.
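A minimal sketch of this background/foreground rigid parameterization, assuming per-point object labels and known rigid transforms (the function name and signature are illustrative, not the authors' code):

```python
# Sketch: compose rigid 3D scene flow from ego-motion (background) and one rigid
# transform per foreground object. Illustrative only; not the paper's implementation.
import numpy as np

def rigid_scene_flow(points, labels, T_ego, object_poses):
    """points: (N, 3) points at frame t; labels: (N,) 0 = background, k > 0 = object id;
    T_ego: (4, 4) ego-motion; object_poses: {k: (4, 4) rigid transform of object k}.
    Returns (N, 3) scene flow vectors."""
    def apply(T, pts):
        return pts @ T[:3, :3].T + T[:3, 3]

    flow = np.zeros_like(points, dtype=float)
    # Background points move only because the ego-vehicle moves.
    bg = labels == 0
    flow[bg] = apply(T_ego, points[bg]) - points[bg]
    # Each foreground object moves with a single rigid transform (object-level rigidity).
    for k, T_k in object_poses.items():
        m = labels == k
        flow[m] = apply(T_k, points[m]) - points[m]
    return flow
```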
It achieves state-of-the-art results on FT3D, stereoKITTI, lidarKITTI, and semanticKITTI, and transfers to the Waymo Open dataset without any fine-tuning.
Self-Supervised Geometric Perception
Heng Yang, Wei Dong, Luca Carlone, Vladlen Koltun
Stable View Synthesis
Gernot Riegler, Vladlen Koltun
GIRAFFE: Representing Scenes as Compositional Generative Neural Feature Fields
Michael Niemeyer (Max Planck Institute for Intelligent Systems, Tübingen and University of Tübingen); Andreas Geiger (MPI-IS and University of Tübingen)