Over the last couple of days I looked into how 3D coordinates get converted into 2D coordinates, so here is a quick record of it. This isn't my field and I don't understand all of it; if anything is wrong, please point it out.
For a coordinate in 3D space to be displayed on a 2D plane, it roughly goes through the following conversions:
- Local coordinates to world coordinates
Local coordinates: the object's own internal coordinate system, analogous to a view's bounds. For a cube, (0, 0, 0) usually denotes the center, (-1, -1, 0) the lower-left corner, and (1, 1, 0) the upper-right corner. (This is how Metal on iOS represents it.)
World coordinates: the object's coordinates relative to the outside, analogous to a view's frame. The 3D space that all objects live in is the "world", and an object's world coordinates describe where it sits in that world.
Going from the local coordinate system to the world coordinate system takes three transforms: translation, rotation and scaling; after applying them you have world coordinates.
```swift
import simd
import QuartzCore

struct ModelMatrix: Convert {
    struct TransForm {
        let x: CGFloat
        let y: CGFloat
        let z: CGFloat
    }

    let translate: TransForm
    let scale: TransForm
    let rotate: CGFloat

    func convert(points: [Point]) -> [Point] {
        // Build the model matrix: translate, then rotate, then scale.
        var t = CATransform3DMakeTranslation(translate.x, translate.y, translate.z)
        t = CATransform3DRotate(t, rotate, 1.0, 0, 0) // rotate around the x axis only
        t = CATransform3DScale(t, scale.x, scale.y, scale.z)
        let matrix = t.matrix44()
        // Apply the matrix to every point.
        return points.map { simd_mul(matrix, $0) }
    }
}
```
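The snippet above (and the ones that follow) relies on a few helpers that the post never defines: the `Point` type, the `Convert` protocol, and the `matrix44()` conversions. The following is only my guess at what they might look like, not the author's actual code:

```swift
import simd
import QuartzCore

// Assumed helpers, not part of the original post.
typealias Point = simd_float4   // a homogeneous point (x, y, z, w), usually with w = 1

protocol Convert {
    func convert(points: [Point]) -> [Point]
}

extension CATransform3D {
    // CATransform3D is laid out for row-vector math (translation in m41…m43),
    // so transpose it into a simd_float4x4 that works with column-vector points.
    func matrix44() -> simd_float4x4 {
        return simd_float4x4(columns: (
            simd_float4(Float(m11), Float(m12), Float(m13), Float(m14)),
            simd_float4(Float(m21), Float(m22), Float(m23), Float(m24)),
            simd_float4(Float(m31), Float(m32), Float(m33), Float(m34)),
            simd_float4(Float(m41), Float(m42), Float(m43), Float(m44))
        ))
    }
}

extension Array where Element == Float {
    // Interpret a flat 16-element array as a column-major 4x4 matrix.
    func matrix44() -> simd_float4x4 {
        precondition(count == 16)
        return simd_float4x4(columns: (
            simd_float4(self[0],  self[1],  self[2],  self[3]),
            simd_float4(self[4],  self[5],  self[6],  self[7]),
            simd_float4(self[8],  self[9],  self[10], self[11]),
            simd_float4(self[12], self[13], self[14], self[15])
        ))
    }
}
```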
- World coordinates to view coordinates
View coordinates: the scene as seen by an observer inside the world. Whether an object can be seen depends on whether it falls within the observer's field of view.
An observer is determined by its position, i.e. where the eye sits in world coordinates, and by the point it is looking at. It is like rolling your eyes: the eye's position stays put while the point being looked at changes. First a few small vector helpers, then the look-at matrix itself:
```swift
// Calculate the cross product of two 3-component vectors and return it.
func cross(srca: [Float], srcb: [Float]) -> [Float] {
    let d0 = srca[1] * srcb[2] - srca[2] * srcb[1]
    let d1 = srca[2] * srcb[0] - srca[0] * srcb[2]
    let d2 = srca[0] * srcb[1] - srca[1] * srcb[0]
    return [d0, d1, d2]
}

// Normalize a 3-component vector to unit length.
func normalize(src: [Float]) -> [Float] {
    let squaredLen = src[0] * src[0] + src[1] * src[1] + src[2] * src[2]
    let invLen = 1 / sqrt(squaredLen)
    return src.map { $0 * invLen }
}

func normalize(src: simd_float3) -> simd_float3 {
    let r = normalize(src: [src.x, src.y, src.z])
    return simd_float3(r[0], r[1], r[2])
}

// Scale the given vector by s.
func scale(src: [Float], s: Float) -> [Float] {
    return src.map { $0 * s }
}
```
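As an aside, the `simd` module already ships with `simd_cross` and `simd_normalize` for `simd_float3`, so these hand-rolled helpers mainly serve to show the math:

```swift
import simd

let a = simd_float3(0, 0, -1)
let b = simd_float3(0, 1, 0)
let axis = simd_normalize(simd_cross(a, b)) // (1, 0, 0), same result as cross + normalize above
```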
```swift
struct ViewMatrix: Convert {
    let from: simd_float3  // eye position in world space
    let to: simd_float3    // the point being looked at
    let up: simd_float3    // the world's "up" direction

    func multLookAt() -> [Float] {
        var xaxis: [Float] = [0, 0, 0]
        var up: [Float] = [0, 0, 0]
        var at: [Float] = [0, 0, 0]

        // Compute our new look-at vector, which will be
        // the new negative Z axis of our transformed object.
        at[0] = to.x - from.x
        at[1] = to.y - from.y
        at[2] = to.z - from.z
        at = normalize(src: at)

        // Make a usable copy of the current up vector.
        up[0] = self.up.x
        up[1] = self.up.y
        up[2] = self.up.z

        // Cross product of the new look-at vector and the current
        // up vector will produce a vector which is the new
        // positive X axis of our transformed object.
        xaxis = cross(srca: at, srcb: up)
        xaxis = normalize(src: xaxis)

        // Calculate the new up vector, which will be the
        // positive Y axis of our transformed object. Note
        // that it will lie in the same plane as the new
        // look-at vector and the old up vector.
        up = cross(srca: xaxis, srcb: at)

        // Account for the fact that the geometry will be defined to
        // point along the negative Z axis.
        at = scale(src: at, s: -1.0)

        return [
            xaxis[0], xaxis[1], xaxis[2], 0,
            up[0],    up[1],    up[2],    0,
            at[0],    at[1],    at[2],    0,
            from.x,   from.y,   from.z,   1.0
        ]
    }

    func convert(points: [Point]) -> [Point] {
        let matrix = multLookAt().matrix44()
        return points.map { simd_mul(matrix, $0) }
    }
}
```
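To make the `from` / `to` / `up` parameters concrete, here is a hypothetical usage sketch (the values and names are made up for the example): the eye sits 5 units out along the +Z axis and looks back at the origin.

```swift
import simd

let worldSpacePoints: [Point] = [simd_float4(1, 1, 0, 1)]
let view = ViewMatrix(from: simd_float3(0, 0, 5),  // eye position in the world
                      to:   simd_float3(0, 0, 0),  // the point being looked at
                      up:   simd_float3(0, 1, 0))  // world "up"
let converted = view.convert(points: worldSpacePoints)
```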
- View coordinates to perspective coordinates
For two identical 3D objects, the image we see depends on where each object sits relative to our eyes: the closer it is, the larger it looks, and vice versa. The perspective projection is what takes a 3D object's coordinates down to 2D coordinates.
A perspective transform is determined by the maximum viewing angle (the field of view) together with the near and far distances. I only understand this roughly; this video explains it much better:
(video: Perspective Projection Matrix)
A simple implementation:
```swift
func gldPerspective(fovx: Float, aspect: Float, zNear: Float, zFar: Float) -> simd_float4x4 {
    // Standard OpenGL-style perspective matrix, written out as a
    // column-major 16-element array (all other entries stay 0).
    var m = [Float](repeating: 0.0, count: 16)
    let f = 1 / tan(fovx * Float.pi / 360) // the field of view is in degrees; use half the angle

    m[0]  = f / aspect
    m[5]  = f
    m[10] = (zFar + zNear) / (zNear - zFar)
    m[11] = -1
    m[14] = 2 * zFar * zNear / (zNear - zFar)

    return m.matrix44()
}
```
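One step the post stops short of: the matrix above only produces clip-space coordinates. To end up with actual 2D pixel coordinates you still need the perspective divide (dividing by w) and a viewport mapping. A minimal sketch, with the viewport width and height as assumed parameters:

```swift
import simd

// Clip space -> normalized device coordinates -> 2D pixel coordinates.
func toScreen(_ clip: simd_float4, width: Float, height: Float) -> simd_float2 {
    let ndcX = clip.x / clip.w          // perspective divide; visible points land in [-1, 1]
    let ndcY = clip.y / clip.w
    let sx = (ndcX + 1) * 0.5 * width   // map [-1, 1] to [0, width]
    let sy = (1 - ndcY) * 0.5 * height  // flip Y so the origin is at the top-left
    return simd_float2(sx, sy)
}
```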
I have only taken a rough look at all of this; for the details, see https://learnopengl-cn.github.io/01%20Getting%20started/08%20Coordinate%20Systems/