Robot moving and sensing
We will localize a robot in a 2D grid world. The basis of simultaneous localization and mapping (SLAM) is gathering information from a robot's sensors and motions over time, and then using that measurement and motion information to reconstruct a map of the world.
Uncertainty
Robot motion and sensing both carry some uncertainty. For example, imagine a car driving up and down a hill: the speedometer reading will likely overestimate the car's speed going up the hill and underestimate it going down, because it cannot fully account for gravity. Similarly, we cannot perfectly predict a robot's motion, because a robot is likely to slightly overshoot or undershoot its target location.
Here, we will build a sense() function that senses landmarks in a given world.
Before analyzing robot motion, let's define a robot class.
# import some resources
import numpy as np
import matplotlib.pyplot as plt
import random
%matplotlib inline
# --------
# init:
# creates a robot with the specified parameters and initializes
# the location (self.x, self.y) to the center of the world
class robot:

    def __init__(self, world_size = 100.0, measurement_range = 30.0,
                 motion_noise = 1.0, measurement_noise = 1.0):
        self.world_size = world_size
        self.measurement_range = measurement_range
        self.x = world_size / 2.0
        self.y = world_size / 2.0
        self.motion_noise = motion_noise
        self.measurement_noise = measurement_noise
        self.landmarks = []
        self.num_landmarks = 0

    # returns a random float in the range [-1.0, 1.0)
    def rand(self):
        return random.random() * 2.0 - 1.0

    # move: attempts to move the robot by dx, dy; if the (noisy) new position
    # falls outside the world boundaries, the move fails and returns False
    def move(self, dx, dy):
        x = self.x + dx + self.rand() * self.motion_noise
        y = self.y + dy + self.rand() * self.motion_noise
        if x < 0.0 or x > self.world_size or y < 0.0 or y > self.world_size:
            return False
        else:
            self.x = x
            self.y = y
            return True

    # sense: returns x- and y- distances to landmarks within visibility range
    # because not all landmarks may be in this range, the list of measurements
    # is of variable length. Set measurement_range to -1 if you want all
    # landmarks to be visible at all times
    def sense(self):
        ''' This function does not take in any parameters, instead it references internal variables
            (such as self.landmarks) to measure the distance between the robot and any landmarks
            that the robot can see (that are within its measurement range).
            This function returns a list of landmark indices, and the measured distances (dx, dy)
            between the robot's position and said landmarks.
            This function should account for measurement_noise and measurement_range.
            One item in the returned list should be in the form: [landmark_index, dx, dy].
        '''
        measurements = []
        # iterate through all of the landmarks in the world
        for index, (landmark_x, landmark_y) in enumerate(self.landmarks):
            # 1. compute dx and dy, the distances between the robot and the landmark
            dx = landmark_x - self.x
            dy = landmark_y - self.y
            # 2. account for measurement noise by *adding* a noise component to dx and dy;
            #    each component gets its own random value in [-1.0, 1.0) * measurement_noise
            dx += self.rand() * self.measurement_noise
            dy += self.rand() * self.measurement_noise
            # 3. only record the measurement if the landmark is within measurement_range
            #    (a measurement_range of -1 means every landmark is always visible);
            #    the [index, dx, dy] format is important for data creation done later
            if self.measurement_range == -1 or (abs(dx) <= self.measurement_range and abs(dy) <= self.measurement_range):
                measurements.append([index, dx, dy])
        # return the final, complete list of measurements
        return measurements

    # --------
    # make_landmarks:
    # make random landmarks located in the world
    #
    def make_landmarks(self, num_landmarks):
        self.landmarks = []
        for i in range(num_landmarks):
            self.landmarks.append([round(random.random() * self.world_size),
                                   round(random.random() * self.world_size)])
        self.num_landmarks = num_landmarks

    # called when print(robot) is called; prints the robot's location
    def __repr__(self):
        return 'Robot: [x=%.5f y=%.5f]' % (self.x, self.y)
The sense() function returns measurements in the format [i, dx, dy], where i is the landmark index (0, 1, 2, ...) and dx, dy are the measured distances between the robot's position (x, y) and the landmark's position (x, y). Because our sense function has some associated measurement noise, this distance will not be perfectly accurate.
Note: a robot's position is often called a pose, [Pxi, Pyi], while a landmark position is usually written as [Lxi, Lyi].
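For instance, a call to r.sense() might return something shaped like the list below; the specific numbers here are purely illustrative, and landmarks outside the measurement range simply do not appear in the list.
# a hypothetical sense() result: landmarks 0 and 2 are in range, landmark 1 is not
measurements = [[0, 11.3, -4.2], [2, -8.7, 6.1]]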
Define the world and a robot
First, define a small 10x10 square world.
world_size = 10.0 # size of world (square)
measurement_range = 5.0 # range at which we can sense landmarks
motion_noise = 0.2 # noise in robot motion
measurement_noise = 0.2 # noise in the measurements
# instantiate a robot, r
r = robot(world_size, measurement_range, motion_noise, measurement_noise)
# print out the location of r
print(r)
The robot is at (x, y) = (5.0, 5.0), exactly the center of the 10x10 world, which is just what we expect!
However, without a visualization of the grid it is hard to tell that this robot really is in the center of the world. So, in the next cell we use a helper visualization function, display_world, that draws the grid world in a plot and marks our robot's position r with a red 'o'.
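A minimal visualization cell, assuming display_world comes from the project's helpers module (it is imported from helpers later in this notebook); the (5, 5) figure size is just a readable default:
from helpers import display_world

# define figure size
plt.rcParams["figure.figsize"] = (5,5)

# display the world with the robot at its center
display_world(int(world_size), [r.x, r.y])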
Motion
# choose values of dx and dy (negative works, too)
dx = -1
dy = 1
r.move(dx, dy)
# print out the exact location
print(r)
# display the world after movement; note that this is the same call as before
# (the robot tracks its own movement)
display_world(int(world_size), [r.x, r.y])
Landmarks
Landmarks are measurable, geographic features of the map. You can think of landmarks as well-known buildings or smaller objects such as trees, rocks, or other features.
Our robot class has a make_landmarks function that randomly generates positions for the specified number of landmarks. Try changing num_landmarks or running this cell multiple times to see where the landmarks appear. We pass these positions as a third argument to the display_world function and access the list of landmark positions with r.landmarks, similar to how we accessed the robot's position.
Each landmark is displayed as a purple 'x' in the grid world, and we also print the exact [x, y] locations of these landmarks at the end of this cell.
# create any number of landmarks
num_landmarks = 3
r.make_landmarks(num_landmarks)
# print out our robot's exact location
print(r)
# display the world including these landmarks
display_world(int(world_size), [r.x, r.y], r.landmarks)
# print the locations of the landmarks
print('Landmark locations [x,y]: ', r.landmarks)
# try to sense any surrounding landmarks
measurements = r.sense()
# this will print out an empty list if `sense` has not been implemented
print(measurements)
Data
Over a set period of time we will collect, in order, a series of robot sensor measurements and motions. These data alone will then be used to reconstruct a map of the world that includes the robot and the landmark positions.
You can think of data as the list of motions and measurements used to reconstruct the world. It is an array that stores sensor measurements and motions in a specific order, and it will be useful when you need to extract this data to form the constraint matrix and vector.
data is constructed over a series of time steps as follows:
data = []
# after a robot first senses, then moves (one time step)
# that data is appended like so:
data.append([measurements, [dx, dy]])
# for our example movement and measurement
print(data)
# in this example, we have only created one time step (0)
time_step = 0
# so you can access robot measurements:
print('Measurements: ', data[time_step][0])
# and its motion for a given time step:
print('Motion: ', data[time_step][1])
Omega and xi
To implement Graph SLAM we introduce a matrix and a vector, omega and xi. Omega is a square matrix labelled with all of the robot poses (xi) and all of the landmarks (Li). Each time you move between two poses by some distance dx, and can therefore relate those two positions to each other, you can represent that as a numerical relationship in these matrices.
Below is a simple example that relates three poses to one another through a matrix representation of omega and a vector representation of xi:
- To start, most of these values are zeros or contain only values from the initial robot position.
- In this example we are given the constraints that relate the poses to one another; reading them off the matrices defined in the code below, they are x0 = -3, x1 - x0 = 5, and x2 - x1 = 3.
- Constraints translate into matrix values.
Solving for all of these x values then amounts to solving a system of linear equations. All of the values of x live in a vector mu, which can be calculated as the product of the inverse of omega and xi: mu = omega^(-1) * xi.
import numpy as np

# define omega and xi as in the example
omega = np.array([[1, 0, 0],
                  [-1, 1, 0],
                  [0, -1, 1]])

xi = np.array([[-3],
               [5],
               [3]])

# calculate the inverse of omega
omega_inv = np.linalg.inv(np.matrix(omega))

# calculate the solution, mu
mu = omega_inv * xi

# print out the values of mu (x0, x1, x2)
print(mu)
Motion constraints and landmarks
Motions, and in the same way sensor measurements, are used to create constraints and to fill in the constraint matrix omega and vector xi, starting from matrices of zeros. This also covers the relationship between poses and landmarks. Suppose we move from x0 to x1 with a displacement dx of 5; this creates a motion constraint that relates x0 to x1, and we can then fill in the matrices.
A constraint equation can be written in two ways (x1 - x0 = 5 and x0 - x1 = -5). So the motion constraint that relates x0 and x1 by a motion of 5 affects the matrix twice, adding values to every element that corresponds to x0 and x1, as sketched in the cell below.
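Here is a minimal 1D sketch of how that single motion constraint would update omega and xi; the three-pose setup and the dx = 5 motion mirror the example above, and the symmetric add/subtract pattern is the same one used (per coordinate) in the slam function later in this notebook:
import numpy as np

# three 1D poses: x0, x1, x2
omega = np.zeros((3, 3))
xi = np.zeros((3, 1))

# initial constraint: the robot starts at x0 = -3 with full confidence
omega[0, 0] += 1
xi[0, 0] += -3

# motion constraint: we move from x0 to x1 by dx = 5, i.e. x1 - x0 = 5;
# written both ways (x1 - x0 = 5 and x0 - x1 = -5) it touches every
# omega cell that corresponds to x0 and x1
dx = 5
omega[0, 0] += 1
omega[0, 1] += -1
omega[1, 0] += -1
omega[1, 1] += 1
xi[0, 0] += -dx
xi[1, 0] += dx

print(omega)
print(xi)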
Implement SLAM
Using what you have learned about robot motion, representations of uncertainty in motion and sensing, and localization techniques, define a function slam that takes six parameters as input and returns the vector mu.
mu contains the (x, y) coordinates of the robot's positions as it moved, along with the positions of the landmarks it sensed in that space.
The vector mu should hold interlaced (x, y) coordinates. For example, if there are 2 poses and 2 landmarks, mu will look like the following, where P denotes a robot position and L a landmark position:
mu = matrix([[Px0],
             [Py0],
             [Px1],
             [Py1],
             [Lx0],
             [Ly0],
             [Lx1],
             [Ly1]])
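Given that interlacing, individual poses and landmarks can be read out of mu by index. The two small helpers below are only an illustration of the indexing convention; the same convention is used by the get_poses_landmarks function defined further down.
# pose i occupies rows 2*i and 2*i + 1 of mu
def get_pose(mu, i):
    return mu[2*i].item(), mu[2*i + 1].item()

# landmark j comes after all N poses, at rows 2*(N + j) and 2*(N + j) + 1
def get_landmark(mu, N, j):
    return mu[2*(N + j)].item(), mu[2*(N + j) + 1].item()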
Create an environment
In a real SLAM problem you might be given a map that contains information about landmark locations. In this example we instead create our own data with the make_data function, which generates a 2D grid of a world with landmarks in it, places a robot in that space, and generates data by moving and sensing over some number of time steps. The make_data function relies on a correct implementation of the robot's move and sense functions, which are saved in the robot_class.py file. The data is collected as the instantiated robot moves and senses in the world, and your SLAM function will take this data as input. So, the first thing to do is create this data and then explore how it represents the motions and sensor measurements that our robot makes.
Create the world
Use the code below to generate a world of a specified size with randomly placed landmarks.
data holds the sensor measurements and the robot's motion over time. It stores the measurements as data[i][0] and the motion as data[i][1].
import numpy as np
from helpers import make_data
# your implementation of slam should work with the following inputs
# feel free to change these input values and see how it responds!
# world parameters
num_landmarks = 5 # number of landmarks
N = 20 # time steps
world_size = 100.0 # size of world (square)
# robot parameters
measurement_range = 50.0 # range at which we can sense landmarks
motion_noise = 2.0 # noise in robot motion
measurement_noise = 2.0 # noise in the measurements
distance = 20.0 # distance by which robot (intends to) move each iteration
# make_data instantiates a robot, AND generates random landmarks for a given world size and number of landmarks
data = make_data(N, num_landmarks, world_size, measurement_range, motion_noise, measurement_noise, distance)
Notes about make_data
The make_data function above takes in a number of parameters about the world and the robot's motion and sensors because it has to:
- instantiate a robot (using the robot class)
- create a grid world with landmarks in it
This function also prints out the true landmark locations and the robot's final location, which you should refer to when testing your SLAM implementation.
The data returned is an array that holds information about the robot's sensor measurements and its motion (dx, dy), collected over a number of time steps N. You will only use these readings about motion and measurements to track the robot over time and to find the landmark locations with SLAM. The true landmark locations are printed only so they can be compared with the SLAM estimates later.
In data, the measurement and motion data for each entry can be accessed from its first and second index. See the following code for an example, where i is the time step:
measurement = data[i][0]
motion = data[i][1]
# print out some stats about the data
time_step = 0
print('Example measurements: \n', data[time_step][0])
print('\n')
print('Example motion: \n', data[time_step][1])
Initialize constraints
Create and modify the constraint matrix and vector: omega, a square matrix (shown in blue in the reference figure), and xi, a column vector (shown in pink).
Here we implement constraints for a 2D world: robot poses are represented as Px, Py and landmark positions as Lx, Ly.
From the input values, the number of time steps N, num_landmarks, and world_size, you have all the information you need to construct initial constraints of the correct size and with the correct starting values. Assume that our robot starts out in the exact center of the world with 100% confidence (there is no motion or measurement noise at this point), and initialize every unknown value to 0. The function initialize_constraints returns the omega and xi constraints for the robot's starting position.
def initialize_constraints(N, num_landmarks, world_size):
    ''' This function takes in a number of time steps N, number of landmarks, and a world_size,
        and returns initialized constraint matrices, omega and xi.'''

    ## Recommended: Define and store the size (rows/cols) of the constraint matrix in a variable
    size = (N + num_landmarks) * 2

    ## Define the constraint matrix, Omega, with two initial "strength" values
    ## for the initial x, y location of our robot
    omega = np.zeros((size, size))
    omega[0][0], omega[1][1] = 1., 1.

    ## Define the constraint *vector*, xi
    ## you can assume that the robot starts out in the middle of the world with 100% confidence
    world_center = world_size / 2
    xi = np.zeros((size, 1))
    xi[0] = world_center
    xi[1] = world_center

    return omega, xi
Test as you go
It is a good idea to test the creation and updating of the constraint matrices as you build them, checking that they are initialized as expected for any given parameters. Here we use the seaborn library for visualization.
omega holds the weight of each variable, and xi holds the value that the weighted sum of those variables should equal.
# import data viz resources
import matplotlib.pyplot as plt
from pandas import DataFrame
import seaborn as sns
%matplotlib inline
# define a small N and world_size (small for ease of visualization)
N_test = 5
num_landmarks_test = 2
small_world = 10
# initialize the constraints
initial_omega, initial_xi = initialize_constraints(N_test, num_landmarks_test, small_world)
# define figure size
plt.rcParams["figure.figsize"] = (10,7)
# display omega
sns.heatmap(DataFrame(initial_omega), cmap='Blues', annot=True, linewidths=.5)
# define figure size
plt.rcParams["figure.figsize"] = (1,7)
# display xi
sns.heatmap(DataFrame(initial_xi), cmap='Oranges', annot=True, linewidths=.5)
SLAM inputs
In addition to data, your slam function will also take in:
- N - the number of time steps that the robot will be moving and sensing
- num_landmarks - the number of landmarks in the world
- world_size - the size (width/height) of the world
- motion_noise - the noise associated with motion; the update confidence for motion should be 1.0/motion_noise
- measurement_noise - the noise associated with measurement/sensing; the update weight for measurements should be 1.0/measurement_noise
A note on noise:
omega holds the relative "strength" or weight of each position variable, and you update these weights by accessing the correct index in omega[row][col] and adding or subtracting 1.0/noise, where noise is either the measurement or the motion noise. xi holds the actual position values, so to update xi you perform a similar addition using the actual value of a motion or measurement. For a vector index xi[row][0], you end up adding or subtracting a measurement or motion value divided by its respective noise.
## TODO: Complete the code to implement SLAM
## slam takes in 6 arguments and returns mu,
## mu is the entire path traversed by a robot (all x,y poses) *and* all landmarks locations
def slam(data, N, num_landmarks, world_size, motion_noise, measurement_noise):

    ## Use your initialization to create constraint matrices, omega and xi
    omega, xi = initialize_constraints(N, num_landmarks, world_size)

    ## Iterate through each time step in the data
    ## and get all the motion and measurement data as you iterate
    for i in range(len(data)):
        measurements, motion = data[i][0], data[i][1]

        ## Update the constraint matrix/vector to account for all *measurements*;
        ## this is a series of additions that take into account the measurement noise
        for measure in measurements:
            idx_landmark = measure[0]
            x, y = measure[1], measure[2]

            # take into account the measurement noise, according to the x value
            omega[2*i, 2*i] += 1 / measurement_noise
            omega[2*i, 2*N + 2*idx_landmark] += -1 / measurement_noise
            omega[2*N + 2*idx_landmark, 2*i] += -1 / measurement_noise
            omega[2*N + 2*idx_landmark, 2*N + 2*idx_landmark] += 1 / measurement_noise
            xi[2*i, 0] += -x / measurement_noise
            xi[2*N + 2*idx_landmark, 0] += x / measurement_noise

            # take into account the measurement noise, according to the y value
            omega[2*i + 1, 2*i + 1] += 1 / measurement_noise
            omega[2*i + 1, 2*N + 2*idx_landmark + 1] += -1 / measurement_noise
            omega[2*N + 2*idx_landmark + 1, 2*i + 1] += -1 / measurement_noise
            omega[2*N + 2*idx_landmark + 1, 2*N + 2*idx_landmark + 1] += 1 / measurement_noise
            xi[2*i + 1, 0] += -y / measurement_noise
            xi[2*N + 2*idx_landmark + 1, 0] += y / measurement_noise

        ## Update the constraint matrix/vector to account for all *motion* and motion noise
        dx, dy = motion[0], motion[1]

        # update according to the dx value
        omega[2*i, 2*i] += 1 / motion_noise
        omega[2*i, 2*i + 2] += -1 / motion_noise
        omega[2*i + 2, 2*i] += -1 / motion_noise
        omega[2*i + 2, 2*i + 2] += 1 / motion_noise
        xi[2*i, 0] += -dx / motion_noise
        xi[2*i + 2, 0] += dx / motion_noise

        # update according to the dy value
        omega[2*i + 1, 2*i + 1] += 1 / motion_noise
        omega[2*i + 1, 2*i + 3] += -1 / motion_noise
        omega[2*i + 3, 2*i + 1] += -1 / motion_noise
        omega[2*i + 3, 2*i + 3] += 1 / motion_noise
        xi[2*i + 1, 0] += -dy / motion_noise
        xi[2*i + 3, 0] += dy / motion_noise

    ## After iterating through all the data,
    ## compute the best estimate of poses and landmark positions
    ## using the formula: omega_inverse * xi
    mu = np.linalg.inv(np.matrix(omega)) * xi

    return mu # return `mu`
Helper functions
To check that SLAM works for a variety of inputs, first define a function that, given a result mu and the number of time steps N, extracts the poses and landmark positions and returns them as separate lists. Then define a function that prints these lists out nicely.
# a helper function that creates a list of poses and of landmarks for ease of printing
# this only works for the suggested constraint architecture of interlaced x,y poses
def get_poses_landmarks(mu, N):
    # create a list of poses
    poses = []
    for i in range(N):
        poses.append((mu[2*i].item(), mu[2*i+1].item()))

    # create a list of landmarks
    landmarks = []
    for i in range(num_landmarks):
        landmarks.append((mu[2*(N+i)].item(), mu[2*(N+i)+1].item()))

    # return completed lists
    return poses, landmarks


def print_all(poses, landmarks):
    print('\n')
    print('Estimated Poses:')
    for i in range(len(poses)):
        print('['+', '.join('%.3f'%p for p in poses[i])+']')
    print('\n')
    print('Estimated Landmarks:')
    for i in range(len(landmarks)):
        print('['+', '.join('%.3f'%l for l in landmarks[i])+']')
Run SLAM
The generated data is random, but you did specify the number of motions or time steps, N, that the robot was expected to make, as well as num_landmarks in the 2D space. Your SLAM function should produce, as sketched in the cell after this list:
- Estimated poses: a list of (x, y) positions with a length of exactly N, since this is how many motions your robot made. The first pose should be the center of the 2D space; for a square world of size 100.0 that is [50.000, 50.000].
- Estimated landmarks: a list of landmark (x, y) positions with a length of exactly num_landmarks.
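A short cell to actually run slam on the data generated above and print the estimates; it assumes the data, N, num_landmarks, world_size, motion_noise, and measurement_noise values defined earlier in this notebook:
# call your implementation of slam, passing in the necessary parameters
mu = slam(data, N, num_landmarks, world_size, motion_noise, measurement_noise)

# print out the resulting landmarks and poses
if mu is not None:
    poses, landmarks = get_poses_landmarks(mu, N)
    print_all(poses, landmarks)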
Visualization
# import the helper function
from helpers import display_world
# Display the final world!
# define figure size
plt.rcParams["figure.figsize"] = (20,20)
# check if poses has been created
if 'poses' in locals():
    # print out the last pose
    print('Last pose: ', poses[-1])
    # display the last position of the robot *and* the landmark positions
    display_world(int(world_size), poses[-1], landmarks)
Testing
# Here is the data and estimated outputs for test case 1
test_data1 = [[[[1, 19.457599255548065, 23.8387362100849], [2, -13.195807561967236, 11.708840328458608], [3, -30.0954905279171, 15.387879242505843]], [-12.2607279422326, -15.801093326936487]], [[[2, -0.4659930049620491, 28.088559771215664], [4, -17.866382374890936, -16.384904503932]], [-12.2607279422326, -15.801093326936487]], [[[4, -6.202512900833806, -1.823403210274639]], [-12.2607279422326, -15.801093326936487]], [[[4, 7.412136480918645, 15.388585962142429]], [14.008259661173426, 14.274756084260822]], [[[4, -7.526138813444998, -0.4563942429717849]], [14.008259661173426, 14.274756084260822]], [[[2, -6.299793150150058, 29.047830407717623], [4, -21.93551130411791, -13.21956810989039]], [14.008259661173426, 14.274756084260822]], [[[1, 15.796300959032276, 30.65769689694247], [2, -18.64370821983482, 17.380022987031367]], [14.008259661173426, 14.274756084260822]], [[[1, 0.40311325410337906, 14.169429532679855], [2, -35.069349468466235, 2.4945558982439957]], [14.008259661173426, 14.274756084260822]], [[[1, -16.71340983241936, -2.777000269543834]], [-11.006096015782283, 16.699276945166858]], [[[1, -3.611096830835776, -17.954019226763958]], [-19.693482634035977, 3.488085684573048]], [[[1, 18.398273354362416, -22.705102332550947]], [-19.693482634035977, 3.488085684573048]], [[[2, 2.789312482883833, -39.73720193121324]], [12.849049222879723, -15.326510824972983]], [[[1, 21.26897046581808, -10.121029799040915], [2, -11.917698965880655, -23.17711662602097], [3, -31.81167947898398, -16.7985673023331]], [12.849049222879723, -15.326510824972983]], [[[1, 10.48157743234859, 5.692957082575485], [2, -22.31488473554935, -5.389184118551409], [3, -40.81803984305378, -2.4703329790238118]], [12.849049222879723, -15.326510824972983]], [[[0, 10.591050242096598, -39.2051798967113], [1, -3.5675572049297553, 22.849456408289125], [2, -38.39251065320351, 7.288990306029511]], [12.849049222879723, -15.326510824972983]], [[[0, -3.6225556479370766, -25.58006865235512]], [-7.8874682868419965, -18.379005523261092]], [[[0, 1.9784503557879374, -6.5025974151499]], [-7.8874682868419965, -18.379005523261092]], [[[0, 10.050665232782423, 11.026385307998742]], [-17.82919359778298, 9.062000642947142]], [[[0, 26.526838150174818, -0.22563393232425621], [4, -33.70303936886652, 2.880339841013677]], [-17.82919359778298, 9.062000642947142]]]
## Test Case 1
##
# Estimated Pose(s):
# [50.000, 50.000]
# [37.858, 33.921]
# [25.905, 18.268]
# [13.524, 2.224]
# [27.912, 16.886]
# [42.250, 30.994]
# [55.992, 44.886]
# [70.749, 59.867]
# [85.371, 75.230]
# [73.831, 92.354]
# [53.406, 96.465]
# [34.370, 100.134]
# [48.346, 83.952]
# [60.494, 68.338]
# [73.648, 53.082]
# [86.733, 38.197]
# [79.983, 20.324]
# [72.515, 2.837]
# [54.993, 13.221]
# [37.164, 22.283]
# Estimated Landmarks:
# [82.679, 13.435]
# [70.417, 74.203]
# [36.688, 61.431]
# [18.705, 66.136]
# [20.437, 16.983]
### Run the following three lines for test case 1 and compare the output to the values above ###
mu_1 = slam(test_data1, 20, 5, 100.0, 2.0, 2.0)
poses, landmarks = get_poses_landmarks(mu_1, 20)
print_all(poses, landmarks)
# Here is the data and estimated outputs for test case 2
test_data2 = [[[[0, 26.543274387283322, -6.262538160312672], [3, 9.937396825799755, -9.128540360867689]], [18.92765331253674, -6.460955043986683]], [[[0, 7.706544739722961, -3.758467215445748], [1, 17.03954411948937, 31.705489938553438], [3, -11.61731288777497, -6.64964096716416]], [18.92765331253674, -6.460955043986683]], [[[0, -12.35130507136378, 2.585119104239249], [1, -2.563534536165313, 38.22159657838369], [3, -26.961236804740935, -0.4802312626141525]], [-11.167066095509824, 16.592065417497455]], [[[0, 1.4138633151721272, -13.912454837810632], [1, 8.087721200818589, 20.51845934354381], [3, -17.091723454402302, -16.521500551709707], [4, -7.414211721400232, 38.09191602674439]], [-11.167066095509824, 16.592065417497455]], [[[0, 12.886743222179561, -28.703968411636318], [1, 21.660953298391387, 3.4912891084614914], [3, -6.401401414569506, -32.321583037341625], [4, 5.034079343639034, 23.102207946092893]], [-11.167066095509824, 16.592065417497455]], [[[1, 31.126317672358578, -10.036784369535214], [2, -38.70878528420893, 7.4987265861424595], [4, 17.977218575473767, 6.150889254289742]], [-6.595520680493778, -18.88118393939265]], [[[1, 41.82460922922086, 7.847527392202475], [3, 15.711709540417502, -30.34633659912818]], [-6.595520680493778, -18.88118393939265]], [[[0, 40.18454208294434, -6.710999804403755], [3, 23.019508919299156, -10.12110867290604]], [-6.595520680493778, -18.88118393939265]], [[[3, 27.18579315312821, 8.067219022708391]], [-6.595520680493778, -18.88118393939265]], [[], [11.492663265706092, 16.36822198838621]], [[[3, 24.57154567653098, 13.461499960708197]], [11.492663265706092, 16.36822198838621]], [[[0, 31.61945290413707, 0.4272295085799329], [3, 16.97392299158991, -5.274596836133088]], [11.492663265706092, 16.36822198838621]], [[[0, 22.407381798735177, -18.03500068379259], [1, 29.642444125196995, 17.3794951934614], [3, 4.7969752441371645, -21.07505361639969], [4, 14.726069092569372, 32.75999422300078]], [11.492663265706092, 16.36822198838621]], [[[0, 10.705527984670137, -34.589764174299596], [1, 18.58772336795603, -0.20109708164787765], [3, -4.839806195049413, -39.92208742305105], [4, 4.18824810165454, 14.146847823548889]], [11.492663265706092, 16.36822198838621]], [[[1, 5.878492140223764, -19.955352450942357], [4, -7.059505455306587, -0.9740849280550585]], [19.628527845173146, 3.83678180657467]], [[[1, -11.150789592446378, -22.736641053247872], [4, -28.832815721158255, -3.9462962046291388]], [-19.841703647091965, 2.5113335861604362]], [[[1, 8.64427397916182, -20.286336970889053], [4, -5.036917727942285, -6.311739993868336]], [-5.946642674882207, -19.09548221169787]], [[[0, 7.151866679283043, -39.56103232616369], [1, 16.01535401373368, -3.780995345194027], [4, -3.04801331832137, 13.697362774960865]], [-5.946642674882207, -19.09548221169787]], [[[0, 12.872879480504395, -19.707592098123207], [1, 22.236710716903136, 16.331770792606406], [3, -4.841206109583004, -21.24604435851242], [4, 4.27111163223552, 32.25309748614184]], [-5.946642674882207, -19.09548221169787]]]
## Test Case 2
##
# Estimated Pose(s):
# [50.000, 50.000]
# [69.035, 45.061]
# [87.655, 38.971]
# [76.084, 55.541]
# [64.283, 71.684]
# [52.396, 87.887]
# [44.674, 68.948]
# [37.532, 49.680]
# [31.392, 30.893]
# [24.796, 12.012]
# [33.641, 26.440]
# [43.858, 43.560]
# [54.735, 60.659]
# [65.884, 77.791]
# [77.413, 94.554]
# [96.740, 98.020]
# [76.149, 99.586]
# [70.211, 80.580]
# [64.130, 61.270]
# [58.183, 42.175]
# Estimated Landmarks:
# [76.777, 42.415]
# [85.109, 76.850]
# [13.687, 95.386]
# [59.488, 39.149]
# [69.283, 93.654]
### Run the following three lines for test case 2 and compare to the values above ###
mu_2 = slam(test_data2, 20, 5, 100.0, 2.0, 2.0)
poses, landmarks = get_poses_landmarks(mu_2, 20)
print_all(poses, landmarks)