In the book *SLAM for Dummies*, why do we even need odometry when the robot could use the data from the laser scanner, which is more accurate than odometry? Why not just rely on the laser scanner and do away with odometry entirely? Does odometry contribute anything that the laser scanner does not? Also, are all SLAM algorithms feature-based?

  • One question at a time, please. But I tried to answer both. Now please don't edit the question to ask more; instead, ask in separate questions. Thanks! (Commented Oct 15, 2013 at 13:06)

4 Answers


You are reading it too narrowly.

  • You don't "need" odometry. SLAM is simply a way to fuse any sensor estimates into a consistent estimate of the robot's state.

  • "Feature-based" doesn't necessarily mean you need to have identifiable features everywhere in the environment.

  • First principle of sensor fusion: two estimates are better than one!

Example

I haven't read the "for dummies" book, but if it doesn't include the following numeric example, I'd set it on fire and get a better one. And if it does have this example, then I wonder why you didn't mention it!

(you can follow along with the math here)

A robot is at position $x=0$ and is moving to the right (increasing $x$). In this perfect world, the dynamics and sensor models are linear (otherwise, use an EKF, a PF, or some variant).

  • There's a wall at exactly $x=10$ which the robot can measure its distance to.
  • The robot has a laser scanner that measures distance with sensor variance $\sigma_l^2=.1$.
  • The robot can measure its distance travelled with odometers, with sensor variance $\sigma_o^2=.5$. Clearly the laser is more accurate than the odometry.

Here's how the robot handles SLAM in this simple environment. (Note this is actually localization, since we aren't updating the position of the wall.)

  • The robot tries to move one unit to the right.
  • Odometry measures $x=.9$.
  • The laser scanner says you are $8.8$ units from the wall (implying you are at $x=1.2$).

Question: Where are you?

  • Do you choose the best sensor? In this case the laser is the better one, right? So obviously I'm at $x=1.2$.

  • Do you choose the one "closest" to what you expect? In that case I should use odometry, since $.9$ is closer to what I intended (moving one unit).

  • Maybe you could average the two? That's better, but it is susceptible to outliers.

  • The shiny principles of sensor fusion tell you how to answer the question, as follows:

Your minimum mean-squared estimate of the robot position is given by:

$$x_{mmse}=\frac{\sigma_l^2}{\sigma_o^2+\sigma_l^2}(.9) + \frac{\sigma_o^2}{\sigma_o^2+\sigma_l^2}(1.2)$$
$$x_{mmse}=\frac{.1}{.6}(.9) + \frac{.5}{.6}(1.2)$$
$$x_{mmse}=1.15$$

... unless I screwed up the algebra somewhere. People localize airplanes using math not much more complicated than that.
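If it helps to see that arithmetic executed, here is a minimal Python sketch of the same inverse-variance (MMSE) fusion. The numbers come straight from the example above; the helper name `fuse` is mine, not from any book:

```python
def fuse(x1, var1, x2, var2):
    """Fuse two noisy estimates of the same scalar by inverse-variance weighting."""
    w1 = var2 / (var1 + var2)            # weight grows as the *other* sensor gets noisier
    w2 = var1 / (var1 + var2)
    x = w1 * x1 + w2 * x2
    var = (var1 * var2) / (var1 + var2)  # fused variance is smaller than either input
    return x, var

x_odo, var_odo = 0.9, 0.5      # odometry measurement
x_laser, var_laser = 1.2, 0.1  # laser: wall at 10, range 8.8, so x = 1.2

x_mmse, var_mmse = fuse(x_odo, var_odo, x_laser, var_laser)
print(x_mmse)    # 1.15, matching the hand calculation
print(var_mmse)  # ~0.083 -- better than the laser's 0.1 alone
```

Note that the fused variance beats either sensor on its own, which is the "two estimates are better than one" principle in action.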


If you read about the principles of sensor fusion, you will find that you always get a better estimate when you combine data in the right way. For example, if you are measuring the temperature in a room with 3 different sensors, it is not ideal to use only the best sensor. The ideal approach is to form a weighted combination of all the sensors, where each sensor's weight is inversely proportional to its variance. On top of that, odometry data is VERY good; it's the extraction of landmarks that is noisy and will most likely have the higher variance.
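As a rough sketch of that weighting (the three temperature readings and variances below are invented purely for illustration):

```python
def fuse_many(readings, variances):
    """Inverse-variance weighted combination of any number of sensors."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    estimate = sum(w * x for w, x in zip(weights, readings)) / total
    fused_variance = 1.0 / total  # never worse than the best single sensor
    return estimate, fused_variance

temps = [20.3, 19.8, 21.1]  # three sensors, same room (made-up readings)
sigmas2 = [0.2, 0.5, 1.0]   # sensor 1 is the most trustworthy

est, var = fuse_many(temps, sigmas2)
print(est, var)  # ~20.275, 0.125 -- beats even the 0.2-variance sensor alone
```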

If you think about it from a high-level perspective, a motion update based on odometry is also required. If you only used landmarks, you would run into ambiguous cases. Take, for example, the case where you identify only one landmark: you'd have a distance $z$ from your robot to the landmark, but that maps to an infinite number of points on a circle around the landmark. If you identify zero landmarks, you can't do anything! Including odometry removes the ambiguity. Assuming we're localizing in a 2D plane $(x, y)$, you would have to guarantee readings from at least 3 landmarks to triangulate your position without odometry, and you cannot make that guarantee in normal environments.
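A small sketch of that ambiguity, with invented numbers: one range reading pins you to a circle, and an odometry prediction picks the point on it:

```python
import math

landmark = (5.0, 5.0)
z = 2.0  # measured range to the landmark

# Every bearing theta gives a position exactly z away from the landmark:
thetas = [i * 2 * math.pi / 8 for i in range(8)]
candidates = [(landmark[0] + z * math.cos(t), landmark[1] + z * math.sin(t))
              for t in thetas]  # 8 samples of an infinite circle of candidates

# An odometry prediction collapses the ambiguity: keep the nearest candidate.
predicted = (6.8, 4.6)  # where dead reckoning says we should be (made up)
best = min(candidates, key=lambda p: math.dist(p, predicted))
print(best)
```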

Finally, an encoder can be sampled on the order of 50 Hz, whereas a LIDAR can only be sampled at around 6-7 Hz (don't quote me on those exact frequencies). This means you can update your current position much more frequently via odometry than via laser readings, and that's not even accounting for how long it takes to process a scan to identify landmarks!
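As a toy illustration of that rate mismatch, here is a 1-D predict/correct loop using the rough frequencies above; the "scan" value is a stand-in, not real sensor processing:

```python
odo_hz, lidar_hz = 50, 7  # rough figures from above, not datasheet values
dt = 1.0 / odo_hz

x, var = 0.0, 0.0  # 1-D pose estimate and its variance
v, q = 1.0, 0.5    # commanded speed and odometry noise per second

for step in range(odo_hz):                # simulate one second
    x += v * dt                           # odometry prediction, every 20 ms
    var += q * dt
    if step % (odo_hz // lidar_hz) == 0:  # a scan arrives ~7 times per second
        z, r = 1.05 * x, 0.1              # stand-in scan-derived position
        w = var / (var + r)
        x += w * (z - x)                  # Kalman-style correction
        var *= (1 - w)

print(x, var)
```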


Just to add to this: estimating the robot position with odometry is much faster than doing so with data from a laser scanner. In most setups, data from a range scanner is handled as a 2D point cloud, which means that to estimate the relative pose between positions A and B you need to align their corresponding point clouds and find the most probable pose from that alignment. You would use ICP or a similar algorithm for this, which is computationally intensive due to its iterative nature.

On the other hand, with odometry information (e.g. from the wheel encoders) you just compose the increment with the current pose estimate; for Gaussian estimates this amounts to adding the means and variances of the two probability density functions (current estimate + incremental odometry reading).
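A tiny sketch of that difference in cost, with invented numbers:

```python
# Composing the pose estimate with an odometry increment is constant-time
# arithmetic. For 1-D Gaussians with independent noises, the means and
# variances simply add -- no iteration, unlike ICP over two point clouds.

pose_mean, pose_var = 3.2, 0.04  # current 1-D pose estimate
inc_mean, inc_var = 0.5, 0.01    # incremental odometry reading

pose_mean += inc_mean            # one addition for the mean...
pose_var += inc_var              # ...and one for the variance
print(pose_mean, pose_var)       # 3.7, 0.05
```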


The EKF principles have been well explained in the other answers.

I would like to add that you can do SLAM without using odometry, e.g. using only a LIDAR.

"Also, are all SLAM algorithms feature-based?"

No, not all of them, of course. Grid-based approaches that match raw scans (e.g. occupancy-grid SLAM) are not feature-based.

