Control of Mobile Robots - 2.4 Sensors


We have a model of a robot, and we know how the robot can get position information; in this case we used the wheel encoders, but there are other ways, and we talked about compasses and accelerometers. But the robot also needs to know what the world around it looks like, and for that you need sensors. We are not going to spend too much time modeling different kinds of sensors and seeing what the difference is between, say, an infrared and an ultrasonic range sensor. Instead, we're going to come up with an abstraction that captures what a lot of different sensing modalities can do, and it's going to be based on what's called the range sensor skirt. This is the standard sensor suite on a lot of robots: basically a collection of sensors, gathered around the robot, that measures distances in different directions. Infrared skirts, ultrasound, and LIDAR, which are laser scanners, are all examples of these range sensors, and they're going to show up a lot. Now, there are other standard external sensors as well, of course: vision, or tactile sensors such as bumpers and other ways of physically interacting with the world, or "GPS". I'm putting GPS in quotation marks because there are ways of faking it; for instance, in my lab I'm using a motion capture system to pretend that I have GPS.
But what we're going to do, mainly, is assume that we have this kind of setup, where a skirt around the robot can measure distances to things in the environment. In fact, here is a simulation of the Khepera, and the Khepera in this case has a number of infrared sensors. You see the cones, blue and red cones, and then you have red rectangles. The red rectangles are obstacles, and what we're going to be able to do is measure the direction and distance to those obstacles. So this is the type of information we're going to get out of these range-sensor skirts. Over here on the right you see two pictures of the sensing modalities that we had on the self-driving car that was developed at Georgia Tech: laser scanners, radar, and vision. The point is that the skirt doesn't always have to be uniform, or even homogeneous across the sensors. Here we have a skirt that is heterogeneous across different sensing modalities, but roughly you have the same kind of abstraction for a car like this as for a Khepera, a little mobile differential-drive robot.
Okay, so that's fine, but we don't actually want to worry about particular sensors. We need to come up with an abstraction of this sensor skirt that makes sense and that we can reason about when we design our controllers. So what we're going to do is perform what's called a disk abstraction. Here's the robot, sitting in the middle, and around it are sensors. In fact, if you look at this picture, here are little infrared sensors, and here are ultrasonic sensors. You see that scattered around this robot is a skirt of range sensors. They typically have an effective range, and we're going to abstract that and say there is a disk around the robot, of a certain radius, where the robot can see what's going on. This is the pinkish disk around the robot, and within it the robot can detect the obstacles that are around it.
So the two red symbols there are the obstacles, and what we can do is figure out how far away the two obstacles are. So d1 is the distance to obstacle one, which is this guy here, and phi1 is the angle to that obstacle; similarly, d2 is the distance to obstacle two and phi2 is the angle to obstacle two. One thing to keep in mind, though, is that the robot has its own coordinate system, in the sense that if this is the x axis of the robot right now, then phi1 is measured relative to the robot's x axis, which is the robot's heading. So we need to take that into account if we want to know globally where the obstacles are.
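To make this concrete, here is a minimal sketch, in Python, of what the disk abstraction hands us; the function and variable names are my own, not something defined in the lecture. For every obstacle that falls inside the sensing radius R, we get back a distance and a bearing measured relative to the robot's heading, which is exactly what we will convert to global coordinates next.

    import math

    def disk_sensor_readings(x, y, phi, obstacles, R):
        """Simulate the disk abstraction (illustrative sketch only):
        return (distance, bearing) pairs, with the bearing measured
        relative to the robot's heading phi, for every obstacle that
        lies inside the sensing disk of radius R."""
        readings = []
        for (ox, oy) in obstacles:
            dx, dy = ox - x, oy - y
            d = math.hypot(dx, dy)                  # distance to this obstacle
            if d <= R:                              # only things inside the disk are seen
                bearing = math.atan2(dy, dx) - phi  # angle relative to the robot's x axis
                readings.append((d, bearing))
        return readings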
So let's do that. If we know our own pose, so we know x, y, and phi, then the headings we measure to the obstacles are relative to our own orientation. Say our orientation is phi and the measured angle to obstacle two is phi2; then of course the actual direction to obstacle two is going to be phi2 plus phi. So what we can do is take this into account and compute the global positions of these obstacles if we know where the robot is. For instance, the global position of obstacle one, x1 and y1, is the position of the robot plus the distance to that obstacle times the cosine and sine of this phi1 plus phi term; that is, x1 = x + d1 cos(phi1 + phi) and y1 = y + d1 sin(phi1 + phi). So we actually know globally where the obstacles are if we know where the robot is.
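In code, that conversion amounts to one line per coordinate. Here is a minimal sketch, assuming the pose x, y, phi is known and the measurement d, phi_i comes from a disk sensor like the sketch above (again, the function name is mine):

    import math

    def obstacle_global_position(x, y, phi, d, phi_i):
        """Convert a range/bearing measurement taken in the robot's frame
        into global coordinates, given the robot pose (x, y, phi):
            x_i = x + d * cos(phi_i + phi)
            y_i = y + d * sin(phi_i + phi)"""
        return (x + d * math.cos(phi_i + phi),
                y + d * math.sin(phi_i + phi))

For example, a robot sitting at the origin and facing straight up, phi = pi/2, that measures an obstacle dead ahead, phi_i = 0, at distance d = 1 would place that obstacle at roughly (0, 1) in the world frame.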
This is an assumption we're going to make: we're going to assume that we know x, y, and phi, and as a corollary, we're going to assume that we know the positions of the obstacles around us in the environment. So that's the abstraction that we're going to be designing our controllers around.
I just want to show you an example of using this. It's known as the rendezvous problem in multi-agent robotics, where you have lots of robots that are supposed to meet at a common location, but they're not allowed to talk; they're not allowed to agree on where this location would be by chatting. Instead, they have to move in such a way that they end up meeting at the same location. One way of doing this is to assume you have a range sensor disk around you, and when you see other robots in that disk, instead of thinking of them as obstacles, you think of them as buddies. So what we are going to do is have each robot aim toward the center of gravity of all its neighbors, meaning everyone that is in its disk. Because of the disk abstraction we just talked about, we can actually compute where the center of gravity of our neighbors is. So here's an example of what this looks like: all the robots shrink down to meet at the same point, without any communication, simply by taking the disk around them, looking at where their neighbors are in that disk, which we now know how to compute, then computing the center of gravity of those neighbors and aiming towards that center of gravity.
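To make that concrete, here is a sketch of what one rendezvous update could look like under the disk abstraction; the synchronous update and the fixed step size are my own simplifying assumptions for illustration, not something prescribed in the lecture.

    import math

    def rendezvous_step(positions, R, step=0.1):
        """One synchronous rendezvous update (illustrative sketch only):
        every robot moves a small step toward the center of gravity of
        the neighbors it sees inside its sensing disk of radius R."""
        updated = []
        for i, (xi, yi) in enumerate(positions):
            neighbors = [(xj, yj) for j, (xj, yj) in enumerate(positions)
                         if j != i and math.hypot(xj - xi, yj - yi) <= R]
            if neighbors:
                cx = sum(px for px, py in neighbors) / len(neighbors)
                cy = sum(py for px, py in neighbors) / len(neighbors)
                xi += step * (cx - xi)   # aim toward the neighbors' center of gravity
                yi += step * (cy - yi)
            updated.append((xi, yi))
        return updated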
Okay, now we have a robot model, we have a model for figuring out where the robot is, and we have a model for how we know where obstacles and other things in the environment are. Now we can use these things, of course, to actually start designing controllers, and that's what we're going to do next. I do want to point out, though, that this model, the wheel encoder, and the disk abstraction are just one example of what you can do and of how you can make these kinds of abstractions. For different kinds of robots, different types of models and abstractions may be appropriate.