CN107843258B - Indoor positioning system and method - Google Patents
- Publication number
- CN107843258B CN107843258B CN201710961571.5A CN201710961571A CN107843258B CN 107843258 B CN107843258 B CN 107843258B CN 201710961571 A CN201710961571 A CN 201710961571A CN 107843258 B CN107843258 B CN 107843258B
- Authority
- CN
- China
- Prior art keywords
- light spot
- mobile robot
- image
- ceiling
- minimum circumscribed
- Prior art date
- Legal status: Active (an assumption, not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
- G01C21/206—Instruments for performing navigational calculations specially adapted for indoor navigation
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Automation & Control Theory (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
Abstract
The invention discloses an indoor positioning system and method. The system comprises a mobile robot and a positioning device having an image acquisition unit and a data processing unit. When the mobile robot moves on an indoor floor, a transmitter arranged on the mobile robot emits a light beam toward the indoor ceiling to form a first light spot and a second light spot that differ in size and/or shape. The image acquisition unit, mounted obliquely upward on an indoor wall, acquires a contour image of the ceiling together with images of the first light spot and the second light spot. The data processing unit determines the pose of the mobile robot on the indoor floor from the contour image of the ceiling and the images of the two light spots. In this way, when the pose of the mobile robot on the indoor floor is monitored, the influence of occlusion of the light path from the ceiling to the image acquisition unit by objects such as indoor furniture and electrical appliances is greatly reduced, and the accuracy of positioning the mobile robot is improved.
Description
Technical Field
The invention relates to the technical field of indoor positioning, and in particular to an indoor positioning system and method for a mobile robot.
Background
In modern society, mobile robots such as floor-sweeping robots, floor-mopping robots, and nursing robots are increasingly used indoors, and positioning is the most basic and important technology for realizing their autonomous navigation. Positioning technologies comprise relative positioning methods and absolute positioning methods. Dead reckoning is a classical relative positioning method: sensors on the mobile robot, such as a code disc and a gyroscope, acquire its dynamic information, and a recursive accumulation formula yields an estimated position relative to the initial state; its defect is a large accumulated error. In an absolute positioning method, the mobile robot obtains known external reference information, such as reference positions, and calculates its own position from its relation to that reference information. Absolute positioning mainly includes beacon-based positioning, environment-map matching positioning, visual positioning, and the like.
Beacon-based positioning means that sensors on the mobile robot receive or observe beacons with known positions in the surrounding environment and calculate the relative position between the robot and the beacons, thereby positioning the robot. A beacon can be passive or active: passive beacons include reflective materials, special marks, and the like, while active beacons include active optical beacons. The light emitted by an active optical beacon can be detected by an optical sensor on the mobile robot, and various parameters of the emitted light can then be measured, such as the distance (using a time-of-flight method), the direction relative to the beacon, and the signal intensity. However, in practical applications, the line of sight between the active optical beacon and the optical sensor is easily blocked by objects such as furniture and electrical appliances in the room; the optical sensor then cannot receive the light emitted by the beacon, and the positioning of the mobile robot is affected.
Disclosure of Invention
The invention aims to solve the technical problem that, in conventional methods for positioning a mobile robot with an optical beacon, the line of sight between the optical beacon and the optical sensor is easily blocked by objects such as indoor furniture and electrical appliances, which affects the positioning accuracy of the mobile robot, and accordingly provides an indoor positioning system and method.
An indoor positioning system, the system comprising:
a mobile robot configured to move on a floor in a room, the mobile robot being provided with a transmitter configured to emit a light beam toward a ceiling in the room, the light beam emitted by the transmitter forming a first light spot and a second light spot of different size and/or shape on the ceiling in the room;
a positioning device, comprising:
an image acquisition unit mounted obliquely upward on a wall in the room so as to acquire at least a partial contour image of the ceiling and images of the first light spot and the second light spot;
and the data processing unit is in communication connection with the image acquisition unit and is configured to determine the pose of the mobile robot on the floor in the room according to at least a partial contour image of the ceiling and the images of the first light spot and the second light spot.
In one embodiment, the transmitter is an infrared laser transmitter, and the image acquisition unit is an infrared image acquisition unit.
In one embodiment, the positioning device further comprises at least three contour markers configured to be disposed on the ceiling in the room; the image acquisition unit is configured to acquire images of the at least three contour markers; and the data processing unit is configured to determine the at least partial contour image of the ceiling from the images of the at least three contour markers.
In one embodiment, the contour marker is an infrared point-shaped LED lamp.
In one embodiment, the data processing unit is configured to:
obtaining perspective-distortion correction parameters for the at least partial contour image of the ceiling;
correcting the images of the first light spot and the second light spot according to the correction parameters;
and determining the pose of the mobile robot on the floor in the room according to the poses of the images of the first light spot and the second light spot on the at least partial contour image of the ceiling.
In one embodiment, the data processing unit is configured to:
determining the circle center of the first minimum circumscribed circle and the circle center of the second minimum circumscribed circle according to the first minimum circumscribed circle corresponding to the image of the first light spot and the second minimum circumscribed circle corresponding to the image of the second light spot;
determining the position of the mobile robot on the indoor floor according to the position of the center of the first minimum circumscribed circle and/or the position of the center of the second minimum circumscribed circle;
and determining the posture of the mobile robot on the indoor floor according to the direction of the line connecting the center of the first minimum circumscribed circle and the center of the second minimum circumscribed circle.
A method of indoor positioning, the method comprising:
when the mobile robot moves on the floor in a room, a transmitter arranged on the mobile robot transmits a light beam towards the ceiling in the room, so that the light beam transmitted by the transmitter forms a first light spot and a second light spot which are different in size and/or shape on the ceiling in the room;
an image acquisition unit mounted obliquely upward on the indoor wall acquires at least a partial contour image of the ceiling and images of the first light spot and the second light spot;
and a data processing unit in communication connection with the image acquisition unit determines the pose of the mobile robot on the indoor floor according to the at least partial contour image of the ceiling and the images of the first light spot and the second light spot.
In one embodiment, the transmitter is an infrared laser transmitter, and the image acquisition unit is an infrared image acquisition unit.
In one embodiment, the image acquisition unit acquires images of at least three contour markers disposed on the ceiling in the room; the data processing unit determines the at least partial contour image of the ceiling from the images of the at least three contour markers.
In one embodiment, the contour marker is an infrared point-shaped LED lamp.
In one embodiment, the data processing unit obtains perspective-distortion correction parameters for the at least partial contour image of the ceiling; corrects the images of the first light spot and the second light spot according to the correction parameters; and determines the pose of the mobile robot on the floor in the room according to the poses of the images of the first light spot and the second light spot on the at least partial contour image of the ceiling.
In one embodiment, the data processing unit determines the center of the first minimum circumscribed circle and the center of the second minimum circumscribed circle according to a first minimum circumscribed circle corresponding to the image of the first light spot and a second minimum circumscribed circle corresponding to the image of the second light spot; determines the position of the mobile robot on the indoor floor according to the position of the center of the first minimum circumscribed circle and/or the position of the center of the second minimum circumscribed circle; and determines the posture of the mobile robot on the indoor floor according to the direction of the line connecting the two centers.
An indoor coverage test system comprising the system of any preceding claim.
An indoor coverage test method comprising the method of any one of the above.
The invention provides an indoor positioning system comprising a positioning device and a mobile robot. The mobile robot is configured to move on an indoor floor, and a transmitter on the mobile robot emits a light beam toward the ceiling to form a first light spot and a second light spot that differ in size and/or shape. The image acquisition unit of the positioning device is mounted obliquely upward on an indoor wall and acquires the contour image of the ceiling together with images of the first light spot and the second light spot. The data processing unit of the positioning device determines the pose of the mobile robot on the indoor floor from the contour image of the ceiling and the images of the two light spots, so that when the pose of the mobile robot on the indoor floor is monitored, the influence of occlusion of the light path from the ceiling to the image acquisition unit by objects such as indoor furniture and electrical appliances is greatly reduced, and the accuracy of positioning the mobile robot on the indoor floor is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic view of an application scenario of an indoor positioning system according to an embodiment of the present invention;
FIG. 2 is a block diagram of the positioning device shown in FIG. 1;
fig. 3 and fig. 4 show two possible visual appearances of the first light spot and the second light spot;
FIG. 5 is a schematic diagram of a contour image of a ceiling and images of a first light spot and a second light spot acquired by an image acquisition unit;
FIG. 6 is the perspective distortion corrected image of FIG. 5;
FIG. 7 is a flow chart of steps performed by the data processing unit;
FIG. 8 is a flowchart illustrating the steps of one embodiment of step S3 of FIG. 7;
fig. 9 is an explanatory diagram for determining the pose of the mobile robot from the corrected first light spot and second light spot;
FIG. 10 is a schematic diagram of an indoor top view in an application scenario;
fig. 11 is a schematic view of the positions of a mobile robot with a sweeping function at two adjacent sampling instants.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1 and 2, an embodiment of the present invention provides an indoor positioning system including a mobile robot 10 and a positioning apparatus 20, the positioning apparatus 20 including an image acquisition unit 21 and a data processing unit 22. The mobile robot 10 may be any one of a floor sweeping robot, a floor mopping robot, and a nursing robot, or may be a robot integrating any two or more functions of floor sweeping, dust collection, floor mopping, nursing, monitoring, and the like.
The mobile robot 10 is configured to move on a floor 31 in a room. The moving mechanism that drives the mobile robot 10 may comprise two driving wheels and one universal wheel provided on the mobile robot 10, or may instead use tracks, Mecanum wheels, and the like.
The mobile robot 10 is provided with a transmitter 11 configured to emit a light beam toward the ceiling 32 of the room; the light beam emitted by the transmitter 11 forms a first light spot a1 and a second light spot a2 on the ceiling 32. The mobile robot 10 may be provided with two transmitters 11, one emitting a light beam that forms the first light spot a1 on the ceiling 32 and the other emitting a light beam that forms the second light spot a2. The mobile robot 10 may also be provided with a single transmitter 11 whose light beam forms both the first light spot a1 and the second light spot a2 on the ceiling 32. It should be noted that, to the human eye, the first light spot a1 and the second light spot a2 may appear as two independent, spaced-apart light spots as shown in fig. 3, or as the two main components of a single overall light spot as shown in fig. 4.
Since the first light spot a1 and the second light spot a2 serve to indicate the posture of the mobile robot 10 on the floor 31, and thus its orientation, they are two light spots differing in size and/or shape: for example, the same size but different shapes, the same shape but different sizes, or both different shapes and different sizes.
The image acquisition unit 21 is in communication connection with the data processing unit 22, and in practical applications, the image acquisition unit 21 and the data processing unit 22 may be integrated together to form the positioning device 20, and based on this, the image acquisition unit 21 and the data processing unit 22 may be in communication connection through a conductive circuit; the image acquisition unit 21 and the data processing unit 22 may also be disposed in a decentralized manner to form the positioning device 20, and based on this, the image acquisition unit 21 and the data processing unit 22 may be in communication connection based on technologies such as bluetooth, wifi, zigbee, and the like.
The image acquisition unit 21 is configured to be mounted obliquely upward on the wall 33 in the room so as to acquire at least a partial contour image of the ceiling 32 and the images of the first light spot a1 and the second light spot a2. The image acquisition unit 21 is installed obliquely upward on the indoor wall 33 to reduce obstruction of its sight lines by indoor furniture, electrical appliances, and the like, and to capture a wider-range contour image of the ceiling 32 by enlarging its field of view. In practice, by selecting an image acquisition unit 21 with a suitable viewing angle and adjusting its oblique mounting angle, the rectangular ceiling 32 can be placed entirely within the field of view of the image acquisition unit 21, i.e. the unit can capture the whole contour image of the rectangular ceiling 32.
In the embodiment of the present invention, in order to reduce interference from ambient light such as sunlight and indoor lighting with the images collected by the image acquisition unit 21, the image acquisition unit 21 is preferably an infrared image acquisition unit, for example an infrared camera, and the transmitter 11 is preferably an infrared laser transmitter. To delineate at least part of the contour of the ceiling 32, the positioning device 20 further comprises at least three contour markers 23 configured to be provided on the ceiling 32 in the room. Typically, the ceiling 32 is rectangular as shown in fig. 1; the number of contour markers 23 may then be four, installed respectively at the four corners of the rectangular ceiling 32. When the image acquisition unit 21 is an infrared camera, each contour marker 23 is preferably an infrared point-shaped LED lamp so that the camera can capture its image more easily; of course, the contour marker 23 may also be any device capable of emitting sufficiently strong infrared light.
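Locating the bright contour markers in a camera frame reduces, in the simplest case, to thresholding and taking blob centroids. The sketch below is an illustrative, numpy-only stand-in (the threshold value, function name, and synthetic frame are assumptions of this example, not part of the patent; a real implementation would more likely use a dedicated blob detector):

```python
import numpy as np

def marker_centroids(ir_image, thresh=200, min_pixels=2):
    """Find bright contour markers in an IR frame: threshold the image,
    group bright pixels with a 4-connected flood fill, and return the
    (x, y) centroid of each sufficiently large blob."""
    bright = ir_image >= thresh
    seen = np.zeros_like(bright, dtype=bool)
    h, w = bright.shape
    centroids = []
    for sy in range(h):
        for sx in range(w):
            if bright[sy, sx] and not seen[sy, sx]:
                stack, pix = [(sy, sx)], []
                seen[sy, sx] = True
                while stack:                      # flood fill one blob
                    y, x = stack.pop()
                    pix.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and bright[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                if len(pix) >= min_pixels:        # ignore single-pixel noise
                    ys, xs = zip(*pix)
                    centroids.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return centroids

# Synthetic 10x10 IR frame with two bright 2x2 markers (values invented).
frame = np.zeros((10, 10))
frame[1:3, 1:3] = 255
frame[6:8, 7:9] = 255
markers = marker_centroids(frame)
```

The returned centroids feed directly into the contour estimation of the data processing unit described below.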
The data processing unit 22 determines the at least partial contour image of the ceiling 32 from the images of the at least three contour markers 23. It should be noted that the area of the floor 31 reachable by the mobile robot 10, projected onto the ceiling 32, is contained within the area enclosed by the lines connecting adjacent contour markers 23, so that as the mobile robot 10 moves on the floor 31 in the room, the image acquisition unit 21 can at all times capture the images of the first light spot a1 and the second light spot a2 formed on the ceiling 32 by the light beam emitted by the transmitter 11.
In an alternative embodiment, the image acquisition unit 21 may also be an ordinary visible-light camera, and the transmitter 11 may be a laser transmitter at another, visible frequency. The contour markers 23 may then be special patterns; after the image acquisition unit 21 acquires an image of the patterns, the data processing unit 22 identifies them by image recognition and thereby determines the at least partial contour image of the ceiling 32.
The data processing unit 22 is configured to determine the pose of the mobile robot 10 on the floor 31 in the room from the at least partial contour image of the ceiling 32 and the images of the first spot a1 and the second spot a 2. In an alternative embodiment, the data processing unit 22 includes a printed circuit board and a number of electronic and computing components (e.g., computer memory and computer processing chips, etc.) carried by the printed circuit board.
For example, because of perspective distortion, the image acquisition unit 21 captures an image as shown in fig. 5: the rectangular ceiling 32 forms an irregular quadrilateral contour in the image, and the first light spot a1 and the second light spot a2 are correspondingly distorted. Therefore, to improve the accuracy of the pose of the mobile robot 10 on the floor 31, the data processing unit 22 may first correct the perspective-distorted images of the first light spot a1 and the second light spot a2 (see the corrected image shown in fig. 6) and then determine the pose of the mobile robot 10 on the floor 31 from the corrected images. Specifically, the data processing unit 22 is configured to execute step S1, step S2, and step S3 as shown in fig. 7.
Step S1 includes: a perspective deformation correction parameter for at least a part of the contour image of the ceiling 32 is obtained.
In an alternative embodiment, a two-dimensional coordinate system may be established in advance. Taking fig. 5 and fig. 6 as an example, the coordinates of the four corners of the quadrilateral in fig. 5 and the coordinates of the four corners of the rectangle in fig. 6 are first obtained, and the transformation matrix of the perspective transformation, i.e. the correction parameters, can be calculated from these two sets of coordinate points. For example, this can be implemented with the getPerspectiveTransform function in OpenCV, with warpPerspective applying the resulting matrix to an image.
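As a hedged illustration of step S1, the correction matrix can be computed from the four corner correspondences alone. The numpy-only sketch below solves for the same 3×3 homography that OpenCV's getPerspectiveTransform returns; all corner coordinates are invented for the example:

```python
import numpy as np

def perspective_matrix(src, dst):
    """Solve for the 3x3 homography H (with H[2,2] = 1) that maps each
    src[i] to dst[i], given exactly four point pairs. Each pair yields
    two linear equations in the eight unknown entries of H."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

# Hypothetical quadrilateral ceiling outline in the raw image (fig. 5)
# and the rectangle it should become after correction (fig. 6).
quad = [(120, 80), (520, 80), (600, 400), (40, 400)]
rect = [(0, 0), (640, 0), (640, 480), (0, 480)]
H = perspective_matrix(quad, rect)
```

Applying H to any point of the distorted quadrilateral maps it onto the corresponding point of the rectangle, which is exactly the correction-parameter computation the text describes.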
Step S2 includes: the images of the first and second spots a1 and a2 were corrected according to the correction parameters.
In the embodiment of the invention, after the correction parameters are obtained, the image of the first light spot a1 and the image of the second light spot a2 are corrected by applying the transformation defined by the correction parameters.
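Since only the spot positions are ultimately needed, the correction of step S2 can also be applied directly to spot pixel coordinates rather than to the whole image. The sketch below mirrors what cv2.perspectiveTransform does for a point array; the matrix entries and spot coordinates are illustrative values, not from the patent:

```python
import numpy as np

def correct_points(H, pts):
    """Map distorted pixel coordinates through the correction homography H:
    multiply in homogeneous coordinates, then divide by the third component."""
    pts = np.asarray(pts, float)
    ones = np.ones((len(pts), 1))
    mapped = np.hstack([pts, ones]) @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# Illustrative correction matrix and raw spot centroids (values invented).
H = np.array([[1.1, 0.05, -20.0],
              [0.02, 1.2, -10.0],
              [1e-4, 2e-4, 1.0]])
spots_raw = [(300.0, 210.0), (315.0, 222.0)]
spots_corrected = correct_points(H, spots_raw)
```

The corrected coordinates then enter step S3 in place of the perspective-distorted spot images.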
Step S3 includes: the pose of the mobile robot 10 on the floor 31 in the room is determined from the poses of the images of the first spot a1 and the second spot a2 on the at least partial contour image of the ceiling 32.
In an embodiment of the present invention, step S3 executed by the data processing unit 22 may include step S31, step S32, and step S33 as shown in fig. 8; the following description takes fig. 9 as an example.
Step S31 includes: the center C11 of the first minimum circumscribed circle C1 and the center C21 of the second minimum circumscribed circle C2 are determined according to the first minimum circumscribed circle C1 corresponding to the image of the first spot A1 and the second minimum circumscribed circle C2 corresponding to the image of the second spot A2.
For example, a minEnclosingCircle function in OpenCV may be used to find a first minimum circumscribed circle C1 of the image of the first light spot a1 and a second minimum circumscribed circle C2 of the image of the second light spot a 2.
Step S32 includes: the position of the mobile robot 10 on the floor 31 in the room is determined based on the position of the center C11 of the first minimum circumscribed circle C1 and/or the position of the center C21 of the second minimum circumscribed circle C2.
For example, the center C11 position of the first minimum circumscribed circle C1 may be taken as the position of the mobile robot 10 on the floor 31 in the room; for another example, the center C21 position of the second minimum circumscribed circle C2 may be taken as the position of the mobile robot 10 on the floor 31 in the room; as another example, a position between the position of the center C11 of the first minimum circumscribed circle C1 and the position of the center C21 of the second minimum circumscribed circle C2 may be taken as the position of the mobile robot 10 on the floor 31 in the room.
Step S33 includes: the posture of the mobile robot 10 on the floor 31 in the room is determined according to the direction of the line connecting the center C11 of the first minimum circumscribed circle C1 and the center C21 of the second minimum circumscribed circle C2.
For example, when the first light spot a1 and the second light spot a2 correspond to the front-back direction of the mobile robot 10, that is, the first light spot a1 is close to the front of the mobile robot 10, and the second light spot a2 is close to the rear of the mobile robot 10, the direction of the line connecting the circle center C21 to the circle center C11 is taken as the posture of the mobile robot 10 on the indoor floor 31; for another example, when the second light spot a2 and the first light spot a1 correspond to the front-back direction of the mobile robot 10, that is, the second light spot a2 is close to the front of the mobile robot 10, and the first light spot a1 is close to the back of the mobile robot 10, the direction of the line connecting the circle center C11 to the circle center C21 is taken as the posture of the mobile robot 10 on the indoor floor 31; for another example, when the first light spot a1 and the second light spot a2 correspond to the left and right directions of the mobile robot 10, the direction perpendicular to the line from the center C21 to the center C11 is the posture of the mobile robot 10 on the indoor floor 31.
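The position and posture rules of steps S32 and S33 can be sketched in a few lines. The midpoint choice and the spot-order flag below are illustrative assumptions picked from the alternatives the text lists:

```python
import math

def robot_pose(c1, c2, spot_order="first_front"):
    """Pose from the two corrected circle centers c1 (first spot) and
    c2 (second spot): the position is taken as their midpoint, and the
    heading is the direction of the line between them, oriented by
    which spot lies toward the robot's front."""
    x = (c1[0] + c2[0]) / 2.0
    y = (c1[1] + c2[1]) / 2.0
    if spot_order == "first_front":      # first spot nearer the front
        dx, dy = c1[0] - c2[0], c1[1] - c2[1]
    else:                                # second spot nearer the front
        dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    return (x, y), math.atan2(dy, dx)    # heading in radians

pose, heading = robot_pose((2.0, 1.0), (1.0, 1.0))
```

For the left-right spot arrangement the text also mentions, the heading would instead be taken perpendicular to this line.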
In an optional embodiment, the positioning apparatus 20 may further include a wireless communication unit, which may be configured to transmit the pose information of the mobile robot 10 on the floor 31 in the room determined by the data processing unit 22 to the mobile robot 10, so that the mobile robot 10 applies the pose information of itself to autonomous navigation.
The invention provides an indoor positioning system comprising a mobile robot 10 and a positioning device 20. The mobile robot 10 is configured to move on the floor in a room, and the transmitter 11 on the mobile robot 10 emits a light beam toward the ceiling 32 to form a first light spot A1 and a second light spot A2 that differ in size and/or shape. The image acquisition unit 21 of the positioning device 20 is mounted obliquely upward on the indoor wall 33 and acquires the contour image of the ceiling 32 together with the images of the first light spot A1 and the second light spot A2. The data processing unit 22 of the positioning device 20 determines the pose of the mobile robot 10 on the indoor floor 31 from the contour image of the ceiling 32 and the images of the two light spots, so that when the pose of the mobile robot 10 on the indoor floor 31 is monitored, the influence of occlusion of the light path from the ceiling 32 to the image acquisition unit 21 by objects such as indoor furniture and electrical appliances is greatly reduced, and the accuracy of positioning the mobile robot 10 on the indoor floor 31 is improved.
An embodiment of the present invention further provides an indoor positioning method, which is described in detail with reference to fig. 1 to 8 for the above-mentioned indoor positioning system, and the indoor positioning method includes:
when the mobile robot 10 moves on the floor 31 in the room, the transmitter 11 provided in the mobile robot 10 emits a light beam toward the ceiling 32 in the room, so that the light beam emitted from the transmitter 11 forms the first light spot a1 and the second light spot a2 having different sizes and/or shapes on the ceiling 32 in the room.
The image pickup unit 21 provided obliquely upward on the wall 33 in the room picks up at least a partial outline image of the ceiling 32 and images of the first spot a1 and the second spot a 2.
The data processing unit 22, which is communicatively connected to the image capturing unit 21, determines the pose of the mobile robot 10 on the floor 31 in the room from the at least partial contour image of the ceiling 32 and the images of the first spot a1 and the second spot a 2.
The mobile robot 10 moves on the floor 31 in the room; the moving mechanism that drives the mobile robot 10 may comprise two driving wheels and one universal wheel provided on the mobile robot 10, or may instead use tracks, Mecanum wheels, and the like.
The mobile robot 10 may be provided with two transmitters 11, one emitting a light beam that forms the first light spot a1 on the ceiling 32 and the other emitting a light beam that forms the second light spot a2. The mobile robot 10 may also be provided with a single transmitter 11 whose light beam forms both the first light spot a1 and the second light spot a2 on the ceiling 32. It should be noted that, to the human eye, the first light spot a1 and the second light spot a2 may appear as two independent, spaced-apart light spots as shown in fig. 3, or as the two main components of a single overall light spot as shown in fig. 4.
Since the first light spot a1 and the second light spot a2 serve to indicate the posture of the mobile robot 10 on the floor 31, and thus its orientation, they are two light spots differing in size and/or shape: for example, the same size but different shapes, the same shape but different sizes, or both different shapes and different sizes.
The image acquisition unit 21 is in communication connection with the data processing unit 22, and in practical applications, the image acquisition unit 21 and the data processing unit 22 may be integrated together to form the positioning device 20, and based on this, the image acquisition unit 21 and the data processing unit 22 may be in communication connection through a conductive circuit; the image acquisition unit 21 and the data processing unit 22 may also be disposed in a decentralized manner to form the positioning device 20, and based on this, the image acquisition unit 21 and the data processing unit 22 may be in communication connection based on technologies such as bluetooth, wifi, zigbee, and the like.
The image acquisition unit 21 is mounted obliquely upward on a wall 33 in the room so as to acquire at least a partial contour image of the ceiling 32 and the images of the first light spot a1 and the second light spot a2. Mounting the image acquisition unit 21 obliquely upward on the indoor wall 33 reduces obstruction of its line of sight by indoor furniture, electrical appliances, and the like, and enlarges its field of view so that a wider-range contour image of the ceiling 32 can be captured. In practical applications, by selecting an image acquisition unit 21 with a suitable viewing angle and adjusting its oblique mounting angle, the entire rectangular ceiling 32 can be brought into the field of view of the image acquisition unit 21, i.e., the image acquisition unit 21 can acquire the complete contour image of the rectangular ceiling 32.
In the embodiment of the present invention, in order to reduce interference from ambient light such as sunlight and indoor lighting with the images collected by the image acquisition unit 21, the image acquisition unit 21 is preferably an infrared image acquisition unit, for example an infrared camera, and the transmitter 11 is correspondingly an infrared laser transmitter. In order to outline at least part of the ceiling 32, the positioning device 20 further comprises at least three contour markers 23 provided on the ceiling 32 of the room. Typically, the ceiling 32 is rectangular as shown in fig. 1; in that case the number of contour markers 23 may be four, with one installed at each of the four corners of the rectangular ceiling 32. When the image acquisition unit 21 is an infrared camera, each contour marker 23 is preferably an infrared point-like LED lamp so that the infrared camera can image it clearly; of course, a contour marker 23 may also be a patch that emits relatively strong infrared light.
The data processing unit 22 determines the at least partial contour image of the ceiling 32 from the images of the at least three contour markers 23. It should be noted that the area of the floor 31 reachable by the mobile robot 10 maps onto the ceiling 32 within the region enclosed by the lines connecting adjacent contour markers 23, so that as the mobile robot 10 moves on the floor 31 of the room, the image acquisition unit 21 can at all times capture the images of the first light spot a1 and the second light spot a2 formed on the ceiling 32 by the light beam emitted by the transmitter 11.
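As a sketch of the geometric relationship above, whether a given point (for example a corrected light-spot position) lies within the ceiling region enclosed by the lines between adjacent markers 23 can be checked with a standard ray-casting point-in-polygon test; the marker coordinates below are purely hypothetical:

```python
def point_in_polygon(pt, poly):
    """Ray-casting test: True if pt lies inside the simple polygon poly,
    whose vertices are given in boundary order."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Does this edge cross the horizontal ray extending rightward from pt?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical marker positions at the corners of a rectangular ceiling,
# listed in boundary order, and a light-spot position to check:
markers = [(0.0, 0.0), (10.0, 0.0), (10.0, 8.0), (0.0, 8.0)]
spot_inside = point_in_polygon((5.0, 4.0), markers)  # True
```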
In an alternative embodiment, the image acquisition unit 21 may instead be an ordinary camera for capturing visible light, with the transmitter 11 being a visible-light laser transmitter. The contour markers 23 may then be special patterns; after the image acquisition unit 21 acquires images of the special patterns, the data processing unit 22 identifies them using image recognition technology and thereby determines the at least partial contour image of the ceiling 32.
The data processing unit 22 determines the pose of the mobile robot 10 on the floor 31 in the room from the at least partial contour image of the ceiling 32 and the images of the first spot a1 and the second spot a 2. In an alternative embodiment, the data processing unit 22 includes a printed circuit board and a number of electronic and computing components (e.g., computer memory and computer processing chips, etc.) carried by the printed circuit board.
For example, due to perspective deformation, the image acquired by the image acquisition unit 21 is as shown in fig. 5: the rectangular ceiling 32 forms an irregular quadrilateral contour in the image, and the first light spot a1 and the second light spot a2 are likewise deformed. Therefore, in order to improve the accuracy of the pose of the mobile robot 10 on the floor 31, the data processing unit 22 may first correct the perspective-deformed images of the first light spot a1 and the second light spot a2 (see the corrected image shown in fig. 6), and then determine the pose of the mobile robot 10 on the floor 31 from the corrected images of the first light spot a1 and the second light spot a2. Specifically, the data processing unit 22 executes step S1, step S2, and step S3 as shown in fig. 7.
Step S1 includes: a perspective deformation correction parameter for at least a part of the contour image of the ceiling 32 is obtained.
In an alternative embodiment, a two-dimensional coordinate system may be established in advance. Taking fig. 5 and fig. 6 as an example, the coordinates of the four vertices of the quadrilateral in fig. 5 and the coordinates of the four vertices of the rectangle in fig. 6 are first obtained; from these two sets of coordinate points, the transformation matrix of the perspective transformation, i.e., the correction parameter, can be calculated. For example, this can be implemented using the getPerspectiveTransform and warpPerspective functions in OpenCV.
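As a sketch of this step, the 3×3 perspective (homography) matrix can be solved directly from the four vertex correspondences with plain linear algebra — a pure-NumPy stand-in for OpenCV's getPerspectiveTransform; all coordinates below are hypothetical:

```python
import numpy as np

def perspective_transform(src_pts, dst_pts):
    """Solve for the 3x3 homography H mapping 4 source points to 4 destination
    points (h33 normalized to 1), as getPerspectiveTransform does."""
    A, b = [], []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b += [u, v]
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_h(H, pt):
    """Apply the homography to a single point (homogeneous divide)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Hypothetical vertices: deformed quadrilateral (fig. 5) -> rectangle (fig. 6).
quad = [(100, 50), (540, 50), (620, 430), (20, 430)]
rect = [(0, 0), (640, 0), (640, 480), (0, 480)]
H = perspective_transform(quad, rect)
# A spot position observed inside the quadrilateral can now be corrected:
corrected = apply_h(H, (320, 240))
```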
Step S2 includes: the images of the first light spot a1 and the second light spot a2 are corrected according to the correction parameters.
In the embodiment of the invention, after the correction parameters are obtained, the image of the first light spot a1 and the image of the second light spot a2 are corrected by applying the inverse of the perspective transformation given by the correction parameters.
Step S3 includes: the pose of the mobile robot 10 on the floor 31 in the room is determined from the poses of the images of the first spot a1 and the second spot a2 on the at least partial contour image of the ceiling 32.
In the embodiment of the present invention, step S3 executed by the data processing unit 22 may include step S31, step S32, and step S33 as shown in fig. 8, which are described below taking fig. 9 as an example.
Step S31 includes: the center C11 of the first minimum circumscribed circle C1 and the center C21 of the second minimum circumscribed circle C2 are determined according to the first minimum circumscribed circle C1 corresponding to the image of the first spot A1 and the second minimum circumscribed circle C2 corresponding to the image of the second spot A2.
For example, the minEnclosingCircle function in OpenCV may be used to obtain the first minimum circumscribed circle C1 of the image of the first light spot a1 and the second minimum circumscribed circle C2 of the image of the second light spot a2.
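Where OpenCV is not available, a brute-force stand-in for minEnclosingCircle can look like the sketch below — exact, O(n³), and adequate for the small pixel set of a spot contour; the spot pixels used are hypothetical:

```python
import itertools
import math

def _circle_from2(a, b):
    # Smallest circle with segment a-b as its diameter.
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2, math.dist(a, b) / 2)

def _circle_from3(a, b, c):
    # Circumcircle of triangle a-b-c; None if the points are (near-)collinear.
    d = 2 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]))
    if abs(d) < 1e-12:
        return None
    ux = ((a[0]**2 + a[1]**2) * (b[1] - c[1]) + (b[0]**2 + b[1]**2) * (c[1] - a[1])
          + (c[0]**2 + c[1]**2) * (a[1] - b[1])) / d
    uy = ((a[0]**2 + a[1]**2) * (c[0] - b[0]) + (b[0]**2 + b[1]**2) * (a[0] - c[0])
          + (c[0]**2 + c[1]**2) * (b[0] - a[0])) / d
    return (ux, uy, math.dist((ux, uy), a))

def _covers(circle, pts, eps=1e-9):
    cx0, cy0, r0 = circle
    return all(math.dist((cx0, cy0), p) <= r0 + eps for p in pts)

def min_enclosing_circle(pts):
    """Exact minimum enclosing circle by brute force: the optimum is determined
    by two or three of the points, so try every pair and every triple."""
    best = None
    for pair in itertools.combinations(pts, 2):
        c = _circle_from2(*pair)
        if _covers(c, pts) and (best is None or c[2] < best[2]):
            best = c
    for triple in itertools.combinations(pts, 3):
        c = _circle_from3(*triple)
        if c is not None and _covers(c, pts) and (best is None or c[2] < best[2]):
            best = c
    return best

# Hypothetical light-spot pixels forming the unit square:
cx, cy, r = min_enclosing_circle([(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)])
# -> center (0.5, 0.5), radius sqrt(2)/2
```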
Step S32 includes: the position of the mobile robot 10 on the floor 31 in the room is determined based on the position of the center C11 of the first minimum circumscribed circle C1 and/or the position of the center C21 of the second minimum circumscribed circle C2.
For example, the position of the center C11 of the first minimum circumscribed circle C1 may be taken as the position of the mobile robot 10 on the floor 31 in the room; alternatively, the position of the center C21 of the second minimum circumscribed circle C2 may be taken as that position; or a point between the center C11 of the first minimum circumscribed circle C1 and the center C21 of the second minimum circumscribed circle C2 may be taken as the position of the mobile robot 10 on the floor 31 in the room.
Step S33 includes: the posture of the mobile robot 10 on the floor 31 in the room is determined according to the direction of the line connecting the center C11 of the first minimum circumscribed circle C1 and the center C21 of the second minimum circumscribed circle C2.
For example, when the first light spot a1 and the second light spot a2 correspond to the front-rear direction of the mobile robot 10, i.e., the first light spot a1 is toward the front of the mobile robot 10 and the second light spot a2 is toward the rear, the direction of the line from the center C21 to the center C11 is taken as the posture of the mobile robot 10 on the indoor floor 31. Conversely, when the second light spot a2 is toward the front of the mobile robot 10 and the first light spot a1 is toward the rear, the direction of the line from the center C11 to the center C21 is taken as the posture. When the first light spot a1 and the second light spot a2 correspond to the left-right direction of the mobile robot 10, the direction perpendicular to the line from the center C21 to the center C11 is taken as the posture of the mobile robot 10 on the indoor floor 31.
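A minimal sketch of how position and posture follow from the two circle centers, assuming the first spot marks the front of the robot and the position is taken as the midpoint (one of the options in step S32); the coordinates are hypothetical:

```python
import math

def pose_from_spots(front_center, rear_center):
    """Position = midpoint of the two circle centers; posture = direction of
    the line from the rear-spot center to the front-spot center, in degrees."""
    x = (front_center[0] + rear_center[0]) / 2
    y = (front_center[1] + rear_center[1]) / 2
    heading = math.degrees(math.atan2(front_center[1] - rear_center[1],
                                      front_center[0] - rear_center[0]))
    return x, y, heading

# Hypothetical corrected circle centers C11 (front spot) and C21 (rear spot):
pose = pose_from_spots((4.0, 3.0), (2.0, 1.0))  # midpoint (3.0, 2.0), heading ~45 deg
```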
In an alternative embodiment, the positioning apparatus 20 may further include a wireless communication unit that transmits the pose information of the mobile robot 10 on the floor 31, as determined by the data processing unit 22, to the mobile robot 10, so that the mobile robot 10 can apply its own pose information to autonomous navigation.
According to the indoor positioning method provided by the embodiment of the invention, when the mobile robot 10 moves on the indoor floor 31, the transmitter 11 provided on the mobile robot 10 emits a light beam toward the indoor ceiling 32, so that the light beam forms the first light spot a1 and the second light spot a2, which differ in size and/or shape, on the indoor ceiling 32; the image acquisition unit 21, mounted obliquely upward on the wall 33 of the room, acquires at least a partial contour image of the ceiling 32 and the images of the first light spot a1 and the second light spot a2; and the data processing unit 22, communicatively connected to the image acquisition unit 21, determines the pose of the mobile robot 10 on the indoor floor 31 from the at least partial contour image of the ceiling 32 and the images of the first light spot a1 and the second light spot a2. Because the light path runs from the transmitter 11 up to the ceiling 32 and then to the image acquisition unit 21, the likelihood of it being blocked by objects such as indoor furniture and electrical appliances is greatly reduced, which improves the accuracy of positioning the mobile robot 10 on the indoor floor 31.
When the mobile robot 10 is a floor-sweeping robot, a floor-mopping robot, or a vacuum robot, it cleans the floor 31 as it moves on the indoor floor 31 in practical applications, and the coverage rate of its cleaning of the floor 31 within a given time period needs to be tested.
In an alternative embodiment, an indoor coverage test system includes the indoor positioning system described above, and its data processing unit 22 is further configured to determine the indoor coverage of the mobile robot 10 as the ratio of the set of pixels onto which the actual area already cleaned by the mobile robot 10 is mapped on the ceiling 32 to the set of pixels onto which the effective area cleanable by the mobile robot 10 is mapped on the ceiling 32.
In practical applications, objects such as furniture and electrical appliances are often placed on the floor 31 of a room. Taking the plan view of the room shown in fig. 10 as an example, the effective area cleanable by the mobile robot 10 is calculated by excluding the area occupied on the floor 31 by such objects. The effective area may be obtained by manual measurement and written, in the form of constant parameters, into the program executed by the data processing unit 22.
Since the mobile robot 10 still leaves room for optimization in cleaning path planning, navigation, and so on, it generally cannot cover the effective area one hundred percent; the actual area it has cleaned is therefore smaller than the effective area.
The image acquisition unit 21 acquires the contour image of the ceiling 32, and the data processing unit 22 determines the effective area from it, obtaining the set of pixels onto which the effective area is mapped on the ceiling 32. The data processing unit 22 may use a pixel-statistics method from the field of image processing: the area cleaned by the mobile robot 10 is stored in the form of pixels, and at each sampling of the ceiling 32 the set of pixels corresponding to the currently cleaned area is counted, yielding a statistical map in three-dimensional coordinates in which the plane of the X-axis and Y-axis is parallel to the floor 31 and the Z-axis is the cleaning frequency. It is easy to understand that, since the mobile robot 10 cleans some areas more than once while moving on the floor 31, the pixels in those areas are counted repeatedly during sampling; the cleaning frequency can therefore be understood as the number of times a pixel has been sampled. The data processing unit 22 counts the pixels whose Z-axis value is non-zero to obtain the set of pixels onto which the actual area cleaned by the mobile robot 10 is mapped on the ceiling 32; the sum of the areas corresponding to this pixel set is the actual area cleaned by the mobile robot 10.
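The pixel-statistics bookkeeping described above can be sketched as follows; the image resolution and the sampled pixel sets are hypothetical:

```python
import numpy as np

H_PIX, W_PIX = 480, 640                    # hypothetical ceiling-image resolution
freq = np.zeros((H_PIX, W_PIX), np.int32)  # Z-axis of the map: cleaning frequency

def record_sample(cleaned_pixels_now):
    """At each sampling of the ceiling, increment the frequency of every pixel
    currently covered by the robot's cleaned footprint."""
    for x, y in cleaned_pixels_now:
        freq[y, x] += 1

# Two hypothetical samples; pixels (11, 10) and (12, 10) are cleaned twice.
record_sample([(10, 10), (11, 10), (12, 10)])
record_sample([(11, 10), (12, 10), (13, 10)])

effective_pixels = H_PIX * W_PIX             # in practice: pixel count of the effective area
actual_pixels = int(np.count_nonzero(freq))  # pixels with non-zero cleaning frequency
coverage = actual_pixels / effective_pixels
```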
In an alternative embodiment, from the pose information of the images of the first light spot a1 and the second light spot a2 in the contour image of the ceiling 32, the data processing unit 22 can determine the pose of the mobile robot 10 on the floor 31 and calculate the coordinates of the two end points of the robot's middle sweeping brush 12 as shown in fig. 11. For example, at sampling time t1 the two ends of the middle sweeping brush 12 are at positions R and T, and at sampling time t2 they are at positions J and K; the quadrilateral enclosed by the four points R, T, J, K can then be taken approximately as the area cleaned by the middle sweeping brush 12 during the interval from t1 to t2. By analogy, the area cleaned by the middle sweeping brush 12 over a certain time span is obtained by accumulation, and the ratio of this area to the effective area cleanable by the mobile robot 10 is taken as the indoor coverage of the mobile robot 10.
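The per-interval quadrilateral area can be computed with the shoelace formula; the end-point coordinates and the R→T→K→J vertex ordering below are assumptions for illustration:

```python
def shoelace_area(vertices):
    """Area of a simple polygon whose vertices are given in boundary order."""
    s = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# Hypothetical brush end points: R, T at time t1 and J, K at time t2.
# Taken in boundary order R -> T -> K -> J they enclose the swept quadrilateral.
R, T = (0.0, 0.0), (2.0, 0.0)
J, K = (0.0, 1.0), (2.0, 1.0)
swept = shoelace_area([R, T, K, J])  # 2.0 square units for this interval

total_cleaned = 0.0
total_cleaned += swept  # accumulated over successive sampling intervals
```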
In an alternative embodiment, the light beam emitted by the transmitter 11 on the mobile robot 10 forms a single light spot on the ceiling of the room; the image acquisition unit 21 acquires the contour image of the ceiling 32 and the image of the single light spot, and the data processing unit 22 determines the indoor coverage of the mobile robot 10 from these using the pixel-statistics method described above.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example" or "an alternative embodiment," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The above-described embodiments do not limit the scope of the present invention. Any modification, equivalent replacement, and improvement made within the spirit and principle of the above-described embodiments should be included in the protection scope of the technical solution.
Claims (10)
1. An indoor positioning system, the system comprising:
a mobile robot configured to move on a floor in a room, the mobile robot being provided with a transmitter configured to transmit a light beam toward a ceiling in the room, the light beam transmitted by the transmitter forming a first light spot and a second light spot different in size and/or shape on the ceiling in the room;
a positioning device, comprising:
the image acquisition unit is arranged on a wall in a room in an inclined upward mode so as to acquire at least a partial outline image of a ceiling and images of the first light spot and the second light spot;
a data processing unit communicatively connected to the image acquisition unit, configured to:
obtaining a perspective deformation correction parameter of at least part of the contour image of the ceiling;
correcting the images of the first light spot and the second light spot according to the correction parameters;
determining the pose of the mobile robot on the indoor floor according to the poses of the images of the first light spot and the second light spot on at least partial contour images of the ceiling;
the data processing unit is configured to:
determining the circle center of the first minimum circumscribed circle and the circle center of the second minimum circumscribed circle according to the first minimum circumscribed circle corresponding to the image of the first light spot and the second minimum circumscribed circle corresponding to the image of the second light spot;
determining the position of the mobile robot on the indoor floor according to the position of the center of the first minimum circumscribed circle and/or the position of the center of the second minimum circumscribed circle;
and determining the posture of the mobile robot on the indoor floor according to the connecting line direction between the circle center of the first minimum circumscribed circle and the circle center of the second minimum circumscribed circle.
2. The system of claim 1, wherein the transmitter is an infrared laser transmitter and the image capture unit is an infrared image capture unit.
3. The system of claim 1 or 2, wherein the positioning device further comprises at least three contour markers configured to be provided on a ceiling within the room; the image acquisition unit is configured to acquire images of at least three of the contour markers; and the data processing unit is configured to determine at least a partial contour image of the ceiling from the images of at least three of the contour markers.
4. The system of claim 3, wherein the contour markers are infrared point-like LED lamps.
5. An indoor positioning method, characterized in that the method comprises:
when the mobile robot moves on the floor in a room, a transmitter arranged on the mobile robot transmits a light beam towards the ceiling in the room, so that the light beam transmitted by the transmitter forms a first light spot and a second light spot which are different in size and/or shape on the ceiling in the room;
an image acquisition unit which is obliquely arranged on the indoor wall acquires at least part of contour images of the ceiling and images of the first light spot and the second light spot;
the data processing unit which is in communication connection with the image acquisition unit obtains perspective deformation correction parameters of at least part of the outline image of the ceiling; correcting the images of the first light spot and the second light spot according to the correction parameters; determining the pose of the mobile robot on the indoor floor according to the poses of the images of the first light spot and the second light spot on at least partial contour images of the ceiling;
the data processing unit determines the circle center of the first minimum circumscribed circle and the circle center of the second minimum circumscribed circle according to the first minimum circumscribed circle corresponding to the image of the first light spot and the second minimum circumscribed circle corresponding to the image of the second light spot; determining the position of the mobile robot on the indoor floor according to the position of the center of the first minimum circumscribed circle and/or the position of the center of the second minimum circumscribed circle; and determining the posture of the mobile robot on the indoor floor according to the connecting line direction between the circle center of the first minimum circumscribed circle and the circle center of the second minimum circumscribed circle.
6. The method of claim 5, wherein the transmitter is an infrared laser transmitter and the image capture unit is an infrared image capture unit.
7. The method according to claim 5 or 6, wherein the image acquisition unit acquires images of at least three contour markers provided on a ceiling in the room; and the data processing unit determines at least a partial contour image of the ceiling from the images of at least three of the contour markers.
8. The method of claim 7, wherein the contour markers are infrared point-like LED lamps.
9. An indoor coverage test system, comprising the system of any one of claims 1-4.
10. A method for testing indoor coverage, comprising the method of any one of claims 5-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710961571.5A CN107843258B (en) | 2017-10-17 | 2017-10-17 | Indoor positioning system and method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107843258A CN107843258A (en) | 2018-03-27 |
CN107843258B true CN107843258B (en) | 2020-01-17 |
Family
ID=61661383
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710961571.5A Active CN107843258B (en) | 2017-10-17 | 2017-10-17 | Indoor positioning system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107843258B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109101900B (en) * | 2018-07-23 | 2021-10-01 | 北京旷视科技有限公司 | Method and device for determining object distribution information and electronic equipment |
CN108937742A (en) * | 2018-09-06 | 2018-12-07 | 苏州领贝智能科技有限公司 | A kind of the gyroscope angle modification method and sweeper of sweeper |
CN112468736A (en) * | 2020-10-26 | 2021-03-09 | 珠海市一微半导体有限公司 | Ceiling vision robot capable of intelligently supplementing light and control method thereof |
CN113096179B (en) * | 2021-03-09 | 2024-04-02 | 杭州电子科技大学 | Coverage rate detection method of sweeping robot based on visual positioning |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20070109592A (en) * | 2006-05-12 | 2007-11-15 | 주식회사 한울로보틱스 | Localization system and the method of the mobile robot using the charging station |
CN104487864A (en) * | 2012-08-27 | 2015-04-01 | 伊莱克斯公司 | Robot positioning system |
CN105856227A (en) * | 2016-04-18 | 2016-08-17 | 呼洪强 | Robot vision navigation technology based on feature recognition |
CN105865438A (en) * | 2015-01-22 | 2016-08-17 | 青岛通产软件科技有限公司 | Autonomous precise positioning system based on machine vision for indoor mobile robots |
CN106681510A (en) * | 2016-12-30 | 2017-05-17 | 光速视觉(北京)科技有限公司 | Posture identification device, virtual reality display device and virtual reality system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107843258B (en) | Indoor positioning system and method | |
JP6585262B2 (en) | System and method for camera position and orientation measurement | |
EP3104194B1 (en) | Robot positioning system | |
CN103353758B (en) | A kind of Indoor Robot navigation method | |
CN105631390B (en) | Method and system for spatial finger positioning | |
US9046360B2 (en) | System and method of acquiring three dimensional coordinates using multiple coordinate measurement devices | |
US20220161430A1 (en) | Recharging Control Method of Desktop Robot | |
Gschwandtner et al. | Infrared camera calibration for dense depth map construction | |
CN207115193U (en) | A kind of mobile electronic device for being used to handle the task of mission area | |
WO2019012770A1 (en) | Imaging device and monitoring device | |
US10346995B1 (en) | Remote distance estimation system and method | |
US20230270309A1 (en) | Linear laser beam-based method and device for obstacle avoidance | |
US20210291376A1 (en) | System and method for three-dimensional calibration of a vision system | |
CN106537185B (en) | Device for detecting obstacles by means of intersecting planes and detection method using said device | |
CN108459597A (en) | A kind of mobile electronic device and method for handling the task of mission area | |
KR100749923B1 (en) | Localization system of mobile robot based on camera and landmarks and method there of | |
US10038895B2 (en) | Image capture device calibration | |
CN113848944A (en) | Map construction method and device, robot and storage medium | |
WO2018043524A1 (en) | Robot system, robot system control device, and robot system control method | |
CN110274594A (en) | A kind of indoor positioning device and method | |
JP2017083663A (en) | Coincidence evaluation device and coincidence evaluation method | |
CN116592899B (en) | Pose measurement system based on modularized infrared targets | |
CN110648362A (en) | Binocular stereo vision badminton positioning identification and posture calculation method | |
CN110646231A (en) | Floor sweeping robot testing method and device | |
CN114370871A (en) | Close coupling optimization method for visible light positioning and laser radar inertial odometer |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CP01 | Change in the name or title of a patent holder |
Address after: 518110 Guangdong province Shenzhen city Longhua District Guanlan Street sightseeing road rich industrial zone Huiqing Technology Park building D Patentee after: Shenzhen flying mouse Power Technology Co., Ltd Address before: 518110 Guangdong province Shenzhen city Longhua District Guanlan Street sightseeing road rich industrial zone Huiqing Technology Park building D Patentee before: Shenzhen Xiluo Robot Co.,Ltd. |