CN114018268A - Indoor mobile robot navigation method - Google Patents
- Publication number: CN114018268A (application CN202111307887.5A)
- Authority: CN (China)
- Prior art keywords: target, target object, angle, mobile robot, image
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
- G01C21/206—Instruments for performing navigational calculations specially adapted for indoor navigation
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention provides a navigation method for an indoor mobile robot, comprising the following steps. Step 1: the robot body autonomously explores and photographs through an RGBD camera to obtain pictures. Step 2: the robot body performs target detection and identification on the pictures taken by the RGBD camera. Step 3: the robot body estimates the angle and distance of a target object from the result of target detection and identification. Step 4: the robot body formulates target object navigation rules from the estimated angle and distance of the target object. The invention requires no advance deployment and can adapt to different indoor environments.
Description
Technical Field
The invention relates to the technical field of robots, in particular to a navigation method of an indoor mobile robot.
Background
As the pace of work of modern office workers quickens and labor costs rise, robots will be used ever more widely in daily life. Unlike industrial robots deployed in fixed production environments, indoor mobile robots face diverse indoor layouts, so they need navigation capabilities that adapt to different indoor environments. The usual industry practice is to build an indoor map in advance and manually mark target positions to guide the robot's navigation, but this approach requires advance deployment and is time-consuming and labor-intensive.
Patent document CN107450540A discloses an indoor mobile robot navigation system and method based on infrared landmarks. A monocular vision recognition system captures images of the currently lit infrared landmark and computes, through image processing, the position of the indoor mobile robot relative to that landmark. The robot is driven toward the currently lit landmark according to this relative position. Once the robot enters the set area of the current landmark, the vision system sends it a stop-emitting instruction and sends a start-emitting instruction to the next landmark, then steers the robot toward that next landmark according to its position. These steps repeat until the robot reaches the set area of the last landmark, completing the movement. However, this method still has the drawback that it must be deployed in advance, which is time-consuming and labor-intensive.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide a navigation method of an indoor mobile robot.
The invention provides a navigation method of an indoor mobile robot, which comprises the following steps:
step 1: the robot body autonomously explores and photographs through an RGBD camera to obtain pictures;
step 2: the robot body performs target detection and identification on the pictures taken by the RGBD camera;
step 3: the robot body estimates the angle and distance of a target object from the result of target detection and identification;
step 4: the robot body formulates target object navigation rules from the estimated angle and distance of the target object.
Preferably, step 1 specifically comprises: the robot enters the room from a set starting point; navigation control rotates the robot body slowly from right to left through 180 degrees, and during the rotation the RGBD camera acquires image data once every set angle.
Preferably, the RGBD camera acquires a color image and a depth image of the current position, and records the current photographing angle of the robot body.
Preferably, the set angle is not greater than a horizontal viewing angle of the RGBD camera.
Preferably, the step 2 comprises the following steps:
step 2.1: each time image data is acquired, it is sent into a trained deep learning model for detection; the target object in the image is identified, and a target category, a probability value and a bounding box are output;
step 2.2: when targets of the same category are detected in two consecutively acquired images, judging whether the bounding box of the target object has been identified repeatedly;
step 2.3: precisely updating the angle of the target object according to the coordinate position of the center point of its bounding box in the image.
Preferably, in step 2.2, the horizontal viewing angle of the camera is set to D_H degrees and the photographing interval to D_int degrees;
the bounding box B1 of the target object on the currently acquired image has center point, length and width (x1, y1, w1, h1), and the bounding box B0 of the target object on the previously acquired image has center point, length and width (x0, y0, w0, h0);
B0 is shifted to the right by the pixel offset corresponding to a rotation of D_int degrees to form a new bounding box BM, and the IOU of B1 and BM is calculated;
if the IOU is greater than 0.4, the two detections are regarded as the same target; their probability values are compared, the detection with the higher probability value is kept as the actually detected target, and the detection with the lower probability value is discarded.
Preferably, in step 2.3, the center point of the bounding box of the target object is (Xc, Yc) in pixels, the image width is Width pixels, the horizontal viewing angle of the camera is D_H degrees, and the angle of the camera shooting position is D_C degrees; the angle D_T of the target is then calculated from the offset of the bounding-box center relative to the image center.
preferably, the step 3 comprises the following steps:
step 3.1: if the detection category of the target object is 'door', the length and width of the bounding box A on the color image are each expanded by a factor of 1.2 to obtain an expanded bounding box B on the color image;
bounding box A and expanded bounding box B are mapped onto the depth image to obtain bounding box C and expanded bounding box D, respectively;
the pixels on the depth image lying inside the expanded bounding box D but outside bounding box C are extracted, and the average of all these pixels is taken as the distance estimate of the target 'door';
step 3.2: if the detection category of the target object is not 'door', the corresponding bounding box E on the color image is mapped onto the depth image to obtain bounding box F;
all pixels inside bounding box F are extracted, the pixel values are sorted from small to large, and the average of the smallest 10% of values is taken as the distance estimate for the target object.
Preferably, the step 4 comprises the following steps:
step 4.1: recording all target objects identified at the current position, together with their corresponding angles and distances;
step 4.2: for a target object with detection category 'door', guiding the robot body by the angle and distance to navigate through the door into a new room, repeating steps 1-3 and 4.1 for the whole room, and returning to the shooting point of the previous room after completion;
step 4.3: after finishing the angle and distance estimation of all target objects in all rooms, returning to the starting point.
Preferably, the mobile robot comprises a robot body and an RGBD camera mounted horizontally on the front of the robot body, and the method is used on this basis.
Compared with the prior art, the invention has the following beneficial effects:
1. the invention does not need to be deployed in advance and can adapt to different indoor environments;
2. according to the invention, a three-dimensional point cloud map is not required to be established, so that the calculation amount consumed by establishing the map is reduced;
3. the invention can automatically search the target without manually marking the position of the target.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a schematic diagram of a robot body according to the present invention;
FIG. 2 is a tree branch diagram of the target recognition of the present invention;
FIG. 3 is a semantic tree map of the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but do not limit the invention in any way. It should be noted that various changes and modifications can be made by those skilled in the art without departing from the spirit of the invention, and all of these fall within the scope of the present invention.
Example 1:
the embodiment provides a navigation method of an indoor mobile robot, which comprises the following steps:
step 1: the robot body autonomously explores and photographs through the RGBD camera to obtain pictures. The robot enters the room from a set starting point, and navigation control rotates the robot body slowly from right to left through 180 degrees; during the rotation, the RGBD camera acquires image data once every set angle, collecting a color image and a depth image of the current position and recording the angle at which the robot body is currently photographing. The set angle is not greater than the horizontal viewing angle of the RGBD camera.
Step 2: the robot body carries out target detection and identification according to the photos shot by the RGBD camera, and the step 2 comprises the following steps:
step 2.1: each time image data is acquired, it is sent into a trained deep learning model for detection; the target object in the image is identified, and a target category, a probability value and a bounding box are output;
step 2.2: when targets of the same category are detected in two consecutively acquired images, it is judged whether the bounding box of the target object has been identified repeatedly. The horizontal viewing angle of the camera is set to D_H degrees and the photographing interval to D_int degrees. The bounding box B1 of the target object on the currently acquired image has center point, length and width (x1, y1, w1, h1), and the bounding box B0 of the target object on the previously acquired image has center point, length and width (x0, y0, w0, h0). B0 is shifted to the right by the pixel offset corresponding to a rotation of D_int degrees to form a new bounding box BM, and the IOU of B1 and BM is calculated; if the IOU is greater than 0.4, the two detections are regarded as the same target, their probability values are compared, the detection with the higher probability value is kept as the actually detected target, and the one with the lower probability value is discarded.
Step 2.3: the angle of the target object is updated precisely from the coordinate position of the center point of its bounding box in the image. The center point of the bounding box is (Xc, Yc) in pixels, the image width is Width pixels, the horizontal viewing angle of the camera is D_H degrees, and the angle of the camera shooting position is D_C degrees; the angle D_T of the target is then calculated from the offset of the bounding-box center relative to the image center.
Step 3: the robot body estimates the angle and distance of the target object from the result of target detection and identification; step 3 comprises the following steps:
step 3.1: if the detection category of the target object is 'door', the length and width of the bounding box A on the color image are each expanded by a factor of 1.2 to obtain an expanded bounding box B on the color image;
bounding box A and expanded bounding box B are mapped onto the depth image to obtain bounding box C and expanded bounding box D, respectively;
the pixels on the depth image lying inside the expanded bounding box D but outside bounding box C are extracted, and the average of all these pixels is taken as the distance estimate of the target 'door';
step 3.2: if the detection category of the target object is not 'door', the corresponding bounding box E on the color image is mapped onto the depth image to obtain bounding box F;
all pixels inside bounding box F are extracted, the pixel values are sorted from small to large, and the average of the smallest 10% of values is taken as the distance estimate for the target object.
Step 4: the robot body formulates target object navigation rules from the estimated angle and distance of the target object; step 4 comprises the following steps:
step 4.1: all target objects identified at the current position are recorded, together with their corresponding angles and distances;
step 4.2: for a target object with detection category 'door', the robot body is guided by the angle and distance to navigate through the door into a new room; steps 1-3 and 4.1 are repeated for the whole room, and afterwards the robot returns to the shooting point of the previous room;
step 4.3: after the angle and distance estimation of all target objects in all rooms is finished, the robot returns to the starting point.
The method runs on a mobile robot comprising a robot body and an RGBD camera mounted horizontally on the front of the robot body.
Example 2:
those skilled in the art will understand this embodiment as a more specific description of embodiment 1.
As shown in fig. 1 to 3, the navigation method for autonomous exploration of an unknown environment based on target recognition provided in this embodiment is built on a system composed of a mobile robot and an RGBD camera; the RGBD camera can acquire color and depth images and is mounted horizontally on the front of the robot.
The method comprises the following steps:
step 1: the robot autonomously explores and takes pictures;
step 2: target detection and identification;
step 3: estimation of the angle and distance of the target object;
step 4: target object navigation rules.
Wherein, step 1 includes the following steps:
step 1.1: the robot enters the room from a set starting point, and navigation control rotates the robot body slowly from right to left through 180 degrees; during the rotation, every D_int degrees (for example D_int = 30 degrees, though D_int should not exceed the horizontal viewing angle of the camera) the RGBD camera acquires data once, collecting a color image and a depth image of the current position and recording the angle at which the robot is currently photographing.
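The capture sweep of step 1.1 can be sketched as follows. This is a minimal illustration under assumed interfaces, not the patent's implementation: `robot` (a base that can rotate in place) and `camera` (an RGBD driver returning a color/depth pair) are hypothetical stand-ins.

```python
def capture_sweep(robot, camera, d_int=30, sweep=180):
    """Rotate from 0 to `sweep` degrees in steps of `d_int` degrees,
    capturing a color/depth pair and the shooting angle at each stop."""
    frames = []
    angle = 0
    while angle <= sweep:
        robot.rotate_to(angle)            # slow in-place rotation to the next stop
        color, depth = camera.capture()   # color image + depth image of current view
        frames.append({"angle": angle, "color": color, "depth": depth})
        angle += d_int
    return frames
```

With D_int = 30 degrees this yields seven shots (0, 30, ..., 180 degrees), matching the worked example later in this embodiment.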
Wherein, the step 2 comprises the following steps:
step 2.1: each acquired image is sent into a trained deep learning model, yoloV5, for detection; target objects in the image are identified, and a target category, a probability value and a bounding box are output. The model is trained with yoloV5 on a large number of pictures collected in indoor scenes, with the targets annotated (mainly common indoor objects such as doors, windows, counter basins, toilets, mirrors, shower heads and bathtubs);
step 2.2: when targets of the same category are detected in two consecutively acquired images, it is judged whether the bounding box of the target has been identified repeatedly. Let the horizontal field of view (HFOV) of the camera be D_H degrees and the photographing interval be D_int degrees. Let the bounding box B1 of the detection on the currently acquired image have center point, length and width (x1, y1, w1, h1), and the bounding box B0 of the detection on the previously acquired image have center point, length and width (x0, y0, w0, h0). B0 is shifted to the right by the pixel offset corresponding to a rotation of D_int degrees to form a new bounding box BM, and the IOU of B1 and BM is calculated. If the IOU is greater than 0.4, the two detections are regarded as the same target; their probability values are compared, the detection with the higher probability value is kept as the actually detected target, and the one with the lower probability value is discarded;
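The duplicate check of step 2.2 can be sketched as below. The pixel shift applied to B0 is written here as Width * D_int / D_H, a linear approximation assumed for illustration (the patent text does not reproduce its exact shift formula); the 0.4 IOU threshold follows the text.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (cx, cy, w, h)."""
    ax1, ay1, ax2, ay2 = a[0] - a[2] / 2, a[1] - a[3] / 2, a[0] + a[2] / 2, a[1] + a[3] / 2
    bx1, by1, bx2, by2 = b[0] - b[2] / 2, b[1] - b[3] / 2, b[0] + b[2] / 2, b[1] + b[3] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # intersection width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # intersection height
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def is_same_target(b0, b1, width, d_h, d_int, thr=0.4):
    """Shift the previous box b0 right by the pixels the scene moved during a
    d_int-degree rotation, then test its overlap with the new detection b1."""
    shift = width * d_int / d_h          # assumed linear pixel shift for the rotation
    moved = (b0[0] + shift, b0[1], b0[2], b0[3])
    return iou(moved, b1) > thr
```

When two detections pass this test, the one with the higher probability value is kept, as step 2.2 prescribes.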
step 2.3: the angle D_T of the target is updated precisely from the coordinate position of the center point of the target's bounding box in the image. Let the center point of the bounding box be (Xc, Yc) in pixels, the image width be Width pixels, the horizontal field of view (HFOV) of the camera be D_H degrees, and the angle of the camera shooting position be D_C degrees; D_T is then computed from the offset of the bounding-box center relative to the image center.
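Since the angle formula itself is not reproduced in the text, the sketch below uses a plausible reconstruction from the quantities step 2.3 defines: linear interpolation of the bounding-box center across the horizontal field of view, with angles increasing to the left to match the right-to-left sweep. Treat it as an assumption, not the patent's exact expression.

```python
def target_angle(xc, width, d_c, d_h):
    """Step 2.3 sketch: angle D_T of a target from the horizontal offset of
    its bounding-box center. xc: center x in pixels; width: image width in
    pixels; d_c: shooting-position angle in degrees; d_h: horizontal FOV in
    degrees. Angles increase to the left, matching the 0-to-180-degree sweep."""
    return d_c + (0.5 - xc / width) * d_h
```

A target centered in the image keeps the shooting angle; one toward the right edge is offset by up to half the field of view toward smaller angles.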
wherein, the step 3 comprises the following steps:
step 3.1: if the target detection category is 'door', the length and width of the bounding box A on the color image are each expanded by a factor of 1.2 to obtain an expanded bounding box B on the color image. Bounding box A and expanded bounding box B are mapped onto the depth image to obtain bounding box C and expanded bounding box D, respectively. The pixels on the depth image lying inside the expanded bounding box D but outside bounding box C are extracted, and the average of all these pixels is taken as the depth estimate of the target 'door';
step 3.2: if the target detection category is not 'door', the corresponding bounding box E on the color image is mapped onto the depth image to obtain bounding box F. All pixels inside bounding box F are extracted, the pixel values are sorted from small to large, and the average of the smallest 10% of values is taken as the distance estimate for the target.
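Steps 3.1 and 3.2 can be sketched with NumPy as follows. This is a minimal illustration assuming the depth image stores a per-pixel distance, boxes are (cx, cy, w, h) already in depth-image coordinates (i.e. the color-to-depth mapping has been applied), and `rect_mask` is a helper introduced here, not part of the patent.

```python
import numpy as np

def rect_mask(shape, cx, cy, w, h):
    """Boolean mask of the axis-aligned box centered at (cx, cy)."""
    ys, xs = np.ogrid[:shape[0], :shape[1]]
    return (np.abs(xs - cx) <= w / 2) & (np.abs(ys - cy) <= h / 2)

def door_distance(depth, box, scale=1.2):
    """Step 3.1: average depth of the ring between a door's box and its
    1.2x expansion, i.e. the pixels of the door frame itself."""
    cx, cy, w, h = box
    inner = rect_mask(depth.shape, cx, cy, w, h)
    outer = rect_mask(depth.shape, cx, cy, w * scale, h * scale)
    return float(depth[outer & ~inner].mean())

def object_distance(depth, box):
    """Step 3.2: mean of the nearest 10% of depth pixels inside the box."""
    cx, cy, w, h = box
    vals = np.sort(depth[rect_mask(depth.shape, cx, cy, w, h)].ravel())
    k = max(1, int(0.1 * vals.size))
    return float(vals[:k].mean())
```

The ring average is used for doors because pixels inside an open doorway belong to the next room, while the nearest-10% average suppresses background pixels inside an ordinary object's box.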
Wherein, the step 4 comprises the following steps:
step 4.1: all the target objects identified at the current position are recorded, together with their corresponding angles and distances;
step 4.2: for a target with detection category 'door', the robot is guided by the angle and distance to navigate through the door into a new room; the exploration process of the whole room is repeated starting from step 1, and after it finishes the robot returns to the shooting point of the previous room;
step 4.3: once the angle and distance estimation of all targets in all rooms is finished, the robot returns to the starting point.
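The rules of step 4 amount to a depth-first traversal over rooms connected by doors. Below is a sketch under assumed helper actions: `survey(room)` runs steps 1-3 and 4.1 for a room and returns a {target: (angle, distance)} mapping, `go_through` drives through a door and returns the new room, and `go_back` returns to the previous shooting point; all three names are hypothetical.

```python
def explore(room, survey, go_through, go_back):
    """Survey a room, recurse through each detected door (step 4.2), and
    return to the entry shooting point afterwards; collects every target's
    (angle, distance) across all rooms (steps 4.1-4.3)."""
    targets = survey(room)
    results = dict(targets)
    for name, (angle, dist) in targets.items():
        if name.startswith("door"):
            next_room = go_through(name, angle, dist)
            results.update(explore(next_room, survey, go_through, go_back))
            go_back(room)  # return to the shooting point of this room
    return results
```

Run on the bathroom layout of this embodiment, this visits the entry room, the right room through door 2 and the left room through door 3, then returns to the start.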
Taking the layout of a bathroom as an example (see fig. 3): after entering through the door, a mirror and a counter basin face the entrance, there is a door on each of the left and right sides, the right room contains a toilet, and the left room contains a shower head and a bathtub.
When the robot enters the room, at the position of door 1, it photographs every 30 degrees from 0 degrees to 180 degrees and calculates the angle and depth information of each target object: door 2 (25 degrees, 0.9 m), mirror (90 degrees, 1.5 m), counter basin (90 degrees, 1.5 m), door 3 (135 degrees, 0.9 m).
According to the rules, the navigation robot crosses "door 2" into the right room, explores it in the same way, and computes: toilet (60 degrees, 1.5 m). After the angles and distances of all targets there are calculated, it returns to the previous navigation point, i.e. the position of door 1.
According to the rules, the navigation robot then crosses "door 3" into the left room, explores it in the same way, and computes: shower head (115 degrees, 1.1 m) and bathtub (165 degrees, 0.8 m). After the angles and distances of all targets there are calculated, it returns to the previous navigation point, i.e. the position of door 1.
The invention discloses a navigation method for an indoor mobile robot, aimed at finding specific targets, such as a counter basin or a toilet, in an unfamiliar indoor environment. The invention requires no advance deployment and can adapt to different indoor environments.
In the description of the present application, it is to be understood that the terms "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like indicate orientations or positional relationships based on those shown in the drawings, and are only for convenience in describing the present application and simplifying the description, but do not indicate or imply that the referred device or element must have a specific orientation, be constructed in a specific orientation, and be operated, and thus, should not be construed as limiting the present application.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.
Claims (10)
1. A navigation method for an indoor mobile robot is characterized by comprising the following steps:
step 1: the robot body autonomously explores and photographs through an RGBD camera to obtain pictures;
step 2: the robot body performs target detection and identification on the pictures taken by the RGBD camera;
step 3: the robot body estimates the angle and distance of a target object from the result of target detection and identification;
step 4: the robot body formulates target object navigation rules from the estimated angle and distance of the target object.
2. The navigation method for the indoor mobile robot according to claim 1, wherein step 1 specifically comprises: the robot enters the room from a set starting point; navigation control rotates the robot body slowly from right to left through 180 degrees, and during the rotation the RGBD camera acquires image data once every set angle.
3. The navigation method for the indoor mobile robot according to claim 2, wherein the RGBD camera collects a color image and a depth image of a current position, and records a current photographing angle of the robot body.
4. The navigation method for the indoor mobile robot according to claim 2, wherein the set angle is not greater than a horizontal viewing angle of the RGBD camera.
5. The indoor mobile robot navigation method according to claim 1, wherein the step 2 includes the steps of:
step 2.1: each time image data is acquired, it is sent into a trained deep learning model for detection; the target object in the image is identified, and a target category, a probability value and a bounding box are output;
step 2.2: when targets of the same category are detected in two consecutively acquired images, judging whether the bounding box of the target object has been identified repeatedly;
step 2.3: precisely updating the angle of the target object according to the coordinate position of the center point of its bounding box in the image.
6. The navigation method for the indoor mobile robot according to claim 5, wherein in step 2.2 the horizontal viewing angle of the camera is set to D_H degrees and the photographing interval to D_int degrees;
the bounding box B1 of the target object on the currently acquired image has center point, length and width (x1, y1, w1, h1), and the bounding box B0 of the target object on the previously acquired image has center point, length and width (x0, y0, w0, h0);
B0 is shifted to the right by the pixel offset corresponding to a rotation of D_int degrees to form a new bounding box BM, and the IOU of B1 and BM is calculated;
if the IOU is greater than 0.4, the two detections are regarded as the same target; their probability values are compared, the detection with the higher probability value is kept as the actually detected target, and the detection with the lower probability value is discarded.
7. The method as claimed in claim 5, wherein in step 2.3 the center point of the bounding box of the target object is (Xc, Yc) in pixels, the image width is Width pixels, the horizontal viewing angle of the camera is D_H degrees, and the angle of the camera shooting position is D_C degrees; the angle D_T of the target is then calculated from the offset of the bounding-box center relative to the image center.
8. the indoor mobile robot navigation method according to claim 1, wherein the step 3 includes the steps of:
step 3.1: if the detection category of the target object is 'door', the length and width of the bounding box A on the color image are each expanded by a factor of 1.2 to obtain an expanded bounding box B on the color image;
bounding box A and expanded bounding box B are mapped onto the depth image to obtain bounding box C and expanded bounding box D, respectively;
the pixels on the depth image lying inside the expanded bounding box D but outside bounding box C are extracted, and the average of all these pixels is taken as the distance estimate of the target 'door';
step 3.2: if the detection category of the target object is not 'door', the corresponding bounding box E on the color image is mapped onto the depth image to obtain bounding box F;
all pixels inside bounding box F are extracted, the pixel values are sorted from small to large, and the average of the smallest 10% of values is taken as the distance estimate for the target object.
9. The indoor mobile robot navigation method according to claim 1, wherein the step 4 includes the steps of:
step 4.1: recording all target objects identified at the current position, together with their corresponding angles and distances;
step 4.2: for a target object with detection category 'door', guiding the robot body by the angle and distance to navigate through the door into a new room, repeating steps 1-3 and 4.1 for the whole room, and returning to the shooting point of the previous room after completion;
step 4.3: after finishing the angle and distance estimation of all target objects in all rooms, returning to the starting point.
10. The indoor mobile robot navigation method of claim 1, wherein the mobile robot comprises a robot body and an RGBD camera mounted horizontally on the front of the robot body.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111307887.5A CN114018268B (en) | 2021-11-05 | 2021-11-05 | Indoor mobile robot navigation method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111307887.5A CN114018268B (en) | 2021-11-05 | 2021-11-05 | Indoor mobile robot navigation method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114018268A true CN114018268A (en) | 2022-02-08 |
CN114018268B CN114018268B (en) | 2024-06-28 |
Family
ID=80061822
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111307887.5A Active CN114018268B (en) | 2021-11-05 | 2021-11-05 | Indoor mobile robot navigation method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114018268B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117333539A (en) * | 2023-10-09 | 2024-01-02 | 南京华麦机器人技术有限公司 | Mobile robot-oriented charging pile positioning method and device |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105717928A (en) * | 2016-04-26 | 2016-06-29 | 北京进化者机器人科技有限公司 | Vision-based robot navigation door-passing method |
CN105740910A (en) * | 2016-02-02 | 2016-07-06 | 北京格灵深瞳信息技术有限公司 | Vehicle object detection method and device |
CN108171748A (en) * | 2018-01-23 | 2018-06-15 | 哈工大机器人(合肥)国际创新研究院 | A kind of visual identity of object manipulator intelligent grabbing application and localization method |
CN109341689A (en) * | 2018-09-12 | 2019-02-15 | 北京工业大学 | Vision navigation method of mobile robot based on deep learning |
CN109998429A (en) * | 2018-01-05 | 2019-07-12 | 艾罗伯特公司 | Mobile clean robot artificial intelligence for context aware |
CN110136186A (en) * | 2019-05-10 | 2019-08-16 | 安徽工程大学 | A kind of detection target matching method for mobile robot object ranging |
US20210096579A1 (en) * | 2016-08-05 | 2021-04-01 | RobArt GmbH | Method For Controlling An Autonomous Mobile Robot |
CN113110513A (en) * | 2021-05-19 | 2021-07-13 | 哈尔滨理工大学 | ROS-based household arrangement mobile robot |
Non-Patent Citations (1)
Title |
---|
LI Xinzheng, YI Jianqiang, ZHAO Dongbin: "Vision-based method for improving robot positioning accuracy", Computer Measurement & Control, no. 06 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109506658B (en) | Robot autonomous positioning method and system | |
US9990726B2 (en) | Method of determining a position and orientation of a device associated with a capturing device for capturing at least one image | |
US7598976B2 (en) | Method and apparatus for a multisensor imaging and scene interpretation system to aid the visually impaired | |
EP3159125A1 (en) | Device for recognizing position of mobile robot by using direct tracking, and method therefor | |
EP3159122A1 (en) | Device and method for recognizing location of mobile robot by means of search-based correlation matching | |
EP3159126A1 (en) | Device and method for recognizing location of mobile robot by means of edge-based readjustment | |
CN112465960B (en) | Size calibration device and method for three-dimensional model | |
WO2018101247A1 (en) | Image recognition imaging apparatus | |
CN111046843B (en) | Monocular ranging method in intelligent driving environment | |
JP2013508874A (en) | Map generation and update method for mobile robot position recognition | |
JP4042517B2 (en) | Moving body and position detection device thereof | |
CN106599776B (en) | A kind of demographic method based on trajectory analysis | |
CN110136186B (en) | Detection target matching method for mobile robot target ranging | |
CN108544494A (en) | A kind of positioning device, method and robot based on inertia and visual signature | |
CN114018268A (en) | Indoor mobile robot navigation method | |
CN111160280B (en) | RGBD camera-based target object identification and positioning method and mobile robot | |
CN114445494A (en) | Image acquisition and processing method, image acquisition device and robot | |
CN111571561B (en) | Mobile robot | |
Gabaldon et al. | A framework for enhanced localization of marine mammals using auto-detected video and wearable sensor data fusion | |
KR100906991B1 (en) | Method for detecting invisible obstacle of robot | |
JP7375901B2 (en) | Display device, display method, program and image projection system | |
TWI771960B (en) | Indoor positioning and searching object method for intelligent unmanned vehicle system | |
CN115511970A (en) | Visual positioning method for autonomous parking | |
CN105631431B (en) | The aircraft region of interest that a kind of visible ray objective contour model is instructed surveys spectral method | |
CN114155258A (en) | Detection method for highway construction enclosed area |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||