CN113535877A - Intelligent robot map updating method, device, equipment, medium and chip - Google Patents


Info

Publication number: CN113535877A
Application number: CN202110806684.4A
Authority: CN (China)
Prior art keywords: intelligent robot, map, data, updating, height
Legal status: Granted (assumed; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Other versions: CN113535877B (en)
Inventors: 沈孝通, 王健威, 曹鹏, 秦宝星, 程昊天
Current Assignee: Shanghai Gaussian Automation Technology Development Co Ltd
Original Assignee: Shanghai Gaussian Automation Technology Development Co Ltd
Legal events: application filed by Shanghai Gaussian Automation Technology Development Co Ltd; priority to CN202110806684.4A; publication of CN113535877A; application granted; publication of CN113535877B; legal status Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29: Geographical information databases
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/23: Updating
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/25: Fusion techniques
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Remote Sensing (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Electromagnetism (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Multimedia (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a method, apparatus, device, medium and chip for updating an intelligent robot map. The intelligent robot is provided with multiple cameras, and the method comprises: acquiring the visual data collected by the multiple cameras; fusing the visual data to obtain the current environment data of the intelligent robot; and updating the current map according to the environment data. By updating the current map with environment data collected by the multiple cameras arranged on the intelligent robot, the map updating method provided by the embodiment of the invention improves the accuracy of map updating and thus the safety of the robot while driving.

Description

Intelligent robot map updating method, device, equipment, medium and chip
Technical Field
The embodiment of the invention relates to the technical field of intelligent robots, in particular to an updating method, device, equipment, medium and chip of an intelligent robot map.
Background
An intelligent mobile robot is a highly intelligent device that integrates environment perception, dynamic decision-making and planning, and behavior control and execution, and it is widely deployed in public places such as shopping malls, supermarkets and venues.
The robot perceives its surroundings through sensors; commonly used sensors include lidar, cameras, millimeter-wave radar and ultrasonic radar. Among these, the camera offers high precision and a wide sensing range, making it an indispensable sensor for the robot.
Disclosure of Invention
The invention provides a method, apparatus, device, medium and chip for updating an intelligent robot map, which improve the reliability of map updating and thereby the safety of the robot during operation.
In a first aspect, an embodiment of the present invention provides an updating method for a map of an intelligent robot, where the intelligent robot is provided with multiple cameras; the method comprises the following steps:
acquiring visual data acquired by the multiple cameras;
fusing the plurality of visual data to obtain the current environment data of the intelligent robot;
and updating the current map according to the environment data.
In a second aspect, an embodiment of the present invention further provides an updating apparatus for a map of an intelligent robot, where the intelligent robot is provided with multiple cameras; the device comprises:
the visual data acquisition module is used for acquiring the visual data acquired by the multiple cameras;
the environment data acquisition module is used for fusing the plurality of visual data to acquire the current environment data of the intelligent robot;
and the map updating module is used for updating the current map according to the environment data.
In a third aspect, an embodiment of the present invention further provides a computer device, including: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the intelligent robot map updating method according to the embodiment of the invention.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processing device, implements the method for updating the map of the intelligent robot according to the embodiment of the present invention.
In a fifth aspect, an embodiment of the present invention further provides a chip, including: at least one processor and an interface;
the interface is used for providing program instructions or data for the at least one processor;
the at least one processor is used for executing the program instructions to realize the updating method of the intelligent robot map according to the embodiment of the invention.
The embodiment of the invention discloses a method, apparatus, device, medium and chip for updating an intelligent robot map. The intelligent robot is provided with multiple cameras, and the method comprises: acquiring the visual data collected by the multiple cameras; fusing the visual data to obtain the current environment data of the intelligent robot; and updating the current map according to the environment data. By updating the current map with environment data collected by the multiple cameras arranged on the intelligent robot, the map updating method provided by the embodiment of the invention improves the accuracy of map updating and thus the safety of the robot while driving.
Drawings
Fig. 1 is a flowchart of a method for updating an intelligent robot map according to a first embodiment of the present invention;
Fig. 2 is a front view of an intelligent robot according to the first embodiment of the present invention;
Fig. 3 is a schematic plan view of the scanning ranges of a front head-up image collector, a front oblique image collector, and a rear head-up image collector of an intelligent robot according to the first embodiment of the present invention;
Fig. 4 is a schematic plan view of the scanning ranges of a front head-up image collector, a front oblique image collector, and a rear head-up image collector of an intelligent robot according to the first embodiment of the present invention;
Fig. 5 is a rear view of an intelligent robot according to the first embodiment of the present invention;
Fig. 6a is a schematic diagram of slicing a field-of-view range into layers according to the first embodiment of the present invention;
Fig. 6b is a schematic diagram of erasing an obstacle mark according to the first embodiment of the present invention;
Fig. 7 is a schematic structural diagram of an apparatus for updating an intelligent robot map according to a second embodiment of the present invention;
Fig. 8 is a schematic structural diagram of a computer device according to a third embodiment of the present invention;
Fig. 9 is a schematic structural diagram of a chip according to a fifth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of an update method for an intelligent robot map according to an embodiment of the present invention, where the present embodiment is applicable to a situation of updating a map in an intelligent robot, and the method may be executed by an update apparatus for an intelligent robot map.
The intelligent robot in this embodiment is provided with multiple cameras, which include at least one of: a front head-up camera, a front oblique-view camera, a left side-view camera, and a right side-view camera. Fig. 2 is a front view of an intelligent robot according to an embodiment of the present invention. As shown in Fig. 2, the intelligent robot includes a robot body 1, a front head-up camera 2, a front oblique-view camera 3, and side-view cameras 4 (a left side-view camera and a right side-view camera). Relative to the direction in which the intelligent robot advances, the front head-up camera 2 is arranged on the front side of the robot body 1 and is used to intelligently detect tall obstacles such as pedestrians and vehicles. The front oblique-view camera 3 is arranged on the robot body 1 below the front head-up camera 2, with its collecting end inclined downward. The front oblique-view camera 3 is used to detect short obstacles on the floor and can acquire image information at and below ground level, preventing the intelligent robot from falling; used together, the front head-up camera 2 and the front oblique-view camera 3 enhance the robot's perception of the environment ahead. A side-view camera 4 is arranged on each side of the robot body 1; the two side-view cameras 4 are placed symmetrically, with their collecting ends inclined downward and at an included angle to the collecting end of the front head-up camera 2. The two side-view cameras 4 strengthen the robot's lateral perception and ensure its safety when turning. The collection range of each side-view camera 4 intersects those of the front head-up camera 2 and the front oblique-view camera 3, so there is no detection blind area, and several cameras can collect information on the same obstacle simultaneously, which raises the confidence of that obstacle and improves the accuracy of map marking and path planning.
In the intelligent robot provided by this embodiment, the height of the front side of the robot body 1 is 800mm-1100mm, the height of the rear side is 850mm-1200mm, and the length of the bottom is 520mm-720mm. However, the front-side height, rear-side height and bottom length of the robot body 1 are not limited to these dimensions and are not specifically limited here.
In this embodiment, the front head-up camera 2 is arranged at the middle of the front side of the robot body 1, so that the detection range extends from the middle of the robot body 1 toward both sides, ensuring accurate detection of obstacles in front of the intelligent robot. The distance between the front head-up camera 2 and the bottom of the robot body 1 is 70%-90% of the front-side height of the robot body 1, which suits the detection of tall obstacles. Given the front-side height of the robot body 1 provided in this embodiment, the distance between the front head-up camera 2 and the bottom of the robot body 1 is 670mm-850mm, preferably 670mm, 680mm, 690mm, 700mm, 710mm, 720mm, 730mm, 740mm, 750mm, 760mm, 770mm, 780mm, 790mm, 800mm, 810mm, 820mm, 830mm, 840mm or 850mm, which is not specifically limited here.
Further, the collecting end of the front head-up camera 2 is inclined downward by 0-5 degrees, preferably 0, 1, 2, 3, 4 or 5 degrees, so as to cooperate better with the front oblique-view camera 3 and further improve the robot's perception of the environment ahead of its direction of travel. The placement of the front head-up camera 2, such as its height and angle, must be coordinated with the front oblique-view camera 3 and the side-view cameras 4 to eliminate detection blind areas.
The front head-up camera 2 in this embodiment is preferably a depth camera, which offers advantages such as high acquisition precision and a wide acquisition range. In this embodiment, the depth camera is preferably a "large white" camera with an infrared lens and an RGB lens, capable of acquiring infrared images, depth images and RGB images.
Fig. 3 and Fig. 4 are schematic plan views of the scanning ranges of the front head-up, front oblique and rear head-up image collectors of the intelligent robot according to an embodiment of the present invention. As shown in Figs. 3 and 4, the distance between each side-view camera 4 and the bottom of the robot body 1 is 75%-95% of the front-side height of the robot body 1, and the distance between the two side-view cameras 4 is 40%-60% of the bottom length of the robot body 1, so that the detection ranges of the side-view cameras 4 can cover the front side of the intelligent robot. Given the front-side height of the robot body 1 provided in this embodiment, the distance between each side-view camera 4 and the bottom of the robot body 1 is 713mm-903mm, preferably 720mm, 725mm, 730mm, 735mm, 740mm, 745mm, 750mm, 755mm, 760mm, 765mm, 770mm, 775mm, 780mm, 785mm, 790mm or 795mm. Given the bottom length of the robot body 1 provided in this embodiment, the distance between the two side-view cameras 4 is 248mm-372mm, preferably 280mm, 285mm, 290mm, 295mm, 300mm, 305mm, 310mm, 315mm, 320mm, 325mm or 330mm, which is not specifically limited here.
Furthermore, the included angle in the horizontal direction between the collecting end of each side-view camera 4 and the collecting end of the front head-up camera 2 is 25-35 degrees, and the collecting end of each side-view camera 4 is inclined downward by 5-15 degrees, so that the fields of view of the side-view cameras 4, the front head-up camera 2 and the front oblique-view camera 3 intersect and detection blind areas are eliminated. Preferably, the horizontal included angle between the collecting end of the side-view camera 4 and that of the front head-up camera 2 is 25°, 26°, 27°, 28°, 29°, 30°, 31°, 32°, 33°, 34° or 35°, and the collecting end of the side-view camera 4 is inclined downward by 5°, 6°, 7°, 8°, 9°, 10°, 11°, 12°, 13°, 14° or 15°, which is not specifically limited.
The side-view cameras 4 in this embodiment are preferably depth cameras, which offer advantages such as high acquisition precision and a wide acquisition range. In this embodiment, the depth camera is preferably a "large white" camera with an infrared lens and an RGB lens, capable of acquiring infrared images, depth images and RGB images.
In this embodiment, as shown in Fig. 4, to ensure that the fields of view of the front oblique-view camera 3, the side-view cameras 4 and the front head-up camera 2 all intersect so as to eliminate detection blind areas, the front oblique-view camera 3 is arranged at the middle of the front side of the robot body 1, so that the detection range extends from the middle of the robot body 1 toward both sides, ensuring accurate detection of obstacles in front of the intelligent robot. The distance between the front oblique-view camera 3 and the bottom of the robot body 1 is 55%-75% of the front-side height of the robot body 1, ensuring that the front oblique-view camera 3 can capture images at and below ground level, so as to eliminate the blind area in detecting short obstacles and realize the robot's anti-falling function, and ensuring that the field of view of the front oblique-view camera 3 intersects those of the side-view cameras 4 and the front head-up camera 2; this solves the detection blind areas of the laser sensors, ultrasonic sensors and the like mounted on intelligent robots in the prior art. Given the front-side height of the robot body 1 provided in this embodiment, the distance between the front oblique-view camera 3 and the bottom of the robot body 1 is 522mm-712mm, preferably 610mm, 615mm, 620mm, 625mm or 630mm, which is not specifically limited here.
Further, the collecting end of the front oblique-view camera 3 is inclined downward by 45-55 degrees so as to capture images of the ground and below more accurately; preferably, it is inclined downward by 45°, 46°, 47°, 48°, 49°, 50°, 51°, 52°, 53°, 54° or 55°. The placement of the front oblique-view camera 3, such as its height and angle, must be coordinated with the side-view cameras 4 to eliminate detection blind areas.
The front oblique-view camera 3 in this embodiment is preferably a depth camera, which offers advantages such as high acquisition precision and a wide acquisition range. In this embodiment, the depth camera is preferably a "large white" camera with an infrared lens and an RGB lens, capable of acquiring infrared images, depth images and RGB images.
In this embodiment, Fig. 5 is a rear view of the intelligent robot. As shown in Fig. 5, the intelligent robot further includes a rear head-up camera 5 arranged on the rear side of the robot body 1, i.e., the side opposite the front head-up camera 2 relative to the robot's direction of travel. The rear head-up camera 5 is used mainly when the intelligent robot drives to a charging pile to charge: it detects the position of the charging pile so that the robot's charging port can connect to the charging port on the pile, and it detects whether obstacles are present while the robot reverses.
The rear head-up camera 5, the front head-up camera 2, the front oblique-view camera 3 and the two side-view cameras 4 cooperate with one another, covering different heights, angles and ranges, and realizing multi-directional image acquisition while the intelligent robot advances, reverses and turns.
Further, the rear head-up camera 5 is arranged at the middle of the rear side of the robot body 1, and the distance between the rear head-up camera 5 and the bottom of the robot body 1 is 45%-65% of the rear-side height of the robot body 1, so that the position of the charging pile can be collected accurately; arranging the rear head-up camera 5 at the middle of the rear side also facilitates accurate alignment of the robot's charging port with the charging port on the pile. Given the rear-side height of the robot body 1 provided in this embodiment, the distance between the rear head-up camera 5 and the bottom of the robot body 1 is 464mm-669mm, preferably 595mm, 597mm, 599mm or 601mm, which is not specifically limited here.
Still further, the rear head-up camera 5 in this embodiment is a depth camera, which offers advantages such as high acquisition precision and a wide acquisition range. In this embodiment, the depth camera is preferably a "large white" camera with an infrared lens and an RGB lens, capable of acquiring infrared images, depth images and RGB images.
In this embodiment, referring to Fig. 2, an anti-falling sensor 6 is further arranged on the robot body; it is placed below the front oblique-view camera 3, at the middle of the robot body 1, and is used to detect information on ground obstacles. The anti-falling sensor 6 is preferably a laser sensor. The information detected by the anti-falling sensor 6 is combined with the information collected by the front oblique-view camera 3 to obtain more obstacle data and hence map data carrying more obstacle information.
A front head-up 2D lidar 7, a front air-pressure anti-collision sensor 8 and a bottom RFID sensor (not shown in the figure) are further arranged in sequence below the anti-falling sensor 6. The front head-up 2D lidar 7 is used to acquire 2D map information in front of the intelligent robot; the front air-pressure anti-collision sensor 8 detects obstacle signals to prevent the intelligent robot from colliding with obstacles; and the bottom RFID sensor is arranged at the bottom of the robot body 1 to detect the road conditions beneath the robot. Combining the front head-up 2D lidar 7, the front air-pressure anti-collision sensor 8, the bottom RFID sensor, the anti-falling sensor 6 and the front oblique-view camera 3 yields more obstacle information and further improves the accuracy of obstacle detection.
Single-line lidars 9 are also provided on both sides of the robot body 1; each single-line lidar 9 is arranged on the side of the corresponding side-view camera 4 away from the front head-up camera 2, obliquely above that side-view camera 4. A top-view RGB camera 10 is arranged on the top of the robot body 1, and ultrasonic sensors 11 are evenly distributed along the circumference of the robot body 1.
In this embodiment, the 2D map information in the lidar data and the obstacle information in the image data are obtained, the obstacle data in the video images are projected into the map data, and map data carrying more information are obtained from the intersection of the two. Fusing the laser data, the image data, the ultrasonic data, the data of the front air-pressure anti-collision sensor 8 and the data of the top-view RGB camera 10 yields information on more obstacles, and ultimately higher-precision scene data for autonomous driving, providing accurate map marking and path planning; a minimal sketch of the projection step is shown below.
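As an illustration only (not the patent's exact procedure), the following Python sketch projects obstacle points detected by the fused camera data into a 2D occupancy grid built from the lidar data; the grid resolution, origin and union-style merge rule are assumptions.

```python
# Hedged sketch: project camera-detected obstacle points into the 2D lidar
# occupancy grid. Resolution, origin and the merge rule are assumptions.
import numpy as np

def project_obstacles(lidar_grid, obstacle_points, res=0.05, origin=(0.0, 0.0)):
    """lidar_grid: 2D occupancy grid from the front head-up 2D lidar (H x W);
    obstacle_points: Nx3 array of obstacle points from the fused camera data."""
    fused = lidar_grid.copy()
    ox, oy = origin
    ix = ((obstacle_points[:, 0] - ox) / res).astype(int)   # world x -> column
    iy = ((obstacle_points[:, 1] - oy) / res).astype(int)   # world y -> row
    inside = (ix >= 0) & (ix < fused.shape[1]) & (iy >= 0) & (iy < fused.shape[0])
    fused[iy[inside], ix[inside]] = 1   # mark camera obstacles in the map
    return fused
```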
As shown in Fig. 1, the method for updating an intelligent robot map provided in the embodiment of the present invention includes the following steps:
and step 110, acquiring visual data acquired by a plurality of cameras.
The visual data may include depth data, infrared data, and RGB data. In this embodiment, the multiple cameras arranged on the intelligent robot collect images in the field of view, so as to obtain visual data.
Step 120: fuse the visual data to obtain the current environment data of the intelligent robot.
In this embodiment, the fields of view (or collection ranges) of the multiple cameras all intersect, so the visual data contain overlapping regions; the pieces of visual data therefore need to be fused to obtain the environment data of the robot's surroundings. The fusion may proceed as follows: fuse the overlapping regions of the visual data according to a set algorithm, and splice the non-overlapping regions together.
Specifically, fusing the visual data to obtain the current environment data of the intelligent robot may comprise: determining the overlapping regions of the visual data, and fusing the overlapping regions according to the confidence of each piece of visual data to obtain the merged environment data.
Specifically, the process of fusing the visual data may be: perform feature-point detection on each piece of visual data to obtain feature-point descriptors; match features according to the descriptors to obtain feature matching pairs; filter the matching pairs to obtain robust feature matching pairs; and stitch the images according to the robust matching pairs, so that the overlapping regions of adjacent images are reliably eliminated and the environment data are obtained.
In this embodiment, the overlapping regions may be fused according to the confidence of each piece of visual data as follows: apply the Scale-Invariant Feature Transform (SIFT) algorithm to each piece of visual data to detect feature points and obtain feature-point descriptors; match the descriptors with the Best Bin First (BBF) algorithm to obtain a feature matching set; and filter that set with the Random Sample Consensus (RANSAC) algorithm to obtain a robust feature matching set. The pixel regions corresponding to the matching set are the overlapping regions.
After the robust feature matching set is obtained, the matched features are weighted and summed according to the confidence of each piece of visual data to obtain target features; the region corresponding to the target features is the fused overlapping region. Finally, the non-overlapping regions are spliced with the fused overlapping region to obtain the environment data. Fusing the visual data in this way achieves blind-area-free detection and improves robustness.
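The following Python sketch outlines this pipeline with OpenCV. It is a minimal sketch, not the patent's implementation: FLANN's kd-tree search stands in for Best Bin First, Lowe's ratio test is an added filtering step, and the confidence weights w_a and w_b are assumed values.

```python
# Minimal sketch of the fusion pipeline: SIFT features, kd-tree matching
# (a practical stand-in for Best Bin First), RANSAC outlier rejection,
# then confidence-weighted blending of the overlap.
import cv2
import numpy as np

def fuse_pair(img_a, img_b, w_a=0.6, w_b=0.4):
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    # Approximate nearest-neighbour matching over a kd-tree index
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
    matches = flann.knnMatch(des_a, des_b, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.7 * m[1].distance]

    src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # RANSAC keeps only the robust matches (needs at least 4 pairs)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp image A into B's frame and blend the overlap by confidence
    h, w = img_b.shape[:2]
    warped = cv2.warpPerspective(img_a, H, (w, h))
    overlap = (warped > 0) & (img_b > 0)
    fused = img_b.copy()
    fused[overlap] = (w_a * warped[overlap] + w_b * img_b[overlap]).astype(img_b.dtype)
    only_a = (warped > 0) & ~overlap
    fused[only_a] = warped[only_a]          # splice the non-overlapping region
    return fused
```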
Step 130: update the current map according to the environment data.
Updating the current map may include: marking detected narrow pit areas in the current map, erasing layer by layer the marks of obstacles that have been removed, marking newly identified obstacles on the current map, and recalculating the ground parameters in the current map when the ground changes.
In this embodiment, during operation the intelligent robot needs to plan a path according to the current map and drive to its destination along the planned path. The current map is constructed by scanning the environment in which the intelligent robot is located and marking information such as the obstacles present; when the environment changes, the current map must be updated so that the accuracy of path planning is maintained.
Specifically, the current map is updated according to the environment data as follows: extract the ground pixel point cloud from the environment data; determine the points in the ground pixel point cloud whose height values are smaller than a first set value as target ground points; cluster the target ground points to obtain target ground point groups; determine the area corresponding to any target ground point group whose number of target ground points is greater than a second set value as a narrow pit area; and mark the narrow pit area on the current map.
The environment data consist of a pixel point cloud in which every pixel carries coordinate information and height information. The ground pixel point cloud can be extracted from the height information by taking the pixels whose height is below a certain value as ground pixels. A target ground point can be understood as a pixel whose height is below the current ground level.
In this embodiment, the ground pixel point cloud may be extracted from the environment data as follows: convert the pixel point cloud of the environment data into a bird's-eye view, and extract the ground pixel point cloud from the bird's-eye view according to the height value of each pixel.
The bird's-eye view is a top-down view of the ground drawn from a high viewpoint according to the perspective principle. After the bird's-eye view is obtained, the pixels whose height is below a certain value are extracted from it to obtain the ground pixel point cloud.
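A minimal sketch of this extraction step follows, assuming each pixel point is an (x, y, z) triple with z as the height value; the cell size and ground height threshold are illustrative assumptions, not values from the patent.

```python
# Sketch of ground-pixel extraction via a bird's-eye view. `cell` (grid
# size in metres) and `ground_z_max` (ground height threshold) are assumed.
import numpy as np

def extract_ground_points(points, cell=0.05, ground_z_max=0.02):
    # Rasterise the cloud into a top-down height grid: each bird's-eye
    # cell keeps the minimum height observed inside it.
    ij = np.floor(points[:, :2] / cell).astype(int)
    ij -= ij.min(axis=0)                        # shift indices to start at 0
    bev = np.full(tuple(ij.max(axis=0) + 1), np.inf)
    np.minimum.at(bev, (ij[:, 0], ij[:, 1]), points[:, 2])

    # Ground pixels are the points whose height is below the threshold
    ground_mask = points[:, 2] < ground_z_max
    return points[ground_mask], bev
```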
A target ground point group contains several target ground points. In this embodiment, the target ground points may be clustered by grouping into one class all target ground points whose mutual distance is smaller than a set value. When the number of target ground points in a group exceeds the second set value, the area corresponding to the group forms a narrow pit area; if the intelligent robot drove through it, the robot would fall. The pit area therefore needs to be marked on the current map so that the intelligent robot bypasses it during path planning; a sketch of this clustering step follows.
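A hedged sketch of the pit-marking step, using distance-based clustering (DBSCAN here) and illustrative values for the first set value, the second set value and the clustering distance:

```python
# Points lower than the ground by more than the first set value are
# clustered; groups with more points than the second set value become
# narrow pit areas. DBSCAN and all thresholds here are assumptions,
# not the patent's parameters.
import numpy as np
from sklearn.cluster import DBSCAN

def find_pit_areas(ground_points, first_set_value=-0.05,
                   second_set_value=20, cluster_dist=0.1):
    # Target ground points: lower than the current ground plane
    targets = ground_points[ground_points[:, 2] < first_set_value]
    if len(targets) == 0:
        return []
    labels = DBSCAN(eps=cluster_dist, min_samples=1).fit_predict(targets[:, :2])
    pits = []
    for lbl in np.unique(labels):
        group = targets[labels == lbl]
        if len(group) > second_set_value:   # enough points to form a pit
            pits.append(group[:, :2])       # area to mark on the current map
    return pits
```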
Optionally, after the ground pixel point cloud is extracted from the environment data, the method further includes the following steps: fitting the ground pixel point cloud; and, if the ratio of the number of ground pixel points participating in the fit to the total number of ground pixel points is smaller than a set threshold, recalculating the ground parameters from the ground pixel point cloud.
The ground pixel point cloud may be fitted using the Random Sample Consensus (RANSAC) algorithm. When the ratio of the number of ground pixel points participating in the fit to the total number of ground pixel points is smaller than the set threshold, several planes exist in the current environment of the intelligent robot (for example, the robot is in an environment with steps); the ground parameters must then be recalculated from the ground pixel point cloud, and the recalculated parameters updated into the current map.
In this embodiment, recalculating the ground parameters from the ground pixel point cloud may be implemented with an existing fitting algorithm, which is not limited here; one possible sketch is given below.
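For illustration only, the following sketch uses Open3D's RANSAC plane segmentation as one possible fitting algorithm; the distance threshold, iteration count and the 0.5 inlier-ratio check are assumptions standing in for the "set threshold".

```python
# Illustrative RANSAC plane fit of the ground cloud with Open3D.
import numpy as np
import open3d as o3d

def refit_ground_if_needed(ground_points, ratio_threshold=0.5):
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(ground_points)
    plane, inliers = pcd.segment_plane(distance_threshold=0.02,
                                       ransac_n=3, num_iterations=200)
    if len(inliers) / len(ground_points) < ratio_threshold:
        # Too few points fit a single plane (e.g. steps in the scene):
        # recompute and return the ground parameters (a, b, c, d) so the
        # current map can be updated with them.
        return np.asarray(plane)
    return None  # the existing ground parameters remain valid
```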
Specifically, the current map may also be updated according to the environment data as follows: determine the obstacles to be erased according to the environment data and the current map; slice the field-of-view range corresponding to the environment data at a set height to obtain several layered regions; determine an erasing range according to the height of each layered region; and erase the marks of the obstacles to be erased from the current map according to the erasing ranges.
An obstacle to be erased is one whose mark is contained in the current map while no pixel of it appears in the environment data, which indicates that the obstacle has been removed. The set height may be any value between 10 and 20 cm. The erasing range may be the region enclosed by the field-of-view boundaries of each layered region; that is, the erasing range is determined by the field of view of the current camera. Illustratively, Fig. 6a is a schematic diagram of slicing the field of view into layers in an embodiment of the invention. As shown in Fig. 6a, the region enclosed by the triangle is the camera's field of view; slicing it yields several layered regions, and the erasing range of each layered region is the area enclosed by boundary a, boundary b and the layer surfaces.
In this embodiment, erasing an obstacle to be erased from the current map according to the erasing range may proceed as follows: determine, from the height information, the layered region in which the obstacle to be erased is located and take it as the target layered region; then erase from the current map the mark of the obstacle to be erased within the erasing range corresponding to the target layered region.
Here the height information is that of the obstacle. Erasing the mark of the obstacle within the erasing range corresponding to the target layered region means erasing the part of the mark that falls inside the erasing range while keeping the part that does not. For example, Fig. 6b is a schematic diagram of erasing an obstacle mark. As shown in Fig. 6b, the ellipse enclosed by the dotted line is the object to be erased, located in the lower three layered regions; the white part falls inside the erasing range of those three regions while the shaded part does not. When erasing, therefore, only the obstacle mark corresponding to the white part is erased, and the mark corresponding to the shaded part remains. The advantage of this is that the inaccuracy of erasing marks with the full field of view is avoided.
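An illustrative sketch of the layered erasing follows, simplifying each layer's erasing range to an axis-aligned box; the slice height, grid resolution and data layout are assumptions rather than the patent's representation.

```python
# Hedged sketch of layered erasing: only marks that fall inside the
# erasing range of their own height layer are removed from the map.
def erase_removed_obstacles(grid, marks, layer_boxes, slice_h=0.15, res=0.05):
    """grid: 2D occupancy map; marks: (x, y, z) marks present in the map but
    absent from the environment data; layer_boxes: per-layer (x0, y0, x1, y1)
    erasing ranges in metres, enclosed by the camera's view boundaries."""
    for x, y, z in marks:
        layer = int(z // slice_h)              # target layered region by height
        if layer >= len(layer_boxes):
            continue
        x0, y0, x1, y1 = layer_boxes[layer]    # erasing range of that layer
        if x0 <= x <= x1 and y0 <= y <= y1:    # inside the range: erase
            grid[int(y / res), int(x / res)] = 0
        # marks outside the erasing range are kept
    return grid
```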
Specifically, the current map may also be updated according to the environment data as follows: obtain the height of each object in the environment data; if the distance between an object and the intelligent robot is smaller than a set distance threshold and the object's height is greater than a first height threshold, the object is an obstacle; if the distance between an object and the intelligent robot is greater than the set distance threshold and the object's height is greater than a second height threshold, the object is an obstacle; and mark the obstacle in the current map.
The first height threshold is smaller than the second height threshold. The set distance threshold may be any value between 0.8m and 1m; the first height threshold may be set to 1cm and the second height threshold to 3cm. For example, with the distance threshold set to 0.8m, an object within 0.8m of the intelligent robot is determined to be an obstacle if its height exceeds 1cm, while an object farther than 0.8m away is determined to be an obstacle if its height exceeds 3cm. The advantage of this is that short obstacles can be identified accurately.
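This distance-dependent test is small enough to state directly in code; the following sketch uses the example values from the text (0.8m distance threshold, 1cm and 3cm height thresholds).

```python
# Direct restatement of the rule above, with the text's example values.
def is_obstacle(distance_m, height_m, dist_threshold=0.8, h1=0.01, h2=0.03):
    if distance_m < dist_threshold:
        return height_m > h1   # near the robot: taller than 1cm
    return height_m > h2       # farther away: taller than 3cm
```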
According to the technical solution of this embodiment, the visual data collected by the multiple cameras are acquired, the visual data are fused to obtain the current environment data of the intelligent robot, and the current map is updated according to the environment data. By updating the current map with environment data collected by the multiple cameras arranged on the intelligent robot, the map updating method provided by the embodiment of the invention improves the accuracy of map updating and the safety of the robot while driving.
Example two
Fig. 7 is a schematic structural diagram of an apparatus for updating an intelligent robot map according to a second embodiment of the present invention. The intelligent robot is provided with multiple cameras. As shown in Fig. 7, the apparatus includes:
a visual data obtaining module 210, configured to obtain visual data collected by the multiple cameras;
the environment data acquisition module 220 is configured to fuse the plurality of pieces of visual data to obtain current environment data of the intelligent robot;
and the map updating module 230 is configured to update the current map according to the environment data.
Optionally, the multiple cameras at least include: one of a front head-up camera, a front oblique-view camera, a left-side view camera and a right-side view camera; the environmental data obtaining module 220 is further configured to:
determining an overlapping region of a plurality of the visual data;
and fusing the overlapped areas according to the confidence degrees of the visual data to obtain merged environmental data.
Optionally, the map updating module 230 is further configured to:
extracting a ground pixel point cloud in the environment data;
determining the point cloud with the height value smaller than a first set value in the ground pixel point cloud as a target ground point;
clustering the target ground points to obtain a target ground point group;
determining the area corresponding to the target ground point group with the number of the target ground points larger than a second set value as a narrow pit area;
marking the narrow pit area on the current map.
Optionally, the map updating module 230 is further configured to:
converting the pixel point cloud of the environment data into a bird's-eye view;
and extracting ground pixel point clouds from the aerial view according to the height values of the pixel points.
Optionally, the map updating module 230 is further configured to:
fitting the ground pixel point cloud;
and if the number of the ground pixel point clouds participating in fitting and the proportion of the total number of the ground pixel points are smaller than a set threshold value, recalculating the ground parameters according to the ground pixel point clouds.
Optionally, the map updating module 230 is further configured to:
determining an obstacle to be erased according to the environment data and the current map;
slicing the view range corresponding to the environment data according to a set height to obtain a plurality of layered regions;
determining an erasing range according to the height of each layered area;
and erasing the mark of the obstacle to be erased from the current map according to the erasing range.
Optionally, the map updating module 230 is further configured to:
determining the layered region in which the obstacle to be erased is located according to the height information, and taking it as the target layered region;
and erasing the mark of the obstacle to be erased in the erasing range corresponding to the target layered region from the current map.
Optionally, the map updating module 230 is further configured to:
acquiring the height of an object in the environment data;
if the distance between the object and the intelligent robot is smaller than a set distance threshold and the height of the object is larger than a first height threshold, the object is an obstacle;
if the distance between the object and the intelligent robot is larger than the set distance threshold and the height of the object is larger than a second height threshold, the object is an obstacle; wherein the first height threshold is less than the second height threshold;
marking the obstacle in the current map.
The apparatus can execute the method provided by any embodiment of the present invention, and has the corresponding functional modules and beneficial effects for executing the method. For details not described in this embodiment, reference may be made to the methods provided in the foregoing embodiments of the present invention.
Example three
Fig. 8 is a schematic structural diagram of a computer device according to a third embodiment of the present invention, illustrating a block diagram of a computer device 312 suitable for implementing embodiments of the present invention. The computer device 312 shown in Fig. 8 is only an example and should not impose any limitation on the functionality or scope of use of embodiments of the present invention. The device 312 is a typical computing device carrying the update function of an intelligent robot map.
As shown in FIG. 8, computer device 312 is in the form of a general purpose computing device. The components of computer device 312 may include, but are not limited to: one or more processors 316, a storage device 328, and a bus 318 that couples the various system components including the storage device 328 and the processors 316.
Bus 318 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an enhanced ISA bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.
Computer device 312 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer device 312 and includes both volatile and nonvolatile media, removable and non-removable media.
Storage 328 may include computer system readable media in the form of volatile Memory, such as Random Access Memory (RAM) 330 and/or cache Memory 332. The computer device 312 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 334 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 8, and commonly referred to as a "hard drive"). Although not shown in FIG. 8, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a Compact disk-Read Only Memory (CD-ROM), a Digital Video disk (DVD-ROM), or other optical media) may be provided. In these cases, each drive may be connected to bus 318 by one or more data media interfaces. Storage 328 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
Program 336 having a set (at least one) of program modules 326 may be stored, for example, in storage 328, such program modules 326 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which may comprise an implementation of a network environment, or some combination thereof. Program modules 326 generally carry out the functions and/or methodologies of embodiments of the invention as described herein.
The computer device 312 may also communicate with one or more external devices 314 (e.g., keyboard, pointing device, camera, display 324, etc.), with one or more devices that enable a user to interact with the computer device 312, and/or with any devices (e.g., network card, modem, etc.) that enable the computer device 312 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 322. Also, computer device 312 may communicate with one or more networks (e.g., a Local Area Network (LAN), Wide Area Network (WAN), etc.) and/or a public Network, such as the internet, via Network adapter 320. As shown, network adapter 320 communicates with the other modules of computer device 312 via bus 318. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the computer device 312, including but not limited to: microcode, device drivers, Redundant processing units, external disk drive Arrays, disk array (RAID) systems, tape drives, and data backup storage systems, to name a few.
The processor 316 executes various functional applications and data processing by executing programs stored in the storage 328, for example, implementing the method for updating the map of the intelligent robot according to the above-described embodiment of the present invention.
Example four
Embodiments of the present invention provide a computer-readable storage medium having stored thereon a computer program, which, when executed by a processing apparatus, implements an update method of an intelligent robot map as in embodiments of the present invention. The computer readable medium of the present invention described above may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring visual data acquired by the multiple cameras; fusing the plurality of visual data to obtain the current environment data of the intelligent robot; and updating the current map according to the environment data.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages or a combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware. In some cases, the name of a unit does not constitute a limitation on the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Example five
Fig. 9 is a schematic structural diagram of a chip according to a fifth embodiment of the present application. Chip 900 includes one or more processors 901 and interface circuits 902. Optionally, chip 900 may also include a bus 903. Wherein:
the processor 901 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be implemented by integrated logic circuits of hardware or instructions in the form of software in the processor 901. The processor 901 described above may be one or more of a general purpose processor, a Digital Signal Processor (DSP), an application specific integrated circuit ((ASIC), a field programmable gate array ((FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, MCU, MPU, CPU, or co-processor.
The interface circuit 902 may be used for sending or receiving data, instructions or information, and the processor 901 may perform processing by using the data, instructions or other information received by the interface circuit 902, and may send out processing completion information through the interface circuit 902.
Optionally, the chip further comprises a memory, which may include read only memory and random access memory, and provides operating instructions and data to the processor. The portion of memory may also include non-volatile random access memory (NVRAM).
Optionally, the memory stores executable software modules or data structures, and the processor may perform corresponding operations by calling the operation instructions stored in the memory (the operation instructions may be stored in an operating system).
Optionally, the chip may be used in the intelligent robot map updating apparatus according to the embodiment of the present application. Optionally, the interface circuit 902 may be used to output the execution result of the processor 901. For the map updating method provided in one or more embodiments of the present application, reference may be made to the foregoing embodiments, and details are not repeated here.
It should be noted that the respective functions of the processor 901 and the interface circuit 902 may be implemented by hardware design, software design, or a combination of hardware and software, which is not limited herein.
It should be noted that the foregoing is merely illustrative of the preferred embodiments of the present invention and the technical principles employed. Those skilled in the art will understand that the present invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements and substitutions can be made without departing from the scope of the invention. Therefore, although the present invention has been described in some detail through the above embodiments, it is not limited to those embodiments and may include other equivalent embodiments without departing from its spirit; the scope of the present invention is determined by the appended claims.

Claims (12)

1. A method for updating a map of an intelligent robot, wherein the intelligent robot is provided with a plurality of cameras, the method comprising:
acquiring visual data collected by the plurality of cameras;
fusing the plurality of visual data to obtain current environment data of the intelligent robot; and
updating a current map according to the environment data.
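By way of illustration, the following Python sketch shows the three claimed steps under simplifying assumptions not taken from the claims: the current map is a 2-D occupancy grid, each camera already yields an (N, 3) point cloud in the map frame, and fusion is plain concatenation. The function name, grid representation, and resolution are illustrative.

```python
import numpy as np

def update_map(camera_clouds, grid, resolution=0.05):
    """Fuse per-camera point clouds and mark the observed cells in the grid."""
    # Steps 1-2: acquire and fuse the visual data of all cameras into one cloud.
    environment = np.vstack(camera_clouds)
    # Step 3: project the fused points into grid cells and mark them occupied.
    cols = np.floor(environment[:, 0] / resolution).astype(int)
    rows = np.floor(environment[:, 1] / resolution).astype(int)
    inside = (rows >= 0) & (rows < grid.shape[0]) & (cols >= 0) & (cols < grid.shape[1])
    grid[rows[inside], cols[inside]] = 1  # 1 marks an occupied cell
    return grid
```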
2. The method of claim 1, wherein the plurality of cameras comprise at least one of: a front head-up camera, a front oblique-view camera, a left-side view camera, and a right-side view camera; and fusing the plurality of visual data to obtain the current environment data of the intelligent robot comprises:
determining an overlapping region of the plurality of visual data; and
fusing the overlapping region according to confidence degrees of the visual data to obtain merged environment data.
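The claim leaves the fusion rule open; one common choice consistent with "fusing according to confidence degrees" is a per-pixel confidence-weighted average over the overlap, sketched below. The depth-image representation and the weighting scheme are assumptions.

```python
import numpy as np

def fuse_overlap(depth_a, conf_a, depth_b, conf_b):
    """Confidence-weighted fusion of two cameras over their overlapping region.

    depth_a, depth_b: (H, W) depth images covering the same overlap;
    conf_a, conf_b:   (H, W) per-pixel confidence maps in [0, 1].
    """
    weight = conf_a + conf_b
    # Weight each camera's measurement by its confidence; guard against
    # division by zero where neither camera is confident.
    return (conf_a * depth_a + conf_b * depth_b) / np.maximum(weight, 1e-9)
```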
3. The method of claim 1, wherein updating the current map according to the environment data comprises:
extracting a ground pixel point cloud from the environment data;
determining points in the ground pixel point cloud whose height values are smaller than a first set value as target ground points;
clustering the target ground points to obtain target ground point groups;
determining the area corresponding to a target ground point group in which the number of target ground points is larger than a second set value as a narrow pit area; and
marking the narrow pit area on the current map.
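A sketch of the pit-detection steps, using DBSCAN as one possible clustering method (the claim does not name one). The set values and DBSCAN parameters are illustrative, with heights measured relative to the ground plane so that pit points have negative z.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def find_pit_areas(ground_cloud, first_set_value=-0.03, second_set_value=20):
    """Return groups of below-ground points large enough to mark as narrow pits.

    ground_cloud: (N, 3) ground pixel point cloud (x, y, z), z = height.
    """
    # Points whose height value is below the first set value are pit candidates.
    targets = ground_cloud[ground_cloud[:, 2] < first_set_value]
    if len(targets) == 0:
        return []
    # Cluster nearby candidates in the ground plane into target ground point groups.
    labels = DBSCAN(eps=0.1, min_samples=3).fit(targets[:, :2]).labels_
    pits = []
    for label in set(labels) - {-1}:        # label -1 is DBSCAN noise
        group = targets[labels == label]
        if len(group) > second_set_value:   # enough points -> a narrow pit area
            pits.append(group)
    return pits
```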
4. The method of claim 3, wherein extracting the ground pixel point cloud from the environment data comprises:
converting the pixel point cloud of the environment data into a bird's-eye view; and
extracting the ground pixel point cloud from the bird's-eye view according to the height values of the pixel points.
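One plausible reading of the conversion, sketched below: rasterize the cloud into a top-down height image that keeps the minimum height per cell (so low pit points survive), then threshold the heights to select ground cells. The grid extents and resolution are assumptions.

```python
import numpy as np

def to_birds_eye(cloud, resolution=0.05, x_range=(0.0, 5.0), y_range=(-2.5, 2.5)):
    """Project a 3-D point cloud into a top-down height image (min height per cell)."""
    h = int(round((y_range[1] - y_range[0]) / resolution))
    w = int(round((x_range[1] - x_range[0]) / resolution))
    bev = np.full((h, w), np.nan)
    rows = np.floor((cloud[:, 1] - y_range[0]) / resolution).astype(int)
    cols = np.floor((cloud[:, 0] - x_range[0]) / resolution).astype(int)
    inside = (rows >= 0) & (rows < h) & (cols >= 0) & (cols < w)
    for r, c, z in zip(rows[inside], cols[inside], cloud[inside, 2]):
        # Keep the lowest height per cell so pit candidates survive the projection.
        if np.isnan(bev[r, c]) or z < bev[r, c]:
            bev[r, c] = z
    return bev
```

Ground cells could then be picked by a height band, e.g. `np.abs(bev) < 0.02` for a robot-frame ground at z = 0.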
5. The method of claim 3, further comprising, after extracting the ground pixel point cloud from the environment data:
fitting the ground pixel point cloud; and
if the proportion of the ground pixel points participating in the fitting to the total number of ground pixel points is smaller than a set threshold, recalculating the ground parameters from the ground pixel point cloud.
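A sketch of this inlier-ratio check, assuming a planar ground model z = a*x + b*y + c and an ordinary least-squares refit; the tolerance and ratio values are illustrative.

```python
import numpy as np

def refit_ground_if_needed(cloud, prior=(0.0, 0.0, 0.0),
                           inlier_tol=0.02, min_ratio=0.5):
    """Recalculate ground parameters when too few points fit the prior plane."""
    a, b, c = prior
    residual = np.abs(cloud[:, 2] - (a * cloud[:, 0] + b * cloud[:, 1] + c))
    # Proportion of points participating in (consistent with) the fit.
    if np.mean(residual < inlier_tol) < min_ratio:
        # Refit z = a*x + b*y + c to the observed ground pixel point cloud.
        A = np.c_[cloud[:, 0], cloud[:, 1], np.ones(len(cloud))]
        (a, b, c), *_ = np.linalg.lstsq(A, cloud[:, 2], rcond=None)
    return a, b, c
```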
6. The method of claim 1, wherein updating the current map according to the environment data comprises:
determining an obstacle to be erased according to the environment data and the current map;
slicing the view range corresponding to the environment data at a set height interval to obtain a plurality of layered regions;
determining an erasing range according to the height of each layered region; and
erasing the mark of the obstacle to be erased from the current map according to the erasing range.
7. The method of claim 6, wherein erasing the mark of the obstacle to be erased from the current map according to the erasing range comprises:
determining, according to height information of the obstacle to be erased, the layered region in which the obstacle is located as a target layered region; and
erasing the mark of the obstacle to be erased within the erasing range corresponding to the target layered region from the current map.
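Claims 6 and 7 together describe erasing stale obstacle marks layer by layer. A sketch under assumed numbers: the view range is sliced every 0.3 m, and each layer is trusted for erasure out to a fixed radius; the grid representation matches the earlier sketches.

```python
import numpy as np

def erase_obstacle(grid, obstacle_cells, obstacle_height, robot_cell,
                   slice_height=0.3, erase_ranges=(2.0, 1.5, 1.0),
                   resolution=0.05):
    """Erase an obstacle's mark only inside its layer's erasing range.

    The view range is sliced every slice_height metres; layer i is trusted
    for erasure out to erase_ranges[i] metres from the robot.
    """
    # The layered region containing the obstacle is the target layered region.
    layer = min(int(obstacle_height // slice_height), len(erase_ranges) - 1)
    limit = erase_ranges[layer] / resolution     # erasing range in cells
    for r, c in obstacle_cells:
        if np.hypot(r - robot_cell[0], c - robot_cell[1]) <= limit:
            grid[r, c] = 0                       # clear the obstacle mark
    return grid
```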
8. The method of claim 1, wherein updating the current map according to the environment data comprises:
acquiring the height of an object in the environment data;
determining the object to be an obstacle if the distance between the object and the intelligent robot is smaller than a set distance threshold and the height of the object is larger than a first height threshold;
determining the object to be an obstacle if the distance between the object and the intelligent robot is larger than the set distance threshold and the height of the object is larger than a second height threshold, wherein the first height threshold is smaller than the second height threshold; and
marking the obstacle in the current map.
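The two-threshold rule of claim 8 reduces to a small predicate; the threshold values below are assumptions, chosen so that nearby objects (where depth data is more reliable) are judged against the stricter, lower bar.

```python
def is_obstacle(distance, height, set_distance=1.5,
                first_height_threshold=0.02, second_height_threshold=0.05):
    """Distance-dependent height test from claim 8 (threshold values assumed)."""
    if distance < set_distance:
        # Near range: measurements are reliable, a low height already counts.
        return height > first_height_threshold
    # Far range: require more height to tolerate noisier measurements.
    return height > second_height_threshold
```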
9. A device for updating a map of an intelligent robot, wherein the intelligent robot is provided with a plurality of cameras, the device comprising:
a visual data acquisition module, configured to acquire visual data collected by the plurality of cameras;
an environment data acquisition module, configured to fuse the plurality of visual data to obtain current environment data of the intelligent robot; and
a map updating module, configured to update a current map according to the environment data.
10. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the method for updating a map of an intelligent robot according to any one of claims 1 to 8.
11. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processing device, implements the method for updating a map of an intelligent robot according to any one of claims 1 to 8.
12. A chip, comprising at least one processor and an interface;
the interface is configured to provide program instructions or data to the at least one processor; and
the at least one processor is configured to execute the program instructions to implement the method for updating a map of an intelligent robot according to any one of claims 1 to 8.
CN202110806684.4A 2021-07-16 2021-07-16 Update method, device, equipment, medium and chip of intelligent robot map Active CN113535877B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110806684.4A CN113535877B (en) 2021-07-16 2021-07-16 Update method, device, equipment, medium and chip of intelligent robot map

Publications (2)

Publication Number Publication Date
CN113535877A (en) 2021-10-22
CN113535877B (en) 2023-05-30

Family

ID=78128455

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110806684.4A Active CN113535877B (en) 2021-07-16 2021-07-16 Update method, device, equipment, medium and chip of intelligent robot map

Country Status (1)

Country Link
CN (1) CN113535877B (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140350839A1 (en) * 2013-05-23 2014-11-27 Irobot Corporation Simultaneous Localization And Mapping For A Mobile Robot
CN108481327A (en) * 2018-05-31 2018-09-04 珠海市微半导体有限公司 A kind of positioning device, localization method and the robot of enhancing vision
CN208289901U (en) * 2018-05-31 2018-12-28 珠海市一微半导体有限公司 A kind of positioning device and robot enhancing vision
CN109284348A (en) * 2018-10-30 2019-01-29 百度在线网络技术(北京)有限公司 A kind of update method of electronic map, device, equipment and storage medium
WO2020135810A1 (en) * 2018-12-29 2020-07-02 华为技术有限公司 Multi-sensor data fusion method and device
US20200215694A1 (en) * 2019-01-03 2020-07-09 Ecovacs Robotics Co., Ltd. Dynamic region division and region passage identification methods and cleaning robot
CN109828588A (en) * 2019-03-11 2019-05-31 浙江工业大学 Paths planning method in a kind of robot chamber based on Multi-sensor Fusion
CN109872324A (en) * 2019-03-20 2019-06-11 苏州博众机器人有限公司 Ground obstacle detection method, device, equipment and storage medium
CN112445208A (en) * 2019-08-15 2021-03-05 纳恩博(北京)科技有限公司 Robot, method and device for determining travel route, and storage medium
CN110686687A (en) * 2019-10-31 2020-01-14 珠海市一微半导体有限公司 Method for constructing map by visual robot, robot and chip
WO2021134325A1 (en) * 2019-12-30 2021-07-08 深圳元戎启行科技有限公司 Obstacle detection method and apparatus based on driverless technology and computer device
CN112486171A (en) * 2020-11-30 2021-03-12 中科院软件研究所南京软件技术研究院 Robot obstacle avoidance method based on vision

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114440882A (en) * 2022-02-07 2022-05-06 电子科技大学 Multi-intelligent-home mobile equipment and cooperative path-finding anti-collision method thereof
CN114440882B (en) * 2022-02-07 2023-10-31 电子科技大学 Multi-intelligent home mobile device and collaborative road-finding anti-collision method thereof

Also Published As

Publication number Publication date
CN113535877B (en) 2023-05-30

Similar Documents

Publication Publication Date Title
WO2020043041A1 (en) Method and device for point cloud data partitioning, storage medium, and electronic device
WO2020102944A1 (en) Point cloud processing method and device and storage medium
TWI710798B (en) Laser scanning system atteched on moving object, laser scanning method for laser scanner atteched on moving object, and laser scanning program
CN112017251B (en) Calibration method and device, road side equipment and computer readable storage medium
US11670087B2 (en) Training data generating method for image processing, image processing method, and devices thereof
US9911038B2 (en) Survey data processing device, survey data processing method, and program therefor
JP7209115B2 (en) Detection, 3D reconstruction and tracking of multiple rigid objects moving in relatively close proximity
JP2007183432A (en) Map creation device for automatic traveling and automatic traveling device
WO2021016854A1 (en) Calibration method and device, movable platform, and storage medium
WO2020154990A1 (en) Target object motion state detection method and device, and storage medium
KR102260975B1 (en) Apparatus and method for controlling automatic driving using 3d grid map
WO2022179207A1 (en) Window occlusion detection method and apparatus
CN113469045B (en) Visual positioning method and system for unmanned integrated card, electronic equipment and storage medium
JP2001052171A (en) Surrounding environment recognizing device
CN111507973A (en) Target detection method and device, electronic equipment and storage medium
WO2024012211A1 (en) Autonomous-driving environmental perception method, medium and vehicle
JP2006012178A (en) Method and system for detecting parking vehicle
CN111830969B (en) Fusion butt joint method based on reflecting plate and two-dimensional code
WO2022083529A1 (en) Data processing method and apparatus
CN113535877A (en) Intelligent robot map updating method, device, equipment, medium and chip
CN116508071A (en) System and method for annotating automotive radar data
US20230341558A1 (en) Distance measurement system
CN113500600B (en) Intelligent robot
US20230266469A1 (en) System and method for detecting road intersection on point cloud height map
US20210302991A1 (en) Method and system for generating an enhanced field of view for an autonomous ground vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant