CN112964263A - Automatic mapping method and device, mobile robot and readable storage medium - Google Patents

Automatic mapping method and device, mobile robot and readable storage medium

Info

Publication number
CN112964263A
Authority
CN
China
Prior art keywords
point
current
target
point cloud
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110138351.9A
Other languages
Chinese (zh)
Other versions
CN112964263B (en)
Inventor
欧阳真超
谷宁波
任涛
牛建伟
马群
李青锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hejian Technology Partnership LP
Original Assignee
Hangzhou Weishi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Weishi Technology Co ltd filed Critical Hangzhou Weishi Technology Co ltd
Priority to CN202110138351.9A priority Critical patent/CN112964263B/en
Publication of CN112964263A publication Critical patent/CN112964263A/en
Application granted granted Critical
Publication of CN112964263B publication Critical patent/CN112964263B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G01C21/32 Structuring or formatting of map data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application provides an automatic mapping method and device, a mobile robot and a readable storage medium, and relates to the technical field of robots. The method comprises the following steps: generating a current aerial view according to the three-dimensional point cloud sets of the current detection point and the historical detection points; performing a maximum likelihood path search under a collision-avoidance condition according to a local point cloud set, a local aerial view and a target point corresponding to the current sensing range of the current detection point, so as to obtain a target path reaching the target point, wherein the target path comprises a first intermediate point located on the arc of the current sensing range and a second intermediate point located on the bottom edge of the local aerial view and belonging to an undetected area; if the target path is obtained, controlling the mobile robot to move to the first intermediate point and updating the first intermediate point as the current detection point; if not, rotating by an angle and searching for the target path again; and if the target path is still not found after a full rotation, taking the current aerial view as the target scene map. In this way, autonomous navigation and mapping can be performed.

Description

Automatic mapping method and device, mobile robot and readable storage medium
Technical Field
The application relates to the technical field of robots, in particular to an automatic mapping method and device, a mobile robot and a readable storage medium.
Background
With the new round of technological change brought by Industry 4.0, industrial scenes (such as warehouse logistics, intelligent factories and intelligent power stations) are developing rapidly towards greater intelligence, scale and informatization. More and more mobile robots are being introduced into industrial environments to perform various tasks such as operation, exploration and transportation.
In the existing scheme, a pre-established two-dimensional plane map needs to be stored in the mobile robot, and the mobile robot navigates according to this two-dimensional plane map during operation. However, when the actual scene changes (for example, an industrial production line is reconfigured or transformed), the map stored in the mobile robot may become invalid. In this case, the mobile robot can only be manually controlled to move through the changed scene again to collect the environmental information of the scene and then construct a new map. This approach requires manual control throughout the mapping process and is troublesome. Therefore, how to make a mobile robot navigate and build a map autonomously has become a technical problem that needs to be solved by those skilled in the art.
Disclosure of Invention
The embodiments of the application provide an automatic mapping method and device, a mobile robot and a readable storage medium, which enable autonomous navigation and mapping, thereby effectively enhancing the intelligence of the mobile robot and facilitating improvements in production and service efficiency.
Embodiments of the invention may be implemented as follows:
In a first aspect, the present invention provides an automatic mapping method, including:
acquiring a three-dimensional point cloud set of current detection points, and generating a current aerial view according to the three-dimensional point cloud set of the current detection points and the three-dimensional point cloud sets of the historical detection points;
performing a maximum likelihood path search under a collision-avoidance condition according to a local point cloud set, a local aerial view and a determined target point corresponding to the current sensing range of the current detection point, so as to obtain a target path reaching the target point, wherein the current sensing range is determined by a current sensing angle range and a sensing radius, the local aerial view is an isosceles triangle area overlapping the current sensing range in the current aerial view, the vertex of the isosceles triangle area is the current detection point and its waist length is a preset length set according to the sensing radius, the target point is a point located within the current sensing angle range and outside the local aerial view, the target path includes the current detection point, a first intermediate point, a second intermediate point and the target point, the first intermediate point is a point on the arc of the current sensing range, and the second intermediate point is a point on the bottom edge of the local aerial view belonging to an undetected area;
if the target path is obtained, controlling the mobile robot to move to the first intermediate point, and updating the first intermediate point to the current detection point;
if the target path is not obtained, rotating in a preset direction and searching for the target path again; and if the target path has still not been found after a full rotation, taking the current aerial view corresponding to the current detection point as a target scene map.
In an optional embodiment, the obtaining a three-dimensional point cloud set of current detection points includes:
acquiring an original three-dimensional point cloud and a two-dimensional image of the surrounding environment at the current detection point;
performing movable object recognition on the two-dimensional image to determine a target area of the two-dimensional image where a movable object is located;
projecting the original three-dimensional point cloud to a plane where the two-dimensional image is located according to a pre-calibrated projection conversion relation to obtain a projection point cloud;
deleting the projection point cloud located in the target area to obtain a processed projection point cloud;
and obtaining a three-dimensional point cloud set of the current detection point according to the projection conversion relation and the processed projection point cloud.
In an alternative embodiment, the generating a current aerial view from the three-dimensional point cloud sets of the current detection points and the three-dimensional point cloud sets of the historical detection points includes:
generating a point cloud map according to the three-dimensional point cloud set of each detection point;
carrying out voxelization processing on the point cloud map to obtain a voxel map;
and obtaining the current aerial view according to the voxel map.
In an alternative embodiment, the obtaining the current bird's eye view from the voxel map includes:
deleting voxels with a height greater than a preset height in the voxel map to obtain a processed voxel map, wherein the preset height is greater than or equal to the height of the mobile robot;
and generating the current aerial view according to the processed voxel map.
In an alternative embodiment, the generating the current aerial view from the processed voxel map includes:
projecting the processed voxel map to a bird's-eye view plane to obtain an initial bird's-eye view;
judging whether the width between adjacent immovable areas in the initial aerial view is smaller than a preset width, wherein the preset width is set according to the length and the width of the mobile robot;
if the width is smaller than the preset width, connecting the adjacent immovable areas to obtain the current bird's-eye view, wherein the current bird's-eye view comprises travelable areas, and a travelable area is an area between adjacent immovable areas whose width is not smaller than the preset width.
In an optional embodiment, the path search is performed according to a preset rule, where the preset rule includes: the mobile robot cannot move in the opposite direction for a preset duration.
In a second aspect, the present invention provides an automatic mapping apparatus, including:
the aerial view acquisition module is used for acquiring a three-dimensional point cloud set of current detection points and generating a current aerial view according to the three-dimensional point cloud set of the current detection points and the three-dimensional point cloud sets of the historical detection points;
a path searching module, configured to perform a maximum likelihood path search under a condition of avoiding a collision according to a local point cloud set, a local aerial view and a determined target point corresponding to a current sensing range of the current detection point, so as to obtain a target path reaching the target point, where the current sensing range is determined by a current sensing angle range and a sensing radius, the local aerial view is an isosceles triangle area overlapping with the current sensing range in the current aerial view, a vertex of the isosceles triangle area is the current detection point, a waist length of the isosceles triangle area is a preset length set according to the sensing radius, the target point is a point located within the current sensing angle range and outside the local aerial view, the target path includes the current detection point, a first intermediate point, a second intermediate point and the target point, and the first intermediate point is a point on an arc of the current sensing range, the second intermediate point is a point which belongs to an undetected area on the bottom edge of the local aerial view;
the path searching module is further configured to control the mobile robot to move to the first intermediate point and update the first intermediate point as the current detection point when the target path is obtained;
the path searching module is further configured to rotate in a preset direction and search for the target path again when the target path is not obtained; and, when the target path has still not been found after a full rotation, to take the current aerial view corresponding to the current detection point as a target scene map.
In an alternative embodiment, the manner of obtaining the three-dimensional point cloud set of the current detection point by the aerial view acquisition module includes:
acquiring an original three-dimensional point cloud and a two-dimensional image of the surrounding environment at the current detection point;
performing movable object recognition on the two-dimensional image to determine a target area of the two-dimensional image where a movable object is located;
projecting the original three-dimensional point cloud to a plane where the two-dimensional image is located according to a pre-calibrated projection conversion relation to obtain a projection point cloud;
deleting the projection point cloud located in the target area to obtain a processed projection point cloud;
and obtaining the three-dimensional point cloud set according to the projection conversion relation and the processed projection point cloud.
In a third aspect, the present invention provides a mobile robot, including a processor and a memory, where the memory stores machine executable instructions capable of being executed by the processor, and the processor can execute the machine executable instructions to implement the automatic mapping method according to any one of the foregoing embodiments.
In a fourth aspect, the present invention provides a readable storage medium, on which a computer program is stored, the computer program, when executed by a processor, implementing the automatic mapping method according to any one of the preceding embodiments.
According to the automatic mapping method and device, the mobile robot and the readable storage medium, a current aerial view is generated according to the three-dimensional point cloud sets of the current detection point and the historical detection points, and a maximum likelihood path search is then performed under a collision-avoidance condition according to the local point cloud set, the local aerial view and the target point corresponding to the current sensing range of the current detection point, so as to obtain a target path reaching the target point. The local aerial view is an isosceles triangle area overlapping the current sensing range in the current aerial view, the target point is a point located within the current sensing angle range and outside the local aerial view, the target path comprises the current detection point, a first intermediate point, a second intermediate point and the target point, the first intermediate point is a point on the arc of the current sensing range, and the second intermediate point is a point on the bottom edge of the local aerial view belonging to an undetected area. If the target path is obtained by the search, the mobile robot is controlled to move to the first intermediate point, and the first intermediate point is updated as the current detection point. If the target path is not found, the robot rotates by a certain angle and searches for the target path again; and if the target path is still not found after a full rotation, the current aerial view is taken as the target scene map. In this way, autonomous exploration and scene map construction of a closed scene can be completed without manual control throughout the process, the intelligence of the mobile robot can be enhanced, and production and service efficiency can be improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
Fig. 1 is a schematic block diagram of a mobile robot provided in an embodiment of the present application;
FIG. 2 is a schematic flow chart of an automatic mapping method according to an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating an implementation of the automatic mapping method according to an embodiment of the present disclosure;
FIG. 4 is a flowchart illustrating one of the sub-steps included in step S110 in FIG. 2;
FIG. 5 is a second schematic flowchart of the sub-steps included in step S110 in FIG. 2;
FIG. 6 is a diagram illustrating a maximum likelihood probability path search based on collision avoidance;
FIG. 7 is a schematic diagram of likelihood probability path search;
fig. 8 is a block diagram illustrating an automatic mapping apparatus according to an embodiment of the present application.
Reference numerals: 100 - mobile robot; 110 - memory; 120 - processor; 130 - communication unit; 200 - automatic mapping apparatus; 210 - bird's eye view acquisition module; 220 - path search module.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It is noted that relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Referring to fig. 1, fig. 1 is a block diagram of a mobile robot 100 according to an embodiment of the present disclosure. The mobile robot 100 may be, but is not limited to, an indoor low-speed robot. The mobile robot 100 may include a memory 110, a processor 120 and a communication unit 130. The memory 110, the processor 120 and the communication unit 130 are electrically connected to one another, directly or indirectly, to enable data transmission or interaction. For example, these components may be electrically connected to each other via one or more communication buses or signal lines.
The memory 110 is used to store programs or data. The memory 110 may be, but is not limited to, a Random Access Memory (RAM), a Read Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
The processor 120 is used to read/write data or programs stored in the memory 110 and perform corresponding functions. For example, the memory 110 stores therein an automatic mapping apparatus 200, and the automatic mapping apparatus 200 includes at least one software function module which can be stored in the memory 110 in the form of software or firmware (firmware). The processor 120 executes various functional applications and data processing by executing software programs and modules stored in the memory 110, such as the automatic mapping apparatus 200 in the embodiment of the present application, so as to implement the automatic mapping method in the embodiment of the present application.
The communication unit 130 is used to establish a communication connection between the mobile robot 100 and another communication terminal through a network, and to transmit and receive data through the network.
In this embodiment, the mobile robot 100 may further include a radar and an image capturing unit. The radar is used for obtaining three-dimensional point cloud data, and the radar can be a multi-line rotatable laser radar. The image acquisition unit is used for acquiring a two-dimensional image of the surrounding environment at each detection point, wherein the two-dimensional image can be an RGB (Red Green Blue) image. The image acquisition unit may be a camera or the like that can obtain an image. Alternatively, as a possible implementation, the radar and image acquisition unit may be disposed at a chassis of the mobile robot 100.
It should be understood that the structure shown in fig. 1 is only a schematic structural diagram of the mobile robot 100, and the mobile robot 100 may include more or less components than those shown in fig. 1, or have a different configuration than that shown in fig. 1. The components shown in fig. 1 may be implemented in hardware, software, or a combination thereof.
Referring to fig. 2 and fig. 3, fig. 2 is a schematic flow chart of an automatic mapping method according to an embodiment of the present application, and fig. 3 is a schematic diagram of the execution of the automatic mapping method according to the embodiment of the present application. The method may be applied to the mobile robot 100. The specific flow of the automatic mapping method is described in detail below with reference to fig. 2 and 3. The method may include steps S110 to S140.
And step S110, obtaining a three-dimensional point cloud set of the current detection point, and generating the current aerial view according to the three-dimensional point cloud set of the current detection point and the three-dimensional point cloud sets of the historical detection points.
In this embodiment, after reaching a detection point, the detection point may be used as a current detection point. That is, the current detection point indicates the detection point at which the current is located. And obtaining original three-dimensional point clouds of the surrounding environment at the current detection point through a radar, and obtaining a three-dimensional point cloud set of the current detection point according to the obtained original three-dimensional point clouds. The three-dimensional point cloud set comprises three-dimensional point clouds corresponding to surrounding environments. The history detection points indicate detection points that have traveled past. For example, detection points 1, 2, and 3 are sequentially present in the traveling order, and if the current detection point is 3, the detection points 1 and 2 can be regarded as history detection points. After each three-dimensional point cloud set of one detection point is obtained, it can be saved for subsequent use.
In this way, the three-dimensional point cloud set of each detection point (including the current detection point and the historical detection points) can be obtained, and the current aerial view can be generated. It is understood that, if the current detection point is the first detection point of the current mapping, the three-dimensional point cloud sets of the historical detection points are empty when the current aerial view corresponding to the current detection point is generated; that is, the current aerial view of the current detection point is generated only from the three-dimensional point cloud set of the current detection point. The aerial view is a 2D bird's-eye view, which is convenient for subsequent navigation. The current aerial view is an aerial view of the scene of the area already traveled.
How to obtain a three-dimensional point cloud set of current detection points is explained in detail below.
Referring to fig. 4, fig. 4 is a flowchart illustrating one of the sub-steps included in step S110 in fig. 2. In the present embodiment, step S110 may include substeps S111 to substep S115.
And a substep S111, obtaining an original three-dimensional point cloud and a two-dimensional image of the surrounding environment at the current detection point.
In this embodiment, at the current detection point, the original three-dimensional point cloud of the surrounding environment of the current detection point can be obtained through the radar, and simultaneously, the two-dimensional image of the surrounding environment can be obtained through the image acquisition unit. The image acquisition unit can be triggered according to the acquisition frequency of the radar so that the image acquisition unit can simultaneously acquire the two-dimensional image, and therefore the relative synchronization of the acquired original three-dimensional point cloud and the two-dimensional image is guaranteed. The original three-dimensional point cloud is point cloud data which is obtained by a radar directly and is not subjected to screening processing.
And a substep S112, performing movable object recognition on the two-dimensional image to determine a target region of the two-dimensional image where the movable object is located.
After the two-dimensional image is obtained, the target area of the movable object included in the two-dimensional image can be obtained by performing movable object recognition on the two-dimensional image. Alternatively, the target area may be represented by bounding box coordinates, i.e., the top-left and bottom-right corners of the bounding box (Xmin, Ymin, Xmax, Ymax) may be used to represent the target area in which a movable object is located.
As an alternative embodiment, the deep learning detection network may be trained according to the image of the pre-acquired movable object, so as to obtain a trained target detection model shown in fig. 3. The image of the movable object acquired in advance can be determined according to the movable object possibly existing in the scene corresponding to the actual mapping, that is, the specific category of the movable object can be set according to the actual scene requirement. Then, movable object recognition can be performed on the two-dimensional image through the target detection model, so that the target area is determined.
The projection conversion relationship between the two-dimensional image and the original three-dimensional point cloud can also be calibrated in advance for use in sub-step S113. As shown in fig. 3, during calibration, extrinsic calibration may be performed first, that is, the coordinate information of the corner points of a square calibration plate is determined in both the two-dimensional image and the three-dimensional point cloud. Specifically: a square calibration plate can be arranged, a two-dimensional image A used for calibration is then obtained through the image acquisition unit, and the positions P′c(x, y), c ∈ [1, 2, 3, 4], of the corner points of the square calibration plate in the two-dimensional image A are obtained, namely the two-dimensional coordinates of the corner points of the square calibration plate in the two-dimensional image A; a three-dimensional point cloud P(x, y, z) of the corner points of the square calibration plate can also be obtained by the radar. For the situation in which the sparse point cloud cannot guarantee that the corner points can be obtained directly and the point cloud can only support obtaining the edges of the calibration plate, the coordinates of the intersection points of the four edges, i.e., the three-dimensional coordinates of the corner points of the calibration plate, can be calculated by combining the edge-length constraint of the calibration plate with the plane constraint of the calibration plate.
As described in the above process, multiple samplings may be performed, and multiple sets of information pairs may be generated from the sampled information, where each set of information pairs includes the two-dimensional coordinates and the three-dimensional coordinates corresponding to the same corner point. Then, a projective transformation matrix from P′ to P may be calculated based on random sample consensus, where P′E = P and E represents the projective transformation matrix (which may also be referred to as a projection matrix), i.e., the Perspective-n-Point (PnP) problem of mapping the calibration plate corner points from the two-dimensional space to the three-dimensional space. Thus, the projection matrix serving as the projection conversion relationship can be obtained by extrinsic calibration.
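The patent does not give an implementation of this calibration step. As an illustration only, a minimal sketch of turning the accumulated corner correspondences into a projection matrix, assuming a known camera intrinsic matrix K and using OpenCV's RANSAC-based PnP solver, could look as follows (all names are illustrative, not part of the patent):

    import numpy as np
    import cv2

    def calibrate_projection(corners_3d, corners_2d, K, dist_coeffs=None):
        """Estimate the radar-to-image projection from calibration-plate corner correspondences.

        corners_3d: (N, 3) corner coordinates P from the radar point cloud.
        corners_2d: (N, 2) matching corner pixel coordinates P' in the two-dimensional image.
        Returns a 3x4 projection matrix E (assumed pinhole model: P' ~ K [R | t] P).
        """
        corners_3d = np.ascontiguousarray(corners_3d, dtype=np.float64)
        corners_2d = np.ascontiguousarray(corners_2d, dtype=np.float64)
        if dist_coeffs is None:
            dist_coeffs = np.zeros(5)
        # RANSAC-based PnP, mirroring the "random sample consensus" step described above.
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(corners_3d, corners_2d, K, dist_coeffs)
        if not ok:
            raise RuntimeError("PnP calibration failed")
        R, _ = cv2.Rodrigues(rvec)
        return K @ np.hstack([R, tvec.reshape(3, 1)])  # 3x4 projection matrix E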
And a substep S113, projecting the original three-dimensional point cloud to the plane of the two-dimensional image according to a pre-calibrated projection conversion relation to obtain a projection point cloud.
With the projection conversion relationship, the original three-dimensional point cloud corresponding to the current detection point and the two-dimensional image obtained, the original three-dimensional point cloud is projected onto the plane of the two-dimensional image through the projection conversion relationship, and the projection point cloud corresponding to the original three-dimensional point cloud is obtained after the projection.
And a substep S114, deleting the projection point cloud located in the target area to obtain the processed projection point cloud.
And a substep S115, obtaining a three-dimensional point cloud set of the current detection point according to the projection conversion relation and the processed projection point cloud.
The obtained projection point cloud can then be filtered according to the target area to obtain the processed projection point cloud. The processed projection point cloud is back-projected into the original three-dimensional space according to the projection conversion relationship, thereby obtaining the three-dimensional point clouds contained in the three-dimensional point cloud set of the current detection point. In this way, the influence of movable objects in the scene on the subsequent mapping can be eliminated.
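A minimal sketch of sub-steps S113 to S115 is given below. It assumes the 3x4 projection matrix E obtained from the calibration above and movable-object boxes in (Xmin, Ymin, Xmax, Ymax) form; the names and the pinhole-style projection are illustrative assumptions, not the patentee's implementation.

    import numpy as np

    def filter_movable_objects(points_3d, E, boxes, image_shape):
        """Return the original 3-D points whose projections fall outside all movable-object boxes."""
        n = points_3d.shape[0]
        homo = np.hstack([points_3d, np.ones((n, 1))])   # (N, 4) homogeneous lidar points
        proj = homo @ E.T                                # (N, 3) homogeneous pixel coordinates
        in_front = proj[:, 2] > 1e-6
        z = np.where(in_front, proj[:, 2], 1.0)
        u, v = proj[:, 0] / z, proj[:, 1] / z
        h, w = image_shape[:2]
        on_image = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)

        keep = np.ones(n, dtype=bool)
        for (xmin, ymin, xmax, ymax) in boxes:
            inside = on_image & (u >= xmin) & (u <= xmax) & (v >= ymin) & (v <= ymax)
            keep &= ~inside                              # drop points projecting into a target area
        # Only rows are discarded, so the kept original 3-D points already are the
        # back-projected processed point cloud of sub-step S115.
        return points_3d[keep]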
Referring to fig. 5, fig. 5 is a second schematic flowchart illustrating the sub-steps included in step S110 in fig. 2. Step S110 may further include substeps S117 to substep S119.
And a substep S117, generating a point cloud map according to the three-dimensional point cloud set of each detection point.
In this embodiment, under the condition that a three-dimensional point cloud set of each detection point is obtained, a point cloud map corresponding to the current detection point may be generated through a SLAM (Simultaneous Localization And Mapping) algorithm. Optionally, a point cloud map corresponding to the current detection point may be directly generated according to the three-dimensional point cloud set of each detection point; the point cloud map corresponding to the current detection point can also be generated according to the three-dimensional point cloud set of the current detection point and the point cloud map corresponding to the previous detection point, wherein the point cloud map corresponding to the previous detection point is generated by the three-dimensional point cloud set of each historical detection point before the current detection point.
It is understood that the above manner is only an example, and how to generate the point cloud map according to the three-dimensional point cloud set of each detection point may be set according to actual situations.
As an optional implementation, a LOAM (Lidar Odometry and Mapping in Real-time) scene map may be constructed from the three-dimensional point cloud set of each detection point, and a scene point cloud map within the driving range of the mobile robot 100 is generated by superimposing point clouds based on the LOAM algorithm. When the point cloud map is generated, radar ranging needs to be performed to determine the pose of the radar at each detection point (the current detection point and the historical detection points), and the point cloud map corresponding to the current detection point is then generated by combining these poses with the three-dimensional point cloud sets.
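A sketch of assembling the point cloud map from the per-detection-point clouds, under the assumption that the radar pose (R, t) at each detection point has already been estimated by the odometry described below, could be as simple as:

    import numpy as np

    def build_point_cloud_map(clouds, poses):
        """clouds: list of (N_i, 3) point arrays; poses: list of (R, t) radar poses in the map frame."""
        transformed = [cloud @ R.T + t for cloud, (R, t) in zip(clouds, poses)]
        return np.vstack(transformed)  # superimposed scene point cloud map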
When radar ranging is performed, the point at which the robot starts this mapping is taken as the origin, and, for two consecutive frames of point cloud data, the points on each scanning line of the multi-line lidar are evaluated based on a smoothness formula. First, the point cloud obtained by the radar can be projected onto a cylindrical surface centered on the radar; with the minimum scanning angle of the radar denoted r, the horizontal length of this surface is 360°/r and the vertical length is the number of laser lines. Based on the smoothness formula, consecutive points within a window of length N on a single horizontal scanning line are taken for each calculation (the windows are discontinuous, i.e., the sliding-window step is smaller than the window length). The smoothness formula is:

c = \frac{1}{|S| \cdot \left\| X_{(k,i)} \right\|} \left\| \sum_{j \in S,\, j \neq i} \left( X_{(k,i)} - X_{(k,j)} \right) \right\|

where X_{(k,i)} is the coordinate of the current point, X_{(k,j)} is the coordinate of another point within the current window, and S is the set of points in the window.

Points whose calculated smoothness is below a threshold C_s are classified as edge features, and points above the threshold are classified as surface features. Then, based on a least-squares algorithm, the closest edge features of two adjacent frames are found and paired; the closest surface features of two adjacent frames are likewise found and paired. Next, the rotation and translation vectors [R, T] of the two adjacent frames are calculated based on the edge feature pairs and surface feature pairs, and the displacement and velocity information of the radar is obtained from [R, T]. Since the radar is mounted on the rigid body structure of the vehicle, the calculated displacement and velocity information of the radar is also the displacement and velocity information of the vehicle.
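A sketch of this smoothness-based feature split on one scan line is shown below; the window length, step and threshold C_s are assumed configuration values, and the sketch mirrors the formula above rather than the patentee's exact implementation.

    import numpy as np

    def split_features(line, window=10, step=5, c_s=0.1):
        """line: (M, 3) consecutive points of one scan ring. Returns edge and surface point indices."""
        edges, planes = [], []
        for start in range(0, len(line) - window + 1, step):   # sliding window, step < window
            win = line[start:start + window]
            for k, x_i in enumerate(win):
                diff = np.sum(x_i - win, axis=0)               # sum over the window of (X_i - X_j)
                c = np.linalg.norm(diff) / (window * np.linalg.norm(x_i) + 1e-9)
                (edges if c < c_s else planes).append(start + k)   # below C_s: edge; above: surface
        return np.unique(edges), np.unique(planes)

Matching these features between two consecutive frames and solving for [R, T] by least squares then yields the radar displacement, as described above.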
And a substep S118, performing voxelization processing on the point cloud map to obtain a voxel map.
The point cloud map corresponding to the current detection point can then be voxelized according to a set voxel size to generate the voxel map. The voxel map is composed of point cloud voxels, whose granularity is coarser than that of the points in the point cloud map. The data volume of the voxel map is smaller than that of the point cloud map, which speeds up subsequent calculations.
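A minimal voxelization sketch is shown below; the voxel size is an assumed tuning value and the centroid representation is one possible choice, not mandated by the patent.

    import numpy as np

    def voxelize(points, voxel_size=0.2):
        """Bucket points into voxels and represent each occupied voxel by the centroid of its points."""
        idx = np.floor(points / voxel_size).astype(np.int64)
        _, inverse, counts = np.unique(idx, axis=0, return_inverse=True, return_counts=True)
        centroids = np.zeros((counts.size, 3))
        np.add.at(centroids, inverse, points)                  # accumulate points per voxel
        return centroids / counts[:, None]                     # one representative point per voxel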
And a substep S119, obtaining the current aerial view according to the voxel map.
Alternatively, in an alternative embodiment, the voxel map may be directly projected onto a 2D bird's eye view plane, and the projection result may be used as the current bird's eye view corresponding to the current detection point.
Optionally, in another alternative embodiment, the obstacles in the voxel map may be searched and connected to construct an immovable connected region, that is, an immovable area, and the voxels whose height is greater than the preset height are deleted from the voxel map to obtain the processed voxel map. The preset height is greater than or equal to the height of the mobile robot 100, and the specific value can be set according to actual requirements. For example, if the height of the mobile robot is H, the preset height may be set to 1.5H. The processed voxel map may then be directly projected onto the 2D bird's-eye-view plane, and the projection result used as the current bird's-eye view corresponding to the current detection point.
Alternatively, the processed voxel map may be projected onto the 2D bird's-eye-view plane and the projection result used as an initial bird's-eye view. Then, for adjacent immovable areas in the initial bird's-eye view, it is judged whether the width between them is smaller than a preset width; if so, the adjacent immovable areas are connected, and the current bird's-eye view is obtained after this processing is completed. The preset width is set according to the length and width of the mobile robot 100. For example, if the mobile robot 100 is a wheeled robot, the preset width may be set to 1.2 × Max(W, C). Through the above process, the search range of the travelable (drivable) area can be further reduced, forming a safe corridor that satisfies the travel requirements of the mobile robot 100. A maximum-likelihood-probability path search based on collision avoidance is then performed within the safe corridor, and dynamic path planning is carried out based on the search result. Finally, the mobile robot 100 can perform autonomous exploration and scene map construction in an unknown environment under motion safety constraints.
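The following sketch illustrates the height filter, the bird's-eye-view projection and the closing of gaps narrower than the preset width. The grid resolution and the morphological-closing approach to connecting adjacent immovable areas are assumptions made for illustration, not the patent's prescribed method.

    import numpy as np
    from scipy.ndimage import binary_closing

    def birds_eye_view(voxels, robot_height, preset_width, resolution=0.1, size=400):
        """voxels: (N, 3) occupied voxel centers in metres; returns a boolean immovable-area grid."""
        kept = voxels[voxels[:, 2] <= 1.5 * robot_height]      # drop voxels above the preset height
        grid = np.zeros((size, size), dtype=bool)
        cols = np.clip((kept[:, 0] / resolution).astype(int) + size // 2, 0, size - 1)
        rows = np.clip((kept[:, 1] / resolution).astype(int) + size // 2, 0, size - 1)
        grid[rows, cols] = True                                # occupied (immovable) cells

        # Connect immovable areas whose gap is narrower than the preset width, so the
        # remaining free cells form the safe corridor the robot can actually fit through.
        half_cells = max(1, int(preset_width / (2 * resolution)))
        kernel = np.ones((2 * half_cells + 1, 2 * half_cells + 1), dtype=bool)
        return binary_closing(grid, structure=kernel)          # True: immovable, False: travelable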
And step S120, according to the local point cloud set corresponding to the current sensing range of the current detection point, the local aerial view and the determined target point, carrying out maximum likelihood path search under the condition of avoiding collision so as to obtain a target path reaching the target point.
The local point cloud set and the local bird's-eye view to be used may be determined based on the current pose information of the mobile robot 100, that is, based on the position and orientation of the current coordinates of the mobile robot 100 in the global coordinate system, and on the determined target point.
Referring to fig. 6 and 7, fig. 6 is a schematic diagram of maximum likelihood probability path search based on collision avoidance, and fig. 7 is a schematic diagram of likelihood probability path search. The local point cloud set is a point cloud within a range sensed by a radar of the mobile robot 100 at a current detection point in real time, that is, an initial three-dimensional point cloud detected by the radar at a current position and in a current orientation. The three-dimensional point cloud included in the local point cloud may be projected onto a 2D bird's eye view plane, and the resulting projection may be taken as an online map (online map).
The current sensing range (namely, the real-time sensing area) of the radar is determined by the current sensing angle range and the sensing radius. The current sensing angle range is the angle range a1 to a2, and the difference between a1 and a2 is determined by the radar. Ideally, as shown in fig. 6, the current sensing range of the radar is a sector (the grey sector in fig. 6) with opening angle θ and side length r_s, i.e., the sensing radius of the radar (the sensor in fig. 6) is r_s. S in fig. 6 denotes the current detection point at which the mobile robot 100 (i.e., the radar) is located.
The local aerial view is an isosceles triangle area overlapped with the current sensing range in the current aerial view, and the vertex of the isosceles triangle area is the current detection point, and the waist length is a preset length set according to the sensing radius. Optionally, the waist length is not greater than 2 times of the sensing radius, and the specific value may be set according to actual requirements.
As shown in fig. 6, based on the opening angle θ, the isosceles triangle area that overlaps the current sensing range is selected from the current bird's-eye view (i.e., the global map in fig. 6) as the local bird's-eye view. The local bird's-eye view can be used as an offline map. The waist length of the isosceles triangle can be 1.5·r_s. In practice, the local bird's-eye view is filled to different extents due to the presence of irregular obstacles.
The target point is a point that is located within the current sensing angle range and outside the local bird's-eye view. If the waists of the isosceles triangle are extended, the selected target point D may be located in the extended area, as shown in fig. 6. As an alternative embodiment, the farthest distance between a point in the current bird's-eye view and the current detection point may be taken as the radius r_max; then, taking the current detection point as the vertex, a point is randomly selected as the target point on the edge of the sector with opening angle θ and radius r_max.
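A sketch of this target-point selection, with illustrative names and a uniform random heading inside the opening angle, might be:

    import numpy as np

    def pick_target_point(bev_points, current_xy, a1, a2):
        """bev_points: (N, 2) current bird's-eye-view coordinates; a1, a2: sensing angle range (rad)."""
        r_max = np.max(np.linalg.norm(bev_points - current_xy, axis=1))   # farthest BEV point
        heading = np.random.uniform(a1, a2)                               # random direction in the sector
        return current_xy + r_max * np.array([np.cos(heading), np.sin(heading)])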
Therefore, through the online map and the offline map, the space between the current position of the radar and the selected target point can be divided into three parts, namely the sensor area (the current sensing range of the radar), the map area (the area corresponding to the local bird's-eye view) and the unknown area (the area between the local bird's-eye view and the target point D) shown in fig. 7, and a probability path search model is constructed to search for a target path. The target path comprises the current detection point, a first intermediate point, a second intermediate point and the target point, where the first intermediate point is a point on the arc of the current sensing range and the second intermediate point is a point on the bottom edge of the local bird's-eye view belonging to an undetected area. As shown in fig. 6, the current detection point is S, the first intermediate point is F, the second intermediate point is M and the target point is D. The target path can be searched for in the above manner.
The online map contains both moving and static obstacles; its sensing range is relatively limited, but its information is the most accurate. The sensing range of the local bird's-eye view is larger, but unexplored areas exist in it, and its information is relatively accurate. The remaining area up to the temporary destination D is a completely unknown area. By searching accurately in the sensor area, approximately in the relatively stable map area and broadly in the unknown area, the search accuracy decreases step by step while the search range increases step by step. Based on a likelihood estimation formula, the path with the maximum probability from the current position to the target point is selected for path planning.
The optimal path within the sensing range can be searched for based on the probability path search model, which can be expressed as:

X_s = \arg\max P_D(x_s)

P_D(x_s) = \int P_D(x_f)\, P_D(x_f \mid x_s)\, P_D\big((x_f \mid x_m) \mid x_s\big)\, dx_f

where X_s represents the objective function, which maximizes the probability of reaching point D from the current detection point S; P_D(x_s) represents the probability of reaching point D from point S, and x_s represents the current detection point S; x_f is a point F on the edge of the sensor confidence interval (i.e., the current sensing range); x_m is a point M on the boundary of the local bird's-eye view; P_D(x_f) represents the probability that point D can be reached from point F; P_D(x_f | x_s) represents the conditional probability of reaching point F starting from the current detection point S; P_D((x_f | x_m) | x_s) represents the conditional probability of reaching point D through point M, conditioned on starting from the current detection point S, passing through point F and arriving at point M; and ζ_i denotes the conditional distribution of navigating the local bird's-eye view from the current detection point to the target point D, which can be approximated as a finite voxel path computation on the road.
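Since the model above is an optimization over the three regions, a discretized sketch of the search is given below: candidate points F are sampled on the sensing arc, candidate points M on the bottom edge of the local bird's-eye view, and each S-F-M-D chain is scored by the product of the three probability terms. The collision-free probability functions and the constant prior for the unknown region are assumptions, not the patent's likelihood model.

    import numpy as np

    def search_target_path(s, candidates_f, candidates_m, d, p_sensor_free, p_map_free, p_unknown=0.5):
        """candidates_f / candidates_m: (Nf, 2) and (Nm, 2) points on the sensing arc and on the
        bottom edge of the local bird's-eye view; p_sensor_free(a, b) and p_map_free(a, b) return
        the probability that the straight segment a-b is collision free in their region."""
        best, best_score = None, 0.0
        for f in candidates_f:
            p_sf = p_sensor_free(s, f)              # accurate search inside the sensor region
            if p_sf == 0.0:
                continue
            for m in candidates_m:
                p_fm = p_map_free(f, m)             # approximate search in the map region
                score = p_sf * p_fm * p_unknown     # broad search: constant prior towards D
                if score > best_score:
                    best, best_score = [s, f, m, d], score
        return best, best_score                     # best is None if no feasible chain exists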
Optionally, the path search may be performed according to a preset rule. The preset rule includes: the mobile robot cannot move in the opposite direction within a preset duration. This prevents the search from falling into a local optimum.
Step S130, if the target path is obtained, controlling the mobile robot 100 to move to the first intermediate point, and updating the first intermediate point to the current detection point.
If the target path is obtained by the search, the mobile robot 100 may be moved to the first intermediate point, and the first intermediate point may be used as the current detection point, and then the step S110 may be executed again to start a new round of sensing.
Step S140, if the target path is not obtained, rotating according to a preset direction, and searching the target path again; and if the target path is not searched after the rotation of one circle, taking the current aerial view corresponding to the current detection point as a target scene map.
If the target path is not found within the current sensing range, the radar may be rotated in the preset direction (for example, clockwise), and after the rotation, step S120 is executed again to search for the target path within the rotated current sensing range. The rotation angle may be the opening angle θ. If the target path is found within the rotated current sensing range, step S130 is executed; if the target path is still not found within the rotated current sensing range, the radar rotates again.
If the target path is still not obtained after a full rotation, it can be considered that all point clouds in the scene to be mapped have been obtained, and the current aerial view corresponding to the current detection point can be used as the target scene map. Thus, the autonomous exploration and scene map construction of the closed scene are completed without manual control throughout the process.
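Putting steps S110 to S140 together, the overall exploration loop can be sketched as follows; every helper used here (sense_point_cloud, build_birds_eye_view, search_target_path, move_to, rotate_by) is an assumed placeholder for the corresponding operation described above, not an API defined by the patent.

    def explore(robot, opening_angle_deg):
        """Run the autonomous exploration loop; returns the final bird's-eye view as the scene map."""
        history = []
        while True:
            history.append(robot.sense_point_cloud())         # step S110: sense at the current detection point
            bev = robot.build_birds_eye_view(history)
            path, rotations = None, 0
            while path is None and rotations * opening_angle_deg < 360:
                path = robot.search_target_path(bev)           # step S120: maximum-likelihood path search
                if path is None:                               # step S140: rotate by the opening angle and retry
                    robot.rotate_by(opening_angle_deg)
                    rotations += 1
            if path is None:                                   # no path after a full rotation:
                return bev                                     # the current bird's-eye view is the target scene map
            robot.move_to(path.first_intermediate_point)       # step S130: F becomes the new current detection point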
In the embodiments of the application, the mobile robot 100's understanding of the environment is enhanced by fusing the data of the radar and the image acquisition unit; specifically, a mobile-robot environment sensing system based on fused data is constructed using a camera-radar extrinsic calibration and pre-fusion scheme. In addition, movable objects are filtered out through an image-based target detection algorithm and sensing-data projection, and odometry estimation and scene map construction are performed in combination with a laser SLAM algorithm, so that the influence of movable obstacles on the final map is avoided and the constructed map is more stable.
After the point cloud map is constructed from the point cloud, it is voxelized to obtain a voxel map, which achieves a first dimensionality reduction of the point cloud map. The dimensionality of the voxel map is then further reduced by height filtering and bird's-eye-view projection, and finally the safe corridor required for the moving process is constructed according to the volume of the mobile robot 100, completing the final dimensionality reduction, so that the path search space of the robot during movement is progressively compressed.
When path searching is carried out, based on maximum likelihood probability path searching of collision avoidance, a radar real-time perception result is used as an online map, a local aerial view of a part corresponding to a constructed SLAM map is used as an offline map, and a path searching space is compressed through search strategy optimization; meanwhile, the searching process is divided in a three-stage mode through a deterministic real-time sensing area, a high-deterministic historical map and an uncertain unknown map, a probability model is built, likelihood path searching is carried out, and planning is achieved through an optimization problem.
The automatic mapping method provided by the embodiments of the application can be used for indoor robot scene exploration and detection. Two-dimensional scene construction based on a single-line lidar is a common scheme for indoor low-speed robots, but it lacks semantic information, and potential dynamic targets in the scene are difficult to filter out. In the embodiments of the application, a multi-sensor data fusion sensing module is constructed based on the radar and images, target detection in the scene is realized based on image detection, and detection of common movable objects is thereby achieved; the detected targets are then associated with the three-dimensional point cloud through the pre-calibrated projection conversion relationship, and the movable-object point cloud is filtered out to obtain the remaining point cloud. On this basis, radar odometry and mapping based on point cloud registration are performed with the remaining point cloud, a voxel-based safe corridor is generated for the unexplored area, a collision-avoidance path search based on likelihood estimation is carried out within the corridor, autonomous navigation is realized based on the searched safe path, and autonomous exploration and scene map construction of the closed scene are completed.
In order to perform the corresponding steps in the above embodiments and the various possible implementations, an implementation of the automatic mapping apparatus 200 is given below. Optionally, the automatic mapping apparatus 200 may adopt the device structure of the mobile robot 100 shown in fig. 1. Further, referring to fig. 8, fig. 8 is a block diagram of the automatic mapping apparatus 200 according to an embodiment of the present disclosure. It should be noted that the basic principle and technical effects of the automatic mapping apparatus 200 provided in this embodiment are the same as those of the above embodiments; for the sake of brevity, for what is not mentioned in this embodiment, reference may be made to the corresponding contents of the above embodiments. The automatic mapping apparatus 200 may include: a bird's eye view acquisition module 210 and a path search module 220.
The bird's-eye view acquisition module 210 is configured to acquire a three-dimensional point cloud set of current detection points, and generate a current bird's-eye view according to the three-dimensional point cloud set of the current detection points and the three-dimensional point cloud sets of the historical detection points.
The path searching module 220 is configured to perform maximum likelihood path search under a condition of avoiding collision according to the local point cloud set corresponding to the current sensing range of the current detection point, the local aerial view, and the determined target point, so as to obtain a target path reaching the target point. The current sensing range is determined by a current sensing angle range and a sensing radius, the local aerial view is an isosceles triangle area which is overlapped with the current sensing range in the current aerial view, the vertex of the isosceles triangle area is the current detection point, the waist length is a preset length which is set according to the sensing radius, the target point is a point which is located in the current sensing angle range and is located outside the local aerial view, the target path comprises the current detection point, a first middle point, a second middle point and the target point, the first middle point is a point on an arc of the current sensing range, and the second middle point is a point which is located in an undetected area on the bottom edge of the local aerial view.
The path searching module 220 is further configured to control the mobile robot to move to the first intermediate point and update the first intermediate point as the current detection point when the target path is obtained.
The path searching module 220 is further configured to: when the target path is not obtained, rotating according to a preset direction, and searching the target path again; and after one rotation, if the target path is not searched yet, taking the current aerial view corresponding to the current detection point as a target scene map.
Optionally, in this embodiment, the manner of obtaining the three-dimensional point cloud set of the current detection point by the bird's eye view acquisition module 210 includes: acquiring an original three-dimensional point cloud and a two-dimensional image of the surrounding environment at the current detection point; performing movable object recognition on the two-dimensional image to determine a target area of the two-dimensional image where a movable object is located; projecting the original three-dimensional point cloud to a plane where the two-dimensional image is located according to a pre-calibrated projection conversion relation to obtain a projection point cloud; deleting the projection point cloud located in the target area to obtain a processed projection point cloud; and obtaining the three-dimensional point cloud set according to the projection conversion relation and the processed projection point cloud.
Alternatively, the modules may be stored in the memory 110 shown in fig. 1 in the form of software or Firmware (Firmware) or may be fixed in an Operating System (OS) of the mobile robot 100, and may be executed by the processor 120 in fig. 1. Meanwhile, data, codes of programs, and the like required to execute the above-described modules may be stored in the memory 110.
The embodiment of the application also provides a readable storage medium, on which a computer program is stored, and the computer program is executed by a processor to implement the automatic mapping method.
In summary, the embodiments of the present invention provide an automatic mapping method and device, a mobile robot and a readable storage medium. A current bird's-eye view is generated according to the three-dimensional point cloud sets of the current detection point and the historical detection points, and a maximum likelihood path search is then performed under a collision-avoidance condition according to the local point cloud set, the local bird's-eye view and the target point corresponding to the current sensing range of the current detection point, so as to obtain a target path to the target point. The local bird's-eye view is an isosceles triangle area overlapping the current sensing range in the current bird's-eye view, the target point is a point located within the current sensing angle range and outside the local bird's-eye view, the target path comprises the current detection point, a first intermediate point, a second intermediate point and the target point, the first intermediate point is a point on the arc of the current sensing range, and the second intermediate point is a point on the bottom edge of the local bird's-eye view belonging to an undetected area. If the target path is obtained by the search, the mobile robot is controlled to move to the first intermediate point, and the first intermediate point is updated as the current detection point. If the target path is not found, the robot rotates by a certain angle and searches for the target path again; and if the target path is still not found after a full rotation, the current bird's-eye view is taken as the target scene map. In this way, autonomous exploration and scene map construction of a closed scene can be completed without manual control throughout the process, the intelligence of the mobile robot can be enhanced, and production and service efficiency can be improved.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
If the functions are implemented in the form of software functional modules and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only of preferred embodiments of the present application and is not intended to limit the present application; those skilled in the art may make various modifications and variations. Any modification, equivalent replacement, improvement, and the like made within the spirit and principle of the present application shall fall within the protection scope of the present application.

Claims (10)

1. An automatic mapping method, characterized in that the method comprises:
acquiring a three-dimensional point cloud set of a current detection point, and generating a current aerial view according to the three-dimensional point cloud set of the current detection point and three-dimensional point cloud sets of historical detection points;
performing a maximum likelihood path search under a condition of avoiding collision according to a local point cloud set, a local aerial view and a determined target point corresponding to a current sensing range of the current detection point, so as to obtain a target path reaching the target point, wherein the current sensing range is determined by a current sensing angle range and a sensing radius, the local aerial view is an isosceles triangle area overlapping with the current sensing range in the current aerial view, a vertex of the isosceles triangle area is the current detection point, a waist length of the isosceles triangle area is a preset length set according to the sensing radius, the target point is a point located within the current sensing angle range and outside the local aerial view, the target path includes the current detection point, a first intermediate point, a second intermediate point and the target point, the first intermediate point is a point on an arc of the current sensing range, and the second intermediate point is a point on the bottom edge of the local aerial view that belongs to an undetected area;
if the target path is obtained, controlling the mobile robot to move to the first intermediate point, and updating the first intermediate point to the current detection point;
if the target path is not obtained, rotating in a preset direction, and searching for the target path again; and if the target path is still not found after one full rotation, taking the current aerial view corresponding to the current detection point as a target scene map.
2. The method of claim 1, wherein obtaining the three-dimensional point cloud set of the current detection point comprises:
acquiring an original three-dimensional point cloud and a two-dimensional image of the surrounding environment at the current detection point;
performing movable object recognition on the two-dimensional image to determine a target area of the two-dimensional image where a movable object is located;
projecting the original three-dimensional point cloud to a plane where the two-dimensional image is located according to a pre-calibrated projection conversion relation to obtain a projection point cloud;
deleting the projection point cloud located in the target area to obtain a processed projection point cloud;
and obtaining a three-dimensional point cloud set of the current detection point according to the projection conversion relation and the processed projection point cloud.
3. The method of claim 2, wherein generating the current aerial view according to the three-dimensional point cloud set of the current detection point and the three-dimensional point cloud sets of the historical detection points comprises:
generating a point cloud map according to the three-dimensional point cloud set of each detection point;
carrying out voxelization processing on the point cloud map to obtain a voxel map;
and obtaining the current aerial view according to the voxel map.
4. The method of claim 3, wherein obtaining the current aerial view according to the voxel map comprises:
deleting voxels with a height greater than a preset height in the voxel map to obtain a processed voxel map, wherein the preset height is greater than or equal to the height of the mobile robot;
and generating the current aerial view according to the processed voxel map.
5. The method of claim 4, wherein generating the current aerial view according to the processed voxel map comprises:
projecting the processed voxel map onto an aerial view plane to obtain an initial aerial view;
judging whether the width between adjacent immovable areas in the initial aerial view is smaller than a preset width, wherein the preset width is set according to the length and the width of the mobile robot;
if the width is smaller than the preset width, connecting the adjacent immovable areas to obtain the current aerial view, wherein the current aerial view comprises travelable areas, and the travelable areas are areas between adjacent immovable areas whose width is not smaller than the preset width.
6. The method of claim 1, wherein the path search is performed according to a preset rule, wherein the preset rule comprises: the mobile robot cannot move in the opposite direction for a preset duration.
7. An automatic map building apparatus, characterized in that the apparatus comprises:
the aerial view acquisition module is used for acquiring a three-dimensional point cloud set of current detection points and generating a current aerial view according to the three-dimensional point cloud set of the current detection points and the three-dimensional point cloud sets of the historical detection points;
a path searching module, configured to perform a maximum likelihood path search under a condition of avoiding collision according to a local point cloud set, a local aerial view and a determined target point corresponding to a current sensing range of the current detection point, so as to obtain a target path reaching the target point, wherein the current sensing range is determined by a current sensing angle range and a sensing radius, the local aerial view is an isosceles triangle area overlapping with the current sensing range in the current aerial view, a vertex of the isosceles triangle area is the current detection point, a waist length of the isosceles triangle area is a preset length set according to the sensing radius, the target point is a point located within the current sensing angle range and outside the local aerial view, the target path includes the current detection point, a first intermediate point, a second intermediate point and the target point, the first intermediate point is a point on an arc of the current sensing range, and the second intermediate point is a point on the bottom edge of the local aerial view that belongs to an undetected area;
the path searching module is further configured to control the mobile robot to move to the first intermediate point and update the first intermediate point as the current detection point when the target path is obtained;
the path searching module is further configured to rotate in a preset direction and search for the target path again when the target path is not obtained; and to take the current aerial view corresponding to the current detection point as a target scene map when the target path is still not found after one full rotation.
8. The apparatus of claim 7, wherein the manner in which the aerial view acquisition module obtains the three-dimensional point cloud set of the current detection point comprises:
acquiring an original three-dimensional point cloud and a two-dimensional image of the surrounding environment at the current detection point;
performing movable object recognition on the two-dimensional image to determine a target area of the two-dimensional image where a movable object is located;
projecting the original three-dimensional point cloud to a plane where the two-dimensional image is located according to a pre-calibrated projection conversion relation to obtain a projection point cloud;
deleting the projection point cloud located in the target area to obtain a processed projection point cloud;
and obtaining the three-dimensional point cloud set according to the projection conversion relation and the processed projection point cloud.
9. A mobile robot, comprising a processor and a memory, the memory storing machine-executable instructions executable by the processor to implement the automatic mapping method according to any one of claims 1 to 6.
10. A readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the automatic mapping method according to any one of claims 1 to 6.
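For illustration only, the following is a minimal sketch of the aerial view generation recited in claims 3 to 5: it voxelizes the merged point cloud map, discards voxels above the preset height, projects the remainder onto the aerial view plane, and connects adjacent immovable areas whose gap is narrower than the preset width. The grid resolution, the row/column gap scan, and the function name bird_eye_view are assumptions of this sketch and are not prescribed by the claims.

```python
import numpy as np

def bird_eye_view(points, voxel_size=0.1, preset_height=0.6, preset_width=0.5):
    """Occupancy-style aerial view from a merged point cloud map (illustrative sketch).

    points:        (N, 3) point cloud map in a common frame, coordinates in metres.
    voxel_size:    edge length of a voxel / grid cell (assumed value).
    preset_height: height threshold, at least the height of the mobile robot.
    preset_width:  minimum travelable width, set from the robot's length and width.
    Returns a 2D boolean grid where True marks an immovable (non-travelable) cell.
    """
    # Voxelization and projection onto the aerial view plane are combined here:
    # points above the preset height are dropped, the rest are binned into cells.
    pts = points[points[:, 2] <= preset_height]
    ij = np.floor(pts[:, :2] / voxel_size).astype(int)
    ij -= ij.min(axis=0)                              # shift indices so they start at zero
    grid = np.zeros(tuple(ij.max(axis=0) + 1), dtype=bool)
    grid[ij[:, 0], ij[:, 1]] = True

    def close_narrow_gaps(occ):
        # Connect adjacent immovable cells along one axis when the free gap
        # between them is narrower than the preset width.
        for row in range(occ.shape[0]):
            cols = np.flatnonzero(occ[row])
            for a, b in zip(cols[:-1], cols[1:]):
                gap = (b - a - 1) * voxel_size
                if 0 < gap < preset_width:
                    occ[row, a + 1:b] = True

    close_narrow_gaps(grid)    # scan along x
    close_narrow_gaps(grid.T)  # scan along y (writes through the transposed view)
    return grid
```

The row/column scan is a simplification of the adjacency check; the claims do not prescribe how the width between immovable areas is measured, only that gaps narrower than the preset width are not treated as travelable areas.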
CN202110138351.9A 2021-02-01 2021-02-01 Automatic drawing establishing method and device, mobile robot and readable storage medium Active CN112964263B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110138351.9A CN112964263B (en) 2021-02-01 2021-02-01 Automatic drawing establishing method and device, mobile robot and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110138351.9A CN112964263B (en) 2021-02-01 2021-02-01 Automatic drawing establishing method and device, mobile robot and readable storage medium

Publications (2)

Publication Number Publication Date
CN112964263A true CN112964263A (en) 2021-06-15
CN112964263B CN112964263B (en) 2022-11-01

Family

ID=76272946

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110138351.9A Active CN112964263B (en) 2021-02-01 2021-02-01 Automatic drawing establishing method and device, mobile robot and readable storage medium

Country Status (1)

Country Link
CN (1) CN112964263B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10860034B1 (en) * 2017-09-27 2020-12-08 Apple Inc. Barrier detection
CN109285220A (en) * 2018-08-30 2019-01-29 百度在线网络技术(北京)有限公司 A kind of generation method, device, equipment and the storage medium of three-dimensional scenic map
CN111670339A (en) * 2019-03-08 2020-09-15 深圳市大疆创新科技有限公司 Techniques for collaborative mapping between unmanned aerial vehicles and ground vehicles
US20200293751A1 (en) * 2019-03-11 2020-09-17 Beijing Horizon Robotics Technology Research And Development Co., Ltd. Map construction method, electronic device and readable storage medium
CN110501017A (en) * 2019-08-12 2019-11-26 华南理工大学 A kind of Mobile Robotics Navigation based on ORB_SLAM2 ground drawing generating method
CN111784836A (en) * 2020-06-28 2020-10-16 北京百度网讯科技有限公司 High-precision map generation method, device and equipment and readable storage medium
CN111912418A (en) * 2020-07-16 2020-11-10 知行汽车科技(苏州)有限公司 Method, device and medium for deleting obstacles in non-driving area of mobile carrier

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LI YANQI ET AL.: "Fusion Strategy of Multi-sensor Based Object Detection for Self-driving Vehicles", 《2020 INTERNATIONAL WIRELESS COMMUNICATIONS AND MOBILE COMPUTING (IWCMC)》 *
CHENG CHUANQI: "Research on Key Technologies of Autonomous Navigation for Mobile Robots in Unstructured Scenes", 《China Doctoral Dissertations Full-text Database (Information Science and Technology)》 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115100630A (en) * 2022-07-04 2022-09-23 小米汽车科技有限公司 Obstacle detection method, obstacle detection device, vehicle, medium, and chip
CN115100630B (en) * 2022-07-04 2023-07-14 小米汽车科技有限公司 Obstacle detection method, obstacle detection device, vehicle, medium and chip
CN115578694A (en) * 2022-11-18 2023-01-06 合肥英特灵达信息技术有限公司 Video analysis computing power scheduling method, system, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN112964263B (en) 2022-11-01

Similar Documents

Publication Publication Date Title
JP7341652B2 (en) Information processing device, information processing method, program, and system
CN108776492B (en) Binocular camera-based autonomous obstacle avoidance and navigation method for quadcopter
CN111797734B (en) Vehicle point cloud data processing method, device, equipment and storage medium
Heng et al. Autonomous visual mapping and exploration with a micro aerial vehicle
Ali et al. Path planning and control of mobile robot in road environments using sensor fusion and active force control
US9251417B1 (en) Fast open doorway detection for autonomous robot exploration
US20170193830A1 (en) Controlling unmanned aerial vehicles to avoid obstacle collision
Stepan et al. Robust data fusion with occupancy grid
US20200359867A1 (en) Determining Region Attribute
US10872228B1 (en) Three-dimensional object detection
US20210365038A1 (en) Local sensing based autonomous navigation, and associated systems and methods
CN111220143B (en) Method and device for determining position and posture of imaging equipment
CN112964263B (en) Automatic drawing establishing method and device, mobile robot and readable storage medium
US10578453B2 (en) Render-based trajectory planning
Ravankar et al. A hybrid topological mapping and navigation method for large area robot mapping
Cherubini et al. A new tentacles-based technique for avoiding obstacles during visual navigation
CN110705385B (en) Method, device, equipment and medium for detecting angle of obstacle
Fedorenko et al. Global UGV path planning on point cloud maps created by UAV
KR20220129218A (en) Speed control method of unmanned vehicle to awareness the flight situation about an obstacle, and, unmanned vehicle the performed the method
CN110389587A (en) A kind of robot path planning&#39;s new method of target point dynamic change
US11677931B2 (en) Automated real-time calibration
WO2022115215A1 (en) Systems and methods for monocular based object detection
Arulmurugan et al. Kinematics and plane decomposition algorithm for non linear path planning navigation and tracking of unmanned aerial vehicles
CN112597946A (en) Obstacle representation method and device, electronic equipment and readable storage medium
WO2024036984A1 (en) Target localization method and related system, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220920

Address after: Room 1016, 10th Floor, Building 1, No. 768, Jianghong Road, Changhe Street, Binjiang District, Hangzhou, Zhejiang 310000

Applicant after: Hangzhou Hejian Technology Partnership (L.P.)

Address before: 310000 room 1014, 10th floor, building 1, no.768 Jianghong Road, Changhe street, Binjiang District, Hangzhou City, Zhejiang Province

Applicant before: Hangzhou Weishi Technology Co.,Ltd.

GR01 Patent grant