EP4083921A1 - Method for constructing a local point cloud map and vision robot - Google Patents

Method for constructing a local point cloud map and vision robot

Info

Publication number
EP4083921A1
Authority
EP
European Patent Office
Prior art keywords
mobile robot
preset
key frame
point cloud
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP21884422.3A
Other languages
German (de)
English (en)
Other versions
EP4083921A4 (fr)
Inventor
Yongxian YAN
Qinwei LAI
Gangjun XIAO
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Amicro Semiconductor Co Ltd
Original Assignee
Zhuhai Amicro Semiconductor Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Amicro Semiconductor Co Ltd filed Critical Zhuhai Amicro Semiconductor Co Ltd
Publication of EP4083921A1 publication Critical patent/EP4083921A1/fr
Publication of EP4083921A4 publication Critical patent/EP4083921A4/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02Sensing devices
    • B25J19/021Optical sensing devices
    • B25J19/023Optical sensing devices including video camera means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B25J9/1666Avoiding collision or forbidden zones
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804Creation or updating of map data
    • G01C21/3807Creation or updating of map data characterised by the type of data
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804Creation or updating of map data
    • G01C21/3833Creation or updating of map data characterised by the source of data
    • G01C21/3837Data obtained from a single source
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S17/8943D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/40Image enhancement or restoration by the use of histogram techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/593Depth or shape recovery from multiple images from stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/40Robotics, robotics mapping to robotics vision
    • G05B2219/40298Manipulator on vehicle, wheels, mobile
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Definitions

  • the disclosure relates to the technical field of local point cloud maps, in particular to a method for building a local point cloud map based on point cloud data, and a vision robot.
  • a three-dimensional time-of-flight (3d-tof) technology can restore a 3d point cloud map within an effective range, but a 3d-tof camera has the following problem: a sufficiently accurate depth value cannot be obtained by 3d-tof, so the measurement error is usually too large. Specifically, when an obstacle is too close or too far, there is an obvious error in the depth value, so an obstacle that is too close cannot be effectively recognized by single-frame-based depth data analysis. Therefore, a more accurate point cloud map is required to complete positioning of an obstacle.
  • in a laser positioning initialization method disclosed in Chinese patent CN110031825A, multiple frames of laser point clouds which are continuously acquired are spliced to perform point cloud sampling to build a 3d coordinate system, and the sampled point cloud information is stored in a real-time histogram.
  • however, a mass of feature matrix operations needs to be invoked, and a plurality of coordinate systems needs to be built to establish the real-time 3d histogram; furthermore, the feature matching operation corresponding to the point cloud data is too complex, which is not favorable for building an accurate and simple local point cloud map; and the above problem of positioning and recognizing an obstacle which is too close still cannot be solved.
  • the disclosure discloses a method for building a local point cloud map. The specific technical solutions are as follows.
  • the method for building the local point cloud map includes: step 1, in a moving process of a mobile robot, a 3d-tof camera of the mobile robot is controlled to keep collecting depth images, and according to a preset salient pose change condition, a current frame of depth image collected in real time by the 3d-tof camera is interpolated into a key frame sequence; then step 2 is executed; step 2, preset effective feature points of the depth images in the key frame sequence are subjected to pose transformation so as to become point clouds in a camera coordinate system, and step 3 is executed; step 3, a preset target planar region is configured on a level ground within a real-time visual angle range of the 3d-tof camera of the mobile robot, and detection blocks matched with the preset target planar region are set in the preset target planar region to mark the point clouds over the preset target planar region in the form of a 3d histogram, and step 4 is executed; step 4, according to height position distribution features of the point clouds in all the detection blocks, point clouds in a discrete distribution state in the corresponding detection blocks are first culled out, and then point clouds in the detection blocks with height features of blocked collection visual lines are culled out; then step 5 is executed; and step 5, positions of the remaining point clouds distributed in all the detection blocks are marked as 3d positions of the local point cloud map, so as to build the local point cloud map.
  • this technical solution has the advantages that point clouds which reflect positions of a large-range region around the robot are sampled according to the preset salient pose change condition, whereby the mobile robot can use these key frames to build a local point cloud map that completely covers the distribution of obstacles in front of the mobile robot; point clouds at different height positions are described in the form of a 3d histogram, which is convenient for removing stray points and point clouds whose collection visual line is blocked at the current position by using the distribution features of the point clouds in the 3d regions of different detection blocks, thus improving the adaptability of the local point cloud map in scenarios with too close and too far obstacles and significantly reducing the difficulty in position recognition and positioning of a super-close obstacle and the probability of misjudgment.
  • interpolating, according to the preset salient pose change condition, the current frame of depth image collected in real time by the 3d-tof camera into the key frame sequence specifically includes: relative poses between a preset effective feature point in the current frame of depth image currently collected by the 3d-tof camera of the mobile robot and the same preset effective feature point in each key frame in the key frame sequence are respectively calculated, wherein the key frame sequence is a sequence of continuously arranged depth images which are pre-stored in the mobile robot and include the preset effective feature points; when all the calculated relative poses satisfy the preset salient pose change condition, the current frame of depth image currently collected by the 3d-tof camera is configured to be a new key frame, and the new key frame is interpolated into the key frame sequence to enable a key frame which participates in the foregoing relative pose calculation next time to reflect the latest relative position relationship of the mobile robot, wherein the preset effective feature point is used for representing a distribution position feature of an obstacle in front of the mobile robot within the visual angle range of the 3d-tof camera, and the distribution position feature of the obstacle includes presence of the obstacle and/or absence of the obstacle.
  • this technical solution has the advantages that the mobile robot makes use of these key frames to build the local point cloud map that can cover, in a large range, the distribution of the obstacles in front of the mobile robot, so that the following problem is solved by performing fusion processing on relative poses of the current frame of depth image and these key frames that support update: it is difficult for the mobile robot to recognize, on the basis of a single frame of depth image that is collected in real time, an obstacle close to an edge position of the body or an obstacle which is too close to the body.
  • a distance and a deflection angle between the preset effective feature point in each key frame in the key frame sequence and the 3d-tof camera under a corresponding key frame are both converted, in a translational rotation manner, to a camera coordinate system to which the current frame of depth image correspondingly belongs, so as to achieve the pose transformation, wherein a map coordinate system of the local point cloud map is the camera coordinate system corresponding to the current frame of depth image. It is favorable for building a more accurate local point cloud map and convenient for instant positioning navigation of the mobile robot.
  • the calculated relative poses include: a depth distance variation and a corresponding deflection angle variation between the preset effective feature point in the current frame of depth image currently collected by the 3d-tof camera of the mobile robot and the same preset effective feature point in each key frame in the key frame sequence after the pose transformation; and when all the depth distance variations are greater than a preset distance change threshold, and all the deflection angle variations are greater than a preset angle change threshold, the preset salient pose change condition is determined to be satisfied.
  • this technical solution has the advantage that by means of determining whether a variation of a distance between actual points (each of which is considered to be stationary) corresponding to the preset effective feature points of two frames of depth images and the 3d-tof camera is large enough, and determining whether a variation of a deflection angle of the actual points (each of which is considered to be stationary) corresponding to the preset effective feature points of the two frames of depth images relative to the 3d-tof camera is large enough, whether a displacement of the body is large enough or an angle of rotation of the body is large enough is represented, thus ensuring a sufficient gap between a depth image configured as a new key frame and the current frame of depth image; clear and wide-area peripheral information of the robot can be provided; and the depth images in the key frame sequence to which the current frame of depth image belongs cover a large enough visual range (such as an angle, seen from the left and seen from the right, or a displacement, seen closely and seen from afar, wherein the pose of the robot should be transformed greatly among these angle positions), so that both a key frame including an obstacle and a key frame not including an obstacle exist in the depth images in the key frame sequence.
  • the distance between the same preset effective feature point in each key frame in the key frame sequence and the 3d-tof camera under the corresponding key frame is a depth distance from an actual position corresponding to the preset effective feature point of the depth image which is pre-collected by the 3d-tof camera and interpolated into the key frame sequence to an imaging plane of the 3d-tof camera, and a variation of the depth distance is used for representing a degree of a displacement of the mobile robot;
  • the deflection angle of the same preset effective feature point in each key frame in the key frame sequence relative to the 3d-tof camera under the corresponding key frame is a lens orientation angle when the 3d-tof camera collects the depth image that is interpolated into the key frame sequence, and a variation of the lens orientation angle is used for representing an angle of rotation of the mobile robot.
  • this technical solution is convenient for screening out multiple key frames that represent large relative poses of the mobile robot; particularly after the body of the mobile robot rotates in a larger angle or has a larger displacement, if the 3d-tof camera cannot recognize a too close obstacle, an exact position of the too close obstacle is positioned by the assistance of pre-stored associated effective feature points (that is, pre-stored key frames that include obstacles) in the key frame sequence.
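  • purely as an illustration of the key frame screening described above, the following Python sketch checks the preset salient pose change condition; the data layout (each preset effective feature point observed as a depth distance and a deflection angle, already transformed into the current camera coordinate system) and the threshold values are assumptions, not values given by the disclosure.

```python
import math
from dataclasses import dataclass

# Assumed thresholds; the disclosure only requires "preset" change thresholds.
DIST_CHANGE_THRESHOLD_MM = 100.0                  # preset distance change threshold
ANGLE_CHANGE_THRESHOLD_RAD = math.radians(15.0)   # preset angle change threshold


@dataclass
class Observation:
    """One preset effective feature point as seen from one frame."""
    depth_mm: float        # depth distance from the point to the camera imaging plane
    deflection_rad: float  # lens orientation angle toward the point


def satisfies_salient_pose_change(current: dict, key_frames: list) -> bool:
    """Return True when the current frame should become a new key frame.

    `current` maps a feature-point id to its Observation in the current frame;
    each entry of `key_frames` maps the same ids to Observations after pose
    transformation into the current camera coordinate system.  The condition
    holds only when, for every key frame, all depth distance variations exceed
    the distance threshold and all deflection angle variations exceed the
    angle threshold.
    """
    for frame in key_frames:
        for point_id, obs in current.items():
            if point_id not in frame:
                continue  # the same effective feature point must exist in both frames
            ref = frame[point_id]
            if abs(obs.depth_mm - ref.depth_mm) <= DIST_CHANGE_THRESHOLD_MM:
                return False
            if abs(obs.deflection_rad - ref.deflection_rad) <= ANGLE_CHANGE_THRESHOLD_RAD:
                return False
    return True
```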
  • the step 1 includes: when the 3d-tof camera of the mobile robot currently collects a current frame of depth image, an inertial sensor configured inside the mobile robot to measure a current pose of the mobile robot is invoked; then a pre-stored real-time mobile robot pose corresponding to each key frame in the key frame sequence is invoked, and relative poses between the current pose of the mobile robot and the real-time mobile robot pose corresponding to each key frame are respectively calculated.
  • this technical solution has the advantage that the inertial sensor inside the body is used to calculate the relative pose between an effective acquisition position of the current frame of depth image and an effective acquisition position of each key frame to directly obtain a position variation between the preset effective feature points collected by the mobile robot, which is favorable for simplifying a robot positioning operation and is convenient for instant positioning navigation of the mobile robot.
  • the calculated relative poses include: a walking distance variation of the mobile robot generated between the preset effective feature point in the current frame of depth image currently collected by the 3d-tof camera of the mobile robot and the same or same type of preset effective feature point in the pre-collected key frame, and a deflection angle variation generated in the same acquisition process; when all the walking distance variations are greater than a preset distance change threshold, and all the deflection angle variations are greater than a preset angle change threshold, the preset salient pose change condition is determined to be satisfied, wherein the preset effective feature point is used for representing a distribution position feature of an obstacle in front of the mobile robot within a visual angle range of the 3d-tof camera, and the distribution position feature of the obstacle includes presence of the obstacle and/or absence of the obstacle.
  • this technical solution has the advantage that by means of determining whether a variation of the actual acquisition position points corresponding to two frames of depth images is large enough and determining whether a variation of the actual body deflection angles corresponding to the two frames of depth images is large enough, whether a displacement of the body is large enough or an angle of rotation of the body is large enough is represented, thus ensuring a sufficient gap between a depth image configured as a new key frame and the current frame of depth image; clear and wide-area peripheral information of the robot can be provided; and the depth images in the key frame sequence to which the current frame of depth image belongs cover a large enough visual range (such as an angle, seen from the left and seen from the right, or a displacement, seen closely and seen from afar, wherein the pose of the robot should be transformed greatly among these angle positions), so that both a key frame including an obstacle and a key frame not including an obstacle exist in the depth images in the key frame sequence.
  • the inertial sensor includes a coded disc arranged in a body center of the mobile robot and a gyroscope arranged inside the body center of the mobile robot; the coded disc is configured for measuring the walking distance variation of the mobile robot; the gyroscope is configured for measuring the deflection angle variation of the mobile robot in the same acquisition process, wherein position coordinates of the mobile robot measured by the coded disc are converted, through a rigid connection relationship between the body center of the mobile robot and the 3d-tof camera, to the camera coordinate system to which the current frame of depth image correspondingly belongs, so as to build the local point cloud map.
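  • as a non-authoritative sketch of this inertial variant, the Python code below derives the walking distance variation and the deflection angle variation from two robot poses integrated from the coded disc and the gyroscope, and converts a point from odometer coordinates into the camera coordinate system of the current frame through an assumed rigid body-to-camera offset; the frame conventions, the offset value, and the function names are illustrative assumptions, and the camera is assumed to be aligned with the body heading.

```python
import math
import numpy as np

# Assumed rigid mounting of the 3d-tof camera relative to the body center,
# expressed in the body frame (forward = +y, right = +x), in millimetres.
CAMERA_OFFSET_IN_BODY = np.array([0.0, 120.0])


def relative_pose(pose_now, pose_key):
    """Relative pose between the current robot pose and a key-frame pose.

    Each pose is (x_mm, y_mm, heading_rad) as integrated from the coded disc
    (walking distance) and the gyroscope (heading).  Returns the walking
    distance variation and the deflection angle variation compared against
    the preset salient pose change condition.
    """
    dx, dy = pose_now[0] - pose_key[0], pose_now[1] - pose_key[1]
    dist_variation = math.hypot(dx, dy)
    # wrap the heading difference into (-pi, pi] before taking its magnitude
    angle_variation = abs(math.atan2(math.sin(pose_now[2] - pose_key[2]),
                                     math.cos(pose_now[2] - pose_key[2])))
    return dist_variation, angle_variation


def world_to_camera(point_world, pose_now):
    """Convert a point from odometer (world) coordinates into the camera
    coordinate system of the current frame, using the rigid connection
    between the body center and the 3d-tof camera."""
    x, y, heading = pose_now
    # world -> body: undo the body translation, then rotate by -heading
    c, s = math.cos(-heading), math.sin(-heading)
    rel = np.array(point_world[:2]) - np.array([x, y])
    body_xy = np.array([c * rel[0] - s * rel[1], s * rel[0] + c * rel[1]])
    # body -> camera: undo the fixed camera offset
    cam_xy = body_xy - CAMERA_OFFSET_IN_BODY
    return np.array([cam_xy[0], cam_xy[1], point_world[2]])  # keep the height
```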
  • the key frame sequence includes three or more frames of depth images configured for building the local point cloud map. This technical solution can provide more sufficient point cloud samples for building the local point cloud map and positioning, by means of the effective feature points, obstacles, so that the local point cloud map is more stable and effective.
  • a first-in first-out internal-memory storage form is used to cache the key frame sequence in the mobile robot, so that in a moving process, the mobile robot obtains the key frame which is closest in time to one frame of depth image currently collected in real time by the 3d-tof camera.
  • a first-in first-out storage queue is used to store the latest screened and interpolated key frames to ensure that only the most effective frame sequences are stored, and the latest screened and interpolated key frames can be three frames of depth images closest to the current frame.
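  • a minimal sketch of the first-in first-out cache, assuming Python's collections.deque and the three-frame capacity mentioned above:

```python
from collections import deque

# Three most recent key frames is one concrete choice mentioned in the text.
MAX_KEY_FRAMES = 3

key_frame_sequence = deque(maxlen=MAX_KEY_FRAMES)


def insert_key_frame(depth_image, robot_pose):
    """Push a newly screened key frame; once the queue holds MAX_KEY_FRAMES
    entries the oldest one is dropped automatically, so only the frames
    closest in time to the current frame remain cached."""
    key_frame_sequence.append({"depth": depth_image, "pose": robot_pose})
```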
  • the preset target planar region is a rectangular region; a middle point of one edge of the rectangular region is the position of the 3d-tof camera; a length of the edge is equal to a body diameter of the mobile robot; and the other edge perpendicular to the edge is disposed in front of the visual angle of the 3d-tof camera and is used for representing a detectable depth distance in a depth image collected by the 3d-tof camera.
  • the preset target planar region is configured to be a target detection region that allows the mobile robot to successfully pass.
  • a maximum height position and distribution feature of obstacle point clouds above this region are positioned, so that an obstacle avoiding path for an obstacle under a near-distance condition is effectively planned.
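  • the rectangular region can be sketched, for illustration only, as follows; the 300 mm dimensions reuse the concrete coordinate value range given in the embodiment further below, and the class and field names are assumptions.

```python
from dataclasses import dataclass


@dataclass
class TargetPlanarRegion:
    """Rectangle on the level ground in front of the 3d-tof camera.

    The edge through the camera has the length of the body diameter (the
    camera sits at its middle point); the perpendicular edge extends forward
    by the detectable depth distance of the camera.  Units are millimetres.
    """
    body_diameter: float
    detectable_depth: float

    def contains(self, x: float, y: float) -> bool:
        # x: lateral offset from the camera (left negative, right positive)
        # y: forward depth distance from the camera
        half = self.body_diameter / 2.0
        return -half <= x <= half and 0.0 <= y <= self.detectable_depth


# Example with the dimensions quoted later in the description
# (-150 mm to +150 mm laterally, 0 mm to 300 mm in depth).
region = TargetPlanarRegion(body_diameter=300.0, detectable_depth=300.0)
```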
  • in step 4, when a 3d coordinate position of one point cloud falls into one detection block, the point cloud is determined to be distributed in that detection block, and the height of the point cloud is recorded.
  • the method for storing and applying position information of a point cloud is simplified.
  • storing each point cloud in the form of the 3d histogram includes: a coordinate origin of the 3d coordinate system to which the 3d histogram belongs is set to be the position of the 3d-tof camera; the preset target planar region is set to cover a distance of half of the body diameter in the forward direction of the horizontal axis of the 3d coordinate system and a distance of half of the body diameter in the negative direction of the horizontal axis; the preset target planar region is set to cover the detectable depth distance in the forward direction of the longitudinal axis of the 3d coordinate system; the detection blocks are set to be distributed along the vertical axis of the 3d coordinate system; a vertical height occupied by each detection block on the vertical axis of the 3d coordinate system is configured to represent the height of the point clouds distributed in the detection block; and the point clouds are divided to obtain the preset number of 3d histograms, so as to mark the point clouds in the form of the 3d histogram.
  • the point clouds of the local map are converted into a 3d dot map in a specific detection region in a coordinate system of the local map, and the height of a bar distributed in the 3d dot map is used to characterize a point cloud height of an obstacle, so that it is convenient for significantly and quickly screening and removing stray point clouds.
  • a ratio of an area of the preset target planar region to a horizontal projection area of each detection block is a preset integer, so that the entire preset target planar region is evenly divided by the preset integer number of the detection blocks and occupied by the detection blocks of the preset integer number.
  • the point cloud coverage rate of the detection blocks for the preset target planar region is increased.
  • culling out the point clouds in the discrete distribution state in the corresponding detection blocks includes: whether the number of point clouds distributed in each detection block is less than a positioning number threshold is determined; if YES, the point clouds distributed in the detection block are culled out, and then the point clouds in the detection blocks with height features of blocked collection visual lines are culled out; otherwise, the point clouds in the detection blocks with the height features of the blocked collection visual lines are directly culled out, wherein the positioning number threshold is used for describing the discrete distribution state; and/or, whether no point clouds are distributed in the neighborhoods of one detection block is determined; if YES, the point clouds distributed in that detection block are culled out, and then the point clouds in the detection blocks with height features of blocked collection visual lines are culled out; otherwise, the point clouds in the detection blocks with the height features of the blocked collection visual lines are directly culled out.
  • this technical solution reduces the noise interference caused by point clouds which are in over-discrete distribution to recognition and positioning of an obstacle, particularly, relieves the problem of a positioning error caused by a too far obstacle, thus improving the accuracy of recognition of a too close obstacle.
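  • a possible reading of this culling rule, written as a Python sketch over a grid of per-block point counts; the value of the positioning number threshold and the use of the eight-neighbourhood are assumptions made for illustration.

```python
import numpy as np

POSITIONING_NUMBER_THRESHOLD = 3  # assumed value of the positioning number threshold


def cull_discrete_blocks(counts: np.ndarray) -> np.ndarray:
    """Mark detection blocks whose point clouds are in a discrete
    distribution state: blocks holding fewer points than the positioning
    number threshold, or blocks whose eight-neighbourhood contains no
    points at all.  `counts[i, j]` is the number of point clouds that fell
    into block (i, j); the returned boolean mask is True where the block's
    point clouds should be culled out.
    """
    rows, cols = counts.shape
    cull = counts < POSITIONING_NUMBER_THRESHOLD
    for i in range(rows):
        for j in range(cols):
            if counts[i, j] == 0:
                continue
            neighbourhood = counts[max(0, i - 1):i + 2, max(0, j - 1):j + 2]
            # subtract the block itself; cull when the neighbourhood is empty
            if neighbourhood.sum() - counts[i, j] == 0:
                cull[i, j] = True
    return cull
```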
  • in step 4, culling out the point clouds in the detection blocks with the height features of the blocked collection visual lines includes: step 41, the height of the point clouds distributed in each column of detection blocks is detected along a direction away from the 3d-tof camera, and whether a detection block sequence changing from short to high exists in each column of detection blocks is respectively determined; if YES, step 42 is executed, otherwise step 43 is executed, wherein the change from short to high corresponds to a change of the height positions of the point clouds having the maximum heights and distributed in all the detection blocks that participate in the determination, and the maximum height of the point clouds distributed in each detection block represents the height of that detection block; step 42, the detection block to which the point clouds with the maximum height in the detection block sequence belong is recorded, then the point clouds distributed in the relatively short detection blocks behind the recorded detection block are culled out, and the method returns to step 41; and step 43, the detection block to which the point clouds with the maximum height in the detection block sequence belong is recorded, and then the point clouds distributed in the detection blocks behind the recorded detection block are culled out.
  • the method for culling out the point clouds distributed in the detection blocks is carried out on the basis of the point clouds with the maximum heights.
  • the point clouds which are not lifted to a peak height position are selected to be maintained according to a shape of a hill in an actual physical environment, and the relatively short point clouds of the blocked collection visual lines on the back side of the hill are deleted, so that an obstacle which achieves a significant visual line blocking effect is maintained in the current position of the mobile robot;
  • the point clouds which cannot be applied to obstacle positioning in an actual scenario due to the blockage of an actual height position are deleted from the corresponding detection blocks on the preset target planar region; the misjudgment phenomenon caused by the positions of the shorter point clouds pre-collected in the depth images is overcome; and the misjudgment of a position of a too close obstacle is significantly reduced.
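  • one possible interpretation of steps 41 to 43 is sketched below: for every column of detection blocks, ordered away from the 3d-tof camera, the highest block is recorded and the shorter blocks behind it are marked for culling; the array layout and the sentinel value for empty blocks are assumptions, not details fixed by the disclosure.

```python
import numpy as np


def cull_blocked_blocks(max_heights: np.ndarray) -> np.ndarray:
    """Cull point clouds whose collection visual line is blocked.

    `max_heights[i, j]` is the maximum point cloud height recorded in the
    detection block of row i (rows ordered away from the 3d-tof camera) and
    column j; a value of -inf means the block holds no point cloud.  For
    each column, the block with the greatest height is kept and every
    shorter block behind it (further from the camera) is marked for
    culling, since its points could not be seen from the current position.
    """
    rows, cols = max_heights.shape
    cull = np.zeros((rows, cols), dtype=bool)
    for j in range(cols):
        column = max_heights[:, j]
        if np.all(np.isneginf(column)):
            continue  # empty column: nothing to cull
        blocking_row = int(np.argmax(column))   # recorded highest block
        blocking_height = column[blocking_row]
        for i in range(blocking_row + 1, rows):  # blocks behind it
            if column[i] < blocking_height:
                cull[i, j] = True
    return cull
```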
  • the preset effective feature points on the key frames in the key frame sequence include a non-overexposed point, a non-underexposed point, a non-too-far point, a non-too-close point, and a pixel point with a maximum vertical height of greater than 20 mm, wherein a depth value of the non-too-far point and a depth value of the non-too-close point are both between 200 mm and 500 mm.
  • the preset effective feature point is a feature point used for representing the obstacle or used for representing a standard road sign beyond the obstacle, wherein the 3d-tof camera collects different frames of depth images of the preset effective feature point from different directions in the moving process of the mobile robot, thus representing a distribution position feature of an obstacle in front of the mobile robot within the visual angle range of the 3d-tof camera.
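  • for illustration, a Python sketch of the effective feature point screening using the four pixel states and the numeric limits quoted above; the state encoding and the function name are assumptions, while the 200 mm to 500 mm depth range and the 20 mm height limit come from the text.

```python
# Pixel states assumed to be reported by the 3d-tof driver for every pixel.
OVEREXPOSED, UNDEREXPOSED, TOO_FAR, TOO_CLOSE, VALID = range(5)

MIN_DEPTH_MM, MAX_DEPTH_MM = 200.0, 500.0   # non-too-close / non-too-far range
MIN_HEIGHT_MM = 20.0                        # minimum vertical height of a feature point


def is_effective_feature_point(state: int, depth_mm: float, height_mm: float) -> bool:
    """Keep only preset effective feature points: not overexposed, not
    underexposed, not too far, not too close (depth between 200 mm and
    500 mm), and with a vertical height greater than 20 mm."""
    if state in (OVEREXPOSED, UNDEREXPOSED, TOO_FAR, TOO_CLOSE):
        return False
    return MIN_DEPTH_MM <= depth_mm <= MAX_DEPTH_MM and height_mm > MIN_HEIGHT_MM
```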
  • a vision robot includes a 3d-tof camera and a processing unit, wherein the processing unit is configured for implementing the method for building the local point cloud map; and the 3d-tof camera is assembled on a body of the vision robot, so that the visual angle of the 3d-tof camera covers a position in front of the vision robot.
  • the 3d-tof camera samples the point clouds of a target obstacle in the moving process of the mobile robot; when the vision robot moves to a certain position, stray points are removed by using the height position distribution feature of the point clouds of the preset target planar region within the current visual angle range of the 3d-tof camera; only the reasonably distributed point clouds used for accurate positioning and having the maximum heights are allowed to be retained in the local map; and in a turning process, the vision robot can re-recognize an obstacle which has been recognized within the visual angle range and is close to an edge position of the body or an obstacle which is too close to the body, so that the positioning adaptability of scenarios in which obstacles are too close and too far is improved, and the probability of misjudgment on the position of a super-close obstacle is significantly reduced.
  • FIG. 1 is a flow chart of a method for building a local point cloud map disclosed in one embodiment of the disclosure.
  • An embodiment of the disclosure discloses a method for building a local point cloud map.
  • the method for building the local point cloud map includes: step 1: in a moving process of a mobile robot, a 3d-tof camera of the mobile robot is controlled to keep collecting depth images, and according to a preset salient pose change condition, a current frame of depth image collected in real time by the 3d-tof camera is interpolated into a key frame sequence; and step 2 is executed.
  • the mobile robot can be a floor mopping robot, which is also referred to as an automatic cleaning machine, an intelligent vacuum cleaner, a robot vacuum cleaner, and the like. It is one kind of intelligent household appliance and can automatically complete planned cleaning work in a room by virtue of certain artificial intelligence.
  • the method for interpolating, according to a preset salient pose change condition, a current frame of depth image collected in real time by the 3d-tof camera into a key frame sequence specifically includes: relative poses between a preset effective feature point in the current frame of depth image currently collected by the 3d-tof camera of the mobile robot and the same preset effective feature point in each key frame (a key frame subjected to pose transformation) in the key frame sequence are respectively calculated, wherein the key frame sequence is a sequence of continuously arranged depth images which are pre-stored in the mobile robot and include the preset effective feature points, and is used for storing the relative position information of the body and the orientation information of the camera in each of the continuous depth image frames, including depth image information of an obstacle which has been recognized by traversing and is too close to the body or fits closely against the body, so that when it is impossible to recognize this type of obstacle subsequently, the reference coordinate information of these associated depth images is used to assist in positioning; wherein the preset effective feature point is used for representing a distribution position feature of an obstacle in front of the mobile robot within the visual angle range of the 3d-tof camera.
  • the current frame of depth image currently collected by the 3d-tof camera is configured to be a new key frame, and the new key frame is interpolated into the key frame sequence to enable a key frame which participates in the foregoing relative pose calculation next time to reflect the latest relative position relationship of the mobile robot, wherein the step 1 is configured for determining a degree of the relative pose between the depth image currently collected by the 3d-tof camera of the mobile robot and each frame of depth image in the key frame sequence, so as to subsequently screen out a depth image in which the body has a large enough displacement salience degree or a depth image in which the body has a large enough rotation salience degree; when the deflection angle and the relative distance included in the relative pose calculated in the step 1 are both determined to be large, which represents that the mobile robot has a large displacement or has an obvious change in a turning angle, the current frame of depth image currently collected by the 3d-tof camera is configured to be a new key frame, and the new key frame is interpolated into the key frame sequence.
  • the calculated relative poses include: a depth distance variation and a corresponding deflection angle variation between the preset effective feature point in the current frame of depth image currently collected by the 3d-tof camera of the mobile robot and the same preset effective feature point in each key frame in the key frame sequence after the pose transformation, representing a displacement variation (corresponding to the depth distance variation) of the mobile robot relative to the stationary preset effective feature point and an angle deflection (corresponding to the deflection angle variation) generated around the preset effective feature point; when all the depth distance variations are greater than a preset distance change threshold, and all the deflection angle variations are greater than a preset angle change threshold, the preset salient pose change condition is determined to be satisfied, representing that the mobile robot has a large displacement or has an obvious change in a turning angle (for example, the mobile robot turns 90 degrees in the planned walking process).
  • this embodiment has the advantage that by means of determining whether a variation of a distance between actual points (each of which is considered to be stationary) corresponding to the preset effective feature points of two frames of depth images and the 3d-tof camera is large enough, and determining whether a variation of a deflection angle of the actual points (each of which is considered to be stationary) corresponding to the preset effective feature points of the two frames of depth images relative to the 3d-tof camera is large enough, whether a displacement of the body is large enough or an angle of rotation of the body is large enough is represented, thus ensuring a sufficient gap between a depth image configured as a new key frame and the current frame of depth image; clear and wide-area peripheral information of the robot can be provided; and the depth images in the key frame sequence to which the current frame of depth image belongs cover a large enough visual range (such as an angle, seen from the left and seen from the right, or a displacement, seen closely and seen from afar, wherein the pose of the robot should be transformed greatly among these angle positions), so that both a key frame including an obstacle and a key frame not including an obstacle exist in the depth images in the key frame sequence.
  • the distance between the same preset effective feature point in each key frame in the key frame sequence and the 3d-tof camera under the corresponding key frame is a depth distance from an actual position corresponding to the preset effective feature point of the depth image which is pre-collected by the 3d-tof camera and interpolated into the key frame sequence to an imaging plane of the 3d-tof camera, and a variation of the depth distance is used for representing a degree of a displacement of the mobile robot;
  • the deflection angle of the same preset effective feature point in each key frame in the key frame sequence relative to the 3d-tof camera under the corresponding key frame is a lens orientation angle when the 3d-tof camera collects the depth image that is interpolated into the key frame sequence, and a variation of the lens orientation angle is used for representing an angle of rotation of the mobile robot.
  • this embodiment extracts pose information of the preset effective feature point and participates in comparison operation of inter-frame offsets, which is convenient for screening out multiple key frames that represent large relative poses of the mobile robot; particularly after the body of the mobile robot rotates in a larger angle or has a larger displacement, if the 3d-tof camera cannot recognize a too close obstacle, a specific position of the too close obstacle is positioned by cooperation between pre-stored associated effective feature points (that is, pre-stored key frames that include obstacles) in the key frame sequence and the subsequent point cloud culling operation.
  • the method step for interpolating, according to a preset salient pose change condition, a current frame of depth image collected in real time by the 3d-tof camera into a key frame sequence specifically includes:
  • the calculated relative poses include: a walking distance variation of the mobile robot between the preset effective feature point in the current frame of depth image currently collected by the 3d-tof camera of the mobile robot and the same or same type of preset effective feature point in the pre-collected key frame, and a deflection angle variation generated in the same acquisition process; when all the walking distance variations are greater than a preset distance change threshold, and all the deflection angle variations are greater than a preset angle change threshold, the preset salient pose change condition is determined to be satisfied.
  • the inertial sensor includes a coded disc arranged in a body center of the mobile robot and a gyroscope arranged inside the body center of the mobile robot; the coded disc is configured for measuring the walking distance variation of the mobile robot; the gyroscope is configured for measuring the deflection angle variation of the mobile robot in the same acquisition process; position coordinates of the mobile robot measured by the coded disc are converted, through a rigid connection relationship between the body center of the mobile robot and the 3d-tof camera, to the camera coordinate system to which the current frame of depth image correspondingly belongs, so as to build the local point cloud map, thus marking out a moving trajectory of the mobile robot during acquisition of the preset effective feature point on the local point cloud map. Since the pose change information of the mobile robot is directly obtained by using inertial data in this embodiment, the calculation for the relative pose is simplified relative to the calculation of the feature points of the foregoing embodiment.
  • after the poses of the mobile robot under different key frames are obtained, the feature points about the same target obstacle in each key frame in the key frame sequence are transformed to the same local point cloud map through pose transformation, so as to mark out the same target obstacle.
  • a fusion operation is performed on the inertial data acquired synchronously when the 3d-tof camera collects the current frame of depth image and the inertial data acquired synchronously when all the depth images in the key frame sequence are acquired, so as to obtain the relative pose that characterizes the position change of the mobile robot.
  • the key frames that reflect clear and wide-area peripheral information of the robot are selected according to the preset salient pose change condition.
  • the mobile robot can use these key frames to establish the local point cloud map which can cover the distribution of the obstacles in front of the mobile robot in a large range, so as to solve the following problem by performing fusion processing on relative poses of the current frame of depth image and these key frames that support update: it is difficult for the mobile robot to recognize, on the basis of a single frame of depth image collected in real time, an obstacle close to an edge position of the body or an obstacle which is too close to the body.
  • depth images in a pre-stored image sequence are used. If the current collection time is t, depth image frames collected at t-1, t-2 and t-3, or other earlier key frame images with temporality, can be used, although they do not necessarily contain obstacles corresponding to the edges of the depth images. In these key frames, the effective points that appear in front of the body can be fused with the current frame to calculate the relative poses. At the same time, it is also necessary to acquire the position information collected in real time by the inertial sensor at the body center.
  • the preset salient pose change condition is determined to be satisfied, which represents that the mobile robot has a large displacement or has an obvious change in a turning angle (for example, turning 90 degrees in the planned walking process).
  • this technical solution has the advantage that by means of determining whether a variation of the actual acquisition position points corresponding to two frames of depth images is large enough and determining whether a variation of the actual body deflection angles corresponding to the two frames of depth images is large enough, whether a displacement of the body is large enough or an angle of rotation of the body is large enough is represented, thus ensuring a sufficient gap between a depth image configured as a new key frame and the current frame of depth image; clear and wide-area peripheral information of the robot can be provided; the depth images in the key frame sequence to which the current frame of depth image belongs cover a large enough visual range (such as an angle, seen from the left and seen from the right, or a displacement, seen closely and seen from afar, wherein the pose of the robot should be transformed greatly among these angle positions), so that both a key frame including an obstacle and a key frame not including an obstacle exist in the depth images in the key frame sequence; and the positioning calculation steps for an obstacle are simplified.
  • the preset target planar region is a rectangular region; a middle point of one edge of the rectangular region is the position of the 3d-tof camera; a length of the edge is equal to a body diameter of the mobile robot; and the other edge perpendicular to the edge is disposed in front of the visual angle of the 3d-tof camera and is used for representing a detectable depth distance in a depth image collected by the 3d-tof camera.
  • the preset target planar region is configured to be a target detection region that allows the mobile robot to successfully pass.
  • a maximum height position and distribution feature of obstacle point clouds above this region are positioned, so that an obstacle avoiding path for an obstacle under a near-distance condition is effectively planned.
  • the method for marking each point cloud in the form of a 3d histogram includes: a coordinate origin of the 3d coordinate system to which the 3d histogram belongs is set to be the position of the 3d-tof camera; the preset target planar region is set to cover a distance of half of the body diameter in the forward direction of the horizontal axis of the 3d coordinate system and a distance of half of the body diameter in the negative direction of the horizontal axis; the preset target planar region is set to cover the detectable depth distance in the forward direction of the longitudinal axis of the 3d coordinate system; the detection blocks are set to be distributed along the vertical axis of the 3d coordinate system; a vertical height occupied by each detection block on the vertical axis of the 3d coordinate system is configured to represent the height of the point clouds distributed in the detection block; and the point clouds are divided to obtain the preset number of 3d histograms, so as to mark the point clouds in the form of the 3d histogram.
  • the point clouds of the local map are converted into a 3d dot map in a specific detection region in a coordinate system of the local map, and the height of a bar distributed in the 3d dot map is used to characterize a point cloud height of an obstacle, so that it is convenient for significantly and quickly removing stray point clouds.
  • a preset coordinate value range corresponding to the preset target planar region includes: by taking the center of the 3d-tof camera as a coordinate origin, a coordinate coverage range in a horizontal axis direction is: from the leftmost side -150 mm to the rightmost side +150 mm, and the coordinate range (expressed as a depth distance) in a longitudinal axis direction is: the nearest 0 mm to the farthest 300 mm.
  • the point cloud positions of the local map in each detection block with a horizontal projection area of 30 mm * 30 mm are counted in a 3d histogram; the point clouds which are not within the coordinate value range of the preset target planar region are not counted, the maximum heights of the point clouds falling into this range are recorded, and a 3d dot map that marks the maximum heights of the point clouds is generated in the effective target space region.
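  • the counting described here can be sketched as follows, using the 30 mm * 30 mm blocks and the -150 mm to +150 mm by 0 mm to 300 mm coordinate value range quoted above; the function name and the array layout are assumptions, and the resulting counts and maximum heights are the inputs assumed by the culling sketches given earlier.

```python
import numpy as np

BLOCK_SIZE_MM = 30.0                  # horizontal projection of one detection block
X_MIN_MM, X_MAX_MM = -150.0, 150.0    # leftmost to rightmost side of the region
Y_MIN_MM, Y_MAX_MM = 0.0, 300.0       # nearest to farthest depth distance

N_COLS = int((X_MAX_MM - X_MIN_MM) // BLOCK_SIZE_MM)   # 10 blocks per row
N_ROWS = int((Y_MAX_MM - Y_MIN_MM) // BLOCK_SIZE_MM)   # 10 blocks per column


def build_height_histogram(points_mm: np.ndarray):
    """Mark point clouds over the preset target planar region as a 3d histogram.

    `points_mm` is an (n, 3) array of point clouds in the camera coordinate
    system (x lateral, y forward depth, z height).  Points outside the preset
    coordinate value range are ignored; for every detection block the number
    of points and the maximum point cloud height are recorded.
    """
    counts = np.zeros((N_ROWS, N_COLS), dtype=int)
    max_heights = np.full((N_ROWS, N_COLS), -np.inf)
    for x, y, z in points_mm:
        if not (X_MIN_MM <= x < X_MAX_MM and Y_MIN_MM <= y < Y_MAX_MM):
            continue
        col = int((x - X_MIN_MM) // BLOCK_SIZE_MM)
        row = int((y - Y_MIN_MM) // BLOCK_SIZE_MM)
        counts[row, col] += 1
        max_heights[row, col] = max(max_heights[row, col], z)
    return counts, max_heights
```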
  • the preset target planar region is configured to be a target detection region that allows the mobile robot to successfully pass. A maximum height position and distribution feature of obstacle point clouds above this region are recognized and positioned, so that an obstacle avoiding path for an obstacle under a near-distance condition is in real time and effectively planned.
  • a ratio of an area of the preset target planar region to a horizontal projection area of each detection block is a preset integer, so that the entire preset target planar region is evenly divided and occupied by a preset integer number of detection blocks.
  • N detection blocks are arranged in each row, and M detection blocks are arranged in each column.
  • the direction of a row is a side direction perpendicular to a forward motion direction of the body; and the direction of a column is the forward motion direction of the body, which is equivalent to an optical axis direction of the 3d-tof camera in this embodiment.
  • the preset integer is a product of N and M, so that the point cloud coverage rate of the detection blocks for the preset target planar region is increased.
  • in step 4, according to the height position distribution features of the point clouds in all the detection blocks, the point clouds in a discrete distribution state in the corresponding detection blocks are first culled out, and then the point clouds in the detection blocks with height features of blocked collection visual lines are culled out; then step 5 is executed.
  • in step 4, when a 3d coordinate position of one point cloud falls into one detection block, the point cloud is determined to be distributed in that detection block, and the height of the point cloud is recorded, thus simplifying the method for storing and applying the position information of the point cloud.
  • the heights of the point clouds distributed in all the detection blocks are respectively recorded according to how the point clouds fall into the detection blocks, including the maximum height of the point clouds distributed in each detection block, and the point cloud with the maximum height in each detection block is used to identify the maximum height position of that detection block, so that the point clouds are marked over the preset target planar region in the form of a 3d histogram.
  • the point clouds in a detection block with the height feature of a blocked collection visual line are the point clouds distributed in a shorter detection block whose collection visual line is blocked by a certain higher detection block.
  • the shorter point clouds are located in front of the 3d-tof camera and are arranged behind a higher detection block along a direction away from the 3d-tof camera.
  • the method for culling out the point clouds in a discrete distribution state in the corresponding detection blocks specifically includes: whether the number of point clouds distributed in each detection block is less than a positioning number threshold is determined; if YES, the point clouds distributed in the detection block are culled out, and then the point clouds in the detection blocks with height features of blocked collection visual lines are culled out; otherwise, the point clouds in the detection blocks with the height features of the blocked collection visual lines are directly culled out, wherein the positioning number threshold is used for describing the discrete distribution state; and/or, whether no point clouds are distributed in the neighborhoods of one detection block is determined; if YES, the point clouds distributed in that detection block are culled out, and then the point clouds in the detection blocks with height features of blocked collection visual lines are culled out; otherwise, the point clouds in the detection blocks with the height features of the blocked collection visual lines are directly culled out.
  • by executing the step 4, the noise interference caused by point clouds which are in an over-discrete distribution to the recognition and positioning of an obstacle is reduced; particularly, the problem of a positioning error caused by a too far obstacle is relieved, thus improving the accuracy of recognition of a too close obstacle.
  • the method for culling out point clouds in the detection blocks with height features of blocked collection visual lines includes:
  • the point clouds stored in the form of a 3d histogram include many unreasonable points.
  • when the thickness of the point clouds is more than 10 cm, it can be determined, on the basis of the foregoing steps, whether the detection block which is collected in front of the current position of the body and represents the height of wall point clouds can block the visual line; if YES, even if the point clouds in a detection block behind this detection block are effective (that is, the point clouds are pre-collected by the 3d-tof camera and marked on the local map), they would not actually be collected by the 3d-tof camera at the current position; therefore, the point clouds in all the detection blocks behind this detection block need to be culled out to avoid the phenomenon that the corresponding pre-collected point clouds marked on the local map for positioning are directly used for positioning without considering the influence of the actual blockage of the collection visual line.
  • although the 3d-tof camera has collected the related depth images, the mobile robot of this embodiment would otherwise easily use the point clouds in the detection blocks behind the highest detection block to position an obstacle, so that a positioning error is easily generated. Therefore, the point clouds in all the shorter detection blocks behind this detection block are culled out, and the point clouds in the highest detection block are used to position and recognize, in real time, an actual obstacle in the front, thus overcoming the influence caused by a point cloud position measurement error.
  • this embodiment has the advantages that on the basis of the foregoing steps 41 to 43, the method for culling out the point clouds distributed in the detection blocks is carried out on the basis of the point clouds with the maximum heights. Specifically, the point clouds which are not lifted to a peak height position are selected to be maintained according to a shape of a hill in an actual physical environment, and the relatively short point clouds (the point clouds in the detection blocks with height features of the blocked collection visual lines) corresponding to the back side of the hill are deleted, so that an obstacle which achieves a significant visual line blocking effect is maintained in the current position of the mobile robot; the point clouds which cannot be applied to obstacle positioning at the current position due to the blockage of the actual height position are deleted from the corresponding detection blocks on the preset target planar region, thus optimizing the filtering of the point cloud data of the local map.
  • step 5 positions of the remaining point clouds distributed in all the detection blocks are marked as 3d positions of a local point cloud map, so as to build the local point cloud map.
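  • a short sketch of this final step, assuming the grid layout of the earlier binning sketch and a boolean culling mask produced by step 4; the function name and the default grid parameters are illustrative only.

```python
import numpy as np


def remaining_points_to_local_map(points_mm, cull_mask, block_size_mm=30.0,
                                  x_min_mm=-150.0, y_min_mm=0.0):
    """Step 5: keep only the point clouds whose detection block survived the
    culling of step 4 and mark their positions as the 3d positions of the
    local point cloud map (coordinates stay in the current camera frame).
    The grid parameters default to the 30 mm blocks of the earlier sketch."""
    n_rows, n_cols = cull_mask.shape
    kept = []
    for x, y, z in np.asarray(points_mm, dtype=float):
        col = int((x - x_min_mm) // block_size_mm)
        row = int((y - y_min_mm) // block_size_mm)
        if 0 <= row < n_rows and 0 <= col < n_cols and not cull_mask[row, col]:
            kept.append((x, y, z))
    return np.asarray(kept)
```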
  • the key frame sequence includes three or more frames of depth images, so that more sufficient point cloud samples can be provided for building the local point cloud map and positioning, by means of the effective feature points, obstacles.
  • a first-in first-out (FIFO queue) internal-memory storage form is used to cache the key frame sequence in the mobile robot, so that in the moving process, the mobile robot refreshes the latest interpolated key frame in the key frame sequence, and obtains the key frame which is closest in time to one frame of depth image currently collected in real time by the 3d-tof camera.
  • a FIFO storage queue is used to store the latest screened and interpolated key frames to ensure that only the most effective frame sequences are stored, and the latest screened and interpolated key frames can be three frames of depth images closest to the current frame.
  • the depth image data cached in the form of an FIFO queue is not redundant, which further maintains effective positioning information of the local point cloud map accurately.
  • the mobile robot uses these key frames to build a local point cloud map that can cover, in a large range, the distribution of obstacles in front of the mobile robot: the point clouds that reflect positions in a large-range region around the robot are sampled according to the preset salient pose change condition, and fusion processing is performed on the relative poses between the current frame of depth image and these key frames that support update, so that the local point cloud map built from these key frames can completely cover the distribution of obstacles in front of the mobile robot.
  • the point clouds at different heights are described in the form of a 3d histogram, which is convenient for removing stray points by using the 3d region distribution features of the point clouds in the different detection blocks; the following problem is solved: it is difficult for the robot to recognize, on the basis of a single frame of depth image collected in real time, an obstacle that is close to an edge position of the body or an obstacle that is too close to the body, thus improving the adaptability of the local point cloud map in scenarios with too close and too far obstacles and significantly reducing the difficulty in position recognition and positioning of a super-close obstacle and the probability of misjudgment.
  • the preset effective feature points include non-overexposed points, non-underexposed points, non-too-far points, non-too-close points, and pixel points with a maximum vertical height of greater than 20 mm.
• the depth values of the non-too-far points and the non-too-close points both lie between 200 mm and 500 m, and these points represent the set of effective position feature points of obstacles in front of the mobile robot within the visual angle range of the 3d-tof camera.
• whether a point is overexposed or underexposed is obtained directly from the 3d-tof camera: each pixel point in each frame of depth image has one of four states, namely overexposed, underexposed, too far, and too close; overexposed and underexposed points are both unreliable, and too-close and too-far points are not stored in the FIFO queue.
• untrusted pixel points in the depth images are thus excluded, including overexposed and underexposed points, which are strongly affected by light brightness, and too-far and too-close points, which are strongly affected by distance errors, thereby improving the stability of depth image positioning (a pixel screening sketch is given after this list).
• the preset effective feature points are feature points of effective markers in the forward motion direction of the mobile robot. While the mobile robot moves, the 3d-tof camera collects different frames of depth images of these feature points from different directions; during turning, it successively collects them from the left, from a direction approaching the target, from the right, and from a direction moving away from the target, so that depth images in which the visual line to the same marker is unobstructed and depth images in which that visual line has been blocked are collected in turn. When the mobile robot completes the method for building the local point cloud map, the highest point clouds stored in the form of the 3d histogram may therefore be environment position features indicating that the collection visual line to this marker has been blocked.
• a vision robot includes a 3d-tof camera and a processing unit, wherein the processing unit is configured for implementing the method for building the local point cloud map; and the 3d-tof camera is assembled on a body of the vision robot, so that the visual angle of the 3d-tof camera covers the region in front of the vision robot.
• the 3d-tof camera samples the point clouds of a target obstacle while the mobile robot moves; when the vision robot reaches a given position, stray points are removed by using the height position distribution features of the point clouds of the preset target planar region within the current visual angle range of the 3d-tof camera, and only reasonably distributed point clouds with the maximum heights, which are suitable for accurate positioning, are retained in the local map. During turning, the vision robot can therefore re-recognize an obstacle that has already been recognized within the visual angle range and is close to an edge of the body, or an obstacle that is too close to the body, so that positioning adaptability in scenarios with too-close and too-far obstacles is improved and the probability of misjudging the position of a super-close obstacle is significantly reduced.
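For illustration, the pixel screening described in the points above may be sketched as follows. This is a minimal sketch and not the claimed implementation: PixelState, DepthPixel and is_effective_feature_point are assumed names, and the depth-window bounds are left as parameters to be set to the preset effective range; only the 20 mm height threshold is taken from the description above.

```python
# Hypothetical sketch of screening "preset effective feature points".
# PixelState, DepthPixel and the thresholds are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum


class PixelState(Enum):
    VALID = 0         # usable measurement
    OVEREXPOSED = 1   # unreliable: too much light
    UNDEREXPOSED = 2  # unreliable: too little light
    TOO_FAR = 3       # outside the effective range
    TOO_CLOSE = 4     # outside the effective range


@dataclass
class DepthPixel:
    state: PixelState   # exposure/range state reported per pixel
    depth_mm: float     # measured depth of the pixel
    height_mm: float    # vertical height of the back-projected point


def is_effective_feature_point(p: DepthPixel,
                               min_depth_mm: float,
                               max_depth_mm: float,
                               min_height_mm: float = 20.0) -> bool:
    """Keep only non-overexposed, non-underexposed, non-too-far, non-too-close
    pixels whose depth lies in the preset range and whose height exceeds the
    height threshold."""
    if p.state is not PixelState.VALID:
        return False
    if not (min_depth_mm <= p.depth_mm <= max_depth_mm):
        return False
    return p.height_mm > min_height_mm
```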
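The first-in first-out key frame cache can be sketched as follows; the KeyFrame fields, the default capacity of three frames, and the time-based lookup are simplifying assumptions rather than the exact data structures of the method.

```python
# Minimal FIFO key frame cache sketch; field names and capacity are assumptions.
from collections import deque
from dataclasses import dataclass
from typing import Deque, List

import numpy as np


@dataclass
class KeyFrame:
    timestamp: float      # acquisition time of the depth image
    pose: np.ndarray      # 4x4 camera-to-world pose at acquisition time
    points: np.ndarray    # Nx3 effective feature points in the camera frame


class KeyFrameFifo:
    """Fixed-capacity FIFO: inserting a new key frame evicts the oldest one."""

    def __init__(self, capacity: int = 3):
        self._frames: Deque[KeyFrame] = deque(maxlen=capacity)

    def insert(self, frame: KeyFrame) -> None:
        self._frames.append(frame)   # the oldest frame drops out automatically

    def closest_in_time(self, timestamp: float) -> KeyFrame:
        """Return the cached key frame closest in time to the current image."""
        if not self._frames:
            raise ValueError("no key frames cached yet")
        return min(self._frames, key=lambda f: abs(f.timestamp - timestamp))

    def frames(self) -> List[KeyFrame]:
        return list(self._frames)
```

Using deque(maxlen=capacity) makes eviction of the oldest key frame automatic, which mirrors the first-in first-out behaviour described above and keeps the cached data non-redundant.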
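Fusing the relative poses between the current depth image and the cached key frames, so that their point clouds cover a larger region around the robot, might look like the sketch below. The 4x4 camera-to-world pose convention, the thresholds standing in for the preset salient pose change condition, and all function names are assumptions.

```python
# Sketch of relative-pose fusion of cached key frames into the current frame.
# Pose convention and thresholds are assumptions, not the patented values.
from typing import Iterable

import numpy as np


def relative_pose(T_world_cur: np.ndarray, T_world_key: np.ndarray) -> np.ndarray:
    """Transform taking points from the key frame into the current camera frame."""
    return np.linalg.inv(T_world_cur) @ T_world_key


def salient_pose_change(T_rel: np.ndarray,
                        min_translation_m: float = 0.1,
                        min_rotation_rad: float = 0.26) -> bool:
    """Assumed stand-in for the 'preset salient pose change condition': the key
    frame is used only if it was taken from a noticeably different pose."""
    translation = float(np.linalg.norm(T_rel[:3, 3]))
    cos_angle = np.clip((np.trace(T_rel[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    return translation >= min_translation_m or np.arccos(cos_angle) >= min_rotation_rad


def fuse_key_frames(T_world_cur: np.ndarray, key_frames: Iterable) -> np.ndarray:
    """Stack the key frame point clouds, expressed in the current camera frame."""
    fused = []
    for kf in key_frames:
        T_rel = relative_pose(T_world_cur, kf.pose)
        if not salient_pose_change(T_rel):
            continue
        pts_h = np.hstack([kf.points, np.ones((len(kf.points), 1))])  # homogeneous
        fused.append((T_rel @ pts_h.T).T[:, :3])
    return np.vstack(fused) if fused else np.empty((0, 3))
```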
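Finally, the detection-block culling based on maximum heights (the "back side of the hill" rule) and the marking of the remaining points as the 3d positions of the local map can be sketched roughly as below. The block size, the region extent, and the simple same-ray occlusion test are illustrative assumptions, not the concrete parameters of the method.

```python
# Rough sketch of max-height detection-block culling over the target planar region.
# Block size, region extent and the occlusion test are illustrative assumptions.
import numpy as np


def _same_ray(p, q, angle_tol_rad: float = 0.05) -> bool:
    """True if p and q lie approximately on the same viewing ray from the camera."""
    a1, a2 = np.arctan2(p[1], p[0]), np.arctan2(q[1], q[0])
    return abs(np.arctan2(np.sin(a1 - a2), np.cos(a1 - a2))) < angle_tol_rad


def build_local_point_cloud_map(points: np.ndarray,
                                block_size: float = 0.05,
                                region: float = 1.0) -> np.ndarray:
    """points: Nx3 array (x forward, y left, z up) in the current robot frame.
    Returns the retained 3d positions making up the local point cloud map."""
    # 1) Keep only points inside the preset target planar region.
    mask = (np.abs(points[:, 0]) <= region) & (np.abs(points[:, 1]) <= region)
    pts = points[mask]

    # 2) Bin points into detection blocks on the plane, keeping the highest
    #    point of each block (the "3d histogram" of maximum heights).
    blocks = {}
    for idx, p in zip(map(tuple, np.floor(pts[:, :2] / block_size).astype(int)), pts):
        if idx not in blocks or p[2] > blocks[idx][2]:
            blocks[idx] = p

    # 3) Cull blocks whose highest point sits behind a taller block nearer the
    #    camera on the same viewing ray: its collection visual line is blocked.
    kept = []
    for p in blocks.values():
        dist = np.hypot(p[0], p[1])
        blocked = any(np.hypot(q[0], q[1]) < dist and q[2] > p[2] and _same_ray(p, q)
                      for q in blocks.values())
        if not blocked:
            kept.append(p)

    # 4) The remaining block maxima are marked as the 3d positions of the map.
    return np.array(kept) if kept else np.empty((0, 3))
```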
EP21884422.3A 2020-10-30 2021-06-03 Procédé de construction de carte de nuage de points locaux et robot de vision Pending EP4083921A4 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011192850.8A CN112348893B (zh) 2020-10-30 2020-10-30 一种局部点云地图构建方法及视觉机器人
PCT/CN2021/098145 WO2022088680A1 (fr) 2020-10-30 2021-06-03 Procédé de construction de carte de nuage de points locaux et robot de vision

Publications (2)

Publication Number Publication Date
EP4083921A1 true EP4083921A1 (fr) 2022-11-02
EP4083921A4 EP4083921A4 (fr) 2023-08-23

Family

ID=74356944

Family Applications (1)

Application Number Title Priority Date Filing Date
EP21884422.3A Pending EP4083921A4 (fr) 2020-10-30 2021-06-03 Procédé de construction de carte de nuage de points locaux et robot de vision

Country Status (6)

Country Link
US (1) US20230249353A1 (fr)
EP (1) EP4083921A4 (fr)
JP (1) JP7375206B2 (fr)
KR (1) KR20220113977A (fr)
CN (1) CN112348893B (fr)
WO (1) WO2022088680A1 (fr)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112348893B (zh) * 2020-10-30 2021-11-19 珠海一微半导体股份有限公司 一种局部点云地图构建方法及视觉机器人
CN113686347A (zh) * 2021-08-11 2021-11-23 追觅创新科技(苏州)有限公司 机器人导航路径的生成方法及装置
CN113787516A (zh) * 2021-08-16 2021-12-14 深圳优地科技有限公司 定位方法、装置和机器人
WO2023204105A1 (fr) * 2022-04-21 2023-10-26 ソニーセミコンダクタソリューションズ株式会社 Dispositif de traitement de signal, procédé de traitement de signal et procédé de production de données
CN115273068B (zh) * 2022-08-02 2023-05-12 湖南大学无锡智能控制研究院 一种激光点云动态障碍物剔除方法、装置及电子设备

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9183631B2 (en) * 2012-06-29 2015-11-10 Mitsubishi Electric Research Laboratories, Inc. Method for registering points and planes of 3D data in multiple coordinate systems
US9037396B2 (en) * 2013-05-23 2015-05-19 Irobot Corporation Simultaneous localization and mapping for a mobile robot
CN104732518B (zh) * 2015-01-19 2017-09-01 北京工业大学 一种基于智能机器人地面特征的ptam改进方法
KR102403504B1 (ko) * 2015-11-26 2022-05-31 삼성전자주식회사 이동 로봇 및 그 제어 방법
CN106441306A (zh) * 2016-10-18 2017-02-22 华东师范大学 自主定位与地图构建的智能生命探测机器人
CN107160395B (zh) * 2017-06-07 2020-10-16 中国人民解放军装甲兵工程学院 地图构建方法及机器人控制系统
WO2019039733A1 (fr) * 2017-08-21 2019-02-28 (주)유진로봇 Objet mobile et capteur combiné utilisant une caméra et un lidar
US10708627B2 (en) * 2019-03-04 2020-07-07 Intel Corporation Volumetric video compression with motion history
CN110031825B (zh) 2019-04-17 2021-03-16 北京智行者科技有限公司 激光定位初始化方法
CN110221603B (zh) * 2019-05-13 2020-08-14 浙江大学 一种基于激光雷达多帧点云融合的远距离障碍物检测方法
CN110264567B (zh) * 2019-06-19 2022-10-14 南京邮电大学 一种基于标记点的实时三维建模方法
CN110689622B (zh) * 2019-07-05 2021-08-27 电子科技大学 一种基于点云分割匹配闭环校正的同步定位与构图方法
CN110501017A (zh) * 2019-08-12 2019-11-26 华南理工大学 一种基于orb_slam2的移动机器人导航地图生成方法
CN112348893B (zh) * 2020-10-30 2021-11-19 珠海一微半导体股份有限公司 一种局部点云地图构建方法及视觉机器人

Also Published As

Publication number Publication date
KR20220113977A (ko) 2022-08-17
CN112348893B (zh) 2021-11-19
EP4083921A4 (fr) 2023-08-23
WO2022088680A1 (fr) 2022-05-05
JP2023509153A (ja) 2023-03-07
JP7375206B2 (ja) 2023-11-07
US20230249353A1 (en) 2023-08-10
CN112348893A (zh) 2021-02-09

Similar Documents

Publication Publication Date Title
EP4083921A1 (fr) Procédé de construction de carte de nuage de points locaux et robot de vision
CN110472496B (zh) 一种基于目标检测与跟踪的交通视频智能分析方法
EP3519770B1 (fr) Procédés et systèmes de génération et d'utilisation de données de référence de localisation
EP3886048A1 (fr) Procédé et système de jonction de cartes slam
CN110794405B (zh) 一种基于相机和雷达融合的目标检测方法及系统
CN106529587B (zh) 基于目标点识别的视觉航向识别方法
CN110458161B (zh) 一种结合深度学习的移动机器人门牌定位方法
CN114018236B (zh) 一种基于自适应因子图的激光视觉强耦合slam方法
EP2164043A1 (fr) Calibration et calcul d'une perspective d'une caméra vidéo
CN110136174B (zh) 一种目标对象跟踪方法和装置
CN115376109B (zh) 障碍物检测方法、障碍物检测装置以及存储介质
CN112802196B (zh) 基于点线特征融合的双目惯性同时定位与地图构建方法
CN105116886A (zh) 一种机器人自主行走的方法
CN112288811A (zh) 多帧深度图像定位的关键帧融合控制方法及视觉机器人
CN111950440A (zh) 一种识别与定位门的方法、装置及存储介质
CN208289901U (zh) 一种增强视觉的定位装置及机器人
CN115240094A (zh) 一种垃圾检测方法和装置
CN113768419B (zh) 确定扫地机清扫方向的方法、装置及扫地机
CN117152949A (zh) 一种基于无人机的交通事件识别方法及系统
CN111781606A (zh) 一种新型激光雷达和超声波雷达融合的小型化实现方法
CN111780744B (zh) 移动机器人混合导航方法、设备及存储装置
CN115790568A (zh) 基于语义信息的地图生成方法及相关设备
CN112330808B (zh) 一种基于局部地图的优化方法及视觉机器人
CN117044478B (zh) 割草机控制方法、装置、割草机、电子设备及存储介质
CN115994934B (zh) 数据时间对齐方法、装置以及域控制器

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

PUAB Information related to the publication of an a document modified or deleted

Free format text: ORIGINAL CODE: 0009199EPPU

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20220627

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

A4 Supplementary search report drawn up and despatched

Effective date: 20230720

RIC1 Information provided on ipc code assigned before grant

Ipc: G06T 7/55 20170101ALI20230714BHEP

Ipc: B25J 9/16 20060101ALI20230714BHEP

Ipc: G05D 1/02 20200101ALI20230714BHEP

Ipc: G06T 7/73 20170101AFI20230714BHEP

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)