CN113091759A - Pose processing and map building method and device


Info

Publication number
CN113091759A
Authority
CN
China
Prior art keywords
frame image
image
edge
target
gradient
Prior art date
Legal status
Granted
Application number
CN202110266387.5A
Other languages
Chinese (zh)
Other versions
CN113091759B (en)
Inventor
姚秀勇
Current Assignee
Anker Innovations Co Ltd
Original Assignee
Anker Innovations Co Ltd
Priority date
Filing date
Publication date
Application filed by Anker Innovations Co Ltd
Priority to CN202110266387.5A
Publication of CN113091759A
Application granted
Publication of CN113091759B

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/26 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00 specially adapted for navigation in a road network
    • G01C 21/28 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C 21/30 Map- or contour-matching
    • G01C 21/32 Structuring or formatting of map data
    • G01C 21/34 Route searching; Route guidance
    • G01C 21/3407 Route searching; Route guidance specially adapted for specific applications
    • G01C 21/3415 Dynamic re-routing, e.g. recalculating the route when the user deviates from calculated route or after detecting real-time traffic data or accidents
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 11/00 Systems for determining distance or velocity not using reflection or reradiation
    • G01S 11/12 Systems for determining distance or velocity not using reflection or reradiation using electromagnetic waves other than radio waves
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/20 Drawing from basic elements, e.g. lines or circles
    • G06T 11/206 Drawing of charts or graphs
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention relates to a pose processing and map building method and device. The method comprises the following steps: determining initial pose transformation data corresponding to a first frame image and a second frame image during the running of an object; performing edge matching on the first frame image and the second frame image to determine a corresponding edge set; and, for any target edge in the edge set, adjusting the initial pose transformation data to minimize the difference between a first gradient vector of any target point of the target edge in the first frame image and a second gradient vector in the second frame image, so as to obtain target pose transformation data. Edge matching of the two frame images yields the target edges common to both frames, and the initial pose transformation data is adjusted according to the gradient vector differences of the target points on those edges. This avoids the influence of illumination changes and of frame images with few feature points on the pose transformation data, and improves the accuracy of the pose transformation data.

Description

Pose processing and map building method and device
Technical Field
The embodiment of the invention relates to the field of artificial intelligence, in particular to a pose processing and map building method and device.
Background
With the continuous development of artificial intelligence technology, artificial intelligence products such as self-moving devices are increasingly popular. A self-moving device may be a sweeping robot, a picking robot, or the like. When a self-moving device is controlled to move, it must first be localized, that is, its position in the space where it is located must be identified, before it can be navigated.
In the related art, a 2D grid map is generated from the movement track, and obstacles are determined by collision, that is, the position point of the self-moving device is marked as an obstacle when the device collides. A 2D grid map generated from the track in this way has low precision and is not conducive to obstacle avoidance and path planning.
Disclosure of Invention
In view of this, to solve the above technical problems or at least some of them, embodiments of the present invention provide a pose processing method, a map construction method, and corresponding apparatuses.
In a first aspect, an embodiment of the present invention provides a pose processing method, including:
determining initial pose transformation data corresponding to the first frame image and the second frame image in the running process of the object;
performing edge matching on the first frame image and the second frame image, and determining a corresponding edge set;
and aiming at any target edge in the edge set, adjusting the initial pose transformation data to minimize the difference between a first gradient vector of any target point in the target edge in the first frame image and a second gradient vector in the second frame image, so as to obtain target pose transformation data.
In an optional embodiment, the adjusting the initial pose transformation data includes:
and adjusting the initial pose transformation data through an objective function.
In an optional embodiment, when the number of the second frame images is one, the objective function is:
\min_{\xi_{ji},\ \lambda_{m}} \sum_{m=1}^{n} \left\| \begin{pmatrix} g_{i,x}^{m} \\ g_{i,y}^{m} \end{pmatrix} - \begin{pmatrix} g_{j,x}^{m} \\ g_{j,y}^{m} \end{pmatrix} \right\|^{2}
wherein i is the index value of the first frame image, j is the index value of the second frame image, ξ_ji is the Lie algebra corresponding to the pose transformation between the first frame image and the second frame image, λ_m is the inverse depth corresponding to the target point m, g_{i,x}^{m} and g_{i,y}^{m} are the gradients of the target point m in the x and y directions in the first frame image, g_{j,x}^{m} and g_{j,y}^{m} are the gradients of the target point m in the x and y directions in the second frame image, evaluated at the position of the target point m reprojected into the second frame image using ξ_ji and λ_m, and n is a positive integer greater than or equal to 3.
In an optional embodiment, when the number of the second frame images is multiple, the objective function is:
\min_{\xi_{ji},\ \lambda_{m}} \sum_{j} e_{ji}, \qquad e_{ji} = \sum_{m=1}^{n} \left\| \begin{pmatrix} g_{i,x}^{m} \\ g_{i,y}^{m} \end{pmatrix} - \begin{pmatrix} g_{j,x}^{m} \\ g_{j,y}^{m} \end{pmatrix} \right\|^{2}
wherein i is the index value of the first frame image, j is the index value of the second frame image, ξ_ji is the Lie algebra corresponding to the pose transformation between the first frame image and the second frame image, λ_m is the inverse depth corresponding to the target point m, e_ji is the gradient vector difference between any two consecutive frame images, and n is a positive integer greater than or equal to 3.
In an optional embodiment, the method further comprises:
converting the first frame image into a first gray scale image, and converting the second frame image into a second gray scale image;
converting the first grayscale image to a first gradient image, and converting the second grayscale image to a second gradient image.
In an optional embodiment, the method further comprises:
determining a first gradient vector corresponding to the target point from the first gradient image, and determining a second gradient vector corresponding to the target point from the second gradient image.
In a second aspect, an embodiment of the present invention provides a pose processing apparatus, including:
the determining module is used for determining initial pose transformation data corresponding to the first frame image and the second frame image in the running process of the object;
the matching module is used for carrying out edge matching on the first frame image and the second frame image and determining a corresponding edge set;
and the adjusting module is used for adjusting the initial pose transformation data aiming at any target edge in the edge set so as to minimize the difference between a first gradient vector of any target point in the target edge in the first frame image and a second gradient vector in the second frame image, and thus target pose transformation data is obtained.
In a third aspect, an embodiment of the present invention provides a map construction method, including:
acquiring point cloud data corresponding to a first frame of image;
determining a plane image corresponding to a preset height in the point cloud data;
determining a target edge point set corresponding to the plane image according to the first edge set;
and mapping the edge point set to a grid map of the object according to target pose transformation data, wherein the target pose transformation data is obtained by the pose processing method of any one of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a map building apparatus, including:
the acquisition module is used for acquiring point cloud data corresponding to the first frame of image;
the determining module is used for determining a plane image corresponding to a preset height in the point cloud data; determining an edge point set corresponding to the plane image according to the first edge set;
a construction module, configured to map the edge point set to a grid map of the object according to target pose transformation data, where the target pose transformation data is obtained by the pose processing method according to any one of the first aspect.
In a fifth aspect, an embodiment of the present invention provides an electronic device, including: a processor and a memory, the processor being configured to execute a pose processing program stored in the memory to implement the pose processing method of any one of the first aspect, or being configured to execute a map construction program stored in the memory to implement the map construction method of any one of the third aspect.
In a sixth aspect, an embodiment of the present invention provides a storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the pose processing method according to any one of the first aspect or the map construction method according to any one of the third aspect.
According to the pose processing scheme provided by the embodiment of the invention, during the running of an object, initial pose transformation data corresponding to a first frame image and a second frame image are determined, where the first frame image is the currently acquired frame image and the second frame image is the previous frame image. Edge matching is performed on the first frame image and the second frame image to determine a corresponding edge set. For any target edge in the edge set, the initial pose transformation data is adjusted so that the difference between a first gradient vector of any target point of the target edge in the first frame image and a second gradient vector in the second frame image is minimized, yielding the target pose transformation data. Edge matching of the two frame images yields the target edges common to both frames, and the initial pose transformation data is adjusted according to the gradient vector differences of the target points on those edges, which avoids the influence of illumination changes and of frame images with few feature points on the pose transformation data and improves the accuracy of the pose transformation data.
Drawings
Fig. 1 is a schematic flow chart of a pose processing method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of another pose processing method according to the embodiment of the present invention;
FIG. 3 is a flowchart illustrating a method for constructing a map according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a pose processing apparatus according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a map building apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of another electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
For the convenience of understanding of the embodiments of the present invention, the following description will be further explained with reference to specific embodiments, which are not to be construed as limiting the embodiments of the present invention.
Fig. 1 is a schematic flow diagram of a pose processing method according to an embodiment of the present invention, and as shown in fig. 1, the method specifically includes:
and S11, determining the initial pose transformation data corresponding to the first frame image and the second frame image in the running process of the object.
The pose processing method provided by the embodiment of the invention is applied to determining the pose of an object in a moving scene. The object is provided with a visual sensor and may be a self-moving device such as a sweeping robot or a picking robot. The visual sensor may be a binocular camera, an RGB-D camera, or the like. The moving scene of the object may be, for example, the sweeping robot performing a cleaning operation in a certain area, or the picking robot performing a picking task in a certain warehouse.
Further, each frame image of the object during its motion is acquired by the visual sensor arranged on the object; the currently acquired frame image is taken as the first frame image, and the frame image immediately preceding the first frame image is taken as the second frame image.
The pose transformation data corresponding to the first frame image and the second frame image is determined from the movement track: a first time point corresponding to the first frame image and a second time point corresponding to the second frame image are acquired, and the initial pose transformation data of the object between the first frame image and the second frame image is determined from the difference between the first time point and the second time point, the motion speed of the object, and the motion deflection angle of the object. The initial pose transformation data may include translation data and deflection data of the object.
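As an illustration only, a minimal sketch of such a trajectory-based estimate is given below. It assumes planar motion with roughly constant speed and yaw rate between the two time points; the function name initial_pose_delta and the unicycle-style integration are illustrative assumptions, not part of the claimed method.

```python
import numpy as np

def initial_pose_delta(t1: float, t2: float, speed: float, yaw_rate: float):
    """Dead-reckoned initial pose transformation between the second frame
    (time t2) and the first frame (time t1): returns translation data and
    deflection data, assuming constant speed and yaw rate over dt."""
    dt = t1 - t2                       # time elapsed between the two frames
    dtheta = yaw_rate * dt             # deflection data (change of heading)
    if abs(yaw_rate) < 1e-6:           # straight-line motion, avoid divide-by-zero
        dx, dy = speed * dt, 0.0
    else:
        r = speed / yaw_rate           # turning radius of the arc travelled
        dx = r * np.sin(dtheta)
        dy = r * (1.0 - np.cos(dtheta))
    return np.array([dx, dy]), dtheta  # translation data, deflection data
```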
S12, performing edge matching on the first frame image and the second frame image, and determining a corresponding edge set.
Edge feature matching is performed between the first frame image and the second frame image in a feature matching manner: a first edge set is obtained from the first frame image, a second edge set is obtained from the second frame image, the similarity between the edges in the first edge set and the edges in the second edge set is calculated, and the one or more edge pairs whose similarity is greater than a set threshold (for example, 90%) are taken as the edge set.
And S13, aiming at any target edge in the edge set, adjusting the initial pose transformation data to minimize the difference between a first gradient vector of any target point in the target edge in the first frame image and a second gradient vector in the second frame image, so as to obtain target pose transformation data.
Any target edge is determined from the edge set, one or more target points are determined from the target edge, and first position information of each target point in the coordinate system corresponding to the first frame image and second position information in the coordinate system corresponding to the second frame image are determined. The first position information in the coordinate system corresponding to the first frame image can be understood as the position information of the 3D point taking the vision sensor arranged on the object as the coordinate origin at the moment the first frame image is acquired; likewise, the second position information in the coordinate system corresponding to the second frame image is the position information of the 3D point taking the vision sensor as the coordinate origin at the moment the second frame image is acquired.
Further, gradient conversion is carried out on the first frame image and the second frame image, and a first gradient vector corresponding to the target point in the first frame image and a second gradient vector corresponding to the target point in the second frame image are obtained.
The initial pose transformation data corresponding to the first frame image and the second frame image is then adjusted according to the relationship between the first position information and the second position information. The adjustment may consist of adjusting the initial pose data according to the gradient vector difference, so that the difference between the first gradient vector of any target point of the target edge in the first frame image and the second gradient vector in the second frame image is minimized, yielding the target pose transformation data.
In the pose processing method provided by the embodiment of the invention, during the running of an object, initial pose transformation data corresponding to a first frame image and a second frame image are determined, where the first frame image is the currently acquired frame image and the second frame image is the previous frame image. Edge matching is performed on the first frame image and the second frame image to determine a corresponding edge set. For any target edge in the edge set, the initial pose transformation data is adjusted so that the difference between a first gradient vector of any target point of the target edge in the first frame image and a second gradient vector in the second frame image is minimized, yielding the target pose transformation data. Edge matching of the two frame images yields the target edges common to both frames, and the initial pose transformation data is adjusted according to the gradient vector differences of the target points on those edges, which avoids the influence of illumination changes and of frame images with few feature points on the pose transformation data and improves the accuracy of the pose transformation data.
Fig. 2 is a schematic flow chart of another pose processing method according to an embodiment of the present invention, and as shown in fig. 2, the method specifically includes:
and S21, acquiring a first frame image and a second frame image.
The object related to this embodiment may be a self-moving device, for example a sweeping robot or a picking robot, on which a vision sensor is arranged. The vision sensor may be a binocular camera or an RGB-D camera. The first frame image and the second frame image are acquired through the vision sensor, where the first frame image is the currently acquired frame image and the second frame image is the previous frame image.
S22, converting the first frame image into a first gray scale image, and converting the second frame image into a second gray scale image.
S23, converting the first gray image into a first gradient image, and converting the second gray image into a second gradient image.
And respectively carrying out gray level conversion and gradient conversion on the first frame image and the second frame image to obtain a corresponding first gradient image and a corresponding second gradient image.
Further, the processing of the images is divided into two steps, gray level conversion and gradient conversion, and each conversion step is configured to process a plurality of frame images synchronously. For example, in the gray level conversion step, the first frame image and the second frame image are converted synchronously to obtain the first gray level image corresponding to the first frame image and the second gray level image corresponding to the second frame image; in the gradient conversion step, the first gray level image and the second gray level image are converted synchronously to obtain the first gradient image corresponding to the first gray level image and the second gradient image corresponding to the second gray level image.
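By way of example only, a minimal sketch of this two-step conversion is shown below using OpenCV; the Sobel operator and the 3x3 kernel are illustrative assumptions, since the embodiment does not prescribe a particular gradient operator.

```python
import cv2
import numpy as np

def to_gradient_image(frame_bgr: np.ndarray) -> np.ndarray:
    """Gray level conversion followed by gradient conversion: returns an
    H x W x 2 array holding the x and y gradients at every pixel."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)   # gray level conversion
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)      # gradient in x direction
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)      # gradient in y direction
    return np.dstack([gx, gy])                           # gradient image

# Both frames can be pushed through each step together, mirroring the
# synchronous processing described above:
# grad_first, grad_second = map(to_gradient_image, (first_frame, second_frame))
```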
S24, performing edge matching on the first frame image and the second frame image, and determining a corresponding edge set.
Edge extraction is performed on the first frame image to obtain a first edge corresponding to the first frame image, where the first edge comprises one or more edges, and the gradient information (gradient direction and amplitude) corresponding to the first edge is determined. Likewise, edge extraction is performed on the second frame image to obtain a second edge comprising one or more edges, and the gradient information (gradient direction and amplitude) corresponding to the second edge is determined.
Edge matching is then performed between the first edge and the second edge, and the edges whose similarity is greater than a set threshold (for example, 90%) are taken as target edges.
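A minimal sketch of one possible edge-matching step is given below. It uses Canny edges and contour shape similarity purely as a stand-in: the embodiment only requires some similarity measure over the extracted edges (for example based on their gradient direction and amplitude), so the specific OpenCV calls and the mapping onto the 0.9 threshold are illustrative assumptions.

```python
import cv2

def matched_edge_set(gray1, gray2, sim_threshold=0.9):
    """Extract edges from both gray images and keep the edge pairs whose
    similarity exceeds the set threshold."""
    edges1, _ = cv2.findContours(cv2.Canny(gray1, 50, 150),
                                 cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    edges2, _ = cv2.findContours(cv2.Canny(gray2, 50, 150),
                                 cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    edge_set = []
    for e1 in edges1:
        for e2 in edges2:
            # matchShapes returns a distance; map it to a similarity in (0, 1].
            dist = cv2.matchShapes(e1, e2, cv2.CONTOURS_MATCH_I1, 0.0)
            if 1.0 / (1.0 + dist) > sim_threshold:
                edge_set.append((e1, e2))   # a matched target edge pair
    return edge_set
```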
And S25, determining the initial pose transformation data corresponding to the first frame image and the second frame image.
The pose transformation data corresponding to the first frame image and the second frame image is determined from the movement track: a first time point corresponding to the first frame image and a second time point corresponding to the second frame image are acquired, and the initial pose transformation data of the object between the first frame image and the second frame image is determined from the difference between the first time point and the second time point, the motion speed of the object, and the motion deflection angle of the object. The initial pose transformation data may include translation data and deflection data of the object.
And S26, aiming at any target edge in the edge set, adjusting the initial pose transformation data through an objective function so as to enable the difference between a first gradient vector of any target point in the target edge in the first frame image and a second gradient vector in the second frame image to be minimum, and obtaining target pose transformation data.
Any target edge is determined from the edge set, and, for one or more target points in the target edge, first position information in the coordinate system corresponding to the first frame image and second position information in the coordinate system corresponding to the second frame image are determined. The first position information in the coordinate system corresponding to the first frame image can be understood as the position information of the 3D point taking the vision sensor arranged on the object as the coordinate origin at the moment the first frame image is acquired; likewise, the second position information in the coordinate system corresponding to the second frame image is the position information of the 3D point taking the vision sensor as the coordinate origin at the moment the second frame image is acquired.
There may be one or more second frame images in this embodiment. When there is one second frame image, the initial pose transformation data is adjusted by the objective function according to the relationship between the first position information and the second position information, so that the difference between the first gradient vector of any target point of the target edge in the first frame image and the second gradient vector in the second frame image is minimized, and the target pose transformation data is obtained.
The relationship between the first position information and the second position information may be obtained by:
First, the back-projection of the binocular camera is defined, which may be:
P = \pi^{-1}(p, Z(p)) = Z(p) \begin{pmatrix} (x - c_x)/f_x \\ (y - c_y)/f_y \\ 1 \end{pmatrix} \qquad \text{(I)}
wherein P is the position of the 3D point corresponding to the pixel point p = (x; y) in the binocular camera coordinate system (the coordinate system corresponding to the first frame image, referred to as the i coordinate system), π is the projection function, Z(p) is the depth of p, (c_x; c_y) is the principal point among the intrinsic parameters of the binocular camera, and (f_x; f_y) is the focal length among the intrinsic parameters of the binocular camera.
Accordingly, the forward projection may be:
\pi(P) = \begin{pmatrix} f_x X / Z + c_x \\ f_y Y / Z + c_y \end{pmatrix}, \qquad P = (X, Y, Z) \qquad \text{(II)}
From the above two equations, a transformation function can be obtained that projects the pixel point p_i in the first frame image to the pixel point p_j in the second frame image, specifically:
p_j = \tau(\xi_{ji}, p_i, Z_i(p_i)) = \pi\left(T_{ji}\, \pi^{-1}(p_i, Z_i(p_i))\right) \qquad \text{(III)}
wherein τ is the transformation function, T_ji is the transformation of a point from the i coordinate system to the j coordinate system, and ξ_ji is the Lie algebra of T_ji.
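For illustration, a minimal sketch of equations (I) to (III) under a standard pinhole model is given below; the function names and the 4x4 homogeneous representation of T_ji are assumptions made for the example.

```python
import numpy as np

def backproject(p, depth, fx, fy, cx, cy):
    """pi^-1: lift a pixel p = (x, y) with depth Z(p) to a 3D point in the
    camera (i) coordinate system, as in equation (I)."""
    x, y = p
    return depth * np.array([(x - cx) / fx, (y - cy) / fy, 1.0])

def project(P, fx, fy, cx, cy):
    """pi: project a 3D point P = (X, Y, Z) onto the image plane, equation (II)."""
    X, Y, Z = P
    return np.array([fx * X / Z + cx, fy * Y / Z + cy])

def warp(p_i, depth_i, T_ji, intrinsics):
    """tau: map pixel p_i of the first frame image to its location p_j in the
    second frame image through the relative pose T_ji (4x4 matrix), equation (III)."""
    fx, fy, cx, cy = intrinsics
    P_i = np.append(backproject(p_i, depth_i, fx, fy, cx, cy), 1.0)
    P_j = T_ji @ P_i
    return project(P_j[:3], fx, fy, cx, cy)
```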
Further, the objective function is:
\min_{\xi_{ji},\ \lambda_{m}} \sum_{m=1}^{n} \left\| \begin{pmatrix} g_{i,x}^{m} \\ g_{i,y}^{m} \end{pmatrix} - \begin{pmatrix} g_{j,x}^{m} \\ g_{j,y}^{m} \end{pmatrix} \right\|^{2}
wherein i is the index value of the first frame image, j is the index value of the second frame image, ξ_ji is the Lie algebra corresponding to the pose transformation between the first frame image and the second frame image, λ_m is the inverse depth corresponding to the target point m, g_{i,x}^{m} and g_{i,y}^{m} are the gradients of the target point m in the x and y directions in the first frame image, g_{j,x}^{m} and g_{j,y}^{m} are the gradients of the target point m in the x and y directions in the second frame image, evaluated at the position of the target point m reprojected into the second frame image using ξ_ji and λ_m, and n is a positive integer greater than or equal to 3.
In an alternative of the embodiment of the present invention, n may be set to a positive integer greater than 100, that is, n ∈ (100, +∞); for example, n may be 110, 120, 130, and so on. The specific value of n may be set according to actual requirements, and this embodiment is not limited in this respect.
The inverse depth λ_m can be obtained from the depth Z_m(p) by the conversion formula:
\lambda_m = \frac{1}{Z_m(p)}
The depth Z_m(p) may be determined from the transformation relationship by which the pixel point p is projected from the first frame image to the second frame image (equation (III) above).
When the number of the second frame images is multiple, the objective function is:
\min_{\xi_{ji},\ \lambda_{m}} \sum_{j} e_{ji}, \qquad e_{ji} = \sum_{m=1}^{n} \left\| \begin{pmatrix} g_{i,x}^{m} \\ g_{i,y}^{m} \end{pmatrix} - \begin{pmatrix} g_{j,x}^{m} \\ g_{j,y}^{m} \end{pmatrix} \right\|^{2}
wherein i is the index value of the first frame image, j is the index value of the second frame image, ξ_ji is the Lie algebra corresponding to the pose transformation between the first frame image and the second frame image, λ_m is the inverse depth corresponding to the target point m, e_ji is the gradient vector difference between any two consecutive frame images, and n is a positive integer greater than or equal to 3.
It should be noted that, when there are a plurality of second frame images, the gradient vector difference between each pair of consecutive frame images is calculated in turn, and these differences are summed to obtain the total gradient vector difference over the plurality of frame images; when this total gradient vector difference is minimized, the target pose transformation data corresponding to the plurality of frame images is obtained.
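As an illustration of how the objective function can be minimized in practice, a sketch is given below. It reuses the warp function sketched above, approximates the Lie-algebra parameterization with a simple rotation-vector plus translation 6-vector, samples the gradient images at the nearest pixel, and refines the pose with scipy.optimize.least_squares; all of these choices, including the function names, are assumptions made for the example rather than the prescribed implementation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def pose_from_xi(xi):
    """Build T_ji from a 6-vector xi = (tx, ty, tz, rx, ry, rz); a simplified
    stand-in for the SE(3) exponential map, adequate for small motions."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_rotvec(xi[3:]).as_matrix()
    T[:3, 3] = xi[:3]
    return T

def gradient_residuals(params, points_i, grad_i, grad_j, intrinsics):
    """Stacked residuals g_i(p_m) - g_j(tau(xi_ji, p_m, 1/lambda_m)) over the
    n >= 3 target points m of a matched target edge."""
    xi, inv_depths = params[:6], params[6:]
    T_ji = pose_from_xi(xi)
    residuals = []
    for p_m, lam in zip(points_i, inv_depths):
        gx1, gy1 = grad_i[int(p_m[1]), int(p_m[0])]    # first gradient vector
        p_j = warp(p_m, 1.0 / lam, T_ji, intrinsics)   # reproject into frame j
        u, v = int(round(p_j[0])), int(round(p_j[1]))  # nearest-pixel lookup
        gx2, gy2 = grad_j[v, u]                        # second gradient vector
        residuals += [gx1 - gx2, gy1 - gy2]
    return np.asarray(residuals)

# Refine the trajectory-based initial pose xi0 and rough inverse depths lam0:
# result = least_squares(gradient_residuals, np.hstack([xi0, lam0]),
#                        args=(points_i, grad_i, grad_j, intrinsics))
# With several second frame images, the residual vectors e_ji of all consecutive
# frame pairs would simply be concatenated before the same minimization.
```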
In the pose processing method provided by the embodiment of the invention, during the running of an object, initial pose transformation data corresponding to a first frame image and a second frame image are determined, where the first frame image is the currently acquired frame image and the second frame image is the previous frame image. Edge matching is performed on the first frame image and the second frame image to determine a corresponding edge set. For any target edge in the edge set, the initial pose transformation data is adjusted so that the difference between a first gradient vector of any target point of the target edge in the first frame image and a second gradient vector in the second frame image is minimized, yielding the target pose transformation data. Edge matching of the two frame images yields the target edges common to both frames, and the initial pose transformation data is adjusted according to the gradient vector differences of the target points on those edges, which avoids the influence of illumination changes and of frame images with few feature points on the pose transformation data and improves the accuracy of the pose transformation data.
Fig. 3 is a schematic flowchart of a map building method according to an embodiment of the present invention, and as shown in fig. 3, the method specifically includes:
and S31, acquiring point cloud data corresponding to the first frame of image.
In this embodiment, map construction is performed on the basis of the target pose transformation data determined in the above embodiments, and edge positions are then marked in the constructed map; the map construction process can be regarded as a fusion of 3D point cloud data and a 2D grid map.
The point cloud data corresponding to the first frame image (the current frame image) is obtained directly through the vision sensor arranged on the object; the point cloud data is the large set of 3D points of the current area recorded by the vision sensor, in a scanning manner, from the position of the object.
And S32, determining a plane image corresponding to the preset height in the point cloud data.
The 3D point cloud data corresponding to the acquired first frame image is cut to obtain plane data, where the plane data is the plane formed by the X axis and the Y axis, intercepted at the set height in the 3D point cloud data.
In an alternative of the embodiment of the invention, the preset height may be determined by a visual sensor arranged at the position of the object, for example, the preset height may be a fixed value, such as 10 cm.
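For illustration, a minimal sketch of this slicing step is shown below. The axis naming in the description is ambiguous, so the sketch assumes the height is measured along the z axis and keeps points within a small tolerance of the preset height (10 cm here); the tolerance and the function name are assumptions.

```python
import numpy as np

def plane_slice(points_xyz: np.ndarray, height: float = 0.10, tol: float = 0.01):
    """Keep the points of the current frame's cloud lying in a thin slab around
    the preset height, yielding a horizontal slice of the 3D point cloud."""
    mask = np.abs(points_xyz[:, 2] - height) < tol   # assumes z is the height axis
    return points_xyz[mask][:, :2]                   # project the slab onto the x-y plane
```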
And S33, determining a target edge point set corresponding to the plane image according to the first edge set.
After the clipped plane image is obtained, the first edge set corresponding to the first frame image, determined as in the above embodiments, is used; the first frame image is the currently captured frame image, the previous frame image of the first frame image is the second frame image, and the first edge set is obtained from the first frame image and the second frame image (see the steps of determining the edge set, S12 in fig. 1 and S24 in fig. 2).
Further, the target edge point set corresponding to the first edge set in the plane image is determined (the target edge point set may be the intersection of the first edge set and the plane image); the target edge point set constitutes the boundary between the obstacle and the ground at the current view angle.
And S34, mapping the edge point set into the grid map of the object according to the target pose transformation data.
The object maps the edge point set in the plane image into the grid map according to the target pose transformation data and the position information of the object in the grid map, so as to mark the boundary position between the obstacle and the ground in the grid map and form a dense 2D map.
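A minimal sketch of this mapping step is given below. It assumes the target pose transformation data has already been composed into a 3x3 homogeneous 2D transform from the camera frame to the grid-map frame, and that the grid uses a fixed resolution and origin; all names and parameter values are illustrative.

```python
import numpy as np

def stamp_edges(grid, edge_points_cam, T_map_cam, resolution=0.05,
                origin=(0.0, 0.0), occupied=100):
    """Transform the planar edge points into the grid-map frame with the target
    pose transformation and mark the corresponding cells as the obstacle/ground
    boundary."""
    pts_h = np.hstack([edge_points_cam, np.ones((len(edge_points_cam), 1))])
    pts_map = (T_map_cam @ pts_h.T).T[:, :2]          # planar coordinates in the map frame
    cols = ((pts_map[:, 0] - origin[0]) / resolution).astype(int)
    rows = ((pts_map[:, 1] - origin[1]) / resolution).astype(int)
    inside = (rows >= 0) & (rows < grid.shape[0]) & (cols >= 0) & (cols < grid.shape[1])
    grid[rows[inside], cols[inside]] = occupied       # boundary cells marked in the 2D map
    return grid
```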
According to the map construction method provided by the embodiment of the invention, point cloud data corresponding to a first frame image is acquired through the visual sensor arranged on the object; a plane image corresponding to a preset height in the point cloud data is determined; an edge point set corresponding to the plane image is determined according to the first edge set; and the edge point set is mapped into the grid map of the object according to the target pose transformation data. Compared with determining obstacles only by collision, this estimates the boundary position between the obstacle and the ground more accurately, which facilitates subsequent obstacle avoidance and path planning of the robot during movement.
Fig. 4 is a schematic structural diagram of a pose processing apparatus according to an embodiment of the present invention, and as shown in fig. 4, the apparatus specifically includes:
a determining module 41, configured to determine, during an operation of the object, initial pose transformation data corresponding to the first frame image and the second frame image;
a matching module 42, configured to perform edge matching on the first frame image and the second frame image, and determine a corresponding edge set;
an adjusting module 43, configured to adjust the initial pose transformation data for any target edge in the edge set, so as to minimize a difference between a first gradient vector of any target point in the target edge in the first frame image and a second gradient vector of any target point in the second frame image, thereby obtaining target pose transformation data.
In an optional embodiment, the adjusting module 43 is specifically configured to adjust the initial pose transformation data by an objective function.
In an optional embodiment, when the number of the second frame images is one, the objective function is:
\min_{\xi_{ji},\ \lambda_{m}} \sum_{m=1}^{n} \left\| \begin{pmatrix} g_{i,x}^{m} \\ g_{i,y}^{m} \end{pmatrix} - \begin{pmatrix} g_{j,x}^{m} \\ g_{j,y}^{m} \end{pmatrix} \right\|^{2}
wherein i is the index value of the first frame image, j is the index value of the second frame image, ξ_ji is the Lie algebra corresponding to the pose transformation between the first frame image and the second frame image, λ_m is the inverse depth corresponding to the target point m, g_{i,x}^{m} and g_{i,y}^{m} are the gradients of the target point m in the x and y directions in the first frame image, g_{j,x}^{m} and g_{j,y}^{m} are the gradients of the target point m in the x and y directions in the second frame image, evaluated at the position of the target point m reprojected into the second frame image using ξ_ji and λ_m, and n is a positive integer greater than or equal to 3.
In an optional embodiment, when the number of the second frame images is multiple, the objective function is:
\min_{\xi_{ji},\ \lambda_{m}} \sum_{j} e_{ji}, \qquad e_{ji} = \sum_{m=1}^{n} \left\| \begin{pmatrix} g_{i,x}^{m} \\ g_{i,y}^{m} \end{pmatrix} - \begin{pmatrix} g_{j,x}^{m} \\ g_{j,y}^{m} \end{pmatrix} \right\|^{2}
wherein i is the index value of the first frame image, j is the index value of the second frame image, ξ_ji is the Lie algebra corresponding to the pose transformation between the first frame image and the second frame image, λ_m is the inverse depth corresponding to the target point m, e_ji is the gradient vector difference between any two consecutive frame images, and n is a positive integer greater than or equal to 3.
In an optional embodiment, the apparatus further comprises: a conversion module 44, configured to convert the first frame image into a first grayscale image, and convert the second frame image into a second grayscale image; converting the first grayscale image to a first gradient image, and converting the second grayscale image to a second gradient image.
In an optional embodiment, the determining module 41 is further configured to determine a first gradient vector corresponding to the target point from the first gradient image, and determine a second gradient vector corresponding to the target point from the second gradient image.
The pose processing apparatus provided in this embodiment may be the pose processing apparatus shown in fig. 4, and may perform all the steps of the pose processing method shown in fig. 1-2, so as to achieve the technical effects of the pose processing method shown in fig. 1-2; for brevity, please refer to the related description of fig. 1-2, which is not repeated here.
Fig. 5 is a schematic structural diagram of a map building apparatus provided in an embodiment of the present invention, and as shown in fig. 5, the structure specifically includes:
an obtaining module 51, configured to obtain point cloud data corresponding to the first frame of image;
a determining module 52, configured to determine a planar image corresponding to a preset height in the point cloud data; determining a target edge point set corresponding to the plane image according to the first edge set;
a construction module 53, configured to map the edge point set in a grid map of the object according to the target pose transformation data.
The map building apparatus provided in this embodiment may be the map building apparatus shown in fig. 5, and may perform all the steps of the map building method shown in fig. 3, so as to achieve the technical effect of the map building method shown in fig. 3; for brevity, please refer to the related description of fig. 3, which is not repeated here.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, where the electronic device 600 shown in fig. 6 includes: at least one processor 601, memory 602, at least one network interface 604, and other user interfaces 603. The various components in the electronic device 600 are coupled together by a bus system 605. It is understood that the bus system 605 is used to enable communications among the components. The bus system 605 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 605 in fig. 6.
The user interface 603 may include, among other things, a display, a keyboard, or a pointing device (e.g., a mouse, trackball, touch pad, or touch screen).
It will be appreciated that the memory 602 in embodiments of the invention may be either volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), SyncLink DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The memory 602 described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
In some embodiments, memory 602 stores the following elements, executable units or data structures, or a subset thereof, or an expanded set thereof: an operating system 6021 and application programs 6022.
The operating system 6021 includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, and is used for implementing various basic services and processing hardware-based tasks. The application program 6022 includes various application programs such as a Media Player (Media Player), a Browser (Browser), and the like, and is used to implement various application services. A program implementing the method of an embodiment of the invention can be included in the application program 6022.
In the embodiment of the present invention, by calling a program or an instruction stored in the memory 602, specifically, a program or an instruction stored in the application program 6022, the processor 601 is configured to execute the method steps provided by the method embodiments, for example, including:
determining initial pose transformation data corresponding to the first frame image and the second frame image in the running process of the object; performing edge matching on the first frame image and the second frame image, and determining a corresponding edge set; and aiming at any target edge in the edge set, adjusting the initial pose transformation data to minimize the difference between a first gradient vector of any target point in the target edge in the first frame image and a second gradient vector in the second frame image, so as to obtain target pose transformation data.
In an optional embodiment, the initial pose transformation data is adjusted by an objective function.
In an optional embodiment, when the number of the second frame images is one, the objective function is:
\min_{\xi_{ji},\ \lambda_{m}} \sum_{m=1}^{n} \left\| \begin{pmatrix} g_{i,x}^{m} \\ g_{i,y}^{m} \end{pmatrix} - \begin{pmatrix} g_{j,x}^{m} \\ g_{j,y}^{m} \end{pmatrix} \right\|^{2}
wherein i is the index value of the first frame image, j is the index value of the second frame image, ξ_ji is the Lie algebra corresponding to the pose transformation between the first frame image and the second frame image, λ_m is the inverse depth corresponding to the target point m, g_{i,x}^{m} and g_{i,y}^{m} are the gradients of the target point m in the x and y directions in the first frame image, g_{j,x}^{m} and g_{j,y}^{m} are the gradients of the target point m in the x and y directions in the second frame image, evaluated at the position of the target point m reprojected into the second frame image using ξ_ji and λ_m, and n is a positive integer greater than or equal to 3.
In an optional embodiment, when the number of the second frame images is multiple, the objective function is:
\min_{\xi_{ji},\ \lambda_{m}} \sum_{j} e_{ji}, \qquad e_{ji} = \sum_{m=1}^{n} \left\| \begin{pmatrix} g_{i,x}^{m} \\ g_{i,y}^{m} \end{pmatrix} - \begin{pmatrix} g_{j,x}^{m} \\ g_{j,y}^{m} \end{pmatrix} \right\|^{2}
wherein i is the index value of the first frame image, j is the index value of the second frame image, ξ_ji is the Lie algebra corresponding to the pose transformation between the first frame image and the second frame image, λ_m is the inverse depth corresponding to the target point m, e_ji is the gradient vector difference between any two consecutive frame images, and n is a positive integer greater than or equal to 3.
In an alternative embodiment, the first frame image is converted into a first gray scale image, and the second frame image is converted into a second gray scale image; converting the first grayscale image to a first gradient image, and converting the second grayscale image to a second gradient image.
In an alternative embodiment, a first gradient vector corresponding to the target point is determined from the first gradient image, and a second gradient vector corresponding to the target point is determined from the second gradient image.
The method disclosed by the above-mentioned embodiment of the present invention can be applied to the processor 601, or implemented by the processor 601. The processor 601 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 601. The processor 601 may be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software elements in the decoding processor. The software elements may be located in RAM, flash memory, ROM, PROM or EEPROM, registers, or other storage media well known in the art. The storage medium is located in the memory 602, and the processor 601 reads the information in the memory 602 and completes the steps of the method in combination with its hardware.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the Processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units configured to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described herein may be implemented by means of units performing the functions described herein. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
The electronic device provided in this embodiment may be the electronic device shown in fig. 6, and may execute all the steps of the pose processing method shown in fig. 1-2, so as to achieve the technical effect of the pose processing method shown in fig. 1-2, and for brevity, it is not described herein again.
The embodiment of the invention also provides a storage medium (computer readable storage medium). The storage medium herein stores one or more programs. Among others, the storage medium may include volatile memory, such as random access memory; the memory may also include non-volatile memory, such as read-only memory, flash memory, a hard disk, or a solid state disk; the memory may also comprise a combination of memories of the kind described above.
The one or more programs in the storage medium are executable by one or more processors to implement the above-described pose processing method executed on the pose processing apparatus side.
The processor is configured to execute the pose processing program stored in the memory to implement the following steps of the pose processing method executed on the pose processing apparatus side:
determining initial pose transformation data corresponding to the first frame image and the second frame image in the running process of the object; performing edge matching on the first frame image and the second frame image, and determining a corresponding edge set; and aiming at any target edge in the edge set, adjusting the initial pose transformation data to minimize the difference between a first gradient vector of any target point in the target edge in the first frame image and a second gradient vector in the second frame image, so as to obtain target pose transformation data.
In an optional embodiment, the initial pose transformation data is adjusted by an objective function.
In an optional embodiment, when the number of the second frame images is one, the objective function is:
\min_{\xi_{ji},\ \lambda_{m}} \sum_{m=1}^{n} \left\| \begin{pmatrix} g_{i,x}^{m} \\ g_{i,y}^{m} \end{pmatrix} - \begin{pmatrix} g_{j,x}^{m} \\ g_{j,y}^{m} \end{pmatrix} \right\|^{2}
wherein i is the index value of the first frame image, j is the index value of the second frame image, ξ_ji is the Lie algebra corresponding to the pose transformation between the first frame image and the second frame image, λ_m is the inverse depth corresponding to the target point m, g_{i,x}^{m} and g_{i,y}^{m} are the gradients of the target point m in the x and y directions in the first frame image, g_{j,x}^{m} and g_{j,y}^{m} are the gradients of the target point m in the x and y directions in the second frame image, evaluated at the position of the target point m reprojected into the second frame image using ξ_ji and λ_m, and n is a positive integer greater than or equal to 3.
In an optional embodiment, when the number of the second frame images is multiple, the objective function is:
\min_{\xi_{ji},\ \lambda_{m}} \sum_{j} e_{ji}, \qquad e_{ji} = \sum_{m=1}^{n} \left\| \begin{pmatrix} g_{i,x}^{m} \\ g_{i,y}^{m} \end{pmatrix} - \begin{pmatrix} g_{j,x}^{m} \\ g_{j,y}^{m} \end{pmatrix} \right\|^{2}
wherein i is the index value of the first frame image, j is the index value of the second frame image, ξ_ji is the Lie algebra corresponding to the pose transformation between the first frame image and the second frame image, λ_m is the inverse depth corresponding to the target point m, e_ji is the gradient vector difference between any two consecutive frame images, and n is a positive integer greater than or equal to 3.
In an alternative embodiment, the first frame image is converted into a first gray scale image, and the second frame image is converted into a second gray scale image; converting the first grayscale image to a first gradient image, and converting the second grayscale image to a second gradient image.
In an alternative embodiment, a first gradient vector corresponding to the target point is determined from the first gradient image, and a second gradient vector corresponding to the target point is determined from the second gradient image.
Fig. 7 is a schematic structural diagram of another electronic device according to an embodiment of the present invention, where the electronic device 700 shown in fig. 7 includes: at least one processor 701, memory 702, at least one network interface 704, and other user interfaces 703. The various components in the electronic device 700 are coupled together by a bus system 705. It is understood that the bus system 705 is used to enable communications among the components. The bus system 705 includes a power bus, a control bus, and a status signal bus in addition to a data bus. But for clarity of illustration the various busses are labeled in figure 7 as the bus system 705.
The user interface 703 may include, among other things, a display, a keyboard, or a pointing device (e.g., a mouse, trackball, touch pad, or touch screen).
It is to be understood that the memory 702 in embodiments of the present invention may be either volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), SyncLink DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The memory 702 described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
In some embodiments, memory 702 stores the following elements, executable units or data structures, or a subset thereof, or an expanded set thereof: an operating system 7021 and application programs 7022.
The operating system 7021 includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, for implementing various basic services and processing hardware-based tasks. The application 7022 includes various applications, such as a Media Player (Media Player), a Browser (Browser), and the like, for implementing various application services. Programs that implement methods in accordance with embodiments of the present invention can be included within application program 7022.
In the embodiment of the present invention, the processor 701 is configured to execute the method steps provided by the method embodiments by calling a program or an instruction stored in the memory 702, specifically, a program or an instruction stored in the application 7022, for example, and includes:
acquiring point cloud data corresponding to a first frame of image; determining a plane image corresponding to a preset height in the point cloud data; determining a target edge point set corresponding to the plane image according to the first edge set; and mapping the edge point set into a grid map of the object according to the target pose transformation data.
The method disclosed in the above embodiments of the present invention may be applied to the processor 701, or implemented by the processor 701. The processor 701 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be implemented by integrated logic circuits of hardware or instructions in the form of software in the processor 701. The processor 701 may be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software elements in the decoding processor. The software elements may be located in RAM, flash memory, ROM, PROM or EEPROM, registers, or other storage media well known in the art. The storage medium is located in the memory 702, and the processor 701 reads the information in the memory 702 and performs the steps of the above method in combination with its hardware.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the Processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units configured to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described herein may be implemented by means of units performing the functions described herein. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
The electronic device provided in this embodiment may be the electronic device shown in fig. 7, and may perform all the steps of the map construction method shown in fig. 3, thereby achieving the technical effects of the map construction method shown in fig. 3. For brevity, reference is made to the description related to fig. 3, which is not repeated here.
An embodiment of the present invention further provides a storage medium (a computer-readable storage medium). The storage medium stores one or more programs. The storage medium may include volatile memory, such as random access memory; it may also include non-volatile memory, such as read-only memory, flash memory, a hard disk, or a solid-state disk; or it may include a combination of the above kinds of memory.
The one or more programs in the storage medium are executable by one or more processors to implement the map construction method described above as performed on the map construction apparatus side.
The processor is configured to execute a map construction program stored in the memory to implement the following steps of a map construction method performed on the map construction device side:
acquiring point cloud data corresponding to a first frame image; determining a plane image corresponding to a preset height in the point cloud data; determining a target edge point set corresponding to the plane image according to a first edge set; and mapping the target edge point set into a grid map of the object according to the target pose transformation data.
Those skilled in the art will further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), flash memory, Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. A pose processing method, comprising:
determining initial pose transformation data corresponding to a first frame image and a second frame image acquired during operation of the object;
performing edge matching on the first frame image and the second frame image, and determining a corresponding edge set;
and for any target edge in the edge set, adjusting the initial pose transformation data to minimize the difference between a first gradient vector of any target point in the target edge in the first frame image and a second gradient vector of the target point in the second frame image, so as to obtain target pose transformation data.
2. The method according to claim 1, wherein the adjusting the initial pose transformation data comprises:
adjusting the initial pose transformation data through an objective function.
3. The method according to claim 2, wherein when the number of the second frame images is one, the objective function is:
[Formula: Figure FDA0002971799920000011]
wherein i is the index value of the first frame image, j is the index value of the second frame image, ξji is the Lie algebra corresponding to the pose transformation between the first frame image and the second frame image, λm is the inverse depth corresponding to the target point m, [Figure FDA0002971799920000012] is the gradient of the target point m in the x direction in the first frame image, [Figure FDA0002971799920000013] is the gradient of the target point m in the y direction in the first frame image, [Figure FDA0002971799920000014] is the gradient of the target point m in the x direction in the second frame image, [Figure FDA0002971799920000015] is the gradient of the target point m in the y direction in the second frame image, and n is a positive integer greater than or equal to 3.
4. The method according to claim 2, wherein when the number of the second frame images is plural, the objective function is:
[Formula: Figure FDA0002971799920000021]
wherein i is the index value of the first frame image, j is the index value of the second frame image, ξji is the Lie algebra corresponding to the pose transformation between the first frame image and the second frame image, λm is the inverse depth corresponding to the target point m, eji is the gradient vector difference between any two consecutive frame images, and n is a positive integer greater than or equal to 3.
5. The method of claim 1, further comprising:
converting the first frame image into a first gray scale image, and converting the second frame image into a second gray scale image;
converting the first grayscale image to a first gradient image, and converting the second grayscale image to a second gradient image.
6. The method of claim 5, further comprising:
determining a first gradient vector corresponding to the target point from the first gradient image, and determining a second gradient vector corresponding to the target point from the second gradient image.
7. A pose processing apparatus characterized by comprising:
a determining module, configured to determine initial pose transformation data corresponding to a first frame image and a second frame image acquired during operation of the object;
a matching module, configured to perform edge matching on the first frame image and the second frame image and determine a corresponding edge set; and
an adjusting module, configured to adjust, for any target edge in the edge set, the initial pose transformation data so as to minimize the difference between a first gradient vector of any target point in the target edge in the first frame image and a second gradient vector of the target point in the second frame image, thereby obtaining target pose transformation data.
8. A map construction method, comprising:
acquiring point cloud data corresponding to a first frame image;
determining a plane image corresponding to a preset height in the point cloud data;
determining a target edge point set corresponding to the plane image according to the first edge set;
and mapping the target edge point set into a grid map of the object according to target pose transformation data, wherein the target pose transformation data is obtained by the pose processing method according to any one of claims 1 to 6.
9. A map building apparatus, comprising:
an acquisition module, configured to acquire point cloud data corresponding to a first frame image;
a determining module, configured to determine a plane image corresponding to a preset height in the point cloud data, and to determine a target edge point set corresponding to the plane image according to the first edge set; and
a construction module, configured to map the target edge point set into a grid map of the object according to target pose transformation data, wherein the target pose transformation data is obtained by the pose processing method according to any one of claims 1 to 6.
10. An electronic device, comprising: a processor and a memory, the processor being configured to execute a pose processing program stored in the memory to implement the pose processing method of any one of claims 1 to 6, or being configured to execute a map construction program stored in the memory to implement the map construction method of any one of claims 7 to 8.
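By way of illustration only (this sketch is not part of the claims), the pose processing of claims 1, 5 and 6 could be prototyped as follows: both frame images are converted to grayscale and then to gradient images, and the initial pose transformation is adjusted so that the gradient vector of each target edge point in the first frame image matches the gradient vector at its reprojection in the second frame image. The pinhole intrinsics K, the known inverse depths, the rotation-vector-plus-translation pose parameterization, and every function name below are assumptions of this sketch rather than details of the patent; the inverse depths, which claims 3 and 4 also treat as optimization variables, are held fixed here for simplicity.

import numpy as np
import cv2
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def gradient_images(img_bgr):
    """Grayscale conversion followed by x/y Sobel gradients (cf. claims 5-6)."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    return gx, gy

def bilinear(img, x, y):
    """Bilinear sampling so the residual varies smoothly with the pose."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0] + dx * (1 - dy) * img[y0, x0 + 1]
            + (1 - dx) * dy * img[y0 + 1, x0] + dx * dy * img[y0 + 1, x0 + 1])

def refine_pose(img_i, img_j, edge_pts, inv_depths, K, xi0):
    """Adjust an initial pose so that the gradient vectors of the target edge
    points in the first frame match those at their reprojections in the second.

    edge_pts   : (M, 2) pixel coordinates (u, v) of target edge points in img_i
    inv_depths : (M,)   inverse depths of those points (assumed known here)
    K          : (3, 3) pinhole intrinsics -- an assumption of this sketch
    xi0        : (6,)   initial pose [rotation vector (3), translation (3)]
    """
    gxi, gyi = gradient_images(img_i)
    gxj, gyj = gradient_images(img_j)
    K_inv = np.linalg.inv(K)
    h, w = gxj.shape

    def residuals(xi):
        R = Rotation.from_rotvec(xi[:3]).as_matrix()
        t = xi[3:]
        res = []
        for (u, v), lam in zip(edge_pts, inv_depths):
            # back-project with inverse depth, transform by the pose, re-project
            p_i = K_inv @ np.array([u, v, 1.0]) / lam
            p_j = K @ (R @ p_i + t)
            uj, vj = p_j[0] / p_j[2], p_j[1] / p_j[2]
            if not (0 <= uj < w - 1 and 0 <= vj < h - 1):
                res += [0.0, 0.0]          # point reprojects outside image j
                continue
            g_i = np.array([gxi[int(v), int(u)], gyi[int(v), int(u)]])
            g_j = np.array([bilinear(gxj, uj, vj), bilinear(gyj, uj, vj)])
            res += list(g_i - g_j)         # first minus second gradient vector
        return np.asarray(res)

    # diff_step keeps finite-difference Jacobians meaningful on image data
    return least_squares(residuals, xi0, diff_step=1e-4).x

Here a least-squares solve over the stacked gradient-vector differences plays the role of the objective functions referenced in claims 3 and 4, with bilinear sampling keeping the residual smooth enough for finite-difference Jacobians.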
CN202110266387.5A 2021-03-11 2021-03-11 Pose processing and map building method and device Active CN113091759B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110266387.5A CN113091759B (en) 2021-03-11 2021-03-11 Pose processing and map building method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110266387.5A CN113091759B (en) 2021-03-11 2021-03-11 Pose processing and map building method and device

Publications (2)

Publication Number Publication Date
CN113091759A true CN113091759A (en) 2021-07-09
CN113091759B CN113091759B (en) 2023-02-28

Family

ID=76666891

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110266387.5A Active CN113091759B (en) 2021-03-11 2021-03-11 Pose processing and map building method and device

Country Status (1)

Country Link
CN (1) CN113091759B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003346164A (en) * 2002-05-27 2003-12-05 Ricoh Co Ltd Edge extraction device of image
CN101408931A (en) * 2007-10-11 2009-04-15 Mv科技软件有限责任公司 System and method for 3D object recognition
CN108256394A (en) * 2016-12-28 2018-07-06 中林信达(北京)科技信息有限责任公司 A kind of method for tracking target based on profile gradients
US10657659B1 (en) * 2017-10-10 2020-05-19 Slightech, Inc. Visual simultaneous localization and mapping system
CN109685825A (en) * 2018-11-27 2019-04-26 哈尔滨工业大学(深圳) Local auto-adaptive feature extracting method, system and storage medium for thermal infrared target tracking
CN109903313A (en) * 2019-02-28 2019-06-18 中国人民解放军国防科技大学 Real-time pose tracking method based on target three-dimensional model
CN110517283A (en) * 2019-07-18 2019-11-29 平安科技(深圳)有限公司 Attitude Tracking method, apparatus and computer readable storage medium
CN111060115A (en) * 2019-11-29 2020-04-24 中国科学院计算技术研究所 Visual SLAM method and system based on image edge features
CN111583357A (en) * 2020-05-20 2020-08-25 重庆工程学院 Object motion image capturing and synthesizing method based on MATLAB system
CN111709984A (en) * 2020-06-08 2020-09-25 亮风台(上海)信息科技有限公司 Pose depth prediction method, visual odometer method, device, equipment and medium
CN112102403A (en) * 2020-08-11 2020-12-18 国网安徽省电力有限公司淮南供电公司 High-precision positioning method and system for autonomous inspection unmanned aerial vehicle in power transmission tower scene

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
杜鑫峰 (Du Xinfeng) et al.: "Fast recognition and accurate localization for a humanoid soccer robot vision system" (仿人足球机器人视觉系统快速识别与精确定位), 《浙江大学学报(工学版)》 (Journal of Zhejiang University, Engineering Science) *

Also Published As

Publication number Publication date
CN113091759B (en) 2023-02-28

Similar Documents

Publication Publication Date Title
WO2021233029A1 (en) Simultaneous localization and mapping method, device, system and storage medium
US10755428B2 (en) Apparatuses and methods for machine vision system including creation of a point cloud model and/or three dimensional model
RU2713611C2 (en) Three-dimensional space simulation method
KR101725060B1 (en) Apparatus for recognizing location mobile robot using key point based on gradient and method thereof
US11830216B2 (en) Information processing apparatus, information processing method, and storage medium
CN110853075B (en) Visual tracking positioning method based on dense point cloud and synthetic view
KR101776622B1 (en) Apparatus for recognizing location mobile robot using edge based refinement and method thereof
KR101784183B1 (en) APPARATUS FOR RECOGNIZING LOCATION MOBILE ROBOT USING KEY POINT BASED ON ADoG AND METHOD THEREOF
JP7131994B2 (en) Self-position estimation device, self-position estimation method, self-position estimation program, learning device, learning method and learning program
GB2580691A (en) Depth estimation
KR20150119337A (en) Generation of 3d models of an environment
CN110738730B (en) Point cloud matching method, device, computer equipment and storage medium
WO2021195939A1 (en) Calibrating method for external parameters of binocular photographing device, movable platform and system
JPS63213005A (en) Guiding method for mobile object
Lui et al. Eye-full tower: A gpu-based variable multibaseline omnidirectional stereovision system with automatic baseline selection for outdoor mobile robot navigation
CN113052907B (en) Positioning method of mobile robot in dynamic environment
CN109102524B (en) Tracking method and tracking device for image feature points
JP7219561B2 (en) In-vehicle environment recognition device
CN116630442B (en) Visual SLAM pose estimation precision evaluation method and device
CN113091759B (en) Pose processing and map building method and device
CN112085842B (en) Depth value determining method and device, electronic equipment and storage medium
CN108846856B (en) Picture feature point tracking method and tracking device
KR102438490B1 (en) Heterogeneous sensors calibration method and apparatus using single checkerboard
CN112669388B (en) Calibration method and device for laser radar and camera device and readable storage medium
Martinez et al. Map-based lane identification and prediction for autonomous vehicles

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant