CN110599586A - Semi-dense scene reconstruction method and device, electronic equipment and storage medium


Info

Publication number: CN110599586A
Application number: CN201910722061.1A
Authority: CN (China)
Prior art keywords: image, point set, feature point, semi-dense, frame
Other languages: Chinese (zh)
Inventor: 杜银和
Current assignee: Hubei Ecarx Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Hubei Ecarx Technology Co Ltd
Application filed by Hubei Ecarx Technology Co Ltd
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/207 Analysis of motion for motion estimation over a hierarchy of resolutions
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration


Abstract

The invention discloses a method and a device for reconstructing a semi-dense scene, electronic equipment and a storage medium, wherein the method comprises the following steps: performing binocular parallel correction on the first image according to the internal parameters and the external parameters of the binocular camera to obtain a second image; selecting a plurality of key image frames from the second image based on a preset rule; calculating to obtain a feature point set corresponding to each key image frame based on a direct motion estimation method and preset parameters; grouping a plurality of key image frames to obtain a plurality of image frame groups, wherein each image frame group comprises a target frame and a plurality of frames to be projected; in each image frame group, acquiring a feature point set of a target frame and a feature point set of at least one frame to be projected, and projecting the feature point set of the at least one frame to be projected to the feature point set of the target frame to obtain a projection feature point set corresponding to each image frame group; and obtaining a semi-dense scene corresponding to the first image based on the projection feature point set corresponding to each image frame group.

Description

Semi-dense scene reconstruction method and device, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of binocular camera systems, in particular to a semi-dense scene reconstruction method and device, electronic equipment and a storage medium.
Background
Binocular camera systems are increasingly widely used in the fields of automatic driving and robotics: they not only provide excellent depth perception but also have a cost advantage.
In the prior art there are many methods for scene reconstruction based on a binocular camera system, but most rely on dense matching, and scene reconstruction by dense matching suffers from problems such as a high mismatching rate.
Disclosure of Invention
In order to solve the above technical problems, the invention discloses a semi-dense scene reconstruction method.
In order to achieve the above object, the present invention provides a method for reconstructing a semi-dense scene, wherein the method comprises:
acquiring internal parameters and external parameters of a binocular camera;
acquiring a first image acquired by a binocular camera, and performing binocular parallel correction on the first image according to internal parameters and external parameters of the binocular camera to obtain a second image;
selecting a plurality of key image frames from the second image based on a preset rule;
calculating to obtain a feature point set corresponding to each key image frame based on a direct motion estimation method and preset parameters;
grouping a plurality of key image frames to obtain a plurality of image frame groups, wherein each image frame group comprises a target frame and a plurality of frames to be projected;
in each image frame group, acquiring a feature point set of the target frame and a feature point set of at least one frame to be projected, and projecting the feature point set of the at least one frame to be projected to the feature point set of the target frame to obtain a projection feature point set corresponding to each image frame group;
and obtaining a semi-dense scene corresponding to the first image based on the projection feature point set corresponding to each image frame group.
Further, the acquiring a first image collected by a binocular camera, and performing binocular parallel correction on the first image according to internal parameters and external parameters of the binocular camera to obtain a second image includes:
carrying out distortion correction on the first image according to the internal parameters of the binocular camera to obtain an intermediate image;
and performing epipolar line correction on the intermediate image according to the external parameters of the binocular camera to obtain a second image.
Further, the selecting a plurality of key image frames from the second image based on a preset rule includes:
and selecting a key image frame every N frames in the second image to obtain a plurality of key image frames.
Further, the acquiring, in each image frame group, a feature point set of the target frame and a feature point set of at least one frame to be projected, and projecting the feature point set of the at least one frame to be projected onto the feature point set of the target frame to obtain a projected feature point set corresponding to each image frame group includes:
for each frame to be projected, acquiring each feature point in a feature point set of the frame to be projected, a conversion coefficient corresponding to each feature point and depth value data corresponding to each feature point;
obtaining a plurality of first characteristic points and a first characteristic point set consisting of the plurality of first characteristic points according to each characteristic point of the frame to be projected, the conversion coefficient corresponding to each characteristic point and the depth value data corresponding to each characteristic point;
and carrying out superposition combination according to the feature point set of the target frame and the first feature point set to obtain a projection feature point set corresponding to each image frame group.
Further, the obtaining a semi-dense scene corresponding to the first image based on the projection feature point set corresponding to each image frame group includes:
determining two-dimensional coordinates of each feature point in a projection feature point set corresponding to each image frame group in an image coordinate system;
converting two-dimensional coordinates of each feature point in the image coordinate system into three-dimensional coordinates in the camera coordinate system according to a conversion rule between the image coordinate system and the camera coordinate system to obtain a first coordinate point corresponding to the image frame group and a first coordinate point set consisting of the first coordinate points;
converting each first coordinate point in the first coordinate point set into a three-dimensional coordinate in the target coordinate system according to a conversion rule between a camera coordinate system and the target coordinate system to obtain a second coordinate point corresponding to the image frame group and a second coordinate point set consisting of the second coordinate points;
and obtaining a semi-dense scene corresponding to the first image according to the second coordinate point set corresponding to each image frame group.
Further, the obtaining a semi-dense scene corresponding to the first image according to the second coordinate point set corresponding to each image frame group includes:
and placing coordinate values of coordinate points in the second coordinate point set corresponding to each image frame group at corresponding positions in the target coordinate system, thereby forming a semi-dense scene corresponding to the first image.
Further, before acquiring the internal parameters and the external parameters of the binocular camera, the method further comprises the following steps:
and geometrically calibrating the two cameras of the binocular camera.
The invention provides a reconstruction device of a semi-dense scene, which comprises:
the parameter acquisition module is used for acquiring internal parameters and external parameters of the binocular camera;
the image acquisition module is used for acquiring a first image acquired by a binocular camera and carrying out binocular parallel correction on the first image according to internal parameters and external parameters of the binocular camera so as to acquire a second image;
the key image frame selecting module is used for selecting a plurality of key image frames from the second image based on a preset rule;
the characteristic point set calculation module is used for calculating and obtaining a characteristic point set corresponding to each key image frame based on a direct motion estimation method and preset parameters;
the image frame group acquisition module is used for grouping a plurality of key image frames to obtain a plurality of image frame groups, wherein each image frame group comprises a target frame and a plurality of frames to be projected;
a projection feature point set obtaining module, configured to obtain, in each image frame group, a feature point set of the target frame and a feature point set of at least one to-be-projected frame, and project the feature point set of the at least one to-be-projected frame to the feature point set of the target frame, so as to obtain a projection feature point set corresponding to each image frame group;
and the semi-dense scene acquisition module is used for acquiring a semi-dense scene corresponding to the first image based on the projection feature point set corresponding to each image frame group.
The invention provides an electronic device, comprising a processor and a memory;
the processor adapted to implement one or more instructions;
the memory storing one or more instructions adapted to be loaded and executed by the processor to implement the semi-dense scene reconstruction method as described above.
The embodiment of the invention has the following beneficial effects:
the semi-dense scene reconstruction method disclosed by the invention achieves a high data matching rate through multi-frame data stacking, realizes semi-dense reconstruction of static scenes, and produces a clearer semi-dense scene.
Drawings
In order to more clearly illustrate the semi-dense scene reconstruction method, apparatus, electronic device and storage medium of the present invention, the drawings required for the embodiments are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a flowchart of a method for reconstructing a semi-dense scene according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a multi-frame data projection method according to an embodiment of the present invention;
fig. 3 is a schematic flowchart of a semi-dense scene acquisition method according to an embodiment of the present invention;
fig. 4 is an implementation schematic diagram of simulation scene reconstruction provided in the embodiment of the present invention;
fig. 5 is a schematic structural diagram of a semi-dense scene reconstruction apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a semi-dense scene reconstruction terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or server that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The invention can be applied to a binocular camera system, for example, scene reconstruction of the binocular camera system in the technical field of automatic driving.
Referring to fig. 1, which is a flowchart illustrating a semi-dense scene reconstruction method according to an embodiment of the present invention, the present specification provides the method steps as described in the embodiment or the flowchart, but more or fewer steps may be included based on conventional or non-inventive effort. The step order recited in the embodiments is only one of many possible execution orders and does not represent the only one; the semi-dense scene reconstruction method in the present application may be executed in the order shown in the embodiments or the drawings. Specifically, as shown in fig. 1, the method includes:
s101, acquiring internal parameters and external parameters of a binocular camera;
it should be noted that, in the embodiment of the present specification, the binocular camera includes two cameras. The internal parameters include the focal lengths fx and fy of the two cameras, the principal point coordinates (u0, v0), and the distortion coefficients (ki1, ki2, ki3, pi1, pi2); the external parameters include the rotation amount R (including pitch, roll and yaw) and the translation amount T = (tx, ty, tz) between the two cameras, and the like.
Specifically, the principal point coordinates may be the center point coordinates of the two-dimensional image coordinate system. The distortion coefficients include those of the left and right cameras: the distortion coefficients of the left camera may be (k11, k12, k13, p11, p12), where k11, k12 and k13 represent the 1st-, 2nd- and 3rd-order radial distortion coefficients and p11 and p12 represent the 1st- and 2nd-order tangential distortion coefficients; the distortion coefficients of the right camera may be (k21, k22, k23, p21, p22), where k21, k22 and k23 represent the 1st-, 2nd- and 3rd-order radial distortion coefficients and p21 and p22 represent the 1st- and 2nd-order tangential distortion coefficients.
The image coordinate system may be: and establishing a plane rectangular coordinate system u-v taking the upper left corner of the image as an origin and taking the pixel as a unit. The abscissa u and the ordinate v of a pixel are the number of columns and the number of rows in the image array, respectively.
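For illustration, the following is a minimal numpy sketch of the radial/tangential (Brown-Conrady) distortion model implied by these coefficients; the function names, and the convention that pixel coordinates are first normalized by the intrinsics, are assumptions of this sketch rather than part of the patent:

```python
import numpy as np

def distort_normalized(x, y, k1, k2, k3, p1, p2):
    # Radial/tangential (Brown-Conrady) distortion applied to normalized
    # camera coordinates (x, y), i.e. pixel coordinates already divided by
    # the focal lengths and offset by the principal point.
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d

def to_pixel(x_d, y_d, fx, fy, u0, v0):
    # Map distorted normalized coordinates back to pixel coordinates using
    # the focal lengths (fx, fy) and the principal point (u0, v0).
    return fx * x_d + u0, fy * y_d + v0
```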
Before the obtaining of the internal parameters and the external parameters of the binocular camera, the method further comprises the following steps:
and geometrically calibrating the two cameras of the binocular camera.
In the embodiment of the specification, geometric calibration is carried out on the left eye camera and the right eye camera by using an open-source binocular calibration method. Specifically, the two cameras are first triggered by the same clock trigger source, i.e., the two cameras are hardware-synchronized; the sizes of the left and right target images are kept consistent, and the size can be set in various ways;
for example, the image size may be determined to be 640 x 480 based on computational resources and algorithm requirements.
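For illustration only, a minimal OpenCV sketch of such an open-source geometric calibration follows; the checkerboard inputs objpoints, imgpoints_l and imgpoints_r and the variable names are assumptions of this sketch, not part of the patent:

```python
import cv2

# Assumed inputs: objpoints (checkerboard corners in 3D board coordinates),
# imgpoints_l / imgpoints_r (their detected pixel positions per image pair),
# and image_size = (640, 480) as chosen above.
_, K1, D1, _, _ = cv2.calibrateCamera(objpoints, imgpoints_l, image_size, None, None)
_, K2, D2, _, _ = cv2.calibrateCamera(objpoints, imgpoints_r, image_size, None, None)

# Fix the per-camera intrinsics and solve for the extrinsics: the rotation R
# and translation T of the right camera relative to the left.
_, K1, D1, K2, D2, R, T, E, F = cv2.stereoCalibrate(
    objpoints, imgpoints_l, imgpoints_r,
    K1, D1, K2, D2, image_size, flags=cv2.CALIB_FIX_INTRINSIC)
```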
S103, acquiring a first image acquired by a binocular camera, and performing binocular parallel correction on the first image according to internal parameters and external parameters of the binocular camera to acquire a second image;
in the embodiment of the present specification, the first image may be an image acquired by a left eye camera or an image acquired by a right eye camera in a binocular camera, and the first image is subjected to distortion correction according to internal parameters of the binocular camera to obtain an intermediate image; performing epipolar line correction on the intermediate image according to the external parameters of the binocular camera to obtain a second image;
specifically, the second image is an image obtained by correcting an image obtained by the left eye camera.
In the embodiment of the present specification, distortion correction is performed on the first image using the focal lengths, principal point coordinates and distortion coefficients of the cameras, and epipolar correction is performed on the first image using the rotation amount R and translation amount T between the cameras, so that the optical axes of the two cameras become parallel: the imaging origin coordinates of the left and right views are consistent, the optical axes are parallel, the left and right imaging planes are coplanar, and the epipolar lines are aligned.
Specifically, when performing binocular parallel correction, the left eye camera and the right eye camera of the binocular camera are corrected separately. The specific steps of binocular parallel correction include:
A. firstly, respectively converting an image coordinate system of an image acquired by a left eye camera and an image coordinate system of an image acquired by a right eye camera into a camera coordinate system of the left eye camera and a camera coordinate system of the right eye camera through a common internal reference matrix;
B. obtaining a new left camera coordinate system by left-multiplying the camera coordinate system of the left eye camera by a rotation matrix (namely, the rotation amount of the left eye camera) R1; obtaining a new right camera coordinate system by left-multiplying the rotation matrix (namely the rotation amount of the right eye camera) R2 to the camera coordinate system of the right eye camera;
C. carrying out distortion removal operation of the left camera through a left camera coordinate system, and carrying out distortion removal operation of the right camera through a right camera coordinate system;
D. after the distortion removing operation is finished, the camera coordinate system of the left eye camera is converted into the image coordinate system of the image acquired by the left eye camera again by using the internal reference matrix of the left eye camera, and the camera coordinate system of the right eye camera is converted into the image coordinate system of the image acquired by the right eye camera again by using the internal reference matrix of the right eye camera;
E. and the pixel points of the new left image and the new right image are interpolated by using the pixel values of the left image and the right image respectively.
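Steps A-E above correspond closely to OpenCV's stereo rectification pipeline; a minimal sketch follows, reusing K1, D1, K2, D2, R, T from the calibration sketch above and using left_raw and right_raw for the captured image pair (names are illustrative):

```python
import cv2

# Rectifying rotations R1, R2 (left-multiplied onto each camera frame, as in
# step B) and new projection matrices P1, P2 for the rectified pair.
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
    K1, D1, K2, D2, image_size, R, T)

# Per-pixel undistort-and-rectify maps (steps A-D), then resampling of the
# raw images by interpolation (step E).
map1x, map1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, image_size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, image_size, cv2.CV_32FC1)
left_rect = cv2.remap(left_raw, map1x, map1y, cv2.INTER_LINEAR)
right_rect = cv2.remap(right_raw, map2x, map2y, cv2.INTER_LINEAR)
```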
The camera coordinate system is a three-dimensional rectangular coordinate system established by taking the focusing center of the camera as the origin and the optical axis as the Z axis. Specifically, the origin of the camera coordinate system is the optical center of the camera, the x-axis and y-axis are parallel to the u-axis and v-axis of the image, and the z-axis is the optical axis of the camera, perpendicular to the image plane.
In the embodiment of the present specification, the photometry of the images may also be calibrated by an open-source photometric calibration method; the resulting photometric coefficients can apply a certain compensation to the relationship between the input exposure and the output brightness values of the left and right cameras, so as to eliminate photometric errors present in some left-eye and right-eye images and thereby obtain a second image of higher quality.
S105, selecting a plurality of key image frames from the second image based on a preset rule;
in the embodiment of the present specification, the second image is converted to grayscale to obtain a grayscale image, which is input into a DSO (Direct Sparse Odometry) system; that is, the image frames are input into the DSO system, and a plurality of key image frames are then selected at a fixed interval;
DSO is a visual odometry method of the sparse direct class, and may serve as the dso direct motion estimation method;
in the embodiment of the present specification, in the second image, one key image frame is selected every N frames to obtain a plurality of key image frames.
Preferably, one key image frame is selected every 5 frames, thereby obtaining a plurality of key image frames;
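A minimal sketch of this fixed-interval selection (the grayscale conversion and the interval of 5 follow the text above; the function itself is illustrative):

```python
import cv2

def select_keyframes(frames, n=5):
    # Keep every n-th frame as a key image frame, converted to grayscale
    # before being handed to the DSO system.
    return [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
            for i, f in enumerate(frames) if i % n == 0]
```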
s107, calculating to obtain a feature point set corresponding to each key image frame based on a direct motion estimation method and preset parameters;
in the embodiment of the present specification, based on the selected key image frames and the preset parameters, the feature point set corresponding to each key image frame is calculated by using the open-source dso direct motion estimation method; the depth values of the parameters are calculated using the binocular baseline information;
the preset parameters may be prior parameters; for example, they may be the maximum and minimum distance thresholds for the image point cloud, a threshold on the number of outliers, and the initialized depth values of the left and right eye cameras after calibration, where outliers are points lying outside the search radius of the point cloud.
In the embodiment of the present specification, the dso direct motion estimation method may compute feature point information in the semi-dense scene by minimizing a photometric error, thereby achieving real-time performance; photometric error here means that the objective function being minimized is determined by the error between images, the objective function being the photometric error function between the images.
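For intuition, a simplified sketch of such a photometric error over corresponding pixel locations follows (the full dso formulation additionally uses small patches, a robust norm and affine brightness terms, all omitted here):

```python
import numpy as np

def photometric_error(I_ref, I_cur, pts_ref, pts_cur):
    # Sum of squared intensity residuals I_cur(p') - I_ref(p) over (N, 2)
    # arrays of corresponding (u, v) pixel locations; a direct method
    # minimizes this over camera pose and inverse depth instead of
    # matching feature descriptors.
    r = (I_cur[pts_cur[:, 1], pts_cur[:, 0]].astype(np.float64)
         - I_ref[pts_ref[:, 1], pts_ref[:, 0]].astype(np.float64))
    return float(np.sum(r ** 2))
```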
S109, grouping the plurality of key image frames to obtain a plurality of image frame groups, wherein each image frame group comprises a target frame and a plurality of frames to be projected;
in a preferred embodiment of the present specification, each image frame group may include 8 consecutive key image frames, which may be numbered 1 through 8; the 8 key image frames of an image frame group comprise one target frame and 7 frames to be projected;
s111, in each image frame group, acquiring a feature point set of the target frame and a feature point set of at least one frame to be projected, and projecting the feature point set of the at least one frame to be projected to the feature point set of the target frame to obtain a projection feature point set corresponding to each image frame group;
as shown in fig. 2, a schematic flowchart of the image projection method provided in the embodiment of the present disclosure is as follows:
s201, aiming at each frame to be projected, obtaining each feature point in a feature point set of the frame to be projected, a conversion coefficient corresponding to each feature point and depth value data corresponding to each feature point;
in the embodiments of the present specification, a feature point refers to a point where the image gray value changes drastically, or a point of large curvature on an image edge (i.e., the intersection of two edges).
The conversion coefficient is a coefficient for converting each feature point in the obtained feature point set into a world coordinate in a world coordinate system; the world coordinate system is an absolute coordinate system of the system, and the coordinates of all points on the image are determined by the origin of the world coordinate system before the user-defined coordinate system is not established.
S203, obtaining a plurality of first feature points and a first feature point set consisting of the first feature points according to each feature point of the frame to be projected, the conversion coefficient corresponding to each feature point and the depth value data corresponding to each feature point;
in the embodiment of the present specification, a new first feature point and a first feature point set composed of the first feature points are obtained by calculation according to each feature point, a depth value corresponding to each feature point and a world coordinate conversion coefficient corresponding to each feature point;
s205, carrying out superposition combination according to the feature point set of the target frame and the first feature point set to obtain a projection feature point set corresponding to each image frame group.
In the embodiment of the present specification, the projection feature point set is the result of stacking the feature point set of the target frame with the new first feature point set, that is, the union of the feature points of the two sets; where the two sets contain coincident feature points, only one copy is retained. For example, if the first feature point set includes 500 feature points and the feature point set of the target frame includes 1000 feature points, the number of feature points in the projection feature point set is greater than or equal to 1000 and less than or equal to 1500;
in a preferred embodiment of the present specification, the key image frame ordered last (number 8) among the 8 key image frames is selected as the target frame, the frames numbered 5, 6 and 7 are selected as three frames to be projected, and each is projected onto the target frame to obtain the new first feature point set; the first feature point set is the set formed by stacking the projections of the 3 frames to be projected onto the target frame;
taking one frame to be projected as an example, if the coordinate of a feature point of the frame C to be projected is PC and the coordinate of the corresponding first feature point is PT, then:

PT = K · H · d · K⁻¹ · PC

where PC represents a feature point of the frame C to be projected, K represents the internal parameter matrix of the camera, 1/d represents the inverse depth value, d being the depth value obtained by optimizing the initialized depth value with the dso direct motion estimation method, and H represents the world coordinate conversion coefficient corresponding to the feature point PC.
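The equation above (as reconstructed here) back-projects, transforms and re-projects each point; a minimal numpy sketch, assuming H is a 4x4 rigid transform from the frame to be projected into the target frame:

```python
import numpy as np

def project_feature(p_c, d, K, H):
    # PT = K * H * d * K^-1 * PC: back-project pixel p_c = (u, v) with
    # depth d, transform by the 4x4 relative pose H, and re-project
    # into the target frame.
    X_c = d * (np.linalg.inv(K) @ np.array([p_c[0], p_c[1], 1.0]))
    X_t = H[:3, :3] @ X_c + H[:3, 3]
    uvw = K @ X_t
    return uvw[:2] / uvw[2]
```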
S113, obtaining a semi-dense scene corresponding to the first image based on the projection feature point set corresponding to each image frame group.
In an embodiment of the present specification, after obtaining the projection feature point set, the method further includes performing filtering and denoising on the projection feature point set.
Specifically, an existing filtering and denoising method in a point cloud library can be adopted for the denoising, so as to reduce calculation errors and noise.
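As one example of such a method (our choice of library, not the patent's): statistical outlier removal as implemented in the Open3D point cloud library, assuming the projected feature points are available as an (N, 3) array named points:

```python
import open3d as o3d

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)  # points: (N, 3) float array

# Drop points whose mean distance to their 20 nearest neighbors deviates by
# more than 2 standard deviations from the global average.
pcd_clean, kept_idx = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
```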
As shown in fig. 3, a schematic flowchart of the semi-dense scene acquisition method provided in the embodiment of the present invention is as follows; the specific steps are:
s301, determining two-dimensional coordinates of each feature point in a projection feature point set corresponding to each image frame group in an image coordinate system;
s303, converting two-dimensional coordinates of each feature point in the image coordinate system into three-dimensional coordinates in the camera coordinate system according to a conversion rule between the image coordinate system and the camera coordinate system to obtain a first coordinate point corresponding to the image frame group and a first coordinate point set consisting of the first coordinate points;
in the embodiment of the present specification, each feature point in a projection feature point set corresponding to an image frame group is a two-dimensional coordinate of an image coordinate system, and the two-dimensional coordinate is converted into a three-dimensional coordinate of a camera coordinate system according to a conversion rule between the image coordinate system and the camera coordinate system, so that a first coordinate point set corresponding to the image frame group can be obtained;
specifically, the image coordinate system is converted to a camera coordinate system based on the camera's internal reference matrix.
S305, converting each first coordinate point in the first coordinate point set into a three-dimensional coordinate in a target coordinate system according to a conversion rule between a camera coordinate system and the target coordinate system to obtain a second coordinate point corresponding to the image frame group and a second coordinate point set consisting of the second coordinate points;
in the embodiment of the present specification, the target coordinate system may be the coordinate system of a radar system, and the second preset rule may be a conversion calibration coefficient between the binocular camera system and the radar system; the calibration relationship between the two is determined from their calibration data when the binocular camera system is calibrated, and the calibration coefficient is a fixed value;
in the embodiment of the specification, in the field of automatic driving, the current perception system uses the radar coordinate system as the reference coordinate system, whereas the camera coordinate system is a local coordinate system and cannot be used directly in a map of the automatic driving scene; therefore the camera coordinate system needs to be converted into the radar system coordinate system so that the subsequently obtained semi-dense scene graph can be displayed in the map. Each feature point in the first coordinate point set corresponding to an image frame group is converted, according to the calibration coefficients between the camera coordinate system and the radar system coordinate system, into three-dimensional coordinates in the radar system coordinate system, yielding the second coordinate point set corresponding to the image frame group;
specifically, a camera coordinate system is converted into a radar system coordinate system based on the rotation matrix and the offset vector.
Preferably, the radar system may be a lidar system.
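Steps S303 and S305 together amount to lifting each 2D feature point into the camera frame and then applying the fixed camera-to-radar calibration; a minimal numpy sketch, where R_cr and t_cr stand for the calibrated rotation matrix and offset vector (names are ours):

```python
import numpy as np

def image_to_radar(pts_uv, depths, K, R_cr, t_cr):
    # S303: lift the (N, 2) image points (u, v) with their depths into 3D
    # camera coordinates (the first coordinate point set).
    ones = np.ones((pts_uv.shape[0], 1))
    rays = (np.linalg.inv(K) @ np.hstack([pts_uv, ones]).T).T
    X_cam = rays * depths[:, None]
    # S305: fixed rigid transform into the radar (lidar) coordinate system
    # (the second coordinate point set).
    return (R_cr @ X_cam.T).T + t_cr
```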
S307, according to the second coordinate point set corresponding to each image frame group, obtaining a semi-dense scene corresponding to the first image.
In the embodiment of the specification, coordinate values of coordinate points in the second coordinate point set corresponding to each image frame group are placed at corresponding positions in the target coordinate system, so that a semi-dense scene corresponding to the first image is formed;
specifically, specific coordinate values of coordinate points in the second coordinate point set corresponding to each image frame group (e.g., point a (x, y, z)) may be placed at corresponding positions in the radar system coordinate system, thereby forming a semi-dense scene graph corresponding to the first image.
Specifically, fig. 4 shows the scene reconstructed from the simulation results for an underground garage. The left image in fig. 4 is the initial image, collected by the left eye camera of the binocular camera, from which the semi-dense scene is reconstructed, and the right image in fig. 4 is the semi-dense scene image reconstructed with the left image of fig. 4 as reference. The specific steps of generating the semi-dense scene are as follows:
calibrating a binocular camera of the vehicle;
acquiring internal parameters and external parameters of a binocular camera;
acquiring a first garage image acquired by a binocular camera; carrying out distortion correction on the first garage image according to the internal parameters of the binocular camera to obtain an intermediate image, and carrying out epipolar correction on the intermediate image to obtain a second garage image; wherein the second garage image comprises a plurality of consecutive image frames; the second garage image obtained by the method has low noise, small distortion and high image quality; more reliable data can be provided for the establishment of semi-dense scenes;
selecting a key image frame every 5 frames in the second garage image to obtain a plurality of key image frames; calculating to obtain a feature point set corresponding to each key image frame based on dso direct motion estimation method and preset prior parameters;
grouping the plurality of key image frames to obtain a plurality of image frame groups, wherein specifically each image frame group may comprise 8 key image frames, consisting of one target frame and seven frames to be projected;
in each image frame group, projecting a feature point set of at least one frame to be projected (for example, 3 frames) to a feature point set of a target frame to obtain a projection feature point set corresponding to each image frame group;
converting the two-dimensional image coordinate points in the projection feature point set corresponding to each image frame group into three-dimensional coordinates in a radar system coordinate system, and placing the coordinate values of the obtained three-dimensional coordinates at corresponding positions in the radar system coordinate system to form a semi-dense scene graph of the underground garage, as shown in the right graph in fig. 4, it can be seen that the pillars 41 and the lane lines 42 can be well reconstructed in the semi-dense scene of the underground garage.
As can be seen from the embodiments of the semi-dense scene reconstruction method, the semi-dense scene reconstruction device, the electronic device and the storage medium provided by the invention, the internal parameters and the external parameters of the binocular camera are acquired in the embodiments of the invention; acquiring a first image acquired by a binocular camera, and performing binocular parallel correction on the first image according to internal parameters and external parameters of the binocular camera to obtain a second image; selecting a plurality of key image frames from the second image based on a preset rule; calculating to obtain a feature point set corresponding to each key image frame based on a direct motion estimation method and preset parameters; grouping a plurality of key image frames to obtain a plurality of image frame groups, wherein each image frame group comprises a target frame and a plurality of frames to be projected; in each image frame group, acquiring a feature point set of the target frame and a feature point set of at least one frame to be projected, and projecting the feature point set of the at least one frame to be projected to the feature point set of the target frame to obtain a projection feature point set corresponding to each image frame group; obtaining a semi-dense scene corresponding to the first image based on a projection feature point set corresponding to each image frame group; by using the technical scheme provided by the embodiment of the specification, the data matching rate is high through a multi-frame data stacking method, the reconstruction of semi-dense scenes of static scenes is realized, and the obtained semi-dense scenes are clearer.
The embodiment of the present invention further provides a semi-dense scene reconstruction apparatus, as shown in fig. 5, which is a schematic structural diagram of the semi-dense scene reconstruction apparatus provided in the embodiment of the present invention; specifically, the device comprises:
a parameter obtaining module 510, configured to obtain internal parameters and external parameters of the binocular camera;
the image acquisition module 520 is used for acquiring a first image acquired by a binocular camera and performing binocular parallel correction on the first image according to internal parameters and external parameters of the binocular camera to acquire a second image;
a key image frame selecting module 530, configured to select a plurality of key image frames from the second image based on a preset rule;
a feature point set calculating module 540, configured to calculate a feature point set corresponding to each key image frame based on a direct motion estimation method and a preset parameter;
an image frame group obtaining module 550, configured to group the plurality of key image frames to obtain a plurality of image frame groups, where each image frame group includes a target frame and a plurality of frames to be projected;
a projection feature point set obtaining module 560, configured to obtain, in each image frame group, a feature point set of the target frame and a feature point set of at least one to-be-projected frame, and project the feature point set of the at least one to-be-projected frame to the feature point set of the target frame to obtain a projection feature point set corresponding to each image frame group;
and a semi-dense scene obtaining module 570, configured to obtain a semi-dense scene corresponding to the first image based on the projection feature point set corresponding to each image frame group.
In an embodiment of the present specification, the image acquiring module 520 includes:
the distortion correction unit is used for carrying out distortion correction on the first image according to the internal parameters of the binocular camera to obtain an intermediate image;
and the epipolar line correction unit is used for performing epipolar line correction on the intermediate image according to the external parameters of the binocular camera to obtain a second image.
In this embodiment, the key image frame selection module 530 includes:
and the key image frame selecting unit is used for selecting one key image frame every N frames in the second image so as to obtain a plurality of key image frames.
In this embodiment, the projection feature point set obtaining module 560 includes:
a first obtaining unit, configured to obtain, for each frame to be projected, each feature point in a feature point set of the frame to be projected, a conversion coefficient corresponding to each feature point, and depth value data corresponding to each feature point;
the second acquisition unit is used for acquiring a plurality of first characteristic points and a first characteristic point set consisting of the plurality of first characteristic points according to each characteristic point of the frame to be projected, the conversion coefficient corresponding to each characteristic point and the depth value data corresponding to each characteristic point;
and the third acquisition unit is used for carrying out superposition combination according to the feature point set of the target frame and the first feature point set to acquire a projection feature point set corresponding to each image frame group.
In this embodiment, the semi-dense scene obtaining module 570 includes:
the first determining unit is used for determining two-dimensional coordinates of each feature point in the projection feature point set corresponding to each image frame group in an image coordinate system;
the fourth acquisition unit is used for converting the two-dimensional coordinates of each feature point in the image coordinate system into three-dimensional coordinates in the camera coordinate system according to a conversion rule between the image coordinate system and the camera coordinate system to obtain a first coordinate point corresponding to the image frame group and a first coordinate point set consisting of the first coordinate points;
a fifth obtaining unit, configured to convert each first coordinate point in the first coordinate point set into a three-dimensional coordinate in a target coordinate system according to a conversion rule between a camera coordinate system and the target coordinate system, so as to obtain a second coordinate point corresponding to the image frame group and a second coordinate point set composed of the second coordinate points;
and the sixth acquisition unit is used for acquiring the semi-dense scene corresponding to the first image according to the second coordinate point set corresponding to each image frame group.
In an embodiment of the present specification, the sixth obtaining unit includes:
and the first acquisition subunit is used for placing the coordinate values of the coordinate points in the second coordinate point set corresponding to each image frame group into corresponding positions in the target coordinate system, so that a semi-dense scene corresponding to the first image is formed.
In the embodiment of this specification, still include:
and the calibration module is used for carrying out geometric calibration on the two cameras of the binocular camera.
The embodiment of the invention provides electronic equipment, which comprises a processor and a memory;
the processor adapted to implement one or more instructions;
the memory stores one or more instructions adapted to be loaded and executed by the processor to implement the semi-dense scene reconstruction method as described in the above method embodiments.
The memory may be used to store software programs and modules, and the processor may execute various functional applications and data processing by operating the software programs and modules stored in the memory. The memory can mainly comprise a program storage area and a data storage area, wherein the program storage area can store an operating system, application programs needed by functions and the like; the storage data area may store data created according to use of the apparatus, and the like. Further, the memory may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory may also include a memory controller to provide the processor access to the memory.
Fig. 6 is a schematic structural diagram provided in an embodiment of the present invention. The internal structure of the semi-dense scene reconstruction terminal may include, but is not limited to, a processor, a network interface and a memory, which may be connected by a bus or in another manner; connection by a bus is taken as the example in fig. 6 in the embodiments of the present specification.
The processor (or CPU) is the computing core and control core of the semi-dense scene reconstruction terminal. The network interface may optionally include a standard wired interface or a wireless interface (e.g., WI-FI, a mobile communication interface, etc.). The memory is the storage device in the semi-dense scene reconstruction terminal, used for storing programs and data. It is understood that the memory here may be a high-speed RAM storage device or a non-volatile storage device (non-volatile memory), such as at least one magnetic disk storage device; optionally, it may be at least one storage device located remotely from the processor. The memory provides a storage space that stores the operating system of the semi-dense scene reconstruction terminal, which may include, but is not limited to: a Windows system (an operating system), Linux (an operating system), etc.; no limitation is placed thereon. Also, one or more instructions, which may be one or more computer programs (including program code), are stored in the storage space and are adapted to be loaded and executed by the processor. In this embodiment of the present specification, the processor loads and executes the one or more instructions stored in the memory to implement the semi-dense scene reconstruction method provided in the foregoing method embodiments.
Embodiments of the present invention also provide a computer-readable storage medium, which may be disposed in an electronic device, and store at least one instruction, at least one program, a code set, or a set of instructions related to implementing a semi-dense scene reconstruction method in the method embodiments, where the at least one instruction, the at least one program, the code set, or the set of instructions may be loaded into and executed by a processor of the electronic device to implement the semi-dense scene reconstruction method provided in the method embodiments.
Optionally, in this embodiment, the storage medium may include, but is not limited to: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
As can be seen from the embodiments of the semi-dense scene reconstruction method, the semi-dense scene reconstruction device, the electronic device and the storage medium provided by the invention, the embodiment of the invention performs geometric calibration on two cameras of a binocular camera to obtain internal parameters and external parameters of the binocular camera; acquiring a first image acquired by a binocular camera, and performing binocular parallel correction on the first image according to internal parameters and external parameters of the binocular camera to obtain a second image; the binocular parallel correction of the first image according to the internal parameters and the external parameters of the binocular camera to obtain a second image comprises: carrying out distortion correction on the first image according to the internal parameters of the binocular camera to obtain an intermediate image; and performing epipolar line correction on the intermediate image according to the external parameters of the binocular camera to obtain a second image. Selecting a plurality of key image frames from the second image based on a preset rule; the selecting a plurality of key image frames from the second image based on the preset rule comprises: and selecting a key image frame every N frames in the second image to obtain a plurality of key image frames. Calculating to obtain a feature point set corresponding to each key image frame based on a direct motion estimation method and preset parameters; grouping a plurality of key image frames to obtain a plurality of image frame groups, wherein each image frame group comprises a target frame and a plurality of frames to be projected; in each image frame group, acquiring a feature point set of the target frame and a feature point set of at least one frame to be projected, and projecting the feature point set of the at least one frame to be projected to the feature point set of the target frame to obtain a projection feature point set corresponding to each image frame group; the obtaining, in each image frame group, a feature point set of the target frame and a feature point set of at least one frame to be projected, and projecting the feature point set of the at least one frame to be projected onto the feature point set of the target frame to obtain a projected feature point set corresponding to each image frame group includes:
for each frame to be projected, acquiring each feature point in a feature point set of the frame to be projected, a conversion coefficient corresponding to each feature point and depth value data corresponding to each feature point; obtaining a plurality of first characteristic points and a first characteristic point set consisting of the plurality of first characteristic points according to each characteristic point of the frame to be projected, the conversion coefficient corresponding to each characteristic point and the depth value data corresponding to each characteristic point; and carrying out superposition combination according to the feature point set of the target frame and the first feature point set to obtain a projection feature point set corresponding to each image frame group. Obtaining a semi-dense scene corresponding to the first image based on a projection feature point set corresponding to each image frame group; the obtaining of the semi-dense scene corresponding to the first image based on the projection feature point set corresponding to each image frame group includes: determining two-dimensional coordinates of each feature point in a projection feature point set corresponding to each image frame group in an image coordinate system; converting two-dimensional coordinates of each feature point in the image coordinate system into three-dimensional coordinates in the camera coordinate system according to a conversion rule between the image coordinate system and the camera coordinate system to obtain a first coordinate point corresponding to the image frame group and a first coordinate point set consisting of the first coordinate points; converting each first coordinate point in the first coordinate point set into a three-dimensional coordinate in the target coordinate system according to a conversion rule between a camera coordinate system and the target coordinate system to obtain a second coordinate point corresponding to the image frame group and a second coordinate point set consisting of the second coordinate points; and obtaining a semi-dense scene corresponding to the first image according to the second coordinate point set corresponding to each image frame group. The obtaining of the semi-dense scene corresponding to the first image according to the second coordinate point set corresponding to each image frame group includes: placing coordinate values of coordinate points in the second coordinate point set corresponding to each image frame group at corresponding positions in the target coordinate system, thereby forming a semi-dense scene corresponding to the first image. By using the technical scheme provided by the embodiment of the specification, the acquired first image is subjected to binocular parallel correction, so that the obtained second image has low noise and high image quality; and the multi-frame data stacking method achieves a high data matching rate, so that the reconstructed semi-dense scene is clearer.
It should be noted that: the precedence order of the above embodiments of the present invention is only for description, and does not represent the merits of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the device and server embodiments, since they are substantially similar to the method embodiments, the description is simple, and the relevant points can be referred to the partial description of the method embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
While the invention has been described in connection with what is presently considered to be the most practical and preferred embodiment, it is to be understood that the invention is not to be limited to the disclosed embodiment, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (10)

1. A semi-dense scene reconstruction method is applied to a binocular camera system and is characterized in that: the method comprises the following steps:
acquiring internal parameters and external parameters of a binocular camera;
acquiring a first image acquired by a binocular camera, and performing binocular parallel correction on the first image according to internal parameters and external parameters of the binocular camera to obtain a second image;
selecting a plurality of key image frames from the second image based on a preset rule;
calculating to obtain a feature point set corresponding to each key image frame based on a direct motion estimation method and preset parameters;
grouping a plurality of key image frames to obtain a plurality of image frame groups, wherein each image frame group comprises a target frame and a plurality of frames to be projected;
in each image frame group, acquiring a feature point set of the target frame and a feature point set of at least one frame to be projected, and projecting the feature point set of the at least one frame to be projected to the feature point set of the target frame to obtain a projection feature point set corresponding to each image frame group;
and obtaining a semi-dense scene corresponding to the first image based on the projection feature point set corresponding to each image frame group.
2. The semi-dense scene reconstruction method of claim 1, wherein acquiring the first image acquired by the binocular camera and performing binocular parallel correction on the first image according to the internal parameters and the external parameters of the binocular camera to obtain the second image comprises the following steps:
carrying out distortion correction on the first image according to the internal parameters of the binocular camera to obtain an intermediate image;
and performing epipolar line correction on the intermediate image according to the external parameters of the binocular camera to obtain a second image.
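As an illustration of the two-step correction recited in claim 2 — a sketch using OpenCV's rectification API under hypothetical calibration values, not the correction procedure this application itself specifies:

```python
import cv2
import numpy as np

# Hypothetical calibration results for a 1280x720 binocular camera.
size = (1280, 720)
K1 = K2 = np.array([[700.0,   0.0, 640.0],
                    [  0.0, 700.0, 360.0],
                    [  0.0,   0.0,   1.0]])
d1 = d2 = np.zeros(5)                      # distortion coefficients
R = np.eye(3)                              # rotation between the two cameras
T = np.array([[-0.12], [0.0], [0.0]])      # hypothetical 12 cm baseline

# Epipolar (parallel) rectification: R1/R2 rotate the views, P1/P2 project.
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, d1, K2, d2, size, R, T)

# Undistortion and rectification fused into one lookup map per camera.
m1x, m1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1, size, cv2.CV_32FC1)
m2x, m2y = cv2.initUndistortRectifyMap(K2, d2, R2, P2, size, cv2.CV_32FC1)

left_raw = np.zeros((720, 1280), np.uint8)   # stand-ins for captured frames
right_raw = np.zeros((720, 1280), np.uint8)
left_rect = cv2.remap(left_raw, m1x, m1y, cv2.INTER_LINEAR)
right_rect = cv2.remap(right_raw, m2x, m2y, cv2.INTER_LINEAR)
```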
3. The semi-dense scene reconstruction method of claim 1, wherein the selecting of a plurality of key image frames from the second image based on a preset rule comprises:
selecting a key image frame every N frames from the second image to obtain the plurality of key image frames.
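A minimal sketch of the selection rule in claim 3, assuming the second image is available as an ordered frame sequence and the stride N is a hypothetical preset value:

```python
def select_keyframes(frames, n):
    """Preset rule: keep one key image frame every n frames."""
    return frames[::n]

# Hypothetical: 100 frames of the second image, stride N = 5.
keyframes = select_keyframes(list(range(100)), 5)  # indices 0, 5, 10, ...
```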
4. The semi-dense scene reconstruction method of claim 1, wherein the acquiring, in each image frame group, of a feature point set of the target frame and a feature point set of at least one frame to be projected, and the projecting of the feature point set of the at least one frame to be projected onto the feature point set of the target frame to obtain a projected feature point set corresponding to each image frame group comprises:
for each frame to be projected, acquiring each feature point in a feature point set of the frame to be projected, a conversion coefficient corresponding to each feature point and depth value data corresponding to each feature point;
obtaining a plurality of first feature points and a first feature point set consisting of the plurality of first feature points according to each feature point of the frame to be projected, the conversion coefficient corresponding to each feature point, and the depth value data corresponding to each feature point;
and carrying out superposition combination according to the feature point set of the target frame and the first feature point set to obtain a projection feature point set corresponding to each image frame group.
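For illustration of the projection recited in claim 4 — assuming, which the claim does not spell out, that the conversion coefficient amounts to a relative pose between the frame to be projected and the target frame — a hedged sketch of warping one feature point with known depth could look like this (all numbers hypothetical):

```python
import numpy as np

K = np.array([[700.0,   0.0, 640.0],
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])

def project_to_target_frame(uv, depth, K, R_rel, t_rel):
    """Warp a feature point (u, v) with known depth from a frame to be
    projected into the target frame's image plane."""
    ray = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    X = depth * ray                  # 3-D point in the source camera frame
    X_t = R_rel @ X + t_rel          # same point seen from the target frame
    p = K @ X_t
    return p[:2] / p[2]              # a "first feature point" (u', v')

# Hypothetical relative pose between two key image frames.
R_rel = np.eye(3)
t_rel = np.array([0.1, 0.0, 0.0])
print(project_to_target_frame((812.0, 404.0), 7.2, K, R_rel, t_rel))
```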
5. The semi-dense scene reconstruction method of claim 1, wherein the obtaining of the semi-dense scene corresponding to the first image based on the projection feature point set corresponding to each image frame group comprises:
determining two-dimensional coordinates of each feature point in a projection feature point set corresponding to each image frame group in an image coordinate system;
converting two-dimensional coordinates of each feature point in the image coordinate system into three-dimensional coordinates in the camera coordinate system according to a conversion rule between the image coordinate system and the camera coordinate system to obtain a first coordinate point corresponding to the image frame group and a first coordinate point set consisting of the first coordinate points;
converting each first coordinate point in the first coordinate point set into a three-dimensional coordinate in the target coordinate system according to a conversion rule between a camera coordinate system and the target coordinate system to obtain a second coordinate point corresponding to the image frame group and a second coordinate point set consisting of the second coordinate points;
and obtaining a semi-dense scene corresponding to the first image according to the second coordinate point set corresponding to each image frame group.
6. The semi-dense scene reconstruction method of claim 5, wherein the obtaining of the semi-dense scene corresponding to the first image according to the second coordinate point set corresponding to each image frame group comprises:
and placing coordinate values of coordinate points in the second coordinate point set corresponding to each image frame group at corresponding positions in the target coordinate system, thereby forming a semi-dense scene corresponding to the first image.
7. The semi-dense scene reconstruction method of claim 1, wherein, before the internal parameters and the external parameters of the binocular camera are obtained, the method further comprises:
geometrically calibrating the two cameras of the binocular camera.
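One common way to realize the geometric calibration of claim 7 — shown as an illustrative OpenCV sketch with a hypothetical chessboard pattern and hypothetical image paths, not as the procedure the claim mandates — is to detect board corners in paired views, calibrate each camera's intrinsics, and then estimate the stereo extrinsics:

```python
import glob
import cv2
import numpy as np

BOARD = (9, 6)      # hypothetical inner-corner grid of the calibration board
SQUARE = 0.025      # hypothetical square edge length in meters

# One reference 3-D corner grid, reused for every view of the board.
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

obj_pts, left_pts, right_pts = [], [], []
# Hypothetical paired captures of the board from both cameras.
for lf, rf in zip(sorted(glob.glob("left/*.png")), sorted(glob.glob("right/*.png"))):
    left = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
    right = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
    ok_l, corners_l = cv2.findChessboardCorners(left, BOARD)
    ok_r, corners_r = cv2.findChessboardCorners(right, BOARD)
    if ok_l and ok_r:
        obj_pts.append(objp)
        left_pts.append(corners_l)
        right_pts.append(corners_r)

size = left.shape[::-1]
# Per-camera intrinsics first, then the stereo extrinsics R, T.
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
_, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K1, d1, K2, d2, size,
    flags=cv2.CALIB_FIX_INTRINSIC)
```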
8. A semi-dense scene reconstruction apparatus, characterized in that the apparatus comprises:
the parameter acquisition module is used for acquiring internal parameters and external parameters of the binocular camera;
the image acquisition module is used for acquiring a first image acquired by a binocular camera and carrying out binocular parallel correction on the first image according to internal parameters and external parameters of the binocular camera so as to acquire a second image;
the key image frame selecting module is used for selecting a plurality of key image frames from the second image based on a preset rule;
the feature point set calculation module is used for calculating and obtaining a feature point set corresponding to each key image frame based on a direct motion estimation method and preset parameters;
the image frame group acquisition module is used for grouping a plurality of key image frames to obtain a plurality of image frame groups, wherein each image frame group comprises a target frame and a plurality of frames to be projected;
a projection feature point set obtaining module, configured to obtain, in each image frame group, a feature point set of the target frame and a feature point set of at least one to-be-projected frame, and project the feature point set of the at least one to-be-projected frame to the feature point set of the target frame, so as to obtain a projection feature point set corresponding to each image frame group;
and the semi-dense scene acquisition module is used for acquiring a semi-dense scene corresponding to the first image based on the projection feature point set corresponding to each image frame group.
9. An electronic device, characterized in that the electronic device comprises a processor and a memory;
the processor being adapted to implement one or more instructions;
the memory storing one or more instructions adapted to be loaded and executed by the processor to implement the semi-dense scene reconstruction method of any one of claims 1 to 7.
10. A computer-readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement the semi-dense scene reconstruction method according to any one of claims 1 to 7.
CN201910722061.1A 2019-08-06 2019-08-06 Semi-dense scene reconstruction method and device, electronic equipment and storage medium Pending CN110599586A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910722061.1A CN110599586A (en) 2019-08-06 2019-08-06 Semi-dense scene reconstruction method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910722061.1A CN110599586A (en) 2019-08-06 2019-08-06 Semi-dense scene reconstruction method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN110599586A true CN110599586A (en) 2019-12-20

Family

ID=68853655

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910722061.1A Pending CN110599586A (en) 2019-08-06 2019-08-06 Semi-dense scene reconstruction method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110599586A (en)


Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107945220A (en) * 2017-11-30 2018-04-20 华中科技大学 A kind of method for reconstructing based on binocular vision

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
半闲居士: "DSO详解" (DSO Explained), https://zhuanlan.zhihu.com/p/29177540 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112001857A (en) * 2020-08-04 2020-11-27 北京中科慧眼科技有限公司 Image correction method, system and equipment based on binocular camera and readable storage medium
CN113763545A (en) * 2021-09-22 2021-12-07 拉扎斯网络科技(上海)有限公司 Image determination method, image determination device, electronic equipment and computer-readable storage medium
CN113763544A (en) * 2021-09-22 2021-12-07 拉扎斯网络科技(上海)有限公司 Image determination method, image determination device, electronic equipment and computer-readable storage medium
CN114332509A (en) * 2021-12-29 2022-04-12 阿波罗智能技术(北京)有限公司 Image processing method, model training method, electronic device and automatic driving vehicle
CN114332509B (en) * 2021-12-29 2023-03-24 阿波罗智能技术(北京)有限公司 Image processing method, model training method, electronic device and automatic driving vehicle
CN115412718A (en) * 2022-08-17 2022-11-29 华伦医疗用品(深圳)有限公司 Endoscope camera shooting system, image processing method and readable storage medium

Similar Documents

Publication Publication Date Title
CN107633536B (en) Camera calibration method and system based on two-dimensional plane template
CN111145238B (en) Three-dimensional reconstruction method and device for monocular endoscopic image and terminal equipment
CN110599586A (en) Semi-dense scene reconstruction method and device, electronic equipment and storage medium
CN107633526B (en) Image tracking point acquisition method and device and storage medium
EP3200148B1 (en) Image processing method and device
CN107566688B (en) Convolutional neural network-based video anti-shake method and device and image alignment device
CN109598744B (en) Video tracking method, device, equipment and storage medium
CN111311632B (en) Object pose tracking method, device and equipment
US20190073749A1 (en) Method and apparatus for image processing
CN107481271B (en) Stereo matching method, system and mobile terminal
JP6843212B2 (en) Homography correction
CN111160298A (en) Robot and pose estimation method and device thereof
CN110378250B (en) Training method and device for neural network for scene cognition and terminal equipment
CN106570907B (en) Camera calibration method and device
CN112862897B (en) Phase-shift encoding circle-based rapid calibration method for camera in out-of-focus state
CN111325792B (en) Method, apparatus, device and medium for determining camera pose
TW202103106A (en) Method and electronic device for image depth estimation and storage medium thereof
CN113112542A (en) Visual positioning method and device, electronic equipment and storage medium
CN115205383A (en) Camera pose determination method and device, electronic equipment and storage medium
CN113610918A (en) Pose calculation method and device, electronic equipment and readable storage medium
WO2020092051A1 (en) Rolling shutter rectification in images/videos using convolutional neural networks with applications to sfm/slam with rolling shutter images/videos
CN111383264A (en) Positioning method, positioning device, terminal and computer storage medium
CN111179408B (en) Three-dimensional modeling method and equipment
KR20150097251A (en) Camera alignment method using correspondences between multi-images
CN113298187A (en) Image processing method and device, and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20191220)