CN112884817B - Dense optical flow calculation method, dense optical flow calculation device, electronic device, and storage medium

Publication number
CN112884817B
CN112884817B (application number CN201911198748.6A)
Authority
CN
China
Prior art keywords
target
eye image
optical flow
matching
corner
Prior art date
Legal status
Active
Application number
CN201911198748.6A
Other languages
Chinese (zh)
Other versions
CN112884817A (en)
Inventor
樊辉
史冰清
张文军
Current Assignee
China Mobile Communications Group Co Ltd
China Mobile IoT Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile IoT Co Ltd
Priority date
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, China Mobile IoT Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN201911198748.6A priority Critical patent/CN112884817B/en
Publication of CN112884817A publication Critical patent/CN112884817A/en
Application granted granted Critical
Publication of CN112884817B publication Critical patent/CN112884817B/en

Classifications

    • G - PHYSICS; G06 - COMPUTING; CALCULATING OR COUNTING; G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/30 - Image analysis; determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 3/40 - Geometric image transformation in the plane of the image; scaling the whole image or part thereof
    • G06T 2207/10004 - Image acquisition modality; still image; photographic image
    • G06T 2207/20016 - Special algorithmic details; hierarchical, coarse-to-fine, multiscale or multiresolution image processing; pyramid transform

Abstract

The invention provides a dense optical flow calculation method, a dense optical flow calculation device, an electronic device, and a computer-readable storage medium. The method comprises the following steps: performing corner point extraction on a target left eye image to obtain a first corner point; performing corner point extraction on a target right eye image to obtain a second corner point; matching the obtained first corner point and second corner point to obtain a matched corner point pair; determining the superpixel region to which the obtained matched corner point pair belongs; determining, based on the obtained optical flow data of the matched corner point pair, the optical flow data of each pixel in the superpixel region to which it belongs; and determining the dense optical flow of the target left eye image and the target right eye image based on the obtained optical flow data of each pixel in the superpixel region to which the matched corner point pair belongs. On the basis of ensuring accuracy, the embodiment of the invention can greatly shorten the calculation time of the dense optical flow field of a binocular image and improve the efficiency of the algorithm.

Description

Dense optical flow calculation method, dense optical flow calculation device, electronic device, and storage medium
Technical Field
The embodiment of the invention relates to the technical field of binocular matching, and in particular to a dense optical flow calculation method, a dense optical flow calculation device, an electronic device, and a computer-readable storage medium.
Background
The binocular vision system is widely applied in the fields of robot navigation, probe vehicle navigation, industrial measurement, vehicle early-warning systems, 3D scene rendering and reconstruction, pose perception, and military applications, and has become a popular research topic. The matching of binocular images is the key to the application of a binocular vision system.
According to the constraint conditions used, common binocular image matching algorithms can generally be divided into local matching algorithms and global matching algorithms. A local matching algorithm mainly uses the constraint information of a corresponding point and its neighbouring local area; a global matching algorithm mainly uses the global constraint information of the image and constrains the whole image. Because it optimizes only locally, a local matching algorithm is sensitive to ambiguities caused by occlusion, uniform texture, and the like, and easily produces mismatches, whereas a global matching algorithm is insensitive to such local ambiguities and has high matching accuracy, so global matching algorithms are widely used in the technical field of binocular matching.
However, although the matching accuracy of a global matching algorithm such as a conventional dense optical flow calculation method is high, optical flow matching must be performed on the pixels of the binocular image one by one, so the time complexity of the algorithm is high and the calculation efficiency is low.
Disclosure of Invention
The embodiment of the invention provides a dense optical flow calculation method, a dense optical flow calculation device, an electronic device, and a computer-readable storage medium, aiming to solve the problems of high time complexity and low calculation efficiency of dense optical flow calculation algorithms in the prior art.
In a first aspect, an embodiment of the present invention provides a dense optical flow calculation method, including:
performing corner point extraction on a target left eye image to obtain a first corner point; performing corner point extraction on a target right eye image to obtain a second corner point; wherein the target left eye image is the image corresponding to the layer with the minimum resolution in a first pyramid image layer, the first pyramid image layer is obtained by decomposing a left eye image to be matched according to resolution, the target right eye image is the image corresponding to the layer with the minimum resolution in a second pyramid image layer, the second pyramid image layer is obtained by decomposing a right eye image to be matched according to resolution, and the left eye image to be matched and the right eye image to be matched are binocular images of the same object captured in the same scene;
matching the obtained first corner point and the second corner point to obtain a matched corner point pair;
determining the super pixel area to which the obtained matching corner pairs belong; the super-pixel region is obtained by performing super-pixel segmentation on the basis of the target left eye image and/or the target right eye image;
determining optical flow data of each pixel in a super-pixel area to which the obtained matching corner pair belongs based on the obtained optical flow data of the matching corner pair;
and determining the dense optical flows of the target left eye image and the target right eye image based on the obtained optical flow data of each pixel in the super pixel area to which the matching corner points belong.
In a second aspect, embodiments of the present invention provide a dense optical flow computing device, the device comprising:
the extraction module is used for performing corner point extraction on a target left eye image to obtain a first corner point and performing corner point extraction on a target right eye image to obtain a second corner point; wherein the target left eye image is the image corresponding to the layer with the minimum resolution in a first pyramid image layer, the first pyramid image layer is obtained by decomposing a left eye image to be matched according to resolution, the target right eye image is the image corresponding to the layer with the minimum resolution in a second pyramid image layer, the second pyramid image layer is obtained by decomposing a right eye image to be matched according to resolution, and the left eye image to be matched and the right eye image to be matched are binocular images of the same object captured in the same scene;
the matching module is used for matching the obtained first corner point and the second corner point to obtain a matched corner point pair;
the first determining module is used for determining the super-pixel area to which the obtained matching corner pairs belong; the super-pixel region is obtained by performing super-pixel segmentation on the basis of the target left eye image and/or the target right eye image;
the second determining module is used for determining, based on the obtained optical flow data of the matched corner point pairs, the optical flow data of each pixel in the superpixel region to which the obtained matched corner point pairs belong;
and the third determining module is used for determining the dense optical flows of the target left-eye image and the target right-eye image based on the obtained optical flow data of each pixel in the super-pixel area to which the matching corner points belong.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the above-described dense optical flow calculation method.
In a fourth aspect, embodiments of the present invention provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the above-described dense optical flow computation method.
The embodiments of the invention provide a dense optical flow calculation method, a dense optical flow calculation device, an electronic device, and a computer-readable storage medium. First, corner point extraction is performed on a target left eye image to obtain a first corner point, and corner point extraction is performed on a target right eye image to obtain a second corner point; then the obtained first corner point and second corner point are matched to obtain a matched corner point pair, and the superpixel region to which the obtained matched corner point pair belongs is determined; finally, the optical flow data of each pixel in the superpixel region to which the obtained matched corner point pair belongs is determined based on the obtained optical flow data of the matched corner point pair, and the dense optical flow of the target left eye image and the target right eye image is determined based on the obtained optical flow data of each pixel in the superpixel region to which the matched corner point pair belongs.
In the embodiment of the invention, when dense optical flow calculation is carried out on a binocular image based on a pyramid image layer, optical flow data of each pixel in a super-pixel area to which a matched corner pair belongs is calculated by adopting optical flow data of the matched corner pair obtained by matching a first corner of a target left eye image and a second corner of a target right eye image, wherein the target left eye image and the target right eye image are images corresponding to image layers with the minimum resolution in the pyramid image layer of the binocular image, and the super-pixel area is an area obtained by super-pixel segmentation of the target left eye image and/or the target right eye image.
In this way, by matching the optical flow data of the corner point pairs, the optical flow data of each pixel in the super-pixel region can be obtained, thereby obtaining an initial value for performing dense optical flow calculation on the binocular image based on the pyramid image layer. Compared with the traditional dense optical flow calculation method, the initial value of the dense optical flow calculation of the binocular image can be obtained without carrying out optical flow matching on the pixels in the target left eye image and the target right eye image one by one, so that the calculation time of the dense optical flow field of the binocular image can be greatly shortened on the basis of ensuring the accuracy, and the algorithm efficiency is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
FIG. 1 is a flow chart of a dense optical flow calculation method provided by an embodiment of the invention;
FIG. 2 is a second schematic flow chart of the dense optical flow calculation method according to the embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a dense optical flow computing device provided by an embodiment of the invention;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
Based on this, the embodiment of the present invention provides a new dense optical flow calculation scheme. The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings; obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
The following first explains the dense optical flow calculation method provided by the embodiment of the present invention.
It should be noted that the dense optical flow calculation method provided by the embodiment of the present invention may be applied to a binocular vision system, and the binocular vision system may include an electronic device for performing dense optical flow calculation on binocular images. Here, the electronic device may be a terminal device (e.g., a binocular camera) configured to calculate the dense optical flow field of the binocular images captured by the binocular camera. In the following embodiments, the electronic device is described in detail by taking a binocular camera as an example.
Before describing the dense optical flow calculation method provided by the embodiment of the present invention, terms related to the embodiment of the present invention may be first described to facilitate understanding of the present invention.
1) Introduction to dense optical flow
Dense optical flow is an image registration method that matches an image point by point: the offsets of all pixels of the image are calculated, finally forming a dense optical flow field. The process of establishing a dense optical flow field over a binocular image is the process of matching the binocular image.
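For illustration only, the sketch below (not the patented method) computes such a dense field with OpenCV's Farneback algorithm, which matches every pixel point by point; this per-pixel cost is exactly what the method of the embodiment avoids when producing its initial value. The file names are hypothetical.

```python
# Illustration of a conventional dense optical flow field (not the patented
# method): OpenCV's Farneback algorithm matches every pixel point by point.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical file names
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# flow has shape (H, W, 2): one (dx, dy) offset per pixel, i.e. a dense field
flow = cv2.calcOpticalFlowFarneback(left, right, None, pyr_scale=0.5, levels=3,
                                    winsize=15, iterations=3, poly_n=5,
                                    poly_sigma=1.2, flags=0)
```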
2) Introduction to the pyramid layer
The pyramid image layers are a series of image sets of progressively lower resolution arranged in a pyramid shape. Because of the low resolution of the image, the time required to establish a dense optical flow field for the top layer of the pyramid is much less than for the original image.
3) Calculating dense optical flow of binocular image based on pyramid image layer
To calculate the dense optical flow of a binocular image using the pyramid image layers, the dense optical flow field of the topmost (lowest-resolution) layer is obtained first; it is then scaled to obtain the initial value of the optical flow field of the next layer down; the accurate optical flow field of that layer is calculated using this initial value; and iterating layer by layer in this way finally yields the dense optical flow of the binocular image.
Referring to fig. 1, a schematic flow chart of a dense optical flow calculation method provided by the embodiment of the invention is shown. As shown in fig. 1, the method may include the steps of:
step 101, extracting corner points of a target left eye image to obtain a first corner point; extracting corner points of the target right eye image to obtain a second corner point; the target left eye image is an image corresponding to a layer with the minimum resolution in a first pyramid image layer, the first pyramid image layer is a pyramid image layer obtained by decomposing a to-be-matched left eye image according to the resolution, the target right eye image is an image corresponding to a layer with the minimum resolution in a second pyramid image layer, the second pyramid image layer is a pyramid image layer obtained by decomposing a to-be-matched right eye image according to the resolution, and the to-be-matched left eye image and the to-be-matched right eye image are binocular images shot under the same scene aiming at the same object.
Step 102, matching the obtained first corner point and the second corner point to obtain a matched corner point pair.
Step 103, determining the superpixel region to which the obtained matched corner point pairs belong; the superpixel region is obtained by performing superpixel segmentation based on the target left eye image and/or the target right eye image.
Step 104, determining the optical flow data of each pixel in the superpixel region to which the obtained matched corner point pair belongs, based on the obtained optical flow data of the matched corner point pair.
Step 105, determining the dense optical flow of the target left eye image and the target right eye image based on the optical flow data of each pixel in the superpixel region to which the obtained matched corner point pairs belong.
In step 101, before the corner point extraction, the target left eye image and the target right eye image may first be obtained by pyramid layer decomposition.
Specifically, the binocular camera acquires the captured binocular images, which include a left eye image to be matched and a right eye image to be matched, denoted I and J respectively; I and J are two images of the same object captured in the same scene. Fast bilateral filtering and brightness normalization may be performed on the binocular images to enhance the image features used for matching, as sketched below.
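A minimal sketch of this preprocessing, assuming OpenCV; the standard bilateral filter stands in for a fast implementation, and the parameter values and file names are illustrative assumptions.

```python
# Sketch of the preprocessing described above (assumed parameters).
import cv2

def preprocess(img):
    # edge-preserving noise reduction
    filtered = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)
    # brightness normalization to the full [0, 255] range
    return cv2.normalize(filtered, None, 0, 255, cv2.NORM_MINMAX)

I = preprocess(cv2.imread("left.png", cv2.IMREAD_GRAYSCALE))   # left eye image to be matched
J = preprocess(cv2.imread("right.png", cv2.IMREAD_GRAYSCALE))  # right eye image to be matched
```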
The left eye image I to be matched is decomposed into layers according to resolution to obtain the first pyramid image layer. The first pyramid image layer comprises at least two layers, whose corresponding images are denoted {I^L}, L = 1, ..., M, with the resolution decreasing as the layer index increases. That is, the image corresponding to the Mth layer, the highest layer of the first pyramid image layer, is the image with the smallest resolution, i.e., the smallest size; this image is the target left eye image and is denoted I^M. The image corresponding to the lowest layer, the 1st layer of the first pyramid image layer, is the image with the largest resolution, i.e., the largest size; this image is the left eye image I to be matched.
Similarly, the right eye image J to be matched is decomposed into layers according to resolution to obtain the second pyramid image layer. The second pyramid image layer comprises at least two layers, whose corresponding images are denoted {J^L}, L = 1, ..., M, with the resolution decreasing as the layer index increases. That is, the image corresponding to the Mth layer, the highest layer of the second pyramid image layer, is the image with the smallest resolution, i.e., the smallest size; this image is the target right eye image and is denoted J^M. The image corresponding to the lowest layer, the 1st layer of the second pyramid image layer, is the image with the largest resolution, i.e., the largest size; this image is the right eye image J to be matched.
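A sketch of this decomposition, continuing the preprocessing sketch above and assuming OpenCV's pyrDown, which halves the resolution at each level; the level count M = 4 is an illustrative assumption.

```python
# Sketch of the pyramid decomposition (M = 4 is an assumed level count).
import cv2

def build_pyramid(img, M=4):
    layers = [img]                              # layers[0] = layer 1, the largest image
    for _ in range(M - 1):
        layers.append(cv2.pyrDown(layers[-1]))  # each pyrDown halves the resolution
    return layers                               # layers[-1] = layer M, the smallest image

pyr_I, pyr_J = build_pyramid(I), build_pyramid(J)
I_M, J_M = pyr_I[-1], pyr_J[-1]                 # target left eye and target right eye images
```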
After the images at each resolution corresponding to the left eye image to be matched have been obtained from the first pyramid image layer, and the images at each resolution corresponding to the right eye image to be matched have been obtained from the second pyramid image layer, superpixel segmentation may be performed on the target left eye image I^M and/or the target right eye image J^M; at the same time, corner point extraction is performed on the target left eye image I^M and on the target right eye image J^M.
Because the left eye image I to be matched and the right eye image J to be matched are two images of the same object captured in the same scene, the depth positions of the different parts of one object differ little between the target left eye image (scaled down from I) and the target right eye image (scaled down from J), while different objects often lie at different depths. Superpixel regions can therefore properly partition the different objects in the target left eye image and the target right eye image, and estimating an optical flow field for each object separately yields a relatively accurate initial value of the optical flow field.
Specifically, when performing superpixel segmentation, owing to the similarity between the target left eye image and the target right eye image, the segmentation may be performed on only the target left eye image I^M, on only the target right eye image J^M, or on both the target left eye image I^M and the target right eye image J^M. In the following examples, performing superpixel segmentation on both the target left eye image I^M and the target right eye image J^M is described in detail as an example.
Superpixel segmentation of the target left eye image I^M yields a plurality of first superpixel regions, and superpixel segmentation of the target right eye image J^M yields a plurality of second superpixel regions.
In practical applications, the Simple Linear Iterative Clustering (SLIC) algorithm may be adopted to segment the target left eye image and the target right eye image by features, thereby achieving region segmentation of the two images. The SLIC algorithm forms a five-dimensional space from the CIE-Lab color values and the coordinate position of each pixel, clusters the pixels of the image according to the similarity between them, and finally forms superpixel regions with similar characteristics, as sketched below.
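A minimal SLIC sketch, assuming scikit-image (version 0.19 or later) as the dependency and the I_M, J_M of the sketches above; the segment count and compactness are illustrative assumptions. OpenCV's ximgproc module offers an equivalent implementation.

```python
# Sketch of the superpixel segmentation step using scikit-image's SLIC.
from skimage.segmentation import slic

# labels_I[y, x] is the index of the first superpixel region containing (x, y)
labels_I = slic(I_M, n_segments=200, compactness=10, channel_axis=None)
# labels_J[y, x] is the index of the second superpixel region containing (x, y)
labels_J = slic(J_M, n_segments=200, compactness=10, channel_axis=None)
```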
Further, a corner detection method based on templates and machine learning, such as the FAST corner algorithm, may be adopted to extract corner points from the target left eye image I^M and the target right eye image J^M. The computational load of the FAST corner algorithm is relatively small and its feature extraction is relatively reliable, which meets the usage requirements; therefore, the embodiment of the invention adopts the FAST corner algorithm to extract the corner points of the images, which improves the efficiency of the algorithm.
FAST corner extraction is performed on the noise-reduced target left eye image I^M to obtain the first corner points, and FAST corner extraction is performed on the noise-reduced target right eye image J^M to obtain the second corner points. For the first corner points, a structure Corner1 may be designed that stores the coordinates of each first corner point and the first superpixel region in which it is located; a first corner point is recorded as SPI_i p_n, where SPI_i denotes the first superpixel region in which the first corner point is located and p_n denotes the coordinates of the first corner point. For the second corner points, a structure Corner2 may be designed that stores the coordinates of each second corner point and the second superpixel region in which it is located; a second corner point is recorded as SPI_j p_m, where SPI_j denotes the second superpixel region in which the second corner point is located and p_m denotes the coordinates of the second corner point.
It should be noted that the first corner points may be a plurality of feature points of the target left eye image, and the second corner points may be a plurality of feature points of the target right eye image, as in the sketch below.
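A sketch of the corner extraction and the Corner1/Corner2 bookkeeping, assuming OpenCV's FAST detector and the SLIC label maps from the sketch above; plain dictionaries stand in for the structures, and the threshold is an assumption.

```python
# Sketch of FAST corner extraction plus the Corner1/Corner2 records: each
# record pairs a corner's coordinates p with the superpixel region SPI it lies in.
import cv2

fast = cv2.FastFeatureDetector_create(threshold=20)  # assumed threshold

def corners_with_regions(img, labels):
    records = []
    for kp in fast.detect(img, None):
        x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
        records.append({"p": (x, y), "SPI": int(labels[y, x])})
    return records

corner1 = corners_with_regions(I_M, labels_I)  # first corner points
corner2 = corners_with_regions(J_M, labels_J)  # second corner points
```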
In step 102, after the first corner points and the second corner points are acquired, feature matching is performed on them. Specifically, for each obtained first corner point, an offset vector between that first corner point and each obtained second corner point is calculated; the offset vector indicates the coordinate offset between the first corner point and the second corner point. A first corner point and a second corner point related by a target offset vector are determined to be a matched corner point pair, the target offset vector being an offset vector whose corresponding offset information satisfies a preset condition.
The offset information may be the distance value of the offset vector: when the distance value corresponding to an offset vector is smaller than a preset distance value, the offset information corresponding to that offset vector satisfies the preset condition, and it is a target offset vector. Alternatively, the offset information may comprise the offsets of the components of the offset vector: when each component offset of an offset vector is smaller than the corresponding preset offset, the offset information corresponding to that offset vector satisfies the preset condition. One such rule is sketched below.
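A sketch of the matching rule under the second reading (per-component bounds as the preset condition), continuing the sketches above; the bounds are illustrative assumptions, and any descriptor similarity check is omitted for brevity.

```python
# Sketch of offset-vector matching: a pair matches when each component of the
# offset vector stays below its preset bound (the "preset condition"). The
# offset vector of a matched pair is also its optical flow value.
MAX_DX, MAX_DY = 16, 2   # assumed bounds; rectified stereo shifts mostly in x

matches = []             # entries: (first corner record, second corner record, (dx, dy))
for c1 in corner1:
    for c2 in corner2:
        dx = c2["p"][0] - c1["p"][0]
        dy = c2["p"][1] - c1["p"][1]
        if abs(dx) < MAX_DX and abs(dy) < MAX_DY:   # preset condition satisfied
            matches.append((c1, c2, (dx, dy)))
            break        # keep one admissible second corner per first corner
```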
In step 103, the superpixel region to which a matched corner point pair belongs may be determined based on the first superpixel region to which the first corner point of the pair belongs or the second superpixel region to which the second corner point of the pair belongs. In practice, the first and second superpixel regions to which the matched corner point pairs belong can be obtained by querying the structure Corner1 and the structure Corner2.
In step 104, due to the characteristics of the super-pixel region, the optical flow data of each pixel in the super-pixel region to which the obtained matching corner pair belongs may be determined directly based on the optical flow data of the obtained matching corner pair.
Specifically, the region types of the first superpixel regions and/or the second superpixel regions may be determined first; the region types may include at least one of the following three types.
The first region type may be a super-pixel region including at least two matching corner point pairs, that is, a first target super-pixel region;
the second area type may be a super-pixel area including only one matching corner pair, that is, a second target super-pixel area;
the third area type may be a super pixel area excluding the matching corner pair, that is, the third target super pixel area.
If a first target superpixel area exists in the first superpixel area and/or the second superpixel area, after an abnormal matching corner pair is removed from the matching corner pair in the first target superpixel area to obtain a target matching corner pair, averaging the optical flow data of the target matching corner pair, and determining the average as the optical flow data of each pixel in the first target superpixel area.
Under normal conditions the same object lies essentially in one plane, so the actual optical flow across it does not vary much. If the first target superpixel region contains a matched corner point pair whose optical flow value is obviously abnormal compared with the other matched corner point pairs, that pair is an abnormal matched corner point pair, and removing it yields the target matched corner point pairs.
And if a second target super-pixel area exists in the first super-pixel area and/or the second super-pixel area, acquiring optical flow data of a matching corner point pair in the second target super-pixel area, and determining the optical flow data as the optical flow data of each pixel in the second target super-pixel area.
If a third target superpixel area exists in the first superpixel area and/or the second superpixel area, acquiring optical flow data of two superpixel areas adjacent to the third target superpixel area, and determining the optical flow data of each pixel in the third target superpixel area by adopting a bilinear interpolation method based on the optical flow data of the two adjacent superpixel areas.
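A sketch covering these region types, under the assumptions of the earlier sketches (the matches list and the labels_I map); the outlier threshold is an illustrative assumption, and the neighbour interpolation for empty regions is indicated but left schematic.

```python
# Sketch of steps 103-104: group matched pairs by superpixel region, reject
# abnormal pairs, and write one flow value into every pixel of the region.
import numpy as np

H, W = labels_I.shape
flow_init = np.zeros((H, W, 2), dtype=np.float32)

by_region = {}
for c1, _c2, f in matches:
    by_region.setdefault(c1["SPI"], []).append(f)

for region, flows in by_region.items():
    flows = np.asarray(flows, dtype=np.float32)
    if len(flows) >= 2:                      # first region type: two or more pairs
        med = np.median(flows, axis=0)       # reject pairs far from the median flow
        keep = np.linalg.norm(flows - med, axis=1) <= 3.0   # assumed threshold
        if keep.any():
            flows = flows[keep]
    # second region type (exactly one pair) falls through to the plain mean
    flow_init[labels_I == region] = flows.mean(axis=0)

# third region type (no pair): fill from the flow of two neighbouring regions
# by bilinear interpolation, omitted here for brevity
```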
In step 105, after the optical flow data of each pixel in all the super-pixel areas of the target left eye image and/or the target right eye image is obtained through calculation, all the optical flow data obtained through calculation are determined to be dense optical flows of the target left eye image and the target right eye image, and the dense optical flows are initial values of the dense optical flows of the binocular images.
After determining the dense optical flows of the target left eye image and the target right eye image based on the obtained optical flow data of each pixel in the super-pixel region to which the matching corner pair belongs, the method further comprises:
and performing layer-by-layer iterative computation on the dense optical flows of the images corresponding to the layers in the first pyramid image layer and the second pyramid image layer according to the sequence of the resolution from small to large on the basis of the dense optical flows of the target left eye image and the target right eye image, so as to obtain the dense optical flows of the left eye image to be matched and the right eye image to be matched.
Specifically, the dense optical flow of the target left eye image and the target right eye image is used as the initial value of the optical flow field of the topmost layer of the pyramid image layers; the accurate optical flow field of that layer is calculated using this initial value; and iterating layer by layer in this way finally yields the dense optical flow of the binocular image, as sketched below.
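A sketch of this layer-by-layer refinement, continuing the sketches above; the commented-out refine stands for any per-layer dense matcher seeded with the upscaled field and is left abstract.

```python
# Sketch of the coarse-to-fine iteration: upsampling a flow field enlarges its
# spatial size, and the flow vectors must be scaled by the same factors.
import cv2

def upscale_flow(flow, size):                # size = (width, height)
    w, h = size
    sx, sy = w / flow.shape[1], h / flow.shape[0]
    up = cv2.resize(flow, size, interpolation=cv2.INTER_LINEAR)
    up[..., 0] *= sx                         # scale the flow vectors with the image
    up[..., 1] *= sy
    return up

flow = flow_init                             # initial value at the topmost layer
for L in range(len(pyr_I) - 2, -1, -1):      # from layer M-1 down to layer 1
    h, w = pyr_I[L].shape[:2]
    flow = upscale_flow(flow, (w, h))
    # flow = refine(pyr_I[L], pyr_J[L], flow)  # per-layer accurate flow (abstract)
```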
Referring to fig. 2, a second flowchart of the dense optical flow calculation method according to the embodiment of the present invention is shown. As shown in fig. 2, the flow is as follows:
step 201, acquiring a binocular image;
step 202, carrying out layer decomposition on the binocular image according to the resolution ratio to obtain a pyramid image layer;
step 203, performing superpixel segmentation on the target left eye image of the bottommost layer in the pyramid image layers to obtain a superpixel region;
step 204, extracting the corner points of the target left eye image to obtain a first corner point, and extracting the corner points of the target right eye image of the bottommost layer in the pyramid image layers to obtain a second corner point;
step 205, matching the obtained first corner point and the second corner point to obtain a matched corner point pair; calculating the obtained optical flow data of the matching angle point pairs;
step 206, determining the super pixel area to which the obtained matching corner pair belongs;
step 207, determining optical flow data of each pixel in the super pixel area to which the obtained matching corner pair belongs based on the obtained optical flow data of the matching corner pair;
step 208, determining the initial value of the dense optical flow according to the optical flow data of each pixel in each whole super-pixel area;
step 209, based on the initial value, the pyramid graph layer iteratively calculates the dense optical flow of the binocular image layer by layer;
return to execute step 201.
The dense optical flow calculation method provided by the embodiment of the invention is characterized in that when dense optical flow calculation is carried out on a binocular image based on a pyramid image layer, optical flow data of each pixel in a superpixel area to which a matched corner pair belongs is calculated by adopting optical flow data of the matched corner pair obtained by matching a first corner of a target left eye image and a second corner of the target right eye image, wherein the target left eye image and the target right eye image are images corresponding to image layers with the minimum resolution in the pyramid image layer of the binocular image, and the superpixel area is an area obtained by superpixel segmentation of the target left eye image and/or the target right eye image.
In this way, by matching the optical flow data of the corner point pairs, the optical flow data of each pixel in the super-pixel region can be obtained, thereby obtaining an initial value for performing dense optical flow calculation on the binocular image based on the pyramid image layer. Compared with the traditional dense optical flow calculation method, the initial value of the dense optical flow calculation of the binocular image can be obtained without carrying out optical flow matching on the pixels in the target left eye image and the target right eye image one by one, so that the calculation time of the dense optical flow field of the binocular image can be greatly shortened on the basis of ensuring the accuracy, and the algorithm efficiency is improved.
The following describes a dense optical flow calculation apparatus provided by an embodiment of the present invention.
Referring to FIG. 3, a schematic structural diagram of a dense optical flow computing device provided by an embodiment of the invention is shown. As shown in FIG. 3, the dense optical flow computing device 300 includes:
an extraction module 301, configured to perform corner point extraction on a target left eye image to obtain a first corner point and to perform corner point extraction on a target right eye image to obtain a second corner point; wherein the target left eye image is the image corresponding to the layer with the minimum resolution in a first pyramid image layer, the first pyramid image layer is obtained by decomposing a left eye image to be matched according to resolution, the target right eye image is the image corresponding to the layer with the minimum resolution in a second pyramid image layer, the second pyramid image layer is obtained by decomposing a right eye image to be matched according to resolution, and the left eye image to be matched and the right eye image to be matched are binocular images of the same object captured in the same scene;
a matching module 302, configured to match the obtained first corner point and the second corner point to obtain a pair of matched corner points;
a first determining module 303, configured to determine a super-pixel region to which the obtained matching corner pair belongs; the super-pixel region is obtained by performing super-pixel segmentation on the basis of the target left eye image and/or the target right eye image;
a second determining module 304, configured to determine, based on the obtained optical flow data of the matching corner pairs, optical flow data of each pixel in a super-pixel region to which the obtained matching corner pairs belong;
a third determining module 305, configured to determine dense optical flows of the target left-eye image and the target right-eye image based on the obtained optical flow data of each pixel in the super-pixel region to which the matching corner pair belongs.
Optionally, at least one first corner point and at least one second corner point are obtained; the matching module 302 is specifically configured to calculate, for each obtained first corner point, an offset vector between that first corner point and each obtained second corner point, and to determine a first corner point and a second corner point related by a target offset vector as a matched corner point pair; the target offset vector is an offset vector whose corresponding offset information satisfies a preset condition.
Optionally, the second determining module 304 includes:
a first determining unit, configured to determine, if a first target superpixel region exists among the regions obtained by superpixel segmentation based on the target left eye image and/or the target right eye image, the average value of the optical flow data of the target matched corner point pairs in the first target superpixel region as the optical flow data of each pixel in the first target superpixel region; the first target superpixel region comprises at least two matched corner point pairs, and the target matched corner point pairs are matched corner point pairs whose optical flow data are smaller than or equal to a preset threshold value;
a second determining unit, configured to determine, if a second target superpixel region exists among the regions obtained by superpixel segmentation based on the target left eye image and/or the target right eye image, the optical flow data of the matched corner point pair in the second target superpixel region as the optical flow data of each pixel in the second target superpixel region; wherein the second target superpixel region includes only one matched corner point pair;
a third determining unit, configured to determine, if a third target superpixel region exists among the regions obtained by superpixel segmentation based on the target left eye image and/or the target right eye image, the optical flow data of each pixel in the third target superpixel region based on the optical flow data of the two superpixel regions adjacent to the third target superpixel region; wherein no matched corner point pairs are included in the third target superpixel region.
Optionally, if a first target superpixel region exists among the regions obtained by superpixel segmentation based on the target left eye image and/or the target right eye image, the second determining module 304 further comprises:
an eliminating unit, configured to eliminate, if matched corner point pairs whose optical flow data are larger than the preset threshold value exist in the first target superpixel region, those matched corner point pairs from the first target superpixel region, so as to obtain the target matched corner point pairs in the first target superpixel region.
Optionally, the apparatus further comprises:
and the computing module is used for performing layer-by-layer iterative computation on the dense optical flows of the images corresponding to the layers in the first pyramid image layer and the second pyramid image layer according to the sequence of the resolution from small to large on the basis of the dense optical flows of the target left eye image and the target right eye image to obtain the dense optical flows of the left eye image to be matched and the right eye image to be matched.
It should be noted that, the apparatus in the embodiment of the present invention can implement each process implemented in the above method embodiments, and can achieve the same beneficial effects, and for avoiding repetition, details are not described here again.
Referring to fig. 4, a schematic structural diagram of an electronic device provided by an embodiment of the present invention is shown. As shown in fig. 4, the electronic device 400 includes: a processor 401, a memory 402, a user interface 403 and a bus interface 404.
A processor 401 for reading the program in the memory 402, and executing the following processes:
performing corner point extraction on the target left eye image to obtain a first corner point; performing corner point extraction on the target right eye image to obtain a second corner point; wherein the target left eye image is the image corresponding to the layer with the minimum resolution in a first pyramid image layer, the first pyramid image layer is obtained by decomposing a left eye image to be matched according to resolution, the target right eye image is the image corresponding to the layer with the minimum resolution in a second pyramid image layer, the second pyramid image layer is obtained by decomposing a right eye image to be matched according to resolution, and the left eye image to be matched and the right eye image to be matched are binocular images of the same object captured in the same scene;
matching the obtained first corner point and the second corner point to obtain a matched corner point pair;
determining the super pixel area to which the obtained matching corner pairs belong; the super-pixel region is obtained by performing super-pixel segmentation on the basis of the target left eye image and/or the target right eye image;
determining optical flow data of each pixel in a super-pixel area to which the obtained matching corner pair belongs based on the obtained optical flow data of the matching corner pair;
and determining the dense optical flows of the target left eye image and the target right eye image based on the obtained optical flow data of each pixel in the super pixel area to which the matching corner points belong.
In FIG. 4, the bus architecture may include any number of interconnected buses and bridges, linking together one or more processors, represented by the processor 401, and various memory circuits, represented by the memory 402. The bus architecture may also link together various other circuits, such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore are not described further herein. The bus interface 404 provides an interface. For different user devices, the user interface 403 may also be an interface for connecting the needed devices, including but not limited to a keypad, a display, a speaker, a microphone, a joystick, and the like.
The processor 401 is responsible for managing the bus architecture and general processing, and the memory 402 may store data used by the processor 401 in performing operations.
Optionally, at least one first corner point and at least one second corner point are obtained; the processor 401 is specifically configured to:
for each obtained first corner point, calculate an offset vector between that first corner point and each obtained second corner point;
determine a first corner point and a second corner point related by a target offset vector as a matched corner point pair; wherein the target offset vector is an offset vector whose corresponding offset information satisfies a preset condition.
Optionally, the processor 401 is specifically configured to:
if a first target superpixel region exists among the regions obtained by superpixel segmentation based on the target left eye image and/or the target right eye image, determine the average value of the optical flow data of the target matched corner point pairs in the first target superpixel region as the optical flow data of each pixel in the first target superpixel region; the first target superpixel region comprises at least two matched corner point pairs, and the target matched corner point pairs are matched corner point pairs whose optical flow data are smaller than or equal to a preset threshold value;
if a second target superpixel region exists among the regions obtained by superpixel segmentation based on the target left eye image and/or the target right eye image, determine the optical flow data of the matched corner point pair in the second target superpixel region as the optical flow data of each pixel in the second target superpixel region; wherein the second target superpixel region includes only one matched corner point pair;
if a third target superpixel region exists among the regions obtained by superpixel segmentation based on the target left eye image and/or the target right eye image, determine the optical flow data of each pixel in the third target superpixel region based on the optical flow data of the two superpixel regions adjacent to the third target superpixel region; wherein no matched corner point pairs are included in the third target superpixel region.
Optionally, if a first target superpixel region exists among the regions obtained by superpixel segmentation based on the target left eye image and/or the target right eye image, the processor 401 is specifically configured to:
if matched corner point pairs whose optical flow data are larger than the preset threshold value exist in the first target superpixel region, eliminate those matched corner point pairs from the first target superpixel region to obtain the target matched corner point pairs in the first target superpixel region.
Optionally, the processor 401 is further configured to:
and performing layer-by-layer iterative computation of the dense optical flows on the images corresponding to each of the first pyramid image layer and the second pyramid image layer according to the sequence from small resolution to large resolution based on the dense optical flows of the target left eye image and the target right eye image to obtain the dense optical flows of the left eye image to be matched and the right eye image to be matched.
Preferably, an embodiment of the present invention further provides an electronic device, which includes a processor 401, a memory 402, and a computer program stored in the memory 402 and capable of running on the processor 401, where the computer program is executed by the processor 401 to implement each process of the above-mentioned dense optical flow calculation method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the dense optical flow calculation method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed system and method may be implemented in other ways. For example, the above-described system embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiments of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a U disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (12)

1. A method of dense optical flow computation, the method comprising:
performing corner point extraction on a target left eye image to obtain a first corner point; performing corner point extraction on a target right eye image to obtain a second corner point; wherein the target left eye image is the image corresponding to the layer with the minimum resolution in a first pyramid image layer, the first pyramid image layer is obtained by decomposing a left eye image to be matched according to resolution, the target right eye image is the image corresponding to the layer with the minimum resolution in a second pyramid image layer, the second pyramid image layer is obtained by decomposing a right eye image to be matched according to resolution, and the left eye image to be matched and the right eye image to be matched are binocular images of the same object captured in the same scene;
matching the obtained first corner point and the second corner point to obtain a matched corner point pair;
determining the super pixel area to which the obtained matching corner pairs belong; the super-pixel region is obtained by performing super-pixel segmentation on the basis of the target left eye image and/or the target right eye image;
determining optical flow data of each pixel in a super-pixel area to which the obtained matching corner pair belongs based on the obtained optical flow data of the matching corner pair;
and determining the dense optical flows of the target left eye image and the target right eye image based on the obtained optical flow data of each pixel in the super pixel area to which the matching corner points belong.
2. The method according to claim 1, wherein at least one first corner point and at least one second corner point are obtained, and the matching of the obtained first corner point and the second corner point to obtain a matched corner point pair comprises:
for each obtained first corner point, calculating an offset vector between that first corner point and each obtained second corner point;
determining a first corner point and a second corner point related by a target offset vector as a matched corner point pair; wherein the target offset vector is an offset vector whose corresponding offset information satisfies a preset condition.
3. The method according to claim 1, wherein the determining of the optical flow data of each pixel in the superpixel region to which the obtained matched corner point pairs belong, based on the obtained optical flow data of the matched corner point pairs, comprises:
if a first target superpixel region exists among the regions obtained by superpixel segmentation based on the target left eye image and/or the target right eye image, determining the average value of the optical flow data of the target matched corner point pairs in the first target superpixel region as the optical flow data of each pixel in the first target superpixel region; wherein the first target superpixel region comprises at least two matched corner point pairs, and the target matched corner point pairs are matched corner point pairs whose optical flow data are smaller than or equal to a preset threshold value;
if a second target superpixel region exists among the regions obtained by superpixel segmentation based on the target left eye image and/or the target right eye image, determining the optical flow data of the matched corner point pair in the second target superpixel region as the optical flow data of each pixel in the second target superpixel region; wherein the second target superpixel region includes only one matched corner point pair;
if a third target superpixel region exists among the regions obtained by superpixel segmentation based on the target left eye image and/or the target right eye image, determining the optical flow data of each pixel in the third target superpixel region based on the optical flow data of the two superpixel regions adjacent to the third target superpixel region; wherein no matched corner point pair is included in the third target superpixel region.
4. The method according to claim 3, wherein, if a first target superpixel region exists among the regions obtained by superpixel segmentation of the target left eye image and/or the target right eye image, then before determining the mean of the optical flow data of the target matching corner point pairs in the first target superpixel region as the optical flow data of each pixel in that region, the method further comprises:
if matching corner point pairs whose optical flow data is greater than the preset threshold exist in the first target superpixel region, removing those pairs to obtain the target matching corner point pairs in the first target superpixel region.
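Isolating the claim-4 step: before averaging, pairs whose optical flow data exceeds the preset threshold are removed. Using the flow-vector magnitude as the "optical flow data" and the threshold value are assumptions.

    import numpy as np

    def target_matching_pairs(flows, thresh=32.0):
        # Keep only pairs whose optical flow data is <= the preset threshold.
        flows = np.asarray(flows, np.float32)
        return flows[np.linalg.norm(flows, axis=1) <= thresh]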
5. The method according to claim 1, wherein after determining the dense optical flow of the target left eye image and the target right eye image based on the obtained optical flow data of each pixel in the superpixel regions to which the matching corner point pairs belong, the method further comprises:
starting from the dense optical flow of the target left eye image and the target right eye image, performing layer-by-layer iterative computation of the dense optical flow of the images corresponding to each layer in the first pyramid image layers and the second pyramid image layers, in order of increasing resolution, to obtain the dense optical flow of the left eye image to be matched and the right eye image to be matched.
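A rough coarse-to-fine loop for claim 5, assuming OpenCV pyramids and a simple resize-and-rescale upsampling of the flow between layers; the claim does not specify the per-layer refinement rule, so it is left as a caller-supplied hook rather than presented as the patented update.

    import cv2
    import numpy as np

    def build_pyramid(img, levels):
        # Pyramid layers ordered from full resolution down to the smallest.
        pyr = [img]
        for _ in range(levels - 1):
            pyr.append(cv2.pyrDown(pyr[-1]))
        return pyr

    def coarse_to_fine(left, right, levels=4, init_flow=None, refine=None):
        # init_flow: dense flow of the smallest layer (from the corner /
        # superpixel stage); refine: optional per-level update (unspecified).
        pl, pr = build_pyramid(left, levels), build_pyramid(right, levels)
        flow = init_flow
        for l, r in zip(reversed(pl), reversed(pr)):   # smallest layer first
            h, w = l.shape[:2]
            if flow is None:
                flow = np.zeros((h, w, 2), np.float32)
            elif flow.shape[:2] != (h, w):
                flow = cv2.resize(flow, (w, h)) * 2.0  # upsample, rescale displacements
            if refine is not None:
                flow = refine(l, r, flow)
        return flow                                    # full-resolution dense flow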
6. A dense optical flow calculation apparatus, comprising:
an extraction module, configured to perform corner point extraction on a target left eye image to obtain first corner points, and on a target right eye image to obtain second corner points, wherein the target left eye image is the image corresponding to the lowest-resolution layer of first pyramid image layers obtained by decomposing a left eye image to be matched by resolution, the target right eye image is the image corresponding to the lowest-resolution layer of second pyramid image layers obtained by decomposing a right eye image to be matched by resolution, and the left eye image to be matched and the right eye image to be matched are binocular images of the same object captured in the same scene;
a matching module, configured to match the obtained first corner points with the second corner points to obtain matching corner point pairs;
a first determining module, configured to determine the superpixel region to which each obtained matching corner point pair belongs, wherein the superpixel region is obtained by performing superpixel segmentation on the target left eye image and/or the target right eye image;
a second determining module, configured to determine the optical flow data of each pixel in the superpixel region to which each obtained matching corner point pair belongs, based on the optical flow data of that matching corner point pair; and
a third determining module, configured to determine the dense optical flow of the target left eye image and the target right eye image based on the obtained optical flow data of each pixel in the superpixel regions to which the matching corner point pairs belong.
7. The apparatus according to claim 6, wherein at least one first corner point and at least one second corner point are obtained, and the matching module is specifically configured to: for each obtained first corner point, calculate an offset vector between that first corner point and each obtained second corner point; and determine the first corner point and the second corner point associated with a target offset vector as a matching corner point pair, wherein the target offset vector is an offset vector whose offset information meets a preset condition.
8. The apparatus of claim 6, wherein the second determining module comprises:
a first determining unit, configured to, if a first target superpixel region exists among the regions obtained by superpixel segmentation of the target left eye image and/or the target right eye image, determine the mean of the optical flow data of the target matching corner point pairs in the first target superpixel region as the optical flow data of each pixel in that region, wherein the first target superpixel region contains at least two matching corner point pairs, and the target matching corner point pairs are those whose optical flow data is less than or equal to a preset threshold;
a second determining unit, configured to, if a second target superpixel region exists among those regions, determine the optical flow data of the single matching corner point pair in the second target superpixel region as the optical flow data of each pixel in that region, wherein the second target superpixel region contains only one matching corner point pair; and
a third determining unit, configured to, if a third target superpixel region exists among those regions, determine the optical flow data of each pixel in the third target superpixel region based on the optical flow data of two superpixel regions adjacent to it, wherein the third target superpixel region contains no matching corner point pairs.
9. The apparatus according to claim 8, wherein, if a first target superpixel region exists among the regions obtained by superpixel segmentation of the target left eye image and/or the target right eye image, the second determining module further comprises:
a removing unit, configured to, if matching corner point pairs whose optical flow data is greater than the preset threshold exist in the first target superpixel region, remove those pairs to obtain the target matching corner point pairs in the first target superpixel region.
10. The apparatus of claim 6, further comprising:
a calculation module, configured to, starting from the dense optical flow of the target left eye image and the target right eye image, perform layer-by-layer iterative computation of the dense optical flow of the images corresponding to each layer in the first pyramid image layers and the second pyramid image layers, in order of increasing resolution, to obtain the dense optical flow of the left eye image to be matched and the right eye image to be matched.
11. An electronic device, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the dense optical flow calculation method according to any one of claims 1 to 5.
12. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, implements the steps of the dense optical flow calculation method according to any one of claims 1 to 5.
CN201911198748.6A 2019-11-29 2019-11-29 Dense optical flow calculation method, dense optical flow calculation device, electronic device, and storage medium Active CN112884817B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911198748.6A CN112884817B (en) 2019-11-29 2019-11-29 Dense optical flow calculation method, dense optical flow calculation device, electronic device, and storage medium

Publications (2)

Publication Number Publication Date
CN112884817A CN112884817A (en) 2021-06-01
CN112884817B true CN112884817B (en) 2022-08-02

Family

ID=76038421

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911198748.6A Active CN112884817B (en) 2019-11-29 2019-11-29 Dense optical flow calculation method, dense optical flow calculation device, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN112884817B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114022518B (en) * 2022-01-05 2022-04-12 深圳思谋信息科技有限公司 Method, device, equipment and medium for acquiring optical flow information of image

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7031497B2 (en) * 2001-11-05 2006-04-18 Koninklijke Philips Electronics N.V. Method for computing optical flow under the epipolar constraint
JP4964852B2 (en) * 2008-09-24 2012-07-04 富士フイルム株式会社 Image processing apparatus, method, and program
WO2015154286A1 (en) * 2014-04-10 2015-10-15 深圳市大疆创新科技有限公司 Method and device for measuring flight parameters of unmanned aircraft
US10467765B2 (en) * 2017-06-29 2019-11-05 Texas Instruments Incorporated Dense optical flow processing in a computer vision system
TWI673653B (en) * 2018-11-16 2019-10-01 財團法人國家實驗研究院 Moving object detection system and method

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104978728A (en) * 2014-04-08 2015-10-14 南京理工大学 Image matching system of optical flow method
CN105261042A (en) * 2015-10-19 2016-01-20 华为技术有限公司 Optical flow estimation method and apparatus
CN105809712A (en) * 2016-03-02 2016-07-27 西安电子科技大学 Effective estimation method for large displacement optical flows
CN106570888A (en) * 2016-11-10 2017-04-19 河海大学 Target tracking method based on FAST (Features from Accelerated Segment Test) corner point and pyramid KLT (Kanade-Lucas-Tomasi)
CN107808388A (en) * 2017-10-19 2018-03-16 中科创达软件股份有限公司 Image processing method, device and electronic equipment comprising moving target
CN107992073A (en) * 2017-12-07 2018-05-04 深圳慧源创新科技有限公司 Unmanned plane fixed point flying method, unmanned plane fixed point flight instruments and unmanned plane
CN109146833A (en) * 2018-08-02 2019-01-04 广州市鑫广飞信息科技有限公司 A kind of joining method of video image, device, terminal device and storage medium
CN109509211A (en) * 2018-09-28 2019-03-22 北京大学 Positioning simultaneously and the feature point extraction and matching process and system built in diagram technology
CN109741387A (en) * 2018-12-29 2019-05-10 北京旷视科技有限公司 Solid matching method, device, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Densely sampled noiseless optical flow for motion based visual activity analysis; Naresh Kumar et al.; 2017 3rd International Conference on Advances in Computing, Communication & Automation (ICACCA) (Fall); 2018-04-23; Sections 1-3 *
Non-rigid dense matching for large-displacement optical flow estimation; Zhang Congxuan et al.; Acta Electronica Sinica; 2019-06-30 (No. 6); pp. 1317-1323 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant