CN106952247B - Double-camera terminal and image processing method and system thereof


Info

Publication number: CN106952247B (application CN201710161892.7A)
Authority: CN (China)
Prior art keywords: image, matching, points, depth map, sparse
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN106952247A (en)
Inventors: 黎礼铭, 马骏, 刘勇, 邹泽东
Current Assignee: Chengdu Topplusvision Science & Technology Co ltd
Original Assignee: Chengdu Topplusvision Science & Technology Co ltd
Application filed by Chengdu Topplusvision Science & Technology Co ltd
Priority to CN201710161892.7A
Publication of CN106952247A, followed by grant and publication of CN106952247B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/45 Cameras or camera modules for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • H04N 23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N 23/951 Computational photography systems using two or more images to influence resolution, frame rate or aspect ratio

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of image processing, and provides a double-camera terminal together with an image processing method and system for it. The method comprises the following steps: a first image and a second image are respectively collected through the two cameras of the double-camera terminal; stereo matching is performed on the first image and the second image to obtain an initial depth map between them; the first image and the second image are projected into a preset empty image, and their pixels are assigned into the empty image to obtain a spliced image; the initial depth map is expanded to match the size of the spliced image, and the depth values of the expanded area are supplemented to obtain an expanded depth map; finally, the spliced image is blurred according to the depth information of the expanded depth map. The invention balances blurring quality against computational cost, outputs the complete visual information within the cameras' field of view, and makes full use of the functions and roles of the two cameras.

Description

Double-camera terminal and image processing method and system thereof
Technical Field
The invention relates to the technical field of image processing, in particular to a double-camera terminal and an image processing method and system thereof.
Background
With the development of computer vision technology, dual-camera mobile terminals have become popular, and at present the dual-camera devices of mobile terminals mainly aim to meet users' photographing needs. However, because high-quality camera hardware is expensive and conflicts with market demands on the size and weight of mobile terminals, manufacturing a mobile terminal that matches the shooting quality of a professional camera is very difficult, costly, and constrained by the manufacturing process. To obtain better pictures at lower cost, digital image processing techniques are used to simulate the shooting effects of a professional camera, applying operations such as enhancement, filtering and blurring to the captured image.
Several digital image blurring methods have been proposed, such as object-space depth rendering and image-space depth rendering. Object-space methods operate on a three-dimensional scene representation and compute the depth-of-field effect directly in the rendering pipeline. Image-space methods, also called post-processing methods, blur a sharp scene image using the information of the scene's depth map.
However, these methods are complicated and computationally expensive, and they all process only one of the two cameras' images: the final output is an image shot by a single camera. As a result, the output image cannot fully represent the actual field of view of the cameras, the capabilities of the two cameras are not fully exploited, and resources are wasted to a certain extent.
Disclosure of Invention
In order to solve the technical problems, the invention provides a double-camera terminal and also provides an image processing method and system based on the double-camera terminal.
Scheme one
The invention provides an image processing method based on a double-camera terminal, which mainly comprises the following steps:
collecting an image: respectively acquiring a first image and a second image through two cameras of a double-camera terminal;
obtaining an initial depth map: performing stereo matching on the first image and the second image to obtain an initial depth map between the first image and the second image;
image splicing: projecting the first image and the second image into a preset empty image, and assigning pixels of the first image and the second image into the empty image to obtain a spliced image;
and (3) expanding the depth map: matching and expanding the initial depth map according to the size of the spliced image, and supplementing the depth value of the expanded area to obtain an expanded depth map;
blurring an image: and performing blurring processing on the spliced image according to the depth information of the expanded depth map.
Scheme two
The invention also discloses a double-camera terminal, which comprises a first image collector, a second image collector, a memory, a processor and a computer program which is stored on the memory and can run on the processor;
the first image collector is used for collecting a first image;
the second image collector is used for collecting a second image;
the processor implements the following steps when executing the program: respectively acquiring a first image and a second image through two cameras of a double-camera terminal;
performing stereo matching on the first image and the second image to obtain an initial depth map between the first image and the second image;
projecting the first image and the second image into a preset empty image, and assigning pixels of the first image and the second image into the empty image to obtain a spliced image;
matching and expanding the initial depth map according to the size of the spliced image, and supplementing the depth value of the expanded area to obtain an expanded depth map;
and performing blurring processing on the spliced image according to the depth information of the expanded depth map.
Scheme three
The invention also provides an image processing system based on the double-camera terminal, which comprises:
the first image acquisition module is used for acquiring a first image;
the second image acquisition module is used for acquiring a second image;
the depth extraction module is used for carrying out stereo matching on the first image and the second image to obtain an initial depth map between the first image and the second image;
the image splicing module is used for projecting the first image and the second image into a preset empty image and assigning pixels of the first image and the second image into the empty image to obtain a spliced image;
the depth expansion module is used for performing matching expansion on the initial depth map according to the size of the spliced image and supplementing the depth value of the expanded area to obtain an expanded depth map;
and the image blurring module is used for blurring the spliced image according to the depth information of the expanded depth map.
The invention has the beneficial effects that:
1. By utilizing a semi-dense stereo matching algorithm, a depth map with high accuracy can be obtained, while the image splicing precision remains good.
2. The acquisition of the depth map and the image splicing are carried out simultaneously, which greatly reduces the running time and saves memory.
3. The image is layered and blurred, with the blurring parameters adapted per layer, achieving a good blurring effect with a small amount of calculation.
4. The spliced image output by the invention presents the information within the binocular cameras' entire field of view, making full use of the functions and roles of the two cameras.
5. The depth map is expanded according to the spliced image, yielding depth information for the entire field of view of the cameras.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
FIG. 1 is a flow chart of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The first scheme is as follows: image processing method based on double-camera terminal
As shown in fig. 1, the present invention discloses an image processing method suitable for a mobile terminal with two cameras, which mainly comprises the following steps:
s100, collecting an image: respectively acquiring a first image and a second image through two cameras of a double-camera terminal;
s200, acquiring an initial depth map: performing stereo matching on the first image and the second image to obtain an initial depth map between the first image and the second image;
s300, image splicing: projecting the first image and the second image into a preset empty image, and assigning pixels of the first image and the second image into the empty image to obtain a spliced image;
s400, expanding the depth map: matching and expanding the initial depth map according to the size of the spliced image, and supplementing the depth value of the expanded area to obtain an expanded depth map;
s500, blurring the image: and performing blurring processing on the spliced image according to the depth information of the expanded depth map.
In the present invention, the execution order of the step of obtaining the initial depth map (S200) and the step of image splicing (S300) may be chosen according to actual requirements: after the image collection step S100 is executed, step S200 may be performed before step S300, step S300 may be performed before step S200, or the two steps may be performed simultaneously. A sketch of the overall flow follows.
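To make the flow concrete, the following Python sketch strings the five steps together; every function and method name here (capture_pair, initial_depth_map, stitch_images, expand_depth_map, blur_by_depth) is a hypothetical placeholder, and only the call order, with S200/S300 interchangeable, comes from the text.

```python
# A minimal pipeline sketch with hypothetical helper names; the steps map to
# S100-S500 of the method. S200 and S300 could equally run in the other order
# or concurrently.
def process(terminal):
    img1, img2 = terminal.capture_pair()              # S100: collect first/second image
    depth0 = initial_depth_map(img1, img2)            # S200: stereo matching -> initial depth map
    stitched = stitch_images(img1, img2)              # S300: project/assign into the empty image
    depth = expand_depth_map(depth0, stitched.shape)  # S400: match-and-expand the depth map
    return blur_by_depth(stitched, depth)             # S500: depth-driven blurring
```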
Further, the step S100 of acquiring an image in the present invention may further include an image preprocessing step of performing preprocessing such as enhancing and denoising on the first image and the second image, respectively.
Further, the stereo matching in the present invention includes sparse matching and dense matching, and the step S200 of obtaining an initial depth map includes the following sub-steps:
s201, sparse matching: respectively extracting the characteristic points of the first image and the second image, performing sparse matching on the characteristic points of the first image and the characteristic points of the second image, taking the characteristic points successfully subjected to sparse matching in the first image as first sparse matching homonymy points, taking the characteristic points successfully subjected to sparse matching in the second image as second sparse matching homonymy points, and forming sparse matching homonymy point pairs by the mutually matched first sparse matching homonymy points and the second sparse matching homonymy points.
S202, dense matching: and carrying out dense matching on the pixel points of the first image and the second image according to the sparse matching homonym point pair, taking the pixel points which are successfully densely matched in the first image as first dense matching homonym points, taking the pixel points which are successfully densely matched in the second image as second dense matching homonym points, and forming dense matching homonym point pairs by the first dense matching homonym points and the second dense matching homonym points which are mutually matched.
S203, calculating an initial depth map: and calculating depth information contained between the first image and the second image according to the dense matching homonymous point pairs to obtain an initial depth map.
When the feature points of the first image and the second image are extracted in step S201, the SIFT algorithm (Scale-Invariant Feature Transform) may be applied to each image. A first scale group may be built by filtering the image several times in succession with a Gaussian filter. The image is then reduced to half its original size and the same Gaussian filtering is applied to form a second scale group; this is repeated until the image is smaller than a given threshold. The Gaussian images within each scale group are then differenced to form a set of difference-of-Gaussian scales. Finally, local extrema of the difference-of-Gaussian images are taken as the feature points of the first/second image in scale space.
When the feature points of the first image and the feature points of the second image are sparsely matched, the quality of a match can be characterized by the Euclidean distance between feature descriptors: for a feature point p_i in the first image, find the two feature points q'_i and q''_i in the second image whose descriptors are the closest and second closest to that of p_i in Euclidean distance, and compute the ratio r of the Euclidean distance between q'_i and p_i to that between q''_i and p_i. If r is smaller than a specified threshold, the pair is regarded as successfully matched; otherwise the match fails, and unmatched points are no longer treated as feature points. Each successfully matched pair of feature points forms a sparse matching homonymous point pair, whose two members are taken as the first sparse matching homonymous point and the second sparse matching homonymous point, respectively.
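As a concrete illustration, a minimal OpenCV sketch of this extract-and-ratio-test stage follows; the detector choice, the 0.7 ratio threshold, and all variable names are assumptions for illustration rather than values fixed by the invention.

```python
# SIFT extraction plus ratio-test sparse matching, a sketch assuming OpenCV
# (cv2.SIFT_create is available in OpenCV >= 4.4). The threshold 0.7 stands in
# for the patent's unspecified "specified threshold".
import cv2

def sparse_match(img1, img2, ratio_thresh=0.7):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)     # feature points + descriptors
    kp2, des2 = sift.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)              # Euclidean-distance matcher
    pairs = []
    for candidates in matcher.knnMatch(des1, des2, k=2):
        if len(candidates) < 2:                       # need closest and second closest
            continue
        m, n = candidates
        if m.distance < ratio_thresh * n.distance:    # ratio test: accept the match
            pairs.append((kp1[m.queryIdx].pt, kp2[m.trainIdx].pt))
    return pairs                                      # sparse matching homonymous point pairs
```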
Further, when densely matching the first image and the second image, the following may be performed:
s2021: triangulating the first image by taking the first sparse matching homonymy point as a supporting point to obtain a plurality of first image triangles; triangulating the second image by taking the second sparse matching homonymy point as a supporting point to obtain a plurality of second image triangles; delaunay triangulation may be preferred for processing.
S2022: and estimating the image parallax according to the sparse matching relationship between the first image triangle and the second image triangle to obtain an estimated parallax map.
If Delaunay triangulation is adopted, the estimated disparity d_p of any pixel point p lying inside a Delaunay triangle is computed from the pixel's image coordinates and the plane-fitting parameters of that triangle, as follows:

d_p = a·u_p + b·v_p + c

where a, b and c are parameters obtained by fitting the plane of the Delaunay triangle in which the pixel point lies, and (u_p, v_p) are the coordinates of pixel point p in the image.
S2023: and matching the pixel points in the first image with the pixel points in the second image by using the estimated disparity map by using the first image as a reference image, wherein the pixel points successfully matched with each other are used as a first dense matching homonym point and a second dense matching homonym point.
Since sparse matching cannot produce a dense disparity map, it alone cannot provide the accurate image registration needed for image splicing. Dense matching of the first image and the second image is therefore required. The invention constructs Delaunay triangles from the matched points and estimates the disparity of every unmatched point from the disparity plane of the Delaunay triangle containing it, thereby completing the matching.
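The plane-fit interpolation of S2021-S2023 can be sketched with SciPy as follows; the (u, v) array layout, the NaN convention for pixels outside the triangulation, and the function name are illustrative assumptions.

```python
# Dense disparity estimation over a Delaunay triangulation of the sparse
# support points: fit d = a*u + b*v + c per triangle, then evaluate the
# containing triangle's plane at every pixel.
import numpy as np
from scipy.spatial import Delaunay

def interpolate_disparity(pts, disp, width, height):
    tri = Delaunay(pts)                        # pts: (N, 2) support-point coords
    planes = []
    for simplex in tri.simplices:              # fit (a, b, c) from the 3 vertices
        A = np.column_stack([pts[simplex, 0], pts[simplex, 1], np.ones(3)])
        planes.append(np.linalg.solve(A, disp[simplex]))
    planes = np.array(planes)

    us, vs = np.meshgrid(np.arange(width), np.arange(height))
    pix = np.column_stack([us.ravel(), vs.ravel()])
    idx = tri.find_simplex(pix)                # -1 for pixels outside the hull
    est = np.full(len(pix), np.nan)
    inside = idx >= 0
    a, b, c = planes[idx[inside]].T
    est[inside] = a * pix[inside, 0] + b * pix[inside, 1] + c
    return est.reshape(height, width)          # d_p = a*u_p + b*v_p + c per pixel
```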
When the initial depth map is calculated, the depth information between the first image and the second image is computed from the dense matching homonymous point pairs. Specifically, for a first dense matching homonymous point x_l and its second dense matching homonymous point x_r, the disparity is obtained by the parallax formula D = x_l - x_r, and the depth is then obtained from the depth map calculation formula

Z = B·f / D

yielding the initial depth map, where B is the camera baseline length, f is the camera principal distance, and the subscripts l and r denote the first and second images, respectively.
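A direct transcription of these two formulas follows; the guard against zero disparity is an added assumption to avoid division by zero.

```python
# Disparity D = x_l - x_r, then depth Z = B*f / D. B (baseline) and f
# (principal distance) come from camera calibration; min_disp is an assumption.
import numpy as np

def disparity_to_depth(x_l, x_r, B, f, min_disp=1e-6):
    D = np.asarray(x_l, dtype=np.float64) - np.asarray(x_r, dtype=np.float64)
    Z = np.full_like(D, np.inf)                # zero disparity -> point at infinity
    valid = np.abs(D) > min_disp
    Z[valid] = B * f / D[valid]
    return Z
```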
Further, the invention mainly comprises the following substeps when image splicing is carried out:
S301, establishing a blank image, wherein the length of the blank image I is the sum of the lengths of the first image and the second image, and the width of the blank image I is the width of the first image;
S302, projecting the pixel point coordinate information of the first image into the empty image by using a first initial projective transformation matrix, and projecting the pixel point coordinate information of the second image into the empty image by using a second initial projective transformation matrix, to obtain a first pixel point coordinate image;
S303, according to the coordinate information of the mutually matched first dense matching homonymous points and second dense matching homonymous points in the pixel point coordinates, optimizing and calibrating the first initial projective transformation matrix and the second initial projective transformation matrix so that the first dense matching homonymous points and the second dense matching homonymous points coincide and/or the distance between them falls within a preset error range, obtaining a first projective transformation matrix H_l and a second projective transformation matrix H_r;
S304, re-projecting the pixel point coordinate information of the first image into the empty image through the first projective transformation matrix H_l, and at the same time, according to the coordinate information of the first dense matching homonymous points in the empty image, projecting the pixel point coordinate information of the second image into the empty image through the second projective transformation matrix H_r, to obtain a second pixel point coordinate image;
S305, according to the pixel point coordinate information of the first dense matching homonymous points, matching and projecting into the empty image the pixel point coordinate information of those pixel points in the first image for which stereo matching did not succeed, and according to the pixel point coordinate information of the second dense matching homonymous points, matching and projecting into the empty image the pixel point coordinate information of those pixel points in the second image for which stereo matching did not succeed;
S306, assigning the pixel point values of the first image and the second image to the corresponding positions of the empty image according to the pixel point coordinate information of each pixel point of the first image and the second image in the empty image, to obtain a spliced image.
The method specifically comprises the following steps: computing the first projective transformation matrix H_l and the second projective transformation matrix H_r. For a pixel point (x_l, y_l) on the first image, the transformation relationship with the pixel point (x, y) on the spliced image is

[x, y, 1]^T ∝ H_l · [x_l, y_l, 1]^T

Substituting more than 4 groups of matching homonymous points into this formula yields the image transformation matrix H_l. However, in order to obtain an accurate first projective transformation matrix H_l, the method repeats the calculation N times on randomly sampled dense matching homonymous points, obtaining N first projective transformation matrices H_l^i, i = 1, ..., N, and takes their average as the final first projective transformation matrix:

H_l = (1/N) · Σ_{i=1}^{N} H_l^i

Similarly, the second projective transformation matrix H_r between the second image and the spliced image is obtained.
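The repeat-and-average estimate of H_l can be sketched as follows; the trial count, the subset size, and the use of OpenCV's least-squares findHomography are illustrative assumptions.

```python
# Averaged homography H_l = (1/N) * sum_i H_l^i, estimated from N randomly
# sampled subsets of the dense matching homonymous points. src_pts/dst_pts
# are float32 (K, 2) point arrays in the first image and the spliced image.
import numpy as np
import cv2

def average_homography(src_pts, dst_pts, N=10, subset=20, seed=0):
    rng = np.random.default_rng(seed)
    Hs = []
    for _ in range(N):
        pick = rng.choice(len(src_pts), size=min(subset, len(src_pts)), replace=False)
        H, _ = cv2.findHomography(src_pts[pick], dst_pts[pick], 0)  # least-squares fit, >= 4 points
        if H is not None:
            Hs.append(H / H[2, 2])             # fix the projective scale before averaging
    return np.mean(Hs, axis=0)
```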
The depth map obtained so far corresponds only to the image captured by one camera, so it cannot present depth information for the entire field of view. Moreover, the depth map can only present information for points that the left and right images can match; points that never find a match, in particular some edge points, have no depth information in the depth map. In order to obtain a depth map corresponding to the spliced image, the invention uses a depth map expansion method to produce an estimated depth map covering the depth of the cameras' entire field of view.
Further, the depth map expanding step S400 mainly includes the following sub-steps (a code sketch follows the list):
S401: generating an empty depth map with the same size as the spliced image, and projecting the depth information of the initial depth map into the empty depth map to obtain a second depth map;
S402: if the depth information of a pixel point p in the second depth map is empty, assigning to p the depth information of the pixel point q that is closest to p and whose depth information is not empty, obtaining the expanded depth map.
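A compact way to realize S402 is the nearest-index output of a Euclidean distance transform; using NaN to mark empty depth values is an assumed convention.

```python
# Depth-map expansion: every empty pixel takes the depth of the nearest pixel
# whose depth is not empty (a sketch using SciPy's distance transform).
import numpy as np
from scipy.ndimage import distance_transform_edt

def fill_depth_nearest(depth):
    empty = np.isnan(depth)                    # True where depth information is empty
    # For each empty pixel, indices of the closest pixel with valid depth.
    idx = distance_transform_edt(empty, return_distances=False, return_indices=True)
    return depth[tuple(idx)]                   # expanded depth map
```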
Further, the image blurring step of the present invention mainly includes the following sub-steps (a code sketch follows the formulas below):
S501, collecting a target depth value Z;
S502, layering the spliced image according to the depth value Z, the depth information of the expanded depth map, a preset lower limit value Th1 of the focusing-layer depth and a preset upper limit value Th2 of the focusing-layer depth, wherein Th1 = Z - a and Th2 = Z + b, a and b being two constant coefficients; the pixel points in the spliced image whose corresponding depth value Z_i ∈ [Th1, Th2] are taken as the focusing layer, and the remaining pixel points as the out-of-focus layer;
S503, performing contrast equalization processing on the focusing layer;
S504, blurring the out-of-focus layer by adaptively assigning different blurring coefficients to areas with different depth values.
The invention completes the blurring of the whole spliced image using the blurring coefficient and the image depth values: the new pixel value of a pixel point i is computed from its original pixel value I_i and the image blurring coefficient δ. The blurring coefficient δ of pixel point i is in turn computed from its depth value Z_i, the mean of the depth values, and the standard deviation of the depth values. The invention also performs equalization processing on the spliced image according to an equalization coefficient α: the new pixel value of a focusing-layer point i in the spliced image is obtained from α and I_i, the original pixel value of the focusing-layer point i in the spliced image.
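Putting S501-S504 together, a sketch might look like the following. Since the exact expressions for the blurring coefficient δ and the equalization coefficient α are given only as figures in the original, the quantile-based Gaussian sigmas and the histogram equalization used here are stand-in assumptions, not the patent's formulas.

```python
# Layered blurring sketch: the focusing layer gets contrast equalization, the
# out-of-focus layer gets Gaussian blur whose strength grows with deviation
# from the target depth Z. Assumes a uint8 BGR image and a float depth map.
import numpy as np
import cv2

def layered_blur(image, depth, Z, a, b, sigmas=(2.0, 4.0, 8.0)):
    th1, th2 = Z - a, Z + b                            # focusing-layer depth limits
    focus = (depth >= th1) & (depth <= th2)

    # S503: contrast equalization of the focusing layer (luminance channel).
    ycc = cv2.cvtColor(image, cv2.COLOR_BGR2YCrCb)
    ycc[..., 0] = cv2.equalizeHist(ycc[..., 0])
    equalized = cv2.cvtColor(ycc, cv2.COLOR_YCrCb2BGR)
    out = np.where(focus[..., None], equalized, image)

    # S504: adaptively stronger blur for larger depth deviation.
    dev = np.where(depth < th1, th1 - depth, depth - th2)
    if (~focus).any():
        edges = np.quantile(dev[~focus], [1 / 3, 2 / 3])
    else:
        edges = [0.0, 0.0]
    levels = np.digitize(dev, edges)                   # 0, 1, 2 -> increasing blur
    for level, sigma in enumerate(sigmas):
        blurred = cv2.GaussianBlur(image, (0, 0), sigma)
        mask = (~focus) & (levels == level)
        out[mask] = blurred[mask]
    return out
```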
Scheme II: double-camera terminal
The invention also provides a double-camera terminal, which comprises a first image collector, a second image collector, a memory, a processor and a computer program which is stored on the memory and can run on the processor;
the first image collector is used for collecting a first image;
the second image collector is used for collecting a second image;
the processor implements the following steps when executing the program: respectively acquiring a first image and a second image through two cameras of a double-camera terminal;
performing stereo matching on the first image and the second image to obtain an initial depth map between the first image and the second image;
projecting the first image and the second image into a preset empty image, and assigning pixels of the first image and the second image into the empty image to obtain a spliced image;
matching and expanding the initial depth map according to the size of the spliced image, and supplementing the depth value of the expanded area to obtain an expanded depth map;
and performing blurring processing on the spliced image according to the depth information of the expanded depth map.
In particular, when the processor of the dual-camera mobile terminal of the present invention executes the program, it can implement the steps and sub-steps described in each embodiment of the method of scheme one.
The double-camera terminal can be any dual-camera mobile device, such as a camera, a mobile phone, a tablet or a mobile computer.
The third scheme is as follows: image processing system based on double-camera terminal
The invention provides an image processing system based on a double-camera terminal. The system mainly comprises a first image acquisition module, a second image acquisition module, a depth extraction module, an image splicing module, a depth expansion module and an image blurring module.
The first image acquisition module is used for acquiring a first image.
And the second image acquisition module is used for acquiring a second image.
The depth extraction module is configured to perform stereo matching on the first image and the second image to obtain an initial depth map between the first image and the second image.
The image stitching module projects the first image and the second image into a preset empty image, and assigns pixels of the first image and the second image into the empty image to obtain a stitched image.
And the depth expansion module is used for performing matching expansion on the initial depth map according to the size of the spliced image and supplementing the depth value of the expanded area to obtain an expanded depth map.
And the image blurring module is used for blurring the spliced image according to the depth information of the expanded depth map.
Further, the depth extraction module comprises a sparse matching module, a dense matching module and a depth map generation module.
The sparse matching module is used for performing sparse matching on the feature points of the first image and the feature points of the second image, the feature points successfully subjected to sparse matching in the first image are used as first sparse matching homonymy points, the feature points successfully subjected to sparse matching in the second image are used as second sparse matching homonymy points, and the first sparse matching homonymy points and the second sparse matching homonymy points which are matched with each other form sparse matching homonymy point pairs.
The dense matching module is used for performing dense matching on pixel points of the first image and pixel points of the second image according to the sparse matching homonym point pairs, taking the pixel points which are successfully densely matched in the first image as first dense matching homonym points, taking the pixel points which are successfully densely matched in the second image as second dense matching homonym points, and forming dense matching homonym point pairs by the first dense matching homonym points and the second dense matching homonym points which are mutually matched.
And the depth map generation module is used for calculating depth information contained between the first image and the second image according to the dense matching homonymous point pairs to obtain an initial depth map.
In particular, in the present invention, the first image capturing module may be further configured to perform step S100 and the substeps thereof, the second image capturing module may be further configured to perform step S100 and the substeps thereof, the depth extracting module may be further configured to perform step S200 and the substeps thereof, the image stitching module may be further configured to perform step S300 and the substeps thereof, the depth expanding module may be further configured to perform step S400 and the substeps thereof, and the image blurring module may be further configured to perform step S500 and the substeps thereof.
The invention is not limited to the foregoing embodiments. The invention extends to any novel feature or any novel combination of features disclosed in this specification, and to any novel method or process step or any novel combination of steps disclosed.

Claims (8)

1. An image processing method based on a dual-camera terminal is characterized by comprising the following steps:
collecting an image: respectively acquiring a first image and a second image through two cameras of a double-camera terminal;
obtaining an initial depth map: performing stereo matching on the first image and the second image to obtain an initial depth map between the first image and the second image;
image splicing: projecting the first image and the second image into a preset empty image, and assigning pixels of the first image and the second image into the empty image to obtain a spliced image;
and (3) expanding the depth map: generating a null depth map with the same size as the spliced image according to the size of the spliced image, performing matched expansion on the initial depth map, projecting the depth information of the initial depth map into the null depth map, and supplementing the depth value of an expanded area to obtain an expanded depth map;
blurring an image: blurring the spliced image according to the depth information of the expanded depth map;
the stereo matching comprises sparse matching and dense matching, and the step of obtaining the initial depth map comprises the following substeps: s201, sparse matching: respectively extracting characteristic points of the first image and the second image, performing sparse matching on the characteristic points of the first image and the characteristic points of the second image, taking the characteristic points successfully subjected to sparse matching in the first image as first sparse matching homonymy points, taking the characteristic points successfully subjected to sparse matching in the second image as second sparse matching homonymy points, and forming sparse matching homonymy point pairs by the mutually matched first sparse matching homonymy points and the second sparse matching homonymy points; s202, dense matching: performing dense matching on pixel points of the first image and the second image according to the sparse matching homonym point pair, taking the pixel points which are successfully densely matched in the first image as first dense matching homonym points, taking the pixel points which are successfully densely matched in the second image as second dense matching homonym points, and forming dense matching homonym point pairs by the first dense matching homonym points and the second dense matching homonym points which are mutually matched; s203, calculating an initial depth map: and calculating depth information contained between the first image and the second image according to the dense matching homonymous point pairs to obtain an initial depth map.
2. The image processing method based on the dual-camera terminal as claimed in claim 1, wherein the sub-step S202 comprises the sub-steps of: s2021: triangulating the first image by taking the first sparse matching homonymy point as a supporting point to obtain a plurality of first image triangles; triangulating the second image by taking the second sparse matching homonymy point as a supporting point to obtain a plurality of second image triangles; s2022: estimating image parallax according to the sparse matching relationship between the first image triangle and the second image triangle to obtain an estimated parallax map; s2023: and matching the pixel points in the first image with the pixel points in the second image by using the estimated disparity map by using the first image as a reference image, wherein the pixel points successfully matched with each other are used as a first dense matching homonym point and a second dense matching homonym point.
3. The image processing method based on the dual-camera terminal as claimed in claim 1, wherein the step of obtaining the initial depth map further comprises the sub-steps of: s204, optimizing the initial depth map: and taking the first image as a reference image, and optimizing the initial depth map by using the edge information of the reference image.
4. The image processing method based on the dual-camera terminal as claimed in claim 1, wherein the image stitching step comprises the sub-steps of: S301, establishing a blank image, wherein the length of the blank image I is the sum of the lengths of the first image and the second image, and the width of the blank image I is the width of the first image; S302, projecting the pixel point coordinate information of the first image into the empty image by using a first initial projective transformation matrix, and projecting the pixel point coordinate information of the second image into the empty image by using a second initial projective transformation matrix, to obtain a first pixel point coordinate image; S303, according to the coordinate information of the mutually matched first dense matching homonymous points and second dense matching homonymous points in the pixel point coordinates, optimizing and calibrating the first initial projective transformation matrix and the second initial projective transformation matrix so that the first dense matching homonymous points and the second dense matching homonymous points coincide and/or the distance between them falls within a preset error range, obtaining a first projective transformation matrix H_l and a second projective transformation matrix H_r; S304, re-projecting the pixel point coordinate information of the first image into the empty image through the first projective transformation matrix H_l, and at the same time, according to the coordinate information of the first dense matching homonymous points in the empty image, projecting the pixel point coordinate information of the second image into the empty image through the second projective transformation matrix H_r, to obtain a second pixel point coordinate image; S305, according to the pixel point coordinate information of the first dense matching homonymous points, matching and projecting into the empty image the pixel point coordinate information of those pixel points in the first image for which stereo matching did not succeed, and according to the pixel point coordinate information of the second dense matching homonymous points, matching and projecting into the empty image the pixel point coordinate information of those pixel points in the second image for which stereo matching did not succeed; S306, assigning the pixel point values of the first image and the second image to the corresponding positions of the empty image according to the pixel point coordinate information of each pixel point of the first image and the second image in the empty image, to obtain a spliced image.
5. The image processing method based on the dual-camera terminal as claimed in claim 1, wherein the depth map expanding step comprises the sub-steps of: S401: generating a null depth map with the same size as the spliced image, and projecting the depth information of the initial depth map into the null depth map to obtain a second depth map; S402: if the depth information of a certain pixel point p in the second depth map is empty, assigning to the pixel point p the depth information of the pixel point q that is closest to p and whose depth information is not empty, to obtain the expanded depth map.
6. The image processing method based on the dual-camera terminal as claimed in claim 1, wherein the image blurring step comprises the sub-steps of: S501, collecting a target depth value Z; S502, layering the stitched image according to a preset lower limit value Th1 of the focusing-layer depth, a preset upper limit value Th2 of the focusing-layer depth and the depth information of the expanded depth map, taking the pixel points in the stitched image whose corresponding depth value ∈ [Z - Th1, Z + Th2] as the focusing layer and the remaining pixel points as the out-of-focus layer; S503, performing contrast equalization processing on the focusing layer; S504, blurring the out-of-focus layer by adaptively assigning different blurring coefficients to areas with different depth values.
7. A double-camera terminal is characterized in that the terminal comprises a first image collector, a second image collector, a memory, a processor and a computer program which is stored on the memory and can run on the processor; the first image collector is used for collecting a first image; the second image collector is used for collecting a second image; the processor implements the following steps when executing the program: respectively acquiring a first image and a second image through two cameras of a double-camera terminal; performing stereo matching on the first image and the second image to obtain an initial depth map between the first image and the second image; projecting the first image and the second image into a preset empty image, and assigning pixels of the first image and the second image into the empty image to obtain a spliced image; generating a null depth map with the same size as the spliced image according to the size of the spliced image, performing matched expansion on the initial depth map, projecting the depth information of the initial depth map into the null depth map, and supplementing the depth value of an expanded area to obtain an expanded depth map; blurring the spliced image according to the depth information of the expanded depth map;
the stereo matching comprises sparse matching and dense matching, and the step of obtaining the initial depth map comprises the following substeps: sparse matching: respectively extracting characteristic points of the first image and the second image, performing sparse matching on the characteristic points of the first image and the characteristic points of the second image, taking the characteristic points successfully subjected to sparse matching in the first image as first sparse matching homonymy points, taking the characteristic points successfully subjected to sparse matching in the second image as second sparse matching homonymy points, and forming sparse matching homonymy point pairs by the mutually matched first sparse matching homonymy points and the second sparse matching homonymy points; dense matching: performing dense matching on pixel points of the first image and the second image according to the sparse matching homonym point pair, taking the pixel points which are successfully densely matched in the first image as first dense matching homonym points, taking the pixel points which are successfully densely matched in the second image as second dense matching homonym points, and forming dense matching homonym point pairs by the first dense matching homonym points and the second dense matching homonym points which are mutually matched; calculating an initial depth map: and calculating depth information contained between the first image and the second image according to the dense matching homonymous point pairs to obtain an initial depth map.
8. An image processing system based on a dual-camera terminal, characterized in that the system comprises:
the first image acquisition module is used for acquiring a first image;
the second image acquisition module is used for acquiring a second image;
the depth extraction module is used for carrying out stereo matching on the first image and the second image to obtain an initial depth map between the first image and the second image;
the image splicing module is used for projecting the first image and the second image into a preset empty image and assigning pixels of the first image and the second image into the empty image to obtain a spliced image;
the depth expansion module generates a null depth map with the same size as the spliced image according to the size of the spliced image, performs matching expansion on the initial depth map, projects the depth information of the initial depth map into the null depth map, and supplements the depth value of the expanded area to obtain an expanded depth map;
the image blurring module is used for blurring the spliced image according to the depth information of the expanded depth map;
the depth extraction module comprises a sparse matching module, a dense matching module and a depth map generation module;
the sparse matching module is used for performing sparse matching on the feature points of the first image and the feature points of the second image, the feature points which are successfully subjected to sparse matching in the first image are used as first sparse matching homonymous points, the feature points which are successfully subjected to sparse matching in the second image are used as second sparse matching homonymous points, and the first sparse matching homonymous points and the second sparse matching homonymous points which are matched with each other form sparse matching homonymous point pairs;
the dense matching module is used for performing dense matching on pixel points of the first image and pixel points of the second image according to the sparse matching homonym point pairs, taking the pixel points which are successfully densely matched in the first image as first dense matching homonym points, taking the pixel points which are successfully densely matched in the second image as second dense matching homonym points, and forming dense matching homonym point pairs by the first dense matching homonym points and the second dense matching homonym points which are mutually matched;
and the depth map generation module is used for calculating depth information contained between the first image and the second image according to the dense matching homonymous point pairs to obtain an initial depth map.
CN201710161892.7A 2017-03-17 2017-03-17 Double-camera terminal and image processing method and system thereof Active CN106952247B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710161892.7A CN106952247B (en) 2017-03-17 2017-03-17 Double-camera terminal and image processing method and system thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710161892.7A CN106952247B (en) 2017-03-17 2017-03-17 Double-camera terminal and image processing method and system thereof

Publications (2)

Publication Number Publication Date
CN106952247A CN106952247A (en) 2017-07-14
CN106952247B (en) 2020-06-23

Family

ID=59473615

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710161892.7A Active CN106952247B (en) 2017-03-17 2017-03-17 Double-camera terminal and image processing method and system thereof

Country Status (1)

Country Link
CN (1) CN106952247B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109274785B (en) * 2017-07-17 2021-04-16 中兴通讯股份有限公司 Information processing method and mobile terminal equipment
CN107623806B (en) * 2017-08-02 2020-04-14 中控智慧科技股份有限公司 Image processing method and related product
CN109087271A (en) * 2018-09-28 2018-12-25 珠海格力电器股份有限公司 method, system and mobile phone for realizing video blurring
CN114946170B (en) * 2019-12-19 2024-04-19 Oppo广东移动通信有限公司 Method for generating image and electronic equipment
CN111324267B (en) * 2020-02-18 2021-06-22 Oppo(重庆)智能科技有限公司 Image display method and related device
CN112001848B (en) * 2020-09-07 2022-04-26 鹏祥智慧保安有限公司 Image identification splicing method and system in big data monitoring system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101499166A (en) * 2009-03-16 2009-08-05 北京中星微电子有限公司 Image splicing method and apparatus
CN104463775A (en) * 2014-10-31 2015-03-25 小米科技有限责任公司 Device and method for achieving depth-of-field effect of image
CN104331872A (en) * 2014-11-26 2015-02-04 中测新图(北京)遥感技术有限责任公司 Image splicing method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Construction of a 3D Depth Map from Binocular Stereo; Wojciech Mokrzycki; ResearchGate; 1994-02-28; pp. 1-15 *
An improved sparse matching algorithm for stereo image matching (一种用于立体图像匹配的改进稀疏匹配算法); 陈佳坤 et al.; Computer Technology and Development (《计算机技术与发展》); 2011-10; vol. 21, no. 10; pp. 64-65 *

Also Published As

Publication number Publication date
CN106952247A (en) 2017-07-14

Similar Documents

Publication Publication Date Title
CN106952247B (en) Double-camera terminal and image processing method and system thereof
CN110135455B (en) Image matching method, device and computer readable storage medium
CN110363858B (en) Three-dimensional face reconstruction method and system
CN106228507B (en) A kind of depth image processing method based on light field
CN110176032B (en) Three-dimensional reconstruction method and device
CN107025660B (en) Method and device for determining image parallax of binocular dynamic vision sensor
CN112116639B (en) Image registration method and device, electronic equipment and storage medium
CN106033621B (en) A kind of method and device of three-dimensional modeling
CN107274483A (en) A kind of object dimensional model building method
KR102415505B1 (en) Method and apparatus for matching stereo images
CN105374019A (en) A multi-depth image fusion method and device
CN103440653A (en) Binocular vision stereo matching method
CN114693760A (en) Image correction method, device and system and electronic equipment
CN107170008A (en) A kind of depth map creation method, system and image weakening method, system
CN103824303A (en) Image perspective distortion adjusting method and device based on position and direction of photographed object
CN115035235A (en) Three-dimensional reconstruction method and device
CN112150518B (en) Attention mechanism-based image stereo matching method and binocular device
CN110443228B (en) Pedestrian matching method and device, electronic equipment and storage medium
CN115457176A (en) Image generation method and device, electronic equipment and storage medium
CN117132737B (en) Three-dimensional building model construction method, system and equipment
CN113034666B (en) Stereo matching method based on pyramid parallax optimization cost calculation
CN117152330B (en) Point cloud 3D model mapping method and device based on deep learning
US11475629B2 (en) Method for 3D reconstruction of an object
CN107240149A (en) Object dimensional model building method based on image procossing
CN117058183A (en) Image processing method and device based on double cameras, electronic equipment and storage medium

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant