CN106952247A - Dual-camera terminal and image processing method and system therefor - Google Patents
- Publication number
- CN106952247A CN106952247A CN201710161892.7A CN201710161892A CN106952247A CN 106952247 A CN106952247 A CN 106952247A CN 201710161892 A CN201710161892 A CN 201710161892A CN 106952247 A CN106952247 A CN 106952247A
- Authority
- CN
- China
- Prior art keywords
- image
- matching
- pixel
- corresponding point
- depth
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/45—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/95—Computational photography systems, e.g. light-field imaging systems
- H04N23/951—Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
Abstract
The present invention relates to the technical field of image processing and proposes a dual-camera terminal together with an image processing method and system for it. The invention collects a first image and a second image through the two cameras of the dual-camera terminal respectively; performs stereo matching on the first image and the second image to obtain an initial depth map between the first image and the second image; projects the first image and the second image into a preset empty image and assigns the pixel values of the first image and the second image into the empty image to obtain a stitched image; expands the initial depth map to match the size of the stitched image and supplements the depth values of the expanded region to obtain an expanded depth map; and, according to the depth information of the expanded depth map, performs blurring processing on the stitched image. The invention balances the image blurring effect against the amount of computation, outputs the complete visual information within the cameras' field of view, and makes full use of the functions and capabilities of the two cameras.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to a dual-camera terminal and an image processing method and system for it.
Background art
With the development of computer vision technology, dual-camera mobile terminals have become popular. At this stage, dual-camera devices on mobile terminals primarily serve users' photography needs. However, because imaging equipment is expensive, and because the volume and weight of such equipment conflict with the market's demands on the size of mobile terminals, producing a mobile terminal whose imaging quality rivals that of a professional camera is extremely difficult: first, the cost is high; second, the manufacturing process is limited. In order to obtain better photographs at lower cost, digital image processing techniques are used to imitate the shooting effects of professional cameras, applying operations such as enhancement, filtering, and blurring to the captured images.
Several digital image blurring methods have been proposed, such as depth-of-field rendering based on object space and depth-of-field rendering based on image space. Object-space methods operate on a three-dimensional scene representation and compute the depth-of-field effect directly in the rendering pipeline. Image-space methods, also called post-processing methods, operate on images, using the information in a scene depth map to blur the sharp scene image.
However, these methods are cumbersome and computationally intensive. Moreover, with a dual camera they process only one of the two images, and the final output is likewise the image captured by a single camera, so the output image cannot present the cameras' full practical field of view. The capabilities of the dual camera are not exploited, and resources are wasted to some extent.
Summary of the invention
To solve the above technical problems, the present invention provides a dual-camera terminal, and also proposes an image processing method and system based on the dual-camera terminal. The invention balances the image blurring effect against the amount of computation, outputs the complete visual information within the cameras' field of view, and makes full use of the functions and capabilities of the two cameras.
Scheme one
The image processing method based on a dual-camera terminal provided by the present invention mainly includes the following steps:
Image collection: collect a first image and a second image through the two cameras of the dual-camera terminal respectively;
Initial depth map acquisition: perform stereo matching on the first image and the second image to obtain the initial depth map between the first image and the second image;
Image stitching: project the first image and the second image into a preset empty image, and assign the pixel values of the first image and the second image into the empty image to obtain a stitched image;
Depth map expansion: expand the initial depth map to match the size of the stitched image, and supplement the depth values of the expanded region to obtain an expanded depth map;
Image blurring: perform blurring processing on the stitched image according to the depth information of the expanded depth map.
Scheme two
The present invention also provides a dual-camera terminal. The terminal includes a first image collector, a second image collector, a memory, a processor, and a computer program stored in the memory and executable on the processor.
The first image collector is used to collect the first image.
The second image collector is used to collect the second image.
When the processor executes the program, the following steps are carried out: collect a first image and a second image through the two cameras of the dual-camera terminal respectively;
perform stereo matching on the first image and the second image to obtain the initial depth map between the first image and the second image;
project the first image and the second image into a preset empty image, and assign the pixel values of the first image and the second image into the empty image to obtain a stitched image;
expand the initial depth map to match the size of the stitched image, and supplement the depth values of the expanded region to obtain an expanded depth map;
perform blurring processing on the stitched image according to the depth information of the expanded depth map.
Scheme three
The present invention also proposes an image processing system based on a dual-camera terminal. The system includes:
a first image collection module, for collecting the first image;
a second image collection module, for collecting the second image;
a depth extraction module, for performing stereo matching on the first image and the second image to obtain the initial depth map between the first image and the second image;
an image stitching module, for projecting the first image and the second image into a preset empty image, and assigning the pixel values of the first image and the second image into the empty image to obtain a stitched image;
a depth expansion module, for expanding the initial depth map to match the size of the stitched image, and supplementing the depth values of the expanded region to obtain an expanded depth map;
an image blurring module, for performing blurring processing on the stitched image according to the depth information of the expanded depth map.
The beneficial effects of the invention are as follows:
1. A semi-dense stereo matching algorithm is used, which yields a depth map of high accuracy while also providing good stitching precision for image stitching.
2. The depth map acquisition and the image stitching are carried out simultaneously, greatly reducing run time and saving memory.
3. The image is layered before blurring, and the blurring parameters are modified adaptively, giving a good blurring effect with a small amount of computation.
4. The stitched image output by the present invention carries the information of the binocular cameras' entire field of view, making full use of the functions and capabilities of the two cameras.
5. Expanding the depth map according to the stitched image yields the depth information of the cameras' whole field of view.
Brief description of the drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only certain embodiments of the present invention and are therefore not to be construed as limiting its scope. For those of ordinary skill in the art, other related drawings can be obtained from these drawings without creative work.
Fig. 1 is a flow chart of the present invention.
Detailed description of the embodiments
To make the purpose, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely in conjunction with the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the scope of protection of the present invention.
Scheme one: An image processing method based on a dual-camera terminal
As shown in Fig. 1, the invention discloses an image processing method applied to a mobile terminal with dual cameras, mainly including the following steps:
S100, image collection: collect a first image and a second image through the two cameras of the dual-camera terminal respectively;
S200, initial depth map acquisition: perform stereo matching on the first image and the second image to obtain the initial depth map between the first image and the second image;
S300, image stitching: project the first image and the second image into a preset empty image, and assign the pixel values of the first image and the second image into the empty image to obtain a stitched image;
S400, depth map expansion: expand the initial depth map to match the size of the stitched image, and supplement the depth values of the expanded region to obtain an expanded depth map;
S500, image blurring: perform blurring processing on the stitched image according to the depth information of the expanded depth map.
In the present invention, the execution order of the initial depth map acquisition step S200 and the image stitching step S300 can be chosen according to actual requirements. For example, after the image collection step S100 is performed, step S200 may be carried out first and step S300 afterwards; alternatively, step S300 may be carried out first and step S200 afterwards; of course, steps S200 and S300 may also be carried out simultaneously after step S100.
Further, the image collection step S100 of the present invention may also include an image preprocessing step, in which preprocessing such as enhancement and denoising is applied to the first image and the second image respectively.
Further, the stereo matching described in the present invention includes sparse matching and dense matching, and the initial depth map acquisition step S200 includes the following sub-steps:
S201, sparse matching: extract the feature points of the first image and the second image respectively, and perform sparse matching between the feature points of the first image and those of the second image. A successfully matched sparse feature point in the first image is taken as a first sparse corresponding point, a successfully matched sparse feature point in the second image is taken as a second sparse corresponding point, and a mutually matched first sparse corresponding point and second sparse corresponding point form a sparse corresponding point pair.
S202, dense matching: perform dense matching between the pixels of the first image and the second image according to the sparse corresponding point pairs. A successfully dense-matched pixel in the first image is taken as a first dense corresponding point, a successfully dense-matched pixel in the second image is taken as a second dense corresponding point, and a mutually matched first dense corresponding point and second dense corresponding point form a dense corresponding point pair.
S203, initial depth map calculation: calculate the depth information contained between the first image and the second image from the dense corresponding point pairs to obtain the initial depth map.
When the feature points of the first image and the second image are extracted in step S201, the SIFT algorithm (a scale-invariant feature point extraction algorithm) is applied to the first image and the second image respectively. A Gaussian filter may first be applied to the first/second image several times in succession to build the first scale group. The first/second image is then reduced to half its original size and the same Gaussian filtering is applied to form the second scale group; this is repeated until the first/second image falls below a given size threshold. Next, differences between adjacent Gaussian images in each scale group are computed to form difference-of-Gaussian groups. The local extrema of these first/second-image difference-of-Gaussian images over the scale-space domain are then taken as the feature points of the first/second image.
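The scale-group and difference-of-Gaussian construction described above can be sketched in Python. This is a minimal single-octave illustration under assumed sigma values and threshold; a full SIFT implementation adds more octaves, sub-pixel refinement, orientation, and descriptors:

```python
import numpy as np
from scipy import ndimage

def dog_keypoints(img, sigmas=(1.0, 1.6, 2.6, 4.2), thresh=5.0):
    """Single-octave difference-of-Gaussians feature point detector:
    build one scale group by repeated Gaussian filtering, difference
    adjacent images, and keep local extrema over the scale-space domain."""
    img = img.astype(np.float64)
    blurred = np.stack([ndimage.gaussian_filter(img, s) for s in sigmas])
    dog = blurred[1:] - blurred[:-1]             # difference-of-Gaussian group
    keypoints = []
    for k in range(1, dog.shape[0] - 1):         # interior scale layers only
        for y in range(1, img.shape[0] - 1):
            for x in range(1, img.shape[1] - 1):
                patch = dog[k-1:k+2, y-1:y+2, x-1:x+2]
                v = dog[k, y, x]
                if abs(v) > thresh and (v == patch.max() or v == patch.min()):
                    keypoints.append((x, y, sigmas[k]))
    return keypoints
```

A Gaussian blob a few pixels wide, for instance, yields an extremum near its centre at the matching middle scale.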
When sparse matching is performed between the feature points of the first image and those of the second image, the matching quality between two feature points can be characterized by the Euclidean distance between them. That is, for a feature point p_i in the first image, find the two feature point descriptors q'_i and q''_i in the second image that are nearest and second nearest to it in Euclidean distance, and compute the ratio r of the Euclidean distance between q'_i and p_i to that between q''_i and p_i. If the ratio r is less than a given threshold, this pair of feature points is regarded as successfully matched; otherwise the match fails, and the failed points are no longer treated as feature points. Two successfully matched feature points form a sparse corresponding point pair, its two members being the first sparse corresponding point and the second sparse corresponding point respectively.
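The nearest/second-nearest ratio test described above can be written compactly. The sketch assumes descriptors are rows of NumPy arrays, and the threshold of 0.8 is an illustrative choice:

```python
import numpy as np

def ratio_test_matches(desc1, desc2, ratio=0.8):
    """Sparse matching by the distance-ratio test: accept a pair only
    when the nearest descriptor in the second image is much closer
    than the second-nearest one."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)  # Euclidean distances
        j1, j2 = np.argsort(dists)[:2]             # nearest, second nearest
        if dists[j1] < ratio * dists[j2]:          # ratio r below threshold
            matches.append((i, j1))                # corresponding point pair
    return matches
```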
Further, dense matching of the first image and the second image may be accomplished as follows:
S2021: Using the first sparse corresponding points as support points, triangulate the first image to obtain several first-image triangles; using the second sparse corresponding points as support points, triangulate the second image to obtain several second-image triangles. Delaunay triangulation is preferred.
S2022: Estimate the image disparity according to the sparse matching relationship between the first-image triangles and the second-image triangles, obtaining an estimated disparity map.
For example, when Delaunay triangulation is used, the estimated disparity d_p of any pixel p inside a Delaunay triangle is obtained from the fitting parameters of the Delaunay triangle's plane and the coordinates of the pixel, as in the following expression:
d_p = a·u_p + b·v_p + c
where a, b, and c are parameters whose values are obtained by fitting the plane of the Delaunay triangle containing the pixel, and (u_p, v_p) are the coordinates of pixel p in the image.
S2023: Taking the first image as the reference image, match the pixels in the first image with the pixels in the second image using the estimated disparity map; mutually matched pixels become the first dense corresponding point and the second dense corresponding point.
Because sparse matching cannot produce a dense disparity map, it cannot provide the precise image registration needed for image stitching. Therefore, dense matching of the first image and the second image is required. The present invention builds Delaunay triangles from the already matched points and estimates the disparity of all remaining unmatched points from the disparities over the Delaunay triangles, thereby completing the matching.
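The disparity estimation over Delaunay triangles described above can be sketched as follows, assuming SciPy's `Delaunay` triangulation; the plane d = a·u + b·v + c is fitted through the three support points of each containing triangle exactly as in the expression of step S2022:

```python
import numpy as np
from scipy.spatial import Delaunay

def estimate_disparity(support_pts, support_disp, query_pts):
    """Estimate the disparity of unmatched pixels from a Delaunay
    triangulation of the sparse corresponding points: for each query
    pixel, fit the plane d = a*u + b*v + c through the containing
    triangle's three support points and evaluate it at the pixel."""
    tri = Delaunay(support_pts)
    simplex = tri.find_simplex(query_pts)
    est = np.full(len(query_pts), np.nan)
    for i, s in enumerate(simplex):
        if s == -1:
            continue  # query pixel lies outside the triangulation
        verts = tri.simplices[s]
        A = np.column_stack([support_pts[verts], np.ones(3)])  # rows [u, v, 1]
        a, b, c = np.linalg.solve(A, support_disp[verts])      # plane parameters
        est[i] = a * query_pts[i, 0] + b * query_pts[i, 1] + c
    return est
```

When the true disparity field is itself planar over a triangle, the estimate is exact at every interior pixel.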
When the initial depth map is calculated, the depth information contained between the first image and the second image is computed from the dense corresponding point pairs. Specifically: from the first dense corresponding point x_l and the second dense corresponding point x_r, the disparity is obtained by the formula D = |x_l − x_r|; the initial depth map is then obtained from the depth computation formula Z = B·f / D, where B is the camera baseline length, f is the camera principal distance (focal length), and the subscripts l and r denote the first image and the second image respectively.
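The disparity and depth formulas above amount to a one-line computation per corresponding point pair; the sketch below assumes the x-coordinates are in pixels and the baseline and focal length are in consistent units:

```python
def depth_from_disparity(x_left, x_right, baseline, focal_length):
    """Depth from a dense corresponding point pair: D = |x_l - x_r|,
    Z = B * f / D, as in the formulas above."""
    D = abs(x_left - x_right)
    if D == 0:
        return float("inf")  # zero disparity: the point is at infinity
    return baseline * focal_length / D
```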
Further, the image stitching of the present invention mainly includes the following sub-steps:
S301: Create an empty image I whose length is the sum of the lengths of the first image and the second image, and whose width is the width of the first image.
S302: Project the pixel coordinate information of the first image into the empty image using a first initial projection transformation matrix, and project the pixel coordinate information of the second image into the empty image using a second initial projection transformation matrix, obtaining the first pixel image coordinates.
S303: According to the coordinate information of the mutually matched first dense corresponding points and second dense corresponding points, optimize and calibrate the first initial projection transformation matrix and the second initial projection transformation matrix so that each first dense corresponding point and second dense corresponding point coincide and/or the spacing between them falls within a preset error range, obtaining the first projection transformation matrix H_l and the second projection transformation matrix H_r.
S304: Project the pixel coordinate information of the first image into the empty image again through the first projection transformation matrix H_l, recording the coordinate information of the first dense corresponding points in the empty image; at the same time, project the pixel coordinate information of the second image into the empty image through the second projection transformation matrix H_r, obtaining the second pixel image coordinates.
S305: According to the pixel coordinate information of the first dense corresponding points, project by matching the coordinate information of the pixels in the first image that were not successfully stereo-matched into the empty image; according to the pixel coordinate information of the second dense corresponding points, project by matching the coordinate information of the pixels in the second image that were not successfully stereo-matched into the empty image.
S306: According to the pixel coordinate information of each pixel of the first image and the second image in the empty image, assign the pixel values of the first image and the second image to the corresponding positions in the empty image to obtain the stitched image.
Specifically, when the first projection transformation matrix H_l and the second projection transformation matrix H_r are calculated, the transformation relation between a pixel (x_l, y_l) on the first image and a pixel (x, y) on the stitched image is:
(x, y, 1)^T ∼ H_l · (x_l, y_l, 1)^T
Substituting more than 4 groups of corresponding points into the above formula yields the image transformation matrix H_l. However, in order to obtain an accurate first projection transformation matrix H_l, the present invention repeats the calculation N times on randomly sampled dense corresponding points, obtaining N first projection transformation matrices H_l^i, i = 1, 2, …, N; the average of the N first projection transformation matrices can then be taken as the final first projection transformation matrix, i.e. H̄_l = (1/N)·Σ_{i=1}^{N} H_l^i. Similarly, the second projection transformation matrix H_r between the second image and the stitched image can be obtained.
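The repeated sampling and averaging of projection transformation matrices described above can be sketched as follows. The direct-linear-transform solver, the sample size of 4 pairs, and the trial count are illustrative assumptions; element-wise averaging is meaningful here because each estimate is normalized to h33 = 1:

```python
import numpy as np

def homography_from_pairs(src, dst):
    """Solve the projective relation dst ~ H * src (with h33 = 1) from
    at least 4 corresponding point pairs by linearizing it (DLT)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

def averaged_homography(src, dst, n_trials=20, sample=4, seed=None):
    """Estimate H on N random samples of corresponding point pairs and
    take the element-wise average as the final transformation matrix."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_trials):
        idx = rng.choice(len(src), size=sample, replace=False)
        estimates.append(homography_from_pairs(src[idx], dst[idx]))
    return np.mean(estimates, axis=0)
```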
Because the depth map obtained above corresponds only to the image captured by one camera, it cannot present the depth information of the cameras' entire field of view. Moreover, a depth map can only present the information of points that can be matched between the left and right images; points that can never be matched, in particular some edge points, have no depth information on the depth map. In order to obtain a depth map that corresponds to the stitched image, the present invention uses a depth map expansion method to obtain an estimated depth map capable of presenting the depth of the cameras' whole field of view.
Further, the depth map expansion step S400 mainly includes the following sub-steps:
S401: Generate an empty depth map of the same size as the stitched image, and project the depth information of the initial depth map into the empty depth map to obtain a second depth map.
S402: If the depth information of a pixel p in the second depth map is empty, assign to p the depth value of the pixel q that is nearest to p and whose depth information is not empty, obtaining the expanded depth map.
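Sub-step S402, assigning each empty pixel the depth of its nearest non-empty neighbour, can be done in one pass with a Euclidean distance transform; the sketch below assumes empty entries are encoded as NaN:

```python
import numpy as np
from scipy import ndimage

def expand_depth(sparse_depth):
    """Fill every empty entry of the projected depth map with the depth
    value of the nearest pixel that has one (sub-steps S401-S402).
    Empty entries are assumed to be encoded as NaN."""
    empty = np.isnan(sparse_depth)
    # For each position, the indices of the nearest non-empty pixel.
    nearest = ndimage.distance_transform_edt(
        empty, return_distances=False, return_indices=True)
    return sparse_depth[tuple(nearest)]
```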
Further, the image blurring step of the present invention mainly includes the following sub-steps:
S501: Collect the target depth value Z.
S502: According to the depth value Z, the depth information of the expanded depth map, a preset focus layer lower depth limit Th1, and a preset focus layer upper depth limit Th2, apply layered processing to the stitched image, where Th1 = Z − a and Th2 = Z − b, with a and b two constant coefficients. Pixels of the stitched image whose corresponding depth value Z_i ∈ [Th1, Th2] form the focus layer; the remaining pixels form the defocus layer.
S503: Apply contrast equalization processing to the focus layer.
S504: For regions of different depth values in the defocus layer, adaptively match different blurring coefficients to perform the blurring processing.
The present invention completes the blurring of the whole stitched image using the blurring coefficient and the image depth values. The blurring expression computes the new pixel value Î_i of pixel i from its original pixel value I_i and the blurring coefficient δ; the blurring coefficient δ is in turn computed from the depth value Z_i of pixel i, the mean of the depth values, and their standard deviation.
The present invention applies equalization processing to the stitched image according to an equalization coefficient α: the new pixel value of a focus layer point i in the stitched image is Î_i = α·I_i, where I_i is the original pixel value of focus layer point i in the stitched image.
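The layered blurring of steps S502 to S504 can be sketched as follows. This is a minimal illustration: the layer count, the sigma scaling, and the default coefficients a and b are assumptions, and a simple distance-based blur strength stands in for the depth-statistics-based coefficient δ described above:

```python
import numpy as np
from scipy import ndimage

def layered_blur(image, depth, Z, a=1.0, b=-1.0, n_layers=4, max_sigma=6.0):
    """Split the stitched image into a focus layer (depth in [Z-a, Z-b])
    and defocus layers, blurring each defocus layer more strongly the
    farther its depth lies from the focus interval."""
    th1, th2 = Z - a, Z - b                      # focus layer depth limits
    in_focus = (depth >= th1) & (depth <= th2)
    # Distance of each pixel's depth from the focus interval.
    dist = np.where(depth < th1, th1 - depth,
                    np.where(depth > th2, depth - th2, 0.0))
    out = image.astype(np.float64).copy()
    edges = np.linspace(0.0, dist.max() + 1e-9, n_layers + 1)
    for k in range(n_layers):
        mask = ~in_focus & (dist >= edges[k]) & (dist < edges[k + 1])
        if not mask.any():
            continue
        sigma = max_sigma * (k + 1) / n_layers   # deeper layer, stronger blur
        blurred = ndimage.gaussian_filter(image.astype(np.float64), sigma)
        out[mask] = blurred[mask]
    return out
```

Focus layer pixels keep their original values; each defocus layer is replaced by a Gaussian-blurred copy whose sigma grows with the layer index.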
Scheme two: A dual-camera terminal
The present invention also proposes a dual-camera terminal. The terminal includes a first image collector, a second image collector, a memory, a processor, and a computer program stored in the memory and executable on the processor.
The first image collector is used to collect the first image.
The second image collector is used to collect the second image.
When the processor executes the program, the following steps are carried out: collect a first image and a second image through the two cameras of the dual-camera terminal respectively;
perform stereo matching on the first image and the second image to obtain the initial depth map between the first image and the second image;
project the first image and the second image into a preset empty image, and assign the pixel values of the first image and the second image into the empty image to obtain a stitched image;
expand the initial depth map to match the size of the stitched image, and supplement the depth values of the expanded region to obtain an expanded depth map;
perform blurring processing on the stitched image according to the depth information of the expanded depth map.
In particular, when the processor of the dual-camera mobile terminal executes the program, it can implement the steps and sub-steps described in each embodiment of the method of scheme one.
The dual-camera terminal described in the present invention can be a camera, a mobile phone, a tablet, a portable computer, or any other mobile device with dual cameras.
Scheme three: An image processing system based on a dual-camera terminal
An image processing system based on a dual-camera terminal mainly includes a first image collection module, a second image collection module, a depth extraction module, an image stitching module, a depth expansion module, and an image blurring module.
The first image collection module is used to collect the first image.
The second image collection module is used to collect the second image.
The depth extraction module is used to perform stereo matching on the first image and the second image to obtain the initial depth map between the first image and the second image.
The image stitching module projects the first image and the second image into a preset empty image, and assigns the pixel values of the first image and the second image into the empty image to obtain a stitched image.
The depth expansion module expands the initial depth map to match the size of the stitched image, and supplements the depth values of the expanded region to obtain an expanded depth map.
The image blurring module performs blurring processing on the stitched image according to the depth information of the expanded depth map.
Further, the depth extraction module includes a sparse matching module, a dense matching module, and a depth map generation module.
The sparse matching module is used to perform sparse matching between the feature points of the first image and those of the second image, taking a successfully matched sparse feature point in the first image as a first sparse corresponding point and a successfully matched sparse feature point in the second image as a second sparse corresponding point, a mutually matched first sparse corresponding point and second sparse corresponding point forming a sparse corresponding point pair.
The dense matching module is used to perform dense matching between the pixels of the first image and the second image according to the sparse corresponding point pairs, taking a successfully dense-matched pixel in the first image as a first dense corresponding point and a successfully dense-matched pixel in the second image as a second dense corresponding point, a mutually matched first dense corresponding point and second dense corresponding point forming a dense corresponding point pair.
The depth map generation module is used to calculate the depth information contained between the first image and the second image from the dense corresponding point pairs to obtain the initial depth map.
In particular, the first image collection module described in the present invention can also be used to perform step S100 and its sub-steps, the second image collection module can also be used to perform step S100 and its sub-steps, the depth extraction module can also be used to perform step S200 and its sub-steps, the image stitching module can also be used to perform step S300 and its sub-steps, the depth expansion module can also be used to perform step S400 and its sub-steps, and the image blurring module can also be used to perform step S500 and its sub-steps.
The invention is not limited to the foregoing embodiments. The present invention extends to any new feature or any new combination disclosed in this specification, and to the steps of any new method or process or any new combination disclosed.
Claims (10)
1. An image processing method based on a dual-camera terminal, characterized in that the method comprises the following steps:
image collection: collecting a first image and a second image through the two cameras of the dual-camera terminal respectively;
initial depth map acquisition: performing stereo matching on the first image and the second image to obtain the initial depth map between the first image and the second image;
image stitching: projecting the first image and the second image into a preset empty image, and assigning the pixel values of the first image and the second image into the empty image to obtain a stitched image;
depth map expansion: expanding the initial depth map to match the size of the stitched image, and supplementing the depth values of the expanded region to obtain an expanded depth map;
image blurring: performing blurring processing on the stitched image according to the depth information of the expanded depth map.
2. The image processing method based on a dual-camera terminal according to claim 1, characterized in that the stereo matching comprises sparse matching and dense matching, and the step of obtaining the initial depth map comprises the following sub-steps:
S201, sparse matching: extracting feature points of the first image and of the second image respectively, and performing sparse matching between the feature points of the first image and the feature points of the second image; the successfully matched feature points in the first image serve as first sparse matching homologous points, the successfully matched feature points in the second image serve as second sparse matching homologous points, and each mutually matched first sparse matching homologous point and second sparse matching homologous point form a sparse matching homologous point pair;
S202, dense matching: performing dense matching between the pixels of the first image and the pixels of the second image according to the sparse matching homologous point pairs; the successfully dense-matched pixels in the first image serve as first dense matching homologous points, the successfully dense-matched pixels in the second image serve as second dense matching homologous points, and each mutually matched first dense matching homologous point and second dense matching homologous point form a dense matching homologous point pair;
S203, computing the initial depth map: computing, from the dense matching homologous point pairs, the depth information contained between the first image and the second image, to obtain the initial depth map.
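Sub-step S203 turns matched point pairs into depth. For a rectified stereo pair this reduces to the standard triangulation relation Z = f·B/d. A minimal NumPy sketch under a pinhole-camera assumption, with focal length in pixels and baseline in meters (names and the zero-disparity convention are illustrative, not taken from the patent):

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_m):
    """Convert a disparity map to a depth map via Z = f * B / d.

    Pixels with zero (i.e. unmatched) disparity are left at depth 0.
    """
    depth = np.zeros_like(disparity, dtype=float)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth
```

With a focal length of 800 px and a 0.1 m baseline, a disparity of 4 px maps to a depth of 20 m, and larger disparities map to nearer points.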
3. The image processing method based on a dual-camera terminal according to claim 2, characterized in that sub-step S202 comprises the following sub-steps:
S2021: performing triangulation on the first image with the first sparse matching homologous points as support points, to obtain a number of first image triangles; performing triangulation on the second image with the second sparse matching homologous points as support points, to obtain a number of second image triangles;
S2022: estimating the image disparity according to the sparse matching relationship between the first image triangles and the second image triangles, to obtain an estimated disparity map;
S2023: taking the first image as the reference image and, using the estimated disparity map, matching the pixels of the first image against the pixels of the second image; the mutually matched pixels serve as the first dense matching homologous points and the second dense matching homologous points.
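Sub-step S2023 matches each pixel along its epipolar line, with the triangulation-based disparity estimate narrowing the search. A hedged sketch of the per-pixel search on one rectified scanline, using a sum-of-absolute-differences (SAD) window; in the claimed method the estimate from S2022 would bound `max_disp`, and all names here are illustrative:

```python
import numpy as np

def block_match_row(left_row, right_row, x, win, max_disp):
    """Find the disparity of pixel x on a rectified scanline by SAD.

    Compares a (2*win+1)-wide window around x in the left row against
    candidate windows centered at x - d in the right row, d = 0..max_disp.
    """
    ref = left_row[x - win : x + win + 1].astype(float)
    best_d, best_cost = 0, np.inf
    for d in range(0, min(max_disp, x - win) + 1):
        cand = right_row[x - d - win : x - d + win + 1].astype(float)
        cost = np.abs(ref - cand).sum()
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

A successful match at disparity d pairs pixel x in the first image with pixel x − d in the second, i.e. one dense matching homologous point pair.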
4. The image processing method based on a dual-camera terminal according to claim 2, characterized in that the step of obtaining the initial depth map further comprises the sub-step:
S204, optimizing the initial depth map: taking the first image as the reference image, and optimizing the initial depth map using the edge information of the reference image.
5. The image processing method based on a dual-camera terminal according to claim 1, characterized in that the image stitching step comprises the following sub-steps:
S301, establishing an empty image I, the length of the empty image I being the sum of the lengths of the first image and the second image, and the width of the empty image I being the width of the first image;
S302, projecting the pixel coordinate information of the first image into the empty image using a first initial projective transformation matrix, and projecting the pixel coordinate information of the second image into the empty image using a second initial projective transformation matrix, to obtain first pixel image coordinates;
S303, according to the coordinate information, among the pixel coordinates, of the mutually matched first dense matching homologous points and second dense matching homologous points, optimizing and calibrating the first initial projective transformation matrix and the second initial projective transformation matrix, so that the first dense matching homologous points and the second dense matching homologous points coincide and/or the spacing between them falls within a preset error range, to obtain a first projective transformation matrix Hl and a second projective transformation matrix Hr;
S304, re-projecting the pixel coordinate information of the first image into the empty image through the first projective transformation matrix Hl, while recording the coordinate information of the first dense matching homologous points in the empty image; and projecting the pixel coordinate information of the second image into the empty image through the second projective transformation matrix Hr, to obtain second pixel image coordinates;
S305, according to the pixel coordinate information of the first dense matching homologous points, match-expanding into the empty image the pixel coordinate information of the pixels of the first image for which stereo matching did not succeed; and according to the pixel coordinate information of the second dense matching homologous points, match-expanding into the empty image the pixel coordinate information of the pixels of the second image for which stereo matching did not succeed;
S306, according to the pixel coordinate information of each pixel of the first image and the second image in the empty image, assigning the pixel values of the first image and the second image to the corresponding positions of the empty image, to obtain the stitched image.
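Sub-steps S302–S306 amount to warping each source image into the empty canvas by a projective transform and writing pixel values at the projected coordinates. A simplified NumPy sketch of that assign-by-projection for grayscale images, using nearest-pixel rounding and no blending; the matrices `H_l`, `H_r` and the canvas size are assumed already obtained (as in S303), and the names are illustrative:

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 homography H to an (N, 2) array of pixel coordinates."""
    homo = np.hstack([pts, np.ones((len(pts), 1))])
    proj = homo @ H.T
    return proj[:, :2] / proj[:, 2:3]

def splice(img_l, img_r, H_l, H_r, canvas_h, canvas_w):
    """Assign each source pixel's value at its projected canvas position.

    The right image is written first so left-image pixels win in overlaps.
    """
    canvas = np.zeros((canvas_h, canvas_w), dtype=img_l.dtype)
    for img, H in ((img_r, H_r), (img_l, H_l)):
        ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
        pts = np.stack([xs.ravel(), ys.ravel()], axis=1)
        uv = np.rint(warp_points(H, pts)).astype(int)
        ok = (uv[:, 0] >= 0) & (uv[:, 0] < canvas_w) & \
             (uv[:, 1] >= 0) & (uv[:, 1] < canvas_h)
        canvas[uv[ok, 1], uv[ok, 0]] = img.ravel()[ok]
    return canvas
```

With an identity transform for the left image and a pure horizontal translation for the right image, the two images land side by side in the canvas, matching the S301 geometry (canvas length = sum of image lengths).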
6. The image processing method based on a dual-camera terminal according to claim 1, characterized in that the depth map expansion step comprises the following sub-steps:
S401: generating an empty depth map of the same size as the stitched image, and projecting the depth information of the initial depth map into the empty depth map, to obtain a second depth map;
S402: if the depth information of a pixel p in the second depth map is empty, assigning to pixel p the depth information of the pixel q that is nearest to pixel p and whose depth information is not empty, to obtain the expanded depth map.
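Sub-step S402 fills each empty depth pixel from its nearest valid neighbor. A brute-force NumPy sketch (O(holes × valid points), adequate for illustration; in practice a distance transform would do the same fill efficiently; the empty-value convention is an assumption):

```python
import numpy as np

def expand_depth(depth, empty=0.0):
    """Fill each empty pixel with the depth of the nearest non-empty pixel."""
    filled = depth.copy()
    ys, xs = np.nonzero(depth != empty)  # coordinates of valid pixels
    if len(ys) == 0:
        return filled  # nothing to copy from
    for py, px in zip(*np.nonzero(depth == empty)):
        d2 = (ys - py) ** 2 + (xs - px) ** 2  # squared distances to valid pixels
        k = np.argmin(d2)
        filled[py, px] = depth[ys[k], xs[k]]
    return filled
```

Note the fill reads only from the original valid pixels, so filled values never propagate into other holes.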
7. The image processing method based on a dual-camera terminal according to claim 1, characterized in that the image blurring step comprises the following sub-steps:
S501, acquiring a target depth value Z;
S502, performing layered processing on the stitched image according to a preset focus-layer lower depth limit Th1, a preset focus-layer upper depth limit Th2, and the depth information of the expanded depth map: the pixels of the stitched image whose corresponding depth value ∈ [Z−Th1, Z+Th2] serve as the focus layer, and the remaining pixels serve as the defocus layer;
S503, performing contrast equalization on the focus layer;
S504, for the regions of different depth values in the defocus layer, adaptively matching different blurring coefficients to perform the blurring processing.
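Sub-steps S502 and S504 split the image by depth into a sharp focus layer and a defocus layer blurred with depth-dependent strength. A toy NumPy sketch in which the "blur" is a blend toward the image mean, with a coefficient that grows with a pixel's distance from the focus band; this stands in for the adaptively sized blur kernels of S504, `Z`, `th1`, `th2` follow the claim, and the gain `k` is an illustrative assumption:

```python
import numpy as np

def layered_blur(img, depth, Z, th1, th2, k=0.05):
    """Keep pixels with depth in [Z - th1, Z + th2] sharp; 'blur' the rest.

    alpha grows linearly with the pixel's depth offset from the focus band;
    the blend toward the mean stands in for a variable-radius blur kernel.
    """
    img = img.astype(float)
    mean = img.mean()
    off = np.maximum(0.0, np.maximum((Z - th1) - depth, depth - (Z + th2)))
    alpha = np.clip(k * off, 0.0, 1.0)
    return (1.0 - alpha) * img + alpha * mean
```

Focus-layer pixels (offset 0) pass through untouched, while pixels far outside the band are blended fully, mimicking a strong defocus.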
8. A dual-camera terminal, characterized in that the terminal comprises a first image collector, a second image collector, a memory, a processor, and a computer program stored in the memory and executable on the processor;
the first image collector is configured to acquire a first image;
the second image collector is configured to acquire a second image;
when the processor executes the program, the following steps are carried out: acquiring the first image and the second image respectively through the two cameras of the dual-camera terminal;
performing stereo matching on the first image and the second image to obtain an initial depth map between the first image and the second image;
projecting the first image and the second image into a preset empty image, and assigning the pixel values of the first image and the second image into the empty image to obtain a stitched image;
performing matching expansion on the initial depth map according to the size of the stitched image, and supplementing the depth values of the expanded region to obtain an expanded depth map;
performing blurring processing on the stitched image according to the depth information of the expanded depth map.
9. An image processing system based on a dual-camera terminal, characterized in that the system comprises:
a first image acquisition module, for acquiring a first image;
a second image acquisition module, for acquiring a second image;
a depth extraction module, for performing stereo matching on the first image and the second image to obtain an initial depth map between the first image and the second image;
an image stitching module, for projecting the first image and the second image into a preset empty image and assigning the pixel values of the first image and the second image into the empty image to obtain a stitched image;
a depth expansion module, for performing matching expansion on the initial depth map according to the size of the stitched image and supplementing the depth values of the expanded region to obtain an expanded depth map;
an image blurring module, for performing blurring processing on the stitched image according to the depth information of the expanded depth map.
10. The image processing system based on a dual-camera terminal according to claim 9, characterized in that the depth extraction module comprises a sparse matching module, a dense matching module, and a depth map generation module;
the sparse matching module is configured to perform sparse matching between the feature points of the first image and the feature points of the second image, the successfully matched feature points in the first image serving as first sparse matching homologous points, the successfully matched feature points in the second image serving as second sparse matching homologous points, and each mutually matched first sparse matching homologous point and second sparse matching homologous point forming a sparse matching homologous point pair;
the dense matching module is configured to perform dense matching between the pixels of the first image and the pixels of the second image according to the sparse matching homologous point pairs, the successfully dense-matched pixels in the first image serving as first dense matching homologous points, the successfully dense-matched pixels in the second image serving as second dense matching homologous points, and each mutually matched first dense matching homologous point and second dense matching homologous point forming a dense matching homologous point pair;
the depth map generation module is configured to compute, from the dense matching homologous point pairs, the depth information contained between the first image and the second image, to obtain the initial depth map.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710161892.7A CN106952247B (en) | 2017-03-17 | 2017-03-17 | Double-camera terminal and image processing method and system thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106952247A true CN106952247A (en) | 2017-07-14 |
CN106952247B CN106952247B (en) | 2020-06-23 |
Family
ID=59473615
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710161892.7A Active CN106952247B (en) | 2017-03-17 | 2017-03-17 | Double-camera terminal and image processing method and system thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106952247B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107623806A (en) * | 2017-08-02 | 2018-01-23 | 中控智慧科技股份有限公司 | Image processing method and related product |
CN109087271A (en) * | 2018-09-28 | 2018-12-25 | 珠海格力电器股份有限公司 | Realize method, system and the mobile phone of video recording virtualization |
CN109274785A (en) * | 2017-07-17 | 2019-01-25 | 中兴通讯股份有限公司 | A kind of information processing method and mobile terminal device |
CN111324267A (en) * | 2020-02-18 | 2020-06-23 | Oppo(重庆)智能科技有限公司 | Image display method and related device |
CN112001848A (en) * | 2020-09-07 | 2020-11-27 | 杨仙莲 | Image identification splicing method and system in big data monitoring system |
WO2021120107A1 (en) * | 2019-12-19 | 2021-06-24 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method of generating captured image and electrical device |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101499166A (en) * | 2009-03-16 | 2009-08-05 | 北京中星微电子有限公司 | Image splicing method and apparatus |
CN104331872A (en) * | 2014-11-26 | 2015-02-04 | 中测新图(北京)遥感技术有限责任公司 | Image splicing method |
CN104463775A (en) * | 2014-10-31 | 2015-03-25 | 小米科技有限责任公司 | Device and method for achieving depth-of-field effect of image |
2017-03-17: CN CN201710161892.7A patent CN106952247B (en), active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101499166A (en) * | 2009-03-16 | 2009-08-05 | 北京中星微电子有限公司 | Image splicing method and apparatus |
CN104463775A (en) * | 2014-10-31 | 2015-03-25 | 小米科技有限责任公司 | Device and method for achieving depth-of-field effect of image |
CN104331872A (en) * | 2014-11-26 | 2015-02-04 | 中测新图(北京)遥感技术有限责任公司 | Image splicing method |
Non-Patent Citations (2)
Title |
---|
WOJCIECH MOKRZYCKI: "Construction of a 3D Depth Map from Binocular Stereo", ResearchGate * |
CHEN Jiakun et al.: "An Improved Sparse Matching Algorithm for Stereo Image Matching", Computer Technology and Development * |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109274785A (en) * | 2017-07-17 | 2019-01-25 | 中兴通讯股份有限公司 | A kind of information processing method and mobile terminal device |
CN107623806A (en) * | 2017-08-02 | 2018-01-23 | 中控智慧科技股份有限公司 | Image processing method and related product |
CN107623806B (en) * | 2017-08-02 | 2020-04-14 | 中控智慧科技股份有限公司 | Image processing method and related product |
CN109087271A (en) * | 2018-09-28 | 2018-12-25 | 珠海格力电器股份有限公司 | Realize method, system and the mobile phone of video recording virtualization |
WO2021120107A1 (en) * | 2019-12-19 | 2021-06-24 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method of generating captured image and electrical device |
CN114946170A (en) * | 2019-12-19 | 2022-08-26 | Oppo广东移动通信有限公司 | Method and electronic device for generating image |
CN114946170B (en) * | 2019-12-19 | 2024-04-19 | Oppo广东移动通信有限公司 | Method for generating image and electronic equipment |
CN111324267A (en) * | 2020-02-18 | 2020-06-23 | Oppo(重庆)智能科技有限公司 | Image display method and related device |
CN112001848A (en) * | 2020-09-07 | 2020-11-27 | 杨仙莲 | Image identification splicing method and system in big data monitoring system |
Also Published As
Publication number | Publication date |
---|---|
CN106952247B (en) | 2020-06-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Xian et al. | Space-time neural irradiance fields for free-viewpoint video | |
CN106952247A (en) | A kind of dual camera terminal and its image processing method and system | |
WO2021077720A1 (en) | Method, apparatus, and system for acquiring three-dimensional model of object, and electronic device | |
TWI554936B (en) | Image processing device, image processing method and computer product program | |
CN108876836B (en) | Depth estimation method, device and system and computer readable storage medium | |
CN108510535A (en) | A kind of high quality depth estimation method based on depth prediction and enhancing sub-network | |
CN104616247B (en) | A kind of method for map splicing of being taken photo by plane based on super-pixel SIFT | |
CN108053373A (en) | One kind is based on deep learning model fisheye image correcting method | |
WO2024007478A1 (en) | Three-dimensional human body modeling data collection and reconstruction method and system based on single mobile phone | |
CN108663026A (en) | A kind of vibration measurement method | |
CN114782864B (en) | Information processing method, device, computer equipment and storage medium | |
CN106709862B (en) | A kind of image processing method and device | |
CN114996814A (en) | Furniture design system based on deep learning and three-dimensional reconstruction | |
CN115222889A (en) | 3D reconstruction method and device based on multi-view image and related equipment | |
CN107155100A (en) | A kind of solid matching method and device based on image | |
CN113034666B (en) | Stereo matching method based on pyramid parallax optimization cost calculation | |
CN117132737B (en) | Three-dimensional building model construction method, system and equipment | |
CN110012236A (en) | A kind of information processing method, device, equipment and computer storage medium | |
EP3906530B1 (en) | Method for 3d reconstruction of an object | |
CN117058183A (en) | Image processing method and device based on double cameras, electronic equipment and storage medium | |
Gava et al. | Dense scene reconstruction from spherical light fields | |
CN110147809A (en) | Image processing method and device, storage medium and vision facilities | |
CN112991207B (en) | Panoramic depth estimation method and device, terminal equipment and storage medium | |
CN111524075A (en) | Depth image filtering method, image synthesis method, device, equipment and medium | |
CN111524087B (en) | Image processing method and device, storage medium and terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||