CN104318514A - Three-dimensional significance based image warping method - Google Patents
Three-dimensional significance based image warping method
- Publication number
- CN104318514A (application CN201410553252.7A)
- Authority
- CN
- China
- Prior art keywords
- target image
- formula
- described target
- image
- triangle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 21
- 239000011159 matrix material Substances 0.000 claims description 23
- 238000000605 extraction Methods 0.000 claims description 4
- 239000013598 vector Substances 0.000 claims description 3
- 230000003044 adaptive effect Effects 0.000 description 2
- 238000006243 chemical reaction Methods 0.000 description 2
- 238000001514 detection method Methods 0.000 description 2
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 238000005286 illumination Methods 0.000 description 1
- 230000036039 immunity Effects 0.000 description 1
- 230000000717 retained effect Effects 0.000 description 1
- 230000009466 transformation Effects 0.000 description 1
Classifications
-
- G06T3/067—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/593—Depth or shape recovery from multiple images from stereo images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
Abstract
The invention discloses an image warping method based on three-dimensional significance, characterized by the following steps: 1) obtaining a depth map of a target image from depth data; 2) combining the depth map with a two-dimensional model to construct a three-dimensional significance model; 3) adaptively updating the weight between the depth data and the two-dimensional model according to the distribution of image gray levels; 4) computing the gradient of the image energy function with the three-dimensional significance model; 5) extracting two-dimensional image edges and depth-contour features and generating a triangular mesh from the feature points; 6) establishing a target function and adding constraints; 7) finding an extremum of the target function and constructing the transformation relation. The method combines depth information with two-dimensional significance and increases the robustness of the warping.
Description
Technical field
The invention belongs to the field of image processing and relates to an image warping method based on three-dimensional significance.
Background technology
With the popularity of mobile devices such as smartphones and tablets, users have grown accustomed to uploading the photos they take to social networking sites to share with friends. Since users' devices differ in model, making a shared photo display well on different terminals is one of the hot topics in current computer vision research.
To address this problem, researchers have proposed several methods, but because of distortion and insufficient detection accuracy, deformation of objects in the image and loss of important information have remained persistent problems. In 2009, the article "Saliency detection for content-aware image resizing", published at the IEEE International Conference on Image Processing, proposed computing the saliency of the target image and using it to constrain the warp, so that points of high saliency are preserved during warping while points of low saliency are sacrificed. However, when the scene is complex and the image contains much texture and many objects, this method assigns high saliency to many points: if all of them are preserved, the image cannot be warped effectively, and if they are discarded, important contour information is lost. To date, no method has been able both to guarantee that objects in the image are neither deformed nor lost and to warp images of complex scenes.
Summary of the invention
The present invention aims to solve the problems that most current image warping methods deform the objects in the target image during warping, lose important information, and cannot effectively warp images of complex scenes. It proposes an image warping method based on three-dimensional significance that combines depth information with two-dimensional significance and strengthens the robustness of the warp.
The present invention adopts the following technical scheme to solve the problem:
The image warping method based on three-dimensional significance of the present invention proceeds as follows:
Step 1: use formula (1) to compute the energy function E of every pixel of the target image I of size m × n:
E(x, y) = |∂I(x, y)/∂x| + |∂I(x, y)/∂y| (1)
In formula (1), E(x, y) is the energy value of the target image I at pixel (x, y); I(x, y) is the gray-scale value of the target image I at pixel (x, y); x ∈ (0, m); y ∈ (0, n);
Step 2: perform feature extraction on the target image I to obtain the two-dimensional feature matrix X;
Step 3: use formula (2) to obtain the two-dimensional significance S_2D of the target image I:
In formula (2), X_i and X_j are two different row vectors of the two-dimensional feature matrix X; σ is a constant;
Step 4: use formula (3) to build the three-dimensional significance model S_3D:
S_3D = (1 − α)·S_2D + α·E_depth (3)
In formula (3), E_depth is the depth map of the target image I obtained with a 3D camera, and α is an adaptive parameter defined by formula (4):
In formula (4), n(x, y) is the number of pixels whose gray-scale value equals that of pixel (x, y); D_max is a constant;
Step 5: use formulas (1) and (3) to redefine the energy function E as E′:
E′(x, y) = E(x, y)·S_3D(x, y) (5)
In formula (5), E′(x, y) is the new energy value of the target image I at pixel (x, y);
Step 6: use formula (6) to compute the image significance S of the target image I:
In formula (6), (x_b, n) is the b-th pixel of the n-th column of the target image I and (x_a, n−1) is the a-th pixel of the (n−1)-th column, with a ≠ b and a, b ∈ (0, m); S((x_b, n), (x_a, n−1)) represents the energy difference between the b-th pixel x_b of the n-th column and the a-th pixel x_a of the (n−1)-th column; the remaining terms are the gradient of the target image I in the horizontal direction v and the gradient of the target image I along the diagonal direction d;
Step 7: according to the depth map of the target image I, obtain the contours of the object surfaces in the target image I;
Step 8: use Delaunay triangulation to connect the points of the two-dimensional feature matrix X and build a triangular mesh on the target image; the triangular mesh consists of a number of triangles t;
Step 9: use formula (7) to warp all triangles of the triangular mesh:
In formula (7), the two terms are the deformations of triangle t in the horizontal direction α and the vertical direction β, respectively; G_t denotes the warp function applied to any triangle t;
Step 10: use formula (8) to obtain the target equation E_s:
In formula (8), T is the set of triangles t; A_t is the area of triangle t; J_t(q) denotes the Jacobian matrix of triangle t after warping; S_t is the significance of any triangle t;
Step 11: use formula (9) to define the constraint condition E_f:
In formula (9), three vertices of triangle t are taken before warping together with the corresponding three vertices after warping; r_i is the ratio of the length of an edge between two warped vertices to the length of the corresponding edge between the unwarped vertices, and R_t is the rotation matrix relating an edge of the unwarped triangle to the corresponding edge of the warped triangle;
Step 12: use formula (10) to obtain the warping matrix F:
F = λ·E_s + (1 − λ)·E_f (10)
In formula (10), λ is a coefficient; according to the value of the warping matrix F corresponding to each triangle t of the target image I, the triangles are translated or rotated to realize the warp of the target image I.
Compared with the prior art, the beneficial effects of the present invention are:
1. The present invention combines the classical L1-norm energy-function computation on the target image with depth information to propose a new three-dimensional significance. It preserves the advantages of the original two-dimensional significance, while the added depth information allows effective warping even when the scene of the target image is complex.
2. The weight between two-dimensional significance and depth information is adjusted adaptively according to the distribution of gray levels in the image. When the gray-level distribution and the scene are simple, two-dimensional significance is weighted more heavily than depth information, preserving the advantage of conventional two-dimensional significance; when the gray-level distribution is broad and the scene is complex, depth information is weighted more heavily, so images of complex scenes can be warped better.
3. Because the accuracy of two-dimensional significance is strongly affected by ambient illumination while the accuracy of depth information is hardly affected by it, the method has strong immunity to environmental noise.
4. By adding depth information and a triangular mesh as constraints, the present invention considers both the significance of each point and the integrity of object edges. Even if two points on the same edge have large and small significance respectively, the final warp will not break that edge; this guarantees the robustness of the image warp.
Description of the drawings
Fig. 1 is the target image of the present invention;
Fig. 2 shows the warping result of the present invention on the target image.
Embodiment
In the present embodiment, the image warping method based on three-dimensional significance first uses depth data to obtain the depth map of the target image; it then combines the depth map with a two-dimensional model to build a three-dimensional significance model; next it adaptively updates the weight between the depth data and the two-dimensional model according to the distribution of image gray levels; it then uses the three-dimensional significance model to compute the gradient of the image energy function; afterwards it extracts two-dimensional image edges and depth-contour features and generates a triangular mesh from the feature points; finally it establishes a target function, adds constraints, finds the extremum, and obtains the warping matrix. In detail, it proceeds as follows:
Step 1: use formula (1) to compute the energy function E of every pixel of the target image I of size m × n. This energy function is obtained by using the L1 norm to compute the gradient of each pixel of the target image:
E(x, y) = |∂I(x, y)/∂x| + |∂I(x, y)/∂y| (1)
The gradient information usually reflects the edges of the objects in the image and can effectively keep the image intact during resizing; the target image is shown in Fig. 1.
In formula (1), E(x, y) is the energy value of the target image I at pixel (x, y); I(x, y) is the gray-scale value of the target image I at pixel (x, y); x ∈ (0, m); y ∈ (0, n);
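As a sketch of step 1, the per-pixel L1-norm gradient energy of formula (1) can be computed as below. The forward-difference scheme with the last row/column repeated is an assumption; the patent does not fix the discretization.

```python
import numpy as np

def l1_gradient_energy(gray):
    """Per-pixel energy E(x, y) = |dI/dx| + |dI/dy| (formula (1)).

    `gray` is an m-by-n array of gray-scale values.
    """
    gx = np.abs(np.diff(gray, axis=1, append=gray[:, -1:]))  # horizontal gradient
    gy = np.abs(np.diff(gray, axis=0, append=gray[-1:, :]))  # vertical gradient
    return gx + gy

# A single bright pixel yields high energy at the pixel and its edge.
gray = np.array([[0., 0., 0.],
                 [0., 9., 0.],
                 [0., 0., 0.]])
E = l1_gradient_energy(gray)
```

High values of E mark object edges, which the method then tries to preserve.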
Step 2: perform feature extraction on the target image I to obtain the two-dimensional feature matrix X. The feature-extraction method is not restricted; for example, Local Binary Patterns (LBP) or SIFT can be used;
Step 3: use formula (2) to obtain the two-dimensional significance S_2D of the target image I:
In formula (2), X_i and X_j are two different row vectors of the two-dimensional feature matrix X; σ is a constant;
Step 4: use formula (3) to build the three-dimensional significance model S_3D. The model combines depth information with the original two-dimensional significance and adaptively adjusts the weight between the two according to the image:
S_3D = (1 − α)·S_2D + α·E_depth (3)
In formula (3), E_depth is the depth map of the target image I obtained with a 3D camera, and α is an adaptive parameter defined by formula (4):
In formula (4), n(x, y) is the number of pixels whose gray-scale value equals that of pixel (x, y); D_max is a constant. When the scene in the image is complex, the distribution of its gray-scale values is more concentrated; conversely, when the scene is simple, the distribution is sparser. For a simple scene, traditional two-dimensional significance alone computes the significance of the image well, while for a complex scene depth information must be added to compute it;
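A minimal sketch of the fusion of formula (3) follows. The `adaptive_alpha` heuristic is a stand-in of our own: the exact expression of formula (4) in terms of n(x, y) and D_max is not reproduced in this text, so a normalized histogram-concentration measure is used purely for illustration.

```python
import numpy as np

def fuse_saliency(s2d, depth, alpha):
    # Formula (3): S_3D = (1 - alpha) * S_2D + alpha * E_depth
    return (1.0 - alpha) * s2d + alpha * depth

def adaptive_alpha(gray):
    """Illustrative stand-in for formula (4) (an assumption).

    Following the text: a concentrated gray-level distribution signals a
    complex scene, so alpha (the depth weight) should approach 1; a
    sparse distribution signals a simple scene, so alpha stays small.
    """
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    concentration = hist.max() / hist.sum()  # in (0, 1]
    return float(concentration)

s2d = np.full((2, 2), 1.0)
depth = np.zeros((2, 2))
fused = fuse_saliency(s2d, depth, 0.25)  # 0.75 * S_2D + 0.25 * E_depth
```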
Step 5: use formulas (1) and (3) to redefine the energy function E as E′:
E′(x, y) = E(x, y)·S_3D(x, y) (5)
In formula (5), E′(x, y) is the new energy value of the target image I at pixel (x, y);
Step 6: use formula (6) to compute the image significance S of the target image I:
In formula (6), (x_b, n) is the b-th pixel of the n-th column of the target image I and (x_a, n−1) is the a-th pixel of the (n−1)-th column, with a ≠ b and a, b ∈ (0, m); S((x_b, n), (x_a, n−1)) represents the energy difference between the b-th pixel x_b of the n-th column and the a-th pixel x_a of the (n−1)-th column; the remaining terms are the gradient of the target image I in the horizontal direction v and the gradient of the target image I along the diagonal direction d. By computing the differences of the energy function in the horizontal and diagonal directions at each pixel, the significance of each point, and hence of the whole image, is obtained;
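Formula (6) itself is not reproduced in this text, so the sketch below only illustrates the stated idea — combining the horizontal- and diagonal-direction differences of the energy map; the exact combination and the border handling are assumptions.

```python
import numpy as np

def significance_map(E):
    """Combine horizontal and diagonal energy differences (step 6 sketch).

    E is the per-pixel energy map E' of step 5. Border rows/columns are
    padded by replication.
    """
    h = np.abs(np.diff(E, axis=1, append=E[:, -1:]))   # direction v
    d = np.abs(E[1:, 1:] - E[:-1, :-1])                # diagonal direction d
    d = np.pad(d, ((0, 1), (0, 1)), mode="edge")
    return h + d

E = np.arange(9, dtype=float).reshape(3, 3)
S = significance_map(E)
```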
Step 7: according to the depth information in the depth map of the target image I, compute the gradient of the depth at each point to obtain the contours of the object surfaces in the target image I;
Step 8: use Delaunay triangulation to connect the points of the two-dimensional feature matrix X and build a triangular mesh on the target image; the triangular mesh consists of a number of triangles t. Delaunay triangulation exploits the properties of the discrete points: it connects the three mutually closest points into triangles, and the same triangular mesh is obtained regardless of which point the computation starts from;
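The mesh construction of step 8 maps directly onto a standard Delaunay routine; here is a sketch using SciPy (the choice of library is ours, not the patent's, and the feature points below are hypothetical).

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical feature points (rows of the feature matrix X interpreted
# as 2-D image coordinates).
points = np.array([[0.0, 0.0],
                   [1.0, 0.0],
                   [0.0, 1.0],
                   [1.0, 1.1]])
tri = Delaunay(points)
# tri.simplices lists the vertex indices of each triangle t of the mesh.
```

Each row of `tri.simplices` is one triangle t of the mesh that the later steps warp.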
Step 9: use formula (7) to warp all triangles of the triangular mesh:
In formula (7), the two terms are the deformations of triangle t in the horizontal direction α and the vertical direction β, respectively; G_t denotes the warp function applied to any triangle t;
Step 10: use formula (8) to obtain the target equation E_s:
In formula (8), T is the set of triangles t; A_t is the area of triangle t; J_t(q) denotes the Jacobian matrix of triangle t after warping; S_t is the significance of any triangle t. Minimizing this target equation yields the optimal transformation matrix;
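One summand of the target equation E_s of formula (8) can be sketched as follows. Measuring the Jacobian's deviation from the identity in the Frobenius norm is an assumption, since the text does not reproduce the norm actually used.

```python
import numpy as np

def triangle_jacobian(src, dst):
    """2x2 Jacobian J_t of the affine map taking triangle src to dst.

    src, dst: 3x2 arrays of vertex coordinates; the linear part satisfies
    (dst_i - dst_0) = J @ (src_i - src_0) for i = 1, 2.
    """
    S = (src[1:] - src[0]).T   # 2x2 edge matrix of the source triangle
    D = (dst[1:] - dst[0]).T   # 2x2 edge matrix of the warped triangle
    return D @ np.linalg.inv(S)

def warp_term(src, dst, area, saliency):
    # Area- and saliency-weighted distortion A_t * S_t * ||J_t - I||_F^2
    # of one triangle t (illustrative form of one summand of formula (8)).
    J = triangle_jacobian(src, dst)
    return area * saliency * float(np.sum((J - np.eye(2)) ** 2))

src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
identity_cost = warp_term(src, src, area=0.5, saliency=1.0)      # no warp
scaled_cost = warp_term(src, 2.0 * src, area=0.5, saliency=1.0)  # uniform 2x scale
```

An unwarped triangle contributes zero cost; high-saliency triangles are penalized more for the same deformation, which is what pushes the warp into low-saliency regions.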
Step 11: to prevent the object-surface contours obtained from the depth information from being deformed too much by the transformation, a constraint must be added to the above objective function; use formula (9) to define the constraint condition E_f:
In formula (9), three vertices of triangle t are taken before warping together with the corresponding three vertices after warping; r_i is the ratio of the length of an edge between two warped vertices to the length of the corresponding edge between the unwarped vertices, and R_t is the rotation matrix relating an edge of the unwarped triangle to the corresponding edge of the warped triangle. Minimizing this objective keeps the shape formed by the original three points as similar as possible to the shape of the three points after transformation;
Step 12: use formula (10) to obtain the warping matrix F:
F = λ·E_s + (1 − λ)·E_f (10)
In formula (10), λ is a coefficient; according to the value of the warping matrix F corresponding to each triangle t of the target image I, the triangles are translated or rotated to realize the warp of the target image I, which both preserves the points of high significance in the image and guarantees validity.
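The final combination of formula (10) is a plain convex blend of the two energies; the value of λ below is illustrative, since the patent leaves λ as a free coefficient.

```python
def warp_objective(e_s, e_f, lam=0.5):
    # Formula (10): F = lambda * E_s + (1 - lambda) * E_f.
    # lam trades saliency preservation (E_s, step 10) against contour
    # preservation (E_f, step 11).
    return lam * e_s + (1.0 - lam) * e_f

f = warp_objective(2.0, 4.0, lam=0.25)  # 0.25 * 2 + 0.75 * 4
```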
Claims (1)
1. An image warping method based on three-dimensional significance, characterized by proceeding as follows:
Step 1: use formula (1) to compute the energy function E of every pixel of the target image I of size m × n:
E(x, y) = |∂I(x, y)/∂x| + |∂I(x, y)/∂y| (1)
In formula (1), E(x, y) is the energy value of said target image I at pixel (x, y); I(x, y) is the gray-scale value of said target image I at pixel (x, y); x ∈ (0, m); y ∈ (0, n);
Step 2: perform feature extraction on said target image I to obtain the two-dimensional feature matrix X;
Step 3: use formula (2) to obtain the two-dimensional significance S_2D of said target image I:
In formula (2), X_i and X_j are two different row vectors of said two-dimensional feature matrix; σ is a constant;
Step 4: use formula (3) to build the three-dimensional significance model S_3D:
S_3D = (1 − α)·S_2D + α·E_depth (3)
In formula (3), E_depth is the depth map of said target image I obtained with a 3D camera, and α is an adaptive parameter defined by formula (4):
In formula (4), n(x, y) is the number of pixels whose gray-scale value equals that of pixel (x, y); D_max is a constant;
Step 5: use formulas (1) and (3) to redefine said energy function E as E′:
E′(x, y) = E(x, y)·S_3D(x, y) (5)
In formula (5), E′(x, y) is the new energy value of said target image I at pixel (x, y);
Step 6: use formula (6) to compute the image significance S of said target image I:
In formula (6), (x_b, n) is the b-th pixel of the n-th column of said target image I and (x_a, n−1) is the a-th pixel of the (n−1)-th column, with a ≠ b and a, b ∈ (0, m); S((x_b, n), (x_a, n−1)) represents the energy difference between the b-th pixel x_b of the n-th column and the a-th pixel x_a of the (n−1)-th column; the remaining terms are the gradient of said target image I in the horizontal direction v and the gradient of said target image I along the diagonal direction d;
Step 7: according to the depth map of said target image I, obtain the contours of the object surfaces in said target image I;
Step 8: use Delaunay triangulation to connect the points of said two-dimensional feature matrix X and build a triangular mesh on said target image; said triangular mesh consists of a number of triangles t;
Step 9: use formula (7) to warp all triangles of said triangular mesh:
In formula (7), the two terms are the deformations of said triangle t in the horizontal direction α and the vertical direction β, respectively; G_t denotes the warp function applied to any triangle t;
Step 10: use formula (8) to obtain the target equation E_s:
In formula (8), T is the set of said triangles t; A_t is the area of said triangle t; J_t(q) denotes the Jacobian matrix of said triangle t after warping; S_t is the significance of any triangle t;
Step 11: use formula (9) to define the constraint condition E_f:
In formula (9), three vertices of said triangle t are taken before warping together with the corresponding three vertices after warping; r_i is the ratio of the length of an edge between two warped vertices to the length of the corresponding edge between the unwarped vertices, and R_t is the rotation matrix relating an edge of the unwarped triangle to the corresponding edge of the warped triangle;
Step 12: use formula (10) to obtain the warping matrix F:
F = λ·E_s + (1 − λ)·E_f (10)
In formula (10), λ is a coefficient; according to the value of the warping matrix F corresponding to each triangle t of said target image I, the triangles are translated or rotated to realize the warp of said target image I.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410553252.7A CN104318514B (en) | 2014-10-17 | 2014-10-17 | Three-dimensional significance based image warping method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410553252.7A CN104318514B (en) | 2014-10-17 | 2014-10-17 | Three-dimensional significance based image warping method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104318514A true CN104318514A (en) | 2015-01-28 |
CN104318514B CN104318514B (en) | 2017-05-17 |
Family
ID=52373740
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410553252.7A Active CN104318514B (en) | 2014-10-17 | 2014-10-17 | Three-dimensional significance based image warping method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104318514B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101510299A (en) * | 2009-03-04 | 2009-08-19 | 上海大学 | Image self-adapting method based on vision significance |
EP2523165A2 (en) * | 2011-05-13 | 2012-11-14 | Omron Co., Ltd. | Image processing method and image processing device |
CN103050110A (en) * | 2012-12-31 | 2013-04-17 | 华为技术有限公司 | Method, device and system for image adjustment |
WO2014116346A1 (en) * | 2013-01-24 | 2014-07-31 | Google Inc. | Systems and methods for resizing an image |
Also Published As
Publication number | Publication date |
---|---|
CN104318514B (en) | 2017-05-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107578436B (en) | Monocular image depth estimation method based on full convolution neural network FCN | |
CN105303616B (en) | Embossment modeling method based on single photo | |
CN107578430B (en) | Stereo matching method based on self-adaptive weight and local entropy | |
CN106709948A (en) | Quick binocular stereo matching method based on superpixel segmentation | |
CN111079685A (en) | 3D target detection method | |
CN106127818B (en) | A kind of material appearance acquisition system and method based on single image | |
CN104299250A (en) | Front face image synthesis method and system based on prior model | |
CN102663399B (en) | Image local feature extracting method on basis of Hilbert curve and LBP (length between perpendiculars) | |
CN102074014A (en) | Stereo matching method by utilizing graph theory-based image segmentation algorithm | |
CN104156957A (en) | Stable and high-efficiency high-resolution stereo matching method | |
CN104820991A (en) | Multi-soft-constraint stereo matching method based on cost matrix | |
CN104715504A (en) | Robust large-scene dense three-dimensional reconstruction method | |
CN103927727A (en) | Method for converting scalar image into vector image | |
CN103778598A (en) | Method and device for disparity map improving | |
CN102609936A (en) | Stereo image matching method based on belief propagation | |
CN111553296B (en) | Two-value neural network stereo vision matching method based on FPGA | |
CN115861570A (en) | Multi-view human body reconstruction method based on luminosity consistency matching and optimization algorithm | |
CN104301706B (en) | A kind of synthetic method for strengthening bore hole stereoscopic display effect | |
To et al. | Bas-relief generation from face photograph based on facial feature enhancement | |
Vázquez‐Delgado et al. | Real‐time multi‐window stereo matching algorithm with fuzzy logic | |
CN104796624A (en) | Method for editing and propagating light fields | |
CN107330930A (en) | Depth of 3 D picture information extracting method | |
CN117132737A (en) | Three-dimensional building model construction method, system and equipment | |
CN104200469B (en) | Data fusion method for vision intelligent numerical-control system | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |