CN103810674B - Image enhancement method based on dependent perceptual object position reconstruction - Google Patents
Image enhancement method based on dependent perceptual object position reconstruction
- Publication number
- CN103810674B CN103810674B CN201210455248.8A CN201210455248A CN103810674B CN 103810674 B CN103810674 B CN 103810674B CN 201210455248 A CN201210455248 A CN 201210455248A CN 103810674 B CN103810674 B CN 103810674B
- Authority
- CN
- China
- Prior art keywords
- image
- perceptive
- perception
- relying
- optimizing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Image Analysis (AREA)
Abstract
The invention provides an image enhancement method based on the position reconstruction of dependent perceptual objects. The method includes: preliminarily extracting the foreground objects of an image; dividing the image into several regions and performing a foreground-object dependency analysis on each region to obtain the dependent perceptual objects; formulating an optimization target for the positions of the dependent perceptual objects according to photographic aesthetic criteria and the positional relationships between the dependent perceptual objects, and optimizing the composition; and filling in the background of the missing parts of the optimized image and performing edge matting to obtain the final image enhancement result. The invention can automatically analyze and extract the dependent perceptual objects in an image and use the analysis result to optimize the image composition.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to an image enhancement method based on dependent perceptual object analysis and position reconstruction.
Background technology
With the development of digital media technology, more and more image enhancement techniques are devoted to improving the visual quality of images. These methods convert human perception of an image into computable models, such as tone and sharpness. Image enhancement and editing are essential tools in computer graphics: for example, the Poisson blending method proposed by Pérez et al. is used for image fusion; the alpha-matting method handles edge refinement after image compositing; and shape-aware image editing can be used to edit image content. However, these methods only provide image editing tools; they do not actively process the image according to human perception and photographic principles to make it more aesthetically pleasing.
Much work has investigated how to evaluate the aesthetics of an image with computational models. These works extract global or local image features and give quantifiable evaluation criteria. To improve the aesthetic quality of images, many researchers have built computational models from aesthetic standards and edited image content accordingly. For example, Santella et al. proposed "Gaze-based interaction for semi-automatic photo cropping" and Nishiyama et al. proposed "Sensation-based photo cropping"; both works optimize image composition through photo cropping. Liu et al. proposed "Optimizing photo composition", which optimizes the composition of a photo with cropping and retargeting operations. However, none of these methods operates on the image at the object level, so the degree to which the composition can be changed is limited.
Recently, Bhattacharya et al. proposed "A holistic approach to aesthetic enhancement of photographs", which moves foreground objects according to photographic composition rules. However, that method does not consider the dependencies of the foreground objects and cannot guarantee the correctness of the semantic information of the image.
Summary of the invention
(1) technical problem to be solved
The present invention provides an image enhancement method based on the position reconstruction of dependent perceptual objects. It operates on the image at the object level, takes the dependencies between dependent perceptual objects into account, and guarantees the correctness of the semantic information of the image.
(2) technical scheme
The invention provides an image enhancement method based on the position reconstruction of dependent perceptual objects. The method includes:
S1, preliminarily extracting the foreground objects of the image;
S2, dividing the image into several regions and performing a foreground-object dependency analysis on each region to obtain the dependent perceptual objects;
S3, formulating an optimization target for the positions of the dependent perceptual objects according to photographic aesthetic criteria and the positional relationships between the dependent perceptual objects, and optimizing the composition;
S4, filling in the background of the missing parts of the optimized image and performing edge matting to obtain the final image enhancement result.
Preferably, step S1 uses a saliency segmentation method to preliminarily extract the foreground objects of the image.
Preferably, step S2 specifically includes:
S21, dividing the image into several regions with an image segmentation method; if half or more of a region's area is covered by a foreground object extracted in step S1, the region is set as a dependent perceptual object; if the saliency value of the region is below a threshold, it is set as pure background;
S22, extracting sharpness, clarity and color-harmony features from the remaining regions;
S23, performing the region dependency analysis with a multi-label graph cut method to obtain the final dependent perceptual objects.
Preferably, the optimization target in step S3 includes a thirds-point distance term for the dependent perceptual objects, a diagonal distance term, a visual balance term, a relation term between dependent perceptual objects, and a constraint penalty term.
Preferably, a heuristic algorithm is used to optimize the optimization target.
Preferably, step S4 uses a content-aware method to fill in the background of the missing parts of the optimized image.
(3) beneficial effect
The proposed image enhancement method based on dependent perceptual object position reconstruction performs a dependency analysis of the image foreground objects and the remaining regions to obtain the dependent perceptual objects, and reconstructs their positions, so the degree to which the composition can be changed is no longer limited. The optimization target considers the relations between foreground objects on top of photographic composition rules, so the semantic information is more accurate and the image is more aesthetically pleasing.
Accompanying drawing explanation
Fig. 1 is a flow chart of the steps of the method provided by the present invention.
Detailed description of the invention
The present invention is described in further detail below with reference to the accompanying drawing and specific embodiments.
The invention discloses an image enhancement method based on the position reconstruction of dependent perceptual objects. As shown in Fig. 1, the method includes:
S1, preliminarily extracting the foreground objects of the image;
S2, dividing the image into several regions and performing a foreground-object dependency analysis on each region to obtain the dependent perceptual objects;
S3, formulating an optimization target for the positions of the dependent perceptual objects according to photographic aesthetic criteria and the positional relationships between the dependent perceptual objects, and optimizing the composition;
S4, filling in the background of the missing parts of the optimized image and performing edge matting to obtain the final image enhancement result.
The method performs a dependency analysis of the image foreground and the remaining regions to obtain the dependent perceptual objects. The composition of the dependent perceptual objects is then optimized; the optimization target considers the relations between foreground objects on top of photographic composition rules, which makes the semantic information after optimization more accurate.
Step S1 uses a saliency segmentation method to preliminarily extract the foreground objects of the image.
Step S2 specifically includes:
S21, dividing the image into several regions with an image segmentation method; if half or more of a region's area is covered by a foreground object extracted in step S1, the region is set as a dependent perceptual object; if the saliency value of the region is below a threshold, it is set as pure background;
S22, extracting sharpness, clarity and color-harmony features from the remaining regions;
S23, performing the region dependency analysis with a multi-label graph cut method to obtain the final dependent perceptual objects.
The optimization target in step S3 includes a thirds-point distance term for the dependent perceptual objects, a diagonal distance term, a visual balance term, a relation term between dependent perceptual objects, and a constraint penalty term.
A heuristic algorithm is used to optimize the optimization target.
Step S4 uses a content-aware method to fill in the background of the missing parts of the optimized image.
The composition-optimized image finally obtained is better than the result of traditional methods.
Concretely, the method is as follows:
S1, preliminarily extract the foreground objects of an input image with a saliency segmentation method, specifically:
S11, compute the saliency value of each region with the region contrast method, where saliency values lie in the interval [0, 1]; regions with high saliency values are set as foreground candidates and regions with low saliency values as background candidates;
S12, extract the foreground object with an image segmentation method according to the foreground-candidate and background-candidate information. This procedure is repeated to extract the foreground objects one by one; each extracted foreground-object region is then set to background and its saliency value lowered. After this step, all foreground objects in the image have been extracted.
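The region-contrast saliency of step S11 can be sketched as follows. The region representation (a pixel count plus a mean color) and the max-normalization are assumptions of this sketch; the text only names the region contrast method.

```python
def region_contrast_saliency(regions):
    """Region-contrast saliency: a region is salient when its mean color
    differs strongly from the (size-weighted) mean colors of all other
    regions. `regions` is a list of (pixel_count, mean_color) tuples;
    colors are 3-tuples (e.g. Lab values). Returns values scaled to [0, 1].
    """
    def color_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    raw = [sum(nj * color_dist(ci, cj)
               for j, (nj, cj) in enumerate(regions) if j != i)
           for i, (_, ci) in enumerate(regions)]
    top = max(raw) or 1.0          # guard against an all-uniform image
    return [s / top for s in raw]
```

A region whose value exceeds a chosen threshold becomes a foreground candidate; low-saliency regions become background candidates, as in step S11.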
S2, divide the image into several regions and perform the dependency analysis on the segmented regions to obtain the dependent perceptual objects:
S21, divide the image into several regions with an image segmentation method and analyze each segmented region: if half or more of its area is covered by a foreground object extracted in step S1, the region is set as a dependent perceptual object; if its saliency value is below a threshold t, it is set as pure background;
S22, extract sharpness E_a, clarity E_s and color harmony ΔCH features for the remaining regions.
The sharpness E_a is
E_a = G((1/n) Σ_i δ(i))     (1)
where D(i) is the second derivative of the image at pixel i; δ(i) = 1 if D(i) > 0.1 and 0 otherwise; n is the number of pixels in the region; and G is a Gaussian normalization function.
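A minimal sketch of the sharpness feature: the second derivative is approximated by a discrete Laplacian on a grayscale region, and the Gaussian normalization G is omitted (identity), which is an assumption of this sketch.

```python
def sharpness(region):
    """Sharpness E_a of a rectangular grayscale region (a list of rows
    with values in [0, 1]): the fraction of interior pixels whose second
    derivative (discrete Laplacian) exceeds 0.1. The Gaussian
    normalization G of the text is omitted here for simplicity.
    """
    h, w = len(region), len(region[0])
    count, n = 0, 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (region[y - 1][x] + region[y + 1][x] +
                   region[y][x - 1] + region[y][x + 1] -
                   4.0 * region[y][x])
            n += 1
            if lap > 0.1:       # delta(i) = 1 when D(i) > 0.1
                count += 1
    return count / n if n else 0.0
```

Flat regions score 0; strongly textured regions approach 1.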
The clarity E_s (2) is defined over a high-frequency band F_H and a low-frequency band F_L of the image spectrum:
F_H = {(u, v) | βW < |u − u0| ≤ αW, βH < |v − v0| ≤ αH}
F_L = {(u, v) | |u − u0| ≤ βW, |v − v0| ≤ βH}
where W and H are the width and height of the image, (u0, v0) is the center frequency, α = 0.4 and β = 0.2.
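The clarity feature compares spectral energy in the high band F_H against the low band F_L. The sketch below uses a naive DFT (adequate for tiny images) and takes the plain hi/lo energy ratio, which is an assumption; the text only defines the two bands.

```python
import cmath

def clarity(img, alpha=0.4, beta=0.2):
    """Clarity E_s sketch for a grayscale image (a list of rows): the
    ratio of spectral energy in the high band F_H to that in the low
    band F_L, with alpha = 0.4 and beta = 0.2 as in the text. The ratio
    form itself is an assumption.
    """
    H, W = len(img), len(img[0])

    def F(u, v):  # naive 2-D DFT coefficient; fine for tiny images
        return sum(img[y][x] * cmath.exp(-2j * cmath.pi * (u * x / W + v * y / H))
                   for y in range(H) for x in range(W))

    hi = lo = 0.0
    for v in range(H):
        for u in range(W):
            du = min(u, W - u)  # wrap-around distance from the DC frequency
            dv = min(v, H - v)
            if beta * W < du <= alpha * W and beta * H < dv <= alpha * H:
                hi += abs(F(u, v))
            elif du <= beta * W and dv <= beta * H:
                lo += abs(F(u, v))
    return hi / lo if lo else 0.0
```

A flat region has almost no high-band energy and scores near 0; mid-frequency texture raises the score.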
The color harmony ΔCH is
ΔCH = exp{(CH + 5)² / 2}     (3)
where, following the document of Ou et al. on color harmony, "A Colour Harmony Model for Two-Colour Combinations", CH = H_C + H_L + H_H, with H_C the chromatic term, H_L the lightness term and H_H the hue term. After the region features are obtained, the region dependency analysis is carried out with the multi-label graph cut (Multi-label GraphCut) method to obtain the final dependent perceptual objects; in the energy equation, r denotes a segmented region and L denotes a label.
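The harmony feature of equation (3) can be computed directly. The sub-terms H_C, H_L and H_H come from the two-colour harmony model of Ou et al. and are passed in here as plain numbers, since reproducing that model is outside the scope of this sketch.

```python
import math

def color_harmony_penalty(hc, hl, hh):
    """Delta-CH of equation (3): CH = H_C + H_L + H_H is the harmony
    score from the Ou et al. two-colour model. The exponential is
    smallest (1.0) at CH = -5 and grows quickly as CH moves away.
    """
    ch = hc + hl + hh
    return math.exp((ch + 5.0) ** 2 / 2.0)
```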
S3, formulate the optimization target for the positions of the dependent perceptual objects according to photographic aesthetic criteria and the positional relationships between them, and reconstruct the positions with a heuristic algorithm. First the layout optimization target E is defined, where n denotes the number of dependent perceptual objects. With the width and height of the image normalized to the interval [0, 1], the thirds points of the image are the four points with image coordinates (1/3, 1/3), (1/3, 2/3), (2/3, 1/3) and (2/3, 2/3).
The thirds-point distance energy term D_P of the dependent perceptual objects pulls each object toward the rule-of-thirds points, where m_i denotes the pixel count of dependent perceptual object i, c_i denotes its centroid, and P_j denotes the thirds point nearest to the object.
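A sketch of the thirds-point term D_P: each centroid is scored by its distance to the nearest thirds point. The size-weighted averaging is an assumption, since the equation image is not reproduced in the text.

```python
THIRDS = [(1/3, 1/3), (1/3, 2/3), (2/3, 1/3), (2/3, 2/3)]

def thirds_energy(objects):
    """D_P sketch: `objects` is a list of (m_i, (cx, cy)) pairs, with
    m_i the pixel count and (cx, cy) the centroid in normalized [0, 1]
    image coordinates. Each centroid is pulled toward its nearest thirds
    point P_j; the m_i-weighted averaging is an assumption.
    """
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    total = sum(m for m, _ in objects)
    return sum(m * min(dist(c, p) for p in THIRDS) for m, c in objects) / total
```

An object sitting exactly on a thirds point contributes zero energy.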
The diagonal distance energy term D_L of the dependent perceptual objects is defined over the two diagonals L_1 and L_2 of the image, where l_{i,j} is the line formed by the centroids of two dependent perceptual objects, M_{i,j} is the midpoint of the two centroids, and θ_k is the angle between l_{i,j} and L_k, with
f_a(l_{i,j}, L_1, L_2) = |θ_1||θ_2| / 4π²
and d(M, L) denoting the distance from point M to line L.
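A sketch of the diagonal term for one object pair. The weight f_a = |θ_1||θ_2|/4π² is taken from the text; combining it multiplicatively with the midpoint-to-nearest-diagonal distance d(M, L) is an assumption.

```python
import math

def diagonal_energy(c1, c2, w=1.0, h=1.0):
    """D_L sketch for one pair of centroids c1, c2 in a w x h image:
    zero when the centroid line l_ij lies along a diagonal, growing with
    both the angular misalignment f_a and the distance of the midpoint
    M_ij from the nearest diagonal (the combination is assumed).
    """
    ang = math.atan2(c2[1] - c1[1], c2[0] - c1[0])
    d1 = math.atan2(h, w)        # direction of diagonal L1 (corner to corner)
    d2 = math.atan2(-h, w)       # direction of diagonal L2

    def angle_diff(a, b):        # angle between two undirected lines
        d = abs(a - b) % math.pi
        return min(d, math.pi - d)

    t1, t2 = angle_diff(ang, d1), angle_diff(ang, d2)
    fa = t1 * t2 / (4 * math.pi ** 2)

    mx, my = (c1[0] + c2[0]) / 2, (c1[1] + c2[1]) / 2
    dist1 = abs(h * mx - w * my) / math.hypot(w, h)          # L1: h*x - w*y = 0
    dist2 = abs(h * mx + w * my - h * w) / math.hypot(w, h)  # L2: h*x + w*y = h*w
    return fa * min(dist1, dist2)
```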
The visual balance energy term D_V is defined with m_i the pixel count of dependent perceptual object i and c_i its centroid; the visual balance point is the m_i-weighted mean of the centroids, c is the center of the image, d(a, b) denotes the distance between two points, and σ = 0.2.
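A sketch of the visual balance term. Taking the balance point as the m_i-weighted mean of the centroids follows the definitions above; the squared-distance functional form with σ = 0.2 is an assumption.

```python
def visual_balance_energy(objects, center=(0.5, 0.5), sigma=0.2):
    """D_V sketch: the visual balance point is the m_i-weighted mean of
    the object centroids c_i; the energy grows with its distance from
    the image center c, scaled by sigma (functional form assumed).
    """
    total = sum(m for m, _ in objects)
    bx = sum(m * c[0] for m, c in objects) / total
    by = sum(m * c[1] for m, c in objects) / total
    d = ((bx - center[0]) ** 2 + (by - center[1]) ** 2) ** 0.5
    return (d / sigma) ** 2
```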
The relation term R between dependent perceptual objects is
R(i, j) = S(i, j) · ||Δ_{i,j} − Δ′_{i,j}||     (9)
where S(i, j) = λS_shape(i, j) + (1 − λ)S_color(i, j); S_shape is the shape-context similarity between two dependent perceptual objects, S_color is the chi-square distance between their color histograms, Δ_{i,j} is the distance between the two dependent perceptual objects before optimization, and Δ′_{i,j} is their distance after optimization.
The constraint penalty term P constrains the final positions toward the nearest feasible solution. The total energy E is optimized with a heuristic algorithm to obtain the optimized positions.
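The text only names a "heuristic algorithm", so the sketch below uses plain hill climbing with random perturbations over the object positions, which is an assumption; any stochastic search that decreases E would fit the description.

```python
import random

def heuristic_optimize(energy, positions, steps=2000, step_size=0.05, seed=0):
    """Heuristic optimization sketch: random local search over object
    positions. `energy` maps a list of (x, y) positions (normalized to
    [0, 1]) to a scalar E; a perturbed candidate is accepted only if it
    lowers E. Step count and step size are assumed values.
    """
    rng = random.Random(seed)
    best = [tuple(p) for p in positions]
    best_e = energy(best)
    for _ in range(steps):
        cand = [(min(1.0, max(0.0, x + rng.uniform(-step_size, step_size))),
                 min(1.0, max(0.0, y + rng.uniform(-step_size, step_size))))
                for x, y in best]
        e = energy(cand)
        if e < best_e:
            best, best_e = cand, e
    return best, best_e
```

For example, minimizing the squared distance of a single object to the thirds point (1/3, 1/3) drives its position toward that point.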
S4, fill in the missing parts of the optimized image with a content-aware completion method and perform edge refinement (alpha matting) to obtain the final image enhancement result: after the positions of the dependent perceptual objects have been optimized, the missing parts of the image must be filled by a content-aware image completion method; in some cases the boundaries of the dependent perceptual objects also need alpha-matting refinement. The result is output as the final composition-optimized image.
The above are only preferred embodiments of the present invention. It should be noted that a person of ordinary skill in the art may make improvements and substitutions without departing from the technical principles of the present invention, and such improvements and substitutions shall also be regarded as falling within the protection scope of the present invention.
Claims (6)
1. An image enhancement method based on the position reconstruction of dependent perceptual objects, characterized in that the method comprises the following steps:
S1, preliminarily extracting the foreground objects of an image;
S2, dividing the image into several regions and performing a foreground-object dependency analysis on each region to obtain the dependent perceptual objects;
S3, formulating an optimization target for the positions of the dependent perceptual objects according to photographic aesthetic criteria and the positional relationships between the dependent perceptual objects, and optimizing the composition;
S4, filling in the background of the missing parts of the optimized image and performing edge matting to obtain the final image enhancement result.
2. The method of claim 1, characterized in that step S1 uses a saliency segmentation method to preliminarily extract the foreground objects of the image.
3. The method of claim 1, characterized in that step S2 specifically includes:
S21, dividing the image into several regions with an image segmentation method; if half or more of a region's area is covered by a foreground object extracted in step S1, the region is set as a dependent perceptual object; if the saliency value of the region is below a threshold, it is set as pure background;
S22, extracting sharpness, clarity and color-harmony features from the remaining regions;
S23, performing the region dependency analysis with a multi-label graph cut method to obtain the final dependent perceptual objects.
4. The method of claim 1, characterized in that the optimization target in step S3 includes a thirds-point distance term for the dependent perceptual objects, a diagonal distance term, a visual balance term, a relation term between dependent perceptual objects, and a constraint penalty term.
5. The method of claim 4, characterized in that a heuristic algorithm is used to optimize the optimization target.
6. The method of claim 1, characterized in that step S4 uses a content-aware method to fill in the background of the missing parts of the optimized image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210455248.8A CN103810674B (en) | 2012-11-13 | 2012-11-13 | Based on the image enchancing method relying on the reconstruct of perception object's position |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103810674A CN103810674A (en) | 2014-05-21 |
CN103810674B true CN103810674B (en) | 2016-09-21 |
Family
ID=50707396
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210455248.8A Active CN103810674B (en) | 2012-11-13 | 2012-11-13 | Based on the image enchancing method relying on the reconstruct of perception object's position |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103810674B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107707818B (en) * | 2017-09-27 | 2020-09-29 | 努比亚技术有限公司 | Image processing method, image processing apparatus, and computer-readable storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1525401A (en) * | 2003-02-28 | 2004-09-01 | Eastman Kodak Company | Method and system for enhancing portrait images that are processed in a batch mode |
CN1957371A (en) * | 2004-05-31 | 2007-05-02 | Nokia Corporation | Method and system for viewing and enhancing images |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AU2004280500A1 (en) * | 2003-10-09 | 2005-04-21 | De Beers Consolidated Mines Limited | Enhanced video based surveillance system |
Non-Patent Citations (1)
Title |
---|
Piecewise Planar and Non-Planar Stereo for Urban Scene Reconstruction; David Gallup et al.; Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on; 2010-06-18; p. 1420, Section 3.3, lines 1-4; p. 1421, lines 1-33; equations (1)-(3) * |
Also Published As
Publication number | Publication date |
---|---|
CN103810674A (en) | 2014-05-21 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |