CN107085828A - Image mosaic fusion method based on human-eye visual characteristic - Google Patents

Image mosaic fusion method based on human-eye visual characteristic

Info

Publication number
CN107085828A
CN107085828A
Authority
CN
China
Prior art keywords
region
image
pixel
path
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710298017.3A
Other languages
Chinese (zh)
Other versions
CN107085828B (en)
Inventor
史再峰
高阳
庞科
高静
徐江涛
刘铭赫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201710298017.3A priority Critical patent/CN107085828B/en
Publication of CN107085828A publication Critical patent/CN107085828A/en
Application granted granted Critical
Publication of CN107085828B publication Critical patent/CN107085828B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4038Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration by the use of local operators
    • G06T5/30Erosion or dilatation, e.g. thinning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Abstract

The present invention relates to the fields of image processing and computer graphics. It uses visual saliency and the masking effect to improve stitched-image quality and to reduce the quality degradation introduced by broken edges, so as to obtain high-quality stitched and fused images that suit human vision. The technical solution adopted by the present invention is an image mosaic fusion method based on human-eye visual characteristics, with steps as follows. The overlapping region of the two images, obtained after the matching operation, is processed and a stitching path is found within it: first, search the overlapping region for areas exhibiting visual masking; second, compute the pixel weights of smooth regions and textured regions; third, compute the visual saliency of the overlapping region; fourth, solve for the stitching path; fifth, complete the image stitching using that path. The present invention is mainly applicable to image processing applications.

Description

Image mosaic fusion method based on human-eye visual characteristic
Technical field
The present invention relates to the fields of image processing and computer graphics, and to improving image quality on the basis of human-eye characteristics. More particularly, it relates to optimizing the image fusion effect during image matching and stitching by exploiting human-eye visual characteristics. Specifically, it relates to an image mosaic fusion method based on human-eye visual characteristics.
Background technology
Image mosaic and fusion technology is widely used in fields such as automotive electronics, unmanned aerial vehicles, the military, and remote sensing. The technology extracts and matches the feature information of two or more images that share an overlapping region, computes stitching parameters, deforms the images accordingly, and fuses them into a single wide-angle image. In practical applications, limitations of the stitching algorithm, the shooting method, and lens distortion introduce computational errors, so the computed stitching parameters are inaccurate, the deformed images cannot be aligned exactly in the overlapping region, and broken edges appear in the stitched image. Broken edges perceived by the human eye severely degrade the quality of the stitched image and the viewer's subjective impression of it.
Image quality assessment based on the human visual system has received sustained attention in recent years, because such methods can yield objective quality evaluations that are consistent with subjective perception. The human visual system has multiple characteristics, among which visual saliency and masking are two prominent ones. Visual saliency is a research hotspot in the field of human visual attention; in computer vision it mainly takes the form of simulating the human visual attention mechanism. Human visual attention concentrates limited cognitive resources on the important stimuli in a scene while suppressing unimportant information. In an image, different graphic elements have different shapes and colors, stimulate the eye differently, and therefore have different visual saliency. Masking is another property of the human eye. The eye is highly sensitive to strongly contrasting content, but when an object closely resembles its background the eye cannot effectively distinguish the object. This is one manifestation of the masking characteristic.
Summary of the invention
To overcome the deficiencies of the prior art, and in view of the misalignment, broken edges, and stitching gaps that commonly occur during image fusion in image stitching methods, the purpose of the present invention is to exploit human-eye characteristics, specifically to use visual saliency and the masking effect to improve stitched-image quality, reduce the quality degradation introduced by broken edges, and obtain high-quality stitched and fused images that suit human vision. The technical solution adopted by the present invention is an image mosaic fusion method based on human-eye visual characteristics, with steps as follows:
The overlapping region of the two images, obtained after the matching operation, is processed, and a stitching path is found within it:
First, search the overlapping region for areas exhibiting visual masking. Such areas fall into two classes: the first is smooth regions; the second is regions with fine texture. Misalignment is not easily noticed when it occurs in the first class, and when it occurs in the second class a small misalignment is masked by the fine texture of the background and cannot effectively attract the eye's attention. The specific method is: extract the edge information of the image with an edge detection operator to obtain a binary image C, in which edge pixels have brightness 1; apply a morphological dilation of amplitude r to the edges in C to obtain a binary image D, in which regions with brightness 0 can be regarded as regions of little variation; then apply two successive morphological erosions of amplitude r to D to obtain image E, in which regions with brightness 1 are the regions with fine texture;
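The dilation-erosion screening described in this step can be sketched in plain NumPy. This is a minimal illustration under stated assumptions, not the patent's implementation: the helper names (`dilate`, `erode`, `masking_regions`) are hypothetical, a square structuring element of amplitude r is assumed, and the binary edge map C is taken as given (in practice it would come from an edge detector such as Canny).

```python
import numpy as np

def dilate(img, r):
    """Binary dilation with a (2r+1)x(2r+1) square structuring element."""
    h, w = img.shape
    padded = np.pad(img, r, mode="constant")
    out = np.zeros_like(img)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            # OR in each shifted copy of the image
            out |= padded[r + dy : r + dy + h, r + dx : r + dx + w]
    return out

def erode(img, r):
    """Binary erosion, via the duality erode(X) = complement(dilate(complement(X)))."""
    return 1 - dilate(1 - img, r)

def masking_regions(edges, r):
    """edges: binary edge map C. Returns (smooth, textured) binary masks.
    D = dilate(C, r): the zeros of D are the smooth regions.
    E = erode(erode(D, r), r): the ones of E are the fine-texture regions."""
    D = dilate(edges, r)
    smooth = 1 - D
    E = erode(erode(D, r), r)
    return smooth, E
```

An isolated edge pixel, for example, dilates into a small block that two erosions remove entirely, so it is classified as neither smooth (locally) nor finely textured.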
Second, compute the pixel weights of the smooth regions and the textured regions. For pixels of smooth regions, the range within each pixel window is computed with formula 1, giving a value that characterizes how similar the pixel is to its surroundings:
G = x_max - x_min  (1)
In formula 1, G denotes the range of the window centered on the pixel under calculation, x_max denotes the maximum value in the window, and x_min the minimum; the smaller G is, the more similar the pixel is to its surroundings. For pixels of the fine-texture region, the local entropy is computed with formula 2:
H = -Σ_{i=0}^{m-1} Σ_{j=0}^{n-1} p_ij log p_ij  (2)
In formula 2, (i, j) are the coordinates of a pixel within the calculation window, whose upper-left vertex is the window origin (0, 0); p_ij is the fraction of the window's total grey level contributed by the current pixel; m and n are the length and width of the window; Σ is the summation symbol; and H is the local entropy of the pixel, representing the degree of disorder at that point. From the local range and local entropy thus obtained, the weight map mask of the masking regions is computed according to formula 3:
mask = k1(255 - G) + k2·H  (3)
In formula 3, mask denotes the masking-characteristic weight map, and k1 and k2 are the respective proportions of the local range and the local entropy (the range term being evaluated on smooth regions and the entropy term on fine-texture regions).
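Formulas 1 and 2, and the weighting of formula 3, can be illustrated per window as follows. This is a sketch under assumptions: the function names are hypothetical, and the dispatch between the range term and the entropy term follows the embodiment, where 255 minus the local range is weighted by k1 on smooth pixels and the local entropy by k2 on textured pixels.

```python
import numpy as np

def local_range(window):
    """Formula 1: G = x_max - x_min over a pixel window (smooth regions)."""
    return int(window.max()) - int(window.min())

def local_entropy(window):
    """Formula 2: H = -sum p_ij * log(p_ij), where p_ij is the fraction of
    the window's total grey level contributed by pixel (i, j)."""
    total = window.sum()
    if total == 0:
        return 0.0
    p = window.astype(float).ravel() / total
    p = p[p > 0]  # 0 * log(0) is taken as 0
    return float(-(p * np.log(p)).sum())

def mask_weight(window, in_smooth, k1=0.2, k2=0.3):
    """Formula 3 as interpreted from the embodiment: k1 * (255 - G) for a
    smooth-region pixel, k2 * H for a fine-texture pixel."""
    if in_smooth:
        return k1 * (255 - local_range(window))
    return k2 * local_entropy(window)
```

A perfectly uniform window has G = 0 and maximal entropy log(mn), so it receives a high masking weight in either branch, consistent with its being easy to hide a seam in.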
Third, compute the visual saliency of the overlapping region: process the overlapping-region image with a visual-saliency algorithm of wide applicability to obtain the visual-saliency weight map Saliency of the overlapping region; pixels with higher Saliency values attract the eye's attention more easily;
Fourth, solve for the stitching path. First compute the selection weight of the stitching path according to formula 4:
cost = k3·mask + k4·Saliency  (4)
cost denotes the weight used for selecting stitching-path pixels, and k3 and k4 are the respective proportions of the masking characteristic and of saliency;
Starting from the intersection of the two image boundaries within the overlapping region, search for the stitching path blend according to cost. Within the coordinate range of the overlapping region, path points are selected step by step: if the current path point has coordinates (x, y), then the search range of the next path point is (x+1, y-a) to (x+1, y+b), where a and b are constants greater than 0; the coordinates of the pixel with the minimum cost value within this range become the next path point. Proceeding in this way yields the stitching path blend optimized on the basis of human-eye characteristics;
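The greedy step-by-step search of this step can be sketched as below. Two details are assumptions, since the patent only specifies the (x+1, y-a) to (x+1, y+b) window and the minimum-cost rule: x is taken to index the first axis of the cost array, and ties are broken toward the smallest y.

```python
import numpy as np

def stitch_path(cost, start_y, a=3, b=5):
    """Greedy seam search over a cost map (formula 4 values).
    Starts at (0, start_y) on the boundary; from a path point (x, y) the next
    point is the minimum-cost pixel among (x+1, y-a) .. (x+1, y+b),
    clamped to the overlap's coordinate range."""
    h, w = cost.shape
    path = [(0, start_y)]
    y = start_y
    for x in range(1, h):
        lo = max(0, y - a)
        hi = min(w - 1, y + b)
        # np.argmin returns the first minimum, i.e. the smallest y on ties
        y = lo + int(np.argmin(cost[x, lo : hi + 1]))
        path.append((x, y))
    return path
```

Because each step only looks one row ahead, the result is locally optimal rather than globally optimal; a dynamic-programming seam (as in seam carving) would be a natural alternative, but the patent describes the greedy rule.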
Fifth, complete the image stitching using this path.
In one specific example, the edges of the image are detected with the Canny operator and then dilated and eroded with an amplitude of 5 to obtain the regions with masking characteristics; the local range of the smooth regions is subtracted from 255 and multiplied by the proportion k1 = 0.2, and the local entropy of the fine-texture regions is multiplied by the proportion k2 = 0.3, yielding the masking-characteristic weight map mask;
mask multiplied by the proportion k3 = 0.4 is added to the saliency map Saliency multiplied by the proportion k4 = 0.6 to obtain the stitching-path selection weight cost;
when searching for the stitching path, if the current path point has coordinates (x, y), the search range of the next path point is (x+1, y-3) to (x+1, y+5), and the point with the minimum cost is selected as the next path point.
Features and beneficial effects of the present invention:
1. The present invention performs stitching and fusion in regions that mask misalignment, eliminating or reducing the visual attention drawn by misalignment and improving the quality of the fused image;
2. The present invention uses pixels of low visual saliency as the stitching path blend, so that misalignment is kept away from the regions the eye is interested in and occurs instead in regions to which the eye pays little attention, preserving the key information of the image and improving its quality.
Brief description of the drawings:
Fig. 1: edges of a fine-texture pattern and the effect after morphological dilation.
Fig. 2: morphological dilation and erosion processing and the resulting fine-texture region (the black and grey areas in the last figure).
Fig. 3: selection of the stitching path.
Fig. 4: flow chart of the present invention.
Embodiment
The essence of image mosaic fusion is to take the overlapping region of the two images as a reference and map the processed image pixels onto one of them. A common fusion method takes one of the input images as the base and fills in the other image after processing; the stitching-fusion path therefore forms a boundary within the base image. When the computed parameters deviate from physical reality and a pattern edge crosses the stitching path, edge misalignment or breakage occurs. If such misalignment appears in a region of high visual saliency, it severely degrades image quality. Meanwhile, images contain textured regions of some disorder (such as sand, gravel, or tree crowns); when misaligned edges occur there, the eye's masking characteristic prevents them from being perceived. The present invention therefore exploits human-eye characteristics to constrain the stitching path, along which misalignment may occur, to regions of low visual saliency and to fine-texture regions with masking characteristics, reducing the amount of edge misalignment recognizable by the eye and effectively improving image quality. The specific method is as follows:
The overlapping region of the two images, obtained after the matching operation, is processed, and a stitching path is found within it.
First, search the overlapping region for areas exhibiting visual masking. Such areas fall mainly into two classes: the first is smooth regions, such as the sky or a calm lake surface; the second is regions with fine texture, such as sand, gravel, or dense foliage. Misalignment is not easily noticed when it occurs in the first class, and when it occurs in the second class a small misalignment is masked by the fine texture of the background and cannot effectively attract the eye's attention. The specific method is: extract the edge information of the image with an edge detection operator to obtain a binary image C. In image C the edge pixels have brightness 1; as shown in Fig. 1, the black cells represent the detected edge pixels. Apply a morphological dilation of amplitude r to the edges in C to obtain a binary image D; regions of D with brightness 0 can now be regarded as regions of little variation (shown as the white pixels in the right part of Fig. 1). As shown in Fig. 2, apply two successive morphological erosions of amplitude r to D to obtain image E; the regions of E with brightness 1 are the regions with fine texture, the grey and black areas in Fig. 2.
Second, compute the pixel weights of the smooth regions and the textured regions. For pixels of smooth regions, the range within each pixel window is computed with formula 1, giving a value that characterizes how similar the pixel is to its surroundings.
G = x_max - x_min  (1)
In formula 1, G denotes the range of the window centered on the pixel under calculation, x_max denotes the maximum value in the window, and x_min the minimum. The smaller G is, the more similar the pixel is to its surroundings. For pixels of the fine-texture region, the local entropy is computed with formula 2.
H = -Σ_{i=0}^{m-1} Σ_{j=0}^{n-1} p_ij log p_ij  (2)
In formula 2, (i, j) are the coordinates of a pixel within the calculation window, whose upper-left vertex is the window origin (0, 0); p_ij is the fraction of the window's total grey level contributed by the current pixel; m and n are the length and width of the window; and H is the local entropy of the pixel, representing the degree of disorder at that point. From the local range and local entropy thus obtained, the weight map mask of the masking regions is computed according to formula 3.
mask = k1(255 - G) + k2·H  (3)
In formula 3, mask denotes the masking-characteristic weight map, and k1 and k2 are the respective proportions of the local range and the local entropy (the range term being evaluated on smooth regions and the entropy term on fine-texture regions).
Third, compute the visual saliency of the overlapping region. Process the overlapping-region image with a visual-saliency algorithm of wide applicability to obtain the visual-saliency weight map Saliency of the overlapping region. The higher the visual saliency, the more attention the eye pays.
Fourth, solve for the stitching path. First compute the selection weight of the stitching path according to formula 4.
cost = k3·mask + k4·Saliency  (4)
cost denotes the weight used for selecting stitching-path pixels, and k3 and k4 are the respective proportions of the masking characteristic and of saliency.
Starting from the intersection of the two image boundaries within the overlapping region, search for the stitching path blend according to cost. Within the coordinate range of the overlapping region, path points are selected step by step. As shown in Fig. 3, if the current path point has coordinates (x, y), the search range of the next path point is (x+1, y-a) to (x+1, y+b). The coordinates of the pixel with the minimum cost value within this range become the next path point. Proceeding in this way yields the stitching path blend optimized on the basis of human-eye characteristics.
Fifth, complete the image stitching using this path.
In the implementation, the edges of the image are detected with the Canny operator and then dilated and eroded with an amplitude of 5 to obtain the regions with masking characteristics. The local range of the smooth regions is subtracted from 255 and multiplied by the proportion k1 = 0.2, and the local entropy of the fine-texture regions is multiplied by the proportion k2 = 0.3, yielding the masking-characteristic weight map mask.
mask multiplied by the proportion k3 = 0.4 is added to the saliency map Saliency multiplied by the proportion k4 = 0.6 to obtain the stitching-path selection weight cost.
When searching for the stitching path, if the current path point has coordinates (x, y), the search range of the next path point is (x+1, y-3) to (x+1, y+5), and the point with the minimum cost is selected as the next path point.
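Under this embodiment's concrete proportions (k1 = 0.2, k2 = 0.3, k3 = 0.4, k4 = 0.6), the cost map of formula 4 could be assembled as below. This is a sketch: the `np.where` dispatch between smooth and textured pixels is an assumed interpretation of how the two terms of the mask map are combined, and the G, H, and Saliency maps are taken as given.

```python
import numpy as np

# Embodiment parameters: mask weights k1, k2; cost weights k3, k4.
K1, K2, K3, K4 = 0.2, 0.3, 0.4, 0.6

def cost_map(G, H, smooth, saliency):
    """Combine a local-range map G, a local-entropy map H, a boolean
    smooth-region mask, and a saliency map into the formula-4 weight:
    mask = K1*(255 - G) on smooth pixels, K2*H on textured pixels,
    cost = K3*mask + K4*saliency."""
    mask = np.where(smooth, K1 * (255.0 - G), K2 * H)
    return K3 * mask + K4 * saliency
```

With these weights, a uniform smooth pixel (G = 0, zero saliency) scores 0.4 * 51 = 20.4, so on this scale lower cost marks a better place for the seam only after saliency, which dominates with weight 0.6, is taken into account.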
According to this embodiment, the present invention achieves an optimal stitching and fusion effect.

Claims (2)

1. An image mosaic fusion method based on human-eye visual characteristics, characterized in that the steps are as follows:
the overlapping region of the two images, obtained after the matching operation, is processed, and a stitching path is found within it:
first, search the overlapping region for areas exhibiting visual masking, such areas falling into two classes: the first is smooth regions; the second is regions with fine texture; misalignment is not easily noticed when it occurs in the first class, and when it occurs in the second class a small misalignment is masked by the fine texture of the background and cannot effectively attract the eye's attention; the specific method is: extract the edge information of the image with an edge detection operator to obtain a binary image C, in which edge pixels have brightness 1; apply a morphological dilation of amplitude r to the edges in C to obtain a binary image D, in which regions with brightness 0 can be regarded as regions of little variation; then apply two successive morphological erosions of amplitude r to D to obtain image E, in which regions with brightness 1 are the regions with fine texture;
second, compute the pixel weights of the smooth regions and the textured regions; for pixels of smooth regions, the range within each pixel window is computed with formula 1, giving a value that characterizes how similar the pixel is to its surroundings:
G = x_max - x_min  (1)
in formula 1, G denotes the range of the window centered on the pixel under calculation, x_max denotes the maximum value in the window, and x_min the minimum; the smaller G is, the more similar the pixel is to its surroundings; for pixels of the fine-texture region, the local entropy is computed with formula 2:
H = -Σ_{i=0}^{m-1} Σ_{j=0}^{n-1} p_ij log p_ij  (2)
in formula 2, (i, j) are the coordinates of a pixel within the calculation window, whose upper-left vertex is the window origin (0, 0); p_ij is the fraction of the window's total grey level contributed by the current pixel; m and n are the length and width of the window; Σ is the summation symbol; and H is the local entropy of the pixel, representing the degree of disorder at that point; from the local range and local entropy thus obtained, the weight map mask of the masking regions is computed according to formula 3:
mask = k1(255 - G) + k2·H  (3)
in formula 3, mask denotes the masking-characteristic weight map, and k1 and k2 are the respective proportions of the local range and the local entropy;
third, compute the visual saliency of the overlapping region: process the overlapping-region image with a visual-saliency algorithm of wide applicability to obtain the visual-saliency weight map Saliency of the overlapping region; pixels with higher Saliency values attract the eye's attention more easily;
fourth, solve for the stitching path: first compute the selection weight of the stitching path according to formula 4:
cost = k3·mask + k4·Saliency  (4)
cost denotes the weight used for selecting stitching-path pixels, and k3 and k4 are the respective proportions of the masking characteristic and of saliency;
starting from the intersection of the two image boundaries within the overlapping region, search for the stitching path blend according to cost; within the coordinate range of the overlapping region, path points are selected step by step: if the current path point has coordinates (x, y), then the search range of the next path point is (x+1, y-a) to (x+1, y+b), where a and b are constants greater than 0; the coordinates of the pixel with the minimum cost value within this range become the next path point; proceeding in this way yields the stitching path blend optimized on the basis of human-eye characteristics;
fifth, complete the image stitching using this path.
2. The image mosaic fusion method based on human-eye visual characteristics as claimed in claim 1, characterized in that, in one specific example, the edges of the image are detected with the Canny operator and then dilated and eroded with an amplitude of 5 to obtain the regions with masking characteristics; the local range of the smooth regions is subtracted from 255 and multiplied by the proportion k1 = 0.2, and the local entropy of the fine-texture regions is multiplied by the proportion k2 = 0.3, yielding the masking-characteristic weight map mask;
mask multiplied by the proportion k3 = 0.4 is added to the saliency map Saliency multiplied by the proportion k4 = 0.6 to obtain the stitching-path selection weight cost;
when searching for the stitching path, if the current path point has coordinates (x, y), the search range of the next path point is (x+1, y-3) to (x+1, y+5), and the point with the minimum cost is selected as the next path point.
CN201710298017.3A 2017-04-29 2017-04-29 Image splicing and fusing method based on human visual characteristics Expired - Fee Related CN107085828B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710298017.3A CN107085828B (en) 2017-04-29 2017-04-29 Image splicing and fusing method based on human visual characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710298017.3A CN107085828B (en) 2017-04-29 2017-04-29 Image splicing and fusing method based on human visual characteristics

Publications (2)

Publication Number Publication Date
CN107085828A true CN107085828A (en) 2017-08-22
CN107085828B CN107085828B (en) 2020-06-26

Family

ID=59612216

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710298017.3A Expired - Fee Related CN107085828B (en) 2017-04-29 2017-04-29 Image splicing and fusing method based on human visual characteristics

Country Status (1)

Country Link
CN (1) CN107085828B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109064409A (en) * 2018-10-19 2018-12-21 广西师范大学 A kind of the visual pattern splicing system and method for mobile robot

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102903093A (en) * 2012-09-28 2013-01-30 中国航天科工集团第三研究院第八三五八研究所 Poisson image fusion method based on chain code mask
CN103514580A (en) * 2013-09-26 2014-01-15 香港应用科技研究院有限公司 Method and system used for obtaining super-resolution images with optimized visual experience
CN105023253A (en) * 2015-07-16 2015-11-04 上海理工大学 Visual underlying feature-based image enhancement method
WO2015169137A1 (en) * 2014-05-09 2015-11-12 华为技术有限公司 Image data collection processing method and related device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102903093A (en) * 2012-09-28 2013-01-30 中国航天科工集团第三研究院第八三五八研究所 Poisson image fusion method based on chain code mask
CN103514580A (en) * 2013-09-26 2014-01-15 香港应用科技研究院有限公司 Method and system used for obtaining super-resolution images with optimized visual experience
WO2015169137A1 (en) * 2014-05-09 2015-11-12 华为技术有限公司 Image data collection processing method and related device
CN105023253A (en) * 2015-07-16 2015-11-04 上海理工大学 Visual underlying feature-based image enhancement method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
V. T. MANU et al.: "Visual artifacts based image splicing detection in uncompressed images", 2015 IEEE International Conference on Computer Graphics, Vision and Information Security *
ZHENHUA QU et al.: "Information Hiding", 10 June 2009 *
孔韦韦: "Image fusion method based on human-eye visual characteristics in the NSST domain", Journal of Harbin Engineering University *
李钊 et al.: "Image quality assessment based on visual saliency and contrast characteristics", Journal of Nankai University (Natural Science Edition) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109064409A (en) * 2018-10-19 2018-12-21 广西师范大学 A kind of the visual pattern splicing system and method for mobile robot
CN109064409B (en) * 2018-10-19 2023-04-11 广西师范大学 Visual image splicing system and method for mobile robot

Also Published As

Publication number Publication date
CN107085828B (en) 2020-06-26

Similar Documents

Publication Publication Date Title
CN104574366B (en) A kind of extracting method in the vision significance region based on monocular depth figure
Yeh et al. Haze effect removal from image via haze density estimation in optical model
Yu et al. Physics-based fast single image fog removal
CN105631880B (en) Lane line dividing method and device
CN103971126B (en) A kind of traffic sign recognition method and device
Dai et al. Single underwater image restoration by decomposing curves of attenuating color
CN110991266B (en) Binocular face living body detection method and device
CN104504745B (en) A kind of certificate photo generation method split based on image and scratch figure
CN103207664A (en) Image processing method and equipment
CN103914699A (en) Automatic lip gloss image enhancement method based on color space
Yeh et al. Efficient image/video dehazing through haze density analysis based on pixel-based dark channel prior
CN111160291B (en) Human eye detection method based on depth information and CNN
US9565417B2 (en) Image processing method, image processing device, and electronic device
CN106251298B (en) Method and apparatus for processing image
CN108171674B (en) Vision correction method for projector image with any visual angle
TWI457853B (en) Image processing method for providing depth information and image processing system using the same
CN105513105A (en) Image background blurring method based on saliency map
CN104182970A (en) Souvenir photo portrait position recommendation method based on photography composition rule
CN110472628A (en) A kind of improvement Faster R-CNN network detection floating material method based on video features
CN102542544A (en) Color matching method and system
CN105657401A (en) Naked eye 3D display method and system and naked eye 3D display device
CN112396050A (en) Image processing method, device and storage medium
JP6797046B2 (en) Image processing equipment and image processing program
CN101853500A (en) Colored multi-focus image fusing method
CN104537632A (en) Infrared image histogram enhancing method based on edge extraction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200626

Termination date: 20210429

CF01 Termination of patent right due to non-payment of annual fee