CN101853499A - A Clear Photo Synthesis Method Based on Detail Detection - Google Patents



Publication number
CN101853499A
CN101853499A (application CN 201010162169 / CN201010162169A)
Authority
CN
China
Prior art keywords
pixel
detail
value
photo
degree
Prior art date
Legal status
Granted
Application number
CN 201010162169
Other languages
Chinese (zh)
Other versions
CN101853499B (en)
Inventor
石洗凡
刁常宇
鲁东明
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN2010101621699A
Publication of CN101853499A
Application granted
Publication of CN101853499B
Status: Expired - Fee Related

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a clear-photo synthesis method based on detail detection. The detail level of every pixel is computed in two photos to be synthesized whose size and position correspond exactly. The detail level of each pixel in the first photo, minus the detail level of the pixel at the corresponding position in the second photo, gives a discriminant value of the pixel detail level at each position. The discriminant value at each position is compared against a set high threshold and a set low threshold, and the detail level is compared against a set value, yielding a detail-type value for each position. The type values are low-pass filtered to remove possible misclassifications of detail type. According to the filtered type value at each position, a corresponding method is selected for image synthesis to obtain a clear photo. The method can synthesize clear photos and is applicable wherever a large depth of field, and sharpness at both the center and the edges, are required.

Description

A clear-photo synthesis method based on detail detection
Technical field
The present invention relates to the field of image processing, and in particular to a technique for synthesizing clear photos.
Background technology
At present, the photosite density of camera sensors keeps rising. Sony recently developed a sensor of some 34.8 million pixels, which is believed to be destined for its single-lens-reflex products (Canon's 21-megapixel EOS 1Ds Mark III was released more than two years ago; going by its past three-year refresh cycle, an update is expected this year). Meanwhile, as a participant in the megapixel race, Canon is naturally unwilling to lag behind, and the successor to the 1Ds Mark III will surely be a product of more than 30 million pixels.
According to MTF50 sharpness measurements published by a domestic photography website for the Canon EOS 1Ds Mark III paired with the EF 50mm F1.4 USM lens, the sharpest apertures at the center and at the edges are F4 and F5.6 respectively. But the depth of field at F4 or F5.6 is shallow, and in real shooting — especially of murals on inclined surfaces or irregular arched murals — it is difficult to keep the optical axis exactly parallel to the surface being photographed, which causes depth-of-field blur. A small aperture (F8, F11, or even F16) would give a large depth of field, but diffraction then blurs the entire photograph. The optical solution for such scenes is a tilt-shift lens, which renders a chosen plane sharp. In practice, however, rendering one plane of an arched mural sharp does not solve the problem, because the mural itself does not lie in a single plane; tilt-shift lenses also suffer from inferior image quality and high price. One may therefore consider taking several photos, each relatively sharp in some region, and finally synthesizing a single digital image that is sharp in every region.
Similar problems arise in photographing cultural relics and archaeological sites. The usual solution is to choose a suitable aperture according to the depth of the object and the depth-of-field scale on the lens. The drawback is that guaranteeing the depth of field requires a very small aperture, leaving the entire image slightly blurred; in effect, this method sacrifices the sharpness of the important region to secure the sharpness of the whole image. In practice such a scene usually has one region of interest that needs higher sharpness. Diffraction in that region can be reduced by opening the aperture so that this part is rendered sharply, while the background is shot with a small aperture to guarantee depth of field; the two images are finally synthesized to produce one optimal photo.
Moreover, even for a planar subject, the center and the edges in fact resolve differently at different apertures. Every lens has one aperture of best center resolution and one of best edge resolution, and unfortunately the two are often different — in the website test above, the sharpest apertures for the center and the edges were F4 and F5.6 respectively. In the film era one could only compromise between these two best apertures; in the digital era, digital image processing can detect the sharpness of different regions of photos taken at different apertures, combine the sharpest regions, and finally synthesize an image that is sharpest at both the center and the edges.
Summary of the invention
The invention provides a clear-photo synthesis method based on detail detection, which automatically identifies the sharp parts of several photos by algorithm and finally synthesizes one photo in which every part is sharp. It can be applied wherever a large depth of field, and sharpness at both the center and the edges, are required.
A clear-photo synthesis method based on detail detection comprises:
(1) calculating the detail level of each pixel in two photos to be synthesized whose size and position correspond exactly;
It is assumed that the two photos to be synthesized correspond exactly in size and position (if not, a registration algorithm can first align and adjust them until they do), and that their brightness and color are essentially consistent (if not, they can first be adjusted until they are).
"Detail" here can be understood as contrast at object edges (boundaries) — for example, where branches meet the sky. If detail is abundant, the pixels at the boundary between branch and sky retain their original colors in the photo; otherwise the colors of branch and sky blend in those pixels. Detail can therefore be characterized by the difference between the maximum and the minimum value within a small region. The value is a component, or a weighted mean of components, in some color space; the color space can be chosen flexibly as needed, including but not limited to HSV, HSI, RGB, CMYK, HSL, HSB, Ycc, XYZ, Lab, and YUV. Components from different color spaces may even be averaged with weights, for example the M of CMYK with the H of HSV. The "small region" can be the n × n neighborhood around the pixel (for example n = 3) or a neighborhood taken in some other way (for example a circular neighborhood). In the end, the detail level of a pixel is described by the degree of difference between the maximum and the minimum of some value (or the average of several values) over some color space(s), taken over all pixels in the neighborhood around that pixel. For a blurred image, clearly, the maximum and minimum in the small neighborhood of a pixel differ little in any color space; for a sharp image, at least in some color space, some component exhibits a large difference.
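As a minimal sketch of the max-minus-min detail measure described above, the following NumPy function computes it over an n × n neighborhood of a single colour-space component (the function name and the single-channel input are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def detail_level(channel: np.ndarray, n: int = 3) -> np.ndarray:
    """Detail level of each pixel: max minus min of one colour-space
    component over the n x n neighbourhood centred on the pixel."""
    pad = n // 2
    # int16 avoids uint8 wrap-around when subtracting; edge padding keeps
    # border pixels comparable with interior ones.
    padded = np.pad(channel.astype(np.int16), pad, mode="edge")
    h, w = channel.shape
    # Stack all n*n shifted views and take the per-pixel range.
    views = [padded[dy:dy + h, dx:dx + w]
             for dy in range(n) for dx in range(n)]
    stack = np.stack(views)
    return stack.max(axis=0) - stack.min(axis=0)
```

A single bright pixel against a flat background gives every neighbouring pixel a large detail level, while a perfectly flat patch scores zero — matching the blurred-versus-sharp intuition in the paragraph above.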
(2) subtracting the detail level of the pixel at the corresponding position in the second photo from the detail level of each pixel in the first photo, to obtain a discriminant value of the pixel detail level at each position;
(3) comparing the discriminant value of the pixel detail level at each position against the set high threshold and low threshold, and comparing the detail level against a set value, to obtain the detail-type value of each position;
If discriminant value > high threshold, then type value pType = 1;
If discriminant value < low threshold, then type value pType = 2;
If low threshold ≤ discriminant value ≤ high threshold and, in either photo, the detail level of the pixel at the current position reaches the set value, then pType = 3. The set value of the pixel detail level can be chosen as needed: a high set value means that only clearly visible detail in a photo is detected, while a low set value places a lower demand on the photo's detail level;
Everything remaining is assigned to one class, type value pType = 0, since pType is initialized to 0.
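Assuming the two detail maps are available as arrays, the four-way classification above might be sketched as follows (the default thresholds 15, −11, and 35 follow the embodiment later in this document; the function and argument names are illustrative):

```python
import numpy as np

def classify(det1, det2, hi=15, lo=-11, min_detail=35):
    """pType per pixel from the discriminant d = det1 - det2."""
    d = det1.astype(np.int32) - det2.astype(np.int32)
    ptype = np.zeros(d.shape, dtype=np.uint8)      # default: class 0
    ptype[d > hi] = 1                              # first photo clearly sharper
    ptype[d < lo] = 2                              # second photo clearly sharper
    between = (d >= lo) & (d <= hi)
    has_detail = (det1 >= min_detail) | (det2 >= min_detail)
    ptype[between & has_detail] = 3                # comparable, real detail present
    return ptype, d
```

Note that class 3 requires real detail in at least one photo, so flat regions with a small discriminant fall through to class 0, exactly as the text prescribes.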
(4) low-pass filtering the detail-type value at each position to remove possible misclassifications of detail type;
If the number of pixels in the connected region (of identical pType) containing some pixel is below a certain threshold, the pType of that connected region is considered a misclassification and is replaced with the pType of the pixels surrounding the region.
Since pType is rather abstract, imagine a pType image in which each pType value maps to a different color — say 1, 2, 3, and 0 map to red, green, blue, and black respectively. One can then picture the colored pType map divided into sub-regions of differing colors: within each sub-region the color is uniform and any two pixels are connected, where "connected" means that there exists at least one path between the two pixels along which every pixel (including the start and end points) has the same color (i.e. the same pType).
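A plain-Python sketch of this connected-region filter, using 4-connectivity and replacing each small region with the most frequent pType on its border (both the 4-neighborhood choice and the majority-vote replacement are assumptions — the patent only says the surrounding pixels' pType is used):

```python
from collections import Counter, deque

def suppress_small_regions(ptype, min_size=10):
    """Overwrite connected regions (equal pType, 4-connected) smaller than
    min_size with the most common pType found on the region's border."""
    h, w = len(ptype), len(ptype[0])
    out = [row[:] for row in ptype]
    seen = [[False] * w for _ in range(h)]
    for sy in range(h):
        for sx in range(w):
            if seen[sy][sx]:
                continue
            val = out[sy][sx]
            region, border = [], Counter()
            queue = deque([(sy, sx)])
            seen[sy][sx] = True
            while queue:                       # flood fill one region
                y, x = queue.popleft()
                region.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w:
                        if out[ny][nx] == val and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                        elif out[ny][nx] != val:
                            border[out[ny][nx]] += 1
            if len(region) < min_size and border:
                replacement = border.most_common(1)[0][0]
                for y, x in region:
                    out[y][x] = replacement
    return out
```

An isolated one-pixel "island" of pType 1 inside a sea of pType 0 is thus treated as a misjudgment and repainted 0, which is the low-pass effect the step is after.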
(5) according to the detail-type value at each position after misclassification removal, selecting the corresponding method of image synthesis to obtain the clear photo;
After step (4), each pixel falls into one of the following four classes, and the final image synthesis proceeds as follows:
If type value pType = 0, the pixel at this position has little or no detail in both images; the final composite uses a noise-reduced weighted mean of the two images, the two weights being the distances from the discriminant value to the high threshold and to the low threshold respectively.
Preferably, after the weighted mean, a mean filter over the surrounding neighborhood pixels is applied to the composite for noise reduction (weighted mean first, then noise reduction), so that the processed image exhibits less noise.
This is mainly because the two exposures have a just-noticeable difference. Even if no brightness inconsistency remains after correction, consider the following case: against a white background, neither image has any detail, yet the detail levels at those white-background positions in the two images are unlikely to be exactly equal. The weights follow the discriminant value, and the discriminant value follows the detail level, so the discriminant value may fluctuate somewhat. Now imagine further that the white background is slightly brighter in the first image and slightly darker in the second, and that two adjacent pixels have somewhat different discriminant values: after synthesis those two neighbors differ because their weights differ — the synthesis algorithm has "artificially" produced detail that never existed. Such false detail (noise) is clearly undesirable and must be removed. Conversely, if real detail existed at that spot, the error caused by the differing weights is too small to affect it — the real detail difference is larger (even far larger) than the error — so neighboring pixels are dominated by the true detail. This is precisely why type values 0 and 3 must be treated differently: when the type value is 0, mild noise reduction erases the false detail (noise); when the type value is 3, noise reduction would instead weaken or even erase detail that genuinely exists.
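The post-blend mean filter mentioned above could look like this minimal sketch — a simple n × n box average over an H × W × C image (restricting it to pType = 0 regions is left to the caller; the function name is an assumption):

```python
import numpy as np

def box_denoise(img, n=3):
    """Mean filter over the n x n neighbourhood of each pixel, intended to
    erase blend-induced false detail after the weighted average."""
    pad = n // 2
    padded = np.pad(img.astype(np.float64),
                    ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    h, w = img.shape[:2]
    acc = np.zeros_like(img, dtype=np.float64)
    # Sum the n*n shifted copies, then divide: a plain box filter.
    for dy in range(n):
        for dx in range(n):
            acc += padded[dy:dy + h, dx:dx + w]
    return (acc / (n * n)).astype(img.dtype)
```

Because it averages a pixel with its neighbours, single-pixel fluctuations introduced by the weighting are flattened, while a constant region passes through unchanged.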
If type value pType = 1, the first image clearly has more detail, and the final image takes the first.
If type value pType = 2, the second image clearly has more detail, and the final image takes the second.
If type value pType = 3, one of the images has detail and their sharpness is roughly comparable; the final composite uses a weighted mean of the two images, the two weights being the distances from the discriminant value to the high threshold and to the low threshold respectively.
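Putting the four classes together, the synthesis step might be sketched as follows. Normalizing the two threshold distances into weights that sum to one is one reasonable reading of "the weights are the distances from the discriminant value to the high and low thresholds"; the names and defaults are assumptions:

```python
import numpy as np

def fuse(img1, img2, d, ptype, hi=15, lo=-11):
    """Blend two aligned photos per the pType map.
    1 -> take img1, 2 -> take img2, 0 or 3 -> distance-weighted mean."""
    # Start from the hard choices: class 1 takes img1, everything else img2.
    out = np.where((ptype == 1)[..., None], img1, img2).astype(np.float64)
    blend = (ptype == 0) | (ptype == 3)
    # Weight of img1 grows as d approaches the high threshold.
    w1 = np.clip((d - lo) / float(hi - lo), 0.0, 1.0)
    mixed = w1[..., None] * img1 + (1.0 - w1)[..., None] * img2
    out[blend] = mixed[blend]
    return out.astype(img1.dtype)
```

When d sits at the high threshold the first photo gets full weight, at the low threshold the second does, and in between the two photos are mixed proportionally — consistent with classes 1 and 2 being the limiting cases of the blend.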
The method of the invention automatically identifies the sharp parts of several photos by algorithm and finally synthesizes one photo in which every part is sharp. It can be applied wherever a large depth of field, and sharpness at both the center and the edges, are required.
Description of drawings
Fig. 1 shows the result of shooting at the large aperture.
Fig. 2 shows the result of shooting at the small aperture.
Fig. 3 is an enlarged partial view of the large-aperture result.
Fig. 4 is an enlarged partial view of the small-aperture result.
Fig. 5 is an enlarged partial view of the composite of the large-aperture and small-aperture results.
Embodiment
Using a Canon EF 28-135 lens at its 135mm end on a Canon EOS 400D body, the content displayed on a notebook computer (ThinkPad W500) screen was shot twice, at F5.6 and at F16. The optical axis of the lens was not perpendicular to the notebook screen but at an angle (45 degrees) to it, artificially producing a depth-of-field effect; the notebook screen was perpendicular to the desktop, and the optical axis of the lens parallel to it. Between the two shots only the aperture was changed — camera position, orientation, lens, and focal length were all kept constant — to guarantee that the position of any given object is identical in the two photos.
When shooting at the large aperture (F5.6), the part within the depth of field is very sharp, while the part outside it is blurred. Conversely, shooting at the small aperture (F16) gives a larger depth of field, but diffraction blurs the whole image slightly — though it remains sharper than the parts outside the large-aperture depth of field.
Because the resolution of the EOS 400D is very high and page space is limited, the images were simplified to illustrate the algorithm. First, in the vertical direction the object distance is constant, so a strip 120 pixels tall was chosen as sufficiently representative. Then, in the horizontal direction, the far right of the original photo is blurrier than the right side of Fig. 1 and was cropped away. In addition, to make the blur conspicuous, a 15-inch high-resolution WUXGA display was used, and since the lens has a minimum focusing distance, the display could not fill the whole frame. Combining these two factors, a region of 2400 × 128 pixels was finally extracted from the photo taken at the large aperture (F5.6) to obtain Fig. 1, and the corresponding 2400 × 128 pixel region from the photo taken at the small aperture (F16) to obtain Fig. 2. Since Figs. 1 and 2 may not be legible, they are shown enlarged in strips in Fig. 3 and Fig. 4 respectively.
The detail level of each pixel in Fig. 1 and Fig. 2 is computed: the detail level of a pixel is described by the degree of difference between the maximum and the minimum within the 3 × 3 neighborhood around that pixel. When computing this difference, the value of the R channel among the RGB channels of all pixels in the 3 × 3 neighborhood is taken, and the minimum R value is subtracted from the maximum R value to obtain the degree of difference.
Subtracting the detail level of the pixel at the corresponding position in the second photo (Fig. 2) from the detail level of each pixel in the first photo (Fig. 1) yields the discriminant value of the pixel detail level at each position.
The discriminant value at each position is compared against the set high and low thresholds, and the detail level against the set value, yielding the detail-type value of each position:
If discriminant value > high threshold (taken as 15), then type value pType = 1;
If discriminant value < low threshold (taken as -11), then type value pType = 2;
If -11 ≤ discriminant value ≤ 15 and, in either photo, the detail level of the pixel at the current position reaches the set value of 35, then pType = 3;
Everything remaining is assigned to one class, type value pType = 0, since pType is zeroed at initialization.
The detail-type value at each position is low-pass filtered to remove possible misclassifications of detail type: if the number of pixels in the connected region containing some pixel is below the threshold (10), the pType of that connected region is considered a misclassification and is replaced with the pType of the surrounding pixels.
After this noise removal, synthesis proceeds according to the detail-type value at each position:
If type value pType = 0, neither image has detail there; the final composite uses a noise-reduced weighted mean of the two images (weighted mean first, then noise reduction), the two weights being the distances from the discriminant value to the high threshold and to the low threshold respectively.
If type value pType = 1, the first image clearly has more detail, and the final image takes the first.
If type value pType = 2, the second image clearly has more detail, and the final image takes the second.
If type value pType = 3, one of the images has detail and their sharpness is roughly comparable; the final composite uses a weighted mean of the two images, the two weights being the distances from the discriminant value to the high threshold and to the low threshold respectively.
After the detail-type values at each position in Fig. 1 and Fig. 2 are judged accordingly and the pixels synthesized, a clear photo is obtained, shown enlarged in strips in Fig. 5. One can see that the synthesis has indeed absorbed the advantages of both photos, achieving a result sharper than either of the two.

Claims (9)

1. A clear-photo synthesis method based on detail detection, characterized by comprising the steps of:
(1) calculating the detail level of each pixel in two photos to be synthesized whose size and position correspond exactly;
(2) subtracting the detail level of the pixel at the corresponding position in the second photo from the detail level of each pixel in the first photo, to obtain a discriminant value of the pixel detail level at each position;
(3) comparing the discriminant value of the pixel detail level at each position against a set high threshold and a set low threshold, and comparing the detail level against a set value, to obtain a detail-type value for each position;
(4) low-pass filtering the detail-type value at each position to remove possible misclassifications of detail type;
(5) according to the detail-type value at each position after misclassification removal, selecting the corresponding method for image synthesis to obtain a clear photo.

2. The clear-photo synthesis method of claim 1, characterized in that the detail level of each pixel in step (1) is the degree of difference between the maximum and minimum values in a neighborhood around that pixel.

3. The clear-photo synthesis method of claim 2, characterized in that the neighborhood around the pixel is an n × n square neighborhood or a circular neighborhood around the pixel.

4. The clear-photo synthesis method of claim 1, characterized in that in step (3): if discriminant value > high threshold, the type value pType = 1; if discriminant value < low threshold, the type value pType = 2; if low threshold ≤ discriminant value ≤ high threshold and, in either photo, the detail level of the pixel at the current position reaches the set value, then pType = 3; everything remaining is assigned to one class, with type value pType = 0.

5. The clear-photo synthesis method of claim 1, characterized in that in step (4), if the number of identical pixel detail-type values in a connected region is below a set threshold, those pixel detail-type values are considered misclassifications.

6. The clear-photo synthesis method of claim 5, characterized in that the misclassification removal of step (4) replaces the type values of pixel details considered misclassified with the type values of surrounding pixel details that are not misclassified.

7. The clear-photo synthesis method of claim 1, characterized in that in step (5) the pixel of the final composite image at each position is synthesized as follows: if the detail-type value at a position after misclassification removal is 0 or 3, that position uses a weighted mean of the pixels of the two photos to be synthesized, the two weights being the distances from the discriminant value to the high threshold and to the low threshold respectively; if the detail-type value at a position after misclassification removal is 1, the pixel at the corresponding position of the first photo is taken; if the detail-type value at a position after misclassification removal is 2, the pixel at the corresponding position of the second photo is taken.

8. The clear-photo synthesis method of claim 7, characterized in that if the detail-type value at a position after misclassification removal is 0, after the weighted mean of the pixels of the two photos is taken, a mean filter over the surrounding neighborhood pixels of the composite is applied for noise reduction.

9. The clear-photo synthesis method of claim 2, characterized in that the degree of difference between the maximum and minimum values is taken over a component, or a weighted mean of components, in one or several color spaces, the color spaces including but not limited to HSV, HSI, RGB, CMYK, HSL, HSB, Ycc, XYZ, Lab, or YUV.
CN2010101621699A 2010-04-30 2010-04-30 Clear picture synthesis method based on detail detection Expired - Fee Related CN101853499B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010101621699A CN101853499B (en) 2010-04-30 2010-04-30 Clear picture synthesis method based on detail detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2010101621699A CN101853499B (en) 2010-04-30 2010-04-30 Clear picture synthesis method based on detail detection

Publications (2)

Publication Number Publication Date
CN101853499A true CN101853499A (en) 2010-10-06
CN101853499B CN101853499B (en) 2012-01-25

Family

ID=42804964

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010101621699A Expired - Fee Related CN101853499B (en) 2010-04-30 2010-04-30 Clear picture synthesis method based on detail detection

Country Status (1)

Country Link
CN (1) CN101853499B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102622736A (en) * 2011-01-28 2012-08-01 鸿富锦精密工业(深圳)有限公司 Image processing system and method
CN104952048A (en) * 2015-06-09 2015-09-30 浙江大学 Focus stack photo fusing method based on image reconstruction
CN103795920B (en) * 2014-01-21 2017-06-20 宇龙计算机通信科技(深圳)有限公司 Photo processing method and device
CN112381836A (en) * 2020-11-12 2021-02-19 贝壳技术有限公司 Image processing method and device, computer readable storage medium, and electronic device
CN115358951A (en) * 2022-10-19 2022-11-18 广东电网有限责任公司佛山供电局 Intelligent ring main unit monitoring system based on image recognition

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003225428A (en) * 2002-02-05 2003-08-12 Shinnichi Electronics Kk Picture display device for pachinko machine, and picture displaying method and picture displaying program for the picture display device
US20070002070A1 (en) * 2005-06-30 2007-01-04 Microsoft Corporation Sub-pass correction using neighborhood matching
CN101052100A (en) * 2007-03-29 2007-10-10 上海交通大学 Multiple exposure image intensifying method
CN101394485A (en) * 2007-09-20 2009-03-25 华为技术有限公司 Image generating method, apparatus and image composition equipment


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102622736A (en) * 2011-01-28 2012-08-01 鸿富锦精密工业(深圳)有限公司 Image processing system and method
CN102622736B (en) * 2011-01-28 2017-08-04 鸿富锦精密工业(深圳)有限公司 Image processing system and method
CN103795920B (en) * 2014-01-21 2017-06-20 宇龙计算机通信科技(深圳)有限公司 Photo processing method and device
CN104952048A (en) * 2015-06-09 2015-09-30 浙江大学 Focus stack photo fusing method based on image reconstruction
CN104952048B (en) * 2015-06-09 2017-12-08 浙江大学 A kind of focus storehouse picture synthesis method based on as volume reconstruction
CN112381836A (en) * 2020-11-12 2021-02-19 贝壳技术有限公司 Image processing method and device, computer readable storage medium, and electronic device
CN115358951A (en) * 2022-10-19 2022-11-18 广东电网有限责任公司佛山供电局 Intelligent ring main unit monitoring system based on image recognition
CN115358951B (en) * 2022-10-19 2023-01-24 广东电网有限责任公司佛山供电局 Intelligent ring main unit monitoring system based on image recognition

Also Published As

Publication number Publication date
CN101853499B (en) 2012-01-25

Similar Documents

Publication Publication Date Title
CN108377343B (en) Exposure selector for high dynamic range imaging and related method
CN103366352B (en) Apparatus and method for producing the image that background is blurred
JP4772839B2 (en) Image identification method and imaging apparatus
TWI464706B (en) Dark portion exposure compensation method for simulating high dynamic range with single image and image processing device using the same
EP1583356B1 (en) Image processing device and image processing program
WO2007095483A2 (en) Detection and removal of blemishes in digital images utilizing original images of defocused scenes
WO2017214523A1 (en) Mismatched foreign light detection and mitigation in the image fusion of a two-camera system
US9361669B2 (en) Image processing apparatus, image processing method, and program for performing a blurring process on an image
WO2002005544A1 (en) Image processing method, recording medium, and image processing device
CN101853499B (en) Clear picture synthesis method based on detail detection
CN103797782A (en) Image processing device and program
CN102737365B (en) Image processing apparatus, camera head and image processing method
CN106296625A (en) Image processing apparatus and image processing method, camera head and image capture method
JP2013025650A (en) Image processing apparatus, image processing method, and program
JP2013025651A (en) Image processing apparatus, image processing method, and program
JP2005079856A (en) Image processing unit and picture processing program
CN117456371B (en) A method, device, equipment and medium for detecting hot spots in strings
CN106791351A (en) Panoramic picture treating method and apparatus
CN114549373A (en) HDR image generation method and device, electronic equipment and readable storage medium
JP2015192338A (en) Image processing device and image processing program
US20160275345A1 (en) Camera systems with enhanced document capture
TW201820261A (en) Image synthesis method for character images characterized by using a processing module combining a first image with a second image
JP5213493B2 (en) Motion detection device
Zamfir et al. An optical model of the appearance of blemishes in digital photographs
JP4523945B2 (en) Image processing method, image processing apparatus, and image processing program

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20120125

Termination date: 20140430