CN101853499A - Clear picture synthesis method based on detail detection - Google Patents
Clear picture synthesis method based on detail detection
- Publication number
- CN101853499A CN101853499A CN 201010162169 CN201010162169A CN101853499A CN 101853499 A CN101853499 A CN 101853499A CN 201010162169 CN201010162169 CN 201010162169 CN 201010162169 A CN201010162169 A CN 201010162169A CN 101853499 A CN101853499 A CN 101853499A
- Authority
- CN
- China
- Prior art keywords
- pixel
- value
- detail
- details
- synthesis method
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Processing (AREA)
Abstract
The invention discloses a clear picture synthesis method based on detail detection, which comprises the following steps: the detail degree of each pixel is calculated in two photos to be synthesized whose sizes and positions correspond exactly; the detail degree of the pixel at the corresponding position in the second photo to be synthesized is subtracted from the detail degree of each pixel in the first photo to be synthesized, giving a discriminant value of the pixel detail degree at each position; the discriminant value at each position is compared with set high and low thresholds, and the detail degree is compared with a set value, to obtain a detail type value for the pixel at each position; the type values are low-pass filtered to remove possible detail-type misjudgments; and according to the type value at each position after misjudgment removal, a corresponding synthesis mode is chosen for each pixel to obtain a clear picture. The method can synthesize clear pictures and is applicable wherever a large depth of field is needed and both the center and the edge must be photographed sharply.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to a clear picture synthesis technique.
Background technology
At present, the photosensor density of cameras keeps rising. Sony recently developed a 34.8-megapixel sensor, which will presumably appear in single-lens-reflex products (Canon's 21-megapixel EOS-1Ds Mark III was released more than two years ago; on its past three-year refresh cycle, an update is expected this year). Meanwhile, as a participant in the megapixel race, Canon is naturally unwilling to fall behind, and the next generation of the EOS-1Ds Mark III will certainly be a product of more than 30 megapixels.
According to MTF50 sharpness results published by a domestic photography website for the Canon EOS-1Ds Mark III paired with the EF 50mm F1.4 USM lens, the optimal apertures at the center and the edge are F4 and F5.6 respectively. But the depth of field at F4 and F5.6 is shallow, and in actual shooting, particularly of inclined murals or irregular arched murals, it is difficult to keep the optical axis exactly parallel to the surface being photographed, which causes depth-of-field blur. A small aperture (F8, F11 or even F16) would give a large depth of field, but diffraction then blurs the whole image. The optical solution for such occasions is to tilt the lens axis so that some chosen plane images sharply. But for an arched mural a single sharp plane does not solve the problem, because the mural itself does not lie in one plane. Tilt lenses also have drawbacks such as inferior image quality and high price. One can therefore consider taking several photos, each relatively sharp in some region, and finally synthesizing a digital picture that is sharp in every region.
Similar problems arise in photographing cultural relics and archaeological sites, where the usual solution is to choose a suitable aperture according to the depth of the object and the depth-of-field scale on the lens. The drawback is that a very small aperture must be used to guarantee the depth of field, so the entire image becomes slightly blurred; this method guarantees the sharpness of the whole image at the cost of the sharpness of the important region. In practice, however, such scenes usually have a particular region of interest that requires higher sharpness. Opening the aperture reduces the diffraction effect in the region of interest and makes that content sharp, while a small aperture is used to guarantee the depth of field of the background; finally the two images are synthesized into one optimal photo.
Moreover, even for a planar subject, the center and the edge resolve differently at different apertures. For any lens there must be an aperture of best center resolution and an aperture of best edge resolution, and unfortunately the two often differ; in the website test above, the sharpest apertures for the center and the edge are F4 and F5.6 respectively. In the film era one could only compromise between the two best apertures, but in the digital era, digital image processing can detect the sharpness of different regions of photos taken at different apertures, combine the sharpest regions, and finally synthesize an image that is sharpest at both the center and the edge.
Summary of the invention
The invention provides a clear picture synthesis method based on detail detection that automatically identifies, by algorithm, the sharp part of each of several photos and finally synthesizes one photo in which every part is sharp. It can be applied wherever a large depth of field is needed and both the center and the edge must be sharp.
A clear picture synthesis method based on detail detection comprises:
(1) calculating the detail degree of each pixel in two photos to be synthesized whose sizes and positions correspond exactly;
It is assumed that the sizes and positions of the two photos to be synthesized correspond exactly (if not, a registration algorithm can first align and adjust them) and that their brightness and color are essentially consistent (if not, they can first be adjusted to make them so).
Detail can be understood as contrast at an object edge (boundary), for example where branches meet the sky: if detail is rich, the pixels at the branch/sky boundary keep their original colors; otherwise the colors of branch and sky blend in those pixels. Detail can therefore be characterized by the difference between the maximum and minimum of some value within a small region. The value is a component, or a weighted mean of components, in some color space, and the color space can be chosen flexibly as needed, including but not limited to HSV, HSI, RGB, CMYK, HSL, HSB, Ycc, XYZ, Lab and YUV. Components of different color spaces can even be mixed, such as a weighted mean of M from CMYK and H from HSV. The small region can be the n x n neighborhood around the pixel (e.g. n = 3) or some other neighborhood, such as a circular one. The detail degree of a pixel can thus be described by the difference between the maximum and minimum of some value (or the average of several values) in one or several color spaces over all pixels in the neighborhood around it. For a blurred image, the difference between maximum and minimum in the small neighborhood is small in every color space; for a sharp image, at least in some color space some component shows a large difference.
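As a minimal sketch of this detail measure (assuming a single-channel NumPy array and the n x n max-minus-min neighborhood described above; the function name and implementation details are illustrative, not part of the patent):

```python
import numpy as np

def detail_degree(channel: np.ndarray, n: int = 3) -> np.ndarray:
    """Detail degree per pixel: max minus min over an n x n neighborhood,
    with edge replication at the image border."""
    pad = n // 2
    padded = np.pad(channel.astype(np.int32), pad, mode="edge")
    h, w = channel.shape
    # Stack every shifted n x n view, then reduce along the stack axis.
    views = np.stack([padded[i:i + h, j:j + w]
                      for i in range(n) for j in range(n)])
    return views.max(axis=0) - views.min(axis=0)
```

A flat region yields 0 everywhere, while a sharp boundary yields the full intensity step, matching the intuition above.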
(2) subtracting the detail degree of the pixel at the corresponding position in the second photo to be synthesized from the detail degree of each pixel in the first photo to be synthesized, obtaining the discriminant value of the pixel detail degree at each position;
(3) comparing the discriminant value at each position with the set high and low thresholds, and the detail degree with a set value, to obtain the detail type value of the pixel at each position:
If the discriminant value > the high threshold, the type value pType = 1.
If the discriminant value < the low threshold, the type value pType = 2.
If low threshold <= discriminant value <= high threshold and, in either photo, the detail degree of the pixel at the current position reaches a set value, then pType = 3. The set value for the pixel detail degree can be chosen as needed: a high set value means only very clear detail in the photos is detected; a low set value is less demanding of the detail.
All remaining pixels fall into the last class, type value pType = 0, since pType is initialized to 0.
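Steps (2) and (3) could be sketched as follows (the default thresholds 15, -11 and 35 are the values used later in the embodiment, not fixed by the method; the function name is illustrative):

```python
import numpy as np

def classify_types(detail1, detail2, hi=15, lo=-11, set_value=35):
    """Compute the discriminant (detail1 - detail2) and threshold it into
    the four detail type values pType = 0..3 described in step (3)."""
    disc = detail1.astype(np.int32) - detail2.astype(np.int32)
    ptype = np.zeros(disc.shape, dtype=np.uint8)      # everything starts as class 0
    ptype[disc > hi] = 1                              # first photo clearly sharper
    ptype[disc < lo] = 2                              # second photo clearly sharper
    in_band = (disc >= lo) & (disc <= hi)             # comparable sharpness
    has_detail = (detail1 >= set_value) | (detail2 >= set_value)
    ptype[in_band & has_detail] = 3                   # comparable, real detail present
    return ptype, disc
```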
(4) low-pass filtering the detail type values of each position to remove possible detail-type misjudgments;
If the number of pixels in the connected region (same pType) containing a pixel is less than a threshold, the pType of that connected region is considered a misjudgment and is replaced with the pType of the pixels surrounding the region.
Since pType is rather abstract, it can be imagined as a pType image in which each type value maps to a different color, say the four type values above to red, green, blue and black respectively. The colored pType map then divides into subregions of differing colors, each subregion of a single color with any two of its pixels connected; "connected" means that between the two pixels there is at least one path all of whose points (including the start and end) have the same color (i.e. pType).
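A pure-Python sketch of this connected-region low-pass filter (4-connectivity and a majority vote over the region's border are implementation choices of mine; the patent only specifies replacing a too-small region's pType with that of its surroundings):

```python
import numpy as np
from collections import deque, Counter

def remove_small_regions(ptype: np.ndarray, min_size: int = 10) -> np.ndarray:
    """Recolor connected same-pType regions smaller than min_size with the
    most common pType found on the region's border."""
    h, w = ptype.shape
    out = ptype.copy()
    seen = np.zeros((h, w), dtype=bool)
    for sy in range(h):
        for sx in range(w):
            if seen[sy, sx]:
                continue
            val = ptype[sy, sx]
            region, border = [], Counter()
            queue = deque([(sy, sx)])
            seen[sy, sx] = True
            while queue:                      # flood-fill one connected region
                y, x = queue.popleft()
                region.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w:
                        if ptype[ny, nx] == val:
                            if not seen[ny, nx]:
                                seen[ny, nx] = True
                                queue.append((ny, nx))
                        else:
                            border[ptype[ny, nx]] += 1
            if len(region) < min_size and border:
                fill = border.most_common(1)[0][0]
                for y, x in region:
                    out[y, x] = fill          # treat small region as misjudged
    return out
```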
(5) according to the detail type value at each position after misjudgment removal, choosing the corresponding synthesis mode for each pixel to obtain a clear picture.
After step (4), each pixel falls into one of the following four classes, and the final image synthesis is as follows:
If the type value pType = 0, the pixel has little or no detail in either image; the final composite adopts the weighted mean of the two images followed by noise reduction, the two weights being the distances between the discriminant value and the high and low thresholds.
Preferably, after the weighted mean is taken, a mean filter over the surrounding neighborhood pixels is applied to the synthesized result (weighted mean first, noise reduction after), so that the processed image shows less noise.
This is mainly because the two exposures differ slightly. Even after correction, some brightness inconsistency may remain. Imagine a white background where both images have almost no detail: the detail degrees of these white-background positions in the two images can hardly be exactly identical, and since the weights come from the discriminant value, which in turn is computed from the detail degrees, the discriminant value fluctuates somewhat. Now suppose the white area is slightly brighter in the first image and slightly darker in the second, and that two adjacent pixels have slightly different discriminant values. The two neighbors then receive different weights, so the synthesized image shows a difference between them that did not originally exist: the synthesis algorithm has "artificially" produced detail. Such false detail (noise) is clearly undesirable and must be removed. Conversely, if real detail existed at that spot, the error caused by the differing weights is too small to affect it; that is, neighboring pixels are dominated by the true detail, whose difference is larger, indeed much larger, than the error. This is precisely why type 0 and type 3 must be treated differently: mild noise reduction at type 0 erases the false detail (noise), whereas noise reduction at type 3 would weaken or even erase detail that really exists.
If the type value pType = 1, the first image clearly has more detail, and the final image takes the first.
If the type value pType = 2, the second image clearly has more detail, and the final image takes the second.
If the type value pType = 3, one of the images has detail and their sharpness is roughly comparable; the final composite adopts the weighted mean of the two images, the two weights being the distances between the discriminant value and the high and low thresholds.
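The per-pixel rules above can be sketched as one fusion function (single-channel images; the post-blend mean filtering for class 0 described above is omitted for brevity; names and the NumPy formulation are illustrative):

```python
import numpy as np

def synthesize(img1, img2, ptype, disc, hi=15, lo=-11):
    """Step (5): take img1 where pType == 1, img2 where pType == 2, and a
    weighted mean elsewhere, weighted by the distances from the discriminant
    value to the low and high thresholds."""
    img1 = img1.astype(np.float64)
    img2 = img2.astype(np.float64)
    w1 = np.clip(disc - lo, 0, None).astype(np.float64)  # nearer hi: favor img1
    w2 = np.clip(hi - disc, 0, None).astype(np.float64)  # nearer lo: favor img2
    total = np.where(w1 + w2 == 0, 1.0, w1 + w2)
    blend = (w1 * img1 + w2 * img2) / total
    out = np.where(ptype == 1, img1, np.where(ptype == 2, img2, blend))
    return np.clip(out, 0, 255).astype(np.uint8)
```

With the embodiment's thresholds, a pixel whose discriminant sits midway between them (e.g. disc = 2) gets equal weights, so the blend is the plain average of the two images at that position.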
The method of the invention automatically identifies, by algorithm, the sharp part of each of several photos and finally synthesizes one photo in which every part is sharp. It can be applied wherever a large depth of field is needed and both the center and the edge must be photographed sharply.
Description of drawings
Fig. 1 shows the result shot with the large aperture.
Fig. 2 shows the result shot with the small aperture.
Fig. 3 is the large-aperture result enlarged and displayed in split lines.
Fig. 4 is the small-aperture result enlarged and displayed in split lines.
Fig. 5 is the composite of the large-aperture and small-aperture results, enlarged and displayed in split lines.
Embodiment
Using a Canon EF 28-135 lens at its 135 mm end on a Canon EOS 400D body, the content displayed on a notebook computer (ThinkPad W500) was photographed twice, at F5.6 and at F16. The optical axis of the lens was not perpendicular to the notebook screen but at an angle of about 45 degrees to it, artificially producing a depth-of-field effect; the screen was perpendicular to the desktop and the optical axis of the lens parallel to it. Apart from the aperture, everything else (camera position, orientation, lens and focal length) was kept unchanged between the two photos, ensuring that each object occupies the same position in both.
When shooting with the large aperture (F5.6), the part within the depth of field is very sharp while the part outside it is blurred. With the small aperture (F16) the depth of field is larger, but because of diffraction the whole image is slightly blurred, though still sharper than the out-of-focus part of the large-aperture shot.
Because the resolution of the EOS 400D is high and page space is limited, the images were simplified to illustrate the effect of the algorithm. First, in the vertical direction the object distance is constant, so a strip 120 pixels tall was chosen as sufficiently representative. Second, in the horizontal direction, the rightmost part of the original photo, blurrier still than the right edge of Fig. 1, was cropped away. In addition, to make the blur obvious a 15-inch high-resolution WUXGA display was used, and since the lens has a minimum focusing distance the screen could not fill the whole frame. Combining these factors, a region of 2400 x 128 pixels was finally extracted from the photo taken at the large aperture (F5.6) to obtain Fig. 1, and the corresponding region of 2400 x 128 pixels from the photo taken at the small aperture (F16) to obtain Fig. 2. Since Figs. 1 and 2 may not be clear enough, they are displayed enlarged in split lines, as shown in Fig. 3 and Fig. 4 respectively.
The detail degree of each pixel in Fig. 1 and Fig. 2 is calculated; the detail degree of a pixel is described by the difference between the maximum and minimum values within the 3 x 3 neighborhood around it. The R-channel values of all pixels in the 3 x 3 neighborhood are taken, and the minimum R value is subtracted from the maximum R value to obtain the difference.
The detail degree of the pixel at the corresponding position in the second photo to be synthesized (Fig. 2) is subtracted from the detail degree of each pixel in the first photo to be synthesized (Fig. 1), giving the discriminant value of the pixel detail degree at each position.
The discriminant value at each position is compared with the set high and low thresholds, and the detail degree with the set value, to obtain the detail type value of the pixel at each position:
If the discriminant value > the high threshold (set to 15), the type value pType = 1.
If the discriminant value < the low threshold (set to -11), the type value pType = 2.
If -11 <= discriminant value <= 15 and, in either photo, the detail degree of the pixel at the current position reaches the set value 35, then pType = 3.
All remaining pixels fall into the last class, type value pType = 0, since pType is zeroed initially.
The detail type values of each position are then low-pass filtered to remove possible detail-type misjudgments: if the number of pixels in the connected region containing a pixel is less than the threshold (10), the pType of that region is considered a misjudgment and is replaced with the pType of the pixels surrounding the region.
After this noise removal, synthesis proceeds according to the detail type value at each position:
If the type value pType = 0, neither image has detail there; the final composite adopts the weighted mean of the two images followed by noise reduction (average first, denoise after), the two weights being the distances between the discriminant value and the high and low thresholds.
If the type value pType = 1, the first image clearly has more detail, and the final image takes the first.
If the type value pType = 2, the second image clearly has more detail, and the final image takes the second.
If the type value pType = 3, one of the images has detail and their sharpness is roughly comparable; the final composite adopts the weighted mean of the two images, the two weights being the distances between the discriminant value and the high and low thresholds.
After the detail type values for Figs. 1 and 2 are judged and the pixels synthesized accordingly, a clear picture is obtained, shown enlarged in split lines in Fig. 5. It can be seen that the synthesis indeed absorbs the advantages of both photos, reaching a result sharper everywhere than either of the two.
Claims (9)
1. A clear picture synthesis method based on detail detection, characterized by comprising the steps of:
(1) calculating the detail degree of each pixel in two photos to be synthesized whose sizes and positions correspond exactly;
(2) subtracting the detail degree of the pixel at the corresponding position in the second photo to be synthesized from the detail degree of each pixel in the first photo to be synthesized, obtaining a discriminant value of the pixel detail degree at each position;
(3) comparing the discriminant value of the pixel detail degree at each position with set high and low thresholds, and comparing the detail degree with a set value, to obtain a detail type value for the pixel at each position;
(4) low-pass filtering the detail type values of each position to remove possible detail-type misjudgments;
(5) according to the detail type value at each position after misjudgment removal, choosing a corresponding synthesis mode for each pixel to obtain a clear picture.
2. The clear picture synthesis method as claimed in claim 1, characterized in that the detail degree of each pixel in step (1) is the difference between the maximum and minimum values within a neighborhood around that pixel.
3. The clear picture synthesis method as claimed in claim 2, characterized in that the neighborhood around the pixel is an n x n square neighborhood or a circular neighborhood.
4. The clear picture synthesis method as claimed in claim 1, characterized in that in step (3):
if the discriminant value > the high threshold, the type value pType = 1;
if the discriminant value < the low threshold, the type value pType = 2;
if low threshold <= discriminant value <= high threshold and, in either photo, the detail degree of the pixel at the current position reaches the set value, then pType = 3;
all remaining pixels are classed as type value pType = 0.
5. The clear picture synthesis method as claimed in claim 1, characterized in that in step (4), if the number of pixels with the same detail type value in a connected region is less than a preset threshold, that detail type value is considered a misjudgment.
6. The clear picture synthesis method as claimed in claim 5, characterized in that the misjudgment removal in step (4) replaces the type value of the pixels considered misjudged with the type value of the surrounding pixels that are not misjudged.
7. The clear picture synthesis method as claimed in claim 1, characterized in that in step (5) the pixel at each position of the final composite image is synthesized as follows:
if the detail type value after misjudgment removal at a position is 0 or 3, the position adopts the weighted mean of the pixels of the two photos to be synthesized, the two weights being the distances between the discriminant value and the high and low thresholds respectively;
if the detail type value after misjudgment removal at a position is 1, the pixel at the corresponding position of the first photo to be synthesized is chosen;
if the detail type value after misjudgment removal at a position is 2, the pixel at the corresponding position of the second photo to be synthesized is chosen.
8. The clear picture synthesis method as claimed in claim 7, characterized in that if the detail type value after misjudgment removal at a position is 0, then after the weighted mean of the pixels of the two photos is taken at that position, a mean filter over the surrounding neighborhood pixels is applied to the synthesized result for noise reduction.
9. The clear picture synthesis method as claimed in claim 2, characterized in that the difference between the maximum and minimum values is taken on a component, or a weighted mean of components, in one or several color spaces, the color space including but not limited to the HSV, HSI, RGB, CMYK, HSL, HSB, Ycc, XYZ, Lab or YUV color space.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2010101621699A CN101853499B (en) | 2010-04-30 | 2010-04-30 | Clear picture synthesis method based on detail detection |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101853499A true CN101853499A (en) | 2010-10-06 |
CN101853499B CN101853499B (en) | 2012-01-25 |
Family
ID=42804964
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2010101621699A Expired - Fee Related CN101853499B (en) | 2010-04-30 | 2010-04-30 | Clear picture synthesis method based on detail detection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN101853499B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102622736A (en) * | 2011-01-28 | 2012-08-01 | 鸿富锦精密工业(深圳)有限公司 | Image processing system and method |
CN104952048A (en) * | 2015-06-09 | 2015-09-30 | 浙江大学 | Focus stack photo fusing method based on image reconstruction |
CN103795920B (en) * | 2014-01-21 | 2017-06-20 | 宇龙计算机通信科技(深圳)有限公司 | Photo processing method and device |
CN112381836A (en) * | 2020-11-12 | 2021-02-19 | 贝壳技术有限公司 | Image processing method and device, computer readable storage medium, and electronic device |
CN115358951A (en) * | 2022-10-19 | 2022-11-18 | 广东电网有限责任公司佛山供电局 | Intelligent ring main unit monitoring system based on image recognition |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003225428A (en) * | 2002-02-05 | 2003-08-12 | Shinnichi Electronics Kk | Picture display device for pachinko machine, and picture displaying method and picture displaying program for the picture display device |
US20070002070A1 (en) * | 2005-06-30 | 2007-01-04 | Microsoft Corporation | Sub-pass correction using neighborhood matching |
CN101052100A (en) * | 2007-03-29 | 2007-10-10 | 上海交通大学 | Multiple exposure image intensifying method |
CN101394485A (en) * | 2007-09-20 | 2009-03-25 | 华为技术有限公司 | Image generating method, apparatus and image composition equipment |
- 2010-04-30: CN application CN2010101621699A filed, granted as patent CN101853499B, now not active (Expired - Fee Related)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003225428A (en) * | 2002-02-05 | 2003-08-12 | Shinnichi Electronics Kk | Picture display device for pachinko machine, and picture displaying method and picture displaying program for the picture display device |
US20070002070A1 (en) * | 2005-06-30 | 2007-01-04 | Microsoft Corporation | Sub-pass correction using neighborhood matching |
CN101052100A (en) * | 2007-03-29 | 2007-10-10 | 上海交通大学 | Multiple exposure image intensifying method |
CN101394485A (en) * | 2007-09-20 | 2009-03-25 | 华为技术有限公司 | Image generating method, apparatus and image composition equipment |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102622736A (en) * | 2011-01-28 | 2012-08-01 | 鸿富锦精密工业(深圳)有限公司 | Image processing system and method |
CN102622736B (en) * | 2011-01-28 | 2017-08-04 | 鸿富锦精密工业(深圳)有限公司 | Image processing system and method |
CN103795920B (en) * | 2014-01-21 | 2017-06-20 | 宇龙计算机通信科技(深圳)有限公司 | Photo processing method and device |
CN104952048A (en) * | 2015-06-09 | 2015-09-30 | 浙江大学 | Focus stack photo fusing method based on image reconstruction |
CN104952048B (en) * | 2015-06-09 | 2017-12-08 | 浙江大学 | A kind of focus storehouse picture synthesis method based on as volume reconstruction |
CN112381836A (en) * | 2020-11-12 | 2021-02-19 | 贝壳技术有限公司 | Image processing method and device, computer readable storage medium, and electronic device |
CN115358951A (en) * | 2022-10-19 | 2022-11-18 | 广东电网有限责任公司佛山供电局 | Intelligent ring main unit monitoring system based on image recognition |
CN115358951B (en) * | 2022-10-19 | 2023-01-24 | 广东电网有限责任公司佛山供电局 | Intelligent ring main unit monitoring system based on image recognition |
Also Published As
Publication number | Publication date |
---|---|
CN101853499B (en) | 2012-01-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108377343B (en) | Exposure selector for high dynamic range imaging and related method | |
CN101778203B (en) | Image processing device | |
TWI464706B (en) | Dark portion exposure compensation method for simulating high dynamic range with single image and image processing device using the same | |
CN104519328B (en) | Image processing equipment, image capture device and image processing method | |
CN101889453B (en) | Image processing device, imaging device, method, and program | |
WO2017214523A1 (en) | Mismatched foreign light detection and mitigation in the image fusion of a two-camera system | |
WO2007095483A2 (en) | Detection and removal of blemishes in digital images utilizing original images of defocused scenes | |
CN108537155A (en) | Image processing method, device, electronic equipment and computer readable storage medium | |
CN109493283A (en) | A kind of method that high dynamic range images ghost is eliminated | |
US9361669B2 (en) | Image processing apparatus, image processing method, and program for performing a blurring process on an image | |
CN101853499B (en) | Clear picture synthesis method based on detail detection | |
CN102480595B (en) | Image processing apparatus and image processing method | |
CN103797782A (en) | Image processing device and program | |
CN103366352A (en) | Device and method for producing image with background being blurred | |
CN102737365B (en) | Image processing apparatus, camera head and image processing method | |
CN103563350A (en) | Image processing device, image processing method, and digital camera | |
WO2002005544A1 (en) | Image processing method, recording medium, and image processing device | |
CN106296625A (en) | Image processing apparatus and image processing method, camera head and image capture method | |
CN110352592A (en) | Imaging device and imaging method and image processing equipment and image processing method | |
TW201813371A (en) | Ghost artifact removal system and method | |
JP2013025650A (en) | Image processing apparatus, image processing method, and program | |
CN107169973A (en) | The background removal and synthetic method and device of a kind of image | |
US8885971B2 (en) | Image processing apparatus, image processing method, and storage medium | |
CN117456371B (en) | Group string hot spot detection method, device, equipment and medium | |
CN111711766B (en) | Image processing method and device, terminal and computer readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
C17 | Cessation of patent right | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20120125 Termination date: 20140430 |