CN108111785A - Image processing method and device, computer readable storage medium and computer equipment - Google Patents


Info

Publication number
CN108111785A
CN108111785A
Authority
CN
China
Prior art keywords
image
pixel
dynamic range
green
tap
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711465123.2A
Other languages
Chinese (zh)
Other versions
CN108111785B (en)
Inventor
李智乾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201711465123.2A priority Critical patent/CN108111785B/en
Publication of CN108111785A publication Critical patent/CN108111785A/en
Application granted granted Critical
Publication of CN108111785B publication Critical patent/CN108111785B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 — Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60 — Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/63 — Noise processing, e.g. detecting, correcting, reducing or removing noise applied to dark current

Abstract

An image processing method is disclosed. The image processing method includes: processing an image to be processed by subtracting a predetermined proportion of the black level from the pixel value of each pixel, to obtain an initially de-blacked image; denoising the initially de-blacked image to obtain a denoised image; and processing the denoised image by subtracting the remaining proportion of the black level from the pixel value of each pixel, to obtain a de-blacked image, where the sum of the remaining proportion and the predetermined proportion is 1 and the remaining proportion is greater than the predetermined proportion. The application also discloses an image processing apparatus, a computer-readable storage medium, and a computer device. The image processing method and apparatus, computer-readable storage medium, and computer device of the embodiments of the application can avoid poor denoising of the dark regions of the image caused by their weak signal, and can also avoid degradation of subsequent image processing caused by the presence of the black level, thereby improving the overall effect of image processing.

Description

Image processing method and device, computer readable storage medium and computer equipment
Technical field
This application relates to image processing technology, and more particularly to an image processing method, an image processing apparatus, a computer-readable storage medium, and a computer device.
Background technology
In related-art image processing methods, black-level correction is first performed on the image (for example, the black level is subtracted from the pixel value of each pixel), and denoising is then performed on the corrected image. However, the original signal (pixel value) in the dark regions of the image is already weak, and it becomes weaker still after black-level correction, so the denoising of the dark regions of the image is poor.
Summary of the invention
Embodiments of the application provide an image processing method, an image processing apparatus, a computer-readable storage medium, and a computer device.
The image processing method of the embodiments of the application includes the following steps:
processing an image to be processed to subtract a predetermined proportion of the black level from the pixel value of each pixel of the image to be processed, to obtain an initially de-blacked image;
denoising the initially de-blacked image to obtain a denoised image; and
processing the denoised image to subtract the remaining proportion of the black level from the pixel value of each pixel of the denoised image, to obtain a de-blacked image, where the sum of the remaining proportion and the predetermined proportion is 1 and the remaining proportion is greater than the predetermined proportion.
The image processing apparatus of the embodiments of the application includes:
a first processing module, configured to process an image to be processed to subtract a predetermined proportion of the black level from the pixel value of each pixel of the image to be processed, to obtain an initially de-blacked image;
a second processing module, configured to denoise the initially de-blacked image to obtain a denoised image; and
a third processing module, configured to process the denoised image to subtract the remaining proportion of the black level from the pixel value of each pixel of the denoised image, to obtain a de-blacked image, where the sum of the remaining proportion and the predetermined proportion is 1 and the remaining proportion is greater than the predetermined proportion.
One or more non-volatile computer-readable storage media of the embodiments of the application contain computer-executable instructions that, when executed by one or more processors, cause the processors to perform the image processing method.
A computer device of the embodiments of the application includes a memory and a processor. The memory stores computer-readable instructions that, when executed by the processor, cause the processor to perform the image processing method.
In the image processing method and apparatus, computer-readable storage medium, and computer device of the embodiments of the application, only at most a small part of the black level is subtracted before denoising the image, which largely preserves the signal in the dark regions of the image and avoids poor denoising of the dark regions caused by their weak signal; the full black level has been subtracted after denoising, which avoids degradation of subsequent image processing caused by any remaining black level, thereby improving the overall effect of image processing.
Additional aspects and advantages of the application will be set forth in part in the following description, and will in part become apparent from the description or be learned through practice of the application.
Description of the drawings
To describe the technical solutions in the embodiments of the application or in the prior art more clearly, the accompanying drawings required in the description of the embodiments or the prior art are briefly introduced below. Apparently, the accompanying drawings described below show only some embodiments of the application; those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flow diagram of the image processing method of certain embodiments of the application.
Fig. 2 is a module diagram of the image processing apparatus of certain embodiments of the application.
Fig. 3 is a schematic plan view of the computer equipment of certain embodiments of the application.
Fig. 4 is a flow diagram of the image processing method of certain embodiments of the application.
Fig. 5 is a module diagram of the image processing apparatus of certain embodiments of the application.
Fig. 6 is a flow diagram of the image processing method of certain embodiments of the application.
Fig. 7 is a module diagram of the second processing module of the image processing apparatus of certain embodiments of the application.
Fig. 8 is a flow diagram of the image processing method of certain embodiments of the application.
Fig. 9 is a module diagram of the first processing unit of the image processing apparatus of certain embodiments of the application.
Fig. 10 is a flow diagram of the image processing method of certain embodiments of the application.
Fig. 11 is a module diagram of the second processing unit of the image processing apparatus of certain embodiments of the application.
Fig. 12 is a schematic diagram of a horizontal filtering pixel row of certain embodiments of the application.
Fig. 13 is a flow diagram of the image processing method of certain embodiments of the application.
Fig. 14 is a module diagram of the third processing unit of the image processing apparatus of certain embodiments of the application.
Fig. 15 is a schematic diagram of a vertical filtering pixel column of certain embodiments of the application.
Fig. 16 is a flow diagram of the image processing method of certain embodiments of the application.
Fig. 17 is a module diagram of the image processing apparatus of certain embodiments of the application.
Fig. 18 is a flow diagram of the image processing method of certain embodiments of the application.
Fig. 19 is a module diagram of the first correction module of the image processing apparatus of certain embodiments of the application.
Fig. 20 is a schematic diagram of a pixel not located on a gain grid point in certain embodiments of the application.
Fig. 21 is a flow diagram of the image processing method of certain embodiments of the application.
Fig. 22 is a module diagram of the image processing apparatus of certain embodiments of the application.
Fig. 23 is a flow diagram of the image processing method of certain embodiments of the application.
Fig. 24 is a module diagram of the second correction module of the image processing apparatus of certain embodiments of the application.
Fig. 25 is a schematic diagram of a group of adjacent pixels of certain embodiments of the application.
Fig. 26 is a flow diagram of the image processing method of certain embodiments of the application.
Fig. 27 is a module diagram of the image processing apparatus of certain embodiments of the application.
Fig. 28 is a flow diagram of the image processing method of certain embodiments of the application.
Fig. 29 is a module diagram of the sixth processing module of the image processing apparatus of certain embodiments of the application.
Fig. 30 is a flow diagram of the image processing method of certain embodiments of the application.
Fig. 31 is a module diagram of the sixth processing module of the image processing apparatus of certain embodiments of the application.
Fig. 32 is a module diagram of the computer equipment of certain embodiments of the application.
Fig. 33 is a module diagram of the image processing circuit of certain embodiments of the application.
Main element symbol description:
Computer equipment 100; image processing apparatus 10; first processing module 12; second processing module 14; first processing unit 142; first determination subunit 1422; second determination subunit 1424; first computation subunit 1426; first judgment subunit 1428; first processing subunit 1421; second processing unit 144; third determination subunit 1442; fourth determination subunit 1444; second computation subunit 1446; fifth determination subunit 1448; second judgment subunit 1441; second processing subunit 1443; sixth determination subunit 1445; third processing unit 146; seventh determination subunit 1462; eighth determination subunit 1464; third computation subunit 1466; ninth determination subunit 1468; third judgment subunit 1461; third processing subunit 1463; tenth determination subunit 1465; third processing module 16; first acquisition module 18; fourth processing module 11; first correction module 13; first determination unit 132; second determination unit 134; first judgment unit 136; fourth processing unit 138; fifth processing unit 131; second correction module 15; third determination unit 152; first computation unit 154; fourth determination unit 156; fifth determination unit 158; second judgment unit 151; sixth processing unit 153; seventh processing unit 155; fifth processing module 17; sixth processing module 19; eighth processing unit 192; sixth determination unit 194; seventh determination unit 196; eighth determination unit 198; ninth processing unit 191; ninth determination unit 193; third judgment unit 195; tenth processing unit 197; seventh processing module 1X; system bus 20; processor 40; memory 60; internal memory 80; display screen 30; input device 50; image processing circuit 70; ISP processor 72; control logic 74; camera 76; lens 762; image sensor 764; sensor 78; image memory 71; display 73; encoder/decoder 75.
Detailed description of the embodiments
To make the objects, technical solutions, and advantages of the application more clearly understood, the application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the application and not to limit it.
It can be understood that the terms "first", "second", and the like used in the application may be used here to describe various elements, but these elements are not limited by these terms. These terms are used only to distinguish one element from another. For example, without departing from the scope of the application, a first weight may be called a second weight, and similarly, a second weight may be called a first weight. The first weight and the second weight are both weights, but they are not the same weight.
Referring to Fig. 1, the image processing method of the embodiments of the application includes the following steps:
S12: processing an image to be processed to subtract a predetermined proportion of the black level from the pixel value of each pixel of the image to be processed, to obtain an initially de-blacked image;
S14: denoising the initially de-blacked image to obtain a denoised image; and
S16: processing the denoised image to subtract the remaining proportion of the black level from the pixel value of each pixel of the denoised image, to obtain a de-blacked image, where the sum of the remaining proportion and the predetermined proportion is 1 and the remaining proportion is greater than the predetermined proportion.
Referring to Fig. 2, the image processing apparatus 10 of the embodiments of the application includes a first processing module 12, a second processing module 14, and a third processing module 16. The first processing module 12 is configured to process an image to be processed to subtract a predetermined proportion of the black level from the pixel value of each pixel, to obtain an initially de-blacked image. The second processing module 14 is configured to denoise the initially de-blacked image to obtain a denoised image. The third processing module 16 is configured to process the denoised image to subtract the remaining proportion of the black level from the pixel value of each pixel, to obtain a de-blacked image, where the sum of the remaining proportion and the predetermined proportion is 1 and the remaining proportion is greater than the predetermined proportion.
In other words, the image processing method of the embodiments of the application can be implemented by the image processing apparatus 10 of the embodiments of the application, where step S12 can be implemented by the first processing module 12, step S14 can be implemented by the second processing module 14, and step S16 can be implemented by the third processing module 16.
Referring to Fig. 3, the image processing apparatus 10 of the embodiments of the application can be applied in the computer equipment 100 of the embodiments of the application; in other words, the computer equipment 100 of the embodiments of the application can include the image processing apparatus 10 of the embodiments of the application.
Before denoising the image, the image processing method, image processing apparatus 10, and computer equipment 100 of the embodiments of the application subtract only at most a small part of the black level, which largely preserves the signal in the dark regions of the image and avoids poor denoising of the dark regions caused by their weak signal; the full black level has been subtracted after denoising, which avoids degradation of subsequent image processing caused by any remaining black level, thereby improving the overall effect of image processing.
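The two-stage black-level subtraction of steps S12, S14, and S16 can be sketched as follows. This is a minimal illustration, not the patent's ISP implementation: the 10%/90% split, the 3-tap box blur standing in for the denoiser, and all function names are assumptions made for the example.

```python
# Illustrative sketch of steps S12/S14/S16: subtract a small proportion of
# the black level, denoise, then subtract the remaining proportion.

def subtract_black_level(pixels, black_level, proportion):
    """Subtract a proportion of the black level from each pixel, clamped at 0."""
    return [max(0, p - black_level * proportion) for p in pixels]

def denoise(pixels):
    """Placeholder denoiser (assumption): 3-tap box blur with edge replication."""
    n = len(pixels)
    padded = [pixels[0]] + list(pixels) + [pixels[-1]]
    return [(padded[i] + padded[i + 1] + padded[i + 2]) / 3 for i in range(n)]

def process(pixels, black_level, predetermined=0.1):
    remaining = 1.0 - predetermined                # the two proportions sum to 1
    first = subtract_black_level(pixels, black_level, predetermined)   # S12
    denoised = denoise(first)                                          # S14
    return subtract_black_level(denoised, black_level, remaining)      # S16
```

With predetermined = 0.1, a dark pixel keeps 90% of the black-level pedestal through the denoiser, so its signal is not weakened before smoothing, and a flat frame sitting exactly at the black level comes out at 0 only after the final subtraction.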
Referring to Fig. 4, in some embodiments, the image processing method of the embodiments of the application further includes the following steps:
S18: reading an original image from an image sensor; and
S11: preprocessing the original image to determine statistical information and obtain the image to be processed.
Referring to Fig. 5, in some embodiments, the image processing apparatus 10 includes a first acquisition module 18 and a fourth processing module 11. The first acquisition module 18 is configured to read an original image from an image sensor. The fourth processing module 11 is configured to preprocess the original image to determine statistical information and obtain the image to be processed.
The image sensor includes an integrated circuit with a pixel array and a color filter array arranged on the pixel array. Each pixel includes a photodetector, which can be used to detect light intensity. The color filter array can be used to detect wavelength. An original image can be formed from the detected light intensity and wavelength information. Preprocessing the original image can determine the statistical information needed in subsequent image processing, so that subsequent image processing proceeds normally.
In some embodiments, the original image includes the raw data of a Bayer array.
The Bayer array is the most widely used color filter array; it is simple, readily available, and easy to implement. When a Bayer array is used as the color filter array, the original image includes the raw data of the Bayer array.
In some embodiments, the image sensor includes a CMOS image sensor.
A CMOS image sensor is not only very power-efficient, reducing the cost of use, but also greatly simplifies the hardware structure of the system.
In some embodiments, the predetermined proportion is less than or equal to 10%.
A predetermined proportion of at most 10% keeps the signal in the dark regions of the initially de-blacked image from becoming too weak, so that the denoising effect is more pronounced. The predetermined proportion can be 0, that is, no black-level correction is performed before denoising.
Referring to Fig. 6, in some embodiments, step S14 includes the following steps:
S142: correcting the inconsistency between two adjacent green pixels in the diagonal direction in the initially de-blacked image, to obtain a green-consistent image;
S144: applying horizontal filtering to the green-consistent image to obtain a horizontally filtered image; and
S146: applying vertical filtering to the horizontally filtered image to obtain the denoised image.
Referring to Fig. 7, in some embodiments, the second processing module 14 includes a first processing unit 142, a second processing unit 144, and a third processing unit 146. The first processing unit 142 is configured to correct the inconsistency between two adjacent green pixels in the diagonal direction in the initially de-blacked image, to obtain a green-consistent image. The second processing unit 144 is configured to apply horizontal filtering to the green-consistent image to obtain a horizontally filtered image. The third processing unit 146 is configured to apply vertical filtering to the horizontally filtered image to obtain the denoised image.
Under uniformly illuminated flat-field conditions, two adjacent green pixels Gr and Gb in the diagonal direction have a small brightness difference; correcting it can avoid artifacts in the full-color image after demosaicing. Applying horizontal filtering and vertical filtering to the green-consistent image reduces the noise of the image.
Referring to Fig. 8, in some embodiments, step S142 includes the following steps:
S1422: determining a green inconsistency correction threshold;
S1424: determining the pixel value of the current green pixel and the pixel value of the lower-right green pixel located at the lower right of the current green pixel;
S1426: calculating the difference between the pixel value of the current green pixel and the pixel value of the lower-right green pixel;
S1428: judging whether the difference is less than the green inconsistency correction threshold; and
S1421: when the difference is less than the green inconsistency correction threshold, replacing the pixel value of the current green pixel and the pixel value of the lower-right green pixel with the average of the two pixel values.
Referring to Fig. 9, in some embodiments, the first processing unit 142 includes a first determination subunit 1422, a second determination subunit 1424, a first computation subunit 1426, a first judgment subunit 1428, and a first processing subunit 1421. The first determination subunit 1422 is configured to determine the green inconsistency correction threshold. The second determination subunit 1424 is configured to determine the pixel value of the current green pixel and the pixel value of the lower-right green pixel located at the lower right of the current green pixel. The first computation subunit 1426 is configured to calculate the difference between the pixel value of the current green pixel and the pixel value of the lower-right green pixel. The first judgment subunit 1428 is configured to judge whether the difference is less than the green inconsistency correction threshold. The first processing subunit 1421 is configured to, when the difference is less than the green inconsistency correction threshold, replace the pixel value of the current green pixel and the pixel value of the lower-right green pixel with the average of the two pixel values.
Replacing the pixel value of the current green pixel and the pixel value of the lower-right green pixel with their average makes the two pixel values consistent, which avoids artifacts in the full-color image after demosaicing. In addition, this correction method helps avoid averaging the current green pixel and the lower-right green pixel across an edge, thereby improving, or at least preserving, sharpness.
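The correction of steps S1422 to S1421 can be sketched as follows; the function name and the threshold value used in the example are illustrative assumptions:

```python
# Illustrative Gr/Gb consistency correction for one diagonal green pair.

def correct_green_pair(gr, gb, threshold):
    """If the diagonal green pair differs by less than the threshold,
    replace both values with their average; otherwise the difference is
    likely a real edge, so leave the pair untouched to preserve sharpness."""
    if abs(gr - gb) < threshold:
        avg = (gr + gb) / 2
        return avg, avg
    return gr, gb
```

For example, a pair (100, 104) with threshold 8 becomes (102.0, 102.0), while a pair straddling a real edge, such as (100, 150), is left unchanged.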
Referring to Fig. 10, in some embodiments, step S144 includes the following steps:
S1442: determining a horizontal filter, the horizontal filter including N taps, the N taps including a center tap;
S1444: determining a horizontal filtering pixel row, the horizontal filtering pixel row including N pixels respectively corresponding to the N taps and including a center pixel;
S1446: calculating the horizontal gradient across the edge of each tap of the N taps;
S1448: determining a horizontal edge threshold;
S1441: judging whether each horizontal gradient is higher than the horizontal edge threshold;
S1443: when a horizontal gradient is higher than the horizontal edge threshold, folding the tap corresponding to that horizontal gradient into the center pixel; and
S1445: determining the horizontal filtering output based on the horizontal gradients.
Referring to Fig. 11, in some embodiments, the second processing unit 144 includes a third determination subunit 1442, a fourth determination subunit 1444, a second computation subunit 1446, a fifth determination subunit 1448, a second judgment subunit 1441, a second processing subunit 1443, and a sixth determination subunit 1445. The third determination subunit 1442 is configured to determine the horizontal filter, which includes N taps including a center tap. The fourth determination subunit 1444 is configured to determine the horizontal filtering pixel row, which includes N pixels respectively corresponding to the N taps and includes a center pixel. The second computation subunit 1446 is configured to calculate the horizontal gradient across the edge of each of the N taps. The fifth determination subunit 1448 is configured to determine the horizontal edge threshold. The second judgment subunit 1441 is configured to judge whether each horizontal gradient is higher than the horizontal edge threshold. The second processing subunit 1443 is configured to, when a horizontal gradient is higher than the horizontal edge threshold, fold the corresponding tap into the center pixel. The sixth determination subunit 1445 is configured to determine the horizontal filtering output based on the horizontal gradients.
Applying horizontal filtering to the green-consistent image makes the horizontally filtered image less noisy than the green-consistent image.
Referring to Fig. 12, in some embodiments, the horizontal filter includes a 7-tap horizontal filter with 7 taps, the horizontal filtering pixel row includes 7 pixels (P0, P1, P2, ..., P6) corresponding to the 7 taps, the center tap is placed at the center pixel P3, and the horizontal gradients and the horizontal filtering output can be determined by the following formulas:
Eh0=abs (P0-P1);
Eh1=abs (P1-P2);
Eh2=abs (P2-P3);
Eh3=abs (P3-P4);
Eh4=abs (P4-P5);
Eh5=abs (P5-P6);
Phorz = C0×[(Eh2>horzTh[c]) ? P3 : (Eh1>horzTh[c]) ? P2 : (Eh0>horzTh[c]) ? P1 : P0] +
C1×[(Eh2>horzTh[c]) ? P3 : (Eh1>horzTh[c]) ? P2 : P1] +
C2×[(Eh2>horzTh[c]) ? P3 : P2] +
C3×P3 +
C4×[(Eh3>horzTh[c]) ? P3 : P4] +
C5×[(Eh3>horzTh[c]) ? P3 : (Eh4>horzTh[c]) ? P4 : P5] +
C6×[(Eh3>horzTh[c]) ? P3 : (Eh4>horzTh[c]) ? P4 : (Eh5>horzTh[c]) ? P5 : P6]
where Eh0, Eh1, Eh2, Eh3, Eh4, and Eh5 are the horizontal gradients, Phorz is the horizontal filtering output, horzTh[c] is the horizontal edge threshold, and C0~C6 are the filter tap coefficients respectively corresponding to pixels P0~P6 in the horizontal filtering pixel row.
In some embodiments, the filter tap coefficients C0~C6 are 2's-complement values with 3 integer bits and 13 fractional bits.
The 2's-complement representation handles the sign bit and the numeric range uniformly, so additions and subtractions can be handled uniformly.
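The coefficient format described above can be illustrated as follows. This is a hedged sketch: the text only states 3 integer bits and 13 fractional bits in 2's complement, so the 16-bit packing and the helper names below are assumptions made for the example.

```python
# Illustrative 2's-complement fixed-point format with 3 integer bits and
# 13 fractional bits (a Q3.13-style layout, 16 bits total; assumed packing).

BITS = 16          # 3 integer bits + 13 fractional bits
FRAC = 13
SCALE = 1 << FRAC  # 8192 steps per unit

def encode_q3_13(x):
    """Encode a real coefficient as a 16-bit two's-complement integer."""
    raw = round(x * SCALE)
    return raw & ((1 << BITS) - 1)   # wrap into 16 bits (two's complement)

def decode_q3_13(raw):
    """Decode a 16-bit two's-complement value back to a float."""
    if raw >= 1 << (BITS - 1):       # sign bit set -> negative value
        raw -= 1 << BITS
    return raw / SCALE
```

In this layout a coefficient covers roughly [-4, 4) in steps of 1/8192, and because the sign lives in the same bit pattern as the magnitude, the hardware can add and subtract coefficients with one unsigned adder.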
In some embodiments, the filter tap coefficients C0~C6 are symmetric about the center pixel P3.
In some embodiments, the filter tap coefficients C0~C6 are not symmetric about the center pixel P3.
The filter tap coefficients C0~C6 may or may not be symmetric about the center pixel P3, which makes the horizontal filter more flexible.
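The 7-tap formula above can be sketched as follows. The tap-folding logic follows the nested conditionals of the Phorz formula; the example coefficients are an assumption, since the text does not fix the values of C0~C6:

```python
# Illustrative edge-adaptive 7-tap horizontal filter (the Phorz formula):
# when the gradient across a tap exceeds the threshold, that tap is
# "folded in" toward the center pixel P3, so smoothing never crosses edges.

def horizontal_filter(p, c, th):
    """p: pixels P0..P6, c: coefficients C0..C6, th: horzTh[c]."""
    eh = [abs(p[i] - p[i + 1]) for i in range(6)]      # Eh0..Eh5

    def eff(chain, fallback):
        # chain: (gradient index, pixel index) pairs checked from the
        # center outward; the first gradient above the threshold folds
        # the tap to the pixel on the inner side of that edge.
        for g, q in chain:
            if eh[g] > th:
                return p[q]
        return p[fallback]

    taps = [
        eff([(2, 3), (1, 2), (0, 1)], 0),   # C0 term
        eff([(2, 3), (1, 2)], 1),           # C1 term
        eff([(2, 3)], 2),                   # C2 term
        p[3],                               # C3 term (center tap)
        eff([(3, 3)], 4),                   # C4 term
        eff([(3, 3), (4, 4)], 5),           # C5 term
        eff([(3, 3), (4, 4), (5, 5)], 6),   # C6 term
    ]
    return sum(ci * ti for ci, ti in zip(c, taps))
```

In a flat region every gradient stays below the threshold and the filter reduces to an ordinary weighted sum; across a strong edge the outer taps all fold inward, so the edge is not smeared.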
Referring to Fig. 13, in some embodiments, step S146 includes the following steps:
S1462: determining a vertical filter, the vertical filter including N taps, the N taps including a center tap;
S1464: determining a vertical filtering pixel column, the vertical filtering pixel column including N pixels respectively corresponding to the N taps and including a center pixel;
S1466: calculating the vertical gradient across the edge of each tap of the N taps;
S1468: determining a vertical edge threshold;
S1461: judging whether each vertical gradient is higher than the vertical edge threshold;
S1463: when a vertical gradient is higher than the vertical edge threshold, folding the tap corresponding to that vertical gradient into the center pixel; and
S1465: determining the vertical filtering output based on the vertical gradients.
Referring to Fig. 14, in some embodiments, the third processing unit 146 includes a seventh determination subunit 1462, an eighth determination subunit 1464, a third computation subunit 1466, a ninth determination subunit 1468, a third judgment subunit 1461, a third processing subunit 1463, and a tenth determination subunit 1465.
The seventh determination subunit 1462 is configured to determine the vertical filter, which includes N taps including a center tap. The eighth determination subunit 1464 is configured to determine the vertical filtering pixel column, which includes N pixels respectively corresponding to the N taps and includes a center pixel. The third computation subunit 1466 is configured to calculate the vertical gradient across the edge of each of the N taps. The ninth determination subunit 1468 is configured to determine the vertical edge threshold. The third judgment subunit 1461 is configured to judge whether each vertical gradient is higher than the vertical edge threshold. The third processing subunit 1463 is configured to, when a vertical gradient is higher than the vertical edge threshold, fold the corresponding tap into the center pixel. The tenth determination subunit 1465 is configured to determine the vertical filtering output based on the vertical gradients.
Applying vertical filtering to the horizontally filtered image makes the denoised image less noisy than the horizontally filtered image.
Referring to Fig. 15, in some embodiments, the vertical filter includes a 5-tap vertical filter with 5 taps, the vertical filtering pixel column includes 5 pixels (P0, P1, P2, P3, P4) respectively corresponding to the 5 taps, the center tap is placed at the center pixel P2, and the vertical gradients and the vertical filtering output can be determined by the following formulas:
Ev0=abs (P0-P1);
Ev1=abs (P1-P2);
Ev2=abs (P2-P3);
Ev3=abs (P3-P4);
Pvert = C0×[(Ev1>vertTh[c]) ? P2 : (Ev0>vertTh[c]) ? P1 : P0] +
C1×[(Ev1>vertTh[c]) ? P2 : P1] +
C2×P2 +
C3×[(Ev2>vertTh[c]) ? P2 : P3] +
C4×[(Ev2>vertTh[c]) ? P2 : (Ev3>vertTh[c]) ? P3 : P4];
where Ev0, Ev1, Ev2, and Ev3 are the vertical gradients, Pvert is the vertical filtering output, vertTh[c] is the vertical edge threshold, and C0~C4 are the filter tap coefficients respectively corresponding to pixels P0~P4 in the vertical filtering pixel column.
With the formulas above, the vertical filtering output Pvert can be determined.
In some embodiments, the filter tap coefficients C0~C4 can be 2's-complement values with 3 integer bits and 13 fractional bits.
The 2's-complement representation handles the sign bit and the numeric range uniformly, so additions and subtractions can be handled uniformly.
In some embodiments, the filter tap coefficients C0~C4 are symmetric about the center pixel P2.
In some embodiments, the filter tap coefficients C0~C4 are not symmetric about the center pixel P2.
The filter tap coefficients C0~C4 may or may not be symmetric about the center pixel P2, which makes the vertical filter more flexible.
Referring to FIG. 16, in some embodiments the image processing method of the embodiments of the present application further includes the following step:

S13: performing lens shading correction on the initially de-blacked image to obtain a lens-shading-corrected image.

Referring to FIG. 17, the image processing apparatus 10 of the embodiments of the present application further includes a first correction module 13. The first correction module 13 is configured to perform lens shading correction on the initially de-blacked image to obtain a lens-shading-corrected image.

When the imaging distance of the camera is relatively far, the oblique light beams passing through the camera lens gradually decrease as the field angle gradually increases, so that the center of the captured image is brighter and its edges are darker, making the image brightness uneven. Performing lens shading correction on the initially de-blacked image can eliminate this adverse effect and improve the accuracy of subsequent processing.
Referring to FIG. 18, in some embodiments S13 includes the following steps:

S132: determining a two-dimensional gain grid, the two-dimensional gain grid including gain grid points, the gain grid points being distributed over the raw frame at fixed horizontal and vertical intervals;

S134: determining a current pixel and an active processing region, the active processing region including the region in which lens shading correction is performed;

S136: judging whether the current pixel is on a gain grid point;

S138: when the current pixel is on a particular grid point among the gain grid points, using the gain value of that grid point as the gain value of the current pixel; and

S131: when the current pixel is not on a gain grid point, determining the gain value of the current pixel using the upper-left, upper-right, lower-left, and lower-right grid points of the grid cell in which the current pixel is located.
Referring to FIG. 19, in some embodiments the first correction module 13 includes a first determination unit 132, a second determination unit 134, a first judgment unit 136, a fourth processing unit 138, and a fifth processing unit 131. The first determination unit 132 is configured to determine a two-dimensional gain grid, the two-dimensional gain grid including gain grid points distributed over the raw frame at fixed horizontal and vertical intervals. The second determination unit 134 is configured to determine a current pixel and an active processing region, the active processing region including the region in which lens shading correction is performed. The first judgment unit 136 is configured to judge whether the current pixel is on a gain grid point. The fourth processing unit 138 is configured to, when the current pixel is on a particular grid point among the gain grid points, use the gain value of that grid point as the gain value of the current pixel. The fifth processing unit 131 is configured to, when the current pixel is not on a gain grid point, determine the gain value of the current pixel using the upper-left, upper-right, lower-left, and lower-right grid points of the grid cell in which the current pixel is located.
Referring to FIG. 20, in some embodiments the gain value of the current pixel determined from the upper-left, upper-right, lower-left, and lower-right grid points of the grid cell in which the current pixel is located can be determined by the following formula:

G = [G0 × (X - ii) × (Y - jj) + G1 × ii × (Y - jj) + G2 × (X - ii) × jj + G3 × ii × jj] / (X × Y);

where G is the gain value of the current pixel; G0, G1, G2, and G3 are the gain values of the upper-left, upper-right, lower-left, and lower-right grid points, respectively; X and Y are the horizontal and vertical sizes of the grid interval of the two-dimensional gain grid, respectively; and ii and jj are the horizontal and vertical offsets of the current pixel relative to the upper-left grid point.
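Under the definitions above, the gain interpolation can be sketched as standard bilinear weighting (a sketch; the function name and argument order are assumptions):

```python
def lsc_gain(g0, g1, g2, g3, ii, jj, x_size, y_size):
    """Bilinearly interpolate the lens-shading gain of a pixel in a grid cell.

    g0..g3: gains at the upper-left, upper-right, lower-left, and lower-right
            grid points of the cell.
    ii, jj: horizontal/vertical offset of the pixel from the upper-left point.
    x_size, y_size: horizontal/vertical grid interval of the gain grid.
    """
    w00 = (x_size - ii) * (y_size - jj)  # weight of upper-left gain
    w01 = ii * (y_size - jj)             # weight of upper-right gain
    w10 = (x_size - ii) * jj             # weight of lower-left gain
    w11 = ii * jj                        # weight of lower-right gain
    return (g0 * w00 + g1 * w01 + g2 * w10 + g3 * w11) / (x_size * y_size)
```

At a corner the result equals that corner's gain, and the four weights always sum to the cell area, so a uniform grid yields a uniform gain.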
Referring to FIG. 21, in some embodiments the image processing method of the embodiments of the present application further includes the following step:

S15: performing defect pixel correction on the initially de-blacked image.

Since an image sensor contains numerous elements, defect pixels easily occur. Performing defect pixel correction on the initially de-blacked image can eliminate defect pixels and reduce their influence on subsequent processing.

Referring to FIG. 22, the image processing apparatus 10 of the embodiments of the present application further includes a second correction module 15. The second correction module 15 is configured to perform defect pixel correction on the initially de-blacked image.
Referring to FIG. 23, in some embodiments S15 includes the following steps:

S152: determining a current pixel and a set of adjacent pixels adjacent to the current pixel, each pixel in the set of adjacent pixels being within the raw frame;

S154: computing the pixel-to-pixel gradient between the current pixel and each pixel in the set of adjacent pixels;

S156: determining a pixel gradient threshold (dprTh) and determining the count C of pixel-to-pixel gradients that are less than the pixel gradient threshold (dprTh);

S158: determining a maximum count (dprMaxC);

S151: judging whether the count is less than the maximum count (dprMaxC);

S153: when the count is less than the maximum count (dprMaxC), identifying the current pixel as a defect pixel; and

S155: replacing the pixel value of the current pixel with a replacement value.
Referring to FIG. 24, in some embodiments the second correction module 15 includes a third determination unit 152, a first computation unit 154, a fourth determination unit 156, a fifth determination unit 158, a second judgment unit 151, a sixth processing unit 153, and a seventh processing unit 155. The third determination unit 152 is configured to determine a current pixel and a set of adjacent pixels adjacent to the current pixel, each pixel in the set of adjacent pixels being within the raw frame. The first computation unit 154 is configured to compute the pixel-to-pixel gradient between the current pixel and each pixel in the set of adjacent pixels. The fourth determination unit 156 is configured to determine a pixel gradient threshold (dprTh) and determine the count C of pixel-to-pixel gradients that are less than the pixel gradient threshold (dprTh). The fifth determination unit 158 is configured to determine a maximum count (dprMaxC). The second judgment unit 151 is configured to judge whether the count is less than the maximum count (dprMaxC). The sixth processing unit 153 is configured to, when the count is less than the maximum count (dprMaxC), identify the current pixel as a defect pixel. The seventh processing unit 155 is configured to replace the pixel value of the current pixel with a replacement value.
Referring to FIG. 25, in some embodiments the set of adjacent pixels comprises, in the horizontal direction, pixels P0, P1, P2, and P3 in sequence, with the current pixel P lying between P1 and P2, and the pixel-to-pixel gradients and the count can be determined by the following formulas:

Gk = abs(P - Pk), where 0 ≤ k ≤ 3;

C = Σ (Gk < dprTh ? 1 : 0), where 0 ≤ k ≤ 3.
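The detection logic of steps S152 through S155 can be sketched as follows (illustrative only; the derivation of the replacement value is not specified in this passage, so it is passed in as an argument):

```python
def detect_and_fix_defect(p, neighbors, dpr_th, dpr_max_c, replacement):
    """Defect-pixel check for one pixel, per the gradient/count formulas above.

    p: value of the current pixel; neighbors: [P0, P1, P2, P3].
    A healthy pixel resembles several of its neighbors: if fewer than
    dpr_max_c of the gradients fall below dpr_th, the pixel is flagged
    as defective and replaced.
    Returns (output value, is_defect).
    """
    gradients = [abs(p - pk) for pk in neighbors]   # Gk = abs(P - Pk)
    count = sum(1 for g in gradients if g < dpr_th) # count C
    if count < dpr_max_c:                           # defect pixel
        return replacement, True
    return p, False
```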
Referring to FIG. 26, in some embodiments the image processing method of the embodiments of the present application further includes the following steps:

S17: processing the de-blacked image to obtain a pre-tone-mapping image;

S19: applying local tone mapping to the pre-tone-mapping image to obtain a local tone-mapped image; and

S1X: processing the local tone-mapped image to obtain an output image.

Referring to FIG. 27, the image processing apparatus 10 of the embodiments of the present application further includes:

a fifth processing module 17, the fifth processing module being configured to process the de-blacked image to obtain a pre-tone-mapping image;

a sixth processing module 19, the sixth processing module being configured to apply local tone mapping to the pre-tone-mapping image to obtain a local tone-mapped image; and

a seventh processing module 1X, the seventh processing module being configured to process the local tone-mapped image to obtain an output image.
In image processing, tone mapping techniques can be used to map one set of pixel values to another set of pixel values. When the input image and the output image have different bit precisions, tone mapping can be used to map input image values to corresponding values in the output range of the output image. Tone mapping techniques include local tone mapping techniques and global tone mapping techniques. Compared with global tone mapping techniques, local tone mapping techniques can improve local contrast and output images with better local contrast, which is aesthetically more satisfying to viewers. Therefore, applying local tone mapping to the de-blacked image can improve the contrast of the local tone-mapped image.
Referring to FIG. 28, in some embodiments S19 includes the following steps:

S192: dividing the pre-tone-mapping image into multiple portions based on local features of the pre-tone-mapping image;

S194: determining a total available output dynamic range for a current portion;

S196: determining an output dynamic range based on the total available output dynamic range, the output dynamic range being 60% to 70% of the total available output dynamic range;

S198: determining an actual dynamic range in the current portion; and

S191: mapping the actual dynamic range to the output dynamic range.
Referring to FIG. 29, in some embodiments the sixth processing module 19 includes an eighth processing unit 192, a sixth determination unit 194, a seventh determination unit 196, an eighth determination unit 198, and a ninth processing unit 191. The eighth processing unit 192 is configured to divide the pre-tone-mapping image into multiple portions based on local features of the pre-tone-mapping image. The sixth determination unit 194 is configured to determine a total available output dynamic range for a current portion. The seventh determination unit 196 is configured to determine an output dynamic range based on the total available output dynamic range, the output dynamic range being 60% to 70% of the total available output dynamic range. The eighth determination unit 198 is configured to determine an actual dynamic range in the current portion. The ninth processing unit 191 is configured to map the actual dynamic range to the output dynamic range.

Limiting the output dynamic range to within 90% of the total available output dynamic range weakens the local tone mapping, thereby making the noise of the local tone-mapped image less noticeable.
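The per-portion mapping of steps S192 through S191 can be sketched as a linear rescale (an illustrative sketch; using min/max as the actual dynamic range, anchoring the output at zero, and the 0.65 default fraction are assumptions not stated in the patent):

```python
def local_tone_map(portion, total_out_range, out_fraction=0.65):
    """Tone-map one portion of the pre-tone-mapping image.

    portion: list of pixel values in one portion.
    total_out_range: total available output dynamic range, e.g. 255 for 8 bits.
    out_fraction: the 60%-70% limit of step S196 (0.65 is an assumed midpoint).
    Maps the portion's actual dynamic range [lo, hi] linearly onto
    [0, out_fraction * total_out_range].
    """
    lo, hi = min(portion), max(portion)          # actual dynamic range (S198)
    out_range = out_fraction * total_out_range   # output dynamic range (S196)
    if hi == lo:                                 # flat portion: nothing to map
        return [0 for _ in portion]
    scale = out_range / (hi - lo)
    return [(v - lo) * scale for v in portion]   # S191
```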
Referring to FIG. 30, in some embodiments S19 includes the following steps:

S192: dividing the pre-tone-mapping image into multiple portions based on local features of the pre-tone-mapping image;

S194: determining a total available output dynamic range for a current portion;

S196: determining an output dynamic range based on the total available output dynamic range, the output dynamic range being 60% to 70% of the total available output dynamic range;

S198: determining the actual dynamic range of the current portion;

S193: determining a total available dynamic range of the current portion;

S195: judging whether the actual dynamic range is less than the total available dynamic range; and

S197: when the actual dynamic range is less than the total available dynamic range, extending the actual dynamic range by mapping it to the total available dynamic range to obtain an extended actual dynamic range, and mapping the extended actual dynamic range to the output dynamic range.
Referring to FIG. 31, in some embodiments the sixth processing module 19 includes an eighth processing unit 192, a sixth determination unit 194, a seventh determination unit 196, an eighth determination unit 198, a ninth determination unit 193, a third judgment unit 195, and a tenth processing unit 197. The eighth processing unit 192 is configured to divide the pre-tone-mapping image into multiple portions based on local features of the pre-tone-mapping image. The sixth determination unit 194 is configured to determine a total available output dynamic range for a current portion. The seventh determination unit 196 is configured to determine an output dynamic range based on the total available output dynamic range, the output dynamic range being 60% to 70% of the total available output dynamic range. The eighth determination unit 198 is configured to determine the actual dynamic range of the current portion. The ninth determination unit 193 is configured to determine a total available dynamic range of the current portion. The third judgment unit 195 is configured to judge whether the actual dynamic range is less than the total available dynamic range. The tenth processing unit 197 is configured to, when the actual dynamic range is less than the total available dynamic range, extend the actual dynamic range by mapping it to the total available dynamic range to obtain an extended actual dynamic range, and map the extended actual dynamic range to the output dynamic range.

Limiting the output dynamic range to within 90% of the total available output dynamic range weakens the local tone mapping, thereby making the noise of the local tone-mapped image less noticeable. In addition, local tone mapping techniques generally do not consider whether unused values, or ranges of values, are mapped; this causes part of the output values to be used to represent input values that are not actually present in the current portion, thereby reducing the number of available output values that can represent the input values actually present in the current portion. When the actual dynamic range of the current portion is less than the total available dynamic range, extending the actual dynamic range to the total available dynamic range and then mapping it to the output dynamic range solves this problem, so that the entire output range can be utilized by the input values of the current portion.
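The range-extension variant of FIG. 30 can be sketched similarly (again illustrative; the linear stretch and the parameter names are assumptions):

```python
def local_tone_map_extended(portion, total_in_range, total_out_range,
                            out_fraction=0.65):
    """When the portion's actual dynamic range is smaller than its total
    available dynamic range, first stretch it to the full available range
    (step S197), then map to the output dynamic range.

    total_in_range: total available dynamic range of the portion, e.g. 255.
    """
    lo, hi = min(portion), max(portion)            # actual dynamic range (S198)
    if hi > lo and hi - lo < total_in_range:       # S195: extension needed
        stretch = total_in_range / (hi - lo)
        portion = [(v - lo) * stretch for v in portion]  # extended range
        lo, hi = 0, total_in_range
    out_range = out_fraction * total_out_range     # S196 limit
    scale = out_range / (hi - lo) if hi > lo else 0.0
    return [(v - lo) * scale for v in portion]
```

After the stretch, the portion's values span the whole available input range, so the whole (limited) output range is used, as the preceding paragraph argues.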
Embodiments of the present application further provide a computer-readable storage medium: a non-volatile computer-readable storage medium containing computer-executable instructions. When the computer-executable instructions are executed by one or more processors, the processor(s) perform the following steps:

S12: processing an image to be processed to subtract a predetermined proportion of a black level from the pixel value of each pixel of the image to be processed to obtain an initially de-blacked image;

S14: denoising the initially de-blacked image to obtain a noise-reduced image; and

S16: processing the noise-reduced image to subtract the remaining proportion of the black level from the pixel value of each pixel of the noise-reduced image to obtain a de-blacked image, wherein the remaining proportion and the predetermined proportion sum to 1, and the remaining proportion is greater than the predetermined proportion.
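The two-stage black-level removal that these steps describe can be sketched as follows (a sketch under assumptions: `denoise` is a caller-supplied stand-in for step S14, and clamping at zero is an implementation choice not stated in the text):

```python
def split_black_level_pipeline(image, black_level, predetermined_ratio,
                               denoise):
    """Remove the black level in two stages with denoising in between.

    predetermined_ratio + remaining_ratio = 1, and the remaining ratio
    must be the larger share (so predetermined_ratio < 0.5).
    """
    assert 0 < predetermined_ratio < 0.5
    # S12: subtract a small predetermined share of the black level.
    initial = [max(p - predetermined_ratio * black_level, 0) for p in image]
    # S14: denoise the initially de-blacked image.
    denoised = denoise(initial)
    # S16: subtract the remaining share of the black level.
    remaining_ratio = 1 - predetermined_ratio
    return [max(p - remaining_ratio * black_level, 0) for p in denoised]
```

With an identity `denoise`, the net effect is subtraction of the full black level; the point of the split is that the denoiser sees data whose black level is mostly intact.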
FIG. 32 is a schematic diagram of the internal structure of a computer device according to one embodiment. As shown in FIG. 32, the computer device 100 includes a processor 40, a memory 60 (for example, a non-volatile storage medium), an internal memory 80, a display screen 30, and an input device 50, connected via a system bus 20. The memory 60 of the computer device 100 stores an operating system and computer-readable instructions. The computer-readable instructions can be executed by the processor 40 to implement the image processing method of the embodiments of the present application. The processor 40 provides computing and control capability to support the operation of the entire computer device 100. The internal memory 80 of the computer device 100 provides an environment for running the computer-readable instructions in the memory 60. The display screen 30 of the computer device 100 may be a liquid crystal display screen, an electronic ink display screen, or the like; the input device 50 may be a touch layer covering the display screen 30, a button, trackball, or trackpad provided on the housing of the computer device 100, or an external keyboard, trackpad, or mouse. The computer device 100 may be a mobile phone, a tablet computer, a laptop, a personal digital assistant, or a wearable device (such as a smart bracelet, a smartwatch, a smart helmet, or smart glasses). Those skilled in the art will understand that the structure shown in FIG. 32 is only a schematic diagram of the part of the structure relevant to the solution of the present application and does not limit the computer device 100 to which the solution of the present application is applied; a particular computer device 100 may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
Referring to FIG. 33, the computer device 100 of the embodiments of the present application includes an image processing circuit 70. The image processing circuit 70 can be implemented using hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. FIG. 33 is a schematic diagram of the image processing circuit 70 in one embodiment. As shown in FIG. 33, for ease of illustration, only the aspects of the image processing technology relevant to the embodiments of the present application are shown.

As shown in FIG. 33, the image processing circuit 70 includes an ISP processor 72 (the ISP processor 72 may be the processor 40 or a part of the processor 40) and a control logic 74. Image data captured by a camera 76 is first processed by the ISP processor 72, which analyzes the image data to capture image statistics that can be used to determine one or more control parameters of the camera 76. The camera 76 may include one or more lenses 762 and an image sensor 764. The image sensor 764 may include a color filter array (such as a Bayer filter); the image sensor 764 can acquire the light intensity and wavelength information captured by each imaging pixel and provide a set of raw image data that can be processed by the ISP processor 72. A sensor 78 (such as a gyroscope) can provide acquired image processing parameters (such as image stabilization parameters) to the ISP processor 72 based on the interface type of the sensor 78. The sensor 78 interface may be an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of the above interfaces.

In addition, the image sensor 764 can also send the raw image data to the sensor 78; the sensor 78 can provide the raw image data to the ISP processor 72 based on the interface type of the sensor 78, or store the raw image data in an image memory 71.
The ISP processor 72 processes the raw image data pixel by pixel in various formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits; the ISP processor 72 can perform one or more image processing operations on the raw image data and collect statistics about the image data. The image processing operations can be performed with the same or different bit-depth precision.

The ISP processor 72 can also receive image data from the image memory 71. For example, the sensor 78 interface sends raw image data to the image memory 71, and the raw image data in the image memory 71 is then provided to the ISP processor 72 for processing. The image memory 71 may be the memory 60, a part of the memory 60, a storage device, or an independent dedicated memory within an electronic device, and may include a DMA (Direct Memory Access) feature.

Upon receiving raw image data from the image sensor 764 interface, the sensor 78 interface, or the image memory 71, the ISP processor 72 can perform one or more image processing operations, such as temporal filtering. The processed image data can be sent to the image memory 71 for additional processing before being displayed. The ISP processor 72 receives the processed data from the image memory 71 and processes it in the raw domain and in the RGB and YCbCr color spaces. The image data processed by the ISP processor 72 can be output to a display 73 (the display 73 may include the display screen 30) for viewing by the user and/or further processing by a graphics engine or a GPU (Graphics Processing Unit). In addition, the output of the ISP processor 72 can also be sent to the image memory 71, and the display 73 can read the image data from the image memory 71. In one embodiment, the image memory 71 can be configured to implement one or more frame buffers. Furthermore, the output of the ISP processor 72 can be sent to an encoder/decoder 75 to encode/decode the image data. The encoded image data can be saved and decompressed before being displayed on the display 73. The encoder/decoder 75 can be implemented by a CPU, a GPU, or a coprocessor.

The statistics determined by the ISP processor 72 can be sent to the control logic 74. For example, the statistics may include image sensor 764 statistics such as automatic exposure, automatic white balance, automatic focus, flicker detection, black level compensation, and lens 762 shading correction. The control logic 74 may include a processor and/or a microcontroller that executes one or more routines (such as firmware); the one or more routines can determine the control parameters of the camera 76 and the control parameters of the ISP processor 72 based on the received statistics. For example, the control parameters of the camera 76 may include sensor 78 control parameters (such as gain, integration time for exposure control, and image stabilization parameters), camera flash control parameters, lens 762 control parameters (such as focus or zoom focal length), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (for example, during RGB processing), and lens 762 shading correction parameters.
The following are the steps of implementing the image processing method with the image processing technology of FIG. 15:

S12: processing an image to be processed to subtract a predetermined proportion of a black level from the pixel value of each pixel of the image to be processed to obtain an initially de-blacked image;

S14: denoising the initially de-blacked image to obtain a noise-reduced image; and

S16: processing the noise-reduced image to subtract the remaining proportion of the black level from the pixel value of each pixel of the noise-reduced image to obtain a de-blacked image, wherein the remaining proportion and the predetermined proportion sum to 1, and the remaining proportion is greater than the predetermined proportion.
Those of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments can be implemented by instructing relevant hardware through a computer program. The program can be stored in a non-volatile computer-readable storage medium, and when executed, the program can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), or the like.

The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the claims of the present application. It should be noted that those of ordinary skill in the art can make various modifications and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of the present patent application shall be subject to the appended claims.

Claims (20)

1. An image processing method, characterized in that it comprises the following steps:
processing an image to be processed to subtract a predetermined proportion of a black level from the pixel value of each pixel of the image to be processed to obtain an initially de-blacked image;
denoising the initially de-blacked image to obtain a noise-reduced image; and
processing the noise-reduced image to subtract a remaining proportion of the black level from the pixel value of each pixel of the noise-reduced image to obtain a de-blacked image, wherein the remaining proportion and the predetermined proportion sum to 1, and the remaining proportion is greater than the predetermined proportion.
2. The image processing method according to claim 1, characterized in that the predetermined proportion is less than or equal to 10%.
3. The image processing method according to claim 1, characterized in that the step of denoising the initially de-blacked image to obtain a noise-reduced image comprises the following steps:
correcting an inconsistency between two adjacent green pixels in a diagonal direction in the initially de-blacked image to obtain a green-consistent image;
applying horizontal filtering to the green-consistent image to obtain a horizontally filtered image; and
applying vertical filtering to the horizontally filtered image to obtain the noise-reduced image.
4. The image processing method according to claim 3, characterized in that the step of correcting the inconsistency between two adjacent green pixels in the diagonal direction in the initially de-blacked image to obtain the green-consistent image comprises the following steps:
determining a green inconsistency correction threshold;
determining the pixel value of a current green pixel and the pixel value of a lower-right green pixel located at the lower right of the current green pixel;
computing the difference between the pixel value of the current green pixel and the pixel value of the lower-right green pixel;
judging whether the difference is less than the green inconsistency correction threshold; and
when the difference is less than the green inconsistency correction threshold, replacing the pixel value of the current green pixel and the pixel value of the lower-right green pixel with the average of the pixel value of the current green pixel and the pixel value of the lower-right green pixel.
5. The image processing method according to claim 3, characterized in that the step of applying horizontal filtering to the green-consistent image to obtain the horizontally filtered image comprises the following steps:
determining a horizontal filter, the horizontal filter including N taps, the N taps including a center tap;
determining a horizontal filtering pixel row, the horizontal filtering pixel row including N pixels corresponding to the N taps respectively, the horizontal filtering pixel row including a center pixel;
computing the horizontal gradient of an edge across each of the N taps;
determining a horizontal edge threshold;
judging whether each horizontal gradient is higher than the horizontal edge threshold;
when a horizontal gradient is higher than the horizontal edge threshold, folding the tap corresponding to the horizontal gradient onto the center pixel; and
determining a horizontal filtering output based on the horizontal gradients.
6. The image processing method according to claim 3, characterized in that the step of applying vertical filtering to the horizontally filtered image to obtain the noise-reduced image comprises the following steps:
determining a vertical filter, the vertical filter including N taps, the N taps including a center tap;
determining a vertical filtering pixel column, the vertical filtering pixel column including N pixels corresponding to the N taps respectively, the vertical filtering pixel column including a center pixel;
computing the vertical gradient of an edge across each of the N taps;
determining a vertical edge threshold;
judging whether each vertical gradient is higher than the vertical edge threshold;
when a vertical gradient is higher than the vertical edge threshold, folding the tap corresponding to the vertical gradient onto the center pixel; and
determining a vertical filtering output based on the vertical gradients.
7. The image processing method according to claim 1, characterized in that the image processing method further comprises the following steps:
processing the de-blacked image to obtain a pre-tone-mapping image;
applying local tone mapping to the pre-tone-mapping image to obtain a local tone-mapped image; and
processing the local tone-mapped image to obtain an output image.
8. The image processing method according to claim 7, characterized in that the step of applying local tone mapping to the pre-tone-mapping image to obtain the local tone-mapped image comprises:
dividing the pre-tone-mapping image into multiple portions based on local features of the pre-tone-mapping image;
determining a total available output dynamic range for a current portion;
determining an output dynamic range based on the total available output dynamic range, the output dynamic range being 60% to 70% of the total available output dynamic range;
determining an actual dynamic range in the current portion; and
mapping the actual dynamic range to the output dynamic range.
9. The image processing method according to claim 7, characterized in that the step of applying local tone mapping to the pre-tone-mapping image to obtain the local tone-mapped image comprises:
dividing the pre-tone-mapping image into multiple portions based on local features of the pre-tone-mapping image;
determining a total available output dynamic range for a current portion;
determining an output dynamic range based on the total available output dynamic range, the output dynamic range being 60% to 70% of the total available output dynamic range;
determining the actual dynamic range of the current portion;
determining a total available dynamic range of the current portion;
judging whether the actual dynamic range is less than the total available dynamic range; and
when the actual dynamic range is less than the total available dynamic range, extending the actual dynamic range by mapping the actual dynamic range to the total available dynamic range to obtain an extended actual dynamic range, and mapping the extended actual dynamic range to the output dynamic range.
10. An image processing apparatus, characterized in that it comprises:
a first processing module configured to process an image to be processed to subtract a predetermined proportion of a black level from the pixel value of each pixel of the image to be processed to obtain an initially de-blacked image;
a second processing module configured to denoise the initially de-blacked image to obtain a noise-reduced image; and
a third processing module configured to process the noise-reduced image to subtract a remaining proportion of the black level from the pixel value of each pixel of the noise-reduced image to obtain a de-blacked image, wherein the remaining proportion and the predetermined proportion sum to 1, and the remaining proportion is greater than the predetermined proportion.
11. The image processing apparatus according to claim 10, characterized in that the predetermined proportion is less than 10%.
12. The image processing apparatus according to claim 10, characterized in that the second processing module comprises:
a first processing unit configured to correct an inconsistency between two adjacent green pixels in a diagonal direction in the initially de-blacked image to obtain a green-consistent image;
a second processing unit configured to apply horizontal filtering to the green-consistent image to obtain a horizontally filtered image; and
a third processing unit configured to apply vertical filtering to the horizontally filtered image to obtain the noise-reduced image.
13. The image processing apparatus of claim 12, wherein the first processing unit comprises:
a first determining subunit configured to determine a green-inconsistency correction threshold;
a second determining subunit configured to determine the pixel value of a current green pixel and the pixel value of a lower-right green pixel located at the lower right of the current green pixel;
a first computing subunit configured to compute the difference between the pixel value of the current green pixel and the pixel value of the lower-right green pixel;
a first judging subunit configured to judge whether the difference is less than the green-inconsistency correction threshold; and
a first processing subunit configured to, when the difference is less than the green-inconsistency correction threshold, replace the pixel value of the current green pixel and the pixel value of the lower-right green pixel with the average of the two pixel values.
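The green-inconsistency (Gr/Gb imbalance) correction of claim 13 might look like the sketch below. The RGGB layout assumed for locating each green pixel and its lower-right green neighbour, and the use of an absolute difference, are assumptions beyond the claim text:

```python
import numpy as np

def correct_green_imbalance(bayer, threshold):
    """Sketch: average diagonally adjacent green pairs whose difference
    is below the green-inconsistency correction threshold.

    Assumes an RGGB mosaic, so Gr sits at (even row, odd column) and its
    diagonal Gb neighbour one step down and to the right."""
    out = bayer.astype(np.float64).copy()
    h, w = out.shape
    for r in range(0, h - 1, 2):           # Gr rows (assumed even)
        for c in range(1, w - 1, 2):       # Gr columns (assumed odd)
            cur = out[r, c]                # current green pixel (Gr)
            diag = out[r + 1, c + 1]       # lower-right green pixel (Gb)
            # Only correct small mismatches; large differences are
            # treated as real image detail and left untouched.
            if abs(cur - diag) < threshold:
                avg = (cur + diag) / 2.0
                out[r, c] = avg
                out[r + 1, c + 1] = avg
    return out
```

Averaging only below the threshold removes the sensor's channel imbalance while preserving genuine diagonal edges.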
14. The image processing apparatus of claim 12, wherein the second processing unit comprises:
a third determining subunit configured to determine a horizontal filter, the horizontal filter comprising N taps including a center tap;
a fourth determining subunit configured to determine a horizontal filtering pixel row, the horizontal filtering pixel row comprising N pixels corresponding respectively to the N taps and including a center pixel;
a second computing subunit configured to compute the horizontal gradient across the edge of each of the N taps;
a fifth determining subunit configured to determine a horizontal edge threshold;
a second judging subunit configured to judge whether each horizontal gradient is higher than the horizontal edge threshold;
a second processing subunit configured to, when a horizontal gradient is higher than the horizontal edge threshold, fold the tap corresponding to that horizontal gradient into the center pixel; and
a sixth determining subunit configured to determine a horizontal filtering output based on the horizontal gradients.
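One plausible reading of the edge-adaptive N-tap filter of claim 14 is sketched below, taking the tap-to-center difference as the "horizontal gradient across the edge of each tap"; the normalized kernel `coeffs` is hypothetical. The vertical pass of claim 15 is the same logic applied down a pixel column:

```python
import numpy as np

def horizontal_edge_filter(row, center, coeffs, edge_threshold):
    """Sketch of an N-tap edge-adaptive horizontal filter.

    `coeffs` is a hypothetical normalized kernel (sums to 1) centred on
    index `center` of `row`. Taps whose gradient exceeds the horizontal
    edge threshold are "folded" into the center pixel: their coefficient
    is applied to the center value instead, so edges are not smeared."""
    n = len(coeffs)
    half = n // 2
    acc = 0.0
    for k in range(n):
        pixel = row[center - half + k]
        # Gradient across this tap (one plausible reading: difference
        # between the tap pixel and the center pixel).
        gradient = abs(pixel - row[center])
        if gradient > edge_threshold:
            pixel = row[center]  # fold this tap into the center pixel
        acc += coeffs[k] * pixel
    return acc
```

With a low threshold the filter degenerates toward the identity at edges; with a high threshold it behaves as a plain FIR smoother.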
15. The image processing apparatus of claim 12, wherein the third processing unit comprises:
a seventh determining subunit configured to determine a vertical filter, the vertical filter comprising N taps including a center tap;
an eighth determining subunit configured to determine a vertical filtering pixel column, the vertical filtering pixel column comprising N pixels corresponding respectively to the N taps and including a center pixel;
a third computing subunit configured to compute the vertical gradient across the edge of each of the N taps;
a ninth determining subunit configured to determine a vertical edge threshold;
a third judging subunit configured to judge whether each vertical gradient is higher than the vertical edge threshold;
a third processing subunit configured to, when a vertical gradient is higher than the vertical edge threshold, fold the tap corresponding to that vertical gradient into the center pixel; and
a tenth determining subunit configured to determine a vertical filtering output based on the vertical gradients.
16. The image processing apparatus of claim 10, wherein the image processing apparatus comprises:
a fifth processing module configured to process the de-blacked image to obtain a pre-tone-mapped image;
a sixth processing module configured to apply local tone mapping to the pre-tone-mapped image to obtain a locally tone-mapped image; and
a seventh processing module configured to process the locally tone-mapped image to obtain an output image.
17. The image processing apparatus of claim 16, wherein the sixth processing module comprises:
an eighth processing unit configured to divide the pre-tone-mapped image into a plurality of portions based on local features of the pre-tone-mapped image;
a sixth determining unit configured to determine a total available output dynamic range of a current portion;
a seventh determining unit configured to determine an output dynamic range based on the total available output dynamic range, the output dynamic range being 60% to 70% of the total available output dynamic range;
an eighth determining unit configured to determine an actual dynamic range of the current portion; and
a ninth processing unit configured to map the actual dynamic range to the output dynamic range.
18. The image processing apparatus of claim 16, wherein the sixth processing module comprises:
an eighth processing unit configured to divide the pre-tone-mapped image into a plurality of portions based on local features of the pre-tone-mapped image;
a sixth determining unit configured to determine a total available output dynamic range of a current portion;
a seventh determining unit configured to determine an output dynamic range based on the total available output dynamic range, the output dynamic range being 60% to 70% of the total available output dynamic range;
an eighth determining unit configured to determine an actual dynamic range of the current portion;
a ninth determining unit configured to determine a total available dynamic range of the current portion;
a third judging unit configured to judge whether the actual dynamic range is less than the total available dynamic range; and
a tenth processing unit configured to, when the actual dynamic range is less than the total available dynamic range, extend the actual dynamic range by mapping the actual dynamic range to the total available dynamic range to obtain an extended actual dynamic range, and map the extended actual dynamic range to the output dynamic range.
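The local tone mapping of claims 17 and 18 could be sketched as below. The uniform row split (the patent divides by local features), the min/max proxies for the dynamic ranges, and `output_fraction=0.65` (a value inside the claimed 60%–70% band) are all assumptions:

```python
import numpy as np

def local_tone_map(image, n_blocks=2, output_max=255.0, output_fraction=0.65):
    """Sketch of per-portion tone mapping with dynamic-range extension.

    Each portion whose actual range falls short of the total available
    range is first stretched to fill that range, then mapped onto an
    output range that is a fixed fraction (60%-70%) of the total
    available output range, leaving headroom for later processing."""
    out = np.empty_like(image, dtype=np.float64)
    total_available = image.max() - image.min()   # illustrative proxy
    output_range = output_fraction * output_max   # 60%-70% of output range
    for block in np.array_split(np.arange(image.shape[0]), n_blocks):
        part = image[block].astype(np.float64)
        actual = part.max() - part.min()
        if 0.0 < actual < total_available:
            # Extend the actual range to the total available range...
            part = (part - part.min()) * (total_available / actual)
        # ...then map the (extended) range onto the output range.
        span = part.max() - part.min()
        out[block] = (part - part.min()) / span * output_range if span else 0.0
    return out
```

Reserving only 60%–70% of the output range per portion keeps headroom so that locally stretched portions do not clip when the portions are blended back together.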
19. one or more includes the non-volatile computer readable storage medium storing program for executing of computer executable instructions, when the calculating When machine executable instruction is executed by one or more processors so that the processor perform claim requires any one of 1 to 9 institute The image processing method stated.
20. A computer device comprising a memory and a processor, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to perform the image processing method of any one of claims 1 to 9.
CN201711465123.2A 2017-12-28 2017-12-28 Image processing method and device, computer readable storage medium and computer device Active CN108111785B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711465123.2A CN108111785B (en) 2017-12-28 2017-12-28 Image processing method and device, computer readable storage medium and computer device


Publications (2)

Publication Number Publication Date
CN108111785A (en) 2018-06-01
CN108111785B CN108111785B (en) 2020-05-15

Family

ID=62214320

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711465123.2A Active CN108111785B (en) 2017-12-28 2017-12-28 Image processing method and device, computer readable storage medium and computer device

Country Status (1)

Country Link
CN (1) CN108111785B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111193859A (en) * 2019-03-29 2020-05-22 安庆市汇智科技咨询服务有限公司 Image processing system and work flow thereof
CN112487424A (en) * 2020-11-18 2021-03-12 重庆第二师范学院 Computer processing system and computer processing method

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6982705B2 (en) * 2002-02-27 2006-01-03 Victor Company Of Japan, Ltd. Imaging apparatus and method of optical-black clamping
KR20070070990A (en) * 2005-12-29 2007-07-04 매그나칩 반도체 유한회사 Image sensor having automatic black level compensation function and method for compensating black level automatically
CN101035195A (en) * 2006-03-08 2007-09-12 深圳Tcl新技术有限公司 Adjusting method for the image quality
CN101076080A (en) * 2006-05-18 2007-11-21 富士胶片株式会社 Image-data noise reduction apparatus and method of controlling same
CN101086786A (en) * 2006-06-07 2007-12-12 富士施乐株式会社 Image generating apparatus, image processing apparatus and computer readable recording medium
CN102131040A (en) * 2010-06-04 2011-07-20 苹果公司 Adaptive lens shading correction
CN102457684A (en) * 2010-10-21 2012-05-16 英属开曼群岛商恒景科技股份有限公司 Black level calibration method and system for same
CN103795942A (en) * 2014-01-23 2014-05-14 中国科学院长春光学精密机械与物理研究所 Smear correction method of frame transfer CCD on basis of virtual reference lines
US20140152844A1 (en) * 2012-09-21 2014-06-05 Sionyx, Inc. Black level calibration methods for image sensors
CN104221364A (en) * 2012-04-20 2014-12-17 株式会社理光 Imaging device and image processing method
CN105578082A (en) * 2016-01-29 2016-05-11 深圳市高巨创新科技开发有限公司 adaptive black level correction method


Also Published As

Publication number Publication date
CN108111785B (en) 2020-05-15

Similar Documents

Publication Publication Date Title
CN213279832U (en) Image sensor, camera and terminal
CN108322669B (en) Image acquisition method and apparatus, imaging apparatus, and readable storage medium
US8537234B2 (en) Image restoration with enhanced filtering
CN107493432B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN108419022A (en) Control method, control device, computer readable storage medium and computer equipment
CN109348088B (en) Image noise reduction method and device, electronic equipment and computer readable storage medium
CN107959851B (en) Colour temperature detection method and device, computer readable storage medium and computer equipment
CN109089046B (en) Image noise reduction method and device, computer readable storage medium and electronic equipment
CN107481186B (en) Image processing method, image processing device, computer-readable storage medium and computer equipment
CN107872663B (en) Image processing method and device, computer readable storage medium and computer equipment
CN111711755B (en) Image processing method and device, terminal and computer readable storage medium
CN112118378A (en) Image acquisition method and device, terminal and computer readable storage medium
CN109076139A Parallax mask fusion of color and monochrome images for macro photography
CN107395991A (en) Image combining method, device, computer-readable recording medium and computer equipment
CN107563979B (en) Image processing method, image processing device, computer-readable storage medium and computer equipment
CN107194900A (en) Image processing method, device, computer-readable recording medium and mobile terminal
CN102984527A (en) Image processing apparatus, image processing method, information recording medium, and program
US8379977B2 (en) Method for removing color fringe in digital image
CN107704798A (en) Image weakening method, device, computer-readable recording medium and computer equipment
CN110717871A (en) Image processing method, image processing device, storage medium and electronic equipment
CN107194901B (en) Image processing method, image processing device, computer equipment and computer readable storage medium
CN110740266B (en) Image frame selection method and device, storage medium and electronic equipment
CN108111785A (en) Image processing method and device, computer readable storage medium and computer equipment
CN111711766B (en) Image processing method and device, terminal and computer readable storage medium
CN107454317B (en) Image processing method, image processing device, computer-readable storage medium and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 18 Wusha Beach Road, Chang'an Town, Dongguan, Guangdong 523860

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., Ltd.

Address before: No. 18 Wusha Beach Road, Chang'an Town, Dongguan, Guangdong 523860

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., Ltd.

GR01 Patent grant