CN109247067A - Image processing apparatus and image processing method - Google Patents
- Publication number
- CN109247067A (application CN201680085977.5A)
- Authority
- CN
- China
- Prior art keywords
- frame
- image
- image processing
- sequence
- processing apparatus
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/745—Detection of flicker frequency or suppression of flicker wherein the flicker is caused by illumination, e.g. due to fluorescent tube illumination or pulsed LED illumination
Abstract
The invention discloses an image processing apparatus, comprising: an image sensor, configured to capture and output frames of a first sequence of images at a frame rate of 2(n+1)f, where n is an integer greater than or equal to 1 and f is the mains (commercial power supply) frequency; and at least one processor, configured to extract frames of a second sequence of images from the frames of the first sequence of images, and to merge the frames of the second sequence of images to generate a single frame of an image.
Description
Technical field
The present invention relates to an image processing apparatus and an image processing method.
Background Art
When a camera captures an image in a room lit by fluorescent lamps, the captured image may exhibit problems, because the luminous intensity of a fluorescent lamp varies periodically at the mains (commercial power supply) frequency. The spectrum of the fluorescent lamp also varies at the mains frequency.
In the captured image, the brightness of a portion captured during low luminous intensity differs from the brightness of a portion captured during high luminous intensity. Likewise, the colors of these portions differ.
Most cameras use rolling-shutter image sensors, and color banding may appear in the images captured by these cameras.
If the image sensor is of the rolling-shutter type, the color-banding problem may occur. If, however, the image sensor has a global-shutter mode, a phenomenon different from the banding problem may occur: the signal level and the color differ from frame to frame. This phenomenon is called "flicker" (or, more precisely, "fluorescent flicker"). For historical reasons, the banding phenomenon is also sometimes called flicker.
Fig. 1 shows the cause of the color-banding problem.
If the reflectivity R(t) of an object is constant and uniform, the image signal during t0 to t1 can be expressed by formula [Math 1], where I(t) denotes the luminous intensity of the lamp at time t.
[Math 1]
Signal(t0 → t1) = ∫ from t0 to t1 of I(t) · R dt
Because the luminous intensity of the fluorescent lamp varies with time t, the signals during t0 to t1, during t2 to t3, and during t4 to t5 are related as shown in formula [Math 2].
[Math 2]
Signal(t0 → t1) > Signal(t2 → t3) > Signal(t4 → t5)
Therefore, color banding is produced.
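The inequality in [Math 2] can be checked numerically. The sketch below is only an illustration, not part of the patent: it assumes a rectified-sine intensity model I(t) = |sin(2πft)| for the lamp (real lamps also have a DC pedestal) and integrates it over equal-length exposure windows starting at different phases.

```python
import math

def lamp_intensity(t, f=50.0):
    """Toy fluorescent-lamp model: intensity pulses at twice the mains
    frequency f (a rectified sine; an illustrative assumption)."""
    return abs(math.sin(2 * math.pi * f * t))

def signal(t0, t1, steps=10000):
    """Midpoint-rule integral of the intensity over the exposure [t0, t1],
    i.e. [Math 1] with R = 1."""
    dt = (t1 - t0) / steps
    return sum(lamp_intensity(t0 + (i + 0.5) * dt) * dt for i in range(steps))

# Three 2 ms exposures at different phases of the 100 Hz intensity pulse
# (f = 50 Hz) integrate to different signals, i.e. banding, as in [Math 2].
s_a = signal(0.004, 0.006)  # centered on an intensity peak
s_b = signal(0.002, 0.004)
s_c = signal(0.000, 0.002)  # starting at an intensity zero
print(s_a > s_b > s_c)      # True
```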
Fig. 2 shows an example of an image with color banding.
In the related art, when the occurrence of color banding in captured images is detected in some way, the camera switches its automatic exposure (AE) mode from the normal AE control mode to a banding-compensation mode. In the banding-compensation mode, the exposure time is fixed to the value m/(2 × f) seconds, where f is the mains frequency and m is an integer greater than or equal to 1. The value m/(2 × f) equals the half-period of the mains frequency f multiplied by the integer m. In this case, the total incident light during every exposure is constant, even though the luminous intensity of the lamp varies periodically with time at the mains frequency. Formula [Math 3] expresses this, where I(τ) denotes the luminous intensity of the lamp; "(duration of m periods)/2" in [Math 3] equals m/(2 × f), as described above.
[Math 3]
Signal(t → t + (duration of m periods)/2) = ∫ from t to t + m/(2 × f) of I(τ) dτ = constant for any t
In other words, as shown in Fig. 3, the relationship shown in formula [Math 4] holds.
[Math 4]
Signal(t0 → t1) = Signal(t2 → t3) = Signal(t4 → t5)
Therefore, the banding problem is solved. If the mains frequency is 60 Hz, the exposure time should accordingly be one of the following: 1/120 s, 2/120 s, 3/120 s, 4/120 s, and so on. Likewise, if the mains frequency is 50 Hz, the exposure time should accordingly be one of the following: 1/100 s, 2/100 s, 3/100 s, and so on.
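The admissible exposure times m/(2 × f) can be enumerated with a small helper (an illustration, not part of the patent):

```python
from fractions import Fraction

def banding_free_exposures(f, max_m=4):
    """Exposure times m/(2*f), m = 1..max_m, that span whole half-periods
    of the mains intensity and are therefore banding-free."""
    return [Fraction(m, 2 * f) for m in range(1, max_m + 1)]

print(banding_free_exposures(60))  # [1/120, 1/60, 1/40, 1/30]
print(banding_free_exposures(50))  # [1/100, 1/50, 3/100, 1/25]
```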
For a global-shutter image sensor, the technical solution for flicker is the same as the technical solution for the banding problem described above.
The countermeasures against banding and flicker are known as "banding compensation", "banding reduction", "flicker reduction", or "flicker compensation". In the banding-compensation mode, the exposure time is set to a fixed value, as described above. This means that the exposure time is set longer or shorter than the time estimated from the AE point of view.
Fig. 4 shows an example of the relationship between object illumination and exposure time. In the normal mode (dashed line in Fig. 4), the exposure time is controlled to the time estimated from the AE point of view, as described above. In contrast, in the banding-compensation mode (solid line in Fig. 4), the exposure time is controlled to a fixed value, as described above.
In the portions where the solid line is below the dashed line, the output signal is lower than the predicted value, that is, lower than the AE target value. In this case, the output signal is amplified so that the signal level corresponds to the AE target value.
However, in the portions where the solid line is above the dashed line (see the shaded regions in Fig. 4 and Fig. 5), some problems can occur.
First, the reason why the solid line becomes larger than the dashed line will now be described. In the example of Fig. 4, assume that the mains frequency is 50 Hz. Then, as described above, the exposure time should be one of the following for the banding compensation to take effect: 1/100 s, 2/100 s, 3/100 s, and so on. That is, the minimum exposure time at which the banding compensation takes effect is 1/100 s, i.e., 10 ms. Therefore, as shown in Fig. 4, the exposure time on the solid line is held at 10 ms so that the banding compensation can take effect, and within this range the solid line rises above the dashed line.
Now assume that a 10-bit analog-to-digital converter (ADC) is used in the image sensor and that, in the ADC output, the black-point value is set to 64 digital numbers (DN) and the white-point value (that is, the maximum value) is set to 1023 DN.
If the average image-data value is greater than the AE target value, the AE controller calculates an additional gain that is less than 1.0. As a result, the white-point value is reduced to less than 1023 DN.
Fig. 5 shows an example of the problem. As a result of using a gain of less than 1.0, the dynamic range of the image signal, and the corresponding image quality, may decline. Therefore, using a gain of less than 1.0 is undesirable. However, if the gain in case 3 shown in Fig. 4 were instead set to 1.0, the output image signal would be as shown in Fig. 5; that is, the output video level would not be constant in the banding-compensation mode.
Summary of the Invention
The embodiments of the present invention provide an image processing apparatus and an image processing method that solve two technical problems: the banding problem and the reduced dynamic range of the image signal.
To solve the foregoing technical problems, the embodiments of the present invention disclose the following technical solutions.
According to a first aspect, an image processing apparatus includes:
an image sensor, configured to capture and output frames of a first sequence of images at a frame rate of 2(n+1)f, where n is an integer greater than or equal to 1 and f is the mains frequency; and
at least one processor, configured to extract frames of a second sequence of images from the frames of the first sequence of images, and to merge the frames of the second sequence of images to generate a single frame of an image.
The reason the number n in the frame rate 2(n+1)f does not include zero is that, if n were zero, the frame rate would be 2 × f. In that case, as described above in connection with Fig. 3 to Fig. 5, the banding problem could be solved, but the dynamic range might be reduced.
According to the first aspect, when the total exposure time of the image sensor is increased to solve the banding problem, it is possible to ensure a wider dynamic range.
The image processing apparatus according to the first aspect may also be configured as follows: the image sensor captures and outputs the frames of the first sequence of images at the frame rate 2(n+1)f, and, when the at least one processor detects a flicker phenomenon or a banding phenomenon, the at least one processor extracts the frames of the second sequence of images from the frames of the first sequence of images and merges the frames of the second sequence of images to generate the single frame of the image; otherwise, the image sensor captures and outputs the frames of the first sequence of images at a frame rate lower than 2(n+1)f, and the at least one processor neither extracts frames of a second sequence of images from the frames of the first sequence of images nor merges frames of a second sequence of images to generate a single frame of an image.
The image processing apparatus according to the first aspect may also be configured as follows: because the at least one processor merges the frames of the second sequence, the total exposure time of the image sensor is m/(2 × f). Therefore, the banding problem can be solved, as described above in connection with Fig. 3.
In this regard, assuming for example that m = 1, in order to ensure that the total exposure time is 1/(2 × f), the relationship between the frame rate 2(n+1)f and the number of frames of the second sequence is as follows: the higher the frame rate, the larger the number of frames of the second sequence, so that merging the frames of the second sequence by the at least one processor yields the total exposure time 1/(2 × f).
For example, assuming f = 50 and n = 1, the frame rate is 200 fps, and each frame lasts 1/200 s. This means that the exposure time of a single frame is 1/200 s. To ensure that the total exposure time is 1/(2 × f) = 1/100 s, two frames should be merged, that is, 2 × 1/200 = 1/100.
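The relationship between the frame rate 2(n+1)f and the number of merged frames can be sketched as follows (an illustrative helper, not part of the patent):

```python
from fractions import Fraction

def merge_plan(f, n, m=1):
    """Given mains frequency f and integers n >= 1 and m >= 1, return
    (frame_rate, frames_to_merge) so that the merged frames cover a
    total exposure of m/(2*f) seconds, as in the first aspect."""
    frame_rate = 2 * (n + 1) * f
    frames_to_merge = m * (n + 1)  # m(n+1) frames of 1/frame_rate s each
    total = Fraction(frames_to_merge, frame_rate)
    assert total == Fraction(m, 2 * f)  # total exposure is m/(2f)
    return frame_rate, frames_to_merge

print(merge_plan(50, 1))  # (200, 2): two 1/200 s frames -> 1/100 s
print(merge_plan(60, 2))  # (360, 3): three 1/360 s frames -> 1/120 s
```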
In a first possible implementation of the first aspect, the number of frames of the second sequence of images is n + 1.
With the first possible implementation of the first aspect, it is possible to ensure that the total exposure time described above is 1/(2 × f).
In a second possible implementation of the first aspect, the at least one processor is further configured to: before the at least one processor merges the frames of the second sequence of images to generate the single frame of the image, multiply the image data of each frame of the second sequence of images by a gain.
With the second possible implementation of the first aspect, it is possible to obtain a proper signal level for processing by a subsequent image processing apparatus.
In a third possible implementation of the first aspect, the at least one processor is further configured to multiply the image data of the single frame of the image generated by the at least one processor by a gain.
With the third possible implementation of the first aspect, it is possible to obtain a proper signal level for processing by a subsequent image processing apparatus.
According to the first possible implementation, in a fourth possible implementation of the first aspect, the at least one processor is further configured to: before the at least one processor merges the frames of the second sequence of images to generate the single frame of the image, multiply the image data of each frame of the second sequence of images by a gain.
With the fourth possible implementation of the first aspect, it is possible to obtain a proper signal level for processing by a subsequent image processing apparatus.
According to the first possible implementation, in a fifth possible implementation of the first aspect, the at least one processor is further configured to multiply the image data of the single frame of the image generated by the at least one processor by a gain.
With the fifth possible implementation of the first aspect, it is possible to obtain a proper signal level for processing by a subsequent image processing apparatus.
In a sixth possible implementation of the first aspect, the at least one processor is further configured to control the number n and the number of image frames included in the second sequence of images.
With the sixth possible implementation of the first aspect, it is possible to improve controllability.
According to the first possible implementation, in a seventh possible implementation of the first aspect, the at least one processor is further configured to control the number n.
With the seventh possible implementation of the first aspect, it is possible to improve controllability.
According to the second or third possible implementation, in an eighth possible implementation of the first aspect, the at least one processor is further configured to control the number n, the number of image frames included in the second sequence of images, and the gain.
With the eighth possible implementation of the first aspect, it is possible to improve controllability.
According to the fourth or fifth possible implementation, in a ninth possible implementation of the first aspect, the at least one processor is further configured to control the number n and the gain.
With the ninth possible implementation of the first aspect, it is possible to improve controllability.
According to any one of the first to ninth possible implementations, in a tenth possible implementation of the first aspect, the at least one processor is further configured to: when the at least one processor detects a flicker phenomenon or a banding phenomenon, increase the frame rate of the image sensor from a first frame rate lower than 2(n+1)f to a second frame rate of 2(n+1)f.
With the tenth possible implementation of the first aspect, when the at least one processor does not detect the flicker phenomenon or the banding phenomenon, it is possible to reduce the operation load of the at least one processor.
According to a second aspect, an image processing method performed by at least one processor includes:
controlling an image sensor to capture and output frames of a first sequence of images at a frame rate of 2(n+1)f, where n is an integer greater than or equal to 1 and f is the mains frequency; and
extracting frames of a second sequence of images from the frames of the first sequence of images, and merging the frames of the second sequence of images to generate a single frame of an image.
According to the second aspect, when the total exposure time of the image sensor is increased to solve the banding problem, it is possible to ensure a wider dynamic range.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the following briefly describes the accompanying drawings used in describing the embodiments. The accompanying drawings in the following description show some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these accompanying drawings without creative efforts.
Fig. 1 shows the cause of the color-banding problem;
Fig. 2 shows an example of an image with the banding problem;
Fig. 3 shows a method for solving the banding problem;
Fig. 4 shows a problem in the banding-compensation mode;
Fig. 5 shows a problem in the banding-compensation mode;
Fig. 6 shows an example of a hardware block diagram of an image processing apparatus according to an embodiment of the present invention;
Fig. 7 shows an example of a hardware block diagram of the control unit shown in Fig. 6;
Fig. 8 shows an example of a functional block diagram of the control unit;
Fig. 9 shows another example of a functional block diagram of the control unit;
Fig. 10 shows an example of a flowchart of operations performed by the control unit configured with the functional blocks shown in Fig. 8;
Fig. 11 shows an example of a flowchart of operations performed by the control unit configured with the functional blocks shown in Fig. 9;
Fig. 12 shows an example of a process performed by the control unit;
Fig. 13 shows actual examples of settings in the control unit;
Fig. 14 shows an actual comparison example between an embodiment of the present invention and the related art.
Detailed Description of Embodiments
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the following clearly describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings. Like numerals in the drawings denote like elements. The drawings are for illustration only. The described embodiments are some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative efforts shall fall within the protection scope of the present invention.
The purpose of the embodiments of the present invention is to realize a banding-compensation mode (also referred to as a "banding-reduction mode", a "flicker-reduction mode", a "flicker-compensation mode", and so on) that solves the problem of dynamic-range loss caused by a non-smooth AE control scheme.
The embodiments of the present invention have two key points.
One key point is that the image sensor has a high-frame-rate readout capability. For example, the frame rate of the image sensor can be 2(n+1) times the mains frequency f, where n is an integer greater than or equal to 1. A series of frames of images obtained at this frame rate can be merged so that the total exposure time is m/(2 × f) seconds, where m is an integer greater than or equal to 1; in this way the banding problem can be avoided. This is because, as described above in connection with Fig. 3, in this situation the total incident light during every exposure is constant, even though the luminous intensity of the lamp varies periodically with time at the mains frequency.
The other key point is that, in order to keep the average and maximum image-data values of the merged image no greater than the corresponding signal levels used for signal processing, the image data of the original images or of the merged image can be multiplied by a gain. The gain is controlled through the AE function. The AE function is executed by a central processing unit (CPU).
Fig. 6 shows an example of a hardware block diagram of an image processing apparatus according to an embodiment of the present invention. The image processing apparatus may be embodied in a digital camera. The image processing apparatus may also be embodied in a mobile communications device with a camera, such as a cellular phone, a smartphone, or a tablet computer.
The image processing apparatus includes an image sensor 2 and a control unit 1.
The image sensor 2 can be a charge-coupled device (CCD), a complementary metal-oxide-semiconductor (CMOS) sensor, or the like. The image sensor 2 captures images and outputs the image data of the captured images to the control unit 1.
The control unit 1 processes the image data of the images sent from the image sensor 2 in a manner that ensures a wider dynamic range while avoiding the banding problem.
Fig. 7 shows an example of a hardware block diagram of the control unit shown in Fig. 6.
As shown in Fig. 7, the control unit 1 includes a CPU 11, a memory 12, an interface (I/F) 13, and a bus 14 interconnecting these units.
The CPU 11 performs image processing operations to ensure that the image data of the images captured by the image sensor 2 has a wider dynamic range while the banding problem is avoided.
The memory 12 includes, for example, a read-only memory (ROM), a random access memory (RAM), and so on, and stores the various programs and data used by the CPU 11 to perform the image processing operations.
The interface 13 is inserted between the control unit 1 and the image sensor 2 and, by properly converting data/commands where necessary, suitably transmits/receives data/commands between the control unit 1 and the image sensor 2.
The control unit 1 controls the frame rate, the exposure time, and the gain of the image sensor 2. The control unit 1 also controls other settings related to the AE function and to the banding-compensation mode.
In general, the control unit 1 has two functions. One function is to merge a series of frames of images. The other function is to control the signal level by multiplying the raw image data by a gain.
When a flicker/banding phenomenon is detected, the image sensor 2 is set so that the total exposure time of the merged series of frames of images is m/(2 × f), where m is an integer greater than or equal to 1 and f is the mains frequency. The frame rate is switched to a higher rate, namely 2(n+1)f, where n is an integer greater than or equal to 1. In this regard, Fig. 13 shows actual example values.
A series of m(n+1) frames is merged. The total exposure time of these m(n+1) frames is m(n+1)/{2(n+1)f} = m/(2 × f) seconds. As long as the exposure time satisfies this condition, the banding/flicker phenomenon can be eliminated or reduced.
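This condition can be checked numerically under a toy rectified-sine intensity model I(t) = |sin(2πft)| (an illustrative assumption, not part of the patent): summing m(n+1) consecutive frames captured at 2(n+1)f makes the merged signal independent of where the exposure starts within the mains cycle.

```python
import math

def frame_signal(t0, t_exp, f=50.0, steps=2000):
    """Integral of the toy lamp intensity |sin(2*pi*f*t)| over one frame."""
    dt = t_exp / steps
    return sum(abs(math.sin(2 * math.pi * f * (t0 + (i + 0.5) * dt))) * dt
               for i in range(steps))

def merged_signal(t_start, f=50.0, n=1, m=1):
    """Sum of m(n+1) consecutive frames at frame rate 2(n+1)f; the total
    exposure m(n+1)/{2(n+1)f} = m/(2f) covers whole half-periods of the
    intensity, so the sum should not depend on t_start."""
    t_exp = 1.0 / (2 * (n + 1) * f)
    return sum(frame_signal(t_start + k * t_exp, t_exp, f)
               for k in range(m * (n + 1)))

# Starting phases differ, merged signals agree: no banding/flicker.
print(abs(merged_signal(0.0) - merged_signal(0.0031)) < 1e-6)  # True
```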
In the embodiments of the present invention, any flicker/banding detection method is applicable.
Therefore, according to the embodiments of the present invention, the flicker/banding phenomenon can be eliminated or reduced.
In addition, the signal level can be controlled smoothly. This means that, even though the luminous intensity varies periodically with time at the mains frequency, the signal level can be kept at approximately the same value, and, moreover, a wider dynamic range can be maintained.
Fig. 8 shows an example of a functional block diagram of the control unit 1.
Fig. 10 shows an example of a flowchart of the operations performed by the control unit 1 configured with the functional blocks shown in Fig. 8.
As shown in Fig. 8, the control unit 1 includes a flicker/banding detection part 111, a frame-rate control part 112, a frame extraction part 113, an image merging part 114, and a gain multiplication part 115. These parts are realized as a result of the CPU 11 executing the programs stored in the memory 12.
The flicker/banding detection part 111 detects the flicker/banding phenomenon, or a high probability of the flicker/banding phenomenon, by performing image processing on the image data of the captured images sent from the image sensor 2.
The frame-rate control part 112 controls the frame rate of the image sensor 2 as follows: when the flicker/banding detection part 111 does not detect the flicker/banding phenomenon (No in step S1), the frame rate is kept at a set rate lower than 2(n+1)f; conversely, when the flicker/banding detection part 111 detects the flicker/banding phenomenon (Yes in step S1), the frame rate is increased to 2(n+1)f (step S2).
In step S3, the frame extraction part 113 extracts a predetermined number of consecutive frames from the images captured and sent by the image sensor 2.
In step S4, the image merging part 114 merges the image frames extracted by the frame extraction part 113, so as to solve the banding problem while ensuring a wider dynamic range.
In step S5, the gain multiplication part 115 multiplies the image data thus generated by the image merging part 114 by a predetermined gain, so that the corresponding image signal has a proper signal level for processing by a subsequent image processing apparatus.
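Steps S1 and S3 to S5 can be sketched on toy per-pixel lists as follows (names and data are illustrative, not from the patent; any detection method may drive the flicker_detected flag):

```python
def process(frames, gain, merge_count, flicker_detected):
    """S1: branch on detection; S3: extract consecutive frames;
    S4: merge by pixel-wise addition; S5: multiply by a gain."""
    if not flicker_detected:          # S1: No -> keep the normal path
        return frames[0]
    extracted = frames[:merge_count]  # S3: predetermined consecutive frames
    merged = [sum(px) for px in zip(*extracted)]  # S4: add frames pixel-wise
    return [px * gain for px in merged]           # S5: apply the gain

frame3 = [100, 200, 150]  # toy per-pixel data of two consecutive frames
frame4 = [110, 210, 160]
out = process([frame3, frame4], gain=0.5, merge_count=2, flicker_detected=True)
print(out)  # [105.0, 205.0, 155.0]
```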
Fig. 9 shows another example of a functional block diagram of the control unit 1.
Fig. 11 shows an example of a flowchart of the operations performed by the control unit 1 configured with the functional blocks shown in Fig. 9.
As shown in Fig. 9, in this example too, the control unit 1 includes a flicker/banding detection part 111, a frame-rate control part 112, a frame extraction part 113, an image merging part 114, and a gain multiplication part 115. These parts are realized as a result of the CPU 11 executing the programs stored in the memory 12.
In this example too, the flicker/banding detection part 111 detects the flicker/banding phenomenon by performing image processing on the image data of the captured images sent from the image sensor 2.
The frame-rate control part 112 controls the frame rate of the image sensor 2 as follows: when the flicker/banding detection part 111 does not detect the flicker/banding phenomenon (No in step S11), the frame rate is kept at a set rate lower than 2(n+1)f; conversely, when the flicker/banding detection part 111 detects the flicker/banding phenomenon (Yes in step S11), the frame rate is increased to 2(n+1)f (step S12).
In step S13, the frame extraction part 113 extracts a predetermined number of consecutive frames from the images captured and sent by the image sensor 2.
In step S14, the gain multiplication part 115 multiplies the image data of each of the image frames thus extracted by a predetermined gain, so that the image signal corresponding to the merged image to be generated in step S15 has a proper signal level for processing by a subsequent image processing apparatus.
In step S15, the image merging part 114 merges the image frames generated by the gain multiplication part 115, so as to solve the banding problem while ensuring a wider dynamic range.
Fig. 12 shows an example of a process performed by the control unit 1.
The example shown in Fig. 12 corresponds to the example of Fig. 8 and Fig. 10, specifically to steps S3 to S5 shown in Fig. 10.
In the example shown in Fig. 12, for example in response to a corresponding instruction operation by the user, the image sensor 2 captures a series of first, second, third, ... frames of images.
The control unit 1 extracts, for example, the consecutive third and fourth frames of the images from the captured image frames ("extraction" in Fig. 12), and merges (adds) them into a "third + fourth" frame of the image ("addition" in Fig. 12) (steps S3 to S4 in Fig. 10).
Then, the control unit 1 multiplies the image data of the image thus obtained by a specified gain ("multiplication" in Fig. 12) (step S5 in Fig. 10).
Fig. 13 shows actual examples of the settings in the control unit 1.
As shown in Fig. 13, the exposure time in the banding-compensation mode is 1/(2 × f), as described above, and is therefore 1/120 s for a mains frequency of 60 Hz and 1/100 s for a mains frequency of 50 Hz.
The frame rate in the banding-compensation mode is 2(n+1) times the mains frequency f, as described above, and is therefore 60 × 2(n+1) = 120(n+1) for 60 Hz and 50 × 2(n+1) = 100(n+1) for 50 Hz. Actual examples are as follows: as shown in Fig. 13, the frame rate is 240 fps, 360 fps, 480 fps, ... for 60 Hz, and 200 fps, 300 fps, 400 fps, ... for 50 Hz.
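The Fig. 13 settings follow directly from the formulas; a small illustrative enumeration (not part of the patent):

```python
def compensation_settings(f, n_values=(1, 2, 3)):
    """Exposure time 1/(2f) and frame rates 2(n+1)f for the
    banding-compensation mode, as in Fig. 13."""
    return 1 / (2 * f), [2 * (n + 1) * f for n in n_values]

print(compensation_settings(60))  # (1/120 s, [240, 360, 480] fps)
print(compensation_settings(50))  # (0.01 s, [200, 300, 400] fps)
```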
Fig. 14 shows an actual comparison example between an embodiment of the present invention and the related art.
In each of cases 1 and 2 (see Fig. 4), in which the exposure time from the AE point of view (dashed line in Fig. 4) is greater than or equal to the exposure time in the banding-compensation mode (solid line in Fig. 4), the image data is processed identically in the related art and in the embodiment of the present invention. In contrast, in case 3, in which the exposure time from the AE point of view is shorter than the exposure time in the banding-compensation mode, the problem of reduced dynamic range described above can occur, and the processing scheme according to the embodiment of the present invention described above in connection with Fig. 12 is advantageous.
In the example of Fig. 14, assume that a 10-bit ADC is used in the image sensor 2, that the AE target value is set to 236 DN, and that, in the ADC output, the black-point value is set to 64 DN and the white-point value to 1023 DN (that is, the maximum value). The ADC in the image sensor 2 converts the analog image signal obtained in the image sensor 2 into a corresponding digital image signal whose image-data values (DN) are to be processed by the CPU 11.
In case 1, the average image-data value is 150 DN, which is less than the AE target value of 236 DN (see the "average" and "white point" fields of the "before 1-frame multiplication" field in the "case 1" field in Fig. 14). Therefore, for example, the gain (see the "gain" field in Fig. 14) is set to 2×. The image-data value after multiplication is calculated by the following formula:
(image-data value before multiplication − 64) × gain + 64
That is, the black-point value of 64 DN is subtracted from the image data before multiplication, the result is multiplied by the gain, and the black-point value of 64 DN is then added back.
In case 1, according to the above formula, the average image-data value after multiplication is calculated as (150 − 64) × 2 + 64 = 236.
Similarly, the white-point value after multiplication is calculated as (1023 − 64) × 2 + 64 = 1982. In this case, because the multiplication result of 1982 DN exceeds the maximum value of 1023 DN, the image-data value is clipped to the maximum value of 1023 DN (see the "average" and "white point" fields of the "after multiplication" field in the "case 1" field in Fig. 14).
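The case-1 arithmetic, including the clipping to the 10-bit maximum, can be reproduced as follows (a minimal sketch using the black-point and white-point values given in the description):

```python
BLACK, WHITE = 64, 1023  # 10-bit ADC output: black point and white point (DN)

def apply_gain(dn, gain):
    """Black-level-referenced gain of Fig. 14:
    (value - black) * gain + black, rounded, then clipped to the ADC range."""
    out = round((dn - BLACK) * gain + BLACK)
    return max(BLACK, min(WHITE, out))

print(apply_gain(150, 2.0))   # (150-64)*2+64 = 236, the AE target value
print(apply_gain(1023, 2.0))  # 1982 DN exceeds 1023 DN, so it clips to 1023
print(apply_gain(1023, 0.5))  # 543.5 rounds to 544: the white point drops
```

The last call reproduces the case-3 dynamic-range loss discussed below: a gain under 1.0 pulls the white point well below 1023 DN.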
In case 2, the average image-data value before multiplication is equal to the AE target value of 236 DN. Therefore, the gain is set to 1×, so the average image-data value (236 DN) and the white-point value (1023 DN) after multiplication are the same as the average image-data value and the white-point value before multiplication.
In case 3, according to the related art (see the "related art" field in Figure 14), the average image data value is, for example, 408 DN, which is greater than the AE target value 236 DN. Thus, for example, the gain is set to 0.5.

In case 3, according to the above formula, the average image data value after multiplication is calculated as (408 − 64) × 0.5 + 64 = 236.
Similarly, the white point value after multiplication is calculated as (1023 − 64) × 0.5 + 64 = 543.5. The value 543.5 is then rounded up to 544.

In this way, in case 3 the white point value is reduced from 1023 to 544, and the dynamic range and the image quality are correspondingly reduced, as described above.
In contrast, according to an embodiment of the present invention (see the "embodiment" field in Figure 14), the frame rate is increased (doubled), for example from 100 frames/second to 200 frames/second at a commercial power frequency f = 50 Hz (that is, 2(n + 1)f with n = 1 and f = 50).

In this case, for example, as shown in Figure 14, the average image data value in the third frame is 200 DN, and the average image data value in the fourth frame (see Figure 12) is 272 DN. The average image data values of the two successive frames differ because of the flicker phenomenon and the like.
According to an embodiment of the present invention, for example, as shown in Figure 12, the third frame and the fourth frame are merged. Therefore, the average image data value of the merged frame is calculated as 200 + 272 = 472. Similarly, the white point value of the merged frame is calculated as 1023 + 1023 = 2046 (see the "average" and "white point" fields of the "after 2-frame addition" field).
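Merging here is summation, so the statistics of the merged frame are simply the sums of the per-frame statistics. A small sketch; the dictionary representation of the frame statistics and the name `merge` are illustrative assumptions, not from the patent.

```python
# Average image data value and white point value of the two extracted frames
# (the third and fourth frames of Figures 12 and 14).
frame3 = {"average": 200, "white_point": 1023}
frame4 = {"average": 272, "white_point": 1023}

def merge(a: dict, b: dict) -> dict:
    """Merge two frames by summation: each statistic of the merged frame
    is the sum of the corresponding per-frame statistics."""
    return {key: a[key] + b[key] for key in a}

merged = merge(frame3, frame4)
assert merged == {"average": 472, "white_point": 2046}
```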
In this regard, as shown in Figure 14, the white point value before multiplication in case 3 is the same (namely 1023 DN) in the related art and in the embodiment of the present invention, even though the frame rate in the embodiment is doubled (see the "white point" field of the "before 1-frame multiplication" field). This is because, in this example, the actual white point value is around 2400 DN, and the image sensor 2 clips image data values to 1023 DN. In the related art, the actual white point value 2400 DN is clipped to 1023 DN. In the embodiment of the present invention, because the frame rate is doubled, the total exposure time per frame is halved from 10 ms to 5 ms, so the actual white point value is halved to 2400/2 = 1200 DN. Then, for each of the extracted third and fourth frames, the actual white point value 1200 DN is clipped to 1023 DN.
Because the average image data value and the white point value of the merged frame are 472 DN and 2046 DN respectively, as described above (see Figure 14), the gain is set, for example, to 0.42 according to an embodiment of the present invention.

Therefore, the image data value after multiplication is calculated as (472 − 64) × 0.42 + 64 = 235.4 for the average image data value, and as (2046 − 64) × 0.42 + 64 = 896.4 for the white point value. The values 235.4 and 896.4 are rounded up to 236 and 897 respectively (see the "average" and "white point" fields of the "after multiplication" field).

In this case, the white point value is reduced from 1023 only to 897. Compared with the related art, in which the white point value is reduced from 1023 to 544, the dynamic range in the embodiment of the present invention is expanded.
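The comparison between the related art (case 3) and the embodiment can be reproduced numerically. The sketch below assumes the black-point-referenced gain formula of the description and rounds up as in Figure 14; the variable names are illustrative.

```python
import math

BLACK = 64  # black point value (DN)

def apply_gain(value_dn: float, gain: float) -> float:
    # Gain referenced to the black point, per the formula in the description.
    return (value_dn - BLACK) * gain + BLACK

# Related art, case 3: a single frame with white point 1023 DN, gain 0.5.
related_white = math.ceil(apply_gain(1023, 0.5))   # 543.5, rounded up to 544

# Embodiment: merged frame (average 472 DN, white point 2046 DN), gain 0.42.
merged_avg = math.ceil(apply_gain(472, 0.42))      # ~235.4, rounded up to 236
merged_white = math.ceil(apply_gain(2046, 0.42))   # ~896.4, rounded up to 897

assert (related_white, merged_avg, merged_white) == (544, 236, 897)
# 897 > 544: the embodiment preserves a wider range above the black point.
```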
A person of ordinary skill in the art will appreciate that possible implementations of aspects of the present invention, or of each aspect, may be specifically embodied as an apparatus, a method, or a computer program product. Accordingly, aspects of the present invention may take the form of a hardware-only embodiment, a software-only embodiment (including firmware, resident software, and the like), or an embodiment combining hardware and software. In addition, aspects of the present invention may take the form of a computer program product, where the computer program product is computer-readable program code stored in a computer-readable medium.
The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium includes, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system or device, or any suitable combination thereof, such as a RAM, a ROM, an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, or a compact disc read-only memory (CD-ROM).
A processor in a computer reads the computer-readable program code stored in the computer-readable medium, so that the processor can perform the functions and actions specified in each step, or in a combination of steps, in the flowcharts, and an apparatus is generated to perform the functions and actions specified in each block, or in a combination of blocks, in the block diagrams.
It should be noted that, in some alternative embodiments, the functions specified in the steps of the flowcharts or in the blocks of the block diagrams may not be performed in the order shown. For example, depending on the functions involved, two consecutively illustrated steps or blocks may actually be performed substantially at the same time, or the blocks may sometimes be performed in reverse order.
Obviously, a person skilled in the art can make various modifications and variations to the present invention without departing from the spirit and scope of the present invention. The present invention is intended to cover these modifications and variations provided that they fall within the scope of protection defined by the following claims and their equivalents.
Claims (12)
1. An image processing apparatus, characterized by comprising:
an image sensor, configured to capture and output frames of a first sequence of images at a frame rate of 2(n + 1)f, where n denotes an integer greater than or equal to 1 and f denotes a commercial power frequency; and
at least one processor, configured to extract frames of a second sequence of images from the frames of the first sequence of images, and to merge the frames of the second sequence of images to generate a single frame of an image.
2. The image processing apparatus according to claim 1, characterized in that:
the quantity of frames of the second sequence of images is n + 1.
3. The image processing apparatus according to claim 1, characterized in that:
the at least one processor is further configured to: before the at least one processor merges the frames of the second sequence of images to generate the single frame of the image, multiply the image data of each frame of the second sequence of images by a gain.
4. The image processing apparatus according to claim 1, characterized in that:
the at least one processor is further configured to multiply the image data of the single frame of the image generated by the at least one processor by a gain.
5. The image processing apparatus according to claim 2, characterized in that:
the at least one processor is further configured to: before the at least one processor merges the frames of the second sequence of images to generate the single frame of the image, multiply the image data of each frame of the second sequence of images by a gain.
6. The image processing apparatus according to claim 2, characterized in that:
the at least one processor is further configured to multiply the image data of the single frame of the image generated by the at least one processor by a gain.
7. The image processing apparatus according to claim 1, characterized in that:
the at least one processor is further configured to control the quantity n and the number of image frames comprised in the frames of the second sequence of images.
8. The image processing apparatus according to claim 2, characterized in that:
the at least one processor is further configured to control the quantity n.
9. The image processing apparatus according to claim 3 or 4, characterized in that:
the at least one processor is further configured to control the quantity n, the number of image frames comprised in the frames of the second sequence of images, and the gain.
10. The image processing apparatus according to claim 5 or 6, characterized in that:
the at least one processor is further configured to control the quantity n and the gain.
11. The image processing apparatus according to any one of claims 1 to 10, characterized in that:
the at least one processor is further configured to: when the at least one processor detects a flicker phenomenon or a color-banding phenomenon, increase the frame rate of the image sensor from a first frame rate less than 2(n + 1)f to a second frame rate of 2(n + 1)f.
12. An image processing method performed by at least one processor, characterized in that the image processing method comprises:
controlling an image sensor to capture and output frames of a first sequence of images at a frame rate of 2(n + 1)f, where n denotes an integer greater than or equal to 1 and f denotes a commercial power frequency; and
extracting frames of a second sequence of images from the frames of the first sequence of images, and merging the frames of the second sequence of images to generate a single frame of an image.
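Assuming that "merge" means pixel-wise summation, as in the worked example of Figure 14, the capture–extract–merge pipeline of claims 1 and 12 can be sketched as follows. The frame representation, the function name, and the choice of which n + 1 frames to extract are illustrative assumptions, not taken from the claims.

```python
from typing import List

Frame = List[int]  # a frame as a flat list of image data values (DN)

def extract_and_merge(first_sequence: List[Frame], n: int) -> Frame:
    """Extract a second sequence of n + 1 frames from the first sequence
    (captured at frame rate 2 * (n + 1) * f) and merge them by pixel-wise
    summation into a single frame. Which frames are extracted is simplified
    here to the first n + 1."""
    second_sequence = first_sequence[: n + 1]
    return [sum(pixels) for pixels in zip(*second_sequence)]

# n = 1: two frames are merged; 200 + 272 = 472, as in the example above.
assert extract_and_merge([[200, 1023], [272, 1023]], n=1) == [472, 2046]
```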
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2016/085290 WO2017210897A1 (en) | 2016-06-08 | 2016-06-08 | Image processing apparatus, and image processing method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109247067A true CN109247067A (en) | 2019-01-18 |
CN109247067B CN109247067B (en) | 2021-02-12 |
Family
ID=60578311
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201680085977.5A Active CN109247067B (en) | 2016-06-08 | 2016-06-08 | Image processing apparatus and image processing method |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109247067B (en) |
WO (1) | WO2017210897A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20220121712A (en) | 2021-02-25 | 2022-09-01 | 캐논 가부시끼가이샤 | Image capturing apparatus capable of detecting flicker due to periodic change in light amount of object, flicker detecting method, and non-transitory computer-readable storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101281437A (en) * | 2008-01-29 | 2008-10-08 | 埃派克森微电子(上海)有限公司 | Method for regulating optical indication device image quality controlling parameter |
US20140078358A1 (en) * | 2012-09-14 | 2014-03-20 | Canon Kabushiki Kaisha | Solid-state imaging apparatus and driving method of solid-state imaging apparatus |
CN104104882A (en) * | 2013-04-09 | 2014-10-15 | 展讯通信(上海)有限公司 | Image flicker detection method and device, and image capturing device |
US20150002694A1 (en) * | 2013-06-26 | 2015-01-01 | Canon Kabushiki Kaisha | Image capturing apparatus and control method thereof |
US20150103209A1 (en) * | 2013-10-14 | 2015-04-16 | Stmicroelectronics (Grenoble 2) Sas | Flicker compensation method using two frames |
WO2016063023A1 (en) * | 2014-10-20 | 2016-04-28 | Apical Limited | Method of reducing digital video flicker |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100473119C (en) * | 2006-11-07 | 2009-03-25 | 北京中星微电子有限公司 | Method and device for clearing explosure flash |
JP2012222739A (en) * | 2011-04-13 | 2012-11-12 | Panasonic Corp | Flicker correction device, flicker correction method and flicker correction program |
US9077913B2 (en) * | 2013-05-24 | 2015-07-07 | Google Inc. | Simulating high dynamic range imaging with virtual long-exposure images |
CN105007429B (en) * | 2015-08-10 | 2018-01-19 | 广东欧珀移动通信有限公司 | A kind of method, system and mobile terminal for eliminating flicker |
- 2016-06-08: CN application CN201680085977.5A, granted as CN109247067B (Active)
- 2016-06-08: WO application PCT/CN2016/085290, published as WO2017210897A1 (Application Filing)
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108462837A (en) * | 2018-03-13 | 2018-08-28 | 中兴通讯股份有限公司 | Image pickup method and device |
CN108462837B (en) * | 2018-03-13 | 2022-06-21 | 中兴通讯股份有限公司 | Shooting method and device |
CN110035234A (en) * | 2019-04-16 | 2019-07-19 | 深圳市道通智能航空技术有限公司 | A kind of filming control method of aircraft, aircraft and flight system |
Also Published As
Publication number | Publication date |
---|---|
CN109247067B (en) | 2021-02-12 |
WO2017210897A1 (en) | 2017-12-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4960605B2 (en) | Automatic exposure compensation method and compensation device | |
US8169502B2 (en) | Video camera | |
CN103945145A (en) | Apparatus and method for processing image | |
JP6525543B2 (en) | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM | |
US20130265412A1 (en) | Image processing apparatus and control method therefor | |
CN109247067A (en) | Image processing apparatus and image processing method | |
US11831991B2 (en) | Device, control method, and storage medium | |
GB2499668A (en) | Exposure Controller | |
US9615031B2 (en) | Imaging device and scene determination method | |
TWI528163B (en) | Power saving surveillance system and method | |
US20220021800A1 (en) | Image capturing apparatus, method of controlling image capturing apparatus, and storage medium | |
US9672598B2 (en) | Color moire reducing method, color moire reducing apparatus, and image processing apparatus | |
US20140043502A1 (en) | Flicker noise detection apparatus, flicker noise detection method, and computer-readable storage device storing flicker noise detection program | |
JP2010187409A (en) | Apparatus and method for correcting defects, and imaging apparatus | |
WO2017051511A1 (en) | Illuminance acquiring device, illuminance control system, and program | |
US20200077006A1 (en) | Image processing method and imaging device | |
US8154618B2 (en) | Imaging apparatus and method for setting the same | |
KR101165450B1 (en) | Black level compensation apparatus and method | |
JP6570252B2 (en) | Image processing apparatus, information processing method, and program | |
JP2008252402A (en) | Imaging system, imaging method, and imaging program | |
JP5978829B2 (en) | Image sensor exposure control apparatus and method, and image pickup apparatus | |
JP2021145178A (en) | Image processing device, determination method, and program | |
JP2007329604A (en) | Fluorescent light flicker detection circuit | |
JP4591037B2 (en) | Flicker component detection apparatus and flicker component detection method | |
JP6494351B2 (en) | Image processing apparatus, image processing method, program, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |