CN108024062A - Image processing method and image processing apparatus - Google Patents
- Publication number: CN108024062A
- Application number: CN201711330136.9A
- Authority
- CN
- China
- Prior art keywords
- image
- exposure
- time
- parameter
- region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H04N23/684 — Vibration or motion blur correction performed by controlling the image sensor readout, e.g. by controlling the integration time (H: Electricity; H04N: Pictorial communication, e.g. television; H04N23/68: Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations)
- H04N23/10 — Cameras or camera modules comprising electronic image sensors; control thereof for generating image signals from different wavelengths
- H04N23/6812 — Motion detection based on additional sensors, e.g. acceleration sensors (H04N23/681: Motion detection)
- H04N23/80 — Camera processing pipelines; components thereof
Abstract
The invention discloses an image processing method comprising the following steps: obtaining a first image with a first parameter; obtaining a second parameter based on the first parameter, and obtaining a second image with the second parameter; segmenting the second image into M regions and, for each region K of the M regions, computing its influence map on the other pixels of the second image outside region K; converting the first image and the second image to a color space, and transferring the color information of the first image to the second image using the influence maps as weighting coefficients. The image processing method of the invention transfers the saturated color information of the long-exposure image into the short-exposure image, so that the resulting image is both saturated in color and free of image shift.
Description
Technical field
The present invention relates to the technical field of image processing, and more particularly to an image processing method and an image processing apparatus.
Background technology
A photographing device such as a camera or a camera phone, when capturing an image while in motion, produces blur in the captured image because of shake; the result is a jittered image. To avoid jittered images, the following solutions exist in the prior art:
Electronic image stabilization. Its drawback: the stabilization process involves no auxiliary components; it numerically analyzes the image formed on the CCD sensor and then compensates using the image edges. Like digital zoom compared with optical zoom, it merely post-processes the captured data, and its stabilization effect is limited.
Optical image stabilization (OIS). Its scheme: OIS relies on an optical principle, using a gyroscope to sense the shake and driving a stabilization assembly to counteract it. The gyroscope measures the camera's tilt angle caused by hand shake; from this angle the system predicts the resulting image shift, and then drives the lens to shift the image by the same amount in the opposite direction relative to the image sensor. The image shift caused by hand shake is thereby cancelled, and the camera keeps the image stable even in a shaky environment.
The drawbacks of OIS are: a complex and costly lens assembly, which makes the device expensive; high drive power consumption; and an increase in the thickness of the camera module.
Natural stabilization. Its scheme: compensate for shake by raising the ISO sensitivity so that a faster shutter speed can be used. Its drawback: it significantly increases the noise of the still photo and reduces the usefulness of the image.
In view of the above problems, those skilled in the art need a simple, efficient and low-cost image anti-shake method.
Summary of the invention
To address the above technical problems in the prior art, embodiments of the present invention provide an image anti-shake method. The technical solution adopted by the embodiments of the present invention is as follows.
An image processing method, comprising the following steps:
obtaining a first image with a first parameter;
obtaining a second parameter based on the first parameter, and obtaining a second image with the second parameter;
segmenting the second image into M regions and, for each region K of the M regions, computing its influence map on the other pixels of the second image outside region K;
converting the first image and the second image to a color space, and transferring the color information of the first image to the second image using the influence maps as weighting coefficients.
Preferably, the influence map of any region K of the second image on the other pixels of the second image outside region K is computed according to the following formula:

P_k(i, j) = exp( −( c_t(i, j) − μ_t^k )² )

where:
c_t(i, j) is the pixel value of the second image;
μ_t^k is the gray-level mean of region k of the second image.
Preferably, converting the first image and the second image to a color space and transferring the color information of the first image to the second image using the influence maps as weighting coefficients comprises the following steps:
performing a color conversion on each channel according to the following formula to obtain the new pixel value for that channel:

c_new(i, j) = Σ_k w_k(i, j) · [ ( σ_s^k / σ_t^k ) · ( c_t(i, j) − μ_t^k ) + μ_s^k ]

transforming the obtained pixel values of all channels back into the color space so as to transfer the color information of the first image to the second image;
where:
w_k(i, j) is the normalized influence map of region k;
σ_t^k is the gray-level variance of region k of the second image;
μ_t^k is the gray-level mean of region k of the second image;
σ_s^k is the variance of the first image over the area corresponding to region k of the second image;
μ_s^k is the mean of the first image over the area corresponding to region k of the second image.
Preferably, the first parameter includes a first exposure time and the second parameter includes a second exposure time; the second exposure time is obtained based on the first exposure time.
Preferably, the first parameter further includes a first gain value and the second parameter includes a second gain value; the second gain value is obtained based on the first gain value.
Preferably, the first exposure time is twice the second exposure time.
Preferably, the second gain value is 1 to 1.5 times the first gain value.
The invention also discloses an image processing apparatus, comprising:
an image acquisition part configured to obtain a first image with a first parameter, obtain a second parameter based on the first parameter, and obtain a second image with the second parameter;
an image processing part configured to segment the second image into M regions, compute for each region K of the M regions its influence map on the other pixels of the second image outside region K, convert the first image and the second image to a color space, and transfer the color information of the first image to the second image using the influence maps as weighting coefficients.
Preferably, the image processing part is configured to compute, according to the following formula, the influence map of any region K of the second image on the other pixels of the second image outside region K:

P_k(i, j) = exp( −( c_t(i, j) − μ_t^k )² )

where:
c_t(i, j) is the pixel value of the second image;
μ_t^k is the gray-level mean of region k of the second image.
Preferably, the image processing part is configured to, based on the computed influence maps, perform a color conversion on each channel according to the following formula to obtain the new pixel value for that channel:

c_new(i, j) = Σ_k w_k(i, j) · [ ( σ_s^k / σ_t^k ) · ( c_t(i, j) − μ_t^k ) + μ_s^k ]

and to transform the obtained pixel values of all channels back into the color space so as to transfer the color information of the first image to the second image;
where:
w_k(i, j) is the normalized influence map of region k;
σ_t^k is the gray-level variance of region k of the second image;
μ_t^k is the gray-level mean of region k of the second image;
σ_s^k is the variance of the first image over the area corresponding to region k of the second image;
μ_s^k is the mean of the first image over the area corresponding to region k of the second image.
Preferably, the first parameter includes a first exposure time and the second parameter includes a second exposure time; the second exposure time is obtained based on the first exposure time.
Preferably, the first parameter further includes a first gain value and the second parameter includes a second gain value; the second gain value is obtained based on the first gain value.
Preferably, the first exposure time is twice the second exposure time.
Preferably, the second gain value is 1 to 1.5 times the first gain value.
Compared with the prior art, the image anti-shake method and image processing apparatus of the invention have the following beneficial effect: the image processing method of the invention transfers the saturated color information of the long-exposure image into the short-exposure image, so that the resulting image both has saturated colors and is free of image shift.
Brief description of the drawings
Fig. 1 is a flow chart of the image processing method provided by an embodiment of the present invention.
Embodiments
Specific embodiments of the present invention are described in detail below with reference to the accompanying drawing, but they do not limit the invention.
It should be understood that various modifications can be made to the disclosed embodiments. The description above is therefore not limiting, but serves merely as an example of the embodiments. Those skilled in the art will conceive of other modifications within the scope and spirit of this disclosure.
The accompanying drawing, which is included in and forms part of this specification, illustrates an embodiment of the disclosure and, together with the general description of the disclosure given above and the detailed description of the embodiments given below, serves to explain the principles of the disclosure.
These and other characteristics of the invention will become apparent from the following description of a preferred form of embodiment, given as a non-limiting example, with reference to the accompanying drawing.
It should also be understood that, although the invention has been described with reference to some specific examples, a person skilled in the art can realize many other equivalents of the invention, which have the features set forth in the claims and therefore all fall within the scope of protection defined thereby.
The above and other aspects, features and advantages of the disclosure will become more apparent in view of the following detailed description read in conjunction with the accompanying drawing.
Specific embodiments of the disclosure are described below with reference to the drawing; it should be understood, however, that the disclosed embodiments are merely examples of the disclosure, which can be implemented in various ways. Well-known and/or repeated functions and structures are not described in detail, to avoid obscuring the disclosure with unnecessary detail. Therefore, the specific structural and functional details disclosed herein are not intended to be limiting, but serve merely as a basis for the claims and as a representative basis for teaching a person skilled in the art to variously employ the disclosure in virtually any appropriate detailed structure.
This specification may use the phrases "in one embodiment", "in another embodiment", "in yet another embodiment" or "in other embodiments", each of which may refer to one or more of the same or different embodiments of the disclosure.
As shown in Fig. 1, an embodiment of the invention discloses an image processing method comprising the following steps.
S10: obtain a first image with a first parameter. The first parameter may include several values; in the present embodiment it includes a first exposure time and a first gain value.
S20: obtain a second parameter based on the first parameter, and obtain a second image with the second parameter. The second parameter includes a second exposure time based on the first exposure time and a second gain value based on the first gain value; the second exposure time is shorter than the first exposure time, and the second gain value is larger than the first gain value. In other words, the first image is a long-exposure image taken with the first exposure time, the second image is a short-exposure image taken with the second exposure time, and the gain value of the second image is larger than that of the first image. In the present embodiment, the first exposure time is twice the second exposure time, and in dark environments the second gain value is 1.3 to 1.5 times the first gain value.
It should be noted that the gain value of the second image must not be set too high, to prevent an excessive gain from introducing obvious, hard-to-remove noise into the second image.
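The derivation of the second parameter from the first, as described in S20, can be sketched as follows. The 2:1 exposure-time ratio and the 1.3× dark-scene gain factor are the values quoted in this embodiment; the function name and the gain cap are illustrative assumptions (the text only warns that the second gain must not be set too high):

```python
def derive_second_parameter(first_exposure_s, first_gain, dark=True):
    """Derive the short-exposure (second) parameters from the
    long-exposure (first) ones.

    Embodiment values: second exposure = first / 2; in dark scenes the
    second gain is 1.3-1.5x the first gain (1.3 used here).  The cap of
    16.0 is an assumed limit guarding against hard-to-remove noise.
    """
    second_exposure_s = first_exposure_s / 2.0
    second_gain = first_gain * (1.3 if dark else 1.0)
    second_gain = min(second_gain, 16.0)
    return second_exposure_s, second_gain
```

For example, a 20 ms long exposure at gain 4.0 yields a 10 ms short exposure at gain 5.2 in a dark scene.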
As described above, controlling the exposure time yields a first image (a jittered image) whose colors are saturated but which carries a certain image offset, and a second image whose offset is small but whose colors are not saturated. The following steps of the image processing method of the present invention transfer the color information of the color-saturated first image into the second image.
S30: segment the second image into M regions and, for each region K of the M regions, compute its influence map on the other pixels of the second image outside region K. The influence map of any region K of the second image on the other pixels of the second image outside region K can be computed according to the following formula:

P_k(i, j) = exp( −( c_t(i, j) − μ_t^k )² )

where:
c_t(i, j) is the pixel value of the second image;
μ_t^k is the gray-level mean of region k of the second image.

The formula can be read as follows: the similarity between each pixel of the second image (the short-exposure image) and region k determines the influence of region k on that pixel; the closer a pixel is to the average color of region k, the larger its influence value. Each pixel of the second image therefore carries M influence values, where M is the number of regions.
The purpose of this step is to ensure, by weighted averaging over the influence maps during the color transfer, that the colors between the segmented regions transition smoothly and naturally, preventing abrupt color changes at region boundaries.
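A minimal sketch of S30, assuming the second image has already been segmented (the `labels` array). Because the influence formula is published as an image and does not survive in this text, a Gaussian similarity to each region's gray-level mean stands in for it; the kernel scale `sigma` and all names here are illustrative assumptions:

```python
import numpy as np

def influence_maps(gray, labels, num_regions, sigma=25.0):
    """Influence map of each region k over every pixel of the
    (short-exposure) image: pixels close in gray level to region k's
    mean receive a large weight.  Maps are normalized so the M weights
    at each pixel sum to 1, which makes the later color transfer a
    weighted average."""
    maps = np.empty((num_regions,) + gray.shape)
    for k in range(num_regions):
        mu_k = gray[labels == k].mean()          # gray-level mean of region k
        maps[k] = np.exp(-((gray - mu_k) ** 2) / (2.0 * sigma ** 2))
    maps /= maps.sum(axis=0, keepdims=True)      # per-pixel normalization
    return maps
```

Normalizing across regions is what keeps the transition between regions gradual: a pixel near a boundary receives comparable weights from both neighboring regions.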
S40: convert the first image and the second image to a color space, and transfer the color information of the first image to the second image using the influence maps as weighting coefficients. This step specifically comprises the following two sub-steps:
S41: perform a color conversion on each channel according to the following formula to obtain the new pixel value for that channel:

c_new(i, j) = Σ_k w_k(i, j) · [ ( σ_s^k / σ_t^k ) · ( c_t(i, j) − μ_t^k ) + μ_s^k ]

S42: transform the obtained pixel values of all channels back into the color space so as to transfer the color information of the first image to the second image;
where:
w_k(i, j) is the normalized influence map of region k;
σ_t^k is the gray-level variance of region k of the second image;
μ_t^k is the gray-level mean of region k of the second image;
σ_s^k is the variance of the first image over the area corresponding to region k of the second image;
μ_s^k is the mean of the first image over the area corresponding to region k of the second image.
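Steps S41 and S42 can be sketched as an influence-weighted, per-region statistics transfer in the style of classical Reinhard color transfer. Since the patent's formula is not reproduced in this text, the exact expression below is an assumption consistent with the symbols defined above; the epsilon guard and all names are illustrative:

```python
import numpy as np

def transfer_channel(src, dst, labels, weights, eps=1e-6):
    """Transfer one color channel of the first (long-exposure) image
    `src` onto the second (short-exposure) image `dst`.

    For each region k, dst is re-scaled from region k's statistics
    (mu_t, sd_t) to the first image's statistics over the same area
    (mu_s, sd_s); the per-pixel results are blended using the
    normalized influence maps `weights` of shape (M, H, W)."""
    out = np.zeros_like(dst, dtype=float)
    for k in range(weights.shape[0]):
        mask = labels == k
        mu_t, sd_t = dst[mask].mean(), dst[mask].std() + eps
        mu_s, sd_s = src[mask].mean(), src[mask].std()
        out += weights[k] * ((sd_s / sd_t) * (dst - mu_t) + mu_s)
    return out
```

Applied to every channel of the chosen color space and converted back (S42), this moves each region's color statistics toward those of the long exposure, while the influence weighting keeps region boundaries smooth.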
As described above, the image processing method of the invention transfers the saturated color information of the long-exposure image into the short-exposure image, so that the resulting image both has saturated colors and is free of image shift.
The causes of jittered images, the factors influencing them, the reasons why the above image processing method of the invention can handle jittered images, and its specific beneficial effects are described in detail below.
First, the causes of jittered images are introduced.
When a user photographs an object with a photographing device such as a camera or a mobile phone, relative movement between the device and the object occurs at some point in most cases. For example, the device may be static while the photographed object moves relative to it, as when photographing a flower swaying in the breeze. Or the object may be static while the device moves relative to it: for example, the object is a fixed building and the device is hand-held by the user; since the user cannot hold the device absolutely still, the device always moves relative to the photographed object. Or both may be moving, as when a user on a moving vehicle photographs an object on another moving vehicle.
When the photographing device takes a picture, it exposes the photographed object for a certain exposure time t. If the object and the device move relative to each other within this exposure time (in fact, some relative movement occurs during the exposure of any shot), the relative movement is recorded in the captured image: the movement of the object shows up as "ghosting" or image shift, i.e. as a so-called jittered image.
It should be appreciated that, as long as relative movement occurs between the device and the object during the exposure, any exposure (however short the exposure time) produces some image shift. Human visual recognition is limited, however. When the shift of the image is small, the eye cannot recognize it; the captured image is then generally considered free of image shift and is a non-jittered image. When the shift is large, the eye can recognize it; the captured image is then generally considered to exhibit image shift and is a jittered image. For example, assume the minimum image offset the eye can recognize is 0.01 mm and the relative speed between device and object is 1 m/s. When the exposure time exceeds 0.01 s, the offset of the captured image exceeds 0.01 mm; the eye can then recognize the image shift and the image is considered a jittered image. When the exposure time is less than 0.01 s, the offset of the captured image is less than 0.01 mm; the eye then cannot recognize the image shift and the image is considered a non-jittered image.
As described above, whether a jittered or a non-jittered image is captured can therefore be controlled through the exposure time of the photographing device.
As described above, image shift is a defect of jittered images that degrades image quality, and it can be prevented by shortening the exposure time; however, the shift-free short-exposure image obtained by shortening the exposure time suffers from unsaturated colors. That is, the longer the exposure time, the larger the harmful image shift but the more saturated the beneficial colors; the shorter the exposure time, the larger the harmful color deficiency but the smaller the beneficial image shift.
The image processing method of the invention mainly resolves this contradiction, producing a processed image that is both saturated in color and free of image jitter.
Put simply, the invention transfers the saturated color information of the long-exposure-time image (hereinafter the long-exposure image) into the short-exposure-time image (hereinafter the short-exposure image), so that the new image has both the shift-free advantage of the short-exposure image and the advantage of saturated colors.
The specific process by which the invention achieves this is the following. The short-exposure image is divided into a finite number of regions (or units); the long-exposure image naturally has regions (or units) corresponding to them. The color information of all regions of the long-exposure image is then transferred to the corresponding regions of the short-exposure image, so that the saturated color information of the entire long-exposure image is transferred into the short-exposure image, yielding a new image with higher color saturation and without image jitter.
In the specific transfer process, the conversion proceeds channel by channel at the pixel level; specifically, the color conversion is performed by obtaining the new pixel values of all channels.
In the color conversion, since the image is divided into a finite number of regions, transferring colors without treating the boundaries between adjacent regions would produce abrupt color transitions, giving the image a stitching-like defect. To solve this problem, the invention introduces the influence map formula and the weighting formula. Their effect is: for each region K of the M regions, compute its influence map on the other pixels of the short-exposure image outside region K; convert the long-exposure image and the short-exposure image to a color space; and transfer the color information of the long-exposure image to the short-exposure image using the influence maps as weighting coefficients.
In general terms, the influence map formula computes the color similarity between an arbitrary region (or unit) of the short-exposure image and the other units, and the color similarity of the corresponding regions of the long-exposure image is obtained indirectly. The magnitude of the color difference between the compared region and the other regions is thus obtained, so that the boundaries between adjacent regions can be treated and the colors can transition naturally and gently.
In implementing the technical method of the invention, the setting of the exposure times is critical: the setting of the long exposure time (the first exposure time), of the short exposure time (the second exposure time), and of the relation between the two is extremely important.
These settings mainly serve to obtain a suitable jittered image and a suitable non-jittered image. Setting a suitable long exposure time means: the jittered image obtained with the long exposure time must have the desired saturated colors. Setting a suitable short exposure time means: the image obtained with the short exposure time must not exhibit the jitter phenomenon of image shift.
The long and short exposure times must be set according to the user's shooting environment, i.e. the relative speed between the photographing device and the photographed object. For example, assume the minimum image offset the eye can recognize is 0.01 mm (this minimum offset is generally a constant; the recognition ability of different people is almost identical). When the relative speed between the device and the object is 1 m/s, the long exposure time must be set to more than 0.01 s, so that the offset of the captured image exceeds 0.01 mm and the eye can recognize the image as jittered; at the same relative speed of 1 m/s, the short exposure time must be set to less than 0.01 s, so that the offset of the captured image is less than 0.01 mm and the eye perceives it as a non-jittered image. As another example, with the same minimum recognizable offset of 0.01 mm, when the relative speed between the device and the object is 0.5 m/s, the long exposure time must be set to more than 0.02 s, so that the offset of the captured image exceeds 0.01 mm and the eye can recognize the image as jittered; at the same relative speed of 0.5 m/s, the short exposure time must be set to less than 0.02 s, so that the offset of the captured image is less than 0.01 mm and the eye perceives it as a non-jittered image.
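The arithmetic in these examples reduces to a single threshold t = d_min / v on the image plane: exposures longer than t give a visibly jittered image, shorter ones a non-jittered image. A tiny sketch follows; note that the quoted numbers imply an image-plane speed of 1 mm/s per 1 m/s of relative motion, and that implicit scaling is an assumption about the intended units:

```python
def exposure_threshold_s(min_visible_offset_mm, image_plane_speed_mm_s):
    """Boundary exposure time between jittered and non-jittered images:
    offset = speed * time, so the visibility boundary is d_min / v.
    The long exposure is chosen above this value, the short one below it."""
    return min_visible_offset_mm / image_plane_speed_mm_s
```

With d_min = 0.01 mm, an image-plane speed of 1 mm/s gives a 0.01 s threshold; halving the speed doubles the threshold to 0.02 s.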
The embodiment of the present invention also discloses an image processing apparatus, comprising:
an image acquisition part configured to obtain a first image with a first parameter, obtain a second parameter based on the first parameter, and obtain a second image with the second parameter;
an image processing part configured to segment the second image into M regions, compute for each region K of the M regions its influence map on the other pixels of the second image outside region K, convert the first image and the second image to a color space, and transfer the color information of the first image to the second image using the influence maps as weighting coefficients.
More preferably, the image processing part is configured to compute, according to the following formula, the influence map of any region K of the second image on the other pixels of the second image outside region K:

P_k(i, j) = exp( −( c_t(i, j) − μ_t^k )² )

where:
c_t(i, j) is the pixel value of the second image;
μ_t^k is the gray-level mean of region k of the second image.
More preferably, the image processing part is configured to, based on the computed influence maps, perform a color conversion on each channel according to the following formula to obtain the new pixel value for that channel:

c_new(i, j) = Σ_k w_k(i, j) · [ ( σ_s^k / σ_t^k ) · ( c_t(i, j) − μ_t^k ) + μ_s^k ]

and to transform the obtained pixel values of all channels back into the color space so as to transfer the color information of the first image to the second image;
where:
w_k(i, j) is the normalized influence map of region k;
σ_t^k is the gray-level variance of region k of the second image;
μ_t^k is the gray-level mean of region k of the second image;
σ_s^k is the variance of the first image over the area corresponding to region k of the second image;
μ_s^k is the mean of the first image over the area corresponding to region k of the second image.
More preferably, the first parameter includes a first exposure time and the second parameter includes a second exposure time; the second exposure time is obtained based on the first exposure time. The first parameter further includes a first gain value and the second parameter includes a second gain value; the second gain value is obtained based on the first gain value. The first exposure time is twice the second exposure time. The second gain value is 1 to 1.5 times the first gain value.
The image acquisition part of the image processing apparatus provided by the above embodiment of the invention may further include a velocity sensor and a processor. The velocity sensor measures, during the exposure, the relative speed between the image acquisition part and the photographed object, and the processor controls the exposure times of the image acquisition part according to the speed measured by the velocity sensor. For example, assume the minimum image offset the eye can recognize is 0.01 mm (this minimum offset is generally a constant; the recognition ability of different people is almost identical) and the velocity sensor measures a relative speed of 1 m/s between the image acquisition part and the photographed object. The processor then computes and sets the long exposure time to more than 0.01 s, so that the offset of the captured image exceeds 0.01 mm and the eye can recognize the image as jittered; at the same measured speed of 1 m/s, the processor computes and sets the short exposure time to less than 0.01 s, so that the offset of the captured image is less than 0.01 mm and the eye perceives it as a non-jittered image.
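The sensor-plus-processor control loop described above can be sketched as follows. The 2:1 ratio between the two exposures comes from the embodiment, while the placement of the short exposure at 75% of the visibility threshold and the 1 mm/s-per-m/s image-plane scale are illustrative assumptions needed to make the quoted numbers come out:

```python
def choose_exposures(relative_speed_m_s, min_offset_mm=0.01):
    """Processor logic: from the velocity sensor's relative speed, place
    the short exposure below the jitter-visibility threshold d_min / v
    and the long exposure (2x the short one) above it."""
    v_img_mm_s = relative_speed_m_s * 1.0      # assumed image-plane scale
    t_threshold = min_offset_mm / v_img_mm_s   # visibility boundary in seconds
    short_exposure = 0.75 * t_threshold        # below threshold: no visible jitter
    long_exposure = 2.0 * short_exposure       # above threshold: jittered but color-saturated
    return long_exposure, short_exposure
```

At 1 m/s this yields a 7.5 ms short exposure and a 15 ms long exposure, straddling the 10 ms threshold.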
It is apparent to those skilled in the art that, for convenience and brevity of description, for the electronic equipment to which the foregoing data processing method applies, reference may be made to the corresponding description in the foregoing product embodiment; the details are not repeated here.
The above embodiments are merely exemplary embodiments of the present invention and do not limit it; the protection scope of the present invention is defined by the claims. Those skilled in the art can make various modifications or equivalent substitutions to the present invention within its spirit and protection scope, and such modifications or equivalent substitutions shall also be regarded as falling within the protection scope of the present invention.
Claims (10)
1. An image processing method, comprising the following steps:
obtaining a first image with a first parameter;
obtaining a second parameter based on the first parameter, and obtaining a second image with the second parameter;
segmenting the second image into M regions, and, for any region K among the M regions, calculating its influence map on the other pixels in the second image outside the region K;
performing color space conversion on the first image and the second image respectively, and transferring the color information of the first image to the second image using the influence map as a weighting coefficient.
2. The image processing method according to claim 1, wherein performing color space conversion on the first image and the second image respectively, and transferring the color information of the first image to the second image using the influence map as a weighting coefficient, comprises the following steps:
performing color conversion on each channel to obtain a new pixel value corresponding to each channel;
converting the obtained pixel values of all channels into the color space, so as to transfer the color information of the first image to the second image.
3. The image processing method according to claim 1, wherein the first parameter comprises a first exposure time and the second parameter comprises a second exposure time; the second exposure time is obtained based on the first exposure time.
4. The image processing method according to claim 3, wherein the first parameter further comprises a first gain value and the second parameter comprises a second gain value; the second gain value is obtained based on the first gain value.
5. The image processing method according to claim 4, wherein the first exposure time is twice the second exposure time, and the second gain value is 1 to 1.5 times the first gain value.
6. An image processing apparatus, comprising:
an image acquisition unit configured to obtain a first image with a first parameter, obtain a second parameter based on the first parameter, and obtain a second image with the second parameter;
an image processing unit configured to segment the second image into M regions, and, for any region K among the M regions, calculate its influence map on the other pixels in the second image outside the region K; and to perform color space conversion on the first image and the second image respectively, and transfer the color information of the first image to the second image using the influence map as a weighting coefficient.
7. The image processing apparatus according to claim 6, wherein the image processing unit is further configured to:
perform color conversion on each channel to obtain a new pixel value corresponding to each channel;
convert the obtained pixel values of all channels into the color space, so as to transfer the color information of the first image to the second image.
8. The image processing apparatus according to claim 6, wherein the first parameter comprises a first exposure time and the second parameter comprises a second exposure time; the second exposure time is obtained based on the first exposure time.
9. The image processing apparatus according to claim 8, wherein the first parameter further comprises a first gain value and the second parameter comprises a second gain value; the second gain value is obtained based on the first gain value.
10. The image processing apparatus according to claim 9, wherein the first exposure time is twice the second exposure time, and the second gain value is 1 to 1.5 times the first gain value.
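The pipeline of claim 1 (segment the second image into M regions, compute each region's influence map on the rest of the image, then use that map as a weighting coefficient for the color transfer) can be sketched roughly as below. This is an illustrative reading, not the patented implementation: the claims do not fix how the influence map is computed, so the Gaussian-of-centroid-distance map and the per-region mean-color shift used here are assumptions, and the claimed per-channel color space conversion step is omitted for brevity.

```python
import numpy as np

def color_transfer(first_img, second_img, labels, sigma=50.0):
    """Transfer color information from `first_img` onto `second_img`,
    weighting each region's contribution by its influence map.

    labels: integer array of shape (H, W) assigning every pixel of the
    second image to one of M regions (the segmentation of claim 1).
    """
    h, w, _ = second_img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    regions = np.unique(labels)
    out = second_img.astype(np.float64).copy()
    for k in regions:
        mask = labels == k
        # Influence of region K on the other pixels: here a Gaussian of
        # the distance to the region's centroid (an assumed choice).
        cy, cx = ys[mask].mean(), xs[mask].mean()
        influence = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
        influence[mask] = 1.0
        # Per-channel shift toward the first image's mean color, applied
        # with the influence map as the weighting coefficient.
        shift = first_img.reshape(-1, 3).mean(axis=0) - second_img[mask].mean(axis=0)
        out += influence[..., None] * shift / len(regions)
    return np.clip(out, 0, 255).astype(np.uint8)
```

A real implementation following claim 2 would first convert both images channel-wise into a decorrelated color space, apply the weighted transfer per channel, and convert the result back.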
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711330136.9A CN108024062A (en) | 2017-12-13 | 2017-12-13 | Image processing method and image processing apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108024062A true CN108024062A (en) | 2018-05-11 |
Family
ID=62073350
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711330136.9A Pending CN108024062A (en) | 2017-12-13 | 2017-12-13 | Image processing method and image processing apparatus |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108024062A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110708458A (en) * | 2018-07-10 | 2020-01-17 | 杭州海康威视数字技术股份有限公司 | Image frame compensation method, camera and thermal imaging camera |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110292477A1 (en) * | 2010-05-28 | 2011-12-01 | Xerox Corporation | Hierarchical scanner characterization |
CN103955902A (en) * | 2014-05-08 | 2014-07-30 | 国网上海市电力公司 | Weak light image enhancing method based on Retinex and Reinhard color migration |
JP2015080157A (en) * | 2013-10-18 | 2015-04-23 | キヤノン株式会社 | Image processing device, image processing method and program |
CN105120247A (en) * | 2015-09-10 | 2015-12-02 | 联想(北京)有限公司 | White-balance adjusting method and electronic device |
CN106060412A (en) * | 2016-08-02 | 2016-10-26 | 乐视控股(北京)有限公司 | Photographic processing method and device |
CN106797453A (en) * | 2014-07-08 | 2017-05-31 | 富士胶片株式会社 | Image processing apparatus, camera head, image processing method and image processing program |
CN107292804A (en) * | 2017-06-01 | 2017-10-24 | 西安电子科技大学 | Direct many exposure fusion parallel acceleration methods based on OpenCL |
Non-Patent Citations (3)
Title |
---|
ALLA MASLENNIKOVA, VLADIMIR VEZHNEVETS: "Interactive Local Color Transfer Between Images", GraphiCon 2007 |
HO-GUN HA et al.: "Local Color Transfer using Modified Color Influence Map with Color Category", 2011 IEEE International Conference on Consumer Electronics - Berlin (ICCE-Berlin) |
MIGUEL OLIVEIRA et al.: "Unsupervised Local Color Correction for Coarsely Registered Images", 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102642993B1 (en) | Night scene photography methods, devices, electronic equipment, and storage media | |
US11582400B2 (en) | Method of image processing based on plurality of frames of images, electronic device, and storage medium | |
CN109068058B (en) | Shooting control method and device in super night scene mode and electronic equipment | |
JP4846259B2 (en) | Brightness correction | |
CN107948514B (en) | Image blurs processing method, device, mobile device and computer storage medium | |
CN110191291B (en) | Image processing method and device based on multi-frame images | |
EP3443736B1 (en) | Method and apparatus for video content stabilization | |
CN110072052B (en) | Image processing method and device based on multi-frame image and electronic equipment | |
US20160219217A1 (en) | Camera Field Of View Effects Based On Device Orientation And Scene Content | |
CN106791380B (en) | Method and device for shooting dynamic photos | |
JP2005295567A (en) | Digital camera with luminance correction | |
CN101764934A (en) | Image capturing apparatus having subject cut-out function | |
US20100220208A1 (en) | Image processing method and apparatus and digital photographing apparatus using the same | |
US11388334B2 (en) | Automatic camera guidance and settings adjustment | |
CN109348088A (en) | Image denoising method, device, electronic equipment and computer readable storage medium | |
CN109005369A (en) | Exposal control method, device, electronic equipment and computer readable storage medium | |
WO2020029679A1 (en) | Control method and apparatus, imaging device, electronic device and readable storage medium | |
JP2012039591A (en) | Imaging apparatus | |
CN107613190A (en) | A kind of photographic method and terminal | |
JP6021512B2 (en) | Imaging device | |
McHugh | Understanding photography: master your digital camera and capture that perfect photo | |
WO2019124289A1 (en) | Device, control method, and storage medium | |
KR101094648B1 (en) | Auto Photograph Robot for Taking a Composed Picture and Method Thereof | |
CN106559614A (en) | Method, device and the terminal taken pictures | |
CN108024062A (en) | Image processing method and image processing apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20180511 |