CN110121882A - Image processing method and apparatus - Google Patents

Image processing method and apparatus

Info

Publication number
CN110121882A
Authority
CN
China
Prior art keywords
exposure
adjustment
frame
terminal
camera
Prior art date
Legal status
Granted
Application number
CN201780081683.XA
Other languages
Chinese (zh)
Other versions
CN110121882B (en)
Inventor
王军
杜成
敖欢欢
徐荣跃
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of CN110121882A
Application granted granted Critical
Publication of CN110121882B
Status: Active


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene

Abstract

An image processing method and apparatus for reducing motion blur. The method is as follows: while a terminal is previewing image frames captured by a camera, if there is relative motion between the photographed target object and the terminal, the terminal applies a first adjustment to the initial exposure parameters of the camera in the preview state, the first adjustment comprising reducing the initial exposure time and increasing the initial exposure gain. After receiving a shooting instruction, the terminal applies a second adjustment to the exposure parameters after the first adjustment, generates a first exposure frame according to the exposure parameters after the first adjustment, and generates at least two second exposure frames according to the exposure parameters after the second adjustment, the second adjustment comprising further reducing the exposure time after the first adjustment and further increasing the exposure gain after the first adjustment. The terminal fuses the first exposure frame with the at least two second exposure frames and outputs the fused image.

Description

Image processing method and apparatus
Technical field
This application relates to the field of image processing, and in particular to an image processing method and apparatus.
Background
Terminals with a camera function generally suffer from some degree of motion blur when photographing a moving object; if the photographed target moves quickly, severe motion blur appears. Yet motion scenes such as people in motion, moving objects, and swaying flowers and plants are exactly the scenes users care most about when taking photos. To improve the sharpness of moving subjects, professional photographers often use expensive large-aperture lenses and set a suitable aperture and shutter speed to capture the moving target. For mobile phone users who take photos anytime and anywhere, however, this is not practical: first, the aperture of a mobile phone camera is fixed and cannot be widened, so it cannot combine an adjusted aperture with a high shutter speed; second, even if the aperture were adjustable, ordinary users lack the prior knowledge to set suitable aperture and shutter parameters in time.
In recent years, the image post-processing capability of mobile phones has improved significantly, while improving camera hardware remains costly and difficult, so software methods have been developed and applied. Existing software methods against motion blur in photography mainly use a strategy of reducing the exposure time based on motion detection: motion detection is performed first, and when camera or object motion is detected, the shutter speed is raised automatically. Raising the shutter speed reduces the exposure time, and motion blur during imaging weakens in proportion to the reduction of the exposure time. However, reducing the exposure time lowers the overall image brightness, so the exposure gain must be increased proportionally to keep the overall brightness while the exposure time is reduced.
However, this method of reducing the exposure time and proportionally increasing the exposure gain leads to excessive image noise in dark scenes and lowers the overall image quality. If the noise level is to be guaranteed, the amount by which the exposure time can be reduced is limited, which weakens the ability to resist motion blur. In short, reducing the exposure time and increasing the exposure gain constrain each other, so the terminal's ability to reduce motion blur is insufficient.
Summary of the invention
This application provides an image processing method and apparatus to solve the problem that, when motion blur is reduced by shortening the exposure time, the amount of reduction is limited by the trade-off with the noise level, so the terminal's ability to reduce motion blur is insufficient.
In one aspect, an image processing method is provided. The method is as follows: in the preview stage, while the terminal is previewing image frames captured by a camera, if there is relative motion between the photographed target object and the terminal, the terminal adjusts the initial exposure parameters of the camera in the preview state; this adjustment is referred to herein as the first adjustment and comprises reducing the initial exposure time and increasing the initial exposure gain. After receiving a shooting instruction, the terminal adjusts the exposure parameters after the first adjustment again; this is referred to herein as the second adjustment and comprises further reducing the exposure time after the first adjustment and further increasing the exposure gain after the first adjustment. The terminal generates a first exposure frame according to the exposure parameters after the first adjustment and at least two second exposure frames according to the exposure parameters after the second adjustment, fuses the first exposure frame with the at least two second exposure frames, and outputs the fused image. By reducing the exposure time in two stages of the imaging process, a higher shutter speed is obtained. Reducing the exposure time in the preview stage effectively shortens the delay between pressing the shutter and imaging; reducing it again after the shooting command is issued cuts motion blur more effectively; and applying temporal multi-frame noise reduction to the multiple short frames in image post-processing helps to reduce the noise brought by increasing the exposure gain, giving stronger motion-blur resistance and a better anti-motion-blur effect.
In the preview state, the camera of the terminal generates initial exposure parameters under automatic exposure. The initial exposure parameters include an initial exposure time and an initial exposure gain.
In one possible design, the terminal performs the second adjustment on the exposure parameters after the first adjustment only when a certain condition is met; when the condition is not met, only the first adjustment is performed and the frame is generated directly according to the exposure parameters after the first adjustment, without the second adjustment. The condition may be any one of the following, or another condition: the terminal performs the second adjustment when the rate of relative motion between the target object and the terminal is greater than a set rate threshold; or the terminal performs the second adjustment when the initial exposure gain of the camera in the preview state is greater than a set gain threshold; or the terminal performs the second adjustment when the light value (LV) of the camera in the preview state is less than a set luminance threshold. By deciding whether to perform the second adjustment according to the exposure gain and the LV value, different motion-blur reduction strategies can be chosen flexibly, so that scenes with different ambient brightness are handled in a targeted way.
In one possible design, if the terminal determines that the rate of relative motion between the target object and the terminal is not greater than the set rate threshold, only the first adjustment is applied to the initial exposure parameters of the camera in the preview state; the imaging process generates exposure frames with the exposure parameters after the first adjustment and outputs an image from those frames, i.e. without the second adjustment.
In one possible design, the terminal determines, according to a preset relationship between motion rate and the ratio by which the exposure time is reduced, the exposure-time reduction ratio corresponding to the rate of the relative motion, and determines the amount by which the exposure gain is increased; the first adjustment is then applied to the initial exposure parameters of the camera in the preview state according to the determined reduction ratio and gain increase. By scaling the exposure-time reduction with the motion rate, a suitable reduction ratio is selected for the current motion rate, which prevents an unnecessarily short exposure time from introducing a higher noise level.
In one possible design, a maximum exposure gain threshold is set; when the first adjustment is applied to the exposure parameters in the preview state, the increased exposure gain must stay below the set maximum exposure gain threshold. By limiting the upper bound of the exposure gain, this design controls the noise level of the preview image.
In one possible design, before fusing the first exposure frame with the at least two second exposure frames, the terminal applies temporal multi-frame noise reduction fusion to the at least two second exposure frames to obtain a single short frame. Because the second exposure frames are generated with a shorter exposure time and a proportionally larger exposure gain, they carry more noise; this processing helps to reduce the noise level of the second exposure frames.
In one possible design, the terminal uses the short frame as a reference frame, performs image registration and ghost detection between the first exposure frame and the short frame, and, according to the ghost detection result, removes ghosts from the registered first exposure frame to obtain a deghosted long frame. According to the ghost detection result, taking the ghost region of the long frame as the fusion reference, the long frame and the short frame are fused in the frequency domain. Specifically, in the ghost region the fusion weight of the short-frame pixels is greater than that of the long-frame pixels, and in the other regions the fusion weight of the long-frame pixels is greater than that of the short-frame pixels. The ghost region is the moving region, i.e. the partial region where the target object moves relative to the terminal; the non-moving region can be regarded as stationary or approximately stationary relative to the terminal. For example, the moving region may be swaying flowers and plants, and the non-moving region the sky, the ground and other background. In this way the brightness information of the long exposure frame is preserved, the moving and non-moving regions of the captured image are both sharper, and the problem that the terminal's motion-blur reduction capability is limited by the noise trade-off is effectively avoided.
In a second aspect, an image processing apparatus is provided, which has the functionality to implement the method in any possible design of the first aspect. The functionality may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above functions.
In one possible design, the apparatus may be a chip or an integrated circuit.
In one possible design, the apparatus includes a camera and a processor. The camera is configured to capture image frames, and the processor is configured to execute a set of programs; when the programs are executed, the apparatus can perform the method described in any possible design of the first aspect.
In one possible design, the apparatus further includes a memory for storing the code executed by the processor.
In a third aspect, a computer storage medium is provided, storing a computer program that includes instructions for performing the method in any possible design of the first aspect.
In a fourth aspect, a computer program product containing instructions is provided which, when run on a computer, causes the computer to perform the method described in any possible design of the first aspect.
Brief description of the drawings
Fig. 1 is a schematic diagram of the hardware structure of a terminal in an embodiment of this application;
Fig. 2 is a flow diagram of the image processing method in an embodiment of this application;
Fig. 3a is a schematic diagram of the relationship between the exposure-time reduction ratio and the exposure gain in an embodiment of this application;
Fig. 3b is a schematic diagram of the proportional relationship between motion rate grade and exposure-time reduction in an embodiment of this application;
Fig. 4 is a schematic diagram of multi-frame short-frame noise reduction and long/short frame fusion in an embodiment of this application;
Fig. 5 is a flow diagram of removing ghosts from the long exposure frame in an embodiment of this application;
Fig. 6 is a flow diagram of frequency-domain fusion of the long and short frames in an embodiment of this application;
Fig. 7 is a flow diagram of the image processing method in one application scenario in an embodiment of this application;
Fig. 8 is a first schematic structural diagram of an image processing apparatus in an embodiment of this application;
Fig. 9 is a second schematic structural diagram of an image processing apparatus in an embodiment of this application.
Detailed description of embodiments
The embodiments of this application are described in detail below with reference to the accompanying drawings.
The terminal in the embodiments of this application may be any electronic device with a camera function, including but not limited to a personal computer, a server computer, a hand-held or laptop device, a mobile phone, a tablet computer, a personal digital assistant, a media player, a consumer electronic device, a minicomputer, or a mainframe computer.
Fig. 1 is a schematic diagram of the hardware structure of a terminal to which the embodiments of this application apply. As shown in Fig. 1, the terminal 100 includes a display device 110, a processor 120 and a memory 130. The memory 130 may be used to store software programs and data; the processor 120 runs the software programs and data stored in the memory 130 to perform the various functional applications and data processing of the terminal 100. The memory 130 may mainly include a program storage area and a data storage area, where the program storage area can store the operating system, applications required by at least one function, and so on, and the data storage area can store data created according to the use of the terminal 100, such as audio data, a phone book, or exchangeable image file (EXIF) data. In addition, the memory 130 may include high-speed random access memory and may also include non-volatile memory, for example at least one magnetic disk storage device, a flash memory device, or another solid-state storage component. The processor 120 is the control centre of the terminal 100; it connects the various parts of the entire terminal through various interfaces and lines, and performs the various functions of the terminal 100 and processes data by running or executing the software programs and/or data stored in the memory 130, thereby monitoring the terminal as a whole. The processor 120 may include one or more general-purpose processors, and may also include one or more digital signal processors (DSP) for performing related operations to implement the technical solutions provided by the embodiments of this application. Specifically, the processor 120 may be a central processing unit (CPU), a network processor (NP), or a combination of a CPU and an NP. The processor 120 may further include a hardware chip, which may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof; the PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof. The memory 130 may include volatile memory, such as random-access memory (RAM); the memory 130 may also include non-volatile memory, such as flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the memory 130 may also include a combination of the above kinds of memory.
The terminal 100 may also include an input device 140 for receiving input digital information, character information or contact/contactless touch and gesture operations, and for generating signal inputs related to user settings and function control of the terminal 100. The input device 140 may include a touch panel 141, also called a touch screen, which collects touch operations of the user on or near it and drives corresponding connection devices according to a preset program. The touch panel 141 may be implemented with resistive, capacitive, infrared, surface-acoustic-wave or other technologies. Besides the touch panel 141, the input device 140 may also include other input devices 142, which may include but are not limited to one or more of a physical keyboard, function keys (such as a volume control button or a power button), a trackball, a mouse, a joystick, and the like. The display device 110 includes a display panel 111 for displaying information entered by the user or provided to the user and the various menu interfaces of the terminal 100. Optionally, the display panel 111 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
In addition to the above, the terminal 100 may also include a power supply 150 for powering the other modules and a camera 160 for taking photos or videos. The terminal 100 may also include one or more sensors 170, such as an acceleration sensor, a light sensor, a GPS sensor, an infrared sensor, a laser sensor, a position sensor, or a lens orientation angle sensor. The terminal 100 may also include a radio frequency (RF) circuit 180 for network communication with wireless communication devices, and a WiFi module 190 for WiFi communication with other devices.
Based on the hardware structure of the terminal shown in Fig. 1, the image processing method provided by the embodiments of this application can be performed by the terminal 100.
After the camera is turned on, the terminal enters the preview state. While the image frames captured by the camera are being previewed, the photographed target object may be in motion, for example a person moving, an object moving, or flowers and plants swaying. Alternatively, the photographed target object may be stationary while the terminal is moving, for example when the user takes photos with a handheld terminal in a moving train or while walking or running. In either case, the photographed target object can be considered to be in relative motion with the terminal. In this application scenario, where there is relative motion between the photographed target object and the terminal, the image captured by the terminal may suffer from motion blur. The method provided by the embodiments of this application helps to solve the motion blur problem in this scenario.
The flow of the image processing method provided by the embodiments of this application is described in further detail below with reference to the drawings.
As shown in Fig. 2, the flow of the image processing method provided by the embodiments of this application is as follows.
Step 201: While previewing the image frames captured by the camera, when there is relative motion between the photographed target object and the terminal, the terminal applies a first adjustment to the initial exposure parameters of the camera in the preview state.
The first adjustment includes reducing the initial exposure time and increasing the initial exposure gain. In practice, in the preview state the camera of the terminal generates initial exposure parameters under automatic exposure (AE); the initial exposure parameters include an initial exposure time and an initial exposure gain.
Step 202: After receiving a shooting instruction, the terminal applies a second adjustment to the exposure parameters after the first adjustment, generates a first exposure frame according to the exposure parameters after the first adjustment, and generates at least two second exposure frames according to the exposure parameters after the second adjustment, where the second adjustment includes further reducing the exposure time after the first adjustment and further increasing the exposure gain after the first adjustment.
Because the exposure time after the second adjustment is shorter than the exposure time after the first adjustment, in the embodiments of this application the first exposure frame generated with the exposure parameters after the first adjustment is called the long exposure frame, and the second exposure frames generated with the exposure parameters after the second adjustment are called the short exposure frames.
Step 203: The terminal fuses the first exposure frame with the at least two second exposure frames and outputs the fused image.
It should be noted that in the embodiments of this application, when a certain condition is met, the exposure parameters need to be adjusted twice. To distinguish them, the adjustment applied to the camera's initial exposure parameters in the preview state is called the first adjustment, and the adjustment applied to the exposure parameters after the first adjustment once the user's shooting instruction is received is called the second adjustment. The exposure parameters in the embodiments of this application include at least the exposure time and the exposure gain. The first adjustment means reducing the exposure time and increasing the exposure gain relative to the parameters produced by automatic exposure in the preview state; the second adjustment means reducing the exposure time again and increasing the exposure gain again relative to the parameters after the first adjustment. Optionally, in both adjustments the ratio by which the exposure time is reduced equals the ratio by which the exposure gain is increased; in other words, the gain increase ratio is derived from the exposure-time reduction ratio. Shooting in the embodiments of this application may include, but is not limited to, taking photos and recording video with the camera.
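To make the proportional rule concrete, the short Python sketch below keeps the product of exposure time and gain (a rough proxy for overall image brightness) constant through both adjustments. It is only an illustration of the relation described above; the specific starting values and the ratio of 2.0 are assumptions, not values prescribed by this application.

```python
def adjust_exposure(exposure_time_ms, gain, ratio):
    """Reduce exposure time by `ratio` and raise gain by the same ratio,
    so that exposure_time * gain (overall brightness) stays constant."""
    return exposure_time_ms / ratio, gain * ratio

# Initial AE parameters in the preview state (assumed values).
t0, g0 = 40.0, 200
t1, g1 = adjust_exposure(t0, g0, ratio=2.0)   # first adjustment (preview)
t2, g2 = adjust_exposure(t1, g1, ratio=2.0)   # second adjustment (after the shutter command)

print(t0 * g0, t1 * g1, t2 * g2)              # all equal: brightness is preserved
```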
As can be seen that the embodiment of the present application can obtain the shorter time for exposure by reducing the time for exposure twice, motion blur can weaken in proportion with the reduction of time for exposure, and therefore, the shorter time for exposure helps more effectively to cut down motion blur.Time for exposure bring noise problem is reduced in order to weaken, the embodiment of the present application is by the way of multiframe fusion, long exposure frame is generated according to the exposure parameter of first time adjustment in imaging session, short exposure frames are generated according to the exposure parameter of second of adjustment, since the time for exposure that long exposure frame uses is longer, therefore long exposure frame has the advantage of higher brightness, and short exposure frames have the advantage being more clear in moving region.Long exposure frame and short exposure frames are merged, long exposure frame and the respective advantage of short exposure frames are retained in fusion process, can effectively export the image that moving region is more clear.
Some concrete details introductions are done to image processing method shown in Fig. 2 below, some specific descriptions are done to reaching for beneficial effect.
In step 202, a series of actions performed by terminal is that the shooting instruction for receiving user's triggering in terminal executes during imaging to output.Terminal generates at least one length exposure frame and at least two short exposure frames, the introduction that the embodiment of the present application carries out for generating one long exposure frame and three short exposure frames in imaging process.Optionally, terminal can also generate four, five short exposure frames in imaging process, but the factors such as COMPREHENSIVE CALCULATING amount, with three short exposure frames for a kind of possible implementation.
Specifically, after the terminal starts the camera, motion detection can be performed on the preview image frames to judge whether there is relative motion between the photographed target object and the terminal. Optionally, any existing motion detection method may be used on the preview image frames in the embodiments of this application. In general, a motion detection method downsamples two successive image frames and divides each downsampled frame into a grid, so that a frame contains multiple image grid blocks. By pattern-matching the images in corresponding grid blocks of the two frames and computing their correlation, it is judged whether there is motion between the blocks of the two frames: a large correlation indicates no motion, while a small correlation indicates that the image in the block has changed, i.e. there is motion. Calling the two frames the current frame and the neighbouring frame, the grid block in the neighbouring frame most similar to each block in the current frame is determined, the distance between the two matched blocks measures the motion speed, and the overall motion speed of the target is obtained by averaging the speeds of all grid blocks in the image.
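The sketch below shows one simple way such block-based motion detection could be implemented with NumPy: each grid block of the current frame is matched against nearby positions in the previous frame using normalized correlation, and the displacements of the best matches are averaged. It is only a minimal illustration of the generic approach described above, not the detector used in this application; the block size, search radius and correlation measure are assumptions.

```python
import numpy as np

def block_motion(prev, curr, block=16, search=4):
    """Estimate the average motion (in pixels) between two grayscale frames by
    matching grid blocks of `curr` against nearby positions in `prev`."""
    h, w = curr.shape
    speeds = []
    for y in range(search, h - block - search + 1, block):
        for x in range(search, w - block - search + 1, block):
            cur_blk = curr[y:y + block, x:x + block].astype(np.float64)
            cur_blk -= cur_blk.mean()
            best_corr, best_shift = -np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    ref = prev[y + dy:y + dy + block, x + dx:x + dx + block].astype(np.float64)
                    ref -= ref.mean()
                    corr = float((cur_blk * ref).sum() /
                                 (np.linalg.norm(cur_blk) * np.linalg.norm(ref) + 1e-9))
                    if corr > best_corr:          # high correlation = good match
                        best_corr, best_shift = corr, (dy, dx)
            speeds.append(np.hypot(*best_shift))  # block displacement = local motion
    return float(np.mean(speeds))                 # average motion rate over all blocks

# Tiny synthetic check: the current frame is the previous frame shifted by 2 pixels.
rng = np.random.default_rng(0)
prev = rng.random((64, 64))
curr = np.roll(prev, 2, axis=1)
print(block_motion(prev, curr))  # prints 2.0
```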
If it is detected that the photographed target object and the terminal are relatively stationary, or approximately relatively stationary, the terminal is probably shooting a static scene and takes the photo with the normal exposure parameters to which AE has converged. If relative motion between the photographed target object and the terminal is detected, the exposure parameters need to be adjusted. In one possible implementation, if the terminal determines that the rate of the relative motion between the target object and the terminal is greater than a set rate threshold, the second adjustment is applied to the exposure parameters after the first adjustment; if the terminal determines that the relative motion rate is not greater than the set rate threshold, only the first adjustment is applied to the camera's initial exposure parameters in the preview state, the imaging process generates exposure frames with the exposure parameters after the first adjustment and outputs an image from them, i.e. there is no second adjustment. The rate threshold is an empirical value.
Optionally, the condition deciding whether to perform the second adjustment can be equivalently replaced by either of the following two conditions, or by other conditions.
Condition 1: when the initial exposure gain of the camera in the preview state is greater than a set gain threshold, the second adjustment is applied to the exposure parameters after the first adjustment; when the exposure gain during preview is determined to be not greater than the set gain threshold, the imaging process generates exposure frames with the exposure parameters after the first adjustment and outputs an image from them, i.e. there is no second adjustment. The gain threshold is an empirical value, for example 800.
Condition 2: when the light value (LV) of the camera in the preview state is less than a set luminance threshold, the second adjustment is applied to the exposure parameters after the first adjustment; when the LV during preview is determined to be not less than the set luminance threshold, the imaging process generates exposure frames with the exposure parameters after the first adjustment and outputs an image from them, i.e. there is no second adjustment. LV can be regarded as the ambient brightness while the terminal previews the image frames captured by the camera. The luminance threshold is an empirical value, for example 40 or 20.
By deciding whether to perform the second adjustment according to the exposure gain and the LV value, different motion-blur reduction strategies can be chosen flexibly, handling scenes with different ambient brightness in a targeted way. For example, in a bright scene only the first adjustment is applied to the camera's initial exposure parameters in the preview state and the frame is generated with the exposure parameters after the first adjustment; in a dark scene the second adjustment follows the first adjustment and the long and short frames are fused in image post-processing. Of course, it is also possible to always apply the first adjustment followed by the second adjustment in most brightness environments and fuse the long and short frames in post-processing.
Optionally, when the terminal adjusts the camera's initial exposure parameters in the preview state, the ratio by which the exposure time is reduced can be related to the rate of relative motion between the target object and the terminal obtained from motion detection. Specifically, the terminal determines, according to a preset relationship between motion rate and exposure-time reduction ratio, the reduction ratio corresponding to the relative motion rate, and determines the gain increase from that reduction ratio; in general, the ratio by which the exposure gain is increased equals the ratio by which the exposure time is reduced. The camera's initial exposure parameters in the preview state are then adjusted according to the determined reduction ratio and gain increase. The relationship between motion rate and reduction ratio may be linear: as the motion rate increases, the exposure-time reduction ratio increases and the gain increase follows. However, because increasing the exposure gain introduces noise, optionally, as shown in Fig. 3a, the embodiments of this application set a maximum exposure gain threshold: when the exposure parameters of the camera in the preview state are adjusted, the increased exposure gain must stay below this threshold. If increasing the gain in proportion to the exposure-time reduction would exceed the maximum exposure gain threshold, the reduction ratio must be lowered, at least until the increased gain is below the set threshold. The maximum exposure gain threshold may for example be 700 or 1000. By limiting the upper bound of the exposure gain, this design controls the noise level of the preview image.
If the condition for the second adjustment is whether the relative motion rate exceeds the set rate threshold, then the relative motion rate and the exposure-time reduction ratio satisfy a specific relationship: essentially the reduction ratio first grows linearly with the motion rate and then stays constant. For example, Fig. 3b illustrates the first adjustment applied to the exposure parameters in the preview state according to the speed grade. Suppose the relative motion rate is divided into several grades, each grade corresponding to a different reduction ratio. When the speed grade is 0, a static scene is being shot and imaging uses the normal exposure parameters to which AE has converged. When the speed grade is 1, according to the relationship curve the exposure time is reduced by ratio 1 and the exposure gain is increased proportionally, and imaging uses the adjusted exposure parameters. When the speed grade is 2, the exposure time is reduced by ratio 2 and the gain increased proportionally, and imaging uses the adjusted parameters. When the speed grade is 3, the motion rate exceeds the set rate threshold; in the preview stage the exposure time is reduced by ratio 2 and the gain increased proportionally, and imaging uses the adjusted parameters. When the speed grade is greater than 3, the motion rate likewise exceeds the set rate threshold; in the preview stage the exposure time is reduced by ratio 2 and the gain increased proportionally, and imaging uses the adjusted parameters.
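The following sketch expresses this kind of grade-to-ratio mapping with a maximum-gain clamp in Python. The grade table, the concrete ratio values and the gain cap of 1000 are illustrative assumptions in the spirit of Figs. 3a and 3b, not values fixed by this application.

```python
# Illustrative first adjustment driven by speed grade, with a maximum-gain clamp.
# The ratio values and MAX_GAIN are assumed for the example only.
RATIO_BY_GRADE = {0: 1.0, 1: 1.5, 2: 2.0}   # grades >= 2 saturate at ratio 2.0
MAX_GAIN = 1000

def first_adjustment(exposure_time_ms, gain, speed_grade):
    ratio = RATIO_BY_GRADE[min(speed_grade, 2)]
    # Lower the reduction ratio if the proportional gain increase would exceed the cap.
    if gain * ratio > MAX_GAIN:
        ratio = max(1.0, MAX_GAIN / gain)
    return exposure_time_ms / ratio, gain * ratio

print(first_adjustment(40.0, 200, speed_grade=0))  # (40.0, 200)    static scene, AE unchanged
print(first_adjustment(40.0, 200, speed_grade=3))  # (20.0, 400.0)  ratio saturates at 2
print(first_adjustment(40.0, 800, speed_grade=3))  # ratio clamped so the gain stays <= 1000
```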
The following introduces how, in the image post-processing stage, the at least two short exposure frames and the at least one long exposure frame are fused. For ease of description, the imaging process is assumed to generate three short exposure frames and one long exposure frame, which are fused to output the fused image.
As shown in Fig. 4, after the image signal processing unit (ISP) of the terminal outputs the three short exposure frames and the one long exposure frame, the fusion process is roughly divided into step 401 and step 402, and finally the fusion result, i.e. the fused image, is output.
Step 401: Temporal multi-frame noise reduction fusion is applied to the three short exposure frames; the frame obtained after this processing is called the short frame.
The short frame here refers to the result of the noise reduction fusion of the three short exposure frames; in a special case it may be one of the three short exposure frames.
Step 402: The short frame obtained in step 401 is fused with the long exposure frame.
Specifically, with the International Organization for Standardization (ISO) sensitivity and the exposure time fixed, if the same scene is photographed at least twice, the noise fluctuates randomly between frames while the signal stays fixed. If the noise variance of each frame is σ², then after averaging N frames the noise variance of the result frame drops to σ²/N. Each halving of the noise standard deviation improves the signal-to-noise ratio by 6 dB, so in theory 4-frame temporal noise reduction gains 6 dB. The embodiments of this application use three short exposure frames, denoted Frame0, Frame1 and Frame2, for noise reduction fusion.
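The averaging argument above can be written out as follows (a standard result for averaging independent, identically distributed noise, not specific to this application):

```latex
\[
\hat{I} = \frac{1}{N}\sum_{k=1}^{N} I_k,\qquad
\operatorname{Var}\!\big(\hat{I}\big) = \frac{\sigma^2}{N},\qquad
\sigma_{\hat{I}} = \frac{\sigma}{\sqrt{N}},
\]
\[
\Delta\mathrm{SNR} = 20\log_{10}\sqrt{N}\ \mathrm{dB}
\quad\Rightarrow\quad
N = 4:\ \Delta\mathrm{SNR} = 20\log_{10} 2 \approx 6\ \mathrm{dB}.
\]
```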
If hand shake occurs or an object in the scene moves while the multiple frames are being captured, the temporal average may be misaligned and introduce ghosts or blur; therefore image registration and ghost detection need to be added before the multi-frame temporal averaging.
Step 401 may therefore include several concrete implementation steps: image registration, ghost detection, and temporal multi-frame noise reduction fusion. Specifically:
Image registration: Feature extraction is performed on the input reference frame Frame0 and the frame to be registered Frame1, obtaining a set of feature points for each frame and computing a feature descriptor for each point. The feature points of the two frames are matched by their descriptors, yielding a set of feature point pairs. From the matched point pairs the transformation matrix between the two frames, i.e. the projective transformation, is solved; it is a 3x3 matrix H. Frame1 is warped by the H matrix to obtain an image aligned with Frame0, which can be called the registration result of Frame1. The image of Frame2 aligned with Frame0 is obtained in the same way and can be called the registration result of Frame2.
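A minimal OpenCV-based sketch of this kind of feature-based registration is shown below (ORB features, brute-force matching and a RANSAC homography). The patent does not name a particular feature detector or solver, so ORB and RANSAC are assumptions chosen only to make the example concrete.

```python
import cv2
import numpy as np

def register(ref_gray, mov_gray):
    """Warp `mov_gray` onto `ref_gray` via a 3x3 projective transform H."""
    orb = cv2.ORB_create(1000)
    kp_r, des_r = orb.detectAndCompute(ref_gray, None)
    kp_m, des_m = orb.detectAndCompute(mov_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_m, des_r), key=lambda m: m.distance)

    src = np.float32([kp_m[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_r[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # 3x3 matrix H

    h, w = ref_gray.shape
    return cv2.warpPerspective(mov_gray, H, (w, h))         # registration result
```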
Ghost detection: The reference frame Frame0 is differenced with the registration results of Frame1 and Frame2 respectively, obtaining the difference value of each pixel, i.e. a diff map. Gaussian smoothing is applied to the diff map to remove the influence of noise. The diff values are compared with the corresponding ghost threshold to decide whether each pixel is a ghost point. Erosion and dilation are used to eliminate isolated points (noise), yielding the final ghost mask (ghost Mask).
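The sketch below follows this recipe with OpenCV and NumPy: absolute difference, Gaussian smoothing, thresholding, then erosion and dilation to remove isolated points. The threshold and kernel sizes are illustrative assumptions.

```python
import cv2
import numpy as np

def ghost_mask(ref, registered, thresh=25, ksize=5):
    """Return a binary mask (1 = ghost/moving pixel) between `ref` and a registered frame."""
    diff = cv2.absdiff(ref, registered)                # per-pixel difference (diff map)
    diff = cv2.GaussianBlur(diff, (ksize, ksize), 0)   # suppress noise
    mask = (diff > thresh).astype(np.uint8)            # candidate ghost points
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.erode(mask, kernel)                     # drop isolated points
    mask = cv2.dilate(mask, kernel)                    # restore the region extent
    return mask
```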
Temporal multi-frame noise reduction fusion: It is judged whether the ghost region area obtained from ghost detection is smaller than 1/3 of the full image. If not, the data of Frame0 is output directly; that is, the fusion process is skipped and Frame0 is used directly as the short frame for the subsequent long/short frame fusion. If it is, ghosts are removed by weighting each pixel according to the ghost mask and the fusion result is output, where inside the ghost mask region the fusion weight of the reference-frame pixels is greater than that of the corresponding pixels of the other registered frames; that is, in the ghost region the reference-frame pixels are used as the fusion result.
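Putting the pieces of step 401 together, the sketch below averages the registered frames outside the ghost region, keeps the reference frame inside it, and falls back to the reference frame when the ghost area is too large. The 1/3 area check comes from the text above; the simple hard weighting (reference only inside the mask, plain average outside) is an assumption for illustration.

```python
import numpy as np

def temporal_denoise(ref, registered_frames, masks, max_ghost_ratio=1/3):
    """Fuse `ref` with already-registered frames, guided by per-frame ghost masks."""
    ghost = np.clip(sum(masks), 0, 1)                  # union of the ghost regions
    if ghost.mean() >= max_ghost_ratio:                # too much motion: skip fusion
        return ref.copy()
    stack = np.stack([ref] + list(registered_frames), axis=0).astype(np.float32)
    averaged = stack.mean(axis=0)                      # temporal average: noise std / sqrt(N)
    out = np.where(ghost.astype(bool), ref.astype(np.float32), averaged)
    return out.astype(ref.dtype)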
Step 402 may include several concrete implementation steps: the short frame obtained in step 401 and the long exposure frame are registered, ghosts are detected and removed, and the deghosted long frame is fused with the short frame in the frequency domain.
Likewise, to avoid fusion errors caused by ghosts in the moving region between the long exposure frame and the short frame, the same image registration and ghost detection methods as in the multi-frame short-frame noise reduction are used before the long/short frame fusion to remove the influence of ghosts. The registration method is similar to the image registration described above. Using the ghost mask as the weight, the short frame produced by the multi-frame noise reduction fusion is fused with the registration result of the long exposure frame to obtain the deghosted long exposure frame. For ease of description, the deghosted long exposure frame obtained here is called the long frame. According to the ghost detection result, taking the ghost region of the long frame as the fusion reference, the long frame and the short frame are fused in the frequency domain, where in the ghost region the fusion weight of the short-frame pixels is greater than that of the corresponding long-frame pixels, and outside the ghost region the fusion weight of the long-frame pixels is greater than that of the corresponding short-frame pixels.
Fig. 5 is a flow diagram of removing ghosts from the long exposure frame. The inputs are the short frame obtained in step 401 and the registered long exposure frame. The ghost mask obtained by ghost detection is used as the weight for a weighted fusion of the two input images; morphological dilation and smoothing are applied to remove holes in the ghost mask and smooth its edges, improving the fusion effect. The output is the long exposure frame with ghosts removed, which in this embodiment may be called the long frame.
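A simple per-pixel alpha blend matching this description is sketched below: the dilated and smoothed ghost mask acts as the weight, pulling ghost-region pixels of the long frame towards the short reference. The dilation and blur kernel sizes are assumptions.

```python
import cv2
import numpy as np

def deghost_long_frame(long_registered, short_ref, mask):
    """Blend the registered long frame with the short reference using the ghost mask."""
    kernel = np.ones((5, 5), np.uint8)
    w = cv2.dilate(mask, kernel).astype(np.float32)    # close holes in the mask
    w = cv2.GaussianBlur(w, (11, 11), 0)               # smooth the mask edges
    w = np.clip(w, 0.0, 1.0)
    # Inside the ghost region (w ~ 1) take the short reference; elsewhere keep the long frame.
    out = w * short_ref.astype(np.float32) + (1.0 - w) * long_registered.astype(np.float32)
    return out.astype(long_registered.dtype)
```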
It should be noted that when the degree of relative motion between the photographed target object and the terminal is large, or when a macro scene is being shot, if the ghost region between the long and short frames occupies more than a set threshold of the whole image (for example 10%), this exceeds the maximum ghost level the algorithm is configured to handle; in that case, to avoid fusion misalignment, the long/short frame frequency-domain fusion is skipped and the long exposure frame is output directly.
In image processing, the frequency domain reflects how sharply the image grey level changes in the spatial domain, i.e. the rate of change of the grey level or the magnitude of the image gradient. Edges are abrupt, fast-changing parts of an image, so they appear as high-frequency components in the frequency domain; image noise is in most cases also high-frequency; smoothly varying parts of the image are low-frequency components. In other words, the Fourier transform provides a way to move freely between the spatial domain and the frequency domain, transforming the image from a grey-level distribution into a frequency distribution. The correspondence between image content and frequency is: low frequencies mostly correspond to the smooth outline of the image, middle frequencies mostly to details such as edges and texture, high frequencies mostly to noise, and the bright spot at the centre of the Fourier spectrum represents the grey-level mean of the image. Using this correspondence, this embodiment takes the result of ghost detection, uses the ghost region of the long frame as the fusion reference, and fuses the long frame and the short frame in the frequency domain, so as to preserve the brightness of the long frame and the detail of its non-moving region while keeping the sharpness of the moving region from the short frame.
Fig. 6 is a flow diagram of the frequency-domain fusion of the long frame and the short frame. The inputs are the short frame produced by the multi-frame noise reduction fusion and the deghosted long frame.
Step 601: The short frame produced by the multi-frame noise reduction fusion and the deghosted long frame are each downsampled to reduce the amount of computation.
Step 602: After the long frame is downsampled, it is upsampled again and the loss (error) map with respect to the original is computed; this is used after fusion to restore the image detail lost by downsampling.
Step 603: A fast Fourier transform (FFT) is applied to each of the two downsampled input frames to obtain their Fourier spectra, and the magnitude of each is computed.
Step 604: Using the magnitudes as weights, the Fourier spectra of the two inputs are merged.
During this fusion, the central bright spot of the long frame's Fourier spectrum must be protected: the values of the 10x10 region centred on the bright spot are assigned to the fused Fourier spectrum, so that the average brightness information of the long frame is retained.
Step 605: An inverse Fourier transform is applied to the fused spectrum to obtain the fused image.
Step 606: The fused image is added to the error map computed when the long frame was downsampled, restoring the loss caused by downsampling.
The final fusion result is output.
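The NumPy sketch below follows steps 601 to 606 on grayscale inputs: 2x downsampling, an error map for the long frame, FFT of both frames, a magnitude-weighted merge of the spectra with the low-frequency centre taken from the long frame, inverse FFT, upsampling, and adding back the error map. The 2x decimation, the use of fftshift to centre the spectrum, the particular magnitude weighting, and the nearest-neighbour upsampling are implementation assumptions made to keep the example short.

```python
import numpy as np

def freq_fuse(long_frame, short_frame):
    """Frequency-domain fusion of a deghosted long frame and a denoised short frame (8-bit grayscale assumed)."""
    lf = long_frame.astype(np.float32)
    sf = short_frame.astype(np.float32)

    # Step 601: downsample both inputs (simple 2x decimation).
    lf_ds, sf_ds = lf[::2, ::2], sf[::2, ::2]

    # Step 602: error map = long frame minus its down/upsampled version.
    lf_up = np.repeat(np.repeat(lf_ds, 2, axis=0), 2, axis=1)[:lf.shape[0], :lf.shape[1]]
    err = lf - lf_up

    # Step 603: FFT of both downsampled frames and their magnitudes.
    L = np.fft.fftshift(np.fft.fft2(lf_ds))
    S = np.fft.fftshift(np.fft.fft2(sf_ds))
    mag_l, mag_s = np.abs(L), np.abs(S)

    # Step 604: magnitude-weighted merge; keep the 10x10 centre from the long frame
    # so its average brightness (low-frequency content) is preserved.
    w = mag_l / (mag_l + mag_s + 1e-9)
    fused = w * L + (1.0 - w) * S
    cy, cx = fused.shape[0] // 2, fused.shape[1] // 2
    fused[cy - 5:cy + 5, cx - 5:cx + 5] = L[cy - 5:cy + 5, cx - 5:cx + 5]

    # Step 605: inverse FFT back to the spatial domain.
    fused_ds = np.real(np.fft.ifft2(np.fft.ifftshift(fused)))

    # Step 606: upsample and add the error map to restore the downsampling loss.
    fused_up = np.repeat(np.repeat(fused_ds, 2, axis=0), 2, axis=1)[:lf.shape[0], :lf.shape[1]]
    return np.clip(fused_up + err, 0, 255).astype(long_frame.dtype)
```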
In summary, the embodiments of this application reduce the exposure time for the first time in the preview state and for the second time during imaging, and apply multi-frame short-frame noise reduction and long/short frame frequency-domain fusion in the image post-processing stage, improving the terminal's ability to reduce motion blur when shooting.
As shown in Fig. 7, the image processing method above is described in further detail below with a specific application scenario, assuming that the scenario is taking a photo with the camera.
Step 701: The terminal receives an instruction to open the camera, starts the camera and enters the preview state.
Step 702: The preview image data is downsampled.
Step 703: Motion detection is performed on the downsampled image data.
Motion detection is performed by analysing two successive preview frames, and the detection result is output immediately, for example the motion state and the speed grade; the larger the motion rate, the higher the speed grade. Speed grade 0 means static, no motion, and the motion rates indicated by speed grades 1, 2 and 3 increase in turn.
If no motion is detected, steps 704 to 705 are performed; if motion is detected, step 706 is performed.
Step 704: When the camera application issues the photographing command, the photo is taken with the normal exposure parameters to which AE has converged, i.e. the sensor produces the frame with the normal exposure parameters.
Step 705: The image is output and stored.
Step 706: It is judged whether the motion rate grade is greater than the set rate threshold, or whether the exposure gain is greater than the set gain threshold, or whether the LV value is less than the set luminance threshold. If so, steps 710 to 714 are performed; otherwise, steps 707 to 709 are performed.
Step 707: The exposure parameters are adjusted during preview.
The exposure time is reduced by a preset ratio based on the motion speed grade, and the exposure gain is increased proportionally to keep the overall image brightness unchanged.
Step 708: When the camera application issues the photographing command, the sensor produces the frame with the adjusted exposure parameters.
Step 709: The image is output and stored.
Step 710: The exposure parameters are adjusted during preview.
The exposure time is reduced by a preset ratio based on the motion speed grade, and the exposure gain is increased proportionally to keep the overall image brightness unchanged.
Step 711: When the camera application issues the photographing command, the exposure time is further reduced by the set ratio and the exposure gain further increased, on the basis of the preview exposure parameters after the first adjustment.
Step 712: The sensor generates three short exposure frames with the further reduced exposure time and further increased exposure gain, and generates the long exposure frame with the preview exposure parameters after the first adjustment. Image post-processing is carried out after ISP processing.
Step 713: In image post-processing, multi-frame noise reduction of the short frames and long/short frame fusion are carried out, keeping the detail of the moving region from the short frame and the detail of the non-moving region from the long frame.
Step 714: The final fused image is output and saved.
In summary, in the image processing method provided by the embodiments of this application, the exposure time is reduced twice during the imaging process to obtain a higher shutter speed: the long exposure frame is generated with the parameters of the first exposure-time reduction, at least two short exposure frames are generated with the parameters of the second reduction, and the long and short exposure frames are fused in the image post-processing stage. In the ghost region the fusion weight of the short-frame pixels is greater than that of the long-frame pixels, and in the other regions the fusion weight of the long-frame pixels is greater than that of the short-frame pixels. The ghost region is the moving region, i.e. the partial region where the target object moves relative to the terminal, while the non-moving region can be regarded as stationary or approximately stationary relative to the terminal; for example, the moving region may be swaying flowers and plants and the non-moving region the sky, the ground and other background. In this way the brightness information of the long exposure frame is preserved, both the moving and the non-moving regions of the captured image are sharper, and the problem that the terminal's motion-blur reduction capability is limited by the noise trade-off is effectively avoided. Reducing the exposure time during preview effectively shortens the delay between pressing the shutter and imaging; reducing it again after the shooting command is issued cuts motion blur more effectively; and applying temporal multi-frame noise reduction to the multiple short frames in image post-processing helps to reduce the noise brought by increasing the exposure gain, giving stronger motion-blur resistance and a better anti-motion-blur effect.
Based on the same inventive concept as the image processing method shown in Fig. 2, as shown in Fig. 8, an embodiment of this application further provides an image processing apparatus 800 for performing the image processing method shown in Fig. 2. The image processing apparatus 800 includes:
an adjustment unit 801, configured to, while image frames captured by a camera are being previewed, apply a first adjustment to the initial exposure parameters of the camera in the preview state when there is relative motion between the photographed target object and the apparatus, where the first adjustment includes reducing the initial exposure time and increasing the initial exposure gain;
the adjustment unit 801 is further configured to apply a second adjustment to the exposure parameters after the first adjustment once a shooting instruction is received, where the second adjustment includes further reducing the exposure time after the first adjustment and further increasing the exposure gain after the first adjustment;
a generation unit 802, configured to generate a first exposure frame according to the exposure parameters after the first adjustment and generate at least two second exposure frames according to the exposure parameters after the second adjustment;
a fusion unit 803, configured to fuse the first exposure frame and the at least two second exposure frames generated by the generation unit 802 and output the fused image.
Optionally, when applying the second adjustment to the exposure parameters after the first adjustment, the adjustment unit 801 is configured to: apply the second adjustment when the rate of relative motion between the target object and the apparatus is greater than a set rate threshold; or apply the second adjustment when the initial exposure gain of the camera in the preview state is greater than a set gain threshold; or apply the second adjustment when the light value LV of the camera in the preview state is less than a set luminance threshold.
Optionally, when applying the first adjustment to the initial exposure parameters of the camera in the preview state, the adjustment unit 801 is configured to: determine, according to a preset relationship between motion rate and exposure-time reduction ratio, the reduction ratio corresponding to the rate of the relative motion and determine the amount by which the exposure gain is increased; and apply the first adjustment to the initial exposure parameters of the camera in the preview state according to the determined reduction ratio and gain increase.
Optionally, before fusing the first exposure frame and the at least two second exposure frames, the fusion unit 803 is further configured to apply temporal multi-frame noise reduction fusion to the at least two second exposure frames to obtain a short frame.
Optionally, when fusing the first exposure frame and the at least two second exposure frames, the fusion unit 803 is configured to: use the short frame as a reference frame, perform image registration and ghost detection between the first exposure frame and the short frame, and, according to the ghost detection result, remove ghosts from the registered first exposure frame to obtain a deghosted long frame; and, according to the ghost detection result, take the ghost region of the long frame as the fusion reference and fuse the long frame and the short frame in the frequency domain.
Based on the image processing method shown in Fig. 2, as shown in Fig. 9, an embodiment of this application further provides another image processing apparatus 900, including a camera 901 and a processor 902. The camera 901 is configured to capture image frames, and the processor 902 is configured to execute a set of code; when the code is executed, the image processing apparatus can perform the image processing method shown in Fig. 2. Details in common with the method are not repeated here. The image processing apparatus 900 may be the terminal 100 shown in Fig. 1. The terminal 100 shown in Fig. 1 can be used to perform the image processing method shown in Fig. 2, with the camera 160 performing the functions of the camera 901 and the processor 120 performing the functions of the processor 902: the camera 160 is configured to capture image frames, and the processor 120 is configured to carry out the details of the image processing method shown in Fig. 2. The functional modules in Fig. 8, namely the adjustment unit 801, the generation unit 802 and the fusion unit 803, can be implemented by the processor 902 in the image processing apparatus 900, or by the processor 120 in the terminal 100 shown in Fig. 1.
An embodiment of this application provides a computer storage medium storing a computer program that includes instructions for performing the image processing method shown in Fig. 2.
An embodiment of this application provides a computer program product containing instructions which, when run on a computer, causes the computer to perform the image processing method shown in Fig. 2.
Any of the image processing apparatuses provided by the embodiments of this application may also be a system on chip.
It should be understood by those skilled in the art that the embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to magnetic disk storage, CD-ROM and optical memory) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of the method, the device (system) and the computer program product according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to operate in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus which realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thereby provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present application have been described, those skilled in the art, once aware of the basic inventive concept, may make additional changes and modifications to these embodiments. The appended claims are therefore intended to be interpreted as covering the preferred embodiments and all changes and modifications that fall within the scope of the present application.
Obviously, those skilled in the art can make various modifications and variations to the embodiments of the present application without departing from the spirit and scope of the embodiments of the present application. The present application is also intended to cover such modifications and variations, provided that they fall within the scope of the claims of the present application and their technical equivalents.

Claims (16)

  1. An image processing method, characterized by comprising:
    performing, by a terminal, during preview of image frames acquired by a camera, a first adjustment on an initial exposure parameter of the camera in the preview state when there is relative motion between a captured target object and the terminal, wherein the first adjustment comprises reducing an initial exposure time and increasing an initial exposure gain;
    performing, by the terminal after a shooting instruction is received, a second adjustment on the exposure parameter obtained after the first adjustment, generating a first exposure frame according to the exposure parameter obtained after the first adjustment, and generating at least two second exposure frames according to the exposure parameter obtained after the second adjustment, wherein the second adjustment comprises reducing the exposure time obtained after the first adjustment and increasing the exposure gain obtained after the first adjustment;
    fusing, by the terminal, the first exposure frame and the at least two second exposure frames, and outputting a fused image.
  2. The method according to claim 1, characterized in that the terminal performing the second adjustment on the exposure parameter obtained after the first adjustment comprises:
    performing, by the terminal, the second adjustment on the exposure parameter obtained after the first adjustment when the rate of the relative motion between the target object and the terminal is greater than a set rate threshold; or
    performing the second adjustment on the exposure parameter obtained after the first adjustment when the initial exposure gain of the camera of the terminal in the preview state is greater than a set gain threshold; or
    performing the second adjustment on the exposure parameter obtained after the first adjustment when the light value LV of the camera of the terminal in the preview state is less than a set luminance threshold.
  3. The method according to claim 1 or 2, characterized in that performing the first adjustment on the initial exposure parameter of the camera in the preview state comprises:
    determining, by the terminal, according to a preset relationship between motion rate and exposure-time reduction ratio, the exposure-time reduction ratio corresponding to the rate of the relative motion, and determining a value by which the exposure gain is to be increased;
    performing the first adjustment on the initial exposure parameter of the camera in the preview state according to the determined exposure-time reduction ratio and the determined exposure gain increase.
  4. The method according to claim 1, 2 or 3, characterized in that, before the terminal fuses the first exposure frame and the at least two second exposure frames, the method further comprises:
    performing, by the terminal, temporal multi-frame noise reduction fusion on the at least two second exposure frames to obtain one short frame.
  5. The method according to claim 4, characterized in that the terminal fusing the first exposure frame and the at least two second exposure frames comprises:
    taking, by the terminal, the short frame as a reference frame, performing image registration and ghost detection between the first exposure frame and the short frame, and performing de-ghosting on the registered first exposure frame according to the ghost detection result to obtain a de-ghosted long frame;
    performing frequency-domain fusion of the long frame and the short frame according to the ghost detection result, using the ghost region of the long frame as the fusion reference.
  6. An image processing apparatus, characterized by comprising:
    an adjustment unit, configured to, during preview of image frames acquired by a camera, perform a first adjustment on an initial exposure parameter of the camera in the preview state when there is relative motion between a captured target object and the apparatus, wherein the first adjustment comprises reducing an initial exposure time and increasing an initial exposure gain;
    the adjustment unit being further configured to perform, after a shooting instruction is received, a second adjustment on the exposure parameter obtained after the first adjustment, wherein the second adjustment comprises reducing the exposure time obtained after the first adjustment and increasing the exposure gain obtained after the first adjustment;
    a generation unit, configured to generate a first exposure frame according to the exposure parameter obtained after the first adjustment, and to generate at least two second exposure frames according to the exposure parameter obtained after the second adjustment;
    a fusion unit, configured to fuse the first exposure frame and the at least two second exposure frames generated by the generation unit, and to output a fused image.
  7. The apparatus according to claim 6, characterized in that, when performing the second adjustment on the exposure parameter obtained after the first adjustment, the adjustment unit is configured to:
    perform the second adjustment on the exposure parameter obtained after the first adjustment when the rate of the relative motion between the target object and the apparatus is greater than a set rate threshold; or
    perform the second adjustment on the exposure parameter obtained after the first adjustment when the initial exposure gain of the camera in the preview state is greater than a set gain threshold; or
    perform the second adjustment on the exposure parameter obtained after the first adjustment when the light value LV of the camera in the preview state is less than a set luminance threshold.
  8. The apparatus according to claim 6 or 7, characterized in that, when performing the first adjustment on the initial exposure parameter of the camera in the preview state, the adjustment unit is configured to:
    determine, according to a preset relationship between motion rate and exposure-time reduction ratio, the exposure-time reduction ratio corresponding to the rate of the relative motion, and determine a value by which the exposure gain is to be increased;
    perform the first adjustment on the initial exposure parameter of the camera in the preview state according to the determined exposure-time reduction ratio and the determined exposure gain increase.
  9. The apparatus according to claim 6, 7 or 8, characterized in that, before the first exposure frame and the at least two second exposure frames are fused, the fusion unit is further configured to:
    perform temporal multi-frame noise reduction fusion on the at least two second exposure frames to obtain one short frame.
  10. The apparatus according to claim 9, characterized in that, when fusing the first exposure frame and the at least two second exposure frames, the fusion unit is configured to:
    take the short frame as a reference frame, perform image registration and ghost detection between the first exposure frame and the short frame, and perform de-ghosting on the registered first exposure frame according to the ghost detection result to obtain a de-ghosted long frame;
    perform frequency-domain fusion of the long frame and the short frame according to the ghost detection result, using the ghost region of the long frame as the fusion reference.
  11. An image processing apparatus, characterized by comprising a camera and a processor, wherein:
    the camera is configured to acquire image frames;
    the processor is configured to: during preview of the image frames acquired by the camera, perform a first adjustment on an initial exposure parameter of the camera in the preview state when there is relative motion between a captured target object and the apparatus, and, after a shooting instruction is received, perform a second adjustment on the exposure parameter obtained after the first adjustment, wherein the first adjustment comprises reducing an initial exposure time and increasing an initial exposure gain, and the second adjustment comprises reducing the exposure time obtained after the first adjustment and increasing the exposure gain obtained after the first adjustment; generate a first exposure frame according to the exposure parameter obtained after the first adjustment, and generate at least two second exposure frames according to the exposure parameter obtained after the second adjustment; and fuse the generated first exposure frame and the at least two second exposure frames, and output a fused image.
  12. The apparatus according to claim 11, characterized in that, when performing the second adjustment on the exposure parameter obtained after the first adjustment, the processor is configured to:
    perform the second adjustment on the exposure parameter obtained after the first adjustment when the rate of the relative motion between the target object and the apparatus is greater than a set rate threshold; or
    perform the second adjustment on the exposure parameter obtained after the first adjustment when the initial exposure gain of the camera in the preview state is greater than a set gain threshold; or
    perform the second adjustment on the exposure parameter obtained after the first adjustment when the light value LV of the camera in the preview state is less than a set luminance threshold.
  13. The apparatus according to claim 11 or 12, characterized in that, when performing the first adjustment on the initial exposure parameter of the camera in the preview state, the processor is configured to:
    determine, according to a preset relationship between motion rate and exposure-time reduction ratio, the exposure-time reduction ratio corresponding to the rate of the relative motion, and determine a value by which the exposure gain is to be increased;
    perform the first adjustment on the initial exposure parameter of the camera in the preview state according to the determined exposure-time reduction ratio and the determined exposure gain increase.
  14. The apparatus according to claim 11, 12 or 13, characterized in that, before the first exposure frame and the at least two second exposure frames are fused, the processor is further configured to:
    perform temporal multi-frame noise reduction fusion on the at least two second exposure frames to obtain one short frame.
  15. The apparatus according to claim 14, characterized in that, when fusing the first exposure frame and the at least two second exposure frames, the processor is configured to:
    take the short frame as a reference frame, perform image registration and ghost detection between the first exposure frame and the short frame, and perform de-ghosting on the registered first exposure frame according to the ghost detection result to obtain a de-ghosted long frame;
    perform frequency-domain fusion of the long frame and the short frame according to the ghost detection result, using the ghost region of the long frame as the fusion reference.
  16. A computer storage medium, characterized in that the computer storage medium stores a computer program, the computer program comprising instructions for executing the method according to any one of claims 1 to 5.
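Purely as an illustration of claims 2 and 3 (and their apparatus counterparts, claims 7 to 8 and 12 to 13), the sketch below shows one possible way to encode a preset motion-rate/exposure-time-ratio relationship and the three alternative conditions that may trigger the second adjustment. The table values, the thresholds and the choice to hold the product of exposure time and gain roughly constant are assumptions made for this sketch only; they are not values disclosed by the patent.

```python
import numpy as np

# Assumed preset relationship between relative-motion rate (pixels per frame)
# and the ratio by which the exposure time is reduced (claim 3).
MOTION_RATES = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
TIME_RATIOS = np.array([1.0, 0.8, 0.6, 0.4, 0.25])

def exposure_time_ratio(motion_rate: float) -> float:
    """Look up, with linear interpolation, how much to shorten the exposure time."""
    return float(np.interp(motion_rate, MOTION_RATES, TIME_RATIOS))

def adjust_exposure(time_ms: float, gain: float, motion_rate: float):
    """First adjustment: shorten the exposure time and raise the gain so that the
    overall exposure stays roughly constant (one possible compensation rule)."""
    ratio = exposure_time_ratio(motion_rate)
    return time_ms * ratio, gain / ratio

def needs_second_adjustment(motion_rate: float, preview_gain: float, light_value: float,
                            rate_thresh: float = 15.0, gain_thresh: float = 4.0,
                            lv_thresh: float = 50.0) -> bool:
    """Claim-2 style trigger: any one of the three conditions enables the second adjustment."""
    return (motion_rate > rate_thresh) or (preview_gain > gain_thresh) or (light_value < lv_thresh)

# Example: a target moving at 12 pixels/frame under an initial exposure of 33 ms at gain 2.0.
t, g = adjust_exposure(33.0, 2.0, motion_rate=12.0)
print(f"first adjustment -> {t:.1f} ms at gain {g:.2f}")
print("second adjustment needed:", needs_second_adjustment(12.0, preview_gain=g, light_value=80.0))
```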
CN201780081683.XA 2017-10-13 2017-10-13 Image processing method and device Active CN110121882B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/106176 WO2019071613A1 (en) 2017-10-13 2017-10-13 Image processing method and device

Publications (2)

Publication Number Publication Date
CN110121882A true CN110121882A (en) 2019-08-13
CN110121882B CN110121882B (en) 2020-09-08

Family

ID=66101215

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201780081683.XA Active CN110121882B (en) 2017-10-13 2017-10-13 Image processing method and device

Country Status (2)

Country Link
CN (1) CN110121882B (en)
WO (1) WO2019071613A1 (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110619593B (en) * 2019-07-30 2023-07-04 西安电子科技大学 Double-exposure video imaging system based on dynamic scene
TWI727497B (en) * 2019-11-11 2021-05-11 瑞昱半導體股份有限公司 Image processing method based on sensor characteristics
CN112819699A (en) * 2019-11-15 2021-05-18 北京金山云网络技术有限公司 Video processing method and device and electronic equipment
CN111091498B (en) * 2019-12-31 2023-06-23 联想(北京)有限公司 Image processing method, device, electronic equipment and medium
CN113271414B (en) * 2020-02-14 2022-11-18 上海海思技术有限公司 Image acquisition method and device
CN111275653B (en) * 2020-02-28 2023-09-26 北京小米松果电子有限公司 Image denoising method and device
CN113785559A (en) * 2020-03-11 2021-12-10 深圳市大疆创新科技有限公司 Infrared image processing method, processing device, unmanned aerial vehicle and storage medium
CN114338956A (en) * 2020-09-30 2022-04-12 北京小米移动软件有限公司 Image processing method, image processing apparatus, and storage medium
CN113766150B (en) * 2021-08-31 2024-03-26 北京安酷智芯科技有限公司 Noise reduction method, circuit system, electronic equipment and computer readable storage medium
CN114302068B (en) * 2022-01-06 2023-09-26 重庆紫光华山智安科技有限公司 Image shooting method and device
CN115278069A (en) * 2022-07-22 2022-11-01 北京紫光展锐通信技术有限公司 Image processing method and device, computer readable storage medium and terminal

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5986705A (en) * 1997-02-18 1999-11-16 Matsushita Electric Industrial Co., Ltd. Exposure control system controlling a solid state image sensing device
US20080204564A1 (en) * 2007-02-22 2008-08-28 Matsushita Electric Industrial Co., Ltd. Image pickup apparatus and lens barrel
CN101510960A (en) * 2009-03-26 2009-08-19 北京中星微电子有限公司 Mobile phone camera shooting method and apparatus
CN101873437A (en) * 2009-09-15 2010-10-27 杭州海康威视系统技术有限公司 Method and device for regulating exposure
CN106972887A (en) * 2012-05-24 2017-07-21 松下电器(美国)知识产权公司 Information communicating method, information-communication device, program and recording medium
CN103634513A (en) * 2012-08-20 2014-03-12 佳能株式会社 Image processing apparatus and control method thereof
CN103702015A (en) * 2013-12-20 2014-04-02 华南理工大学 Exposure control method for human face image acquisition system under near-infrared condition
CN105827964A (en) * 2016-03-24 2016-08-03 维沃移动通信有限公司 Image processing method and mobile terminal

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110460773A (en) * 2019-08-16 2019-11-15 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment, computer readable storage medium
CN113055580A (en) * 2019-12-26 2021-06-29 中兴通讯股份有限公司 Environment recognition method, shooting mode switching method, terminal and storage medium
CN113055580B (en) * 2019-12-26 2023-10-03 中兴通讯股份有限公司 Environment recognition method, shooting mode switching method, terminal and storage medium
CN111462021B (en) * 2020-04-27 2023-08-29 Oppo广东移动通信有限公司 Image processing method, apparatus, electronic device, and computer-readable storage medium
CN111462021A (en) * 2020-04-27 2020-07-28 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN112399091A (en) * 2020-10-26 2021-02-23 维沃移动通信有限公司 Image processing method and device and electronic equipment
CN112532890A (en) * 2020-11-02 2021-03-19 浙江大华技术股份有限公司 Exposure control method, image pickup apparatus, and computer-readable storage medium
CN112689099A (en) * 2020-12-11 2021-04-20 北京邮电大学 Double-image-free high-dynamic-range imaging method and device for double-lens camera
CN115037915A (en) * 2021-03-05 2022-09-09 华为技术有限公司 Video processing method and processing device
CN115037915B (en) * 2021-03-05 2023-11-14 华为技术有限公司 Video processing method and processing device
CN113382169A (en) * 2021-06-18 2021-09-10 荣耀终端有限公司 Photographing method and electronic equipment
CN113298735A (en) * 2021-06-22 2021-08-24 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113592887A (en) * 2021-06-25 2021-11-02 荣耀终端有限公司 Video shooting method, electronic device and computer-readable storage medium
CN113592887B (en) * 2021-06-25 2022-09-02 荣耀终端有限公司 Video shooting method, electronic device and computer-readable storage medium
WO2022267565A1 (en) * 2021-06-25 2022-12-29 荣耀终端有限公司 Video photographing method, and electronic device and computer-readable storage medium
CN113411512A (en) * 2021-08-04 2021-09-17 红云红河烟草(集团)有限责任公司 Industrial camera automatic exposure control method for cigarette field
CN115706863A (en) * 2021-08-12 2023-02-17 荣耀终端有限公司 Video processing method and device, electronic equipment and storage medium
CN115706767A (en) * 2021-08-12 2023-02-17 荣耀终端有限公司 Video processing method and device, electronic equipment and storage medium
CN115706766A (en) * 2021-08-12 2023-02-17 荣耀终端有限公司 Video processing method and device, electronic equipment and storage medium
CN115706870A (en) * 2021-08-12 2023-02-17 荣耀终端有限公司 Video processing method and device, electronic equipment and storage medium
CN115706767B (en) * 2021-08-12 2023-10-31 荣耀终端有限公司 Video processing method, device, electronic equipment and storage medium
CN115706863B (en) * 2021-08-12 2023-11-21 荣耀终端有限公司 Video processing method, device, electronic equipment and storage medium
CN115706766B (en) * 2021-08-12 2023-12-15 荣耀终端有限公司 Video processing method, device, electronic equipment and storage medium
CN115706870B (en) * 2021-08-12 2023-12-26 荣耀终端有限公司 Video processing method, device, electronic equipment and storage medium
CN113905185B (en) * 2021-10-27 2023-10-31 锐芯微电子股份有限公司 Image processing method and device
CN113905185A (en) * 2021-10-27 2022-01-07 锐芯微电子股份有限公司 Image processing method and device
WO2023124202A1 (en) * 2021-12-29 2023-07-06 荣耀终端有限公司 Image processing method and electronic device
CN116723408A (en) * 2022-02-28 2023-09-08 荣耀终端有限公司 Exposure control method and electronic equipment
CN116402695A (en) * 2022-06-28 2023-07-07 上海玄戒技术有限公司 Video data processing method, device and storage medium

Also Published As

Publication number Publication date
CN110121882B (en) 2020-09-08
WO2019071613A1 (en) 2019-04-18

Similar Documents

Publication Publication Date Title
CN110121882A (en) A kind of image processing method and device
US11671702B2 (en) Real time assessment of picture quality
CN109671106B (en) Image processing method, device and equipment
KR101297524B1 (en) Response to detection of blur in an image
JP6271990B2 (en) Image processing apparatus and image processing method
JP4664379B2 (en) Electronic device and image data processing method for image data processing
JP5589548B2 (en) Imaging apparatus, image processing method, and program storage medium
US10853927B2 (en) Image fusion architecture
JP6267502B2 (en) IMAGING DEVICE, IMAGING DEVICE CONTROL METHOD, AND PROGRAM
US11138709B2 (en) Image fusion processing module
JP5699432B2 (en) Image processing device
CN112822412B (en) Exposure method, exposure device, electronic equipment and storage medium
US9838594B2 (en) Irregular-region based automatic image correction
CN112637500B (en) Image processing method and device
CN113099122A (en) Shooting method, shooting device, shooting equipment and storage medium
CN106412423A (en) Focusing method and device
CN110677580B (en) Shooting method, shooting device, storage medium and terminal
CN111654618A (en) Camera focusing sensitivity control method and device
JP6645711B2 (en) Image processing apparatus, image processing method, and program
CN112954204B (en) Photographing control method and device, storage medium and terminal
CN113873160B (en) Image processing method, device, electronic equipment and computer storage medium
US20230368343A1 (en) Global motion detection-based image parameter control
WO2019072222A1 (en) Image processing method and device and apparatus
CN116416141A (en) Image processing method and device, equipment and storage medium
JP6271985B2 (en) IMAGING DEVICE, IMAGING DEVICE CONTROL METHOD, AND PROGRAM

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant