CN110493522A - Anti-shake method and apparatus, electronic device, and computer-readable storage medium - Google Patents
Anti-shake method and apparatus, electronic device, and computer-readable storage medium
- Publication number
- CN110493522A CN110493522A CN201910790673.4A CN201910790673A CN110493522A CN 110493522 A CN110493522 A CN 110493522A CN 201910790673 A CN201910790673 A CN 201910790673A CN 110493522 A CN110493522 A CN 110493522A
- Authority
- CN
- China
- Prior art keywords
- image
- dispersion
- exposure duration
- parameter
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/68—Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/73—Circuitry for compensating brightness variation in the scene by influencing the exposure time
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
Abstract
The present application relates to an anti-shake method and apparatus, an electronic device, and a computer-readable storage medium. The method includes: obtaining an image set, where the image set includes a first image captured at the current time and at least one second image captured within a target time period; performing motion blur detection on the image set to obtain a blur parameter set corresponding to the image set; determining, according to the blur parameter set, a dispersion of the blur degree of the first image relative to the blur degree of the second image; determining a target exposure duration according to the dispersion; and capturing an image based on the target exposure duration to obtain a target image. The above method and apparatus, electronic device, and computer-readable storage medium can improve the accuracy of image stabilization.
Description
Technical field
The present application relates to the field of computer technology, and in particular to an anti-shake method and apparatus, an electronic device, and a computer-readable storage medium.
Background technique
With the development of computer technology, more and more electronic devices can capture images, and various image processing techniques have emerged. When capturing an image, shake of the electronic device often makes the captured image blurry. Traditional anti-shake methods are usually electronic image stabilization or OIS (Optical Image Stabilization). However, traditional stabilization techniques suffer from low stabilization accuracy.
Summary of the invention
Embodiments of the present application provide an anti-shake method and apparatus, an electronic device, and a computer-readable storage medium, which can improve the accuracy of image stabilization.
An anti-shake method includes:
obtaining an image set, where the image set includes a first image captured at the current time and at least one second image captured within a target time period;
performing motion blur detection on the image set to obtain a blur parameter set corresponding to the image set;
determining, according to the blur parameter set, a dispersion of the blur degree of the first image relative to the blur degree of the second image;
determining a target exposure duration according to the dispersion; and
capturing an image based on the target exposure duration to obtain a target image.
An anti-shake apparatus includes:
an image set obtaining module, configured to obtain an image set, where the image set includes a first image captured at the current time and at least one second image captured within a target time period;
a motion blur detection module, configured to perform motion blur detection on the image set to obtain a blur parameter set corresponding to the image set;
a dispersion determining module, configured to determine, according to the blur parameter set, a dispersion of the blur degree of the first image relative to the blur degree of the second image;
a target exposure duration determining module, configured to determine a target exposure duration according to the dispersion; and
a target image obtaining module, configured to capture an image based on the target exposure duration to obtain a target image.
An electronic device includes a memory and a processor, where the memory stores a computer program which, when executed by the processor, causes the processor to perform the steps of the above anti-shake method.
A computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the above method.
With the above anti-shake method and apparatus, electronic device, and computer-readable storage medium, an image set is obtained, the image set including a first image captured at the current time and at least one second image captured within a target time period; motion blur detection is performed on the image set to obtain a blur parameter set corresponding to the image set; and a dispersion of the first image relative to the second image is determined according to the blur parameter set. The dispersion indicates the degree of deviation of the blur degree of the first image from the blur degree of the second image, so a more accurate target exposure duration can be determined according to the dispersion. Capturing an image based on the target exposure duration yields a clearer target image, which improves the accuracy of image stabilization.
Detailed description of the invention
In order to describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the accompanying drawings in the following description show only some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.
Fig. 1 is a schematic diagram of an image processing circuit in one embodiment;
Fig. 2 is a flowchart of an anti-shake method in one embodiment;
Fig. 3 is a flowchart of the motion blur detection step in one embodiment;
Fig. 4 is a flowchart of the step of generating a target video in one embodiment;
Fig. 5 is a flowchart of the step of obtaining frame images in one embodiment;
Fig. 6 is a schematic diagram of an anti-shake method in one embodiment;
Fig. 7 is a structural block diagram of an anti-shake apparatus in one embodiment;
Fig. 8 is a schematic diagram of the internal structure of an electronic device in one embodiment.
Specific embodiment
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only intended to explain the present application and are not intended to limit it.
It can be understood that the terms "first", "second", and the like used in the present application may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, without departing from the scope of the present application, a first image may be referred to as a second image, and similarly, a second image may be referred to as a first image. Both the first image and the second image are images, but they are not the same image.
An embodiment of the present application provides an electronic device. The electronic device includes an image processing circuit, which may be implemented using hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. Fig. 1 is a schematic diagram of the image processing circuit in one embodiment. As shown in Fig. 1, for ease of description, only the aspects of the image processing technology related to the embodiments of the present application are shown.
As shown in Fig. 1, the image processing circuit includes an ISP processor 140 and a control logic device 150. Image data captured by an imaging device 110 is first processed by the ISP processor 140, which analyzes the image data to capture image statistics that can be used to determine one or more control parameters of the imaging device 110. The imaging device 110 may include a camera having one or more lenses 112 and an image sensor 114. The image sensor 114 may include a color filter array (such as a Bayer filter); the image sensor 114 can obtain the light intensity and wavelength information captured by each of its imaging pixels and provide a set of raw image data that can be processed by the ISP processor 140. An attitude sensor 120 (such as a three-axis gyroscope, Hall sensor, or accelerometer) may provide acquired image processing parameters (such as anti-shake parameters) to the ISP processor 140 based on the interface type of the attitude sensor 120. The interface of the attitude sensor 120 may be an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of the above interfaces.
In addition, the image sensor 114 may also send raw image data to the attitude sensor 120; the attitude sensor 120 may provide the raw image data to the ISP processor 140 based on its interface type, or store the raw image data in an image memory 130.
The ISP processor 140 processes the raw image data pixel by pixel in various formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits. The ISP processor 140 may perform one or more image processing operations on the raw image data and collect statistical information about the image data, where the image processing operations may be performed with the same or different bit-depth precision.
The ISP processor 140 may also receive image data from the image memory 130. For example, the interface of the attitude sensor 120 sends raw image data to the image memory 130, and the raw image data in the image memory 130 is then provided to the ISP processor 140 for processing. The image memory 130 may be part of a memory device, a storage device, or an independent dedicated memory within the electronic device, and may include DMA (Direct Memory Access) features.
When receiving raw image data from the interface of the image sensor 114, from the interface of the attitude sensor 120, or from the image memory 130, the ISP processor 140 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 130 for further processing before being displayed. The ISP processor 140 receives processing data from the image memory 130 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The image data processed by the ISP processor 140 may be output to a display 160 for viewing by a user and/or for further processing by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the ISP processor 140 may also be sent to the image memory 130, and the display 160 may read image data from the image memory 130. In one embodiment, the image memory 130 may be configured to implement one or more frame buffers.
The statistical data determined by the ISP processor 140 may be sent to the control logic device 150. For example, the statistical data may include image sensor 114 statistics such as gyroscope vibration frequency, automatic exposure, automatic white balance, automatic focus, flicker detection, black level compensation, and shading correction of the lens 112. The control logic device 150 may include a processor and/or microcontroller executing one or more routines (such as firmware), and the one or more routines may determine, based on the received statistical data, the control parameters of the imaging device 110 and the control parameters of the ISP processor 140. For example, the control parameters of the imaging device 110 may include attitude sensor 120 control parameters (such as gain, integration time for exposure control, and anti-shake parameters), camera flash control parameters, camera anti-shake displacement parameters, lens 112 control parameters (such as focus or zoom focal length), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (for example, during RGB processing), as well as lens 112 shading correction parameters.
In one embodiment, a first image captured at the current time and at least one second image captured within a target time period are obtained through the lens 112 and the image sensor 114 of the imaging device 110, and the first image and the at least one second image are sent to the ISP processor 140. The ISP processor 140 may perform motion blur detection on the image set composed of the first image and the at least one second image to obtain a blur parameter set corresponding to the image set; determine, according to the blur parameter set, the dispersion of the blur degree of the first image relative to the blur degree of the second image, where the dispersion indicates the degree of deviation of the blur degree of the first image from that of the second image; determine a more accurate target exposure duration according to the dispersion; and send the target exposure duration to the control logic device 150. After obtaining the target exposure duration, the control logic device 150 controls the imaging device 110 to capture an image based on the target exposure duration, which yields a clearer target image and improves the accuracy of image stabilization.
In one embodiment, attitude transducer 120 acquires the first attitude data, and the first attitude data is sent to
ISP processor 140.First attitude data can be filtered by ISP processor 140, obtain the second attitude data;It obtains
The second adjustment factor;Targeted attitude data are obtained according to the first attitude data, the second attitude data and the second adjustment factor;According to
Target image is carried out image conversion by targeted attitude data, i.e., corrects to target image, obtain more accurate video frame figure
Picture improves the accuracy of stabilization to generate the more stable target video of picture.
Fig. 2 is a flowchart of an anti-shake method in one embodiment. The anti-shake method in this embodiment is described using the example of running on the terminal or server in Fig. 1. As shown in Fig. 2, the anti-shake method includes steps 202 to 210.
Step 202: obtain an image set, where the image set includes a first image captured at the current time and at least one second image captured within a target time period.
An electronic device may be provided with one or more cameras, for example 1, 2, 3, or 5 cameras, which is not limited here. The form in which a camera is provided on the electronic device is also not limited; for example, it may be a camera built into the electronic device or an external camera, and it may be a front camera or a rear camera.
In the embodiments provided by the present application, the camera on the electronic device may be any type of camera. For example, the camera may be a color camera, a black-and-white camera, a depth camera, a telephoto camera, a wide-angle camera, and so on, without being limited thereto.
Correspondingly, a color image is obtained by the color camera, a black-and-white image by the black-and-white camera, a depth image by the depth camera, a telephoto image by the telephoto camera, and a wide-angle image by the wide-angle camera, without being limited thereto. The cameras in the electronic device may be of the same type or of different types. For example, all of them may be color cameras or black-and-white cameras; or one of the cameras may be a telephoto camera while the others are wide-angle cameras, without being limited thereto.
An image set refers to a set containing at least two images, that is, a set including a first image captured at the current time and at least one second image captured within a target time period. The first image refers to the image captured at the current time, and a second image refers to an image captured within the target time period.
In one embodiment, the target time period may be a period before the current time. For example, when the current time is 13:40:36, the target time period may be (13:40:33, 13:40:36).
In one embodiment, obtaining the first image and the at least one second image includes: storing the first image captured at the current time and the at least one second image captured within the target time period into a first queue; and obtaining the first image and the at least one second image from the first queue.
The first queue may be a first-in-first-out (FIFO) queue, that is, the image stored into the first queue first is taken out first. Storing the images into the first queue preserves the first image and the at least one second image captured within the target time period, which makes it convenient to perform motion blur detection on the first image and the at least one second image.
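The queue-based buffering described above can be sketched as follows; this is an illustrative helper, not code from the patent, with `FrameQueue` and its capacity chosen only for the example:

```python
from collections import deque

class FrameQueue:
    """Bounded first-in-first-out buffer for recently captured frames."""

    def __init__(self, capacity):
        # deque with maxlen discards the oldest frame automatically
        self.frames = deque(maxlen=capacity)

    def push(self, frame):
        self.frames.append(frame)

    def split(self):
        """Return (first_image, second_images): the newest frame is the
        first image; the older buffered frames are the second images
        captured within the target time period."""
        first = self.frames[-1]
        seconds = list(self.frames)[:-1]
        return first, seconds

q = FrameQueue(capacity=4)
for frame_id in ["f1", "f2", "f3", "f4", "f5"]:
    q.push(frame_id)
first, seconds = q.split()
# with capacity 4, "f1" has been discarded; the newest frame is "f5"
```

A bounded deque keeps exactly the frames of the target time period without explicit eviction logic.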
Step 204: perform motion blur detection on the image set to obtain a blur parameter set corresponding to the image set.
Motion blur refers to image blur caused by relative motion between the camera of the electronic device and the subject. Motion blur detection refers to a technique for detecting the degree of motion blur of an image. A blur parameter set refers to a set containing the blur parameters corresponding to the images; a blur parameter characterizes the blur degree of an image.
Step 206: determine, according to the blur parameter set, the dispersion of the blur degree of the first image relative to the blur degree of the second image.
The dispersion of the blur degree of the first image relative to the blur degree of the second image refers to the degree of difference between the blur degree of the first image and the blur degree of the second image. The dispersion is positively correlated with the degree of difference: the greater the difference, the greater the dispersion; the smaller the difference, the smaller the dispersion.
The blur parameter set includes the blur parameter of the first image and the blur parameter of each second image. The blur parameter of the first image characterizes the blur degree of the first image, and the blur parameter of a second image characterizes the blur degree of that second image.
The dispersion can be obtained by performing an operation on the blur parameter of the first image and the blur parameters of the second images. For example, a difference operation may be performed between the average of the blur parameters of all the second images and the blur parameter of the first image, and the absolute value of the result taken as the dispersion. As another example, a difference operation may be performed between the blur parameter of each second image and the blur parameter of the first image, and the average of the absolute values of the resulting differences taken as the dispersion. As yet another example, a ratio operation may be performed between the average of the blur parameters of all the second images and the blur parameter of the first image, and the resulting ratio taken as the dispersion. It should be noted that there are many ways of performing an operation on the blur parameter of the first image and the blur parameters of the second images to obtain the dispersion, which are not limited here.
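The three example operations above can be sketched in plain Python; the function names are illustrative, not from the patent:

```python
def dispersion_mean_diff(first_param, second_params):
    # |mean(second parameters) - first parameter|
    mean_second = sum(second_params) / len(second_params)
    return abs(mean_second - first_param)

def dispersion_diff_mean(first_param, second_params):
    # mean of |second parameter - first parameter| over all second images
    return sum(abs(p - first_param) for p in second_params) / len(second_params)

def dispersion_ratio(first_param, second_params):
    # ratio of mean(second parameters) to the first parameter
    mean_second = sum(second_params) / len(second_params)
    return mean_second / first_param

seconds = [0.2, 0.3, 0.4]   # blur parameters of the second images (example values)
first = 0.6                 # blur parameter of the first (current) image
d1 = dispersion_mean_diff(first, seconds)
d2 = dispersion_diff_mean(first, seconds)
d3 = dispersion_ratio(first, seconds)
```

All three variants grow (or shrink, for the ratio) as the current frame's blur departs from the recent frames', which is the property the method relies on.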
Step 208: determine a target exposure duration according to the dispersion.
Exposure refers to the process in which, during shooting, the shutter of the camera opens and closes momentarily so that a certain amount of light is projected through the lens onto the photosensitive material to form an image. Exposure duration refers to the length of time the shutter stays open to project light onto the photosensitive surface. It can be understood that the longer the exposure duration, the more light is projected onto the photosensitive material and the more information the captured image contains, but the blurrier the captured image; the shorter the exposure duration, the less light is projected and the less information the captured image contains, but the clearer the captured image.
Specifically, when the dispersion is large and the first image is blurrier than the second image, the exposure duration can be shortened to obtain a clearer target image. When the dispersion is small, indicating that the blur degree of the first image is close to that of the second image and that the electronic device is shooting stably, the exposure duration can be extended to obtain a target image containing more information.
Step 210: capture an image based on the target exposure duration to obtain a target image. The target image refers to the image captured based on the target exposure duration.
It can be understood that, from the start of exposure, the image sensor is exposed line by line in a progressive scan until all pixels have been exposed. When the subject moves at high speed or vibrates rapidly relative to the camera, shooting in rolling-shutter mode with an insufficient progressive-scan speed may produce "skew", "wobble", or "partial exposure" in the result. This phenomenon of rolling-shutter shooting is defined as the rolling-shutter effect. By performing motion blur detection on the first image captured at the current time and the at least one second image captured within the target time period to obtain the blur parameter set, determining the dispersion of the blur degree of the first image relative to that of the second image according to the blur parameter set, and shooting based on the target exposure duration determined according to the dispersion, the rolling-shutter effect during imaging of the image sensor can be suppressed, so that a clearer image can be captured.
With the above anti-shake method, an image set is obtained, the image set including a first image captured at the current time and at least one second image captured within a target time period; motion blur detection is performed on the image set to obtain the corresponding blur parameter set; and the dispersion of the first image relative to the second image is determined according to the blur parameter set. The dispersion indicates the degree of deviation of the blur degree of the first image from that of the second image, so a more accurate target exposure duration can be determined according to the dispersion, and shooting based on the target exposure duration yields a clearer target image, improving the accuracy of image stabilization.
In one embodiment, as shown in Fig. 3, the manner of performing motion blur detection on the image set to obtain the corresponding blur parameter set includes any of the following:
Step 302: obtain each frame image in the image set, calculate the image gradient of each frame image, and analyze the histogram distribution of the edges derived from the image gradient to obtain the blur parameter set. A blur parameter in the blur parameter set characterizes the blur degree of an image.
When an image is regarded as a two-dimensional discrete function, the image gradient is the derivative of that function. The edge portion of the image is obtained from the image gradient; the edge portion of each frame image is counted with a histogram distribution, and the histogram distribution is analyzed to obtain the blur parameter set.
It can be understood that when an image is blurry, its edge portion is also blurry, so the blur degree of an image can be determined more accurately from its edge portion. The blur parameter of each image is obtained according to its blur degree, and the blur parameter set is generated from the blur parameters of all the images.
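A minimal sketch of a gradient-based blur parameter: the patent analyzes a histogram of edge responses, but for illustration the mean gradient magnitude alone already separates sharp from flat content. The specific metric below (inverse of mean gradient magnitude) is an assumption for the example, not the patent's exact analysis:

```python
import numpy as np

def gradient_blur_parameter(image):
    """Blur parameter from image gradients: sharper images have stronger
    edges, so the mean gradient magnitude serves as a sharpness score,
    inverted into a blur score (illustrative metric, not the patent's)."""
    gy, gx = np.gradient(image.astype(float))   # discrete derivatives per axis
    magnitude = np.hypot(gx, gy)                # gradient magnitude per pixel
    sharpness = magnitude.mean()
    return 1.0 / (1.0 + sharpness)              # larger value = blurrier

# A hard vertical edge vs. a flat patch: the flat patch scores as blurrier
edge = np.zeros((8, 8)); edge[:, 4:] = 255.0
flat = np.full((8, 8), 128.0)
```

In a full implementation the per-pixel magnitudes would be binned into a histogram and its shape analyzed, as the text describes.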
Step 304: obtain each frame image in the image set, perform a Laplace transform on each frame image to obtain intermediate values, and compute the mean square error of the intermediate values to obtain the blur parameter set of the images.
The Laplace transform is a linear transformation that converts a function of a real parameter t (t ≥ 0) into a function of a complex parameter s. The mean square error (mean-square error, MSE) is a measure of the degree of difference between an estimator and the estimated quantity.
By performing the Laplace transform on each frame image to obtain intermediate values, and computing the mean square error of the intermediate values, a more accurate blur parameter can be obtained for each frame image.
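One common concrete realization of this "Laplacian response plus mean square error" idea is the variance of a 3x3 Laplacian filter's output (the mean squared deviation of the intermediate values); the sketch below assumes that reading and is not the patent's confirmed computation:

```python
import numpy as np

LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def laplacian_response(image):
    """Valid-mode 2-D convolution with the Laplacian kernel; the result is
    the set of 'intermediate values'."""
    img = image.astype(float)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = (LAPLACIAN * img[i:i + 3, j:j + 3]).sum()
    return out

def laplacian_blur_parameter(image):
    resp = laplacian_response(image)
    variance = ((resp - resp.mean()) ** 2).mean()  # mean squared deviation
    return 1.0 / (1.0 + variance)                  # larger value = blurrier

edge = np.zeros((8, 8)); edge[:, 4:] = 255.0   # sharp step edge
flat = np.full((8, 8), 128.0)                  # no detail at all
```

A flat patch gives a zero-variance response and thus the maximum blur score; any sharp structure raises the variance and lowers the score.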
Step 306: obtain each frame image in the image set, train an offline model using machine learning, and detect the blur parameter set of the image set using the trained offline model.
Machine learning refers to an electronic device simulating or realizing human learning behavior to acquire new knowledge or skills and reorganize the existing knowledge structure so as to continuously improve its own performance. Machine learning includes, for example, deep learning and supervised learning.
In another embodiment, an offline model may be trained in advance using machine learning. By detecting each frame image with the trained offline model, a more accurate blur parameter can be obtained for each frame image, and the blur parameter set is generated from these blur parameters.
In one embodiment, the blur parameter set includes a first parameter and second parameters, where the first parameter characterizes the blur degree of the first image and a second parameter characterizes the blur degree of a second image. Determining, according to the blur parameter set, the dispersion of the blur degree of the first image relative to the second image includes: determining the average of the second parameters; performing a difference operation between the average of the second parameters and the first parameter to obtain a difference; and taking the absolute value of the difference as the dispersion of the first image relative to the second images.
The image set contains the first image captured at the current time and at least one second image captured within the target time period. Correspondingly, performing motion blur detection on the image set yields a blur parameter set containing the first parameter corresponding to the first image and the second parameter corresponding to each second image; one second image corresponds to one second parameter.
When the image set includes the first image and one second image, the average of the second parameters is the value of the second parameter of that second image. When the image set includes the first image and at least two second images, the average of the second parameters can be calculated according to the following formula:
α_n = (1/n) · Σ_{i=1}^{n} α_{t−i}
where α_n is the average of the second parameters, n is the number of second parameters, and α_{t−i} is each second parameter.
In one embodiment, the first parameter may be subtracted from the average of the second parameters to obtain the difference. In another embodiment, the average of the second parameters may be subtracted from the first parameter to obtain the difference.
The dispersion of first image and each second image can be calculated according to the following formula:
Wherein, αnIt is the average value of the second parameter, αtIt is the first parameter, λ is dispersion.
With the above anti-shake method, the average of the second parameters is determined, a difference operation is performed between the average of the second parameters and the first parameter to obtain a difference, and the absolute value of the difference is taken as the dispersion of the first image relative to the second images, so that the magnitude of the dispersion can be determined more accurately.
In another embodiment, a difference operation may be performed between the first parameter and each second parameter separately to obtain the respective differences; the average of the absolute values of the differences is determined; and this average is taken as the dispersion of the first image relative to the second images.
In the difference operation between the first parameter and each second parameter, the second parameter may be subtracted from the first parameter, or the first parameter may be subtracted from the second parameter.
The dispersion of the first image relative to the second images can be calculated according to the following formula:
λ = (1/n) · Σ_{i=1}^{n} |α_t − α_{t−i}|
where α_t is the first parameter, α_{t−i} is each second parameter, and λ is the dispersion.
In other embodiments, the difference operation may also be performed between the first parameter and the maximum or minimum of the second parameters, or between the first parameter and the median or a weighted average of the second parameters, without being limited thereto.
In one embodiment, determining the target exposure duration according to the dispersion includes: obtaining a first adjustment factor according to the dispersion; obtaining a reference exposure duration; and determining the target exposure duration according to the first adjustment factor and the reference exposure duration.
The first adjustment factor refers to a factor for adjusting the target exposure duration. The reference exposure duration refers to a preset exposure duration of the camera; it may be a preset duration of the camera's automatic exposure system, or an exposure duration set by the user as needed, without being limited thereto.
The first adjustment factor may be a value set by the user, or it may be adjusted according to the dispersion. For example, the first adjustment factor may be 0.8. As another example, the first adjustment factor may be obtained from the functional relationship θ = 2^λ, where θ is the first adjustment factor and λ is the dispersion.
In one embodiment, obtaining the first adjustment factor according to the dispersion includes: when the dispersion is greater than or equal to a dispersion threshold, obtaining a first sub-adjustment factor, where the first sub-adjustment factor is equal to the value of the dispersion. Determining the target exposure duration according to the first adjustment factor and the reference exposure duration includes: determining a first target exposure duration according to the first sub-adjustment factor and the reference exposure duration, where the first target exposure duration is less than the reference exposure duration.
First sub- Dynamic gene refers to the Dynamic gene when dispersion is greater than or equal to discrete threshold values.
When dispersion is greater than or equal to discrete threshold values, the fog-level of the first image and the fuzzy journey of the second image are indicated
The difference degree of degree is larger, then using the value of dispersion as the first sub- Dynamic gene, i.e., the first sub- Dynamic gene and dispersion
It is worth identical, determines first object exposure time further according to the first sub- Dynamic gene and reference exposure duration.
The first target exposure duration is shorter than the reference exposure duration, i.e., shorter than the exposure duration set by the user. A shorter exposure duration yields a sharper captured image, which improves the accuracy of anti-shake.
The first target exposure duration can be calculated according to the following formula: T′_v = T_v·e^(−kθ), where T′_v is the first target exposure duration, T_v is the reference exposure duration, k is a scaling coefficient, θ is the first sub-adjustment factor, and λ is the dispersion (here θ = λ).
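As a concrete illustration of shortening the exposure with the sub-adjustment factor: the exact formula is not legible in the source, so the exponential attenuation form below is an assumption, chosen so that θ = 0 leaves the reference duration unchanged and a larger θ (larger dispersion) shortens the exposure.

```python
import math

def target_exposure(reference_s, theta, k=1.0):
    """Return T'_v = T_v * exp(-k * theta).

    reference_s: reference exposure duration in seconds (T_v).
    theta: sub-adjustment factor (equal to the dispersion above threshold, 0 below).
    k: scaling coefficient. Result is <= reference_s whenever theta >= 0.
    """
    return reference_s * math.exp(-k * theta)
```

With θ = 0 this also reproduces the second-target case below, where the target exposure duration equals the reference duration.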
In one embodiment, the above method further comprises obtaining an exposure value from the automatic exposure equation E_v = A_v + T_v = S_v + B_v, where E_v denotes the exposure value, A_v the aperture value, T_v the exposure-time value, S_v the sensitivity of the camera, and B_v the average brightness.
The aperture value A_v of the camera of the electronic device is obtained, the first target exposure duration is taken as T_v, and the exposure value E_v is then calculated from E_v = A_v + T_v.
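The automatic-exposure relation quoted above is additive because all quantities are expressed in APEX (log2) units, so the arithmetic is trivial; a sketch:

```python
def exposure_value(av, tv):
    """Ev from aperture value Av and time value Tv: Ev = Av + Tv."""
    return av + tv

def brightness_value(sv, ev):
    """Bv implied by sensitivity value Sv, since Ev = Sv + Bv."""
    return ev - sv
```

For example, with A_v = 5 and T_v = 7 the exposure value is E_v = 12, and a sensitivity S_v = 5 implies an average brightness B_v = 7.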
In one embodiment, the above method further comprises: when the dispersion is less than the discrete threshold, obtaining a second sub-adjustment factor. Determining the target exposure duration according to the first adjustment factor and the reference exposure duration then comprises: determining a second target exposure duration according to the second sub-adjustment factor and the reference exposure duration, the second target exposure duration being equal to the reference exposure duration.
The second sub-adjustment factor is the adjustment factor used when the dispersion is less than the discrete threshold. It may be a value preset by the user; for example, the second sub-adjustment factor may be 0.
A dispersion less than the discrete threshold indicates that the blur degrees of the first image and the second images differ little, i.e., the electronic device was relatively stable when the images were captured.
The second target exposure duration can be calculated according to the same formula, T′_v = T_v·e^(−kθ), where T′_v is the second target exposure duration, T_v is the reference exposure duration, k is the scaling coefficient, and θ is the second sub-adjustment factor; with θ = 0, T′_v equals T_v.
In one embodiment, the above anti-fluttering method further comprises: using the target image as the first image captured at the current moment, and returning to the step of obtaining the image set.
Using the target image as the first image captured at the current moment and returning to the step of obtaining the image set means that the obtained image set contains that target image. By repeating this cycle, each target image is itself used as the first image for the next round of anti-shake processing, so later target images are captured on the basis of images that have already been stabilized. The dispersion of each captured target image is therefore smaller, improving the accuracy of anti-shake.
In one embodiment, as shown in Fig. 4, the above method further comprises:
Step 402: obtaining first attitude data collected by an attitude sensor.
An attitude sensor is a high-performance three-dimensional motion attitude measurement system based on MEMS (Micro-Electro-Mechanical System) technology. The attitude sensor may include motion sensors such as a three-axis gyroscope, a three-axis accelerometer, and a three-axis electronic compass. The first attitude data collected by the attitude sensor may include angular velocity data, acceleration data, orientation data, and the like, and can indicate the degree of jitter of the electronic device.
In physics, angular velocity is a vector describing, for a rotating object, the angle turned per unit time and the direction of rotation. Angular velocity data here describes the angle turned by the electronic device per unit time and its direction of rotation. Larger angular velocity data indicates a larger rotation angle of the electronic device, and thus greater jitter.
Acceleration is the ratio of a change in velocity to the time over which the change occurs; it is the physical quantity describing how quickly the speed of the electronic device changes. A larger acceleration indicates a faster change in velocity. The acceleration data may be angular acceleration data or translational acceleration data, without limitation.
In one embodiment, the first attitude data collected by the attitude sensor within a target period may be obtained. The target period may be a period before the current moment; for example, if the current moment is 13:40:36, the target period may be (13:40:33, 13:40:36). The target period may also include the current moment, e.g., (13:40:33, 13:40:37), or lie after the current moment, e.g., (13:40:37, 13:40:40). The first attitude data collected in the target period indicates the degree of jitter of the electronic device.
In one embodiment, obtaining the first attitude data collected by the attitude sensor comprises: storing the first attitude data collected by the attitude sensor into a second queue, and obtaining all first attitude data from the second queue.
The second queue may be a first-in-first-out queue, i.e., the first attitude data stored into the second queue first is taken out first. Storing the collected first attitude data into the second queue preserves it and makes it convenient to process.
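A minimal sketch of such a first-in-first-out "second queue" for attitude samples (the names and the capacity are illustrative, not from the source): a bounded deque stores samples as they arrive, and `popleft()` returns the earliest-stored sample first, matching the FIFO behavior described above.

```python
from collections import deque

attitude_queue = deque(maxlen=256)  # bounded buffer of recent samples

def store_sample(sample):
    """Append one attitude sample (e.g. a gyro reading) to the queue."""
    attitude_queue.append(sample)

def drain_samples():
    """Remove and return all stored samples in FIFO order."""
    out = []
    while attitude_queue:
        out.append(attitude_queue.popleft())
    return out
```

After draining, the queue is empty and ready for the next batch of sensor readings.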
Step 404: performing image transformation on the target image based on the first attitude data, to obtain a video frame image.
Image transformation refers to a two-dimensional linear invertible transform of the target image, in which the image is represented by an orthogonal function or orthogonal matrix. In general, the target image is treated as the spatial-domain image and the transformed image, i.e., the video frame image, as the transform-domain image; the transform-domain image can be inverse-transformed back into a spatial-domain image. The video frame image is the image obtained after the transformation and used to generate the target video.
After the target image is transformed based on the first attitude data, the dispersion between the resulting video frame images is lower; that is, the target image is corrected based on the first attitude data, yielding a more accurate video frame image.
Step 406: generating a target video from the video frame images.
The above anti-fluttering method obtains the first attitude data collected by the attitude sensor and transforms the target image based on it, yielding more accurate video frame images with lower dispersion between them. The picture of the target video generated from those video frame images is therefore more stable, improving the accuracy of anti-shake.
In one embodiment, first attitude data collected by the attitude sensor in a first period and in a second period is obtained; the first period is earlier than the current moment and the second period later than it.
For example, if the current moment is 13:40:36, the first period may be (13:40:33, 13:40:36) and the second period (13:40:36, 13:40:39). Collecting the attitude data in the periods before and after the current moment represents the jitter of the electronic device at the current moment more accurately, so the image is processed more accurately and a more accurate target video is obtained.
In one embodiment, as shown in Fig. 5, performing image transformation on the target image based on the first attitude data to obtain a video frame image comprises:
Step 502: filtering the first attitude data to obtain second attitude data.
Filtering is an operation that removes specific frequency bands from the first attitude data; it may be at least one of Gaussian filtering, smoothing filtering, mean filtering, and the like. Filtering the first attitude data removes noise from it and yields more accurate second attitude data, which represents the jitter of the electronic device more accurately.
Step 504: obtaining a second adjustment factor.
The second adjustment factor may be a value set by the user, or may be derived from the first adjustment factor, without limitation.
In one embodiment, the second adjustment factor can be calculated according to the following formula:
β = e^(−lθ)
where β is the second adjustment factor, l is a scaling factor (a constant), and θ is the first adjustment factor.
When the dispersion is greater than or equal to the discrete threshold, the first adjustment factor θ is the first sub-adjustment factor, whose value equals the dispersion λ; the second adjustment factor is then β = e^(−lλ).
When the dispersion is less than the discrete threshold, the first adjustment factor θ is the second sub-adjustment factor; when the second sub-adjustment factor is 0, β = 1.
Step 506: obtaining target attitude data according to the first attitude data, the second attitude data, and the second adjustment factor.
The target attitude data represents a rotation matrix with which image transformation can be performed on the target image, yielding a more accurate video frame image.
In one embodiment, the target attitude data can be calculated according to the following formula:
R_t = β·R′ + (1 − β)·R
where R_t is the target attitude data, β is the second adjustment factor, R′ is the second attitude data, and R is the first attitude data.
When the dispersion is greater than or equal to the discrete threshold, β = e^(−lλ), so R_t = e^(−lλ)·R′ + (1 − e^(−lλ))·R. When the dispersion is less than the discrete threshold, β = 1, so R_t = R′.
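Steps 502 to 506 can be sketched as follows. Scalars stand in for the rotation matrices of the source so the blend is easy to follow; a real implementation would blend rotations properly (e.g. via quaternion interpolation), which this sketch does not attempt. The function names are illustrative.

```python
import math

def second_adjustment(theta, l=1.0):
    """beta = exp(-l * theta); theta is the first adjustment factor."""
    return math.exp(-l * theta)

def blend_attitude(r_raw, r_filtered, theta, l=1.0):
    """R_t = beta * R' + (1 - beta) * R.

    r_raw: first attitude data R (unfiltered sensor reading).
    r_filtered: second attitude data R' (filtered reading).
    """
    beta = second_adjustment(theta, l)
    return beta * r_filtered + (1.0 - beta) * r_raw
```

With θ = 0 (stable capture) the result is exactly the filtered attitude R′; as θ grows with the dispersion, the blend shifts toward the raw attitude R.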
In another embodiment, a first weight factor and a second weight factor may be obtained according to the second adjustment factor; the first weight factor characterizes the weight of the first attitude data and the second weight factor the weight of the second attitude data. The target attitude data is then obtained from the first attitude data, the second attitude data, the first weight factor, and the second weight factor.
For example, the first weight factor may be positively correlated with the second adjustment factor and the second weight factor negatively correlated with it: the first weight factor a may be calculated as a = 2β and the second weight factor b as b = 2/β, and the target attitude data is then R_t = b·R′ + a·R.
Step 508: performing image transformation on the target image according to the target attitude data, to obtain a video frame image.
The target attitude data represents a rotation matrix with which image transformation can be performed on the target image, yielding a more accurate video frame image.
In one embodiment, an EIS (Electronic Image Stabilization) system performs the image transformation on the target image based on the target attitude data. The target image is thereby corrected, yielding a more accurate video frame image, so that the picture of the generated target video is more stable and the accuracy of anti-shake is improved.
In one embodiment, as shown in Fig. 6, the first image captured by the image sensor 602 at the current moment and at least one second image captured in the target period are obtained and stored into a first queue 604. The first image at the current moment and the at least one second image captured in the target period are retrieved from the first queue 604, and step 606 is executed on the retrieved images: motion blur detection is performed on them to obtain the blur parameter of each image, including the first parameter of the first image and the second parameter corresponding to each second image. The dispersion of the blur degree of the first image relative to that of the second images is determined from the first parameter and the second parameters. A first adjustment factor is obtained from the dispersion, a reference exposure duration is obtained, a target exposure duration 608 is determined from the first adjustment factor and the reference exposure duration, and the target exposure duration 608 is sent to the image sensor 602. The image sensor 602 captures the target image based on the target exposure duration 608.
The first attitude data collected by the attitude sensor 610 is obtained and stored into a second queue 612. All first attitude data are retrieved from the second queue 612 and filtered to obtain second attitude data; the first attitude data and the second attitude data are then sent to the EIS system 614. The images in the first queue 604 and the dispersion obtained from the motion blur detection are also sent to the EIS system 614. The EIS system 614 obtains a second adjustment factor from the dispersion, obtains target attitude data from the first attitude data, the second attitude data and the second adjustment factor, and performs image transformation on the image according to the target attitude data to obtain video frame images. A target video 616 is generated from the video frame images.
It should be understood that, although the steps in the flowcharts of Fig. 2 to Fig. 5 are shown in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, the execution of these steps is not strictly ordered, and they may be executed in other orders. Moreover, at least some of the steps in Fig. 2 to Fig. 5 may comprise multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times; their execution order is not necessarily sequential, and they may be executed in turn or alternately with at least part of the sub-steps or stages of other steps.
Fig. 7 is a structural block diagram of an anti-shake apparatus of one embodiment. As shown in Fig. 7, an anti-shake apparatus 700 is provided, comprising an image set obtaining module 702, a motion blur detection module 704, a dispersion determining module 706, a target exposure duration determining module 708 and a target image obtaining module 710, wherein:
the image set obtaining module 702 is configured to obtain an image set, the image set including a first image captured at the current moment and at least one second image captured in a target period;
the motion blur detection module 704 is configured to perform motion blur detection on the image set, to obtain a fuzzy parameter set corresponding to the image set;
the dispersion determining module 706 is configured to determine, according to the fuzzy parameter set, the dispersion of the blur degree of the first image relative to the blur degree of the second image;
the target exposure duration determining module 708 is configured to determine a target exposure duration according to the dispersion; and
the target image obtaining module 710 is configured to capture based on the target exposure duration, obtaining a target image.
The above anti-shake apparatus obtains an image set including a first image captured at the current moment and at least one second image captured in a target period; performs motion blur detection on the image set to obtain its corresponding fuzzy parameter set; and determines, from the fuzzy parameter set, the dispersion of the first image relative to the second image. Since the dispersion represents how far the blur degree of the first image departs from that of the second image, a more accurate target exposure duration can be determined from it, and capturing with that duration yields a sharper target image, improving the accuracy of anti-shake.
In one embodiment, the motion blur detection module 704 is further configured to obtain each frame image in the image set, calculate the image gradient of each frame image, and analyze the histogram distribution of the edges of the image gradient to obtain the fuzzy parameter set; the fuzzy parameters in the fuzzy parameter set characterize the blur degree of the images.
In one embodiment, the motion blur detection module 704 is further configured to obtain each frame image in the image set, perform a Laplace transform on each frame image to obtain intermediate values, and compute the mean square error of the intermediate values to obtain the fuzzy parameter set of the images.
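An illustrative pure-Python stand-in for such a Laplacian-based blur measure (the patent does not specify its exact transform or error metric, so this uses the common variance-of-Laplacian heuristic): a sharp image has strong second derivatives, so the variance of the Laplacian response is higher, and lower values suggest more blur.

```python
def laplacian_variance(img):
    """img: 2D list of grayscale values. Returns the variance of the
    4-neighbor discrete Laplacian over interior pixels."""
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbor Laplacian kernel: up + down + left + right - 4*center
            lap = (img[y - 1][x] + img[y + 1][x] + img[y][x - 1]
                   + img[y][x + 1] - 4 * img[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)
```

A flat (featureless or heavily blurred) patch yields a variance near zero, while a high-contrast patch yields a large variance; the per-frame value can serve as the blur parameter of that frame.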
In one embodiment, the motion blur detection module 704 is further configured to obtain each frame image in the image set, train an offline model using machine learning, and detect the fuzzy parameter set of the image set with the trained offline model.
In one embodiment, the fuzzy parameter set includes a first parameter characterizing the blur degree of the first image and second parameters characterizing the blur degree of the second images. The dispersion determining module 706 is further configured to determine the average value of the second parameters, perform a difference operation between that average value and the first parameter to obtain a difference, and use the absolute value of the difference as the dispersion between the first image and the second images.
In one embodiment, the target exposure duration determining module 708 is further configured to obtain a first adjustment factor according to the dispersion, obtain a reference exposure duration, and determine the target exposure duration according to the first adjustment factor and the reference exposure duration.
In one embodiment, the target exposure duration determining module 708 is further configured to, when the dispersion is greater than or equal to a discrete threshold, obtain a first sub-adjustment factor identical in value to the dispersion, and determine, according to the first sub-adjustment factor and the reference exposure duration, a first target exposure duration shorter than the reference exposure duration.
In one embodiment, the target exposure duration determining module 708 is further configured to, when the dispersion is less than the discrete threshold, obtain a second sub-adjustment factor, and determine, according to the second sub-adjustment factor and the reference exposure duration, a second target exposure duration equal to the reference exposure duration.
In one embodiment, the anti-shake apparatus 700 further includes a loop module, configured to use the target image as the first image captured at the current moment and return to the step of obtaining the image set.
In one embodiment, the anti-shake apparatus 700 further includes a target video generation module, configured to obtain first attitude data collected by an attitude sensor, perform image transformation on the target image based on the first attitude data to obtain video frame images, and generate a target video from the video frame images.
In one embodiment, the target video generation module is further configured to filter the first attitude data to obtain second attitude data, obtain a second adjustment factor, obtain target attitude data according to the first attitude data, the second attitude data and the second adjustment factor, and perform image transformation on the target image according to the target attitude data to obtain video frame images.
The division into the above modules is only illustrative; in other embodiments, the anti-shake apparatus may be divided into different modules as required, to implement all or part of its functions.
Fig. 8 is a schematic diagram of the internal structure of an electronic device in one embodiment. As shown in Fig. 8, the electronic device includes a processor and a memory connected via a system bus. The processor provides computing and control capability to support the operation of the entire electronic device. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program can be executed by the processor to implement the anti-fluttering method provided in the following embodiments. The internal memory provides a cached running environment for the operating system and the computer program in the non-volatile storage medium. The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like.
Each module in the anti-shake apparatus provided in the embodiments of the present application may be implemented in the form of a computer program. The computer program may run on a terminal or a server, and the program modules it constitutes may be stored in the memory of the terminal or server. When the computer program is executed by a processor, the steps of the methods described in the embodiments of the present application are implemented.
Embodiments of the present application also provide a computer-readable storage medium: one or more non-volatile computer-readable storage media containing computer-executable instructions which, when executed by one or more processors, cause the processors to execute the steps of the anti-fluttering method.
A computer program product containing instructions is also provided which, when run on a computer, causes the computer to execute the anti-fluttering method.
Any reference to memory, storage, a database or other media used in the embodiments of the present application may include non-volatile and/or volatile memory. Suitable non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM), which serves as an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be noted that those of ordinary skill in the art can make various modifications and improvements without departing from the concept of the present application, and all of these fall within its scope of protection. The scope of protection of the present application patent shall therefore be subject to the appended claims.
Claims (12)
1. An anti-fluttering method, comprising:
obtaining an image set, the image set including a first image captured at a current moment and at least one second image captured in a target period;
performing motion blur detection on the image set, to obtain a fuzzy parameter set corresponding to the image set;
determining, according to the fuzzy parameter set, the dispersion of the blur degree of the first image relative to the blur degree of the second image;
determining a target exposure duration according to the dispersion; and
capturing based on the target exposure duration, to obtain a target image.
2. The method according to claim 1, wherein performing motion blur detection on the image set to obtain the corresponding fuzzy parameter set comprises any one of the following:
obtaining each frame image in the image set, calculating the image gradient of each frame image, and analyzing the histogram distribution of the edges of the image gradient to obtain the fuzzy parameter set, the fuzzy parameters in the fuzzy parameter set characterizing the blur degree of the images;
obtaining each frame image in the image set, performing a Laplace transform on each frame image to obtain intermediate values, and computing the mean square error of the intermediate values to obtain the fuzzy parameter set of the images; and
obtaining each frame image in the image set, training an offline model using machine learning, and detecting the fuzzy parameter set of the image set with the trained offline model.
3. The method according to claim 1, wherein the fuzzy parameter set includes a first parameter characterizing the blur degree of the first image and a second parameter characterizing the blur degree of the second image; and
determining, according to the fuzzy parameter set, the dispersion of the blur degree of the first image relative to the blur degree of the second image comprises:
determining the average value of the second parameters;
performing a difference operation between the average value of the second parameters and the first parameter, to obtain a difference; and
using the absolute value of the difference as the dispersion between the first image and the second images.
4. The method according to claim 1, wherein determining the target exposure duration according to the dispersion comprises:
obtaining a first adjustment factor according to the dispersion;
obtaining a reference exposure duration; and
determining the target exposure duration according to the first adjustment factor and the reference exposure duration.
5. The method according to claim 4, wherein obtaining the first adjustment factor according to the dispersion comprises:
when the dispersion is greater than or equal to a discrete threshold, obtaining a first sub-adjustment factor, the first sub-adjustment factor being identical in value to the dispersion; and
determining the target exposure duration according to the first adjustment factor and the reference exposure duration comprises:
determining a first target exposure duration according to the first sub-adjustment factor and the reference exposure duration, the first target exposure duration being shorter than the reference exposure duration.
6. The method according to claim 5, further comprising:
when the dispersion is less than the discrete threshold, obtaining a second sub-adjustment factor;
wherein determining the target exposure duration according to the first adjustment factor and the reference exposure duration comprises:
determining a second target exposure duration according to the second sub-adjustment factor and the reference exposure duration, the second target exposure duration being equal to the reference exposure duration.
7. The method according to claim 1, further comprising:
using the target image as the first image captured at the current moment, and returning to the step of obtaining the image set.
8. The method according to claim 1, further comprising:
obtaining first attitude data collected by an attitude sensor;
performing image transformation on the target image based on the first attitude data, to obtain video frame images; and
generating a target video from the video frame images.
9. The method according to claim 8, wherein performing image transformation on the target image based on the first attitude data to obtain video frame images comprises:
filtering the first attitude data to obtain second attitude data;
obtaining a second adjustment factor;
obtaining target attitude data according to the first attitude data, the second attitude data and the second adjustment factor; and
performing image transformation on the target image according to the target attitude data, to obtain the video frame images.
10. An anti-shake apparatus, comprising:
an image set obtaining module, configured to obtain an image set, the image set including a first image captured at a current moment and at least one second image captured in a target period;
a motion blur detection module, configured to perform motion blur detection on the image set, to obtain a fuzzy parameter set corresponding to the image set;
a dispersion determining module, configured to determine, according to the fuzzy parameter set, the dispersion of the blur degree of the first image relative to the blur degree of the second image;
a target exposure duration determining module, configured to determine a target exposure duration according to the dispersion; and
a target image obtaining module, configured to capture based on the target exposure duration, obtaining a target image.
11. An electronic device, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to execute the steps of the anti-fluttering method according to any one of claims 1 to 9.
12. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910790673.4A CN110493522A (en) | 2019-08-26 | 2019-08-26 | Anti-fluttering method and device, electronic equipment, computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110493522A true CN110493522A (en) | 2019-11-22 |
Family
ID=68554308
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910790673.4A Pending CN110493522A (en) | 2019-08-26 | 2019-08-26 | Anti-fluttering method and device, electronic equipment, computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110493522A (en) |
- 2019-08-26: CN application CN201910790673.4A filed, published as CN110493522A (status: Pending)
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102457675A (en) * | 2010-10-27 | 2012-05-16 | 展讯通信(上海)有限公司 | Image shooting anti-shaking manner for handheld camera equipment |
CN104902142A (en) * | 2015-05-29 | 2015-09-09 | 华中科技大学 | Method for electronic image stabilization of video on mobile terminal |
CN105635559A (en) * | 2015-07-17 | 2016-06-01 | 宇龙计算机通信科技(深圳)有限公司 | Terminal shooting control method and device |
CN105611179A (en) * | 2016-03-28 | 2016-05-25 | 广东欧珀移动通信有限公司 | Multi-frame optimization method and device for handheld photographic shake prevention and mobile terminal |
CN106296688A (en) * | 2016-08-10 | 2017-01-04 | 武汉大学 | The image fog detection method estimated based on the overall situation and system |
CN107026978A (en) * | 2017-04-14 | 2017-08-08 | 珠海市魅族科技有限公司 | IMAQ control method and device |
CN108833801A (en) * | 2018-07-11 | 2018-11-16 | 深圳合纵视界技术有限公司 | Adaptive motion detection method based on image sequence |
Non-Patent Citations (1)
Title |
---|
HAN Jiuqiang, YANG Lei: "Image Transformation", in *Digital Image Processing Based on XAVIS Configuration Software* (《数字图像处理基于XAVIS组态软件》) *
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111193867A (en) * | 2020-01-08 | 2020-05-22 | Oppo广东移动通信有限公司 | Image processing method, image processor, photographing device and electronic equipment |
CN111193867B (en) * | 2020-01-08 | 2021-03-23 | Oppo广东移动通信有限公司 | Image processing method, image processor, photographing device and electronic equipment |
CN111263069A (en) * | 2020-02-24 | 2020-06-09 | Oppo广东移动通信有限公司 | Anti-shake parameter processing method and device, electronic equipment and readable storage medium |
CN111263069B (en) * | 2020-02-24 | 2021-08-03 | Oppo广东移动通信有限公司 | Anti-shake parameter processing method and device, electronic equipment and readable storage medium |
CN111372000A (en) * | 2020-03-17 | 2020-07-03 | Oppo广东移动通信有限公司 | Video anti-shake method and apparatus, electronic device, and computer-readable storage medium |
CN111372000B (en) * | 2020-03-17 | 2021-08-17 | Oppo广东移动通信有限公司 | Video anti-shake method and apparatus, electronic device, and computer-readable storage medium |
CN112215232A (en) * | 2020-10-10 | 2021-01-12 | 平安科技(深圳)有限公司 | Certificate verification method, device, equipment and storage medium |
WO2021151348A1 (en) * | 2020-10-10 | 2021-08-05 | 平安科技(深圳)有限公司 | Certificate verification method and apparatus, and device and storage medium |
CN112215232B (en) * | 2020-10-10 | 2023-10-24 | 平安科技(深圳)有限公司 | Certificate verification method, device, equipment and storage medium |
CN112788236A (en) * | 2020-12-31 | 2021-05-11 | 维沃移动通信有限公司 | Video frame processing method and device, electronic equipment and readable storage medium |
CN117880621A (en) * | 2024-01-19 | 2024-04-12 | 杭州峰景科技有限公司 | Electric welding operation monitoring method, equipment and storage medium based on Internet of things |
CN117880621B (en) * | 2024-01-19 | 2024-05-28 | 杭州峰景科技有限公司 | Electric welding operation monitoring method, equipment and storage medium based on Internet of things |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110493522A (en) | Anti-fluttering method and device, electronic equipment, computer readable storage medium | |
JP7371081B2 (en) | Night view photography methods, devices, electronic devices and storage media | |
CN110610465B (en) | Image correction method and device, electronic equipment and computer readable storage medium | |
CN103685913B (en) | The picture pick-up device of periodic variation conditions of exposure and the control method of picture pick-up device | |
EP3614661B1 (en) | Image processing method, image processing apparatus, electronic device and storage medium | |
EP3603049A1 (en) | Video stabilization | |
CN110536057A (en) | Image processing method and device, electronic equipment, computer readable storage medium | |
CN111246089A (en) | Jitter compensation method and apparatus, electronic device, computer-readable storage medium | |
KR20180101466A (en) | Depth information acquisition method and apparatus, and image acquisition device | |
CN109391767A (en) | Picture pick-up device and the method wherein executed | |
CN109598764A (en) | Camera calibration method and device, electronic equipment, computer readable storage medium | |
CN111246100B (en) | Anti-shake parameter calibration method and device and electronic equipment | |
CN111432118B (en) | Image anti-shake processing method and device, electronic equipment and storage medium | |
CN110278360A (en) | Image processing method and device, electronic equipment, computer readable storage medium | |
CN109712192A (en) | Camera module scaling method, device, electronic equipment and computer readable storage medium | |
CN113875219B (en) | Image processing method and device, electronic equipment and computer readable storage medium | |
CN110266966A (en) | Image generating method and device, electronic equipment, computer readable storage medium | |
CN110300263B (en) | Gyroscope processing method and device, electronic equipment and computer readable storage medium | |
CN109660718A (en) | Image processing method and device, electronic equipment, computer readable storage medium | |
CN110213498A (en) | Image generating method and device, electronic equipment, computer readable storage medium | |
CN110519513A (en) | Anti-fluttering method and device, electronic equipment, computer readable storage medium | |
CN109671028A (en) | Image processing method and device, electronic equipment, computer readable storage medium | |
US20220174217A1 (en) | Image processing method and device, electronic device, and computer-readable storage medium | |
CN110266950A (en) | Gyroscope treating method and apparatus, electronic equipment, computer readable storage medium | |
WO2016009199A2 (en) | Minimisation of blur in still image capture |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20191122 |