CN109544620A - Image processing method and device, computer readable storage medium and electronic equipment - Google Patents
Image processing method and device, computer-readable storage medium, and electronic equipment

- Publication number: CN109544620A (application CN201811291740.XA)
- Authority: CN (China)
- Prior art keywords: image, camera, jitter amount, target
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications

- G06T7/557 — Depth or shape recovery from multiple images from light fields, e.g. from plenoptic cameras (G — Physics; G06 — Computing; G06T — Image data processing or generation; G06T7/00 — Image analysis; G06T7/50 — Depth or shape recovery)
- G06T5/80 — Image enhancement or restoration (G06T5/00)
- G06T7/50 — Depth or shape recovery
Abstract

This application relates to an image processing method and device, a computer-readable storage medium, and electronic equipment. The image processing method includes: controlling a first camera to acquire a first image while synchronously controlling a second camera to acquire a second image; when the electronic equipment shakes, obtaining a first jitter amount with which the first camera acquired the first image and a second jitter amount with which the second camera acquired the second image; correcting the first image according to a first preset calibration function and the first jitter amount to obtain a first target image, and correcting the second image according to a second preset calibration function and the second jitter amount to obtain a second target image; and processing the first target image and the second target image. Based on a dual-camera dual-OIS system, the clarity of an image carrying depth information can be improved.
Description
Technical field

This application relates to the field of computer technology, and in particular to an image processing method and device, a computer-readable storage medium, and electronic equipment.
Background art

Optical image stabilization (OIS) is a widely accepted anti-shake technology. It mainly corrects "optical-axis offset" through a floating lens element: a gyroscope in the lens module detects small movements and passes the signal to a microprocessor, which immediately calculates the displacement that needs to be compensated; a compensating lens group then moves according to the jitter direction and displacement of the lens, effectively overcoming image blur caused by camera vibration.

However, if an electronic device with dual cameras shakes during shooting, both the first image and the second image that the dual cameras acquire synchronously will be offset, so that the fused target image is not clear enough.
Summary of the invention

Embodiments of the present application provide an image processing method and device, a computer-readable storage medium, and electronic equipment, which can correct the images acquired by dual cameras based on dual OIS respectively, improving the clarity of the images.

An image processing method, the method comprising:

controlling a first camera to acquire a first image, and synchronously controlling a second camera to acquire a second image, the second image being used to indicate depth information corresponding to the first image, wherein the first camera and the second camera each include an optical image stabilization system;

when the electronic equipment shakes, obtaining a first jitter amount with which the first camera acquired the first image and a second jitter amount with which the second camera acquired the second image;

correcting the first image according to a first preset calibration function and the first jitter amount to obtain a first target image, and correcting the second image according to a second preset calibration function and the second jitter amount to obtain a second target image; and

processing the first target image and the second target image.
An image processing apparatus, the apparatus comprising:

an image capture module, configured to control a first camera to acquire a first image and synchronously control a second camera to acquire a second image, the second image being used to indicate depth information corresponding to the first image, wherein the first camera and the second camera each include an optical image stabilization system;

a jitter obtaining module, configured to obtain, when the electronic equipment shakes, a first jitter amount with which the first camera acquired the first image and a second jitter amount with which the second camera acquired the second image;

an image correction module, configured to correct the first image according to a first preset calibration function and the first jitter amount to obtain a first target image, and to correct the second image according to a second preset calibration function and the second jitter amount to obtain a second target image; and

an image processing module, configured to process the first target image and the second target image.
A computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of the image processing method.

An electronic device including a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to execute the steps of the image processing method.
With the above image processing method and device, computer-readable storage medium, and electronic equipment, the first camera is controlled to acquire a first image while the second camera synchronously acquires a second image; when the electronic equipment shakes, the first jitter amount of the first camera and the second jitter amount of the second camera are obtained; the first image is corrected according to the first preset calibration function and the first jitter amount to obtain a first target image, and the second image is corrected according to the second preset calibration function and the second jitter amount to obtain a second target image; the two target images are then processed together. Based on a dual-camera dual-OIS system, the clarity of an image carrying depth information can be improved.
Brief description of the drawings

In order to explain the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the application; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a diagram of the application environment of the image processing method in one embodiment;

Fig. 2 is a flowchart of the image processing method in one embodiment;

Fig. 3 is a schematic diagram of the ranging principle of the second camera in one embodiment;

Fig. 4 is a flowchart of correcting the first image according to the first preset calibration function and the first jitter amount to obtain a first target image in one embodiment;

Fig. 5 is a flowchart of an image processing method in another embodiment;

Fig. 6 is a flowchart of correcting the second image according to the second preset calibration function and the second jitter amount to obtain a second target image in one embodiment;

Fig. 7 is a flowchart of processing the first target image and the second target image in one embodiment;

Fig. 8 is a flowchart of the image processing method in another embodiment;

Fig. 9 is a structural diagram of the image processing apparatus in one embodiment;

Fig. 10 is a schematic diagram of the image processing circuit in one embodiment.
Detailed description

In order to make the objects, technical solutions, and advantages of the application more clearly understood, the application is further elaborated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the application and are not intended to limit it.

It will be appreciated that the terms "first", "second", etc. used in this application can describe various elements, but these elements are not limited by the terms; the terms are only used to distinguish one element from another. For example, without departing from the scope of the application, the first image could be called the second image and, similarly, the second image could be called the first image. The first image and the second image are both images, but they are not the same image.
Fig. 1 is a diagram of the application environment of the image processing method in one embodiment. As shown in Fig. 1, two cameras can be mounted on the electronic equipment: a first camera 102 and a second camera 104. Specifically, the electronic equipment can shoot through the first camera 102 and the second camera 104, obtaining the first image acquired by the first camera 102 and synchronously obtaining the second image acquired by the second camera 104, where the second image is used to indicate depth information corresponding to the first image. When the electronic equipment shakes, the first jitter amount with which the first camera 102 acquired the first image and the second jitter amount with which the second camera 104 acquired the second image are obtained; the first image is corrected according to the first preset calibration function and the first jitter amount to obtain a first target image, and the second image is corrected according to the second preset calibration function and the second jitter amount to obtain a second target image; the first target image and the second target image are then processed.
Both the first camera 102 and the second camera 104 include an OIS (Optical Image Stabilization) system. Optical stabilization uses a special lens structure or the photosensitive element to minimize, to the greatest extent, the image instability caused by the user's shake. Specifically, when the gyroscope in the camera detects a small movement, it passes the signal to a microprocessor, which immediately calculates the displacement that needs to be compensated; the compensating lens group then compensates according to the jitter direction and displacement of the lens, effectively overcoming the image blur produced by camera shake.
The first camera and the second camera each include a lens, a voice coil motor, an infrared filter, an image sensor (sensor IC), a digital signal processor (DSP), and a PCB circuit board. The lens, usually composed of multiple glass elements, forms the image; if the lens has an OIS function, the lens is shifted relative to the image sensor when shake occurs, so that the image shift caused by hand tremor is compensated and cancelled out.
It can be understood that the above first camera and second camera can be applied in electronic equipment, which can be a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, a vehicle-mounted computer, a wearable device, a digital camera, or any other terminal device with a photographing or video function.
Fig. 2 is a flowchart of the image processing method in one embodiment. The image processing method in this embodiment includes steps 202-208.

Step 202: control the first camera to acquire a first image, and synchronously control the second camera to acquire a second image, the second image being used to indicate depth information corresponding to the first image.
Multiple cameras can be mounted on the electronic equipment, and images are obtained through the mounted cameras. Cameras can be divided by the kind of image they acquire into types such as laser cameras and visible-light cameras: a laser camera obtains the image formed by laser light projected onto an object, while a visible-light camera obtains the image formed by visible light reflected off an object. In the embodiments of the present application, the electronic equipment has at least two cameras, a first camera and a second camera; the two cameras are controlled to expose simultaneously, the first camera acquiring the first image and the second camera acquiring the second image. It can be understood that the first camera and the second camera acquire corresponding images of the same scene, and that the two cameras lie in the same plane.
In one embodiment, the first camera can be a visible-light camera and the second camera an infrared camera (IR camera); a projector corresponding to the second camera, which can be an infrared projector, can be mounted on the electronic equipment. The projector and the second camera lie in the same plane, and the line between them is the baseline. The projector emits structured light, projecting a speckle pattern onto the surface of the target object; the second camera photographs the speckle pattern reflected by the target object, obtaining the second image (the target speckle pattern). The target speckle pattern is matched against a reference speckle pattern obtained in advance to find same-name speckle points in the two patterns; from the target disparity value between two same-name speckle points, together with the baseline length and the focal length of the second camera, the depth Z of the target point can be calculated, i.e. the distance from the camera position to the photographed object. That is, the second image is used to generate the depth information corresponding to the first image.
As shown in Fig. 3, x1 is the coordinate of the target point in the target speckle pattern, x0 is the coordinate of the corresponding same-name speckle point in the reference speckle pattern, and the difference between the two is the disparity d. A speckle point of the target speckle pattern and the corresponding speckle point of the reference speckle pattern are called same-name speckle points; the corresponding same-name speckle points can be obtained by image matching between the target speckle pattern and the reference speckle pattern. Alternatively, the patterns in the speckle image can be coded so that each pattern in the speckle image has a unique number, and each pattern in the reference speckle pattern likewise has a unique number. After the target speckle pattern is captured, each uniquely coded pattern is located in it, and its number is used to look up the corresponding pattern in the reference speckle pattern directly in a table.
The depth of the target point can be calculated according to formula (1), which, from the quantities defined here, can be written as

Z = (b · f · Z0) / (b · f + Z0 · d)    formula (1)

where b is the baseline length between the projector 222 and the camera 224, f is the focal length of the camera, and Z0 is the shooting distance of the reference speckle pattern. For d = 0 the target point lies on the reference plane at Z0, and a positive disparity d gives a point closer than the reference plane.
Depth calculation precision is mainly affected by the optical distortion of the camera, and the amount of lens distortion varies with field position. The target speckle and the corresponding reference speckle appear at different field positions and therefore suffer different amounts of distortion. To improve the depth calculation precision of the structured-light module, the distortion needs to be corrected, yielding the reference disparity d'.
x0 = x'0 + Δx0

x1 = x'1 + Δx1

d = x1 − x0 = d' + (Δx1 − Δx0)    formula (2)

In formula (2), x'1 is the calibrated coordinate of the target point in the target speckle pattern and x'0 is the calibrated coordinate of the corresponding same-name speckle point in the reference speckle pattern, so that d' = x'1 − x'0.
By performing distortion correction on both the target speckle pattern and the reference speckle pattern, the influence of relative distortion is eliminated; doing the speckle matching with the distortion-corrected target and reference speckle patterns improves the depth calculation precision.
Step 204: when the electronic equipment shakes, obtain the first jitter amount with which the first camera acquired the first image and the second jitter amount with which the second camera acquired the second image.

The electronic equipment includes a gyro sensor for detecting whether the first camera and the second camera shake. When the angular velocity information acquired by the gyro sensor changes, the first camera and the second camera can be considered to have shaken; the first jitter amount of the first camera and the second jitter amount of the second camera can then be obtained.

It should be noted that when the first camera and the second camera are mounted on the same base, the first jitter amount of the first camera is identical to the second jitter amount of the second camera. When the first camera and the second camera are mounted on different bases, the two jitter amounts may be the same or different.

Optionally, whether the first camera and the second camera shake can also be detected from the readings of the gyro sensor and/or acceleration sensor already present in the electronic equipment; when the two cameras shake, the first jitter amount and the second jitter amount can be obtained.
Further, the jitter amount can be expressed through the angular velocity information acquired by the gyroscope; jitter amounts and angular velocity samples correspond one to one, each angular velocity sample yielding one jitter amount. While the first camera acquires one frame of the first image, multiple angular velocity samples acquired by the gyro sensor can be obtained synchronously; correspondingly, multiple angular velocity samples can also be obtained synchronously while the second camera acquires one frame of the second image. The acquisition frequency of the gyro sensor is higher than the frame rate at which the first camera and the second camera acquire images. For example, if the first camera acquires the first image at 30 Hz while the gyro sensor acquires angular velocity at 200 Hz, the time taken to acquire one frame corresponds, in the timing, to 6-7 angular velocity samples.

The jitter amount can be understood as the angle obtained by integrating the angular velocity, with the integration time related to the frequency at which the gyro sensor acquires angular velocity. Since jitter amounts and angular velocity samples correspond one to one, the 6-7 angular velocity samples acquired correspond to 6-7 jitter amounts.
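The integration step described above can be sketched as follows; the function name and the choice of simple rectangular integration are assumptions for illustration:

```python
def jitter_amounts(angular_velocities, gyro_hz=200.0):
    """Integrate the gyro's angular-velocity samples (rad/s) into
    per-sample jitter angles (rad). The integration step dt is set by
    the gyro sampling frequency, and there is one jitter amount per
    angular-velocity sample: at a 30 Hz frame rate and a 200 Hz gyro
    rate, each frame spans 6-7 samples and hence 6-7 jitter amounts."""
    dt = 1.0 / gyro_hz
    angle, amounts = 0.0, []
    for w in angular_velocities:
        angle += w * dt          # rectangular (Euler) integration
        amounts.append(angle)
    return amounts
```

Note that the running angle is kept across samples, so an equal and opposite angular velocity cancels an earlier one, returning the accumulated jitter toward zero.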
Step 206: correct the first image according to the first preset calibration function and the first jitter amount to obtain a first target image, and correct the second image according to the second preset calibration function and the second jitter amount to obtain a second target image.

When the first camera shakes, the lens of the first camera also moves correspondingly; the vector of this movement is called the lens offset. That is, the first lens offset of the lens in the first camera can be obtained from the first jitter amount. The unit of the first lens offset is a code, while the unit of the image shift amount is a pixel; based on the first preset calibration function, the first lens offset is converted into an image shift amount.

The first preset calibration function can be obtained through a specific calibration procedure and can be a quadratic function of one variable, a quadratic function of two variables, or a higher-order function of two variables. The offsets of the lens along the x-axis and the y-axis in the XY plane are substituted into the first preset calibration function, and the corresponding image shift amount d1 is obtained by calculation. Since acquiring one frame of the image can correspond to multiple lens offsets, multiple image shift amounts can correspondingly be obtained according to the first preset calibration function.
The first image can then be corrected according to the image shift amount to obtain the first target image. For example, the electronic equipment can use compensation policies such as frame-by-frame, block-by-block, line-by-line, or interlaced compensation. Frame-by-frame compensation applies one uniform image shift amount to the whole region of each frame; block-by-block, line-by-line, or interlaced compensation partitions the same frame into different regions, i.e., different regions can be compensated with different image shift amounts.
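The frame-by-frame and block-by-block policies can be illustrated roughly as follows. This is a simplified sketch, not the patent's implementation: np.roll stands in for a real resampling step and wraps pixels at the border, which a production corrector would handle by cropping or padding:

```python
import numpy as np

def compensate_frame(image, shift):
    """Frame-by-frame compensation: one image shift amount (dy, dx),
    in pixels, applied uniformly to the whole frame."""
    dy, dx = shift
    return np.roll(image, (-dy, -dx), axis=(0, 1))

def compensate_blocks(image, row_shifts, n_blocks):
    """Block-by-block compensation: the frame is split into n_blocks
    horizontal bands and each band gets its own (here x-only) shift,
    so different regions are compensated with different amounts."""
    out = image.copy()
    bands = np.array_split(np.arange(image.shape[0]), n_blocks)
    for rows, s in zip(bands, row_shifts):
        out[rows] = np.roll(image[rows], -s, axis=1)
    return out
```

The per-band variant matches the case where several image shift amounts are obtained during one frame (one per gyro/Hall sample), each covering the rows exposed around that sample.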
After the electronic equipment has corrected the first image by compensation, a clear first target image is obtained; at the same time, the first lens offset of the first camera can be eliminated, so that the first camera returns to its initial position before the shake.
In one embodiment, the electronic equipment can correct the second image according to the second jitter amount and the second preset calibration function to obtain the second target image. Specifically, the electronic equipment can obtain the second calibration function in advance; this function converts the second jitter amount into a correction amount, and the motor in the second camera is driven according to the correction amount to move the position of the IR lens element, compensating the second jitter amount and thereby correcting the second image to obtain the second target image. After the electronic equipment has corrected the second image by compensation, a clear second target image is obtained; at the same time, the second jitter amount of the second camera can be eliminated, so that the second camera returns to its initial position before the shake.
Step 208: process the first target image and the second target image.

After the electronic equipment obtains the first target image and the second target image, it can perform fusion processing on them to obtain a target image with depth information. Optionally, other processing can also be performed on the first target image and the second target image; the specific processing is not limited. For example, the electronic equipment can perform face recognition on the first target image and build a three-dimensional model of the recognized face according to the second target image, obtaining a 3D face model; it can also apply beautification to the faces in the first target image according to the depth information in the second target image.
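As one minimal illustration of the fusion step (the text deliberately leaves the processing open), the depth map recovered from the second target image can be stacked onto the corrected first target image to form an RGBD array; real fusion would additionally align the two views using the cameras' relative geometry:

```python
import numpy as np

def fuse_rgbd(first_target, depth_map):
    """Stack an H x W x 3 color image (first target image) with an
    H x W depth map (from the second target image) into an H x W x 4
    RGBD array. A sketch under the assumption that the two corrected
    images are already pixel-aligned."""
    assert first_target.shape[:2] == depth_map.shape
    return np.dstack([first_target, depth_map])
```

np.dstack promotes the 2-D depth map to H x W x 1 before concatenating along the channel axis.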
The above image processing method can be applied to electronic equipment with multiple cameras: the first camera is controlled to acquire a first image while the second camera synchronously acquires a second image; when the electronic equipment shakes, the first jitter amount with which the first camera acquired the first image and the second jitter amount with which the second camera acquired the second image are obtained; the first image is corrected according to the first preset calibration function and the first jitter amount to obtain a first target image, and the second image is corrected according to the second preset calibration function and the second jitter amount to obtain a second target image; the two target images are then processed together. Based on a dual-camera dual-OIS system, the difficulty of fusing the first target image and the second target image is reduced, improving fusion efficiency and the clarity of the target image.
Fig. 4 is a flowchart of correcting the first image according to the first preset calibration function and the first jitter amount to obtain a first target image in one embodiment. In this embodiment, the correction includes steps 402-406.
Step 402: obtain the first lens offset of the first camera according to the first jitter amount.

In one embodiment, a two-dimensional coordinate system can be established with the plane of the image sensor of the first camera as the XY plane; the position of its origin is not further limited in this application. The first lens offset can be understood as the vector displacement, in this two-dimensional coordinate system, between the current lens position after the shake and the initial lens position before the shake, i.e. the vector distance of the current position relative to the initial position before the shake. The initial position can be understood as the lens position at which the distance between the lens and the image sensor equals one focal length of the lens; the lens offset is the vector distance between the optical centers of the lens (a convex lens) before and after the movement.

Further, the electronic equipment can obtain the movement of the lens in the camera (the offset scale of the lens in the XY plane) based on a Hall sensor in the camera or on laser measurement; this movement is the lens offset. While recording the offset scale, the offset direction is recorded as well, and the lens offset p(xi, yj) is obtained from the distance and direction corresponding to each scale unit. The size of the Hall value acquired by the Hall sensor determines the size of the lens offset at the current moment; in an OIS system, the lens offset is on the order of microns.

The angular velocity samples acquired by the gyro sensor and the Hall values acquired by the Hall sensor correspond to each other in timing.
It should be noted that the lens offset can also be obtained synchronously while the camera is controlled to collect images; the sampling frequency of the Hall sensor is higher than the frequency at which the camera collects images. That is, while the camera collects one frame of an image, multiple lens offsets can be obtained synchronously. For example, if the camera collects images at 30 Hz while the Hall sensor samples Hall values at 200 Hz, then during the time of one frame, 6-7 Hall values, and accordingly multiple lens offsets, are collected.
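The rate relationship in the example above can be sketched as follows; the 30 Hz / 200 Hz figures come from the example, and the function name is purely illustrative:

```python
import math

# How many Hall-sensor samples fall within one camera frame when the Hall
# sensor runs at a higher frequency than the camera's frame rate.
def hall_samples_per_frame(camera_hz: float, hall_hz: float) -> tuple:
    """Return the (min, max) number of Hall samples per camera frame."""
    ratio = hall_hz / camera_hz          # e.g. 200 / 30 = 6.67 samples/frame
    return (math.floor(ratio), math.ceil(ratio))

print(hall_samples_per_frame(30.0, 200.0))  # (6, 7), matching "6-7 Hall values"
```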
Step 404, the image offset of the first image is obtained according to the first lens offset and a first preset calibration function.
The first preset calibration function is used to convert a lens offset into an image offset, where the lens offset is expressed in codes and the image offset in pixels; based on the first preset calibration function, the lens offset can be converted into the image offset.
The image offset can be understood as the displacement, within the same field of view during one image capture, of the same feature point before and after lens shake. For example, before shake, the lens of the imaging device is at a first position; it collects a first image and records the coordinate position in the XY plane of each pixel in the first image. When the imaging device shakes, the lens moves in the XY plane, that is, the lens collects a second image at a second position (the current position after the move) and records the coordinate position in the XY plane of each pixel in the second image. The offset of the second image relative to the first image can be called the image offset.
The first preset calibration function can be obtained through a specific calibration procedure, and can be a quadratic function of one variable, a quadratic function of two variables, or a higher-order function of two variables. The offset of the lens along the x-axis and the offset along the y-axis in the XY plane are substituted into the first preset calibration function, and the corresponding image offset d1 is obtained by calculation. Since collecting one frame of the first image can correspond to multiple lens offsets, multiple image offsets can correspondingly be obtained according to the first preset calibration function.
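A minimal sketch of such a calibration function, assuming the bivariate quadratic form given later in the description, f(x, y) = ax^2 + by^2 + cxy + dx + ey + f, with purely illustrative coefficients:

```python
# Evaluate an (assumed) bivariate quadratic calibration function that maps a
# lens offset in codes to an image offset in pixels.
def image_offset(x: float, y: float, coeffs: tuple) -> float:
    a, b, c, d, e, f = coeffs
    return a * x * x + b * y * y + c * x * y + d * x + e * y + f

# Hypothetical calibration coefficients determined offline.
coeffs = (0.0, 0.0, 0.0, 0.5, 0.5, 0.0)
print(image_offset(2.0, 4.0, coeffs))  # 3.0
```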
Step 406, the first image is corrected according to the image offset to obtain the first target image.
The first image is corrected according to a preset correction strategy and the image offset to obtain the first target image, where the preset correction strategies include frame-by-frame correction and block-by-block correction.
In one embodiment, when the electronic device collects one frame of the first image, multiple corresponding image offsets can be obtained. If the preset correction strategy is frame-by-frame correction, the minimum image offset, the image offset with the smallest derivative, or the image offset that differs least from the average jitter amount can be selected from the multiple image offsets as the target image offset, and all pixels of each frame of the image are corrected uniformly with it.
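Two of the selection rules named above can be sketched as follows; this is a toy illustration, and the rule names and data are assumptions rather than the patent's implementation:

```python
# Pick one target image offset from the offsets measured within a frame,
# either the minimum offset or the offset closest to the average.
def target_offset(offsets, rule="min"):
    if rule == "min":
        return min(offsets)
    if rule == "closest_to_mean":
        mean = sum(offsets) / len(offsets)
        return min(offsets, key=lambda o: abs(o - mean))
    raise ValueError("unknown rule: " + rule)

print(target_offset([3.0, 1.5, 2.5], "min"))              # 1.5
print(target_offset([3.0, 1.5, 2.5], "closest_to_mean"))  # 2.5
```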
In one embodiment, when the electronic device collects one frame of the first image, multiple corresponding image offsets can be obtained. For example, if six Hall values hall1-hall6 are collected while one frame of the first image is collected, each Hall value corresponds to one image offset, denoted biaspixel1-biaspixel6. If the preset correction strategy is block-by-block correction and the CMOS sensor scans 60 rows, block-wise correction can be performed: the 60 rows are divided into 6 blocks of 10 rows each, and biaspixel1-biaspixel6 are used to correct the 6 blocks of the image respectively. That is, the 10 rows of the first block are all compensated and corrected using biaspixel1 as the correction parameter, the 10 rows of the second block are compensated and corrected using biaspixel2 as the correction parameter, and so on.
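The block-by-block scheme can be sketched as a per-block horizontal shift. The array shapes and the use of np.roll are illustrative assumptions; a real corrector would resample image content rather than wrap it around:

```python
import numpy as np

def blockwise_correct(img: np.ndarray, offsets) -> np.ndarray:
    """Shift each block of rows by its own image offset (in whole pixels)."""
    rows = img.shape[0]
    block = rows // len(offsets)              # e.g. 60 rows / 6 offsets = 10
    out = np.empty_like(img)
    for i, off in enumerate(offsets):
        sl = slice(i * block, (i + 1) * block)
        out[sl] = np.roll(img[sl], -off, axis=1)  # compensate the measured shift
    return out

img = np.arange(60 * 8).reshape(60, 8)
corrected = blockwise_correct(img, [1, 2, 3, 2, 1, 0])
print(corrected.shape)  # (60, 8)
```

Row-by-row compensation is the special case where each block is a single row.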
In one embodiment, when the electronic device collects one frame of the first image, multiple corresponding image offsets can be obtained. For example, if six Hall values hall1-hall6 are collected while one frame of the first image is collected, each Hall value corresponds to one image offset, denoted biaspixel1-biaspixel6. If the preset correction strategy is block-by-block correction and the CMOS sensor scans 6 rows, biaspixel1 can be used to compensate the pixels of the 1st row, biaspixel2 the pixels of the 2nd row, biaspixel3 the pixels of the 3rd row, biaspixel4 the pixels of the 4th row, biaspixel5 the pixels of the 5th row, and biaspixel6 the pixels of the 6th row; that is, biaspixel1-biaspixel6 are used to compensate the pixels of rows 1-6 row by row, and so on, to complete the correction of the image.
Optionally, based on the six acquired image offsets biaspixel1-biaspixel6, one, two, or three image offsets can be selected arbitrarily to perform interlaced compensation on the pixels of the 1st, 3rd, and 5th rows, or on the pixels of the 2nd, 4th, and 6th rows, respectively, and so on, to complete the compensation of the image.
In this embodiment, the electronic device can adaptively apply different correction strategies to the image according to shooting parameters (for example, exposure, shutter speed, and other information), which can improve the clarity of the first image.
Fig. 5 is a flowchart of the image processing method in one embodiment. Specifically, before the image offset of the first image is obtained according to the first lens offset and the first preset calibration function, the method further includes steps 502-508. Among them:
Step 502, a motor is driven to move the lens of the first camera along a preset trajectory; the preset trajectory includes multiple displacement points.
A test target is fixed within the imaging range of the first camera, and the motor of the first camera is controlled to drive the lens of the first camera along the preset trajectory. The test target can be a CTF (Contrast Transfer Function) target, an SFR (Spatial Frequency Response) target, a DB target, or another custom target. The preset trajectory can be a circle, an ellipse, a rectangle, or another preset trajectory. Multiple displacement points are set on the preset trajectory, and the distance between two adjacent displacement points may or may not be the same. The location information of a displacement point can be represented by its coordinate information in the XY plane. For example, the location information of displacement point qi can be represented by the coordinate position qi(xi, yj) in the XY plane, that is, the first location information of a displacement point can be represented by the coordinates qi(xi, yj).
Step 504, when the lens moves to each displacement point, the image information of the test target is correspondingly collected.
While the motor of the first camera is driven to push the lens along the preset trajectory, the image information of the test target is correspondingly collected at each displacement point. Each displacement point corresponds to one image of the test target, and the image information can be understood as the location information of the pixels that constitute the image. For example, when the number of displacement points is six, the image information of six test-target images needs to be correspondingly collected.
Step 506, the first location information of each displacement point is correspondingly obtained, together with the image offset, relative to the initial position, of the same feature point in the image information collected at each displacement point.
The electronic device can select a feature point pi in the image information to obtain the second location information of the feature point pi; the second location information of the feature point pi can likewise be represented by the coordinates pi(Xi, Yj) in the XY plane.
The feature point pi can be the pixel corresponding to the photographed target object at the center of the image information, the pixel with the highest brightness, or another pixel with prominent meaning in the image information corresponding to the photographed target object; the specific location and definition of the feature point are not further limited here.
It should be noted that the photographed target object corresponding to the feature point is the same in the image information of all the test-target images; that is, the location information of the feature point differs between the image information collected at different displacement points, but the same feature point corresponds to the same photographed object.
In one embodiment, the displacement point q0(x0, y0) of the lens at the initial position can be taken as the origin. When the lens is at the initial position q0(x0, y0), the feature point in the image information of the collected test target can be denoted p0(X0, Y0). When the lens moves to displacement point q1(x1, y1), the feature point p1(X1, Y1) in the collected image information of the test target is correspondingly obtained, together with the image offset d1 of the feature point p1(X1, Y1) relative to the feature point p0(X0, Y0) obtained at the initial position q0(x0, y0); and so on, until, when the lens moves to displacement point q6(x6, y6), the feature point p6(X6, Y6) in the collected image information of the test target and the image offset d6 of the feature point p6(X6, Y6) relative to the feature point p0(X0, Y0) obtained at the initial position q0(x0, y0) are correspondingly obtained.
Step 508, the first location information and the image offsets are input into a preset offset transformation model to determine a preset offset transfer function with calibration coefficients, where the number of displacement points is associated with the number of calibration coefficients.
The preset calibration model can be a quadratic function model of one variable, a quadratic function model of two variables, or a higher-order function model of two variables; the preset calibration model is set up by means of learning.
The electronic device can input the acquired first location information of the displacement points, the second location information of the corresponding feature points, and the image offsets into the preset calibration model, determine each coefficient of the preset calibration model by analytic computation, and thereby obtain the preset calibration function with calibration coefficients. Correspondingly, the preset calibration function corresponding to the standard exposure information of each gear can be obtained.
It should be noted that the preset calibration model and the preset calibration function have the same expression: in the preset calibration model the calibration coefficients are unknowns, while in the preset calibration function the corresponding calibration coefficients are known values.
For example, when the preset calibration model is a quadratic function model of two variables, it can be expressed by the following formula:
f(ΔX, ΔY) = ax^2 + by^2 + cxy + dx + ey + f
In the formula, f(ΔX, ΔY) denotes the image offset, that is, the image offset of the same feature point at the current displacement point q(xi, yj) relative to the initial position q(x0, y0); the image offset is a scalar, namely the distance between the positions of the same feature point for the current displacement point q(xi, yj) and the initial position q(x0, y0). x denotes the coordinate of the displacement point along the horizontal x-axis; y denotes the coordinate of the displacement point along the vertical y-axis.
The quadratic function model of two variables contains six unknown coefficients a, b, c, d, e, f. The six acquired displacement points q1(x1, y1)-q6(x6, y6) and the image offsets d1-d6 corresponding to the six feature points p1(X1, Y1)-p6(X6, Y6) are respectively input into the quadratic function model of two variables, and a, b, c, d, e, f in the above equation can then be solved. Substituting the obtained coefficients a, b, c, d, e, f into the model yields the corresponding preset calibration function, where a, b, c, d, e, f are the calibration coefficients of the preset calibration function.
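Under the assumption that the six measurements are exact, solving for a, b, c, d, e, f is a 6x6 linear system. The displacement points and coefficients below are synthetic, for illustration only:

```python
import numpy as np

def solve_calibration(points: np.ndarray, offsets: np.ndarray) -> np.ndarray:
    """Solve f(x, y) = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f for [a..f]."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x**2, y**2, x * y, x, y, np.ones_like(x)])
    return np.linalg.solve(A, offsets)

# Synthetic data: offsets generated from known coefficients, then recovered.
true = np.array([0.1, 0.2, 0.0, 1.0, -0.5, 0.3])
pts = np.array([[1, 0], [0, 1], [1, 1], [2, 1], [1, 2], [3, 2]], dtype=float)
A = np.column_stack([pts[:, 0]**2, pts[:, 1]**2, pts[:, 0] * pts[:, 1],
                     pts[:, 0], pts[:, 1], np.ones(6)])
coeffs = solve_calibration(pts, A @ true)
print(np.allclose(coeffs, true))  # True
```

The six displacement points must not all lie on one conic, otherwise the system is singular.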
With the image processing method in this embodiment, the corresponding preset calibration function can be obtained according to the preset calibration model, the multiple displacement points, and the corresponding multiple image offsets. The preset calibration function can obtain the image offset accurately and efficiently directly from the lens offset; the calibration efficiency and precision are high, laying a good foundation for image correction.
Fig. 6 is a flowchart of correcting the second image according to the second preset calibration function and the second jitter amount to obtain the second target image in one embodiment. In one embodiment, correcting the second image according to the second preset calibration function and the second jitter amount to obtain the second target image includes steps 602-604. Among them:
Step 602, the correction amount of the second camera is determined according to the second jitter amount and the second preset calibration function.
The second preset calibration function can be pre-stored in the electronic device; this function is used to convert the jitter amount of the second camera into the correction amount that eliminates the jitter amount. The second preset calibration function can be a linear function, and inputting the acquired second jitter amount into the second preset calibration function determines the correction amount corresponding to the second jitter amount.
Step 604, the motor is driven to move according to the correction amount so as to correct the second image and obtain the second target image.
The electronic device can drive the motor of the second camera to move the lens according to the determined correction amount; the correction amount is opposite in direction to the second jitter amount, which eliminates the offset caused by the shake of the second camera and thereby realizes the correction of the second image, so that a clear second target image is obtained.
Fig. 7 is a flowchart of processing according to the first target image and the second target image in one embodiment. In one embodiment, processing according to the first target image and the second target image includes:
Step 702, the same feature points in the first target image and the second target image are obtained.
Step 704, fusion processing is performed on the first target image and the second target image according to the same feature points, to obtain a target image with depth information.
In one embodiment, the first target image is an RGB (Red Green Blue) image, and the second target image is a depth image, that is, an image or image channel containing information related to the distance from the viewpoint to the surfaces of the scene objects. A depth image is similar to a grayscale image, except that each pixel value of the depth image is the actual distance from the sensor to the object. The electronic device can extract the same feature points in the first target image and the second target image, and register the extracted same feature points so that there is a one-to-one correspondence between the same feature points in the RGB image and the depth image, thereby realizing the fusion processing and obtaining a target image with depth information. The target image can be understood as an RGB-D image (depth image), where RGB-D image = RGB three-channel color image + depth image.
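A minimal sketch of that final composition step, assuming the two target images are already registered so that pixels correspond one-to-one (shapes and values are toy data):

```python
import numpy as np

def fuse_rgbd(rgb: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Stack an (H, W, 3) RGB image with an (H, W) depth map into (H, W, 4) RGB-D."""
    assert rgb.shape[:2] == depth.shape, "inputs must already be registered"
    return np.dstack([rgb, depth])

rgb = np.zeros((4, 4, 3))
depth = np.ones((4, 4))
rgbd = fuse_rgbd(rgb, depth)
print(rgbd.shape)  # (4, 4, 4)
```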
With the image processing method in this embodiment, based on the dual-camera dual-OIS system, the first image collected by the first camera and the second image collected by the second camera can each be corrected, which can simplify the fusion of the first target image and the second target image and improve the fusion efficiency.
It should be noted that feature points can be extracted in this application with the following methods, for example: the SIFT (Scale-Invariant Feature Transform) feature point detection method or improved SIFT algorithms, or the Harris or SUSAN corner detection methods and their related improved algorithms; the feature point extraction methods involved in this application are not limited to the above examples.
In one embodiment, processing according to the first target image and the second target image may also include: identifying the target object in the first target image; obtaining the target depth information corresponding to the target object according to the second target image; and processing the target object according to the target depth information.
In the embodiments provided by the present application, the method for identifying the target object is not limited here. For example, the target object can be a face, and the face in the first target image can then be identified by a face detection algorithm. The target object can also be a building, a plant, an animal, or the like, and can be identified by means of artificial intelligence.
After the target depth information corresponding to the target object is obtained, the target object can be processed according to the target depth information. For example, three-dimensional modeling can be performed on the target object according to the target depth information, or beautification processing can be applied to the target object according to the target depth information; the specific processing method is not limited here.
Fig. 8 is a flowchart of the image processing method in one embodiment. In one embodiment, the image processing method includes steps 802-818. Among them:
Step 802, the first camera is controlled to collect the first image, and the second camera is synchronously controlled to collect the second image.
Step 804, when the electronic device shakes, the first jitter amount with which the first camera collects the first image and the second jitter amount with which the second camera collects the second image are obtained.
Step 806, the first lens offset of the first camera is obtained according to the first jitter amount.
Step 808, the image offset of the first image is obtained according to the first lens offset and the first preset calibration function.
Step 810, the first image is corrected according to the image offset to obtain the first target image.
Step 812, the correction amount of the second camera is determined according to the second jitter amount and the second preset calibration function.
Step 814, the motor is driven to move according to the correction amount so as to correct the second image and obtain the second target image.
Step 816, the same feature points in the first target image and the second target image are obtained.
Step 818, fusion processing is performed on the first target image and the second target image according to the same feature points, to obtain a target image with depth information.
With the above image processing method, based on the OIS system of the first camera, the image offset of the first image can be obtained according to the first lens offset, and the first image can then be corrected according to the image offset to obtain a clear first target image. Meanwhile, based on the OIS system of the second camera, the correction amount can be determined according to the second jitter amount of the second lens, and the second image can then be corrected according to the correction amount to obtain a clear second target image. While the first image and the second image are corrected, the offsets of the first camera and the second camera caused by the shake of the electronic device can be eliminated, so that the first camera and the second camera return to the initial position (the position before shake); thus intrinsic parameters such as the distance between the first camera and the second camera remain unchanged, which can simplify the fusion of the first target image and the second target image, improve the fusion efficiency, and improve the clarity of the target image.
It should be understood that, although the steps in the flowcharts of Fig. 2 and Figs. 4-8 are shown in sequence as indicated by the arrows, these steps are not necessarily executed in the order indicated by the arrows. Unless explicitly stated herein, the execution of these steps is not strictly ordered, and these steps can be executed in other orders. Moreover, at least some of the steps in Fig. 2 and Figs. 4-8 may include multiple sub-steps or multiple stages; these sub-steps or stages are not necessarily executed and completed at the same moment but may be executed at different times, and their execution order is not necessarily sequential; they may be executed in turn or alternately with at least part of the sub-steps or stages of other steps.
Fig. 9 is a structural diagram of the image processing apparatus in one embodiment. An embodiment of the present application also provides an image processing apparatus applied to an imaging device including an optical image stabilizing system, the imaging device being equipped with a camera carrying the optical image stabilizing system. The image processing apparatus includes an image capture module 910, a jitter acquisition module 920, an image correction module 930, and an image processing module 940. Among them:
The image capture module 910 is used to control the first camera to collect the first image and to synchronously control the second camera to collect the second image; the second image is used to represent the depth information corresponding to the first image, and both the first camera and the second camera include an optical image stabilizing system.
The jitter acquisition module 920 is used to obtain, when the electronic device shakes, the first jitter amount with which the first camera collects the first image and the second jitter amount with which the second camera collects the second image.
The image correction module 930 is used to correct the first image according to the first preset calibration function and the first jitter amount to obtain the first target image, and to correct the second image according to the second preset calibration function and the second jitter amount to obtain the second target image.
The image processing module 940 is used to perform processing according to the first target image and the second target image.
The above image processing apparatus can be applied to an electronic device with multiple cameras. It can control the first camera to collect the first image and synchronously control the second camera to collect the second image; obtain, when the electronic device shakes, the first jitter amount with which the first camera collects the first image and the second jitter amount with which the second camera collects the second image; correct the first image according to the first preset calibration function and the first jitter amount to obtain the first target image, and correct the second image according to the second preset calibration function and the second jitter amount to obtain the second target image; and perform processing according to the first target image and the second target image. Based on the dual-camera dual-OIS system, the fusion of the first target image and the second target image can be simplified, the fusion efficiency improved, and the clarity of the target image improved.
In one embodiment, the image correction module 930 includes:
a lens offset unit for obtaining the first lens offset of the first camera according to the first jitter amount;
an image offset unit for obtaining the image offset of the first image according to the first lens offset and the first preset calibration function; and
an image correction unit for correcting the first image according to the image offset to obtain the first target image.
In one embodiment, the image processing apparatus further includes a construction module, which includes:
a drive unit for driving the motor to move the lens of the first camera along the preset trajectory, the preset trajectory including multiple displacement points;
an image acquisition unit for correspondingly collecting the image information of the test target when the lens moves to each displacement point;
an information acquisition unit for correspondingly obtaining the first location information of each displacement point and the image offset, relative to the initial position, of the same feature point in the image information collected at each displacement point; and
a calibration processing unit for inputting the first location information and the image offsets into the preset offset transformation model to determine the preset offset transfer function with calibration coefficients, where the number of displacement points is associated with the number of calibration coefficients.
In one embodiment, the image correction module 930 is also used to correct the first image according to the preset correction strategy and the image offset to obtain the first target image, where the preset correction strategies include frame-by-frame correction and block-by-block correction.
In one embodiment, the image correction module 930 further includes:
a correction determination unit for determining the correction amount of the second camera according to the second jitter amount and the second preset calibration function; and
a correction control unit for driving the motor to move according to the correction amount so as to correct the second image and obtain the second target image.
In one embodiment, the jitter acquisition module 920 includes:
a jitter collection unit for synchronously obtaining, when the electronic device shakes, multiple angular velocity values while the first camera collects one frame of the first image and the second camera collects one frame of the second image; and
a jitter acquisition unit for correspondingly obtaining the first jitter amount and the second jitter amount according to the multiple angular velocity values.
In one embodiment, the image processing module 940 includes:
a feature acquisition unit for obtaining the same feature points in the first target image and the second target image; and
a fusion processing unit for performing fusion processing on the first target image and the second target image according to the same feature points, to obtain a target image with depth information.
The division of the modules in the above image processing apparatus is only for illustration; in other embodiments, the image processing apparatus can be divided into different modules as required to complete all or part of the functions of the above image processing apparatus.
An embodiment of the present application also provides an electronic device. The above electronic device includes an image processing circuit, which can be implemented using hardware and/or software components and may include various processing units that define an ISP (Image Signal Processing) pipeline. Figure 10 is a schematic diagram of the image processing circuit in one embodiment. As shown in Figure 10, for ease of description, only the aspects of the image processing technique relevant to the embodiments of the present application are shown.
As shown in Figure 10, the image processing circuit includes a first ISP processor 1030, a second ISP processor 1040, and a control logic device 1050. The first camera 1010 includes one or more first lenses 1012 and a first image sensor 1014. The first image sensor 1014 may include a color filter array (such as a Bayer filter); it can obtain the light intensity and wavelength information captured by each of its imaging pixels and provide a set of image data that can be processed by the first ISP processor 1030. The second camera 1020 includes one or more second lenses 1022 and a second image sensor 1024. The second image sensor 1024 may include a color filter array (such as a Bayer filter); it can obtain the light intensity and wavelength information captured by each of its imaging pixels and provide a set of image data that can be processed by the second ISP processor 1040.
The first image collected by the first camera 1010 is transmitted to the first ISP processor 1030 for processing. After processing the first image, the first ISP processor 1030 can send the statistical data of the first image (such as image brightness, image contrast, image color, etc.) to the control logic device 1050, and the control logic device 1050 can determine the control parameters of the first camera 1010 according to the statistical data, so that the first camera 1010 can perform operations such as auto-focus and auto-exposure according to the control parameters. The first image can be stored in the image memory 1060 after being processed by the first ISP processor 1030, and the first ISP processor 1030 can also read the image stored in the image memory 1060 for processing. In addition, the first image can be sent directly to the display 1070 for display after being processed by the first ISP processor 1030, and the display 1070 can also read the image in the image memory 1060 for display.
The first ISP processor 1030 processes the image data pixel by pixel in multiple formats. For example, each image pixel can have a bit depth of 8, 10, 12, or 14 bits; the first ISP processor 1030 can perform one or more image processing operations on the image data and collect statistical information about the image data. The image processing operations can be performed with the same or different bit-depth precision.
The image memory 1060 can be part of a memory device, a storage device, or an independent dedicated memory within the electronic device, and may include a DMA (Direct Memory Access) feature.
When receiving data from the interface of the first image sensor 1014, the first ISP processor 1030 can perform one or more image processing operations, such as temporal filtering. The processed image data can be sent to the image memory 1060 for additional processing before being displayed. The first ISP processor 1030 receives the processed data from the image memory 1060 and performs image data processing on it in the RGB and YCbCr color spaces. The image data processed by the first ISP processor 1030 may be output to the display 1070 for viewing by the user and/or for further processing by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the first ISP processor 1030 can also be sent to the image memory 1060, and the display 1070 can read the image data from the image memory 1060. In one embodiment, the image memory 1060 can be configured to implement one or more frame buffers.
The statistical data determined by the first ISP processor 1030 can be sent to the control logic device 1050. For example, the statistical data can include statistical information of the first image sensor 1014 such as auto-exposure, auto-white-balance, auto-focus, flicker detection, black level compensation, and shading correction of the first lens 1012. The control logic device 1050 may include a processor and/or a microcontroller that executes one or more routines (such as firmware), and the one or more routines can determine the control parameters of the first camera 1010 and the control parameters of the first ISP processor 1030 according to the received statistical data. For example, the control parameters of the first camera 1010 may include gain, the integration time of exposure control, image stabilization parameters, flash control parameters, control parameters of the first lens 1012 (such as the focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for auto-white-balance and color adjustment (for example, during RGB processing), as well as shading correction parameters for the first lens 1012.
Similarly, the second image captured by the second camera 1020 is transmitted to the second ISP processor 1040 for processing. After the second ISP processor 1040 processes the second image, it may send statistical data of the second image (such as image brightness, image contrast, and image color) to the control logic 1050. The control logic 1050 may determine control parameters of the second camera 1020 according to the statistical data, so that the second camera 1020 can perform operations such as auto focus and auto exposure according to the control parameters. The second image may be stored in the image memory 1060 after being processed by the second ISP processor 1040, and the second ISP processor 1040 may also read images stored in the image memory 1060 for processing. In addition, the second image may be sent directly to the display 1070 for display after being processed by the second ISP processor 1040, and the display 1070 may also read images from the image memory 1060 for display. The second camera 1020 and the second ISP processor 1040 may also implement the processing described for the first camera 1010 and the first ISP processor 1030.
The image processing method described above may be implemented using the image processing technique shown in Figure 10.
Embodiments of the present application also provide a computer-readable storage medium: a non-volatile computer-readable storage medium containing computer-executable instructions which, when executed by one or more processors, cause the processors to perform the steps of the image processing method.
A computer program product containing instructions which, when run on a computer, cause the computer to perform the image processing method.
A person of ordinary skill in the art can understand that all or part of the processes in the above method embodiments can be implemented by a computer program instructing related hardware. The program may be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), or the like.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be noted that a person of ordinary skill in the art may make various modifications and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of the present patent application shall be subject to the appended claims.
Claims (10)
1. An image processing method, applied to an electronic device, characterized in that the method comprises:
controlling a first camera to acquire a first image, and synchronously controlling a second camera to acquire a second image, the second image being used to indicate depth information corresponding to the first image, wherein the first camera and the second camera each comprise an optical image stabilization system;
when the electronic device shakes, acquiring a first jitter amount of the first camera when acquiring the first image and a second jitter amount of the second camera when acquiring the second image;
correcting the first image according to a first preset calibration function and the first jitter amount to obtain a first target image, and correcting the second image according to a second preset calibration function and the second jitter amount to obtain a second target image; and
processing the first target image and the second target image.
2. The method according to claim 1, characterized in that correcting the first image according to the first preset calibration function and the first jitter amount to obtain the first target image comprises:
obtaining a first lens offset of the first camera according to the first jitter amount;
obtaining an image offset of the first image according to the first lens offset and the first preset calibration function; and
correcting the first image according to the image offset to obtain the first target image.
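The correction chain recited in claim 2 (jitter amount to lens offset, lens offset to image offset via the preset calibration function, then image correction) can be sketched as follows. This is an illustrative Python sketch only: the linear jitter-to-lens-offset gain, the quadratic calibration coefficients, and the simple row-shift correction are assumed stand-ins, not values or algorithms disclosed in this application.

```python
# Hypothetical sketch of the claim-2 chain: jitter amount -> lens offset ->
# image offset -> corrected image. All numeric parameters are illustrative.

def lens_offset_from_jitter(jitter_deg, gain=0.02):
    """Map a jitter angle (degrees) to a lens offset (mm); linear model assumed."""
    return gain * jitter_deg

def image_offset(lens_offset_mm, coeffs=(0.0, 120.0, 3.5)):
    """Preset calibration function, assumed quadratic in the lens offset:
    offset_px = c0 + c1*x + c2*x**2 (coefficients are made up)."""
    c0, c1, c2 = coeffs
    return c0 + c1 * lens_offset_mm + c2 * lens_offset_mm ** 2

def correct_row(row, offset_px):
    """Correct one image row by shifting it back by the rounded pixel offset."""
    shift = round(offset_px)
    if shift > 0:
        return row[shift:] + [0] * shift      # shift left, pad exposed edge
    if shift < 0:
        return [0] * (-shift) + row[:shift]   # shift right, pad exposed edge
    return row[:]

jitter = 0.5                                  # degrees of shake during exposure
off = image_offset(lens_offset_from_jitter(jitter))
print(off)                                    # predicted pixel offset
print(correct_row([1, 2, 3, 4, 5, 6], off))   # row after correction
```

In a real pipeline the offset would be applied per frame or per block (claim 4) rather than row by row; the sketch only shows how the calibration function turns a measured jitter amount into a pixel-domain correction.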
3. The method according to claim 2, characterized in that, before obtaining the image offset of the first image according to the first lens offset and the first preset calibration function, the method further comprises:
driving a motor to move the lens of the first camera along a preset trajectory, the preset trajectory comprising a plurality of displacement points;
when the lens moves to each displacement point, correspondingly capturing image information of a test target;
correspondingly obtaining first position information of each displacement point and an image offset, relative to an initial position, of a same feature point in the image information captured at each displacement point; and
inputting the first position information and the image offsets into a preset offset transformation model to determine the preset offset transfer function with calibration coefficients, wherein the number of displacement points is associated with the number of calibration coefficients.
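The calibration recited in claim 3 can be illustrated as follows: the lens is driven to several displacement points, the image offset of one feature point on a test target is recorded at each point, and the coefficients of the offset transfer function are fitted to the (displacement, offset) pairs. Consistent with the quadratic function mentioned in the description, a quadratic model is assumed here, so at least three displacement points are needed (the point count is tied to the coefficient count). The displacement values, offsets, and least-squares fit are illustrative, not the procedure disclosed in this application.

```python
# Hypothetical sketch of the claim-3 calibration: fit a quadratic offset
# transfer function y = c0 + c1*x + c2*x**2 to measured (displacement, offset)
# pairs using the normal equations, with no external libraries.

def fit_quadratic(xs, ys):
    """Least-squares quadratic fit via Gaussian elimination with pivoting."""
    a = [[float(sum(x ** (i + j) for x in xs)) for j in range(3)] for i in range(3)]
    b = [float(sum(y * x ** i for x, y in zip(xs, ys))) for i in range(3)]
    for col in range(3):                      # forward elimination
        piv = max(range(col, 3), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = a[r][col] / a[col][col]
            a[r] = [v - f * w for v, w in zip(a[r], a[col])]
            b[r] -= f * b[col]
    c = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):                       # back substitution
        c[r] = (b[r] - sum(a[r][j] * c[j] for j in range(r + 1, 3))) / a[r][r]
    return c

# Simulated calibration run: five lens displacement points (mm) and the
# measured image offsets (px) of one feature point, generated here from an
# assumed ground-truth relation offset = 2 + 10*x + 5*x**2.
xs = [0.0, 0.1, 0.2, 0.3, 0.4]
ys = [2 + 10 * x + 5 * x ** 2 for x in xs]
print(fit_quadratic(xs, ys))   # recovered calibration coefficients
```

With five displacement points and three coefficients the system is overdetermined, which is one plausible reading of the claim's link between the number of displacement points and the number of calibration coefficients.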
4. The method according to claim 3, characterized in that correcting the first image according to the image offset to obtain the first target image comprises:
correcting the first image according to a preset correction strategy and the image offset to obtain the first target image, wherein the preset correction strategy comprises a frame-by-frame correction strategy and a block-by-block correction strategy.
5. The method according to claim 1, characterized in that correcting the second image according to the second preset calibration function and the second jitter amount to obtain the second target image comprises:
determining a correction amount of the second camera according to the second jitter amount and the second preset calibration function; and
driving a motor to move according to the correction amount, so as to correct the second image to obtain the second target image.
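Claim 5 corrects in the optical path rather than in the pixel domain: the second camera's correction amount is computed from the jitter amount, and a motor moves the lens by that amount so the second image is captured already stabilized. The sketch below is an assumption-laden illustration; the linear calibration function and the 1 µm motor step size are invented for the example, not taken from this application.

```python
# Hypothetical sketch of claim 5: jitter amount -> correction amount via the
# second preset calibration function -> quantized motor movement. The linear
# calibration and the motor step size are illustrative assumptions.

def correction_amount(jitter_deg, calib=lambda j: -0.02 * j):
    """Second preset calibration function: jitter angle (deg) -> lens
    correction (mm), opposing the shake (hence the negative sign)."""
    return calib(jitter_deg)

def motor_steps(correction_mm, step_mm=0.001):
    """Quantize the correction to whole motor steps (assumed 1 um per step)."""
    return round(correction_mm / step_mm)

amt = correction_amount(0.5)   # 0.5 deg of shake
print(amt)                     # correction in mm, opposite to the shake
print(motor_steps(amt))        # steps the motor driver would be commanded
```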
6. The method according to claim 1, characterized in that, when the electronic device shakes, acquiring the first jitter amount of the first camera when acquiring the first image and the second jitter amount of the second camera when acquiring the second image comprises:
when the electronic device shakes, synchronously acquiring a plurality of angular velocity values while the first camera acquires one frame of the first image and while the second camera acquires one frame of the second image; and
correspondingly obtaining the first jitter amount and the second jitter amount according to the plurality of angular velocity values.
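One plausible reading of claim 6 is that gyroscope angular-velocity samples collected during each camera's exposure are integrated into a per-frame jitter angle. The sketch below assumes that reading; the sample values, the 1 kHz rate, and the simple rectangle-rule integration are illustrative, not disclosed in this application.

```python
# Hypothetical sketch of claim 6: integrate per-exposure gyro samples
# (deg/s) into each camera's jitter amount (deg). Values are illustrative.

def jitter_amount(angular_velocities_dps, sample_dt_s):
    """Rectangle-rule integration of angular velocity samples into an angle."""
    return sum(w * sample_dt_s for w in angular_velocities_dps)

# 8 gyro samples at an assumed 1 kHz during the first camera's exposure,
# 4 samples during the (shorter) second camera's exposure.
gyro_cam1 = [2.0, 3.0, 4.0, 3.0, 2.0, 1.0, 0.0, -1.0]
gyro_cam2 = [2.0, 3.0, 4.0, 3.0]
dt = 0.001

print(jitter_amount(gyro_cam1, dt))   # first jitter amount (deg)
print(jitter_amount(gyro_cam2, dt))   # second jitter amount (deg)
```

Because the two exposures can differ in length, each camera gets its own integration window, which is consistent with the claim producing two distinct jitter amounts from one synchronized sample stream.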
7. The method according to claim 1, characterized in that processing the first target image and the second target image comprises:
obtaining same feature points in the first target image and the second target image; and
performing fusion processing on the first target image and the second target image according to the same feature points, so as to obtain a target image with depth information.
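The fusion step of claim 7 can be illustrated with a deliberately tiny example: a shared feature point gives the registration between the two target images, and each color sample is then paired with the depth sample it lines up with. The 1-D "images" and translation-only alignment below are stand-ins for real feature matching and image registration, not the method disclosed in this application.

```python
# Hypothetical sketch of claim 7: register the second (depth) target image to
# the first via a common feature point, then fuse color with depth.

def align_shift(feat_in_first, feat_in_second):
    """Translation mapping second-image coordinates onto the first image."""
    return feat_in_first - feat_in_second

def fuse(first_img, depth_img, shift):
    """Pair each first-image pixel with the depth sample it aligns to;
    None marks pixels with no depth coverage after the shift."""
    fused = []
    for x, color in enumerate(first_img):
        src = x - shift                       # position in the depth map
        depth = depth_img[src] if 0 <= src < len(depth_img) else None
        fused.append((color, depth))
    return fused

first = [10, 20, 30, 40]        # corrected first target image (color samples)
depth = [5.0, 1.5, 2.0, 2.5]    # corrected second target image (depth samples)
shift = align_shift(feat_in_first=2, feat_in_second=1)  # same feature point
print(fuse(first, depth, shift))  # target image with depth information
```

Because both inputs were corrected with their own calibration functions before fusion, a single rigid shift suffices in this toy setup; real fusion would match many feature points and warp accordingly.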
8. An image processing apparatus, characterized in that the apparatus comprises:
an image acquisition module, configured to control a first camera to acquire a first image and synchronously control a second camera to acquire a second image, the second image being used to indicate depth information corresponding to the first image, wherein the first camera and the second camera each comprise an optical image stabilization system;
a jitter acquisition module, configured to acquire, when the electronic device shakes, a first jitter amount of the first camera when acquiring the first image and a second jitter amount of the second camera when acquiring the second image;
an image correction module, configured to correct the first image according to a first preset calibration function and the first jitter amount to obtain a first target image, and correct the second image according to a second preset calibration function and the second jitter amount to obtain a second target image; and
an image processing module, configured to process the first target image and the second target image.
9. A computer-readable storage medium on which a computer program is stored, characterized in that, when the computer program is executed by a processor, the steps of the method according to any one of claims 1 to 7 are implemented.
10. An electronic device, comprising a first camera, a second camera, a memory, and a processor, wherein the first camera and the second camera each comprise an optical image stabilization system, and computer-readable instructions are stored in the memory, characterized in that, when the instructions are executed by the processor, the processor performs the steps of the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811291740.XA CN109544620B (en) | 2018-10-31 | 2018-10-31 | Image processing method and apparatus, computer-readable storage medium, and electronic device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109544620A true CN109544620A (en) | 2019-03-29 |
CN109544620B CN109544620B (en) | 2021-03-30 |
Family
ID=65846309
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811291740.XA Active CN109544620B (en) | 2018-10-31 | 2018-10-31 | Image processing method and apparatus, computer-readable storage medium, and electronic device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109544620B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013127418A1 (en) * | 2012-02-27 | 2013-09-06 | Eth Zurich | Method and system for image processing in video conferencing for gaze correction |
CN103685950A (en) * | 2013-12-06 | 2014-03-26 | 华为技术有限公司 | Method and device for preventing shaking of video image |
CN104616284A (en) * | 2014-12-09 | 2015-05-13 | 中国科学院上海技术物理研究所 | Pixel-level alignment algorithm for color images to depth images of color depth camera |
CN107424187A (en) * | 2017-04-17 | 2017-12-01 | 深圳奥比中光科技有限公司 | Depth calculation processor, data processing method and 3D rendering equipment |
CN207691960U (en) * | 2017-12-14 | 2018-08-03 | 深圳奥比中光科技有限公司 | Integrated 3D imaging devices and electronic equipment |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110209273A (en) * | 2019-05-23 | 2019-09-06 | Oppo广东移动通信有限公司 | Gesture identification method, interaction control method, device, medium and electronic equipment |
CN110209273B (en) * | 2019-05-23 | 2022-03-01 | Oppo广东移动通信有限公司 | Gesture recognition method, interaction control method, device, medium and electronic equipment |
CN110189380A (en) * | 2019-05-30 | 2019-08-30 | Oppo广东移动通信有限公司 | Optimization method, structure optical mode group and the storage medium of nominal data |
CN110189380B (en) * | 2019-05-30 | 2021-12-07 | Oppo广东移动通信有限公司 | Calibration data optimization method, structured light module and storage medium |
CN110288554A (en) * | 2019-06-29 | 2019-09-27 | 北京字节跳动网络技术有限公司 | Video beautification method, device and electronic equipment |
CN113875221A (en) * | 2019-08-27 | 2021-12-31 | Oppo广东移动通信有限公司 | Image processing method and device, electronic equipment and computer readable storage medium |
CN112956184A (en) * | 2020-03-11 | 2021-06-11 | 深圳市大疆创新科技有限公司 | Image processing system, movable platform, image processing method thereof, and storage medium |
WO2021179217A1 (en) * | 2020-03-11 | 2021-09-16 | 深圳市大疆创新科技有限公司 | Image processing system, mobile platform and image processing method therefor, and storage medium |
CN111402313A (en) * | 2020-03-13 | 2020-07-10 | 合肥的卢深视科技有限公司 | Image depth recovery method and device |
CN111402313B (en) * | 2020-03-13 | 2022-11-04 | 合肥的卢深视科技有限公司 | Image depth recovery method and device |
CN111340737A (en) * | 2020-03-23 | 2020-06-26 | 北京迈格威科技有限公司 | Image rectification method, device and electronic system |
CN111340737B (en) * | 2020-03-23 | 2023-08-18 | 北京迈格威科技有限公司 | Image correction method, device and electronic system |
CN111578839A (en) * | 2020-05-25 | 2020-08-25 | 北京百度网讯科技有限公司 | Obstacle coordinate processing method and device, electronic equipment and readable storage medium |
CN111866336A (en) * | 2020-06-22 | 2020-10-30 | 上海摩象网络科技有限公司 | Pan-tilt camera, camera control method and storage medium |
CN111866336B (en) * | 2020-06-22 | 2022-05-27 | 上海摩象网络科技有限公司 | Pan-tilt camera, camera control method and storage medium |
CN112133101A (en) * | 2020-09-22 | 2020-12-25 | 杭州海康威视数字技术股份有限公司 | Method and device for enhancing license plate area, camera device, computing equipment and storage medium |
CN112133101B (en) * | 2020-09-22 | 2021-11-26 | 杭州海康威视数字技术股份有限公司 | Method and device for enhancing license plate area, camera device, computing equipment and storage medium |
CN114863036A (en) * | 2022-07-06 | 2022-08-05 | 深圳市信润富联数字科技有限公司 | Data processing method and device based on structured light, electronic equipment and storage medium |
CN117376694A (en) * | 2023-12-07 | 2024-01-09 | 荣耀终端有限公司 | Time synchronization method and processing device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109544620A (en) | Image processing method and device, computer readable storage medium and electronic equipment | |
CN109194876B (en) | Image processing method, image processing device, electronic equipment and computer readable storage medium | |
CN108737734B (en) | Image compensation method and apparatus, computer-readable storage medium, and electronic device | |
JP6911192B2 (en) | Image processing methods, equipment and devices | |
CN108769528B (en) | Image compensation method and apparatus, computer-readable storage medium, and electronic device | |
CN109155843B (en) | Image projection system and image projection method | |
CN109194877B (en) | Image compensation method and apparatus, computer-readable storage medium, and electronic device | |
US7260270B2 (en) | Image creating device and image creating method | |
TWI253006B (en) | Image processing system, projector, information storage medium, and image processing method | |
CN109714536B (en) | Image correction method, image correction device, electronic equipment and computer-readable storage medium | |
CN109842753A (en) | Camera stabilization system, method, electronic equipment and storage medium | |
JP2020537382A (en) | Methods and equipment for dual camera-based imaging and storage media | |
CN110278360A (en) | Image processing method and device, electronic equipment, computer readable storage medium | |
JPH09181913A (en) | Camera system | |
CN110536057A (en) | Image processing method and device, electronic equipment, computer readable storage medium | |
JP2007081682A (en) | Image processor, image processing method, and executable program by information processor | |
CN109951638A (en) | Camera stabilization system, method, electronic equipment and computer readable storage medium | |
CN113875219B (en) | Image processing method and device, electronic equipment and computer readable storage medium | |
CN109598764A (en) | Camera calibration method and device, electronic equipment, computer readable storage medium | |
US10237485B2 (en) | Image processing apparatus, imaging apparatus, and image processing method | |
CN109951640A (en) | Camera anti-fluttering method and system, electronic equipment, computer readable storage medium | |
US20120307132A1 (en) | Imaging module, imaging apparatus, image processing apparatus, and image processing method | |
CN103024302A (en) | Image processing device that performs image processing | |
CN109660718A (en) | Image processing method and device, electronic equipment, computer readable storage medium | |
CN107872631A (en) | Image capturing method, device and mobile terminal based on dual camera |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||