WO2019237977A1 - Image compensation method, computer-readable storage medium, and electronic device - Google Patents

Image compensation method, computer-readable storage medium, and electronic device

Info

Publication number
WO2019237977A1
WO2019237977A1 PCT/CN2019/090140 CN2019090140W
Authority
WO
WIPO (PCT)
Prior art keywords
image
offset
camera
lens
preset
Prior art date
Application number
PCT/CN2019/090140
Other languages
English (en)
French (fr)
Inventor
谭国辉
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp., Ltd. (Oppo广东移动通信有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Publication of WO2019237977A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N 23/682 Vibration or motion blur correction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N 23/681 Motion detection
    • H04N 23/6812 Motion detection based on additional sensors, e.g. acceleration sensors

Definitions

  • the present application relates to the field of computer technology, and in particular, to an image compensation method, a computer-readable storage medium, and an electronic device.
  • Optical Image Stabilization (OIS), an anti-shake technology currently recognized by the public, mainly corrects "optical axis shift" through a floating lens element in the lens. The principle is that after a small movement is detected, a signal is transmitted to the microprocessor, which immediately calculates the displacement that needs to be compensated; the compensating lens group then compensates according to the lens shake direction and displacement, thereby effectively overcoming image blur caused by camera vibration.
  • An image compensation method, a computer-readable storage medium, and an electronic device are provided.
  • An image compensation method, applied to a camera carrying an optical image stabilization system, includes: acquiring a lens offset of the camera when camera shake is detected; determining an image offset corresponding to the lens offset according to a preset offset conversion function; and compensating, according to the image offset, an image collected by the camera when the shake occurs.
  • An image compensation device is applied to a camera including an optical image stabilization system.
  • the device includes:
  • a lens offset acquisition module configured to acquire a lens offset of the camera when the camera shake is detected
  • An image offset acquisition module configured to determine an image offset corresponding to the lens offset according to a preset offset conversion function
  • An image compensation module is configured to compensate an image collected by the camera when a shake occurs according to the image offset.
  • a computer-readable storage medium having stored thereon a computer program, characterized in that the computer program implements an image compensation method when executed by a processor.
  • An electronic device includes a memory and a processor.
  • the memory stores computer-readable instructions, and the instructions are executed by the processor to cause the processor to execute an image compensation method.
  • The image compensation method, computer-readable storage medium, and electronic device described above can obtain the lens offset of the camera when camera shake is detected, determine the image offset corresponding to the lens offset according to a preset offset conversion function, and compensate the image collected by the camera when the shake occurs according to the image offset. The image offset can thus be obtained more accurately, and the image can be compensated during shooting or real-time preview to improve image sharpness.
  • FIG. 1 is a block diagram of an electronic device in one embodiment.
  • FIG. 2 is a flowchart of an image compensation method according to an embodiment.
  • FIG. 3 is a flowchart of an image compensation method in another embodiment.
  • FIG. 4 is a flowchart of inputting the first position information and the second position information to a preset offset conversion model to determine the preset offset conversion function according to an embodiment.
  • FIG. 5 is a flowchart of obtaining a lens offset of the camera when the camera shake is detected in an embodiment.
  • FIG. 6 is a flowchart of an image compensation method according to another embodiment.
  • FIG. 7 is a structural diagram of an image compensation device in an embodiment.
  • FIG. 8 is a schematic diagram of an image processing circuit in one embodiment.
  • FIG. 9 is a schematic diagram of an image processing circuit in one embodiment.
  • The terms “first”, “second”, and the like used in this application can describe various elements, but the elements are not limited by these terms; the terms are only used to distinguish one element from another.
  • the first camera may be referred to as a second camera, and similarly, the second camera may be referred to as a first camera. Both the first camera and the second camera are cameras, but they are not the same camera.
  • The camera carrying the OIS (Optical Image Stabilization) system includes a lens, a voice coil motor, an infrared filter, an image sensor, a digital signal processor (DSP), a PCB circuit board, and multiple sensors (for example, a gyroscope sensor, a Hall sensor, etc.).
  • The lens is usually composed of multiple lens elements, and its function is imaging. If the lens has the OIS function, then in the case of shake the lens is controlled to translate relative to the image sensor to compensate for the image offset caused by hand shake.
  • Optical image stabilization relies on a special lens or CCD light-sensor structure to minimize image instability caused by the operator's shake during use.
  • When the gyroscope in the camera detects a slight movement, it transmits a signal to the microprocessor, which immediately calculates the displacement that needs to be compensated; the compensating lens group then compensates according to the lens shake direction and displacement. This effectively overcomes image blur caused by camera shake.
  • the above cameras carrying the OIS (Optical Image Stabilization) system can be applied to electronic devices.
  • The electronic devices can be mobile phones, tablet computers, PDAs (Personal Digital Assistants), POS (Point of Sale) terminals, on-board computers, wearable devices, digital cameras, and any other terminal device with photo and video functions.
  • the electronic device may obtain a lens offset of the camera when detecting that the camera is shaken; determine an image offset corresponding to the lens offset according to a preset offset conversion function; and according to the image offset Compensate an image collected by the camera when a shake occurs.
  • FIG. 1 is a block diagram of an electronic device in one embodiment.
  • the electronic device includes a processor, a memory, a display screen, and an input device connected through a system bus.
  • The memory may include a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium of the electronic device stores an operating system and a computer program, and the computer program is executed by a processor to implement an image compensation method provided in the embodiments of the present application.
  • This processor is used to provide computing and control capabilities to support the operation of the entire electronic device.
  • The internal memory in the electronic device provides an environment for running the computer program stored in the non-volatile storage medium.
  • the display screen of an electronic device can be a liquid crystal display or an electronic ink display
  • The input device can be a touch layer covering the display screen, a button, trackball, or touchpad provided on the electronic device casing, or an external keyboard, trackpad, or mouse.
  • the electronic device may be a mobile phone, a tablet, a PDA (Personal Digital Assistant), a POS (Point of Sales), a vehicle-mounted computer, a wearable device, a digital camera, and any other terminal device with photographing and video functions .
  • FIG. 1 is only a block diagram of part of the structure related to the solution of the present application and does not constitute a limitation on the electronic device to which the solution is applied. A specific electronic device may include more or fewer parts than shown in the figure, combine certain parts, or have a different arrangement of parts.
  • FIG. 2 is a flowchart of an image compensation method according to an embodiment.
  • the image compensation method is applied to a camera including an OIS system.
  • The image compensation method includes steps 202-206.
  • When an electronic device carrying a camera with the OIS system enters the image preview interface, the camera collects images of various viewing angles in real time. Meanwhile, whether the camera shakes can be detected based on the gyro sensor in the camera, or based on the gyro sensor and/or acceleration sensor in the electronic device itself. In one embodiment, when the angular velocity collected by the gyro sensor changes, the camera can be considered to have shaken. When the camera shakes, the lens offset of the camera can be obtained.
  • the movement amount of the lens in the camera may be collected based on the Hall sensor or laser technology in the camera, that is, the lens offset.
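As an illustrative sketch of the shake-detection rule described above (the function name and the threshold value are hypothetical, not part of this disclosure), a change in consecutive gyro angular-velocity samples beyond a threshold can be treated as a shake:

```python
def shake_detected(angular_velocities, threshold=0.01):
    """Treat the camera as shaking when the gyro angular velocity changes
    by more than `threshold` (rad/s, hypothetical value) between two
    consecutive samples."""
    return any(abs(b - a) > threshold
               for a, b in zip(angular_velocities, angular_velocities[1:]))

# A sudden jump in angular velocity counts as a shake:
shaking = shake_detected([0.0, 0.0, 0.05])
```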
  • a two-dimensional coordinate system may be established by using the plane where the image sensor of the camera is located as an XY plane, and the origin position of the two-dimensional coordinate system is not further limited in this application.
  • The lens offset can be understood as the vector offset, in the two-dimensional coordinate system, between the current position of the lens after the shake and its starting position before the shake; that is, the vector distance of the current position after the shake relative to the initial position before the shake.
  • the initial position can be understood as the position of the lens when the distance between the lens and the image sensor is one focal length of the lens.
  • lens offset refers to the vector distance between the optical centers before and after the lens (convex lens) moves.
  • the electronic device obtains the first image collected by the lens in the initial position in advance, and simultaneously records the coordinate position of each pixel point in the XY plane in the first image.
  • When shake occurs, the lens moves in the XY plane; that is, the second image collected by the electronic device at the current position after the shake is offset from the first image in the XY plane.
  • The offset of the second image from the first image is called the image offset.
  • the same feature pixel point in the first image and the second image can be filtered out.
  • For example, the coordinate information of the feature pixel point p1 in the first image in the XY plane is p1(X1, Y1).
  • After the shake, the coordinate information of the corresponding feature pixel point p1′ in the XY plane is p1′(X2, Y2), and the image offset d1 can be obtained from the feature pixel points p1 and p1′.
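The image offset d1 between matched feature pixels can be sketched as follows (a hypothetical helper; the disclosure does not prescribe this exact computation):

```python
import math

def image_offset(p, p_prime):
    """Vector offset of a matched feature pixel between the first image
    (before shake) and the second image (after shake), plus its scalar
    magnitude in pixels."""
    dx = p_prime[0] - p[0]
    dy = p_prime[1] - p[1]
    return (dx, dy), math.hypot(dx, dy)

# A feature pixel at (100, 40) in the first image appears at (103, 44)
# in the second image after the shake:
vec, d1 = image_offset((100, 40), (103, 44))
```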
  • an image offset may be obtained by obtaining a lens offset and according to a preset offset conversion function.
  • the unit of lens offset is code
  • the unit of image offset is pixel.
  • a lens offset can be converted into an image offset.
  • the preset offset conversion function can be obtained according to a specific calibration method, and the preset offset conversion function can be used to convert a lens offset into an image offset.
  • The offsets of the lens along the x-axis and y-axis of the XY plane can be substituted into the corresponding variables in the preset offset conversion function, and the corresponding image offset d1 can be obtained by calculation.
  • the lens offset can be determined according to the Hall value of the Hall sensor.
  • the image captured by the camera is referred to as a first image
  • the frequency of the image collected by the camera is the image frequency.
  • The image frequency and the Hall values are strictly synchronized by timing (time stamps). For example, if images are acquired at 30 Hz while Hall values are sampled at 200 Hz, one image corresponds to 6-7 Hall values in the time series.
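The timestamp-based pairing of 30 Hz frames with 200 Hz Hall samples can be sketched as follows (names and the windowing rule are illustrative assumptions):

```python
def halls_for_frame(frame_start, frame_period, hall_samples):
    """Return the Hall values whose timestamps fall inside one frame's period.
    `hall_samples` is a list of (timestamp_seconds, hall_value) pairs."""
    t0, t1 = frame_start, frame_start + frame_period
    return [value for (t, value) in hall_samples if t0 <= t < t1]

# 30 Hz frames against 200 Hz Hall sampling (values are illustrative):
frame_period = 1.0 / 30
hall_samples = [(i * 0.005, 100 + i) for i in range(40)]
per_frame = halls_for_frame(0.0, frame_period, hall_samples)
```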
  • Image compensation is performed on the first image according to the acquired image offset. For example, if the currently calculated image offset is 1 pixel, then during compensation the image is shifted by 1 pixel in the negative direction of the offset to realize image compensation.
  • image offsets corresponding to multiple Hall values may be used to correct the same frame image.
  • For example, 6 image offsets corresponding to 6 Hall values may be used to correct the same frame. Since the image obtained by the camera is a CMOS progressive-scan image, image compensation is performed on the areas of lines corresponding to different Hall values. For example, given six Hall values hall1-hall6, each Hall value corresponds to a unique image offset, denoted biaspixel1-biaspixel6. If the CMOS scans 6 lines, biaspixel1-biaspixel6 can be used to correct the 6 lines progressively.
  • Block correction can also be performed; that is, 60 lines are divided into 6 blocks, and each block contains 10 lines.
  • The 6 blocks are corrected block by block with biaspixel1-biaspixel6 respectively; that is, the 10 lines in the first block are all compensated and corrected using biaspixel1 as the correction parameter.
  • The 10 lines in the second block are compensated and corrected using biaspixel2 as the correction parameter.
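The block-wise correction described above can be sketched as follows, assuming a horizontal per-block offset and zero-padding at the vacated edge (both details are assumptions; the disclosure does not specify them):

```python
def shift_row(row, bias):
    """Shift one scan line horizontally by -bias pixels (the compensation
    direction), zero-padding the vacated edge (padding rule is assumed)."""
    if bias == 0:
        return list(row)
    if bias > 0:
        return row[bias:] + [0] * bias
    return [0] * (-bias) + row[:bias]

def blockwise_correct(rows, bias_pixels):
    """Split `rows` into len(bias_pixels) equal blocks (e.g. 60 lines into
    6 blocks of 10) and compensate every line of block b with bias_pixels[b]."""
    rows_per_block = len(rows) // len(bias_pixels)
    out = []
    for b, bias in enumerate(bias_pixels):
        for row in rows[b * rows_per_block:(b + 1) * rows_per_block]:
            out.append(shift_row(row, bias))
    return out

# 60 identical lines, 6 per-block offsets (biaspixel1-biaspixel6, illustrative):
corrected = blockwise_correct([[1, 2, 3, 4]] * 60, [1, 0, 2, 0, 1, -1])
```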
  • In the image compensation method, when camera shake is detected, a lens offset of the camera is obtained; an image offset corresponding to the lens offset is determined according to a preset offset conversion function; and the image collected by the camera when the shake occurs is compensated according to the image offset. The image offset can thus be obtained more accurately, and the image can be compensated during shooting or real-time preview to improve image sharpness.
  • FIG. 3 is a flowchart of an image compensation method in another embodiment.
  • Before determining the image offset corresponding to the lens offset according to the preset offset conversion function, the method further includes obtaining the preset offset conversion function, which specifically includes steps 302-308.
  • A driving motor moves the lens of the camera along a preset trajectory; the preset trajectory includes a plurality of characteristic displacement points.
  • A test target is fixed within the imaging range of the camera, and the motor is controlled to move the lens of the camera along the preset trajectory.
  • the preset track can be a circle, an ellipse, a rectangle, or other preset tracks.
  • Multiple characteristic displacement points are set on the preset trajectory, and the distance between two adjacent characteristic displacement points may be the same or different.
  • the position information of the characteristic displacement points can be represented by coordinate positions in the XY plane.
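For a circular preset trajectory, equally spaced characteristic displacement points can be generated as in this illustrative sketch (the spacing rule and function name are assumptions):

```python
import math

def circle_trajectory_points(radius, n_points):
    """n_points characteristic displacement points evenly spaced on a
    circular preset trajectory centred on the origin of the XY plane."""
    return [(radius * math.cos(2 * math.pi * k / n_points),
             radius * math.sin(2 * math.pi * k / n_points))
            for k in range(n_points)]

pts = circle_trajectory_points(10.0, 6)  # six points on a circle of radius 10
```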
  • the test target may be a CTF (Contrast Transfer Function) target, an SFR (Spatial Frequency Response) target, a DB target, or other custom target.
  • For example, when the number of characteristic displacement points is six, image information of the test target needs to be collected at the six corresponding positions.
  • The position information of a characteristic displacement point q can be represented by its coordinate position q(xi, yj) in the XY plane; that is, the first position information of a characteristic displacement point can be represented by the coordinates q(xi, yj).
  • The first position information of the feature displacement points is recorded as q1(x1, y1), q2(x2, y2), q3(x3, y3), q4(x4, y4), q5(x5, y5), q6(x6, y6).
  • A feature displacement point corresponds to image information of the test target, and the image information is composed of a plurality of pixels. One or more characteristic pixel points p may be selected in the image information to obtain the second position information of the characteristic pixel points; the second position information may likewise be a coordinate position p(Xi, Yj) in the XY plane.
  • The characteristic pixel point p may be a pixel near the center of the image information, the brightest pixel in the image information, or another pixel with a prominent feature; the specific location and definition of p are not further limited.
  • Assume the initial feature displacement point is q0(x0, y0) and the feature pixel point in the corresponding image information of the test target is p0(X0, Y0); the feature displacement point q0(x0, y0) may be taken as the origin, and the feature pixel point p0(X0, Y0) may correspondingly be taken as the origin. That is, from a feature displacement point, its corresponding feature pixel point p(Xi, Yj), and the feature pixel point p0(X0, Y0) at the initial position, the image offset of the feature pixel point relative to the initial position can be obtained.
  • For the characteristic displacement point q1(x1, y1), the characteristic pixel point p1(X1, Y1) in the image information of the test target is obtained; correspondingly, q2(x2, y2) corresponds to p2(X2, Y2); q3(x3, y3) corresponds to p3(X3, Y3); q4(x4, y4) corresponds to p4(X4, Y4); q5(x5, y5) corresponds to p5(X5, Y5); and q6(x6, y6) corresponds to p6(X6, Y6).
  • The obtained first position information of the feature displacement points and the second position information of the corresponding feature pixel points are input into a preset offset conversion model, which is solved to determine each coefficient in the model, yielding a preset offset conversion function with calibrated coefficients.
  • the preset offset conversion model may be a univariate quadratic function model, a binary quadratic function model, or a binary multiple function model.
  • The preset offset conversion model may be constructed using a neural network or deep learning, or obtained by data fitting based on a large amount of acquired first position information and second position information.
  • Different preset offset conversion models require different numbers of feature displacement points.
  • the number of unknown coefficients in the preset offset conversion model is less than or equal to the number of feature displacement points.
  • the preset offset conversion model is a binary quadratic function model
  • It can be expressed by the following formula:
  • F(ΔX, ΔY) = ax² + by² + cxy + dx + ey + f
  • (ΔX, ΔY) represents the image offset, that is, the image offset of the feature pixel point corresponding to the current feature displacement point relative to the feature pixel point p0(X0, Y0) at the initial position.
  • The image offset here is a scalar offset, that is, the distance between the current feature pixel point p(Xi, Yj) and the feature pixel point p0(X0, Y0) at the initial position.
  • x represents the horizontal-axis coordinate of the characteristic displacement point; y represents its vertical-axis coordinate.
  • The feature pixel point p0(X0, Y0) in the image information of the test target corresponding to the feature displacement point q0(x0, y0) at the initial position is set as the coordinate origin.
  • The image offsets corresponding to the six feature pixel points are F1(ΔX1, ΔY1), F2(ΔX2, ΔY2), F3(ΔX3, ΔY3), F4(ΔX4, ΔY4), F5(ΔX5, ΔY5), F6(ΔX6, ΔY6), where F1(ΔX1, ΔY1) is the image offset d1 between the feature pixel points p1 and p0; F2(ΔX2, ΔY2) is the image offset d2 between p2 and p0; F3(ΔX3, ΔY3) is the image offset d3 between p3 and p0; and so on, up to F6(ΔX6, ΔY6), the image offset d6 between p6 and p0.
  • The six feature displacement points q1(x1, y1), q2(x2, y2), q3(x3, y3), q4(x4, y4), q5(x5, y5), q6(x6, y6) and the image offsets d1, d2, d3, d4, d5, d6 corresponding to the six characteristic pixel points p1(X1, Y1), p2(X2, Y2), p3(X3, Y3), p4(X4, Y4), p5(X5, Y5), p6(X6, Y6) are input into the binary quadratic function model, yielding the following equations:
  • F1(ΔX1, ΔY1) = ax1² + by1² + cx1y1 + dx1 + ey1 + f;
  • F2(ΔX2, ΔY2) = ax2² + by2² + cx2y2 + dx2 + ey2 + f;
  • F3(ΔX3, ΔY3) = ax3² + by3² + cx3y3 + dx3 + ey3 + f;
  • F4(ΔX4, ΔY4) = ax4² + by4² + cx4y4 + dx4 + ey4 + f;
  • F5(ΔX5, ΔY5) = ax5² + by5² + cx5y5 + dx5 + ey5 + f;
  • F6(ΔX6, ΔY6) = ax6² + by6² + cx6y6 + dx6 + ey6 + f.
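With six displacement points and six measured offsets, the six equations above form a linear system in (a, b, c, d, e, f). A minimal sketch using hypothetical numeric calibration data (the point coordinates and offset values are illustrative only):

```python
import numpy as np

# Hypothetical calibration data: six characteristic displacement points
# q_i = (x_i, y_i) and the measured scalar image offsets d_i (illustrative).
points = [(1, 0), (0, 1), (1, 1), (2, 1), (1, 2), (2, 2)]
offsets = [1.0, 1.1, 2.0, 3.2, 3.4, 5.1]

# One row per equation F_i = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f:
M = np.array([[x * x, y * y, x * y, x, y, 1.0] for (x, y) in points])
coeffs = np.linalg.solve(M, np.array(offsets))  # -> (a, b, c, d, e, f)
```

When more displacement points than coefficients are collected, `np.linalg.lstsq` can be used instead to fit the six coefficients in the least-squares sense.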
  • The binary quadratic function model includes six unknown coefficients a, b, c, d, e, f.
  • Substituting the solved coefficients a, b, c, d, e, f into the binary quadratic function model yields the corresponding preset offset conversion function, where a, b, c, d, e, and f are the calibration coefficients of the preset offset conversion function.
  • The image compensation method in this embodiment can obtain the corresponding preset offset conversion function from a preset offset conversion model, multiple feature displacement points, and the corresponding multiple characteristic pixel points.
  • The preset offset conversion function can directly and accurately obtain the image offset from the lens offset; calibration efficiency and accuracy are higher, which lays a good foundation for compensating the image.
  • FIG. 4 is a flowchart of inputting the first position information and the second position information to a preset offset conversion model to determine the preset offset conversion function according to an embodiment.
  • In this embodiment, the preset offset conversion model is a binary multiple-degree function; inputting the first position information and the second position information into the preset offset conversion model to determine the preset offset conversion function includes:
  • The preset offset conversion model is a binary multiple-degree function model, whose expression is as follows:
  • F(ΔX, ΔY) = axⁿ + byⁿ + … + cxy + dx + ey + f
  • n ≥ 2; (ΔX, ΔY) represents the image offset, that is, the image offset of the feature pixel point corresponding to the current feature displacement point relative to the initial feature pixel point p0(X0, Y0); it is a scalar offset.
  • x represents the horizontal-axis coordinate of the characteristic displacement point; y represents its vertical-axis coordinate.
  • a, b, ..., c, d, e, f are the unknown coefficients in the preset offset conversion model.
  • the preset offset conversion model is a binary multiple function model
  • the number of unknown coefficients a, b, ..., c, d, e, f is greater than or equal to six.
  • According to the preset offset conversion model, the number of its unknown coefficients can be obtained.
  • For example, when the preset offset conversion model is a binary quadratic function model, the number of unknown coefficients is 6, and 6 or more characteristic displacement points need to be obtained correspondingly.
  • When the preset offset conversion model is a binary cubic function model, the number of unknown coefficients is 9, and 9 or more characteristic displacement points need to be obtained correspondingly.
  • It follows that the number of feature displacement points must be greater than or equal to the number of unknown coefficients of the preset offset conversion model.
  • Characteristic displacement points of the required number can be selected from the preset trajectory, with each point distinct.
  • The characteristic displacement points may also be any non-repeating position points in the XY plane.
  • the feature pixel points in the image information of each test target corresponding to each feature displacement point and the image offset corresponding to the feature pixel points can be obtained.
  • The coordinate information of each feature displacement point and the corresponding image offset are input into the preset offset conversion model to obtain the unknown coefficients of the model.
  • By substituting the obtained coefficients into the preset offset conversion model, a preset offset conversion function with calibrated coefficients can be obtained.
  • The calibration coefficients in the preset offset conversion function correspond to the unknown coefficients to be solved in the preset offset conversion model.
  • The preset offset conversion model whose coefficients have been solved is called the preset offset conversion function.
  • determining an image offset corresponding to the lens offset according to the preset offset conversion function includes:
  • a, b, c, d, e, f are calibration coefficients, which are known coefficients.
  • F(ΔX, ΔY) is used to indicate the current image offset;
  • x and y are the horizontal and vertical axis coordinates of the current lens offset, respectively.
  • the current lens offset is p (2,1)
  • The corresponding image offset F(ΔX, ΔY) is 4a + b + 2c + 2d + e + f.
  • The image offset F(ΔX, ΔY) is a scalar offset.
  • An image offset corresponding to the lens offset can be determined according to the preset offset conversion function. That is, when the lens offset is obtained, the current lens offset can be converted into an image offset according to the preset offset conversion function.
  • The preset offset conversion function is a binary quadratic function, which comprehensively considers both the x-axis and y-axis components of the lens offset and can more accurately and efficiently convert the lens offset into an image offset.
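Evaluating the calibrated function at a lens offset, as in the p(2, 1) example above, can be sketched as follows (the coefficient values are illustrative only):

```python
def image_offset_from_lens(x, y, coeffs):
    """Evaluate the calibrated conversion function
    F = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f at a lens offset (x, y)."""
    a, b, c, d, e, f = coeffs
    return a * x * x + b * y * y + c * x * y + d * x + e * y + f

# With all calibration coefficients set to 1 (illustrative values only),
# the lens offset p(2, 1) gives 4a + b + 2c + 2d + e + f:
val = image_offset_from_lens(2, 1, (1.0, 1.0, 1.0, 1.0, 1.0, 1.0))
```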
  • obtaining the lens offset of the camera includes:
  • the camera also includes a gyro sensor for detecting whether the camera shakes, a motor for driving the lens movement of the camera, and an OIS controller for controlling the movement of the motor.
  • the angular velocity of the camera detected by the gyro sensor is collected in real time, and the camera shake amount is determined according to the obtained angular velocity.
  • the motor is controlled according to the determined shake amount to drive the lens movement of the camera, and the movement amount of the lens is opposite to the direction of the shake amount to eliminate the offset caused by the shake.
  • The electronic device can record, through a Hall sensor or laser, the offset scale of the camera lens on the XY plane together with the offset direction; from the distance corresponding to each scale unit and the offset direction, the lens offset p(xi, yj) is then obtained.
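Converting a recorded offset scale and direction into a lens-offset vector can be sketched as follows (the units, per-count distance, and angle convention are assumptions, not from this disclosure):

```python
import math

def lens_offset_from_hall(scale_counts, microns_per_count, direction_deg):
    """Turn a recorded Hall offset scale (in counts) and offset direction
    (degrees in the XY plane) into an (x, y) lens offset in microns.
    The per-count distance is a hypothetical calibration constant."""
    r = scale_counts * microns_per_count
    theta = math.radians(direction_deg)
    return (r * math.cos(theta), r * math.sin(theta))

# 50 counts at 0.2 um per count, offset direction along +y:
offset = lens_offset_from_hall(50, 0.2, 90.0)
```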
  • the size of the Hall value collected by the Hall sensor can be used to uniquely determine the magnitude of the lens offset at the current moment. In OIS systems, this lens offset is on the order of microns.
  • the angular velocity information collected by the gyro sensor corresponds to the Hall value collected by the Hall sensor in time series.
  • the Hall sensor is a magnetic field sensor made according to the Hall effect.
  • the Hall effect is essentially a deflection of a moving charged particle in a magnetic field caused by the Lorentz force. When charged particles (electrons or holes) are confined in a solid material, this deflection results in the accumulation of positive and negative charges in the direction of the vertical current and magnetic field, thereby forming an additional lateral electric field.
  • Determining the lens offset of the camera based on the Hall value of the Hall sensor includes: acquiring a first frequency at which the camera collects images and a second frequency at which the gyroscope collects angular velocity information; determining, according to the first frequency and the second frequency, multiple pieces of angular velocity information corresponding to one frame of image; determining target angular velocity information from the multiple pieces of angular velocity information; and determining the lens offset of the camera according to the Hall value corresponding to the target angular velocity information.
  • A first frequency at which the camera collects images and a second frequency at which the gyroscope collects angular velocity information are acquired. Because the acquisition frequency of the gyro sensor is higher than the camera's image acquisition frequency (for example, the camera collects images at 30 Hz while the gyro sensor collects angular velocity at 200 Hz), one frame corresponds to 6-7 angular velocity samples. The target angular velocity is selected from the collected 6-7 samples; it may be the minimum angular velocity, the angular velocity with the smallest derivative, or the angular velocity closest to the average angular velocity. The Hall value of the Hall sensor corresponding to the target angular velocity is obtained, and the lens offset is determined from that Hall value.
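One of the selection rules above, picking the angular velocity closest to the average and taking its time-aligned Hall value, can be sketched as follows (the data values are illustrative):

```python
def target_hall(angular_velocities, hall_values):
    """Pick the target angular velocity as the sample closest to the mean
    (one of the selection rules mentioned above) and return it together
    with the time-aligned Hall value. Both lists are assumed synchronized."""
    mean_w = sum(angular_velocities) / len(angular_velocities)
    idx = min(range(len(angular_velocities)),
              key=lambda i: abs(angular_velocities[i] - mean_w))
    return angular_velocities[idx], hall_values[idx]

# 6 angular-velocity samples for one frame and their Hall values (illustrative):
w = [0.10, 0.30, 0.22, 0.18, 0.05, 0.25]
halls = [101, 102, 103, 104, 105, 106]
target_w, hall = target_hall(w, halls)
```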
  • FIG. 6 is a flowchart of an image compensation method according to another embodiment.
  • the camera includes at least a first camera and a second camera.
  • the first camera and the second camera may both have an OIS function, or only one camera may have an OIS function, which is not further limited in this embodiment of the present application.
  • the embodiment of the present application does not place any limitation on the performance parameters (for example, focal length, aperture size, resolution, etc.) of the first camera and the second camera.
  • the first camera may be any one of a telephoto camera or a wide-angle camera.
  • the second camera may be any one of a telephoto camera or a wide-angle camera.
  • the first camera and the second camera may be disposed in the same plane of the electronic device, for example, simultaneously disposed on the back or front of the electronic device.
  • the installation distance of the dual cameras on the electronic device can be determined according to the size of the terminal and/or the shooting effect.
  • in order to make the objects shot by the left and right cameras (the first camera and the second camera) have a high degree of overlap, the left and right cameras may be installed as close together as possible, for example, within 10 mm.
  • the image compensation method further includes:
  • when shake of the first camera and the second camera is detected, a first lens offset of the first camera and a second lens offset of the second camera are acquired, and, at the same moment, a first image and a second image of a target object are captured by the first camera and the second camera.
  • the first lens offset of the first camera and/or the second lens offset of the second camera may be obtained based on a Hall sensor.
  • when one of the cameras does not shift, its corresponding lens offset is 0.
  • while acquiring the lens offsets, a first image of the target object captured by the first camera and a second image of the target object captured by the second camera may also be obtained.
  • a first image offset corresponding to the first lens offset and a second image offset corresponding to the second lens offset are determined according to a preset offset conversion function.
  • the preset offset conversion function can be expressed as:
  • F(ΔX, ΔY) = ax^2 + by^2 + cxy + dx + ey + f
  • where a, b, c, d, e, and f are the calibration coefficients;
  • F(ΔX, ΔY) is the image offset; and
  • x and y are the coordinates of the lens offset in the X and Y directions of the XY plane, respectively.
  • the first image may be compensated according to the first image offset, and the second image may be compensated according to the second image offset. The compensated first image and the compensated second image are acquired respectively, and the distance information between the same characteristic subject in the compensated first image and second image is obtained.
  • the distance information is a vector distance; it may be obtained by overlapping the compensated first and second images, mapping them onto the XY plane, and then obtaining the coordinate distance between the target objects in the two compensated images.
  • specifically, the distance information may be the vector distance between the coordinates of the same feature pixel point of the target object in the two compensated images after they are overlapped and mapped onto the XY plane. Alternatively, multiple feature pixel points of the compensated first image may be obtained on the XY plane, and for each feature pixel point, the feature pixel point with the same feature is correspondingly obtained in the compensated second image. For each feature pixel point, the vector distance between the coordinates of that feature pixel point in the two compensated images can be obtained; an average is then calculated from the obtained vector distances, and this average is used as the distance information between the same characteristic subject in the first image and the second image.
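The averaging variant just described can be sketched as follows. This assumes the feature pixels have already been matched between the two compensated images (the matching step itself is not shown), and all names are illustrative rather than from the patent.

```python
import math

def vector_distance(p, q):
    """Euclidean distance between two XY-plane coordinates."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

def feature_distance(pairs):
    """Average vector distance over matched feature-pixel pairs.

    pairs: list of ((x1, y1), (x2, y2)) — the same feature pixel's
    coordinates in the compensated first and second image, respectively.
    """
    return sum(vector_distance(p, q) for p, q in pairs) / len(pairs)

# Two matched feature pixels, each displaced by (3, 4) → distance 5.0
pairs = [((10, 10), (13, 14)), ((40, 5), (43, 9))]
d = feature_distance(pairs)
```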
  • the first camera and the second camera are located on the same plane, and the distance between the two cameras and the focal length of the first camera and the second camera can be obtained.
  • the focal lengths of the first camera and the second camera are equal.
  • based on triangulation, the distance Z between the target object and the plane where the two cameras are located can be obtained, where the distance Z is the depth of field information of the target object.
  • specifically, distance Z = (distance between the two cameras) × (focal length of the first or second camera) / distance information.
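The formula above can be written as a small helper. This is a sketch assuming the focal length and the distance information (disparity) are expressed in the same unit (pixels), so Z comes out in the baseline's unit; the function name is illustrative.

```python
def depth_from_disparity(baseline, focal_length, disparity):
    """Depth Z of the subject from the plane of the two cameras.

    baseline: distance between the two cameras (e.g. mm)
    focal_length: focal length of either camera (pixels)
    disparity: distance information between the same feature in both
        compensated images (pixels)
    """
    if disparity == 0:
        raise ValueError("zero disparity: subject is effectively at infinity")
    return baseline * focal_length / disparity

# 10 mm baseline, 1000 px focal length, 5 px disparity → 2000 mm depth
z = depth_from_disparity(10, 1000, 5)
```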
  • optionally, the depth of field information of the target object may also be determined based on relationships such as the proportionality of the displacement difference and the posture difference between the images formed by the first camera and the second camera.
  • this solution may also be applied to an electronic device including three or more cameras, where at least one of the three or more cameras has the OIS function.
  • taking three cameras as an example, pairwise combinations of two cameras can be formed.
  • at least one camera has an OIS function.
  • the two cameras in each combination can obtain the depth information of the target object, so that three sets of depth information can be obtained, and the average depth of the three sets of depth information can be used as the actual depth of the target object.
  • in this embodiment, the first image and the second image acquired when the first camera and the second camera shake are compensated, and the depth of field information of the target object is then obtained from the compensated first and second images, so that more accurate depth of field information is obtained.
  • FIG. 7 is a structural diagram of an image compensation device in an embodiment.
  • An embodiment of the present application further provides an image compensation device applied to a camera including an optical image stabilization system.
  • the device includes:
  • a lens offset acquisition module 710 configured to acquire a lens offset of the camera when the camera shake is detected
  • An image offset obtaining module 720 configured to determine an image offset corresponding to the lens offset according to a preset offset conversion function
  • An image compensation module 730 is configured to compensate an image collected by the camera when a shake occurs according to the image offset.
  • the above image compensation device may acquire the lens offset of the camera when camera shake is detected; determine the image offset corresponding to the lens offset according to a preset offset conversion function; and compensate, according to the image offset, the image collected by the camera when the shake occurs. The image offset can thus be obtained more accurately, and the image can be compensated during shooting or real-time preview to improve the sharpness of the image.
  • the image compensation device further includes:
  • a lens driving module for driving a motor to move the lens of the camera according to a preset trajectory;
  • the preset trajectory includes a plurality of characteristic displacement points;
  • An image acquisition module configured to correspondingly acquire image information of a test target when the lens moves to each of the feature displacement points
  • a position acquisition module configured to correspondingly acquire the first position information of each feature displacement point and the second position information of the same feature pixel point in the image information collected at the feature displacement point;
  • a function determining module configured to input the first position information and the second position information into a preset offset conversion model to determine the preset offset conversion function having calibration coefficients, wherein the number of feature displacement points is associated with the number of calibration coefficients.
  • the function determination module includes:
  • a quantity determining unit configured to determine the number of feature displacement points according to the unknown coefficients of the bivariate polynomial function;
  • a coefficient determining unit configured to input the determined first position information of each feature displacement point and the second position information corresponding to the first position information into the preset offset conversion model to determine the unknown coefficients;
  • a function determining unit is configured to determine the preset offset conversion function having a calibration coefficient according to the determined unknown coefficient and a preset offset conversion model.
  • the image offset acquisition module includes:
  • a function obtaining unit configured to obtain the preset offset conversion function, where the preset offset conversion function is expressed as:
  • F(ΔX, ΔY) = ax^2 + by^2 + cxy + dx + ey + f
  • where a, b, c, d, e, and f are the calibration coefficients;
  • F(ΔX, ΔY) is the image offset; and
  • x and y are the coordinates of the lens offset in the X and Y directions of the XY plane, respectively.
  • An offset conversion unit is configured to determine an image offset corresponding to the lens offset according to the preset offset conversion function.
  • the image offset acquisition module includes:
  • An angular velocity acquiring unit configured to acquire angular velocity information of a camera based on a gyro sensor
  • a motor driving unit configured to control the motor to drive the movement of the camera's lens according to the angular velocity information;
  • the lens shift unit is configured to determine a lens shift of the camera based on a Hall value of the Hall sensor.
  • the lens offset unit is further configured to acquire a first frequency at which the camera collects images and a second frequency at which the gyroscope collects angular velocity information; determine, according to the first frequency and the second frequency, multiple pieces of angular velocity information corresponding to one frame of image; determine target angular velocity information from the multiple pieces of angular velocity information; and determine the lens offset of the camera according to the Hall value corresponding to the target angular velocity information.
  • the camera includes at least a first camera and a second camera; the image compensation device further includes:
  • an acquisition module configured to acquire, when shake of the first camera and the second camera is detected, a first lens offset of the first camera and a second lens offset of the second camera, and, at the same moment, a first image and a second image of a target object captured by the first camera and the second camera;
  • a conversion module configured to determine a first image offset corresponding to the first lens offset and a second image offset corresponding to the second lens offset according to a preset offset conversion function
  • a compensation module configured to compensate the first image according to the first image offset and the second image according to the second image offset, to obtain distance information between the same characteristic subject in the compensated first image and second image;
  • the depth of field module is configured to determine the depth of field information of the target object according to the distance information, the first image, and the second image.
  • in this embodiment, the first image and the second image acquired when the first camera and the second camera shake are compensated, and the depth of field information of the target object is then obtained from the compensated first and second images, so that more accurate depth of field information is obtained.
  • each module in the above image compensation device is only for illustration. In other embodiments, the image compensation device may be divided into different modules as needed to complete all or part of the functions of the above image compensation device.
  • An embodiment of the present application further provides a computer-readable storage medium.
  • one or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the image compensation method.
  • An embodiment of the present application further provides an electronic device.
  • the above electronic device includes an image processing circuit.
  • the image processing circuit may be implemented by hardware and / or software components, and may include various processing units that define an ISP (Image Signal Processing) pipeline.
  • FIG. 8 is a schematic diagram of an image processing circuit in one embodiment. As shown in FIG. 8, for ease of description, only aspects of the image compensation technology related to the embodiment of the present application are shown.
  • the image processing circuit includes an ISP processor 840 and a control logic 850.
  • the image data captured by the imaging device 810 is first processed by the ISP processor 840, which analyzes the image data to capture image statistical information that can be used to determine one or more control parameters of the imaging device 810.
  • the imaging device 810 may include a camera having one or more lenses 812 and an image sensor 814.
  • the image sensor 814 may include a color filter array (such as a Bayer filter).
  • the image sensor 814 may obtain the light intensity and wavelength information captured by each imaging pixel of the image sensor 814 and provide a set of raw image data that can be processed by the ISP processor 840.
  • the sensor 820 (such as a gyroscope) may provide parameters for image compensation (such as anti-shake parameters) to the ISP processor 840 based on the interface type of the sensor 820.
  • the sensor 820 interface may use a SMIA (Standard Mobile Imaging Architecture) interface, other serial or parallel camera interfaces, or a combination of the foregoing interfaces.
  • the image sensor 814 may also send the original image data to the sensor 820, and the sensor 820 may provide the original image data to the ISP processor 840 for processing based on the interface type of the sensor 820, or the sensor 820 stores the original image data in the image memory 830 .
  • the ISP processor 840 processes the original image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 840 may perform one or more image compensation operations on the original image data and collect statistical information about the image data. The image compensation operation may be performed with the same or different bit depth accuracy.
  • the ISP processor 840 may also receive pixel data from the image memory 830.
  • the sensor 820 interface sends the original image data to the image memory 830, and the original image data in the image memory 830 is then provided to the ISP processor 840 for processing.
  • the image memory 830 may be a part of a memory device, a storage device, or a separate dedicated memory in an electronic device, and may include a DMA (Direct Memory Access) feature.
  • the ISP processor 840 may perform one or more image compensation operations, such as time-domain filtering.
  • the image data processed by the ISP processor 840 may be sent to the image memory 830 for further processing before being displayed.
  • the ISP processor 840 receives processing data from the image memory 830 and performs image data processing on the processed data in the original domain and in the RGB and YCbCr color spaces.
  • the processed image data may be output to the display 880 for viewing by a user and / or further processed by a graphics engine or a GPU (Graphics Processing Unit).
  • the output of the ISP processor 840 may also be sent to the image memory 830, and the display 880 may read image data from the image memory 830.
  • the image memory 830 may be configured to implement one or more frame buffers.
  • the output of the ISP processor 840 may be sent to an encoder / decoder 870 to encode / decode image data. The encoded image data can be saved and decompressed before being displayed on the display 880 device.
  • the image data processed by the ISP processor 840 may also be processed by the encoder / decoder 870 first.
  • the encoder / decoder 870 may be a CPU (Central Processing Unit) or a GPU (Graphics Processing Unit) in a mobile terminal.
  • the statistical data determined by the ISP processor 840 may be sent to the control logic 850 unit.
  • the statistical data may include image sensor 814 statistical information such as auto exposure, auto white balance, auto focus, flicker detection, black level compensation, and lens 812 shadow compensation.
  • the control logic 850 may include a processor and/or a microcontroller that executes one or more routines (such as firmware). The one or more routines may determine the control parameters of the imaging device 810 and the control parameters of the ISP processor 840 based on the received statistical data.
  • control parameters of the imaging device 810 may include sensor 820 control parameters (e.g., gain, integration time for exposure control, image stabilization parameters, etc.), camera flash control parameters, lens 812 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters.
  • ISP control parameters may include gain levels and color compensation matrices for automatic white balance and color adjustment (eg, during RGB processing), and lens 812 shadow compensation parameters.
  • FIG. 9 is a schematic diagram of an image processing circuit in another embodiment. As shown in FIG. 9, for ease of description, only aspects of the image compensation technology related to the embodiment of the present application are shown.
  • the first camera 100 may include one or more lenses 1202 and a first image sensor 140.
  • the first image sensor 140 may include a color filter array (such as a Bayer filter); the first image sensor 140 may obtain the light intensity and wavelength information captured by each of its imaging pixels and provide a set of raw image data that can be processed by the first ISP processor 912.
  • after the first ISP processor 912 processes the first image, it can send the statistical data of the first image (such as image brightness, image contrast value, and image color) to the control logic 920.
  • the control logic 920 can determine the control parameters of the first camera 100 according to the statistical data, so that the first camera 100 can perform operations such as autofocus, auto exposure, and OIS stabilization.
  • the first image may be stored in the image memory 950 after being processed by the first ISP processor 912, and the first ISP processor 912 may also read the image stored in the image memory 950 for processing.
  • the first image may be directly sent to the display 970 for display after being processed by the ISP processor 912, and the display 970 may also read the image in the image memory 950 for display.
  • the processing flow of the second camera is the same as that of the first camera;
  • the functions of its image sensor and ISP processor are the same as described in the single-camera case.
  • the first ISP processor 912 and the second ISP processor 914 may also be combined into a unified ISP processor that processes the data of the first image sensor and the second image sensor respectively.
  • the CPU is connected to the logic controller 920, the first ISP processor 912, the second ISP processor 914, the image memory 950, and the display 970, and the CPU is used to implement global control.
  • the power supply module is used to supply power to each module.
  • when a mobile phone with dual cameras works in a dual-camera mode (for example, portrait mode), the CPU controls the power supply module to supply power to the first camera and the second camera.
  • the image sensor in the first camera is powered on, and the image sensor in the second camera is powered on.
  • in some photographing modes (for example, photo mode), only one of the cameras works by default, for example, only the telephoto camera works. In this case, the CPU controls the power supply module to supply power to the image sensor of the corresponding camera only.
  • the program can be stored in a non-volatile computer-readable storage medium.
  • the storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Adjustment Of Camera Lenses (AREA)

Abstract

An image compensation method, comprising: when shake of a camera is detected, acquiring a lens offset of the camera; determining an image offset corresponding to the lens offset according to a preset offset conversion function; and compensating, according to the image offset, an image collected by the camera when the shake occurs.

Description

Image compensation method, computer-readable storage medium, and electronic device
Cross-Reference to Related Applications
This application claims priority to Chinese Patent Application No. 201810623007.7, filed with the Chinese Patent Office on June 15, 2018 and entitled "Image compensation method and device, computer-readable storage medium, and electronic device", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of computer technology, and in particular to an image compensation method, a computer-readable storage medium, and an electronic device.
Background
Optical Image Stabilization (OIS), a stabilization technique now widely recognized by the public, mainly corrects "optical-axis shift" through a floating lens element inside the lens. Its principle is that a gyroscope inside the lens detects tiny movements and passes the signal to a microprocessor, which immediately calculates the displacement that needs to be compensated; the compensating lens group then applies compensation according to the shake direction and displacement of the lens, effectively overcoming image blur caused by camera vibration.
However, an image offset is produced during shake, and the movement of the lens has a real effect on the image; general stabilization techniques cannot solve the image offset problem.
Summary
According to various embodiments of the present application, an image compensation method, a computer-readable storage medium, and an electronic device are provided.
An image compensation method, applied to a camera carrying an optical image stabilization system, the method comprising:
when shake of the camera is detected, acquiring a lens offset of the camera;
determining an image offset corresponding to the lens offset according to a preset offset conversion function; and
compensating, according to the image offset, an image collected by the camera when the shake occurs.
An image compensation device, applied to a camera carrying an optical image stabilization system, the device comprising:
a lens offset acquisition module configured to acquire a lens offset of the camera when shake of the camera is detected;
an image offset acquisition module configured to determine an image offset corresponding to the lens offset according to a preset offset conversion function; and
an image compensation module configured to compensate, according to the image offset, an image collected by the camera when the shake occurs.
A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the image compensation method.
An electronic device comprising a memory and a processor, the memory storing computer-readable instructions, wherein the instructions, when executed by the processor, cause the processor to perform the image compensation method.
With the above image compensation method, computer-readable storage medium, and electronic device, the lens offset of the camera can be acquired when camera shake is detected; the image offset corresponding to the lens offset is determined according to a preset offset conversion function; and the image collected by the camera when the shake occurs is compensated according to the image offset. The image offset can thus be obtained more accurately, and the image can be compensated during shooting or real-time preview, improving image sharpness.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present application or the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a block diagram of an electronic device in an embodiment.
FIG. 2 is a flowchart of an image compensation method in an embodiment.
FIG. 3 is a flowchart of an image compensation method in another embodiment.
FIG. 4 is a flowchart of inputting the first position information and the second position information into a preset offset conversion model to determine the preset offset conversion function, in an embodiment.
FIG. 5 is a flowchart of acquiring the lens offset of the camera when shake of the camera is detected, in an embodiment.
FIG. 6 is a flowchart of an image compensation method in yet another embodiment.
FIG. 7 is a structural diagram of an image compensation device in an embodiment.
FIG. 8 is a schematic diagram of an image processing circuit in an embodiment.
FIG. 9 is a schematic diagram of an image processing circuit in an embodiment.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present application and are not intended to limit it.
It can be understood that the terms "first", "second", and the like used in the present application may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, without departing from the scope of the present application, a first camera may be called a second camera, and similarly, a second camera may be called a first camera. The first camera and the second camera are both cameras, but they are not the same camera.
A camera carrying an OIS (Optical Image Stabilization) system includes a lens, a voice coil motor, an infrared filter, an image sensor (sensor IC), a digital signal processor (DSP), a PCB, and multiple sensors (for example, a gyro sensor, a Hall sensor, etc.). The lens, usually composed of multiple lens elements, performs imaging; if the lens has the OIS function, it is translated relative to the image sensor when shake occurs, canceling out the image offset caused by hand shake. Optical stabilization relies on the structure of a special lens or of the CCD photosensitive element to minimize, to the greatest extent, the image instability caused by the operator's shake during use. Specifically, when the gyroscope in the camera detects a tiny movement, it passes the signal to the microprocessor, which immediately calculates the displacement that needs to be compensated; the compensating lens group then applies compensation according to the shake direction and displacement of the lens, effectively overcoming image blur caused by camera shake.
The above camera carrying an OIS system can be applied in an electronic device, which may be any terminal device with photographing and video functions, such as a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, an in-vehicle computer, a wearable device, or a digital camera.
When detecting that the camera shakes, the electronic device may acquire the lens offset of the camera; determine the image offset corresponding to the lens offset according to a preset offset conversion function; and compensate, according to the image offset, the image collected by the camera when the shake occurs.
FIG. 1 is a block diagram of an electronic device in an embodiment. As shown in FIG. 1, the electronic device includes a processor, a memory, a display screen, and an input device connected through a system bus. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium of the electronic device stores an operating system and a computer program which, when executed by the processor, implements the image compensation method provided in the embodiments of the present application. The processor is used to provide computing and control capabilities and supports the operation of the entire electronic device. The internal memory in the electronic device provides an environment for running the computer program stored in the non-volatile storage medium. The display screen of the electronic device may be a liquid crystal display or an electronic ink display; the input device may be a touch layer covering the display screen, a button, a trackball, or a touchpad provided on the housing of the electronic device, or an external keyboard, touchpad, or mouse. The electronic device may be any terminal device with photographing and video functions, such as a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, an in-vehicle computer, a wearable device, or a digital camera. Those skilled in the art can understand that the structure shown in FIG. 1 is only a block diagram of part of the structure related to the solution of the present application and does not constitute a limitation on the electronic device to which the solution is applied; a specific electronic device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
FIG. 2 is a flowchart of an image compensation method in an embodiment. The image compensation method is applied to a camera carrying an OIS system. In one embodiment, the image compensation method includes steps 202-206.
202: When shake of the camera is detected, acquire the lens offset of the camera.
When an electronic device whose camera carries an OIS system enters the image preview interface, the camera collects images of each viewing-angle range in real time. Meanwhile, whether the camera shakes can be detected based on the gyro sensor in the camera, or based on the gyro sensor and/or acceleration sensor originally in the electronic device. In one embodiment, when the angular velocity collected by the gyro sensor changes, the camera can be considered to have shaken. When the camera shakes, the lens offset of the camera can be acquired.
In one embodiment, the movement amount of the lens in the camera, that is, the lens offset, can be collected based on a Hall sensor in the camera or based on laser technology.
Further, a two-dimensional coordinate system can be established with the plane of the camera's image sensor as the XY plane; the position of the origin of this coordinate system is not further limited in the present application. The lens offset can be understood as the vector offset, in the two-dimensional coordinate system, of the current position of the lens after the shake relative to the starting position of the lens before the shake, that is, the vector distance of the current position after the shake relative to the initial position before the shake. The initial position can be understood as the lens position at which the distance between the lens and the image sensor equals one focal length of the lens.
It should be noted that the lens offset refers to the vector distance between the optical centers of the lens (convex lens) before and after the movement.
204: Determine the image offset corresponding to the lens offset according to a preset offset conversion function.
The electronic device acquires in advance a first image collected with the lens at the initial position and records the coordinate position of each pixel of the first image on the XY plane. When the camera shakes, the lens moves in the XY plane; that is, the second image collected by the electronic device at the current position after the shake is also offset on the XY plane relative to the first image. The offset of the second image relative to the first image is called the image offset. For example, the same feature pixel point can be selected in the first and second images: if the coordinates of this feature pixel point p1 on the XY plane in the first image are p1(X1, Y1) and the coordinates of the corresponding point p1' in the second image are (X2, Y2), the image offset d1 can be obtained from the feature pixel points p1 and p1'. However, when the camera shakes, the same feature pixel point cannot be directly selected in the first and second images, so the image offset cannot be obtained from the coordinates of the same feature pixel point in both images. In the embodiments of the present application, the image offset can be obtained by acquiring the lens offset and applying the preset offset conversion function.
The unit of the lens offset is code, while the unit of the image offset is the pixel. According to the preset offset conversion function, the lens offset can be converted into the image offset. The preset offset conversion function can be obtained through a specific calibration process and is used to convert the lens offset into the image offset. The offsets of the lens along the x axis and along the y axis in the XY plane can be substituted into the corresponding variables of the preset offset conversion function, and the corresponding image offset d1 is obtained by calculation.
206: Compensate the image collected by the camera when the shake occurs according to the image offset.
In the embodiments of the present application, the lens offset can be determined from the Hall value of the Hall sensor. The image collected by the camera when the shake occurs is called the first image, and the frequency at which the camera collects images is the image frequency. The image frequency and the Hall values are strictly synchronized in time sequence (by timestamp). For example, if images are collected at 30 Hz while Hall values are sampled at 200 Hz, one image corresponds to 6-7 Hall values in the time sequence.
The first image is compensated according to the obtained image offset. For example, if the currently calculated image offset is 1 pixel, the image is translated by 1 pixel in the negative direction of the offset during compensation, realizing the compensation of the image.
Further, in the embodiments of the present application, the image offsets corresponding to multiple Hall values may be used to correct the same frame of image; for example, the 6 image offsets corresponding to 6 Hall values may correct the same frame. Since the image obtained by the camera is scanned line by line by the CMOS sensor, different Hall values are applied to regions of different rows. For example, given six Hall values hall1-hall6, each corresponding to a unique image offset, denoted biaspixel1-biaspixel6: if the CMOS has scanned 6 rows, the 6 rows can be corrected row by row with biaspixel1-biaspixel6 respectively; if the CMOS has scanned 60 rows, block-wise correction can be performed, that is, the 60 rows are divided into 6 blocks of 10 rows each, and the 6 blocks are corrected block by block with biaspixel1-biaspixel6 respectively. That is, the 10 rows of the first block are all compensated using biaspixel1 as the correction parameter, and the 10 rows of the second block are compensated using biaspixel2 as the correction parameter.
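The row-block correction described above can be sketched with plain lists standing in for image rows. This is a minimal illustration under assumptions not stated in the patent: only a horizontal integer-pixel shift is applied, vacated pixels are zero-filled, and the function name and data layout are illustrative.

```python
def blockwise_compensate(rows, bias_pixels):
    """Shift each block of rows by the negative of its image offset.

    rows: list of H scanned rows, each a list of pixel values.
    bias_pixels: per-block horizontal image offsets, one per Hall value
        (e.g. biaspixel1..biaspixel6 derived from hall1..hall6).
    """
    block = len(rows) // len(bias_pixels)
    out = []
    for i, row in enumerate(rows):
        bias = bias_pixels[min(i // block, len(bias_pixels) - 1)]
        if bias > 0:        # offset to the right → shift content left
            out.append(row[bias:] + [0] * bias)
        elif bias < 0:      # offset to the left → shift content right
            out.append([0] * -bias + row[:bias])
        else:
            out.append(row[:])
    return out

# Six rows split into three blocks of two rows, one bias per block:
rows = [[1, 2, 3, 4]] * 6
fixed = blockwise_compensate(rows, [1, 0, -1])
```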
With the above image compensation method, the lens offset of the camera can be acquired when camera shake is detected; the image offset corresponding to the lens offset is determined according to the preset offset conversion function; and the image collected by the camera when the shake occurs is compensated according to the image offset. The image offset can thus be obtained more accurately, and the image can be compensated during shooting or real-time preview, improving image sharpness.
FIG. 3 is a flowchart of an image compensation method in another embodiment. In one embodiment, before determining the image offset corresponding to the lens offset according to the preset offset conversion function, the method further includes obtaining the preset offset conversion function, which specifically includes steps 302-308.
302: Drive a motor to move the lens of the camera along a preset trajectory, the preset trajectory including multiple feature displacement points.
A test target is fixed within the imaging range of the camera, and the motor is controlled to drive the lens of the camera along the preset trajectory. The preset trajectory may be a circle, an ellipse, a rectangle, or another preset trajectory. Multiple feature displacement points are set on the preset trajectory, and the distance between two adjacent feature displacement points may or may not be the same. The position information of a feature displacement point can be represented by its coordinates in the XY plane.
304: When the lens moves to each feature displacement point, correspondingly collect image information of the test target.
When the motor drives the lens of the camera along the preset trajectory, image information of the test target is collected at each feature displacement point. The test target may be a CTF (Contrast Transfer Function) target, an SFR (Spatial Frequency Response) target, a DB target, or another custom target. For example, when there are six feature displacement points, six pieces of image information of the test target need to be collected correspondingly.
306: Correspondingly acquire the first position information of each feature displacement point and the second position information of the same feature pixel point in the image information corresponding to that feature displacement point.
The position information of a feature displacement point q can be represented by the coordinates q(xi, yj) in the XY plane; that is, the first position information of a feature displacement point can be represented by q(xi, yj). For example, if there are six feature displacement points, their first position information is denoted q1(x1, y1), q2(x2, y2), q3(x3, y3), q4(x4, y4), q5(x5, y5), and q6(x6, y6). Each feature displacement point corresponds to one piece of image information of the test target, and that image information is composed of multiple pixels. That is, one or more feature pixel points p can be selected in the image information to obtain the second position information at the feature pixel point p, which can also be represented by the coordinates p(Xi, Yj) in the XY plane. The feature pixel point p may be a pixel near the center of the image information, the brightest pixel, or another pixel of particular significance; the specific position and definition of the feature pixel point are not further limited here.
When the lens is at the initial position, the feature displacement point is q0(x0, y0) and the feature pixel point in the collected image information of the test target is p0(X0, Y0); the feature displacement point q0(x0, y0) may be the origin, and the feature pixel point p0(X0, Y0) may correspondingly be the origin. That is, from a feature displacement point, the corresponding feature pixel point p(Xi, Yj), and the feature pixel point p0(X0, Y0) at the initial position, the image offset of the feature pixel point relative to the initial position can be obtained.
When the lens moves to feature displacement point q1(x1, y1), the feature pixel point p1(X1, Y1) in the image information of the test target is correspondingly obtained; similarly, q2(x2, y2) corresponds to p2(X2, Y2), q3(x3, y3) corresponds to p3(X3, Y3), q4(x4, y4) corresponds to p4(X4, Y4), q5(x5, y5) corresponds to p5(X5, Y5), and q6(x6, y6) corresponds to p6(X6, Y6).
308: Input the first position information and the second position information into a preset offset conversion model to determine the preset offset conversion function with calibration coefficients, where the number of feature displacement points is associated with the number of calibration coefficients.
The acquired first position information of each feature displacement point and the second position information of the corresponding feature pixel point are input into the preset offset conversion model; through analysis and computation, the coefficients of the preset offset conversion model are determined, yielding the preset offset conversion function with calibration coefficients. The preset offset conversion model may be a univariate quadratic function model, a bivariate quadratic function model, or a bivariate polynomial function model. The preset offset conversion model may be obtained by means of a neural network or deep learning, or by data fitting based on a large amount of acquired first and second position information.
Different preset offset conversion models require different numbers of feature displacement points, and the number of unknown coefficients in the preset offset conversion model is less than or equal to the number of feature displacement points.
For example, when the preset offset conversion model is a bivariate quadratic function model, it can be expressed by the following formula:
F(ΔX, ΔY) = ax^2 + by^2 + cxy + dx + ey + f
where (ΔX, ΔY) denotes the image offset of the feature pixel point corresponding to the current feature displacement point relative to the feature pixel point p0(X0, Y0) at the initial position; this image offset is a scalar offset, that is, the distance between the current feature pixel point and the feature pixel point p0(X0, Y0) at the initial position. x denotes the x-axis coordinate of the feature displacement point, and y denotes its y-axis coordinate.
In this embodiment, the feature pixel point p0(X0, Y0) in the image information of the test target corresponding to the feature displacement point q0(x0, y0) at the initial position is set as the coordinate origin. The image offsets corresponding to the six feature pixel points are then F1(ΔX1, ΔY1), F2(ΔX2, ΔY2), F3(ΔX3, ΔY3), F4(ΔX4, ΔY4), F5(ΔX5, ΔY5), and F6(ΔX6, ΔY6), where F1(ΔX1, ΔY1) is the image offset d1 between feature pixel points p1 and p0, F2(ΔX2, ΔY2) is the image offset d2 between p2 and p0, F3(ΔX3, ΔY3) is the image offset d3 between p3 and p0, F4(ΔX4, ΔY4) is the image offset d4 between p4 and p0, F5(ΔX5, ΔY5) is the image offset d5 between p5 and p0, and F6(ΔX6, ΔY6) is the image offset d6 between p6 and p0.
Inputting the six acquired feature displacement points q1(x1, y1) to q6(x6, y6), together with the image offsets d1 to d6 corresponding to the six feature pixel points p1(X1, Y1) to p6(X6, Y6), into the bivariate quadratic function model gives the following equations:
F1(ΔX1, ΔY1) = ax1^2 + by1^2 + cx1y1 + dx1 + ey1 + f;
F2(ΔX2, ΔY2) = ax2^2 + by2^2 + cx2y2 + dx2 + ey2 + f;
F3(ΔX3, ΔY3) = ax3^2 + by3^2 + cx3y3 + dx3 + ey3 + f;
F4(ΔX4, ΔY4) = ax4^2 + by4^2 + cx4y4 + dx4 + ey4 + f;
F5(ΔX5, ΔY5) = ax5^2 + by5^2 + cx5y5 + dx5 + ey5 + f;
F6(ΔX6, ΔY6) = ax6^2 + by6^2 + cx6y6 + dx6 + ey6 + f.
The bivariate quadratic function model contains six unknown coefficients a, b, c, d, e, and f, which can be solved from the above six equations. Substituting the obtained coefficients a, b, c, d, e, and f into the bivariate quadratic function model yields the corresponding preset offset conversion function, in which a, b, c, d, e, and f are the calibration coefficients of the preset offset conversion function.
Of course, more feature displacement points, such as q7(x7, y7) and q8(x8, y8), together with the image offsets d7 and d8 corresponding to their feature pixel points p7(X7, Y7) and p8(X8, Y8), may also be acquired and input into the above bivariate quadratic function model; six of the eight equations are then selected for calculation to determine the preset offset conversion function.
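The six-equation solve just described can be sketched as follows. A synthetic check with assumed coefficient values stands in for measured calibration data, and a small hand-rolled Gauss-Jordan elimination stands in for a linear-algebra library; none of the names come from the patent.

```python
def solve_linear(A, b):
    """Solve A·coeffs = b for a small square system (partial pivoting)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                factor = M[r][col] / M[col][col]
                M[r] = [mr - factor * mc for mr, mc in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def calibrate(points, offsets):
    """Solve for [a, b, c, d, e, f] from six (x, y) displacement points
    and their measured image offsets F(x, y)."""
    A = [[x * x, y * y, x * y, x, y, 1.0] for x, y in points]
    return solve_linear(A, offsets)

# Synthetic check: generate offsets from known coefficients, then recover them.
true_coeffs = [0.5, -0.2, 0.1, 2.0, -1.0, 3.0]
pts = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 1), (1, 2)]
F = [true_coeffs[0] * x * x + true_coeffs[1] * y * y + true_coeffs[2] * x * y
     + true_coeffs[3] * x + true_coeffs[4] * y + true_coeffs[5]
     for x, y in pts]
recovered = calibrate(pts, F)
```

The six points must not all lie on a single conic, otherwise the system is singular; in practice the preset trajectory spreads them out over the XY plane.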
With the image compensation method in this embodiment, the corresponding preset offset conversion function can be obtained from the preset offset conversion model, multiple feature displacement points, and the corresponding feature pixel points. This preset offset conversion function can obtain the image offset value accurately and efficiently directly from the lens offset, with higher calibration efficiency and precision, laying a good foundation for image compensation.
FIG. 4 is a flowchart of inputting the first position information and the second position information into a preset offset conversion model to determine the preset offset conversion function, in an embodiment. In one embodiment, the preset offset conversion model is a bivariate polynomial function, and inputting the first position information and the second position information into the preset offset conversion model to determine the preset offset conversion function includes:
402: Determine the number of feature displacement points according to the unknown coefficients of the bivariate polynomial function.
The preset offset conversion model is a bivariate polynomial function model, expressed as follows:
F(ΔX, ΔY) = ax^n + by^n + ... + cxy + dx + ey + f,
where n ≥ 2; (ΔX, ΔY) denotes the image offset of the feature pixel point corresponding to the current feature displacement point relative to the original feature pixel point p0(X0, Y0), and this image offset is a scalar offset. x denotes the x-axis coordinate of the feature displacement point, and y denotes its y-axis coordinate. a, b, ..., c, d, e, and f are the unknown coefficients of the preset offset conversion model.
When the preset offset conversion model is a bivariate polynomial function model, the number of unknown coefficients a, b, ..., c, d, e, f is greater than or equal to 6. Specifically, the number of unknown coefficients of the offset conversion model can be obtained; for example, when the preset offset conversion model is a bivariate quadratic function model with 6 unknown coefficients, at least 6 feature displacement points need to be acquired correspondingly. As another example, when the preset offset conversion model is a bivariate cubic function model, the model is:
F(ΔX, ΔY) = ax^3 + by^3 + gx^2y + hxy^2 + ix^2y + cxy + dx + ey + f
with 9 unknown coefficients, so at least 9 feature displacement points need to be acquired correspondingly. It follows that the number of feature displacement points is greater than or equal to the number of unknown coefficients of the preset offset conversion model.
404: Input the determined first position information of each feature displacement point and the second position information corresponding to the first position information into the preset offset conversion model to determine the unknown coefficients.
According to the determined number of unknown coefficients in the preset offset conversion model, that number of feature displacement points can be selected on the preset trajectory, with all feature displacement points distinct.
Optionally, the feature displacement points may also be arbitrary non-repeating position points in the XY plane.
From the determined number of feature displacement points, the feature pixel point in the image information of the test target corresponding to each feature displacement point, together with the image offset corresponding to each feature pixel point, can be obtained. The coordinate information of each feature displacement point and the corresponding image offset are input into the preset offset conversion model to solve for the unknown coefficients of the model.
406: Determine the preset offset conversion function with calibration coefficients according to the determined unknown coefficients and the preset offset conversion model.
Substituting the obtained unknown coefficients into the preset offset conversion model yields the preset offset conversion function with calibration coefficients. The calibration coefficients in the preset offset conversion function can be understood as the unknown coefficients solved in the preset offset conversion model; the preset offset conversion model whose unknown coefficients have been determined is called the preset offset conversion function.
In one embodiment, when the preset offset conversion model is a bivariate quadratic function model, determining the image offset corresponding to the lens offset according to the preset offset conversion function includes:
acquiring the preset offset conversion function, expressed as:
F(ΔX, ΔY) = ax^2 + by^2 + cxy + dx + ey + f
where a, b, c, d, e, and f are the calibration coefficients, that is, known coefficients; F(ΔX, ΔY) denotes the current image offset; and x and y denote the x-axis and y-axis coordinates of the current lens offset, respectively. For example, if the current lens offset is p(2, 1), the corresponding image offset F(ΔX, ΔY) is 4a + b + 2c + 2d + e + f; with the determined calibration coefficients, the image offset F(ΔX, ΔY), a scalar offset, can be obtained.
The image offset corresponding to the lens offset can then be determined from the preset offset conversion function. That is, once the lens offset is acquired, the current lens offset can be converted into the image offset according to the preset offset conversion function. The preset offset conversion function is a bivariate quadratic function; it comprehensively considers both the x-axis offset and the y-axis offset of the lens, so the lens offset can be converted into the image offset more accurately and efficiently.
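Applying the calibrated function is a direct evaluation. A minimal sketch, with assumed coefficient values for illustration (the coefficients would come from the calibration step in practice):

```python
def image_offset(coeffs, x, y):
    """Evaluate F(x, y) = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f.

    coeffs: calibration coefficients (a, b, c, d, e, f).
    x, y: coordinates of the lens offset on the XY plane.
    """
    a, b, c, d, e, f = coeffs
    return a * x * x + b * y * y + c * x * y + d * x + e * y + f

# With lens offset p(2, 1) the function reduces to 4a + b + 2c + 2d + e + f:
coeffs = (0.5, -0.2, 0.1, 2.0, -1.0, 3.0)   # hypothetical values
off = image_offset(coeffs, 2, 1)
```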
FIG. 5 is a flowchart of acquiring the lens offset of the camera when shake of the camera is detected, in an embodiment. In one embodiment, acquiring the lens offset of the camera when shake of the camera is detected includes:
502: Acquire angular velocity information of the camera based on a gyro sensor.
The camera further includes a gyro sensor for detecting whether the camera shakes, a motor for driving the movement of the camera's lens, and an OIS controller for controlling the motion of the motor.
When the gyro sensor detects that the camera shakes, the angular velocity of the camera detected by the gyro sensor is collected in real time, and the shake amount of the camera is determined from the acquired angular velocity.
504: Control the motor to drive the movement of the camera's lens according to the angular velocity information.
The motor is controlled according to the determined shake amount to drive the movement of the camera's lens; the lens movement is opposite in direction to the shake amount, so as to cancel the offset caused by the shake.
506: Determine the lens offset of the camera based on the Hall value of the Hall sensor.
The electronic device can record, through a Hall sensor or a laser, the offset ticks of the camera's lens on the XY plane, and, while recording the offset ticks, also record the direction of the offset; from the distance corresponding to each tick and the offset direction, the lens offset p(xi, yj) is obtained. In the embodiments of the present application, once the magnitude of the Hall value collected by the Hall sensor is known, the magnitude of the lens offset at the current moment can be uniquely determined. In an OIS system, the lens offset is on the order of micrometers.
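The tick-to-offset step just described can be sketched in one function. The per-tick distance and the tick counts below are hypothetical values; the patent only states that each tick corresponds to a known distance and that the sign encodes the direction.

```python
def lens_offset(ticks_x, ticks_y, um_per_tick):
    """Vector lens offset on the XY plane, in micrometers.

    ticks_x / ticks_y: signed Hall-scale tick counts (sign = direction).
    um_per_tick: distance corresponding to one tick.
    """
    return (ticks_x * um_per_tick, ticks_y * um_per_tick)

# 12 ticks along +x, 5 ticks along -y, 0.5 µm per tick → (6.0, -2.5) µm,
# consistent with the micrometer-scale offsets of an OIS system
p = lens_offset(12, -5, 0.5)
```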
The angular velocity information collected by the gyro sensor corresponds in time sequence to the Hall values collected by the Hall sensor.
A Hall sensor is a magnetic field sensor made according to the Hall effect. The Hall effect is essentially the deflection of moving charged particles in a magnetic field caused by the Lorentz force. When the charged particles (electrons or holes) are confined in a solid material, this deflection leads to an accumulation of positive and negative charges in the direction perpendicular to the current and the magnetic field, thus forming an additional transverse electric field.
Further, in 506, determining the lens offset of the camera based on the Hall value of the Hall sensor includes: acquiring a first frequency at which the camera collects images and a second frequency at which the gyroscope collects angular velocity information; determining, according to the first frequency and the second frequency, multiple pieces of angular velocity information corresponding to one frame of image; determining target angular velocity information from the multiple pieces of angular velocity information; and determining the lens offset of the camera according to the Hall value corresponding to the target angular velocity information.
Specifically, the first frequency at which the camera collects images and the second frequency at which the gyroscope collects angular velocity information are acquired. The acquisition frequency of the gyro sensor is higher than the image acquisition frequency of the camera; for example, if the camera collects images at 30 Hz while the gyro sensor collects angular velocity at 200 Hz, the time taken to collect one image corresponds to 6-7 angular velocity samples in the time sequence. The target angular velocity is selected from the 6-7 collected angular velocity samples; it may be the minimum angular velocity, the angular velocity with the smallest derivative, or the angular velocity with the smallest difference from the average angular velocity. The Hall value of the Hall sensor corresponding to the target angular velocity is obtained, and the lens offset is determined from that Hall value.
FIG. 6 is a flowchart of an image compensation method in yet another embodiment. In one embodiment, the camera includes at least a first camera and a second camera. The first camera and the second camera may both have the OIS function, or only one of them may have the OIS function; this is not further limited in the embodiments of the present application. The embodiments of the present application place no limitation on the performance parameters (for example, focal length, aperture size, resolution, etc.) of the first camera and the second camera. In some embodiments, the first camera may be either a telephoto camera or a wide-angle camera, and the second camera may be either a telephoto camera or a wide-angle camera. The first camera and the second camera may be disposed in the same plane of the electronic device, for example, both on the back or both on the front of the electronic device. The installation distance of the dual cameras on the electronic device can be determined according to the size of the terminal and/or the shooting effect. In some embodiments, in order to give the objects shot by the left and right cameras (the first camera and the second camera) a high degree of overlap, the left and right cameras may be installed as close together as possible, for example, within 10 mm.
In one embodiment, the image compensation method further includes:
602: When shake of the first camera and the second camera is detected, acquire a first lens offset of the first camera and a second lens offset of the second camera and, at the same moment, a first image and a second image of a target object captured by the first camera and the second camera.
According to the method of 202 in the foregoing embodiments, when the first camera and/or the second camera shakes, the first lens offset of the first camera and/or the second lens offset of the second camera can be obtained based on a Hall sensor. When one of the cameras does not shift, its corresponding lens offset is 0.
Meanwhile, while acquiring the first lens offset and/or the second lens offset, a first image of the target object captured by the first camera and a second image containing the target object captured by the second camera can also be obtained.
604: Determine, according to a preset offset conversion function, a first image offset corresponding to the first lens offset and a second image offset corresponding to the second lens offset.
According to the method of 204 in the foregoing embodiments, the first image offset corresponding to the first lens offset and the second image offset corresponding to the second lens offset are determined according to the preset offset conversion function. For example, the preset offset conversion function can be expressed as:
F(ΔX, ΔY) = ax^2 + by^2 + cxy + dx + ey + f
where a, b, c, d, e, and f are the calibration coefficients; F(ΔX, ΔY) is the image offset; and x and y are the coordinates of the lens offset in the X and Y directions of the XY plane, respectively. Substituting the acquired first lens offset into the above preset offset conversion function converts the first lens offset into the first image offset; correspondingly, substituting the acquired second lens offset into the above preset offset conversion function converts the second lens offset into the second image offset.
606,根据第一图像偏移对所述第一图像进行补偿,及根据所述第二图像偏移对所述第二图像进行补偿,以获取补偿后的第一图像与第二图像中同一特征拍摄物之间的距离信息。
根据前述实施例中204的方法,可以根据第一图像偏移对所述第一图像进行补偿,及根据所述第二图像偏移对所述第二图像进行补偿。分别获取补偿后的第一图像和补偿后的第二图像,并获取补偿后的第一图像与第二图像中同一特征拍摄物之间的距离信息。
其中,距离信息为矢量距离,可以是补偿后的第一图像与第二图像重叠之后映射在XY平面上,进而获取补偿后的两个图像中目标对象之间的坐标距离。
具体地,距离信息可以是将补偿后的第一图像与第二图像重叠之后映射在XY平面上,并获取补偿后的两个图像中目标对象的同一特征像素点的坐标之间的矢量距离;也可以是在XY平面上获取补偿后第一图像的多个特征像素点,并针对每个特征像素点,在补偿后的第二图像中对应获取与该特征像素点具有相同特征的特征像素点。针对每个特征像素点,可以获取补偿后的两个图像中同一特征像素点的坐标之间的矢量距离,在根据获取的多个矢量距离计算平均值,并将该平均值作为补偿后的第一图像与第二图像中同一特征拍摄物之间的距离信息。
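The averaging of per-feature vector distances can be sketched as below, assuming the feature matching has already been done so that `pts_a[i]` and `pts_b[i]` are the XY-plane coordinates of the same characteristic pixel in the two compensated images:

```python
import math

def mean_feature_distance(pts_a, pts_b):
    """Mean Euclidean (vector) distance between matched characteristic
    pixels of the compensated first and second images."""
    if len(pts_a) != len(pts_b) or not pts_a:
        raise ValueError("need equally sized, non-empty point lists")
    dists = [math.hypot(ax - bx, ay - by)
             for (ax, ay), (bx, by) in zip(pts_a, pts_b)]
    return sum(dists) / len(dists)
```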
608: Determine depth-of-field information of the target object according to the distance information, the first camera, and the second camera.
The first camera and the second camera lie in the same plane, so the distance between the two cameras and the focal lengths of the first camera and the second camera can be acquired, the two focal lengths being equal. Based on the principle of triangulation, the distance Z between the target object and the plane in which the two cameras lie can be obtained, where Z is the depth-of-field information of the target object. Specifically, Z = (distance between the two cameras) × (focal length of the first or second camera) / (distance information).
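The triangulation relation stated above can be written directly; units are whatever the caller uses consistently (a real implementation would convert pixel disparity and focal length to common units with care):

```python
def depth_from_disparity(baseline, focal_length, distance_info):
    """Z = baseline * focal_length / distance_info, the pinhole-stereo
    triangulation relation described in step 608."""
    if distance_info <= 0:
        raise ValueError("distance information must be positive")
    return baseline * focal_length / distance_info

# Example: 10 mm baseline, focal length 100, disparity 5 -> Z = 200.
z = depth_from_disparity(10.0, 100.0, 5.0)
```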
Optionally, the depth-of-field information of the target object may also be determined based on proportional relationships such as the displacement difference and attitude difference between the images formed by the first camera and the second camera.
Optionally, the present solution is also applicable to an electronic device including three or more cameras, at least one of which has the OIS function. Taking three cameras as an example, pairwise camera combinations can be formed, each combination containing at least one camera with the OIS function. The two cameras in each combination can acquire depth information of the target object, yielding three sets of depth information, and the average of the three sets may be taken as the actual depth of the target object.
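The pairwise-combination-and-average scheme for three or more cameras can be sketched as below; `depth_between` stands for any per-pair depth estimator (for example, the triangulation of step 608) and is an assumed callback, not part of the original disclosure:

```python
from itertools import combinations

def fused_depth(cameras, depth_between):
    """Average the depth estimates from every camera pair.

    cameras: sequence of camera identifiers (three or more);
    depth_between(a, b): returns the depth estimated by the pair (a, b).
    """
    pairs = list(combinations(cameras, 2))
    return sum(depth_between(a, b) for a, b in pairs) / len(pairs)
```

With three cameras this averages exactly three pairwise estimates, matching the description above.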
In this embodiment, the first image and the second image captured while the first camera and the second camera shake can be compensated, and the depth-of-field information of the target object can then be acquired from the compensated first and second images, so that the acquired depth-of-field information is more accurate.
FIG. 7 is a structural diagram of an image compensation apparatus according to an embodiment. An embodiment of the present application further provides an image compensation apparatus, applied to a camera equipped with an optical image stabilization system, the apparatus including:
a lens offset acquisition module 710, configured to acquire the lens offset of the camera when camera shake is detected;
an image offset acquisition module 720, configured to determine, according to a preset offset conversion function, the image offset corresponding to the lens offset; and
an image compensation module 730, configured to compensate, according to the image offset, the image captured by the camera when the shake occurs.
The above image compensation apparatus can acquire the lens offset of the camera when camera shake is detected, determine the image offset corresponding to the lens offset according to the preset offset conversion function, and compensate the image captured by the camera when the shake occurs according to the image offset. The image offset can thus be acquired more accurately, and the image can be compensated during shooting or real-time preview, improving image sharpness.
In one embodiment, the image compensation apparatus further includes:
a lens driving module, configured to drive a motor to move the lens of the camera along a preset trajectory, the preset trajectory including multiple feature displacement points;
an image collection module, configured to correspondingly collect image information of a test chart when the lens moves to each of the feature displacement points;
a position acquisition module, configured to correspondingly acquire first position information of each feature displacement point and second position information of the same characteristic pixel in the image information collected at that feature displacement point; and
a function determination module, configured to input the first position information and the second position information into a preset offset conversion model to determine the preset offset conversion function with calibration coefficients, where the number of feature displacement points is associated with the number of calibration coefficients.
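The calibration just described (feature displacement points in, calibration coefficients out) amounts to solving for the six unknowns of the bivariate quadratic. The sketch below assumes exactly six well-placed displacement points, so the coefficients follow from a 6×6 linear system; a real calibration would likely use more points and least squares. The point set and helper names are illustrative:

```python
def solve_linear(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

def fit_offset_coeffs(lens_pts, image_offsets):
    """Fit (a, b, c, d, e, f) in F = a x^2 + b y^2 + c x y + d x + e y + f
    from six feature displacement points and the measured image offsets."""
    A = [[x * x, y * y, x * y, x, y, 1.0] for x, y in lens_pts]
    return solve_linear(A, image_offsets)
```

This mirrors the stated link between the number of feature displacement points and the number of calibration coefficients: six unknowns require at least six points.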
In one embodiment, the function determination module includes:
a quantity determination unit, configured to determine the number of feature displacement points according to the unknown coefficients of the bivariate polynomial function;
a coefficient determination unit, configured to input the determined first position information of each feature displacement point, and the second position information corresponding to the first position information, into the preset offset conversion model to determine the unknown coefficients; and
a function determination unit, configured to determine the preset offset conversion function with calibration coefficients according to the determined unknown coefficients and the preset offset conversion model.
In one embodiment, the image offset acquisition module includes:
a function acquisition unit, configured to acquire the preset offset conversion function, expressed as:
F(ΔX, ΔY) = ax² + by² + cxy + dx + ey + f
where a, b, c, d, e, and f are the calibration coefficients; F(ΔX, ΔY) is the image offset; and x and y are the coordinates of the lens offset in the X and Y directions of the XY plane; and
an offset conversion unit, configured to determine the image offset corresponding to the lens offset according to the preset offset conversion function.
In one embodiment, the image offset acquisition module includes:
an angular velocity acquisition unit, configured to acquire angular velocity information of the camera based on a gyroscope sensor;
a motor driving unit, configured to control a motor to drive the lens of the camera to move according to the angular velocity information; and
a lens offset unit, configured to determine the lens offset of the camera based on the Hall value of a Hall sensor.
Further, the lens offset unit is further configured to acquire a first frequency at which the camera captures images and a second frequency at which the gyroscope collects angular velocity information; determine, according to the first frequency and the second frequency, the multiple pieces of angular velocity information corresponding to the capture of one image frame; determine target angular velocity information from the multiple pieces of angular velocity information; and determine the lens offset of the camera according to the Hall value corresponding to the target angular velocity information.
In one embodiment, the cameras include at least a first camera and a second camera, and the image compensation apparatus further includes:
an acquisition module, configured to acquire, when shake of the first camera and the second camera is detected, a first lens offset of the first camera and a second lens offset of the second camera, as well as a first image and a second image of a target object captured by the first camera and the second camera at the same moment;
a conversion module, configured to determine, according to the preset offset conversion function, a first image offset corresponding to the first lens offset and a second image offset corresponding to the second lens offset;
a compensation module, configured to compensate the first image according to the first image offset and the second image according to the second image offset, so as to acquire distance information between the same characteristic photographed object in the compensated first image and second image; and
a depth-of-field module, configured to determine depth-of-field information of the target object according to the distance information, the first camera, and the second camera.
In this embodiment, the first image and the second image captured while the first camera and the second camera shake can be compensated, and the depth-of-field information of the target object can then be acquired from the compensated first and second images, so that the acquired depth-of-field information is more accurate.
The division of the modules of the above image compensation apparatus is merely illustrative; in other embodiments, the image compensation apparatus may be divided into different modules as needed to implement all or part of its functions.
An embodiment of the present application further provides a computer-readable storage medium: one or more non-volatile computer-readable storage media containing computer-executable instructions which, when executed by one or more processors, cause the processors to perform the image compensation method of any of the above embodiments.
An embodiment of the present application further provides an electronic device. The electronic device includes an image processing circuit, which may be implemented with hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. FIG. 8 is a schematic diagram of an image processing circuit according to an embodiment. As shown in FIG. 8, for ease of description, only the aspects of the image compensation technique relevant to the embodiments of the present application are shown.
As shown in FIG. 8, the image processing circuit includes an ISP processor 840 and a control logic 850. Image data captured by an imaging device 810 is first processed by the ISP processor 840, which analyzes the image data to capture image statistics usable for determining one or more control parameters of the imaging device 810. The imaging device 810 may include a camera having one or more lenses 812 and an image sensor 814. The image sensor 814 may include a color filter array (such as a Bayer filter); it may acquire the light intensity and wavelength information captured by each imaging pixel of the image sensor 814 and provide a set of raw image data that can be processed by the ISP processor 840. A sensor 820 (such as a gyroscope) may provide image compensation parameters (such as anti-shake parameters) to the ISP processor 840 based on the sensor 820 interface type. The sensor 820 interface may be an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of the above.
In addition, the image sensor 814 may also send raw image data to the sensor 820; the sensor 820 may provide the raw image data to the ISP processor 840 based on the sensor 820 interface type, or store the raw image data in an image memory 830.
The ISP processor 840 processes the raw image data pixel by pixel in multiple formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 840 may perform one or more image compensation operations on the raw image data and collect statistics about the image data. The image compensation operations may be performed at the same or different bit-depth precision.
The ISP processor 840 may also receive pixel data from the image memory 830. For example, the sensor 820 interface sends raw image data to the image memory 830, and the raw image data in the image memory 830 is then provided to the ISP processor 840 for processing. The image memory 830 may be part of a memory device, a storage device, or a separate dedicated memory within the electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from the image sensor 814 interface, the sensor 820 interface, or the image memory 830, the ISP processor 840 may perform one or more image compensation operations, such as temporal filtering. The image data processed by the ISP processor 840 may be sent to the image memory 830 for further processing before being displayed. The ISP processor 840 receives the processed data from the image memory 830 and processes it in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to a display 880 for viewing by a user and/or for further processing by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the ISP processor 840 may also be sent to the image memory 830, and the display 880 may read image data from the image memory 830. In one embodiment, the image memory 830 may be configured to implement one or more frame buffers.
The output of the ISP processor 840 may also be sent to an encoder/decoder 870 to encode/decode the image data; the encoded image data may be saved and decompressed before being displayed on the display 880. The image data processed by the ISP processor 840 may thus first pass through the encoder/decoder 870, which may be a CPU (Central Processing Unit) or GPU in a mobile terminal, or the like.
The statistics determined by the ISP processor 840 may be sent to the control logic 850. For example, the statistics may include image sensor 814 statistics such as auto exposure, auto white balance, auto focus, flicker detection, black level compensation, and lens 812 shading compensation. The control logic 850 may include a processor and/or microcontroller executing one or more routines (such as firmware) that determine, from the received statistics, control parameters of the imaging device 810 and of the ISP processor 840. For example, the control parameters of the imaging device 810 may include sensor 820 control parameters (such as gain, integration time for exposure control, and anti-shake parameters), camera flash control parameters, lens 812 control parameters (such as focal length for focusing or zooming), or combinations of these parameters. The ISP control parameters may include gain levels and color compensation matrices for auto white balance and color adjustment (for example, during RGB processing), as well as lens 812 shading compensation parameters.
FIG. 9 is a schematic diagram of an image processing circuit according to another embodiment. As shown in FIG. 9, for ease of description, only the aspects of the image compensation technique relevant to the embodiments of the present application are shown.
The first camera 100 may include one or more lenses 1202 and a first image sensor 140. The first image sensor 140 may include a color filter array (such as a Bayer filter); it may acquire the light intensity and wavelength information captured by each of its imaging pixels and provide a set of raw image data that can be processed by a first ISP processor 912. After the first ISP processor 912 processes the first image, it may send statistics of the first image (such as the brightness, contrast value, and color of the image) to a control logic 920, which may determine control parameters of the first camera 100 from the statistics, so that the first camera 100 can perform auto focus, auto exposure, OIS anti-shake, and other operations according to the control parameters. The first image, after being processed by the first ISP processor 912, may be stored in an image memory 950, and the first ISP processor 912 may also read images stored in the image memory 950 for processing. In addition, the first image, after being processed by the first ISP processor 912, may be sent directly to a display 970 for display, and the display 970 may also read images from the image memory 950 for display.
The processing flow of the second camera is the same as that of the first camera; the functions of its image sensor and ISP processor are as described for the single-camera case.
It should be understood that the first ISP processor 912 and a second ISP processor 914 may also be combined into a unified ISP processor that processes the data of the first image sensor and the second image sensor respectively.
In addition, although not shown in the figure, the circuit further includes a CPU and a power supply module. The CPU is connected to the control logic 920, the first ISP processor 912, the second ISP processor 914, the image memory 950, and the display 970, and implements global control. The power supply module supplies power to the respective modules.
Generally, in a mobile phone with dual cameras, both cameras work in certain shooting modes (for example, portrait mode); in this case, the CPU controls the power supply module to supply power to the first camera and the second camera. With the image sensors of both cameras powered on, image capture and conversion can be performed. In certain other shooting modes (for example, photo mode), only one of the cameras works by default, for example, only the telephoto camera; in this case, the CPU controls the power supply module to supply power to the image sensor of the corresponding camera only.
A person of ordinary skill in the art will understand that all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing the relevant hardware. The program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), or the like.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent of the present application. It should be noted that a person of ordinary skill in the art may make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of the patent of the present application shall be subject to the appended claims.

Claims (20)

  1. An image compensation method, applied to a camera equipped with an optical image stabilization system, the method comprising:
    acquiring a lens offset of the camera when shake of the camera is detected;
    determining, according to a preset offset conversion function, an image offset corresponding to the lens offset; and
    compensating, according to the image offset, an image captured by the camera when the shake occurs.
  2. The method according to claim 1, wherein before determining, according to the preset offset conversion function, the image offset corresponding to the lens offset, the method further comprises:
    driving a motor to move a lens of the camera along a preset trajectory, the preset trajectory comprising a plurality of feature displacement points;
    correspondingly collecting image information of a test chart when the lens moves to each of the feature displacement points;
    correspondingly acquiring first position information of each feature displacement point and second position information of a same characteristic pixel in the image information collected at the feature displacement point; and
    inputting the first position information and the second position information into a preset offset conversion model to determine the preset offset conversion function with calibration coefficients, wherein the number of the feature displacement points is associated with the number of the calibration coefficients.
  3. The method according to claim 2, wherein the preset offset conversion model is a bivariate polynomial function, and inputting the first position information and the second position information into the preset offset conversion model to determine the preset offset conversion function comprises:
    determining the number of the feature displacement points according to unknown coefficients of the bivariate polynomial function;
    inputting the determined first position information of each feature displacement point, and the second position information corresponding to the first position information, into the preset offset conversion model to determine the unknown coefficients; and
    determining the preset offset conversion function with calibration coefficients according to the determined unknown coefficients and the preset offset conversion model.
  4. The method according to claim 1, wherein determining, according to the preset offset conversion function, the image offset corresponding to the lens offset comprises:
    acquiring the preset offset conversion function, expressed as:
    F(ΔX, ΔY) = ax² + by² + cxy + dx + ey + f
    where a, b, c, d, e, and f are the calibration coefficients, F(ΔX, ΔY) is the image offset, and x and y are the coordinates of the lens offset in the X and Y directions of the XY plane; and
    determining, according to the preset offset conversion function, the image offset corresponding to the lens offset.
  5. The method according to claim 1, wherein acquiring the lens offset of the camera when shake of the camera is detected comprises:
    acquiring angular velocity information of the camera based on a gyroscope sensor;
    controlling, according to the angular velocity information, a motor to drive a lens of the camera to move; and
    determining the lens offset of the camera based on a Hall value of a Hall sensor.
  6. The method according to claim 5, wherein determining the lens offset of the camera based on the Hall value of the Hall sensor comprises:
    acquiring a first frequency at which the camera captures images and a second frequency at which the gyroscope collects angular velocity information;
    determining, according to the first frequency and the second frequency, a plurality of pieces of angular velocity information corresponding to capture of one image frame; and
    determining target angular velocity information from the plurality of pieces of angular velocity information, and determining the lens offset of the camera according to a Hall value corresponding to the target angular velocity information.
  7. The method according to claim 1, wherein the camera comprises at least a first camera and a second camera, and the method further comprises:
    acquiring, when shake of the first camera and/or the second camera is detected, a first lens offset of the first camera and a second lens offset of the second camera, as well as a first image and a second image each containing a target object and captured by the first camera and the second camera respectively at a same moment;
    determining, according to the preset offset conversion function, a first image offset corresponding to the first lens offset and a second image offset corresponding to the second lens offset;
    compensating the first image according to the first image offset and the second image according to the second image offset, so as to acquire distance information between a same characteristic photographed object in the compensated first image and the compensated second image; and
    determining depth-of-field information of the target object according to the distance information, the first camera, and the second camera.
  8. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the following operations:
    acquiring a lens offset of a camera when shake of the camera is detected;
    determining, according to a preset offset conversion function, an image offset corresponding to the lens offset; and
    compensating, according to the image offset, an image captured by the camera when the shake occurs.
  9. The computer-readable storage medium according to claim 8, wherein before the computer program is executed by the processor to determine, according to the preset offset conversion function, the image offset corresponding to the lens offset, the following operations are further performed:
    driving a motor to move a lens of the camera along a preset trajectory, the preset trajectory comprising a plurality of feature displacement points;
    correspondingly collecting image information of a test chart when the lens moves to each of the feature displacement points;
    correspondingly acquiring first position information of each feature displacement point and second position information of a same characteristic pixel in the image information collected at the feature displacement point; and
    inputting the first position information and the second position information into a preset offset conversion model to determine the preset offset conversion function with calibration coefficients, wherein the number of the feature displacement points is associated with the number of the calibration coefficients.
  10. The computer-readable storage medium according to claim 9, wherein the preset offset conversion model is a bivariate polynomial function, and when the computer program is executed by the processor to input the first position information and the second position information into the preset offset conversion model to determine the preset offset conversion function, the following operations are further performed:
    determining the number of the feature displacement points according to unknown coefficients of the bivariate polynomial function;
    inputting the determined first position information of each feature displacement point, and the second position information corresponding to the first position information, into the preset offset conversion model to determine the unknown coefficients; and
    determining the preset offset conversion function with calibration coefficients according to the determined unknown coefficients and the preset offset conversion model.
  11. The computer-readable storage medium according to claim 8, wherein when the computer program is executed by the processor to determine, according to the preset offset conversion function, the image offset corresponding to the lens offset, the following operations are further performed:
    acquiring the preset offset conversion function, expressed as:
    F(ΔX, ΔY) = ax² + by² + cxy + dx + ey + f
    where a, b, c, d, e, and f are the calibration coefficients, F(ΔX, ΔY) is the image offset, and x and y are the coordinates of the lens offset in the X and Y directions of the XY plane; and
    determining, according to the preset offset conversion function, the image offset corresponding to the lens offset.
  12. The computer-readable storage medium according to claim 8, wherein when the computer program is executed by the processor to acquire the lens offset of the camera when shake of the camera is detected, the following operations are further performed:
    acquiring angular velocity information of the camera based on a gyroscope sensor;
    controlling, according to the angular velocity information, a motor to drive a lens of the camera to move; and
    determining the lens offset of the camera based on a Hall value of a Hall sensor.
  13. The computer-readable storage medium according to claim 12, wherein when the computer program is executed by the processor to determine the lens offset of the camera based on the Hall value of the Hall sensor, the following operations are further performed:
    acquiring a first frequency at which the camera captures images and a second frequency at which the gyroscope collects angular velocity information;
    determining, according to the first frequency and the second frequency, a plurality of pieces of angular velocity information corresponding to capture of one image frame; and
    determining target angular velocity information from the plurality of pieces of angular velocity information, and determining the lens offset of the camera according to a Hall value corresponding to the target angular velocity information.
  14. The computer-readable storage medium according to claim 8, wherein the camera comprises at least a first camera and a second camera, and when the computer program is executed by the processor, the following operations are further performed:
    acquiring, when shake of the first camera and/or the second camera is detected, a first lens offset of the first camera and a second lens offset of the second camera, as well as a first image and a second image each containing a target object and captured by the first camera and the second camera respectively at a same moment;
    determining, according to the preset offset conversion function, a first image offset corresponding to the first lens offset and a second image offset corresponding to the second lens offset;
    compensating the first image according to the first image offset and the second image according to the second image offset, so as to acquire distance information between a same characteristic photographed object in the compensated first image and the compensated second image; and
    determining depth-of-field information of the target object according to the distance information, the first camera, and the second camera.
  15. An electronic device, comprising a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the following operations:
    acquiring a lens offset of a camera when shake of the camera is detected;
    determining, according to a preset offset conversion function, an image offset corresponding to the lens offset; and
    compensating, according to the image offset, an image captured by the camera when the shake occurs.
  16. The electronic device according to claim 15, wherein before the processor determines, according to the preset offset conversion function, the image offset corresponding to the lens offset, the processor further performs the following operations:
    driving a motor to move a lens of the camera along a preset trajectory, the preset trajectory comprising a plurality of feature displacement points;
    correspondingly collecting image information of a test chart when the lens moves to each of the feature displacement points;
    correspondingly acquiring first position information of each feature displacement point and second position information of a same characteristic pixel in the image information collected at the feature displacement point; and
    inputting the first position information and the second position information into a preset offset conversion model to determine the preset offset conversion function with calibration coefficients, wherein the number of the feature displacement points is associated with the number of the calibration coefficients.
  17. The electronic device according to claim 16, wherein the preset offset conversion model is a bivariate polynomial function, and when inputting the first position information and the second position information into the preset offset conversion model to determine the preset offset conversion function, the processor further performs the following operations:
    determining the number of the feature displacement points according to unknown coefficients of the bivariate polynomial function;
    inputting the determined first position information of each feature displacement point, and the second position information corresponding to the first position information, into the preset offset conversion model to determine the unknown coefficients; and
    determining the preset offset conversion function with calibration coefficients according to the determined unknown coefficients and the preset offset conversion model.
  18. The electronic device according to claim 15, wherein when determining, according to the preset offset conversion function, the image offset corresponding to the lens offset, the processor further performs the following operations:
    acquiring the preset offset conversion function, expressed as:
    F(ΔX, ΔY) = ax² + by² + cxy + dx + ey + f
    where a, b, c, d, e, and f are the calibration coefficients, F(ΔX, ΔY) is the image offset, and x and y are the coordinates of the lens offset in the X and Y directions of the XY plane; and
    determining, according to the preset offset conversion function, the image offset corresponding to the lens offset.
  19. The electronic device according to claim 15, wherein when acquiring the lens offset of the camera when shake of the camera is detected, the processor further performs the following operations:
    acquiring angular velocity information of the camera based on a gyroscope sensor;
    controlling, according to the angular velocity information, a motor to drive a lens of the camera to move; and
    determining the lens offset of the camera based on a Hall value of a Hall sensor.
  20. The electronic device according to claim 15, wherein the camera comprises at least a first camera and a second camera, and the processor further performs the following operations:
    acquiring, when shake of the first camera and/or the second camera is detected, a first lens offset of the first camera and a second lens offset of the second camera, as well as a first image and a second image each containing a target object and captured by the first camera and the second camera respectively at a same moment;
    determining, according to the preset offset conversion function, a first image offset corresponding to the first lens offset and a second image offset corresponding to the second lens offset;
    compensating the first image according to the first image offset and the second image according to the second image offset, so as to acquire distance information between a same characteristic photographed object in the compensated first image and the compensated second image; and
    determining depth-of-field information of the target object according to the distance information, the first camera, and the second camera.
PCT/CN2019/090140 2018-06-15 2019-06-05 Image compensation method, computer-readable storage medium and electronic device WO2019237977A1

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810623007.7A CN108737734B 2018-06-15 2018-06-15 Image compensation method and apparatus, computer-readable storage medium and electronic device
CN201810623007.7 2018-06-15
