CN109194877B - Image compensation method and apparatus, computer-readable storage medium, and electronic device - Google Patents

Image compensation method and apparatus, computer-readable storage medium, and electronic device

Info

Publication number
CN109194877B
CN109194877B · CN201811290317.8A
Authority
CN
China
Prior art keywords
image
exposure
camera
offset
compensation
Prior art date
Legal status
Active
Application number
CN201811290317.8A
Other languages
Chinese (zh)
Other versions
CN109194877A (en)
Inventor
杨涛
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201811290317.8A priority Critical patent/CN109194877B/en
Publication of CN109194877A publication Critical patent/CN109194877A/en
Application granted granted Critical
Publication of CN109194877B publication Critical patent/CN109194877B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/73 Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/62 Control of parameters via user interfaces
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682 Vibration or motion blur correction
    • H04N23/685 Vibration or motion blur correction performed by mechanical compensation
    • H04N23/687 Vibration or motion blur correction performed by mechanical compensation by shifting the lens or sensor position

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The application relates to an image compensation method and apparatus, a computer-readable storage medium, and an electronic device. The image compensation method includes the following steps: controlling a camera to acquire an image and acquiring exposure information of the acquired image; when the camera is detected to shake, acquiring the lens offset of the camera at the time the image is acquired; determining a target calibration function according to the exposure information and acquiring an image offset corresponding to the lens offset; and compensating the image according to the exposure information and the image offset. Because the image offset can be acquired more accurately according to the exposure information and the image is then compensated accordingly, the sharpness of the image can be improved.

Description

Image compensation method and apparatus, computer-readable storage medium, and electronic device
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image compensation method and apparatus, a computer-readable storage medium, and an electronic device.
Background
Optical Image Stabilization (OIS) is an anti-shake technology that is widely accepted at present. It mainly corrects "optical axis deviation" through a floating lens element in the lens: a gyroscope in the lens detects tiny movements and transmits a signal to a microprocessor, which immediately calculates the displacement that needs to be compensated; a compensation lens group then performs the compensation according to the shake direction and displacement of the lens, thereby effectively overcoming the image blur caused by vibration of the camera.
However, an image shift still occurs during shaking, because the movement of the lens itself affects the image; the general optical anti-shake technology therefore cannot solve the problem of this image shift.
Disclosure of Invention
Embodiments of the present application provide an image compensation method and apparatus, a computer-readable storage medium, and an electronic device, which can compensate for the image offset caused by shake and improve image sharpness.
A method of image compensation, the method comprising:
controlling a camera to acquire an image and acquiring exposure information of the acquired image; the camera comprises an optical image stabilization system;
when the camera is detected to shake, acquiring lens offset of the camera when the image is acquired;
determining a target calibration function according to the exposure information, and acquiring an image offset corresponding to the lens offset;
and compensating the image according to the exposure information and the image offset.
An image compensation apparatus, the apparatus comprising:
the exposure information acquisition module is used for controlling the camera to acquire images and acquiring exposure information of the acquired images; the camera comprises an optical image stabilization system;
the camera lens offset acquisition module is used for acquiring the camera lens offset of the camera when the camera is detected to shake;
the image offset acquisition module is used for determining a target calibration function according to the exposure information and acquiring an image offset corresponding to the lens offset;
and the image compensation module is used for compensating the image according to the exposure information and the image offset.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, implements the steps of the image compensation method.
An electronic device comprising a memory and a processor, the memory having stored therein computer readable instructions that, when executed by the processor, cause the processor to perform the steps of the image compensation method.
The image compensation method and device, the computer readable storage medium and the electronic equipment can control the camera to collect an image and obtain exposure information of the collected image; when the camera is detected to shake, obtain the lens offset of the camera at the time the image is collected; determine a target calibration function according to the exposure information and obtain an image offset corresponding to the lens offset; and compensate the image according to the exposure information and the image offset. Because the image offset can be obtained more accurately according to the exposure information and the image is then compensated accordingly, the sharpness of the image can be improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. Obviously, the drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a block diagram of an electronic device in one embodiment;
FIG. 2 is a flow diagram of a method of image compensation in one embodiment;
FIG. 3 is a flowchart illustrating determining a target calibration function according to exposure information and obtaining an image offset corresponding to a lens offset in one embodiment;
FIG. 4 is a flow diagram of determining, for each gear of standard exposure information, a preset calibration function matching the standard exposure information in one embodiment;
FIG. 5 is a flow diagram of compensating an image based on exposure information and image offset in one embodiment;
FIG. 6 is a flow chart of compensating an image based on exposure information and image offset in another embodiment;
FIG. 7 is a flowchart of acquiring the lens offset of the camera at the time the image is captured when camera shake is detected, in one embodiment;
FIG. 8 is a block diagram of an image compensation apparatus according to an embodiment;
FIG. 9 is a schematic diagram of an image processing circuit in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, a first image may be referred to as a second image, and similarly, a second image may be referred to as a first image, without departing from the scope of the present application. The first image and the second image are both images, but they are not the same image.
The electronic device may capture scenes, people and other subjects within its field of view through the imaging device. The imaging device includes an OIS (Optical Image Stabilization) system, that is, the imaging device includes a camera carrying the OIS system. Optical anti-shake relies on a special lens or CCD photosensitive element structure to reduce, as much as possible, the image instability caused by shake while the operator is using the device. Specifically, when the gyroscope in the camera detects a tiny movement, it transmits a signal to the microprocessor, which immediately calculates the displacement that needs to be compensated; the compensation lens group then compensates according to the shake direction and displacement of the lens, thereby effectively overcoming the image blur caused by shake of the lens.
Optionally, the present solution may also be applied to an imaging device including two or more cameras, where at least one of the two or more cameras has an OIS function.
The imaging device can be applied to an electronic device, and the electronic device can be any terminal device having photographing and shooting functions, such as a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), a Point of sale (POS), a vehicle-mounted computer, a wearable device, and a Digital camera. It should be noted that the imaging device may be disposed on the electronic device, or may not be disposed on the electronic device but connected to the electronic device, and an image captured by the imaging device may be displayed on a screen of the electronic device.
The imaging equipment can control the camera to collect images and acquire exposure information of the collected images; the camera comprises an optical image stabilization system; when the camera is detected to shake, acquiring lens offset of the camera; determining a target calibration function according to the exposure information, and acquiring an image offset corresponding to the lens offset; and compensating the image collected by the camera when the shake occurs according to the exposure information and the image offset.
FIG. 1 is a block diagram of an electronic device in one embodiment. As shown in fig. 1, the electronic device includes a processor, a memory, a display screen, and an input device connected through a system bus. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium of the electronic device stores an operating system and a computer program, and the computer program is executed by the processor to implement an image compensation method provided in the embodiments of the present application. The processor is used for providing calculation and control capability and supporting the operation of the whole electronic device. The internal memory in the electronic device provides an environment for the execution of the computer program in the non-volatile storage medium. The display screen of the electronic device may be a liquid crystal display screen or an electronic ink display screen, and the input device may be a touch layer covering the display screen, a key, a track ball or a touch pad arranged on a housing of the electronic device, or an external keyboard, touch pad or mouse. The electronic device may be any terminal device having a photographing function and a shooting function, such as a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a Point of Sale (POS) device, a vehicle-mounted computer, a wearable device, and a digital camera. Those skilled in the art will appreciate that the architecture shown in fig. 1 is a block diagram of only a portion of the architecture associated with the subject application, and does not constitute a limitation on the electronic devices to which the subject application may be applied; a particular electronic device may include more or fewer components than those shown, may combine certain components, or may have a different arrangement of components.
FIG. 2 is a flow diagram of a method for image compensation in one embodiment. In one embodiment, the image compensation method comprises steps 202-208. Wherein,
step 202, controlling the camera to collect images and acquiring exposure information of the collected images.
The camera includes an Optical Image Stabilization (OIS) system. The camera comprises a lens, a voice coil motor, an infrared filter, an image sensor (Sensor IC), a digital signal processor (DSP) and a PCB circuit board. The lens is generally composed of several lens elements and performs the imaging function; if the camera has an OIS function, the lens is controlled to translate relative to the image sensor when shake occurs, so as to offset and compensate the image shift caused by hand shake.
When the imaging device enters the image preview interface, it can control the camera to preview and acquire images of each field-of-view range in real time; alternatively, it can shoot images of each field-of-view range.
During the process of acquiring the image, the imaging device also acquires exposure information of the image. Here, the exposure may refer to a process in which a photosensitive element of the image forming apparatus receives external light and forms an image, and the process directly determines brightness/darkness of a picture. The exposure factors are aperture, shutter, and sensitivity (ISO).
The exposure information can be understood as the shutter or the sensitivity among the three exposure factors. The shutter speed, usually simply called the shutter, refers to the time from opening to closing of the shutter of the imaging device; the light-sensing time of the CCD is controlled by the shutter. Shutter speeds are typically expressed as fractions that customarily halve step by step, in the sequence 1, 1/2, 1/4, 1/8 second and so on. For example, 1/100 indicates that the shutter takes 0.01 seconds from opening to closing, and 1/10 indicates 0.1 seconds. The sensitivity, also called the ISO value, refers to the sensitivity of the film to light; with the aperture and shutter unchanged, the higher the sensitivity, the stronger the light-sensing capability of the imaging device and the brighter the captured image will be; conversely, the darker it will be.
And step 204, when the camera is detected to shake, acquiring the lens offset of the camera.
The imaging device also comprises a gyroscope sensor used for detecting whether the camera shakes. When the angular velocity information acquired by the gyroscope sensor changes, the camera can be considered to shake. When the camera shakes, the lens offset of the camera can be acquired.
Optionally, whether the camera shakes may be detected based on a gyroscope sensor in the imaging device or based on a gyroscope sensor and/or an acceleration sensor in the electronic device.
In one embodiment, a plane on which an image sensor of the camera is located may be an XY plane, and a two-dimensional coordinate system may be established, where an origin position of the two-dimensional coordinate system is not further limited in this application. The lens shift may be understood as a vector shift of the current position after the lens shake and the initial position before the lens shake in a two-dimensional coordinate system, that is, a vector distance of the current position after the lens shake relative to the initial position before the lens shake. Here, the initial position may be understood as a position of the lens when a distance between the lens and the image sensor is one focal length of the lens. Lens shift refers to the vector distance between the optical centers of the lens (convex lens) before and after it moves.
Further, the amount of movement of the lens in the camera, i.e., the lens offset, may be collected based on a hall sensor in the camera or laser technology. The angular speed information acquired by the gyroscope sensor corresponds to the Hall value acquired by the Hall sensor in time sequence. In the embodiment of the application, knowing the hall value collected by the hall sensor, the offset of the lens at the current moment can be determined. In OIS systems, this lens offset is on the order of microns.
It should be noted that, when the camera is controlled to acquire an image, the offset of the lens is determined and acquired synchronously, and the acquisition frequency of the Hall sensor is higher than the frequency at which the camera acquires images. That is, while the camera collects one frame of image, multiple lens offsets can be acquired synchronously. For example, if the camera acquires images at 30 Hz and the Hall sensor acquires Hall values at 200 Hz at the same time, the time for acquiring one frame of image corresponds to 6-7 Hall values in time sequence, so that a plurality of lens offsets can be acquired.
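As a minimal illustrative sketch (not part of the patented implementation), the timing relationship can be pictured as grouping the higher-rate Hall samples by the frame interval they fall into; the 30 Hz frame rate, 200 Hz Hall rate, data layout and function name below are assumptions chosen only for this example.

```python
# Hypothetical sketch: group 200 Hz Hall samples by the 30 Hz frame they fall into.
FRAME_RATE_HZ = 30.0   # assumed camera frame rate
HALL_RATE_HZ = 200.0   # assumed Hall sampling rate

def hall_samples_for_frame(frame_index, hall_samples):
    """Return the Hall samples whose timestamps fall inside one frame's window.

    hall_samples: list of (timestamp_seconds, hall_value) tuples, sorted by time.
    """
    frame_start = frame_index / FRAME_RATE_HZ
    frame_end = (frame_index + 1) / FRAME_RATE_HZ
    return [(t, v) for (t, v) in hall_samples if frame_start <= t < frame_end]

# With a 200 Hz Hall rate and a 30 Hz frame rate, each frame window contains
# roughly 200 / 30 ≈ 6-7 samples, matching the figure quoted above.
```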
And step 206, determining a target calibration function according to the exposure information, and acquiring an image offset corresponding to the lens offset.
The target calibration function can be understood as the calibration function matched with the current exposure information; the target calibration functions corresponding to different exposure information may be the same or different. Since the unit of the lens offset is code while the unit of the image offset is pixel, the lens offset is converted into the image offset based on the target calibration function.
The image offset can be understood as the displacement of the same characteristic point before and after the lens shakes in the same view field range in the process of acquiring an image once. For example, the imaging device acquires a first image before shaking, that is, when the lens is at the first position, and records the coordinate positions of each pixel point in the first image on the XY plane. When the imaging device shakes, the lens moves on the XY plane, that is, when the lens is at the second position (the current position after the movement), the second image is collected and the coordinate position of each pixel point in the second image on the XY plane is recorded, and the offset of the second image relative to the first image can be called as an image offset.
The target calibration function may be obtained according to a specific calibration manner, and may be a unary quadratic function, a binary quadratic function, or a binary higher-order function. The offset of the lens along the x axis and the offset along the y axis in the XY plane can be substituted into the target calibration function, and the corresponding image offset d1 can be obtained through calculation. Since a plurality of lens offsets are acquired for one frame of image, a plurality of image offsets can be acquired correspondingly according to the target calibration function.
And step 208, compensating the image according to the exposure information and the image offset.
The image is compensated according to the acquired exposure information and the image offset. Different exposure information may correspond to different compensation strategies; for example, the compensation strategies may include frame-by-frame or every-other-frame compensation, block compensation, and progressive or interlaced compensation. Frame-by-frame or every-other-frame compensation applies one image offset uniformly to all areas of different frame images; block compensation, progressive compensation or interlaced compensation compensates different regions of the same frame image separately, that is, different image offsets can be used for different regions.
The image compensation method can control the camera to collect an image and obtain exposure information of the collected image; when the camera is detected to shake, obtain the lens offset of the camera at the time the image is collected; determine a target calibration function according to the exposure information and obtain an image offset corresponding to the lens offset; and compensate the image according to the exposure information and the image offset. Because the image offset can be obtained more accurately according to the exposure information and the image is then compensated accordingly, the sharpness of the image can be improved.
FIG. 3 is a flowchart illustrating an embodiment of determining a target calibration function according to exposure information and obtaining an image offset corresponding to a lens offset. In one embodiment, determining a target calibration function according to exposure information and acquiring an image offset corresponding to a lens offset includes:
step 302, constructing a mapping relation between the standard exposure information of the plurality of gears and a plurality of preset calibration functions.
In one embodiment, the exposure information is sensitivity, which is a measure of the sensitivity of the film to light, and is determined by sensitivity metrology and measurements. Wherein, the sensitivity comprises a plurality of gears of ISO100, ISO200, ISO400, ISO800, ISO1000, ISO1600, ISO3200, ISO6400, ISO12800 and the like. In the process of acquiring an image, the sensitivity of the currently acquired image, for example, ISO400, may be directly acquired.
The electronic apparatus may store in advance standard exposure information of a plurality of gear positions, wherein the gear position of the standard exposure information may be set to five gear positions, which are ISO100 (first gear), ISO500 (second gear), ISO1000 (third gear), ISO2000 (fourth gear), ISO4000 (fifth gear), respectively.
Alternatively, the gear of the standard exposure information may be set to six gears, which are ISO100 (first gear), ISO200 (second gear), ISO400 (third gear), ISO800 (fourth gear), ISO1000 (fifth gear), and ISO1600 (sixth gear), respectively. In the embodiment of the present application, the number of the shift stages of the standard exposure information and the specific numerical value of each shift stage may be set according to actual requirements, which is not further limited herein.
For example, when the electronic device sets the gear of the standard exposure information to five gears, namely ISO100, ISO500, ISO1000, ISO2000 and ISO4000, the electronic device can determine the preset calibration function with calibration coefficients according to the preset calibration model under ISO100, ISO500, ISO1000, ISO2000 and ISO4000 respectively. The standard exposure information of each gear corresponds to one preset calibration model, and a unique preset calibration function can be determined according to that preset calibration model. If standard exposure information of five gears is included, five corresponding preset calibration functions can correspondingly be determined and recorded as F1(ΔX, ΔY), F2(ΔX, ΔY), F3(ΔX, ΔY), F4(ΔX, ΔY) and F5(ΔX, ΔY).
For example, the mapping relationship may be represented as ISO100 → F1(ΔX, ΔY), ISO500 → F2(ΔX, ΔY), ISO1000 → F3(ΔX, ΔY), ISO2000 → F4(ΔX, ΔY), ISO4000 → F5(ΔX, ΔY).
Specifically, the preset calibration function may be a unary quadratic function, a binary higher-order function, and so on; the lower the sensitivity, the higher the order and the more complex the corresponding preset calibration function, and the higher the sensitivity, the simpler the corresponding preset calibration function.
Under the same aperture, if the same exposure value is required, the sensitivity is inversely proportional to the exposure time of the shutter; that is, the higher the sensitivity, the shorter the required shutter time, and conversely, the lower the sensitivity, the longer the required shutter time.
Optionally, in one embodiment, the exposure information may also be the shutter speed. The shutter speed represents the length of the exposure time; generally, a shorter exposure time is required under sufficient light conditions, and a longer exposure time is required under insufficient light conditions. Shutter speeds are typically expressed as fractions that customarily halve step by step, in the sequence 1, 1/2, 1/4, 1/8 second and so on. The electronic apparatus may store standard exposure information of a plurality of gears in advance, where the gear of the standard exposure information may be set to five gears: 1 second, 1/2 second, 1/4 second, 1/8 second and 1/16 second. The electronic device may determine the preset calibration functions with calibration coefficients according to a preset calibration model at 1 second, 1/2 second, 1/4 second, 1/8 second and 1/16 second respectively. The standard exposure information of each gear corresponds to one preset calibration model, and a unique preset calibration function can be determined according to that preset calibration model. If standard exposure information of five gears is included, five corresponding preset calibration functions can correspondingly be determined. The faster the shutter speed, the simpler the corresponding preset calibration function; the slower the shutter speed, the more complex the corresponding preset calibration function.
And step 304, determining the gear of the exposure information according to the standard exposure information of the plurality of gears.
The electronic equipment can acquire exposure information when an image is acquired, and compare the exposure information with standard exposure information of a plurality of gears to determine the standard exposure information closest to the exposure information. For example, if the exposure information during image capturing is ISO400, the gear of the standard exposure information is set to five gears, which are ISO100, ISO500, ISO1000, ISO2000, and ISO4000, respectively, it can be determined that the standard exposure information closest to the exposure information is ISO500, and the corresponding gear is second gear, that is, the gear of the current exposure information is also second gear. Correspondingly, if the exposure information is the shutter speed, the standard exposure information closest to the shutter speed can be determined, and the gear position of the shutter speed can be further determined.
And step 306, determining a target calibration function matched with the gear of the exposure information in the plurality of preset calibration functions according to the mapping relation.
If the mapping relationship in the electronic device is ISO100 → F1(ΔX, ΔY), ISO500 → F2(ΔX, ΔY), ISO1000 → F3(ΔX, ΔY), ISO2000 → F4(ΔX, ΔY), ISO4000 → F5(ΔX, ΔY), a target calibration function may be determined from the five preset calibration functions according to the determined gear of the exposure information. If the gear of the exposure information ISO400 is the second gear, the preset calibration function corresponding to the second-gear standard exposure information ISO500 is F2(ΔX, ΔY), and this preset calibration function corresponding to the second-gear standard exposure information is called the target calibration function.
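A minimal sketch of this gear selection and lookup, assuming a Python dictionary as the stored mapping and the five-gear ISO setup from the example; the function name, variable names and placeholder functions are hypothetical, not the patent's implementation.

```python
# Hypothetical sketch: map ISO gears to preset calibration functions and pick
# the gear whose standard ISO is closest to the current exposure information.
STANDARD_GEARS = [100, 500, 1000, 2000, 4000]   # assumed five-gear setup

def select_target_calibration(iso, calibration_functions):
    """calibration_functions: dict mapping a standard ISO to its preset
    calibration function F_i(dx, dy) -> image offset."""
    nearest_iso = min(STANDARD_GEARS, key=lambda std: abs(std - iso))
    return nearest_iso, calibration_functions[nearest_iso]

# Example: ISO 400 is closest to the second gear (ISO 500), so the preset
# calibration function registered for ISO 500 becomes the target function.
```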
And 308, determining an image offset corresponding to the lens offset according to the target calibration function.
In one embodiment, the preset calibration function may be a unary quadratic function, a binary higher-order function, and so on. For example, the preset calibration function F1(ΔX, ΔY) may be expressed as:
F1(ΔX, ΔY) = axⁿ + byⁿ + ... + cxy + dx + ey + f;
the preset calibration function F2(ΔX, ΔY) may be expressed as:
F2(ΔX, ΔY) = ax² + by² + cxy + dx + ey + f;
the preset calibration function F3(ΔX, ΔY) may be expressed as:
F3(ΔX, ΔY) = ax² + by² + cx + dy + e;
the preset calibration function F4(ΔX, ΔY) may be expressed as:
F4(ΔX, ΔY) = axy + bx + cy + d;
the preset calibration function F5(ΔX, ΔY) may be expressed as:
F5(ΔX, ΔY) = ax + by + c.
In the above formulas, a, b, c, d, e and f are the calibration coefficients, which are known coefficients. Fi(ΔX, ΔY) (i = 1, 2, 3, 4, 5) denotes the current image offset, and x and y denote the abscissa and ordinate of the current lens offset, respectively.
The preset calibration function is obtained according to a preset rule. It should be noted that the expression of the preset calibration function is not limited to the above example, and may also be represented by other expressions.
If the target calibration function is the preset calibration function F2(ΔX, ΔY), the image offset can be obtained according to the target calibration function F2(ΔX, ΔY). For example, if the current lens offset is p(2, 1), the corresponding image offset is F2(ΔX, ΔY) = 4a + b + 2c + 2d + e + f, and with the determined calibration coefficients, the image offset F2(ΔX, ΔY), which is a scalar offset, can be acquired. The preset calibration function is a binary quadratic function and comprehensively considers both the x-axis offset and the y-axis offset of the lens offset, so the lens offset can be converted into the image offset more accurately and efficiently.
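The following is a minimal sketch of evaluating such a binary quadratic target calibration function; the coefficient values are placeholders (not calibrated values) and the function name is illustrative only.

```python
# Hypothetical sketch of evaluating the binary quadratic target calibration function
# F2(dX, dY) = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f for a lens offset of (2, 1).
def f2(x, y, a, b, c, d, e, f):
    return a * x**2 + b * y**2 + c * x * y + d * x + e * y + f

# With the lens offset (2, 1) this reduces to 4a + b + 2c + 2d + e + f, matching the
# expression above; the coefficient values below are placeholders, not calibrated values.
image_offset = f2(2, 1, a=0.5, b=0.5, c=0.1, d=0.2, e=0.2, f=0.0)
```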
In one embodiment, before the mapping relationship between the standard exposure information and the preset calibration functions is constructed, the method further includes a step 300 of determining, for each gear of standard exposure information, a preset calibration function matched with that standard exposure information.
FIG. 4 is a flow diagram of determining, for each gear of standard exposure information, a preset calibration function matching the standard exposure information, according to an embodiment. Specifically, for each gear of standard exposure information, determining a preset calibration function matched with the standard exposure information includes:
step 402, for each gear of standard exposure information, driving the motor to move the lens of the camera according to a preset track, the preset track including a plurality of displacement points.
A test target is fixed within the imaging range of the camera, and the motor is controlled to drive the lens of the camera to move according to a preset track. The test target can be a CTF (Contrast Transfer Function) target, an SFR (Spatial Frequency Response) target, a DB target, or another customized target. The preset track may be a circle, an ellipse, a rectangle, or another preset track. A plurality of displacement points are set on the preset track, where the distances between two adjacent displacement points can be the same or different. The position information of a displacement point can be expressed by its coordinate position in the XY plane. For example, the displacement point qi can be represented by the coordinate position qi(xi, yj) in the XY plane; that is, the first position information of the displacement point can be expressed by the coordinate qi(xi, yj).
If the gear of the standard exposure information is set to five gears, namely ISO100, ISO500, ISO1000, ISO2000 and ISO4000, the motor is driven to move the lens of the camera according to the preset track under the standard exposure information of ISO100, ISO500, ISO1000, ISO2000 and ISO4000 respectively, and the corresponding image data are stored as a group of records for each gear.
And step 404, correspondingly acquiring image information of the test target when the lens moves to each displacement point.
When the driving motor pushes the lens of the camera to move according to the preset track, the image information of the test target is correspondingly collected at each displacement point. One displacement point corresponds to the image information of one test target, and the image information can be understood as the position information of the plurality of pixel points forming the image. For example, when there are six displacement points, image information of six test targets needs to be correspondingly acquired.
And step 406, correspondingly acquiring first position information of each displacement point and image offset of the same characteristic point in the image information acquired at each displacement point relative to the initial position.
The electronic device can select a feature point pi in the image information and obtain the second position information of the feature point pi; the feature point pi can likewise be represented in the XY plane by the coordinate pi(Xi, Yj).
The specific position and definition of the feature point pi are not further limited here; the feature point may be the shot target corresponding to a pixel close to the center of the image information, or the shot target corresponding to the pixel with the highest brightness or another pixel of prominent significance in the image information.
It should be noted that the shooting targets corresponding to the feature points in the image information of the multiple test targets are the same, that is, the position information of the feature points in the image information collected at different displacement points is different, but the shooting targets corresponding to the same feature point are the same.
In one embodiment, the displacement point q0(x0, y0) of the lens at the initial position may be taken as the origin. When the lens is at the initial position q0(x0, y0), the feature point in the image information of the test target can be acquired, and this feature point can be denoted p0(X0, Y0).
When the lens moves to the displacement point q1(x1, y1), the feature point p1(X1, Y1) in the image information of the test target is acquired correspondingly, together with the image offset d1 of the feature point p1(X1, Y1) relative to the feature point p0(X0, Y0) acquired at the initial position; and so on, when the lens moves to the displacement point q6(x6, y6), the feature point p6(X6, Y6) in the image information of the test target is acquired correspondingly, together with the image offset d6 of the feature point p6(X6, Y6) relative to the feature point p0(X0, Y0) acquired at the initial position.
Step 408, inputting the first position information and the image offsets into a preset calibration model to determine a preset calibration function with calibration coefficients, wherein the number of displacement points is associated with the number of calibration coefficients.
The standard exposure information of different gears corresponds to different preset calibration models, and the different preset calibration models have different calibration coefficients. Different preset calibration models require different numbers of displacement points to be acquired, and the number of unknown coefficients in a preset calibration model is less than or equal to the number of displacement points.
The preset calibration model can be a unitary quadratic function model, a binary quadratic function model, or a binary multiple function model, and the setting of the preset calibration model is obtained by learning.
The corresponding preset calibration model can be determined according to the gear of the current standard exposure information; the obtained first position information of each displacement point, the second position information of the feature point corresponding to that displacement point, and the image offset are input into the preset calibration model, and each coefficient in the preset calibration model is determined through analysis and computation, thus yielding a preset calibration function with known calibration coefficients. Correspondingly, a preset calibration function corresponding to the standard exposure information of each gear can be obtained.
It should be noted that the preset calibration model is consistent with the expression of the preset calibration function, and for the preset calibration model, the calibration coefficient is an unknown number, and for the preset calibration function, the corresponding calibration coefficient is a known number.
For example, when the preset calibration model is a binary quadratic model, it can be expressed by the following formula:
F(ΔX, ΔY) = ax² + by² + cxy + dx + ey + f
where F(ΔX, ΔY) represents the image offset, namely the scalar offset of the same feature point obtained at the current displacement point q(xi, yj) relative to the initial position q(x0, y0); x represents the coordinate of the displacement point on the horizontal x axis, and y represents the coordinate of the displacement point on the vertical y axis.
The binary quadratic function model contains six unknown coefficients a, b, c, d, e and f. The six acquired displacement points q1(x1, y1) to q6(x6, y6) and the image offsets d1 to d6 corresponding to the six feature points p1(X1, Y1) to p6(X6, Y6) are input into the binary quadratic function model to solve for a, b, c, d, e and f; the obtained coefficients a, b, c, d, e and f are then substituted back into the binary quadratic function model to obtain the corresponding preset calibration function, and a, b, c, d, e and f are the calibration coefficients of the preset calibration function.
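As a minimal sketch of this coefficient-solving step, assuming NumPy and placeholder measurement values (the displacement points and offsets below are not real calibration data), the six equations can be stacked into a linear system and solved:

```python
import numpy as np

# Hypothetical sketch: recover the six calibration coefficients a..f of
# F(dX, dY) = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f from the six displacement
# points q1..q6 and the measured image offsets d1..d6. All numbers are placeholders.
points  = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 1.0), (1.0, 2.0), (2.0, 2.0)]  # q1..q6
offsets = [0.8, 0.7, 1.6, 2.9, 2.7, 4.1]                                            # d1..d6

A = np.array([[x * x, y * y, x * y, x, y, 1.0] for (x, y) in points])
d_vec = np.array(offsets)

# Least squares also tolerates more than six displacement points or noisy measurements.
coeffs, *_ = np.linalg.lstsq(A, d_vec, rcond=None)
a, b, c, d, e, f = coeffs
```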
According to the image compensation method in this embodiment, the corresponding preset calibration function can be obtained from the preset calibration model, the plurality of displacement points and the plurality of corresponding feature points. The preset calibration function can then obtain the image offset accurately and efficiently directly from the lens offset, so the calibration efficiency and accuracy are higher, laying a good foundation for compensating the image.
FIG. 5 is a flow diagram for compensating an image based on exposure information and image offset in one embodiment. In one embodiment, compensating an image based on exposure information and an image offset comprises:
step 502, determining exposure levels of the exposure information, wherein the exposure levels comprise primary exposure, secondary exposure and tertiary exposure.
In one embodiment, if the exposure information is the sensitivity, the exposure level can be set according to the sensitivity. The sensitivity of the electronic device includes ISO100, ISO200, ISO400, ISO800, ISO1000, ISO1600, ISO3200, ISO6400, ISO12800 and the like, and the exposure levels may be divided according to the sensitivity. For example, a sensitivity of less than or equal to ISO500 may be regarded as primary exposure, a sensitivity greater than ISO500 and less than or equal to ISO1000 as secondary exposure, and a sensitivity greater than ISO1000 as tertiary exposure. The electronic device may determine the exposure level of the current exposure information based on the set primary exposure, secondary exposure and tertiary exposure.
In one embodiment, if the exposure information is the shutter speed, the exposure level can be set according to the shutter speed. The shutter speed of the electronic device includes 1 second, 1/2 second, 1/4 second, 1/8 second, 1/16 second and the like, and the exposure levels may be divided according to the shutter speed. For example, an exposure time of less than or equal to 1/8 second may be regarded as primary exposure, an exposure time greater than 1/8 second and less than or equal to 1/2 second as secondary exposure, and an exposure time greater than 1/2 second as tertiary exposure. The electronic device may determine the exposure level of the current exposure information based on the set primary exposure, secondary exposure and tertiary exposure.
It should be noted that the electronic device divides the exposure levels according to the exposure information, and the specific division rule is not limited to the above example, and the range of the exposure information of each exposure level may be set according to the actual requirement.
And step 504, determining a corresponding compensation strategy according to the exposure level.
The image compensation strategy differs for each exposure level. Specifically, when the exposure level is primary exposure, the corresponding compensation strategy is frame-by-frame or every-other-frame compensation; when the exposure level is secondary exposure, the corresponding compensation strategy is block compensation; when the exposure level is tertiary exposure, the corresponding compensation strategy is progressive or interlaced compensation.
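A compact sketch of this level-to-strategy mapping, using the ISO thresholds from the example above; the threshold values, names and dictionary layout are assumptions for illustration only.

```python
# Hypothetical sketch: classify exposure information into the three levels and
# pick the matching compensation strategy; thresholds follow the ISO example above.
def exposure_level(iso):
    if iso <= 500:
        return 1        # primary exposure
    elif iso <= 1000:
        return 2        # secondary exposure
    return 3            # tertiary exposure

STRATEGY_BY_LEVEL = {
    1: "frame_by_frame_or_every_other_frame",
    2: "block",
    3: "progressive_or_interlaced",
}

strategy = STRATEGY_BY_LEVEL[exposure_level(400)]   # ISO400 -> primary -> frame-level compensation
```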
And step 506, compensating the image according to the image offset and the compensation strategy.
In one embodiment, if the exposure level corresponding to the exposure information is primary exposure, frame-by-frame or every-other-frame compensation may be used. When the electronic device collects one frame of image, it correspondingly obtains a plurality of image offsets. When the image is compensated with the frame-by-frame or every-other-frame strategy, the minimum image offset, the image offset with the minimum derivative, or the image offset closest to the average shake amount can be selected from the plurality of image offsets as the target image offset, which is used to compensate all pixel points of each frame (or every other frame) of the image.
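A minimal sketch of choosing one representative offset for the frame-level strategy; picking the smallest-magnitude offset is only one of the options listed above, and the function name is hypothetical.

```python
# Hypothetical sketch for the primary-exposure strategy: choose one representative
# offset out of the several offsets gathered for a frame and apply it uniformly.
# The offset with the smallest derivative or the one closest to the average shake
# amount are the alternatives mentioned above.
def frame_target_offset(image_offsets):
    return min(image_offsets, key=abs)
```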
In one embodiment, if the exposure level corresponding to the exposure information is secondary exposure, block compensation may be performed. When the electronic device collects one frame of image, it correspondingly obtains a plurality of image offsets. For example, there are six Hall values, namely hall1-hall6, each of which corresponds to a unique image offset, denoted biaspixel1-biaspixel6. If the CMOS sensor scans 60 lines, block correction can be performed: the 60 lines are divided into 6 blocks of 10 lines each, and the 6 blocks of the image are corrected block by block using biaspixel1-biaspixel6, that is, the 10 lines of the first block all use biaspixel1 as the correction parameter, the 10 lines of the second block use biaspixel2 as the correction parameter, and so on.
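The block strategy can be sketched as follows; shift_rows is an assumed helper (not defined in the patent) that shifts a group of rows by one offset, and all names are illustrative.

```python
# Hypothetical sketch of block compensation: with 60 scanned rows and six image
# offsets (one per Hall value), the rows split into six blocks of ten rows and
# each block is corrected with its own offset.
def block_compensation(rows, image_offsets, shift_rows):
    block_size = len(rows) // len(image_offsets)          # e.g. 60 // 6 = 10
    corrected = []
    for block_index, offset in enumerate(image_offsets):
        start = block_index * block_size
        corrected.extend(shift_rows(rows[start:start + block_size], offset))
    return corrected
```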
In one embodiment, if the exposure level corresponding to the exposure information is tertiary exposure, compensation can be performed line by line or in an interlaced manner. When the electronic device collects one frame of image, it correspondingly obtains a plurality of image offsets. For example, there are six Hall values hall1-hall6, each corresponding to a unique image offset, and the 6 image offsets can be denoted biaspixel1-biaspixel6. If the CMOS sensor scans 6 lines, biaspixel1 can be used to compensate the pixels of line 1, biaspixel2 the pixels of line 2, biaspixel3 the pixels of line 3, biaspixel4 the pixels of line 4, biaspixel5 the pixels of line 5, and biaspixel6 the pixels of line 6; that is, biaspixel1-biaspixel6 are used to compensate lines 1-6 line by line, and so on, to complete the compensation of the image.
Optionally, based on the obtained 6 image offsets biaspixel1-biaspixel6, 1, 2 or 3 image offsets may be selected arbitrarily to perform interlaced compensation on the pixel points of lines 1, 3 and 5, or on the pixel points of lines 2, 4 and 6, and so on, to complete the compensation of the image.
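The tertiary-exposure variants can be sketched in the same style, reusing the assumed shift_rows helper from the block-compensation sketch; the mapping of offsets to rows follows the line-by-line example above.

```python
# Hypothetical sketch: progressive compensation maps the i-th offset to the i-th
# scanned row, while interlaced compensation corrects only alternating rows with
# one chosen offset.
def progressive_compensation(rows, image_offsets, shift_rows):
    return [shift_rows([row], offset)[0] for row, offset in zip(rows, image_offsets)]

def interlaced_compensation(rows, offset, shift_rows, start=0):
    corrected = list(rows)
    for i in range(start, len(rows), 2):    # start=0 -> rows 1,3,5...; start=1 -> rows 2,4,6...
        corrected[i] = shift_rows([rows[i]], offset)[0]
    return corrected
```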
In this embodiment, the image may be compensated by adopting different compensation strategies based on different exposure levels, and the image sharpness under different exposure levels may be improved.
FIG. 6 is a flowchart illustrating compensation of an image according to the image offset and the compensation strategy when the exposure level is secondary exposure, in one embodiment. In one embodiment, when the exposure level is secondary exposure, compensating the image according to the image offset and the compensation strategy comprises:
step 602, recognizing the image to identify a region to be compensated and a non-compensation region, wherein the colors of all pixel points of the non-compensation region are the same, and the proportion of the non-compensation region in the image is greater than or equal to a preset value.
In one embodiment, the electronic device may obtain the color value of each pixel point in the image and cluster the pixel points based on their color values, where the color values of the pixels within each pixel category are the same; the electronic device then determines the contour of each connected region formed by the pixels in each category to obtain a contour set, obtains the area of each contour in the set, screens out the contour with the largest area, and takes the region corresponding to that contour as the non-compensation region.
After the non-compensation region is acquired, the proportion of the non-compensation region in the whole image is judged. If the proportion is greater than or equal to the preset value, the non-compensation region is considered valid, and the region to be compensated is then obtained. If the proportion is smaller than the preset value, the non-compensation region is considered invalid, and the whole image is taken as the region to be compensated.
Furthermore, the preset value can be one-half, three-fifths, four-sevenths and the like, and the specific preset value is not further limited and can be set according to actual requirements.
For example, if the acquired image is a night-scene image, most of the image area is a pure black region and only a relatively small part is a bright region; the black region can then be used as the non-compensation region and the bright region as the region to be compensated. If the collected image is a portrait plus a background, where the portrait is the foreground region and the background is a region of uniform color, then if the proportion of the background region in the whole image is greater than or equal to the preset value, the background region is used as the non-compensation region and the portrait foreground region as the region to be compensated; if the proportion of the background region in the whole image is smaller than the preset value, the whole image is used as the region to be compensated.
It should be noted that the image recognition method is not limited to the above example; algorithms that extract color features, texture features or edge features may also be used, and the method for recognizing the region to be compensated and the non-compensation region is not further limited here.
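A simplified sketch of the region split described above, assuming NumPy and coarse colour quantisation; connectivity of the region is ignored here (the patent text works with contours of connected regions), and the bin size, preset ratio and function name are assumptions.

```python
import numpy as np

# Hypothetical sketch: quantise colours so "same-colour" pixels fall into one bin,
# take the largest bin as the candidate non-compensation region, and accept it only
# if it covers at least the preset fraction of the image (one half assumed here).
def split_regions(image_rgb, preset_ratio=0.5):
    quantised = image_rgb // 32                                   # coarse colour bins
    flat = quantised.reshape(-1, quantised.shape[-1])
    colours, counts = np.unique(flat, axis=0, return_counts=True)
    dominant = colours[np.argmax(counts)]
    non_comp_mask = np.all(quantised == dominant, axis=-1)        # largest same-colour area
    if non_comp_mask.mean() >= preset_ratio:
        return ~non_comp_mask, non_comp_mask      # (region to be compensated, non-compensation region)
    return np.ones(non_comp_mask.shape, dtype=bool), None          # whole image is compensated
```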
And step 604, compensating the region to be compensated according to the image offset and the compensation strategy.
The compensation strategy corresponding to the secondary exposure level is block compensation; that is, according to the image offsets, block compensation can be performed on the region to be compensated, so that fine compensation is applied to the region to be compensated while no processing is performed on the non-compensation region, which improves the sharpness of the region to be compensated and the efficiency of image processing.
FIG. 7 is a flowchart of acquiring the lens offset of the camera at the time the image is captured when camera shake is detected, in one embodiment. In one embodiment, when the camera is detected to shake, acquiring the lens offset of the camera at the time the image is captured includes:
step 702, when detecting that the camera shakes, synchronously acquiring a plurality of shaking amounts of the camera when acquiring a frame of image.
Specifically, a first frequency at which the camera collects images and a second frequency at which the gyroscope sensor collects angular velocity information are obtained; that is, while the camera collects one frame of image, a plurality of pieces of angular velocity information collected by the gyroscope sensor are obtained synchronously. The acquisition frequency of the gyroscope sensor is higher than the frequency at which the camera acquires images. For example, if the camera acquires images at 30 Hz and the gyroscope sensor collects angular velocity information at 200 Hz at the same time, the time for acquiring one frame of image corresponds to 6-7 pieces of angular velocity information in time sequence.
Corresponding shake amounts are determined according to the acquired angular velocity information; the shake amounts correspond one-to-one to the angular velocity information, each piece of angular velocity information corresponding to one shake amount, so 6-7 shake amounts corresponding to the acquired 6-7 pieces of angular velocity information are obtained. A shake amount can be understood as the angle information obtained by integrating the angular velocity information, where the integration time is related to the frequency at which the gyroscope sensor collects the angular velocity information.
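As a minimal sketch of that integration, assuming a 200 Hz gyroscope rate and a simple per-sample rectangle integration; the constant and function name are illustrative assumptions only.

```python
# Hypothetical sketch: each shake amount is one angular-velocity sample integrated
# over the gyroscope sampling interval, here an assumed 200 Hz rate.
GYRO_RATE_HZ = 200.0

def shake_amounts(angular_velocities_rad_per_s):
    dt = 1.0 / GYRO_RATE_HZ
    return [omega * dt for omega in angular_velocities_rad_per_s]   # angle change per sample
```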
And step 704, controlling the motor to drive the lens of the camera to move according to the plurality of shaking amounts.
The imaging device further comprises a motor for driving the lens of the camera to move and an OIS controller for controlling the motor to move. When the gyroscope sensor detects that the camera shakes, the electronic equipment can control the motor to drive the lens of the camera to move according to the obtained plurality of shaking amounts, and the moving amount of the lens is opposite to the direction of the shaking amounts so as to eliminate offset caused by shaking.
Step 706, determining the lens offset of the camera based on the Hall value of the Hall sensor, the angular velocity information and the Hall value being collected synchronously.
The imaging device also comprises a Hall sensor for recording the movement amount of the lens, wherein the Hall sensor is a magnetic field sensor manufactured according to Hall effect, and the Hall effect is basically deflection of moving charged particles caused by Lorentz force action in a magnetic field. The angular speed information acquired by the gyroscope sensor corresponds to the Hall value acquired by the Hall sensor in time sequence.
The imaging device can record, through the Hall sensor or a laser, the number of offset scales of the lens of the camera on the XY plane, record the offset direction at the same time, and then obtain the lens offset p(xi, yj) according to the distance corresponding to each scale and the offset direction.
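A short sketch of that scale-to-offset conversion; the distance represented by one Hall scale is an assumed calibration constant, and the names are hypothetical.

```python
# Hypothetical sketch: turn the recorded number of Hall scales and the offset
# direction into a lens offset on the XY plane.
MICRONS_PER_SCALE = 0.5   # assumed distance per Hall scale

def lens_offset(scales_x, scales_y, direction_x=1, direction_y=1):
    return (direction_x * scales_x * MICRONS_PER_SCALE,
            direction_y * scales_y * MICRONS_PER_SCALE)
```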
In this embodiment, the frequencies at which the gyroscope sensor and the Hall sensor collect data correspond in time sequence; meanwhile, the camera captures the shot subject at a frequency different from that of the gyroscope sensor but with synchronized timestamps, so the time of acquiring one frame of image corresponds to a plurality of pieces of collected angular velocity information. A plurality of lens offsets can then be determined from the plurality of pieces of angular velocity information and converted into a plurality of image offsets, which improves the precision and effect of image compensation.
It should be understood that although the steps in the flowcharts of FIGS. 2-7 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, there is no strict order limitation on the execution of these steps, and they may be performed in other orders. Moreover, at least some of the steps in FIGS. 2-7 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and the order of performing these sub-steps or stages is not necessarily sequential; they may be performed in turn or alternately with at least some of the sub-steps or stages of other steps.
Fig. 8 is a block diagram of an image compensation apparatus according to an embodiment. The embodiment of the present application further provides an image compensation apparatus, which is applied to an imaging device including an optical image stabilization system, the imaging device is provided with a camera carrying the optical image stabilization system, and the image compensation apparatus includes: an exposure information obtaining module 810, a lens shift obtaining module 820, an image shift obtaining module 830, and an image compensation module 840. Wherein,
an exposure information obtaining module 810, configured to control the camera to collect an image and obtain exposure information of the collected image; the camera comprises an optical image stabilization system;
a lens shift acquiring module 820, configured to acquire a lens shift of the camera when the camera is detected to shake;
an image offset obtaining module 830, configured to determine a target calibration function according to the exposure information, and obtain an image offset corresponding to the lens offset;
and the image compensation module 840 is used for compensating the image according to the exposure information and the image offset.
The image compensation device can control the camera to collect an image and acquire exposure information of the collected image; when the camera is detected to shake, acquire the lens offset of the camera at the time the image was collected; determine a target calibration function according to the exposure information and acquire an image offset corresponding to the lens offset; and compensate the image according to the exposure information and the image offset. Because the image offset is acquired more accurately on the basis of the exposure information before the image is compensated, the sharpness of the image can be improved.
In one embodiment, the exposure information includes a shutter speed or sensitivity, and the image offset acquisition module 830 comprises:
the system comprises a construction unit, a calibration unit and a calibration unit, wherein the construction unit is used for constructing a mapping relation between standard exposure information of a plurality of gears and a plurality of preset calibration functions, and the standard exposure information of each gear corresponds to one preset calibration function;
the gear determining unit is used for determining gears of the exposure information according to the standard exposure information of the plurality of gears;
the matching unit is used for determining the target calibration function matched with the gear of the exposure information in the plurality of preset calibration functions according to the mapping relation;
and the offset acquisition unit is used for determining the image offset corresponding to the lens offset according to the target calibration function.
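A minimal sketch of this gear-based selection, assuming shutter speed as the exposure information, three illustrative gears and simple linear calibration functions; the gear boundaries and coefficients are placeholders that a real system would obtain from calibration.

```python
# Preset calibration functions keyed by gear; the coefficients are placeholders.
GEAR_CALIBRATION = {
    "fast":   lambda lens_offset: (0.9 * lens_offset[0], 0.9 * lens_offset[1]),
    "medium": lambda lens_offset: (1.0 * lens_offset[0], 1.0 * lens_offset[1]),
    "slow":   lambda lens_offset: (1.2 * lens_offset[0], 1.2 * lens_offset[1]),
}

def select_target_calibration(shutter_speed_s):
    """Determine the gear of the exposure information and return the preset
    calibration function mapped to that gear."""
    if shutter_speed_s < 1 / 125:
        gear = "fast"
    elif shutter_speed_s < 1 / 30:
        gear = "medium"
    else:
        gear = "slow"
    return GEAR_CALIBRATION[gear]

# Example use: image_offset = select_target_calibration(1 / 60)(lens_offset)
```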
In one embodiment, the image offset acquisition module 830 further comprises:
and the function determining unit is used for determining, for each gear of the standard exposure information, a preset calibration function matched with that standard exposure information.
With the image compensation method in this embodiment, the corresponding preset calibration function can be obtained from the preset calibration model, the plurality of displacement points and the plurality of corresponding feature points. The preset calibration function can then derive the image offset accurately and efficiently directly from the lens offset, so the calibration is more efficient and accurate and lays a good foundation for compensating the image.
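As one possible reading of how a preset calibration function could be fitted from displacement points and the matching feature points, the sketch below uses a per-axis linear least-squares fit; the linear form is an assumption introduced here, since the patent does not fix the form of the calibration model.

```python
import numpy as np

def fit_preset_calibration(lens_displacements, image_displacements):
    """Fit a linear calibration image_offset = k * lens_offset + b from
    calibration-time lens displacement points and the corresponding
    feature-point displacements measured in the image."""
    lens = np.asarray(lens_displacements, dtype=float)    # shape (N,)
    image = np.asarray(image_displacements, dtype=float)  # shape (N,)
    k, b = np.polyfit(lens, image, deg=1)                 # least-squares line
    return lambda lens_offset: k * lens_offset + b
```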
In one embodiment, the image compensation module 840 includes:
the level determining unit is used for determining the exposure level of the exposure information, and the exposure level comprises primary exposure, secondary exposure and tertiary exposure;
the strategy unit is used for determining a corresponding compensation strategy according to the exposure level;
and the compensation unit is used for compensating the image according to the image offset and the compensation strategy.
In one embodiment, the strategy unit is further configured to:
when the exposure level is primary exposure, the corresponding compensation strategy is frame-by-frame or alternate-frame compensation;
when the exposure level is secondary exposure, the corresponding compensation strategy is block compensation;
and when the exposure level is tertiary exposure, the corresponding compensation strategy is progressive or interlaced compensation.
In this embodiment, different compensation strategies are adopted for different exposure levels, which can improve the image sharpness under each exposure level.
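The level-to-strategy mapping above can be summarized by the following sketch; the string labels are illustrative names, not identifiers from the patent.

```python
def choose_compensation_strategy(exposure_level):
    """Map the exposure level to the compensation strategy described above."""
    if exposure_level == 1:    # primary exposure
        return "frame_based"   # frame-by-frame or alternate-frame compensation
    if exposure_level == 2:    # secondary exposure
        return "block_based"   # block compensation of the region to be compensated
    if exposure_level == 3:    # tertiary exposure
        return "row_based"     # progressive or interlaced compensation
    raise ValueError("unknown exposure level")
```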
In one embodiment, when the exposure level is secondary exposure, the compensation unit is further configured to: recognize the image to identify a region to be compensated and a non-compensation region, wherein the colors of all pixel points of the non-compensation region are the same and the proportion of the non-compensation region in the image is greater than or equal to a preset value; and compensate the region to be compensated according to the image offset and the compensation strategy.
In this embodiment, the region to be compensated can be finely compensated while the non-compensation region is left unprocessed, which improves the sharpness of the region to be compensated and increases the image processing efficiency.
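A minimal sketch of that region split, assuming the non-compensation region is the largest exactly-uniform color area; a practical implementation would likely tolerate small color variations, and the threshold value here is a placeholder.

```python
import numpy as np

def region_to_compensate(image, ratio_threshold=0.3):
    """Return a boolean mask of the region to be compensated (True = compensate).
    Pixels of the dominant uniform color form the non-compensation region when
    that region occupies at least ratio_threshold of the image."""
    flat = image.reshape(-1, image.shape[-1])
    colors, counts = np.unique(flat, axis=0, return_counts=True)
    dominant = colors[np.argmax(counts)]
    uniform_mask = np.all(image == dominant, axis=-1)
    if uniform_mask.mean() >= ratio_threshold:
        return ~uniform_mask                       # compensate only the rest
    return np.ones(image.shape[:2], dtype=bool)    # compensate the whole image
```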
In one embodiment, the lens shift acquisition module 820 includes:
the camera shake acquisition unit is used for synchronously acquiring a plurality of shake amounts of the camera when acquiring a frame of image when detecting that the camera shakes;
the driving unit is used for controlling the motor to drive the camera lens to move according to the plurality of shaking amounts;
and the calculating unit is used for determining the lens offset of the camera based on the Hall value of the Hall sensor, and the jitter amount and the Hall value are synchronously acquired.
In this embodiment, the gyroscope sensor and the Hall sensor collect data at frequencies that correspond in time sequence. Meanwhile, the camera captures the photographed object at a frequency different from the gyroscope sampling frequency, but their timestamps are synchronized, so the time taken to acquire one frame of image corresponds to a plurality of collected angular velocity values. A plurality of lens offsets can therefore be determined from these angular velocity values and converted into a plurality of image offsets, which improves the precision and effect of the image compensation.
The division of the modules in the image compensation apparatus is only for illustration; in other embodiments, the image compensation apparatus may be divided into different modules as needed to implement all or part of its functions.
The embodiment of the application also provides a computer readable storage medium. One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the image compensation method of any of the embodiments described above.
The embodiment of the application also provides an electronic device. The electronic device comprises an imaging device with an optical image stabilization system, a memory and a processor, wherein the imaging device is provided with a camera carrying the optical image stabilization system; the memory stores computer-readable instructions which, when executed by the processor, cause the processor to perform the image compensation method of any of the above embodiments. The electronic device includes an image processing circuit, which may be implemented using hardware and/or software components and may include various processing units that define an ISP (Image Signal Processing) pipeline. FIG. 9 is a schematic diagram of the image processing circuit in one embodiment. As shown in fig. 9, for ease of explanation, only the aspects of the image compensation technique related to the embodiments of the present application are shown.
As shown in fig. 9, the image processing circuit includes an ISP processor 940 and control logic 950. The image data captured by the imaging device 910 is first processed by the ISP processor 940, which analyzes the image data to capture image statistics that may be used to determine and/or control one or more parameters of the imaging device 910. The imaging device 910 may include a camera having one or more lenses 912 and an image sensor 914. The image sensor 914 may include an array of color filters (e.g., Bayer filters) and may acquire the light intensity and wavelength information captured by each imaging pixel, providing a set of raw image data that can be processed by the ISP processor 940. The sensor 920 (e.g., a gyroscope) may provide image compensation parameters for the captured image (e.g., anti-shake parameters) to the ISP processor 940 based on the interface type of the sensor 920. The sensor 920 interface may be an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of the above.
In addition, image sensor 914 may also send raw image data to sensor 920, sensor 920 may provide raw image data to ISP processor 940 for processing based on the type of interface of sensor 920, or sensor 920 may store raw image data in image memory 930.
The ISP processor 940 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 940 may perform one or more image compensation operations on the raw image data and gather statistical information about the image data. The image compensation operations may be performed with the same or different bit depth precision.
ISP processor 940 may also receive pixel data from image memory 930. For example, the sensor 920 interface sends raw image data to the image memory 930, and the raw image data in the image memory 930 is then provided to the ISP processor 940 for processing. The image Memory 930 may be a part of a Memory device, a storage device, or a separate dedicated Memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from image sensor 914 interface or from sensor 920 interface or from image memory 930, ISP processor 940 may perform one or more image compensation operations, such as temporal filtering. The image data processed by ISP processor 940 may be sent to image memory 930 for additional processing before being displayed. ISP processor 940 receives processed data from image memory 930 and performs image data processing on the processed data in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to a display 980 for viewing by a user and/or further Processing by a Graphics Processing Unit (GPU). Further, the output of ISP processor 940 may also be sent to image memory 930 and display 980 may read image data from image memory 930. In one embodiment, image memory 930 may be configured to implement one or more frame buffers. In addition, the output of the ISP processor 940 may be transmitted to an encoder/decoder 970 for encoding/decoding image data. The encoded image data may be saved and decompressed before being displayed on a display 980 device.
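For the temporal filtering mentioned above, the following sketch shows a simple exponential moving average between consecutive frames; it is only a stand-in for whichever temporal filter the ISP actually applies, and the blending factor is an assumed value.

```python
import numpy as np

def temporal_filter(previous_filtered, current_frame, alpha=0.2):
    """Blend the current raw frame with the previously filtered frame
    (exponential moving average) to suppress temporal noise."""
    prev = previous_filtered.astype(np.float32)
    curr = current_frame.astype(np.float32)
    filtered = alpha * curr + (1.0 - alpha) * prev
    return filtered.astype(current_frame.dtype)
```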
The image data processed by the ISP processor 940 may also be sent to the encoder/decoder 970, which encodes/decodes the image data; the encoded image data may be saved and decompressed before being displayed on the display 980. The encoder/decoder 970 may be a Central Processing Unit (CPU) or a Graphics Processing Unit (GPU) in the mobile terminal.
The statistical data determined by the ISP processor 940 may be transmitted to the control logic 950 unit. For example, the statistical data may include image sensor 914 statistics such as auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, lens 912 shading compensation, and the like. The control logic 950 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that may determine control parameters of the imaging device 910 and control parameters of the ISP processor 940 based on the received statistical data. For example, the control parameters of imaging device 910 may include sensor 920 control parameters (e.g., gain, integration time for exposure control, anti-shake parameters, etc.), camera flash control parameters, lens 912 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color compensation matrices for automatic white balance and color adjustment (e.g., during RGB processing), as well as lens 912 shading compensation parameters.
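As an illustration of how control logic can turn image statistics into control parameters, the toy auto-exposure update below scales exposure time toward a target mean luminance and falls back to gain once exposure is capped; the target value, limits and function name are assumptions, not parameters from the patent.

```python
def update_exposure_control(mean_luma, target_luma=118.0,
                            current_exposure_us=10000.0, current_gain=1.0,
                            max_exposure_us=33000.0):
    """Toy auto-exposure update driven by the measured mean luminance."""
    if mean_luma <= 0:
        return current_exposure_us, current_gain
    ratio = target_luma / mean_luma
    new_exposure = current_exposure_us * ratio
    if new_exposure <= max_exposure_us:
        return new_exposure, current_gain
    # Exposure time is capped; make up the remainder with sensor gain.
    return max_exposure_us, current_gain * (new_exposure / max_exposure_us)
```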
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the program is executed. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), or the like.
The above-mentioned embodiments only express several implementations of the present application and are described in relative detail, but they are not to be construed as limiting the scope of the present application. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. An image compensation method, characterized in that the method comprises:
controlling a camera to acquire an image and acquiring exposure information of the acquired image; the camera comprises an optical image stabilization system, and the exposure information comprises shutter speed or sensitivity;
when the camera is detected to shake, acquiring the lens offset of the camera when the image is acquired;
determining a target calibration function matched with the exposure information according to the exposure information, and acquiring an image offset corresponding to the lens offset according to the target calibration function;
determining the exposure level of the exposure information, wherein the exposure level comprises primary exposure, secondary exposure and tertiary exposure;
determining a corresponding compensation strategy according to the exposure grade;
and compensating the image according to the image offset and the compensation strategy.
2. The method according to claim 1, wherein the determining a target calibration function matching the exposure information according to the exposure information and obtaining an image offset corresponding to the lens offset according to the target calibration function comprises:
constructing a mapping relation between standard exposure information of a plurality of gears and a plurality of preset calibration functions, wherein the standard exposure information of each gear corresponds to one preset calibration function;
determining the gear of the exposure information according to the standard exposure information of the plurality of gears;
determining the target calibration function matched with the gear of the exposure information in the plurality of preset calibration functions according to the mapping relation;
and determining the image offset corresponding to the lens offset according to the target calibration function.
3. The method according to claim 2, wherein before constructing the mapping relationship between the standard exposure information and the preset calibration function, the method further comprises:
and determining, for each gear of the standard exposure information, a preset calibration function matched with the standard exposure information.
4. The method of claim 1, wherein determining a corresponding compensation strategy based on the exposure level comprises:
when the exposure level is primary exposure, the corresponding compensation strategy is frame-by-frame or alternate-frame compensation;
when the exposure level is secondary exposure, the corresponding compensation strategy is block compensation;
and when the exposure level is tertiary exposure, the corresponding compensation strategy is progressive or interlaced compensation.
5. The method of claim 4, wherein, when the exposure level is secondary exposure, compensating the image according to the image offset and the compensation strategy comprises:
identifying the image to identify a region to be compensated and a non-compensation region, wherein the colors of all pixel points of the non-compensation region are the same, and the proportion of the non-compensation region in the image is greater than or equal to a preset value;
and compensating the area to be compensated according to the image offset and the compensation strategy.
6. The method according to claim 1, wherein when the camera is detected to shake, acquiring a lens shift amount of the camera when the image is acquired comprises:
when the camera is detected to shake, synchronously acquiring a plurality of shaking amounts of the camera when one frame of image is acquired;
controlling a motor to drive the camera lens to move according to the plurality of shaking amounts;
and determining the lens offset of the camera based on the Hall value of the Hall sensor, wherein the jitter amount and the Hall value are synchronously acquired.
7. An image compensation apparatus, characterized in that the apparatus comprises:
the exposure information acquisition module is used for controlling the camera to acquire images and acquiring exposure information of the acquired images; the camera comprises an optical image stabilization system, and the exposure information comprises shutter speed or sensitivity;
the camera lens offset acquisition module is used for acquiring the camera lens offset of the camera when the camera is detected to shake;
the image offset obtaining module is used for determining a target calibration function matched with the exposure information according to the exposure information and obtaining an image offset corresponding to the lens offset according to the target calibration function;
an image compensation module comprising:
the level determining unit is used for determining the exposure level of the exposure information, and the exposure level comprises primary exposure, secondary exposure and tertiary exposure;
the strategy unit is used for determining a corresponding compensation strategy according to the exposure level;
and the compensation unit is used for compensating the image according to the image offset and the compensation strategy.
8. The apparatus of claim 7, wherein the image offset acquisition module comprises:
the construction unit is used for constructing a mapping relation between standard exposure information of a plurality of gears and a plurality of preset calibration functions, wherein the standard exposure information of each gear corresponds to one preset calibration function;
the gear determining unit is used for determining gears of the exposure information according to the standard exposure information of the plurality of gears;
the matching unit is used for determining the target calibration function matched with the gear of the exposure information in the plurality of preset calibration functions according to the mapping relation;
and the offset acquisition unit is used for determining the image offset corresponding to the lens offset according to the target calibration function.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
10. An electronic device comprises an imaging device of an optical image stabilization system, a memory and a processor, wherein the imaging device comprises a camera which carries the optical image stabilization system; the memory has stored therein computer-readable instructions that, when executed by the processor, cause the processor to perform the steps of the method of any one of claims 1 to 6.
CN201811290317.8A 2018-10-31 2018-10-31 Image compensation method and apparatus, computer-readable storage medium, and electronic device Active CN109194877B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811290317.8A CN109194877B (en) 2018-10-31 2018-10-31 Image compensation method and apparatus, computer-readable storage medium, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811290317.8A CN109194877B (en) 2018-10-31 2018-10-31 Image compensation method and apparatus, computer-readable storage medium, and electronic device

Publications (2)

Publication Number Publication Date
CN109194877A CN109194877A (en) 2019-01-11
CN109194877B true CN109194877B (en) 2021-03-02

Family

ID=64941282

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811290317.8A Active CN109194877B (en) 2018-10-31 2018-10-31 Image compensation method and apparatus, computer-readable storage medium, and electronic device

Country Status (1)

Country Link
CN (1) CN109194877B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110012224B (en) * 2019-03-26 2021-07-09 Oppo广东移动通信有限公司 Camera anti-shake system, camera anti-shake method, electronic device, and computer-readable storage medium
WO2020227998A1 (en) 2019-05-15 2020-11-19 深圳市大疆创新科技有限公司 Image stability augmentation control method, photography device and movable platform
CN112129317B (en) * 2019-06-24 2022-09-02 南京地平线机器人技术有限公司 Information acquisition time difference determining method and device, electronic equipment and storage medium
CN110177223B (en) * 2019-06-28 2021-10-22 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment and computer readable storage medium
CN110166697B (en) * 2019-06-28 2021-08-31 Oppo广东移动通信有限公司 Camera anti-shake method and device, electronic equipment and computer readable storage medium
CN111028191B (en) * 2019-12-10 2023-07-04 上海闻泰电子科技有限公司 Anti-shake method and device for video image, electronic equipment and storage medium
CN113840093B (en) * 2020-06-24 2023-07-25 北京小米移动软件有限公司 Image generation method and device
CN115022540B (en) * 2022-05-30 2024-06-25 Oppo广东移动通信有限公司 Anti-shake control method, device and system and electronic equipment
CN115100209B (en) * 2022-08-28 2022-11-08 电子科技大学 Camera-based image quality correction method and correction system
CN115379123B (en) * 2022-10-26 2023-01-31 山东华尚电气有限公司 Transformer fault detection method for unmanned aerial vehicle inspection
CN115942620B (en) * 2023-01-09 2023-08-29 广州诺顶智能科技有限公司 Crimping machine control method and system, crimping machine and readable medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102098438A (en) * 2009-12-15 2011-06-15 索尼公司 Image capturing apparatus and image capturing method
CN102176106A (en) * 2008-07-15 2011-09-07 佳能株式会社 Image stabilization control apparatus and imaging apparatus
CN102455567A (en) * 2010-10-19 2012-05-16 佳能株式会社 Optical apparatus, image capturing apparatus, and method for controlling optical apparatus
CN103685950A (en) * 2013-12-06 2014-03-26 华为技术有限公司 Method and device for preventing shaking of video image

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102608736A (en) * 2006-07-21 2012-07-25 株式会社尼康 Zoom lens system, imaging apparatus, and method for zooming the zoom lens system

Also Published As

Publication number Publication date
CN109194877A (en) 2019-01-11

Similar Documents

Publication Publication Date Title
CN109194877B (en) Image compensation method and apparatus, computer-readable storage medium, and electronic device
CN108737734B (en) Image compensation method and apparatus, computer-readable storage medium, and electronic device
CN108769528B (en) Image compensation method and apparatus, computer-readable storage medium, and electronic device
CN109842753B (en) Camera anti-shake system, camera anti-shake method, electronic device and storage medium
CN109544620B (en) Image processing method and apparatus, computer-readable storage medium, and electronic device
CN109194876B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN110012224B (en) Camera anti-shake system, camera anti-shake method, electronic device, and computer-readable storage medium
EP3480783B1 (en) Image-processing method, apparatus and device
CN110290289B (en) Image noise reduction method and device, electronic equipment and storage medium
US9489747B2 (en) Image processing apparatus for performing object recognition focusing on object motion, and image processing method therefor
US10410061B2 (en) Image capturing apparatus and method of operating the same
CN109672819B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN110248106B (en) Image noise reduction method and device, electronic equipment and storage medium
CN110166707B (en) Image processing method, image processing apparatus, electronic device, and storage medium
CN109951638B (en) Camera anti-shake system, camera anti-shake method, electronic device, and computer-readable storage medium
CN110636216B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN110166706B (en) Image processing method, image processing apparatus, electronic device, and storage medium
CN101753825A (en) Image sensing apparatus
CN107872631B (en) Image shooting method and device based on double cameras and mobile terminal
EP2219366A1 (en) Image capturing device, image capturing method, and image capturing program
CN111246100B (en) Anti-shake parameter calibration method and device and electronic equipment
CN109951641B (en) Image shooting method and device, electronic equipment and computer readable storage medium
JP2015012482A (en) Image processing apparatus and image processing method
US8243154B2 (en) Image processing apparatus, digital camera, and recording medium
CN110519513B (en) Anti-shake method and apparatus, electronic device, computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant