WO2019237984A1 - Image correction method, electronic device and computer-readable storage medium - Google Patents

Image correction method, electronic device and computer-readable storage medium Download PDF

Info

Publication number
WO2019237984A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
offset
camera
electronic device
lens
Prior art date
Application number
PCT/CN2019/090227
Other languages
English (en)
French (fr)
Inventor
谭国辉
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Publication of WO2019237984A1

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682Vibration or motion blur correction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/681Motion detection
    • H04N23/6812Motion detection based on additional sensors, e.g. acceleration sensors

Definitions

  • the present application relates to the field of information technology, and in particular, to an image correction method, an electronic device, and a computer-readable storage medium.
  • the embodiments of the present application provide an image correction method, an electronic device, and a computer-readable storage medium, which can solve the problem of image shift caused by the OIS function in the prior art during dual-camera photography.
  • An embodiment of the present application provides an image correction method, including:
  • Acquiring at least one lens offset of an electronic device, the electronic device including a first camera and a second camera, and the first camera and/or the second camera having an optical image stabilization (OIS) mode; synchronously acquiring images collected by the electronic device through the first camera and the second camera in the OIS mode, and calculating, according to a preset OIS calibration function and the synchronously acquired lens offset, an image offset corresponding to the at least one lens offset; and correcting the collected images line by line using the image offset.
  • An embodiment of the present application further provides an image correction apparatus, including:
  • An obtaining unit configured to obtain at least one lens offset of an electronic device, the electronic device including a first camera and a second camera, and the first camera and / or the second camera having an optical image stabilization OIS mode;
  • a computing unit configured to synchronously acquire images collected by the electronic device through the first camera and the second camera in the OIS mode, and calculate, according to a preset OIS calibration function and the synchronously acquired lens offset, an image offset corresponding to the at least one lens offset;
  • a correction unit configured to correct the acquired image line by line using the image offset.
  • An embodiment of the present application further provides an electronic device including a memory and a processor.
  • the memory stores a computer program which, when executed by the processor, causes the processor to perform the operations of the image correction method described above.
  • An embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored.
  • when the computer program is executed by a processor, the operations of the foregoing image correction method are implemented.
  • FIG. 1 is a flowchart of an image correction method according to an embodiment
  • FIG. 2 is a schematic structural diagram of an electronic device with a dual camera in an embodiment
  • FIG. 3 is a schematic structural diagram of an electronic device according to an embodiment
  • FIG. 4 is a schematic diagram of an internal structure of an electronic device according to an embodiment
  • FIG. 5 is a block diagram of a partial structure of a mobile terminal related to an electronic device in an embodiment.
  • The terms “first”, “second”, and the like used in this application can be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another.
  • the first client may be referred to as the second client, and similarly, the second client may be referred to as the first client. Both the first client and the second client are clients, but they are not the same client.
  • An embodiment of the present application provides an image correction method. As shown in FIG. 1, the method includes:
  • the operations of the embodiments of the present application may be implemented by an electronic device.
  • the electronic device may be a device with dual cameras, including but not limited to a camera, a video camera, a mobile terminal (such as a smartphone), a tablet computer, a personal digital assistant (PDA), a portable device (such as a portable computer), a wearable device, etc.; the embodiments of the present application do not specifically limit this.
  • the electronic device includes a dual camera
  • the first camera and the second camera may be arranged side by side on the body of the electronic device.
  • the embodiment of the present application does not place any restrictions on the performance parameters (for example, focal length, aperture size, resolution, etc.) of the first camera and the second camera.
  • the first camera may be any one of a telephoto camera or a wide-angle camera.
  • the second camera may be any one of a telephoto camera or a wide-angle camera.
  • the first camera and the second camera may be disposed in the same plane of an electronic device (such as a mobile terminal), for example, at the same time on the back or front of the terminal.
  • the installation distance of the dual camera on the terminal may be determined according to the size of the terminal and / or the shooting effect.
  • the left and right cameras may be installed as close as possible, for example, within 10 mm.
  • the first image collected by the first camera 100 is transmitted to the first ISP processor 812 for processing.
  • after processing the first image, the first ISP processor 812 may send statistical data of the first image (such as image brightness, image contrast value, image color, etc.) to the control logic 820, and the control logic 820 can determine the control parameters of the first camera 100 according to the statistical data, so that the first camera 100 can perform operations such as autofocus and automatic exposure according to the control parameters.
  • the first image may be stored in the image memory 850 after being processed by the first ISP processor 812, and the first ISP processor 812 may also read the image stored in the image memory 850 for processing.
  • the first image can also be sent directly to the display 870 for display after being processed by the first ISP processor 812, and the display 870 can also read the image in the image memory 850 for display.
  • the processing flow of the second camera is the same as that of the first camera.
  • the functions of the image sensor and ISP processor are the same as described in the single-shot case.
  • first ISP processor and the second ISP processor may also be combined into a unified ISP processor, which processes the data of the first image sensor and the second image sensor respectively.
  • the CPU is connected to the logic controller, the first ISP processor, the second ISP processor, the image memory and the display, and the CPU is used to implement global control.
  • the power supply module is used to supply power to each module.
  • Generally, for an electronic device with dual cameras, both cameras work in certain photographing modes (for example, portrait mode and bokeh mode).
  • in this case, the CPU controls the power supply module to supply power to the first camera and the second camera.
  • when the image sensor in the first camera and the image sensor in the second camera are powered on, image capture and conversion can be performed.
  • in certain photographing modes, only one of the cameras works by default, for example, only the telephoto camera works.
  • in this case, the CPU controls the power supply module to supply power to the image sensor of the corresponding camera.
  • the dual cameras perceive distance by simulating the principle of human binocular vision: an object is observed from two points, images at different viewing angles are obtained, and the offset between matched pixels is calculated by triangulation based on the pixel matching relationship between the images, thereby obtaining the scene depth of the object.
  • when one or both of the dual cameras are equipped with OIS, the lens shift caused by OIS changes the calibration parameters of the dual cameras, which causes parallax problems and inaccurate scene depth calculation; a subsequent calibration function is therefore needed to correct or compensate the calculation of the target scene depth.
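  • To make the triangulation step concrete, the following is a minimal sketch (assumed by us, not taken from the patent) of how scene depth could be derived from the pixel disparity between the two cameras; the function name, the focal length in pixels, and the baseline value are illustrative assumptions:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_length_px, baseline_mm):
    """Classic stereo triangulation: depth = f * B / d.

    disparity_px    -- offset between matched pixels of the two images (pixels)
    focal_length_px -- focal length expressed in pixels
    baseline_mm     -- distance between the two camera lenses (mm)
    Returns depth in mm; zero disparity maps to infinity.
    """
    disparity = np.asarray(disparity_px, dtype=np.float64)
    with np.errstate(divide="ignore"):
        return focal_length_px * baseline_mm / disparity

# Example: 1.5 px disparity with f = 1200 px and a 10 mm baseline gives 8000 mm (8 m).
print(depth_from_disparity(1.5, 1200.0, 10.0))
```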
  • the OIS system includes Hall sensors, motors, and gyroscopes.
  • the gyroscope measures the angular velocity of the current electronic device in the multi-axis direction, and accordingly controls the motor to perform lens shift.
  • the Hall sensor can be used to measure, in real time, the Hall position information during the OIS shift.
  • based on the correspondence between the Hall position information and the lens movement amount (offset), the magnitude and direction of the lens movement at the current moment can be calculated.
  • the movement may be a movement of the first camera or the second camera in the X and/or Y direction.
  • before acquiring the lens offset, the method further includes: acquiring Hall position information of the electronic device, the Hall position information corresponding to the lens offset; and acquiring the lens offset according to the Hall position information.
  • for example, the Hall position information and the lens offset may have a linear calibration relationship satisfying the function f(x) = ay + b, where x and y represent the Hall position information and the lens offset, respectively.
  • when a = 1 and b = 0, the Hall position information is equal to the lens offset, so that acquiring the Hall position information also yields the lens offset; there may also be a nonlinear relationship such as a one-variable quadratic equation or a two-variable quadratic equation.
  • once the magnitude of the Hall position information is known, the magnitude of the lens offset at the current moment can be uniquely determined. In OIS systems, this lens offset is on the order of micrometres.
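  • As a minimal illustrative sketch (not part of the original disclosure), the linear case above can be inverted to recover the lens offset from a Hall reading; the function name and the assumption that the constants a and b are already calibrated are ours:

```python
def lens_offset_from_hall(hall_reading, a=1.0, b=0.0):
    """Linear Hall-to-offset calibration: hall = a * offset + b, so offset = (hall - b) / a.

    With a = 1 and b = 0 the Hall position information equals the lens offset,
    matching the special case mentioned above. Units are whatever the OIS
    driver reports (micrometres after calibration).
    """
    return (hall_reading - b) / a
```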
  • Synchronization means that the image and the lens offset are collected simultaneously in the same time period or at a time point, so that the image data and the lens offset data correspond in time.
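  • One way to realize this synchronization, shown only as an assumed sketch (the patent does not prescribe an implementation), is to time-stamp every lens-offset sample and keep the samples whose timestamps fall inside the exposure interval of the frame being corrected:

```python
def offsets_for_frame(samples, frame_start, frame_end):
    """Return the lens-offset samples captured while one frame was exposed.

    samples     -- iterable of (timestamp, offset) pairs, timestamps in seconds
    frame_start -- exposure start time of the frame (seconds)
    frame_end   -- exposure end time of the frame (seconds)
    """
    return [offset for t, offset in samples if frame_start <= t < frame_end]
```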
  • It should be noted that the OIS calibration function is obtained as follows: the same target reference object is photographed at different moments, and the image corresponding to the lens offset at each moment is acquired, the image containing at least one feature point; the at least one feature point is detected in the images, and the image offsets of the different images relative to the image at the initial moment are calculated from the positions of the feature point in the different images.
  • a calibration relation table of the lens offsets and the image offsets at the different moments is constructed, and the calibration relationship between the lens offset and the image offset is fitted according to the calibration relation table.
  • to fit the calibration relationship between the lens offset and the image offset, a calibration function model may be set to determine the calibration function satisfied by the lens offset and the image offset, and a fitted curve is drawn in a two-dimensional coordinate system to determine the calibration function that the current lens offset and image offset satisfy.
  • specifically, fitting the calibration relationship according to the calibration relation table may be: fitting the OIS calibration function of the lens offset and the image offset according to the table, and substituting the lens offsets and image offsets at the different moments into the calibration function model as input parameters to calculate the general expression of the OIS calibration function.
  • the OIS calibration function may be a linear equation in one variable, or a nonlinear quadratic equation in one or two variables, which is not limited in the embodiments of the present application.
  • Taking the two-variable quadratic equation f(Δx, Δy) = ax² + by² + cxy + dx + ey + f as an example: Δx and Δy are the image offsets, in pixels; x and y are the lens offsets along the X axis and the Y axis; and a, b, c, d, e, and f are parameters. To fit the correspondence between the lens offset and the image offset, the specific values of these six parameters must be determined, which requires six equations; that is, with Δx, Δy and x, y measurable, different values of Δx, Δy and x, y are substituted into the equation to solve for the six parameters.
  • For example, at moment t0 the OIS is initialized and on, and the camera position is at point O.
  • At moments t1 to t6, the OIS moves to the six points A(x1, y1), B(x2, y2), C(x3, y3), D(x4, y4), E(x5, y5), and F(x6, y6) respectively, and six images are captured. By measuring one or several feature points/feature blocks, the offset of the feature point/feature block in each image relative to point O, namely (Δx1, Δy1), (Δx2, Δy2), (Δx3, Δy3), (Δx4, Δy4), (Δx5, Δy5), and (Δx6, Δy6), can be obtained. Substituting these Δx, Δy and x, y data into the equation yields the specific values of the six parameters a, b, c, d, e, and f, so that the specific form of f(Δx, Δy) is determined.
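  • The six-point procedure above amounts to solving a 6×6 linear system for a, b, c, d, e, and f. As a hedged sketch (the function names are ours; the patent does not give code), one image-offset component can be fitted with NumPy as follows, and the same call repeated for the other component:

```python
import numpy as np

def fit_ois_calibration(lens_offsets, image_offsets):
    """Fit delta = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f from calibration points.

    lens_offsets  -- list of (x, y) lens offsets for points A..F
    image_offsets -- measured image offsets (one component, in pixels), one per point
    With exactly six points the system has a unique solution; with more points
    lstsq returns the least-squares fit.
    """
    A = np.array([[x * x, y * y, x * y, x, y, 1.0] for x, y in lens_offsets])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(image_offsets, dtype=np.float64), rcond=None)
    return coeffs  # a, b, c, d, e, f

def predict_image_offset(coeffs, x, y):
    """Evaluate the fitted calibration function for a given lens offset (x, y)."""
    a, b, c, d, e, f = coeffs
    return a * x * x + b * y * y + c * x * y + d * x + e * y + f
```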
  • the captured image is a composite of the images captured by the first camera and the second camera.
  • In addition, S102 and S103 may specifically be: synchronously acquiring angular velocity information of the gyroscope;
  • the angular velocity information corresponds to the Hall position information in time sequence.
  • the gyroscope is used to identify the amount of movement or tilt of the electronic device in different directions.
  • the OIS motor applies reverse shifts of different degrees according to the data given by the gyroscope at different moments, so that the shake of the body or lens caused by hand tremor is cancelled.
  • the acquired Hall position information is synchronized with the gyroscope data in time sequence.
  • among multiple angular velocity values, the angular velocity value with the smallest variance, or with the smallest sum, may be selected; the Hall value corresponding to that angular velocity value is used to calculate the calibrated image offset, and the image offset is used to compensate the image.
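  • A small sketch of this selection step, under our own assumptions about data layout (one gyro sample per Hall sample, and the smallest angular-velocity magnitude taken as the "quietest" moment):

```python
import numpy as np

def pick_stable_hall_value(angular_velocities, hall_values):
    """Pick the gyro sample with the smallest angular-velocity magnitude and
    return the time-aligned Hall value used to compute the calibrated image offset.

    angular_velocities -- array of shape (N, 2) or (N, 3), one row per sample
    hall_values        -- sequence of N Hall readings aligned with the gyro samples
    """
    w = np.asarray(angular_velocities, dtype=np.float64)
    idx = int(np.argmin(np.linalg.norm(w, axis=1)))  # quietest moment
    return hall_values[idx]
```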
  • Image correction is performed on the image using the at least one image offset (in pixels). For example, if the currently calculated image offset is a shift of 1 pixel in the positive X direction, then during image compensation each pixel row of the image is shifted by 1 pixel in the negative X direction, thereby realizing line-by-line correction of the image.
  • the number of image offsets corresponds one-to-one to the number of lens offsets, and when the number of image offsets is greater than or equal to the number of pixel rows of the image, correcting the image line by line using the image offsets may specifically be: assigning a different image offset to each pixel row of the image in turn, and correcting the pixel rows of the image one by one using the assigned image offsets.
  • compared with correcting the whole image with a single image offset, correcting the pixel rows one by one greatly improves the correction accuracy and fidelity, and noticeably improves the effect of subsequent background blurring.
  • For example, if the lens offset sampling frequency is 8 kHz and a frame is captured at 30 Hz, 533 lens offset samples, and hence 533 image offsets, are collected while one frame is captured. CMOS sensors use progressive (line-by-line) scan imaging; assuming a frame has 500 rows, 500 of the 533 offsets are selected, one per row, and image correction is performed row by row with the assigned data.
  • the 500 samples may be selected from the 533 in the order of acquisition, in descending order of mean square value, or in other ways.
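  • A minimal sketch of the row-by-row compensation described above (an assumed implementation: integer-pixel shifts only, offsets assigned in acquisition order):

```python
import numpy as np

def correct_rows(image, row_offsets_px):
    """Shift every pixel row by the negative of its assigned X-axis image offset.

    image          -- H x W (or H x W x C) array
    row_offsets_px -- one horizontal offset per row, in pixels; extra samples
                      (e.g. 533 offsets for 500 rows) are truncated in order
    """
    corrected = np.empty_like(image)
    for r in range(image.shape[0]):
        shift = -int(round(row_offsets_px[r]))      # compensate in the opposite direction
        corrected[r] = np.roll(image[r], shift, axis=0)
    return corrected
```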
  • In the embodiments of the present application, the calibration relationship between the lens offset and the image offset is used to correct the image shift during image capture or real-time preview, and the scene depth is calculated from the corrected image, eliminating the deviation introduced into the depth calculation and improving the effect of background blurring when OIS is turned on.
  • An embodiment of the present application further provides an image correction apparatus 20.
  • the image correction apparatus 20 includes:
  • the obtaining unit 21 is configured to obtain at least one lens offset of an electronic device, where the electronic device includes a first camera and a second camera, and the first camera or the second camera has an optical image stabilization OIS mode;
  • the OIS mode is implemented by the OIS system 210 provided on the first camera or the second camera.
  • the OIS system 210 may include:
  • the OIS controller 211, the gyroscope 211, the Hall sensor 212, the motor 213, and the camera 214.
  • the camera 214 includes a first camera and a second camera.
  • the first camera and the second camera can be located side by side on the front of the electronic device or on the back of the electronic device.
  • the arrangement can be horizontal or vertical.
  • the first camera and / or the second camera have an OIS system, and the first camera and the second camera each have a lens.
  • the Hall sensor 212 is a magnetic field sensor that performs displacement detection based on the Hall effect. It is used to obtain Hall position information, and to obtain the lens offset of the camera with the OIS system through the correspondence between the Hall position information and the lens offset.
  • the shift amount is the lens shift amount of the first camera and / or the second camera.
  • the gyroscope 211 is a positioning component based on the orientation movement of the electronic device in free space, and is used to obtain the angular velocity information of the electronic device when it shakes. It should be noted that the angular velocity information corresponds to the Hall position information in time sequence, that is, one angular value at a given moment corresponds to one piece of Hall position information.
  • the OIS controller 211 obtains the angular velocity information from the gyroscope 211, and converts the angular velocity information into a jitter amplitude of the electronic device, and sends the jitter amplitude to the motor 213 as a reference signal.
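  • As a rough sketch of the kind of conversion the OIS controller performs (the small-angle approximation and all names here are our assumptions, not the patent's formula):

```python
def lens_shift_from_gyro(omega_rad_s, dt_s, focal_length_mm):
    """Integrate angular velocity over one control interval and convert the
    resulting tilt into the lens shift needed to cancel it.

    Uses the small-angle approximation image_shift ~= f * theta, so the lens is
    driven by the opposite amount. Returns the shift in millimetres.
    """
    theta = omega_rad_s * dt_s          # tilt accumulated during the interval (rad)
    return -focal_length_mm * theta     # drive the lens opposite to the shake
```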
  • the motor 213 may be an OIS motor, and is used to move the lens of the camera with the OIS system according to the amplitude of the shake to ensure the sharpness of the image.
  • the OIS controller 211 may be the obtaining unit 21 in the embodiment of the present application, and acquires the target scene at the same time from the first camera and the second camera to obtain the first image and the second image, respectively. At the same time, the OIS controller 211 also obtains the Hall position information collected by the Hall sensor 212 and the angular velocity information collected by the gyroscope 211 at the same time, and outputs the information to the calculation unit.
  • the computing unit 22 may be a general-purpose processor (CPU), a graphics processor (GPU), or an image signal processor (ISP).
  • a computing unit 22 is configured to synchronously acquire images collected by the electronic device through the first camera and the second camera in the OIS mode, and calculate, according to a preset OIS calibration function and the synchronously obtained lens offset, the image offset corresponding to the at least one lens offset;
  • a correction unit 23 is configured to correct the acquired image line by line using the image offset.
  • the acquiring unit 21 is further configured to acquire angular velocity information of the gyroscope synchronously;
  • the obtaining unit 21 is further configured to select at least one of the angular velocity information, and obtain a lens offset corresponding to the at least one of the angular velocity information;
  • the calculation unit 22 is further configured to calculate an image offset corresponding to the at least one lens offset according to a preset OIS calibration function and the obtained lens offset;
  • the correction unit 23 is further configured to correct the image line by line using the image offset.
  • when the number of image offsets is greater than or equal to the number of pixel rows of the image, the correction unit corrects the image line by line using the image offsets, specifically:
  • the correction unit assigns a different image offset to each pixel row of the image in turn, and corrects the pixel rows of the image one by one using the assigned image offsets.
  • the acquiring unit is further configured to acquire Hall position information of the electronic device, where the Hall position information corresponds to the lens offset;
  • the position information of the Hall can be referred to the relevant parts of the foregoing embodiments, which will not be repeated here.
  • Each module in the above-mentioned image correction apparatus may be implemented in whole or in part by software, hardware, and a combination thereof.
  • the network interface may be an Ethernet card or a wireless network card.
  • the above modules may be embedded in hardware form in, or independent of, the processor in the server, or may be stored in software form in the memory of the server, so that the processor can call and execute the operations corresponding to the above modules.
  • FIG. 4 is a schematic diagram of an internal structure of an electronic device in an embodiment.
  • the terminal includes a processor, a memory, and a network interface connected through a system bus.
  • the processor is used to provide computing and control capabilities to support the operation of the entire electronic device.
  • the memory is used to store data, programs, and the like. At least one computer program is stored on the memory, and the computer program can be executed by a processor to implement the wireless network communication method applicable to the electronic device provided in the embodiments of the present application.
  • the memory may include a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system and a computer program.
  • the computer program may be executed by a processor to implement a method for scene depth calculation provided by each of the following embodiments.
  • the internal memory provides a cached running environment for the operating system and the computer program in the non-volatile storage medium.
  • the network interface may be an Ethernet card or a wireless network card, and is used to communicate with external electronic devices.
  • the electronic device may be a mobile terminal, a tablet computer, or a personal digital assistant or a wearable device.
  • An embodiment of the present application further provides a computer-readable storage medium.
  • One or more non-volatile computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the operations of the image correction method.
  • a computer program product containing instructions that, when run on a computer, causes the computer to perform a method of image correction.
  • An embodiment of the present application further provides an electronic device.
  • the electronic device may be any terminal device including a mobile terminal, a tablet computer, a personal digital assistant (PDA), a point-of-sale (POS) terminal, a vehicle-mounted computer, and a wearable device; a mobile terminal is taken as an example of the electronic device.
  • FIG. 5 is a block diagram of a partial structure of a mobile terminal related to an electronic device according to an embodiment of the present application.
  • the mobile terminal includes components such as a radio frequency (RF) circuit 510, a memory 520, an input unit 530, a display unit 540, a sensor 550, an audio circuit 560, a wireless fidelity (WiFi) module 570, a processor 580, and a power supply 590.
  • the RF circuit 510 may be used to receive and send signals during the transmission and reception of information or during a call.
  • the downlink information of the base station may be received and processed by the processor 580.
  • the uplink data may also be sent to the base station.
  • the RF circuit includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like.
  • the RF circuit 510 can also communicate with a network and other devices through wireless communication.
  • the above wireless communication can use any communication standard or protocol, including but not limited to Global System of Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), etc.
  • the memory 520 may be configured to store software programs and modules.
  • the processor 580 runs the software programs and modules stored in the memory 520 to execute various functional applications and data processing of the mobile terminal.
  • the memory 520 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, applications required for at least one function (such as an application for a sound playback function, an application for an image playback function, etc.), etc .;
  • the data storage area can store data (such as audio data, contacts, etc.) created according to the use of the mobile terminal.
  • the memory 520 may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other volatile solid-state storage devices.
  • the input unit 530 may be used to receive inputted numeric or character information, and generate key signal inputs related to user settings and function control of the mobile terminal 500.
  • the input unit 530 may include a touch panel 531 and other input devices 532.
  • the touch panel 531, also known as a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 531 with a finger, a stylus, or any suitable object or accessory), and drive the corresponding connection device according to a preset program.
  • the touch panel 531 may include two parts: a touch detection device and a touch controller.
  • the touch detection device detects the user's touch position and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 580, and can receive and execute commands sent by the processor 580.
  • the touch panel 531 may be implemented by using various types such as a resistive type, a capacitive type, an infrared type, and a surface acoustic wave.
  • the input unit 530 may include other input devices 532. Specifically, the other input devices 532 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.) and the like.
  • the display unit 540 may be used to display information input by the user or information provided to the user and various menus of the mobile terminal.
  • the display unit 540 may include a display panel 541.
  • the display panel 541 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an organic light emitting diode (Organic Light-Emitting Diode, OLED), or the like.
  • the touch panel 531 may cover the display panel 541. When the touch panel 531 detects a touch operation on or near it, it transmits the operation to the processor 580 to determine the type of the touch event, and the processor 580 then provides a corresponding visual output on the display panel 541 according to the type of the touch event.
  • although the touch panel 531 and the display panel 541 are implemented as two independent components to realize the input and output functions of the mobile terminal, in some embodiments the touch panel 531 and the display panel 541 may be integrated to realize the input and output functions of the mobile terminal.
  • the mobile terminal 500 may further include at least one sensor 550, such as a light sensor, a motion sensor, and other sensors.
  • the light sensor may include an ambient light sensor and a proximity sensor.
  • the ambient light sensor may adjust the brightness of the display panel 541 according to the brightness of the ambient light.
  • the proximity sensor may turn off the display panel 541 and/or the backlight when the mobile terminal is moved to the ear.
  • motion sensors can include an acceleration sensor, which can detect the magnitude of acceleration in all directions and can detect the magnitude and direction of gravity when stationary; it can be used for applications that recognize the posture of the mobile terminal (such as switching between landscape and portrait) and for vibration-recognition-related functions (such as a pedometer and tapping);
  • the mobile terminal can be configured with other sensors such as a gyroscope, barometer, hygrometer, thermometer, infrared sensor, and so on.
  • the audio circuit 560, the speaker 561, and the microphone 562 may provide an audio interface between the user and the mobile terminal.
  • the audio circuit 560 may convert the received audio data into an electrical signal and transmit it to the speaker 561, which converts it into a sound signal for output.
  • the microphone 562 converts the collected sound signal into an electrical signal, which is received by the audio circuit 560 and converted into audio data; after the audio data is processed by the processor 580, it can be sent to another mobile terminal via the RF circuit 510, or output to the memory 520 for further processing.
  • WiFi is a short-range wireless transmission technology.
  • a mobile terminal can help users send and receive email, browse web pages, and access streaming media through the WiFi module 570. It provides users with wireless broadband Internet access.
  • although FIG. 5 shows the WiFi module 570, it can be understood that it is not an essential component of the mobile terminal 500 and may be omitted as needed.
  • the processor 580 is the control center of the mobile terminal; it uses various interfaces and lines to connect the various parts of the entire mobile terminal, and performs the various functions of the mobile terminal and processes data by running or executing the software programs and/or modules stored in the memory 520 and calling the data stored in the memory 520, thereby monitoring the mobile terminal as a whole.
  • the processor 580 may include one or more processing units.
  • the processor 580 may integrate an application processor and a modem processor, wherein the application processor mainly processes an operating system, a user interface, and an application program, and the like; the modem processor mainly processes wireless communications. It can be understood that the foregoing modem processor may not be integrated into the processor 580.
  • the mobile terminal 500 also includes a power supply 590 (such as a battery) for supplying power to the various components.
  • the power supply can be logically connected to the processor 580 through a power management system, thereby implementing functions such as charging, discharging, and power consumption management through the power management system.
  • the mobile terminal 500 may further include a camera, a Bluetooth module, and the like.
  • the processor 580 included in the electronic device executes a computer program stored on a memory to perform an operation of a method for image correction.
  • Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM), which is used as external cache memory.
  • RAM is available in various forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
  • the program can be stored in a non-volatile computer-readable storage medium.
  • the storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

An image correction method includes: acquiring at least one lens offset of an electronic device, the electronic device including a first camera and a second camera, and the first camera or the second camera having an optical image stabilization (OIS) mode; synchronously acquiring images collected by the electronic device through the first camera and the second camera in the OIS mode, and calculating, according to a preset OIS calibration function and the synchronously acquired lens offset, an image offset corresponding to the at least one lens offset; and correcting the collected images line by line using the image offset.

Description

Image correction method, electronic device, and computer-readable storage medium
Cross-reference to related applications
This application claims priority to Chinese Patent Application No. 201810623848.8, filed with the Chinese Patent Office on June 15, 2018 and entitled "Image correction method, electronic device, and computer-readable storage medium", the entire contents of which are incorporated herein by reference.
Technical field
The present application relates to the field of information technology, and in particular to an image correction method, an electronic device, and a computer-readable storage medium.
Background
At present, depth calculation by means of dual cameras is being applied in more and more mobile phones, and optical image stabilization (OIS), as an important means of improving photo quality in low light, is also increasingly used in mobile phones. OIS works by moving the lens to compensate for shake so as to stabilize the image. However, even when the OIS function of a camera is enabled, image shifts still occur during photographing or real-time preview, and the prior art cannot solve the image shift caused by the OIS in the camera during photographing.
Summary
Embodiments of the present application provide an image correction method, an electronic device, and a computer-readable storage medium, which can solve the problem of image shift caused by the OIS function during dual-camera photographing in the prior art.
An embodiment of the present application provides an image correction method, including:
acquiring at least one lens offset of an electronic device, the electronic device including a first camera and a second camera, and the first camera and/or the second camera having an optical image stabilization (OIS) mode;
synchronously acquiring images collected by the electronic device through the first camera and the second camera in the OIS mode, and calculating, according to a preset OIS calibration function and the synchronously acquired lens offset, an image offset corresponding to the at least one lens offset; and
correcting the collected images line by line using the image offset.
An embodiment of the present application further provides an image correction apparatus, including:
an obtaining unit configured to acquire at least one lens offset of an electronic device, the electronic device including a first camera and a second camera, and the first camera and/or the second camera having an optical image stabilization (OIS) mode;
a computing unit configured to synchronously acquire images collected by the electronic device through the first camera and the second camera in the OIS mode, and calculate, according to a preset OIS calibration function and the synchronously acquired lens offset, an image offset corresponding to the at least one lens offset; and
a correction unit configured to correct the collected images line by line using the image offset.
An embodiment of the present application further provides an electronic device including a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the operations of the image correction method described above.
An embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the operations of the image correction method described above are implemented.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present application or the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are merely some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.
FIG. 1 is a flowchart of an image correction method according to an embodiment;
FIG. 2 is a schematic structural diagram of an electronic device with dual cameras according to an embodiment;
FIG. 3 is a schematic structural diagram of an electronic device according to an embodiment;
FIG. 4 is a schematic diagram of the internal structure of an electronic device according to an embodiment;
FIG. 5 is a block diagram of a partial structure of a mobile terminal related to an electronic device according to an embodiment.
Detailed description
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely intended to explain the present application and are not intended to limit it.
It can be understood that the terms "first", "second", and the like used in the present application may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, without departing from the scope of the present application, a first client may be referred to as a second client, and similarly, a second client may be referred to as a first client. Both the first client and the second client are clients, but they are not the same client.
An embodiment of the present application provides an image correction method. As shown in FIG. 1, the method includes:
S101: acquiring at least one lens offset of an electronic device, the electronic device including a first camera and a second camera, and the first camera or the second camera having an optical image stabilization (OIS) mode.
In the embodiments of the present application, the operations of the embodiments may be implemented by an electronic device. The electronic device may be a device with dual cameras, including but not limited to a camera, a video camera, a mobile terminal (such as a smartphone), a tablet computer, a personal digital assistant (PDA), a portable device (for example, a portable computer), a wearable device, and the like, which is not specifically limited in the embodiments of the present application. When the electronic device includes dual cameras, the first camera and the second camera may be arranged side by side on the body of the electronic device.
As shown in FIG. 2, the embodiments of the present application place no restrictions on the performance parameters (for example, focal length, aperture size, resolving power, etc.) of the first camera and the second camera. In some embodiments, the first camera may be either a telephoto camera or a wide-angle camera, and the second camera may be either a telephoto camera or a wide-angle camera. The first camera and the second camera may be disposed in the same plane of the electronic device (such as a mobile terminal), for example, both on the back or on the front of the terminal. The installation distance of the dual cameras on the terminal may be determined according to the size of the terminal and/or the shooting effect. In some embodiments, in order to obtain a high degree of overlap between the objects captured by the left and right cameras (the first camera and the second camera), the left and right cameras may be installed as close to each other as possible, for example, within 10 mm.
The first image collected by the first camera 100 is transmitted to the first ISP processor 812 for processing. After processing the first image, the first ISP processor 812 may send statistical data of the first image (such as image brightness, image contrast value, image color, etc.) to the control logic 820, and the control logic 820 may determine control parameters of the first camera 100 according to the statistical data, so that the first camera 100 can perform operations such as autofocus and automatic exposure according to the control parameters. The first image may be stored in the image memory 850 after being processed by the first ISP processor 812, and the first ISP processor 812 may also read the image stored in the image memory 850 for processing. In addition, the first image may be sent directly to the display 870 for display after being processed by the first ISP processor 812, and the display 870 may also read the image in the image memory 850 for display.
The processing flow of the second camera is the same as that of the first camera. The functions of the image sensor and the ISP processor are the same as described for the single-camera case.
It should be understood that the first ISP processor and the second ISP processor may also be combined into a unified ISP processor, which processes the data of the first image sensor and the second image sensor respectively.
In addition, although not shown in the figure, a CPU and a power supply module are also included. The CPU is connected to the logic controller, the first ISP processor, the second ISP processor, the image memory, and the display, and the CPU is used to implement global control. The power supply module is used to supply power to each module.
Generally, for an electronic device with dual cameras, both cameras work in certain photographing modes (for example, portrait mode and bokeh mode). In this case, the CPU controls the power supply module to supply power to the first camera and the second camera. When the image sensor in the first camera and the image sensor in the second camera are powered on, image capture and conversion can be performed. In certain photographing modes, only one of the cameras works by default, for example, only the telephoto camera works; in this case, the CPU controls the power supply module to supply power to the image sensor of the corresponding camera.
In the embodiments of the present application, the dual cameras perceive distance by simulating the principle of human binocular vision, that is, an object is observed from two points, images at different viewing angles are obtained, and the offset between pixels is calculated by triangulation based on the pixel matching relationship between the images, thereby obtaining the scene depth of the object. When one or both of the dual cameras are equipped with an OIS system, the lens shift caused by OIS changes the calibration parameters of the dual cameras, which causes parallax problems and in turn inaccurate scene depth calculation. Therefore, a subsequent calibration function is needed to correct or compensate the calculation of the target scene depth.
The OIS system includes a Hall sensor, a motor, and a gyroscope. The gyroscope measures the angular velocity of the current electronic device in multiple axis directions and accordingly controls the motor to shift the lens, while the Hall sensor can be used to measure, in real time, the Hall position information during the OIS shift. Based on the correspondence between the Hall position information and the lens movement amount (offset), the magnitude and direction of the lens movement at the current moment can be calculated. The movement may be a movement of the first camera or the second camera in the X and/or Y direction. The correspondence between the Hall position information and the lens offset includes but is not limited to: the Hall position information being equal to the lens offset, a linear relationship between the Hall position information and the lens offset, or a nonlinear relationship between the Hall position information and the lens offset. Therefore, in the embodiments of the present application, before the lens offset is acquired, the method further includes: acquiring Hall position information of the electronic device, the Hall position information corresponding to the lens offset; and acquiring the lens offset according to the Hall position information. For example, the Hall position information and the lens offset may have a linear calibration relationship satisfying the function f(x) = ay + b, where x and y denote the Hall position information and the lens offset respectively; for example, when a = 1 and b = 0, the Hall position information is equal to the lens offset, so that acquiring the Hall position information also yields the lens offset. There may also be a nonlinear relationship such as a one-variable quadratic equation or a two-variable quadratic equation. In the embodiments of the present application, once the magnitude of the Hall position information is known, the magnitude of the lens offset at the current moment can be uniquely determined. In an OIS system, the lens offset is on the order of micrometres.
S102: synchronously acquiring images collected by the electronic device through the first camera and the second camera in the OIS mode, and calculating, according to a preset OIS calibration function and the synchronously acquired lens offset, an image offset corresponding to the at least one lens offset.
Synchronization means that the images and the lens offsets are collected simultaneously in the same time period or at the same time point, so that the image data and the lens offset data correspond in time.
It should be noted that the OIS calibration function is obtained as follows:
photographing the same target reference object at different moments, and acquiring the image corresponding to the lens offset at each moment, the image containing at least one feature point;
detecting the at least one feature point in the images, and calculating, according to the positions of the feature point in the different images, the image offsets of the different images relative to the image at the initial moment;
constructing a calibration relation table of the lens offsets and the image offsets at the different moments, and fitting the calibration relationship between the lens offset and the image offset according to the calibration relation table. In the embodiments of the present application, fitting the calibration relationship between the lens offset and the image offset may be done by setting a calibration function model to determine the calibration function satisfied by the lens offset and the image offset, and drawing a fitted curve in a two-dimensional coordinate system by means of computational geometry techniques, thereby determining the calibration function satisfied by the current lens offset and image offset.
Specifically, fitting the calibration relationship between the lens offset and the image offset according to the calibration relation table may be:
fitting the OIS calibration function of the lens offset and the image offset according to the calibration relation table; and
substituting the lens offsets and the image offsets at the different moments into the calibration function model as input parameters, and calculating the general expression of the OIS calibration function.
For example, the OIS calibration function may be a linear equation in one variable, or a nonlinear quadratic equation in one or two variables, which is not limited in the embodiments of the present application. Taking the two-variable quadratic equation f(Δx, Δy) = ax² + by² + cxy + dx + ey + f as an example, Δx and Δy are the image offsets in pixels, x and y are the lens offsets along the X axis and the Y axis, and a, b, c, d, e, and f are parameters. In the present application, to fit the correspondence between the lens offset and the image offset, the specific values of the six parameters a, b, c, d, e, and f must be determined; measuring the six parameters requires six equations, that is, with Δx, Δy and x, y measurable, different values of Δx, Δy and x, y are substituted into the equation to solve for the six parameters. In other words, the same target object is photographed at different moments with different predetermined lens offsets, and Δx and Δy are determined from the displacement of the feature points (target points) in the captured images. For example, at moment t0 the OIS is in the initialized on state and the camera position is at point O; at moments t1 to t6 the OIS moves to the six points A(x1, y1), B(x2, y2), C(x3, y3), D(x4, y4), E(x5, y5), and F(x6, y6) respectively, and six images are captured. By measuring one or several feature points/feature blocks, the offset of the feature point/feature block in each image relative to point O, namely (Δx1, Δy1), (Δx2, Δy2), (Δx3, Δy3), (Δx4, Δy4), (Δx5, Δy5), and (Δx6, Δy6), can be obtained. Substituting these Δx, Δy and x, y data into the equation yields the specific values of the six parameters a, b, c, d, e, and f, so that the specific form of f(Δx, Δy) is determined.
S103: correcting the collected images line by line using the image offset.
The collected image is a composite of the images collected by the first camera and the second camera.
In addition, S102 and S103 may specifically be:
synchronously acquiring angular velocity information of a gyroscope;
that is, the angular velocity information corresponds to the Hall position information in time sequence. In the OIS system, the gyroscope is used to identify the amount of movement or tilt of the electronic device in different directions, and the OIS motor applies reverse shifts of different degrees according to the data given by the gyroscope at different moments, so as to ensure that the shake of the body or lens caused by hand tremor is cancelled. At this time, the acquired Hall position information is synchronized with the gyroscope data in time sequence.
selecting at least one piece of the angular velocity information, and acquiring the lens offset corresponding to the at least one piece of angular velocity information;
For example, among multiple angular velocity values, the angular velocity value with the smallest variance, or with the smallest sum, may be selected; the Hall value corresponding to that angular velocity value is used to calculate the calibrated image offset, and the image offset is used to compensate the image.
calculating, according to the preset OIS calibration function and the acquired lens offset, the image offset corresponding to the at least one lens offset; and
correcting the image using the at least one image offset (in pixels). For example, if the currently calculated image offset is a shift of 1 pixel in the positive X direction, then during image compensation each pixel row of the image is shifted by 1 pixel in the negative X direction, thereby realizing line-by-line correction of the image.
The number of image offsets corresponds one-to-one to the number of lens offsets. When the number of image offsets is greater than or equal to the number of pixel rows of the image, correcting the image line by line using the image offsets may specifically be:
assigning a different image offset to each pixel row of the image in turn; and
correcting the pixel rows of the image one by one using the assigned image offsets.
In the embodiments of the present application, compared with correcting the whole image with a single image offset, correcting the pixel rows one by one greatly improves the correction accuracy and fidelity, and noticeably improves the effect of subsequent background blurring.
For example, if the current lens offset sampling frequency is 8 kHz and one image frame is captured at 30 Hz, 533 lens offset samples, corresponding to 533 image offsets, are collected while one frame is captured. CMOS imaging uses progressive (line-by-line) scanning. Assuming a frame has 500 rows, the 533 image offsets exceed the 500 rows, so 500 of the 533 samples are selected, each corresponding to one row; that is, 500 of the 533 samples are assigned to the rows one by one, and image correction is performed row by row with the assigned data. In addition, the 500 samples may be selected from the 533 in the order of acquisition, in descending order of mean square value, or in other ways.
In the embodiments of the present application, through the calibration relationship between the lens offset and the image offset, the image shift is corrected during image capture or real-time preview, and the scene depth is calculated using the corrected images, which eliminates the deviation introduced into the depth calculation and improves the background blurring effect when OIS is enabled.
It should be understood that although the operations in the above flowchart are displayed in sequence as indicated by the arrows, these operations are not necessarily executed in that order. Unless explicitly stated herein, the execution of these operations is not strictly limited in order, and they may be executed in other orders. Moreover, at least some of the operations in the above figure may include multiple sub-operations or multiple stages, which are not necessarily executed and completed at the same moment but may be executed at different moments, and their execution order is not necessarily sequential; they may be executed in turn or alternately with other operations or with at least a part of the sub-operations or stages of other operations.
An embodiment of the present application further provides an image correction apparatus 20. As shown in FIG. 3, the image correction apparatus 20 includes:
an obtaining unit 21 configured to acquire at least one lens offset of an electronic device, the electronic device including a first camera and a second camera, and the first camera or the second camera having an optical image stabilization (OIS) mode.
The OIS mode is implemented by an OIS system 210 provided in the first camera or the second camera.
The OIS system 210 may include:
an OIS controller 211, a gyroscope 211, a Hall sensor 212, a motor 213, and a camera 214.
The camera 214 includes the first camera and the second camera. The first camera and the second camera may be located side by side on the front of the electronic device or side by side on the back of the electronic device, and the arrangement may be horizontal or vertical. The first camera and/or the second camera is provided with the OIS system, and the first camera and the second camera each have a lens.
The Hall sensor 212 is a magnetic field sensor that performs displacement detection based on the Hall effect. It is used to acquire Hall position information and, through the correspondence between the Hall position information and the lens offset, to acquire the lens offset of the camera equipped with the OIS system, that is, the lens offset of the first camera and/or the second camera.
The gyroscope 211 is a positioning component based on the orientation movement of the electronic device in free space, and is used to acquire angular velocity information when the electronic device shakes. It should be noted that the angular velocity information corresponds to the Hall position information in time sequence, that is, one angular value at a given moment corresponds to one piece of Hall position information.
The OIS controller 211 obtains the angular velocity information from the gyroscope 211, converts the angular velocity information into the shake amplitude of the electronic device, and sends the shake amplitude to the motor 213 as a reference signal.
The motor 213 may be an OIS motor, and is used to drive the lens of the camera equipped with the OIS system to move according to the shake amplitude, so as to ensure the sharpness of the image.
The OIS controller 211 may be the obtaining unit 21 in the embodiments of the present application; it obtains, from the first camera and the second camera, a first image and a second image of the target scene collected at the same moment. At the same time, the OIS controller 211 also obtains the Hall position information collected by the Hall sensor 212 and the angular velocity information collected by the gyroscope 211 at that moment, and outputs the information to the computing unit.
In the embodiments of the present application, the computing unit 22 may be a general-purpose processor (CPU), a graphics processor (GPU), or an image signal processor (ISP).
The computing unit 22 is configured to synchronously acquire the images collected by the electronic device through the first camera and the second camera in the OIS mode, and calculate, according to the preset OIS calibration function and the synchronously acquired lens offset, the image offset corresponding to the at least one lens offset.
The correction unit 23 is configured to correct the collected images line by line using the image offset.
Furthermore,
the obtaining unit 21 is further configured to synchronously acquire angular velocity information of the gyroscope;
the obtaining unit 21 is further configured to select at least one piece of the angular velocity information, and acquire the lens offset corresponding to the at least one piece of angular velocity information;
the computing unit 22 is further configured to calculate, according to the preset OIS calibration function and the acquired lens offset, the image offset corresponding to the at least one lens offset; and
the correction unit 23 is further configured to correct the image line by line using the image offset.
The number of image offsets corresponds one-to-one to the number of lens offsets. When the number of image offsets is greater than or equal to the number of pixel rows of the image, the correction unit corrects the image line by line using the image offsets, specifically:
the correction unit assigns a different image offset to each pixel row of the image in turn; and
corrects the pixel rows of the image one by one using the assigned image offsets.
Before acquiring the at least one lens offset of the electronic device,
the obtaining unit is further configured to acquire Hall position information of the electronic device, the Hall position information corresponding to the lens offset; and
to acquire the lens offset according to the Hall position information.
Specifically, for the Hall position information, the calibration relationship between the lens offset and the image offset, the line-by-line correction scheme, and related descriptions, reference may be made to the relevant parts of the foregoing embodiments, which are not repeated here.
Each module in the above image correction apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The network interface may be an Ethernet card, a wireless network card, or the like. The above modules may be embedded in hardware form in, or independent of, the processor in the server, or may be stored in software form in the memory of the server, so that the processor can call and execute the operations corresponding to the above modules.
FIG. 4 is a schematic diagram of the internal structure of an electronic device in an embodiment. As shown in FIG. 4, the terminal includes a processor, a memory, and a network interface connected through a system bus. The processor is used to provide computing and control capabilities to support the operation of the entire electronic device. The memory is used to store data, programs, and the like; at least one computer program is stored in the memory, and the computer program can be executed by the processor to implement the wireless network communication method applicable to the electronic device provided in the embodiments of the present application. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program may be executed by the processor to implement a scene depth calculation method provided by each of the following embodiments. The internal memory provides a cached running environment for the operating system and the computer program in the non-volatile storage medium. The network interface may be an Ethernet card, a wireless network card, or the like, and is used to communicate with external electronic devices. The electronic device may be a mobile terminal, a tablet computer, a personal digital assistant, a wearable device, or the like.
An embodiment of the present application further provides a computer-readable storage medium: one or more non-volatile computer-readable storage media containing computer-executable instructions which, when executed by one or more processors, cause the processors to perform the operations of the image correction method.
A computer program product containing instructions, when run on a computer, causes the computer to perform the image correction method.
An embodiment of the present application further provides an electronic device. As shown in FIG. 5, for convenience of description only the parts related to the embodiments of the present application are shown; for specific technical details that are not disclosed, refer to the method part of the embodiments of the present application. The electronic device may be any terminal device including a mobile terminal, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, a vehicle-mounted computer, a wearable device, and the like. A mobile terminal is taken as an example of the electronic device:
FIG. 5 is a block diagram of a partial structure of a mobile terminal related to the electronic device provided by an embodiment of the present application. Referring to FIG. 5, the mobile terminal includes components such as a radio frequency (RF) circuit 510, a memory 520, an input unit 530, a display unit 540, a sensor 550, an audio circuit 560, a wireless fidelity (WiFi) module 570, a processor 580, and a power supply 590. A person skilled in the art can understand that the mobile terminal structure shown in FIG. 5 does not constitute a limitation on the mobile terminal, which may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
The RF circuit 510 may be used to receive and send signals during the receiving and sending of information or during a call; it may receive downlink information from a base station and deliver it to the processor 580 for processing, and may also send uplink data to the base station. Generally, the RF circuit includes but is not limited to an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 510 may also communicate with networks and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System of Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
The memory 520 may be used to store software programs and modules, and the processor 580 executes various functional applications and data processing of the mobile terminal by running the software programs and modules stored in the memory 520. The memory 520 may mainly include a program storage area and a data storage area, where the program storage area may store the operating system and applications required for at least one function (such as an application for the sound playback function, an application for the image playback function, etc.), and the data storage area may store data created according to the use of the mobile terminal (such as audio data, an address book, etc.). In addition, the memory 520 may include a high-speed random access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another solid-state storage device.
The input unit 530 may be used to receive input digit or character information and to generate key signal inputs related to user settings and function control of the mobile terminal 500. Specifically, the input unit 530 may include a touch panel 531 and other input devices 532. The touch panel 531, also referred to as a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 531 with a finger, a stylus, or any suitable object or accessory), and drive the corresponding connection device according to a preset program. In an embodiment, the touch panel 531 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 580, and can receive and execute commands sent by the processor 580. In addition, the touch panel 531 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 531, the input unit 530 may also include other input devices 532. Specifically, the other input devices 532 may include but are not limited to one or more of a physical keyboard, function keys (such as volume control keys, a power key, etc.), and the like.
The display unit 540 may be used to display information input by the user or information provided to the user, as well as the various menus of the mobile terminal. The display unit 540 may include a display panel 541. In an embodiment, the display panel 541 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like. In an embodiment, the touch panel 531 may cover the display panel 541. When the touch panel 531 detects a touch operation on or near it, the touch operation is transmitted to the processor 580 to determine the type of the touch event, and the processor 580 then provides a corresponding visual output on the display panel 541 according to the type of the touch event. Although in FIG. 5 the touch panel 531 and the display panel 541 are implemented as two independent components to realize the input and output functions of the mobile terminal, in some embodiments the touch panel 531 and the display panel 541 may be integrated to realize the input and output functions of the mobile terminal.
The mobile terminal 500 may also include at least one sensor 550, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, where the ambient light sensor may adjust the brightness of the display panel 541 according to the brightness of the ambient light, and the proximity sensor may turn off the display panel 541 and/or the backlight when the mobile terminal is moved to the ear. The motion sensor may include an acceleration sensor, which can detect the magnitude of acceleration in all directions and can detect the magnitude and direction of gravity when stationary; it can be used for applications that recognize the posture of the mobile terminal (such as switching between landscape and portrait), vibration-recognition-related functions (such as a pedometer and tapping), and the like. In addition, the mobile terminal may also be equipped with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor.
The audio circuit 560, the speaker 561, and the microphone 562 may provide an audio interface between the user and the mobile terminal. The audio circuit 560 may convert the received audio data into an electrical signal and transmit it to the speaker 561, which converts it into a sound signal for output; on the other hand, the microphone 562 converts the collected sound signal into an electrical signal, which is received by the audio circuit 560 and converted into audio data; after the audio data is processed by the processor 580, it can be sent to another mobile terminal via the RF circuit 510, or output to the memory 520 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 570, the mobile terminal can help the user send and receive e-mail, browse web pages, access streaming media, and so on; it provides the user with wireless broadband Internet access. Although FIG. 5 shows the WiFi module 570, it can be understood that it is not an essential component of the mobile terminal 500 and may be omitted as needed.
The processor 580 is the control center of the mobile terminal. It uses various interfaces and lines to connect all parts of the entire mobile terminal, and performs various functions of the mobile terminal and processes data by running or executing the software programs and/or modules stored in the memory 520 and calling the data stored in the memory 520, thereby monitoring the mobile terminal as a whole. In an embodiment, the processor 580 may include one or more processing units. In an embodiment, the processor 580 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, applications, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 580.
The mobile terminal 500 also includes a power supply 590 (such as a battery) for supplying power to the various components. Preferably, the power supply may be logically connected to the processor 580 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
In an embodiment, the mobile terminal 500 may also include a camera, a Bluetooth module, and the like.
In the embodiments of the present application, the processor 580 included in the electronic device performs the operations of the image correction method when executing the computer program stored in the memory.
Any reference to a memory, storage, a database, or another medium used in the present application may include non-volatile and/or volatile memory. The non-volatile memory may include a read-only memory (ROM), a programmable ROM (PROM), an electrically programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), or a flash memory. The volatile memory may include a random access memory (RAM), which is used as an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
A person of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware. The program may be stored in a non-volatile computer-readable storage medium, and when executed, the program may include the processes of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), or the like.
The technical features of the above embodiments can be combined arbitrarily. For conciseness of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as the combinations of these technical features are not contradictory, they should be considered within the scope of this specification.
The above embodiments only express several implementations of the present application, and their descriptions are relatively specific and detailed, but they should not be construed as limiting the scope of the invention patent. It should be pointed out that for a person of ordinary skill in the art, several modifications and improvements can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (16)

  1. An image correction method, characterized by comprising:
    acquiring at least one lens offset of an electronic device, the electronic device comprising a first camera and a second camera, and the first camera and/or the second camera having an optical image stabilization (OIS) mode;
    synchronously acquiring images collected by the electronic device through the first camera and the second camera in the OIS mode, and calculating, according to a preset OIS calibration function and the synchronously acquired lens offset, an image offset corresponding to the at least one lens offset; and
    correcting the collected images line by line using the image offset.
  2. The method according to claim 1, characterized in that the method further comprises:
    synchronously acquiring angular velocity information of a gyroscope;
    selecting at least one piece of the angular velocity information, and acquiring a lens offset corresponding to the at least one piece of angular velocity information;
    calculating, according to the preset OIS calibration function and the acquired lens offset, the image offset corresponding to the at least one lens offset; and
    correcting the image line by line using the image offset.
  3. The method according to claim 2, characterized in that the number of image offsets corresponds one-to-one to the number of lens offsets, and when the number of image offsets is greater than or equal to the number of pixel rows of the image, correcting the image line by line using the image offsets comprises:
    assigning a different image offset to each pixel row of the image in turn; and
    correcting the pixel rows of the image one by one using the assigned image offsets.
  4. The method according to claim 1, characterized in that before acquiring the at least one lens offset of the electronic device, the method further comprises:
    acquiring Hall position information of the electronic device, the Hall position information corresponding to the lens offset; and
    acquiring the lens offset according to the Hall position information.
  5. The method according to claim 1, characterized in that the method further comprises:
    calculating the scene depth information according to the corrected image.
  6. An image correction apparatus, characterized in that the apparatus comprises:
    an obtaining unit configured to acquire at least one lens offset of an electronic device, the electronic device comprising a first camera and a second camera, and the first camera and/or the second camera having an optical image stabilization (OIS) mode;
    a computing unit configured to synchronously acquire images collected by the electronic device through the first camera and the second camera in the OIS mode, and calculate, according to a preset OIS calibration function and the synchronously acquired lens offset, an image offset corresponding to the at least one lens offset; and
    a correction unit configured to correct the collected images line by line using the image offset.
  7. The apparatus according to claim 6, characterized in that
    the obtaining unit is further configured to synchronously acquire angular velocity information of a gyroscope;
    the obtaining unit is further configured to select at least one piece of the angular velocity information, and acquire a lens offset corresponding to the at least one piece of angular velocity information;
    the computing unit is further configured to calculate, according to the preset OIS calibration function and the acquired lens offset, the image offset corresponding to the at least one lens offset; and
    the correction unit is further configured to correct the image line by line using the image offset.
  8. The apparatus according to claim 7, characterized in that the number of image offsets corresponds one-to-one to the number of lens offsets, and when the number of image offsets is greater than or equal to the number of pixel rows of the image, the correction unit correcting the image line by line using the image offsets comprises:
    the correction unit assigning a different image offset to each pixel row of the image in turn; and
    correcting the pixel rows of the image one by one using the assigned image offsets.
  9. The apparatus according to claim 6, characterized in that before the at least one lens offset of the electronic device is acquired,
    the obtaining unit is further configured to acquire Hall position information of the electronic device, the Hall position information corresponding to the lens offset; and
    to acquire the lens offset according to the Hall position information.
  10. The apparatus according to claim 6, characterized in that the computing unit is further configured to:
    calculate the scene depth information according to the corrected image.
  11. An electronic device comprising a memory and a processor, the memory storing a computer program, characterized in that, when the computer program is executed by the processor, the processor implements the following operations:
    acquiring at least one lens offset of an electronic device, the electronic device comprising a first camera and a second camera, and the first camera and/or the second camera having an optical image stabilization (OIS) mode;
    synchronously acquiring images collected by the electronic device through the first camera and the second camera in the OIS mode, and calculating, according to a preset OIS calibration function and the synchronously acquired lens offset, an image offset corresponding to the at least one lens offset; and
    correcting the collected images line by line using the image offset.
  12. The electronic device according to claim 10, characterized in that, when the computer program is executed by the processor, the processor implements the following operations:
    synchronously acquiring angular velocity information of a gyroscope;
    selecting at least one piece of the angular velocity information, and acquiring a lens offset corresponding to the at least one piece of angular velocity information;
    calculating, according to the preset OIS calibration function and the acquired lens offset, the image offset corresponding to the at least one lens offset; and
    correcting the image line by line using the image offset.
  13. The electronic device according to claim 12, characterized in that, when the computer program is executed by the processor, the processor implements the following operations:
    the number of image offsets corresponds one-to-one to the number of lens offsets, and when the number of image offsets is greater than or equal to the number of pixel rows of the image, correcting the image line by line using the image offsets comprises:
    assigning a different image offset to each pixel row of the image in turn; and
    correcting the pixel rows of the image one by one using the assigned image offsets.
  14. The electronic device according to claim 10, characterized in that, when the computer program is executed by the processor, the processor implements the following operations:
    before acquiring the at least one lens offset of the electronic device, the method further comprises:
    acquiring Hall position information of the electronic device, the Hall position information corresponding to the lens offset; and
    acquiring the lens offset according to the Hall position information.
  15. The electronic device according to claim 10, characterized in that, when the computer program is executed by the processor, the processor implements the following operations:
    calculating the scene depth information according to the corrected image.
  16. A computer-readable storage medium on which a computer program is stored, characterized in that, when the computer program is executed by a processor, the operations of the image correction method according to any one of claims 1 to 5 are implemented.
PCT/CN2019/090227 2018-06-15 2019-06-06 图像校正方法、电子设备及计算机可读存储介质 WO2019237984A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810623848.8A CN108737735B (zh) 2018-06-15 2018-06-15 图像校正方法、电子设备及计算机可读存储介质
CN201810623848.8 2018-06-15

Publications (1)

Publication Number Publication Date
WO2019237984A1 true WO2019237984A1 (zh) 2019-12-19

Family

ID=63929837

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/090227 WO2019237984A1 (zh) 2018-06-15 2019-06-06 图像校正方法、电子设备及计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN108737735B (zh)
WO (1) WO2019237984A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114815257A (zh) * 2022-04-25 2022-07-29 歌尔股份有限公司 一种xr眼镜及摄像头调整方法、系统、设备、介质
CN115022540A (zh) * 2022-05-30 2022-09-06 Oppo广东移动通信有限公司 防抖控制方法、装置及系统、电子设备
EP4087230A4 (en) * 2019-12-31 2023-05-03 Vivo Mobile Communication Co., Ltd. IMAGE PROCESSING METHOD AND ELECTRONIC DEVICE

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108737735B (zh) * 2018-06-15 2019-09-17 Oppo广东移动通信有限公司 图像校正方法、电子设备及计算机可读存储介质
CN109598764B (zh) * 2018-11-30 2021-07-09 Oppo广东移动通信有限公司 摄像头标定方法和装置、电子设备、计算机可读存储介质
CN109493391A (zh) * 2018-11-30 2019-03-19 Oppo广东移动通信有限公司 摄像头标定方法和装置、电子设备、计算机可读存储介质
CN109671028B (zh) * 2018-11-30 2023-04-11 Oppo广东移动通信有限公司 图像处理方法和装置、电子设备、计算机可读存储介质
CN109600548B (zh) * 2018-11-30 2021-08-31 Oppo广东移动通信有限公司 图像处理方法和装置、电子设备、计算机可读存储介质
CN109714536B (zh) * 2019-01-23 2021-02-23 Oppo广东移动通信有限公司 图像校正方法、装置、电子设备及计算机可读存储介质
CN109909801B (zh) * 2019-03-13 2020-05-22 湖北文理学院 旋转台误差校正方法、装置及电子设备
CN109951638B (zh) * 2019-03-26 2021-02-02 Oppo广东移动通信有限公司 摄像头防抖系统、方法、电子设备和计算机可读存储介质
CN110072049B (zh) * 2019-03-26 2021-11-09 Oppo广东移动通信有限公司 图像处理方法和装置、电子设备、计算机可读存储介质
CN109951640A (zh) * 2019-03-26 2019-06-28 Oppo广东移动通信有限公司 摄像头防抖方法和系统、电子设备、计算机可读存储介质
CN110175960B (zh) * 2019-05-21 2021-04-13 Oppo广东移动通信有限公司 图像校正方法、装置、电子设备以及存储介质
CN110233969B (zh) * 2019-06-26 2021-03-30 Oppo广东移动通信有限公司 图像处理方法和装置、电子设备、计算机可读存储介质
CN111833394A (zh) * 2020-07-27 2020-10-27 深圳惠牛科技有限公司 摄像头校准方法、基于双目测量装置的测量方法
CN112261262B (zh) * 2020-10-21 2022-06-17 维沃移动通信有限公司 图像校准方法和装置、电子设备和可读存储介质
CN112911091B (zh) * 2021-03-23 2023-02-24 维沃移动通信(杭州)有限公司 多点激光器的参数调整方法、装置和电子设备
CN115134505A (zh) * 2021-03-25 2022-09-30 北京小米移动软件有限公司 预览画面生成方法及装置、电子设备、存储介质
CN114745499A (zh) * 2022-03-21 2022-07-12 维沃移动通信有限公司 拍摄装置的控制方法、控制装置、拍摄装置和电子设备
CN116074628B (zh) * 2022-05-30 2023-10-20 荣耀终端有限公司 一种数据处理方法及电子设备

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130169761A1 (en) * 2010-07-27 2013-07-04 Panasonic Corporation Image capturing device
CN103685950A (zh) * 2013-12-06 2014-03-26 华为技术有限公司 一种视频图像防抖方法及装置
CN106791363A (zh) * 2015-11-23 2017-05-31 鹦鹉无人机股份有限公司 配备发送校正了晃动效应的图像序列的摄像机的无人机
CN107223330A (zh) * 2016-01-12 2017-09-29 华为技术有限公司 一种深度信息获取方法、装置及图像采集设备
CN107852462A (zh) * 2015-07-22 2018-03-27 索尼公司 相机模块、固体摄像元件、电子设备和摄像方法
CN108737735A (zh) * 2018-06-15 2018-11-02 Oppo广东移动通信有限公司 图像校正方法、电子设备及计算机可读存储介质

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107730462A (zh) * 2017-09-30 2018-02-23 努比亚技术有限公司 一种图像处理方法、终端及计算机可读存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130169761A1 (en) * 2010-07-27 2013-07-04 Panasonic Corporation Image capturing device
CN103685950A (zh) * 2013-12-06 2014-03-26 华为技术有限公司 一种视频图像防抖方法及装置
CN107852462A (zh) * 2015-07-22 2018-03-27 索尼公司 相机模块、固体摄像元件、电子设备和摄像方法
CN106791363A (zh) * 2015-11-23 2017-05-31 鹦鹉无人机股份有限公司 配备发送校正了晃动效应的图像序列的摄像机的无人机
CN107223330A (zh) * 2016-01-12 2017-09-29 华为技术有限公司 一种深度信息获取方法、装置及图像采集设备
CN108737735A (zh) * 2018-06-15 2018-11-02 Oppo广东移动通信有限公司 图像校正方法、电子设备及计算机可读存储介质

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4087230A4 (en) * 2019-12-31 2023-05-03 Vivo Mobile Communication Co., Ltd. IMAGE PROCESSING METHOD AND ELECTRONIC DEVICE
CN114815257A (zh) * 2022-04-25 2022-07-29 歌尔股份有限公司 一种xr眼镜及摄像头调整方法、系统、设备、介质
CN115022540A (zh) * 2022-05-30 2022-09-06 Oppo广东移动通信有限公司 防抖控制方法、装置及系统、电子设备

Also Published As

Publication number Publication date
CN108737735B (zh) 2019-09-17
CN108737735A (zh) 2018-11-02

Similar Documents

Publication Publication Date Title
WO2019237984A1 (zh) 图像校正方法、电子设备及计算机可读存储介质
EP3582487B1 (en) Image stabilisation
CN109348125B (zh) 视频校正方法、装置、电子设备和计算机可读存储介质
CN108388849B (zh) 调整终端的显示图像的方法和装置、电子设备、存储介质
KR101712301B1 (ko) 화면을 촬영하기 위한 방법 및 디바이스
US11539887B2 (en) Video image anti-shake method and terminal
CN109688322B (zh) 一种生成高动态范围图像的方法、装置及移动终端
EP3062286B1 (en) Optical distortion compensation
CN107948505B (zh) 一种全景拍摄方法及移动终端
CN109922253B (zh) 镜头防抖方法及装置、移动设备
CN112414400B (zh) 一种信息处理方法、装置、电子设备和存储介质
CN112188082A (zh) 高动态范围图像拍摄方法、拍摄装置、终端及存储介质
CN114290338B (zh) 二维手眼标定方法、设备、存储介质及程序产品
CN108769529B (zh) 一种图像校正方法、电子设备及计算机可读存储介质
US20170302908A1 (en) Method and apparatus for user interaction for virtual measurement using a depth camera system
WO2021136181A1 (zh) 图像处理方法及电子设备
WO2019179413A1 (zh) 景深图像生成方法及移动终端
CN115134527B (zh) 处理方法、智能终端及存储介质
CN109785226B (zh) 一种图像处理方法、装置及终端设备
CN114339023B (zh) 用于相机模组的防抖检测方法、装置及介质
CN112070681B (zh) 图像处理方法及装置
CN111986097B (zh) 图像处理方法及装置
CN111985280B (zh) 图像处理方法及装置
CN108234867B (zh) 图像处理方法及移动终端
CN118135042A (zh) 图像生成方法、装置、电子设备和存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19818920

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19818920

Country of ref document: EP

Kind code of ref document: A1