WO2020088134A1 - Video correction method, apparatus, electronic device and computer-readable storage medium - Google Patents

Video correction method, apparatus, electronic device and computer-readable storage medium

Info

Publication number
WO2020088134A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
offset
key frame
frame
images
Prior art date
Application number
PCT/CN2019/106389
Other languages
English (en)
French (fr)
Inventor
谭国辉
Original Assignee
Guangdong Oppo Mobile Telecommunications Co., Ltd. (Oppo广东移动通信有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Co., Ltd. (Oppo广东移动通信有限公司)
Publication of WO2020088134A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N 23/682 Vibration or motion blur correction
    • H04N 23/683 Vibration or motion blur correction performed by a processor, e.g. controlling the readout of an image memory
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 Camera processing pipelines; Components thereof

Definitions

  • This application relates to the field of information technology, and in particular to a video correction method, device, electronic device, and computer-readable storage medium.
  • a video correction method, apparatus, electronic device, and computer-readable storage medium are provided.
  • a video correction method is provided, applied to an electronic device with a camera.
  • An embodiment of the present application also provides a video correction device, including:
  • an acquisition module, configured to control the camera to continuously acquire multiple frames of images, the camera including an optical image stabilization system;
  • a first obtaining module, configured to obtain the lens offset of each frame of image collected when the camera shakes;
  • a second obtaining module, configured to obtain the image offset corresponding to the lens offset according to a preset calibration function and the lens offset;
  • an identification module, configured to identify each frame of image and obtain the key frame images among the multiple frames of images;
  • a correction module, configured to correct the key frame images according to the image offset.
  • An embodiment of the present application further provides an electronic device, including a memory and a processor, where a computer program is stored in the memory; when the computer program is executed by the processor, it causes the processor to perform the steps of the video correction method described above.
  • An embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the above method are implemented.
  • FIG. 1 is a flowchart of a video correction method provided in an embodiment
  • FIG. 2 is a flowchart of a method for obtaining a calibration function provided in an embodiment
  • FIG. 3 is a flowchart of determining key frame images and non-key frame images in an embodiment
  • FIG. 4 is a flowchart of the block correction of the key frame image in one embodiment
  • FIG. 5 is a flowchart of correcting non-key frame images in an embodiment
  • FIG. 6 is a schematic structural diagram of a video correction device provided in an embodiment
  • FIG. 7 is a schematic diagram of an internal structure of an electronic device in an embodiment
  • FIG. 8 is a block diagram of a partial structure of a mobile terminal related to an electronic device in an embodiment.
  • The terms “first”, “second”, etc. used in this application may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another.
  • the first acquisition module may be referred to as the second acquisition module, and similarly, the second acquisition module may be referred to as the first acquisition module. Both the first acquisition module and the second acquisition module are acquisition modules, but they are not the same acquisition module.
  • FIG. 1 is a flowchart of a video correction method in an embodiment. As shown in FIG. 1, the video correction method provided by the embodiment of the present application is applied to an electronic device with a camera, and includes steps 110 to 150.
  • Step 110 Control the camera to continuously collect multiple frames of images, the camera including an optical image stabilization system.
  • the electronic device is a device equipped with a camera, including but not limited to a camera, a video camera, a mobile terminal (such as a smartphone), a tablet (pad), a personal digital assistant (PDA), a portable device (such as a portable computer), a wearable device, etc., which is not specifically limited in the embodiments of the present application.
  • the electronic device includes dual cameras, the first camera and the second camera may be arranged side by side on the body of the electronic device.
  • the video correction method provided by the embodiments of the present application can correct the video during shooting, so that the current frame seen by the user has little or no jitter (real-time anti-shake correction); it can also perform shake correction on a finished video (non-real-time anti-shake correction).
  • the optical image stabilization (OIS) system may include a Hall sensor, a motor, and a gyroscope.
  • the gyroscope measures the angular velocity of the electronic device in multiple axis directions, and the motor is controlled accordingly to offset the lens;
  • the Hall sensor can be used to measure the Hall position information in real time while the OIS system offsets the lens;
  • the magnitude and direction of the lens movement at the current moment are calculated from the correspondence between the Hall position information and the lens shift amount (movement amount);
  • the movement may be movement of the first camera or the second camera in the X and/or Y directions;
  • the Hall position information may be equal to the lens offset, may have a linear relationship with the lens offset, or may have a nonlinear relationship with the lens offset.
  • Step 120 Obtain the lens offset of each frame of the image collected when the camera shakes.
  • the electronic device controls the camera to continuously acquire multiple frames of images to obtain a video stream, and simultaneously obtains the lens offset corresponding to each frame of image when the camera shakes.
  • Synchronization means that the image and the lens offset are collected simultaneously in the same time period or time point, so that the image data and the lens offset data correspond in time sequence.
  • obtaining the lens offset of each frame of image collected when the camera shakes includes: obtaining a plurality of angular velocity information samples from the gyroscope when the camera shakes; selecting at least one of the angular velocity samples; obtaining at least one Hall value corresponding in time sequence to the selected angular velocity sample; and calculating the lens offset corresponding to that Hall value.
  • that is, the angular velocity information of the gyroscope when the camera shakes is acquired, where the angular velocity information corresponds to the Hall values in time sequence; at least one angular velocity sample is selected, the corresponding Hall value is obtained, and the lens offset corresponding to that Hall value is calculated.
  • at least one angular velocity value is selected from the multiple angular velocity values (angular velocity information), and the selection can be made in different ways.
  • For example, the weighted values of the angular velocity values can be sorted from large to small (or from small to large) and the first N values selected (N is a positive integer); alternatively, the mean square values of the angular velocities can be sorted from large to small (or from small to large) and the first N values selected.
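The selection step above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name, the top-N-by-magnitude policy, and the linear Hall-to-offset scale factor are all assumptions (the text notes the Hall-to-offset relation may also be nonlinear).

```python
def lens_offsets_from_gyro(samples, n=3, hall_to_um=0.5):
    """Pick the N largest-magnitude angular-velocity samples and convert
    their time-aligned Hall values to lens offsets (microns).

    samples: list of (angular_velocity, hall_value) pairs, time-aligned.
    hall_to_um: assumed linear Hall-to-offset scale factor (illustrative).
    """
    # Sort by angular-velocity magnitude, largest first, and keep the top N.
    top = sorted(samples, key=lambda s: abs(s[0]), reverse=True)[:n]
    # Convert each selected sample's Hall value to a lens offset.
    return [hall * hall_to_um for _, hall in top]
```

For example, with three gyroscope samples and n=2, the two samples with the largest angular velocities are kept and their Hall values converted.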
  • a set of data including at least one lens offset is correspondingly acquired for each frame of image.
  • the first frame image acquires the first group lens offset data
  • the second frame image acquires the second group lens offset data, and so on
  • each frame image acquires a corresponding set of image offset data.
  • for example, if the current lens offset acquisition frequency is 8 kHz and a frame of image is captured at 30 Hz, then capturing one frame of image simultaneously collects 266 lens offset samples, which corresponds to a set of data including 266 image offsets.
  • the electronic device includes dual cameras
  • if the current lens offset acquisition frequency is 8 kHz and a frame of image is captured at 30 Hz, then capturing one frame of image simultaneously collects 533 lens offset samples, which corresponds to a set of data including 533 image offsets.
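The sample counts quoted above follow directly from the ratio of the two frequencies. A one-line sketch (the function name is illustrative):

```python
def offsets_per_frame(sample_hz, frame_hz, cameras=1):
    # Number of lens-offset samples collected while one frame is captured:
    # integer division of the sampling rate by the frame rate, per camera.
    return cameras * sample_hz // frame_hz
```

With 8 kHz sampling at 30 frames per second this gives 266 samples per frame for a single camera and 533 for dual cameras, matching the figures in the text.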
  • the magnitude and direction of the lens movement amount at the current moment can be calculated according to the correspondence between the Hall position information and the lens movement amount (offset amount).
  • the size of the lens offset at the current moment can be uniquely determined.
  • the lens offset is in the order of microns.
  • Step 130 Acquire an image offset corresponding to the lens offset according to a preset calibration function and the lens offset.
  • Step 140 Identify each frame of the image to obtain key frame images in multiple frames of the image.
  • each frame image is identified so that each frame image carries corresponding identification information, and the identification information can uniquely identify whether each frame image is a key frame.
  • the identification information can be numbers, letters, etc., and the specific form is not limited.
  • Step 150 Correct the key frame image according to the image offset.
  • the key frame image may be corrected according to a set of image offsets corresponding to the key frame image, or may be corrected according to image offsets corresponding to other frame images.
  • all pixels of the key frame image may be divided into multiple areas to form multiple pixel blocks, and the multiple pixel blocks are corrected one by one according to the image offset.
  • FIG. 2 is a flowchart of a calibration function acquisition method provided in an embodiment. As shown in FIG. 2, it includes steps 210 to 230.
  • Step 210 Shoot the same target reference object at different times to obtain an image corresponding to the lens offset at each time, where the image includes at least one feature point;
  • Step 220 Detect at least one feature point in the image, and calculate the image offset of the different image relative to the image at the initial time according to the position of the feature point in the different image;
  • Step 230 Construct a calibration relationship table between the lens offsets and the image offsets at different times, and fit the calibration relationship between the lens offset and the image offset according to the calibration relationship table.
  • fitting the calibration relationship between the lens offset and the image offset may be done by setting a calibration function model and solving for the calibration function that the lens offset and the image offset satisfy, or by drawing a fitting curve in a two-dimensional coordinate system to determine the calibration function that the current lens offset and image offset satisfy.
  • fitting the calibration relationship between the lens offset and the image offset according to the calibration relationship table may include:
  • the lens offset and the image offset at different times are substituted as input parameters into the calibration function model, and the general expression of the calibration function is calculated.
  • the preset calibration function may be a linear equation in one variable, or a nonlinear quadratic equation in one variable or in two variables, etc.
  • This embodiment of the present application does not limit this.
  • where (x, y) is the lens offset and a, b, c, d, e, and f are parameters of the calibration function.
  • for example, when OIS is initialized, the camera lens is at point O.
  • OIS then moves the lens to six points A(x1, y1), B(x2, y2), C(x3, y3), D(x4, y4), E(x5, y5), and F(x6, y6), and an image is taken at each point. By measuring one or several feature points/feature blocks, the offset of each feature point/feature block relative to point O is obtained: (Δx1, Δy1), (Δx2, Δy2), (Δx3, Δy3), (Δx4, Δy4), (Δx5, Δy5), and (Δx6, Δy6). Substituting the Δx, Δy and x, y data into the equation yields the specific values of the six parameters a, b, c, d, e, and f, thereby determining the calibration function f(Δx, Δy).
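The six-point fit above can be sketched with a least-squares solve. The quadratic model form (a·x² + b·y² + c·xy + d·x + e·y + f, fitted per axis) is an assumption; the text only says the function may be linear or quadratic, and the function name is illustrative.

```python
import numpy as np

def fit_calibration(lens_xy, image_dx):
    """Fit image_offset = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f.

    lens_xy:  six (x, y) lens offsets (points A..F in the text).
    image_dx: the corresponding measured image offsets along one axis.
    Returns the parameters (a, b, c, d, e, f).
    """
    x, y = np.asarray(lens_xy, dtype=float).T
    # Design matrix: one row per calibration point, one column per parameter.
    A = np.column_stack([x * x, y * y, x * y, x, y, np.ones_like(x)])
    params, *_ = np.linalg.lstsq(A, np.asarray(image_dx, dtype=float), rcond=None)
    return params
```

With exactly six well-chosen points the system is exactly determined and the six parameters are recovered uniquely; with more points, the same call produces a least-squares fit.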
  • FIG. 3 is a flowchart of determining a key frame image and a non-key frame image provided in an embodiment. As shown in FIG. 3, step 310 and step 330 are included.
  • Step 310 Obtain the global features and local features of each frame of the image
  • Step 320 Calculate the similarity between the images of each frame according to the global features and the local features to obtain the neighbor relationship between the images of each frame;
  • Step 330 Mark each frame of the image according to the neighbor relationship to determine the key frame image and the non-key frame image in the multiple frames of the image.
  • the collected video stream includes multiple frames of images
  • the histograms and grayscale images of two adjacent frames of image data are calculated to obtain the frame difference value and the average grayscale difference between the two adjacent frames.
  • each frame image is identified so that each frame image carries corresponding identification information, and the identification information can uniquely identify each frame image as a key frame or a non-key frame.
  • the identification information can be numbers, letters, etc., and the specific form is not limited.
  • the identification information is composed of numbers, and is used to indicate whether each frame of image is a key frame.
  • the number 1 can be used to indicate that the frame image is a key frame
  • the number 2 indicates that the frame image is a non-key frame.
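The histogram-difference and grayscale-difference test above, with the 1/2 numbering scheme, can be sketched as follows. The thresholds and the "first frame is a key frame" convention are illustrative assumptions; the patent does not fix them.

```python
def mark_key_frames(frames, hist_thresh=0.25, mean_thresh=10.0):
    """Label each grayscale frame 1 (key frame) or 2 (non-key frame).

    frames: list of 2-D lists of 8-bit grayscale pixel values.
    A frame is marked key when its normalized-histogram difference or its
    average grayscale difference from the previous frame is large.
    """
    def hist(frame):
        h = [0] * 256
        for row in frame:
            for p in row:
                h[p] += 1
        n = sum(h)
        return [c / n for c in h]

    def mean(frame):
        flat = [p for row in frame for p in row]
        return sum(flat) / len(flat)

    labels = [1]  # assume the first frame is a key frame
    for prev, cur in zip(frames, frames[1:]):
        # Frame difference of histograms plus average grayscale difference.
        d_hist = sum(abs(a - b) for a, b in zip(hist(prev), hist(cur)))
        d_mean = abs(mean(prev) - mean(cur))
        is_key = d_hist > hist_thresh or d_mean > mean_thresh
        labels.append(1 if is_key else 2)
    return labels
```

A frame nearly identical to its neighbor is labeled 2; a frame whose content changes sharply is labeled 1.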
  • FIG. 4 is a flowchart of correcting key frame images in an embodiment. As shown in FIG. 4, the block correction method for key frame images includes steps 410 and 420.
  • Step 410 Divide all pixels of the key frame image into multiple areas to form multiple pixel blocks.
  • Step 420 Correct the plurality of pixel blocks one by one according to the image offset.
  • each pixel block may include a plurality of pixel rows, and the pixel blocks are corrected one by one according to image offsets (in units of pixels). For example, if the key frame image has 1000 rows of pixels, the image can be divided into 20 blocks of 50 rows each; 20 image offsets are then selected from the set of image offsets corresponding to the key frame image, assigned to the 20 pixel blocks, and the blocks are corrected one by one.
  • an image offset is expressed in units of pixels.
  • For example, if image offset 1 is a positive offset of 1 pixel along the X axis and image offset 2 is also a 1-pixel offset along the X axis, then block 1 as a whole moves 1 pixel to the right, block 2 as a whole moves 1 pixel to the right, and so on; different pixel blocks are corrected by different image offsets.
  • This improves the accuracy of image correction and effectively guarantees the quality of image correction, thereby improving the quality of video correction.
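The block correction steps above can be sketched as follows. This is an illustrative simplification: each block is shifted horizontally by its own offset, compensating in the negative X direction as described later for line-by-line correction, and pixels are wrapped at the edge (a real pipeline would crop or pad instead).

```python
def correct_key_frame_blocks(frame, offsets, n_blocks=20):
    """Split a key frame's rows into n_blocks blocks and shift each block
    horizontally by its assigned image offset (in pixels).

    frame:   2-D list of pixel values (rows of pixels).
    offsets: one image offset per block, e.g. 20 offsets for 20 blocks.
    """
    rows = len(frame)
    per_block = rows // n_blocks  # e.g. 1000 rows / 20 blocks = 50 rows each
    out = []
    for i, row in enumerate(frame):
        block = min(i // per_block, n_blocks - 1)
        shift = offsets[block]
        w = len(row)
        # Compensate: a +1 pixel offset along X moves the content 1 px back
        # (wrap-around used here for simplicity).
        out.append([row[(j + shift) % w] for j in range(w)])
    return out
```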
  • each pixel block may include one pixel row, and each pixel row is corrected one by one according to the image offset. For example, if the currently calculated image offset is a positive offset of 1 pixel along the X axis, then during image compensation each pixel row of the image is shifted by 1 pixel in the negative direction of the X axis, achieving line-by-line correction.
  • the image offsets are used to correct the image line by line. For example, if the electronic device includes one camera, the lens offset acquisition frequency is 8 kHz, and a frame of image is captured at 30 Hz, then capturing one frame collects 266 lens offset samples, corresponding to a set of 266 image offsets; if the electronic device includes dual cameras, capturing one frame collects 533 lens offset samples, corresponding to a set of 533 image offsets.
  • CMOS uses progressive-scan imaging. Assuming one frame of image has 200 lines, there are 266 image offsets for 200 pixel lines; 200 of the 266 data are selected, assigned to the pixel lines one by one, and the key frame image is corrected line by line.
  • the key frame image is corrected line by line using the image offsets. For example, if the lens offset acquisition frequency is 8 kHz and a frame of image is captured at 30 Hz, then capturing one frame collects 266 lens offset samples, corresponding to a set of 266 image offsets.
  • CMOS uses progressive-scan imaging. Assuming one frame of image has 300 lines, there are 266 image offsets for 300 pixel lines, so the 266 offsets cannot be assigned one per line. The 266 image offsets are allocated to the first 266 pixel lines, and for the remaining 34 pixel lines, 34 image offsets are selected from the 266, so that each pixel line corresponds to an image offset.
  • the key frame image is then corrected line by line.
  • image offset data are selected from a group of image offset data (for example, 200 out of 266 data). The selection can follow the collection order, or the data can be sorted by mean square value from large to small; the specific selection depends on the actual situation and is not limited in this embodiment.
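The offset-to-row assignment above (200 of 266 offsets for 200 lines, or 266 offsets reused across 300 lines) can be sketched with one helper. Even sampling in collection order is just one of the selection strategies the text leaves open; the function name is illustrative.

```python
def offsets_for_rows(offsets, n_rows):
    """Assign one image offset per pixel row for line-by-line correction.

    If there are more offsets than rows, sample them evenly in collection
    order (e.g. 200 of 266); if there are fewer, some offsets are reused
    so that every row still gets one (e.g. 266 offsets for 300 rows).
    """
    n = len(offsets)
    # i * n // n_rows maps row index i proportionally into the offset list.
    return [offsets[i * n // n_rows] for i in range(n_rows)]
```

The same mapping handles both directions: downsampling when offsets outnumber rows, and reuse when rows outnumber offsets.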
  • FIG. 5 is a flowchart of correcting non-key frame images in an embodiment. As shown in FIG. 5, step 510 and step 520 are included.
  • Step 510 Acquire continuous multi-frame non-key frame images
  • Step 520 Use the same image offset to correct consecutive multi-frame non-key frame images.
  • consecutive multi-frame non-key frame images are acquired, and the same image offset is used to correct them.
  • For example, the camera collects five frames of images in total: the first through fifth frame images. If the first frame image is a key frame image and the second through fifth frame images are non-key frame images, one image offset can be chosen from the five groups of image offsets corresponding to the five frames to correct the second through fifth frame images; the specific selection method is not limited.
  • the third frame image is a key frame image
  • the first frame image, the second frame image, the fourth frame image, and the fifth frame image are all non-key frame images
  • one image offset can be selected from the five groups of image offsets corresponding to the five frame images to correct the above four non-key frame images; alternatively, two image offsets can be chosen, with the first frame image and the second frame image sharing one image offset, and the fourth frame image and the fifth frame image sharing the other.
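The shared-offset idea in the five-frame examples above can be sketched as follows. Grouping non-key frames into consecutive runs and taking the first offset of each run are illustrative choices; the text explicitly leaves the selection method open.

```python
def correct_non_key_frames(labels, frame_offsets):
    """Pick one shared image offset for each run of consecutive non-key
    frames (label 2, per the numbering scheme in the text).

    labels:        per-frame labels, 1 = key frame, 2 = non-key frame.
    frame_offsets: per-frame groups of image offsets.
    Returns {frame_index: shared_offset}.
    """
    shared = {}
    run = []
    for i, label in enumerate(labels + [1]):  # sentinel closes the last run
        if label == 2:
            run.append(i)
        elif run:
            # One offset for the whole run: here, the run's first offset.
            chosen = frame_offsets[run[0]][0]
            for j in run:
                shared[j] = chosen
            run = []
    return shared
```

In the second example from the text (key frame in the middle), this yields two runs, each sharing its own offset.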
  • Although the steps in the flowcharts of FIGS. 1 to 5 are displayed in order according to the arrows, the steps are not necessarily executed in the order indicated by the arrows. Unless clearly stated herein, the execution of these steps is not strictly limited in order, and these steps can be executed in other orders. Moreover, at least some of the steps in FIGS. 1 to 5 may include multiple sub-steps or multiple stages. These sub-steps or stages are not necessarily executed at the same time, but may be executed at different times, and their execution order is not necessarily sequential; they may be executed in turn or alternately with at least a part of the sub-steps or stages of other steps.
  • the video correction device includes an acquisition module 610, a first acquisition module 620, a second acquisition module 630, an identification module 640, and a correction module 650.
  • the acquisition module 610 may be an OIS controller configured to control a camera to continuously acquire multiple frames of images, and the camera includes an optical image stabilization (OIS) system.
  • the first acquiring module 620 including a gyroscope and a Hall sensor, is configured to acquire the lens offset of each frame of the image collected when the camera shakes.
  • the second obtaining module 630 may be a general-purpose processor (CPU), a graphics processor (GPU), or an image signal processor (ISP), configured to obtain the image offset corresponding to the lens offset according to a preset calibration function and the lens offset.
  • the identification module 640 is configured to identify each frame of the image and obtain key frame images among the multiple frames of the image.
  • the correction module 650 is configured to correct the key frame image according to the image offset.
  • the first acquisition module 620 includes: a first acquisition unit and a calculation unit.
  • the first acquiring unit is configured to acquire angular velocity information of the gyroscope when the camera shakes, and the angular velocity information corresponds to the Hall value in time sequence;
  • the calculation unit is configured to select at least one of the angular velocity information, acquire at least one Hall value corresponding to the angular velocity information, and calculate a lens offset corresponding to the Hall value.
  • the identification module 640 includes a second acquisition unit, a third acquisition unit, and a determination unit.
  • the second acquiring unit is configured to acquire global features and local features of the image of each frame
  • the third acquiring unit is configured to calculate the similarity between the images of each frame according to the global features and the local features to acquire the neighbor relationship between the images of each frame;
  • the determining unit is configured to identify each frame of the image according to the neighbor relationship to determine the key frame image and the non-key frame image in the multiple frames of the image.
  • the correction module 650 is configured to divide all pixels of the key frame image into a plurality of areas to form a plurality of pixel blocks, and correct the plurality of pixel blocks one by one according to the image offset .
  • the correction module 650 is further configured to correct the non-key frame image according to the image offset, specifically:
  • the same image offset is used to correct consecutive non-key frame images.
  • the video correction device controls the camera to continuously acquire multiple frames of images, obtains the lens offset of each frame of image collected when the camera shakes, obtains the image offset corresponding to the lens offset according to a preset calibration function and the lens offset, identifies each frame of image to acquire the key frame images among the multiple frames, and performs block correction on the key frame images according to the image offsets. This not only improves the correction efficiency of the video, but also ensures the correction effect of the video.
  • each module in the above video correction device is for illustration only. In other embodiments, the video correction device may be divided into different modules as needed to complete all or part of the functions of the above video correction device.
  • Each module in the above video correction device may be implemented in whole or in part by software, hardware, and a combination thereof.
  • the above modules may be embedded in the hardware or independent of the processor in the computer device, or may be stored in the memory in the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
  • each module in the video correction device provided in the embodiments of the present application may be in the form of a computer program.
  • the computer program can be run on a terminal or a server.
  • the program module composed of the computer program may be stored in the memory of the terminal or the server.
  • the terminal includes a processor, memory, and network interface connected through a system bus.
  • the processor is used to provide computing and control capabilities to support the operation of the entire electronic device.
  • the memory is used to store data, programs, and the like. At least one computer program is stored on the memory.
  • the computer program can be executed by the processor to implement the video correction method provided in the embodiments of the present application.
  • the memory may include a non-volatile storage medium and internal memory.
  • the non-volatile storage medium stores an operating system and computer programs.
  • the computer program may be executed by the processor to implement a video correction method provided by the following embodiments.
  • the internal memory provides a cached running environment for the operating system and computer programs in the non-volatile storage medium.
  • the network interface may be an Ethernet card or a wireless network card, which is used to communicate with external electronic devices.
  • the electronic device may be a mobile terminal, a tablet computer, a personal digital assistant or a wearable device.
  • the embodiments of the present application also provide a computer-readable storage medium.
  • One or more non-volatile computer-readable storage media containing computer-executable instructions, which when executed by one or more processors, cause the processors to perform the steps of the video correction method.
  • a computer program product containing instructions that, when run on a computer, causes the computer to perform a video correction method.
  • An embodiment of the present application also provides an electronic device. As shown in FIG. 8, for ease of description, only parts related to the embodiments of the present application are shown, and specific technical details are not disclosed, please refer to the method part of the embodiments of the present application.
  • the electronic device may be any terminal device including a mobile phone, tablet computer, PDA (Personal Digital Assistant), POS (Point of Sales), in-vehicle computer, wearable device, etc. Taking the electronic device as a mobile phone for example :
  • the mobile phone includes: a radio frequency (RF) circuit 810, a memory 820, an input unit 830, a display unit 840, a sensor 850, an audio circuit 860, a wireless fidelity (WiFi) module 870, a processor 880, a power supply 890, and other components.
  • the structure of the mobile phone shown in FIG. 8 does not constitute a limitation on the mobile phone, and may include more or fewer components than those shown in the figure, or a combination of certain components, or a different component arrangement.
  • the RF circuit 810 can be used to receive and send signals during information transmission and reception or during a call. It can receive downlink information from a base station and deliver it to the processor 880 for processing, and can also send uplink data to the base station.
  • RF circuits include but are not limited to an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and so on.
  • the RF circuit 810 can also communicate with other devices through a wireless communication network.
  • the above wireless communication can use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Message Service (SMS), etc.
  • the memory 820 may be used to store software programs and modules.
  • the processor 880 executes various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 820.
  • the memory 820 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and at least one application program required by a function (such as a sound playback application or an image playback application);
  • the data storage area can store data (such as audio data, address book, etc.) created according to the use of the mobile phone.
  • the memory 820 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
  • the input unit 830 may be used to receive input numeric or character information, and generate key signal input related to user settings and function control of the mobile phone 800.
  • the input unit 830 may include a touch panel 831 and other input devices 832.
  • the touch panel 831, also known as a touch screen, can collect the user's touch operations on or near it (such as operations performed by the user with a finger, a stylus, or any other suitable object or accessory on or near the touch panel 831), and drive the corresponding connection device according to a preset program.
  • the touch panel 831 may include a touch detection device and a touch controller.
  • the touch detection device detects the user's touch orientation, and detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device and converts it into contact coordinates, and then sends To the processor 880, and can receive the command sent by the processor 880 and execute it.
  • the touch panel 831 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
  • the input unit 830 may also include other input devices 832.
  • other input devices 832 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), and the like.
  • the display unit 840 may be used to display information input by the user or information provided to the user and various menus of the mobile phone.
  • the display unit 840 may include a display panel 841.
  • the display panel 841 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
  • the touch panel 831 may cover the display panel 841; when the touch panel 831 detects a touch operation on or near it, it passes the operation to the processor 880 to determine the type of touch event, and the processor 880 then provides the corresponding visual output on the display panel 841 according to the type of touch event.
  • although the touch panel 831 and the display panel 841 are implemented as two independent components to realize the input and output functions of the mobile phone, in some embodiments the touch panel 831 and the display panel 841 may be integrated to realize the input and output functions of the mobile phone.
  • the mobile phone 800 may further include at least one sensor 850, such as a light sensor, a motion sensor, and other sensors.
  • the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display panel 841 according to the ambient light, and the proximity sensor may turn off the display panel 841 and/or the backlight when the mobile phone is moved to the ear.
  • the motion sensor may include an acceleration sensor, which can detect the magnitude of acceleration in all directions and, at rest, the magnitude and direction of gravity; it can be used in applications that recognize the phone's posture (such as landscape/portrait switching) and in vibration-recognition functions (such as a pedometer or tap detection).
  • the mobile phone can also be equipped with other sensors such as gyroscope, barometer, hygrometer, thermometer, infrared sensor and so on.
  • the audio circuit 860, the speaker 861, and the microphone 862 may provide an audio interface between the user and the mobile phone.
  • the audio circuit 860 can transmit the electrical signal converted from the received audio data to the speaker 861, which converts it into a sound signal for output; on the other hand, the microphone 862 converts a collected sound signal into an electrical signal, which the audio circuit 860 receives and converts into audio data; after being processed by the processor 880, the audio data may be sent to another mobile phone via the RF circuit 810, or output to the memory 820 for subsequent processing.
  • WiFi is a short-range wireless transmission technology.
  • through the WiFi module 870, the mobile phone can help users send and receive e-mail, browse web pages, and access streaming media; it provides users with wireless broadband Internet access.
  • although FIG. 8 shows the WiFi module 870, it can be understood that it is not an essential component of the mobile phone 800 and may be omitted as needed.
  • the processor 880 is the control center of the mobile phone; it connects the various parts of the entire phone through various interfaces and lines, and performs the phone's various functions and processes data by running or executing the software programs and/or modules stored in the memory 820 and calling the data stored in the memory 820, thereby monitoring the mobile phone as a whole.
  • the processor 880 may include one or more processing units.
  • the processor 880 may integrate an application processor and a modem processor, where the application processor mainly processes an operating system, a user interface, an application program, and the like; the modem processor mainly processes wireless communication. It can be understood that the above-mentioned modem processor may not be integrated into the processor 880.
  • the mobile phone 800 further includes a power supply 890 (such as a battery) for powering various components.
  • the power supply can be logically connected to the processor 880 through the power management system, so as to realize functions such as charging, discharging, and power management through the power management system.
  • the mobile phone 800 may further include a camera, a Bluetooth module, and the like.
  • the processor 880 included in the electronic device executes the steps of the video correction method when executing the computer program stored in the memory.
  • Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM), which acts as external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the present application provide a video correction method and apparatus, an electronic device, and a computer-readable storage medium. The method includes: controlling a camera to continuously capture multiple frames of images, the camera including an optical image stabilization system; acquiring a lens offset for each frame of the images captured while the camera shakes; obtaining, from a preset calibration function and the lens offset, an image offset corresponding to the lens offset; labeling each frame of the images to identify key frame images among the multiple frames of images; and correcting the key frame images according to the image offset.

Description

Video correction method and apparatus, electronic device, and computer-readable storage medium
This application claims priority to Chinese Patent Application No. 2018112914793, filed with the China National Intellectual Property Administration on October 31, 2018 and entitled "Video correction method and apparatus, electronic device, and computer-readable storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of information technology, and in particular to a video correction method and apparatus, an electronic device, and a computer-readable storage medium.
Background
The statements herein merely provide background information related to this application and do not necessarily constitute exemplary art.
With the rapid development of science and technology, a variety of electronic devices have continuously enriched and eased people's daily lives. Among them, electronic devices with cameras, such as smartphones and tablet computers, let users shoot the videos they want at any time, improving the user experience. Optical image stabilization (OIS), an important means of improving photo quality in low light, is being used more and more on mobile phones. During video shooting, however, environmental and other factors make it difficult for a user holding the electronic device to keep it steady, so the captured video shakes and the viewing experience deteriorates.
Conventionally, when a video is shot with the OIS function enabled, compensation is performed for every frame of the video. The computation load is large, so video correction is inefficient.
Summary
According to various embodiments of the present application, a video correction method and apparatus, an electronic device, and a computer-readable storage medium are provided.
A video correction method, applied to an electronic device with a camera, includes:
controlling the camera to continuously capture multiple frames of images, the camera including an optical image stabilization system;
acquiring a lens offset for each frame of the images captured while the camera shakes;
obtaining, from a preset calibration function and the lens offset, an image offset corresponding to the lens offset;
labeling each frame of the images to identify key frame images among the multiple frames of images; and
correcting the key frame images according to the image offset.
An embodiment of the present application further provides a video correction apparatus, including:
a capture module configured to control a camera to continuously capture multiple frames of images, the camera including an optical image stabilization system;
a first acquisition module configured to acquire a lens offset for each frame of the images captured while the camera shakes;
a second acquisition module configured to obtain, from a preset calibration function and the lens offset, an image offset corresponding to the lens offset;
a labeling module configured to label each frame of the images to identify key frame images among the multiple frames of images; and
a correction module configured to correct the key frames according to the image offset.
An embodiment of the present application further provides an electronic device including a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the video correction method described above.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the steps of the method described above.
Details of one or more embodiments of this application are set forth in the following drawings and description. Other features, objects, and advantages of this application will become apparent from the description, the drawings, and the claims.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of this application or in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of this application; those of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a flowchart of a video correction method provided in an embodiment;
FIG. 2 is a flowchart of a way of obtaining a calibration function provided in an embodiment;
FIG. 3 is a flowchart of determining key frame images and non-key frame images in an embodiment;
FIG. 4 is a flowchart of block-wise correction of a key frame image in an embodiment;
FIG. 5 is a flowchart of correcting non-key frame images in an embodiment;
FIG. 6 is a schematic structural diagram of a video correction apparatus provided in an embodiment;
FIG. 7 is a schematic diagram of the internal structure of an electronic device in an embodiment;
FIG. 8 is a block diagram of part of the structure of a mobile terminal related to an electronic device in an embodiment.
Detailed Description
To make the objectives, technical solutions, and advantages of this application clearer, this application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain this application and not to limit it.
It can be understood that the terms "first", "second", and the like used in this application may be used herein to describe various elements, but these elements are not limited by these terms. These terms are used only to distinguish one element from another. For example, without departing from the scope of this application, a first acquisition module could be called a second acquisition module, and similarly a second acquisition module could be called a first acquisition module. The first acquisition module and the second acquisition module are both acquisition modules, but they are not the same acquisition module.
FIG. 1 is a flowchart of a video correction method in an embodiment. As shown in FIG. 1, the video correction method provided by an embodiment of this application, applied to an electronic device with a camera, includes steps 110 to 150.
Step 110: control the camera to continuously capture multiple frames of images, the camera including an optical image stabilization system.
In one embodiment, the electronic device is a device with a camera, including but not limited to a still camera, a video camera, a mobile terminal (such as a smartphone), a tablet computer (pad), a personal digital assistant (PDA), a portable device (for example, a portable computer), or a wearable device, which is not specifically limited in the embodiments of the present invention. If the electronic device includes dual cameras, the first camera and the second camera may be arranged side by side on the body of the electronic device.
It should be noted that the correction performed by the video correction method provided in the embodiments of this application may take place during video shooting, so that the current frame that the user sees in the capture frame has little or no shake (real-time stabilization); alternatively, shake correction may be applied to a video that has already been shot (non-real-time stabilization).
In an embodiment, the optical image stabilization (OIS) system may include a Hall sensor, a motor, and a gyroscope. The gyroscope measures the angular velocity of the current electronic device about multiple axes, and the motor is controlled accordingly to shift the lens, while the Hall sensor can measure the Hall position information of the OIS shift in real time. From the correspondence between the Hall position information and the lens offset (amount of movement), the magnitude and direction of the lens movement at the current moment can be calculated. The movement may be a movement of the first camera or the second camera in the X and/or Y direction. The correspondence between the Hall position information and the lens offset includes, but is not limited to: the Hall position information equals the lens offset, or the Hall position information and the lens offset have a linear relationship, or the Hall position information and the lens offset have a nonlinear relationship.
Step 120: acquire a lens offset for each frame of the images captured while the camera shakes.
In an embodiment, the electronic device controls the camera to continuously capture multiple frames of images to obtain a video stream, and synchronously acquires the lens offset corresponding to each frame while the camera shakes. "Synchronously" means that the images and the lens offsets are collected within the same time period or at the same time point, so that the image data and the lens-offset data correspond in time.
Specifically, acquiring the lens offset for each frame of the images captured while the camera shakes includes: acquiring multiple pieces of angular velocity information from the gyroscope while the camera shakes; selecting at least one piece of the angular velocity information, obtaining the Hall value corresponding to the at least one piece of angular velocity information, and calculating the lens offset corresponding to the Hall value, the angular velocity information and the Hall value corresponding in time.
In an embodiment, the angular velocity information of the gyroscope while the camera shakes is acquired, the angular velocity information corresponding in time to the Hall value; at least one piece of the angular velocity information is selected, the Hall value corresponding to the at least one piece of angular velocity information is obtained, and the lens offset corresponding to the Hall value is calculated.
At least one angular velocity value may be selected from the multiple angular velocity values (angular velocity information) in different ways. For example, the multiple angular velocity values may be sorted by weighted value from largest to smallest or from smallest to largest and the first N values taken (N being a positive integer), or they may be sorted by mean square value from largest to smallest or from smallest to largest and the first N values taken.
In an embodiment, a group of data containing at least one lens offset is obtained for each frame of image. Specifically, a first group of lens-offset data is obtained for the first frame, a second group for the second frame, and so on; every frame obtains a corresponding group of image-offset data. For example, if the electronic device contains one camera, the lens offset is sampled at 8 kHz, and frames are captured at 30 Hz, then capturing one frame will simultaneously collect 266 lens-offset samples, corresponding to a group of 266 image offsets; if the electronic device contains dual cameras at the same rates, capturing one frame will simultaneously collect 533 lens-offset samples, corresponding to a group of 533 image offsets.
In an embodiment, the magnitude and direction of the lens movement at the current moment can be calculated from the correspondence between the Hall position information and the lens movement (offset). For example, the Hall position information and the lens offset may have a linear calibration relationship satisfying the function f(x) = ay + b, where x and y denote the Hall position information and the lens offset respectively; for example, when a = 1 and b = 0, the Hall position information equals the lens offset, so obtaining the Hall position information directly yields the lens offset. There may also be a nonlinear relationship such as a quadratic equation in one or two variables. In this embodiment of the application, once the magnitude of the Hall position information is known, the magnitude of the lens offset at the current moment is uniquely determined. In an OIS system, this lens offset is on the order of micrometres.
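The linear Hall-to-offset mapping described above can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation; the gain `a` and bias `b` are hypothetical calibration constants fixed per camera module:

```python
def hall_to_lens_offset(hall_value, a=1.0, b=0.0):
    """Map a Hall-sensor reading to a lens offset (in micrometres).

    Assumes the linear calibration described above; with a=1 and b=0
    the Hall reading equals the lens offset directly.
    """
    return a * hall_value + b
```

A nonlinear module would replace this with the quadratic calibration fitted later in the description.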
Step 130: obtain, from a preset calibration function and the lens offset, an image offset corresponding to the lens offset.
Step 140: label each frame of the images to identify key frame images among the multiple frames of images.
In an embodiment, each frame is labeled so that it carries corresponding identification information that uniquely indicates whether that frame is a key frame. The key frame images among the multiple frames are identified by reading the identification information of each frame. The identification information may be digits, letters, or the like; its specific form is not limited.
Step 150: correct the key frame images according to the image offset.
After a key frame image is determined, it may be corrected with the group of image offsets corresponding to that key frame image, or with image offsets corresponding to other frames.
In an embodiment, all the pixels of the key frame image may be divided into multiple regions to form multiple pixel blocks, and the multiple pixel blocks are corrected one by one according to the image offsets.
In this embodiment of the application, the camera is controlled to continuously capture multiple frames of images; the lens offset of each frame captured while the camera shakes is acquired; the image offset corresponding to the lens offset is obtained from a preset calibration function and the lens offset; each frame is labeled to identify the key frame images among the multiple frames; and the key frame images are corrected block by block according to the image offsets. This not only improves the efficiency of video correction but also guarantees the quality of the correction.
FIG. 2 is a flowchart of a way of obtaining the calibration function provided in an embodiment. As shown in FIG. 2, it includes steps 210 to 230.
Step 210: photograph the same target reference object at different moments and obtain the image corresponding to the lens offset at each moment, the image containing at least one feature point.
Step 220: detect the at least one feature point in the images, and, from the positions of the feature points in the different images, calculate the image offset of each image relative to the image at the initial moment.
Step 230: build a calibration table of the lens offsets versus the image offsets at the different moments, and fit the calibration relationship between the lens offset and the image offset from the calibration table.
In this embodiment of the application, fitting the calibration relationship between the lens offset and the image offset may mean setting up a calibration function model to determine the calibration function satisfied by the lens offset and the image offset, and drawing a fitted curve in a two-dimensional coordinate system using computational geometry techniques, thereby determining the calibration function that the current lens offset and image offset satisfy.
In an embodiment, fitting the calibration relationship between the lens offset and the image offset from the calibration table may include:
fitting the OIS calibration function of the lens offset and the image offset from the calibration table; and
substituting the lens offsets and image offsets at the different moments into the calibration function model as input parameters, and computing the general expression of the calibration function.
In an embodiment, the preset calibration function may be a linear equation in one variable, a nonlinear quadratic equation in one variable, a quadratic equation in two variables, or the like; the embodiments of this application place no restriction on this. Take the two-variable quadratic f(Δx, Δy) = ax² + by² + cxy + dx + ey + f as an example, where Δx and Δy are image offsets in pixels, x and y are the lens offsets on the X and Y axes, and a, b, c, d, e, and f are parameters. In the present invention, to fit the correspondence between the lens offset and the image offset, the specific values of the six parameters a, b, c, d, e, and f must be determined; measuring the six parameters requires six equations. That is, with Δx, Δy and x, y measurable, substituting different (Δx, Δy) and (x, y) values into the equation yields the six parameters. In other words, at different moments the same target is photographed with predetermined different lens offsets, and Δx and Δy are determined from the displacement of a feature point (target point) in the captured images. For example, at time t0 OIS is in its initialized state and the camera sits at point O; at the six moments t1–t6, OIS moves to the six points A(x1, y1), B(x2, y2), C(x3, y3), D(x4, y4), E(x5, y5), and F(x6, y6), and six images are taken. By measuring one or several feature points/feature blocks, the offset of that feature point/block relative to point O in each image — (Δx1, Δy1), (Δx2, Δy2), (Δx3, Δy3), (Δx4, Δy4), (Δx5, Δy5), and (Δx6, Δy6) — is obtained. Substituting these Δx, Δy and x, y data into the equation yields the specific values of the six parameters a, b, c, d, e, and f, and thus the specific form of f(Δx, Δy).
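The six-point fit described above reduces to solving a 6×6 linear system, since the quadratic is linear in its coefficients. The following is an illustrative sketch under that assumption (one scalar image-offset component; a real calibration would fit Δx and Δy separately):

```python
import numpy as np

def fit_calibration(lens_xy, image_offset):
    """Solve for (a, b, c, d, e, f) in a*x^2 + b*y^2 + c*x*y + d*x + e*y + f
    from six measured (x, y) lens offsets and the corresponding image offsets.

    lens_xy: array of shape (6, 2); image_offset: array of shape (6,).
    The six sample points must make the design matrix nonsingular.
    """
    x, y = lens_xy[:, 0], lens_xy[:, 1]
    # One row per measurement: [x^2, y^2, x*y, x, y, 1]
    A = np.column_stack([x**2, y**2, x * y, x, y, np.ones_like(x)])
    return np.linalg.solve(A, image_offset)

def apply_calibration(coeffs, x, y):
    """Evaluate the fitted quadratic at a lens offset (x, y)."""
    a, b, c, d, e, f = coeffs
    return a * x**2 + b * y**2 + c * x * y + d * x + e * y + f
```

With more than six measurements, `np.linalg.lstsq` would give a least-squares fit instead of an exact solve.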
FIG. 3 is a flowchart of determining key frame images and non-key frame images provided in an embodiment. As shown in FIG. 3, it includes steps 310 to 330.
Step 310: obtain the global features and local features of each frame of the images;
Step 320: compute the similarity between the frames from the global features and the local features to obtain the neighbour relationship between the frames;
Step 330: label each frame of the images according to the neighbour relationship to determine the key frame images and non-key frame images among the multiple frames of images.
In an embodiment, the captured video stream includes multiple frames. The histograms and grayscale maps of two adjacent frames are computed to obtain a feature vector composed of the frame difference value, the difference of the grayscale means, and the difference of the grayscale variances of the two adjacent frames. The frame difference, mean difference, and variance difference are weighted to obtain the Euclidean distance of the vector; the Euclidean distance is compared with a preset threshold; the neighbour relationship of the two adjacent frames is obtained from the comparison result; and each frame is labeled according to the neighbour relationship to determine the key frame images and non-key frame images among the multiple frames.
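The adjacency test just described can be sketched as follows. The weights, the threshold, and the policy "a frame that is not a near neighbour of its predecessor becomes a key frame" are illustrative assumptions; the source leaves these tuning choices open:

```python
import numpy as np

def is_near_neighbour(prev_gray, curr_gray, weights=(0.5, 0.3, 0.2), threshold=10.0):
    """Compare two adjacent grayscale frames.

    Feature vector: mean absolute frame difference, difference of means,
    difference of variances; its weighted Euclidean norm is compared
    against a preset threshold (hypothetical values here).
    """
    frame_diff = np.mean(np.abs(curr_gray.astype(float) - prev_gray.astype(float)))
    mean_diff = abs(curr_gray.mean() - prev_gray.mean())
    var_diff = abs(curr_gray.var() - prev_gray.var())
    distance = np.sqrt(weights[0] * frame_diff**2
                       + weights[1] * mean_diff**2
                       + weights[2] * var_diff**2)
    return distance <= threshold

def label_key_frames(gray_frames, **kwargs):
    """Label frames: 1 = key frame, 2 = non-key frame (digit scheme from the text).

    The first frame is a key frame; a later frame is non-key when it is a
    near neighbour of its predecessor, and key otherwise.
    """
    labels = [1]
    for prev, curr in zip(gray_frames, gray_frames[1:]):
        labels.append(2 if is_near_neighbour(prev, curr, **kwargs) else 1)
    return labels
```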
In an embodiment, each frame is labeled so that it carries corresponding identification information that uniquely indicates whether the frame is a key frame or a non-key frame. The key frame images and non-key frame images among the frames are identified by reading the identification information of each frame. The identification information may be digits, letters, or the like; its specific form is not limited.
In an embodiment, the identification information consists of digits that indicate whether each frame is a key frame. For example, the digit 1 may indicate that the frame is a key frame, and the digit 2 that the frame is a non-key frame.
FIG. 4 is a flowchart of correcting a key frame image in an embodiment. As shown in FIG. 4, the block-wise correction method for a key frame image includes steps 410 and 420.
Step 410: divide all the pixels of the key frame image into multiple regions to form multiple pixel blocks.
Step 420: correct the multiple pixel blocks one by one according to the image offsets.
In an embodiment, each pixel block may include multiple pixel rows, and each pixel block is corrected one by one according to an image offset (in pixels). For example, if the key frame image has 1000 rows of pixels, it can be divided into 20 blocks of 50 rows each; 20 image offsets can then be selected from the group of image offsets corresponding to that key frame image and applied one by one to all 20 pixel blocks. For example, if image offset 1 is a shift of 1 pixel in the positive X direction and image offset 2 is a shift of 1 pixel in the negative X direction, then block 1 as a whole is shifted 1 pixel in the opposite (negative X) direction and block 2 as a whole 1 pixel in the positive X direction, and so on. Correcting different pixel blocks with different image offsets improves the precision of the correction compared with correcting a whole image with a single image offset, effectively guaranteeing the quality of image correction and thus the quality of video correction.
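The block-wise compensation above can be sketched as follows, assuming each band is shifted by the negative of its measured X image offset. The wrap-around at the edges (`np.roll`) is a simplification; a real pipeline would crop or pad instead:

```python
import numpy as np

def correct_key_frame_blocks(frame, x_offsets_px, n_blocks=20):
    """Split a frame's rows into n_blocks bands and shift each band
    horizontally by the negative of its image offset (in pixels),
    compensating the measured shake for that band."""
    corrected = frame.copy()
    rows_per_block = frame.shape[0] // n_blocks
    for i, dx in enumerate(x_offsets_px[:n_blocks]):
        band = slice(i * rows_per_block, (i + 1) * rows_per_block)
        corrected[band] = np.roll(frame[band], -int(dx), axis=1)
    return corrected
```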
In an embodiment, each pixel block may include one pixel row, and each pixel row is corrected one by one according to an image offset. For example, if the currently computed image offset is a shift of 1 pixel in the positive X direction, then during compensation each pixel row of the image is shifted 1 pixel in the negative X direction, achieving row-by-row correction of the image.
In an embodiment, when the number of image offsets is greater than or equal to the number of pixel rows of the image, the image is corrected row by row using the image offsets. For example, if the electronic device contains one camera, the lens offset is sampled at 8 kHz, and frames are captured at 30 Hz, then capturing one frame will simultaneously collect 266 lens-offset samples, corresponding to a group of 266 image offsets; if the electronic device contains dual cameras at the same rates, capturing one frame will collect 533 lens-offset samples, corresponding to a group of 533 image offsets. The embodiments of this application take a device with one camera as an example. A CMOS sensor images by progressive (row-by-row) scanning. Assuming a frame has 200 rows, the 266 image offsets exceed the row count with some left over, so 200 of the 266 are selected, one per pixel row; that is, 200 of the 266 values are assigned one by one to the pixel rows, and the key frame image is corrected row by row.
In an embodiment, when the number of image offsets is less than the number of pixel rows of the image, the key frame image is corrected row by row using the image offsets. For example, if the lens offset is sampled at 8 kHz and frames are captured at 30 Hz, capturing one frame collects 266 lens-offset samples, corresponding to a group of 266 image offsets. CMOS images row by row; assuming a frame has 300 rows, the 266 image offsets cannot be assigned one per row for all 300 rows. Instead, the 266 offsets can be assigned to the first 266 pixel rows, and for the remaining 34 rows, 34 offsets are selected from the 266, so that every pixel row corresponds to one image offset, and the key frame image is corrected row by row.
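The two allocation cases above (more offsets than rows, or fewer) can be sketched as follows. Taking offsets in acquisition order, and reusing the earliest offsets for leftover rows, are illustrative choices; the text explicitly leaves the selection strategy open:

```python
def allocate_offsets(offsets, n_rows):
    """Assign one image offset per pixel row.

    If there are at least n_rows offsets, take the first n_rows in
    acquisition order. If there are fewer, assign them to the first rows
    and reuse the earliest offsets for the remaining rows.
    """
    if len(offsets) >= n_rows:
        return offsets[:n_rows]
    shortfall = n_rows - len(offsets)
    return offsets + offsets[:shortfall]
```

For example, 266 offsets against 200 rows keeps the first 200; 266 offsets against 300 rows reuses 34 of them.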
It should be noted that when a certain number of image-offset values are selected from a group of image-offset data — for example, 200 values out of 266 — they may be selected in the order of acquisition or in descending order of mean square value; the specific choice depends on the actual situation and is not limited in this embodiment.
In this embodiment of the application, correcting the pixel rows of a key frame one by one greatly improves correction precision compared with correcting all images with the same single image offset.
FIG. 5 is a flowchart of correcting non-key frame images in an embodiment. As shown in FIG. 5, it includes steps 510 and 520.
Step 510: obtain multiple consecutive non-key frame images;
Step 520: correct the multiple consecutive non-key frame images with the same image offset.
In an embodiment, after the key frame images and non-key frame images among the multiple frames are determined, multiple consecutive non-key frame images are obtained and corrected with the same image offset. For example, suppose the camera captures five frames in total: a first, second, third, fourth, and fifth frame. If the first frame is a key frame image and the second through fifth frames are all non-key frame images, one image offset may be selected from the five groups of image offsets corresponding to the five frames to correct the second through fifth frames; the specific selection method is not limited. If the third frame is the key frame image and the first, second, fourth, and fifth frames are non-key frame images, either one image offset may be selected from the five groups to correct the four non-key frame images, or two image offsets may be selected, with the first and second frames sharing one of them and the fourth and fifth frames sharing the other.
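A sketch of the shared-offset policy for runs of non-key frames follows. Taking the first offset of the first frame in each run is a hypothetical choice made for illustration; the text leaves the selection open:

```python
def shared_offsets_for_non_key_runs(labels, offset_groups):
    """Map each non-key frame index (label 2) to one shared image offset.

    Every maximal run of consecutive non-key frames shares a single
    offset; here, the first offset of the run's first frame is used.
    """
    shared = {}
    run = []
    for idx, label in enumerate(labels + [1]):  # sentinel key frame closes the last run
        if label == 2:
            run.append(idx)
        elif run:
            first = offset_groups[run[0]][0]
            shared.update({i: first for i in run})
            run = []
    return shared
```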
It should be understood that although the steps in the flowcharts of FIGS. 1 to 5 are shown sequentially as indicated by the arrows, these steps are not necessarily executed in the order indicated. Unless explicitly stated herein, there is no strict order restriction on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in FIGS. 1 to 5 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments; their order of execution is not necessarily sequential, and they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
FIG. 6 is a structural block diagram of a video correction apparatus of an embodiment. As shown in FIG. 6, the video correction apparatus includes a capture module 610, a first acquisition module 620, a second acquisition module 630, a labeling module 640, and a correction module 650.
The capture module 610, which may be an OIS controller, is configured to control the camera to continuously capture multiple frames of images, the camera including an optical image stabilization (OIS) system.
The first acquisition module 620, which includes a gyroscope and a Hall sensor, is configured to acquire the lens offset of each frame of the images captured while the camera shakes.
The second acquisition module 630, which may be a general-purpose processor (CPU), a graphics processor (GPU), or an image signal processor (ISP), is configured to obtain, from a preset calibration function and the lens offset, the image offset corresponding to the lens offset.
The labeling module 640 is configured to label each frame of the images to identify the key frame images among the multiple frames of images.
The correction module 650 is configured to correct the key frames according to the image offset.
In an embodiment, the first acquisition module 620 includes a first acquisition unit and a computation unit.
The first acquisition unit is configured to acquire the angular velocity information of the gyroscope while the camera shakes, the angular velocity information corresponding in time to the Hall value;
the computation unit is configured to select at least one piece of the angular velocity information, obtain the Hall value corresponding to the at least one piece of angular velocity information, and calculate the lens offset corresponding to the Hall value.
In an embodiment, the labeling module 640 includes a second acquisition unit, a third acquisition unit, and a determination unit.
The second acquisition unit is configured to obtain the global features and local features of each frame of the images;
the third acquisition unit is configured to compute the similarity between the frames from the global features and the local features to obtain the neighbour relationship between the frames;
the determination unit is configured to label each frame of the images according to the neighbour relationship to determine the key frame images and non-key frame images among the multiple frames of images.
In an embodiment, the correction module 650 is configured to divide all the pixels of the key frame image into multiple regions to form multiple pixel blocks, and to correct the multiple pixel blocks one by one according to the image offsets.
In an embodiment, the correction module 650 is further configured to correct the non-key frame images according to the image offset, specifically by:
obtaining multiple consecutive non-key frame images; and
correcting the multiple consecutive non-key frame images with the same image offset.
For the calibration relationship between the lens offset and the image offset and its related description, see the relevant parts of the foregoing embodiments; details are not repeated here.
The video correction apparatus provided above controls the camera to continuously capture multiple frames of images, acquires the lens offset of each frame of the images captured while the camera shakes, obtains the image offset corresponding to the lens offset from a preset calibration function and the lens offset, labels each frame to identify the key frame images among the multiple frames, and corrects the key frame images block by block according to the image offsets, which not only improves the efficiency of video correction but also guarantees the quality of the correction.
The division of the modules in the above video correction apparatus is for illustration only; in other embodiments, the video correction apparatus may be divided into different modules as needed to complete all or part of its functions.
For the specific limitations of the video correction apparatus, see the limitations of the video correction method above, which are not repeated here. Each module of the above video correction apparatus may be implemented wholly or partly in software, hardware, or a combination of the two. The modules may be embedded in or independent of a processor in a computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
The modules of the video correction apparatus provided in the embodiments of this application may be implemented in the form of a computer program. The computer program may run on a terminal or a server. The program modules formed by the computer program may be stored on the memory of the terminal or server. When executed by a processor, the computer program implements the steps of the methods described in the embodiments of this application.
FIG. 7 is a schematic diagram of the internal structure of an electronic device in an embodiment. As shown in FIG. 7, the terminal includes a processor, a memory, and a network interface connected through a system bus. The processor provides computing and control capability and supports the operation of the whole electronic device. The memory stores data, programs, and the like; at least one computer program is stored on the memory and can be executed by the processor to implement the wireless network communication method applicable to the electronic device provided in the embodiments of this application. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program can be executed by the processor to implement the video correction method provided in the following embodiments. The internal memory provides a cached runtime environment for the operating system and the computer program in the non-volatile storage medium. The network interface may be an Ethernet card, a wireless network card, or the like, used to communicate with external electronic devices. The electronic device may be a mobile terminal, a tablet computer, a personal digital assistant, a wearable device, or the like.
An embodiment of this application further provides a computer-readable storage medium. One or more non-volatile computer-readable storage media contain computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of the video correction method.
A computer program product containing instructions, when run on a computer, causes the computer to execute the video correction method.
An embodiment of this application further provides an electronic device. As shown in FIG. 8, for ease of description, only the parts related to the embodiments of this application are shown; for specific technical details not disclosed, refer to the method part of the embodiments of this application. The electronic device may be any terminal device including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, an in-vehicle computer, or a wearable device; the following takes a mobile phone as an example:
FIG. 8 is a block diagram of part of the structure of a mobile terminal related to the electronic device provided in an embodiment of this application. Referring to FIG. 8, the mobile phone includes a radio frequency (RF) circuit 810, a memory 820, an input unit 830, a display unit 840, a sensor 850, an audio circuit 860, a wireless fidelity (WiFi) module 870, a processor 880, and a power supply 890, among other components. Those skilled in the art can understand that the phone structure shown in FIG. 8 does not limit the phone, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
The RF circuit 810 may be used to receive and send signals during the sending and receiving of information or during a call; it may receive downlink information from a base station and pass it to the processor 880 for processing, and may send uplink data to the base station. Typically, the RF circuit includes but is not limited to an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 810 may also communicate with networks and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to the Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Message Service (SMS), and so on.
The memory 820 may be used to store software programs and modules; the processor 880 executes the various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 820. The memory 820 may mainly include a program storage area and a data storage area. The program storage area may store an operating system and the application programs required by at least one function (such as a sound-playback application or an image-playback application); the data storage area may store data created according to the use of the phone (such as audio data and an address book). In addition, the memory 820 may include a high-speed random access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The input unit 830 may be used to receive input numeric or character information and to generate key signal input related to user settings and function control of the mobile phone 800. Specifically, the input unit 830 may include a touch panel 831 and other input devices 832. The touch panel 831, also called a touch screen, can collect the user's touch operations on or near it (such as operations performed by the user on or near the touch panel 831 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connected devices according to a preset program. In an embodiment, the touch panel 831 may include a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 880, and can receive and execute commands sent by the processor 880. The touch panel 831 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 831, the input unit 830 may also include other input devices 832. Specifically, the other input devices 832 may include but are not limited to one or more of a physical keyboard and function keys (such as volume control keys and a power key).
The display unit 840 may be used to display information input by the user or provided to the user, and the various menus of the phone. The display unit 840 may include a display panel 841. In an embodiment, the display panel 841 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. In an embodiment, the touch panel 831 may cover the display panel 841; when the touch panel 831 detects a touch operation on or near it, it passes the operation to the processor 880 to determine the type of touch event, and the processor 880 then provides the corresponding visual output on the display panel 841 according to the type of touch event. Although in FIG. 8 the touch panel 831 and the display panel 841 are two independent components implementing the phone's input and output functions, in some embodiments the touch panel 831 and the display panel 841 may be integrated to implement the phone's input and output functions.
The mobile phone 800 may further include at least one sensor 850, such as a light sensor, a motion sensor, or another sensor. Specifically, the light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 841 according to the ambient light, and the proximity sensor can turn off the display panel 841 and/or the backlight when the phone is moved to the ear. The motion sensor may include an acceleration sensor, which can detect the magnitude of acceleration in all directions and, at rest, the magnitude and direction of gravity; it can be used in applications that recognize the phone's posture (such as landscape/portrait switching) and in vibration-recognition functions (such as a pedometer or tap detection). The phone may also be equipped with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor.
The audio circuit 860, the speaker 861, and the microphone 862 can provide an audio interface between the user and the phone. The audio circuit 860 can transmit the electrical signal converted from the received audio data to the speaker 861, which converts it into a sound signal for output; on the other hand, the microphone 862 converts a collected sound signal into an electrical signal, which the audio circuit 860 receives and converts into audio data; after the audio data is processed by the processor 880, it may be sent to another mobile phone via the RF circuit 810, or output to the memory 820 for subsequent processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 870, the phone can help users send and receive e-mail, browse web pages, and access streaming media; it provides users with wireless broadband Internet access. Although FIG. 8 shows the WiFi module 870, it can be understood that it is not an essential part of the mobile phone 800 and may be omitted as needed.
The processor 880 is the control center of the phone. It connects the various parts of the whole phone through various interfaces and lines, and performs the phone's various functions and processes data by running or executing the software programs and/or modules stored in the memory 820 and calling the data stored in the memory 820, thereby monitoring the phone as a whole. In an embodiment, the processor 880 may include one or more processing units. In an embodiment, the processor 880 may integrate an application processor and a modem processor: the application processor mainly handles the operating system, the user interface, application programs, and the like, while the modem processor mainly handles wireless communication. It can be understood that the above modem processor may also not be integrated into the processor 880.
The mobile phone 800 further includes a power supply 890 (such as a battery) that powers the various components. Preferably, the power supply may be logically connected to the processor 880 through a power management system, so that functions such as charging, discharging, and power-consumption management are handled through the power management system.
In an embodiment, the mobile phone 800 may also include a camera, a Bluetooth module, and the like.
In the embodiments of this application, the processor 880 included in the electronic device implements the steps of the video correction method when executing the computer program stored in the memory.
Any reference to memory, storage, a database, or another medium used in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM), which serves as external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above embodiments express only several implementations of this application; their description is specific and detailed, but they should not therefore be understood as limiting the scope of this patent. It should be noted that those of ordinary skill in the art can make several variations and improvements without departing from the concept of this application, all of which fall within the scope of protection of this application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (15)

  1. A video correction method, applied to an electronic device with a camera, comprising:
    controlling the camera to continuously capture multiple frames of images, the camera including an optical image stabilization system;
    acquiring a lens offset for each frame of the images captured while the camera shakes;
    obtaining, from a preset calibration function and the lens offset, an image offset corresponding to the lens offset;
    labeling each frame of the images to identify key frame images among the multiple frames of images; and
    correcting the key frame images according to the image offset.
  2. The method according to claim 1, wherein acquiring the lens offset for each frame of the images captured while the camera shakes comprises:
    acquiring multiple pieces of angular velocity information from a gyroscope while the camera shakes; and
    selecting at least one piece of the angular velocity information, obtaining a Hall value corresponding to the at least one piece of angular velocity information, and calculating a lens offset corresponding to the Hall value, the angular velocity information corresponding in time to the Hall value.
  3. The method according to claim 1, wherein acquiring the lens offset for each frame of the images captured while the camera shakes comprises:
    acquiring Hall position information of the electronic device, the Hall position information corresponding to the lens offset; and
    obtaining the lens offset from the Hall position information.
  4. The method according to claim 1, wherein labeling each frame of the images to identify the key frame images among the multiple frames of images comprises:
    obtaining global features and local features of each frame of the images;
    computing the similarity between the frames from the global features and the local features to obtain a neighbour relationship between the frames; and
    labeling each frame of the images according to the neighbour relationship to determine the key frame images and non-key frame images among the multiple frames of images.
  5. The method according to claim 1, wherein before obtaining, from the preset calibration function and the lens offset, the image offset corresponding to the lens offset, the method further comprises:
    photographing the same target reference object at different moments and obtaining the image corresponding to the lens offset at each moment, the image containing one or more feature points;
    detecting the one or more feature points in the images and calculating, from the positions of the feature points in the different images, the image offset of each image relative to the initial moment; and
    building a calibration table of the lens offsets versus the image offsets at the different moments, and fitting the calibration relationship between the lens offset and the image offset from the calibration table.
  6. The method according to claim 5, wherein fitting the calibration relationship between the lens offset and the image offset from the calibration table comprises:
    fitting an OIS calibration function of the lens offset and the image offset from the calibration table; and
    computing the parameters of the OIS calibration function with the lens offsets and image offsets at the different moments as input parameters.
  7. The method according to claim 1, wherein correcting the key frame images according to the image offset comprises:
    dividing all the pixels of the key frame image into multiple regions to form multiple pixel blocks; and
    correcting the multiple pixel blocks one by one according to the image offsets.
  8. The method according to claim 7, wherein correcting the multiple pixel blocks one by one according to the image offsets comprises:
    if each of the pixel blocks includes multiple pixel rows, correcting each pixel block one by one according to the image offsets; and
    if each pixel block includes one pixel row, correcting each pixel row one by one according to the image offsets.
  9. The method according to claim 1, wherein correcting the key frame images according to the image offset comprises:
    if the number of the image offsets is greater than or equal to the number of pixel rows of the key frame image, selecting a number of the image offsets corresponding to the number of pixel rows of the key frame image; and
    correcting the key frame image row by row using the selected image offsets.
  10. The method according to claim 9, wherein selecting the image offsets corresponding to the number of pixel rows of the key frame image comprises: selecting them in the order in which the image offsets were acquired, or in descending order of the mean square value of the image offsets.
  11. The method according to claim 1, wherein correcting the key frame images according to the image offset comprises:
    if the number of the image offsets is less than the number of pixel rows of the image, assigning the image offsets to a corresponding number of pixel rows of the key frame image; and
    according to the difference between the number of the image offsets and the number of pixel rows of the key frame image, selecting several of the image offsets and assigning them to the remaining pixel rows of the key frame image, so that each pixel row of the key frame image corresponds to one image offset, and correcting the key frame image row by row.
  12. The method according to claim 4, further comprising correcting the non-key frame images according to the image offset, specifically comprising:
    obtaining multiple consecutive non-key frame images; and
    correcting the multiple consecutive non-key frame images with the same image offset.
  13. A video correction apparatus, comprising:
    a capture module configured to control a camera to continuously capture multiple frames of images, the camera including an optical image stabilization system;
    a first acquisition module configured to acquire a lens offset for each frame of the images captured while the camera shakes;
    a second acquisition module configured to obtain, from a preset calibration function and the lens offset, an image offset corresponding to the lens offset;
    a labeling module configured to label each frame of the images to identify key frame images among the multiple frames of images; and
    a correction module configured to correct the key frames according to the image offset.
  14. An electronic device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the video correction method according to any one of claims 1 to 12.
  15. A computer-readable storage medium storing a computer program that, when executed by a processor, implements the steps of the method according to any one of claims 1 to 12.
PCT/CN2019/106389 2018-10-31 2019-09-18 Video correction method and apparatus, electronic device, and computer-readable storage medium WO2020088134A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811291479.3 2018-10-31
CN201811291479.3A CN109348125B (zh) 2018-10-31 2018-10-31 视频校正方法、装置、电子设备和计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2020088134A1 true WO2020088134A1 (zh) 2020-05-07

Family

ID=65313120

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/106389 WO2020088134A1 (zh) 2018-10-31 2019-09-18 视频校正方法、装置、电子设备和计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN109348125B (zh)
WO (1) WO2020088134A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113360804A (zh) * 2021-06-03 2021-09-07 北京百度网讯科技有限公司 界面显示方法、装置、电子设备和存储介质
CN115115822A (zh) * 2022-06-30 2022-09-27 小米汽车科技有限公司 车端图像处理方法、装置、车辆、存储介质及芯片
CN116320784A (zh) * 2022-10-27 2023-06-23 荣耀终端有限公司 图像处理方法及装置

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109348125B (zh) * 2018-10-31 2020-02-04 Oppo广东移动通信有限公司 视频校正方法、装置、电子设备和计算机可读存储介质
CN110363748B (zh) * 2019-06-19 2023-07-21 平安科技(深圳)有限公司 关键点的抖动处理方法、装置、介质及电子设备
CN112311734B (zh) * 2019-07-30 2022-09-02 杭州海康威视数字技术股份有限公司 一种多路视频的图像特征提取方法、电子设备及存储介质
CN110610465B (zh) * 2019-08-26 2022-05-17 Oppo广东移动通信有限公司 图像校正方法和装置、电子设备、计算机可读存储介质
CN110602386B (zh) * 2019-08-28 2021-05-14 维沃移动通信有限公司 一种视频录制方法及电子设备
CN110673115B (zh) * 2019-09-25 2021-11-23 杭州飞步科技有限公司 雷达与组合导航系统的联合标定方法、装置、设备及介质
CN111343356A (zh) * 2020-03-11 2020-06-26 Oppo广东移动通信有限公司 图像处理方法、图像处理装置、存储介质与电子设备
CN114710640B (zh) * 2020-12-29 2023-06-27 华为技术有限公司 基于虚拟形象的视频通话方法、装置和终端
CN113453070B (zh) * 2021-06-18 2023-01-03 北京灵汐科技有限公司 视频关键帧压缩方法及装置、存储介质和电子设备
CN114697469B (zh) * 2022-03-15 2024-02-06 华能大理风力发电有限公司洱源分公司 适用于光伏电站的视频处理方法、装置及电子设备
CN116017158B (zh) * 2023-01-31 2023-09-15 荣耀终端有限公司 一种光学防抖动的标定方法及设备
CN116341587B (zh) * 2023-05-25 2023-09-26 北京紫光青藤微系统有限公司 用于条码识别的方法及装置、条码采集设备

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103685950A (zh) * 2013-12-06 2014-03-26 华为技术有限公司 一种视频图像防抖方法及装置
US20160360111A1 (en) * 2014-08-25 2016-12-08 Apple Inc. Combined Optical And Electronic Image Stabilization
CN107852462A (zh) * 2015-07-22 2018-03-27 索尼公司 相机模块、固体摄像元件、电子设备和摄像方法
CN107925722A (zh) * 2015-11-16 2018-04-17 谷歌有限责任公司 基于加速度计数据的稳定化
CN109348125A (zh) * 2018-10-31 2019-02-15 Oppo广东移动通信有限公司 视频校正方法、装置、电子设备和计算机可读存储介质

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101537592B1 (ko) * 2008-09-03 2015-07-22 엘지전자 주식회사 이동단말기 및 그 제어 방법
CN102223545B (zh) * 2011-06-17 2013-10-16 宁波大学 一种快速多视点视频颜色校正方法
US9374532B2 (en) * 2013-03-15 2016-06-21 Google Inc. Cascaded camera motion estimation, rolling shutter detection, and camera shake detection for video stabilization
JP5997645B2 (ja) * 2013-03-26 2016-09-28 キヤノン株式会社 画像処理装置及び方法、及び撮像装置
CN107077140B (zh) * 2016-03-28 2018-11-30 深圳市大疆创新科技有限公司 无人飞行器的悬停控制方法、控制系统和无人飞行器
CN106550229A (zh) * 2016-10-18 2017-03-29 安徽协创物联网技术有限公司 一种平行全景相机阵列多视点图像校正方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103685950A (zh) * 2013-12-06 2014-03-26 华为技术有限公司 一种视频图像防抖方法及装置
US20160360111A1 (en) * 2014-08-25 2016-12-08 Apple Inc. Combined Optical And Electronic Image Stabilization
CN107852462A (zh) * 2015-07-22 2018-03-27 索尼公司 相机模块、固体摄像元件、电子设备和摄像方法
CN107925722A (zh) * 2015-11-16 2018-04-17 谷歌有限责任公司 基于加速度计数据的稳定化
CN109348125A (zh) * 2018-10-31 2019-02-15 Oppo广东移动通信有限公司 视频校正方法、装置、电子设备和计算机可读存储介质

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113360804A (zh) * 2021-06-03 2021-09-07 北京百度网讯科技有限公司 界面显示方法、装置、电子设备和存储介质
CN113360804B (zh) * 2021-06-03 2024-02-27 北京百度网讯科技有限公司 界面显示方法、装置、电子设备和存储介质
CN115115822A (zh) * 2022-06-30 2022-09-27 小米汽车科技有限公司 车端图像处理方法、装置、车辆、存储介质及芯片
CN115115822B (zh) * 2022-06-30 2023-10-31 小米汽车科技有限公司 车端图像处理方法、装置、车辆、存储介质及芯片
CN116320784A (zh) * 2022-10-27 2023-06-23 荣耀终端有限公司 图像处理方法及装置
CN116320784B (zh) * 2022-10-27 2023-11-28 荣耀终端有限公司 图像处理方法及装置

Also Published As

Publication number Publication date
CN109348125A (zh) 2019-02-15
CN109348125B (zh) 2020-02-04

Similar Documents

Publication Publication Date Title
WO2020088134A1 (zh) 视频校正方法、装置、电子设备和计算机可读存储介质
US10567659B2 (en) Image compensation method, electronic device and computer-readable storage medium
WO2019237984A1 (zh) 图像校正方法、电子设备及计算机可读存储介质
CN108513070B (zh) 一种图像处理方法、移动终端及计算机可读存储介质
EP3537709B1 (en) Electronic device photographing method and apparatus
CN109688322B (zh) 一种生成高动态范围图像的方法、装置及移动终端
CN108989672B (zh) 一种拍摄方法及移动终端
CN110913139B (zh) 拍照方法及电子设备
CN109685915B (zh) 一种图像处理方法、装置及移动终端
CN107749046B (zh) 一种图像处理方法及移动终端
CN108683850B (zh) 一种拍摄提示方法及移动终端
WO2016173350A1 (zh) 图片处理方法及装置
CN109819166B (zh) 一种图像处理方法和电子设备
CN111031234B (zh) 一种图像处理方法及电子设备
CN111601032A (zh) 一种拍摄方法、装置及电子设备
CN108881721B (zh) 一种显示方法及终端
KR102184308B1 (ko) 이미지 합성 방법, 장치 및 비휘발성 컴퓨터 판독 가능 매체
WO2019137535A1 (zh) 物距测量方法及终端设备
CN110944114B (zh) 拍照方法及电子设备
CN110363729B (zh) 一种图像处理方法、终端设备及计算机可读存储介质
CN116033269A (zh) 一种联动辅助防抖拍摄方法、设备及计算机可读存储介质
CN108536513B (zh) 一种图片显示方向调整方法及移动终端
WO2021136181A1 (zh) 图像处理方法及电子设备
CN109005337B (zh) 一种拍照方法及终端
CN107734269B (zh) 一种图像处理方法及移动终端

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19879209

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19879209

Country of ref document: EP

Kind code of ref document: A1