WO2022057723A1 - Video anti-shake processing method and electronic device - Google Patents

Video anti-shake processing method and electronic device (一种视频的防抖处理方法及电子设备)

Info

Publication number
WO2022057723A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
camera
electronic device
viewfinder
original image
Prior art date
Application number
PCT/CN2021/117504
Other languages
English (en)
French (fr)
Inventor
魏秋阳
Original Assignee
荣耀终端有限公司
Priority date
Filing date
Publication date
Application filed by 荣耀终端有限公司
Priority to EP21868548.5A (EP4044582A4)
Priority to US17/756,347 (US11750926B2)
Publication of WO2022057723A1

Classifications

    • H04N23/45: Cameras or camera modules comprising electronic image sensors, for generating image signals from two or more image sensors of different type or operating in different modes
    • H04N23/53: Constructional details of electronic viewfinders, e.g. rotatable or detachable
    • H04N23/62: Control of parameters via user interfaces
    • H04N23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632: GUIs for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N23/67: Focus control based on electronic image sensor signals
    • H04N23/68: Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/6812: Motion detection based on additional sensors, e.g. acceleration sensors
    • H04N23/683: Vibration or motion blur correction performed by a processor, e.g. controlling the readout of an image memory
    • H04N23/6845: Vibration or motion blur correction performed by controlling the image sensor readout, e.g. by combination of a plurality of images sequentially taken
    • H04N23/687: Vibration or motion blur correction performed by mechanical compensation by shifting the lens or sensor position
    • H04N5/772: Interface circuits between a recording apparatus and a television camera, the recording apparatus and the television camera being placed in the same enclosure
    • H04N5/91: Television signal processing for recording

Definitions

  • the present application relates to the field of electronic technology, and in particular, to a video anti-shake processing method and electronic device.
  • keeping the video picture stable is a major requirement of the video recording function.
  • the anti-shake processing method for video is particularly important.
  • users can reduce the magnitude of hand shake by adding an external anti-shake device (such as a stabilizer).
  • however, the additional anti-shake equipment increases the user's carrying burden.
  • the industry can also use a built-in optical image stabilization (OIS) device in the electronic device to achieve an effect similar to an external stabilizer to keep the picture stable.
  • the built-in OIS device will increase the size and cost of the camera, and a set of OIS devices can only cover one camera.
  • therefore, an OIS device is generally not configured for every camera.
  • alternatively, an inertial measurement unit (IMU), including sensors such as a gyroscope and an accelerometer, is built into the electronic device.
  • the motion characteristics of the electronic device are predicted through the data of the IMU, and the corresponding compensation is made to the video picture to achieve the effect of picture stabilization.
  • however, the existing anti-shake solutions for electronic devices cannot keep multiple video pictures stable at the same time.
  • the present application provides a video anti-shake processing method and electronic device, which can keep multiple video pictures stable at the same time.
  • a first aspect provides a video anti-shake processing method, applied to an electronic device that includes a first camera and a second camera. The method includes: receiving a first operation; and, in response to the first operation, displaying a first viewfinder and a second viewfinder, where the first viewfinder is used to display a first image collected by the first camera, and the second viewfinder is used to display a second image collected by the second camera. The first image is obtained by cropping the first original image collected by the first camera with the target object in the first original image as the center. The second image is obtained by cropping the second original image collected by the second camera with, as the center, the position to which the center of the second original image is moved by a first distance in a first direction, where the first direction and the first distance are determined according to the motion characteristics of the electronic device. The first camera is a front camera and the second camera is a rear camera; or the first camera is a telephoto camera and the second camera is a medium-focus or short-focus camera; or the first camera and the second camera are the same camera, and the zoom ratio of the first image is greater than the zoom ratio of the second image.
  • different anti-shake solutions are adopted for different video images collected by the electronic device.
  • the anti-shake method of compensating according to the motion characteristics of the electronic device is adopted for the video picture whose shooting scene is far from the electronic device or whose zoom ratio is small (that is, the second original image collected by the second camera), while a cropping scheme centered on the target object is adopted for the video picture whose subject is close to the electronic device or whose zoom ratio is larger (that is, the first original image collected by the first camera).
  • in this way, the image displayed in the first viewfinder is obtained by cropping the captured original image with the target object as the center, that is, the target object stays at the center of the first viewfinder, and the user does not need to deliberately adjust the electronic device. Users can therefore focus on the scene recorded in the second viewfinder and track and record the scene they want to shoot, which improves the quality of the recorded picture and the comfort of recording.
  • the sizes of the first viewfinder and the second viewfinder are the same or different.
  • in a possible implementation, obtaining the first image by cropping with the target object in the first original image as the center includes: cropping the first original image collected by the first camera with the target object as the center to obtain a third image; adjusting the zoom magnification corresponding to the first viewfinder according to the third image and the size of the first viewfinder; and zooming the third image according to the adjusted zoom magnification of the first viewfinder to obtain the first image.
  • zoom processing may also be performed on the cropped image. For example, when the image of the target object is small, the zoom magnification can be increased, so that the first viewfinder can display the target object more clearly.
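  • as a rough, non-limiting illustration of this zoom adjustment, the following Python sketch derives a zoom magnification from the size of the cropped target image (the third image) and the size of the first viewfinder; the names, the 1.0 floor, and the 2.0 cap are assumptions for illustration, not part of the claimed method.

```python
# Illustrative sketch only: derive the zoom magnification so that the
# cropped target image ("third image") fills the first viewfinder.
# The Box type, the 1.0 floor, and the max_zoom cap are assumptions.
from dataclasses import dataclass

@dataclass
class Box:
    x: int  # top-left corner of the target crop, in pixels
    y: int
    w: int  # width of the target crop, in pixels
    h: int  # height of the target crop, in pixels

def adjust_zoom_for_viewfinder(target: Box, vf_w: int, vf_h: int,
                               max_zoom: float = 2.0) -> float:
    """Zoom magnification that scales the third image up to the first
    viewfinder; when the target is small, the magnification rises so
    the viewfinder displays the target more clearly."""
    fit = min(vf_w / target.w, vf_h / target.h)
    return min(max(fit, 1.0), max_zoom)
```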
  • in a possible implementation, the method further includes: obtaining the first image by cropping the first original image with the target object as the center, and adjusting the size of the first viewfinder according to the size of the first image.
  • for example, when the first image is small, the size of the first viewfinder can be reduced; in this way, the second viewfinder can display more picture content.
  • in a possible implementation, the method further includes: automatically determining the target object in the first original image according to the first original image collected by the first camera; or determining the target object in the first original image according to a selection operation of the user.
  • the target object contains one or more faces.
  • in a possible implementation, obtaining the first image by cropping with the target object in the first original image as the center includes: using an image segmentation technology to divide the first original image collected by the first camera into a fourth image and a fifth image, where the fourth image is the image of the target object and the fifth image is the image in the first original image that does not contain the target object; cropping the fourth image with the target object as the center to obtain a sixth image; cropping the fifth image with, as the center, the position to which the center of the fifth image is moved by the first distance in the first direction to obtain a seventh image; and merging the sixth image and the seventh image to obtain the first image.
  • the original image collected by the front camera includes a human face and a background.
  • the face is usually closer to the phone, and the background is usually farther from the phone.
  • the solution of only cropping the face as the center may result in poor stability of the background. Therefore, the stability of both the face and the background needs to be considered.
  • in some embodiments, stability weights can be set for the face and the background. For example, the weights of face (or human-body) stability and background stability can be matched according to the proportion of the face (or human body) in the original image (or the cropped image).
  • for example, when the proportion of the face is large, the stability of the face is given a higher weight; that is, the cropping is performed mainly with the face as the center, and the stability of the background is not considered or is considered less.
  • conversely, when the proportion of the face is small, the weight of background stability is higher; that is, the cropping center is moved a corresponding distance in the direction opposite to the movement direction of the mobile phone before cropping, and the stability of the face is not considered or is considered less.
  • in some other embodiments, the face (or human body) in each image frame can also be separated from the background, the face (or human body) and the background can be processed with different anti-shake schemes, and the two processed images can then be merged to obtain a front video in which both the face and the background are stable, as sketched below.
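  • a minimal sketch of this segmentation-based scheme, assuming integer pixel offsets and using numpy.roll as a stand-in for a real shift-and-crop; the segmentation mask (for example, from a portrait-segmentation model) is taken as an input, and all function names are hypothetical.

```python
# Illustrative sketch: stabilize the target (e.g. face) layer and the
# background layer of one frame with different schemes, then merge.
# np.roll is a stand-in for a real shift-and-crop; offsets are ints.
import numpy as np

def stabilize_frame(frame: np.ndarray, mask: np.ndarray,
                    target_center: tuple, frame_center: tuple,
                    bg_offset: tuple) -> np.ndarray:
    """frame: (H, W, 3); mask: (H, W), 1 where the target is.
    bg_offset: (dx, dy) = first distance along the first direction."""
    target_layer = frame * mask[..., None]        # "fourth image"
    background = frame * (1 - mask)[..., None]    # "fifth image"

    # "Sixth image": shift so the target sits at the frame center.
    dx = frame_center[0] - target_center[0]
    dy = frame_center[1] - target_center[1]
    sixth = np.roll(target_layer, (dy, dx), axis=(0, 1))

    # "Seventh image": shift the background by the first distance in
    # the first direction, opposing the device motion.
    seventh = np.roll(background, (bg_offset[1], bg_offset[0]), axis=(0, 1))

    # Merge: the shifted target overwrites the background where present.
    m = np.roll(mask, (dy, dx), axis=(0, 1))[..., None]
    return (seventh * (1 - m) + sixth * m).astype(frame.dtype)
```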
  • the electronic device is configured with an inertial measurement unit IMU, and the method further includes: determining a motion feature of the electronic device according to the data of the IMU, and determining the first direction and the first distance according to the motion feature of the electronic device.
  • in a possible implementation, the second camera is further configured with an optical anti-shake device, and determining the motion characteristics of the electronic device according to the data of the IMU and determining the first direction and the first distance according to the motion characteristics includes: determining the motion characteristics of the electronic device according to the data of the IMU, and determining the first direction and the first distance according to the motion characteristics of the electronic device and the data of the optical anti-shake device. A sketch of one way to derive these quantities follows.
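  • as a hedged sketch of how the first direction and first distance could be derived from IMU data, the following assumes a simple small-angle model in which the image shift is approximately the focal length (in pixels) times the accumulated rotation; the axis mapping and the focal_px parameter are assumptions, not the computation specified by the patent.

```python
# Illustrative sketch under a small-angle model: image shift is roughly
# focal length (in pixels) times the rotation accumulated over a frame.
# The axis mapping and focal_px are assumptions for illustration.
import numpy as np

def first_direction_and_distance(gyro: np.ndarray, dt: float,
                                 focal_px: float):
    """gyro: (N, 2) angular rates [rad/s] about the two image-plane
    axes sampled during one frame; dt: sample period [s].
    Returns (first_direction unit vector, first_distance in pixels)."""
    angle = gyro.sum(axis=0) * dt      # accumulated rotation [rad]
    shift = focal_px * angle           # approximate image shift [px]
    distance = float(np.linalg.norm(shift))
    if distance == 0.0:
        return np.zeros(2), 0.0
    # The crop center moves opposite to the estimated image motion.
    return -shift / distance, distance
```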
  • in this way, the target object is located in the center area of the first viewfinder, and the first camera does not need to be configured with an optical image stabilization device. An OIS device (such as a micro-pan/tilt device) can instead be added to the rear camera, and the picture in the second viewfinder is compensated by combining the OIS device with the predicted motion characteristics of the mobile phone.
  • in a possible implementation, the first operation is any one of: an operation of the user on a specific control, inputting a specific voice command, or performing a preset air gesture.
  • a second aspect provides an electronic device, comprising: a processor, a memory, a touch screen, a first camera and a second camera, where the memory, the touch screen, the first camera, and the second camera are coupled to the processor, and the memory is used to store computer program code. The computer program code includes computer instructions that, when read by the processor from the memory, cause the electronic device to perform the following operations: receiving a first operation; and, in response to the first operation, displaying a first viewfinder and a second viewfinder, where the first viewfinder is used to display the first image collected by the first camera, and the second viewfinder is used to display the second image collected by the second camera. The first image is obtained by cropping the first original image collected by the first camera with the target object in the first original image as the center. The second image is obtained by cropping the second original image collected by the second camera with, as the center, the position to which the center of the second original image is moved by a first distance in a first direction, where the first direction and the first distance are determined according to the motion characteristics of the electronic device. The first camera is a front camera and the second camera is a rear camera; or the first camera is a telephoto camera and the second camera is a medium-focus or short-focus camera; or the first camera and the second camera are the same camera, and the zoom ratio of the first image is greater than the zoom ratio of the second image.
  • the sizes of the first viewfinder and the second viewfinder are the same or different.
  • in a possible implementation, obtaining the first image by cropping with the target object in the first original image as the center includes: cropping the first original image collected by the first camera with the target object as the center to obtain a third image; adjusting the zoom magnification corresponding to the first viewfinder according to the third image and the size of the first viewfinder; and zooming the third image according to the adjusted zoom magnification of the first viewfinder to obtain the first image.
  • in a possible implementation, the following steps are also performed: obtaining the first image by cropping the first original image with the target object as the center, and adjusting the size of the first viewfinder according to the size of the first image.
  • in a possible implementation, the following is further performed: automatically determining the target object in the first original image according to the first original image collected by the first camera; or determining the target object in the first original image according to the user's selection operation.
  • the target object contains one or more faces.
  • in a possible implementation, obtaining the first image by cropping with the target object in the first original image as the center includes: using an image segmentation technology to divide the first original image collected by the first camera into a fourth image and a fifth image, where the fourth image is the image of the target object and the fifth image is the image in the first original image that does not contain the target object; cropping the fourth image with the target object as the center to obtain a sixth image; cropping the fifth image with, as the center, the position to which the center of the fifth image is moved by the first distance in the first direction to obtain a seventh image; and merging the sixth image and the seventh image to obtain the first image.
  • the electronic device is configured with an inertial measurement unit IMU, and the electronic device further executes: determining the motion characteristics of the electronic device according to the data of the IMU, and determining the first direction and the first distance according to the motion characteristics of the electronic device.
  • in a possible implementation, the second camera is further configured with an optical anti-shake device, and determining the motion characteristics of the electronic device according to the data of the IMU and determining the first direction and the first distance according to the motion characteristics includes: determining the motion characteristics of the electronic device according to the data of the IMU, and determining the first direction and the first distance according to the motion characteristics of the electronic device and the data of the optical anti-shake device.
  • in a possible implementation, the first operation is any one of: an operation of the user on a specific control, inputting a specific voice command, or performing a preset air gesture.
  • a third aspect provides an apparatus, where the apparatus is included in an electronic device and has a function of implementing the behavior of the electronic device in any of the above aspects and possible implementations.
  • This function can be implemented by hardware or by executing corresponding software by hardware.
  • the hardware or software includes at least one module or unit corresponding to the above-mentioned functions. For example, a receiving module or unit, a display module or unit, and a processing module or unit, etc.
  • a fourth aspect provides a computer-readable storage medium including computer instructions which, when run on a terminal, cause the terminal to execute the method described in the above aspects and any possible implementations.
  • a fifth aspect provides a graphical user interface on an electronic device, where the electronic device has a display screen, a camera, a memory, and one or more processors configured to execute one or more computer programs stored in the memory, and the graphical user interface comprises a graphical user interface displayed when the electronic device performs the method described in the above aspects and any possible implementations.
  • a sixth aspect provides a computer program product which, when run on a computer, causes the computer to execute the method described in the above aspects and any possible implementations.
  • a seventh aspect provides a chip system including a processor; when the processor executes instructions, the processor executes the method described in the above aspects and any possible implementations.
  • FIG. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a user interface of some electronic devices provided by an embodiment of the present application.
  • FIG. 3A is a schematic diagram of user interfaces of further electronic devices provided by the embodiments of the present application.
  • FIG. 3B is a schematic diagram of user interfaces of further electronic devices provided by the embodiments of the present application.
  • FIG. 4 is a schematic diagram of user interfaces of further electronic devices provided by the embodiments of the present application.
  • FIG. 5 is a schematic flowchart of an anti-shake method for recording a video provided by an embodiment of the present application.
  • FIG. 6 is a schematic flowchart of another anti-shake method for recording a video provided by an embodiment of the present application.
  • FIG. 7 is a schematic flowchart of another anti-shake method for recording a video provided by an embodiment of the present application.
  • FIG. 8 is a schematic process diagram of another anti-shake method for recording a video provided by an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a chip system according to an embodiment of the present application.
  • "plural" means two or more.
  • the words “exemplary” or “such as” are used to mean serving as an example, illustration, or illustration. Any embodiments or designs described in the embodiments of the present application as “exemplary” or “such as” should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as “exemplary” or “such as” is intended to present the related concepts in a specific manner.
  • the anti-shake method provided in the embodiments of the present application is applicable to an electronic device with a camera.
  • the electronic device may be, for example, a mobile phone, a tablet computer, a personal computer (PC), a personal digital assistant (PDA), a camera, a netbook, a wearable electronic device (such as a smart watch or a smart bracelet), an augmented reality (AR) device, a virtual reality (VR) device, etc.
  • FIG. 1 shows a schematic structural diagram of an electronic device 100 .
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2 , mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, headphone jack 170D, sensor module 180, buttons 190, motor 191, indicator 192, camera 193, display screen 194, and Subscriber identification module (subscriber identification module, SIM) card interface 195 and so on.
  • the structures illustrated in the embodiments of the present application do not constitute a specific limitation on the electronic device 100.
  • the electronic device 100 may include more or fewer components than shown, combine some components, split some components, or arrange the components differently.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units, for example, the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), controller, video codec, digital signal processor (digital signal processor, DSP), baseband processor, and/or neural-network processing unit (neural-network processing unit, NPU), etc. Wherein, different processing units may be independent devices, or may be integrated in one or more processors.
  • the processor 110 may further include a sensor hub, which may implement real-time control of the sensor when the AP is dormant, thereby achieving the function of reducing power consumption.
  • the sensor hub is used to connect low-speed, always-on sensors, such as gyroscopes and accelerometers, to save AP power consumption.
  • the sensor hub can also fuse the data of different types of sensors to realize the functions that can only be realized by the combination of various sensor data.
  • the controller can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is cache memory. This memory may hold instructions or data that have just been used or recycled by the processor 110 . If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby increasing the efficiency of the system.
  • the processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the interface connection relationship between the modules shown in FIG. 1 is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100 .
  • the electronic device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 140 may receive charging input from the wired charger through the USB interface 130 .
  • the charging management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100 . While the charging management module 140 charges the battery 142 , it can also supply power to the electronic device through the power management module 141 .
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the display screen 194, the camera 193, and the wireless communication module 160.
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, battery health status (leakage, impedance).
  • the power management module 141 may also be provided in the processor 110 .
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modulation and demodulation processor, the baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the antenna 1 can be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 may provide wireless communication solutions including 2G/3G/4G/5G etc. applied on the electronic device 100 .
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA) and the like.
  • the mobile communication module 150 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modulation and demodulation processor, and then turn it into an electromagnetic wave for radiation through the antenna 1 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the same device as at least part of the modules of the processor 110 .
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low frequency baseband signal is processed by the baseband processor and passed to the application processor.
  • the application processor outputs sound signals through audio devices (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or videos through the display screen 194 .
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent of the processor 110, and may be provided in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied on the electronic device 100, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110 , perform frequency modulation on it, amplify it, and convert it into electromagnetic waves for radiation through the antenna 2 .
  • the antenna 1 of the electronic device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
  • the electronic device 100 implements a display function through a GPU, a display screen 194, an application processor, and the like.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations, and is used for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
  • Display screen 194 is used to display images, videos, and the like.
  • Display screen 194 includes a display panel.
  • the display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like.
  • the electronic device 100 may include one or N display screens 194 , where N is a positive integer greater than one.
  • the electronic device 100 may implement a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
  • the ISP is used to process the data fed back by the camera 193 .
  • when shooting, the shutter is opened, light is transmitted through the lens to the camera photosensitive element, the optical signal is converted into an electrical signal, and the photosensitive element transmits the electrical signal to the ISP for processing, which converts it into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin tone.
  • ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 193 .
  • Camera 193 is used to capture still images or video.
  • the object is projected through the lens to generate an optical image onto the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats of image signals.
  • the electronic device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
  • at least one camera in the electronic device 100 is configured with an OIS device, and the ISP or other processors can use the data of the OIS device to perform anti-shake processing on images or videos captured by the corresponding camera.
  • the electronic device 100 may implement a multi-view video recording function. That is, the electronic device 100 can record multiple different video pictures at the same time. For example, in dual-view recording, the electronic device 100 can simultaneously record two video pictures.
  • the electronic device 100 may use the rear camera to record the scene on the back of the electronic device 100 (that is, the scene opposite to the user), and use the front camera to record the portrait or scene on the front of the electronic device 100 (that is, the portrait or scene on the user's side), etc.
  • the electronic device 100 may use the same or different cameras to record pictures with different magnifications, one is a panorama picture with a smaller zoom magnification, and the other is a close-up picture with a larger zoom magnification.
  • in the prior art, the electronic device 100 determines the motion characteristics (including motion direction, motion acceleration, motion distance, etc.) of the electronic device 100 according to the OIS data and the images collected by the camera 193, and performs compensation of the same magnitude or proportion on the multiple captured video pictures according to the motion characteristics. Since the shooting scenes or zoom ratios corresponding to different video pictures differ, the different video pictures exhibit different amplitudes of shaking when the electronic device 100 moves with the same amplitude. It can be seen that the electronic anti-shake technology in the prior art cannot stabilize multiple video pictures at the same time.
  • for example, in dual-view recording, the electronic device 100 uses the rear camera to shoot video picture 1 and the front camera to shoot video picture 2 at the same time.
  • the scene in video picture 1 is usually far from the electronic device 100, while the portrait (usually the photographer) in video picture 2 is relatively close to the electronic device 100.
  • suppose the electronic device 100 moves 1 mm in a certain direction; correspondingly, video picture 1 moves a distance of one pixel in this direction, and video picture 2 moves a distance of 0.2 pixels in the opposite direction. Then, if the same compensation is performed on video picture 1 and video picture 2, the stability of video picture 2 is not guaranteed when the stability of video picture 1 is guaranteed.
  • in another example, the electronic device 100 uses the rear camera to shoot video picture 1 and video picture 2 at the same time, where the zoom magnification of video picture 1 is "1×" and the zoom magnification of video picture 2 is "5×".
  • suppose the electronic device 100 moves 1 mm in a certain direction; correspondingly, video picture 1 moves a distance of one pixel in this direction, and video picture 2 moves a distance of five pixels in the same direction. Then, if the same compensation is performed on video picture 1 and video picture 2, the stability of video picture 2 is not guaranteed when the stability of video picture 1 is guaranteed. A short arithmetic sketch of both examples follows.
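  • the arithmetic behind both examples can be checked with a few lines of Python; this sketch simply applies a single compensation tuned to video picture 1 and reports the residual motion left in video picture 2.

```python
# Reproduces the two examples above: one compensation that exactly
# cancels video picture 1 leaves a residual shake in video picture 2.
def residual_shift(shift_pic1_px: float, shift_pic2_px: float) -> float:
    compensation = -shift_pic1_px          # cancels picture 1 exactly
    return shift_pic2_px + compensation    # what remains in picture 2

# Front/rear example: picture 1 moves +1 px, picture 2 moves -0.2 px.
print(residual_shift(1.0, -0.2))  # -1.2 px residual in picture 2

# Same camera at "1x" and "5x": 1 px vs. 5 px of image motion.
print(residual_shift(1.0, 5.0))   # 4.0 px residual in picture 2
```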
  • an embodiment of the present application provides an anti-shake method, which adopts different anti-shake solutions for different video images collected by the electronic device 100 .
  • the anti-shake method of compensating according to the motion characteristics of the electronic device is adopted for a video picture whose shooting scene is far from the electronic device or whose zoom ratio is small, while a cropping scheme centered on the target object is adopted for a video picture whose portrait is close to the electronic device or whose zoom ratio is relatively large, so as to ensure the stability of each video picture.
  • in some embodiments, for a video picture in which the portrait is close to the electronic device or the zoom ratio is large, the cropping scheme centered on the target object may also be combined with compensation based on the motion characteristics of the electronic device 100.
  • the anti-shake method will be described in detail below.
  • the digital signal processor is used to process digital signals; in addition to digital image signals, it can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform on the frequency point energy, and so on.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs.
  • the electronic device 100 can play or record videos in various encoding formats, such as Moving Picture Experts Group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, and so on.
  • the NPU is a neural-network (NN) computing processor.
  • Applications such as intelligent cognition of the electronic device 100 can be implemented through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100 .
  • the external memory card communicates with the processor 110 through the external memory interface 120 to implement the data storage function, for example, saving files such as music and videos in the external memory card.
  • Internal memory 121 may be used to store computer executable program code, which includes instructions.
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like.
  • the storage data area may store data (such as audio data, phone book, etc.) created during the use of the electronic device 100 and the like.
  • the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
  • the electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playback, recording, etc.
  • the audio module 170 is used for converting digital audio information into analog audio signal output, and also for converting analog audio input into digital audio signal. Audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 110 , or some functional modules of the audio module 170 may be provided in the processor 110 . Speaker 170A, also referred to as a "speaker”, is used to convert audio electrical signals into sound signals. The electronic device 100 can listen to music through the speaker 170A, or listen to a hands-free call. The receiver 170B, also referred to as "earpiece”, is used to convert audio electrical signals into sound signals.
  • the microphone 170C, also called a "mic", is used to convert sound signals into electrical signals.
  • when inputting a sound signal, the user can make a sound with the mouth close to the microphone 170C.
  • the electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals.
  • the electronic device 100 may further be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.
  • the earphone jack 170D is used to connect wired earphones.
  • the earphone interface 170D may be the USB interface 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
  • the electronic device 100 further includes an IMU, which is a device that can be used to measure the three-axis attitude angle and acceleration of the electronic device 100 . Then, the ISP or other processors can use the data of the IMU to extract the motion characteristics of the electronic device 100 (eg, moving direction, moving speed, or moving distance, etc.). Further, anti-shake processing may be performed on the recorded video according to the extracted motion feature of the electronic device 100 .
  • the IMU includes a gyro sensor 180A and an acceleration sensor 180B.
  • the gyro sensor 180A can be used to determine the motion attitude of the electronic device 100 .
  • the angular velocity of electronic device 100 about three axes may be determined by gyro sensor 180A.
  • the gyro sensor 180A can be used for image stabilization.
  • the gyro sensor 180A detects the shaking angle of the electronic device 100, calculates the distance that the lens module needs to compensate for according to the angle, and allows the lens to counteract the shaking of the electronic device 100 through reverse motion to achieve anti-shake.
  • the gyro sensor 180A can also be used for navigation, somatosensory game scenarios.
  • the acceleration sensor 180B can detect the magnitude of the acceleration of the electronic device 100 in various directions (generally three axes).
  • the magnitude and direction of gravity can be detected when the electronic device 100 is stationary. It can also be used to identify the posture of electronic devices, and can be used in applications such as horizontal and vertical screen switching, pedometers, etc.
  • the electronic device 100 may further include a magnetic sensor 180C, such as a three-axis magnetometer, which can be combined with a three-axis accelerometer to implement the function of a compass.
  • the electronic device 100 may more accurately determine the motion characteristics of the electronic device 100 according to the magnetic sensor 180C.
  • the electronic device 100 may further include an air pressure sensor 180D, a touch sensor, a compass, a GPS positioning module, and the like.
  • the above-mentioned keys 190 include a power key, volume keys, and the like. The keys 190 may be mechanical keys or touch keys.
  • the electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100 .
  • Motor 191 can generate vibrating cues.
  • the motor 191 can be used for vibrating alerts for incoming calls, and can also be used for touch vibration feedback.
  • touch operations for different applications can correspond to different vibration feedback effects.
  • the motor 191 can also correspond to different vibration feedback effects for touch operations on different areas of the display screen 194 .
  • the touch vibration feedback effect can also support customization.
  • the indicator 192 can be an indicator light, which can be used to indicate the charging state, the change of the power, and can also be used to indicate a message, a missed call, a notification, and the like.
  • the SIM card interface 195 is used to connect a SIM card. A SIM card can be brought into contact with or separated from the electronic device 100 by being inserted into or pulled out of the SIM card interface 195.
  • the electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • the SIM card interface 195 can support Nano SIM card, Micro SIM card, SIM card and so on.
  • the electronic device 100 interacts with the network through the SIM card to implement functions such as call and data communication.
  • the electronic device 100 employs an eSIM, i.e., an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100 .
  • in the following, the technical solutions provided by the embodiments of the present application are described in detail with reference to the accompanying drawings, taking as an example that the electronic device 100 is a mobile phone performing dual-view recording.
  • the user opens a camera application
  • the camera application may be a native camera application of a mobile phone, or a multifunctional camera application developed by a third party, or the like.
  • the mobile phone may display the photographing interface 200 as shown in (1) in FIG. 2 .
  • the camera application can enable the "photograph” function by default.
  • the shooting interface 200 includes a viewfinder frame 201, and functional controls such as "large aperture", “portrait”, “photograph”, "video”, and "more”.
  • the user can open the function option menu 203 as shown in (2) in FIG. 2 by operating the “more” control 202 .
  • the function option menu 203 includes a “dual-view video recording” function control 204 .
  • the user can enable the dual-view recording function by operating the “dual-view recording” function control 204 .
  • the user can also enter the recording interface 206 of the "video" function as shown in (3) in FIG. 2 by operating the "video" function control 205; the recording interface 206 includes a "dual-view recording" function control 207.
  • the user enables the dual-view recording function through the “dual-view recording” function control 207 .
  • in some other embodiments, when the mobile phone detects that the user has opened a preset application or is in a preset recording scene, the mobile phone can also automatically enable the dual-view recording function.
  • for example, when the user records a video blog (Vlog), records a concert or a sports event, or switches between the front camera and the rear camera a preset number of times, the mobile phone automatically enables the dual-view recording function or prompts the user to enable it.
  • This embodiment of the present application does not limit the specific manner of enabling the dual-view video recording function.
  • the mobile phone displays a dual-view recording interface 300 as shown in (1) in FIG. 3A .
  • the dual-view recording interface 300 includes a viewfinder 301 , a viewfinder 302 and a recording control 303 .
  • the viewfinder 301 is used to display the picture captured by the rear camera, which is the scene corresponding to the back of the screen of the mobile phone, and is usually far away.
  • the viewfinder 302 is used for displaying the picture captured by the front camera, which is usually the photographer, who is close to the mobile phone.
  • the mobile phone predicts the motion characteristics of the mobile phone according to the IMU data, and compensates the display screen in the viewfinder 301 according to the predicted motion characteristics of the mobile phone, so as to maintain the stability of the screen in the viewfinder 301 .
  • generally, without anti-shake processing, the mobile phone determines the size of the cropping area according to the zoom magnification, and crops the original image captured by the camera (that is, the full-size image, or the full-size image after slight cropping) with the center of the original image as the center of the cropped image. Then, the mobile phone performs digital zooming and other processing on the cropped image to obtain a preview or recorded image. That is to say, the image in the central area of the original image is cropped.
  • after adopting the anti-shake solution, the mobile phone determines the size of the cropping area according to the zoom ratio and then crops at a position offset from the center of the original image by a certain distance.
  • the direction of the offset is opposite to the predicted movement direction of the mobile phone, and the offset distance is positively correlated with the predicted movement distance of the mobile phone, as sketched below.
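  • the following Python sketch illustrates this crop-window logic under stated assumptions: the crop size is the original size divided by the zoom magnification, and the crop center is offset opposite to the predicted image-space motion; the clamping and parameter names are illustrative, not the patent's specified procedure.

```python
# Illustrative sketch: crop size follows the zoom magnification, and
# the crop center is offset opposite to the phone's predicted motion.
# Clamping and parameter names are assumptions for illustration.
def crop_window(img_w: int, img_h: int, zoom: float,
                motion_px: tuple = (0.0, 0.0)):
    """Return (left, top, width, height) of the crop region.
    motion_px: predicted image-space motion (x, y) of the phone."""
    crop_w, crop_h = int(img_w / zoom), int(img_h / zoom)
    cx = img_w / 2 - motion_px[0]  # offset opposes predicted motion
    cy = img_h / 2 - motion_px[1]
    # Keep the window inside the original image.
    left = min(max(cx - crop_w / 2, 0), img_w - crop_w)
    top = min(max(cy - crop_h / 2, 0), img_h - crop_h)
    return int(left), int(top), crop_w, crop_h
```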
  • the mobile phone takes the face as the center, and crops the original image collected by the front camera to obtain the picture in the viewfinder 302 , so as to ensure the stability of the picture in the viewfinder 302 .
  • the mobile phone determines the size of the cropping area according to the zoom ratio, and performs cropping with the center of the face image of the original image captured by the camera as the center of the cropped image.
  • the mobile phone performs digital zooming and other processing on the cropped image to obtain a preview or captured image.
  • if the image collected by the front camera includes multiple faces, the area covered by the multiple faces may be regarded as a whole. Taking the center of this whole as the center of the cropped image, the mobile phone crops the image captured by the front camera and performs digital zooming and other processing to obtain the picture displayed in the viewfinder 302.
  • since the picture displayed in the viewfinder 302 is obtained by cropping the collected original image centered on the human face, the human face is located at the center of the picture in the viewfinder 302 and is relatively stable.
  • in this way, the user can focus on the scene recorded in the viewfinder 301, tracking and recording the desired scene (e.g., a white swan paddling in the water), which improves the quality of the recorded picture and the user's recording comfort.
  • in addition, when compensating the picture in the viewfinder 301 according to the predicted motion characteristics of the mobile phone, the stability of the picture in the viewfinder 302 does not need to be considered, so the image in the viewfinder 301 can be compensated with higher precision, improving the picture stability in the viewfinder 301.
  • for example, an OIS device (such as a micro-gimbal device) may be added for the rear camera, and the picture in the viewfinder 301 is then compensated in combination with the OIS device and the predicted motion characteristics of the mobile phone.
  • it should be noted that the mobile phone may begin using different anti-shake schemes to process the pictures displayed in different viewfinders as early as the preview stage, or only during recording, or only when the recording is completed or the video is saved; this is not limited in this embodiment of the present application.
  • this document takes as an example the case where the mobile phone starts using different anti-shake schemes to process the pictures displayed in different viewfinders during the preview.
  • the size of the viewfinder 301 and the viewfinder 302 may be the same or different.
  • for example, when the mobile phone is in a portrait orientation, the mobile phone displays the dual-view recording interface 300 shown in (1) in FIG. 3A, where the display screen is divided into upper and lower areas, each corresponding to a viewfinder.
  • when the mobile phone is in a landscape orientation, the mobile phone displays the dual-view recording interface 306 shown in (2) in FIG. 3A, where the display screen is divided into left and right areas, each corresponding to a viewfinder.
  • the mobile phone may also display the dual-view recording interface 307 as shown in (3) in FIG. 3A, in which the smaller viewfinder covers the larger viewfinder and is located at the edge of the larger viewfinder.
  • the user can adjust the size of the viewfinder 301 and the viewfinder 302 and their relative positions. This embodiment of the present application does not limit this.
  • the zoom magnification of the viewfinder 302 may also be appropriately adjusted according to the size of the viewfinder 302 .
  • the zoom magnification of the viewfinder 302 is zoom magnification 1 (eg, “1 ⁇ ”).
  • after the mobile phone crops the image captured by the front camera, the zoom magnification of the viewfinder 302 can be automatically increased to zoom magnification 2 (for example, "2×"), as in the dual-view recording interface 308 shown in (2) in FIG. 3B. It can be seen that increasing the zoom magnification of the viewfinder 302 allows a clearer facial expression of the user to be recorded.
  • the size of the viewfinder 302 may also be appropriately adjusted.
  • the size of the viewfinder 302 becomes smaller, and the size of the viewfinder 301 becomes larger.
  • for another example, the size of the viewfinder 301 becomes larger while the viewfinder 302 becomes smaller and moves to the edge of the viewfinder 301. It can be seen that enlarging the viewfinder 301 makes it easier to record a clearer picture.
  • the user can manually adjust the zoom magnification of the viewfinder 301 .
  • the user can increase the zoom ratio by sliding two fingers to the outside of the screen, or reduce the zoom ratio by pinching with two fingers, or operate a specific control to change the zoom ratio.
  • the dual-view recording interface 300 further includes a zoom magnification indicating control 305 .
  • the user can adjust the zoom magnification of the screen in the viewfinder 301 by operating the zoom magnification indicating control 305 .
  • the mobile phone can compensate the original image collected by the rear camera according to the changed zoom magnification, the predicted motion characteristics of the mobile phone, and the data of the OIS device, so as to obtain the picture after the zoom magnification change, and maintain the picture stability.
  • generally, the larger the zoom magnification of the picture, the greater the compensation applied by the mobile phone according to its predicted motion characteristics.
  • optionally, when the mobile phone is configured with multiple rear cameras and each rear camera corresponds to a different focal length, the mobile phone can switch between different rear cameras to capture images when the zoom magnification of the picture in the viewfinder 301 falls into different focal-length ranges.
  • for example, after the dual-view recording function is enabled, the mobile phone first uses the mid-focus camera (i.e., the main camera) by default to capture images. Images captured with the mid-focus camera can correspond to a range of zoom magnifications, for example, "1×" to "5×".
  • when the user increases the zoom magnification of the viewfinder 301 to "5×" or above, the mobile phone switches to the telephoto camera to capture images.
  • when the user reduces the zoom magnification of the viewfinder 301 to below "1×", the mobile phone switches to the short-focus (i.e., wide-angle) camera to capture images.
  • the user can also adjust the zoom ratio of the picture in the viewfinder 302 .
  • the specific adjustment method is the same as the adjustment method of the zoom magnification of the screen in the viewfinder 301 .
  • it should be noted, however, that after the zoom magnification of the viewfinder 302 changes, the mobile phone still crops the original image captured by the front camera with the face as the center, and performs digital zoom processing according to the changed zoom magnification to obtain the picture in the viewfinder 302.
  • the number of front cameras may also be two or more.
  • the user can switch the camera used in the viewfinder 302, e.g., from the front camera to the rear camera, or from the rear camera back to the front camera.
  • for example, the viewfinder 302 further includes a camera switching control 209.
  • the mobile phone in response to the user operating the camera switching control 304 on the dual-view recording interface 300 shown in (1) in FIG. 4 , the mobile phone displays the dual-view recording interface 400 as shown in (2) in FIG. 4 .
  • in the dual-view recording interface 400, the image displayed in the viewfinder 302 is an image captured by a rear camera. That is to say, at this time, the images displayed in the viewfinder 301 and the viewfinder 302 are both captured by rear cameras.
  • in one example, when the viewfinder 302 has just switched to the rear camera, a specific rear camera can be used by default to capture images, for example, the main camera, or the telephoto camera, or a camera different from the one used by the viewfinder 301.
  • the user can adjust the zoom ratio of the viewfinder 302, and the mobile phone switches the corresponding rear camera to capture images according to the changed zoom ratio.
  • in another example, when the viewfinder 302 has just switched to the rear camera, a specific rear camera (e.g., the telephoto camera) is used by default to capture images. That is to say, the picture in the viewfinder 302 defaults to a close-up picture, and the zoom magnification of the viewfinder 302 defaults to a specific zoom magnification (e.g., "5×" or above).
  • the viewfinder 301 maintains the previous zoom magnification, and the user can adjust the zoom magnification of the viewfinder 301 subsequently.
  • the viewfinder 301 then switches the corresponding rear camera to capture images according to the changed zoom magnification.
  • alternatively, when the viewfinder 302 has just switched to the rear camera, the viewfinder 301 automatically switches to a specific rear camera (e.g., the mid-focus camera) to capture images. That is, the picture in the viewfinder 302 is a panoramic image.
  • the zoom magnification of the viewfinder 302 defaults to a specific zoom magnification (for example, "1x").
  • optionally, after the viewfinder 301 automatically switches to the specific rear camera, it cannot be switched to another rear camera.
  • of course, the picture in the viewfinder 302 may also be a panoramic picture by default while the picture in the viewfinder 301 is a close-up picture, which is not limited in this embodiment of the present application.
  • in the following, a description is given taking as an example a close-up picture displayed in the viewfinder 302 and a panoramic picture displayed in the viewfinder 301.
  • for example, the picture displayed in the viewfinder 302 corresponds to zoom magnification 3, and camera A is used to capture its images.
  • the picture displayed in the viewfinder 301 corresponds to zoom magnification 4, and camera B is used to capture its images.
  • camera A and camera B are the same or different cameras, and zoom magnification 3 is greater than zoom magnification 4.
  • the mobile phone can automatically identify the target object of the close-up shot according to the image collected by the camera B, or receive a preset operation performed by the user to determine the target object of the close-up shot.
  • the preset operations are, for example, double-click, long-press, frame selection, voice input, and the like.
  • then, the mobile phone crops the image collected by camera A with the target object as the center and performs digital zooming, etc., to obtain the picture displayed in the viewfinder 302. If the mobile phone does not recognize a close-up target object, or the user does not specify one, the mobile phone continues to crop the central area of the image captured by camera A and perform digital zooming to obtain the picture displayed in the viewfinder 302.
  • if the mobile phone automatically recognizes the close-up target object, or receives the target object selected by the user, the mobile phone can crop and digitally zoom the image captured by camera A with the target object as the center to obtain the picture displayed in the viewfinder frame 302. In this way, even when the user's hand shakes or the target object is slightly displaced, the target object remains at the center of the picture displayed in the viewfinder frame 302, achieving a screen-stabilization effect.
  • prompt information 401 may also be included on the dual-view recording interface 400, prompting the user to select the target object of the close-up shot.
  • in response to the user double-clicking the "white swan" target object, the mobile phone displays the recording interface 403 shown in (4) in FIG. 4.
  • a mark frame 404 may be displayed in the panoramic picture of the viewfinder 301 for marking the target object selected by the user.
  • the user can cancel the selected target object or replace with a new target object by performing other preset operations.
  • of course, the user may also select multiple target objects; in that case, the mobile phone treats the selected target objects as a whole and, taking the center of that whole as the center of the cropped image, crops and digitally zooms the image captured by camera A to obtain the picture displayed in the viewfinder 302.
  • FIG. 5 is a schematic flowchart of a dual-view recording method provided by an embodiment of the present application.
  • the processing method of the image captured by the rear camera includes: step 1 to step 3.
  • Step 1: The mobile phone obtains IMU data through the configured IMU device.
  • optionally, if the rear camera is configured with an OIS device, the mobile phone also obtains OIS data.
  • the IMU device includes a three-axis accelerometer and a three-axis gyroscope.
  • the IMU device may further include a magnetometer, a barometer, a compass, and the like.
  • the IMU data can be used to calculate the movement trajectory (including movement direction and movement distance, etc.) of the mobile phone and the pose changes of the mobile phone.
  • in some examples, if the rear camera of the mobile phone is also configured with an OIS device, the OIS data should also be obtained, because the part already compensated by the OIS device needs to be considered when calculating the motion trajectory of the mobile phone, so as to obtain an accurate motion trajectory and accurate pose changes of the mobile phone.
  • Step 2: The mobile phone inputs the IMU data and OIS data into the pre-trained model 1 to predict the motion trajectory of the mobile phone.
  • Model 1 can be used to clean abnormal data in IMU data and OIS data, calculate the current motion trajectory and pose changes of the mobile phone, and smooth the motion trajectory and pose changes.
  • the movement trajectory and the posture change of the mobile phone in the next time period are predicted according to the previous movement trajectory and the posture change.
  • Step 3: The mobile phone performs smoothing, compensation, and cropping on the image frames collected by the rear camera (referred to as rear image frames for short) according to the predicted motion trajectory of the mobile phone, to obtain a stable rear video.
  • due to hand shake, hand movement when moving the camera, the rolling-shutter exposure mechanism, and other causes, the image frames collected by the rear camera exhibit shaking, deformation, distortion, and similar phenomena, which make the original image frames appear to shake noticeably.
  • the original image frame collected by the rear camera can be compensated in the opposite direction according to the previously predicted motion trajectory of the mobile phone.
  • specifically, when the anti-shake solution is not used, the mobile phone determines the size of the cropping area according to the zoom magnification, and crops with the center of the original image captured by the camera as the center of the cropped image. Then, the mobile phone performs digital zooming and other processing on the cropped image to obtain a preview or captured image. That is to say, what is cropped at this time is the central area of the full-size image.
  • after adopting the anti-shake solution, the mobile phone determines the size of the cropping area according to the zoom magnification and then crops at a position offset from the center of the original image by a certain distance. The direction of the offset is opposite to the predicted movement direction of the mobile phone, and the offset distance is positively correlated with the predicted movement distance of the mobile phone.
  • the compensation process it is also possible to consider the smooth transition of multiple consecutive image frames after cropping, and adjust the compensation distance appropriately.
  • the mobile phone can also correct the deformed area, and perform image processing such as rotation on the distorted area.
  • the relevant parameters involved in the compensation scheme (e.g., the compensation direction and distance) can be fed back to model 1, so that model 1 can more accurately predict the subsequent motion trajectory.
  • the processing method for the image captured by the front camera includes: step 1, step 4 to step 6.
  • Step 4: The mobile phone performs face position recognition on the image frames obtained by the front camera (referred to as front image frames for short).
  • the image frame obtained by the front camera can be input into the face position recognition model to identify the position information of the face. If the image frame obtained by the front camera includes multiple faces, the location information of the multiple faces can be identified.
  • further, information such as the deflection angle and orientation of the face can also be identified.
  • Step 5: The mobile phone inputs the identified face information (including face position, deflection angle, orientation, etc.) and the IMU data obtained from the IMU device into the pre-trained model 2, which predicts the motion trajectory of the face and the motion trajectory of the mobile phone.
  • model 2 can be used to clean abnormal data in the IMU data and face position information, to calculate the motion trajectory of the face as well as the current motion trajectory and pose changes of the mobile phone, and to smooth these trajectories and pose changes. In addition, based on the previous motion trajectory of the face and the previous motion trajectory and pose changes of the mobile phone, it predicts the face's motion trajectory and the phone's motion trajectory and pose changes for the next time period.
  • Step 6: The mobile phone performs leveling, smoothing, compensation, and cropping on the image frames collected by the front camera (referred to as front image frames for short) according to the predicted motion trajectory of the face and the motion trajectory of the mobile phone, to obtain a stable front video.
  • in some embodiments, the mobile phone can crop the front image frame with the face as the center according to the predicted motion trajectory of the face, so as to achieve a picture-stabilization effect.
  • the smooth transition of multiple consecutive image frames after cropping can also be considered, and the position when cropping can be fine-tuned.
  • the mobile phone can also correct the deformed area, and perform image processing such as rotation on the distorted area.
  • the mobile phone may perform cropping processing on the front image frame according to the predicted movement trajectory of the face and the predicted movement trajectory of the mobile phone.
  • this is because the images captured by the front camera include both a face and a background; the face is close to the mobile phone, while the background is usually farther away. In some scenes, cropping only with the face as the center may result in poor background stability, so the stability of both the face and the background needs to be considered.
  • in some examples, stability weights can be set for the face and the background. For example, the weights of face (or human body) stability and background stability are assigned according to the proportion of the face (or human body) in the original image (or in the cropped image).
  • when the face occupies a preset proportion of the original image (for example, 60% or more), face stability is given the higher weight; that is, cropping is performed mainly with the face as the center, and background stability is given little or no consideration.
  • when the face occupies less than the preset proportion of the original image, background stability is given the higher weight; that is, the cropping center is moved a corresponding distance in the direction opposite to the movement direction of the mobile phone before cropping, and face stability is given little or no consideration.
  • in other examples, the face (or human body) in each image frame can also be separated from the background, the face (or human body) and the background can be processed with different anti-shake schemes, and the two processed images can then be composited to obtain a front video in which both the face and the background are stable.
  • optionally, considering that the user cannot necessarily keep the mobile phone perfectly level while recording, the background (such as a building) in the images captured by the front camera may appear inclined.
  • the mobile phone can also rotate the background, so that the background in the front video is in a horizontal position.
  • the relevant parameters involved in the compensation scheme can be passed to Model 2, so that Model 2 can more accurately predict the subsequent face motion trajectory and mobile phone motion trajectory.
  • after step 3 and step 6, the mobile phone splices or superimposes the front video and the rear video according to their proportions on the display screen, to obtain the dual-view video finally displayed on the display screen.
  • for example, if the front video and the rear video occupy the display screen in a 1:1 ratio, the mobile phone can adjust the processed front video and rear video to the same size and stitch them together to obtain the dual-view video (or preview image), as in the dual-view recording interface 300 shown in (1) in FIG. 3A or the dual-view recording interface 306 shown in (2) in FIG. 3A.
  • for another example, if the front video and the rear video occupy the display screen in a 1:8 ratio with the front video overlaid on the rear video, the mobile phone can adjust the frame sizes of the processed front video and rear video to a 1:8 ratio and superimpose the front video on the rear video to obtain the dual-view video (or preview image), as in the dual-view recording interface 307 shown in (3) in FIG. 3A.
  • FIG. 6 is a schematic flowchart of a video anti-shake method provided by an embodiment of the present application; the process specifically includes:
  • the mobile phone receives the operation of the user for enabling the dual-view recording function.
  • the operation of the user to enable the dual-view recording function is an operation of the user clicking the switch of the dual-view recording function in the camera application, or executing a predefined operation, or inputting a voice command.
  • in response to the operation of enabling the dual-view recording function, the mobile phone displays a first viewfinder and a second viewfinder.
  • the first viewfinder displays the first image collected by the first camera
  • the second viewfinder displays the second image collected by the second camera.
  • the zoom magnification of the first image is greater than the zoom magnification of the second image.
  • moreover, the target object in the first viewfinder is located in the central area of the first image, while the target object in the second viewfinder may or may not be located in the central area of the second image.
  • the first camera is a telephoto camera and the second camera is a mid-focus or short-focus camera; or, the first camera and the second camera are the same camera, but the zoom magnification of the first image is greater than that of the second image. It should be noted that, in this embodiment, the first camera and the second camera are cameras on the same side of the mobile phone.
  • generally, the user aims at the target object when shooting, and in this example the first camera and the second camera are cameras on the same side of the mobile phone, so both the first viewfinder and the second viewfinder contain the target object.
  • the target object may be automatically identified by the mobile phone according to the collected original image, or determined by the mobile phone according to the user's selection operation.
  • the target object can include one or more objects.
  • the target object includes a human face (or human body) or multiple human faces (or human bodies).
  • specifically, for the first original image collected by the first camera, the mobile phone obtains the first image by cropping with the target object in the first original image as the center.
  • for the second original image collected by the second camera, the picture is stabilized by compensating according to the motion characteristics of the mobile phone; that is, the second image is obtained by cropping centered on the position reached by moving the center of the second original image by a first distance in a first direction.
  • the first direction and the first distance are determined according to the motion characteristics of the electronic device.
  • it can be understood that the image displayed in the second viewfinder is obtained by compensating the collected original image according to the motion characteristics of the mobile phone.
  • in some scenes, the target object in the second viewfinder may not be located in the central area of the image.
  • the target object itself is not located in the central area of the second viewfinder, that is, the target object is not deviated from the central area of the second viewfinder due to the shaking of the mobile phone.
  • for example, a target object that was originally located in the central area of the image may itself be displaced and move out of the central area of the second viewfinder. In this case, even after compensation according to the motion characteristics of the mobile phone, the target object in the second viewfinder is still not located in the central area of the second image.
  • however, since the first image in the first viewfinder is obtained by cropping centered on the target object, the target object will always be located in the central area of the first image.
  • for example, in the interface 403 shown in (4) in FIG. 4, the viewfinder 301 is the second viewfinder and the viewfinder 302 is the first viewfinder.
  • it should be noted that the target object being located in the central area of an image (the first image or the second image) includes the case where the distance between the center of the target object and the center of the image is less than or equal to a preset threshold (for example, a distance of two pixels).
  • the center of the target object is, for example, the geometric center of the rectangular frame occupied by the target object in the image.
  • FIG. 7 is a schematic flowchart of another video anti-shake method provided by an embodiment of the present application; the process specifically includes:
  • the mobile phone receives the operation of the user to enable the dual-view recording function.
  • the operation of the user to enable the dual-view recording function is an operation of the user clicking the switch of the dual-view recording function in the camera application, or executing a predefined operation, or inputting a voice command.
  • in response to the operation of enabling the dual-view recording function, the mobile phone displays a first viewfinder and a second viewfinder.
  • the first viewfinder displays the image collected by the front camera, and the face or portrait in the first viewfinder is located in the center area of the image; the second viewfinder displays the image collected by the rear camera.
  • exemplarily, the human face being located in the central area of the image includes the case where the distance between the center of the human face or portrait and the center of the image is less than or equal to a preset threshold (for example, a distance of two pixels).
  • the center of the human face or human portrait is, for example, the geometric center of the rectangular frame occupied by the human face or human portrait in the image.
  • similarly, for the original image collected by the front camera, the image displayed in the first viewfinder is obtained by cropping with the face or portrait as the center, keeping the face or portrait in the central area of the first viewfinder, so as to achieve the effect of stabilizing the face or portrait.
  • for the original image collected by the rear camera, picture compensation can be performed in combination with the motion characteristics of the mobile phone, to achieve the effect of stabilizing the picture.
  • in the prior art, as shown in (1) in FIG. 8, the image 801 is the original image collected by the rear camera. Compensation is performed according to the motion characteristics of the mobile phone; that is, the crop center is moved from the center of the image 801 by a first distance in a first direction, and an image of the corresponding size, namely the image 803, is then cropped.
  • the image 802 is the original image captured by the front camera. Compensation is likewise performed according to the motion characteristics of the mobile phone; that is, the image 804 is cropped after the crop center is moved from the center of the image 802 by the first distance in the first direction. Then, the image 803 and the image 804 are merged to obtain the image 805.
  • the merging may also include performing other image processing, such as scaling down the image 803 and the image 804, and the like, which is not limited here.
  • the image 801 is the original image collected by the rear camera. If compensation is performed according to the motion characteristics of the mobile phone, an image 803 is obtained.
  • Image 802 is the original image captured by the front camera.
  • Image 806 is obtained if a corresponding crop is performed with the face in image 802 as the center. Then, image 803 and image 806 are merged to obtain image 807 .
  • the merging may also include other image processing, such as scaling down the image 803 and the image 806, which is not limited here. Comparing the image 805 with the image 807 shows that the method in (2) in FIG. 8 keeps the face in the picture captured by the front camera at the center of the image, achieving the effect of face stabilization.
  • an embodiment of the present application further provides a chip system; as shown in FIG. 9, the chip system includes at least one processor 1101 and at least one interface circuit 1102.
  • the processor 1101 and the interface circuit 1102 may be interconnected by wires.
  • the interface circuit 1102 may be used to receive signals from other devices, such as the memory of the electronic device 100 .
  • the interface circuit 1102 may be used to send signals to other devices (eg, the processor 1101).
  • the interface circuit 1102 may read the instructions stored in the memory and send the instructions to the processor 1101 .
  • when the instructions are executed by the processor 1101, the electronic device can be made to perform the various steps performed by the electronic device 100 (e.g., a mobile phone) in the above-mentioned embodiments.
  • the chip system may also include other discrete devices, which are not specifically limited in this embodiment of the present application.
  • An embodiment of the present application further provides an apparatus, the apparatus is included in an electronic device, and the apparatus has a function of implementing the behavior of the electronic device in any of the methods in the foregoing embodiments.
  • This function can be implemented by hardware or by executing corresponding software by hardware.
  • the hardware or software includes at least one module or unit corresponding to the above-mentioned functions. For example, a detection module or unit, a display module or unit, a determination module or unit, and a calculation module or unit, etc.
  • Embodiments of the present application further provide a computer storage medium, including computer instructions, when the computer instructions are executed on the electronic device, the electronic device is made to execute any of the methods in the foregoing embodiments.
  • Embodiments of the present application further provide a computer program product, which, when the computer program product runs on a computer, causes the computer to execute any of the methods in the foregoing embodiments.
  • the embodiments of the present application further provide a graphical user interface on an electronic device, where the electronic device has a display screen, a camera, a memory, and one or more processors configured to execute one or more computer programs stored in the memory; the graphical user interface includes a graphical user interface displayed when the electronic device performs any of the methods in the above-described embodiments.
  • the above-mentioned terminal and the like include corresponding hardware structures and/or software modules for executing each function.
  • Those skilled in the art should be easily aware that, in conjunction with the units and algorithm steps of each example described in the embodiments disclosed herein, the embodiments of the present application can be implemented in hardware or a combination of hardware and computer software. Whether a function is performed by hardware or computer software driving hardware depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functionality using different methods for each particular application, but such implementations should not be considered beyond the scope of the embodiments of the present invention.
  • each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module.
  • the above-mentioned integrated modules can be implemented in the form of hardware, and can also be implemented in the form of software function modules. It should be noted that, the division of modules in the embodiment of the present invention is schematic, and is only a logical function division, and there may be other division manners in actual implementation.
  • Each functional unit in each of the embodiments of the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • the integrated unit, if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in a computer-readable storage medium.
  • the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage medium includes: flash memory, removable hard disk, read-only memory, random access memory, magnetic disk or optical disk and other media that can store program codes.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

A video anti-shake processing method and an electronic device, relating to the field of electronic technology, for maintaining the stability of multiple video pictures in a multi-view recording mode. The method includes: in the multi-view recording mode, for a video picture captured by a front camera or a video picture with a high zoom magnification, achieving picture stabilization by cropping centered on a target object; and, for the other video pictures, achieving picture stabilization by compensating according to the motion characteristics of the electronic device.

Description

一种视频的防抖处理方法及电子设备
本申请要求于2020年09月18日提交国家知识产权局、申请号为202010988444.6、申请名称为“一种视频的防抖处理方法及电子设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及电子技术领域,尤其涉及一种视频的防抖处理方法及电子设备。
背景技术
保持视频画面稳定是录像功能的一大需求。然而,在拍摄者手持电子设备录像的过程中,用户很难避免手部抖动问题(包括手部静止时的抖动,以及运镜时的手部抖动等),因此针对视频的防抖处理方法尤为重要。目前,用户可以通过外加防抖设备(例如稳定器),来减弱用户手部抖动的幅度。但,外加防抖设备增加用户携带的负担。
另外,业界还可以通过在电子设备内置光学防抖(optical image stabilization,OIS)器件,做到类似外置稳定器的效果,保持画面稳定。但内置OIS器件会增大摄像头的体积和成本,且一套OIS器件只能覆盖一个摄像头。当电子设备具备多个摄像头时,一般不会为每一个摄像头都分别配置一套OIS器件。
此外,业界还可以采用电子防抖的方法。即,在电子设备中内置陀螺仪、加速度计等惯性测量单元(inertial measurement unit,IMU)。通过IMU的数据预测电子设备的运动特征,对视频画面做相应的补偿,达到画面稳定的效果。但是,在一些的多景的录像场景中,例如双景录像,现有的电子设备防抖方案不能实现保持多个视频画面的稳定效果。
发明内容
本申请提供的一种视频的防抖处理方法及电子设备,可以保持多个视频画面的稳定效果。
为了实现上述目的,本申请实施例提供了以下技术方案:
第一方面、提供一种视频的防抖处理方法,该方法应用于包含相机的电子设备,电子设备包括第一摄像头和第二摄像头,该方法包括:接收到第一操作;响应于接收到第一操作,显示第一取景器和第二取景器;其中,第一取景器用于显示第一摄像头采集的第一图像,第二取景器用于显示第二摄像头采集的第二图像;其中,针对第一摄像头采集的第一原始图像,以第一原始图像中目标对象为中心进行裁切得到第一图像;针对第二摄像头采集的第二原始图像,以第二原始图像的中心向第一方向移动第一距离的位置为中心进行裁切得到第二图像,第一方向和第一距离为根据电子设备的运动特征确定的;第一摄像头为前置摄像头,第二摄像头为后置摄像头;或者,第一摄像头为长焦摄像头,第二摄像头为中焦或短焦摄像头;或者,第一摄像头和第二摄 像头为同一摄像头,第一图像的变焦倍率大于第二图像的变焦倍率。
也就是说,针对电子设备采集的不同的视频画面采用不同的防抖方案。对拍摄景物距离电子设备较远或者变焦倍率较小的视频画面(即第二摄像头采集的第二原始图像)采用根据电子设备的运动特征进行补偿的防抖方法,对拍摄人像距离电子设备较近或者变焦倍率较大的视频画面(即第一摄像头采集的第一原始图像)采用以目标对象为中心的裁切方案,进而保证多个视频画面的稳定。
另外,由于第一取景器中显示画面是以目标对象为中心对采集的原始图像进行裁剪得到的,即目标对象是位于第一取景器画面的中心位置,无需用户刻意调整电子设备。因此,用户可以将注意力集中在第二取景器中录制的景物,跟踪录制想要拍摄的景物,提升录制画面的质量,以及用户录制的舒适度。
一种可能的实现方式中,第一取景器和第二取景器的尺寸相同或不同。
一种可能的实现方式中,针对第一摄像头采集的第一原始图像,以第一原始图像中目标对象为中心进行裁切得到第一图像,包括:在针对第一摄像头采集的第一原始图像,以第一原始图像中目标对象为中心进行裁切,得到第三图像;根据第三图像,以及第一取景框的尺寸,调整第一取景框对应的变焦倍率;根据第一取景框调整后的变焦倍率和第三图像进行变焦处理,得到第一图像。
也就是说,针对第一摄像头采集的第一原始图像,在以目标对象为中心进行裁切后,还可以对该裁切后图像进行变焦处理。例如,目标对象的图像较小时,可以调大变焦倍率,使得第一取景器显示更为清楚的目标对象。
一种可能的实现方式中,在针对第一摄像头采集的第一原始图像,以第一原始图像中目标对象为中心进行裁切得到第一图像之后,该方法包括:在针对第一摄像头采集的第一原始图像,以第一原始图像中目标对象为中心进行裁切得到第一图像;根据第一图像的尺寸,调整第一取景器的尺寸。
例如,若包含目标对象的第一图像较小时,可以调小第一取景器的尺寸。这样,第二取景器可以显示更多画面内容。
一种可能的实现方式中,该方法还包括:根据第一摄像头采集的第一原始图像,自动确定第一原始图像中的目标对象;或者,根据用户的选择操作确定第一原始图像中的目标对象。
一种可能的实现方式中,目标对象包含一个或多个人脸。
一种可能的实现方式中,针对第一摄像头采集的第一原始图像,以第一原始图像中目标对象为中心进行裁切得到第一图像,还包括:采用图像分割技术将第一摄像头采集的第一原始图像分割为第四图像和第五图像,第四图像为目标对象的图像,第五图像为第一原始图像中不包含目标对象的图像;针对第四图像,以第四图像中目标对象为中心进行裁切得到第六图像;针对第五图像,为第五图像的中心向第一方向移动第一距离的位置为中心进行裁切得到第七图像;将第六图像和第七图像进行合并,得到第一图像。
当第一摄像头为前置摄像头时,前置摄像头采集的原始图像中包括人脸和背景。人脸通常距离手机较近,背景通常距离手机较远。在一些场景中,仅以人脸为中心进行裁切的方案,可能造成背景的稳定性不佳。因此,需要同时考虑人脸和背景的稳定 性。一些示例中,可以为人脸和背景设置稳定性权重。例如,结合人脸(或人体)在原始图像(或者裁切后图像)中的占比,配比人脸、人体稳定程度和背景稳定程度的权重。例如,当人脸占原始图像的面积达到预设比例(例如60%或60%以上)时,人脸稳定程度的权重更高。即,主要以人脸为中心进行裁切,不考虑或者较少考虑背景的稳定性。当人脸占原始图像的面积未达到预设比例时,背景稳定程度的权重更高。即,将裁切中心向手机的运动方向的反方向移动相应距离后进行裁切,不考虑或者较少考虑人脸的稳定性。在另一些示例中,还可以将每个图像帧中的人脸(或人体)与背景分离开,分别对人脸(或人体)和背景采用不同的防抖方案进行处理,然后将处理后的两个图像进行合成,得到人脸和背景都稳定的前置视频。
一种可能的实现方式中,电子设备配置有惯性测量单元IMU,该方法还包括:根据IMU的数据确定电子设备的运动特征,并根据电子设备的运动特征确定第一方向和第一距离。
一种可能的实现方式中,第二摄像头还配置有光学防抖器件,根据IMU的数据确定电子设备的运动特征,并根据电子设备的运动特征确定第一方向和第一距离,包括:根据IMU的数据确定电子设备的运动特征,并根据电子设备的运动特征和光学防抖器件的数据确定第一方向和第一距离。
由于第一取景器中是以目标对象为中心进行裁切的,一般而言目标对象位于第一取景器的中心区域,第一摄像头可以不配置光学防抖器件。那么,可以为后置摄像头增加OIS器件(例如微云台器件)等。结合OIS器件和预测的手机的运动特征,对第二取景器中的画面进行补偿。
一种可能的实现方式中,第一操作为用户针对特定控件的操作,输入特定语音命令,执行预设隔空手势中的任一项。
第二方面、提供一种电子设备,包括:处理器、存储器、触摸屏、第一摄像头和第二摄像头,存储器、触摸屏、第一摄像头、第二摄像头与处理器耦合,存储器用于存储计算机程序代码,计算机程序代码包括计算机指令,当处理器从存储器中读取计算机指令,以使得电子设备执行如如下操作:接收到第一操作;响应于接收到第一操作,显示第一取景器和第二取景器;其中,第一取景器用于显示第一摄像头采集的第一图像,第二取景器用于显示第二摄像头采集的第二图像;其中,针对第一摄像头采集的第一原始图像,以第一原始图像中目标对象为中心进行裁切得到第一图像;针对第二摄像头采集的第二原始图像,以第二原始图像的中心向第一方向移动第一距离的位置为中心进行裁切得到第二图像,第一方向和第一距离为根据电子设备的运动特征确定的;第一摄像头为前置摄像头,第二摄像头为后置摄像头;或者,第一摄像头为长焦摄像头,第二摄像头为中焦或短焦摄像头;或者,第一摄像头和第二摄像头为同一摄像头,第一图像的变焦倍率大于第二图像的变焦倍率。
一种可能的实现方式中,第一取景器和第二取景器的尺寸相同或不同。
一种可能的实现方式中,针对第一摄像头采集的第一原始图像,以第一原始图像中目标对象为中心进行裁切得到第一图像,包括:在针对第一摄像头采集的第一原始图像,以第一原始图像中目标对象为中心进行裁切,得到第三图像;根据第三图像,以及第一取景框的尺寸,调整第一取景框对应的变焦倍率;根据第一取景框调整后的 变焦倍率和第三图像进行变焦处理,得到第一图像。
一种可能的实现方式中,在针对第一摄像头采集的第一原始图像,以第一原始图像中目标对象为中心进行裁切得到第一图像之后,还执行:在针对第一摄像头采集的第一原始图像,以第一原始图像中目标对象为中心进行裁切得到第一图像;根据第一图像的尺寸,调整第一取景器的尺寸。
一种可能的实现方式中,还执行:根据第一摄像头采集的第一原始图像,自动确定第一原始图像中的目标对象;或者,根据用户的选择操作确定第一原始图像中的目标对象。
一种可能的实现方式中,目标对象包含一个或多个人脸。
一种可能的实现方式中,针对第一摄像头采集的第一原始图像,以第一原始图像中目标对象为中心进行裁切得到第一图像,还包括:采用图像分割技术将第一摄像头采集的第一原始图像分割为第四图像和第五图像,第四图像为目标对象的图像,第五图像为第一原始图像中不包含目标对象的图像;针对第四图像,以第四图像中目标对象为中心进行裁切得到第六图像;针对第五图像,为第五图像的中心向第一方向移动第一距离的位置为中心进行裁切得到第七图像;将第六图像和第七图像进行合并,得到第一图像。
一种可能的实现方式中,电子设备配置有惯性测量单元IMU,电子设备还执行:根据IMU的数据确定电子设备的运动特征,并根据电子设备的运动特征确定第一方向和第一距离。
一种可能的实现方式中,第二摄像头还配置有光学防抖器件,根据IMU的数据确定电子设备的运动特征,并根据电子设备的运动特征确定第一方向和第一距离,包括:根据IMU的数据确定电子设备的运动特征,并根据电子设备的运动特征和光学防抖器件的数据确定第一方向和第一距离。
一种可能的实现方式中,第一操作为用户针对特定控件的操作,输入特定语音命令,执行预设隔空手势中的任一项。
第三方面、提供一种装置,该装置包含在电子设备中,该装置具有实现上述方面及可能的实现方式中任一方法中电子设备行为的功能。该功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。硬件或软件包括至少一个与上述功能相对应的模块或单元。例如,接收模块或单元、显示模块或单元、以及处理模块或单元等。
第四方面、提供一种计算机可读存储介质,包括计算机指令,当计算机指令在终端上运行时,使得终端执行如上述方面及其中任一种可能的实现方式中所述的方法。
第五方面、提供一种电子设备上的图形用户界面,所述电子设备具有显示屏、摄像头、存储器、以及一个或多个处理器,所述一个或多个处理器用于执行存储在所述存储器中的一个或多个计算机程序,所述图形用户界面包括所述电子设备执行如上述方面及其中任一种可能的实现方式中所述的方法时显示的图形用户界面。
第六方面、提供一种计算机程序产品,当计算机程序产品在计算机上运行时,使得计算机执行如上述方面中及其中任一种可能的实现方式中所述的方法。
第七方面、提供一种芯片系统,包括处理器,当处理器执行指令时,处理器执行如上述方面中及其中任一种可能的实现方式中所述的方法。
附图说明
图1为本申请实施例提供的一种电子设备的结构示意图;
图2为本申请实施例提供的一些电子设备的用户界面示意图;
图3A为本申请实施例提供的又一些电子设备的用户界面示意图;
图3B为本申请实施例提供的又一些电子设备的用户界面示意图;
图4为本申请实施例提供的又一些电子设备的用户界面示意图;
图5为本申请实施例提供的一种录制视频的防抖方法的流程示意图;
图6为本申请实施例提供的又一种录制视频的防抖方法的流程示意图;
图7为本申请实施例提供的又一种录制视频的防抖方法的流程示意图;
图8为本申请实施例提供的又一种录制视频的防抖方法的过程示意图;
图9为本申请实施例提供的一种芯片系统的结构示意图。
具体实施方式
在本申请实施例的描述中,除非另有说明,“/”表示或的意思,例如,A/B可以表示A或B;本文中的“和/或”仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征。在本申请实施例的描述中,除非另有说明,“多个”的含义是两个或两个以上。“示例性的”或者“例如”等词用于表示作例子、例证或说明。本申请实施例中被描述为“示例性的”或者“例如”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更优选或更具优势。确切而言,使用“示例性的”或者“例如”等词旨在以具体方式呈现相关概念。
示例性的,本申请实施例提供的防抖方法适用于具有摄像头的电子设备中,该电子设备例如可以为手机、平板电脑、个人计算机(personal computer,PC)、个人数字助理(personal digital assistant,PDA)、相机、上网本、可穿戴电子设备(例如智能手表、智能手环等)、增强现实技术(augmented reality,AR)设备、虚拟现实(virtual reality,VR)设备等,本申请对该电子设备的具体形式不做特殊限制。
图1示出了电子设备100的结构示意图。
电子设备100可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。可以理解的是,本发明实施例示意的结构并不构成对电子设备100的具体限定。在本申请另一些实施例中,电子设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit, GPU),图像信号处理器(image signal processor,ISP),控制器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
在本申请的一些实施例中,处理器110还可以包括传感器集线器(sensor hub),可以在AP休眠的情况下,实现对传感器的实时控制,从而达到降低功耗的功能。例如sensor hub,用于连接低速、长时间工作的传感器,比如陀螺仪、加速度计等,以节省AP的功耗。另外,sensor hub还可以将不同类型传感器的数据进行融合,实现多种传感器数据结合才能实现的功能。
控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
在一些实施例中,处理器110可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。
可以理解的是,图1示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备100的结构限定。在本申请另一些实施例中,电子设备100也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
充电管理模块140用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块140可以通过USB接口130接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块140可以通过电子设备100的无线充电线圈接收无线充电输入。充电管理模块140为电池142充电的同时,还可以通过电源管理模块141为电子设备供电。
电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,显示屏194,摄像头193,和无线通信模块160等供电。电源管理模块141还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。在其他一些实施例中,电源管理模块141也可以设置于处理器110中。在另一些实施例中,电源管理模块141和充电管理模块140也可以设置于同一个器件中。
电子设备100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于 覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块150可以提供应用在电子设备100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器170A,受话器170B等)输出声音信号,或通过显示屏194显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器110,与移动通信模块150或其他功能模块设置在同一个器件中。
无线通信模块160可以提供应用在电子设备100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中,电子设备100的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得电子设备100可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。
电子设备100通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算, 用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像,视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode的,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,电子设备100可以包括1个或N个显示屏194,N为大于1的正整数。
电子设备100可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头193中。
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,电子设备100可以包括1个或N个摄像头193,N为大于1的正整数。在另一些实施例中,电子设备100中至少一个摄像头配置有OIS器件,ISP或其他处理器可使用OIS器件的数据对相应的摄像头采集的图像或视频进行防抖处理。
在本申请实施例中,电子设备100可实现多景录像功能。即,电子设备100可同时录制多个不同画面的视频。例如,双景录制,电子设备100可以同时录制两个画面的视频。比如,电子设备100可以分别使用后置摄像头录制电子设备100背面的景物(即用户对面的景物),使用前置摄像头录制电子设备100正面的人像或景物(即用户侧的人像或景物)等。又比如,电子设备100可以使用相同或不同的摄像头录制不同倍率的画面,一个为变焦倍率较小的全景画面,另一个为变焦倍率较大的特写镜头画面。
若采用现有技术中的电子防抖技术,电子设备100根据OIS的数据以及摄像头193采集到的图像确定电子设备100的运动特征(包括运动方向、运动加速度或运动距离等),并根据电子设备100的运动特征对采集的多个视频画面进行相同单位或者不同比例的补偿。由于不同视频画面对应的拍摄场景或变焦倍率不同,因此不同视频画面对电子设备100相同幅度的移动会呈现不同幅度的抖动。可见,现有技术中的电子防抖技术并不能实现同时保持多个视频画面的稳定。
比如,电子设备100同时使用后置摄像头拍摄视频画面1,使用前置摄像头拍摄 视频画面2。视频画面1中的景物通常距离电子设备100较远,而视频画面2中的人像(通常为拍摄者)距离电子设备100较近。假设电子设备100向某个方向移动1毫米,相应的,视频画面1向该方向移动一个像素的距离,而视频画面2则向相反方向移动0.2个像素的距离。那么,若对视频画面1和视频画面2进行相同的补偿,则在保证了视频画面1的稳定时,并不保证视频画面2的稳定。
又比如,电子设备100同时使用后置摄像头拍摄视频画面1和视频画面2。其中,视频画面1的变焦倍率为“1×”,视频画面2的变焦倍率为“5×”。假设电子设备100向某个方向移动1毫米,相应的,视频画面1向该方向移动一个像素的距离,而视频画面2则向该方向移动五个像素的距离。那么,若对视频画面1和视频画面2进行相同的补偿,则在保证了视频画面1的稳定时,并不保证视频画面2的稳定。
为此,本申请实施例提供了一种防抖的方法,针对电子设备100采集的不同的视频画面采用不同的防抖方案。示例性的,对拍摄景物距离电子设备较远或者变焦倍率较小的视频画面采用根据电子设备的运动特征进行补偿的防抖方法,对拍摄人像距离电子设备较近或者变焦倍率较大的视频画面采用以目标对象为中心的裁切方案,进而保证该视频画面的稳定。在另一些实施例中,对拍摄人像距离电子设备较近或者变焦倍率较大的视频画面采用以目标对象为中心的裁切方案,也可以结合电子设备100的运动特征进行补偿。下文将对该防抖方法进行详细说明。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备100在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
视频编解码器用于对数字视频压缩或解压缩。电子设备100可以支持一种或多种视频编解码器。这样,电子设备100可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备100的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备100的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储电子设备100使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。处理器110通过运行存储在内部存储器121的指令,和/或存储在设置于处理器中的存储器的指令,执行电子设备100的各种功能应用以及数据处理。
电子设备100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C, 耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。在一些实施例中,音频模块170可以设置于处理器110中,或将音频模块170的部分功能模块设置于处理器110中。扬声器170A,也称“喇叭”,用于将音频电信号转换为声音信号。电子设备100可以通过扬声器170A收听音乐,或收听免提通话。受话器170B,也称“听筒”,用于将音频电信号转换成声音信号。当电子设备100接听电话或语音信息时,可以通过将受话器170B靠近人耳接听语音。麦克风170C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风170C发声,将声音信号输入到麦克风170C。电子设备100可以设置至少一个麦克风170C。在另一些实施例中,电子设备100可以设置两个麦克风170C,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,电子设备100还可以设置三个,四个或更多麦克风170C,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。耳机接口170D用于连接有线耳机。耳机接口170D可以是USB接口130,也可以是3.5mm的开放移动电子设备平台(open mobile terminal platform,OMTP)标准接口,美国蜂窝电信工业协会(cellular telecommunications industry association of the USA,CTIA)标准接口。
在本申请的一些实施例中,电子设备100还包括IMU,可用于测量电子设备100的三轴姿态角及加速度等的装置。而后,ISP或其他处理器可使用IMU的数据提取电子设备100的运动特征(例如,移动方向、移动速度或移动距离等)。进一步,可根据提取的电子设备100的运动特征对录制的视频进行防抖处理。
例如,IMU包括陀螺仪传感器180A和加速度传感器180B。
其中,陀螺仪传感器180A可以用于确定电子设备100的运动姿态。在一些实施例中,可以通过陀螺仪传感器180A确定电子设备100围绕三个轴(即,x,y和z轴)的角速度。陀螺仪传感器180A可以用于拍摄防抖。示例性的,当按下快门,陀螺仪传感器180A检测电子设备100抖动的角度,根据角度计算出镜头模组需要补偿的距离,让镜头通过反向运动抵消电子设备100的抖动,实现防抖。陀螺仪传感器180A还可以用于导航,体感游戏场景。
加速度传感器180B可检测电子设备100在各个方向上(一般为三轴)加速度的大小。当电子设备100静止时可检测出重力的大小及方向。还可以用于识别电子设备姿态,应用于横竖屏切换,计步器等应用。
可选的,电子设备100还可以包括磁传感器180C,例如具体为三轴磁力计,可以与三轴加速度计结合实现指南针的功能。电子设备100可以根据磁传感器180C更准确地确定电子设备100的运动特征。可选的,电子设备100还可以包括气压传感器180D、触摸传感器、指南针、GPS定位模块等。
上述按键190包括开机键,音量键等。按键190可以是机械按键。也可以是触摸式按键。电子设备100可以接收按键输入,产生与电子设备100的用户设置以及功能控制有关的键信号输入。马达191可以产生振动提示。马达191可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触 摸操作,可以对应不同的振动反馈效果。作用于显示屏194不同区域的触摸操作,马达191也可对应不同的振动反馈效果。不同的应用场景(例如:时间提醒,接收信息,闹钟,游戏等)也可以对应不同的振动反馈效果。触摸振动反馈效果还可以支持自定义。指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。SIM卡接口195用于连接SIM卡。SIM卡可以通过插入SIM卡接口195,或从SIM卡接口195拔出,实现和电子设备100的接触和分离。电子设备100可以支持1个或N个SIM卡接口,N为大于1的正整数。SIM卡接口195可以支持Nano SIM卡,Micro SIM卡,SIM卡等。同一个SIM卡接口195可以同时插入多张卡。所述多张卡的类型可以相同,也可以不同。SIM卡接口195也可以兼容不同类型的SIM卡。SIM卡接口195也可以兼容外部存储卡。电子设备100通过SIM卡和网络交互,实现通话以及数据通信等功能。在一些实施例中,电子设备100采用eSIM,即:嵌入式SIM卡。eSIM卡可以嵌在电子设备100中,不能和电子设备100分离。
本文以电子设备100是手机,且以双景录制为例,结合附图对本申请实施例提供的技术方案进行详细说明。
示例性的,用户开启相机应用,该相机应用可以为手机的原生相机应用,也可以是第三方开发的多功能相机应用等。响应于接收到用户开启相机应用的操作,手机可以显示如图2中(1)所示的拍摄界面200。相机应用可以默认开启“拍照”功能,拍摄界面200包括取景框201,以及“大光圈”、“人像”、“拍照”、“录像”、“更多”等功能控件。在一个示例中,用户可以通过操作“更多”控件202,开启如图2中(2)所示的功能选项菜单203。其中,功能选项菜单203中包括“双景录像”功能控件204。用户可以通过操作“双景录像”功能控件204开启双景录像功能。在另一个示例中,用户也可以通过操作“录像”功能控件205进入如图2中(3)所示的“录像”的录制界面206,在“录像”的录制界面206中设置有“双景录像”功能控件207。进一步,用户通过“双景录像”功能控件207开启双景录像功能。在又一示例中,手机在检测到用户开启预设应用或者处于预设录制场景时,也可以自动开启双景录像功能。例如,当用户开启直播应用、视频博客(video blog,Vlog)应用、录制演唱会或体育赛事等、在前置摄像头或后置摄像头之间切换的次数达到预设次数、在中焦摄像头或长焦摄像头之间切换次数达到预设次数等,手机自动开启双景录像功能或提示用户开启双景录像功能。本申请实施例对开启双景录像功能的具体方式不做限定。
在开启双景录像功能后,手机显示如图3A中(1)所示的双景录制界面300。该双景录制界面300中包括取景器301、取景器302和录制控件303。其中,取景器301用于显示后置摄像头采集的画面,为手机屏幕背面对应的景物,通常距离较远。取景器302用于显示前置摄像头采集的画面,为距离较近的拍摄者。一方面,手机根据IMU数据预测手机的运动特征,并根据预测的手机的运动特征对取景器301中的显示画面进行补偿,以保持取景器301中画面的稳定性。具体来说,在没有采用防抖方案时,手机会根据变焦倍率确定裁剪区域的大小,并以摄像头采集的原始图像(即全尺寸图像或者对全尺寸图像进行小幅度裁切后的图像)的中心为裁切后图像的中心进行裁切。而后手机对裁剪后的图像进行数字变焦等处理得到预览或拍摄的图像。也就是说,此时裁剪的为原始图像中心区域的图像。在采用防抖方案后,手机在根据变焦倍率确定 裁剪区域的大小后,会以原始图像的中心偏移一定距离的位置为中心进行裁切。偏移的方向与预测的手机的运动方向相反,偏移的距离与预测的手机运动距离正相关。另一方面,手机以人脸为中心,对前置摄像头采集的原始图像进行裁剪,得到取景器302中画面,以保证取景器302中画面的稳定性。具体来说,手机会根据变焦倍率确定裁剪区域的大小,并以摄像头采集的原始图像的人脸图像的中心为裁切后图像的中心进行裁切。而后手机对裁剪后的图像进行数字变焦等处理得到预览或拍摄的图像。可选的,若前置摄像头中采集的图像中包含多个人脸时,可以将多个人脸所覆盖的区域作为一个整体。以该整体的中心作为裁剪后的图像的中心,对前置摄像头采集的图像进行裁剪,以及数字变焦等处理,得到取景器302中显示的画面。
可见,由于取景器302中显示画面是以人脸为中心对采集的原始图像进行裁剪得到的,即人脸是位于取景器301画面的中心位置,相对稳定。这样,用户可以将注意力集中在取景器301中录制的景物,跟踪录制想要拍摄的景物(例如划水的白天鹅),提升录制画面的质量,以及用户录制的舒适度。另外,在根据预测的手机的运动特征对取景器301中的显示画面进行补偿时,无需考虑取景器302中画面的稳定性,因此可以针对取景器301中的画面进行更高精度的补偿,提升取景器301中的画面稳定性。例如,可以为后置摄像头增加OIS器件(例如微云台器件)等。结合OIS器件和预测的手机的运动特征,对取景器301中的画面进行补偿。
需要注意的是,手机可以在预览时就开始采用不同的防抖方案对不同取景框中显示的画面进行处理,也可以在录制时才开启采用不同的防抖方案对不同取景框中显示的画面进行处理,还可以在录制完成时或保存视频时,才开启采用不同的防抖方案对不同取景框中显示的画面进行处理,本申请实施例对此不做限定。本文是以手机在预览时就开启采用不同的防抖方案对不同取景框中显示的画面进行处理为例进行描述。
可选的,取景器301和取景器302的尺寸大小可以相同也可以不同。例如,当手机处于竖屏状态时,手机显示如图3A中(1)所示的双景录制界面300,显示屏被划分为上下两个区域,每一个区域对应一个取景器。例如,当手机处于横屏状态时,手机显示如图3A中(2)所示的双景录制界面306,显示屏被划分为左右两个区域,每一个区域对应一个取景器。又例如,手机也可显示如图3A中(3)所示的双景录制界面307,其中尺寸小的取景器覆盖在尺寸大的取景器上,且位于尺寸大的取景器的边缘处。可选的,用户可以调整取景器301和取景器302的尺寸大小以及二者的相对位置。本申请实施例对此不做限定。
在一些实施例中,当手机以人脸为中心对前置摄像头采集的图像进行裁剪后,也可以根据取景器302的尺寸大小,适当调整取景器302的变焦倍率。例如,如图3B中(1)所示,取景器302的变焦倍率为变焦倍率1(例如“1×”)。当手机对前置摄像头采集的图像进行裁剪后,可以自动将取景器302的变焦倍率增大为变焦倍率2(例如“2×”),如图3B中(2)所示双景录制界面308。可见,将取景器302的变焦倍率调大,可以录制到用户更为清楚的面部表情。
在另一些实施例中,当手机以人脸为中心对前置摄像头采集的图像进行裁剪后,也可以适当调整取景器302的尺寸大小。例如,如图3B中(3)所示双景录制界面309,取景器302的尺寸变小,取景器301的尺寸变大。又例如,如图3B中(4)所示双景 录制界面310,取景器301的尺寸变大,取景器302的尺寸变小,且移动到取景器301的边缘处。可见,将取景器301的尺寸调大,便于更加录制到更为清楚的画面。
在又一些实施例中,用户可以手动调节取景器301的变焦倍率。例如,用户通过双指向屏幕外侧滑动增大变焦倍率,或者通过双指捏合的方式减小变焦倍率,或者操作特定控件来改变变焦倍率。例如,双景录制界面300还包括变焦倍率指示控件305。用户可以通过操作变焦倍率指示控件305,调节取景器301中画面的变焦倍率。相应的,手机可以根据变化后的变焦倍率,预测的手机运动特征、以及OIS器件的数据,对后置摄像头采集的原始图像进行补偿,得到变焦倍率变化后的画面,并维持画面稳定。一般,画面的变焦倍率越大,手机根据预测的手机运动特征对画面的补偿越大。
可选的,当手机配置有多个后置摄像头,且每个后置摄像头对应不同的焦段时,当取景器301中画面的变焦倍率对应不同焦段时,手机可以切换不同的后置摄像头采集图像。例如,当手机开启双景录制功能后,手机先默认使用中焦摄像头(即主摄像头)采集图像。使用中焦摄像头拍摄的图像可以对应一个区间的变焦倍率,例如:“1×”至“5×”。当用户将取景器301的变焦倍率增大到“5×”及“5×”以上时,手机切换到长焦摄像头采集图像。当用户将取景器301的变焦倍率减小到“1×”以下时,手机切换到短焦(即广角)摄像头采集图像。
在又一些实施例中,用户也可以调节取景器302中画面的变焦倍率。具体调节方式与取景器301中画面的变焦倍率的调节方式相同。但需要注意的是,取景器302的变焦倍率发生变化后,手机仍然是以人脸为中心裁切前置摄像头采集的原始图像,并按照变化后的变焦倍率进行数字变焦处理,得到取景器302中的画面。可选的,前置摄像头的数量也可以为两个或两个以上,相关内容可参见对多个后置摄像头的描述,这里不再赘述。
在又一些实施例中,用户可以切换取景器302中使用的摄像头,例如从前置摄像头切换为后置摄像头,或者从后置摄像头切回前置摄像头。例如,取景器302还包括摄像头切换控件209。例如,响应于用户在图4中(1)中所示的双景录制界面300上操作摄像头切换控件304,手机显示如图4中(2)所示的双景录制界面400。该双景录制界面400中,取景器302中显示画面为后置摄像头拍摄的画面。也就是说,此时取景器301和取景器302中显示画面均为后置摄像头采集的图像。
在一个示例中,当取景器302刚切换到后置摄像头时,可以默认使用特定的后置摄像头采集图像,例如默认使用主摄像头采集图像,或者使用长焦摄像头采集图像,或者使用与取景器301使用的摄像头不同。后续,用户可以调节取景器302的变焦倍率,手机再根据变化后的变焦倍率切换相对应的后置摄像头采集图像。
在另一个示例中,当取景器302刚切换到后置摄像头时,默认使用特定的后置摄像头(例如长焦摄像头)采集图像。也就是说,默认取景器302中的画面为特写镜头的画面。取景器302的变焦倍率默认为特定变焦倍率(例如“5×”或“5×”以上)。
与此同时,取景器301保持之前的变焦倍率,后续用户可以对取景器301的变焦倍率进行调整。取景器301再根据变化后的变焦倍率切换相应的后置摄像头采集图像。或者,当取景器302刚切换到后置摄像头时,取景器301自动切换到特定的后置摄像头(例如中焦摄像头)采集图像。也就是说,取景器302中的画面为全景图像。取景 器302的变焦倍率默认为特定变焦倍率(例如“1×”)。可选的,取景器301自动切换到特定的后置摄像头后,且不能切换到其他的后置摄像头。
当然,也可以默认取景器302中的画面为全景的画面,取景器301中的画面为特写镜头的画面,本申请实施例对比不做限定。以下,以取景器302中显示特写镜头的画面,取景器301中显示全景画面为例进行说明。
例如,取景器302中显示的画面对应变焦倍率3,使用摄像头A采集图像。取景器301中显示的画面对应变焦倍率4,使用摄像头B采集图像。其中摄像头A和摄像头B为相同或不同的摄像头,且变焦倍率3大于变焦倍率4。
那么,手机可以根据摄像头B采集的图像自动识别特写镜头的目标对象,或者接收用户执行的预设操作,确定特写镜头的目标对象。其中预设操作例如为双击、长按、框选、语音输入等。而后,手机对摄像头A采集的图像,以目标对象为中心进行裁剪以及数字变焦等,得到取景器302中显示的画面。若手机未识别出特写镜头的目标对象,或者用户未指定特定镜头的目标对象,手机继续裁剪摄像头A采集的图像的中心区域以及数字变焦等,得到取景器302中显示的画面。如图4中(3)所示,当手抖或者目标对象的位移造成用户想录制的目标对象不位于摄像头A采集图像的中心区域时,目标对象极可能显示不全,由此给用户强烈的抖动感。
若手机自动识别出特写镜头的目标对象,或者接收用户选择的目标对象后,手机可以对摄像头A显示的图像,以目标对象为中心进行裁剪以及数字变焦,得到取景框302中显示的画面。这样,即使用户抖动或者目标对象发生较小位移时,目标对象仍位于取景框302显示画面的中心,达到画面稳定的效果。
例如,如图4中(2)所示,在双击录制界面400上还可以包括提示信息401,提示用户选择特写镜头的目标对象。响应于用户双击“白天鹅”的目标对象,手机显示如图4中(4)中所述录制界面403。可选的,取景器301的全景画面中可以显示标记框404,用于标记用户选中的目标对象。可选的,用户可以通过执行其他预设操作,取消选择的目标对象或者更换新的目标对象。当然,用户也可以选择多个目标对象,那么手机将选中的多个目标对象作为一个整体,以该整体为裁剪后图像的中心对摄像头A采集的图像进行裁剪以及数字变焦处理,得到取景器302显示的画面。
下面,以手机分别采用不同的防抖方案对前置摄像头采集的图像,以及对后置摄像头采集的图像进行处理的过程进行详细说明。
如图5所示,为本申请实施例提供的一种双景录制的方法的流程示意图。其中,对后置摄像头采集图像的处理方法包括:步骤1至步骤3。
步骤1、手机通过配置的IMU器件获取IMU数据。可选的,若后置摄像头配置有OIS器件,获取OSI数据。
其中,IMU器件包括三轴加速度计和三轴陀螺仪。可选的,IMU器件还可以包括磁力计、气压计、指南针等。IMU数据可用于计算手机的运动轨迹(包括运动方向和运动距离等)以及手机的位姿变化。
在一些示例中,若手机后置摄像头还配置有OIS器件,也要获取OIS数据,便于在计算手机的运动轨迹时,需要考虑OIS器件已补偿的部分,从而得到准确的手机的运动轨迹和位姿变化。
步骤2、将IMU数据和OIS数据输入到预先训练的模型1中,预测出手机的运动轨迹。
其中,模型1可用于清洗IMU数据和OIS数据中的异常数据,计算手机当前的运动轨迹和位姿变化,平滑运动轨迹和位姿变化等处理。并且,根据之前的运动轨迹和位姿变化对下一时间段手机的运动轨迹和位姿变化等进行预测。
步骤3、根据预测的手机的运动轨迹对后置摄像头采集的图像帧(简称为后置图像帧)进行平滑、补偿以及裁切等处理,得到稳定的后置视频。
由于手部的抖动或者运镜时手部的运动,滚动曝光机制等原因,后置摄像头采集的图像帧存在抖动、变形、扭曲等现象,使得原始的图像帧看上去有明显的抖动。本申请实施例可以根据之前预测的手机的运动轨迹,对后置摄像头采集的原始图像帧进行反方向的补偿。
具体来说,在没有采用防抖方案时,手机会根据变焦倍率确定裁剪区域的大小,并以摄像头采集的原始图像的中心为裁切后图像的中心进行裁切。而后手机对裁剪后的图像进行数字变焦等处理得到预览或拍摄的图像。也就是说,此时裁剪的为全尺寸中心区域的图像。在采用防抖方案后,手机在根据变焦倍率确定裁剪区域的大小后,会以原始图像的中心偏移一定距离的位置为中心进行裁切。偏移的方向与预测的手机的运动方向相反,偏移的距离与预测的手机运动距离正相关。并且,在补偿过程中,还可以考虑裁切后的连续的多个图像帧平滑过渡,适当调整补偿的距离等。手机还可以对变形的区域进行修正,对发生扭曲的区域进行旋转等图像处理。
补偿方案中涉及相关参数(例如补偿方向、补偿距离等)可以传递给模型1,便于模型1更准确预计后续的运动轨迹。
对前置摄像头采集图像的处理方法包括:步骤1、步骤4至步骤6。
步骤4、手机对前置摄像头获取的图像帧(简称为前置图像帧)进行人脸位置识别。
可将前置摄像头获取的图像帧输入到人脸位置识别模型中,识别出人脸的位置信息。若前置摄像头获取的图像帧中包括多个人脸,可以识别多个人脸的位置信息。
进一步的,也可以识别出的人脸的偏转角度、人脸朝向等信息。
步骤5、手机将识别出的人脸信息(包括人脸位置信息、偏转角度、朝向等),以及从IMU器件获取的IMU数据输入到预先训练的模型2中,预测出人脸的运动轨迹、手机的运动轨迹。
其中,模型2可用于清洗IMU数据和人脸位置信息中的异常数据,计算人脸的运动轨迹、手机当前的运动轨迹及位姿变化,平滑人脸的运动轨迹、手机的运动轨迹以及位姿变化等处理。并且,根据之前的人脸的运动轨迹,以及手机的运动轨迹和位姿变化对下一时间段人脸的运动轨迹,手机的运动轨迹和位姿变化等进行预测。
步骤6、根据预测的人脸的运动轨迹、手机的运动轨迹对前置摄像头采集的图像帧(简称为前置图像帧)进行水平、平滑、补偿以及裁切等处理,得到稳定的前置视频。
在一些实施例中,手机可以根据预测的人脸的运动轨迹,以人脸为中心进行裁切前置图像帧,达到稳定画面的效果。当然,在裁切时也可以考虑裁切后连续多个图像 帧平滑过渡,可对裁切时的位置进行微调等处理。手机还可以对变形的区域进行修正,对发生扭曲的区域进行旋转等图像处理。
在另一些实施例中,手机可以根据预测的人脸的运动轨迹、以及预测的手机的运动轨迹对前置图像帧进行裁切处理。这是因为,前置摄像头采集的图像中包括人脸和背景。其中人脸距离手机较近,背景通常距离手机较远。在一些场景中,仅以人脸为中心进行裁切的方案,可能造成背景的稳定性不佳。因此,需要同时考虑人脸和背景的稳定性。一些示例中,可以为人脸和背景设置稳定性权重。例如,结合人脸(或人体)在原始图像(或者裁切后图像)中的占比,配比人脸、人体稳定程度和背景稳定程度的权重。例如,当人脸占原始图像的面积达到预设比例(例如60%或60%以上)时,人脸稳定程度的权重更高。即,主要以人脸为中心进行裁切,不考虑或者较少考虑背景的稳定性。当人脸占原始图像的面积未达到预设比例时,背景稳定程度的权重更高。即,将裁切中心向手机的运动方向的反方向移动相应距离后进行裁切,不考虑或者较少考虑人脸的稳定性。在另一些示例中,还可以将每个图像帧中的人脸(或人体)与背景分离开,分别对人脸(或人体)和背景采用不同的防抖方案进行处理,然后将处理后的两个图像进行合成,得到人脸和背景都稳定的前置视频。
可选的,考虑到用户在录制时,并不一定能精确保证手机水平放置,即前置摄像头采集的图像中,背景(例如建筑物)可能出现倾斜。对此,手机还可以对背景进行旋转,使得前置视频中背景处于水平位置。
补偿方案中涉及相关参数(例如补偿方向、补偿距离等)可以传递给模型2,便于模型2更准确预计后续的人脸运动轨迹以及手机运动轨迹。
在步骤3和步骤6之后,手机根据前置视频和后置视频在显示屏中的占比进行拼接或叠加,得到显示屏最终显示的双景视频。
例如,若前置视频和后置视频在显示屏中的占比为1:1,手机可以将处理后的前置视频和后置视频调整到等尺寸的画幅并进行拼接在一起得到双景视频(或预览图像),如图3A中(1)所示双景录制界面300,或者,如图3A中(2)所示的双景录制界面306。
又例如,若前置视频和后置视频在显示屏中的占比为1:8,且前置视频覆盖在后置视频上。那么,手机可以将处理后的前置视频和后置视频的画幅尺寸调整到1:8的比例,并将前置视频叠加在后置视频上,得到双景视频(或预览图像),如图3A中(3)所示的双景录制界面307。
如图6所示,为本申请实施例提供的一种视频的防抖方法的流程示意图,该流程具体包括:
S601、手机接收用户开启双景录制功能的操作。
其中,用户开启双景录制功能的操作,例如为用户在相机应用中点击双景录制功能的开关的操作,或者执行预定义操作,或者输入语音命令等。
S602、响应于开启双景录制功能的操作,手机显示第一取景器和第二取景器。
S603、第一取景器显示第一摄像头采集的第一图像,第二取景器显示第二摄像头采集的第二图像。其中,第一图像的变焦倍率大于第二图像的变焦倍率。并且,第一取景器中的目标对象位于第一图像的中心区域;第二取景器中的目标对象位于或不位 于第二图像的中心区域。
其中,第一摄像头为长焦摄像头,第二摄像头为中焦或短焦摄像头;或者,第一摄像头和第二摄像头为同一摄像头,但第一图像的变焦倍率大于第二图像的变焦倍率。需要说明的是,在该实施例中,第一摄像头和第二摄像头为手机同一侧的摄像头。
一般,用户会对准目标对象进行拍摄,且本实例中第一摄像头和第二摄像头为手机同一侧的摄像头,故第一取景器和第二取景器中均包含目标对象。目标对象可以为手机根据采集的原始图像自动识别的,也可以是手机根据用户选择操作确定的。目标对象可以包括一个或多个物体。例如,目标对象包括一个人脸(或人体)或者多个人脸(或人体)。
具体的,手机针对第一摄像头采集的第一原始图像,采用以第一原始图像中目标对象为中心进行裁切得到第一图像。针对第二摄像头采集的第二原始图像,采用根据手机的运动特征进行补偿的方式来稳定画面。即,以第二原始图像的中心向第一方向移动第一距离的位置为中心进行裁切得到第二图像。其中,第一方向和第一距离为根据所述电子设备的运动特征确定的。
可以理解的,由于第二取景器显示的图像,是根据手机的运动特征对采集的原始图像进行补偿得到的。在一些场景中,第二取景器中的目标对象可能不位于图像的中心区域。例如,目标对象本身不位于第二取景器的中心区域,也就是说,并不是因为手机的抖动而造成目标对象偏离第二取景器的中心区域。举个例子,原本位于图像中心区域的目标对象本身发生位移,移出第二取景器的中心区域。那么,在这种情况下,即便根据手机的运动特征进行补偿后,第二取景器中的目标对象仍不位于第二图像的中心区域。但是,由于第一取景器的第一图像是以目标对象为中心进行裁切的,那么目标对象会一直位于第一图像的中心区域。例如,如图4中(4)所示的界面403。其中,取景器301为第二取景器,取景器302为第一取景器。
需要说明的是,上述目标对象位于图像(第一图像或第二图像)的中心区域,包括目标对象的中心与图像的中心之间的距离小于或等于预设阈值(例如两个像素点的距离)。其中,目标对象的中心例如为目标对象在图像中占据的矩形框的几何中心。
FIG. 7 is a schematic flowchart of another video image stabilization method according to an embodiment of this application. The procedure specifically includes the following steps.
S701: The mobile phone receives an operation of a user to enable the dual-view recording function.
The operation of enabling the dual-view recording function is, for example, the user tapping a switch for the dual-view recording function in the camera application, performing a predefined operation, or inputting a voice command.
S702: In response to the operation of enabling the dual-view recording function, the mobile phone displays a first viewfinder and a second viewfinder.
S703: The first viewfinder displays an image captured by the front camera, with the face or portrait in the first viewfinder located in the central region of the image; the second viewfinder displays an image captured by the rear camera.
Exemplarily, the face being located in the central region of the image includes the case in which the distance between the center of the face or portrait and the center of the image is less than or equal to a preset threshold (for example, a distance of two pixels). The center of the face or portrait is, for example, the geometric center of the rectangular box occupied by the face or portrait in the image.
Similarly, for the original image captured by the front camera, the image displayed in the first viewfinder is obtained by cropping centered on the face or portrait, keeping the face or portrait in the central region of the first viewfinder at all times and thereby stabilizing the face or portrait. For the original image captured by the rear camera, picture compensation may be performed in combination with the motion characteristics of the mobile phone to stabilize the picture.
The following describes, with reference to FIG. 8, how the method provided in the embodiments of this application stabilizes the picture.
In the prior art, as shown in (1) of FIG. 8, the image 801 is the original image captured by the rear camera. If compensation is performed according to the motion characteristics of the mobile phone, that is, if an image of the corresponding size is cropped after the center of the image 801 is moved in the first direction by the first distance, the image 803 is obtained. The image 802 is the original image captured by the front camera. Compensating according to the motion characteristics of the mobile phone, that is, cropping an image of the corresponding size after moving the center of the image 802 in the first direction by the first distance, yields the image 804. The image 803 and the image 804 are then merged to obtain the image 805. The merging may further include other image processing, such as scaling down the image 803 and the image 804 proportionally, which is not limited here.
In this application, as shown in (2) of FIG. 8, the image 801 is the original image captured by the rear camera. Compensating according to the motion characteristics of the mobile phone yields the image 803. The image 802 is the original image captured by the front camera. Cropping the image 802 centered on the face yields the image 806. The image 803 and the image 806 are then merged to obtain the image 807. The merging may further include other image processing, such as scaling down the image 803 and the image 806 proportionally, which is not limited here.
Comparing the image 805 with the image 807 shows that the method described in (2) of FIG. 8 keeps the face in the picture shot by the front camera at the center of the image throughout, achieving face stabilization.
An embodiment of this application further provides a chip system. As shown in FIG. 9, the chip system includes at least one processor 1101 and at least one interface circuit 1102. The processor 1101 and the interface circuit 1102 may be interconnected by wires. For example, the interface circuit 1102 may be configured to receive signals from another apparatus (for example, the memory of the electronic device 100), or to send signals to another apparatus (for example, the processor 1101). Exemplarily, the interface circuit 1102 may read instructions stored in the memory and send the instructions to the processor 1101. When the instructions are executed by the processor 1101, the electronic device may be caused to perform the steps performed by the electronic device 100 (for example, the mobile phone) in the foregoing embodiments. Of course, the chip system may further include other discrete devices, which is not specifically limited in the embodiments of this application.
An embodiment of this application further provides an apparatus. The apparatus is included in an electronic device and has the function of implementing the behavior of the electronic device in any one of the methods in the foregoing embodiments. This function may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes at least one module or unit corresponding to the foregoing function, for example, a detection module or unit, a display module or unit, a determination module or unit, and a calculation module or unit.
An embodiment of this application further provides a computer storage medium including computer instructions. When the computer instructions are run on an electronic device, the electronic device is caused to perform any one of the methods in the foregoing embodiments.
An embodiment of this application further provides a computer program product. When the computer program product is run on a computer, the computer is caused to perform any one of the methods in the foregoing embodiments.
An embodiment of this application further provides a graphical user interface on an electronic device. The electronic device has a display, a camera, a memory, and one or more processors configured to execute one or more computer programs stored in the memory. The graphical user interface includes the graphical user interface displayed when the electronic device performs any one of the methods in the foregoing embodiments.
It can be understood that, to implement the foregoing functions, the terminal and the like include corresponding hardware structures and/or software modules for performing the respective functions. Those skilled in the art should readily appreciate that, in combination with the units and algorithm steps of the examples described in the embodiments disclosed herein, the embodiments of this application can be implemented by hardware or by a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. Skilled persons may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the embodiments of the present invention.
In the embodiments of this application, the terminal and the like may be divided into functional modules according to the foregoing method examples. For example, each functional module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division of modules in the embodiments of the present invention is illustrative and is merely a division by logical function; other division manners may be used in actual implementation.
From the foregoing description of the implementations, those skilled in the art can clearly understand that, for convenience and brevity of description, only the division into the foregoing functional modules is used as an example. In practical applications, the foregoing functions may be allocated to different functional modules as needed; that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. For the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
The functional units in the embodiments of this application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of this application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of this application. The foregoing storage medium includes any medium capable of storing program code, such as a flash memory, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific implementations of this application, but the protection scope of this application is not limited thereto. Any variation or replacement within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (22)

  1. A video image stabilization processing method, wherein the method is applied to an electronic device comprising a camera, the electronic device comprises a first camera and a second camera, and the method comprises:
    receiving a first operation; and
    in response to receiving the first operation, displaying a first viewfinder and a second viewfinder, wherein the first viewfinder is configured to display a first image captured by the first camera, and the second viewfinder is configured to display a second image captured by the second camera;
    wherein, for a first original image captured by the first camera, the first image is obtained by cropping centered on a target object in the first original image;
    for a second original image captured by the second camera, the second image is obtained by cropping centered on a position obtained by moving a center of the second original image in a first direction by a first distance, the first direction and the first distance being determined according to motion characteristics of the electronic device; and
    the first camera is a front camera and the second camera is a rear camera; or the first camera is a telephoto camera and the second camera is a mid-focus or short-focus camera; or the first camera and the second camera are the same camera, and a zoom ratio of the first image is greater than a zoom ratio of the second image.
  2. The method according to claim 1, wherein the first viewfinder and the second viewfinder are the same size or different sizes.
  3. The method according to claim 1 or 2, wherein the obtaining the first image by cropping the first original image captured by the first camera centered on the target object in the first original image comprises:
    cropping the first original image captured by the first camera centered on the target object in the first original image to obtain a third image;
    adjusting a zoom ratio corresponding to the first viewfinder frame according to the third image and a size of the first viewfinder frame; and
    performing zoom processing on the third image according to the adjusted zoom ratio of the first viewfinder frame to obtain the first image.
  4. The method according to claim 1 or 2, wherein after the obtaining the first image by cropping the first original image captured by the first camera centered on the target object in the first original image, the method comprises:
    cropping the first original image captured by the first camera centered on the target object in the first original image to obtain the first image; and
    adjusting a size of the first viewfinder according to a size of the first image.
  5. The method according to any one of claims 1 to 4, wherein the method further comprises:
    automatically determining the target object in the first original image according to the first original image captured by the first camera; or determining the target object in the first original image according to a selection operation by a user.
  6. The method according to any one of claims 1 to 5, wherein the target object comprises one or more faces.
  7. The method according to claim 6, wherein the obtaining the first image by cropping the first original image captured by the first camera centered on the target object in the first original image further comprises:
    segmenting, by using an image segmentation technique, the first original image captured by the first camera into a fourth image and a fifth image, wherein the fourth image is an image of the target object, and the fifth image is an image of the first original image that does not contain the target object;
    cropping the fourth image centered on the target object in the fourth image to obtain a sixth image, and cropping the fifth image centered on a position obtained by moving a center of the fifth image in the first direction by the first distance to obtain a seventh image; and
    merging the sixth image and the seventh image to obtain the first image.
  8. The method according to any one of claims 1 to 7, wherein the electronic device is provided with an inertial measurement unit (IMU), and the method further comprises:
    determining the motion characteristics of the electronic device according to data of the IMU, and determining the first direction and the first distance according to the motion characteristics of the electronic device.
  9. The method according to claim 8, wherein the second camera is further provided with an optical image stabilization device, and the determining the motion characteristics of the electronic device according to the data of the IMU and determining the first direction and the first distance according to the motion characteristics of the electronic device comprises:
    determining the motion characteristics of the electronic device according to the data of the IMU, and determining the first direction and the first distance according to the motion characteristics of the electronic device and data of the optical image stabilization device.
  10. The method according to any one of claims 1 to 9, wherein the first operation is any one of: an operation by a user on a specific control, inputting a specific voice command, or performing a preset air gesture.
  11. An electronic device, comprising: a processor, a memory, a touchscreen, a first camera, and a second camera, wherein the memory, the touchscreen, the first camera, and the second camera are coupled to the processor, the memory is configured to store computer program code, the computer program code comprises computer instructions, and when the processor reads the computer instructions from the memory, the electronic device is caused to perform the following operations:
    receiving a first operation; and
    in response to receiving the first operation, displaying a first viewfinder and a second viewfinder, wherein the first viewfinder is configured to display a first image captured by the first camera, and the second viewfinder is configured to display a second image captured by the second camera;
    wherein, for a first original image captured by the first camera, the first image is obtained by cropping centered on a target object in the first original image;
    for a second original image captured by the second camera, the second image is obtained by cropping centered on a position obtained by moving a center of the second original image in a first direction by a first distance, the first direction and the first distance being determined according to motion characteristics of the electronic device; and
    the first camera is a front camera and the second camera is a rear camera; or the first camera is a telephoto camera and the second camera is a mid-focus or short-focus camera; or the first camera and the second camera are the same camera, and a zoom ratio of the first image is greater than a zoom ratio of the second image.
  12. The electronic device according to claim 11, wherein the first viewfinder and the second viewfinder are the same size or different sizes.
  13. The electronic device according to claim 11 or 12, wherein the obtaining the first image by cropping the first original image captured by the first camera centered on the target object in the first original image comprises:
    cropping the first original image captured by the first camera centered on the target object in the first original image to obtain a third image;
    adjusting a zoom ratio corresponding to the first viewfinder frame according to the third image and a size of the first viewfinder frame; and
    performing zoom processing on the third image according to the adjusted zoom ratio of the first viewfinder frame to obtain the first image.
  14. The electronic device according to claim 11 or 12, wherein after the obtaining the first image by cropping the first original image captured by the first camera centered on the target object in the first original image, the following is further performed:
    cropping the first original image captured by the first camera centered on the target object in the first original image to obtain the first image; and
    adjusting a size of the first viewfinder according to a size of the first image.
  15. The electronic device according to any one of claims 11 to 14, wherein the following is further performed:
    automatically determining the target object in the first original image according to the first original image captured by the first camera; or determining the target object in the first original image according to a selection operation by a user.
  16. The electronic device according to any one of claims 11 to 15, wherein the target object comprises one or more faces.
  17. The electronic device according to claim 16, wherein the obtaining the first image by cropping the first original image captured by the first camera centered on the target object in the first original image further comprises:
    segmenting, by using an image segmentation technique, the first original image captured by the first camera into a fourth image and a fifth image, wherein the fourth image is an image of the target object, and the fifth image is an image of the first original image that does not contain the target object;
    cropping the fourth image centered on the target object in the fourth image to obtain a sixth image, and cropping the fifth image centered on a position obtained by moving a center of the fifth image in the first direction by the first distance to obtain a seventh image; and
    merging the sixth image and the seventh image to obtain the first image.
  18. The electronic device according to any one of claims 11 to 17, wherein the electronic device is provided with an inertial measurement unit (IMU), and the electronic device further performs:
    determining the motion characteristics of the electronic device according to data of the IMU, and determining the first direction and the first distance according to the motion characteristics of the electronic device.
  19. The electronic device according to claim 18, wherein the second camera is further provided with an optical image stabilization device, and the determining the motion characteristics of the electronic device according to the data of the IMU and determining the first direction and the first distance according to the motion characteristics of the electronic device comprises:
    determining the motion characteristics of the electronic device according to the data of the IMU, and determining the first direction and the first distance according to the motion characteristics of the electronic device and data of the optical image stabilization device.
  20. The electronic device according to any one of claims 11 to 19, wherein the first operation is any one of: an operation by a user on a specific control, inputting a specific voice command, or performing a preset air gesture.
  21. A computer-readable storage medium, comprising computer instructions, wherein when the computer instructions are run on an electronic device, the electronic device is caused to perform the video image stabilization processing method according to any one of claims 1 to 10.
  22. A chip system, comprising one or more processors, wherein when the one or more processors execute instructions, the one or more processors perform the video image stabilization processing method according to any one of claims 1 to 10.
PCT/CN2021/117504 2020-09-18 2021-09-09 Video image stabilization processing method and electronic device WO2022057723A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21868548.5A EP4044582A4 (en) 2020-09-18 2021-09-09 VIDEO ANTI-QUAKE TREATMENT METHOD AND ELECTRONIC DEVICE
US17/756,347 US11750926B2 (en) 2020-09-18 2021-09-09 Video image stabilization processing method and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010988444.6 2020-09-18
CN202010988444.6A CN114205515B (zh) 2020-09-18 2020-09-18 Video image stabilization processing method and electronic device

Publications (1)

Publication Number Publication Date
WO2022057723A1 2022-03-24

Family

ID=80645091

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/117504 WO2022057723A1 (zh) 2021-09-09 2020-09-18 Video image stabilization processing method and electronic device

Country Status (4)

Country Link
US (1) US11750926B2 (zh)
EP (1) EP4044582A4 (zh)
CN (1) CN114205515B (zh)
WO (1) WO2022057723A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116112781B * 2022-05-25 2023-12-01 荣耀终端有限公司 Video recording method, apparatus, and storage medium
CN116132790B * 2022-05-25 2023-12-05 荣耀终端有限公司 Video recording method and related apparatus
WO2024076362A1 * 2022-10-04 2024-04-11 Google Llc Stabilized object tracking at high magnification ratios
CN116709023B * 2022-12-14 2024-03-26 荣耀终端有限公司 Video processing method and apparatus
CN116320515B * 2023-03-06 2023-09-08 北京车讯互联网股份有限公司 Real-time live streaming method and system based on a mobile camera device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110013896A1 (en) * 2009-07-15 2011-01-20 Canon Kabushiki Kaisha Image stabilization apparatus, image sensing apparatus and image stabilization method
WO2018205902A1 (zh) * 2017-05-09 2018-11-15 杭州海康威视数字技术股份有限公司 Anti-shake control method and apparatus
CN110072070A (zh) * 2019-03-18 2019-07-30 华为技术有限公司 Multi-channel video recording method and device
CN111246089A (zh) * 2020-01-14 2020-06-05 Oppo广东移动通信有限公司 Shake compensation method and apparatus, electronic device, and computer-readable storage medium

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3697244B2 (ja) * 2002-01-25 2005-09-21 キヤノン株式会社 Shake correction apparatus, imaging apparatus, shake correction method, and computer control program for shake correction
JP2006080969A (ja) * 2004-09-10 2006-03-23 Canon Inc Camera
CN101964870A (zh) * 2009-07-23 2011-02-02 华晶科技股份有限公司 Image capture apparatus for correcting image position and image position correction method
JP5739624B2 (ja) * 2010-07-01 2015-06-24 キヤノン株式会社 Optical apparatus, imaging apparatus, and control method
US8531535B2 (en) 2010-10-28 2013-09-10 Google Inc. Methods and systems for processing a video for stabilization and retargeting
JP5779959B2 (ja) * 2011-04-21 2015-09-16 株式会社リコー Imaging apparatus
GB201116566D0 (en) 2011-09-26 2011-11-09 Skype Ltd Video stabilisation
US8866943B2 (en) 2012-03-09 2014-10-21 Apple Inc. Video camera providing a composite video sequence
US9503645B2 (en) * 2012-05-24 2016-11-22 Mediatek Inc. Preview system for concurrently displaying multiple preview images generated based on input image generated by image capture apparatus and related preview method thereof
JP2014160982A (ja) * 2013-02-20 2014-09-04 Sony Corp Image processing apparatus, shooting control method, and program
KR102145190B1 (ko) 2013-11-06 2020-08-19 엘지전자 주식회사 Mobile terminal and control method therefor
US10244175B2 (en) * 2015-03-09 2019-03-26 Apple Inc. Automatic cropping of video content
CN106254771B (zh) 2016-07-29 2017-07-28 广东欧珀移动通信有限公司 Shooting anti-shake method and apparatus, and mobile terminal
CN106385541A (zh) * 2016-09-30 2017-02-08 虹软(杭州)科技有限公司 Method for implementing zoom using a wide-angle camera assembly and a telephoto camera assembly
KR20180095197A (ko) * 2017-02-17 2018-08-27 엘지전자 주식회사 Mobile terminal and control method thereof
US10630895B2 (en) * 2017-09-11 2020-04-21 Qualcomm Incorporated Assist for orienting a camera at different zoom levels
TWI693828B (zh) * 2018-06-28 2020-05-11 圓展科技股份有限公司 Display capture device and operation method thereof
CN110717576B (zh) * 2018-07-13 2024-05-28 株式会社Ntt都科摩 Image processing method, apparatus, and device
CN110830704B (zh) * 2018-08-07 2021-10-22 纳宝株式会社 Rotating image generation method and apparatus therefor
CN110636223B (zh) 2019-10-16 2021-03-30 Oppo广东移动通信有限公司 Anti-shake processing method and apparatus, electronic device, and computer-readable storage medium
CN111010506A (zh) * 2019-11-15 2020-04-14 华为技术有限公司 Shooting method and electronic device
CN111083557B (zh) * 2019-12-20 2022-03-08 浙江大华技术股份有限公司 Video recording and broadcasting control method and apparatus
CN111614869A (zh) * 2020-04-17 2020-09-01 北京中庆现代技术股份有限公司 Dual-channel picture acquisition system for a 4K high-definition camera

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110013896A1 (en) * 2009-07-15 2011-01-20 Canon Kabushiki Kaisha Image stabilization apparatus, image sensing apparatus and image stabilization method
WO2018205902A1 (zh) * 2017-05-09 2018-11-15 杭州海康威视数字技术股份有限公司 防抖控制方法和装置
CN110072070A (zh) * 2019-03-18 2019-07-30 华为技术有限公司 一种多路录像方法及设备
CN111246089A (zh) * 2020-01-14 2020-06-05 Oppo广东移动通信有限公司 抖动补偿方法和装置、电子设备、计算机可读存储介质

Also Published As

Publication number Publication date
US11750926B2 (en) 2023-09-05
US20220417433A1 (en) 2022-12-29
CN114205515B (zh) 2023-04-07
EP4044582A4 (en) 2023-01-25
CN114205515A (zh) 2022-03-18
EP4044582A1 (en) 2022-08-17


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21868548

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022526838

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2021868548

Country of ref document: EP

Effective date: 20220511

NENP Non-entry into the national phase

Ref country code: DE