WO2023160169A1 - Photographing method and electronic device - Google Patents

Photographing method and electronic device

Info

Publication number
WO2023160169A1
WO2023160169A1 · PCT/CN2022/140192
Authority
WO
WIPO (PCT)
Prior art keywords
image
row
electronic device
images
frames
Prior art date
Application number
PCT/CN2022/140192
Other languages
English (en)
French (fr)
Other versions
WO2023160169A9 (zh)
Inventor
许集润
Original Assignee
荣耀终端有限公司 (Honor Device Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 荣耀终端有限公司 (Honor Device Co., Ltd.)
Priority to EP22922524.8A (published as EP4274248A4)
Publication of WO2023160169A1
Publication of WO2023160169A9

Classifications

    • H04M1/72439 — User interfaces specially adapted for cordless or mobile telephones, with means for local support of applications that increase the functionality, with interactive means for internal management of messages, for image or video messaging
    • H04M1/72403 — User interfaces specially adapted for cordless or mobile telephones, with means for local support of applications that increase the functionality
    • H04N23/6812 — Control of cameras for stable pick-up of the scene: motion detection based on additional sensors, e.g. acceleration sensors
    • H04N23/689 — Control of cameras for stable pick-up of the scene: motion occurring during a rolling shutter mode
    • H04N23/951 — Computational photography systems, e.g. light-field imaging systems, using two or more images to influence resolution, frame rate or aspect ratio
    • H04M2250/52 — Details of telephonic subscriber devices including functional features of a camera

Definitions

  • the present application relates to the technical field of terminals, and in particular to a photographing method and electronic equipment.
  • the embodiment of the present application discloses a photographing method and an electronic device, which can ensure the clarity of an original image, so as to improve user photographing experience.
  • the present application provides a shooting method applied to an electronic device, including: acquiring N frames of images in response to a first operation, where the first operation is an operation acting on a shooting control, the N frames of images are N frames of the preview screen collected by the camera, and N is a positive integer; and, in the process of sequentially obtaining the shake amount of each of the N frames, determining a target image as the output original image, where
  • the original image is an image acquired by the electronic device through the sensor of the camera, and the target image is an image determined from the N frames according to the shake amount and meeting the shake-amount requirement.
  • in this way, the electronic device can screen the N acquired frames before image processing, so that frames with a large shake amount are filtered out and the clarity of the output original image is improved; the clarity and quality of the post-processed image are therefore higher, improving the clarity of images captured by the electronic device and the user's photographing experience.
  • the determining the target image as the output original image specifically includes: extracting a first image from the N frames; acquiring the shake amount of the first image; when the shake amount of the first image is less than or equal to a preset threshold, determining the first image as the target image; when the shake amount of the first image is greater than the preset threshold, extracting the next frame of the N frames as a new first image and returning to the step of acquiring the shake amount of the first image; and, if the shake amounts of all N frames are greater than the preset threshold, determining the image with the smallest shake amount among the N frames as the target image. In this way, an image satisfying the preset threshold is selected as the output original image whenever possible; if no image among the N frames satisfies the preset threshold, the image with the smallest shake amount is selected. The electronic device thus ensures the clarity of the selected original image as far as possible, improving the user's photographing experience.
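The selection loop described above can be sketched in Python (a minimal illustration; `select_target_frame`, `shake_of`, and the 0.5-pixel default threshold are assumptions for the sketch, not names from the application):

```python
from typing import Callable, Sequence, TypeVar

T = TypeVar("T")

def select_target_frame(frames: Sequence[T],
                        shake_of: Callable[[T], float],
                        threshold: float = 0.5) -> T:
    """Return the first frame whose shake amount is within the threshold;
    if no frame qualifies, fall back to the frame with the smallest shake."""
    best, best_shake = None, float("inf")
    for frame in frames:
        s = shake_of(frame)
        if s <= threshold:          # first acceptable frame is taken immediately
            return frame
        if s < best_shake:          # remember the least-shaky frame as fallback
            best, best_shake = frame, s
    return best                      # no frame met the threshold
```

With shake amounts [0.9, 0.3, 0.7] and a 0.5-pixel threshold, the second frame is returned; with an unreachable 0.1-pixel threshold, the minimum-shake frame is returned instead.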
  • the acquiring N frames of images in response to the first operation specifically includes: in response to the first operation, determining the moment of the first operation as a first moment; and acquiring N consecutive frames of images from the sensor starting from a first duration before the first moment.
  • the acquiring the shake amount of the first image specifically includes: acquiring gyroscope data of M rows in the first image, where M is a positive integer and M is less than or equal to the number of pixel rows of the first image; and determining the shake amount of the first image based on the M rows of gyroscope data.
  • in this way, the electronic device can calculate the shake amount of one frame of image in the sensor, and likewise of multiple frames, so that a higher-quality image can be selected from the multiple frames for processing, making the image the user sees clearer.
  • the acquiring the gyroscope data of the M rows in the first image specifically includes: acquiring exposure time information of the M rows of the first image, the exposure time information including the start time and end time of each row's exposure; acquiring timestamp information and the corresponding gyroscope data, the timestamp information being the time at which the corresponding gyroscope data was collected; and, when the timestamp information falls within the exposure time information of a corresponding row of the M rows, acquiring the gyroscope data within the exposure time information of that row.
  • selecting the gyroscope data based on the timestamp information and the exposure time information captures the user's shake during the exposure of the first image, ensuring the temporal accuracy of the gyroscope data and thereby the accuracy of the acquired shake amount.
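The timestamp-window matching described above can be illustrated with a short Python sketch (the function name and the `(timestamp, sample)` tuple layout are assumptions for illustration):

```python
def gyro_in_row(samples, t_start, t_end):
    """Keep only gyroscope samples whose timestamp falls inside the row's
    exposure window [t_start, t_end].

    samples: list of (timestamp, (gx, gy, gz)) tuples.
    """
    return [(t, g) for t, g in samples if t_start <= t <= t_end]
```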
  • the determining the shake amount of the first image based on the M rows of gyroscope data specifically includes: expressing the gyroscope data of the i-th row of the M rows through a shake function F_i, where j denotes the total number of timestamps (gyroscope samples) during the exposure of the i-th row, f is the focal length, k is an integer from 1 to Q (the number of gyroscope data dimensions), g_i^{n,k} is the k-th dimension of the gyroscope data at the n-th timestamp of the i-th row, and Δt_{i,n} is the time difference between the n-th timestamp and the previous timestamp in the i-th row;
  • determining the position function p_i of the i-th row, where p_i^j represents the spatial position corresponding to the gyroscope data g_i^j at the j-th timestamp of the i-th row:

    p_i^j = f · Σ_{n=1}^{j} Σ_{k=1}^{Q} g_i^{n,k} · Δt_{i,n}

  • determining the shake amount S_i of the i-th row as the difference between the maximum value and the minimum value of the position function p_i of the i-th row:

    S_i = max(p_i) − min(p_i)

    where max(p_i) is the maximum and min(p_i) the minimum over the j position values of the i-th row;
  • determining the shake amount S of the first image as the mean value of the shake amounts of the M rows:

    S = (1/M) · Σ_{i=1}^{M} S_i
  • the electronic device can effectively calculate the shake amount of each frame of image, make preparations for subsequent screening and selection, and ensure the integrity and reliability of the scheme.
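The per-row computation above amounts to integrating the angular velocity over the row's exposure, scaling to pixel space by the focal length, and taking the spread of the resulting trajectory. A minimal Python sketch under those assumptions (axis handling is simplified to a sum over the gyroscope dimensions; all names are illustrative):

```python
def row_shake(gyro, dts, f):
    """Shake of one row: gyro is a list of per-timestamp angular-velocity
    samples (each a tuple of Q axes), dts the time deltas between consecutive
    timestamps, f the focal length in pixels. Returns max(p) - min(p)."""
    p, pos = [], 0.0
    for g, dt in zip(gyro, dts):
        pos += f * sum(g) * dt      # accumulate displacement up to this sample
        p.append(pos)
    return max(p) - min(p)

def frame_shake(rows):
    """Frame shake S = mean of the M row shakes; rows is a list of
    (gyro, dts, f) tuples, one per sampled row."""
    return sum(row_shake(*r) for r in rows) / len(rows)
```

For a row whose displacement swings out and back, the max-minus-min spread (not the final position) is what registers as shake, matching the S_i definition above.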
  • the method further includes: acquiring the optical compensation amount corresponding to each timestamp in the i-th row; and the determining the position function p_i of the i-th row based on the spatial position corresponding to each timestamp in the i-th row further includes subtracting the compensation:

    p_i^j = f · Σ_{n=1}^{j} Σ_{k=1}^{Q} g_i^{n,k} · Δt_{i,n} − o_i^j

    where o_i^j is the optical compensation amount corresponding to the j-th timestamp in the i-th row.
  • in this way, the optical compensation amount is taken into account while the shake amount is being obtained, ensuring the accuracy of the shake amount of this frame of image.
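Folding the optical compensation into the position function can be sketched as follows (a minimal self-contained illustration; the `ois` list of per-timestamp compensation readings aligned with the gyroscope samples is an assumed input, not an API from the application):

```python
def row_shake_with_ois(gyro, dts, ois, f):
    """Row shake with OIS folded in: the optical compensation o reported at
    each timestamp is subtracted from the accumulated gyro displacement
    before the max-min spread is taken, per p_i^j = f·ΣΣ g·Δt − o_i^j."""
    p, pos = [], 0.0
    for g, dt, o in zip(gyro, dts, ois):
        pos += f * sum(g) * dt      # raw displacement from the gyroscope
        p.append(pos - o)           # OIS has already moved the image by o
    return max(p) - min(p)
```

If the OIS readings exactly track the gyro-derived displacement, the compensated trajectory is flat and the computed shake drops to zero, which is the intended effect of subtracting o_i^j.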
  • the preset threshold ranges from 0.1 to 1.0 pixels. In this way, the preset threshold can effectively filter the current first image: if the shake amount is below the threshold, the shake of this frame is very small and the frame will not be blurred by the shaking of the electronic device, so the frame can be output from the sensor for subsequent processing, ensuring the clarity and quality of the image.
  • the present application provides an electronic device, including: a touch screen, one or more processors, and one or more memories, the one or more memories are used to store computer program codes, and the computer program codes include computer instructions which, when executed by the one or more processors, cause the electronic device to perform:
  • in response to a first operation, acquiring N frames of images, where the first operation is an operation acting on the shooting control, the N frames of images are N frames of the preview screen collected by the camera, and N is a positive integer;
  • in the process of sequentially obtaining the shake amount of each of the N frames, determining a target image as the output original image;
  • the original image is an image obtained by the electronic device through the sensor of the camera;
  • the target image is determined from the N frames according to the shake amount and meets the shake-amount requirement.
  • in this way, the electronic device can screen the original images it obtains before image processing, so that images with large shake are filtered out and the definition of the original image is improved;
  • the quality of the image captured by the camera is thus improved, improving the user's camera experience.
  • the determining that the target image is the output original image is specifically performed as: extracting a first image from the N frames; acquiring the shake amount of the first image; when the shake amount of the first image is less than or equal to a preset threshold, determining the first image as the target image; when the shake amount of the first image is greater than the preset threshold, extracting the next frame of the N frames as a new first image and executing the step of acquiring the shake amount of the first image; and, if the shake amounts of all N frames are greater than the preset threshold, determining the image with the smallest shake amount among the N frames as the target image. In this way, an image satisfying the preset threshold is selected as the output original image whenever possible; if none exists among the N frames, the image with the smallest shake amount is selected, so the electronic device ensures the clarity of the selected original image as far as possible, improving the user's photographing experience.
  • the acquiring N frames of images in response to the first operation is specifically performed as: in response to the first operation, determining the moment of the first operation as the first moment; and acquiring N consecutive frames of images from the sensor starting from a first duration before the first moment.
  • in this way, the first duration ensures that the N frames of images are the images the user wants to shoot, improving the user's shooting experience.
  • the acquiring the shake amount of the first image is specifically performed as: acquiring gyroscope data of M rows in the first image, where M is a positive integer and M is less than or equal to the number of pixel rows of the first image; and determining the shake amount of the first image based on the M rows of gyroscope data.
  • in this way, the electronic device can calculate the shake amount of one frame of image in the sensor, and likewise of multiple frames, so that a higher-quality image can be selected from the multiple frames for processing, making the image the user sees clearer.
  • the acquiring the gyroscope data of the M rows in the first image is specifically performed as: acquiring the exposure time information of the M rows, the exposure time information including the start time and end time of each row's exposure; acquiring timestamp information and the corresponding gyroscope data; and, when the timestamp information falls within the exposure time information of a corresponding row of the M rows, acquiring the gyroscope data within the exposure time information of that row.
  • selecting the gyroscope data based on the timestamp information and the exposure time information captures the user's shake during the exposure of the first image, ensuring the temporal accuracy of the gyroscope data and thereby the accuracy of the acquired shake amount.
  • the determining the shake amount of the first image based on the M rows of gyroscope data is specifically performed as: expressing the gyroscope data of the i-th row of the M rows through the shake function F_i, where j denotes the total number of timestamps (gyroscope samples) during the exposure of the i-th row, f is the focal length, k is an integer from 1 to Q (the number of gyroscope data dimensions), g_i^{n,k} is the k-th dimension of the gyroscope data at the n-th timestamp of the i-th row, and Δt_{i,n} is the time difference between the n-th timestamp and the previous timestamp in the i-th row;
  • determining the position function p_i of the i-th row, where p_i^j represents the spatial position corresponding to the gyroscope data g_i^j at the j-th timestamp of the i-th row:

    p_i^j = f · Σ_{n=1}^{j} Σ_{k=1}^{Q} g_i^{n,k} · Δt_{i,n}

  • determining the shake amount S_i of the i-th row as the difference between the maximum value and the minimum value of the position function p_i of the i-th row:

    S_i = max(p_i) − min(p_i)

    where max(p_i) is the maximum and min(p_i) the minimum over the j position values of the i-th row;
  • determining the shake amount S of the first image as the mean value of the shake amounts of the M rows:

    S = (1/M) · Σ_{i=1}^{M} S_i
  • the electronic device can effectively calculate the shake amount of each frame of image, make preparations for subsequent screening and selection, and ensure the integrity and reliability of the scheme.
  • the electronic device further executes: acquiring the optical compensation amount corresponding to each timestamp in the i-th row; and the determination of the position function p_i of the i-th row based on the spatial position corresponding to each timestamp of the i-th row further includes subtracting the compensation:

    p_i^j = f · Σ_{n=1}^{j} Σ_{k=1}^{Q} g_i^{n,k} · Δt_{i,n} − o_i^j

    where o_i^j is the optical compensation amount corresponding to the j-th timestamp in the i-th row.
  • in this way, the optical compensation amount is taken into account while the shake amount is being obtained, ensuring the accuracy of the shake amount of this frame of image.
  • the preset threshold ranges from 0.1 to 1.0 pixels. In this way, the preset threshold can effectively screen the current first image: if the shake amount is below the threshold, the shake of this frame is very small and the frame will not be blurred by the shaking of the electronic device, so this frame can be output from the sensor for subsequent processing, ensuring the clarity and quality of the image.
  • the present application provides an electronic device, including a touch screen, one or more processors, and one or more memories.
  • the one or more processors are coupled with a touch screen, a camera, and one or more memories, and the one or more memories are used to store computer program codes.
  • the computer program codes include computer instructions.
  • when the one or more processors execute the computer instructions, the electronic device is caused to execute the photographing method in any possible implementation manner of any of the foregoing aspects.
  • the present application provides an electronic device, including: one or more functional modules.
  • One or more functional modules are used to execute the photographing method in any possible implementation manner of any of the above aspects.
  • an embodiment of the present application provides a computer storage medium, including computer instructions, which, when the computer instructions are run on the electronic device, cause the electronic device to execute the photographing method in any possible implementation of any one of the above aspects.
  • an embodiment of the present application provides a computer program product, which, when running on a computer, causes the computer to execute the photographing method in any possible implementation manner of any one of the above aspects.
  • FIG. 1 is a schematic diagram of a hardware structure of an electronic device 100 provided in an embodiment of the present application
  • FIG. 2 is a schematic structural diagram of a camera provided in an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a group of preview interfaces provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of an image exposure process provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a location function distribution provided by an embodiment of the present application.
  • FIG. 6 is a schematic flowchart of a photographing method provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a comparison of shooting effects provided by an embodiment of the present application.
  • FIG. 8 is a schematic flowchart of another shooting method provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a software structure of an electronic device 100 provided by an embodiment of the present application.
  • the terms "first" and "second" are used for descriptive purposes only, and cannot be understood as indicating or implying relative importance or implicitly specifying the quantity of indicated technical features. Therefore, a feature defined as "first" or "second" may explicitly or implicitly include one or more of such features. In the description of the embodiments of the present application, unless otherwise specified, "multiple" means two or more.
  • the photographing process of the electronic device is as follows: the user presses the control for photographing; the electronic device, in response to this user operation, opens the aperture and the shutter; light enters the camera through the lens and reaches the sensor; the sensor collects and records the light and converts it into a current signal, which is processed by image signal processing (ISP); finally, the processed image is stored by the processor of the electronic device.
  • the role of the ISP is to perform computational processing on the signal output by the sensor, that is, to perform linear correction, noise removal, dead-pixel repair, chromatic aberration correction, white balance correction, exposure correction and other processing on the image collected by the sensor, greatly improving sharpness and image quality.
  • different resolutions result in different photo clarity.
  • the shutter is a device that controls the length of time that light enters the camera to determine the exposure time of the picture.
  • the longer the shutter remains open, the more light enters the camera and the longer the exposure time of the picture.
  • the shorter the time the shutter remains open, the less light enters the camera and the shorter the exposure time of the picture.
  • Shutter speed is the amount of time the shutter remains open.
  • Shutter speed is the time interval from the shutter open state to the closed state. During this period of time, the object can leave an image on the film. The faster the shutter speed, the clearer the picture of moving objects on the image sensor. Conversely, the slower the shutter speed, the blurrier the picture of moving objects.
  • Shutters can be divided into two types: rolling shutter and global shutter.
  • the global shutter means that the entire scene is exposed at the same time, while the rolling shutter is realized by the progressive exposure of the sensor.
  • the rolling shutter opens and closes like a curtain: at the beginning of exposure, the sensor starts to expose row by row until all pixels are exposed, and the exposure of all rows is completed in a very short time.
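The progressive row-by-row exposure can be modelled with a simple timing sketch (the `t_line` line-readout interval and the function name are illustrative, not terms from the application): each row begins exposing one line-time after the previous row, so row i's exposure window is offset by i · t_line.

```python
def row_exposure_window(t_frame_start, i, t_line, t_exposure):
    """Exposure window of row i under a rolling shutter: rows start one
    line-time (t_line) apart, and each row exposes for t_exposure."""
    start = t_frame_start + i * t_line
    return start, start + t_exposure
```

This offset is what makes per-row gyroscope matching necessary: each row sees a different slice of the device's motion.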
  • the jelly effect refers to the partial exposure, skewed images, or wobble that appear when the exposure is improper or the object moves fast. The more severe the jelly effect, the blurrier the captured image.
  • Shooting parameters can include the shutter, exposure time, aperture value, exposure value and ISO; the electronic device can implement auto focus (AF), auto exposure (AE), auto white balance (AWB) and 3A (AF, AE and AWB together), in order to automatically adjust these shooting parameters.
  • the gyroscope is a device that uses the moment of momentum of a high-speed rotor to sense, relative to inertial space, angular motion about one or two axes orthogonal to the rotation axis.
  • angular-motion detection devices built on other principles that perform the same function are also called gyroscopes; that is, a gyroscope can measure the magnitude of an object's angular rotation in space.
  • OIS technology improves the performance of camera components by counteracting image blur caused by camera instability or shake and/or compensating for rolling shutter distortion during image capture.
  • OIS technology can largely compensate for the effects of camera motion, including rotation, translation, and shutter effects.
  • for example, if the electronic device moves 0.5 cm to the left during exposure, OIS technology can compensate 0.5 cm to the right, thereby reducing the degree of blur caused by the motion of the electronic device.
  • FIG. 1 is a schematic diagram of a hardware structure of an electronic device 100 provided in an embodiment of the present application.
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and so on.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, bone conduction sensor 180M, etc.
  • the structure illustrated in this embodiment of the present application does not constitute a specific limitation on the electronic device 100.
  • the electronic device 100 may include more or fewer components than shown in the figure, or combine certain components, or separate certain components, or arrange different components.
  • the illustrated components can be realized in hardware, software or a combination of software and hardware.
  • the processor 110 may include one or more processing units; for example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 100 .
  • the controller can generate an operation control signal according to the instruction opcode and timing signal, and complete the control of fetching and executing the instruction.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is a cache memory.
  • the memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs the instruction or data again, it can call it directly from the memory, avoiding repeated access and reducing the waiting time of the processor 110, thereby improving system efficiency.
  • processor 110 may include one or more interfaces.
  • the USB interface 130 is an interface conforming to the USB standard specification, specifically, it can be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 100, and can also be used to transmit data between the electronic device 100 and peripheral devices. It can also be used to connect headphones and play audio through them. This interface can also be used to connect other electronic devices, such as AR devices.
  • the charging management module 140 is configured to receive a charging input from a charger. While the charging management module 140 is charging the battery 142 , it can also supply power to the electronic device 100 through the power management module 141 .
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the power management module 141 receives the input from the battery 142 and/or the charging management module 140 to provide power for the processor 110 , the internal memory 121 , the external memory, the display screen 194 , the camera 193 , and the wireless communication module 160 .
  • the wireless communication function of the electronic device 100 can be realized by the antenna 1 , the antenna 2 , the mobile communication module 150 , the wireless communication module 160 , a modem processor, a baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover single or multiple communication frequency bands. Different antennas can also be multiplexed to improve the utilization of the antennas.
  • the mobile communication module 150 can provide wireless communication solutions including 2G/3G/4G/5G applied on the electronic device 100 .
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (Low Noise Amplifier, LNA) and the like.
  • the mobile communication module 150 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and send them to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signals modulated by the modem processor, and convert them into electromagnetic waves and radiate them through the antenna 1 .
  • a modem processor may include a modulator and a demodulator.
  • the modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator sends the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is passed to the application processor after being processed by the baseband processor.
  • the application processor outputs sound signals through audio equipment (not limited to speaker 170A, receiver 170B, etc.), or displays images or videos through display screen 194 .
  • the wireless communication module 160 can provide wireless local area network (Wireless Local Area Networks, WLAN) (such as wireless fidelity (Wireless Fidelity, Wi-Fi) network), bluetooth (Bluetooth, BT), global navigation satellite System (Global Navigation Satellite System, GNSS), frequency modulation (Frequency Modulation, FM), near field communication technology (Near Field Communication, NFC), infrared technology (Infrared, IR) and other wireless communication solutions.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110 , frequency-modulate it, amplify it, and convert it into electromagnetic waves through the antenna 2 for radiation.
  • the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the electronic device 100 realizes the display function through the GPU, the display screen 194 , and the application processor.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos and the like.
  • the display screen 194 includes a display panel.
  • the display panel can adopt a liquid crystal display (Liquid Crystal Display, LCD), organic light-emitting diode (Organic Light-Emitting Diode, OLED), active-matrix organic light-emitting diode (Active-Matrix Organic Light Emitting Diode, AMOLED), flexible light-emitting diode (Flex Light-Emitting Diode, FLED), Mini LED, Micro LED, Micro-OLED, quantum dot light-emitting diode (Quantum Dot Light Emitting Diodes, QLED), etc.
  • the electronic device 100 may include 1 or N display screens 194 , where N is a positive integer greater than 1.
  • the electronic device 100 may realize the acquisition function through an ISP, a camera 193 , a video codec, a GPU, a display screen 194 , and an application processor.
  • the ISP is used for processing the data fed back by the camera 193 .
  • the ISP may be located in the camera 193 .
  • Camera 193 is used to capture still images or video.
  • the object generates an optical image through the lens and projects it to the photosensitive element.
  • the photosensitive element can be a charge coupled device (Charge Coupled Device, CCD) or a complementary metal oxide semiconductor (Complementary Metal-Oxide-Semiconductor, CMOS) phototransistor.
  • the photosensitive element converts the light signal into an electrical signal, and then transmits the electrical signal to the ISP for conversion into a digital image or video signal.
  • ISP outputs digital image or video signal to DSP for processing.
  • DSP converts digital images or video signals into standard RGB, YUV and other formats of images or video signals.
  • the electronic device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
  • the electronic device 100 can use the N cameras 193 to acquire images with multiple exposure coefficients, and then, in video post-processing, the electronic device 100 can synthesize an HDR image from the images with multiple exposure coefficients using HDR technology.
  • Digital signal processors are used to process digital signals. In addition to digital image or video signals, they can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs.
  • the electronic device 100 can play or record videos in various encoding formats, for example: Moving Picture Experts Group (Moving Picture Experts Group, MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, so as to expand the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function, such as saving music, video and other files in the external memory card.
  • the internal memory 121 may be used to store computer-executable program codes including instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 121 .
  • the internal memory 121 may include an area for storing programs and an area for storing data.
  • the stored program area can store an operating system, at least one application program required by a function (such as a sound playing function, an image and video playing function, etc.) and the like.
  • the storage data area can store data created during the use of the electronic device 100 (such as audio data, phonebook, etc.) and the like.
  • the sensor module 180 may include one or more sensors, which may be of the same type or of different types. It can be understood that the sensor module 180 shown in FIG. 1 is only an exemplary division manner, and there may be other division manners, which are not limited in the present application.
  • the gyro sensor 180B can be used to determine the motion posture of the electronic device 100 .
  • the angular velocity of the electronic device 100 around three axes (i.e., the x, y and z axes) may be determined through the gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization.
  • the air pressure sensor 180C is used to measure air pressure.
  • the electronic device 100 calculates the altitude based on the air pressure value measured by the air pressure sensor 180C to assist positioning and navigation.
  • the magnetic sensor 180D includes a Hall sensor.
  • the electronic device 100 may use the magnetic sensor 180D to detect the opening and closing of the flip leather case.
  • the acceleration sensor 180E can detect the acceleration of the electronic device 100 in various directions (generally three axes).
  • the magnitude and direction of gravity can be detected when the electronic device 100 is stationary. It can also be used to identify the posture of the electronic device 100, and can be applied to applications such as horizontal and vertical screen switching, pedometers, etc.
  • Touch sensor 180K is also known as a "touch panel".
  • the touch sensor 180K can be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, also called a "touch screen".
  • the touch sensor 180K is used to detect a touch operation on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to the touch operation can be provided through the display screen 194 .
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100 , which is different from the position of the display screen 194 .
  • the keys 190 include a power key, a volume key and the like.
  • the key 190 may be a mechanical key. It can also be a touch button.
  • the electronic device 100 may receive key input and generate key signal input related to user settings and function control of the electronic device 100 .
  • FIG. 2 is a schematic structural diagram of a camera disclosed in the present application.
  • the camera of the electronic device may include a lens (lens), a sensor (sensor), an analog to digital converter (analog to digital converter, ADC), and a digital signal processing chip (digital signal process, DSP).
  • Light can be projected through the lens onto the surface of the sensor.
  • the sensor can convert the light at each pixel point into an electrical signal, and the ADC converts the electrical signal into a digital image signal.
  • the DSP can process the image through the ISP algorithm, and then transmit it to the processor of the mobile phone through the IO interface for post-processing.
  • the electronic device can correct or compensate the original image captured by the sensor, and can also apply ISP tuning and post-processing algorithms to the original image, for example, deblurring the original image.
  • the above-mentioned processing is all performed on the captured original image (the image obtained from the sensor), and the final image quality also depends on the clarity of the original image collected from the sensor.
  • the blurrier the image obtained from the sensor, the more obvious the jelly (rolling shutter) effect, the more difficult it is to process the original image afterwards, and the more limited the processing effect. As a result, the captured image has poor clarity and the user's camera experience is poor.
  • the electronic device may acquire the current N frames of images and their shaking amounts when the camera is shooting.
  • if the shake amount of a frame of image is less than (or less than or equal to) a preset threshold, the electronic device can determine this frame as the original image output by the sensor; if the shake amount of the frame of image is greater than or equal to (or greater than) the preset threshold, the shake amount of the next frame of image is obtained and the above comparison is performed again.
  • the electronic device may determine an image with the smallest jitter of the N frames of images as the output original image.
  • the shake amount of the image represents the shake degree of the image
  • an image with a larger shake amount indicates a more severe degree of shaking and a more severe degree of blur.
  • N is a positive integer
  • the N frames of images may be images presented on the sensor after continuous exposure by the camera of the electronic device.
  • the output original image refers to the image acquired by the sensor that has not yet been processed by the electronic device.
  • by screening the N frames of images acquired by the electronic device, images with a large shake amount can be filtered out and the clarity of the output original image can be improved, so that the clarity and quality of the post-processed image are higher, thereby improving the clarity of the image captured by the electronic device and the user's photographing experience.
  • FIG. 3 shows a schematic diagram of a group of preview interfaces.
  • a page indicator is also displayed under the multiple application icons to indicate the positional relationship between the currently displayed page and other pages.
  • Below the page indicator there are multiple tray icons (such as a dialer application icon, a message application icon, and a contacts application icon), and the tray icons remain displayed when the page is switched.
  • the above-mentioned page may also include multiple application icons and page indicators, and the page indicator may not be a part of the page, but exists independently.
  • the above-mentioned tray icons are also optional, which is not limited in this embodiment of the present application.
  • the electronic device may receive the user's input operation (for example, click) on the camera icon, and in response to the input operation, the electronic device may display the shooting interface 20 as shown in (B) in FIG. 3 .
  • the shooting interface 20 may include an echo control 201, a shooting control 202, a camera conversion control 203, a picture captured by the camera (preview picture) 205, a zoom ratio control 206A, a setting control 206B, a flash switch 206C, and one or more shooting mode controls 204 (for example, a "Night Scene Mode" control 204A, a "Portrait Mode" control 204B, a "normal photo mode" control 204C, a "short video" control 204D, a "video recording mode" control 204E, and more mode controls 204F).
  • the echo control 201 can be used to display captured images.
  • the shooting control 202 is used to trigger saving of images captured by the camera.
  • the camera switching control 203 can be used to switch the camera for taking pictures.
  • the setting control 206B can be used to set the camera function.
  • the zoom ratio control 206A can be used to set the zoom ratio of the camera.
  • the zoom factor control 206A can trigger the electronic device to display a zoom slider, and the zoom slider can receive an operation of sliding up (or down) from the user, so that the electronic device can increase (or decrease) the zoom factor of the camera.
  • the zoom magnification control 206A can cause the electronic device to display a zoom increase control and a zoom decrease control; the zoom increase control can be used to receive and respond to user input, triggering the electronic device to increase the zoom magnification of the camera; the zoom decrease control can be used to receive and respond to user input, triggering the electronic device to reduce the zoom magnification of the camera.
  • the flash switch 206C can be used to turn on/off the flash.
  • the shooting mode control can be used to trigger an image processing process corresponding to the shooting mode.
  • the "Night Scene Mode" control 204A can be used to trigger increased brightness, color richness, etc. in captured images.
  • the "portrait mode” control 204B can be used to trigger the blurring of the background of the person in the captured image. As shown in (B) in FIG. 3 , the shooting mode currently selected by the user is "normal photo taking mode".
  • the electronic device When the electronic device displays the preview screen 205 , the electronic device has started continuous exposure, acquires images in the current screen, and continuously displays the images acquired by exposure on the screen. As shown in (B) of FIG. 3 , the preview screen 205 of the electronic device may display the posture of a dancing actor.
  • the shutter speed can be: 1, 1/2, 1/4, 1/8, 1/15, 1/30, 1/60, 1/125, 1/250, 1/500, 1/1000, 1/2000 (second), etc.
  • the electronic device When the user touches the shooting control 202 , after the electronic device acquires the touch operation acting on the shooting control 202 , it can acquire and store a captured image.
  • the electronic device acquires the shake amount of the original image presented on the sensor.
  • the following describes the process in which the electronic device obtains the shake amount of the image through the data of the gyroscope sensor:
  • when the camera captures each frame of image, the gyroscope data are acquired at the same time, and the gyroscope data respectively correspond to each frame of image.
  • the jitter of the electronic device can be reflected by the gyroscope data, so that the degree of jitter of the electronic device when each frame of image is taken can be determined, that is, the amount of jitter corresponding to each frame of image.
  • the electronic device obtains the gyroscope data of each frame of image when shooting.
  • the exposure method described here mainly refers to the rolling shutter exposure method.
  • in rolling shutter exposure, all pixels are exposed in row order.
  • the image information of these pixels is therefore not captured at the same time, but sequentially.
  • FIG. 4 is a schematic diagram of an image exposure process disclosed in an embodiment of the present application. As shown in (A) in Fig. 4, it is the image frame corresponding to the exposure, assuming that the size of the image is 1920*1080.
  • the electronic device can be exposed according to the starting exposure sequence from top to bottom (or from bottom to top, which is not limited).
  • the image shown in (B) in Figure 4 represents the exposure process of the shaded part in this frame of image, and the electronic device can record the exposure time information of each row.
  • the exposure time information may include the start time and end time of exposure of a certain line, that is, the start time point and end time point of exposure of a certain line of the image. Therefore, it can be obtained that the exposure duration of a row is the time length of the end moment of exposure of this row minus the start moment.
  • the shaking of the image is the overall shaking, that is, the shaking amplitude (or shaking degree) of a certain line or several lines of the image is consistent with the shaking amplitude of the entire image. Therefore, to measure the jitter of a frame of image, it can be determined by the jitter of one or several lines in a frame of image.
  • the electronic device can first obtain the exposure time of a certain row or several rows in an image, and obtain the gyroscope data of the electronic device at a corresponding time based on the exposure time, so as to obtain the gyroscope data of the image.
  • the electronic device may select M rows of exposure time information.
  • M is a positive integer, and M is less than or equal to the number of rows of pixels in this frame of image (such as 1920).
  • the electronic device can select M=4 rows and obtain their exposure time information.
  • the starting time of the first line of pixel exposure (line1) is t1, and the ending time is t2; the starting time of the second line (line2) is t3, and the ending time is t4; the starting time of the third line (line3) is t5, the end time is t6; the start time of the fourth line (line4) is t7, and the end time is t8.
  • the exposure duration of the first row is t2-t1; the exposure duration of the second row is t4-t3; the exposure duration of the third row is t6-t5; the exposure duration of the fourth row is t8-t7.
  • "first row" to "fourth row" here do not denote the exposure order of the entire frame of image, but the order of the M rows selected from the entire image.
  • the time difference between the exposure start times of two adjacent rows is basically the same, that is, it can be understood that the electronic device starts exposure row by row in sequence.
  • the gyroscope acquires gyroscope (gyro) data at a specific sampling frequency. That is, the electronic device can obtain time stamp information and gyro data.
  • the time stamp information represents the information of the time when the gyro data is acquired, and the time stamp information corresponds to the gyro data one by one. For example, when the timestamp information is ts1, the gyro data is g1; when the timestamp information is ts2, the gyro data is g2; when the timestamp information is ts3, the gyro data is g3...
  • the time interval between adjacent timestamp information is the same, and the time interval is the reciprocal of the sampling frequency.
  • the sampling frequency of the gyroscope may be different, which is not limited in this embodiment of the present application.
  • the gyroscope (gyro) data may be data related to the x-axis, y-axis, and z-axis, and may also include data such as speed and acceleration, and may also include attitude change data.
  • the embodiment of the application does not specifically limit this.
  • g1 may include the rotation angle of the x-axis, the rotation angle of the y-axis and the rotation angle data of the z-axis.
  • the gyro data is three-dimensional data.
  • the electronic device knows the exposure time information of the M rows, as well as the time stamp information and gyro data of the gyroscope.
  • the electronic device can acquire the gyro data corresponding to the time stamp during the exposure period of M lines based on the exposure time of each line, so as to acquire the gyro data of this one (frame) image.
  • the exposure start time of the first line (line1) is t1
  • the end time is t2.
  • the timestamp information falling into the time period t1 to t2 is ts1 to ts5. Therefore, the electronic device can determine that the gyroscope data corresponding to the exposure of the first row are g1, g2, g3, g4 and g5.
  • the time stamp information falling into the time period from t3 to t4 is ts2 to ts6, and the electronic device can determine that the gyroscope data corresponding to the exposure of the second line is g2 to g6.
  • the timestamp information falling into the time period from t5 to t6 is ts3 to ts7, and the electronic device can determine that the gyroscope data corresponding to the third-row exposure are g3 to g7, and so on, so that the gyroscope data corresponding to the exposure of the M rows of this frame of image can be determined.
  • the acquired timestamp information ts1 to ts8 corresponds to gyroscope data g1, g2, g3, g4, g5, g6, g7 and g8, respectively.
  • the electronic device may obtain exposure time information of all lines of a certain frame of image, and determine which lines of exposure time information the above-mentioned known time stamp information falls within.
  • ts1 to ts5 fall within the range of exposure time t1 to t2 for the first row
  • ts2 to ts6 fall within the range of exposure time t3 to t4 for the second row
  • ts3 to ts7 fall within the range of exposure time t5 to t6 for the third row
  • ts4 to ts8 fall within the range of the fourth line exposure time t7 to t8.
  • the gyroscope data corresponding to the exposure row can be determined, that is, the gyroscope data corresponding to the first row exposure are g1 to g4; the gyroscope data corresponding to the second row exposure are g2 to g6; the gyroscope data corresponding to the third row exposure are g3 to g7; the gyroscope data corresponding to the exposure of the fourth line are g4 to g8.
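  • As a concrete illustration of the timestamp matching described above, the following sketch (with hypothetical millisecond values for the row exposure times and gyro timestamps, not taken from this application) selects the gyroscope samples whose timestamps fall within each row's exposure period:

```python
# Match gyroscope samples to rolling-shutter rows: a sample belongs to a row
# if its timestamp falls within that row's [start, end] exposure interval.
# All numeric values below are invented for illustration.

def gyro_samples_for_row(row_start, row_end, timestamps):
    """Return indices of gyro timestamps falling within one row's exposure."""
    return [n for n, ts in enumerate(timestamps) if row_start <= ts <= row_end]

# Gyro sampled every 10 ms: ts1..ts8 correspond to gyro data g1..g8
timestamps = [0, 10, 20, 30, 40, 50, 60, 70]

# Four selected rows; each row starts 10 ms after the previous, exposes 40 ms
rows = [(0, 40), (10, 50), (20, 60), (30, 70)]

for i, (start, end) in enumerate(rows, start=1):
    idx = gyro_samples_for_row(start, end, timestamps)
    print(f"row {i}: g{idx[0] + 1}..g{idx[-1] + 1}")
# row 1: g1..g5
# row 2: g2..g6
# row 3: g3..g7
# row 4: g4..g8
```

  • With these assumed values the mapping reproduces the example above (first row g1 to g5, second row g2 to g6, and so on).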
  • the timestamp information and gyroscope data of the gyroscope may be obtained first and then the image exposure time information, or the image exposure time information may be obtained first and then the timestamp information and gyroscope data of the gyroscope; the order of acquisition is not limited.
  • the electronic device calculates the shake amount of each frame image based on the gyroscope data.
  • the electronic device After acquiring the gyro data of each of the M rows, the electronic device can calculate the amount of jitter for each row, as described below:
  • since the gyroscope data involves several indices, the notation for the gyroscope data and the corresponding timestamp information is described here.
  • a gyroscope data item is specified by three indices: i, n and k.
  • i represents the exposure of the i-th row (i is a positive integer from 1 to M)
  • g_i represents all the gyroscope data of the i-th row
  • ts_i represents all the timestamp information of the i-th row.
  • n represents the order of the nth timestamp information exposed in a row (n is a positive integer from 1 to j), and can also be understood as the nth column, and there are j timestamp information in this row.
  • g_i^n represents the gyroscope data corresponding to the n-th timestamp information exposed in the i-th row
  • ts_i^n represents the n-th timestamp information exposed in the i-th row
  • g_i^n corresponds to ts_i^n.
  • j is a positive integer.
  • k represents the k-th dimension of the gyroscope data (k is a positive integer from 1 to Q). Assuming that the gyroscope data corresponding to each timestamp information has Q dimensions, each column in each row contains a group of gyroscope data with Q dimensions; then g_i^nk is the k-th-dimension gyroscope data corresponding to the n-th timestamp information in the i-th row.
  • Q is a positive integer.
  • the gyroscope data includes the data of x-axis and y-axis, Q is equal to 2, and the range of k is 1 and 2.
  • the gyro data of the i-th row can be represented by a jitter function, namely F_i = [g_i^1, g_i^2, ..., g_i^j], where j indicates that the i-th row has gyro data corresponding to j pieces of timestamp information.
  • F_1 = [g_1^1, g_1^2, g_1^3, g_1^4, g_1^5] of the first row corresponds to [g1, g2, g3, g4, g5] in (B) of FIG. 4
  • F_2 = [g_2^1, g_2^2, g_2^3, g_2^4, g_2^5] of the second row corresponds to [g2, g3, g4, g5, g6] in (B) of FIG. 4, and so on.
  • the size of j in each row may not necessarily be equal.
  • the shake amount of the i-th row may be determined based on the jitter function.
  • the electronic device may first integrate each group of gyro data in the jitter function of the i-th row to determine the spatial position or attitude p_i^n of the electronic device at each timestamp.
  • each timestamp information corresponds to a spatial position or attitude p_i^n of the electronic device, which can be represented by a position function p_i.
  • based on the jitter function, the electronic device can obtain the position function of the i-th row as p_i = [p_i^1, p_i^2, ..., p_i^j].
  • p_i^1 represents the spatial position corresponding to the gyroscope data g_i^1 of the first timestamp information in the i-th row; ...; p_i^j represents the spatial position corresponding to the gyroscope data g_i^j of the j-th timestamp information in the i-th row.
  • the electronic device Before obtaining the position function of the i-th row, the electronic device needs to obtain the spatial position corresponding to each gyroscope data in the i-th row, as follows:
  • based on the aforementioned Q-dimensional gyroscope data, the electronic device can integrate (cumulatively sum) the Q-dimensional gyroscope data to obtain the spatial position corresponding to the n-th timestamp information as p_i^n = f · Σ_{m=1}^{n} Σ_{k=1}^{Q} g_i^mk · Δt.
  • f represents the focal length, and f can be used as a coefficient of the cumulative summation, which can be obtained from the camera in advance.
  • k is a positive integer from 1 to Q.
  • g_i^nk is the k-th-dimension data in the gyroscope data of the n-th timestamp information in the i-th row; for example, g_i^n includes g_i^n1, g_i^n2, g_i^n3, ..., g_i^nQ.
  • Δt is a specific time length, namely the sampling interval of the gyro sensor, i.e., the reciprocal of its sampling frequency.
  • the electronic device After acquiring the position function p i of the i-th row, the electronic device can determine the shake amount S i of the i-th row.
  • the shake amount S_i is the difference between the maximum value and the minimum value of the j spatial positions in the i-th row, and S_i can be expressed as S_i = max(p_i) − min(p_i), where max(p_i) is the maximum value over the position function of the i-th row, namely max(0, p_i^1, p_i^2, ..., p_i^j), and min(p_i) is the minimum value, namely min(0, p_i^1, p_i^2, ..., p_i^j).
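  • The row-level computation above can be sketched as follows; the focal length f, the sampling interval Δt, and the gyro sample values are illustrative assumptions, and the integration is approximated by a cumulative sum as in the formulas above:

```python
# Minimal sketch of the per-row shake amount S_i: cumulatively sum the
# Q-dimensional gyro samples of a row into spatial positions p_i^n, then take
# the spread max - min (with 0 included, matching the formulas above).
# f, dt and the sample values are assumptions, not values from this application.

def row_shake_amount(row_gyro, f=1.0, dt=0.01):
    """row_gyro: list of j samples, each a list of Q gyro dimensions."""
    positions = []
    p = 0.0
    for sample in row_gyro:
        p += f * sum(sample) * dt   # integrate all Q dimensions over one interval
        positions.append(p)
    return max(0.0, *positions) - min(0.0, *positions)

# Example: 5 samples of 2-dimensional (x/y) gyro data for one row
row_gyro = [[0.2, -0.1], [0.3, 0.0], [-0.4, 0.1], [0.1, 0.2], [-0.2, -0.3]]
s_i = row_shake_amount(row_gyro)   # ≈ 0.005 for this assumed data
```

  • Repeating this for each of the M rows and averaging the results gives the shake amount of the whole frame.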
  • FIG. 5 is a schematic diagram of a location function distribution disclosed in an embodiment of the present application.
  • the ordinate in FIG. 5 may represent the value of the spatial position of each row in the position function, and the abscissa may represent the corresponding time (ie, the above-mentioned timestamp information).
  • the electronic device can calculate the shake amount S_i for each of the M rows.
  • the electronic device when the electronic device has an optical image stabilization (OIS) function, the electronic device can sample the light compensation amount o to obtain the light corresponding to each time stamp information in the i-th row The compensation amount, the electronic device can obtain the position function of the i-th row based on the spatial position corresponding to each time stamp information and the light compensation amount is p i , which can be expressed as:
  • the electronic device After obtaining the position function of the i-th row as p i , the electronic device can determine the shake amount S i of the i-th row based on this.
  • the calculation method of the jitter amount S i is consistent with the above method, and will not be repeated.
  • the electronic device After obtaining the amount of jitter from the first row to the Mth row, the electronic device can obtain the amount of jitter S of this frame of image, and S can be the mean value of the amount of jitter in each row of M rows, which can be expressed as:
  • the electronic device can calculate the shake amount of each of multiple frames of images on the sensor in this way, and then select a higher-quality image from the multiple frames for processing, so that the image the user sees is clearer.
  • FIG. 6 is a schematic flowchart of a photographing method provided in an embodiment of the present application. As shown in FIG. 6, the photographing method includes but is not limited to the following steps.
  • the electronic device acquires N frames of images in response to a first operation.
  • the first operation may be an operation acting on the shooting control.
  • the electronic device may receive a first operation from the user.
  • N frames of images may be acquired in response to the first operation.
  • the electronic device may first determine the moment when the first operation is acquired as the first moment, and determine one or more frames of images acquired by the sensor at a time a first duration before the first moment as the N frames of images, where N is a positive integer.
  • the first duration may be about 100 ms, or other durations, which are not limited in this embodiment of the present application. It should be noted that the determination of the first duration needs to take into account the time delay between when the user of the electronic device presses the shooting control and when the shutter releases the exposure.
  • the current first moment is 13:28:35.624, February 11, 2022, and the moment 100 ms (first duration) before it is 13:28:35.524, February 11, 2022.
  • the electronic device can determine the images exposed at 13:28:35.524 on February 11, 2022 as the N frames of images. For example, if at 13:28:35.524 on February 11, 2022 the electronic device starts to expose 5 frames of images and acquires them on the sensor, the electronic device may determine these 5 frames as the N frames of images. For another example, assume that the current first moment is 13:28:35.624, February 11, 2022, and the moment 100 ms before it is 13:28:35.524, February 11, 2022.
  • the electronic device can determine the images exposed within 10 ms around 13:28:35.524 on February 11, 2022 as the N frames of images. That is, the several frames of images acquired by the sensor within the time range from 13:28:35.514 to 13:28:35.534 on February 11, 2022 are the N frames of images.
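  • The frame-selection step above can be sketched as follows; the 100 ms first duration and 10 ms window follow the example, while the frame timestamps themselves are invented for illustration:

```python
# Sketch of picking the N candidate frames: frames whose exposure start falls
# within a small window around (first_moment - first_duration).
# All timestamps are hypothetical milliseconds, not values from this application.

def select_candidate_frames(frame_times, first_moment_ms,
                            first_duration_ms=100, window_ms=10):
    """Return exposure start times within the window around the target moment."""
    target = first_moment_ms - first_duration_ms
    return [t for t in frame_times
            if target - window_ms <= t <= target + window_ms]

# Sensor exposes a frame every 8 ms; shooting control pressed at t = 624 ms
frame_times = list(range(500, 600, 8))            # 500, 508, ..., 596
candidates = select_candidate_frames(frame_times, 624)
print(candidates)   # [516, 524, 532]
```

  • With these assumed values, three frames fall inside the 10 ms window around the target moment and become the N candidate frames.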
  • the electronic device acquires shake amounts of N frames of images.
  • the shaking amount of each frame of the N frames of images may be sequentially acquired.
  • the electronic device can acquire gyroscope data of multiple consecutive time stamp information around the first duration before the first operation, and the exposure time information of each row in the N frames of images. Afterwards, the electronic device can determine the shaking amount of the N frames of images based on the exposure time information and the gyroscope data.
  • step S602 for the specific description of step S602, reference may be made to the above-mentioned descriptions related to FIG. 4 and FIG. 5 , and details are not repeated.
  • the electronic device may determine that the target image is an output original image based on the shake amount.
  • the target image is determined from N frames of images according to the shake amount and meets the shake amount requirement.
  • the output original image refers to the image output by the sensor of the camera. That is, the electronic device can acquire multiple images from the sensor of its camera, select one of the images as the output image, and this output original image can be subjected to subsequent ISP, deblurring, and other processing.
  • the meaning of the output original image is that this frame is determined as the image acquired from the sensor of the camera and sent to the DSP, that is, the selected image on the sensor; this image will then be processed by the ISP and displayed.
  • depending on the specific shake amount requirement applied to the N frames of images, the determined target image also differs.
  • the following two possible situations are specifically described:
  • the electronic device may first extract one frame of images out of N frames of images as the first image, and then may obtain the shaking amount of the first image.
  • the electronic device may first compare the relationship between the shaking amount of the first image and a preset threshold.
  • the electronic device may determine the first image as the target image.
  • the electronic device may extract the next frame of the N frames of images as a new first image, and perform the step of acquiring the shake amount of the first image (for example, S602). If the shaking amounts of all N frames of images are greater than the preset threshold, the electronic device may determine the image with the smallest shaking amount among the N frames to be the target image.
  • when there is an image meeting the preset threshold among the N frames, the image that meets the shake amount requirement is an image in the N frames whose shake amount is less than or equal to the preset threshold, and one such frame can be selected as the target image;
  • the image that meets the shake amount requirement is the image with the smallest shake amount in the N frames.
  • the first image is only one frame of the images exposed by the electronic device; the electronic device can continuously expose multiple images to obtain multiple original images.
  • the electronic device may continuously expose multiple frames of images and select the first frame as the first image.
  • in this way, the frame serving as the first image may be advanced in sequence.
  • the preset threshold may range from 0.1 pixel to 1.0 pixel.
  • the specific value of the preset threshold is not limited.
  • the preset threshold can effectively filter the current first image. A small shake amount means that this frame shakes very little and will not be blurred by the shaking of the electronic device, so this frame can be output from the sensor for subsequent processing, thereby guaranteeing the clarity and quality of the image.
  • the electronic device may perform the comparison over the above N frames of images, checking whether a subsequent image has a shake amount less than the preset threshold, so that an image satisfying the preset threshold can be selected as the output original image whenever possible. If no image in the N frames satisfies the preset threshold, the one with the smallest shake amount is selected. In this way, the electronic device ensures the clarity of the selected original image as far as possible, thereby improving the user's photographing experience.
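The selection logic described above — take the first frame whose shake amount is at or below the threshold, otherwise fall back to the frame with the smallest shake amount — can be sketched as a short Python function. This is a minimal illustration, not the patented implementation; the function names, the callback interface, and the default threshold are assumptions for demonstration only.

```python
from typing import Callable, Sequence, TypeVar

Frame = TypeVar("Frame")

def select_target_frame(frames: Sequence[Frame],
                        shake_of: Callable[[Frame], float],
                        threshold: float = 0.5) -> Frame:
    """Return the first frame whose shake amount is <= threshold (in pixels);
    if no frame qualifies, return the frame with the smallest shake amount."""
    best_frame, best_shake = None, float("inf")
    for frame in frames:
        s = shake_of(frame)          # shake amount of this frame, in pixels
        if s <= threshold:
            return frame             # early exit: first acceptable frame
        if s < best_shake:           # remember the least-shaky frame so far
            best_frame, best_shake = frame, s
    return best_frame
```

Note the early exit: later frames need not have their shake amounts computed once an acceptable frame is found, which matches the sequential per-frame flow of steps S601 to S607.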
  • the electronic device may calculate the shake amount of each of the N frames of images, and determine the frame of image with the smallest shake amount as the target image.
  • the target image can be used as the output original image.
  • the image that meets the shake amount requirement is the image with the smallest shake amount in the N frames.
  • the electronic device can select the image with the smallest amount of shaking in the N frames of images as the target image, which can ensure that the image of the output original image in the N frames is the image with the best definition.
  • FIG. 7 is a schematic diagram of a comparison of photographing effects disclosed in an embodiment of the present application.
  • the image on the left is the image processed by the shooting method of the embodiment of this application, and the image on the right is an image not processed by this method.
  • the image on the left is clearly visible and has less noise, while the image on the right is blurred and has poor image quality. From the effects of the above two images, it can be seen that the electronic device selects the image frame through the amount of shaking, so that the output image effect can be improved.
  • the electronic device selects a clearer image based on the shake amount of the image, so as to improve the quality and effect of the captured image, and improve the user's shooting experience.
  • the electronic device in the embodiment of the present application has a shooting function.
  • the technical solutions of the embodiments of the present application can be applied to various shooting scenarios.
  • the application does not specifically limit the type of electronic equipment.
  • the electronic device in the embodiments of this application may be a mobile phone, a wearable device (for example, a smart bracelet), a tablet computer, a laptop computer, a handheld computer, a desktop computer, an ultra-mobile personal computer (UMPC), a personal digital assistant (PDA), an augmented reality (AR)/virtual reality (VR) device, or another portable device.
  • FIG. 8 is a schematic flowchart of another photographing method provided by an embodiment of the present application. As shown in Figure 8, the shooting method includes but is not limited to the following steps:
  • the electronic device acquires a first image based on a first operation.
  • the electronic device may acquire N frames of images in response to the first operation, and then the electronic device may extract the first image in the N frames of images. That is, the electronic device may determine that one image in N frames is the first image.
  • for the description of the first operation in step S801, reference may be made to step S601; details are not repeated.
  • the electronic device acquires a shaking amount of the first image.
  • for step S802, reference may be made to step S602 and the related descriptions in FIG. 4 and FIG. 5; details are not repeated.
  • the electronic device determines whether the shaking amount of the first image is greater than (or greater than or equal to) a preset threshold; if it is greater than or equal to the preset threshold, step S805 is executed; otherwise, step S804 is executed.
  • for step S803, reference may be made to the description of step S603; details are not repeated.
  • the electronic device determines the first image as an output original image.
  • the electronic device determines the first image as the target image, that is, the output original image.
  • for step S804, reference may be made to the related description of step S603; details are not repeated.
  • S805. The electronic device judges whether the first image is the last image of the N frames of images. If yes, step S807 is executed; otherwise, step S806 is executed.
  • the electronic device determines whether the first image is the last frame of the N frames of images.
  • for step S805, reference may be made to the related description of step S603.
  • the electronic device stores the shaking amount of the first image, extracts the next frame of the N frames of images as a new first image, and executes step S802 again.
  • if the shaking amount of the first image is greater than (or greater than or equal to) the preset threshold and the first image is not the last frame of the N frames of images, the electronic device stores the shaking amount of the current first image, determines the next frame in the N frames to be the new first image, and executes S802.
  • for step S806, reference may be made to the related description of step S603; details are not repeated.
  • the electronic device determines the image with the smallest shake amount among the N frames as the original image to be output.
  • the electronic device has judged all N frames of images; it can sort the stored shaking amounts of the N frames and determine the image with the smallest shaking amount as the output original image, that is, the target image.
  • the electronic device may perform the comparison over the above N frames of images, checking whether a subsequent image has a shake amount smaller than the preset threshold, so that an image satisfying the preset threshold can be selected as the output original image whenever possible. If no image in the N frames satisfies the preset threshold, the one with the smallest shake amount is selected. In this way, the electronic device ensures the clarity of the selected original image as far as possible, thereby improving the user's photographing experience.
  • for step S807, reference may be made to the relevant description of step S603; details are not repeated.
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture.
  • the software structure of the electronic device 100 is exemplarily described by taking an Android system with a layered architecture as an example.
  • FIG. 9 is a schematic diagram of a software structure block of the electronic device 100 provided by the embodiment of the present application.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate through software interfaces.
  • the Android system is divided into four layers, which are respectively the application program layer, the application program framework layer, the Android runtime (Android runtime) and the system library, and the kernel layer from top to bottom.
  • FIG. 9 shows a software structural block diagram of an electronic device 100 exemplarily provided in an embodiment of the present application.
  • the Android system can be divided into four layers, from top to bottom: application layer, application framework layer, hardware abstraction layer (hardware abstraction layer, HAL) layer and hardware driver layer.
  • the application layer includes a series of application packages, including, for example, the camera application. Besides the camera application, it may also include other applications such as gallery, video, SMS, and phone applications.
  • the camera application may provide the user with a camera function.
  • the camera application can notify the encoding module and the image processing module in the application framework layer to take a photo.
  • the application framework layer (framework, FWK) provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer can include a camera service interface (Camera Service), which can provide a communication interface between the camera application in the application layer and the HAL layer.
  • the HAL layer may include an image signal processing unit, which may be used for implementing the above-mentioned shooting method of this application. That is, after the image signal processing unit acquires the first image from the image sensor and the gyroscope data through the gyroscope sensor driver, it can process the first image through the method of the embodiments of this application to obtain the output original image. Refer to the descriptions of FIG. 6 and FIG. 8; details are not repeated.
  • the hardware driver layer may include modules such as a focus motor driver, an image sensor driver, an image signal processor driver, a gyroscope sensor driver, and a touch sensor driver.
  • the focus motor driver can control the focus motor, including pushing the lens to focus and obtaining focus information during the shooting process of the camera, for example, the focal length f in the embodiments of this application.
  • the image sensor driver may acquire image information acquired by the sensor of the camera, for example, may acquire the first image in the embodiment of the present application.
  • the image signal processor driver can drive the image signal processor to process and calculate the first image.
  • the gyroscope sensor driver is used to acquire gyroscope data.
  • the touch sensor driver is used to acquire touch events, for example, the first operation.
  • the term “when” may be interpreted to mean “if” or “after” or “in response to determining" or “in response to detecting".
  • the phrase "if determined" or "if (a stated condition or event) is detected" may be interpreted to mean "upon determining" or "in response to determining" or "upon detecting (a stated condition or event)" or "in response to detecting (a stated condition or event)".
  • all or part of them may be implemented by software, hardware, firmware or any combination thereof.
  • When implemented using software, it may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of this application are generated in whole or in part.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, DSL) or wireless (e.g., infrared, radio, microwave) means.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrated with one or more available media.
  • the available media may be magnetic media (e.g., floppy disk, hard disk, magnetic tape), optical media (e.g., DVD), or semiconductor media (e.g., solid-state drive), etc.
  • the processes can be completed by computer programs to instruct related hardware, and the programs can be stored in computer-readable storage media.
  • When the programs are executed, the processes of the foregoing method embodiments may be included.
  • the aforementioned storage medium includes: ROM or random access memory RAM, magnetic disk or optical disk, and other various media that can store program codes.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Studio Devices (AREA)

Abstract

This application discloses a photographing method and an electronic device, including: acquiring N frames of images in response to a first operation, where the first operation is an operation acting on a shooting control, the N frames of images are N frames in a preview picture captured by a camera, and N is a positive integer; in the process of sequentially acquiring the shake amount of each of the N frames of images, determining a target image as the output original image, where the original image is an image acquired by the electronic device through the sensor of the camera; and the target image is an image determined from the N frames of images according to the shake amount and meeting the shake amount requirement. In the embodiments of this application, the clarity of the original image can be guaranteed, so as to improve the user's photographing experience.

Description

A photographing method and electronic device
This application claims priority to Chinese Patent Application No. 202210181416.2, entitled "A photographing method and electronic device" and filed with the Chinese Patent Office on February 25, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of terminal technologies, and in particular, to a photographing method and an electronic device.
Background
In daily life, taking photos with a mobile phone has become a habit for users. The camera of a mobile phone typically projects the light passing through the lens onto a sensor to form an original image, which is then processed by image signal processing (ISP) and post-processing algorithms before being presented to the user.
When a user takes photos with a mobile phone, the natural shaking of the user's hand, the focus position of the camera, the movement of the user, the fast movement of the photographed subject, and similar problems can cause the captured original image to be blurred.
Summary
The embodiments of this application disclose a photographing method and an electronic device, which can guarantee the clarity of the original image so as to improve the user's photographing experience.
In a first aspect, this application provides a photographing method applied to an electronic device, including: acquiring N frames of images in response to a first operation, where the first operation is an operation acting on a shooting control, the N frames of images are N frames in a preview picture captured by a camera, and N is a positive integer; in the process of sequentially acquiring the shake amount of each of the N frames of images, determining a target image as the output original image, where the original image is an image acquired by the electronic device through the sensor of the camera; and the target image is an image determined from the N frames of images according to the shake amount and meeting the shake amount requirement.
In the embodiments of this application, the electronic device can screen and discriminate the acquired N frames of images before image processing, so that images with large shake amounts are filtered out and the clarity of the output original image is improved. The post-processed image is therefore clearer and of higher quality, which improves the clarity of the images captured by the electronic device and the user's photographing experience.
In a possible implementation, the determining a target image as the output original image specifically includes: extracting a first image from the N frames of images; acquiring the shake amount of the first image; determining the first image as the target image when the shake amount of the first image is less than or equal to a preset threshold; when the shake amount of the first image is greater than the preset threshold, extracting the next frame of the N frames of images as a new first image and executing the step of acquiring the shake amount of the first image; and if the shake amounts of all N frames of images are greater than the preset threshold, determining the image with the smallest shake amount among the N frames as the target image. In this way, an image satisfying the preset threshold can be selected as the output original image whenever possible; if no image in the N frames satisfies the preset threshold, the one with the smallest shake amount is selected. The electronic device thus guarantees the clarity of the selected original image as far as possible, thereby improving the user's photographing experience.
In a possible implementation, the acquiring N frames of images in response to a first operation specifically includes: in response to the first operation, determining the moment of the first operation as a first moment; and acquiring N consecutive frames of images from the sensor starting from a first duration before the first moment. In this way, so that the image captured by the electronic device is the picture the user wants to shoot, the delay between the user pressing the shooting control and the electronic device exposing the picture needs to be considered when acquiring the first image; the first duration ensures that the N frames of images correspond to the picture the user wants to shoot, thereby improving the user's shooting experience.
In a possible implementation, the acquiring the shake amount of the first image specifically includes: acquiring gyroscope data of M rows in the first image, where M is a positive integer and M is less than or equal to the number of pixel rows of the first image; and determining the shake amount of the first image based on the gyroscope data of the M rows. In this way, the electronic device can calculate the shake amount of a frame of image on the sensor; only after the shake amounts of multiple frames have been calculated can a higher-quality image be selected from them for the electronic device to process, so that the image seen by the user is clearer.
In a possible implementation, the acquiring gyroscope data of M rows in the first image specifically includes: acquiring exposure time information of the M rows of the first image, where the exposure time information includes the start moment and end moment of the exposure of each of the M rows; acquiring timestamp information and corresponding gyroscope data, where the timestamp information is the time at which the corresponding gyroscope data is collected; and when the timestamp information falls within the exposure time information of a corresponding row of the M rows, acquiring the gyroscope data within the exposure time information of that row. In this way, selecting gyroscope data based on the timestamp information and the exposure time information captures the user's shaking while the first image is exposed, which guarantees the temporal accuracy of the gyroscope data and hence the accuracy of the acquired shake amount.
In a possible implementation, the determining the shake amount of the first image based on the gyroscope data of the M rows specifically includes: expressing the gyroscope data of the i-th row of the M rows through a shake function F_i as:
F_i = [g_i^1, g_i^2, …, g_i^n, …, g_i^j]
where j indicates that the exposure of the i-th row has gyroscope data corresponding to j pieces of timestamp information;
integrating the Q-dimensional gyroscope data corresponding to the n-th of the j pieces of timestamp information of each of the M rows to obtain the spatial position p_i^n corresponding to the n-th timestamp information of the i-th row:
Figure PCTCN2022140192-appb-000001
where f is the focal length, k is an integer from 1 to Q, g_i^n_k is the k-th dimension of the gyroscope data of the n-th timestamp information of the i-th row, and Δt_i^n is the time difference between the n-th timestamp information of the i-th row and the previous timestamp information;
determining the position function p_i of the i-th row based on the spatial position corresponding to each timestamp information of the i-th row:
p_i = [0, p_i^1, p_i^2, …, p_i^j]
where p_i^j represents the spatial position corresponding to the gyroscope data g_i^j of the j-th timestamp information of the i-th row;
determining the shake amount S_i of the i-th row as the difference between the maximum and minimum values of the position function p_i of the i-th row:
S_i = max(p_i) − min(p_i)
where max(p_i) is the maximum among the j position values of the i-th row, and min(p_i) is the minimum among the j position values of the i-th row;
determining the shake amount S of the first image as the mean of the shake amounts of the M rows:
S = (1/M) · Σ_{i=1}^{M} S_i
In this way, the electronic device can effectively calculate the shake amount of each frame of image, preparing for the subsequent discrimination and selection and guaranteeing the completeness and reliability of the solution.
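The row-wise shake computation above — integrate the gyro samples of a row into an image-plane position track p_i = [0, p_i^1, …, p_i^j], take S_i = max(p_i) − min(p_i), and average over the M sampled rows — can be sketched in Python. This is a simplified one-dimensional illustration under stated assumptions (one angular-rate value per sample, displacement step approximated as f · ω · Δt in pixels); the patent's actual integral is over Q-dimensional gyro data.

```python
import numpy as np

def row_shake(gyro_rates: np.ndarray, dts: np.ndarray, f: float) -> float:
    """Shake amount S_i of one row: integrate angular rate samples into a
    position track p_i = [0, p_i^1, ..., p_i^j], then return max - min."""
    # 1-D simplification: each displacement step is ~ f * omega * dt (pixels)
    steps = f * gyro_rates * dts
    positions = np.concatenate(([0.0], np.cumsum(steps)))
    return float(positions.max() - positions.min())

def frame_shake(rows: list) -> float:
    """Shake amount S of a frame: mean of the M sampled rows' shake amounts,
    i.e. S = (1/M) * sum(S_i). Each row is (gyro_rates, dts, focal_length)."""
    return float(np.mean([row_shake(g, dt, f) for g, dt, f in rows]))
```

For instance, a row whose device rotates one way and then back traces positions [0, +d, 0], so its shake amount is d even though the net rotation is zero, which is exactly the max-minus-min behavior the formulas specify.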
In a possible implementation, the method further includes: acquiring the optical compensation amount corresponding to each timestamp information of the i-th row; and the determining the position function p_i of the i-th row based on the spatial position corresponding to each timestamp information of the i-th row further includes:
determining the position function p_i of the i-th row based on the spatial position corresponding to each timestamp information of the i-th row and the optical compensation amount as:
p_i = [0, p_i^1 − o_i^1, p_i^2 − o_i^2, …, p_i^j − o_i^j]
where o_i^j is the optical compensation amount corresponding to the j-th timestamp information of the i-th row.
In this way, for an electronic device with optical compensation capability, the optical compensation amount is considered and handled in advance in the process of acquiring the shake amount, so as to guarantee the accuracy of the acquired shake amount of this frame of image.
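The compensated position function above (each integrated position p_i^n reduced by the optical compensation amount o_i^n before taking max − min) can be sketched as a small helper. The array-based interface is an assumption for illustration; in practice the OIS offsets would come from the stabilization hardware's telemetry.

```python
import numpy as np

def compensated_row_shake(positions: np.ndarray, ois_offsets: np.ndarray) -> float:
    """Row shake amount with OIS compensation: build the residual track
    [0, p_i^1 - o_i^1, ..., p_i^j - o_i^j] and return its max - min."""
    residual = np.concatenate(([0.0], positions - ois_offsets))
    return float(residual.max() - residual.min())
```

If OIS perfectly cancelled the motion (offsets equal to positions), the residual track would be all zeros and the row shake amount would be 0, which is the intended effect of subtracting the compensation before measuring shake.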
In a possible implementation, the preset threshold ranges from 0.1 to 1.0 pixel. In this way, the preset threshold can effectively screen the current first image: a small shake amount means that this frame shakes very little and will not be blurred by the shaking of the electronic device, so this frame can be output from the sensor for subsequent processing, thereby guaranteeing the clarity and quality of the image.
In a second aspect, this application provides an electronic device, including a touchscreen, one or more processors, and one or more memories, where the one or more memories are configured to store computer program code, the computer program code includes computer instructions, and the one or more processors execute the computer instructions to cause the electronic device to execute:
acquiring N frames of images in response to a first operation, where the first operation is an operation acting on a shooting control, the N frames of images are N frames in a preview picture captured by a camera, and N is a positive integer;
in the process of sequentially acquiring the shake amount of each of the N frames of images, determining a target image as the output original image, where the original image is an image acquired by the electronic device through the sensor of the camera; and the target image is an image determined from the N frames of images according to the shake amount and meeting the shake amount requirement.
In the embodiments of this application, the electronic device can screen and discriminate the acquired original images before image processing, so that images with large shake amounts are filtered out and the clarity of the original image is improved, which improves the quality of the images captured by the electronic device and the user's photographing experience.
In a possible implementation, the determining a target image as the output original image is specifically executed as: extracting a first image from the N frames of images; acquiring the shake amount of the first image; determining the first image as the target image when the shake amount of the first image is less than or equal to a preset threshold; when the shake amount of the first image is greater than the preset threshold, extracting the next frame of the N frames of images as a new first image and executing the step of acquiring the shake amount of the first image; and if the shake amounts of all N frames of images are greater than the preset threshold, determining the image with the smallest shake amount among the N frames as the target image. In this way, an image satisfying the preset threshold can be selected as the output original image whenever possible; if no image in the N frames satisfies the preset threshold, the one with the smallest shake amount is selected, so the electronic device guarantees the clarity of the selected original image as far as possible, thereby improving the user's photographing experience.
In a possible implementation, the acquiring N frames of images in response to a first operation is specifically executed as: in response to the first operation, determining the moment of the first operation as a first moment; and acquiring N consecutive frames of images from the sensor starting from a first duration before the first moment. In this way, so that the image captured by the electronic device is the picture the user wants to shoot, the delay between the user pressing the shooting control and the electronic device exposing the picture needs to be considered when acquiring the first image; the first duration ensures that the N frames of images correspond to the picture the user wants to shoot, thereby improving the user's shooting experience.
In a possible implementation, the acquiring the shake amount of the first image is specifically executed as: acquiring gyroscope data of M rows in the first image, where M is a positive integer and M is less than or equal to the number of pixel rows of the first image; and determining the shake amount of the first image based on the gyroscope data of the M rows. In this way, the electronic device can calculate the shake amount of a frame of image on the sensor; only after the shake amounts of multiple frames have been calculated can a higher-quality image be selected from them for the electronic device to process, so that the image seen by the user is clearer.
In a possible implementation, the acquiring gyroscope data of M rows in the first image is specifically executed as:
acquiring exposure time information of the M rows of the first image, where the exposure time information includes the start moment and end moment of the exposure of each of the M rows;
acquiring timestamp information and corresponding gyroscope data, where the timestamp information is the time at which the corresponding gyroscope data is collected;
when the timestamp information falls within the exposure time information of a corresponding row of the M rows, acquiring the gyroscope data within the exposure time information of that row.
In this way, selecting gyroscope data based on the timestamp information and the exposure time information captures the user's shaking while the first image is exposed, which guarantees the temporal accuracy of the gyroscope data and hence the accuracy of the acquired shake amount.
In a possible implementation, the determining the shake amount of the first image based on the gyroscope data of the M rows is specifically executed as:
expressing the gyroscope data of the i-th row of the M rows through a shake function F_i as:
F_i = [g_i^1, g_i^2, …, g_i^n, …, g_i^j]
where j indicates that the exposure of the i-th row has gyroscope data corresponding to j pieces of timestamp information;
integrating the Q-dimensional gyroscope data corresponding to the n-th of the j pieces of timestamp information of each of the M rows to obtain the spatial position p_i^n corresponding to the n-th timestamp information of the i-th row:
Figure PCTCN2022140192-appb-000003
where f is the focal length, k is an integer from 1 to Q, g_i^n_k is the k-th dimension of the gyroscope data of the n-th timestamp information of the i-th row, and Δt_i^n is the time difference between the n-th timestamp information of the i-th row and the previous timestamp information;
determining the position function p_i of the i-th row based on the spatial position corresponding to each timestamp information of the i-th row:
p_i = [0, p_i^1, p_i^2, …, p_i^j]
where p_i^j represents the spatial position corresponding to the gyroscope data g_i^j of the j-th timestamp information of the i-th row;
determining the shake amount S_i of the i-th row as the difference between the maximum and minimum values of the position function p_i of the i-th row:
S_i = max(p_i) − min(p_i)
where max(p_i) is the maximum among the j position values of the i-th row, and min(p_i) is the minimum among the j position values of the i-th row;
determining the shake amount S of the first image as the mean of the shake amounts of the M rows:
S = (1/M) · Σ_{i=1}^{M} S_i
In this way, the electronic device can effectively calculate the shake amount of each frame of image, preparing for the subsequent discrimination and selection and guaranteeing the completeness and reliability of the solution.
In a possible implementation, the electronic device further executes:
acquiring the optical compensation amount corresponding to each timestamp information of the i-th row;
and the determining the position function p_i of the i-th row based on the spatial position corresponding to each timestamp information of the i-th row further executes:
determining the position function p_i of the i-th row based on the spatial position corresponding to each timestamp information of the i-th row and the optical compensation amount as:
p_i = [0, p_i^1 − o_i^1, p_i^2 − o_i^2, …, p_i^j − o_i^j]
where o_i^j is the optical compensation amount corresponding to the j-th timestamp information of the i-th row.
In this way, for an electronic device with optical compensation capability, the optical compensation amount is considered and handled in advance in the process of acquiring the shake amount, so as to guarantee the accuracy of the acquired shake amount of this frame of image.
In a possible implementation, the preset threshold ranges from 0.1 to 1.0 pixel. In this way, the preset threshold can effectively screen the current first image: a small shake amount means that this frame shakes very little and will not be blurred by the shaking of the electronic device, so this frame can be output from the sensor for subsequent processing, thereby guaranteeing the clarity and quality of the image.
In a third aspect, this application provides an electronic device, including a touchscreen, one or more processors, and one or more memories. The one or more processors are coupled to the touchscreen, the camera, and the one or more memories; the one or more memories are configured to store computer program code including computer instructions which, when executed by the one or more processors, cause the electronic device to execute the photographing method in any possible implementation of any of the above aspects.
In a fourth aspect, this application provides an electronic device, including one or more functional modules, where the one or more functional modules are configured to execute the photographing method in any possible implementation of any of the above aspects.
In a fifth aspect, an embodiment of this application provides a computer storage medium, including computer instructions which, when run on an electronic device, cause the electronic device to execute the photographing method in any possible implementation of any of the above aspects.
In a sixth aspect, an embodiment of this application provides a computer program product which, when run on a computer, causes the computer to execute the photographing method in any possible implementation of any of the above aspects.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the hardware structure of an electronic device 100 provided by an embodiment of this application;
FIG. 2 is a schematic structural diagram of a camera provided by an embodiment of this application;
FIG. 3 is a schematic diagram of a group of preview interfaces provided by an embodiment of this application;
FIG. 4 is a schematic diagram of an image exposure process provided by an embodiment of this application;
FIG. 5 is a schematic diagram of a position-function distribution provided by an embodiment of this application;
FIG. 6 is a schematic flowchart of a photographing method provided by an embodiment of this application;
FIG. 7 is a schematic comparison diagram of shooting effects provided by an embodiment of this application;
FIG. 8 is a schematic flowchart of another photographing method provided by an embodiment of this application;
FIG. 9 is a schematic diagram of the software structure of an electronic device 100 provided by an embodiment of this application.
Detailed Description
The technical solutions in the embodiments of this application are described clearly and thoroughly below with reference to the accompanying drawings. In the description of the embodiments of this application, unless otherwise stated, "/" means "or"; for example, A/B may mean A or B. "And/or" in the text merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, in the description of the embodiments of this application, "multiple" means two or more.
The terms "first" and "second" below are used for description purposes only and shall not be understood as implying relative importance or implicitly indicating the number of the indicated technical features. Therefore, a feature limited by "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of this application, unless otherwise stated, "multiple" means two or more.
The related technologies involved in the embodiments of this application are introduced below:
(1) Shooting process of an electronic device
When an electronic device has a photographing function, its photographing process is as follows: the user presses the control for shooting; in response to this user operation, the electronic device opens the aperture and the shutter, light enters the camera through the lens and reaches the sensor, and the sensor collects and records the light and converts it into a current signal, which is handed to image signal processing (ISP) for processing; finally, the processed image is handed to the processor of the electronic device for storage.
The role of the ISP is to perform computational processing on the signal output by the sensor, that is, to perform linear correction, noise removal, dead-pixel repair, color interpolation, white-balance correction, exposure correction, and other processing on the image collected by the sensor, which greatly improves the clarity and imaging quality of the processed image.
In the shooting process of an electronic device, different resolutions yield different photo clarity. The more pixels (the smallest photosensitive units) a photo has on the sensor, the clearer the image. Therefore, the higher the pixel count of the camera, the clearer the photo and the higher the resolution of the captured photo.
(2) Shutter
The shutter is a device that controls how long light enters the camera and thus determines the exposure time of a picture. The longer the shutter stays open, the more light enters the camera and the longer the exposure time of the picture; the shorter the shutter stays open, the less light enters the camera and the shorter the exposure time. Shutter speed is the time the shutter stays open, that is, the interval from the shutter's open state to its closed state; during this period, an object can leave an image on the film. The faster the shutter speed, the clearer a moving object appears on the image sensor; conversely, the slower the shutter speed, the blurrier the picture of a moving object.
Shutters can be divided into rolling shutters and global shutters. A global shutter exposes the entire scene at the same time, while a rolling shutter exposes the sensor row by row.
A rolling shutter opens and closes like a curtain: when the exposure starts, the sensor starts exposing row by row until all pixels have been exposed, and the exposure of all rows is completed within a very short time.
Rolling-shutter exposure is often accompanied by the jello effect: under improper exposure or fast object motion, phenomena such as partial exposure, slanted graphics, or wobble appear. The more severe the jello effect, the blurrier the captured image.
Shooting parameters may include the shutter, exposure time, aperture value, exposure value, ISO, and so on. The electronic device can implement auto focus (AF), automatic exposure (AE), auto white balance (AWB), and 3A (AF, AE, and AWB) through algorithms to adjust these shooting parameters automatically.
(3) Gyroscope sensor (gyrometer)
A gyroscope is an angular-motion detection apparatus that uses the moment of momentum of a high-speed rotor to sense, relative to inertial space, the angular motion of its housing about one or two axes orthogonal to the rotation axis. Angular-motion detection apparatuses made on other principles that serve the same function are also called gyroscopes. That is, a gyroscope can measure the magnitude of an object's angular rotation in space.
(4) OIS
OIS (optical image stabilization) technology improves the performance of a camera assembly by canceling the image blur caused by camera instability or shaking and/or compensating for rolling-shutter distortion during image capture. OIS can compensate to a large extent for the effects of camera motion, including rotation, translation, shutter effects, and so on.
For example, suppose the electronic device moves to the left while shooting, say by 2 cm; OIS can compensate 0.5 cm to the right, thereby reducing the blur caused by the motion of the electronic device.
The apparatus involved in the embodiments of this application is introduced below.
FIG. 1 is a schematic diagram of the hardware structure of an electronic device 100 provided by an embodiment of this application.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, a barometric pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It can be understood that the structure illustrated in this embodiment of the present invention does not constitute a specific limitation on the electronic device 100. In other embodiments of this application, the electronic device 100 may include more or fewer components than shown, combine some components, split some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units; for example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), and the like. Different processing units may be independent devices or may be integrated in one or more processors.
The controller may be the nerve center and command center of the electronic device 100. The controller can generate operation control signals according to instruction operation codes and timing signals, and control instruction fetching and execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache, which can hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from this memory, which avoids repeated accesses, reduces the waiting time of the processor 110, and thus improves the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 can be used to connect a charger to charge the electronic device 100, and can also be used to transmit data between the electronic device 100 and peripheral devices. It can also be used to connect headsets and play audio through them, and to connect other electronic devices, such as AR devices.
The charging management module 140 is configured to receive charging input from a charger. While charging the battery 142, the charging management module 140 can also supply power to the electronic device 100 through the power management module 141.
The power management module 141 is configured to connect the battery 142, the charging management module 140, and the processor 110. The power management module 141 receives input from the battery 142 and/or the charging management module 140 and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, the wireless communication module 160, and the like.
The wireless communication function of the electronic device 100 can be implemented through the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and so on.
The antenna 1 and the antenna 2 are configured to transmit and receive electromagnetic wave signals. Each antenna in the electronic device 100 can be used to cover a single communication band or multiple communication bands. Different antennas can also be multiplexed to improve antenna utilization.
The mobile communication module 150 can provide wireless communication solutions including 2G/3G/4G/5G applied to the electronic device 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and the like. The mobile communication module 150 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation. The mobile communication module 150 can also amplify signals modulated by the modem processor and convert them into electromagnetic waves for radiation through the antenna 1.
The modem processor may include a modulator and a demodulator. The modulator modulates the low-frequency baseband signal to be sent into a medium- or high-frequency signal; the demodulator demodulates the received electromagnetic wave signal into a low-frequency baseband signal and then transmits it to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is passed to the application processor, which outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194.
The wireless communication module 160 can provide wireless communication solutions applied to the electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite systems (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110. The wireless communication module 160 can also receive signals to be sent from the processor 110, frequency-modulate and amplify them, and convert them into electromagnetic waves for radiation through the antenna 2.
In some embodiments, the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with networks and other devices through wireless communication technologies.
The electronic device 100 implements the display function through the GPU, the display screen 194, the application processor, and so on. The GPU is a microprocessor for image processing that connects the display screen 194 and the application processor; it performs mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and so on. The display screen 194 includes a display panel, which may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini LED, a Micro LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The electronic device 100 can implement the acquisition function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor, and so on.
The ISP is used to process data fed back by the camera 193. For example, when taking a photo, the shutter opens, light is transmitted through the lens to the photosensitive element of the camera, the light signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing, converting it into an image or video visible to the naked eye. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture static images or videos. An object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the light signal into an electrical signal and then transmits it to the ISP to be converted into a digital image or video signal. The ISP outputs the digital image or video signal to the DSP for processing; the DSP converts it into an image or video signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1. For example, in some embodiments, the electronic device 100 can use N cameras 193 to acquire images with multiple exposure coefficients, and then, in video post-processing, synthesize an HDR image through HDR technology based on the images with multiple exposure coefficients.
The digital signal processor is used to process digital signals; besides digital image or video signals, it can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform or the like on the frequency-point energy.
The video codec is used to compress or decompress digital video. The electronic device 100 can support one or more video codecs, so that it can play or record videos in multiple encoding formats, such as Moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, and MPEG4.
The external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement the data storage function, for example, saving music, video, and other files in the external memory card.
The internal memory 121 can be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications and data processing of the electronic device 100 by running the instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The program storage area can store the operating system and the applications required by at least one function (such as a sound playback function and an image/video playback function); the data storage area can store data created during the use of the electronic device 100 (such as audio data and a phone book).
The sensor module 180 may include one or more sensors, which may be of the same type or different types. It can be understood that the sensor module 180 shown in FIG. 1 is only an exemplary division; other divisions are possible, which is not limited in this application.
The gyroscope sensor 180B can be used to determine the motion posture of the electronic device 100. In some embodiments, the angular velocities of the electronic device 100 around three axes (namely the x, y, and z axes) can be determined through the gyroscope sensor 180B. The gyroscope sensor 180B can be used for shooting stabilization.
The barometric pressure sensor 180C is used to measure air pressure. In some embodiments, the electronic device 100 calculates altitude from the air pressure value measured by the barometric pressure sensor 180C to assist positioning and navigation.
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 can use the magnetic sensor 180D to detect the opening and closing of a flip cover.
The acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device 100 in all directions (generally three axes), and can detect the magnitude and direction of gravity when the electronic device 100 is stationary. It can also be used to recognize the posture of the electronic device 100 and is applied to landscape/portrait switching, pedometers, and other applications.
The touch sensor 180K is also called a "touch panel". The touch sensor 180K may be provided on the display screen 194; the touch sensor 180K and the display screen 194 form a touchscreen, also called a "touch screen". The touch sensor 180K is used to detect touch operations acting on or near it. The touch sensor can pass a detected touch operation to the application processor to determine the touch event type; visual output related to the touch operation can be provided through the display screen 194. In other embodiments, the touch sensor 180K may also be provided on the surface of the electronic device 100 at a position different from that of the display screen 194.
The keys 190 include a power key, volume keys, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device 100 can receive key input and generate key signal input related to user settings and function control of the electronic device 100.
FIG. 2 is a schematic structural diagram of a camera disclosed in this application. As shown in FIG. 2, the camera of the electronic device may include a lens, a sensor, an analog-to-digital converter (ADC), and a digital signal processing chip (DSP). Light can be projected through the lens onto the surface of the sensor. The ADC then converts each pixel on the sensor into an electrical signal and further into a digital image signal. The DSP can then process the image through ISP algorithms and transmit it through the IO interface to the processor of the mobile phone for post-processing.
In actual shooting by the user, the natural shaking of the human hand, the movement of the user, the fast movement of the photographed subject, the AF focus position, and other causes result in poor image clarity.
In order to present a clear image to the user and reduce the above image blur, the electronic device can perform shake correction or compensation on the original image captured by the sensor, and can also perform ISP tuning and post-processing algorithms on the original image, for example, deblurring. However, all of the above processing is performed on the captured original image (the image acquired from the sensor), and the final imaging quality also depends on the clarity of the original image collected from the sensor. The blurrier the image acquired from the sensor, the more obvious the jello effect, the harder the later processing of the original image, and the more limited the processing effect; therefore, the captured image has poor clarity and the user's photographing experience is poor.
In the embodiments of this application, the electronic device can acquire the current N frames of images and their shake amounts while the camera is shooting. When the shake amount of one of the current N frames is less than (or less than or equal to) a preset threshold, the electronic device can determine this frame as the original image output by the sensor; when the shake amount of a frame is greater than or equal to (or greater than) the preset threshold, the shake amount of the next frame is acquired and compared in the same way. When the shake amounts of all N frames are greater than or equal to (or greater than) the preset threshold, the electronic device can determine the frame with the smallest shake amount among the N frames as the output original image. The shake amount of an image indicates its degree of shaking: the larger the shake amount, the more severe the shaking and the more serious the blur. N is a positive integer, and the N frames of images may be the images presented on the sensor after continuous exposure by the camera of the electronic device. The output original image refers to an image acquired by the sensor and not yet processed by the electronic device. In the above implementation, the N frames of images acquired by the electronic device are screened and discriminated, so that images with large shake amounts can be filtered out and the clarity of the output original image improved, making the post-processed image clearer and of higher quality, thereby improving the clarity of the images captured by the electronic device and the user's photographing experience.
FIG. 3 shows a group of schematic preview interfaces. As shown in (A) of FIG. 3, the electronic device can display a page 10 on which application icons are placed; the page includes multiple application icons 101 (for example, a weather application icon, a calculator application icon, a settings application icon, a mail application icon, a music application icon, a video application icon, a gallery application icon, a camera application icon, and so on). A page indicator is also displayed below the multiple application icons to indicate the positional relationship between the currently displayed page and other pages. Below the page indicator are multiple tray icons (for example, a dialer application icon, a messages application icon, and a contacts application icon), and the tray icons remain displayed when pages are switched. In some embodiments, the above page may also include multiple application icons and a page indicator; the page indicator may not be part of the page and may exist alone, and the above icons are also optional, which is not limited in the embodiments of this application.
The electronic device can receive a user input operation (for example, a tap) acting on the camera icon, and in response to this input operation, the electronic device can display the shooting interface 20 shown in (B) of FIG. 3.
As shown in (B) of FIG. 3, the shooting interface 20 may include a playback control 201, a shooting control 202, a camera switch control 203, a picture captured by the camera (preview picture) 205, a zoom ratio control 206A, a settings control 206B, a flash switch 206C, and one or more shooting mode controls 204 (for example, a "night mode" control 204A, a "portrait mode" control 204B, a "normal photo mode" control 204C, a "short video" control 204D, a "video mode" control 204E, and a more-modes control 204F). The playback control 201 can be used to display captured images. The shooting control 202 is used to trigger saving of the image captured by the camera. The camera switch control 203 can be used to switch the camera used for taking photos. The settings control 206B can be used to set the photographing function. The zoom ratio control 206A can be used to set the zoom factor of the camera; it can trigger the electronic device to display a zoom slider, which can receive the user's upward (or downward) slide operation to make the electronic device increase (or decrease) the zoom ratio of the camera. Possibly, the zoom ratio control 206A can cause the electronic device to display a zoom-in control and a zoom-out control; the zoom-in control can receive and respond to user input to trigger the electronic device to increase the zoom ratio of the camera, and the zoom-out control can receive and respond to user input to trigger the electronic device to decrease it. The flash switch 206C can be used to turn the flash on or off. A shooting mode control can be used to trigger the image processing flow corresponding to that shooting mode; for example, the "night mode" control 204A can be used to trigger an increase in brightness and color richness in the captured image, and the "portrait mode" control 204B can be used to trigger blurring of the background behind a person in the captured image. As shown in (B) of FIG. 3, the shooting mode currently selected by the user is "normal photo mode".
When the electronic device displays the preview picture 205, the electronic device has already started continuous exposure, acquiring images of the current scene and continuously displaying the exposed images on the screen. As shown in (B) of FIG. 3, the preview picture 205 of the electronic device can show the posture of a dancing performer.
When the electronic device exposes, the shutter speed can be: 1, 1/2, 1/4, 1/8, 1/15, 1/30, 1/60, 1/125, 1/250, 1/500, 1/1000, 1/2000 (seconds), and so on.
When the user touches the shooting control 202, after the electronic device obtains the touch operation acting on the shooting control 202, it can acquire and store one captured image.
在本申请实施例中,电子设备在曝光的过程中,会获取呈现在传感器上的原始图像的抖动量。以下说明电子设备通过陀螺仪传感器的数据获取图像的抖动量的过程:
在电子设备拍摄的同时,一并获取陀螺仪的陀螺仪数据,陀螺仪的数据分别对应与各个帧图像。通过陀螺仪数据能够反映电子设备的抖动情况,从而可以确定拍摄每一帧图像时电子设备的抖动程度,即对应每一帧图像的抖动量。
第一步,电子设备在拍摄的情况下,获取每一帧图像的陀螺仪数据。
在本申请实施例中,曝光方式主要针对于卷帘快门的曝光方式。在卷帘快门曝光的过程中,所有的像素是按照行顺序开始进行曝光。这些像素的图像信息并非同一时刻的,而是依次顺序的。
图4是本申请实施例公开的一种图像曝光过程的示意图。如图4中的(A)所示是曝光对应的图像帧,假设图像的大小为1920*1080。电子设备可以按照从上到下的开始曝光顺序进行曝光(也可以是从下到上,不做限定)。如图4中的(B)所示的图像表示这帧图 像中阴影部分的曝光过程,电子设备可以记录每一行的曝光时间信息。其中,曝光时间信息可以包括某一行曝光的起始时刻和结束时刻,即图像在曝光某一行的开始时间点和结束时间点。从而可以得到,某一行的曝光时长为这一行曝光的结束时刻减去起始时刻的时间长度。
在用户拍摄过程中,图像的抖动是整体抖动的,即图像的某一行或者某几行的抖动幅度(或抖动程度)与整张图像的抖动幅度是一致的。因此,衡量一帧图像的抖动情况,可以通过一帧图像中的一行或几行抖动的情况确定。此时,电子设备可以先获取一张图像中某一行或者某几行的曝光时间,基于曝光时间获取对应时间电子设备的陀螺仪数据,便能够得到这一张图像的陀螺仪数据。
为了获取这帧图像某一行或者某多行像素的曝光时间信息,电子设备可以选择其中的M行曝光时间信息。其中,M为正整数,M小于或等于这帧图像像素的行数(如1920)。如图4中的(B)所示,电子设备可以选择其中的4(M)行,获取其曝光时间信息。第一行像素曝光的(line1)的起始时刻为t1,结束时刻为t2;第二行(line2)的起始时刻为t3,结束时刻为t4;第三行(line3)的起始时刻为t5,结束时刻为t6;第四行(line4)的起始时刻为t7,结束时刻为t8。因此,第一行的曝光时长为t2-t1;第二行的曝光时长为t4-t3;第三行的曝光时长为t6-t5;第四行的曝光时长为t8-t7。需要说明的是,上述的第一行到第四行不意味着针对于曝光整帧图像的顺序,而是从整张图像中选取出来的M行的顺序。其中,相邻两行的曝光起始时刻相差时间基本是相同的,即可以理解为,电子设备按照顺序逐行开始曝光。
在电子设备曝光的同时,陀螺仪按照特定的采样频率获取陀螺仪(gyro)数据。即电子设备可以获取到时间戳信息和gyro数据。如图4中的(B)所示,时间戳信息表示获取gyro数据时刻的信息,时间戳信息与gyro数据一一对应。例如,在时间戳信息为ts1时刻,gyro数据为g1;在时间戳信息为ts2时刻,gyro数据为g2;在时间戳信息为ts3时刻,gyro数据为g3……其中,相邻的两个时间戳信息之间的时间间隔是相同的,时间间隔即采样频率的倒数。例如,1/1000s,由于电子设备不同,陀螺仪的采样频率可能不同,本申请实施例不加限定。
需要说明的是,在本申请实施例中,陀螺仪(gyro)数据可以是x轴、y轴、z轴相关的数据,也可以包括速度、加速度等数据,还可以包括姿态变化数据等,本申请实施例对此不做特殊限定。例如,g1可以包括x轴的转动角度,y轴的转动角度和z轴的转动角度数据,对应地,gyro数据是三维数据。
此时,电子设备已知M行的曝光时间信息,以及陀螺仪的时间戳信息和gyro数据。电子设备可以基于每一行的曝光时间获取M行曝光期间对应时间戳的gyro数据,从而获取这一张(帧)图像的gyro数据。
示例性地,如图4中的(B)所示,已知第一行(line1)的曝光起始时刻为t1,结束时刻为t2。时间戳信息落入t1到t2时间段的有ts1到ts5。因此,电子设备可以确定第一行曝光对应的陀螺仪数据为g1、g2、g3、g4和g5。第二行(line2)曝光中,时间戳信息落入t3到t4时间段的有ts2到ts6,电子设备可以确定第二行曝光对应的陀螺仪数据为g2到g6。第三行(line3)曝光中,时间戳信息落入t5到t6时间段的有ts3到ts7,电子设备可以确定第三行曝光对应的陀螺仪数据为g3到g7……从而可以确定这帧图像中M行曝光对应的gyro数据。
示例性地,已知获取到的时间戳信息ts1到ts8,依次分别对应的陀螺仪数据为g1、g2、g3、g4、g5、g6、g7和g8。电子设备可以获取某一帧图像所有行的曝光时间信息,确定上述的已知的时间戳信息落入了哪些行的曝光时间信息的范围内。假设ts1到ts5落入第一行曝光时间t1到t2的范围内,ts2到ts6落入第二行曝光时间t3到t4的范围内,ts3到ts7落入第三行曝光时间t5到t6的范围内,且ts4到ts8落入第四行曝光时间t7到t8的范围内。便可以确定对应行曝光的陀螺仪数据,即第一行曝光对应的陀螺仪数据为g1到g4;第二行曝光对应的陀螺仪数据为g2到g6;第三行曝光对应的陀螺仪数据为g3到g7;第四行曝光对应的陀螺仪数据为g4到g8。
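上述“时间戳落入某行曝光时间范围内,则该时间戳的陀螺仪数据归入该行”的匹配过程,可以用下面的Python示意代码表示。其中gyro_per_row为说明用的假设函数名,数据沿用图4中的(B)的示意:

```python
def gyro_per_row(row_windows, gyro_samples):
    """row_windows: {行号: (曝光起始时刻, 结束时刻)}
    gyro_samples: [(时间戳, gyro数据)],按时间递增排列。
    返回每一行曝光时间范围内的gyro数据列表。"""
    result = {}
    for row, (t_start, t_end) in row_windows.items():
        result[row] = [g for ts, g in gyro_samples if t_start <= ts <= t_end]
    return result

# 与图4中的(B)对应的示意数据:时间戳ts1..ts8依次对应g1..g8
samples = [(n, f"g{n}") for n in range(1, 9)]
# 第1行曝光区间覆盖ts1..ts5,第2行覆盖ts2..ts6,依此类推
windows = {1: (1, 5), 2: (2, 6), 3: (3, 7), 4: (4, 8)}
rows = gyro_per_row(windows, samples)
# rows[1] == ["g1", "g2", "g3", "g4", "g5"]
```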
还需要说明的是,在上述的实施方式中,可以先获取陀螺仪的时间戳信息和陀螺仪数据,后获取图像曝光时间信息;也可以先获取图像曝光时间信息,后获取陀螺仪的时间戳信息和陀螺仪数据,获取的先后顺序不加限定。
第二步,电子设备基于陀螺仪数据计算每一帧图像的抖动量。
在获取M行中每一行的gyro数据之后,电子设备可以计算每一行的抖动量,下面具体说明:
由于陀螺仪数据的衡量角度较多,因此这里先说明陀螺仪数据以及对应的时间戳信息存在哪些表示方式。陀螺仪数据需要通过i、n、k三个下标具体确定。
其中,i表示第i行曝光(i为从1到M的正整数),g_i表示第i行的所有陀螺仪数据,ts_i表示第i行的所有时间戳信息。例如,图4中的(B)所示的内容,假设共获取4(M)行的陀螺仪数据,便可以确定M等于4,且i的取值范围为1、2、3和4。
n表示某一行曝光中第n个时间戳信息的顺序(n为从1到j的正整数),也可以理解为第n列,这一行共有j个时间戳信息。例如,g_i^n表示第i行曝光的第n个时间戳信息对应的陀螺仪数据,ts_i^n表示第i行曝光的第n个时间戳信息,g_i^n与ts_i^n一一对应。其中,j为正整数。
k表示陀螺仪数据的第k个维度(k为从1到Q的正整数)。假设电子设备的每一个时间戳信息对应的陀螺仪数据均有Q个维度,可以理解为每一行中的每一列均包含一组陀螺仪数据,每一组陀螺仪数据有Q个维度,那么g_{i,n,k}便是第i行第n个时间戳信息对应的陀螺仪数据中第k维的数据。其中,Q为正整数。例如,陀螺仪数据包括x轴和y轴的数据时,Q等于2,k的取值范围为1和2。
在M行的像素曝光中,第i行的gyro数据可以通过抖动函数进行表示,即
F_i=[g_i^1,g_i^2,…,g_i^n,…,g_i^j]
其中,j表示第i行曝光一共有j个时间戳信息对应的gyro数据。例如,在上述图4对应的描述中,第一行的F_1=[g_1^1,g_1^2,g_1^3,g_1^4,g_1^5]对应图4中的(B)的[g1,g2,g3,g4,g5],第二行的F_2=[g_2^1,g_2^2,g_2^3,g_2^4,g_2^5]对应图4中的(B)的[g2,g3,g4,g5,g6]……此外,每一行的j的大小不一定相等。
在获取到第i行的抖动函数之后,可以基于抖动函数确定第i行的抖动量。
一种可能的情况下,电子设备可以先对第i行抖动函数中每一组gyro数据做积分,确定每一个时间戳时电子设备的空间位置或姿态p_i^n。每一个时间戳信息对应的电子设备的空间位置或姿态p_i^n可以通过位置函数p_i表示。电子设备可以基于抖动函数获取到第i行的位置函数p_i,可以表示为:
p_i=[0,p_i^1,p_i^2,…,p_i^j]
其中,p_i^1表示第i行第1个时间戳信息对应陀螺仪数据g_i^1的空间位置;……;p_i^j表示第i行第j个时间戳信息对应陀螺仪数据g_i^j的空间位置。
在获取第i行的位置函数之前,电子设备需要获取第i行每一个陀螺仪数据对应的空间位置,以下具体说明:
假设gyro数据为Q维数据,电子设备可以对Q维陀螺仪数据做累积求和(积分),得到第n个时间戳信息对应的空间位置p_i^n为:
p_i^n = f·Σ_{m=1}^{n}(Σ_{k=1}^{Q} g_{i,m,k})·Δt_i^m
其中,f表示焦距(focal length),f可以作为累积求和的系数,可以提前从摄像头获取。k为从1到Q的正整数。其中,g_{i,n,k}为第i行第n个时间戳信息的陀螺仪数据中第k维的数据,例如,g_i^n包括g_{i,n,1},g_{i,n,2},g_{i,n,3},……,g_{i,n,Q}。Δt_i^n为第i行的第n个时间戳信息和上一个(第n-1个)时间戳信息之间的时间差,即Δt_i^n=ts_i^n-ts_i^{n-1}。例如,在上述图4中的(B)所示,第1行gyro数据对应的时间戳信息为ts1、ts2、ts3、ts4和ts5,第1行的Δt_1^2=ts2-ts1;Δt_1^3=ts3-ts2;Δt_1^4=ts4-ts3。还需要说明的是,Δt(包括Δt_1^2、Δt_1^3等)可以是一个特定的时间长度,即陀螺仪传感器采集数据的时间周期长度,即其采样频率的倒数。
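上述按时间对陀螺仪数据做累积求和(积分)得到各时间戳空间位置的过程,可以用下面的Python示意代码表示。为简化,假设相邻时间戳之间的时间差为固定的采样周期dt;row_positions及各数值均为说明用的假设,并非本申请实现的一部分:

```python
def row_positions(gyro_row, dt, f):
    """gyro_row: 第i行各时间戳的Q维陀螺仪数据,如 [[g_i11, g_i12], ...]
    dt: 相邻时间戳之间的时间差(采样周期,单位秒)
    f: 焦距,作为累积求和的系数
    返回位置函数 p_i = [0, p_i1, ..., p_ij]:对每组数据的各维求和后按时间累积。"""
    positions = [0.0]
    for sample in gyro_row:
        positions.append(positions[-1] + f * sum(sample) * dt)
    return positions

# 示意数据:3个时间戳、Q=2维的陀螺仪数据,dt=1ms,f=2.0
p = row_positions([[1.0, 0.0], [0.0, 1.0], [-2.0, 0.0]], dt=0.001, f=2.0)
```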
在获取到第i行的位置函数p_i之后,电子设备可以确定第i行的抖动量S_i。
抖动量S_i为第i行j个空间位置最大值与最小值之间的差值,S_i可以表示为:
S_i=max(p_i)-min(p_i)
其中,max(p_i)为第i行位置函数中的最大值,即max(0,p_i^1,p_i^2,…,p_i^j);min(p_i)为第i行位置函数中的最小值,即min(0,p_i^1,p_i^2,…,p_i^j)。
示例性地,图5是本申请实施例公开的一种位置函数分布示意图。图5中的纵坐标可以表示位置函数中每一行空间位置的值,横坐标可以表示对应的时间(即上述的时间戳信息)。如图5中的(A)所示,在[0,p_i^1,p_i^2,…,p_i^j]=[0,p1,p2,p3,p4,p5]中,max(p_i)=p2,min(p_i)=p5,则S_i=p2-p5。电子设备可以计算M行中每一行的抖动量S_i。
另一种可能的情况下,在电子设备具备光学防抖(optical image stabilization,OIS)功能的情况下,电子设备可以对光补偿量o进行采样,获取第i行每个时间戳信息对应的光补偿量。电子设备可以基于每个时间戳信息对应的空间位置和所述光补偿量获取第i行的位置函数p_i,可以表示为:
p_i=[0,p_i^1-o_i^1,p_i^2-o_i^2,…,p_i^j-o_i^j]
其中,p_i^1,……,p_i^j的描述与上述相同,不加赘述。o_i^1为第i行第1个时间戳信息对应的光补偿量;……;o_i^j为第i行第j个时间戳信息对应的光补偿量。
在获取到第i行的位置函数p_i之后,电子设备可以据此确定第i行的抖动量S_i。抖动量S_i的计算方法与上述方式一致,不加赘述。
示例性地,如图5中的(B)所示,在p_i=[0,p_i^1-o_i^1,p_i^2-o_i^2,…,p_i^j-o_i^j]=[0,p1-o1,p2-o2,p3-o3,p4-o4,p5-o5]中,max(p_i)=p2-o2,min(p_i)=0,则可以确定S_i=p2-o2。按照上述的方法,电子设备可以计算M行中每一行的抖动量S_i。
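在具备OIS的情况下,先从各时间戳的空间位置中减去对应的光补偿量,再取最大值与最小值之差得到该行抖动量,可以用下面的Python示意代码表示(函数名与数值均为说明用的假设):

```python
def row_shake_with_ois(positions, ois):
    """positions: [p_i1, ..., p_ij](不含首项0)
    ois: 对应各时间戳的光补偿量 [o_i1, ..., o_ij]
    返回补偿后的位置函数(首项为0)及该行抖动量 S_i = max - min。"""
    p = [0.0] + [pos - o for pos, o in zip(positions, ois)]
    return p, max(p) - min(p)

# 示意数据:补偿后最大值为 p2 - o2 = 3.0,最小值为首项 0
p, s = row_shake_with_ois([2.0, 5.0, 1.5, 0.5, 0.25],
                          [1.0, 2.0, 1.0, 0.25, 0.25])
# p == [0.0, 1.0, 3.0, 0.5, 0.25, 0.0], s == 3.0
```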
在获取到从第一行到第M行的抖动量之后,电子设备可以获取这一帧图像的抖动量S,S可以为M行每一行抖动量的均值,可以表示为:
S = (1/M)·Σ_{i=1}^{M} S_i
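将每一行的抖动量S_i(位置函数最大值减最小值)取均值得到整帧抖动量S的过程,可以用下面的Python示意代码表示(函数名与数据均为说明用的假设):

```python
def frame_shake(row_position_lists):
    """row_position_lists: M行的位置函数列表,每行形如 [0, p_i1, ..., p_ij]
    每行抖动量 S_i = max - min;整帧抖动量 S 为 M 行抖动量的均值。"""
    row_shakes = [max(p) - min(p) for p in row_position_lists]
    return sum(row_shakes) / len(row_shakes)

s = frame_shake([
    [0.0, 0.5, 1.0, 0.25],   # S_1 = 1.0 - 0.0 = 1.0
    [0.0, -0.5, 0.5, 0.0],   # S_2 = 0.5 - (-0.5) = 1.0
])
# s == 1.0
```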
至此,电子设备可以计算出传感器中一帧图像的抖动量。计算出多帧图像的抖动量后,才能从这多帧图像中选择更加优质的图像供电子设备处理,从而能够让用户看到的图像更加清晰。
请参阅图6,图6是本申请实施例提供的一种拍摄方法的流程示意图。如图6所示,该拍摄方法包含但不限于以下步骤。
S601、电子设备响应于第一操作,获取N帧图像。
其中,第一操作可以是作用于拍摄控件的操作。例如,用户可以进入预览画面的情况下(如图3中的(B)所示),点击拍摄控件,进行拍照。此时,电子设备可以接收到来自用户的第一操作。在电子设备接收到第一操作的情况下,响应于第一操作,可以获取N帧图像。
在用户拍摄的过程中,通常是在预览画面中出现了心仪的图像时,用户按下拍摄控件完成拍照。从用户确定当前需要拍摄的画面开始,到按下拍摄控件,再到电子设备中摄像头的快门和光圈进行曝光,曝光得到的画面已不再是用户需要拍摄的画面了。这个过程中,用户想要拍摄的画面的曝光时刻和实际拍摄的曝光时刻存在一定的时延。为了能够让电子设备拍摄的图像是用户想要拍摄的画面,所见即所得,在获取第一图像的过程中,需要考虑到上述的时延。此时,电子设备可以先确定获取到第一操作的时刻为第一时刻,并将第一时刻之前第一时长的时刻附近传感器获取的一帧或多帧图像确定为N帧图像,N为正整数。
其中,第一时长可以是100ms左右,还可以是其它时长,本申请实施例不加限定。需要说明的是,第一时长的确定需要考虑电子设备用户按下拍摄控件到快门进行曝光存在的时延。
示例性的,假设当前的第一时刻为2022年2月11日13:28:35.624,其前100ms(第一时长)的时刻为2022年2月11日13:28:35.524。电子设备可以确定从2022年2月11日13:28:35.524这个时刻开始曝光的N帧图像。例如,从2022年2月11日13:28:35.524开始,电子设备曝光并在传感器上获取到5帧图像,此时,电子设备可以确定这5帧图像为N帧图像。又例如,假设当前的第一时刻为2022年2月11日13:28:35.624,其前100ms的时刻为2022年2月11日13:28:35.524。电子设备可以确定2022年2月11日13:28:35.524这个时刻前后10ms附近曝光的图像为N帧图像,即在2022年2月11日13:28:35.514到2022年2月11日13:28:35.534的时间范围内传感器获取的几帧图像为这N帧图像。
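上述“以第一时刻减去第一时长(如100ms)为目标时刻,取目标时刻附近(如±10ms)曝光的帧为N帧图像”的选取过程,可以用下面的Python示意代码表示。其中delay、tol的取值、函数名及各帧时刻(以秒为单位的示意数)均为说明用的假设:

```python
def select_candidate_frames(frame_starts, t_press, delay=0.100, tol=0.010):
    """frame_starts: [(曝光起始时刻, 帧编号)]; t_press: 获取到第一操作的第一时刻
    返回曝光起始时刻落在 (t_press - delay) ± tol 范围内的帧编号列表。"""
    target = t_press - delay
    return [frame for t, frame in frame_starts if abs(t - target) <= tol]

# 示意数据:第一时刻为 35.624s,目标时刻为 35.524s,±10ms 窗口命中 f2、f3、f4
frames = [(35.500, "f1"), (35.516, "f2"), (35.524, "f3"),
          (35.530, "f4"), (35.560, "f5")]
candidates = select_candidate_frames(frames, t_press=35.624)
```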
S602、电子设备获取N帧图像的抖动量。
在电子设备获取到N帧图像之后,可以依次获取N帧图像中每帧图像的抖动量。
即电子设备可以获取第一操作之前第一时长附近的连续多个时间戳信息的陀螺仪数据,以及N帧图像中各行的曝光时间信息。之后电子设备可以基于曝光时间信息和陀螺仪数据确定N帧图像的抖动量。
其中,步骤S602的具体描述可以参考上述图4和图5相关的描述,不加赘述。
S603、电子设备可以基于抖动量确定目标图像为输出的原始图像。
其中,目标图像为根据抖动量从N帧图像中确定的符合抖动量要求的图像。输出的原始图像是指摄像头的传感器输出的图像。即电子设备可以从其摄像头的传感器获取到多张图像,选取其中的一张图像作为输出的图像,这张输出的原始图像可以进行后续的ISP、去模糊等处理。输出的原始图像的含义是,将这一帧图像确定为从摄像头的传感器上获取并向DSP发送的图像,即选定的传感器上的图像,这张图像将被进行ISP处理和显示等。
在本申请实施例中,N帧图像中符合抖动量要求的图像,具体要求不同,确定的目标图像也不同,以下具体说明两种可能的情况:
在一种可能的实施方式中,电子设备可以先提取N帧图像中一帧图像为第一图像,之后可以获取到第一图像的抖动量。电子设备可以先比较第一图像的抖动量与预设阈值的关系。
在第一图像的抖动量小于或等于预设阈值的情况下,电子设备可以将第一图像确定为目标图像。
在第一图像的抖动量大于预设阈值的情况下,电子设备可以提取N帧图像中下一帧为新的第一图像,执行获取第一图像的抖动量的步骤(例如S602);若所述N帧图像的抖动量都大于所述预设阈值的情况下,电子设备可以确定所述N帧图像中抖动量最小的图像为所述目标图像。
在这一实施方式中,在N帧图像中有满足预设阈值的图像时,符合抖动量要求的图像为N帧中小于或等于预设阈值的图像,目标图像选择其中一帧即可;在N帧图像中不存在满足预设阈值的图像时,符合抖动量要求的图像为N帧中抖动量最小的图像。
需要说明的是,第一图像仅仅是电子设备曝光的其中一帧图像。电子设备可以连续曝光多张图像,得到多张原始图像,并选择其中第一帧图像为第一图像。在后续的过程中,第一图像可以按照次序进行改变。
其中,预设阈值的范围可以是0.1像素(pixel)到1.0像素。例如,0.3pixel,预设阈值的具体值不加限定。预设阈值能够有效地筛选当前的第一图像,在抖动量较小的情况下,说明这一帧的图像抖动程度很小,不会因为电子设备的抖动导致这帧图像的模糊,因而可以对这一帧图像从传感器输出,进行后续处理,从而能够保证图像的清晰度和质量。
在抖动量较大的情况下,电子设备可以基于上述的N帧图像进行比较,查看N帧图像中后续的图像是否存在抖动量小于预设阈值的图像。这样,可以尽量先选择满足预设阈值的图像为输出的原始图像。如果N帧图像中没有满足预设阈值的图像,就选择抖动量最小的。这样,电子设备尽可能地保证选择的原始图像的清晰程度,进而可以提高用户的拍照体验。
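上述“依次检查各帧,优先选择第一帧抖动量不超过预设阈值的图像;若N帧均超过阈值,则选择抖动量最小的一帧”的选帧逻辑,可以用下面的Python示意代码表示。其中阈值0.3 pixel仅为本申请实施例给出的示例值,函数名为说明用的假设:

```python
def pick_target_frame(shake_amounts, threshold=0.3):
    """shake_amounts: N帧图像依次计算出的抖动量(单位:pixel)
    依次检查各帧,第一帧抖动量 <= 阈值的即为目标图像;
    若全部大于阈值,则返回抖动量最小的一帧的下标。"""
    for i, s in enumerate(shake_amounts):
        if s <= threshold:
            return i
    return min(range(len(shake_amounts)), key=lambda i: shake_amounts[i])

# 第2帧(下标1)首先满足阈值
assert pick_target_frame([0.5, 0.2, 0.1]) == 1
# 全部超过阈值时,取抖动量最小的一帧(下标2)
assert pick_target_frame([0.9, 0.6, 0.4]) == 2
```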
在另一种可能的实施方式中,电子设备可以计算出N帧图像每一帧图像的抖动量,将其中抖动量最小的一帧图像确定为目标图像。目标图像可以作为输出的原始图像。
在这一实施方式中,符合抖动量要求的图像为N帧中抖动量最小的图像。
这样,电子设备可以选择N帧图像中抖动量最小的图像为目标图像,可以保证输出的原始图像在N帧中的图像是清晰度最佳的图像。
图7是本申请实施例公开的一种拍摄效果的对比示意图。左边的图像为经过本申请实施例的拍摄方法处理过的图像,右边为未经本实施例处理的图像。二者相比,左边的图像清楚可见,图像的噪点较少;右边的图像模糊,图像质量差。从上述两个图像的效果可以看出,电子设备通过抖动量对图像帧进行选取,使得输出的图像效果得到提高。
在本申请实施例中,电子设备基于图像的抖动量选择更加清晰的图像,提高拍摄所得的图像的质量和效果,提高用户的拍摄体验。
其中,本申请实施例中的电子设备具有拍摄功能。本申请实施例的技术方案可以应用于各种拍摄场景。本申请对电子设备的类型不做具体限定,在一些实施例中,本申请实施例中的电子设备可以是手机、可穿戴设备(例如,智能手环)、平板电脑、膝上型计算机(laptop)、手持计算机、电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、蜂窝电话、个人数字助理(personal digital assistant,PDA)、增强现实(augmented reality,AR)\虚拟现实(virtual reality,VR)设备等便携设备。
请参阅图8,图8是本申请实施例提供的另一种拍摄方法的流程示意图。如图8所示,该拍摄方法包含但不限于以下步骤:
S801、电子设备基于第一操作获取第一图像。
电子设备可以响应于第一操作,获取N帧图像,之后电子设备可以提取所述N帧图像中的第一图像。即电子设备可以确定N帧中一帧图像为第一图像。
其中,步骤S801中第一操作的描述可以参考步骤S601的描述,不加赘述。
S802、电子设备获取第一图像的抖动量。
其中,步骤S802可以参考步骤S602以及图4和图5的相关描述,不加赘述。
S803、电子设备判断第一图像的抖动量是否大于(大于或等于)预设阈值,在大于或等于预设阈值的情况下,执行步骤S805;否则,执行步骤S804。
其中,步骤S803可以参考步骤S603的描述,不加赘述。
S804、电子设备将第一图像确定为输出的原始图像。
在第一图像的抖动量小于或等于(小于)预设阈值的情况下,电子设备将第一图像确定为目标图像,即输出的原始图像。
其中,步骤S804可以参考步骤S603的相关描述,不加赘述。
S805、电子设备判断第一图像是否为N帧图像的最后一帧图像。若是,执行步骤S807;否则,执行步骤S806。
在第一图像的抖动量大于(大于或等于)预设阈值的情况下,电子设备判断第一图像是否为N帧图像的最后一帧图像。
其中,步骤S805可以参考步骤S603的相关描述。
S806、电子设备存储第一图像的抖动量,并提取N帧图像中下一帧为新的第一图像,重新执行步骤S802。
在第一图像的抖动量大于(大于或等于)预设阈值的情况下,若第一图像不是N帧图像的最后一帧,存储当前第一图像的抖动量,并确定N帧中的下一帧为新的第一图像,并执行S802。
其中,步骤S806可以参考步骤S603的相关描述,不加赘述。
S807、电子设备将N帧图像中抖动量最小的图像确定为输出的原始图像。
电子设备已对N帧图像均进行判断,可以对已经存储的N帧图像的抖动量进行排序,确定其中抖动量最小的图像为输出的原始图像,即目标图像。
在抖动量较大的情况下,电子设备可以基于上述的N帧图像进行比较,查看N帧图像中后续的图像是否存在抖动量小于预设阈值的图像。这样,可以尽量先选择满足预设阈值的图像为输出的原始图像。如果N帧图像中没有满足预设阈值的图像,就选择抖动量最小的。这样,电子设备尽可能地保证选择的原始图像的清晰程度,进而可以提高用户的拍照体验。
其中,步骤S807可以参考步骤S603的相关描述,不加赘述。
电子设备100的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本发明实施例以分层架构的Android系统为例,示例性说明电子设备100的软件结构。
图9是本申请实施例提供的电子设备100的软件结构框示意图。
分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将Android系统分为四层,从上至下分别为应用程序层,应用程序框架层,安卓运行时(Android runtime)和系统库,以及内核层。
如图9所示,可将Android系统分为四层,从上至下分别为:应用程序层、应用程序框架层、硬件抽象层(hardware abstraction layer,HAL)层和硬件驱动层。其中:
应用程序层包括一系列应用程序包,例如包含相机应用。不限于相机应用,还可以包含其他一些应用,例如相机,图库,视频,短信和电话等应用程序。
其中,相机应用可为用户提供拍照功能。相机可以响应于用户在相机应用的用户界面中对拍摄控件的触摸操作,通知应用框架层中的编码模块和图像处理模块进行拍摄。
应用程序框架层(framework,FWK)为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。
如图9所示,应用程序框架层可包含相机服务接口(Camera Service),该相机服务接口可提供应用程序层中相机应用和HAL层之间的通信接口。
如图9所示,HAL层可以包括图像信号处理单元,图像信号处理单元可以包含用于为相机应用提供本申请上述的拍摄方法。即图像信号处理单元获取到图像传感器的第一图像和通过陀螺仪传感器驱动获取到陀螺仪数据之后,可以开始通过本申请实施例的方法对第一图像进行处理,得到输出的原始图像,具体可以参考图6和图8的描述,不加赘述。
如图9所示,硬件驱动层可以包括对焦马达驱动、图像传感器驱动,图像信号处理器驱动、陀螺仪传感器驱动和触控传感器驱动等模块。
对焦马达驱动可以控制对焦马达包括在摄像头拍摄的过程中推动镜头进行对焦,并获取对焦信息。例如,本申请实施例中的焦距f。图像传感器驱动可以获取到摄像头的传感器获取的图像信息,例如,可以获取本申请实施例中的第一图像。图像信号处理器驱动可以驱动图像信号处理器进行对第一图像的处理和计算。陀螺仪传感器驱动用于获取陀螺仪数据,触控传感器驱动用于获取触控事件,例如,第一操作。
上述实施例中所用,根据上下文,术语“当…时”可以被解释为意思是“如果…”或“在…后”或“响应于确定…”或“响应于检测到…”。类似地,根据上下文,短语“在确定…时”或“如果检测到(所陈述的条件或事件)”可以被解释为意思是“如果确定…”或“响应于确定…”或“在检测到(所陈述的条件或事件)时”或“响应于检测到(所陈述的条件或事件)”。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线)或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质(例如,软盘、硬盘、磁带)、光介质(例如DVD)、或者半导体介质(例如固态硬盘)等。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,该流程可以由计算机程序来指令相关的硬件完成,该程序可存储于计算机可读取存储介质中,该程序在执行时,可包括如上述各方法实施例的流程。而前述的存储介质包括:ROM、随机存储记忆体RAM、磁碟或者光盘等各种可存储程序代码的介质。

Claims (10)

  1. 一种拍摄方法,其特征在于,所述方法应用于电子设备,所述方法包括:
    响应于第一操作,获取N帧图像,所述第一操作是作用于拍摄控件的操作,所述N帧图像是通过摄像头采集到的预览画面中的N帧图像,N为正整数;
    在依次获取所述N帧图像中每帧图像的抖动量的过程中,确定目标图像为输出的原始图像,所述原始图像是所述电子设备通过所述摄像头的传感器获取的图像;所述目标图像为根据所述抖动量从所述N帧图像中确定的符合抖动量要求的图像。
  2. 根据权利要求1所述的方法,其特征在于,所述确定目标图像为输出的原始图像,具体包括:
    提取所述N帧图像中的第一图像;
    获取所述第一图像的抖动量;
    在所述第一图像的抖动量小于或等于预设阈值的情况下,将所述第一图像确定为所述目标图像;
    在所述第一图像的抖动量大于所述预设阈值的情况下,提取所述N帧图像中下一帧为新的第一图像,执行所述获取所述第一图像的抖动量的步骤;
    若所述N帧图像的抖动量都大于所述预设阈值的情况下,确定所述N帧图像中抖动量最小的图像为所述目标图像。
  3. 根据权利要求1或2所述的方法,其特征在于,所述响应于第一操作,获取N帧图像,具体包括:
    响应于第一操作,确定第一操作的时刻为第一时刻;
    从第一时刻之前的第一时长开始,从传感器获取连续的N帧图像。
  4. 根据权利要求2或3所述的方法,其特征在于,所述获取所述第一图像的抖动量,具体包括:
    获取所述第一图像中M行的陀螺仪数据,M为正整数,M小于或等于所述第一图像的像素行数;
    基于所述M行的陀螺仪数据确定所述第一图像的抖动量。
  5. 根据权利要求4所述的方法,其特征在于,所述获取所述第一图像中M行的陀螺仪数据,具体包括:
    获取第一图像的M行曝光时间信息,所述曝光时间信息包括M行曝光的起始时刻和结束时刻;
    获取时间戳信息和对应的陀螺仪数据,所述时间戳信息为采集对应陀螺仪数据的时间信息;
    在所述时间戳信息处于所述M行中对应行的曝光时间信息内的情况下,获取所述对应行曝光时间信息内的陀螺仪数据。
  6. 根据权利要求4或5所述的方法,其特征在于,所述基于所述M行的陀螺仪数据确定所述第一图像的抖动量,具体包括:
    将所述M行中第i行的陀螺仪数据通过抖动函数F_i表示为:
    F_i=[g_i^1,g_i^2,…,g_i^n,…,g_i^j]
    其中,j表示所述第i行曝光一共有j个时间戳信息对应的陀螺仪数据;
    对所述M行每一行j个时间戳信息中的第n个时间戳信息对应的Q维陀螺仪数据进行积分,得到所述第i行第n个时间戳信息对应的空间位置p_i^n为:
    p_i^n = f·Σ_{m=1}^{n}(Σ_{k=1}^{Q} g_{i,m,k})·Δt_i^m
    其中,f为焦距,k为从1到Q的整数,g_{i,n,k}为所述第i行第n个时间戳信息的陀螺仪数据中第k维的数据,Δt_i^n为所述第i行的第n个时间戳信息和上一个时间戳信息之间的时间差;
    基于所述第i行每个时间戳信息对应的所述空间位置确定第i行位置函数p_i:
    p_i=[0,p_i^1,p_i^2,…,p_i^j]
    其中,p_i^j表示在第i行第j个时间戳信息对应陀螺仪数据g_i^j的空间位置;
    将所述第i行的抖动量S_i确定为所述第i行的位置函数p_i的最大值与最小值之间的差值:
    S_i=max(p_i)-min(p_i)
    其中,max(p_i)为第i行位置函数中的最大值,min(p_i)为第i行位置函数中的最小值;
    将所述第一图像的抖动量S确定为M行抖动量的均值:
    S = (1/M)·Σ_{i=1}^{M} S_i
  7. 根据权利要求6所述的方法,其特征在于,所述方法还包括:
    获取第i行每个时间戳信息对应的光补偿量;
    所述基于所述第i行每个时间戳信息对应的所述空间位置确定第i行位置函数p i,还包括:
    基于所述第i行每个时间戳信息对应的空间位置和所述光补偿量确定第i行的位置函数p_i为:
    p_i=[0,p_i^1-o_i^1,p_i^2-o_i^2,…,p_i^j-o_i^j]
    其中,o_i^j为第i行第j个时间戳信息对应的光补偿量。
  8. 根据权利要求1-7任一项所述的方法,其特征在于,所述预设阈值的范围为0.1到1.0像素。
  9. 一种电子设备,其特征在于,包括:触控屏、一个或多个处理器和一个或多个存储器;所述一个或多个处理器与所述触控屏、所述一个或多个存储器耦合,所述一个或多个存储器用于存储计算机程序代码,计算机程序代码包括计算机指令,当所述一个或多个处理器执行所述计算机指令时,使得所述电子设备执行如权利要求1-8任一项所述的方法。
  10. 一种计算机可读存储介质,包括指令,其特征在于,当所述指令在电子设备上运行时,使得所述电子设备执行如权利要求1-8中任一项所述的方法。
PCT/CN2022/140192 2022-02-25 2022-12-20 一种拍摄方法及电子设备 WO2023160169A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP22922524.8A EP4274248A4 (en) 2022-02-25 2022-12-20 PHOTOGRAPHIC METHOD AND ELECTRONIC DEVICE

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210181416.2 2022-02-25
CN202210181416.2A CN116709043B (zh) 2022-02-25 2022-02-25 一种拍摄方法及电子设备

Publications (2)

Publication Number Publication Date
WO2023160169A1 true WO2023160169A1 (zh) 2023-08-31
WO2023160169A9 WO2023160169A9 (zh) 2023-10-26

Family

ID=87748242

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/140192 WO2023160169A1 (zh) 2022-02-25 2022-12-20 一种拍摄方法及电子设备

Country Status (3)

Country Link
EP (1) EP4274248A4 (zh)
CN (1) CN116709043B (zh)
WO (1) WO2023160169A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100309335A1 (en) * 2009-06-05 2010-12-09 Ralph Brunner Image capturing device having continuous image capture
CN107172296A (zh) * 2017-06-22 2017-09-15 维沃移动通信有限公司 一种图像拍摄方法及移动终端
CN107509034A (zh) * 2017-09-22 2017-12-22 维沃移动通信有限公司 一种拍摄方法及移动终端
CN110049244A (zh) * 2019-04-22 2019-07-23 惠州Tcl移动通信有限公司 拍摄方法、装置、存储介质及电子设备
CN110290323A (zh) * 2019-06-28 2019-09-27 Oppo广东移动通信有限公司 图像处理方法、装置、电子设备和计算机可读存储介质

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102518373B1 (ko) * 2019-02-12 2023-04-06 삼성전자주식회사 이미지 센서 및 이를 포함하는 전자 기기
CN110266966A (zh) * 2019-06-28 2019-09-20 Oppo广东移动通信有限公司 图像生成方法和装置、电子设备、计算机可读存储介质


Also Published As

Publication number Publication date
WO2023160169A9 (zh) 2023-10-26
CN116709043A (zh) 2023-09-05
CN116709043B (zh) 2024-08-02
EP4274248A1 (en) 2023-11-08
EP4274248A4 (en) 2024-07-24


Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2022922524

Country of ref document: EP

Effective date: 20230731