WO2023035920A1 - Method for capturing images during video recording, and electronic device - Google Patents

Method for capturing images during video recording, and electronic device

Info

Publication number
WO2023035920A1
WO2023035920A1 · PCT/CN2022/113982 · CN2022113982W
Authority
WO
WIPO (PCT)
Prior art keywords
image
image processing
electronic device
domain
isp
Prior art date
Application number
PCT/CN2022/113982
Other languages
English (en)
French (fr)
Inventor
王宇
朱聪超
肖斌
Original Assignee
Honor Device Co., Ltd. (荣耀终端有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co., Ltd. (荣耀终端有限公司)
Priority to US18/015,583 (published as US20240205533A1)
Priority to EP22826796.9A (published as EP4171005A4)
Publication of WO2023035920A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00Image coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/02Constructional features of telephone sets
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72439User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/21Intermediate information storage
    • H04N1/2104Intermediate information storage for one or a few pictures
    • H04N1/2112Intermediate information storage for one or a few pictures using still video cameras
    • H04N1/212Motion video recording combined with still video recording
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/57Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/667Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/951Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/91Television signal processing therefor
    • H04N5/92Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N5/9201Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving the multiplexing of an additional signal and the video signal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30168Image quality inspection
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/84Camera processing pipelines; Components thereof for processing colour signals

Definitions

  • The present application relates to the technical field of photographing, and in particular to a method and an electronic device for capturing images during video recording.
  • Existing mobile phones generally provide photographing and video recording functions, and more and more people use their phones to take photos and videos to record everyday life.
  • During video recording (i.e., while a video is being recorded), some wonderful moments may be captured by the camera.
  • The user may hope that the mobile phone can capture such a moment and save it as a photo for display. Therefore, there is an urgent need for a solution that can capture still images during video recording.
  • The present application provides a method and an electronic device for capturing images during video recording, which can capture an image while a video is being recorded and improve the image quality of the captured image.
  • In a first aspect, the present application provides a method for capturing images during video recording, applied to an electronic device that includes a camera. The method includes: the electronic device receives a first operation from the user, where the first operation is used to trigger the electronic device to start recording video; in response to the first operation, the electronic device displays a viewfinder interface, and the viewfinder interface displays a preview image stream; the preview image stream includes n frames of preview images, obtained based on n frames of first images collected by the camera after the electronic device receives the first operation; the viewfinder interface also includes a snapshot shutter, which is used to trigger the electronic device to take a snapshot; the n frames of first images are stored in a first buffer queue of the electronic device, where n ≥ 1 and n is an integer; the electronic device receives a second operation from the user and, in response to the second operation, ends the recording of the video and saves the video; and, in response to a third operation input by the user on the snapshot shutter during recording, the electronic device performs image processing on the qth frame of the first image stored in the first buffer queue (the first image captured by the camera when the user input the third operation) to obtain a snapshot image.
  • That is, the user can use the snapshot shutter to capture an image while the electronic device is recording video.
  • The electronic device may cache the n frames of first images collected by the camera in the first buffer queue. In this way, even though there is a delay from receiving the user's snapshot operation (that is, the third operation) to the Snapshot program receiving the snapshot command, the first image output by the image sensor (Sensor) at the moment the snapshot operation was received is still cached in the first buffer queue.
  • The electronic device may then select a snapshot frame (that is, the qth frame of the first image, captured by the camera when the user input the snapshot operation) from the first buffer queue.
  • In this way, the electronic device can obtain, from the first buffer queue, the image captured by the camera at the moment the user input the snapshot operation.
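The buffering scheme described above can be sketched as follows. This is a minimal illustration only; the class, method, and field names are assumptions for the sketch, not identifiers from the patent:

```python
from collections import deque

class FrameBuffer:
    """Fixed-depth queue of the most recent Sensor frames.

    Mirrors the "first buffer queue": the newest n frames are kept, so the
    frame exposed at the moment the user tapped the snapshot shutter is
    still available when the delayed snapshot command finally arrives.
    """

    def __init__(self, depth):
        # deque with maxlen drops the oldest frame automatically on overflow
        self.frames = deque(maxlen=depth)

    def push(self, frame_id, timestamp_ms, data):
        self.frames.append({"id": frame_id, "ts": timestamp_ms, "data": data})

    def frame_at(self, snap_ts_ms):
        """Return the buffered frame whose exposure time is closest to the
        instant the snapshot operation was received (the qth frame)."""
        return min(self.frames, key=lambda f: abs(f["ts"] - snap_ts_ms))

buf = FrameBuffer(depth=11)
for i in range(30):                  # Sensor exposes one frame every 30 ms
    buf.push(i, i * 30, data=None)

snap = buf.frame_at(snap_ts_ms=700)  # user tapped ~700 ms into recording
# → frame with id 23 (exposed at 690 ms, the closest buffered frame)
```

Because the queue depth exceeds the frames that arrive during the command delay, the tap-time frame is always still present when the selection runs.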
  • the electronic device can also perform image processing on the first image of the qth frame to obtain a captured image, which can improve the image quality of the captured image.
  • an image that meets the needs of the user can be captured during the video recording process, and the image quality of the captured image can be improved.
  • In a possible implementation manner of the first aspect, n ≥ 2.
  • That is, multiple frames of the first image may be cached in the first buffer queue.
  • In this way, the frames output by the Sensor can be buffered in the first buffer queue (the Buffer). Therefore, when the electronic device receives the user's snapshot operation, the Bayer image output by the Sensor at that moment is still cached in the first buffer queue. Moreover, the image content of the frames output by the Sensor does not change much within a short period of time.
  • Accordingly, the frame selection module of the electronic device can select a frame with better image quality from the Buffer as the captured frame, according to the additional information of the images cached in the Buffer. In this way, the image quality of the snapshot image can be improved.
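The frame selection step might look like the sketch below. The "additional information" fields (a sharpness score and an overexposure flag) are illustrative assumptions; the patent does not specify which metadata the frame selection module uses:

```python
def select_capture_frame(buffered_frames):
    """Pick the cached frame with the best quality metadata.

    Sketch of a frame selection module: each cached frame carries
    additional information, and the best-scoring frame is chosen
    as the captured frame.
    """
    # Prefer frames that are not flagged as overexposed
    candidates = [f for f in buffered_frames if not f["overexposed"]]
    if not candidates:          # fall back to all frames if every one is flagged
        candidates = buffered_frames
    return max(candidates, key=lambda f: f["sharpness"])

frames = [
    {"id": 0, "sharpness": 0.41, "overexposed": False},
    {"id": 1, "sharpness": 0.87, "overexposed": True},   # sharp but blown out
    {"id": 2, "sharpness": 0.79, "overexposed": False},
]
best = select_capture_frame(frames)
# → frame with id 2: sharpest among the well-exposed candidates
```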
  • In another possible implementation manner of the first aspect, performing image processing on the qth frame of the first image stored in the first buffer queue to obtain a snapshot image includes: the electronic device performs image processing on m frames of the first image to obtain the snapshot image, where the m frames of first images include the qth frame of the first image, m ≥ 1, and m is an integer.
  • the electronic device may perform image processing on one or more frames of the first image.
  • The electronic device can perform image processing on multiple frames of the first image. In that case, the images in the m frames other than the qth frame of the first image can be used to enhance the captured frame (that is, the qth frame of the first image, also referred to as the reference frame).
  • In another possible implementation manner, the image processing includes image processing in the RAW domain and ISP image processing. The image processing in the RAW domain is image processing performed in the RAW color space, and the ISP image processing is image processing performed by the image signal processor (ISP) of the electronic device; the image quality of the snapshot image is better than the image quality of the qth frame of the first image. Alternatively, the image processing includes the RAW domain image processing, the ISP image processing, and encoding processing; in this case too, the image quality of the snapshot image is better than the image quality of the qth frame of the first image.
  • The above-mentioned image processing in the RAW domain can be realized by a preset RAW domain image processing algorithm, and the ISP image processing can be realized by the ISP of the electronic device.
  • In another possible implementation manner, the electronic device performing image processing on the m frames of the first image to obtain the snapshot image includes: the electronic device takes the m frames of the first image as input and runs a preset RAW domain image processing algorithm to obtain a second image, where the preset RAW domain image processing algorithm has the function of improving image quality and integrates at least one image processing function of the RAW domain, the RGB domain, or the YUV domain, so as to improve the image quality of the image before the ISP performs image processing; the electronic device then uses the ISP to process the second image and encodes the processed image to obtain the snapshot image.
  • That is, processing by the preset RAW domain image processing algorithm is added before the ISP. The combination of the preset RAW domain image processing algorithm and the ISP has a better processing effect, which helps to improve the image quality of the snapshot image.
  • In another possible implementation manner, both the image format input to the preset RAW domain image processing algorithm and the image format it outputs are the Bayer format.
  • In this case, the electronic device taking the m frames of the first image as input and running the preset RAW domain image processing algorithm to obtain the second image includes: the electronic device takes the m frames of the first image in Bayer format as input and runs the preset RAW domain image processing algorithm to obtain a second image in Bayer format, where the preset RAW domain image processing algorithm integrates at least part of the image processing functions of the RAW domain, the RGB domain, or the YUV domain, so as to improve the image quality before the ISP performs image processing.
  • The electronic device using the ISP to process the second image and encoding the processed image to obtain the snapshot image includes: the electronic device uses the ISP to sequentially perform the RAW domain image processing, the RGB domain image processing, and the YUV domain image processing on the second image, and encodes the processed image to obtain the snapshot image.
  • Here, the image processing performed by the ISP may also include conversion from the Bayer format to the RGB format, and conversion from the RGB format to the YUV format.
  • In another possible implementation manner, the image format input to the preset RAW domain image processing algorithm is the Bayer format and the image format it outputs is the RGB format; that is, the preset RAW domain image processing algorithm performs the conversion from the Bayer format to the RGB format.
  • In this case, the electronic device taking the m frames of the first image as input and running the preset RAW domain image processing algorithm to obtain the second image includes: the electronic device takes the m frames of the first image in Bayer format as input and runs the preset RAW domain image processing algorithm to obtain a second image in RGB format, where the preset RAW domain image processing algorithm integrates the image processing functions of the RAW domain, so as to improve the image quality before the ISP performs the RGB domain and YUV domain image processing.
  • The electronic device using the ISP to process the second image and encoding the processed image to obtain the snapshot image includes: the electronic device uses the ISP to sequentially perform the RGB domain image processing and the YUV domain image processing on the second image, and encodes the processed image to obtain the snapshot image.
  • Here, the image processing performed by the ISP may also include conversion from the RGB format to the YUV format.
  • In another possible implementation manner, the preset RAW domain image processing algorithm also integrates part of the image processing functions of at least one of the RGB domain or the YUV domain, so as to improve the image quality before the ISP performs image processing in the RGB domain.
  • In another possible implementation manner, the image format input to the preset RAW domain image processing algorithm is the Bayer format and the image format it outputs is the YUV format; that is, the preset RAW domain image processing algorithm performs both the conversion from the Bayer format to the RGB format and the conversion from the RGB format to the YUV format.
  • In this case, the electronic device taking the m frames of the first image as input and running the preset RAW domain image processing algorithm to obtain the second image includes: the electronic device takes the m frames of the first image in Bayer format as input and runs the preset RAW domain image processing algorithm to obtain a second image in YUV format, where the preset RAW domain image processing algorithm integrates the image processing functions of the RAW domain and the RGB domain, so as to improve the image quality before the ISP performs the YUV domain image processing on the image.
  • The electronic device using the ISP to process the second image and encoding the processed image to obtain the snapshot image includes: the electronic device uses the ISP to perform the YUV domain image processing on the second image, and encodes the processed image to obtain the snapshot image.
  • The above-mentioned preset RAW domain image processing algorithm also integrates part of the image processing functions of the YUV domain, which are used to improve the image quality of the image before the ISP performs image processing in the YUV domain.
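The three hand-off variants above (the preset algorithm outputs Bayer, RGB, or YUV, and the ISP runs only the remaining stages) can be sketched as a pipeline. The stage functions are placeholders standing in for real ISP and algorithm work; only the structure is intended to match the description:

```python
def preset_raw_algorithm(bayer_frames, output_domain="bayer"):
    """Multi-frame quality enhancement; may stop in the Bayer, RGB, or YUV
    domain depending on which ISP stages it absorbs."""
    fused = "enhanced(" + "+".join(bayer_frames) + ")"
    return {"domain": output_domain, "data": fused}

def isp_process(image):
    """Run only the ISP stages the preset algorithm has not absorbed."""
    remaining = {
        "bayer": ["raw", "rgb", "yuv"],  # algorithm output still Bayer
        "rgb":   ["rgb", "yuv"],         # Bayer→RGB already done
        "yuv":   ["yuv"],                # Bayer→RGB and RGB→YUV already done
    }
    data = image["data"]
    for stage in remaining[image["domain"]]:
        data = f"{stage}({data})"
    return data

def encode(data):
    return f"jpeg({data})"

# Variant where the preset algorithm hands off an RGB-format second image:
second = preset_raw_algorithm(["f7", "f8", "f9"], output_domain="rgb")
snapshot = encode(isp_process(second))
# → "jpeg(yuv(rgb(enhanced(f7+f8+f9))))"
```

Changing `output_domain` to `"bayer"` or `"yuv"` reproduces the other two variants without touching the pipeline code.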
  • For the preview stream, the electronic device may use its ISP to sequentially perform the RAW domain image processing, the RGB domain image processing, and the YUV domain image processing on the first image to obtain a preview image.
  • The electronic device's processing of the first image to obtain the preview image will not be affected by its processing of images to obtain the snapshot image.
  • This is because the electronic device uses the ISP to process the first image and the second image in a time-division multiplexing manner. That is to say, the electronic device using the ISP to process the second image to obtain the snapshot image does not affect the electronic device using the ISP to process the first image to obtain the preview image. In other words, the electronic device's processing of the snapshot image does not affect its processing of the preview image stream and the video file.
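One way to picture the time-division multiplexing is a single ISP worker that serves two queues, with the real-time preview stream taking priority. This is a scheduling sketch under that assumption, not a real ISP driver API:

```python
import queue

preview_q = queue.Queue()    # first images → preview stream (real-time)
snapshot_q = queue.Queue()   # second image → snapshot (can wait)

def isp_step():
    """One ISP time slot: preview frames take priority; snapshot work is
    done only when no preview frame is pending, so snapshot processing
    never stalls the preview stream."""
    try:
        return ("preview", preview_q.get_nowait())
    except queue.Empty:
        pass
    try:
        return ("snapshot", snapshot_q.get_nowait())
    except queue.Empty:
        return None              # ISP idle this slot

preview_q.put("first_image_12")
snapshot_q.put("second_image")
order = [isp_step(), isp_step()]
# → [("preview", "first_image_12"), ("snapshot", "second_image")]
```

The pending preview frame is processed first even though the snapshot job was already queued, which is the property the paragraph above describes.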
  • the present application provides a method for capturing images in video recording, which is applied to an electronic device, where the electronic device includes a camera, and the method includes: the electronic device receives a first user operation; wherein the first The operation is used to trigger the electronic device to start recording video; in response to the first operation, the electronic device displays a viewfinder interface; wherein the viewfinder interface displays a preview image stream, and the preview image stream includes n frames of preview images, The preview image is obtained based on n frames of first images collected by the camera of the electronic device after the electronic device receives the first operation, and the viewfinder interface also includes a snapshot shutter, which is used to trigger The electronic device takes a snapshot, and the n frames of the first image are stored in the first buffer queue of the electronic device, where n ⁇ 1, where n is an integer; the electronic device periodically buffers the first image in the first buffer queue Perform image processing on the k frames of the first image to obtain a second image, k ⁇ 1, k is an integer; the electronic device receives the second
  • the number of image frames buffered in the first buffer queue can be reduced by adopting the solution provided by the second aspect.
  • For example, suppose the delay from receiving the user's snapshot operation to the Snapshot program receiving the snapshot instruction is 330 milliseconds (ms), and the Sensor of the mobile phone exposes one frame of the first image every 30 ms.
  • In that case, in order to ensure that the frame selection module of the mobile phone can select, from the first buffer queue, the first image that the Sensor exposed when the mobile phone received the third operation, at least 10 frames of images need to be cached in the first buffer queue.
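The buffer-depth figure follows from the two numbers above. A back-of-the-envelope sketch (the exact depth shifts by about one frame depending on how exposures align with the tap):

```python
# Figures from the example above
SNAP_DELAY_MS = 330      # snapshot operation → Snapshot program receives command
FRAME_INTERVAL_MS = 30   # Sensor exposes one first image every 30 ms

# While the snapshot command is in flight, this many new frames arrive.
# The queue must be roughly this deep for the frame exposed at the moment
# of the tap to still be buffered when the command is handled.
frames_in_flight = SNAP_DELAY_MS // FRAME_INTERVAL_MS
# → 11 frames arrive during the delay, hence a buffer on the order of
#   ten-plus frames in the first aspect's scheme
```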
  • In the second aspect, by contrast, the second image is only used to enhance the fourth image that the user wants to capture, so there is no need to cache as many image frames in order to generate the second image.
  • In another possible implementation manner of the second aspect, n ≥ 2.
  • For the beneficial effect analysis of n ≥ 2, refer to the introduction in the implementation manner of the first aspect; details are not repeated here.
  • In another possible implementation manner of the second aspect, k ≥ 2.
  • For the beneficial effect analysis of k ≥ 2, refer to the introduction of m ≥ 2 in the implementation manner of the first aspect; details are not repeated here.
  • In another possible implementation manner, the foregoing image quality enhancement includes image super-resolution. That is to say, when the electronic device uses the third image to enhance the quality of the fourth image, it can also increase the resolution of the fourth image. The resolution of the third image and of the snapshot image is higher than that of the fourth image.
  • In another possible implementation manner, the image processing includes image processing in the RAW domain and image processing by the image signal processor (ISP). The image processing in the RAW domain is image processing performed in the RAW color space, and the ISP image processing is image processing performed by the ISP of the electronic device; the image quality of the second image is better than that of the k frames of the first image. Alternatively, the image processing includes the RAW domain image processing, the ISP image processing, and encoding processing; in this case too, the image quality of the second image is better than the image quality of the k frames of the first image.
  • In another possible implementation manner, the electronic device periodically performing image processing on the k frames of first images buffered in the first buffer queue to obtain the second image includes: the electronic device periodically takes the k frames of the first image cached in the first buffer queue as input and runs a preset RAW domain image processing algorithm to obtain a third image, where the preset RAW domain image processing algorithm has the function of improving image quality; the electronic device then uses its ISP to process the third image to obtain the second image.
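One period of this background work could be sketched as below. The function names (`raw_algorithm`, `isp`) and the take-the-latest-k policy are assumptions for the sketch; the patent only specifies that k buffered frames are periodically fused into a third image, which the ISP turns into the second image:

```python
from collections import deque

def periodic_enhance(buffer, k, raw_algorithm, isp):
    """One period of the second aspect's background processing: take the
    most recent k first images from the buffer queue, fuse them with the
    preset RAW domain algorithm into a third image, then let the ISP
    produce the second image from it."""
    if len(buffer) < k:
        return None                 # not enough frames cached yet
    latest_k = list(buffer)[-k:]    # the k newest cached first images
    third_image = raw_algorithm(latest_k)
    return isp(third_image)         # the second image

buffer = deque(["f1", "f2", "f3", "f4"], maxlen=4)
second_image = periodic_enhance(
    buffer, k=3,
    raw_algorithm=lambda frames: "fused(" + "+".join(frames) + ")",
    isp=lambda img: "isp(" + img + ")",
)
# → "isp(fused(f2+f3+f4))"
```

A real implementation would invoke this on a timer; calling it once per period keeps the sketch free of threading details.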
  • the present application provides an electronic device, which includes a touch screen, a memory, a display screen, one or more cameras, and one or more processors.
  • the memory, display screen, camera and processor are coupled.
  • the camera is used for collecting images
  • the display screen is used for displaying images collected by the camera or images generated by the processor.
  • Computer program code is stored in the memory, and the computer program code includes computer instructions.
  • When the computer instructions are executed by the processor, the electronic device executes the method described in the first aspect or the second aspect and any possible implementation manner thereof.
  • the present application provides an electronic device, and the electronic device includes a touch screen, a memory, a display screen, one or more cameras, and one or more processors.
  • The memory, the display screen, and the camera are coupled to the processor, where the camera is used to collect images, the display screen is used to display images collected by the camera or images generated by the processor, and computer program code is stored in the memory, the computer program code including computer instructions. When the computer instructions are executed by the processor, the electronic device performs the following steps: receiving a first operation from the user, where the first operation is used to trigger the electronic device to start recording video; in response to the first operation, displaying a viewfinder interface, where the viewfinder interface displays a preview image stream, the preview image stream includes n frames of preview images, and the preview images are obtained based on n frames of first images collected by the camera of the electronic device after the electronic device receives the first operation; the viewfinder interface further includes a snapshot shutter, and the snapshot shutter is used to trigger the electronic device to take a snapshot.
  • In another possible implementation manner, when the computer instructions are executed by the processor, the electronic device further performs the following step: performing image processing on the m frames of first images to obtain the snapshot image, where the m frames of the first image include the qth frame of the first image, m ≥ 1, and m is an integer.
  • the m frames of first images are m adjacent frames of images.
  • The image processing includes image processing in the RAW domain and ISP image processing; the image processing in the RAW domain is image processing performed in the RAW color space, and the ISP image processing is image processing performed by the image signal processor (ISP) of the electronic device; the image quality of the snapshot image is better than the image quality of the qth frame of the first image. Alternatively, the image processing includes the RAW domain image processing, the ISP image processing, and encoding processing; in this case too, the image quality of the snapshot image is better than the image quality of the qth frame of the first image.
  • In another possible implementation manner, when the computer instructions are executed by the processor, the electronic device further performs the following steps: taking the m frames of the first image as input and running a preset RAW domain image processing algorithm to obtain the second image, where the preset RAW domain image processing algorithm has the function of improving image quality and integrates at least one image processing function of the RAW domain, the RGB domain, or the YUV domain, so as to improve the image quality of the image before the ISP performs image processing; and using the ISP to process the second image and encoding the processed image to obtain the snapshot image.
  • In another possible implementation manner, when the computer instructions are executed by the processor, the electronic device further performs the following steps: taking the m frames of the first image in Bayer format as input and running the preset RAW domain image processing algorithm to obtain a second image in Bayer format, where the preset RAW domain image processing algorithm integrates part of the image processing functions of at least one of the RAW domain, the RGB domain, or the YUV domain, so as to improve the image quality of the image before the ISP performs image processing; and using the ISP to sequentially perform the RAW domain image processing, the RGB domain image processing, and the YUV domain image processing on the second image, and encoding the processed image to obtain the snapshot image.
  • In another possible implementation manner, when the computer instructions are executed by the processor, the electronic device further performs the following steps: taking the m frames of the first image in Bayer format as input and running the preset RAW domain image processing algorithm to obtain a second image in RGB format, where the preset RAW domain image processing algorithm integrates the image processing functions of the RAW domain, so as to improve the image quality of the image before the ISP performs the RGB domain and YUV domain image processing; and using the ISP to sequentially perform the RGB domain image processing and the YUV domain image processing on the second image, and encoding the processed image to obtain the snapshot image.
  • In another possible implementation manner, the preset RAW domain image processing algorithm also integrates part of the image processing functions of at least one of the RGB domain or the YUV domain, so as to improve the image quality before the ISP performs image processing in the RGB domain.
  • In another possible implementation manner, when the computer instructions are executed by the processor, the electronic device further performs the following steps: taking the m frames of the first image in Bayer format as input and running the preset RAW domain image processing algorithm to obtain a second image in YUV format, where the preset RAW domain image processing algorithm integrates the image processing functions of the RAW domain and the RGB domain, so as to improve the image quality of the image before the ISP performs the YUV domain image processing on the image; and using the ISP to perform the YUV domain image processing on the second image, and encoding the processed image to obtain the snapshot image.
  • In another possible implementation manner, the preset RAW domain image processing algorithm also integrates part of the image processing functions of the YUV domain, which are used to improve the image quality before the ISP performs image processing in the YUV domain.
  • In another possible implementation manner, when the computer instructions are executed by the processor, the electronic device further performs the following steps: using the ISP of the electronic device to sequentially perform the RAW domain image processing, the RGB domain image processing, and the YUV domain image processing on the first image to obtain the preview image; and, in a time-division multiplexing manner, using the ISP to process the first image to obtain the preview image and to process the second image to obtain the snapshot image.
  • the present application provides an electronic device, which includes a touch screen, a memory, a display screen, one or more cameras, and one or more processors.
  • The memory, the display screen, and the camera are coupled to the processor, where the camera is used to collect images, the display screen is used to display images collected by the camera or images generated by the processor, and computer program code is stored in the memory, the computer program code including computer instructions. When the computer instructions are executed by the processor, the electronic device performs the following steps: receiving a first operation from the user, where the first operation is used to trigger the electronic device to start recording video; in response to the first operation, displaying a viewfinder interface, where the viewfinder interface displays a preview image stream, the preview image stream includes n frames of preview images, and the preview images are obtained based on n frames of first images collected by the camera of the electronic device after the electronic device receives the first operation; the viewfinder interface further includes a snapshot shutter, and the snapshot shutter is used to trigger the electronic device to take a snapshot.
  • the image processing includes image processing in a RAW domain and image processing by an image signal processor (ISP), and the image processing in the RAW domain is image processing performed in a RAW color space,
• the ISP image processing is image processing performed by the image signal processor (ISP) of the electronic device, and the image quality of the second image is better than that of the k frames of the first image; or, the image processing includes the RAW domain image processing, the ISP image processing, and encoding processing, and the image quality of the second image is better than the image quality of the k frames of the first image.
• when the computer instructions are executed by the processor, the electronic device further executes the following step: periodically using the k frames of the first image cached in the first cache queue as input, and running a preset RAW domain image processing algorithm to obtain a third image, where the preset RAW domain image processing algorithm has the function of improving image quality; and using the ISP of the electronic device to process the third image to obtain the second image.
• the image quality enhancement includes image super-resolution, where the resolution of the second image and of the captured image is higher than the resolution of the q-th frame of the first image.
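The periodic multi-frame processing described above can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation; the function names and the averaging stand-in for the quality-enhancing fusion are assumptions made for illustration only.

```python
# Illustrative sketch: every period, the k frames cached in the first cache
# queue are fed to a preset RAW-domain algorithm that fuses them into one
# higher-quality third image, which the ISP then processes into the second
# image. All names here are assumptions, not the patent's code.

def preset_raw_domain_algorithm(frames):
    # stand-in for the multi-frame quality-enhancing fusion: average the frames
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

def isp_process(image):
    # stand-in for the ISP's subsequent image processing
    return image

def periodic_snapshot_pipeline(cache_queue, k):
    third_image = preset_raw_domain_algorithm(cache_queue[-k:])
    second_image = isp_process(third_image)
    return second_image

cache = [[10, 20], [12, 22], [14, 24]]           # 3 cached "frames" of 2 pixels
result = periodic_snapshot_pipeline(cache, k=3)  # [12.0, 22.0]
```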
  • the present application provides a computer-readable storage medium, the computer-readable storage medium includes computer instructions, and when the computer instructions are run on an electronic device, the electronic device is made to execute the first aspect or the second aspect and any of them.
  • the present application provides a computer program product.
• When the computer program product runs on a computer, the computer executes the method described in the first aspect or the second aspect and any possible implementation manner thereof.
  • the computer may be the electronic device described above.
• FIG. 1A is an image processing flowchart of a mobile phone video recording process;
• FIG. 1B is another image processing flowchart of a mobile phone video recording process;
  • FIG. 2 is a schematic diagram of a video viewfinder interface of a mobile phone provided by an embodiment of the present application
  • FIG. 3 is a schematic diagram of the delay time between a mobile phone receiving a snapping operation and a Sensor receiving a snapping instruction provided by an embodiment of the present application;
  • FIG. 4A is a schematic block diagram of a method for capturing images in video recording provided by an embodiment of the present application
  • FIG. 4B is a schematic block diagram of a method for capturing images in video recording provided by an embodiment of the present application.
  • FIG. 5A is a schematic diagram of a hardware structure of a mobile phone provided by an embodiment of the present application.
  • FIG. 5B is a schematic diagram of a software architecture of a mobile phone provided by an embodiment of the present application.
  • FIG. 6A is a flow chart of a method for capturing images in video recording provided by an embodiment of the present application.
  • FIG. 6B is a flow chart of another method for capturing images in video recording provided by the embodiment of the present application.
  • FIG. 7 is a schematic diagram of a mobile phone video display interface provided by an embodiment of the present application.
  • FIG. 8 is a schematic block diagram of another method for capturing images in video recording provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a first cache queue Buffer provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of another mobile phone video display interface provided by the embodiment of the present application.
  • FIG. 11 is a schematic diagram of another mobile phone video display interface provided by the embodiment of the present application.
  • FIG. 12 is a flowchart of a method corresponding to the functional block diagram shown in FIG. 8;
  • FIG. 13 is a schematic block diagram of another method for capturing images in video recording provided by the embodiment of the present application.
  • FIG. 14 is a flowchart of a method corresponding to the functional block diagram shown in FIG. 13;
  • FIG. 15 is a schematic block diagram of another method for capturing images in video recording provided by the embodiment of the present application.
  • FIG. 16 is a flowchart of a method corresponding to the functional block diagram shown in FIG. 15;
  • FIG. 17 is a flow chart of another method for capturing images in video recording provided by an embodiment of the present application.
  • FIG. 18 is a schematic diagram of the principle of caching a first image by a first buffer queue according to an embodiment of the present application.
  • FIG. 19A is a schematic diagram of a first buffer queue buffering a first image and performing image processing on the first image to obtain a second image according to an embodiment of the present application;
  • Fig. 19B is a schematic diagram of a first buffer queue buffering a first image provided by an embodiment of the present application, and performing image processing on the first image to obtain a second image;
  • FIG. 20 is a schematic block diagram of another method for capturing images in video recording provided by the embodiment of the present application.
  • FIG. 21 is a schematic block diagram of another method for capturing images in video recording provided by the embodiment of the present application.
  • FIG. 22 is a schematic block diagram of another method for capturing images in video recording provided by the embodiment of the present application.
  • Fig. 23 is a schematic diagram of the principle of generating a second image provided by the embodiment of the present application.
  • FIG. 24 is a schematic structural diagram of a chip system provided by an embodiment of the present application.
• The terms "first" and "second" are used for descriptive purposes only, and cannot be understood as indicating or implying relative importance or implicitly specifying the quantity of the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more of such features. In the description of the embodiments, unless otherwise specified, "plurality" means two or more.
• The image sensor (Sensor) of the electronic device is controlled by exposure and can continuously output images.
  • Each frame of image is processed by the image signal processor (image signal processor, ISP) or image signal processing algorithm of the electronic device, and then encoded by the encoder (ENCODER), and then the video file can be obtained.
• the original image output by the image sensor is usually a Bayer format image, and some image sensors can output images in formats such as RGGB, RGBW, CMYK, RYYB, and CMY.
  • the Bayer format image output by the image sensor of the mobile phone is taken as an example for description. It should be noted that image sensors that output images in formats such as RGGB, RGBW, CMYK, RYYB, and CMY, and other electronic devices equipped with such image sensors are also applicable to the technical solutions provided in the embodiments of the present application.
• RGGB (red, green, green, blue); RGBW (red, green, blue, white); CMYK (cyan, magenta, yellow, black); RYYB (red, yellow, yellow, blue); CMY (cyan, magenta, yellow).
• the preview image stream includes the multiple frames of preview images finally presented to the user on the display screen during the recording process of the mobile phone;
  • the video file refers to the video stream that is finally saved in the mobile phone in the format of a video file for viewing by the user after the recording is completed.
  • the image processing by the ISP of the mobile phone can be divided into processing in three image format domains: image processing in the RAW domain, image processing in the RGB domain, and image processing in the YUV domain.
• Image processing in the RAW domain can include: black level correction (BLC), linearization (Linearization), lens shading correction (LSC), defect pixel correction (DPC), RAW denoising (Denoise), automatic white balance (AWB), green channel balance (green imbalance correction, GIC), chromatic aberration correction (CAC), and other processing.
• Image processing in the RGB domain can include: demosaicing (Demosaic), color correction (CC), dynamic range compression (dynamic range control, DRC), Gamma correction, and RGB2YUV (conversion from the RGB format to the YUV format).
• Image processing in the YUV domain can include: UV downsampling, color enhancement (CE), spatial-domain noise reduction (YUVNF), color management (3DLUT), sharpening (Sharpness), and scaling (Scalar).
  • the division of "RAW domain", “RGB domain” and “YUV domain” in the ISP includes but is not limited to the above division methods.
  • demosaicing can also be included in the "RAW domain”.
  • the embodiment of the present application does not limit this.
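The three-domain division described above can be sketched as a simple staged pipeline. This is an illustrative Python sketch, not the patent's implementation; the per-domain steps are placeholders, and each stage only records which domain has run and what format it outputs, mirroring Bayer → RGB → YUV.

```python
# Illustrative sketch of the ISP's three processing domains. The comments list
# the operations named in the description above; the code itself only tracks
# format and processing history.

def raw_domain(image):
    # RAW domain: BLC, LSC, DPC, RAW denoise, AWB, GIC, CAC, ...
    return {"format": "RGB", "history": image["history"] + ["RAW"]}

def rgb_domain(image):
    # RGB domain: Demosaic, CC, DRC, Gamma correction, RGB2YUV, ...
    return {"format": "YUV", "history": image["history"] + ["RGB"]}

def yuv_domain(image):
    # YUV domain: UV downsampling, CE, YUVNF, 3DLUT, Sharpness, Scalar, ...
    return {"format": "YUV", "history": image["history"] + ["YUV"]}

def isp_process(bayer_image):
    # the ISP processes the Sensor's Bayer image domain by domain
    return yuv_domain(rgb_domain(raw_domain(bayer_image)))

frame = {"format": "Bayer", "history": []}
out = isp_process(frame)  # out["format"] == "YUV"
```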
• As shown in FIG. 1A, after the image sensor (Sensor) outputs an image, the ISP can perform image processing on the image in the "RAW domain", the "RGB domain", and the "YUV domain"; after the "YUV domain" image processing, the data can be divided into two data streams.
  • One data stream is processed by the processing algorithm 1 shown in FIG. 1A , and then the display module performs encoding or format conversion to obtain and display a preview image.
  • the other data stream is processed by the processing algorithm 2 shown in FIG. 1A , and then encoded by the encoder 1 to obtain a video file.
• As shown in FIG. 1B, the ISP can perform image processing in the "RAW domain" and the "RGB domain" on the image; after the image processing in the "RGB domain", the data can be divided into two data streams.
  • One stream of data is processed using the processing algorithm 1 shown in Figure 1B, and then the ISP performs image processing in the "YUV domain", and then the display module performs encoding or format conversion to obtain and display a preview image.
  • the other data stream is processed by the processing algorithm 2 shown in Figure 1B, and then the image processing in the "YUV domain” is performed by the ISP, and then encoded by the encoder 1 to obtain a video file.
  • the image processing of the processing algorithm 1 and the processing algorithm 2 can be performed in the RGB domain, and can also be performed in the YUV domain.
  • the ISP can use the processing algorithm 1 to process the image before converting the image from the RGB format to the YUV format. After that, the ISP can convert the image processed by the processing algorithm 1 into the YUV format, and then perform image processing in the "YUV domain" on the image.
  • the ISP can first convert the image from the RGB format to the YUV format, and then use the processing algorithm 1 to process the image in the YUV format. Afterwards, the ISP can perform image processing in the "YUV domain" on the image processed by the processing algorithm 1.
  • processing algorithm 1 may also be called a post-processing algorithm for preview images
  • processing algorithm 2 may also be called a post-processing algorithm for video files.
  • Processing algorithm 1 and processing algorithm 2 may include anti-shake processing, denoising processing, blur processing, color and brightness adjustment and other processing functions.
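The forking of one ISP output into a preview stream and a video-file stream, as in FIG. 1A, can be sketched as follows. This is an illustrative Python sketch under the assumption that each post-processing algorithm and encoder is a simple function; the function names are stand-ins, not the patent's code.

```python
# Illustrative sketch of the two data streams: one ISP output is copied into a
# preview path (processing algorithm 1 + display-module encoding) and a
# video-file path (processing algorithm 2 + encoder 1).

def processing_algorithm_1(image):   # preview post-processing (e.g. anti-shake)
    return image + ["algo1"]

def processing_algorithm_2(image):   # video-file post-processing
    return image + ["algo2"]

def display_encode(image):           # display module: encoding or format conversion
    return image + ["display-encode"]

def encoder_1(image):                # encoder 1: produces the video file stream
    return image + ["encoder1"]

def split_streams(isp_output):
    preview = display_encode(processing_algorithm_1(list(isp_output)))
    video = encoder_1(processing_algorithm_2(list(isp_output)))
    return preview, video

preview, video = split_streams(["isp-output"])
```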
• The image output by the Sensor is an image in the Bayer format (referred to as a Bayer image).
  • the ISP "RAW domain” input image is a Bayer format image (ie Bayer image)
  • the ISP "RAW domain” output image is an RGB format image (abbreviated as RGB image).
  • the input image of the “RGB domain” of the ISP is an image in RGB format (ie, an RGB image), and the output image of the “RGB domain” of the ISP is an image in the YUV format (referred to as a YUV image).
  • the "YUV domain” input image of ISP is an image in YUV format (that is, YUV image)
  • the output image of "YUV domain” of ISP can be encoded (ENCODE) to obtain a preview image or video file.
  • Bayer, RGB and YUV are three expression formats of images.
• For RGB images and YUV images, reference may be made to related content in conventional technologies, and details will not be repeated here.
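The RGB2YUV conversion mentioned above can be illustrated with the standard BT.601 full-range equations; the coefficients an actual ISP uses may differ, so this is a reference sketch rather than the patent's conversion.

```python
# BT.601 full-range RGB -> YUV conversion: Y carries luminance, while U and V
# carry chrominance offsets centered at 128 (for 8-bit images).

def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.5 * b + 128
    v = 0.5 * r - 0.419 * g - 0.081 * b + 128
    return y, u, v

# A pure white pixel has full luminance and neutral chrominance:
y, u, v = rgb_to_yuv(255, 255, 255)  # Y = 255, U ≈ 128, V ≈ 128
```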
• After the Sensor outputs images, the images processed by the ISP and the encoders (ENCODER, such as the encoder of the display module and the encoder 1) can be used to record video. Therefore, the data stream in the whole process in which the Sensor outputs images and the ISP and the encoders (ENCODER) perform image processing (such as the data stream of the video file and the data stream of the preview image) is called the video stream.
• The manners in which the mobile phone processes images during video recording to obtain preview image streams and video files include but are not limited to the manners shown in FIG. 1A and FIG. 1B; other processing manners will not be described in this embodiment of the present application.
  • the processing method shown in FIG. 1A is taken as an example to introduce the method in the embodiment of the present application.
  • the mobile phone can capture images in response to user operations.
  • the mobile phone may display the video viewfinder interface 201 shown in FIG. 2 .
  • the viewfinder interface 201 of the video recording includes a snapshot shutter 202, which is used to trigger the mobile phone to capture images during the video recording and save them as photos.
  • the mobile phone can capture an image in response to the user's click operation on the capture button 202 shown in FIG. 2 .
  • what the user wants the mobile phone to capture is the image collected by the camera at the moment when the user clicks the capture shutter 202 .
• the first frame of image collected when the Snapshot program of the mobile phone receives the capture command may be used as the captured image (i.e., the seventh frame of image shown in FIG. 3).
• The upper-layer application is, for example, the camera application corresponding to the video viewfinder interface 201 shown in FIG. 2; the user's snapping operation is, for example, the user's click operation on the snapshot shutter 202.
• The Sensor will not stop outputting Bayer images. Therefore, the Sensor may have output multiple frames of Bayer images between the time when the upper-layer application receives the user's capture operation and the time when the Snapshot program receives the capture command.
• For example, as shown in FIG. 3, when the image sensor (Sensor) outputs the third frame of Bayer image, the upper-layer application receives the capture operation; by the time the Sensor outputs the seventh frame of Bayer image, the capture command has been passed to the Snapshot program.
• That is, the seventh frame of image is not the frame of image collected at the moment when the user clicks the snapshot shutter 202.
• Among the images shown in FIG. 3, the first frame of image is the earliest frame output by the Sensor, and the eighth frame of image is the latest frame output by the Sensor.
  • the image sensor (Sensor) can sequentially expose and output the 8 frames of images shown in FIG. 3 starting from the first frame of images.
• the mobile phone can intercept, from the video stream (such as the data stream of the video file and the data stream of the preview image), a frame of image corresponding to the moment of the user's snapping operation, save it as a snapped image, and present it to the user as a photo.
  • An embodiment of the present application provides a method for capturing images during video recording, which can capture images during video recording and improve the image quality of the captured images.
• an electronic device such as a mobile phone may cache the Bayer images output by Sensor exposure in a first buffer queue (Buffer).
  • the first buffer queue can buffer multiple frames of Bayer images.
• the frame selection module of the mobile phone may select a capture frame from the first buffer queue (that is, the q-th frame of the first image collected by the camera when the user inputs the capture operation). In this way, the mobile phone can obtain, from the first cache queue, the image collected by the camera when the user inputs the capture operation.
  • the electronic device can also perform image processing on the first image of the qth frame to obtain a captured image, which can improve the image quality of the captured image.
  • an image that meets the needs of the user can be captured during the video recording process, and the image quality of the captured image can be improved.
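The caching and frame-selection idea above can be sketched as a fixed-capacity FIFO. This is a minimal illustrative sketch; the class name, fields, and the closest-timestamp selection criterion are assumptions for illustration, not the patent's frame-selection logic.

```python
# Minimal sketch of the first buffer queue: a fixed-capacity FIFO caching the
# most recent Bayer frames output by the Sensor, from which the frame-selection
# module can pick the frame closest in time to the user's snap operation.
from collections import deque

class FirstBufferQueue:
    def __init__(self, capacity):
        self.frames = deque(maxlen=capacity)  # oldest frames drop out automatically

    def push(self, frame_id, timestamp):
        self.frames.append((frame_id, timestamp))

    def select_snap_frame(self, snap_time):
        # choose the cached frame whose timestamp is closest to the snap moment
        return min(self.frames, key=lambda f: abs(f[1] - snap_time))

buf = FirstBufferQueue(capacity=5)
for i in range(8):                     # Sensor outputs frames 1..8
    buf.push(i + 1, timestamp=i * 33)  # ~30 fps: one frame every ~33 ms
frame_id, ts = buf.select_snap_frame(snap_time=100)  # snap at t = 100 ms
```

Even though the Sensor has already output later frames by the time the capture command arrives, the queue still holds the frame from the snap moment, which is why the selected frame matches the user's intent.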
  • the above image processing may include processing of a preset RAW domain image processing algorithm and ISP image processing.
  • the electronic device can also use a preset RAW domain image processing algorithm and an ISP hardware module to process the captured frame selected by the frame selection module to obtain a captured image.
  • the above image processing may also include encoding processing.
  • the image processed by the ISP hardware module may be encoded by an encoder (such as the encoder 2 ) to obtain a snapshot image.
  • the above-mentioned encoding processing may also be integrated in an ISP hardware module for implementation.
  • the encoding processing is independent of the ISP image processing as an example, and the method in the embodiment of the present application is introduced.
• In this solution, algorithm processing by the preset RAW domain image processing algorithm is added.
  • the processing effect is better, which helps to improve the image quality of the captured image.
  • the preset RAW domain image processing algorithm is a deep learning network for image quality enhancement in the RAW domain.
  • the preset RAW domain image processing algorithm may also be called a preset image quality enhancement algorithm, a preset image quality enhancement algorithm model, or a preset RAW domain AI model.
• The aforementioned preset RAW domain image processing algorithm can run on a graphics processing unit (GPU), a neural-network processing unit (NPU), or another processor of the electronic device capable of running a neural network model. Any one of the above-mentioned processors may load the preset RAW domain image processing algorithm from the memory before running it.
  • the preset RAW domain image processing algorithm may be a software image processing algorithm.
  • the preset RAW domain image processing algorithm may be a software algorithm in a hardware abstraction layer (hardware abstraction layer, HAL) algorithm library of the mobile phone.
  • the preset RAW domain image processing algorithm may be a hardware image processing algorithm.
  • the preset RAW domain image processing algorithm may be a hardware image processing algorithm implemented by calling the "RAW domain” image processing algorithm capability in the ISP.
  • the preset RAW domain image processing algorithm may be a hardware image processing algorithm implemented by calling the "RAW domain” and “RGB domain” image processing algorithm capabilities in the ISP.
  • the preset RAW domain image processing algorithm may be a hardware image processing algorithm implemented by calling the "RAW domain”, "RGB domain” and "YUV domain” image processing algorithm capabilities in the ISP.
  • the preset RAW domain image processing algorithm may also be referred to as a preset image processing algorithm.
  • it is called a preset RAW domain image processing algorithm because the input of the preset RAW domain image processing algorithm is a RAW domain image.
  • the output of the preset RAW domain image processing algorithm may be an image in the RAW domain or an image in the RGB domain.
  • the encoders, encoder 1 and encoder 2 in the display module shown in FIG. 1A or FIG. 1B may be three different encoders.
  • the mobile phone can use three different encoders to perform encoding or format conversion to obtain the above-mentioned preview image, video file and snapshot image.
  • the encoder, encoder 1 and encoder 2 in the above display module may be the same encoder.
  • An encoder can include multiple encoding units.
• the mobile phone can use three different encoding units in one encoder to perform encoding or format conversion respectively to obtain the above preview image, video file, and snapped image.
  • the encoder and encoder 1 in the display module may be two different encoding units in the same encoder, and the encoder 2 may be another encoder.
  • the encoding modes of different encoders may be the same or different.
  • the encoding modes of different coding units of the same encoder may be the same or different. Therefore, the image formats output by the encoder in the display module and the encoder 1 may be the same or different.
• the image output by the encoder in the display module and by the encoder 1 can be an image in any format, such as Joint Photographic Experts Group (JPEG) or Tag Image File Format (TIFF).
• the electronic device in the embodiment of the present application may be a mobile phone, a tablet computer, a smart watch, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR)/virtual reality (VR) device, etc.
• The embodiment of the present application does not particularly limit the specific form of the electronic device.
  • FIG. 5A is a schematic structural diagram of an electronic device 500 provided in an embodiment of the present application.
  • the electronic device 500 may include: a processor 510, an external memory interface 520, an internal memory 521, a universal serial bus (universal serial bus, USB) interface 530, a charging management module 540, a power management module 541, a battery 542, antenna 1, antenna 2, mobile communication module 550, wireless communication module 560, audio module 570, speaker 570A, receiver 570B, microphone 570C, earphone jack 570D, sensor module 580, button 590, motor 591, indicator 592, camera 593, a display screen 594, and a subscriber identification module (subscriber identification module, SIM) card interface 595, etc.
  • the above-mentioned sensor module 580 may include sensors such as pressure sensor, gyroscope sensor, air pressure sensor, magnetic sensor, acceleration sensor, distance sensor, proximity light sensor, fingerprint sensor, temperature sensor, touch sensor, ambient light sensor and bone conduction sensor.
  • the structure shown in this embodiment does not constitute a specific limitation on the electronic device 500 .
  • the electronic device 500 may include more or fewer components than shown, or combine certain components, or separate certain components, or arrange different components.
  • the illustrated components can be realized in hardware, software or a combination of software and hardware.
• The processor 510 may include one or more processing units. For example, the processor 510 may include an application processor (AP), a modem processor, a GPU, an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or an NPU, etc. Different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 500 .
  • the controller can generate an operation control signal according to the instruction opcode and timing signal, and complete the control of fetching and executing the instruction.
  • a memory may also be provided in the processor 510 for storing instructions and data.
  • the memory in processor 510 is a cache memory.
• The memory may hold instructions or data that the processor 510 has just used or uses cyclically. If the processor 510 needs to use the instructions or data again, it can call them directly from the memory. This avoids repeated access, reduces the waiting time of the processor 510, and thus improves the efficiency of the system.
  • processor 510 may include one or more interfaces. It can be understood that the interface connection relationship between the modules shown in this embodiment is only for schematic illustration, and does not constitute a structural limitation of the electronic device 500 . In other embodiments, the electronic device 500 may also adopt different interface connection methods in the above embodiments, or a combination of multiple interface connection methods.
  • the charging management module 540 is used for receiving charging input from the charger. While the charging management module 540 is charging the battery 542 , it can also supply power to the electronic device through the power management module 541 .
  • the power management module 541 is used for connecting the battery 542 , the charging management module 540 and the processor 510 .
  • the power management module 541 receives the input of the battery 542 and/or the charging management module 540, and supplies power for the processor 510, the internal memory 521, the external memory, the display screen 594, the camera 593, and the wireless communication module 560, etc.
  • the wireless communication function of the electronic device 500 can be realized by the antenna 1, the antenna 2, the mobile communication module 550, the wireless communication module 560, the modem processor and the baseband processor.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • the antenna 1 of the electronic device 500 is coupled to the mobile communication module 550, and the antenna 2 is coupled to the wireless communication module 560, so that the electronic device 500 can communicate with the network and other devices through wireless communication technology.
  • the electronic device 500 implements a display function through a GPU, a display screen 594, and an application processor.
  • the GPU is a microprocessor for image processing, connected to the display screen 594 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 510 may include one or more GPUs that execute program instructions to generate or alter display information.
  • the display screen 594 is used to display images, videos and the like.
  • the display screen 594 includes a display panel.
• The display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Miniled, a MicroLed, a Micro-oLed, quantum dot light-emitting diodes (QLED), etc.
  • the electronic device 500 can realize the shooting function through an ISP, a camera 593 , a video codec, a GPU, a display screen 594 , and an application processor.
  • the ISP is used for processing the data fed back by the camera 593 .
• Light is transmitted to the photosensitive element of the camera through the lens, the optical signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing, so that it is converted into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin color.
  • ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be located in the camera 593 .
  • Camera 593 is used to capture still images or video.
  • the object generates an optical image through the lens and projects it to the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the light signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other image signals.
  • the electronic device 500 may include N cameras 593, where N is a positive integer greater than 1.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 500 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • Electronic device 500 may support one or more video codecs.
  • the electronic device 500 can play or record videos in various encoding formats, for example: moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4 and so on.
  • the NPU is a neural-network (NN) computing processor.
  • Applications such as intelligent cognition of the electronic device 500 can be realized through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
  • the external memory interface 520 can be used to connect an external memory card, such as a Micro SD card, so as to expand the storage capacity of the electronic device 500.
  • the external memory card communicates with the processor 510 through the external memory interface 520 to implement a data storage function. Such as saving music, video and other files in the external memory card.
  • the internal memory 521 may be used to store computer-executable program codes including instructions.
  • the processor 510 executes various functional applications and data processing of the electronic device 500 by executing instructions stored in the internal memory 521 .
  • the processor 510 may execute instructions stored in the internal memory 521, and the internal memory 521 may include a program storage area and a data storage area.
  • the stored program area can store an operating system, at least one application program required by a function (such as a sound playing function, an image playing function, etc.) and the like.
  • the storage data area can store data created during the use of the electronic device 500 (such as audio data, phonebook, etc.) and the like.
  • the internal memory 521 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (universal flash storage, UFS) and the like.
  • The electronic device 500 can implement audio functions, such as music playback and recording, through an audio module 570, a speaker 570A, a receiver 570B, a microphone 570C, an earphone interface 570D, and an application processor.
  • the keys 590 include a power key, a volume key and the like.
  • the motor 591 can generate a vibrating reminder.
  • The indicator 592 can be an indicator light, which can be used to indicate the charging status and changes of the battery capacity, and can also be used to indicate messages, missed calls, notifications, and so on.
  • the SIM card interface 595 is used for connecting a SIM card.
  • The SIM card can be connected to or separated from the electronic device 500 by inserting it into or pulling it out of the SIM card interface 595.
  • the electronic device 500 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • The SIM card interface 595 can support a Nano SIM card, a Micro SIM card, a SIM card, etc.
  • FIG. 5B is a block diagram of the software structure of the mobile phone according to the embodiment of the present application.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate through software interfaces.
  • The Android™ system is divided into five layers, which from top to bottom are the application layer, the application framework layer, the Android runtime and system library, the hardware abstraction layer (HAL), and the kernel layer.
  • This document uses the Android system as an example; the solution of the present application can also be implemented on other operating systems, such as the Hongmeng™ system or the iOS™ system.
  • the application layer can consist of a series of application packages.
  • applications such as call, memo, browser, contacts, gallery, calendar, map, bluetooth, music, video, and short message can be installed in the application layer.
  • An application with a shooting function (for example, a camera application) may be installed in the application layer. When other applications need to use the shooting function, they can also call the camera application to realize the shooting function.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include a window manager, a content provider, a view system, a resource manager, a notification manager, etc., which are not limited in this embodiment of the present application.
  • the window manager described above is used to manage window programs.
  • the window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, etc.
  • the above-mentioned content providers are used to store and obtain data, and make these data accessible to applications. Said data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebook, etc.
  • the above view system can be used to build the display interface of the application.
  • Each display interface can consist of one or more controls.
  • controls may include interface elements such as icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars, and widgets (Widgets).
  • the resource manager mentioned above provides various resources for the application, such as localized strings, icons, pictures, layout files, video files and so on.
  • The above-mentioned notification manager enables the application to display notification information in the status bar; it can be used to convey notification-type messages, and the notification can automatically disappear after a short stay without user interaction.
  • For example, the notification manager is used to notify of download completion, message reminders, and so on.
  • The notification manager can also present notifications that appear in the top status bar of the system in the form of a chart or scroll-bar text, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window, for example, prompting text information in the status bar, emitting a prompt sound, vibrating, or flashing an indicator light.
  • the Android runtime includes a core library and a virtual machine.
  • the Android runtime is responsible for the scheduling and management of the Android system.
  • The core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in virtual machines.
  • The virtual machine executes the Java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • a system library can include multiple function modules. For example: surface manager (surface manager), media library (Media Libraries), 3D graphics processing library (eg: OpenGL ES), 2D graphics engine (eg: SGL), etc.
  • the surface manager is used to manage the display subsystem, and provides the fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of various commonly used audio and video formats, as well as still image files, etc.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing, etc.
  • 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is located below the HAL and is the layer between hardware and software.
  • the kernel layer includes at least a display driver, a camera driver, an audio driver, a sensor driver, etc., which are not limited in this embodiment of the present application.
  • a camera service may be set in the application framework layer.
  • the camera application can start the Camera Service by calling the preset API.
  • Camera Service can interact with Camera HAL in Hardware Abstraction Layer (HAL) during operation.
  • Camera HAL is responsible for interacting with hardware devices (such as cameras) that realize shooting functions in mobile phones.
  • On the one hand, the Camera HAL hides the implementation details of the related hardware devices (such as specific image processing algorithms); on the other hand, it can provide interfaces for calling the related hardware devices.
  • When the camera application is running, the related control commands issued by the user can be sent to the Camera Service.
  • The Camera Service can send the received control command to the Camera HAL, so that the Camera HAL can call the camera driver in the kernel layer according to the received control command, and the camera driver drives hardware devices such as the camera to collect image data in response to the control command.
  • the camera can transmit each frame of image data collected to the Camera HAL through the camera driver at a certain frame rate.
  • For the transfer process of the control command inside the operating system, refer to the specific transfer process of the control flow in FIG. 5B.
  • After the Camera Service receives the above control command, it can determine the shooting strategy at this time according to the received control command.
  • the shooting strategy sets specific image processing tasks that need to be performed on the collected image data. For example, in the preview mode, Camera Service can set image processing task 1 in the shooting strategy to implement the face detection function. For another example, if the user enables the beautification function in the preview mode, the Camera Service can also set the image processing task 2 in the shooting strategy to realize the beautification function. Furthermore, Camera Service can send the determined shooting strategy to Camera HAL.
  • When the Camera HAL receives each frame of image data collected by the camera, it can perform the corresponding image processing tasks on the above image data according to the shooting strategy issued by the Camera Service, obtaining each frame of the captured picture after image processing. For example, the Camera HAL can perform image processing task 1 on each frame of image data received according to shooting strategy 1 to obtain each corresponding frame of the captured picture. When shooting strategy 1 is updated to shooting strategy 2, the Camera HAL can perform image processing task 2 on each frame of image data received according to shooting strategy 2 to obtain each corresponding frame of the captured picture.
  • The Camera HAL can report each frame of the captured picture after image processing to the camera application through the Camera Service; the camera application can display each frame of the captured picture on the display interface, or save each frame of the captured picture in the mobile phone in the form of a photo or video.
  • For the transfer process of the above-mentioned captured picture inside the operating system, refer to the specific transfer process of the data stream in FIG. 5B.
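The control-command flow described above (camera application → Camera Service → Camera HAL → camera driver) and the per-frame application of a shooting strategy can be pictured with a minimal sketch. All names here (CameraService, CameraHal, and the task functions) are illustrative stand-ins, not the actual Android interfaces.

```python
# Illustrative sketch of the Camera Service / Camera HAL split described
# above. All class and function names are hypothetical stand-ins; the real
# Android components are far more involved.

def face_detection(frame):       # stands in for "image processing task 1"
    return f"{frame}+faces"

def beautification(frame):       # stands in for "image processing task 2"
    return f"{frame}+beauty"

class CameraService:
    """Turns a control command into a shooting strategy (a list of tasks)."""
    def strategy_for(self, command):
        if command == "preview":
            return [face_detection]
        if command == "preview+beauty":
            return [face_detection, beautification]
        return []

class CameraHal:
    """Applies the current shooting strategy to each incoming frame."""
    def __init__(self):
        self.strategy = []

    def set_strategy(self, strategy):
        self.strategy = strategy

    def on_frame(self, frame):
        for task in self.strategy:
            frame = task(frame)
        return frame

service = CameraService()
hal = CameraHal()
hal.set_strategy(service.strategy_for("preview+beauty"))
print(hal.on_frame("frame0"))  # frame0+faces+beauty
```

Updating the strategy (as when strategy 1 is replaced by strategy 2) is just another `set_strategy` call; frames received afterwards are processed with the new task list.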
  • the working principle of the method implemented by the various software layers in the mobile phone in the embodiment of the present application is introduced here with reference to FIG. 5B .
  • the camera application When the camera application is running in video recording mode, it can send the capture command issued by the user to the Camera Service.
  • Camera HAL can call the camera driver in the kernel layer according to the video recording command received before, and the camera driver drives the camera and other hardware devices to respond to the video recording command to collect image data.
  • the camera can transmit each frame of image data collected to the Camera HAL through the camera driver at a certain frame rate.
  • the data stream composed of each frame of image transmitted by the camera driver to the Camera HAL based on the recording command may be the video stream described in the embodiment of the present application (such as the data stream of the video file and the data stream of the preview image).
  • After the Camera Service receives the above-mentioned capture command, it can determine, according to the received capture command, that shooting strategy 3 at this time is to capture images during video recording.
  • the specific image processing task 3 that needs to be performed on the collected image data is set in the shooting strategy, and the image processing task 3 is used to realize the capture function in video recording.
  • Camera Service can send the determined shooting strategy 3 to Camera HAL.
  • When the Camera HAL receives each frame of image data collected by the camera, it can execute the corresponding image processing task 3 on the above image data according to shooting strategy 3 issued by the Camera Service to obtain the corresponding captured image.
  • each frame of image output by the exposure of the image sensor (Sensor) of the camera may be buffered in the first buffer queue (Buffer).
  • the Camera HAL can select a capture frame from the Buffer (that is, the image captured by the camera when the user inputs the capture operation).
  • the mobile phone can obtain the image captured by the camera when the user inputs the capture operation from the first cache queue.
  • the first buffer queue (Buffer) can be set on any layer of the mobile phone software system, such as the first buffer queue (Buffer) can be set in the memory area accessed by the Camera HAL through the software interface.
  • The HAL may also include a preset RAW domain image processing algorithm. The Camera HAL can call, through a CSI, the preset RAW domain image processing algorithm to process the captured frame and the frames adjacent to the captured frame to obtain a processed image frame. The above CSI may be a software interface between the Buffer and the preset RAW domain image processing algorithm.
  • The Camera HAL can call the camera driver in the kernel layer according to the previously received capture command, and the camera driver drives the ISP and other hardware devices in the camera to perform hardware processing on the processed image frame in response to the capture command, obtaining a corresponding frame of captured image. Subsequently, the Camera HAL can report the captured image after image processing to the camera application through the Camera Service, and the camera application can save the captured image in the mobile phone in the form of a photo.
  • An embodiment of the present application provides a method for capturing an image in video recording, and the method can be applied to a mobile phone, and the mobile phone includes a camera. As shown in FIG. 6A, the method may include S601-S607.
  • the mobile phone receives a first operation of the user.
  • the first operation is used to trigger the mobile phone to start recording video.
  • the mobile phone may display the viewfinder interface 701 shown in FIG. 7 .
  • The viewfinder interface 701 is the viewfinder interface before the mobile phone has started recording.
  • the viewfinder interface 701 of the video includes a button 702 of “Start Video”.
  • The above-mentioned first operation may be the user's click operation on the "Start Video" button 702, which is used to trigger the mobile phone to start recording video.
  • the mobile phone displays a viewfinder interface.
  • the viewfinder interface displays a preview image stream.
  • the preview image stream includes n frames of preview images, and the preview images are obtained based on n frames of first images collected by the camera after the mobile phone receives the first operation.
  • the viewfinder interface also includes a capture shutter, which is used to trigger the mobile phone to take a snapshot.
  • Take the case where the first operation is the user's click operation on the "Start Video" button 702 as an example.
  • the display screen of the mobile phone can display the viewfinder interface 703 shown in FIG. 7 .
  • the viewfinder interface 703 is a viewfinder interface where the mobile phone is recording a video.
  • the viewfinder interface 703 may display a preview image stream.
  • the preview image stream includes multiple frames of preview images that are finally presented to the user on the display screen during the video recording process of the mobile phone.
  • the viewfinder interface 703 includes a preview image 704 obtained based on the above-mentioned first image.
  • the preview image 704 is a frame of preview image in the preview image stream shown in FIG. 8 .
  • the preview image 704 is obtained based on the first image collected by the camera after the mobile phone receives the first operation.
  • the embodiment of the present application introduces a method for the mobile phone to obtain the preview image 704 from the first image.
  • the mobile phone may process the first image to obtain the preview image 704 according to the processing method of the preview image in the preview image stream shown in FIG. 1B , FIG. 4A , FIG. 4B or FIG. 8 .
  • The ISP of the mobile phone can perform the above-mentioned RAW domain image processing, RGB domain image processing, and YUV domain image processing on each frame of the first image collected by the camera.
  • For the method for the mobile phone to obtain the preview image 704 from the first image, refer to the processing method of the "preview image stream" shown in FIG. 4B or FIG. 8.
  • the image sensor (Sensor) of the mobile phone is controlled by exposure and can continuously output Bayer images.
  • Each frame of Bayer image is processed by RAW image processing by the ISP of the mobile phone to obtain an RGB image, and the RGB image is processed by the ISP in the RGB domain to obtain a YUV image.
  • the YUV image is processed by the processing algorithm 1, and then the ISP performs image processing in the YUV domain, and then sends it to the encoder 1 (ENCODER) for encoding to obtain a preview image 704 .
  • the processed multi-frame preview images 704 may form a preview video stream (ie, preview image stream).
  • the viewfinder interface 703 also includes a snapshot shutter 702 .
  • the snapshot shutter 702 is used to trigger the mobile phone to take a snapshot.
  • the "snapshot by mobile phone" mentioned in the embodiment of the present application refers to: during the video recording process, the mobile phone snaps a frame of image in the video to obtain a photo.
  • the snapshot shutter 702 is used to trigger the mobile phone to capture an image during video recording to obtain a photo. It is conceivable that some wonderful pictures may be collected during the process of recording video (that is, video recording) by the mobile phone.
  • the user may hope that the mobile phone can capture the above-mentioned wonderful picture, and save it as a photo for display to the user.
  • the user can click the above-mentioned snapping shutter 702 to realize the function of snapping wonderful pictures during the video recording process.
  • the mobile phone can cache the Sensor exposure output Bayer image in a first buffer queue (Buffer).
  • the Bayer image output by the Sensor can also be cached in the first cache queue.
  • the mobile phone can acquire this frame of image from the first cache queue.
  • the mobile phone may store the above n frames of first images in a first buffer queue (Buffer) of the electronic device.
  • the mobile phone may also execute S603.
  • the mobile phone caches the first image captured by the camera in the first cache queue.
  • The first cache queue caches n frames of first images collected by the camera, where n ≥ 1 and n is an integer.
  • the mobile phone may cache the first image captured by the camera in the first buffer queue (Buffer) shown in FIG. 4A or FIG. 4B .
  • the first buffer queue may buffer n frames of first images collected by the camera on a first-in-first-out basis.
  • the tail of the first cache queue can perform an enqueue operation for inserting the first image; the queue head of the first cache queue can perform a dequeue operation for deleting the first image.
  • If n frames of the first image have already been cached in the first buffer queue, then each time a frame of the first image is inserted at the tail of the first buffer queue, a frame of the first image is deleted at the head of the first buffer queue.
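The first-in-first-out behaviour of the first buffer queue can be sketched as a fixed-capacity queue, where `capacity` plays the role of n. This is a hedged illustration of the caching behaviour, not the actual Buffer implementation.

```python
from collections import deque

# Sketch of the first buffer queue (Buffer): at most `capacity` frames are
# cached; inserting at the tail when full evicts the frame at the head (FIFO).
class FrameBuffer:
    def __init__(self, capacity):
        self.frames = deque(maxlen=capacity)  # deque drops from the head when full

    def enqueue(self, frame):
        self.frames.append(frame)             # tail insert; head eviction is implicit

    def snapshot_candidates(self):
        return list(self.frames)

buf = FrameBuffer(capacity=3)
for i in range(5):                            # frames 0..4 arrive; 0 and 1 are evicted
    buf.enqueue(i)
print(buf.snapshot_candidates())              # [2, 3, 4]
```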
  • the mobile phone receives the user's third operation on the snapshot shutter.
  • the above-mentioned third operation may be a user's single-click operation on the snapshot shutter.
  • the third operation may be the user's single-click operation on the snapshot shutter shown in FIG. 7 .
  • the third operation may be the user's continuous click operation on the capture shutter.
  • each click operation of the snapshot shutter is used to trigger the mobile phone to execute S605 once. That is to say, the single-click operation of the snap shutter is used to trigger the mobile phone to snap a photo.
  • the continuous click operation of the snap shutter is used to trigger the mobile phone to snap multiple photos.
  • the method for capturing multiple photos by the mobile phone during the video recording process is similar to the method for capturing one photo, and will not be repeated here.
  • the first image of the qth frame is a frame of the first image collected by the camera when the mobile phone receives the third operation.
  • the time when the qth frame of the first image is output by the image sensor of the camera is the closest to the time when the mobile phone receives the third operation.
  • the above-mentioned qth frame of the first image may also be an image with the highest definition among the n frames of first images.
  • The method of the embodiment of the present application is introduced by taking as an example the case where the time at which the qth frame of the first image is output by the image sensor of the camera is closest to the time at which the mobile phone receives the third operation.
  • the Camera HAL in the HAL of the mobile phone may include a frame selection module. After the Camera HAL receives the capture instruction from the Camera Service, it can select the first image of the qth frame (that is, the capture frame, also called the reference frame) from the n frames of the first image cached in the first buffer queue Buffer.
  • each frame of the first image above corresponds to a piece of time information
  • the time information records the time when the image sensor outputs the corresponding first image.
  • the time information may also be called a time stamp.
  • The mobile phone may record the time when the mobile phone receives the third operation (that is, the time when the third operation occurs).
  • the time when the mobile phone receives the third operation (that is, the snapshot operation) may be recorded by a hardware clock of the mobile phone (such as a hardware clock used to record the occurrence time of a touch event on the touch screen).
  • The mobile phone (such as the frame selection module mentioned above) can select, from the first buffer queue Buffer, the first image whose time stamp is closest to the time at which the mobile phone received the third operation, as the qth frame of the first image (that is, the snapshot frame, also referred to as the reference frame).
  • It should be noted that the clock in the mobile phone that records the occurrence time of the third operation is synchronized with the clock used by the Sensor to record the output time of the first image; alternatively, the two may be the same system clock.
  • the mobile phone may cache the Bayer image output by the Sensor exposure in a first buffer queue Buffer.
  • The first buffer queue can buffer multiple frames of Bayer images. In this way, even if there is a delay, as shown in FIG. 3, between receiving the user's snapping operation and the Snapshot program receiving the snapping command, the Bayer image output by the Sensor when the user's snapping operation is received is still cached in the first cache queue.
  • the frame selection module of the mobile phone can select the capture frame from the Buffer (that is, the image captured by the camera when the user inputs the capture operation). In this way, the mobile phone can obtain the image captured by the camera when the user inputs the capture operation from the first cache queue.
  • On some platforms, the Sensor exposure end time may be used as the time stamp; on other platforms, the Sensor exposure start time may be used as the time stamp, which is not limited in this embodiment of the present application.
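Under the assumptions above (frame time stamps and the snapping-operation time come from the same or synchronized clocks), the frame selection module's choice of the qth frame can be sketched as a closest-timestamp search; the `Frame` structure and the millisecond values are hypothetical.

```python
from collections import namedtuple

# Sketch of the frame selection module: each cached first image carries a
# time stamp recorded when the Sensor output it; the snapshot frame is the
# cached frame whose time stamp is closest to the snapping-operation time.
Frame = namedtuple("Frame", ["timestamp_ms", "data"])

def select_snapshot_frame(cached, snap_time_ms):
    return min(cached, key=lambda f: abs(f.timestamp_ms - snap_time_ms))

cached = [Frame(100, "f1"), Frame(133, "f2"), Frame(166, "f3"), Frame(200, "f4")]
ref = select_snapshot_frame(cached, snap_time_ms=170)
print(ref.data)  # f3 — its time stamp (166 ms) is closest to 170 ms
```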
  • the mobile phone performs image processing on the first image of the qth frame stored in the first cache queue to obtain a snapshot image.
  • n may be equal to 1.
  • One frame of the first image may be buffered in the first buffer queue. In this way, the mobile phone only needs to perform image processing on one frame of the first image when executing S605.
  • n may be greater than 1.
  • multiple frames of the first image may be cached in the first cache queue.
  • S605 may include: the mobile phone performs image processing on the m frames of the first image stored in the first buffer queue to obtain a snapshot image.
  • The m frames of the first image include the above-mentioned qth frame of the first image, where m ≥ 1 and m is an integer.
  • the mobile phone may perform image processing on one or more frames of the first image.
  • In the case of m ≥ 2, the mobile phone can perform image processing on multiple frames of the first image, and the images in the m frames of the first image other than the qth frame of the first image can be used to enhance the image quality of the capture frame (that is, the qth frame of the first image, also referred to as the reference frame).
  • the image processing described in S605 may include: image processing in the RAW domain and ISP image processing.
  • the image processing in the RAW domain is image processing performed in the RAW color space.
  • the ISP image processing is the image processing performed by the ISP of the mobile phone. After the above image processing, the image quality of the captured image is better than the image quality of the first image of the qth frame.
  • the above image processing may include: RAW domain image processing, ISP image processing and encoding processing. That is to say, the encoding process can be integrated in the ISP image processing, or can be independent of the ISP image processing.
  • the method in the embodiment of the present application is introduced by taking encoding processing independent of ISP image processing as an example.
  • the encoding process specifically refers to encoding an image by using an encoder.
  • the above-mentioned image processing in the RAW domain can be realized by preset RAW domain image processing algorithms.
  • ISP image processing can be realized through the ISP of the mobile phone.
  • In response to the third operation, the mobile phone performs image processing on the qth frame of the first image by using the preset RAW domain image processing algorithm and the ISP to obtain the snapshot image; the method may include S605a-S605b.
  • S605 may include S605a-S605b.
  • the mobile phone takes m frames of the first image as input, runs a preset RAW domain image processing algorithm, and obtains the second image.
  • the preset RAW domain image processing algorithm has the function of improving image quality.
  • the preset RAW domain image processing algorithm integrates at least one image processing function in the RAW domain, RGB domain or YUV domain image processing functions, and is used to improve image quality before the ISP performs image processing.
  • m may be equal to 1. That is to say, the m frames of the first image are the above-mentioned qth frame of the first image.
  • The mobile phone uses the above-mentioned qth frame of the first image as input to run the preset RAW domain image processing algorithm to obtain a second image with higher quality.
  • the preset RAW domain image processing algorithm is single-frame input and single-frame output.
  • However, parameters such as data integrity and texture in a single frame of image are limited, and running the preset RAW domain image processing algorithm with a single frame of image as input cannot effectively improve the image quality of that frame.
  • m may be greater than 1.
  • the mobile phone may use the qth frame of the first image and at least one frame of image adjacent to the qth frame of the first image as input, and run a preset RAW domain image processing algorithm. That is, the m frames of the first image including the qth frame of the first image among the n frames of the first image may be used as input to run the preset RAW domain image processing algorithm.
  • the preset RAW domain image processing algorithm is an image processing algorithm with multi-frame input and single-frame output.
  • Images adjacent to the qth frame of the first image can enhance the image quality of the capture frame (that is, the qth frame of the first image, also referred to as the reference frame); this is beneficial for obtaining information such as noise and texture, and can further improve the quality of the second image.
  • the aforementioned m frames of first images are m adjacent frames of images in the first buffer queue.
  • the m frames of the first image may also be m frames of images that are not adjacent but include the qth frame of the first image among the n frames of the first image buffered in the first buffer queue.
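Assembling the m input frames (whether adjacent frames or simply any m cached frames containing the qth frame) can be sketched as picking a window of the cached frames around the reference index; the clamping behaviour at the ends of the queue is an illustrative assumption, not taken from the source.

```python
# Sketch: pick m adjacent frames from the n cached frames such that the
# window always contains the reference frame at index q. Clamping the
# window at the ends of the queue is an illustrative choice.
def pick_input_frames(frames, q, m):
    n = len(frames)
    start = max(0, min(q - m // 2, n - m))   # clamp so the window stays in range
    return frames[start:start + m]

frames = ["f0", "f1", "f2", "f3", "f4"]      # n = 5 cached first images
print(pick_input_frames(frames, q=4, m=3))   # ['f2', 'f3', 'f4'] — includes f4
print(pick_input_frames(frames, q=2, m=3))   # ['f1', 'f2', 'f3']
```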
  • the preset RAW domain image processing algorithm described in the embodiment of the present application may be a neural network model with multi-frame input and single-frame output.
  • the preset RAW domain image processing algorithm is a deep learning network for image quality enhancement in the RAW domain.
  • With the algorithm processing of the preset RAW domain image processing algorithm added, the effect of combining the preset RAW domain image processing algorithm with the ISP is better, which helps to improve the image quality of captured images.
  • the mobile phone uses the ISP of the mobile phone to process the second image, and encodes the processed image to obtain a snapshot image.
  • the mobile phone may use the ISP to process the first image and the second image by means of time division multiplexing. That is to say, the mobile phone uses the ISP to process the second image, which will not affect the mobile phone to use the ISP to process the first image.
  • the mobile phone uses the ISP to process the captured image shown in FIG. 4A , which will not affect the mobile phone to process the preview image stream and video file shown in FIG. 4A or 4B .
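The time-division multiplexing mentioned above, where the ISP serves the snapshot without disturbing the preview stream, can be pictured as a toy scheduler that slots the snapshot job into the gap after a preview frame. This is purely illustrative; the real ISP scheduling is a hardware and driver concern.

```python
# Purely illustrative sketch of time-division multiplexing: one ISP
# "resource" processes the preview stream frame by frame, and a pending
# snapshot job is slotted in between preview frames without stalling them.
def run_isp_schedule(preview_frames, snapshot_after):
    log = []
    pending_snapshot = None
    for i, frame in enumerate(preview_frames):
        log.append(f"preview:{frame}")       # preview processing is never skipped
        if i == snapshot_after:
            pending_snapshot = "snap"
        if pending_snapshot:                 # use the gap before the next frame
            log.append(f"snapshot:{pending_snapshot}")
            pending_snapshot = None
    return log

print(run_isp_schedule(["p0", "p1", "p2"], snapshot_after=1))
# ['preview:p0', 'preview:p1', 'snapshot:snap', 'preview:p2']
```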
  • the image processing flow of obtaining the preview image from the first image also includes the processing flow of the processing algorithm 1 .
  • the above-mentioned processing algorithm 1 may be included in a hardware module of the ISP.
  • the processing algorithm 1 may be included in other processors of the mobile phone (such as any processor such as CPU, GPU or NPU).
  • the hardware module of the ISP may call the processing algorithm 1 in the above-mentioned other processors to process the first image to obtain the preview image.
  • the mobile phone may generate and save a snapshot image.
  • the user cannot view the captured image during the video recording process of the mobile phone. After the recording is over, the user can view the snapped image in the photo album.
  • The method in the embodiment of the present application further includes S606-S607.
  • the mobile phone receives the second operation of the user.
  • the mobile phone may receive the user's click operation (that is, the second operation) on the "end recording” button 706 shown in FIG. 10 .
  • In response to the second operation, the video recording can be ended, and the viewfinder interface 1001 shown in FIG. 10 is displayed. The viewfinder interface 1001 is the viewfinder interface when the mobile phone has not started recording.
  • the photo in the photo option in the viewfinder interface of the mobile phone is updated from 708 shown in FIG. 7 to 1002 shown in FIG. 10 .
  • the mobile phone may respond to the user's start operation on the photo album application, and display the photo album list interface 1101 shown in FIG.
  • the photo album list interface 1101 includes multiple photos and videos saved in the mobile phone.
  • the album list interface 1101 includes a video 1103 recorded by the mobile phone, and a photo 1102 captured by the mobile phone during the recording of the video 1103 . That is to say, after the video recording of the mobile phone ends, the recorded video (such as video 1103) is saved.
  • the mobile phone may cache the Bayer image output by the Sensor exposure in a first buffer queue Buffer.
  • the first buffer queue can buffer multiple frames of Bayer images. In this way, even if there is a delay between the user's snapping operation and the moment the Snapshot program receives the snapping command, the Bayer image output by the Sensor at the time of the user's snapping operation is still cached in the first buffer queue.
  • the frame selection module of the mobile phone can select the capture frame (that is, the image collected by the camera when the user inputs the capture operation) from the first buffer queue. In this way, the mobile phone can retrieve from the first buffer queue the image that the camera collected at the moment of the capture operation.
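• The caching and frame-selection behavior described above can be sketched as follows. The class and method names are illustrative (not from the embodiment), and timestamp-based selection is only one plausible way to pick the frame collected at the moment of the capture operation:

```python
from collections import deque

class FirstBufferQueue:
    """Fixed-size FIFO of (timestamp, bayer_frame) pairs; the oldest frame
    is discarded automatically when a newly exposed frame arrives and the
    queue is already full."""

    def __init__(self, capacity):
        # deque(maxlen=...) drops from the head when the queue is full
        self.frames = deque(maxlen=capacity)

    def enqueue(self, timestamp, bayer_frame):
        self.frames.append((timestamp, bayer_frame))

    def select_snapshot_frame(self, snap_time):
        # Pick the cached frame whose exposure time is closest to the
        # moment the user pressed the snapshot shutter.
        return min(self.frames, key=lambda tf: abs(tf[0] - snap_time))
```

With a capacity of 6 frames, frames exposed at t = 0..7 leave t = 2..7 cached, and a snapshot at t = 4.2 selects the frame exposed at t = 4.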
  • the mobile phone can also use the preset RAW domain image processing algorithm and the ISP hardware module to process the snapshot frame selected by the frame selection module; finally, the encoder 2 encodes the processing result to obtain the snapshot image.
  • the preset RAW domain image processing algorithm is a deep learning network for image quality enhancement in the RAW domain.
  • compared with the preview processing flow, processing by the preset RAW domain image processing algorithm is added, so the processing effect is better, which helps to improve the image quality of the captured image.
  • an image that meets the needs of the user can be captured during the video recording process, and the image quality of the captured image can be improved.
  • the input and output image formats of the preset RAW domain image processing algorithm are both Bayer.
  • the preset RAW domain image processing algorithm integrates at least one partial image processing function in the RAW domain, RGB domain or YUV domain, and is used to improve image quality before the ISP performs image processing.
  • the ISP may sequentially perform image processing in the RAW domain, image processing in the RGB domain, and image processing in the YUV domain on the Bayer image output by the preset RAW domain image processing algorithm.
  • S605a can be replaced by S1201
  • S605b can be replaced by S1202.
  • the mobile phone takes m frames of the first image in Bayer format as input, and runs a preset RAW domain image processing algorithm to obtain a second image in Bayer format.
  • the m frames of the first image include the qth frame of the first image, and the preset RAW domain image processing algorithm has the function of improving the image quality.
  • the mobile phone uses the ISP to sequentially perform image processing in the RAW domain, image processing in the RGB domain, and image processing in the YUV domain on the second image, and encode the processed image to obtain a snapshot image.
  • the preset RAW domain image processing algorithm integrates at least one partial image processing function in the RAW domain, RGB domain or YUV domain, and is used to improve image quality before the ISP performs image processing.
  • the image processing functions of the ISP in the RAW domain include A, B, and C
  • the image processing functions of the ISP in the RGB domain include D and E
  • the image processing functions of the ISP in the YUV domain include F and G.
  • for the specific image processing functions of the ISP in the RAW domain, RGB domain, and YUV domain, reference may be made to the relevant introductions in the foregoing embodiments, and details are not described here.
  • the image processing functions of A and C in the RAW domain may be integrated into the preset RAW domain image processing algorithm.
  • the mobile phone executes S1201 to run the preset RAW domain image processing algorithm to complete the image processing functions of A and C in the RAW domain.
  • the mobile phone executes S1202, and uses the ISP to complete the image processing functions of B in the RAW domain, the image processing functions of D and E in the RGB domain, and the image processing functions of F and G in the YUV domain for the second image in sequence.
  • the image processing function of A in the RAW domain and the image processing function of D in the RGB domain may be integrated in the preset RAW domain image processing algorithm.
  • the mobile phone executes S1201 to run the preset RAW domain image processing algorithm to complete the image processing function of A in the RAW domain and the image processing function of D in the RGB domain.
  • the mobile phone executes S1202, and uses the ISP to sequentially complete image processing functions of B and C in the RAW domain, complete image processing functions of E in the RGB domain, and complete image processing functions of F and G in the YUV domain for the second image.
  • the image processing function of A in the RAW domain and the image processing function of F in the YUV domain may be integrated in the preset RAW domain image processing algorithm.
  • the mobile phone executes S1201 to run the preset RAW domain image processing algorithm to complete the image processing function of A in the RAW domain and the image processing function of F in the YUV domain.
  • the mobile phone executes S1202, and uses the ISP to sequentially complete the image processing functions of B and C in the RAW domain, complete the image processing functions of D and E in the RGB domain, and complete the image processing function of G in the YUV domain for the second image.
  • the image processing function of D in the RGB domain and the image processing function of F in the YUV domain may be integrated in the preset RAW domain image processing algorithm.
  • the mobile phone executes S1201 to run the preset RAW domain image processing algorithm to complete the image processing function of D in the RGB domain and the image processing function of F in the YUV domain.
  • the mobile phone executes S1202, and uses the ISP to sequentially complete the image processing functions of A, B, and C in the RAW domain, complete the image processing function of E in the RGB domain, and complete the image processing function of G in the YUV domain for the second image.
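• The division of labor in the four examples above (some of the stages A-G moved into the preset RAW domain image processing algorithm while the ISP runs the remaining stages in their original order) can be sketched as follows. The stage letters follow the text; the helper name is illustrative:

```python
# Ordered ISP pipeline stages, as named in the text:
# RAW domain (A, B, C), RGB domain (D, E), YUV domain (F, G).
ISP_STAGES = ["A", "B", "C", "D", "E", "F", "G"]

def split_pipeline(integrated):
    """Return (stages performed by the preset algorithm, stages the ISP
    must still run), preserving the original stage order in both lists."""
    algo_stages = [s for s in ISP_STAGES if s in integrated]
    isp_stages = [s for s in ISP_STAGES if s not in integrated]
    return algo_stages, isp_stages
```

For example, integrating A and C into the preset algorithm leaves the ISP with B, D, E, F, and G, matching the first example above.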
  • the image format input to the preset RAW domain image processing algorithm is Bayer
  • the image format output by the preset RAW domain image processing algorithm is RGB.
  • the image processing function of the RAW domain is integrated in the preset RAW domain image processing algorithm, which is used to improve the image quality of the image before the ISP performs image processing in the RGB domain and the YUV domain.
  • the ISP may sequentially perform image processing in the RGB domain and image processing in the YUV domain on the RGB image output by the preset RAW domain image processing algorithm.
  • S605a can be replaced by S1401
  • S605b can be replaced by S1402.
  • the mobile phone takes m frames of the first image in Bayer format as input, and runs a preset RAW domain image processing algorithm to obtain a second image in RGB format.
  • the m frames of the first image include the qth frame of the first image, and the preset RAW domain image processing algorithm has the function of improving the image quality.
  • the mobile phone uses the ISP to sequentially perform image processing in the RGB domain and image processing in the YUV domain on the second image, and encode the processed image to obtain a snapshot image.
  • the image processing function of the RAW domain is integrated in the preset RAW domain image processing algorithm, which is used to improve the image quality of the image before the ISP performs image processing in the RGB domain and the YUV domain.
  • the image processing functions of the RAW domain of the ISP include a, b, and c
  • the image processing functions of the RGB domain of the ISP include d and e
  • the image processing functions of the YUV domain of the ISP include f and g.
  • the image processing functions of a, b and c in the RAW domain are integrated in the preset RAW domain image processing algorithm.
  • the mobile phone executes S1401 to run the preset RAW domain image processing algorithm to complete the image processing functions of a, b and c in the RAW domain.
  • the mobile phone executes S1402, and uses the ISP to sequentially complete the image processing functions of d and e in the RGB domain and the image processing functions of f and g in the YUV domain for the second image.
  • the preset RAW domain image processing algorithm not only integrates the image processing function of the RAW domain, but also integrates part of the image processing functions of at least one of the RGB domain or the YUV domain, so as to improve the image quality before the ISP performs image processing in the RGB domain and the YUV domain.
  • the image processing functions of the RAW domain of the ISP include a, b, and c
  • the image processing functions of the RGB domain of the ISP include d and e
  • the image processing functions of the YUV domain of the ISP include f and g.
  • the image processing functions of a, b, and c in the RAW domain and the image processing function of d in the RGB domain are integrated in the preset RAW domain image processing algorithm.
  • the mobile phone executes S1401 to run the preset RAW domain image processing algorithm to complete the image processing functions of a, b and c in the RAW domain, and complete the image processing function of d in the RGB domain.
  • the mobile phone executes S1402, and uses the ISP to sequentially complete the image processing function of e in the RGB domain and the image processing functions of f and g in the YUV domain for the second image.
  • the image processing functions of a, b and c in the RAW domain and the image processing function of f in the YUV domain are integrated in the preset RAW domain image processing algorithm.
  • the mobile phone executes S1401 to run the preset RAW domain image processing algorithm to complete the image processing functions of a, b and c in the RAW domain, and to complete the image processing function of f in the YUV domain.
  • the mobile phone executes S1402, and uses the ISP to sequentially complete the image processing functions of d and e in the RGB domain and the image processing function of g in the YUV domain for the second image.
  • the preset RAW domain image processing algorithm integrates the image processing functions of a, b and c in the RAW domain, the image processing function of d in the RGB domain, and the image processing function of f in the YUV domain.
  • the mobile phone executes S1401 to run the preset RAW domain image processing algorithm to complete the image processing functions of a, b and c in the RAW domain, complete the image processing function of d in the RGB domain, and complete the image processing function of f in the YUV domain.
  • the mobile phone executes S1402, and uses the ISP to sequentially complete the image processing function of e in the RGB domain and the image processing function of g in the YUV domain for the second image.
  • the image format input to the preset RAW domain image processing algorithm is Bayer, and the image format output by the preset RAW domain image processing algorithm is YUV.
  • the image processing function of the RAW domain and the image processing function of the RGB domain are integrated in the preset RAW domain image processing algorithm, which is used to improve the image quality of the image before the ISP performs the image processing of the YUV domain on the image.
  • the ISP may perform image processing in the YUV domain on the YUV image output by the preset RAW domain image processing algorithm.
  • S605a can be replaced by S1601
  • S605b can be replaced by S1602.
  • the mobile phone takes m frames of the first image in Bayer format as input, and runs a preset RAW domain image processing algorithm to obtain a second image in YUV format.
  • the m frames of the first image include the qth frame of the first image, and the preset RAW domain image processing algorithm has the function of improving the image quality.
  • the mobile phone uses the ISP to sequentially perform image processing in the YUV domain on the second image, and encodes the processed image to obtain a snapshot image.
  • the image processing function of the RAW domain and the image processing function of the RGB domain are integrated in the preset RAW domain image processing algorithm, which is used to improve the image quality of the image before the ISP performs the image processing of the YUV domain on the image.
  • the image processing functions of the RAW domain of the ISP include I, II and III
  • the image processing functions of the RGB domain of the ISP include IV and V
  • the image processing functions of the ISP in the YUV domain include VI and VII.
  • the preset RAW domain image processing algorithm integrates the image processing functions of I, II and III in the RAW domain, and the image processing functions of IV and V in the RGB domain.
  • the mobile phone executes S1601 to run the preset RAW domain image processing algorithm to complete the image processing functions of I, II and III in the RAW domain, and the image processing functions of IV and V in the RGB domain.
  • the mobile phone executes S1602, and uses the ISP to sequentially complete the image processing functions of VI and VII in the YUV domain for the second image.
  • for the specific image processing functions of the ISP in the RAW domain, RGB domain, and YUV domain, reference may be made to the relevant introductions in the foregoing embodiments, and details are not described here.
  • the preset RAW domain image processing algorithm not only integrates the image processing functions of the RAW domain and the RGB domain, but also integrates part of the image processing functions of the YUV domain, so as to improve the image quality before the ISP performs image processing in the YUV domain.
  • the image processing functions of the RAW domain of the ISP include I, II and III
  • the image processing functions of the RGB domain of the ISP include IV and V
  • the image processing functions of the ISP in the YUV domain include VI and VII.
  • the preset RAW domain image processing algorithm integrates the image processing functions of I, II and III in the RAW domain, the image processing functions of IV and V in the RGB domain, and the image processing functions of VI in the YUV domain.
  • the mobile phone executes S1601 to run the preset RAW domain image processing algorithm to complete the image processing functions of I, II and III in the RAW domain, the image processing functions of IV and V in the RGB domain, and the image processing functions of VI in the YUV domain.
  • the mobile phone executes S1602, and uses the ISP to sequentially complete the image processing function of VII in the YUV domain for the second image.
  • At least one partial image processing function in the RAW domain, RGB domain or YUV domain of the ISP can be integrated in the preset RAW domain image processing algorithm, so as to improve the image quality of the image before the ISP performs image processing.
  • An embodiment of the present application provides a method for capturing an image in video recording, and the method can be applied to a mobile phone, and the mobile phone includes a camera. As shown in Fig. 17, the method may include S1701-S1708.
  • the mobile phone receives the user's first operation.
  • the first operation is used to trigger the mobile phone to start recording video.
  • the mobile phone displays a viewfinder interface.
  • the viewfinder interface displays a preview image stream, and the preview image stream includes n frames of preview images.
  • the n frames of preview images are obtained from the n frames of the first image most recently collected by the camera of the mobile phone after the first operation is received.
  • the mobile phone caches the first image collected by the camera in the first cache queue.
  • the first cache queue caches n frames of first images collected by the camera, where n ⁇ 1, and n is an integer.
  • the mobile phone selects a snapshot frame (also referred to as a reference frame) from the first cache queue in response to the user's third operation on the snapshot shutter, and uses the reference frame and the adjacent frame of the reference frame as input to run the preset RAW domain image processing algorithm to obtain snapshot images.
  • the preset RAW domain image processing algorithm may be periodically used to process the k frames of the first image cached in the first cache queue.
  • the method in this embodiment of the present application may further include S1704.
  • the mobile phone periodically performs image processing on k frames of the first image buffered in the first buffer queue to obtain a second image, where k ⁇ 1, where k is an integer.
  • k may be equal to n, or k may be smaller than n.
  • the mobile phone may periodically perform image processing on every 4 frames of the first image to obtain the second image.
  • the first frame of the first image, the second frame of the first image, the third frame of the first image and the fourth frame of the first image shown in (a) of FIG. 18 can be used as one group of images on which image processing is performed once.
  • a new frame of the first image generated by Sensor exposure (such as the 7th frame of the first image) enters the Buffer queue at its tail, and the frame of the first image at the head of the Buffer queue (such as the 1st frame of the first image) is dequeued from the head of the Buffer queue.
  • the mobile phone may periodically read k frames of the first image from the head of the Buffer queue, and perform image processing on the k frames of the first image to obtain the second image. For example, as shown in (a) in FIG. 19A, after the 4th frame of the first image is dequeued, the mobile phone can perform image processing on the four frames consisting of the 1st, 2nd, 3rd and 4th frames of the first image to obtain the second image i.
  • similarly, the mobile phone can perform image processing on the four frames consisting of the 5th, 6th, 7th and 8th frames of the first image to obtain the second image ii.
  • the mobile phone can perform image processing on the four frames consisting of the 9th, 10th, 11th and 12th frames of the first image to obtain the second image iii.
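• The periodic grouping described above (every k consecutive dequeued frames form one processing batch, with k = 4 in the example) can be sketched as follows; the function name is illustrative:

```python
def batch_frames(frame_indices, k):
    """Group exposed frame indices into consecutive full groups of k frames;
    each full group is processed once to produce one second image (k = 4 in
    the example of FIG. 19A). A trailing partial group is ignored until it
    fills up."""
    full = len(frame_indices) // k * k
    return [frame_indices[i:i + k] for i in range(0, full, k)]
```

For the 13 frames of the example, this yields the three batches producing the second images i, ii and iii, with frame 13 left waiting for its group to complete.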
  • the image processing described in S1704 may include: image processing in the RAW domain and ISP image processing.
  • the image processing in the RAW domain is image processing performed in the RAW color space.
  • the ISP image processing is the image processing performed by the ISP of the mobile phone. After the above image processing, the image quality of the captured image is better than the image quality of the first image of the qth frame.
  • the above image processing may include: RAW domain image processing, ISP image processing and encoding processing. That is to say, the encoding process can be integrated in the ISP image processing, or can be independent of the ISP image processing.
  • the method in the embodiment of the present application is introduced by taking encoding processing independent of ISP image processing as an example. For a detailed introduction of the encoding process, reference may be made to relevant content in the foregoing embodiments, and details are not repeated here.
  • S1704 may include S1704a and S1704b:
  • the mobile phone periodically takes k frames of the first image buffered in the first buffer queue as input, and runs a preset RAW domain image processing algorithm to obtain a third image.
  • the preset RAW domain image processing algorithm has the function of improving image quality.
  • the preset RAW domain image processing algorithm integrates at least one image processing function in the RAW domain, RGB domain or YUV domain image processing function, and is used to improve the image quality of the image before the ISP performs image processing.
  • the mobile phone uses the ISP of the mobile phone to process the third image to obtain the second image.
  • the mobile phone can take the four frames consisting of the 1st, 2nd, 3rd and 4th frames of the first image as input, and run the preset RAW domain image processing algorithm to obtain the third image i.
  • the ISP of the mobile phone can process the third image i to obtain the second image I.
  • the mobile phone can take the four frames consisting of the 5th, 6th, 7th and 8th frames of the first image as input, and run the preset RAW domain image processing algorithm to obtain the third image ii.
  • the ISP of the mobile phone can process the third image ii to obtain the second image II.
  • the mobile phone can take the four frames consisting of the 9th, 10th, 11th and 12th frames of the first image as input, and run the preset RAW domain image processing algorithm to obtain the third image iii.
  • the ISP of the mobile phone can process the third image iii to obtain the second image III.
  • for the specific method by which the mobile phone executes S1704 to perform image processing on k frames of the first image to obtain the second image, reference may be made to the method in the foregoing embodiment by which the mobile phone performs image processing on the m frames of the first image cached in the first buffer queue to obtain the captured image; details are not repeated here in this embodiment of the present application.
  • the mobile phone may select a first image with the best image quality among the k frames of first images as a reference frame.
  • the mobile phone can record the time information of the reference frame.
  • the time information of the reference frame can be used as the time information of the third image and the time information of the second image.
  • the time information of the second image may be used when the mobile phone executes S1705 to select the second image for image quality enhancement. It should be understood that the closer the time indicated by the time information of the two frames of images is, the higher the possibility that the textures of the two frames of images are close. Images with similar textures are easier to fuse, which is conducive to image enhancement, which in turn helps to improve the quality of the processed image.
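• Selecting the best-quality frame among the k frames as the reference and recording its time information can be sketched as follows. The text does not specify how "best image quality" is judged, so a mean horizontal-gradient magnitude is used here purely as a hypothetical stand-in for a quality metric:

```python
def sharpness(frame):
    """Hypothetical quality metric: mean absolute horizontal pixel
    difference, a crude edge-strength proxy (the embodiment does not
    specify the actual metric)."""
    diffs = [abs(row[i + 1] - row[i]) for row in frame for i in range(len(row) - 1)]
    return sum(diffs) / len(diffs)

def pick_reference(frames):
    """frames: list of (timestamp, 2D pixel list). Returns the timestamp of
    the sharpest frame; per the text, this timestamp is then attached to the
    third image and the second image derived from the batch."""
    ts, _ = max(frames, key=lambda tf: sharpness(tf[1]))
    return ts
```

A flat frame scores 0 while a frame with strong edges scores high, so the latter's timestamp is recorded.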
  • the input and output image formats of the preset RAW domain image processing algorithm shown in FIG. 20 are both Bayer.
  • the preset RAW domain image processing algorithm integrates at least one partial image processing function in the RAW domain, RGB domain or YUV domain, and is used to improve image quality before the ISP performs image processing.
  • the ISP may sequentially perform image processing in the RAW domain, image processing in the RGB domain, and image processing in the YUV domain on the Bayer image output by the preset RAW domain image processing algorithm.
  • S1704a may include: the mobile phone takes k frames of the first image in the Bayer format as input, and runs a preset RAW domain image processing algorithm to obtain the third image in the Bayer format.
  • S1704b may include: the mobile phone uses the ISP to sequentially perform image processing in the RAW domain, image processing in the RGB domain, and image processing in the YUV domain on the third image to obtain the second image.
  • the input image format of the preset RAW domain image processing algorithm shown in FIG. 21 is Bayer, and the output image format of the preset RAW domain image processing algorithm is RGB.
  • the image processing function of the RAW domain is integrated in the preset RAW domain image processing algorithm, which is used to improve the image quality of the image before the ISP performs image processing in the RGB domain and the YUV domain.
  • the ISP may sequentially perform image processing in the RGB domain and image processing in the YUV domain on the RGB image output by the preset RAW domain image processing algorithm.
  • S1704a may include: the mobile phone takes k frames of the first image in Bayer format as input, and runs a preset RAW domain image processing algorithm to obtain the third image in RGB format.
  • S1704b may include: the mobile phone uses the ISP to sequentially perform image processing in the RGB domain and image processing in the YUV domain on the third image to obtain the second image.
  • the input image format of the preset RAW domain image processing algorithm shown in FIG. 22 is Bayer, and the output image format of the preset RAW domain image processing algorithm is YUV.
  • the image processing function of the RAW domain and the image processing function of the RGB domain are integrated in the preset RAW domain image processing algorithm, which is used to improve the image quality of the image before the ISP performs the image processing of the YUV domain on the image.
  • the ISP may perform image processing in the YUV domain on the YUV image output by the preset RAW domain image processing algorithm.
  • S1704a may include: the mobile phone takes k frames of the first image in Bayer format as input, and runs a preset RAW domain image processing algorithm to obtain a third image in YUV format.
  • S1704b may include: the mobile phone uses the ISP to sequentially perform image processing in the YUV domain on the third image to obtain the second image.
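• The three variants above (FIG. 20, FIG. 21, FIG. 22) differ only in which ISP processing remains after the preset RAW domain image processing algorithm runs, which can be summarized as follows; the mapping mirrors the text, and the names are illustrative:

```python
# Remaining ISP processing for each possible output format of the preset
# RAW domain image processing algorithm, per the three variants described.
REMAINING_ISP_STAGES = {
    "Bayer": ["RAW domain", "RGB domain", "YUV domain"],  # FIG. 20 variant
    "RGB":   ["RGB domain", "YUV domain"],                # FIG. 21 variant
    "YUV":   ["YUV domain"],                              # FIG. 22 variant
}

def isp_stages_for(output_format):
    """ISP processing still required on the algorithm's output image."""
    return REMAINING_ISP_STAGES[output_format]
```

The further the algorithm's output format advances along Bayer to RGB to YUV, the more ISP-side processing has been absorbed into the algorithm.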
  • the mobile phone receives a third operation of the snapping shutter by the user when the camera captures the qth frame of the first image in the n frames of the first image.
  • the mobile phone uses the second image of the latest frame obtained when the mobile phone receives the third operation to enhance the quality of the fourth image to obtain a snapshot image.
  • the fourth image is a frame of image in the video whose time information is the same as that of the first image in the qth frame.
  • the mobile phone executes S1704 to periodically process k frames of the first image to obtain the second image; therefore, during video recording, if the third operation on the snapshot shutter is received at different times, the latest frame of the second image obtained by the mobile phone is different.
  • the first image of the fourth frame is dequeued from the first buffer queue Buffer, the mobile phone executes S1704a to obtain the third image i, and the mobile phone executes S1704b to obtain the second image I from the third image i.
  • the first image of the eighth frame is dequeued from the first buffer queue Buffer, the mobile phone executes S1704a to obtain the third image ii, and the mobile phone executes S1704b to obtain the second image II from the third image ii.
  • the first image of the 12th frame is dequeued from the first buffer queue Buffer, the mobile phone executes S1704a to obtain the third image iii, and the mobile phone executes S1704b to obtain the second image III from the third image iii.
  • when the Sensor of the mobile phone exposes and outputs the 5th frame of the first image, the mobile phone has only obtained the second image I; the second image II and the second image III have not yet been obtained. Therefore, if the mobile phone receives the above-mentioned third operation at this time, the latest frame of the second image in the mobile phone is the second image I, and the mobile phone may use the second image I to enhance the quality of the 5th frame of the first image to obtain a snapshot image.
  • when the Sensor of the mobile phone exposes and outputs the 6th frame of the first image and the mobile phone receives the above-mentioned third operation, the latest frame of the second image in the mobile phone is the second image I. At this time, the mobile phone may use the second image I to enhance the quality of the 6th frame of the first image to obtain a snapshot image.
  • when the Sensor of the mobile phone exposes and outputs the 7th frame of the first image and the mobile phone receives the above-mentioned third operation, the latest frame of the second image in the mobile phone is still the second image I. At this time, the mobile phone may use the second image I to enhance the image quality of the 7th frame of the first image to obtain a snapshot image.
  • when the Sensor of the mobile phone exposes and outputs the 8th frame of the first image and the mobile phone receives the above-mentioned third operation, the latest frame of the second image in the mobile phone is still the second image I. At this time, the mobile phone may use the second image I to enhance the quality of the 8th frame of the first image to obtain a captured image.
  • when the Sensor of the mobile phone exposes and outputs the 9th frame of the first image, the latest frame of the second image obtained by the mobile phone is the second image II; the second image III has not yet been obtained. Therefore, if the mobile phone receives the above-mentioned third operation at this time, the latest frame of the second image in the mobile phone is the second image II, and the mobile phone may use the second image II to enhance the image quality of the 9th frame of the first image to obtain a captured image.
  • when the Sensor of the mobile phone exposes and outputs the 10th frame of the first image and the mobile phone receives the above-mentioned third operation, the latest frame of the second image in the mobile phone is the second image II. At this time, the mobile phone may use the second image II to enhance the image quality of the 10th frame of the first image to obtain a snapshot image.
  • when the Sensor of the mobile phone exposes and outputs the 11th frame of the first image and the mobile phone receives the above-mentioned third operation, the latest frame of the second image in the mobile phone is still the second image II. At this time, the mobile phone may use the second image II to enhance the image quality of the 11th frame of the first image to obtain a captured image.
  • when the Sensor of the mobile phone exposes and outputs the 12th frame of the first image and the mobile phone receives the above-mentioned third operation, the latest frame of the second image in the mobile phone is still the second image II. At this time, the mobile phone may use the second image II to enhance the image quality of the 12th frame of the first image to obtain a captured image.
  • when the Sensor of the mobile phone exposes and outputs the 13th frame of the first image and the mobile phone receives the above-mentioned third operation, the latest frame of the second image in the mobile phone is the second image III. At this time, the mobile phone may use the second image III to enhance the quality of the 13th frame of the first image to obtain a captured image.
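• Which second image counts as "the latest frame" at each point in the example above follows a simple rule: with k = 4, the 5th through 8th frames map to the second image I, the 9th through 12th frames to the second image II, and the 13th frame to the second image III. A sketch, under the assumption that batch b (frames (b-1)*k+1 through b*k) is complete once frame b*k has been exposed and dequeued:

```python
def latest_second_image(q, k):
    """1-based index of the newest second image available when the Sensor
    outputs the qth frame of the first image; 0 means no second image has
    been produced yet. Assumes each batch of k frames completes as soon as
    its last frame has been exposed and dequeued."""
    return (q - 1) // k
```

This reproduces the mapping in the text: frames 5-8 yield index 1 (second image I), frames 9-12 yield index 2, and frame 13 yields index 3.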
  • the mobile phone may cache the second image in the second buffer queue (Buffer').
  • the second buffer queue can buffer one or more frames of images.
  • when a new frame of the second image is generated, it is enqueued in Buffer', and the frame of the second image previously cached in Buffer' is dequeued.
  • the ISP of the mobile phone outputs the second image I shown in (a) in FIG. 19A
  • the second image I can be cached in Buffer', that is, the second image I is enqueued in Buffer'.
  • When the ISP of the mobile phone outputs the second image II shown in (b) in FIG. 19A, the second image II can be cached in Buffer', that is, the second image II is enqueued in Buffer'.
  • When the ISP of the mobile phone outputs the second image III shown in (c) in FIG. 19A, the second image III can be cached in Buffer', that is, the second image III is enqueued in Buffer'.
  • one frame of the second image is always cached in Buffer'.
  • the frame of the second image buffered in Buffer' is the latest frame of the second image obtained when the mobile phone receives the third operation.
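The enqueue/dequeue behavior of Buffer' described above can be sketched as a small fixed-capacity queue. This is an illustrative model only (the class name and capacity are assumptions, not the patent's implementation); a `deque` with `maxlen` drops the oldest frame exactly as described, so the latest second image is always available.

```python
from collections import deque

# Minimal model of the second buffer queue (Buffer'): a fixed-capacity
# queue in which enqueuing a newly generated second image automatically
# dequeues the previously cached second image once the queue is full.
class SecondBufferQueue:
    def __init__(self, capacity=1):
        # Buffer' may cache one or more frames; capacity=1 matches the
        # "one frame of the second image is always cached" case above.
        self.frames = deque(maxlen=capacity)

    def enqueue(self, frame):
        # deque(maxlen=...) drops the oldest element on overflow,
        # mirroring "the previously cached second image is dequeued".
        self.frames.append(frame)

    def latest(self):
        return self.frames[-1] if self.frames else None

buf = SecondBufferQueue(capacity=1)
for name in ("second image I", "second image II", "second image III"):
    buf.enqueue(name)
print(buf.latest())  # prints "second image III"
```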
  • the mobile phone may select, from the second images cached in Buffer', a frame of the second image whose time information is closest to the time information of the fourth image as a guide image for the fourth image.
  • the fourth image mentioned in S1707 may be the first image of the qth frame in the video file processed by the ISP.
  • the time information of the fourth image is the same as the time information of the first image in the qth frame.
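Selecting the cached second image whose time information is closest to that of the fourth image can be sketched as a nearest-timestamp lookup. The record format and timestamps below are hypothetical, chosen only to illustrate the selection rule.

```python
# Illustrative sketch: pick, from the frames cached in Buffer', the
# second image whose timestamp is closest to the fourth image's.
def select_guide_image(cached_frames, target_ts):
    # cached_frames: list of (timestamp_ms, frame_id) tuples.
    return min(cached_frames, key=lambda f: abs(f[0] - target_ts))

cached = [(100, "second image I"), (400, "second image II"), (700, "second image III")]
guide = select_guide_image(cached, target_ts=450)
print(guide[1])  # prints "second image II", the closest to 450 ms
```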
  • the mobile phone may enhance the image quality of the fourth image by using the latest frame of the second image to obtain the captured image through the fusion network (also referred to as the image fusion network).
  • the method for the mobile phone to perform image enhancement through the fusion network can refer to related methods in the conventional technology, which will not be described in detail here in the embodiment of the present application.
  • Before the mobile phone executes S1706, registration may be performed on the latest frame of the second image and the fourth image. Afterwards, the mobile phone may use the registered second image to enhance the image quality of the registered fourth image. Performing registration on the latest frame of the second image and the fourth image before the mobile phone fuses (Fusion) them can improve the success rate and effect of the image quality enhancement performed by the mobile phone.
  • registration can include two ways: global registration and local registration.
  • Global registration generally uses feature point detection and matching. Take the registration of the fourth image and the second image by the mobile phone as an example.
  • the mobile phone can detect matching feature points (such as pixel points) in the fourth image and the second image.
  • The mobile phone can then filter the matched feature points. If the number of good feature points among the matched feature points is greater than preset threshold 1, the mobile phone can consider that the global registration effect is good enough and fusion can be performed.
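The good-match gate above can be sketched as a simple count-and-compare. Real feature detection and matching (e.g. ORB or SIFT descriptors) is replaced here by precomputed match distances, and the cutoff and threshold values are hypothetical.

```python
# Illustrative sketch of the global-registration gate: count the "good"
# matches (here, matches whose descriptor distance falls below a cutoff)
# and only allow fusion when that count exceeds preset threshold 1.
def global_registration_ok(match_distances, good_cutoff=30.0, preset_threshold_1=5):
    good = [d for d in match_distances if d < good_cutoff]
    return len(good) > preset_threshold_1

distances = [12.0, 25.5, 41.0, 18.2, 29.9, 55.1, 10.4, 22.8, 27.3, 16.6]
print(global_registration_ok(distances))  # 8 good matches > 5, prints True
```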
  • Local registration generally uses the optical flow method. Take the registration of the fourth image and the second image by the mobile phone as an example. The mobile phone may first calculate the optical flow between the fourth image and the second image. Then, the mobile phone may compute the difference between the second image transformed by optical flow registration and the fourth image. If the difference is less than preset threshold 2, the mobile phone can consider that the local registration effect is good enough and fusion can be performed.
  • Before the mobile phone registers the fourth image and the second image, it may first compare the texture similarity between the fourth image and the second image. If the texture similarity between the fourth image and the second image is higher than the preset similarity threshold, the texture similarity between the two is relatively high. In this case, the success rate of the mobile phone registering the fourth image and the second image is relatively high. Adopting this solution can improve the registration success rate of the mobile phone.
  • If the texture similarity between the fourth image and the second image is not higher than the preset similarity threshold, the mobile phone will not register the fourth image with the second image. In this way, the impact of invalid registration on the power consumption of the mobile phone is reduced. In this case, the mobile phone may directly use the fourth image as the captured image.
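The similarity gate and fallback above can be sketched as a single decision function. The similarity score, threshold, and the `register`/`fuse` stand-ins are hypothetical placeholders; only the control flow follows the description.

```python
# Hedged sketch of the fallback logic: gate on texture similarity first;
# if the guide (second) image is too dissimilar, skip registration and
# use the fourth image directly as the captured image.
def choose_snapshot(fourth_image, second_image, similarity, sim_threshold=0.6,
                    register=lambda a, b: (a, b), fuse=lambda a, b: a + "+" + b):
    if similarity < sim_threshold:
        # Invalid registration would waste power; return the fourth image as-is.
        return fourth_image
    reg_fourth, reg_second = register(fourth_image, second_image)
    return fuse(reg_fourth, reg_second)

print(choose_snapshot("frame4", "guide", similarity=0.9))  # prints "frame4+guide"
print(choose_snapshot("frame4", "guide", similarity=0.3))  # prints "frame4"
```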
  • the image quality enhancement mentioned above can implement functions such as noise removal, definition improvement, change or expansion of dynamic range (Dynamic Range), image super-resolution, and the like.
  • the above-mentioned image super-resolution function is introduced here.
  • The ISP down-samples the image output by the Sensor during the recording process. Downsampling (subsampling) may also be referred to as down-sampling. Downsampling an image reduces its resolution. In this way, as shown in any one of FIGS. 20-22, the fourth image in the video file is a low-resolution (LR) image.
  • the captured image that the user wishes to obtain during the video recording process is a high-resolution image.
  • the image quality enhancement described in this embodiment may include image super-resolution.
  • the second image shown in any one of Figures 20-22 is a high resolution (high resolution, HR) image that has not been down-sampled.
  • the mobile phone may use the latest frame of the second image as a guide image to enhance the quality of the fourth image (including image super-resolution) and increase the resolution of the fourth image.
  • the resolution of the fourth image is 1080p
  • the resolution of the second image of the latest frame is 4k.
  • the mobile phone executes S1706, and may use the second image with a resolution of 4k as a guide image to enhance the quality of the fourth image with a resolution of 1080p.
  • the resolution of the fourth image (that is, the captured image) after image quality enhancement may be 4K.
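The 1080p-to-4K example above can be illustrated with a toy guided super-resolution on a one-dimensional "scanline": upsample the low-resolution frame to the guide's resolution, then add back the guide's high-frequency detail. The real fusion network is a learned model; this sketch only shows why the 4K second image can lift a 1080p frame to the guide's resolution, and all values are made up.

```python
# Toy guided super-resolution: base = upsampled LR frame, detail = the
# guide minus its own low-pass version, output = base + detail.
def upsample2(xs):
    # nearest-neighbour 2x upsampling
    return [x for x in xs for _ in range(2)]

def downsample2(xs):
    return xs[::2]

def guided_sr(lr, guide):
    base = upsample2(lr)                           # LR frame brought to guide size
    low_pass_guide = upsample2(downsample2(guide))  # guide with detail removed
    detail = [g - l for g, l in zip(guide, low_pass_guide)]
    return [b + d for b, d in zip(base, detail)]

lr = [10, 20]             # stands in for the 1080p fourth image
guide = [11, 13, 19, 23]  # stands in for the 4K second image
sr = guided_sr(lr, guide)
print(len(sr))  # prints 4: the output has the guide's resolution
```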
  • the fourth image obtained from the first image collected by the camera when the mobile phone receives the third operation is the image that the user wants to capture.
  • the second image of the most recent frame obtained when the mobile phone receives the third operation is an image with higher image quality obtained by the mobile phone using a preset RAW domain image processing algorithm and ISP for image quality enhancement. Therefore, by using the second image of the latest frame to enhance the quality of the fourth image, the quality of the captured image finally obtained can be improved. In this way, not only the image that the user wants to capture during the video recording process can be obtained, but also the image quality of the captured image can be improved.
  • the mobile phone receives the user's second operation.
  • the number of image frames buffered in the first buffer queue can be reduced by adopting the solutions of S1701-S1708.
  • the delay shown in FIG. 3 is 330 milliseconds (ms)
  • the Sensor of the mobile phone exposes a frame of the first image every 30 milliseconds (ms).
  • the second image is only used to enhance the fourth image that the user wants to capture, so there is no need to cache more image frames to generate the second image.
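The buffer-sizing arithmetic above (a 330 ms delay with one frame exposed every 30 ms) can be sketched as a back-of-the-envelope calculation. This only counts how many frames the Sensor exposes during the delay; the exact minimum buffer depth the patent requires is stated in the text itself.

```python
import math

# During the delay between the user's snapshot operation and the
# Snapshot program receiving the instruction, the Sensor keeps exposing
# frames, so the first buffer queue must span that delay.
def frames_during_delay(delay_ms, frame_interval_ms):
    return math.ceil(delay_ms / frame_interval_ms)

print(frames_during_delay(330, 30))  # prints 11
```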
  • the electronic device may include: the above-mentioned display screen, a camera, a memory, and one or more processors.
  • the display screen, camera, memory and processor are coupled.
  • the memory is used to store computer program code comprising computer instructions.
  • When the processor executes the computer instructions, the electronic device can perform the functions or steps performed by the mobile phone in the foregoing method embodiments.
  • For the structure of the electronic device, reference may be made to the structure of the mobile phone shown in FIG. 5A.
  • the chip system 2400 includes at least one processor 2401 and at least one interface circuit 2402 .
  • the processor 2401 and the interface circuit 2402 may be interconnected through wires.
  • interface circuit 2402 may be used to receive signals from other devices, such as memory of an electronic device.
  • the interface circuit 2402 may be used to send signals to other devices (such as the processor 2401).
  • the interface circuit 2402 can read instructions stored in the memory, and send the instructions to the processor 2401.
  • the electronic device may be made to execute various steps in the foregoing embodiments.
  • the chip system may also include other discrete devices, which is not specifically limited in this embodiment of the present application.
  • The embodiment of the present application also provides a computer storage medium. The computer storage medium includes computer instructions, and when the computer instructions are run on the above-mentioned electronic device, the electronic device is made to perform the functions or steps performed by the mobile phone in the above-mentioned method embodiments.
  • the embodiment of the present application also provides a computer program product, which, when the computer program product is run on a computer, causes the computer to execute each function or step performed by the mobile phone in the method embodiment above.
  • the disclosed devices and methods may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the modules or units is only a logical function division. In actual implementation, there may be other division methods.
  • multiple units or components may be combined or integrated into another device, or some features may be omitted or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • The unit described as a separate component may or may not be physically separated, and the component displayed as a unit may be one physical unit or multiple physical units; that is, it may be located in one place or distributed to multiple different places. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware or in the form of software functional units.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a readable storage medium.
  • Based on this understanding, the technical solution of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for enabling a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.


Abstract

A method for capturing an image during video recording and an electronic device, relating to the field of photographing technologies, capable of capturing an image during video recording and improving the quality of the captured image. In response to a first operation, the electronic device displays a viewfinder interface; the viewfinder interface displays a preview image stream, where the preview image stream includes n frames of preview images obtained based on n frames of first images collected by a camera, and a snapshot shutter, where the snapshot shutter is used to trigger the electronic device to capture an image; the n frames of first images are stored in a first buffer queue; the electronic device receives a second operation of the user; in response to the second operation, recording of the video is ended; the video is saved; when the camera collects the q-th frame of first image among the n frames of first images, a third operation of the user on the snapshot shutter is received; in response to the third operation, image processing is performed on the q-th frame of first image stored in the first buffer queue to obtain a captured image.

Description

Method for capturing an image during video recording and electronic device
This application claims priority to the Chinese patent application filed with the State Intellectual Property Office on September 7, 2021, with application number 202111045872.6 and the invention title "Method for capturing an image during video recording and electronic device", and to the Chinese patent application filed with the State Intellectual Property Office on January 29, 2022, with application number 202210111700.2 and the invention title "Method for capturing an image during video recording and electronic device", both of which are incorporated herein by reference in their entirety.
Technical Field
The present application relates to the field of photographing technologies, and in particular, to a method for capturing an image during video recording and an electronic device.
Background
Existing mobile phones generally have photographing and video recording functions, and more and more people use mobile phones to take photos and videos to record the bits and pieces of life. During video recording by a mobile phone, some wonderful moments may be collected. During video recording, the user may hope that the mobile phone can capture these wonderful moments and save them as photos to be displayed to the user. Therefore, a solution that can capture an image during video recording is urgently needed.
Summary
The present application provides a method for capturing an image during video recording and an electronic device, which can capture an image during video recording and improve the image quality of the captured image.
To achieve the above objective, the embodiments of the present application adopt the following technical solutions:
In a first aspect, the present application provides a method for capturing an image during video recording, applied to an electronic device including a camera. The method includes: the electronic device receives a first operation of a user, where the first operation is used to trigger the electronic device to start recording a video; in response to the first operation, the electronic device displays a viewfinder interface, where the viewfinder interface displays a preview image stream, the preview image stream includes n frames of preview images, the preview images are obtained based on n frames of first images collected by the camera of the electronic device after the electronic device receives the first operation, the viewfinder interface further includes a snapshot shutter, the snapshot shutter is used to trigger the electronic device to capture an image, the n frames of first images are stored in a first buffer queue of the electronic device, n≥1, and n is an integer; the electronic device receives a second operation of the user; in response to the second operation, the electronic device ends recording the video; the electronic device saves the video; where, when the camera collects the q-th frame of first image among the n frames of first images, a third operation of the user on the snapshot shutter is received; in response to the third operation, the electronic device performs image processing on the q-th frame of first image stored in the first buffer queue to obtain a captured image.
In this solution, the user can capture an image during video recording of the electronic device through the snapshot shutter. The electronic device can buffer the n frames of first images collected by the camera in the first buffer queue. In this way, even though there is a delay from receiving the user's snapshot operation (i.e., the third operation) to the Snapshot program receiving the snapshot instruction, the first image output by the image sensor (Sensor) when the user's snapshot operation is received can still be buffered in the first buffer queue. The electronic device can select the snapshot frame (i.e., the q-th frame of first image collected by the camera when the user inputs the snapshot operation) from the first buffer queue. In this way, the electronic device can obtain, from the first buffer queue, the image collected by the camera when the user inputs the snapshot operation. On the other hand, the electronic device can further perform image processing on the q-th frame of first image to obtain the captured image, which can improve the image quality of the captured image.
In summary, by using the method of the embodiments of the present application, an image that meets the user's needs can be captured during video recording, and the image quality of the captured image can be improved.
In a possible implementation of the first aspect, n≥2. In other words, multiple frames of first images can be buffered in the first buffer queue. In this way, even though there is a delay (e.g., 120 ms-160 ms) from receiving the user's snapshot operation to the Snapshot program receiving the snapshot instruction, the frames output by the Sensor during this delay can all be buffered in the Buffer. Therefore, when the electronic device receives the user's snapshot operation, the Bayer image output by the Sensor can also be buffered in the first buffer queue. Moreover, the image content of the frames output by the Sensor within a short period of time does not change much. In this way, the frame selection module of the electronic device can select, according to the additional information of the images buffered in the Buffer, a frame of image with better image quality from the Buffer as the captured image. This can improve the image quality of the captured image.
In another possible implementation of the first aspect, performing image processing on the q-th frame of first image stored in the first buffer queue to obtain the captured image includes: the electronic device performs image processing on m frames of first images to obtain the captured image, where the m frames of first images include the q-th frame of first image, m≥1, and m is an integer.
In this implementation, the electronic device can perform image processing on one or more frames of first images. When m≥2, the electronic device can perform image processing on multiple frames of first images. Among the m frames of first images, the images other than the q-th frame of first image can enhance the image quality of the snapshot frame (i.e., the q-th frame of first image, also called the reference frame), which helps to obtain information such as noise and texture, and can further improve the image quality of the captured image.
In another possible implementation of the first aspect, the image processing includes RAW-domain image processing and ISP image processing, where the RAW-domain image processing is image processing performed in the RAW color space, the ISP image processing is image processing performed by the image signal processor (ISP) of the electronic device, and the image quality of the captured image is better than that of the q-th frame of first image; or, the image processing includes the RAW-domain image processing, the ISP image processing, and encoding processing, and the image quality of the captured image is better than that of the q-th frame of first image.
The above RAW-domain image processing can be implemented by a preset RAW-domain image processing algorithm, and the ISP image processing can be implemented by the ISP of the electronic device.
In another possible implementation of the first aspect, the electronic device performing image processing on the m frames of first images to obtain the captured image includes: the electronic device takes the m frames of first images as input and runs a preset RAW-domain image processing algorithm to obtain a second image, where the preset RAW-domain image processing algorithm has the function of improving image quality; the preset RAW-domain image processing algorithm integrates at least one of the image processing functions of the RAW domain, the RGB domain, or the YUV domain, and is used to improve the image quality before the ISP performs image processing; the electronic device processes the second image by using the ISP, and encodes the processed image to obtain the captured image.
In this implementation, the algorithm processing of the preset RAW-domain image processing algorithm is added. Compared with entirely using the hardware RAW-domain image processing, RGB-domain image processing, and YUV-domain image processing of the ISP, the combination of the preset RAW-domain image processing algorithm and the ISP achieves a better processing effect, which helps to improve the image quality of the captured image.
In another possible implementation of the first aspect, the image format input to the preset RAW-domain image processing algorithm is the Bayer format, and the output image format is also the Bayer format.
Specifically, the electronic device taking the m frames of first images as input and running the preset RAW-domain image processing algorithm to obtain the second image includes: the electronic device takes the m frames of first images in the Bayer format as input and runs the preset RAW-domain image processing algorithm to obtain a second image in the Bayer format, where the preset RAW-domain image processing algorithm integrates part of the image processing functions of at least one of the RAW domain, the RGB domain, or the YUV domain, and is used to improve the image quality before the ISP performs image processing.
Correspondingly, the electronic device processing the second image by using the ISP and encoding the processed image to obtain the captured image includes: the electronic device sequentially performs the RAW-domain image processing, the RGB-domain image processing, and the YUV-domain image processing on the second image by using the ISP, and encodes the processed image to obtain the captured image.
In this implementation, the image processing performed by the electronic device using the ISP may further include conversion from the Bayer format to the RGB format, and conversion from the RGB format to the YUV format.
In another possible implementation of the first aspect, the image format input to the preset RAW-domain image processing algorithm is the Bayer format, and the output image format is the RGB format. The preset RAW-domain image processing algorithm performs the conversion from the Bayer format to the RGB format.
Specifically, the electronic device taking the m frames of first images as input and running the preset RAW-domain image processing algorithm to obtain the second image includes: the electronic device takes the m frames of first images in the Bayer format as input and runs the preset RAW-domain image processing algorithm to obtain a second image in the RGB format, where the preset RAW-domain image processing algorithm integrates the image processing functions of the RAW domain, and is used to improve the image quality before the ISP performs RGB-domain and YUV-domain image processing on the image.
Correspondingly, the electronic device processing the second image by using the ISP and encoding the processed image to obtain the captured image includes: the electronic device sequentially performs the RGB-domain image processing and the YUV-domain image processing on the second image by using the ISP, and encodes the processed image to obtain the captured image.
In this implementation, the image processing performed by the electronic device using the ISP may further include conversion from the RGB format to the YUV format.
In another possible implementation of the first aspect, the preset RAW-domain image processing algorithm further integrates part of the image processing functions of at least one of the RGB domain or the YUV domain, and is used to improve the image quality before the ISP performs RGB-domain image processing.
In another possible implementation of the first aspect, the image format input to the preset RAW-domain image processing algorithm is the Bayer format, and the output image format is the YUV format. The preset RAW-domain image processing algorithm performs the conversion from the Bayer format to the RGB format and the conversion from the RGB format to the YUV format.
Specifically, the electronic device taking the m frames of first images as input and running the preset RAW-domain image processing algorithm to obtain the second image includes: the electronic device takes the m frames of first images in the Bayer format as input and runs the preset RAW-domain image processing algorithm to obtain a second image in the YUV format, where the preset RAW-domain image processing algorithm integrates the image processing functions of the RAW domain and the RGB domain, and is used to improve the image quality before the ISP performs YUV-domain image processing on the image.
Correspondingly, the electronic device processing the second image by using the ISP and encoding the processed image to obtain the captured image includes: the electronic device performs the YUV-domain image processing on the second image by using the ISP, and encodes the processed image to obtain the captured image.
In another possible implementation of the first aspect, the preset RAW-domain image processing algorithm further integrates part of the image processing functions of the YUV domain, and is used to improve the image quality before the ISP performs YUV-domain image processing.
In another possible implementation of the first aspect, after the camera of the electronic device collects the first image and before the electronic device displays the viewfinder interface, the electronic device may sequentially perform RAW-domain image processing, RGB-domain image processing, and YUV-domain image processing on the first image by using the ISP of the electronic device to obtain the preview image.
The electronic device processing the first image to obtain the preview image is not affected by "the electronic device processing the first image to obtain the captured image".
In another possible implementation of the first aspect, the electronic device processes the first image and the second image by using the ISP in a time-division multiplexing manner. That is, the electronic device processing the second image by using the ISP to obtain the captured image does not affect the electronic device processing the first image by using the ISP to obtain the preview image. In other words, processing of the captured image by the electronic device does not affect processing of the preview image stream and the video file by the electronic device.
In a second aspect, the present application provides a method for capturing an image during video recording, applied to an electronic device including a camera. The method includes: the electronic device receives a first operation of a user, where the first operation is used to trigger the electronic device to start recording a video; in response to the first operation, the electronic device displays a viewfinder interface, where the viewfinder interface displays a preview image stream, the preview image stream includes n frames of preview images, the preview images are obtained based on n frames of first images collected by the camera of the electronic device after the electronic device receives the first operation, the viewfinder interface further includes a snapshot shutter, the snapshot shutter is used to trigger the electronic device to capture an image, the n frames of first images are stored in a first buffer queue of the electronic device, n≥1, and n is an integer; the electronic device periodically performs image processing on k frames of first images buffered in the first buffer queue to obtain a second image, k≥1, and k is an integer; the electronic device receives a second operation of the user; in response to the second operation, the electronic device ends recording the video; the electronic device saves the video; where, when the camera collects the q-th frame of first image among the n frames of first images, a third operation of the user on the snapshot shutter is received; in response to the third operation, the electronic device performs image quality enhancement on a fourth image by using the latest frame of the second image obtained when the electronic device receives the third operation, to obtain a captured image, where the fourth image is a frame of image in the video whose time information is the same as that of the q-th frame of first image.
Compared with the solution provided in the first aspect, the solution provided in the second aspect can reduce the number of image frames buffered in the first buffer queue. Specifically, assume that the delay from receiving the user's snapshot operation to the Snapshot program receiving the snapshot instruction is 330 milliseconds (ms), and the Sensor of the mobile phone exposes one frame of first image every 30 milliseconds (ms). When the solution provided in the first aspect is executed, in order to ensure that the frame selection module of the mobile phone can select, from the first buffer queue, the first image exposed by the Sensor at the moment when the mobile phone receives the third operation, at least 10 frames of images need to be buffered in the first buffer queue. With the solution provided in the second aspect, the second image is only used to enhance the fourth image that the user wants to capture, so there is no need to buffer many image frames to generate the second image.
In another possible implementation of the second aspect, n≥2. For the analysis of the beneficial effects of n≥2 in any possible implementation of the second aspect, reference may be made to the introduction in the implementation of the first aspect, and details are not repeated here.
In another possible implementation of the second aspect, k≥2. For the analysis of the beneficial effects of k≥2 in any possible implementation of the second aspect, reference may be made to the introduction of m≥2 in the implementation of the first aspect, and details are not repeated here.
In another possible implementation of the second aspect, the image quality enhancement includes image super-resolution. That is, the electronic device performing image quality enhancement on the fourth image by using the third image can also improve the resolution of the fourth image, where the resolution of the third image and the captured image is higher than that of the fourth image.
In another possible implementation of the second aspect, the image processing includes RAW-domain image processing and ISP image processing, where the RAW-domain image processing is image processing performed in the RAW color space, the ISP image processing is image processing performed by the image signal processor (ISP) of the electronic device, and the image quality of the second image is better than that of the k frames of first images; or, the image processing includes the RAW-domain image processing, the ISP image processing, and encoding processing, and the image quality of the second image is better than that of the k frames of first images. For the beneficial effects of this implementation, reference may be made to the detailed description in the possible implementations of the first aspect, and details are not repeated here.
In another possible implementation of the second aspect, the electronic device periodically performing image processing on the k frames of first images buffered in the first buffer queue to obtain the second image includes: the electronic device periodically takes the k frames of first images buffered in the first buffer queue as input and runs a preset RAW-domain image processing algorithm to obtain a third image, where the preset RAW-domain image processing algorithm has the function of improving image quality; the electronic device processes the third image by using the ISP of the electronic device to obtain the second image. For the beneficial effects of this implementation, reference may be made to the detailed description in the possible implementations of the first aspect, and details are not repeated here.
In a third aspect, the present application provides an electronic device, including a touch screen, a memory, a display screen, one or more cameras, and one or more processors. The memory, the display screen, and the cameras are coupled to the processor. The cameras are configured to collect images, the display screen is configured to display the images collected by the cameras or images generated by the processor, the memory stores computer program code, the computer program code includes computer instructions, and when the computer instructions are executed by the processor, the electronic device is caused to perform the method described in the first aspect or the second aspect and any possible implementation thereof.
In a fourth aspect, the present application provides an electronic device, including a touch screen, a memory, a display screen, one or more cameras, and one or more processors. The memory, the display screen, and the cameras are coupled to the processor. The cameras are configured to collect images, the display screen is configured to display the images collected by the cameras or images generated by the processor, the memory stores computer program code, the computer program code includes computer instructions, and when the computer instructions are executed by the processor, the electronic device is caused to perform the following steps: receiving a first operation of a user, where the first operation is used to trigger the electronic device to start recording a video; in response to the first operation, displaying a viewfinder interface, where the viewfinder interface displays a preview image stream, the preview image stream includes n frames of preview images, the preview images are obtained based on n frames of first images collected by the camera of the electronic device after the electronic device receives the first operation, the viewfinder interface further includes a snapshot shutter, the snapshot shutter is used to trigger the electronic device to capture an image, the n frames of first images are stored in a first buffer queue of the electronic device, n≥1, and n is an integer; receiving a second operation of the user; in response to the second operation, ending recording the video; saving the video; where, when the camera collects the q-th frame of first image among the n frames of first images, a third operation of the user on the snapshot shutter is received; in response to the third operation, performing image processing on the q-th frame of first image stored in the first buffer queue to obtain a captured image.
In a possible implementation of the fourth aspect, n≥2.
In another possible implementation of the fourth aspect, m≥2.
In another possible implementation of the fourth aspect, when the computer instructions are executed by the processor, the electronic device is caused to further perform the following step: performing image processing on m frames of first images to obtain the captured image, where the m frames of first images include the q-th frame of first image, m≥1, and m is an integer.
In another possible implementation of the fourth aspect, the m frames of first images are m adjacent frames of images.
In another possible implementation of the fourth aspect, the image processing includes RAW-domain image processing and ISP image processing, where the RAW-domain image processing is image processing performed in the RAW color space, the ISP image processing is image processing performed by the image signal processor (ISP) of the electronic device, and the image quality of the captured image is better than that of the q-th frame of first image; or, the image processing includes the RAW-domain image processing, the ISP image processing, and encoding processing, and the image quality of the captured image is better than that of the q-th frame of first image.
In another possible implementation of the fourth aspect, when the computer instructions are executed by the processor, the electronic device is caused to further perform the following steps: taking the m frames of first images as input and running a preset RAW-domain image processing algorithm to obtain a second image, where the preset RAW-domain image processing algorithm has the function of improving image quality; the preset RAW-domain image processing algorithm integrates at least one of the image processing functions of the RAW domain, the RGB domain, or the YUV domain, and is used to improve the image quality before the ISP performs image processing; processing the second image by using the ISP, and encoding the processed image to obtain the captured image.
In another possible implementation of the fourth aspect, when the computer instructions are executed by the processor, the electronic device is caused to further perform the following steps: taking the m frames of first images in the Bayer format as input and running the preset RAW-domain image processing algorithm to obtain a second image in the Bayer format, where the preset RAW-domain image processing algorithm integrates part of the image processing functions of at least one of the RAW domain, the RGB domain, or the YUV domain, and is used to improve the image quality before the ISP performs image processing; sequentially performing the RAW-domain image processing, the RGB-domain image processing, and the YUV-domain image processing on the second image by using the ISP, and encoding the processed image to obtain the captured image.
In another possible implementation of the fourth aspect, when the computer instructions are executed by the processor, the electronic device is caused to further perform the following steps: taking the m frames of first images in the Bayer format as input and running the preset RAW-domain image processing algorithm to obtain a second image in the RGB format, where the preset RAW-domain image processing algorithm integrates the image processing functions of the RAW domain, and is used to improve the image quality before the ISP performs RGB-domain and YUV-domain image processing on the image; sequentially performing the RGB-domain image processing and the YUV-domain image processing on the second image by using the ISP, and encoding the processed image to obtain the captured image.
In another possible implementation of the fourth aspect, the preset RAW-domain image processing algorithm further integrates part of the image processing functions of at least one of the RGB domain or the YUV domain, and is used to improve the image quality before the ISP performs RGB-domain image processing.
In another possible implementation of the fourth aspect, when the computer instructions are executed by the processor, the electronic device is caused to further perform the following steps: taking the m frames of first images in the Bayer format as input and running the preset RAW-domain image processing algorithm to obtain a second image in the YUV format, where the preset RAW-domain image processing algorithm integrates the image processing functions of the RAW domain and the RGB domain, and is used to improve the image quality before the ISP performs YUV-domain image processing on the image; performing the YUV-domain image processing on the second image by using the ISP, and encoding the processed image to obtain the captured image.
In another possible implementation of the fourth aspect, the preset RAW-domain image processing algorithm further integrates part of the image processing functions of the YUV domain, and is used to improve the image quality before the ISP performs YUV-domain image processing.
In another possible implementation of the fourth aspect, when the computer instructions are executed by the processor, the electronic device is caused to further perform the following steps: sequentially performing RAW-domain image processing, RGB-domain image processing, and YUV-domain image processing on the first image by using the ISP of the electronic device to obtain the preview image; and, in a time-division multiplexing manner, processing the first image by using the ISP to obtain the preview image and processing the second image to obtain the captured image.
In a fifth aspect, the present application provides an electronic device, including a touch screen, a memory, a display screen, one or more cameras, and one or more processors. The memory, the display screen, and the cameras are coupled to the processor. The cameras are configured to collect images, the display screen is configured to display the images collected by the cameras or images generated by the processor, the memory stores computer program code, the computer program code includes computer instructions, and when the computer instructions are executed by the processor, the electronic device is caused to perform the following steps: receiving a first operation of a user, where the first operation is used to trigger the electronic device to start recording a video; in response to the first operation, displaying a viewfinder interface, where the viewfinder interface displays a preview image stream, the preview image stream includes n frames of preview images, the preview images are obtained based on n frames of first images collected by the camera of the electronic device after the electronic device receives the first operation, the viewfinder interface further includes a snapshot shutter, the snapshot shutter is used to trigger the electronic device to capture an image, the n frames of first images are stored in a first buffer queue of the electronic device, n≥1, and n is an integer; periodically performing image processing on k frames of first images buffered in the first buffer queue to obtain a second image, k≥1, and k is an integer; receiving a second operation of the user; in response to the second operation, ending recording the video; saving the video; where, when the camera collects the q-th frame of first image among the n frames of first images, a third operation of the user on the snapshot shutter is received; in response to the third operation, performing image quality enhancement on a fourth image by using the latest frame of the second image obtained when the electronic device receives the third operation, to obtain a captured image, where the fourth image is a frame of image in the video whose time information is the same as that of the q-th frame of first image.
In another possible implementation of the fifth aspect, the image processing includes RAW-domain image processing and ISP image processing, where the RAW-domain image processing is image processing performed in the RAW color space, the ISP image processing is image processing performed by the image signal processor (ISP) of the electronic device, and the image quality of the second image is better than that of the k frames of first images; or, the image processing includes the RAW-domain image processing, the ISP image processing, and encoding processing, and the image quality of the second image is better than that of the k frames of first images.
In another possible implementation of the fifth aspect, n≥2.
In another possible implementation of the fifth aspect, k≥2.
In another possible implementation of the fifth aspect, when the computer instructions are executed by the processor, the electronic device is caused to further perform the following steps: periodically taking the k frames of first images buffered in the first buffer queue as input and running a preset RAW-domain image processing algorithm to obtain a third image, where the preset RAW-domain image processing algorithm has the function of improving image quality; processing the third image by using the ISP of the electronic device to obtain the second image.
In another possible implementation of the fifth aspect, the image quality enhancement includes image super-resolution, where the resolution of the second image and the captured image is higher than that of the q-th frame of first image.
In a sixth aspect, the present application provides a computer-readable storage medium, including computer instructions, and when the computer instructions are run on an electronic device, the electronic device is caused to perform the method described in the first aspect or the second aspect and any possible implementation thereof.
In a seventh aspect, the present application provides a computer program product, and when the computer program product is run on a computer, the computer is caused to perform the method described in the first aspect or the second aspect and any possible implementation thereof. The computer may be the above electronic device.
It can be understood that, for the beneficial effects achievable by the electronic device described in the third, fourth, and fifth aspects and any possible implementation thereof, the computer storage medium described in the sixth aspect, and the computer program product described in the seventh aspect, reference may be made to the beneficial effects in the first and second aspects and any possible implementation thereof, and details are not repeated here.
Brief Description of the Drawings
FIG. 1A is a flowchart of image processing during video recording by a mobile phone;
FIG. 1B is another flowchart of image processing during video recording by a mobile phone;
FIG. 2 is a schematic diagram of a video viewfinder interface of a mobile phone according to an embodiment of the present application;
FIG. 3 is a schematic diagram of the delay from a mobile phone receiving a snapshot operation to the Sensor receiving a snapshot instruction according to an embodiment of the present application;
FIG. 4A is a schematic block diagram of a method for capturing an image during video recording according to an embodiment of the present application;
FIG. 4B is a schematic block diagram of a method for capturing an image during video recording according to an embodiment of the present application;
FIG. 5A is a schematic diagram of the hardware structure of a mobile phone according to an embodiment of the present application;
FIG. 5B is a schematic diagram of the software architecture of a mobile phone according to an embodiment of the present application;
FIG. 6A is a flowchart of a method for capturing an image during video recording according to an embodiment of the present application;
FIG. 6B is a flowchart of another method for capturing an image during video recording according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a display interface for video recording on a mobile phone according to an embodiment of the present application;
FIG. 8 is a schematic block diagram of another method for capturing an image during video recording according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a first buffer queue (Buffer) according to an embodiment of the present application;
FIG. 10 is a schematic diagram of another display interface for video recording on a mobile phone according to an embodiment of the present application;
FIG. 11 is a schematic diagram of another display interface for video recording on a mobile phone according to an embodiment of the present application;
FIG. 12 is a flowchart of the method corresponding to the schematic block diagram shown in FIG. 8;
FIG. 13 is a schematic block diagram of another method for capturing an image during video recording according to an embodiment of the present application;
FIG. 14 is a flowchart of the method corresponding to the schematic block diagram shown in FIG. 13;
FIG. 15 is a schematic block diagram of another method for capturing an image during video recording according to an embodiment of the present application;
FIG. 16 is a flowchart of the method corresponding to the schematic block diagram shown in FIG. 15;
FIG. 17 is a flowchart of another method for capturing an image during video recording according to an embodiment of the present application;
FIG. 18 is a schematic diagram of the principle of buffering first images in a first buffer queue according to an embodiment of the present application;
FIG. 19A is a schematic diagram of the principle of buffering first images in a first buffer queue and performing image processing on the first images to obtain a second image according to an embodiment of the present application;
FIG. 19B is a schematic diagram of the principle of buffering first images in a first buffer queue and performing image processing on the first images to obtain a second image according to an embodiment of the present application;
FIG. 20 is a schematic block diagram of another method for capturing an image during video recording according to an embodiment of the present application;
FIG. 21 is a schematic block diagram of another method for capturing an image during video recording according to an embodiment of the present application;
FIG. 22 is a schematic block diagram of another method for capturing an image during video recording according to an embodiment of the present application;
FIG. 23 is a schematic diagram of the principle of generating a second image according to an embodiment of the present application;
FIG. 24 is a schematic structural diagram of a chip system according to an embodiment of the present application.
Detailed Description
Hereinafter, the terms "first" and "second" are used for descriptive purposes only, and shall not be construed as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Therefore, a feature defined with "first" or "second" may explicitly or implicitly include one or more of such features. In the description of this embodiment, unless otherwise specified, "multiple" means two or more.
At present, during video recording by an electronic device, the image sensor (Sensor) of the electronic device, controlled by exposure, can continuously output images. Each frame of image is processed by the image signal processor (ISP) of the electronic device or by image signal processing algorithms, and then encoded by an encoder (ENCODER) to obtain a video file. For electronic devices represented by mobile phones, the original image output by the image sensor is usually a Bayer-format image, and some image sensors can output images in formats such as RGGB, RGBW, CMYK, RYYB, and CMY. In the embodiments of the present application, the description takes the image sensor of the mobile phone outputting Bayer-format images as an example. It should be noted that image sensors that output images in formats such as RGGB, RGBW, CMYK, RYYB, and CMY, and other electronic devices equipped with such image sensors, are also applicable to the technical solutions provided in the embodiments of the present application.
Here, RGGB is (red green green blue), RGBW is (red green blue white), CMYK is (cyan magenta yellow black), RYYB is (red yellow yellow blue), and CMY is (cyan magenta yellow).
Please refer to FIG. 1A or FIG. 1B, which shows the processing flow of the preview image stream and the video file during video recording by a mobile phone. The preview image stream includes the multiple frames of preview images finally presented to the user on the display screen during video recording, and the video file refers to the video stream that is finally saved in the mobile phone in the format of a video file after recording ends and can be viewed by the user.
As shown in FIG. 1A or FIG. 1B, the image processing of the ISP of the mobile phone can be divided into processing in three image format domains: RAW-domain image processing, RGB-domain image processing, and YUV-domain image processing.
The RAW-domain image processing may include: black level correction (BLC), linearization (Linearization), lens shading correction (LSC), defect pixel correction (DPC), RAW denoising (Denoise), automatic white balance (AWB), green imbalance correction (GIC), chromatic aberration correction (CAC), and other processing.
The RGB-domain image processing may include: demosaicing (Demosaic), color correction (CC), dynamic range compression (dynamic range control, DRC), Gamma correction, and RGB2YUV (conversion from the RGB format to the YUV format).
The YUV-domain image processing may include: UV downsampling, color enhancement (CE), spatial-domain denoising (YUVNF), color management (3DLUT), sharpening (Sharpness), and scaling (Scalar).
It should be noted that the division of the "RAW domain", "RGB domain", and "YUV domain" in the ISP includes but is not limited to the above division manner. For example, demosaicing (Demosaic) may also be included in the "RAW domain". The embodiments of the present application do not limit this.
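The three processing domains above can be sketched as an ordered pipeline. The stage names follow the description; the stage bodies below are identity placeholders (only the ordering is taken from the text, not any real ISP implementation).

```python
# Placeholder sketch of the ISP stage ordering: each domain is a list of
# named stages applied in sequence from Bayer input to YUV output.
RAW_DOMAIN = ["BLC", "Linearization", "LSC", "DPC", "Denoise", "AWB", "GIC", "CAC"]
RGB_DOMAIN = ["Demosaic", "CC", "DRC", "Gamma", "RGB2YUV"]
YUV_DOMAIN = ["UV downsample", "CE", "YUVNF", "3DLUT", "Sharpness", "Scalar"]

# The full pipeline is RAW domain, then RGB domain, then YUV domain.
pipeline = RAW_DOMAIN + RGB_DOMAIN + YUV_DOMAIN
print(len(pipeline))  # prints 19
```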
In one implementation, as shown in FIG. 1A, after the image sensor (Sensor) outputs an image, the ISP can perform "RAW domain", "RGB domain", and "YUV domain" image processing on the image; after the "YUV domain" image processing, the data can be divided into two data streams. One data stream is processed by processing algorithm 1 shown in FIG. 1A, and then encoded or format-converted by the display module to obtain and display the preview image. The other data stream is processed by processing algorithm 2 shown in FIG. 1A, and then encoded by encoder 1 to obtain the video file.
In one implementation, as shown in FIG. 1B, after the image sensor (Sensor) outputs an image, the ISP can perform "RAW domain" and "RGB domain" image processing on the image; after the "RGB domain" image processing, the data can be divided into two data streams. One data stream is processed by processing algorithm 1 shown in FIG. 1B, then subjected to "YUV domain" image processing by the ISP, and then encoded or format-converted by the display module to obtain and display the preview image. The other data stream is processed by processing algorithm 2 shown in FIG. 1B, then subjected to "YUV domain" image processing by the ISP, and then encoded by encoder 1 to obtain the video file.
The image processing of processing algorithm 1 and processing algorithm 2 may be performed in the RGB domain or in the YUV domain.
For example, take processing algorithm 1 processing an image as an example. After the ISP performs "RGB domain" image processing on the image, the ISP may use processing algorithm 1 to process the image before the image is converted from the RGB format to the YUV format. Afterwards, the ISP may convert the image processed by processing algorithm 1 into the YUV format, and then perform "YUV domain" image processing on the image.
For another example, still take processing algorithm 1 processing an image as an example. After the ISP performs "RGB domain" image processing on the image, the ISP may first convert the image from the RGB format to the YUV format, and then use processing algorithm 1 to process the image in the YUV format. Afterwards, the ISP may perform "YUV domain" image processing on the image processed by processing algorithm 1.
It should be noted that processing algorithm 1 may also be called the post-processing algorithm for the preview image, and processing algorithm 2 may also be called the post-processing algorithm for the video file. Processing algorithm 1 and processing algorithm 2 may include processing functions such as anti-shake processing, denoising processing, blurring processing, and color and brightness adjustment. The image output by the Sensor is an image in the Bayer format (Bayer image for short). In FIG. 1A or FIG. 1B, the "RAW domain" input image of the ISP is an image in the Bayer format (i.e., a Bayer image), and the "RAW domain" output image of the ISP is an image in the RGB format (RGB image for short). In FIG. 1A or FIG. 1B, the "RGB domain" input image of the ISP is an image in the RGB format (i.e., an RGB image), and the "RGB domain" output image of the ISP is an image in the YUV format (YUV image for short). In FIG. 1A or FIG. 1B, the "YUV domain" input image of the ISP is an image in the YUV format (i.e., a YUV image), and the image output by the "YUV domain" of the ISP can be encoded (ENCODE) to obtain the preview image or the video file.
Bayer, RGB, and YUV are three expression formats of images. For detailed introductions to Bayer images, RGB images, and YUV images, reference may be made to relevant content in conventional technologies, and details are not repeated here.
It should be noted that, since the Sensor outputting images and the ISP and encoders (i.e., ENCODER, such as the encoder of the display module and encoder 1) processing images can all be used for recording video, the data streams (such as the data stream of the video file and the data stream of the preview image) in the entire process of the Sensor outputting images and the ISP and encoders (ENCODER) processing images can be called the video stream.
It should be noted that the manners in which the mobile phone processes images to obtain the preview image stream and the video file during video recording include but are not limited to the manners shown in FIG. 1A and FIG. 1B, and other processing manners are not described here in the embodiments of the present application. In the following embodiments, the method of the embodiments of the present application is introduced by taking the processing manner shown in FIG. 1A as an example.
During video recording, the mobile phone can capture an image in response to the user's operation. For example, the mobile phone can display the video viewfinder interface 201 shown in FIG. 2. The video viewfinder interface 201 includes a snapshot shutter 202, and the snapshot shutter 202 is used to trigger the mobile phone to capture an image during video recording and save it as a photo. The mobile phone can capture an image in response to the user's click operation on the snapshot shutter 202 shown in FIG. 2. What the user wants the mobile phone to capture is the image collected by the camera at the moment the user clicks the snapshot shutter 202.
To capture an image during video recording on a mobile phone, in some technical solutions, the first frame of image collected when the Snapshot program of the mobile phone receives the snapshot instruction (such as the 7th frame of image shown in FIG. 3) may be selected as the captured image. However, after the upper-layer application (such as the camera application corresponding to the video viewfinder interface 201 shown in FIG. 2) receives the user's snapshot operation (such as the user's click operation on the snapshot shutter 202), it takes time (such as the delay shown in FIG. 3) for the Snapshot program to receive the snapshot instruction. During this time (such as the delay shown in FIG. 3), the Sensor does not stop outputting Bayer images. Therefore, from the upper-layer application receiving the user's snapshot operation to the Snapshot program receiving the snapshot instruction, the Sensor may have already output multiple frames of Bayer images.
For example, as shown in FIG. 3, assume that the upper-layer application receives the snapshot operation when the image sensor (Sensor) outputs the 3rd frame of Bayer image, and the snapshot instruction is transferred to the Snapshot program when the Sensor outputs the 7th frame of Bayer image. In this way, with the solution of the prior art, because of the delay shown in FIG. 3, the 7th frame of image is not the frame of image at the moment the user clicks the snapshot shutter 202. With this solution, the frame of image that the user really wants cannot be captured. It should be noted that, among the 8 frames of images shown in FIG. 3, the 1st frame of image is the earliest frame output by the Sensor, and the 8th frame of image is the latest frame output by the Sensor. The image sensor (Sensor) can expose and output the 8 frames of images shown in FIG. 3 sequentially, starting from the 1st frame of image.
In some other embodiments, the mobile phone can intercept, from the video stream (such as the data stream of the video file and the data stream of the preview image), a frame of image collected at the moment of the user's snapshot, save it as a photo as the captured image, and display it to the user.
However, during video recording, the mobile phone needs to process a large number of images (such as 30 frames of images) per second. As such, the computing resources and time left for each frame of image are limited; therefore, the mobile phone can generally use the hardware processing modules of the ISP to process the video stream in a relatively simple processing manner, instead of using complex algorithms to improve image quality (such as denoising and brightening). Such an image processing effect can only meet the requirements of video, while photographing has higher requirements for image quality. Therefore, intercepting an image from the video stream cannot capture an image that satisfies the user.
The embodiments of the present application provide a method for capturing an image during video recording, which can capture an image during video recording and improve the image quality of the captured image.
On the one hand, in the embodiments of the present application, as shown in FIG. 4A, the electronic device (such as a mobile phone) can buffer the Bayer images exposed and output by the Sensor in a first buffer queue (Buffer). The first buffer queue can buffer multiple frames of Bayer images. In this way, even though there is the delay shown in FIG. 3 from receiving the user's snapshot operation to the Snapshot program receiving the snapshot instruction, the Bayer image output by the Sensor when the user's snapshot operation is received can still be buffered in the first buffer queue. As shown in FIG. 4A, the frame selection module of the mobile phone can select the snapshot frame (i.e., the q-th frame of first image collected by the camera when the user inputs the snapshot operation) from the first buffer queue. In this way, the mobile phone can obtain, from the first buffer queue, the image collected by the camera when the user inputs the snapshot operation.
On the other hand, as shown in FIG. 4A, the electronic device can also perform image processing on the q-th frame of first image to obtain the captured image, which can improve the image quality of the captured image.
In summary, by using the method of the embodiments of the present application, an image that meets the user's needs can be captured during video recording, and the image quality of the captured image can be improved.
Further, the above image processing may include processing by a preset RAW-domain image processing algorithm and ISP image processing. As shown in FIG. 4B, the electronic device may also use the preset RAW-domain image processing algorithm and the ISP hardware module to process the snapshot frame selected by the frame selection module to obtain the captured image. In some embodiments, the above image processing may further include encoding processing. For example, as shown in FIG. 4B, the image processed by the ISP hardware module may be encoded by an encoder (such as encoder 2) to obtain the captured image. In some other embodiments, the encoding processing may also be integrated in the ISP hardware module. In the embodiments of the present application, the method of the embodiments of the present application is introduced by taking the encoding processing being independent of the ISP image processing as an example.
In this solution, the algorithm processing of the preset RAW-domain image processing algorithm is added. Compared with entirely using the hardware RAW-domain image processing, RGB-domain image processing, and YUV-domain image processing of the ISP, the combination of the preset RAW-domain image processing algorithm and the ISP achieves a better processing effect, which helps to improve the image quality of the captured image.
The preset RAW-domain image processing algorithm is a deep learning network for image quality enhancement in the RAW domain. The preset RAW-domain image processing algorithm may also be called a preset image quality enhancement algorithm, a preset image quality enhancement algorithm model, or a preset RAW-domain AI model.
Exemplarily, the preset RAW-domain image processing algorithm may run on the graphics processing unit (GPU), the neural-network processing unit (NPU), or another processor capable of running a neural network model in the electronic device. Before running the preset RAW-domain image processing algorithm, any of the above processors may load the preset RAW-domain image processing algorithm from the memory.
In some embodiments, the preset RAW-domain image processing algorithm may be a software image processing algorithm. The preset RAW-domain image processing algorithm may be a software algorithm in the hardware abstraction layer (HAL) algorithm library of the mobile phone.
In some other embodiments, the preset RAW-domain image processing algorithm may be a hardware image processing algorithm. The preset RAW-domain image processing algorithm may be a hardware image processing algorithm implemented by invoking the "RAW domain" image processing algorithm capability of the ISP. Alternatively, the preset RAW-domain image processing algorithm may be a hardware image processing algorithm implemented by invoking the "RAW domain" and "RGB domain" image processing algorithm capabilities of the ISP. Alternatively, the preset RAW-domain image processing algorithm may be a hardware image processing algorithm implemented by invoking the "RAW domain", "RGB domain", and "YUV domain" image processing algorithm capabilities of the ISP.
It should be noted that the preset RAW-domain image processing algorithm may also be called a preset image processing algorithm. In the embodiments of the present application, it is called the preset RAW-domain image processing algorithm because the input of the preset RAW-domain image processing algorithm is an image in the RAW domain, while its output may be an image in the RAW domain or an image in the RGB domain.
The encoder in the display module, encoder 1, and encoder 2 shown in FIG. 1A or FIG. 1B may be three different encoders. The mobile phone may use three different encoders to perform encoding or format conversion to obtain the preview image, the video file, and the captured image respectively. Alternatively, the encoder in the display module, encoder 1, and encoder 2 may be the same encoder. One encoder may include multiple encoding units. The mobile phone may use three different encoding units in one encoder to perform encoding or format conversion respectively to obtain the preview image, the video file, and the captured image. Alternatively, the encoder in the display module and encoder 1 may be two different encoding units in the same encoder, and encoder 2 may be another encoder.
The encoding manners of different encoders may be the same or different. The encoding manners of different encoding units of the same encoder may be the same or different. Therefore, the image formats output by the encoder in the display module and encoder 1 may be the same or different. For example, the images output by the encoder in the display module and encoder 1 may be images in any format such as Joint Photographic Experts Group (JPEG) or Tag Image File Format (TIFF).
Exemplarily, the electronic device in the embodiments of the present application may be a device including a camera, such as a mobile phone, a tablet computer, a smart watch, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), or an augmented reality (AR)/virtual reality (VR) device. The embodiments of the present application do not specially limit the specific form of the electronic device.
The implementations of the embodiments of the present application are described in detail below with reference to the accompanying drawings. Please refer to FIG. 5A, which is a schematic structural diagram of an electronic device 500 according to an embodiment of the present application. As shown in FIG. 5A, the electronic device 500 may include: a processor 510, an external memory interface 520, an internal memory 521, a universal serial bus (USB) interface 530, a charging management module 540, a power management module 541, a battery 542, antenna 1, antenna 2, a mobile communication module 550, a wireless communication module 560, an audio module 570, a speaker 570A, a receiver 570B, a microphone 570C, a headset jack 570D, a sensor module 580, a button 590, a motor 591, an indicator 592, a camera 593, a display screen 594, a subscriber identification module (SIM) card interface 595, and the like.
The sensor module 580 may include sensors such as a pressure sensor, a gyroscope sensor, a barometric pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, and a bone conduction sensor.
It can be understood that the structure illustrated in this embodiment does not constitute a specific limitation on the electronic device 500. In some other embodiments, the electronic device 500 may include more or fewer components than shown in the figure, or combine some components, or split some components, or have a different component arrangement. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 510 may include one or more processing units. For example, the processor 510 may include an application processor (AP), a modem processor, a GPU, an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or an NPU, etc. Different processing units may be independent devices or may be integrated in one or more processors.
The controller may be the nerve center and command center of the electronic device 500. The controller can generate operation control signals according to instruction operation codes and timing signals, and complete the control of fetching and executing instructions.
A memory may also be provided in the processor 510 for storing instructions and data. In some embodiments, the memory in the processor 510 is a cache memory. The memory can store instructions or data that the processor 510 has just used or uses cyclically. If the processor 510 needs to use the instructions or data again, it can call them directly from the memory. Repeated access is avoided, and the waiting time of the processor 510 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 510 may include one or more interfaces. It can be understood that the interface connection relationships between the modules illustrated in this embodiment are only schematic illustrations and do not constitute a structural limitation on the electronic device 500. In some other embodiments, the electronic device 500 may also adopt interface connection manners different from those in the above embodiments, or a combination of multiple interface connection manners.
The charging management module 540 is configured to receive charging input from a charger. While charging the battery 542, the charging management module 540 can also supply power to the electronic device through the power management module 541.
The power management module 541 is configured to connect the battery 542, the charging management module 540, and the processor 510. The power management module 541 receives input from the battery 542 and/or the charging management module 540, and supplies power to the processor 510, the internal memory 521, the external memory, the display screen 594, the camera 593, the wireless communication module 560, and the like.
The wireless communication function of the electronic device 500 may be implemented by antenna 1, antenna 2, the mobile communication module 550, the wireless communication module 560, the modem processor, the baseband processor, and the like.
Antenna 1 and antenna 2 are configured to transmit and receive electromagnetic wave signals. In some embodiments, antenna 1 of the electronic device 500 is coupled to the mobile communication module 550, and antenna 2 is coupled to the wireless communication module 560, so that the electronic device 500 can communicate with networks and other devices through wireless communication technologies.
The electronic device 500 implements the display function through the GPU, the display screen 594, the application processor, and the like. The GPU is a microprocessor for image processing and connects the display screen 594 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 510 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 594 is used to display images, videos, and the like. The display screen 594 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Miniled, a MicroLed, a Micro-oLed, a quantum dot light emitting diode (QLED), or the like.
The electronic device 500 can implement the shooting function through the ISP, the camera 593, the video codec, the GPU, the display screen 594, the application processor, and the like.
The ISP is used to process the data fed back by the camera 593. For example, when taking a photo, the shutter is opened, light is transmitted to the photosensitive element of the camera through the lens, the light signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing, converting it into an image visible to the naked eye. The ISP can also perform algorithm optimization on the noise, brightness, and skin color of the image. The ISP can also optimize parameters such as exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 593.
The camera 593 is used to capture still images or videos. An optical image of an object is generated through the lens and projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the light signal into an electrical signal, and then transmits the electrical signal to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 500 may include N cameras 593, where N is a positive integer greater than 1.
The digital signal processor is used to process digital signals, and can process other digital signals in addition to digital image signals. For example, when the electronic device 500 selects a frequency point, the digital signal processor is used to perform Fourier transform and the like on the frequency point energy.
The video codec is used to compress or decompress digital video. The electronic device 500 can support one or more video codecs. In this way, the electronic device 500 can play or record videos in multiple encoding formats, such as: moving picture experts group (MPEG) 1, MPEG2, MPEG3, and MPEG4.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, such as the transmission mode between neurons in the human brain, it quickly processes input information and can also continuously self-learn. Applications such as intelligent cognition of the electronic device 500 can be implemented through the NPU, such as image recognition, face recognition, speech recognition, and text understanding.
The external memory interface 520 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 500. The external memory card communicates with the processor 510 through the external memory interface 520 to implement the data storage function, for example, saving files such as music and videos in the external memory card.
The internal memory 521 may be used to store computer-executable program code, and the executable program code includes instructions. The processor 510 executes various functional applications and data processing of the electronic device 500 by running the instructions stored in the internal memory 521. For example, in the embodiments of the present application, the processor 510 may execute the instructions stored in the internal memory 521. The internal memory 521 may include a program storage area and a data storage area.
The program storage area can store the operating system, applications required by at least one function (such as a sound playback function and an image playback function), and the like. The data storage area can store data created during the use of the electronic device 500 (such as audio data and a phone book). In addition, the internal memory 521 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).
The electronic device 500 can implement audio functions, such as music playback and recording, through the audio module 570, the speaker 570A, the receiver 570B, the microphone 570C, the headset jack 570D, the application processor, and the like.
The buttons 590 include a power button, a volume button, and the like. The motor 591 can generate vibration prompts. The indicator 592 may be an indicator light, which can be used to indicate the charging state and power change, and can also be used to indicate messages, missed calls, notifications, and the like.
The SIM card interface 595 is used to connect a SIM card. The SIM card can be inserted into or pulled out of the SIM card interface 595 to achieve contact with and separation from the electronic device 500. The electronic device 500 can support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 595 can support a Nano SIM card, a Micro SIM card, a SIM card, and the like.
The methods in the following embodiments can all be implemented in the electronic device 500 having the above hardware structure. In the following embodiments, the method of the embodiments of the present application is introduced by taking the electronic device 500 being a mobile phone as an example. FIG. 5B is a block diagram of the software structure of the mobile phone according to an embodiment of the present application.
分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将Android™系统分为五层,从上至下分别为应用程序层,应用程序框架层,安卓运行时(Android runtime)和系统库,硬件抽象层(hardware abstraction layer,HAL)以及内核层。应理解:本文以Android系统举例来说明,在其他操作系统中(例如鸿蒙™系统,iOS™系统等),只要各个功能模块实现的功能和本申请的实施例类似也能实现本申请的方案。
应用程序层可以包括一系列应用程序包。
如图5B所示,应用程序层中可以安装通话,备忘录,浏览器,联系人,图库,日历,地图,蓝牙,音乐,视频,短信息等应用。
在本申请实施例中,应用程序层中可以安装具有拍摄功能的应用,例如,相机应用。当然,其他应用需要使用拍摄功能时,也可以调用相机应用实现拍摄功能。
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。
例如,应用程序框架层可以包括窗口管理器,内容提供器,视图系统,资源管理器,通知管理器等,本申请实施例对此不做任何限制。
例如,上述窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。上述内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。上述视图系统可用于构建应用程序的显示界面。每个显示界面可以由一个或多个控件组成。一般而言,控件可以包括图标、按钮、菜单、选项卡、文本框、对话框、状态栏、导航栏、微件(Widget)等界面元素。上述资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。上述通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,可以短暂停留后自动消失,无需用户交互。比如通知管理器被用于告知下载完成,消息提醒等。通知管理器还可以是以图表或者滚动条文本形式出现在系统顶部状态栏的通知,例如后台运行的应用程序的通知,还可以是以对话窗口形式出现在屏幕上的通知。例如在状态栏提示文本信息,发出提示音,振动,指示灯闪烁等。
如图5B所示,Android runtime包括核心库和虚拟机。Android runtime负责安卓系统的调度和管理。
核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。
应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。
系统库可以包括多个功能模块。例如:表面管理器(surface manager),媒体库(Media Libraries),三维图形处理库(例如:OpenGL ES),2D图形引擎(例如:SGL)等。
其中,表面管理器用于对显示子系统进行管理,并且为多个应用程序提供了2D和3D图层的融合。媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库可以支持多种音视频编码格式,例如:MPEG4,H.264,MP3,AAC,AMR,JPG,PNG等。三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。2D图形引擎是2D绘图的绘图引擎。
内核层位于HAL之下,是硬件和软件之间的层。内核层至少包含显示驱动,摄像头驱动,音频驱动,传感器驱动等,本申请实施例对此不做任何限制。
在本申请实施例中,仍如图5B所示,以相机应用举例,可在应用程序框架层中设置相机服务(Camera Service)。相机应用可通过调用预设的API启动Camera Service。Camera Service在运行过程中可以与硬件抽象层(HAL)中的Camera HAL交互。其中,Camera HAL负责与手机中实现拍摄功能的硬件设备(例如摄像头)进行交互,Camera HAL一方面隐藏了相关硬件设备的实现细节(例如具体的图像处理算法),另一方面可向Android系统提供调用相关硬件设备的接口。
示例性的,相机应用运行时可将用户下发的相关控制命令(例如预览、放大、拍照、录像或者抓拍指令)发送至Camera Service。一方面,Camera Service可将接收到的控制命令发送至Camera HAL,使得Camera HAL可根据接收到的控制命令调用内核层中的相机驱动,由相机驱动来驱动摄像头等硬件设备响应该控制命令采集图像数据。例如,摄像头可按照一定的帧率,将采集到的每一帧图像数据通过相机驱动传递给Camera HAL。其中,控制命令在操作系统内部的传递过程可参见图5B中控制流的具体传递过程。
另一方面,Camera Service接收到上述控制命令后,可根据接收到的控制命令确定此时的拍摄策略,拍摄策略中设置了需要对采集到的图像数据执行的具体图像处理任务。例如,在预览模式下,Camera Service可在拍摄策略中设置图像处理任务1用于实现人脸检测功能。又例如,如果在预览模式下用户开启了美颜功能,则Camera Service还可以在拍摄策略中设置图像处理任务2用于实现美颜功能。进而,Camera Service可将确定出的拍摄策略发送至Camera HAL。
当Camera HAL接收到摄像头采集到的每一帧图像数据后,可根据Camera Service下发的拍摄策略对上述图像数据执行相应的图像处理任务,得到图像处理后的每一帧拍摄画面。例如,Camera HAL可根据拍摄策略1对接收到的每一帧图像数据执行图像处理任务1,得到对应的每一帧拍摄画面。当拍摄策略1更新为拍摄策略2后,Camera HAL可根据拍摄策略2对接收到的每一帧图像数据执行图像处理任务2,得到对应的每一帧拍摄画面。
后续,Camera HAL可将经过图像处理后的每一帧拍摄画面通过Camera Service上报给相机应用,相机应用可将每一帧拍摄画面显示在显示界面中,或者,相机应用以照片或视频的形式将每一帧拍摄画面保存在手机内。其中,上述拍摄画面在操作系统内部的传递过程可参见图5B中数据流的具体传递过程。
本申请实施例这里结合图5B介绍手机中各个软件层实现本申请实施例的方法的工作原理。相机应用在录像模式下运行时,可将用户下发的抓拍指令发送至Camera Service。在录像模式下,Camera HAL可根据之前接收到的录像指令调用内核层中的相机驱动,由相机驱动来驱动摄像头等硬件设备响应该录像指令采集图像数据。例如,摄像头可按照一定的帧率,将采集到的每一帧图像数据通过相机驱动传递给Camera HAL。其中,基于录像指令由相机驱动传递给Camera HAL的每一帧图像组成的数据流可以为本申请实施例中所述的视频流(如录像文件的数据流和预览图像的数据流)。
另外,Camera Service接收到上述抓拍指令后,可根据接收到的抓拍指令确定此时的拍摄策略3为录像中抓拍图像。该拍摄策略中设置了需要对采集到的图像数据执行的具体图像处理任务3,该图像处理任务3用于实现录像中抓拍功能。进而,Camera Service可将确定出的拍摄策略3发送至Camera HAL。
当Camera HAL接收到摄像头采集到的每一帧图像数据后,可根据Camera Service下发的拍摄策略3对上述图像数据执行相应的图像处理任务3,得到对应的抓拍图像。
应注意,本申请实施例中,摄像头的图像传感器(Sensor)曝光输出的每一帧图像可以缓存在第一缓存队列(Buffer)中。Camera HAL响应于抓拍指令,可以从该Buffer中选择出抓拍帧(即用户输入抓拍操作时候,摄像头采集的图像)。如此,手机则可以从第一缓存队列中得到用户输入抓拍操作时,摄像头采集的图像。其中,第一缓存队列(Buffer)可以设置在手机软件系统的任何一层,如第一缓存队列(Buffer)可以设置在Camera HAL通过软件接口访问的内存区域。
在一些实施例中,HAL中还可以包括预设RAW域图像处理算法。Camera HAL可以通过一个CSI调用预设RAW域图像处理算法,处理该抓拍帧和该抓拍帧的相邻帧,得到处理后的图像帧。其中,上述CSI可以是Buffer与预设RAW域图像处理算法之间的一个软件接口。
之后,Camera HAL可以根据之前接收到的抓拍指令调用内核层中的相机驱动,由相机驱动来驱动摄像头中的ISP等硬件设备响应该抓拍指令对处理后的图像帧进行硬件处理,得到对应的一帧抓拍图像。后续,Camera HAL可将经过图像处理后的抓拍图像通过Camera Service上报给相机应用,相机应用可将抓拍图像以照片的形式保存在手机内。
本申请实施例提供一种录像中抓拍图像的方法,该方法可以应用于手机,该手机包括摄像头。如图6A所示,该方法可以包括S601-S607。
S601、手机接收用户的第一操作。该第一操作用于触发手机开始录制视频。
示例性的,手机可以显示图7所示的录像的取景界面701。该录像的取景界面701是手机还未开始录像的取景界面。该录像的取景界面701包括“开始录像”按钮702。上述第一操作可以是用户对“开始录像”按钮702的点击操作,用于触发手机开始录制视频。
S602、响应于第一操作,手机显示取景界面。该取景界面显示预览图像流。该预览图像流包括n帧预览图像,该预览图像是手机接收到第一操作后基于摄像头采集的n帧第一图像得到的。该取景界面还包括抓拍快门,该抓拍快门用于触发手机进行抓拍。
示例性的,以第一操作是用户对“开始录像”按钮702的点击操作为例。手机响应于用户对“开始录像”按钮702的点击操作,手机的显示屏可显示图7所示的取景界面703。该取景界面703是手机正在录制视频的取景界面。该取景界面703中可以显示预览图像流。该预览图像流包括手机录像过程中最终在显示屏上呈现给用户的多帧预览图像。如图7所示,该取景界面703包括基于上述第一图像得到的预览图像704。其中,预览图像704为图8所示的预览图像流中的一帧预览图像。该预览图像704是手机接收到第一操作后基于摄像头采集的第一图像得到的。
其中,本申请实施例这里介绍手机由第一图像得到预览图像704的方法。在手机显示取景界面之前,手机可以按照图1B、图4A、图4B或图8所示的预览图像流中预览图像的处理方式处理第一图像得到预览图像704。应注意,手机的ISP可以对摄像头采集的每一帧第一图像均执行上述RAW的图像处理、RGB域的图像处理和YUV域的图像处理。手机由第一图像得到预览图像704的方法,可以参考图4B或图8所示“预览图像流”的处理方法。
如图4B或图8所示,手机的图像传感器(Sensor)受到曝光的控制,可以不断输出Bayer图像。每一帧Bayer图像由手机的ISP进行RAW的图像处理得到RGB图像,RGB图像由ISP进行RGB域的图像处理得到YUV图像。YUV图像由处理算法1进行处理,之后由ISP进行YUV域的图像处理后,送至编码器1(ENCODER)进行编码,便可以得到预览图像704。处理后的多帧预览图像704可以形成一段预览的视频流(即预览图像流)。
其中,预览图像流的处理流程中,RAW的图像处理、RGB域的图像处理和YUV域的图像处理的详细描述,可以参考上述实施例中的相关介绍,这里不予赘述。图4B或图8所示的录像文件的处理方式可以参考上述实施例对录像文件处理方式的介绍,这里不予赘述。
需要强调的是,如图7所示,取景界面703还包括抓拍快门702。该抓拍快门702用于触发手机进行抓拍。其中,本申请实施例中所述的“手机抓拍”是指:手机在录像过程中,抓拍录像中的一帧图像得到照片。具体的,该抓拍快门702用于触发手机在录像的过程中抓拍图像得到照片。可以想到的是,手机录制视频(即录像)的过程中,可能会采集到一些精彩的画面。在手机录像的过程中,用户可能会希望手机可以抓拍到上述精彩的画面,并保存成照片展示给用户。本申请实施例中,用户点击上述抓拍快门702便可以实现录像过程中抓拍精彩画面的功能。
为了保证手机响应于用户的抓拍操作(如用户对抓拍快门702的点击操作),可以抓拍到用户实际需要的图像;手机可以将Sensor曝光输出Bayer图像缓存在一个第一缓存队列(Buffer)中。如此,即使从接收到用户的抓拍操作到Snapshot程序接收到抓拍指令,存在图3所示的延迟时长;接收到用户的抓拍操作时,Sensor输出的Bayer图像也可以缓存在第一缓存队列中。这样,手机便可以从第一缓存队列中获取这一帧图像。具体的,手机可以将上述n帧第一图像存储于电子设备的第一缓存队列(Buffer)中。响应于上述第一操作,手机还可以执行S603。
S603、手机在第一缓存队列缓存摄像头采集的第一图像。该第一缓存队列缓存摄像头采集的n帧第一图像,n≥1,n为整数。
示例性的,手机响应于上述第一操作,手机可以在图4A或图4B所示的第一缓存队列(Buffer)中缓存摄像头采集的第一图像。例如,该第一缓存队列可以以先进先出的原则缓存摄像头采集的n帧第一图像。如图9所示,第一缓存队列的队尾可以执行入队操作,用于插入第一图像;第一缓存队列的队头可以执行出队操作,用于删除第一图像。在第一缓存队列中已缓存n帧第一图像的情况下,第一缓存队列的队尾每插入一帧第一图像,第一缓存队列的队头则删除一帧第一图像。
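上述“先进先出、队满则队头出队”的第一缓存队列,可以用如下Python代码示意(假设n=4,并以时间戳代表帧;仅为原理示意,并非手机内的实际实现):

```python
from collections import deque

# 固定容量的先进先出缓存队列:新帧在队尾入队,容量满时队头帧自动出队
class FrameBuffer:
    def __init__(self, n):
        self.frames = deque(maxlen=n)  # maxlen 实现"队满则队头出队"

    def enqueue(self, frame):
        self.frames.append(frame)      # 队尾入队

    def snapshot(self):
        return list(self.frames)       # 当前缓存的全部帧(队头在前)

buf = FrameBuffer(4)
for ts in [1, 2, 3, 4, 5, 6]:          # 依次缓存6帧(以时间戳代表帧)
    buf.enqueue(ts)

# 容量为4,缓存第5、6帧时,第1、2帧已从队头出队
assert buf.snapshot() == [3, 4, 5, 6]
```

如上,缓存深度n固定时,队列中始终保留最近入队的n帧,对应上文“队尾每插入一帧,队头则删除一帧”的行为。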
S604、手机在摄像头采集n帧第一图像中的第q帧第一图像时,接收到用户对抓拍快门的第三操作。
示例性的,上述第三操作可以是用户对抓拍快门的单击操作。例如,第三操作可以是用户对图7所示的抓拍快门的单击操作。或者,第三操作可以是用户对抓拍快门的连续点击操作。其中,对抓拍快门的每次单击操作,用于触发手机执行一次S605。 也就是说,对抓拍快门的单击操作用于触发手机抓拍一张照片。对抓拍快门的连续点击操作用于触发手机抓拍多张照片。其中,手机在录像过程中抓拍多张照片的方法与抓拍一张照片的方法类似,这里不予赘述。
由S604的描述可知:第q帧第一图像是手机接收到第三操作时,摄像头采集的一帧第一图像。如此,在第一缓存队列缓存的n帧第一图像中,第q帧第一图像由摄像头的图像传感器输出的时间与手机接收到第三操作的时间最近。q≤n,q是正整数。
在另一些实施例中,上述第q帧第一图像也可以是n帧第一图像中清晰度最高的一帧图像。当然,也可以同时参考前述两种选帧依据,使用不同的权重并进行加权,作为对第q帧第一图像的选择依据。以下实施例中,以第q帧第一图像由摄像头的图像传感器输出的时间与手机接收到第三操作的时间最近为例,介绍本申请实施例的方法。
本申请实施例中,如图5B所示,手机的HAL中的Camera HAL可以包括一个选帧模块。Camera HAL接收到来自Camera Service的抓拍指令后,可以从第一缓存队列Buffer中缓存的n帧第一图像中选择出第q帧第一图像(即抓拍帧,也称为参考帧)。
在一些实施例中,上述每一帧第一图像对应一个时间信息,该时间信息记录有图像传感器Sensor输出对应第一图像的时间。其中,该时间信息也可以称为时间戳。
其中,手机可以记录手机接收到第三操作的时间(即第三操作发生时间)。在一种可能的实现方式中,可以由手机的硬件时钟(如用于记录触摸屏的触摸事件发生时间的硬件时钟)记录手机接收到第三操作(即抓拍操作)的时间。如此,在手机中记录第三操作发生时间的时钟与Sensor记录第一图像出图的时钟同步的前提下,手机(如上述选帧模块)则可以选择第一缓存队列Buffer中时间戳与手机接收到第三操作的时间最近的第一图像作为上述第q帧第一图像(即抓拍帧,也称为参考帧)。
需要说明的是,在一些实施例中手机中记录第三操作发生时间的时钟与Sensor记录第一图像出图的时钟同步。在另一些实施例中,手机中记录第三操作发生时间的时钟与Sensor记录第一图像出图的时钟为同一系统时钟。
本申请实施例中,手机可以将Sensor曝光输出Bayer图像缓存在一个第一缓存队列Buffer中。该第一缓存队列可以缓存多帧Bayer图像。如此,即使从接收到用户的抓拍操作到Snapshot程序接收到抓拍指令,存在图3所示的延迟时长。手机接收到用户的抓拍操作时,Sensor输出的Bayer图像也可以缓存在第一缓存队列中。手机的选帧模块可以从Buffer中选择抓拍帧(即用户输入抓拍操作时候,摄像头采集的图像)。如此,手机则可以从第一缓存队列中得到用户输入抓拍操作时,摄像头采集的图像。
需要说明的是,在一些平台,可以将Sensor曝光结束时间作为时间戳;在另一些平台可以将Sensor开始曝光时间作为时间戳,本申请实施例对此不作限制。
S605、手机响应于第三操作,对保存于第一缓存队列的第q帧第一图像进行图像处理,得到抓拍图像。
在一些实施例中,n可以等于1。在这种情况下,第一缓存队列中可以缓存一帧第一图像。如此,手机执行S605只能对一帧第一图像进行图像处理。
在另一些实施例中,n可以大于1。在这种情况下,第一缓存队列中可以缓存多帧第一图像。如此,S605可以包括:手机对保存于第一缓存队列的m帧第一图像进行图像处理,得到抓拍图像。该m帧第一图像包括上述第q帧第一图像,m≥1,m为整数。
在该实施例中,手机可以对一帧或多帧第一图像进行图像处理。其中,m≥2的情况下,手机可以对多帧第一图像进行图像处理,该m帧第一图像中除第q帧第一图像之外的其他图像,可以对抓拍帧(即第q帧第一图像,也称为参考帧)起到画质增强的作用,有利于获取噪声和纹理等信息,可以进一步提升抓拍图像的画质。
其中,n可以为预设正整数。假设Sensor每秒钟可以曝光a帧Bayer图像,图3所示的延迟时长为b秒,则Sensor在延迟时长b秒内可以曝光出b/(1/a)=a*b帧Bayer图像。n可以为大于或者等于a*b的整数。
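上述n的取值可以按如下方式示意计算(函数名与参数均为示意,假设帧率为a帧/秒、延迟时长为b秒):

```python
import math

def min_buffer_depth(fps_a, delay_b):
    # 延迟时长内 Sensor 可曝光约 a*b 帧,向上取整得到最小缓存深度 n
    return math.ceil(fps_a * delay_b)

# 例如帧率为30帧/秒、延迟时长0.33秒时,n 至少取 10
assert min_buffer_depth(30, 0.33) == 10
assert min_buffer_depth(30, 1) == 30
```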
示例性的,S605所述的图像处理可以包括:RAW域的图像处理和ISP图像处理。该RAW域的图像处理为在RAW颜色空间进行的图像处理。ISP图像处理为采用手机的ISP进行的图像处理。经过上述图像处理,抓拍图像的图像画质优于第q帧第一图像的图像画质。或者,上述图像处理可以包括:RAW域图像处理、ISP图像处理和编码处理。也就是说,编码处理可以集成在ISP图像处理中实现,也可以独立于ISP图像处理。本申请实施例中,以编码处理独立于ISP图像处理为例介绍本申请实施例的方法。编码处理具体是指采用编码器对图像进行编码。
其中,上述RAW域的图像处理可以通过预设RAW域图像处理算法来实现。ISP图像处理则可以通过手机的ISP来实现。
在一些实施例中,手机响应于第三操作,通过预设RAW域图像处理算法和ISP,对第q帧第一图像进行图像处理,得到抓拍图像的方法可以包括S605a-S605b。如图6B所示,S605可以包括S605a-S605b。
S605a、响应于第三操作,手机将m帧第一图像作为输入,运行预设RAW域图像处理算法,得到第二图像。其中,预设RAW域图像处理算法具备提升图像画质的功能。
其中,该预设RAW域图像处理算法中集成了RAW域、RGB域或者YUV域的图像处理功能中的至少一项图像处理功能,用于在ISP进行图像处理前提升图像的画质。
在一些实施例中,m可以等于1。也就是说,m帧第一图像即上述第q帧第一图像。这种情况下,预设RAW域图像处理算法单帧输入单帧输出:手机将上述第q帧第一图像作为输入运行预设RAW域图像处理算法,便可以得到第二图像。但是,一帧图像中的数据的完整性和纹理等参数均有限,将一帧图像作为输入运行预设RAW域图像处理算法,并不能有效提升这一帧图像的画质。
基于此,在另一些实施例中,m可以大于1。具体的,手机可以将该第q帧第一图像以及该第q帧第一图像相邻的至少一帧图像作为输入,运行预设RAW域图像处理算法。即可以将n帧第一图像中、包括第q帧第一图像在内的m帧第一图像作为输入运行预设RAW域图像处理算法。预设RAW域图像处理算法是一个多帧输入单帧输出的图像处理算法。应理解,m帧第一图像中除第q帧第一图像之外的其他图像,可以对抓拍帧(即第q帧第一图像,也称为参考帧)起到画质增强的作用,有利于获取噪声和纹理等信息,可以进一步提升第二图像的画质。
在一些实施例中,上述m帧第一图像为第一缓存队列中相邻的m帧图像。在另一些实施例中,m帧第一图像也可以是第一缓存队列缓存的n帧第一图像中,不相邻但包括第q帧第一图像的m帧图像。
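“从n帧中选出包含第q帧(参考帧)在内的m帧相邻图像”的选取方式可以示意如下(下标从0开始计,窗口越界时整体向一侧平移;仅为示意):

```python
def select_input_frames(frames, q, m):
    """从 n 帧中选出包含第 q 帧(0起始下标)的 m 帧相邻图像。"""
    n = len(frames)
    start = max(0, min(q - m // 2, n - m))  # 以第q帧为中心,越界时整体平移
    return frames[start:start + m]

frames = list(range(8))                                # 8帧,编号0..7
assert select_input_frames(frames, 4, 3) == [3, 4, 5]  # 以第4帧为中心取3帧
assert select_input_frames(frames, 0, 3) == [0, 1, 2]  # 参考帧在队头时向后取
assert select_input_frames(frames, 7, 3) == [5, 6, 7]  # 参考帧在队尾时向前取
```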
也就是说,本申请实施例中所述的预设RAW域图像处理算法可以是一个多帧输入、单帧输出的神经网络模型。其中,预设RAW域图像处理算法是一个RAW域的画质增强的深度学习网络。本方案中,增加了预设RAW域图像处理算法的算法处理,相比于完全采用ISP的硬件RAW域的图像处理、RGB域的图像处理和YUV域的图像处理,预设RAW域图像处理算法与ISP结合的效果更好,有助于提升抓拍图像的图像质量。
S605b、手机采用手机的ISP处理第二图像,对处理后的图像编码得到抓拍图像。
需要说明的是,本申请实施例中,手机可以通过时分复用的方式,采用ISP处理第一图像和第二图像。也就是说,手机采用ISP处理第二图像,并不会影响手机采用ISP处理第一图像。换言之,手机采用ISP处理得到图4A所示的抓拍图像,并不会影响手机处理得到图4A或图4B所示的预览图像流和录像文件。其中,由第一图像得到预览图像的图像处理流程中还包括处理算法1的处理流程。在一些实施例中,上述处理算法1可以包含在ISP的硬件模块中。
在另一些实施例中,处理算法1可以包含在手机的其他处理器(如CPU、GPU或者NPU等任一处理器)中。在该实施例中,ISP的硬件模块可以调用上述其他处理器中的处理算法1,来处理第一图像得到预览图像。
示例性的,响应于用户对图7所示的抓拍快门的单击操作(即第三操作),手机可以生成并保存抓拍图像。但是,手机在录像过程中,用户并不能查看该抓拍图像。用户可以在录像结束后,在相册中查看该抓拍图像。具体的,S605之后,本申请实施例的方法还包括S606-S607。
S606、手机接收用户的第二操作。
S607、手机响应于第二操作,结束录制视频,保存视频。
例如,手机可以接收用户对图10所示“结束录像”按钮706的点击操作(即第二操作)。响应于用户对图10所示“结束录像”按钮706的点击操作,手机可以结束录像,显示图10所示的录像的取景界面1001。录像的取景界面1001是手机未开始录像的取景界面。与图7所示的录像的取景界面701相比,手机的取景界面中的照片选项中的照片由图7所示的708更新为图10所示的1002。手机可以响应于用户对相册应用的启动操作,显示图11所示的相册列表界面1101,该相册列表界面1101包括手机中保存的多张照片和视频。例如,如图11所示,相册列表界面1101包括手机录制的视频1103,以及手机在录制视频1103过程中抓拍的照片1102。也就是说,手机结束录像后,保存了录制的视频(如视频1103)。
本申请实施例中,手机可以将Sensor曝光输出Bayer图像缓存在一个第一缓存队列Buffer中。该第一缓存队列可以缓存多帧Bayer图像。如此,即使从接收到用户的抓拍操作到Snapshot程序接收到抓拍指令存在延迟时长;接收到用户的抓拍操作时,Sensor输出的Bayer图像也可以缓存在第一缓存队列中。手机的选帧模块可以从第一缓存队列中选择抓拍帧(即用户输入抓拍操作时候,摄像头采集的图像)。如此,手机则可以从第一缓存队列中得到用户输入抓拍操作时,摄像头采集的图像。
并且,手机还可以采用预设RAW域图像处理算法和ISP硬件模块,处理选帧模块选择的抓拍帧;最后,由编码器2对处理结果进行编码得到抓拍图像。其中,预设RAW域图像处理算法是一个RAW域的画质增强的深度学习网络。本方案中,增加了预设RAW域图像处理算法的算法处理,相比于完全采用ISP的硬件RAW域的图像处理、RGB域的图像处理和YUV域的图像处理,预设RAW域图像处理算法与ISP结合的处理效果更好,有助于提升抓拍图像的图像质量。
综上所述,采用本申请实施例的方法,可以在录像过程中抓拍到满足用户需求的图像,并且可以提升抓拍图像的图像质量。
在一些实施例中,如图8所示,预设RAW域图像处理算法输入和输出的图像格式均为Bayer。在该实施例中,预设RAW域图像处理算法中集成了RAW域、RGB域或者YUV域中至少一项的部分图像处理功能,用于在ISP进行图像处理前提升图像的画质。如图8所示,在该实施例中,ISP可以对预设RAW域图像处理算法输出的Bayer图像依次进行RAW域的图像处理、RGB域的图像处理和YUV域的图像处理。具体的,如图12所示,S605a可以替换为S1201,S605b可以替换为S1202。
S1201、响应于第三操作,手机将Bayer格式的m帧第一图像作为输入,运行预设RAW域图像处理算法,得到Bayer格式的第二图像。上述m帧第一图像包括第q帧第一图像,预设RAW域图像处理算法具备提升图像画质的功能。
S1202、手机采用ISP依次对第二图像进行RAW域的图像处理、RGB域的图像处理和YUV域的图像处理,对处理后的图像进行编码得到抓拍图像。
在该实施例中,预设RAW域图像处理算法中集成了RAW域、RGB域或者YUV域中至少一项的部分图像处理功能,用于在ISP进行图像处理前提升图像的画质。
例如,假设ISP的RAW域的图像处理功能包括A、B和C,ISP的RGB域的图像处理功能包括D和E,ISP的YUV域的图像处理功能包括F和G。其中,ISP的RAW域、RGB域和YUV域的具体图像处理功能,可以参考上述实施例中的相关介绍,这里不予赘述。
在一种实现方式中,预设RAW域图像处理算法中可以集成了RAW域的A和C的图像处理功能。如此,手机执行S1201则可以运行预设RAW域图像处理算法完成RAW域中A和C的图像处理功能。手机执行S1202,采用ISP依次对第二图像完成RAW域中B的图像处理功能,完成RGB域中D和E的图像处理功能,完成YUV域中F和G的图像处理功能。
在另一种实现方式中,预设RAW域图像处理算法中可以集成了RAW域的A的图像处理功能,以及RGB域中D的图像处理功能。如此,手机执行S1201则可以运行预设RAW域图像处理算法完成RAW域中A的图像处理功能,以及RGB域中D的图像处理功能。手机执行S1202,采用ISP依次对第二图像完成RAW域中B和C的图像处理功能,完成RGB域中E的图像处理功能,完成YUV域中F和G的图像处理功能。
在另一种实现方式中,预设RAW域图像处理算法中可以集成了RAW域的A的图像处理功能,以及YUV域中F的图像处理功能。如此,手机执行S1201则可以运行预设RAW域图像处理算法完成RAW域中A的图像处理功能,以及YUV域中F的图像处理功能。手机执行S1202,采用ISP依次对第二图像完成RAW域中B和C的图像处理功能,完成RGB域中D和E的图像处理功能,完成YUV域中G的图像处理功能。
在另一种实现方式中,预设RAW域图像处理算法中可以集成了RGB域的D的图像处理功能,以及YUV域中F的图像处理功能。如此,手机执行S1201则可以运行预设RAW域图像处理算法完成RGB域的D的图像处理功能,以及YUV域中F的图像处理功能。手机执行S1202,采用ISP依次对第二图像完成RAW域中A、B和C的图像处理功能,完成RGB域中E的图像处理功能,完成YUV域中G的图像处理功能。
在另一些实施例中,如图13所示,预设RAW域图像处理算法输入的图像格式为Bayer,预设RAW域图像处理算法输出的图像格式为RGB。在该实施例中,预设RAW域图像处理算法中集成了RAW域的图像处理功能,用于在ISP对图像进行RGB域和YUV域的图像处理前提升图像的画质。如图13所示,在该实施例中,ISP可以对预设RAW域图像处理算法输出的RGB图像依次进行RGB域的图像处理和YUV域的图像处理。具体的,如图14所示,S605a可以替换为S1401,S605b可以替换为S1402。
S1401、响应于第三操作,手机将Bayer格式的m帧第一图像作为输入,运行预设RAW域图像处理算法,得到RGB格式的第二图像。上述m帧第一图像包括第q帧第一图像,预设RAW域图像处理算法具备提升图像画质的功能。
S1402、手机采用ISP依次对第二图像进行RGB域的图像处理和YUV域的图像处理,对处理后的图像进行编码得到抓拍图像。
在该实施例中,预设RAW域图像处理算法中集成了RAW域的图像处理功能,用于在ISP对图像进行RGB域和YUV域的图像处理前提升图像的画质。
例如,假设ISP的RAW域的图像处理功能包括a、b和c,ISP的RGB域的图像处理功能包括d和e,ISP的YUV域的图像处理功能包括f和g。
也就是说,预设RAW域图像处理算法中集成了RAW域的a、b和c的图像处理功能。如此,手机执行S1401则可以运行预设RAW域图像处理算法完成RAW域中a、b和c的图像处理功能。手机执行S1402,采用ISP依次对第二图像完成RGB域中d和e的图像处理功能,完成YUV域中f和g的图像处理功能。其中,ISP的RAW域、RGB域和YUV域的具体图像处理功能,可以参考上述实施例中的相关介绍,这里不予赘述。
在另一些实施例中,预设RAW域图像处理算法中不仅集成了RAW域的图像处理功能,还可以集成RGB域或者YUV域中至少一项的部分图像处理功能,用于在ISP进行RGB域的图像处理前提升图像的画质。例如,假设ISP的RAW域的图像处理功能包括a、b和c,ISP的RGB域的图像处理功能包括d和e,ISP的YUV域的图像处理功能包括f和g。
在一种实现方式中,预设RAW域图像处理算法中集成了RAW域的a、b和c的图像处理功能,以及RGB域中d的图像处理功能。如此,手机执行S1401则可以运行预设RAW域图像处理算法完成RAW域中a、b和c的图像处理功能,以及完成RGB域中d的图像处理功能。手机执行S1402,采用ISP依次对第二图像完成RGB域中e的图像处理功能,完成YUV域中f和g的图像处理功能。
在另一种实现方式中,预设RAW域图像处理算法中集成了RAW域的a、b和c的图像处理功能,以及YUV域中f的图像处理功能。如此,手机执行S1401则可以运行预设RAW域图像处理算法完成RAW域中a、b和c的图像处理功能,以及完成YUV域中f的图像处理功能。手机执行S1402,采用ISP依次对第二图像完成RGB域中d和e的图像处理功能,完成YUV域中g的图像处理功能。
在另一种实现方式中,预设RAW域图像处理算法中集成了RAW域的a、b和c的图像处理功能,RGB域中d的图像处理功能,以及YUV域中f的图像处理功能。如此,手机执行S1401则可以运行预设RAW域图像处理算法完成RAW域中a、b和c的图像处理功能,完成RGB域中d的图像处理功能,以及完成YUV域中f的图像处理功能。手机执行S1402,采用ISP依次对第二图像完成RGB域中e的图像处理功能,完成YUV域中g的图像处理功能。
在另一些实施例中,如图15所示,预设RAW域图像处理算法输入的图像格式为Bayer,预设RAW域图像处理算法输出的图像格式为YUV。在该实施例中,预设RAW域图像处理算法中集成了RAW域的图像处理功能和RGB域的图像处理功能,用于在ISP对图像进行YUV域的图像处理前提升图像的画质。如图15所示,在该实施例中,ISP可以对预设RAW域图像处理算法输出的YUV图像进行YUV域的图像处理。具体的,如图16所示,S605a可以替换为S1601,S605b可以替换为S1602。
S1601、响应于第三操作,手机将Bayer格式的m帧第一图像作为输入,运行预设RAW域图像处理算法,得到YUV格式的第二图像。上述m帧第一图像包括第q帧第一图像,预设RAW域图像处理算法具备提升图像画质的功能。
S1602、手机采用ISP对第二图像进行YUV域的图像处理,对处理后的图像进行编码得到抓拍图像。
在该实施例中,预设RAW域图像处理算法中集成了RAW域的图像处理功能和RGB域的图像处理功能,用于在ISP对图像进行YUV域的图像处理前提升图像的画质。
例如,假设ISP的RAW域的图像处理功能包括I、II和III,ISP的RGB域的图像处理功能包括IV和V,ISP的YUV域的图像处理功能包括VI和VII。
也就是说,预设RAW域图像处理算法中集成了RAW域的I、II和III的图像处理功能,以及RGB域的IV和V的图像处理功能。如此,手机执行S1601则可以运行预设RAW域图像处理算法完成RAW域中I、II和III的图像处理功能,以及RGB域的IV和V的图像处理功能。手机执行S1602,采用ISP依次对第二图像完成YUV域中VI和VII的图像处理功能。其中,ISP的RAW域、RGB域和YUV域的具体图像处理功能,可以参考上述实施例中的相关介绍,这里不予赘述。
在另一些实施例中,预设RAW域图像处理算法中不仅集成了RAW域和RGB域的图像处理功能,还可以集成YUV域中至少一项的部分图像处理功能,用于在ISP进行YUV域的图像处理前提升图像的画质。
例如,假设ISP的RAW域的图像处理功能包括I、II和III,ISP的RGB域的图像处理功能包括IV和V,ISP的YUV域的图像处理功能包括VI和VII。预设RAW域图像处理算法中集成了RAW域的I、II和III的图像处理功能,RGB域的IV和V的图像处理功能,以及YUV域的VI的图像处理功能。如此,手机执行S1601则可以运行预设RAW域图像处理算法完成RAW域中I、II和III的图像处理功能,RGB域的IV和V的图像处理功能,以及YUV域的VI的图像处理功能。手机执行S1602,采用ISP依次对第二图像完成YUV域中VII的图像处理功能。
本申请实施例中,可以在预设RAW域图像处理算法中集成ISP的RAW域、RGB域或者YUV域中至少一项的部分图像处理功能,用于在ISP进行图像处理前提升图像的画质。
本申请实施例提供一种录像中抓拍图像的方法,该方法可以应用于手机,该手机包括摄像头。如图17所示,该方法可以包括S1701-S1708。
S1701、手机接收用户的第一操作。该第一操作用于触发手机开始录制视频。
S1702、响应于第一操作,手机显示取景界面。该取景界面显示预览图像流,该预览图像流包括n帧预览图像。该预览图像是手机接收到第一操作后基于手机的摄像头采集的n帧第一图像得到的,取景界面还包括抓拍快门,抓拍快门用于触发手机进行抓拍。
其中,预览图像可以参考上述实施例对预览图像704的详细描述,本申请实施例这里不予赘述。
S1703、手机在第一缓存队列缓存摄像头采集的第一图像。该第一缓存队列缓存摄像头采集的n帧第一图像,n≥1,n为整数。
其中,S1701-S1703的详细实现过程,可以参考上述实施例对S601-S603的介绍,本申请实施例这里不予赘述。上述实施例中,手机响应于用户对抓拍快门的第三操作,从第一缓存队列中选择抓拍帧(也称为参考帧),并将参考帧与参考帧的相邻帧作为输入运行预设RAW域图像处理算法,得到抓拍图像。而本申请实施例中,则可以周期性采用预设RAW域图像处理算法处理第一缓存队列中缓存的k帧第一图像。具体的,在S1703之后,本申请实施例的方法还可以包括S1704。
S1704、手机周期性对第一缓存队列中缓存的k帧第一图像进行图像处理,得到第二图像,k≥1,k为整数。
其中,k可以等于n,或者k可以小于n。
示例性的,以n=6,k=4为例。手机可以周期性对每4帧第一图像进行图像处理,得到第二图像。例如,图18中的(a)所示的第1帧第一图像、第2帧第一图像、第3帧第一图像和第4帧第一图像这4帧第一图像可以作为一组图像,进行一次图像处理。如图18中的(b)所示,随着Sensor曝光产生的新一帧第一图像(如第7帧第一图像)在Buffer的队尾入队,Buffer队头的一帧图像(如第1帧第一图像)则可以从Buffer的队头出队。
其中,手机可以周期性读取Buffer队头出队的k帧第一图像,对该k帧第一图像进行图像处理得到第二图像。例如,如图19A中的(a)所示,手机可以在第4帧第一图像出队后,将第1帧第一图像、第2帧第一图像、第3帧第一图像和第4帧第一图像这4帧第一图像进行图像处理得到第二图像i。如图19A中的(b)所示,手机可以在第8帧第一图像出队后,对第5帧第一图像、第6帧第一图像、第7帧第一图像和第8帧第一图像这4帧第一图像进行图像处理得到第二图像ii。如图19A中的(c)所示,手机可以在第12帧第一图像出队后,对第9帧第一图像、第10帧第一图像、第11帧第一图像和第12帧第一图像这4帧第一图像进行图像处理得到第二图像iii。
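上述“每出队k帧即组成一组送入图像处理”的分组方式可以示意如下(以帧编号代表帧,k=4;仅为示意):

```python
def group_frames(frame_ids, k):
    """每当累计出队 k 帧,即组成一组送入图像处理;返回全部完整分组。"""
    return [frame_ids[i:i + k] for i in range(0, len(frame_ids) - k + 1, k)]

dequeued = list(range(1, 13))  # 依次出队的第1~12帧
groups = group_frames(dequeued, 4)
# 第1~4帧、第5~8帧、第9~12帧各组成一组,分别用于得到第二图像i、ii、iii
assert groups == [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
```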
示例性的,S1704所述的图像处理可以包括:RAW域的图像处理和ISP图像处理。该RAW域的图像处理为在RAW颜色空间进行的图像处理。ISP图像处理为采用手机的ISP进行的图像处理。经过上述图像处理,第二图像的图像画质优于k帧第一图像的图像画质。或者,上述图像处理可以包括:RAW域图像处理、ISP图像处理和编码处理。也就是说,编码处理可以集成在ISP图像处理中实现,也可以独立于ISP图像处理。本申请实施例中,以编码处理独立于ISP图像处理为例介绍本申请实施例的方法。编码处理的详细介绍可以参考上述实施例中的相关内容,这里不予赘述。
其中,上述RAW域的图像处理可以通过预设RAW域图像处理算法来实现。ISP图像处理则可以通过手机的ISP来实现。S1704可以包括S1704a和S1704b:
S1704a、手机周期性将第一缓存队列中缓存的k帧第一图像作为输入,运行预设RAW域图像处理算法得到第三图像。该预设RAW域图像处理算法具备提升图像画质的功能。
其中,该预设RAW域图像处理算法中集成了RAW域、RGB域或者YUV域的图像处理功能中的至少一项图像处理功能,用于在ISP进行图像处理前提升图像的画质。
S1704b、手机采用手机的ISP处理所述第三图像,得到第二图像。
示例性的,仍以n=6,k=4为例。如图19B中的(a)所示,手机可以在第4帧第一图像出队后,将第1帧第一图像、第2帧第一图像、第3帧第一图像和第4帧第一图像这4帧第一图像作为输入,运行预设RAW域图像处理算法得到第三图像i。然后,如图19B中的(a)所示,手机的ISP可以处理该第三图像i得到第二图像I。
如图19B中的(b)所示,手机可以在第8帧第一图像出队后,将第5帧第一图像、第6帧第一图像、第7帧第一图像和第8帧第一图像这4帧第一图像作为输入,运行预设RAW域图像处理算法得到第三图像ii。然后,如图19B中的(b)所示,手机的ISP可以处理该第三图像ii得到第二图像II。
如图19B中的(c)所示,手机可以在第12帧第一图像出队后,将第9帧第一图像、第10帧第一图像、第11帧第一图像和第12帧第一图像这4帧第一图像作为输入,运行预设RAW域图像处理算法得到第三图像iii。然后,如图19B中的(c)所示,手机的ISP可以处理该第三图像iii得到第二图像III。
需要说明的是,手机执行S1704对k帧第一图像进行图像处理得到第二图像的具体方法,可以参考上述实施例中,手机对保存于第一缓存队列的m帧第一图像进行图像处理,得到抓拍图像的方法,本申请实施例这里不予赘述。
在一些实施例中,手机(如手机的选帧模块)可以选择出k帧第一图像中图像质量最好的一帧第一图像作为参考帧。手机可以记录该参考帧的时间信息。该参考帧的时间信息可以作为第三图像的时间信息,以及第二图像的时间信息。之后,第二图像的时间信息可以用于手机执行S1705时,选择用于进行画质增强的第二图像。应理解,两帧图像的时间信息所指示的时间越接近,这两帧图像的纹理接近的可能性则更高。纹理相似的图像更容易融合,有利于进行图像增强,进而有利于提升处理后图像的画质。
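“从k帧中选出图像质量最好的一帧作为参考帧”可以用“梯度方差越大、图像越清晰”这类简单指标来示意(实际的画质评价指标由具体实现决定,以下代码仅为示意):

```python
import numpy as np

def sharpness(img):
    # 以相邻像素差分的方差近似清晰度:纹理越丰富/越清晰,方差越大
    gx = np.diff(img.astype(float), axis=1)
    gy = np.diff(img.astype(float), axis=0)
    return gx.var() + gy.var()

def pick_reference(frames):
    """返回 k 帧中清晰度指标最高的一帧的下标,作为参考帧。"""
    return max(range(len(frames)), key=lambda i: sharpness(frames[i]))

rng = np.random.default_rng(0)
sharp = rng.integers(0, 255, (16, 16))   # 随机纹理图,梯度方差较大
blurred = np.full((16, 16), 128)         # 纯色平坦图,梯度方差为0
assert pick_reference([blurred, sharp, blurred]) == 1
```

选出参考帧后,其时间戳即可按上文所述作为第三图像和第二图像的时间信息。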
在一些实施例中,图20所示的预设RAW域图像处理算法输入和输出的图像格式均为Bayer。在该实施例中,预设RAW域图像处理算法中集成了RAW域、RGB域或者YUV域中至少一项的部分图像处理功能,用于在ISP进行图像处理前提升图像的画质。如图20所示,在该实施例中,ISP可以对预设RAW域图像处理算法输出的Bayer图像依次进行RAW域的图像处理、RGB域的图像处理和YUV域的图像处理。具体的,S1704a可以包括:手机将Bayer格式的k帧第一图像作为输入,运行预设RAW域图像处理算法,得到Bayer格式的第三图像。S1704b可以包括:手机采用ISP依次对第三图像进行RAW域的图像处理、RGB域的图像处理和YUV域的图像处理,得到第二图像。
在另一些实施例中,图21所示的预设RAW域图像处理算法输入的图像格式为Bayer,预设RAW域图像处理算法输出的图像格式为RGB。在该实施例中,预设RAW域图像处理算法中集成了RAW域的图像处理功能,用于在ISP对图像进行RGB域和YUV域的图像处理前提升图像的画质。如图21所示,在该实施例中,ISP可以对预设RAW域图像处理算法输出的RGB图像依次进行RGB域的图像处理和YUV域的图像处理。具体的,S1704a可以包括:手机将Bayer格式的k帧第一图像作为输入,运行预设RAW域图像处理算法,得到RGB格式的第三图像。S1704b可以包括:手机采用ISP依次对第三图像进行RGB域的图像处理和YUV域的图像处理,得到第二图像。
在另一些实施例中,图22所示的预设RAW域图像处理算法输入的图像格式为Bayer,预设RAW域图像处理算法输出的图像格式为YUV。在该实施例中,预设RAW域图像处理算法中集成了RAW域的图像处理功能和RGB域的图像处理功能,用于在ISP对图像进行YUV域的图像处理前提升图像的画质。如图22所示,在该实施例中,ISP可以对预设RAW域图像处理算法输出的YUV图像进行YUV域的图像处理。具体的,S1704a可以包括:手机将Bayer格式的k帧第一图像作为输入,运行预设RAW域图像处理算法,得到YUV格式的第三图像。S1704b可以包括:手机采用ISP对第三图像进行YUV域的图像处理,得到第二图像。
S1705、手机在摄像头采集n帧第一图像中的第q帧第一图像时,接收到用户对抓拍快门的第三操作。
其中,S1705的详细实现过程,可以参考上述实施例对S604的介绍,本申请实施例这里不予赘述。
S1706、手机响应于第三操作,采用手机接收到第三操作时得到的最近一帧的第二图像对第四图像进行画质增强,得到抓拍图像。该第四图像是视频中时间信息与第q帧第一图像的时间信息相同的一帧图像。
其中,手机录像过程中,随时都有可能会接收到用户对抓拍快门的第三操作。由于手机执行S1704是周期性处理k帧第一图像得到第二图像的;因此,手机录像过程中,在不同时刻接收到用户对抓拍快门的第三操作,则手机得到的最近一帧的第二图像不同。
在图23所示的t1时刻,第4帧第一图像从第一缓存队列Buffer中出队,手机执行S1704a得到第三图像i,手机执行S1704b由第三图像i得到第二图像I。在图23所示的t2时刻,第8帧第一图像从第一缓存队列Buffer中出队,手机执行S1704a得到第三图像ii,手机执行S1704b由第三图像ii得到第二图像II。在图23所示的t3时刻,第12帧第一图像从第一缓存队列Buffer中出队,手机执行S1704a得到第三图像iii,手机执行S1704b由第三图像iii得到第二图像III。
由此可见,在t1时刻-t2时刻这段时间,手机只可能得到第二图像I;而不能得到第二图像II和第二图像III。因此,在手机的Sensor曝光输出第5帧第一图像时,手机接收到上述第三操作,手机中最近一帧的第二图像是第二图像I。此时,手机可以采用第二图像I,对第5帧第一图像进行画质增强,得到抓拍图像。
在手机的Sensor曝光输出第6帧第一图像时,手机接收到上述第三操作,手机中最近一帧的第二图像是第二图像I。此时,手机可以采用第二图像I,对第6帧第一图像进行画质增强,得到抓拍图像。
在手机的Sensor曝光输出第7帧第一图像时,手机接收到上述第三操作,手机中最近一帧的第二图像仍是第二图像I。此时,手机可以采用第二图像I,对第7帧第一图像进行画质增强,得到抓拍图像。
在手机的Sensor曝光输出第8帧第一图像时,手机接收到上述第三操作,手机中最近一帧的第二图像仍是第二图像I。此时,手机可以采用第二图像I,对第8帧第一图像进行画质增强,得到抓拍图像。
在t2时刻-t3时刻这段时间,手机得到最近一帧的第二图像是第二图像II;而不能得到第二图像III。因此,在手机的Sensor曝光输出第9帧第一图像时,手机接收到上述第三操作,手机中最近一帧的第二图像是第二图像II。此时,手机可以采用第二图像II,对第9帧第一图像进行画质增强,得到抓拍图像。
在手机的Sensor曝光输出第10帧第一图像时,手机接收到上述第三操作,手机中最近一帧的第二图像是第二图像II。此时,手机可以采用第二图像II,对第10帧第一图像进行画质增强,得到抓拍图像。
在手机的Sensor曝光输出第11帧第一图像时,手机接收到上述第三操作,手机中最近一帧的第二图像仍是第二图像II。此时,手机可以采用第二图像II,对第11帧第一图像进行画质增强,得到抓拍图像。
在手机的Sensor曝光输出第12帧第一图像时,手机接收到上述第三操作,手机中最近一帧的第二图像仍是第二图像II。此时,手机可以采用第二图像II,对第12帧第一图像进行画质增强,得到抓拍图像。
在手机的Sensor曝光输出第13帧第一图像时,手机接收到上述第三操作,手机中最近一帧的第二图像才是第二图像III。此时,手机可以采用第二图像III,对第13帧第一图像进行画质增强,得到抓拍图像。
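上述“采用手机接收到第三操作时得到的最近一帧第二图像”的查找逻辑,可以示意如下(以第二图像的生成时刻作为其时间信息,数值仅为对应上文示例的示意):

```python
def latest_second_image(second_images, op_time):
    """second_images 为按生成时刻升序排列的 (生成时刻, 第二图像) 列表。
    返回抓拍操作发生时已经生成的最近一帧第二图像;尚无可用帧时返回 None。"""
    ready = [img for t, img in second_images if t <= op_time]
    return ready[-1] if ready else None

# 第二图像I/II/III 分别在 t1=4、t2=8、t3=12 时刻生成(以帧序号为时间单位)
imgs = [(4, "I"), (8, "II"), (12, "III")]
assert latest_second_image(imgs, 5) == "I"     # 第5帧时,只有第二图像I可用
assert latest_second_image(imgs, 9) == "II"    # 第9帧时,最近一帧为第二图像II
assert latest_second_image(imgs, 13) == "III"  # 第13帧时,最近一帧为第二图像III
assert latest_second_image(imgs, 3) is None    # t1之前尚无第二图像
```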
在一些实施例中,手机可以在第二缓存队列(Buffer′)中缓存第二图像。该第二缓存队列可以缓存一帧或多帧图像。
如此,手机生成一帧第二图像后,新生成的一帧图像便可以在Buffer′入队,而之前已缓存在Buffer′中的一帧第二图像则可以出队。例如,手机的ISP输出图19B中的(a)所示的第二图像I后,该第二图像I则可以缓存在Buffer′中,即第二图像I在Buffer′入队。之后,手机的ISP输出图19B中的(b)所示的第二图像II后,已缓存在Buffer′中的第二图像I则可以从Buffer′出队,第二图像II则可以缓存在Buffer′中,即第二图像II在Buffer′入队。之后,手机的ISP输出图19B中的(c)所示的第二图像III后,已缓存在Buffer′中的第二图像II则可以从Buffer′出队,第二图像III则可以缓存在Buffer′中,即第二图像III在Buffer′入队。如此循环往复,Buffer′中始终缓存一帧第二图像。Buffer′中缓存的一帧第二图像为上述手机接收到第三操作时得到的最近一帧的第二图像。
在一些实施例中,手机可以从Buffer′中缓存的第二图像中,选择出时间信息与上述第四图像的时间信息最近的一帧第二图像作为引导图像,用于对第四图像进行画质增强。需要说明的是,S1706中所述的第四图像可以是录像文件中经过ISP处理的第q帧第一图像。第四图像的时间信息与第q帧第一图像的时间信息相同。
示例性的,手机可以通过融合网络(也称为图像融合网络),利用上述最近一帧第二图像对第四图像进行画质增强得到抓拍图像。其中,手机通过融合网络进行图像增强的方法,可以参考常规技术中的相关方法,本申请实施例这里不予赘述。
在一些实施例中,手机执行S1706之前,可以对上述最近一帧第二图像和第四图像进行配准。之后,手机可以利用配准后的第二图像,对配准后的第四图像进行画质增强。其中,手机对最近一帧第二图像和第四图像进行融合(Fusion)之前,对最近一帧第二图像和第四图像进行配准,可以提升手机进行画质增强的成功率和效果。
一般而言,配准可以包括两种方式:全局配准和局部配准。
全局配准一般使用特征点检测和匹配。以手机对第四图像和第二图像进行配准为例。手机可以检测第四图像和第二图像中匹配的特征点(如像素点)。然后,手机可以筛选匹配的特征点。如果匹配的特征点中质量较好的特征点个数大于预设阈值1,则手机可以认为全局配准效果较好,可以进行融合。
局部配准一般使用光流法。以手机对第四图像和第二图像进行配准为例。手机可以先对第四图像和第二图像计算光流。然后,手机可以将经过光流配准变换后的第二图像,与经过光流配准变换后的第四图像做差。如果差异小于预设阈值2,则手机可以认为局部配准效果较好,可以融合。
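局部配准中“配准变换后做差,差异小于阈值才融合”的判定,可以用整像素平移近似配准变换来示意(真实实现通常基于光流,以下的平移量与阈值均为示意假设):

```python
import numpy as np

def aligned_diff(ref, mov, shift):
    """将 mov 按给定平移量对齐到 ref 后,计算两图的平均绝对差。"""
    dy, dx = shift
    mov_aligned = np.roll(mov, (dy, dx), axis=(0, 1))  # 整像素平移近似配准变换
    return np.abs(ref.astype(float) - mov_aligned.astype(float)).mean()

ref = np.zeros((8, 8))
ref[2:5, 2:5] = 100                          # 参考图中的一块纹理
mov = np.roll(ref, (1, 1), axis=(0, 1))      # 同一纹理整体平移1像素的待配准图
assert aligned_diff(ref, mov, (-1, -1)) == 0.0  # 配准后差异为0,可以融合
assert aligned_diff(ref, mov, (0, 0)) > 0       # 未配准时差异明显
```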
在另一些实施例中,手机对第四图像和第二图像进行配准之前,可以先对比第四图像与第二图像的纹理相似度。如果第四图像与第二图像的纹理相似度高于预设相似度阈值,则表示第四图像与第二图像的纹理相似度较高。在这种情况下,手机对第四图像和第二图像进行配准的成功率较高。采用本方案,可以提升手机配准的成功率。
如果第四图像与第二图像的纹理相似度低于或等于预设相似度阈值,则表示第四图像与第二图像的纹理相似度较低。在这种情况下,手机则不会对第四图像和第二图像进行配准。这样,减少无效配准影响手机功耗。在这种情况下,手机可以直接将第四图像作为抓拍图像。
在一些实施例中,上述画质增强可以实现噪声去除、清晰度提升、动态范围(Dynamic Range)的改变或拓展、图像超分辨等功能。本申请实施例这里介绍上述图像超分辨功能。
一般而言,考虑到功耗和存储空间等因素,录像(即录制视频)所选择的分辨率相比于Sensor出图的分辨率较低。因此,在录像过程中ISP对Sensor输出的图像是进行了下采样的。下采样(subsampled)也可以称为降采样(down sampled)。对图像进行下采样,可以降低图像的分辨率。如此,如图20-图22中任一附图所示,录像文件中的第四图像是低分辨率(low resolution,LR)图像。而用户希望在录像过程中获得的抓拍图像是高分辨率的图像。基于此,本实施例中所述的画质增强可以包括图像超分辨。图20-图22中任一附图所示的第二图像是未经过下采样的高分辨率(high resolution,HR)图像。手机可以将上述最近一帧第二图像作为引导图像,对第四图像进行画质增强(包括图像超分辨),提升第四图像的分辨率。
例如,假设第四图像的分辨率为1080p,上述最近一帧第二图像的分辨率为4k。手机执行S1706,可以将分辨率为4k的第二图像作为引导图像,对分辨率为1080p的第四图像进行画质增强。画质增强后的第四图像(即抓拍图像)的分辨率可以为4k。
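以高分辨率第二图像为引导、对低分辨率第四图像进行超分辨的思路,可以用“最近邻上采样+按权重与引导图融合”的极简方式示意(真实方案通常由融合网络完成,以下的权重与插值方式均为示意假设):

```python
import numpy as np

def guided_upscale(lr, guide, weight=0.5):
    """把低分辨率图最近邻上采样到引导图的分辨率,再与引导图按权重融合。"""
    scale = guide.shape[0] // lr.shape[0]
    up = np.kron(lr, np.ones((scale, scale)))        # 最近邻上采样
    return (1 - weight) * up + weight * guide.astype(float)

lr = np.array([[0.0, 1.0], [1.0, 0.0]])              # 2x2 低分辨率图(对应第四图像)
guide = np.kron(lr, np.ones((2, 2)))                 # 4x4 高分辨率引导图(对应第二图像)
out = guided_upscale(lr, guide)
assert out.shape == guide.shape                      # 输出分辨率提升到与引导图一致
assert np.allclose(out, guide)                       # 引导图与上采样结果一致时输出不变
```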
应理解,由手机接收到第三操作时摄像头采集的第一图像得到的第四图像是用户想要抓拍的图像。手机接收到第三操作时得到的最近一帧的第二图像是手机使用预设RAW域图像处理算法和ISP进行画质增强得到的图像质量较高的图像。因此,采用最近一帧的第二图像对第四图像进行画质增强,可以提升最终得到的抓拍图像的画质。如此,不仅可以得到用户在录像过程中想要抓拍的图像,还可以提升抓拍图像的图像质量。
S1707、手机接收用户的第二操作。
S1708、手机响应于第二操作,结束录制视频,保存视频。
其中,S1707-S1708的详细实现过程,可以参考上述实施例对S606-S607的介绍,本申请实施例这里不予赘述。
相比于S601-S607的方案,采用S1701-S1708的方案,可以减少第一缓存队列中缓存的图像帧的数量。具体的,假设图3所示的延迟时长是330毫秒(ms),手机的Sensor每30毫秒(ms)曝光一帧第一图像。执行S601-S607的方案,为了保证手机的选帧模块可以从第一缓存队列中选出手机接收第三操作的时刻Sensor曝光的第一图像,则第一缓存队列中至少需要缓存10帧图像。采用S1701-S1708的方案,第二图像只是用于增强用户想要抓拍的第四图像,因此不需要缓存较多的图像帧来生成第二图像。
本申请另一些实施例提供了一种电子设备,该电子设备可以包括:上述显示屏、摄像头、存储器和一个或多个处理器。该显示屏、摄像头、存储器和处理器耦合。该存储器用于存储计算机程序代码,该计算机程序代码包括计算机指令。当处理器执行计算机指令时,电子设备可执行上述方法实施例中手机执行的各个功能或者步骤。该电子设备的结构可以参考图5A所示的手机的结构。
本申请实施例还提供一种芯片系统,如图24所示,该芯片系统2400包括至少一个处理器2401和至少一个接口电路2402。处理器2401和接口电路2402可通过线路互联。例如,接口电路2402可用于从其它装置(例如电子设备的存储器)接收信号。又例如,接口电路2402可用于向其它装置(例如处理器2401)发送信号。示例性的,接口电路2402可读取存储器中存储的指令,并将该指令发送给处理器2401。当所述指令被处理器2401执行时,可使得电子设备执行上述实施例中的各个步骤。当然,该芯片系统还可以包含其他分立器件,本申请实施例对此不作具体限定。
本申请实施例还提供一种计算机存储介质,该计算机存储介质包括计算机指令,当所述计算机指令在上述电子设备上运行时,使得该电子设备执行上述方法实施例中手机执行的各个功能或者步骤。
本申请实施例还提供一种计算机程序产品,当所述计算机程序产品在计算机上运行时,使得所述计算机执行上述方法实施例中手机执行的各个功能或者步骤。
通过以上实施方式的描述,所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块, 以完成以上描述的全部或者部分功能。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个装置,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是一个物理单元或多个物理单元,即可以位于一个地方,或者也可以分布到多个不同地方。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该软件产品存储在一个存储介质中,包括若干指令用以使得一个设备(可以是单片机,芯片等)或处理器(processor)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上内容,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何在本申请揭露的技术范围内的变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (17)

  1. 一种录像中抓拍图像的方法,其特征在于,应用于电子设备,所述电子设备包括摄像头,所述方法包括:
    所述电子设备接收用户的第一操作;其中,所述第一操作用于触发所述电子设备开始录制视频;
    响应于所述第一操作,所述电子设备显示取景界面,所述取景界面显示预览图像流,所述预览图像流包括n帧预览图像,所述预览图像是所述电子设备接收到所述第一操作后基于所述电子设备的摄像头采集的n帧第一图像得到的,所述取景界面还包括抓拍快门,所述抓拍快门用于触发所述电子设备进行抓拍,所述n帧第一图像存储于所述电子设备的第一缓存队列中,n≥1,n为整数;
    所述电子设备接收用户的第二操作;
    所述电子设备响应于所述第二操作,结束录制视频;
    所述电子设备保存所述视频;其中,
    在所述摄像头采集n帧第一图像中的第q帧第一图像时,接收到用户对所述抓拍快门的第三操作;
    所述电子设备响应于所述第三操作,对保存于所述第一缓存队列的所述第q帧第一图像进行图像处理,得到抓拍图像。
  2. 根据权利要求1所述的方法,其特征在于,所述对保存于所述第一缓存队列的所述第q帧第一图像进行图像处理,得到抓拍图像,包括:
    所述电子设备对m帧第一图像进行图像处理,得到所述抓拍图像,其中,所述m帧第一图像包括所述第q帧第一图像,m≥1,m为整数。
  3. 根据权利要求1或2所述的方法,其特征在于,
    所述图像处理包括RAW域的图像处理和ISP图像处理,所述RAW域的图像处理为在RAW颜色空间进行的图像处理,所述ISP图像处理为采用所述电子设备的图像信号处理器ISP进行的图像处理,所述抓拍图像的图像画质优于所述第q帧第一图像的图像画质;或者,
    所述图像处理包括所述RAW域图像处理、所述ISP图像处理和编码处理,所述抓拍图像的图像画质优于所述第q帧第一图像的图像画质。
  4. 根据权利要求2所述的方法,其特征在于,所述电子设备对所述m帧第一图像进行图像处理,得到所述抓拍图像,包括:
    所述电子设备将所述m帧第一图像作为输入,运行预设原始RAW域图像处理算法,得到第二图像;其中,所述预设RAW域图像处理算法具备提升图像画质的功能;所述预设RAW域图像处理算法中集成了所述RAW域、RGB域或者YUV域的图像处理功能中的至少一项图像处理功能,用于在所述ISP进行图像处理前提升图像的画质;
    所述电子设备采用所述ISP处理所述第二图像,对处理后的图像进行编码得到所述抓拍图像。
  5. 根据权利要求4所述的方法,其特征在于,所述电子设备将所述m帧第一图像作为输入,运行预设RAW域图像处理算法,得到第二图像,包括:
    所述电子设备将拜耳Bayer格式的所述m帧第一图像作为输入,运行所述预设RAW域图像处理算法,得到Bayer格式的第二图像;其中,所述预设RAW域图像处理算法中集成了所述RAW域、所述RGB域或者所述YUV域中至少一项的部分图像处理功能,用于在所述ISP进行图像处理前提升图像的画质;
    所述电子设备采用所述ISP处理所述第二图像,对处理后的图像进行编码得到所述抓拍图像,包括:
    所述电子设备采用所述ISP依次对所述第二图像进行所述RAW域的图像处理、所述RGB域的图像处理和所述YUV域的图像处理,对处理后的图像进行编码得到所述抓拍图像。
  6. 根据权利要求4所述的方法,其特征在于,所述电子设备将所述m帧第一图像作为输入,运行预设RAW域图像处理算法,得到第二图像,包括:
    所述电子设备将Bayer格式的所述m帧第一图像作为输入,运行所述预设RAW域图像处理算法,得到RGB格式的第二图像;其中,所述预设RAW域图像处理算法中集成了所述RAW域的图像处理功能,用于在所述ISP对图像进行RGB域和YUV域的图像处理前提升图像的画质;
    所述电子设备采用所述ISP处理所述第二图像,对处理后的图像进行编码得到所述抓拍图像,包括:
    所述电子设备采用所述ISP依次对所述第二图像进行所述RGB域的图像处理和所述YUV域的图像处理,对处理后的图像进行编码得到所述抓拍图像。
  7. 根据权利要求6所述的方法,其特征在于,所述预设RAW域图像处理算法中还集成了所述RGB域或者所述YUV域中至少一项的部分图像处理功能,用于在所述ISP进行RGB域的图像处理前提升图像的画质。
  8. 根据权利要求4所述的方法,其特征在于,所述电子设备将所述m帧第一图像作为输入,运行预设RAW域图像处理算法,得到第二图像,包括:
    所述电子设备将Bayer格式的所述m帧第一图像作为输入,运行所述预设RAW域图像处理算法,得到YUV格式的第二图像;其中,所述预设RAW域图像处理算法中集成了所述RAW域的图像处理功能和所述RGB域的图像处理功能,用于在所述ISP对图像进行YUV域的图像处理前提升图像的画质;
    所述电子设备采用所述ISP处理所述第二图像,对处理后的图像进行编码得到所述抓拍图像,包括:
    所述电子设备采用所述ISP对所述第二图像进行所述YUV域的图像处理,对处理后的图像进行编码得到所述抓拍图像。
  9. 根据权利要求8所述的方法,其特征在于,所述预设RAW域图像处理算法中还集成了所述YUV域的部分图像处理功能,用于在所述ISP进行YUV域的图像处理前提升图像的画质。
  10. 根据权利要求2-9中任一项所述的方法,其特征在于,所述方法还包括:
    所述电子设备通过时分复用的方式,采用所述ISP处理所述第一图像和所述第二图像。
  11. 一种录像中抓拍图像的方法,其特征在于,应用于电子设备,所述电子设备包括摄像头,所述方法包括:
    所述电子设备接收用户的第一操作;其中,所述第一操作用于触发所述电子设备开始录制视频;
    响应于所述第一操作,所述电子设备显示取景界面;其中,所述取景界面显示预览图像流,所述预览图像流包括n帧预览图像,所述预览图像是所述电子设备接收到所述第一操作后基于所述电子设备的摄像头采集的n帧第一图像得到的,所述取景界面还包括抓拍快门,所述抓拍快门用于触发所述电子设备进行抓拍,所述n帧第一图像存储于所述电子设备的第一缓存队列中,n≥1,n为整数;
    所述电子设备周期性对所述第一缓存队列中缓存的k帧第一图像进行图像处理,得到第二图像,k≥1,k为整数;
    所述电子设备接收用户的第二操作;
    所述电子设备响应于所述第二操作,结束录制视频;
    所述电子设备保存所述视频;其中,在所述摄像头采集n帧第一图像中的第q帧第一图像时,接收到用户对所述抓拍快门的第三操作;
    所述电子设备响应于所述第三操作,采用所述电子设备接收到所述第三操作时得到的最近一帧的第二图像对第四图像进行画质增强,得到抓拍图像;其中,所述第四图像是所述视频中时间信息与所述第q帧第一图像的时间信息相同的一帧图像。
  12. 根据权利要求11所述的方法,其特征在于,所述图像处理包括RAW域的图像处理和图像信号处理器ISP图像处理,所述RAW域的图像处理为在RAW颜色空间进行的图像处理,所述ISP图像处理为采用所述电子设备的图像信号处理器ISP进行的图像处理,所述第二图像的图像画质优于所述k帧第一图像的图像画质;或者,
    所述图像处理包括所述RAW域图像处理、所述ISP图像处理和编码处理,所述第二图像的图像画质优于所述k帧第一图像的图像画质。
  13. 根据权利要求11或12所述的方法,其特征在于,所述电子设备周期性对所述第一缓存队列中缓存的k帧第一图像进行图像处理,得到第二图像,包括:
    所述电子设备周期性将所述第一缓存队列中缓存的k帧第一图像作为输入,运行预设RAW域图像处理算法得到第三图像;其中,所述预设RAW域图像处理算法具备提升图像画质的功能;
    所述电子设备采用所述电子设备的ISP处理所述第三图像,得到所述第二图像。
  14. 根据权利要求11-13中任一项所述的方法,其特征在于,所述画质增强包括图像超分辨;
    其中,所述第二图像和抓拍图像的分辨率高于所述第q帧第一图像的分辨率。
  15. 一种电子设备,其特征在于,包括:触摸屏、存储器、摄像头、显示屏、一个或多个处理器;所述触摸屏、所述存储器、所述摄像头、所述显示屏与所述处理器耦合;其中,所述存储器中存储有计算机程序代码,所述计算机程序代码包括计算机指令,当所述计算机指令被所述处理器执行时,使得所述电子设备执行如权利要求1-14任一项所述的方法。
  16. 一种计算机可读存储介质,其特征在于,包括计算机指令,当所述计算机指令在电子设备上运行时,使得所述电子设备执行如权利要求1-14中任一项所述的方法。
  17. 一种计算机程序产品,其特征在于,当所述计算机程序产品在计算机上运行时,使得所述计算机执行如权利要求1-14中任一项所述的方法。
PCT/CN2022/113982 2021-09-07 2022-08-22 一种录像中抓拍图像的方法及电子设备 WO2023035920A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US18/015,583 US20240205533A1 (en) 2021-09-07 2022-08-22 Method for capturing image during video recording and electronic device
EP22826796.9A EP4171005A4 (en) 2021-09-07 2022-08-22 METHOD FOR IMAGE CAPTURE DURING FILMING AND ELECTRONIC DEVICE

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202111045872.6 2021-09-07
CN202111045872 2021-09-07
CN202210111700.2A CN115776532B (zh) 2021-09-07 2022-01-29 一种录像中抓拍图像的方法及电子设备
CN202210111700.2 2022-01-29

Publications (1)

Publication Number Publication Date
WO2023035920A1 true WO2023035920A1 (zh) 2023-03-16

Family

ID=85388186

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/113982 WO2023035920A1 (zh) 2021-09-07 2022-08-22 一种录像中抓拍图像的方法及电子设备

Country Status (4)

Country Link
US (1) US20240205533A1 (zh)
EP (1) EP4171005A4 (zh)
CN (1) CN115776532B (zh)
WO (1) WO2023035920A1 (zh)

Also Published As

Publication number Publication date
CN115776532A (zh) 2023-03-10
US20240205533A1 (en) 2024-06-20
CN115776532B (zh) 2023-10-20
EP4171005A1 (en) 2023-04-26
EP4171005A4 (en) 2024-01-31

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 18015583

Country of ref document: US

ENP Entry into the national phase

Ref document number: 2022826796

Country of ref document: EP

Effective date: 20221230

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22826796

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE