WO2024109203A1 - Photo processing method and electronic device (拍照处理方法和电子设备)

Info

Publication number: WO2024109203A1
Authority: WO (WIPO PCT)
Prior art keywords: image, data packet, electronic device, queue, data
Application number: PCT/CN2023/114091
Other languages: English (en), French (fr)
Inventor: 许集润
Original assignee: 荣耀终端有限公司 (Honor Device Co., Ltd.)
Application filed by 荣耀终端有限公司
Publication of WO2024109203A1

Classifications

    • H ELECTRICITY; H04 ELECTRIC COMMUNICATION TECHNIQUE; H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
        • H04N23/60 Control of cameras or camera modules
            • H04N23/62 Control of parameters via user interfaces
            • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
        • H04N23/95 Computational photography systems, e.g. light-field imaging systems
            • H04N23/951 Computational photography systems by using two or more images to influence resolution, frame rate or aspect ratio

Definitions

  • the present application relates to the field of terminals, and in particular, to a photo processing method and an electronic device.
  • the present application provides a photo processing method and an electronic device.
  • when the electronic device detects that a thumbnail image is clicked, the captured image can be displayed quickly to a certain extent, thereby improving the shooting experience.
  • a photo processing method which is applied to an electronic device, and the photo processing method includes:
  • a first operation is detected, where the first operation is an operation of instructing the electronic device to take a photo;
  • in response to the first operation, a first data packet is acquired and stored in an image data queue, the data packets stored in the image data queue being used to generate captured images, the first data packet being the data packet with the earliest acquisition time in the image data queue, and image processing being performed on the first data packet to generate a first captured image;
  • a second operation is detected, where the second operation includes N shooting operations, where a time interval between the N shooting operations is less than a preset time length, where the shooting operation is an operation of instructing the electronic device to capture an image, and N is an integer greater than or equal to 2;
  • in response to the second operation, N data packets are acquired, the N data packets corresponding one-to-one to the N shooting operations, and the N data packets are stored in the image data queue in order of acquisition time from early to late; after the first captured image is generated, a second data packet is acquired in the image data queue, the second data packet being the data packet with the latest acquisition time in the image data queue, and the image processing is performed on the second data packet to generate a second captured image; a third operation is then detected, where the third operation is an operation of clicking a thumbnail image of the second captured image, and in response to the third operation, the second captured image is displayed.
  • in the embodiment of the present application, the electronic device can generate captured images by performing image processing on the data packets in the image data queue in order of acquisition time from late to early (that is, packets acquired earlier are processed later), thereby shortening, to a certain extent, the time between the electronic device detecting the operation of clicking the thumbnail image and displaying the captured image. It can be understood that, after processing the data packet of the first operation among multiple consecutive photographing operations, the electronic device can select the second data packet with the latest acquisition time in the stored image data queue for image processing to generate the second captured image; this ensures that the electronic device can quickly process the captured images with a later shooting time, so that the captured image can be displayed quickly after the operation of clicking the thumbnail image is detected. To a certain extent, this shortens the time the user waits for the captured image and improves the user's shooting experience.
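  • as a non-authoritative illustration of this processing order, the short Python sketch below orders a set of data packets so that the earliest-collected packet is processed first and the remaining packets are processed from latest to earliest; the `DataPacket` model and the `processing_order` helper are hypothetical names used only for this illustration, not part of the patent.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DataPacket:
    """Hypothetical model of one shot's data packet in the image data queue."""
    packet_id: int
    acquisition_time: float  # time (seconds) at which the corresponding shot was taken

def processing_order(queue: List[DataPacket]) -> List[DataPacket]:
    """Return the processing order described above: the earliest-collected
    packet first, then the remaining packets from latest to earliest."""
    ordered = sorted(queue, key=lambda p: p.acquisition_time)
    first, rest = ordered[0], ordered[1:]
    return [first] + sorted(rest, key=lambda p: p.acquisition_time, reverse=True)

# Example: four consecutive shots arriving 0.5 s apart.
shots = [DataPacket(packet_id=i, acquisition_time=0.5 * i) for i in range(1, 5)]
print([p.packet_id for p in processing_order(shots)])  # [1, 4, 3, 2]
```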
  • the N data packets include the second data packet and N-1 data packets, and after generating the second captured image, the method further includes:
  • acquiring the N-1 data packets in the image data queue in order of acquisition time from late to early; and
  • performing the image processing on the N-1 data packets in sequence to generate N-1 captured images.
  • in other words, image processing can be performed on the other data packets in the image data queue in order of data packet acquisition time from late to early to generate the corresponding captured images.
  • the first data packet includes a first end frame, and the first end frame is used to indicate an end position of the first data packet in the image data queue; after generating the first captured image, acquiring a second data packet in the image data queue includes:
  • the second data packet is obtained at a first moment; wherein the first moment is a moment of processing the first end frame.
  • since the image data queue is a data queue that is updated in real time based on photographing operations, in order to ensure that the selected second data packet is the data packet of the latest photographing operation in the image data queue, the second data packet can be selected from the image data queue, in order of data packet acquisition time from late to early, at the moment the first end frame in the first data packet is processed; this improves the accuracy of the selected second data packet.
  • each of the N data packets includes a start frame and an end frame, the start frame is used to indicate a start position of a data packet in the image data queue, and the end frame is used to indicate an end position of a data packet in the image data queue; and acquiring the second data packet at the first moment includes:
  • determining position information of a target start frame in the image data queue, where the target start frame is the start frame with the latest acquisition time in the image data queue; and
  • acquiring the second data packet based on the position information of the target start frame.
  • since each of the N data packets includes a start frame, when selecting the second data packet in the image data queue in order of data packet acquisition time from late to early, the target start frame in the image data queue (for example, the start frame with the latest acquisition time) can first be determined based on the time information, the target start frame being the start frame of the second data packet; the second data packet in the image data queue is then selected based on the target start frame. In the embodiment of the present application, since the second data packet in the image data queue is selected by the position of the start frame of the second data packet, the computational workload of the electronic device can be reduced to a certain extent.
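  • the selection of the second data packet by its start frame can be illustrated with a short Python sketch; the `Frame` record and the `select_second_packet` helper below are hypothetical names, and the sketch assumes every queue entry is tagged as a start frame, an image frame, or an end frame, which is a simplification of the queue described above.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Frame:
    """Hypothetical record for one entry of the image data queue."""
    kind: str          # "start", "image", or "end"
    packet_id: int     # identifier of the data packet this frame belongs to
    timestamp: float   # acquisition time of the frame

def select_second_packet(queue: List[Frame]) -> Optional[List[Frame]]:
    """Invoked at the moment the current packet's end frame is processed:
    locate the start frame with the latest acquisition time (the target start
    frame) and return every frame of the packet that it opens."""
    start_frames = [f for f in queue if f.kind == "start"]
    if not start_frames:
        return None
    target_start = max(start_frames, key=lambda f: f.timestamp)
    return [f for f in queue if f.packet_id == target_start.packet_id]

# Example: two packets in the queue; the packet opened by the latest start frame is chosen.
queue = [Frame("start", 2, 1.00), Frame("image", 2, 1.01), Frame("end", 2, 1.02),
         Frame("start", 3, 1.50), Frame("image", 3, 1.51), Frame("end", 3, 1.52)]
print([f.kind for f in select_second_packet(queue)])  # ['start', 'image', 'end'] of packet 3
```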
  • the first data packet includes a first start frame, and the first start frame is used to identify the starting position of the first data packet in the image data queue; M frames of image data are included between the first start frame and the first end frame, and M is an integer greater than or equal to 1.
  • the M frames of image data are image data in a first color space;
  • the image processing includes processing using a first algorithm and a second algorithm, the first algorithm is an algorithm for the first color space, and the second algorithm is an algorithm for converting an image in the first color space into an image in a second color space.
  • the N shooting operations are continuous shooting operations.
  • the N+1 data packets in the image data queue are data packets obtained in the zero-second delay queue based on the time information of the first operation and the time information of the second operation.
  • the electronic device may obtain a corresponding data packet in a zero-second delay queue based on time information of detecting the first operation and time information of the second operation, and store the data packet in an image data queue.
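  • purely as an illustration of picking frames from a zero-second delay queue by time information, the following Python sketch (with the hypothetical name `frames_for_shot`) selects the most recent Raw frames captured at or before the moment a shooting operation is detected; a real implementation would also consider exposure metadata and other factors.

```python
from typing import List, Tuple

def frames_for_shot(zsl_queue: List[Tuple[float, bytes]],
                    shutter_time: float,
                    frames_per_shot: int = 3) -> List[Tuple[float, bytes]]:
    """Illustrative only: pick, from a ZSL queue of (timestamp, raw_frame) pairs,
    the most recent frames captured at or before the shooting operation."""
    candidates = [item for item in zsl_queue if item[0] <= shutter_time]
    return sorted(candidates, key=lambda item: item[0])[-frames_per_shot:]

# Example: preview frames every 33 ms, shooting operation detected at t = 0.20 s.
zsl = [(0.033 * i, b"raw") for i in range(10)]
selected = frames_for_shot(zsl, shutter_time=0.20)
print([round(t, 3) for t, _ in selected])  # the three most recent frames not later than 0.20 s
```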
  • an electronic device comprising one or more processors and a memory; the memory is coupled to the one or more processors, the memory is used to store computer program code, the computer program code comprises computer instructions, and the one or more processors call the computer instructions to enable the electronic device to execute:
  • a first operation is detected, where the first operation is an operation of instructing the electronic device to take a photo;
  • in response to the first operation, a first data packet is acquired and stored in an image data queue, the data packets stored in the image data queue being used to generate captured images, the first data packet being the data packet with the earliest acquisition time in the image data queue, and image processing being performed on the first data packet to generate a first captured image;
  • a second operation is detected, where the second operation includes N shooting operations, where a time interval between the N shooting operations is less than a preset time length, where the shooting operation is an operation of instructing the electronic device to capture an image, and N is an integer greater than or equal to 2;
  • a third operation is detected, where the third operation is an operation of clicking a thumbnail image of the second captured image; and in response to the third operation,
  • the second photographed image is displayed.
  • the N data packets include the second data packet and N-1 data packets, and after generating the second captured image, the one or more processors call the computer instructions to cause the electronic device to execute:
  • the N-1 data packets in the image data queue are acquired in order of acquisition time from late to early, and the image processing is performed on the N-1 data packets in sequence to generate N-1 captured images.
  • the first data packet includes a first end frame, and the first end frame is used to indicate an end position of the first data packet in the image data queue; and the one or more processors call the computer instructions to enable the electronic device to execute:
  • the second data packet is obtained at a first moment; wherein the first moment is a moment of processing the first end frame.
  • each of the N data packets includes a start frame and an end frame, the start frame is used to indicate a start position of a data packet in the image data queue, and the end frame is used to indicate an end position of a data packet in the image data queue; the one or more processors call the computer instructions to cause the electronic device to execute:
  • determining position information of a target start frame in the image data queue, where the target start frame is the start frame with the latest acquisition time in the image data queue; and
  • acquiring the second data packet based on the position information of the target start frame.
  • the first data packet includes a first start frame, and the first start frame is used to identify the starting position of the first data packet in the image data queue; M frames of image data are included between the first start frame and the first end frame, and M is an integer greater than or equal to 1.
  • the M frames of image data are image data in a first color space;
  • the image processing includes processing using a first algorithm and a second algorithm, the first algorithm is an algorithm for the first color space, and the second algorithm is an algorithm for converting an image in the first color space into an image in a second color space.
  • the N shooting operations are continuous photo shooting operations.
  • the N+1 data packets in the image data queue are data packets obtained in the zero-second delay queue based on the time information of the first operation and the time information of the second operation.
  • an electronic device comprising a module/unit for executing the photo processing method in the first aspect or any one of the implementations of the first aspect.
  • an electronic device comprising one or more processors and a memory; the memory is coupled to the one or more processors, the memory is used to store computer program code, the computer program code comprises computer instructions, and the one or more processors call the computer instructions to enable the electronic device to execute the photo processing method in the first aspect or any one of the implementations of the first aspect.
  • a chip system is provided, which is applied to an electronic device, and the chip system includes one or more processors, and the processor is used to call computer instructions so that the electronic device executes the photo processing method in the first aspect or any one of the implementations of the first aspect.
  • a computer-readable storage medium stores a computer program code.
  • when the computer program code is executed by an electronic device, the electronic device executes the photo processing method in the first aspect or any one of the implementations of the first aspect.
  • a computer program product comprising: a computer program code, which, when executed by an electronic device, enables the electronic device to execute the photo processing method in the first aspect or any one of the implementations of the first aspect.
  • in the embodiment of the present application, the electronic device may generate captured images by performing image processing on the data packets in the image data queue in order of acquisition time from late to early (that is, packets acquired earlier are processed later), thereby shortening, to a certain extent, the time between the electronic device detecting the operation of clicking the thumbnail image and displaying the captured image. It can be understood that, after the electronic device processes the data packet of the first operation among multiple consecutive photographing operations, it can select the second data packet with the latest acquisition time in the stored image data queue for image processing to generate the second captured image; this ensures that the electronic device can quickly process the captured images with a later shooting time, so that the captured image can be displayed quickly after the operation of clicking the thumbnail image is detected. To a certain extent, this shortens the time the user waits for the captured image and improves the user's shooting experience.
  • FIG. 1 is a schematic diagram of a hardware system of an electronic device applicable to the present application.
  • FIG. 2 is a schematic diagram of an existing image processing sequence.
  • FIG. 3 is a schematic diagram of a graphical user interface provided in an embodiment of the present application.
  • FIG. 4 is a schematic diagram of another graphical user interface provided in an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a software architecture provided in an embodiment of the present application.
  • FIG. 6 is a schematic flow chart of a photo processing method provided in an embodiment of the present application.
  • FIG. 7 is a schematic flow chart of another photo processing method provided in an embodiment of the present application.
  • FIG. 8 is a schematic diagram of data packets for a continuous photographing operation provided in an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a post-processing queue provided in an embodiment of the present application.
  • FIG. 10 is a schematic diagram of another photo processing method provided in an embodiment of the present application.
  • FIG. 11 is a schematic diagram of a sequence for generating captured images provided in an embodiment of the present application.
  • FIG. 12 is a schematic diagram of storing data packets in a post-processing queue provided in an embodiment of the present application.
  • FIG. 13 is a schematic diagram of another sequence for generating captured images provided in an embodiment of the present application.
  • FIG. 14 is a schematic diagram of storing data packets in another post-processing queue provided in an embodiment of the present application.
  • FIG. 15 is a schematic diagram of another sequence for generating captured images provided in an embodiment of the present application.
  • FIG. 16 is a schematic diagram of storing data packets in another post-processing queue provided in an embodiment of the present application.
  • FIG. 17 is a schematic diagram of another graphical user interface provided in an embodiment of the present application.
  • FIG. 18 is a schematic diagram of another graphical user interface provided in an embodiment of the present application.
  • FIG. 19 is a schematic diagram of another graphical user interface provided in an embodiment of the present application.
  • FIG. 20 is a schematic diagram of another graphical user interface provided in an embodiment of the present application.
  • FIG. 21 is a schematic diagram of the structure of an electronic device provided in an embodiment of the present application.
  • FIG. 22 is a schematic diagram of the structure of another electronic device provided in an embodiment of the present application.
  • first, second, etc. are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features.
  • a feature defined as “first” or “second” may explicitly or implicitly include one or more of the features.
  • “multiple” means two or more.
  • a thumbnail image refers to an image with a smaller resolution cached in an electronic device.
  • the image quality of a thumbnail image is poorer than that of a captured image.
  • for example, the resolution of the thumbnail image is smaller than that of the captured image.
  • the thumbnail image in the shooting interface can be indexed to the actual captured image in the album.
  • the captured image may refer to the real image generated when the electronic device detects a photographing operation by the user; it may be understood as the image that the electronic device generates and stores in the gallery application after detecting a photographing operation.
  • the post-processing queue is used to store data packets for generating captured images; the image data stored in the post-processing queue may be image data obtained from a zero-second delay (zero shutter lag, ZSL) queue; wherein, before shooting, the electronic device usually displays images of the scene to be shot, and these displayed images are also called preview images.
  • the images in the ZSL queue may be Raw images.
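  • a minimal sketch of how a zero-second delay queue can be modeled as a bounded buffer of recent preview frames is given below; the 8-frame capacity and the helper name `on_preview_frame` are illustrative assumptions, not details taken from the patent.

```python
from collections import deque

# A ZSL queue modeled as a bounded buffer that keeps only the most recent preview
# frames, so a frame captured just before the shutter press is already available
# when the photographing operation is detected. Capacity 8 is arbitrary here.
zsl_queue: deque = deque(maxlen=8)

def on_preview_frame(timestamp: float, raw_frame: bytes) -> None:
    """Called for every preview frame; old frames fall out automatically."""
    zsl_queue.append((timestamp, raw_frame))

for i in range(20):                      # simulate 20 preview frames, 33 ms apart
    on_preview_frame(0.033 * i, b"raw")
print(len(zsl_queue), round(zsl_queue[0][0], 3))  # 8 frames retained, oldest at about 0.396 s
```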
  • FIG. 1 shows a hardware system of an electronic device suitable for the present application.
  • the electronic device 100 can be a mobile phone, a smart screen, a tablet computer, a wearable electronic device, an in-vehicle electronic device, an augmented reality (AR) device, a virtual reality (VR) device, a laptop computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), a projector, etc.
  • the embodiment of the present application does not impose any restrictions on the specific type of the electronic device 100.
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, and a subscriber identification module (SIM) card interface 195, etc.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the structure shown in FIG1 does not constitute a specific limitation on the electronic device 100.
  • the electronic device 100 may include more or fewer components than those shown in FIG1, or the electronic device 100 may include a combination of some of the components shown in FIG1, or the electronic device 100 may include sub-components of some of the components shown in FIG1.
  • the components shown in FIG1 may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • the processor 110 may include at least one of the following processing units: an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and a neural-network processing unit (NPU).
  • different processing units may be independent devices or integrated devices.
  • the controller may generate an operation control signal according to the instruction opcode and the timing signal to complete the control of fetching and executing instructions.
  • the processor 110 may also be provided with a memory for storing instructions and data.
  • the memory in the processor 110 is a cache memory.
  • the memory may store instructions or data that the processor 110 has just used or cyclically used. If the processor 110 needs to use the instruction or data again, it may be directly called from the memory. This avoids repeated access, reduces the waiting time of the processor 110, and thus improves the efficiency of the system.
  • the processor 110 may include one or more interfaces.
  • the processor 110 may include at least one of the following interfaces: an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a SIM interface, and a USB interface.
  • the processor 110 may be used to execute the photographing processing method provided by the embodiment of the present application; for example, a first operation is detected, the first operation being an operation of instructing the electronic device to take a photo; in response to the first operation, a first data packet is obtained; the first data packet is stored in an image data queue, the data packets stored in the image data queue are used to generate captured images, and the first data packet is the data packet with the earliest acquisition time in the image data queue; image processing is performed on the first data packet to generate a first captured image; a second operation is detected, the second operation includes N photographing operations, the time interval of the N photographing operations is less than a preset time length, the photographing operation is an operation of instructing the electronic device to capture an image, and N is an integer greater than or equal to 2; in response to the second operation, N data packets are obtained, and the N data packets correspond to the N photographing operations one by one; the N data packets are stored in the image data queue in order of acquisition time from early to late; after the first captured image is generated, a second data packet with the latest acquisition time in the image data queue is acquired and the image processing is performed on it to generate a second captured image; a third operation of clicking a thumbnail image of the second captured image is detected; and in response to the third operation, the second captured image is displayed.
  • connection relationship between the modules shown in Fig. 1 is only a schematic illustration and does not constitute a limitation on the connection relationship between the modules of the electronic device 100.
  • the modules of the electronic device 100 may also adopt a combination of multiple connection modes in the above embodiments.
  • the wireless communication function of the electronic device 100 can be implemented through components such as the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, and the baseband processor.
  • Antenna 1 and antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve the utilization of antennas.
  • antenna 1 can be reused as a diversity antenna for a wireless local area network.
  • the antenna can be used in combination with a tuning switch.
  • the electronic device 100 can realize the display function through the GPU, the display screen 194 and the application processor.
  • the GPU is a microprocessor for image processing, which connects the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the processor 110 may include one or more GPUs, which execute program instructions to generate or change display information.
  • the display screen 194 can be used to display images or videos.
  • the display screen 194 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a mini light-emitting diode (Mini LED), a micro light-emitting diode (Micro LED), Micro OLED or quantum dot light emitting diodes (QLED).
  • the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • the electronic device 100 can implement a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, and an application processor.
  • the ISP is used to process the data fed back by the camera 193.
  • when a photo is taken, the shutter is opened, light is transmitted through the lens to the photosensitive element of the camera, and the optical signal is converted into an electrical signal.
  • the photosensitive element of the camera transmits the electrical signal to the ISP for processing, and the ISP converts it into an image visible to the naked eye.
  • the ISP can perform algorithmic optimization on the noise, brightness and color of the image, and the ISP can also optimize the exposure and color temperature of the shooting scene and other parameters.
  • the ISP can be set in the camera 193.
  • the camera 193 (also referred to as a lens) is used to capture static images or videos. It can be triggered to start by an application program instruction to realize the photo function, such as taking an image of any scene.
  • the camera may include components such as an imaging lens, a filter, and an image sensor. The light emitted or reflected by the object enters the imaging lens, passes through the filter, and finally converges on the image sensor.
  • the imaging lens is mainly used to converge the light emitted or reflected by all objects in the photographing angle (also referred to as the scene to be photographed, the target scene, or the scene image that the user expects to shoot) to form an image;
  • the filter is mainly used to filter out the redundant light waves in the light (for example, light waves other than visible light, such as infrared);
  • the image sensor may be a charge coupled device (CCD) or a complementary metal oxide semiconductor (CMOS) phototransistor.
  • the image sensor is mainly used to perform photoelectric conversion on the received light signal, convert it into an electrical signal, and then transmit the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • the DSP converts the digital image signal into an image signal in a standard RGB, YUV, etc. format.
  • the digital signal processor is used to process digital signals, and can process not only digital image signals but also other digital signals.
  • for example, the digital signal processor can be used to perform a Fourier transform on frequency point energy.
  • a video codec is used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs.
  • the electronic device 100 may play or record videos in a variety of coding formats, such as Moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, and MPEG4.
  • the gyro sensor 180B can be used to determine the motion posture of the electronic device 100.
  • in some embodiments, the angular velocity of the electronic device 100 around three axes (i.e., the x-axis, the y-axis, and the z-axis) can be determined by the gyro sensor 180B.
  • the gyro sensor 180B can be used for anti-shake shooting. For example, when the shutter is pressed, the gyro sensor 180B detects the angle of the electronic device 100 shaking, calculates the distance that the lens module needs to compensate based on the angle, and allows the lens to offset the shaking of the electronic device 100 through reverse movement to achieve anti-shake.
  • the gyro sensor 180B can also be used in scenarios such as navigation and somatosensory games.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device 100 in various directions (generally the x-axis, y-axis, and z-axis). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. The acceleration sensor 180E can also be used to identify the posture of the electronic device 100 as an input parameter for applications such as horizontal and vertical screen switching and pedometers.
  • the distance sensor 180F is used to measure the distance.
  • the electronic device 100 can measure the distance by infrared or laser. In some embodiments, for example, in a shooting scene, the electronic device 100 can use the distance sensor 180F to measure the distance to achieve fast focusing.
  • the ambient light sensor 180L is used to sense the ambient light brightness.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to implement functions such as unlocking, accessing application locks, taking photos, and answering calls.
  • the touch sensor 180K is also referred to as a touch control device.
  • the touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also referred to as a touch control screen.
  • the touch sensor 180K is used to detect a touch operation acting on or near it.
  • the touch sensor 180K may pass the detected touch operation to the application processor to determine the type of touch event.
  • a visual output related to the touch operation may be provided through the display screen 194.
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100, and may be disposed at a different position from the display screen 194.
  • currently, electronic devices usually perform image processing on the data packets collected by the image sensor in a first-collected, first-processed manner; as a result, the image with the latest shooting time is usually processed last. However, after finishing the photo-taking operations, the user usually clicks on the thumbnail to view the actual captured image, and that thumbnail is usually the one with the latest shooting time. Since the image with the latest shooting time is processed last, it takes a while before it can be displayed in the gallery application; therefore, after the user clicks on the thumbnail image, the image with the latest shooting time cannot be displayed quickly in the gallery application, which results in a long waiting time for the user and a poor shooting experience.
  • for example, an electronic device detects three quick and continuous shooting operations.
  • the order in which the electronic device detects the shooting operations is: the first shooting operation, the second shooting operation, and the third shooting operation, where the first shooting operation corresponds to the electronic device collecting data packet 1, the second shooting operation corresponds to the electronic device collecting data packet 2, and the third shooting operation corresponds to the electronic device collecting data packet 3. Since the electronic device processes the data packets in a first-collected, first-processed manner,
  • the order in which the electronic device processes the data packets is: data packet 1, data packet 2 and data packet 3.
  • the order in which the captured images are displayed is in reverse order of time, which can be understood as the order in which the electronic device displays the captured images is: the third captured image, the second captured image and the first captured image. Since one shooting can capture one or more frames of images, a captured image of one shooting is generated by processing one or more frames of images. After the electronic device detects the third shooting operation, the electronic device may still be processing the first captured image or the second captured image. At this time, the electronic device cannot quickly display the captured image of the third shooting after the user clicks on the thumbnail image of the third shooting. This results in a long waiting time for the user and a poor shooting experience.
  • for example, suppose that it takes 0.1 seconds for an electronic device to collect a data packet and 3 seconds to process a data packet, that is, 3 seconds to generate a captured image; if the electronic device detects three consecutive photo operations in quick succession, it will need about 9 seconds to generate the third captured image; therefore, after the user completes the photo operations and clicks on the thumbnail of the third captured image, it will take some time for the electronic device to display the third captured image; this results in a long waiting time for the user and a poor shooting experience.
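  • the arithmetic in the example above can be restated in a few lines of Python; the figures (0.1 s to collect, 3 s to process, three shots) come from the text, and the script only reproduces the roughly 9-second result for the first-collected, first-processed order.

```python
# Values from the example above: collecting a data packet takes about 0.1 s
# (small enough to be rounded away) and processing one takes 3 s.
PROCESS_S, SHOTS = 3.0, 3

# With first-collected, first-processed ordering, the captured image of the
# third (latest) shot is only ready after all three packets are processed.
fifo_ready = SHOTS * PROCESS_S
print(f"FIFO order: third captured image ready after about {fifo_ready:.0f} s")  # ~9 s
```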
  • a preview interface 201 is displayed, as shown in (a) of FIG. 3.
  • the preview interface 201 includes a preview image, a photo control 202 and a thumbnail display control 203, wherein the image displayed in the thumbnail display control 203 is a thumbnail image of the last photo;
  • the electronic device detects multiple rapid and continuous shooting operations; for example, the electronic device detects the operation of clicking the photo control 202, as shown in (b) of FIG. 3; after detecting the operation of clicking the photo control 202, the electronic device detects the operation of popping up the photo control 202 and displays the display interface 204, as shown in (c) of FIG. 3; after detecting the operation of popping up the photo control 202, the electronic device detects the operation of clicking the photo control 202 again, as shown in (d) of FIG. 3; and after detecting this operation of clicking the photo control 202, the electronic device detects the operation of popping up the photo control 202 and displays the display interface 205.
  • the image of the second photo displayed at this time is an image obtained by scaling the thumbnail image of the second photo, and is not a real image of the second photo.
  • the image displayed in the display interface 206 includes an image area 207, and the detail information in the image area 207 is poor.
  • after a certain period of time, the electronic device generates a captured image of the second photo and displays a display interface 208; the display interface 208 includes the captured image of the second photo, that is, the real image of the second photo.
  • the real image of the second photo includes an image area 209.
  • the detail information in the image area 209 is better than the detail information in the image area 207, as shown in (d) in FIG. 4 .
  • after completing the photo-taking operations, the user usually clicks on the thumbnail display control 203 to view the captured image, and hopes that the captured image can be displayed quickly in the gallery application; if the electronic device processes the images according to the current first-collected, first-processed method (for example, in order of collection time from early to late), the electronic device will not be able to quickly display the captured image after the user clicks the thumbnail display control 203, resulting in a long waiting time for the user and a poor shooting experience.
  • an electronic device detecting one operation of clicking a photo control and one operation of popping up the photo control can be regarded as detecting one photo operation; multiple operations of rapid continuous shooting can refer to the electronic device detecting multiple photo operations in a relatively short period of time; for example, the electronic device may detect three photo operations in a relatively short period of time, and the three photo operations include the first operation of clicking the photo control and the first operation of popping up the photo control; the second operation of clicking the photo control and the second operation of popping up the photo control; and the third operation of clicking the photo control and the third operation of popping up the photo control.
  • an embodiment of the present application provides a photographing processing method and an electronic device. In the embodiment of the present application, the electronic device generates photographed images by performing image processing on the collected image data packets in order of collection time from late to early; it can be understood that, after the electronic device processes the data packet of the first photographing operation in the continuous photographing operations, it can select the first data packet in the reverse direction of the stored image data queue for image processing, that is, obtain the data packet with the latest shooting time for image processing. This shortens, to a certain extent, the time between the electronic device detecting the operation of clicking the thumbnail image and displaying the photographed image, ensures that the electronic device can quickly process the photographed images with a later shooting time, and allows the photographed image to be displayed quickly after the operation of clicking the thumbnail image is detected; to a certain extent, it shortens the time the user waits for the photographed image and improves the user's shooting experience.
  • FIG. 5 is a schematic diagram of a software system of an electronic device provided in an embodiment of the present application.
  • the system architecture may include an application layer 210 , an application framework layer 220 , a hardware abstraction layer 230 , a driver layer 240 , and a hardware layer 250 .
  • the application layer 210 may include a gallery application.
  • the application layer 210 may also include camera application, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message and other applications.
  • the application framework layer 220 provides an application programming interface (API) and a programming framework for applications in the application layer; the application framework layer may include some predefined functions.
  • the application framework layer 220 includes a window manager, a content provider, a resource manager, a notification manager, and a view system.
  • the window manager is used to manage window programs; the window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, and capture the screen.
  • Content providers are used to store and retrieve data and make it accessible to applications.
  • the data can include video, images, audio, calls made and received, browsing history and bookmarks, and phone books.
  • the resource manager provides various resources for applications, such as localized strings, icons, images, layout files, and video files.
  • the notification manager enables applications to display notification information in the status bar. It can be used to convey notification-type messages and can disappear automatically after a short stay without user interaction. For example, the notification manager is used for download completion notifications and message reminders.
  • the notification manager can also manage notifications that appear in the system's top status bar in the form of icons or scrolling bar text, such as notifications from applications running in the background.
  • the notification manager can also manage notifications that appear on the screen in the form of dialog windows, such as prompting text messages in the status bar, emitting reminder sounds, vibrating electronic devices, and flashing indicator lights.
  • the view system includes visual controls, such as controls for displaying text and controls for displaying images.
  • the view system can be used to build applications.
  • a display interface can be composed of one or more views.
  • a display interface including a text notification icon can include a view for displaying text and a view for displaying images.
  • the hardware abstraction layer 230 is used to abstract the hardware.
  • the hardware abstraction layer 230 includes a post-processing queue, a frame selection module and an image processing module; the post-processing queue is used to store data packets for generating captured images; the frame selection module is used to execute the photo processing method provided in the embodiment of the present application, for example, to select data packets from the post-processing queue and transmit them to the image processing module; and the image processing module is used to perform image processing on the acquired data packets to generate captured images.
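  • to make the division of labor between the post-processing queue, the frame selection module and the image processing module easier to follow, the sketch below models them as three small Python classes; the class names mirror the module names above, but the code is an illustrative assumption rather than the actual hardware abstraction layer implementation.

```python
from collections import deque
from typing import Deque, List, Optional

class PostProcessingQueue:
    """Holds data packets (here: lists of Raw frames) waiting to become captured images."""
    def __init__(self) -> None:
        self._packets: Deque[List[bytes]] = deque()

    def push(self, packet: List[bytes]) -> None:
        self._packets.append(packet)

    def pop_earliest(self) -> Optional[List[bytes]]:
        return self._packets.popleft() if self._packets else None

    def pop_latest(self) -> Optional[List[bytes]]:
        return self._packets.pop() if self._packets else None

class FrameSelectionModule:
    """Chooses which packet to hand to image processing next:
    the earliest packet first, then always the most recently stored one."""
    def __init__(self, queue: PostProcessingQueue) -> None:
        self._queue = queue
        self._first_done = False

    def next_packet(self) -> Optional[List[bytes]]:
        if not self._first_done:
            self._first_done = True
            return self._queue.pop_earliest()
        return self._queue.pop_latest()

class ImageProcessingModule:
    """Stand-in for the Raw-domain and color-conversion algorithms."""
    def process(self, packet: List[bytes]) -> bytes:
        return b"jpeg-of-" + packet[0]   # placeholder for the real pipeline

# Usage: three packets stored in collection order are processed as 1, 3, 2.
queue = PostProcessingQueue()
for packet in ([b"shot1"], [b"shot2"], [b"shot3"]):
    queue.push(packet)
selector, processor, results = FrameSelectionModule(queue), ImageProcessingModule(), []
while (pkt := selector.next_packet()) is not None:
    results.append(processor.process(pkt))
print(results)  # [b'jpeg-of-shot1', b'jpeg-of-shot3', b'jpeg-of-shot2']
```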
  • the driver layer 240 is used to provide drivers for different hardware devices.
  • the driver layer may include a display screen driver and a camera driver.
  • the hardware layer 250 is located at the bottom layer of the software system.
  • the hardware layer 250 may include a display screen and a camera module; wherein the display screen is used to display video; and the camera module is used to capture images.
  • Fig. 6 is a schematic flow chart of a photographing processing method provided by an embodiment of the present application.
  • the method 300 may be executed by the electronic device shown in Fig. 1; the method 300 includes S301 to S311, and S301 to S311 are described in detail below.
  • image data queue shown in FIG. 6 may refer to the post-processing queue shown in FIG. 5 or FIG. 7 .
  • the first operation is an operation of instructing the electronic device to take a photo.
  • for example, if the electronic device does not detect a photographing operation within a first preset time period, the photographing operation detected by the electronic device after the first preset time period may be referred to as the first operation.
  • the first operation may include an operation of clicking a photo control and an operation of popping up a photo control.
  • the first operation may be the first photographing operation shown in subsequent FIG. 7 , which will not be described in detail here.
  • the first data packet is the data packet with the earliest acquisition time in the image data queue.
  • for example, the electronic device detects that the user pops up the photo control;
  • in response to the operation of popping up the photo control, the electronic device is triggered to collect the first data packet.
  • the first data packet may include M image frames; after the electronic device detects that the user pops up the photo control, in response to the pop-up photo control operation, the image sensor in the electronic device is triggered to capture M image frames.
  • a first data packet may be obtained in the image data queue according to time information of the first operation; the first data packet may be a data packet corresponding to the first operation, and the first operation is the earliest photographing operation among multiple consecutive photographing operations.
  • the first data packet includes a first start frame, which is used to identify the starting position of the first data packet in the image data queue; M frames of image data are included between the first start frame and the first end frame, where M is an integer greater than or equal to 1.
  • the first data packet may be data packet 1, which may include a start frame, image frame 1, image frame 2, ..., image frame N and an end frame; wherein the start frame is used to identify the starting position of data packet 1, and the end frame is used to identify the ending position of data packet 1.
  • the first start frame may refer to the header of the first data packet, and the first end frame may refer to the tail of the first data packet.
  • the first data packet may include one image frame or the data packet may also include multiple image frames; the embodiment of the present application does not impose any limitation on the number of image frames in the first data packet.
  • the first data packet may be data packet 1, which may include a start frame, at least one image frame (for example, 3 image frames) and an end frame; wherein the start frame is used to identify the starting position of data packet 1 in the post-processing queue; and the end frame is used to identify the ending position of data packet 1 in the post-processing queue.
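  • the layout of data packet 1 described above (a start frame, M Raw image frames and an end frame) can be sketched as follows; `QueueEntry` and `build_packet` are hypothetical names used only for this illustration.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class QueueEntry:
    """Hypothetical entry in the post-processing queue."""
    tag: str              # "start", "image", or "end"
    payload: bytes = b""  # Raw image bytes for "image" entries; empty for the markers

def build_packet(raw_frames: List[bytes]) -> List[QueueEntry]:
    """Assemble a data packet as described above: a start frame marking its first
    position in the queue, M Raw image frames, and an end frame marking its last position."""
    return ([QueueEntry("start")]
            + [QueueEntry("image", frame) for frame in raw_frames]
            + [QueueEntry("end")])

# Example: a packet with M = 3 Raw image frames.
packet_1 = build_packet([b"raw-frame-1", b"raw-frame-2", b"raw-frame-3"])
print([entry.tag for entry in packet_1])  # ['start', 'image', 'image', 'image', 'end']
```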
  • S303 Store the first data packet in the image data queue.
  • the data packets stored in the image data queue are used to generate captured images, and the first data packet is the data packet with the earliest acquisition time in the image data queue.
  • the implementation method of S303 may refer to the related description of S403 in the subsequent FIG. 7, which will not be repeated here.
  • S304 Perform image processing on the first data packet to generate a first captured image.
  • the M frames of image data are image data in a first color space; the image processing includes processing using a first algorithm and a second algorithm, the first algorithm is an algorithm for the first color space, and the second algorithm is an algorithm for converting an image in the first color space into an image in the second color space.
  • the image frame in the first data packet may be original image data, that is, the image frame may be a Raw image;
  • the image processing includes a first algorithm and a second algorithm; wherein the first algorithm is an algorithm of a first color space; and the second algorithm is an algorithm for converting an image in the first color space into an image in the second color space.
  • the first algorithm is an algorithm of the Raw color space
  • the algorithm of the Raw color space may include but is not limited to: black level correction (Black Level Correction, BLC), lens shading correction (Lens Shading Correction, LSC) and other algorithms.
  • black level correction is used to correct the black level.
  • the black level refers to the video signal level at which no line of light is output on a calibrated display device.
  • black level correction is needed because, on the one hand, the dark current of the image sensor causes the pixels to output a voltage even in the absence of light, and on the other hand, the analog-to-digital conversion of the image sensor has insufficient accuracy.
  • lens shading correction is used to eliminate the inconsistency in color and brightness between the periphery of the image and the center of the image caused by the lens optical system.
  • the second algorithm includes an algorithm for converting a Raw image into an image in the YUV color space and an algorithm for converting an image in the YUV color space into other storage formats; wherein the other storage formats include the JPEG format (JPG format), the GIF format, the DNG format, the RAW format, etc.
  • the above is an example of an image processing algorithm; the above image processing process of the data packet can refer to any existing algorithm for generating captured images, and the present application does not impose any limitation on this.
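  • as a toy illustration of the two-stage processing described above (a first algorithm in the Raw domain followed by a second algorithm that converts to another color space), the following Python sketch applies a simplified black level subtraction and shading gain and then produces a luma plane; the function names and constants are assumptions, and real BLC, LSC, demosaicing and JPEG encoding are far more involved.

```python
from typing import Optional
import numpy as np

def first_algorithm(raw: np.ndarray, black_level: int = 64,
                    shading_gain: Optional[np.ndarray] = None) -> np.ndarray:
    """Raw-domain stage: a toy black level subtraction followed by a toy
    lens shading correction modeled as a per-pixel gain."""
    corrected = np.clip(raw.astype(np.float32) - black_level, 0.0, None)
    if shading_gain is not None:
        corrected = corrected * shading_gain
    return corrected

def second_algorithm(raw: np.ndarray) -> np.ndarray:
    """Conversion stage: stand-in for demosaicing the Raw data and converting it
    to a second color space such as YUV (the JPEG encoding step is omitted)."""
    normalized = raw / max(float(raw.max()), 1.0)   # fake demosaic to a grey RGB image
    rgb = np.stack([normalized] * 3, axis=-1)
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return y                                        # luma plane of a minimal "YUV" result

raw_frame = np.random.randint(64, 1024, size=(8, 8), dtype=np.uint16)
print(second_algorithm(first_algorithm(raw_frame)).shape)  # (8, 8)
```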
  • the second operation includes N shooting operations, the time interval between the N shooting operations is less than a preset duration, the shooting operation is an operation of instructing the electronic device to capture an image, and N is an integer greater than or equal to 2.
  • the N photographing operations may include a second photographing operation and a third photographing operation as shown in FIG. 7 ; the implementation method may refer to the relevant description shown in FIG. 7 , which will not be repeated here.
  • the N data packets correspond one-to-one to the N shooting operations.
  • the N data packets may include data packet 2 and data packet 3 as shown in FIG. 7 ; the implementation method may refer to the relevant description shown in FIG. 7 , which will not be repeated here.
  • data packet 1 may be the first data packet; the N data packets may include data packets 2 and 3; the implementation method of storing N data packets in order from early to late based on the collection time can be referred to the relevant description of FIG9 , which will not be repeated here.
  • the second data packet is the data packet with the latest acquisition time in the image data queue.
  • the first data packet includes a first end frame, and the first end frame is used to indicate an end position of the first data packet in the image data queue; after generating the first captured image, acquiring a second data packet in the image data queue includes:
  • a second data packet is acquired at a first moment; wherein the first moment is a moment of processing a first end frame.
  • since the image data queue is a data queue that is updated in real time based on photographing operations, in order to ensure that the selected second data packet is the data packet of the latest photographing operation in the image data queue, the second data packet can be selected from the image data queue, in order of data packet acquisition time from late to early, at the moment the first end frame in the first data packet is processed; this improves the accuracy of the selected second data packet.
  • each of the N data packets includes a start frame and an end frame, where the start frame is used to indicate the start position of a data packet in the image data queue, and the end frame is used to indicate the end position of a data packet in the image data queue; acquiring the second data packet at the first moment includes:
  • determining position information of a target start frame in the image data queue, where the target start frame is the start frame with the latest acquisition time in the image data queue; and
  • acquiring the second data packet based on the position information of the target start frame.
  • each of the N data packets includes a start frame; therefore, when selecting the second data packet in the image data queue in order from late to early according to the data packet acquisition time, the target start frame in the image data queue (for example, the latest start frame at the acquisition time) can be first determined in order from late to early based on the time information, and the target start frame is the start frame of the second data packet; thereby, the second data packet in the image data queue is selected based on the target start frame; in an embodiment of the present application, since the second data packet in the image data queue is selected by the position of the start frame of the second data packet, the amount of computation of the electronic device can be reduced to a certain extent.
  • the implementation method for obtaining the second data packet may refer to the related description of the subsequent FIG. 10 , which will not be repeated here.
  • S309 Perform the image processing on the second data packet to generate a second captured image.
  • the N data packets include the second data packet and N-1 data packets, and after the second captured image is generated, the method further includes:
  • the N-1 data packets in the image data queue are acquired in order from the latest to the earliest;
  • the N-1 data packets are processed sequentially to generate N-1 captured images.
  • image processing can be performed on other data packets in the image data queue in order from late to early data packet acquisition time to generate corresponding captured images.
  • the implementation method of generating N-1 captured images may refer to the related description of S4051 or S408 or S412 in the subsequent FIG. 7 , which will not be repeated here.
  • the image processing includes processing using a first algorithm and a second algorithm
  • the first algorithm is an algorithm of the first color space
  • the second algorithm is an algorithm for converting an image in the first color space into an image in a second color space.
  • a third operation is detected, where the third operation is an operation of clicking on a thumbnail image of the second captured image.
  • the electronic device can display the photographed image stored in the gallery application, that is, the second photographed image.
  • for example, the operation of clicking on the thumbnail image of the third photographing operation may be as shown in (f) in FIG. 17.
  • displaying the second captured image may be displaying the second captured image in a gallery application when the electronic device detects that a thumbnail image of the second captured image is clicked; the second captured image refers to a captured image captured by the latest operation among N photographing operations.
  • the user after the user finishes the photo-taking operation, the user usually views the photographed image by clicking on a thumbnail image; it can be understood that the user views the actual photographed image stored in the gallery application by clicking on a thumbnail image.
• if the data packets are processed in the order in which they are collected (first collected, first processed), the electronic device can only generate the second photographed image at the 9th second; based on the photographing processing method provided in the embodiment of the present application, after the electronic device generates the first photographed image, the electronic device can generate the second photographed image; it can be understood that, based on the photographing processing method provided in the embodiment of the present application, as shown in Figure 7, the electronic device can generate the second photographed image at the 6th second; therefore, in the embodiment of the present application, when the electronic device detects that the user clicks on the thumbnail image of the second photographed image, the second photographed image can be quickly displayed; thereby shortening the user's waiting time to a certain extent and improving the user's shooting experience.
  • the electronic device can generate captured images by performing image processing on the data packets in the image data queue in the order of acquisition time from late to early (for example, the order of acquisition first and then processing), thereby shortening the time between the electronic device detecting the operation of clicking the thumbnail image and displaying the captured image to a certain extent; it can be understood that after processing the data packet of the first operation in multiple consecutive photographing operations, the electronic device can select the second data packet with the latest acquisition time in the stored image data queue for image processing to generate a second captured image; thereby ensuring that the electronic device can quickly process captured images with a later shooting time, so that the electronic device can quickly display the captured image after detecting the operation of clicking the thumbnail image; to a certain extent, it can shorten the time the user waits for the captured image and improve the user's shooting experience.
  • the method shown in FIG6 is illustrated by an example in which the electronic device detects three or more consecutive photographing operations; the embodiment provided in the present application is also applicable to a case in which the electronic device detects one photographing operation and two consecutive photographing operations.
  • the post-processing queue only includes data packet 1; the electronic device can first perform image processing on data packet 1, and then perform image processing on the data packet using the above-mentioned method of first acquiring and then processing; in the post-processing queue, since the post-processing queue only includes data packet 1, after performing image processing on data packet 1, if the electronic device does not detect other data packets, the image processing process can be terminated.
  • the post-processing queue only includes data packet 1 and data packet 2; the electronic device can first perform image processing on data packet 1, and then perform image processing on the data packet using the above-mentioned method of first acquiring and then processing; in the post-processing queue, since the post-processing queue only includes data packet 1 and data packet 2, after image processing is performed on data packet 1, the first data packet in the current post-processing queue is selected in the opposite direction of the first direction, that is, image processing is performed on data packet 2.
  • Fig. 7 is a schematic flow chart of a photographing processing method provided by an embodiment of the present application.
  • the method 400 may be executed by the electronic device shown in Fig. 1; the method 400 includes S401 to S412, and S401 to S412 are described in detail below.
  • the user can instruct the electronic device to run the camera application by clicking the icon of the "Camera” application.
  • the user can instruct the electronic device to run the camera application by sliding right on the display screen of the electronic device.
  • the lock screen interface includes an icon of the camera application, and the user instructs the electronic device to run the camera application by clicking the icon of the camera application.
  • the application has the permission to call the camera application; the user can instruct the electronic device to run the camera application by clicking a corresponding control.
  • the user can instruct the electronic device to run a camera application by selecting a control of a camera function.
  • running the camera application may refer to launching the camera application.
  • the photo processing method of the present application is applicable to scenarios of at least one continuous photo shooting; as shown in Figure 8, for one continuous photo shooting operation, the electronic device can detect N photo shooting operations, including the first photo shooting operation, the second photo shooting operation, and the Nth photo shooting operation; for one photo shooting operation among the N photo shooting operations, the electronic device can detect the operation of clicking the photo shooting control and the operation of popping up the photo shooting control.
• the electronic device may generate a photographed image based on a single frame image, that is, the actual captured image of one photographing operation.
  • the electronic device may also generate a photographed image based on multiple frames of images; for example, as shown in FIG8 , the electronic device may generate one photographed image based on N frames of images.
  • a first photographing operation (an example of a first operation) is detected.
  • the first photographing operation may refer to the first photographing operation detected by the electronic device; for example, within a first preset time period, the electronic device does not detect a photographing operation; after the first preset time period, the photographing operation detected by the electronic device may refer to the first photographing operation.
  • the first photographing operation may include an operation of clicking a photographing control and an operation of popping up a photographing control.
  • S403 Store the data packet of the first photographing operation into a post-processing queue.
• the electronic device may obtain a data packet of the first photographing operation (eg, data packet 1) in the ZSL queue of the electronic device according to the time information at which the first photographing operation is detected; and store the data packet of the first photographing operation in a post-processing queue.
  • the first photo operation may include an operation of clicking a photo control and an operation of popping up a photo control.
  • the electronic device may be triggered to obtain a data packet of the first photo operation (for example, data packet 1) in the ZSL queue of the electronic device based on the time information of detecting the clicking of the photo control; and store the data packet of the first photo operation in a post-processing queue.
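• The retrieval from the ZSL queue can be pictured with the following hedged sketch: it assumes the ZSL queue is a buffer of (timestamp, raw frame) pairs filled during preview, and that the frames whose timestamps are closest to the moment the photo control is clicked are bundled between a start frame and an end frame; build_packet_from_zsl and on_shutter are hypothetical names, and the actual frame selection of the embodiment is not limited to this strategy.

```python
from collections import deque
from typing import Deque, List, Tuple

RawFrame = bytes  # placeholder for sensor data

def build_packet_from_zsl(zsl_queue: Deque[Tuple[float, RawFrame]],
                          shutter_time: float,
                          frames_per_packet: int = 3) -> List[Tuple[str, object]]:
    """Pick the frames whose timestamps are closest to the shutter time and
    wrap them between a start frame and an end frame."""
    closest = sorted(zsl_queue, key=lambda tf: abs(tf[0] - shutter_time))[:frames_per_packet]
    ordered = sorted(closest, key=lambda tf: tf[0])          # keep temporal order
    return ([("start", shutter_time)]
            + [("image", frame) for _, frame in ordered]
            + [("end", shutter_time)])

def on_shutter(zsl_queue: Deque[Tuple[float, RawFrame]],
               post_processing_queue: Deque[list],
               shutter_time: float) -> None:
    # the packet of this photographing operation is appended to the post-processing queue
    post_processing_queue.append(build_packet_from_zsl(zsl_queue, shutter_time))
```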
  • the data packet of the first photographing operation is stored in the post-processing queue according to the first direction.
  • the data packets in the post-processing queue are used to generate captured images of the photographing operation, that is, the real images of the photographing operation;
  • the electronic device also includes a pre-processing queue, and the data packets in the pre-processing queue are used to generate thumbnail images of the photographing operation.
  • a data packet of a first photographing operation is obtained in a post-processing queue; the first photographing operation may be a photographing operation with the earliest photographing time in the post-processing queue.
  • the first data packet may be the earliest data packet stored in the post-processing queue;
  • the post-processing queue only includes the first data packet; for example, the post-processing queue only includes data packet 1.
  • the first data packet may be data packet 1, which may include a start frame, at least one image frame (for example, 3 image frames) and an end frame; wherein the start frame is used to identify the starting position of data packet 1 in the post-processing queue; and the end frame is used to identify the ending position of data packet 1 in the post-processing queue.
  • data packet 1 may include a start frame, at least one image frame (for example, 3 image frames) and an end frame; wherein the start frame is used to identify the starting position of data packet 1 in the post-processing queue; and the end frame is used to identify the ending position of data packet 1 in the post-processing queue.
• after the electronic device detects the first photographing operation (for example, the first photographing operation), the electronic device also detects the second photographing operation (for example, the second photographing operation) or the third photographing operation (for example, the third photographing operation), etc.; after detecting the first photographing operation, the electronic device performs image processing on the data packet of the first photographing operation to generate a first photographed image; at the same time, the electronic device can continuously update the post-processing queue according to the detected second photographing operation or the third photographing operation, and store the data packets of subsequent photographing operations in the post-processing queue; the electronic device processes the data packets in the post-processing queue respectively to generate photographed images of different photographing operations; the generated photographed images are stored in the gallery application, and when the electronic device detects that the thumbnail image is clicked, the electronic device displays the photographed image corresponding to the thumbnail image in the gallery application to realize the review of the photographed image.
• the electronic device may execute the first process 405 and the second process 406; wherein, the first process 405 and the second process 406 may be executed synchronously; wherein, the first process 405 may refer to a process in which the electronic device generates a captured image of a first photographing operation, and the first process 405 includes S4051; the second process 406 may refer to a process in which the electronic device detects a photographing operation and updates a post-processing queue in real time based on the photographing operation; taking three photographing operations as an example, the second process may include S4061 to S4064.
  • the first captured image may refer to a captured image corresponding to the first photographing operation, that is, a real image generated by the electronic device based on the first photographing operation.
  • the first data packet may be data packet 1, which may include a start frame, image frame 1, image frame 2, image frame N and an end frame; wherein the start frame is used to identify the starting position of data packet 1; and the end frame is used to identify the ending position of data packet 1.
  • start frame may refer to the header of a data packet; the end frame may also refer to the tail of a data packet.
  • the first data packet may also only include a start frame, image frame 1 and an end frame.
  • a data packet may include one image frame or may include multiple image frames; the embodiment of the present application does not impose any limitation on the number of image frames in a data packet.
  • the image frame in the first data packet may be original image data, that is, the image frame may be a Raw image; wherein the image processing includes a first algorithm and a second algorithm; wherein the first algorithm is an algorithm for a first color space; and the second algorithm is an algorithm for converting an image in a first color space into an image in a second color space.
  • the first algorithm is an algorithm of the Raw color space
  • the algorithm of the Raw color space may include but is not limited to: black level correction (Black Level Correction, BLC), lens shading correction (Lens Shading Correction, LSC) and other algorithms.
  • black level correction is used to correct the black level.
• Black level refers to the video signal level produced when no light is output on a calibrated display device.
  • the reason for black level correction is: on the one hand, due to the dark current of the image sensor, the pixel also has the problem of voltage output in the absence of light; on the other hand, the image sensor is not accurate enough when performing analog-to-digital conversion.
• Lens shading correction is used to eliminate the inconsistency in color and brightness between the periphery and the center of the image caused by the lens optical system.
• the second algorithm includes an algorithm for converting a Raw image into an image in a YUV color space, and an algorithm for converting an image in the YUV color space into other storage formats; wherein the other storage formats include JPEG format (JPG format), GIF format, DNG format, RAW format, etc.
  • the above is an example of an image processing algorithm; the above image processing process of the data packet can refer to any existing algorithm for generating captured images, and the present application does not impose any limitation on this.
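• Purely as an illustration of the two-stage processing described above, the following sketch uses simplified stand-ins for the Raw-domain algorithms and the color-space conversion; the fixed black level, the gain map, and the averaging-based multi-frame fusion are assumptions made for the example and are not the actual algorithms of the embodiment.

```python
import numpy as np

def black_level_correction(raw: np.ndarray, black_level: float = 64.0) -> np.ndarray:
    # subtract the sensor black level so that "no light" maps to zero
    return np.clip(raw - black_level, 0, None)

def lens_shading_correction(raw: np.ndarray, gain_map: np.ndarray) -> np.ndarray:
    # per-pixel gain that compensates brightness fall-off towards the periphery
    return raw * gain_map

def raw_to_yuv(raw: np.ndarray) -> np.ndarray:
    # stand-in for demosaicing plus conversion into the YUV color space
    y = raw / max(float(raw.max()), 1.0)
    return np.stack([y, np.zeros_like(y), np.zeros_like(y)], axis=-1)

def process_packet(image_frames: list, gain_map: np.ndarray) -> np.ndarray:
    # first algorithm: operations in the Raw color space (e.g. BLC, then LSC)
    corrected = [lens_shading_correction(black_level_correction(f), gain_map)
                 for f in image_frames]
    fused = np.mean(corrected, axis=0)   # multi-frame fusion simplified to an average
    # second algorithm: convert the Raw-domain result into the second color space
    return raw_to_yuv(fused)
```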
  • the second photographing operation may refer to detecting a second photographing operation within a second preset time period; wherein the interval between the first photographing operation and the second photographing operation is relatively short; and the electronic device may be in a continuous photographing scene.
  • the second photographing operation may include an operation of clicking a photographing control and an operation of popping up a photographing control.
  • the second photographing operation is performed after the first photographing operation, and the electronic device detects the second photographing operation at a time later than the first photographing operation.
  • S4062 Store the data packet of the second photographing operation into a post-processing queue.
• the electronic device can obtain a data packet of the second photographing operation (for example, data packet 2) in the ZSL queue of the electronic device according to the time information at which the second photographing operation is detected; and store the data packet of the second photographing operation in the post-processing queue.
  • the second photo operation may include an operation of clicking a photo control and an operation of popping up a photo control.
• the electronic device may be triggered to obtain a data packet of the second photo operation (for example, data packet 2) in the ZSL queue of the electronic device based on the time information of detecting the clicking of the photo control; and store the data packet of the second photo operation in the post-processing queue.
  • the data packet of the second photographing operation is stored in the post-processing queue according to the first direction; for example, data packet 2 is stored to the left of data packet 1 according to the first direction; it can be understood that data packet 1 is stored first and then data packet 2 according to the first direction.
  • the third photographing operation may refer to detecting the third photographing operation within a second preset time period; wherein the interval between the second photographing operation and the third photographing operation is relatively short; and the electronic device may be in a continuous photographing scene.
  • the third photographing operation may include an operation of clicking a photographing control and an operation of popping up a photographing control.
• the third photographing operation is performed after the second photographing operation, and the electronic device detects the third photographing operation at a time later than the time of the second photographing operation.
  • S4064 Store the data packet of the third photographing operation into a post-processing queue.
• the electronic device can obtain a data packet of the third photographing operation (for example, data packet 3) in the ZSL queue of the electronic device according to the time information at which the third photographing operation is detected; and store the data packet of the third photographing operation in the post-processing queue.
  • the third photo operation may include an operation of clicking a photo control and an operation of popping up a photo control.
• after the electronic device detects the operation of popping up the photo control, the electronic device is triggered to obtain a data packet of the third photo operation (for example, data packet 3) in the ZSL queue of the electronic device based on the time information of detecting the clicking of the photo control; and store the data packet of the third photo operation in the post-processing queue.
• the data packet of the third photographing operation is stored in the post-processing queue according to the first direction; for example, data packet 3 is stored to the left of data packet 2 according to the first direction; it can be understood that data packet 1 is stored first according to the first direction, then data packet 2 is stored, and then data packet 3 is stored.
  • S407 Select the first data packet in the current post-processing queue in the opposite direction of the storage direction.
  • the storage direction can be a first direction, and the first direction refers to the direction from right to left; then the opposite direction of the first direction can refer to the reverse direction of the first direction; if the first direction is from right to left, then the reverse direction of the first direction can refer to the direction from left to right; in addition, if the electronic device processes a data packet, the data packet can be regarded as being removed from the post-processing queue; it can be understood that the data packets stored in the post-processing queue are all unprocessed data packets; the first data packet selected in the opposite direction of the storage direction refers to the latest data packet at the time of the photo operation.
  • the unprocessed data packets included in the post-processing queue at this time are data packet 2 and data packet 3 , and the data packets in the post-processing queue are selected in the reverse order of the storage, and data packet 3 is obtained.
  • the electronic device may select the first data packet in the current post-processing queue in a stored reverse order.
• when the electronic device processes the end frame in the first data packet (for example, the end frame in data packet 1), the current post-processing queue includes data packets 2 and 3; at this time, the electronic device can select the first data packet in the reverse direction of the current post-processing queue, that is, select data packet 3; and perform image processing on data packet 3 to generate a captured image.
  • the electronic device performs image processing on the collected image data packets by adopting a method of first collecting and then processing to generate a captured image, thereby shortening the time between the electronic device detecting the operation of clicking on the thumbnail image and displaying the captured image to a certain extent; it can be understood that after the electronic device processes the data packet of the first photographing operation in the continuous photographing operation, it can select the first data packet in the reverse direction of the stored image data queue for image processing, that is, obtain the data packet with the latest shooting time for image processing; thereby ensuring that the electronic device can quickly process the captured images with a later shooting time, so that the electronic device can quickly display the captured image after detecting the operation of clicking on the thumbnail image; to a certain extent, it can shorten the time the user waits for the captured image and improve the user's shooting experience.
• method 400 can be applicable to a continuous shooting scenario; therefore, the electronic device can detect a shooting operation multiple times; while executing S4051, the electronic device can also detect other shooting operations, and new data packets will be continuously stored in the post-processing queue based on those other shooting operations, as shown in (b) in Figure 9; in an embodiment of the present application, after processing the first data packet, the electronic device does not perform image processing on the data packets in the post-processing queue in the order in which they were stored (first collected, first processed); instead, the electronic device selects the current last data packet in the post-processing queue for processing, thereby shortening the user's shooting waiting time to a certain extent.
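• The "process the first packet, then always the most recently stored packet" behaviour can be sketched as follows; this is a simplified model under the assumption that a single worker drains the queue, and the class and function names are illustrative rather than part of the embodiment.

```python
import threading
from collections import deque

class PostProcessingQueue:
    """Packets are appended as photographing operations are detected; the worker
    takes the earliest packet first and afterwards always the latest packet."""

    def __init__(self):
        self._packets = deque()
        self._lock = threading.Lock()
        self._first_taken = False

    def store(self, packet) -> None:
        with self._lock:
            self._packets.append(packet)          # storage direction: append at the tail

    def next_packet(self):
        with self._lock:
            if not self._packets:
                return None
            if not self._first_taken:
                self._first_taken = True
                return self._packets.popleft()    # data packet of the first photographing operation
            return self._packets.pop()            # current last packet, i.e. the reverse direction

def worker(queue: PostProcessingQueue, process) -> None:
    while True:
        packet = queue.next_packet()
        if packet is None:
            break                                 # no pending packets: stop image processing
        process(packet)                           # runs until the end frame of the packet
```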
  • the first start frame is selected in the post-processing queue according to a reverse processing method to obtain the last data packet.
• a continuous photographing operation including three photographing operations is used as an example for explanation; wherein, the electronic device detects the first photographing operation, the second photographing operation and the third photographing operation respectively in chronological order from front to back; the data packet corresponding to the first photographing operation includes: a start frame, image frame 1, image frame 2, image frame 3 and an end frame; the data packet corresponding to the second photographing operation includes: a start frame, image frame 3, image frame 4, image frame 5 and an end frame; the data packet corresponding to the third photographing operation, which is stored in the post-processing queue, includes: a start frame, image frame 7, image frame 8, image frame 9 and an end frame; after the electronic device detects the first photographing operation, at the first moment the frame selection module indicates, through indication information 1, that the data packet of the first photographing operation is dequeued starting from its start frame (for example, data starts to be obtained from the post-processing queue); at the second moment, the frame selection module indicates, through indication information 2, that the data packet with the latest acquisition time in the current post-processing queue is dequeued starting from its start frame.
  • S408 Perform image processing on the selected data packet to generate a second captured image.
  • the first data packet in the current post-processing queue is selected as data packet 3 in the opposite direction of the storage direction; image processing is performed on data packet 3 to generate a second captured image.
  • data packet 3 corresponds to the third photographing operation, that is, the photographing operation with the latest photographing operation time; the second photographed image is a photographed image generated according to the third photographing operation.
  • the implementation method of image processing can refer to the relevant description of S4051 above, which will not be repeated here.
• S409 An operation of clicking a thumbnail image (an example of a third operation) is detected.
• since the electronic device detects the first photographing operation, the second photographing operation and the third photographing operation respectively in order of photographing time from early to late, among which the third photographing operation is the photographing operation with the latest photographing time, if the electronic device does not detect a photographing operation within a period of time after the third photographing operation, the thumbnail image displayed in the photographing interface of the electronic device is the thumbnail image of the third photographing operation; it can be understood that the thumbnail image of the latest photographing operation is usually displayed in the photographing interface of the electronic device; after the electronic device detects the operation of clicking on the thumbnail image of the third photographing operation, the electronic device can display the photographed image stored in the gallery application, that is, the second photographed image.
• the operation of clicking on the thumbnail image of the third photographing operation may be as shown in (f) in FIG. 17.
  • displaying the second photographed image may be displaying the photographed image of the third photographing operation in a gallery application, that is, the actual photographed image of the third photographing operation.
• after the user finishes the photo-taking operation, the user usually views the photographed image by clicking on a thumbnail image; it can be understood that the user views the actual photographed image stored in the gallery application by clicking on a thumbnail image.
• if the data packets are processed in the order in which they are collected (first collected, first processed), the electronic device can only generate the second photographed image at the 9th second; based on the photographing processing method provided in the embodiment of the present application, after the electronic device generates the first photographed image, the electronic device can generate the second photographed image; it can be understood that, based on the photographing processing method provided in the embodiment of the present application, as shown in Figure 7, the electronic device can generate the second photographed image at the 6th second; therefore, in the embodiment of the present application, when the electronic device detects that the user clicks on the thumbnail image of the second photographed image, the second photographed image can be quickly displayed; thereby shortening the user's waiting time to a certain extent and improving the user's shooting experience.
• the electronic device performs image processing on the collected image data packets by adopting the method of first collecting and then processing to generate a captured image, thereby shortening the time between the electronic device detecting the operation of clicking the thumbnail image and displaying the captured image to a certain extent; it can be understood that, after the electronic device processes the data packet of the first photographing operation in the continuous photographing operation, the first data packet in the reverse order of the stored image data queue can be selected for image processing, that is, the data packet with the latest shooting time is obtained for image processing; thereby ensuring that the electronic device can quickly process the images shot with a later shooting time, so that the electronic device can quickly display the shot image after detecting the operation of clicking the thumbnail image; to a certain extent, it can shorten the time the user waits for the shot image and improve the user's shooting experience.
  • the storage direction can be a first direction, and the first direction refers to the direction from right to left; then the opposite direction of the first direction (for example, the reverse direction) refers to the opposite direction of the first direction; if the first direction is from right to left, then the reverse direction of the first direction can refer to the direction from left to right; in addition, if the electronic device processes a data packet, the data packet can be regarded as being removed from the post-processing queue; it can be understood that the data packets stored in the post-processing queue are all unprocessed data packets; the first data packet selected according to the reverse direction of storage refers to the latest data packet at the time of the photo operation.
  • S412 Perform image processing on the selected data packet to generate a third captured image.
  • the first data packet in the current post-processing queue is selected as data packet 2 in the opposite direction of the storage direction; image processing is performed on data packet 2 to generate a third captured image.
  • data packet 2 corresponds to a second photographing operation
• the moment when the electronic device detects the second photographing operation is between the moment when the first photographing operation is detected and the moment when the third photographing operation is detected.
  • the implementation method of image processing can refer to the relevant description of S4051 above, which will not be repeated here.
  • the electronic device performs image processing on the collected image data packets by adopting a method of first collecting and then processing to generate a captured image, thereby shortening the time between the electronic device detecting the operation of clicking on the thumbnail image and displaying the captured image to a certain extent; it can be understood that after the electronic device processes the data packet of the first photographing operation in the continuous photographing operation, it can select the first data packet in the reverse direction of the stored image data queue for image processing, that is, obtain the data packet with the latest shooting time for image processing; thereby ensuring that the electronic device can quickly process the captured images with a later shooting time, so that the electronic device can quickly display the captured image after detecting the operation of clicking on the thumbnail image; to a certain extent, it can shorten the time the user waits for the captured image and improve the user's shooting experience.
  • FIG7 takes an example where an electronic device detects three consecutive photographing operations; the photographing processing method provided in the embodiment of the present application is also applicable to the scenario where the electronic device detects one photographing operation and two consecutive photographing operations.
  • the post-processing queue only includes data packet 1; the electronic device can first perform image processing on data packet 1, and then perform image processing on the data packet using the above-mentioned method of first acquiring and then processing; in the post-processing queue, since the post-processing queue only includes data packet 1, after performing image processing on data packet 1, if the electronic device does not detect other data packets, the image processing process can be terminated.
  • the post-processing queue only includes data packet 1 and data packet 2; the electronic device can first perform image processing on data packet 1, and then perform image processing on the data packet using the above-mentioned method of first acquiring and then processing; in the post-processing queue, since the post-processing queue only includes data packet 1 and data packet 2, after image processing is performed on data packet 1, the first data packet in the current post-processing queue is selected in the opposite direction of the storage direction, that is, image processing is performed on data packet 2.
  • an example of an electronic device detecting more than three consecutive photo taking operations is described below in conjunction with FIG. 11 to FIG. 16 .
• when the electronic device detects a continuous photographing operation, the electronic device can select the current first data packet in the post-processing queue in the opposite direction of the storage direction of the data packet in the post-processing queue for image processing to generate a captured image; thereby ensuring that the electronic device can quickly process the captured images with a later shooting time, so that the electronic device can quickly generate the captured images after detecting the continuous photographing operation; to a certain extent, it can shorten the time the user waits for the captured image and improve the user's shooting experience.
  • the data packet of the first consecutive photographing operation in the post-processing queue can be processed first; after processing the data packet of the first consecutive photographing operation, the latest data packet in the consecutive photographing operation can be processed; and then the data packets of different photographing operations can be processed in chronological order from back to front; ensuring that the electronic device can quickly process images captured at a later time, so that the electronic device can quickly generate captured images after detecting the consecutive photographing operation; to a certain extent, it can shorten the time users wait for captured images and improve the user's shooting experience.
• a continuous photographing operation including 5 photographing operations is used for illustration; the 5 photographing operations correspondingly generate 5 photographed images, namely photographed image 1, photographed image 2, photographed image 3, photographed image 4 and photographed image 5; the data packet collected by the electronic device for the first photographing operation is data packet 1; the data packet collected by the electronic device for the second photographing operation is data packet 2; the data packet collected by the electronic device for the third photographing operation is data packet 3; the data packet collected by the electronic device for the fourth photographing operation is data packet 4; the data packet collected by the electronic device for the fifth photographing operation is data packet 5; as shown in FIG. 12, assuming that the time required for the electronic device to process a data packet is 3 seconds and the time required to collect a data packet is 0.1 seconds, that is, data of 10 photographing operations can be collected in 1 second; then at the first moment (for example, at 0.1 seconds), the electronic device collects data packet 1 and performs image processing on data packet 1 to generate photographed image 1; at 3.1 seconds, the electronic device completes the image processing of data packet 1 and generates photographed image 1; the electronic device then selects the first data packet in the current post-processing queue in the opposite direction of the storage direction, that is, data packet 5, and processes the remaining data packets in the order of data packet 5, data packet 4, data packet 3 and data packet 2.
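• Under the assumptions of the example above (3 seconds to process a packet, 0.1 seconds to collect one, and all five packets collected within the first half second), the completion times can be estimated with the short calculation below; the collection times used here are illustrative.

```python
# Rough completion-time estimate for the five-shot example above.
PROCESS_TIME = 3.0
collect_time = {k: 0.1 * k for k in range(1, 6)}   # packet k assumed collected at 0.1*k s

finish, t = {}, 0.0
for packet in [1, 5, 4, 3, 2]:                     # first packet, then latest-first
    t = max(t, collect_time[packet]) + PROCESS_TIME
    finish[packet] = round(t, 1)

print(finish)   # approximately {1: 3.1, 5: 6.1, 4: 9.1, 3: 12.1, 2: 15.1}
```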
  • obtaining a data packet from the post-processing queue and performing image processing on the data packet can be regarded as removing the data packet from the post-processing queue; because the electronic device may also collect new data packets based on the detected photo-taking operation while processing the data packets in the post-processing queue; therefore, the data packets in the post-processing queue can be regarded as being in a real-time update state, that is, there are newly written data packets and also removed data packets; when processing to the end of the previous data packet, the first data packet in the current post-processing queue in the reverse direction can be selected; wherein, the reverse order can be understood as the opposite direction of the data packets stored in the post-processing queue.
  • the electronic device obtains the first data packet in the current post-processing queue in a reverse order. Please refer to the relevant description of Figures 9 and 10, which will not be repeated here.
  • the electronic device may select the current first data packet in the post-processing queue in the opposite direction of the data packets stored in the post-processing queue for image processing to generate a captured image; thereby ensuring that the electronic device can quickly process images captured with a later shooting time, so that the electronic device can quickly generate captured images after detecting the continuous photographing operation; to a certain extent, it can shorten the time the user waits for the captured image and improve the user's shooting experience.
• the first consecutive photographing operation includes 5 photographing operations; for example, the first consecutive photographing operation includes the first photographing operation to the fifth photographing operation; the second consecutive photographing operation includes 2 photographing operations; for example, the second consecutive photographing operation includes the sixth photographing operation and the seventh photographing operation; as shown in FIG. 13, the 7 photographing operations correspondingly generate 7 photographed images, namely photographed image 1, photographed image 2, photographed image 3, photographed image 4, photographed image 5, photographed image 6 and photographed image 7; the data packet collected by the electronic device for the first photographing operation is data packet 1; the data packet collected by the electronic device for the second photographing operation is data packet 2; the data packet collected by the electronic device for the third photographing operation is data packet 3; the data packet collected by the electronic device for the fourth photographing operation is data packet 4; the data packet collected by the electronic device for the fifth photographing operation is data packet 5; the data packet collected by the electronic device for the sixth photographing operation is data packet 6; the data packet collected by the electronic device for the seventh photographing operation is data packet 7.
• the electronic device obtains the first data packet in the post-processing queue in the opposite direction of the storage direction of the data packet (for example, in reverse order) for image processing; for example, at the first moment, the electronic device obtains data packet 1 for image processing to generate captured image 1; while the electronic device performs image processing on data packet 1, based on the photo operations detected by the electronic device, data packets 2 to 5 are stored in the post-processing queue; at a second moment, when the electronic device processes to the end frame in data packet 1, the electronic device obtains the first data packet, i.e., data packet 5, in the post-processing queue in the opposite direction of the storage direction of the data packet; the electronic device performs image processing on data packet 5 to generate captured image 5; at a third moment, when the electronic device processes to the end frame in data packet 5, the electronic device obtains the first data packet, i.e., data packet 7, in the post-processing queue in the opposite direction of the storage direction; the electronic device performs image processing on data packet 7 to generate captured image 7.
• after captured image 7 is generated, that is, when the electronic device processes to the end frame in data packet 7, the electronic device obtains the first data packet, i.e., data packet 6, in the direction opposite to the storage direction of the data packet; the electronic device performs image processing on data packet 6 to generate captured image 6; after the captured image 6 is generated, i.e., when the electronic device processes to the end frame in data packet 6, the electronic device obtains the first data packet, i.e., data packet 4, in the current post-processing queue in the direction opposite to the storage direction; the electronic device performs image processing on data packet 4 to generate captured image 4; after the captured image 4 is generated, the electronic device obtains the first data packet, i.e., data packet 3, in the current post-processing queue in the direction opposite to the storage direction of the data packet; the electronic device performs image processing on data packet 3 to generate captured image 3; after the captured image 3 is generated, the electronic device obtains the first data packet, i.e., data packet 2, in the post-processing queue in the direction opposite to the storage direction of the data packet; the electronic device performs image processing on data packet 2 to generate captured image 2.
• when the electronic device processes to the end frame of the current data packet, the electronic device can obtain the first data packet in the post-processing queue in the opposite direction of the storage direction of the data packet; it can be understood that when the electronic device processes to the end frame of the previous data packet, the electronic device can select the data packet to be processed next from the post-processing queue; when selecting, the selection is made in order of shooting time from back to front; that is, the electronic device gives priority to the data packet with the latest shooting time.
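• The processing order of situation 1 above (data packet 1, then 5, 7, 6, 4, 3, 2) can be reproduced with the self-contained sketch below; the arrival times are illustrative values chosen so that data packets 2 to 5 arrive while data packet 1 is being processed and data packets 6 and 7 arrive while data packet 5 is being processed.

```python
# Reproduce the processing order of situation 1 above. Packets 2-5 arrive while
# packet 1 is being processed; packets 6-7 arrive while packet 5 is being processed.
PROCESS_TIME = 3.0
arrival = {1: 0.1, 2: 0.2, 3: 0.3, 4: 0.4, 5: 0.5, 6: 3.5, 7: 3.6}

pending, order, clock, first_done = [], [], 0.0, False
remaining = sorted(arrival)                      # packets that have not arrived yet
while remaining or pending:
    pending += [p for p in remaining if arrival[p] <= clock]   # newly stored packets
    remaining = [p for p in remaining if arrival[p] > clock]
    if not pending:
        clock = arrival[remaining[0]]            # idle until the next packet arrives
        continue
    # earliest packet first, afterwards always the most recently stored packet
    packet = pending.pop(0) if not first_done else pending.pop()
    first_done = True
    order.append(packet)
    clock += PROCESS_TIME

print(order)   # [1, 5, 7, 6, 4, 3, 2]
```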
• the first consecutive photographing operation includes 5 photographing operations; for example, the first consecutive photographing operation includes the first photographing operation to the fifth photographing operation; the second consecutive photographing operation includes 2 photographing operations; for example, the second consecutive photographing operation includes the sixth photographing operation and the seventh photographing operation; as shown in FIG. 15, the 7 photographing operations correspondingly generate 7 photographed images, namely photographed image 1, photographed image 2, photographed image 3, photographed image 4, photographed image 5, photographed image 6 and photographed image 7; the data packet collected by the electronic device for the first photographing operation is data packet 1; the data packet collected by the electronic device for the second photographing operation is data packet 2; the data packet collected by the electronic device for the third photographing operation is data packet 3; the data packet collected by the electronic device for the fourth photographing operation is data packet 4; the data packet collected by the electronic device for the fifth photographing operation is data packet 5; the data packet collected by the electronic device for the sixth photographing operation is data packet 6; the data packet collected by the electronic device for the seventh photographing operation is data packet 7.
• the electronic device obtains the first data packet in the post-processing queue in a reverse direction for image processing; for example, at the first moment, the electronic device obtains data packet 1 for image processing to generate captured image 1; while the electronic device performs image processing on data packet 1, based on the photo-taking operations detected by the electronic device, data packets 2 to 7 are stored in the post-processing queue; at a second moment, when the electronic device processes to the end frame in data packet 1, the electronic device obtains the first data packet in the post-processing queue in a reverse direction, namely, data packet 7; the electronic device performs image processing on data packet 7 to generate captured image 7; the above operations are repeated, and after generating captured image 7, the electronic device obtains the first data packet in the current post-processing queue in a reverse direction, namely, data packet 6; the electronic device performs image processing on data packet 6 to generate captured image 6; after generating captured image 6, namely, when the electronic device processes to the end frame in data packet 6, the electronic device obtains the first data packet in the current post-processing queue in a reverse direction, namely, data packet 5; the above operations are repeated until captured image 5, captured image 4, captured image 3 and captured image 2 are generated in sequence.
  • the difference between the above situation 2 and situation 1 is that: in situation 1, after the electronic device processes data packet 1, the post-processing queue includes data packets 2 to 5. At this time, the electronic device obtains the first data packet in the post-processing queue in the opposite direction of the storage direction of the data packet, that is, obtains data packet 5; in situation 2, after the electronic device processes data packet 1, the post-processing queue includes data packets 2 to 7.
• the electronic device obtains the first data packet in the post-processing queue in the opposite direction of the storage direction of the data packet, that is, obtains data packet 7; the above situations exist because, when the electronic device performs continuous photo taking operations, the time intervals between different photo taking operations may be different; due to the difference in time intervals, the image data stored in the post-processing queue of the electronic device may also be different.
  • the electronic device performs image processing on the collected image data packets by adopting a method of first collecting and then processing to generate a captured image, thereby shortening the time between the electronic device detecting the operation of clicking on the thumbnail image and displaying the captured image to a certain extent; it can be understood that after the electronic device processes the data packet of the first photographing operation in the continuous photographing operation, it can select the first data packet in the reverse direction of the stored image data queue for image processing, that is, obtain the data packet with the latest shooting time for image processing; thereby ensuring that the electronic device can quickly process the captured images with a later shooting time, so that the electronic device can quickly display the captured image after detecting the operation of clicking on the thumbnail image; to a certain extent, it can shorten the time the user waits for the captured image and improve the user's shooting experience.
  • the electronic device displays a preview interface 501, as shown in (a) of FIG. 17 ;
  • the preview interface 501 includes a preview image, a photo control 502, and a thumbnail display control 503, wherein the image displayed in the thumbnail display control 503 is a thumbnail image of the last photo;
• the electronic device detects multiple rapid and continuous photo operations; for example, the electronic device detects an operation of clicking the photo control 502, as shown in (b) of FIG. 17; after the electronic device detects the operation of clicking the photo control 502, the electronic device detects an operation of popping up the photo control 502, and displays a display interface 504, as shown in (c) of FIG. 17.
  • the electronic device After the electronic device detects the operation of popping up the photo control 502, the electronic device detects the operation of clicking the photo control 502, as shown in (d) in Figure 17; After the electronic device detects the operation of clicking the photo control 502, the electronic device detects the operation of popping up the photo control 502, as shown in (e) in Figure 17; After the electronic device detects the operation of popping up the photo control 502, the electronic device detects the operation of clicking the thumbnail display control 503, as shown in (f) in Figure 17; After the electronic device detects the operation of clicking the thumbnail display control 503, the electronic device can display the display interface in the gallery application, as shown in Figure 18.
  • the electronic device performs image processing on the data packets of continuous photo shooting by the method of first collecting and then processing, which can shorten the waiting time for generating the captured image when the operation of clicking the thumbnail image is detected; it can be understood that the electronic device can give priority to processing the data packet with the latest shooting time, so that the waiting time after detecting that the electronic device clicks the thumbnail display control, that is, the waiting time for displaying the captured image can be shortened to a certain extent, thereby improving the user's shooting experience.
  • the electronic device executes the photo processing method provided in an embodiment of the present application.
• a preview interface as shown in (a) of FIG. 20 may be displayed; the preview interface includes a preview image and a setting control 620; the electronic device detects the operation of clicking the setting control 620, as shown in (b) of FIG. 20; after the electronic device detects the operation of clicking the setting control 620, it displays the setting interface, as shown in (c) of FIG. 20; the setting interface includes a control 630 for quickly displaying the captured image; the electronic device detects the operation of clicking the control 630 for quickly displaying the captured image, as shown in (d) of FIG. 20; after the electronic device detects the operation of clicking the control 630 for quickly displaying the captured image, it executes the photo processing method provided in an embodiment of the present application.
  • the electronic device performs image processing on the collected image data packets by adopting a method of first collecting and then processing to generate a captured image, thereby shortening the time between the electronic device detecting the operation of clicking on the thumbnail image and displaying the captured image to a certain extent; it can be understood that after the electronic device processes the data packet of the first photographing operation in the continuous photographing operation, it can select the first data packet in the reverse direction of the stored image data queue for image processing, that is, obtain the data packet with the latest shooting time for image processing; thereby ensuring that the electronic device can quickly process the captured images with a later shooting time, so that the electronic device can quickly display the captured image after detecting the operation of clicking on the thumbnail image; to a certain extent, it can shorten the time the user waits for the captured image and improve the user's shooting experience.
  • FIG21 is a schematic diagram of the structure of an electronic device provided in an embodiment of the present application.
  • the electronic device 700 includes a processing module 710 and a display module 720 .
• the processing module 710 is used to detect a first operation, where the first operation is an operation of instructing the electronic device to take a photo; in response to the first operation, obtain a first data packet; store the first data packet in an image data queue, where the data packets stored in the image data queue are used to generate captured images, and the first data packet is the data packet with the earliest acquisition time in the image data queue; perform image processing on the first data packet to generate a first captured image; detect a second operation, where the second operation includes N capturing operations, where the time interval of the N capturing operations is less than a preset time length, where the capturing operation is an operation of instructing the electronic device to capture an image, and N is an integer greater than or equal to 2; in response to the second operation, obtain N data packets, where the N data packets correspond one-to-one to the N capturing operations; store the N data packets in the image data queue in a sequence from earliest to latest based on the acquisition time; after generating the first captured image, obtain a second data packet, where the second data packet is the data packet with the latest acquisition time in the image data queue; and perform the image processing on the second data packet to generate a second captured image; the display module 720 is used to display the second captured image when a third operation is detected, where the third operation is an operation of clicking a thumbnail image of the second captured image.
  • the N data packets include the second data packet and N-1 data packets
  • the processing module 710 is further configured to:
• acquire the N-1 data packets in the image data queue in order from the latest to the earliest; and perform the image processing on the N-1 data packets in sequence to generate N-1 captured images.
  • the second data packet is obtained at a first moment; wherein the first moment is a moment of processing the first end frame.
  • each of the N data packets includes a start frame and an end frame, the start frame is used to indicate a start position of a data packet in the image data queue, and the end frame is used to indicate an end position of a data packet in the image data queue; the processing module 710 is specifically used to:
• determining position information of a target start frame in the image data queue, wherein the target start frame is the latest start frame in the image data queue;
• the second data packet is acquired based on the position information of the target start frame.
  • the M frames of image data are image data in a first color space; the image processing includes processing using a first algorithm and a second algorithm, the first algorithm is an algorithm for the first color space, and the second algorithm is an algorithm for converting an image in the first color space into an image in a second color space.
  • the N shooting operations are continuous shooting operations.
• the N+1 data packets in the image data queue are data packets obtained from the zero-second delay (ZSL) queue based on the time information of the first operation and the time information of the second operation.
• the above "module" can be implemented in the form of software and/or hardware, which is not specifically limited.
  • a “module” can be a software program, a hardware circuit, or a combination of the two that implements the above functions.
  • the hardware circuit may include an application specific integrated circuit (ASIC), an electronic circuit, a processor (such as a shared processor, a dedicated processor, or a group processor, etc.) and memory for executing one or more software or firmware programs, a combined logic circuit, and/or other suitable components that support the described functions.
• the units of each example described in the embodiments of the present application can be implemented by electronic hardware, or a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Those skilled in the art can use different methods to implement the described functions for each specific application, but such implementation should not be considered to be beyond the scope of the present application.
  • Fig. 22 shows a schematic diagram of the structure of an electronic device provided by the present application.
  • the dotted line in Fig. 22 indicates that the unit or the module is optional; the electronic device 800 can be used to implement the photo processing method described in the above method embodiment.
  • the electronic device 800 includes one or more processors 801, which can support the electronic device 800 to implement the photo processing method in the method embodiment.
  • the processor 801 can be a general-purpose processor or a special-purpose processor.
  • the processor 801 can be a central processing unit (CPU), a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), or other programmable logic devices, such as discrete gates, transistor logic devices, or discrete hardware components.
  • the processor 801 may be used to control the electronic device 800, execute software programs, and process data of the software programs.
  • the electronic device 800 may also include a communication unit 805 to implement input (reception) and output (transmission) of signals.
  • the electronic device 800 may be a chip
  • the communication unit 805 may be an input and/or output circuit of the chip
  • the communication unit 805 may be a communication interface of the chip
• the chip may be used as a part of a terminal device or other electronic device.
  • as another example, the electronic device 800 may be a terminal device, and the communication unit 805 may be a transceiver or a transceiver circuit of the terminal device; the electronic device 800 may include one or more memories 802, on which a program 804 is stored.
  • the program 804 may be executed by the processor 801 to generate instructions 803, so that the processor 801 executes the photo processing method described in the above method embodiment according to the instructions 803.
  • data may also be stored in the memory 802 .
  • the processor 801 may also read data stored in the memory 802 .
  • the data may be stored at the same storage address as the program 804 , or may be stored at a different storage address from the program 804 .
  • the processor 801 and the memory 802 may be provided separately or integrated together, for example, integrated on a system on chip (SOC) of a terminal device.
  • the memory 802 can be used to store the related program 804 of the photographing processing method provided in the embodiment of the present application
  • the processor 801 can be used to call the related program 804 of the photo processing method stored in the memory 802 when executing the photo processing method, and to execute the photo processing method of the embodiments of the present application; for example: a first operation is detected, the first operation being an operation of instructing the electronic device to take a photo; in response to the first operation, a first data packet is acquired; the first data packet is stored in the image data queue, the data packets stored in the image data queue are used to generate captured images, and the first data packet is the packet with the earliest capture time in the image data queue; image processing is performed on the first data packet to generate a first captured image; a second operation is detected, the second operation including N shooting operations whose time interval is less than a preset time length, the shooting operation being an operation of instructing the electronic device to capture an image, and N being an integer greater than or equal to 2; in response to the second operation, N data packets corresponding one-to-one to the N shooting operations are acquired and stored in the image data queue in order of capture time from earliest to latest; after the first captured image is generated, a second data packet, which is the packet with the latest capture time in the image data queue, is acquired; the image processing is performed on the second data packet to generate a second captured image; a third operation of clicking a thumbnail image of the second captured image is detected; and in response to the third operation, the second captured image is displayed.
  • the present application also provides a computer program product, which, when executed by the processor 801, implements the photo processing method in any method embodiment of the present application.
  • the computer program product may be stored in the memory 802 , such as a program 804 , which is finally converted into an executable target file that can be executed by the processor 801 after preprocessing, compiling, assembling, and linking.
  • the present application also provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a computer, the photo processing method of any method embodiment in the present application is implemented.
  • the computer program can be a high-level language program or an executable target program.
  • the computer-readable storage medium is, for example, memory 802.
  • Memory 802 may be a volatile memory or a non-volatile memory, or memory 802 may include both volatile memory and non-volatile memory.
  • the non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory.
  • the volatile memory may be a random access memory (RAM), which is used as an external cache.
  • by way of example and not limitation, many forms of RAM are available, such as static random access memory (static RAM, SRAM), dynamic random access memory (dynamic RAM, DRAM), synchronous dynamic random access memory (synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (double data rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (enhanced SDRAM, ESDRAM), synchlink dynamic random access memory (synchlink DRAM, SLDRAM), and direct rambus random access memory (direct rambus RAM, DR RAM).
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the magnitude of the sequence numbers of the processes does not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
  • if the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium.
  • the computer software product is stored in a storage medium, including several instructions for a computer device (which can be a personal computer, server, or network device, etc.) to perform all or part of the steps of the various embodiments of the present application.
  • the aforementioned storage medium includes: U disk, mobile hard disk, read-only memory (ROM), random access memory (RAM), disk or optical disk, and other media that can store program codes.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)
  • Television Signal Processing For Recording (AREA)
  • Facsimiles In General (AREA)

Abstract

本申请涉及终端领域,提供了一种拍照处理方法和电子设备;该方法包括:检测到第一操作,第一操作为指示电子设备拍照的操作;响应于第一操作,获取图像数据队列中的第一数据包;对第一数据包进行图像处理,生成第一拍摄图像;检测到第二操作,第二操作包括N次拍摄操作;响应于第二操作,获取N个数据包;在生成第一拍摄图像之后,获取图像数据队列中的第二数据包;对第二数据包进行图像处理,生成第二拍摄图像;检测到第三操作,第三操作为点击第二拍摄图像的缩略图像的操作;响应于第三操作,显示第二拍摄图像;基于本申请的方案,在电子设备检测到点击缩略图像的操作时,能够在一定程度上快速地显示拍摄图像,提高拍摄体验。

Description

拍照处理方法和电子设备
本申请要求于2022年11月22日提交国家知识产权局、申请号为202211468036.3、申请名称为“拍照处理方法和电子设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及终端领域,具体地,涉及一种拍照处理方法和电子设备。
背景技术
随着电子设备中拍摄功能的发展,相机应用程序在电子设备中的应用越来越广泛。目前,在电子设备检测到多次连续拍照操作的情况下,电子设备通常会最后处理拍摄时间最晚的数据包;但是,在拍照结束后,用户通常会通过点击拍摄时间最晚的缩略图像对实际的拍摄图像进行查看;由于电子设备通常在最后处理拍摄时间最晚的拍摄图像,因此电子设备需要在一段时长后才能在图库应用程序中显示拍摄时间最晚的拍摄图像;因此,目前在用户点击拍摄时间最晚的缩略图像后,电子设备无法快速地显示实际的拍摄图像;从而导致用户的等待时长较长,拍摄体验感较差。
因此,在电子设备检测到点击缩略图像的情况下,如何快速地显示实际的拍摄图像成为一个亟需解决的问题。
发明内容
本申请提供了一种拍照处理方法和电子设备,在电子设备检测到点击缩略图像的情况下,能够在一定程度上快速地显示拍摄图像,提高拍摄体验。
第一方面,提供了一种拍照处理方法,应用于电子设备,所述拍照处理方法包括:
检测到第一操作,所述第一操作为指示所述电子设备拍照的操作;
响应于所述第一操作,获取第一数据包;
在图像数据队列中存储所述第一数据包,所述图像数据队列中存储的数据包用于生成拍摄图像,所述第一数据包为所述图像数据队列中采集时刻最早的数据包;
对所述第一数据包进行图像处理,生成第一拍摄图像;
检测到第二操作,所述第二操作包括N次拍摄操作,所述N次拍摄操作的时间间隔小于预设时长,所述拍摄操作为指示所述电子设备采集图像的操作,N为大于或者等于2的整数;
响应于所述第二操作,获取N个数据包,所述N个数据包与所述N次拍摄操作一一对应;
在所述图像数据队列中基于采集时刻从早到晚的顺序存储所述N个数据包;
在生成所述第一拍摄图像之后,获取所述图像数据队列中的第二数据包,所述第二数据包为在所述图像数据队列中采集时刻最晚的数据包;
对所述第二数据包进行所述图像处理,生成第二拍摄图像;
检测到第三操作,所述第三操作为点击所述第二拍摄图像的缩略图像的操作;
响应于所述第三操作,显示所述第二拍摄图像在本申请的实施例中,电子设备可以按照采集时刻从晚到早的顺序(例如,先采集后处理的顺序)通过采用对图像数据队列中的数据包进行图像处理,生成拍摄图像,从而在一定程度上能够缩短电子设备检测到点击缩略图像的操作与显示拍摄图像之间的时长;可以理解为,电子设备在处理多次连续拍照操作中的第一操作的数据包之后,可以选取存储的图像数据队列中采集时刻最晚的第二数据包进行图像处理,生成第二拍摄图像;从而确保电子设备能够快速处理拍摄时间靠后的拍摄图像,使得电子设备检测到点击缩略图像的操作之后能够快速地显示拍摄图像;在一定程度上能够缩短用户等待拍摄图像的时长,提高用户的拍摄体验。
结合第一方面,在第一方面的某些实现方式中,所述N个数据包包括所述第二数据包与N-1个数据包,在生成所述第二拍摄图像之后,还包括:
基于所述N-1个数据包的采集时刻按照从晚到早的顺序,依次获取所述图像数据队列中的N-1个数据包;
对所述N-1个数据包依次进行所述图像处理,生成N-1个拍摄图像。
在本申请的实施例中,在对图像数据队列中的第一数据包与第二数据包(例如,最后一个数据包)进行图像处理后,可以按照数据包采集时刻从晚到早的顺序对图像数据队列中的其他数据包进行图像处理,生成对应的拍摄图像。
结合第一方面,在第一方面的某些实现方式中,所述第一数据包包括第一结束帧,所述第一结束帧用于指示所述第一数据包在所述图像数据队列中的结束位置;所述在生成所述第一拍摄图像之后,获取所述图像数据队列中的第二数据包,包括:
在第一时刻获取所述第二数据包;其中,所述第一时刻为处理所述第一结束帧的时刻。
在本申请的实施例中,由于图像数据队列为基于拍照操作实时更新的数据队列,因此,为了确保选取的第二数据包为图像数据队列中拍照操作为最晚时刻的数据包;在处理至第一数据包中的第一结束帧时,可以按照数据包采集时刻从晚到早的顺序在图像数据队列中选取第二数据包;从而提高选取的第二数据包的准确性。
结合第一方面,在第一方面的某些实现方式中,所述N个数据包中的每个数据包包括起始帧与结束帧,所述起始帧用于指示一个数据包在所述图像数据队列中的起始位置,所述结束帧用于指示一个数据包在所述图像数据队列中的结束位置;所述在第一时刻获取所述第二数据包,包括:
在所述第一时刻,确定所述图像数据队列中目标起始帧的位置信息,其中,所述目标起始帧为所述图像数据队列中时刻最晚的起始帧;
基于所述目标起始帧的位置信息,获取所述第二数据包。
在本申请的实施例中,由于对于N个数据包中的每个数据包均包括起始帧;因此,在以按照数据包采集时刻从晚到早的顺序选取图像数据队列中的第二数据包时,可以先基于时间信息从晚到早的顺序确定图像数据队列中的目标起始帧(例如,采集时刻最晚的起始帧),该目标起始帧为第二数据包的起始帧;从而基于目标起始帧选取图像数据队列中第二数据包;在本申请的实施例中,由于是通过第二数据包的起始帧的位 置选取图像数据队列中的第二数据包,从而能够在一定程度上减少电子设备的运算量。
结合第一方面,在第一方面的某些实现方式中,所述第一数据包包括第一起始帧,所述第一起始帧用于标识所述第一数据包在所述图像数据队列中的起始位置;所述第一起始帧与所述第一结束帧之间包括M帧图像数据,M为大于或者等于1的整数。
结合第一方面,在第一方面的某些实现方式中,所述M帧图像数据为第一颜色空间的图像数据;所述图像处理包括采用第一算法与第二算法的处理,所述第一算法为所述第一颜色空间的算法,所述第二算法为将所述第一颜色空间的图像转换为第二颜色空间的图像的算法。
结合第一方面,在第一方面的某些实现方式中,所述N次拍摄操作为连续拍照操作。
结合第一方面,在第一方面的某些实现方式中,所述图像数据队列中的N+1个数据包为基于所述第一操作的时间信息与所述第二操作的时间信息在零秒延迟队列中获取的数据包。
在一种可能的实现方式中,电子设备可以基于检测到第一操作的时间信息与第二操作的时间信息,在零秒延迟队列中获取相应的数据包,并将数据包存储在图像数据队列中。
第二方面,提供了一种电子设备,电子设备包括一个或多个处理器与存储器;存储器与一个或多个处理器耦合,存储器用于存储计算机程序代码,计算机程序代码包括计算机指令,一个或多个处理器调用计算机指令以使得电子设备执行:
检测到第一操作,所述第一操作为指示所述电子设备拍照的操作;
响应于所述第一操作,获取第一数据包;
在图像数据队列中存储所述第一数据包,所述图像数据队列中存储的数据包用于生成拍摄图像,所述第一数据包为所述图像数据队列中采集时刻最早的数据包;
对所述第一数据包进行图像处理,生成第一拍摄图像;
检测到第二操作,所述第二操作包括N次拍摄操作,所述N次拍摄操作的时间间隔小于预设时长,所述拍摄操作为指示所述电子设备采集图像的操作,N为大于或者等于2的整数;
响应于所述第二操作,获取N个数据包,所述N个数据包与所述N次拍摄操作一一对应;
在所述图像数据队列中基于采集时刻从早到晚的顺序存储所述N个数据包;
在生成所述第一拍摄图像之后,获取所述图像数据队列中的第二数据包,所述第二数据包为在所述图像数据队列中采集时刻最晚的数据包;
对所述第二数据包进行所述图像处理,生成第二拍摄图像;
检测到第三操作,所述第三操作为点击所述第二拍摄图像的缩略图像的操作;
响应于所述第三操作,显示所述第二拍摄图像。
结合第二方面,在第二方面的某些实现方式中,所述N个数据包包括所述第二数据包与N-1个数据包,在生成所述第二拍摄图像之后,所述一个或多个处理器调用所述计算机指令以使得所述电子设备执行:
基于所述N-1个数据包的采集时刻按照从晚到早的顺序,依次获取所述图像数据队列中的N-1个数据包;
对所述N-1个数据包依次进行所述图像处理,生成N-1个拍摄图像。
结合第二方面,在第二方面的某些实现方式中,所述第一数据包包括第一结束帧,所述第一结束帧用于指示所述第一数据包在所述图像数据队列中的结束位置;所述一个或多个处理器调用所述计算机指令以使得所述电子设备执行:
在第一时刻获取所述第二数据包;其中,所述第一时刻为处理所述第一结束帧的时刻。
结合第二方面,在第二方面的某些实现方式中,所述N个数据包中的每个数据包包括起始帧与结束帧,所述起始帧用于指示一个数据包在所述图像数据队列中的起始位置,所述结束帧用于指示一个数据包在所述图像数据队列中的结束位置;所述一个或多个处理器调用所述计算机指令以使得所述电子设备执行:
在所述第一时刻,确定所述图像数据队列中目标起始帧的位置信息,其中,所述目标起始帧为所述图像数据队列中时刻最晚的起始帧;
基于所述目标起始帧的位置信息,获取所述第二数据包。
结合第二方面,在第二方面的某些实现方式中,所述第一数据包包括第一起始帧,所述第一起始帧用于标识所述第一数据包在所述图像数据队列中的起始位置;所述第一起始帧与所述第一结束帧之间包括M帧图像数据,M为大于或者等于1的整数。
结合第二方面,在第二方面的某些实现方式中,所述M帧图像数据为第一颜色空间的图像数据;所述图像处理包括采用第一算法与第二算法的处理,所述第一算法为所述第一颜色空间的算法,所述第二算法为将所述第一颜色空间的图像转换为第二颜色空间的图像的算法。
结合第二方面,在第二方面的某些实现方式中,所述N次拍摄操作为连续拍照操作。
结合第二方面,在第二方面的某些实现方式中,所述图像数据队列中的N+1个数据包为基于所述第一操作的时间信息与所述第二操作的时间信息在零秒延迟队列中获取的数据包。
第三方面,提供了一种电子设备,包括用于执行第一方面或者第一方面中的任意一种实现方式中的拍照处理方法的模块/单元。
第四方面,提供一种电子设备,所述电子设备包括一个或多个处理器和存储器与;所述存储器与所述一个或多个处理器耦合,所述存储器用于存储计算机程序代码,所述计算机程序代码包括计算机指令,所述一个或多个处理器调用所述计算机指令以使得所述电子设备执行第一方面或者第一方面中的任意一种实现方式中的拍照处理方法。
第五方面,提供了一种芯片系统,所述芯片系统应用于电子设备,所述芯片系统包括一个或多个处理器,所述处理器用于调用计算机指令以使得所述电子设备执行第一方面或第一方面中的任一种拍照处理方法。
第六方面,提供了一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序代码,当所述计算机程序代码被电子设备运行时,使得该电子设备执行第一方面或者第一方面中的任意一种实现方式中的拍照处理方法。
第七方面,提供了一种计算机程序产品,所述计算机程序产品包括:计算机程序代码,当所述计算机程序代码被电子设备运行时,使得该电子设备执行第一方面或者第一方面中的任意一种实现方式中的拍照处理方法。
在本申请的实施例中,电子设备可以按照采集时刻从晚到早的顺序(例如,先采 集后处理的顺序)通过采用对图像数据队列中的数据包进行图像处理,生成拍摄图像,从而在一定程度上能够缩短电子设备检测到点击缩略图像的操作与显示拍摄图像之间的时长;可以理解为,电子设备在处理多次连续拍照操作中的第一操作的数据包之后,可以选取存储的图像数据队列中采集时刻最晚的第二数据包进行图像处理,生成第二拍摄图像;从而确保电子设备能够快速处理拍摄时间靠后的拍摄图像,使得电子设备检测到点击缩略图像的操作之后能够快速地显示拍摄图像;在一定程度上能够缩短用户等待拍摄图像的时长,提高用户的拍摄体验。
附图说明
图1是一种适用于本申请的电子设备的硬件系统的示意图;
图2是一种现有的图像处理顺序的示意图;
图3是本申请实施例提供的一种图形用户界面的示意图;
图4是本申请实施例提供的另一种图形用户界面的示意图;
图5是本申请实施例提供的一种软件架构示意图;
图6是本申请实施例提供的一种拍照处理方法的示意性流程图;
图7是本申请实施例提供的另一种拍照处理方法的示意性流程图;
图8是本申请实施例提供的一种连续拍照的操作的数据包的示意图;
图9是本申请实施例提供的一种后处理队列的示意图;
图10是本申请实施例提供的又一种拍照处理方法的示意图;
图11是本申请实施例提供的一种生成拍摄图像的顺序的示意图;
图12是本申请实施例提供的一种后处理队列中存储数据包的示意图;
图13是本申请实施例提供的另一种生成拍摄图像的顺序的示意图;
图14是本申请实施例提供的另一种后处理队列中存储数据包的示意图;
图15是本申请实施例提供的又一种生成拍摄图像的顺序的示意图;
图16是本申请实施例提供的又一种后处理队列中存储数据包的示意图;
图17是本申请实施例提供的又一种图形用户界面的示意图;
图18是本申请实施例提供的又一种图形用户界面的示意图;
图19是本申请实施例提供的又一种图形用户界面的示意图;
图20是本申请实施例提供的又一种图形用户界面的示意图;
图21是本申请实施例提供的一种电子设备的结构示意图;
图22是本申请实施例提供的另一种电子设备的结构示意图。
具体实施方式
在本申请的实施例中,以下术语“第一”、“第二”等仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征。在本实施例的描述中,除非另有说明,“多个”的含义是两个或两个以上。
为了便于对本申请实施例的理解,首先对本申请实施例中涉及的相关概念进行简要说明。
1.缩略图像
缩略图像是指电子设备中缓存的分辨率较小的图像,缩略图像与拍摄图像相比图像质量较差;与电子设备中的拍摄图像相比,缩略图像的分辨率较小;可选地,通过拍摄界面的缩略图像可以索引到相册中的实际的拍摄图像。
2.拍摄图像
在本申请的实施例中,拍摄图像可以是指用户检测到一次拍照操作生成的真实图像;可以理解为,电子设备检测到一次拍照操作,生成的存储在图库应用程序中的图像。
3.后处理队列
后处理队列中用于存储生成拍摄图像的数据包;后处理队列中存储的图像数据可以是从零秒延迟(zero shutter lag,ZSL)队列中获取的图像数据;其中,电子设备在进行拍摄之前,通常会在电子设备中显示待拍摄画面的图像,这些显示的图像也被称为预览图像。
可选地,ZSL队列中的图像可以为Raw图像。
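为便于理解，下面给出一个示意性的 Python 草图：用一个固定容量的循环缓冲表示 ZSL 队列，预览期间持续写入 Raw 帧，并在一次拍照操作发生时取出快门时刻附近的若干帧用于组成数据包。该草图仅为帮助理解的假设性示例，并非本申请的实际实现，其中的类名、字段名与容量取值均为假设。

```python
from collections import deque

class ZslQueue:
    """示意性的零秒延迟（ZSL）队列：循环缓存最近的预览 Raw 帧。"""

    def __init__(self, capacity: int = 8):
        self.buffer = deque(maxlen=capacity)      # 队列满时自动淘汰最旧的帧

    def on_preview_frame(self, timestamp: float, raw_frame: bytes) -> None:
        """预览期间每产生一帧 Raw 图像就写入队列。"""
        self.buffer.append((timestamp, raw_frame))

    def frames_for_capture(self, shutter_time: float, num_frames: int = 3):
        """取出快门时刻之前、时间上最接近的 num_frames 帧，用于组成一个数据包。"""
        candidates = [f for f in self.buffer if f[0] <= shutter_time]
        return sorted(candidates, key=lambda f: shutter_time - f[0])[:num_frames]
```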
图1示出了一种适用于本申请的电子设备的硬件系统。
电子设备100可以是手机、智慧屏、平板电脑、可穿戴电子设备、车载电子设备、增强现实(augmented reality,AR)设备、虚拟现实(virtual reality,VR)设备、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本、个人数字助理(personal digital assistant,PDA)、投影仪等等,本申请实施例对电子设备100的具体类型不作任何限制。
电子设备100可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
需要说明的是,图1所示的结构并不构成对电子设备100的具体限定。在本申请另一些实施例中,电子设备100可以包括比图1所示的部件更多或更少的部件,或者,电子设备100可以包括图1所示的部件中某些部件的组合,或者,电子设备100可以包括图1所示的部件中某些部件的子部件。图1示的部件可以以硬件、软件、或软件和硬件的组合实现。
示例性地,处理器110可以包括一个或多个处理单元。例如,处理器110可以包括以下处理单元中的至少一个:应用处理器(application processor,AP)、调制解调处理器、图形处理器(graphics processing unit,GPU)、图像信号处理器(image signal processor,ISP)、控制器、视频编解码器、数字信号处理器(digital signal processor,DSP)、基带处理器、神经网络处理器(neural-network processing unit,NPU)。其中,不同的处理单元可以是独立的器件,也可以是集成的器件。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
在一些实施例中,处理器110可以包括一个或多个接口。例如,处理器110可以包括以下接口中的至少一个:内部集成电路(inter-integrated circuit,I2C)接口、内部集成电路音频(inter-integrated circuit sound,I2S)接口、脉冲编码调制(pulse code modulation,PCM)接口、通用异步接收传输器(universal asynchronous receiver/transmitter,UART)接口、移动产业处理器接口(mobile industry processor interface,MIPI)、通用输入输出(general-purpose input/output,GPIO)接口、SIM接口、USB接口。
示例性地,在本申请的实施例中,处理器110可以用于执行本申请实施例提供的拍照处理方法;例如,检测到第一操作,第一操作为指示电子设备拍照的操作;响应于第一操作,获取第一数据包;在图像数据队列中存储第一数据包,图像数据队列中存储的数据包用于生成拍摄图像,第一数据包为图像数据队列中采集时刻最早的数据包;对第一数据包进行图像处理,生成第一拍摄图像;检测到第二操作,第二操作包括N次拍摄操作,N次拍摄操作的时间间隔小于预设时长,拍摄操作为指示电子设备采集图像的操作,N为大于或者等于2的整数;响应于第二操作,获取N个数据包,N个数据包与N次拍摄操作一一对应;在图像数据队列中基于采集时刻从早到晚的顺序存储N个数据包;在生成第一拍摄图像之后,获取图像数据队列中的第二数据包,第二数据包为在图像数据队列中采集时刻最晚的数据包;对第二数据包进行图像处理,生成第二拍摄图像;检测到第三操作,第三操作为点击第二拍摄图像的缩略图像的操作;响应于第三操作,显示第二拍摄图像。
图1所示的各模块间的连接关系只是示意性说明,并不构成对电子设备100的各模块间的连接关系的限定。可选地,电子设备100的各模块也可以采用上述实施例中多种连接方式的组合。
电子设备100的无线通信功能可以通过天线1、天线2、移动通信模块150、无线通信模块160、调制解调处理器以及基带处理器等器件实现。
天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
电子设备100可以通过GPU、显示屏194以及应用处理器实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194可以用于显示图像或视频。
可选地,显示屏194可以用于显示图像或视频。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD)、有机发光二极管(organic light-emitting diode,OLED)、有源矩阵有机发光二极体(active-matrix organic light-emitting diode,AMOLED)、柔性发光二极管(flex light-emitting diode,FLED)、迷你发光二极管(mini light-emitting diode,Mini LED)、微型发光二极管(micro light-emitting diode,Micro LED)、 微型OLED(Micro OLED)或量子点发光二极管(quantum dot light emitting diodes,QLED)。在一些实施例中,电子设备100可以包括1个或N个显示屏194,N为大于1的正整数。
示例性地,电子设备100可以通过ISP、摄像头193、视频编解码器、GPU、显示屏194以及应用处理器等实现拍摄功能。
示例性地,ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过摄像头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP可以对图像的噪点、亮度和色彩进行算法优化,ISP还可以优化拍摄场景的曝光和色温等参数。在一些实施例中,ISP可以设置在摄像头193中。
示例性地,摄像头193(也可以称为镜头)用于捕获静态图像或视频。可以通过应用程序指令触发开启,实现拍照功能,如拍摄获取任意场景的图像。摄像头可以包括成像镜头、滤光片、图像传感器等部件。物体发出或反射的光线进入成像镜头,通过滤光片,最终汇聚在图像传感器上。成像镜头主要是用于对拍照视角中的所有物体(也可以称为待拍摄场景、目标场景,也可以理解为用户期待拍摄的场景图像)发出或反射的光汇聚成像;滤光片主要是用于将光线中的多余光波(例如除可见光外的光波,如红外)滤去;图像传感器可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。图像传感器主要是用于对接收到的光信号进行光电转换,转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。
示例性地,数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备100在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
示例性地,视频编解码器用于对数字视频压缩或解压缩。电子设备100可以支持一种或多种视频编解码器。这样,电子设备100可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1、MPEG2、MPEG3和MPEG4。
示例性地,陀螺仪传感器180B可以用于确定电子设备100的运动姿态。在一些实施例中,可以通过陀螺仪传感器180B确定电子设备100围绕三个轴(即,x轴、y轴和z轴)的角速度。陀螺仪传感器180B可以用于拍摄防抖。例如,当快门被按下时,陀螺仪传感器180B检测电子设备100抖动的角度,根据角度计算出镜头模组需要补偿的距离,让镜头通过反向运动抵消电子设备100的抖动,实现防抖。陀螺仪传感器180B还可以用于导航和体感游戏等场景。
示例性地,加速度传感器180E可检测电子设备100在各个方向上(一般为x轴、y轴和z轴)加速度的大小。当电子设备100静止时可检测出重力的大小及方向。加速度传感器180E还可以用于识别电子设备100的姿态,作为横竖屏切换和计步器等应用程序的输入参数。
示例性地,距离传感器180F用于测量距离。电子设备100可以通过红外或激光测量距离。在一些实施例中,例如在拍摄场景中,电子设备100可以利用距离传感器180F测距以实现快速对焦。
示例性地,环境光传感器180L用于感知环境光亮度。电子设备100可以根据感知的环境光亮度自适应调节显示屏194亮度。环境光传感器180L也可用于拍照时自动调节白平衡。环境光传感器180L还可以与接近光传感器180G配合,检测电子设备100是否在口袋里,以防误触。
示例性地,指纹传感器180H用于采集指纹。电子设备100可以利用采集的指纹特性实现解锁、访问应用锁、拍照和接听来电等功能。
示例性地,触摸传感器180K,也称为触控器件。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,触摸屏也称为触控屏。触摸传感器180K用于检测作用于其上或其附近的触摸操作。触摸传感器180K可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于电子设备100的表面,并且与显示屏194设置于不同的位置。
目前,电子设备通常是按照先采集先处理的方式对图像传感器采集的数据包进行图像处理;由于电子设备按照先采集先处理的方式对数据包进行处理,则拍摄时间最晚的图像电子设备通常在最后处理;但是,在用户结束拍照操作后,用户通常会点击缩略图查看实际的拍摄图像,此时缩略图像通常为拍摄时间最晚的缩略图像;由于电子设备通常在最后处理拍摄时间最晚的拍摄图像,因此电子设备需要一段时长后才能在图库应用程序中显示拍摄时间最晚的拍摄图像;因此,目前在用户点击缩略图像后,存在图库应用程序中无法快速地显示拍摄时间最晚的拍摄图像的问题;从而导致用户的等待时长较长,拍摄体验感较差。
示例性地,如图2所示,以电子设备检测到3次快速连续拍摄的操作进行举例说明;在电子设备检测到3次快速连续拍摄的操作的情况下,电子设备检测到拍照操作的顺序为:第一次拍照操作、第二次拍照操作与第三次拍照操作;由于电子设备按照先采集先处理的方式对采集的图像进行处理;其中,第一次拍摄操作对应电子设备采集数据包1,第二次拍照操作对应电子设备采集数据包2,第三次拍照操作对应电子设备采集数据包3;电子设备处理数据包的顺序为:数据包1、数据包2与数据包3;电子设备检测到点击缩略图像后显示拍摄图像(例如,图库应用程序显示拍摄图像)的顺序为按照时间的逆顺序进行显示,可以理解为电子设备显示拍摄图像的顺序为:第三拍摄图像、第二拍摄图像与第一拍摄图像;由于一次拍摄可以采集一帧或者多帧图像,通过对一帧或者多帧图像的处理生成一次拍摄的拍摄图像;在电子设备检测到第三次拍照操作后,电子设备可能还在处理第一次拍摄的图像或者第二次拍摄的图像;此时,电子设备无法在用户点击第三次拍摄的缩略图像之后,快速地显示第三次拍摄的拍摄图像;导致用户的等待时长较长,拍摄体验感较差。
例如,假设电子设备采集一个数据包的时长为0.1秒;电子设备处理一个数据包所需时长为3秒,即电子设备生成一张拍摄图像所需的时长为3秒;若电子设备检测到快速地3次连续拍照操作,则电子设备需要9秒才能生成第三拍摄图像;因此,用户在完成拍照操作,点击第三拍摄图像的缩略图像后,电子设备还需要一段时长后才能显示第三拍摄图像;导致用户的等待时长较长,拍摄体验感较差。
示例性地,电子设备运行相机应用程序后,显示预览界面201,如图3中的(a) 所示;预览界面201中包括预览图像、拍照控件202与缩略图显示控件203,其中,缩略图显示控件203中显示的图像为上一次拍照的缩略图像;电子设备检测到多次快速连续拍摄的操作;例如,电子设备检测到点击拍照控件202的操作,如图3中的(b)所示;在电子设备检测到点击拍照控件202的操作之后,电子设备检测到弹起拍照控件202的操作,显示显示界面204,如图3中的(c)所示;在电子设备检测到弹起拍照控件202的操作之后,电子设备检测到点击拍照控件202的操作,如图3中的(d)所示;在电子设备检测到点击拍照控件202的操作之后,电子设备检测到弹起拍照控件202的操作,显示显示界面205,如图4中的(a)所示;在电子设备检测到弹起拍照控件202的操作之后,电子设备检测到点击缩略图显示控件203的操作,如图4中的(b)所示;在电子设备检测到点击缩略图显示控件203的操作之后,电子设备可以显示显示界面206,显示界面206中包括第二次拍照操作采集的图像;由于目前电子设备按照先采集先处理的方式对数据包进行图像处理,因此在电子设备检测到点击缩略图显示控件203的操作时,可能正在处理第一次拍照操作采集的第一拍摄图像还未处理第二次拍照操作采集的数据包;因此,此时显示的第二次拍照的图像为通过对第二次拍照的缩略图像进行缩放处理得到的图像,并非第二次拍照的真实图像;例如,如图4中的(c)所示,显示界面206中显示的图像中包括图像区域207,图像区域207中细节信息较差;在一定时长之后,电子设备生成第二次拍照的拍摄图像,显示显示界面208,显示界面208中包括第二次拍照的拍摄图像,即第二次拍照摄的真实图像;第二次拍照图像的真实图像中包括图像区域209;图像区域209中的细节信息优于图像区域207中的细节信息,如图4中的(d)所示。
需要说明的是,用户在完成拍照操作后通常会点击缩略图显示控件203查看拍摄图像;在点击缩略图显示控件203之后,用户希望能够快速地在图库应用程序中显示拍摄图像;若电子设备按照目前的先采集先处理的方式(例如,按照采集时刻从早到晚的顺序)对图像进行处理,则电子设备显示无法在检测到弹起拍照控件后快速显示拍摄图像,导致用户的拍摄等待时长较长,拍摄体验较差。
还应理解,电子设备检测到一次点击拍照控件的操作与一次弹起拍照控件的操作可以是看作是检测到一次拍照操作;多次快速连续拍摄的操作可以是指电子设备在较短的时间内检测到多次拍照操作;例如,可以是在较短的时间内电子设备检测到3次拍照操作,3次拍照操作包括第一次点击拍照控件的操作与第一次弹起拍照控件的操作;第二次点击拍照控件的操作与第二次弹起拍照控件的操作;第三次点击拍照控件的操作与第三次弹起拍照控件的操作。
有鉴于此,本申请的实施例提供了一种拍照处理方法和电子设备;在本申请的实施例中,电子设备通过按照采集时刻从早到晚的顺序采对采集的图像数据包进行图像处理,生成拍摄图像,从而在一定程度上能够缩短电子设备检测到点击缩略图像的操作与显示拍摄图像之间的时长;可以理解为,电子设备在处理连续拍照操作中第一次拍照操作的数据包之后,可以选取存储的图像数据队列中逆序方向的第一个数据包进行图像处理,即获取拍摄时间最晚的数据包进行图像处理;从而确保电子设备能够快速处理拍摄时间靠后的拍摄图像,使得电子设备检测到点击缩略图像的操作之后能够快速地显示拍摄图像;在一定程度上能够缩短用户等待拍摄图像的时长,提高用户的拍摄体验。
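上述“先处理最早的数据包、之后每次选取队列中采集时刻最晚的数据包”的处理顺序，可以用下面的 Python 草图直观表示；该草图只是对处理顺序的简化示意（假设所有数据包已在队列中），并非本申请的实际实现。

```python
def processing_order(packets):
    """packets：按采集时刻从早到晚排列的数据包列表。
    返回电子设备生成拍摄图像的顺序（简化模型）。"""
    pending = list(packets)
    order = []
    if pending:
        order.append(pending.pop(0))   # 先处理采集时刻最早的数据包
    while pending:
        order.append(pending.pop())    # 之后每次选取采集时刻最晚的数据包
    return order

# 例如三次快速连拍对应的数据包 1、2、3
print(processing_order([1, 2, 3]))     # 输出 [1, 3, 2]
```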
图5是本申请实施例提供的电子设备的软件系统的示意图。
如图5所示,系统架构中可以包括应用层210、应用框架层220、硬件抽象层230、驱动层240以及硬件层250。
示例性地,应用层210可以包括图库应用程序。
可选地,应用层210中还可以包括相机应用程序、日历、通话、地图、导航、WLAN、蓝牙、音乐、视频、短信息等应用程序。
示例性地,应用框架层220为应用层的应用程序提供应用程序编程接口(application programming interface,API)和编程框架;应用框架层可以包括一些预定义的函数。
例如,应用框架层220中包括窗口管理器、内容提供器、资源管理器、通知管理器和视图系统。
其中,窗口管理器用于管理窗口程序;窗口管理器可以获取显示屏大小,判断是否有状态栏、锁定屏幕和截取屏幕。
内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。数据可以包括视频、图像、音频、拨打和接听的电话、浏览历史和书签、以及电话簿。
资源管理器为应用程序提供各种资源,比如本地化字符串、图标、图片、布局文件和视频文件。
通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,可以短暂停留后自动消失,无需用户交互。比如通知管理器被用于下载完成告知和消息提醒。通知管理器还可以管理以图表或者滚动条文本形式出现在系统顶部状态栏的通知,例如后台运行的应用程序的通知。通知管理器还可以管理以对话窗口形式出现在屏幕上的通知,例如在状态栏提示文本信息、发出提示音、电子设备振动以及指示灯闪烁。
视图系统包括可视控件,例如显示文字的控件和显示图片的控件。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成,例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。
示例性地,硬件抽象层230用于将硬件抽象化。
例如,硬件抽象层230中包括后处理队列、选帧模块与图像处理模块;其中,后处理队列中用于存储用于生成拍摄图像的数据包;图像处理模块用于对选帧模块选取的数据包进行图像处理,生成拍摄图像;选帧模块用于执行本申请实施例提供的拍照处理方法;例如,选帧模块可以用于通过本申请实施例提供的拍照处理方法从后处理队列中选取数据包并传输至图像处理模块,图像处理模块用于对获取的数据包进行图像处理生成拍摄图像。
示例性地,驱动层240用于为不同硬件设备提供驱动。
例如,驱动层可以包括显示屏驱动与相机驱动。
示例性地,硬件层250位于软件系统的最底层。
例如,硬件层250可以包括显示屏与相机模组;其中,显示屏用于显示视频;相机模组用于采集图像。
下面结合图6至图16对本申请实施例提供的拍照处理方法进行详细地描述。
图6是本申请实施例提供的一种拍照处理方法的示意性流程图。该方法300可以由图1所示的电子设备执行;该方法300包括S301至S311,下面分别对S301至S311进行详细的描述。
应理解,图6所示的图像数据队列可以是指图5或者图7所示的后处理队列。
S301、检测到第一操作。
其中,第一操作为指示电子设备拍照的操作。
示例性地,在第一预设时长内,电子设备未检测到拍摄操作;在第一预设时长之后,电子设备检测到的拍照操作可以是指为第一操作。
示例性地,第一操作可以包括点击拍照控件的操作与弹起拍照控件的操作。
可选地,第一操作可以为后续图7所示的第一拍照操作,此处不再赘述。
S302、响应于第一操作,获取第一数据包。
其中,第一数据包为图像数据队列中采集时刻最早的数据包。
可选地,在电子设备检测到用户弹起拍照控件的操作后,响应于弹起拍照控件的操作,触发电子设备采集第一数据包。
示例性地,第一数据包中可以包括M个图像帧;在电子设备检测到用户弹起拍照控件的操作后,响应于弹起拍照控件的操作,触发电子设备中的图像传感器采集M个图像帧。
示例性地,可以根据第一操作的时间信息在图像数据队列中获取第一数据包;第一数据包可以为第一操作对应的数据包,第一操作为多次连续拍照操作中拍照时刻最早的拍照操作。
可选地,在一种实现方式中,第一数据包包括第一起始帧,第一起始帧用于标识第一数据包在图像数据队列中的起始位置;第一起始帧与第一结束帧之间包括M帧图像数据,M为大于或者等于1的整数。
示例性地,如图8所示第一数据包可以为数据包1,数据包1中可以包括起始帧、图像帧1、图像帧2、图像帧N与结束帧;其中,起始帧用于标识数据包1的开始位置;结束帧用于标识数据包1的结束位置。
应理解,第一起始帧可以是指第一数据包的包头;第一结束帧也可以是指第一数据包的包尾。
还应理解,第一数据包中可以包括一个图像帧或者数据包中也可以包括多个图像帧;本申请实施例对第一数据包中图像帧的数量不作任何限定。
示例性地,如图9中的(a)所示,第一数据包可以为数据包1,数据包1可以包括起始帧、至少一个图像帧(例如,3个图像帧)与结束帧;其中,起始帧用于标识数据包1在后处理队列中的起始位置;结束帧用于标识数据包1在后处理队列中的终止位置。
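结合图8与图9的描述，下面用一个假设性的 Python 草图示意数据包的组成（起始帧、M 帧图像数据、结束帧）以及按采集时刻先后存入后处理队列的过程；其中的帧表示方式与函数名均为便于说明的假设，并非本申请的实际数据结构。

```python
from collections import deque

START, IMAGE, END = "start", "image", "end"        # 帧类型：起始帧 / 图像帧 / 结束帧

def make_packet(op_id: int, num_image_frames: int = 3):
    """构造一个数据包：起始帧 + M 帧图像数据 + 结束帧（op_id 为拍照操作编号）。"""
    frames = [(START, op_id)]
    frames += [(IMAGE, op_id) for _ in range(num_image_frames)]
    frames.append((END, op_id))
    return frames

post_process_queue = deque()                        # 后处理队列（图像数据队列）

def enqueue_packet(frames) -> None:
    """按采集时刻从早到晚的顺序，把数据包的各帧依次存入后处理队列。"""
    post_process_queue.extend(frames)

# 示例：第一次拍照操作的数据包（数据包 1）入队
enqueue_packet(make_packet(op_id=1))
```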
S303、在图像数据队列中存储第一数据包。
其中,图像数据队列中存储的数据包用于生成拍摄图像,第一数据包为图像数据队列中采集时刻最早的数据包。
可选地,S330的实现方式可以参见后续图7中S403的相关描述,此处不再赘述。
S304、对第一数据包进行图像处理,生成第一拍摄图像。
可选地,在一种实现方式中,M帧图像数据为第一颜色空间的图像数据;图像处理包括采用第一算法与第二算法的处理,第一算法为第一颜色空间的算法,第二算法为将第一颜色空间的图像转换为第二颜色空间的图像的算法。
可选地,第一数据包中的图像帧可以为原始图像数据,即图像帧可以为Raw图像;其 中,图像处理包括第一算法与第二算法;其中,第一算法为第一颜色空间的算法;第二算法为将第一颜色空间的图像转换为第二颜色空间的图像的算法。
示例性地,第一算法为Raw颜色空间的算法;Raw颜色空间的算法可以包括但不限于:黑电平矫正处理(Black Level Correction,BLC)、镜头阴影校正(Lens Shading Correction,LSC)等算法。
其中,黑电平矫正处理用于对黑电平进行校正处理,黑电平是指在经过一定校准的显示装置上,没有一行光亮输出的视频信号电平;进行黑电平校正的原因在于:一方面由于图像传感器存在暗电流,导致在没有光照的情况下像素也存在电压输出的问题;另一方面,由于图像传感器进行模数转换时精度不够。镜头阴影校正(Lens Shading Correction,LSC)用于消除由于镜头光学系统原因造成的图像四周颜色以及亮度与图像中心不一致的问题。
示例性地,第二算法包括将Raw图像转换为YUV颜色空间的图像;将YUV颜色空间的图像转换为其他存储格式的算法;其中,其他存储格式包括JPEG格式(JPG格式)、GIF格式、DNG格式或者RAW格式等。
可选地,上述是对图像处理算法的举例说明;上述对数据包的图像处理过程可以参见现有的任意一种生成拍摄图像的算法,本申请对此不作任何限定。
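下面的 Python 草图示意上述两段式图像处理：先对 Raw 域（第一颜色空间）的图像数据执行第一算法（如黑电平矫正 BLC、镜头阴影校正 LSC），再通过第二算法将其转换为 YUV 等第二颜色空间并编码为 JPEG 等存储格式。各处理函数均为占位的假设实现，仅用于表示处理顺序，并非本申请实际采用的算法。

```python
def black_level_correction(raw_frames):      # 假设的占位实现：黑电平矫正
    return raw_frames

def lens_shading_correction(raw_frames):     # 假设的占位实现：镜头阴影校正
    return raw_frames

def raw_to_yuv(raw_frames):                  # 假设的占位实现：Raw 域转换为 YUV
    return raw_frames

def encode_jpeg(yuv_frames):                 # 假设的占位实现：YUV 编码为 JPEG 字节流
    return b"jpeg-bytes"

def first_algorithm(raw_frames):
    """第一算法：第一颜色空间（Raw 域）内的处理。"""
    return lens_shading_correction(black_level_correction(raw_frames))

def second_algorithm(raw_frames):
    """第二算法：将第一颜色空间的图像转换为第二颜色空间并编码存储。"""
    return encode_jpeg(raw_to_yuv(raw_frames))

def generate_captured_image(image_frames):
    """对一个数据包中的 M 帧图像数据执行图像处理，生成拍摄图像。"""
    return second_algorithm(first_algorithm(image_frames))
```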
S305、检测到第二操作。
其中,第二操作包括N次拍摄操作,N次拍摄操作的时间间隔小于预设时长,拍摄操作为指示电子设备采集图像的操作,N为大于或者等于2的整数。
可选地,N次拍摄操作可以包括如图7所示的第二拍照操作与第三拍照操作;实现方式可以参见图7所示的相关描述,此处不再赘述。
S306、响应于第二操作,获取N个数据包。
其中,N个数据包与N次拍摄操作一一对应。
可选地,N个数据包可以包括如图7所示的数据包2与数据包3;实现方式可以参见图7所示的相关描述,此处不再赘述。
S307、在图像数据队列中基于采集时刻从早到晚的顺序存储N个数据包。
可选地,如图9所示,数据包1可以为第一数据包;N个数据包可以包括数据包2与数据包3;基于采集时刻从早到晚的顺序存储N个数据包的实现方式可以参见图9的相关描述,此处不再赘述。
S308、在生成第一拍摄图像之后,获取图像数据队列中的第二数据包。
其中,第二数据包为在图像数据队列中采集时刻最晚的数据包。
可选地,第一数据包包括第一结束帧,第一结束帧用于指示第一数据包在图像数据队列中的结束位置;在生成第一拍摄图像之后,获取图像数据队列中的第二数据包,包括:
在第一时刻获取第二数据包;其中,第一时刻为处理第一结束帧的时刻。
在本申请的实施例中,由于图像数据队列为基于拍照操作实时更新的数据队列,因此,为了确保选取的第二数据包为图像数据队列中拍照操作为最晚时刻的数据包;在处理至第一数据包中的第一结束帧时,可以按照数据包采集时刻从晚到早的顺序在图像数据队列中选取第二数据包;从而提高选取的第二数据包的准确性。
可选地,N个数据包中的每个数据包包括起始帧与结束帧,起始帧用于指示一个 数据包在图像数据队列中的起始位置,结束帧用于指示一个数据包在图像数据队列中的结束位置;在第一时刻获取第二数据包,包括:
在第一时刻,确定图像数据队列中目标起始帧的位置信息,其中,目标起始帧为图像数据队列中时刻最晚的起始帧;
基于目标起始帧的位置信息,获取第二数据包。
在本申请的实施例中,由于对于N个数据包中的每个数据包均包括起始帧;因此,在以按照数据包采集时刻从晚到早的顺序选取图像数据队列中的第二数据包时,可以先基于时间信息从晚到早的顺序确定图像数据队列中的目标起始帧(例如,采集时刻最晚的起始帧),该目标起始帧为第二数据包的起始帧;从而基于目标起始帧选取图像数据队列中第二数据包;在本申请的实施例中,由于是通过第二数据包的起始帧的位置选取图像数据队列中的第二数据包,从而能够在一定程度上减少电子设备的运算量。
可选地,获取第二数据包的实现方式可以参见后续图10的相关描述,此处不再赘述。
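沿用前文草图中“起始帧 / 图像帧 / 结束帧”的表示方式（make_packet 等名称为假设），下面的 Python 草图示意在第一时刻（处理到第一结束帧时）如何逆向遍历后处理队列、定位目标起始帧并取出第二数据包；该草图仅为假设性的示意实现。

```python
def select_latest_packet(frame_queue):
    """frame_queue：后处理队列中尚未处理的帧列表，元素形如 (帧类型, 拍照操作编号)。
    从队尾向队头逆向遍历，找到的第一个起始帧即目标起始帧（采集时刻最晚的起始帧），
    并取出该起始帧到对应结束帧之间的整个数据包。"""
    for i in range(len(frame_queue) - 1, -1, -1):
        kind, op_id = frame_queue[i]
        if kind == START:
            j = i
            packet = []
            while j < len(frame_queue):
                packet.append(frame_queue[j])
                if frame_queue[j] == (END, op_id):
                    break
                j += 1
            del frame_queue[i:j + 1]          # 选中的数据包出队，交给图像处理
            return packet
    return None

# 示例：队列中存有第二、第三次拍照操作的数据包，选中的是第三次拍照操作的数据包
queue = make_packet(op_id=2) + make_packet(op_id=3)
print(select_latest_packet(queue))
```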
S309、对第二数据包进行所述图像处理,生成第二拍摄图像。
可选地,N个数据包包括第二数据包与N-1个数据包,在生成第二拍摄图像之后,还包括:
基于N-1个数据包的采集时刻按照从晚到早的顺序,依次获取图像数据队列中的N-1个数据包;
对N-1个数据包依次进行图像处理,生成N-1个拍摄图像。
在本申请的实施例中,在对图像数据队列中的第一数据包与第二数据包(例如,最后一个数据包)进行图像处理后,可以按照数据包采集时刻从晚到早的顺序对图像数据队列中的其他数据包进行图像处理,生成对应的拍摄图像。
可选地,生成N-1个拍摄图像的实现方式可以参见后续图7中S4051或者S408或者S412的相关描述,此处不再赘述。
可选地,所述图像处理包括采用第一算法与第二算法的处理,所述第一算法为所述第一颜色空间的算法,所述第二算法为将所述第一颜色空间的图像转换为第二颜色空间的图像的算法。
S310、检测到第三操作。
其中,第三操作为点击第二拍摄图像的缩略图像的操作。
示例性地,以N次拍照操作为3次拍照操作进行举例说明;由于按拍照时刻的从早到晚的顺序电子设备分别检测到第一拍照操作、第二拍照操作与第三拍照操作;其中,第三拍照操作为拍照时间最晚的拍照操作,因此若在第三拍照操作之后在一段时长内,电子设备未检测到拍照操作,则电子设备的拍摄界面中显示的缩略图像为第三拍照操作的缩略图像;可以理解为,电子设备拍摄界面中通常显示拍照操作的时刻最晚的缩略图像;在电子设备检测到点击第三拍照操作的缩略图像的操作之后,电子设备可以显示存储在图库应用程序中的拍摄图像,即第二拍摄图像。
示例性地,检测到点击第三拍照操作的缩略图像的操作可以如17中的(f)所示。
可选地,可以参见图7中S409的相关描述,此处不再赘述。
S311、响应于所述第三操作,显示第二拍摄图像。
可选地,显示第二拍摄图像可以是在电子设备检测到点击第二拍摄图像的缩略图像时,在图库应用程序中显示第二拍摄图像;第二拍摄图像是指N次拍照操作中拍摄时间最晚的操作采集的拍摄图像。
应理解,在用户结束拍照操作后,用户通常通过点击缩略图像对拍摄图像进行查看;可以理解为,用户通过点击缩略图像查看图库应用程序中存储的实际的拍摄图像。
示例性地,如假设电子设备处理一个数据包的时长为3秒,即电子设备生成一张拍摄图像的时长为3秒;采集一个数据包的时长为1秒;采用现有的方式生成拍照时刻最晚的拍照图像;例如,按照如图2所示的先采集先处理的方式生成拍摄图像,则电子设备在第9秒时才能生成第二拍摄图像;而基于本申请的实施例提供的拍照处理方法,在电子设备生成第一拍摄图像之后,则电子设备可以生成第二拍摄图像;可以理解为,基于本申请实施例提供的拍摄处理方式,如图7所示,电子设备在第6秒时可以生成第二拍摄图像;因此,在本申请的实施例中,在电子设备检测到用户点击第二拍摄图像的缩略图像时,可以快速地显示第二拍摄图像;从而在一定程度上缩短用户的等待时长,提高用户的拍摄体验。
在本申请的实施例中,电子设备可以按照采集时刻从晚到早的顺序(例如,先采集后处理的顺序)通过采用对图像数据队列中的数据包进行图像处理,生成拍摄图像,从而在一定程度上能够缩短电子设备检测到点击缩略图像的操作与显示拍摄图像之间的时长;可以理解为,电子设备在处理多次连续拍照操作中的第一操作的数据包之后,可以选取存储的图像数据队列中采集时刻最晚的第二数据包进行图像处理,生成第二拍摄图像;从而确保电子设备能够快速处理拍摄时间靠后的拍摄图像,使得电子设备检测到点击缩略图像的操作之后能够快速地显示拍摄图像;在一定程度上能够缩短用户等待拍摄图像的时长,提高用户的拍摄体验。
可选地,图6所示的方法中是以电子设备检测到连续3次及3次以上的拍照操作进行举例说明;本申请提供的实施例同样适用于电子设备检测到一次拍照操作与连续的两次拍照操作。
示例性地,对于电子设备检测到一次拍照操作的场景中,在该场景中后处理队列中只包括数据包1;电子设备可以先对数据包1进行图像处理,接着采用上述先采集后处理的方式对数据包进行图像处理;在后处理队列中,由于后处理队列中只包括数据包1,因此在对数据包1进行图像处理后,电子设备未检测到其他数据包,则可以结束图像处理流程。
示例性地,对于电子设备检测到连续两次拍照操作的场景中,在该场景中后处理队列中只包括数据包1与数据包2;电子设备可以先对数据包1进行图像处理,接着采用上述先采集后处理的方式对数据包进行图像处理;在后处理队列中,由于后处理队列中只包括数据包1与数据包2,因此在对数据包1进行图像处理后,按照第一方向的相反方向,选取当前后处理队列中的第一个数据包,即对数据包2进行图像处理。
图7是本申请实施例提供的一种拍照处理方法的示意性流程图。该方法400可以由图1所示的电子设备执行;该方法400包括S401至S412,下面分别对S401至S412进行详细的描述。
S401、运行相机应用程序。
可选地,用户可以通过单击“相机”应用程序的图标,指示电子设备运行相机应用。
示例性地,电子设备处于锁屏状态时,用户可以通过在电子设备的显示屏上向右滑动的手势,指示电子设备运行相机应用。又或者,电子设备处于锁屏状态,锁屏界面上包括相机应用程序的图标,用户通过点击相机应用程序的图标,指示电子设备运行相机应用程序。
可选地,电子设备在运行其他应用程序时,该应用程序具有调用相机应用程序的权限;用户通过点击相应的控件可以指示电子设备运行相机应用程序。
示例性地,电子设备正在运行即时通信类应用程序时,用户可以通过选择相机功能的控件,指示电子设备运行相机应用程序等。
应理解,上述为对运行相机应用程序的操作的举例说明;还可以通过语音指示操作,或者其它操作的指示电子设备运行相机应用程序;本申请对此不作任何限定。
还应理解,运行相机应用程序可以是指启动相机应用程序。
需要说明的是,本申请的拍照处理方法适用于至少一次连续拍摄的场景中;如图8所示,对于一次连续拍照操作,电子设备可以检测到第一拍照操作、第二拍照操作与第N拍照操作等N次拍照操作;对于N次拍照操作中的一次拍照操作,电子设备可以检测到点击拍摄控件的操作与弹起拍摄控件的操作。
可选地,对于第一次拍照操作,电子设备可以基于单帧图像生成拍摄图像,即一次拍摄的真图。
可选地,对于一次拍照操作,电子设备也可以基于多帧图像生成拍摄图像;例如,如图8所示,电子设备可以基于N帧图像生成一张拍摄图像。
S402、检测到第一拍照操作(第一操作的一个示例)。
示例性地,第一拍照操作可以是指电子设备检测到的第一次拍摄操作;例如,在第一预设时长内,电子设备未检测到拍摄操作;在第一预设时长之后,电子设备检测到的拍照操作可以是指为第一次拍照操作。
示例性地,第一拍照操作可以包括点击拍照控件的操作与弹起拍照控件的操作。
S403、将第一拍照操作的数据包存储至后处理队列。
可选地,电子设备可以根据检测待第一拍摄操作的时间信息,在电子设备的ZSL队列中获取第一拍照操作的数据包(例如,数据包1);并将第一拍照的数据包存储至后处理队列。
可选地,第一拍照操作可以包括点击拍照控件的操作与弹起拍照控件的操作,在电子设备检测到弹起拍照控件的操作时,触发电子设备可以基于检测到点击拍照控件的时间信息在电子设备的ZSL队列中获取第一拍照操作的数据包(例如,数据包1);并将第一拍照操作的数据包存储至后处理队列。
示例性地,按照第一方向将第一拍照操作的数据包存储至后处理队列。
应理解,后处理队列中的数据包用于生成拍照操作的拍摄图像,即拍照操作的真实图像;电子设备还包括前处理队列,前处理队列中的数据包用于生成拍摄操作的缩略图像。
S404、获取后处理队列中的第一数据包。
示例性地,在后处理队列中获取第一次拍照操作的数据包;第一次拍照操作可以是在后处理队列中拍照时刻最早的拍照操作。
需要说明的是,第一数据包可以是最早存储至后处理队列中的数据包;若电子设备仅 检测到了一次拍照操作,则后处理队列中只包括第一数据包;例如,后处理队列中只包括数据包1。
可选地,如图9中的(a)所示,第一数据包可以为数据包1,数据包1可以包括起始帧、至少一个图像帧(例如,3个图像帧)与结束帧;其中,起始帧用于标识数据包1在后处理队列中的起始位置;结束帧用于标识数据包1在后处理队列中的终止位置。
应理解,对于连续拍照的场景;在电子设备检测第一拍照操作(例如,第一次拍照操作)之后,电子设备还检测到第二拍摄操作(例如,第二次拍照操作)或者第三拍照操作(例如,第三次拍照操作)等;在检测到第一拍照操作之后,电子设备会对第一拍照操作的数据包进行图像处理,生成第一拍照图像;同时,电子设备可以根据检测到的第二拍照操作或者第三拍照操作不断更新后处理队列,将后续拍照操作的数据包存储至后处理队列;电子设备分别对后处理队列中的数据包进行处理,生成不同拍照操作的拍摄图像;将生成的拍摄图像存在图库应用程序中,在电子设备检测到点击缩略图像时,电子设备显示图库应用程序中该缩略图像对应的拍摄图像,实现拍照图像的回看。
可选地,在步骤S404之后电子设备可以执行第一流程405与第二流程406;其中,第一流程405与第二流程406可以是同步执行的;其中,第一流程450可以是指电子设备生成第一拍照操作的拍摄图像的流程,第一流程405中包括S4051;第二流程406可以是指电子设备检测到拍照操作,并基于拍照操作实时更新后处理队列的流程;以三次拍照操作进行举例说明,则第二流程可以包括S4061至S4064。
S4051、对第一数据包进行图像处理,生成第一拍摄图像。
其中,第一拍摄图像可以是指第一拍照操作对应的拍摄图像,即电子设备基于第一拍照操作生成的真实图像。
示例性地,如图8所示第一数据包可以为数据包1,数据包1中可以包括起始帧、图像帧1、图像帧2、图像帧N与结束帧;其中,起始帧用于标识数据包1的开始位置;结束帧用于标识数据包1的结束位置。
应理解,起始帧可以是指数据包的包头;结束帧也可以是指数据包的包尾。
可选地,在拍摄图像为基于单帧图像生成的情况下,第一数据包中也可以只包括起始帧、图像帧1与结束帧。
应理解,数据包中可以包括一个图像帧或者数据包中也可以包括多个图像帧;本申请实施例对数据包中图像帧的数量不作任何限定。
可选地,第一数据包中的图像帧可以为原始图像数据,即图像帧可以为Raw图像;其中,图像处理包括第一算法与第二算法;其中,第一算法为第一颜色空间的算法;第二算法为将第一颜色空间的图像转换为第二颜色空间的图像的算法。
示例性地,第一算法为Raw颜色空间的算法;Raw颜色空间的算法可以包括但不限于:黑电平矫正处理(Black Level Correction,BLC)、镜头阴影校正(Lens Shading Correction,LSC)等算法。
其中,黑电平矫正处理用于对黑电平进行校正处理,黑电平是指在经过一定校准的显示装置上,没有一行光亮输出的视频信号电平;进行黑电平校正的原因在于:一方面由于图像传感器存在暗电流,导致在没有光照的情况下像素也存在电压输出的问题;另一方面,由于图像传感器进行模数转换时精度不够。镜头阴影校正(Lens Shading Correction,LSC) 用于消除由于镜头光学系统原因造成的图像四周颜色以及亮度与图像中心不一致的问题。
示例性地,第二算法包括将Raw图像转换为YUV颜色空间的图像;将YUV颜色空间的图像转换为其他存储格式的算法;其中,其他存储格式包括JPEG格式(JPG格式)、GIF格式、DNG格式或者RAW格式等。
可选地,上述是对图像处理算法的举例说明;上述对数据包的图像处理过程可以参见现有的任意一种生成拍摄图像的算法,本申请对此不作任何限定。
S4061、检测到第二拍摄操作。
可选地,第二拍照操作可以是指在第二预设时长内检测到第二次拍照操作;其中,第一次拍照操作与第二次拍照操作的间隔时长较短;电子设备可以处于连续拍照场景。
示例性地,第二拍照操作可以包括点击拍照控件的操作与弹起拍照控件的操作。
应理解,第二拍照操作与S402中的第一拍照操作相比,第二拍照操作在第一拍照操作之后,电子设备检测到第二拍照操作的时刻晚于第一拍照操作的时刻。
S4062、将第二拍照操作的数据包存储至后处理队列。
可选地,电子设备可以根据检测待第二拍摄操作的时间信息,在电子设备的ZSL队列中获取第一拍照操作的数据包(例如,数据包2);并将第二拍照的数据包存储至后处理队列。
可选地,第二拍照操作可以包括点击拍照控件的操作与弹起拍照控件的操作,在电子设备检测到弹起拍照控件的操作时,触发电子设备可以基于检测到点击拍照控件的时间信息在电子设备的ZSL队列中获取第二拍照操作的数据包(例如,数据包2);并将第二拍照的数据包存储至后处理队列。
示例性地,按照第一方向将第二拍照操作的数据包存储至后处理队列;例如,按照第一方向,将数据包2存储至数据包1的左边;可以理解为,按照第一方向先存储数据包1,再存储数据包2。
S4063、检测到第三拍照操作。
可选地,第三拍照操作可以是指在第二预设时长内检测到第三次拍照操作;其中,第二次拍照操作与第三次拍照操作的间隔时长较短;电子设备可以处于连续拍照场景。
示例性地,第三拍照操作可以包括点击拍照控件的操作与弹起拍照控件的操作。
应理解,第三拍照操作与S461中的第二拍照操作相比,第三拍照操作在第二拍照操作之后,电子设备检测到第三拍照操作的时刻晚于第三拍照操作的时刻。
S4064、将第三拍照操作的数据包存储至后处理队列。
可选地,电子设备可以根据检测待第三拍摄操作的时间信息,在电子设备的ZSL队列中获取第一拍照操作的数据包(例如,数据包3);并将第三拍照的数据包存储至后处理队列。
可选地,第三拍照操作可以包括点击拍照控件的操作与弹起拍照控件的操作,在电子设备检测到弹起拍照控件的操作时,触发电子设备可以基于检测到点击拍照控件的时间信息在电子设备的ZSL队列中获取第三拍照操作的数据包(例如,数据包3);并将第三拍照的数据包存储至后处理队列。
示例性地,按照第一方向将第三拍照操作的数据包存储至后处理队列;例如,按照第一方向,将数据包3存储至数据包2的左边;可以理解为,按照第一方向先存储数据包1, 再存储数据包2,再存储数据包3。
S407、按照存储方向的相反方向,选取当前后处理队列中的第一个数据包。
应理解,如图7所示,存储方向可以为第一方向,第一方向是指由右向左的方向;则第一方向的相反方向可以是指第一方向的逆序方向;若第一方向为由右向左的方向,则第一方向的逆序方向可以是指由左向右的方向;此外,若电子设备处理一个数据包,则该数据包可以看作是移出后处理队列;可以理解为,后处理队列中存储的数据包均为未处理的数据包;按照存储方向的相反方向选取的第一个数据包则是指拍照操作的时刻最晚的数据包。
示例性地,如图7所示,由于在第一流程405中电子设备对第一数据包进行图像处理;因此,此时后处理队列中包括的未处理的数据包为数据包2与数据包3,按照存储的逆序方向选取后处理队列中的数据包,则获取数据包3。
可选地,在处理第一数据包中的结束帧时,电子设备可以按照存储的逆序方向,选取当前后处理队列中的第一个数据包。
示例性地,如图9中的(b)所示,在电子设备处理第一数据包中的结束帧时(例如,数据包1中的结束帧),当前后处理队列中包括数据包2与数据包3;此时,电子设备可以选取当前的后处理队列中逆序方向的第一个数据包,即选取数据包3;并对数据包3进行图像处理生成拍摄图像。
在本申请的实施例中,电子设备通过采用先采集后处理的方式对采集的图像数据包进行图像处理,生成拍摄图像,从而在一定程度上能够缩短电子设备检测到点击缩略图像的操作与显示拍摄图像之间的时长;可以理解为,电子设备在处理连续拍照操作中第一次拍照操作的数据包之后,可以选取存储的图像数据队列中逆序方向的第一个数据包进行图像处理,即获取拍摄时间最晚的数据包进行图像处理;从而确保电子设备能够快速处理拍摄时间靠后的拍摄图像,使得电子设备检测到点击缩略图像的操作之后能够快速地显示拍摄图像;在一定程度上能够缩短用户等待拍摄图像的时长,提高用户的拍摄体验。
需要说明的是,在本申请的实施例中,方法400可以适用于连续拍摄的场景中;因此,电子设备可以多次检测到拍照操作;在执行S440的同时电子设备还可以检测到其他拍照操作,基于其他拍照操作后处理队列中会不断存入新的数据包,如图9中的(b)所示;在本申请的实施例中,电子设备在处理到第一数据包之后,可以按照先采集后处理的方式顺序对后处理队列中的数据包进行图像处理;而是在后处理队列中选取当前的最后一个数据包进行处理,从而能够在一定程度上缩短用户的拍摄等待时长。
可选地,在处理到第一数据包中的结束帧时,根据逆序处理方式在后处理队列中选取第一个起始帧,获取最后一个数据包。
示例性地,如图10所示,以一次连续拍照操作包括3次拍照操作进行举例说明;其中,按照从前到后的时间顺序电子设备分别检测到第一拍照操作、第二拍照操作与第三拍照操作;第一拍照操作对应的数据包中包括:起始帧、图像帧1、图像帧2、图像帧3与结束帧;第二拍照操作对应的数据包中包括:起始帧、图像帧3、图像帧4、图像帧5与结束帧;第三拍照操作对应的数据包中包括:起始帧、图像帧7、图像帧8、图像帧9与结束帧为后处理队列中存储的第三次拍照操作的数据;在电子设备检测到第一拍照操作之后,在第一时刻选帧模块通过指示信息1指示第一拍照操作的数据包从起始帧开始出队(例 如,开始从后处理队列中获取数据);在第二时刻,选帧模块通过指示信息2指示第一拍照操作的数据包结束出队(例如,停止从后处理队列中获取数据);在第三时刻,选帧模块逆向遍历整个后处理处理队列,找到第一个起始帧的位置,即第三拍照操作的起始帧;通过指示信息3指示第三拍照操作的数据包从起始帧开始出队;在第四时刻,选帧模块通过指示信息4指示第三拍照操作的数据包结束出队;在第五时刻,选帧模块逆向遍历整个后处理处理队列,找到第一个起始帧的位置,即第二次拍摄的起始帧;通过指示信息5指示第二拍照操作的数据包从起始帧开始出队;在第六时刻,选帧模块通过指示信息6指示第二拍照操作的数据包结束出队;选帧模块从后处理队列中选取数据的顺序为:第一拍照操作的数据包、第三拍照操作的数据包与第二拍照操作的数据包。
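基于图10所示的流程，下面的 Python 草图沿用前文假设性的 make_packet 与 select_latest_packet，模拟选帧模块对三次拍照操作的数据包的出队顺序：先出队第一拍照操作的数据包，处理到其结束帧后再逆向选取，最终顺序为第一、第三、第二拍照操作的数据包。该草图仅为示意，并非本申请的实际实现。

```python
def selection_order(num_ops: int):
    """模拟选帧模块的出队顺序，返回各数据包对应拍照操作的编号序列。"""
    queue = []
    for op_id in range(1, num_ops + 1):       # 按采集时刻从早到晚依次入队
        queue += make_packet(op_id)
    order = []
    # 第一拍照操作的数据包从起始帧开始出队，直到其结束帧
    end_of_first = queue.index((END, 1)) + 1
    del queue[:end_of_first]
    order.append(1)
    # 之后每次逆向遍历，选取队列中最晚的起始帧对应的数据包
    while queue:
        packet = select_latest_packet(queue)
        order.append(packet[0][1])
    return order

print(selection_order(3))                      # 输出 [1, 3, 2]
```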
S408、对选取的数据包进行图像处理,生成第二拍摄图像。
可选地,如图7所示,按照存储方向的相反方向,选取当前后处理队列中的第一个数据包为数据包3;对数据包3进行图像处理,生成第二拍摄图像。
应理解,数据包3对应第三拍照操作,即拍照操作时刻最晚的拍照操作;第二拍摄图像为根据第三拍照操作生成的拍摄图像。
可选地,图像处理的实现方式可以参见上述S4051的相关描述,此处不再赘述。
S409、检测到点击缩略图像的操作(第二操作的一个示例)。
应理解,由于按拍照时刻的从早到晚的顺序电子设备分别检测到第一拍照操作、第二拍照操作与第三拍照操作;其中,第三拍照操作为拍照时间最晚的拍照操作,因此若在第三拍照操作之后在一段时长内,电子设备未检测到拍照操作,则电子设备的拍摄界面中显示的缩略图像为第三拍照操作的缩略图像;可以理解为,电子设备拍摄界面中通常显示拍照操作的时刻最晚的缩略图像;在电子设备检测到点击第三拍照操作的缩略图像的操作之后,电子设备可以显示存储在图库应用程序中的拍摄图像,即第二拍摄图像。
示例性地,检测到点击第三拍照操作的缩略图像的操作可以如17中的(f)所示。
S410、显示第二拍摄图像。
可选地,显示第二拍摄图像可以是在图库应用程序中显示第三拍照操作的拍摄图像,即第三拍照操作的实际的拍摄图像。
应理解,在用户结束拍照操作后,用户通常通过点击缩略图像对拍摄图像进行查看;可以理解为,用户通过点击缩略图像查看图库应用程序中存储的实际的拍摄图像。
示例性地,如假设电子设备处理一个数据包的时长为3秒,即电子设备生成一张拍摄图像的时长为3秒;采集一个数据包的时长为1秒;采用现有的方式生成拍照时刻最晚的拍照图像;例如,按照如图2所示的先采集先处理的方式生成拍摄图像,则电子设备在第9秒时才能生成第二拍摄图像;而基于本申请的实施例提供的拍照处理方法,在电子设备生成第一拍摄图像之后,则电子设备可以生成第二拍摄图像;可以理解为,基于本申请实施例提供的拍摄处理方式,如图7所示则电子设备在第6秒时可以生成第二拍摄图像;因此,在本申请的实施例中,在电子设备检测到用户点击第二拍摄图像的缩略图像时,可以快速地显示第二拍摄图像;从而在一定程度上缩短用户的等待时长,提高用户的拍摄体验。
在本申请的实施例中,电子设备通过采用先采集后处理的方式对采集的图像数据包进行图像处理,生成拍摄图像,从而在一定程度上能够缩短电子设备检测到点击缩略图像的操作与显示拍摄图像之间的时长;可以理解为,电子设备在处理连续拍照操作中第一次拍 照操作的数据包之后,可以选取存储的图像数据队列中逆序方向的第一个数据包进行图像处理,即获取拍摄时间最晚的数据包进行图像处理;从而确保电子设备能够快速处理拍摄时间靠后的拍摄图像,使得电子设备检测到点击缩略图像的操作之后能够快速地显示拍摄图像;在一定程度上能够缩短用户等待拍摄图像的时长,提高用户的拍摄体验。
S411、按照存储方向的相反方向,选取当前后处理队列中的第一个数据包。
应理解,如图7所示,存储方向可以为第一方向,第一方向是指由右向左的方向;则第一方向的相反方向(例如,逆序方向)是指第一方向的相反方向;若第一方向为由右向左的方向,则第一方向的逆序方向可以是指由左向右的方向;此外,若电子设备处理一个数据包,则该数据包可以看作是移出后处理队列;可以理解为,后处理队列中存储的数据包均为未处理的数据包;按照存储的逆序方向选取的第一个数据包则是指拍照操作的时刻最晚的数据包。
示例性地,如图7所示,由于在第一流程405中电子设备对第一数据包进行图像处理,S408中对数据包3进行图像处理;因此,此时后处理队列中包括的未处理的数据包为数据包3,按照存储方向的相反方向选取后处理队列中的第一个数据包,则获取数据包2。
S412、对选取的数据包进行图像处理,生成第三拍摄图像。
可选地,如图7所示,按照存储方向的相反方向,选取当前后处理队列中的第一个数据包为数据包2;对数据包2进行图像处理,生成第三拍摄图像。
应理解,数据包2对应第二拍照操作,电子设备检测到第二拍摄操作的时刻位于检测到第一拍摄操作的时刻与检测到第二拍摄操作的时刻之间。
可选地,图像处理的实现方式可以参见上述S4051的相关描述,此处不再赘述。
在本申请的实施例中,电子设备通过采用先采集后处理的方式对采集的图像数据包进行图像处理,生成拍摄图像,从而在一定程度上能够缩短电子设备检测到点击缩略图像的操作与显示拍摄图像之间的时长;可以理解为,电子设备在处理连续拍照操作中第一次拍照操作的数据包之后,可以选取存储的图像数据队列中逆序方向的第一个数据包进行图像处理,即获取拍摄时间最晚的数据包进行图像处理;从而确保电子设备能够快速处理拍摄时间靠后的拍摄图像,使得电子设备检测到点击缩略图像的操作之后能够快速地显示拍摄图像;在一定程度上能够缩短用户等待拍摄图像的时长,提高用户的拍摄体验。
可选地,图7是以电子设备检测到连续3次拍照操作进行举例说明;本申请实施例提供的拍照处理方法同样适用于电子设备检测到一次拍照操作与连续的两次拍照操作的场景。
示例性地,对于电子设备检测到一次拍照操作的场景中,在该场景中后处理队列中只包括数据包1;电子设备可以先对数据包1进行图像处理,接着采用上述先采集后处理的方式对数据包进行图像处理;在后处理队列中,由于后处理队列中只包括数据包1,因此在对数据包1进行图像处理后,电子设备未检测到其他数据包,则可以结束图像处理流程。
示例性地,对于电子设备检测到连续两次拍照操作的场景中,在该场景中后处理队列中只包括数据包1与数据包2;电子设备可以先对数据包1进行图像处理,接着采用上述先采集后处理的方式对数据包进行图像处理;在后处理队列中,由于后处理队列中只包括数据包1与数据包2,因此在对数据包1进行图像处理后,按照存储方向的相反方向,选取当前后处理队列中的第一个数据包,即对数据包2进行图像处理。
可选地,下面结合图11至图16对电子设备检测到连续三次以上的拍照操作进行举例说明。
示例一
可选地,在电子设备检测到一次连续拍照操作时,电子设备可以按照后处理队列中数据包的存储方向的相反方向在后处理队列中选取当前的第一个数据包进行图像处理,生成拍摄图像;从而确保电子设备能够快速处理拍摄时间靠后的拍摄图像,使得电子设备检测到连续拍照操作之后能够快速地生成拍摄图像;在一定程度上能够缩短用户等待拍摄图像的时长,提高用户的拍摄体验。
例如,可以先处理后处理队列中连续拍照操作中第一次拍照操作的数据包;在处理连续拍照操作中第一次拍照操作的数据包之后,处理连续拍照操作中时间最晚的数据包;再依次按照时间顺序从后向前处理不同拍摄操作的数据包;确保电子设备能够快速处理拍摄时间靠后的拍摄图像,使得电子设备检测到连续拍照操作之后能够快速地生成拍摄图像;在一定程度上能够缩短用户等待拍摄图像的时长,提高用户的拍摄体验。
示例性地,如图11所示,以连续拍照操作包括5次拍照操作进行举例说明;5次拍摄操作对应生成5张拍摄图像,分别为拍摄图像1、拍摄图像2、拍摄图像3、拍摄图像4与拍摄图像5;第一次拍摄操作电子设备采集的数据包为数据包1;第二次拍摄操作电子设备采集的数据包为数据包2;第三次拍摄操作电子设备采集的数据包为数据包3;第四次拍摄操作电子设备采集的数据包为数据包4;第五次拍摄操作电子设备采集的数据包为数据包5;如图12所示,假设电子设备处理一个数据包所需的时长为3秒;采集一个数据包的时长为0.1秒,即1秒可以采集10次拍照操作的数据;则在第一时刻(例如,第0.1秒),电子设备采集数据包1;并对数据包1进行图像处理,生成拍摄图像1;在第3.1秒电子设备处理完数据包1;此时,由于电子设备采集一次拍照操作的图像数据为0.1秒,电子设备检测到5次拍照操作;则电子设备在第0.5秒采集5个数据包;在第二时刻(例如,第3.1秒),电子设备按照逆序方向在当前的后处理队列中获取第一个数据包,即数据包5;电子设备对数据包5进行图像处理,生成拍摄图像5;重复执行上述操作,在生成拍摄图像5之后,电子设备按照逆序方向在当前的后处理队列中获取第一个数据包,即数据包4;电子设备对数据包4进行图像处理,生成拍摄图像4;在生成拍摄图像4之后,电子设备按照逆序方向在当前的后处理队列中获取第一个数据包,即数据包3;电子设备对数据包3进行图像处理,生成拍摄图像3;在生成拍摄图像3之后,电子设备按照逆序方向在当前的后处理队列中获取第一个数据包,即数据包2;电子设备对数据包2进行图像处理,生成拍摄图像2;此时,电子设备生成拍摄图像1、拍摄图像2、拍摄图像3、拍摄图像4与拍摄图像5。
需要说明的是,获取后处理队列一个数据包,对数据包进行图像处理可以看作是该数据包移除后处理队列;由于电子设备在对后处理队列中数据包进行处理的同时,电子设备还可能采集基于检测到的拍照操作采集新的数据包;因此,后处理队列中的数据包可以看作是处于实时更新状态,即有新写入的数据包同时也有移出的数据包;在处理至前一个数据包的包尾时,可以选取当前后处理队列中按照逆序方向的第一个数据包;其中,逆序顺序可以理解为与数据包存储至后处理队列的方向相反。
可选地,电子设备按照逆序方向在当前的后处理队列中获取第一个数据包的实现方式 可以参见图9与图10的相关描述,此处不再赘述。
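示例一中的处理顺序也可以用一个简化的 Python 时序模拟来验证：假设处理一个数据包约需 3 秒、采集一个数据包约需 0.1 秒，数据包按入队时刻逐步进入后处理队列，每处理完一个数据包就在当前已入队且未处理的数据包中选取采集时刻最晚的一个。该模拟只是对上述文字描述的简化示意，其中的时间取值与函数名均为假设。

```python
def simulate(arrival_times, process_seconds: float = 3.0):
    """arrival_times[i] 为数据包 i+1 存入后处理队列的时刻（秒）。
    返回生成拍摄图像的顺序（数据包编号序列）。"""
    n = len(arrival_times)
    pending = set(range(1, n + 1))
    order = []
    clock = arrival_times[0]
    current = 1                               # 先处理采集时刻最早的数据包 1
    while True:
        pending.discard(current)
        order.append(current)
        clock += process_seconds              # 处理完当前数据包的时刻
        if not pending:
            break
        ready = [p for p in pending if arrival_times[p - 1] <= clock]
        if not ready:
            ready = list(pending)             # 简化处理：若暂无已入队的数据包，则等待其入队
        current = max(ready)                  # 逆向选取：采集时刻最晚的数据包优先
    return order

# 示例一：5 次连拍，数据包 1～5 在第 0.1～0.5 秒依次入队
print(simulate([0.1, 0.2, 0.3, 0.4, 0.5]))    # 输出 [1, 5, 4, 3, 2]
```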
示例二
可选地,在电子设备检测到至少连续拍照操作时,电子设备可以按照后处理队列中存储数据包的相反方向在后处理队列中选取当前的第一个数据包进行图像处理,生成拍摄图像;从而确保电子设备能够快速处理拍摄时间靠后的拍摄图像,使得电子设备检测到连续拍照操作之后能够快速地生成拍摄图像;在一定程度上能够缩短用户等待拍摄图像的时长,提高用户的拍摄体验。
情况一
例如,以电子设备检测到两次连续拍照操作进行举例说明,其中,第一次连续拍照操作包括5次拍照操作;例如,第一次连续拍照操作包括第一次拍照操作至第五次拍摄操作;第二次连续拍照操作包括2次拍照操作;例如,第二次连续拍照操作包括第六次拍照操作与第七次拍照操作;如图13所示,7次拍摄操作对应生成7张拍摄图像,分别为拍摄图像1、拍摄图像2、拍摄图像3、拍摄图像4、拍摄图像5、拍摄图像6与拍摄图像7;第一次拍摄操作电子设备采集的数据包为数据包1;第二次拍摄操作电子设备采集的数据包为数据包2;第三次拍摄操作电子设备采集的数据包为数据包3;第四次拍摄操作电子设备采集的数据包为数据包4;第五次拍摄操作电子设备采集的数据包为数据包5;第六次拍摄操作电子设备采集的数据包为数据包6;第七次拍摄操作电子设备采集的数据包为数据包7,如图14所示。
示例性地,如图13与图14所示,在第一时刻,电子设备在后处理队列中按照数据包的存储方向的相反方向(例如,逆序方向)获取第一个数据包进行图像处理;例如,在第一时刻电子设备获取数据包1进行图像处理,生成拍摄图像1;在电子设备对数据包1进行图像处理的同时,基于电子设备检测到的拍照操作,后处理队列中存储数据包2至数据包5;在第二时刻,电子设备处理至数据包1中的结束帧时,电子设备在后处理队列中按照数据包的存储方向的相反方向获取第一个数据包,即数据包5;电子设备对数据包5进行图像处理,生成拍摄图像5;在第三时刻,电子设备处理至数据包5中的结束帧时,电子设备在后处理队列中按照存储方向的相反方向获取第一个数据包,即数据包7;电子设备对数据包7进行图像处理,生成拍摄图像7;重复执行上述操作,在生成拍摄图像7之后,电子设备在后处理队列中按照数据包的存储方向的相反方向获取第一个数据包,即数据包6;电子设备对数据包6进行图像处理,生成拍摄图像6;在生成拍摄图像6之后,即电子设备处理至数据包6中的结束帧时,电子设备按照存储方向的相反方向在当前的后处理队列中获取第一个数据包,即数据包4;电子设备对数据包4进行图像处理,生成拍摄图像4;在生成拍摄图像4之后,电子设备在后处理队列中按照数据包的存储方向的相反方向在当前的后处理队列中获取第一个数据包,即数据包3;电子设备对数据包3进行图像处理,生成拍摄图像3;在生成拍摄图像3之后,电子设备在后处理队列中按照数据包的存储方向的相反方向获取第一个数据包,即数据包2;电子设备对数据包2进行图像处理,生成拍摄图像2;此时,电子设备生成拍摄图像1、拍摄图像2、拍摄图像3、拍摄图像4、拍摄图像5、拍摄图像6与拍摄图像7。
应理解,电子设备在处理当前数据包的结束帧时,电子设备在后处理队列中可以按照数据包的存储方向的相反方向获取第一个数据包;可以理解为,在电子设备处理至上一个 数据包的包尾时,电子设备可以从后处理队列中选取下一次处理的数据包;选取时按照拍摄时间从后向前的顺序进行选取;即电子设备优先选取拍摄时间最靠后的数据包。
可选地,电子设备按照逆序方向在当前的后处理队列中获取第一个数据包的实现方式可以参见图9与图10的相关描述,此处不再赘述。
情况二
例如,以电子设备检测到两次连续拍照操作进行举例说明,其中,第一次连续拍照操作包括5次拍照操作;例如,第一次连续拍照操作包括第一次拍照操作至第五次拍摄操作;第二次连续拍照操作包括2次拍照操作;例如,第二次连续拍照操作包括第六次拍照操作与第七次拍照操作;如图15所示,7次拍摄操作对应生成7张拍摄图像,分别为拍摄图像1、拍摄图像2、拍摄图像3、拍摄图像4、拍摄图像5、拍摄图像6与拍摄图像7;第一次拍摄操作电子设备采集的数据包为数据包1;第二次拍摄操作电子设备采集的数据包为数据包2;第三次拍摄操作电子设备采集的数据包为数据包3;第四次拍摄操作电子设备采集的数据包为数据包4;第五次拍摄操作电子设备采集的数据包为数据包5;第六次拍摄操作电子设备采集的数据包为数据包6;第七次拍摄操作电子设备采集的数据包为数据包7,如图16所示。
示例性地,如图15与图16所示,在第一时刻,电子设备在后处理队列中按照逆序方向获取第一个数据包进行图像处理;例如,在第一时刻电子设备获取数据包1进行图像处理,生成拍摄图像1;在电子设备对数据包1进行图像处理的同时,基于电子设备检测到的拍照操作,后处理队列中存储数据包2至数据包7;在第二时刻,电子设备处理至数据包1中的结束帧时,电子设备在后处理队列中按照逆序方向获取第一个数据包,即数据包7;电子设备对数据包7进行图像处理,生成拍摄图像7;重复执行上述操作,在生成拍摄图像7之后,电子设备按照逆序方向在当前的后处理队列中获取第一个数据包,即数据包6;电子设备对数据包6进行图像处理,生成拍摄图像6;在生成拍摄图像6之后,即电子设备处理至数据包6中的结束帧时,电子设备按照逆序方向在当前的后处理队列中获取第一个数据包,即数据包5;电子设备对数据包5进行图像处理,生成拍摄图像5;在生成拍摄图像5之后,电子设备按照逆序方向在当前的后处理队列中获取第一个数据包,即数据包4;电子设备对数据包4进行图像处理,生成拍摄图像4;在生成拍摄图像4之后,电子设备按照逆序方向在当前的后处理队列中获取第一个数据包,即数据包3;电子设备对数据包3进行图像处理,生成拍摄图像3;在生成拍摄图像3之后,电子设备按照逆序方向在当前的后处理队列中获取第一个数据包,即数据包2;电子设备对数据包2进行图像处理,生成拍摄图像2;此时,电子设备生成拍摄图像1、拍摄图像2、拍摄图像3、拍摄图像4、拍摄图像5、拍摄图像6与拍摄图像7。
可选地,电子设备按照逆序方向在当前的后处理队列中获取第一个数据包的实现方式可以参见图9与图10的相关描述,此处不再赘述。
需要说明的是,上述情况二与情况一的区别在于:在情况一中,电子设备处理数据包1之后,后处理队列中包括数据包2至数据包5,此时电子设备在后处理队列中按照数据包的存储方向的相反方向获取第一个数据包,即获取数据包5;在情况二中,电子设备处理数据包1之后,后处理队列中包括数据包2至数据包7,此时电子设备在后处理队列中按照数据包的存储方向的相反方向获取第一个数据包,即获取数据包7;存在上述情况的 原因在于电子设备在连续拍照操作时不同拍照操作的时间间隔可能存在差异;由于时间间隔存在差异,因此电子设备中后处理队列中存储的图像数据可能不同。
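沿用示例一之后给出的假设性 simulate 草图，只需改变数据包的入队时刻（此处的取值均为假设），即可复现情况一与情况二中不同的出图顺序：

```python
# 情况一：处理完数据包 1 时，队列中只有数据包 2～5，数据包 6、7 稍后入队
print(simulate([0.1, 0.2, 0.3, 0.4, 0.5, 3.2, 3.3]))   # 输出 [1, 5, 7, 6, 4, 3, 2]

# 情况二：处理完数据包 1 时，数据包 2～7 均已入队
print(simulate([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]))    # 输出 [1, 7, 6, 5, 4, 3, 2]
```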
在本申请的实施例中,电子设备通过采用先采集后处理的方式对采集的图像数据包进行图像处理,生成拍摄图像,从而在一定程度上能够缩短电子设备检测到点击缩略图像的操作与显示拍摄图像之间的时长;可以理解为,电子设备在处理连续拍照操作中第一次拍照操作的数据包之后,可以选取存储的图像数据队列中逆序方向的第一个数据包进行图像处理,即获取拍摄时间最晚的数据包进行图像处理;从而确保电子设备能够快速处理拍摄时间靠后的拍摄图像,使得电子设备检测到点击缩略图像的操作之后能够快速地显示拍摄图像;在一定程度上能够缩短用户等待拍摄图像的时长,提高用户的拍摄体验。
下面结合图17至图20对适用于本申请实施例的界面示意图进行详细描述。
示例性地,电子设备运行相机应用程序后,显示预览界面501,如图17中的(a)所示;预览界面501中包括预览图像、拍照控件502与缩略图显示控件503,其中,缩略图显示控件503中显示的图像为上一次拍照的缩略图像;电子设备检测到多次快速连续拍摄的操作;例如,电子设备检测到点击拍照控件502的操作,如图17中的(b)所示;在电子设备检测到点击拍照控件502的操作之后,电子设备检测到弹起拍照控件502的操作,显示显示界面504,如图17中的(c)所示;在电子设备检测到弹起拍照控件502的操作之后,电子设备检测到点击拍照控件502的操作,如图17中的(d)所示;在电子设备检测到点击拍照控件502的操作之后,电子设备检测到弹起拍照控件502的操作,如图17中的(e)所示;在电子设备检测到弹起拍照控件502的操作之后,电子设备检测到点击缩略图显示控件503的操作,如图17中的(f)所示;在电子设备检测到点击缩略图显示控件503的操作之后,电子设备可以显示图库应用应用程序中的显示界面,如图18所示。
需要说明的是,结合图4中的(b)至图4中的(d)、图17中的(f)与图18可以看出,在本申请的实施例提供的拍照处理方法中,电子设备通过先采集后处理后的方法对连续拍照的数据包进行图像处理,可以在检测到点击缩略图像的操作时缩短生成拍摄图像的等待时长;可以理解为,电子设备可以优先处理拍摄时间最晚的数据包,从而在检测到电子设备点击缩略图显示控件后的等待时长,即可以在一定程序上缩短显示拍摄图像的等待时长;提高用户的拍摄体验。
下面结合图19与图20对电子设备中开启快速显示拍摄图像,即执行本申请实施例的拍照处理方法的界面示意图进行描述。
在一个示例中,如图19中的(b)所示电子设备检测到点击智能控件610的操作之后,电子设备执行本申请实施例提供的拍照处理方法。
示例性地,在电子设备运行相机应用程序后,显示如图19中的(a)所示的预览界面;在预览界面中,包括预览图像与智能控件610;电子设备检测到点击智能控件610的操作,如图19中的(b)所示;在电子设备检测到点击智能控件610的操作之后,执行本申请实施例提供的拍照处理方法。
在一个示例中,如图20中的(d)所示电子设备检测到点击控件630的操作之后,电子设备执行本申请实施例提供的拍照处理方法。
示例性地,电子设备中运行相机应用程序后,可以显示如图20中的(a)所示的预览界面;预览界面中包括预览图像与设置控件620;电子设备检测到点击设置控件620的操 作,如图20中的(b)所示;在电子设备检测到点击设置控件620的操作之后,显示设置界面,如图20中的(c)所示;设置界面中包括快速显示拍摄图像的控件630;电子设备检测到点击快速显示拍摄图像的控件630,如图20中的(d)所示;在电子设备检测到快速显示拍摄图像的控件630的操作之后,执行本申请实施例提供的拍照处理方法。
在本申请的实施例中,电子设备通过采用先采集后处理的方式对采集的图像数据包进行图像处理,生成拍摄图像,从而在一定程度上能够缩短电子设备检测到点击缩略图像的操作与显示拍摄图像之间的时长;可以理解为,电子设备在处理连续拍照操作中第一次拍照操作的数据包之后,可以选取存储的图像数据队列中逆序方向的第一个数据包进行图像处理,即获取拍摄时间最晚的数据包进行图像处理;从而确保电子设备能够快速处理拍摄时间靠后的拍摄图像,使得电子设备检测到点击缩略图像的操作之后能够快速地显示拍摄图像;在一定程度上能够缩短用户等待拍摄图像的时长,提高用户的拍摄体验。
应理解,上述举例说明是为了帮助本领域技术人员理解本申请实施例,而非要将本申请实施例限于所例示的具体数值或具体场景。本领域技术人员根据所给出的上述举例说明,显然可以进行各种等价的修改或变化,这样的修改或变化也落入本申请实施例的范围内。
上文结合图1至图20详细描述了本申请实施例提供的拍照处理方法;下面将结合图21至图22详细描述本申请的装置实施例。应理解,本申请实施例中的装置可以执行前述本申请实施例的各种方法,即以下各种产品的具体工作过程,可以参考前述方法实施例中的对应过程。
图21是本申请实施例提供的一种电子设备的结构示意图。该电子设备700包括处理模块710与显示模块720。
其中,处理模块710用于检测到第一操作,所述第一操作为指示所述电子设备拍照的操作;响应于所述第一操作,获取第一数据包;在图像数据队列中存储所述第一数据包,所述图像数据队列中存储的数据包用于生成拍摄图像,所述第一数据包为所述图像数据队列中采集时刻最早的数据包;对所述第一数据包进行图像处理,生成第一拍摄图像;检测到第二操作,所述第二操作包括N次拍摄操作,所述N次拍摄操作的时间间隔小于预设时长,所述拍摄操作为指示所述电子设备采集图像的操作,N为大于或者等于2的整数;响应于所述第二操作,获取N个数据包,所述N个数据包与所述N次拍摄操作一一对应;在所述图像数据队列中基于采集时刻从早到晚的顺序存储所述N个数据包;在生成所述第一拍摄图像之后,获取所述图像数据队列中的第二数据包,所述第二数据包为在所述图像数据队列中采集时刻最晚的数据包;对所述第二数据包进行所述图像处理,生成第二拍摄图像;检测到第三操作,所述第三操作为点击所述第二拍摄图像的缩略图像的操作;显示模块720用于响应于所述第三操作,显示所述第二拍摄图像。
可选地,作为一个实施例,所述N个数据包包括所述第二数据包与N-1个数据包,处理模块710还用于:
基于所述N-1个数据包的采集时刻按照从晚到早的顺序,依次获取所述图像数据队列中的N-1个数据包;
对所述N-1个数据包依次进行所述图像处理,生成N-1个拍摄图像。
可选地,作为一个实施例,所述第一数据包包括第一结束帧,所述第一结束帧用于指示所述第一数据包在所述图像数据队列中的结束位置;处理模块710具体用于:
在第一时刻获取所述第二数据包;其中,所述第一时刻为处理所述第一结束帧的时刻。
可选地,作为一个实施例,所述N个数据包中的每个数据包包括起始帧与结束帧,所述起始帧用于指示一个数据包在所述图像数据队列中的起始位置,所述结束帧用于指示一个数据包在所述图像数据队列中的结束位置;处理模块710具体用于:
在所述第一时刻,确定所述图像数据队列中目标起始帧的位置信息,其中,所述目标起始帧为所述图像数据队列中时刻最晚的起始帧;
基于所述目标起始帧的位置信息,获取所述第二数据包。
可选地,作为一个实施例,所述第一数据包包括第一起始帧,所述第一起始帧用于标识所述第一数据包在所述图像数据队列中的起始位置;所述第一起始帧与所述第一结束帧之间包括M帧图像数据,M为大于或者等于1的整数。
可选地,作为一个实施例,所述M帧图像数据为第一颜色空间的图像数据;所述图像处理包括采用第一算法与第二算法的处理,所述第一算法为所述第一颜色空间的算法,所述第二算法为将所述第一颜色空间的图像转换为第二颜色空间的图像的算法。
可选地,作为一个实施例,所述N次拍摄操作为连续拍照操作。
可选地,作为一个实施例,所述图像数据队列中的N+1个数据包为基于所述第一操作的时间信息与所述第二操作的时间信息在零秒延迟队列中获取的数据包。
需要说明的是,上述电子设备700以功能模块的形式体现。这里的术语“模块”可以通过软件和/或硬件形式实现,对此不作具体限定。
例如,“模块”可以是实现上述功能的软件程序、硬件电路或二者结合。硬件电路可能包括应用特有集成电路(application specific integrated circuit,ASIC)、电子电路、用于执行一个或多个软件或固件程序的处理器(例如共享处理器、专有处理器或组处理器等)和存储器、合并逻辑电路和/或其它支持所描述的功能的合适组件。
因此,在本申请的实施例中描述的各示例的单元,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
图22示出了本申请提供的一种电子设备的结构示意图。图22中的虚线表示该单元或该模块为可选的;电子设备800可以用于实现上述方法实施例中描述的拍照处理方法。
电子设备800包括一个或多个处理器801,该一个或多个处理器801可支持电子设备800实现方法实施例中的拍照处理方法。处理器801可以是通用处理器或者专用处理器。例如,处理器801可以是中央处理器(central processing unit,CPU)、数字信号处理器(digital signal processor,DSP)、专用集成电路(application specific integrated circuit,ASIC)、现场可编程门阵列(field programmable gate array,FPGA)或者其它可编程逻辑器件,如分立门、晶体管逻辑器件或分立硬件组件。
可选地,处理器801可以用于对电子设备800进行控制,执行软件程序,处理软件程序的数据。电子设备800还可以包括通信单元805,用以实现信号的输入(接收)和输出(发送)。
例如,电子设备800可以是芯片,通信单元805可以是该芯片的输入和/或输出电路,或者,通信单元805可以是该芯片的通信接口,该芯片可以作为终端设备或其它电子设备 的组成部分。
又例如,电子设备800可以是终端设备,通信单元805可以是该终端设备的收发器,或者,通信单元805可以800中可以包括一个或多个存储器802,其上存有程序804,程序804可被处理器801运行,生成指令803,使得处理器801根据指令803执行上述方法实施例中描述的拍照处理方法。
可选地,存储器802中还可以存储有数据。
可选地,处理器801还可以读取存储器802中存储的数据,该数据可以与程序804存储在相同的存储地址,该数据也可以与程序804存储在不同的存储地址。
可选地,处理器801和存储器802可以单独设置,也可以集成在一起,例如,集成在终端设备的系统级芯片(system on chip,SOC)上。
示例性地,存储器802可以用于存储本申请实施例中提供的拍照处理方法的相关程序804,处理器801可以用于在执行拍照处理方法时调用存储器802中存储的拍照处理方法的相关程序804,执行本申请实施例的拍照处理方法;例如,检测到第一操作,第一操作为指示电子设备拍照的操作;响应于第一操作,获取第一数据包;在图像数据队列中存储第一数据包,图像数据队列中存储的数据包用于生成拍摄图像,第一数据包为图像数据队列中采集时刻最早的数据包;对第一数据包进行图像处理,生成第一拍摄图像;检测到第二操作,第二操作包括N次拍摄操作,N次拍摄操作的时间间隔小于预设时长,拍摄操作为指示电子设备采集图像的操作,N为大于或者等于2的整数;响应于第二操作,获取N个数据包,N个数据包与N次拍摄操作一一对应;在图像数据队列中基于采集时刻从早到晚的顺序存储N个数据包;在生成第一拍摄图像之后,获取图像数据队列中的第二数据包,第二数据包为在图像数据队列中采集时刻最晚的数据包;对第二数据包进行图像处理,生成第二拍摄图像;检测到第三操作,第三操作为点击第二拍摄图像的缩略图像的操作;响应于第三操作,显示第二拍摄图像。
可选地,本申请还提供了一种计算机程序产品,该计算机程序产品被处理器801执行时实现本申请中任一方法实施例中的拍照处理方法。
例如,该计算机程序产品可以存储在存储器802中,例如是程序804,程序804经过预处理、编译、汇编和链接等处理过程最终被转换为能够被处理器801执行的可执行目标文件。
可选地,本申请还提供了一种计算机可读存储介质,其上存储有计算机程序,该计算机程序被计算机执行时实现本申请中任一方法实施例的拍照处理方法。该计算机程序可以是高级语言程序,也可以是可执行目标程序。
例如,该计算机可读存储介质例如是存储器802。存储器802可以是易失性存储器或非易失性存储器,或者,存储器802可以同时包括易失性存储器和非易失性存储器。其中,非易失性存储器可以是只读存储器(read-only memory,ROM)、可编程只读存储器(programmable ROM,PROM)、可擦除可编程只读存储器(erasable PROM,EPROM)、电可擦除可编程只读存储器(electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(random access memory,RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(static RAM,SRAM)、动态随机存取存储器(dynamic RAM,DRAM)、同步动态随机存取存储器(synchronous  DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(double data rate SDRAM,DDR SDRAM)、增强型同步动态随机存取存储器(enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(direct rambus RAM,DR RAM)。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的电子设备的实施例仅仅是示意性的,例如,模块的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
应理解,在本申请的各种实施例中,各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请的实施例的实施过程构成任何限定。
另外,本文中的术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中字符“/”,一般表示前后关联对象是一种“或”的关系。
功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read-only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以权利要求的保护范围为准总之,以上仅为本申请技术方案的较佳实施例而已,并非用于限定本申请的保护范围。凡在本申请的 精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。

Claims (12)

  1. 一种拍照处理方法,其特征在于,应用于电子设备,所述拍照处理方法包括:
    检测到第一操作,所述第一操作为指示所述电子设备拍照的操作;
    响应于所述第一操作,获取第一数据包;
    在图像数据队列中存储所述第一数据包,所述图像数据队列中存储的数据包用于生成拍摄图像,所述第一数据包为所述图像数据队列中采集时刻最早的数据包;
    对所述第一数据包进行图像处理,生成第一拍摄图像;
    检测到第二操作,所述第二操作包括N次拍摄操作,所述N次拍摄操作的时间间隔小于预设时长,所述拍摄操作为指示所述电子设备采集图像的操作,N为大于或者等于2的整数;
    响应于所述第二操作,获取N个数据包,所述N个数据包与所述N次拍摄操作一一对应;
    在所述图像数据队列中基于采集时刻从早到晚的顺序存储所述N个数据包;
    在生成所述第一拍摄图像之后,获取所述图像数据队列中的第二数据包,所述第二数据包为在所述图像数据队列中采集时刻最晚的数据包;
    对所述第二数据包进行所述图像处理,生成第二拍摄图像;
    检测到第三操作,所述第三操作为点击所述第二拍摄图像的缩略图像的操作;
    响应于所述第三操作,显示所述第二拍摄图像。
  2. 如权利要求1所述的拍照处理方法,其特征在于,所述N个数据包包括所述第二数据包与N-1个数据包,在生成所述第二拍摄图像之后,还包括:
    基于所述N-1个数据包的采集时刻按照从晚到早的顺序,依次获取所述图像数据队列中的N-1个数据包;
    对所述N-1个数据包依次进行所述图像处理,生成N-1个拍摄图像。
  3. 如权利要求1或2所述的拍照处理方法,其特征在于,所述第一数据包包括第一结束帧,所述第一结束帧用于指示所述第一数据包在所述图像数据队列中的结束位置;所述在生成所述第一拍摄图像之后,获取所述图像数据队列中的第二数据包,包括:
    在第一时刻获取所述第二数据包;其中,所述第一时刻为处理所述第一结束帧的时刻。
  4. 如权利要求3所述的拍照处理方法,其特征在于,所述N个数据包中的每个数据包包括起始帧与结束帧,所述起始帧用于指示一个数据包在所述图像数据队列中的起始位置,所述结束帧用于指示一个数据包在所述图像数据队列中的结束位置;所述在第一时刻获取所述第二数据包,包括:
    在所述第一时刻,确定所述图像数据队列中目标起始帧的位置信息,其中,所述目标起始帧为所述图像数据队列中时刻最晚的起始帧;
    基于所述目标起始帧的位置信息,获取所述第二数据包。
  5. 如权利要求3或4中所述的拍照处理方法,其特征在于,所述第一数据包包括第一起始帧,所述第一起始帧用于标识所述第一数据包在所述图像数据队列中的起始位置;所述第一起始帧与所述第一结束帧之间包括M帧图像数据,M为大于或者等于 1的整数。
  6. 如权利要求2至5中任一项所述的拍照处理方法,其特征在于,所述图像处理包括采用第一算法与第二算法的处理,所述第一算法为第一颜色空间的算法,所述第二算法为将所述第一颜色空间的图像转换为第二颜色空间的图像的算法。
  7. 如权利要求1至6中任一项所述的拍照处理方法,其特征在于,所述N次拍摄操作为连续拍照操作。
  8. 如权利要求1至7中任一项所述的拍照处理方法,其特征在于,所述图像数据队列中的N+1个数据包为基于所述第一操作的时间信息与所述第二操作的时间信息在零秒延迟队列中获取的数据包。
  9. 一种电子设备,其特征在于,包括:
    一个或多个处理器和存储器;
    所述存储器与所述一个或多个处理器耦合,所述存储器用于存储计算机程序代码,所述计算机程序代码包括计算机指令,所述一个或多个处理器调用所述计算机指令以使得所述电子设备执行如权利要求1至8中任一项所述的拍照处理方法。
  10. 一种芯片系统,其特征在于,所述芯片系统应用于电子设备,所述芯片系统包括一个或多个处理器,所述处理器用于调用计算机指令以使得所述电子设备执行如权利要求1至8中任一项所述的拍照处理方法。
  11. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质存储了计算机程序,当所述计算机程序被处理器执行时,使得所述处理器执行权利要求1至8中任一项所述的拍照处理方法。
  12. 一种计算机程序产品,其特征在于,所述计算机程序产品包括计算机程序代码,当所述计算机程序代码被电子设备运行时,使得所述电子设备执行权利要求1至8中任一项所述的方法。
PCT/CN2023/114091 2022-11-22 2023-08-21 拍照处理方法和电子设备 WO2024109203A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211468036.3 2022-11-22
CN202211468036.3A CN116668836B (zh) 2022-11-22 2022-11-22 拍照处理方法和电子设备

Publications (1)

Publication Number Publication Date
WO2024109203A1 true WO2024109203A1 (zh) 2024-05-30

Family

ID=87710564

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/114091 WO2024109203A1 (zh) 2022-11-22 2023-08-21 拍照处理方法和电子设备

Country Status (2)

Country Link
CN (1) CN116668836B (zh)
WO (1) WO2024109203A1 (zh)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20050075870A (ko) * 2004-01-16 2005-07-25 삼성테크윈 주식회사 영상 모드 정보를 저장 및 재생하는 디지털 카메라와 영상모드 정보의 저장 및 재생 방법
CN102469265A (zh) * 2010-11-18 2012-05-23 卡西欧计算机株式会社 摄像装置及摄像方法
JP2012216215A (ja) * 2012-05-18 2012-11-08 Casio Comput Co Ltd 画像表示装置、及び画像表示方法、プログラム
CN104203073A (zh) * 2012-10-18 2014-12-10 奥林巴斯医疗株式会社 图像处理装置和图像处理方法
CN104754223A (zh) * 2015-03-12 2015-07-01 广东欧珀移动通信有限公司 一种生成缩略图的方法及拍摄终端
CN110996012A (zh) * 2019-12-23 2020-04-10 Oppo广东移动通信有限公司 连拍处理方法、图像处理器、拍摄装置和电子设备
US20220060653A1 (en) * 2020-08-21 2022-02-24 Canon Kabushiki Kaisha Image capture apparatus, image processing apparatus, and control method
CN116416323A (zh) * 2021-12-28 2023-07-11 北京小米移动软件有限公司 图像处理方法及装置、电子设备及存储介质
CN116866729A (zh) * 2022-03-24 2023-10-10 北京小米移动软件有限公司 拍摄方法、装置及介质

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1756301A (zh) * 2004-09-30 2006-04-05 英保达股份有限公司 图像播放系统以及方法
JP2009302902A (ja) * 2008-06-13 2009-12-24 Nikon Corp カメラ
CN111567033A (zh) * 2019-05-15 2020-08-21 深圳市大疆创新科技有限公司 拍摄装置、无人飞行器、控制终端和拍摄方法

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20050075870A (ko) * 2004-01-16 2005-07-25 삼성테크윈 주식회사 영상 모드 정보를 저장 및 재생하는 디지털 카메라와 영상모드 정보의 저장 및 재생 방법
CN102469265A (zh) * 2010-11-18 2012-05-23 卡西欧计算机株式会社 摄像装置及摄像方法
JP2012216215A (ja) * 2012-05-18 2012-11-08 Casio Comput Co Ltd 画像表示装置、及び画像表示方法、プログラム
CN104203073A (zh) * 2012-10-18 2014-12-10 奥林巴斯医疗株式会社 图像处理装置和图像处理方法
CN104754223A (zh) * 2015-03-12 2015-07-01 广东欧珀移动通信有限公司 一种生成缩略图的方法及拍摄终端
CN110996012A (zh) * 2019-12-23 2020-04-10 Oppo广东移动通信有限公司 连拍处理方法、图像处理器、拍摄装置和电子设备
US20220060653A1 (en) * 2020-08-21 2022-02-24 Canon Kabushiki Kaisha Image capture apparatus, image processing apparatus, and control method
CN116416323A (zh) * 2021-12-28 2023-07-11 北京小米移动软件有限公司 图像处理方法及装置、电子设备及存储介质
CN116866729A (zh) * 2022-03-24 2023-10-10 北京小米移动软件有限公司 拍摄方法、装置及介质

Also Published As

Publication number Publication date
CN116668836A (zh) 2023-08-29
CN116668836B (zh) 2024-04-19

Similar Documents

Publication Publication Date Title
WO2021147482A1 (zh) 一种长焦拍摄的方法及电子设备
WO2021104485A1 (zh) 一种拍摄方法及电子设备
CN113382169B (zh) 一种拍照方法及电子设备
CN113824873B (zh) 一种图像处理的方法及相关电子设备
WO2021190613A1 (zh) 一种拍照方法及装置
WO2017173585A1 (zh) 一种拍照方法及终端
WO2021190348A1 (zh) 图像处理方法和电子设备
WO2024031879A1 (zh) 显示动态壁纸的方法和电子设备
WO2021219141A1 (zh) 拍照方法、图形用户界面及电子设备
CN113099146A (zh) 一种视频生成方法、装置及相关设备
WO2023142830A1 (zh) 切换摄像头的方法与电子设备
CN113660408A (zh) 一种视频拍摄防抖方法与装置
CN115689963A (zh) 一种图像处理方法及电子设备
WO2022083325A1 (zh) 拍照预览方法、电子设备以及存储介质
WO2021185374A1 (zh) 一种拍摄图像的方法及电子设备
WO2024109207A1 (zh) 显示缩略图像的方法和电子设备
WO2023160230A9 (zh) 一种拍摄方法及相关设备
WO2023231697A1 (zh) 一种拍摄方法及相关设备
CN115633262B (zh) 图像处理方法和电子设备
WO2024109203A1 (zh) 拍照处理方法和电子设备
WO2022206589A1 (zh) 一种图像处理方法以及相关设备
CN114531539B (zh) 拍摄方法及电子设备
WO2023035868A1 (zh) 拍摄方法及电子设备
WO2023160224A9 (zh) 一种拍摄方法及相关设备
WO2023160221A1 (zh) 一种图像处理方法和电子设备