CN117692763A - Photographing method, electronic device, storage medium and program product


Info

Publication number: CN117692763A
Application number: CN202310971174.1A
Authority: CN (China)
Prior art keywords: image, track, preview, track image, original
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Inventor: 王相钦
Current Assignee: Honor Device Co Ltd
Original Assignee: Honor Device Co Ltd
Application filed by Honor Device Co Ltd; priority to CN202310971174.1A; publication of CN117692763A

Landscapes

  • Studio Devices (AREA)

Abstract

The embodiments of the present application provide a photographing method, an electronic device, a storage medium and a program product, relating to the technical field of image processing. The photographing method includes: responding to a track image preview request, and acquiring an image sequence collected by a camera in a track image preview mode; if a track image photographing request is not received, performing track fusion processing on the images in the current image sequence each time the number of newly added images in the image sequence reaches a preset value to obtain a first original track image, and performing a post-processing flow of the preview stream on the first original track image to obtain a preview track image; if a track image photographing request is received, performing track fusion processing on the image sequence indicated by the track image photographing request to obtain a second original track image, and performing a post-processing flow of the photographing stream on the second original track image to obtain a photographed track image. In this way, previewing of the track image is realized, so that the imaging effect of the track image can be ensured.

Description

Photographing method, electronic device, storage medium and program product
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a photographing method, an electronic device, a storage medium, and a program product.
Background
With the development of image processing technology, the shooting functions supported by electronic devices have become increasingly powerful.
The streamer shutter can create a unique light-streak effect, recording the movement track of light and reflecting environmental changes and object motion under different illumination conditions. It is commonly used to shoot scenes such as light painting graffiti, car light trails, waterfalls, silky flowing water, star trails, fireworks, and starry sky photography.
In current photographing methods, when an image is shot in the streamer shutter mode, the application generally issues a photographing request to the bottom layer, and the bottom layer performs image processing and generates the final track image.
In addition, in the photographing architecture configured in current electronic devices, the paths and processing flows corresponding to the photographing stream data and the preview stream data are mutually independent. As a result, the user can only judge the image effect after the final track image has been obtained; no track image for previewing is available during the photographing process, and the photographing effect is therefore difficult to guarantee.
Disclosure of Invention
An object of the embodiments of the present application is to provide a photographing method, an electronic device, a storage medium, and a program product, so as to implement previewing of a track image, thereby ensuring an imaging effect of the track image. The specific technical scheme is as follows:
In a first aspect, an embodiment of the present application provides a photographing method, which is applied to an electronic device, including:
responding to a track image preview request, and acquiring an image sequence acquired by a camera in a track image preview mode; the number of images contained in the image sequence increases with the duration of the track image preview mode;
if a track image photographing request is not received, performing track fusion processing on the images in the current image sequence each time the number of newly added images in the image sequence reaches a preset value to obtain a first original track image, and performing a post-processing flow of the preview stream on the first original track image to obtain a preview track image;
and if a track image photographing request is received, performing track fusion processing on an image sequence indicated by the track image photographing request to obtain a second original track image, and performing a post-processing flow of the photographing stream on the second original track image to obtain a photographed track image.
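For clarity, the control flow of the first aspect can be pictured with the following minimal, self-contained C++ sketch. It is an illustration under stated assumptions only, not the claimed implementation: the Frame type, the fuse-every-N threshold, the two post-processing callbacks (showPreview, savePhoto) and the simple per-pixel "lighten" fusion over one 8-bit plane are all assumptions introduced for the example.

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <utility>
#include <vector>

// One camera frame; a real HAL frame would carry full RAW/YUV planes and metadata.
struct Frame { std::vector<uint8_t> plane; };

class TrackPreviewSession {
public:
    TrackPreviewSession(int fuseEveryN,
                        std::function<void(const Frame&)> showPreview,  // preview-stream post-processing
                        std::function<void(const Frame&)> savePhoto)    // photographing-stream post-processing
        : fuseEveryN_(fuseEveryN),
          showPreview_(std::move(showPreview)),
          savePhoto_(std::move(savePhoto)) {}

    // Called for every frame delivered while the track image preview mode runs.
    void onNewFrame(const Frame& f) {
        sequence_.push_back(f);
        if (++newSinceFusion_ >= fuseEveryN_) {           // preset value reached
            showPreview_(fuseTrack(sequence_));           // first original track image -> preview track image
            newSinceFusion_ = 0;
        }
    }

    // Called when the track image photographing request arrives.
    void onPhotographingRequest() {
        if (!sequence_.empty())
            savePhoto_(fuseTrack(sequence_));             // second original track image -> photographed track image
    }

private:
    // Track fusion by "lighten" stacking: keep the brightest value seen at each pixel.
    static Frame fuseTrack(const std::vector<Frame>& seq) {
        Frame out = seq.front();                          // frames assumed to be of equal size
        for (const Frame& f : seq)
            for (std::size_t i = 0; i < out.plane.size(); ++i)
                if (f.plane[i] > out.plane[i]) out.plane[i] = f.plane[i];
        return out;
    }

    int fuseEveryN_;
    int newSinceFusion_ = 0;
    std::vector<Frame> sequence_;
    std::function<void(const Frame&)> showPreview_, savePhoto_;
};
```

With a fuse-every-N value of 1, onNewFrame refreshes the preview after every captured frame, which matches the example given later in the detailed description.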
In one embodiment of the present application, the method further comprises:
responding to a track image photographing request in the track image preview mode, acquiring the image sequence indicated by the track image photographing request from a cache, and adding a photographing frame identification mark to the acquired image sequence;
and for each original track image obtained by the track fusion processing, judging whether the original track image carries the photographing frame identification mark; if so, identifying the original track image as the second original track image, and if not, identifying the original track image as the first original track image.
In an embodiment of the present application, after the capturing the image sequence acquired by the camera in the track image preview mode, the method further includes:
the image sequence is stored in a buffer configured for the preview path.
In an embodiment of the present application, after the acquiring the image sequence acquired by the camera in the track image preview mode in response to the track image preview request, the method further includes:
sampling the image sequence based on the sampling frame rate indicated by the track image preview request, and caching the sampled image sequence;
each time the number of newly added images in the image sequence reaches a preset value, performing track fusion processing on the images in the current image sequence to obtain a first original track image, including:
each time the number of newly added images in the sampled image sequence reaches the preset value, performing track fusion processing on the images in the current sampled image sequence to obtain the first original track image;
The step of performing track fusion processing on the image sequence indicated by the track image photographing request to obtain a second original track image comprises the following steps:
and performing track fusion processing on the sampled image sequence indicated by the track image photographing request to obtain the second original track image.
In one embodiment of the present application, the step of performing a post-processing procedure of the preview stream on the first original track image to obtain a preview track image includes:
and performing downsampling processing on the first original track image to obtain the preview track image.
In one embodiment of the present application, the step of obtaining the photographed track image includes:
and encoding the second original track image into a preset output format to obtain the photographed track image.
In an embodiment of the present application, before the acquiring, in response to the track image preview request, the image sequence acquired by the camera in the track image preview mode, the method further includes:
in response to the first operation, displaying a first preview image through a preview interface; the first preview image is a preview image of an image currently collected by the camera, and the first operation is used for indicating the electronic device to enter a streamer shutter mode.
In one embodiment of the present application, the method further comprises:
and after receiving the track image photographing request, exiting the track image preview mode, and displaying the first preview image through the preview interface.
In a second aspect, embodiments of the present application provide an electronic device comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the electronic device to perform the method of any of the first aspects.
In a third aspect, an embodiment of the present application provides a computer readable storage medium, where the computer readable storage medium includes a stored program, and when the program runs, controls a device in which the computer readable storage medium is located to execute the method of any one of the first aspects.
In a fourth aspect, embodiments of the present application provide a computer program product comprising executable instructions which, when executed on a computer, cause the computer to perform the method of any one of the first aspects.
The beneficial effects of the embodiments of the present application are as follows:
According to the photographing method, the electronic device, the storage medium and the program product provided by the embodiments of the present application, an image sequence acquired by a camera is obtained in a track image preview mode in response to a track image preview request; if a track image photographing request is not received, track fusion processing is performed on the images in the current image sequence each time the number of newly added images in the image sequence reaches a preset value, so as to obtain a first original track image, and a post-processing flow of the preview stream is performed on the first original track image to obtain a preview track image; if a track image photographing request is received, track fusion processing is performed on the image sequence indicated by the track image photographing request to obtain a second original track image, and a post-processing flow of the photographing stream is performed on the second original track image to obtain a photographed track image. In this way, the electronic device is instructed to start shooting the track image by issuing a track image preview request to the bottom layer, and the preview path in the photographing architecture configured in the electronic device is reasonably utilized, so that previewing of the track image can be realized without changing the original photographing architecture. The user can thus observe the shooting effect of the track image in real time and instruct the electronic device to end shooting once the shooting effect is determined to meet the requirement; at that moment, the electronic device responds to the track image photographing request and obtains the photographed track image based on the image sequence indicated by the track image photographing request, so that the actual effect of the photographed track image can be ensured to meet the user's expectation.
Of course, not all of the above-described advantages need be achieved simultaneously in practicing any one of the products or methods of the present application.
Drawings
Fig. 1a is a schematic diagram of an electronic device according to an embodiment of the present application;
fig. 1b is a software architecture block diagram of an electronic device according to an embodiment of the present application;
fig. 2 is a block diagram of an electronic device according to an embodiment of the present application;
fig. 3 is a schematic flow chart of a photographing method according to an embodiment of the present application;
fig. 4 is a schematic diagram of a mobile phone interface provided in an embodiment of the present application;
fig. 5 is a schematic diagram of an identification procedure of a selection module provided in an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For a better understanding of the technical solutions of the present application, embodiments of the present application are described in detail below with reference to the accompanying drawings.
In order to clearly describe the technical solutions of the embodiments of the present application, in the embodiments of the present application, the words "first", "second", etc. are used to distinguish identical or similar items having substantially the same function and effect. For example, the first instruction and the second instruction are used to distinguish different user instructions, without limiting their order. It will be appreciated by those skilled in the art that the words "first", "second", and the like do not limit the quantity or order of execution, and do not indicate that the items so described are necessarily different.
In this application, the terms "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
The embodiments of the present application can be applied to terminals with communication functions, such as mobile phones, tablet computers, personal computers (PCs), personal digital assistants (PDAs), smart watches, netbooks, wearable electronic devices, Augmented Reality (AR) devices, Virtual Reality (VR) devices, vehicle-mounted devices, smart cars, robots, smart glasses, smart televisions, and the like.
By way of example, fig. 1a shows a schematic diagram of the structure of a terminal 100. The terminal 100 may include a processor 110, a display 120, a camera 130, an internal memory 140, a SIM (Subscriber Identity Module) card interface 150, a USB (Universal Serial Bus) interface 160, a charge management module 170, a power management module 171, a battery 172, a sensor module 180, a mobile communication module 190, a wireless communication module 200, an antenna 1, an antenna 2, and the like. The sensor module 180 may include, among other things, a pressure sensor 180A, a fingerprint sensor 180B, a touch sensor 180C, an ambient light sensor 180D, and the like.
It should be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the terminal 100. In other embodiments of the present application, the terminal 100 may include more or fewer components than illustrated, some components may be combined or split, or the components may be arranged differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include a central processor (Central Processing Unit, CPU), an application processor (Application Processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (Image Signal Processor, ISP), a controller, a video codec, a digital signal processor (Digital Signal Processor, DSP), a baseband processor, and/or a Neural network processor (Neural-network Processing Unit, NPU), etc. Wherein the different processing units may be separate components or may be integrated in one or more processors. In some embodiments, terminal 100 can also include one or more processors 110. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution. In other embodiments, memory may also be provided in the processor 110 for storing instructions and data. Illustratively, the memory in the processor 110 may be a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it may be called directly from memory. This avoids repeated accesses and reduces the latency of the processor 110, thereby improving the efficiency of the terminal 100 in processing data or executing instructions.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include Inter-integrated circuit (Inter-Integrated Circuit, I2C) interfaces, inter-integrated circuit audio (Inter-Integrated Circuit Sound, I2S) interfaces, pulse code modulation (Pulse Code Modulation, PCM) interfaces, universal asynchronous receiver Transmitter (Universal Asynchronous Receiver/Transmitter, UART) interfaces, mobile industry processor interfaces (Mobile Industry Processor Interface, MIPI), general-Purpose Input/Output (GPIO) interfaces, SIM card interfaces, and/or USB interfaces, among others. The USB interface 160 is an interface conforming to the USB standard, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 160 may be used to connect a charger to charge the terminal 100, or may be used to transfer data between the terminal 100 and a peripheral device. The USB interface 160 may also be used to connect headphones through which audio is played.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is for illustrative purposes, and is not limited to the structure of the terminal 100. In other embodiments of the present application, the terminal 100 may also use different interfacing manners, or a combination of multiple interfacing manners in the foregoing embodiments.
The wireless communication function of the terminal 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 190, the wireless communication module 200, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in terminal 100 may be configured to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
Terminal 100 implements display functions through a GPU, display 120, and an application processor, etc. The GPU is a microprocessor for image processing, and is connected to the display 120 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display 120 is used to display images, videos, and the like. The display 120 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the terminal 100 may include one or more display screens 120.
In some embodiments of the present application, when the display panel is made of OLED, AMOLED or FLED material, the display screen 120 in fig. 1a may be folded. Here, a foldable display 120 means that the display may be folded at any angle at any portion and held at that angle; for example, the display 120 may be folded in half from the middle, either left-to-right or top-to-bottom.
The display 120 of the terminal 100 may be a flexible screen, which currently attracts great interest due to its unique characteristics and great potential. Compared with a traditional screen, a flexible screen is highly flexible and bendable, can provide the user with new bending-based interaction modes, and can meet more user requirements on the terminal. For a terminal equipped with a foldable display, the foldable display can be switched at any time between a small screen in the folded configuration and a large screen in the unfolded configuration. Accordingly, users also use the split-screen function more and more frequently on terminals configured with a foldable display.
The terminal 100 may implement a photographing function through an ISP, a camera 130, a video codec, a GPU, a display 120, an application processor, and the like, wherein the camera 130 includes a front camera and a rear camera.
The ISP is used to process the data fed back by the camera 130. For example, when shooting, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing, so that the electric signal is converted into an image visible to naked eyes. The ISP can carry out algorithm optimization on noise, brightness and color of the image, and can optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 130.
The camera 130 is used to take pictures or videos. An object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the terminal 100 may include 1 or N cameras 130, N being a positive integer greater than 1.
The digital signal processor is used to process digital signals, and can process other digital signals besides digital image signals. For example, when the terminal 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform on the frequency point energy, and so on.
Video codecs are used to compress or decompress digital video. The terminal 100 may support one or more video codecs. In this way, the terminal 100 may play or record video in a variety of encoding formats, such as Moving Picture Experts Group (MPEG)-1, MPEG-2, MPEG-3, and MPEG-4.
The NPU is a Neural-Network (NN) computing processor, and can rapidly process input information by referencing a biological Neural Network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent cognition of the terminal 100 can be implemented by the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The internal memory 140 may be used to store one or more computer programs, including instructions. By executing the above instructions stored in the internal memory 140, the processor 110 may cause the terminal 100 to perform the photographing method provided in some embodiments of the present application, as well as various applications, data processing, and the like. The internal memory 140 may include a program storage area and a data storage area. The program storage area may store an operating system, and may also store one or more applications (such as a gallery, contacts, etc.). The data storage area may store data (e.g., photos, contacts, etc.) created during use of the terminal 100. In addition, the internal memory 140 may include high-speed random access memory, and may also include non-volatile memory, such as one or more disk storage units, flash memory units, universal flash storage (UFS), and the like. In some embodiments, the processor 110 may cause the terminal 100 to perform the photographing method provided in the embodiments of the present application, as well as other applications and data processing, by executing instructions stored in the internal memory 140 and/or instructions stored in a memory provided in the processor 110.
The internal memory 140 may be used to store a program related to the photographing method provided in the embodiments of the present application, and the processor 110 may be used to invoke the related program of the photographing method stored in the internal memory 140 to perform the photographing method of the embodiments of the present application.
The sensor module 180 may include a pressure sensor 180A, a fingerprint sensor 180B, a touch sensor 180C, an ambient light sensor 180D, and the like.
The pressure sensor 180A is used to sense a pressure signal, and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 120. The pressure sensor 180A may be of various types, such as a resistive pressure sensor, an inductive pressure sensor, or a capacitive pressure sensor. The capacitive pressure sensor may be a device comprising at least two parallel plates of conductive material, the capacitance between the electrodes changing as a force is applied to the pressure sensor 180A, the terminal 100 determining the strength of the pressure based on the change in capacitance. When a touch operation is applied to the display screen 120, the terminal 100 detects the touch operation according to the pressure sensor 180A. The terminal 100 may also calculate the location of the touch based on the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch location, but at different touch operation strengths, may correspond to different operation instructions. For example: executing an instruction for checking the short message when the touch operation with the touch operation intensity smaller than the first pressure threshold acts on the short message application icon; and executing the instruction of newly creating the short message when the touch operation with the touch operation intensity being larger than or equal to the first pressure threshold acts on the short message application icon.
The fingerprint sensor 180B is used to collect fingerprints. The terminal 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, application lock access, fingerprint-based photographing, fingerprint-based answering of incoming calls, and the like.
The touch sensor 180C is also referred to as a "touch device". The touch sensor 180C may be disposed on the display screen 120, and the touch sensor 180C and the display screen 120 form a touch screen, also called a "touch-controlled screen". The touch sensor 180C is used to detect a touch operation acting on or near it. The touch sensor 180C may pass the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation may be provided through the display screen 120. In other embodiments, the touch sensor 180C may also be disposed on the surface of the terminal 100 at a location different from that of the display 120.
The ambient light sensor 180D is used to sense ambient light level. The terminal 100 may adaptively adjust the brightness of the display 120 according to the perceived ambient light level. The ambient light sensor 180D may also be used to automatically adjust white balance at the time of photographing. Ambient light sensor 180D may also communicate the ambient information in which the device is located to the GPU.
The ambient light sensor 180D is also used to acquire the brightness, light ratio, color temperature, etc. of the acquisition environment in which the camera 130 acquires an image.
Taking an electronic device in the form of a smartphone as an example, the photographing method in the embodiments of the present application may be implemented by the smartphone system architecture shown in fig. 1b. Referring to fig. 1b, the smartphone system architecture includes a kernel portion, a framework layer portion, and an application layer portion. The kernel portion comprises a driver layer and a real-time operating system, where the driver layer includes the GPU (graphics processor), a display driver (an LCD driver in the figure), a TP driver (touch screen driver), keys, and the like, and the real-time operating system comprises interrupt management, task scheduling, and MEM (memory management). The framework layer includes system basic capabilities, underlying software services, hardware service capabilities, and the like. The application layer comprises shooting applications, display applications, system applications, communication applications, and the like.
Fig. 2 is a block diagram of an electronic device provided in an embodiment of the present application. For ease of understanding, the software system of the electronic device related to the embodiments of the present application is described below by way of example. The software system inside the electronic device may specifically use a layered architecture and may include an application layer (APP) and a hardware abstraction layer (HAL). The functions of the APP layer and the HAL layer are briefly described below; for other system levels that the software system may include and the specific functions of each level, reference may be made to descriptions in the related art.
The APP layer may include various application packages, and illustratively, the APP layer may include a camera application for taking a picture, a gallery application for storing images, and the like.
The HAL layer is an interface layer between the operating system kernel and the hardware circuitry, and aims to abstract the hardware. The HAL layer may specifically include a plurality of functional modules; referring to fig. 2, it may include a streamer shutter algorithm module (the algorithm module for short) for performing track fusion processing on a plurality of images to obtain an original track image, and a selection module for selecting whether the original track image obtained by the algorithm module is to be processed into a preview track image or into a photographed track image for output.
On the basis of the structure of the electronic device, the embodiment of the application provides a photographing method, referring to fig. 3, and the photographing method specifically includes the following steps:
step S301: responding to a track image preview request, and acquiring an image sequence acquired by a camera in a track image preview mode; the number of images contained in the sequence of images increases with the duration of the track image preview mode.
In practical applications, the electronic device may support multiple photographing modes, such as a normal photographing mode, a panoramic photographing mode, and the like, and the photographing mode according to the embodiments of the present application is specifically a streamer shutter mode.
Specifically, a native photographing architecture configured in the electronic device generally distinguishes a photographing mode and a preview mode, and there are differences between image processing flows in the two modes and corresponding processing paths in the electronic device. It should be noted that, in the embodiment of the present application, when the user needs to take a track image, the electronic device specifically performs acquisition of an image sequence in a track image preview mode, that is, a preview mode in a streaming shutter mode.
In addition, the electronic device responds to the track image preview request, and after entering the track image preview mode, the camera continuously collects images, so that as the time for entering the track image preview mode increases, the number of images in the image sequence in the embodiment of the application continuously increases.
Step S302: if a track image photographing request is not received, performing track fusion processing on the images in the current image sequence each time the number of newly added images in the image sequence reaches a preset value to obtain a first original track image, and performing a post-processing flow of the preview stream on the first original track image to obtain a preview track image.
Specifically, the capture of track images is implemented based on the principle of image stacking: a large number of images are collected over a certain period of time, the static background in these images is retained based on a relevant algorithm, and the dynamic, differing parts of the images are synthesized to obtain a final image in which the motion track of the light is displayed. For the specific processing procedure referred to here, reference may be made to the related art, and details are not described herein.
In step S302, when the electronic device is in the track image preview mode, the camera continuously acquires the image sequence. In this process, in order to allow the user to judge the capturing effect of the track image in real time, a preset value may be configured in advance, and each time the number of newly added images in the image sequence reaches the preset value, track fusion processing may be performed on the current image sequence to obtain the first original track image.
The preset value may be selected based on actual requirements, which is not limited in the embodiments of the present application. As an example, to ensure the preview effect, the preset value may be set to 1; that is, each time the camera captures an image, the electronic device performs track fusion processing on the current image sequence. In this case, when the camera captures the n-th image, the algorithm module performs track fusion processing on the n captured images, and when the camera captures the (n+1)-th image, the algorithm module performs track fusion processing on the n+1 captured images.
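As a sketch of why such per-frame fusion can stay cheap even with a preset value of 1: under a simple "lighten" stack (a per-pixel maximum, assumed here purely for illustration and not the actual streamer shutter algorithm), fusing frames 1 to n+1 is equivalent to fusing the previous result with frame n+1, so a running accumulator can replace re-fusing the whole sequence.

```cpp
#include <cstddef>
#include <cstdint>
#include <stdexcept>
#include <vector>

// Running "lighten" accumulator: each newly added frame is folded into the
// current original track image in a single pass, which is equivalent to
// fusing all frames collected so far.
class TrackAccumulator {
public:
    // Fold one new 8-bit plane (e.g. the Y plane of a YUV frame) into the track.
    void add(const std::vector<uint8_t>& frame) {
        if (track_.empty()) { track_ = frame; return; }
        if (frame.size() != track_.size())
            throw std::invalid_argument("frame size changed during preview");
        for (std::size_t i = 0; i < track_.size(); ++i)
            if (frame[i] > track_[i]) track_[i] = frame[i];   // keep the brightest value
    }

    // Current original track image over all frames added so far.
    const std::vector<uint8_t>& current() const { return track_; }

private:
    std::vector<uint8_t> track_;
};
```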
Each time the first original track image is obtained, a post-processing flow of the preview stream can be performed on the first original track image based on the actual preview requirement to obtain the preview track image, and the preview track image is displayed through the preview interface of the electronic device. As the number of images in the acquired image sequence gradually increases, the number of images participating in track fusion processing increases, and the track in each successive preview track image gradually extends, so that the user can observe the track forming process in real time in the preview interface.
For the post-processing flow of the preview flow, reference may be made to the content in the related art, which is not described in detail in the embodiment of the present application.
Step S303: if a track image photographing request is received, performing track fusion processing on the image sequence indicated by the track image photographing request to obtain a second original track image, and performing a post-processing flow of the photographing stream on the second original track image to obtain a photographed track image.
In step S302 above, the user may observe the track formation process in real time based on the preview track image, and may therefore instruct the electronic device to take the picture once an ideal track image display effect is observed. Accordingly, when the electronic device is in the track image preview mode, if a track image photographing request is received, track fusion processing can be performed on the image sequence indicated by the track image photographing request to obtain the second original track image, and a post-processing flow of the photographing stream is performed on the second original track image to obtain the photographed track image.
In the embodiment of the application, since the user can observe the track formation process in real time based on the preview track image, the image sequence indicated by the track image photographing request can be understood as the image sequence corresponding to the preview track image displayed on the preview interface when the user performs photographing through interaction with the electronic device, so that the finally obtained image effect of the photographed track image can be ensured to meet the expectations of the user.
For example, if the user instructs the electronic device to end capturing and output a photographed track image while the preview interface is displaying a preview track image obtained based on the first n frames captured in the track image preview mode, those first n frames can be considered to be the image sequence indicated by the track image photographing request.
The post-processing flow of the photographing flow may refer to the content in the related art, which is not described in detail in the embodiment of the present application.
Fig. 4 is a schematic diagram of a mobile phone interface provided in the embodiment of the present application, and an actual application process of the embodiment of the present application is illustrated below by taking an electronic device as an example of a mobile phone with reference to fig. 2 and fig. 4.
As an example, in an actual application scenario, when a user needs to preview a track image, the streamer shutter option in the photographing APP interface illustrated in fig. 4 may be selected first to instruct the electronic device to enter the streamer shutter mode. In this case, the real scene is observed through normal preview. When it is determined that the real scene is suitable for starting to photograph a track image, the user clicks a corresponding control, for example 401 illustrated in fig. 4, to instruct the electronic device to enter the track image preview mode. In response to this user interaction, the APP issues a preview instruction in the streamer shutter mode, that is, a track image preview request, to the HAL to start shooting, and the camera performs image sequence acquisition in the track image preview mode.
In this case, if a track image photographing request is not received, each time the number of newly added images in the image sequence reaches the preset value, the algorithm module performs track fusion processing on the current image sequence to obtain a first original track image. The selection module inputs the first original track image into the post-processing path corresponding to the preview stream to obtain a preview track image, which is then sent for display, so that the user can preview the streamer shutter on the preview interface and observe the forming process of the track in real time.
During the streamer shutter preview on the preview interface, when the user observes that the preview track image meets the requirement, the user can click the corresponding control again, for example click 401 shown in fig. 4, to instruct the electronic device to end shooting. The algorithm module then performs track fusion processing on the image sequence indicated by the user to obtain a second original track image, and the selection module inputs the second original track image into the post-processing path corresponding to the photographing stream, so that the final picture, namely the photographed track image, is obtained.
According to the photographing method provided by the embodiments of the present application, in response to a track image preview request, an image sequence acquired by the camera is obtained in the track image preview mode; if a track image photographing request is not received, each time the number of newly added images in the image sequence reaches the preset value, track fusion processing is performed on the images in the current image sequence to obtain a first original track image, and a post-processing flow of the preview stream is performed on the first original track image to obtain a preview track image; if a track image photographing request is received, track fusion processing is performed on the image sequence indicated by the track image photographing request to obtain a second original track image, and a post-processing flow of the photographing stream is performed on the second original track image to obtain a photographed track image. In this way, the electronic device is instructed to start shooting the track image by issuing a track image preview request to the bottom layer, and the preview path in the photographing architecture configured in the electronic device is reasonably utilized, so that previewing of the track image can be realized without changing the original photographing architecture. The user can thus observe the shooting effect of the track image in real time and instruct the electronic device to end shooting once the shooting effect is determined to meet the requirement; at that moment, the electronic device responds to the track image photographing request and obtains the photographed track image based on the image sequence indicated by the track image photographing request, so that the actual effect of the photographed track image can be ensured to meet the user's expectation.
In an embodiment of the present application, the photographing method provided in the embodiment of the present application further includes:
responding to a track image photographing request in the track image preview mode, acquiring the image sequence indicated by the track image photographing request from a cache, and adding a photographing frame identification mark to the acquired image sequence;
and for each original track image obtained by the track fusion processing, judging whether the original track image carries the photographing frame identification mark; if so, identifying the original track image as the second original track image, and if not, identifying the original track image as the first original track image.
Fig. 5 is a schematic diagram of an identification flow of a selection module provided in an embodiment of the present application, and an exemplary photographing method provided in an embodiment of the present application is described below with reference to fig. 5.
Specifically, a buffer memory for storing images can be configured in the electronic device, and in the process of acquiring an image sequence by the camera, the electronic device can write the current image acquired by the camera into the buffer memory, so that the current image sequence can be stored in the buffer memory.
Based on the above embodiments of the present application, it can be seen that the image acquisition process of the present application is performed in the track image preview mode. Therefore, in order to facilitate the algorithm processing and simplify the path flow, the algorithm module may not specifically distinguish whether an input image is a photographing frame or a preview frame; that is, the algorithm module is unaware of the type of the front-end image, and the track fusion processing of photographing frames and preview frames is identical.
However, the photographing frame and the preview frame correspond to different post-processing flows, so the corresponding post-processing flow must be determined after the original track image output by the algorithm module is obtained. Therefore, when a track image photographing request is received, the image sequence indicated by the track image photographing request can be obtained from the cache, and a photographing frame identification mark is added to the obtained image sequence. The photographing frame identification mark may be added to one or more image frames in the image sequence.
Accordingly, when a track image photographing request has not been received, no photographing frame identification mark is added to the image sequence input into the algorithm module. After the algorithm module performs track fusion processing on the image sequence and outputs an original track image to the selection module, the selection module determines that the input image information does not carry the distinguishing mark, namely the photographing frame identification mark, and identifies it as a preview frame, namely the first original track image. The first original track image is therefore input into the post-processing path corresponding to the preview stream, and the preview track image for previewing is finally obtained.
When a track image photographing request is received, the electronic device acquires the corresponding image sequence from the cache and inputs the image sequence, to which the photographing frame identification mark has been added, into the algorithm module. The algorithm module performs track fusion processing on the input image sequence and outputs an original track image to the selection module; the photographing frame identification mark added to the image frames is retained in the original track image after track fusion. The selection module therefore determines that the input image information carries the distinguishing mark and identifies it as a photographing frame, namely the second original track image, so that the second original track image is input into the post-processing path corresponding to the photographing stream, and the photographed track image is finally obtained.
According to the photographing method provided by the embodiments of the present application, in response to a track image photographing request, the image sequence indicated by the track image photographing request is obtained from the cache, a photographing frame identification mark is added to the obtained image sequence, and photographing frames and preview frames are distinguished based on the photographing frame identification mark. Therefore, although track fusion processing is required for both photographing frames and preview frames, the algorithm module does not need to distinguish between them; instead, the subsequent selection module identifies the original track image output by the algorithm module as the first original track image or the second original track image based on the photographing frame identification mark, and the corresponding post-processing flow is determined accordingly. This facilitates image processing by the algorithm module and simplifies the path flow.
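A minimal sketch of this tag-based routing is given below, assuming the photographing frame identification mark is a boolean carried in per-frame metadata; the actual form of the mark and how it is attached inside the HAL are not specified in the text, and the pixel fusion itself is omitted here (see the accumulator sketch earlier).

```cpp
#include <cstdint>
#include <vector>

// A frame (or a fused original track image) together with the photographing
// frame identification mark; the boolean form of the mark is an assumption.
struct TaggedImage {
    std::vector<uint8_t> pixels;
    bool photoMark = false;   // set only on sequences fetched for a photographing request
};

// Add the photographing frame identification mark to the cached sequence
// indicated by the track image photographing request.
void markPhotographingFrames(std::vector<TaggedImage>& sequence) {
    for (TaggedImage& f : sequence) f.photoMark = true;
}

// The algorithm module fuses frames without distinguishing their type; it only
// has to carry the mark through to the fused result.
bool markSurvivesFusion(const std::vector<TaggedImage>& sequence) {
    for (const TaggedImage& f : sequence)
        if (f.photoMark) return true;
    return false;
}

enum class Route { PreviewPostProcessing, PhotographingPostProcessing };

// Selection module: decide the post-processing path from the mark alone.
Route select(bool fusedImageCarriesMark) {
    return fusedImageCarriesMark ? Route::PhotographingPostProcessing   // second original track image
                                 : Route::PreviewPostProcessing;        // first original track image
}
```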
In an embodiment of the present application, after the capturing the image sequence captured by the camera in the track image preview mode, the method further includes:
the sequence of images is stored in a buffer configured for the preview path.
As mentioned above, in the native photographing architecture configured in the electronic device, the preview stream and the photographing stream correspond to different processing flows and paths. In the current track image photographing scheme, photographing of the track image is realized by issuing a photographing request; in the embodiments of the present application, however, a track image preview request is issued, in which case the image sequence acquired by the camera can be stored in a buffer configured for the preview path. It should be appreciated that the buffers configured for the preview path and the photographing path are used to fulfill different requirements and differ in both path and function.
If a photographing request were issued to photograph the track image and the preview function were then to be realized, the intrinsic logic of the internal paths of the photographing framework would not be satisfied, the original photographing framework would need to be changed, and the scheme would be difficult to implement. In addition, when a photographing request is issued to photograph the track image, the image sequence acquired by the camera is stored in a buffer configured for the photographing path, and the performance of that buffer is not adapted to the preview requirement, so even if a preview function could be realized, the preview effect would not be ideal.
In the embodiments of the present application, shooting of the track image is started by issuing a track image preview request, so the images acquired by the camera can be stored in the buffer configured for the preview path. The internal paths of the photographing framework are thus reasonably utilized and the internal buffer management is reasonable, so that the preview effect of the preview track image can be guaranteed while previewing of the track image is conveniently realized.
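The buffer on the preview path can be pictured as below. This is a sketch under stated assumptions only: the locking scheme, the decision to keep every frame rather than a bounded ring, and the clear-on-exit behaviour are illustrative and not taken from the text.

```cpp
#include <cstdint>
#include <mutex>
#include <utility>
#include <vector>

// Cache configured for the preview path. The camera side appends frames as they
// arrive in track image preview mode; a later photographing request takes a
// snapshot of the sequence it indicates.
class PreviewPathCache {
public:
    void store(std::vector<uint8_t> frame) {
        std::lock_guard<std::mutex> lock(mutex_);
        frames_.push_back(std::move(frame));
    }

    // Called on a track image photographing request: the sequence cached so far
    // is exactly the one whose fusion the user is currently previewing.
    std::vector<std::vector<uint8_t>> snapshot() const {
        std::lock_guard<std::mutex> lock(mutex_);
        return frames_;
    }

    // For example when the track image preview mode is exited.
    void clear() {
        std::lock_guard<std::mutex> lock(mutex_);
        frames_.clear();
    }

private:
    mutable std::mutex mutex_;
    std::vector<std::vector<uint8_t>> frames_;
};
```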
In an embodiment of the present application, after the acquiring the image sequence acquired by the camera in the track image preview mode in response to the track image preview request, the method further includes:
sampling the image sequence based on the sampling frame rate indicated by the track image preview request, and caching the sampled image sequence;
Each time the number of newly added images in the image sequence reaches a preset value, performing track fusion processing on the images in the current image sequence to obtain a first original track image, including:
each time the number of newly added images in the sampled image sequence reaches a preset value, performing track fusion processing on the images in the current sampled image sequence to obtain a first original track image;
the step of performing track fusion processing on the image sequence indicated by the track image photographing request to obtain a second original track image includes:
and performing track fusion processing on the sampled image sequence indicated by the track image photographing request to obtain a second original track image.
Specifically, in the photographing architecture configured inside the electronic device, the frame rate in the photographing mode is a fixed value and adjustment is not supported, so the obtained track images tend to have a single, uniform effect. In the preview mode, by contrast, the frame rate supports adjustment. Because the embodiments of the present application shoot the track image in the streamer shutter preview mode, the required frame rate can be determined according to actual requirements, which reduces the processing load of the electronic device and alleviates the problem of a single track image effect.
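One way the sampling frame rate indicated by the track image preview request could be applied is a per-frame keep/drop decision based on sensor timestamps, as sketched below; the nanosecond timestamps and the specific policy are assumptions for illustration rather than details taken from the text. Kept frames would then be cached and fused as described above.

```cpp
#include <cstdint>

// Keep roughly `sampleFps` frames per second out of whatever rate the camera
// delivers, based on sensor timestamps; kept frames go on to be cached and fused.
class FrameSampler {
public:
    explicit FrameSampler(double sampleFps)
        : intervalNs_(static_cast<int64_t>(1e9 / sampleFps)) {}

    // Returns true if the frame with this timestamp (nanoseconds) should be kept.
    bool keep(int64_t timestampNs) {
        if (lastKeptNs_ < 0 || timestampNs - lastKeptNs_ >= intervalNs_) {
            lastKeptNs_ = timestampNs;
            return true;
        }
        return false;
    }

private:
    int64_t intervalNs_;
    int64_t lastKeptNs_ = -1;
};
```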
In an embodiment of the present application, the step of performing a post-processing procedure of the preview stream on the first original track image to obtain a preview track image includes:
and performing downsampling processing on the first original track image to obtain the preview track image.
In practical applications, the preview track image is only used for the user to judge whether the currently generated track effect meets the expectations, so the requirement for image resolution is lower than that of the photographed track image. Therefore, in order to reduce the processing load of the electronic device for displaying the preview track image and improve the preview efficiency, the first original track image may be subjected to downsampling processing, the resolution of the original track image is reduced, the preview track image is obtained, and the preview track image is displayed through the preview interface.
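A minimal sketch of the downsampling idea follows, assuming a single 8-bit plane and a 2x2 box filter with even dimensions; the real preview post-processing runs in the ISP path and is considerably more involved.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// 2x2 box-filter downsample of an 8-bit plane of the first original track
// image, halving each dimension (width and height assumed even here).
std::vector<uint8_t> downsample2x(const std::vector<uint8_t>& src, int w, int h) {
    const int ow = w / 2, oh = h / 2;
    std::vector<uint8_t> dst(static_cast<std::size_t>(ow) * oh);
    for (int y = 0; y < oh; ++y) {
        for (int x = 0; x < ow; ++x) {
            const int sum = src[(2 * y) * w + 2 * x]     + src[(2 * y) * w + 2 * x + 1] +
                            src[(2 * y + 1) * w + 2 * x] + src[(2 * y + 1) * w + 2 * x + 1];
            dst[static_cast<std::size_t>(y) * ow + x] = static_cast<uint8_t>(sum / 4);
        }
    }
    return dst;
}
```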
In an embodiment of the present application, the step of obtaining the photographed track image includes:
and encoding the second original track image into a preset output format to obtain a photographed track image.
Specifically, to obtain a photographed track image, the image needs to be output in an output format supported by the electronic device. Thus, the second original track image may be encoded into a preset output format based on the actual configuration of the electronic device, resulting in a final photographed track image. As an example, the preset output format may specifically be a JPEG (Joint Photographic Experts Group, an image file format) format.
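As a sketch of the encoding step, the following uses the widely available libjpeg / libjpeg-turbo C API on an interleaved 8-bit RGB buffer. The patent only requires "a preset output format"; the in-memory destination and the quality value are assumptions for illustration.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>    // jpeglib.h traditionally expects stdio declarations first
#include <cstdlib>
#include <vector>
#include <jpeglib.h>

// Encode an interleaved 8-bit RGB image (the second original track image after
// format conversion) into an in-memory JPEG. Quality 95 is an assumption.
std::vector<uint8_t> encodeJpeg(const uint8_t* rgb, int width, int height) {
    jpeg_compress_struct cinfo{};
    jpeg_error_mgr jerr{};
    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_compress(&cinfo);

    unsigned char* out = nullptr;
    unsigned long outSize = 0;
    jpeg_mem_dest(&cinfo, &out, &outSize);      // write to a library-allocated buffer

    cinfo.image_width = width;
    cinfo.image_height = height;
    cinfo.input_components = 3;                 // interleaved R, G, B
    cinfo.in_color_space = JCS_RGB;
    jpeg_set_defaults(&cinfo);
    jpeg_set_quality(&cinfo, 95, TRUE);

    jpeg_start_compress(&cinfo, TRUE);
    while (cinfo.next_scanline < cinfo.image_height) {
        JSAMPROW row = const_cast<uint8_t*>(rgb) +
                       static_cast<std::size_t>(cinfo.next_scanline) * width * 3;
        jpeg_write_scanlines(&cinfo, &row, 1);
    }
    jpeg_finish_compress(&cinfo);

    std::vector<uint8_t> jpeg(out, out + outSize);
    jpeg_destroy_compress(&cinfo);
    std::free(out);                              // buffer allocated by jpeg_mem_dest
    return jpeg;
}
```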
In an embodiment of the present application, before the acquiring, in response to the track image preview request, the image sequence acquired by the camera in the track image preview mode, the method further includes:
in response to the first operation, displaying a first preview image through a preview interface; the first preview image is a preview image of an image currently collected by the camera, and the first operation is used for indicating the electronic device to enter a streamer shutter mode.
In the embodiments of the present application, after the track image preview request is issued and before the track image photographing request is issued, all images acquired by the camera are applied to the generation of the original track image. Therefore, in order to ensure the effect of the track image, after the electronic device enters the streamer shutter mode and before the track image preview request is issued, image display can be performed in the normal preview mode. Specifically, the image frame currently collected by the camera is processed into a first preview image meeting the preview requirement, and the first preview image is displayed through the preview interface. In this case, the user can observe the real scene based on the first preview image and, upon determining that the real scene is suitable for starting to photograph a track image, instruct the electronic device to enter the track image preview mode to start shooting the track image, thereby ensuring the imaging effect of the track image.
Specifically, the user may instruct the electronic device to enter the streamer shutter mode through a first operation, where the first operation is an interaction with the photographing APP, and the specific form of the first operation depends on the specific design of the photographing APP. Taking fig. 4 as an example, the first operation is to select the streamer shutter option from among the multiple shooting modes on the preview interface.
In one embodiment of the present application, after receiving the track image photographing request, the track image preview mode may be exited, and the first preview image may be displayed through the preview interface.
Specifically, the user can observe the shooting effect of the track image in real time through the preview interface. Therefore, after the track image photographing request is received, it can be considered that a photographed track image meeting the user requirement has been captured, so the electronic device can exit the track image preview mode and perform image display in the normal preview mode, that is, display through the preview interface the first preview image corresponding to the image frame currently acquired by the camera, thereby reducing the image processing load of the electronic device.
Fig. 6 is a schematic structural diagram of an electronic device provided in an embodiment of the present application. The photographing method provided in the embodiments of the present application is further described below with reference to Fig. 6.
Specifically, after the upper layer issues a track image preview request, the camera performs image sequence acquisition in the track image preview mode. At this time, the relevant sensors configured in the electronic device, for example, a T sensor, a W sensor and a UW sensor, detect the signals acquired by the camera to obtain image data corresponding to the image sequence, and output the image data to the IFE (Image Front End, an image data processing hardware module) for processing, for example, to IFE 0, IFE 1 and IFE 2 illustrated in the figure, so as to obtain RAW (an image data format) data and META (metadata) corresponding to the processed image data.
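For concreteness only, the per-frame payload produced by the front end can be pictured as RAW pixel data together with its META data and an optional photographing frame identification mark; the dataclass below is a schematic assumption used to make the later sketches concrete, not the data layout actually used by the IFE.

from dataclasses import dataclass, field

import numpy as np


@dataclass
class FrontEndFrame:
    """Schematic frame emitted by the image front end (IFE)."""
    raw: np.ndarray                            # RAW-format pixel data
    meta: dict = field(default_factory=dict)   # META data: timestamp, exposure, ...
    photo_mark: bool = False                   # photographing frame identification mark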
In the case where no track image photographing request has been received, the RAW data and META corresponding to the image sequence are input, with the source data module as the entry, into the pipeline that performs image processing. The RAW2YUV module encodes the image data corresponding to the image sequence into the YUV format (an image data format), that is, converts the RAW-format data corresponding to the image sequence into the YUV format. The YUV-format image data is input into the algorithm module, which performs track fusion processing on the input image data. The resulting original track image is transmitted to the selection module. The selection module determines that no photographing frame identification mark is carried in it, identifies the original track image as the first original track image, and transmits the first original track image, through the output port connected to the preview channel of the streamer shutter, to the post-processing module corresponding to the preview stream, that is, to the ISP (Image Signal Processor) module for downsampling processing, so that the preview track image is obtained and transmitted.
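A minimal sketch of this preview branch follows, under two explicit assumptions: that track fusion can be approximated by a per-pixel brightness maximum over the YUV-converted frames, and that the preview post-processing is a plain downsample; the actual RAW2YUV, algorithm and ISP modules of this application are not disclosed at this level of detail, so the operators below are stand-ins.

import numpy as np


def fuse_track(frames: list) -> np.ndarray:
    """Track fusion (assumed operator): keep the brightest value seen per pixel."""
    fused = frames[0].copy()
    for frame in frames[1:]:
        np.maximum(fused, frame, out=fused)
    return fused


def downsample(image: np.ndarray, factor: int = 2) -> np.ndarray:
    """Preview post-processing: reduce resolution for on-screen display."""
    return image[::factor, ::factor]


def preview_branch(yuv_frames: list) -> np.ndarray:
    """No photographing frame mark carried: produce the preview track image."""
    first_original_track_image = fuse_track(yuv_frames)
    return downsample(first_original_track_image)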
Meanwhile, the ZSL Buffer (Zero Shutter Lag Buffer) caches the image data output by the IFE. When a track image photographing request is received, the image data corresponding to the image sequence indicated by the track image photographing request is obtained from the ZSL Buffer, the photographing frame identification mark is added to this image data, and the marked image data is input into the source data module. The subsequent processing flow, from the RAW2YUV module to the algorithm module, is the same as in the case where no track image photographing request has been received, and is not repeated here. After the algorithm module outputs the original track image to the selection module, the selection module determines that the photographing frame identification mark is carried in it, identifies the original track image as the second original track image, and transmits the second original track image, through the output port connected to the photographing channel of the streamer shutter, to the post-processing module corresponding to the photographing stream, that is, the second original track image is encoded into the JPEG format by the JPEG module to obtain the final photographed track image, which is transmitted to the Capture Buffer of the application.
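The photographing branch can be sketched in the same spirit. The ZslBuffer and select functions below are assumptions for illustration (names, capacity and the dict-based frame representation are not from this application); they only show how the photographing frame identification mark lets the selection module route the fused result to the photographing post-processing instead of the preview post-processing.

from collections import deque


class ZslBuffer:
    """Ring buffer of the most recent front-end frames (zero shutter lag)."""

    def __init__(self, capacity: int = 30) -> None:
        self._frames = deque(maxlen=capacity)

    def push(self, raw, meta) -> None:
        # Frames are buffered as plain dicts here; no mark is carried yet.
        self._frames.append({"raw": raw, "meta": meta, "photo_mark": False})

    def take_for_photographing_request(self) -> list:
        """Return the buffered sequence with the identification mark set."""
        return [dict(frame, photo_mark=True) for frame in self._frames]


def select(original_track_image, carries_photo_mark: bool):
    """Selection module: route the fused image by the carried mark."""
    if carries_photo_mark:
        # Second original track image -> photographing post-processing (JPEG).
        return "photographing_post_processing", original_track_image
    # First original track image -> preview post-processing (downsampling).
    return "preview_post_processing", original_track_image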
Therefore, according to the photographing method provided by the embodiments of the present application, the shooting of the track image is started by means of the track image preview request, the internal paths of the photographing framework are reasonably utilized, the buffer management is sound, photographing frames and preview frames are distinguished at the bottom layer by the selection module, the algorithm module does not need to distinguish between photographing frames and preview frames, and the path design is concise.
In a specific implementation, the present application further provides a computer storage medium, where the computer storage medium may store a program, and when the program runs, the device in which the computer-readable storage medium is located is controlled to execute some or all of the steps in the foregoing embodiments. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
In a specific implementation, the embodiments of the present application further provide a computer program product, where the computer program product contains executable instructions, and the executable instructions, when executed on a computer, cause the computer to perform some or all of the steps in the foregoing method embodiments.
Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of these implementations. Embodiments of the present application may be implemented as a computer program or program code that is executed on a programmable system including at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.
Program code may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices in a known manner. For purposes of this application, a processing system includes any system having a processor such as, for example, a digital signal processor (Digital Signal Processor, DSP), microcontroller, application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or microprocessor.
The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. Program code may also be implemented in assembly or machine language, if desired. Indeed, the mechanisms described in the present application are not limited in scope to any particular programming language. In either case, the language may be a compiled or interpreted language.
In some cases, the disclosed embodiments may be implemented in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors. For example, the instructions may be distributed over a network or through other computer-readable media. Thus, a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, flash memory, or a tangible machine-readable memory used for transmitting information over the Internet in an electrical, optical, acoustical or other form of propagated signal (e.g., carrier waves, infrared signals, digital signals, etc.). Thus, a machine-readable medium includes any type of machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).
In the drawings, some structural or methodological features may be shown in a particular arrangement and/or order. However, it should be understood that such a particular arrangement and/or ordering may not be required. Rather, in some embodiments, these features may be arranged in a different manner and/or order than shown in the drawings of the specification. Additionally, the inclusion of structural or methodological features in a particular figure is not meant to imply that such features are required in all embodiments, and in some embodiments, may not be included or may be combined with other features.
It should be noted that, in the embodiments of the present application, each unit/module is a logic unit/module. Physically, one logic unit/module may be one physical unit/module, may be part of one physical unit/module, or may be implemented by a combination of multiple physical units/modules; the physical implementation of the logic unit/module itself is not the most important, and the combination of functions implemented by these logic units/modules is the key to solving the technical problem raised by the present application. Furthermore, to highlight the innovative part of the present application, the above device embodiments do not introduce units/modules that are less closely related to solving the technical problem raised by the present application, which does not mean that the above device embodiments contain no other units/modules.
It should be noted that in the examples and descriptions of this patent, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
While the present application has been shown and described with reference to certain preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present application.

Claims (11)

1. A photographing method, applied to an electronic device, the method comprising:
responding to a track image preview request, and acquiring an image sequence acquired by a camera in a track image preview mode; the number of images contained in the image sequence increases with the duration of the track image preview mode;
if a photo request of the light track image is not received, performing light track fusion processing on images in the current image sequence every time the number of the newly added images in the image sequence reaches a preset value to obtain a first original light track image, and performing post-processing flow of preview flow on the first original light track image to obtain a preview light track image;
and if the track image photographing request is received, performing track fusion processing on an image sequence indicated by the track image photographing request to obtain a second original track image, and performing post-processing flow of photographing flow on the second original track image to obtain a photographed track image.
2. The method according to claim 1, wherein the method further comprises:
responding to the track image photographing request in the track image preview mode, acquiring an image sequence indicated by the track image photographing request from a cache, and adding a photographing frame identification mark for the acquired image sequence;
and judging, for the original track image obtained by each track fusion processing, whether the original track image carries the photographing frame identification mark; if so, identifying the original track image as the second original track image, and if not, identifying the original track image as the first original track image.
3. The method of claim 2, further comprising, after the capturing of the sequence of images captured by the camera in the track image preview mode:
the image sequence is stored in a buffer configured for preview access.
4. The method according to claim 1 or 2, wherein after acquiring the sequence of images acquired by the camera in the track image preview mode in response to the track image preview request, further comprising:
sampling the image sequence based on the sampling frame rate indicated by the track image preview request, and caching the sampled image sequence;
each time the number of newly added images in the image sequence reaches a preset value, performing track fusion processing on the images in the current image sequence to obtain a first original track image, including:
Each time the number of newly added images in the sampled image sequence reaches the preset value, performing track fusion processing on the images in the current sampled image sequence to obtain the first original track image;
the step of performing track fusion processing on the image sequence indicated by the track image photographing request to obtain a second original track image comprises the following steps:
and performing track fusion processing on the sampled image sequence indicated by the track image photographing request to obtain the second original track image.
5. The method according to claim 1 or 2, wherein the step of performing a post-processing procedure of the preview stream on the first original track image to obtain a preview track image comprises:
and carrying out downsampling processing on the first original light track image to obtain the preview light track image.
6. The method according to claim 1 or 2, wherein the step of performing a post-processing procedure of the photographing flow on the second original track image to obtain a photographed track image comprises:
and encoding the second original light track image into a preset output format to obtain the photographed light track image.
7. The method of claim 1, wherein before the image sequence acquired by the camera in the track image preview mode is acquired in response to the track image preview request, the method further comprises:
in response to the first operation, displaying a first preview image through a preview interface; the first preview image is a preview image of an image currently collected by the camera, and the first operation is used for indicating the electronic device to enter a streamer shutter mode.
8. The method of claim 7, wherein the method further comprises:
and after receiving the track image photographing request, exiting the track image preview mode, and displaying the first preview image through the preview interface.
9. An electronic device, comprising:
one or more processors and memory;
the memory is coupled with the one or more processors, the memory for storing computer program code comprising computer instructions that the one or more processors invoke to cause the electronic device to perform the method of any of claims 1-8.
10. A computer readable storage medium comprising a computer program which, when run on an electronic device, causes the electronic device to perform the method of any one of claims 1 to 8.
11. A computer program product comprising executable instructions which, when executed on a computer, cause the computer to perform the method of any of claims 1-8.
CN202310971174.1A 2023-08-02 2023-08-02 Photographing method, electronic device, storage medium and program product Pending CN117692763A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310971174.1A CN117692763A (en) 2023-08-02 2023-08-02 Photographing method, electronic device, storage medium and program product

Publications (1)

Publication Number Publication Date
CN117692763A true CN117692763A (en) 2024-03-12

Family

ID=90127249

Country Status (1)

Country Link
CN (1) CN117692763A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180108111A1 (en) * 2015-06-19 2018-04-19 Alibaba Group Holding Limited Previewing dynamic images and expressions
CN112399087A (en) * 2020-12-07 2021-02-23 Oppo(重庆)智能科技有限公司 Image processing method, image processing apparatus, image capturing apparatus, electronic device, and storage medium
CN115002332A (en) * 2021-03-01 2022-09-02 北京小米移动软件有限公司 Shooting processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination