CN113037996A - Image processing method and device and electronic equipment - Google Patents

Image processing method and device and electronic equipment

Info

Publication number
CN113037996A
Authority
CN
China
Prior art keywords
image
target
camera
system layer
image processing
Prior art date
Legal status
Withdrawn
Application number
CN202110119051.6A
Other languages
Chinese (zh)
Inventor
邓育杰
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202110119051.6A
Publication of CN113037996A

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/665Control of cameras or camera modules involving internal camera communication with the image sensor, e.g. synchronising or multiplexing SSIS control signals

Abstract

The application discloses an image processing method, an image processing device, and an electronic device, belonging to the field of communication technology, which can solve the problem of low migration efficiency of the whole SAT function. The image processing method comprises the following steps: in a first system layer, determining a target camera corresponding to a target shooting parameter from M cameras; in the first system layer, determining an image frame sequence corresponding to the target camera as a target image frame sequence, and transmitting the target image frame sequence to a second system layer, where the target image frame sequence includes at least one frame of image collected by the target camera; and in the second system layer, outputting the target image frame sequence. The image processing method provided by the embodiments of the application can be applied in the process of outputting an image frame sequence by an electronic device.

Description

Image processing method and device and electronic equipment
Technical Field
The application belongs to the technical field of communication, and particularly relates to an image processing method and device and electronic equipment.
Background
Currently, in a scene in which a user shoots an object with an electronic device in a zoom mode, the user may enable a Spatial Alignment Transition (SAT) function in the electronic device, so that the electronic device executes the SAT function to capture a clear image of the object.
In the related art, taking the electronic device having the camera 1 and the camera 2 as an example, in the process of executing the SAT function, the camera 1 and the camera 2 in the hardware layer of the electronic device may respectively capture a plurality of images, and then send the plurality of images to the operating system layer of the electronic device to generate a corresponding image buffer queue. In this way, the operating system layer may determine a camera (e.g., the camera 2) corresponding to the zoom magnification according to the set zoom magnification, and then output an image buffer queue corresponding to the camera 2, so that the display screen of the hardware layer may display the image buffer queue.
However, in the process of executing the SAT function, the operating system layer selects the corresponding camera from the plurality of cameras in the hardware layer according to the set zoom magnification, which means that the operating system layer and the hardware layer are tightly coupled. Therefore, when the SAT function of one electronic device needs to be migrated to an electronic device without the SAT function, the code of the operating system layer of the electronic device without the SAT function needs to be modified, resulting in low migration efficiency of the whole SAT function.
Disclosure of Invention
An object of the embodiments of the present application is to provide an image processing method, an image processing apparatus, and an electronic device, which can solve the problem of low migration efficiency of the whole SAT function.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an image processing method, which is applied to an electronic device including M cameras, where M is a positive integer greater than 1, and the method includes: in a first system layer, determining a target camera corresponding to a target shooting parameter from M cameras; determining an image frame sequence corresponding to a target camera as a target image frame sequence on a first system layer, and transmitting the target image frame sequence to a second system layer; the target image frame sequence includes: at least one frame of image collected by the target camera; at the second system level, a sequence of target image frames is output.
In a second aspect, an embodiment of the present application provides an image processing apparatus, where the image processing apparatus includes M cameras, where M is a positive integer greater than 1, and the image processing apparatus further includes: the device comprises a determining module, a transmission module and an output module.
The determining module is used for determining a target camera corresponding to the target shooting parameter from M cameras in a first system layer; determining an image frame sequence corresponding to the target camera as a target image frame sequence on a first system layer; the target image frame sequence includes: at least one frame of image collected by the target camera. And the transmission module is used for transmitting the target image frame sequence determined by the determination module to the second system layer. And the output module is used for outputting the target image frame sequence transmitted by the transmission module at the second system layer.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In this embodiment, the electronic device may determine, in the first system layer, a target camera corresponding to the target shooting parameter from the M cameras, determine, in the first system layer, the image frame sequence acquired by the target camera as the target image frame sequence (which includes at least one frame of image), and transmit the target image frame sequence to the second system layer, so that the electronic device may output the target image frame sequence in the second system layer. Because a new system layer (i.e., the first system layer) can be configured in the electronic device, in the process of executing the SAT function the electronic device determines, in the new system layer, the target camera corresponding to the target shooting parameter and the target image frame sequence corresponding to the target camera, and outputs the target image frame sequence in the operating system layer (i.e., the second system layer). In other words, the operating system layer and the hardware layer of the electronic device are not tightly coupled. Therefore, when the SAT function of one electronic device needs to be migrated to an electronic device without the SAT function, the code of the operating system layer of the electronic device without the SAT function does not need to be modified, and thus the migration efficiency of the whole SAT function can be improved.
Drawings
Fig. 1 is an architecture diagram of an electronic apparatus in the related art;
Fig. 2 is an architecture diagram of an electronic device provided by an embodiment of the present application;
Fig. 3 is a first schematic diagram of an image processing method provided by an embodiment of the present application;
Fig. 4 is a second schematic diagram of an image processing method provided by an embodiment of the present application;
Fig. 5 is a third schematic diagram of an image processing method provided by an embodiment of the present application;
Fig. 6 is a fourth schematic diagram of an image processing method provided by an embodiment of the present application;
Fig. 7 is a fifth schematic diagram of an image processing method provided by an embodiment of the present application;
Fig. 8 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present application;
Fig. 9 is a schematic structural diagram of an electronic device provided by an embodiment of the present application;
Fig. 10 is a hardware schematic diagram of an electronic device provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances, so that the embodiments of the application may be practiced in sequences other than those illustrated or described herein. The terms "first", "second", and the like are generally used in a generic sense and do not limit the number of objects; for example, a first object may be one object or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates that the objects before and after it are in an "or" relationship.
The image processing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Fig. 1 shows an architecture diagram of an electronic apparatus 100 in the related art. As shown in fig. 1, the architecture of electronic device 100 includes an application layer 101, an operating system layer 102, and a hardware layer 103.
The application layer 101 includes various applications (e.g., a shooting-type application) installed in the electronic device 100.
Operating system layer 102 includes the operating system of electronic device 100, which may be any one or more computer operating systems that implement business processing via processes, such as a Linux, Unix, Android, iOS, or Windows operating system.
The hardware layer 103 includes hardware such as a Central Processing Unit (CPU), a Memory Management Unit (MMU), a memory, a camera, and a display.
In the related art, assuming that a user needs to shoot an object 1 through the electronic device 100 in a zoom mode, the user may trigger the electronic device 100 to display an interface of a shooting application program and select an SAT function option in that interface, so that the electronic device 100 turns on the SAT function and sends a preview request from the application layer 101 to the operating system layer 102, where the preview request includes a zoom magnification (and/or an angle of view). The electronic device 100 may then determine, in the operating system layer 102, a camera 1 corresponding to the zoom magnification (and/or the angle of view) from all cameras of the electronic device 100, control the camera 1 of the hardware layer 103 to collect a plurality of preview images, and receive the plurality of preview images sent by the hardware layer 103 to generate an image buffer queue 1. Next, the electronic device 100 may output the image buffer queue 1, in the operating system layer 102, to an Image Signal Processor (ISP) of the hardware layer 103, so that the ISP may perform image processing on the image buffer queue 1 to obtain a preview image frame sequence and transmit the preview image frame sequence to the shooting class application of the application layer 101, which displays the preview screen in its interface.
However, since the operating system layer 102 and the hardware layer 103 of the electronic device 100 are tightly coupled during execution of the SAT function, when the SAT function of the electronic device 100 needs to be migrated to an electronic device without the SAT function, the code of the operating system layer of the electronic device without the SAT function needs to be modified; therefore, the migration efficiency of the whole SAT function is low.
Fig. 2 shows an architecture diagram of an electronic device 200 according to an embodiment of the application. As shown in FIG. 2, the architecture of electronic device 200 includes an application layer 201, a customized system layer 202, an operating system layer 203, and a hardware layer 204.
The customized system layer 202 may include an operating system of the electronic device 200, which may be any one or more computer operating systems that implement business processing via processes.
In this embodiment, assuming that a user needs to shoot an object 1 through the electronic device 200 in a zoom mode, the user may trigger the electronic device 200 to start the SAT function in an interface of a shooting application program and send a preview request from the application layer 201 to the customized system layer 202, where the preview request includes a zoom magnification (and/or an angle of view). The electronic device 200 may then determine, in the customized system layer 202, a camera 1 corresponding to the zoom magnification (and/or the angle of view) from all cameras of the electronic device 200, control the camera 1 of the hardware layer 204 to capture a plurality of preview images, and receive the plurality of preview images sent by the hardware layer 204 to generate an image buffer queue 1 corresponding to the camera 1. Next, the electronic device 200 may transmit the image buffer queue 1 from the customized system layer 202 to the operating system layer 203, and output the image buffer queue 1, in the operating system layer 203, to the ISP of the hardware layer 204, so that the ISP may perform image processing on the image buffer queue 1 to obtain a preview image frame sequence and transmit it to the shooting class application of the application layer 201, which displays the preview screen in its interface.
It is understood that the electronic device 200 selects the corresponding camera from the plurality of cameras of the hardware layer 204 in the newly configured system layer (i.e., the customized system layer 202) according to the zoom magnification, which means that the operating system layer 203 and the hardware layer 204 are not tightly coupled; therefore, the migration efficiency of the whole SAT function can be improved.
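The decoupled flow of Fig. 2 can be sketched roughly as follows. This is an illustrative Python sketch, not the patent's actual implementation: all class names, method names, and the zoom-interval-to-camera mapping are assumptions made for illustration.

```python
class OperatingSystemLayer:
    """Stands in for the operating system layer 203: it only forwards
    whatever buffer queue it is given, with no camera-selection logic."""
    def output(self, buffer_queue):
        self.forwarded = buffer_queue  # in a real device, handed to the ISP
        return buffer_queue

class CustomizedSystemLayer:
    """Stands in for the customized system layer 202, which owns the
    zoom-to-camera selection so the OS layer stays decoupled."""
    def __init__(self, os_layer, zoom_to_camera):
        self.os_layer = os_layer
        # e.g. {(1.0, 4.9): "wide", (5.0, 10.0): "tele"} -- illustrative
        self.zoom_to_camera = zoom_to_camera

    def handle_preview_request(self, zoom, frames_by_camera):
        for (low, high), camera_id in self.zoom_to_camera.items():
            if low <= zoom <= high:
                # Camera selection happens here, not in the OS layer.
                return self.os_layer.output(frames_by_camera[camera_id])
        raise ValueError("no camera covers this zoom magnification")
```

Porting the SAT function to another device would then mean carrying over only `CustomizedSystemLayer`, provided the OS layer exposes an `output`-style interface.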
Fig. 3 shows a flowchart of an image processing method according to an embodiment of the present application. As shown in fig. 3, the image processing method provided in the embodiment of the present application may include steps 101 to 103 described below.
In step 101, the image processing apparatus determines a target camera corresponding to the target shooting parameter from the M cameras in the first system layer.
In the embodiment of the application, the image processing apparatus is an image processing apparatus comprising M cameras, where M is a positive integer greater than 1. It is understood that the image processing apparatus may specifically be an apparatus comprising at least two cameras.
Optionally, in this embodiment of the application, the image processing apparatus may specifically be an image processing apparatus having the SAT function.
Optionally, in this embodiment of the application, the hardware parameters of each of the at least two cameras are different, and the hardware parameters may include at least one of the following: focal length, viewing angle, etc.
Optionally, in one scenario, in a case that the image processing apparatus displays a desktop, if the image processing apparatus detects a click input of a user on an application icon of a target application program in the desktop, the target application program may be opened, and a first interface of the target application program is displayed. Then, if the image processing apparatus detects a selection input of the "SAT function" option in the first interface by the user, the image processing apparatus may turn on the SAT function, and determine the target camera from the M cameras at the first system level.
Alternatively, in another scenario, when the image processing apparatus has started the SAT function and displays a preview screen in the first interface, if a shooting input of the user in the first interface is detected, the image processing apparatus may determine, in the first system layer, the target camera from the M cameras.
It can be understood that if the user triggers the image processing apparatus to turn on the SAT function, it may be considered that the user may need the image processing apparatus to zoom to shoot the object, and therefore, the image processing apparatus may determine the target camera corresponding to the target shooting parameter.
Optionally, in this embodiment of the application, the target application may specifically be: a camera-like application. The "image pickup application" may be understood as: an application program having an image pickup function.
Optionally, in this embodiment of the application, the first interface may specifically be: a capture preview interface of the target application.
Optionally, in an embodiment of the present application, the target shooting parameter includes at least one of: zoom magnification, angle of view, etc. The target photographing parameter may be a photographing parameter set in the first interface by a user.
Optionally, in this embodiment of the application, the target camera may include one camera or multiple cameras.
Optionally, in this embodiment of the application, in a case that the target shooting parameter includes a zoom magnification, the image processing apparatus may determine, in the first system layer, the target camera corresponding to the target shooting parameter from the M cameras according to at least one first correspondence, where each first correspondence is a correspondence between one zoom magnification interval and one camera identifier.
Further optionally, in this embodiment of the application, when the target camera includes one camera and the target shooting parameter includes a zoom magnification, the image processing apparatus may determine, from the M zoom magnification intervals, the zoom magnification interval in which the zoom magnification lies, and then determine the camera identifier in the first correspondence of that zoom magnification interval as the target camera identifier, so that the camera of the M cameras indicated by the target camera identifier is determined as the target camera.
Further optionally, in this embodiment, in a case that the target camera includes multiple cameras and the target shooting parameter includes a zoom magnification, if none of the M zoom magnification intervals contains the zoom magnification, the image processing apparatus may proceed as follows. From the 2M critical values of the M zoom magnification intervals (each of the 2M critical values being different), the apparatus determines one critical value, namely the largest critical value smaller than the zoom magnification, and determines the zoom magnification interval corresponding to that critical value as the first zoom magnification interval; it also determines another critical value, namely the smallest critical value larger than the zoom magnification, and determines the zoom magnification interval corresponding to that critical value as the second zoom magnification interval. The image processing apparatus may then determine the camera identifier in the first correspondence of the first zoom magnification interval as the first target camera identifier and the camera identifier in the first correspondence of the second zoom magnification interval as the second target camera identifier, so that the two cameras of the M cameras indicated by these two identifiers are determined as the target cameras.
It can be understood that, if none of the M zoom magnification intervals contains the zoom magnification (i.e., the target shooting parameter), the zoom magnification may be considered to be in a composite interval (i.e., an interval between two of the M zoom magnification intervals). In that case the user may need the two image frame sequences corresponding to the two cameras of those two zoom magnification intervals, and therefore the image processing apparatus may determine the two cameras as the target cameras.
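The selection logic of the preceding paragraphs — one camera when the zoom magnification falls inside an interval, two cameras when it falls in a composite interval between intervals — can be sketched as follows. The function name, data layout, and sample intervals in the test are illustrative assumptions, not the patent's code.

```python
def select_target_cameras(zoom, intervals):
    """intervals: list of ((low, high), camera_id) pairs, one per camera.
    Returns one camera ID when the zoom lies inside an interval, or two
    camera IDs when it lies in a composite (gap) region between intervals."""
    for (low, high), camera_id in intervals:
        if low <= zoom <= high:
            return [camera_id]
    # Composite interval: the interval whose upper critical value is the
    # largest value below the zoom, and the interval whose lower critical
    # value is the smallest value above it.
    below = max((iv for iv in intervals if iv[0][1] < zoom),
                key=lambda iv: iv[0][1], default=None)
    above = min((iv for iv in intervals if iv[0][0] > zoom),
                key=lambda iv: iv[0][0], default=None)
    return [iv[1] for iv in (below, above) if iv is not None]
```

For example, with intervals 0.5–0.9 (ultrawide), 1.0–4.9 (wide), and 5.0–10.0 (tele), a zoom of 0.95 falls in the gap and selects both the ultrawide and the wide camera.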
Step 102, the image processing device determines an image frame sequence corresponding to the target camera as a target image frame sequence on the first system layer, and transmits the target image frame sequence to the second system layer.
In this embodiment of the application, the target image frame sequence includes: at least one frame of image collected by the target camera.
Optionally, in this embodiment of the application, each frame of image of the at least one frame of image may be an original (Raw) image.
Optionally, in this embodiment of the present application, in a case that the target camera includes one camera, the target image frame sequence includes one image frame sequence; in the case where the target camera includes a plurality of cameras, the target image frame sequence includes a plurality of image frame sequences.
Optionally, in this embodiment of the application, after determining the target camera, the image processing apparatus may acquire the sequence of image frames by using the target camera, so that the image processing apparatus may determine the sequence of image frames acquired by using the target camera as the sequence of target image frames.
Alternatively, in this embodiment of the application, after determining the target camera, the image processing apparatus may determine, at the first system layer, an image frame sequence corresponding to the target camera from image frame sequences corresponding to M cameras (for example, M image frame sequences in the following embodiments), and determine the image frame sequence corresponding to the target camera as the target image frame sequence.
Step 103, the image processing device outputs the target image frame sequence at the second system layer.
Optionally, in this embodiment of the application, the image processing apparatus may output, at the second system layer, the target image frame sequence to the ISP, so that the ISP may perform the first image processing on the target image frame sequence to obtain the Yuv image frame sequence. In this way, the ISP may transmit the sequence of Yuv image frames to the target application of the application layer, so that the target application may sequentially display images in the sequence of Yuv image frames in the first interface to display the preview screen.
Further optionally, in this embodiment of the application, the first image processing may include at least one of: automatic exposure control processing, automatic gain control processing, automatic white balance processing, color correction processing, gamma correction processing, dead pixel removal processing, and the like.
Further optionally, in this embodiment of the application, in a case that the target image frame sequence includes a plurality of image frame sequences, after the ISP performs the first image processing on the plurality of image frame sequences, the ISP may perform synthesis processing on the obtained plurality of Yuv image frame sequences to obtain one Yuv image frame sequence. In this way, the ISP may transmit the one sequence of Yuv image frames to the target application of the application layer, so that the target application may sequentially display images in the one sequence of Yuv image frames in the first interface to display the preview screen.
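As a rough illustration of chaining the first image processing stages and the synthesis step described above, the following sketch uses placeholder stage functions operating on numeric stand-ins for frames; it is not a real ISP API, and averaging merely stands in for the actual fusion.

```python
def apply_first_image_processing(raw_sequence, stages):
    """Run every Raw frame through each processing stage in order
    (auto exposure, white balance, gamma correction, ...), producing
    one YUV frame sequence."""
    processed = []
    for frame in raw_sequence:
        for stage in stages:
            frame = stage(frame)
        processed.append(frame)
    return processed

def synthesize(yuv_sequences):
    """Fuse several per-camera YUV frame sequences into a single
    sequence, frame index by frame index (averaging is a stand-in
    for real image fusion)."""
    return [sum(frames) / len(frames) for frames in zip(*yuv_sequences)]
```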
Alternatively, in this embodiment of the application, after the image processing apparatus outputs the target image frame sequence, if a shooting input of the user in the first interface is detected, the image processing apparatus may acquire at least one frame of image from the target image frame sequence, and obtain a shot image (for example, the target image in the following embodiments) based on the at least one frame of image.
Alternatively, in this embodiment of the application, after the image processing apparatus outputs the target image frame sequence, if it is detected that the user sets the shooting parameters in the first interface, the image processing apparatus may perform the above steps 101 to 103 again based on the shooting parameters set by the user, so as to output the image frame sequence corresponding to the shooting parameters set by the user at the second system layer.
In this embodiment of the application, in the process of executing the SAT function, the image processing apparatus may determine, in the first system layer, the camera corresponding to the shooting parameter set by the user from the M cameras (i.e., the camera the user may need), determine, in the first system layer, the image frame sequence corresponding to that camera as the target image frame sequence (i.e., the image frame sequence the user may need), and transmit that image frame sequence to the second system layer, so that the electronic device may output it in the second system layer and the first interface may display a preview picture. That is, the image processing apparatus selects the corresponding image frame sequence according to the target shooting parameter in the first system layer, so that the operating system layer of the image processing apparatus can be decoupled from the hardware layer.
In the image processing method provided by the embodiment of the application, the image processing apparatus may determine, in the first system layer, a target camera corresponding to the target shooting parameter from the M cameras, determine, in the first system layer, the image frame sequence acquired by the target camera as the target image frame sequence (which includes at least one frame of image), and transmit the target image frame sequence to the second system layer, so that the image processing apparatus may output the target image frame sequence in the second system layer. Because a new system layer (i.e., the first system layer) can be configured in the image processing apparatus, in the process of executing the SAT function the apparatus determines, in the new system layer, the target camera corresponding to the target shooting parameter and the target image frame sequence corresponding to the target camera, and outputs the target image frame sequence in the operating system layer (i.e., the second system layer). In other words, the operating system layer and the hardware layer of the image processing apparatus are not tightly coupled. Therefore, when the SAT function of one image processing apparatus needs to be migrated to an image processing apparatus without the SAT function, the code of the operating system layer of the apparatus without the SAT function does not need to be modified, and thus the migration efficiency of the whole SAT function can be improved.
It can be understood that, since a new system layer can be configured in the image processing apparatus, maintenance of the apparatus can focus on the new system layer: as long as the operating system layer provides a correspondingly adapted interface, the new system layer can continue to be used and iterated without being affected by upgrades of the operating system layer or by differences between operating system layers, so that the maintenance efficiency of the image processing apparatus can be improved.
Of course, before determining the target camera, the image processing apparatus may first acquire image frame sequences through the M cameras respectively, so that after determining the target camera it can determine the target image frame sequence from the image frame sequences acquired by the M cameras.
Optionally, in this embodiment of the present application, as shown in fig. 4 in combination with fig. 3, before step 101 described above, the image processing method provided in this embodiment of the present application may further include step 201 described below.
Step 201, the image processing apparatus generates M image frame sequences corresponding to the M cameras in the first system layer.
In an embodiment of the application, for each of the M image frame sequences, one image frame sequence comprises: at least one frame of image collected by a corresponding camera; the M image frame sequences include a sequence of target image frames.
Further optionally, in one scenario, when the image processing apparatus starts the SAT function, the image processing apparatus may send, at the application layer, a preview request to the first system layer, where the preview request is used to request that a preview screen is displayed in the first interface, and thus, the image processing apparatus may control, at the first system layer, the M cameras to respectively acquire at least one frame of image and receive an image acquired by each camera according to the preview request. Therefore, the image processing device can store the image collected by each camera in the storage area corresponding to the first system layer, so as to generate M image frame sequences corresponding to M cameras according to the image collected by each camera.
Further optionally, in another scenario, when a shooting input of the user in the first interface is detected, the image processing apparatus may send, at the application layer, a shooting request to the first system layer, so that the image processing apparatus may control, at the first system layer, the M cameras to respectively capture at least one frame of image and receive an image captured by each camera according to the shooting request. Therefore, the image processing device can store the image collected by each camera in the storage area corresponding to the first system layer, so as to generate M image frame sequences corresponding to M cameras according to the image collected by each camera.
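The two scenarios above share the same first-system-layer flow and can be sketched as follows. This is a minimal, hypothetical Python illustration (the `FirstSystemLayer` class, the `Frame` record, and the request names are assumptions, not part of the embodiment): on a preview or shooting request, the layer drives all M cameras and buffers each camera's frames in a storage area owned by this layer.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    camera_id: int   # which of the M cameras captured this frame
    seq_no: int      # acquisition order within the capture session

class FirstSystemLayer:
    """Hypothetical first system layer: on a preview or shooting request
    from the application layer, it controls all M cameras and buffers
    each camera's frames in this layer's own storage area."""

    def __init__(self, num_cameras: int):
        self.num_cameras = num_cameras
        # one image frame sequence (buffer) per camera
        self.storage = {cam: [] for cam in range(num_cameras)}
        self._seq = 0

    def handle_request(self, request_type: str, frames_per_camera: int = 1):
        # both request kinds trigger capture on all M cameras
        assert request_type in ("preview", "shoot")
        for _ in range(frames_per_camera):
            for cam in range(self.num_cameras):
                self.storage[cam].append(Frame(cam, self._seq))
            self._seq += 1
        # return the M generated image frame sequences
        return {cam: list(seq) for cam, seq in self.storage.items()}

layer = FirstSystemLayer(num_cameras=3)
sequences = layer.handle_request("preview", frames_per_camera=2)
```

Each of the M sequences ends up with one entry per captured frame, in acquisition order, without touching the second system layer's storage.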
Optionally, in this embodiment of the application, for each image frame sequence in the M image frame sequences, the image processing apparatus may sequentially store at least one image acquired by one camera in a sub-area of the storage area corresponding to the first system layer according to the acquisition time sequence to generate one image frame sequence, so as to generate the M image frame sequences.
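The ordering step above can be illustrated with a short sketch; the dictionary-based frame records and the `build_frame_sequence` helper are hypothetical names introduced here, not part of the embodiment.

```python
def build_frame_sequence(frames):
    """Order one camera's captured frames by acquisition time, as done
    when filling that camera's sub-area of the first storage area."""
    return sorted(frames, key=lambda f: f["capture_time"])

# frames may be received out of order; the sequence is built in time order
frames_cam0 = [
    {"capture_time": 3, "data": "frame3"},
    {"capture_time": 1, "data": "frame1"},
    {"capture_time": 2, "data": "frame2"},
]
sequence = build_frame_sequence(frames_cam0)
```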
It can be understood that the image processing apparatus stores the image acquired by each camera in the storage area corresponding to the first system layer, rather than in the storage area corresponding to the operating system layer (i.e., the second system layer in the following embodiments), and therefore the memory of the operating system layer can be saved.
In this embodiment of the application, the image processing apparatus may generate, at the new system layer (i.e., the first system layer), the M image frame sequences corresponding to the M cameras, and may then determine, at the new system layer, the image frame sequence corresponding to the target camera from the M image frame sequences. The operating system layer of the image processing apparatus can therefore be decoupled from the hardware layer, that is, the two are not tightly coupled, so that the migration efficiency of the whole SAT function can be improved.
Furthermore, in the process of executing the SAT function, the image processing apparatus can intervene in the control flow of the SAT function at the first system layer and extract the target image frame sequence from the M image frame sequences, so that greater control over the SAT flow can be obtained.
The following describes, by way of example, how the image processing apparatus generates the M image frame sequences at the first system layer.
Optionally, in this embodiment, with reference to fig. 4, as shown in fig. 5, before the step 201, the image processing method provided in this embodiment may further include the following step 301 and step 302, and the step 201 may be specifically implemented by the following step 201 a.
Step 301, the image processing apparatus applies for a first storage area for the first system layer in the storage area of the image processing apparatus.
Further optionally, in this embodiment of the application, in a case that the image processing apparatus displays the desktop, if the image processing apparatus detects a click input of a user on an application icon of the "setup" application, the image processing apparatus displays a "setup" interface. Then, if the image processing apparatus detects a selection input of the user on the "SAT memory optimization" option, the image processing apparatus may turn on the "SAT memory optimization" function and apply for the first storage area for the first system layer in the storage area of the image processing apparatus.
Further optionally, in this embodiment of the application, after the image processing apparatus turns on the "SAT memory optimization" function, when the image processing apparatus starts the target application, the image processing apparatus may send a stream configuration request (configure streams) to the second system layer at the application layer, so that the image processing apparatus may apply for the first storage area for the first system layer at the second system layer.
In this embodiment of the present application, the first storage area is used to store (cache) the image frame sequence corresponding to each camera.
Step 302, the image processing device stores the image collected by each camera into a first storage area.
Further optionally, in this embodiment of the application, the first storage region may include M sub-regions, and each sub-region is used to store an image acquired by one camera.
Further optionally, in this embodiment of the application, for each camera in the M cameras, the image processing apparatus may store the image acquired by one camera in one sub-area corresponding to the one camera, so as to store the image acquired by the one camera, and store the image acquired by each camera in the first storage area.
In the embodiment of the application, the image processing apparatus may store the images acquired by the M cameras into the storage area corresponding to the first system layer, but not into the storage area corresponding to the second system layer (i.e., the operating system layer), so that the related memory optimization may be performed for each image frame sequence.
Step 201a, the image processing device generates an image frame sequence corresponding to each camera according to the image collected by each camera in the first storage area in the first system layer.
It should be noted that, for the description of the image processing apparatus generating the image frame sequence corresponding to each camera, reference may be made to the detailed description in the foregoing embodiments, and details of the embodiments of the present application are not repeated herein.
In the related art, the image processing apparatus stores the image captured by each camera in a sub-area of a storage area corresponding to the operating system layer, and, after determining the image frame sequence corresponding to the shooting parameters, copies that image frame sequence to another sub-area of the same storage area at the operating system layer, so that the copied image frame sequence can be output at the operating system layer. However, because the image captured by each camera is stored in the storage area corresponding to the operating system layer, and the image frame sequence corresponding to the shooting parameters is additionally copied within that storage area, memory is wasted and the time consumed in outputting the image frame sequence is increased.
In the embodiment of the application, the image processing apparatus stores the images acquired by each camera in the sub-areas of the first storage area corresponding to the first system layer. After the target image frame sequence corresponding to the target shooting parameters is determined, the image processing apparatus can directly transmit the target image frame sequence from the first system layer to the second system layer, and can therefore directly output the target image frame sequence at the second system layer without copying it, which saves memory of the operating system layer and reduces the time consumed in outputting the image frame sequence.
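The contrast with the related art can be sketched as follows; `SecondSystemLayer` and the camera names are illustrative assumptions only. The key point is that the second system layer outputs a reference to the sequence buffered by the first system layer, rather than a copy.

```python
class SecondSystemLayer:
    """Hypothetical operating system layer: outputs the sequence it is
    handed without first copying it into its own storage area."""

    def __init__(self):
        self.received = None

    def output(self, frame_sequence):
        self.received = frame_sequence   # keep a reference, not a copy
        return frame_sequence

# sequences buffered in the first system layer's storage area
first_layer_storage = {"wide": ["w0", "w1"], "tele": ["t0", "t1"]}
target_sequence = first_layer_storage["tele"]  # target camera already chosen

os_layer = SecondSystemLayer()
out = os_layer.output(target_sequence)
```

In the related-art flow, `output` would instead hold `list(frame_sequence)`, a second copy of every frame in the operating system layer's own storage.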
In the embodiment of the application, the image processing device can apply for a storage area for the first system layer, and store the images acquired by the M cameras into the storage area corresponding to the first system layer, so that the image processing device can directly generate an image frame sequence corresponding to each camera in the first system layer, and then directly transmit the target image frame sequence to the second system layer.
Next, how the image processing apparatus generates a sequence of image frames corresponding to each camera will be described by taking as an example that the image processing apparatus has started the SAT function and displays a shooting preview interface (i.e., the first interface in the above-described embodiment).
Optionally, in this embodiment of the application, with reference to fig. 4, as shown in fig. 6, the step 201 may be specifically implemented by a step 201b described below.
In step 201b, when the shooting preview interface is displayed, the image processing apparatus generates image frame sequences corresponding to the respective cameras in the first system layer.
In an embodiment of the application, for each of the M image frame sequences, one image frame sequence comprises: and at least one frame of image corresponding to the image preview frame acquired by the corresponding camera.
Further optionally, in this embodiment of the application, in a case that a shooting preview interface is displayed, if it is detected that the image processing apparatus starts the SAT function, the image processing apparatus may send a preview request to the first system layer at the application layer, so that the image processing apparatus may generate, at the first system layer, a sequence of image frames corresponding to each camera respectively.
It should be noted that, for the description of the image processing apparatus generating the image frame sequence corresponding to each camera, reference may be made to the detailed description in the foregoing embodiments, and details of the embodiments of the present application are not repeated herein.
In the embodiment of the application, since the image processing apparatus may generate the image frame sequence corresponding to each camera in the first system layer respectively under the condition that the shooting preview interface is displayed, the electronic device may determine the target image frame sequence corresponding to the target shooting parameter in the first system layer, and output the target image frame sequence in the operating system layer (i.e., the second system layer), that is, the operating system layer and the hardware layer of the electronic device are not tightly coupled, so that the migration efficiency of the whole SAT function may be improved.
The following describes, by way of example, how the image processing apparatus obtains a captured image based on a shooting input of a user.
Optionally, in this embodiment of the present application, with reference to fig. 3, as shown in fig. 7, after the step 103, the image processing method provided in this embodiment of the present application may further include the following step 401 and step 402.
In step 401, the image processing apparatus receives a first input from a user.
Further optionally, in this embodiment of the application, when the image processing apparatus displays the first interface and the first interface includes a "shooting" control, the user may perform a first input on the "shooting" control.
In an embodiment of the present application, the first input is used to trigger the image processing apparatus to obtain a captured image.
Further optionally, in this embodiment of the application, the first input may specifically be: and clicking input of the shooting control by the user.
Step 402, the image processing device responds to the first input, obtains N frames of images from the target image frame sequence, and performs image processing on the N frames of images to obtain a target image.
In the embodiment of the application, N is a positive integer.
Further optionally, in this embodiment of the application, the image processing apparatus may first store the target image frame sequence in a storage area corresponding to the second system layer, and then acquire N frames of images from the target image frame sequence.
It can be understood that the target image frame sequence transmitted from the first system layer to the second system layer is not stored (i.e., not used) before the first input of the user; by means of late binding, the target image frame sequence is stored in the storage area corresponding to the second system layer only after the first input of the user.
It should be noted that for the description of late-binding, reference may be made to specific descriptions in the related art, and details of the embodiments of the present application are not repeated herein.
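Late binding as used here can be sketched with a hypothetical `LateBoundStore` (not the embodiment's actual implementation): the transmitted sequence is merely referenced until the user's first input arrives, and only then committed to the second layer's storage area.

```python
class LateBoundStore:
    """Hypothetical late-binding holder in the second system layer."""

    def __init__(self):
        self.pending = None   # reference to the transmitted sequence
        self.storage = []     # second-layer storage area (initially unused)

    def transmit(self, sequence):
        self.pending = sequence            # received, but not yet stored

    def on_first_input(self):
        self.storage.append(self.pending)  # bound to storage only now
        return self.pending

store = LateBoundStore()
store.transmit(["f0", "f1"])
before = list(store.storage)   # nothing stored before the first input
acquired = store.on_first_input()
```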
Further optionally, in this embodiment of the application, the image processing apparatus may acquire its current shooting mode to determine the value of N according to the shooting mode, so that the image processing apparatus may acquire the N frames of images from the target image frame sequence.
Further optionally, in this embodiment of the application, the N frames of images may be: a continuous N-frame image in the sequence of target image frames, or a discontinuous N-frame image.
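Selecting N frames, consecutive or not, can be sketched as follows. The `stride` parameter is one illustrative way to obtain discontinuous frames; the embodiment does not prescribe a particular selection rule.

```python
def acquire_n_frames(sequence, n, stride=1):
    """Pick N frames from the target image frame sequence.
    stride=1 yields N consecutive frames; stride>1 yields
    discontinuous frames."""
    if n <= 0:
        raise ValueError("N must be a positive integer")
    picked = sequence[::stride][:n]
    if len(picked) < n:
        raise ValueError("target sequence too short for the requested N")
    return picked

target_sequence = list(range(10))  # stand-in for 10 buffered frames
consecutive = acquire_n_frames(target_sequence, 4)              # frames 0..3
discontinuous = acquire_n_frames(target_sequence, 4, stride=2)  # 0, 2, 4, 6
```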
Further optionally, in this embodiment of the application, after acquiring the N frames of images, the image processing apparatus may output, at the second system layer, the N frames of images to the ISP, so that the ISP may perform image processing on the N frames of images to obtain the target image.
Specifically, the image processing may specifically include: and (4) an image synthesis algorithm.
It should be noted that, for the description of the image synthesis algorithm, reference may be made to specific descriptions in the related art, and details are not repeated herein in the embodiments of the present application.
Specifically, the target image may specifically be: a JPEG image.
Further optionally, in this embodiment of the application, after obtaining the target image, the image processing apparatus may display the target image in the first interface, so that a user may input the target image, so that the image processing apparatus may store the target image.
Specifically, in this embodiment of the application, the image processing apparatus may store the target image in a storage space corresponding to the second system layer, so as to store the target image.
In the embodiment of the application, the image processing apparatus may directly acquire the N frames of images from the target image frame sequence output by the second system layer according to the first input of the user to obtain the target image, without copying the target image frame sequence in the second system layer first and then acquiring the images from the copied target image frame sequence, so that the memory of the image processing apparatus may be saved.
It should be noted that, in the image processing method provided in the embodiment of the present application, the execution subject may be the image processing apparatus in the foregoing embodiment, or a control module for executing the image processing method in the image processing apparatus. In the embodiment of the present application, an image processing apparatus executes an image processing method as an example, and an apparatus of the image processing method provided in the embodiment of the present application is described.
Fig. 8 is a schematic diagram showing a possible structure of an image processing apparatus according to an embodiment of the present application, where the image processing apparatus includes M cameras, and M is a positive integer greater than 1. As shown in fig. 8, the image processing apparatus 60 may include: a determination module 61, a transmission module 62 and an output module 63.
The determining module 61 is configured to determine, in the first system layer, a target camera corresponding to the target shooting parameter from the M cameras; determining an image frame sequence corresponding to the target camera as a target image frame sequence on a first system layer; the target image frame sequence includes: at least one frame of image collected by the target camera. A transmission module 62, configured to transmit the target image frame sequence determined by the determination module 61 to the second system layer. And an output module 63, configured to output, at the second system level, the sequence of target image frames transmitted by the transmission module 62.
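The determining, transmission, and output modules can be sketched as follows. The zoom ranges used to map the target shooting parameter to a target camera are purely illustrative assumptions, as are all function names; this is not the embodiment's implementation.

```python
# camera id -> (min zoom, max zoom); illustrative values only
CAMERA_ZOOM_RANGES = {
    "ultrawide": (0.5, 1.0),
    "wide": (1.0, 3.0),
    "tele": (3.0, 10.0),
}

def determine_target(zoom, sequences):
    """Determining module: map the target shooting parameter (here a
    zoom factor) to the target camera and its buffered frame sequence."""
    for cam, (lo, hi) in CAMERA_ZOOM_RANGES.items():
        if lo <= zoom < hi:
            return cam, sequences[cam]
    raise ValueError(f"no camera covers zoom {zoom}x")

def transmit(frame_sequence):
    """Transmission module: hand the sequence to the second system layer."""
    return frame_sequence

def output(frame_sequence):
    """Output module: the second system layer outputs the sequence."""
    return list(frame_sequence)

sequences = {"ultrawide": ["u0"], "wide": ["w0", "w1"], "tele": ["t0"]}
camera, target_sequence = determine_target(2.0, sequences)
result = output(transmit(target_sequence))
```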
In a possible implementation manner, the image processing apparatus 60 provided in the embodiment of the present application may further include: and generating a module. The generating module is used for generating M image frame sequences corresponding to the M cameras on a first system layer; a sequence of image frames comprising: and at least one frame of image collected by a corresponding camera. Wherein the M image frame sequences include a target image frame sequence.
In a possible implementation manner, the generating module is specifically configured to generate, in a first system layer, image frame sequences corresponding to each camera respectively in a case that a shooting preview interface is displayed. Wherein an image frame sequence comprises: and at least one frame of image corresponding to the image preview frame acquired by the corresponding camera.
In a possible implementation manner, the image processing apparatus 60 provided in the embodiment of the present application may further include: the device comprises an application module and a storage module. The application module is configured to apply for a first storage area for a first system layer in a storage area of the image processing apparatus 60. And the storage module is used for storing the images acquired by each camera into the first storage area applied by the application module. The generating module is specifically configured to generate, at the first system layer, an image frame sequence corresponding to each camera according to an image acquired by each camera in the first storage area.
In a possible implementation manner, the image processing apparatus 60 provided in the embodiment of the present application may further include: the device comprises a receiving module, an obtaining module and a processing module. The receiving module is used for receiving a first input of a user. And the acquisition module is used for responding to the first input received by the receiving module and acquiring the N frames of images from the target image frame sequence. The processing module is used for carrying out image processing on the N frames of images acquired by the acquisition module to obtain a target image; n is a positive integer.
In the image processing apparatus provided in the embodiment of the present application, a new system layer (i.e., the first system layer) can be configured in the image processing apparatus. Thus, during execution of the SAT function, the image processing apparatus may determine, at the new system layer, the target camera corresponding to the target shooting parameter, determine, at the new system layer, the target image frame sequence corresponding to the target camera, and output the target image frame sequence at the operating system layer (i.e., the second system layer). That is, since the operating system layer and the hardware layer of the image processing apparatus are not tightly coupled, when the SAT function of one image processing apparatus needs to be migrated to an image processing apparatus without the SAT function, the code of the new system layer of the image processing apparatus without the SAT function does not need to be modified, so that the migration efficiency of the whole SAT function can be improved.
The image processing apparatus in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and the like, and the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, or a self-service machine, and the like; the embodiments of the present application are not particularly limited thereto.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system (Android), an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
The image processing apparatus provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 3 to fig. 7, and is not described herein again to avoid repetition.
Optionally, as shown in fig. 9, an electronic device 70 is further provided in this embodiment of the present application, and includes a processor 72, a memory 71, and a program or an instruction stored in the memory 71 and executable on the processor 72, where the program or the instruction is executed by the processor 72 to implement each process of the foregoing embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 10 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 100 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, and a processor 110.
Those skilled in the art will appreciate that the electronic device 100 may further comprise a power source (e.g., a battery) for supplying power to the various components; the power source may be logically connected to the processor 110 through a power management system, so as to implement functions such as managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 10 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than those shown, combine some components, or arrange components differently; details are not repeated here.
The electronic equipment comprises M cameras, wherein M is a positive integer larger than 1.
The processor 110 is configured to determine, in the first system layer, a target camera corresponding to the target shooting parameter from the M cameras; determining an image frame sequence corresponding to the target camera as a target image frame sequence on a first system layer, and transmitting the target image frame sequence to a second system layer; the target image frame sequence includes: at least one frame of image collected by the target camera; and, at a second system level, outputting a sequence of target image frames.
In the electronic device provided by the embodiment of the application, a new system layer (i.e., the first system layer) may be configured in the electronic device. In the process of executing the SAT function, the electronic device may determine, at the new system layer, the target camera corresponding to the target shooting parameter, determine, at the new system layer, the target image frame sequence corresponding to the target camera, and output the target image frame sequence at the operating system layer (i.e., the second system layer). That is, the operating system layer and the hardware layer of the electronic device are not tightly coupled, so that when the SAT function of one electronic device needs to be migrated to an electronic device without the SAT function, the code of the new system layer of the electronic device without the SAT function does not need to be modified, and the migration efficiency of the whole SAT function may thus be improved.
Optionally, in this embodiment of the application, the processor 110 is further configured to generate, at a first system layer, M image frame sequences corresponding to the M cameras; a sequence of image frames comprising: and at least one frame of image collected by a corresponding camera.
Wherein, the M image frame sequences include a target image frame sequence.
In this embodiment of the application, since the electronic device may generate, at a new system layer (i.e., a first system layer), M image frame sequences corresponding to the M cameras, so that the electronic device may determine, at the new system layer, an image frame sequence corresponding to the target camera from the M image frame sequences, an operating system layer of the electronic device may be decoupled from a hardware layer, that is, the operating system layer and the hardware layer of the electronic device are not tightly coupled, so that migration efficiency of the whole SAT function may be improved.
Optionally, in this embodiment of the application, the processor 110 is specifically configured to, in a first system layer, respectively generate a sequence of image frames corresponding to each camera in a case that a shooting preview interface is displayed.
Wherein an image frame sequence comprises: and at least one frame of image corresponding to the image preview frame acquired by the corresponding camera.
In the embodiment of the application, because the electronic device can generate the image frame sequence corresponding to each camera in the first system layer respectively under the condition that the shooting preview interface is displayed, the electronic device can determine the target image frame sequence corresponding to the target shooting parameter in the first system layer and output the target image frame sequence in the operating system layer (i.e., the second system layer), that is, the operating system layer and the hardware layer of the electronic device are not closely coupled, so that the migration efficiency of the whole SAT function can be improved.
Optionally, in this embodiment of the application, the processor 110 is further configured to apply for a first storage area for the first system layer in a storage area of the electronic device; and storing the image collected by each camera in a first storage area.
The processor 110 is specifically configured to, at the first system layer, respectively generate an image frame sequence corresponding to each camera according to an image acquired by each camera in the first storage area.
In the embodiment of the application, because the electronic device can apply for a storage area for the first system layer, and store the images acquired by the M cameras in the storage area corresponding to the first system layer, the electronic device can directly generate an image frame sequence corresponding to each camera on the first system layer, and then directly transmit the target image frame sequence to the second system layer, the electronic device can directly output the target image frame sequence on the second system layer without copying the target image frame sequence, and therefore, the memory of the electronic device can be saved.
Optionally, in this embodiment of the application, the user input unit 107 is configured to receive a first input of a user.
The processor 110 is further configured to respond to the first input, acquire N frames of images from the target image frame sequence, and perform image processing on the N frames of images to obtain a target image; n is a positive integer.
In the embodiment of the application, the electronic device may directly acquire the N frames of images from the target image frame sequence output by the second system layer according to the first input of the user to obtain the target image, without copying the target image frame sequence in the second system layer first and then acquiring the images from the copied target image frame sequence, so that the memory of the electronic device may be saved.
It should be understood that, in the embodiment of the present application, the input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042, and the graphics processing unit 1041 processes image data of a still picture or a video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071 is also referred to as a touch screen. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 109 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 110 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and the like.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the embodiment of the image processing method, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element. Further, it should be noted that the scope of the methods and apparatuses of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; the functions may also be performed in a substantially simultaneous manner or in a reverse order depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (12)

1. An image processing method, applied to an electronic device comprising M cameras, M being a positive integer greater than 1, the method comprising:
determining, at a first system layer, a target camera corresponding to a target shooting parameter from the M cameras;
determining, at the first system layer, an image frame sequence corresponding to the target camera as a target image frame sequence, and transmitting the target image frame sequence to a second system layer; wherein the target image frame sequence comprises at least one frame of image collected by the target camera; and
outputting, at the second system layer, the target image frame sequence.
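The flow of claim 1 can be sketched as follows. This is a minimal Python sketch under stated assumptions: the claim does not fix what the target shooting parameter is, so a zoom ratio is used as a hypothetical example, and the class and method names are illustrative, not part of the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Camera:
    camera_id: int
    zoom_range: tuple  # (min, max) zoom ratio this camera covers (assumed parameter)

@dataclass
class FirstSystemLayer:
    cameras: list
    # camera_id -> image frame sequence (at least one frame of image per camera)
    frame_sequences: dict = field(default_factory=dict)

    def determine_target_camera(self, target_zoom):
        # Determine, at the first system layer, the target camera whose
        # range covers the target shooting parameter.
        for cam in self.cameras:
            lo, hi = cam.zoom_range
            if lo <= target_zoom <= hi:
                return cam
        raise ValueError("no camera matches the target shooting parameter")

    def target_frame_sequence(self, target_zoom):
        # The frame sequence of the target camera becomes the target image
        # frame sequence that is transmitted to the second system layer.
        cam = self.determine_target_camera(target_zoom)
        return self.frame_sequences[cam.camera_id]

class SecondSystemLayer:
    def output(self, target_frames):
        # At the second system layer, output the target image frame sequence
        # (e.g., hand it to the display or an encoder).
        return list(target_frames)
```

In an Android-style stack the first system layer might correspond to the hardware abstraction layer and the second to the application layer, though the claims themselves leave this correspondence open.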
2. The method according to claim 1, wherein before the determining, at the first system layer, the target camera corresponding to the target shooting parameter from the M cameras, the method further comprises:
generating, at the first system layer, M image frame sequences corresponding to the M cameras; wherein each image frame sequence comprises at least one frame of image collected by the corresponding camera;
wherein the M image frame sequences include the target image frame sequence.
3. The method according to claim 2, wherein the generating, at the first system layer, M image frame sequences corresponding to the M cameras comprises:
in a case where a shooting preview interface is displayed, respectively generating, at the first system layer, an image frame sequence corresponding to each camera;
wherein each image frame sequence comprises at least one frame of image corresponding to an image preview frame acquired by the corresponding camera.
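The per-camera preview generation of claim 3 can be sketched as below. The helper `capture_preview_frame` is a hypothetical stand-in for the platform capture call, which the claim does not specify, and the number of frames per sequence is an assumption.

```python
def generate_preview_sequences(camera_ids, capture_preview_frame, frames_per_camera=3):
    """While the shooting preview interface is displayed, build one image
    frame sequence per camera from that camera's preview frames.

    capture_preview_frame(camera_id, index) is an assumed callback standing
    in for the real capture path."""
    return {
        cam_id: [capture_preview_frame(cam_id, i) for i in range(frames_per_camera)]
        for cam_id in camera_ids
    }
```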
4. The method according to claim 2, wherein before the generating, at the first system layer, the M image frame sequences corresponding to the M cameras, the method further comprises:
applying for a first storage area for the first system layer in a storage area of the electronic device; and
storing the images collected by each camera into the first storage area;
wherein the generating, at the first system layer, the M image frame sequences corresponding to the M cameras comprises:
respectively generating, at the first system layer, an image frame sequence corresponding to each camera according to the images collected by each camera in the first storage area.
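The first storage area of claim 4 can be sketched as a per-camera buffer. The bounded capacity and eviction policy are assumptions; the claim only requires that images collected by each camera are stored and later read back per camera to generate the sequences.

```python
from collections import defaultdict, deque

class FirstStorageArea:
    """Sketch of the storage area applied for on behalf of the first system
    layer: a bounded per-camera buffer of collected images (capacity is an
    assumed detail, not stated in the claim)."""

    def __init__(self, capacity_per_camera=8):
        self._buffers = defaultdict(lambda: deque(maxlen=capacity_per_camera))

    def store(self, camera_id, image):
        # Store an image collected by a camera; the oldest frame is evicted
        # once the per-camera capacity is reached.
        self._buffers[camera_id].append(image)

    def generate_sequences(self):
        # Generate, per camera, an image frame sequence from the stored images.
        return {cam_id: list(buf) for cam_id, buf in self._buffers.items()}
```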
5. The method according to claim 1, wherein after the outputting, at the second system layer, the target image frame sequence, the method further comprises:
receiving a first input of a user; and
in response to the first input, acquiring N frames of images from the target image frame sequence, and performing image processing on the N frames of images to obtain a target image, N being a positive integer.
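The N-frame step of claim 5 can be sketched as below. The claim leaves the image processing unspecified, so per-pixel averaging (a simple form of multi-frame noise reduction) is used as an assumed stand-in, with frames represented as flat lists of pixel values.

```python
def process_first_input(target_sequence, n):
    """On a first input (e.g., a shutter tap), acquire N frames from the
    target image frame sequence and fuse them into one target image.

    Averaging is an assumed example of the unspecified image processing;
    each frame here is a flat list of pixel values of equal length."""
    frames = target_sequence[:n]
    # Per-pixel mean across the N acquired frames.
    return [sum(pixels) / len(frames) for pixels in zip(*frames)]
```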
6. An image processing apparatus, comprising M cameras, M being a positive integer greater than 1, the image processing apparatus further comprising: a determining module, a transmission module, and an output module;
the determining module is configured to determine, at a first system layer, a target camera corresponding to a target shooting parameter from the M cameras, and to determine, at the first system layer, an image frame sequence corresponding to the target camera as a target image frame sequence; wherein the target image frame sequence comprises at least one frame of image collected by the target camera;
the transmission module is configured to transmit the target image frame sequence determined by the determining module to a second system layer;
the output module is configured to output, at the second system layer, the target image frame sequence transmitted by the transmission module.
7. The image processing apparatus according to claim 6, characterized by further comprising: a generating module;
the generating module is configured to generate, at the first system layer, M image frame sequences corresponding to the M cameras; wherein each image frame sequence comprises at least one frame of image collected by the corresponding camera;
wherein the M image frame sequences include the target image frame sequence.
8. The image processing apparatus according to claim 7, wherein the generating module is specifically configured to, in a case where a shooting preview interface is displayed, respectively generate, at the first system layer, an image frame sequence corresponding to each camera;
wherein each image frame sequence comprises at least one frame of image corresponding to an image preview frame acquired by the corresponding camera.
9. The image processing apparatus according to claim 7, characterized by further comprising: an application module and a storage module;
the application module is configured to apply for a first storage area for the first system layer in a storage area of the image processing apparatus;
the storage module is configured to store the images collected by each camera into the first storage area applied for by the application module;
the generating module is specifically configured to generate, at the first system layer, an image frame sequence corresponding to each camera according to the images collected by each camera in the first storage area.
10. The image processing apparatus according to claim 6, characterized by further comprising: a receiving module, an acquiring module, and a processing module;
the receiving module is configured to receive a first input of a user;
the acquiring module is configured to, in response to the first input received by the receiving module, acquire N frames of images from the target image frame sequence;
the processing module is configured to perform image processing on the N frames of images acquired by the acquiring module to obtain a target image, N being a positive integer.
11. An electronic device comprising a processor, a memory and a program or instructions stored on the memory and executable on the processor, the program or instructions, when executed by the processor, implementing the steps of the image processing method according to any one of claims 1 to 5.
12. A readable storage medium, characterized in that it stores thereon a program or instructions which, when executed by a processor, implement the steps of the image processing method according to any one of claims 1 to 5.
CN202110119051.6A 2021-01-28 2021-01-28 Image processing method and device and electronic equipment Withdrawn CN113037996A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110119051.6A CN113037996A (en) 2021-01-28 2021-01-28 Image processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110119051.6A CN113037996A (en) 2021-01-28 2021-01-28 Image processing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN113037996A true CN113037996A (en) 2021-06-25

Family

ID=76459406

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110119051.6A Withdrawn CN113037996A (en) 2021-01-28 2021-01-28 Image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113037996A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113989696A (en) * 2021-09-18 2022-01-28 北京远度互联科技有限公司 Target tracking method and device, electronic equipment and storage medium
CN113989696B (en) * 2021-09-18 2022-11-25 北京远度互联科技有限公司 Target tracking method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN112770059B (en) Photographing method and device and electronic equipment
CN113794834B (en) Image processing method and device and electronic equipment
CN112291475B (en) Photographing method and device and electronic equipment
CN113794829B (en) Shooting method and device and electronic equipment
CN113014804A (en) Image processing method, image processing device, electronic equipment and readable storage medium
CN113037997A (en) Image processing method and device and electronic equipment
CN112422798A (en) Photographing method and device, electronic equipment and storage medium
CN113194256B (en) Shooting method, shooting device, electronic equipment and storage medium
CN113709368A (en) Image display method, device and equipment
CN111586305B (en) Anti-shake method, anti-shake device and electronic equipment
CN113037996A (en) Image processing method and device and electronic equipment
CN112672055A (en) Photographing method, device and equipment
CN112508820A (en) Image processing method and device and electronic equipment
US20140111678A1 (en) Method and system for capturing, storing and displaying animated photographs
CN113794831B (en) Video shooting method, device, electronic equipment and medium
WO2022095878A1 (en) Photographing method and apparatus, and electronic device and readable storage medium
CN112153291B (en) Photographing method and electronic equipment
CN112653841B (en) Shooting method and device and electronic equipment
CN114125226A (en) Image shooting method and device, electronic equipment and readable storage medium
CN113891018A (en) Shooting method and device and electronic equipment
CN112399092A (en) Shooting method and device and electronic equipment
CN112291474A (en) Image acquisition method and device and electronic equipment
CN112446848A (en) Image processing method and device and electronic equipment
CN112367470B (en) Image processing method and device and electronic equipment
CN113489901B (en) Shooting method and device thereof

Legal Events

Date Code Title Description
PB01 Publication
WW01 Invention patent application withdrawn after publication

Application publication date: 20210625