CN116761080A - Image data processing method and terminal equipment - Google Patents


Info

Publication number
CN116761080A
Authority
CN
China
Prior art keywords
event
algorithm
image data
algorithms
terminal device
Prior art date
Legal status
Pending
Application number
CN202211251774.2A
Other languages
Chinese (zh)
Inventor
苏俊钦
邓锋贤
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202211251774.2A
Publication of CN116761080A


Abstract

The application provides an image data processing method and a terminal device, applicable to various shooting modes. The method provided by the application comprises: acquiring first image data; and processing the first image data through a plurality of algorithms to obtain target image data of a target shooting mode. The processing of the first image data through the plurality of algorithms includes: running, in parallel, a first event in a first algorithm and a second event in a second algorithm among the plurality of algorithms, where the first event and the second event are trigger events of a third event in the second algorithm. By running events of different algorithms in parallel rather than in sequence, the terminal device shortens the running time of the algorithms, which in turn shortens the photographing response time in the target shooting mode and helps improve the user experience.

Description

Image data processing method and terminal equipment
Technical Field
The present application relates to the field of terminal technologies, and in particular, to a method for processing image data and a terminal device.
Background
The terminal device provides a camera application to meet a user's photographing needs. The camera application may offer the user a variety of photographing modes, such as a normal photographing mode, a portrait photographing mode, a time-lapse photographing mode, and so on.
However, the photographing response time of the terminal device is relatively long. The photographing response time refers to the interval between the moment the terminal device receives a photographing instruction input by the user and the moment the terminal device presents the captured image to the user.
Disclosure of Invention
The application provides an image data processing method and a terminal device, applied in the field of terminal technologies, that can shorten the photographing response time and help improve the user experience.
In a first aspect, the present application proposes a method for processing image data, the method comprising: acquiring first image data; processing the first image data through a plurality of algorithms to obtain target image data of a target shooting mode; the processing of the first image data by a plurality of algorithms to obtain target image data of a target shooting mode includes: and running a first event in a first algorithm in the plurality of algorithms and a second event in a second algorithm in the plurality of algorithms in parallel, wherein the first event and the second event are trigger events of a third event in the second algorithm.
When the terminal device detects that a user triggers a photographing operation in a target photographing mode, initial image data can be acquired through the camera in response to the operation. The first image data may be initial image data or data generated by an event in an algorithm, which is not limited in the present application.
If the first image data is the initial image data, the terminal device processes the initial image data (i.e. the first image data) through the plurality of algorithms to obtain target image data. If the first image data is data generated by an event in the algorithm, the processing procedure from the initial image data to the target image data includes other algorithms besides the plurality of algorithms, and the number of other algorithms and the events included in the algorithms are not limited in the present application.
The plurality of algorithms described above includes a first algorithm and a second algorithm, but the present application is not limited thereto. For example, the plurality of algorithms includes a first algorithm, a second algorithm, and a third algorithm. Alternatively, the plurality of algorithms includes a first algorithm, a second algorithm, a third algorithm, and a fourth algorithm.
The first algorithm and the second algorithm are any two algorithms in the plurality of algorithms, and the order of the first algorithm and the second algorithm in the plurality of algorithms is not limited by the application.
The terminal device runs a first event in the first algorithm and a second event in the second algorithm in parallel, wherein the first event is not a trigger event of the second event, the second event is not a trigger event of the first event, and the first event and the second event have no data dependence, namely, the input of the first event is not the output of the second event, and the input of the second event is not the output of the first event.
The first event and the second event are trigger events of the third event in the second algorithm; that is, after both the first event and the second event have finished running, the third event can be triggered. The third event may run on the outputs of both the first event and the second event, or on the output of only the first event or only the second event, which is not limited in the present application. In other words, the input data required for the third event to run includes the output of the first event and/or the second event. It should be noted that, since the first event and the second event are trigger events of the third event, the events that trigger the third event include the first event and the second event; in a possible example, the events that trigger the third event may also include events other than the first event and the second event.
Illustratively, in the example shown in fig. 5, the first algorithm may be algorithm 1, the first event in the first algorithm may be the process of algorithm 1, the second algorithm is algorithm 2, the second event in the second algorithm is process 1 of algorithm 2, and the third event in the second algorithm is process 2 of algorithm 2.
The application is described in terms of the first event, the second event and the third event; other events with the same relationships follow the same operation strategy and are not described again here. It can be understood that the events within one algorithm run in sequence, while the trigger relationships between events of different algorithms are preset.
According to the image data processing method provided by the application, events of different algorithms that have no trigger relationship can run in parallel. Compared with running the events of different algorithms in sequence, this shortens the running time of the algorithms, which in turn shortens the photographing response time in the target photographing mode and helps improve the user experience.
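To make the parallel trigger relationship concrete, the following is a minimal C++ sketch, assuming hypothetical event bodies (first_event, second_event and third_event are invented names; the patent publishes no source code): the first and second events run concurrently, and the third event is triggered only once both have completed.

```cpp
// Illustrative sketch only; the event bodies are stand-ins for, e.g., the
// processing procedure of algorithm 1 and processing procedure 1 of algorithm 2.
#include <future>
#include <iostream>

int first_event(int input)  { return input + 1; }  // no data dependency on second_event
int second_event(int input) { return input * 2; }  // no data dependency on first_event
int third_event(int a, int b) { return a + b; }    // needs both outputs

int main() {
    int first_image_data = 42;
    // The two independent events run in parallel.
    auto f1 = std::async(std::launch::async, first_event, first_image_data);
    auto f2 = std::async(std::launch::async, second_event, first_image_data);
    // get() blocks until each event finishes, so the third event is
    // triggered only after both of its trigger events have completed.
    std::cout << third_event(f1.get(), f2.get()) << '\n';
}
```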
With reference to the first aspect, in certain implementations of the first aspect, the first event generates second image data and a tag of the second image data, the tag of the second image data being used to indicate that the second image data is data generated by the first event; the method further comprises the steps of: and after the first event and the second event are operated, operating a fourth event based on the label of the second image data, wherein the fourth event comprises a third event, and the triggering event of the fourth event comprises the first event and the second event.
After the terminal device runs the first event, the second image data and the tag of the second image data can be obtained, where the tag of the second image data indicates that the second image data is data generated by the first event. For example, if the first event performs denoising, the second image data is the denoised image data, and the tag of the second image data may be "denoised".
After the first event and the second event are completed, the terminal device may run a fourth event, where the fourth event represents all events triggered by the first event and the second event. Since the first event and the second event can trigger the third event, the fourth event includes, but is not limited to, the third event.
Illustratively, in the example shown in fig. 5, the first algorithm may be algorithm 1, the first event in the first algorithm may be the process of algorithm 1, the second algorithm is algorithm 2, the second event in the second algorithm is process 1 of algorithm 2, and the fourth event may include the de-initialization of algorithm 1, process 2 of algorithm 2, and the initialization of algorithm 3. Wherein process 2 of algorithm 2 is a third event.
If the input of one or more events in the fourth event is the second image data, then when the terminal device runs the one or more events, it acquires the second image data based on the tag of the second image data and feeds the second image data into the one or more events.
It can be understood that, in the present application, the image data generated by the first event is taken as an example, and the image data generated by other events are similar and are not described in detail herein.
According to the image data processing method provided by the application, the first event can generate image data together with a tag for that data, so that the events triggered by the first event can acquire the data at any time. Compared with an approach in which the image data is returned only after the whole algorithm has finished so that subsequent algorithms can use it, an event triggered by the first event can acquire the image data based on its tag, without waiting for the entire first algorithm to finish. This saves time spent waiting for data, helps shorten the running time of the algorithms, shortens the photographing response time in the target photographing mode, and helps improve the user experience.
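A minimal sketch of the tag idea follows, assuming a simple string-keyed registry guarded by a mutex (ResourceHost and its interface are invented here; the patent does not specify the resource-hosting structure at this level of detail): a producing event publishes its output under a tag, and any triggered event fetches it by tag without waiting for the producer's whole algorithm to finish.

```cpp
#include <cstdint>
#include <map>
#include <mutex>
#include <string>
#include <vector>

class ResourceHost {
public:
    // An event registers its output under a tag naming the producing event,
    // e.g. publish("denoised", data) for the first event described above.
    void publish(const std::string& tag, std::vector<std::uint8_t> data) {
        std::lock_guard<std::mutex> lk(mu_);
        store_[tag] = std::move(data);
    }
    // A triggered event fetches its input by tag as soon as it is available.
    // In a real framework this would block or be event-driven; here we assume
    // the scheduler only runs a consumer after its trigger events completed.
    std::vector<std::uint8_t> fetch(const std::string& tag) {
        std::lock_guard<std::mutex> lk(mu_);
        return store_.at(tag);
    }
private:
    std::mutex mu_;
    std::map<std::string, std::vector<std::uint8_t>> store_;
};
```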
With reference to the first aspect, in certain implementations of the first aspect, the method further includes: if the number of times the image data generated by the first event has been transferred among the plurality of algorithms is equal to the maximum transfer count of the image data generated by the first event, releasing the image data generated by the first event.
The image data generated by the first event may be the input of events in other algorithms; in other words, the image data generated by the first event may be passed between events of the plurality of algorithms.
For example, if the image data generated by the first event is an input of the third event, then when the third event runs and the image data generated by the first event is fed into it, the image data generated by the first event completes one transfer.
If the image data generated by the first event is the input of a plurality of events in the second algorithm, the image data generated by the first event is transferred a plurality of times. The maximum transfer count of the image data generated by the first event is the maximum number of times that image data can be transferred.
If the number of transfers of the image data generated by the first event among the plurality of algorithms is equal to its maximum transfer count, no subsequent event still requires the image data generated by the first event, and the terminal device may release it.
For example, in the example shown in fig. 5, the data generated by the processing procedure of algorithm 1 is an input of processing procedure 2 of algorithm 2, so the maximum transfer count of that data is 1. When the terminal device runs processing procedure 2 of algorithm 2 and acquires the data generated by the processing procedure of algorithm 1, the data has been transferred once, which equals the maximum transfer count, so the terminal device may release the data generated by the processing procedure of algorithm 1.
It can be understood that, in the present application, the image data generated by the first event is taken as an example, and the image data generated by other events are similar and are not described in detail herein.
According to the image data processing method provided by the application, the number of transfers of the image data generated by an event is counted, and when the number of transfers equals the maximum transfer count the image data can be released, which saves memory.
With reference to the first aspect, in certain implementations of the first aspect, the maximum transfer count is equal to the number of fifth events among the plurality of algorithms, where a fifth event is an event that is data-dependent on the first event, and the trigger events of a fifth event include the first event.
A fifth event represents an event whose input is the image data generated by the first event, that is, an event that has a data dependency on the first event. The more fifth events there are, the more times the image data generated by the first event needs to be transferred. Specifically, the maximum transfer count is equal to the number of fifth events among the plurality of algorithms. It should be noted that a fifth event is an event outside the first algorithm.
It should further be noted that, because a fifth event has a data dependency on the first event, a fifth event necessarily has a trigger relationship with the first event, that is, the first event triggers the fifth event; however, an event that has a trigger relationship does not necessarily have a data dependency.
Illustratively, in the example shown in fig. 5, the output of the processing procedure of algorithm 1 is an input of processing procedure 2 of algorithm 2, so the processing procedure of algorithm 1 and processing procedure 2 of algorithm 2 are data-dependent events; the processing procedure of algorithm 1 can trigger processing procedure 2 of algorithm 2, so the two also have a trigger relationship. The processing procedure of algorithm 1 can also trigger the de-initialization of algorithm 1 and the initialization of algorithm 3, so it has a trigger relationship with the de-initialization of algorithm 1 and the initialization of algorithm 3, but these are not data-dependent events.
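The release rule can be sketched as a small reference-counting structure, assuming one counter per buffer whose limit equals the number of data-dependent (fifth) events; TrackedBuffer and its members are invented names for illustration:

```cpp
#include <cstdint>
#include <vector>

struct TrackedBuffer {
    std::vector<std::uint8_t> data;
    int transfers = 0;   // how many consumers have taken this buffer so far
    int max_transfers;   // number of data-dependent (fifth) events

    TrackedBuffer(std::vector<std::uint8_t> d, int consumers)
        : data(std::move(d)), max_transfers(consumers) {}

    // Called each time the buffer is passed to a consuming event.
    const std::vector<std::uint8_t>& take() {
        ++transfers;
        return data;
    }
    // Once every data-dependent event has taken the data, no subsequent
    // event needs it and the memory can be released.
    void release_if_done() {
        if (transfers == max_transfers) {
            data.clear();
            data.shrink_to_fit();
        }
    }
};
```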
With reference to the first aspect, in certain implementations of the first aspect, the first event and the second event run on different processors.
When the terminal device runs the first event and the second event in parallel, it can run the first event and the second event on different processors, which ensures that the events run normally while making reasonable use of processor resources.
Illustratively, in the example shown in fig. 5, the first event is the processing procedure of algorithm 1, the second event is processing procedure 1 of algorithm 2, and the processing procedure of algorithm 1 and processing procedure 1 of algorithm 2 may run on different processors.
According to the image data processing method provided by the application, the first event and the second event run on different processors, so the second event can run while the first event is running, which saves event running time and improves the efficiency of image data processing.
With reference to the first aspect, in certain implementations of the first aspect, running in parallel a first event in a first algorithm and a second event in a second algorithm of the plurality of algorithms includes: running a first event in a first processor; and running a second event in the second processor, wherein the resource utilization rate of the second processor before running the second event is less than or equal to a preset threshold value.
Running the first event in the first processor may also be described as running the first event on the first processor or running the first event by the first processor, which is not limited by the embodiments of the present application.
When the terminal device runs an event on a processor, in order to ensure the normal running of the event and the reasonable use of processor resources, the terminal device can run the event on a processor whose resource usage is less than or equal to a preset threshold. The terminal device may run the first event if the resource usage of the first processor is less than or equal to the preset threshold, and run the second event if the resource usage of the second processor is less than or equal to the preset threshold. The preset threshold may be 50%, 60%, 70%, or 75%; the specific value of the preset threshold is not limited in the embodiments of the present application.
According to the image data processing method, an event runs on a processor whose resource usage is less than or equal to the preset threshold, which guarantees the normal running of the event while making reasonable use of processor resources.
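As a hypothetical scheduler sketch of this rule (Processor and pick_processor are invented names, and the 70% default merely picks one of the thresholds listed above), an event is dispatched only to a processor whose current utilization is at or below the preset threshold:

```cpp
#include <optional>
#include <vector>

struct Processor {
    int id;
    double usage;   // current resource utilization, 0.0 - 1.0
};

// Returns the id of the first processor at or below the threshold,
// or std::nullopt if every processor is currently too busy.
std::optional<int> pick_processor(const std::vector<Processor>& procs,
                                  double threshold = 0.70) {
    for (const auto& p : procs) {
        if (p.usage <= threshold) return p.id;
    }
    return std::nullopt;
}
```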
With reference to the first aspect, in certain implementations of the first aspect, the target photographing mode is a time-lapse photographing mode, a portrait photographing mode, or a normal photographing mode.
In a second aspect, the present application proposes a terminal device comprising: an acquisition module and a processing module. Wherein, the acquisition module is used for: acquiring first image data; the processing module is used for: processing the first image data through a plurality of algorithms to obtain target image data of a target shooting mode; specifically, the processing module is used for: and running a first event in a first algorithm in the plurality of algorithms and a second event in a second algorithm in the plurality of algorithms in parallel, wherein the first event and the second event are trigger events of a third event in the second algorithm.
With reference to the second aspect, in certain implementations of the second aspect, the first event generates second image data and a tag of the second image data, the tag of the second image data being used to indicate that the second image data is data generated by the first event; the processing module is further configured to: after the first event and the second event have finished running, run a fourth event based on the tag of the second image data, where the fourth event comprises the third event, and the trigger events of the fourth event comprise the first event and the second event.
With reference to the second aspect, in certain implementations of the second aspect, the processing module is further configured to: if the number of times the image data generated by the first event has been transferred among the plurality of algorithms is equal to the maximum transfer count of the image data generated by the first event, release the image data generated by the first event.
With reference to the second aspect, in some implementations of the second aspect, the maximum transfer count is equal to the number of fifth events among the plurality of algorithms, where a fifth event is an event that is data-dependent on the first event, and the trigger events of a fifth event include the first event.
With reference to the second aspect, in some implementations of the second aspect, the first event and the second event run on different processors.
With reference to the second aspect, in certain implementations of the second aspect, the processing module is further configured to: running a first event in a first processor; and running a second event in the second processor, wherein the resource utilization rate of the second processor before running the second event is less than or equal to a preset threshold value.
With reference to the second aspect, in certain implementations of the second aspect, the target photographing mode is a time-lapse photographing mode, a portrait photographing mode, or a normal photographing mode.
In a third aspect, an embodiment of the present application provides a terminal device, which may also be referred to as a terminal, user equipment (UE), a mobile station (MS), a mobile terminal (MT), or the like. The terminal device may be a mobile phone, a smart television, a wearable device, a tablet (Pad), a computer with a wireless transceiving function, a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in remote medical surgery, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, or the like.
The terminal device includes: a processor and a memory; the memory stores computer-executable instructions; the processor executes computer-executable instructions stored in the memory to cause the terminal device to perform a method as in the first aspect.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium storing a computer program which when executed by a processor performs a method as in the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product comprising a computer program which, when run, causes a computer to perform the method as in the first aspect.
In a sixth aspect, an embodiment of the present application provides a chip comprising a processor for invoking a computer program in a memory to perform a method according to the first aspect.
It should be understood that the second to sixth aspects of the present application correspond to the technical solutions of the first aspect of the present application, and the advantages obtained by each aspect and the corresponding possible embodiments are similar, and are not repeated.
Drawings
Fig. 1 is a schematic diagram of a mobile phone taking a portrait;
Fig. 2 is a schematic diagram of algorithm operation;
Fig. 3 is a software architecture block diagram of a terminal device to which an embodiment of the present application is applicable;
Fig. 4 is a schematic flowchart of an image data processing method according to an embodiment of the present application;
Fig. 5 is a schematic diagram of algorithm operation according to an embodiment of the present application;
Fig. 6 is a schematic diagram of cross-algorithm data transfer according to an embodiment of the present application;
Fig. 7 is a schematic block diagram of a terminal device according to an embodiment of the present application;
Fig. 8 is a schematic block diagram of another terminal device according to an embodiment of the present application.
Detailed Description
For purposes of clarity in describing the embodiments of the present application, the words "exemplary" or "such as" are used herein to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "and/or", describes an association relationship of an association object, and indicates that there may be three relationships, for example, a and/or B, and may indicate: a alone, a and B together, and B alone, wherein a, B may be singular or plural. The character "/" generally indicates that the context-dependent object is an "or" relationship. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (one) of a, b, or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or plural.
The "at … …" in the embodiment of the present application may be an instant when a certain situation occurs, or may be a period of time after a certain situation occurs, which is not particularly limited. In addition, the display interface provided by the embodiment of the application is only used as an example, and the display interface can also comprise more or less contents.
When the terminal device detects that a user is photographing through the camera application, it can acquire image data through the camera, process the image data through a plurality of algorithms, and display the processed image data, that is, the photo.
At present, when the terminal device detects a user operation that triggers photographing, it responds by processing the image data sequentially through a plurality of algorithms. The data processing therefore takes a long time, the interval from detecting the operation that triggers photographing to displaying the photo is long, and the user experience is seriously affected.
Illustratively, fig. 1 shows a schematic diagram of a mobile phone taking a portrait. As shown in interface a of fig. 1, the camera application may provide a video photographing function, a photo photographing function, a portrait photographing function, and a panorama photographing function. Here the camera application is providing the portrait photographing function, and the interface displayed by the mobile phone is the portrait photographing interface. The portrait photographing interface displays a photographing icon 101 and a flip icon 102. The user can tap the photographing icon 101 to make the camera of the mobile phone capture an image and have the phone save it, and can tap the flip icon 102 to switch the camera used for capture. In addition, the lower left corner of the interface displays a thumbnail of a historical picture or video, that is, a thumbnail of a photo obtained through the photographing function of the camera application or of a video obtained through its recording function. When the mobile phone detects that the user has tapped the photographing icon 101, it can denoise the image captured by the camera with an image denoising algorithm such as mean filtering, enhance the contrast of the image with an image enhancement algorithm such as a gamma algorithm, and blur the background of the image with a background blurring algorithm to highlight the portrait, then display the background-blurred image in the lower left corner of the interface, that is, display interface b in fig. 1. As shown in interface b of fig. 1, the lower left corner of the interface displays the background-blurred image. This processing approach makes the image processing time too long: the mobile phone displays the background-blurred image in the lower left corner of the interface only a long time after detecting that the user tapped the photographing icon 101, so the image processing efficiency is low and the user experience is affected.
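As a toy illustration of the kind of denoising step mentioned above, the following sketch applies a 3x3 mean (box) filter to a grayscale image stored row-major; it is not the handset's actual algorithm, and mean_filter3x3 is an invented name:

```cpp
#include <cstdint>
#include <vector>

std::vector<std::uint8_t> mean_filter3x3(const std::vector<std::uint8_t>& img,
                                         int width, int height) {
    std::vector<std::uint8_t> out(img.size());
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            int sum = 0, count = 0;
            // Average the pixel with its in-bounds neighbours.
            for (int dy = -1; dy <= 1; ++dy) {
                for (int dx = -1; dx <= 1; ++dx) {
                    int nx = x + dx, ny = y + dy;
                    if (nx >= 0 && nx < width && ny >= 0 && ny < height) {
                        sum += img[ny * width + nx];
                        ++count;
                    }
                }
            }
            out[y * width + x] = static_cast<std::uint8_t>(sum / count);
        }
    }
    return out;
}
```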
In developing the embodiments of the present application, it was found that the mobile phone uses a plurality of algorithms to process the same frame of image: after one algorithm has processed the image, the processed image is sent to the next algorithm for processing, and so on until all the algorithms have run. The total processing time is the sum of the processing times of the algorithms, so the processing takes long, the image processing efficiency is low, and the user experience is affected.
In the example shown in fig. 1, the mobile phone may denoise the image captured by the camera using the image denoising algorithm, enhance the denoised image using the image enhancement algorithm, and blur the background of the enhanced image using the background blurring algorithm. If the image denoising algorithm takes 1000 milliseconds, the image enhancement algorithm takes 500 milliseconds, and the background blurring algorithm takes 500 milliseconds, the total processing time is 1000 + 500 + 500 = 2000 milliseconds. The mobile phone displays the background-blurred image in the lower left corner of the interface only 2000 milliseconds after detecting that the user has tapped the photographing icon 101, so the image processing efficiency is low and the user experience is affected.
To solve the above problems, the embodiments of the present application study the process by which a plurality of algorithms handle the same frame of image, in order to determine a method for solving the problems. Taking three algorithms processing the same frame of image as an example, the existing process is described below.
By way of example, fig. 2 shows a schematic diagram of algorithm operation. As shown in fig. 2, a pipeline connects the plug-in of algorithm 1, the plug-in of algorithm 2, and the plug-in of algorithm 3 in series to form a machine learning workflow. The pipeline processing mechanism places the plug-ins of the 3 algorithms in the same pipeline and, through each algorithm's plug-in in turn, calls the program of the corresponding algorithm to process the image, finally obtaining the processed image. In one possible implementation, algorithm 1 may be the image denoising algorithm, algorithm 2 may be the image enhancement algorithm, and algorithm 3 may be the background blurring algorithm, but the embodiment of the application is not limited thereto. The plug-in of algorithm 1, the plug-in of algorithm 2 and the plug-in of algorithm 3 are on the integration side of the terminal device, and algorithm 1, algorithm 2 and algorithm 3 are on the algorithm side of the terminal device.
As shown in fig. 2, the plug-in of algorithm 1, the plug-in of algorithm 2, and the plug-in of algorithm 3 each include 3 phases: a start phase, a process-frame phase, and an end phase. The start phase is the initialization of the plug-in, the process-frame phase is the processing procedure of the plug-in, and the end phase is the de-initialization of the plug-in. When the terminal device executes the plug-in of algorithm 1, it can call the initialization function "Init" of algorithm 1 through the initialization application program interface (API) in the start phase, call the frame-processing function "Func" through the process API in the process-frame phase, and call the de-initialization function "UnInit" through the de-initialization API in the end phase. When the terminal device executes the plug-in of algorithm 2, it can call the initialization function "Init" of algorithm 2 through the initialization API in the start phase, call the frame function "Frame1 process" through the process API when executing processing procedure 1, call the frame function "Frame2 process" through the process API when executing processing procedure 2, and call the de-initialization function "UnInit" through the de-initialization API in the end phase. When the terminal device executes the plug-in of algorithm 3, it can call the initialization function "Init" of algorithm 3 through the initialization API in the start phase, call the frame-processing function "process" through the process API in the process-frame phase, and call the de-initialization function "UnInit" through the de-initialization API in the end phase.
If the terminal device processes an image using algorithm 1, algorithm 2 and algorithm 3, it executes the plug-in of algorithm 1, the plug-in of algorithm 2 and the plug-in of algorithm 3 in sequence. That is, when executing the plug-in of algorithm 1, it sequentially executes the initialization of the plug-in of algorithm 1, the processing procedure of the plug-in of algorithm 1, and the de-initialization of the plug-in of algorithm 1, where the processing procedure of the plug-in of algorithm 1 generates data 1. After executing the plug-in of algorithm 1, the terminal device executes the plug-in of algorithm 2, sequentially executing the initialization of the plug-in of algorithm 2, processing procedure 1 of the plug-in of algorithm 2, processing procedure 2 of the plug-in of algorithm 2, and the de-initialization of the plug-in of algorithm 2. The input data of processing procedure 1 of the plug-in of algorithm 2 is obtained from outside, which is not limited herein. The processing result obtained by processing procedure 1 of the plug-in of algorithm 2, together with data 1, is the input data of processing procedure 2 of the plug-in of algorithm 2, and the processing result obtained by processing procedure 2 of the plug-in of algorithm 2 is data 2. After executing the plug-in of algorithm 2, the terminal device executes the plug-in of algorithm 3, sequentially executing the initialization of the plug-in of algorithm 3, the processing procedure of the plug-in of algorithm 3, and the de-initialization of the plug-in of algorithm 3, where data 2 is the input data of the processing procedure of the plug-in of algorithm 3.
Because the plug-ins of the 3 algorithms are executed in sequence, the processing time is the sum of the processing times of the 3 algorithms' plug-ins; the processing time is long, the image processing efficiency is low, and the user experience is affected.
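The three-phase plug-in interface and the sequential pipeline of fig. 2 can be sketched as follows; the Init/Process/UnInit names follow the text above, while everything else (AlgorithmPlugin, Frame, run_pipeline) is an assumption made for illustration:

```cpp
#include <cstdint>
#include <vector>

using Frame = std::vector<std::uint8_t>;

class AlgorithmPlugin {
public:
    virtual ~AlgorithmPlugin() = default;
    virtual void Init() = 0;                     // start phase
    virtual Frame Process(const Frame& in) = 0;  // process-frame phase
    virtual void UnInit() = 0;                   // end phase
};

// Sequential execution as in fig. 2: each plug-in runs all of its phases
// before the next one starts, so the total latency is the sum of all
// phases of all plug-ins. This is exactly what the parallel event
// scheduling described below avoids.
Frame run_pipeline(const std::vector<AlgorithmPlugin*>& plugins, Frame frame) {
    for (AlgorithmPlugin* p : plugins) {
        p->Init();
        frame = p->Process(frame);
        p->UnInit();
    }
    return frame;
}
```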
As can be seen from the process flow shown in fig. 2, the reason the processing takes long is that the plug-ins of the 3 algorithms are executed in sequence. Analyzing the flow shown in fig. 2, the embodiment of the application finds that processing procedure 2 of the algorithm 2 plug-in depends only on the processing procedure of the algorithm 1 plug-in and processing procedure 1 of the algorithm 2 plug-in, and the processing procedure of the algorithm 3 plug-in depends only on processing procedure 2 of the algorithm 2 plug-in and the initialization of the algorithm 3 plug-in. Therefore, the processing procedure of the algorithm 1 plug-in and processing procedure 1 of the algorithm 2 plug-in only need to have completed by the time processing procedure 2 of the algorithm 2 plug-in is executed, and processing procedure 2 of the algorithm 2 plug-in and the initialization of the algorithm 3 plug-in only need to have completed by the time the processing procedure of the algorithm 3 plug-in is executed.
In view of this, the embodiment of the present application provides a processing method in which the terminal device runs mutually independent events in parallel to shorten the processing time, and runs dependent events in sequence to ensure that the algorithms execute smoothly.
The method provided by the embodiment of the application can be applied to any terminal device that uses a plurality of algorithms to process data, and is not limited to terminal devices with a camera application installed, such as the mobile phone described above. The terminal device may also be a tablet computer, a personal computer (PC), a smart screen, an artificial intelligence (AI) speaker, an in-vehicle device, a wearable terminal device such as a smart watch, various teaching aids (e.g., a learning machine or an early education machine), a smart toy, a portable robot, a personal digital assistant (PDA), an augmented reality (AR) device, a virtual reality (VR) device, or the like, or may be a device with a mobile office function, a smart home function, an audio/video entertainment function, or smart travel support, or the like. It should be understood that the embodiment of the present application does not limit the specific technology and specific device configuration adopted by the terminal device.
To better understand the embodiments of the present application, the software structure of the terminal device of the embodiments is described below. The software system of the terminal device may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. The layered architecture may be an Android system, an Apple (iOS) system, or another operating system, which is not limited in the embodiments of the present application. The software structure of the terminal device is illustrated here taking an Android system with a layered architecture as an example.
Fig. 3 is a block diagram of the software structure of a terminal device to which an embodiment of the present application applies. The layered architecture divides the software system of the terminal device into several layers, each with a clear role and division of labor, and the layers communicate through software interfaces. In some embodiments, the Android system may be divided, from top to bottom, into an application layer, an application framework layer, the Android runtime and system libraries, a hardware abstraction layer (HAL), and a kernel layer.
The application layer may include a series of application packages that run applications by calling an application program interface (application programming interface, API) provided by the application framework layer. As shown in fig. 3, the application package may include applications for cameras, gallery, calendar, memo, map, navigation, bluetooth, music, video call, short message, etc.
The application framework layer provides APIs and programming frameworks for application programs of the application layer. The application framework layer includes a number of predefined functions. As shown in FIG. 3, the application framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like.
The window manager is used to manage window programs. The window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, capture screenshots, and the like. The content provider is used to store and retrieve data and make the data accessible to applications. The data may include videos, images, audio, calls made and received, browsing history and bookmarks, phone books, and the like. The view system includes visual controls, such as controls for displaying text and controls for displaying pictures, and may be used to build applications. A display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view for displaying text and a view for displaying pictures. The telephony manager is used to provide communication functions for the terminal device, such as management of call status (including connected, hung up, and so on). The resource manager provides various resources for applications, such as localized strings, icons, pictures, layout files, and video files. The notification manager allows an application to display notification information in the status bar; it can be used to convey notification-type messages and can disappear automatically after a short stay without user interaction. For example, the notification manager is used to notify that a download is complete, to give message alerts, and so on. The notification manager may also present notifications in the form of a chart or scroll-bar text in the status bar at the top of the system, such as notifications of applications running in the background, or notifications in the form of a dialog window on the screen. For example, text is prompted in the status bar, an alert sound is emitted, the terminal device vibrates, or an indicator light blinks.
The Android runtime comprises a core library and a virtual machine, and is responsible for the scheduling and management of the Android system at runtime. The core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the Android core library. The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files, and performs functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection. The system library may contain modules for a number of functions, such as a surface manager, a media library, and a three-dimensional graphics processing library.
The surface manager is used to manage the display subsystem and provides the fusion of two-dimensional and three-dimensional layers for multiple applications. The media library supports playback and recording of many common audio and video formats, as well as still image files, and may support a variety of audio, video, and image encoding formats, such as JPG and PNG. The three-dimensional graphics processing library is used to implement three-dimensional graphics drawing, image rendering, composition, layer processing, and the like.
The hardware abstraction layer comprises an integration side, a business layer, a computational graph framework layer, a resource hosting side, and an algorithm side. The integration side comprises the plug-ins of the integrated algorithms, used to invoke the plurality of algorithms that process the same data. For example, the integration side may include functional plug-ins to invoke algorithm 1, algorithm 2, and algorithm 3. The business layer includes the policies for executing the plurality of algorithms. Each of the plurality of algorithms may be broken down into a plurality of subtasks, and the policy for executing the algorithms includes the execution order of those subtasks. The computational graph framework layer is used to generate a computational graph according to the policy for executing the plurality of algorithms and to run the computational graph. The resource hosting side is used to host the data passed between algorithms, that is, to register the data of subtasks so that other subtasks can retrieve it. The algorithm side provides the corresponding functions to the computational graph framework layer. It can be understood that, compared with the prior art, the embodiment of the application adds the business layer and the computational graph framework layer, where the business layer is preset with the policies for executing the algorithms, and the computational graph framework layer generates and executes the computational graph according to the preset policies.
The kernel layer is a layer between hardware and software. The kernel layer is used for driving the hardware so that the hardware works. The kernel layer at least comprises a display driver, a fingerprint screen driver, a camera driver, a Bluetooth driver and the like, which is not limited by the embodiment of the application.
Fig. 4 shows a schematic flow chart of a method 400 of processing image data. The method 400 may be performed by a terminal device, such as the handset shown in fig. 1, but embodiments of the application are not limited thereto.
As shown in fig. 4, the method may include:
s401, acquiring first image data.
When the terminal device detects that a user triggers a photographing operation in a target photographing mode, initial image data can be acquired through the camera in response to the operation. The first image data may be initial image data or data generated by an event in an algorithm, which is not limited in the present application.
Alternatively, the target photographing mode may be a time-lapse photographing mode, a portrait photographing mode, or a general photographing mode, but the embodiment of the present application is not limited thereto.
S402, processing the first image data through a plurality of algorithms to obtain target image data of a target shooting mode; the processing of the first image data by a plurality of algorithms to obtain target image data of a target shooting mode includes: and running a first event in a first algorithm in the plurality of algorithms and a second event in a second algorithm in the plurality of algorithms in parallel, wherein the first event and the second event are trigger events of a third event in the second algorithm.
If the first image data is the initial image data, the terminal device processes the initial image data (i.e. the first image data) through the plurality of algorithms to obtain target image data. If the first image data is data generated by an event in the algorithm, the processing procedure from the initial image data to the target image data includes other algorithms besides the plurality of algorithms, and the number of other algorithms and the events included in the algorithms are not limited in the present application.
The plurality of algorithms described above includes a first algorithm and a second algorithm, but the present application is not limited thereto. For example, the plurality of algorithms includes a first algorithm, a second algorithm, and a third algorithm. Alternatively, the plurality of algorithms includes a first algorithm, a second algorithm, a third algorithm, and a fourth algorithm.
The first algorithm and the second algorithm are any two algorithms in the plurality of algorithms, and the order of the first algorithm and the second algorithm in the plurality of algorithms is not limited by the application.
The terminal device runs a first event in the first algorithm and a second event in the second algorithm in parallel, wherein the first event is not a trigger event of the second event, the second event is not a trigger event of the first event, and the first event and the second event have no data dependence, namely, the input of the first event is not the output of the second event, and the input of the second event is not the output of the first event.
The first event and the second event are trigger events of the third event in the second algorithm; that is, after both the first event and the second event have finished running, the third event can be triggered. The third event may run on the outputs of both the first event and the second event, or on the output of only the first event or only the second event, which is not limited in the present application. In other words, the input data required for the third event to run includes the output of the first event and/or the second event. It should be noted that, since the first event and the second event are trigger events of the third event, the events that trigger the third event include the first event and the second event; in a possible example, the events that trigger the third event may also include events other than the first event and the second event.
Illustratively, for the example shown in fig. 2, the method provided by the embodiment of the present application can shorten the processing time of the image data. Specifically, fig. 5 shows a schematic diagram of algorithm operation provided by an embodiment of the present application. As shown in fig. 5, the integration side includes a functional plug-in. The functional plug-in includes a start phase, a process-frame phase, and an end phase. The start phase is the initialization of the functional plug-in, the process-frame phase is the processing procedure of the functional plug-in, and the end phase is the de-initialization of the functional plug-in. The initialization of the functional plug-in starts the reading of the policy through the initialization API. The processing procedure of the functional plug-in provides data to the processing procedures of the algorithms through the process API. The de-initialization of the functional plug-in ends the reading of the policy through the de-initialization API.
The business layer may include the policies for executing the algorithms. The terminal device may run the initialization of algorithm 1 and the initialization of algorithm 2 in parallel, and monitor whether each has completed. After the initialization of algorithm 1 is completed, the terminal device may trigger the processing procedure of algorithm 1 and monitor whether it is completed. After the initialization of algorithm 2 is completed, the terminal device may trigger processing procedure 1 of algorithm 2 and monitor whether it is completed. The input data of the processing procedure of algorithm 1 and of processing procedure 1 of algorithm 2 are obtained from the integration side.
After the processing procedure of algorithm 1 is completed, the terminal device may trigger the de-initialization of algorithm 1, processing procedure 2 of algorithm 2, and the initialization of algorithm 3. After processing procedure 1 of algorithm 2 is completed, the terminal device may trigger processing procedure 2 of algorithm 2. It should be noted that processing procedure 2 of algorithm 2 can start to execute only after both the processing procedure of algorithm 1 and processing procedure 1 of algorithm 2 are completed. The terminal device may monitor whether the de-initialization of algorithm 1, processing procedure 2 of algorithm 2, and the initialization of algorithm 3 are complete. After processing procedure 2 of algorithm 2 is completed, the terminal device may trigger the de-initialization of algorithm 2 and the processing procedure of algorithm 3. After the initialization of algorithm 3 is completed, the terminal device may trigger the processing procedure of algorithm 3. It should be noted that the processing procedure of algorithm 3 can start to execute only after both processing procedure 2 of algorithm 2 and the initialization of algorithm 3 are completed. The terminal device may monitor whether the de-initialization of algorithm 2 and the processing procedure of algorithm 3 are complete. After the processing procedure of algorithm 3 is completed, the de-initialization of algorithm 3 and the end are triggered. The terminal device may monitor whether the de-initialization of algorithm 3 and the end are complete.
The algorithm side includes algorithm 1, algorithm 2, and algorithm 3, the same as in the example of fig. 2 above. Each of algorithm 1, algorithm 2, and algorithm 3 can register its functions with the event management of the computational graph framework layer through the corresponding APIs so that they can be called conveniently.
The computational graph event management of the computational graph framework layer can generate a computational graph based on the execution policy of the business layer and run the computational graph, that is, perform parallel scheduling by calling the algorithm-side functions according to the policy.
It should be noted that, in the example shown in fig. 5, running the initialization of algorithm 1 may be referred to as an event, while running the initialization of algorithm 1 and monitoring whether that initialization has completed may be referred to as a task; the other procedures are similar. Therefore, the initialization of algorithm 1, the processing procedure of algorithm 1, and the like shown in the boxes of the business layer in fig. 5 may be referred to as tasks.
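A hedged sketch of this arrangement follows, assuming a policy is expressed as a list of events with their trigger lists (EventNode and run_graph are invented names): the framework repeatedly dispatches every event whose trigger events have all completed, which is the serial analogue of the parallel scheduling described above.

```cpp
#include <cstddef>
#include <functional>
#include <map>
#include <string>
#include <vector>

struct EventNode {
    std::string name;
    std::vector<std::string> triggers;  // events that must complete first
    std::function<void()> body;
};

void run_graph(std::vector<EventNode>& nodes) {
    std::map<std::string, bool> done;
    std::size_t finished = 0;
    while (finished < nodes.size()) {
        bool progressed = false;
        for (auto& n : nodes) {
            if (done[n.name]) continue;
            bool ready = true;
            for (const auto& t : n.triggers)
                if (!done[t]) { ready = false; break; }
            if (ready) {              // a real framework would dispatch all
                n.body();             // ready events in parallel on several
                done[n.name] = true;  // processors instead of running them
                ++finished;           // one by one as done here
                progressed = true;
            }
        }
        if (!progressed) break;  // malformed policy (cycle or missing trigger)
    }
}
```

For the fig. 5 policy, for example, the initialization of algorithm 1 and the initialization of algorithm 2 would carry empty trigger lists and thus become ready at the same time, while processing procedure 2 of algorithm 2 would list the processing procedure of algorithm 1 and processing procedure 1 of algorithm 2 as its triggers.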
According to the image data processing method provided by the application, events of different algorithms that have no trigger relationship can run in parallel. Compared with running the events of different algorithms in sequence, this shortens the running time of the algorithms, which in turn shortens the photographing response time in the target photographing mode and helps improve the user experience.
The terminal device may include multiple processors and the terminal device may run different tasks or events on different processors.
Illustratively, in the example shown in fig. 5 above, the start of the business layer, the initialization of algorithm 1, the processing of algorithm 2, the processing of algorithm 3, and the end may run on processor 1. The initialization of algorithm 2, processing procedure 1 of algorithm 2, the initialization of algorithm 3, the de-initialization of algorithm 2, and the de-initialization of algorithm 3 may run on processor 2. The de-initialization of algorithm 1 may run on processor 3.
In the method 400 described above, the first event and the second event may be executed on different processors.
When the terminal device runs the first event and the second event in parallel, it can run the first event and the second event on different processors, which ensures that the events run normally while making reasonable use of processor resources.
Illustratively, in the example shown in fig. 5, the first event is the processing procedure of algorithm 1, the second event is processing procedure 1 of algorithm 2, and the processing procedure of algorithm 1 and processing procedure 1 of algorithm 2 may run on different processors.
According to the image data processing method provided by the embodiment of the application, the first event and the second event run on different processors, so the second event can run while the first event is running, which saves event running time and improves the efficiency of image data processing.
Optionally, S402 in the method 400, running in parallel a first event in a first algorithm and a second event in a second algorithm in the plurality of algorithms includes: running a first event in a first processor; and running a second event in the second processor, wherein the resource utilization rate of the second processor before running the second event is less than or equal to a preset threshold value.
Running the first event in the first processor may also be referred to as running the first event on the first processor or running the first event on the first processor, which is not limited by the embodiments of the present application.
When the terminal device runs an event on a processor, in order to ensure the normal running of the event and the reasonable use of processor resources, the terminal device can run the event on a processor whose resource usage rate is less than or equal to the preset threshold. The terminal device may run the first event if the resource usage rate of the first processor is less than or equal to the preset threshold, and run the second event if the resource usage rate of the second processor is less than or equal to the preset threshold. The preset threshold may be 50%, 60%, 70%, or 75%; the specific value of the preset threshold is not limited in the embodiment of the present application.
According to the image data processing method, running an event on a processor whose resource usage rate is less than or equal to the preset threshold ensures the normal running of the event while making reasonable use of processor resources.
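Illustratively, the processor check could look like the following sketch, which assumes the third-party psutil package for per-CPU usage; the 50% threshold is one of the example values given above, not a value prescribed by the method.

    import psutil

    PRESET_THRESHOLD = 50.0   # percent; 60, 70, or 75 are equally valid examples

    def pick_processor(threshold=PRESET_THRESHOLD):
        """Return the index of a CPU whose usage is <= threshold, else None."""
        per_cpu = psutil.cpu_percent(interval=0.1, percpu=True)
        for cpu_id, usage in enumerate(per_cpu):
            if usage <= threshold:
                return cpu_id
        return None   # every processor is busier than the preset threshold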
As an optional embodiment, in the method 400, the first event generates second image data and a tag of the second image data, where the tag of the second image data is used to indicate that the second image data is the data generated by the first event. The method 400 may further include: after the first event and the second event have been run, running a fourth event based on the tag of the second image data, where the fourth event includes the third event, and the trigger events of the fourth event include the first event and the second event.
After the terminal device runs the first event, the second image data and the tag of the second image data can be obtained, where the tag of the second image data is used to indicate that the second image data is the data generated by the first event. For example, if the first event is used for denoising, the second image data is the denoised image data, and the tag of the second image data may be "denoised".
After the first event and the second event have been run, the terminal device may run a fourth event, where the fourth event represents all the events triggered by the first event and the second event. Since the first event and the second event can trigger the third event, the fourth event includes, but is not limited to, the third event.
Illustratively, in the example shown in fig. 5, the first algorithm may be algorithm 1, the first event in the first algorithm may be the processing procedure of algorithm 1, the second algorithm is algorithm 2, and the second event in the second algorithm is processing procedure 1 of algorithm 2; the fourth event may include the de-initialization of algorithm 1, processing procedure 2 of algorithm 2, and the initialization of algorithm 3, where processing procedure 2 of algorithm 2 is the third event.
If the input of one or more events in the fourth event is the second image data, then when the terminal device runs the one or more events, it acquires the second image data based on the tag of the second image data and inputs the second image data to the one or more events.
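Illustratively, the tag mechanism could be sketched as follows; the registry dictionary is an assumption standing in for the resource hosting module of the computational graph framework layer described below, and the function names and the "denoised" tag are hypothetical.

    resource_registry = {}   # tag -> image data produced by some event

    def run_first_event(first_image_data):
        # placeholder for the real processing step, e.g. denoising
        second_image_data = first_image_data
        tag = "denoised"      # marks the data as generated by the first event
        resource_registry[tag] = second_image_data
        return tag

    def run_fourth_event(tag):
        # any event of the fourth event whose input is the second image data
        # fetches it by tag, without waiting for the whole first algorithm
        second_image_data = resource_registry[tag]
        return second_image_data

    tag = run_first_event(b"raw bytes")
    result = run_fourth_event(tag)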
It can be understood that the present application takes the image data generated by the first event as an example; the image data generated by other events is handled similarly and is not described in detail herein.
According to the image data processing method provided by the embodiment of the application, the first event can generate image data together with a tag of that image data, so that the events triggered by the first event can acquire the image data at any time based on the tag. Compared with a method in which image data is returned only after the whole algorithm has finished running, for use by subsequent algorithms, the events triggered by the first event do not need to wait until the whole first algorithm has finished before acquiring the data generated by the first event. This saves the time spent waiting for data and shortens the running time of the algorithms, thereby shortening the photographing response time in the target photographing mode and helping to improve the user experience.
As an optional embodiment, the method 400 may further include: releasing the image data generated by the first event if the number of times the image data generated by the first event has been transferred among the plurality of algorithms is equal to the maximum transfer count of the image data generated by the first event.
The image data generated by the first event may be an input to events in other algorithms; it can be understood that the image data generated by the first event may be transferred between events in the plurality of algorithms.
For example, if the image data generated by the first event is an input of the third event, then when the third event is run, the image data generated by the first event is input to the third event, and the image data generated by the first event has thereby been transferred once.
If the image data generated by the first event is an input of a plurality of events in the second algorithm, it can be understood that the image data generated by the first event is transferred a plurality of times. The maximum transfer count of the image data generated by the first event is the maximum number of times that this image data can be transferred.
If the number of times the image data generated by the first event has been transferred among the plurality of algorithms is equal to its maximum transfer count, no subsequent event requires the image data generated by the first event, and the terminal device may release it.
For example, in the example shown in fig. 5, the data generated by the processing procedure of algorithm 1 is an input of processing procedure 2 of algorithm 2, so the maximum transfer count of that data is 1. If the terminal device runs processing procedure 2 of algorithm 2 and acquires the data generated by the processing procedure of algorithm 1, the transfer count of that data becomes 1, which equals the maximum transfer count, and the terminal device may release the data generated by the processing procedure of algorithm 1.
It can be understood that the present application takes the image data generated by the first event as an example; the image data generated by other events is handled similarly and is not described in detail herein.
According to the image data processing method provided by the embodiment of the application, the number of times the image data generated by an event has been transferred is counted, and when the transfer count equals the maximum transfer count, the image data can be released, which saves memory.
As an optional embodiment, the maximum transfer count is equal to the number of fifth events in the plurality of algorithms, where a fifth event is an event that has a data dependency on the first event, and the trigger events of the fifth event include the first event.
A fifth event is an event whose input is the image data generated by the first event, that is, an event that has a data dependency on the first event. The more fifth events there are, the more times the image data generated by the first event needs to be transferred; specifically, the maximum transfer count is equal to the number of fifth events in the plurality of algorithms. It should be noted that a fifth event is an event outside the first algorithm.
It should further be noted that if the fifth event has a data dependency on the first event, the fifth event must also have a trigger relationship with the first event, that is, the first event triggers the fifth event; however, an event that has a trigger relationship with the first event does not necessarily have a data dependency on it.
Illustratively, in the example shown in fig. 5, the output of the processing procedure of algorithm 1 is an input of processing procedure 2 of algorithm 2, so the processing procedure of algorithm 1 and processing procedure 2 of algorithm 2 are data-dependent events; the processing procedure of algorithm 1 can trigger processing procedure 2 of algorithm 2, so the two also have a trigger relationship. The processing procedure of algorithm 1 can also trigger the de-initialization of algorithm 1 and the initialization of algorithm 3, so it has a trigger relationship with them, but these are not data-dependent events.
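Illustratively, under the assumed graph encoding below (each event mapped to the tags of the data it consumes), the maximum transfer count of an event's output can be derived by counting its data-dependent consumers; the event and tag names are hypothetical.

    def max_transfer_count(producer_tag, event_inputs):
        """Count the fifth events: events whose inputs include producer_tag."""
        return sum(1 for inputs in event_inputs.values() if producer_tag in inputs)

    # In the fig. 5 example, only processing procedure 2 of algorithm 2 consumes
    # the output of the processing procedure of algorithm 1, so the count is 1.
    event_inputs = {
        "algorithm 2 / processing procedure 2": ["alg1_out", "alg2_proc1_out"],
        "algorithm 3 / processing procedure": ["alg2_proc2_out"],
    }
    assert max_transfer_count("alg1_out", event_inputs) == 1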
For a better understanding of how data is transferred while the algorithms run, an embodiment of the present application is described with reference to the example shown in fig. 5. Fig. 6 shows a schematic diagram of cross-algorithm data transfer. As shown in fig. 6, the integration side and the service layer are the same as in the example shown in fig. 5 and are not described here again. The computational graph framework layer may further include a resource hosting module for hosting the data that needs to be transferred between algorithms; the data that needs to be transferred between algorithms, together with the maximum transfer count of that data, may be preset in the resource hosting module.
The input data of the processing procedure of algorithm 1 is data 1, acquired from the integration side, and the output data of the processing procedure of algorithm 1 is data 2 and a tag (or identifier) of data 2. The terminal device may register data 2 and the tag of data 2 with the resource hosting module in the computational graph framework layer.
The input data of processing procedure 1 of algorithm 2 is data 3, acquired from the integration side, and the output data of processing procedure 1 of algorithm 2 is data 4.
The inputs of processing procedure 2 of algorithm 2 are data 2 and data 4. The terminal device can acquire data 2 from the resource hosting module based on the tag of data 2 and decrease the maximum transfer count of data 2 by 1; if the maximum transfer count of data 2 reaches 0, the virtual address storing data 2 is released. The terminal device can process data 2 and data 4 through processing procedure 2 of algorithm 2 to obtain data 5 and the tag of data 5, and may register data 5 and the tag of data 5 with the resource hosting module in the computational graph framework layer.
The input of the processing procedure of algorithm 3 is data 5. The terminal device can acquire data 5 from the resource hosting module based on the tag of data 5 and decrease the maximum transfer count of data 5 by 1; if the maximum transfer count of data 5 reaches 0, the virtual address storing data 5 is released. The terminal device can process data 5 through the processing procedure of algorithm 3 to obtain a processing result, which is the final result of data 1 after being processed by algorithm 1, algorithm 2, and algorithm 3.
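Illustratively, the resource hosting flow of fig. 6 could be sketched as follows; the ResourceHosting class and its register and fetch methods are illustrative assumptions rather than the framework's actual interface.

    class ResourceHosting:
        def __init__(self, max_transfers):
            # max_transfers: tag -> preset maximum transfer count of that data
            self._remaining = dict(max_transfers)
            self._store = {}

        def register(self, tag, data):
            """Called when an event outputs data plus its tag (e.g. data 2)."""
            self._store[tag] = data

        def fetch(self, tag):
            """Fetch data by tag, decrement its count, release it at zero."""
            data = self._store[tag]
            self._remaining[tag] -= 1
            if self._remaining[tag] == 0:
                del self._store[tag]   # analogous to freeing the virtual address
            return data

    hosting = ResourceHosting({"data2": 1, "data5": 1})
    hosting.register("data2", b"...")   # output of the processing procedure of algorithm 1
    data2 = hosting.fetch("data2")      # consumed by processing procedure 2 of algorithm 2
    # data 2 is now released from the resource hosting module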
The sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present application.
The method of the embodiments of the application has been described above; the apparatus for performing the method, as provided by the embodiments of the application, is described below. Those skilled in the art will understand that the method and the apparatus may be combined with and cross-referenced to each other, and the apparatus provided by the embodiments of the present application may perform the steps of the method described above.
Fig. 7 shows a schematic block diagram of a terminal device 700 according to an embodiment of the present application. The terminal device 700 includes an obtaining module 710 and a processing module 720.
The obtaining module 710 is configured to acquire first image data. The processing module 720 is configured to process the first image data through a plurality of algorithms to obtain target image data of a target shooting mode. Specifically, the processing module 720 is configured to run, in parallel, a first event in a first algorithm of the plurality of algorithms and a second event in a second algorithm of the plurality of algorithms, where the first event and the second event are trigger events of a third event in the second algorithm.
It should be understood that the terminal device 700 here is embodied in the form of functional modules. The term "module" herein may refer to an application-specific integrated circuit (ASIC), an electronic circuit, a processor (for example, a shared, dedicated, or group processor) and memory that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that support the described functionality. In an alternative example, those skilled in the art will understand that the terminal device 700 may specifically be the terminal device in the foregoing method embodiments, or the functions of the terminal device in the foregoing method embodiments may be integrated in the terminal device 700; the terminal device 700 may be used to execute each flow and/or step corresponding to the terminal device in the foregoing method embodiments, which is not repeated here to avoid repetition.
The terminal device 700 has a function of implementing the corresponding steps executed by the terminal device in the method embodiment; the above functions may be implemented by hardware, or may be implemented by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above.
In an embodiment of the present application, the terminal device 700 in fig. 7 may also be a chip or a chip system, for example, a system on chip (SoC).
Fig. 8 is a schematic block diagram of another terminal device 800 provided in an embodiment of the present application. The terminal device 800 includes a processor 810, a transceiver 820, and a memory 830, which communicate with each other through internal connection paths. The memory 830 is configured to store instructions, and the processor 810 is configured to execute the instructions stored in the memory 830 to control the transceiver 820 to send and/or receive signals.
It should be understood that the terminal device 800 may be specifically a terminal device in the above method embodiment, or the functions of the terminal device in the above method embodiment may be integrated in the terminal device 800, and the terminal device 800 may be configured to perform the steps and/or flows corresponding to the terminal device in the above method embodiment. The memory 830 may optionally include read-only memory and random access memory, and provide instructions and data to the processor. A portion of the memory may also include non-volatile random access memory. For example, the memory may also store information of the device type. The processor 810 may be configured to execute instructions stored in the memory, and when the processor executes the instructions, the processor may perform steps and/or processes corresponding to the terminal device in the above-described method embodiments.
It should be appreciated that in embodiments of the application, the processor 810 may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or by instructions in the form of software. The steps of a method disclosed in connection with the embodiments of the present application may be embodied directly in a hardware processor for execution, or in a combination of hardware and software modules in the processor for execution. The software modules may be located in a random access memory, flash memory, read only memory, programmable read only memory, or electrically erasable programmable memory, registers, etc. as well known in the art. The storage medium is located in a memory, and the processor executes instructions in the memory to perform the steps of the method described above in conjunction with its hardware. To avoid repetition, a detailed description is not provided herein.
The image data processing method provided by the embodiment of the application can be applied to the electronic equipment with the communication function. The electronic device includes a terminal device, and specific device forms and the like of the terminal device may refer to the above related descriptions, which are not repeated herein.
An embodiment of the application provides a terminal device, including: a processor and a memory; the memory stores computer-executable instructions; and the processor executes the computer-executable instructions stored in the memory to cause the terminal device to perform the method described above.
An embodiment of the present application provides a chip, where the chip includes a processor, and the processor is configured to call a computer program in a memory to execute the technical solution in the foregoing embodiment. The principle and technical effects of the present application are similar to those of the above-described related embodiments, and will not be described in detail herein.
The embodiment of the application also provides a computer-readable storage medium. The computer-readable storage medium stores a computer program, and the computer program implements the above method when executed by a processor. The methods described in the above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over a computer-readable medium as one or more instructions or code. Computer-readable media may include computer storage media and communication media, and may include any medium that can transfer a computer program from one place to another. The storage medium may be any target medium that can be accessed by a computer.
In one possible implementation, the computer-readable medium may include RAM, ROM, compact disc read-only memory (CD-ROM) or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store the desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
Embodiments of the present application provide a computer program product comprising a computer program which, when executed, causes a computer to perform the above-described method.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing detailed description of the application has been presented for purposes of illustration and description, and it should be understood that the foregoing is by way of illustration and description only, and is not intended to limit the scope of the application.

Claims (10)

1. A method of processing image data, comprising:
acquiring first image data;
processing the first image data through a plurality of algorithms to obtain target image data of a target shooting mode;
the processing the first image data through a plurality of algorithms to obtain target image data of a target shooting mode includes:
and running a first event in a first algorithm in the plurality of algorithms and a second event in a second algorithm in the plurality of algorithms in parallel, wherein the first event and the second event are trigger events of a third event in the second algorithm.
2. The method of claim 1, wherein the first event generates second image data and a tag of the second image data, the tag of the second image data being used to indicate that the second image data is the data generated by the first event;
the method further comprises the steps of:
and after the first event and the second event have been run, running a fourth event based on the tag of the second image data, wherein the fourth event comprises the third event, and trigger events of the fourth event comprise the first event and the second event.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
and if the number of times the image data generated by the first event has been transferred among the plurality of algorithms is equal to the maximum transfer count of the image data generated by the first event, releasing the image data generated by the first event.
4. The method according to claim 3, wherein the maximum transfer count is equal to the number of fifth events in the plurality of algorithms, the fifth events being events that have a data dependency on the first event, a trigger event of the fifth events comprising the first event.
5. The method of any of claims 1-4, wherein the first event and the second event are run on different processors.
6. The method of claim 5, wherein running a first event in a first algorithm and a second event in a second algorithm of the plurality of algorithms in parallel comprises:
running the first event in a first processor;
and running the second event in a second processor, wherein the resource utilization rate of the second processor before running the second event is less than or equal to a preset threshold value.
7. The method according to any one of claims 1 to 6, wherein the target photographing mode is a time-lapse photographing mode, a portrait photographing mode, or a normal photographing mode.
8. A terminal device, comprising: a processor and a memory;
the memory stores computer-executable instructions;
the processor executing computer-executable instructions stored in the memory to cause the terminal device to perform the method of any one of claims 1-7.
9. A computer readable storage medium storing a computer program, which when executed by a processor implements the method according to any one of claims 1-7.
10. A computer program product comprising a computer program which, when run, causes a computer to perform the method of any of claims 1-7.
SE01 Entry into force of request for substantive examination