CN113806054A - Task processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number: CN113806054A
Authority: CN (China)
Prior art keywords: task, processing, processed, subtask, detection frame
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Application number: CN202111137684.6A
Other languages: Chinese (zh)
Inventor: 郭晓龙
Current assignee: Beijing Sensetime Technology Development Co Ltd
Original assignee: Beijing Sensetime Technology Development Co Ltd
Application filed by Beijing Sensetime Technology Development Co Ltd
Priority: CN202111137684.6A
Related applications: PCT/CN2022/075002 (WO2023045207A1); TW111117088A (TW202314496A)

Classifications

    • G06F 9/4843 — Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system (G: Physics; G06F: Electric digital data processing; G06F 9/46: Multiprogramming arrangements; G06F 9/48: Program initiating, program switching)
    • G06N 3/04 — Neural networks: architecture, e.g. interconnection topology (G06N: Computing arrangements based on specific computational models; G06N 3/00: Computing arrangements based on biological models)
    • G06N 3/08 — Neural networks: learning methods
    • G06T 1/20 — Processor architectures; processor configuration, e.g. pipelining (G06T: Image data processing or generation, in general; G06T 1/00: General purpose image data processing)

Abstract

The method acquires a task to be processed and divides it, via a main thread, according to the functions of at least two processing modules, obtaining sub-tasks to be processed that each comprise at least part of the data to be processed. A plurality of sub-threads then process the corresponding sub-tasks through the processing modules in parallel to obtain sub-task processing results, and the task processing result of the task to be processed is determined from the individual sub-task processing results. In this way, when a task is received it can be divided by the main thread, sub-threads can be created to process the partial tasks in parallel through the different processing modules, and the sub-task processing results of the processing modules can be aggregated into the final task processing result, improving task processing efficiency.

Description

Task processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a task processing method and apparatus, an electronic device, and a storage medium.
Background
When performing task processing, the conventional approach usually relies on an application running at the native layer of a computer. In some current scenarios, task processing can be performed directly through a front-end page. However, since the scripting language of the front-end page is an interpreted language that runs in a single-threaded mode, task processing efficiency is low.
Disclosure of Invention
The disclosure provides a task processing method and device, an electronic device and a storage medium, and aims to improve task processing efficiency when a front-end page is used for processing a task.
According to a first aspect of the present disclosure, there is provided a task processing method, the method including:
acquiring a task to be processed, wherein the task to be processed comprises data to be processed;
dividing the to-be-processed tasks according to the functions of at least two processing modules through a main thread to obtain at least two to-be-processed subtasks, wherein each to-be-processed subtask corresponds to one processing module and comprises at least part of to-be-processed data, and each processing module is used for processing different to-be-processed subtasks;
respectively calling processing module interfaces through a plurality of sub-threads, and processing corresponding to-be-processed subtasks through each processing module in parallel to obtain a subtask processing result;
and determining a task processing result of the task to be processed based on each subtask processing result.
In a possible implementation manner, the acquiring the task to be processed includes:
displaying a task processing page having at least two processing modules;
and acquiring the task to be processed through the task processing interface.
In one possible implementation, each of the sub-threads sends the sub-task processing result to the main thread through an asynchronous message passing mechanism.
In one possible implementation, the processing module is stored as a binary-format file, and the binary-format file is obtained by compiling non-browser source code (source code written in a language other than the browser's scripting language) according to a binary code compilation specification.
In a possible implementation manner, the processing module is a deep learning model obtained by training in advance.
In one possible implementation, the deep learning models are used for processing at least two of the following types of to-be-processed subtasks:
a face detection task, a hair detection task, a lip segmentation task, and a nail detection task.
In a possible implementation manner, the process of processing the corresponding to-be-processed sub-task by the processing module includes:
and inputting the image data in the subtask to be processed into the deep learning model, and outputting at least one of a face detection frame, a hair detection frame, a lip detection frame and a nail detection frame of the image data as a subtask processing result.
In a possible implementation manner, the respectively invoking the processing module interfaces through the multiple sub-threads, and processing the corresponding to-be-processed sub-tasks through each of the processing modules in parallel to obtain the sub-task processing results, includes:
creating a plurality of worker threads through the main thread;
and calling a processing module interface through each worker thread respectively, and processing the corresponding to-be-processed subtasks through each processing module in parallel to obtain a subtask processing result.
In a possible implementation manner, the determining a task processing result of the to-be-processed task by summarizing the sub-task processing results through the main thread includes:
and summarizing the processing results of all the subtasks through the main thread, and adding the processing results of all the subtasks into a front-end page to obtain task processing results.
In a possible implementation manner, the subtask processing result includes at least two items of text information among face detection box coordinates, hair detection box coordinates, lip detection box coordinates, and nail detection box coordinates;
and the task processing result is text information comprising the processing results of the subtasks.
In one possible implementation, the subtask processing result includes at least two items of marked image information among image data with a face detection frame, image data with a hair detection frame, image data with a lip detection frame, and image data with a nail detection frame;
the task processing result is a front-end page with superimposed image information, where the superimposed image information is image data carrying at least two of the face detection frame, the hair detection frame, the lip detection frame, and the nail detection frame, or image data carrying at least one object detection frame obtained by merging at least two of those detection frames.
According to a second aspect of the present disclosure, there is provided a task processing apparatus, the apparatus including:
the task determining module is used for acquiring a task to be processed, wherein the task to be processed comprises data to be processed;
the task segmentation module is used for segmenting the to-be-processed tasks according to the functions of at least two processing modules through a main thread to obtain at least two to-be-processed subtasks, each to-be-processed subtask corresponds to one processing module and comprises at least part of to-be-processed data, and each processing module is used for processing different to-be-processed subtasks;
the task processing module is used for respectively calling the processing module interfaces through a plurality of sub threads, processing the corresponding to-be-processed subtasks through each processing module in parallel and obtaining a subtask processing result;
and the result determining module is used for determining the task processing result of the task to be processed based on each subtask processing result.
In one possible implementation, the task determination module includes:
the page display submodule is used for displaying a task processing page with at least two processing modules;
and the task acquisition submodule is used for acquiring the task to be processed through the task processing interface.
In one possible implementation, each of the sub-threads sends the sub-task processing result to the main thread through an asynchronous message passing mechanism.
In one possible implementation, the processing module is stored as a binary-format file, and the binary-format file is obtained by compiling non-browser source code according to a binary code compilation specification.
In a possible implementation manner, the processing module is a deep learning model obtained by training in advance.
In a possible implementation manner, the process of processing the corresponding to-be-processed sub-task by the processing module includes:
and inputting the image data in the subtask to be processed into the deep learning model, and outputting at least one of a face detection frame, a hair detection frame, a lip detection frame and a nail detection frame of the image data as a subtask processing result.
In one possible implementation manner, the task processing module includes:
the thread creating submodule is used for creating a plurality of worker threads through the main thread;
and the task processing submodule is used for respectively calling a processing module interface through each worker thread, and processing the corresponding to-be-processed subtasks through each processing module in parallel to obtain a subtask processing result.
In one possible implementation, the result determination module includes:
and the result determining submodule is used for summarizing the processing results of the subtasks through the main thread and adding the processing results of the subtasks into a front-end page to obtain task processing results.
In a possible implementation manner, the subtask processing result includes at least two items of text information among face detection box coordinates, hair detection box coordinates, lip detection box coordinates, and nail detection box coordinates;
and the task processing result is text information comprising the processing results of the subtasks.
In one possible implementation, the subtask processing result includes at least two items of marked image information among image data with a face detection frame, image data with a hair detection frame, image data with a lip detection frame, and image data with a nail detection frame;
the task processing result is a front-end page with superimposed image information, where the superimposed image information is image data carrying at least two of the face detection frame, the hair detection frame, the lip detection frame, and the nail detection frame, or image data carrying at least one object detection frame obtained by merging at least two of those detection frames.
According to a third aspect of the present disclosure, there is provided an electronic device comprising: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
According to a fourth aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
According to the embodiments of the present disclosure, applications for processing different tasks serve as the processing modules of the task processing page. When the task processing page receives a task, the task can be divided and processed in parallel by the plurality of processing modules; the sub-task processing result of each processing module is obtained through asynchronous message passing, and the final task processing result is derived from the sub-task processing results, improving the efficiency of task processing based on a front-end page.
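The summarizing step described above — collecting each processing module's sub-task result into a single task processing result that can then be added to the front-end page — can be sketched as follows. This is a minimal illustration; the field names `module` and `box` are assumptions for the example, not identifiers from the disclosure.

```javascript
// A sketch of the summarizing step: merge each module's sub-task result
// (here, detection-box coordinates) into one task processing result.
// The `module` and `box` field names are illustrative assumptions.
function summarizeResults(subtaskResults) {
  const summary = {};
  for (const { module, box } of subtaskResults) {
    summary[module] = box; // e.g. { x, y, w, h } detection-box coordinates
  }
  return summary;
}

// Two simulated sub-task results, as two worker threads might return them.
const merged = summarizeResults([
  { module: "face", box: { x: 10, y: 20, w: 100, h: 120 } },
  { module: "mouth", box: { x: 40, y: 90, w: 30, h: 15 } },
]);
```

In a real page, `merged` would then be rendered into the front-end page, e.g. by drawing each box over the displayed image.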
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 shows a flow diagram of a method of task processing according to an embodiment of the present disclosure;
FIG. 2 illustrates a schematic diagram of a task processing page according to an embodiment of the present disclosure;
FIG. 3 illustrates a schematic diagram of determining subtask processing results according to an embodiment of the present disclosure;
FIG. 4 is a diagram illustrating results of task processing according to an embodiment of the present disclosure;
FIG. 5 illustrates a schematic diagram of determining task processing results according to an embodiment of the disclosure;
FIG. 6 shows a schematic diagram of a task processing device according to an embodiment of the present disclosure;
FIG. 7 shows a schematic diagram of an electronic device in accordance with an embodiment of the disclosure;
fig. 8 shows a schematic diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
A worker thread in the embodiments of the present disclosure is used to create a multi-threaded environment for the otherwise single-threaded JavaScript language, and can be an additional thread opened up by the main thread. A worker thread runs in the background without interfering with the main thread, and returns its result to the main thread once it has finished a computation task. A worker thread cannot communicate with the main thread directly; communication is accomplished through an asynchronous message-passing mechanism such as postMessage. postMessage is a function commonly used in front-end development that allows scripts from different sources to communicate effectively in an asynchronous manner, enabling cross-document, multi-window, and cross-domain message passing. The main thread and the worker threads send their respective messages through the postMessage function.
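The postMessage round trip described above can be sketched from the main thread's side as follows. This is a minimal sketch: the Promise wrapper and the fake worker object are illustrative assumptions (in a browser, the same wrapper would be given a real `Worker` instance), not code from the patent.

```javascript
// A sketch of the asynchronous message-passing mechanism: one postMessage
// round trip wrapped in a Promise. The worker-like object is injected so
// the pattern can also run outside a browser.
function runInWorker(worker, subtask) {
  return new Promise((resolve, reject) => {
    worker.onmessage = (event) => resolve(event.data); // sub-task result comes back
    worker.onerror = (err) => reject(err);
    worker.postMessage(subtask); // hand the sub-task to the worker
  });
}

// A stand-in for `new Worker("module.js")`: it echoes a processing result
// back asynchronously, the way a real worker thread would.
function makeFakeWorker(process) {
  const worker = {
    onmessage: null,
    onerror: null,
    postMessage(msg) {
      setTimeout(() => worker.onmessage({ data: process(msg) }), 0);
    },
  };
  return worker;
}
```

The main thread thus sends a sub-task out and receives the sub-task processing result back without ever blocking on the worker.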
WebAssembly is a technical solution that allows code written in a non-JavaScript programming language to run in a browser; optionally, the code written in the non-JavaScript programming language can be any code such as C code, C++ code, or Rust code.
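As a concrete illustration of the WebAssembly mechanism described above, the following sketch instantiates a module from JavaScript. The byte sequence is a hand-assembled module exporting a trivial `add` function; it merely stands in for the binary-format file a real processing module would be, which in practice would be compiled from C/C++/Rust source (e.g. with emsdk) rather than written by hand.

```javascript
// A hand-assembled WebAssembly module exporting `add(a, b)`, standing in
// for the binary-format processing module described in the text.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm" magic + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // one function of that type
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it under the name "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // body: local.get 0/1, i32.add
]);

// Compile and instantiate synchronously; `instance.exports` is the
// "processing module interface" that JavaScript code can call.
const instance = new WebAssembly.Instance(new WebAssembly.Module(wasmBytes));
const { add } = instance.exports;
```

Calling `add(2, 3)` returns `5`; a real processing module would expose its task-processing entry points through `exports` in the same way.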
Fig. 1 shows a flowchart of a task processing method according to an embodiment of the present disclosure. In a possible implementation manner, the task processing method of the embodiments of the present disclosure is executed by the web client of a browser, by another application program that can load a front-end page, or by an applet inside such an application. Optionally, the browser or other application may be installed in a terminal device, which may be any terminal device capable of installing such an application, such as a User Equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. The task processing method may be implemented through the JS (JavaScript) scripting language of the task processing page, called by the web client, application, or applet installed in the terminal device.
The following description mainly takes a method for executing task processing by a web client in a browser as an example.
As shown in fig. 1, a task processing method according to an embodiment of the present disclosure may include the following steps:
and step S10, acquiring the task to be processed.
In one possible implementation, the task to be processed may be obtained through a browser. Optionally, the to-be-processed task may be obtained through a task processing page loaded and displayed by the browser; for example, a task processing page having at least two processing modules is displayed, and the to-be-processed task is acquired through a task processing interface. The task to be processed may be an image recognition task, a video processing task, a text processing task, an audio processing task, or the like.
Further, the to-be-processed task may include to-be-processed data, which is the data on which task processing is performed. For example, when the task to be processed is a video processing task, the data to be processed is the video data that needs to be processed; when it is an image processing task, the data to be processed is the image data that needs to be processed; and when it is an audio processing task, the data to be processed is the audio data that needs to be processed.
In a possible implementation manner, the process of obtaining the to-be-processed task through the task processing page may be to obtain the to-be-processed data in a human-computer interaction manner and determine the to-be-processed task. Optionally, the task processing page may include a data acquisition control for controlling, when triggered, the corresponding data acquisition device to be turned on to acquire the data to be processed, and generate the task to be processed corresponding to the data to be processed. That is to say, the browser may collect the data to be processed in response to the data collection control being triggered, and determine the task to be processed according to the data to be processed. For example, when the task processing page is used for processing an image processing task, the data acquisition control in the task processing page is an image acquisition control. When a user triggers the image acquisition control by clicking and the like, the browser controls the terminal equipment provided with the browser to start the image acquisition device, and image data are acquired as to-be-processed data through the image acquisition device. And further, generating a corresponding image processing task as the task to be processed according to the image data to be processed.
Furthermore, a plurality of data items may be stored in advance in the terminal device in which the browser of the embodiments of the present disclosure is installed, and a user can select data stored in the terminal device as the data to be processed through human-computer interaction with the task processing page. Optionally, the task processing page may include a data upload control for uploading data to the browser; when the data upload control is triggered, data in the terminal device is uploaded as the data to be processed, and the task to be processed is determined from it. For example, when the task processing page is used for an image processing task, the data upload control in the task processing page is an image upload control. When the user triggers the image upload control by clicking or the like, the browser controls the terminal device on which it is installed to open the local album, and the user selects at least one piece of image data in the local album to upload to the browser as the data to be processed. A corresponding image processing task is then generated from the image data as the task to be processed.
Optionally, the task processing page may include a data acquisition control and a data upload control at the same time, and the user may select to trigger one of the controls to determine the data to be processed, and generate the corresponding task to be processed.
FIG. 2 shows a schematic diagram of a task processing page according to an embodiment of the present disclosure. As shown in fig. 2, a data collection control 21 and a data upload control 22 may be included in the task processing page 20. The user acquires the data to be processed when triggering the data acquisition control 21, and uploads the data to be processed when triggering the data upload control 22, so as to generate a task to be processed according to the data to be processed. In a possible implementation manner, the embodiment of the present disclosure is configured to perform a face recognition task, where the data acquisition control 21 is configured to acquire a face image, and the data uploading control 22 is configured to upload the face image. Optionally, when the user triggers the data acquisition control 21, the browser controls the camera of the terminal device to start, and acquires the face image as the data to be processed to generate the face recognition task to be processed. When the user triggers the data uploading control 22, the browser controls the local album of the terminal device to be opened, and selects the face image to be identified as the data to be processed to generate the face identification task to be processed.
According to the embodiments of the present disclosure, the data to be processed can be determined directly through the task processing page, and the task to be processed is generated based on the data to be processed, so that the whole task processing process is completed independently by the browser from beginning to end, without additionally calling a task processing program at the bottom layer of the terminal device.
In a possible implementation manner, the task processing page displayed by the browser may further include at least two processing modules, where each processing module is configured to process a different task and can be used by calling its processing module interface through the JS scripting language. Alternatively, the processing module may be stored as a binary-format file obtained by compiling non-browser source code according to a binary code compilation specification. The binary code compilation specification can be WebAssembly, the source code can be C code, C++ code, Rust code, or the like, and a binary-format file supported by the browser is obtained by compiling with emsdk, the compilation toolchain provided by WebAssembly.
Further, the processing module may be a program for performing task processing; for example, when the task processing page is used for video processing, the processing modules may include programs such as an image processing tool for image processing and an audio processing tool for audio processing. Alternatively, the processing module may be a deep learning model obtained by training in advance; for example, when the task processing page is used for face recognition, the processing modules may include a face recognition model for face recognition, a hair recognition model for hair recognition, a mouth recognition model for mouth recognition, and the like. That is, when the processing modules are deep learning models, the deep learning models are used for processing at least two of the following types of to-be-processed subtasks: a face detection task, a hair detection task, a lip segmentation task, and a nail detection task.
Step S20: dividing the to-be-processed task according to the functions of the at least two processing modules through the main thread to obtain at least two to-be-processed subtasks.
In a possible implementation manner, the main thread divides the task to be processed according to the functions of the at least two processing modules and the content of the task to be processed, so as to obtain at least two sub-tasks to be processed. Each to-be-processed sub-task may correspond to one processing module, and includes at least part of to-be-processed data in the to-be-processed task. Optionally, the process of segmenting the task to be processed is performed by a JavaScript main thread.
The task to be processed in this embodiment of the present disclosure is a face recognition task, and the processing modules of the task processing page include a human eye recognition module, a hair recognition module, a face recognition module, a mouth recognition module, an arm recognition module, and a hand recognition module. Since the face recognition process needs to locate and recognize key points such as the eyes, hair, face, and mouth, and the task processing page provides a human eye recognition module, a hair recognition module, a face recognition module, and a mouth recognition module, the task to be processed may be divided into a human eye recognition subtask corresponding to the human eye recognition module, a hair recognition subtask corresponding to the hair recognition module, a face recognition subtask corresponding to the face recognition module, and a mouth recognition subtask corresponding to the mouth recognition module.
Further, since the recognition processes of the different processing modules must all be performed on a complete face, the human eye recognition subtask, the hair recognition subtask, the face recognition subtask, and the mouth recognition subtask each include the face image data to be processed in the face recognition task.
According to the embodiment of the disclosure, the task to be processed can be divided into a plurality of subtasks in a task division manner, so that the task processing can be performed in parallel through different processing modules, and the task processing efficiency can be improved.
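The division step above can be sketched as a plain function from a task and the available processing modules to sub-tasks. All names here (`splitTask`, `requiredFunctions`, the module list) are illustrative assumptions for the example, not identifiers from the patent; note that, as in the face recognition example, every sub-task carries the full image data.

```javascript
// A sketch of the division step: one sub-task per processing module whose
// function the task to be processed requires.
function splitTask(task, modules) {
  return modules
    .filter((m) => task.requiredFunctions.includes(m.function))
    .map((m) => ({
      module: m.name,  // the processing module this sub-task corresponds to
      data: task.data, // each sub-task carries the full face image data
    }));
}

// The modules from the face recognition example; arm recognition is
// available but not needed for this task.
const modules = [
  { name: "eyeModule", function: "eye" },
  { name: "hairModule", function: "hair" },
  { name: "faceModule", function: "face" },
  { name: "mouthModule", function: "mouth" },
  { name: "armModule", function: "arm" },
];
const faceTask = {
  requiredFunctions: ["eye", "hair", "face", "mouth"],
  data: "faceImageData",
};
const subtasks = splitTask(faceTask, modules); // four sub-tasks
```

Each element of `subtasks` can then be handed to the worker thread that wraps the corresponding processing module.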
Step S30: calling processing module interfaces through a plurality of sub-threads respectively, and processing the corresponding to-be-processed subtasks through each processing module in parallel to obtain subtask processing results.
In a possible implementation manner, the task processing method of the embodiments of the present disclosure calls each processing module interface through a plurality of sub-threads to process the corresponding to-be-processed sub-tasks in parallel and obtain the task processing result. Optionally, each sub-thread is created by the main thread. Since the main thread is a JavaScript thread running in single-threaded mode, tasks cannot be processed in parallel by the main thread alone. Therefore, a plurality of worker threads can be opened up by the main thread, and the corresponding to-be-processed subtasks are processed in parallel by the processing modules on these browser threads. The worker threads can run in the background while the main thread runs, without interfering with it. After a worker thread finishes processing its current to-be-processed subtask, it obtains a subtask processing result and returns it to the main thread. That is, the task processing process may create multiple worker threads through the main thread, call a processing module interface through each worker thread, and process the corresponding to-be-processed subtasks through each processing module in parallel to obtain the subtask processing results.
In a possible implementation manner, after a plurality of worker threads are created through the main thread, based on the characteristic that a worker thread can run in the background while the main thread runs, the processing module interfaces can be respectively called through the worker threads, so that the corresponding subtasks to be processed are processed in parallel through the processing modules to obtain subtask processing results.
By creating worker threads, the embodiment of the present disclosure overcomes the limitation that the browser can otherwise execute tasks only in a single thread, realizes parallel task processing through a plurality of processing modules, and improves task processing speed and efficiency.
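The fan-out/fan-in pattern described above can be sketched as follows. In a browser, each job would run in a real worker thread and report its result back via postMessage; here the workers are simulated with asynchronous functions so the control flow is self-contained, and all names are illustrative rather than from the disclosure.

```javascript
// Sketch (illustrative names) of the worker fan-out/fan-in. Stands in for
// creating a Worker, calling worker.postMessage(subtask), and collecting
// the onmessage reply on the main thread.
function runInSimulatedWorker(processModule, subtask) {
  return Promise.resolve().then(() => processModule(subtask.data));
}

async function processInParallel(subtasks, modules) {
  // The main thread fans the subtasks out and gathers every subtask result.
  const jobs = subtasks.map((st) => runInSimulatedWorker(modules[st.module], st));
  return Promise.all(jobs);
}

// Stand-in "processing modules": each one recognizes one facial feature.
const modules = {
  mouth: (imageData) => ({ module: "mouth", box: [40, 60, 70, 80] }),
  hair: (imageData) => ({ module: "hair", box: [0, 0, 100, 30] }),
};
```

`Promise.all` mirrors the summarization step: the main thread proceeds only once every worker has reported back, while the individual jobs run without blocking each other.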
In one possible implementation, each processing module is a deep learning model obtained by training in advance. The processing module may process its corresponding subtask to be processed by taking as input at least part of the data to be processed included in that subtask and outputting the corresponding subtask processing result. When the processing module is a deep learning model for object recognition, the image data in the subtask to be processed may be input into the deep learning model, and at least one of a face detection frame, a hair detection frame, a lip detection frame, and a nail detection frame of the image data may be output as the subtask processing result. For example, when the subtask to be processed is a mouth recognition task and the processing module is a mouth recognition model, the face image to be recognized in the mouth recognition task is input into the mouth recognition model, and the coordinate information of the mouth is output as the subtask processing result.
FIG. 3 is a schematic diagram illustrating a determination of a subtask processing result according to an embodiment of the present disclosure. As shown in fig. 3, when the processing module of the task processing page is a deep learning model obtained by training, the embodiment of the present disclosure may output a corresponding subtask processing result 32 by inputting at least part of the data to be processed included in the subtask 30 to the processing module 31 obtained by training in advance.
In step S40, the task processing result of the task to be processed is determined based on the subtask processing results.
In a possible implementation manner, after each processing module processes its subtask to be processed and obtains a subtask processing result, each worker thread may send the subtask processing result to the main thread through an asynchronous message passing mechanism. Optionally, the message passing mechanism may pass messages through the postMessage function. Further, the browser summarizes all the subtask processing results through the main thread to determine the task processing result of the task to be processed. The task processing result is determined according to the content of the subtask processing results and the type of the data to be processed included in the task to be processed. The main thread may determine the task processing result by summarizing the subtask processing results and adding them to the front-end page.
Optionally, the subtask processing result may be text information. In response to the processing result of each subtask being text information, the main thread summarizes the pieces of text information and obtains the task processing result of the task to be processed by directly adding them to the front-end page. That is, when the subtask processing result obtained by each processing module is text information, the main thread can directly summarize the pieces of text information to obtain, as the task processing result, a front-end page including each subtask processing result, and the task processing result can be displayed by the browser. The subtask processing results may include at least two pieces of text information among face detection frame coordinates, hair detection frame coordinates, lip detection frame coordinates, and nail detection frame coordinates, and the task processing result may be a front-end page including the subtask processing results.
Taking the task to be processed in the embodiment of the present disclosure as an image recognition task as an example, each processing module is respectively used for recognizing the position coordinates of an object in the image data of the task to be processed. When the processing modules are respectively used for recognizing at least two of the face position, the hair position, the lip position, and the nail position in the image data, the obtained subtask processing results include at least two pieces of text information among the face detection frame coordinates, the hair detection frame coordinates, the lip detection frame coordinates, and the nail detection frame coordinates. After summarizing the detection frame coordinates recognized by each processing module, the main thread directly adds the text information of the detection frame coordinates to the front-end page, and the text information including each subtask processing result is obtained as the task processing result of the task to be processed.
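A minimal sketch of this text-summarization step (function and field names hypothetical): the main thread concatenates the coordinate text returned by each module into the text that is added to the front-end page.

```javascript
// Summarize per-module detection-frame coordinates as plain text, one line
// per subtask result, for insertion into the front-end page.
function summarizeTextResults(subtaskResults) {
  return subtaskResults
    .map((r) => `${r.module}: [${r.box.join(", ")}]`)
    .join("\n");
}

const pageText = summarizeTextResults([
  { module: "face", box: [10, 10, 90, 90] },
  { module: "lip", box: [40, 60, 60, 70] },
]);
// pageText contains one line of coordinate text per subtask result
```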
Optionally, when the subtask processing results include at least two pieces of text information among the face detection frame coordinates, the hair detection frame coordinates, the lip detection frame coordinates, and the nail detection frame coordinates, the main thread may further draw, on the image data in the task to be processed, a corresponding image frame at the coordinate position represented by the text information of each subtask processing result, and add the drawn image data to the front-end page to obtain the task processing result.
In a possible implementation manner, the subtask processing result may also be image information, and the image information may be image data in which at least one region is labeled. In response to the data to be processed being image data and the processing result of each subtask being image information labeling at least one region in the image data, the main thread summarizes the pieces of image information, superimposes them, and adds the superimposed image information to the front-end page to obtain the task processing result of the task to be processed. That is, when the subtask processing result obtained by each processing module is image information including a plurality of image frames used for labeling at least one region in the image data, the task processing result is determined by superimposing the image frames across the pieces of image information. Optionally, the image frames may be superimposed by merging partially overlapping image frames in the subtask processing results, so as to obtain a minimum task image frame that contains each of the superimposed image frames; each task image frame thus obtained is then displayed as the task processing result. Alternatively, the image frames may be directly superimposed to obtain and display the task processing result.
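The "minimum task image frame" described above can be sketched as a bounding-box union: merging the detection frames from the subtask results into the smallest rectangle that contains them all. The `[left, top, right, bottom]` representation is an assumption for illustration.

```javascript
// Sketch: compute the minimum task image frame, i.e. the smallest rectangle
// containing every partially overlapping detection frame from the subtask
// results. Frames are assumed to be [left, top, right, bottom] arrays.
function minimumEnclosingFrame(frames) {
  return frames.reduce((acc, [l, t, r, b]) => [
    Math.min(acc[0], l), // leftmost edge
    Math.min(acc[1], t), // topmost edge
    Math.max(acc[2], r), // rightmost edge
    Math.max(acc[3], b), // bottommost edge
  ]);
}

// e.g. a face frame and a hair frame merged into one object frame:
const objectFrame = minimumEnclosingFrame([[10, 10, 50, 50], [30, 20, 80, 60]]);
// objectFrame is [10, 10, 80, 60]
```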
Optionally, the subtask processing results may include at least two pieces of labeled image information among image data having a face detection frame, image data having a hair detection frame, image data having a lip detection frame, and image data having a nail detection frame. The task processing result may then be a front-end page with superimposed image information, that is, image data containing at least two detection frames among the face detection frame, the hair detection frame, the lip detection frame, and the nail detection frame, or image data containing at least one object detection frame obtained from at least two of those detection frames.
Taking the task to be processed in the embodiment of the present disclosure as an object recognition task as an example, each processing module is respectively used for recognizing one feature position in the image data of the task to be processed. The processing modules include at least two of a face recognition module, a mouth recognition module, a hair recognition module, and a nail recognition module, which respectively obtain face image information, mouth image information, hair image information, and nail image information after processing and recognizing the image data in the task to be processed. The face image information includes image data with at least one face detection frame, the mouth image information includes image data with at least one lip detection frame, the hair image information includes image data with at least one hair detection frame, and the nail image information includes image data with at least one nail detection frame. The face detection frame, the lip detection frame, the hair detection frame, and the nail detection frame in the pieces of image information can be directly superimposed through the main thread to obtain, as the task processing result, a front-end page including the image data with the detection frames. Optionally, at least one object detection frame obtained by superimposing at least two detection frames among the face detection frame, the hair detection frame, the lip detection frame, and the nail detection frame may be further added to the image data by the main thread and added to the front-end page as the task processing result.
FIG. 4 shows a schematic diagram of a task processing result according to an embodiment of the present disclosure. As shown in fig. 4, when the task to be processed is a face recognition task, each subtask to be processed is processed by its processing module to obtain a plurality of pieces of image information, each of which includes a region image frame at a feature position of the face. The browser superimposes the at least partially overlapping image frames in the pieces of image information through the main thread to obtain at least one face image frame 41, and determines a task processing result 40 including the at least one face image frame 41. The task processing result 40 is displayed through the task processing page of the browser.
FIG. 5 is a diagram illustrating determination of a task processing result according to an embodiment of the present disclosure. As shown in fig. 5, after determining a task to be processed 50 through a task processing page of a browser, the embodiment of the present disclosure divides, through the main thread, the task to be processed 50 according to the functions of the processing modules 51 of the task processing page to obtain at least two subtasks to be processed 52. Meanwhile, a worker thread is allocated to each subtask to be processed 52, and the processing module 53 corresponding to each subtask is called through its worker thread, so that the subtask is processed to obtain a subtask processing result 54. The subtask processing results 54 are sent to the main thread through postMessage. The browser summarizes the subtask processing results 54 through the main thread to obtain a task processing result 55. Optionally, the browser may also display the task processing result 55 through the task processing page.
The following describes an example in which an embodiment of the present disclosure is used to perform object recognition. The processing modules included in the browser are a face detection module, a hair detection module, a lip detection module, and a nail detection module. After the task to be processed is received, the main thread divides it, according to the functions of the processing modules, into a face detection task, a hair detection task, a lip detection task, and a nail detection task. Furthermore, 4 worker threads are created through the main thread, and each worker thread calls one of the processing modules to perform subtask processing. After the currently called processing module finishes its subtask processing, the obtained subtask processing result, which is one of a face detection result, a hair detection result, a lip detection result, and a nail detection result, is sent to the main thread via postMessage. After summarizing the subtask processing results sent by the worker threads, the main thread processes each subtask processing result and writes it into the front-end page code by drawing, obtaining the front-end page as the task processing result of the object recognition task. Optionally, when the current task needs to perform object recognition on multiple frames of images, the main thread processes each frame in sequence, performing object recognition on the next frame, in a polling manner, only after the task processing result of the current frame has been obtained.
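The per-frame polling at the end of this example can be sketched as a strictly sequential asynchronous loop, where the next frame is submitted only after the current frame's task processing result has been gathered; `recognizeFrame` is a hypothetical stand-in for the divide/dispatch/summarize pipeline.

```javascript
// Sketch of per-frame polling: frames are handled strictly one at a time,
// never concurrently, matching the main thread's sequential processing.
async function processFrames(frames, recognizeFrame) {
  const results = [];
  for (const frame of frames) {
    // Await the current frame's full result before polling the next frame.
    results.push(await recognizeFrame(frame));
  }
  return results;
}
```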
According to the embodiments of the present disclosure, programs for processing different tasks can be compiled through WebAssembly into the processing modules of the task processing page, and the processing modules can be called directly by the browser to process the tasks. Furthermore, after the browser determines the task to be processed, the task can be divided through the main thread, the divided subtasks can be processed in parallel through a plurality of worker threads according to the processing modules, and the main thread can summarize the processing results of the processing modules to obtain the task processing result, thereby improving task processing speed and efficiency.
It can be understood that the above method embodiments of the present disclosure can be combined with each other to form combined embodiments without departing from principle and logic; due to space limitations, such combinations are not described in detail in this disclosure. Those skilled in the art will appreciate that, in the above methods of the specific embodiments, the specific order of execution of the steps should be determined by their functions and possible inherent logic.
In addition, the present disclosure also provides a task processing apparatus, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any task processing method provided by the present disclosure; for the corresponding technical solutions and descriptions, refer to the corresponding descriptions in the method section, which are not repeated here.
Fig. 6 shows a schematic diagram of a task processing device according to an embodiment of the present disclosure, as shown in fig. 6, the device including:
a task determining module 60, configured to obtain a to-be-processed task, where the to-be-processed task includes to-be-processed data;
a task dividing module 61, configured to divide the to-be-processed task according to the functions of at least two processing modules through a main thread to obtain at least two to-be-processed subtasks, where each to-be-processed subtask corresponds to one processing module and includes at least part of the to-be-processed data, and each processing module is configured to process different to-be-processed subtasks;
the task processing module 62 is configured to respectively call processing module interfaces through a plurality of sub-threads, and concurrently process the corresponding to-be-processed sub-tasks through each of the processing modules to obtain sub-task processing results;
and a result determining module 63, configured to determine a task processing result of the to-be-processed task based on each sub-task processing result.
In one possible implementation, the task determination module includes:
the page display submodule is used for displaying a task processing page with at least two processing modules;
and the task acquisition submodule is used for acquiring the task to be processed through the task processing interface.
In one possible implementation, each of the sub-threads sends the sub-task processing result to the main thread through an asynchronous message passing mechanism.
In one possible implementation, the processing module is stored as a binary format file, and the binary format file is obtained by compiling non-browser source code according to a binary code compilation specification.
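To illustrate a processing module stored in binary format, the following hand-encoded WebAssembly module exports a single `add` function and is loaded through the standard WebAssembly JavaScript API, which works the same way in a browser page as in Node.js. The module itself is a textbook minimal example, not a module from the disclosure.

```javascript
// A minimal WebAssembly module, hand-encoded: exports add(a, b) -> a + b.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic "\0asm" + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // one function of that type
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // body: local.get 0/1, i32.add
]);

// The same API a browser page would use to call a compiled processing module.
const wasmModule = new WebAssembly.Module(wasmBytes);
const instance = new WebAssembly.Instance(wasmModule, {});
const sum = instance.exports.add(2, 3); // → 5
```

In a real deployment the bytes would come from a `.wasm` file fetched by the page, and `WebAssembly.instantiateStreaming` would typically replace the synchronous constructors shown here.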
In a possible implementation manner, the processing module is a deep learning model obtained by training in advance.
In a possible implementation manner, the process of processing the corresponding to-be-processed sub-task by the processing module includes:
and inputting the image data in the subtask to be processed into the deep learning model, and outputting at least one of a face detection frame, a hair detection frame, a lip detection frame and a nail detection frame of the image data as a subtask processing result.
In one possible implementation manner, the task processing module includes:
the thread creating submodule is used for creating a plurality of worker threads through the main thread;
and the task processing submodule is used for respectively calling a processing module interface through each worker thread, and processing the corresponding to-be-processed subtasks through each processing module in parallel to obtain a subtask processing result.
In one possible implementation, the result determination module includes:
and the result determining submodule is used for summarizing the processing results of the subtasks through the main thread and adding the processing results of the subtasks into a front-end page to obtain task processing results.
In a possible implementation manner, the subtask processing result includes at least two pieces of text information among face detection frame coordinates, hair detection frame coordinates, lip detection frame coordinates, and nail detection frame coordinates;
and the task processing result is text information comprising the processing results of the subtasks.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a volatile or non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
The disclosed embodiments also provide a computer program product comprising computer readable code or a non-transitory computer readable storage medium carrying computer readable code, which when run in a processor of an electronic device, the processor in the electronic device performs the above method.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 7 shows a schematic diagram of an electronic device 800 according to an embodiment of the disclosure. For example, the electronic device 800 may be a terminal such as a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, or a personal digital assistant.
Referring to fig. 7, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of the electronic device 800. For example, the sensor assembly 814 may detect the open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor assembly 814 may also detect a change in position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a complementary metal-oxide-semiconductor (CMOS) or charge-coupled device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as a wireless network (WiFi), a second generation mobile communication technology (2G) or a third generation mobile communication technology (3G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
The disclosure relates to the field of augmented reality, and aims to detect or identify relevant features, states and attributes of a target object by means of various visual correlation algorithms by acquiring image information of the target object in a real environment, so as to obtain an AR effect combining virtual and reality matched with specific applications. For example, the target object may relate to a face, a limb, a gesture, an action, etc. associated with a human body, or a marker, a marker associated with an object, or a sand table, a display area, a display item, etc. associated with a venue or a place. The vision-related algorithms may involve visual localization, SLAM, three-dimensional reconstruction, image registration, background segmentation, key point extraction and tracking of objects, pose or depth detection of objects, and the like. The specific application can not only relate to interactive scenes such as navigation, explanation, reconstruction, virtual effect superposition display and the like related to real scenes or articles, but also relate to special effect treatment related to people, such as interactive scenes such as makeup beautification, limb beautification, special effect display, virtual model display and the like. The detection or identification processing of the relevant characteristics, states and attributes of the target object can be realized through the convolutional neural network. The convolutional neural network is a network model obtained by performing model training based on a deep learning framework.
Fig. 8 shows a schematic diagram of an electronic device 1900 according to an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to fig. 8, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server™), Apple's graphical-user-interface-based operating system (Mac OS X™), the multi-user multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), may execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, thereby implementing aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a software development kit (SDK).
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (14)

1. A method for processing a task, the method comprising:
acquiring a task to be processed, wherein the task to be processed comprises data to be processed;
dividing the task to be processed according to the functions of at least two processing modules through a main thread to obtain at least two to-be-processed subtasks, wherein each to-be-processed subtask corresponds to one processing module and comprises at least part of the data to be processed, and each processing module is used for processing a different to-be-processed subtask;
calling processing module interfaces through a plurality of sub-threads respectively, and processing the corresponding to-be-processed subtasks through each processing module in parallel to obtain subtask processing results;
and determining a task processing result of the task to be processed based on each subtask processing result.
2. The method of claim 1, wherein the acquiring the task to be processed comprises:
displaying a task processing page having at least two processing modules;
and acquiring the task to be processed through the task processing interface.
3. The method of claim 1, wherein each of the child threads sends the subtask processing results to the main thread via an asynchronous message passing mechanism.
4. The method according to any one of claims 1 to 3, wherein the processing module is stored as a binary-format file, the binary-format file being obtained by compiling non-browser source code according to a binary code compilation specification.
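In practice, a binary-format module of the kind claim 4 describes is typically a WebAssembly module compiled from non-browser source code such as C/C++ (a reading consistent with the WebAssembly citation CN111881401A). As a minimal, testable illustration, the sketch below instantiates a hand-assembled WebAssembly binary exporting an `add` function; the bytes and the function are illustrative only, not one of the patent's processing modules, which would be produced by a toolchain such as Emscripten.

```javascript
// Illustrative only: a minimal hand-assembled WebAssembly binary that
// exports add(a, b) for two i32 values.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm", version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32,i32)->i32
  0x03, 0x02, 0x01, 0x00,                               // function section
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section header
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1, i32.add, end
]);

// The same WebAssembly JS API is available in browsers and in Node.js.
WebAssembly.instantiate(bytes).then(({ instance }) => {
  console.log(instance.exports.add(2, 3)); // 5
});
```

A processing-module interface as in claim 1 could then simply wrap `instance.exports` behind a function that a sub-thread calls.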
5. The method according to any one of claims 1 to 4, wherein the processing module is a pre-trained deep learning model.
6. The method of claim 5, wherein the deep learning models are used to process at least two of the following types of to-be-processed subtasks:
a face detection task, a hair detection task, a lip segmentation task, and a nail detection task.
7. The method according to claim 6, wherein the processing module processes the corresponding to-be-processed subtasks by:
and inputting the image data in the subtask to be processed into the deep learning model, and outputting at least one of a face detection frame, a hair detection frame, a lip detection frame and a nail detection frame of the image data as a subtask processing result.
8. The method according to any one of claims 1 to 7, wherein the calling processing module interfaces through the plurality of sub-threads respectively, and processing the corresponding to-be-processed subtasks through each processing module in parallel to obtain the subtask processing results comprises:
creating a plurality of worker threads through the main thread;
and calling a processing module interface through each worker thread respectively, and processing the corresponding to-be-processed subtasks through each processing module in parallel to obtain a subtask processing result.
9. The method according to any one of claims 1 to 8, wherein the determining a task processing result of the task to be processed based on each of the sub-task processing results comprises:
and summarizing the processing results of all the subtasks through the main thread, and adding the processing results of all the subtasks into a front-end page to obtain task processing results.
10. The method according to any one of claims 1 to 9, wherein the subtask processing result includes at least two items of text information among face detection frame coordinates, hair detection frame coordinates, lip detection frame coordinates, and nail detection frame coordinates;
and the task processing result is text information comprising the processing results of the subtasks.
11. The method according to any one of claims 1 to 9, wherein the subtask processing result includes at least two items of marked image information among image data with a face detection frame, image data with a hair detection frame, image data with a lip detection frame, and image data with a nail detection frame;
the task processing result is a front-end page with superimposed image information, wherein the superimposed image information is image data carrying at least two of the face detection frame, the hair detection frame, the lip detection frame, and the nail detection frame, or image data carrying at least one object detection frame obtained by superimposing at least two of the face detection frame, the hair detection frame, the lip detection frame, and the nail detection frame.
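One plausible reading of "at least one object detection frame obtained by superimposing at least two detection frames" in claim 11 is a bounding-box union. The sketch below merges axis-aligned frames into a single enclosing frame; the `[x1, y1, x2, y2]` box format, the function name, and the sample coordinates are assumptions for illustration, not from the patent.

```javascript
// Merge several axis-aligned detection frames into one enclosing
// "object detection frame". Boxes are assumed to be [x1, y1, x2, y2].
function mergeFrames(frames) {
  return frames.reduce((a, b) => [
    Math.min(a[0], b[0]), Math.min(a[1], b[1]), // top-left: component-wise min
    Math.max(a[2], b[2]), Math.max(a[3], b[3]), // bottom-right: component-wise max
  ]);
}

const faceFrame = [10, 10, 50, 60]; // hypothetical face detection frame
const hairFrame = [5, 0, 55, 25];   // hypothetical hair detection frame
console.log(mergeFrames([faceFrame, hairFrame])); // [ 5, 0, 55, 60 ]
```

The merged frame could then be drawn over the image on the front-end page as the superimposed image information.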
12. A task processing apparatus, characterized in that the apparatus comprises:
the task determining module is used for acquiring a task to be processed, wherein the task to be processed comprises data to be processed;
the task segmentation module is used for segmenting the to-be-processed tasks according to the functions of at least two processing modules through a main thread to obtain at least two to-be-processed subtasks, each to-be-processed subtask corresponds to one processing module and comprises at least part of to-be-processed data, and each processing module is used for processing different to-be-processed subtasks;
the task processing module is used for respectively calling the processing module interfaces through a plurality of sub threads, processing the corresponding to-be-processed subtasks through each processing module in parallel and obtaining a subtask processing result;
and the result determining module is used for determining the task processing result of the task to be processed based on each subtask processing result.
13. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to perform the method of any of claims 1 to 11.
14. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 11.
CN202111137684.6A 2021-09-27 2021-09-27 Task processing method and device, electronic equipment and storage medium Pending CN113806054A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202111137684.6A CN113806054A (en) 2021-09-27 2021-09-27 Task processing method and device, electronic equipment and storage medium
PCT/CN2022/075002 WO2023045207A1 (en) 2021-09-27 2022-01-29 Task processing method and apparatus, electronic device, storage medium, and computer program
TW111117088A TW202314496A (en) 2021-09-27 2022-05-06 Task processing method, electrionic equipment and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN113806054A true CN113806054A (en) 2021-12-17

Family

ID=78938604

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111137684.6A Pending CN113806054A (en) 2021-09-27 2021-09-27 Task processing method and device, electronic equipment and storage medium

Country Status (3)

Country Link
CN (1) CN113806054A (en)
TW (1) TW202314496A (en)
WO (1) WO2023045207A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114546624A (en) * 2022-03-01 2022-05-27 清华大学 Task processing method and device, electronic equipment and storage medium
CN115658325A (en) * 2022-11-18 2023-01-31 北京市大数据中心 Data processing method, data processing device, multi-core processor, electronic device, and medium
WO2023045207A1 (en) * 2021-09-27 2023-03-30 上海商汤智能科技有限公司 Task processing method and apparatus, electronic device, storage medium, and computer program
CN116739090A (en) * 2023-05-12 2023-09-12 北京大学 Deep neural network reasoning measurement method and device based on Web browser

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN116126547B (en) * 2023-04-18 2023-06-27 邦邦汽车销售服务(北京)有限公司 Vehicle data processing method and system

Citations (8)

Publication number Priority date Publication date Assignee Title
US20200082624A1 (en) * 2018-09-06 2020-03-12 8th Wall Inc. Providing augmented reality in a web browser
CN111860253A (en) * 2020-07-10 2020-10-30 东莞正扬电子机械有限公司 Multitask attribute identification method, multitask attribute identification device, multitask attribute identification medium and multitask attribute identification equipment for driving scene
CN111881401A (en) * 2020-08-04 2020-11-03 浪潮云信息技术股份公司 Browser deep learning method and system based on WebAssembly
CN112257135A (en) * 2020-10-30 2021-01-22 久瓴(上海)智能科技有限公司 Model loading method and device based on multithreading, storage medium and terminal
CN112437343A (en) * 2020-05-15 2021-03-02 上海哔哩哔哩科技有限公司 Browser-based cover generation method and system
CN112650502A (en) * 2020-12-31 2021-04-13 广州方硅信息技术有限公司 Batch processing task processing method and device, computer equipment and storage medium
CN112905347A (en) * 2021-03-04 2021-06-04 北京澎思科技有限公司 Data processing method, device and storage medium
CN113221771A (en) * 2021-05-18 2021-08-06 北京百度网讯科技有限公司 Living body face recognition method, living body face recognition device, living body face recognition equipment, storage medium and program product

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
US11593642B2 (en) * 2019-09-30 2023-02-28 International Business Machines Corporation Combined data pre-process and architecture search for deep learning models
CN113298692B (en) * 2021-05-21 2024-04-16 北京索为云网科技有限公司 Augmented reality method for realizing real-time equipment pose calculation based on mobile terminal browser
CN113034659A (en) * 2021-05-24 2021-06-25 成都天锐星通科技有限公司 Three-dimensional rendering data processing method and device, electronic equipment and readable storage medium
CN113806054A (en) * 2021-09-27 2021-12-17 北京市商汤科技开发有限公司 Task processing method and device, electronic equipment and storage medium

Non-Patent Citations (1)

Title
ZHUANG Xia: "Research on Augmented Reality Development Technology in the Web Front End" ("Web前端中的增强现实开发技术研究"), Information & Computer (Theory Edition) (信息与电脑(理论版)), no. 09 *

Cited By (7)

Publication number Priority date Publication date Assignee Title
WO2023045207A1 (en) * 2021-09-27 2023-03-30 上海商汤智能科技有限公司 Task processing method and apparatus, electronic device, storage medium, and computer program
CN114546624A (en) * 2022-03-01 2022-05-27 清华大学 Task processing method and device, electronic equipment and storage medium
CN114546624B (en) * 2022-03-01 2024-04-09 清华大学 Task processing method and device, electronic equipment and storage medium
CN115658325A (en) * 2022-11-18 2023-01-31 北京市大数据中心 Data processing method, data processing device, multi-core processor, electronic device, and medium
CN115658325B (en) * 2022-11-18 2024-01-23 北京市大数据中心 Data processing method, device, multi-core processor, electronic equipment and medium
CN116739090A (en) * 2023-05-12 2023-09-12 北京大学 Deep neural network reasoning measurement method and device based on Web browser
CN116739090B (en) * 2023-05-12 2023-11-28 北京大学 Deep neural network reasoning measurement method and device based on Web browser

Also Published As

Publication number Publication date
TW202314496A (en) 2023-04-01
WO2023045207A1 (en) 2023-03-30

Similar Documents

Publication Publication Date Title
CN106385591B (en) Video processing method and video processing device
CN113806054A (en) Task processing method and device, electronic equipment and storage medium
CN113486765B (en) Gesture interaction method and device, electronic equipment and storage medium
US20200007944A1 (en) Method and apparatus for displaying interactive attributes during multimedia playback
CN112991553B (en) Information display method and device, electronic equipment and storage medium
CN110989901B (en) Interactive display method and device for image positioning, electronic equipment and storage medium
CN111626183A (en) Target object display method and device, electronic equipment and storage medium
CN113065591A (en) Target detection method and device, electronic equipment and storage medium
CN110929616B (en) Human hand identification method and device, electronic equipment and storage medium
CN114581525A (en) Attitude determination method and apparatus, electronic device, and storage medium
CN113989469A (en) AR (augmented reality) scenery spot display method and device, electronic equipment and storage medium
JP2023510443A (en) Labeling method and device, electronic device and storage medium
WO2023051356A1 (en) Virtual object display method and apparatus, and electronic device and storage medium
CN110837766B (en) Gesture recognition method, gesture processing method and device
CN111274489A (en) Information processing method, device, equipment and storage medium
CN114266305A (en) Object identification method and device, electronic equipment and storage medium
CN112906467A (en) Group photo image generation method and device, electronic device and storage medium
CN114387622A (en) Animal weight recognition method and device, electronic equipment and storage medium
CN114549797A (en) Painting exhibition method, device, electronic equipment, storage medium and program product
CN114463212A (en) Image processing method and device, electronic equipment and storage medium
CN110263743B (en) Method and device for recognizing images
CN113377478B (en) Entertainment industry data labeling method, device, storage medium and equipment
CN114782656A (en) Virtual object display method and device, electronic equipment and storage medium
CN114638949A (en) Virtual object display method and device, electronic equipment and storage medium
CN114924828A (en) AR image display method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40059712

Country of ref document: HK