CN112163468A - Image processing method and device based on multiple threads - Google Patents

Image processing method and device based on multiple threads

Info

Publication number
CN112163468A
CN112163468A (application CN202010955146.7A)
Authority
CN
China
Prior art keywords
image data
processed
target
tasks
target task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010955146.7A
Other languages
Chinese (zh)
Inventor
丁勇
李合青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202010955146.7A priority Critical patent/CN112163468A/en
Publication of CN112163468A publication Critical patent/CN112163468A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/94Hardware or software architectures specially adapted for image or video understanding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Abstract

Embodiments of the invention provide a multithreading-based image processing method and apparatus, a storage medium, and an electronic device. The method includes the following steps: acquiring target image data to be processed; generating a target task from the target image data and adding the target task to the tail of a target task queue in a neural network model; and invoking a plurality of threads in a thread pool to take a plurality of tasks out of the head of the target task queue in sequence and perform, in the neural network model, the same image processing operation on the to-be-processed image data in each task. This solves the technical problems in the related art of low operating efficiency and high operating cost of image processing algorithms, and achieves the technical effect of substantially improving the running speed of the neural network algorithm while guaranteeing the accuracy of the algorithm.

Description

Image processing method and device based on multiple threads
Technical Field
Embodiments of the invention relate to the field of communications, and in particular to a multithreading-based image processing method and apparatus, a storage medium, and an electronic device.
Background
With the rapid development of information technology, video capture, data storage, and image processing technologies have improved continuously. In the related art, the content of video captured by a camera is monitored, and useful information is inspected and screened manually. This approach is very inefficient: operators tire easily and may miss useful information. Making surveillance cameras intelligent is therefore very important.
Computer vision algorithms, such as deep neural networks, are typically deployed on surveillance hardware to detect, recognize, or track people and objects in real life. One approach is to design a neural network with better generalization performance; although this improves the algorithm's test performance, it multiplies the computational load, so the deployed model places higher resource demands on the hardware platform and greatly increases cost. The other approach is to compress the neural network model; this accelerates the algorithm on the hardware platform, but at the cost of a significant loss of precision.
Therefore, given the limited resources of hardware platforms, the related art suffers from the technical problem that algorithms that process images through a neural network model run too inefficiently.
Disclosure of Invention
Embodiments of the invention provide a multithreading-based image processing method and apparatus, a storage medium, and an electronic device, to at least solve the technical problem in the related art that algorithms that process images through a neural network model run too inefficiently.
According to an embodiment of the present invention, a multithreading-based image processing method is provided, including: acquiring target image data to be processed; generating a target task from the target image data and adding the target task to the tail of a target task queue in a target neural network model, wherein the target task includes the target image data, the target neural network model is configured to process the image data to be processed, the target task queue records tasks to be processed, each task to be processed includes its corresponding image data to be processed, and tasks in the target task queue are set to be taken out in sequence from the head of the queue; and invoking a plurality of threads in a thread pool to take a plurality of tasks out of the head of the target task queue in sequence and perform, in the neural network model, the same image processing operation on the to-be-processed image data included in the plurality of tasks, wherein each of the plurality of threads takes out one of the tasks and performs the image processing operation on the to-be-processed image data included in that task.
According to another embodiment of the present invention, a multithreading-based image processing apparatus is provided, including: an acquisition module, configured to acquire target image data to be processed; an adding module, configured to generate a target task from the target image data and add the target task to the tail of a target task queue in a target neural network model, wherein the target task includes the target image data, the target neural network model is configured to process the image data to be processed, the target task queue records tasks to be processed, each task to be processed includes its corresponding image data to be processed, and tasks in the target task queue are set to be taken out in sequence from the head of the queue; and a calling module, configured to invoke a plurality of threads in a thread pool to take a plurality of tasks out of the head of the target task queue in sequence and perform, in the neural network model, the same image processing operation on the to-be-processed image data included in the plurality of tasks, wherein each of the plurality of threads takes out one of the tasks and performs the image processing operation on the to-be-processed image data included in that task.
According to a further embodiment of the present invention, there is also provided a computer-readable storage medium having a computer program stored thereon, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
According to yet another embodiment of the present invention, there is also provided an electronic device, including a memory in which a computer program is stored and a processor configured to execute the computer program to perform the steps in any of the above method embodiments.
With this method, the acquired target image data to be processed is formed into a target task, the target task is added to the tail of the target task queue in the neural network model, and a plurality of threads in the thread pool are invoked to take a plurality of tasks out of the head of the queue in sequence and perform the same image processing operation on the corresponding image data to be processed. This replaces the related-art schemes of designing a neural network with better generalization performance or compressing the neural network model. It can solve the technical problems of low operating efficiency and high operating cost of image processing algorithms in the related art, improves the running speed of the neural network algorithm while guaranteeing the accuracy of the algorithm, maintains the generalization performance and robustness of the neural network model, and achieves the technical effect of accelerating the algorithm on the hardware platform.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a block diagram of a hardware configuration of a mobile terminal of an image processing method based on multithreading according to an embodiment of the present invention;
FIG. 2 is a flow diagram of a multithreading-based image processing method according to an embodiment of the invention;
FIG. 3 is a flow diagram of another multithreading-based image processing method according to an embodiment of the invention;
fig. 4 is a schematic structural diagram of an image processing apparatus based on multithreading according to an embodiment of the present invention.
Detailed Description
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings in conjunction with the embodiments.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
The method embodiments provided in the embodiments of the present application may be executed in a mobile terminal, a computer terminal, or a similar computing device. Taking the mobile terminal as an example, fig. 1 is a block diagram of a hardware structure of the mobile terminal of an image processing method based on multithreading according to an embodiment of the present invention. As shown in fig. 1, the mobile terminal may include one or more (only one shown in fig. 1) processors 102 (the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), and a memory 104 for storing data, wherein the mobile terminal may further include a transmission device 106 for communication functions and an input-output device 108. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration, and does not limit the structure of the mobile terminal. For example, the mobile terminal may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
The memory 104 may be used for storing computer programs, for example, software programs and modules of application software, such as a computer program corresponding to the multithread-based image processing method in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer programs stored in the memory 104, so as to implement the method. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a Network adapter (NIC), which can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
In this embodiment, a multithreading-based image processing method running on a mobile terminal, a computer terminal, or a similar computing device is provided. Fig. 2 is a flow diagram of a multithreading-based image processing method according to an embodiment of the present invention. As shown in fig. 2, the flow includes the following steps:
s202, acquiring target image data to be processed;
s204, generating target image data into target tasks, and adding the target tasks to the tail of a target task queue in a neural network model, wherein the target tasks comprise the target image data, the target neural network model is used for processing the image data to be processed, the target task queue is used for recording the tasks to be processed, each task to be processed comprises corresponding image data to be processed, and the tasks in the target task queue are set to be taken out from the head of the target task queue in sequence;
and S206, calling a plurality of threads in the thread pool to take out a plurality of tasks from the head of the target task queue in sequence, and executing the same image processing operation on corresponding to-be-processed image data included in the plurality of tasks in the neural network model, wherein each thread in the plurality of threads is used for taking out one task in the plurality of tasks and executing the image processing operation on the corresponding to-be-processed image data included in the taken out one task.
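Steps S202 to S206 can be sketched with Python's standard library. This is a minimal illustration only, not the patent's implementation; `process_image` is a hypothetical stand-in for the neural-network operation:

```python
import queue
import threading

task_queue = queue.Queue()  # FIFO: put() appends at the tail, get() removes from the head

def process_image(image_data):
    # Hypothetical stand-in for the neural-network image processing operation
    return len(image_data)

results = []
results_lock = threading.Lock()

def worker():
    # Each thread takes one task at a time and runs the same operation on it
    while True:
        task = task_queue.get()
        if task is None:        # sentinel: queue drained, thread exits
            break
        out = process_image(task)
        with results_lock:
            results.append(out)

# S202/S204: generate one task per image and add it at the tail of the queue
for img in [b"img-a", b"img-bb", b"img-ccc"]:
    task_queue.put(img)

# S206: call several threads in a pool to take tasks out from the head in sequence
threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for _ in threads:
    task_queue.put(None)        # one sentinel per worker thread
for t in threads:
    t.join()

print(sorted(results))          # lengths of the three images' payloads
```

Note that `queue.Queue` already provides the first-in first-out ordering and the internal locking that the method relies on.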
Optionally, the target neural network model may be, but is not limited to, a deep neural network applied in the image field, for example in image recognition, detection, and tracking. The model may also be, but is not limited to, obtained by designing different network structures in advance, making training samples by manually labeling image data, and training the network on a computer with the training data, using an open framework or a custom training framework, while continually adjusting the weight parameters of each layer so that the model's final loss becomes smaller and smaller. Specifically, for example: 1) a stage of designing the deep neural network model and training the network model; 2) training on labeled samples, continually adjusting the weight parameter values of each layer so that the difference between the value predicted by the network and the true value becomes smaller and smaller; 3) a stage of testing the network model, in which the trained model is applied to test data after a preprocessing operation that generally includes color conversion and scaling, followed by forward inference on the processed data; 4) finally, a post-processing operation on the inferred data to obtain results such as detection or recognition of targets in the image.
For example, the neural network model may include, but is not limited to, a deep neural network model, a convolutional neural network model, a residual neural network model, and the like, and may further include, but is not limited to, a supervised learning based neural network model and an unsupervised neural network model.
The above is merely an example, and the present embodiment does not limit this.
Optionally, in this embodiment, the target image data to be processed may include, but is not limited to, preprocessed data: for example, each image is taken out of the image test set and decoded, with the original JPG image data decoded into YUV image data for subsequent use. The above is only an example; the specific image data type is not limited in this embodiment.
Optionally, in this embodiment, the target task queue may include, but is not limited to, a task queue that threads can call and process accordingly. The target task may include, but is not limited to, a task generated from the target image data to be processed, with the image data of each image corresponding to one target task.
Optionally, in this embodiment, generating the target image data into the target task and adding the target task to the tail of the target task queue may follow, but is not limited to, the first-in first-out principle: tasks are added at the tail of the queue and taken out from the head of the queue.
Optionally, in this embodiment, the thread pool may include, but is not limited to, a set of threads that perform the same image processing operation. The entity that calls the thread pool to perform the image processing operation may include, but is not limited to, a manager thread, which controls and calls the threads in the thread pool to complete the image processing operation.
Through these steps, the acquired target image data to be processed is formed into a target task, the target task is added to the tail of the target task queue in the neural network model, and a plurality of threads in the thread pool are invoked to take a plurality of tasks out of the head of the queue in sequence and perform, in the neural network model, the same image processing operation on the corresponding image data to be processed. This replaces the related-art schemes of designing a neural network with better generalization performance or compressing the neural network model. It can solve the technical problems of low operating efficiency and high operating cost of image processing algorithms in the related art, improves the running speed of the neural network algorithm while guaranteeing the accuracy of the algorithm, maintains the generalization performance and robustness of the neural network model, and achieves the technical effect of accelerating the algorithm on the hardware platform.
In an optional embodiment, the method further comprises: acquiring the task number of the tasks to be processed recorded in the target task queue; and adjusting the number of threads in the thread pool according to the number of the tasks, wherein the threads in the thread pool are used for processing the tasks in the target task queue.
Optionally, in this embodiment, the number of threads processing the to-be-processed tasks recorded in the target task queue may be adjusted dynamically by detecting the number of tasks in the target task queue; the entity performing the dynamic adjustment may include, but is not limited to, a manager thread.
Optionally, in this embodiment, the detection process may be performed according to a preset period, or may be performed in real time, or may be performed by setting a threshold as a trigger line.
In an optional embodiment, the adjusting the number of threads in the thread pool according to the number of tasks includes:
increasing the number of threads in the thread pool if the number of tasks is greater than a first threshold; and/or
And reducing the number of threads in the thread pool when the number of tasks is smaller than a second threshold value, wherein the second threshold value is smaller than the first threshold value.
Optionally, in this embodiment, the first threshold may be the same as or different from the second threshold. When they are the same, a single number threshold is set and compared with the number of tasks: when the number of tasks is greater than the threshold, the number of threads to be called is increased, and when it is less than or equal to the threshold, the number of threads to be called is decreased.
Optionally, in this embodiment, the increasing or decreasing may be dynamic: the number of tasks may be detected periodically and compared with the number threshold to decide how the number of threads to be called changes. Increasing the threads may include, but is not limited to, calling a new thread to process tasks, or increasing the computational resources each thread occupies at run time; decreasing the threads may include, but is not limited to, deleting an idle thread from the thread pool, or reducing the computational resources occupied by the threads performing computation in the pool.
Through this embodiment, increasing the number of threads to be called can accelerate the processing speed of the algorithm model, achieving the technical effect of improving processing efficiency, while decreasing the number of threads to be called can achieve the technical effect of reducing overhead.
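The grow/shrink policy described above can be sketched as a pure function of the queue length. The thresholds and bounds below are illustrative placeholders, not values from the patent:

```python
def target_thread_count(task_count, current,
                        first_threshold=8, second_threshold=2,
                        min_threads=1, max_threads=8):
    """Return the new pool size for a given queue length.

    The thresholds and bounds are illustrative placeholders, not
    values taken from the patent.
    """
    if task_count > first_threshold:
        return min(current + 1, max_threads)   # backlog: grow the pool
    if task_count < second_threshold:
        return max(current - 1, min_threads)   # near-empty queue: shrink it
    return current                             # otherwise leave the pool alone

print(target_thread_count(10, 4))  # backlog -> grow to 5
print(target_thread_count(0, 4))   # idle -> shrink to 3
print(target_thread_count(5, 4))   # in between -> stay at 4
```

A manager thread could call such a function on a timer and create or retire worker threads to match the returned size.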
In an optional embodiment, the invoking of the multiple threads in the thread pool sequentially fetches multiple tasks from the head of the target task queue includes: and calling the plurality of threads by adopting a thread lock to respectively take out the plurality of tasks from the head of the target task queue in a mutually exclusive mode.
Optionally, in this embodiment, a competitive relationship may exist among the multiple threads in the thread pool when taking out tasks. To allow the threads to access the data in the task queue safely, a thread lock may be, but is not limited to being, used to control how each thread acquires a task from the task queue, that is, the tasks are accessed in a mutually exclusive manner.
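A minimal sketch of this mutually exclusive access to the head of the queue, using a `threading.Lock` (the names here are illustrative, not from the patent):

```python
import threading
from collections import deque

tasks = deque(range(100))   # the task queue; popleft() takes from the head
taken = []
lock = threading.Lock()     # the thread lock guarding the head of the queue

def take_tasks():
    while True:
        with lock:              # only one thread may pop at any moment
            if not tasks:
                return
            task = tasks.popleft()
            taken.append(task)

threads = [threading.Thread(target=take_tasks) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Mutual exclusion guarantees each task is taken exactly once
print(len(taken), len(set(taken)))
```

Without the lock, two threads could pop the same element or corrupt the list of results; with it, every task is taken exactly once.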
In an optional embodiment, performing the same image processing operation on the corresponding to-be-processed image data included in the plurality of tasks in the neural network model includes:
executing a first image processing operation on the corresponding image data to be processed included in the plurality of tasks to obtain first target image data, wherein the first image processing operation preprocesses the image data to be processed in at least one of the following ways: color space conversion of the image data to be processed, scaling of the image data to be processed, normalization of the image data to be processed, and mean subtraction on the image data to be processed.
Optionally, in this embodiment, the manner of performing the preprocessing may include, but is not limited to, performing color space conversion processing on the image data to be processed by using a preset algorithm, performing scaling processing on the image data to be processed by using a preset algorithm, performing normalization processing on the image data to be processed by using a preset algorithm, and performing mean value reduction processing on the image data to be processed by using a preset algorithm.
Optionally, in this embodiment, the preset algorithm may include, but is not limited to, an algorithm configured for the particular neural network model; in other words, different preset algorithms may be set for different neural network models. For example, for a 4K AI camera, the color space of the captured picture is YUV420SP and its resolution is 3840x2160, whereas the input resolution required by the neural network model is much smaller (e.g. 224x224) and its color space may differ from YUV420SP (e.g. RGB format). Color space conversion and scaling are therefore required to turn the image data to be processed into the first target image data, i.e. data in the data format corresponding to the preset algorithm.
Optionally, in this embodiment, the preset algorithm may include, but is not limited to, an image enhancement technique. Preprocessing the image data to be processed with image enhancement to obtain the first target image data allows the generated data to represent the overall or local characteristics of the image effectively. Purposefully enhancing these characteristics may include, but is not limited to, making an originally unclear image clear, emphasizing certain features of interest, enlarging the differences between different object features in the image, and suppressing features of no interest, thereby improving the interpretability and recognition of the image data and achieving the technical effect of increasing image recognition efficiency.
The above is merely an example, and the present embodiment does not make any specific limitation on the specific algorithm.
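The preprocessing steps named above (scaling, mean subtraction, normalization) can be sketched with NumPy. The 224x224 input size and 3840x2160 frame size follow the example in the text, while the mean values and the nearest-neighbor scaling method are placeholders:

```python
import numpy as np

def preprocess(image, out_size=(224, 224), mean=(104.0, 117.0, 123.0)):
    """Illustrative preprocessing: nearest-neighbor scaling, per-channel
    mean subtraction, then normalization. The mean values and scaling
    method are placeholders, not values from the patent."""
    h, w, _ = image.shape
    # Nearest-neighbor scaling down to the model's input resolution
    rows = np.arange(out_size[0]) * h // out_size[0]
    cols = np.arange(out_size[1]) * w // out_size[1]
    scaled = image[rows][:, cols].astype(np.float32)
    scaled -= np.array(mean, dtype=np.float32)   # mean subtraction
    return scaled / 128.0                        # rough normalization

# A 4K-sized frame (3840x2160) shrunk to the model's 224x224 input
frame = np.random.randint(0, 256, size=(2160, 3840, 3), dtype=np.uint8)
first_target = preprocess(frame)
print(first_target.shape)   # (224, 224, 3)
```

A production pipeline would typically use a hardware scaler or an interpolating resize rather than nearest-neighbor indexing, but the data flow is the same.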
In an optional embodiment, after performing the first image processing operation on the corresponding to-be-processed image data included in the plurality of tasks to obtain the first target image data, the method further includes:
performing a second image processing operation on the first target image data to obtain a set of image feature values, wherein the second image processing operation is used for performing forward inference processing on the first target image data, and the forward inference processing includes at least one of: convolution operation processing, pooling operation processing, processing for adjusting linear data into nonlinear data by adopting an activation function, and full-connection processing.
Optionally, in this embodiment, the manner of performing the forward inference process may include, but is not limited to, performing convolution operation on the first target image data by using a preset algorithm, performing pooling operation, adjusting linear data in the first target image data to nonlinear data by using an activation function, and finally performing operations such as full connection to increase the expression capability of the data.
The above is only an example, and a specific flow or manner of the forward inference process is provided, and this embodiment is not limited in any way.
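The forward-inference operations listed above (convolution, pooling, activation, full connection) can be sketched on a toy 2-D input with NumPy; the shapes and kernel values are illustrative only and do not come from the patent:

```python
import numpy as np

def relu(x):                        # activation: introduces non-linearity
    return np.maximum(x, 0.0)

def conv2d(x, k):                   # 'valid' 2-D convolution (cross-correlation form)
    kh, kw = k.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1), dtype=np.float32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool(x, size=2):            # non-overlapping max pooling
    h, w = x.shape
    trimmed = x[:h - h % size, :w - w % size]
    return trimmed.reshape(h // size, size, w // size, size).max(axis=(1, 3))

# Toy forward pass: convolution -> activation -> pooling -> full connection
x = np.arange(36, dtype=np.float32).reshape(6, 6)
k = np.ones((3, 3), dtype=np.float32) / 9.0      # 3x3 mean filter as the kernel
features = max_pool(relu(conv2d(x, k)))          # (6,6) -> (4,4) -> (2,2)
weights = np.ones((features.size, 2), dtype=np.float32)
logits = features.reshape(-1) @ weights          # fully connected layer
print(features.shape, logits.shape)
```

In a real deployment each of these stages would run as an optimized layer of the neural network framework; the sketch only shows the order and shape changes of the operations.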
In an optional embodiment, the acquiring target image data to be processed includes: acquiring first image data; and performing decoding operation on the first image data to obtain second image data, wherein the target image data comprises the second image data.
Optionally, in this embodiment, the first image data may include, but is not limited to, image data in JPG or JPEG format, and the second image data in YUV format is obtained by performing a decoding operation on the first image data.
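Actual JPEG decoding would be handled by a codec library; the color-space side of the JPG-to-YUV step can be illustrated with the BT.601 RGB-to-YUV matrix. The full-range coefficients below are an assumption for illustration, not necessarily the conversion the patent's decoder uses:

```python
import numpy as np

# BT.601 full-range RGB -> YUV coefficients (an illustrative choice; a
# real decoder's conversion may differ)
RGB_TO_YUV = np.array([[ 0.299,  0.587,  0.114],
                       [-0.169, -0.331,  0.500],
                       [ 0.500, -0.419, -0.081]], dtype=np.float32)

def rgb_to_yuv(rgb):
    """Convert an HxWx3 uint8 RGB image to float YUV."""
    yuv = rgb.astype(np.float32) @ RGB_TO_YUV.T
    yuv[..., 1:] += 128.0       # center the chroma channels on 128
    return yuv

pixel = np.array([[[255, 255, 255]]], dtype=np.uint8)   # pure white
yuv = rgb_to_yuv(pixel)
print(np.round(yuv[0, 0]))      # luma ~255, chroma ~128
```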
The invention will be further illustrated with reference to specific examples:
fig. 3 is a schematic flowchart of another image processing method based on multithreading according to an embodiment of the present invention, and as shown in fig. 3, a thread pool method is used to accelerate the running of the neural network model on the multi-core hardware platform, and the steps of the flow may be, but are not limited to, the following:
s302, acquiring a test image;
s304, decoding the test image to obtain target image data, wherein each image is taken out from the image test set, and the image data is decoded, for example: decoding the original JPG image data into YUV image data for subsequent use;
s306, adding the decoded data from the tail of the target task queue, wherein the target task queue meets the principle of first-in first-out, inserting the data from the tail of the queue when adding the data, and taking out the data from the head of the queue;
s308, after the target task is added into the target task queue in the thread pool, the number of tasks in the task queue is detected by the manager thread at regular time, if the data in the task queue is more, the number of threads is dynamically increased by the manager thread, the threads are mainly kept in a thread array, and then the threads sequentially take out the task from the task queue in a mutual exclusion manner and perform subsequent execution.
It should be noted that, since the threads compete with one another when taking data from the task queue, in order to ensure safe access to the data in the task queue across threads, the present invention employs a thread lock to control each thread's mutually exclusive access to the data, that is, the tasks, in the task queue.
S310, when the manager thread in the thread pool finds that the number of tasks in the task queue is small, it can dynamically delete threads from the thread array, thereby reducing overhead.
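A sketch of this manager behavior follows, under stated assumptions: the thresholds, polling timeout, and exit-flag mechanism are all illustrative choices, not details given by the patent.

```python
import queue
import threading
import time

task_q = queue.Queue()      # thread-safe FIFO task queue
workers = []                # the "thread array": (thread, exit_flag) pairs
HIGH, LOW = 8, 2            # illustrative thresholds (assumptions)

def worker(exit_flag):
    """Take tasks from the head of the queue and execute them."""
    while not exit_flag.is_set():
        try:
            task = task_q.get(timeout=0.05)
        except queue.Empty:
            continue
        task()              # run the image-processing task

def resize_pool():
    """One manager pass: grow on backlog, shrink when the queue is short."""
    depth = task_q.qsize()
    if depth > HIGH:                        # many pending tasks: add a worker
        flag = threading.Event()
        t = threading.Thread(target=worker, args=(flag,), daemon=True)
        t.start()
        workers.append((t, flag))
    elif depth < LOW and len(workers) > 1:  # few tasks: retire a worker
        _, flag = workers.pop()
        flag.set()                          # retired worker leaves its loop

# Seed one worker and enqueue a backlog of slow dummy tasks.
flag0 = threading.Event()
t0 = threading.Thread(target=worker, args=(flag0,), daemon=True)
t0.start()
workers.append((t0, flag0))
for _ in range(20):
    task_q.put(lambda: time.sleep(0.2))
resize_pool()               # backlog exceeds HIGH, so the pool grows
print(len(workers))         # 2
```

A real manager would call `resize_pool` on a timer; here a single pass demonstrates the grow decision.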
S312, after each thread takes a task out of the task queue, the data is preprocessed, which mainly includes: color space conversion, scaling, normalization, mean subtraction, and the like.
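A pared-down sketch of the scaling, normalization, and mean-subtraction steps (nearest-neighbour scaling is used here purely for illustration; the patent does not prescribe an interpolation method):

```python
import numpy as np

img = np.arange(4 * 4 * 3, dtype=np.float32).reshape(4, 4, 3)  # dummy 4x4 RGB

# Scaling: nearest-neighbour resize to 2x2 via index selection.
rows = np.linspace(0, img.shape[0] - 1, 2).astype(int)
cols = np.linspace(0, img.shape[1] - 1, 2).astype(int)
small = img[rows][:, cols]

# Normalization: map pixel values into [0, 1].
norm = small / 255.0

# Mean subtraction: remove the per-channel mean.
centered = norm - norm.mean(axis=(0, 1), keepdims=True)
print(centered.shape)  # (2, 2, 3)
```

After mean subtraction the data is zero-centered per channel, which is the usual input convention for convolutional networks.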
S314, forward inference processing is then performed on the preprocessed data, which mainly includes: performing convolution and pooling operations on the data, and adjusting linear data into nonlinear data by means of an activation function.
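The convolution, pooling, and activation building blocks can be sketched with plain NumPy. The kernel values and sizes are arbitrary illustrations, and the fully connected stage is omitted for brevity:

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D convolution (cross-correlation, as in most CNN frameworks)."""
    h = x.shape[0] - k.shape[0] + 1
    w = x.shape[1] - k.shape[1] + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling over size x size blocks."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

def relu(x):
    """Activation: adjust the linear response into a nonlinear one."""
    return np.maximum(x, 0.0)

x = np.arange(36, dtype=np.float64).reshape(6, 6)  # preprocessed input plane
k = np.array([[1.0, 0.0], [0.0, 1.0]])             # illustrative kernel
features = relu(max_pool(conv2d(x, k)))
print(features.shape)  # (2, 2)
```

Each stage shrinks the spatial extent: 6x6 input, 5x5 after the 2x2 convolution, 2x2 after pooling.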
S316, a post-processing operation, which can be preset according to different service scenarios, is performed on the result data of the forward inference processing to obtain the processed image data.
According to this embodiment, large-sample and/or small-sample test data can be adopted, and image processing is then performed by the distributed threads, so that the running speed of the neural network can be improved while the processing accuracy of the target image data is guaranteed.
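Putting steps S302–S316 together, a minimal end-to-end sketch of the thread-pool pipeline follows. The worker body, sentinel-based shutdown, and file names are illustrative assumptions, not details from the patent:

```python
import queue
import threading

task_q = queue.Queue()            # FIFO: put at the tail, get at the head
results = []
res_lock = threading.Lock()

def process(image):
    """Stand-in for decode -> preprocess -> inference -> post-process."""
    return f"features({image})"

def worker():
    while True:
        image = task_q.get()      # mutually exclusive removal from the head
        if image is None:         # sentinel: no more tasks
            task_q.task_done()
            return
        out = process(image)
        with res_lock:
            results.append(out)
        task_q.task_done()

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for img in ("test_001.jpg", "test_002.jpg", "test_003.jpg"):
    task_q.put(img)               # S306: enqueue test images at the tail
for _ in threads:
    task_q.put(None)              # one shutdown sentinel per worker
for t in threads:
    t.join()
print(sorted(results))
```

Because `queue.Queue` is internally locked, the workers' competing `get` calls are already mutually exclusive, mirroring the thread-lock requirement described above.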
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
In this embodiment, an image processing apparatus based on multiple threads is also provided. The apparatus is used to implement the foregoing embodiments and preferred embodiments, and details that have already been described will not be repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 4 is a block diagram of a multithread-based image processing apparatus according to an embodiment of the present invention. As shown in Fig. 4, the apparatus includes:
an obtaining module 402, configured to obtain target image data to be processed;
an adding module 404, configured to generate the target image data into a target task, and add the target task to a tail of a target task queue in a neural network model, where the target task includes the target image data, the target neural network model is configured to process the image data to be processed, the target task queue is configured to record tasks to be processed, each task to be processed includes corresponding image data to be processed, and tasks in the target task queue are set to be sequentially taken out from a head of the target task queue;
a calling module 406, configured to call a plurality of threads in a thread pool to sequentially fetch a plurality of tasks from a head of the target task queue, and execute the same image processing operation on corresponding to-be-processed image data included in the plurality of tasks in the neural network model, where each thread in the plurality of threads is used to fetch one task in the plurality of tasks and execute the image processing operation on the corresponding to-be-processed image data included in the fetched task.
In an optional embodiment, the apparatus is further configured to: acquiring the task number of the tasks to be processed recorded in the target task queue; and adjusting the number of threads in the thread pool according to the number of the tasks, wherein the threads in the thread pool are used for processing the tasks in the target task queue.
In an optional embodiment, the invoking module 406 includes:
an increasing unit, configured to increase the number of threads in the thread pool if the number of tasks is greater than a first threshold; and/or
A reducing unit, configured to reduce the number of threads in the thread pool when the number of tasks is smaller than a second threshold, where the second threshold is smaller than the first threshold.
In an optional embodiment, the invoking module 406 includes:
and the taking-out unit is used for calling the plurality of threads by adopting the thread lock to respectively take out the plurality of tasks from the head of the target task queue in a mutually exclusive mode.
In an optional embodiment, the invoking module 406 includes:
a processing unit, configured to perform a first image processing operation on corresponding to-be-processed image data included in the multiple tasks to obtain first target image data, where the first image processing operation is used to pre-process the to-be-processed image data in at least one of the following manners: the image processing method comprises the steps of carrying out color space conversion processing on the image data to be processed, carrying out scaling processing on the image data to be processed, carrying out normalization processing on the image data to be processed, and carrying out mean value reduction processing on the image data to be processed.
In an optional embodiment, the apparatus is further configured to:
after a first image processing operation is performed on corresponding image data to be processed included in the plurality of tasks to obtain first target image data, a second image processing operation is performed on the first target image data to obtain a group of image characteristic values, wherein the second image processing operation is used for performing forward inference processing on the first target image data, and the forward inference processing includes at least one of: convolution operation processing, pooling operation processing, processing for adjusting linear data into nonlinear data by adopting an activation function, and full-connection processing.
In an optional embodiment, the obtaining module 402 includes:
an acquisition unit configured to acquire first image data;
and the decoding unit is used for performing decoding operation on the first image data to obtain second image data, wherein the target image data comprises the second image data.
It should be noted that the above modules may be implemented by software or hardware. In the latter case, this may be achieved in, but is not limited to, the following ways: the modules are all located in the same processor, or the modules are located, in any combination, in different processors.
Embodiments of the present invention also provide a computer-readable storage medium having a computer program stored thereon, wherein the computer program is arranged to perform the steps of any of the above-mentioned method embodiments when executed.
In the present embodiment, the above-mentioned computer-readable storage medium may be configured to store a computer program for executing the steps of:
S1, acquiring target image data to be processed;
S2, generating target image data into target tasks, and adding the target tasks to the tail of a target task queue in a neural network model, wherein the target tasks comprise the target image data, the target neural network model is used for processing the image data to be processed, the target task queue is used for recording the tasks to be processed, each task to be processed comprises corresponding image data to be processed, and the tasks in the target task queue are set to be taken out from the head of the target task queue in sequence;
and S3, calling a plurality of threads in a thread pool to take out a plurality of tasks from the head of the target task queue in sequence, and executing the same image processing operation on corresponding to-be-processed image data included in the plurality of tasks in the neural network model, wherein each thread in the plurality of threads is used for taking out one task in the plurality of tasks and executing the image processing operation on the corresponding to-be-processed image data included in the taken out one task.
In an exemplary embodiment, the computer-readable storage medium may include, but is not limited to: various media capable of storing a computer program, such as a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
In an exemplary embodiment, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
In an exemplary embodiment, the processor may be configured to execute the following steps by a computer program:
S1, acquiring target image data to be processed;
S2, generating target image data into target tasks, and adding the target tasks to the tail of a target task queue in a neural network model, wherein the target tasks comprise the target image data, the target neural network model is used for processing the image data to be processed, the target task queue is used for recording the tasks to be processed, each task to be processed comprises corresponding image data to be processed, and the tasks in the target task queue are set to be taken out from the head of the target task queue in sequence;
and S3, calling a plurality of threads in a thread pool to take out a plurality of tasks from the head of the target task queue in sequence, and executing the same image processing operation on corresponding to-be-processed image data included in the plurality of tasks in the neural network model, wherein each thread in the plurality of threads is used for taking out one task in the plurality of tasks and executing the image processing operation on the corresponding to-be-processed image data included in the taken out one task.
For specific examples in this embodiment, reference may be made to the examples described in the above embodiments and exemplary embodiments, and details of this embodiment are not repeated herein.
It will be apparent to those skilled in the art that the various modules or steps of the present invention described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of computing devices. They may be implemented by program code executable by the computing devices, so that they may be stored in a storage device and executed by the computing devices; in some cases, the steps shown or described may be performed in an order different from that described herein. Alternatively, they may be separately fabricated into individual integrated circuit modules, or multiple modules or steps among them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A multithreading-based image processing method, comprising:
acquiring target image data to be processed;
generating the target image data into a target task, and adding the target task to the tail of a target task queue in a neural network model, wherein the target task comprises the target image data, the target neural network model is used for processing the image data to be processed, the target task queue is used for recording the tasks to be processed, each task to be processed comprises the corresponding image data to be processed, and the tasks in the target task queue are set to be taken out from the head of the target task queue in sequence;
and calling a plurality of threads in a thread pool to take out a plurality of tasks from the head of the target task queue in sequence, and executing the same image processing operation on the corresponding to-be-processed image data included in the plurality of tasks in the neural network model, wherein each thread in the plurality of threads is used for taking out one task in the plurality of tasks and executing the image processing operation on the corresponding to-be-processed image data included in the taken out one task.
2. The method of claim 1, further comprising:
acquiring the task number of the tasks to be processed recorded in the target task queue;
and adjusting the number of threads in the thread pool according to the number of the tasks, wherein the threads in the thread pool are used for processing the tasks in the target task queue.
3. The method of claim 2, wherein said adjusting the number of threads in the thread pool according to the number of tasks comprises:
increasing the number of threads in the thread pool if the number of tasks is greater than a first threshold; and/or
And reducing the number of threads in the thread pool when the number of tasks is smaller than a second threshold value, wherein the second threshold value is smaller than the first threshold value.
4. The method of claim 1, wherein invoking the plurality of threads in the thread pool to sequentially fetch the plurality of tasks from the head of the target task queue comprises:
and calling the plurality of threads by adopting a thread lock to respectively take out the plurality of tasks from the head of the target task queue in a mutually exclusive mode.
5. The method of claim 1, wherein performing the same image processing operation on corresponding to-be-processed image data included in the plurality of tasks in the neural network model comprises:
executing a first image processing operation on corresponding image data to be processed included in the plurality of tasks to obtain first target image data, wherein the first image processing operation is used for preprocessing the image data to be processed in at least one of the following modes: the image processing method comprises the steps of carrying out color space conversion processing on the image data to be processed, carrying out scaling processing on the image data to be processed, carrying out normalization processing on the image data to be processed, and carrying out mean value reduction processing on the image data to be processed.
6. The method according to claim 5, wherein after performing a first image processing operation on corresponding to-be-processed image data included in the plurality of tasks, resulting in first target image data, the method further comprises:
performing a second image processing operation on the first target image data to obtain a set of image feature values, wherein the second image processing operation is used for performing forward inference processing on the first target image data, and the forward inference processing includes at least one of: convolution operation processing, pooling operation processing, processing for adjusting linear data into nonlinear data by adopting an activation function, and full-connection processing.
7. The method of claim 1, wherein the acquiring target image data to be processed comprises:
acquiring first image data;
and performing decoding operation on the first image data to obtain second image data, wherein the target image data comprises the second image data.
8. An image processing apparatus based on multithreading, comprising:
the acquisition module is used for acquiring target image data to be processed;
an adding module, configured to generate the target image data into a target task, and add the target task to a tail of a target task queue in a neural network model, where the target task includes the target image data, the target neural network model is configured to process the image data to be processed, the target task queue is configured to record tasks to be processed, each task to be processed includes corresponding image data to be processed, and tasks in the target task queue are set to be sequentially taken out from a head of the target task queue;
and the calling module is used for calling a plurality of threads in a thread pool to take out a plurality of tasks from the head of the target task queue in sequence and executing the same image processing operation on the corresponding to-be-processed image data included in the plurality of tasks in the neural network model, wherein each thread in the plurality of threads is used for taking out one task in the plurality of tasks and executing the image processing operation on the corresponding to-be-processed image data included in the taken out one task.
9. A computer-readable storage medium, in which a computer program is stored, wherein the computer program is arranged to perform the method of any of claims 1 to 7 when executed.
10. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 7.
CN202010955146.7A 2020-09-11 2020-09-11 Image processing method and device based on multiple threads Pending CN112163468A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010955146.7A CN112163468A (en) 2020-09-11 2020-09-11 Image processing method and device based on multiple threads

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010955146.7A CN112163468A (en) 2020-09-11 2020-09-11 Image processing method and device based on multiple threads

Publications (1)

Publication Number Publication Date
CN112163468A true CN112163468A (en) 2021-01-01

Family

ID=73858073

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010955146.7A Pending CN112163468A (en) 2020-09-11 2020-09-11 Image processing method and device based on multiple threads

Country Status (1)

Country Link
CN (1) CN112163468A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112860401A (en) * 2021-02-10 2021-05-28 北京百度网讯科技有限公司 Task scheduling method and device, electronic equipment and storage medium
CN113568666A (en) * 2021-06-07 2021-10-29 阿里巴巴新加坡控股有限公司 Image processing method and device, storage medium and processor
CN113688868A (en) * 2021-07-21 2021-11-23 深圳市安软科技股份有限公司 Multithreading image processing method and device
CN113905273A (en) * 2021-09-29 2022-01-07 上海阵量智能科技有限公司 Task execution method and device
CN115296958A (en) * 2022-06-28 2022-11-04 青岛海尔科技有限公司 Distribution method and device of equipment control task, storage medium and electronic device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107885590A (en) * 2017-11-30 2018-04-06 百度在线网络技术(北京)有限公司 Task processing method and device for smart machine
CN110147251A (en) * 2019-01-28 2019-08-20 腾讯科技(深圳)有限公司 For calculating the framework, chip and calculation method of neural network model
CN110659134A (en) * 2019-09-04 2020-01-07 腾讯云计算(北京)有限责任公司 Data processing method and device applied to artificial intelligence platform
CN111310638A (en) * 2019-12-31 2020-06-19 深圳云天励飞技术有限公司 Data processing method and device and computer readable storage medium
CN111373436A (en) * 2018-12-18 2020-07-03 深圳市大疆创新科技有限公司 Image processing method, terminal device and storage medium


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112860401A (en) * 2021-02-10 2021-05-28 北京百度网讯科技有限公司 Task scheduling method and device, electronic equipment and storage medium
CN112860401B (en) * 2021-02-10 2023-07-25 北京百度网讯科技有限公司 Task scheduling method, device, electronic equipment and storage medium
CN113568666A (en) * 2021-06-07 2021-10-29 阿里巴巴新加坡控股有限公司 Image processing method and device, storage medium and processor
CN113688868A (en) * 2021-07-21 2021-11-23 深圳市安软科技股份有限公司 Multithreading image processing method and device
CN113688868B (en) * 2021-07-21 2023-08-22 深圳市安软科技股份有限公司 Multithreading image processing method and device
CN113905273A (en) * 2021-09-29 2022-01-07 上海阵量智能科技有限公司 Task execution method and device
CN115296958A (en) * 2022-06-28 2022-11-04 青岛海尔科技有限公司 Distribution method and device of equipment control task, storage medium and electronic device
CN115296958B (en) * 2022-06-28 2024-03-22 青岛海尔科技有限公司 Distribution method and device of equipment control tasks, storage medium and electronic device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination