CN112036503A - Image processing method and device based on step-by-step threads and storage medium

Info

Publication number: CN112036503A
Application number: CN202010955167.9A
Authority: CN (China)
Legal status: Pending
Original language: Chinese (zh)
Inventors: 丁勇, 李合青
Original and current assignee: Zhejiang Dahua Technology Co Ltd
Prior art keywords: target, message queue, processing result, image data, processing
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN202010955167.9A
Publication of CN112036503A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/54 Interprogram communication
    • G06F9/546 Message passing systems or structures, e.g. queues
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Abstract

The invention discloses an image processing method and device based on step-by-step threads, and a storage medium. The method comprises the following steps: when image data to be processed exists in a first message queue of a target neural network model, acquiring first image data to be processed from the first message queue through a first target thread, performing a first processing operation on the first image data to obtain a first processing result, and storing the first processing result in a second message queue of the target neural network model, where the target neural network model is used for processing the image data to be processed, and the second message queue is used for recording processing results obtained by performing the first processing operation on image data in the first message queue; and when a processing result to be processed exists in the second message queue, acquiring the first processing result to be processed from the second message queue through a second target thread, and performing a second processing operation on the first processing result to obtain a second processing result. The invention solves the technical problem of low efficiency when image processing is performed through a neural network model.

Description

Image processing method and device based on step-by-step threads and storage medium
Technical Field
The invention relates to the field of computers, in particular to an image processing method and device based on a step thread and a storage medium.
Background
Deep neural network technology is widely applied in the field of image processing, for example in target detection, recognition and tracking. Its principle is roughly as follows. Model design and training stage: a deep neural network model is designed and then trained on labeled training samples; by continuously adjusting the weight parameter values of each layer in the network, the difference between the predicted value computed by the network on the data and the true value becomes smaller and smaller. Network model testing stage: the trained neural network model is used; a preprocessing operation is first performed on the test data, where the preprocessing generally includes color conversion and scaling, the processed data is then subjected to forward inference calculation, and finally a post-processing operation is performed on the inferred data, so as to obtain results such as detection or recognition of the target in the image.
However, in order to give the trained neural network model better generalization, the designed deep neural networks have ever deeper layers and ever more complex structures, which brings problems such as: with a large number of network layers and a large number of parameters, the overall time consumed by the forward computation increases. As a result, there is a problem of low efficiency when performing image processing through a neural network model.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiment of the invention provides an image processing method and device based on a step-by-step thread and a storage medium, which at least solve the technical problem of low efficiency of executing image processing through a neural network model.
According to an aspect of an embodiment of the present invention, there is provided an image processing method based on a step-by-step thread, including: under the condition that image data to be processed exists in a first message queue of a target neural network model, acquiring first image data to be processed from the first message queue through a first target thread, executing a first processing operation on the first image data to obtain a first processing result, and storing the first processing result into a second message queue of the target neural network model, wherein the target neural network model is used for processing the image data to be processed, and the second message queue is used for recording the processing result obtained by executing the first processing operation on the image data in the first message queue; and under the condition that the second message queue has the to-be-processed processing result, acquiring the to-be-processed first processing result from the second message queue through a second target thread, and executing a second processing operation on the first processing result to obtain a second processing result.
As an alternative, the executing, by the first target thread, the first processing operation on the first image data includes: and executing preprocessing operation on the first image data through the first target thread to obtain a first preprocessing result.
As an optional solution, the storing, by the first target thread, the first processing result into the second message queue of the target neural network model includes: storing the first preprocessing result into the second message queue through a first target thread; or storing the first preprocessing result into a target memory address through a first target thread, and storing the target memory address into the second message queue.
As an optional solution, the method further includes: under the condition that an image to be processed exists, acquiring a target image from the image to be processed through a third target thread; decoding the target image through the third target thread to obtain target image data; and storing the target image data into the first message queue through the third target thread.
As an optional solution, the decoding the target image by the third target thread to obtain target image data includes: decoding, by the third target thread, image data of the target image into YUV image data, wherein the target image data includes the YUV image data.
As an optional scheme, the obtaining, by the second target thread, the to-be-processed first processing result from the second message queue further includes: and executing inference processing operation on the first processing result through the second target thread to obtain a target inference processing result.
As an optional scheme, the performing the inference processing operation on the first processing result by the second target thread to obtain the target inference processing result further includes: storing the target inference processing result into a third message queue through the second target thread; or storing the target inference processing result into a target memory address through the second target thread, and storing the target memory address into the third message queue.
As an optional scheme, the obtaining, by the second target thread, the to-be-processed first processing result from the second message queue further includes: and executing post-processing operation on the target inference processing result through the second target thread to obtain a target post-processing result, wherein the post-processing operation carries out post-processing on the target inference processing result according to a service scene.
As an optional scheme, before the obtaining, by the second target thread, the to-be-processed first processing result from the second message queue, the method further includes: and acquiring a status flag of the first processing result, wherein the status flag is used for indicating that the first processing result is already stored in the second message queue, and the status flag is used for indicating that the acquisition of the first processing result is allowed.
According to another aspect of the embodiments of the present invention, there is also provided an image processing apparatus based on a step-by-step thread, including: a first obtaining unit, configured to, when image data to be processed exists in a first message queue of a target neural network model, obtain, by a first target thread, first image data to be processed from the first message queue, perform a first processing operation on the first image data to obtain a first processing result, and store the first processing result in a second message queue of the target neural network model, where the target neural network model is used to process the image data to be processed, and the second message queue is used to record a processing result obtained by performing the first processing operation on the image data in the first message queue; a second obtaining unit, configured to, when there is a to-be-processed processing result in the second message queue, obtain, by a second target thread, a to-be-processed first processing result from the second message queue, and perform a second processing operation on the first processing result, to obtain a second processing result.
As an optional solution, the first obtaining unit includes: and the first execution module is used for executing preprocessing operation on the first image data through the first target thread to obtain a first preprocessing result.
As an optional solution, the first execution module includes: a first storing submodule, configured to store the first preprocessing result into the second message queue through a first target thread; or a second storing submodule, configured to store the first preprocessing result into a target memory address through the first target thread, and store the target memory address into the second message queue.
As an optional solution, the apparatus further includes: a third obtaining unit, configured to obtain, in the presence of an image to be processed, a target image from the image to be processed through a third target thread; a decoding unit, configured to decode the target image through the third target thread to obtain target image data; and a storing unit, configured to store the target image data into the first message queue through the third target thread.
As an alternative, the decoding unit includes: a decoding module, configured to decode, by the third target thread, the image data of the target image into YUV image data, where the target image data includes the YUV image data.
As an optional solution, the second obtaining unit includes: and the second execution module is used for executing inference processing operation on the first processing result through the second target thread to obtain a target inference processing result.
As an optional solution, the second execution module further includes: a third storing submodule, configured to store the target inference processing result in the third message queue through a second target thread; or the fourth storing submodule is used for storing the target inference processing result into a target memory address through the second target thread and storing the target memory address into the third message queue.
As an optional solution, the second obtaining unit further includes: and a third execution module, configured to execute a post-processing operation on the target inference processing result through the second target thread to obtain a target post-processing result, where the post-processing operation performs post-processing on the target inference processing result according to a service scenario.
As an optional scheme, the method further comprises the following steps: a fourth obtaining unit, configured to obtain a status flag of the first processing result before the first processing result to be processed is obtained from the second message queue through the second target thread, where the status flag is used to indicate that the first processing result has been already stored in the second message queue, and the status flag is used to indicate that obtaining of the first processing result is allowed.
According to a further aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium having a computer program stored therein, wherein the computer program is configured to execute the above-mentioned step-and-thread based image processing method when running.
According to another aspect of the embodiments of the present invention, there is also provided an electronic apparatus, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the image processing method based on the step-and-thread by using the computer program.
In the embodiment of the present invention, under the condition that image data to be processed exists in a first message queue of a target neural network model, first image data to be processed is obtained from the first message queue through a first target thread, a first processing operation is performed on the first image data to obtain a first processing result, and the first processing result is stored in a second message queue of the target neural network model, wherein the target neural network model is used for processing the image data to be processed, and the second message queue is used for recording a processing result obtained by performing the first processing operation on the image data in the first message queue; and under the condition that the to-be-processed processing result exists in the second message queue, acquiring a to-be-processed first processing result from the second message queue through a second target thread, executing a second processing operation on the first processing result to obtain a second processing result, and respectively processing the data obtained from the message queue through multiple threads, wherein the threads do not interfere with each other, so that the aim of accelerating the processing speed of the image data is fulfilled, the technical effect of improving the image processing efficiency of the neural network model is realized, and the technical problem of low efficiency of executing the image processing through the neural network model is solved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a schematic illustration of a flow chart of an alternative step-and-thread based image processing method according to an embodiment of the invention;
FIG. 2 is a schematic diagram of an alternative step-and-thread based image processing method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an alternative step-and-thread based image processing method according to an embodiment of the present invention;
FIG. 4 is a schematic illustration of an alternative step-and-thread based image processing method according to an embodiment of the present invention;
FIG. 5 is a schematic illustration of an alternative step-and-thread based image processing method according to an embodiment of the present invention;
FIG. 6 is a schematic illustration of an alternative step-and-thread based image processing method according to an embodiment of the present invention;
FIG. 7 is a schematic illustration of an alternative step-and-thread based image processing method according to an embodiment of the present invention;
FIG. 8 is a schematic illustration of an alternative step-and-thread based image processing method according to an embodiment of the present invention;
FIG. 9 is a schematic illustration of an alternative step-and-thread based image processing method according to an embodiment of the present invention;
FIG. 10 is a schematic illustration of an alternative step-and-thread based image processing method according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of an alternative step-and-thread based image processing apparatus according to an embodiment of the present invention;
FIG. 12 is a schematic structural diagram of an alternative electronic device according to an embodiment of the invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Optionally, as an alternative implementation, as shown in fig. 1, the image processing method based on the step-and-thread includes:
s102, under the condition that image data to be processed exists in a first message queue of a target neural network model, obtaining first image data to be processed from the first message queue through a first target thread, executing a first processing operation on the first image data to obtain a first processing result, and storing the first processing result into a second message queue of the target neural network model, wherein the target neural network model is used for processing the image data to be processed, and the second message queue is used for recording the processing result obtained by executing the first processing operation on the image data in the first message queue;
s104, under the condition that the second message queue has the to-be-processed processing result, obtaining the to-be-processed first processing result from the second message queue through the second target thread, and executing a second processing operation on the first processing result to obtain a second processing result.
Optionally, the image processing method based on step-by-step threads may be applied, but is not limited, to a scenario in which a multi-core hardware product runs a neural network model. Optionally, the target neural network model may be, but is not limited to, a deep neural network applied in the image field, for example in image recognition, detection, tracking and the like. The target neural network model may also be, but is not limited to, obtained as follows: different neural network structures are designed in advance, training samples are made by manually labeling some image data, the neural network is trained on a computer using the training data based on an open framework or an independently designed training framework, and the final loss of the model is made smaller and smaller by continuously adjusting the weight parameters of each layer of the network. Specifically, for example: 1) a stage of designing a deep neural network model and training the network model; 2) training on labeled training samples, continuously adjusting the weight parameter values of each layer in the network so that the difference between the predicted value computed by the neural network on the data and the true value becomes smaller and smaller; 3) a network model testing stage, in which the trained neural network model is used to perform a preprocessing operation on the test data, the preprocessing generally including color conversion and scaling, after which the processed data is subjected to forward inference calculation; 4) finally, a post-processing operation is performed on the inferred data, so as to obtain results such as detection or recognition of the target in the image. Optionally, the step-by-step thread may be, but is not limited to, a thread technique extended from multithreading technology, where multithreading may be, but is not limited to, a technique in which multiple threads are executed concurrently in software or hardware, and a thread may be, but is not limited to, the smallest unit that an operating system can schedule for execution. The processing operation may be, but is not limited to, any of various operations for processing image data, such as color space conversion, scaling, normalization, averaging (mean) processing, and the like. The message queue may store, but is not limited to storing, data or data storage addresses, where a data storage address is used to locate the storage space to which it refers; for example, target data is stored in a target memory array on the heap, and the target data may then be, but is not limited to being, looked up and called according to the memory address of that target memory array.
It should be noted that, under the condition that image data to be processed exists in a first message queue of a target neural network model, first image data to be processed is obtained from the first message queue through a first target thread, a first processing operation is performed on the first image data to obtain a first processing result, and the first processing result is stored in a second message queue of the target neural network model, wherein the target neural network model is used for processing the image data to be processed, and the second message queue is used for recording a processing result obtained by performing the first processing operation on the image data in the first message queue; and under the condition that the to-be-processed processing result exists in the second message queue, acquiring the to-be-processed first processing result from the second message queue through the second target thread, and executing a second processing operation on the first processing result to obtain a second processing result.
Further by way of example, as shown in fig. 2, the optional example includes a first message queue 202, first image data 204, a second message queue 206, a first processing result 208, and a second processing result 210, where the first image data 204 to be processed is obtained from the first message queue 202, a first processing operation is performed on the first image data 204, and the obtained first processing result 208 is stored in the second message queue 206; a first processing result 208 to be processed is obtained in the second message queue 206, and a second processing operation is performed on the first processing result 208, resulting in a second processing result 210 (the dashed line indicates that the first processing result 208 performing the second processing operation is obtained in the second message queue 206).
According to the embodiment provided by the application, under the condition that image data to be processed exists in a first message queue of a target neural network model, first image data to be processed is obtained from the first message queue through a first target thread, first processing operation is performed on the first image data to obtain a first processing result, and the first processing result is stored in a second message queue of the target neural network model, wherein the target neural network model is used for processing the image data to be processed, and the second message queue is used for recording the processing result obtained by performing the first processing operation on the image data in the first message queue; and under the condition that the to-be-processed processing result exists in the second message queue, acquiring a to-be-processed first processing result from the second message queue through the second target thread, executing a second processing operation on the first processing result to obtain a second processing result, and respectively processing the data obtained from the message queue through multiple threads, wherein the threads do not interfere with each other, so that the purpose of accelerating the processing speed of the image data is achieved, and the technical effect of improving the image processing efficiency based on the step-by-step threads is realized.
As an alternative, performing a first processing operation on the first image data by the first target thread includes:
and executing preprocessing operation on the first image data through the first target thread to obtain a first preprocessing result.
It should be noted that, the first target thread performs a preprocessing operation on the first image data to obtain a first preprocessing result. Optionally, the pre-processing operation may include, but is not limited to, at least one of: color space conversion, scaling, normalization, and averaging operations, etc.
Further illustratively, optionally, as shown in fig. 3, a preprocessing operation is performed on the first image data 304 in the first message queue 302 to obtain a first preprocessing result 306 (the dotted line indicates that the first image data 304 subjected to the preprocessing operation is obtained in the first message queue 302).
According to the embodiment provided by the application, the first target thread executes the preprocessing operation on the first image data to obtain the first preprocessing result, so that the purpose of quickly obtaining the preprocessing result is achieved, and the effect of improving the obtaining efficiency of the preprocessing result is achieved.
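A minimal sketch of such a preprocessing operation, assuming OpenCV and NumPy are available; the 224x224 target size and the mean values are illustrative assumptions rather than values given by the embodiment.

import cv2
import numpy as np

def preprocess(bgr_image: np.ndarray) -> np.ndarray:
    rgb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB)     # color space conversion
    resized = cv2.resize(rgb, (224, 224))                # scaling
    normalized = resized.astype(np.float32) / 255.0      # normalization
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    return normalized - mean                             # averaging (mean) processing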
As an alternative, storing the first processing result in a second message queue of the target neural network model by the first target thread includes:
s1, storing the first preprocessing result into a second message queue through the first target thread; or
S2, storing the first pre-processing result into the target memory address through the first target thread, and storing the target memory address into the second message queue.
It should be noted that the first preprocessing result is stored in the second message queue through the first target thread; or the first preprocessing result is stored at a target memory address through the first target thread, and the target memory address is stored in the second message queue. Optionally, the first target thread stores the first preprocessing result in the memory space of the target memory address, where the memory space may be, but is not limited to, one of a plurality of memory address spaces on a heap; the heap here may be, but is not limited to, the general name for a class of special data structures in computer science, which may be, but is not limited to, an array object that can be regarded as a complete binary tree. Alternatively, the target memory address may be, but is not limited to, a memory address pre-allocated according to the first image data and used for storing data related to the first image data (e.g., the first processing result data).
For further example, optionally, as shown in fig. 4, the first preprocessing result 402 is stored in the target memory address 404, and the target memory address 404 is stored in the second message queue 406, wherein the data corresponding to the first preprocessing result 402 can be searched and called according to the target memory address 404.
According to the embodiment provided by the application, the first preprocessing result is stored in the second message queue through the first target thread; or the first target thread stores the first preprocessing result into the target memory address and stores the target memory address into the second message queue, so that the purpose of flexibly storing the first preprocessing result is achieved, and the effect of improving the storage flexibility of data is realized.
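A minimal sketch of the two storage variants, assuming Python and NumPy; the pre-allocated buffer list stands in for memory space at the target memory address on the heap, the slot index plays the role of the address placed in the second message queue, and all names are illustrative assumptions.

import queue
import numpy as np

second_queue = queue.Queue()

# Variant 1: store the first preprocessing result itself in the second queue.
def store_result(first_preprocessing_result: np.ndarray) -> None:
    second_queue.put(first_preprocessing_result)

# Variant 2: store the result at a pre-allocated "target memory address"
# and place only that address in the second queue.
buffer_pool = [np.empty((224, 224, 3), dtype=np.float32) for _ in range(8)]
free_slots = queue.Queue()
for slot in range(len(buffer_pool)):
    free_slots.put(slot)

def store_result_by_address(first_preprocessing_result: np.ndarray) -> None:
    slot = free_slots.get()                          # pre-allocated address/slot
    np.copyto(buffer_pool[slot], first_preprocessing_result)
    second_queue.put(slot)                           # queue only the address

Queueing only an address avoids copying large image buffers between stages, which is the flexibility the scheme above is pointing at.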
As an optional solution, the method further comprises:
s1, acquiring a target image from the image to be processed through a third target thread under the condition that the image to be processed exists;
s2, decoding the target image through the third target thread to obtain target image data;
and S3, storing the target image data into the first message queue through the third target thread.
It should be noted that, in the case that there is an image to be processed, the target image is obtained from the image to be processed through the third target thread; decoding the target image through a third target thread to obtain target image data; and storing the target image data into the first message queue through the third target thread. Alternatively, decoding may be, but is not limited to, converting image data in an original format to image data in another format by compression techniques.
To further illustrate, optionally, as shown in fig. 5, a target image 504 is obtained from an image 502 to be processed, the target image 504 is decoded to obtain target image data 506, and the target image data 506 is stored in a first message queue 508 designed in advance.
According to the embodiment provided by the application, under the condition that the image to be processed exists, the target image is obtained from the image to be processed through the third target thread; decoding the target image through a third target thread to obtain target image data; target image data are stored in the first message queue through the third target thread, the purpose of decoding the image in advance is achieved, and the effect of improving the overall processing efficiency of the image is achieved.
As an optional scheme, decoding the target image through the third target thread to obtain target image data includes:
decoding, by the third target thread, image data of the target image into YUV image data, wherein the target image data includes YUV image data.
It should be noted that, the image data of the target image is decoded into YUV image data by the third target thread, where the target image data includes YUV image data.
To further illustrate, optionally, as shown in fig. 6, the input image data 602 in jpg format is decoded into YUV image data 604, and the YUV image data 604 is stored at a target memory address 606 pre-allocated in the heap space; this operation is executed continuously as long as image data is still being input, and each time the target memory address 606 at which the decoded image data is stored is added to the message queue 608.
According to the embodiment provided by the application, the image data of the target image is decoded into YUV image data through the third target thread, wherein the target image data comprises YUV image data, the purpose of realizing compatible decoding and storage operation through the thread is achieved, and the effect of improving the utilization efficiency of the thread is realized.
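A minimal sketch of such a decoding thread, assuming OpenCV is available; the file name, the pending_images list and the use of cv2.imdecode/cv2.cvtColor are illustrative assumptions standing in for whatever decoder the hardware platform actually provides.

import queue
import threading
import cv2
import numpy as np

first_queue = queue.Queue()
pending_images = ["frame_0001.jpg"]                   # illustrative input source

def third_target_thread():
    for path in pending_images:                       # runs while input remains
        raw = np.fromfile(path, dtype=np.uint8)
        bgr = cv2.imdecode(raw, cv2.IMREAD_COLOR)     # decode the JPEG data
        yuv = cv2.cvtColor(bgr, cv2.COLOR_BGR2YUV)    # target image data in YUV
        first_queue.put(yuv)                          # add to the first message queue
    first_queue.put(None)                             # signal end of input

threading.Thread(target=third_target_thread).start()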
As an optional scheme, obtaining, by the second target thread, the to-be-processed first processing result from the second message queue further includes:
and executing inference processing operation on the first processing result through the second target thread to obtain a target inference processing result.
It should be noted that the second target thread performs the inference processing operation on the first processing result to obtain the target inference processing result. Optionally, the inference processing operation may include, but is not limited to, at least one of the following: performing convolution and pooling operations on the data, applying an activation function to change the data from linear to nonlinear and thereby enhance its expressive capability, fully connected layers, and the like.
To further illustrate, optionally, as shown in fig. 7, an inference processing operation is performed on the first processing result 702 to obtain a target inference processing result 704.
According to the embodiment provided by the application, the second target thread executes the inference processing operation on the first processing result to obtain the target inference processing result, so that the purpose of executing the inference operation on the data is achieved, and the effect of improving the expression capability of the data is realized.
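A toy sketch of the inference operations listed above, assuming only NumPy; the kernel, weights and shapes are illustrative assumptions, and a real deployment would run the trained target neural network model rather than this hand-written forward pass.

import numpy as np

def conv2d_valid(x, k):
    # naive "valid" 2-D convolution (cross-correlation) for illustration
    h, w = x.shape
    kh, kw = k.shape
    out = np.zeros((h - kh + 1, w - kw + 1), dtype=np.float32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool_2x2(x):
    h, w = x.shape
    x = x[:h - h % 2, :w - w % 2]                      # crop to an even size
    return x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).max(axis=(1, 3))

def relu(x):
    return np.maximum(x, 0.0)                          # activation / non-linearity

def fully_connected(x, weights, bias):
    return x.reshape(-1) @ weights + bias              # flatten, then dense layer

x = np.random.rand(8, 8).astype(np.float32)            # stand-in feature map
kernel = np.random.rand(3, 3).astype(np.float32)
feat = relu(max_pool_2x2(conv2d_valid(x, kernel)))     # convolution -> pooling -> activation
weights = np.random.rand(feat.size, 4).astype(np.float32)
logits = fully_connected(feat, weights, np.zeros(4, dtype=np.float32))
print(logits)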
As an optional scheme, the performing the inference processing operation on the first processing result by the second target thread to obtain the target inference processing result further includes:
s1, storing the target inference processing result into a third message queue through the second target thread; or
And S2, storing the target inference processing result into the target memory address through the second target thread, and storing the target memory address into the third message queue.
It should be noted that the target inference processing result is stored in the third message queue through the second target thread; or storing the target inference processing result into the target memory address through the second target thread, and storing the target memory address into the third message queue.
For further example, optionally, as shown in fig. 8, the target inference processing result 802 is stored in the target memory address 404, and the target memory address 404 is stored in the third message queue 804, where data corresponding to the target inference processing result 802 may be searched and called according to the target memory address 404.
According to the embodiment provided by the application, the target reasoning processing result is stored in the third message queue through the second target thread; or the target inference processing result is stored into the target memory address through the second target thread, and the target memory address is stored into the third message queue, so that the purpose that various data related to the first image data are stored in the same target memory address is achieved, and the effect of improving the utilization rate of the storage space is achieved.
As an optional scheme, obtaining, by the second target thread, the to-be-processed first processing result from the second message queue further includes:
and executing post-processing operation on the target reasoning processing result through the second target thread to obtain a target post-processing result, wherein the post-processing operation carries out post-processing on the target reasoning processing result according to the service scene.
It should be noted that, a post-processing operation is performed on the target inference processing result through the second target thread to obtain a target post-processing result, where the post-processing operation performs post-processing on the target inference processing result according to a service scenario.
For further example, optionally, as shown in fig. 9, a target inference processing result 902 (or a target memory address 404) is obtained in the third message queue 804, and a post-processing operation is performed to obtain a target post-processing result.
According to the embodiment provided by the application, the post-processing operation is executed on the target inference processing result through the second target thread to obtain the target post-processing result, wherein the post-processing operation carries out post-processing on the target inference processing result according to the service scene, so that the purpose of obtaining different post-processing results according to different services to be achieved is achieved, and the effect of obtaining flexibility of the post-processing results is achieved.
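A minimal sketch of scenario-dependent post-processing, assuming NumPy; the detection and classification scenarios, the 0.5 confidence threshold and the sample score array are illustrative assumptions.

import numpy as np

def postprocess(inference_result: np.ndarray, scenario: str):
    if scenario == "detection":
        # keep detections whose confidence (last column) exceeds a threshold
        return inference_result[inference_result[:, -1] > 0.5]
    if scenario == "classification":
        # pick the highest-scoring class index
        return int(np.argmax(inference_result))
    raise ValueError(f"unknown scenario: {scenario}")

scores = np.array([[10, 20, 50, 60, 0.9],
                   [30, 40, 80, 90, 0.2]], dtype=np.float32)
print(postprocess(scores, "detection"))        # only the first box survives the threshold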
As an optional scheme, before obtaining the first processing result to be processed from the second message queue through the second target thread, the method further includes:
and acquiring a state identifier of the first processing result, wherein the state identifier is used for indicating that the first processing result is stored in the second message queue, and the state identifier is used for indicating that the first processing result is allowed to be acquired.
It should be noted that a status flag of the first processing result is obtained, where the status flag is used to indicate that the first processing result has been stored in the second message queue and that acquisition of the first processing result is allowed. Optionally, in the implementation of the image processing method based on step-by-step threads, the first target thread and the second target thread access the message queue under mutual exclusion; for example, the second target thread is allowed to acquire target data from the message queue only after the target data has been completely stored in the message queue by the first target thread.
To further illustrate, alternatively, for example, as shown in fig. 10, when the first processing result 1002 is in a state of being completely stored in the second message queue 1004, the first processing result 1002 is allowed to be acquired; conversely, when the first processing result 1002 is in a state of not being completely stored in the second message queue 1004, the first processing result 1002 is prohibited from being acquired.
According to the embodiment provided by the application, the state identifier of the first processing result is obtained, wherein the state identifier is used for indicating that the first processing result is stored in the second message queue, the state identifier is used for indicating that the first processing result is allowed to be obtained, and through the setting of mutual exclusion access, the purpose of protecting the data in the message queue from being obtained under the condition that the data is not completely stored is achieved, and the effect of improving the safety of the data is achieved.
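A minimal sketch of such a status flag under mutually exclusive access, assuming Python's threading module; a threading.Event attached to each queued item plays the role of the status identifier that marks the first processing result as completely stored before it may be read, and all names are illustrative assumptions.

import threading
import queue
import numpy as np

second_queue = queue.Queue()

def produce(first_result: np.ndarray) -> None:
    item = {"buffer": np.empty_like(first_result), "ready": threading.Event()}
    second_queue.put(item)                 # address-like handle queued first
    np.copyto(item["buffer"], first_result)
    item["ready"].set()                    # status flag: storing is complete

def consume() -> np.ndarray:
    item = second_queue.get()
    item["ready"].wait()                   # acquisition allowed only once flagged
    return item["buffer"]

threading.Thread(target=produce, args=(np.ones((2, 2), dtype=np.float32),)).start()
print(consume())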
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
According to another aspect of the embodiments of the present invention, there is also provided a step-and-thread based image processing apparatus for implementing the above-described step-and-thread based image processing method. As shown in fig. 11, the apparatus includes:
a first obtaining unit 1102, configured to, when image data to be processed exists in a first message queue of a target neural network model, obtain, by a first target thread, first image data to be processed from the first message queue, perform a first processing operation on the first image data to obtain a first processing result, and store the first processing result in a second message queue of the target neural network model, where the target neural network model is configured to process the image data to be processed, and the second message queue is configured to record a processing result obtained by performing the first processing operation on the image data in the first message queue;
a second obtaining unit 1104, configured to, in a case that there is a to-be-processed processing result in the second message queue, obtain, by the second target thread, a to-be-processed first processing result from the second message queue, and perform a second processing operation on the first processing result, so as to obtain a second processing result.
Optionally, the image processing apparatus based on step-by-step threads may be applied, but is not limited, to a scenario in which a multi-core hardware product runs a neural network model. Optionally, the target neural network model may be, but is not limited to, a deep neural network applied in the image field, for example in image recognition, detection, tracking and the like. The target neural network model may also be, but is not limited to, obtained as follows: different neural network structures are designed in advance, training samples are made by manually labeling some image data, the neural network is trained on a computer using the training data based on an open framework or an independently designed training framework, and the final loss of the model is made smaller and smaller by continuously adjusting the weight parameters of each layer of the network. Specifically, for example: 1) a stage of designing a deep neural network model and training the network model; 2) training on labeled training samples, continuously adjusting the weight parameter values of each layer in the network so that the difference between the predicted value computed by the neural network on the data and the true value becomes smaller and smaller; 3) a network model testing stage, in which the trained neural network model is used to perform a preprocessing operation on the test data, the preprocessing generally including color conversion and scaling, after which the processed data is subjected to forward inference calculation; 4) finally, a post-processing operation is performed on the inferred data, so as to obtain results such as detection or recognition of the target in the image. Optionally, the step-by-step thread may be, but is not limited to, a thread technique extended from multithreading technology, where multithreading may be, but is not limited to, a technique in which multiple threads are executed concurrently in software or hardware, and a thread may be, but is not limited to, the smallest unit that an operating system can schedule for execution. The processing operation may be, but is not limited to, any of various operations for processing image data, such as color space conversion, scaling, normalization, averaging (mean) processing, and the like. The message queue may store, but is not limited to storing, data or data storage addresses, where a data storage address is used to locate the storage space to which it refers; for example, target data is stored in a target memory array on the heap, and the target data may then be, but is not limited to being, looked up and called according to the memory address of that target memory array.
The first obtaining unit is configured to, under the condition that image data to be processed exists in a first message queue of a target neural network model, obtain, by a first target thread, first image data to be processed from the first message queue, perform a first processing operation on the first image data to obtain a first processing result, and store the first processing result in a second message queue of the target neural network model, where the target neural network model is configured to process the image data to be processed, and the second message queue is configured to record a processing result obtained by performing the first processing operation on the image data in the first message queue; and the second obtaining unit is used for obtaining a first processing result to be processed from the second message queue through the second target thread under the condition that the second message queue has the processing result to be processed, and executing a second processing operation on the first processing result to obtain a second processing result.
For specific embodiments, reference may be made to the example shown in the step-and-thread-based image processing method described above, and details in this example are not described here again.
According to the embodiment provided by the application, under the condition that image data to be processed exists in a first message queue of a target neural network model, first image data to be processed is obtained from the first message queue through a first target thread, first processing operation is performed on the first image data to obtain a first processing result, and the first processing result is stored in a second message queue of the target neural network model, wherein the target neural network model is used for processing the image data to be processed, and the second message queue is used for recording the processing result obtained by performing the first processing operation on the image data in the first message queue; and under the condition that the to-be-processed processing result exists in the second message queue, acquiring a to-be-processed first processing result from the second message queue through the second target thread, executing a second processing operation on the first processing result to obtain a second processing result, and respectively processing the data obtained from the message queue through multiple threads, wherein the threads do not interfere with each other, so that the purpose of accelerating the processing speed of the image data is achieved, and the technical effect of improving the image processing efficiency based on the step-by-step threads is realized.
As an alternative, the first obtaining unit 1102 includes:
the first execution module is used for executing preprocessing operation on the first image data through the first target thread to obtain a first preprocessing result.
For specific embodiments, reference may be made to the example shown in the step-and-thread-based image processing method described above, and details in this example are not described here again.
As an optional solution, the first execution module includes:
the first storage submodule is used for storing the first preprocessing result into a second message queue through a first target thread; or
And the second storing submodule is used for storing the first preprocessing result into the target memory address through the first target thread and storing the target memory address into the second message queue.
For specific embodiments, reference may be made to the example shown in the step-and-thread-based image processing method described above, and details in this example are not described here again.
As an optional scheme, the apparatus further comprises:
the third acquisition unit is used for acquiring a target image from the image to be processed through a third target thread under the condition that the image to be processed exists;
the decoding unit is used for decoding the target image through the third target thread to obtain target image data;
and the storage unit is used for storing the target image data into the first message queue through the third target thread.
For specific embodiments, reference may be made to the example shown in the step-and-thread-based image processing method described above, and details in this example are not described here again.
As an alternative, the decoding unit includes:
and the decoding module is used for decoding the image data of the target image into YUV image data through the third target thread, wherein the target image data comprises YUV image data.
For specific embodiments, reference may be made to the example shown in the step-and-thread-based image processing method described above, and details in this example are not described here again.
As an alternative, the second obtaining unit 1104 includes:
and the second execution module is used for executing inference processing operation on the first processing result through a second target thread to obtain a target inference processing result.
For specific embodiments, reference may be made to the example shown in the step-and-thread-based image processing method described above, and details in this example are not described here again.
As an optional solution, the second execution module further includes:
the third storage submodule is used for storing the target inference processing result into a third message queue through the second target thread; or
And the fourth storing submodule is used for storing the target inference processing result into the target memory address through the second target thread and storing the target memory address into the third message queue.
For specific embodiments, reference may be made to the example shown in the step-and-thread-based image processing method described above, and details in this example are not described here again.
As an optional scheme, the second obtaining unit 1104 further includes:
and the third execution module is used for executing post-processing operation on the target inference processing result through the second target thread to obtain a target post-processing result, wherein the post-processing operation carries out post-processing on the target inference processing result according to the service scene.
For specific embodiments, reference may be made to the example shown in the step-and-thread-based image processing method described above, and details in this example are not described here again.
As an optional scheme, the method further comprises the following steps:
and the fourth obtaining unit is used for obtaining the state identifier of the first processing result before the first processing result to be processed is obtained from the second message queue through the second target thread, wherein the state identifier is used for indicating that the first processing result is stored in the second message queue, and the state identifier is used for indicating that the first processing result is allowed to be obtained.
For specific embodiments, reference may be made to the example shown in the step-and-thread-based image processing method described above, and details in this example are not described here again.
According to yet another aspect of the embodiments of the present invention, there is also provided an electronic device for implementing the step-and-thread based image processing method, as shown in fig. 12, the electronic device includes a memory 1202 and a processor 1204, the memory 1202 stores a computer program, and the processor 1204 is configured to execute the steps of any one of the above method embodiments through the computer program.
Optionally, in this embodiment, the electronic apparatus may be located in at least one network device of a plurality of network devices of a computer network.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
s1, under the condition that image data to be processed exist in a first message queue of a target neural network model, acquiring the first image data to be processed from the first message queue through a first target thread, executing a first processing operation on the first image data to obtain a first processing result, and storing the first processing result into a second message queue of the target neural network model, wherein the target neural network model is used for processing the image data to be processed, and the second message queue is used for recording the processing result obtained by executing the first processing operation on the image data in the first message queue;
s2, when there is a to-be-processed processing result in the second message queue, obtaining a to-be-processed first processing result from the second message queue through the second target thread, and performing a second processing operation on the first processing result to obtain a second processing result.
Alternatively, it can be understood by those skilled in the art that the structure shown in fig. 12 is only an illustration, and the electronic device may also be a terminal device such as a smart phone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palm computer, a Mobile Internet Device (MID), a PAD, and the like. Fig. 12 is a diagram illustrating a structure of the electronic device. For example, the electronic device may also include more or fewer components (e.g., network interfaces, etc.) than shown in FIG. 12, or have a different configuration than shown in FIG. 12.
The memory 1202 may be used for storing software programs and modules, such as program instructions/modules corresponding to the step-and-thread based image processing method and apparatus in the embodiments of the present invention, and the processor 1204 executes various functional applications and data processing by running the software programs and modules stored in the memory 1202, so as to implement the step-and-thread based image processing method. The memory 1202 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 1202 can further include memory located remotely from the processor 1204, which can be connected to a terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. The memory 1202 may be, but not limited to, specifically configured to store information such as the first image data, the first processing result, and the second processing result. As an example, as shown in fig. 12, the memory 1202 may include, but is not limited to, a first obtaining unit 1102 and a second obtaining unit 1104 in the step-and-thread based image processing apparatus. In addition, other module units in the image processing apparatus based on the step-by-step thread can be included, but are not limited to these, and are not described in detail in this example.
Optionally, the transmitting device 1206 is configured to receive or send data via a network. Specific examples of the network may include a wired network and a wireless network. In one example, the transmitting device 1206 includes a Network Interface Controller (NIC), which can be connected to a router via a network cable so as to communicate with the Internet or a local area network. In another example, the transmitting device 1206 is a Radio Frequency (RF) module, which is used to communicate with the Internet in a wireless manner.
In addition, the electronic device further includes: a display 1208 for displaying information such as the first image data, the first processing result, and the second processing result; and a connection bus 1210 for connecting the respective modules in the above electronic device.
According to a further aspect of an embodiment of the present invention, there is also provided a computer-readable storage medium having a computer program stored thereon, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
Optionally, in this embodiment, the above computer-readable storage medium may be configured to store a computer program for performing the following steps:
S1, when image data to be processed exists in a first message queue of a target neural network model, acquiring first image data to be processed from the first message queue through a first target thread, performing a first processing operation on the first image data to obtain a first processing result, and storing the first processing result into a second message queue of the target neural network model, wherein the target neural network model is used for processing the image data to be processed, and the second message queue is used for recording the processing result obtained by performing the first processing operation on the image data in the first message queue;
S2, when a processing result to be processed exists in the second message queue, acquiring the first processing result to be processed from the second message queue through a second target thread, and performing a second processing operation on the first processing result to obtain a second processing result.
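The claims below also describe a variant of step S1 in which the first processing result is not placed into the second message queue directly; instead it is written to a target memory address and only that address is enqueued. A minimal sketch of this indirection, assuming an in-process dictionary as a stand-in for the target memory and using the hypothetical names result_store, store_first_result, and fetch_first_result, might look as follows:

```python
import itertools
import queue

# Stand-in for device memory: results are kept in a dictionary, and the queue
# only carries the key ("target memory address"), not the data itself.
result_store = {}
address_counter = itertools.count()
second_message_queue = queue.Queue()


def store_first_result(first_processing_result):
    # Store the first processing result at a target memory address and put the
    # address (rather than the possibly large result) into the second queue.
    address = next(address_counter)
    result_store[address] = first_processing_result
    second_message_queue.put(address)


def fetch_first_result():
    # The second target thread retrieves the address from the queue and then
    # reads the first processing result from that address.
    address = second_message_queue.get()
    return result_store.pop(address)


store_first_result({"tensor": [0.1, 0.2, 0.3]})   # assumed example payload
print(fetch_first_result())
```

Enqueueing an address rather than the payload keeps each queue entry small, which can matter when the first processing result is a large preprocessed image tensor.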
Optionally, in this embodiment, a person skilled in the art may understand that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing hardware related to the terminal device. The program may be stored in a computer-readable storage medium, and the storage medium may include a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in the above computer-readable storage medium. Based on such understanding, the part of the technical solution of the present invention that, in essence, contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The apparatus embodiments described above are merely illustrative. For example, the division of the units is merely a division by logical function, and there may be other division manners in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units, or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software functional unit.
The foregoing is only a preferred embodiment of the present invention. It should be noted that, for those of ordinary skill in the art, various improvements and modifications can be made without departing from the principle of the present invention, and these improvements and modifications shall also fall within the protection scope of the present invention.

Claims (15)

1. An image processing method based on step-by-step threads is characterized by comprising the following steps:
under the condition that image data to be processed exists in a first message queue of a target neural network model, acquiring first image data to be processed from the first message queue through a first target thread, executing a first processing operation on the first image data to obtain a first processing result, and storing the first processing result into a second message queue of the target neural network model, wherein the target neural network model is used for processing the image data to be processed, and the second message queue is used for recording the processing result obtained by executing the first processing operation on the image data in the first message queue; and
under the condition that a processing result to be processed exists in the second message queue, acquiring the first processing result to be processed from the second message queue through a second target thread, and executing a second processing operation on the first processing result to obtain a second processing result.
2. The method of claim 1, wherein performing a first processing operation on the first image data by a first target thread comprises:
and executing preprocessing operation on the first image data through the first target thread to obtain a first preprocessing result.
3. The method of claim 2, wherein storing the first processing result in a second message queue of the target neural network model by the first target thread comprises:
storing the first preprocessing result into the second message queue through the first target thread; or
storing the first preprocessing result into a target memory address through the first target thread, and storing the target memory address into the second message queue.
4. The method of claim 1, further comprising:
under the condition that an image to be processed exists, acquiring a target image from the image to be processed through a third target thread;
decoding the target image through the third target thread to obtain target image data;
and storing the target image data into the first message queue through the third target thread.
5. The method of claim 4, wherein decoding the target image by the third target thread to obtain target image data comprises:
decoding, by the third target thread, image data of the target image into YUV image data, wherein the target image data includes the YUV image data.
6. The method of claim 1, wherein after the first processing result to be processed is obtained from the second message queue through the second target thread, the method further comprises:
executing an inference processing operation on the first processing result through the second target thread to obtain a target inference processing result.
7. The method of claim 6, wherein after the inference processing operation is executed on the first processing result through the second target thread to obtain the target inference processing result, the method further comprises:
storing the target inference processing result into a third message queue through the second target thread; or
storing the target inference processing result into a target memory address through the second target thread, and storing the target memory address into the third message queue.
8. The method of claim 6, wherein after the first processing result to be processed is obtained from the second message queue through the second target thread, the method further comprises:
executing a post-processing operation on the target inference processing result through the second target thread to obtain a target post-processing result, wherein the post-processing operation post-processes the target inference processing result according to a service scenario.
9. The method according to any one of claims 1 to 8, further comprising, before the first processing result to be processed is obtained from the second message queue through the second target thread:
acquiring a state identifier of the first processing result, wherein the state identifier is used for indicating that the first processing result has been stored in the second message queue and for indicating that acquisition of the first processing result is allowed.
10. An image processing apparatus based on a step-by-step thread, comprising:
a first obtaining unit, configured to, when image data to be processed exists in a first message queue of a target neural network model, obtain, by a first target thread, first image data to be processed from the first message queue, perform a first processing operation on the first image data, obtain a first processing result, and store the first processing result in a second message queue of the target neural network model, where the target neural network model is configured to process the image data to be processed, and the second message queue is configured to record a processing result obtained by performing the first processing operation on the image data in the first message queue;
a second obtaining unit, configured to, when there is a to-be-processed processing result in the second message queue, obtain, by a second target thread, the to-be-processed first processing result from the second message queue, and perform a second processing operation on the first processing result to obtain a second processing result.
11. The apparatus of claim 10, wherein the first obtaining unit comprises:
and the first execution module is used for executing preprocessing operation on the first image data through the first target thread to obtain a first preprocessing result.
12. The apparatus of claim 11, wherein the first execution module comprises:
the first storage submodule is used for storing the first preprocessing result into the second message queue through a first target thread; or
And the second storing submodule is used for storing the first preprocessing result into a target memory address through the first target thread and storing the target memory address into the second message queue.
13. The apparatus of claim 10, further comprising:
the third acquisition unit is used for acquiring a target image from the image to be processed through a third target thread under the condition that the image to be processed exists;
the decoding unit is used for decoding the target image through the third target thread to obtain target image data;
and the storage unit is used for storing the target image data into the first message queue through the third target thread.
14. A computer-readable storage medium, comprising a stored program, wherein the program is operable to perform the method of any one of claims 1 to 9.
15. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to execute the method of any of claims 1 to 9 by means of the computer program.
CN202010955167.9A 2020-09-11 2020-09-11 Image processing method and device based on step-by-step threads and storage medium Pending CN112036503A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010955167.9A CN112036503A (en) 2020-09-11 2020-09-11 Image processing method and device based on step-by-step threads and storage medium


Publications (1)

Publication Number Publication Date
CN112036503A true CN112036503A (en) 2020-12-04

Family

ID=73588934

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010955167.9A Pending CN112036503A (en) 2020-09-11 2020-09-11 Image processing method and device based on step-by-step threads and storage medium

Country Status (1)

Country Link
CN (1) CN112036503A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016086744A1 (en) * 2014-12-02 2016-06-09 Shanghai United Imaging Healthcare Co., Ltd. A method and system for image processing
CN108491890A (en) * 2018-04-04 2018-09-04 百度在线网络技术(北京)有限公司 Image method and device
CN111338787A (en) * 2020-02-04 2020-06-26 浙江大华技术股份有限公司 Data processing method and device, storage medium and electronic device



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination