CN113158243A - Distributed image recognition model reasoning method and system - Google Patents


Info

Publication number
CN113158243A
Authority
CN
China
Prior art keywords
edge
image
neural network
convolutional neural
weight parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110414068.4A
Other languages
Chinese (zh)
Inventor
李领治
成聪
王进
谷飞
杨哲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou University
CERNET Corp
Original Assignee
Suzhou University
CERNET Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou University, CERNET Corp filed Critical Suzhou University
Priority to CN202110414068.4A
Publication of CN113158243A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 Protecting data
    • G06F21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218 Protecting access to data via a platform, e.g. using keys or access control rules, to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245 Protecting personal data, e.g. for financial or medical purposes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/04 Inference or reasoning models

Abstract

The invention discloses a distributed image recognition model inference method and system. The method comprises: constructing a convolutional neural network image classification model and segmenting it by layer to obtain the model's layer information and the weight parameter matrix and computation cost of each layer; determining the number of edge devices according to the layer information, and deploying the model across the edge devices according to the weight parameter matrix size and computation cost of each layer, combined with the storage space and computing capability of the edge devices; the edge devices then classify and recognize images using a linearly coded distributed convolutional neural network image classification inference scheme, and the recognition result is obtained through image inference computation. The system comprises edge devices on which the weight parameter matrices are deployed and which communicate with one another via linear coding. The invention ensures the stability of the image recognition process through reasonable deployment and avoids the straggler problem; by using the linearly coded distributed image classification inference scheme for classification and recognition, data security is protected.

Description

Distributed image recognition model reasoning method and system
Technical Field
The invention relates to the technical field of distributed image recognition, in particular to a distributed image recognition model reasoning method and system.
Background
Edge devices are devices that provide an entry point into an enterprise or service provider core network, such as routers, routing switches, integrated access devices, multiplexers, and various metropolitan and wide area network access devices. Edge devices come in many varieties, and in the era of the Internet of Everything they are widely used thanks to their low cost, fast response, and low latency.
Edge devices consist of an edge server and edge nodes. In distributed image inference, the computing pressure on the cloud server is usually high. By deploying the image classification model on edge devices, image classification inference can be completed at the edge, relieving the cloud server's computing pressure and saving network bandwidth. However, edge devices are physically dispersed, their variety brings varied performance, and this heterogeneity means their stability is only moderate. When edge devices participate in distributed image recognition, the time to complete the overall computation is bounded by the weakest device, giving rise to the straggler problem; meanwhile, an edge device that stores computation data can easily leak private data after a network attack.
Disclosure of Invention
Therefore, the technical problem to be solved by the invention is to overcome the defects of the prior art and provide a distributed image recognition model inference method and system that can deploy a convolutional neural network image classification model across edge devices, effectively avoid the straggler problem among edge devices during distributed image recognition, and improve the security of image data.
In order to solve the technical problem, the invention provides a distributed image recognition model inference method, which comprises the following steps:
Step 1: constructing a convolutional neural network image classification model and segmenting it by layer to obtain the model's layer information and the weight parameter matrix and computation cost of each layer;
Step 2: determining the number of edge devices according to the layer information, and deploying the convolutional neural network image classification model across the edge devices according to the weight parameter matrix size and computation cost of each layer, combined with the storage space and computing capability of the edge devices;
Step 3: the edge devices perform convolutional neural network classification and recognition on the image using a linearly coded distributed convolutional neural network image classification inference scheme, and the image recognition result is obtained through image inference computation.
Further, the static deployment scheme A used when the convolutional neural network image classification model is deployed across the edge devices in step 2 is as follows:
Step A1: detecting the performance of the edge nodes and ranking them by performance; ranking the weight parameter matrices of the layers according to the computation cost of each layer of the convolutional neural network image classification model;
Step A2: setting the inputs of static deployment scheme A, comprising a queue W of weight parameter matrices, a list Nodes of edge nodes, a list Me of the maximum storage capacity of each edge node, the length n of the list Nodes, and the number m of different edge nodes on which each weight parameter matrix is backup-deployed; each element of W stores the weight parameter matrix of one layer of the layer-segmented convolutional neural network image classification model together with the information of which layer it belongs to;
Step A3: obtaining the head element of the queue W, whose layer is the layer with the largest computation cost among the layers not yet deployed; judging whether the head element has been backup-deployed on m different edge nodes; if not, executing step A4; if so, executing step A5;
Step A4: traversing the list Nodes from the beginning; if the current edge node has enough storage space to hold the weight parameter matrix in the current head element, deploying the head element on that edge node and storing the layer information of the deployed head element and the information of the edge node in the deployment records Recodes;
Step A5: dequeuing the head element and executing step A6;
Step A6: repeating step A3 until all weight parameter matrices in the queue W are deployed, then ending and outputting the deployment records Recodes.
Further, detecting and ranking the performance of the edge nodes in step A1 specifically comprises: the edge server sends the same computation task to all edge nodes; after finishing the task, each edge node sends its computation result to the edge server; and the edge server ranks the edge nodes' performance by how quickly the results are returned.
Further, the dynamic deployment scheme B used when the convolutional neural network image classification model is deployed across the edge devices in step 2 proceeds as follows:
Step B1: setting the inputs of dynamic deployment scheme B, comprising, for each layer of the convolutional neural network image classification model, a matrix x_i of the same size as that layer's input data matrix and the layer's weight parameter matrix w_i; the list X of input data matrices composed of the x_i and the list W of weight parameter matrices composed of the w_i are both sorted by the computation cost of the layers to which x_i and w_i belong;
Step B2: the edge server obtains from the lists the input data matrix x_i and weight parameter matrix w_i of the layer with the largest remaining computation cost and sends them to the n edge nodes;
Step B3: each edge node judges whether it has enough storage capacity to deploy w_i; if so, it returns the result of x_i × w_i to the edge server; if not, it returns nothing;
Step B4: the edge server selects the m fastest edge nodes among those returning results and deploys the weight parameter matrix w_i on them;
Step B5: repeating steps B2-B4 until the weight parameter matrices of all layers of the convolutional neural network image classification model are deployed.
Further, the linearly coded distributed convolutional neural network image classification inference scheme in step 3 is a non-coding scheme, a 2-repetition scheme, or an MDS coding scheme, used to guarantee the stability of image recognition and the privacy and security of image data during the recognition process.
The invention also provides a distributed image recognition model inference system comprising edge devices, wherein the edge devices comprise an edge server and at least one edge node, and the edge nodes communicate with the edge server and with one another via linear coding;
the convolutional neural network image classification model is segmented by layer and deployed across the edge devices in the form of weight parameter matrices according to the layer information and the weight parameter matrix size and computation cost of each layer; the edge devices on which the same layer of the model is deployed form a distributed inference computation module;
an edge node on which no weight parameter matrix is deployed initiates an image recognition request and inputs image data, and the image data undergoes image inference computation on the current distributed inference computation module; after the current module finishes its image inference computation, its result is input to the distributed inference computation module of the next layer of the convolutional neural network image classification model, until the image data has passed through all layers of the model and the recognition result is obtained.
Further, each distributed inference computation module comprises a master edge device and several slave edge devices, and the image inference computation of each layer of the convolutional neural network image classification model proceeds as follows:
the master edge device receives and encodes the input data and distributes the encoded input data to the slave edge devices responsible for that layer's inference computation; the slave edge devices return their image inference computation results to the master edge device, which decodes the returned results to obtain the image inference computation result of the current layer.
The system further comprises an image acquisition device connected, through an edge node on which no weight parameter matrix is deployed, to the master edge device of the distributed inference computation module belonging to the first layer of the convolutional neural network image classification model; after that edge node initiates an image recognition request, the image acquisition device collects image data and transmits it to the edge node as the input image data.
Furthermore, at least the last layer of the convolutional neural network image classification model is deployed on the edge server, and the identification result obtained after the image data passes through all the layers of the convolutional neural network image classification model is output by the edge server.
Further, the weight parameter matrix of the convolutional neural network image classification model is deployed on a plurality of different edge devices in a backup mode.
Compared with the prior art, the technical scheme of the invention has the following advantages:
the convolutional neural network image classification model is deployed on the edge equipment according to the size of the weight parameter matrix of the layer and the size of the calculated amount, so that the stability of the image reasoning and calculation process during image recognition is ensured by reasonable deployment, and the problem of a queue-falling person is effectively avoided; in the image reasoning calculation process, the image is subjected to convolutional neural network classification and identification by using a distributed convolutional neural network image classification reasoning scheme of linear coding, so that the privacy and the safety of image data are protected.
Drawings
In order that the present disclosure may be more readily and clearly understood, reference will now be made in detail to the present disclosure, examples of which are illustrated in the accompanying drawings.
FIG. 1 is a flow chart of the method of the present invention.
Fig. 2 is a schematic diagram of a dynamic deployment scenario B in the method of the present invention.
FIG. 3 is a schematic diagram of privacy leakage of intermediate image data in the convolutional neural network image classification inference process in the method of the present invention.
Fig. 4 is a schematic illustration of a non-coding scheme in the method of the present invention.
FIG. 5 is a schematic diagram of the 2-repetition scheme in the method of the present invention.
Fig. 6 is a schematic diagram of the system of the present invention.
FIG. 7 is a network topology diagram of a distributed inference computation module in the system of the present invention.
FIG. 8 is a structural diagram of a convolutional neural network image classification model in an embodiment of the present invention.
Fig. 9 shows experimental results, in an embodiment of the present invention, for the time required to complete model deployment and 10 groups of image classification inference using a random deployment scheme C, the static deployment scheme A, and the dynamic deployment scheme B, when the delay sets follow uniform distributions with different value ranges.
Fig. 10 shows the corresponding experimental results when the delay sets follow normal distributions with the same mathematical expectation and different variances.
Fig. 11 shows the corresponding experimental results when the delay sets follow exponential distributions with different mathematical expectations.
Detailed Description
The present invention is further described below in conjunction with the following figures and specific examples so that those skilled in the art may better understand the present invention and practice it, but the examples are not intended to limit the present invention.
In the description of the present invention, it should be understood that the term "comprises/comprising" is intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to the listed steps or elements but may alternatively include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
As shown in the flowchart of fig. 1, an embodiment of a distributed image recognition model inference method according to the present invention includes:
step 1: constructing a Convolutional Neural Network (CNN) image classification model and segmenting according to layers to obtain layer information of the CNN image classification model, a weight parameter matrix of each layer and calculated quantity; the size and the calculation amount of the weight parameter matrix of each layer in the segmented convolutional neural network image classification model are different, so that proper edge equipment needs to be selected from edge equipment with different performances to deploy the weight parameter matrices on the edge equipment. The convolutional neural network image classification model is composed of a plurality of weight parameter matrixes and is divided into a plurality of layers, and each layer exists in the form of the weight parameter matrix. The image inference is essentially matrix multiplication calculation, and the multiplication with the weight parameter matrix is an image matrix.
Step 2: determining the number of edge devices according to the layer information, and deploying the convolutional neural network image classification model across the edge devices according to the weight parameter matrix size and computation cost of each layer, combined with the storage space and computing capability of the edge devices. Distributed deployment of the model means deploying the weight parameter matrix of each layer on the edge devices; because edge device performance varies, the computation cost of each layer must guide which device receives each layer's weight parameter matrix. High-performance edge devices receive the weight parameter matrices with large size and computation cost, while weaker devices receive the small ones; this reasonable deployment ensures the stability of the image inference computation during image recognition and effectively avoids the straggler problem. Either the static deployment scheme A or the dynamic deployment scheme B is used when the model is deployed across the edge devices.
The static deployment scheme A addresses the problem that a node found during probing cannot be guaranteed to currently have the best performance. The specific process of the static deployment scheme A is as follows:
Step A1: detecting the performance of the edge nodes and ranking them: the edge server sends the same computation task to all edge nodes, each edge node sends its computation result to the edge server after finishing the task, and the edge server ranks the edge nodes' performance by how quickly the results are returned; the weight parameter matrices of the layers are ranked according to the computation cost of each layer of the convolutional neural network image classification model;
Step A2: setting the inputs of static deployment scheme A, comprising a queue W of weight parameter matrices, a list Nodes of edge nodes, a list Me of the maximum storage capacity of each edge node, the length n of the list Nodes, and the number m of different edge nodes on which each weight parameter matrix is backup-deployed; each element of W stores the weight parameter matrix of one layer of the layer-segmented model together with the information of which layer it belongs to. Because the queue W and the list Nodes are both ordered, the static deployment algorithm ensures that the weight parameter matrix of a layer with large computation cost is deployed on a high-performance edge node.
Step A3: obtaining the head element of the queue W, whose layer is the layer with the largest computation cost among the layers not yet deployed; judging whether the head element has been backup-deployed on m different edge nodes; if not, executing step A4; if so, executing step A5;
Step A4: traversing the list Nodes from the beginning; if the current edge node has enough storage space to hold the weight parameter matrix in the current head element, deploying the head element on that edge node and storing the layer information of the deployed head element and the information of the edge node in the deployment records Recodes;
Step A5: dequeuing the head element and executing step A6;
Step A6: repeating step A3 until all weight parameter matrices in the queue W are deployed, then ending and outputting the deployment records Recodes.
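Steps A1-A6 can be sketched as follows. This is a hedged sketch, not the patent's exact algorithm: the data shapes (size-annotated layer ids, a capacity dict) are assumptions, and only the names W, Nodes, Me, m, and Recodes come from the patent's inputs.

```python
from collections import deque

def static_deploy(W, Nodes, Me, m):
    """Sketch of static scheme A.

    W:     (layer_id, matrix_size) pairs sorted by descending layer computation
    Nodes: edge-node ids ranked fastest-first by the probing of step A1
    Me:    dict mapping node id -> remaining storage capacity
    m:     number of different nodes each weight matrix is backed up on
    """
    queue = deque(W)
    recodes = []                      # deployment records ("Recodes")
    while queue:                      # A3: head = largest undeployed layer
        layer_id, size = queue[0]
        placed = 0
        for node in Nodes:            # A4: scan ranked nodes from the start
            if placed == m:
                break                 # head element backed up on m nodes
            if Me[node] >= size:      # node has room for this weight matrix
                Me[node] -= size
                recodes.append((layer_id, node))
                placed += 1
        queue.popleft()               # A5: dequeue, continue with next layer
    return recodes
```

Because both inputs are pre-sorted, the heaviest layers land on the fastest nodes first, which is the intent the patent states for ordering the queue W and the list Nodes.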
The dynamic deployment scheme B also deploys the weight parameter matrix of each layer on suitable edge nodes, but unlike the static scheme A, deployment in scheme B happens dynamically, at the same time as edge node performance is probed. As shown in fig. 2, the specific process of the dynamic deployment scheme B is as follows:
Step B1: setting the inputs of dynamic deployment scheme B, comprising, for each layer of the convolutional neural network image classification model, a matrix x_i of the same size as that layer's input data matrix and the layer's weight parameter matrix w_i; the list X of input data matrices composed of the x_i and the list W of weight parameter matrices composed of the w_i are both sorted in descending order of the computation cost of the layers to which x_i and w_i belong;
Step B2: the edge server obtains from the lists the input data matrix x_i and weight parameter matrix w_i of the layer with the largest remaining computation cost and sends them to the n edge nodes;
Step B3: each edge node judges whether it has enough storage capacity to deploy w_i; if so, it returns the result of x_i × w_i to the edge server; if not, it returns nothing;
Step B4: the edge server selects the m fastest edge nodes among those returning results and deploys the weight parameter matrix w_i on them;
Step B5: repeating steps B2-B4 until the weight parameter matrices of all layers of the convolutional neural network image classification model are deployed.
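Steps B1-B5 can be sketched as below. This is an assumption-laden simulation, not the patent's implementation: real node speed would be measured from the x_i × w_i round trips, whereas here a supplied probe_latency function stands in for those measurements.

```python
import numpy as np

def dynamic_deploy(X, W, caps, m, probe_latency):
    """Sketch of dynamic scheme B: probing and deployment in one pass.

    X, W: per-layer probe matrices x_i and weight matrices w_i, both sorted
          in descending order of layer computation cost (step B1)
    caps: dict mapping node id -> remaining storage capacity
    m:    number of backup nodes chosen per layer
    probe_latency: node id -> simulated response time (an assumption
          standing in for the measured speed of the returned result)
    """
    placement = {}
    for i, (x, w) in enumerate(zip(X, W)):
        replies = []
        for node, cap in caps.items():          # B2: broadcast x_i and w_i
            if cap >= w.size:                   # B3: node can store w_i
                _ = x @ w                       # node computes x_i × w_i
                replies.append((probe_latency(node), node))
        replies.sort()                          # B4: fastest responders first
        chosen = [node for _, node in replies[:m]]
        for node in chosen:
            caps[node] -= w.size                # deploy w_i on the m nodes
        placement[i] = chosen
    return placement
```

Unlike scheme A, no separate ranking phase is needed: the probe multiplication itself reveals which nodes are currently fast and have room.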
Step 3: the edge devices perform convolutional neural network classification and recognition on the image using a linearly coded distributed convolutional neural network image classification inference scheme, and the image recognition result is obtained through image inference computation. The linear coding scheme protects data privacy.
The linearly coded distributed convolutional neural network image classification inference scheme is a non-coding scheme, a 2-repetition scheme, or an MDS coding scheme, used to guarantee the stability of image recognition and the privacy and security of image data during recognition. As shown in fig. 3, the original content of the image data can still be seen in the intermediate image data computed during convolutional neural network classification and recognition, so there is a risk of privacy leakage of the image data; a non-coding scheme, 2-repetition scheme, or MDS coding scheme is therefore adopted to protect data security.
During the distributed deployment of the convolutional neural network image classification model, each layer of the model is backup-deployed on several edge nodes, so several images can undergo distributed image classification inference simultaneously under the non-coding scheme. Exploiting this feature, as shown in fig. 4, the non-coding scheme cuts and recombines the image data to protect its privacy. The left half of fig. 4 shows three image data matrices undergoing distributed image inference simultaneously; to protect the privacy of the original image data, they are segmented and then shuffled and recombined before being sent to the edge nodes on which the weight parameter matrices of the corresponding layer are deployed. After the new image data matrices are multiplied with the weight parameter matrices on the edge nodes and the results are returned, the final result can still be assembled according to the original cutting record. The data is encrypted during transmission, and because the edge nodes only ever compute on segmented, shuffled image data matrices, data security is maintained during computation as well. In short, the non-coding scheme protects the privacy of the image data during transmission and during computation on other edge nodes by horizontally segmenting several input image data matrices, then recombining and distributing them. Although the non-coding scheme is simple to implement, it has no way of handling delayed or lost results in distributed inference computation.
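A minimal sketch of the non-coding idea follows, under the assumption that each image matrix is split into exactly two horizontal fragments and that the edge-node multiplications happen locally (the fragment count, shapes, and function name are illustrative):

```python
import numpy as np

def noncoding_round(images, w, rng):
    """Non-coding scheme sketch: horizontally segment each image matrix,
    shuffle the fragments across images before distributing them, multiply
    each fragment by the layer's weight matrix (done on edge nodes in the
    real system), and reassemble the results from the cutting record."""
    frags = []
    for a in images:
        top, bottom = np.vsplit(a, 2)        # horizontal segmentation
        frags += [top, bottom]
    order = rng.permutation(len(frags))      # shuffle before distribution
    partials = [frags[j] @ w for j in order] # per-fragment edge-node work
    restored = [None] * len(frags)
    for pos, j in enumerate(order):          # undo the shuffle via the record
        restored[j] = partials[pos]
    return [np.vstack(restored[2 * i:2 * i + 2]) for i in range(len(images))]
```

The reassembled outputs equal the direct products image × w, because multiplying row blocks separately and stacking them is exactly the blockwise form of matrix multiplication; no single edge node ever holds a whole image.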
The 2-repetition scheme is an improvement on the non-coding scheme, aimed at the occasional delayed return or loss of computation results during distributed image classification inference. As shown in fig. 5, the 2-repetition scheme copies each new image data matrix after the image data matrices have been segmented and recombined, and distributes the resulting expanded set of matrices to the corresponding edge nodes for computation. If one edge node returns its result late, or loses its computation data, the round of image inference is not affected, because every computation task was duplicated when it was prepared: the same task is copied into two tasks that enter the distributed computation simultaneously, and when one copy is not completed in time, the result from the duplicated image data matrix is used instead. The 2-repetition scheme thus copes well with these edge node problems, fixing the non-coding scheme's inability to handle delayed or lost results in distributed image inference.
However, the 2-repetition scheme has a drawback. If there are n image data matrices, the scheme generates 2n new image data matrices. When combining the computation results, the device that distributed the matrices needs, in the best case, only n returned results to obtain the final result, but in the worst case it needs 2n-1. For example, with n = 5 tasks labeled 1, 2, 3, 4, 5, the 2-repetition scheme duplicates them to 1, 1, 2, 2, 3, 3, 4, 4, 5, 5. In the best case, the slaves return one copy of each of 1, 2, 3, 4, 5, so n results end the computation; in the worst case, the master receives 1, 1, 2, 2, 3, 3, 4, 4, 5, i.e. 2n-1 results, before every task is covered. The number of returned results required therefore varies, which makes the time required for distributed image inference unstable.
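The duplication logic can be sketched as below; the function name, the `lost_copies` set used to simulate stragglers, and the task labeling are all assumptions for illustration:

```python
def two_repetition_round(n_tasks, compute, lost_copies):
    """2-repetition sketch: every task is duplicated into two copies, and
    the round completes as soon as either copy of each task returns, so
    one straggling or lost copy per task cannot stall the round.
    lost_copies is the set of (task, copy) pairs that never return (a
    stand-in for stragglers and lost computation data)."""
    copies = [(t, c) for t in range(n_tasks) for c in (0, 1)]  # n -> 2n tasks
    results = {}
    for t, c in copies:
        if (t, c) in lost_copies:
            continue                       # this copy straggled or was lost
        results.setdefault(t, compute(t))  # first returned copy wins
    return results
```

As the text notes, anywhere between n and 2n-1 returned copies may be needed before every task id appears, which is the source of the scheme's unstable completion time.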
The MDS coding scheme, also known as maximum distance separable coding, is a redundant coding. The method can encode the calculation data to ensure the privacy and safety of the calculation data, and can also realize the redundancy of the calculation tasks so as to solve the problem that part of the calculation data is lost or the working nodes fall behind in the distributed reasoning calculation. The encoding scheme is composed of an encoding matrix, an original data matrix and a decoding matrix, and a master computing device and a slave computing device exist in distributed computing. MDS coding is linear coding, in the coding stage, the master device uses a coding matrix to multiply an original data matrix for coding, and the coded data is sent to the slave device for calculation. The linear coding protects the security of the data during data transmission and during computation on the slave device. Because the encoding matrix and the decoding matrix are both stored in the main device, even if an attacker collects the matrix block coded by the original matrix, the coded original matrix block cannot be decoded, thereby protecting the safety of data. The MDS coding scheme is divided into two parts, encoding and decoding, wherein an encoding matrix E used in the encoding processm×n(m > n) is a vandermonde matrix because any n row vectors in an m row by n column vandermonde matrix constitute a reversible matrix. Encoding of a matrix of image data for an MDS encoding scheme into
[A'_1, A'_2, ..., A'_m]^T = E_{m×n} · [A_1, A_2, ..., A_n]^T
where A_i denotes an input image data matrix: n images undergo distributed CNN image classification inference at the same time, and MDS coding produces m redundant encoded image data matrices A'_i. After the m encoded image data matrices are sent to the corresponding edge nodes for computation, decoding can be completed as soon as any n computation results are returned. The decoding process is
[R_1, R_2, ..., R_n]^T = E_{n×n}^{-1} · [R'_{j_1}, R'_{j_2}, ..., R'_{j_n}]^T

where R'_{j_1}, ..., R'_{j_n} are the n returned calculation results and R_i is the calculation result for the original image data matrix A_i. First, according to the n returned calculation results, the n encoding vectors of the encoding matrix E_{m×n} that participated in encoding the n original image data matrices are found; combined in order, these n vectors form a matrix E_{n×n}, whose inverse E_{n×n}^{-1} is the decoding matrix for the n calculation results. When the matrix whose elements are the n returned calculation results is multiplied by the decoding matrix E_{n×n}^{-1}, the final calculation result of this round is decoded. The MDS coding scheme uses the Vandermonde matrix as its encoding matrix, which brings the advantage that, of the m computation tasks in distributed inference, returning any n computation results enables successful decoding, a property the 2-replication scheme does not have. The Vandermonde matrix also has a drawback as an encoding matrix: as its number of rows grows, the values of its elements grow, so the amount of computation required for encoding and decoding keeps increasing. Therefore, during distributed CNN image classification inference, the number of images inferred simultaneously can be reduced to cut the encoding and decoding cost of the MDS coding scheme while still handling the straggler problem and keeping data privacy secure. A distributed image classification inference scheme using redundant coding thus copes well with the straggler problem, accelerates image inference, and ensures that the privacy protection of the image data meets the weak security standard.
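A minimal NumPy sketch of one MDS encode-compute-decode round, under illustrative assumptions (2 × 2 data blocks, a random weight matrix W, and an arbitrary set of n returning nodes; the patent does not fix these details):

```python
import numpy as np

n, m = 3, 5                       # n original matrices, m encoded tasks
rng = np.random.default_rng(0)

# n original image data matrices A_1 ... A_n
A = [rng.integers(0, 10, size=(2, 2)).astype(float) for _ in range(n)]

# m x n Vandermonde encoding matrix: any n of its rows form an invertible matrix
E = np.vander(np.arange(1, m + 1, dtype=float), N=n, increasing=True)

# Encoding: A'_j = sum_i E[j, i] * A_i, a linear combination of the originals
A_enc = [sum(E[j, i] * A[i] for i in range(n)) for j in range(m)]

# Each of the m edge nodes multiplies its encoded block by the same weights W
W = rng.integers(0, 5, size=(2, 2)).astype(float)
results = {j: A_enc[j] @ W for j in range(m)}

# Suppose only nodes 4, 1 and 3 return; any n results are enough to decode
returned = [4, 1, 3]
E_sub = E[returned, :]            # n x n submatrix, invertible (Vandermonde)
D = np.linalg.inv(E_sub)          # decoding matrix E_{nxn}^{-1}
decoded = [sum(D[i, k] * results[j] for k, j in enumerate(returned))
           for i in range(n)]

for i in range(n):
    assert np.allclose(decoded[i], A[i] @ W)   # each A_i x W is recovered
```

Because the coding is linear, decoding the returned products with E_{n×n}^{-1} yields exactly the n uncoded products, regardless of which n of the m nodes happened to respond.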
As shown in fig. 6, an embodiment of the distributed image recognition model inference system of the present invention includes edge devices, namely an edge server and at least one edge node; the edge server and the edge nodes communicate with each other by way of linear coding, which protects the security of the image data during the image classification inference of image recognition.
The convolutional neural network image classification model is divided by layers and deployed, in the form of weight parameter matrices, across the edge devices according to the layer information and the size and computation amount of each layer's weight parameter matrix. Each weight parameter matrix of the model is deployed as backups on several different edge devices, and the edge devices holding the same layer of the model form a distributed inference calculation module. As shown in fig. 7, a distributed inference calculation module comprises one master edge device and several slave edge devices; in this embodiment, the module running on the master and slave edge nodes is designed and developed with Python's multiprocessing library. The image inference computation of each layer of the model proceeds as follows: the master edge device of the current layer's distributed inference calculation module encodes the input image data and distributes the encoded computation data to the slave edge devices responsible for this layer's inference computation; the slave edge devices return their image inference results to the master edge device, and the master decodes the returned results to obtain the current layer's image inference result.
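The master/slave pattern of the distributed inference calculation module can be sketched as follows. This is a simplification: a thread pool from Python's multiprocessing package stands in for the slave edge devices, the linear coding step is omitted, and all names are invented for illustration:

```python
import numpy as np
from multiprocessing.pool import ThreadPool  # stand-in for separate edge devices

LAYER_W = None  # the layer's weight parameter matrix, shared with the "slaves"

def slave_compute(block):
    """A slave edge device multiplies its assigned input block by the weights."""
    return block @ LAYER_W

def master_layer_forward(x, weight, n_slaves=4):
    """The master splits the layer input into row blocks, farms them out to
    the slaves, and stitches the returned partial results back together."""
    global LAYER_W
    LAYER_W = weight
    blocks = np.array_split(x, n_slaves, axis=0)
    with ThreadPool(n_slaves) as pool:
        parts = pool.map(slave_compute, blocks)
    return np.vstack(parts)

rng = np.random.default_rng(1)
x = rng.standard_normal((8, 16))        # this layer's (vectorized) input
weight = rng.standard_normal((16, 4))   # this layer's weight parameter matrix
out = master_layer_forward(x, weight)
assert np.allclose(out, x @ weight)     # distributed result matches local matmul
```

In the real system each block would additionally be linearly encoded before distribution and the returned results decoded, as described above.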
The edge nodes without a weight parameter matrix initiate image recognition requests and input the image data; such an edge node can be a smartphone, a vending machine, a smart trash can, and the like. An edge node is chosen as the input point for image data because it sits closer to the user than other types of equipment on the internet, so data transmission time is short, and because its equipment cost is lower than a server's, so faster and cheaper image classification services can be offered to users. The image data undergoes image inference computation on the current distributed inference calculation module; once that module's image inference computation finishes, its result is fed into the distributed inference calculation module of the next layer of the convolutional neural network image classification model, until the image data has passed through all layers of the model and the recognition result is obtained.
The system further comprises an image acquisition device, a camera that captures photo or video image data. The image acquisition device is connected, through an edge node without a weight parameter matrix, to the master edge device of the distributed inference calculation module holding the first layer of the convolutional neural network image classification model. After the edge node without a weight parameter matrix initiates an image recognition request, the image data captured by the image acquisition device is passed to that edge node as the input image data. The edge node initiating the request holds no weight parameter matrix and is connected to the camera; its main role is to initiate and sustain the inference computation of each layer. At least the last layer of the convolutional neural network image classification model is deployed on the edge server, which outputs the recognition result obtained after the image data has passed through all layers of the model. The inference of at least the last layer of the distributed image recognition model inference system is thus performed on the edge server. The recognition result obtained after inference through all layers is a label value: for example, if the input image data is a picture of a taxi, the label value produced by the system is "taxi".
To further illustrate the benefit of the weight parameter matrix deployment in the present invention, the convolutional neural network image classification model shown in fig. 8 is built in this embodiment for a simulation experiment. The model has 15 layers: layer 1 is a convolutional layer with 3 × 3 kernels and 64 output feature maps; layer 2 is a pooling layer; layers 3 and 4 are convolutional layers with 3 × 3 kernels and 128 output feature maps each; layer 5 is a pooling layer; layers 6, 7 and 8 are convolutional layers with 3 × 3 kernels and 256 output feature maps each; layer 9 is a pooling layer; layers 10, 11 and 12 are convolutional layers with 3 × 3 kernels and 512 output feature maps each; layers 13 and 14 are fully connected layers with 1024 outputs each; layer 15 is a fully connected layer. The model is trained and then vectorized, turning the inference of images from the 28 × 28 MNIST data set (a data set of handwritten digit images) into matrix multiplications; the input data matrix size and the weight parameter matrix size to be stored for each computation layer after vectorization are shown in table 1. Table 1 shows that most of the computation layers in the model are convolutional layers and that the input data matrix and weight parameter matrix sizes differ from layer to layer. In the last computation layer, a fully connected layer, the product of the input data matrix and the weight parameter matrix is a 1 × 10 matrix, because the images in the data set used in this embodiment have 10 recognition results (i.e., label values).
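One common way to vectorize a convolutional layer into a matrix multiplication is im2col unrolling; the sketch below assumes this technique (the patent does not state its exact vectorization procedure) and uses the first layer's parameters, a 28 × 28 input and 64 kernels of size 3 × 3:

```python
import numpy as np

def im2col(image, k):
    """Unroll every k x k patch of a 2-D image into a row, so that the
    convolution becomes a single matrix multiplication."""
    h, w = image.shape
    rows = []
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            rows.append(image[i:i + k, j:j + k].ravel())
    return np.array(rows)

rng = np.random.default_rng(0)
img = rng.standard_normal((28, 28))      # one MNIST-sized input image
kernels = rng.standard_normal((9, 64))   # 64 flattened 3x3 kernels

cols = im2col(img, 3)                    # (26*26, 9) input data matrix
feature_maps = cols @ kernels            # (676, 64): 64 output feature maps

# Cross-check one entry against a direct 3x3 convolution at position (0, 0)
direct = np.sum(img[0:3, 0:3].ravel() * kernels[:, 0])
assert np.isclose(feature_maps[0, 0], direct)
```

The pair (input data matrix, weight parameter matrix) produced this way is what table 1 tabulates per layer.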
(Table 1 appears as an image in the original document.)

Table 1: Input data matrix size and weight parameter matrix size of each computation layer after vectorization
How the weight parameter matrices of the convolutional layers are distributed across the edge nodes affects the model inference performance of the whole distributed image recognition system. In the simulation experiment, the weight parameter matrices of the fully connected layers are first deployed on the edge server, the weight parameter matrices of the convolutional layers are distributed across the edge nodes, and the weight parameter matrix of each convolutional layer is backup-deployed on several edge nodes.
When distributing the convolutional neural network image classification model across the edge devices, 20 distributed processes simulate 20 edge nodes, and each weight parameter matrix is required to be backup-deployed on 3 edge nodes. Based on the total size of the convolutional layers' weight parameter matrices in the vectorized CNN, the total number of weight parameter matrix elements deployed on any single edge node is capped at 4,000,000.
During the inference computation for image classification and recognition, 3 images are inferred simultaneously. A random deployment scheme C is used as a baseline for comparison with the static deployment scheme A and the dynamic deployment scheme B used in the invention.
Random deployment scheme C finds an edge device for each weight parameter matrix quickly, but cannot guarantee that the chosen edge device currently performs best. The specific process of random deployment scheme C is as follows:
step C1: set the inputs of random deployment scheme C: a queue W of weight parameter matrices, a list Nodes of edge nodes, a list Me of the maximum storage capacities of the edge nodes, the length n of the list Nodes, and the number m of different edge nodes on which a single weight parameter matrix is backup-deployed. Each element of W stores the weight parameter matrix of one layer of the layer-wise divided convolutional neural network image classification model, together with the information of which layer of the model that matrix belongs to. Each weight parameter matrix is backup-deployed on m edge nodes to improve the stability of the whole distributed image inference scheme;
step C2: while the queue W is not empty, take the current head element of W; it contains the weight parameter matrix of one layer of the CNN image classification model and that layer's information. If the weight parameter matrix in the head element has not yet been backup-deployed on m edge nodes, execute step C3; if it has, execute step C5;
step C3: draw one of the n edge nodes at random; if the current edge node does not yet hold the weight parameter matrix in the head element and has enough space for it, deploy that weight parameter matrix on the current edge node;
step C4: take from the current head element the information of the layer of the convolutional neural network image classification model to which the weight parameter matrix belongs, store that layer information together with the information of the edge node receiving the deployment in the deployment records, and execute step C6;
step C5: dequeue the head element and execute step C6;
step C6: repeat step C2 until every element of the queue W has been dequeued; then finish and output the deployment records.
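Steps C1-C6 can be condensed into a short Python sketch under simplifying assumptions: each queue element is reduced to a (layer, element count) pair, and node capacities are large enough that the random search terminates:

```python
import random

def random_deploy(W, nodes, Me, m):
    """Random deployment scheme C. W: list of (layer, n_elements) queue
    elements; nodes: edge node ids; Me: element capacity per node;
    m: number of backup copies each weight parameter matrix needs."""
    used = {node: 0 for node in nodes}
    records = []                        # deployment records: (layer, node)
    queue = list(W)
    while queue:                        # step C2: inspect the head element
        layer, size = queue[0]
        deployed = [node for l, node in records if l == layer]
        if len(deployed) >= m:          # backups complete: dequeue (step C5)
            queue.pop(0)
            continue
        node = random.choice(nodes)     # step C3: draw a node at random
        if node not in deployed and used[node] + size <= Me[node]:
            used[node] += size
            records.append((layer, node))   # step C4: record the deployment
    return records                      # step C6: output deployment records

random.seed(0)
W = [(1, 100), (2, 200), (3, 150)]      # three layers' weight matrix sizes
nodes = list(range(6))
Me = {node: 1000 for node in nodes}
records = random_deploy(W, nodes, Me, m=3)
assert len(records) == 3 * len(W)       # every matrix backed up on m = 3 nodes
```

Note that the random draw may repeatedly hit full or already-used nodes, which is exactly why scheme C cannot guarantee the chosen devices perform best.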
Delay sets T with different distributions and different values are placed on the 20 edge nodes to test the performance of the three distributed deployment schemes: random deployment scheme C, static deployment scheme A, and dynamic deployment scheme B. When the delay set T follows a uniform distribution, T is drawn from several value ranges between 1-5 and 1-30. When T follows a normal distribution, its mathematical expectation is fixed at 10 and its variance takes values between 11 and 16. When T follows an exponential distribution, its mathematical expectation takes values between 11 and 16.
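The three families of delay sets can be generated with NumPy's random generators; the concrete parameters below pick one point from each range given above (range [1, 5], variance 11, expectation 11):

```python
import numpy as np

rng = np.random.default_rng(42)
n_nodes = 20                                   # one delay value per edge node

# Uniform: values drawn from [1, 5]; wider ranges up to [1, 30] are also tested
t_uniform = rng.uniform(1, 5, size=n_nodes)

# Normal: expectation fixed at 10, variance varied from 11 to 16
t_normal = rng.normal(loc=10, scale=np.sqrt(11), size=n_nodes)

# Exponential: expectation (the scale parameter) varied from 11 to 16
t_exp = rng.exponential(scale=11, size=n_nodes)
```

Each array plays the role of one delay set T injected into the 20 simulated edge nodes before a deployment-plus-inference run.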
After the model is distributed with random deployment scheme C, static deployment scheme A, and dynamic deployment scheme B in turn, 10 groups of image classification inference are run with the non-coded image classification inference scheme on the edge nodes holding the deployed model, and the completion times are recorded. Fig. 9 shows the time needed to complete model deployment plus the 10 groups of image classification inference under the three schemes when the delay sets follow uniform distributions with different value ranges; fig. 10 shows the corresponding results when the delay sets follow normal distributions with the same mathematical expectation and different variances; fig. 11 shows the results when the delay sets follow exponential distributions with different mathematical expectations.
As fig. 9 shows, as the value range of the uniformly distributed delay sets (abscissa) widens, the sum of model deployment time and the time to finish inference on the 10 groups of images (ordinate) generally increases; over all value ranges of the delay set T, the static deployment scheme A and dynamic deployment scheme B proposed by the invention always complete the experimental task in less time than random deployment scheme C. Fig. 10 shows that, with the mathematical expectation of the normally distributed delay set fixed and the variance varied, static scheme A and dynamic scheme B again outperform random scheme C; their times are close, with static scheme A slightly ahead of dynamic scheme B. In fig. 11 the delay sets on the 20 edge nodes follow exponential distributions; as the mathematical expectation of T rises from 11 to 16, schemes A and B maintain stable and strong performance in the total time for deployment plus inference on the 10 groups of images. The two schemes remain close; only under exponentially distributed delays is dynamic scheme B slightly better than static scheme A, and both are better than random scheme C. Taken together, figs. 9, 10 and 11 show that both static scheme A and dynamic scheme B perform well when the convolutional neural network image classification model is deployed in a distributed manner, which further shows that they place the model on the edge nodes more sensibly, ensuring the stability of the image inference computation during image recognition and effectively avoiding the straggler problem.
Compared with the prior art, the technical scheme of the invention has the following advantages: the convolutional neural network image classification model is deployed on the edge devices according to each layer's weight parameter matrix size and computation amount, so that this reasonable deployment ensures the stability of the image inference computation during image recognition and effectively avoids the straggler problem; during the image inference computation, a linearly coded distributed convolutional neural network image classification inference scheme performs the convolutional neural network classification and recognition of the image, protecting the privacy and security of the image data.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be understood that the above examples are given only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; there is neither a need nor a way to exhaust all embodiments here, and obvious variations or modifications may still be made without departing from the spirit or scope of the invention.

Claims (10)

1. A model inference method for distributed image recognition is characterized by comprising the following steps:
step 1: constructing a convolutional neural network image classification model and segmenting according to layers to obtain layer information of the convolutional neural network image classification model, a weight parameter matrix of each layer and calculated quantity;
step 2: determining the number of edge devices according to the layer information, and deploying the convolutional neural network image classification model on the edge devices according to the weight parameter matrix size and the calculated amount of each layer by combining the storage space and the calculation capacity of the edge devices;
step 3: the edge devices use a linearly coded distributed convolutional neural network image classification inference scheme to carry out convolutional neural network classification and recognition on the image, and the image recognition result is obtained through image inference calculation.
2. The model inference method for distributed image recognition according to claim 1, characterized in that: in the step 2, a static deployment scheme A is used when the convolutional neural network image classification model is deployed on the edge devices in a distributed manner, and the specific process is as follows:
step A1: detecting the performance of the edge nodes and ranking them by performance, and ranking the weight parameter matrices of the layers according to the computation amount of each layer of the convolutional neural network image classification model;
step A2: setting the input of the static deployment scheme A, including a queue W of weight parameter matrices, a list Nodes of edge nodes, a list Me of the maximum storage capacities of the edge nodes, the length n of the list Nodes, and the number m of different edge nodes on which a single weight parameter matrix is backup-deployed, wherein each element in W stores a weight parameter matrix of one layer of the layer-wise divided convolutional neural network image classification model and the information of which layer of the convolutional neural network model the weight parameter matrix belongs to;
step A3: acquiring a head element from a queue W, wherein a layer to which the head element belongs is a layer with the largest calculated amount in layers which are not deployed in a current convolutional neural network image classification model, judging whether all elements in the queue W are backed up and deployed on m different edge nodes, and if not, executing the step A4; if yes, go to step A5;
step A4: traversing the node list from the beginning, if the current edge node has enough storage space for deploying the weight parameter matrix in the current head-of-line element, deploying the current head-of-line element on the current edge node, and storing the layer information and the information of the edge node in the deployed current head-of-line element in deployment records Recodes;
step A5: dequeuing the head-of-line element, and executing step A6;
step A6: repeating the step A3 until the weight parameter matrices in the queue W are completely deployed, then ending and outputting the deployment records Recodes.
3. The model inference method for distributed image recognition according to claim 2, characterized in that: the detecting of the performance of the edge nodes and the ranking of their performance in the step A1 specifically comprise: the edge server sends the same calculation task to all edge nodes; after completing the task, the edge nodes send their calculation results to the edge server, and the edge server ranks the performance of the edge nodes according to the speed with which the calculation results are returned.
4. The model inference method for distributed image recognition according to claim 1, characterized by: in the step 2, a dynamic deployment scheme B is used when the convolutional neural network image classification model is deployed on the edge device in a distributed manner, and the specific process is as follows:
step B1: setting the inputs of the dynamic deployment scheme B, including, for each layer of the convolutional neural network image classification model, a matrix x_i of the same size as that layer's input data matrix and a weight parameter matrix w_i, a list X composed of the input data matrices x_i, and a list W composed of the weight parameter matrices w_i, both sorted by the computation amount of the layers to which x_i and w_i belong;
step B2: the edge server obtains the input data matrix x_i of the layer with the largest current computation amount and the largest weight parameter matrix w_i from the weight parameter matrix list W, and sends them to n edge nodes;
step B3: each edge node determines whether it has sufficient storage capacity to deploy w_i; if yes, it returns the result of x_i × w_i to the edge server; if not, it returns no result;
step B4: the edge server selects the m fastest edge nodes among those returning a result and deploys the weight parameter matrix w_i on them;
step B5: repeating the steps B2-B4 until the weight parameter matrices of all layers of the convolutional neural network image classification model are deployed.
5. The model inference method for distributed image recognition according to any of claims 1-4, characterized in that: the linearly coded distributed convolutional neural network image classification inference scheme in the step 3 is a non-coding scheme, a 2-replication scheme or an MDS coding scheme, used to ensure the stability of image recognition and the privacy and security of the image data during image recognition.
6. A distributed image recognition model inference system, characterized in that: it comprises edge devices, the edge devices comprising an edge server and at least one edge node, wherein the edge server and the edge nodes communicate with each other by way of linear coding;
the convolutional neural network image classification model is divided according to layers and is distributed and deployed on edge equipment in a weight parameter matrix mode according to layer information and the size and the calculated amount of the weight parameter matrix of each layer, and the edge equipment on the same layer with the convolutional neural network image classification model is deployed to form a distributed reasoning calculation module;
the edge nodes without the weight parameter matrix initiate image identification requirements and input image data, and the image data is subjected to image reasoning calculation on the current distributed reasoning calculation module; and after the image reasoning calculation of the current distributed reasoning calculation module is finished, inputting the result to a distributed reasoning calculation module at the next layer of the convolutional neural network image classification model for image reasoning calculation until the image data passes through all layers of the convolutional neural network image classification model to obtain an identification result.
7. The distributed image recognition model inference system of claim 6, characterized in that: the distributed inference calculation module comprises a main edge device and a plurality of auxiliary edge devices, and the image inference calculation process of each layer of the convolutional neural network image classification model is as follows:
the main edge device receives and encodes the input data, and distributes the encoded input data to the auxiliary edge devices responsible for that layer's inference calculation; the auxiliary edge devices return their image inference calculation results to the main edge device, and the main edge device decodes the returned results to obtain the image inference calculation result of the current layer.
8. The distributed image recognition model inference system of claim 7, characterized in that: an image acquisition device is connected, through an edge node without a weight parameter matrix, to the main edge device of the distributed inference calculation module holding the first layer of the convolutional neural network image classification model; after the edge node without a weight parameter matrix initiates an image recognition request, the image acquisition device acquires image data and transmits it to that edge node as the input image data.
9. The distributed image recognition model inference system of claim 6, wherein: at least the last layer of the convolutional neural network image classification model is deployed on the edge server, and the identification result obtained after the image data passes through all the layers of the convolutional neural network image classification model is output by the edge server.
10. The distributed image recognition model inference system of any of claims 6-9, wherein: the weight parameter matrix of the convolutional neural network image classification model is deployed on a plurality of different edge devices in a backup mode.
CN202110414068.4A 2021-04-16 2021-04-16 Distributed image recognition model reasoning method and system Pending CN113158243A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110414068.4A CN113158243A (en) 2021-04-16 2021-04-16 Distributed image recognition model reasoning method and system

Publications (1)

Publication Number Publication Date
CN113158243A 2021-07-23

Family

ID=76868222

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110414068.4A Pending CN113158243A (en) 2021-04-16 2021-04-16 Distributed image recognition model reasoning method and system

Country Status (1)

Country Link
CN (1) CN113158243A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110717574A (en) * 2018-07-11 2020-01-21 杭州海康威视数字技术股份有限公司 Neural network operation method and device and heterogeneous intelligent chip
WO2020133317A1 (en) * 2018-12-29 2020-07-02 华为技术有限公司 Computing resource allocation technology and neural network system
CN112579285A (en) * 2020-12-10 2021-03-30 南京工业大学 Edge network-oriented distributed neural network collaborative optimization method
CN112612601A (en) * 2020-12-07 2021-04-06 苏州大学 Intelligent model training method and system for distributed image recognition

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114021075A (en) * 2021-11-12 2022-02-08 福建师范大学 Code matrix multiplication method utilizing computational capability of dequeue node
CN114138501A (en) * 2022-02-07 2022-03-04 杭州智现科技有限公司 Processing method and device for edge intelligent service for field safety monitoring
CN114138501B (en) * 2022-02-07 2022-06-14 杭州智现科技有限公司 Processing method and device for edge intelligent service for field safety monitoring
WO2023193169A1 (en) * 2022-04-07 2023-10-12 Huawei Technologies Co.,Ltd. Method and apparatus for distributed inference
WO2024051222A1 (en) * 2022-09-09 2024-03-14 中国电信股份有限公司 Machine vision defect recognition method and system, edge side device, and storage medium
CN116974654A (en) * 2023-09-21 2023-10-31 浙江大华技术股份有限公司 Image data processing method and device, electronic equipment and storage medium
CN116974654B (en) * 2023-09-21 2023-12-19 浙江大华技术股份有限公司 Image data processing method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN113158243A (en) Distributed image recognition model reasoning method and system
Zhang et al. Rest: An efficient transformer for visual recognition
CN108090521B (en) Image fusion method and discriminator of a generative adversarial network model
CN108764317B (en) Residual convolutional neural network image classification method based on multipath feature weighting
CN110572362A (en) network attack detection method and device for multiple types of unbalanced abnormal traffic
Han et al. Signal processing and networking for big data applications
TW201935327A (en) Adjustment method for convolutional neural network and electronic apparatus
Brandão et al. A biased random‐key genetic algorithm for scheduling heterogeneous multi‐round systems
CN108664993B (en) Dense weight connection convolutional neural network image classification method
WO2018120723A1 (en) Video compressive sensing reconstruction method and system, and electronic apparatus and storage medium
CN110298446A (en) The deep neural network compression of embedded system and accelerated method and system
CN111240746A (en) Floating point data inverse quantization and quantization method and equipment
CN113672369A (en) Method and device for verifying ring of directed acyclic graph, electronic equipment and storage medium
CN112132279A (en) Convolutional neural network model compression method, device, equipment and storage medium
CN112749666A (en) Training and motion recognition method of motion recognition model and related device
CN112799852B (en) Multi-dimensional SBP distributed signature decision system and method for logic node
CN110135428A (en) Image segmentation processing method and device
CN112612601A (en) Intelligent model training method and system for distributed image recognition
CN115599541A (en) Sorting device and method
CN111582284B (en) Privacy protection method and device for image recognition and electronic equipment
CN110728351A (en) Data processing method, related device and computer storage medium
CN108564155A (en) Smart card method for customizing, device and server
CN116468947A (en) Cutter image recognition method, cutter image recognition device, computer equipment and storage medium
CN116957041A (en) Method, device and computing equipment for compressing neural network model
Alpaydin Multiple neural networks and weighted voting

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
AD01 Patent right deemed abandoned

Effective date of abandoning: 20240517