CN115187847A - Image joint reasoning identification system and method based on cloud edge architecture - Google Patents


Info

Publication number
CN115187847A
Authority
CN
China
Prior art keywords
cloud
edge
image
image recognition
computing center
Prior art date
Legal status
Pending
Application number
CN202210832612.1A
Other languages
Chinese (zh)
Inventor
陈静
肖恭翼
郭莹
李娜
孙浩
李文
张传福
Current Assignee
Qilu University of Technology
Shandong Computer Science Center National Super Computing Center in Jinan
Original Assignee
Qilu University of Technology
Shandong Computer Science Center National Super Computing Center in Jinan
Priority date
Filing date
Publication date
Application filed by Qilu University of Technology and Shandong Computer Science Center National Super Computing Center in Jinan
Priority to CN202210832612.1A
Publication of CN115187847A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/94Hardware or software architectures specially adapted for image or video understanding
    • G06V10/95Hardware or software architectures specially adapted for image or video understanding structured as a network, e.g. client-server architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/04Inference or reasoning models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/06Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1001Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1004Server selection for load balancing
    • H04L67/1008Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1001Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1004Server selection for load balancing
    • H04L67/101Server selection for load balancing based on network conditions
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1095Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources

Abstract

The invention relates to an image joint reasoning and identification system and method based on a cloud-edge-terminal architecture, comprising a cloud platform layer, a communication network layer, an edge layer and a terminal layer. The cloud platform layer comprises a cloud computing center, a database, a cloud file storage system and a mirror image warehouse. The edge layer comprises the devices integrated in the target image recognition scene, including edge devices and a local file storage system. The terminal layer consists of terminal devices with data acquisition functions in the target image recognition scene, which transmit the real-time images and videos they capture to the edge layer over a wired or wireless network for detection. By adopting a cloud-edge joint reasoning algorithm, the invention overcomes the drawback of deploying the model only on the cloud or only on the edge, can detect and recognize more targets faster, makes full use of cloud and edge resources, and is better suited to complex multi-target image recognition and detection workloads.

Description

Image joint reasoning identification system and method based on cloud edge architecture
Technical Field
The invention belongs to the cross field of artificial intelligence processing and edge computing, performing joint reasoning on images or video stream data to be detected. In particular, it relates to an image joint reasoning system and method based on artificial intelligence models, a cloud-edge-terminal architecture and a joint reasoning algorithm.
Background
Artificial intelligence is widely applied to target recognition and, through cloud computing and edge computing technologies, has expanded to multi-target recognition fields such as marine fish recognition, pathological image recognition and automatic driving. As these scenes grow more complex, however, they place higher demands on recognition accuracy, resource allocation and the like.
At present there are target recognition systems based on cloud servers or on edge terminals, but when applied to complex multi-target detection scenes they have the following drawbacks. First, in an edge-side target recognition system the model is deployed at the edge: the cloud is responsible for decision making while the edge handles data processing and recognition; but edge device resources are limited and the deployed model is lightweight, so the edge can hardly work independently in a complex multi-target detection scene. Second, in a cloud-side target recognition system the model is deployed on a cloud server, which receives data uploaded by the edge and processes and recognizes it; but the cloud server is usually far from the working scene, and data transmission is costly and time-consuming, making this approach unsuitable for multi-target image recognition scenes.
Disclosure of Invention
The invention provides an image joint reasoning and identification system based on a cloud-edge-terminal architecture to address the needs and defects of the prior art.
the invention provides a stable cloud edge architecture based on a Kubeedge-Sedna management platform, and the cloud center can effectively manage a large amount of image recognition edge equipment and support large-scale deployment and centralized management.
The invention also provides a joint reasoning algorithm, which overcomes the limitation of edge resources by offloading high-load edge tasks to the cloud, improving dynamic resource allocation efficiency and image recognition efficiency while reducing deployment cost.
Interpretation of terms:
The KubeEdge-Sedna management platform: KubeEdge is an open-source system that supports edge computing and extends containerized application orchestration to edge platforms; Sedna is an edge-cloud collaborative AI project incubated by KubeEdge SIG AI, which extends the functions of KubeEdge and provides cross-cloud-edge collaborative training and collaborative reasoning.
The technical scheme of the invention is as follows:
an image joint reasoning and identifying system based on a cloud edge terminal architecture comprises a cloud platform layer, a communication network layer, an edge layer and a terminal layer;
the cloud platform layer comprises a cloud computing center, a database, a cloud file storage system and a mirror image warehouse;
the cloud computing center is used for training a cloud end and an edge end artificial intelligence image recognition model and processing an image recognition task unloaded from an edge end to the cloud computing center; the database is used for storing addressing directory addresses; the cloud file storage system is used for storing cloud and edge artificial intelligence image recognition models and training set and test set image files used for model training; the mirror image warehouse is used for storing mirror images generated by cloud and edge artificial intelligent image recognition models trained by the cloud computing center;
the communication network layer is used for information interaction between the edge layer and the cloud platform layer, data uploading and downloading of the model and the mirror image;
the edge layer comprises various devices integrated in a target image recognition scene, including edge devices and a local file storage system;
the edge equipment bears an edge artificial intelligence image recognition model and is used for processing the monitoring data transmitted from the terminal layer in the target image recognition scene and for calculating, through a joint reasoning algorithm, whether the monitoring data starts the cloud artificial intelligence image recognition model for detection; the local file storage system is used for storing image files and executable programs of historical and latest versions of the edge artificial intelligence image recognition model, image data sets uploaded by the terminal layer, and detection results;
the terminal layer is terminal equipment with a data acquisition function in a target image recognition scene and is used for transmitting real-time images and videos acquired by monitoring to the edge layer through a wired or wireless network so as to be detected;
wherein calculating, through the joint reasoning algorithm, whether the monitoring data starts the cloud artificial intelligence image recognition model for detection comprises: obtaining the current CPU residual rate of the edge device, the current memory residual rate of the edge device, the delay of each gateway on the network connection between the edge device and the cloud computing center, the model start-up time and the current operation data size, and calculating from these, through the joint reasoning algorithm, whether the monitoring data starts the cloud artificial intelligence image recognition model for detection.
Preferably, the cloud computing center deploys the open-source KubeEdge-Sedna management platform;
the edge device joins the cloud platform layer through the Join Token key and certificate verification mode of the KubeEdge-Sedna management platform, and the cloud computing center manages and controls the edge device;
the cloud computing center receives access of the terminal equipment and specifies the edge equipment to which the terminal equipment belongs, and the cloud computing center controls the edge equipment.
According to the invention, the edge device is an Nvidia Nx Xavier integrated device; the terminal equipment comprises a camera or a monitor.
According to the invention, preferably, the cloud computing center, the database, the cloud file storage system and the mirror image warehouse are all in a local area network; the edge device, the local file storage system and the terminal layer are in a local area network.
An image joint reasoning identification method based on a cloud edge architecture is realized based on the image joint reasoning identification system, and comprises the following steps:
Step 1: the cloud computing center respectively trains the needed cloud and edge artificial intelligence image recognition models according to different target image recognition scene requirements, packs the two models into a mirror image, and stores the mirror image in the mirror image warehouse; the mirror image file of the edge artificial intelligence image recognition model is sent to the local file storage system;
Step 2: the cloud computing center starts a Pod on the appointed edge equipment through a Yaml file, runs the target image recognition service and deploys the Sedna plug-in, in which the joint reasoning algorithm is built;
Step 3: the edge device pulls the mirror image required for running the Pod container from the local file storage system; once the Pod container starts successfully, its state is returned to the cloud computing center;
Step 4: after running the target image recognition service through the Yaml file, the edge equipment works in the target image recognition scene using the Sedna plug-in's joint reasoning algorithm;
Step 5: the terminal equipment is connected to the edge equipment through an IP address, and the edge equipment obtains target image recognition data from the real-time images transmitted by the terminal equipment;
Step 6: the edge device reads the target image recognition data to be detected while obtaining the current CPU residual rate of the edge device, the current memory residual rate of the edge device, the delay of each gateway on the network connection between the edge device and the cloud computing center, the model start-up time and the current operation data size; whether the monitoring data starts the cloud artificial intelligence image recognition model for detection is calculated through the joint reasoning algorithm, and the target image recognition service is performed to obtain the target image recognition result.
Preferably, in step 1, the cloud computing center separately trains the required cloud and edge artificial intelligence image recognition models according to different target image recognition scene requirements, and specifically includes:
step 1.1: constructing a cloud and edge artificial intelligent image recognition model;
the cloud artificial intelligent image recognition model comprises a main neural network CSP-Darknet53, a neural network LeNet-5 and a neural network Darknet-19;
the edge artificial intelligent image recognition model adopts a CSP-Darknet53 neural network architecture;
step 1.2: training a cloud artificial intelligence image recognition model;
acquiring a data set;
performing feature labeling on the training set in the data set, adding category labels, and converting the training set to xml format to obtain an xml data set;
normalizing the coordinates of the images in the xml data set;
training a cloud artificial intelligence image recognition model;
step 1.3: training an artificial intelligent image recognition model of an edge end;
acquiring a data set;
performing feature labeling on the training set in the data set, adding category labels, and converting the training set to xml format to obtain an xml data set;
normalizing the coordinates of the images in the xml data set;
and training an artificial intelligent image recognition model of the edge end.
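Steps 1.2 and 1.3 both label features, add category labels and then normalize the coordinates of the images in the xml data set. As a minimal illustrative sketch (assuming Pascal-VOC-style pixel-space boxes in the xml annotations, which the original does not specify), the normalization could look like:

```python
def normalize_box(xmin, ymin, xmax, ymax, img_w, img_h):
    """Convert a pixel-space bounding box to a normalized
    (x_center, y_center, width, height) tuple in [0, 1],
    the form commonly used to train YOLO-family detectors."""
    x_c = (xmin + xmax) / 2.0 / img_w
    y_c = (ymin + ymax) / 2.0 / img_h
    w = (xmax - xmin) / img_w
    h = (ymax - ymin) / img_h
    return x_c, y_c, w, h
```

The same helper would be applied to every labeled box in both the cloud-side and edge-side training sets.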
Preferably, in step 6, calculating whether the monitoring data enables a cloud artificial intelligence image recognition model for detection through a joint reasoning algorithm includes:
Step 6.1: calculating the relation D(r, h) between the edge equipment model load and the resource utilization to obtain the value of the resource pressure load parameter n_1;
D(r, h) is represented by formula (1):
[Formula (1) for D(r, h) appears only as an image in the original.]
In formula (1), R_a and R_b are respectively the current CPU residual rate and the current memory residual rate of the edge device, V_1 and V_2 are adjustment constants, a is the model start-up time average constant, and h_i is the current operation data size;
When D(r, h) is less than 0, n_1 = 0, go to step 6.2; otherwise, n_1 = 1, go to step 6.3;
Step 6.2: when n_1 = 0, the remaining resources of the current edge device are not enough to support the load of this detection; calculate the network condition W(t) between the edge device and the cloud computing center to obtain the network state parameter n_2. W(t) is represented by formula (2):
[Formula (2) for W(t) appears only as an image in the original.]
In formula (2), T_i is the delay of each gateway on the network connection between the edge device and the cloud computing center, and m is the number of gateways on the communication path; when W(t) is greater than 150 ms, n_2 = 0; otherwise, n_2 = 1;
If n_2 = 1, the current connection state between the edge device and the cloud computing center is good: the image is uploaded to the cloud, and the cloud artificial intelligence image recognition model is started for recognition. If n_2 = 0, the current network connection state between the edge device and the cloud computing center is poor: the image is marked as unsuitable for uploading for cloud recognition, the procedure returns to step 6.1, and the marked image is uploaded to the cloud artificial intelligence image recognition model for detection the next time the calculated n_2 meets the condition;
Step 6.3: when n_1 = 1, the remaining resources of the current edge device are enough to support the load of this detection; the edge artificial intelligence image recognition model is started for detection, and the detection precision B(y) of the edge model is calculated. B(y) is represented by formula (3):
[Formula (3) for B(y) appears only as an image in the original.]
In formula (3), C is the minimum desired accuracy and y_i is the actual accuracy of the detection; when B(y) is greater than or equal to 0, the detection accuracy qualification parameter n_3 = 1, go to step 6.4; when B(y) is less than 0, n_3 = 0, go to step 6.5;
Step 6.4: if n_3 = 1, the recognition result of the edge image recognition model for this image has high accuracy; the image is marked as recognized and stored in the local file storage system;
Step 6.5: if n_3 = 0, the recognition result of the edge image recognition model for this image has low accuracy; W(t) is calculated to obtain n_2. If n_2 = 0, the image is marked as unsuitable for uploading for cloud recognition and is uploaded to the cloud once the network condition improves; if n_2 = 1, the image is uploaded to the cloud, and the cloud artificial intelligence image recognition model is started for recognition.
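Since formulas (1) to (3) are available only as images in the original document, the control flow of steps 6.1 to 6.5 can be sketched with assumed illustrative forms: a linear combination for D(r, h), an averaged gateway delay for W(t), and a simple difference for B(y). The constants v1, v2 and a below are placeholders; only the 150 ms threshold and the branch structure come from the text.

```python
def resource_load_ok(cpu_free, mem_free, data_size, v1=1.0, v2=1.0, a=0.1):
    """Step 6.1: n_1 = 1 when D(r, h) >= 0 (enough edge resources).
    The linear form of D is an assumption; formula (1) is an image."""
    d = v1 * cpu_free + v2 * mem_free - a - data_size
    return 1 if d >= 0 else 0

def network_ok(gateway_delays_ms):
    """Step 6.2: n_2 = 1 when W(t) <= 150 ms. Averaging the per-gateway
    delays T_i is an assumption; formula (2) is an image."""
    w = sum(gateway_delays_ms) / len(gateway_delays_ms)
    return 1 if w <= 150 else 0

def accuracy_ok(actual_acc, min_acc):
    """Step 6.3: n_3 = 1 when B(y) = actual - minimum desired >= 0."""
    return 1 if actual_acc - min_acc >= 0 else 0

def decide(cpu_free, mem_free, data_size, delays_ms, edge_acc, min_acc):
    """Steps 6.1-6.5: returns where this image is handled."""
    if resource_load_ok(cpu_free, mem_free, data_size) == 0:
        # Edge overloaded: offload to the cloud if the network allows it,
        # otherwise mark the image for a deferred upload (step 6.2).
        return "cloud" if network_ok(delays_ms) else "defer_upload"
    if accuracy_ok(edge_acc, min_acc):
        return "edge"  # step 6.4: edge result accurate enough, store locally
    # Step 6.5: low-accuracy edge result, retry in the cloud when possible.
    return "cloud" if network_ok(delays_ms) else "defer_upload"
```

The three-way return value mirrors the patent's outcomes: recognized at the edge, offloaded to the cloud model, or marked as temporarily unsuitable for uploading.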
Preferably, in step 6, if the cloud artificial intelligence image recognition model is started, the detection result and the image are stored in the cloud file storage system and a backup of the detection result is sent to the local file storage system; otherwise the edge artificial intelligence image recognition model is started, the detection result and the image are stored in the local file storage system, and the data in the local file storage system are periodically packaged and sent to the cloud platform layer using network-idle-time transmission.
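The periodic network-idle-time transmission can be sketched as follows; the idle window, the batch size and the file-selection policy are illustrative assumptions, since the original does not specify them:

```python
from datetime import time

# Assumed low-traffic window for uploading locally stored results.
IDLE_START, IDLE_END = time(1, 0), time(5, 0)

def in_idle_window(now):
    """True when `now` (a datetime.time) falls inside the assumed
    network-idle window."""
    return IDLE_START <= now <= IDLE_END

def pack_batch(pending_files, now, max_batch=100):
    """Return the locally stored files to package and send to the cloud
    platform layer this cycle, or [] when the network is not idle."""
    if not in_idle_window(now):
        return []
    return pending_files[:max_batch]
```

A scheduler on the edge device would call `pack_batch` periodically and ship the returned batch, removing sent files from the pending list.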
The invention has the beneficial effects that:
1. and (5) structuring. The invention provides a mature Kubeedge-Sedna management platform as a method for cloud decision, edge layer management and terminal layer management, which supports large-scale node access and management, and task starting and issuing.
2. And (4) light weight. The invention adopts a mature, light and open-source edge computing management platform, preferentially selects the Nvidia NX Xavier integrated equipment as the edge equipment, has low power consumption, small volume, strong computing power and rapid deployment, can support large-scale deployment, and is more suitable for multi-target detection scenes.
3. The cloud edge is cooperative. The system adopts a joint reasoning algorithm, overcomes the defect that the model is only deployed on the cloud end or the edge end on one side, can detect and identify multiple targets more and more quickly, and is more suitable for complex multi-target image identification and detection operation.
4. And continuously updating. The system adopts a data set returning mode, and model training and model reasoning are independently separated, so that the cloud can continuously train a new model, correct parameters and characteristic values, keep the model updated alternately, provide continuous and efficient detection efficiency, and better meet the requirements of a multi-target detection scene.
5. And a better cloud edge resource optimization strategy. The edge computing is closer to a data collection end, data transmission delay is reduced, cloud edge collaborative reasoning is used, cloud edge resources are fully utilized, and task execution efficiency is improved.
Drawings
FIG. 1 is a schematic diagram of an image joint reasoning identification system based on a cloud edge architecture according to the present invention;
FIG. 2 is a schematic diagram of the cloud edge architecture;
FIG. 3 is a schematic diagram of mirror image packing and mirror image distribution;
FIG. 4 is a schematic diagram of cloud edge information interaction;
FIG. 5 is a schematic diagram of a federated inference algorithm;
FIG. 6 is a schematic view of the fish to be tested in example 4;
fig. 7 is a schematic diagram of a neural network architecture of a cloud artificial intelligence image recognition model.
Detailed Description
The invention is further described below with reference to, but not limited to, the figures and examples of the description.
Example 1
An image joint reasoning and identifying system based on a cloud edge-end architecture is shown in figure 1 and comprises a cloud platform layer, a communication network layer, an edge layer and a terminal layer;
the cloud platform layer comprises a cloud computing center, a database, a cloud file storage system and a mirror image warehouse;
the cloud computing center is integrated by software such as Go environment, kubeige platform, docker, sedna plug-in and the like, and is shown in fig. 3; the image recognition task processing method is used for training the cloud and edge artificial intelligent image recognition models and processing the image recognition tasks unloaded from the edge to the cloud computing center; the database is used for storing addressing directory addresses; the cloud file storage system is used for storing cloud and edge artificial intelligence image recognition models and training sets and test set image files used for model training; the mirror image warehouse is used for storing mirror image files generated by the cloud end and edge end artificial intelligence image recognition models trained by the cloud computing center; as shown in fig. 3.
The communication network layer is used for information interaction between the edge layer and the cloud platform layer and for uploading and downloading the models and mirror images; it consists of the wired/wireless communication lines between the edge layer and the cloud platform layer and their respective gateways.
The edge layer comprises various devices integrated in a target image recognition scene, including an edge device and a local file storage system;
the edge device bears an edge artificial intelligence image recognition model and is used for processing monitoring data transmitted from a terminal layer in a target image recognition scene, and calculating whether the monitoring data starts a cloud artificial intelligence image recognition model for detection or not through a joint reasoning algorithm; the local file storage system is used for storing a history and image files, executable programs, image data sets uploaded by a terminal layer and detection results of the edge-end artificial intelligence image recognition model of the latest version;
the terminal layer is terminal equipment with a data acquisition function in a target image recognition scene and is used for transmitting real-time images and videos acquired by monitoring to the edge layer through a wired or wireless network so as to be detected;
the method for detecting whether the monitoring data start a cloud artificial intelligence image recognition model through the joint reasoning algorithm comprises the following steps: obtaining the current CPU residual rate of the edge device, the current memory residual rate of the edge device, the delay of a gateway between the edge device and the cloud computing center network connection, the model starting time and the current operation data size, and calculating whether the monitoring data starts a cloud artificial intelligence image recognition model for detection through a joint reasoning algorithm.
Example 2
The image joint reasoning and identifying system based on the cloud edge architecture in embodiment 1 is characterized in that:
the cloud computing center deploys an open-source Kubeedge-Sedna management platform;
the edge device is added into a cloud platform layer through a Join Token key and certificate verification mode of a Kubeedge-Sedna management platform, and a cloud computing center controls the edge device; the cloud computing center receives access of terminal equipment, specifically: and creating resource objects such as equipment, object models and the like through the Yaml configuration file, adding the terminal equipment through Url in the equipment resource group so as to receive the access of the terminal equipment, and appointing the edge equipment to which the terminal equipment belongs, wherein the cloud computing center controls the edge equipment. Thus, a cloud-edge architecture is formed, as shown in fig. 2.
The edge device is a device with high floating-point computing capability and adopts an Nvidia NX Xavier processor; the terminal equipment comprises a camera or a monitor.
The cloud computing center, the database, the cloud file storage system and the mirror image warehouse are all in a local area network; the edge device, the local file storage system and the terminal layer are in a local area network.
Example 3
An image joint reasoning identification method based on a cloud edge architecture is realized based on the image joint reasoning identification system in embodiment 1 or 2, and comprises the following steps:
step 1: the cloud computing center respectively trains needed cloud and edge artificial intelligent image recognition models according to different target image recognition scene requirements, packs the two models into a mirror image, and stores the mirror image in a mirror image warehouse; sending the edge artificial intelligent image recognition model to a local file storage system;
and 2, step: the cloud computing center starts a Pod on the appointed edge equipment through the Yaml file, operates the target image recognition service and deploys the Sednas plug-in, and a joint reasoning algorithm is arranged in the Sednas plug-in; as shown in fig. 4;
and step 3: the edge device pulls a mirror image required by the operation of the Pod from the local file storage system, the Pod is started successfully, and the Pod state is returned to the cloud computing center;
the cloud computing center defines a Pod, a target image recognition service and a Sedna plug-in through a Yaml file, a communication network layer transmits a starting instruction to an edge layer, after receiving the starting instruction, edge equipment starts the Pod according to the content of the Yaml file, runs the target image recognition service, pulls a mirror image required by the Pod from a local file storage system, and returns edge equipment information and Pod information to a cloud.
And 4, step 4: after running a target image recognition service through the Yaml file, the edge equipment works in a target image recognition scene by utilizing a Sednaplug-in joint reasoning algorithm;
and 5: the terminal equipment is connected with the edge equipment through an ip address, and the edge equipment acquires target image identification data through a real-time image transmitted by the terminal equipment;
step 6: the method comprises the steps that edge equipment reads target image recognition data to be detected, the CPU residual rate of the current edge equipment, the memory residual rate of the current edge equipment, the delay of a gateway between the edge equipment and a cloud computing center network connection, model starting time and the size of current operation data are obtained at the same time, whether a cloud artificial intelligence image recognition model is started for detecting the monitoring data or not is calculated through a joint reasoning algorithm, target image recognition service is carried out, and a target image recognition result is obtained. As shown in fig. 5.
Example 4
The image joint reasoning and identification method based on the cloud edge architecture in embodiment 3 is characterized in that:
In step 1, the cloud computing center trains the required cloud and edge artificial intelligence image recognition models according to the marine fish image recognition scene requirements, specifically comprising:
step 1.1: establishing a cloud and edge artificial intelligent image recognition model;
The cloud artificial intelligence image recognition model comprises the backbone neural network CSP-Darknet53 officially provided by YOLOv5, the neural network LeNet-5, and the neural network Darknet-19. It adopts a multi-scale, multi-neural-network training mode in which targets of different fish sizes correspond to different network models: CSP-Darknet53, officially provided by YOLOv5, serves as the backbone network for the large-scale feature data set, while Darknet-19 and LeNet-5 correspond to the medium-scale and small-scale feature data sets respectively, forming the multi-neural-network structure shown in figure 7;
The edge artificial intelligence image recognition model adopts the CSP-Darknet53 neural network architecture officially provided by YOLOv5;
step 1.2: training a cloud artificial intelligence image recognition model;
Acquiring a data set; performing feature labeling on the training set in the data set with image labeling software, adding category labels, and exporting to xml format to obtain an xml data set;
Reading and returning the training set files with the os.listdir(xmlfilePath) function; the coordinates of the images in the xml data set are normalized with image_w = 1./size[0] and image_h = 1./size[1];
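The labeling and normalization steps above can be sketched as follows. This is a minimal illustration, assuming Pascal-VOC-style xml annotations; the function names are hypothetical, and only the `1./size[0]`, `1./size[1]` scaling factors come from the text.

```python
import os
import xml.etree.ElementTree as ET

def voc_box_to_yolo(size, box):
    """Normalize a VOC (xmin, xmax, ymin, ymax) box into YOLO's
    (x_center, y_center, width, height), given size = (img_w, img_h)."""
    dw = 1.0 / size[0]   # the image_w = 1./size[0] factor from the text
    dh = 1.0 / size[1]   # the image_h = 1./size[1] factor
    x = (box[0] + box[1]) / 2.0
    y = (box[2] + box[3]) / 2.0
    w = box[1] - box[0]
    h = box[3] - box[2]
    return (x * dw, y * dh, w * dw, h * dh)

def convert_annotations(xmlfilePath):
    """Walk the xml data set with os.listdir and yield normalized labels."""
    for name in os.listdir(xmlfilePath):
        root = ET.parse(os.path.join(xmlfilePath, name)).getroot()
        size = root.find('size')
        img_w = int(size.find('width').text)
        img_h = int(size.find('height').text)
        for obj in root.iter('object'):
            b = obj.find('bndbox')
            box = tuple(float(b.find(k).text)
                        for k in ('xmin', 'xmax', 'ymin', 'ymax'))
            yield obj.find('name').text, voc_box_to_yolo((img_w, img_h), box)
```

A full-image box normalizes to a centered box of width and height 1, which is a quick sanity check on the scaling.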
Training process: the normalized data set is sent to the CSP-Darknet53 neural network for training, and the Backbone extracts features from the data set; the cloud artificial intelligence image recognition model adopts a multi-scale training strategy, sending features of different sizes and scales to different neural networks for training; specifically:
The feature image input size of CSP-Darknet53 is (N, C_in, H_in, W_in) = (N, 3, 640, 640), corresponding to large-scale features, with convolution layers of kernel = 6, stride = 2, padding = 2; spatial pyramid pooling is performed with several 5x5 MaxPool layers in the SPPF, converting the resulting feature map into a fixed-size feature vector;
The feature image input size of Darknet-19 is (N, C_in, H_in, W_in) = (N, 3, 224, 224), corresponding to medium-scale features, with convolution layers of kernel = 4, stride = 2, padding = 2 and 3x3 convolution kernels alternating with 1x1;
The feature image input size of LeNet-5 is (N, C_in, H_in, W_in) = (N, 3, 32, 32), corresponding to small-scale features; all convolution kernels are 5x5 with stride 1, and all pooling is average pooling;
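The kernel/stride/padding choices above fix the spatial size of each network's first feature map via the standard convolution output formula. A quick arithmetic check (a sketch; the helper name is ours, not the patent's):

```python
def conv_out_size(in_size, kernel, stride, padding):
    """Spatial output size of a convolution: floor((H + 2p - k) / s) + 1."""
    return (in_size + 2 * padding - kernel) // stride + 1

# Large scale, CSP-Darknet53 stem: 640 input, kernel=6, stride=2, padding=2
large = conv_out_size(640, 6, 2, 2)    # (640 + 4 - 6) // 2 + 1 = 320
# Medium scale, Darknet-19: 224 input, kernel=4, stride=2, padding=2
medium = conv_out_size(224, 4, 2, 2)   # (224 + 4 - 4) // 2 + 1 = 113
# Small scale, LeNet-5: 32 input, kernel=5, stride=1, no padding
small = conv_out_size(32, 5, 1, 0)     # (32 - 5) // 1 + 1 = 28
```

The 28x28 result for LeNet-5 matches the classic LeNet first-layer feature map.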
Step 1.3: training the edge artificial intelligence image recognition model;
acquiring a data set;
Performing feature labeling on the training set in the data set with image labeling software, adding category labels, and exporting to xml format to obtain an xml data set;
Reading and returning the training set files with the os.listdir(xmlfilePath) function; the coordinates of the images in the xml data set are normalized with image_w = 1./size[0] and image_h = 1./size[1];
Training process: the normalized data set is sent to the CSP-Darknet53 neural network for training, and the Backbone extracts features from the data set; specifically: the feature image input size of CSP-Darknet53 is (N, C_in, H_in, W_in) = (N, 3, 640, 640), corresponding to large-scale features, with convolution layers of kernel = 6, stride = 2, padding = 2; spatial pyramid pooling is performed with several 5x5 MaxPool layers in the SPPF, and the resulting feature map is converted into a fixed-size feature vector.
The cloud artificial intelligence image recognition model adopts YOLOv5x as the main body, with a multi-neural-network structure in which targets of different fish sizes correspond to different network models; training and testing are performed in combination with a model fusion mode.
The edge model is trained and tested using a single YOLOv5x neural network architecture.
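The "model fusion mode" above is not spelled out in the text; one common realization is to pool the detections of the per-scale networks and keep only the highest-confidence box in each overlapping group (a greedy, NMS-style merge). The sketch below assumes that interpretation; the data layout and function names are illustrative only.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def fuse_detections(det_lists, iou_thr=0.5):
    """Merge detections from several models: sort by confidence and keep a
    box only if it does not overlap an already-kept box of the same class."""
    merged = sorted((d for dets in det_lists for d in dets),
                    key=lambda d: d['conf'], reverse=True)
    kept = []
    for d in merged:
        if all(k['cls'] != d['cls'] or iou(k['box'], d['box']) < iou_thr
               for k in kept):
            kept.append(d)
    return kept
```

With two models reporting nearly the same fish box, only the more confident detection survives, while non-overlapping detections from either model are all retained.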
A marine underwater monitoring camera transmits real-time video and frame images to the edge device to obtain the data set; the targets to be detected are shown in figure 6;
In step 6, as shown in fig. 5, the joint reasoning algorithm calculates whether the monitoring data requires starting the cloud artificial intelligence image recognition model for detection. The current CPU residual rate (R_a) and memory residual rate (R_b) are 36% and 42%; the model startup time constant a is 2.5 s and the image data size h_i is 8.6 M. The gateway network delays (T_i) between the edge device and the cloud server are T_0 = 90 ms, T_1 = 136 ms, T_2 = 170 ms, T_3 = 128 ms, T_4 = 112 ms, T_5 = 143 ms, T_6 = 101 ms, T_7 = 121 ms, with m = 8. The method comprises the following steps:
Step 6.1: calculating the relation D(r, h) between the edge device model load and resource utilization to obtain the value of the resource pressure load parameter n_1:
D(r, h) is given by formula (1):
[Formula (1) is rendered as an image in the original document; D(r, h) is computed from R_a, R_b, V_1, V_2, a, and h_i as defined below.]
In formula (1), R_a and R_b are respectively the current CPU residual rate and memory residual rate of the edge device, and V_1 and V_2 are adjustment constants. The resource configuration of the edge device differs across application scenes, and terminal devices upload data of different sizes, which can unbalance the ratio of CPU residual rate to memory residual rate; for example, if the uploaded data suddenly becomes larger, the memory residual rate drops while the CPU residual rate remains considerable, and V_1 and V_2 balance this relationship. When the data size h_i is 4M-9M, V_1 and V_2 take 0.2 and 0.25; when h_i is 9M-15M, they take 0.36 and 0.55; when h_i is 15M-25M, they take 0.34 and 0.5; when h_i exceeds 25M, they take 0.32 and 0.55. a is the model startup time constant and h_i is the current operation data size;
[The evaluation of formula (1) with the above values is rendered as an image in the original document.]
Therefore the model-load and remaining-resource function satisfies D(r, h) ≥ 0, giving n_1 = 1.
Step 6.2: the edge marine-fish artificial intelligence image recognition model is started for recognition, obtaining the recognition result and its accuracy; the current accuracy is y_i = 0.8329, and B(y) is calculated:
[The formula for B(y) is rendered as an image in the original document; it compares the actual accuracy y_i against the minimum desired accuracy C.]
C is the minimum desired accuracy, taken as 0.8 in this example.
[The evaluation of B(y) with y_i = 0.8329 and C = 0.8 is rendered as an image in the original document.]
Therefore the edge end recognizes the image with sufficient accuracy, giving n_3 = 1, and the value of W(t) does not need to be calculated.
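The numbers in this example can be checked directly. Formulas (2) and (3) appear only as images in the source, so the sketch below assumes W(t) is the mean of the gateway delays T_i and B(y) = y_i − C; both assumptions match the surrounding text but are reconstructions, not the patent's literal formulas.

```python
# Gateway delays T_0 .. T_7 from the example, m = 8
delays_ms = [90, 136, 170, 128, 112, 143, 101, 121]
w_t = sum(delays_ms) / len(delays_ms)  # assumed mean-delay form of W(t)
n2 = 1 if w_t <= 150 else 0            # 150 ms threshold from claim 7

y_i, c = 0.8329, 0.8                   # edge accuracy vs. minimum expectation
b_y = y_i - c                          # assumed form of B(y)
n3 = 1 if b_y >= 0 else 0              # n3 = 1: keep the edge result
```

Under these assumptions w_t works out to 125.125 ms (below the 150 ms threshold) and b_y is positive, so both n2 and n3 are 1, consistent with the example's conclusion that the edge result is kept.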
Step 6.3: as shown in fig. 5, the edge device stores the image file and the image recognition result in the local file storage system.

Claims (8)

1. An image joint reasoning and identifying system based on a cloud edge-end architecture is characterized by comprising a cloud platform layer, a communication network layer, an edge layer and a terminal layer;
the cloud platform layer comprises a cloud computing center, a database, a cloud file storage system and a mirror image warehouse;
the cloud computing center is used for training a cloud end and an edge end artificial intelligence image recognition model and processing an image recognition task unloaded from an edge end to the cloud computing center; the database is used for storing addressing directory addresses; the cloud file storage system is used for storing cloud and edge artificial intelligence image recognition models and training set and test set image files used for model training; the mirror image warehouse is used for storing mirror images generated by cloud and edge artificial intelligent image recognition models trained by the cloud computing center;
the communication network layer is used for information interaction, data uploading and downloading of the model and the mirror image between the edge layer and the cloud platform layer;
the edge layer comprises various devices integrated in a target image recognition scene, including edge devices and a local file storage system;
the edge device bears the edge artificial intelligence image recognition model and is used for processing monitoring data transmitted from the terminal layer in the target image recognition scene, calculating through a joint reasoning algorithm whether the monitoring data requires starting the cloud artificial intelligence image recognition model for detection; the local file storage system is used for storing the latest version of the edge artificial intelligence image recognition model, historical image files, executable programs, image data sets uploaded by the terminal layer, and detection results;
the terminal layer is terminal equipment with a data acquisition function in a target image recognition scene and is used for transmitting real-time images and videos acquired by monitoring to the edge layer through a wired or wireless network so as to be detected;
wherein, whether the monitoring data starts a cloud artificial intelligence image recognition model for detection is calculated through a joint reasoning algorithm, and the method comprises the following steps: obtaining the current CPU residual rate of the edge device, the current memory residual rate of the edge device, the delay of a gateway between the edge device and the cloud computing center network connection, the model starting time and the current operation data size, and calculating whether the monitoring data starts a cloud artificial intelligence image recognition model for detection through a joint reasoning algorithm.
2. The image joint reasoning and recognition system based on the cloud edge-side architecture as claimed in claim 1, wherein the cloud computing center deploys an open-source Kubeedge-Sedna management platform;
the edge device is added into the cloud platform layer through a Join Token key and certificate verification mode of a Kubeedge-Sedna management platform, and the cloud computing center controls the edge device;
the cloud computing center receives access of the terminal equipment and specifies the edge equipment to which the terminal equipment belongs, and the cloud computing center controls the edge equipment.
3. The image joint reasoning and identifying system based on the cloud edge architecture as claimed in claim 1, wherein the edge device is an Nvidia Nx Xavier integrated device; the terminal equipment comprises a camera or a monitor.
4. The image joint reasoning and identification system based on the cloud side architecture as claimed in claim 1, wherein the cloud computing center, the database, the cloud file storage system and the mirror image warehouse are all in a local area network; the edge device, the local file storage system and the terminal layer are in a local area network.
5. An image joint reasoning identification method based on a cloud edge architecture, which is realized based on the image joint reasoning identification system of any one of claims 1-4, and is characterized by comprising the following steps:
step 1: the cloud computing center respectively trains needed cloud and edge artificial intelligent image recognition models according to different target image recognition scene requirements, packs the two models into a mirror image, and stores the mirror image in the mirror image warehouse; sending the mirror image file of the edge artificial intelligence image recognition model to the local file storage system;
Step 2: the cloud computing center starts a Pod on the designated edge device through a Yaml file, runs the target image recognition service, and deploys the Sedna plug-in, in which a joint reasoning algorithm is built;
Step 3: the edge device pulls the mirror image required for running the Pod from the local file storage system; after the Pod starts successfully, the Pod state is returned to the cloud computing center;
Step 4: after running the target image recognition service through the Yaml file, the edge device works in the target image recognition scene using the joint reasoning algorithm of the Sedna plug-in;
Step 5: the terminal device connects to the edge device through an IP address, and the edge device acquires target image recognition data from the real-time images transmitted by the terminal device;
Step 6: the edge device reads the target image recognition data to be detected while obtaining the current CPU residual rate of the edge device, the current memory residual rate of the edge device, the delay of each gateway on the network connection between the edge device and the cloud computing center, the model startup time, and the current operation data size; whether the monitoring data requires starting the cloud artificial intelligence image recognition model for detection is calculated through the joint reasoning algorithm, and the target image recognition service is performed to obtain the target image recognition result.
6. The method for image joint inference recognition based on cloud edge architecture as claimed in claim 5, wherein in step 1, the cloud computing center respectively trains required cloud and edge artificial intelligence image recognition models according to different target image recognition scene requirements, specifically including:
step 1.1: constructing a cloud and edge artificial intelligent image recognition model;
the cloud artificial intelligent image recognition model comprises a main neural network CSP-Darknet53, a neural network LeNet-5 and a neural network Darknet-19;
the edge artificial intelligent image recognition model adopts a CSP-Darknet53 neural network architecture;
step 1.2: training a cloud artificial intelligence image recognition model;
acquiring a data set;
performing feature labeling on the training set in the data set, adding category labels, and exporting to xml format to obtain an xml data set;
normalizing the coordinates of the images in the xml data set;
training a cloud artificial intelligence image recognition model;
step 1.3: training an artificial intelligent image recognition model of an edge end;
acquiring a data set;
performing feature labeling on the training set in the data set, adding category labels, and exporting to xml format to obtain an xml data set;
normalizing the coordinates of the images in the xml data set;
and training an artificial intelligence image recognition model of the edge end.
7. The method for joint image inference recognition based on cloud edge architecture as claimed in claim 5, wherein in step 6, calculating whether the monitoring data enables a cloud artificial intelligence image recognition model for detection through a joint inference algorithm includes:
step 6.1: calculating the relation D(r, h) between the edge device model load and resource utilization to obtain the value of the resource pressure load parameter n_1:
D(r, h) is given by formula (1):
[Formula (1) is rendered as an image in the original document.]
In formula (1), R_a and R_b are respectively the current CPU residual rate and memory residual rate of the edge device, V_1 and V_2 are adjustment constants, a is the average model startup time constant, and h_i is the current operation data size;
when the value of D(r, h) is less than 0, n_1 = 0 and the method goes to step 6.2; otherwise n_1 = 1 and the method goes to step 6.3;
step 6.2: when n_1 = 0, the remaining resources of the current edge device are insufficient to support the detection load; the network condition W(t) between the edge device and the cloud computing center is calculated to obtain the network state parameter n_2, where W(t) is given by formula (2):
[Formula (2) is rendered as an image in the original document; W(t) is computed from the gateway delays T_i and the gateway count m.]
In formula (2), T_i is the delay of each gateway on the network connection between the edge device and the cloud computing center, and m is the number of gateways on the communication path; when W(t) is greater than 150 ms, n_2 = 0; otherwise n_2 = 1;
If n_2 = 1, the current network connection between the edge device and the cloud computing center is good, so the image is uploaded to the cloud and the cloud artificial intelligence image recognition model is started for recognition; if n_2 = 0, the current network connection is poor, so the image is marked as unsuitable for uploading for cloud recognition and the method returns to step 6.1; the next time the calculated n_2 meets the condition, the image is uploaded to the cloud artificial intelligence image recognition model for detection;
step 6.3: when n_1 = 1, the remaining resources of the current edge device are sufficient to support the detection load; the edge artificial intelligence image recognition model is started for detection, and the detection precision B(y) of the edge model is calculated; B(y) is given by formula (3):
[Formula (3) is rendered as an image in the original document.]
In formula (3), C is the minimum desired accuracy and y_i is the actual detection accuracy; when B(y) ≥ 0, the detection accuracy qualification parameter n_3 = 1 and the method goes to step 6.4; when B(y) < 0, n_3 = 0 and the method goes to step 6.5;
step 6.4: if n_3 = 1, the recognition result of the edge image recognition model for the image has sufficient accuracy; the image is marked as recognized and stored in the local file storage system;
step 6.5: if n_3 = 0, the recognition accuracy of the edge image recognition model for the image is low; W(t) is calculated to obtain the value of n_2; if n_2 = 0, the image is marked as unsuitable for uploading for cloud recognition and is uploaded to the cloud after the network condition improves; if n_2 = 1, the image is uploaded to the cloud and the cloud artificial intelligence image recognition model is started for recognition.
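Steps 6.1-6.5 amount to a three-gate decision. The sketch below is our reading of that flow, not the patent's code: D(r, h) is taken as an already-computed number (its formula is an image in the source), W(t) is assumed to be the mean gateway delay, and B(y) is assumed to be y_i − C.

```python
def joint_inference_decide(d_rh, delays_ms, y_i, c=0.8, latency_thr_ms=150):
    """Three-gate flow of steps 6.1-6.5: resource gate (n1), network gate
    (n2), accuracy gate (n3). Returns 'edge' (keep the edge result),
    'cloud' (offload), or 'defer' (mark the image and retry later)."""
    w_t = sum(delays_ms) / len(delays_ms)   # assumed mean-delay form of W(t)
    n2 = 1 if w_t <= latency_thr_ms else 0  # claim 7: W(t) > 150 ms -> n2 = 0
    if d_rh < 0:                            # step 6.2: n1 = 0, resources short
        return 'cloud' if n2 else 'defer'
    if y_i - c >= 0:                        # steps 6.3-6.4: B(y) >= 0, n3 = 1
        return 'edge'
    return 'cloud' if n2 else 'defer'       # step 6.5: n3 = 0, try the cloud
```

For instance, ample resources with accurate edge output keeps the work at the edge, while scarce resources offload to the cloud only when the mean gateway delay stays under the threshold.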
8. The image joint reasoning and identification method based on the cloud edge architecture as claimed in any one of claims 5 to 7, wherein in step 6, if the cloud artificial intelligence image recognition model is started, the detection result and the image are stored in the cloud file storage system and a backup of the detection result is sent to the local file storage system; otherwise, the edge artificial intelligence image recognition model is started, the detection result and the image are stored in the local file storage system, and the data of the local file storage system are periodically packaged and sent to the cloud platform layer in a network-idle transmission mode.
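Claim 8's network-idle transmission mode, syncing the local store to the cloud, can be sketched as a small flush loop. Everything here is illustrative: `is_network_idle` and `upload` stand in for whatever probes and transport the platform actually uses.

```python
import time

def idle_sync(pending, is_network_idle, upload, retry_s=60):
    """Flush locally stored results to the cloud platform layer when the
    link is idle; otherwise wait and retry. `pending` is mutated in place."""
    while pending:
        if is_network_idle():
            batch = list(pending)   # package everything queued so far
            pending.clear()
            upload(batch)
        else:
            time.sleep(retry_s)     # try again in the next idle window
```

A scheduler on the edge device would call this periodically; when the link never goes idle the loop simply keeps waiting, matching the claim's "upload after the network condition improves" behavior.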
CN202210832612.1A 2022-07-14 2022-07-14 Image joint reasoning identification system and method based on cloud edge architecture Pending CN115187847A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210832612.1A CN115187847A (en) 2022-07-14 2022-07-14 Image joint reasoning identification system and method based on cloud edge architecture


Publications (1)

Publication Number Publication Date
CN115187847A true CN115187847A (en) 2022-10-14

Family

ID=83518525


Country Status (1)

Country Link
CN (1) CN115187847A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116661992A (en) * 2023-05-09 2023-08-29 支付宝(杭州)信息技术有限公司 Terminal Bian Yun collaborative computing method, device, system, medium and program product
CN117422952A (en) * 2023-10-31 2024-01-19 北京东方国信科技股份有限公司 Artificial intelligent image recognition model management method and device and cloud edge service platform



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination