CN112419401A - Aircraft surface defect detection system based on cloud edge cooperation and deep learning - Google Patents

Aircraft surface defect detection system based on cloud edge cooperation and deep learning

Info

Publication number
CN112419401A
Authority
CN
China
Prior art keywords
cloud
defect
neural network
picture
edge side
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011320279.3A
Other languages
Chinese (zh)
Inventor
贺顺杰
杨博
陈彩莲
关新平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN202011320279.3A priority Critical patent/CN112419401A/en
Publication of CN112419401A publication Critical patent/CN112419401A/en
Pending legal-status Critical Current

Classifications

    • G06T 7/0004: Image analysis; inspection of images, e.g. flaw detection; industrial image inspection
    • G06F 18/214: Pattern recognition; design or setup of recognition systems; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06N 3/045: Neural networks; architecture; combinations of networks
    • G06N 3/08: Neural networks; learning methods
    • G06T 7/11: Image analysis; segmentation; region-based segmentation
    • G06T 7/70: Image analysis; determining position or orientation of objects or cameras
    • G06V 20/10: Scenes; scene-specific elements; terrestrial scenes
    • G06T 2207/20081: Indexing scheme for image analysis; special algorithmic details; training, learning
    • G06T 2207/20084: Indexing scheme for image analysis; special algorithmic details; artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

The invention discloses an aircraft surface defect detection system based on cloud-edge cooperation and deep learning, which relates to the field of image detection and comprises a cloud end, an edge side and a terminal. The system uses two neural networks for defect detection: a lightweight small neural network deployed on the edge side and a large neural network deployed on the cloud end. The edge side first runs a preliminary detection on the pictures collected by the terminal and screens out the pictures that contain defects; these defect pictures are then transmitted to the cloud, where the large neural network accurately detects the defect positions and defect types, and the result is finally transmitted back to the edge side. Through cloud-edge cooperation, the invention reduces the overall time delay of the system and improves the utilization of network resources.

Description

Aircraft surface defect detection system based on cloud edge cooperation and deep learning
Technical Field
The invention relates to the field of image detection, in particular to an aircraft surface defect detection system based on cloud edge cooperation and deep learning.
Background
Detection of surface defects is a critical link in the production, manufacturing and daily maintenance of an aircraft: if corrosion, pits and cracks on the fuselage surface are not repaired in time, irreparable losses may be caused during flight. At present, aircraft surface defects are generally found by visual inspection, in which an experienced worker examines a given area of the aircraft surface to look for possible defects. This manual inspection method is mature but has many drawbacks. First, subjective judgment can lead to misclassification of defects, and fatigue and varying lighting conditions can cause workers to miss or misjudge surface defects. Second, because the aircraft is extremely large relative to a person, maintenance workers need an external platform for surface inspection, which is inconvenient and can lead to safety accidents. Third, the efficiency of manual inspection is very low, and it may be difficult to meet the quality-inspection requirements of an enterprise when aircraft fly frequently and are mass-produced.
In order to overcome the shortcomings of manual inspection, research teams have proposed several automatic surface defect detection methods. The first adopts traditional image detection, using image processing techniques to extract specific defects from a picture. The typical operation flow is to first extract the image boundary to remove unnecessary edge information; then segment the image and extract the regions where defects may exist; then filter out unnecessary noise with a suitable filter; and finally extract the defects with the Sobel operator or another edge detection algorithm. A minimal sketch of this pipeline is given below.
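By way of illustration only (and not forming part of the claimed invention), the classical pipeline described above might be sketched as follows in Python with OpenCV; the crop margins, thresholding method and kernel sizes are assumptions chosen for demonstration rather than values taken from the prior art.

```python
# Hypothetical sketch of the classical defect-detection pipeline described above.
# Crop margins, thresholds and kernel sizes are illustrative assumptions.
import cv2
import numpy as np

def classical_defect_detection(image_path: str) -> np.ndarray:
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)

    # 1. Crop away the image border to discard irrelevant edge information.
    h, w = gray.shape
    roi = gray[int(0.05 * h):int(0.95 * h), int(0.05 * w):int(0.95 * w)]

    # 2. Segment the image to isolate regions where defects may exist.
    _, mask = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # 3. Suppress noise with a suitable filter (here a Gaussian blur).
    denoised = cv2.GaussianBlur(roi, (5, 5), 0)

    # 4. Extract candidate defect edges with the Sobel operator.
    sobel_x = cv2.Sobel(denoised, cv2.CV_64F, 1, 0, ksize=3)
    sobel_y = cv2.Sobel(denoised, cv2.CV_64F, 0, 1, ksize=3)
    edges = cv2.magnitude(sobel_x, sobel_y)

    # Keep edge responses only inside the segmented candidate regions.
    return np.where(mask > 0, edges, 0.0)
```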
The second is an image detection method based on deep learning. Deep learning has been an important research hotspot in artificial intelligence in recent years, and its emergence has allowed computers to achieve a certain degree of intelligence. Its main idea is to construct, by hand, a deep learning network analogous to the neural network of the human brain so that the computer gains a certain learning ability. Through such an artificially constructed network, the computer can automatically learn the internal rules and representation levels of sample data and acquire the ability to analyze and judge data of the same type. The sample data here mainly includes images, text, numerical data and speech. An image detection method based on deep learning first requires selecting a suitable network, and then training it with a large number of pictures whose defect positions and defect types have been calibrated, so that it acquires this learning ability. After training, when a picture of the same type is fed to the network, the network can autonomously give the defect positions and defect types in the picture. In practical applications, a camera is generally deployed in the industrial field to acquire surface images of the area to be inspected, and the images are transmitted to a cloud server over the network. The cloud server hosts the trained deep learning network, judges the positions and types of defects in the uploaded pictures, and then transmits the results back to the industrial field to assist engineering personnel with maintenance.
The traditional image detection method, first, does not generalize well: different detection algorithms must be designed for different defects. Second, it depends heavily on expert knowledge and experience; designing a defect detection algorithm requires detailed knowledge of the structure and shape of the defect, so the demand for prior knowledge is high. For the image detection method based on deep learning, first, network resources in an industrial environment are limited; uploading pictures to a cloud server consumes a large amount of bandwidth and may crowd out other industrial applications. Second, surface defect detection results need to be displayed in real time, but in a cloud-only system the limitations of the field network introduce a large amount of delay, which affects the real-time performance of the detection system.
Therefore, those skilled in the art are working to develop an aircraft surface defect detection system based on cloud-edge cooperation and deep learning that improves the accuracy of the detection results.
Disclosure of Invention
In view of the above defects in the prior art, the technical problem to be solved by the present invention is how to improve the universality of the image detection method, guarantee accuracy, reduce time delay, and allow the detection algorithm to be optimized autonomously as the number of detections grows after the system is deployed in the field, thereby improving detection accuracy.
To achieve this purpose, the invention provides an aircraft surface defect detection system based on cloud-edge cooperation and deep learning, which comprises a cloud end, an edge side and a terminal; a lightweight small neural network is deployed on the edge side, and a large neural network is deployed on the cloud end. The edge side first runs a preliminary detection on the pictures collected by the terminal and screens out the pictures that contain defects; these defect pictures are then transmitted to the cloud, where the large neural network accurately detects the defect positions and defect types, and the detection result is finally transmitted back to the edge side.
The invention also discloses a defect detection method for the aircraft surface defect detection system based on cloud-edge cooperation and deep learning, which comprises the following steps:
step a, acquisition of training picture data and data enhancement;
step b, selection, training and deployment of the neural networks;
step c, acquisition of the terminal data;
step d, processing of the edge-side data;
step e, processing of the cloud data.
Further, the step a further comprises:
step a1, acquiring picture data for training;
step a2, defect calibration;
step a3, the data enhancement.
Further, the step a2 further includes:
step a21, opening a picture to be calibrated;
step a22, clicking 'Create RectBox' to create a calibration frame, and framing the position where the defect is located;
step a23, annotating the type of each selected defect;
step a24, saving the calibrated picture and exporting the xml file.
Further, there are three main ways of data enhancement: first, the original picture is rotated by 90, 180 and 270 degrees, and is also appropriately shrunk and expanded, to generate new pictures; second, the image is cut into several pieces with a 300 x 300 pixel sliding window; third, oversampling and detail copying artificially duplicate the pictures that contain defects, so that defective pictures are trained on multiple times.
Further, the step b further comprises:
step b1, selecting a target detection network as the neural network for both the cloud and the edge side, wherein the cloud selects the YOLO V4 network with the best performance and stability, and the edge side selects the faster but less accurate YOLO V4-tiny network;
step b2, after the neural network is selected, training the neural network and continuously adjusting its weights with the picture data obtained in the step a until the recognition accuracy approaches its maximum;
step b3, deploying the large neural network to the cloud end and the small neural network to the edge side.
Further, in the step b3 the deployment can also be carried out with containers, and the specific steps are as follows:
step b31, using docker to package the program that implements the defect detection, together with the environment for running the program, into an image;
step b32, uploading the image to the docker hub;
step b33, the edge side and the cloud respectively downloading the required images from the docker hub;
and step b34, running operation tests on the cloud and the edge side respectively.
Furthermore, in the step c an unmanned aerial vehicle carrying a camera is selected to photograph the aircraft surface: the surface is first divided into several areas, a maintainer then controls the unmanned aerial vehicle to capture images of a given area, and after shooting is completed the unmanned aerial vehicle transmits the images to the edge-side equipment through an image transmission system.
Further, in the step d the small neural network deployed in the step b is used for identification and produces the following parameters: the number of candidate frames, the position of each frame, the classification of the defect contained in the frame, and the confidence. The confidence represents the credibility of the framed position and the classification result; its value lies between 0 and 1, where values closer to 1 indicate higher credibility and values closer to 0 indicate lower credibility. The system sets a threshold empirically before deployment: when the confidence is greater than the threshold, the result identified by the small neural network is used directly, and when the confidence is smaller than the threshold, the picture is uploaded to the cloud for further detection.
Further, the step e further comprises:
step e1, collecting, at the cloud, the defect pictures uploaded by the edge side, identifying the picture data with the large deep learning network, and transmitting the positions and types of the identified defects back to the edge side;
step e2, the detection result is not returned as a picture; instead the defect positions and defect types are returned directly to the edge side in xml form, and after the edge side receives the defect position information it can automatically draw a result graph identical to the cloud identification result from the defect position and defect type information;
step e3, storing the defect pictures and updating the neural networks: all defect pictures uploaded to the cloud, together with their calibration results, are stored in the cloud database after being confirmed to be correct. Although the neural network trained in the step b has already reached the highest accuracy attainable at that time, its accuracy can certainly be improved by retraining once the amount of defect picture data grows; however, training the network too frequently wastes computing resources, so the network is retrained only when the number of newly added pictures reaches ten percent of the number of pictures used in the previous training. After the new neural network is trained, its recognition accuracy is tested on a test set; if the accuracy is higher than the previous one, the new neural network is deployed to the edge side and the cloud respectively, and if the accuracy differs little from the previous one, it is not deployed, so as to reduce the resource waste caused by frequent deployment.
This patent mainly discloses an aircraft surface defect detection system based on cloud-edge cooperation and deep learning, where cloud-edge cooperation means that the cloud and the edge side cooperate to process the data from the terminal. The cloud has strong computing power but is far from the terminal, so data transmission incurs a certain delay and real-time requirements cannot be met by the cloud alone. The edge side and the terminal are both located in the industrial field, so data transmission is fast, but the task processing capacity of the edge side is limited. A cloud-edge cooperative system can exploit the advantages of both: simple tasks issued by the terminal are processed by the edge side, and tasks that the edge side cannot easily handle are processed by the cloud, which reduces the overall time delay of the system and improves the utilization of network resources. Following this principle of cloud-edge cooperation, the system deploys corresponding deep learning networks on the cloud and on the edge side, which cooperate to complete the defect detection task.
The conception, the specific structure and the technical effects of the present invention will be further described with reference to the accompanying drawings to fully understand the objects, the features and the effects of the present invention.
Drawings
FIG. 1 is a block diagram of an aircraft surface defect detection system in accordance with a preferred embodiment of the present invention;
FIG. 2 is a system pre-preparation workflow diagram in accordance with a preferred embodiment of the present invention;
FIG. 3 is a flow chart of the system operation of a preferred embodiment of the present invention.
Detailed Description
The technical contents of the preferred embodiments of the present invention will be more clearly and easily understood by referring to the drawings attached to the specification. The present invention may be embodied in many different forms of embodiments and the scope of the invention is not limited to the embodiments set forth herein.
As shown in fig. 1 and fig. 2, a preferred embodiment of the present invention comprises a cloud end, an edge side and a terminal; a lightweight small neural network is deployed on the edge side, and a large neural network is deployed on the cloud end. The edge side first runs a preliminary detection on the pictures collected by the terminal and screens out the pictures that contain defects; these defect pictures are then transmitted to the cloud, where the large neural network accurately detects the defect positions and defect types, and the detection result is finally transmitted back to the edge side.
As shown in fig. 3, the defect detection method of the aircraft surface defect detection system based on cloud-edge cooperation and deep learning includes the following steps:
the method comprises the following steps: the training uses the data acquisition of the picture and data enhancement, this step can be divided into the following three steps mainly:
s1, training picture data acquisition
S2, defect calibration
S3, data enhancement
S1, the deep learning network used for defect image detection requires a large number of pictures for training. In general, the larger the number of pictures and the better their quality, the higher the recognition accuracy of the trained network. Picture data for training the network therefore needs to be collected before training. These pictures should contain the defects that the system is meant to identify, and the shooting angle, lighting environment and other shooting conditions should be essentially the same as those of the real surface defect inspection. The collected pictures should include both defective and normal pictures, and if different defect types need to be identified, the numbers of pictures of the different types should be as balanced as possible. To guarantee the recognition accuracy of the network, experience suggests at least about one thousand pictures. After shooting is finished, the pictures are organized so that they can be calibrated in the next step.
S2, after collecting the pictures, the defect pictures need to be calibrated. The purpose of calibration is to indicate the position and type of each defect in a picture. The defect types to be identified are listed first, and their specific names and classification standard are determined. The defects in each picture are then calibrated with software, generally labelImg. The specific process is as follows: first, open the picture to be calibrated; second, click 'Create RectBox' to create a calibration frame and frame the position where the defect is located; third, annotate the type of the selected defect, noting that the type must be one of the defect types listed at the beginning; fourth, save the calibrated picture and export the xml file. xml is the abbreviation of extensible markup language; a file written in xml is called an xml file, and its content generally has a tree structure. Besides the required structure, the xml file generated after calibrating a defect picture contains the defect position and defect type information for that picture. The position of a defect is indicated by four numbers, xmin, ymin, xmax and ymax, which give the coordinates of two opposite corners of the calibration frame; the position of the whole frame can be deduced from these four numbers. The type of defect is represented by a previously agreed character or number. A minimal sketch of reading such a file is shown below.
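By way of illustration only, the following sketch shows how such an annotation file might be read back in Python; the tag layout (object/name, bndbox/xmin and so on) follows the standard Pascal VOC format that labelImg exports, which the description does not spell out in full, so it should be treated as an assumption.

```python
# Minimal sketch for reading a labelImg annotation file. labelImg writes
# Pascal VOC-style xml; the exact tag layout assumed here is the standard
# labelImg output, not text taken from the patent.
import xml.etree.ElementTree as ET

def read_annotation(xml_path: str):
    """Return a list of (defect_type, xmin, ymin, xmax, ymax) tuples."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.findall("object"):
        name = obj.findtext("name")             # previously agreed defect type label
        bb = obj.find("bndbox")
        xmin = int(float(bb.findtext("xmin")))  # one corner of the calibration frame
        ymin = int(float(bb.findtext("ymin")))
        xmax = int(float(bb.findtext("xmax")))  # the opposite corner
        ymax = int(float(bb.findtext("ymax")))
        boxes.append((name, xmin, ymin, xmax, ymax))
    return boxes
```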
S3, data enhancement. Among the pictures collected in S1, the proportion that contain defects is generally not high, because the occurrence of defects is a low-probability event. Furthermore, it may not be possible to take a sufficient number of pictures, since about one thousand pictures is already a demanding amount to collect. These problems can be alleviated by data enhancement.
Data enhancement is a method of processing the raw data to obtain more data, in order to address the shortage of training data in deep learning. It can mainly be performed in the following ways, sketched in code after this paragraph. First, rotation and scaling transforms: the original picture is rotated by 90, 180 and 270 degrees to generate new pictures, and is also appropriately shrunk and expanded to generate further pictures; this enlarges the data set and helps improve the generalization ability of the model. Second, sliding-window cutting: the cameras used for defect detection typically have high resolution, so the images can be very large; although the networks accept images of arbitrary size as input, an image that is too large reduces the computational efficiency of the network, while directly reducing the resolution loses detail. A sliding window of 300 x 300 pixels is therefore used to cut the image into several pieces, with partial overlap between two consecutive windows so that details near the window edges remain intact. Third, oversampling and detail copying: oversampling, i.e. artificially duplicating the pictures that contain defects, lets defective pictures be trained on many times, which can improve the accuracy of the model to some extent; for the small-target detection problem, the defects within a picture can also be copied, artificially increasing the number of times a defect appears in the picture while ensuring other objects are not affected.
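By way of illustration only, the three augmentation strategies might be sketched as follows; the 300 x 300 pixel window follows the description, while the overlap, scale factors and oversampling factor are assumptions chosen for demonstration.

```python
# Illustrative sketch of the three augmentation strategies described above
# (rotation/rescaling, overlapping sliding-window crops, oversampling).
import cv2
import numpy as np

def rotations_and_rescales(img: np.ndarray) -> list:
    """90/180/270-degree rotations plus slight shrink and expansion."""
    out = [cv2.rotate(img, cv2.ROTATE_90_CLOCKWISE),
           cv2.rotate(img, cv2.ROTATE_180),
           cv2.rotate(img, cv2.ROTATE_90_COUNTERCLOCKWISE)]
    for scale in (0.9, 1.1):  # inward contraction / outward expansion (assumed factors)
        out.append(cv2.resize(img, None, fx=scale, fy=scale))
    return out

def sliding_window_crops(img: np.ndarray, win: int = 300, overlap: int = 60) -> list:
    """Cut a high-resolution image into overlapping win x win tiles."""
    h, w = img.shape[:2]
    step = win - overlap
    tiles = []
    for y in range(0, max(h - win, 0) + 1, step):
        for x in range(0, max(w - win, 0) + 1, step):
            tiles.append(img[y:y + win, x:x + win])
    return tiles

def oversample(defect_images: list, factor: int = 3) -> list:
    """Duplicate defect pictures so they are seen several times during training."""
    return defect_images * factor
```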
Step two: the selection, training and deployment of the network are mainly divided into the following three steps:
s1, selecting cloud and edge side network
S2, training of network
S3, deployment of network
After the image data set for training has been obtained, an appropriate network needs to be selected for training. In an aircraft surface defect detection scenario, a target detection network is typically chosen. Target detection can be divided into two key subtasks: target classification and target localization. Target localization judges the position of the target of interest in the picture and selects it with a candidate frame, while target classification determines the kind of object inside the candidate frame. Currently popular target detection networks fall into two categories: the first is the two-stage networks represented by the R-CNN family; the second is the single-stage networks represented by YOLO and SSD. As research has progressed, both kinds of algorithms have developed their own advantages in accuracy and speed. Each sub-step of step two is described in detail below:
and S1, selecting the cloud and the edge side network. According to the description of the system at the beginning, a large neural network with high accuracy and a small neural network with high speed are required to be deployed at the cloud side and the edge side respectively. According to the current research situation at home and abroad, the large neural network selects the YOLO V4 network with the best performance and stability, while the small neural network selects the YOLO V4-tiny network with higher speed and lower accuracy.
S2, training of the networks. After the networks are selected, they need to be trained. The weights of each network are adjusted continuously by feeding in the picture data obtained in step one, until the recognition accuracy of the network approaches its maximum. During training, transfer learning is used to improve the final accuracy of the network. Transfer learning lets a computer transfer knowledge and methods learned in other, data-rich areas to the area of interest, so that it does not have to start from scratch with only the data of that area.
The specific process of transfer learning is as follows: first, a mature public data set or a similar defect data set is used to train the target network so that it obtains initial weights; the data set used here is typically the COCO data set or the VOC data set, both of which are commonly used target detection data sets. Second, the network obtained in the first stage is trained with the image data collected in step one to obtain the final network weights.
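By way of illustration only, the two-stage transfer-learning idea (pretrained weights, then fine-tuning on the defect data set) is sketched below. This is not the patent's YOLO V4/Darknet pipeline: torchvision's COCO-pretrained Faster R-CNN is used here as a stand-in detector, and the data loader, class count and hyperparameters are assumptions.

```python
# Generic transfer-learning sketch: start from COCO-pretrained weights, then
# fine-tune on the calibrated defect pictures. Stand-in for the patent's YOLO V4.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_finetune_model(num_defect_classes: int):
    # Stage 1: start from weights pretrained on the COCO detection data set.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    # Replace the classification head so it predicts the defect classes (+1 background).
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_defect_classes + 1)
    return model

def finetune(model, data_loader, epochs: int = 10, lr: float = 5e-3):
    # Stage 2: continue training on the calibrated defect pictures.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device).train()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for images, targets in data_loader:
            images = [img.to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
            losses = model(images, targets)   # dict of detection losses in train mode
            loss = sum(losses.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```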
And S3, deployment of the network. After the large neural network and the small neural network are trained, the large neural network and the small neural network need to be deployed to the cloud side and the edge side respectively. The large neural network is deployed to the cloud end, and the small neural network is deployed to the edge side. Before deployment, it is necessary to ensure that the environments of the cloud and the edge side meet the operating conditions of the system.
Of course, deployment can alternatively be done with containers. The specific steps are as follows: first, use docker to package the program that implements defect detection, together with its runtime environment, into an image; second, upload the generated image to the docker hub (the docker hub is the official site for storing docker images); third, the edge side and the cloud respectively download the required images from the docker hub; fourth, run operation tests on the cloud and the edge side respectively. Using docker reduces the trouble caused by mismatched operating environments. A hedged sketch of these steps is given below.
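By way of illustration only, the four container steps might look as follows using the Docker SDK for Python (the description itself only presupposes docker and the docker hub); the repository name, tag, build context and test command are hypothetical.

```python
# Sketch of the four container steps above using the Docker SDK for Python.
# Repository name, tag and build context are hypothetical placeholders.
import docker

REPO = "example-user/defect-detector"   # hypothetical Docker Hub repository
TAG = "edge-v1"

client = docker.from_env()

# Step 1: package the detection program and its runtime environment into an image
# (the build context is assumed to contain a Dockerfile).
image, _ = client.images.build(path="./defect_detector", tag=f"{REPO}:{TAG}")

# Step 2: upload the image to Docker Hub (requires prior `docker login`).
client.images.push(REPO, tag=TAG)

# Step 3: on the edge device (and, with its own tag, on the cloud server),
# download the required image.
client.images.pull(REPO, tag=TAG)

# Step 4: run a quick operational test of the container.
container = client.containers.run(f"{REPO}:{TAG}", detach=True)
print(container.logs().decode(errors="ignore"))
container.stop()
```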
To this point, all of the preliminary preparation of the surface defect detection system has been completed.
Step three: terminal data acquisition
Traditional aircraft surface defect inspection requires building a platform so that maintenance personnel can climb close to the fuselage area to be inspected, because of the large difference between the size of an aircraft and the human body. The disadvantage is that when moving from one area to another, the position of the inspection platform must be adjusted and personnel must then be transported onto it. In the system designed in this patent, an unmanned aerial vehicle carrying a camera is used to photograph the aircraft surface. The unmanned aerial vehicle can freely adjust its position in space and select the area to photograph according to the maintenance needs, which removes the trouble of setting up a platform beforehand. Existing unmanned aerial vehicles have reached a mature stage in controllability, shooting performance and cost, and are fully capable of the task of photographing the aircraft surface.
The unmanned aerial vehicle follows the traditional maintenance flow when shooting: the aircraft surface is first divided into several areas, a maintainer then controls the unmanned aerial vehicle to capture images of a given area, and after shooting is completed the unmanned aerial vehicle transmits the pictures to the edge-side equipment through an image transmission system so that defect detection can be performed on them.
Step four: edge side data processing
The edge side receives the pictures transmitted by the unmanned aerial vehicle and identifies them with the small neural network deployed in step two. Since the network deployed on the edge side is a small neural network, the edge side is not expected to fully recognize the defects contained in every picture; the small neural network is deployed on the edge side in order to screen out the pictures containing defects from the many pictures taken. Because pictures containing defects are only a small fraction of all the pictures taken, most of the pictures can be filtered out locally by the edge-side small neural network, which reduces the amount of data transmitted to the cloud.
After the small neural network has run on a picture, it produces the following parameters: the number of candidate boxes, the position of each candidate box, the category of the defect contained in the box, and the confidence of that category. The confidence represents the credibility of the selected positions and classification results; its value lies between 0 and 1, where values closer to 1 indicate higher credibility and values closer to 0 indicate lower credibility. The system sets a threshold empirically before deployment: when the confidence is greater than the threshold, the result identified by the small neural network is used directly, and when the confidence is smaller than the threshold, the picture is uploaded to the cloud for further detection, as sketched below.
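By way of illustration only, the edge-side decision logic might be sketched as follows; the helper functions run_tiny_model and upload_to_cloud, the 0.6 threshold, and the handling of pictures with no candidate box are hypothetical, since the description only requires an empirically chosen threshold.

```python
# Illustrative edge-side decision logic for step four. Helper functions and the
# threshold value are hypothetical placeholders, not part of the patent text.
CONFIDENCE_THRESHOLD = 0.6  # set empirically before deployment (assumed value)

def process_edge_picture(picture, run_tiny_model, upload_to_cloud):
    """Return local detections when confident, otherwise defer to the cloud."""
    detections = run_tiny_model(picture)  # [(box, defect_class, confidence), ...]

    if not detections:
        # No candidate boxes: treat the picture as defect-free, nothing is uploaded.
        return []

    if all(conf >= CONFIDENCE_THRESHOLD for _, _, conf in detections):
        # The small network is confident enough; use its result directly.
        return detections

    # Low-confidence picture: send it to the cloud for the large network.
    return upload_to_cloud(picture)
```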
Step five: cloud data processing, which can mainly be divided into the following three steps:
s1, detecting defective pictures
S2, result feedback
S3, storing defect picture and updating deep learning network
S1, detection of defect pictures. The defect picture data uploaded from the edge side is collected at the cloud and identified with the large deep learning network, which can accurately identify the positions and types of all defects in a picture and return the identified results to the edge side.
S2, result feedback. To reduce the amount of data transmitted, the detection result is not returned as a picture; instead the positions of the candidate frames and the defect types are returned directly to the edge side in xml form. After the edge side receives the returned result, it can automatically draw a result graph identical to the cloud identification result from the candidate-frame positions and defect type information, as in the sketch below.
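By way of illustration only, the xml feedback path might be sketched as follows: the cloud serializes the candidate-frame positions and defect types to xml, and the edge side redraws the result on its locally stored copy of the picture. The tag names mirror the annotation sketch above and are an assumption, not text from the description.

```python
# Sketch of the feedback path in S2. Tag names are assumed, not patent text.
import xml.etree.ElementTree as ET
import cv2

def result_to_xml(detections) -> str:
    """On the cloud, serialize [(defect_type, xmin, ymin, xmax, ymax), ...] to xml."""
    root = ET.Element("result")
    for name, xmin, ymin, xmax, ymax in detections:
        obj = ET.SubElement(root, "object")
        ET.SubElement(obj, "name").text = name
        box = ET.SubElement(obj, "bndbox")
        for tag, val in zip(("xmin", "ymin", "xmax", "ymax"), (xmin, ymin, xmax, ymax)):
            ET.SubElement(box, tag).text = str(val)
    return ET.tostring(root, encoding="unicode")

def draw_result(image, result_xml: str):
    """On the edge side, draw the returned boxes onto the locally stored picture."""
    root = ET.fromstring(result_xml)
    for obj in root.findall("object"):
        name = obj.findtext("name")
        bb = obj.find("bndbox")
        xmin, ymin, xmax, ymax = (int(bb.findtext(t)) for t in ("xmin", "ymin", "xmax", "ymax"))
        cv2.rectangle(image, (xmin, ymin), (xmax, ymax), (0, 0, 255), 2)
        cv2.putText(image, name, (xmin, max(ymin - 5, 0)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
    return image
```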
S3, storage of defect pictures and updating of the deep learning networks. All defect pictures uploaded to the cloud, together with their calibration results, are stored in the cloud database after being confirmed to be correct. These pictures are used for analysis of defect causes and for updating the deep learning networks. Although the deep learning network trained in step two has already reached the highest accuracy attainable at that time, its accuracy can certainly be improved by retraining once the amount of picture data grows. However, training the network too frequently wastes computing resources, so the network is retrained only when the number of newly added pictures reaches ten percent of the number of pictures used in the previous training.
After the new network has been trained, the recognition accuracy of the newly generated deep learning network is tested on the test set. If the accuracy is higher than that of the previously generated deep learning network, the new network is deployed to the edge side and the cloud respectively; if the accuracy differs little from the previous one, the network is not deployed, so as to reduce the resource waste caused by frequent deployment. A sketch of this update policy follows.
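By way of illustration only, the update policy might be sketched as follows; the ten-percent retraining trigger follows the description, while the accuracy margin used to decide that results are "not much different" and the helper functions are assumptions.

```python
# Sketch of the cloud-side update policy in S3. The 1% margin and the helper
# callables (retrain, evaluate, deploy) are hypothetical placeholders.
RETRAIN_RATIO = 0.10       # from the description: 10% new pictures triggers retraining
MIN_IMPROVEMENT = 0.01     # assumed margin defining "not much different"

def maybe_update_network(num_new_pictures: int, last_train_size: int,
                         retrain, evaluate, deploy, last_accuracy: float):
    """Decide whether to retrain and whether to deploy the retrained network."""
    if num_new_pictures < RETRAIN_RATIO * last_train_size:
        return "wait"                      # too few new pictures; avoid wasting compute

    new_model = retrain()                  # train again on the enlarged data set
    new_accuracy = evaluate(new_model)     # measure accuracy on the held-out test set

    if new_accuracy >= last_accuracy + MIN_IMPROVEMENT:
        deploy(new_model)                  # push to both the edge side and the cloud
        return "deployed"
    return "kept_previous"                 # accuracy barely changed; skip redeployment
```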
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations could be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, the technical solutions available to those skilled in the art through logic analysis, reasoning and limited experiments based on the prior art according to the concept of the present invention should be within the scope of protection defined by the claims.

Claims (10)

1. An aircraft surface defect detection system based on cloud-edge cooperation and deep learning, characterized by comprising a cloud end, an edge side and a terminal, wherein a lightweight small neural network is deployed on the edge side and a large neural network is deployed on the cloud end; the edge side first detects the pictures collected by the terminal to obtain a preliminary detection result and screens out the pictures containing defects, then transmits the defect pictures to the cloud, where the large neural network accurately detects the defect positions and defect types, and the detection result is finally transmitted back to the edge side.
2. The method for detecting the defects of the airplane surface defect detection system based on cloud edge coordination and deep learning as claimed in claim 1, which comprises the following steps:
step a, acquisition of training picture data and data enhancement;
step b, selection, training and deployment of the neural networks;
step c, acquisition of the terminal data;
step d, processing of the edge-side data;
step e, processing of the cloud data.
3. The method for detecting the defects of the aircraft surface defect detection system based on the cloud edge coordination and the deep learning as claimed in claim 2, wherein the step a further comprises:
step a1, acquiring picture data for training;
step a2, defect calibration;
step a3, data enhancement.
4. The method for detecting the defects of the aircraft surface defect detection system based on cloud edge coordination and deep learning as claimed in claim 3, wherein the step a2 further comprises:
step a21, opening a picture to be calibrated;
step a22, clicking 'Create RectBox' to create a calibration frame, and framing the position where the defect is located;
step a23, annotating the type of each selected defect;
step a24, saving the calibrated picture and exporting the xml file.
5. The method for detecting the defects of the aircraft surface defect detection system based on cloud edge coordination and deep learning as claimed in claim 3, wherein the step a3 is mainly carried out in three ways: first, the original picture is rotated by 90, 180 and 270 degrees and is also appropriately shrunk and expanded to generate new pictures; second, the image is cut into several pieces with a 300 x 300 pixel sliding window; third, oversampling and detail copying artificially duplicate the pictures containing defects, so that defective pictures are trained on multiple times.
6. The method for detecting the defects of the aircraft surface defect detection system based on the cloud edge coordination and the deep learning as claimed in claim 2, wherein the step b further comprises:
step b1, selecting a target detection network as the neural network for both the cloud and the edge side, wherein the cloud selects the YOLO V4 network with the best performance and stability, and the edge side selects the faster but less accurate YOLO V4-tiny network;
step b2, after the neural network is selected, training the neural network and continuously adjusting its weights with the picture data obtained in the step a to improve the recognition accuracy;
step b3, deploying the large neural network to the cloud end and the small neural network to the edge side.
7. The method for detecting the defects of the aircraft surface defect detection system based on cloud edge coordination and deep learning as claimed in claim 6, wherein in the step b3 the deployment can alternatively be carried out with containers, comprising the following specific steps:
step b31, using docker to package the program that implements the defect detection, together with the environment for running the program, into an image;
step b32, uploading the image to the docker hub;
step b33, the edge side and the cloud respectively downloading the required images from the docker hub;
and step b34, running operation tests on the cloud and the edge side respectively.
8. The method for detecting the defects of the airplane surface defect detection system based on cloud edge coordination and deep learning as claimed in claim 2, wherein in the step c an unmanned aerial vehicle carrying a camera is selected to photograph the airplane surface, the airplane surface is first divided into a plurality of areas, a maintainer then operates the unmanned aerial vehicle to capture images of a given area, and after shooting is completed the unmanned aerial vehicle transmits the images to the edge-side device through an image transmission system.
9. The method for detecting the defects of the aircraft surface defect detection system based on cloud edge coordination and deep learning as claimed in claim 2, wherein the step d of identifying by using the small neural network deployed in the step b generates the following parameters: the number of candidate frames, the framing position, the classification result of the defect contained in the frame and the confidence coefficient, wherein the confidence coefficient represents the reliability of the framing position and the classification result, and the value of the confidence coefficient is between 0 and 1; the system sets a threshold value according to experience before deployment, when the confidence coefficient is larger than the threshold value, the result identified by the small neural network is selected to be directly used, and when the confidence coefficient is smaller than the threshold value, the identified picture is uploaded to the cloud end for next detection.
10. The method for detecting the defects of the aircraft surface defect detection system based on the cloud edge coordination and the deep learning as claimed in claim 2, wherein the step e further comprises:
step e1, collecting the defect picture uploaded by the edge side at the cloud, identifying the picture data by using the large-scale deep learning network, and returning an identification result to the edge side;
step e2, the detection result is not returned in a picture form, but the defect position and the defect type are directly returned to the edge side in an xml form, and after the edge side receives the detection result, a result graph which is the same as the cloud identification result can be automatically drawn according to the defect position and the defect type;
step e3, storing the defect pictures and updating the neural network, wherein all the defect pictures uploaded to the cloud and the calibration results thereof are stored in the cloud database after being determined to be correct; when the number of the newly added pictures reaches ten percent of the number of the last training pictures, training the network again; after the new neural network is trained, testing the recognition accuracy of the newly generated neural network by using a test set, and if the accuracy is higher than that of the last time, respectively deploying the new neural network to the edge side and the cloud end; and if the difference between the accuracy and the last generated accuracy is small, selecting not to deploy.
CN202011320279.3A 2020-11-23 2020-11-23 Aircraft surface defect detection system based on cloud edge cooperation and deep learning Pending CN112419401A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011320279.3A CN112419401A (en) 2020-11-23 2020-11-23 Aircraft surface defect detection system based on cloud edge cooperation and deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011320279.3A CN112419401A (en) 2020-11-23 2020-11-23 Aircraft surface defect detection system based on cloud edge cooperation and deep learning

Publications (1)

Publication Number Publication Date
CN112419401A true CN112419401A (en) 2021-02-26

Family

ID=74777893

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011320279.3A Pending CN112419401A (en) 2020-11-23 2020-11-23 Aircraft surface defect detection system based on cloud edge cooperation and deep learning

Country Status (1)

Country Link
CN (1) CN112419401A (en)



Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109543829A (en) * 2018-10-15 2019-03-29 华东计算技术研究所(中国电子科技集团公司第三十二研究所) Method and system for hybrid deployment of deep learning neural network on terminal and cloud
CN110443298A (en) * 2019-07-31 2019-11-12 华中科技大学 It is a kind of based on cloud-edge cooperated computing DDNN and its construction method and application
CN111080627A (en) * 2019-12-20 2020-04-28 南京航空航天大学 2D +3D large airplane appearance defect detection and analysis method based on deep learning
CN111142049A (en) * 2020-01-16 2020-05-12 合肥工业大学 Intelligent transformer fault diagnosis method based on edge cloud cooperation mechanism
CN111340754A (en) * 2020-01-18 2020-06-26 中国人民解放军国防科技大学 Method for detecting and classifying surface defects based on aircraft skin
CN111431986A (en) * 2020-03-18 2020-07-17 宁波智诚祥科技发展有限公司 Industrial intelligent quality inspection system based on 5G and AI cloud edge cooperation
CN111784685A (en) * 2020-07-17 2020-10-16 国网湖南省电力有限公司 Power transmission line defect image identification method based on cloud edge cooperative detection

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陈从翰 (Chen Conghan): "Application of a deep neural network based on the YOLOv3 algorithm to aircraft surface defect detection and recognition", China Master's Theses Full-text Database, Information Science and Technology II *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112966608A (en) * 2021-03-05 2021-06-15 哈尔滨工业大学 Target detection method, system and storage medium based on edge-side cooperation
CN112819829A (en) * 2021-04-19 2021-05-18 征图新视(江苏)科技股份有限公司 Visual defect detection method based on double-depth learning model
CN113361414B (en) * 2021-06-08 2022-09-02 天津大学 Remote sensing image cloud amount calculation method based on composite neural network
CN113361414A (en) * 2021-06-08 2021-09-07 天津大学 Remote sensing image cloud amount calculation method based on composite neural network
CN113252701A (en) * 2021-07-02 2021-08-13 湖南大学 Cloud edge cooperation-based power transmission line insulator self-explosion defect detection system and method
CN113252701B (en) * 2021-07-02 2021-10-26 湖南大学 Cloud edge cooperation-based power transmission line insulator self-explosion defect detection system and method
CN113567466B (en) * 2021-08-02 2022-10-28 大量科技(涟水)有限公司 Intelligent identification method for appearance defects of microchip
CN113567466A (en) * 2021-08-02 2021-10-29 大量科技(涟水)有限公司 Intelligent identification system and method for appearance defects of microchip
WO2023093053A1 (en) * 2021-11-25 2023-06-01 达闼科技(北京)有限公司 Inference implementation method, network, electronic device, and storage medium
WO2023169053A1 (en) * 2022-03-07 2023-09-14 北京拙河科技有限公司 Target tracking method and system based on camera array
CN114627089A (en) * 2022-03-21 2022-06-14 成都数之联科技股份有限公司 Defect identification method, defect identification device, computer equipment and computer readable storage medium
CN114445411A (en) * 2022-04-11 2022-05-06 广东电网有限责任公司佛山供电局 Unmanned aerial vehicle line patrol defect identification system and control method
CN116938601A (en) * 2023-09-15 2023-10-24 湖南视觉伟业智能科技有限公司 Division authentication method for real-name authentication equipment
CN116938601B (en) * 2023-09-15 2023-11-24 湖南视觉伟业智能科技有限公司 Division authentication method for real-name authentication equipment

Similar Documents

Publication Publication Date Title
CN112419401A (en) Aircraft surface defect detection system based on cloud edge cooperation and deep learning
CN106971152B (en) Method for detecting bird nest in power transmission line based on aerial images
US11380232B2 (en) Display screen quality detection method, apparatus, electronic device and storage medium
CN110070530B (en) Transmission line icing detection method based on deep neural network
CN108037770B (en) Unmanned aerial vehicle power transmission line inspection system and method based on artificial intelligence
CN111723654B (en) High-altitude parabolic detection method and device based on background modeling, YOLOv3 and self-optimization
CN103679674B (en) Method and system for splicing images of unmanned aircrafts in real time
CN108038424B (en) Visual automatic detection method suitable for high-altitude operation
CN106203265A (en) A kind of Construction Fugitive Dust Pollution based on unmanned plane collection image is derived from dynamic monitoring and coverage prognoses system and method
WO2020007118A1 (en) Display screen peripheral circuit detection method and device, electronic equipment and storage medium
CN110197231A (en) The bird feelings detecting devices merged based on visible light and infrared light image and recognition methods
CN109672863A (en) A kind of construction personnel's safety equipment intelligent monitoring method based on image recognition
CN106851229B (en) Security and protection intelligent decision method and system based on image recognition
Mao et al. Development of power transmission line defects diagnosis system for UAV inspection based on binocular depth imaging technology
CN112837282A (en) Small sample image defect detection method based on cloud edge cooperation and deep learning
KR102270834B1 (en) Method and system for recognizing marine object using hyperspectral data
CN112367400B (en) Intelligent inspection method and system for power internet of things with edge cloud coordination
CN107818303A (en) Unmanned plane oil-gas pipeline image automatic comparative analysis method, system and software memory
CN109887343A (en) It takes to a kind of flight and ensures node automatic collection monitoring system and method
CN112947511A (en) Method for inspecting fan blade by unmanned aerial vehicle
CN116846059A (en) Edge detection system for power grid inspection and monitoring
CN109297978A (en) The inspection of power circuit unmanned plane and fault intelligence diagnosis system based on binocular imaging
CN115660647A (en) Maintenance method for building outer wall
CN115877865A (en) Unmanned aerial vehicle inspection method and device and unmanned aerial vehicle inspection system
CN111783891B (en) Customized object detection method

Legal Events

• PB01: Publication
• SE01: Entry into force of request for substantive examination
• RJ01: Rejection of invention patent application after publication (application publication date: 20210226)