WO2020232816A1 - Quality inspection method for a target object and edge computing device - Google Patents

Quality inspection method for a target object and edge computing device

Info

Publication number
WO2020232816A1
WO2020232816A1 PCT/CN2019/096174 CN2019096174W WO2020232816A1 WO 2020232816 A1 WO2020232816 A1 WO 2020232816A1 CN 2019096174 W CN2019096174 W CN 2019096174W WO 2020232816 A1 WO2020232816 A1 WO 2020232816A1
Authority
WO
WIPO (PCT)
Prior art keywords
target object
edge computing
mathematical model
video frame
defect
Prior art date
Application number
PCT/CN2019/096174
Other languages
English (en)
French (fr)
Inventor
沈建发
林立
Original Assignee
网宿科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 网宿科技股份有限公司 filed Critical 网宿科技股份有限公司
Priority to EP19920647.5A priority Critical patent/EP3770851A4/en
Priority to US17/061,475 priority patent/US20210034044A1/en
Publication of WO2020232816A1 publication Critical patent/WO2020232816A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00Programme-control systems
    • G05B19/02Programme-control systems electric
    • G05B19/418Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
    • G05B19/41875Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM] characterised by quality surveillance of production
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/06Recognition of objects for industrial automation

Definitions

  • This application relates to the field of Internet technology, and in particular to a quality inspection method and edge computing equipment for a target object.
  • the purpose of this application is to provide a quality inspection method and edge computing device for a target object, which can improve the quality inspection efficiency of the target object while saving labor costs.
  • One aspect of the present application provides a quality inspection method for a target object.
  • The method includes: an edge computing device receives a target image of the target object currently to be quality-inspected; based on a mathematical model from the edge computing platform, the target image is recognized to determine whether the target object has defects.
  • The edge computing device includes: a target image receiving unit for receiving a target image of the target object currently to be inspected; and a defect recognition unit for recognizing the target image based on the mathematical model from the edge computing platform, so as to determine whether the target object has defects.
  • The edge computing device includes a processor and a memory.
  • The memory is used to store a computer program.
  • When the computer program is executed by the processor, the above-mentioned quality inspection method for the target object is implemented.
  • The technical solution provided by this application can use machine learning methods to train on a large number of samples of the target object to obtain a mathematical model capable of identifying defects, which can then be used to detect whether the currently produced target object has defects.
  • The edge computing device can store the trained mathematical model locally and can use the mathematical model to automatically inspect the target image of the target object currently undergoing quality inspection, thereby judging whether the target object in the target image has defects.
  • The present application can automatically detect defects of the target object, thereby saving a large amount of labor cost and providing high detection efficiency.
  • Figure 1 is a schematic structural diagram of a product line system in an embodiment of the present application.
  • Figure 2 is a schematic diagram of a quality inspection method for a target object in an embodiment of the present application
  • FIG. 3 is a schematic diagram of functional modules of an edge computing device in an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of an edge computing device in an embodiment of the present application.
  • Fig. 5 is a schematic structural diagram of a computer terminal in an embodiment of the present application.
  • This application provides a quality inspection method for a target object, and the target object can be any product produced on a product line.
  • This method can be applied to the product line system shown in FIG. 1. In the product line system, multiple image acquisition devices with camera functions can be deployed on the production platform of each product.
  • The image acquisition device can shoot the finished product in real time, thereby obtaining the video stream data of the product.
  • The product line system may also include an edge computing platform and multiple edge computing devices, where an edge computing device may be connected to one or more image acquisition devices, so that the video stream data captured by the one or more image acquisition devices can be acquired.
  • The edge computing device can generate image samples of the product according to the acquired video stream data and can upload the image samples to the edge computing platform, so that the training of the mathematical model is performed on the edge computing platform.
  • The edge computing device can also perform defect detection on products based on the mathematical model obtained through training.
  • The computing power of edge computing devices is usually lower than that of the edge computing platform. Therefore, simple computing processes such as defect detection and video stream data preprocessing can be completed on edge computing devices, while complex computing processes such as mathematical model training are performed on the edge computing platform.
  • the quality inspection method of the target object provided in this application can be executed by the above-mentioned edge computing device, and the method can include the following steps.
  • the edge computing device receives the target image of the target object currently to be inspected.
  • the image acquisition device and the edge computing device may communicate in a wired or wireless manner.
  • the image capture device and the edge computing device may be connected through a network cable, so that the video stream data captured by the image capture device is transmitted through the network cable.
  • the image acquisition device and the edge computing device can also transmit data through wireless communication methods such as WiFi, Bluetooth, ZigBee (Zigbee), and mobile communication (2G/3G/4G/5G).
  • In the edge computing device, a video stream reading tool can be installed in advance, and the communication address of the image capture device can be configured in the video stream reading tool. In this way, through the video stream reading tool, the edge computing device can obtain, in real time, the video stream data captured by the image capture device for the current target object.
  • The video stream reading tool can be the ffmpeg tool, and the edge computing device and the image capture device can follow the Real Time Streaming Protocol (RTSP) for video stream data transmission.
  • the video stream data may include different types of video frames.
  • the video stream data may include I frame (key frame), P frame (forward prediction coding frame), and B frame (bidirectional prediction interpolation coding frame).
  • The I frame can completely retain the content of the video picture.
  • The P frame usually only retains the difference from the I frame.
  • The B frame only retains the differences from the preceding and following video frames.
  • The I frame can be decoded independently, and the content of the video picture can be restored after decoding.
  • The P frame and B frame cannot be decoded independently; they need to be decoded together with the I frame or with the preceding and following video frames to restore the content of the video picture.
  • Therefore, after the edge computing device obtains the video stream data of the target object, it can identify, from the video stream data, the video frames that can be decoded independently to restore the picture content.
  • This type of video frame can be used as a valid video frame of the target object.
  • the header information of the video frame may include a field for characterizing the type of the video frame.
  • this field may be a picture_coding_type field.
  • Based on the flag bit carried in this field, the type of the current video frame can be determined.
  • The edge computing device can traverse each video frame in the video stream data and identify a designated field in the header information of each video frame, and the designated field can be the aforementioned picture_coding_type field. If the flag bit carried in the designated field indicates that the video frame is a key frame, the video frame may be used as a valid video frame of the target object. After these valid video frames are decoded, the content of the video frames can be displayed completely, so that product defect detection can be performed on these valid video frames.
  • The data amount of the valid video frames identified from the video stream data is usually large. If the identified valid video frames are directly uploaded to the edge computing platform for mathematical model training, it will, on the one hand, consume considerable transmission bandwidth and, on the other hand, increase the amount of data to be processed during the training of the mathematical model.
  • In view of this, the edge computing device may perform data filtering on the identified valid video frames. Specifically, in the images shot for the target object, the background is often the same, so after a valid video frame is identified, the background in the valid video frame can be filtered out.
  • The background where the target object is located may be a solid-color background, and the background color differs from the color of the target object and from the colors of defects that may appear on the surface of the target object.
  • For example, if the target object is silver and possible surface defects are black, the background color can be green.
  • The background color of the background where the target object is located can be pre-recorded in the edge computing device. After the edge computing device recognizes a valid video frame, it can read the background color and filter it out of the video picture displayed by the valid video frame.
  • The edge computing device can then use the data corresponding to the video picture with the background color filtered out as the feature data extracted from the valid video frame, and the data amount of this feature data is smaller than that of the valid video frame.
  • In this way, the amount of data in the transmission process and the training process can be reduced.
  • The edge computing device can also use a target detection algorithm to identify the target object in the video picture displayed by the valid video frame, so that the background in the valid video frame can likewise be filtered out, thereby obtaining the feature data of the target object.
  • Edge computing devices can choose one of many target detection algorithms to identify target objects. These target detection algorithms can include R-CNN (Regions with CNN features), Fast R-CNN, ION (Inside-Outside Net), HyperNet, and other algorithms.
  • The edge computing device can also use other image compression algorithms to compress the data of the valid video frame and upload the compressed valid video frame to the edge computing platform.
  • This application does not limit the algorithm used to reduce the data amount of the valid video frames; any algorithm that can reduce the data amount is included in the protection scope of this application.
  • the edge computing device may upload the feature data to the edge computing platform to train the mathematical model of the target object through the edge computing platform.
  • the feature data uploaded by the edge computing device may contain various defects.
  • quality inspectors can manually label each feature data in the edge computing platform, thereby marking the defect type and defect location contained in the feature data.
  • the manual annotation result of the feature data can be received by the edge computing platform.
  • the edge computing platform can classify the feature data to form a training sample set.
  • In the training sample set, there can be a large number of feature data samples for each different defect type.
  • the training sample set can be input into a preset mathematical model, so as to train the preset mathematical model.
  • the mathematical model may be an existing classifier.
  • the mathematical model may be a K-nearest neighbor classifier, a naive Bayes classifier, a support vector machine classifier, a decision tree classifier, a convolutional neural network classifier, etc.
  • training samples can be input into the mathematical model, and the output result of the mathematical model can be a probability vector.
  • For example, if the training samples contain 13 different defect types in total, the output result of the mathematical model can be a probability vector containing 13 probability values, each of which represents the probability of belonging to a certain defect type.
  • The defect type corresponding to the largest probability value can be used as the defect type predicted based on the input training sample.
  • In the early stage of training, the predicted defect type may be inconsistent with the defect type actually represented by the input training sample.
  • In this case, the mathematical model needs to be corrected many times, until the defect recognition result obtained by the trained mathematical model for the input feature data is consistent with the manual annotation result of the input feature data. In this way, after continuous training on a large number of training samples, the parameters in the mathematical model can be corrected repeatedly, so that the mathematical model can correctly predict the types of defects contained in the training samples.
  • S5: Recognize the target image based on the mathematical model from the edge computing platform to determine whether the target object has defects.
  • After the edge computing platform finishes training the mathematical model, the mathematical model can be fed back to the edge computing device.
  • The edge computing device can store the mathematical model locally. In this way, after the edge computing device obtains the current target image to be inspected, it can input the target image into the locally stored mathematical model and determine, based on the output result of the mathematical model, whether the target object has a defect in the target image.
  • the mathematical model may include a model algorithm, model metadata, and model weight parameters.
  • the model algorithm can match the type of the mathematical model.
  • For example, for a convolutional neural network classifier, the adopted algorithm is a convolutional neural network algorithm.
  • The model metadata can index the training sample set used in the training process of the mathematical model.
  • The training sample set may include a data set of feature data and defect data sets of the various defects.
  • The model weight parameter can be used as a threshold for determining defects.
  • Any target probability value in the probability vector may be compared with the model weight parameter; if the target probability value is greater than or equal to the model weight parameter, the defect type corresponding to the target probability value can be taken as a defect of the target object in the target image.
  • For example, for an input target image, the mathematical model outputs a probability vector containing 13 probability values. The edge computing device can then compare these 13 probability values with the model weight parameter. If, among the 13 probability values, only the probability value representing a black line is greater than the model weight parameter, it indicates that, according to the detection by the mathematical model, the defect type of the target object in the target image is the black line.
  • If each probability value in the probability vector is less than the model weight parameter, it indicates that there is no defect in the target image.
  • In some cases, the edge computing device may only need to determine whether the target object has a defect in the target image, without caring how many kinds of defects exist. In this case, the edge computing device may determine the maximum probability value among the multiple probability values contained in the probability vector and compare the maximum probability value with the model weight parameter. If the maximum probability value is greater than or equal to the model weight parameter, the defect type corresponding to the maximum probability value can be taken as the defect of the target object in the target image. If the maximum probability value is less than the model weight parameter, it indicates that there is no defect in the target image.
  • When the edge computing device determines that the target object has a defect in the target image, it indicates that the production line of the target object may have a process failure. At this time, the edge computing device can send a stop-production instruction to the industrial control equipment in the production workshop through the OT (Operation Technology) layer. In this way, the industrial control equipment can suspend the production line of the target object. At the same time, the edge computing device can also send out an alarm message and mark the location of the defect in the target image, to remind the quality inspector to re-inspect the defect in the target image. In practical applications, the alarm prompt information may be sound-and-light information, or text or graphic information displayed on an electronic billboard.
  • After observing the defect location marked in the target image, the quality inspector can overhaul the production line of the target object if the defect is confirmed. If the quality inspector finds that there is no defect in the target image, or that the recognized defect type is wrong, a false-alarm confirmation instruction can be fed back to the edge computing device, indicating that the recognition result of the edge computing device for the target image is wrong. In this case, the mathematical model in the edge computing device may have a recognition error, and the edge computing device can upload the target image to the edge computing platform.
  • the edge computing platform can generate the artificial labeling result of the target image in the above-mentioned manner, and can use the artificial labeling result to correct the trained mathematical model, so that the corrected mathematical model can correctly judge according to the target image Whether the target object has defects.
  • the edge computing platform can send the corrected mathematical model to the edge computing device.
  • the edge computing device can receive the corrected mathematical model fed back by the edge computing platform, and use the corrected mathematical model to replace the locally stored mathematical model, and subsequently can use the corrected mathematical model for defect identification.
  • different mathematical models can be trained for different target objects.
  • different mathematical models can also be stored locally on the edge computing device.
  • When the edge computing device obtains the target image of the target object, it can first identify the type of the target object and can select a mathematical model matching the type of the target object to detect the target image.
  • the edge computing device includes:
  • the target image receiving unit is used to receive the target image of the target object currently to be inspected
  • the defect recognition unit is used for recognizing the target image based on the mathematical model of the edge computing platform to determine whether the target object has a defect.
  • The edge computing device further includes: a feature data extraction unit, configured to extract feature data of the target object from the video stream data of the target object and upload the feature data to the edge computing platform, where the edge computing platform trains the mathematical model of the target object according to the received feature data.
  • The feature data extraction unit includes: a valid video frame identification module, configured to identify the valid video frames of the target object from the video stream data and extract the feature data of the target object from the valid video frames.
  • The valid video frame identification module includes: a designated field identification module, configured to traverse each video frame in the video stream data and identify a designated field in the header information of each video frame;
  • a key frame identification module, configured to use the video frame as a valid video frame of the target object if the flag bit carried in the designated field indicates that the video frame is a key frame.
  • The valid video frame identification module includes: a background filtering module, used to read the background color of the background where the target object is located and to filter the background color out of the video picture displayed by the valid video frame; and a feature data determination module, used to take the data corresponding to the video picture with the background color filtered out as the feature data of the target object extracted from the valid video frame.
  • The edge computing platform includes: an annotation result receiving unit, configured to receive the manual annotation result of the feature data, the manual annotation result being used to at least characterize the type of defect existing in the feature data; and a training unit, used to classify the feature data according to defect type to form a training sample set and use the training sample set to train a preset mathematical model, so that the defect recognition result obtained by the trained mathematical model for the input feature data is consistent with the manual annotation result of the input feature data.
  • The trained mathematical model at least includes a model weight parameter, the model weight parameter is used as a threshold for determining defects, and the output result of the mathematical model is a probability vector containing multiple probability values, where different probability values correspond to different defect types.
  • The quality inspection unit includes: a traversal comparison module, configured to compare any target probability value in the probability vector with the model weight parameter and, if the target probability value is greater than or equal to the model weight parameter, take the defect type corresponding to the target probability value as a defect of the target object in the target image.
  • The quality inspection unit includes: a maximum probability value comparison module, configured to determine the maximum probability value among the multiple probability values contained in the probability vector, compare the maximum probability value with the model weight parameter, and, if the maximum probability value is greater than or equal to the model weight parameter, take the defect type corresponding to the maximum probability value as the defect of the target object in the target image.
  • The edge computing device further includes: a shutdown instruction sending unit, configured to send a stop-production instruction to the industrial control equipment when it is determined that the target object has a defect in the target image, so that the industrial control equipment suspends the production line of the target object; and an alarm prompt unit, used to send out alarm prompt information and mark the position of the defect in the target image.
  • The edge computing device further includes: a target image uploading unit, configured to upload the target image to the edge computing platform if a false-alarm confirmation instruction for the alarm prompt information is received, so that the edge computing platform corrects the mathematical model according to the target image; and a model replacement unit, configured to receive the corrected mathematical model fed back by the edge computing platform and use the corrected mathematical model to replace the locally stored mathematical model.
  • this application also provides an edge computing device.
  • the edge computing device includes a processor and a memory.
  • The memory is used to store a computer program; when the computer program is executed by the processor, the above quality inspection method for the target object is implemented.
  • The computer terminal 10 may include one or more processors 102 (only one is shown in the figure; the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), a memory 104 for storing data, and a transmission module 106 for communication functions.
  • The structure shown in FIG. 5 is only illustrative and does not limit the structure of the above electronic device.
  • For example, the computer terminal 10 may also include more or fewer components than those shown in FIG. 5, or have a configuration different from that shown in FIG. 5.
  • the memory 104 may be used to store software programs and modules of application software.
  • the processor 102 executes various functional applications and data processing by running the software programs and modules stored in the memory 104.
  • the memory 104 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
  • the memory 104 may further include a memory remotely provided with respect to the processor 102, and these remote memories may be connected to the computer terminal 10 via a network. Examples of the aforementioned networks include, but are not limited to, the Internet, corporate intranets, local area networks, mobile communication networks, and combinations thereof.
  • the transmission device 106 is used to receive or send data via a network.
  • the above-mentioned specific examples of the network may include a wireless network provided by the communication provider of the computer terminal 10.
  • the transmission device 106 includes a network adapter (Network Interface Controller, NIC), which can be connected to other network devices through a base station to communicate with the Internet.
  • the transmission device 106 may be a radio frequency (RF) module, which is used to communicate with the Internet in a wireless manner.
  • The technical solution provided by this application can use machine learning methods to train on a large number of samples of the target object to obtain a mathematical model capable of identifying defects, which can then be used to detect whether the currently produced target object has defects.
  • The edge computing device can obtain, in real time, the video stream data shot for the target object on the product line, and the video stream data can include the data of each video frame. In practical applications, not every video frame can individually display the image of the target object. In view of this, after acquiring the video frame data of the target object, the edge computing device can identify the valid video frames that can display the image of the target object on their own, extract the feature data of the target object from the identified valid video frames, and then upload the feature data to the edge computing platform.
  • After the edge computing platform receives a large amount of feature data of the target object, it can use the feature data to train the mathematical model of the target object through machine learning. After completing the training, the edge computing platform can send the trained mathematical model to the edge computing device. In this way, the edge computing device can store the trained mathematical model locally and can use the mathematical model to automatically inspect the target image of the target object currently undergoing quality inspection, thereby judging whether the target object in the target image has defects. It can be seen from the above that, after the mathematical model is trained by the machine learning method, the present application can automatically detect defects of the target object, thereby saving a large amount of labor cost and providing high detection efficiency.
  • each embodiment can be implemented by software plus a necessary general hardware platform, and of course, it can also be implemented by hardware.
  • The above technical solutions can be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk, an optical disc, etc., and which includes a number of instructions to make a computer device (which may be a personal computer, a server, or a network device, etc.) execute the methods described in each embodiment or in some parts of the embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Quality & Reliability (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Automation & Control Theory (AREA)
  • Manufacturing & Machinery (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computational Linguistics (AREA)
  • Image Analysis (AREA)

Abstract

This application discloses a quality inspection method for a target object and an edge computing device. The method includes: an edge computing device receives a target image of the target object currently to be quality-inspected (S1); and based on a mathematical model from an edge computing platform, the target image is recognized to determine whether the target object has a defect (S3). The technical solution provided by this application can improve the quality inspection efficiency of the target object while saving labor costs.

Description

Quality inspection method for a target object and edge computing device
Cross-reference
This application references Chinese Patent Application No. 201910432190.7, filed on May 23, 2019 and entitled "Quality inspection method for a target object and edge computing device", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of Internet technology, and in particular to a quality inspection method for a target object and an edge computing device.
Background
With the continuous improvement of industrial manufacturing, users and manufacturers place increasingly high requirements on product quality. Besides meeting functional requirements, a product also needs good surface quality. However, surface defects are often unavoidable during manufacturing. A surface defect is a local region on the product surface where physical or chemical properties are non-uniform, such as a scratch, spot, hole, or wrinkle. Surface defects not only affect the appearance and comfort of a product but generally also impair its performance, so manufacturers need to detect surface defects in order to control product quality effectively.
At present, the traditional way to detect product surface defects is manual inspection. However, this approach suffers from a low sampling rate, limited accuracy, poor real-time performance, low efficiency, high labor intensity, and strong dependence on human experience and subjective factors. A more effective product quality inspection method is therefore urgently needed.
Summary
The purpose of this application is to provide a quality inspection method for a target object and an edge computing device, which can improve the quality inspection efficiency of the target object while saving labor costs.
To achieve the above purpose, one aspect of this application provides a quality inspection method for a target object. The method includes: an edge computing device receives a target image of the target object currently to be quality-inspected; and based on a mathematical model from an edge computing platform, the target image is recognized to determine whether the target object has a defect.
To achieve the above purpose, another aspect of this application further provides an edge computing device. The edge computing device includes: a target image receiving unit, configured to receive a target image of the target object currently to be quality-inspected; and a defect recognition unit, configured to recognize the target image based on a mathematical model from an edge computing platform, so as to determine whether the target object has a defect.
To achieve the above purpose, another aspect of this application further provides an edge computing device. The edge computing device includes a processor and a memory, the memory is used to store a computer program, and when the computer program is executed by the processor, the above quality inspection method for the target object is implemented.
As can be seen from the above, the technical solution provided by this application can use machine learning to train on a large number of samples of the target object, thereby obtaining a mathematical model capable of recognizing defects; this mathematical model can then be used to detect whether the currently produced target object has defects. Specifically, the edge computing device can store the trained mathematical model locally and use it to automatically inspect the target image of the target object currently to be quality-inspected, thereby determining whether the target object in the target image has a defect. As can be seen, after the mathematical model has been obtained through machine learning, this application can detect defects of the target object automatically, which saves a large amount of labor cost and provides high detection efficiency.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application, and a person of ordinary skill in the art can derive other drawings from these drawings without creative effort.
Fig. 1 is a schematic structural diagram of a product line system in an embodiment of this application;
Fig. 2 is a schematic diagram of a quality inspection method for a target object in an embodiment of this application;
Fig. 3 is a schematic diagram of the functional modules of an edge computing device in an embodiment of this application;
Fig. 4 is a schematic structural diagram of an edge computing device in an embodiment of this application;
Fig. 5 is a schematic structural diagram of a computer terminal in an embodiment of this application.
Detailed Description of the Embodiments
To make the objectives, technical solutions, and advantages of this application clearer, the embodiments of this application are described in further detail below with reference to the drawings.
This application provides a quality inspection method for a target object, where the target object can be any product manufactured on a product line. The method can be applied to the product line system shown in Fig. 1. In this product line system, multiple image acquisition devices with camera functions can be deployed on the production platform of each product. An image acquisition device can shoot the finished product in real time, thereby obtaining video stream data of the product. The product line system may further include an edge computing platform and multiple edge computing devices. An edge computing device may be connected to one or more image acquisition devices, so that it can acquire the video stream data captured by those image acquisition devices. On the one hand, the edge computing device can generate image samples of the product from the acquired video stream data and upload the image samples to the edge computing platform, so that the training of the mathematical model is carried out on the edge computing platform. On the other hand, the edge computing device can also perform defect detection on products based on the trained mathematical model. In practice, the computing power of an edge computing device is usually lower than that of the edge computing platform, so simple computing tasks such as defect detection and video stream preprocessing can be completed on the edge computing device, while complex computing tasks such as mathematical model training are performed on the edge computing platform.
Referring to Fig. 2, the quality inspection method for a target object provided by this application can be executed by the above edge computing device, and the method may include the following steps.
S1: the edge computing device receives a target image of the target object currently to be quality-inspected.
In this embodiment, the image acquisition device and the edge computing device can communicate in a wired or wireless manner. For example, the image acquisition device and the edge computing device can be connected through a network cable, so that the video stream data captured by the image acquisition device is transmitted over the network cable. Alternatively, the image acquisition device and the edge computing device can transmit data through wireless communication methods such as WiFi, Bluetooth, ZigBee, or mobile communication (2G/3G/4G/5G). A video stream reading tool can be installed in the edge computing device in advance, and the communication address of the image acquisition device can be configured in the video stream reading tool. In this way, through the video stream reading tool, the edge computing device can obtain, in real time, the video stream data captured by the image acquisition device for the current target object. In a specific application example, the video stream reading tool can be the ffmpeg tool, and the edge computing device and the image acquisition device can transmit the video stream data according to the Real Time Streaming Protocol (RTSP).
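The patent names ffmpeg and RTSP but includes no reference code; the following is a minimal sketch of such a video stream reader, assuming a Python environment with OpenCV (whose FFmpeg backend handles RTSP) and a hypothetical camera address.

```python
import cv2

# Hypothetical RTSP address of one image acquisition device; the real address
# would be configured in the video stream reading tool.
RTSP_URL = "rtsp://192.168.1.100:554/stream1"

def read_frames(url: str):
    """Yield decoded frames from the camera's RTSP video stream."""
    capture = cv2.VideoCapture(url)       # OpenCV delegates RTSP handling to FFmpeg
    if not capture.isOpened():
        raise RuntimeError(f"cannot open video stream: {url}")
    try:
        while True:
            ok, frame = capture.read()    # BGR image as a numpy array
            if not ok:
                break                     # stream ended or connection lost
            yield frame
    finally:
        capture.release()

if __name__ == "__main__":
    for i, frame in enumerate(read_frames(RTSP_URL)):
        print(f"frame {i}: shape {frame.shape}")
```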
In this embodiment, the video stream data may contain different types of video frames. For example, the video stream data may include I-frames (key frames), P-frames (forward-predicted coded frames), and B-frames (bidirectionally predicted, interpolated coded frames). An I-frame completely retains the content of the video picture, while a P-frame usually only retains the differences from an I-frame, and a B-frame only retains the differences from the preceding and following video frames. Therefore, an I-frame can be decoded independently, and the content of the video picture can be restored after decoding, whereas P-frames and B-frames cannot be decoded independently; they must be decoded together with an I-frame or with the preceding and following frames to restore the content of the video picture. Consequently, after the edge computing device obtains the video stream data of the target object, it can identify, from the video stream data, the video frames that can be decoded independently to restore the picture content, and such video frames can serve as valid video frames of the target object.
In one embodiment, the different types of video frames in the video stream data can be identified by a flag bit. Specifically, the header information of a video frame may contain a field characterizing the frame type; for example, this field can be the picture_coding_type field. By reading the flag bit carried in this field, the type of the current video frame can be determined. In this way, the edge computing device can traverse each video frame in the video stream data and identify the designated field in the header information of each video frame, where the designated field can be the above picture_coding_type field. If the flag bit carried in the designated field indicates that the video frame is a key frame, the video frame can be taken as a valid video frame of the target object. After decoding, these valid video frames can display the frame content completely, so product defect detection can be performed on them.
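As an illustration of selecting only independently decodable frames, the sketch below assumes the PyAV bindings to FFmpeg, which expose the key-frame flag corresponding to the I-frame type recorded in the stream; the stream URL is again a hypothetical placeholder, and this is one possible realization rather than the patent's own code.

```python
import av  # PyAV: Python bindings to FFmpeg

RTSP_URL = "rtsp://192.168.1.100:554/stream1"  # hypothetical camera address

def iter_valid_frames(url: str):
    """Yield only the independently decodable frames (I-frames / key frames)."""
    container = av.open(url)
    for frame in container.decode(video=0):
        # FFmpeg sets this flag from the frame type recorded in the stream,
        # which is what the picture_coding_type field encodes for MPEG video.
        if frame.key_frame:
            yield frame.to_ndarray(format="bgr24")

if __name__ == "__main__":
    for i, image in enumerate(iter_valid_frames(RTSP_URL)):
        print(f"valid (key) frame {i}: shape {image.shape}")
```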
In this embodiment, the amount of data in the valid video frames identified from the video stream is usually large. If the identified valid video frames were uploaded directly to the edge computing platform for mathematical model training, this would, on the one hand, consume a large amount of transmission bandwidth and, on the other hand, increase the amount of data that the training process has to handle. In view of this, in this embodiment the edge computing device can perform data filtering on the identified valid video frames. Specifically, in the images captured of the target object, the background is usually the same, so after a valid video frame is identified, the background in it can be filtered out. In practice, the background against which the target object is placed can be a solid-color background whose color differs both from the color of the target object and from the colors of defects that may appear on its surface. For example, if the target object is silver and possible surface defects are black, the background color can be green. The background color of the background where the target object is located can thus be recorded in the edge computing device in advance; after the edge computing device identifies a valid video frame, it can read the background color and filter it out of the video picture displayed by the valid video frame. In this way, the edge computing device can treat the data corresponding to the video picture with the background color removed as the feature data extracted from the valid video frame. The amount of this feature data is smaller than that of the valid video frame, which reduces the amount of data in both the transmission process and the training process.
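A minimal sketch of this kind of background removal is shown below, assuming a green backdrop and OpenCV; the HSV bounds are illustrative and would need tuning to the lighting of a real production platform.

```python
import cv2
import numpy as np

# Illustrative HSV range for a green backdrop; real thresholds depend on lighting.
GREEN_LOW = np.array([40, 60, 60])
GREEN_HIGH = np.array([85, 255, 255])

def filter_background(frame_bgr: np.ndarray) -> np.ndarray:
    """Zero out the pre-recorded background color, keeping only the product pixels."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    background_mask = cv2.inRange(hsv, GREEN_LOW, GREEN_HIGH)   # 255 where background
    foreground_mask = cv2.bitwise_not(background_mask)          # 255 where product/defects
    return cv2.bitwise_and(frame_bgr, frame_bgr, mask=foreground_mask)
```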
In another embodiment, the edge computing device can also use a target detection algorithm to recognize the target object in the video picture displayed by the valid video frame, which likewise removes the background from the valid video frame and yields the feature data of the target object. In practice, the edge computing device can choose one of many target detection algorithms to recognize the target object; these algorithms may include R-CNN (Regions with CNN features), Fast R-CNN, ION (Inside-Outside Net), HyperNet, and the like.
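The patent only names detector families, so the following sketch is one possible realization rather than the patent's own method: a pretrained torchvision Faster R-CNN crops the product region out of a valid frame. The product class index is a hypothetical placeholder, since a real deployment would fine-tune the detector on its own products.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# A generic pretrained detector stands in for the product-specific model a real
# line would fine-tune; PRODUCT_LABEL is a hypothetical class index.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()
PRODUCT_LABEL = 1

def crop_target(frame_bgr):
    """Return the highest-scoring detected product region as feature data, or None."""
    image = to_tensor(frame_bgr[:, :, ::-1].copy())   # BGR -> RGB tensor in [0, 1]
    with torch.no_grad():
        detections = model([image])[0]                # dict with boxes, labels, scores
    for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
        if int(label) == PRODUCT_LABEL and float(score) > 0.5:
            x1, y1, x2, y2 = (int(v) for v in box)
            return image[:, y1:y2, x1:x2]             # cropped product region
    return None
```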
Of course, in practice the edge computing device can also use other image compression algorithms to compress the data of the valid video frames and upload the compressed valid video frames to the edge computing platform. This application does not limit the algorithm used to reduce the data amount of the valid video frames; any algorithm that can reduce the data amount falls within the protection scope of this application.
In this embodiment, after the feature data of the target object has been extracted, the edge computing device can upload the feature data to the edge computing platform, so that the mathematical model of the target object is trained on the edge computing platform. Specifically, the feature data uploaded by the edge computing device may contain a wide variety of defects. During the training stage of the mathematical model, quality inspectors can manually annotate each piece of feature data on the edge computing platform, marking the defect type and defect location it contains. The manual annotation results of the feature data can thus be received by the edge computing platform. According to the defect types characterized by the manual annotation results, the edge computing platform can classify the feature data to form a training sample set. In this training sample set, there can be a large number of feature data samples for each defect type. The training sample set can then be fed into a preset mathematical model in order to train it.
In practice, the mathematical model can be an existing classifier. For example, it can be a K-nearest-neighbor classifier, a naive Bayes classifier, a support vector machine classifier, a decision tree classifier, a convolutional neural network classifier, and so on. During training, training samples can be input into the mathematical model, and the output of the mathematical model can be a probability vector. For example, if the training samples contain 13 different defect types in total, the output of the mathematical model can be a probability vector containing 13 probability values, each of which represents the probability of belonging to one defect type. When judging whether the output of the mathematical model is correct, the defect type corresponding to the largest probability value can be taken as the defect type predicted from the input training sample. Early in training, the predicted defect type may not match the defect type actually represented by the input training sample; in that case, the mathematical model needs to be corrected repeatedly until the defect recognition result obtained by the trained mathematical model for the input feature data is consistent with the manual annotation result of that feature data. In this way, through continuous training on a large number of training samples, the parameters of the mathematical model can be corrected repeatedly, so that the mathematical model can correctly predict the defect types contained in the training samples.
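A minimal training sketch along these lines, assuming a small convolutional classifier in PyTorch; the 13 defect classes and the `labeled_dataset` of annotated feature data are illustrative assumptions, not code from the patent.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

NUM_DEFECT_TYPES = 13  # illustrative: one class per annotated defect type

# A small CNN classifier standing in for the "preset mathematical model".
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, NUM_DEFECT_TYPES),
)

def train(labeled_dataset, epochs: int = 10):
    """labeled_dataset yields (feature_image, defect_type_index) pairs
    built from the manually annotated feature data."""
    loader = DataLoader(labeled_dataset, batch_size=32, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()          # compares predictions with manual labels
    for _ in range(epochs):
        for images, labels in loader:
            logits = model(images)           # softmax over logits gives the probability vector
            loss = loss_fn(logits, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()                 # repeatedly corrects the model parameters
    return model
```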
S5: recognize the target image based on the mathematical model from the edge computing platform, so as to determine whether the target object has a defect.
In this embodiment, after the edge computing platform finishes training the mathematical model, it can feed the mathematical model back to the edge computing device. The edge computing device can store the mathematical model locally, so that after obtaining the target image currently to be quality-inspected, it can input the target image into the locally stored mathematical model and determine, from the output of the mathematical model, whether the target object has a defect in the target image.
In practice, the mathematical model may include a model algorithm, model metadata, and a model weight parameter. The model algorithm matches the type of the mathematical model; for example, for a convolutional neural network classifier the adopted algorithm is a convolutional neural network algorithm. The model metadata can index the training sample set used during the training of the mathematical model; in practice, the training sample set may include a data set of feature data and defect data sets for the various defects. The model weight parameter can serve as the threshold for judging defects. Specifically, when determining whether the target object has a defect in the target image, any target probability value in the probability vector can be compared with the model weight parameter; if the target probability value is greater than or equal to the model weight parameter, the defect type corresponding to that target probability value can be taken as a defect of the target object in the target image. For example, for an input target image the mathematical model outputs a probability vector containing 13 probability values. The edge computing device can then compare these 13 probability values with the model weight parameter; if among the 13 values only the probability value representing a black line is greater than the model weight parameter, this indicates that, according to the detection by the mathematical model, the defect type of the target object in the target image is a black line. Furthermore, if every probability value in the probability vector is smaller than the model weight parameter, there is no defect in the target image. With this detection approach, if several probability values in the probability vector are simultaneously greater than or equal to the model weight parameter, the target object has two or more kinds of defects in the target image.
In another embodiment, the edge computing device may only need to determine whether the target object has a defect in the target image, without caring how many kinds of defects exist. In this case, the edge computing device can determine the maximum probability value among the multiple probability values contained in the probability vector and compare the maximum probability value with the model weight parameter. If the maximum probability value is greater than or equal to the model weight parameter, the defect type corresponding to the maximum probability value can be taken as the defect of the target object in the target image; if the maximum probability value is smaller than the model weight parameter, there is no defect in the target image.
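Both decision rules reduce to comparing the softmax probability vector against the model weight parameter; the sketch below illustrates them, assuming the PyTorch classifier sketched above and an illustrative threshold value.

```python
import torch
import torch.nn.functional as F

DEFECT_TYPES = [f"defect_type_{i}" for i in range(13)]  # placeholder names, one per class
MODEL_WEIGHT_PARAMETER = 0.8                            # illustrative threshold

def inspect(model, target_image: torch.Tensor):
    """Apply both decision rules to one target image."""
    with torch.no_grad():
        probabilities = F.softmax(model(target_image.unsqueeze(0)), dim=1)[0]

    # Rule 1: every probability >= threshold counts as a detected defect type,
    # so two or more defects can be reported at once.
    detected = [DEFECT_TYPES[i] for i, p in enumerate(probabilities)
                if float(p) >= MODEL_WEIGHT_PARAMETER]

    # Rule 2: only compare the maximum probability to decide defective vs. not.
    max_prob, max_index = torch.max(probabilities, dim=0)
    has_defect = float(max_prob) >= MODEL_WEIGHT_PARAMETER
    worst = DEFECT_TYPES[int(max_index)] if has_defect else None
    return detected, has_defect, worst
```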
In this embodiment, when the edge computing device determines that the target object has a defect in the target image, this indicates that the production line of the target object may have a process failure. At this point, the edge computing device can send a stop-production instruction to the industrial control equipment in the production workshop through the OT (Operation Technology) layer, so that the industrial control equipment suspends the production line of the target object. At the same time, the edge computing device can also issue alarm prompt information and mark the location of the defect in the target image, to remind a quality inspector to re-inspect the defect in the target image. In practice, the alarm prompt information can be sound-and-light information, or text or graphic information displayed on an electronic signboard. After observing the defect location marked in the target image, the quality inspector can overhaul the production line of the target object if the defect is confirmed. If the quality inspector finds that there is actually no defect in the target image, or that the recognized defect type is wrong, a false-alarm confirmation instruction can be fed back to the edge computing device, indicating that the recognition result of the edge computing device for the target image is wrong. In that case, the mathematical model in the edge computing device may have a recognition error, and the edge computing device can upload the target image to the edge computing platform. The edge computing platform can then generate a manual annotation result for the target image in the manner described above and use it to correct the trained mathematical model, so that the corrected mathematical model can correctly judge, from the target image, whether the target object has a defect. Finally, the edge computing platform can send the corrected mathematical model to the edge computing device. The edge computing device can receive the corrected mathematical model fed back by the edge computing platform, replace the locally stored mathematical model with it, and use the corrected mathematical model for subsequent defect recognition.
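The patent leaves the OT-layer protocol unspecified; purely as an illustration, the sketch below assumes the industrial control equipment is reachable through an MQTT broker (via the paho-mqtt package), and the broker address and topic names are hypothetical placeholders.

```python
import json
import paho.mqtt.publish as publish

BROKER = "192.168.1.10"                 # hypothetical address of the OT-layer broker
STOP_TOPIC = "workshop/line-3/stop"     # hypothetical topics used by the control equipment
ALARM_TOPIC = "workshop/line-3/alarm"

def report_defect(defect_type: str, box: tuple):
    """Send a stop-production instruction and an alarm carrying the marked defect location."""
    publish.single(STOP_TOPIC, payload="stop_production", hostname=BROKER)
    publish.single(
        ALARM_TOPIC,
        payload=json.dumps({"defect_type": defect_type, "location": box}),
        hostname=BROKER,
    )
```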
In this embodiment, different mathematical models can be trained for different target objects, and these different mathematical models can also be stored locally on the edge computing device. When the edge computing device obtains the target image of a target object, it can first identify the type of the target object and select the mathematical model that matches that type to inspect the target image.
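Keeping one trained model per product type locally and dispatching on the recognized type could look like the following sketch, where the model file paths and the `identify_product_type` helper are hypothetical placeholders.

```python
import torch

# Hypothetical mapping from product type to its locally stored, trained model file.
MODEL_FILES = {
    "bottle": "/opt/edge/models/bottle.pt",
    "panel": "/opt/edge/models/panel.pt",
}
_loaded = {}

def model_for(product_type: str):
    """Lazily load and cache the mathematical model matching the product type."""
    if product_type not in _loaded:
        # Assumes each file holds a full model object saved with torch.save(model, path).
        _loaded[product_type] = torch.load(MODEL_FILES[product_type], weights_only=False)
        _loaded[product_type].eval()
    return _loaded[product_type]

def inspect_target(target_image, identify_product_type):
    """identify_product_type is a hypothetical classifier returning the product type."""
    product_type = identify_product_type(target_image)
    model = model_for(product_type)
    with torch.no_grad():
        return model(target_image.unsqueeze(0))
```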
This application also provides an edge computing device. Referring to Fig. 3, the edge computing device includes:
a target image receiving unit, configured to receive a target image of the target object currently to be quality-inspected;
a defect recognition unit, configured to recognize the target image based on a mathematical model from an edge computing platform, so as to determine whether the target object has a defect.
In one embodiment, the edge computing device further includes: a feature data extraction unit, configured to extract feature data of the target object from the video stream data of the target object and upload the feature data to the edge computing platform, where the edge computing platform trains the mathematical model of the target object according to the received feature data.
In one embodiment, the feature data extraction unit includes: a valid video frame identification module, configured to identify valid video frames of the target object from the video stream data and extract the feature data of the target object from the valid video frames.
In one embodiment, the valid video frame identification module includes: a designated field identification module, configured to traverse each video frame in the video stream data and identify a designated field in the header information of each video frame; and a key frame identification module, configured to take a video frame as a valid video frame of the target object if the flag bit carried in the designated field indicates that the video frame is a key frame.
In one embodiment, the valid video frame identification module includes: a background filtering module, configured to read the background color of the background where the target object is located and filter the background color out of the video picture displayed by the valid video frame; and a feature data determination module, configured to take the data corresponding to the video picture with the background color removed as the feature data of the target object extracted from the valid video frame.
In one embodiment, the edge computing platform includes: an annotation result receiving unit, configured to receive manual annotation results of the feature data, where the manual annotation results at least characterize the defect types existing in the feature data; and a training unit, configured to classify the feature data according to defect type to form a training sample set and train a preset mathematical model with the training sample set, so that the defect recognition result obtained by the trained mathematical model for input feature data is consistent with the manual annotation result of that input feature data.
In one embodiment, the trained mathematical model at least includes a model weight parameter, the model weight parameter serves as the threshold for judging defects, and the output of the mathematical model is a probability vector containing multiple probability values, where different probability values correspond to different defect types.
In one embodiment, the quality inspection unit includes: a traversal comparison module, configured to compare any target probability value in the probability vector with the model weight parameter and, if the target probability value is greater than or equal to the model weight parameter, take the defect type corresponding to the target probability value as a defect of the target object in the target image.
In one embodiment, the quality inspection unit includes: a maximum probability value comparison module, configured to determine the maximum probability value among the multiple probability values contained in the probability vector, compare the maximum probability value with the model weight parameter, and, if the maximum probability value is greater than or equal to the model weight parameter, take the defect type corresponding to the maximum probability value as the defect of the target object in the target image.
In one embodiment, the edge computing device further includes: a shutdown instruction sending unit, configured to send a stop-production instruction to the industrial control equipment when it is determined that the target object has a defect in the target image, so that the industrial control equipment suspends the production line of the target object; and an alarm prompt unit, configured to issue alarm prompt information and mark the location of the defect in the target image.
In one embodiment, the edge computing device further includes: a target image uploading unit, configured to upload the target image to the edge computing platform if a false-alarm confirmation instruction for the alarm prompt information is received, so that the edge computing platform corrects the mathematical model according to the target image; and a model replacement unit, configured to receive the corrected mathematical model fed back by the edge computing platform and replace the locally stored mathematical model with the corrected mathematical model.
Referring to Fig. 4, this application also provides an edge computing device. The edge computing device includes a processor and a memory, the memory is used to store a computer program, and when the computer program is executed by the processor, the above quality inspection method for the target object is implemented.
Referring to Fig. 5, in this application the technical solutions of the above embodiments can all be applied to the computer terminal 10 shown in Fig. 5. The computer terminal 10 may include one or more processors 102 (only one is shown in the figure; the processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory 104 for storing data, and a transmission module 106 for communication functions. A person of ordinary skill in the art can understand that the structure shown in Fig. 5 is only illustrative and does not limit the structure of the above electronic device. For example, the computer terminal 10 may also include more or fewer components than shown in Fig. 5, or have a configuration different from that shown in Fig. 5.
The memory 104 can be used to store software programs and modules of application software; the processor 102 executes various functional applications and data processing by running the software programs and modules stored in the memory 104. The memory 104 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located relative to the processor 102, and such remote memory may be connected to the computer terminal 10 via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or send data via a network. Specific examples of the above network may include a wireless network provided by the communication provider of the computer terminal 10. In one example, the transmission device 106 includes a network interface controller (NIC), which can be connected to other network devices through a base station so as to communicate with the Internet. In one example, the transmission device 106 can be a radio frequency (RF) module, which is used to communicate with the Internet wirelessly.
As can be seen from the above, the technical solution provided by this application can use machine learning to train on a large number of samples of the target object, thereby obtaining a mathematical model capable of recognizing defects, which can then be used to detect whether the currently produced target object has defects. Specifically, the edge computing device can obtain, in real time, the video stream data captured of the target object on the product line, and the video stream data may include the data of individual video frames. In practice, not every video frame can display the image of the target object on its own. In view of this, after obtaining the video frame data of the target object, the edge computing device can identify the valid video frames that can display the image of the target object independently, extract the feature data of the target object from the identified valid video frames, and then upload the feature data to the edge computing platform. After receiving a large amount of feature data of the target object, the edge computing platform can use this feature data to train the mathematical model of the target object through machine learning. After the training is completed, the edge computing platform can send the trained mathematical model to the edge computing device. In this way, the edge computing device can store the trained mathematical model locally and use it to automatically inspect the target image of the target object currently to be quality-inspected, thereby determining whether the target object in the target image has a defect. As can be seen, after the mathematical model has been obtained through machine learning, this application can detect defects of the target object automatically, which saves a large amount of labor cost and provides high detection efficiency.
The embodiments in this specification are described in a progressive manner; for identical or similar parts between the embodiments, reference can be made to each other, and each embodiment focuses on its differences from the other embodiments. In particular, the embodiments of the edge computing device can be explained with reference to the description of the foregoing method embodiments.
From the description of the above embodiments, a person skilled in the art can clearly understand that the embodiments can be implemented by software plus a necessary general hardware platform, or of course by hardware. Based on this understanding, the above technical solutions, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product can be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk, or an optical disc, and includes a number of instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments or in parts of the embodiments.
The above are only preferred embodiments of this application and are not intended to limit this application. Any modification, equivalent replacement, or improvement made within the spirit and principles of this application shall be included within the protection scope of this application.

Claims (19)

  1. A quality inspection method for a target object, the method comprising:
    receiving, by an edge computing device, a target image of the target object currently to be quality-inspected;
    recognizing the target image based on a mathematical model from an edge computing platform, so as to determine whether the target object has a defect.
  2. The method according to claim 1, wherein the mathematical model is obtained by training on video stream data of the target object, and the training comprises the following step:
    extracting feature data of the target object from the video stream data of the target object, and uploading the feature data to the edge computing platform, wherein the edge computing platform trains the mathematical model of the target object according to the received feature data.
  3. The method according to claim 2, wherein extracting the feature data of the target object from the video stream data of the target object comprises:
    identifying valid video frames of the target object from the video stream data, and extracting the feature data of the target object from the valid video frames.
  4. The method according to claim 3, wherein identifying the valid video frames of the target object from the video stream data comprises:
    traversing each video frame in the video stream data and identifying a designated field in the header information of each video frame; and, if a flag bit carried in the designated field indicates that the video frame is a key frame, taking the video frame as a valid video frame of the target object.
  5. The method according to claim 3, wherein extracting the feature data of the target object from the valid video frames comprises:
    reading a background color of a background where the target object is located, and filtering the background color out of a video picture displayed by the valid video frame;
    taking data corresponding to the video picture with the background color filtered out as the feature data of the target object extracted from the valid video frame.
  6. The method according to claim 2, wherein the mathematical model is obtained by the edge computing platform through training in the following manner:
    the edge computing platform receives a manual annotation result of the feature data, the manual annotation result being used to at least characterize a defect type existing in the feature data;
    the edge computing platform classifies the feature data according to defect type to form a training sample set, and trains a preset mathematical model with the training sample set, so that a defect recognition result obtained by the trained mathematical model for input feature data is consistent with the manual annotation result of the input feature data.
  7. The method according to claim 1 or 6, wherein the trained mathematical model at least comprises a model weight parameter, the model weight parameter serves as a threshold for judging defects, and an output result of the mathematical model is a probability vector containing multiple probability values, wherein different probability values correspond to different defect types.
  8. The method according to claim 7, wherein determining whether the target object has a defect in the target image comprises:
    comparing any target probability value in the probability vector with the model weight parameter, and, if the target probability value is greater than or equal to the model weight parameter, taking a defect type corresponding to the target probability value as a defect of the target object in the target image.
  9. The method according to claim 7, wherein determining whether the target object has a defect in the target image comprises:
    determining a maximum probability value among the multiple probability values contained in the probability vector, comparing the maximum probability value with the model weight parameter, and, if the maximum probability value is greater than or equal to the model weight parameter, taking a defect type corresponding to the maximum probability value as the defect of the target object in the target image.
  10. The method according to claim 1, wherein the method further comprises:
    when it is determined that the target object has a defect in the target image, sending a stop-production instruction to industrial control equipment, so that the industrial control equipment suspends a production line of the target object;
    issuing alarm prompt information, and marking, in the target image, a location of the defect.
  11. The method according to claim 10, wherein the method further comprises:
    if a false-alarm confirmation instruction for the alarm prompt information is received, uploading the target image to the edge computing platform, so that the edge computing platform corrects the mathematical model according to the target image;
    receiving the corrected mathematical model fed back by the edge computing platform, and replacing the locally stored mathematical model with the corrected mathematical model.
  12. An edge computing device, comprising:
    a target image receiving unit, configured to receive a target image of a target object currently to be quality-inspected;
    a defect recognition unit, configured to recognize the target image based on a mathematical model from an edge computing platform, so as to determine whether the target object has a defect.
  13. The edge computing device according to claim 12, wherein the edge computing device further comprises:
    a feature data extraction unit, configured to extract feature data of the target object from video stream data of the target object and upload the feature data to the edge computing platform, wherein the edge computing platform trains the mathematical model of the target object according to the received feature data.
  14. The edge computing device according to claim 13, wherein the feature data extraction unit comprises:
    a valid video frame identification module, configured to identify valid video frames of the target object from the video stream data and extract the feature data of the target object from the valid video frames.
  15. The edge computing device according to claim 14, wherein the valid video frame identification module comprises:
    a designated field identification module, configured to traverse each video frame in the video stream data and identify a designated field in the header information of each video frame;
    a key frame identification module, configured to take a video frame as a valid video frame of the target object if a flag bit carried in the designated field indicates that the video frame is a key frame.
  16. The edge computing device according to claim 14, wherein the valid video frame identification module comprises:
    a background filtering module, configured to read a background color of a background where the target object is located and filter the background color out of a video picture displayed by the valid video frame;
    a feature data determination module, configured to take data corresponding to the video picture with the background color filtered out as the feature data of the target object extracted from the valid video frame.
  17. The edge computing device according to claim 12, wherein the edge computing device further comprises:
    a shutdown instruction sending unit, configured to send a stop-production instruction to industrial control equipment when it is determined that the target object has a defect in the target image, so that the industrial control equipment suspends a production line of the target object;
    an alarm prompt unit, configured to issue alarm prompt information and mark, in the target image, a location of the defect.
  18. The edge computing device according to claim 17, wherein the edge computing device further comprises:
    a target image uploading unit, configured to upload the target image to the edge computing platform if a false-alarm confirmation instruction for the alarm prompt information is received, so that the edge computing platform corrects the mathematical model according to the target image;
    a model replacement unit, configured to receive the corrected mathematical model fed back by the edge computing platform and replace the locally stored mathematical model with the corrected mathematical model.
  19. An edge computing device, comprising a processor and a memory, wherein the memory is used to store a computer program, and when the computer program is executed by the processor, the method according to any one of claims 1 to 11 is implemented.
PCT/CN2019/096174 2019-05-23 2019-07-16 Quality inspection method for a target object and edge computing device WO2020232816A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP19920647.5A EP3770851A4 (en) 2019-05-23 2019-07-16 QUALITY INSPECTION PROCESS FOR TARGET OBJECT, AND PERIPHERAL COMPUTER DEVICE
US17/061,475 US20210034044A1 (en) 2019-05-23 2020-10-01 Method for quality controling of target object and edge computing device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910432190.7 2019-05-23
CN201910432190.7A CN110298819A (zh) 2019-05-23 2019-05-23 Quality inspection method for a target object and edge computing device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/061,475 Continuation US20210034044A1 (en) 2019-05-23 2020-10-01 Method for quality controling of target object and edge computing device

Publications (1)

Publication Number Publication Date
WO2020232816A1 true WO2020232816A1 (zh) 2020-11-26

Family

ID=68027120

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/096174 WO2020232816A1 (zh) 2019-05-23 2019-07-16 一种目标对象的质检方法及边缘计算设备

Country Status (4)

Country Link
US (1) US20210034044A1 (zh)
EP (1) EP3770851A4 (zh)
CN (1) CN110298819A (zh)
WO (1) WO2020232816A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112184701A (zh) * 2020-10-22 2021-01-05 中国联合网络通信集团有限公司 检测结果的确定方法、装置及系统
CN113759854A (zh) * 2021-09-18 2021-12-07 深圳市裕展精密科技有限公司 基于边缘计算的智能工厂管控系统及方法
CN114627360A (zh) * 2020-12-14 2022-06-14 国电南瑞科技股份有限公司 基于级联检测模型的变电站设备缺陷识别方法

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111090243B (zh) * 2018-10-23 2023-04-14 宁波方太厨具有限公司 一种实现厨房电器智能互联的方法
CN110866901A (zh) * 2019-11-05 2020-03-06 中国科学院计算机网络信息中心 一种基于边缘计算技术的质检方法及系统
CN111462167A (zh) * 2020-04-21 2020-07-28 济南浪潮高新科技投资发展有限公司 一种结合边缘计算与深度学习的智能终端视频分析算法
CN113804244B (zh) * 2020-06-17 2024-06-25 富联精密电子(天津)有限公司 缺陷分析方法及装置、电子装置及计算机可读存储介质
CN113850285A (zh) * 2021-07-30 2021-12-28 安徽继远软件有限公司 基于边缘计算的输电线路缺陷识别方法及系统
WO2023097639A1 (zh) * 2021-12-03 2023-06-08 宁德时代新能源科技股份有限公司 用于图像分割的数据标注方法和系统以及图像分割装置
CN115909177B (zh) * 2023-02-22 2023-08-22 江苏甬金金属科技有限公司 一种传送轧带的表面智能监测方法及系统
CN117703325B (zh) * 2024-02-06 2024-05-07 西安思坦仪器股份有限公司 油田波码分注注水地面控制系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105844238A (zh) * 2016-03-23 2016-08-10 乐视云计算有限公司 视频鉴别方法及系统
US20170076168A1 (en) * 2015-09-11 2017-03-16 Intel Corporation Technologies for object recognition for internet-of-things edge devices
CN108830837A (zh) * 2018-05-25 2018-11-16 北京百度网讯科技有限公司 一种用于检测钢包溶蚀缺陷的方法和装置
CN108961239A (zh) * 2018-07-02 2018-12-07 北京百度网讯科技有限公司 连铸坯质量检测方法、装置、电子设备及存储介质
CN109084955A (zh) * 2018-07-02 2018-12-25 北京百度网讯科技有限公司 显示屏质量检测方法、装置、电子设备及存储介质

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9721334B2 (en) * 2015-12-03 2017-08-01 International Business Machines Corporation Work-piece defect inspection via optical images and CT images
US20180374022A1 (en) * 2017-06-26 2018-12-27 Midea Group Co., Ltd. Methods and systems for improved quality inspection
US10593033B2 (en) * 2017-06-27 2020-03-17 Nec Corporation Reconstructor and contrastor for medical anomaly detection
CN107292885A (zh) * 2017-08-08 2017-10-24 广东工业大学 一种基于自动编码器的产品缺陷分类识别方法及装置
CN109064454A (zh) * 2018-07-12 2018-12-21 上海蝶鱼智能科技有限公司 产品缺陷检测方法及系统
CN109242825A (zh) * 2018-07-26 2019-01-18 北京首钢自动化信息技术有限公司 一种基于深度学习技术的钢铁表面缺陷识别方法和装置
CN109461149A (zh) * 2018-10-31 2019-03-12 泰州市创新电子有限公司 喷漆表面缺陷的智能检测系统及方法
CN109671058B (zh) * 2018-12-05 2021-04-20 武汉精立电子技术有限公司 一种大分辨率图像的缺陷检测方法及系统
CN109711474B (zh) * 2018-12-24 2023-01-17 中山大学 一种基于深度学习的铝材表面缺陷检测算法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170076168A1 (en) * 2015-09-11 2017-03-16 Intel Corporation Technologies for object recognition for internet-of-things edge devices
CN105844238A (zh) * 2016-03-23 2016-08-10 乐视云计算有限公司 视频鉴别方法及系统
CN108830837A (zh) * 2018-05-25 2018-11-16 北京百度网讯科技有限公司 一种用于检测钢包溶蚀缺陷的方法和装置
CN108961239A (zh) * 2018-07-02 2018-12-07 北京百度网讯科技有限公司 连铸坯质量检测方法、装置、电子设备及存储介质
CN109084955A (zh) * 2018-07-02 2018-12-25 北京百度网讯科技有限公司 显示屏质量检测方法、装置、电子设备及存储介质

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112184701A (zh) * 2020-10-22 2021-01-05 中国联合网络通信集团有限公司 检测结果的确定方法、装置及系统
CN114627360A (zh) * 2020-12-14 2022-06-14 国电南瑞科技股份有限公司 基于级联检测模型的变电站设备缺陷识别方法
CN113759854A (zh) * 2021-09-18 2021-12-07 深圳市裕展精密科技有限公司 基于边缘计算的智能工厂管控系统及方法
CN113759854B (zh) * 2021-09-18 2023-09-05 富联裕展科技(深圳)有限公司 基于边缘计算的智能工厂管控系统及方法

Also Published As

Publication number Publication date
CN110298819A (zh) 2019-10-01
EP3770851A1 (en) 2021-01-27
US20210034044A1 (en) 2021-02-04
EP3770851A4 (en) 2021-07-21

Similar Documents

Publication Publication Date Title
WO2020232816A1 (zh) 一种目标对象的质检方法及边缘计算设备
CN106534967B (zh) 视频剪辑方法及装置
US11836967B2 (en) Method and device for small sample defect classification and computing equipment
CN113239930B (zh) 一种玻璃纸缺陷识别方法、系统、装置及存储介质
CN112115927B (zh) 一种基于深度学习的机房设备智能识别方法及系统
CN111310826B (zh) 样本集的标注异常检测方法、装置及电子设备
CN112560816A (zh) 一种基于YOLOv4的设备指示灯识别方法及系统
CN114898466A (zh) 一种面向智慧工厂的视频动作识别方法及系统
CN113792578A (zh) 用于变电站异常的检测方法、设备及系统
CN111681215A (zh) 卷积神经网络模型训练方法、加工件缺陷检测方法及装置
CN116797977A (zh) 巡检机器人动态目标识别与测温方法、装置和存储介质
CN111398292A (zh) 一种基于gabor滤波和CNN的布匹瑕疵检测方法、系统及设备
CN115223043A (zh) 一种草莓缺陷检测方法、装置、计算机设备及存储介质
CN110618129A (zh) 一种电网线夹自动检测与缺陷识别方法及装置
TWI747686B (zh) 缺陷檢測方法及檢測裝置
CN113052234A (zh) 一种基于图像特征和深度学习技术的玉石分类方法
CN112800909A (zh) 一种自学习型的烟丝杂物视像检测方法
WO2023280117A1 (zh) 指示信号识别方法、设备以及计算机存储介质
CN116452505A (zh) 基于改进YOLOv5的连铸坯内部缺陷检测与评级方法
CN112714284A (zh) 一种电力设备检测方法、装置及移动终端
CN115661527A (zh) 基于人工智能的图像分类模型训练方法、分类方法及装置
CN115147386A (zh) U型管的缺陷检测方法、装置及电子设备
CN113469994A (zh) 受电弓检测方法、装置、电子设备和存储介质
US11893791B2 (en) Pre-processing image frames based on camera statistics
CN112560776A (zh) 一种基于图像识别的智能风机定检方法及系统

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2019920647

Country of ref document: EP

Effective date: 20201001

NENP Non-entry into the national phase

Ref country code: DE