CN115937575A - Equipment accessory detection method, device, system, equipment and storage medium - Google Patents

Equipment accessory detection method, device, system, equipment and storage medium

Info

Publication number: CN115937575A
Application number: CN202211392968.4A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: accessory, image, target, target accessory, detection
Legal status: Pending
Inventors: 王晓虎, 邓方进, 黄泊源, 仁义
Applicant / Assignee: Zhejiang Geely Holding Group Co Ltd; Guangyu Mingdao Digital Technology Co Ltd


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Image Analysis (AREA)

Abstract

The application provides an equipment accessory detection method, apparatus, system, device, and storage medium. The accessory detection method is applied to an edge computing node on which an accessory identification network is preloaded. The method comprises: acquiring a target accessory image of the device under test; identifying the target accessory image by using the accessory identification network to obtain a predicted classification result of the target accessory; and comparing the predicted classification result with reference type information of the target accessory corresponding to the device under test to obtain a detection result of the target accessory. This technical scheme improves accessory detection efficiency and reduces the false detection rate and the missed detection rate.

Description

Equipment accessory detection method, device, system, equipment and storage medium
Technical Field
The present application relates to the field of computer vision technologies, and in particular, to a method, an apparatus, a system, a device, and a storage medium for detecting an equipment accessory.
Background
For products or equipment assembled with various accessories, the accessories need to be inspected after the assembly process is finished to reduce quality defects such as incorrectly installed or missing accessories, thereby improving product quality.
Traditional equipment accessory detection is generally performed by manual inspection, which is inefficient and prone to missed detections and false detections.
Disclosure of Invention
In view of the above, in order to solve the above technical problems, the present application provides a device accessory detection method, apparatus, system, device and storage medium.
Specifically, the method is realized through the following technical scheme:
according to a first aspect of an embodiment of the present application, a device accessory detection method is provided, where the method is applied to an edge computing node, and the edge computing node loads an accessory identification network in advance; the method comprises the following steps:
acquiring a target accessory image of the tested device;
identifying the target accessory image by using the accessory identification network to obtain a prediction classification result of the target accessory;
and comparing the prediction classification result with reference type information of the target accessory corresponding to the tested equipment to obtain a detection result of the target accessory.
Optionally, the acquiring the target accessory image of the device under test includes:
acquiring an image of the device under test;
detecting the image by using a target detection network to obtain an image area containing the target accessory;
and obtaining a target accessory image based on the image area.
Optionally, the device under test comprises a vehicle; the acquiring the image of the device under test comprises:
images of a plurality of portions of the vehicle are acquired using a plurality of cameras disposed around the vehicle.
Optionally, the recognizing, by the accessory recognition network, the target accessory image to obtain a prediction classification result of the target accessory includes:
obtaining probability values of the target accessory belonging to multiple preset classifications;
determining the maximum value of the probability values of the multiple preset classifications, comparing the maximum value with a first classification threshold value corresponding to the target accessory, and determining the type corresponding to the maximum value as a predicted classification result of the target accessory under the condition that the maximum value is greater than or equal to the first classification threshold value corresponding to the target accessory.
Optionally, the method further comprises:
aiming at the accessory identification network, acquiring the recall ratio and precision ratio corresponding to the target accessory by setting different classification threshold values;
and determining the first classification threshold according to a precision ratio-recall ratio (P-R) curve corresponding to the recall ratio and the precision ratio.
Optionally, the method further comprises:
under the condition that a detection result of a target accessory of the tested equipment is obtained, uploading the target accessory image and a detection result corresponding to the target accessory image to a cloud end so that the cloud end can optimize the accessory identification network;
updating the accessory identification network with the optimized accessory identification network.
Optionally, the method further comprises:
aiming at the optimized accessory identification network, acquiring the recall ratio and the precision ratio corresponding to the target accessory by setting different classification threshold values through a cloud end;
and determining a second classification threshold according to the P-R curve corresponding to the recall ratio and the precision ratio, and updating the first classification threshold into the second classification threshold.
Optionally, the method further comprises:
and sending prompt information in response to the predicted classification result being inconsistent with the reference type information of the target accessory corresponding to the device under test.
According to a second aspect of the embodiments of the present application, there is provided an apparatus for detecting an equipment accessory, where the apparatus is applied to an edge computing node, and the edge computing node loads an accessory identification network in advance; the device comprises:
the accessory image acquisition module is used for acquiring a target accessory image of the equipment to be tested;
the classification prediction module is used for recognizing the target accessory image by using the accessory recognition network to obtain a prediction classification result of the target accessory;
and the information comparison module is used for comparing the prediction classification result with the reference type information of the target accessory corresponding to the tested equipment to obtain the detection result of the target accessory.
Optionally, the accessory image acquisition module is specifically configured to:
acquiring an image of the device under test;
detecting the image by using a target detection network to obtain an image area containing the target accessory;
and obtaining a target accessory image based on the image area.
Optionally, the device under test comprises a vehicle; the acquiring the image of the device under test comprises:
images of a plurality of portions of the vehicle are acquired using a plurality of cameras disposed around the vehicle.
Optionally, the classification prediction module is specifically configured to:
obtaining probability values of the target accessory belonging to various preset classifications;
determining the maximum value in the probability values of the multiple preset classifications, comparing the maximum value with a first classification threshold corresponding to the target accessory, and determining the type corresponding to the maximum value as a predicted classification result of the target accessory when the maximum value is greater than or equal to the first classification threshold corresponding to the target accessory.
Optionally, the classification prediction module further comprises:
the first rate value acquisition module is used for acquiring the recall ratio and the precision ratio corresponding to the target accessory by setting different classification threshold values aiming at the accessory identification network;
and the threshold value determining module is used for determining the first classification threshold according to a precision ratio-recall ratio (P-R) curve corresponding to the recall ratio and the precision ratio.
Optionally, the apparatus further comprises:
the information transmission module is used for uploading the target accessory image and the detection result corresponding to the target accessory image to a cloud under the condition that the detection result of the target accessory of the tested device is obtained, so that the cloud optimizes the accessory identification network;
and the identification network updating module is used for updating the accessory identification network by using the optimized accessory identification network.
Optionally, the identifying network update module further includes:
the second rate value acquisition module is used for acquiring the recall ratio and the precision ratio corresponding to the target accessory by setting different classification threshold values through a cloud end aiming at the optimized accessory identification network;
and the threshold updating module is used for determining a second classification threshold according to the P-R curve corresponding to the recall ratio and the precision ratio and updating the first classification threshold into the second classification threshold.
Optionally, the apparatus further comprises:
and the result response module is used for sending prompt information in response to the predicted classification result being inconsistent with the reference type information of the target accessory corresponding to the device under test.
According to a third aspect of the embodiments of the present application, there is provided an equipment accessory detection system, including an image acquisition device and an edge computing node; the edge computing node pre-loads an accessory identification network;
the image acquisition equipment is used for acquiring images of a plurality of components of the equipment to be tested;
the edge computing node is used for obtaining a target accessory image of the tested equipment, and recognizing the target accessory image by using an accessory recognition network to obtain a detection result of the target accessory.
According to a fourth aspect of embodiments of the present application, there is provided an electronic device, including: a memory and a processor; the memory is used for storing a computer program; and the processor is used for executing the above equipment accessory detection method by calling the computer program.
According to a fifth aspect of embodiments of the present application, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above-described device accessory detection method.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
in the technical scheme provided by the application, the edge computing node loads the accessory identification network in advance, the obtained target accessory images are classified by the accessory identification network in the edge computing node, and the classification result of the accessory identification network is compared with the accessory reference type information to perform accessory detection, so that the accessory detection efficiency is improved, and the false detection rate and the missed detection rate are reduced.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application. Moreover, not all of the above-described effects need to be achieved by any of the embodiments in this application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
FIG. 1 is a schematic flow chart diagram illustrating a method for detecting an equipment accessory according to an exemplary embodiment of the present application;
FIG. 2 is a flow chart illustrating a method for hub model detection for a vehicle according to an exemplary embodiment of the present application;
FIG. 3 is a schematic diagram illustrating an exemplary embodiment of an equipment accessory detection apparatus according to the present application;
fig. 4 is a hardware schematic diagram of an electronic device according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, the first classification threshold may also be referred to as the second classification threshold, and similarly, the second classification threshold may also be referred to as the first classification threshold, without departing from the scope of the present application. The word "if," as used herein, may be interpreted as "when," "upon," or "in response to determining," depending on the context.
As market demands change, products become more diversified and the corresponding product accessories come in many types, which poses challenges for controlling incorrect and missing assembly during product assembly. For products or equipment assembled with various accessories, quality defects such as incorrectly installed or missing accessories need to be reduced through equipment accessory detection after the assembly process is finished, thereby improving product quality. For example, after the final assembly process of an automobile is completed, matching detection needs to be performed on the exterior parts and some of the interior parts of the whole vehicle.
Traditional equipment accessory detection is generally performed by manual inspection, which is inefficient, and missed detections and false detections easily occur when the differences between different types of accessories are small or when there are many accessory types.
In order to solve the above problems, the present application provides an apparatus accessory detection method, which is applied to an edge computing node, where an accessory identification network is preloaded on the edge computing node. Referring to fig. 1, the method may include the steps of:
s101, acquiring a target accessory image of the tested device;
the target accessory image is an image of an accessory to be detected, which contains the equipment to be detected, and the image can be an image containing the target accessory directly acquired by using image acquisition equipment, or an image containing the target accessory obtained by performing image acquisition on the equipment to be detected by using the image acquisition equipment and processing the acquired image.
In one example, the device under test may be a device equipped with at least one accessory, and the accessory to be detected may be one or more of the accessories with which the device under test is equipped, where the type information of the one or more accessories can be identified from an image. For example, the device under test may be a vehicle, and the accessories to be detected may be exterior parts of the assembled vehicle such as the car logo, body color, sunroof, tire brand, door handle, hub model, and rear-view mirror, as well as some interior parts whose type can be determined from images, such as the vanity mirror, steering wheel, and airbag.
The edge computing node and the image acquisition equipment are in image transmission connection with each other in advance, and after the image acquisition equipment is triggered to start image acquisition, the edge computing node can acquire a target accessory image through the image transmission connection.
S102, recognizing the target accessory image by using the accessory recognition network to obtain a prediction classification result of the target accessory;
the accessory identification network is preloaded in the edge computing node. The accessory recognition network is a trained neural network and is used for recognizing the input images and predicting the categories of the input images.
After step S101 is completed, the edge computing node takes the acquired target accessory image as the input of the accessory identification network, and identifies the target accessory image by using the accessory identification network to obtain a predicted classification result of the target accessory. For example, the edge computing node acquires a target accessory image of accessory A of the device under test, accessory A corresponds to three types A1, A2 and A3, and the accessory identification network is used to predict which of the preset categories A1, A2 and A3 accessory A belongs to; the target accessory image of accessory A is input into the accessory identification network, the accessory identification network identifies the target accessory image of accessory A and outputs the predicted classification result of accessory A as type A1, and the edge computing node obtains A1 as the predicted classification result of accessory A.
In one example, for detection of the tire model of the vehicle, the edge computing node acquires a tire model image, where the tire corresponds to three models, namely 215/60R16 91H, 215/55R16 90U and 215/55R16 85Q; the edge computing node takes the tire model image as the input of the accessory identification network, the accessory identification network predicts that the tire model is the second model, 215/55R16 90U, and the edge computing node obtains the predicted classification result of the tire model as 215/55R16 90U.
S103, comparing the prediction classification result with reference type information of a target accessory corresponding to the tested device to obtain a detection result of the target accessory.
The reference type information of the target accessory corresponding to the tested equipment refers to type information corresponding to the target accessory theoretically installed by the tested equipment according to the design requirements of the equipment; the prediction classification result refers to accessory type information obtained by identifying a target accessory image actually assembled by the equipment to be tested by the accessory identification network.
Taking the example where the device under test is a vehicle, the vehicle has an accessory table for indicating the reference type information of the target accessories of the vehicle. Referring to Table 1, the following table shows part of an accessory table for a vehicle series, and the table information indicates the reference type information of some accessories of that vehicle series. The category to which an accessory shown in the table belongs can be predicted by acquiring an image of the accessory and using the accessory identification network.
[Table 1 is provided as an image in the original publication and is not reproduced here]
Table 1: Partial accessory list of a vehicle series
After the step S102 is completed, the edge computing node compares the prediction classification result with the reference type information of the corresponding target accessory after obtaining the prediction classification result of the target accessory. Under the condition that the prediction classification result is the same as the reference type information, obtaining a detection result of the target accessory as the correct type matching of the target accessory; and under the condition that the prediction classification result is different from the reference type information, obtaining a detection result of the target accessory as a target accessory type matching error.
Still taking the tire model detection as an example, the predicted classification result of the tire model obtained by the edge computing node is 215/55R16 90U; assuming that the device under test is the vehicle described in Table 1, the reference type information corresponding to the tire model is 215/60R16 91H. The edge computing node compares the predicted classification result 215/55R16 90U with the reference type information 215/60R16 91H, finds that they do not match, and obtains the detection result of the tire model as "tire model matching error".
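For illustration, the comparison in step S103 could be sketched as follows; the accessory table structure and the function name are assumptions for the example, not the patent's interface.

```python
# Hypothetical sketch of step S103: compare the predicted classification result
# with the reference type information from the accessory table of the device
# under test. Names and data are illustrative assumptions.
reference_accessory_table = {
    "tire_model": "215/60R16 91H",
    "hub_model": "C1",
}

def detect_accessory(accessory_name: str, predicted_type: str,
                     accessory_table: dict) -> str:
    """Return a detection result string for one target accessory."""
    reference_type = accessory_table[accessory_name]
    if predicted_type == reference_type:
        return f"{accessory_name}: type matches correctly"
    return f"{accessory_name}: type matching error (expected {reference_type}, got {predicted_type})"

print(detect_accessory("tire_model", "215/55R16 90U", reference_accessory_table))
# prints: tire_model: type matching error (expected 215/60R16 91H, got 215/55R16 90U)
```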
According to the technical scheme provided by the embodiment of the application, the edge computing node loads the accessory identification network in advance, the obtained target accessory images are classified by the accessory identification network in the edge computing node, and the classification result of the accessory identification network is compared with the accessory reference type information to detect the accessories, so that the accessory detection efficiency is improved, and the false detection rate and the missed detection rate are reduced.
In addition, this detection mode combines the accessory identification network with the edge computing node and uses the edge computing node to respond quickly to accessory detection, thereby shortening the detection time delay for the target accessory compared with the related art.
In addition, the cost of an edge computing node is low relative to that of a server, and the edge computing node places low requirements on the installation of field equipment, so adopting an edge computing node for detection can reduce cost.
In some embodiments, the acquiring a target accessory image of the device under test may include: acquiring an image of the device under test; detecting the image by using a target detection network to obtain an image area containing the target accessory; and obtaining a target accessory image based on the image area.
That is, the edge computing node first acquires an image acquired by the image acquisition device for at least one part of the device to be tested, wherein the image includes at least one target accessory.
In one example, the device under test may be a vehicle, and acquiring an image of the device under test may utilize a plurality of cameras disposed around the vehicle to acquire an image of at least a portion of the vehicle including at least one target accessory of the vehicle, such as acquiring an image of a portion of the vehicle including an accessory tire.
Then, target detection is performed on the image by using a target detection network; when the image contains the target accessory, the obtained detection result contains a detection frame of the target accessory as well as the category and position information of the target accessory. Taking the above-obtained vehicle part image including the tire as an example, the image is taken as the input of the target detection network, the accessory tire is found by the target detection network, and the position of the accessory tire in the image is marked with a detection frame. For example, the target detection network detects a tire and outputs the position information of the detection frame in which the image region of the tire is located, namely the coordinates [x1, y1] of the upper-left corner point of the detection frame and the coordinates [x2, y2] of the lower-right corner point.
Finally, according to the detection frame containing the target accessory output by the target detection network, the image containing the target accessory can be obtained; that is, the image input to the target detection network can be cropped according to the detection frame to obtain the target accessory image. For example, the image of the vehicle portion including the accessory tire is cropped using the coordinates [x1, y1] and [x2, y2] of the two diagonal corner points of the detection frame in which the tire image region is located, and the tire image is thus obtained.
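A minimal sketch of this cropping step is shown below; it assumes the detection box is already available as pixel coordinates and follows the NumPy convention that rows index y and columns index x.

```python
import numpy as np

def crop_target_accessory(image: np.ndarray, box: tuple) -> np.ndarray:
    """Crop the target accessory region from the device image.

    `image` is an H x W x 3 array; `box` is (x1, y1, x2, y2), the top-left and
    bottom-right corners of the detection frame output by the detection network.
    """
    x1, y1, x2, y2 = box
    return image[y1:y2, x1:x2]

# Illustrative usage with a dummy image and a hypothetical tire detection box.
vehicle_image = np.zeros((1080, 1920, 3), dtype=np.uint8)
tire_box = (400, 600, 900, 1050)            # assumed [x1, y1], [x2, y2] output
tire_image = crop_target_accessory(vehicle_image, tire_box)
print(tire_image.shape)                     # (450, 500, 3)
```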
In the embodiment of the disclosure, the target detection is performed on the obtained image of the device to be detected through the target detection network, and the target accessory image is obtained according to the image area which is output by the target detection network and contains the target accessory, so that the obtained target accessory image contains more effective technical features, the accuracy of image identification of the target accessory is improved, and the accuracy of accessory detection is improved.
In some embodiments, the edge computing node obtains an image of a target accessory of the device under test, and may also directly acquire the image of the target accessory by using an image acquisition device, where an image transmission connection is established in advance between the image acquisition device and the edge computing node. In one example, the image capture device may comprise at least one robot carrying a camera mounted on a robot boom, the camera being operable to transmit images to the edge computing node. When the detected equipment reaches a preset detection station, triggering the robot to move to a specified photographing point near the detection station according to a preset path; the robot moves the camera to the position corresponding to the accessories according to a preset accessory photographing sequence, acquires accessory images and transmits the acquired images to the edge computing node. For example, the device under test is a vehicle, and the preset photographing sequence of the accessories is a rearview mirror, a door handle, and a front tire. The robot moves the camera to the position of the rearview mirror according to the preset path after reaching the first photographing point, photographs and collects images of the rearview mirror, and then sequentially moves the camera to the positions of the door handle and the front tire to collect images.
The image acquisition equipment can comprise two or more robots carrying cameras, a plurality of appointed photographing points can be preset, the robots work simultaneously under the condition of not influencing each other, different accessory images are acquired respectively, and the image acquisition efficiency is improved.
Further, after the accessory image is acquired, an image processing program corresponding to the accessory may be triggered, and the acquired accessory image may be further processed, for example, noise reduction, resolution enhancement, background processing, and the like.
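For illustration only, such a post-acquisition processing hook might look like the sketch below; the specific operations (non-local-means denoising and bicubic upscaling with OpenCV) are assumptions chosen to match the examples just named, not steps mandated by the disclosure.

```python
import cv2
import numpy as np

def preprocess_accessory_image(image: np.ndarray, scale: float = 2.0) -> np.ndarray:
    """Example further processing of an acquired accessory image: denoise, then upscale."""
    denoised = cv2.fastNlMeansDenoisingColored(image, None, 10, 10, 7, 21)   # noise reduction
    upscaled = cv2.resize(denoised, None, fx=scale, fy=scale,
                          interpolation=cv2.INTER_CUBIC)                     # resolution enhancement
    return upscaled
```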
The accessory identification network preloaded in the edge computing node is a trained neural network. The accessory identification network can be constructed by means of transfer learning, which mainly involves three processes: sample preparation, network structure determination, and network training and verification.
Firstly, a training sample set of the accessory identification network is prepared according to the accessory to be detected of the device to be detected. The accessory identification network is used for identifying an image of an accessory to be detected of the equipment to be detected, acquiring the image of the corresponding accessory by using image acquisition equipment based on the accessory to be detected, and marking type information corresponding to the image to be used as a training sample set. For example, taking the device under test as a vehicle as an example, a camera may be used to capture an image of an accessory of the vehicle and label the type, such as capturing an image of an accessory such as a wheel hub, a door handle, a rearview mirror, a tire, a vehicle body, and labeling the type respectively.
Accessory images can be acquired at different angles to enrich the training sample set; data enhancement operations, such as random rotation, scaling, flipping, noise addition, and super-resolution enhancement using a GAN network, can also be applied to the acquired accessory images to expand the training sample set and enhance data quality.
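A small sketch of such an augmentation pipeline (excluding GAN-based super-resolution) is given below using torchvision transforms; the parameter values are illustrative assumptions rather than values from the disclosure.

```python
import torch
from torchvision import transforms

# Illustrative augmentation pipeline for accessory training images;
# parameter values are assumed, not taken from the disclosure.
augment = transforms.Compose([
    transforms.RandomRotation(degrees=15),                       # random rotation
    transforms.RandomResizedCrop(size=224, scale=(0.8, 1.0)),    # random scaling and cropping
    transforms.RandomHorizontalFlip(p=0.5),                      # random flipping
    transforms.ToTensor(),
    transforms.Lambda(lambda x: (x + 0.01 * torch.randn_like(x)).clamp(0.0, 1.0)),  # additive noise
])
```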
Then, a pre-trained network is constructed by means of transfer learning and the network structure is determined. The pre-trained accessory identification network can be any lightweight classification network such as SqueezeNet, ShuffleNet, or MobileNet.
The constructed pre-trained network is then trained with the training sample set, after freezing all layers of the pre-trained network except the final fully connected layer. During training, the network runs once over the training set and the validation set in each iteration to obtain the loss value and accuracy on each set; the network parameters are updated by gradient descent using the loss on the training set, and the generalization ability of the network is evaluated using the loss and accuracy on the validation set. After a certain amount of training, the loss on the training set gradually decreases, while the loss on the validation set gradually increases after reaching a minimum; the network corresponding to the minimum of the validation loss can be used as the trained accessory identification network.
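For illustration, a transfer-learning setup along these lines might be sketched in PyTorch as follows; the backbone (MobileNetV3-Small), the optimizer, and the early-stopping rule on validation loss are assumptions, not the specific configuration of the disclosure.

```python
import copy
import torch
from torch import nn, optim
from torchvision import models

def train_accessory_network(train_loader, val_loader, num_classes: int,
                            epochs: int = 20, device: str = "cpu") -> nn.Module:
    """Fine-tune a lightweight pre-trained classifier for accessory recognition."""
    model = models.mobilenet_v3_small(weights="DEFAULT")
    for param in model.parameters():                 # freeze the pre-trained backbone
        param.requires_grad = False
    model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, num_classes)
    model.to(device)

    criterion = nn.CrossEntropyLoss()
    optimizer = optim.SGD(model.classifier[-1].parameters(), lr=0.01, momentum=0.9)

    best_val_loss, best_state = float("inf"), None
    for _ in range(epochs):
        model.train()
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(images.to(device)), labels.to(device))
            loss.backward()
            optimizer.step()

        model.eval()
        val_loss = 0.0
        with torch.no_grad():
            for images, labels in val_loader:
                val_loss += criterion(model(images.to(device)), labels.to(device)).item()
        if val_loss < best_val_loss:                 # keep the network at the validation-loss minimum
            best_val_loss, best_state = val_loss, copy.deepcopy(model.state_dict())

    model.load_state_dict(best_state)
    return model
```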
In some embodiments, the identifying the target accessory image by using the accessory identification network to obtain the predicted classification result of the target accessory may include: obtaining probability values of the target accessory belonging to multiple preset classifications; determining the maximum value in the probability values of the multiple preset classifications, comparing the maximum value with a first classification threshold corresponding to the target accessory, and determining the type corresponding to the maximum value as a predicted classification result of the target accessory when the maximum value is greater than or equal to the first classification threshold corresponding to the target accessory.
Taking accessory A as an example, assume that accessory A includes three types A1, A2 and A3, and that accessory A corresponds to a first classification threshold of 0.7 in the accessory identification network. The image of accessory A is identified by the accessory identification network, and the probability values predicted by the accessory identification network for accessory A belonging to A1, A2 and A3 are obtained as [0.76, 0.21, 0.03]. The maximum of these probability values is determined to be 0.76; given that the maximum value 0.76 is greater than the first classification threshold 0.7, the type A1 corresponding to the maximum value 0.76 is the predicted classification result of accessory A, and the edge computing node obtains A1 as the predicted classification result of accessory A.
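A minimal sketch of this thresholded prediction step is shown below; the softmax call and the handling of the below-threshold case (returning None) are assumptions added so the example is self-contained.

```python
import torch
import torch.nn.functional as F

def predict_with_threshold(model: torch.nn.Module, accessory_image: torch.Tensor,
                           class_names: list, threshold: float):
    """Return the predicted class name, or None if the top probability is below the threshold."""
    model.eval()
    with torch.no_grad():
        logits = model(accessory_image.unsqueeze(0))      # add a batch dimension
        probs = F.softmax(logits, dim=1).squeeze(0)       # e.g. [0.76, 0.21, 0.03]
    max_prob, max_index = torch.max(probs, dim=0)
    if max_prob.item() >= threshold:                      # compare with the first classification threshold
        return class_names[max_index.item()]
    return None                                           # no confident prediction

# Illustrative usage for accessory A with types A1/A2/A3 and threshold 0.7:
# result = predict_with_threshold(model, image_tensor, ["A1", "A2", "A3"], 0.7)
```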
In some embodiments, the first classification threshold may be determined as follows: for the accessory identification network, the recall ratio and precision ratio corresponding to the target accessory are obtained by setting different classification thresholds; and the first classification threshold is determined according to the precision ratio-recall ratio (P-R) curve corresponding to the recall ratio and the precision ratio.
Still taking accessory A as an example, for the trained accessory identification network, several values in (0, 1) may be selected as classification thresholds based on the prediction results and the training samples used when the network was determined to be the trained accessory identification network, and the recall ratio and precision ratio of the classification results for accessory A corresponding to each classification threshold are obtained. A precision ratio-recall ratio (P-R) curve can then be drawn, and the value on the P-R curve at which the precision ratio equals the recall ratio is determined as the first classification threshold; alternatively, an F1 value is calculated from the recall ratio and the precision ratio using the following formula, an F1 value-classification threshold curve is drawn, and the classification threshold corresponding to the point with the largest F1 value is determined as the first classification threshold, where the formula is:
F1 = (2 × P × R) / (P + R), where P denotes the precision ratio and R denotes the recall ratio.
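As a non-authoritative sketch, the F1-based option above could be implemented roughly as follows, assuming held-out labels and predicted probabilities for the target class are available; the scikit-learn helper is a convenience choice, not something named in the disclosure.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

def select_first_classification_threshold(y_true: np.ndarray, y_score: np.ndarray) -> float:
    """Pick the classification threshold that maximizes F1 = 2*P*R/(P+R).

    `y_true` holds 1 where the sample truly belongs to the target class;
    `y_score` holds the network's predicted probability for that class.
    """
    precision, recall, thresholds = precision_recall_curve(y_true, y_score)
    # precision/recall have one more entry than thresholds; drop the final point.
    f1 = 2 * precision[:-1] * recall[:-1] / (precision[:-1] + recall[:-1] + 1e-12)
    return float(thresholds[np.argmax(f1)])

# Illustrative usage with made-up validation scores for accessory A, type A1.
y_true = np.array([1, 0, 1, 1, 0, 1, 0])
y_score = np.array([0.9, 0.4, 0.8, 0.65, 0.3, 0.75, 0.55])
print(select_first_classification_threshold(y_true, y_score))
```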
in the embodiment of the disclosure, the first classification threshold is determined by setting different classification thresholds to obtain the P-R curve corresponding to the target accessory, and the classification effect of the accessory identification network corresponding to the first classification threshold is better, so that the accuracy of the predicted classification result output by the accessory identification network can be improved.
In some embodiments, the edge computing node may further upload the target accessory image and a detection result corresponding to the target accessory image to a cloud after obtaining a detection result of a target accessory of the device under test, so that the cloud optimizes the accessory identification network; and updating the accessory identification network by using the optimized accessory identification network.
That is, the edge computing node may upload the target accessory image and the corresponding accessory detection result to the cloud after obtaining one or more accessory detection results, or may upload them after the detection results of all accessories to be detected of the device under test have been obtained. In one example, the device under test has an accessory table, and the accessory table can also be uploaded to the cloud after the edge computing node starts accessory detection or after all accessories to be detected of the device under test have been detected, so as to store the detection process data of the device under test.
After the target accessory image and the corresponding target accessory image detection result are uploaded to the cloud, the cloud can acquire reference type information of the target accessory, label corresponding type information on the target accessory image with the accessory detection result of correct matching, and take the labeled target accessory image as a training sample. For example, a tire image is obtained, and the detection result is that "the tire model matches correctly", the cloud obtains reference model information of the tire, and marks the tire image with a corresponding tire model.
The cloud side optimizes the accessory identification network by using the training sample, evaluates the performance of the optimized accessory identification network and determines whether to update the accessory identification network, for example, an F1 value is calculated by obtaining precision ratio and recall ratio, and the accessory identification network is updated by using the optimized accessory identification network under the condition that the F1 value is larger than the F1 value of the accessory identification network before optimization.
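A minimal sketch of this cloud-side update decision is shown below, assuming a macro-averaged F1 computed with scikit-learn on a shared evaluation set; the decision rule simply mirrors the description above.

```python
from sklearn.metrics import f1_score

def should_update_network(y_true, y_pred_old, y_pred_new) -> bool:
    """Update the edge node's accessory identification network only if the
    optimized network's F1 on the evaluation set exceeds the current one's."""
    f1_old = f1_score(y_true, y_pred_old, average="macro")
    f1_new = f1_score(y_true, y_pred_new, average="macro")
    return f1_new > f1_old
```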
In the embodiment of the disclosure, the detected accessory image is used as a new training sample, the iteration optimization of the accessory identification network is realized by uploading the detected accessory image to the cloud, and the prediction accuracy of the accessory identification network is improved by utilizing the iteration optimization of the accessory identification network, so that the accessory detection accuracy is improved.
In some embodiments, for the optimized accessory identification network, the cloud acquires the recall ratio and precision ratio corresponding to the target accessory by setting different classification thresholds; the edge computing node may determine a second classification threshold according to the P-R curve corresponding to the recall ratio and the precision ratio, and update the first classification threshold to the second classification threshold. The manner of obtaining the recall ratio and the precision ratio and determining the classification threshold in this embodiment is referred to the foregoing embodiments, and is not described herein again.
In some embodiments, the edge computing node may further send a prompt message in response to the comparison result between the prediction classification result and the reference type information of the target accessory corresponding to the device under test being inconsistent.
For example, if the predicted classification result of accessory A of the device under test is A2 and the reference type information corresponding to accessory A of the device under test is A1, the comparison between the predicted classification result of accessory A and the reference type information is inconsistent. In response to the inconsistent comparison result for accessory A, the edge computing node can send prompt information to an inspector through an alarm device, for example by controlling an audible and visual alarm device to display a red light and announce "accessory A type matching error" by voice; it can also send information indicating that the comparison between the predicted classification result of accessory A and the reference type information is inconsistent to devices such as a terminal, a server, or a display screen.
In the embodiment of the disclosure, prompt information is sent to remind staff to handle the abnormality, thereby reducing equipment quality defects and improving equipment quality.
The following describes the scheme of the present application in conjunction with a specific application scenario of equipment accessory detection. Taking the detection of automobile exterior parts as an example, after the final assembly process of automobile manufacturing is completed and before performance testing is performed, the automobile exterior parts need to be detected to prevent quality defects such as incorrectly installed or missing parts. The automobile exterior parts may include the body color, sunroof, glass type, tire type, car logo, hub model, and the like.
The embodiment of the application takes the detection of an automobile exterior part as an example to describe the scheme of the application in detail. Suppose the hub model is to be detected, the hub model includes four types C1, C2, C3 and C4, and the hub model in the design configuration list of the detected vehicle is C1.
First, the accessory identification network is preloaded in the edge computing node. The accessory identification network is preset with classifications including C1, C2, C3 and C4 which respectively correspond to the sequences of preset classification probability values output by the accessory identification network; the optimal classification threshold value of the hub model corresponding to the accessory identification network is 0.85, namely the classification effect of the accessory identification network on the hub model is the best under the classification threshold value, and the probability of correctly identifying the hub model corresponding to the classification threshold value is the highest.
The following describes in detail the various steps of detecting the hub type, which, as shown in fig. 2, may include:
s201, acquiring a hub image of a detected vehicle;
and communication connection is pre-established between the edge computing node and the image acquisition equipment. The image acquisition equipment can be a plurality of industrial cameras arranged around the detected vehicle, the industrial cameras can be used for lighting and shooting according to the ambient brightness, and the focal length can be automatically adjusted according to the distance; the image acquisition equipment can also be a camera arranged on a mechanical arm of the robot, and when the robot moves the camera to a fixed photographing point, the camera is triggered to shoot an accessory image corresponding to the photographing point.
When the detected vehicle reaches the designated detection position, triggering the image acquisition equipment to start acquiring the image containing the wheel hub of the detected vehicle, and transmitting the image to the edge computing node; after receiving the image, the edge computing node may further process the image, such as performing target detection on the image to obtain an image area containing the hub, and obtaining a more accurate hub image based on the image area.
S202, identifying the hub image by using the accessory identification network to obtain a prediction classification result of the hub image type;
The edge computing node takes the acquired hub image as the input of the accessory identification network and obtains the probability values, output by the accessory identification network, that the hub belongs to each preset classification; for example, the output result is [0.88, 0.001, 0.11, 0.009], corresponding respectively to the hub models [C1, C2, C3, C4]. The maximum of these probability values is 0.88 and the corresponding hub model is C1; given that the maximum value 0.88 is greater than the optimal classification threshold 0.85 corresponding to the hub model, the hub model C1 corresponding to the maximum value is determined as the predicted classification result.
S203, comparing the prediction classification result with the reference type information of the hub type of the detected vehicle to obtain a detection result of the hub type.
The prediction classification result of the hub model is C1, the reference type information of the hub model of the tested vehicle is C1, the comparison result of the prediction classification result of the hub model and the reference type information can be determined to be consistent, and the detection result of the hub model obtained based on the consistency of the comparison result is that the matching of the hub model is correct.
Further, the edge computing node may control an alarm device to prompt a corresponding detection result, and the alarm device may be an audible and visual alarm device. For example, if the wheel hub model is correctly matched as a result of the detection of the wheel hub model obtained by the edge calculation node, the sound-light alarm device can be controlled to display a green light and the sound prompt of 'correct wheel hub model matching' can be given; based on a similar principle, the edge computing node obtains a detection result of the hub model, namely the matching error of the hub model, and can control the audible and visual alarm device to display a red light and prompt the 'matching error of the hub model' through voice.
The technical scheme includes that an accessory identification network is loaded in advance by an edge computing node, obtained target accessory images are classified by the accessory identification network in the edge computing node, and accessory detection is performed by comparing classification results of the accessory identification network with accessory reference type information, so that accessory detection efficiency is improved, and false detection rate and missing detection rate are reduced.
Corresponding to the foregoing embodiment of the device accessory detection method, referring to fig. 3, the present application also provides an embodiment of a device accessory detection apparatus, which is applied to an edge computing node, where the edge computing node loads an accessory identification network in advance; the device comprises:
an accessory image acquisition module 301 for acquiring a target accessory image of the device under test;
a classification prediction module 302, configured to recognize the target accessory image by using the accessory recognition network to obtain a prediction classification result of the target accessory;
and the information comparison module 303 is used for comparing the prediction classification result with reference type information of a target accessory corresponding to the tested device to obtain a detection result of the target accessory.
In some embodiments, the accessory image acquisition module is specifically configured to: acquiring an image of the device under test; detecting the image by using a target detection network to obtain an image area containing the target accessory; and obtaining a target accessory image based on the image area.
In some embodiments, the device under test comprises a vehicle; the acquiring the image of the device under test may include: images of a plurality of portions of the vehicle are acquired using a plurality of cameras disposed around the vehicle.
In some embodiments, the classification prediction module is specifically configured to: obtaining probability values of the target accessory belonging to various preset classifications; determining the maximum value in the probability values of the multiple preset classifications, comparing the maximum value with a first classification threshold corresponding to the target accessory, and determining the type corresponding to the maximum value as a predicted classification result of the target accessory when the maximum value is greater than or equal to the first classification threshold corresponding to the target accessory.
In some embodiments, the classification prediction module further comprises: the first rate value acquisition module is used for acquiring the recall ratio and the precision ratio corresponding to the target accessory by setting different classification threshold values aiming at the accessory identification network; and the threshold value determining module is used for determining the first classification threshold value according to the precision ratio and a precision ratio-precision ratio P-R curve corresponding to the precision ratio.
In some embodiments, the apparatus further comprises: the information transmission module is used for uploading the target accessory image and the detection result corresponding to the target accessory image to a cloud under the condition that the detection result of the target accessory of the tested device is obtained, so that the cloud optimizes the accessory identification network; and the identification network updating module is used for updating the accessory identification network by using the optimized accessory identification network.
In some embodiments, the identifying network update module further comprises: the second rate value acquisition module is used for acquiring the recall ratio and the precision ratio corresponding to the target accessory by setting different classification threshold values through a cloud end aiming at the optimized accessory identification network; and the threshold updating module is used for determining a second classification threshold according to the P-R curve corresponding to the recall ratio and the precision ratio and updating the first classification threshold into the second classification threshold.
In some embodiments, the apparatus further comprises: a result response module, configured to send prompt information in response to the predicted classification result being inconsistent with the reference type information of the target accessory corresponding to the device under test.
The implementation process of the functions and actions of each unit in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
For the device embodiment, since it basically corresponds to the method embodiment, reference may be made to the partial description of the method embodiment for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the application. One of ordinary skill in the art can understand and implement it without inventive effort.
An electronic device is further provided in the embodiments of the present application, and a schematic structural diagram of the electronic device is shown in fig. 4, where the electronic device 400 includes at least one processor 401, a memory 402, and a bus 403, and the at least one processor 401 is electrically connected to the memory 402; the memory 402 is configured to store at least one computer-executable instruction, and the processor 401 is configured to execute the at least one computer-executable instruction, so as to perform the steps of any one of the device accessory detection methods as provided in any one of the embodiments or any one of the alternative embodiments of the present application.
Further, the processor 401 may be an FPGA (Field-Programmable Gate Array) or another device with logic processing capability, such as an MCU (Microcontroller Unit) or a CPU (Central Processing Unit).
According to the technical scheme provided by the embodiment of the application, the edge computing node loads the accessory identification network in advance, the obtained target accessory images are classified by the accessory identification network in the edge computing node, and the classification result of the accessory identification network is compared with the accessory reference type information to detect the accessories, so that the accessory detection efficiency is improved, and the false detection rate and the missed detection rate are reduced.
The embodiment of the present application further provides another readable storage medium, which stores a computer program, and the computer program is used for implementing the steps of any one of the device accessory detection methods provided in any one of the embodiments or any one of the alternative embodiments of the present application when the computer program is executed by a processor.
Embodiments of the present application provide readable storage media including, but not limited to, any type of disk including floppy disks, hard disks, optical disks, CD-ROMs, and magneto-optical disks, ROMs (Read-Only Memories), RAMs (Random Access Memories), EPROMs (Erasable Programmable Read-Only Memories), EEPROMs (Electrically Erasable Programmable Read-Only Memories), flash memories, magnetic cards, or optical cards. That is, a readable storage medium includes any medium that stores or transmits information in a form readable by a device (e.g., a computer).
According to the technical scheme provided by the embodiment of the application, the edge computing node loads the accessory identification network in advance, the obtained target accessory images are classified by the accessory identification network in the edge computing node, and the classification result of the accessory identification network is compared with the accessory reference type information to detect the accessories, so that the accessory detection efficiency is improved, and the false detection rate and the missed detection rate are reduced.
While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. In other instances, features described in connection with one embodiment may be implemented as discrete components or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. Further, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some implementations, multitasking and parallel processing may be advantageous.
The above description is only an exemplary embodiment of the present application and is not intended to limit the present application. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within the protection scope of the present application.

Claims (10)

1. An equipment accessory detection method, characterized in that the method is applied to an edge computing node, and the edge computing node preloads an accessory identification network; the method comprises the following steps:
acquiring a target accessory image of a device under test;
identifying the target accessory image by using the accessory identification network to obtain a prediction classification result of the target accessory;
and comparing the prediction classification result with reference type information of the target accessory corresponding to the device under test to obtain a detection result of the target accessory.
2. The method of claim 1, wherein acquiring the target accessory image of the device under test comprises:
acquiring an image of the device under test;
detecting the image by using a target detection network to obtain an image area containing the target accessory;
and obtaining a target accessory image based on the image area.
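A minimal sketch of this cropping step, assuming the target detection network returns pixel-coordinate boxes in (x1, y1, x2, y2) form; the detector is stubbed out, so the box value and image size are invented for the example.

```python
import numpy as np

def detect_boxes(image):
    """Stand-in for the target detection network; returns (x1, y1, x2, y2) boxes."""
    return [(40, 60, 200, 220)]  # placeholder box around the target accessory

def crop_accessory_images(image):
    crops = []
    for x1, y1, x2, y2 in detect_boxes(image):
        crops.append(image[y1:y2, x1:x2])  # image area containing the target accessory
    return crops

full_image = np.zeros((480, 640, 3), dtype=np.uint8)       # dummy device image
target_accessory_images = crop_accessory_images(full_image)
print([crop.shape for crop in target_accessory_images])    # [(160, 160, 3)]
```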
3. The method of claim 1, wherein identifying the target accessory image by using the accessory identification network to obtain the prediction classification result of the target accessory comprises:
obtaining probability values of the target accessory belonging to a plurality of preset classifications;
determining the maximum value among the probability values of the plurality of preset classifications, comparing the maximum value with a first classification threshold corresponding to the target accessory, and determining the type corresponding to the maximum value as the prediction classification result of the target accessory when the maximum value is greater than or equal to the first classification threshold corresponding to the target accessory.
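As a sketch only, the thresholded argmax described in this claim could look like the following, assuming the network emits raw logits and the first classification threshold for this accessory is already known; the class names and threshold value are invented for the example.

```python
from typing import Optional

import numpy as np

FIRST_CLASSIFICATION_THRESHOLD = 0.85  # illustrative value for this accessory
CLASSES = ["bracket_type_a", "bracket_type_b", "bolt_m8"]

def predict_classification(logits: np.ndarray) -> Optional[str]:
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                      # softmax: probability per preset classification
    best = int(np.argmax(probs))              # index of the maximum probability value
    if probs[best] >= FIRST_CLASSIFICATION_THRESHOLD:
        return CLASSES[best]                  # accepted prediction classification result
    return None                               # below threshold: no confident prediction

print(predict_classification(np.array([4.0, 1.0, 0.5])))  # -> bracket_type_a
```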
4. The method of claim 3, further comprising:
for the accessory identification network, obtaining the recall ratio and the precision ratio corresponding to the target accessory by setting different classification thresholds;
and determining the first classification threshold according to a precision-recall (P-R) curve corresponding to the recall ratio and the precision ratio.
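One way such a threshold could be derived is sketched below using scikit-learn's precision_recall_curve on a held-out validation set; the labels, scores, and the target precision of 0.9 are all invented for the example.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

# Hypothetical validation data: 1 means the accessory truly belongs to the class,
# and y_score is the network's probability for that class on each sample.
y_true = np.array([1, 1, 0, 1, 0, 1, 0, 1])
y_score = np.array([0.95, 0.90, 0.80, 0.75, 0.60, 0.55, 0.40, 0.30])

precision, recall, thresholds = precision_recall_curve(y_true, y_score)

TARGET_PRECISION = 0.9  # illustrative precision requirement for this accessory
# precision/recall have one more entry than thresholds; drop the last point before pairing.
candidates = [t for p, t in zip(precision[:-1], thresholds) if p >= TARGET_PRECISION]
first_classification_threshold = min(candidates) if candidates else 1.0
print(first_classification_threshold)  # 0.9 for this toy data
```

Picking the smallest threshold that still meets the precision requirement keeps the recall as high as possible, which is one reasonable reading of choosing an operating point on the P-R curve.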
5. The method of claim 1, further comprising:
when a detection result of a target accessory of the device under test is obtained, uploading the target accessory image and the detection result corresponding to the target accessory image to a cloud, so that the cloud optimizes the accessory identification network;
updating the accessory identification network with the optimized accessory identification network.
6. The method of claim 5, further comprising:
for the optimized accessory identification network, obtaining, by the cloud, the recall ratio and the precision ratio corresponding to the target accessory by setting different classification thresholds;
and determining a second classification threshold according to the P-R curve corresponding to the recall ratio and the precision ratio, and updating the first classification threshold to the second classification threshold.
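A rough orchestration of this upload-optimize-redeploy loop might look like the sketch below; every function is a hypothetical stub standing in for the actual cloud storage, training pipeline, and threshold search, none of which are specified here.

```python
def upload_to_cloud(image, detection_result):
    """Stub: push an edge sample and its detection result to cloud storage."""

def optimize_network(samples):
    """Stub: fine-tune the accessory identification network on accumulated samples."""
    return "optimized-network"

def pick_second_classification_threshold(network, validation_set):
    """Stub: sweep thresholds on the optimized network and pick one from its P-R curve."""
    return 0.88

def refresh_edge_node(edge_state, new_network, new_threshold):
    edge_state["network"] = new_network                     # swap in the optimized network
    edge_state["classification_threshold"] = new_threshold  # first threshold -> second threshold

edge_state = {"network": "initial-network", "classification_threshold": 0.85}
upload_to_cloud(image=None, detection_result="pass")
new_net = optimize_network(samples=[])
refresh_edge_node(edge_state, new_net, pick_second_classification_threshold(new_net, validation_set=[]))
print(edge_state)  # {'network': 'optimized-network', 'classification_threshold': 0.88}
```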
7. An equipment accessory detection apparatus, characterized in that the apparatus is applied to an edge computing node, and the edge computing node is preloaded with an accessory identification network; the apparatus comprises:
an accessory image acquisition module, used for acquiring a target accessory image of a device under test;
a classification prediction module, used for identifying the target accessory image by using the accessory identification network to obtain a prediction classification result of the target accessory;
and an information comparison module, used for comparing the prediction classification result with the reference type information of the target accessory corresponding to the device under test to obtain a detection result of the target accessory.
8. An equipment accessory detection system, characterized by comprising an image acquisition device and an edge computing node; the edge computing node preloads an accessory identification network;
the image acquisition device is used for acquiring images of a plurality of components of a device under test;
and the edge computing node is used for obtaining a target accessory image of the device under test, and identifying the target accessory image by using the accessory identification network to obtain a detection result of the target accessory.
9. An electronic device, comprising: a processor, a memory;
the memory for storing a computer program;
the processor is configured to execute the device accessory detection method according to any one of claims 1 to 6 by calling the computer program.
10. A readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the device accessory detection method of any one of claims 1-6.
CN202211392968.4A 2022-11-08 2022-11-08 Equipment accessory detection method, device, system, equipment and storage medium Pending CN115937575A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211392968.4A CN115937575A (en) 2022-11-08 2022-11-08 Equipment accessory detection method, device, system, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211392968.4A CN115937575A (en) 2022-11-08 2022-11-08 Equipment accessory detection method, device, system, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115937575A true CN115937575A (en) 2023-04-07

Family

ID=86551506

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211392968.4A Pending CN115937575A (en) 2022-11-08 2022-11-08 Equipment accessory detection method, device, system, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115937575A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116977941A (en) * 2023-09-22 2023-10-31 太原理工大学 Method and system for detecting key working procedures of tunneling roadway

Similar Documents

Publication Publication Date Title
US11138467B2 (en) Method, device, product, and computer program for operating a technical system
US10551854B2 (en) Method for detecting target object, detection apparatus and robot
CN106183979B (en) A kind of method and apparatus reminded according to spacing vehicle
CN110135318B (en) Method, device, equipment and storage medium for determining passing record
US10613507B2 (en) Automated loading bridge positioning using encoded decals
CN110807491A (en) License plate image definition model training method, definition detection method and device
CN109472251B (en) Object collision prediction method and device
CN109740547A (en) A kind of image processing method, equipment and computer readable storage medium
KR102229220B1 (en) Method and device for merging object detection information detected by each of object detectors corresponding to each camera nearby for the purpose of collaborative driving by using v2x-enabled applications, sensor fusion via multiple vehicles
CN115937575A (en) Equipment accessory detection method, device, system, equipment and storage medium
CN111027381A (en) Method, device, equipment and storage medium for recognizing obstacle by monocular camera
US11250279B2 (en) Generative adversarial network models for small roadway object detection
CN111488766A (en) Target detection method and device
CN113052071B (en) Method and system for rapidly detecting distraction behavior of driver of hazardous chemical substance transport vehicle
CN110631771A (en) Method and apparatus for leak detection
US11341379B1 (en) Smart image tagging and selection on mobile devices
CN113055658A (en) Tunnel hazardous chemical substance vehicle identification method and system based on panoramic stitching technology
JP2021144689A (en) On-vehicle sensing device and sensor parameter optimization device
CN108873097B (en) Safety detection method and device for parking of vehicle carrying plate in unmanned parking garage
CN109960990B (en) Method for evaluating reliability of obstacle detection
CN111709377A (en) Feature extraction method, target re-identification method and device and electronic equipment
CN110606221A (en) Automatic bullet hanging method for bullet hanging vehicle
CN115984723A (en) Road damage detection method, system, device, storage medium and computer equipment
CN112270333A (en) Elevator car abnormity detection method and system aiming at electric vehicle identification
CN112967399A (en) Three-dimensional time sequence image generation method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination