CN116977241A - Method, apparatus, computer readable storage medium and computer program product for detecting defects in a vehicle component

Info

Publication number
CN116977241A
Authority
CN
China
Prior art keywords
detection object
defect
vehicle component
image
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210404455.4A
Other languages
Chinese (zh)
Inventor
杨智
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BMW Brilliance Automotive Ltd
Original Assignee
BMW Brilliance Automotive Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.): 2022-04-18
Filing date: 2022-04-18
Publication date: 2023-10-31
Application filed by BMW Brilliance Automotive Ltd
Priority to CN202210404455.4A
Publication of CN116977241A
Legal status: Pending (current)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G06T 7/0004 - Industrial image inspection
    • G06T 7/0008 - Industrial image inspection checking presence/absence
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/11 - Region-based segmentation
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30108 - Industrial image inspection
    • G06T 2207/30164 - Workpiece; Machine component

Abstract

The present disclosure relates to methods, apparatus, computer readable storage media, and computer program products for detecting defects in vehicle components. A method of detecting a defect in a vehicle component comprises: capturing an image of the vehicle component with a camera; identifying a detection object in the image of the vehicle component; segmenting a region including the detection object from the image of the vehicle component as a detection object image; and determining, using a trained model, whether the detection object in the detection object image has a defect.

Description

Method, apparatus, computer readable storage medium and computer program product for detecting defects in a vehicle component
Technical Field
The present disclosure relates to the field of defect detection of vehicle components, and more particularly, to a method, apparatus, computer readable storage medium, and computer program product for detecting defects of vehicle components.
Background
In the manufacturing process of a vehicle component, defects may sometimes occur in the vehicle component due to the manufacturing process employed. For example, in the welding process of a vehicle body, a stud of a bolt welded to the vehicle body may be contaminated, deformed, or skewed by a spark generated during the welding process. Such defects may be detrimental to subsequent manufacturing processes.
Disclosure of Invention
The following presents a simplified summary of the disclosure in order to provide a basic understanding of some aspects of the disclosure. However, it should be understood that this summary is not an exhaustive overview of the disclosure. It is not intended to identify key or critical elements of the disclosure or to delineate the scope of the disclosure. Its purpose is to present some concepts related to the disclosure in a simplified form as a prelude to the more detailed description that is presented later.
The inventors of the present disclosure have noted that it is desirable to detect defects (such as weld defects) in vehicle components in a timely manner, so that the defects can be handled and their impact on subsequent manufacturing processes minimized. In the detection methods known to the inventors for vehicle components such as a vehicle body, however, an inspector must manually check each detection object on the vehicle component (for example, each welded stud) one by one to determine whether it has a defect. Such manual inspection consumes considerable manpower and time and reduces production efficiency. It is also subjective: different inspectors may apply different criteria, which can lead to false detections and missed detections.
An object of the present disclosure is to provide a method, an apparatus, a computer-readable storage medium, and a computer program product capable of automatically and efficiently detecting defects of a vehicle component, thereby saving substantial manpower and time, improving detection efficiency and quality, and in turn improving overall production efficiency and quality.
According to one aspect of the present disclosure, there is provided a method of detecting a defect of a vehicle component, the method comprising: capturing an image of the vehicle component with a camera; identifying a detection object in the image of the vehicle component; segmenting a region including the detection object from the image of the vehicle component as a detection object image; and determining, using a trained model, whether the detection object in the detection object image has a defect.
According to another aspect of the present disclosure, there is provided an apparatus for detecting a defect of a vehicle component, the apparatus comprising: a memory having instructions stored thereon; and a processor configured to execute instructions stored on the memory to cause the apparatus to perform a method of detecting a defect of a vehicle component according to the present disclosure.
According to another aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, causes the processor to perform a method of detecting a defect of a vehicle component according to the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when run by a processor, causes the processor to perform a method of detecting a defect of a vehicle component according to the present disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the disclosure.
The disclosure may be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is an exemplary flowchart of a method of detecting defects in a vehicle component according to an embodiment of the present disclosure;
FIG. 2 is an exemplary schematic diagram of an application scenario of a method of detecting defects of a vehicle component according to an embodiment of the present disclosure;
FIG. 3 is an exemplary schematic illustration of an image of a portion of a vehicle component including a detection object according to an embodiment of the disclosure;
fig. 4 illustrates an exemplary configuration of a computing device capable of implementing embodiments in accordance with the present disclosure.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless it is specifically stated otherwise.
Meanwhile, it should be understood that, for convenience of description, the sizes of the respective parts shown in the drawings are not drawn to scale.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any specific values should be construed as merely illustrative, and not a limitation. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
A method of detecting defects of a vehicle component according to an embodiment of the present disclosure is described in detail below with reference to fig. 1 to 3. Fig. 1 is an exemplary flowchart of a method 10 of detecting a defect of a vehicle component according to an embodiment of the present disclosure, fig. 2 is an exemplary schematic diagram of an application scenario 20 of a method of detecting a defect of a vehicle component according to an embodiment of the present disclosure, and fig. 3 is an exemplary schematic diagram of an image 30 of a portion of a vehicle component including a detection object according to an embodiment of the present disclosure.
First, in step S1100 of fig. 1, a vehicle component to be detected is photographed with a camera to acquire an image of the vehicle component.
In some embodiments, the vehicle component is a body of a vehicle. The body is the overall frame of the vehicle and is the basis for the assembly of the vehicle. In a factory for assembling a vehicle, a vehicle body is generally manufactured in a vehicle body shop. In a vehicle body shop, press-formed metal vehicle body members are welded mainly by welding equipment to assemble a complete vehicle body frame (vehicle body). Hereinafter, a vehicle body is described as an example of a vehicle component to be detected.
The camera may be, for example, a two-dimensional camera or a three-dimensional camera, or both may be used together. When a two-dimensional camera is used, the acquired image of the vehicle component is a two-dimensional image, and defect detection is based on that two-dimensional image. When a three-dimensional camera is used, the acquired image is a three-dimensional image, and defect detection is based on that three-dimensional image. Hereinafter, a two-dimensional camera is described as an example.
For example, as shown in fig. 2, in the application scene 20, a vehicle body 2100 supported by a support apparatus is photographed with a camera 2200 to acquire an image of the vehicle body 2100. The support apparatus carrying the vehicle body 2100 may, for example, slide along a rail, and the camera 2200 mounted near the rail photographs the vehicle body 2100 as it moves through the camera's field of view. Alternatively, the support apparatus may remain stationary while the camera 2200, mounted on a robot or a robotic arm, is moved to photograph the vehicle body 2100. Alternatively, both the vehicle body 2100 and the camera 2200 may be movable. In the present disclosure, the manner in which the image of the vehicle component is acquired with the camera is not particularly limited, as long as the detection object to be detected is included in the acquired image of the vehicle component.
Returning to fig. 1, in step S1200, a detection object to be detected in the image of the vehicle component is identified.
In some embodiments, where the vehicle component is a vehicle body, the detection object on the vehicle component may be the stud of a bolt on the vehicle body. As described above, the vehicle body members must be welded together to assemble a complete vehicle body. During welding, for example, holes are made in a vehicle body member by the welding equipment and bolts are welded to the member so that other vehicle components (e.g., seats, doors, etc.) can be mounted on the vehicle body in subsequent assembly processes. A welded vehicle body therefore usually carries a large number of bolts. When the bolts are welded to the vehicle body member, however, their studs may become defective due to welding equipment failure, operator error, or the like. Defects of the stud include, for example, one or more of contamination, deformation, and skew. Such defects may affect the accuracy of subsequent assembly, and serious defects may even prevent other vehicle components from being mounted on the vehicle body. It is therefore important to detect whether the studs of the bolts on the vehicle body have defects. Hereinafter, the stud of a bolt welded to a vehicle body is described as an example of the detection object to be detected.
The image of the vehicle component obtained in step S1100 generally contains a plurality of detection objects. To detect whether each of them has a defect, the detection objects in the image must first be identified. In the present disclosure, the manner of identifying the detection objects is not particularly limited, as long as the position and region of each detection object in the image of the vehicle component can be identified. For example, an artificial neural network such as a convolutional neural network may learn the appearance of the detection object, and the trained network may then identify the detection objects in the image of the vehicle component.
For example, as shown in fig. 3, the image 30 shows a part of the vehicle body 3300 that includes detection objects (studs) 3110, 3120, and 3130. Since the image 30 is a two-dimensional image, the cylindrical studs 3110, 3120, and 3130 appear as circles. In the image 30, for example, the stud 3110 is a detection object without a defect, the stud 3120 has a contamination defect, and the stud 3130 has a deformation defect. By inputting the image 30 into an artificial neural network that has learned the appearance of the detection objects, the positions and regions of all detection objects (studs) 3110, 3120, and 3130 in the image 30 (for example, which pixels or pixel regions represent a detection object) can be identified. Note that the image 30 in fig. 3 is merely a schematic example; in practical applications, images of vehicle components are more complex and the detection objects take more varied forms.
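The disclosure does not prescribe a particular network for identifying the detection objects; as one illustrative possibility only, a torchvision Faster R-CNN fine-tuned on annotated stud images could return bounding boxes for the studs in an image such as the image 30. The checkpoint path, class count, and score threshold below are assumptions made for this sketch.

```python
# Illustrative sketch only: the disclosure does not prescribe a specific detector.
# A torchvision Faster R-CNN fine-tuned on annotated stud images is assumed here;
# the checkpoint path "stud_detector.pth" and the class count are hypothetical.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image


def load_detector(weights_path: str = "stud_detector.pth", num_classes: int = 2):
    # num_classes = background + "stud"
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
        weights=None, num_classes=num_classes
    )
    model.load_state_dict(torch.load(weights_path, map_location="cpu"))
    model.eval()
    return model


def detect_studs(model, image_path: str, score_threshold: float = 0.7):
    image = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        prediction = model([to_tensor(image)])[0]
    keep = prediction["scores"] > score_threshold
    # Each box is [x_min, y_min, x_max, y_max] in pixel coordinates.
    return prediction["boxes"][keep].tolist()
```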
Returning to fig. 1, in step S1300, a region including the detection object is segmented from the image of the vehicle component as a detection object image.
This step can be regarded as "splitting" the image of the complete vehicle component into a plurality of detection object images, each preferably containing one detection object. In this way, parts of the image that are irrelevant to detection are discarded, saving memory and computing resources. Moreover, because each detection object image contains only one detection object, the subsequent step can determine more accurately whether each detection object has a defect. For example, after each detection object in the image of the vehicle component has been identified in step S1200, the pixel region containing the pixels that represent the detection object is segmented from the image and used as that detection object's image. In the present disclosure, the manner of segmenting the detection object image is not particularly limited, as long as an image containing each identified detection object in its entirety is obtained. However, if the trained model used later requires a particular input size, the segmented detection object image must satisfy that requirement. For example, if the trained model expects an input of M×N pixels, the detection object image must first be processed (for example, by cropping or scaling) to M×N pixels. The detection object image may also be preprocessed, for example by rotation, sharpening, or conversion to a grayscale image, to make it more suitable for the subsequent defect detection.
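Assuming the identification step yields pixel bounding boxes and the trained model expects a fixed input size, the segmentation and preprocessing described above might look like the following sketch (the 224×224 input size is an assumption, not a value from the disclosure).

```python
# Minimal sketch of step S1300, assuming the identification step returns pixel
# bounding boxes and the trained model expects a fixed input size; 224 x 224
# pixels is an illustrative assumption, not a value taken from the disclosure.
from PIL import Image, ImageFilter


def crop_detection_objects(image_path, boxes, size=(224, 224), grayscale=False):
    image = Image.open(image_path).convert("RGB")
    crops = []
    for x_min, y_min, x_max, y_max in boxes:
        crop = image.crop((x_min, y_min, x_max, y_max))   # one detection object per crop
        crop = crop.resize(size)                          # satisfy the model's input size
        crop = crop.filter(ImageFilter.SHARPEN)           # optional preprocessing
        if grayscale:
            crop = crop.convert("L")                      # optional grayscale conversion
        crops.append(crop)
    return crops
```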
For example, as shown in fig. 3, after each of the detection objects (studs) 3110, 3120, and 3130 in the image 30 of the part of the vehicle body 3300 has been identified, three detection object images bounded by blocks 3210, 3220, and 3230, respectively, may be segmented from the image 30. Each of the three detection object images contains one of the detection objects (studs) 3110, 3120, and 3130 and is used for the subsequent processing.
Returning to fig. 1, in step S1400, a trained model is used to determine whether the detection object in each detection object image has a defect.
For example, the detection object images obtained in step S1300 (for example, the three detection object images each containing one of the studs 3110, 3120, and 3130 described above) are input to the trained model. The trained model computes whether the detection object in each detection object image has a defect and outputs the determination result. The training of the model, the computation, and the output of the determination result are described in detail below.
In some embodiments, the trained model may be trained, for example, by:
(1) segmenting a region including a detection object from an image of a vehicle component as a detection object image;
(2) determining whether the detection object in the detection object image has a defect;
(3) marking the detection object image according to the determination result; and
(4) inputting the marked detection object images into a machine learning model to train it, the trained machine learning model serving as the trained model.
In step (1) above, the detection objects in the image of the vehicle component can be identified, and the regions containing them segmented out as detection object images, in the same manner as in steps S1200 and S1300 of fig. 1. For better training, a large number of previously captured images of vehicle components may be used to obtain a large number of detection object images as training samples. The training set (the number of detection object images used for training) may contain, for example, several thousand, several tens of thousands, or more images. The more training samples there are, the higher the accuracy the machine learning model can reach.
Steps (2) and (3) above, which mark the detection object images used as training samples, are usually performed manually. For example, an experienced inspector may review each detection object image, determine whether the detection object in it has a defect, and mark the image accordingly.
In the training stage there are only two possible determinations: the detection object has a defect, or the detection object has no defect. When reviewing each detection object image, the inspector decides between the two based on experience and on the desired behavior of the model. For example, if the trained model is expected to judge strictly (i.e., to detect even minute defects), the inspector may mark a detection object that deviates only slightly from a defect-free one (e.g., a slightly contaminated stud versus a clean, straight stud) as having a defect. Conversely, if the trained model is expected to judge leniently (i.e., to detect only serious defects that affect production), the inspector may mark as defective only detection objects with serious defects likely to affect subsequent production (e.g., a stud deformed or severely skewed by welding spatter). In the present disclosure, the judgment criterion used in the training stage is not particularly limited, as long as the resulting trained model meets the needs of the actual application. Whatever criterion is used, however, it must be applied consistently to every detection object image.
When the inspector determines whether the detection object in a detection object image has a defect, each detection object image may, for example, be marked with a number, and the computing device used for training stores each detection object image in association with its mark. For example, the image of a detection object determined to have a defect may be marked with 1, and the image of a detection object determined to have no defect may be marked with 0, with each number stored in association with the corresponding image. This amounts to classifying the detection object images into two classes, 0 and 1. Viewed another way, 0 means the probability that the detection object has a defect is 0, and 1 means that probability is 1.
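As one hedged illustration of how the marks could be stored and loaded for training, a simple CSV with one path,label row per detection object image would suffice; the CSV layout and the class name below are assumptions, not part of the disclosure.

```python
# Sketch of loading the marked training samples; a CSV with one "path,label"
# row per detection object image is assumed purely for illustration.
import csv

import torch
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms


class StudDefectDataset(Dataset):
    def __init__(self, csv_path: str):
        with open(csv_path, newline="") as f:
            self.samples = [(row["path"], int(row["label"])) for row in csv.DictReader(f)]
        self.transform = transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        path, label = self.samples[idx]
        image = self.transform(Image.open(path).convert("RGB"))
        # 1 = detection object has a defect, 0 = no defect
        return image, torch.tensor(label, dtype=torch.float32)
```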
In step (4), all the marked detection object images, together with their marks, are input into the machine learning model to train it. The machine learning model may be, for example, a ResNet18 network. ResNet18 is a classification network typically consisting of 17 convolutional layers and 1 fully connected layer. Because of its relatively simple structure, ResNet18 offers fast inference and is easy to train. In the present disclosure, the machine learning model is preferably a ResNet18 network, for ease of training, high accuracy, and fast computation. The machine learning model is not limited to this, however, and any other network suitable for defect detection of (a detection object of) a vehicle component may be used. Hereinafter, a ResNet18 network is described as an example of the machine learning model.
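A minimal PyTorch sketch of such a ResNet18 classifier follows, assuming a single sigmoid output read as the probability that the detection object is defective; the disclosure does not fix the output head, so this is only one possible choice.

```python
# Minimal ResNet18-based classifier, assuming a single sigmoid output read as
# the probability that the detection object in the crop has a defect.
import torch
import torch.nn as nn
from torchvision import models


def build_classifier() -> nn.Module:
    model = models.resnet18(weights=None)          # ImageNet weights could also be used
    model.fc = nn.Linear(model.fc.in_features, 1)  # replace the fully connected layer
    return model


def defect_probability(model: nn.Module, batch: torch.Tensor) -> torch.Tensor:
    model.eval()
    with torch.no_grad():
        logits = model(batch)                      # shape (N, 1)
    return torch.sigmoid(logits).squeeze(1)        # probabilities in [0, 1]
```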
As described above, the input of the machine learning model is the set of marked detection object images used for training, together with their marks. The output of the machine learning model may be, for example, the probability that the detection object in each detection object image has a defect. Although the model is trained on the two discrete values 0 and 1 (which are themselves probabilities of a defect), it may compute any value in the interval [0, 1] as the learned probability that a detection object has a defect. In this case, the computed probability may be output directly as the determination result; or a threshold (for example, 0.5) may be set in advance, the detection object being determined to have a defect when the computed probability exceeds the threshold and to have no defect otherwise. Alternatively, two thresholds may be set in advance (for example, a first threshold of 0.8 and a second threshold of 0.5): the detection object is determined to have a defect when the probability exceeds the first threshold, to possibly have a defect when the probability is at most the first threshold but exceeds the second threshold, and to have no defect when the probability is at most the second threshold.
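These output options reduce to simple threshold rules; the sketch below uses the example values 0.5 and 0.8 from the text.

```python
# The output modes reduce to simple threshold rules; 0.5 and 0.8 follow the
# example values given in the text.
def classify_single_threshold(p: float, threshold: float = 0.5) -> str:
    return "defect" if p > threshold else "no defect"


def classify_two_thresholds(p: float, high: float = 0.8, low: float = 0.5) -> str:
    if p > high:
        return "defect"
    if p > low:
        return "possible defect"
    return "no defect"
```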
After the machine learning model has been trained in this way, the trained model is obtained and used to detect defects of vehicle components. To save manpower and time, the marking of detection object images can be carried out while vehicle components are still being inspected with the current manual method; once a sufficient number of marked samples has accumulated, the machine learning model is trained to obtain the trained model. The model may also continue to be trained while further samples accumulate: after the trained model has been used to detect defects of a vehicle component, the detection object images obtained from that inspection can be added to the training set. In this way the number of training samples grows continuously, and the accuracy of the machine learning model keeps improving.
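The training described above, including continued fine-tuning on newly accumulated samples, could be sketched as follows; the loss, optimizer, and hyper-parameters are assumptions rather than values from the disclosure.

```python
# Illustrative fine-tuning loop for the 0/1 labelling scheme above, using
# binary cross-entropy on the raw logits; the optimizer and hyper-parameters
# are assumptions, not values from the disclosure.
import torch
from torch.utils.data import DataLoader


def train(model, dataset, epochs: int = 10, lr: float = 1e-4, batch_size: int = 32):
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    criterion = torch.nn.BCEWithLogitsLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images).squeeze(1), labels)
            loss.backward()
            optimizer.step()
    return model
```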
Note that the determinations the trained model can make depend on how it was trained. As described above, if a strict criterion is used when judging (classifying) the training samples (for example, a detection object only slightly different from a defect-free one is marked as defective), the resulting trained model will also judge strictly and can detect minute defects. If a lenient criterion is used (for example, only detection objects with serious defects that may affect subsequent production are marked as defective), the resulting trained model will judge leniently and detect only serious, production-affecting defects.
Once the trained model is obtained, defects of a vehicle component can be detected in practice by following steps S1100 to S1400 of fig. 1 each time, which saves substantial manpower and time, improves detection efficiency and quality, and thereby improves overall production efficiency and quality. The determination result of the trained model can be output in the following three ways, as outlined above.
(mode one)
In some embodiments, the trained model calculates the probability that a detected object in the input detected object image has a defect, and directly outputs the calculated probability as a result of determining whether the detected object has a defect.
For example, as described above, the trained model may compute any value in the interval [0, 1] as the probability that the detection object has a defect, and may output that value directly (e.g., 0.95 or 95%). In this case the probability value may, for example, be displayed next to the corresponding detection object in the image of the vehicle component, helping the user quickly see which detection objects (for example, which bolt studs) on the vehicle component (for example, the vehicle body) are most likely to be defective, so that attention can be focused on them. Alternatively, the user may set a threshold in advance so that the probability is displayed only next to detection objects whose defect probability exceeds the threshold, concentrating the user's attention further on the detection objects most likely to have defects.
Outputting the determination result in this way presents the defect probability of each detection object intuitively, saves the user time and effort in inspecting the vehicle component, and lets even an inexperienced inspector easily see how likely each detection object is to be defective.
(mode two)
In some embodiments, the trained model calculates a probability that a detection object in the input detection object image has a defect, determines that the detection object has a defect when the calculated probability is greater than a preset threshold, and determines that the detection object has no defect when the calculated probability is less than or equal to the preset threshold. In this case, a region including the detection object determined to have a defect may be visually indicated in the image of the vehicle component to prompt the user of the detection object having a defect. Alternatively, the detection object determined to have a defect and the detection object determined to have no defect may be presented to the user at the same time in different labeling manners (for example, in different colors).
For example, after the trained model computes the probability that the detection object in a detection object image has a defect (for example, 0.95 or 95%), it does not output this numerical probability directly; instead it applies a preset threshold (for example, 0.5) and outputs one of two determinations: the detection object has a defect, or it does not. This amounts to classifying the detection objects into two categories, defective and non-defective, after computing the probability. To focus the user's attention on the defective objects, the detection objects determined to have defects can be highlighted. For example, a detection object determined to be defective (in this example, assume only the detection object (stud) 3130 is so determined) may be circled in the image of the vehicle component with a conspicuous red box (which may be, for example, the boundary of the segmented detection object image), as shown in fig. 3, thereby visually indicating the region containing the defective detection object. Optionally, as in blocks 3210 and 3220 of fig. 3, the detection objects determined to be defect-free may be circled in less conspicuous colors, so that the user can also see the distribution of all detection objects on the vehicle component.
Outputting the determination result in this way highlights the detection objects determined to be defective even more intuitively, further saving the user time and effort in inspecting the vehicle component.
(mode three)
In some embodiments, the trained model calculates a probability that a detection object in the input detection object image has a defect, determines that the detection object has a defect when the calculated probability is greater than a first threshold value set in advance, determines that the detection object is likely to have a defect when the calculated probability is less than or equal to the first threshold value and greater than a second threshold value set in advance, and determines that the detection object does not have a defect when the calculated probability is less than or equal to the second threshold value. In this case, for example, a region including a detection object determined to have a defect, a region including a detection object determined to be likely to have a defect, and a region including a detection object determined to have no defect may be respectively marked in different colors in an image of the vehicle component.
For example, after the trained model computes the probability that the detection object in a detection object image has a defect (e.g., 0.95 or 95%), it applies two preset thresholds (e.g., a first threshold of 0.8 and a second threshold of 0.5) and outputs one of three determinations: the detection object has a defect, may have a defect, or has no defect. This amounts to dividing the detection objects into three categories after computing the probability. In this case, the detection objects determined to have defects and those determined to possibly have defects may be highlighted simultaneously in different ways (for example, in different colors), and the defect-free detection objects may optionally be shown as well. For example, as in blocks 3210, 3220, and 3230 of fig. 3, detection objects determined to be defect-free may be circled in white, those possibly defective in yellow, and those defective in red, so that the three kinds of regions are marked in different colors in the image of the vehicle component. Because the first threshold is set relatively high, a detection object circled in red can be trusted to be genuinely defective. For a detection object circled in yellow, which the trained model judges may have a defect, the user can be prompted to check further, so that possibly defective detection objects are not missed. A defect circled in red is considered serious, may affect subsequent production, and requires further handling, whereas a defect circled in yellow is considered slight and need not be specially handled at the current production stage.
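The colour-coded annotation described in this mode can be sketched with PIL as follows; the white/yellow/red choice follows the example in the text, while the (box, category) result format is an assumption for illustration.

```python
# Sketch of the colour-coded annotation: one rectangle per detection object,
# white / yellow / red as in the example; the (box, category) result format is
# an assumption for illustration.
from PIL import Image, ImageDraw

COLORS = {"no defect": "white", "possible defect": "yellow", "defect": "red"}


def annotate(image_path: str, results, out_path: str):
    # results: iterable of (box, category), box = (x_min, y_min, x_max, y_max)
    image = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(image)
    for box, category in results:
        draw.rectangle(box, outline=COLORS[category], width=3)
    image.save(out_path)
```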
Outputting the determination result in this way intuitively highlights the detection objects determined to be defective, saving the user time and effort, and also highlights the detection objects that may be defective, so that such objects are not overlooked and detection accuracy is improved.
Note that the three ways of outputting the determination result listed above are merely examples. The output of the determination result is not limited to the above three modes. For example, in order to focus the user on the detection object determined to have a defect, a portion of the image of the vehicle component other than the region including the detection object may be blurred to emphasize the detection object.
Further, in some embodiments, when the vehicle component is photographed with the camera, an identification code of the vehicle to which the vehicle component belongs may be photographed and recognized with another camera, and the trained model's determination of whether the detection objects on the vehicle component have defects may be stored in association with that identification code.
For example, in the application scenario 20 of fig. 2, when the vehicle body 2100 is photographed with the camera 2200, another camera (not shown) may photograph and recognize the VIN code stamped on the vehicle body 2100 as the identification code of the vehicle to which the body belongs. After the determination of whether the detection objects (for example, bolt studs) on the vehicle body have defects is obtained, for example through steps S1100 to S1400 of fig. 1, the result is stored in association with the vehicle's identification code. For example, when the result is output in any of the three ways described above (for example, the third way), the image of the vehicle component marked with the determination result (for example, the image 30 in fig. 3) may be stored in association with the identification code. This makes it easy to look up the detection result and detection history of the vehicle component, both for handling its defects and for subsequent production processes.
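As a hedged example of how the determination result could be stored in association with the VIN, a per-vehicle JSON record might be appended to a results file; the record layout and file name are assumptions, not part of the disclosure.

```python
# Sketch of associating the annotated result with the vehicle's VIN; the JSON
# record layout and the file name "results.json" are assumptions.
import json
from datetime import datetime


def store_result(vin: str, annotated_image_path: str, detections: list,
                 db_path: str = "results.json"):
    record = {
        "vin": vin,
        "image": annotated_image_path,
        "detections": detections,   # e.g. [{"box": [...], "category": "defect"}, ...]
        "timestamp": datetime.now().isoformat(),
    }
    try:
        with open(db_path) as f:
            history = json.load(f)
    except FileNotFoundError:
        history = []
    history.append(record)
    with open(db_path, "w") as f:
        json.dump(history, f, indent=2)
```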
(computing device)
Fig. 4 illustrates an exemplary configuration of a computing device 40 capable of implementing embodiments in accordance with the present disclosure.
Computing device 40 is an example of a hardware device that is capable of applying the above aspects of the present disclosure. Computing device 40 may be any machine configured to perform processing and/or calculations. Computing device 40 may be, but is not limited to, a workstation, a server, a desktop computer, a laptop computer, a tablet computer, a Personal Data Assistant (PDA), a smart phone, an in-vehicle computer, or a combination thereof.
As shown in fig. 4, computing device 40 may include one or more elements that may connect or communicate with bus 4100 via one or more interfaces. Bus 4100 can include, but is not limited to, industry standard architecture (Industry Standard Architecture, ISA) bus, micro channel architecture (Micro Channel Architecture, MCA) bus, enhanced ISA (EISA) bus, video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus, among others. Computing device 40 may include, for example, one or more processors 4200, one or more input devices 4300, and one or more output devices 4400. The one or more processors 4200 may be any kind of processor and may include, but are not limited to, one or more general purpose processors or special purpose processors (such as special purpose processing chips). The processor 4200 may correspond, for example, to a processor of an apparatus for detecting defects of a vehicle component of the present disclosure, configured to implement a method of detecting defects of a vehicle component of the present disclosure. The input device 4300 may be any type of input device capable of inputting information to a computing device and may include, but is not limited to, a mouse, a keyboard, a touch screen, a microphone, and/or a remote controller. The output device 4400 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, video/audio output terminals, vibrators, and/or printers.
The computing device 40 may also include or be connected to a non-transitory storage device 4700, which non-transitory storage device 4700 may be any storage device that is non-transitory and that may enable data storage, and may include, but is not limited to, a disk drive, an optical storage device, solid state memory, a floppy disk, a flexible disk, a hard disk, magnetic tape or any other magnetic medium, a compact disk or any other optical medium, cache memory and/or any other memory chip or module, and/or any other medium from which a computer may read data, instructions, and/or code. Computing device 40 may also include Random Access Memory (RAM) 4500 and Read Only Memory (ROM) 4600. The ROM 4600 may store programs, utilities or processes to be executed in a nonvolatile manner. The RAM 4500 may provide volatile data storage and store instructions related to the operation of the computing device 40. Computing device 40 may also include a network/bus interface 4800 coupled to data link 4900. The network/bus interface 4800 can be any kind of device or system capable of enabling communication with external apparatuses and/or networks and can include, but is not limited to, modems, network cards, infrared communication devices, wireless communication devices, and/or chip sets (such as Bluetooth™ devices, 802.11 devices, WiFi devices, WiMax devices, cellular communication facilities, etc.).
In addition, another embodiment of the present disclosure also provides a computer-readable storage medium comprising computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform the method of detecting a defect of a vehicle component as described in the above embodiments.
It will be apparent to those skilled in the art that embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present disclosure have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the disclosure.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present disclosure without departing from the spirit or scope of the disclosure. Thus, the present disclosure is intended to include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (15)

1. A method of detecting a defect in a vehicle component, the method comprising:
capturing an image of the vehicle component with a camera;
identifying a detection object in the image of the vehicle component;
segmenting a region including the detection object from the image of the vehicle component as a detection object image; and
determining, using a trained model, whether the detection object in the detection object image has a defect.
2. The method of claim 1, wherein the trained model is trained by:
segmenting a region including a detection object from an image of a vehicle component as a detection object image;
determining whether the detection object in the detection object image has a defect;
marking the detection object image according to the determination result of whether the detection object has a defect; and
inputting the marked detection object images into a machine learning model to train it, the trained machine learning model serving as the trained model.
3. The method of claim 1, wherein,
the trained model calculates the probability that the detection object has a defect and outputs the probability as the determination result of whether the detection object has a defect.
4. The method of claim 1, wherein,
the trained model calculates the probability that the detection object has a defect,
when the probability is greater than a preset threshold value, judging that the detection object has a defect,
and when the probability is less than or equal to a preset threshold value, judging that the detection object has no defect.
5. The method of claim 4, wherein,
a region including the detection object determined to be defective is visually indicated in the image of the vehicle component to prompt the user of the detection object having the defect.
6. The method of claim 1, wherein,
the trained model calculates the probability that the detection object has a defect,
when the probability is larger than a preset first threshold value, judging that the detection object has a defect,
when the probability is equal to or less than the first threshold value and is greater than a preset second threshold value, the detection object is judged to be possibly defective,
and when the probability is smaller than or equal to the second threshold value, judging that the detection object has no defect.
7. The method of claim 6, wherein,
an area including a detection object determined to have a defect, an area including a detection object determined to be likely to have a defect, and an area including a detection object determined to have no defect are respectively marked in different colors in an image of the vehicle component.
8. The method of claim 1, wherein,
when the vehicle component is photographed with the camera, another camera is used to photograph and identify an identification code of the vehicle to which the vehicle component belongs,
and storing a judgment result of whether the detection object has a defect in association with the identification code of the vehicle.
9. The method of claim 2, wherein,
the determination result comprises two results: the detection object has a defect, and the detection object has no defect,
the detection object image of the detection object determined to have a defect is marked with 1, and the detection object image of the detection object determined to have no defect is marked with 0.
10. The method according to claim 1 or 2, wherein,
the vehicle component is a body of a vehicle.
11. The method of claim 10, wherein,
the detection object is a stud of a bolt on the vehicle body,
the defect is one or more of the following defects of the stud arising during welding of the vehicle body: contamination, deformation, skew.
12. The method of claim 2, wherein,
the machine learning model is a ResNet18 network.
13. An apparatus for detecting defects in a vehicle component, comprising:
a memory having instructions stored thereon; and
a processor configured to execute instructions stored on the memory to cause the apparatus to perform the method of detecting a defect of a vehicle component of any one of claims 1 to 12.
14. A computer readable storage medium having stored thereon a computer program which, when executed by a processor, causes the processor to perform the method of detecting a defect of a vehicle component as claimed in any one of claims 1 to 12.
15. A computer program product comprising a computer program which, when run by a processor, causes the processor to perform the method of detecting a defect of a vehicle component as claimed in any one of claims 1 to 12.

Priority Applications (1)

Application Number: CN202210404455.4A
Priority Date / Filing Date: 2022-04-18
Publication: CN116977241A (en), status pending
Title: Method, apparatus, computer readable storage medium and computer program product for detecting defects in a vehicle component

Publications (1)

Publication Number: CN116977241A
Publication Date: 2023-10-31

Family

ID=88480084

Family Applications (1)

Application Number: CN202210404455.4A
Status: Pending
Title: Method, apparatus, computer readable storage medium and computer program product for detecting defects in a vehicle component

Country Status (1)

Country Link
CN (1) CN116977241A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117252486A (en) * 2023-11-14 2023-12-19 长春师范大学 Automobile part defect detection method and system based on Internet of things
CN117252486B (en) * 2023-11-14 2024-02-02 长春师范大学 Automobile part defect detection method and system based on Internet of things

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination