CN111103306A - Method for detecting and marking defects - Google Patents

Method for detecting and marking defects

Info

Publication number
CN111103306A
CN111103306A
Authority
CN
China
Prior art keywords
module
detection
defect
marking
shooting
Prior art date
Legal status
Pending
Application number
CN201811282210.9A
Other languages
Chinese (zh)
Inventor
陈政隆
戴文智
阮春禄
Current Assignee
Solomon Technology Corp
Original Assignee
Solomon Technology Corp
Priority date
Filing date
Publication date
Application filed by Solomon Technology Corp filed Critical Solomon Technology Corp
Priority to CN201811282210.9A priority Critical patent/CN111103306A/en
Publication of CN111103306A publication Critical patent/CN111103306A/en

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/95Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
    • G01N21/956Inspecting patterns on the surface of objects
    • G01N21/95607Inspecting patterns on the surface of objects using a comparative method

Abstract

The invention provides a method for detecting and marking defects. The method comprises: controlling a 2D camera module to photograph different areas of an object under test to obtain a detection 2D image of each area; performing defect detection processing on the detection 2D images to determine whether a defect exists in any area; when a defect exists in an area, controlling a 3D camera module to photograph that area to obtain appearance 3D data of the area; measuring the 3D position of the defect according to the appearance 3D data; and controlling a marking module to mark the 3D position on the object under test. The invention can effectively improve defect detection speed and can accurately locate and mark defects.

Description

Method for detecting and marking defects
Technical Field
The present invention relates to detection methods, and more particularly to a method for detecting and marking defects.
Background
In existing defect detection practice, defects on an object are mostly detected manually, which consumes considerable manpower and yields unstable detection quality.
A detection system has been proposed that uses a 2D camera to capture a 2D image of the entire object and detects defects in that 2D image. However, because the 2D image lacks depth information, this system cannot accurately locate the defects, so after a defect is found its position must still be confirmed and marked manually.
In view of the above, a solution for automatically detecting and marking defects is proposed.
Disclosure of Invention
The present invention provides a method for detecting and marking defects, which can quickly detect defects and accurately locate and mark the defects.
In one embodiment, a method for detecting and marking defects is applied to a detection and marking system comprising a 2D camera module, a 3D camera module and a marking module. The method comprises the steps of:
a) controlling the 2D camera module to perform 2D photographing on different areas of an object under test to obtain a detection 2D image of each area;
b) performing a defect detection process on each detection 2D image to determine whether any detection 2D image includes a defect image;
c) when any detection 2D image includes a defect image, controlling the 3D camera module to perform 3D photographing on the area corresponding to that detection 2D image to obtain appearance 3D data;
d) measuring a 3D position of the defect according to the appearance 3D data; and
e) controlling the marking module to mark the 3D position on the object under test.
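The claimed steps a) to e) amount to a capture-detect-locate-mark loop. The sketch below is an illustrative interpretation only: `cam2d`, `cam3d`, `marker`, `detect_defect` and `locate_defect_3d` are hypothetical stand-ins for the 2D camera module, 3D camera module, marking module and the detection/measurement processing, not interfaces defined by the patent.

```python
def inspect_and_mark(regions, cam2d, cam3d, marker, detect_defect, locate_defect_3d):
    """Run steps a)-e) over every region of the object under test."""
    marked_positions = []
    for region in regions:
        image_2d = cam2d.capture(region)           # step a): 2D photographing
        if not detect_defect(image_2d):            # step b): defect detection
            continue                               # no defect image -> next region
        shape_3d = cam3d.capture(region)           # step c): 3D photographing
        position = locate_defect_3d(shape_3d)      # step d): measure 3D position
        marker.mark(position)                      # step e): mark on the object
        marked_positions.append(position)
    return marked_positions
```

Note that 3D photographing only happens for regions whose 2D image triggers the detector, which is the source of the claimed speed advantage.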
In one embodiment, the detection and marking system further comprises a moving module. Step a) controls the moving module to move along a detection path in a detection mode, so that the plurality of areas of the object under test sequentially enter the shooting range of the 2D camera module, and performs 2D photographing on each area entering the shooting range.
In one embodiment, the method further includes, before step a), a step f) of controlling the moving module to move along an input path according to a path setting operation in a setting mode, and generating the detection path from the coordinates the moving module passes through during the movement.
In one embodiment, the method further comprises a step g) of calculating a simulation path according to 3D object data describing the shape of the object under test, and generating the detection path, which comprises a plurality of coordinates in order, according to the simulation path; step a) controls the moving module to move among the plurality of coordinates in sequence.
In one embodiment, the method further comprises the following steps before step a):
h1) obtaining the detection path and an initial area feature corresponding to the detection path;
h2) controlling the 2D camera module or the 3D camera module to photograph each area of the object under test;
h3) analyzing an area feature of each photographed area, and setting an area as the initial area when its area feature matches the initial area feature; and
h4) correcting the plurality of coordinates of the detection path according to the coordinates of the moving module when the initial area was photographed.
In one embodiment, step b) obtains a defect recognition model corresponding to the object under test and performs the defect detection process on each detection 2D image based on the defect recognition model, wherein the defect recognition model is generated by performing a training process on a plurality of defect template images based on machine learning.
In one embodiment, the method further includes a step i) of analyzing the image of the defect according to the appearance 3D data and a plurality of defect identification rules respectively corresponding to different defect types, so as to determine the defect type of the defect. Step e) then selects, according to the defect type, one of a plurality of marks that respectively correspond to different defect types, and marks the selected mark at the 3D position on the object under test.
In one embodiment, step e) controls the marking module to mark a removable mark at the 3D position on the object under test.
In one embodiment, step e) controls the marking module to mark a non-removable mark at the 3D position on the object under test.
In one embodiment, the detection and marking system further comprises a moving module for moving the 2D camera module and the 3D camera module simultaneously. Step a) controls the moving module to move along a detection path in a detection mode and controls the 2D camera module to perform 2D photographing on different areas of the object under test during the movement; step b) performs the defect detection process on each captured detection 2D image in real time; step c), when a captured detection 2D image includes a defect image, immediately controls the moving module to stop moving and controls the 3D camera module to perform 3D photographing on the current area. The method further comprises a step j) of repeatedly executing steps a) to e) until the detection is completed.
The invention can effectively improve the defect detection speed, accurately determine the position of a defect and mark it.
The invention is described in detail below with reference to the drawings and specific examples, but the invention is not limited thereto.
Drawings
FIG. 1 is a schematic view of a first embodiment of a detection and marking system according to the present invention;
FIG. 2 is a first schematic view of a detection and marking system according to a second embodiment of the present invention;
FIG. 3 is a second schematic view of a detection and marking system according to a second embodiment of the present invention;
FIG. 4 is a schematic view of a detection and marking system according to a third embodiment of the present invention;
FIG. 5 is a flowchart illustrating a method for detecting and marking defects according to a first embodiment of the present invention;
FIG. 6 is a partial flowchart of a method for detecting and marking defects according to a second embodiment of the present invention;
FIG. 7 is a partial flowchart of a method for detecting and marking defects according to a third embodiment of the present invention; and
FIG. 8 is a flowchart illustrating a method for detecting and marking defects according to a fourth embodiment of the present invention.
Reference numerals:
10 … detection and marking system
100 … control module
101 … 2D camera module
102 … 3D camera module
103 … marking module
104 … moving module
105 … memory module
106 … computer program
107 … communication module
108 … human-machine interface
11 … computer device
12 … bearing platform
20-22 … test object
200, 220 … defect
201, 221 … mark
P1, P2 … coordinates
S10-S15 … first detection step
S20-S28 … setting step
S30-S35 … correction steps
S40-S49 … second detection step
Detailed Description
The following detailed description of the embodiments of the present invention with reference to the drawings and specific examples is provided for further understanding of the objects, aspects and effects of the present invention, but not for limiting the scope of the appended claims.
Referring to fig. 1 to 4 together, fig. 1 is a schematic diagram of a detecting and marking system according to a first embodiment of the present invention, fig. 2 is a first schematic diagram of a detecting and marking system according to a second embodiment of the present invention, fig. 3 is a second schematic diagram of a detecting and marking system according to a second embodiment of the present invention, and fig. 4 is a schematic diagram of a detecting and marking system according to a third embodiment of the present invention.
The detecting and marking system 10 of the present invention may include a 2D camera module 101, a 3D camera module 102, a marking module 103, and a control module 100 electrically connected to the modules for controlling the same.
The 2D camera module 101 (such as a monochrome camera or a color camera) is used to perform 2D photographing to generate a detection 2D image. The 3D camera module 102 is used to perform 3D photographing to generate appearance 3D data.
In one embodiment, the 3D camera module 102 may include a 2D camera and a depth meter (e.g., a laser range finder). The 2D camera captures a picture of the object from a specific viewing angle to generate a 2D image. The depth meter measures a depth value for each position in the picture, i.e., the distance between the depth meter and the actual position corresponding to each pixel of the 2D image. Point cloud data for that viewing angle can then be generated by processing each 2D image together with the corresponding depth values.
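The depth-map-to-point-cloud processing described above can be sketched with an assumed pinhole camera model; the intrinsics `fx`, `fy`, `cx`, `cy` are illustrative parameters that the patent does not specify.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a per-pixel depth map into an N x 3 point cloud.

    Assumes a pinhole model: pixel (u, v) with depth z maps to
    x = (u - cx) * z / fx, y = (v - cy) * z / fy.
    """
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))  # pixel grids
    z = depth
    x = (us - cx) * z / fx        # lateral offset from the optical axis
    y = (vs - cy) * z / fy        # vertical offset from the optical axis
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

Each viewing angle yields one such cloud; clouds from several angles could be merged into the appearance 3D data of a region.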
In one embodiment, the 2D camera module 101 and the 3D camera module 102 may be integrated, with the 3D camera module 102 being a depth meter; that is, when performing 3D shooting, the 2D camera module 101 captures the 2D image and the 3D camera module 102 obtains the depth values.
The marking module 103 (e.g., a labeler, a spray coating device, or a laser engraving machine) is used for marking (e.g., labeling, spraying, or burning a designated image and text) a designated position in the three-dimensional space.
In one embodiment, the detecting and marking system 10 further includes a moving module 104 electrically connected to the control module 100.
In an embodiment, as shown in fig. 2 and 3, the 2D camera module 101, the 3D camera module 102 and the marking module 103 may be disposed on a moving module 104 (e.g., a robot arm), and the moving module 104 may move the 2D camera module 101, the 3D camera module 102 and the marking module 103 in a three-dimensional space, so as to photograph or mark different regions of the object 20 on the supporting stage 12.
In an embodiment, the 2D photographing module 101, the 3D photographing module 102 and/or the marking module 103 may be respectively disposed on different moving modules 104 (e.g., disposed on different robots), so that 2D photographing, 3D photographing and/or marking operations can be performed simultaneously, thereby effectively reducing the detection time.
Because the carrying platform and the object under test do not need to be moved, these two arrangements are suitable for detecting larger, heavier or more fragile objects.
In an embodiment, as shown in fig. 4, the 2D camera module 101 and the 3D camera module 102 are fixedly disposed. The marking module 103 and the stage 12 are provided in different movement modules 104. For example, the marking module 103 is disposed on a robot arm, and the platform 12 is disposed on a multi-axis moving device. Accordingly, the moving module 104 can move the platform 12 to make each region of the object 22 face the 2D camera module 101 and the 3D camera module 102 for shooting.
In one embodiment, the 2D camera module 101, the 3D camera module 102 and the mark module 103 are fixedly disposed. The moving module 104 may move the stage 12 to direct the flawed area of the object 22 to the marking module 103 for marking.
Since the 2D camera module 101 and the 3D camera module 102 have precise optical structures, the two arrangements can prevent the 2D camera module 101 and the 3D camera module 102 from being damaged or failing to focus due to movement and provide better shooting quality, and are suitable for detecting smaller or finer objects to be detected.
In one embodiment, the detecting and marking system 10 may further include a memory module 105 electrically connected to the control module 100. The memory module 105 is used for storing data.
In one embodiment, the detection and marking system 10 may further include a human-machine interface 108 (e.g., a light, a speaker, a button, or other input/output devices) electrically connected to the control module 100.
In one embodiment, the detecting and marking system 10 may further include a communication module 107 (e.g., a wireless communication module such as a Bluetooth transceiver, a ZigBee transceiver, a Wi-Fi transceiver or a Sub-1GHz transceiver, or a wired communication module such as a USB module, a wired network module or a serial data communication module) electrically connected to the control module 100. The detecting and marking system 10 can be connected to an external computer device 11 (such as a remote controller or a personal computer) via the communication module 107.
Thus, the user can control the detecting and marking system 10 or know the current status (such as the current working mode or the detecting progress) of the detecting and marking system 10 through the human-machine interface 108 or the computer device 11.
In one embodiment, the memory module 105 includes a non-transitory computer readable medium storing a computer program 106 (such as firmware, an operating system, or an application), and the computer program 106 records a computer readable program code. The control module 100 may execute the computer program 106 to control the detecting and marking system 10 to implement the steps of the method for detecting and marking defects according to the embodiments of the present invention.
Referring to fig. 5, a flowchart of a method for detecting and marking defects according to a first embodiment of the invention is shown. The method for detecting and marking defects of the present embodiment can be applied to the detecting and marking system 10 of any one of the embodiments shown in fig. 1 to 4 (which will be described later with reference to fig. 1 to 2).
Step S10: the control module 100 controls the 2D camera module 101 to perform 2D photographing on different areas of the object 20 (a vehicle is taken as an example in fig. 2) to obtain a detection 2D image of each area.
In the embodiment shown in fig. 2, the control module 100 may read a group of detection paths from the memory module 105, control the moving module 104 to move along the detection paths so that a plurality of regions to be detected of the object 20 sequentially enter the capturing range of the 2D capturing module 101, and control the 2D capturing module 101 to perform 2D capturing to obtain the detected 2D images of the regions when the regions enter the capturing range.
Step S11: the control module 100 performs a defect detection process on each captured detection 2D image to determine whether it includes any defect image (such as an image of the defect 200 shown in fig. 2).
In one embodiment, the control module 100 may perform the defect detection process on each detected 2D image based on a plurality of defect recognition rules pre-stored in the memory module 105.
In an embodiment, the control module 100 may compare each of the detected 2D images with a plurality of defect images pre-stored in the memory module 105 to determine whether each of the detected 2D images includes a defective image.
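As a minimal sketch of this comparison, a region can be flagged when its detection 2D image is sufficiently close to one of the pre-stored defect images; the mean-absolute-difference metric and the threshold below are assumptions, not the patent's actual matching method.

```python
import numpy as np

def matches_defect_template(image, defect_templates, max_mean_diff=10.0):
    """Return True if `image` closely matches any stored defect image.

    Images are assumed to be same-sized grayscale arrays; the comparison
    metric (mean absolute pixel difference) is illustrative only.
    """
    for template in defect_templates:
        diff = np.abs(image.astype(float) - template.astype(float)).mean()
        if diff <= max_mean_diff:
            return True   # close to a known defect appearance
    return False
```

A production system would typically use a more robust matcher (e.g. normalized cross-correlation) that tolerates lighting and alignment variation.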
Step S12: when any detection 2D image includes a defect image, the control module 100 determines that the area corresponding to that detection 2D image has a defect and performs step S13. When none of the detection 2D images includes a defect image, the control module 100 determines that no area has a defect and ends the method for detecting and marking defects.
Step S13: the control module 100 controls the 3D photographing module 102 to perform 3D photographing on the area with the defect to obtain the appearance 3D data (e.g., point cloud data) of the area.
In the embodiment shown in fig. 2, the control module 100 can obtain the coordinates of the moving module 104 when the 2D camera module 101 shoots the area, and control the moving module 104 to move the 3D camera module 102 according to the coordinates so that the area enters the shooting range of the 3D camera module 102 for 3D shooting.
Step S14: the control module 100 measures the 3D location of the flaw based on the apparent 3D data. In one embodiment, the control module 100 can obtain a depth coordinate (e.g., a distance from the 3D camera module 102) of the defect according to the appearance 3D data, and obtain a 3D position of the defect according to the coordinate of the moving module 104 and the depth coordinate when the defect is captured.
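The position computation in step S14 might be sketched as follows, assuming a simple translation-only relationship between the moving module's recorded coordinates and the camera frame (a real system would apply a full hand-eye calibration transform, which this sketch omits):

```python
def defect_world_position(module_xyz, defect_offset_xy, depth):
    """Combine the moving module's pose with the defect's depth coordinate.

    module_xyz       -- coordinates of the moving module at capture time
    defect_offset_xy -- defect offset in the camera plane (assumed units)
    depth            -- distance from the 3D camera module to the defect
    """
    mx, my, mz = module_xyz
    ox, oy = defect_offset_xy
    return (mx + ox, my + oy, mz + depth)   # world-frame 3D position
```

The returned 3D position is what step S15 hands to the marking module.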
Step S15: the control module 100 marks the defect according to the 3D position via the marking module 103.
In the embodiment shown in fig. 2, the control module 100 may control the moving module 104 to move to the 3D position, and control the marking module 103 to mark the defect 200 of the object 20, such as spraying the surrounding frame mark 201 around the defect 200.
The invention detects defects through 2D photographing, which effectively improves defect detection speed, and determines the 3D position of each defect through 3D photographing, which allows the defect to be marked accurately.
Referring to fig. 6, a partial flowchart of a method for detecting and marking defects according to a second embodiment of the invention is shown. The present embodiment provides a detection path setting function, which can generate the detection path automatically or through learning, and is described with reference to fig. 1 and 2 (but can also be used in the systems shown in fig. 3 and 4). The method for detecting and marking defects of the present embodiment further includes the following steps for implementing the detection path setting function.
Step S20: the control module 100 enters the setting mode automatically or according to a user operation. In the setting mode, the detection and marking system 10 can selectively execute the manual path-setting procedure (steps S21 to S24) or the automatic path-setting procedure (steps S25 to S28).
For example, if the control module 100 can obtain (e.g., from the memory module 105 or the computer device 11) 3D object data describing the shape of the current object 20 (e.g., a CAD file of the object 20), the automatic path-setting procedure can be executed. If the control module 100 cannot acquire the 3D object data, the manual path-setting procedure may be performed.
The manual path-setting procedure includes the following steps. Step S21: the control module 100 receives a path setting operation (e.g., sequentially inputting a plurality of designated directions, which together constitute an input path) from the user via the human-machine interface 108 or the computer device 11.
Step S22: the control module 100 selects an area of the object 20 (e.g., the area located in the current shooting range of the 2D camera module 101 or the 3D camera module 102) as the start area, and controls the 2D camera module 101 or the 3D camera module 102 to photograph the selected area to obtain a start 2D image or start 3D data. The object 20 is placed on the platform 12 in a predetermined arrangement.
In an embodiment, the control module 100 may further perform feature analysis on the initial 2D image or the initial 3D data to obtain initial region features of the initial region.
Step S23: the control module 100 controls the moving module 104 to move in a plurality of designated directions in sequence according to the path setting operation, so as to move along the input path.
It should be noted that steps S21 and S23 may be executed sequentially or simultaneously. In the case of sequential execution, the user sets a complete input path first (step S21), i.e., inputs all the designated directions at once, and the detection and marking system 10 then controls the movement of the moving module according to the set input path (step S23).
In the case of simultaneous execution, the user inputs one designated direction (step S21), and the detecting and marking system 10 controls the moving module to move in real time according to that direction (step S23). After the movement is completed, the user inputs the next designated direction, and so on, until the input path is fully set.
In an embodiment, during the movement of the moving module 104 along the input path, the control module 100 may control the 2D photographing module 101 to continuously photograph and output the generated 2D image to the human-computer interface 108 or the computer device 11 for the user to confirm whether the current input path is satisfactory, such as whether all regions are photographed clearly.
Step S24: the control module 100 records the coordinates passed by the moving module 104 during the period of controlling the moving module 104 to move along the input path (i.e. during the execution of step S23), generates the detection path according to the recorded coordinates, stores the detection path in the memory module 105, and ends the setting.
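The manual path-setting procedure (steps S21 to S24) can be sketched as accumulating the coordinates passed while the user's designated directions are executed; the unit-step moves and the direction vocabulary below are illustrative assumptions, not the patent's actual command set.

```python
# Hypothetical direction vocabulary: each designated direction maps to a
# unit displacement of the moving module in 3D space.
STEPS = {"left": (-1, 0, 0), "right": (1, 0, 0),
         "up": (0, 0, 1), "down": (0, 0, -1),
         "forward": (0, 1, 0), "back": (0, -1, 0)}

def record_detection_path(start, directions):
    """Replay user-entered directions and record every coordinate passed."""
    path = [start]
    x, y, z = start
    for d in directions:
        dx, dy, dz = STEPS[d]
        x, y, z = x + dx, y + dy, z + dz
        path.append((x, y, z))   # coordinate passed by the moving module
    return path
```

The recorded list is what would be stored in the memory module 105 as the detection path.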
In an embodiment, the control module 100 may further correspond the start area characteristics of the start area to the detection path and record the detection path in the memory module 105.
Therefore, the invention can complete the manual setting of the detection path, and can enable a user to customize a dedicated detection path for detected objects of different types or different sizes.
The automatic path-setting procedure includes the following steps. Step S25: the control module 100 loads the 3D object data, which describes the shape of a designated object under test (the default object).
Step S26: the control module 100 selects one of the plurality of regions of the predetermined object as a start region, and analyzes the shape of the start region to generate start region characteristics.
Step S27: the control module 100 calculates a simulation path according to the 3D object data. In one embodiment, the starting point of the simulation path is the starting area set in step S26.
In one embodiment, the control module 100 plans a simulation path (e.g., a path passing through all regions of the predetermined object) according to the predetermined object, and calculates a detection path including a plurality of coordinates according to the simulation path. When the moving module 104 sequentially moves among the plurality of coordinates, each area of the predetermined object can enter the shooting range of the 2D camera module 101.
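One possible planning strategy for step S27 is a serpentine scan over a grid of regions, sketched below; the patent only requires that each area of the predetermined object enter the 2D camera module's shooting range in turn, so this particular pattern is an assumption for illustration.

```python
def plan_detection_path(rows, cols, spacing=1.0):
    """Serpentine (boustrophedon) scan covering a rows x cols region grid.

    Returns an ordered list of (x, y) coordinates; alternating row
    direction avoids long return moves between rows.
    """
    coords = []
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in cs:
            coords.append((c * spacing, r * spacing))
    return coords
```

Moving the module through these coordinates in order brings every region into the shooting range exactly once.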
Step S28: the control module 100 records the generated detection path and the corresponding start area feature in the memory module 105, and ends the setting.
Therefore, the invention can automatically generate the detection path suitable for the detected object according to the 3D object data of the detected object, can automatically complete the setting of the detection path without the manual operation of a user, and can effectively save manpower.
Although the embodiment shown in fig. 6 can generate a detection path for a specific object under test, the generated detection path is only suitable for the placement of the object at that time. If the placement changes, a detection path corresponding to the new placement must be generated again, even for the same type of object, which increases the detection time.
Referring to fig. 7, a partial flowchart of a method for detecting and marking defects according to a third embodiment of the invention is shown. To solve the above problem, this embodiment provides a detection path calibration function that automatically corrects an existing detection path according to the current placement of the object under test, so that the corrected path suits that placement. The function is described with reference to figs. 1 to 3 (but can also be used in the system shown in fig. 4). In this embodiment, the detection path was planned based on the placement and the object 20 shown in fig. 2, while the placement and the object 21 shown in fig. 3 are the current detection targets. Compared with the method shown in fig. 5, the method of this embodiment further includes the following steps, performed before step S10, for implementing the detection path correction function.
Step S30: the control module 100 enters a detection mode.
Step S31: the control module 100 (which may be operated by a user) loads the detection path and the start area feature (e.g., the nose feature) corresponding to the detection path from the memory module 105. The loaded detection path is generated by placing the object to be detected based on a predetermined placement (as shown in fig. 2).
In one embodiment, the object 20 corresponding to the loaded detection path is the same object as, or a similar object or an object of the same type as, the current object under test 21 (e.g., a vehicle of a different model).
Step S32: the control module 100 controls the 2D photographing module 101 or the 3D photographing module 102 to photograph each region of the test object 21 to obtain a positioning 2D image or positioning 3D data. The photographed object 21 is placed in another placement (as shown in fig. 3) different from the predetermined placement.
Step S33: the control module 100 performs a feature analysis process on the photographed positioning 2D image or the positioning 3D data to analyze the region features of each of the photographed regions and compares the region features of each of the regions with the start region feature.
If the control module 100 determines that the area feature of any area matches the start area feature, step S34 is executed. If the control module 100 determines that the area characteristics of all the areas do not match the initial area characteristics, the method for detecting and marking defects is ended, and an alert message is output to the human-machine interface 108 or the computer device 11.
Step S34: the control module 100 sets the region with the matched features as the initial region of the current detection.
Step S35: the control module 100 performs a calibration process on the loaded detection path according to the coordinates (new initial coordinates) of the moving module 104 when the initial region is captured.
In one embodiment, the detection path includes a plurality of coordinates in order. The calibration process replaces the first coordinate of the detection path (the original initial coordinate) with the new initial coordinate, calculates the offset between the original and new initial coordinates, and shifts the remaining coordinates of the detection path by that offset, so that the corrected detection path suits the current placement of the object 21. Step S10 is then executed.
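Under a pure-translation assumption, the calibration process can be sketched as shifting every coordinate of the loaded path by the offset between the original and new start coordinates (a rotated placement would additionally require a rotation correction, which this sketch omits):

```python
def calibrate_path(path, new_start):
    """Shift a detection path so its first coordinate matches `new_start`.

    path      -- ordered list of (x, y, z) coordinates; path[0] is the
                 original initial coordinate
    new_start -- moving-module coordinates measured when the start area
                 was photographed in the new placement
    """
    ox = new_start[0] - path[0][0]
    oy = new_start[1] - path[0][1]
    oz = new_start[2] - path[0][2]
    return [(x + ox, y + oy, z + oz) for (x, y, z) in path]
```

The corrected list replaces the loaded detection path before step S10 runs.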
For example, if the detection path is generated based on the preset placement manner shown in fig. 2, the starting point of the detection path is the coordinate P1, and the start area feature is the head feature (i.e., the start area is the head), that is, the 2D camera module 101 can shoot the head after the moving module 104 moves to the coordinate P1.
However, when detection is performed based on the placement of fig. 3, the original detection path is no longer suitable because the position of the start area (the head) has changed; following the original path could, for example, fail to photograph all areas or cause a collision with the object 21.
In this regard, when the placement changes, the detection and marking system 10 can control the 2D camera module 101 or the 3D camera module 102 via the moving module 104 to photograph each area of the object 21 and obtain a positioning 2D image or positioning 3D data of each area. The detection and marking system 10 then analyzes the positioning 2D image or positioning 3D data of each area to obtain its area feature, and compares each area feature with the start area feature to determine the position of the start area (the head). Finally, the detection path is corrected according to the coordinate P2 of the moving module 104 when the start area was photographed, so as to obtain a detection path suitable for the placement of fig. 3.
The invention can save labor and time for replanning the detection path by correcting the existing detection path when the detected object or the placing mode thereof is changed.
Referring to fig. 8, a flowchart of a method for detecting and marking defects according to a fourth embodiment of the invention is shown. The method of this embodiment uses machine learning and automatic analysis techniques to perform defect detection and defect type analysis, and is described below with reference to figs. 1 and 4 (although it is also applicable to the systems shown in figs. 2-3). The method includes the following steps.
Step S40: the control module 100 controls the moving module 104 to move along the detection path (e.g., from the start coordinate) in the detection mode so that each region of the object 22 enters the capturing range of the 2D capturing module 101.
In one embodiment, as shown in fig. 4, the moving module 104 is a multi-axis rotating device and is connected to the carrier 12. The moving module 104 can move according to the coordinates of the detection path to make different areas of the object 22 enter the capturing ranges of the upper 2D camera module 101 and the upper 3D camera module 102.
Step S41: the control module 100 controls the 2D photographing module 101 to perform 2D photographing on the area entering the photographing range to generate a detection 2D image.
In one embodiment, the 2D photographing module 101 performs 2D photographing during the moving process.
Step S42: the control module 100 reads the defect recognition model corresponding to the current object 22 from the memory module 105, and performs defect detection processing on the captured detection 2D image in real time based on the defect recognition model.
In one embodiment, the defect identification model is generated by performing a training process on a plurality of defect template images based on machine learning, and can be used to quickly identify various defects.
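The disclosure does not name a model architecture; as a stand-in, the toy normalized-template matcher below illustrates the train-once, detect-in-real-time pattern (a pure-Python sketch; a production system would instead fit e.g. a CNN on the defect template images):

```python
from math import sqrt

def _normalize(vec):
    # L2-normalize a flat list of pixel values.
    n = sqrt(sum(x * x for x in vec)) or 1.0
    return [x / n for x in vec]

def train_defect_model(template_images):
    """Toy stand-in for the training process: flatten and normalize
    each defect template image."""
    return [_normalize([p for row in t for p in row]) for t in template_images]

def image_has_defect(model, image, threshold=0.95):
    """Flag a detection 2D image as defective when it correlates
    strongly with any stored defect template."""
    v = _normalize([p for row in image for p in row])
    return any(sum(a * b for a, b in zip(v, t)) >= threshold for t in model)
```

The template images and the correlation threshold here are illustrative; only the interface (train on defect template images, then test each detection 2D image) mirrors the embodiment.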
Step S43: when the control module 100 determines that the detection 2D image includes the flaw 220, it determines that the flaw 220 exists in the current area and performs step S44; otherwise, the control module 100 performs step S49.
Step S44: the control module 100 controls the moving module 104 to stop moving so that the area where the flaw 220 is located is within the photographing range of the 3D photographing module 102.
Step S45: the control module 100 controls the 3D photographing module 102 to perform 3D photographing on a current region to generate apparent 3D data of the region.
It is worth mentioning that 3D photographing takes much longer than 2D photographing. If 3D photographing were performed while the moving module is still in motion, the appearance 3D data would very likely be of poor quality, which could cause the subsequent marking process to fail or be misaligned.
Because the invention performs 3D photographing only after movement has stopped, high-quality appearance 3D data can be obtained reliably, which in turn improves marking accuracy.
Step S46: the control module 100 measures the 3D position of the flaw 220 based on the appearance 3D data.
Step S47: the control module 100 reads the defect identification rules from the memory module 105 and performs a defect type analysis process on the appearance 3D data according to those rules, based on an automatic analysis technique, to determine the defect type of the flaw 220. The defect identification rules respectively correspond to different defect types.
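A rule table of this kind can be sketched as ordered (type, predicate) pairs; the rule contents below (depth thresholds on a deviation map and a contrast measure) are invented for illustration and are not taken from the disclosure:

```python
def classify_defect(appearance_3d, rules, default="unknown"):
    """Apply defect identification rules in order to the appearance 3D
    data; the first satisfied rule names the defect type."""
    for defect_type, predicate in rules:
        if predicate(appearance_3d):
            return defect_type
    return default

# Hypothetical rules over a depth-deviation map (mm from nominal surface):
RULES = [
    ("dent",  lambda d: min(d["depths"]) < -0.2),  # material missing
    ("bump",  lambda d: max(d["depths"]) > 0.2),   # material protruding
    ("stain", lambda d: d["contrast"] > 0.5),      # flat but discolored
]
```

Because rules are checked in order, more specific geometric rules can be listed before appearance-only rules such as the stain check.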
It should be noted that, in the embodiment, the defect detection processing is performed on the detected 2D image based on the machine learning technique, and the defect type analysis is performed on the appearance 3D data based on the automatic analysis technique, but the invention is not limited thereto.
In one embodiment, both the defect detection process and the defect type analysis may be performed based on machine learning techniques. Alternatively, the defect detection process may be performed based on an automatic analysis technique and the defect type analysis based on a machine learning technique.
Step S48: the control module 100 selects one of a plurality of marks according to the defect type (e.g., a stain defect), and controls the marking module 103 (together with the moving module 104) to apply the selected mark (e.g., the red arrow mark 221) on the object 22 at the 3D position. The marks respectively correspond to different defect types.
In one embodiment, the mark is a removable mark (e.g., a sticker, or a sprayed erasable pigment such as a water-based material).
In one embodiment, the mark is a non-removable mark (e.g., a mark burned into the object under test by laser, or a sprayed non-erasable pigment such as an oil-based material).
Step S49: the control module 100 determines whether the detection is completed (e.g., whether all the regions are detected or the detection path is completed).
If the control module 100 determines that the detection is complete, the detection and marking method is terminated. Otherwise, the control module 100 performs step S40 again to control the moving module 104 to move to the next coordinate to make the next area of the object 22 enter the shooting range of the 2D shooting module 101.
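Steps S40-S49 can be summarized as a single loop. The sketch below runs against a hypothetical system interface (`StubSystem` and every method name are assumptions for illustration, not the disclosed implementation):

```python
class StubSystem:
    """Minimal stand-in for the detection and marking system."""
    def __init__(self, images, data3d):
        self.images = images        # coord -> detection 2D image
        self.data3d = data3d        # coord -> appearance 3D data
        self.marks_applied = []
        self.coord = None
    def move_to(self, coord):       # S40: moving module follows the path
        self.coord = coord
    def capture_2d(self):           # S41: 2D shot of the current region
        return self.images[self.coord]
    def stop(self):                 # S44: halt before 3D shooting
        pass
    def capture_3d(self):           # S45: 3D shot of the flawed region
        return self.data3d[self.coord]
    def mark(self, mark, position): # S48: apply the selected mark
        self.marks_applied.append((mark, position))

def detect_and_mark(system, path, is_defect, classify, marks):
    for coord in path:
        system.move_to(coord)
        image = system.capture_2d()
        if not is_defect(image):           # S42/S43: real-time detection
            continue
        system.stop()
        data3d = system.capture_3d()
        position = data3d["position"]      # S46: measure the 3D position
        defect_type = classify(data3d)     # S47: defect type analysis
        system.mark(marks.get(defect_type, "default"), position)
    # S49: detection completes once the detection path is exhausted
```

Note that 3D shooting happens only inside the defect branch, after `stop()`, matching the embodiment's requirement that movement cease before 3D photographing.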
The invention can improve the detection speed and accuracy by using the machine learning technology and the automatic analysis technology, and can effectively determine the defect type by analyzing the appearance 3D data.
It should be noted that, although the control module 100 of the detection and marking system 10 performs processes (such as defect detection process, measuring 3D position, identifying the feature of the start area, calibrating the detection path or analyzing the defect type) in the foregoing embodiments, the invention is not limited thereto.
In one embodiment, the processing may be executed by the external computer device 11. Specifically, the inspection and marking system 10 may transmit the acquired 2D image and 3D data to the computer device 11 in real time or non-real time, and the computer device 11 may transmit the processing result (which may include control commands) back to the inspection and marking system 10 after the processing is completed, so that the inspection and marking system 10 may perform subsequent operations (e.g., capturing the next region of the object or marking defects).
The present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof, and it should be understood that various changes and modifications can be effected therein by one skilled in the art without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A method for detecting and marking defects in a detection and marking system, the detection and marking system comprising a 2D camera module, a 3D camera module and a marking module, the method comprising:
a) controlling the 2D shooting module to carry out 2D shooting on different areas of a detected object so as to obtain a detection 2D image of each area;
b) performing a defect detection process on each of the detected 2D images to determine whether any of the detected 2D images includes a defective image;
c) when any detected 2D image comprises the flaw image, controlling the 3D shooting module to carry out 3D shooting on the area corresponding to the detected 2D image so as to obtain appearance 3D data;
d) measuring a 3D position of the defect according to the appearance 3D data; and
e) controlling the marking module to mark the 3D position.
2. The method of claim 1, wherein the detecting and marking system further comprises a moving module; the step a) is to control the moving module to move along a detection path in a detection mode so as to enable the plurality of areas of the detected object to enter the shooting range of the 2D shooting module in sequence, and to carry out 2D shooting on the areas entering the shooting range.
3. The method according to claim 2, further comprising a step f) of controlling the moving module to move along an input path according to a path setting operation in a setting mode before the step a), and generating the detection path according to coordinates passed by the moving module during the moving.
4. The method according to claim 2, further comprising a step g) of calculating a simulation path according to 3D object data describing the shape of the object under test, and generating the detection path including a plurality of coordinates in order according to the simulation path; the step a) is to control the moving module to move among the plurality of coordinates in sequence.
5. The method of claim 2, further comprising the steps of, before the step a):
h1) obtaining the detection path and an initial region characteristic corresponding to the detection path;
h2) controlling the 2D photographing module or the 3D photographing module to photograph each region of the tested object;
h3) analyzing a region characteristic of each shot region, and setting the region as an initial region when the region characteristic of any one region conforms to the initial region characteristic; and
h4) correcting a plurality of coordinates of the detection path according to the coordinates of the moving module when the initial area is shot.
6. The method according to claim 1, wherein the step b) obtains a defect recognition model corresponding to the object, and performs the defect detection process on each of the detected 2D images based on the defect recognition model, wherein the defect recognition model is generated by performing a training process on a plurality of defect template images based on machine learning.
7. The method of claim 1, further comprising a step of i) analyzing the image of the defect according to the appearance 3D data and a plurality of defect recognition rules respectively corresponding to different defect types to determine a defect type of the defect; the step e) selects one of a plurality of marks according to the defect type, and marks the selected mark on the tested object at the 3D position, wherein the plurality of marks respectively correspond to different defect types.
8. The method as claimed in claim 1, wherein the step e) controls the marking module to mark a removable mark at the 3D position of the object to be tested.
9. The method according to claim 1, wherein the step e) controls the marking module to mark a non-removable mark at the 3D position of the object under test.
10. The method of claim 1, wherein the detecting and marking system further comprises a moving module for moving the 2D camera module and the 3D camera module simultaneously; the step a) is to control the moving module to move along a detection path in a detection mode and control the 2D shooting module to carry out 2D shooting on different areas of the detected object in the moving process; the step b) is to execute the defect detection processing on the shot detection 2D image in real time; the step c) is that when the shot detected 2D image comprises the flaw image, the moving module is immediately controlled to stop moving, and the 3D shooting module is controlled to carry out 3D shooting on the current area; the method for detecting and marking defects further comprises a step j) of repeatedly executing the steps a) to e) until the detection is completed.
CN201811282210.9A 2018-10-29 2018-10-29 Method for detecting and marking defects Pending CN111103306A (en)


Publications (1)

Publication Number Publication Date
CN111103306A true CN111103306A (en) 2020-05-05


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113721485A (en) * 2020-05-26 2021-11-30 由田新技股份有限公司 Automatic pressing device
CN112326669A (en) * 2020-10-28 2021-02-05 哈尔滨工程大学 Coating defect detection and marking system and method
CN115266733A (en) * 2022-08-03 2022-11-01 沛县万豪纺织科技有限公司 Fabric defect detecting and marking equipment
CN115266733B (en) * 2022-08-03 2024-01-23 沛县万豪纺织科技有限公司 Embryo cloth flaw detection marking equipment

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104634787A (en) * 2015-02-13 2015-05-20 东华大学 Automatic detection device and method for paint spraying flaws on outer surface of automobile body
CN106767443A (en) * 2016-11-22 2017-05-31 中北大学 A kind of new automatic secondary element image detector and measuring method
CN106872487A (en) * 2017-04-21 2017-06-20 佛山市南海区广工大数控装备协同创新研究院 The surface flaw detecting method and device of a kind of view-based access control model
CN107657603A (en) * 2017-08-21 2018-02-02 北京精密机电控制设备研究所 A kind of industrial appearance detecting method based on intelligent vision
US20180105393A1 (en) * 2016-10-19 2018-04-19 Otis Elevator Company Automatic marking system
CN107944422A (en) * 2017-12-08 2018-04-20 业成科技(成都)有限公司 Three-dimensional image pickup device, three-dimensional camera shooting method and face identification method
US20180211373A1 (en) * 2017-01-20 2018-07-26 Aquifi, Inc. Systems and methods for defect detection
CN108496124A (en) * 2015-11-09 2018-09-04 艾天诚工程技术系统股份有限公司 The automatic detection and robot assisted processing of surface defect




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20200505)